Benchmarking small models at 4-bit quants on Apple Silicon with mlx-lm
Posted by ironwroth@reddit | LocalLLaMA | 16 comments
I ran a bunch of small models at 4-bit quants through a few benchmarks locally on my MacBook using `mlx_lm.evaluate`. Figured I'd share in case anyone else finds it interesting or helpful!
System Info: Apple M4 Pro, 48 GB RAM, 20-core GPU, 14-core CPU
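In case anyone wants to reproduce something similar, here's a minimal sketch of running one of these 4-bit community quants through mlx-lm's Python API. The model repo below is just an example; `mlx_lm.evaluate` drives the same stack (with lm-evaluation-harness-style task names, as far as I know), so check `mlx_lm.evaluate --help` for the exact flags.

```python
# Minimal sketch: load a 4-bit MLX quant and run it locally.
# The model repo is illustrative -- any mlx-community 4-bit quant
# follows the same pattern.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen3-4B-Instruct-2507-4bit")

# Chat models expect their chat template applied to the prompt.
messages = [{"role": "user", "content": "Name three uses of a 4-bit quantized model."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

text = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(text)
```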
Feztopia@reddit
30B isn't small. 3B active might make it fast, but it's not small. Nice to see the comparison, though. I'm still on an 8B Llama model on my phone; I hope to get something faster and still better in the future.
CarpenterHopeful2898@reddit
What's the use case for running an LLM on a phone?
Feztopia@reddit
What is the use case of having an offline intelligence with knowledge of the whole Internet in your pocket, like straight out of a sci-fi movie?
ironwroth@reddit (OP)
Yeah, I meant to split the two larger ones into a separate table for comparison.
Feztopia@reddit
Yeah, also I'd missed the release of the 7B MoE Granite, so now I know about that thanks to your post.
SnooMarzipans2470@reddit
This is great. I wish someone would do this on a normal 16 GB RAM machine with no GPU, with under-1B models and larger quantized models that can run on CPU, even if it was only 1 TPS.
irodov4030@reddit
https://www.reddit.com/r/LocalLLaMA/comments/1lmfiu9/i_tested_10_llms_locally_on_my_macbook_air_m1_8gb/
SnooMarzipans2470@reddit
man, you are a life saver
irodov4030@reddit
😅👍🏼
Lesser_Gatz@reddit
Very stupid question: what exactly is benchmarking a model? Is it a series of yes/no questions that gauge knowledge/accuracy? Is it benchmarking the speed of a response? I'm getting into self-hosting LLMs, but I don't know what makes one genuinely better than the rest.
ironwroth@reddit (OP)
It depends on the benchmark. Most of the benchmarks I ran are multiple-choice questions over a variety of domains like science, law, math, and history. IFEval is an instruction-following benchmark where the questions are like: "Write a joke about morphology that's professional and includes the word 'cat' at least once, and the word 'knock' at least twice. Wrap your whole response with double quotation marks."
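For the multiple-choice part, the usual trick (the lm-evaluation-harness style of scoring, not necessarily exactly what `mlx_lm.evaluate` does internally) is to compute the log-likelihood the model assigns to each answer option and pick the highest. A simplified sketch using mlx-lm's Python API; the model repo is illustrative, and real harnesses are more careful about tokenization at the question/option boundary:

```python
# Sketch of multiple-choice scoring: score each answer option by the
# log-likelihood the model assigns to it, then pick the highest.
import mlx.core as mx
from mlx_lm import load

model, tokenizer = load("mlx-community/Qwen3-4B-Instruct-2507-4bit")  # illustrative

question = "Q: What is the boiling point of water at sea level?\nA:"
options = [" 100 degrees Celsius", " 0 degrees Celsius", " 212 degrees Celsius"]

def option_logprob(question: str, option: str) -> float:
    """Sum of log-probs the model assigns to the option's tokens."""
    prompt_ids = tokenizer.encode(question)
    full_ids = tokenizer.encode(question + option)
    logits = model(mx.array(full_ids)[None])  # shape: (1, seq_len, vocab)
    logprobs = logits - mx.logsumexp(logits, axis=-1, keepdims=True)
    # The token at position i is predicted by the logits at position i - 1.
    return sum(
        logprobs[0, i - 1, full_ids[i]].item()
        for i in range(len(prompt_ids), len(full_ids))
    )

scores = {opt: option_logprob(question, opt) for opt in options}
print(max(scores, key=scores.get))  # the model's pick
```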
I also included the speed benchmarks, but those are just how fast it processes a prompt (typically very slow on Mac) and how fast it generates a response.
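If it helps to see where those two numbers come from, here's a rough sketch. With `verbose=True`, mlx-lm prints prompt-processing (prefill) speed and generation speed separately; the exact output format may vary by version, and the model repo is again illustrative.

```python
# Sketch: measuring the two speeds. verbose=True prints something like
# "Prompt: ... tokens-per-sec" (prefill) and "Generation: ... tokens-per-sec".
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen3-4B-Instruct-2507-4bit")

# A long prompt exercises prompt processing; generation speed is then
# measured on the 128 new tokens produced afterwards.
long_prompt = "Summarize the following notes:\n" + ("lorem ipsum dolor " * 400)

generate(model, tokenizer, prompt=long_prompt, max_tokens=128, verbose=True)
```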
Lesser_Gatz@reddit
Thanks for your response, I really appreciate it!
lemon07r@reddit
Nice, would you add Apriel Thinker 1.5 15B as well? I'm surprised it's flying so under the radar here.
irodov4030@reddit
https://www.reddit.com/r/LocalLLaMA/comments/1lmfiu9/i_tested_10_llms_locally_on_my_macbook_air_m1_8gb/
ontorealist@reddit
Thanks for sharing. Interesting that speed seems to be LFM2 8B’s only marginal advantage over Granite 4 Tiny.
I'd hoped one of them would be a small MoE that outperforms Qwen3 4B 2507 on 12-16 GB Apple Silicon.
jarec707@reddit
Granite really is doing well.