Training LFM-2.5-350M on Reddit post summarization with GRPO on my 3x Mac Minis — final evals and t-test results are here

Posted by East-Muffin-6472@reddit | LocalLLaMA

With this project I wanted to see whether tiny LLMs can produce quality length-constrained summaries (64 tokens max) when trained with GRPO!
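For context, GRPO scores a group of sampled rollouts per prompt and normalizes each reward against its own group, so no critic network is needed. A minimal sketch of that group-relative advantage step (the reward numbers are made up for illustration):

```python
import numpy as np

def group_relative_advantages(rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """GRPO's core step: normalize each rollout's reward against its own
    group, so no separate value network (critic) is needed."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# e.g. 8 candidate summaries sampled for one Reddit post, each scored by
# the reward function; positive advantage -> reinforce, negative -> suppress
rewards = np.array([0.9, 0.4, 0.7, 0.2, 0.8, 0.5, 0.6, 0.3])
print(group_relative_advantages(rewards))
```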

I trained two variants of this task; the reward configurations evaluated are in the summary table below.

I ran an LLM-as-a-Judge eval with DeepEval to check summarization quality; the judged metrics are listed under Eval below.

The results are as attached, and the final ones follow:

Ranking of t-tests for the other rewards:

Summary Table

| Reward Configuration | Composite | Faithfulness | Coverage | Conciseness | Clarity | Pass Rate |
|---|---|---|---|---|---|---|
| length-quality-meteor-rouge | 2.769 | 0.832 | 0.511 | 0.659 | 0.767 | 44.3% |
| length-quality-bleu-rouge | 2.732 | 0.810 | 0.502 | 0.650 | 0.770 | 39.1% |
| length-quality-meteor-bleu | 2.664 | 0.792 | 0.468 | 0.648 | 0.756 | 38.3% |
| length-quality-rouge-l | 2.555 | 0.725 | 0.415 | 0.637 | 0.778 | 32.4% |
| length-quality-meteor | 2.484 | 0.721 | 0.427 | 0.625 | 0.711 | |
| length-quality-bleu | 2.400 | 0.680 | 0.399 | 0.577 | 0.744 | 26.9% |
| length-only (baseline) | 2.416 | 0.678 | 0.407 | 0.592 | 0.739 | 30.7% |

Evaluated on a 200-example test sample of the smoltldr dataset. Baseline: length penalty only.

All the code and wandb charts are in the comments!

Setup: 3x Mac Minis in a cluster running MLX.

One node drives training with GRPO; the other two generate rollouts via the vLLM-metal framework. All of the work is done using smolcluster.com.

Used a SyncPS arch (synchronous parameter server): training happens on the master node, and vLLM runs on the worker nodes, as in the sketch below.
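Roughly, one synchronous step looks like this. The Worker class and every method name here are stand-ins for whatever smolcluster/vLLM-metal actually expose, not their real APIs:

```python
# Self-contained sketch of one SyncPS (synchronous parameter server) step.
# Worker stands in for a vLLM-metal node; all names are illustrative
# assumptions, not the real smolcluster API.

class Worker:
    def __init__(self):
        self.weights = None

    def broadcast_weights(self, weights):
        # Real version: network transfer of the latest policy to the node.
        self.weights = weights

    def collect_rollouts(self, prompts, n):
        # Real version: vLLM generates n candidate summaries per prompt.
        return [(p, f"rollout-{i}") for p in prompts for i in range(n)]

def sync_ps_step(master_weights, workers, prompts, group_size=8):
    # 1. Master pushes the current policy weights to every worker.
    for w in workers:
        w.broadcast_weights(master_weights)
    # 2. Each worker generates rollouts from its shard of the prompts.
    shard = max(1, len(prompts) // len(workers))
    rollouts = []
    for i, w in enumerate(workers):
        rollouts += w.collect_rollouts(prompts[i * shard:(i + 1) * shard], group_size)
    # 3. Master scores the rollouts and takes one GRPO update (omitted here).
    return rollouts

rollouts = sync_ps_step({"policy": "weights"}, [Worker(), Worker()], ["post A", "post B"])
```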

Eval:

LLM-as-a-Judge (gpt-5)

- Faithfulness — no hallucinations vs. source
- Coverage — key points captured
- Conciseness — shorter, no redundancy
- Clarity — readable on its own

The composite score is the sum of the four scores above (e.g., top row: 0.832 + 0.511 + 0.659 + 0.767 = 2.769).
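For reference, a minimal sketch of wiring one such judged metric up with DeepEval's GEval (the criteria wording, placeholder texts, and judge model string are my assumptions, not the exact config from these runs):

```python
from deepeval.test_case import LLMTestCase, LLMTestCaseParams
from deepeval.metrics import GEval

# Hypothetical criteria wording and placeholder inputs -- actual prompts differ.
faithfulness = GEval(
    name="Faithfulness",
    criteria="Does the summary avoid stating anything not supported by the source post?",
    evaluation_params=[LLMTestCaseParams.INPUT, LLMTestCaseParams.ACTUAL_OUTPUT],
    model="gpt-5",  # judge model string; assumption based on the post
)

case = LLMTestCase(
    input="(full Reddit post body)",
    actual_output="(the model's 64-token summary)",
)
faithfulness.measure(case)
print(faithfulness.score)  # 0..1; table values are averages over 200 test posts
```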

length_penalty: basically `-abs(response_length - MAX_LENGTH)`
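As a runnable sketch of that formula (treating the response as an already-tokenized list; exact scaling is whatever the config uses):

```python
MAX_LENGTH = 64  # target summary length in tokens

def length_penalty(response_tokens: list, max_length: int = MAX_LENGTH) -> float:
    """0 at exactly max_length tokens, linearly more negative the further
    the summary drifts from the target in either direction."""
    return -abs(len(response_tokens) - max_length)

print(length_penalty(["a"] * 64))  # 0: exactly on target
print(length_penalty(["a"] * 80))  # -16: 16 tokens too long
```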

ROUGE-L only cares about the longest common subsequence — it misses synonyms and paraphrases entirely.

METEOR handles both: it aligns tokens with synonym matching via WordNet and balances precision + recall with a chunk-order penalty.

BLEU, on the other hand, focuses on n-gram precision plus a brevity penalty.
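For concreteness, here is a sketch of computing these three metrics per rollout with nltk and rouge-score, plus a placeholder blend with the length penalty sketched earlier (the 1.0 weights are assumptions, not the trained configs):

```python
import nltk
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from nltk.translate.meteor_score import meteor_score
from rouge_score import rouge_scorer

nltk.download("wordnet", quiet=True)  # METEOR's synonym matching uses WordNet

_rouge = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
_smooth = SmoothingFunction().method1  # keeps BLEU nonzero on short summaries

def quality_scores(reference: str, summary: str) -> dict:
    ref_tok, sum_tok = reference.split(), summary.split()
    return {
        "rougeL": _rouge.score(reference, summary)["rougeL"].fmeasure,
        "meteor": meteor_score([ref_tok], sum_tok),
        "bleu": sentence_bleu([ref_tok], sum_tok, smoothing_function=_smooth),
    }

# Placeholder blend for something like length-quality-meteor-rouge; reuses
# length_penalty from the sketch above, and the weights are assumptions.
def reward(reference: str, summary: str) -> float:
    s = quality_scores(reference, summary)
    return length_penalty(summary.split()) + 1.0 * s["meteor"] + 1.0 * s["rougeL"]
```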