Hebrew_Nemo: a state-of-the-art Hebrew large language model
Posted by Sicarius_The_First@reddit | LocalLLaMA | 5 comments
Hebrew_Nemo is a state-of-the-art (SOTA) large language model optimized for Hebrew understanding and generation. Built on the Mistral Nemo architecture, it represents a significant advance in Hebrew NLP, combining Mistral Nemo's robust multilingual foundation with extensive Hebrew-specific fine-tuning.
As part of my efforts to democratize AI, Hebrew_Nemo is released under the permissive Apache 2.0 license. Despite being less than half the size, the model is competitive with Gemma3-27B, one of the world's leading open-source models for multilingual capability. This result highlights Hebrew_Nemo's efficiency and effectiveness, making SOTA Hebrew capability accessible to consumers as well as corporations.
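A minimal sketch of running it from Python with Hugging Face transformers (the repo id below is a placeholder; use the actual id and prompt format from the model card):

```python
# Minimal transformers sketch. The repo id is a placeholder, not confirmed by the post;
# it also assumes the tokenizer ships a chat template, as Mistral Nemo Instruct does.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SicariusSicariiStuff/Hebrew_Nemo"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # a 12B in bf16 needs roughly 24 GB; quantize for smaller GPUs
    device_map="auto",
)

# Hebrew prompt: "Tell me about the history of Jerusalem."
messages = [{"role": "user", "content": "ספר לי על ההיסטוריה של ירושלים."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```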
Get the model here:
Pojiku@reddit
Nice to see people posting finetunes again!
BananaPeaches3@reddit
Reminds me of this hallucination:
Sicarius_The_First@reddit (OP)
Yeah well, hallucinations happen for sure.
It's a 12B after all.
Anyway, the goal is to have something that's better than Gemma-3-27B while being easily runnable, and with a better license.
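Rough math on why the 12B is much easier to run (assuming ~4.5 bits/weight for a Q4_K_M-style quant; real GGUF files add overhead for scales and metadata, so treat these as ballpark numbers):

```python
# Back-of-the-envelope weight-memory estimate: params * bits-per-weight / 8.
# Parameter counts and bits/weight are approximations, not measured file sizes.
def approx_gb(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * bits_per_weight / 8

for name, params in [("Hebrew_Nemo (12B)", 12.2), ("Gemma-3-27B", 27.0)]:
    q4 = approx_gb(params, 4.5)   # Q4_K_M-style quant
    bf16 = approx_gb(params, 16)  # unquantized bf16
    print(f"{name}: ~{q4:.1f} GB quantized, ~{bf16:.1f} GB in bf16")
```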
Sicarius_The_First@reddit (OP)
I've submitted it for eval; I'll update once I have results and share how it compares to Gemma-3-27B.
In my internal tests it surpassed Gemma, but I'll be glad to share the independent benchmarks once the results arrive!
Sicarius_The_First@reddit (OP)
GGUFs are being uploaded now
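Once they're up, a minimal way to run a quant locally with llama-cpp-python (the filename below is a placeholder; point it at whichever quant you download):

```python
# Minimal local-inference sketch using llama-cpp-python.
# The GGUF path is a placeholder, not an actual filename from the upload.
from llama_cpp import Llama

llm = Llama(
    model_path="Hebrew_Nemo-Q4_K_M.gguf",  # placeholder path
    n_ctx=8192,        # context window; raise it if you have memory to spare
    n_gpu_layers=-1,   # offload all layers to GPU; set to 0 for CPU-only
)

# Hebrew prompt: "What is the capital of Israel?"
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "מה בירת ישראל?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```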