Ternary Bonsai: Top intelligence at 1.58 bits

Posted by pmttyji@reddit | LocalLLaMA | 76 comments

Today, we’re announcing Ternary Bonsai, a new family of 1.58-bit language models designed to balance strict memory constraints with high accuracy requirements.

This release builds on the efficiency frontier we began exploring with the recently released 1-bit Bonsai models. The 1-bit family showed that extreme compression could still produce commercially useful language models. Ternary Bonsai targets a different point on that curve: a modest increase in size for a meaningful gain in performance.

The models are available in three sizes: 8B, 4B, and 1.7B parameters. By using ternary weights {-1, 0, +1}, these models achieve a memory footprint approximately 9x smaller than standard 16-bit models while outperforming most peers in their respective parameter classes on standard benchmarks.
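To see where the "1.58-bit" figure comes from: a ternary weight carries log2(3) ≈ 1.585 bits of information, and packing five trits into one byte (3^5 = 243 ≤ 256) gets you to 1.6 bits per weight in practice. The sketch below is purely illustrative, not the format these repos actually ship:

```python
import math

# Information content of one ternary weight {-1, 0, +1}: log2(3) ~ 1.585 bits.
BITS_PER_TRIT = math.log2(3)

def pack_trits(weights):
    """Pack ternary weights into bytes, 5 trits per byte via base-3 encoding.
    3**5 = 243 <= 256, so five trits fit in one byte -> 1.6 bits/weight."""
    assert all(w in (-1, 0, 1) for w in weights)
    packed = bytearray()
    for i in range(0, len(weights), 5):
        value = 0
        for w in reversed(weights[i:i + 5]):
            value = value * 3 + (w + 1)  # map {-1, 0, +1} -> {0, 1, 2}
        packed.append(value)
    return bytes(packed)

def unpack_trits(packed, n):
    """Inverse of pack_trits: recover the first n ternary weights."""
    weights = []
    for byte in packed:
        for _ in range(5):
            weights.append(byte % 3 - 1)  # map {0, 1, 2} -> {-1, 0, +1}
            byte //= 3
    return weights[:n]

w = [-1, 0, 1, 1, 0, -1, -1, 1]
assert unpack_trits(pack_trits(w), len(w)) == w
```

At 1.6 bits/weight versus 16 bits/weight you'd expect a ~10x reduction for the weights alone; the ~9x figure quoted above is plausible once embeddings, norms, and other tensors kept at higher precision are counted in.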

Blog post : https://prismml.com/news/ternary-bonsai

Models : https://huggingface.co/collections/prism-ml/ternary-bonsai

An FP16 safetensors export (HuggingFace format) of the Ternary Bonsai-8B model is also available. That repo exists for users who want to run Ternary Bonsai with stock HuggingFace tooling, or with frameworks that don't yet support any of the packed ternary formats. The MLX 2-bit format is currently the only packed format available; more formats for other backends are coming soon.
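A 2-bit container is the simplest way to store ternary weights on hardware that works in whole bits: each weight takes 2 bits (one of the four codes goes unused), so four weights fit in a byte for an 8x saving over FP16. The layout below is a guess for illustration; the actual MLX 2-bit format may pack and scale weights differently:

```python
def pack_2bit(weights):
    """Pack ternary weights {-1, 0, +1} at 2 bits each, 4 per byte.
    Illustrative layout only (little-endian within the byte)."""
    packed = bytearray()
    for i in range(0, len(weights), 4):
        byte = 0
        for j, w in enumerate(weights[i:i + 4]):
            byte |= (w + 1) << (2 * j)  # map {-1, 0, +1} -> {0, 1, 2}; code 3 unused
        packed.append(byte)
    return bytes(packed)

def unpack_2bit(packed, n):
    """Inverse of pack_2bit: recover the first n ternary weights."""
    weights = []
    for byte in packed:
        for j in range(4):
            weights.append(((byte >> (2 * j)) & 0b11) - 1)
    return weights[:n]

w = [1, -1, 0, 0, 1]
assert unpack_2bit(pack_2bit(w), len(w)) == w
```

This wastes 0.415 bits per weight relative to the information-theoretic 1.585 bits, but the fixed 2-bit stride makes unpacking a cheap shift-and-mask, which is usually the right trade for inference kernels.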

Hoping these ternary Bonsai models hallucinate less (or not at all).

Waiting for 20-40B models (like Qwen3.5-27B, Qwen3.5-35B-A3B, Gemma-4-31B, Gemma-4-26B-A4B, etc.) from them soon! That would be the start of a game change for big/large models.