Kwai-Klear/Klear-46B-A2.5B-Instruct: Sparse-MoE LLM (46B total / only 2.5B active)
Posted by paf1138@reddit | LocalLLaMA | View on Reddit | 4 comments
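For context on the title's "46B total / only 2.5B active": in a sparse mixture-of-experts model, a router picks a small top-k subset of expert feed-forward blocks per token, so the parameters actually executed are a small fraction of the total. Below is a minimal sketch of that mechanism, assuming a standard Mixtral/DeepSeek-style top-k MoE layer; the expert count, top-k, and hidden sizes are illustrative placeholders, not Klear's actual configuration (which isn't stated in this thread).

```python
# Minimal sketch of sparse-MoE top-k routing (illustrative sizes, not Klear's config).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, d_model=1024, d_ff=2048, n_experts=64, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Router scores every expert for every token.
        self.router = nn.Linear(d_model, n_experts, bias=False)
        # Each expert is a small feed-forward block; total parameter count
        # scales with n_experts, but only top_k experts run per token.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (n_tokens, d_model)
        weights, idx = torch.topk(self.router(x), self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)  # renormalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in idx[:, slot].unique():
                mask = idx[:, slot] == e  # tokens routed to expert e in this slot
                out[mask] += weights[mask, slot, None] * self.experts[int(e)](x[mask])
        return out

# Each of these 8 tokens activates only 2 of the 64 experts.
print(SparseMoE()(torch.randn(8, 1024)).shape)  # torch.Size([8, 1024])
```

With this design, "active parameters" means roughly the shared layers plus top_k experts' worth of weights per token, which is how a 46B-parameter model can run with only ~2.5B parameters per forward pass.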
jacek2023@reddit
interesting size, any info about arch?
Herr_Drosselmeyer@reddit
Mmh, benchmarks don't tell the whole story, but it seems to lose to Qwen3-30B-A3B-2507 on most of them while being larger. So unless it's somehow less "censored", I don't see it doing much.
ilintar@reddit
Yeah, seems more like an internal "proof-of-concept" than a real model for people to use.
Different_Fix_2217@reddit
>quality filters
Just stop it already. This is why these models are great at benchmarks but terrible at real-world use: a model loses its ability to generalize when you only train it on "high quality" samples.