Why is no open weight model inference provider hosting Mimo-v2.5 or Mimo-v2.5-pro?

Posted by True_Requirement_891@reddit | LocalLLaMA | View on Reddit | 30 comments

Literally no API inference provider is hosting the Mimo-2.5 series models from Xiaomi. They seem to be really good.

High token efficiency and a very low hallucination rate compared to Kimi-k2.6, Deepseek-V4, or GLM-5.1, and yet no provider, not even Chutes, is hosting it other than Xiaomi themselves.

I find it very strange.