Turns out Gemma 4 had MTP (multi token prediction) all along
Posted by Electrical-Monitor27@reddit | LocalLLaMA | View on Reddit | 39 comments
Hey everyone. While trying to use Gemma 4 through the LiteRT API in my Android app, I noticed it was throwing errors on my Google Pixel 9 test device about the "mtp weights being an incompatible tensor shape". I did some digging and found that there are additional MTP (multi token prediction) heads inside the LiteRT files, intended for speculative decoding and much faster outputs.
Well, it turns out I got confirmation today from a Google employee that Gemma 4 DOES INDEED have MTP, but it was "removed on purpose" for "ensuring compatibility and broad usability".
Honestly, it would've been great if they had released the full model instead, especially since we already missed out on the Gemma 124B model that was accidentally leaked in Jeff Dean's tweet. Much faster Gemma 4 generation would've been great to have, ideally on the already-fast MoE. Maybe someone can reverse-engineer and extract the tensors and the math from the compute graph in LiteRT?
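For anyone wondering what those extra heads would actually buy us: here's a toy sketch of draft-and-verify speculative decoding with MTP heads. Everything below is hypothetical stand-in code (the "models" are hash functions), not the actual LiteRT graph — it just shows why accepted drafts mean fewer full forward passes.

```python
import random

random.seed(0)

def base_model(ctx):
    """Stand-in for the full model: returns the 'true' next token."""
    return (sum(ctx) * 31 + 7) % 1000

def mtp_heads(ctx, k=2):
    """Stand-in for the MTP heads: cheaply drafts k future tokens.
    Here they agree with the base model ~80% of the time."""
    drafts, c = [], list(ctx)
    for _ in range(k):
        t = base_model(c)
        if random.random() > 0.8:          # simulate a wrong draft
            t = (t + 1) % 1000
        drafts.append(t)
        c.append(t)
    return drafts

def speculative_step(ctx):
    """Draft k tokens, verify against the base model, keep the longest
    agreeing prefix plus the base model's own next token."""
    accepted, c = [], list(ctx)
    for d in mtp_heads(ctx):
        if base_model(c) == d:             # verification (batched in practice)
            accepted.append(d)
            c.append(d)
        else:
            break
    accepted.append(base_model(c))         # one token always comes for free
    return accepted

ctx, produced, steps = [1, 2, 3], 0, 0
while produced < 100:
    out = speculative_step(ctx)
    ctx += out
    produced += len(out)
    steps += 1
print(f"{produced} tokens in {steps} verification steps")
```

With a decent acceptance rate you emit 100+ tokens in far fewer base-model passes, which is where the speedup comes from.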
Here's a link to the conversation:
LagOps91@reddit
so they don't want to give us anything that would compete with their closed weights apis. is this supposed to be a surprise? and in terms of MTP... llama.cpp still doesn't have anything, right?
MerePotato@reddit
I'd argue Gemma 4 already matches the performance of the budget API options
EffectiveCeilingFan@reddit
Yeah still no MTP in llama.cpp last time I checked. Qwen3.5 has MTP as well, really hoping to see support one day.
dampflokfreund@reddit
It's a shame, but in the end these are people working for free, so you can't blame them. I'd like it if Alibaba and Google stepped in to integrate MTP support into llama.cpp.
EffectiveCeilingFan@reddit
Eh. I’d rather they put that money into more QA before release. The Qwen team has a fraction of the budget of the Gemma team, so I give them much more leeway, but Qwen3.5 was also pretty borked for the first week.
david_0_0@reddit
Multi-token prediction saves inference time significantly
PreciselyWrong@reddit
"all along"
Bro it was released a few days ago
IShitMyselfNow@reddit
I mean, they couldn't even get it fully working without this in time for release; I don't think it's some big conspiracy.
Would certainly be nice to have, but don't forget how many OSS projects they ended up implementing the support in. Adding this as well would have been a ton more work.
EffectiveCeilingFan@reddit
I’d normally agree, but they’re specifically choosing to make it impossible for the community to support MTP on its own. All the information required to one day support MTP on anything other than LiteRT has been stripped. It’s profoundly anti-community. They’re just trying to push people towards the Google LiteRT-LM orchestration framework running on the Google LiteRT runtime.
poco-863@reddit
If LiteRT is oss I fail to see how this is anti-community
EffectiveCeilingFan@reddit
The LiteRT release is quantized, so you wouldn’t be getting the MTP weights in a format suitable for re-use. Even after extracting them, you’d need to manually reconstruct the safetensors file, inserting the MTP weights. No easy task. Whereas they could have just… released the safetensors with the required weights in full precision. If they’re worried about it causing bugs, they could just release a “non-HF” set of safetensors that include the MTP weights for basically 0 extra work. AllenAI has some non-HF-compatible safetensors releases, for example. And, I mean, Gemma 4 was completely borked on almost all the frameworks they listed with “day 1 support” anyway, so the idea of trying to avoid bugs just doesn’t hold up.
hackerllama@reddit
RemindMe! 30 days
RemindMeBot@reddit
I will be messaging you in 1 month on 2026-05-07 12:21:49 UTC to remind you of this link
a_beautiful_rhind@reddit
MTP has never sped anything up for single-user inference. All implementations have been slower.
Vicar_of_Wibbly@reddit
This is simply wrong. Completely untrue. Qwen3.5 397B A17B supports MTP and I use it every day, often for single-user inference. It most assuredly does speed up inference, with high acceptance ratios for 2-token drafts.
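Quick back-of-envelope, assuming (simplistically) independent per-token acceptance with probability p: a 2-token draft emits on average 1 + p + p² tokens per full forward pass, since the second draft is only reached if the first was accepted and the verification pass always yields one token itself.

```python
# Expected tokens per verification pass for a 2-token draft:
#   1 (guaranteed) + p (first draft) + p^2 (second draft, needs the first).
for p in (0.6, 0.8, 0.9):
    tokens_per_pass = 1 + p + p * p
    print(f"p={p:.1f}: {tokens_per_pass:.2f} tokens per full forward pass")
```

At p = 0.8 that's ~2.4 tokens per pass, which lines up with the real-world speedups people report.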
ortegaalfredo@reddit
It works pretty well on Qwen3.5-27B, because it's the only one dense enough and slow enough to actually get faster with MTP. And it gets quite a bit faster.
Beginning-Window-115@reddit
not true
a_beautiful_rhind@reddit
they are like the only one then or doing parallel requests.
Ok-Ad-8976@reddit
I experimented with MTP a little bit. It helped for Qwen3.5-27B when running with tensor parallel = 2, but that obviously required two GPUs.
It did not help at all for MoE models; it basically didn't work. I don't think that architecture really supports it.
alex20_202020@reddit
Why? I usually run on CPU because my GPU is old, and I don't think I need two GPUs for that. A GPU already adds more parallelism than a multi-threaded CPU, so why do you need two of them?
Soft_Match5737@reddit
MTP on a MoE model is a weird combination because you're predicting multiple future tokens but each token might route through completely different experts. That means the MTP heads have to implicitly learn which expert combinations are likely to co-fire in sequence — basically encoding routing patterns as a side effect of the training objective. Whether llama.cpp can actually exploit this for speculative decoding depends on whether the MTP head predictions stay accurate when you're running quantized experts, since quantization errors compound differently across expert boundaries than in dense models.
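You can see the batch-1 problem with a toy simulation: verifying several drafted tokens in one pass touches the union of their routed experts, so the weight traffic grows almost linearly at small token counts instead of being amortized. (Hypothetical 64-expert, top-4 MoE with uniform independent routing — real routers are far from uniform, but the shape of the curve is the point.)

```python
import random

random.seed(0)
E, TOPK = 64, 4          # hypothetical MoE: 64 experts, top-4 routing

def unique_experts(n_tokens, trials=2000):
    """Average number of distinct experts touched when verifying
    n_tokens in one pass, under uniform independent routing."""
    total = 0
    for _ in range(trials):
        used = set()
        for _ in range(n_tokens):
            used.update(random.sample(range(E), TOPK))
        total += len(used)
    return total / trials

for n in (1, 3, 8):
    print(f"{n} tokens -> ~{unique_experts(n):.1f} experts loaded")
```

One token loads 4 experts; three tokens load ~11, nearly 3x the weight traffic for the verification pass, which is why the draft-then-verify win shrinks on MoE at batch size 1.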
EffectiveCeilingFan@reddit
Yeah, that “explanation” of theirs is horseshit. Qwen3.5 HF safetensors have MTP and that has not caused any problems at all as far as I’m aware, even though llama.cpp has no MTP support. They’re clearly terrified of how good local AI models are getting, so now they’re trying to lock people into their LiteRT garden.
GasolinePizza@reddit
Yes this is so much more likely than just you misunderstanding technical details or not being aware of some implementation/technical nuance!
FullOf_Bad_Ideas@reddit
MTP is usually used as a secondary training objective since it helps with reducing loss - it makes the model better, even if MTP is removed later.
MTP on MoE with batch size 1 is very unlikely to speed up inference, it works only on higher batch sizes where almost all experts are activated anyway.
That said, they probably could have kept it, but there's a chance it was intended purely as a training-time optimization, or they wanted to make sure that Gemma hosted on cloud APIs won't be too competitive with Gemini on speed.
Porespellar@reddit
^ This guy MTPs.
FullOf_Bad_Ideas@reddit
I actually never used MTP locally. I read a lot of papers about LLM pre-training.
stoppableDissolution@reddit
It would significantly speed up the dense one tho
FullOf_Bad_Ideas@reddit
Yes. It would help out dense models. MoE + MTP comment was a response to OP who said:
Fresh_Month_2594@reddit
I'm not sure I understand MTP not being supported on Hugging Face? I get that the existing Hugging Face Transformers inference API may not support MTP, but having it there shouldn't break anything. Qwen3.5 27B has MTP out of the box and it greatly speeds up inference on an RTX PRO 6000 (almost 2x inference throughput with MTP enabled on vLLM).
PortiaLynnTurlet@reddit
Honestly this reads to me more as putting less effort into the transformers-compatible release than anything malicious. Someone will convert the LiteRT weights soon if it hasn't happened already.
protestor@reddit
Deepseek also has MTP, is this also stripped for huggingface?
https://docs.vllm.ai/projects/ascend/en/main/user_guide/feature_guide/Multi_Token_Prediction.html
Fade78@reddit
I'm not familiar with this. Is that a bad thing?
abnormal_human@reddit
Google cut a feature because they didn't have a way to make it stable and supportable, like 95% of us do in our jobs every week, but they're the villains because this is r/LocalLLaMA and holding anything back is a betrayal.
cpldcpu@reddit
interesting typo there.
Cultural_Meeting_240@reddit
so they shipped MTP weights but forgot to tell anyone. classic google move.
oxygen_addiction@reddit
No. They stripped them from the release.
david_0_0@reddit
open source models pushing innovation forward. multitoken prediction is a game changer for inference speed
Maleficent-Low-7485@reddit
hidden speculative decoding in a supposedly open model. the irony writes itself.