Sparse Adaptive Attention “MoE”, a potential performance breakthrough for LLMs?
Posted by kaggleqrdl@reddit | LocalLLaMA | 23 comments
Recently a post was made on this topic. https://medium.com/@hyborian_/sparse-adaptive-attention-moe-how-i-solved-openais-650b-problem-with-a-700-gpu-343f47b2d6c1
The idea is to use MoE at the attention layer to reduce compute usage for low signal tokens. Imho, this is probably the closest: https://arxiv.org/abs/2409.06669
The post was a weird combination of technical insight and strange AI-generated bravado.
If I were going to leak IP, this is pretty much how I would do it. Use gen AI to obfuscate the source.
There has been a lot of research in this area as noted in the comments (finding these required some effort):
https://arxiv.org/abs/2312.07987
https://arxiv.org/abs/2210.05144
https://arxiv.org/abs/2410.11842
https://openreview.net/forum?id=NaAgodxpxo
https://arxiv.org/html/2505.07260v1
https://arxiv.org/abs/2410.10456
https://arxiv.org/abs/2406.13233
https://arxiv.org/abs/2409.06669
Kimi especially has attempted this: https://arxiv.org/abs/2502.13189
It's very challenging for us, as local LLM folks, to say whether this is a breakthrough. While it appears promising, without massive GPU resources we can't say for certain whether it will scale properly.
Still, I think it's worth preserving, as some effort was made in the comments to analyze the relevance of the concept. And the core idea - spending compute only on the tokens that matter - is promising.
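To make the idea concrete, here's a toy sketch of what "MoE at the attention layer" could look like. This is my own reconstruction, not the article's code; the module name, the keep_ratio, and the sigmoid gate are all illustrative assumptions.

```python
# Toy sketch (my own reconstruction, not the article's code): a cheap router
# scores each token, only the top-k "high signal" tokens go through full
# multi-head attention, and the rest pass through unchanged.
import torch
import torch.nn as nn

class SparseAdaptiveAttention(nn.Module):
    def __init__(self, d_model=256, n_heads=4, keep_ratio=0.25):
        super().__init__()
        self.router = nn.Linear(d_model, 1)          # per-token "signal" score
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.keep_ratio = keep_ratio

    def forward(self, x):                             # x: (batch, seq, d_model)
        scores = self.router(x).squeeze(-1)           # (batch, seq)
        k = max(1, int(x.size(1) * self.keep_ratio))
        top = scores.topk(k, dim=-1).indices          # indices of high-signal tokens
        out = x.clone()
        for b in range(x.size(0)):                    # simple loop, not optimized
            sel = x[b, top[b]].unsqueeze(0)           # only selected tokens attend
            attended, _ = self.attn(sel, sel, sel)
            out[b, top[b]] = attended.squeeze(0)
        gate = torch.sigmoid(scores).unsqueeze(-1)    # keep routing differentiable
        return gate * out + (1 - gate) * x

x = torch.randn(2, 16, 256)
print(SparseAdaptiveAttention()(x).shape)            # torch.Size([2, 16, 256])
```

The point is just the shape of the thing: a cheap router decides which tokens deserve the expensive attention path, and everything else stays cheap.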
teachersecret@reddit
So, I kinda caught the same post and IDK, it tweaked my ears.
I did some tests, and yeah, kaggle, I think this guy is onto something potentially interesting.
kaggleqrdl@reddit (OP)
Yeah, it'd be interesting to try with something like this ... https://huggingface.co/Corianas/Tiny-Moe/tree/main
kaggleqrdl@reddit (OP)
My idea is to add in a layer that softmaxes over the # of experts (maybe 1 to 3) and baseline it at 2.
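Roughly something like this (quick sketch of what I mean; the class name and the bias init are made up just to show the shape of it):

```python
# Sketch of a "softmax over # of experts" router (hypothetical names, untested
# beyond the shapes): predicts whether a token should use 1, 2, or 3 attention
# experts, with the logits biased so 2 is the baseline before any training.
import torch
import torch.nn as nn

class ExpertCountRouter(nn.Module):
    def __init__(self, d_model=256, max_experts=3):
        super().__init__()
        self.proj = nn.Linear(d_model, max_experts)
        with torch.no_grad():
            # bias toward the middle option (2 experts) at initialization
            self.proj.bias.copy_(torch.tensor([0.0, 1.0, 0.0]))

    def forward(self, x):                      # x: (batch, seq, d_model)
        probs = torch.softmax(self.proj(x), dim=-1)
        n_experts = probs.argmax(dim=-1) + 1   # hard count at inference: 1, 2, or 3
        return probs, n_experts

router = ExpertCountRouter()
probs, n = router(torch.randn(1, 8, 256))
print(n)  # mostly 2s before training, because of the bias init
```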
teachersecret@reddit
I'm already running some experiments.
The concept works and scales nicely.
kaggleqrdl@reddit (OP)
What's interesting with LLMs is how they dump attention in weird places: https://arxiv.org/abs/2410.10781. In gpt-oss they added a sinks thingy to just absorb the attention, but I think it caused issues like ignoring user context. I'm wondering if something like this could be a better fix.
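For context, this is roughly how I understand the sink trick, boiled down to a toy single-head version (my own simplification, not the actual gpt-oss implementation):

```python
# Simplified attention sink: a learnable extra logit joins the softmax, so
# "excess" attention mass can drain into it instead of landing on real tokens.
import torch

def attention_with_sink(q, k, v, sink_logit):
    # q, k, v: (seq, d); sink_logit: scalar tensor
    scores = q @ k.T / k.size(-1) ** 0.5                     # (seq, seq)
    sink = sink_logit.expand(scores.size(0), 1)              # (seq, 1) extra column
    probs = torch.softmax(torch.cat([scores, sink], dim=-1), dim=-1)
    return probs[:, :-1] @ v                                  # drop the sink column: its mass goes nowhere

q = k = v = torch.randn(5, 8)
out = attention_with_sink(q, k, v, torch.tensor(2.0))
print(out.shape)  # torch.Size([5, 8])
```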
teachersecret@reddit
The interesting thing I noticed was it was putting the most attention on the empty spots, and the least on the jaggy edges. I thought it would be the opposite, but thinking about it, if you have an open field of blue, knowing where the blue ends is probably a difficult problem. :)
kaggleqrdl@reddit (OP)
There is a whole field of mechanistic interpretability for AI alignment and safety which I think would benefit from this if it works - what is the LLM really paying attention to?
teachersecret@reddit
I think it works. Every test I'm doing has it performing better than a dense competitor. I'm tagging in some deepseek OCR now and seeing how it plays nice with that since I've done vision->llm, may as well ;p.
kaggleqrdl@reddit (OP)
Yeah, that's very cool for sure. When red-teaming gpt-oss for the Kaggle thingy, I struggled a lot trying to see where it was looking: https://www.kaggle.com/competitions/openai-gpt-oss-20b-red-teaming/writeups/a-disturbingly-helpful-model
kaggleqrdl@reddit (OP)
In https://arxiv.org/pdf/2409.06669 they seem to calculate attention importance, which is odd. I'd think just letting the model figure it out for itself would be better, hmm.
badgerbadgerbadgerWI@reddit
MoE is having a moment but the dirty secret is deployment complexity. You're basically running distributed systems on a single GPU.
The real win here isn't just performance - it's that you can scale expertise without scaling parameters linearly.
Aaaaaaaaaeeeee@reddit
I think that idea is easily confused with merely lowering compute / the KV-cache VRAM hog. Not all of these optimizations work the same way; the important part might be the "active memory savings".
You have these massive 200B FFNs with only a 4B FFN activated; why don't we try the same thing for the attention layers? You could enlarge the total attention parameters into a massive, sparse, SOTA-sized pool, and that's what you have to compare against the original. Don't think of it as sparsity that takes 2B of attention with 2B activated and turns it into 400M activated out of a wimpy 2B total.
Let's say I believe a 40-billion-parameter dense model is the minimum necessary for them to cook a model without any fatal flaws. A third of the model (which would be 13.3B) is attention layers; the rest is FFN layers.
I want to make a new mixture-of-experts model with 3B total active parameters so that I can run it off my mobile device's SSD.
The FFN layers are sparse and equivalent to giant, massive dense layers, but the attention layers remain a small 1B. I think many people agree 1B worth of attention layers isn't enough to beat the latest Claude/GPT in general; it's too small.
They should increase it to what the 40-billion model had (13.3B), and also do the top-k sparse method for attention in addition to the FFN. Maybe the active attention layers are bottlenecking intelligence, OR there's nothing you can do about it: you might need further dense matmul activity and a certain threshold of intermediate representations fusing with each other.
Compute reduction isn't as big of a deal as the memory access being way lower. If you engineer this in a way where you still have to read the entire attention layer parameters, it's not ideal.
The people writing the papers still hold the mentality that attention layers memorize context contents and MLP layers memorize world knowledge. I want to see where this goes.
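Back-of-envelope version of the budget I'm describing (the 1/3 attention share, the 3B active target, and the ~1B active attention are my assumptions from above, not measured numbers):

```python
# Back-of-envelope parameter budget for the hypothetical mobile MoE described above.
dense_total   = 40e9                         # assumed "minimum viable" dense model
attn_share    = 1 / 3                        # assumed fraction of it that is attention
attn_params   = dense_total * attn_share     # ~13.3B of attention
ffn_params    = dense_total - attn_params    # ~26.7B of FFN

active_budget = 3e9                          # target active parameters for the mobile MoE
active_attn   = 1e9                          # what small-active MoEs typically keep dense in attention
active_ffn    = active_budget - active_attn  # what's left over for routed FFN experts

print(f"dense attention: {attn_params/1e9:.1f}B, dense FFN: {ffn_params/1e9:.1f}B")
print(f"active attention today: {active_attn/1e9:.1f}B, vs. the "
      f"{attn_params/1e9:.1f}B total you could reach with sparse attention")
```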
srigi@reddit
It has been discussed here already. Not only is that article an AI-generated mess with lots of bragging, but hear the mighty Karpathy at this exact timestamp (24:24) of the recent podcast: https://youtu.be/lXUZvyajciY?t=1464
kaggleqrdl@reddit (OP)
Kimi seemed to do well. I wouldn't take Karpathy's word for much. LLMs are worth trillions; the only people giving stuff away right now seem to be the Chinese.
srigi@reddit
Did you watch the video at the timestamp? That is exactly what Karpathy said - DeepSeek (China) is already playing with sparse attention.
kaggleqrdl@reddit (OP)
Ah, yes, many are. That's why I included all of the papers above.
kaggleqrdl@reddit (OP)
Can you quote what Karpathy said that was relevant? I listened briefly, but didn't hear anything related.
I really can't stand listening to him. Arxiv is a much better source.
LagOps91@reddit
Does this really make much sense? Attention is already rather small in large MoE models (like <10% of weights most of the time). Sure, you could reduce active parameter counts a bit, but you get a much larger effect when improving sparsity for FFN weights. It only makes sense to even consider doing MoE for attention if you already have really high levels of sparsity for the FFN weights, IMO.
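To put rough numbers on it (the split is made up, just to illustrate the regime where it starts to matter):

```python
# Illustration with made-up but plausible numbers; the exact split varies by model.
total_params = 200e9   # hypothetical large MoE
attn_params  = 15e9    # attention is a small slice of the total weights (<10%)
ffn_params   = total_params - attn_params

active_ffn   = 8e9                 # FFN is already sparse: few experts fire per token
active_attn  = attn_params         # attention is dense: all of it runs on every token
active_total = active_ffn + active_attn

print(f"attention share of total weights:  {attn_params / total_params:.1%}")
print(f"attention share of active weights: {active_attn / active_total:.1%}")
# Once the FFN is already very sparse, attention dominates the *active* compute,
# which is the regime where attention-MoE starts to pay off.
```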
kaggleqrdl@reddit (OP)
Read the papers. It makes sense and many have achieved good results.
LagOps91@reddit
Didn't have time to read the papers yet. Since you highlighted the compute aspect, that's what I focused on. If the idea is to improve attention by introducing some learned sparsity so it doesn't get distracted by low-importance tokens, then I can see the benefits.
kaggleqrdl@reddit (OP)
Part of the potential win isn't just compute but also a better understanding of which tokens are important versus which are noise.
kaggleqrdl@reddit (OP)
Let me provide an example using gpt-oss, which has been 'freely shared'.
Imagine a prompt where I feed it some code - say, 10,000 underscore tokens (something that doesn't get merged automatically) - plus a short key instruction: "please respond with Hi!". gpt-oss will not optimize based on this content when processing that prompt.
That is obviously pretty dumb, right?
There are risks here ofc, but intuitively, it does seem like the right path to go down.
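As a rough illustration of how lopsided the compute is in that toy prompt (assuming plain quadratic attention and roughly 6 useful tokens; real models batch and cache things differently, this is just to show the imbalance):

```python
# Rough count of how much attention work involves the tokens that actually matter.
seq_len       = 10_000 + 6          # filler underscores plus the real instruction
useful_tokens = 6                   # assumed token count of "please respond with Hi!"

pair_scores   = seq_len * seq_len   # every token scored against every other token
useful_pairs  = useful_tokens * seq_len

print(f"fraction of attention scores with a useful query token: {useful_pairs / pair_scores:.2%}")
# ~0.06%: nearly all of the quadratic attention work is spent on filler
# the model could, in principle, learn to skip.
```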