Planning a local Gemma 4 build: Is a single RTX 3090 good enough?
Posted by LopsidedMango1@reddit | LocalLLaMA | 32 comments
Hey everyone. I am planning a local build to run the new Gemma 4 large variants, specifically the 31B Dense and the 26B MoE models.
I am looking at getting a single used RTX 3090 because of the 24GB of VRAM and high memory bandwidth, but I want to make sure it will actually handle these models well before I spend the money.
I know the 31B Dense model needs about 16GB of VRAM when quantised to 4-bit. That leaves some room for the context cache, but I am worried about hitting the 24GB limit if I try to push the context window too far.
For those of you already running the Gemma 4 31B or 26B MoE on a single 3090, how is the performance? Are you getting decent tokens per second generation speeds? Also, how much of that 256K context window can you actually use in the real world without getting out of memory errors?
Any advice or benchmark experiences would be hugely appreciated!
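The OP's budget question can be ballparked before buying. Below is a rough back-of-envelope sketch; the layer/head counts are placeholder assumptions (the thread never states Gemma 4's actual dimensions), and real usage also depends on runtime overhead and whether sliding-window attention caps the cache.

```python
# Back-of-envelope VRAM budget for a quantized dense model on a 24GB card.
# The layer/head counts below are PLACEHOLDER assumptions, not published
# Gemma 4 numbers; substitute the real config when you have it.

def model_vram_gb(n_params_b: float, bits_per_weight: float) -> float:
    """Approximate weight storage for a quantized model."""
    return n_params_b * 1e9 * bits_per_weight / 8 / 1024**3

def kv_cache_gb(n_tokens: int, n_layers: int, n_kv_heads: int,
                head_dim: int, bytes_per_elem: float) -> float:
    """Keys plus values: one entry per layer, per KV head, per token."""
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * n_tokens / 1024**3

weights = model_vram_gb(31, 4.5)          # Q4_K_M averages roughly 4.5 bits/weight
kv = kv_cache_gb(32_768, 48, 8, 128, 2)   # 32k tokens, f16 cache, assumed dims
print(f"weights ~= {weights:.1f} GB, 32k KV ~= {kv:.1f} GB, "
      f"total ~= {weights + kv:.1f} GB")
```

With these assumed dimensions the weights alone land around 16GB, matching the OP's figure, and a 32k f16 cache adds several more GB, which is exactly why the replies below lean on KV quantization.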
semangeIof@reddit
Are you guys missing `-np 1`? My full cmdline is:

Using the Unsloth UD-Q4_K_XL quant of Gemma 4 31B dense, you will be surprised how much context you can fit into 24GB of VRAM. Gemma 4 handles Q4 KV remarkably well.
InstaMatic80@reddit
You are not offloading to the CPU, but some said it is possible to offload the visual encoder. Do you know how?
Mr_International@reddit
`--no-mmproj-offload`
The GPU is technically the offload target, so that keeps the visual encoder on the CPU.
cyberdork@reddit
What's your ctx? No mmproj?
semangeIof@reddit
I have no use for multimodal. ~138k.
bb943bfc39dae@reddit
I tried the 31B with a Q5 GGUF on a single 3090, ctx 100k, ctk and ctv at q8, and it consistently produced 4 tps 😂 I'd rather look at two 32GB GPUs instead.
channingao@reddit
Dual 5090 will be fine 😂
Mr_International@reddit
The 26B MoE yes, you'll be able to run at Q4_K_M with the image processor offloaded to CPU, but the 31B Dense at Q4_K_M is *just* a bit too big in my testing to fit on the 3090.
With the 26B MoE I've been getting about a 128K context limit via llama.cpp on Ubuntu 24.04, on a desktop that doubles as my personal computer (i.e. system processes like the activities window selector also sit in GPU VRAM and take about 2GB out of your 24GB).
LopsidedMango1@reddit (OP)
Thanks for the heads-up on the 2GB Ubuntu desktop overhead, that is really good to know! That explains why the 31B Dense is just barely missing the cut for you on default settings. Out of curiosity, have you tried forcing the KV cache into a 4-bit format or enabling Flash Attention? I've been reading that doing so shrinks the memory footprint enough to squeeze the 31B Dense into 24GB with a decent context size. Also, offloading the vision encoder to the CPU is a great tip; I will definitely be using that strategy!
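The KV-quantization savings asked about here can be estimated directly. The bytes-per-element figures for f16, q8_0, and q4_0 are the standard GGUF block sizes (q8_0 stores 34 bytes per 32 elements, q4_0 stores 18); the layer and head counts remain placeholder assumptions, not published Gemma 4 dimensions.

```python
# How much a KV cache shrinks under quantization at a fixed 64k context.
# f16 = 2 bytes/element; q8_0 ~= 1.06 and q4_0 ~= 0.56 including block scales.
# N_LAYERS / N_KV_HEADS / HEAD_DIM are PLACEHOLDER assumptions.

N_LAYERS, N_KV_HEADS, HEAD_DIM = 48, 8, 128   # hypothetical model dims
CTX = 65_536                                   # 64k tokens

def kv_gb(bytes_per_elem: float) -> float:
    """Total K+V cache size in GB for the assumed dimensions."""
    return 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * bytes_per_elem * CTX / 1024**3

for name, b in [("f16", 2.0), ("q8_0", 1.06), ("q4_0", 0.56)]:
    print(f"{name}: {kv_gb(b):.1f} GB")
```

Under these assumptions, dropping from f16 to q8_0 roughly halves the cache and q4_0 roughly quarters it, which is the headroom the OP is hoping will squeeze the 31B Dense into 24GB.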
Mr_International@reddit
Nah, not yet. I may at some point, but frankly I'm just waiting for the Turboquant KV cache updates to be GA'ed to see if that fixes it with zero effort on my part. If it does, then it paid to wait. If it doesn't, then I'll have to start fiddling and spending my own precious time on it.
For the moment, the 26ba4b is good enough for my purposes.
Ariquitaun@reddit
It looks like Turboquant has a very large performance cost as the context fills up. You're basically trading speed for context size.
SKirby00@reddit
I've actually managed to regain that ~2GB VRAM that gets lost to the system overhead just by plugging my monitor into the motherboard rather than the GPU and using the CPU's integrated graphics to run the OS.
Not ideal if you also plan to game on the same PC, but otherwise no real downside. I do this on my Fedora PC, can't imagine that Ubuntu would be any different.
Eyelbee@reddit
For coding and most tasks, Qwen 27B is already better, so there's no need to stretch for Gemma, and no need to downgrade to the 26B-A4B. Best practice would be running a quant of the 27B with enough context size for the use case.
CharacterAnimator490@reddit
I have the same experience with a 4090.
I can run the 26B MoE q8 quant at ~100 tps with 64k context.
For the 31B dense I use the Unsloth IQ4_NL version; it fits in VRAM with 64k context and q8 KV cache, at 25-40 tps.
tmactmactmactmac@reddit
I agree with this. I'm a novice, so take what I say with a grain of salt, but I think the 26b is perfect for a single 3090 while the 31b wants 2x 3090. I'm running q_4 KV cache, which allows for a bigger context window. I can pretty much max out the 26b at 255k, but the 31b will only take ~60k. This could be due to me using Ollama but, regardless, that's my limit. Dual 3090 with q_8 KV cache would be dialed, IMO.
Lakius_2401@reddit
KoboldCpp Benchmark of RTX 3090 on Windows10, gemma-4-31B-it-UD-Q4_K_XL (unsloth) with 48k context, SWA enabled, Quantize KV Cache OFF, Batch size 256, no vision loaded (about 0.5 GB of VRAM left unused):
MaxCtx: 49152
GenAmount: 100
-----
ProcessingTime: 56.694s
ProcessingSpeed: 865.21T/s (up to 1000 depending on context load)
GenerationTime: 4.191s
GenerationSpeed: 23.86T/s (up to 26 depending on context load and luck, honestly)
TotalTime: 60.885s
Batch size was the extra bit needed to go from 32k context to 48k. You really don't get much of a speed penalty with it decreased, and zero speed benefit from increasing it, unlike some architectures that almost linearly increase in processing speed with it.

You also need SWA enabled to get any reasonable context compression; the penalty of SWA is losing ContextShift, so running out of context forces a full reprocess every time. (You can try SWA on and off with the same seed if you want to verify you're not damaging anything; I get the same outputs, and overflow into shared memory at 8k ctx vs. some to spare at 48k with SWA.)
Bump it down to Q4_K_S to get 64k context limit. Don't unload even a single layer to squeeze more context in, the speed penalty is MASSIVE for dense models. 55/61 layers on VRAM is already losing more than 75% of the generation speed on my DDR4 rig.
26B MoE will give you some crazy context and throughput, and you can use MoE CPU offloading to save more speed and cram more context if you need it (you probably don't). There's very few cases where the intelligence of the model over the max rated context is worth it, in my honest opinion. If you're using a crazy agentic workflow that wastes 32k tokens on the reg, or you're ingesting entire books, or you're ingesting entire codebases to avoid reading them... sure.
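The "massive" penalty for unloading even a few dense layers follows from token generation being roughly memory-bandwidth-bound: every token has to stream the weights of each layer from wherever that layer lives, so the slow tier dominates. A minimal sketch, using ballpark bandwidth figures (3090 roughly 936 GB/s, dual-channel DDR4 on the order of 50 GB/s) and ignoring all other overheads, so these are optimistic upper bounds rather than predictions:

```python
# Why partial CPU offload hurts dense models so much: generation is roughly
# memory-bandwidth-bound, and per-token time is the sum of streaming the
# GPU-resident and CPU-resident weight bytes. Bandwidths are ballpark figures.

GPU_BW, CPU_BW = 936, 50          # GB/s, approximate
MODEL_GB = 17                     # roughly a 31B dense model at ~4-bit

def tps_estimate(frac_on_gpu: float) -> float:
    """Idealized tokens/sec: 1 / (time to stream all weight bytes once)."""
    t = (MODEL_GB * frac_on_gpu) / GPU_BW + (MODEL_GB * (1 - frac_on_gpu)) / CPU_BW
    return 1 / t

print(f"all on GPU:   {tps_estimate(1.0):.0f} tok/s (upper bound)")
print(f"55/61 layers: {tps_estimate(55 / 61):.0f} tok/s")
```

Moving even ~10% of a dense model to DDR4 cuts the estimate by well over half, consistent with the ">75% of generation speed" loss reported above. The same arithmetic explains the MoE's throughput: only the active parameters are streamed per token, so a 26B MoE with a few billion active behaves like a much smaller dense model.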
psyclik@reddit
27b-a3b in q4, 192k context (q8_0 KV; make sure to use a recent llama.cpp to get the latest patches on quantized KV cache, which make q8 basically free quality-wise). About 100 t/s with empty context, 50 with context nearly full.
My new daily driver.
Gringe8@reddit
I'd go with two 3090s. With 48GB of VRAM you can use Q8 with 131k context on the 31B. I use a 5090 with a 4080.
tome571@reddit
I'm running 31B Gemma 4, Q4, on a 3090. You're going to have a limited context window OR slow speeds from having to offload some layers.
I keep around a 6k context window, which doesn't feel awful for general stuff, but it definitely depends on your use case. For any significant coding it just won't have the window unless you offload some to system RAM, and then it crawls to 2 tok/sec.
I'm using it to see limitations on the model and work on some theories and experiments on memory systems, and it has been impressive thus far in that area. Very smart model for its size.
Around 20 tok/sec when all on GPU. Drops to 2-3 when offloading to get more context window.
3090, Ryzen 3900x CPU, 128GB DDR4 system RAM.
Hope this helps.
YourNightmar31@reddit
A Q4 KV cache can go up to 50k on my RTX 3090 with the model at Q4 too.
x0wl@reddit
I fit 131072 tokens into 24GB with Gemma 4 31B (Q4_K_S model, Q4_0 KV). (Please note that I run almost nothing else on that GPU.)
YourNightmar31@reddit
Yeah, that makes sense. I run it in LM Studio and it's the main GPU in my PC, so I don't dare to go over the 22.5GB estimate it gives me.
tome571@reddit
Ahh yeah, worth noting that since I'm doing memory/RAG stuff, the embedding model takes some space too. Makes sense you're getting a bit more
SocialDinamo@reddit
I am having a really good time with Gemma 4 26b in a 4 bit AWQ quant! Your mileage may vary but it handled agentic workflows in opencode really well. I haven’t used it for any serious coding
Monkey_1505@reddit
You could quantize the context to q8, or to q8 plus Turboquant on the V portion (gives you about 2.5x), if you insist on static quants of a particular size.
teachersecret@reddit
24GB can run a 4-bit 26B up close to 200k context, no problem, on GPU. I run 180k with 2 slots in Ubuntu while doing other things on the same rig. That's using f16 KV cache, so it'll double if you go down to 8-bit KV or lower.
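The max-context figures quoted in this thread can be sanity-checked by inverting the usual estimate: take the VRAM left after the weights and divide by the per-token cache cost. A sketch with placeholder dimensions (a naive full-attention cache; real figures like the 180-200k above depend heavily on the true layer count and on sliding-window attention capping most of the cache, so this deliberately underestimates):

```python
# Inverse estimate: how many context tokens fit in the VRAM left after the
# weights. All dims are PLACEHOLDER guesses for a 26B MoE; note that MoE
# total params must still live in VRAM even though few are active per token.

VRAM_GB = 24
WEIGHTS_GB = 26 * 4.5 / 8          # 26B at ~4.5 bits/weight
OVERHEAD_GB = 1.5                  # CUDA context, compute buffers (guess)

def max_ctx(bytes_per_elem: float, n_layers: int = 40,
            n_kv_heads: int = 8, head_dim: int = 128) -> int:
    """Tokens of full-attention KV cache that fit in the leftover VRAM."""
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
    free = (VRAM_GB - WEIGHTS_GB - OVERHEAD_GB) * 1024**3
    return int(free / per_token)

print(f"f16 KV:  ~{max_ctx(2.0):,} tokens")
print(f"q8_0 KV: ~{max_ctx(1.06):,} tokens")
```

Halving the bytes per element roughly doubles the fit, which matches the "it'll double if you go down to 8-bit KV" observation above even though the absolute numbers here are only as good as the guessed dimensions.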
That model would probably work quite well on the 3090 as it sits.
The 31b is going to be significantly slower and significantly less context. Still a smart model and I like it, though.
One thing I do want to point out is that these models are visibly better at low context than at high context, so even if you can run high context... you really should try to keep most prompts under 20k, which means even the lower-context 31b is fine for most tasks.
jacek2023@reddit
Try to plan for two 3090s; it's a totally new world. And now with TP it's even more important.
putrasherni@reddit
i think you'll need 4
--Rotten-By-Design--@reddit
I tested through LM Studio with my 3090.
gemma-4-26b-a4b q4_k_m:
Context max is 80K, leaving less than 1GB of VRAM.
Token generation speed: 98.21 tok/s.
gemma-4-31b-it q4_k_m:
Context max is 14K, leaving less than 1GB of VRAM.
Token generation speed: 25.99 tok/s.
fragment_me@reddit
Gemma 4 31B UD Q4_K_XL can get 120-140k context with the KV cache at Q8. You'll need the -np 1 parameter for llama.cpp. I'd highly recommend getting 32GB of VRAM if you can find something with memory bandwidth similar to the 3090's. 2x 3090 is pretty good for running UD Q8_K_XL. Don't expect more than 20 TG tok/s.
stddealer@reddit
If you're ok with 32k token window, yes.
squachek@reddit
No