Performance Benchmark - Qwen3.5 & Gemma4 on dual GPU setup (RTX 4070 + RTX 3060)

Posted by DracoTorpedo@reddit | LocalLLaMA | 16 comments

Hi everyone,

I've been following a lot of the local LLM talk on this forum lately and have learned quite a bit from you all! This is my first post, and hopefully not my last. I wanted to share some interesting benchmarks I ran in my free time testing a dual-GPU setup.

Hardware Specs:

Software Setup:

The "Llama_benchy" Metrics:
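For anyone new to these numbers, the three metrics charted below (prompt processing speed, token generation speed, and time to first response) boil down to simple ratios. This is an illustrative sketch, not the benchmark tool's actual code; the function names and example numbers are my own:

```python
# Illustrative definitions of the usual llama.cpp-style benchmark metrics.
# All names and numbers here are placeholders, not from the actual runs.

def prompt_processing_speed(prompt_tokens: int, prompt_seconds: float) -> float:
    """Prompt processing (pp) speed in tokens per second."""
    return prompt_tokens / prompt_seconds

def token_generation_speed(generated_tokens: int, generation_seconds: float) -> float:
    """Token generation (tg) speed in tokens per second."""
    return generated_tokens / generation_seconds

def time_to_first_token(prompt_tokens: int, pp_speed: float) -> float:
    """Approximate time to first response: the whole prompt is processed first."""
    return prompt_tokens / pp_speed

# Example: a 50k-token prompt at 800 t/s prompt processing
# means roughly 62.5 s before the first generated token appears.
```

This is why long contexts (like the 50k-60k runs below) make time-to-first-response the metric to watch, even when generation speed looks great.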

I've had a blast with the Qwen3.5 series lately, especially the 35B-A3B model. It was already fast on my old setup (RTX 4070 + RAM offload), but adding the RTX 3060 gives me a lot more headroom. I tested these four models:

  1. Bartowski Qwen3.5-35B-A3B Q4_K_S @ 50k context
  2. Jackrong qwopus3.5-27b-v3 Q4_K_M @ 50k context
  3. Unsloth Gemma4-26B-A4B Q4_K_M @ 60k context
  4. Unsloth Gemma4-31B-IT Q4_K_M @ 15k context (higher context wouldn't fit in my VRAM)

All models ran with max_concurrent_preds=1, full GPU offload, and flash attention enabled.
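For reference, a run with those settings would look something like this with llama.cpp's llama-bench; this is a hypothetical sketch (the model path and test lengths are placeholders, not the exact command used here):

```shell
# Hypothetical llama-bench (llama.cpp) invocation; model path is a placeholder.
# -ngl 99  : offload all layers to the GPUs (full GPU offload)
# -fa 1    : enable flash attention
# -p / -n  : prompt-processing and token-generation test lengths
./llama-bench -m models/Qwen3.5-35B-A3B-Q4_K_S.gguf -ngl 99 -fa 1 -p 4096 -n 256
```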

Benchmark Results:

[Prompt Processing Speed - Dual GPU]

[Token Generation - Dual GPU]

[Time to first response - Dual GPU]

Analysis:

The "New GPU" Comparison

I wanted to see how much the RTX 3060 actually helped my favorite model, Qwen3.5 35B-A3B, compared to my old setup (4070 + CPU + RAM offload):


[Prompt Processing - Dual vs Single GPU]

[Token Generation Throughput - Dual vs Single GPU]

[Time to first response - Dual vs Single GPU]

VRAM & Utilization Notes: I didn't get perfect readings (mostly just Task Manager), so take this with a grain of salt. The RTX 4070 hovered around 40-45% utilization, while the 3060 was between 50-60%.

The memory split was a bit weird: despite the 4070 being the primary card, the 3060 always took a slightly larger chunk of VRAM (about 300–400 MB more), excluding the base Windows usage.
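If anyone wants cleaner readings than Task Manager, nvidia-smi can log per-GPU stats, and llama.cpp's --tensor-split flag lets you control how the model is divided between cards. A sketch (the model path and split ratio are placeholders, not my actual settings):

```shell
# Log per-GPU utilization and VRAM once per second (better than Task Manager):
nvidia-smi --query-gpu=index,name,utilization.gpu,memory.used --format=csv -l 1

# llama.cpp's --tensor-split takes ratios (not GB) to weight how much of the
# model each GPU holds; e.g. an even split between the two cards:
./llama-server -m models/Qwen3.5-35B-A3B-Q4_K_S.gguf -ngl 99 --tensor-split 1,1
```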

Conclusions:

Final advice: if you're on the fence about a dual-GPU setup, go for it! Just keep realistic expectations: it's great for hobbyist use, and honestly it's just a lot of fun to hunt for deals, install the cards, and play around with them.

If anyone has suggestions to improve my setup or tools for objective quality testing, please let me know!

Closing remarks: I ran this post through Gemma4-26B-A4B at the end to fix grammar issues. It was quite fast, but it kept insisting that Qwen2.5 and Gemma2 are the latest models, and even added that I would lose credibility if I didn't use the correct version numbers 😂