Intel Arc Pro B70 32GB performance on Qwen3.5-27B@Q4
Posted by Puzzleheaded_Base302@reddit | LocalLLaMA | View on Reddit | 50 comments
Posted something when I initially got the GPU on r/IntelArc. Did not have vllm working at the time, so no real use case numbers. After many nights fighting with vllm, I finally got it to work.
Here is a summary:
- both llama.cpp and llm-scaler-vllm produce ~12 tps token generation.
- tensor parallel degrades performance on all fronts (this may have something to do with my PCIe topology)
- pipeline parallel improves PP but degrades TG for a single query; it improves both at high concurrency
- high-concurrency performance is a lot better. TG reaches 135 tps at concurrency 32, which is about 20% less than an RTX PRO 4500 32GB
- Power consumption at concurrency 32 is about 50% higher than the RTX PRO 4500 32GB, which is consistent with the specs. Power maxes out during the PP step, then drops by almost half during single-query TG. It does not max out during the TG step even at high concurrency.
- you will need the latest beta fork to get Qwen3.5 working.
- once you install Ubuntu 26.04 (yes, the pre-release version), no special driver installation is needed. I was not able to get Ubuntu 24.04.4 working at all, and was not in any mood to install the officially supported Ubuntu 25.10, which will be obsolete in 3 months.
The command below will get the Intel vLLM fork running Qwen3.5 on Ubuntu 26.04:
export HF_TOKEN="---your hf token---"
docker run -it --rm \
  --name vllmb70 \
  --ipc=host \
  --shm-size=32gb \
  --device /dev/dri:/dev/dri \
  --privileged \
  -p 8000:8000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -e HF_TOKEN=$HF_TOKEN \
  -e VLLM_TARGET_DEVICE="xpu" \
  --entrypoint /bin/bash \
  intel/llm-scaler-vllm:0.14.0-b8.1 \
  -c "source /opt/intel/oneapi/setvars.sh --force && \
    python3 -m vllm.entrypoints.openai.api_server \\
      --model Intel/Qwen3.5-27B-int4-AutoRound \\
      --tokenizer Qwen/Qwen3.5-27B \\
      --served-model-name qwen3.5-27b \\
      --gpu-memory-utilization 0.92 \\
      --allow-deprecated-quantization \\
      --trust-remote-code \\
      --port 8000 \\
      --max-model-len 4096 \\
      --tensor-parallel-size 1 \\
      --pipeline-parallel-size 1 \\
      --enforce-eager \\
      --distributed-executor-backend mp"
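Once the container is up, a quick smoke test against the OpenAI-compatible endpoint might look like this (the model name matches `--served-model-name` above; the prompt is just an example):

```shell
# Smoke test: send one chat completion to the local vLLM server
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3.5-27b",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "max_tokens": 64
  }'
```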
Below are the measured token rates.
- Single GPU
Concurrency: 1
| model | test | t/s | peak t/s | ttfr (ms) | est_ppt (ms) | e2e_ttft (ms) |
|---|---|---|---|---|---|---|
| qwen3.5-27b | pp2048 | 1700.83 ± 7.03 | 1196.95 ± 13.22 | 1104.11 ± 13.22 | 1196.99 ± 13.22 | |
| qwen3.5-27b | tg512 | 13.43 ± 0.09 | 14.00 ± 0.00 |
Concurrency: 4
| model | test | t/s (total) | t/s (req) | peak t/s | peak t/s (req) | ttfr (ms) | est_ppt (ms) | e2e_ttft (ms) |
|---|---|---|---|---|---|---|---|---|
| qwen3.5-27b | pp2048 (c4) | 1492.15 ± 93.77 | 802.83 ± 468.06 | 3155.68 ± 1403.00 | 3047.58 ± 1403.00 | 3155.71 ± 1402.98 | ||
| qwen3.5-27b | tg512 (c4) | 45.91 ± 0.46 | 12.03 ± 0.38 | 52.00 ± 0.00 | 13.00 ± 0.00 |
Concurrency: 8
| model | test | t/s (total) | t/s (req) | peak t/s | peak t/s (req) | ttfr (ms) | est_ppt (ms) | e2e_ttft (ms) |
|---|---|---|---|---|---|---|---|---|
| qwen3.5-27b | pp2048 (c8) | 1554.80 ± 5.58 | 533.91 ± 466.39 | 5677.56 ± 2849.77 | 5580.43 ± 2849.77 | 5677.59 ± 2849.76 | ||
| qwen3.5-27b | tg512 (c8) | 84.37 ± 0.31 | 11.73 ± 0.72 | 112.00 ± 0.00 | 14.00 ± 0.00 |
Concurrency: 32 (this basically saturates all the compute cores on the B70)
| model | test | t/s (total) | t/s (req) | peak t/s | peak t/s (req) | ttfr (ms) | est_ppt (ms) | e2e_ttft (ms) |
|---|---|---|---|---|---|---|---|---|
| qwen3.5-27b | pp2048 (c32) | 1503.41 ± 1.04 | 194.92 ± 302.24 | 20599.68 ± 11444.52 | 20509.48 ± 11444.52 | 20599.70 ± 11444.52 | ||
| qwen3.5-27b | tg512 (c32) | 130.90 ± 13.08 | 5.22 ± 0.91 | 288.00 ± 0.00 | 10.39 ± 1.60 |
Now dual GPUs. Tensor Parallel 2
Concurrency: 1
| model | test | t/s | peak t/s | ttfr (ms) | est_ppt (ms) | e2e_ttft (ms) |
|---|---|---|---|---|---|---|
| qwen3.5-27b | pp2048 | 1019.80 ± 67.88 | 1962.77 ± 135.14 | 1835.82 ± 135.14 | 1962.82 ± 135.14 | |
| qwen3.5-27b | tg512 | 9.10 ± 0.45 | 11.00 ± 1.41 |
Concurrency: 32
| model | test | t/s (total) | t/s (req) | peak t/s | peak t/s (req) | ttfr (ms) | est_ppt (ms) | e2e_ttft (ms) |
|---|---|---|---|---|---|---|---|---|
| qwen3.5-27b | pp2048 (c32) | 1057.36 ± 1.69 | 133.90 ± 206.98 | 29738.38 ± 16330.06 | 29597.02 ± 16330.06 | 29738.40 ± 16330.05 | ||
| qwen3.5-27b | tg512 (c32) | 140.30 ± 1.78 | 6.08 ± 1.14 | 320.00 ± 0.00 | 10.32 ± 0.47 |
Pipeline Parallel 2
Concurrency 1
| model | test | t/s | peak t/s | ttfr (ms) | est_ppt (ms) | e2e_ttft (ms) |
|---|---|---|---|---|---|---|
| qwen3.5-27b | pp2048 | 1680.59 ± 124.37 | 1367.69 ± 105.88 | 1161.99 ± 105.88 | 1367.74 ± 105.89 | |
| qwen3.5-27b | tg512 | 10.31 ± 0.01 | 12.00 ± 0.00 |
Concurrency 32
| model | test | t/s (total) | t/s (req) | peak t/s | peak t/s (req) | ttfr (ms) | est_ppt (ms) | e2e_ttft (ms) |
|---|---|---|---|---|---|---|---|---|
| qwen3.5-27b | pp2048 (c32) | 2750.77 ± 1.96 | 261.41 ± 294.53 | 11889.30 ± 5927.16 | 11768.85 ± 5927.16 | 11889.32 ± 5927.16 | ||
| qwen3.5-27b | tg512 (c32) | 195.82 ± 4.09 | 7.14 ± 0.57 | 293.33 ± 7.54 | 9.51 ± 0.50 |
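For reference, the relationship between the `t/s (total)` and `t/s (req)` columns in the tables above is just token accounting: total throughput is all tokens divided by the shared wall-clock window, per-request throughput divides by concurrency. A quick sanity check with illustrative numbers (32 requests, 512 tokens each, ~125 s decode window, chosen to roughly match the single-GPU c32 row):

```shell
# Sanity check on aggregate vs per-request TG rates.
# aggregate = conc * tokens / wall; per-request = tokens / wall
awk 'BEGIN {
  conc = 32; tokens = 512; wall = 125.0
  printf "aggregate t/s: %.2f\n", conc * tokens / wall
  printf "per-request t/s: %.2f\n", tokens / wall
}'
```

This prints roughly 131 t/s aggregate and 4.1 t/s per request, in line with the ~131 t/s total and ~5 t/s per-request figures in the concurrency-32 table.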
Puzzleheaded_Base302@reddit (OP)
LM Studio (llama.cpp Vulkan) results, in case people want to compare.
Single GPU
Concurrency 1
Concurrency 2
Concurrency 4
fallingdowndizzyvr@reddit
Here are the numbers from my A770 and Strix Halo. Both are faster for TG and not much slower for PP, which is why I don't use my A770s anymore: the Strix Halo is pretty much a 128GB A770. It's also why I probably won't get the B70, since the Strix Halo is pretty much a 128GB B70 too.
A770
Strix Halo
Thanks-Suitable@reddit
It's crazy how much the drivers matter for single-concurrency token generation! Hope the cards are actually available in stores in Europe as well, so we can try fixing the software support :) or at least contribute.
RaDDaKKa@reddit
The cards are in stock in Poland, but it looks like there’s a 10-day lead time
https://www.morele.net/karta-graficzna-intel-arc-pro-b70-32gb-gddr6-33p01ib0bb-15926398/
ea_man@reddit
Here's it is cheaper: https://www.dustin.dk/product/5020089421/arc-pro-b70-ai-workstation
Nvclead@reddit
Can't order without an organisation number.
ea_man@reddit
Can't you order as a guest?
Nvclead@reddit
Nope, it asks for the organisation number too.
Pablo_the_brave@reddit
In the EU it's also at Proshop for a similar price.
Thanks-Suitable@reddit
It's true, but it's around 1250 EUR, so it's closer to the AMD card at 1500 with solid drivers :/ But thanks!
DistanceAlert5706@reddit
Crazy. It's 2 times slower than an RTX 5060 Ti for single use. The support is not there. And with enforce-eager in the vLLM command, it's not using any graph optimizations.
This_Maintenance_834@reddit
Removing the --enforce-eager argument crashes vLLM. I don't know if this card supports the graph function or if it's a lack of driver support.
vLLM literally complains it does not know how to build the graph. This is Intel's own fork of vLLM, so if the official Intel fork doesn't support it, who does?
TheBlueMatt@reddit
There's definitely some trivial driver and optimization headroom, but we'll see how far it goes. With some trivial patches that are going upstream (they shouldn't make a huge difference) and the mesa opts from https://gitlab.freedesktop.org/mesa/mesa/-/work_items/15162, on a single Arc Pro B60 using unsloth/Qwen3.5-27B-GGUF:Q4_0 (which I assume is what you used; it's probably similar to the OP's at least), I get concurrency 1 tg512 15.87 ± 0.40.
Capital_Evening1082@reddit
Qwen3.5-27B-FP8 runs at 29 t/s on 2x AMD R9700 for a single request, and 524 t/s at concurrency 32.
This is the league the B70 should be playing in. Less than 10 t/s at concurrency 1 and 200 t/s at concurrency 32 hints at a massive software issue.
fallingdowndizzyvr@reddit
That's how Intel rolls. It was the same with the A770: it should have been 3070/3080 performance on paper, but it was a 3060 in reality.
Otherwise-Host9153@reddit
I had Opus tune the llama.cpp code a little bit; this is what I was able to get right now:
Pablo_the_brave@reddit
And with 100k context?
Monad_Maya@reddit
That's kinda low for a single user single GPU scenario. I hope it's just a software optimization issue.
bennyb0y@reddit
Agree. Let’s go Intel we are all rooting for you.
seamonn@reddit
I still have PTSD from how quickly they abandoned Intel IPEX.
stormy1one@reddit
You and me both. IPEX made my Arc 170T scream
simracerman@reddit
To put this in perspective: in mainline llama.cpp, my single 5070 Ti with iGPU offload does 16 t/s at empty context and ~12-13 t/s at 64k context.
I had higher hopes for this Intel card. On paper it should be only slightly slower than my 5070 Ti when both models are fully in VRAM.
Hytht@reddit
Someone got the same 13 t/s TG with an even larger dynamic FP8 quant: https://forum.level1techs.com/t/intel-b70-launch-unboxed-and-tested/247873/2
Something is wrong in this setup.
Puzzleheaded_Base302@reddit (OP)
I hope that is the case. Otherwise this card won't fly in the datacenter either, even if it can win on the tokens-per-dollar metric.
munkiemagik@reddit
Well, maybe this is a chance to get hands on a relatively cheap product because they suck (sorry Intel, you are trying, and sincerely thank you for that).
But if/when they fix it up, the price on these is surely going to skyrocket just like everything else, because everyone and their granny will be trying to get one (or two, or four).
mr_zerolith@reddit
Bought fourth tier hardware, got sixth tier performance
LocalLLaMa_reader@reddit
Are you intending to continue with llama.cpp or vLLM, now that you managed to set it up? Why?
Thank you so much for sharing and taking the plunge. Let's hope Intel indeed improves their software...
fallingdowndizzyvr@reddit
Don't hold your breath. I'm still waiting for my A770s to reach their paper potential.
This_Maintenance_834@reddit
I kind of intend to sell the card; I already have an RTX PRO 4500.
This was a cheaper way to get to 128GB of VRAM, but the token rate is so poor that I'd have no use for it even if I got to 128GB with four cards. With present driver support, this card only makes financial sense in a datacenter where concurrency is high.
D2OQZG8l5BI1S06@reddit
OpenVINO backend is faster.
Monkey_1505@reddit
That's the speed my mobile AMD dGPU pushes out for TG when I'm using an MoE that doesn't entirely fit in VRAM. NGL, if I bought this card, I'd feel pretty bad about that.
an0maly33@reddit
Exactly what I was thinking. "I get that with my 8gb 3070 while it thrashes between vram and system ram."
RIP26770@reddit
Use Vulkan and double the speed
libregrape@reddit
It's crazy how I literally get better results (~800 t/s on PP and ~25 t/s on TG) with an RTX 5060 Ti 16GB + CUDA + llama.cpp in single-user scenarios. What a disappointment. I hope that Intel fixes their software.
MiniCactpotBroker@reddit
even my old 3090 is miles ahead
RaDDaKKa@reddit
So, a total disappointment. I expected this to be a solid card for local LLMs like Qwen 3.5 27B or Gemma 4 31B with at least a 100k context. I considered a dual gpu setup, perhaps even a quad, but given these benchmarks, it seems I'm better off saving for Nvidia hardware. It might be viable for multi-agent systems, but for now, we just have to wait for software optimizations.
overand@reddit
It looks "fine" for that use case, for a single user (and maybe more). But it's not knocking it out of the park. I wonder how much of it is the kinda "meh" memory bandwidth.
suprjami@reddit
This is worse performance than a 3060.
15 tok/sec makes reasoning pretty unfeasible.
Similar-Republic149@reddit
It's not worse than a 3060 but it is worse than an AMD MI50
suprjami@reddit
I own three 3060s. It is worse than a 3060.
Makers7886@reddit
I'm looking at my 3090s rn like they are the bruce willis of gpus
DataGOGO@reddit
It isn't the card.
Sound_and_the_fury@reddit
Yeah damn... was thinking of getting one if some stats aligned, but not impressed.
Pablo_the_brave@reddit
The same feeling. I was seriously thinking of buying two or three of them for my job, but this... Thx OP for sharing. Currently in my home lab I have a hybrid Vulkan setup, 5070 Ti + iGPU 780M, and with Qwen3.5 27B IQ4_XS, KV cache q8_0, 85k context I get 700-300 t/s prefill and 14-11 t/s decoding (the lower numbers at full context)...
Ok_Try_877@reddit
On the NVFP4 model of 27B I get 300+ t/s aggregate output, running batches of 14 with 30K contexts, and over 4000 t/s prompt processing with 2x 5060 Ti. They idle at 5W each and max out at 110-115W each without changing any voltage/power settings.
Winter_Tension5432@reddit
Hey, could you tell me what Runtime and config you're using?
Ok_Try_877@reddit
Hi,
I compiled vLLM from source with CUDA 13.1. I think there is also a small patch/script you have to run to get native NVFP4 working with consumer Blackwell cards.
CUDA_VISIBLE_DEVICES=0,1 vllm serve Kbenkhaled/Qwen3.5-27B-NVFP4 --tensor-parallel-size 2 --max-model-len 30000 --max-num-seqs 14 --skip-mm-profiling --gpu-memory-utilization 0.92 --kv-cache-dtype fp8
What's also a bit weird is this model has been taken off HuggingFace, or I can't find it..... I'm sure some of the others that have appeared since work just as well.
MiniCactpotBroker@reddit
Honestly not impressive at all. I almost got the card yesterday lol
__JockY__@reddit
Is it quietly disabling prefix caching?
Final-Rush759@reddit