Getting unexpected output with Gemma 4 31b-it on vLLM
Posted by Fr4y3R@reddit | LocalLLaMA | 4 comments
Hey everyone,
I'm running into a weird issue and hoping someone here might have a fix or some troubleshooting ideas. I'm trying to run the new Gemma 4 31b-it model on vLLM (v0.20.0-cu130), deployed via the example Helm chart (https://github.com/vllm-project/vllm/tree/main/examples/online_serving/chart-helm).
For context, this is the command I used for running vLLM:
```
command: ["vllm", "serve", "/data", "--served-model-name", "google/gemma-4-31b-it","--safetensors-load-strategy", "lazy", "--dtype", "bfloat16", "--max-model-len", "4096", "--gpu-memory-utilization", "0.8", "--host", "0.0.0.0", "--port", "8000", "--chat-template", "/data/chat_template.jinja", "--reasoning-parser", "gemma4"]
```
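For completeness, this is the sanity check I run against the models endpoint to make sure the served name matches what I request later (standard OpenAI-compatible API, nothing vLLM-specific):
```
from openai import OpenAI

# List what the OpenAI-compatible server reports as available models;
# the id printed here must match the "model" field used in chat requests.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="")
for model in client.models.list():
    print(model.id)
```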
When I try to send a simple message to the model using the following script:
```
from openai import OpenAI

# Point the client at the local vLLM server; vLLM ignores the API key
# unless it was started with --api-key, so an empty string is fine.
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="",
)

response = client.chat.completions.create(
    model="google/gemma-4-31b-it",
    messages=[
        {"role": "user", "content": "hello how are you?"}
    ],
)
print(response.choices[0].message.content)
```
Instead of a normal response, I keep getting this strange, repetitive output:
```
thinking nvarchar(max) nvarchar(max) nvarchar(max)...
```
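Since I'm running with --reasoning-parser, I also dumped the full message object in case the parser was splitting the output into a separate reasoning field (the reasoning_content attribute is what vLLM's reasoning-output docs describe; it may not be present on every response):
```
# With a reasoning parser enabled, vLLM can return the chain-of-thought in
# reasoning_content, separate from content; print both to see the split.
msg = response.choices[0].message
print("content:", msg.content)
print("reasoning_content:", getattr(msg, "reasoning_content", None))
```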
Has anyone experienced this specific issue with this model or vLLM version? Any pointers on what might be causing it or how to fix my configuration would be hugely appreciated!
Thanks in advance.
HVACcontrolsGuru@reddit
Make sure the template has the correct thinking tags as well. Gemma doesn't close them itself, and an update was pushed to some of the models' chat templates a few days ago.
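If you want a quick way to check, something like this will flag a thinking tag that gets opened but never closed (the tag names are guesses on my part; grep the actual template for whatever the gemma4 parser expects):
```
# Naive check: does the chat template emit an opening thinking tag
# without a matching closing tag? The tag names below are assumptions.
template = open("/data/chat_template.jinja").read()
for opening in ("<think>", "<start_of_thought>"):
    closing = opening.replace("<", "</", 1)
    if opening in template and closing not in template:
        print(f"{opening} is opened but never closed in the template")
```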
Dyriusx@reddit
Try switching the template to ChatML to see if it's template-related. A repetition penalty could help too, but my bet is on that Jinja template.
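To test the repetition penalty angle, vLLM accepts its extra sampling params through the OpenAI client's extra_body; using the client from your script (the 1.2 value is just a starting point):
```
# Pass vLLM-specific sampling params via extra_body; repetition_penalty
# values above 1.0 discourage the model from repeating the same tokens.
response = client.chat.completions.create(
    model="google/gemma-4-31b-it",
    messages=[{"role": "user", "content": "hello how are you?"}],
    extra_body={"repetition_penalty": 1.2},
)
print(response.choices[0].message.content)
```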
BitGreen1270@reddit
Did you try any other model to see if it works? Potentially a corrupted file?
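If it is corruption, hashing the local shards and comparing the digests against the SHA256 checksums listed on the model's Hugging Face files page would confirm it (the /data path is from your serve command):
```
# Hash each local safetensors shard in chunks so the digests can be
# compared with the checksums shown on the Hugging Face files page.
import hashlib
import pathlib

for shard in sorted(pathlib.Path("/data").glob("*.safetensors")):
    digest = hashlib.sha256()
    with shard.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    print(shard.name, digest.hexdigest())
```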
DinoAmino@reddit
4K context is not enough if you're going to enable thinking.
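In the serve command that means raising --max-model-len, something like this (16384 is just an example; size it to the VRAM you have left):
```
command: [
  "vllm", "serve", "/data",
  "--served-model-name", "google/gemma-4-31b-it",
  "--safetensors-load-strategy", "lazy",
  "--dtype", "bfloat16",
  "--max-model-len", "16384",  # raised from 4096 to leave room for thinking
  "--gpu-memory-utilization", "0.8",
  "--host", "0.0.0.0",
  "--port", "8000",
  "--chat-template", "/data/chat_template.jinja",
  "--reasoning-parser", "gemma4"
]
```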