Comparison of latest reasoning models on the most recent LeetCode questions (Qwen-32B vs Qwen-235B vs nvidia-OpenCodeReasoning-32B vs Hunyuan-A13B)
Posted by kyazoglu@reddit | LocalLLaMA | 30 comments

Testing method
- For each question, four instances of the same model were run in parallel (i.e., best-of-4). If any of them successfully solved the question, the most optimized solution among them was selected.
- If none of the four produced a solution within the maximum context length, an additional four instances were run, making it a best-of-8 scenario. This second batch was only needed in 2 or 3 cases, where the first four failed but the next four succeeded (a rough sketch of this harness follows the list).
- Only one question couldn't be solved by any of the eight instances due to context length limitations. This occurred with Qwen-235B, as noted in the results table.
- Note that the quantizations are not the same. This is just me trying to find the best reasoning & coding model for my setup.
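A rough sketch of the best-of-N procedure described above. The helpers here are hypothetical stand-ins, not my actual scripts: generate_solution() would call the model backend, run_pretests() would run a candidate against the question's sample cases, and measure_runtime() would time an accepted candidate for ranking.

```python
import asyncio

# Hypothetical helpers (not real library calls): generate_solution() calls
# the model backend, run_pretests() executes a candidate against the sample
# cases, measure_runtime() times an accepted candidate.
async def best_of_n(question: str, n: int = 4) -> str | None:
    # Fire n independent generations in parallel.
    candidates = await asyncio.gather(
        *(generate_solution(question) for _ in range(n))
    )
    # Keep only candidates that pass the sample (pre-submit) tests.
    passing = [c for c in candidates if c and run_pretests(question, c)]
    # Pick the most optimized accepted solution, e.g. by measured runtime.
    return min(passing, key=measure_runtime) if passing else None

# If the first batch of 4 yields nothing, run one more batch for an
# effective best-of-8, mirroring the procedure described above.
```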
Coloring strategy (sketched in code after this list):
- Mark the solution green if it's accepted.
- Use red if it fails in the pre-test cases.
- Use red if it fails in the test cases (due to wrong answer or time limit) and passes less than 90% of them.
- Use orange if it fails in the test cases but still manages to pass over 90%.
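A small helper expressing those grading rules (a sketch only; pass_rate is the fraction of hidden test cases passed):

```python
def color(accepted: bool, passed_pretests: bool, pass_rate: float) -> str:
    # Green if the submission is accepted outright.
    if accepted:
        return "green"
    # Red if it already fails the pre-test (sample) cases.
    if not passed_pretests:
        return "red"
    # Otherwise it failed hidden tests: orange above 90% passed, red below.
    return "orange" if pass_rate > 0.90 else "red"
```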
A few observations:
- Occasionally, the generated code contains minor typos, such as a missing comma. I corrected these manually and didn't treat them as failures, since they were limited to single-character issues that clearly qualify as typos.
- Hunyuan fell short of my expectations.
- Qwen-32B and OpenCodeReasoning model both performed better than expected.
- The NVIDIA model tends to be overly verbose (a LOT), which is likely why it needed a higher context limit of 65k tokens, compared to 32k for the other models.
Hardware: 2x H100
Backend: vLLM (0.9.2 for Hunyuan, 0.9.1 for the others)
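If you want to hit a vLLM deployment like this from Python, something along these lines works against its OpenAI-compatible endpoint (the model name, port, and token budget below are placeholders, not my exact settings):

```python
from openai import OpenAI

# vLLM exposes an OpenAI-compatible API; point the client at it.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="Qwen/Qwen3-32B",  # placeholder model name
    messages=[{"role": "user", "content": "Solve this LeetCode problem: ..."}],
    max_tokens=32768,  # reasoning models need a generous output budget
)
print(resp.choices[0].message.content)
```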
Feel free to recommend another reasoning model for me to test but it must have a vLLM compatible quantized version that fits within 160 GB.
Keep in mind that strong performance on LeetCode doesn't automatically reflect real-world coding skills, since everyday programming tasks faced by typical users are usually far less complex.
All questions are recent, with no data leakage involved. So don't come back saying "LeetCode problems are easy for models, this test isn't meaningful". If older LeetCode problems look easy, it's because the model has already seen your test questions during training.
No_Shape_3423@reddit
In my personal tests involving long prompts with a series of instructions, quantization impacted performance and specifically instruction following (IF). 235b Q3KL performed worse for me than Qwen 2.5 70b Q8 and was even beaten by Qwen 3 32b BF16/Q8. There was a measurable drop-off from BF16->Q8 for Qwen3 32b and 30b, although Q8 usually scored well. Taking 70b down to Q4KM? Forget about it. I bet 235b Q8 would crush it here.
JacopoBandoni@reddit
Are these totally new, freshly invented LeetCode questions?
MKU64@reddit
Can you by chance try Qwen 3 30B-A3B? It’s a really good model and I think it would be good to see how well it does compared to these bigger models!
kmouratidis@reddit
^ This. You can run it unquantized too, it would be interesting to see if it is any better than FP8 / AWQ. I think AWQ MoE fixes were recently merged in vLLM too (same version as the Hunyuan fixes).
maxpayne07@reddit
Can you do qwen3 30b a3b please?
EternalOptimister@reddit
Why is everyone ignoring the nemotron? Looks to me like it beats all of the rest?
daank@reddit
I just tried it since reading the article, but it ain't working well for me. The unsloth quantizations seem to get stuck in a thinking loop. It spent 2000+ tokens thinking about writing a sorting algorithm in python before I cut it off.
The difference might be quantization. It would be interesting to see which models degrade most gracefully under quantization and which suffer the most.
kyazoglu@reddit (OP)
certainly very strong. beats qwen3-32b? arguable
Single-Persimmon9439@reddit
qwen3 30b a3b
fast moe model. should be much faster than qwen3 32b
Chromix_@reddit
Yes, faster than 32b, but roughly on par with 14B in terms of capability.
kyazoglu@reddit (OP)
I've just seen the MetaStone-S1-32B model, which looks promising. I started benchmarking it. It'll be here in a couple of hours.
kyazoglu@reddit (OP)
looks like there is not enough time for it today. I'll post it on Wednesday. So far:
Chromix_@reddit
Interesting. The Qwen3 235B model should beat Qwen3 32B in general, despite its slightly lower number of active parameters. It was an INT4 vs. FP8 comparison here, though, so maybe that's why it performed worse in 3 cases and never better. Still, the number of tests doesn't seem that large; maybe running 500 would paint a different picture, especially since running only 4 to 8 generations means the generated code could still be subject to a bad dice roll.
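For a sense of how noisy small samples are, here is the standard unbiased pass@k estimator from the Codex paper (Chen et al., 2021), which estimates the probability that at least one of k samples is correct when c out of n generated samples passed:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    # If fewer than k samples failed, every size-k draw contains a pass.
    if n - c < k:
        return 1.0
    # 1 - P(all k drawn samples are failures), computed exactly.
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# e.g. 1 correct out of 8 total still gives pass_at_k(8, 1, 4) = 0.5
```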
PurpleUpbeat2820@reddit
I don't understand why people keep stating this. IME, the 32b is clearly better than the 235b in practice, and the reason seems obvious to me: the number of active parameters is too small. I've seen this many times, from Mixtral to Llama 4. Just look at how bad the 30b is compared to the 32b.
OMG, for sure. I cannot wait. Qwen2.5-coder:32b has been hands-down the best coding model for me (including frontier models).
Chromix_@reddit
The 235B MoE beats the 32B dense in all benchmarks published by the Qwen team, except for instruction following. It also beats it on the Aider coding leaderboard by quite some margin. Maybe you've tried both under conditions not represented in those benchmarks.
tomz17@reddit
This is the correct answer. A lot of these coding models suffer from quantization (including KV quantization), often utilized by hobbyists to fit a model into VRAM (e.g. in this case OP had to go down to INT4). I would take the official numbers over benchmarks like this any day.
henfiber@reddit
Although not a reasoning model, you could also include Qwen2.5-coder-32b in your tests as a baseline.
Devstral would also be an interesting one.
kyazoglu@reddit (OP)
I used 2.5 Coder for a long time before it was bested by the others. It's a great model for speed and for constructing the backbone of the code, but it fails miserably at complex coding tasks. I have never used Devstral, but it's advertised as an agentic model, so I'd assume it's not a great fit here.
choose_a_guest@reddit
Can you provide the success rate for each model in each question (success count/number of attempts)?
Even for this small number of samples, knowing that a model succeeded 4/4 and the alternatives only succeeded 1/8 would paint a very different picture in this comparison.
kyazoglu@reddit (OP)
Well, I'd have to automate everything to keep track of that kind of detail. For now, I'm doing it manually, but if I find enough time, I'll automate everything and repeat this test in the future with different models.
gamblingapocalypse@reddit
Tiny (ish) and mighty (ish)
FalseMap1582@reddit
It would be really interesting to see how much worse Qwen 3 235b INT4 is compared to Qwen 3 235b FP8/FP16.
a_slay_nub@reddit
We know, it's the recruiters who don't
skyline159@reddit
Tl;dr: Qwen3-32B has the best size-to-performance ratio
AdamDhahabi@reddit
Waiting for Qwen3 coder, they are building it as mentioned here: https://www.youtube.com/watch?v=b0xlsQ_6wUQ&t=985s
WeGoToMars7@reddit
You should've included the runtime percentile these solutions fall into, as reported by LeetCode, as a baseline. I've noticed that for problem 3599. Partition Array to Minimize XOR, despite being accepted, all LLMs provided a very inefficient solution, with a massively long runtime of 12-21 seconds, depending on the model.
This problem has a pretty straightforward O(N^3) DP solution, and 95% of accepted solutions have runtimes below 1 second (see my sample solution here), 10-20x faster than the code these LLMs produced. I don't know how they messed up, but that wouldn't fly at any real coding interview and is a failure.
Most other medium and hard problems also have suspiciously long runtimes despite being accepted, so the real conclusion here is that all these LLMs did a very poor job on novel problems that aren't marked as easy.
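For reference, a sketch of the kind of O(N^3) DP the commenter describes. The problem statement here is reconstructed from memory (split nums into k contiguous non-empty parts so that the maximum XOR over the parts is minimized), so verify it against the actual LeetCode problem before relying on it:

```python
from itertools import accumulate
from operator import xor

def min_max_partition_xor(nums: list[int], k: int) -> int:
    n = len(nums)
    # prefix[i] = XOR of nums[:i], so XOR of nums[l:r] = prefix[r] ^ prefix[l]
    prefix = [0] + list(accumulate(nums, xor))
    INF = float("inf")
    # dp[i] = best achievable max-XOR when nums[:i] is split into j parts;
    # start with j = 0 parts (only the empty prefix is valid).
    dp = [INF] * (n + 1)
    dp[0] = 0
    for _ in range(k):  # add one part per round: O(k * n^2) overall
        ndp = [INF] * (n + 1)
        for i in range(1, n + 1):
            for p in range(i):  # last part covers nums[p:i]
                if dp[p] < INF:
                    ndp[i] = min(ndp[i], max(dp[p], prefix[i] ^ prefix[p]))
        dp = ndp
    return dp[n]
```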
ffpeanut15@reddit
Impressive results from Qwen3 32B
Secure_Reflection409@reddit
What's the tldr here for people who can't see the picture properly?
kyazoglu@reddit (OP)
take a look at my observations