Quick and simple test of various Qwen 3.5 and 3.6 models on a production code base that has been deployed to an enterprise.
Posted by Voxandr@reddit | LocalLLaMA | View on Reddit | 11 comments
The prompt given to each model: "during docker build of docker-compose.watch.yaml I got an error. Fix it"
Step 23/29 : RUN uv add /workspace/app/app-docverter
---> Running in 0b7c1654e880
Bytecode compiled 24281 files in 1.61s
+ app==0.3.8 (from file:///workspace/app)
Bytecode compiled 24281 files in 1.63s
+ app==0.3.8 (from file:///workspace/app)
error: Distribution not found at: file:///workspace/app/app-docverter
Code Structure
Mono-repo with workspaces:
app-unstructured = backend code
app-docverter = document conversion module
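Reconstructed from that description, the layout is presumably something like this (directory names beyond the two modules are guesses, not confirmed by the post):

```
repo-root/
├── app-unstructured/             # backend service; the Dockerfile and compose file live here
│   ├── Dockerfile
│   └── docker-compose.watch.yaml
└── app-docverter/                # document conversion module, referenced as ../app-docverter
```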
A developer added a docker-compose file with a bind mount to ../app-docverter, plus a Dockerfile that runs uv add on app-docverter.
uv add works at runtime, but not at build time: the bind mount only exists once the container is running, so the path is missing while the image builds.
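A sketch of the setup as described (service name and exact paths are assumptions):

```yaml
# docker-compose.watch.yaml: the bind mount only appears in the *running* container
services:
  app:
    build: .                                            # context = app-unstructured/ only
    volumes:
      - ../app-docverter:/workspace/app/app-docverter   # present at runtime, absent at build time
```

```dockerfile
# Dockerfile: this RUN executes during the image build, before any volume exists,
# so /workspace/app/app-docverter is missing and uv add fails
RUN uv add /workspace/app/app-docverter
```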
Qwen-Coder-Next-Apex-I-Quality: Close but Fails
It added a line:
COPY ../app-docverter /workspace/app/app-docverter
But it doesn't understand that everything COPY references must sit inside the Docker build context: ../app-docverter points above the context directory, so the build fails.
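For reference, the rule it missed, sketched against the assumed layout above:

```yaml
# docker-compose.watch.yaml (sketch)
services:
  app:
    build: .   # context = app-unstructured/ only; ../app-docverter sits outside it,
               # so the COPY line the model added can never resolve
```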
Qwen3.6-27B UD-Q6_K: worst outcome
Its suggested change:
# CMD ["litestar","run","--reload","--host","0.0.0.0","--port","8000"]
93-CMD ["sh", "-c", "uv add /workspace/app/app-docverter && exec litestar run --reload --host 0.0.0.0 --port 8000"]
92+# CMD ["litestar","run","--reload","--host","0.0.0.0","--port","8000"]
93+# Note: `uv add` runs at container start (CMD) because the app-docverter volume is mounted at runtime, not build time
94+CMD ["sh", "-c", "test -d /workspace/app/app-docverter && uv add /workspace/app/app-docverter || echo 'warning: app-docverter not mounted, skipping uv add'; exec litestar run --reload --host 0.0.0.0 --port 8000"]
It only adds a warning and a comment, lol. The build-time install is still broken.
unsloth--Qwen3.5-122B-A10B-GGUF-MXFP4_MOE
Same result as Qwen-Coder-Next-Apex-I-Quality
Qwen3.5-122-APEX-I-Quality: works on the first try
It modifies the docker-compose file correctly and adds the dependency in the Dockerfile correctly.
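The winning diff isn't included here, but given the layout above, a plausible version of the fix widens the build context to the repo root so the sibling module becomes copyable (paths and service names are illustrative, not the actual code):

```yaml
# docker-compose.watch.yaml
services:
  app:
    build:
      context: ..                               # repo root instead of app-unstructured/
      dockerfile: app-unstructured/Dockerfile
```

```dockerfile
# Dockerfile: copy the module into the image, then add it, all at build time
COPY app-docverter /workspace/app/app-docverter
RUN uv add /workspace/app/app-docverter
```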
If you want to see more tests, I will do more.
fala13@reddit
I also like the 122B, but after getting the params right I use the 3.6 27B for admin tasks like this. For me the important part was to add '--jinja' and make sure the model doesn't have any additional harness or inference issues to overcome. And getting the right Jinja template for 3.5 was hell.
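For reference, --jinja is llama.cpp's flag for using the chat template embedded in the GGUF. A minimal llama-server launch (model filename, context size, and port are placeholders):

```sh
llama-server -m qwen3.6-27b-ud-q6_k.gguf --jinja -c 32768 --port 8080
```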
Voxandr@reddit (OP)
It's not a tool-calling problem, btw.
Pretend_Engineer5951@reddit
Thank you for pointing out the APEX family of models. I gave Qwen3.5-122-APEX-I-Quality a try yesterday and found it... surprisingly useful and fast. One thing I haven't figured out: is it fine-tuned?
Voxandr@reddit (OP)
It has a different quantization strategy. In my tests, both I-Balanced and I-Quality surpass the UD quants.
Wild_Requirement8902@reddit
Can you run this through an LLM to make it readable and correct all your spelling mistakes? Give us a link to your quants. This is just noise...
Voxandr@reddit (OP)
Done.
Voxandr@reddit (OP)
Sometimes you guys need to appreciate pre-AI text posts.
seamonn@reddit
uhhh... Can you present your findings in a better format?
Voxandr@reddit (OP)
Post will get rejected for AI Slop.
seamonn@reddit
write it better yourself?
Voxandr@reddit (OP)
Don't have time while AI can do that.