What would you use for local coding assist on a "weak" machine (6GB VRAM 32 GB RAM) - light FE coding, no architecture. is QWEN3 good enough?

Posted by vishnoo@reddit | LocalLLaMA | View on Reddit | 4 comments

so as the title says, I'm not an FE eng, but I want to do some light FE work.
I don't need the smartest model, but I do need to get some work done.
I ran out of tokens ($20/month plan) for the week on day 2, so I'm thinking of running something local.
I tried serving Qwen3 with Ollama and connecting Codex to it, but it was clunky at best.
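For reference, the setup I tried looked roughly like this (a sketch from memory — the exact model tag is whatever quantized Qwen3 fits in 6 GB VRAM, and the `/v1` endpoint is Ollama's OpenAI-compatible API):

```shell
# Pull a quantized Qwen3 small enough for 6 GB VRAM
# (tag is an example; check what's available with `ollama list` / the Ollama library)
ollama pull qwen3:8b

# Ollama serves an OpenAI-compatible API on port 11434
ollama serve &

# Point an OpenAI-compatible coding CLI at the local endpoint
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_API_KEY="ollama"   # dummy value; Ollama doesn't check it
```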

I figured I'd ask the experts

It's a local Windows machine. I ran everything under WSL, but Codex then had issues accessing the local directories. Is it better to run it in PowerShell (shudder)?
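On the directory issue: WSL auto-mounts the Windows drives under `/mnt`, so if the project lives on the Windows side it should still be reachable from inside WSL (the path below is a made-up example):

```shell
# A Windows folder like C:\Users\me\proj is visible inside WSL as:
cd /mnt/c/Users/me/proj   # hypothetical path

# wslpath (ships with WSL) converts between the two path styles
wslpath -u 'C:\Users\me\proj'   # Windows path -> WSL path
```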

gemma3:27b (quantized) also sort of fits, but gave worse results.

to sum up
1. WSL vs windows native
2. Codex? (Claude Code blocks local models) opencode?
3. Qwen3? Gemma?