I have never seen an agent so willing to work as Qwen 3.6 27B
Posted by cviperr33@reddit | LocalLLaMA | 14 comments

It just constantly wants to build and execute. I don't mind it at all, I'm actually quite happy. (The "Qwen 3.6-35B" label in opencode is wrong, I just didn't change the name in the settings.)
So I was playing around with it while we were refactoring an old project, and when I started a new session I jokingly implied that its predecessor was killed because it did a "lazy job".
And I noticed that this model in particular (or maybe because of that joke) wouldn't stop building and testing things on its own, so I had to interrupt it multiple times when I noticed it was doing something I hadn't asked it to do.
And on my last pause I saw "They're amused by my eagerness" and I just spat out my drink laughing. It's so funny how they can imitate human emotions and simulate fear or eagerness to work.
So far the results are very impressive: it constantly finds a way to fix broken things on its own, in ways I hadn't even imagined were possible.
Witty_Mycologist_995@reddit
And then Gemma is really lazy.
ComplexType568@reddit
2 sides of a coin
JuniorDeveloper73@reddit
Just tell it "do this and stop."
You always have to stop it, because the mtf wants to build its way out of the PC.
cviperr33@reddit (OP)
haha, they can't build their way out of the PC yet, simply because their context windows are too small. To pull something like that off they'd probably need a few billion tokens.
__JockY__@reddit
Our only chance against the robot overlords is to take them down while they run auto compaction!
Pleasant-Shallot-707@reddit
Don't anthropomorphize these models
Kagemand@reddit
All Qwens seem like this. The 9B 3.5 also never gives up when it hits problems; it keeps trying tools with different parameters over and over. However, I think that might also be a question of using a harness that stops that.
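The harness-side stop the comment alludes to can be as simple as counting repeated tool calls. A minimal sketch (the `RetryGuard` class and its names are illustrative, not from opencode or any specific harness):

```python
from collections import Counter

class RetryGuard:
    """Harness-side guard: reject a tool call once the agent has
    re-issued it with identical parameters too many times."""

    def __init__(self, max_repeats=3):
        self.max_repeats = max_repeats
        self.seen = Counter()

    def allow(self, tool_name, params):
        # Key on the tool name plus a canonical form of its arguments,
        # so only truly identical retries are counted together.
        key = (tool_name, tuple(sorted(params.items())))
        self.seen[key] += 1
        return self.seen[key] <= self.max_repeats

guard = RetryGuard(max_repeats=3)
for _ in range(5):
    ok = guard.allow("read_file", {"path": "config.toml"})
print(ok)  # prints False: the 4th and 5th identical calls are rejected
```

Varying the parameters resets nothing; only exact repeats count, which is the loop pattern described above.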
onyxlabyrinth1979@reddit
Yeah, I've seen similar behavior. It's great until it starts acting on assumptions you didn't explicitly approve. Feels powerful, but also a bit risky if you're integrating it into anything real. I'd be curious: are you letting it touch anything persistent, or just sandboxed runs? That's usually where things get interesting fast.
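The "sandboxed runs" idea above can be approximated by giving the agent a throwaway copy of the project. A minimal sketch (the `run_sandboxed` helper is hypothetical; real isolation would need containers or OS-level sandboxing, not just a temp directory):

```python
import shutil
import subprocess
import tempfile
from pathlib import Path

def run_sandboxed(project_dir, command):
    """Run an agent-issued command against a disposable copy of the
    project, so the persistent checkout is never touched."""
    with tempfile.TemporaryDirectory() as tmp:
        workdir = Path(tmp) / "project"
        shutil.copytree(project_dir, workdir)
        # Whatever the command writes lands in the copy, which is
        # deleted when the context manager exits.
        return subprocess.run(command, cwd=workdir,
                              capture_output=True, text=True, timeout=60)
```

This only guards the filesystem path you copied; network access and anything outside the temp directory are still wide open, which is why it's a sketch rather than a safety boundary.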
Big_Mix_4044@reddit
I also noticed this behavior, but came to the conclusion that it's a skill issue on my part, as in I need to be more specific with my prompts. I'd take an eager agent over a lazy hack job any day of the week.
deejeycris@reddit
The prompt definitely needs to be more detailed than with Claude models. This is why I found Qwen models to work great as executors, while the plan is better produced by other models if a human isn't writing it already.
notlesh@reddit
Agreed. I've been using it quite a bit for a few days, and it works until it goes off down a rabbit hole and either doesn't feel like it needs direction or thinks it already got direction. The latter is pretty apparent from its "thinking" output, which often says things like "the user said X" when I can see that X was actually just its own internal thinking dialog.
So it tends to either get the right idea and do a great job with it, or send itself down its own loosely correlated rabbit hole.
This is driving me nuts, because it's so close to being exceptional, but this one thing is enough to warrant all the constraints.
HistorianPotential48@reddit
the chinese worker soul distilled into 27B, free in your hands.
Darth_Candy@reddit
Seeing this is awesome compared to Anthropic's "philosopher" saying that Claude has anxiety (as if it's a feature and not a bug, lol)
Kodix@reddit
35B is the same. I think this trait might be partially to "blame" for their effectiveness as agents. It's really nice and leads to impressive results most of the time.