What it feels like to have Qwen 3.6 or Gemma 4 running locally

Posted by GodComplecs@reddit | LocalLLaMA | View on Reddit | 40 comments


Well, or pretty close to it; they are excellent workhorses. I run them in real work scenarios, doing some of the work I used to do myself as a skilled expert in my field, billing $200 an hour. Ofc the key is building a system around their weaknesses, and I already had LLM systems doing expert work years ago when the first ones came out (shout out Nous Hermes 2 Mistral!).
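The post doesn't spell out what "building a system around their weaknesses" looks like, but a common pattern is validating the model's structured output and retrying on garbage instead of trusting a single completion. A minimal sketch, where `call_model` is a hypothetical stand-in for however you invoke the local model (llama.cpp, Ollama, etc.) and the invoice schema is made up for illustration:

```python
import json

def call_model(prompt, attempt):
    # Hypothetical stub standing in for a local Qwen/Gemma call.
    # Simulates a flaky model: first attempt returns malformed JSON,
    # second attempt returns a valid structured answer.
    if attempt == 0:
        return "Sure! Here is the data: {broken"
    return '{"client": "Acme", "hours": 3, "rate": 200}'

def validated_extract(prompt, max_retries=3):
    """Retry until the model emits JSON containing the fields we require."""
    required = {"client", "hours", "rate"}
    for attempt in range(max_retries):
        raw = call_model(prompt, attempt)
        try:
            data = json.loads(raw)
        except ValueError:
            continue  # malformed output: retry rather than trust it
        if required <= data.keys():
            return data
    raise RuntimeError("model never produced valid output")

result = validated_extract("Extract the invoice fields from this email: ...")
print(result)
```

The point is that the loop, not the model, is what makes the output dependable: a 27B local model that's right most of the time becomes usable for expert-grade work once every answer is checked against a schema before anything downstream sees it.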

But yeah, pretty neat, especially if you're in the 3090 club: you can have 3.6 27B fly on a single 3090.