Best coding setup for macbook pro
Posted by leetcode_knight@reddit | LocalLLaMA | View on Reddit | 10 comments
After listening to various perspectives across numerous threads, I’ve encountered a wide range of experimental approaches. I invite you to share your setups here as well, so we can try to identify the absolute best configuration. The best coding setup I’ve seen so far is Qwen 3.5 27B 8-bit + llama.cpp + async KV cache (K=Q8, V=Turboquant—I learned about this from an Alex Zistand video).
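For context, llama.cpp exposes per-cache quantization through the `--cache-type-k` / `--cache-type-v` flags on `llama-server`. A minimal launch sketch of the kind of setup OP describes, with placeholder model path and context size; note that "Turboquant" is not a cache type llama.cpp itself exposes, so `q4_0` is used here as a stand-in for the V cache:

```shell
# Sketch only: model file and context size are placeholders, not OP's exact setup.
# K cache at 8-bit as described in the post; q4_0 stands in for "Turboquant".
# Quantizing the V cache requires flash attention to be enabled (the exact
# flag spelling varies across llama.cpp versions).
llama-server \
  -m ./qwen-27b-q8_0.gguf \
  -c 32768 \
  --cache-type-k q8_0 \
  --cache-type-v q4_0 \
  -fa
```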
Responsible_Buy_7999@reddit
If you make money doing this, just pay Anthropic in the end.
leetcode_knight@reddit (OP)
This is exactly the dependency I want to eliminate, for freedom's sake. Third-party dependency poses a significant risk to a SaaS.
BingpotStudio@reddit
Better to use the most efficient tool available and deal with that later. If you took this approach last year you’d have lost a year of increased productivity.
adamgoodapp@reddit
This, I tried local and hope it gets to the point I can switch fully to it but for professional work nothing beats Opus 4.6 and I don't mind the cost as it makes my work faster.
Responsible_Buy_7999@reddit
There are other providers.
Local coding is a pain-in-the-ass, expensive, flaky hobby. It will get "there", but if you just care about shipping, you pay. You could also host a site off a computer in your basement with a free Cloudflare service. Nobody does.
Enough_Big4191@reddit
depends what u optimize for, best quality, best speed, or least fiddling. i’d honestly start with the setup that is easiest to debug after updates, half the pain on mac isn’t model quality, it’s when one tiny config change breaks ur whole flow.
AurumDaemonHD@reddit
Best u can do is throw apple into the trash bin where it belongs and get a proper GPU.
AurumDaemonHD@reddit
Imagine paying a premium for hardware that u can't repair, that's subpar on AI tasks, and whose ecosystem is a closed-source walled garden.
If you ask me, apple is poorly spent money.
mikedoise@reddit
Alex makes amazing content! I've been watching his stuff for a while.
I have tried using Gemma 4 26B with Claude Code and Ollama on my 48 GB M3 Max MacBook Pro, but it seems to end tasks and exit without output at times. I'd love to know what others are doing.
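One way to narrow down the no-output problem is to query the model through Ollama's HTTP API directly: if that returns text, the issue is in the Claude Code bridge rather than the model or Ollama itself. A minimal sketch, assuming Ollama's default port and a hypothetical model tag:

```shell
# Diagnostic only: "gemma-local" is a hypothetical tag; substitute whatever
# `ollama list` shows for the model you actually pulled.
curl -s http://localhost:11434/api/generate \
  -d '{"model": "gemma-local", "prompt": "Write hello world in Python.", "stream": false}'
```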
xraybies@reddit
https://www.reddit.com/r/LocalLLM/comments/1sf5aqy/how_are_people_using_local_llms_for_coding/