Is it stupid to buy a 128gb MacBook Pro M5 Max if I don’t really know what I’m doing?
Posted by A_Wild_Entei@reddit | LocalLLaMA | View on Reddit | 160 comments
Just based on the title, the answer is yes, but I want to double check.
I’m learning to code still but want to become a hobbyist/tinkerer. I have a gaming laptop running Windows that I’ve done a little bit of AI stuff with, but it’s a few years old and has minor issues.
I’ve been working a second job to save up fun money, and I can nearly afford the new Mac if I really wanted it. From what I’ve gathered, it can’t run the top models and will be somewhat slower than a dedicated Nvidia GPU since it’s Apple silicon.
I was planning on buying an M5 Pro anyway, so I’m wondering if I should just splurge and get the M5 Max to avoid having any regrets.
Some points in favor: RAM prices are just going up, local models are getting more capable, I needed a Mac anyway, privacy is really important to me, and it will hopefully force me to make use of my purchase out of guilt.
Some points against: it’s probably overkill for what I need, it probably won’t be powerful enough anyway, and I’ve never had a Mac and might hate it (but Windows is a living hell anyway lately).
Please validate me or tell me I’m stupid.
mr_zerolith@reddit
Yes. You need something with way better thermals than a laptop. And also, the Mac sheds heat to the case, so this is going to actively discourage you from running the model and coding at the same time.
Consider that an M5 Max will have a bit less than half the performance of a 5090, which isn't all that powerful to begin with. It has plenty of RAM but not the bandwidth to match. To get acceptable speeds, you'll have to run smaller models, or exceptionally efficient ones like GPT OSS 120b.
I'm not a particularly patient person, so this slow speed would annoy me. I'd have a hard time recommending it.
Dramatic_Machine8693@reddit
A closer comparison would be the RTX PRO 6000 with 96GB of RAM. It has a bit more performance than a 5090 and 3x the RAM. An M5 Max 128GB has half the performance but 4x the RAM, and comes in at about half the price of the PRO 6000. I wouldn't get a laptop, though; I'll wait for the Mac Studio desktop.
mr_zerolith@reddit
RTX PRO 6000 has 15% more performance than a 5090.
( I run a 5090 + RTX PRO 6000 to run big models )
For reference i can run Step 3.5 Flash 197b 4bit at 125 tok/sec on first prompt, 43 tok/sec at the end of the 96k context window.
I would be unhappy with half of that speed plus ~1/5 the prompt processing speed. But I only use human-in-the-loop processes, so reducing latency is very important to me.
The problem is that parallelizing Macs to run bigger models ends up not contributing much to speed, which is why I paid extra for Nvidia.
Eyelbee@reddit
Pay 20 dollars and get a subscription.
Dramatic_Machine8693@reddit
If you use Claude or other agent tools, a $20 subscription will not last very long. Claude uses something like 200k tokens just to read through one of our application's codebases, never mind actually doing anything with the code or trying to find the piece that generates an error. I blew through an Abacus Pro subscription's limits within a week of light usage.
Altruistic_Tension41@reddit
Listen I don’t mean to encourage bad spending habits but I got an M4 Max 128GB and it’s saved low 4 figures in terms of what it would have cost via Anthropic’s API in like 2 months of heavy usage plus has let me keep my privacy. Have had it churning in the background going through various project ideas from my notes with omlx and GLM 4.7 Flash, and at this point I’m over 550 million tokens processed (5% output tokens, 95% cached input tokens). You could realistically do this with just an M5 Max 64GB but you’ll likely chew through your SSD faster with the cache getting cycled over and over again… I’m going to sell my MBP and get the M4 Ultra or M5 Max when it’s available in the Mac Studio form factor so I’d say it’s worth it 👍
power97992@reddit
You could've saved even more money and gotten better quality outputs if you'd just gotten a Claude Max or discounted Gemini Ultra sub with Antigravity... Macs are slow and the models are small.
Dramatic_Machine8693@reddit
Gemini is not really good at dealing with build and deployment errors, as far as I've experienced. Copilot is even crappier, Grok is fine, and the best is Opus 4.6. But Opus is about the most expensive of the frontier models. I actually get pretty good results with Qwen 3.5 with Opus 4.6 distilled; using it on my 3090 saves me a lot of money. I have a company-provided GitHub Copilot subscription and my own Abacus LLM subscription, and I'd blow through them within a few days if I threw everything at Opus. Using a local model for the lower-end stuff, then Opus for the most difficult issues, gets me through most situations.
fasteddie7@reddit
You could save a little and get a previous gen model and be just as happy, then once you know what you need out of a machine, sell it and get newer. The last three chips are pretty close in performance. This might help: https://youtu.be/a2F-Ln9JFcU?si=pYbq78Rext5eW3MZ
RandomCSThrowaway01@reddit
My personal take is - you buy M5 Max 128GB saying you won't have regrets and a month later you realize that top open models are 256GB to run at Q4 anyway. And you still need a Claude subscription for Opus grade model that doesn't even have an equivalent in open source world.
If you are unsure whether you genuinely need it - I would stick to subscriptions for now, they are a LOT more cost efficient and flexible.
Go for multi thousand dollar expenses only once you have clarified what exactly you really need.
In fact, go rent a decent server for a week first with a big happy GPU like Blackwell 6000 and actually try out what it feels like (just keep in mind it will have more than twice the token generation speed compared to M5 Max), play with different models, set up any agentic mode you needed etc. If you are 100% satisfied with the results - well, NOW you can consider buying that M5 Max.
But don't do it on a whim because you can very easily come to regret this purchase.
ImpressiveHair3798@reddit
So you're basically saying a 128GB M5 Max will be useless for local AI, even though Apple markets it for exactly that?
King_Tofu@reddit
Hey! I'm like the OP. Do you have suggestions for services for renting a Blackwell 6000? I was going to ask gemini how to find and rent a cloud service but would like the actual expert insight because I actually don't know what I'm doing haha and am using this to learn
RandomCSThrowaway01@reddit
Step 1 - google "rtx 6000 blackwell hosting". There will be a few options.
Step 2 - make sure the option you're choosing doesn't have unreasonable setup fees or minimum commitments. Runpod is pretty popular and comes with a Linux image that already has all the drivers included (runpod/pytorch:1.0.2-cu1281-torch280-ubuntu2404). They also have the MI300X, which is 192GB VRAM for $2/hour, so potentially a better deal than the Blackwell 6000 at $1.7/hour (but then you're dealing with AMD, which is not as beginner friendly). Make sure to grab at least 500GB of storage as well, since LLMs are pretty fat.
Step 3 - now, the question is - do you know what SSH is and how to manage a Linux instance? Cuz you will get terminal access but not a full-fledged graphical environment. General workflow (something you can ask Gemini too) - first you update every single package, then you set up fail2ban, then you set up a firewall (UFW, enabling only HTTP/HTTPS and SSH).
Step 4 - now you need to deploy your LLM server itself. For instance here's a decent guide: https://www.jeremymorgan.com/blog/generative-ai/local-llm-ubuntu/
Step 5 - and now you can access your server from anywhere in the world. Just a reminder - it's a pretty costly beast.
Specifically for Runpod I see they also do offer Serverless vLLM. In which case you choose a GPU and only pay per request + storage and can skip most of the configuration. Might be better to play around, assuming 5s per request on average and 1000 requests a day it comes to a total of $166/month (+ storage so probably around 200), you also only pay for the traffic you use.
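The arithmetic behind that serverless estimate can be sketched quickly. The hourly rate below is my own assumption, chosen to reproduce the quoted $166 figure, not a published Runpod price:

```python
# Back-of-envelope serverless cost: you pay only for seconds of GPU time used.
def monthly_serverless_cost(sec_per_request, requests_per_day, usd_per_hour, days=30):
    """Compute-only cost; storage and network traffic are billed separately."""
    gpu_hours = sec_per_request * requests_per_day * days / 3600
    return gpu_hours * usd_per_hour

# 5 s/request, 1000 requests/day, ~$4/hr serverless GPU rate (assumed)
print(f"~${monthly_serverless_cost(5, 1000, 4.0):.0f}/month")  # ~$167/month
```

That's roughly 42 GPU-hours a month; storage (the "probably around 200" above) comes on top.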
ZachCope@reddit
Runpod has a Jupyter lab option making it easy to work with files etc
King_Tofu@reddit
omg, thank you so much for taking the time to write me that map! I can fill in the gaps with Gemini and Google, and feel confident I won't be stepping into a hallucinated hole haha. Love that Runpod idea, I think I'll start off with that.
Ok-Internal9317@reddit
If you don't know what you are doing, then don't rent the PRO 6000; rent a 4070 Ti first, then you can learn the workflow.
King_Tofu@reddit
thanks!
Ok_Try_877@reddit
I used to buy condoms when I was a virgin lol
nh_t@reddit
lol
AdultContemporaneous@reddit
OP, listen to virgin condom guy. It's better to be prepared than spending your life filled with regret.
Disastrous_Room_927@reddit
Condoms are like a gun. I’d rather have one and not need it than the other way around.
Naud1993@reddit
Not worth it for the 0.01% chance of me actually needing it.
Ok-Internal9317@reddit
Good life advice actually
Wallaboi-@reddit
This is brilliant 😂
King_Tofu@reddit
I laughed out loud in a cafe and I think people are looking at me funny. Thank you for the joke haha. And, yes, OP, I also did this. No regrets, for what it's worth
shengjunwang@reddit
Just buy it, don't use it
cell-on-a-plane@reddit
Practice makes perfect.
Transhuman-A@reddit
Did it force you to get sex out of guilt?
sometimes_angery@reddit
Depends. Is your monthly income around 1k? Stupid. Is it 10k? Not stupid.
Naud1993@reddit
My monthly disposable income is around $1000 and I feel like I can buy whatever I want, yet I don't because there are too many choices, so I'm stuck with my 12 year old half broken laptop.
Inevitable-Plantain5@reddit
This is the answer lol.
Simultaneously, if this is seen as a toy vs an investment that matters too. I would probably invest more in a desktop type of tool (mac studio) than a laptop to start but with mac the large vram and much better prompt processing on the m5 make it hard for me to tell you to wait til fall for the m5 studio.
There are lots of people who will say to always invest less at the beginning to see if you like it. However, if you never get value out of what you're doing, you're probably not going to go deeper. A crappy guitar that doesn't hold its tuning will constantly sound terrible, and you'll quit if you don't upgrade it. Likewise with less useful small local models on weaker hardware. Also, when a single 80GB H100 is like $30-50k, then $5k is comparatively entry level, so "investing a lot" depends on what you're comparing to.
You're lucky to be coming in at a time when gaming gpus can run a model like qwen 3.5 35b q6kxl on my 5090 and do real work. Honestly the 27b of that model family performs better than the 122b but it needs a fast gpu. 5090 does 50t/s with llama.cpp and I'm guessing vllm would be even better. But generally, that level of performance comes from 120b or higher class of sparse models.
So, you don't need this level of hardware for some local benefit. At the same time, I regret buying smaller hardware in the beginning. I could have several RTX PRO 6000s at this point, but I gradually upgraded, so now I have decent high-end consumer-ish hardware and entry/mid-level enterprise stuff that I see as a drag, and I wish I had just gone harder in the beginning.
Naud1993@reddit
You're doubling the price of an already expensive MacBook Pro just to reach the performance of a $300 mid-range GPU and get overpriced RAM and storage. One could argue it's a low end GPU since the fastest one (RTX 5090) is 5 times faster. However, even the base M5 chip has a faster single core CPU than the fastest CPU any other company has ever made.
Might as well buy a cheaper MacBook and a fast desktop PC with Windows or Linux on it for the same price.
SolFlorus@reddit
If you are working a second job, then yes. It’s a poor financial decision and you likely have more pressing needs.
A_Wild_Entei@reddit (OP)
The second job is purely for fun money so that I can save up from my full-time gig, but yes, I could be doing better financially.
NFTArtist@reddit
you could buy a couple 1000 toilet rolls and be set for life
Important_Coach9717@reddit
And they will last longer than a Mac and keep their performance
gojukebox@reddit
Oh please 🙄 Macs last decades. I have two Mac minis from 2014 and a MacBook Pro from 2011 still going strong.
whoisraiden@reddit
Can you run an LLM with it? Cause toilet paper will wipe your ass the same way at any moment in history.
Wooden-Duck9918@reddit
Unlike LLMs, my ass is not demanding more toilet paper every year, or changing the preferred texture of toilet paper
mcglothi@reddit
Mine is. I gotta lay off the cheetos.
StewPorkRice@reddit
You don't need it.
What do you need to run local models for? You don't even know what you're doing.
catplusplusok@reddit
I think everyone, possibly including yourself, is missing this part: would you have motivation to work the second job if it was just to save for retirement? If not, you decide what sounds more fun. A laptop you can tinker with while improving your skills, or a trip abroad that can also expand your mind? Make a call and spend your money accordingly.
socklessgoat@reddit
M1 Air exists for these things.
Gargle-Loaf-Spunk@reddit
Learn on rental compute platforms like vast, runpod etc. You don’t need a $7K computer for learning.
thrownawaymane@reddit
Even Claude Max would be better—there’s nothing private about learning (well, mostly) and that’s local’s big advantage imo
username_taken4651@reddit
Isn't it kind of unnecessary for OP to pay $100/$200 a month if they're just learning to code and have no idea what they're doing yet? I would recommend that OP start with free-tier cloud LLMs, then move up from there if needed. OP also states they want to be a 'hobbyist/tinkerer', which is why they asked about local LLMs.
I have seen multiple posts recommend Claude Max right out of the gate, and tbh it's a bit strange in this scenario.
thrownawaymane@reddit
At least in my case I said it would be better than blowing $7k. I think the ideal would be paying $20 a month to have some skin in the game, getting an O'Reilly subscription, and getting down to it.
Inevitable-Plantain5@reddit
Depends, are you learning how to build apps with inference or learning how to build inference based systems? I think the cloud providers want you to think there are no other viable options and I don't believe that's true. I also think centralizing the inference and the implementation in the cloud sets a path for basically all business to be taken over by the cloud provider.
I don't want the government, for example, being dependent on cloud-provided solutions, and as a business owner I don't want to be reliant on a cloud solution when it's clear the business model they currently offer will continue to rapidly worsen for consumers, as they went from $20 unlimited plans to $200 subsidized seats with growing restrictions for the people who actually use the tools.
As a consumer I don't want competition continuing to dwindle to a few megacorps who are way too friendly and work together way too much.
I'm not against what you're saying, but I worry there's a short-sighted perspective of what makes sense just because a $20 subscription is cheaper than a GPU. $20 a month puts cloud providers in debt; that isn't where things will end for anyone actually using this stuff significantly. Saying you know how to run Claude isn't a market differentiator for a job, especially since that platform is extending its native abilities. Working backward from Claude/ChatGPT to local models usually doesn't happen.
We should be encouraging people to understand more than asking claude. This is an existential threat to sovereignty and the illusion of human value that drives our world.
StewPorkRice@reddit
Nobody is telling everyone to go cloud...
This dude is working a second job to buy a $7k laptop to tinker with local models, and he doesn't even know how to code yet.
Why do you think people telling him to just spend a few months learning on a cheaper platform disregards local?
insulaTropicalis@reddit
It has 460 GB/s shared memory bandwidth, so generation is going to be decent to good. I have read that the M5 is much faster at prompt processing, which was the Achilles' heel of Apple systems. It's probably the first portable system that's really decent at local AI, so if you like the Apple ecosystem, go for it.
Accomplished_Ad9530@reddit
That’s the bandwidth of the M5 Pro; the Max is 614 GB/s. But, yeah, that and tensor acceleration for 3x-4x prompt processing makes it a very capable machine.
I wouldn’t worry about the ecosystem for dev, though, since most things Linux build for macOS, which is also *nix. The only real limitation is the lack of eGPU support, and while people have come up with workarounds, I’m not sure how effective they are.
power97992@reddit
No, the M5 Pro has 307GB/s of bandwidth, the binned M5 Max has 460GB/s, and the full 40-core M5 Max has 614GB/s... but you are right about the 128GB version having 614GB/s.
Accomplished_Ad9530@reddit
Ah, right, forgot about the binned Max, my bad
Altruistic_Tension41@reddit
The actual hardware isn't bad; it's the tooling that is bad. Nothing really supports MLX + hybrid model caching correctly, and that gets misconstrued as the hardware itself being slow at PP (since the full prompt has to be recomputed on each query), when it's about on par with a 3060 when implemented correctly.
oblivic90@reddit
Depends what you want to develop. Want to just follow some basic coding courses? Get a MacBook Air with 16GB RAM and a $20 Claude sub if you want AI to guide you. By the time you feel the need to upgrade, you'll have much better options, or you'll be able to buy the same M5 Max for way cheaper.
Adcero_app@reddit
128GB unlocks running 70B models without quantizing down hard. 64GB can run 70B at Q4 and it's genuinely fine, fast enough to not feel slow. 128GB lets you run Q6/Q8 or have two large models loaded at once.
for a beginner tinkerer, that distinction won't matter much for a while. you'll hit the learning curve well before you hit the RAM ceiling.
the only thing worth emphasizing: RAM is the one spec you can't upgrade later. if you can afford it without it hurting, buying more RAM is the right call on Macs. everything else can be worked around.
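A rough way to see where the 64GB/128GB line falls: weight footprint is roughly parameter count times bits per weight, with KV cache, context, and OS overhead on top. This is an illustrative sketch only; the bits-per-weight values are approximate llama.cpp-style figures:

```python
# Approximate quantized weight footprint (ignores KV cache and runtime overhead).
def model_size_gb(params_billion, bits_per_weight):
    # params_billion * 1e9 weights * bpw bits / 8 bits-per-byte, expressed in GB
    return params_billion * bits_per_weight / 8

for quant, bpw in [("Q4_K_M", 4.8), ("Q6_K", 6.6), ("Q8_0", 8.5)]:
    print(f"70B @ {quant}: ~{model_size_gb(70, bpw):.0f} GB")
```

Q4 lands around 42GB of weights, which fits a 64GB machine; Q6/Q8 push toward 58-74GB before overhead, which is where 128GB starts paying off.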
FullOf_Bad_Ideas@reddit
Nobody is running 70B dense models here anymore. Certainly not on Macs.
Cool_Slice1313@reddit
Can you tell me why? I'm just curious since I'm thinking of doing exactly this.
FullOf_Bad_Ideas@reddit
New releases in that size range are 95%+ MoE's. And this community usually quickly jumps onto new models. So llama 3.3 70B (released in December 2024) and Qwen 2.5 72B (released in September 2024) are no longer being inferenced often among people here. 70B dense models are seen as old and slow, since they run slow on most consumer hardware and this size has been almost abandoned.
Obviously many enterprise apps are deployed on older models and they still run fine; those aren't bad models, and personally I've liked some coding-focused Qwen 2.5 72B finetunes as well as Devstral 2 123B. But Macs are lean-compute devices, where a dense 70B runs slower than a 200B-A10B MoE would.
Dense 70B on lean compute device usually means 10-100 t/s PP and 2-8 t/s TG and it gets even lower at high contexts.
Cool_Slice1313@reddit
Thank you for the detailed explanation.
Adcero_app@reddit
plenty of people in this sub run Llama 3.3 70B and Qwen 72B on M-series Macs regularly. the question is what quant level you're comfortable with, and 128GB gives you more headroom there. the point about RAM being non-upgradeable stands regardless of which models are popular right now.
ea_man@reddit
> I’m learning to code still but want to become a hobbyist/tinkerer.
OMG this is like the woodworking sub's worst nightmare: I dunno what I'm doing but I want to spend thousands on power tools that I don't know how to use so I'll be *good*.
Man, go buy a good keyboard and a big monitor, and maybe join a computer class. You can pay $5 a month to code with AI assist now and learn the ropes better than on a crimped local model.
You don't even know what you are supposed to wanna know.
xienze@reddit
I'm gonna go one step further. If you're really learning how to program, don't use an LLM at all! You're going to be tempted to lean heavily on it to be the easy button that solves all your problems when you meet the slightest bit of resistance. You won't develop good critical thinking and problem solving skills starting out this way.
ea_man@reddit
Dunno, explain code / suggest better solution are nice option to have when learning.
BringMeTheBoreWorms@reddit
Yep that’s about right. If you’re learning to code just use a subscription. If you want to learn how to run and setup local models because there’s crap you want to do then get a beefy machine to play with
the__storm@reddit
I'd say if you're learning to code, don't even use a subscription - go full manual (maybe copy-paste into a free web chat every once in a while if you need help with a bug).
techno156@reddit
They also don't know if there might be a change in the future that might make that kind of thing obsolete. In that case, unless they have a different reason to use that much memory, no point getting that much.
Important_Coach9717@reddit
But if he buys the Mac he can run a local model to tell him that!
RedTheRobot@reddit
They could also just rent a VM tinker with it for a month or two and then decide if they truly want to dive in the deep end.
And-Bee@reddit
The woodworking analogy makes sense to me. As someone who's deep into tech and loves all the things a 128GB M5 could bring, I was about to tell him "yeah go for it, it will be great." But as someone who wants to build my first tuned subwoofer enclosure, I am hesitant to buy all the fancy things that would make the job easier, while someone who loves woodworking would just tell me to go for it. Haha.
ea_man@reddit
Yeah, the point is that for your first project a $30 saw and a ruler are better than a $1K fancy table saw, but boys gonna be boys and buy expensive toys for reasons.
1800-5-PP-DOO-DOO@reddit
From what I remember, the Pro is actually better than the Max for running LLMs
Pleasant-Shallot-707@reddit
That’s completely wrong
1800-5-PP-DOO-DOO@reddit
You had me look it up, yep wrong.
baptizedbycobalt@reddit
It’s extreme overkill for anything but running the current models locally. I would personally get something more entry level if you’re learning to code and want a Mac.
I’m a professional developer of 20+ years and my personal machine is a MacBook Air M4 w/32GB ram. For most development work it’s completely sufficient, even runs kubernetes well.
I’ve run some smaller LLMs on it without issue, but I rely on the cloud for heavy lifting.
4baobao@reddit
It's stupid to buy a Mac for any reason tbh; get something that's not overpriced instead
metmelo@reddit
128GB Macs start at $3.5k, Strix Halo goes for $3k, and the cheapest DGX Spark is $3.5k too. Pretty much on par if you consider the resale value.
Real_Ebb_7417@reddit
Btw, I envy you xD. $3.5k for a Mac with 128GB? In Europe you need to pay like $6k for it.
metmelo@reddit
My brother I live in BRAZIL out of all places. There are loopholes though.
Real_Ebb_7417@reddit
Ok I just checked. They DO have unified RAM, but Mac will still be MUCH faster, especially with bigger models, where memory bandwidth matters much more than compute power.
- MacBook with M5 Max -> 614 GB/s
- Strix Halo -> 256 GB/s
- DGX Spark -> 273 GB/s
So... they are a bit cheaper than the MacBook, but not by that much, and the MacBook is still a way better choice for running models locally. IMO, for this purpose the Mac has a better price/quality ratio than the DGX and Strix.
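The reason bandwidth dominates token generation: each new token requires reading every active weight from memory once, so bandwidth divided by in-memory model size gives a hard ceiling on tokens/sec (real throughput lands below it). A sketch with the figures above, assuming a ~42GB model (roughly a 70B at Q4):

```python
# Upper bound on token generation: one full pass over the weights per token.
def peak_tokens_per_sec(bandwidth_gb_s, model_gb):
    return bandwidth_gb_s / model_gb

MODEL_GB = 42  # assumed in-memory model size
for machine, bw in [("M5 Max", 614), ("Strix Halo", 256), ("DGX Spark", 273)]:
    print(f"{machine}: <= {peak_tokens_per_sec(bw, MODEL_GB):.1f} tok/s")
```

For MoE models only the active parameters are read per token, which is why a 200B-A10B MoE can generate faster than a dense 70B on the same hardware.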
metmelo@reddit
The M5 starts from a higher price, but yeah, they are pretty good.
Real_Ebb_7417@reddit
Does it have unified RAM like MacBooks?
A_Wild_Entei@reddit (OP)
Windows (and computers that run Windows) is so fucking dogshit for the money that I can’t justify buying these laptops anymore.
Investolas@reddit
Mac offers a controlled hardware and OS environment making development for Mac much easier.
Real_Ebb_7417@reddit
I mean, Macs are generally overpriced, but how much will it cost you to build a local machine with 128GB of VRAM? Definitely more. Yeah, I know it's not equal, and models will run significantly faster on 128GB of GPU VRAM than on a Mac. But on a Mac it will still be a reasonable speed, incomparably faster than on a classic PC with 16 or 32GB of VRAM and 96GB of RAM. (Talking about big models that would need offloading to RAM, of course.)
buecker02@reddit
Not stupid yet, but you have to learn how to do the math and a cost analysis. In no sane world can someone at your financial level justify a several-thousand-dollar depreciating asset. Use cloud-based subscriptions. Buy used. Whatever.
InfraScaler@reddit
It would be stupid if you needed that money for something else, and even so, once you have the machine, you can only learn and learn and learn and play Cyberpunk 2077 and learn and learn.
rduito@reddit
Get a used M1 16gb MacBook.
These are ~$400, will let you find out if you like it and run some tiny models, and have great resale value. (And if I'd bought a M5 max 128gb, I'd keep the M1 for risky travel.)
rduito@reddit
PS: M1 MacBook with 16gb is great for lots of purposes, and faster + longer battery than some new pc laptops.
WestMatter@reddit
I know this is the local LLM forum, but my suggestion is that if you're learning to code, I'd recommend you to get Claude Max plan instead. As much as I wish the local coding agents were good enough, I haven't been able to get good enough results and it ends up taking way too much time. With Claude Max I get solid results most of the time, which saves me so much time compared to when I've had to troubleshoot the output from a local LLM.
Competitive_Knee9890@reddit
Just install Linux on your current machine and turn it into a server for local models. Your goal is learning, you don’t need huge models, you need to focus on knowing what you’re doing anyways, and you can do that on a low vram GPU with tiny models, and learn a thing or two about infrastructure in the meantime, which won’t hurt.
szansky@reddit
If you still don't know what you need it for, then don't buy the top tier for insane money; you need learning time way more than 128GB of RAM.
Hanselltc@reddit
grab a idk 36gb m5 mba and test the waters first lol you might not like it that much, plus you can grab the mac studio later and probably end up spending a similar amount of money total
guesdo@reddit
Just pay a subscription until you know what you are doing.
noni2live@reddit
Dude, I bought a $4500 M4 Max under similar circumstances as you. I didn't need something this powerful, but I had extra money and said "why not.."
I've only used it for internet browsing, YouTube, and Netflix. I did run a local model through Ollama once and played Factorio for a bit. To be honest, it's mostly for gooning on the go. lol
But, I know if I need to do anything cool with it, it has the capability to do so.
A_Wild_Entei@reddit (OP)
I would definitely end up like this. Thank you for your sacrifice
krilleractual@reddit
Similar situation. Honestly its nice to be able to edit 4k video without a stutter, open up unreal engine and make anything I want, or run open source models for stable diffusion and LLMs.
The way I see it, people spend thousands of dollars on hobbies all the time. Does everybody have to be a top level competitor to enjoy their gear? No.
Cameras. Guns. Golf Clubs. Bicycles. You name it, money can be spent on it irrespective of skill, and thats not a good or bad thing.
What definitely sucks is spending less than you could have and then missing out because of it.
Pleasant-Shallot-707@reddit
Mine is coming in April.
Annual_Award1260@reddit
Just buy the base model 14” m5. Could upgrade to 2TB if needed.
16GB ram is perfectly fine for all coding tasks. You can still tinker with training financial AI models for the stock market etc. running large models is essentially impossible on a laptop.
I’m a seasoned programmer and I will keep my m1 pro 14” till it dies.
BlobbyMcBlobber@reddit
Yes
lambdawaves@reddit
This makes no sense financially. Get a 24GB or 32GB refurbished air
XCherryCokeO@reddit
First buy a Pi 5, and upgrade if required
Protopia@reddit
Yes
catplusplusok@reddit
If your second job is to save up fun money, and tinkering with AI is fun for you, obviously go for it! Someone wants a muscle car, you want local models and motivation to make use of it to improve yourself is a real thing.
opi098514@reddit
Just get the m4 Mac mini. Much less expensive.
Southern_Sun_2106@reddit
I bought an M3 Max, it was a 'dumb' decision (trust me, nobody on this sub except for me, will tell you to buy it, but it doesn't mean you should not) but I had so much fun tinkering with it the last, what, like 2 or 3 years, that I pulled the trigger on M5 Max the **day they came out** no hesitation. The stories about PP being awesome with this one are true.
If that's a possibility for you - get it, run it for 2 weeks - keep it if it works for your tinkering case; or, return it, knowing for sure.
The people here are anti-Mac in their majority (some are triggered by the word Apple, yet they never get tired of bitching about high Nvidia prices, energy bills, and noise either; more competition is always good for the consumer, something they fail to grasp). They don't know your specific use cases or how those might evolve, and they don't know the future. You know your situation and yourself best; take it for a spin and make up your mind. Macs also keep their resale value. Good luck!
Living_Commercial_10@reddit
Mine is being delivered this Thursday. Can’t wait to see how my local ai app performs on it
Battleagainstentropy@reddit
What else do you need the money for? If it’s rent money then don’t spend it on something like this without a really good plan. If it was otherwise going to draft kings then yeah, even if there’s only a 20% chance you use it to learn stuff (and a 1% chance you do something really really cool with it), buy the Mac.
fatso784@reddit
You’re stupid yes. I work in AI and only have a 64GB. Works pretty well for my purposes, which include running some “smaller” models. Very performant too. However it cost a god awful amount, and I only justified it because it’s my job. Even then, most of the time I just use cloud subscriptions anyway. So, go with a smaller model if you’re set on buying one.
synn89@reddit
A Mac as a coder will be nice. Hmm, $3,849 for an M5 Pro with 64GB of RAM/2TB SSD vs $5,549 for a 128GB M5 Max with 2TB storage. So, a $1,700 price difference for mostly LLM work.
I'm not sure the performance upgrade on the Max will really be useful for coding. I feel like you mostly want RAM for a lot of Docker containers and heavy code tooling.
I will say, the 64GB to 128GB jump matters for LLMs. It didn't in the dense model days: a 70B dense was about all you could run with any sort of speed, and 64GB of Mac RAM was perfect for that. But today's MoE Qwen3.5-122B-A10B can do a lot at Q6, and even 200Bs at Q3 aren't half bad. It's a big jump from the 35B model range.
freddycheeba@reddit
Well, if you can afford it, a Mac Studio with as much memory as you can get is going to perform much better and can run bigger models.
doxploxx@reddit
Yes it is stupid. You are procrastinating on the hard stuff (i.e., actually learning) by looking into specs and prepping for a level of utilization and expertise you are unlikely to ever reach.
rorowhat@reddit
Yes, buy a strix halo instead. Cheaper and much more versatile.
boutell@reddit
I've been building my own chatbot around local LLMs, and that's fun. But I'm using Claude Code to build it, via my Claude Max subscription, because that's smart enough to be highly practical. I think you'll find the same. There's nothing you're going to learn by self-hosting that you can't learn by working with cloud-hosted models.
piedamon@reddit
It’s a trap. 128 is still way too small for the bigger models.
It’s far cheaper to get the 16, 24, or 32GB versions if you want a nice (i.e. overkill) toy for $2-3k. You can still run a small model, but you’re mostly using this hardware as a terminal and orchestrator, not a server, inference, training, or compute machine.
Even the 512 Mac Studio is still only handling the medium-sized local models. So you’re going to be mostly in the cloud until you know precisely what you’re going to be doing.
Western-Image7125@reddit
I’ve been working in the ML field for 15 years and have great financial stability, and I was tempted to buy a Mac mini just to try openclaw and test some ideas I had, but thought better of it: why spend $1,400 without a clear use case and a path to positive gains from it? What you’re suggesting is buying something 3x more expensive with even less of an idea what to do with it, and (I’m reasonably guessing) less financial stability. So do with that information what you will.
RTDForges@reddit
Right out of the gate I’m really worried about heat issues with a laptop and how heavy the load LLMs cause is. I don’t have first hand experience with that computer. I would love to be wrong because as a Mac user if it can handle the heat that is awesome. I would be extremely surprised if it can though.
Also, based on my experience, you’re better off having a dedicated workstation plus a separate LLM box. It’s way better not having to constantly fight the LLMs for resources. Plus, if you just try to wing it and figure one machine will be good enough, you’ll likely take otherwise capable models and suddenly have them hallucinating and going on side quests you never wanted them to go on.
cl326@reddit
Put another way, is it retarded to do something that’s retarded?
FormalAd7367@reddit
Just use an API for now… start building…
Ok-Radish-8394@reddit
Dude, buy a MacBook Air, learn to code, come to terms with what you know and what you’d like to do and then think of such extravagant purchases.
sdmat@reddit
Yes, spending $5K on something you don't need and have no specific purpose for when you clearly need the money elsewhere is stupid.
Ok-Radish-8394@reddit
Well why don’t you get the 64GB version then?
WanderingZoul@reddit
Yes
maschayana@reddit
Yes
Caffdy@reddit
wait for the M5 Ultra to make it worth it
IAmtheBlackWizards_@reddit
M4 Pro 48GB was my compromise about a year ago. Very happy with this decision.
Conscious-Ad9285@reddit
Try an AI subscription for a bit; if you find yourself becoming a power user, you can always pull the trigger.
Spanky2k@reddit
If you have to buy a machine now then the answer mainly depends on your own personal economics. However, if you're already committed to buying a machine and have an interest in LLMs then the added VRAM is not going to be a mistake.
Most of my LLM experimentation has been on my 'old' M1 Ultra 64GB Mac Studio. I bought that for my normal work when they were released and ended up upgrading 18 months later to a M3 Max 64GB MacBook Pro as I needed the very occasional portability. The Mac Studio just gathered dust until a little over a year ago when I discovered you could run LLMs locally and I started playing around on it. What I find amazing about this is that I was using a 3 year old machine to play with cutting edge LLM models that were better than ChatGPT was a year previously and none of this kind of stuff was on my or basically any one else's (mainstream) horizon when my Mac Studio was released. I just find that really cool. My main regret with my two purchases was not going for 128GB of RAM as it would have been really handy right about now (although would have been overkill for anything I used the machines for at the time).
One thing I would suggest you consider, however, is that the M5 MacBook Pro is in a bit of a weird space as it's basically the last dance for the MacBook Pro in its current form. The M6 MacBook Pro is expected as soon as the autumn of this year and will be coming with a redesigned form factor. My guess is they will release that as a higher priced premium MacBook Pro option (MacBook Max?) and sell both models concurrently for a year or two. This will allow them to maintain their margins in the market of higher hardware costs and is clearly the way premium personal electronics is going. But nevertheless, I'd rather have the first of a new model line than the last of the old one. Both in terms of resale value (if that's important to you) and also in terms of how 'new' it feels. When you're spending that much on a laptop, it still looking and feeling 'current' does make a psychological difference in my opinion.
You could just pick up a Neo for now and see how you like it while you wait for the new form factor MacBook Pros to come out. My first mac was the motherboard of an old eMac G4 (educational edition iMac) that I happened to find for cheap on eBay and thought I'd have a play with and run 'headless' (we're talking two decades ago now) and I loved the experience so much, particularly after working in PC tech support, that I ended up using it as my primary machine. Even though it was an order of magnitude less powerful than my PC at the time. That led to me switching almost entirely. I still use a Windows PC for gaming but for everything else I use a mac now and it's all thanks to that one low powered machine!
BringMeTheBoreWorms@reddit
If you’re learning to code then you’ll likely be better off with a Claude and codex subscription. That’ll help you way more than getting into local llm.
There’s a whole other level of complexity that you need to get over before local llm starts to become useful. And running lmstudio is not it.
Captain2Sea@reddit
Motivation is more important than setup
gamesntech@reddit
If you don’t really know what you’re doing, buying anything is stupid, let alone a 128GB MacBook Pro M5 Max.
3dom@reddit
Fun fact: renting a remote 5090 will cost you $3-5 to run AI during a workday (8 hours), i.e. $60-100 total per month if your system is loaded 100% of work time (most likely much less).
You can learn/work on remote hardware by spending what is basically a restaurant dinner per month.
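To put the rent-vs-buy math in concrete terms, here's a rough sketch. The hourly rate is an assumption (marketplace 5090 rentals vary widely), and the laptop price is the 128GB M5 Max figure quoted elsewhere in the thread:

```python
# Back-of-envelope rent-vs-buy math. HOURLY_RATE is an assumed
# marketplace price for a rented 5090; check current listings.
HOURLY_RATE = 0.50        # assumed $/hr
HOURS_PER_DAY = 8
WORKDAYS_PER_MONTH = 20
LAPTOP_PRICE = 5549       # 128GB M5 Max price quoted in the thread

monthly = HOURLY_RATE * HOURS_PER_DAY * WORKDAYS_PER_MONTH
months_to_spend_laptop_price = LAPTOP_PRICE / monthly

print(f"~${monthly:.0f}/month at full workday utilization")
print(f"~{months_to_spend_laptop_price:.0f} months of renting to match the laptop price")
```

At these assumed rates that's about $80/month, so roughly five to six years of full-time rental before you'd have spent the laptop's sticker price; real utilization is usually far lower.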
BitXorBit@reddit
It’s a strong machine for AI, but I find laptops very annoying as soon as they start inferencing; the fans go wild and the noise annoys me. I would wait for the M5 Ultra.
sala91@reddit
I mean, if you have the money, it’s a beast of a machine.
nierama2019810938135@reddit
I would just learn coding on the gaming laptop. You don't need another computer for that. What programming are you looking to learn?
JacketHistorical2321@reddit
If you can afford it and you plan to use it for a long time, then yes, I would say max out what you can get, because there’ll be no upgrading later on. Unless of course you want to sell what you bought at a loss and upgrade later. I bought a Mac Studio M1 Ultra with 128GB of RAM about 3 years ago and I’m still using it today. As much as people love to hate on it, Apple silicon holds up really well long term.
DarkNo7318@reddit
This is bad life strategy. All those hours spent in the second job could be spent learning and tinkering
Trennosaurus_rex@reddit
Yes
mapsbymax@reddit
Since you're already planning to buy an M5 Pro, here's the practical breakdown:
The Pro→Max upgrade is the real question. The M5 Pro 48GB will comfortably run models up to ~30B parameters at decent quants. The M5 Max 128GB opens up the 70B+ class and the newer MoE models (like Qwen 3.5 122B-A10B) that are absolutely crushing it right now. With 614 GB/s memory bandwidth on the Max, generation speed is genuinely good — not GPU-fast, but faster than you can read.
For a learner, here's what I'd actually suggest: Start with Ollama (one command install on Mac). Pull a small model like Qwen 3.5 7B. You'll be running local AI in about 5 minutes. This is where you'll spend most of your first few months — learning prompting, trying different models, building small projects. You don't need 128GB for any of that.
But here's the thing about the RAM argument: You can't upgrade it later, and 128GB gives you 3-5 years of headroom as models get better. The MoE architecture trend means you can run genuinely capable models (100B+ parameters but only activating 10-20B at a time) on 128GB that you simply can't touch on 48GB. That gap is going to keep widening.
Privacy angle is legit. If that matters to you, local inference is the only real answer. No logs, no data leaving your machine, no subscription that can change its terms.
Bottom line: If the money is truly "fun money" and you won't stress about it — the Max is the better long-term buy. If it's a stretch, the M5 Pro 48GB is more than enough to learn on and you can always use API credits for the big models when you need them.
Ok-Measurement-1575@reddit
Not knowing what you're doing is part of the intrigue for me.
It's all over once I've sussed it out.
f0xsky@reddit
for most people, just use the free-tier LLMs of ChatGPT and Gemini. And I would wait for the Mac Studio if you already have a PC and can re-use your monitor and other accessories
superSmitty9999@reddit
If you want to get into AI training and development the DGX spark is a better buy.
It’s slower at inference but has better software compatibility and is more well-rounded.
Realistic_Luck_95@reddit
I have an M3 Pro with 36GB of RAM and I learned a lot by just messing with Ollama and some smaller models on it.
milktea-mover@reddit
For the price, I would recommend you buy a Framework Desktop with 128GB of RAM for LLM inference, and a Macbook Air M5 with maybe 32GB of RAM for your normal usage. If you think that doing that is stupid, then, you definitely shouldn't buy MBP with 128GB of memory.
CreamPitiful4295@reddit
A MacBook M5 isn’t going to help you do anything with AI that a new $600 Mac couldn’t do. It’s all going to be browser-based.
staatsclaas@reddit
r/lostredditors
CreamPitiful4295@reddit
?
Piyh@reddit
It has minor issues → I want to drop $7k on a new laptop is quite the leap
segmond@reddit
it all depends on how motivated you are, if you are a very motivated individual and will put in all your effort into making the best out of it, then I won't call it stupid. i personally will put aside that $5k and put it towards a mac studio with better specs ... and use a cheap laptop/phone/tablet to access it remotely.
theabominablewonder@reddit
Currently the allowances on the subscriptions are good value and stuff like an M5 max will only get cheaper. If subscriptions become worse value then you can take the leap then, possibly with depreciation you can then get a better laptop and run more powerful models.
FullOf_Bad_Ideas@reddit
Linux and Nvidia GPUs are best if you want to do serious tinkering with AI models like LLMs and image/video generation models that isn't just "using OpenWebUI".
If you care about portability, consider some devices like Olares One (they also have some software that makes it easier to actually use many open source projects that you could use with a different hardware too as long as it's Nvidia and running Linux). Or getting a gaming PC with 3090. Single 3090 allows you to do a ton of tinkering and it should be cheap, maybe under $1200 for the whole setup. External enclosure for 3090 added to the gaming laptop would work too and would be even cheaper.
Getting a M5 Pro/Max Mac would be fine too but you'd be limited in terms of projects you'd be able to run - you'd have to use Google Colab or rented GPUs more often and at that point you can as well run it on a potato with Core 2 Duo.
ActuallyAdasi@reddit
Yes
Transhuman-A@reddit
Get a used M1/M2 Max with 32GB RAM 1TB SSD. Anything that it can’t handle should be offloaded into the Cloud.
AMadHammer@reddit
No one is gonna be able to answer that question for you. My advice is to go for it and make stupid decisions.
Sylverster_Stalin_69@reddit
This stupid decision is quite expensive 💀💀
Low-Opening25@reddit
yes it’s exactly what you said it is, stupid.
asfbrz96@reddit
Get a strix halo to play
mobileJay77@reddit
You are already halfway there, since you need a new one anyway. You can use it to tinker some more. This should give you decent, but not top-notch models.
last_llm_standing@reddit
You can never go wrong with M5
amydgalas@reddit
For LLM text I got an M1 Max 64GB and it’s been fine; it generates faster than I can read, so don’t go overboard.