Help me choose: Unified Memory (Apple Silicon) or 64GB DDR4 for a Budget Home AI Server?

Posted by khazenwastaken@reddit | LocalLLaMA | 24 comments

Hi folks, I'm a CS student looking to set up my first local LLM server. My goal is to run agents for automation and get help with coding/debugging. Since I'm on a budget, I have to decide between raw capacity and memory bandwidth:

Mac Mini M1 (16GB) / M2 (24GB): Fast inference thanks to unified memory, but very limited in terms of model size.

Refurbished Mini PC (e.g., i5-8500T) with 64GB DDR4: Slow memory speeds, but I can fit much larger models or higher-bit quantizations.

The Trade-off: I don't mind waiting a bit for the output, but I'm terrified of being stuck with "dumb" models due to the 16GB-24GB RAM limit. Would a larger model running slowly on a 64GB Mini PC be more useful for complex coding than a fast but small model on a Mac?
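For intuition on the trade-off: decode throughput on dense models is roughly memory-bandwidth-bound, so a crude upper bound is bandwidth divided by model size (one full pass over the weights per token). Here's a minimal sketch of that estimate; the bandwidth and model-size figures are approximate spec-sheet assumptions on my part, not measurements, and real-world numbers will be lower:

```python
# Rough rule of thumb: tokens/sec <= memory bandwidth / bytes read per token.
# For a dense model, bytes per token is roughly the quantized model size.
# All figures below are approximate assumptions, not benchmarks.

def est_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper-bound decode speed: one full weight pass per generated token."""
    return bandwidth_gb_s / model_size_gb

systems = {
    "Mac Mini M1 (unified, ~68 GB/s)": 68.0,
    "Mac Mini M2 (unified, ~100 GB/s)": 100.0,
    "i5-8500T, dual-channel DDR4-2666 (~42 GB/s)": 42.0,
}

models = {
    "7B @ 4-bit (~4 GB)": 4.0,
    "14B @ 4-bit (~8 GB)": 8.0,
    "32B @ 4-bit (~18 GB)": 18.0,
    "70B @ 4-bit (~40 GB)": 40.0,
}

for sys_name, bw in systems.items():
    print(sys_name)
    for model_name, size in models.items():
        print(f"  {model_name}: ~{est_tokens_per_sec(bw, size):.1f} tok/s max")
```

The takeaway from this estimate: the 64GB box can load a 70B quant, but at roughly 1 tok/s it's painful for interactive coding, while the Macs are faster but capped at whatever fits in 16-24GB.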

What's the sweet spot for a student budget? Speed or memory capacity?