Best path to learning modelling, Spark or 5090 or Mac?

Posted by yes_i_tried_google@reddit | LocalLLaMA

I’m spiralling on the options and, combined with my natural habit of overanalysing everything, I can’t figure this out! So help please :)

For context, I’m a Director of Software Eng in banking trying to learn LLMs and modelling techniques in my spare time. We use Copilot at work, and I’ve been hammering Claude Code at home to try and keep up until the bank brings things in.

My workflow up until now:

Claude Opus designs the hard stuff, I cross-check with Ollama cloud models, then get one of the cloud models to build it. Works great, the quality of output is near perfect, and in principle I’m happy, but it’s burning £300/m on subs between Claude and Ollama, which I max out weekly.

Now I want to learn more about modelling, tuning, quanting etc.; in these areas I’ve zero knowledge. If I could also move some of my workflow locally and cut the Ollama subscription, even better.

Options I’m considering:

DGX Spark / GB10 — £3,800-4,000

Corsair Strix Halo — £2,550

5090 workstation — £4,900

Also a Mac, but the memory shortage limits what we can and can’t order in the UK, combined with a 3-month wait on what we can.

Models I use in Ollama: Qwen3.5, GLM5.1, Kimi2.5, MiniMax2.7, Gemma4.

I’m spinning on these options, but I THINK the Spark is the one. Please could I get some views? It would run headless, and I’d carry on working from my baby MacBook M1 8GB / VPS.