Thoughts on my new workstation build for development and local AI?
Posted by Mark_Schwan@reddit | buildapc | 14 comments
Hey everyone,
I’m planning a new PC build and would love some honest feedback before I commit. I’m not a gamer. This machine is mainly for software development, AI-related work, multitasking, and general productivity. Most of my current AI work is still online/cloud-based, but I want a strong desktop that gives me room to grow into more local workloads.
Current build idea
- Case: Fractal Design North XL Tempered Glass
- CPU: Intel Core Ultra 7 270K Plus
- CPU Cooler: be quiet! Pure Loop 3 LX 360mm
- Motherboard: MSI MAG Z890 TOMAHAWK WIFI
- RAM: 64GB DDR5-6000 CL30 Kingston FURY Beast RGB
- GPU: Nvidia GeForce RTX 5070 Ti 16GB
- PSU: be quiet! PURE POWER 13 M 850W 80+ Gold
- SSD: 2TB WD Black SN850X
Context
I’m currently coming from an older AMD system:
- Ryzen 7 7800X
- 32GB RAM
- 1TB SSD
- ASRock Fatal1ty X370 Motherboard
- An older GTX 1080 Ti-class GPU
So this is a pretty big jump for me.
I also use a Zenbook 14 OLED as my laptop:
- Intel Core Ultra 9 285H
- 32GB LPDDR5X
- 1TB SSD
- Intel Arc graphics
My questions
- Does this desktop config make sense for development + AI work?
- Is the RTX 5070 Ti 16GB the right balance, or would you push for something else?
- Any weak spots in the build?
- Would you change anything for better value, reliability, or longevity?
I’m especially interested in feedback from people who use their PCs for coding, containers, VMs, local AI, or heavy multitasking rather than gaming.
Thanks in advance! ✌🏻
utopian201@reddit
Local AI depends heavily on VRAM, since that determines the size of the models you can run. Depending on your CPU needs, you could drop down a CPU tier and put that budget toward a GPU with more VRAM (or even an older-gen GPU with 20 or 24GB).
Worst comes to worst, it can offload from VRAM into your normal RAM, but that slows things down a lot.
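The point about VRAM setting the model-size ceiling can be sketched with back-of-envelope math (the `vram_needed_gb` helper and the flat 2 GB overhead below are my own illustrative assumptions; real usage also depends on context length, KV cache, and runtime overhead):

```python
# Rough VRAM estimate for running a local LLM. This is a sketch, not a real
# tool: the 2 GB fixed overhead is an assumed placeholder for runtime costs.

def vram_needed_gb(params_billion: float, bits_per_weight: int,
                   overhead_gb: float = 2.0) -> float:
    """Approximate VRAM (GB) to hold the weights plus a fixed overhead."""
    weight_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits ~= 1 GB
    return weight_gb + overhead_gb

# A 13B model at 4-bit quantization vs a 16 GB card:
print(vram_needed_gb(13, 4))   # 8.5 -> fits on a 16 GB GPU
# A 70B model at 4-bit:
print(vram_needed_gb(70, 4))   # 37.0 -> spills to system RAM even on 24 GB
```

Anything past the card's capacity gets offloaded to system RAM, which is where the big slowdown mentioned above comes from.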
Mark_Schwan@reddit (OP)
So, if I could get a used RTX 5070 Ti with 24 GB RAM, I should go for this? (Basically, the more VRAM, the better, right?) The RTX 4090 24 GB is just too expensive right now to justify.
utopian201@reddit
The 5070 Ti only has 16GB, so if you wanted more you'd need to go higher end (the 3080 tops out at 12GB): the 3090 and 4090 have 24GB, which is why 3090s are so prized; they're the cheapest way to get 24GB of VRAM.
Mark_Schwan@reddit (OP)
Thanks for the insights, I'll think more about that! (And I need to upgrade my CPU anyway, because the current one doesn't support Windows 11.)
utopian201@reddit
I'm running Win 11 on a 7600 and it's fine. There's no such thing as a 7800X; did you mean a 7700X or 5800X? In any case, I'd guess even a 5800X with a 5090 will be faster than any combination of a slower GPU and a faster CPU.
But my suggestions are where AI is the priority
Mark_Schwan@reddit (OP)
Oh boy, somehow I mixed it up. Currently I'm running an AMD Ryzen 7 1800X. So, you see why I need to upgrade that one as well. 😅
utopian201@reddit
For AI centric workloads, I'd upgrade to a Ryzen 5000 series and a GPU with the most VRAM you can get - it really depends on how much of a priority AI is to you.
Also, will you be training or running inference? For inference, the majority of the budget should go to a high-VRAM GPU. Training relies a bit more on the CPU, but still mainly on the GPU.
What kind of dev do you do? If you need to compile stuff, get a CPU with more cores. Since you don't game, X3D isn't necessary. I'm a Python dev and even my old Haswell notebook is fine for what I do.
You'll probably already have a good idea of what CPU and RAM you need based on your current workload, but I'd say 32GB is enough even with several containers. I usually dev with a single container, and even when I run multiple containers at once, 32GB is enough for me.
If you're only populating two RAM slots, you could potentially add another two sticks later, although I'd wait until RAM prices fall.
Mindless_Fisherman68@reddit
for dev plus local LLM inference, these are the decisions first-pass builds usually get wrong. without seeing your exact part list, here are the calls that matter for that workload:
RAM over everything. local AI eats RAM. 64GB minimum, 96-128GB if you're doing anything bigger than 13B parameter models or running an IDE plus VMs plus inference simultaneously. DDR5-5600 CL36 is the practical sweet spot for AM5; chasing 6400+ rarely pays back on workstation tasks and runs into 4-DIMM stability issues.
GPU choice is dominated by VRAM, not flops. for inference, a 24GB RTX 3090 (used, $700-900) crushes a 16GB 4070 Ti at any model that doesn't fit in 16GB; a 4090 24GB if budget allows ($1500-1700 used). don't buy a 5070/5060 Ti for AI work: 8-16GB caps you at small models and quantization tricks. AMD GPUs work for inference via ROCm, but software support lags and most local AI tooling still assumes CUDA; expect 30% more fight per problem.
CPU: extra cores help compile times far more than they'd help gaming. the 7950X / 9950X (16C/32T) is the AM5 pick for dev. X3D doesn't help dev or AI inference; the extra cache only matters for games and a few sim workloads.
storage: 2x NVMe minimum. one fast 2TB Gen4 for OS plus active projects, one large 4TB+ Gen3 or SATA drive for datasets and model weights. local AI models are 5-50GB each; you'll burn through 2TB quickly.
PSU sized for GPU TDP plus headroom. a 24GB 3090 draws ~350W in practice, a 4090 ~450W; add ~250W CPU plus drives and overhead = 850W gold minimum, 1000W if you ever plan to add a second GPU.
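the wattage math above as a quick sketch (the 20% headroom factor and the `recommended_psu_watts` helper are my own assumptions, not a standard formula; check your actual parts' specs):

```python
# Back-of-envelope PSU sizing: sum component draw, add headroom, round up
# to the next 50 W PSU size. "other_w" is an assumed catch-all for drives,
# fans, and motherboard overhead.

def recommended_psu_watts(gpu_w: int, cpu_w: int, other_w: int = 100,
                          headroom: float = 1.2) -> int:
    """Total draw times headroom, rounded up to the nearest 50 W."""
    total = (gpu_w + cpu_w + other_w) * headroom
    return int(-(-total // 50) * 50)  # ceiling to a 50 W step

print(recommended_psu_watts(350, 250))  # used-3090 build -> 850
print(recommended_psu_watts(450, 250))  # 4090 build -> 1000
```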
case: airflow over aesthetics. workstation thermals matter more than RGB. Fractal North, Lian Li Lancool 216, Phanteks XT Pro are all good airflow picks under $150.
if you can share the actual part list I can be more specific. but these decisions trip up most first-time AI workstation builds.
Mark_Schwan@reddit (OP)
Thanks for the insights! First, the parts are written in the main post, or am I missing something?
About RAM: After reading https://www.corsair.com/de/en/explorer/diy-builder/how-tos/memory-for-local-llms-how-much-ram-do-you-need-and-when-speed-matters/, I thought that 64 GB DDR5-6000 sounds like the best price/performance right now.
About GPU: Yeah, gotcha. I'll look into getting more VRAM.
About CPU: So you would recommend going for AMD instead?
I've got a spare 2 TB Samsung Something SSD lying around that I want to use as well (not mentioned above). I'll also look into a bigger PSU; the case and cooling should be fine with what I chose.
utopian201@reddit
I thought 6000 was the sweet spot for AM5, since it maximises the 1:1 ratio with the Infinity Fabric?
Accomplished_Emu_658@reddit
Isn’t the VRAM low, or do you plan to upgrade the GPU later?
Mark_Schwan@reddit (OP)
Right now I’m mainly trying to avoid overpaying since GPU prices are still pretty wild. My thinking is that I can build the rest of the system sensibly now, and if prices improve later, I can just upgrade the GPU down the line.
braydon125@reddit
Your problem is going to be PCIe lanes if you ever plan to scale past one GPU. Consumer/gamer CPUs and mobos aren't meant to run tons of hardware.
Mark_Schwan@reddit (OP)
Totally fair point, but I just don’t think I’ll end up going that route. 😅