PC for local LLM inference/GenAI development
Posted by JMarinG@reddit | LocalLLaMA | 7 comments
Hi all.
I'm planning to buy a PC for running local LLMs and developing GenAI apps. I want it to be able to run 32B models (maybe 70B for some testing), and I'd like to know what you think of the following build (a rough VRAM estimate follows the parts list). Any suggestions to improve performance or the budget are welcome!
CPU: AMD Ryzen 7 9800X3D 4.7/5.2GHz 494,90€
Motherboard: GIGABYTE X870 AORUS ELITE WIFI7 ICE 272€
RAM: Corsair Vengeance DDR5 6600MHz 64GB 2x32GB CL32 305,95€
Case: Forgeon Arcanite ARGB Mesh Tower ATX White 109,99€
Liquid cooler: Tempest Liquid Cooler 360 Kit White 68,99€
Power supply: Corsair RM1200x SHIFT White Series 1200W 80 Plus Gold Modular 214,90€
Graphics card: MSI GeForce RTX 5090 VENTUS 3X OC 32GB GDDR7 Reflex 2 RTX AI DLSS4 2499€
Drive 1: Samsung 990 EVO Plus 1TB SSD, 7150MB/s, NVMe 2.0, PCIe 5.0 x2 78,99€
Drive 2: Samsung 990 EVO Plus 2TB SSD, 7250MB/s, NVMe 2.0, PCIe 5.0 x2 127,99€
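As a back-of-the-envelope check on the 32B/70B goal, here's a rough VRAM estimate (my own rule of thumb, with a guessed fixed overhead for KV cache and buffers, not from any spec sheet):

```python
# Approximate VRAM to hold the weights plus a small allowance for
# KV cache and runtime buffers (the 2 GB overhead is a rough guess).
def est_vram_gb(params_b: float, bits_per_weight: float, overhead_gb: float = 2.0) -> float:
    return params_b * bits_per_weight / 8 + overhead_gb

for params in (32, 70):
    for bits, name in ((16.0, "FP16"), (8.0, "Q8_0"), (4.5, "Q4_K_M")):
        print(f"{params}B @ {name}: ~{est_vram_gb(params, bits):.0f} GB")
```

By this math a 32B model at Q4 (~20 GB) fits comfortably in the 5090's 32 GB, while 70B at Q4 (~41 GB) does not fit on one card; it would need heavier quantization, CPU offload, or a second GPU.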
KillerQF@reddit
You may want to get a different motherboard with better PCIe slot placement, so you can accommodate a second 5090 in the future.
JMarinG@reddit (OP)
True... What do you think about this motherboard, then? MSI PRO X870E-P WIFI (X870E chipset, ATX, AM5, DDR5, PCIe 5.0, Wi-Fi 7, 5G LAN, RGB)
KillerQF@reddit
That would work, but the second x16-size slot runs at PCIe 4.0 x4 from the chipset; that's more than fine for most LLM inference (see the bandwidth math after the list below).
But depending on what else you want to do, you may want to look for a board that can split the CPU lanes into two PCIe 5.0 x8 slots, like the:
ASUS ProArt X870E-CREATOR
MPG X670E CARBON WIFI
or others.
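To put rough numbers on that, here's some quick per-direction bandwidth arithmetic (nominal figures I'm assuming, ignoring protocol overhead). For single-GPU inference the link speed mostly affects model load time, since token generation runs out of VRAM:

```python
# Nominal per-direction PCIe throughput after encoding overhead:
# ~1.969 GB/s per PCIe 4.0 lane, ~3.938 GB/s per PCIe 5.0 lane.
slots = {
    "PCIe 5.0 x16 (main slot)": 16 * 3.938,
    "PCIe 5.0 x8 (split CPU lanes)": 8 * 3.938,
    "PCIe 4.0 x4 (chipset slot)": 4 * 1.969,
}
model_gb = 19  # assumed size of a ~32B Q4 GGUF

for slot, gbs in slots.items():
    print(f"{slot}: ~{gbs:.0f} GB/s -> {model_gb} GB model loads in ~{model_gb / gbs:.1f} s")
```

Multi-GPU tensor parallelism is where an x4 chipset link can start to hurt, which is why the 2x PCIe 5.0 x8 boards above are worth considering.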
JMarinG@reddit (OP)
Looking at the specs of the one I suggested, it says:
3 x PCIe x16 Gen5, 1 x PCIe x1
Isn't that what you're saying?
yani205@reddit
You don't need the X3D for LLM inference; the extra 3D V-Cache doesn't help a GPU-bound workload.
JMarinG@reddit (OP)
Good to know! I was wondering whether that would make a difference. So would the AMD Ryzen 9 9950X or AMD Ryzen 9 9900X be a better choice?
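One caveat on the CPU question: the processor and system RAM do start to matter once a model spills out of VRAM, e.g. a 70B at Q4 (~41 GB) partially offloaded on a single 5090. Here's a minimal sketch of that setup with llama-cpp-python; the file name and layer count are placeholders I made up, not tested values:

```python
# Minimal sketch, assuming llama-cpp-python installed with CUDA support.
from llama_cpp import Llama

llm = Llama(
    model_path="models/70b-q4_k_m.gguf",  # placeholder path
    n_gpu_layers=60,  # as many layers as fit in 32 GB; the rest run on the CPU
    n_ctx=8192,
)

out = llm("Explain the KV cache in one sentence:", max_tokens=64)
print(out["choices"][0]["text"])
```

In that offload scenario it's RAM bandwidth and core count that help, not the X3D's extra cache, so a 9950X looks like the more sensible pick at a similar price.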