Planning Multi-RTX 5060 Ti Local LLM Workstation (TRX40 / 32–64GB VRAM)

Posted by Special-Art-9369@reddit | LocalLLaMA | 24 comments

TL;DR:
Building my first multi-GPU workstation for running local LLMs (30B+ models) and RAG on personal datasets. Starting with 2× RTX 5060 Ti (16GB) on a used TRX40 Threadripper setup, planning to eventually scale to 4 GPUs. Looking for real-world advice on PCIe stability, multi-GPU thermals, case fitment, PSU headroom, and any TRX40 quirks.

Hey all,

I'm putting together a workstation mainly for local LLM inference and RAG on personal datasets. I'm leaning toward a used TRX40 platform for its PCIe 4.0 lanes: it can feed three or four GPUs at x16/x8 each, instead of the x8/x4 splits you often end up with on mainstream boards. I'm fairly new to PC building, so I might be overthinking some of this, but experimenting with local LLMs looks really fun.
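
For whoever ends up checking lane allocation after assembly (me included), here's a minimal sketch using the `pynvml` bindings (the `nvidia-ml-py` package) to print each card's negotiated PCIe generation and width. Note that risers or bifurcation can silently drop a slot to x4, and cards downclock the link at idle, so run this under load:

```python
import pynvml

# Query each GPU's current vs. maximum PCIe link via NVML.
pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    h = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(h)
    if isinstance(name, bytes):  # older pynvml versions return bytes
        name = name.decode()
    cur_gen = pynvml.nvmlDeviceGetCurrPcieLinkGeneration(h)
    cur_w = pynvml.nvmlDeviceGetCurrPcieLinkWidth(h)
    max_gen = pynvml.nvmlDeviceGetMaxPcieLinkGeneration(h)
    max_w = pynvml.nvmlDeviceGetMaxPcieLinkWidth(h)
    print(f"GPU{i} {name}: PCIe gen{cur_gen} x{cur_w} "
          f"(max gen{max_gen} x{max_w})")
pynvml.nvmlShutdown()
```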

Goals:

- Run 30B+ class models locally for inference (chat and experimentation).
- RAG over my personal datasets.
- Start at 32GB of VRAM (2× 16GB) with a clean upgrade path to 64GB (4 GPUs).
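
As a sanity check on the 30B+ goal, here's my back-of-envelope VRAM math. The bits-per-weight figure is a rough average for Q4_K_M-style quantization and the overhead allowance is a guess, so treat the numbers as ballpark only:

```python
# Rough VRAM budget for a quantized ~32B model (all figures approximate).
params = 32e9            # 32B-class model
bits_per_weight = 4.5    # Q4_K_M averages very roughly 4.5 bits/weight
weights_gb = params * bits_per_weight / 8 / 1e9      # ~18 GB of weights
overhead_gb = 5.0        # KV cache + CUDA buffers; grows with context length
needed = weights_gb + overhead_gb
print(f"~{needed:.0f} GB needed vs. {2 * 16} GB across two 16GB cards")
```

By that math a 4-bit 30B-class model fits in 32GB with room for a decent context window, while FP16 weights (~64GB) would be tight even on four cards.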

Current Build Plan (I/O-focused):

- Used TRX40 board + Threadripper 3000-series CPU (chosen for the PCIe lanes).
- 2× RTX 5060 Ti 16GB to start, scaling to 4× later.
- PSU, case, and cooling still TBD, which is where most of my questions below come from.
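
For reference, splitting a model across the two cards doesn't need anything exotic. Here's a minimal sketch with llama-cpp-python; the GGUF path and the even split ratio are placeholders, not recommendations:

```python
from llama_cpp import Llama

# Load a quantized GGUF with all layers offloaded, weights split across 2 GPUs.
llm = Llama(
    model_path="models/my-32b-q4_k_m.gguf",  # hypothetical local path
    n_gpu_layers=-1,          # offload every layer to GPU
    tensor_split=[0.5, 0.5],  # even weight split across GPU0 / GPU1
    n_ctx=8192,               # context window; raise once VRAM headroom is known
)

out = llm("Summarize why PCIe lanes matter for multi-GPU inference.",
          max_tokens=128)
print(out["choices"][0]["text"])
```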

Questions for anyone with TRX40 multi-GPU experience:

TRX40 quirks / platform issues: Any BIOS settings, firmware updates, or stability gotchas I should check before buying a used board, especially once a third or fourth GPU goes in?

Case / cooling: Which cases actually fit 3–4 dual-slot cards with sane airflow? How bad do thermals get with the cards stacked? My plan for measuring this is sketched below.
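
In case it helps anyone answer, I plan to judge thermals with a simple NVML polling loop like this (pynvml again; the one-second interval is arbitrary):

```python
import time
import pynvml

# Poll every GPU's temperature and power draw once a second under load.
pynvml.nvmlInit()
handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
           for i in range(pynvml.nvmlDeviceGetCount())]
try:
    while True:
        stats = []
        for i, h in enumerate(handles):
            temp = pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_TEMPERATURE_GPU)
            watts = pynvml.nvmlDeviceGetPowerUsage(h) / 1000  # NVML reports mW
            stats.append(f"GPU{i}: {temp}C {watts:.0f}W")
        print(" | ".join(stats))
        time.sleep(1)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```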

Power supply / spikes: How much PSU headroom do people leave for transient spikes with 4 GPUs? My rough math is below.
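
Here's my back-of-envelope sizing. The per-card figure is the 5060 Ti's rated ~180W board power, the CPU figure assumes a 280W-TDP Threadripper, and the spike multiplier is just a common rule of thumb, so please correct me if these assumptions are off:

```python
# Rough steady-state draw plus transient headroom (all figures approximate).
gpu_tgp_w = 180        # RTX 5060 Ti rated board power
n_gpus = 4             # eventual full build
cpu_w = 280            # TRX40 Threadripper TDP (3960X/3970X class)
platform_w = 100       # board, RAM, NVMe, fans: a rough allowance
steady_w = n_gpus * gpu_tgp_w + cpu_w + platform_w   # ~1100 W
spike_factor = 1.3     # rule-of-thumb margin for millisecond transients
print(f"steady ~{steady_w} W -> shop for ~{steady_w * spike_factor:.0f} W+ PSUs")
```

By that math even the 4-GPU endgame lands around 1100W sustained, so a quality 1300–1600W unit looks like the safe range, but that's exactly the kind of assumption I'd like checked.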

Basically, I'm just trying to catch any pitfalls or design mistakes before investing in this setup. I'd love to hear what worked, what didn't, and any lessons learned from your own multi-GPU/TRX40 builds.

Thanks in advance!