[Setup discussion] AMD RX 7900 XTX workstation for local LLMs — Linux or Windows as host OS?

Posted by ElkanRoelen@reddit | LocalLLaMA | 20 comments

Hey everyone,

I’m a software developer and currently building a workstation to run local LLMs. I want to experiment with agents, text-to-speech, image generation, multi-user interfaces, etc. The goal is broad: from hobby projects to a shared AI assistant for my family.

Specs:
• GPU: RX 7900 XTX 24GB
• CPU: i7-14700K
• RAM: 96 GB DDR5 6000
• Use case: always-on (24/7), multi-user, remotely accessible

What the machine will be used for:
• Running LLMs locally (accessed via web UI by multiple users)
• Experiments with agents / memory / TTS / image generation
• Docker containers for local network services
• GitHub self-hosted runner (needs to stay active)
• VPN server for remote access
• Remote .NET development (Visual Studio on Windows)
• Remote gaming (Steam + Parsec/Moonlight)

The challenge:

Linux is clearly the better platform for LLM workloads (ROCm support, better tooling, Docker compatibility). But for gaming and .NET development, Windows is more practical.
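To make the Linux side concrete: as far as I can tell from Ollama's docs, getting the card working under Docker on a Linux host should be roughly this (untested on my end; the image tag and device paths are what their AMD instructions show):

```
# Ollama's ROCm image; /dev/kfd and /dev/dri expose the AMD GPU to the container.
docker run -d \
  --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama:rocm

# Pull a model that fits comfortably in 24 GB VRAM and test it.
docker exec -it ollama ollama run llama3.1:8b
```

If something that simple also holds up on a Windows host or under WSL2, that would change my calculus.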

Dual-boot is highly undesirable, and possibly even unworkable: This machine needs to stay online 24/7 (for remote access, GitHub runner, VPN, etc.), so rebooting into a second OS isn’t a good option.

My questions:

1. Is Windows with ROCm support a viable base for running LLMs on the RX 7900 XTX? Or are there still major limitations and instability?
2. Can AMD GPUs be accessed properly in Docker on Windows (either native or via WSL2)? Or is full GPU access only reliable under a Linux host (like the Docker sketch above)?
3. Would it be smarter to run Linux as the host and Windows in a VM (for dev/gaming)? Has anyone gotten that working with AMD GPU passthrough? (My rough notes on that are below.)
4. What’s a good starting point for running LLMs on AMD hardware? I’m new to tools like LM Studio and Open WebUI; which do you recommend?
5. Are there any benchmarks or comparisons specifically for AMD GPUs and LLM inference?
6. What’s a solid multi-user frontend for local LLMs? Ideally something that supports different users with their own chat history/context. (See the Open WebUI sketch below.)
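For question 3, my current understanding of the VFIO passthrough route on a Linux host is roughly the following; the PCI IDs are examples, you'd substitute whatever lspci reports for your card:

```
# 1. Enable the IOMMU on the Intel host (kernel cmdline), then reboot:
#    GRUB_CMDLINE_LINUX_DEFAULT="... intel_iommu=on iommu=pt"

# 2. Find the 7900 XTX's PCI IDs (the GPU and its HDMI audio function):
lspci -nn | grep -iE 'vga|audio'

# 3. Bind both functions to vfio-pci at boot so the host driver never claims them
#    (example IDs shown; use the ones lspci printed):
echo "options vfio-pci ids=1002:744c,1002:ab30" | sudo tee /etc/modprobe.d/vfio.conf
sudo update-initramfs -u   # Debian/Ubuntu; regenerate the initramfs accordingly elsewhere

# 4. Attach both functions to the Windows VM as PCI host devices (virt-manager/libvirt).
```

The catch I keep reading about: with only one GPU, handing it to the VM leaves the host without it, so the LLM stack and the Windows VM couldn't use the card at the same time. I'd love to hear from anyone actually running this.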
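For question 6, Open WebUI is the one I've seen mentioned most for exactly this: it has its own user accounts with per-user chat history. Its quick-start (pointing it at an Ollama instance on the same host) looks like this, if I'm reading the README right:

```
# Open WebUI, talking to an Ollama server running on the host machine.
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```

The first account created apparently becomes the admin and can approve additional users, which sounds like a fit for the family-assistant idea, but I'd appreciate confirmation from people using it multi-user.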

Any insights, tips, links, or examples of working setups are very welcome 🙏 Thanks in advance!