Potential Local LLM Setup Question
Posted by limejeller@reddit | LocalLLaMA | 5 comments
I want to set up a local coding LLM, maybe with Qwen3:30B-A3B (I've heard it's good). I want to use what I already have as much as possible: an old desktop with a Ryzen 5600G and 16GB of DDR4 RAM. I saw an RX 7900 XT for a really good price and am tempted to buy it for local LLM purposes. Could I still get reasonable performance out of the older hardware, since the 7900 XT has a decent amount of VRAM? I'm totally new to this, so I apologize if it's a dumb question. Thanks!!
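For reference, a minimal sketch of the kind of setup described above, using llama-cpp-python with a quantized GGUF build of Qwen3-30B-A3B; the file name, quant level, and context size are assumptions, and running it on an RX 7900 XT relies on a ROCm/HIP build of llama.cpp rather than the default CPU build:

```python
# Minimal sketch, assuming a local Q4_K_M GGUF of Qwen3-30B-A3B and a
# ROCm/HIP-enabled build of llama-cpp-python (the file name is hypothetical).
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-30B-A3B-Q4_K_M.gguf",  # assumed local model file
    n_gpu_layers=-1,  # offload all layers to the GPU
    n_ctx=8192,       # context window; larger values need more VRAM for KV cache
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}]
)
print(out["choices"][0]["message"]["content"])
```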
mycycle_@reddit
Once the model bleeds out of VRAM into system RAM, that's where you start to see the pain points. So it depends on your context size and, if it's large, how patient you are.
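A rough back-of-envelope sketch of where that spill-over point sits for this card; the bytes-per-weight and KV-cache figures below are assumed ballpark values, not measurements:

```python
# Rough arithmetic: quantized weights + KV cache vs the 20 GB on an RX 7900 XT.
params_b    = 30    # billions of parameters (Qwen3-30B-A3B)
bytes_per_w = 0.55  # ~4.4 bits/weight for a Q4_K_M-style quant (assumed)
weights_gb  = params_b * bytes_per_w          # roughly 16-17 GB of weights

kv_gb_per_4k = 0.5        # assumed KV-cache cost per 4k tokens of context
ctx_tokens   = 16_000
kv_gb = kv_gb_per_4k * ctx_tokens / 4_000     # ~2 GB at 16k context

total_gb = weights_gb + kv_gb
print(f"~{total_gb:.1f} GB needed vs 20 GB on an RX 7900 XT")
# Anything past the card's 20 GB spills into system RAM and throughput drops sharply.
```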
limejeller@reddit (OP)
Hmm true. Thanks for the reply!
mycycle_@reddit
Look into those Intel GPUs, and a channel called Country Boy Computing. If you're on a budget, there are options if you aren't going the Apple route (which ironically has the best performance-to-cost ratio).
MelodicRecognition7@reddit
You'll also need to upgrade to 32 GB of RAM. If your current RAM is 2x8 rather than 1x16, you'll have to sell your current sticks and buy 2x16.
limejeller@reddit (OP)
Ahh I see. Thanks!