Rack server for local LLM

Posted by Typhoon-UK@reddit | LocalLLaMA | View on Reddit | 10 comments

Hi, has anyone tried running a local LLM on a Dell/HP rack server with older Xeon processors, 100+ GB of RAM, and no GPU?

Dell PowerEdge R720, 2x Xeon E5-2650 v2, 128 GB RAM
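For a rough feasibility check: CPU-only token generation is typically memory-bandwidth-bound, so a common rule of thumb is tokens/sec ≈ effective memory bandwidth divided by model size in bytes. The numbers below are assumptions for illustration, not measurements (the effective bandwidth of a dual-socket DDR3 R720 and the exact model size would need to be benchmarked):

```python
# Back-of-envelope estimate for CPU-only token generation speed.
# Rule of thumb: each generated token streams roughly the whole
# model's weights through RAM once, so bandwidth is the ceiling.

def estimate_tps(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper-bound tokens/sec = effective memory bandwidth / model size."""
    return bandwidth_gb_s / model_size_gb

# Assumed figures (hypothetical, not measured): ~50 GB/s effective
# bandwidth across both sockets, and an ~8.5 GB Q8_0 model file.
print(f"~{estimate_tps(50, 8.5):.1f} tok/s upper bound")
```

Real throughput lands below this ceiling (NUMA effects and prompt processing cost extra), but it gives a quick sense of whether a given model size is usable on that box.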

I currently run qwen3.5-2b (Q8_0) on a Dell XPS 7590 with 16 GB of RAM and a 4 GB NVIDIA GPU. It's alright in chat mode but struggles when integrated with opencode.