Anyone else running local LLMs on older hardware?

Posted by lewd_peaches@reddit | LocalLLaMA | 44 comments

I'm using an old Xeon workstation with a decent amount of RAM and it's surprisingly usable. What's the oldest/weirdest hardware you've successfully run a model on?