Need practical local LLM advice: I only have a 4GB RAM box from 2016

Posted by Tall-Ant-8557@reddit | LocalLLaMA | 19 comments

Sorry, I'm not much of a tech person.

I’m trying to figure out the most practical local LLM setup on my spare machine:

4 GB RAM

No GPU for now, so please assume CPU-first unless I mention otherwise.
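For anyone sizing this up, here is a rough sketch of the memory math I understand applies (the formula and the flat overhead figure are my assumptions, not measured numbers): quantized model weights take roughly params × bits-per-weight / 8 bytes, plus some headroom for the KV cache and runtime.

```python
def model_ram_gb(params_billion, bits_per_weight, overhead_gb=1.0):
    """Rough RAM estimate for running a quantized model on CPU.

    overhead_gb is a guessed allowance for KV cache, OS, and runtime.
    """
    # weights: params * (bits / 8) bytes, expressed in GB
    weights_gb = params_billion * 1e9 * bits_per_weight / 8 / 1e9
    return weights_gb + overhead_gb

# a 1B-parameter model at 4-bit quantization
print(model_ram_gb(1, 4))  # 1.5 (GB) -- should fit in 4 GB
# a 7B-parameter model at 4-bit quantization
print(model_ram_gb(7, 4))  # 4.5 (GB) -- would not fit
```

By this back-of-the-envelope estimate, models around 1B parameters at 4-bit quantization seem like the realistic ceiling for a 4 GB box, while 7B-class models would not fit.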

I want advice on the overall setup, and I'm interested in which models would suit this hardware. Would appreciate recommendations.