Hey everyone! Need suggestions
Posted by bhagwachad@reddit | LocalLLaMA | 4 comments

Which LLM/SLM would be best for my hardware? I want something that'll help me with studies (doubt-solving, resource planning, etc.) and coding (debugging, refactoring, etc.).
[Honestly I've no clue what's eating up so much RAM, gotta check Task Manager.]
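If you'd rather check from Python than dig through Task Manager, here's a quick sketch using the psutil package (my assumption, not something anyone in the thread mentioned; install it with pip first):

```python
# Snapshot of physical RAM usage plus the top 5 memory hogs.
# Assumes the third-party psutil package: pip install psutil
import psutil

vm = psutil.virtual_memory()
print(f"total {vm.total / 2**30:.1f} GiB | available {vm.available / 2**30:.1f} GiB")

# Processes psutil can't read (AccessDenied) come back with memory_info=None.
procs = [p.info for p in psutil.process_iter(["name", "memory_info"])
         if p.info["memory_info"] is not None]
for info in sorted(procs, key=lambda i: i["memory_info"].rss, reverse=True)[:5]:
    print(f"{info['name']}: {info['memory_info'].rss / 2**20:.0f} MiB")
```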
MelodicRecognition7@reddit
Consider switching to Linux if you're not required to run any Windows-only software.
bhagwachad@reddit (OP)
I'm planning to do so in the future: dual-boot a clean Windows and Linux, keeping Linux as my primary go-to.
I used to use Arch BTW (XD) on my old laptop, but for the last 4 years I've been using Win11 on my new one.
WhoRoger@reddit
Granite 4.0 H 1B or LFM2.5 1.2B Thinking.
Maybe SmolLM3 3B; it should fit into 4 GB fine if you don't go overboard with context.
Use Q6_K, or maybe IQ4_NL for SmolLM3 if Q6 won't fit.
You won't get much coding done, though Granite should handle basic scripts.
Run llama.cpp. If I could figure it out, probably everybody can.
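For what it's worth, a minimal sketch of doing the same thing through the llama-cpp-python bindings rather than the llama.cpp CLI (the GGUF filename is a placeholder for whichever quant you download; a small n_ctx is what keeps the KV cache inside 4 GB):

```python
# Load a quantized GGUF with llama-cpp-python and ask one question.
# Assumes: pip install llama-cpp-python, plus a downloaded GGUF file
# (the path below is a placeholder, not a real release filename).
from llama_cpp import Llama

llm = Llama(
    model_path="./SmolLM3-3B-Q6_K.gguf",  # placeholder path
    n_ctx=4096,      # modest context window so the KV cache fits in ~4 GB RAM
    n_gpu_layers=0,  # CPU-only; raise this if you have spare VRAM
    verbose=False,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a hash map is in two sentences."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

Q6_K sits around 6.6 bits per weight, so quality stays close to the full-precision model; IQ4_NL trades a little quality for a noticeably smaller file.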
bhagwachad@reddit (OP)
Alright, thanks a lot, WhoRoger!