Help on SLMs
Posted by Mission_Big_7402@reddit | LocalLLaMA | View on Reddit | 3 comments
I am building a context-aware terminal wrapper that suggests command completions (like VS Code's inline code suggestions, but for shell commands). I've finished the local bash history part: it autocompletes from the most recent matching command, shown first in gray.
Now I'm trying to use an SLM to predict or complete the user's command while also understanding the context, which is stored in a CONTEXT.md file and sent on every keystroke. But most of the SLMs I've tried are either slow or just generate random output that makes no sense.
I've tried Qwen2.5-Coder (1.5B and 0.5B) and Llama 3.2 (1B), which are lightweight.
Are there any other good models out there, and is what I'm trying to build even possible?
Share your thoughts and suggestions; I'm just trying to build something and learn.
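For reference, the history-based part described above (gray ghost text from the most recent matching command) can be sketched roughly like this; the file path and function name here are illustrative, not the OP's actual code:

```python
from pathlib import Path

def suggest_from_history(prefix: str, history_path: str = "~/.bash_history"):
    """Return the most recent history entry starting with `prefix`, or None.

    A minimal sketch of history-based ghost completion: the real wrapper
    would re-run this on every keystroke and render the suffix in gray.
    """
    if not prefix:
        return None
    lines = Path(history_path).expanduser().read_text().splitlines()
    for cmd in reversed(lines):  # walk newest-first
        if cmd.startswith(prefix) and cmd != prefix:
            return cmd
    return None
```

This is O(history size) per keystroke; a trie or a cached, deduplicated list would be the obvious next step if the history file is large.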
qubridInc@reddit
Yeah it’s possible, but SLMs alone won’t cut it; use rules + history for most cases and let the model only refine, otherwise it’ll stay slow and unreliable.
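The "rules + history first, model only refines" idea above could be sketched as a fallback chain; `model` here is a hypothetical callable (it might wrap a local llama.cpp or Ollama endpoint), and the rule table is purely illustrative:

```python
def complete(prefix: str, history: list, model=None):
    """Hybrid completion: cheap deterministic sources first, SLM last.

    `model` is a hypothetical callable mapping a prefix to a completion
    (or None); it is only invoked when the fast paths miss, which keeps
    latency low for the common cases.
    """
    # 1. Rules: hand-written expansions for very common prefixes (illustrative).
    rules = {"gco": "git checkout", "gst": "git status"}
    if prefix in rules:
        return rules[prefix]
    # 2. History: most recent command sharing the prefix.
    for cmd in reversed(history):
        if cmd.startswith(prefix) and cmd != prefix:
            return cmd
    # 3. Model: reached only when rules and history both miss.
    return model(prefix) if model else None
```

Because steps 1 and 2 never touch the model, most keystrokes resolve instantly, and the SLM's slowness only shows up on genuinely novel commands.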
ttkciar@reddit
I haven't tried them yet, but you might want to try Gemma4's 2B and 4B models, which purportedly support fill-in-middle.
Qwen3.5-2B is worth checking out, too.
Mission_Big_7402@reddit (OP)
will definitely try