Local model running in Ollama for VS Code Copilot cannot get the workspace context
Posted by Proud-Obligation1277@reddit | LocalLLaMA | View on Reddit | 4 comments
I use a local Ollama model for VS Code Copilot, but it seems it cannot get the context of the workspace. For example, when I ask it to edit or summarize the currently open file, it does not know which file to work on.
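For what it's worth, this behavior is expected at the API level: Ollama exposes an OpenAI-compatible chat endpoint (by default at `http://localhost:11434/v1/chat/completions`), and that endpoint is stateless — the model only ever sees what the client puts in the request, so "the current file" has to be inlined into the prompt by the editor extension. A minimal sketch of what such a request payload looks like (model name and file contents here are hypothetical placeholders, and the payload is only built, not sent):

```python
import json

def build_chat_payload(model: str, instruction: str,
                       file_name: str, file_text: str) -> dict:
    """Build an OpenAI-style chat payload that embeds the file content.

    The server has no access to your workspace; if the client does not
    inline the file like this, the model cannot know which file you mean,
    which matches the behavior described in the post.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a coding assistant."},
            {
                "role": "user",
                # The file content is pasted directly into the message.
                "content": (
                    f"{instruction}\n\n"
                    f"File `{file_name}`:\n```\n{file_text}\n```"
                ),
            },
        ],
    }

# Hypothetical example values, just to show the shape of the request.
payload = build_chat_payload(
    "llama3.1",
    "Summarize the current file.",
    "main.py",
    "print('hello')",
)
print(json.dumps(payload, indent=2))
```

So if the Copilot extension is not sending the open file to the local endpoint, the model genuinely has nothing to work with; the fix is on the client/extension side, not the model side.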

Here is the information about the model I use:

Electronic-Space-736@reddit
Careful, the word "ollama" triggers these folks; better to say "OpenAI-API-style LLM".
Proud-Obligation1277@reddit (OP)
What does this mean? I'm a beginner and have no idea about the community culture here. I'm just stating the problems I've encountered based on the facts.
Electronic-Space-736@reddit
Yeah, I am new too. I use Ollama; I made a few posts showing what I was working on, and a few comments where Ollama could solve an issue, and just got downvoted to hell.
This is the reality:
Based on what you’re building (agents, automation, local-first):
You’re hitting a nerve because:
So responses become:
Not because you’re wrong — but because you’re off-narrative.
lemondrops9@reddit
? It's because Ollama is junk. It has its own GGUF storage system, which doesn't make sense. Who wants to be converting GGUF files just for Ollama? It tends to be 2x slower than llama.cpp.