Any luck integrating local ollama models into VS Code Copilot Chat?
Posted by ShadowBannedAugustus@reddit | LocalLLaMA | View on Reddit | 7 comments
Hi all,
I tried quite a few models and approaches, but had no luck integrating local models into the VS Code Copilot Chat extension in a useful way.
Of course I can see the models there and can choose them, but none of them work even remotely close to the smallest cloud (or free) models built into Copilot Chat. They don't want to edit files or follow instructions at all; at best they somewhat work in "Ask" mode, but in "Agent" mode I cannot achieve anything.
Did someone make this work? Any tips are most welcome! Thanks!
Addyad@reddit
You can use either of the following extensions in vscode
Both of them are OpenAI-compatible plugins. The first one integrates with the existing Copilot Chat; the other one gives you a more or less similar UI to Copilot Chat. In both cases, you need to configure the config.yaml files so you can communicate with your ollama server.
Since VS Code is loaded with telemetry and stuff that I couldn't stop, I switched to VSCodium; it does almost the same things, except that it's open source with no microslop. I use the Continue extension to chat with my model via a llama.cpp server.
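For what it's worth, a minimal Continue config.yaml pointing at a local ollama server looks roughly like this. The model name and roles are just an example (qwen2.5-coder:14b is one of the models OP mentioned trying); check Continue's docs for the exact schema your version expects:

```yaml
# ~/.continue/config.yaml — sketch, not a verified config
name: Local Assistant
version: 1.0.0
models:
  - name: qwen2.5-coder        # display name, arbitrary
    provider: ollama           # talks to the local ollama server
    model: qwen2.5-coder:14b   # must match a model you've pulled with `ollama pull`
    roles:
      - chat
      - edit
```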
ShadowBannedAugustus@reddit (OP)
Thanks, did you also manage good code edits with this approach? That is where it falls short for me: I can chat, but actual direct code edits are the key use case for me.
acbonymous@reddit
Are you using a model that supports tools?
ShadowBannedAugustus@reddit (OP)
I tried devstral:24b, mistral-nemo:latest, qwen3.6:35b-a3b and qwen2.5-coder:14b, no luck with any of them.
bssrdf@reddit
No need to use ollama. llama.cpp works, but you have to use the VS Code Insiders version. See https://www.reddit.com/r/LocalLLaMA/comments/1rt5e84/a_simple_set_up_using_local_qwen_35_27b_in_vs/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
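If you go the llama.cpp route, the rough shape is: start llama-server exposing its OpenAI-compatible endpoint, then add that endpoint as a model in Copilot Chat. The GGUF filename below is a placeholder, and flags may vary by llama.cpp version, so treat this as a sketch:

```
# Start a local OpenAI-compatible server; --jinja applies the model's chat
# template, which tool/agent calls generally need; -ngl offloads layers to GPU.
llama-server -m qwen2.5-coder-14b-q4_k_m.gguf --port 8080 --jinja -ngl 99
```

Then in VS Code (Insiders), add an OpenAI-compatible model pointing at http://localhost:8080/v1 from the model picker's manage-models option.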
Addyad@reddit
You don't need the Insiders edition.
With these extensions, it worked in the normal VS Code. I tested this a couple of weeks ago, but then I switched to VSCodium.
M4A3E2APFSDS@reddit
I think you can do that using the VS Code Insiders edition. In the add-models dropdown you get an option to add an OpenAI-compatible model, or you can add it via config. Google for more details.