Ollama and OpenCode working very poorly with all models

Posted by Lkemb@reddit | LocalLLaMA | View on Reddit | 29 comments

Hi All,

I'm just getting into self-hosting models. My goal is to have my home PC run the models, with any of my computers, on or off my network, talking to my Raspberry Pi via Cloudflare tunnels; the Pi then forwards the queries to my home PC.

Note: I have a Radeon 7900XT with 20GB VRAM.

However, before I even get there: I've set up Ollama on my desktop PC, downloaded quite a few models, and connected it to opencode. No matter which model I try, they're almost useless when it comes to running opencode commands.

For example, I ask it to summarize a certain file on my PC, and only about 10% of the time will it actually read the file; even then, it just stops outputting and I have to keep coaxing it to continue. Here's an example with qwen2.5-coder:14b at 64K context.

```
"C:\Users\\Documents\Projects\"

Read this code and tell me what the project does and summarize the important files.

---

To analyze the Listle project, I'll need to examine some key files. Let's start by reading the main files that typically define a project structure.

[
  {
    "name": "read",
    "arguments": {
      "filePath": "C:\\Users\\\\Documents\\Projects\\\\package.json"
    }
  },
  {
    "name": "read",
    "arguments": {
      "filePath": "C:\\Users\\\\Documents\\Projects\\\\README.md"
    }
  },
  {
    "name": "glob",
    "arguments": {
      "pattern": "**/*.js",
      "filePath": "C:\\Users\\\\Documents\\Projects\\\\src"
    }
  }
]

Build · qwen2.5-coder:14b-64K · 29.0s
```
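For reference, the `qwen2.5-coder:14b-64K` tag in the transcript footer was built from a custom Ollama Modelfile, since the stock tag ships with a smaller default context. A minimal sketch of that setup (the 65536 value is just 64K tokens spelled out):

```
# Modelfile: extend qwen2.5-coder:14b to a 64K context window
FROM qwen2.5-coder:14b
PARAMETER num_ctx 65536
```

Built with: `ollama create qwen2.5-coder:14b-64K -f Modelfile`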

I've seen some people here with incredible results; I just can't get it to work. Am I doing something wrong? Why are all these models terrible at executing instructions in opencode? I want something more than a "text chat" bot: I want it to be able to search my PC, search the web, etc.
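One thing worth ruling out: the transcript above shows the model printing tool-call JSON as plain text instead of opencode actually executing it, which usually means native tool calling isn't working end to end. You can check whether the model emits structured tool calls at all by hitting Ollama's `/api/chat` endpoint directly with a `tools` array. A sketch, assuming a default local Ollama install; the `read` tool schema here is hypothetical, just mirroring the tool names in the transcript:

```python
import json
import urllib.request

# A non-streaming chat request with one tool declared in OpenAI
# function-call style. A model with working native tool support should
# return a "tool_calls" list inside the assistant message, rather than
# dumping JSON into the text content as in the transcript above.
payload = {
    "model": "qwen2.5-coder:14b",
    "messages": [
        {"role": "user", "content": "Read ./package.json and summarize it."}
    ],
    "stream": False,
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "read",  # hypothetical tool for this test
                "description": "Read a file from disk",
                "parameters": {
                    "type": "object",
                    "properties": {"filePath": {"type": "string"}},
                    "required": ["filePath"],
                },
            },
        }
    ],
}


def check_tool_calling(base_url="http://localhost:11434"):
    """POST the request; return the model's tool_calls list, or None."""
    req = urllib.request.Request(
        f"{base_url}/api/chat",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            body = json.load(resp)
    except OSError as e:
        # Server not running / unreachable -- nothing to report.
        print(f"Ollama not reachable: {e}")
        return None
    return body.get("message", {}).get("tool_calls")


if __name__ == "__main__":
    print(check_tool_calling())
```

If this comes back with a populated `tool_calls` list, the model side is fine and the problem is in the opencode provider wiring; if the JSON shows up in the message `content` instead, the model (or quant) simply isn't doing native tool calling.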