Built a terminal chatbot in Python that uses Ollama + Qwen3.5:4b — fully offline, beginner project but works well
Posted by Beneficial-Job-3082@reddit | LocalLLaMA | 5 comments
Hey everyone, I am interested in exploring Python and wanted to build something with local LLMs instead of using OpenAI.
Built a simple terminal chat app that:
- Runs Qwen3.5:4b locally via Ollama
- Remembers conversation history mid-session
- Has a clean command system (/reset, /history, /clear, etc.)
- Zero cloud, zero API keys, everything stays on your machine
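For anyone curious how a loop like this looks, here is a minimal sketch (not the OP's actual code; see the repo for that) of a terminal chat against Ollama's REST API. It assumes a local Ollama server on the default port and a model tag of `qwen3.5:4b`; the endpoint and request/response shape follow Ollama's `/api/chat` API, and the slash-command handler is a hypothetical simplification.

```python
# Minimal terminal chat sketch against a local Ollama server.
# Assumes Ollama is running at its default address (localhost:11434)
# and that the model tag below matches one shown by `ollama list`.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "qwen3.5:4b"  # assumed tag; substitute your own

def chat(history):
    """POST the full message history; return the assistant's reply text."""
    payload = json.dumps(
        {"model": MODEL, "messages": history, "stream": False}
    ).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

def handle_command(cmd, history):
    """Dispatch slash commands; return True if cmd was handled."""
    if cmd == "/reset":
        history.clear()          # forget the session so far
    elif cmd == "/history":
        for m in history:        # replay the transcript
            print(f"{m['role']}: {m['content']}")
    elif cmd == "/clear":
        print("\033[2J\033[H", end="")  # ANSI: clear screen, cursor home
    else:
        return False
    return True

def main():
    history = []  # mid-session memory: the whole list is re-sent each turn
    while True:
        user = input("> ").strip()
        if user in ("/exit", "/quit"):
            break
        if user.startswith("/") and handle_command(user, history):
            continue
        history.append({"role": "user", "content": user})
        reply = chat(history)
        history.append({"role": "assistant", "content": reply})
        print(reply)

# Call main() to start chatting against a running Ollama instance.
```

The key point the post hints at: "remembering" history just means appending every user and assistant message to a list and re-sending the whole list with each request, since the model itself is stateless between calls.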
It's nothing fancy but it was a great way to learn how Ollama's API works under the hood.
GitHub: https://github.com/Aditya-rc4/localai_chat
Happy to hear any feedback or suggestions for improvements!
Beneficial-Job-3082@reddit (OP)
What do you think of this project then, what makes it bland or low standard or whatever? Honestly, I put some real effort into making this. I felt really great to build something like this for the first time, but I guess I am missing some key things.
Emotional-Baker-490@reddit
ewww, ollama
Beneficial-Job-3082@reddit (OP)
why dude, is there something wrong with it? just tell me, i'm an absolute beginner at this stuff, so i'd really appreciate it
mlhher@reddit
Ollama is taking llama.cpp, making it worse, making it slower, wrapping it in marketing, and saying "look at this shiny project."
OpenClaw falls into a similar category even if for different reasons.
Beneficial-Job-3082@reddit (OP)
I see...