Best chat interface currently (Aug 2025)
Posted by cmdr-William-Riker@reddit | LocalLLaMA | View on Reddit | 27 comments
I have a home server and I'm trying to find the best frontend chat interface to set up that would actually be useful for day-to-day use. LibreChat is alright but feels bloated and overcomplicated. OpenWebUI is alright too, but also feels a bit overcomplicated and somehow falls short on multi-model support; I'd like to be able to connect an Anthropic API key without jumping through hoops. For now I mostly make do with an interface I hacked together in Vue, but I'm curious if there are better options I've overlooked. What I would love is a good chat interface with the usual features (edit messages, retry, etc.), support for multiple models from multiple providers, artifact support, and basic chat session management (the ability to track the cost of the session in real time, like Roo or Cline, is the one fancy feature I would love to have). I don't need multi-user support, I don't need chain of thought, I just need a way to interact with the APIs and keep track of conversations.
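For anyone curious, the real-time cost tracking boils down to multiplying the token usage each API response reports by the provider's per-token prices. A minimal sketch in Python; the prices and model names here are illustrative only, so substitute your provider's actual current rates:

```python
# Minimal per-session cost tracker. Prices are illustrative, NOT real:
# look up the current per-million-token rates for your provider/model.
PRICES_PER_MTOK = {
    # model name: (input $/1M tokens, output $/1M tokens) -- assumed values
    "claude-sonnet": (3.00, 15.00),
    "gpt-4o-mini": (0.15, 0.60),
}

class SessionCost:
    def __init__(self):
        self.total = 0.0

    def add_turn(self, model, input_tokens, output_tokens):
        """Accumulate the cost of one request/response pair,
        using the token counts the API reports back."""
        inp, out = PRICES_PER_MTOK[model]
        cost = (input_tokens * inp + output_tokens * out) / 1_000_000
        self.total += cost
        return cost

session = SessionCost()
session.add_turn("claude-sonnet", input_tokens=1_200, output_tokens=800)
print(f"session cost so far: ${session.total:.4f}")  # -> $0.0156 with these assumed prices
```

In a frontend, you'd call `add_turn` with the `usage` fields from each API response and render `total` next to the chat.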
Frequent_Cow_5759@reddit
If you'd like to connect to different models, you can try using Portkey's AI gateway. Jump through models + add RBAC, budgets, etc.
there's a live session about this if you'd like to attend: https://luma.com/cywhfpko
sahilypatel@reddit
I had the same issue with LibreChat/OpenWebUI feeling bloated. Been using AgentSea instead -- clean interface, supports all major models, keeps history, and shows usage.
Huge-Promotion492@reddit
OpenWebUI feeling like a cockpit is a common vibe. If you're already halfway happy with your Vue setup, you might be closer than you think—usually it's just "a few toggles and a price tag" away from perfect. The real test for a chat UI is whether you stop noticing it: quick edit/retry, easy model swaps, and a tiny peek at cost so you don't get jump-scared at the end of a session. If something makes you dig through nested settings or breaks your flow on mobile/remote, it's not "best," it's busywork. Honestly, publish your Vue one when you can—half the "feature requests" threads are just people wanting exactly that: fast, unbloated, and predictable.
daniel_nguyenx@reddit
Shameless plug: if you’re on macOS, give boltai.com a try. It’s native, fast and feature-rich.
UndercutBeta@reddit
This needs a web app or iOS app
daniel_nguyenx@reddit
The mobile app is in beta (free in beta). Give it a try https://boltai.com/beta
spacemiidi@reddit
I'm using writingmate.ai as my all-in-one AI hub: access to the best models from all providers, no unexpected limit walls, works even when ChatGPT is down, and they finally added MCP integration. Advanced features like thinking/coding are better in ChatGPT, but it's decent enough for my basic tasks. I do a lot of writing and rewriting and care about getting the best text possible, and luckily I can compare responses to my prompt from up to four AI models.
JMowery@reddit
I'm also curious about this. OpenWebUI runs like garbage on my server (takes like 10 - 15 seconds to load initially), and it just feels so unbelievably heavy.
I want the control of OpenWebUI, but I don't need the dozens upon dozens of different pages. Also OpenWebUI is getting too commercialized for my taste.
z_3454_pfk@reddit
openwebui runs like shit. i think a lot of the frontend has blocking elements, which makes it super slow. on my iphone it takes about 4-5s to load when the whole thing has been cached, otherwise 10-15s. librechat is much faster (2s), but that’s also a bit of a mess
Realistic-Mix-7913@reddit
What type of CPU and memory? I have it running in a VM on my Proxmox host (3200G / 64 GB RAM / 2x NVMe), with the VM having 8 GB RAM and 2 vCPUs. It's pretty quick, but it's garbage on mobile (my complaint with it). +1 on them layering in more and more stuff that will probably end up being paid before too long.
JMowery@reddit
It's an old desktop gaming computer running Proxmox: an Intel 8700 CPU (from 2017 or 2018), 16 GB of RAM, and some SSDs. Running Debian with Docker containers. I also have other services running on it.
Definitely need to add more RAM or just build an actual server someday. But I'm still just in the tinkering phase.
Realistic-Mix-7913@reddit
Those 8700s have great IPC though (I have an 8086K in the closet I need to rebuild), but that RAM is probably your bottleneck. Are you running inference on the host as well?
JMowery@reddit
Nope! I have inference running locally on my main computer which is a 7950X + RTX 4090 + 64 GB DDR5.
OpenWebUI runs perfectly fine when I install it locally. I was just wanting to have it on my server for more availability and to keep my main PC a bit cleaner when possible (I frequently distrohop, lol).
Spirited_Example_341@reddit
I don't understand for the life of me why there aren't more AI character-card-based offline desktop apps, given how popular sites like Character.AI are. There used to be Backyard, but it's no longer updated... the only real alternative I've found so far is HammerAI, which actually works pretty well, but it would be nice to see other options too.
It baffles me that there are so few options, given that a lot of the 8B models can run just fine on a typical decent gaming PC setup and do well.
coffeeandhash@reddit
I'll be the guy to mention SillyTavern. For power users it's great to have that level of control over variables and customization options.
richardanaya@reddit
I’m biased as a programmer, but the best chat experience is one you make yourself. I made one that I’ve been polishing for personal use for the last year. It’s ultra simple and mobile-oriented; I’ve been tempted to make it even more minimal and remove button names and icons, because I know what everything is from memory.
cmdr-William-Riker@reddit (OP)
Nah, real programmers don't use any interface at all and interact with the APIs through curl /s. Sounds like a cool interface you made, though. I've got a similar setup right now, was just wondering what else is out there.
Linkpharm2@reddit
If you aren't using Ai to code your own frontend, are you really a member of r/LocalLLaMA?
cmdr-William-Riker@reddit (OP)
Oh I did, just wondered if there was anything that has the few features I haven't had time to add without being too bloated
Prestigious_Thing797@reddit
OpenWebUI definitely has a learning curve and I was cursing the developers when I first set it up. That being said, I've been able to connect it to every local model setup I've tried: llama.cpp, vLLM, Ollama, even KTransformers.
It has all the features I want and plenty more I just ignore.
Most of all I appreciate that I open it up and I can immediately type up my prompt without clicking through crap (looking at you anythingllm!).
I had my own that I used for a while but there were features I wanted and I got sick of reinventing the wheel.
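The reason one frontend can connect to all of those backends is that llama.cpp's llama-server, vLLM, and Ollama all expose an OpenAI-compatible /v1/chat/completions endpoint, so the request shape stays the same and only the base URL changes. A rough sketch in Python; the host, port, and model name below are placeholders for whatever your setup uses:

```python
import json

def build_chat_request(base_url, model, messages):
    """Build the URL and JSON body for an OpenAI-compatible
    /v1/chat/completions call; the same shape works against
    llama.cpp's llama-server, vLLM, and Ollama."""
    url = f"{base_url.rstrip('/')}/v1/chat/completions"
    body = json.dumps({"model": model, "messages": messages})
    return url, body

# Placeholder host/port and model name -- adjust for your setup.
url, body = build_chat_request(
    "http://localhost:8080",
    "local-model",
    [{"role": "user", "content": "hi"}],
)
```

POST that body with a `Content-Type: application/json` header and you can swap backends by changing `base_url` alone.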
__JockY__@reddit
Jan.ai is simple and effective.
It won’t do multimodal, but it is open source. You can bind a hotkey to make the prompt window pop up, which is nice.
Multimodal has been on the roadmap forever, so… maybe… “soon”?
unlucky_fig_@reddit
I’ve been using AnythingLLM for a couple of weeks and enjoy it enough. I’m not sure I would call it the best, but so far it’s not been the worst. LibreChat has been my least favorite: I struggled to get it set up and then struggled to use it, so I dropped it quickly.
Marksta@reddit
I don't really think there is a 'good' one as far as UI/UX goes. They all support a weird mishmash of a lot of things while missing others. The settings menu is guaranteed to be huge on any of them, but OpenWebUI's settings menu takes the cake for horrendous design.
It doesn't help that there are so many different use cases and they're all generalists for the most part, so they all definitely suck at some specific thing you might want to do.
Felladrin@reddit
Maybe you'll like DeepChat:
- Website: https://deepchat.thinkinai.xyz
- Source Code: https://github.com/thinkinaixyz/deepchat
It's a desktop app that doesn't look bloated, and the only feature it's missing from the ones you listed is real-time tracking of the session cost.
cmdr-William-Riker@reddit (OP)
That looks pretty interesting, will try it out! Thanks!
plankalkul-z1@reddit
You're asking a question that I have myself, for a long time now...
I ended up using https://github.com/Toy-97/Chat-WebUI
It has a few rough edges (I can't edit saved chat titles; image submission is buggy: it keeps submitting to the VLM the very first image you gave it in a chat session, even if you follow up with other images; etc.), but overall it's the very best UI I could find for my needs.
Maybe it will suit you as well: just like you, I find Open WebUI and other popular choices unnecessarily overcomplicated. For goodness' sake, I already have inference engines, lots of them... But no, every other UI (with the exception of Mikupad) insists on installing gigabytes of its own...
cmdr-William-Riker@reddit (OP)
I'll look into Chat-WebUI, thanks! Glad to know it's not just me. I'll see if I can clean up and publish my interface later also, just wasn't sure if there was some obvious choice available that I was missing