OSS alternative to Open WebUI - ChatGPT-like UI, API and CLI
Posted by mythz@reddit | LocalLLaMA | View on Reddit | 103 comments
phenotype001@reddit
What do you mean OSS alternative, Open-WebUI isn't closed.
mythz@reddit (OP)
Not clear why I was downvoted for posting a link to a reddit thread that answers the question and explains why their new License is no longer OSI compliant, but the context matters and is important:
https://www.reddit.com/r/opensource/comments/1kfhkal/open_webui_is_no_longer_open_source/
__JockY__@reddit
As far as I can tell it’s open except for replacing their logo. Fine with me. What’s the issue?
simcop2387@reddit
Those extra restrictions mean it doesn't meet the OSI definition (and many others) of Open Source™. That's not necessarily a problem for a lot of users, but it can be for those who want to (rightfully, for themselves) stick to more ideologically aligned projects. It may also make some businesses/non-hobbyists more wary of using the project due to potential future changes to the license, since the current setup leaves little room for a fork or a path to keep using it if things do change: the license effectively blockades forks from happening now (they could still happen, but the original developer could then use other means, like trademark, to shut them down, since the license does not allow removing the branding/trademark-able bits).
__JockY__@reddit
I see. Thanks. For those affected it would seem there are three main remedies: fork the last known OSI-compliant commit, pay for a license, or don’t use open-webui.
The OSS/OSI purity thing is of no interest to me, so I’m happy toddling along as-is, but I get why it would bother others.
Thanks for taking the time to explain an unfamiliar perspective.
stylist-trend@reddit
Yes, these are the same choices as every time a FOSS project does a rug-pull, and it doesn't make the situation any better.
__JockY__@reddit
Yeah it seems like a good way to shoot oneself in the foot! Surely it just dissuades developers from making contributions and encourages forks from the last available truly open commit.
I trust this happened with open-webui and there’s a forked, open version?
milkipedia@reddit
forking a project that is this active in updates without some kind of backing is a recipe for a failed, dead-end project.
tedivm@reddit
People don't always fork, they often go to alternatives. I switched to LibreChat myself and have been very happy with that decision, not just because it's truly open source but also because it's simply a better application.
simcop2387@reddit
Yeah, it's one of those areas where most direct users of the project aren't pragmatically affected, but they are at a fundamental level in terms of what they're allowed to do with the software. The typical term used these days for this situation is "source available" rather than "open source", because of the common expectations attached to things called "open source". The Futo folks talked about those expectations relatively recently and made some criticisms of how the OSI and the FSF do things: https://futo.org/about/futo-statement-on-opensource/ . There are definitely good arguments on both sides here. I personally lean more towards the FSF/OSI principles, that users should have those freedoms, but I also agree with Futo that questioning whether that's the only "proper" way is reasonable, as long as software is what puts food on developers' tables. A fun philosophical conundrum of ideological arguments vs pragmatism.
Tai9ch@reddit
If all you ever do with the program is download it and run it, then it's not a huge difference.
If you want to integrate with other software, the license effectively prevents that. You can't use any of the code in any way except as part of a web based application with the provided UI that displays their branding in the way the current complete project does.
HiddenoO@reddit
OSS is not defined as 'whether __JockY__ is fine with it'.
__JockY__@reddit
I never said it was, please don’t put words in my mouth. That’s a gross misrepresentation of my statement. For shame, man.
The question was: what’s the issue with the modified license?
HiddenoO@reddit
They never claimed it was an issue for everybody (or an issue at all). As for why it's an issue for many people, see my edit.
__JockY__@reddit
So I guess we’re gonna see a fork like `actually-open-webui` based on the last OSI-compliant commit?
HiddenoO@reddit
Probably, I'm not sure whether you'll actually find a lot of people working on it though given that there are already a bunch of still OSS alternatives such as LibreChat.
rm-rf-rm@reddit
Better link to the key comment: https://www.reddit.com/r/opensource/comments/1kfhkal/comment/mqqtb0r/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
Marksta@reddit
If you click the fork button on Github you violate their license terms and open yourself up to being sued. Absolutely not even close to OSS.
ClassicMain@reddit
Absolutely untrue. Provide source for your claim.
Marksta@reddit
The license is very explicit, it doesn't cover just the software. It covers the software, how you deploy it, and how you distribute it to others. So the license absolutely covers using a public GitHub repo, and changing the name of the repo from `open-webui/open-webui` to something else would be removing Open WebUI branding, which as defined includes their name. Under these license constraints, pressing the fork button is 100% breaching their license as they wrote it.
ClassicMain@reddit
You can interpret it that way
Or you can hold onto your sanity and interpret it like anyone else would which is that software licenses are limited to the software.
Github is not open webui.
The fact that github will show your own username as the repository owner if you fork it is not part of the open webui software and not covered by a software license either.
Betadoggo_@reddit
Not at all, it's only if you remove their branding from the interface
https://docs.openwebui.com/license/
Marksta@reddit
Forking the repo on GitHub is distributing it. The official `open-webui/open-webui` repo name falls under "Open WebUI" branding in your distribution, so a `MyName/open-webui` fork would have an element of their branding removed, and you're not permitted to remove any of it.
Would it hold up in court? Who knows; nobody is going to waste their time challenging the legal interpretation of that. The literal interpretation is that you absolutely cannot fork it.
This is why we just say they're not OSS, they don't have an OSI approved license and it makes it an unknown liability that requires lawyers to figure out instead of developers or users.
Betadoggo_@reddit
Sure they could sue for that, or sue for anything really, but any reasonable court would throw it out as the name of the repo owner would not be considered "branding" to a reasonable person. I'm not disagreeing that the openwebui license is not OSI compliant, I'm just saying that the risk of getting sued is not a concern for any user or developer following the license as it's intended and clarified in other docs.
OSI certification means very little; for serious business use, lawyers will always need to get involved. OSI lists AGPL on their site, a license notorious for scaring away online-service-based companies.
mythz@reddit (OP)
https://www.reddit.com/r/opensource/comments/1kfhkal/open_webui_is_no_longer_open_source/
dash_bro@reddit
It's literally permissive and "closed" only to protect their branding
SlowFail2433@reddit
Many only count apache 2.0 or MIT as open
mythz@reddit (OP)
You can find the OSI list of approved OSS licenses at:
https://opensource.org/licenses
SlowFail2433@reddit
Yeah and I disagree with them. I only use Apache 2.0 or MIT where possible because they are the least restrictive and very crucially they have been extensively tested in court.
mythz@reddit (OP)
Sure everyone can have their preferences, and OSS licenses are different to serve different purposes and use-cases, but this is the OSI maintained canonical list of approved licenses which meet the OSS definition.
SlowFail2433@reddit
It's not a canonical definition, it's one of many definitions. Linux history and politics are complex.
mythz@reddit (OP)
Since you dismissed it so confidently, I thought this was a good question to ask llms.py!
> Where can I find the canonical list of OSS licenses?
gpt-5:
Short answer:
grok-4
The most authoritative and canonical source for a list of approved open-source software (OSS) licenses is the Open Source Initiative (OSI), which reviews and certifies licenses that meet the Open Source Definition.
Another widely used and comprehensive resource is the SPDX License List...
Screenshot receipts:
https://gist.github.com/mythz/6aa45d0dc2db29822293a7695947abfb
SlowFail2433@reddit
These are hallucinations
mythz@reddit (OP)
I'm sure they're scanning this Reddit comment right now and correcting the error of their ways!
SlowFail2433@reddit
Okay so you are one of those people who just believes what LLMs say and never questions it.
Ironically even in the reply you posted it admitted that the FSF (Stallman’s gang) has their own competing list. In my experience the FSF definitions actually get used the most by the way. I am saying that as someone who is not even a Stallman fan.
Linux history and politics is a story about many competing factions each competing to make their definitions and standards the most prominent. It’s literally an ongoing conflict so it doesn’t make sense to declare winners already.
Both Linus and Stallman are much more influential than the OSI. If there was going to be a “spokesperson for open source” it would be one of those two not the OSI.
Fox-Lopsided@reddit
MIT <3
FastDecode1@reddit
So both you and Open WebUI developers agree, it's not open-source software.
mythz@reddit (OP)
OSS has a meaning; adding your own custom terms that violate the definition makes it no longer OSS. You can call it OpenWebUI/OpenAI/whatever, but you can no longer call it OSS.
SameIsland1168@reddit
Exactly. Bingo. Sorry OWU, we appreciate your contribution, but you’re clearly not understanding OSS. OSS means allowing for assholes to use your work in the way you don’t want them to.
mythz@reddit (OP)
You're under no obligation to publish your code under an OSS license; doing so communicates that you welcome others to fork and use your contributions. Don't do it if you would prefer others not be able to use it under the OSS terms it was published with.
JEs4@reddit
Sorry people don’t seem to understand this. Using tools for toys is one thing but closed licenses create an immense amount of challenges for production use in any form. Thank you for this!
FastDecode1@reddit
Yes it is. Read the license.
It's open in the same way that OpenAI is open.
lolwutdo@reddit
What we really need is an Open WebUI alternative that doesn't require docker/python install bs; give me some clean simple installer for MacOS/Windows like LM Studio.
egomarker@reddit
And it has to be a native build, not a 1.2GB-RAM Electron (or the like) app like Jan.
pmttyji@reddit
Agree. BTW Jan already removed the Electron thing months ago.
egomarker@reddit
The Tauri they use now is the same, if not worse.
pmttyji@reddit
Yep. But their recent Windows setup files are only around 50MB; the other setup files are under 200MB.
nmkd@reddit
It still steals like a gigabyte of VRAM because it's web based
pmttyji@reddit
Replied on the other comment; I started using llama.cpp at the start of this month.
BTW which tool are you using?
nmkd@reddit
Mostly just backend.
When I need a frontend, usually llama-server's built-in UI.
pmttyji@reddit
Thanks. I think it'll take some time for me to like the built-in UI. Wish I'd explored llama.cpp & ik_llama.cpp 6 months ago.
nmkd@reddit
Yeah, I switched fairly recently from koboldcpp since it's a bit behind in terms of features (though it has more overall, I just don't need most of them).
pmttyji@reddit
Actually Koboldcpp (its cmd window) helped me get comfortable with llama.cpp & ik_llama.cpp. Yeah, many aren't even impressed with its UI; I'm fine with it anyway.
egomarker@reddit
RAM usage is what matters, not file size.
pmttyji@reddit
Yeah, I'm aware. Previously their setup files were 500+MB.
Anyway, since the start of this month I've been using llama.cpp. For tiny/small models I still use Koboldcpp & Jan, for instant use (I don't want to run cmd stuff just for tiny models).
nmkd@reddit
Or LMStudio.
PeruvianNet@reddit
It's called llama-server.
Betadoggo_@reddit
You don't actually need docker, it's just the safest way if you're deploying for multiple users. Installation is as little as 1 line (assuming no environment conflicts), and 4 if you need a separate python environment (a good idea in most cases).
I have the super basic update and launch script that I use here: https://github.com/BetaDoggo/openwebui-launcher/blob/main/launch-openwebui.bat
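For reference, the non-Docker route is roughly this (a sketch assuming pip and a standard Python setup; adjust the activate line for your OS):
$ pip install open-webui && open-webui serve
or the ~4-line version with a separate environment:
$ python -m venv owui-env
$ source owui-env/bin/activate   (Windows: owui-env\Scripts\activate)
$ pip install open-webui
$ open-webui serve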
pmttyji@reddit
+32K
hyperdynesystems@reddit
This is my biggest issue with most of these, I don't feel like installing Docker (on Windows at least it's very annoying).
mythz@reddit (OP)
Sure, tho my lm-studio.AppImage is sitting at 1GB, whilst the llms_py python package is <700KB with all its CLI and Server API functionality in 1 .py file. No plans to package it in an Electron AppImage wrapper, but am looking at a Docker version since someone asked for it.
BTW nothing wrong with LM Studio - it's great for local models, I just needed my own LLM gateway and prefer to maintain exportable conversation/request log history in browser storage.
ai_hedge_fund@reddit
I starred the repo because I am interested in supporting this work and also to give you a small win for putting up with the comments here
There is a lot of whitespace still in the client applications and I support more choice beyond Open WebUI. WebUI has its place but it’s not for everyone.
We have had a need for a much lighter client application that can connect to OpenAI-compatible endpoints so your single-file contribution is well received here.
Thank you
DistanceSolar1449@reddit
I’m still waiting for a client that lets me ditch ChatGPT Plus
This is just the basic feature set to make a product that’s usable as an alternative to ChatGPT. OpenWebUI is the closest but doesn’t support native web search, which is a shame, because ChatGPT web search is a killer feature.
einmaulwurf@reddit
You could take a look at librechat
DistanceSolar1449@reddit
Its code interpreter is proprietary and paid, and it doesn't support OpenAI web search either. It's also slower than OpenWebUI.
Watchguyraffle1@reddit
Huh? It’s not closed source. I see it right at that link.
DistanceSolar1449@reddit
Ok, point at the source code for the code interpreter then
Watchguyraffle1@reddit
Oh, you mean the specific interpreter. Fair, I see your point. What about rolling your own, like this guy:
https://github.com/ronith256/Code-Interpreter-LibreChat
Express_Nebula_6128@reddit
Conduit, it’s open source afaik
Everlier@reddit
Hollama is the lightest of the fully featured ones I know. In fact, you don't even have to install it and can run it off their GitHub Pages.
mythz@reddit (OP)
thx, appreciated!
hapliniste@reddit
I'll go with librechat myself; it seems like it's been the best solution for a long time. Any input on that?
pmttyji@reddit
Can I use that with existing downloaded GGUF files? (I use Jan & Koboldcpp that way)
I couldn't find that option when I checked last time months ago.
hapliniste@reddit
You have to run the models yourself I think; there's no integrated local backend AFAIK.
pmttyji@reddit
Oh OK. Thanks for this info.
AD7GD@reddit
I didn't find it to be a good option for pairing with local LLMs. For example, to beautify the names of models when using the openai API, it has a mapping of model name to pretty name. But if your model is not in that list (for example, you are running it with vLLM), then you get generic attribution, no matter what model you use. So even after one exchange with an LLM it's not easy to know which LLM you used. If you are constantly experimenting with models, it's really a non-starter.
tedivm@reddit
I don't understand what you're trying to say here. I use LibreChat with local models without issue.
pokemonplayer2001@reddit
I have not heard of librechat before. Thanks for pointing it out.
mythz@reddit (OP)
Not tried it, but I needed LLM gateway features in a ComfyUI Custom Node, so llms.py is intentionally lightweight with all CLI/Server functionality in a single .py that doesn't require any additional deps inside ComfyUI and only aiohttp outside it. Basically it should be easy to install and use in any recent Python environment without conflicts.
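e.g. the typical install/run flow is just this (a sketch only; treat the exact package name and flag as illustrative and check the repo README for the authoritative usage):
$ pip install llms_py     (only pulls in aiohttp)
$ llms --serve            (starts the server/UI, as referenced elsewhere in this thread)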
entsnack@reddit
nah I’m good. OWUI is tried and tested, and has an active user and contributor community which ensures that it runs reliably. There are tons of also-rans in this space.
rm-rf-rm@reddit
There are many, many such projects - if you could share information such as long-term goals, dev sustainability, etc., it would help potential users like me (who do want to move away from OpenWebUI) use and support you.
Please also share how AI is used to generate code (vibe coded, or a robust agentic system with a full SQA suite).
__JockY__@reddit
I like that there are options other than owui, but without any form of tool calling / MCP it's not really a viable alternative for many folks. However I do like the clean command-line client, that's pretty rad.
mythz@reddit (OP)
Yep, it's still in active development. I'll be looking at Docker support soon, then a plugin system; feel free to submit any feature requests as Issues to get features prioritized sooner.
Better-Monk8121@reddit
You mean in active vibe coding stage? GitHub is full of it already
__JockY__@reddit
Less interested in plugins, more interested in MCP :)
egomarker@reddit
No MCP, plugins or llama.cpp though
__JockY__@reddit
Shame about MCP, but thank god it avoids plugins and yet another bundled copy of llama.cpp... and I can't tell you how refreshing it is to see one of these vibe-coded projects that doesn't rely on Ollama.
All we need 99% of the time is a way to hit an OpenAI-compatible API. This is The Way.
egomarker@reddit
It actually does have Ollama support, out of all the local backends.
z_3454_pfk@reddit
looks good. the big thing with OWUI is how easy it is to expand with functions and custom tools, something other uis (such as this or librechat) lack
mythz@reddit (OP)
Yeah, adding a plugin system is on the short-term todo list. You can already run it against a locally modified copy of the UIs with `llms --root /path/to/ui`. It uses native JS Modules in web browsers so there's no build step, i.e. you can just edit + refresh at runtime.
I'm also maintaining a C# port of this which uses the same UIs and .json config files and specifically supports custom branding, where every Vue component can be swapped out for a local custom version:
https://docs.servicestack.net/ai-chat-ui#simple-and-flexible-ui
z_3454_pfk@reddit
that sounds really good. is there any chance of a docker container? and how does mobile support look? i want to try this, but without docker support it's a bit cumbersome and I feel a lot of people will say the same (even though it's just one command to run).
mythz@reddit (OP)
Docker should now be supported, e.g:
$ docker run -p 8000:8000 -e OPENROUTER_API_KEY=$OPENROUTER_API_KEY ghcr.io/servicestack/llms:latest
docs:
- https://github.com/ServiceStack/llms#using-docker
- https://github.com/ServiceStack/llms/blob/main/DOCKER.md
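If you also want to keep the user-editable llms.json/ui.json outside the container, a bind mount along these lines should work (a sketch only; the in-container config paths here are illustrative, see DOCKER.md above for the exact locations):
$ docker run -p 8000:8000 -e OPENROUTER_API_KEY=$OPENROUTER_API_KEY \
    -v $PWD/llms.json:/app/llms.json \
    -v $PWD/ui.json:/app/ui.json \
    ghcr.io/servicestack/llms:latest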
mythz@reddit (OP)
Sure, it only has 1 very popular dependency (aiohttp), so installing it with pip shouldn't cause any conflicts.
Can definitely run it in Docker, although that would limit it to running 1 command, i.e. `llms --serve`, but still doable. I'm assuming running it with Docker Compose would be OK, as it would need an external volume for the user-modifiable llms.json/ui.json.
All CSS uses tailwindcss so it's easy to make it responsive, but there's a lot of UI to try to fit within a mobile form factor, so I will only look at supporting iPads/tablets at this time.
If you raise an issue I can let you know when a Docker option is available.
j17c2@reddit
I understand OWUI is only "semi" OSS, but I'd much rather continue to use it. As an OWUI user, I can confidently pull the Docker image and get a few changes every few weeks, because I know OWUI is a mature, well-maintained repository with a key, active maintainer and several contributors continually working on it, fixing bugs, adding new features, etc. It has lots of features, many of which are well documented. I have no issues using it for myself, and if it means a bit of branding, then sure. I personally weigh actively/well-maintained "semi" open source over "fully" open source software like this, which seems to offer no advantages aside from being properly OSS.
FastDecode1@reddit
It's not "semi OSS", it's proprietary software with a proprietary license.
If you like the software and the development model (where Open WebUI, Inc. makes contributors sign a CLA to transfer their rights to the company so the company can use their work and sell enterprise licenses which, funnily enough, allow those companies to rebrand the software), then go ahead and use the software. But don't go around spreading misinformation about it being open-source software.
Open WebUI is free as in beer, not as in speech. It's literally one of the best-known ways of describing the difference between OSS and no-cost software, yet people still get it wrong.
j17c2@reddit
you're right, I agree. I didn't know that. However, seeing as the top comments all mention it, I thought it was quite clear to other people that Open WebUI is not OSS anyway.
I'd like to point out that I've noticed a few people like me don't understand what it actually means for software to be open source, or when software actually is or isn't open source. To me, Open WebUI feels open source, but I know it's not open-source software. I'd like to think you'd agree and understand that, from my perspective, it FEELS open and is literally OPEN. Like, I can see the code, I can fork the repo, I can edit the code... I now know that's not quite what constitutes open-source software, but it's really confusing what makes software OSS. tl;dr naming is terrible and I got confused by it. Hopefully I'm not the only idiot.
wishstudio@reddit
To some extent, it's even worse than proprietary software. Typical proprietary software is privately developed, and if it uses other free software it respects the license terms, which usually means proper attribution. In the case of OWUI, they want to use others' contributions freely but forbid others from doing the same to them.
j17c2@reddit
well if by "... but forbid others to do the same to them" you mean using it freely, the license doesn't forbid others from forking, modifying, using, or redistributing it; there's just no rebranding for 50+ user deployments without permission, from what I can tell. but yeah, kind of hypocritical. though imo that's still much better than typical proprietary software (like Windows) where you don't have access to the source code at all.
wishstudio@reddit
They know what "use it freely" means. Just read their CLA:
> By submitting my contributions to Open WebUI, I grant Open WebUI full freedom to use my work in any way they choose, under any terms they like, both now and in the future.
And if you use their code:
> ..., licensees are strictly prohibited from altering, removing, obscuring, or replacing any "Open WebUI" branding, including but not limited to the name, logo, or any visual, textual, or symbolic identifiers that distinguish the software and its interfaces, in any deployment or distribution, regardless of the number of users, except as explicitly set forth in Clauses 5 and 6 below.
Sudden-Lingonberry-8@reddit
checks license
is this even OSS?
mythz@reddit (OP)
The New BSD License is a very popular OSS License https://opensource.org/license/bsd-3-clause
ThinCod5022@reddit
no MIT license? :c
Competitive_Ideal866@reddit
FWIW, I just asked Claude to write me one. Simple web server but it does what I want: