Ollama violating llama.cpp license for over a year
Posted by op_loves_boobs@reddit | LocalLLaMA | View on Reddit | 140 comments
teleprint-me@reddit
This is why I do not use the MIT License.
You are free to do as you please with the underlying code because it's a release without liability.
It does cost you cred if you don't attribute sources. Businesses do not care. Businesses only care about revenue versus cost.
If you really care about this, LGPL, GPL, or AGPL is the way to go. If you want to allow people to use the code without required attribution, LGPL or GPLv2 is the way to go.
IANAL, this stuff is complicated (and I think it's by design). I find myself learning how licensing and copyright work every day and my perspective is constantly shifting.
In the end, I do value attribution above all else. Giving proper attribution is about goodwill. Copyright is simply about ownership, which is why I think it's absolutely fucked up.
Personally, I would consider the public domain if it weren't so susceptible to abuse, which again is why I avoid the MIT License and any other license that enables stripping creators, consumers, and users of their freedom.
eNB256@reddit
If I remember correctly, and as far as I know (NAL,) MIT/Expat already requires attribution, unless only a small part was taken
The GPLs do have other stuff that enhances attribution, e.g. that the files (GPL-2.0) / work (GPL-3.0) must contain a prominent notice that you have modified it, with a relevant date, but that part seems to be ignored. The GPLs are instead perceived as the "give source code" license, but the MPL-2.0 might then be closer.
Though the term freedom tends to be used, it seems more like the following: you can play the game with me, but you'll have to follow the rules for fair play. However, interestingly, it seems it's just the source code rule that's followed. It seems developers don't really read it.
In addition to attribution,
The complete corresponding source (all parts that make the whole, excluding system libraries) must be given in a certain way, and build scripts. Parts that do not make a whole, like separate programs, can be bundled together however.
Prominent notice that the files/work has been modified (interestingly Apache-2.0 has this too, as "files") and a relevant date
Extraneous restrictions cannot be imposed. This part also seems to be ignored, e.g. it is forbidden for Apache-2.0/BSD-4-Clause/GPL-3.0 parts to be combined with GPL-2.0-only parts if the GPL-2.0-only part(s) are not yours.
the one who imposed the GPL isn't subject to the rules, only others are
other stuff...
-p-e-w-@reddit
No, the MIT license doesn’t require “attribution”. It requires the original copyright and permission notices to be preserved, nothing more and nothing less. You can read the entire license yourself in one minute; it’s literally three paragraphs.
eNB256@reddit
... which is a form of attribution.
Copyright (C)
Perhaps the furthest away that might meet the conditions to include it in the software would be to store the "copyright notice and permission notice" in a part of an executable that's never displayed except to users who use a hex editor or some other software that displays ASCII.
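For what it's worth, checking for that kind of embedded notice is easy enough (a sketch assuming a Unix-like system with binutils installed; the binary path is illustrative):

    # dump printable strings from the binary and look for an MIT-style notice
    strings /usr/local/bin/ollama | grep -i "permission notice"
    strings /usr/local/bin/ollama | grep -i "copyright (c)"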
SidneyFong@reddit
What do you mean? The GNU licenses are strict supersets of MIT and they aren't any easier to comply with or enforce...
op_loves_boobs@reddit (OP)
The lack of attribution by Ollama has been mishandled for so long it's virtually sophomoric by now. Other projects making use of llama.cpp, like continue.dev, at least try to give their roses. It really makes one question whether Ollama is withholding credit to make itself look better as VC bait.
I concur heavily with the view of one of the commentators at Hacker News:
IShitMyselfNow@reddit
Counterpoint: ...
No wait I can't think of one. There's no good reason to do what they're doing.
Pyros-SD-Models@reddit
Providing entertainment?
Because it's pretty funny watching a community that revolves around models trained on literally everything, licensing or copyright be damned, suddenly role-play as the shining beacon of virtue, acting like they've found some moral high ground by shitting on Ollama while they jerk off to waifu erotica generated by a model trained on non-open-source literature (if you wanna call it literature).
Peak comedy.
its_an_armoire@reddit
Humans are complex and can hold many ideas at the same time. In fact, holding contradictory ideas is normal for humans, including you.
I can smoke cigarettes and tell my nephew not to start smoking. Am I a hypocrite? Yes. But am I right? Also yes.
Snoo_28140@reddit
The difference is that AI training is argued to be transformative use that creates an entirely new work, while Ollama literally ships with llama.cpp.
candre23@reddit
It might have been explainable by mere incompetence a year ago. At this point though, it's unambiguously malicious. Ollama devs are deliberately being a bag of dicks.
-p-e-w-@reddit
The MIT license doesn’t require “attribution”, at least not in the form many people here seem to expect. If it did, almost every website would need to be riddled with attributions for the countless MIT-licensed JS libraries that the modern web relies on.
What it requires is including the original copyright notice in the derivative software, which Ollama currently doesn’t do.
So there are two separate issues here:
Expensive-Apricot-25@reddit
they do credit llama.cpp on the official ollama page.
This is fake news.
lothariusdark@reddit
Where? I wrote about it a few days ago, there is no clear crediting on the readme.
Under the big heading of Community Integrations you need to scroll almost all the way down to find this in between:
Supported backends
Neither does the website contain a single mention of llama.cpp acknowledging the work serving as a base for their entire project.
That's not giving credit, that's almost purposeful obfuscation in the way it's presented. It's simply sleazy and weird to hide it; no other big project that's a wrapper/installer/utility/UI for a different project does this.
cms2307@reddit
Now that llama.cpp has real multimodal support, there’s no need for ollama
a_beautiful_rhind@reddit
There never was.
TheOneThatIsHated@reddit
There is value in an easier-to-use server, downloads, interface, etc., but LM Studio is so much better at all of it, not to mention much better performance and support.
MorallyDeplorable@reddit
yea, ollama's kind of like the worst offering there is. It's slow to inference, it's tedious to configure
FotografoVirtual@reddit
In my case, Ollama was the only LLM tool that functioned immediately, just a single command and it connected perfectly with open-webui for seamless model switching. After compiling every version from source code for over a year without any issues, I can confidently say that 'tedious' is not a word I'd use to describe it.
nic_key@reddit
Same for me. I also wrote about losing multiple hours of my time on llama.cpp compared to Ollama and how you need third party software to do model swapping and more manual configuration for other things compared to Ollama (some time ago in another post as a comment).
But for the people in this subreddit Ollama is just a llama.cpp wrapper (but llama.cpp is not a wrapper or extension for GGML of course) and I perceive some kind of cult or religion behind llama.cpp. All glory to llama.cpp, which is completely independent and god forbid does not need any other dependencies.
To be extremely clear here, I am not at all against llama.cpp and everything that came out of it. But it seems like a purely technical discussion about inference tools is overshadowed by the company behind the tool.
I am extremely thankful for llama.cpp and also I recognize that it comes with many benefits AND tradeoffs. I don't have hours to spend on looking up CLI options in missing or deleted docs, so if it is possible for me, I will choose the tool that offers similar functionality (with less control) that simply does the job quickly.
Marshall_Lawson@reddit
i agree, I am a complete moron and ollama is the first AI thingy I've gotten to run locally on my computer, and i was trying for like a year.
tiffanytrashcan@reddit
Have you not tried KoboldCPP? It's super simple. No command line needed.
BinaryLoopInPlace@reddit
lmstudio is the easier version of ollama, so easy I could even set up a local API with it without much fuss at all
I don't really see the point of ollama tbh
MorallyDeplorable@reddit
"After I can say it's not tedious"
it still defaults to 2048 context
FotografoVirtual@reddit
/set parameter num_ctx
But if you're using open-webui, all ollama parameters (including the context size) are configured through its user-friendly interface. This applies to both the original model and any custom versions you create.
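For anyone driving Ollama over its HTTP API instead, the same parameter can be passed per request (a sketch; the model name is illustrative):

    curl http://localhost:11434/api/generate -d '{
      "model": "llama3.2",
      "prompt": "Why is the sky blue?",
      "options": { "num_ctx": 8192 }
    }'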
MorallyDeplorable@reddit
Impressively missing the point there
FotografoVirtual@reddit
what's the point, then? Enlighten me, my friend
cms2307@reddit
Ollama shouldn’t use proprietary and hard to work with files, they should just use the ggufs directly like literally everything else
SporksInjected@reddit
If only there was some solution that did this
faldore@reddit
I love ollama, and regularly proselytize it on Twitter.
But this context issue is a valid criticism.
It's not dead simple to work around either, it requires printing the Modelfile, modifying it, and creating a new model from it.
They should make it easy to change at runtime from the repl, and even when making requests through the API.
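For reference, that workaround looks roughly like this (a sketch; the model names and context size are illustrative):

    # dump the current Modelfile, bump the default context, and register a new model
    ollama show --modelfile llama3.2 > Modelfile
    echo "PARAMETER num_ctx 16384" >> Modelfile
    ollama create llama3.2-16k -f Modelfile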
Fortyseven@reddit
They bumped it to 4096 in 0.6.4 for what it's worth.
But I have OLLAMA_CONTEXT_LENGTH=8192 set and it's never been an issue. (And I typically set num_ctx in my inference options anyway.) Or am I misunderstanding the needs?
(This isn't a 'gotcha' response, I'm legitimately curious if I'm overlooking how others are using these tools.)
BumbleSlob@reddit
I don’t understand, you can change it via front ends like Open WebUI. It’s a configurable param at inference time.
BangkokPadang@reddit
If you’re doing that with an ollama model you haven’t fixed or didn’t ship with the right configuration, ollama is only receiving the most recent 2048 tokens of your prompts.
You can set your frontend to 32k tokens and the earliest/oldest 30k will just be discarded.
faldore@reddit
You can limit it at inference time.
But the max is set by the Modelfile.
sqomoa@reddit
If you’re using docker, IIRC it’s easy as setting an environment variable for context size as the new default for all models.
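Something like this, assuming the official ollama/ollama image (values illustrative):

    docker run -d --name ollama \
      -e OLLAMA_CONTEXT_LENGTH=8192 \
      -v ollama:/root/.ollama -p 11434:11434 \
      ollama/ollama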
BumbleSlob@reddit
Then change your parameters lol. In what world is this a valid criticism lmao.
FotografoVirtual@reddit
In the world of Ollama haters, the same ones who are downvoting your comment.
arman-d0e@reddit
I think what he meant by tedious was just unnecessary and annoying.
Now that llama.cpp has multimodal support, Ollama is just llama.cpp with different and arguably worse commands.
Fortyseven@reddit
I dunno, man, it's run great out of the box every time I've installed it. Only time I had to 'configure' anything was when I was sharing the port out to another box on the network and just had to set an environment variable.
relmny@reddit
Yes it is.
Claiming there isn't is being blind to the truth. And it doesn't help anyone.
It is probably the most used inference wrapper because it is very easy to install and get running. Having some kind of "dynamic context length" also helps a lot to run models. And being able to swap models without any specific configuration files is great for beginners or people that just want to use a local LLM.
And I started to dislike (or even hate) Ollama with the "naming scheme", especially with DeepSeek-R1, which made people claim that "DeepSeek-R1 sucks, I run it locally on my phone, and it is just bad".
I also started to move to llama.cpp because of things like that. Or because of things like OP's post, or because I want more (or actually "some") control.
But Ollama works right away. Download the installer, and that's it.
Again. I don't like Ollama. I might even hate it...
cms2307@reddit
If you want something that works right away, or is great for just chatting, then LM Studio is way better than Ollama, and if you want the configuration you can use llama.cpp. Ollama really isn't that much easier than llama.cpp anyway, especially for inexperienced users who may have never seen a command line before installing it.
sammcj@reddit
LM studio is closed source, their license doesn't let you use it in the workplace without seeking their permission first, it doesn't have proper dynamic model loading via its API and it's an electron web app.
MrSkruff@reddit
Could you explain this?
Organic-Thought8662@reddit
I would normally recommend KoboldCPP as a more user friendly option for llama.cpp. Plus they actually contribute back to llama.cpp frequently.
relmny@reddit
Sorry but saying "isn't that much easier than llama.cpp" is just not true.
You download the installer, install it, and then download the models even from ollama itself (ollama pull xxx). It works. Right away. It swaps models. It has some kind of "dynamic context length", etc.
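The whole flow really is just something like this (model tag illustrative):

    ollama pull qwen3:8b
    ollama run qwen3:8b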
And yes, LM Studio is an alternative, but it is just that, an alternative. And another wrapper.
There's a reason why Ollama has so many users. Denying that makes no sense.
I hate looking like I'm defending Ollama, but what is true, is true no matter what.
Fortyseven@reddit
For a field full of free options to use, a lot of folks are behaving as if they have a gun pointed to their head, being forced to use Ollama.
I'd never give shit to someone else using the tools that work for them. Valid criticisms, sure, but at the end of the day, if we're making cool shit, that's all that matters.
lothariusdark@reddit
LM Studio isn't open source so that's a no from me.
plankalkul-z1@reddit
What "control" are you missing, exactly? Genuine question.
Can you use, say, a custom chat template with llama.cpp? With Ollama, it's trivial (especially given that it uses standard Go templates).
Modifying system prompt (getting rid of all that "harmless" fluff that comes with models from big corps) is also trivial with Ollama. Any inference parameters that I ever needed are set trivially.
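To illustrate, a minimal Modelfile sketch (the base model, special tags, and wording are placeholders; Ollama's TEMPLATE directive takes a standard Go template):

    FROM llama3.2
    SYSTEM """You are a concise assistant. Answer directly, without safety boilerplate."""
    PARAMETER num_ctx 8192
    PARAMETER temperature 0.7
    TEMPLATE """{{ if .System }}<|system|>{{ .System }}<|end|>{{ end }}<|user|>{{ .Prompt }}<|end|><|assistant|>"""

Then register it with something like: ollama create concise-llama -f Modelfile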
So what is it that you're missing?
Granted, Ollama wouldn't help you run a model that's much bigger than total memory of your system, but if you're in that territory, you should look at ik_llama, ktransformers, and friends, not vanilla llama.cpp...
P.S. Nice atmosphere we've created here at LocalLLaMA: it seems to be impossible to say a single good thing about Ollama without fear of being downvoted to smithereens by those who don't bother to read (or think), just catching "an overall vibe" of a message is enough to trigger a predictable knee-jerk reaction.
You seem to have caved in too, didn't you? Felt obliged to say you "hate" Ollama?.. Hate is a strong feeling, it has to be earned...
relmny@reddit
I started trying to move away from Ollama after the "naming" drama that confused (and still does) many people, and after realizing that they don't acknowledge (or barely do) what they use.
That led me to not trust them.
Maybe that "atmosphere" (it depends on the thread) is because, as I mentioned before, Ollama uses other open-source code without properly acknowledging it.
Anyway, by "control" I mean things like offloading some layers to the CPU and others to the GPU (and by doing so being able to run Qwen3-235B on a 16GB GPU at about 4.5 t/s).
Maybe that's possible in Ollama, but I wouldn't know how.
Also I found that llama.cpp is sometimes faster. But I'm only just starting to use llama.cpp.
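For anyone curious, that kind of split looks roughly like this with llama-server (a sketch; the model file, context size, and tensor-override regex are illustrative, and the -ot flag needs a reasonably recent llama.cpp build):

    # keep the attention layers on the GPU, push the MoE expert tensors to CPU RAM
    llama-server -m Qwen3-235B-A22B-Q3_K_M.gguf \
      -c 16384 -ngl 99 \
      -ot "\.ffn_.*_exps\.=CPU"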
plankalkul-z1@reddit
Three attempts to reply didn't get through for reasons completely beyond me. I give up.
mxforest@reddit
Pros build llama.cpp from source and noobs would download LMstudio and be done with it. What is the value proposition of Ollama?
poop_you_dont_scoop@reddit
They have a bunch of easy hook-ins, like you can plug it into VS Code or crewAI. They have a bad way of handling the models that makes it really irritating, but more irritating than that is their Go templates when everyone else and all the models use Jinja. I've had a lot of problems with the thinking models because of it. Really irritating issues.
__Maximum__@reddit
sudo pacman -S ollama ollama-cuda && pip install open-webui
ollama run hf.gguf
In open-webui choose the model and it automatically switches for you. Very user friendly.
ImprefectKnight@reddit
Not even Lmstudio, just use Koboldcpp.
cms2307@reddit
It had much better support of vision models for a little while, as well as being an almost one click install with the openwebui and ollama docker module
LumpyWelds@reddit
Or just "brew install llama.cpp" for the lazy. But they do recommend compiling it yourself for best performance.
https://github.com/ggml-org/llama.cpp/discussions/7668
The heroes at Huggingface provided the formula.
mxforest@reddit
Compilation is very easy anyway. In my case I need to build for different platforms so I can't do brew everywhere. I have tried ROCm, Inferentia and some other builds too.
Pro-editor-1105@reddit
Aren't they moving away from llama.cpp?
Ok_Cow1976@reddit
they don't have the competence I believe
Pro-editor-1105@reddit
And why do you say that?
Horziest@reddit
Because they've been saying they are moving on for a year and only 1 model is not using llama.cpp
TechnoByte_@reddit
4 models actually: https://ollama.com/blog/multimodal-models
Horziest@reddit
I apologize for exaggerating. I didn't take the time to get the exact number. Llama.cpp is still the main part of ollama, at least for now. And them not wanting to work with the existing ecosystem slows everyone down.
iwinux@reddit
Yeah. Copy code from llama.cpp and ask GPT to "rewrite" it so that it becomes "original".
op_loves_boobs@reddit (OP)
Not from the look of it. Still referencing llama.cpp and the ggml library in the Makefile and llama.go with cgo.
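For those unfamiliar with the pattern being described, a cgo binding to llama.cpp looks roughly like this (a generic sketch, not Ollama's actual code; the linker flags and wrapper name are illustrative, and the exact C signatures vary between llama.cpp versions):

    package llama

    /*
    #cgo LDFLAGS: -lllama -lggml
    #include "llama.h"
    */
    import "C"

    // InitBackend is a thin Go wrapper over the C API exported by llama.cpp.
    func InitBackend() {
        C.llama_backend_init()
    }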
Pro-editor-1105@reddit
Well ya that is why they are moving away and they have not completely scrapped it. But I think they will just build their own engine on top of GGML.
op_loves_boobs@reddit (OP)
That’s fine but that doesn’t mean you can absolve one of following the requirements of the license and providing proper attribution today because you’re going to replace it with your own engine later on. Especially after you’ve built up your community on the laurels of others’ work.
BumbleSlob@reddit
https://github.com/ollama/ollama/blob/main/llama/llama.cpp/LICENSE
Did you have any other misconceptions I could help you with today?
kweglinski@reddit
lol, calling a buried licence file a fix.
Obviously everybody is talking about the human decency that is expected when you're using other people's work. The actual licence requirement is just something people latch onto, but the real pain is the fact that they play it as if they made it all themselves.
This licence file is like a court-ordered newspaper apology that turns into a meme about how they "avoided" the court order.
BumbleSlob@reddit
It’s also included in the main README.md so your point makes literally zero sense and it seems like you are trying to make yourself angry for the sake of making yourself angry.
lothariusdark@reddit
Where? I wrote about it a few days ago, there is no clear crediting on the readme.
Under the big heading of Community Integrations you need to scroll almost all the way down to find this in between:
Supported backends
Neither does the website contain a single mention of llama.cpp acknowledging the work serving as a base for their entire project.
That's not giving credit, that's almost purposeful obfuscation in the way it's presented.
SkyFeistyLlama8@reddit
What makes it worse is that downstream projects that reference Ollama or use Ollama endpoints (Microsoft has a ton of these) also hide the llama.cpp and ggml mentions, because they either don't know or they don't bother digging through Ollama's text.
At this point, I'm feeling like Ollama is the Manus of the local LLM world.
kweglinski@reddit
Don't be silly, I'm not emotional about this.
You've got me curious though, where is it in the main readme? Last time I checked, the only place it said llama.cpp was in the "community integrations" section, under "supported backends" right below "plugins", meaning something completely different.
Master-Meal-77@reddit
To what? They don't have the brainpower to replace llama.cpp
XyneWasTaken@reddit
Isn't Ollama the same platform that tried to pass off the smaller DeepSeek models as "DeepSeek-R1" so they could claim they had wide-ranging R1 support over their competitors?
EXPATasap@reddit
LOL, they never did. They literally had the information in the cards thingy, the names that were headers like, oh, h1 maybe an h2, those were, “wrong”, perhaps, but not when it read the next few lines down. lol. Y’all have become WAY too lazy with reading/words, 😝
PS, I suck at humorous banter so hopefully I didn’t come off wrong 🙃😅
nananashi3@reddit
Wayback Machine. When it first showed up on ollama, the full size wasn't there for two days, and the page never said or explained anything about the distills for a week, which made them sound like DeepSeek in varying sizes.
The issue was not necessarily about ollama intentionally trying to trick users, which may or may not be true, but the combination of starting off in the shittiest possible way and possibly clueless social media influencers acting like "wow you can run AGI on your device with one simple command!" For people familiar with dunking on ollama for various reasons, including incompetence or improper attribution, this event let them dunk on them again.
On the contrary, those who are familiar with LLMs and/or can read know it's not "the real DeepSeek", and they're upset about the potential for ignorant mainstream users, who are unfamiliar with LLMs and/or can't read, not knowing, and about ollama getting the attention tied to R1.
Starman-Paradox@reddit
Ollama just always fucks up naming. They called "QWQ preview" just "QWQ" so when actual QWQ came out there was mass confusion.
kopaser6464@reddit
We really need to start koboldcpp ad campaign..
divided_capture_bro@reddit
So what you are saying is that ollama should waste money on a legal team to ensure compliance with licenses that have no teeth?
gittubaba@reddit
Huh, I wonder if people really follow MIT in that form. I don't remember any binary I downloaded from github that contained a third_party_licenses or dependency_licenses folder that contained every linked library's LICENCE files...
Do any of you guys remember having a third_party_licenses folder after downloading a binary release from github/sourceforge? I think many popular tools will be out of compliance if this was checked...
lily_34@reddit
Most proprietary (or perhaps commercial) software usually has an "open source licenses" somewhere in its menus, that shows all the copyright notices for all MIT-licensed code included. But FOSS programs do tend to forgo that...
op_loves_boobs@reddit (OP)
Microsoft does with Visual Studio Code and there are several references to MIT licensed libraries
gittubaba@reddit
Good example. I was more thinking of popular tools/libraries with single or 2-3 maintainers. Microsoft and companies that have legal compliance departments obviously will spend the resources to tick every legal box.
Arcuru@reddit
Microsoft is probably also very afraid of how it would look if they didn't follow OSS license requirements in the most popular IDE for OSS devs. So I'd expect they spend a lot of money/time to ensure that doesn't happen.
No_Afternoon_4260@reddit
Waze has a page that lists all the libs they use and a link to the licence iirc
WolpertingerRumo@reddit
God dammit Ollama, just cite your sources
BumbleSlob@reddit
ITT: people complaining that Ollama is not citing their sources when Ollama, in fact, cites their sources
The irony is palpable and every single person who constantly complained and harasses authors of free and open source software should be relentlessly mocked into the ground
emprahsFury@reddit
The MIT license requires attribution of the copyright holder in all distributions of the code. That includes much more than the source code you linked to. It must be in the binaries people download as well as the source code you linked to.
lily_34@reddit
So, are all linux distros that ship MIT-licensed software in their repos in violation (since most software doesn't actually include attribution to its authors in the binaries)?
Internal_Werewolf_48@reddit
How exactly do you plan on viewing attributions included inside a binary?
Fortyseven@reddit
They could probably add a blurb in the version string.
WolpertingerRumo@reddit
Oops, you're right. They do cite llama.cpp pretty openly. Last I saw it, there was just a small little acknowledgement. My bad.
Expensive-Apricot-25@reddit
they do, it's cited on their own site
WolpertingerRumo@reddit
Yeah, I was misinformed.
_wOvAN_@reddit
who cares
extopico@reddit
Try interacting with the ollama leads on GitHub and you will no longer be puzzled.
Ging287@reddit
If they don't follow the free license, then the free license no longer applies. They should be sued and made to apply attribution at the very least. Otherwise it's copyright infringement. The license matters.
GortKlaatu_@reddit
What are you talking about? It's right here:
https://github.com/ollama/ollama/blob/main/llama/llama.cpp/LICENSE
StewedAngelSkins@reddit
I think the contention is that binary distributions are still in violation. The text of the MIT license does suggest that you need to include the copyright notice in those too, though it's extremely common for developers to neglect it.
GortKlaatu_@reddit
If llama.cpp added the copyright notice to the source code it might show up in the binary as others do.
Not even the Ollama license is distributed with the Ollama binary.
op_loves_boobs@reddit (OP)
I mean that’s not how that works my friend. The lack of Ollama including their own license doesn’t negate that they must give attribution in the binary to fulfill the requirements of the MIT License.
If I go on my PlayStation and I go to the console information, I see the correct attributions for the libraries that were used to facilitate the product. It’s not a huge ask.
GortKlaatu_@reddit
That's exactly how it works; my point is they aren't including any license files at all in the binary distribution of Ollama.
Ollama source code is publicly available on Github and they give attribution and include the ggml license.
Marksta@reddit
Were you taught to hand in papers to your professor with no citations and tell them they can check your github if they want to see citations?
I just checked my Ollama installation on Windows, there isn't a single attribution at all. They're 100% in violation. Even the scummiest corpos ship their smart TVs with a menu option somewhere with attribution to open source software they're using.
op_loves_boobs@reddit (OP)
Considering this is /r/LocalLLaMA, let's ask an LLM:
The MIT License does require attribution to be included in binary distributions, not just source code.
Here’s the exact clause again:
“The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.”
Let’s break it down:
This language is intentionally broad. It doesn’t distinguish between:
Embedded or bundled copies
How to comply in a binary distribution
If you’re distributing a binary, attribution must still be included—though it doesn’t have to be in the binary file itself. Common compliant ways include:
Documentation or README files shipped with the product
Why some people think it’s “source-only”
Some confusion arises because many developers encounter MIT-licensed code on GitHub or through source-based packages, so they assume attribution is only required when source is visible. But legally, that’s incorrect.
In practice, enforcement is rare, especially when the code is statically compiled or part of a larger system. But:
Once again, just follow the license, as I said previously. It's not a huge ask. Just because Ollama doesn't include their own license in the distribution doesn't mean they can exclude the attribution for llama.cpp.
GortKlaatu_@reddit
There you go:
A LICENSE or NOTICE file alongside the binary
And links to the source code which includes both attribution and the actual license are linked to from the website which distributes the binary.
op_loves_boobs@reddit (OP)
Sir, the operator "and" is inclusive. It's not one or the other.
You yourself said they didn't include their own license in the distribution, let alone llama.cpp's license, so how are they including a LICENSE or NOTICE file alongside the binary or even in it? Run it for yourself:
GortKlaatu_@reddit
Grep for it here too https://github.com/ggml-org/llama.cpp/blob/master/LICENSE haha.
op_loves_boobs@reddit (OP)
This is your own comment 12 minutes ago:
There you go:
A LICENSE or NOTICE file alongside the binary
And links to the source code which includes both attribution and the actual license are linked to from the website which distributes the binary.
A LICENSE or NOTICE file alongside the binary
Nothing more to say to you, your views are your views and I leave you to them. Have a lovely day /u/GortKlaatu_
GortKlaatu_@reddit
Alongside meaning on the website. It doesn't need to be inside the binary.
LjLies@reddit
No, it doesn't mean that. It needs to be shipped with the binary.
StewedAngelSkins@reddit
They likely don't want to do it because getting license texts for your deps into a go binary is a pain in the ass, which is why it's so common not to do it (particularly since the vast majority of developers using the MIT license don't actually care). But factually, this is what the license requires.
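For what it's worth, the mechanical part can be done with Go's embed package (a sketch assuming the third-party license files have already been collected into a licenses/ directory, e.g. with a tool like go-licenses); the painful part is collecting and keeping those texts current for every transitive dependency:

    package main

    import (
        "embed"
        "fmt"
        "io/fs"
    )

    //go:embed licenses/*
    var licenseFS embed.FS

    func main() {
        // print every bundled third-party license, e.g. behind a "licenses" subcommand
        fs.WalkDir(licenseFS, "licenses", func(path string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            data, _ := licenseFS.ReadFile(path)
            fmt.Printf("==== %s ====\n%s\n", path, data)
            return nil
        })
    }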
Minute_Attempt3063@reddit
It was added 4 months ago.
Before that, it was never said that Ollama was using llama.cpp under the hood; non-tech people especially didn't know.
rockbandit@reddit
Non-tech people have no idea what llama.cpp is, nor do they have the inclination to set it up. Ollama has made that super easy.
I get that not giving attribution (nor upstreaming contributions!) isn’t cool, but they aren’t technically in violation of any licenses right now, as they also use the MIT license (which is very permissive) and also include the original llama.cpp MIT license.
Notably, there is no requirement in the MIT license to publicly declare you’re using software from another project, it only requires that: “The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.”
That’s it.
op_loves_boobs@reddit (OP)
That clause you mentioned at the end is the root of the issue. They must provide attributions in their distributions.
I provided a neutral view of that clause with another user
GortKlaatu_@reddit
Yep it was added. I'm glad we agree.
Minute_Attempt3063@reddit
The problem is, that it wasn't for the longest time.
op_loves_boobs@reddit (OP)
Another commentator already chimed in on this at Hacker News. The core of it is that the attribution is lacking in binary-only releases; however, Ollama isn't the only group to fail at this. Rather than reiterate, I'll post the comment as follows:
The clause at issue is this one:
The copyright notice is the bit at the top that identifies who owns the copyright to the code. You can use MIT code alongside any license you'd like as long as you attribute the MIT portions properly.
That said, this is a requirement that almost no one follows in non-source distributions and almost no one makes a stink about, so I suspect that the main reason why this is being brought up specifically is because a lot of people have beef with Ollama for not even giving any kind of public credit to llama.cpp for being the beating heart of their system.
Had they been less weird about giving credit in the normal, just-being-polite way I don't think anyone would have noticed that technically the license requires them to give a particular kind of attribution.
GortKlaatu_@reddit
Did you get the Ollama license with that distribution?
deejeycris@reddit
Is there any way to enforce the license on Ollama, or are expensive lawyers needed?
Amazing_Athlete_2265@reddit
Yes.
tmflynnt@reddit
I am not on the "Ollama is just a llama.cpp wrapper" bandwagon, but I will say that I did find these particular comments from a reputable contributor to llama.cpp to be quite instructive as to why people should maintain a critical eye when it comes to Ollama and the way the devs have handled themselves: link.
StewedAngelSkins@reddit
This trend of freaking out about open source projects violating the attribution clause of the MIT license kind of reminds me of when people go rooting around in the ToS of whatever social media platform they're currently pissed off at until they find the "user grants the service provider a worldwide royalty-free sublicensable license to... blah blah blah" boilerplate and then freak out about it. Like they're certainly right about the facts, and they're even arguably right to think there's something ethically wrong with the situation, but at the same time you can't help but notice that it only ever comes up as a proxy for some other beef.
op_loves_boobs@reddit (OP)
You know what, I actually fully concur with you that this is somewhat of a proxy battle; but here’s the thing, just add the attribution or at least more credit than a lackluster line at the end of a README and move on. It really isn’t a huge ask.
We’ve had these sort of issues crop up over the years left and right and the solutions have ended up being ham-fisted a lot of times. Think ElasticSearch and AWS’s OpenSearch or the BSL license debacle.
A lot of people live for Open-Source and want the community to flourish. It’s not a requirement to give back from someone forking and making use of the code but at the bare minimum follow the license and give credit where it’s due.
StewedAngelSkins@reddit
Yes, they should include the license text either alongside their binary or have it be printable via some kind of ollama licenses command. I think you're kind of underestimating how much of a pain in the ass it would be to actually comply with this for all upstream deps, rather than just the one you care about, but that's a bit beside the point.
To your main point: I'd rather not litigate community disputes via copyright, to be honest. Would you actually be satisfied if they did literally nothing else besides adding the license text to their binary releases?
op_loves_boobs@reddit (OP)
First, licenses should be followed for all references. So the assumption that it's only the one I care about is your own subjective view. I don't care if it's zstd or bzip2: give the attribution if the license requires it.
Secondly, I'm aware how much of a pain in the ass it would be, but here's the thing: it's more of a pain in the ass to recreate your own libraries than to append license text:
Visual Studio Code’s Third Party Notices
Granted, Microsoft has tons of resources or some system in place to figure it out for VS Code.
But yes the license text should be in the binary release considering their target demographic is likely not going on GitHub to retrieve the binary.
StewedAngelSkins@reddit
Let's quantify it then.
Have you ever in your life posted about this issue as it relates to any other project? It is, after all, very common, so you will have had plenty of opportunity.
Can you tell me without looking which of ollama's other dependencies are missing attribution?
GortKlaatu_@reddit
https://github.com/ollama/ollama/tree/main?tab=readme-ov-file#supported-backends
op_loves_boobs@reddit (OP)
This is my final reply to you /u/GortKlaatu_ as I keep replying to you all over different comments the thread:
The attribution needs to be in the binary to fulfill the MIT License.
GortKlaatu_@reddit
Go compile any open source software and tell me the license is inside the binary. I'll wait.
DedsPhil@reddit
At this point, I just don't use llama.cpp because it doesn't have an easy plug-and-play option in n8n like Ollama does
sleepy_roger@reddit
People will still use Ollama until llama.cpp makes things easier for the everyman. These gotchas on technicalities do nothing to push people to llama.cpp. I know lots of people who just want to run a local AI server with minimal effort and call it good. Ollama still provides that, like it or not.
op_loves_boobs@reddit (OP)
It’s not about pushing people to Ollama or llama.cpp. It’s open-source, use what you want to use nobody is forcing that on you.
What isn’t cool is making use of llama.cpp with cgo and not properly including the attribution with the distribution.
It’s not about hating on Ollama, I personally use both. It’s about giving respect to Georgi Gerganov and the rest of the contributors. They can both co-exist, complement and be symbiotic to each other. But historically, Ollama hasn’t made an earnest attempt at that. In their own README they list llama.cpp as a supported backend without really divulging how much the project spawned from the work of the ggml contributors. It leaves a bad taste in one’s mouth.
sleepy_roger@reddit
You were already proven wrong and went and blocked the guy who did it.
op_loves_boobs@reddit (OP)
They can just add the attribution to the distribution, this is exactly what I mean about this whole thing being sophomoric. Add the attribution to the distribution or a link to the license and move on, it’s not that serious.
I use llama.cpp on my Hackintosh that requires Vulkan and Ollama on my gaming rig with my NVIDIA GPU. I use both, I started off with Ollama and begun using llama.cpp as I became more interested in tinkering. They both have their use cases. No one is arguing that you have to use one or the other.
The argument is whether proper attribution to Georgi is being provided by the license he used, which it isn’t.
Also the guy you're referring to kept spinning his wheels, ignoring the fact that the license literally isn't in the distribution. This is Reddit my guy, past a certain point I don't owe anyone here constant conversation and I can block and move on with my day as I see fit.
Considering both you and him are carrying those downvotes, the most I can say is possibly consider your opinion from a different viewpoint. I’m considering yours and personally it seems like you’re not even focusing on the debate at hand but rather the Ollama hate.
lighthawk16@reddit
This here is my opinion too. I love Ollama but won't hesitate to drop it as soon as a more viable option arrives.
stuffitystuff@reddit
THANKS, OLLAMA!
giq67@reddit
😂
TheLumpyAvenger@reddit
No way! It's like a rule around here. We can't go a week without someone posting REEEEEEEE! about ollama. Thanks for winning me my weekly bet with my friend
Original_Finding2212@reddit
As far as they are willing to acknowledge?
https://ollama.com/blog/multimodal-models