Ollama now officially supports Llama 3.2 Vision
Posted by youcef0w0@reddit | LocalLLaMA | View on Reddit | 49 comments
logan__keenan@reddit
It's supported by Open WebUI
hummingbird1346@reddit
Msty my beloved. Please add.
Arkonias@reddit
Msty is a llama.cpp wrapper, so I doubt 3.2 Vision support will land anywhere that uses it for a while. Ollama supports it thanks to their own custom Go code.
AnticitizenPrime@reddit
Msty is actually Ollama based under the hood, and you can upgrade the Ollama instance manually without needing to wait for Msty to update. I've been using Llama 3.2 Vision with Msty for the past two weeks or so with the preview release of Ollama.
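If you want to confirm what the bundled Ollama instance is actually running, here's a minimal sketch that queries the server directly (assuming the default endpoint at localhost:11434; Msty may expose it on a different port):

```python
import requests

OLLAMA = "http://localhost:11434"  # default Ollama endpoint; adjust if Msty uses another port

# Check the Ollama version (0.4.0+ is needed for Llama 3.2 Vision)
print(requests.get(f"{OLLAMA}/api/version").json())

# List locally pulled models and check whether the vision model is there
models = [m["name"] for m in requests.get(f"{OLLAMA}/api/tags").json()["models"]]
print("llama3.2-vision pulled:", any(name.startswith("llama3.2-vision") for name in models))
```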
AnticitizenPrime@reddit
Msty is Ollama based, works already.
Sudden-Lingonberry-8@reddit
now they need to support python 3.12
mpasila@reddit
When will llama.cpp get support? (so it works on all platforms, not just one)
JakoDel@reddit
llama.cpp has to be one of the worst-maintained popular projects out there. They're ignoring too many things. There's a commit that doubles performance on MI100 GPUs just sitting there because one of the project's developers wanted to do it a different way that never materialized.
Few_Painter_5588@reddit
They mentioned that they want new devs to join the project before they implement this.
robberviet@reddit
Unlikely sadly.
Lossu@reddit
Sadly llama.cpp is allergic to vision models
Red_Redditor_Reddit@reddit
I want to know too.
Melbar666@reddit
Open WebUI / ollama 0.4.0 / llama3.2-vision:11b
If I post a follow-up picture, it doesn't see the picture; it only pretends to and hallucinates the content. Is this a known bug?
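One way to narrow down whether the problem is in Open WebUI or in Ollama itself would be to replay the same two-image conversation against the Ollama API directly. A rough sketch (hypothetical image file names, default localhost endpoint assumed):

```python
import base64
import requests

OLLAMA = "http://localhost:11434"  # default Ollama endpoint

def img_b64(path: str) -> str:
    """Read an image file and return it as a base64 string, as /api/chat expects."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

# First turn: one image attached to the user message
messages = [
    {"role": "user", "content": "Describe this picture.", "images": [img_b64("first.jpg")]},
]
r1 = requests.post(f"{OLLAMA}/api/chat",
                   json={"model": "llama3.2-vision:11b", "messages": messages, "stream": False}).json()
messages.append(r1["message"])

# Second turn: a follow-up picture in the same conversation
messages.append({"role": "user", "content": "Now describe this one.", "images": [img_b64("second.jpg")]})
r2 = requests.post(f"{OLLAMA}/api/chat",
                   json={"model": "llama3.2-vision:11b", "messages": messages, "stream": False}).json()
print(r2["message"]["content"])
```

If the second reply hallucinates here too, the issue is on the Ollama side rather than in Open WebUI.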
JasonP27@reddit
Hmm... how much VRAM do you need for this? I have a 10GB 3080 and 64GB RAM
Intelligent_Jello344@reddit
Llama 3.2 Vision 11B requires at least 8 GB of VRAM, and the 90B model requires at least 64 GB of VRAM.
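As a rough back-of-envelope sanity check (my own estimate; the default Ollama build ships 4-bit quantized weights):

```python
# Back-of-envelope VRAM estimate for the 4-bit 11B weights alone
params = 11e9
bytes_per_param = 0.5          # ~4 bits per weight
weights_gb = params * bytes_per_param / 1e9
print(f"~{weights_gb:.1f} GB for weights")  # ~5.5 GB; KV cache + vision encoder push it toward the quoted 8 GB
```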
JasonP27@reddit
Sweet, guess I know what I'm installing later today
earslap@reddit
more VRAM?
JasonP27@reddit
Lol I wish. I bought this PC last year before I knew anything about running local AI models. I would have gone for the 3090 instead. Now I'm stuck with this for a while.
KL_GPU@reddit
Bros cooking
mrjackspade@reddit
Of course. 12 hours after I go through the effort of figuring out how to get the preview running, it's officially released.
Dead_Internet_Theory@reddit
This is why I deploy tactical laziness.
ShadowbanRevival@reddit
Lmao i can't tell you how many times this has happened to me
robberviet@reddit
I know we can't wait, but sometimes just wait.
KeldenL@reddit
has anybody figured out how to get it working on open-webui?
Qual_@reddit
It works (using the docker install).
KeldenL@reddit
Weird... I'm using Pinokio, which shouldn't make a difference. Did you just attach the image with the attachment button?
Qual_@reddit
Yes !
KeldenL@reddit
ah i restarted open-webui after downloading the ollama model and now it works! yay!
TheManicProgrammer@reddit
Any idea for non-docker? Mine uses up all the CPU instead of GPU....
AlexLove73@reddit
You ask in French and it answers in English, lol
Qual_@reddit
Lol, now that you mention it. I think since all the words in the photo are in English, it tripped itself up on its own.
kremlinhelpdesk@reddit
Sounds perfectly in character for a French model, to be honest.
dubesor86@reddit
I am using docker, too, and it works out of box without any issues.
Hoodfu@reddit
It's working for me. I'm on the latest version (I use the docker one)
logan__keenan@reddit
Does it not work out of the box? open-webui supports the llava vision model
ZoobleBat@reddit
I read Obama
vlodia@reddit
Quickest way to run this in Mac?
robo_cap@reddit
Google
No_Afternoon_4260@reddit
Isn't Ollama running llama.cpp? What backend does it use?
youcef0w0@reddit (OP)
Ollama started off as a llama.cpp wrapper, but they're doing their own implementations now since llama.cpp progress is stalling.
llama.cpp is refusing to accept new implementations if you're not willing to commit to maintaining them long-term.
robberviet@reddit
Free OSS can only last so long. In the end, money is needed.
Pro-editor-1105@reddit
Now we need more vision models.
Nexter92@reddit
We need the Vulkan runner merged more than we need additional models for the moment...
Enough-Meringue4745@reddit
It’s true. Only supporting llama.cpp is quite useless
carnyzzle@reddit
it's been a while since ollama added pixtral support...
klop2031@reddit
Source?
Hoodfu@reddit
It has pixtral support? I don't see that on their site.
hp1337@reddit
This is wonderful, would be amazing if Molmo and QwenVL were supported too.
Jazzlike_Tooth929@reddit
interesting