Meta has not given up on open-source
Posted by jd_3d@reddit | LocalLLaMA | View on Reddit | 73 comments
Source: https://x.com/AIatMeta/status/2041910285653737975?s=20
EffectiveCeilingFan@reddit
Yeah I’ll believe it when I see it
SquareKaleidoscope49@reddit
Look at the history. Meta has never open sourced anything.
Every single model of the past 10 years that Meta has uploaded for public view has been a castrated version of the model they used internally. It's not just vibes either. People compared the latent vectors of the open-source SAM models against the ones available via API and found significant differences and noticeably lower performance.
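Roughly, that kind of comparison looks something like the sketch below: embed the same images with both checkpoints and measure how far apart the outputs land. (The file names and the use of cosine similarity are my assumptions, not necessarily what that study actually did.)

```python
# Rough sketch of a latent-vector comparison between an open-weights
# checkpoint and an API-served one. Assumes you've already dumped image
# embeddings from both for the *same* set of inputs; file names are
# hypothetical.
import numpy as np

local_emb = np.load("sam_local_embeddings.npy")  # shape: (n_images, dim)
api_emb = np.load("sam_api_embeddings.npy")      # shape: (n_images, dim)

def cosine(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Row-wise cosine similarity; identical models should sit near 1.0.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return (a * b).sum(axis=1)

sims = cosine(local_emb, api_emb)
print(f"mean cosine similarity: {sims.mean():.4f} (min: {sims.min():.4f})")
```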
I remember watching a talk from an important person at Meta, who said that they release these models in hopes that somebody starts making money with them, and then Meta uses its infrastructure to take over the business. What they failed to mention is that when Meta builds a 1:1 copy of your company using its own models, it will just use better checkpoints than the ones you got, driving you out of business.
I am sorry but this kind of fucked up shady practice cannot be called "open sourcing". Fuck Meta.
ImpressiveSuperfluit@reddit
That checks out; that's precisely the playbook they've used before. A certain VPN comes to mind...
FullstackSensei@reddit
Of course there's a chance they don't release it or release it with a license that renders it useless, but let's not forget Llama was the model that started this all. Llama is what inspired all the other labs, especially the Chinese labs that eventually gave us Qwen, Kimi, GLM, etc. LocalLLaMA, and llama.cpp all exist thanks to that original release.
SquareKaleidoscope49@reddit
Regardless of license, Meta has never released the checkpoints they used in their papers or use internally. It's always worse versions.
EstarriolOfTheEast@reddit
Before llama.cpp, there was whisper.cpp, a port of OpenAI's Whisper and the initial foundation for ggml. Also before Llama there were GPT-Neo and GPT-J (which, IIRC, was the first widely deployed model to use RoPE, which Llama also implemented). Llama also might have borrowed from NVIDIA's Megatron, though my memory is vague there. Thanks to ChatGPT 3.5, Llama's release was well placed to channel rising enthusiasm about LLMs into catapulting the local scene to the next level (before this it was relegated to just researchers, ML engineers, and hardcore AI enthusiasts).
Which is to say, there was a local LLM ecosystem before Llama-1, and for practical uses of the time, Google's FlanT5-11B was better. Llama-1 was also initially released only to a select few researchers, until it was leaked on 4chan, after which LeCun was key in championing a move to proper open source. Without that leak and LeCun's subsequent championing of open-source LLMs, I'm not sure how the open LLM scene would have gone or what Meta's role would have been.
Altruistic_Heat_9531@reddit
so you're telling me that both the LLM (Llama-1) and diffusion (NAI SD1.5) foundation models that started all of this were leaked on 4chan
IrisColt@reddit
Perfectly put. The whole thing started 3 years ago...
Zanion@reddit
Unless all the people who made that model spontaneously decide to return to Meta, I can't imagine why we'd give a shit.
ThatRandomJew7@reddit
It's like Stable Diffusion: Shit final release that killed them in the community, but they were the ones to kick the whole thing off
ThinkExtension2328@reddit
Perpetual loyalty for a past gift is how lemmings jump off a cliff. It may be true they kicked it off, but others now exist, do great work, and deserve a lot more credit for the SOTA models they provide with great licences.
LelouchZer12@reddit
Welp, their most recent models all have a more annoying license than before, so even if they open source it I wouldn't necessarily be happy
_raydeStar@reddit
imo they want to open source everything but they're behind in the game and don't want to embarrass themselves.
Turnip-itup@reddit
Meta knows they’re cooked if they don’t rely on open source to gain traction.
Finanzamt_Endgegner@reddit
Yeah, the more top labs do open source, like the Chinese ones, the faster open source can match SOTA. It'll probably never actually catch up, but once the gap is like 3 months, that ain't that bad
EmPips@reddit
I'm just a retail investor so it doesn't count for much, but any company that can forge a foundation model from nothing ends up in my pile of "ahead of the curve" regardless of how far it is from SOTA.
_raydeStar@reddit
I mean I agree with you -- I do not think they're sitting there spending billions on nothing.
Going by what they're geared towards, my thought is they're working on replacing the cell phone and trying to break into a new frontier that way. In that case, they're going small (models that fit into Ray-Bans, etc.)
po_stulate@reddit
Yeah, they release all they can, like DINO, because they're at the top of the game for segmentation, but for LLMs we've seen how hard Behemoth failed.
erwan@reddit
It's really a waste when you consider they had FAIR and LeCun
nullmove@reddit
Especially when it's being said by a snake like Alexandr Wang.
EffectiveCeilingFan@reddit
Never even heard of him, what did he do 😭
WPBaka@reddit
becoming a billionaire off of sweatshop labor is pretty shitty ngl
DistanceSolar1449@reddit
Also he’s a super toxic and abusive boss. Basically everyone at Scale hates him. He’s the “I believe shouting at my workers makes me cool” type.
a_beautiful_rhind@reddit
He slopped all our models.
Velocita84@reddit
Wake me up when weights drop on hf
131sean131@reddit
Fr, we're at the actions-not-words part of this industry.
Limp_Classroom_2645@reddit
Words mean nothing to me, only actions matter
YogurtExternal7923@reddit
Miss the Llama 70B days... anyone else feel like every lab that stops making models is like a flavour gone?
paperbenni@reddit
Playing devil's advocate, Muse's safety training might be dog shit or non-existent and they instead have another dedicated model check prompts before giving them to Muse. That makes its bioweapon refusal rate a lot less impressive, but it does mean they'll have to change their approach before open sourcing it.
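For illustration, that kind of prompt-gating setup is structurally simple; something like the toy sketch below, where a separate check sees every prompt before the main model does (the keyword blocklist stands in for a real safety classifier, and everything here is hypothetical):

```python
# Toy sketch of an external safety gate: a dedicated check runs on every
# prompt before the main model ever sees it. A keyword blocklist stands in
# for a real classifier model; all names here are hypothetical.
from typing import Callable

BLOCKLIST = ("bioweapon", "synthesize a pathogen")

def guard_allows(prompt: str) -> bool:
    # A real deployment would call a separate safety model here.
    return not any(term in prompt.lower() for term in BLOCKLIST)

def answer(prompt: str, generate: Callable[[str], str]) -> str:
    if not guard_allows(prompt):
        return "Sorry, I can't help with that."
    return generate(prompt)

print(answer("how do I bake bread?", lambda p: "Sure! Start with flour..."))
```

The point being: none of that safety lives in the model's weights, so releasing the weights releases the unfiltered behavior.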
BumblebeeParty6389@reddit
This one is available as a private API to select partners, and they hope to open source future versions? This all validates the rumors that have been circulating here since Zuckerberg's letter about changing their AI plan.
I think it shows:
1- They are going proprietary like we expected.
2- They don't make any promises about open-sourcing.
3- They say "versions" and don't talk about the whole thing. So even if they end up releasing something open source, it might be a tiny, heavily lobotomized, red-teamed version of the original thing.
Ok_Warning2146@reddit
Why don't they put it on lmarena instead of just giving out bench numbers that can be cooked?
Altruistic_Heat_9531@reddit
I give Meta kind of a break when it comes to producing models; they donated PyTorch and still manage it
waffleseggs@reddit
open ____*weight*____. ugh Meta is so awful.
tarruda@reddit
This is one model I'm not looking forward to. Apparently it was benchmaxxed: https://x.com/fchollet/status/2042004767585751284
Far-Low-4705@reddit
Let’s gooooo
Better than nothing, hopefully they don’t open source like grok (or even google tbh) tho
robertpro01@reddit
What do you mean?
Creepy-Bell-4527@reddit
Open source when it's old and behind other open source models.
bosoxs202@reddit
Gemini Nano 4 being based on Gemma 4 technically means Google's leading-edge small model is open source
Far-Low-4705@reddit
This
emprahsFury@reddit
You all need to quit drinking the corpo kool-aid. Don't let the people who control whether shit gets released tell you "they hope they can release it." Stop letting employees of a corp tell you "well, the corp says." No, hold them responsible for being in the corporation. They have agency; make them use it.
a-calycular-torus@reddit
You do realize that there are different positions in companies and they have differing levels of power, right? It's not that deep
temperature_5@reddit
"hope to" ... "future models"
SGmoze@reddit
Meta is like a drug distributor. They give you the free sample, then they get you hooked and you need to pay for premium.
__JockY__@reddit
That sentence is doing a lot of heavy lifting!
DigRealistic2977@reddit
So this is what they made using those porn torrent files from that issue months ago? Nice... can't wait for OG Zuck to drop some raw, trained-on-the-web AI 😂
Significant_Fig_7581@reddit
I hope to see that too
Living_Director_1454@reddit
"if you are the data , I'll make it more accessible to others" - Meta
DeepOrangeSky@reddit
So is this supposed to be related to the "Avocado" thing, in the same style of relationship as Gemini / Gemma (if we are to believe them, I mean)? Or are they saying there would be an open-source Avocado line and then a separate Muse line, where they occasionally open-sourced some giant frontier-sized models from Muse and had a separate line of little ~9B Avocado models or whatever that were built to be local models from the start?
SlaveZelda@reddit
Avocado is the code name for Muse afaik
shaolinmaru@reddit
Between "we hope" and "we will" there is an entire universe.
hyggeradyr@reddit
People actually want to use Meta AI?
LagOps91@reddit
"hope to open" is a strong maybe at best
logic_prevails@reddit
Stop hoping and do it 😂
mana_hoarder@reddit
Don't hope... Do it.
jacek2023@reddit
That's good news. Llama 5 is still on my wishlist :)
pseudonerv@reddit
“hope” ? “future” ?
You hope Meta doesn't give up on open weights in the future.
laterbreh@reddit
We hope. Uh-huh, we hoped Llama 4 wouldn't blow ass. How'd that go?
T_UMP@reddit
Heavy on hopium...
a_beautiful_rhind@reddit
I hope in the future I will be a millionaire. Two more weeks?
organicmanipulation@reddit
“We hope”…
This_Maintenance_834@reddit
they probably hope it is not too embarrassing
Xanian123@reddit
Yeah we'll see what they do with that idiot wang at the helm. The previous iteration was terrible anyway
yellow_golf_ball@reddit
More competition, even if not open source, is good right now. And I also think not open-sourcing right now allows them to focus more on catching up.
Far_Cat9782@reddit
"hope to..," nonsense the only people preventing it from open source is themselves haha
nomorebuttsplz@reddit
We hope that we care about open source. Not sure though. Anyway...
pokemonisok@reddit
Hope…
RetiredApostle@reddit
"Hope" is the only variable.
sedition666@reddit
This is like when you tell your wife you hope to get to that DIY project next week. Everyone knows that isn't happening.
Technical-Earth-3254@reddit
They're writing this like it's not their own model lmfao
ilarp@reddit
We have given up on them though
MundanePercentage674@reddit
they need to clean up your posts and comments on Facebook first
Daemontatox@reddit
"Hope" copium
NinjaOk2970@reddit
Let's remember Facebook has itself been open-sourced in the first place.