A Tribute to Meta AI and Stability AI - 2 Giants Who Brought us so Much Joy... And, 2025 is the Year they Die... So Sad! 😢
Posted by Iory1998@reddit | LocalLLaMA | View on Reddit | 50 comments
I mean, this sub and its amazing community wouldn't be here if it were not for Stability AI and Stable Diffusion. I personally created an account on Reddit just so I could join r/LocalLLaMA and r/StableDiffusion. I remember the first day I tried SD1.4 on my shiny new RTX 3070 Ti. I couldn't contain my excitement as I was going through Aitrepreneur's video on how to install AUTOMATIC1111.
I never had Conda or PyTorch installed on my machine before. There was no ChatGPT to write me a guide on how to install everything or troubleshoot a failure. I followed Nerdy Rodent's videos on possible issues I could face, and I heavily relied on this sub for learning.
Then, I remember the first image I generated. That first one is always special. I took a few minutes to think of what I wanted to write, and I went for "Lionel Messi riding a bicycle." (Damn, I feel so embarrassed now that I am writing this. Please don't judge me!).
I cannot thank Stability AI's amazing team enough for opening a new world for me, and for us. Every day, new AI tutorials would drop on YouTube, and every day, I was excited. I vividly remember the first Textual Inversion I trained, my first LoRA, and my first model finetune on Google Colab. Shortly after, SD 1.5 dropped. I never felt closer to YouTubers before; I could feel their excitement as they went through the material. That excitement felt genuine and was contagious.
And then, the NovelAI models were leaked. I downloaded the torrent with all the checkpoints, and the floodgates for finetunes opened. Do you guys remember Anything v3 and RevAnime? Back then, our dream was simple and a bit naive: we dreamed of the day when we would run Midjourney v3-level image quality locally 🤣.
Fast forward 6 months, and Llama models were leaked (7B, 13B, 33B, and 65B) with their limited 2K context window. Shortly after, Oobabooga WebUI was out and was the only frontend you could use. I could barely fit Llama 13B in my 8GB of VRAM. GPTQ quants were a pain in the ass. Regardless, running Llama locally always put a smile on my face.
If you are new to the LLM space, let me tell you what our dream was back then: to have a model as good as ChatGPT 3.5 Turbo. Benchmarks were always against 3.5!! Whenever a new finetune dropped, the main question remained: how good is it compared to ChatGPT? As a community, we struggled for over a year to get a local model that finally beat ChatGPT (I think it was Mixtral 8x7B).
This brings me to the current time. We have many frontier open-source models both in LLM and image/video generation, and neither Meta nor Stability AI made any of them. They both shot themselves in the foot and then effectively committed suicide. They could've owned the open-source space, but for whatever reason, they botched that huge opportunity. Their work contributed so much to the world, and it saddens me to see that they have already sailed into the sunset. Did you know that the first works by DeepSeek and other Chinese labs were heavily built upon the Llama architecture? They learned from Llama and Stable Diffusion, and in 2025, they just killed them.
I am sorry if I seem emotional, because I am. About 6 months ago, I deleted the last Llama-based model I had. 3 months ago, I deleted all SD1.5-based models. And with the launch of the Z-model, I know that soon I will be deleting all Stable Diffusion-based models again. If you had told me 3 years ago that by 2025 both Meta and Stability AI would disappear from the open-source AI space, I wouldn't have believed you in a million years. This is another reminder that technology is a ruthless world.
What are your thoughts? Perhaps you can share your emotional experiences as well. Let this post be a tribute to two otherwise awesome AI labs.
LelouchZer12@reddit
Meta is still really doing a lot for open-source AI. Just take a look at https://github.com/facebookresearch/dinov3, https://github.com/facebookresearch/sam3, or https://github.com/facebookresearch/omnilingual-asr
AI is not just LLMs and generative things...
Iory1998@reddit (OP)
Well, I am talking about LLMs.
investigatingheretic@reddit
No, you're talking about diffusion models.
Iory1998@reddit (OP)
So, what's your point, if you don't mind?
investigatingheretic@reddit
Just making the distinction between two separate technologies. Both fall under generative AI, but the mechanics, math, and training dynamics have almost nothing in common.
TomLucidor@reddit
Think about it this way: Facebook is now creating tools that let people fine-tune models more easily (Omnilingual ASR for more data for LLMs; DINO + SAM + JEPA for Stable Diffusion and Wan 2.2 / LTX-2).
Things like layer-skipping, SoCE (advanced model merging), and SPG (RL for diffusion LMs) show that they still care, but Spirit LM (interleaving text with voice) and CWM (code world models) are a bit concerning for researchers with their licensing nowadays. No more creative tomfoolery.
MitsotakiShogun@reddit
Thanks, AI bros and artists, for fighting each other and dragging down serious research because of it!
the__storm@reddit
They laid off a whole bunch of FAIR employees recently though - I fear this is just the tail end of stuff that was already in the works. Guess we'll find out though.
B-lovedWanderer@reddit
The pivot makes sense if you look at the "data wall" we're hitting. The early open source wins came from training on the easy broad internet data. Now that we're moving into the era of synthetic data loops and reasoning models (like o1), the compute and curation costs are skyrocketing.
Stability and Meta gave us the base layer for free, but maintaining SOTA in 2025 requires proprietary data pipelines that are too expensive to give away.
The future of local LLMs might not be chasing GPT-5 parity, but rather specialized, highly efficient SLMs that run on consumer hardware for specific agents.
We shouldn't be mourning the death of massive generalist open weights, but celebrating the birth of specialized local agents.
dtdisapointingresult@reddit
Dicks out for Emad Mostaque (StabilityAI founder who released Stable Diffusion as open-source), a real G who will forever be a legend in AI. He's like the Linus Torvalds of image generation, minus the successful ending. I hope he's happy wherever he is.
liviuberechet@reddit
It's important to remember that all of these Big Tech corporations are not doing any of this "open source" for the benefit of humanity.
They are just leveraging the power of a huge community to iterate faster, fix bugs they don't know how to fix, trial-and-test products with a wide audience, run QA, and validate product-market fit without any financial commitment (or expectations).
The second they get all of it down, they will instantly switch their stance on everything (a 180 turn), lock everything down behind "trade secrets" claims, and put up a paywall.
I am fairly new to AI, but I have worked in the tech and innovation space for close to 20 years... I have experienced this myself at 2 startups in different sectors, and seen friends struggle in other sectors, over and over again, for decades now.
AI is no different. If any startup makes a major innovation in this space, the big players will invest early and either dissolve or absorb its contribution. And that is the positive outcome; more often they just copy and rebuild in-house with the mentality "if someone else could build it, we can build it too", shutting down innovators before they get to do much.
This is inevitable.
Itâs how they operate.
Very few innovations have managed to escape this pattern. Like you, OP, I also hope AI will be one of those exceptions, but we have to have realistic expectations.
It's sad, but it is what it is.
Enjoy the journey, because the outcome will be stale.
Iory1998@reddit (OP)
Thank you very much for your reply. I appreciate you taking the time to respond, and I completely agree with your explanation. I am not blaming any business for seeking profit, far from it. I understand. However, I wish these two companies could have kept doing what they started a bit longer.
octoberU@reddit
What's the alternative to Stability AI models these days? As far as I'm aware, people still tune 1.5 to this day.
Iory1998@reddit (OP)
Yeah, people still finetune SD 1.5 for fun, but the model is obsolete. There is Flux, Qwen-Image, HunyuanImage, HiDream, and now the Z-model. I'd say there is much more diversity now than 3 years ago.
Disposable110@reddit
I got into it during the GPT-J days, and then we got GPT Neo-X.
My first finetune was a GPT-J variant trained on a couple hundred thousand tokens of my own sci-fi writing.
And then EleutherAI shot themselves in the foot and committed suicide.
MetaAI had Fairseq and I used it to write my first AI written book. Then that disappeared too.
AI Dungeon was endless fun, then OpenAI killed it overnight by withdrawing GPT-3 access because apparently some pedos were using it to write dirty stuff and there were absolutely zero safeguards. Suddenly everyone speaking up against OpenAI's censorship was a pedo and got a ban. That drama prompted me to use local models only.
If you don't have it on device, it's only a matter of time before it gets taken away, enshittified, censored, put behind paywalls, stuffed full of ads, and crippled.
Misha_Vozduh@reddit
Where's the tribute to RunwayML? The guys who actually brought you 1.5 and then were issued a takedown notice from Stability, which wanted to censor it more?
-p-e-w-@reddit
I think it's premature to declare Meta AI dead, considering the obscene amounts of money they are investing into R&D at the moment.
Stability, on the other hand, is about as alive as the hedgehog I saw on the highway yesterday, who was so flat it looked like it was printed on the pavement.
fish312@reddit
don't you mean hedgehog (laying on grass)
but yeah, it's all Lykon's fault for cucking the model.
Iory1998@reddit (OP)
Meta might still exist but they won't open-source models anymore, effectively making them dead, as far as I am concerned.
Few_Painter_5588@reddit
They have released quite a few LLMs since Llama-4, including a ~30B reasoning model
Iory1998@reddit (OP)
What are you talking about? A Llama-4 30B? No, they didn't.
Few_Painter_5588@reddit
No need to be rude, you could just use the internet/your brain
https://huggingface.co/facebook/cwm
Salendron2@reddit
Are they confirmed to not be releasing any more open-weight models? Llama 4 was a complete disaster for them, so I'm not too surprised.
What I'm really waiting for is whatever Mistral is making; they've been my daily driver since they made their 8x7B model. I vastly prefer the prose of Mistral over any of the others I've tried. Still running a variant of their 24B model. Though I've not heard anything from them in a while now, which is kinda worrying.
Iory1998@reddit (OP)
Meta CEO Mark Zuckerberg is backsliding on the company's open-source approach to AI. It's a sensible pivot.
https://www.businessinsider.com/meta-ceo-mark-zuckerberg-backsliding-open-source-approach-ai-2025-7
Salendron2@reddit
From that it sounds like they're still going to be releasing some open-weight models, but keeping the weights of their bleeding-edge models for themselves.
This is still more than we're getting from literally every other American AI lab (except Grok ig, but who knows if Elon will actually follow through with OW'ing G3/4), who either have released nothing, or they release models that have gone under so many lobotomies that they are completely useless...
koeless-dev@reddit
What...
What 2?
Salendron2@reddit
I tried both of those, Gemma was alright, but OSS was just unusable. OSS felt benchmaxxed, to say the least; and pretty much every post I saw on this subreddit was mocking it for being terrible when it came out.
ResidentPositive4122@reddit
That's speaking more about the state of the sub than the state of the model.
At launch every model will have teething issues. That's 100% guaranteed. On top of that, there's way too much politics, tribalism and myteam vs. yourteam here to get an accurate feeling for models these days.
gpt-oss are banger models for normal day-to-day use, especially if tool use is necessary. They just don't work for horny people, and that's pretty much it.
Salendron2@reddit
My use case for open-weight models has mainly been creative writing, and for this OSS was hot garbage, even for non-explicit writing. For programming and tasks where quality is paramount, I just use frontier models; why use OSS when you can get much better results without the hassle?
TheRealMasonMac@reddit
Yeah, those are excellent examples of what they mean.
ttkciar@reddit
...not to mention Microsoft's Phi family of models, or AllenAI's OLMo models (they just released OLMo 3 a few weeks ago), or IBM's Granite.
Admittedly these are more special-purpose models, with their competencies focused on a few kinds of applications. The only Western models I would consider comprehensively multi-purpose are Gemma, Mistral, and Llama.
Agreeable-Market-692@reddit
Doubtful. LeCun is out. Their new ASI guy is not pro-open, quite the opposite.
Also if I was a part of Olmo I'd be kind of hurt by your second paragraph. They deserve praise.
noiserr@reddit
I mean they don't have to release their top model for all I care. It's not like most of us can run it anyway. As long as they open source smaller models they can still get back in the game. I mean look at gpt-oss. It's been great.
Iory1998@reddit (OP)
Yeah, that's what I have been saying for months. Just keep doing what you were doing with Llama-3!
Aromatic-Distance817@reddit
Gotta go fast(er than that)
segmond@reddit
Meta is dead; they are not hungry. Too much money has never been the recipe for success. It's going to be a disaster worse than the metaverse.
changing_who_i_am@reddit
SOTA open source is essentially useless in the US until they get over their hangups about "muh copyright". Until then, China will simply win, while Western models will focus on smaller specialized tasks that people can do on a smaller budget.
TomLucidor@reddit
China does not care about copyright, because LLMs have a main task: expanding surveillance and direct censorship (e.g. Qwen3Guard). Contrast this to the US strategy of direct work displacement (professional models) and propaganda (gimped models).
ttkciar@reddit
That's a fair take, though I suspect the Chinese government is just as interested in its propaganda applications as US politicians and commercial marketing departments are.
TomLucidor@reddit
Chinese propaganda consistently sucks; it's like an unskippable ad, bro. They are better off planting the tools in social media than getting a bunch of posers talking to people who would be better off not being posers after years of compulsory education.
Last_Ad_3151@reddit
Stability took a wrong turn and it turned fatal. It's a simple but important lesson. Don't alienate your fam.
Agreeable-Market-692@reddit
I think Llama 3.1 70B is still fine for synthetic data generation. It has a decent amount of world knowledge and enough attention bandwidth and instruction following to do some very rich transformations of data into more data.
That said, it seems like META is not as interested in open weights LLMs because it apparently doesn't make the dollar dollar bills the way calling kids sexy in their DMs does.
segmond@reddit
If we were in a bar, I would be buying you drinks till the bartender cut us off because same journey, same feelings.
Iory1998@reddit (OP)
Hahahah! Thanks mate. You put a big smile on my face ☺️
rm-rf-rm@reddit
Absolutely! Now it's on us to keep the open-source dream alive and prevent what happened to the internet and to smartphones from repeating.
existee@reddit
This is as good a model as one can get with open-source training data, i.e. the internet.
OpenAI has tons of novel usage data incoming as users interact with their systems. Google has that and for a wide variety of non-AI apps too.
People think the trillion-dollar spend is for the R&D and scale. Research is a fraction of the opex, and the scale is not about breaking through to new model features but about sucking in as much actual human usage data as possible to train exclusively better models.
southern_gio@reddit
They might be going for what OpenAI is doing, which is to use their LLMs for themselves (users gonna be paying) and now and then release some open-source ones, like OpenAI did with GPT-OSS 20B and 120B. I don't think Meta is near the end in this space. They bring a lot to the table.
toothpastespiders@reddit
Lyk dis if u cry evrytim
But in all seriousness, I think about that a lot too. Localllama and stablediffusion are the two hobby subs I read most frequently on here. And both have become anachronistic in their naming.
It's kind of a mixed blessing, but at least llama 3.3 70b is probably going to be the king of that size range for a long time. Though I wish it wasn't due to dense models that size being more or less abandoned.
I remember really, really, trying to like stable diffusion 3. I figured that it had to just be a lack of further training, style loras, etc before it reached its full potential. In the end I gave up and switched to flux. Which did take really well to all that.
Still, I'm always going to have a lot of good memories of those companies really changing the tech world for a while.
Iory1998@reddit (OP)
Me too.