Is AGI the End For Local LLMs?
Posted by spiritxfly@reddit | LocalLLaMA | View on Reddit | 32 comments
If leading AI companies are after AGI and the whole chatbot/agentic AI push is just a phase on the way to that end goal, then what does that mean for local LLMs? I would like to believe local LLMs are the future, but if AGI is achieved, do local LLMs become obsolete and useless? Where does that leave us with our 12x3090 builds, Mac Studios with 512GB, and 6000 Pros?
Due_Net_3342@reddit
AGI :))) if you think “autocomplete” models can achieve AGI in the next 100 years you are misinformed
Silver-Champion-4846@reddit
Scale gooners: Hey, Gboard can't code, but GPT8 CAN! How can you know that transformer large language models aren't going to magically produce perfect AI?
ZealousidealBadger47@reddit
For AGI: if some LLM can improve itself, we will all be in trouble. Banks hacked, accounts hacked, everything can be hacked, and the world goes back to paper...
Silver-Champion-4846@reddit
But I'm blind, and papers don't have a screen reader! Lol.
Potential-Gold5298@reddit
Firstly, the models that we have right now would have been called true AGI without hesitation just 3 years ago. AGI requirements are constantly shifting as models reach performance targets.
Secondly, people use local models for different purposes. Do you need Mistral Nemo or Qwen2.5-14B now? If you use AI as a tool, no. If you use AI for creativity or research, quite possibly yes. For many, the best model is the one with the highest benchmark score, and they switch as soon as a model comes out that scores 0.1% better. It reminds me of the "megapixel race" in digital cameras.
I think modern (and even older) models will continue to be of interest to enthusiasts. In 10 years, it will be possible to deeply retrain small models from scratch on consumer hardware (unless Sam Altman eats up all the RAM), and people will continue to improve old models, squeezing ever more performance out of them.
Silver-Champion-4846@reddit
User: Sam Altman, please don't eat up all the ram again, please? Sam Altman: WE NEED MORE COMPUTE! BUY ALL THE RAM PRODUCED FOR THE NEXT FIFTY YEARS! AND THE GPUS! AND THE NPUS! AND THE OPTICAL CHIPS! AND THE NEUROMORPHIC CHIPS! User: NOOOOOOOOOOOOOO!
Clank75@reddit
LLMs simply will not lead to AGI.
Maybe world models will lead to AGI, but we're likely a long way away from it. Let's see how Yann gets on.
My gut feeling is that AGI will come when we can do quantum computing at scale (I feel like the extraordinary power efficiency of the human brain probably relies on some quantum effects), but that's speculation. Maybe I'm wrong and AGI will be unveiled tomorrow, but what I am 100% convinced of is that LLMs are not a precursor to it, nor will it come from an evolution of LLMs. Nothing whose entire known universe is just tokens of human-developed languages can give rise to AGI.
(Language is a product of intelligence, not the source of it. And the idea that all you need to encapsulate and train intelligence is a bunch of tokens in English and Chinese is extraordinary, foolish hubris.)
Silver-Champion-4846@reddit
A potential proof of what you're saying is that the first stage of training these models, pre-training, only teaches them to predict the next token. They don't magically know how to accomplish tasks. You have to keep telling the model to associate a certain kind of tokens with a certain kind of response; only then will it leverage its accumulated patterns to mimic the ones you told it to mimic. It's like a child without a body and without emotions, forced to read a ton of books in different languages, then expected to be better than humans at intelligence while its own brain is frozen into a snapshot that can't update itself with new information unless you give it external knowledge bases and instruct it to stay away from any information it might natively possess.
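The pre-training objective this comment describes can be sketched with a deliberately tiny stand-in for an LLM: a bigram model that has seen a handful of tokens and, given one token, predicts the next. The corpus and function names here are made up for illustration; a real model replaces the count table with a neural network, but the objective is the same.

```python
from collections import Counter, defaultdict

# Toy illustration of the pre-training objective: the model learns nothing
# but "given these tokens, which token tends to come next?"
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count next-token frequencies for each preceding token (a bigram model,
# i.e. the simplest possible next-token predictor).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent continuation seen during 'pre-training'."""
    counts = bigrams.get(token)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — the most frequent continuation of "the"
```

Nothing here "knows" how to accomplish a task; it only reproduces the statistics of its training text, which is the commenter's point about the pre-training stage.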
ComplexType568@reddit
I have not been able to articulate my beliefs but this is exactly what I've been thinking. LLMs are just not how AGI will arise.
sleepingsysadmin@reddit
AGI has already happened. ASI hasn't.
The problem is that nobody has a proper definition of AGI.
Artificial General Intelligence vs Artificial Super Intelligence.
Artificial intelligence is not dumb anymore. It's smarter than the vast majority of humanity.
So a "general" intelligence isn't someone capable of programming entire projects. Our frontier models are smarter than a general human.
Superintelligence hasn't quite happened yet; I expect it's a hardware problem right now.
Zarzou@reddit
No because you won't be allowed access to the cloud AGI
ryunuck@reddit
Remote gaming hasn't taken off or changed the world. Nobody wants to play with high ping. You just can't get a 60 FPS AGI over a cloud connection, nor is it going to be AGI if there's ping in the first place. AGI starts at real-time. No LLM can ever be AGI.
LagOps91@reddit
agi is so close and yet so far...
megadonkeyx@reddit
it leaves you on ebay hoping for a good price
ttkciar@reddit
AI companies don't really want AGI. That's a line Altman feeds investors to keep the purse-strings open.
Investors are risk-averse. Altman takes advantage of that by spinning a yarn about AGI ushering in a new world of ultra-prosperity, in which anyone who didn't invest will be the biggest losers in the history of losing.
That triggers investors' risk-aversion reflex, and they throw more money at OpenAI just in case he's right, even though they know he probably isn't. They figure, on one hand they risk losing their money, but on the other hand they don't want to risk being the biggest losers in the history of losing.
You really shouldn't lose any sleep over it. LLM technology is inherently narrow-AI, and cannot be incrementally improved into AGI. Deliberately designing AGI requires a sufficiently complete theory of general intelligence, which is something the CogSci field has been picking at for decades now. They've made gradual progress, but they're still a far cry from having one, and precisely none of those corporate billions are being invested in CogSci. They are investing it all in LLM technology, so they're not doing anything to expedite the development of new general intelligence theory.
I suspect AGI will be developed eventually, but I seriously doubt it will be in less than ten years. It might not be for twenty years. Like I said, don't lose sleep over it.
NandaVegg@reddit
100% with this comment.
Also, to achieve "AGI" (sigh) in the sense of fully automated productivity/advancement with current LLMs, we'd need to make everything virtual/terminal-based. In reality, not everything is.
It would require significantly more robotics than just LLMs, at the very least. But anything that requires hardware installation gets significantly slowed down by the natural market dynamics of commodities (datacenters are already too expensive to build in time). Agentic LLMs accelerated in areas that can be done 100% in computers and networks, riding the data and infrastructure built through 30 years of the internet's existence; there was zero friction other than datacenter costs and network bandwidth. For "AGI", the entire world is only just starting to build the infrastructure, which could take 10-20 years at least.
Fabulous_Fact_606@reddit
The bar just keeps moving. My 2x3090 running qwen3.6 with a RAG memory harness is scary coherent. These small models confabulate too easily. We need more VRAM to move to larger local models.
NandaVegg@reddit
Please please don't use the word AGI.
It was originally a thought-experiment-like term, low-key used in forums like LessWrong, then commercialized (pitched in a quest to grab VC and government money) by OpenAI around late 2020 to early 2021.
The term is pure marketing buzzword and is a genuine slap in the face to interpretability people.
NNN_Throwaway2@reddit
LLMs are not capable of AGI, to the extent we even know how to define AGI.
rosie254@reddit
no?? its just a hype marketing term imho, and even if it's achieved, local models will still be useful
Kodix@reddit
Strange and purely theoretical question.
But in a future where AGI *is* a thing, there's no telling how compressed it can actually be. Given recent local model results, I'm actually pretty damn hopeful that whatever progress the frontier of AI makes, local models may benefit disproportionately from it.
Huge frontier models will always be better in *some* capacity, but the surprisingly high agentic capability of local models makes me genuinely hopeful.
dolomitt@reddit
if AGI is achieved - you will not be able to afford it !
Jolakot@reddit
If you brought Claude Opus to 2010, everyone would have called it AGI. We will never achieve AGI because the goalposts shift incrementally as progress is made.
Intelligent_Ice_113@reddit
what is AGI?
BothYou243@reddit
I think that's a fair response.
I don't even know exactly what AGI is. Maybe it's a model with the same parameters but more intelligence density, a completely new architecture, more capabilities,
and of course a harness.
Riric65@reddit
AGI is Artificial General Intelligence. It means the AI is able to do anything a human can do (except anything physical); basically, it's considered something better than us. Something we probably should be afraid of, and we are.
custodiam99@reddit
Nope, you will buy future quantum chips and bio-neural chips to have next level AI at home. If it's not about transformers anymore that won't mean you can't use it locally. You just need new hardware, not a PC.
Important_Quote_1180@reddit
We are farther away from AGI than we are from the typewriter
shokuninstudio@reddit
There won't be an "AGI" in the way it has historically been defined.
The industry will keep shifting the goal posts.
Each company's marketing department will have its own definition of AGI.
They will reach a point where they will say "This is good enough to call AGI" even though it will still be very flawed and consume far more energy than all large language models do today.
Weirdos will insist they are experiencing real AGI because a kawaii chatbot called 'Are Are' tells them adult bedtime stories for $5 in tokens.
New cults will arise. Some of these cults will secretly be run by intelligence services. They'll be a PITA and some politicians will try to appease the followers for votes.
tracagnotto@reddit
AGI is not even close to being achieved.
The current AI foundations and technology are flawed and insufficient.
You think a trillion if/elses (because now we have trillion-param models) can simulate a brain? Even if you backpropagate to refine their params?
Lmao, no. All they're doing is leveraging your gullibility toward their bold claims to make money on your shoulders.
"Hey AGI is here!!!!"
"This AI kills every other AI, it's too powerful, we will release later" --> Anthropic style
"Programmers will be dead in 2 years!!!!!" -> Recurring every 3 months from a different CEO 🤣🤣🤣🤣
🤣🤣🤣🤣🤣
AI now is a bunch of if/else with a ton of scalability and determinism problems; it just randomly guesses what it can do and excels at whatever you train it on most.
If you take Claude Mythos or Capybara, which are supposedly the best models around and still aren't released, I can guarantee you 1000000% that on a big physics test, a narrow AI model trained solely on physics, with 1 million parameters and 30MB of size, can beat Mythos, Capybara, and any trillion-param, terabyte-sized model.
All they're doing, besides monetizing, is finding palliative solutions like RAG, KAG, chain-of-thought, and whatever else to lighten the workload on these AI models. All workarounds that do not solve the fundamentally flawed architecture.
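For readers unfamiliar with the workarounds being dismissed here: RAG, in its simplest form, is just "retrieve the most relevant stored document and paste it into the prompt." A minimal sketch, where crude word overlap stands in for the embedding-similarity search a real system would use, and all the names and documents are made up for illustration:

```python
# Minimal sketch of retrieval-augmented generation (RAG): score stored
# documents against the question, then prepend the best match to the prompt.
docs = [
    "The Transformer architecture was introduced in 2017.",
    "RAG prepends retrieved documents to the model's prompt.",
    "Quantization shrinks model weights to fewer bits.",
]

def overlap(a, b):
    """Crude relevance score: number of shared lowercase words."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def build_prompt(question):
    """Retrieve the most relevant doc and splice it into the prompt."""
    best = max(docs, key=lambda d: overlap(d, question))
    return f"Context: {best}\nQuestion: {question}\nAnswer:"

print(build_prompt("How does RAG use retrieved documents?"))
```

Note that the model itself never changes: the retrieval step only feeds it extra context, which is exactly why the commenter calls it a workaround rather than an architectural fix.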
To obtain AGI we need a breakthrough in this technology.
Quantum computing, neuromorphic computing, or biological computing could be one of those breakthroughs.
For sure not building bigger and bigger models.
The biggest AI discoveries to this day are transformers and Google's TurboQuant.
Transformers changed the way AI architectures are made, and that was the first breakthrough.
TurboQuant is the first real thing that actually aims at the rotten root cause of the current AI's troubles.
lmaoo, we won't see AGI for a long time, nor will programmers be replaced
am2549@reddit
Tbh I'd say we have small AGI. Open source will always lag behind frontier models, but open source will also get to AGI, just later.
MoodRevolutionary748@reddit
Is this AGI in the room with us right now?
They know there's no such thing as AGI, at least not following the current approach. They want to make money and bullshit us with all kinds of nonsense. Dario tells you his models are super dangerous and that you're going to lose your job in the next 5 months (as he's been saying for the last 2 years); Sam is always surprised how good his next model is, and it's always very close to AGI.
BS