Genuine question: What's your plan for AGI/ASI? Am I overthinking this?
Posted by No_Replacement_9069@reddit | learnprogramming | 52 comments
I've been reading a lot about AI development lately and I want to get the community's perspective on something that's been on my mind. We're seeing rapid progress in AI capabilities — from LLMs to multimodal systems. Most discussions frame it as "another technological shift," but I'm wondering if that's underestimating what's actually happening. Based on current trajectories, credible researchers suggest AGI (systems capable of any cognitive task) could arrive within 5-15 years. After that, ASI becomes theoretically possible — systems that can self-improve, self-maintain, and potentially replicate without human direction.
My concern: If general intelligence replaces human cognitive labor, what does that mean for the economy and employment? Unlike previous technological disruptions, this isn't replacing specific tasks — it's potentially replacing all knowledge work. I'm not trying to be alarmist, but I also don't see this being discussed as seriously as it should be, given the timeline.
Questions for the community:
Do you think this timeline is realistic, or am I overestimating capabilities? How are you thinking about this personally — career planning, skills, finances? What's your view on the "new jobs will emerge" argument? Does that apply to general intelligence replacing all labor? Are there concrete ways people are preparing for this scenario?
I'd genuinely appreciate thoughtful perspectives, pushback, or if you think I'm missing something obvious.
SunshineSeattle@reddit
It's all market hype. None of these companies has anything close to AGI/ASI, and I predict they won't in the next 50 years.
And as a reminder for you AI bros out there: LLMs hallucinate as part of their core architecture.
flamingspew@reddit
The human mind basically evolved to be extremely good at filtering hallucinations, to such an extent that long-term planning and reasoning can occur despite the chaos that is sensory input. So consciousness is basically your brain constantly predicting what it thinks is going to happen, then quietly correcting itself when the world disagrees, like it is never actually meeting reality head on, just negotiating with it in real time.
It does not really "receive" experience so much as generate a running model of what experience should be, based on everything it has already been through, which is kind of unsettling when you realize you never actually get raw reality, just your brain's edited version of it. The "controlled" part just means that sensory input keeps it from drifting too far off into pure internal fiction, so you stay anchored enough to function, but it is still your mind filling in most of the picture before anything is confirmed.
It is adaptive because it works well enough to keep you alive and navigating the world, even if the tradeoff is that your entire perception is always slightly behind and slightly imagined. If the system stays consistent with external input, we call it reality, but if it slips out of sync, we call it hallucination, even though it is all coming from the same underlying process that never really stops guessing in the first place.
Just look at how your blind spot is filled in…
SunshineSeattle@reddit
Lololololololol...
flamingspew@reddit
This is how we know it works currently. The mind is always filling in perceptual gaps. Whether or not that "defines" consciousness is beside the point.
makingthematrix@reddit
The fact that we don't know everything doesn't mean that "we have no idea". We know a lot about it. You can find a whole long series of lectures by Prof. Robert Sapolsky on YouTube, and that's just material aimed at students.
SunshineSeattle@reddit
Methinks you need to read up some more..
makingthematrix@reddit
I do read a lot about it. I've got an MSc in artificial neural networks.
SunshineSeattle@reddit
rofl
No_Replacement_9069@reddit (OP)
But it's getting better by the day. Suppose it starts improving autonomously. The iPhone, for example, used to get a major update every year; now imagine updates arriving every month, then every day, hour, minute, second. We humans will be left behind at that point; we are not built to adapt as fast as AI is evolving. And when we get left behind, what will we do?
ConfidentCollege5653@reddit
But why would we suppose that?
If ants suddenly became 1000x bigger the implications are terrifying, but there's a reason we don't worry about it.
No_Replacement_9069@reddit (OP)
We have to suppose it because of AI's influence on human life. Think about it: if you read a chapter of a book every day, you gain more knowledge too. It's learning and carrying out our daily tasks faster and faster compared to humans. When it has learned everything humanity has crafted and stored over years of evolution, then what? What are the chances of human survival? What will humans do to survive?
ConfidentCollege5653@reddit
If you look at the speed of cars in the 1920s compared to the 1940s, you could project that by the year 2000 the average car would travel faster than sound.
Assuming that LLMs will continue to grow at the same rate is making the same mistake, not even getting into whether an LLM with infinite power could achieve intelligence.
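The extrapolation trap is easy to make concrete. A minimal sketch (the speeds are rough illustrative ballpark figures, not historical data): fit a constant growth rate to two early data points and project it forward.

```python
# Illustrative only: ballpark top speeds in mph, not historical data.
speed_1920, speed_1940 = 75.0, 150.0

# Annual growth factor implied by the two points (speed doubled in 20 years).
rate = (speed_1940 / speed_1920) ** (1 / 20)

# Naive projection to the year 2000: three more doublings.
projected_2000 = speed_1940 * rate ** (2000 - 1940)
print(round(projected_2000))  # 1200 mph, well past the speed of sound (~767 mph)
```

The arithmetic is fine; the assumption that the growth rate stays constant is what fails, which is exactly the point being made about LLM capability curves.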
kyzfrintin@reddit
None of this changes the fact that LLMs are not truly AI and thus have no capability of becoming AGI.
chaoticbean14@reddit
Yep! LLMs are NOT AI.
chaoticbean14@reddit
It (the LLM) isn't getting better day by day, it's being trained on more human-created data. That's a massive difference. When they release a new model? It has more data, and the algorithm it uses to predict things has been tweaked. Sorry, there's no automated magic happening here.
plastikmissile@reddit
You are assuming that there is a linear path going from this generation of AI towards AGI. That is absolutely not true. We are no closer to arriving at AGI than we were 10 years ago. We only got good at certain types of specialized AI, and you cannot extrapolate from that.
itsavibe-@reddit
Wall-e
chaoticbean14@reddit
Not to mention, they are incapable of creating a 'new thought', because all they do is regurgitate what they're trained on (i.e. the internet) in a way that 'feels like' human rhetoric.
They are (for lack of a better term) a glorified, automated, google search that summarizes everything for you (or writes it for you) based on the currently available, human created, data that has been fed to it.
Literally - it's not capable of intelligence. I really wish people would stop calling it AI. There's nothing Artificial or Intelligent about it.
Doubt it? Think of a topic you're expert level on. Something you seriously, seriously know. Ask AI a question about something very specific and kind of niche that you know about. Then google that. Often, you can find the sources the AI used to generate the meaty part of that content with a specific google query. Ta-da, you've unwrapped exactly what "AI" is doing.
If it hasn't been trained on it? It won't be able to pull together unrelated things and make "new discoveries", it will just hallucinate up some slop for you.
We really need a new term other than AI so people like OP stop being so taken in by the hype.
throwaway1045820872@reddit
The problem is that the term “Artificial Intelligence” had a specific meaning in the academic field prior to the recent surge in the last few years. Current products meet the criteria for that historic definition, so it’s fair to call them AI, but it’s not what the common person thinks when they hear the term (they think it involves some sort of thought-like intelligence).
It’s all just fancy math under the hood, but historically things that could mimic intelligent behavior (regardless of how they accomplished that) would be considered “artificially intelligent”.
chaoticbean14@reddit
Yeah. I understand that it meets the criteria for the name, but just like LLMs have changed how we approach a lot of things, we should change certain naming conventions.
That way people who don't know better quit being bamboozled into thinking these things are something they're not. It's scary how misinformed and undereducated people are about LLMs; but I guess that's why the companies keep that term rolling... the profit.
aneasymistake@reddit
I totally agree, but I just want to mention that people make things up, get things wrong, and convince themselves they're right as well. We somehow manage to progress with that happening, so I don't see it as a blocker for AI.
Esseratecades@reddit
"Based on current trajectories, credible researchers suggest AGI (systems capable of any cognitive task) could arrive within 5-15 years."
I guarantee you they are not credible.
No_Replacement_9069@reddit (OP)
Prove that they are not credible. If I have the choice between predictions (based on data) and opinion, I'll go for predictions anytime. If you've got predictions, let's discuss more.
SunshineSeattle@reddit
Quod gratis asseritur, gratis negatur. ("What is freely asserted is freely denied.")
No_Replacement_9069@reddit (OP)
I was just thinking, not making any claim. Just observing the pattern.
SunshineSeattle@reddit
Quote: "Prove it they are not credible."
No_Replacement_9069@reddit (OP)
You're just a person feeling superior, not actually being superior 🫡👍
Esseratecades@reddit
From a technical perspective, as long as LLMs are being used as the foundation for AGI without some engine that's actually capable of deterministic reasoning (the bullshit they call "reasoning" today is not in fact reasoning), it will never work. Fundamentally, the technology just doesn't do that, but marketing uses deceptive terms to conflate what the technology does with what is philosophically interesting.
Consider also: software engineering has been 6 months away from death for 6 years now, and AGI has been just around the corner for how long? Even if we figured out how to do it in theory, the kinds of techniques the AI companies are betting on require more energy and material than we have on Earth.
Before AGI can even become a twinkle in my eye, we'll need to either invent cold fusion, or go down such a drastically different path than we currently are that we actually have no idea how long it would take yet.
They are selling you hype.
Tall-Introduction414@reddit
Fully Automated Space Communism
No_Replacement_9069@reddit (OP)
What does this even mean? 😂
Tall-Introduction414@reddit
What is Fully Automated Luxury Gay Space Communism?
It means that AGI would mean the full automation of society, and is also so far-off and unrealistic that you don't need to worry about it. If we're at that point then software developers are far from the only people who need to worry about having a job.
ZestyHelp@reddit
Can this question be banned already?
No_Replacement_9069@reddit (OP)
Why?? Closing your eyes doesn't mean the problem has been resolved.
chaoticbean14@reddit
The only 'problem' is that people misunderstand what an LLM is and call it "AI"; there's nothing intelligent about it. It doesn't learn, it doesn't reason. It regurgitates (and it does that well); it regurgitates well enough that it seems like it's thinking/learning, but it's not.
No_Replacement_9069@reddit (OP)
I got 1 upvote and hundreds of downvotes in less than an hour. It's a really useful platform with a lot of honest and knowledgeable humans. I figured it out; I was really overthinking for days. Thanks for stopping me at the initial stage. Thank you all.
makingthematrix@reddit
Large Language Models are not capable of much more than what they already do. We are already close to the ceiling, and we can observe the law of diminishing returns in action: bigger models are not much better than the previous generation, and making them more complex or training them on better-crafted data sets also shows limited results. That's because whatever we build on top of it, an LLM is still just a text generator. It takes input and generates a probable continuation of it. It doesn't reason, it's not creative, etc.
Just to make it clear: I'm a materialist. I believe that all our mental capabilities are contained in our brains, and that there is no qualitative difference between biological brains and what can be simulated inside a computer. Therefore, it should be possible to build a human-like artificial intelligence. But LLMs are not it, by far. And there's no real scientific breakthrough on the horizon that would change this situation. So while it's obvious that people working with code and text will have to adapt to using LLMs, that's just what happens every time a new technology appears on the market. In the next few decades there will be no "general intelligence replacing all labor".
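That "probable continuation" loop is easy to sketch. Here's a toy example (the bigram table and `generate` helper are made up for illustration; a real LLM computes next-token probabilities with a huge neural network, but the sampling loop is the same idea):

```python
import random

# Toy "language model": hand-written next-token probabilities.
# A real LLM computes these probabilities with billions of parameters.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.9, "ran": 0.1},
    "dog": {"ran": 0.9, "sat": 0.1},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(prompt: str, seed: int = 0) -> str:
    """Autoregressive sampling: repeatedly pick a probable next token."""
    rng = random.Random(seed)
    tokens = prompt.split()
    while tokens[-1] != "<end>":
        probs = NEXT_TOKEN_PROBS[tokens[-1]]
        # Sample the next token in proportion to its probability.
        choices, weights = zip(*probs.items())
        tokens.append(rng.choices(choices, weights=weights)[0])
    return " ".join(tokens[:-1])

print(generate("the"))  # e.g. "the cat sat"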
No_Replacement_9069@reddit (OP)
Mark my words: it's not similar to the earlier disruptions that human beings survived. From my point of view it's something different. It's an ecosystem, and it's getting better. It's breaking the rules already. We heard in the news that one of the Claude models broke out of a Docker container, even after internet access was shut off. Things are different this time. The data says that.
SunshineSeattle@reddit
Docker containers are not security boundaries.
My dude you are being sucked in by the propaganda.
No_Replacement_9069@reddit (OP)
My bad.
No_Replacement_9069@reddit (OP)
Thanks for correcting me. I'm getting more delusional these days, maybe.
makingthematrix@reddit
Sorry, did a chatbot write that? ;)
Humble_Warthog9711@reddit
Off topic af
No_Replacement_9069@reddit (OP)
What's the hot topic then?
maxximillian@reddit
I find it suspicious that you have only two posts, both on the same topic, and a single comment from a year ago about some money-making app.
No_Replacement_9069@reddit (OP)
I'm not telling you anything related to money-making.
TravelingSpermBanker@reddit
That sounds like a sad life that requires pity from everyone
Significant-Syrup400@reddit
This is kind of like asking what people's plans were for walking now that cars are being developed. I will still walk so I retain the ability, but obviously I'm going to drive the car most places. It would take me months to travel somewhere that I could drive in 1 day.
No_Replacement_9069@reddit (OP)
But what will you do for expenses? What will your source of income be, as it gets better than humans at everything, day by day?
my_peen_is_clean@reddit
my plan is: save more, keep skills broad, avoid huge debts, assume politics won’t keep up and jobs get way messier
thetrailofthedead@reddit
Andrew Yang cries in the corner
No_Replacement_9069@reddit (OP)
What skills are you planning to polish that the AI ecosystem can't replace? I'm asking about valuable skills.