Do you believe we're in an AI bubble?
Posted by No-Rush-Hour-2422@reddit | ExperiencedDevs | 616 comments
As a Software Developer, I have been constantly told that AI is going to replace me, so I may be biased against it. And it's possible that my algorithms reflect that bias. So I'm writing this to see if other people outside my echo chamber are seeing the same things as me or not.
I remember when blockchain came out, and it was going to change the world, and everyone had to include it in everything. Then people realized its limits and got tired of hearing about it. Then VR came out, and it was going to change the world, and everyone had to include it in everything. Facebook even changed its company name to Meta, because the Metaverse was where the future was. Then people realized its limits and they got tired of hearing about it. I'm seeing this same pattern with AI. Everyone is convinced that it's going to change the world, and everyone is forcing it into every product, even ones where it doesn't make sense. And people are already realizing its limits and getting tired of hearing about it. But I think the real problem is going to come when people realize that it's a scam.
When people hear about AI, they think about what they see in movies, and assume that's what we have now. But that's just not true. LLMs are just advanced autocomplete. They are given huge amounts of written words, and they use that data to guess what the next word should be when writing out their answers. They aren't actually doing any thinking or reasoning. So there's really no intelligence involved whatsoever, and recent studies like the one done by Apple have argued this. So calling it intelligence is false advertising. And the idea that we are a few years away from AGI is nonsense, because we don't even have real AI yet.
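The "advanced autocomplete" framing above can be made concrete with a deliberately tiny sketch: a bigram model that only counts which word tends to follow which, then picks the most frequent continuation. This is a toy, not how LLMs actually work — real models learn neural representations over subword tokens rather than raw counts — but it illustrates the "predict the next word from prior text" loop being described:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it and how often."""
    words = text.split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def next_word(model, word):
    """Greedy 'autocomplete': return the most frequent follower, if any."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on a mat and the cat slept on a rug"
model = train_bigram(corpus)
print(next_word(model, "the"))  # prints "cat" ("cat" follows "the" twice)
```

The gap between this sketch and a real LLM (scale, attention, training objectives) is exactly where the debate in this thread lives.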
The biggest difference between AI and something like blockchain is that corporate executives can't play with blockchain and use it, but they can play with AI. But because they don't understand how it's working, they think it's real. And this includes CEOs of tech companies like Google, who are so far removed from actual technical work by now that even they are being fooled. To be clear though, I'm not saying "AI" doesn't have its uses. There are plenty of ways it can be very useful, just like blockchain and VR can be useful. The issue is just that people think it's going to be useful in ways that it isn't, because they think it's the AI they've seen in movies.
Then there's the layoffs. During COVID, many companies over hired tech workers, and they've been slowly readjusting since then. But investors don't like to hear that a company has made mistakes and hired too many people. Then along came AI, and companies found their excuse. Rather than admit that they've made a mistake and hired too much, they're saying that they're optimizing their workforce by using AI, in order to spin layoffs as a positive for investors.
So in my opinion, it's only a matter of time before the scam is revealed and the bubble is burst. And it's possible that it could be on the same scale as the dot com bubble. What do you think?
Significant-Ad-6947@reddit
You do not sound like a dev, bro.
Exact_Flamingo_8642@reddit
Well stated! I agree. But there is another way to debunk the AI hype, and that is by looking at the revenues of AI software companies. They are very small, much smaller than the revenues for the internet back at the peak of the bubble in 1999. I present a lot of that data in this MarketWatch article: https://www.marketwatch.com/story/the-ai-bubble-is-looking-worse-than-the-dot-com-bubble-heres-why-f688e11d. Why are the revenues so small? Because companies won't pay much for the things you describe.
truebastard@reddit
I wouldn't exactly call it a scam, because the use cases for LLMs are very real. People are using them and getting real efficiency gains that were not possible before.
But the overhyping and overinvestment are the potential bubble (or "scam"), and just like what happened with the initial Internet overhyping leading up to the dot-com crash, it could happen with LLMs. Progress could be slower than expected but still be real.
symbiatch@reddit
But the thing is that only specific people are getting the gains. LLMs are pushed as helpful to all in all situations. They’re not.
They mostly help lower experience devs and/or boilerplate/copypaste coding in the most common languages. That’s it.
Yes, I’ve tried them many times. Yes they claim to help me. No they don’t actually help. They give wrong answers, just go “interesting question!”, provide wrong code, when they do something wrong they just write more and more vomit instead of actually knowing what to do.
If you don’t believe me I’ll be happy to have someone show how an LLM writes Hamiltonian path algorithm with given constraints so it runs in sub-second time for a large network, or planarizes non-planar polygons, or…
They defo give out code and answers, but they don't work. And some of us work on somewhat more important and/or complicated stuff than "hey, add a suggestion box to my simple React app."
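For context on why the Hamiltonian-path challenge above is a hard test: the problem is NP-complete, so even a correct program — human- or LLM-written — degrades to exponential time in the worst case. The "sub-second on a large network" requirement is exactly what makes a naive answer useless. A minimal backtracking baseline (ignoring the commenter's extra constraints, which would only make it harder) looks like this:

```python
def hamiltonian_path(adj):
    """Find a path visiting every vertex exactly once, or return None.

    adj: dict mapping vertex -> set of neighbouring vertices.
    Plain backtracking: exponential worst case, fine for tiny graphs only.
    """
    vertices = list(adj)
    n = len(vertices)

    def extend(path, visited):
        if len(path) == n:
            return path
        for nxt in adj[path[-1]]:
            if nxt not in visited:
                result = extend(path + [nxt], visited | {nxt})
                if result:
                    return result
        return None

    for start in vertices:
        result = extend([start], {start})
        if result:
            return result
    return None

# A 4-cycle: one valid Hamiltonian path is e.g. [1, 2, 3, 4].
graph = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
print(hamiltonian_path(graph))
```

An LLM can usually reproduce something like this; the commenter's point is that meeting the performance constraint on a large network requires problem-specific insight that pattern-matching on common code does not supply.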
OpenJolt@reddit
One issue I see is that companies are still in "eat up as much market share as possible and take massive losses" mode. What will happen when the expected return doesn't pan out?
truebastard@reddit
They'll have massive losses, canceled projects (or rather entire teams or departments), pipeline projections that never materialized, layoffs, and firms wiped out.
Just like with the dot-com bubble, there could also be a few surviving firms and projects with stronger, more realistic use cases and wide enough adoption (or robust enough balance sheets to absorb the losses). They'll continue on with more realistic progress.
rayred@reddit
While it is true that there are real efficiency gains, I’m in the camp that they are grossly overstated. And I do wonder if it really is worth the cost - especially once the VC dries up.
Also, as time goes on - I do wonder if it will turn into a net negative when you consider the plethora of younger devs who are so reliant on it. Effectively forsaking solid principles of engineering and all that jazz.
poolpog@reddit
Are we in a bubble? Yes
Is it like the blockchain bubble? IMO, No.
I think it is probably more like the 2000 era DotCom bubble. VCs throwing money at shit left and right. Startups galore. Crazy utopian visions. Darn stupid uses of technology.
But out of that bursting bubble, a whole ton of useful and successful technologies and companies emerged.
A lot died in the bursting bubble, though.
I think history will show that this iteration of "AI" -- basically, modern LLMs -- will have a similar pattern.
I'm not even gonna try to guess who will be the winners or losers though.
Wide-Pop6050@reddit
Okay, I buy this. There are some actual uses, but nowhere near as many as people think there are.
king_yagni@reddit
there are a lot of actual uses, this tech is genuinely revolutionary.
…and it’s still true that there are nowhere near as many uses as a lot of people think there are.
it’s insanely useful and insanely overhyped at the same time.
_Meds_@reddit
There are a lot of uses but they’re not really revolutionary. It will replace the areas we already replaced with tech, mostly web forms and initial point of contact. We’ve already replaced these with forms and algorithms, and we’ll do it again with a new form of algorithm.
The revolution will be that a CEO can run a brand new campaign and collect whatever customer info he wants, and he doesn't need to Slack his WordPress contractor to spin up a new page.
AlligatorRanch@reddit
I’m curious what uses people believe in that you think AI won’t have. In my opinion most of the uses people predict for AI will happen, we’re just not at that point yet
Wide-Pop6050@reddit
I think there are good use cases for AI, if it's used in a focused way and with oversight. However, right now many of the uses aren't actually adding any discernible value.
mozaik32@reddit
This. AI being actually useful (and applicable in many more businesses than VR and blockchain, despite OP's accurate claims about its limitations) and the existence of an AI bubble (including AI being shoehorned into every product regardless of its actual value there) are not mutually exclusive - they are both true.
mrxplek@reddit
I feel crypto and VR could have worked. It's just low adoption. They need to be adopted massively, and crypto couldn't because govts were not interested, and VR couldn't because it's just too expensive.
vanKlompf@reddit
What are use cases of crypto besides money laundering and hiding money transfers?
mrxplek@reddit
Country-to-country transfers with low fees: no ACH, no Visa or Mastercard, no banks.
vanKlompf@reddit
There are plenty of services like that, actually cheaper and simpler than BTC.
mrxplek@reddit
What do you mean?
chalk_tuah@reddit
VR will be useful
…one day
…maybe in 30 years
ZealousidealPace8444@reddit
I’ve noticed that too, some folks chase titles or trendy roles, but the ones who build lasting careers focus on mastering the fundamentals and delivering real value. In startups, especially, titles don’t build products, execution does.
alex88-@reddit
The winners will be the tech giants this time around.
They weren’t really giants yet back in 2000
StatusObligation4624@reddit
Microsoft was literally a monopoly in 2000 and currently has the largest stake in OpenAI.
alex88-@reddit
Yeah but Amazon/Google/Meta were nowhere near the level they’re at now
StatusObligation4624@reddit
Sure but Microsoft and other giants at the time like Yahoo, eBay, PayPal, GE, IBM etc. still allowed Google, Meta and Amazon to happen.
Why can’t that be repeated today?
alex88-@reddit
My point is just that the tech industry today is a lot different than where tech was at in 2000.
How can you compete with Meta in AI when they have enough money to offer $100 mil salaries to researchers?
And training models is very compute intensive. Only companies like Meta/Amazon have the necessary capital and infrastructure to do it currently. Smaller players just won’t be able to compete in core AI services.
The tech giants will very much have a moat in this area, just like Amazon/Microsoft have a moat on cloud services atm.
poolpog@reddit
I think the previous comment was probably thinking only of Google. Because of course, you are absolutely correct.
But also, Microsoft didn't get huge from the dot-com bubble. They got huge on DOS, Windows, and Office. And on anti-competitive "embrace, extend, extinguish" tactics.
Calm-Procedure5979@reddit
Well said. Absolutely in a bubble. It's a really, really annoying bubble because of all of the hype, fodder, and absolute bullshit coming from the CEOs.
Meanwhile I'm like "well, it's better than a Google search". Who knows who will win or lose.
Informal_Butterfly@reddit
I feel the same way. There's a lot of value in the current AI revolution, but as with every new tech, it is overhyped right now.
Speaking in terms of the Gartner Hype Cycle, we are currently near the "Peak of Inflated Expectations". Ultimately, the hype will die down and we will come down to have more realistic expectations from the technology.
Sensitive_Peak_8204@reddit
Eh kinda. The only winner out of this will be Google with a better search product. That’s it.
fuka123@reddit
Go try Claude Code on your codebase and have it implement a new feature or refactor a test suite, then update your thesis
No-Rush-Hour-2422@reddit (OP)
Yes, it's very very good at predictive text. But it's not actually intelligent. That was my thesis
Adventurous-War1187@reddit
Developers in respectable languages — Ruby, JS, Python, PHP, etc. — are using Claude Code. Even the creators of the most popular web frameworks say that Claude Code is amazing.
And yet, here is this guy saying Claude Code is not enough for him?
fuka123@reddit
Its a good thing :)
NoleMercy05@reddit
I knew you were going to say that
Constant-Listen834@reddit
How do you define intelligent? How does that differ from predictive text?
pippin_go_round@reddit
Soooo... I'd love to. How do I do that without sending my code to a server my company doesn't control? Because that's a huge compliance no-no.
fuka123@reddit
This is not a problem for 99.9%. But for folks working in banks/government, use the internal AI tooling, which will perform better as it is trained on your specific shit.
BigBootyWholes@reddit
bupkizz@reddit
It’s a handy tool. It’s also absolute dog shite at actually programming.
I worry about the pipeline of devs, because senior devs will be in ever higher demand... and you don't get to be senior if we replace all the juniors with AI.
ruddet@reddit
It's easy to generate dogshit by just prompting AI, but if you set up your rules, context, and instructions well, the new models generate really good code.
BigBootyWholes@reddit
Check out Claude Code: within 30 mins I installed Postgres and set up a db, tables, models, a React app w/ backend CRUD and analytics, and a deploy script to a cheap Linode VPS, all with a few prompts in the terminal. I had to prompt like a knowledgeable engineer, so I don't think a non-coder could get as far as I did so quickly. Claude Code blows Cursor out of the water.
Accomplished_Rip8854@reddit
I keep hearing the same things and I find that a telltale sign that nobody is being reasonable about this.
“AI is only in its infancy” - the ideas behind how this works are like 30 years old.
“It will only get better” - how do you know that this is not as good as it gets? I’m not an expert by any means, but I don’t see any major breakthroughs coming anytime soon.
And worst of all, the generated code is most of the time badly written, and I sometimes lose patience and just write it myself.
I like it for SQL, but I don’t see it replacing anyone.
No-Rush-Hour-2422@reddit (OP)
Agreed. That's exactly what I'm worried about. To get better it needs more data, but there isn't an unlimited amount of data to feed it. Eventually we will run out
Henrijs85@reddit
I'm at 4 YOE and completely ignore most of the "help" Copilot offers. I do ask it to convert my C# response models to TypeScript types for my Playwright tests, though. It's also decent autocomplete for test cases after you've written the first couple.
SketchySeaBeast@reddit
AI is infinitely more practically useful than blockchain, but that's because blockchain had no practical use. We're absolutely in a bubble. It's speculation right now - everyone is yelling that you need to get on the AI train or you'll be run over in a year or two. It's using fear to sell.
At some point people are going to look around, realize that the same promises of miracles coming "in a year or two" have been promised for 5 years and then chill out, but right now people think the AI chart is straight up when it's a sigmoid that's already starting to flatten.
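The "looks straight up but is actually a sigmoid" point can be made concrete: on a logistic curve, the gain per step peaks at the midpoint and then shrinks, even though every step is still "up". A purely illustrative sketch (toy numbers, not a fitted model of AI capability):

```python
import math

def logistic(x, midpoint=0.0, steepness=1.0):
    """Standard logistic (sigmoid) function, saturating at 1.0."""
    return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))

# Step-to-step gains: large near the midpoint, shrinking in the tail.
gains = [logistic(x + 1) - logistic(x) for x in range(-3, 4)]
for x, g in zip(range(-3, 4), gains):
    print(f"x={x:+d}: gain={g:.3f}")
```

From inside the steep middle section, every observation is "it improved again this year"; the curve only reveals itself once the per-step gains start falling.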
Rollingprobablecause@reddit
Honestly, the largest effect it's had on us is less use of Stack Overflow and Google. It's basically a really, really good/fast search engine that stitches together the last mile of a solution search. Great at starter/boilerplate coding to get your mind moving, but after that... not much use (diminishing returns almost immediately if you know what you're doing, and incredibly dangerous if you don't).
Adorable-Fault-5116@reddit
This is also a limited-time effect, because once it kills Stack Overflow et al., where does it get its training data from?
AI is fundamentally parasitic, and so it has to be used only in ways that don't kill the host (human creativity).
informed_expert@reddit
StackOverflow killed itself with its toxic community.
stanzou@reddit
Less money made on ads
t_sawyer@reddit
It’s trained on stack overflow but has killed stack overflow. How will it consolidate answers for new tech?
Rollingprobablecause@reddit
Oh, of course we know that is going to be the biggest issue down the road. What I imagine is going to happen is we'll see a resurgence of Stack Overflow (or maybe under a different name?), where it's more about solutioning again — for AI slop outputs, but also just people having to re-find things all over. I'd love to have some kind of open research site that university/grad students can contribute to, with grants dedicated to solving long-standing bugs/impossible problems/workarounds for people to focus on. We did this at GT when I was a grad student and it was really neat.
warm_kitchenette@reddit
I have particularly wondered why sites that depend on Google traffic haven't been hitting the roof over the traffic plunge from Google AI results. Intuitively, it would immediately reduce traffic for many casual queries. That's what at least one investigation showed.
t_sawyer@reddit
That’s the power of monopoly. Many of those sites get revenue from Adsense.
trusting-haslett@reddit
https://fly.io/blog/youre-all-nuts/
Rollingprobablecause@reddit
Oh, I've read this for sure. It's littered with good points, really harmful ideas (working for fly must be an exercise in cognitive dissonance), and ridiculous opinions that I can relate to but also not enjoy. It reads like someone toked 100g of weed and went off lol.
The argument that an agent rooting around your code will magically be good only works for startups with simpler codebases. This guy cites credentials all the time, but maybe fly has a really nice clean codebase. I don't know about you, but an ecommerce site with millions of dependencies, monolithic tendencies, integrations with 3rd party payment platforms, etc., etc. is not going to see that benefit (ask me how I know).
SketchySeaBeast@reddit
Yeah, it's a decent SO equivalent just because it's faster and its answers are also half shit.
BroBroMate@reddit
I watched Cursor go in an endless refactoring cycle the other day, it would change the tests, then the implementation, then the tests, then the implementation, never able to get the tests to pass.
I'm not too worried about being replaced by LLMs lol.
256BitChris@reddit
Cursor is retarded and uses retarded models when compared to Claude Code and Opus.
It's a step change, and it's only getting better every single day. If you think you're not about to be replaced you're gonna have a big surprise coming in the next 1-2 years.
Wiyry@reddit
mainly posts in AI focused subreddits, 89 day old account, little to no post history, barely any reddit karma
Is this like, your throwaway account or something?
NoOrganization6671@reddit
Just to touch on this, I don't agree with you banning the use of something like copilot. When the nailgun was invented, would you force your workers to use hammers? AI is a great productivity boost for me. Your workers may be using an LLM on the side anyways.
Wiyry@reddit
My guy, we have had multiple massive security breaches, and the quality of code has actively gone down since LLMs, from my perspective. I've nearly had all my startup's data snatched because of a hidden prompt injection in one of our emails. LLMs have been nothing but a headache in every experience I've had with them.
Sometimes, using older methods and tools leads to a better product/outcome. Did you know that we still use draft horses in certain places because they are superior to vehicles? Draft horses are mostly still used in lumber work because they can handle certain terrain much more easily than vehicles, are environmentally friendly, and are sometimes more cost effective.
This idea that technology is just a one-way linear progression (like it's some kind of tech tree in a strategy game) has led to so many issues in the modern world. You need to research and study to find the best way forward, and from my studying I've found that LLM tech is nowhere near mature enough to rely on. I'm not gonna use a nailgun that has a random chance of blowing up in my face.
I'd rather release a quality product and be "behind the times" than barrel forward into immature and untested tech that will most likely lead me into spending more to untangle the mess it's left in its wake.
NoOrganization6671@reddit
Fair enough, and I'm sorry to hear about your data breach. However, and I hesitate to continue the hammer analogy but here we go, sometimes you use a hammer and sometimes you use a nailgun -- it depends on the situation.
I'm not arguing to exclusively vibecode your way forward 100% of the time. But I do think you are arguing to solely code by hand. To me, a Sr. SRE at a large tech company, LLMs have been a great boon to my speed in understanding a novel problem or a new section of the codebase, writing tests, translating my English into Bash, etc. I still prune most of what is generated, and I often do not use the first or even 5th response. However, I do find it a very useful tool which has certainly improved(!) my code quality and speed, not lowered them.
Anywho, I probably won't change your mind on this, it sounds like you've been bitten and I can understand why you've taken a somewhat drastic, in my eyes, ban on the technology for now. I'm just putting forward my POV and I wish you the best :).
256BitChris@reddit
Clear skill issue, which is pretty common in most AI Deniers.
Cope harder, lol.
thedeuceisloose@reddit
How many years of software engineering experience do you have to make this assessment?
256BitChris@reddit
25+
thedeuceisloose@reddit
For sure dude!
barbouk@reddit
Anyone who used AI - regardless of the model - and doesn’t realize it’s not that good at anything remotely complex, is probably not a good developer to begin with and would get easily replaced… by even other, more capable humans.
256BitChris@reddit
Cope harder.
BigBootyWholes@reddit
Claude Code is amazing!
Noblesseux@reddit
Yeah I think this is one of the interesting things with some of the tech boosters on Reddit in particular. They're under the impression that it'll just linearly improve forever, not really understanding that that's incredibly rare as a paradigm in real life.
A lot of the "if it's this good this year it'll be replacing x next year" talk relies on the misunderstanding that there's some guarantee that any given approach to AI has unbounded improvement potential. As opposed to what actually normally happens which is that something gets hyped for a while and then people eventually find the limits/downsides of it and start treating it as a normal tool instead of just a cure-all for every problem.
lurkin_arounnd@reddit
It has and will, however, have some incredible impacts on medicine and education.
Alphafold really is a paradigm shift
-MtnsAreCalling-@reddit
The reason people think it’s “different this time” is not entirely irrational, though it may end up being wrong. The idea is that at some point in the near future AI will be good enough to improve its own code, and the improved version will be good enough to improve itself further, and so on in a self-reinforcing feedback loop that enables it to keep improving at an increasingly rapid rate.
Noblesseux@reddit
I mean it is irrational if you understand how the specific tech they're talking about actually works. The problem is that often when people say AI they're thinking of just a generic concept of a sci fi AI and not a concrete technology that has implementation details and mathematical limits.
Like if you asked a researcher if they thought that the current AI paradigms that places like OpenAI are using would result in an AGI capable of improving itself, they'd laugh in your face because that's not the type of technology this is. The more you actually understand the underlying idea of how these models work and learn the less likely you are to seriously buy into that type of boosterism.
-MtnsAreCalling-@reddit
And yet OpenAI’s CEO is claiming exactly that: https://fortune.com/2025/06/19/openai-sam-altman-the-singularity-event-horizon-takeoff-ai-has-begun-to-modify-and-improve-its-own-code/
Obviously he has a financial motivation to make such claims, and I actually suspect that you’re right on the merits. But he’s far from the only prominent figure in the AI space who is making such claims and it’s not irrational to believe them, or at least accept that there is a possibility they’re correct.
thedeuceisloose@reddit
“Man with vested interest in specific outcome suggests outcome highly likely!”
Low_Level_Enjoyer@reddit
The guy who'll make 29939229 gazillion dollars if he can convince everyone he achieved AGI is trying to convince people he can achieve AGI? Crazy.
SamA just vagueposts all day.
Zero papers have come out from OpenAI (or anyone else) explaining a theory on how to scale modern LLMs to AGI.
DatDawg-InMe@reddit
His word means literally nothing.
Grounds4TheSubstain@reddit
OpenAI shipped a hell of a lot on his watch, man.
DatDawg-InMe@reddit
So? That doesn't mean he's telling the truth here. OpenAI bleeds money, and they have a history of lying too.
Grounds4TheSubstain@reddit
No, it doesn't mean that he's correct here. But he's delivered many times. I generally believe what he says (again, who knows about this). Saying his word is worth literally nothing is, uh, a bit far-fetched. What have you shipped in the past two years?
drakir89@reddit
It's less about shipping and more about incentives. He has in no way shown himself to be exceptionally honorable, and he benefits a lot from hyping AI.
Talking about how AIs are so capable they might end us in 10 years or whatever is effective at hyping AI.
DatDawg-InMe@reddit
It is completely irrelevant what I've shipped.
And me saying his word means nothing in this context doesn't mean his company isn't still doing things or that he knows nothing. But on the topic of vague claims about super AI, his word alone is not a valid argument.
Noblesseux@reddit
Sam Altman is like straight up known in-industry as a massive fabulist almost on par with Elon lmao. Like he's notorious for lying because it induces investors with fomo to give him money.
sheebery@reddit
Wow, businessmen in the AI space are making outlandish claims about the capabilities of AI in years to come?! Color me surprised!
eaton@reddit
But that’s been the idea for the last several breakthroughs, and the generative AI boom, while interesting and full of novel innovations, shows no evidence of being The Breakthrough That Finally Singularities, or something like that.
eaton@reddit
It’s particularly interesting given that the sigmoid curve has been part of every AI “boom” since AI was first coined as a term. It’s been around for more than 75 years at this point, and the incremental improvements and evolutionary jumps are impressive — but every single time, the latest breakthrough (expert systems, machine learning, neural networks, LLMs…) has been heralded as the One That Will Finally Solve All The Problems And Replace All The People.
The UK government famously attempted to build an expert system to codify its immigration laws in the 80s, and that project ground to a halt after years of just-around-the-corner. The underlying tech is now all over the place, but we still struggle to automate tangly, context-rich tasks in ways that boosters continue to underestimate.
Goducks91@reddit
I'm at the point where if it's going to replace software development, it's going to replace A LOT of other careers as well. At that point there won't be a lot of work left until it's replacing blue-collar jobs too, and we'll for sure need some sort of UBI.
YouGoJoe@reddit
I always think about the analogy from the industrial revolution: that cars were going to put the horse industry out of business. Which they mostly did, but they also created the automotive sector. I'm very skeptical about anything resulting in mass unemployment.
YetAnotherRCG@reddit
Just because something has happened in the past doesn’t mean it will continue.
Cars require mechanics, salespeople, and massive infrastructure projects, all of which exist in the real world and are relatively low skill.
Sure it’s a firmly established pattern but it’s not like one of the laws of physics. It shouldn’t just be assumed that an arbitrarily large number of jobs will always exist.
Glock99bodies@reddit
This is just centrism. You think right now, at this time in history, is the most important. People thought the same when Caesar conquered Rome. The same when Hitler invaded.
It’s this same shit everyday. WW3 is coming. AI will replace us. I’m tired of it.
YetAnotherRCG@reddit
Like political centrism? Because I really don't see what you mean by that.
Also the present time is by definition the only time where actions can be taken. When Caesar conquered Rome they were correct to think it was the most important moment in history because it was the present. What a stupid argument.
And for the record, WW3 probably isn't coming, and "AI" is a marketing term — but it will cost an immense number of people's jobs while creating very, very few jobs.
There was a lot of work that people would have loved to do (or to have had done by someone else) in the industrial revolution that they couldn't, because they had to spend almost all human energy on producing food so everyone didn't starve.
As basic survival was made easier by technological innovation, people were able to move into the other work they already knew existed. Assuming this pattern, which has only lasted a couple hundred years, is going to repeat forever is as stupid as our ancestors assuming they could never deforest the land.
There is no reason to assume there is boundless numbers of hidden jobs just waiting to reveal themselves.
Glock99bodies@reddit
Centrism in that everyone thinks the time when they are alive is the most important time in history.
In 5000 years they will see this era the same as the invention of the cotton gin.
Just not that long ago, people viewed the atomic bomb as the end of society. See Dr. Strangelove.
I get that some people are worried about their own jobs but as a whole society will just keep on chugging along. It’s all just fear and uncertainty. But it’s almost a certainty that society will keep moving forward.
YetAnotherRCG@reddit
I can't find any mention of this definition of centrism. Where did you come across it?
And I must continue to stress that all historical figures who thought the time they lived in was the most important were correct, as are we to think the same thing. Action can only be taken in the present moment; it is always the most important time, and history is composed of present moments.
Further, picking out a specific moment as the most important in history is equally odd and pointless, since it depends wholly upon how the definition of "most important" is set up.
Moving past that to the main point, I really don't see the basis for this assumption that there are infinite numbers of possible jobs to replace existing ones lost to automation. It's certainly possible this is true, but what I would like to see is a logical chain of arguments that explains why that would be, instead of vague gesturing at an (admittedly reliable) pattern without any argument for why it holds.
Because to my mind, this pattern is equally well explained by there simply being a very large yet finite number of jobs that we haven't fully explored yet, due to higher-priority jobs out-competing them. In that case the pattern will actually terminate at some point.
And it is my experience that things aren't usually infinite on the finitely sized planet earth. And that people going around thinking about things that aren't infinite like they are has caused a whole hell of a lot of problems for our species.
FakeBonaparte@reddit
Exactly this - not all inventions make life better.
Inventing agriculture led to longer working days and shorter lives and we didn’t make up the shortfall for 10,000 years.
Inventing the stirrup gave nomadic armies a huge advantage that led to waves of depopulation and mini dark ages in the civilised worlds.
Etc, etc.
Goducks91@reddit
I’d argue the dream is mass unemployment: having robots do the hard/boring stuff so we can just enjoy life and do what we want to do. Of course, people are way too selfish and work is too tied to our identities right now for that to be the case. Then you also have the moral dilemma of basically creating sentient machine slaves 🤷♂️
CodyEngel@reddit
If we don't work then we don't earn money and you need money for food.
Goducks91@reddit
The whole point is that basic necessities + more would be covered by the government because humans no longer need to produce economic output because robots/AI have gotten so productive.
DaRadioman@reddit
The problem is that falls apart financially. You have to have the owners of the means of production willing to just give away the profits. That doesn't work with capitalism.
The only viable path there is complete socialism/communism but that fails when people get involved.
So unless we let AI rule we are probably in for a real bad time.
Goducks91@reddit
Exactly. It's a pipe dream utopia that's not going to happen. Funny thing is, complete socialism/communism actually makes sense if we get to this point, but the connotations are so negative and the rich are so powerful that it'll never happen.
Fair_Atmosphere_5185@reddit
It's not the rich or powerful. It's everyday people. People need work to feel useful and valuable. Without it, people just spiral and don't know what to do with themselves. You'll end up with massive substance abuse issues and general nihilism.
Brawldud@reddit
Are you sure about that? The retired folks I know are having a great time. They're skiing and playing pickleball and doing triathlons and biking around town and joining book clubs and learning to crochet. There's so much to do that doesn't need to be anchored in the idea of producing economic value. All you really need is time and access to those things.
Fair_Atmosphere_5185@reddit
Retired folks have their needs met and have no need to prove themselves
Brawldud@reddit
"Having your needs met" is an economic resource allocation problem and "needing to prove yourself", and particularly feeling the need to do that by means of performing work for a private employer, is a you problem.
I have no need to prove myself, but I do need healthcare, a roof over my head, food to eat and a bike to ride.
Fair_Atmosphere_5185@reddit
"needs" can be nebulous.
Young people have a need to compete, gather resources, and generally evaluate themselves against others to see where they are in the pecking order.
And that order influences so so so much.
The retired know where they are in the pecking order and no longer have a need to compete or prove themselves.
Brawldud@reddit
Ok... and before we make people do it by giving up most of their waking hours to afford basic necessities, young people accomplish that just as well in a soccer league or a debate club or musical theater.
We don't need a whole orphan crushing machine just so hypercompetitive people can measure their dicks against each other.
Fair_Atmosphere_5185@reddit
Measuring our dicks against each other is exactly what we are doing and it's intrinsic to humanity.
Brawldud@reddit
Glad you agree that jobs are completely unnecessary for this purpose then.
Fair_Atmosphere_5185@reddit
Absolutely not.
The tribe values those who produce more value. Capitalism just distills that into a slightly more abstract form. Instead of hunting or tracking, we have abstracted that into money.
Brawldud@reddit
I can’t say your “the world needs to be cruel and horrible even when there is enough for everyone” worldview is particularly compelling to me
Fair_Atmosphere_5185@reddit
There is no "needs to be". The world simply is. Sometimes it's cruel and horrible. Sometimes it's not. It's up to you to do what you want with the time allocated to you. If you don't want to compete - that's certainly your choice. But I do. I want a larger home, nicer car, more privileges and benefits, date the prettier & healthier women, raise my kid, and give my kids every advantage I can possibly bestow upon them to make the game even more unbalanced in their favor.
And I am willing to compete and work absurd hours to make sure I get those things.
Brawldud@reddit
The entire premise of this discussion - a premise grounded in reality - is that there is more than enough for everyone to live a dignified life, and the fact that that's not our experience is entirely a function of how we distribute resources.
You are not just stating scarcity as a reality but advocating for it normatively, as though we must have poverty in order for people to prove their mettle.
I really don't care how you live your life. It is your prerogative to buy fully into the worldview that advertisers want to sell you, where material possessions and aesthetic markers of success are the end-all be-all of your life, but I feel that that prerogative ends at the point that you think there is any intrinsic need, in modern society, to enforce artificial scarcity upon other individuals in a world that is actually abundant.
SmartassRemarks@reddit
Your arguments here show a serious lack of understanding about humanity. Most humans want to just chill and have friends and spend time with their children. A small percentage are deeply greedy and will never be satisfied no matter what. This is not new. This is not unique to capitalism either. Human history is filled with conquerors, and they existed before recorded history as well.
Brawldud@reddit
I think human history is filled with unstable equilibria, failed experiments, and moments of incredible technological and societal change that redefine what is possible. Call me an optimist; I think human society ought to operate as a project to reduce scarcity and poverty and violence. We should draw lessons from history in the pursuit of a more connected, more humane, more prosperous society but I think if we draw the conclusion that that pursuit is futile then we have erred in our thinking and need to re-evaluate.
Whether you think humanity requires some kind of hierarchy is not really relevant to whether humanity requires scarcity and poverty; chess.com, soccer leagues and WoW guilds all operate on some kind of hierarchy. I think the notion that economic/military domination is the inevitable outlet of choice for the ambition of "conquerors" is not categorically true. I think that all of these perfectly human impulses, when subjected to societies of different structures or values, will manifest differently.
SmartassRemarks@reddit
Is there a single society of any scale on earth that functions the way you want it to today, or has there ever been? I doubt it. If so, how long did it last?
Are there any social animals that don’t compete for social and sexual dominance? Are there any primates that don’t?
Humans are just animals. Animals built for advanced social activity. Animals evolved according to natural selection and sexual selection. Let’s not forget it. There are many inescapable things about this: the 7 sins are a good quick way to sum it up.
Brawldud@reddit
I don't think any of these actually address anything I said; I've thought about how to reply and each reply to each point is just, verbatim, what I already said in my previous comment. None of anything you've said has anything to do with whether scarcity and poverty are necessary. You are just gesturing vaguely in the direction of "people want to compete with each other" as some flimsy pretext for why people should starve if they do not produce economic output.
SmartassRemarks@reddit
I'm not the guy arguing that poverty and scarcity are necessary. I was only reacting to your comments that seem to believe in a utopian future of communism or something like communism, which would be enabled by a post-scarcity world.
On the other hand, do we need poverty and scarcity to have meaning in life? That's what the other guy was getting at, more or less. I don't think we do. I think that poverty and scarcity are a dangling fear that keeps the populace afraid, which in turn motivates them to join groups and systems that make them feel secure. But these are not the only such motivators for this behavior. Others include violence, feeling alone, feeling unheard, feeling rejected or outcast, etc.
Hence, I don't think that a utopian future of distributed resources and harmony is possible. The few bad apples ruin it for everyone else. This is consistently the case for all of human history, written or otherwise. This is why humans (and our primate cousins) go to war. This is why men are larger than women on average. This is why groups have hierarchies, which involve groups of people falling in line under a shared belief system that makes them feel secure.
Brawldud@reddit
I think we are in agreement on the second paragraph so I'll go to the third one.
I do think this is not a solved problem. In the world we inhabit today you can see how an increasingly insular and insane group of ultrawealthy people have accumulated a level of power and wealth that distorts our ability to have a functioning democracy or an economy that can function without a permanent underclass of people who are homeless or one missed paycheck away from becoming homeless. It's a tremendous problem that wreaks unfathomable harm in the day to day lives of ordinary people.
Given how thorny and relevant the problem is under our current system I just don't think of it as a compelling reason to abandon the idea of a prosperous postscarcity society, particularly as technology makes the math on this increasingly workable; if it is sufficient reason to discard alternatives to our current system it should be sufficient reason to discard our current system as well. I think this problem is solvable, or mitigable or compartmentalizable. Wars, bigotry, insecurity about our place in social strata are all downstream of emotions and impulses that are normal parts of the human condition. I think there is some solution for society to channel these emotions and impulses in healthy ways that do not lead to violence and suffering. And I noodle around with what I think these might look like but I don't think it's absolutely essential for me to have a single, fully-formed, unassailable answer at this stage.
The other thing is that if we discard this notion then what direction do we want society to move in? How and why would we deal with having an economy and level of technological progress that can support postscarcity while rejecting the idea that we should actually have postscarcity? The future is coming, regardless of whether we think proactively about how to shape it or not.
Fair_Atmosphere_5185@reddit
I'm saying that the human brain is unable to comprehend and live in a world of abundance. We require suffering and hardship to be happy. A life of leisure and abundance is directly contrary to our entire evolutionary history. I'd actually argue that society today is too abundant and it is too easy to meet one's needs with minimal work and effort.
Artificial scarcity forces a competitive mindset, that in my opinion, is ultimately beneficial for society. Hard work and competition is the crucible that forges better men.
None of this is driven by advertisers or consumption really. I don't use social media outside of reddit, and I only use this because it's anonymous. My vacations are trips into the outdoors to hunt, backpack, and otherwise do things off grid. My days are largely working from sunrise to sundown for the benefit of my family. When I get a day off, I spend it hiking for 12+ hours.
I am driven by money because it affords my family a better quality of life. I can pay for tutors and other academic help for my children. I can pay for their college in full and they don't have to take out loans. I can buy them their first homes. My interests are working out, hiking, backpacking, biking, and spending time outside. They really aren't that expensive.
Brawldud@reddit
You know, I enjoy riding a bike up a really long, steep hill, miles long at 8+% grade. Even if it's a hot and humid day, even if it takes me forever, even if I'm suffering dearly in the moment. In some sense, doing the climb successfully - or giving it my best shot - is a tremendous reward in and of itself and is immensely good for my self-improvement, but the really nice thing about climbing uphill is getting to take the same grade back down and enjoy the spoils of my work.
That's not a metaphor. It's just a form of suffering that I seek out, one that allows me to improve my skill and perform my talents, and one that has no connection whatsoever to economic production. I believe very strongly that, absent economic pressures, we'd all make do in terms of finding our own challenges to struggle with, grow from, and structure our personal narratives around. In fact I believe the freedom from arbitrary scarcity is exactly the freedom to self-actualize through struggles that are personally meaningful. When I look around in my community and see people who are struggling to find work or pay bills and getting kicked into survival mode, that's actually debilitating to their personal growth, their ability to focus on the things that matter to them and their ability to connect more deeply to the people around them.
You do plainly state that you value money as a means of achieving a bigger house, nicer car and sexual partners who better meet your standard of desirability. Perhaps it is important to you to have this individualistic notion that the world demands you acquire the modern trappings of success and provide materially for your people - and you take the narrow view that this means your partner and children - and that if you can provide, you've shown your worth and earned your place. But this really is just a thing you choose for yourself, something you choose to structure your life narrative around, and not some kind of iron law of humanity or something everyone else needs to imitate. I think it's strange to project that onto other people and assert that other people would be lost without society demanding they do the same, and to reject the idea that satisfaction can come from analogous struggles that do not have existential stakes.
Fair_Atmosphere_5185@reddit
You seek out long bike rides. I seek out grueling trips in the back country. I live a life of abundance and I go out of my way to subject myself to the most absolutely miserable conditions I can. The actual trip itself is usually pure suffering - but that is what gives me the greatest pleasure and joy in the world, at least afterwards.
We currently live in the richest, most prosperous, and most comfortable time in human history. We already should be at this Eden you think will come. I don't see it and I frankly think the sickness at the core of society will only get worse as abundance becomes more prevalent.
Brawldud@reddit
Why would that last statement be true? We have nothing remotely approaching an equitable distribution of resources and we have an economic system that actively funnels resources away from working people and toward people who have orders of magnitude more resources than they could ever truly want or need. We have politicians who are complicit in accelerating this process and breaking the systems that should be serving people in need.
It seems pretty patently obvious to me how this contradiction could exist - that technology has advanced so far and yet society could never seem bleaker - and it's got everything to do with how we organize and allocate wealth.
You yourself assert that poverty must exist and is good. I claim that arbitrary, needless cruelty, such as the enforced poverty and precarity that exists on such a huge swath of the human population, is causing incredible amounts of social harm and instability.
Fair_Atmosphere_5185@reddit
Perhaps such social harm and instability is necessary? Perhaps humans are not meant to live in idyllic peace and we need conflict and struggle to justify our existence. It's easier to accept the struggle when it's against an external other. It's harder to accept when the object of our struggles is ourselves.
teslas_love_pigeon@reddit
It's always a pathetic mindset I see from people that are completely alienated from their friends, family, and community.
The idea you need to work is deeply dystopian.
ThrowRA_Elk7439@reddit
Humans might not need employment, but we do need work as a vocation and a purpose.
BlazeBigBang@reddit
Sure, but you can do that right now outside of work. I don't find purpose in my work. I enjoy it, but it's only there to pay the bills.
ThrowRA_Elk7439@reddit
No, same. My point was that we do need some mid-to-major activity that is aligned with our values, talents and interests. The 9-5 life though is absurd.
Moon_Atomizer@reddit
HOBBIES! Hobbies and community! We have words for these things people c'mon... it's sad that we live in such a capitalist dystopia that we struggle to describe things without the framework of 'work'
ThrowRA_Elk7439@reddit
That's not the same thing. We, humans, are innately inclined to be purposeful and seek engagement, which can manifest as a lifelong serious specialization, whether it's paid or not. Hobbies are more of a pleasant pastime by definition. https://www.vocabulary.com/dictionary/hobby
And, when thinking of work as dictated by capitalism, don't forget that labor is a foundational tenet of socialism—there is no socialism without labor.
Brawldud@reddit
During my last bout of funemployment I got really fit from weights and biking, started doing yoga and took up playing piano again. It was really nice. I ended up jettisoning the yoga and piano once my days got super busy again but if I could have all that time back that I spend at work, I’d be unstoppable.
ogmo0n@reddit
I love this so much. I just wonder how we will have the things. Like I can’t see it all the way to the end. Is gas free? Are my tacos free? The whole world economic model would have to change and there would be less of a need for our government if they didn’t have to fight fake wars for the elites.
Brawldud@reddit
Well, you know like, I can't see our current system to the end either - eventually a tiny sliver of people have all the money in the world, 95% of the population is unemployed and can't afford anything, the planet is depleted of resources and the climate is fucked? - so I feel there should be grace for not being able to conceptualize the whole thing front-to-end in your head.
teslas_love_pigeon@reddit
I bet you would really enjoy the work of Rutger Bregman. He wrote a great book called Utopia for Realists.
To answer your question, we can absolutely have all these things. We just have to collectively decide them.
Always remember that our entire species was built on collaboration with others, for 100s of thousands of years. Always have solidarity for your fellow workers.
Goducks91@reddit
I don’t think so. People need work to feel useful because they have to work to live. Take away the have to part of work and people will find their own joy. I’d argue that substance abuse issues would decrease drastically because poverty would essentially be eliminated.
Fair_Atmosphere_5185@reddit
I don't think people will find their own joy.
That something to work for is their own status and general position in the pecking order. In a world of abundance there is no way to differentiate who is better than whom.
doob10163@reddit
I disagree with this and feel that this is something that is taught to people that live in this society. I'm sure that many people do feel this need and that's why this society continues to perpetuate in this way. I reject this and personally don't care about this. If I could have my necessities and find joy in life outside of needing to care about being superior in the general pecking order I would not be participating in it at all.
Fair_Atmosphere_5185@reddit
If you have no desire to compete then you generally are at the bottom of said pecking order
Goducks91@reddit
There's plenty of ways to satisfy the drive to compete without it being tied to our jobs. Shit ranking up in League of Legends would satisfy that need.
Fair_Atmosphere_5185@reddit
I think having a rank in League of Legends would signal a lack of standing
doob10163@reddit
In which society? Let's say you are Faker, the league of legends pro who has been the most successful person in esports for a while by many metrics. I would say that his standing both monetarily and in the society that he lives in is very high. This is just to say that the standing that you're in is more reflective of the society that you are living in rather than what your self worth and life satisfaction could be tied to.
Fair_Atmosphere_5185@reddit
Playing video games is not a flex lol
doob10163@reddit
You're talking about things which you personally find as a 'flex' as in something places one in an arbitrary ranking of how someone ranks in society. Video games in the context of the society that is discussed here is the pecking order and rank that we are talking about. This argument shows that what is being valued as far as needing self satisfaction (as per your initial statement) is in the context of the people that you are surrounded by. For what it's worth, he's also successful in other endeavors because of this and most likely is ahead of many people for what you likely personally consider to be status symbols, e.g. money, relationships, leadership, how people look up to him, etc. My argument is that the pecking order says more about the people you are surrounded by than what is intrinsically motivating.
doob10163@reddit
It seems like you're the type of person that needs to feel superior to others to get life satisfaction so we may not agree with each other. I find your conclusions to be illogical, as there are people on the bottom of said 'pecking order' who do care about it, and those who are at the top who may not as much. We are talking about life satisfaction, not the specifics of how someone fits into the pecking order system. I find immense satisfaction in creating things, whether its through code, playing music, or growing plants or tending to my dog and relationships. I don't care about feeling superior to other people in how society perceives us in terms of status.
DaRadioman@reddit
People need something to work towards. This doesn't in the least bit mean they need to work for the purposes of making money. That's a lie perpetuated on us by the folks who benefit from selling it.
Look back in history, plenty of people didn't work for a paycheck and did great.
bluesquare2543@reddit
lmao no wonder we haven't unionized yet. Shame on you
Fair_Atmosphere_5185@reddit
I would never join a union so at least you have that right
OddWriter7199@reddit
Why assume benign intent on the part of AI? Agree with your first two points. For the third would remind everyone that AI is programmed by humans. Wealthy ones.
BlazeBigBang@reddit
Yeah, that's why Marx always said that the revolution would be bloody.
DaRadioman@reddit
Which I disagreed with, but in this hypothetical AI driven "We don't need your labor" world we are discussing basic economics fall apart.
drakir89@reddit
If future tech gives us 10x productivity, theoretically capitalism can work fine if you bundle it with very high marginal taxes and UBI. Capitalists would compete with each other over who gets to satisfy the demand from the public's UBI spending.
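As a toy sketch of that arithmetic (all figures are invented purely for illustration, not a real economic model):

```python
# Toy back-of-the-envelope: 10x productivity gains taxed at a high
# marginal rate and redistributed as UBI. Every number is made up.

baseline_output_per_person = 50_000   # rough per-person output today, $/yr
productivity_multiplier = 10          # hypothetical future tech gains
marginal_tax_rate = 0.60              # "very high" tax on output

future_output = baseline_output_per_person * productivity_multiplier
tax_revenue_per_person = future_output * marginal_tax_rate   # funds the UBI
left_to_capital = future_output - tax_revenue_per_person     # kept by owners

print(f"UBI pool per person:   ${tax_revenue_per_person:,.0f}/yr")
print(f"Retained by owners:    ${left_to_capital:,.0f}/yr")
```

The point of the sketch is just that at a 10x multiplier, even what's left after a 60% take still exceeds today's entire per-person output, which is the sense in which high taxes plus UBI could coexist with profit-seeking.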
DaRadioman@reddit
Very high marginal taxes are counter capitalism, and when money == power there's no equilibrium when there are haves and have nots.
You cannot have rampant wealth inequality AND UBI in the same system. They are polar opposites.
Spirited-Camel9378@reddit
You’re a fool if you think those that own this tech won’t ensure it creates an underclass that can be exploited for further gain
Goducks91@reddit
Oh, I'm not a fool and absolutely think this is the direction it's going to go. I'm just dreaming of the ideal situation lol.
SporksInjected@reddit
If people have disposable income, art will still have a niche. No amount of industrialization has displaced that so far. Code isn’t really any different either i guess; if someone wants handmade code, they may pay for that even though it’s not objectively better.
Electrical-Ask847@reddit
i mean it could happen. ppl now have way more leisure than ever before, there is no reason to believe this is peak leisure.
ottieisbluenow@reddit
It doesn't fucking matter even a little to the people being replaced. I am in my mid 40's. Do I just fuck off and die now?
nerd_herd3@reddit
Except this time we're the horses
projexion_reflexion@reddit
Our population will be greatly reduced, and the most beautiful ones get ridden by rich men.
Pokeputin@reddit
Sir, I'm not sure r/experienceddevs is the right place to talk about kinks.
DrXaos@reddit
Reasoning by analogy may be improper sometimes.
This is more like importing very eager slaves who are rapidly getting better at everything. The owners of the slaves might have a very small number of slave whippers and foremen jobs, but for the most part like all other slavery it greatly suppressed the wages and employment of free people.
Yourdataisunclean@reddit
If this actually happens at a fast pace. Congrats. You're now an Alignment Engineer. You've been invited to work on the semi-governmental Toilet Paper Production Partnership (pronounced T-P's) where you help develop and oversee the AI that makes toilet paper for all of humanity while preventing it from developing an alien ethics system and turning the entire earth's ecosystem into a massive white sog (the TP version of grey goo).
BudgetMight9270@reddit
Wild that you think TP will be relevant by then. Team bidet babay.
Lubs@reddit
Crazy you think bidets will still be relevant.
Better get on team seashells.
trwolfe13@reddit
The UX for them is just terrible though, and the documentation is unheard of.
NUTTA_BUSTAH@reddit
I mean you tend to get three, isn't that already a great experience? Imagine working with one.
pkat_plurtrain@reddit
I've only ever used the other two to operate one. I feel sorry for those who've got none.
Bodine12@reddit
Crazy how you think our future AI-overlords will allow us the time for wiping. Team Colostomy Bag for the win.
Lubs@reddit
Hahaha here we all are pretending we won’t end up as batteries.
SketchySeaBeast@reddit
He doesn't know how to use the three seashells?
BeerInMyButt@reddit
Convinced all you bidet people on reddit are just perverts that want people to think about them using it
ಠ_ಠ
renoirb@reddit
We have the Metaverse, anti-work about toilet paper coming soon. Then corporations as states and we’ll have a “snow crash”.
runonandonandonanon@reddit
It's weird there's so much focus on using this unprecedented productivity to cut costs rather than to do more. I wonder if anyone's imagination has really caught up to a force multiplier of this magnitude. (I do think we're in a bubble fwiw)
rashnull@reddit
Name one job category that will be created by deploying LLMs
utopia-@reddit
code debugger 😅
E3K@reddit
Prompt engineer.
rashnull@reddit
Is that a joke?!
E3K@reddit
It's already reality.
MistSecurity@reddit
I am simply skeptical that this advancement will generate more jobs than it is going to take away.
Sure, jobs will undoubtedly be created, but enough to replace entire multiple sectors it will have wiped out by then?
FakeBonaparte@reddit
That’s a limited sample size. How about the invention of agriculture? It extended work days and shortened life expectancy and we didn’t make up the shortfall until about 10,000 years later.
maximumdownvote@reddit
A lot of claims to base your argument on.
FakeBonaparte@reddit
Not at all, it’s a single, well-known topic in academic pre-historical study. “Why did humans adopt agriculture en masse given that it seems to have made life worse?”.
But if you prefer, rather than introducing new information here I could just edit my comment down to “source?”. I’m not the one making the extraordinary claims here.
sleepahol@reddit
Unless we're the horses in this analogy.
maximumdownvote@reddit
I think we are the ai in this example.
MCFRESH01@reddit
There will be a lot of people who need to pivot their careers. I'm a self taught dev. I have a great career but the minute I get asked to do AI related things other than use an API I'm pretty much screwed. Luckily I have a marketing degree and some practical experience, I can probably pivot to product fairly easily.
crazyeddie123@reddit
but the minute I get asked to do AI related things other than use an API I'm pretty much screwed.
Because you can't? Or because you won't?
MCFRESH01@reddit
I do not have the math background but I guess there is probably some work I can do around it if I had to
NUTTA_BUSTAH@reddit
But that's similar to comparing building database engines to hosting or using them. It is difficult and highly specialized (like maths you'd need for AI research/tuning), but the grunt work has already been done for us, and the remaining improvements are handled by the handful of specialists. We keep rocking on the good old abstractions like SQL.
Goducks91@reddit
Wouldn’t marketing and project management also be replaced by AI if it’s replacing software developers?
MCFRESH01@reddit
Someone’s got to talk to the customer and figure out what their needs actually are. Hopefully AI doesn’t get good at that too.
NUTTA_BUSTAH@reddit
I believe in this hypothetical the customer representative is an AI, their needs are figured out by an AI and the conversation happens between your AIs and their AIs.
It's AI all the way.
PoopsCodeAllTheTime@reddit
If it's not good at that then what is it good at, yk
DaRadioman@reddit
That's something AI is decent at...
Kardif@reddit
Don't worry, you won't be asked to do that stuff, or if you do, it'll be in the service of someone else with the necessary qualifications. A masters/PhD in the field is basically required at this point, this shit is hard and even understanding how it's working requires a decently solid level of mathematics most people don't have
dbxp@reddit
Back then they were really crying out for a solution to the horse problem due to the amount of waste they created: https://www.historic-uk.com/HistoryUK/HistoryofBritain/Great-Horse-Manure-Crisis-of-1894/
There isn't the same sort of critical demand for AI as there is for say farm automation
SergeantPoopyWeiner@reddit
The one thing that blows your analogy apart is AGI. I'll let the readers ruminate on why that is.
dbbk@reddit
It's not going to replace software development. Don't confuse the vibecoded junk with experienced engineers using it as a tool to move faster.
cockNballs222@reddit
No career will truly be “replaced”, it’s just a matter of now you need one person to “supervise” VS previously you needed 5 people to actually “do”. There will always be a job for experts in their field, the bottom 50% should watch out tho.
fundthmcalculus@reddit
And that's part of the situation. So many people jumped into CS for the quick-buck mentality (fueled by ZIRP, Section 174 tax changes, etc.). Now there is a glut of under-qualified people, which unfortunately helps pull down salaries for the qualified people.
TechnicianUnlikely99@reddit
You think a $500 a month UBI will save you? Lol
Goducks91@reddit
It’ll have to be a livable amount if we get to the point where AI has taken over 80% of the economy lol
TechnicianUnlikely99@reddit
Will it? Why? What makes you think the rich need us if AI gets to that point?
metalOpera@reddit
Who's gonna keep them in business and buy what they're selling?
obviousoctopus@reddit
What is being replaced specifically, in your view?
I find it hard to see what jobs can be substituted with systems that provide probabilistic output without any understanding.
Maybe I confuse "replacing jobs" with "replacing jobs with concern for quality and accountability" while the idea is to just "replace jobs" without any of these concerns? Annihilation via infinite enshittification of everything, but with great profits for a minute or two?
Goducks91@reddit
My view is nothing is getting replaced. But if AI becomes good enough to actually replace people, it’s going to replace a lot of things. I don’t think that happens until AI is sentient.
Own-Chemist2228@reddit
I wonder why the media emphasizes how it can replace software engineers but never seems to mention that it is just as likely to replace doctors and lawyers. Of course these professions require formal credentials, and the public would never accept literally replacing a surgeon or courtroom lawyer with AI. But medicine and law are otherwise perfect candidates for AI: they both require making decisions using large amounts of data. There are lots of lawyers and doctors working behind the scenes doing research, and these people do not technically need to be credentialed to do this work. In fact they don't even need to be people.
I don't see AI replacing SW engineers one-for-one, an AI is not going to be attending daily standup. But it could reduce workload and possibly headcount in some roles. But that is true with many professions, including some lucrative ones like medicine and law.
The difference is that AI will also create demand for SW engineers and we don't ultimately know how the market will balance out.
Pikaea@reddit
Finance it's going to destroy. All those associates spending all their time in Excel and building PowerPoints with updated PE ratios? Goodbye.
Repulsive-Hurry8172@reddit
The reason other white-collar professions (especially those in business/finance) are still happy with AI is that they think they're irreplaceable by it, even though a lot of them already use AI constantly.
I work with actuaries, and they already want our team replaced (by them, with AI), yet they still cannot produce software without us. They threaten and question our output/efficiency because AI is supposed to work for devs too... only it doesn't, because AI can't cross organizational inefficiencies.
I don't think they'll be replaced either, but given their obvious dependence on AI, I see them getting replaced first. If a RAG were built on ISO, they'd probably sink even deeper into AI's bosom.
MalTasker@reddit
UBI ain't happening in the US anytime soon, even if the Democrats luck their way into power somehow
SketchySeaBeast@reddit
Yeah. It can certainly help at times, but if it gets to the point where it can be a software dev it'll be able to replace a lot of other jobs, leading to a complete societal shift as those with the resources to obtain the compute will have all the power and won't require workers at all anymore. Kind of a hyper-capitalist apocalypse.
vinny_twoshoes@reddit
Maybe, but that's a load-bearing "if". What I've seen isn't able to replace devs, and it's also not _close_ to replacing devs. Maybe I'm biased or misinformed but it does seem to be getting better and better at producing work that _looks_ good, which in turn can fool more people into the hype cycle. If AI gets so good that good engineers no longer add value, then I suspect we've got bigger problems on our hands.
SketchySeaBeast@reddit
Oh yeah, I'm not convinced by the AI at all. I'm just saying if the "best/worst" case happens it will not be a good thing.
mcmaster-99@reddit
Im perfectly fine with UBI honestly. I’d fuck off to some cabin or remote island, make friends with a couple bears, and forget the corporate world.
PizzaCatAm@reddit
It will replace a LOT of other work streams as we know them, but people will be involved, that’s my prediction, we are just going to be working at a different level in a different way.
roodammy44@reddit
I was honestly more convinced five years ago that self-driving was imminent than I am now that AI will be taking over in three years. But that's probably because I've had the chance to use AI.
elprophet@reddit
I was on the Elon FSD train in 2016-2018. I got off when the only improvement came in 2021 when it finally could... make a "Bing" when the streetlight turned green.
mishathepenguin@reddit
It's the most expensive machine in the hospital!
ewankenobi@reddit
To me it's more like the dot com bubble. Think there will be some kind of correction in the market, but it's still going to change society in the long term.
Whether that means it can replace programmers or not in the future I don't know. I find ChatGPT pretty useful when I'm stuck, but had to turn off Co pilot auto complete after a few days as it was more distracting than helpful
corny_horse@reddit
I'm not a crypto enthusiast by any stretch of the imagination, but "infinitely useless" undersells it. Its utility isn't particularly relevant to the "1st world," but in the many countries where the banking system is quite unreliable and/or the currency fluctuates wildly in value, it can provide stability and reduce transaction friction in a way that can be difficult to emulate. This is particularly true because the Western countries that could provide a reasonable alternative currency are often antagonistic towards the countries that would need this the most.
That isn't to say they do this well or that a better system couldn't be adopted that doesn't burn carbon for dubious return value (although some currencies have attempted to solve for that with proof of stake, which radically decarbonises the process).
From a first world perspective though, yes, "AI" is substantially more practical.
SketchySeaBeast@reddit
You're right, I was exaggerating there, blockchain can be used for something, regardless of whether or not it should be. And people tried to jam it into everything. AI is much easier to try that with.
corny_horse@reddit
I'm definitely less jaded than most engineers, it seems. I've somehow managed to avoid both hype trains. I never once had management try to get me to shoehorn blockchain into anything I was working on, and I had to beg my current company to let us use an LLM, which they reluctantly did, finally, like last month. And we haven't even thought about integrating LLMs into the product as far as I know!!
abrandis@reddit
Too bad by then a shit ton of developers will be on the unemployment line, while the executives who put them there are cashing out their fat bonus checks... Funny how so much of being successful in capitalism is timing.
RadicalDwntwnUrbnite@reddit
The pendulum will swing the other direction. Once it's brutally apparent that AI isn't replacing dev jobs, it's just creating mountains of technical debt and there is a generation gap of competent Sr SWEs as they stopped hiring jr/intermediate, the ones that haven't rotted their brains on prompt engineering are going to be in huge demand.
thedeuceisloose@reddit
Unfucking AI generated code will become a specialist job in short order
geon@reddit
No. Because if that ever happens, 90 % of the population will be unemployed as well, and no one can afford to buy whatever the executive is selling. Total economic collapse.
SketchySeaBeast@reddit
No reason to even put in that "or". Just look at the people who own the compute today. They are already impossibly wealthy and powerful and, if they aren't working towards that post-scarcity utopia today, why would they tomorrow?
geon@reddit
Just wanted to cover all edge cases.
_Questionable_Ideas_@reddit
Blockchain had multiple killer use cases like buying illegal drugs on the internet, tax evasion, funding terrorism etc.
SawToothKernel@reddit
That's funny, but you are inadvertently pointing out the one absolutely killer use case, which is money that no-one else can touch or mess with.
warm_kitchenette@reddit
No, there are nation-state uses as well. It turns out folks in Iran were using it to bypass all of the sanctions. Never knew that until an Israeli APT blew it up in June. (They deliberately destroyed $90mm by sending all the funds to unusable wallets.)
The Russians have been using it similarly. Ukraine has been going after it.
drcforbin@reddit
Just wait until LLM-based agents can buy drugs from terrorists for you with cryptocurrency while hallucinating your tax returns.
shared_ptr@reddit
Problem is, what AI can already do, even if it stops improving right now, is still really disruptive. The industry as a whole hasn't learned how to use these tools effectively yet; systems that leverage the latest models are still mostly under development and haven't hit the market, but those systems are going to be very transformative.
Already, the idea of a junior engineer is very different now than it was a year ago. Most of the work you'd give a junior can now be done with AI, so that's a big change for the industry to figure out (you can effectively buy a junior engineer for 1/1000th of the price now, which changes the economics of the role).
This is all assuming the models don’t improve at all too, which seems unlikely even if I’d personally be very happy if we have hit a limit on progression for this stuff.
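The 1/1000th figure above is a back-of-envelope claim; the arithmetic behind it looks roughly like this. Every number here is made up purely for illustration (none come from the thread), and the ratio is extremely sensitive to them:

```python
# Illustrative only: hypothetical inputs chosen to show how a ~1000x
# figure could arise, not measurements of anything real.
junior_salary_per_year = 80_000          # hypothetical fully-loaded junior cost ($)
tasks_per_year = 800                     # hypothetical junior-sized tasks given to a model
tokens_per_task = 50_000                 # hypothetical prompt + completion tokens per task
price_per_million_tokens = 2.0           # hypothetical API price ($ per 1M tokens)

# Annual model spend: total tokens, scaled by the per-million price.
ai_cost_per_year = tasks_per_year * tokens_per_task / 1_000_000 * price_per_million_tokens
ratio = junior_salary_per_year / ai_cost_per_year  # 1000.0 with these inputs
```

Change any input by an order of magnitude and the ratio moves with it, which is why claims like this are so hard to pin down.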
chunkypenguion1991@reddit
I think a better analogy is the dot-com bubble. The underlying tech (the World Wide Web) was solid, but the speculation about its possibilities took 10-15 more years to be fully realized. Soon AI will go through the trough of disillusionment, and companies will need to set more realistic goals for what it can achieve.
Atupis@reddit
This, if you develop agents, it is more than obvious that tech is not product-ready yet, but same time it is just magical and wonderful.
Constant-Listen834@reddit
Agreed. The blockchain analogy is terrible. Blockchain never had many use cases. Meanwhile, every single dev at my company is using AI and seeing efficiency gains. My teacher friend uses it to grade papers, my accountant friend uses it to double-check work, etc.
Wiyry@reddit
This is a portion of the reason why I'm not pivoting my startup into AI. I used to be terrified that AI would make me obsolete... until I actually took a look around and did some research. The more I learned about LLMs and the more I studied their progression, the less I viewed it as this all-consuming technology.
Hell, even in my own use and experiments: current AI fluctuates in quality and reliability almost daily and it seems to struggle to code any better than a fresh grad junior at times.
This isn’t to say the tech isn’t possibly useful but that we are still a long ways away from this all encompassing miracle machine.
the_fresh_cucumber@reddit
Infinity divided by zero
Tricky_Acanthaceae39@reddit
The difference is that peak hype for AI is complete replacement of people and also singularity. Each of these is so high that it seems reasonable that we will land somewhere between where we are and those points.
SketchySeaBeast@reddit
Well, sure, but if I were to say I'm headed to the moon the second story of my house would be considered "landing somewhere between where I am and that point".
Tricky_Acanthaceae39@reddit
That’s not the point and it’s a poor metaphor. Singularity with AI would be more like travel to the moon is possible today and peak hype would be terraforming entire solar systems.
Jobs are being replaced by AI as we speak. Tools like MS Copilot do a lot of the work jr devs used to do, and we're seeing similar in legal, HR, accounting/finance, and creative work too. It just remains to be seen how much further the technology can actually be deployed. Right now it's a multiplier, not a full replacement. I'm not convinced it will fully replace teams, but it will shrink them considerably.
Historical_Owl_1635@reddit
Back then people genuinely thought it would become a usable currency.
I literally implemented it in software used by banks worldwide as a selling point, because the banks were practically demanding it to stay relevant.
StatusObligation4624@reddit
Usable decentralized currency. Once governments started to regulate it, it lost a lot of its value.
quentech@reddit
lmfao seriously?
Hmm.. what was the value of a Bitcoin in the Mt. Gox day? $100? $200? $1000, tops.
What's the value of a Bitcoin today? Hmm... over $100,000.
Remind me, which number is bigger - $1000 or $100,000?
Historical_Owl_1635@reddit
It was interesting watching it all unfold in realtime.
Initially the idea of an unregulated currency seemed great; then, as hacks kept happening and people realized they had no protections, they suddenly started wanting regulation and protections.
cbusmatty@reddit
There are already miracles though. It isn't even near its final state, and miracles are happening around testing, documentation, time savings, problem solving. Without even implementing a single thing, there are already miracles that have changed how we do work forever.
PoopsCodeAllTheTime@reddit
People are like "now I'm 20% faster", well some of us already type 20% faster than most, that's not a miracle, just a marginal improvement
cbusmatty@reddit
I’m not talking 20% we have things that could legally never have been built being built securely
SketchySeaBeast@reddit
Jesus Christ, what does that even mean?
cbusmatty@reddit
Sorry it auto corrected, things that we never could have realistically have built
SketchySeaBeast@reddit
How are you validating that these things that could never realistically have been built are being built securely?
cbusmatty@reddit
How do you validate that things are built securely today?
SketchySeaBeast@reddit
I'm concerned you're answering my question with a question. Did you mean to type that into your LLM?
cbusmatty@reddit
No? You're asking a silly question. Why would you ask how we're validating something? We're validating it today; we validated it before LLMs. I'm confused that you don't understand how the SDLC works.
SketchySeaBeast@reddit
Given that AI massively changes a lot of the SDLC I would expect there to be changes in validation, especially as the first step in security is having someone who understands and cares about security write the code. But silly me. Can totally slot a LLM in there without that process changing at all.
cbusmatty@reddit
Literally no one said to "slot an LLM in there". It only changes if you don't understand how to properly secure your SDLC to begin with, which makes sense based on your questions. I mean no disrespect at all; this isn't an LLM issue, this is a lack of understanding of the current enterprise tooling and the safeguards that are in place, which allow us to experiment with LLMs to dramatically improve our SDLC while maintaining a highly secure enterprise environment.
SketchySeaBeast@reddit
You've given absolutely no information here, even though you specifically called out security. Why call it out if you changed nothing at all? Do you just throw words in because they sound good?
cbusmatty@reddit
What information do you want? you asked how you secure software. You do not understand how software is secured. Why would that change at all?
SketchySeaBeast@reddit
No, I asked how you're measuring it, because you claimed you were. Has your process not changed at all? If so, why call it out. Why not call out reliability or performance? You said it was still secure, but there's not just one way to know that - you can do vulnerability checks, code reviews, external pen tests. There's so many possible options, and depending on what you're building you need to meet specific standards. How do you know?
drcforbin@reddit
Obviously the LLM sounds confident that it's right and secure, and they're taking its word on that. No measurement needed, LLM says this blob of code here is secure therefore it is secure, and people believe it.
SketchySeaBeast@reddit
I feel like it's not a crazy question, when someone tells you that they "have things that could [realistically] never have been built being built securely", to ask how they measure it.
cbusmatty@reddit
I have built a ton of things; what proof do you want? We also use the LLM to help implement security features, but as always we validate with Twistlock and Contrast. It's so evident you don't understand how to actually do dev work in an enterprise
SketchySeaBeast@reddit
I'm not asking for proof, I'm asking how you measure. The answer is you use SaaS tools. Got it.
cbusmatty@reddit
No we measure via Jira, we validate via sdlc tools, they are not “SaaS” tools lol
drcforbin@reddit
Exactly. They aren't measuring anything, it's a completely baseless claim.
cbusmatty@reddit
Incorrect. We measure by tagging Jira cards. We have 50x in some isolated cases, especially around graph work we don't have a ton of experience in, mostly around 20x, all secured and validated through Twistlock, Contrast, and Pharos. What in the world are you doing? Crazy behavior
Noblesseux@reddit
Wtf does "legally couldn't have been built" even mean lmao.
Bobby-McBobster@reddit
The fuck are you talking about?
SketchySeaBeast@reddit
It's a "miracle" in the same way debugging and intellisense is a "miracle" - a helpful tool that, when it works, helps your flow, but it's never going to replace the butt in the seat.
cbusmatty@reddit
Yes except 10 times that but sure
SketchySeaBeast@reddit
Sure, a 10x intellisense. I can work with that. I find copilot often suggests way more than 10x the code intellisense would suggest, and sometimes it's even right.
cbusmatty@reddit
Copilot sucks
SketchySeaBeast@reddit
Ugh, OK, sure, we'll keep sliding that goalpost to wherever you want.
Just an observation, but you started with clear thoughts and complete sentence and have become sloppier as time goes on. It's as if you're caught up in some increasingly incoherent AI fugue state, determined to spread the good word of the LLM.
cbusmatty@reddit
You drove a 2000 Camry and said that cars aren’t really that fast. No one is moving goalposts loo
SketchySeaBeast@reddit
"loo" indeed.
That's actually an apt metaphor. The 2000 Camry is around and still kicking, doing its job of getting us from place to place, while the shiny new mustang is going to end up wrapped around a light pole.
Historical_Owl_1635@reddit
AI research constantly stagnates.
2 years ago we were a few months away from AGI, now people have started realising we’re actually nowhere near AGI.
cbusmatty@reddit
You are talking about AI companies hyping and marketing. I'm talking about developers making software; we are nowhere close to the peak
Noblesseux@reddit
Except you're not, the way you just described that is almost exactly the opposite of what's really happening. You're talking about the hype and they're talking about the reality that is very obvious if you're actually a developer.
AI does in fact constantly have boom bust cycles where a thing will get super popular and basically become a buzz word for a few years and then eventually people settle down and it reverts to just being a normal tool.
Also like none of the things you described are "miracles" and a lot of time they're not even really solving problems or even saving much time. A huge number of posts on this subreddit are people being forced to use AI coding tools for work and being annoyed because it isn't practically saving them much of any time.
MistSecurity@reddit
Could blockchain not be used fairly effectively for things like voting, public records keeping, and MAYBE for transferring other types of currency other than crypto?
unreliablenarwhal@reddit
Blockchain actually has tons of uses and is a pretty valuable technology: medical record anonymization, sharing proofs (e.g. diploma verification, automatic contract fulfillment, etc.). But aside from using it to mediate currency, not many of those uses ended up particularly lucrative or worth the investment.
Blockchain would have been really cool if it had been easily leveraged by and pushed for the open source community. Instead it was pushed by banks and finance companies who took a while to realize it was basically useless.
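The diploma-verification use case mentioned above mostly reduces to anchoring a document hash on a public ledger; the off-chain half is just ordinary hashing. A sketch (no actual chain involved, the function name is illustrative):

```python
import hashlib

def fingerprint(document: bytes) -> str:
    """Return a SHA-256 digest suitable for publishing on a ledger.

    Anyone holding a copy of the document can recompute the digest and
    compare it against the published one to verify the copy is unaltered;
    the document itself never has to leave the holder's hands.
    """
    return hashlib.sha256(document).hexdigest()
```

The chain only ever stores the 64-character digest; the document stays private, which is what makes this workable for things like diplomas and medical records.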
Euphoric_Ad9500@reddit
It is starting to flatten but there’s always something around the corner to bring us back down to the bottom of the S-curve. Last time it was chain of thought reasoning. This time who knows
MalTasker@reddit
The sigmoid has been flattening since 2023
SketchySeaBeast@reddit
Certainly the tooling has become more refined, but we've not seen any sort of paradigm shifts like we did at the beginning.
dani_michaels_cospla@reddit
We're in year 3 of it being a year or two away. So yeah, it's not like the signs are yet to come
Connect_Detail98@reddit
I really think AI is the future, but it's a long-term future, not a near-term one. It's still pretty far away from being what we want it to be.
SketchySeaBeast@reddit
I guess my caveat with that is that there's no way to know that THIS approach to AI gets us to where you're thinking.
Connect_Detail98@reddit
I've seen what AI can do and I already like it the way it is. LLMs can quickly diagnose debug logs and point people in the right direction. This means they can be fed monitoring, logging, and tracing information and give a quick read on potential root causes. This happens in seconds, before anyone has had time to process that something is wrong in production.
AI can also help clients navigate the knowledge bases of their products and services. It's much easier to ask a direct question than having to read 10 pages looking for that exact thing.
I've also seen services using LLMs to convert natural language to custom query languages. For example, Jira. I don't want to learn their query language to be able to find specific tickets.
I also saw a demo of a person buying products by talking to an LLM. The LLM would ask questions and recommend products from the catalog based on the inputs.
Maybe this approach to AI isn't perfect, but it is pretty good for now. I'm honestly thankful I got to live to see this. Five years ago I wouldn't have believed someone who told me they could hold a conversation with a machine.
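For what it's worth, the natural-language-to-JQL use case above mostly comes down to prompt assembly: you hand the model the allowed fields and the user's question and ask for nothing but a query back. A minimal sketch (the field list and instruction text are illustrative, and no real API call is made):

```python
def build_jql_prompt(question: str) -> str:
    """Build a prompt asking an LLM to translate plain English into JQL."""
    schema_hint = (
        "You translate questions into Jira JQL. "
        "Valid fields: project, assignee, status, priority, created. "
        "Reply with a single JQL query and nothing else."
    )
    # Ending on "JQL:" cues the model to emit only the query.
    return f"{schema_hint}\nQuestion: {question}\nJQL:"
```

The returned string would be sent to whatever model you use; constraining the reply to "a single JQL query and nothing else" is what makes the output machine-usable rather than chatty.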
Sad_Option4087@reddit
The stock market is like an ADHD kid with a new hobby.
Important-Product210@reddit
No
SketchySeaBeast@reddit
Oh c'mon, if you're going to take that stance at least ask ChatGPT to make more than one word for you.
bill_1992@reddit
As someone who was somewhat tapped in at the time, crypto didn't really start to leave the limelight until ChatGPT took the world by storm, and even then, looking at Bitcoin prices and all the people still rugpulling crypto, it's hard to say how much Blockchain has really fallen off.
The biggest issue is that we've basically reached the limit to how much novelty and value you can bring to the world with web and mobile dev, so VCs shill AI like no other because it's their only realistic hope of 1000x.
And as long as VCs keep passing out the picks, others will keep digging because VCs distribute the cash and make it profitable to dig for them.
So AI hype is here to stay until either a huge market correction in Silicon Valley venture capitalism (aka the bubble bursts) which would be even worse than now for all devs and dev salaries, or the new hotness gets found in which the flywheel of hype spins again and you get tired of the new new hotness.
publicclassobject@reddit
Ah yes, the redditor who's more tapped into AI's potential than Sundar Pichai
SketchySeaBeast@reddit
The difference between him and me is that I'm not trying to get rich off selling you anything that has "AI" written on the side. It's a terrible idea to uncritically trust the salesman.
standduppanda@reddit
In some ways, yes. Others, no. One of the major issues I’ve seen is management spending a stupid amount of time and money trying to figure out whether investing in AI is worth it or not.
krakends@reddit
This whole manufactured hysteria about these fake productivity gains is primarily to extract consent. These systems produce so much bullshit and are completely unreliable but if you create enough hysteria that people will be left behind if you do not subscribe to their bullshit generating chatbot or agentic nonsense, all the sheep will fall in line. The C-suite are experts in manufacturing these ROI and productivity numbers. They do it all the time to pump their stocks. This time is no different.
LessFudge840@reddit
Definitely feels like a bubble but looking at AI researchers getting paid more than popular athletes doesn't seem like a joke. All the big tech & investors are going all in when it comes to AI investments.
No-Rush-Hour-2422@reddit (OP)
But for how long?
Tenelia@reddit
Have you tried calculating the running costs of the training data, the compute resources, and the ongoing deployment of LLMs? Once you do, it becomes extremely clear the incumbents have no way of making a profit. The larger the model, the dumber it gets, and the more impossible it is to earn a profit, given the size.
IE114EVR@reddit
You can add Big Data to your list. It was so hyped as the in demand technology every developer had to know. Now, it’s its own niche not everybody thinks about any more.
tkim90@reddit
Yes, we are. But that doesn't mean it's not useful (certainly way more than blockchain).
Anytime something is hyped, they want you to believe that things will change drastically overnight, like "all software engineers will lose their jobs by 2030!! 😱"
The reality is that things take time to integrate into our daily lives. A lot of time. The value add of LLMs is sticky enough to where it will become a daily part of most people's lives, like owning a smartphone. But by the time it happens, no one will have noticed it. Most of it will probably happen in the background, without people necessarily knowing the app they're using has some AI sprinkled in it.
Alone-Emphasis-7662@reddit
100% agreed! LLMs are black boxes; nobody knows exactly how they work. Companies can't change them to do exactly what they want, because a model is basically a huge set of numbers running a complex mathematical equation. By setting the temperature and introducing some randomness, it spits out different outputs. Companies like Anthropic are investing huge amounts of money just to understand how LLMs work. I feel companies like OpenAI are intentionally misleading people into thinking the model is thinking or reasoning by providing intermediate outputs like "I see the user wants to understand how the input is converted...", but those are generated separately and the final output has no real bearing on them. I too believe there is a hard limit on what LLMs can achieve, but I don't think we've reached the peak of their potential yet.
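The temperature knob mentioned above is just a rescaling of the model's output scores before sampling. A toy sketch (the logits are made up; a real vocabulary has tens of thousands of entries):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=None):
    """Sample one token index from softmax(logits / temperature).

    Low temperature sharpens the distribution (near-greedy decoding);
    high temperature flattens it toward uniform randomness.
    """
    rng = rng or random.Random(0)
    scaled = [l / temperature for l in logits]
    m = max(scaled)                           # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index from the resulting distribution via inverse CDF.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i, probs
    return len(probs) - 1, probs
```

Near zero temperature the top-scoring token is picked almost every time; crank it up and the very same logits produce near-uniform output, which is the "randomness" the comment refers to.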
EuSouAstrid@reddit
We are in a speculative bubble regarding AI company valuations, but not regarding the importance of the underlying technology.
A correction is not only possible; it is likely. When it happens, many of the companies and startups that "AI-washed" their products without a viable business model will be wiped out. There will be significant financial losses for investors who bought in at the height of the hype.
However, the major players and the technology itself will emerge stronger. The end of the dot-com bubble gave us Amazon and Google as titans. The end of the AI bubble will likely cement the dominance of a new (or renewed) set of tech giants and prove that AI is, indeed, the next great technological revolution.
So, the question isn't whether AI is the future. It is. The question is which companies will survive the inevitable market correction to truly build that future.
By: Gemini
ayushkas3ra@reddit
+1
Dziadzios@reddit
Yes... and no. And not yet.
Why yes? Because there's hype that AI will be able to replace every salaried employee sitting at a computer. The capitalists are excited to stop wasting their capital on humans while the money line goes up, but ultimately it's just a programming language. SQL was designed for non-tech people to write queries; now only programmers use it. No-code websites were meant to replace frontend devs; didn't happen. AI is similar. It's just another programming tool, one that enables algorithms with fuzzier logic and more unpredictable inputs. That's it. AI-based algorithms are going to become big (that's the agent hype), but ultimately only programmers are trained well enough to tame AI, so they will still be needed.
No, because unlike stuff like VR or blockchain, it's a really useful tool. It's just like the dotcom bubble: the Internet was a real game changer, but a lot of those companies were still too early.
Not yet because the REAL shit will happen when we'll get to robots. It will mean that programmers will move to algorithms in the meatspace instead of being limited to cyberspace - that's where the revolution will happen. The current state is nothing compared to that.
New_Firefighter1683@reddit
So... TLDR: will AI replace SWEs? Basically something asked on this sub and related subs like 50 million times.
Yes, there is a bubble with investors throwing money at anything AI.
But that's not your question. Your question is... will the bubble burst and they'll start hiring engineers left and right and throw out AI.
What do you think the answer is?
We've already opened Pandora's box. There's no shoving it back in. LLMs are getting better and better at writing code. I went from never using AI for coding to being pretty dependent on it to be effective.
Trevor_GoodchiId@reddit
You’re betting against John Carmack. Don’t bet against John Carmack.
Independent_Grab_242@reddit
What did Master Carmack say? I can't find it.
Trevor_GoodchiId@reddit
He’s been busy
https://youtu.be/4epAfU1FCuQ?feature=shared
motorbikler@reddit
idk. Looks like he's on the TRT/growth hormone train with every other extremely pilled tech guy now. Yes, that does make him suspect.
FinestObligations@reddit
Words to live by. Mf is what people imagine Musk to be.
nycgavin@reddit
this is not a bubble. Just today, GitHub Copilot's AI code review caught 4 actual bugs that a reviewer couldn't catch unless they spent an hour or two reviewing my code
fuzzynyanko@reddit
AI is in the Trough of Disillusionment (FU Gartner). During The Metaverse's Trough, we got similar layoff waves.
No-Rush-Hour-2422@reddit (OP)
I don't think we're in the trough yet. There's still way too much hype around it. But I think we're a little bit past the peak and headed down
m3th0dman_@reddit
AI alone will not replace programmers; at least not until something better than LLM appears. But programmers who use AI will surely replace those who don’t.
AI is more like the internet + mobile, probably even bigger.
4InchesOfury@reddit
I'm finding that most roles being "replaced" by AI are actually being replaced by offshore/nearshore. I'm not actively looking (currently employed) but every time I check out the job board of a company that looks interesting there are very few US roles available but lots in India, LatAm, and Eastern Europe. Humans aren't being replaced, just Americans.
non3type@reddit
At best AI just saves us some time so these posts always seem so odd to me. It never seems like enough time savings to cut an FTE position. At worst I see time lines getting tighter. Any company that tries to cut developers with AI is going to lose market share to a company that tries to do more with AI and their current head count.
Lazy-Past1391@reddit
Been unemployed for nearly a year, hold onto that job!
SketchySeaBeast@reddit
Ah, the other meaning of "AI".
kfelovi@reddit
AI means Abroad Indians
De_Wouter@reddit
Like the AI Amazon retail store without checkout
I_did_theMath@reddit
I think there's a bit of a misconception with that. They used Indian workers while gathering data, hoping to be able to train a model to do it autonomously. It didn't work, so the project was cancelled, but it's not like the plan was to secretly use Indians and pretend it was AI all along.
SketchySeaBeast@reddit
I've heard "Actually Indians", which I like better because it's actually been Indian workers more than a few times.
hgrwxvhhjnn@reddit
Indians plus AI is a CEO’s wet dream
SpaceBreaker@reddit
Americans Ignored
SaaSWriters@reddit
Tech workers in other countries get affected too.
smedley89@reddit
When my company first really dug into AI and how to use it, the training was "You won't lose your job to AI. You might lose your job to someone who knows how to use AI. Learn to use it."
We just treat it like a tool. Basically a local stackoverflow.
BorderKeeper@reddit
My company is actively "consolidating" teams, which makes sense, but since most teams are majority EU this effectively turns into offshoring. Eastern/Central Europe has quality engineering personnel at half the price, so it's a no-brainer.
subma-fuckin-rine@reddit
yea my last job, any backfills had to be out of the country. they mostly let people quit on their own terms but did lay off a few with good severance. but still not ideal overall
forbiddenknowledg3@reddit
Yeah I have no problems getting interviews in ANZ. We're paid 3-5x less than Americans. Those fat salaries never made sense to me.
sawser@reddit
I've recently gotten two offshore "backups" who I've been tasked to train ASAP to "help me with my duties" as I've been working without a backup for a few years.
I'm absolutely going to be laid off as soon as they think they're caught up and so I'm sorta just coasting and looking for new jobs and not finding anything.
Goducks91@reddit
This has been happening for years.
OpenJolt@reddit
It’s happening more now since we aren’t in ZIRP
Complete-Equipment90@reddit
Yes. Offshoring and automation. It’s been going on for decades. AI is useful technically. It’s also useful for companies to advertise as a reason for restructuring, regardless of how much they are actually leveraging AI.
AffectionateTune9251@reddit
Yes, and AI is definitely being used as an accelerant there.
Imagine an “American-ify” LLM that takes an offshore dev’s choppy English and converts it to perfect business English.
Or an LLM that can clean up sloppy code (common complaint with offshore devs).
gangolfus@reddit
Grammarly?
AffectionateTune9251@reddit
Yeah and prettier
Still, LLMs make this stuff way more seamless.
jrolette@reddit
This AI cycle we are in today is changing the industry and will continue to do so. It's at a similar level as the transitions to desktop computing, the internet, and mobile. Era defining.
Are specific claims about it overhyped today? Absolutely.
Fragrant_Gap7551@reddit
Yes but also no. It's a bubble yes, but when it bursts things won't be like they were before.
mattgrave@reddit
No, we are not. Compared to other bubbles, this one solves real problems. You can now achieve mediocre results at almost zero cost compared to hiring someone to do the job. This is the kind of technology that makes me wonder how much it will achieve in the next ten years.
Loose-Wheels@reddit
I think a lot of the people pumping AI have a deep financial incentive to do so. Many tech stocks are being entirely driven by AI hype right now, so of course all their CEOs are going to tell you it's the future and going to change everything.
Matthew_3i94038@reddit
That's a sharp point, and I really appreciate your perspective. It's important to recognize the financial motivations behind the AI narrative, especially when so many tech leaders stand to benefit.
Loose-Wheels@reddit
A lot of CEOs and business leaders, much to our dismay, are really just playing a game of follow the leader. "they must know something I don't, so I'll do what they're doing" seems to be the behavior of basically every tech company right now. Apple seems to be one of the few second-guessing or at least pumping the brakes on the AI ambitions (Apple Intelligence rollback/delay)
sod1102@reddit
It's big chipset, man. The GPU-elites.
No-Rush-Hour-2422@reddit (OP)
Great point
der_struct@reddit
The funny part is that coding is one of the worst things it can be used for.
adambahm@reddit
Yes and no.
Yes, because biz dev folks don't understand how to capitalize on it, leading to a ton of wasted start-up money.
No, because it’s better suited to enhance developer productivity and should be the new normal of writing code. LLMs do the donkey work I hate doing as an engineer.
JimDabell@reddit
There are vast numbers of people who believed in blockchain because they didn’t understand it and fell for hype. These people are now doing the same thing with AI.
There are also vast numbers of people who blindly hated blockchain because they didn’t understand it and saw it as a tribal war. These people are now doing the same thing with AI.
Meanwhile, for the people that understand how these things work, blockchain and AI are very, very different. Blockchain is mostly a solution in search of a problem. There are some interesting use cases, but they are extremely rare. Same goes for cryptocurrencies. Same goes for NFTs. Same goes for VR. You’ll see the same people jumping on the hype train and the same people hating the hype train, and the same lack of understanding in both.
Whereas when generative AI became mainstream with ChatGPT, it was immediately valuable with lots of obvious use-cases across many industries. It is clearly not the same type of thing. And you can see that with its proponents. A huge number of people calling cryptocurrencies, blockchain, NFTs, VR, etc. bullshit are nevertheless strong advocates of AI. A huge number of domain experts are saying AI is extremely valuable.
If you think AI is like blockchain, then you’re firmly placing yourself in the blind hater category. They are not only very different, but obviously so. Huge numbers of reasonable people – including domain experts – can see the difference; why can’t you?
Recent studies have not proven this, the Apple study was blown out of all proportion, and “intelligence” is a perfectly reasonable word to describe what is happening.
No, the biggest difference is that domain experts can use it and say “this is useful; this helps solve some of my problems; I will spend money on this”.
You are using “corporate executives” as a thought-terminating cliché; a placeholder that means “stupid person you should ignore” that prevents you from asking why people want it. Replace “corporate executives” with “principal engineer with 25 years experience” and your point fails to work. And yes, there’s plenty of those people about.
Very few of them are doing this. This is what low-information influencers and bloggers masquerading as journalists speculate whenever any layoff happens these days, but the companies themselves often either don’t comment on the situation or they give other reasons (like “flattening org structure”).
There are plenty of experienced developers doing hands-on work who are saying that AI is great. Here are some examples:
At this point if you are going to call AI “a scam” like you are, then the onus is on you to explain why all of these experts are either lying or deluded.
No-Rush-Hour-2422@reddit (OP)
Those are all good points, and I'm going to read all those links so I can see if I can use AI better, so thank you very much.
I do think that your point about AI having clear use cases is not 100% correct though. I can see how some people are finding ways to use it, but I don't know if the use cases are clear to everyone. It has been my personal experience that people say "our product needs AI" and then try to invent a use case to shoehorn it in, because that's what investors want. New tech shouldn't have to be forced; it should just naturally make sense to use it.
cloudsquall8888@reddit
You are mistaken if you think high-standing members of huge companies don't know that AI is BS. They just partake in the theatrics, with the sole purpose of devaluing your work.
There is honestly NOTHING else really going on with AI. Just the promise that developers' cost will go away.
No-Rush-Hour-2422@reddit (OP)
That's a really good point
hippydipster@reddit
I've been programming since the mid 80s, and been a professional programmer since the mid 90s. I have not been "constantly told that AI is going to replace me". What is that? Some sort of weird revisionism? Are we digging up randos for this? It's only since GPT-4 that such a thing has really been talked about.
No-Rush-Hour-2422@reddit (OP)
I guess I should have added "recently". As in since GPT-4
airoscar@reddit
LLM is very useful as a knowledge retrieval tool, and solves a big problem across many industries: unstructured information querying.
But it does not have good enough reasoning to replace human workers, ML researchers today are even debating if LLM is doing any actual reasoning at all or just piecing together information from its trained examples to appear so.
For that reason I think it has huge potential to change how work is done. Some services will evolve and make good use of LLMs' unparalleled ability to surface information from unstructured data, while others will become redundant or obsolete and die. As a result, it will affect workers employed by those services. But at the end of the day, there will always be people.
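To make the "unstructured information querying" point concrete, here's a deliberately crude Python sketch: no LLM at all, just keyword scoring over free-text notes. The document names and queries are made up; real systems use embeddings or an LLM for the matching, but the shape of the problem is the same:

```python
from collections import Counter

# Toy corpus of unstructured notes (made-up stand-ins for emails, tickets, wiki pages).
docs = {
    "ticket-42": "login page throws 500 error after password reset",
    "wiki-auth": "how authentication tokens are issued and refreshed",
    "email-ops": "deploy failed because the database migration timed out",
}

def score(query: str, text: str) -> int:
    """Crude relevance: count occurrences of query words in the text."""
    words = Counter(text.lower().split())
    return sum(words[w] for w in query.lower().split())

def search(query: str) -> str:
    """Return the id of the best-matching document."""
    return max(docs, key=lambda d: score(query, docs[d]))

print(search("why did the deploy fail"))  # -> email-ops
```

The hard part an LLM actually solves is matching on meaning rather than exact words, which is exactly what this toy version can't do.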
differential-burner@reddit
There's a tendency to conflate AI and LLMs; they're different things. Are we in an LLM bubble? Definitely. Are we in an AI bubble? Well, we've had AI for decades, so I feel like that aspect isn't going away. What about software dev replacement? It's based on overconfidence in LLM reasoning, which doesn't actually exist, because LLMs do not reason. So yes, my prediction is layoffs will continue; I'm just not so sure that when the devs get rehired it will be in wealthy countries.
the_jester@reddit
Some of you haven't seriously seen a Gartner Hype Cycle before, and it shows.
AI does and will continue to generate "real" value by whatever metric you choose. However, the investment in it (measured in either literal VC dollars and/or expectations) are so high there is basically no realistic way to have them met, either. I think we are approaching the "peak of inflated expectations".
Own-Chemist2228@reddit
It's also important to understand that the Gartner Hype Cycle can have different proportions depending on the tech, and the "Plateau of Productivity" is not guaranteed.
Also, different segments of the market may experience different cycles for the same tech. Crypto is an example of this. In some ways it is stuck perpetually in the Peak of Inflated Expectations, while others would say it's already in the Trough of Disillusionment. And there are strong arguments that it will never be productive, at least anywhere near the scale of the hype. Or that the hype is the actual product...
ByeByeBrianThompson@reddit
Crypto is stuck in an infinite loop, constantly oscillating between peak of inflated expectations and trough of disillusionment.
MINIMAN10001@reddit
Crypto is being driven by a speculative market as opposed to market forces demanding usage for crypto.
It has its uses in some third-world countries, but not to the extent that investors are buying into.
syklemil@reddit
I think blockchains & NFTs actually found their niche, and have kind of stabilized as a grifter tool. As long as there's a good enough supply of suckers that'll part with their money for that shit, there'll be people taking their money. It's kind of the inverse of the xkcd about today's lucky 10000.
At some point maybe regulation will catch up.
carsncode@reddit
Right, I think crypto is largely past the peak, and will likely never exit the trough, because it was pure speculation with no underlying utility.
mattia_marke@reddit
so essentially another winter?
SketchySeaBeast@reddit
I love how it's a concept that Gartner is trying to sell people on and monetize with their conferences, even though real tech often doesn't work out in a cycle like that (as per the criticism in the article).
scarylarry2150@reddit
I think this is kind of what we’re seeing with Vercel and their v0 product. It was super hyped and widely praised initially, but now that they’ve bumped the pricing on it it seems like sentiment towards it has done a full 180. I think so much of the current AI ecosystem value is inflated because most big investors are willing to subsidize cash-burning ideas in the pursuit of “growth”, but once that music stops and users realize how expensive it is to run AI at scale the value disappears pretty quickly
No-Intention554@reddit
This is how many previous tech bubbles have played out in the past as well.
No-Rush-Hour-2422@reddit (OP)
Oh that's really interesting. Thanks!
SkyMarshal@reddit
More like a plateau. LLMs won't get us to AGI without some additional foundational breakthrough in machine reasoning. LLMs will provide the voice of an eventual AGI, but not its mind.
Personal-Status-3666@reddit
Wrong comparison. AI is actually useful: in the case of software, it performs better than humans on small, tactical problems that had a lot of training data.
Sadly, as it is, it has de-leveraged the SE position on the market.
Will it replace SEs? Not with current tech.
So, to answer your question exactly: we are not in a bubble, we are in the find-out stage.
meevis_kahuna@reddit
I'll go against the grain here and say that no, this isn't a bubble.
All of the criticisms I'm reading here about AI are absolutely valid, and GenAI is unlikely to replace devs anytime soon. But it will be a war of attrition: as AI gets better, it will force hiring and wages down. Juniors are feeling the heat; it's a tough market right now, and AI is playing a role.
Meanwhile, AI will be continuing to gradually displace other white collar work. Accounting, IT, pencil pushers. It will be an incredibly destructive force on the current economy.
Unless there is regulation, or some public backlash, corporations will always choose capital over labor expense. Look at the new AI staffing at Taco Bell and Starbucks. This is just the beginning, agentic AI is absolutely the future.
rayred@reddit
Appreciate you going against the grain. But there is no evidence to suggest that what you are saying is and/or will happen.
The thing I don’t like about the productivity argument (i.e. you saying that it will force hiring and wages down) is that development of software is extremely slow. Painfully slow. Goes at a snail's pace. Which is to say, the business roadmap at any tech company outpaces the development process exponentially.
Companies don’t cap the number of engineers based on a saturated road map. It’s capped by operational expense limitations.
Meaning if you 2x’d your engineering orgs productivity - the demand would remain constant. And, assuming the product roadmaps were well done, it would bring in more capital and ultimately allow the org to hire more engineers (increasing overall demand).
What’s interesting to me is that before the AI hype this was a commonly understood phenomenon. Just ask your PM what ideas they want to implement :)
As for the juniors side of your statement - again, not clear AI is playing a role at all. Google’s best estimate is that they see a 10% productivity boost. And they have a fleet of infrastructure and AI research at their disposal - and they have a pretty strong incentive to put out their most attractive numbers.
What seems more likely to me is that we are in the recovery phase from the COVID / boot camp / overhiring era. And I think the data correlates. (Interest rates / inflation / job openings / etc.)
meevis_kahuna@reddit
My comment wasn’t solely about software development, it’s about broader economic displacement across white-collar roles as AI capabilities and cost incentives advance. Agentic AI is already replacing operational roles in fast food and logistics, and more industries are likely to follow. This isn’t about hype, it’s about structural incentives and early signals pointing toward labor substitution. I don't disagree that broader economic conditions are also a factor.
rayred@reddit
Oh okay. You prefaced it with "GenAI is unlikely to replace devs anytime soon" and this is in r/ExperiencedDevs , so forgive me :)
Xelynega@reddit
It's the "as AI gets better" part that people are questioning, not whether or not better AI would impact society.
As far as most people in this thread seem to believe (me included), the technology isn't replacing developers in any meaningful way. If anything, it's caused companies to stop hiring as many juniors (along with a recession also reducing hiring), but I haven't seen any evidence that GenAI is going to get significantly better any time soon.
Do you think GenAI is boundless in its utility, and we just need to improve it?
meevis_kahuna@reddit
I think we are just scratching the surface of agentic LLMs. Even if that is the only change it will be a big deal.
Even if we plateau in model size and accuracy, there will be renewed efforts to address other limiting factors like context windows, attention mechanisms, hallucinations and so forth. Superior training techniques will emerge.
I see no reason to believe that AI development won't proceed over the next several years. Boundless is too strong a term, but if you look back at the pace of AI development over the last 20 years, it seems foolish to think we've reached the end of history.
FinestObligations@reddit
It’s so weird that people think this is the plateau 😆
It’s like people seeing the power of a Pentium 1 and go ”surely we cannot go faster than this!”.
meevis_kahuna@reddit
Yes, exactly. It's like those news articles saying the Internet fad is over in 1998.
It's copium. A future with advanced AI is incredibly uncertain and has a real risk of economic displacement. Is it coming next year? Maybe, maybe not. But it's coming, major changes seem inevitable in the next 5-10 years.
MsonC118@reddit
I'll bite.
My thought process is that our job has a much higher impact and skill ceiling. Tiny errors can cost companies millions. Also, it's much more structured than something like an email or report. Out of all the jobs that could be replaced, it'd happen from the bottom up. Therefore, jobs with a lower skill barrier are more likely to be entirely replaced than those with a higher skill barrier. This leads me to my conclusion, which is that if junior developers are getting replaced, then developers should be the last people to worry. LLMs are much more likely to replace all the other junior jobs in nearly every field (white-collar work specifically): accounting, law, IT, customer service, etc.
I believe this is all a classic "shift the blame" situation. When you look at who benefits from the "AI is replacing software developers" narrative, it starts to make sense. Why would big tech shoot itself in the foot? If I were them, I'd say AI has replaced as many people as possible. It's all about money, and survival at its core. Just look at how the stock market is rewarding big tech for layoffs marked as "AI-driven layoffs". Try a simple thought experiment: what would happen if big tech came out and said, "These are just regular layoffs. LLMs have improved productivity by 20%, but we're cutting costs by more than that to improve our profitability"? That doesn't fit the narrative that "AI is replacing people". When you start to dissect who would benefit from that narrative, and what the benefits are, I believe it will give you some clarity.
One last point: just because it's a bubble doesn't mean it doesn't provide value. I've talked to quite a few people about LLMs and this bubble, and nearly every time I mention that it's a bubble, I get pushback about how it's providing value. It's a bubble due to financials, not technological advancements. The valuations and grandiose views from VCs, Wall Street, tech bros, and more are not in line with the value that LLMs provide. That's the point I'm getting at. Look at how much value OpenAI provides, yet they're losing billions annually and keep having to raise money. Imagine how much a subscription would *actually* cost for OpenAI just to break even. The point being: sure, maybe this works out, but history is usually doomed to repeat itself, simply because humans are humans; it's the same pattern we see in conflicts like war.
Napolean_BonerFarte@reddit
Software developers are one of the few roles whose entire purpose is to produce tangible output. Additionally, they are very much viewed as a cost center at companies where tech is not the primary business. Most other roles are nebulously defined and differ hugely in scope and responsibility from company to company, and even within the same organization. It's much harder to imagine replacing, for example, an Operations Analyst, because most people cannot even define what that person actually does at any given company. In the minds of most people, developers write code all day, which is just text after all. Therefore LLMs can do it faster.
Also, so many roles really only exist to increase the headcount of some upper manager's fiefdom. They have no interest in replacing those roles with AI, because the entire point of those roles really is to be able to say "I manage an organization of X headcount successfully, therefore I deserve to be promoted to COO."
meevis_kahuna@reddit
I suppose it's fair to say that the dot com bubble didn't invalidate the utility of the web tech stack.
With that said, a company's debt alone doesn't determine its share price. Netflix and Twitter lost money for decades yet they retain huge market caps.
There's probably some middle ground here in which we can agree that the hype around AI is overblown but the technology is nevertheless disruptive and likely here to stay.
s0urpeech@reddit
Thank you for speaking up. I find any time this topic comes up there is no nuance from other devs.
meevis_kahuna@reddit
I'm an AI/ML engineer. (Don't shoot the messenger.) My job includes these discussions.
I think there is rightfully a lot of defensiveness in the dev community, especially with some outlandish comments made by Zuckerberg and Musk about using AI to replace mid-level devs in 2025. There is clearly a discrepancy between the hype and the reality. On that I'm fully agreed.
That doesn't make the situation a bubble. Yes, the hype is over-inflated. But there won't be a "bust"; the functionality will catch up to the hype very soon.
s0urpeech@reddit
Figured. I too am an AI/ML engineer and software dev hybrid having been tech lead in both domains. I know exactly the kind of convos you’ve had because I’ve had them too. Not going to claim I’m an LLM expert but I keep myself updated on how the architectures work. Knowing what we know it would be incredibly naive to think we’re irreplaceable
meevis_kahuna@reddit
It's not just that we're replaceable; they are incredibly motivated to replace us. For large tech companies, we're their biggest expense.
Forewarned is forearmed.
256BitChris@reddit
If you have any doubt as to whether the hype is real or if you'll be replaced by AI, then you haven't used Claude Code with Opus.
The technology is already there; it just needs to become more widely adopted. I've written over two months' worth of production-quality code in the last week with it. My only hope is that other people either refuse to adopt it or won't be as good as I am at prompting.
Vegetable_Wishbone92@reddit
I don't believe you.
256BitChris@reddit
Don't take my word for it, go look over at the Claude subreddit, it's full of similar stories unlike the Cursor subreddit which is more reflective of how things used to be.
NoleMercy05@reddit
They just don't know
FormalShibe@reddit
Obviously yes.
SableSnail@reddit
I think the automation stuff where it’s going to apparently replace loads of developer is a bubble.
But I’ve already used it to replace Google, StackOverflow and even some Reddit subs I’d use for coding help or product recommendations.
Even if all the LLMs end up doing is displacing Google Search, that's an absolutely seismic shift in the tech landscape.
Xelynega@reddit
How does this work long-term though?
GenAI is trained on people interacting and solving problems on the internet, and then documenting them.
When people only go to GenAI and not to forums for answers, where will the answers come from?
SableSnail@reddit
That’s a good point. Honestly, I don’t know.
But I think people will switch to LLMs because in the short term it’s much better than the ad infested internet with all the sponsored links and so on.
verzac05@reddit
Forums and search engines don't have to go hand-in-hand, so it's very much possible that search engines' use cases could be replaced by LLMs, with forums still maintaining their relevance.
You'll still have people discussing over problems in forums (my favourite being Github repos' Issues), because humans are social creatures that love to collectively problem-solve (or whinge about an issue hehe).
On the other hand, people will still use search engines to discover communities, but they'll likely switch to using LLMs to solve problems (so way less "how do I make an omelette when I do not have milk" queries on Google Search).
To put it simply, information exchange will most likely change like so:
1. Collective problem-solving <-- still exists in forums and forum-like platforms
2. Discovering facts <-- still stays in Google, e.g. what year the Honda CX5 was created
3. Getting solutions to a specific problem <-- mostly moved to LLMs, especially for things that can be easily verified through surface knowledge of the subject matter (e.g. if you're a home cook you can probably verify a recipe outputted by ChatGPT)
4. Verifying critical solutions <-- might still need Google Search for this (just to discover official documentation and whatnot)
If I recall correctly, LLM trainings are still heavily supervised as their input data is curated, hence why it's less likely to spew out garbage than, say, a random Google Search. On the other hand, anyone can list their content on Google Search without much moderation. But I'm not sure if the current method of LLM supervision is financially sustainable or scalable...
Disclaimer: I wouldn't call myself an expert in developing LLMs.
DrXaos@reddit
Good quality data will be removed from open internet. You will have to pay for an AI which is trained on good stuff. The free AIs will be trained on garbage.
Already the major LLM companies are paying people from their investor's money to create proprietary data. At some point the investors will stop subsidizing (like they stopped subsidizing Uber and fares tripled) and you will have to buy it expensively.
BigCardiologist3733@reddit
no, AI is only getting better and better
TenchiSaWaDa@reddit
Cost and customers will determine who dies. The technology, I think, will progress at a steady rate and be more ingrained in life, far more than blockchain and VR. It also has the added benefit of mainstream appeal.
On the other hand, companies will go from doing OK to bankrupt based on the whims/pricing of these models, and that will be dangerous both in terms of the market and developers. If you use AI heavily, that reliance can be expensive.
Cool_As_Your_Dad@reddit
I think its a bubble too. Everyone is buying into the hype or “business will fail”.
They slap AI into everything. Waiting for my toilet paper to be AI improved by now.
JohnWangDoe@reddit
I see a lot of conmen selling to small b2b with low code ai
Cool_As_Your_Dad@reddit
Yip. Then you get vibe coding too. So much wrong with it, yet people think it's the way to go.
It's crazy
bluemage-loves-tacos@reddit
I'm kinda having some fun doing vibe coding (at home, not a production codebase) right now. It's frustrating to get things right, and if I didn't keep course correcting things it would be unusable already, but it's currently interesting to learn about what the AI can actually do. I'm seeing what rules I can put in place to get it to follow the "right" patterns as I'd like to see just how much complexity I can force out of it without a complete dumpster fire.
I really do agree that it's the wrong way to go, but since people keep doing it, may as well understand it properly to have a comeback for *why* it sucks and why it's banned from production code
whisperwrongwords@reddit
Duct tape and good intentions are already most of what's holding up the house of cards that is software. This is just going to tip it all over the edge.
airhart28@reddit
My new washer/dryer is "AI powered". It weighs the clothes before washing... That is apparently "AI".
guns_of_summer@reddit
Lol. At my job, we had this discussion with the product people where they were trying very hard to get the engineering team to refer to this algorithm we have for identifying rapid changes as "AI". It's not that, it's just a simple algorithm.
Cool_As_Your_Dad@reddit
AI thermal paste lol
https://www.reddit.com/r/pcmasterrace/comments/1cze06f/ai_thermal_paste_is_here_guys/
yellow-hammer@reddit
An economic bubble, but not a technological one. Dot com bubble all over again.
Historical_Owl_1635@reddit
So much software that already exists is being credited to AI too.
There was a top rated comment a while back in another subreddit that they want AI for things like syncing their audiobook and kindle book to the same page. A feature that’s already been available for literally years.
willcodejavaforfood@reddit
I’ll invest $2000000000 in this AI toilet paper
Cool_As_Your_Dad@reddit
Trust my toilet paper bro
Irtexx@reddit
I'm not a fan of the "next word predictor" reductionist argument. Yes, that's part of it, but emergent behaviors exist.
Some videos worth watching:
A well-respected AI researcher/engineer (he worked on the backpropagation algorithm, the process that has been used to train most neural networks, including LLMs, for the past few decades) describes current multimodal chatbots as having "subjective experiences". He also believes they do "think", but that our understanding of what "thinking" is isn't useful. It is different between AI and humans, but that doesn't make it any lesser: https://www.youtube.com/watch?v=giT0ytynSqg&t=3663s&ab_channel=TheDiaryOfACEO
A deep dive into how LLMs work. You can see that the next token prediction is only the final step. The stuff that happens before that is much more impressive: https://www.youtube.com/watch?v=wjZofJX0v4M&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi&index=6&t=925s&ab_channel=3Blue1Brown
For a balanced argument, here's someone who also sometimes uses the "next word predictor" reductionist argument. But even in this video, she highlights some interpretability studies that show circuits emerge that resemble reasoning that humans do. (This is different from "reasoning" models like O3): https://www.youtube.com/watch?v=nMwiQE8Nsjc&ab_channel=AlbertaTech
The Apple "Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models" is just about these "reasoning" models - the ones that essentially self-prompt themselves a few times to produce a better answer. It doesn't prove that there's "no intelligence involved whatsoever," as you say. Also, Apple are very behind on LLM tech, so they have a bias to make it sound less impressive.
FinestObligations@reddit
Yes and no.
Unlike Blockchain it actually has merit and does something generally useful.
It will settle down a bit though and just be something we use regularly like search engines.
WorldyBridges33@reddit
"LLMs are just advanced auto complete. They are given huge amounts of written words, and they use that data to guess what the next word should be when it's writing out it's answers."
- Isn't that what we do as humans when we learn languages? As babies, we are given huge amounts of spoken words as input. Our brains pick up the patterns of these words, and then when we are 3 we can speak the language, because our brains accurately predict what the next word in a sentence should be to convey meaning.
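For what it's worth, the "guess the next word" idea can literally be written in a few lines: a toy bigram model, nothing like a real transformer, but the same predict-from-observed-text principle:

```python
from collections import Counter, defaultdict

# Tiny corpus standing in for the "huge amounts of written words".
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# The entire "model" is a table of which word follows which, and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word: str) -> str:
    """Guess the most likely next word from observed frequencies."""
    return follows[word].most_common(1)[0][0]

print(predict("on"))  # -> the ("on" is followed by "the" both times it appears)
```

The debate is over how much more than this is going on inside an LLM, not over whether next-word prediction is the training objective.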
YamahaFourFifty@reddit
AI is meant to replace low-level jobs.
I mean, the lack of problem-solving skills in the lower workforce is kind of scary.
That's the workforce 'AI' will replace in 1-3 years. McDonald's and Dunkin' are already replacing the people who take orders inside with kiosks, a form of low-level AI.
And AI will be used to improve efficiency; it's going to make teams shrink a lot, which is why we have seen a lot of layoffs in tech sectors.
Icy_Computer@reddit
This is what concerns me the most. We have people who do not understand what the technology can and cannot do thinking it has actual intelligence. We've already seen cases of inappropriate denial of care by AI. What happens when they give AI control of drones and it hallucinates and thinks a school is a tank?
The sooner this bubble bursts, the better. At least then attention will shift to what LLMs can actually do instead of overhyped claims of AGI.
certainlynotunique@reddit
Yes, this is obvious to anyone on the ground. I have worked and seen "AI projects" that are in no way anchored to reality because decision makers don't understand the technology, and they end up being a complete waste of time and money. When things calm down, we'll have some really cool tech, but also billions of dollars lost by clueless executives throwing money at a hype train.
thegreatcerebral@reddit
So you are 100% correct that "AI" is an incorrect term. That's why when I read the buzz articles about "AI threatened to tell on husband to wife about affair when he threatened to turn off AI"... I just don't buy it. What I buy is that people that are extremely passionate about AI or it being THEIR PRODUCT find ways to stay MORE relevant.
Right now AI is basically the digital half of that thing that everyone wants, a personal assistant. Take a simple task of looking up a phone number. If we can just ask something else to do that for us it frees us up from that task and we have our number. This is the sweet spot for AI for the masses as I see it. I think maybe Apple sees it also but the problem with Apple is well... Apple.
If each family could have their own personal LLM/AI that holds everything they do (all their emails, all their receipts, all their accounts, all their EVERYTHING) so that at any time you can ask "when was the last time we bought milk?" while driving to the grocery store, that is prime. Being able to pull up photos and ask actual questions like "find me all the pictures of Billy at baseball", and have it actually SEE the pictures and figure out that DSC001000102.jpg and DSC48317.jpg and DSC04875923.dcim or whatever are all baseball pictures, so you don't have to sort through thousands of photos, is amazing to think of.
Right now, I can't even get Alexa to reliably run the routine "Turn off the lamps" which I have tied to all of my lamps and right now there is about a 40% chance that I will get "Hmmm there are things that share the name lamps, which are you asking about?" why? NO CLUE! Or my wife was literally inches away from the Alexa Show and said "Alexa set a timer for one hour" and nothing. Does it again and faintly we hear from across the house two rooms over "A second one hour timer has been set on..." like COME ON. I check the app and it just says "Audio not intended for this device". I can't even tell Alexa "Alexa turn off the kitchen light, the bedroom light, and the hallway light". There is a 0% chance all of them turn off, 80% chance one of them turns off, 40% that two of them turn off, and about 20% chance I'll get a "Hmmmm I don't understand..." or similar return.
For businesses, being able to take lots of written records and correlate them so a human can pull out the right information will be huge. I really want to set up a RAG. We are in manufacturing and have been around for 50 years, so we have seen shit. It would be great to ask an AI: we are using this horizontal CNC, working with this material with this bit, we are having this issue, have we seen this before? Letting AI become the equivalent of that old guy who has worked here since inception, who remembers everything and will say "yeah, we did, and we solved it by running 200rpm more on the spindle for 20 pieces, then we could slow it down 100rpm for the remaining", is almost priceless.
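A minimal sketch of the retrieval half of that RAG idea, using plain bag-of-words similarity over hypothetical shop-floor notes. A real setup would use embeddings, a vector store, and an LLM to summarize the hits; all note text below is made up for illustration.

```python
import math
from collections import Counter

# Hypothetical shop-floor knowledge base; in a real RAG these would be
# embedded and stored in a vector database, with an LLM writing the answer.
notes = [
    "Chatter on horizontal CNC with 6061 aluminum: raised spindle 200rpm "
    "for first 20 pieces, then dropped 100rpm for the rest.",
    "Coolant pump failure on line 3 traced to clogged filter; replaced quarterly since.",
    "Tool wear on titanium jobs reduced by switching to coated carbide end mills.",
]

def tokenize(text):
    return [w.strip(".,;:").lower() for w in text.split()]

def score(query, doc):
    """Cosine similarity between bag-of-words vectors."""
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    dot = sum(q[w] * d[w] for w in set(q) & set(d))
    norm = math.sqrt(sum(v * v for v in q.values())) * \
           math.sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

def retrieve(query):
    """Return the past note most relevant to the question."""
    return max(notes, key=lambda n: score(query, n))

print(retrieve("We are seeing chatter on the horizontal CNC with aluminum, any history?"))
```

Even this crude word-overlap version surfaces the right note for the CNC chatter question; the LLM's job is then just to restate that institutional memory in plain language.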
As far as straight automation and such, we aren't there. Those who have been working with AI can tell you how easy it is to get stuck in loops with it. "I have 4 apples, how can I cut them to feed 8 people" returns "give them each an apple", and then you go "I told you I only have 4 apples"... "Oh that's right, I'm sorry. Thank you for reminding me. In that case let me come up with a way to use 4 apples to feed 8 people. So we will want to take 8 apples..." /facepalm. No, it will not tell you that for something that simple, but you can get into loops like that real quickly, and then you have to bounce out to a new chat. This is why I fear for people who use it for coding and rely on it too much.
For those sasquatch videos and the Cowboy fans videos... it's perfect and should not be "improved" any further lol.
creaturefeature16@reddit
Seems like it, but the pace is pretty staggering. I guess we'll find out in a year or two. Maybe we're all just drunk on our own hubris and software/coding/development really isn't all that hard to effectively automate, and clean code with consistent design patterns is kind of a waste of time. If so...oh well, I suppose?
ap0phis@reddit
“Clean code with consistent design patterns is kind of a waste of time” is one of those sentences that people are afraid to say out loud but man does it seem true.
creaturefeature16@reddit
If it compiles, if it runs, if it's able to be optimized...does it matter how it looks? And if making changes to it can be done with AI, does it matter even less?
Or perhaps: if it gets results, why should a company care?
This is the grand experiment that we're embarking on.
teslas_love_pigeon@reddit
A company should care because their workers will care. There's a reason why certain industries struggle to find devs and often rely on poorly implemented low-code solutions.
Devs don't like working on poorly designed systems. Systems that are designed well are extremely easy to understand, modify, and delete.
We aren't experimenting with anything; acting like we as workers have a choice in what gets implemented at the company level is a fantasy. Often the actions that do get chosen are to the detriment of the company's own workforce.
You have to stop seeing business owners, which is what all the leadership at big tech are, as these smart beings that know more about the world than you.
They're human and they all suffer from hubris the same.
AccordingWarthog@reddit
Human workers may care, but AI doesn't.
Napolean_BonerFarte@reddit
I could not think of a less true statement. Consistent design patterns are one of the most important aspects of a codebase that make it maintainable. You need to be able to look at a new part of the code and make (safe) assumptions about how it works, and you can only do that if there are consistent patterns in your codebase.
phao@reddit
> Now I'm getting that stuff done and regaining some free time.
Would you mind giving an example of this?
creaturefeature16@reddit
Very tiny example: I have a stack of tasks on my plate today, one of the things being that I had to refactor a navigation script that we use in our framework to meet accessibility standards; it needs numerous updates and improvements to ensure full compliance with WCAG. The script is fairly baked at this point, and the work needs to be done with care so we don't break previous instances of it. It's not a hugely complicated job, but it was going to take a few hours to dissect it and put it back together again. I don't have hours to dedicate to it this week, though. Our other dev didn't feel comfortable/capable doing the work, and I don't want to hire a contractor for such a minor job.
I provided it to Claude Code, along with the full requirements of changes needed, along with some WCAG docs. Not only did it complete it within minutes, but it adhered to the same pattern I was using, down to the specific variable naming convention and tapping into some of the existing functions that were already a part of the script.
I was able to upload it to a number of test sites and provide a link to the client to review, and then was able to step away from my desk, knowing that task was done, freeing me up to do whatever I wanted with that extra time.
This is just an example that is top of mind from today, but I have countless others and while they are generally small tasks (I don't really like assigning big coding tasks to LLMs; too much code review), this adds up. And would have normally been work that was abundant enough that I would bring in a junior dev or possibly a paid intern...but I haven't even thought about it for well over a year because that work is being done by these tools.
phao@reddit
Thank you very much.
pippin_go_round@reddit
I'd love to experience this! How can I do so, when compliance prohibits me from putting any production code into non-company owned servers? It would be amazing!
creaturefeature16@reddit
Side projects? It's not hard to find ideas to code. Just think of anything you want to build. I've written numerous small personal applications that I use on a daily basis and they took me hours to build, vs. days or weeks.
cockNballs222@reddit
I think you’re overestimating the “difficulty” or “intelligence required” for a typical white collar job; the job description for a lot of them can be boiled down to pattern recognition and some data organization, something current AI is already great at. We don’t need AGI to completely re-organize the job market.
zayelion@reddit
It's a bubble. The hype is pulling in investors, and it's giving tech companies an excuse to dump labor: keeping and poaching high performers while forcing everyone below senior-level talent out. Long run, this is a disaster. Short term, it gives space for needed automation systems to kick in for business survival, but the hype has many CEOs basically lobotomizing their companies of their trade secrets.
Many companies are escaping the US, and this is just a guise. It's not a bad thing, it's just dishonest, allowing people with share options to collect extra cash.
DigThatData@reddit
you're basically right, but the reason they hired too much was the Trump 2017 tax breaks, which let companies treat all of their engineering headcount as a tax write-off. In 2022, that tax break expired and all of that headcount flipped from being a write-off to a liability.
basically an economic time bomb.
https://www.journalofaccountancy.com/issues/2022/nov/amortizing-r-e-expenditures-under-tcja/
CroakerBC@reddit
As a slight aside, the repeal of the R&D tax break has just itself been repealed, so that could be a fun one.
DigThatData@reddit
republicans do love to play calvinball with the economy.
mightnotbemybot@reddit
No you have the timeline wrong. The tax treatment for R&D was created in 1954, not 2017. The law undoing it was enacted in 2017 but took effect in 2022.
DigThatData@reddit
The TCJA didn't invent or repeal the treatment for R&D.
What changed in 2017 was treating all SWE headcount as R&D, which is stupid since it is normal operational headcount for contemporary businesses and obviously an abuse of the write-off.
The_Northern_Light@reddit
Well, I keep having people who don’t know math, much less machine learning, trying to tell me that AGI in five years is the conservative estimate, when it’s probably actually right around the corner…
So what do you think?
engineered_academic@reddit
We are in an AI bubble until C suite executives pushing this realize what AI is actually good at is C-suite level stuff: Abstract thinking with lofty goals that have no basis in factual reality. Oftentimes arguing with my CTO is arguing with AI.
Shadowys@reddit
We are but not in a way you think.
A lot of work is being done to make workflows touchless. The common expectation is to remove the human from the loop, and millions in funding are pouring into the area.
However, LLMs are clearly flawed in that you need a long, richly crafted prompt, along with an army of validations and special techniques, to achieve a zero-shot response that is comparable to or better than humans. This performance rapidly decays after the second or third response, which is why successful enterprise implementations always focus on using LLMs as a zero-shot transformer rather than a chatbot. This is captured in Microsoft research as well, showing 39% decreases in cognitive abilities after the sixth generation.
If it's a zero-shot transformation, then it's likely a problem of unstructured data that could already have been a program. This is just the old digitalisation push, not some new paradigm.
For chatbots that use a longer conversation and context, like programming tools, you definitely need a human in the loop. We need to focus on AI increasing the value of humans, not replacing humans.
https://danieltan.weblog.lol/2025/06/agentic-ai-is-a-bubble-but-im-still-trying-to-make-it-work
BlossomingBeelz@reddit
I think AI (with its abundant problems) will stick because it raises the floor for less apt people. No one is going to want to give up that superpower.
dinosaurkiller@reddit
I see the limitations of LLMs but also see many use cases that don’t require it to be actual AI. The question is, will the rate of advancement beyond LLMs be as predicted? The AI providers obviously say yes. I honestly don’t know.
spreadlove5683@reddit
Lol this is another echo chamber
Total-Leave8895@reddit
Yes, I think so. This is not the first time AI has been at the top of the hype cycle. Look up the Wikipedia article on "AI winter"; it describes the past hypes and the winters that followed. There will come a time when people realize that their stack of agents cannot replace nearly as much as they hoped for, and I won't have to listen to this agentic crap anymore. :)
Layoffs are happening because central banks have put the brakes on the economy. If you look at interest rates over the last decade, just a year ago they were the highest they had been. Luckily they are slowly coming down now. And, as you said, companies would rather pretend it's because of AI.
MathematicianSome289@reddit
I hear you. I am exhausted by the sensationalism and grifting. That said, this “agentic crap” is the foreseeable future, and it is not going anywhere. Don’t take my word for it, though; watch Google I/O 2025.
xDannyS_@reddit
Doesn't need to go anywhere, just needs to be evaluated realistically.
ITGSeniorMember@reddit
So there’s definitely an AI bubble but I think it more closely correlates to the dot-com bubble of the nineties. I think AI will fundamentally change a tremendous amount of how we work and live in the same way as the explosion of commercial websites did in the nineties. Certain professions will see a similar decline to brick and mortar retail but they won’t go away entirely. Versions of those things will pivot.
On the flip side, the investment boom into AI startups is very similar to the investment strategy of the late nineties, and (while it might take a few more years) that will pop. A small subset of businesses will survive to suck up the market share and become the next YouTubes/Facebooks/etc.
MarimbaMan07@reddit
The only people saying AI will replace software engineers are AI CEOs and people that "vibe code" small projects.
Once your code base hits thousands of lines of code these AI tools quickly stop working well. I work at a company that has existed for decades and has repositories with millions of lines of code. No AI tools I've tried have a clue what is going on in these large repos.
I'm sure eventually these tools will be better but I don't see them replacing us in our lifetime. At best I think it's just another tool in the tool box.
STunstall72@reddit
If coding agents such as Claude Opus (apparently the most cutting-edge "AI") are still LLMs at heart, with zero actual intelligence behind them (and let's be honest, if there were intelligence behind them we'd actually have AGI, no?), then that means they are going to make mistakes, since they are using pattern matching to suggest code that "fits" gaps. Plenty of mistakes.
I'm personally finding that AIs like Sonnet 4, Gemini 2.5, etc. keep writing code that doesn't fit my needs. It's actually becoming a nuisance. This is despite my prompts being as specific as I can make them. Despite an editorconfig file which defines my (very high) coding standards. Despite a readme.md which I instruct the AIs to read before doing anything, per session.
I can't imagine Claude Opus 4 being light years ahead of that.
We as devs should keep highlighting that these "AIs" are nothing but fancy code completion, to try to bring companies to their senses before they sack the only people that can save their business.
So yes, I think we're in a bubble. And I also think we're being lied to on the daily.
Altman, Musk, Huang et al know there's a dead end coming soon but them pesky shareholders, eh?
Equivalent_Lead4052@reddit
Even if it’s a bubble, how does it help me to know it’s just speculation and nothing more? People are being laid off, standards are way higher, you’re being forced to do training after training and deliver at who knows how many times the speed.
The employees are the ones who suffer at the moment and it’s useless to say “AI is not so spectacular”, because no one listens.
HatersTheRapper@reddit
No. I see a lot of manual labor jobs being replaced by robots already: drywall, roofing, painting, harvesting, etc. We are still in the early, early days of AI; when it actually develops into real intelligence it will be 10,000x what it is now.
sodbrennerr@reddit
100% but I don't think the burst will affect us as much. At least the majority of us.
SuccessfulTell6943@reddit
I am going to go against the grain and say we are NOT in a bubble, but a lot of the focus of AI being used to replace people is misguided.
AI replaces tasks, not people. Machines replaced the tasks that horses did, both directly and indirectly (e.g. nobody needed horse breeders/stud horses anymore). I think we focus so much on our jobs being ourselves that we forget that our jobs are just a loose collection of tasks that we do for money. There are other tasks to do even if AI replaces the majority of the tasks that a dev would do. The concept of a dev changes; the processes of achieving the same goals change. That's not replacement, just augmentation.
AI will stay and be useful, it will change our lives for better or for worse, but so long as there are problems that humans need solving, we will still do so.
LeDYoM@reddit
Well, I heard COBOL was going to make programmers unnecessary, because in comparison with assembly or C, any businessman could code in it since it was "almost English". Just saying.
WhyNotFerret@reddit
the real issue that makes it a bubble is the PRICE. right now it's dirt cheap, in order to take over the market aggressively. just like Uber, there will come a day when all the AI companies will have to raise their prices dramatically when the investors want their return. will users drop it when that happens?
moroodi@reddit
AI should be a feature and not the product.
I have no doubt that AI has its uses, but at the moment we're in the phase where AI is the product. Eventually we'll move on from the hype, and AI (in whatever form) will be part of the product rather than being the product.
jontzbaker@reddit
I've been here long enough to remember the promise of 3D TVs with their competing passive and active glasses, smart glasses, Half-Life 2 Episode 3...
Our forefathers came up with a piercing word for that, vaporware. I like it.
def84@reddit
Anyone who thinks AI will REPLACE a programmer has no clue what a programmer does.
Exciting_Walk2319@reddit
Cope harder
Exciting_Walk2319@reddit
Nowhere near the bubble
Interesting-Ice1300@reddit
I don’t think it’s a bubble that’s gonna collapse into nothing. It will change how we do things: how we interact with information, the web, and its services.
jypKissedMyMom@reddit
I think it's a bubble, but I do think the hype behind generative AI is real. I personally think we're in the early stages of generative AI and we'll discover many use cases for it over the coming decades, similar to how many dot-com companies were useless but the internet still proliferated into everything.
Fatalist_m@reddit
There is a bubble in the market, in the sense that many startups are overvalued, many of them will go bankrupt and that bubble will eventually burst like the dot com bubble. But if you think people who use AI(for development or other tasks) will stop using it - that's not happening. The use of AI will only increase, to what degree and which jobs it will replace - is hard to say.
Stargazer__2893@reddit
I had dinner with two non-technical people the other day. They were going on and on about how AI "was going faster than Moore's Law." They didn't understand anything about either of the things they mentioned.
I tried to talk with them about Winograd schemas to explain the limitations of present tech (e.g., "the trophy doesn't fit in the suitcase because it's too big": what does "it" refer to?), and when I gave an example, they kept getting the answer wrong. They didn't understand ordinary intelligence well enough to recognize it, but they were "experts" on artificial intelligence.
During the '08 bubble, random people behaved like they were experts in real estate. In Dot Com, every company with a URL for a business name was thrown money and their stock went up.
People knowing nothing about what they're talking about, but throwing money at it confident it's gonna take over the world, is the definition of a bubble. This thing we're calling AI is very useful, but it is not what people are saying it is, and it never will be.
MsonC118@reddit
This. Just take a look at companies with "AI" in their name, LOL. It's a FOMO and emotional trend, not a logical one. Humans are just being human, and it'll lead to a bust. It's taught me how many people actually know what they're talking about vs. those who are parrots and just buy into the grandiose vision. When you ask any genuine questions, it's like you're attacking them, too.
VolkRiot@reddit
My experience with AI as a dev at a large tech company:
It's very useful for search and a nice boost to productivity as a co-pilot for coding, but I still have to do the thinking to provide it with a good context and audit its every output.
I read endless testimonials and blogs online where people claim to have multiplied their productivity 10 fold, and are vibe coding entire new businesses with just a single engineer and a dream, and I am not able to replicate that much of an improvement. To me, one issue I have noticed is that AI is not actually intelligent enough to avoid doing exceptionally stupid and nonsensical things, which makes it hard to treat it like a human of some level of intelligence.
Often, when I ask online to understand the discrepancy, I find people who say I am just bad at prompting or at using technique X or Cursor rule Y. It is getting hard to tell if that is genuinely true, or if we are just being gaslit by a large number of less experienced people who are excited because AIs have unlocked something they could not do before: coding complex, if somewhat common, apps.
Looking around my own workplace, I am just not seeing the world disrupting sort of AI that was promised last year.
At this point it is impossible to truly tell what the actual value of the current state of AI is amounting to, and you can tell that many people involved in the promotion of this technology have very little professional experience to begin with and get upset when you scrutinize their claims and ask for examples and evidence.
So, I think I am just continuing to use AI as a co-pilot and keeping updated on new features and overall just seeing where it goes. But as of this moment in time I believe LLM based AI tooling to be overblown in its overall impact. Of course that may change soon if it can keep improving, but right now I cannot believe it has replaced anyone's job beyond writing lame copy or answering simple automated support chats.
ventomareiro@reddit
Other people have already mentioned the dotcom bubble, which IMHO is the correct historical comparison.
A new technology appears and people start throwing insane amounts of money at it.
Most of those early initiatives fail but the thing itself turns out to be actually useful, so eventually it ends up becoming a major part of our social and technological landscape.
Personally, I don't think that we will see the straightforward replacement of human developers with AI. What seems a lot more plausible is that individual developers and small teams with the right AI tools will become a lot more productive, which might lead to fewer job openings in the short term (but not necessarily in the long term).
Own-Chemist2228@reddit
It's hard to make predictions, especially about the future.
I've been through a few tech hype cycles, and one thing that is different about this one is that it seems very forced. It's a push vs. a pull.
During the dot com bubble consumers, developers, businesses ... everybody wanted to use the internet in every way possible.
The AI bubble seems to have much more pressure from big players forcing onto a skeptical user base.
In the late 90s, people wanted to shop online, today nobody wants to talk to an AI customer service agent.
MsonC118@reddit
Agreed. It's actually pushed me away from even trying certain tools. I keep seeing Google's ads on TV shows like Shark Tank, LOL. It's like an advertisement infestation. I do like to try new things, but having it pushed in front of my face constantly turns me off from even wanting to try it.
forbiddenknowledg3@reddit
AI is simultaneously impressive but also way overhyped.
People losing their jobs? Yeah, they are, but AI is also not solving those companies' problems or saving projects. So IMO they're losing their jobs for other reasons. Microsoft is a great example: if AI was so great, why do Windows 11 and their games still suck ass?
Yes, it has improved over the past few years, but IMO it's more the tooling/integration. Going from ChatGPT to Claude Code myself, the AI still makes the same fundamental mistakes and needs someone experienced in the loop. We won't be replaced unless there's another breakthrough in the actual AI bit.
jacobissimus@reddit
Yeah, eventually business folks will realize that if they replace all the juniors with AI, then they’ll run out of seniors.
MsonC118@reddit
This. I'm senior+ with a solid resume. I currently run my own software companies these days, but if I ever have to go back to corporate, this is precisely what I'm looking forward to. Enrollment is already dropping for CS (if I'm not mistaken, please correct me if I'm wrong). The fear will only help us in the long term.
Additionally, many people who leave the industry will struggle to return. Just keep your head down, keep upskilling, and ride the wave. I'm licking my chops for when this all crumbles and we get paid to fix all the AI mess. So much for those AI "efficiencies". This will all be a HUGE win for us in the mid to long term IMO.
Riseing@reddit
It's going to be a really nice time to be a senior here in about 5 years. Once they've scared all the new blood out of the field, forced a ton of older SWE to retire, and realized that their next word generators don't work so well.
babluco@reddit
time for a sabbatical ? I will just ask AI to catch me up on the latest tech when I come back :-)
pheonixblade9@reddit
That's what I'm doing 😊
pheonixblade9@reddit
It's nice for the people who are actually senior skill level rather than the 1 year of experience 5 times and took a job for the fancy title people.
stevefuzz@reddit
Me looking side eyed at the collective fear of AI taking over dev jobs...
Sheldor5@reddit
I see this as an absolute win $$$$$
thallazar@reddit
All I can say is that I'm very happy to be a senior+ in a diminishing market.
OddWriter7199@reddit
"LLMs are just advanced auto-complete" - well said.
my-ka@reddit
We can only pray for it to be a fraud
Alarming-Nothing-593@reddit
Yes and no. Yes: there are a bunch of projects and products that will die out, due to overhype, low quality, and missed expectations. No: because AI definitely reshapes how devs/PMs/QAs work. I am on team "we will need more devs", and more importantly more security folks. Treat this whole "AI will replace me" thing the way you treat WordPress: you were able to create a website/shop/landing page with no-code tools even before AI, yet the demand for frontend devs never vanished.
phao@reddit
I also wonder if the AI code generation phenomenon will create a new, very significant level of abstraction for developers. Maybe a programming language for AI-assisted coding (involving elements of natural language and formal language) will bring a transition like the one from assembly to Fortran/C/Java/Python. Now, however, the transition would be from the high-level languages to something yet more convenient.
As I write this, I was wondering if training a high quality "Prolog to C converter" Reasoning LLM would be feasible.
ColumbaPacis@reddit
No, it won’t.
A programming language allows you to be precise. Sure, it is an abstraction, but at the end of the day, there is a finite set of errors a compiler could make, which a human could resolve, so the language becomes fully "exact" when translated to binary.
But LLMs are black boxes. They are inherently chaotic and unpredictable. That is not even close to what a compiler is.
So an LLM can NEVER replace a compiler, to create some “natural language” abstraction layer. Which is what everyone is marketing them as.
Well either that, or going a step further and promising an agentic tool, to replace the human from the loop completely.
rayred@reddit
+1 natural language is a terrible abstraction for computers. We learned this decades ago lol.
mavenHawk@reddit
LLMs are all about being trained on large amounts of data. And whatever this new language is, it won't have as much training data as the existing languages. If anything, I think LLMs could hinder the creation of new languages.
JWolf1672@reddit
Exactly this, the architecture of LLMs require them to have a ton of training data for them to be anywhere near decent. What that means is that not only will LLMs reduce incentive to create new languages, they are likely to further entrench certain languages and frameworks, making it difficult for newcomers to gain a foothold.
I've personally noticed that if I don't specify a language then the models tend to give me either JavaScript or python as their default languages of choice as an example.
ShesJustAGlitch@reddit
It’s a bit of a bubble, but Cursor doesn’t make $500M ARR for no reason. There’s great value in AI-assisted things; I cannot go back to writing code manually vs. AI-assisted, it’s a huge productivity increase.
Realistically imo we see some ai failures but mostly leaner teams.
roodammy44@reddit
We should count how many businesses still run their operations on excel vs having dedicated code to their problems. If AI really is genuinely reducing the cost of code, there will be a lot more work out there.
Alarming-Nothing-593@reddit
Exactly! Behind any payment processing company or crypto on-ramp/off-ramp provider there is a CFO with an Excel sheet on a proper Microsoft-based laptop.
FinalRide7181@reddit
Well, the .com bubble was a thing, but that doesn't mean the internet was useless. I think the two things are not mutually exclusive: the market probably has exaggerated hopes, at least for the near future (maybe what they envision will materialize in a decade), but at the same time AI is extremely useful and can't be compared to blockchain.
If blockchain disappeared tomorrow my life would probably not change at all, if ai did it would have a decent impact
theunixman@reddit
It’s most definitely a bubble. OpenAI is set up to be the pump, at some point it’ll peak and the backers will cash out and then it’s the massive dump. It’s inherently wasteful to an extreme for far worse results than the StackOverflow mods have curated over the decades, and it requires new nuclear power stations to be useful. But the time horizon for that is too long and the VC industrial complex will have long since flushed it all down by then.
eaz135@reddit
Its a complicated question if we are in an AI bubble.
My gut feel is NO, for the following reasons:
- In my day-to-day I am personally using AI so much, for work and personal reasons. I am subscribed to several services that have become critical to my work, and I don't see myself disconnecting them. It's not a nice-to-have thing like a Netflix subscription; if I were to go completely AI-less, I feel it would be a major step backwards.
- Many clients I have worked with have done really cool stuff with AI, such as on-device models, edge-computing applications (IoT at-scale setups), migration projects with AI acceleration, etc.
- Many large enterprises (such as the big banks, insurance companies, telcos, government agencies, etc.) are just starting to really set up their platform/infrastructure and internal AI enablement teams. The real impactful enterprise use-cases have barely even begun, let alone been exhausted to the point of popping a bubble.
- We are starting to see entirely new ways of teams operating, new ways of solving problems, and ability to solve problems that were previously unfeasible (too much work, too expensive, too slow).
There is clearly A LOT of real value to AI already being delivered, and across the vast majority of benchmarks there is a clear trend: new and more capable models continue to be released on a fairly regular basis. We haven't seen a meaningful slowdown in progress according to benchmark results. Of course we can't predict how capable future models will be, but it's not unreasonable to expect some level of continued progress over the next several years, and extrapolating that out, it's easy to imagine when our current benchmarks become inadequate at measuring their performance. E.g., if you just do a linear plot of how models have improved over time on SWE-bench, you are looking at models scoring 100% in mid to late 2026.
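That back-of-envelope extrapolation is easy to reproduce. A minimal sketch of the idea, with the caveat that the (year, score) points below are made-up placeholders for illustration, not real SWE-bench leaderboard numbers:

```python
# Fit a least-squares line through (year, score) points and solve for the
# year the fitted line crosses a score of 100%.
# NOTE: the scores below are illustrative placeholders, not real data.
scores = {
    2023.75: 0.04,
    2024.25: 0.14,
    2024.75: 0.33,
    2025.25: 0.55,
    2025.75: 0.70,
}

ts = list(scores)
ys = [scores[t] for t in ts]
n = len(ts)
mean_t = sum(ts) / n
mean_y = sum(ys) / n

# Slope and intercept of the ordinary least-squares fit: score = m * year + b
m = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, ys)) / \
    sum((t - mean_t) ** 2 for t in ts)
b = mean_y - m * mean_t

# Year at which the fitted line reaches a score of 1.0 (100%)
t_100 = (1.0 - b) / m
print(f"linear fit crosses 100% around {t_100:.2f}")  # with these numbers, ~2026.6
```

The point of the exercise is also its weakness: a straight line has no reason to keep holding as scores approach the ceiling, so the crossing date is better read as an upper bound on optimism than as a forecast.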
Another reason I say no is that it's quite rare for a new technology to come out that is so horizontally applicable across sectors/organisations; there's nearly always a use-case for AI at nearly every company. The last time something like this happened was, I'd say, the cloud computing transformation. IMO not even mobile was as wide-reaching as the possibilities for AI. I'd argue that as AI becomes even more capable over the coming years, there will be more companies/people out there that need AI than companies needing a mobile app.
So everything I've said so far sounds like a slam-dunk that we're NOT in a bubble, so why did I say it's complicated? As with the cloud computing transformation, there will be winners and losers. In hindsight it was easy to say "everybody should have just invested in Amazon and Microsoft". Will AI follow that same type of path, where it's kind of a winner-takes-all scenario, with few small-to-medium sized players and most of the activity swallowed up by 2-3 giants (e.g. Google, Microsoft, Amazon)?
Zenneth014@reddit
I’m starting to see real results from AI at my job in a good way. Not in a “oh no I’m being replaced” but in a “oh that really long tedious pile of debt I’ve been putting off that’ll make my life easier when I get around to it” task can be done way faster and easier now. Or in a “I’ll create one example of 20 things that need to be migrated to a new pattern and AI will get the remaining things 99% correct” type of way. To me this is a real benefit that doesn’t threaten jobs but makes my life less painful. CEOs claiming it’ll replace all of us are definitely wrong but it’s definitely very valuable. “Big data” was a buzzword until it just became commonplace. AI will be the same.
Original-Baki@reddit
Just like the internet during the .com bubble, A.I. is going to be a transformational technology; it's just clear that valuations are running ahead of where we are today.
Different-Side5262@reddit
AI increases productivity. Period.
The people out of a job will be people that can't adapt and increase their productivity 100x.
Practical_Cell5371@reddit
AI definitely has earned its place, as it's really remarkable and useful (specifically referring to LLMs). This will likely reduce the need for engineers on most teams, and already has, since it's a productivity tool. It allows me personally to debug issues that would take me 10x the time on my own, come up with better designs, improve my code, and even get through "writer's block". It's not going to replace entry level, but the bar to entry is definitely higher and more competitive now. I don't really see LLMs improving much more than they have; they will probably plateau pretty soon. There's really nothing else to teach them other than to continuously train on the data people are prompting for, and it's true that they have their limitations. I use ChatGPT, Gemini, or Claude probably 20-30 times per day; they're not perfect and I couldn't expect them to be. If you are an experienced developer (pre-AI) who built complex software and used great patterns, then experimented with how AI today would do the same thing, you would notice a few things in comparison: (1) it found a better way to do some parts of the code; (2) it found an inferior way to do it than how you had determined; (3) it misunderstood the goal completely and needs to be refined constantly.
____purple@reddit
AI is underutilized currently. We are barely scratching the surface of what it's capable of.
And I don't even mean research. I mean integration. AI will be everywhere because it's smart enough, and everything is better when it's smart, knows you, and can adjust. It's just a matter of implementation.
By integrating AI I'm not talking about adding iframes with chat. I mean allowing it to shape how the user interacts with the device. Recommend and adapt. All UIs will change.
Customer processing will change. Content consumption will change forever. A lot of stuff.
AI could replace programmers. But it's hard. Idk.
ActiveBarStool@reddit
"is water wet?"
ahspaghett69@reddit
We are in a bubble, yes. AI is useful but it's not magic or the huge leap forward that was promised. Branding what we have as AI is disingenuous, frankly.
2 years ago people said copilot was going to take all software developer jobs. Now the same people say that copilot sucks, actually Claude code is what's going to take all those jobs. Now people are starting to say actually it's not Claude code it's Claude with locally hosted MCP servers etc etc etc
The problem is that as soon as you try to use AI in production, for real use cases, it fails. Because it doesn't really understand anything, it's just predicting the next block of text, or pixels, given the previous block. "Reasoning" as it's currently implemented is basically just splitting up the blocks into smaller predictions, which tend to be more accurate, but then it fails to understand how the blocks relate to each other.
DallasActual@reddit
100% this is an AI bubble.
AI is not replacing anyone. It will ease some use cases, but the only folks who think LLMs are the last stop before AGI are the ones selling LLMs.
Middle-Comparison607@reddit
I don’t believe AI is going to replace anything in the short term, but it will be more transformative than any of the tech you mentioned. I compare AI to the birth of the internet and later the smartphone apps revolution. The internet didn’t necessarily end anything but created new forms of business, e.g. Amazon, Netflix, etc. Apps on smartphones put the internet close to us all the time, and things like mobile banking and instant messaging apps were created. Mobile banking didn’t kill bank branches but definitely reduced the need for them. Who misses going to a branch to pay their bills?
As for AI, it will replace the boring part of engineering pretty soon. Who wants to write another CRUD? Or the YAML of YAMLs (K8s, ArgoCD, Crossplane - I’m looking at you)… even terraform is quite boring to me. They are necessary tools (sometimes), but I absolutely hate having to deal with them when I just want to build My Next Cool Thing (tm). For me AI is helping me unlock my creativity when building software. I don’t see it replacing engineers because it cannot be creative, but it is definitely enabling me to do things I wouldn’t do before
FunnyMustacheMan45@reddit
It's safer to assume everything that isn't at least 20 years old is a bubble...
I recall the days when NFTs were a thing. The job market was so hot you could get a job without any experience...
Now, I can't even find a single NFT or Crypto company from back then.
Ahrivan@reddit
You know the bubble is at its peak once there's talk of quantum AI.
nullstacks@reddit
I think it’s a lot about the right timing to be a scapegoat for over-hiring mostly under-qualified talent over the past few years, an unstable economy, and an unpredictable administration transition coming to a head.
No-Rush-Hour-2422@reddit (OP)
100%
ButterPotatoHead@reddit
I was around when the internet was created and at the time most people thought, what, are people going to email each other all day? Nobody really saw the potential.
Software engineers need to get off of the "AI is going to replace me" angst and understand that AI is going to impact vast swaths of the business landscape. Paralegals, customer service organizations of any kind, insurance underwriting, content creators, advertising and marketing, medical advice -- these industries are all going to get turned upside down by AI. And AI systems are going to have to be developed by software and data engineers. So this is going to be a boom or a bust depending on where in the ecosystem you are.
EasyDev_@reddit
AI is actually helpful, but I think promoting it as if everything is possible through vibe coding is an exaggeration. They need investment. For now, since users still need sufficient expertise to review AI-generated content, there's a certain level of delay. If we ever reach a stage where AI can handle even the review process, we’ll have to worry not about jobs, but about humanity.
andupotorac@reddit
No. This is more like mobile, less like NFTs or crypto. If that’s what you’re implying.
Designer_Holiday3284@reddit
I don't even program anymore, and I am not even joking. I prompt all day long and iterate over and over. Is it 100%? More like 70%, but it's better than I could perform in an unknown codebase on a super tight schedule. In 8 days I spent 200 dollars (paid by the company) on AI API calls.
This will become the default. Not a matter of if but when. It will obviously get cheaper and better over time.
jonathanmeeks@reddit
We are simultaneously in a bubble at a time when AI/LLMs will soon revolutionize the IT industry, displace all sorts of workers, and have an enormous impact on society. All at the same time when much of it is hype. Seems contradictory, but hear me out...
It reminds me of the dot com era, actually, when all of the above were true then, too.
A bubble ... people vaguely recognize the potential and throw money at people who don't know what they're doing. They fail, investors lose money, and the bubble pops.
Revolutionizing tech ... by analogy, I started in tech just before the dot com era, I can say first-hand that it had a huge impact. Imagine working in tech without widespread internet access. It will be the same with AI/LLMs. You will have a hard time imagining working without it in a few years.
Changes to society overall ... I can attest that far, far more changes occurred in how we accessed information between 1996 and 2005 than between 1986 and 1995. That first period was a fundamental shift in information access. Add onto that, commerce. Imagine not having e-commerce. Just drive to a store and hope they have what you want.
But all the hype ... while AI will change a lot, what specifically it will change is largely an unknown. We aren't there yet to know how it plays out. And when people sense a change is coming and want to get involved with it, many will make crap up to get attention. Hence, all the "it will replace all SWE." Maybe it will, maybe it won't, but those saying they know are talking out of their rear-ends.
PS: it won't replace all SWEs :)
funbike@reddit
Is the personal computer a bubble? Is the Internet a bubble? Is the smartphone a bubble?
In 1977 someone from DEC said "Nobody would ever want a computer in their home". In 1995 Newsweek had an article, "The Internet? Bah!". LOL!
Various aspects of AI may indeed be a bubble, but AI is here to stay and it WILL be very disruptive in the long term.
pceimpulsive@reddit
AI has its merits, but overall yes, it's a bubble right now. You know that's true because the leading AI companies are losing money.
Accomplished_End_138@reddit
ML, I think, has a bunch it can do that still needs to be discovered.
LLMs have a bunch too, but not where they want to use them: they're good for parsing text from a user into something code can use, and as a bunch of tools to help.
But it's far from the "replace every worker" future they say it will deliver. I do think the layoffs will be longer-lasting, though (but that's because I think there was over-hiring during the pandemic).
I do use ML and LLMs at work. They can help with things, but generally not in the way you'd think (at least not well).
mark1nhu@reddit
A lot of businesses and applications don't actually need AGI to extract a lot of value from this technology, even if it's a glorified autocomplete for now.
One of my clients is an insurtech and part of the bottleneck is reviewing customer forms, documents, police reports, etc, basically approving paperwork related to claims before actually paying them.
The impact of LLMs has been HUGE in this case. Not only do customers wait way less than before, but employees have been outputting way better work thanks to the amazing support AI gives them.
Long story short, we're not in a bubble in my opinion. We do have an AI overhype, especially in software engineering, but it's going to calm down and then become boring, ubiquitous technology like Microsoft Excel for most businesses. AI isn't going anywhere, in my humble opinion.
shesprettytechnical@reddit
The current state of the market around AI is one big Dunning Kruger example.
Advanced_Poet_7816@reddit
Yes but not like blockchain. The current AI bubble is in the insane investments made into scaling an architecture that will fail. It’s a semi conductor bubble.
Sudden-War3241@reddit
It's absolutely helpful. However, in development there are aspects that even human intelligence has difficulty dealing with, so I doubt AI will ever be a one-stop solution. I do agree it can be a great tool for increasing productivity, but that's the max I think it can reach for some time.
joe0418@reddit
I totally see a world where I just tell the AI what to write, review its output, and collaborate with it toward a desired result. Do I see it replacing me? No. Someone has to get in the weeds with it and guide it to the proper conclusion. It still can't read anyone's mind, and until that is the case, us software engineers will still be tasked with taking vague requirements and converting them to working software.
t-hrowaway123@reddit
I am a data platform architect, and that's something AI can't really replace without asking it to rebuild our entire platform. You'd have to share our entire back-end and front-end's worth of code repos to give it the context it needs to complete what might even seem like a complex task. Can AI build an app from scratch or help you launch a tool? Sure. But give it 5-10 years' worth of human-based PRs and fat chance you'll get anything other than "searching for answers...." on repeat. As a data-science-trained platform architect, that's my take. I still use AI every day (generating correct regex patterns, optimizing really long SQL queries, or coming up with SQL solutions to problems that would be easy outside of SQL but become complex in SQL, yet have to be done there because of pipeline context, etc.). For the time being, the accessibility is also its biggest detractor. You have non-technical people, or those with very junior experience, trying to rely on it entirely, and it's generating "slop", hallucinations, and bad code. Consequently, distrust is growing rapidly. So I think for quite some time, until it's used correctly or it becomes so powerful that even a non-technical person can build something meaningful with it, we'll be good.
swallowing_bees@reddit
No. I believe the AI "product" (LLM chat interfaces) is already in its endgame. They do NOT want AI to replace developers, because developers are shelling out tons of money every single month for these LLMs. Why would they kill that cash cow? They use hype to posture like their LLMs are going to explode in intelligence and replace the people using them and paying for them, as if the current services are just an intermediate product while we wait for that explosion. That's BS.
Also, just wait until these LLM services are enshitified a la Netflix. Only a matter of time. But that's not a bubble, they are going to keep making money.
talldean@reddit
Blockchain seemed like a pyramid scheme from as long as I can remember. Bitcoin is the best current use and is burning $130 of electricity per transaction, today. (1370.97 kWh per transaction run today, July 8th 2025. Yeah.)
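For what it's worth, the two numbers quoted here are mutually consistent at an ordinary grid price. The $/kWh rate below is my own assumption for illustration, not something stated in the comment:

```python
# Sanity-check: does 1370.97 kWh per transaction really come to ~$130?
kwh_per_tx = 1370.97   # energy per Bitcoin transaction quoted above
usd_per_kwh = 0.095    # assumed average electricity price, USD per kWh
cost = kwh_per_tx * usd_per_kwh
print(f"${cost:.0f} of electricity per transaction")  # prints $130 of electricity per transaction
```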
AI has practical uses, and we're seeing a bunch of them already. Lawyers with less than two or three years of working experience... are already obsolete, except that we need them to get lawyers with 4+ years of experience. Even if we *never* get AGI, we're already seeing entry level white collar jobs - and customer support jobs - start to evaporate.
That said, I suspect AI replaces managers - or allows managers to manage far more reports - notably before it replaces a senior engineer.
Constant-Listen834@reddit
If it was a bubble, I don’t think we’d see so many people on Reddit scared like this.
codeprimate@reddit
LOL. Analysis and comprehension capabilities are the reason I use it every day. I've literally pasted an error backtrace into Cursor and it automatically fixed a longstanding bug that no one else could resolve: watching the entire root cause analysis process, mitigation strategy proposal and revision, and implementation in real-time.
FAR from autocomplete.
gadfly1999@reddit
Kudos to musk for framing the AI alignment in a way normal people can understand.
binaryfireball@reddit
listen if AI delivers on its promise then i wont need to jump on the train
tallicafu1@reddit
The biggest issue for AI is nobody is asking for it and no one I know wants it.
alibloomdido@reddit
Well the dotcom bubble has burst but dotcoms since then got bigger and bigger.
Strict-Plan4528@reddit
"you won't lose your job to a tractor, but to a horse who learns how to drive a tractor." - unknown
abandonplanetearth@reddit
What does this mean?
Arsenic_Flames@reddit
A play on “you won’t lose your job to AI, but to a person who knows how to use AI”. But also referencing the history of horses and tractors, where vehicles have replaced horses in the economy completely, and there are no new jobs for horses to take. I.e. it will be the same with humans, and “technology will lead to the creation of new jobs” is a fallacy.
whathaveicontinued@reddit
I'm not an AI doomer at all, but I think what people are scared of is that AI IS the tractor AND the horse riding the tractor.
tomthebomb96@reddit
Metaphor meaning that failing to adapt to changing technology is the reason people lose their jobs, rather than the technology change itself.
teslas_love_pigeon@reddit
Why are we acting like writing instructions into a textbox is a skill programmers don't already have?
The only skill vibe coding requires is a 6th grade reading comprehension.
tomthebomb96@reddit
I explained the metaphor to the commenter that asked what it meant, never expressed or implied it is applicable to this subject or that I agree with it.
teslas_love_pigeon@reddit
just let me shout into the void that is modern social media, homie.
ben_bliksem@reddit
That you are going to lose your job to an LLM Tractor
csanon212@reddit
This feels like a cheap shot against cryptocurrency. I think a lot of 'experienced' devs are against it because of the hype train. Yet, if you look at millennial and gen Z millionaires, they are allocating significant amounts of their net worth to it.
Fleischhauf@reddit
it's the same bubble as the internet in the 2000s. It's massively over-hyped, but once the bubble bursts, there will be a fundamental change.
MathematicianSome289@reddit
No, not in a bubble at all. How do I know? We are just barely getting started with the Art of What’s Possible in GenAI. There are going to be many more developments in this space, for many years to come. This is not some hype wave like we saw in 2015-17 where training your ML model was the hot thing. That is like stone-age compared to where the industry is headed. Most things are going to change. In your office and in the world around you. This is an adapt to thrive situation, not some brush it off fad.
seinfeld4eva@reddit
I've been making websites for 35 years now. I can say I think AI is the biggest development in my entire career. I'm using it today at a startup where I can write a week's worth of quality code in 1-2 days. I'm using Cursor with Claude Code. I spend most of my time writing documentation (tech specs) and reviewing generated code, debugging, and making sure the code follows best practices. I don't know that it's better today than years ago, but my small team can ship a lot of features quickly. I think the skill set required is different. Being able to communicate well seems more important than skill with programming languages. Also, just being thorough about reviewing is critical. I'm able to think on a higher level than I could before, which is nice. Instead of getting stuck on syntax issues, I'm able to think more broadly in terms of design patterns and performance and scalability.
I am worried that many jobs will disappear, and that much more work will be expected out of smaller teams.
wutcnbrowndo4u@reddit
I've got a pretty strong disagreement with your recollection of these "hype" cycles. Neither of those technologies came close to AI's level of industry hype, nor did they create the amount of value/utility that AI already has, even according to the biggest pessimists. Blockchain hype was dominated by laypeople giddy about their speculative investments. VR didn't have much of a hype cycle at all: in fact, Meta was roundly mocked for their Metaverse bet![1]
[1] if anything, the Meta pivot was underrated by the industry: non-VR wearables have a good shot at delivering a lot of value in the near future, given where the next frontier of AI capability improvements is (multimodal models, provided with easy access to a much broader context than current text-query-based models are). VR IMO remains cool as hell, but there was absolutely no industry hype about it exploding in economic usefulness
fig-lous-BEFT@reddit
Tech is exceptionally talented at over promising then under delivering. Once the hype bubble pops, you can get a better sense of the impact.
billvivinotechnology@reddit
This is seriously unfortunate — and honestly, I get where you're coming from.
As tech progresses, people of all backgrounds are going to want to use it — but most don’t know how, or even when to loop in an actual developer. That’s where folks like you come in. We need more voices educating not just on what AI can do, but also where it shouldn’t be left on its own.
I actually wrote something about this exact gap — on the importance of having real engineers in the loop when building with AI: You Need Human Beings Involved In Your Code
Would love to hear your take.
veryspicypickle@reddit
Yes
chunkypenguion1991@reddit
In the Gartner hype cycle we are at the peak of inflated expectations, right before it drops into the trough of disillusionment.
topboyinn1t@reddit
To say that it’s a bubble is a huge understatement. I think LLMs will stick around and likely get optimized in every which way possible to minimize cost of inference, etc, but they will not change the world the way everyone is imagining.
The amount of rave I see about LLM coding is astounding when it needs handholding for even the most basic shit. We have already hit diminishing returns on the models as well, but everyone is trying to continue selling the hype. GPT5 is falling extremely short of outrageous claims that OpenAI made, which is why they are stalling.
NUTTA_BUSTAH@reddit
Oh for sure it is a bubble. A lot of snake oil salesmen all around. However I do not think it is necessarily a "scam" so to speak, as in after the bubble bursts, the problem (AI/LLMs) starts to be understood in general and more targeted focus efforts pop up.
E.g. blockchain happened, things like AWS blockchain DB products were built, cryptos happened and finally the snake-oily DB product was killed as it was the wrong application (no customers)
NFTs is more of a "scam" in my eyes.
thephotoman@reddit
The question is not, “Is AI good?”
The question is, “Is AI good enough to turn a profit?” AI is a cable company pitch right now: it’s on an introductory price. They want users that rely on them, then they’re gonna jack up the rates to what it actually costs. Just like Uber did.
I don’t see any of the current players lasting. Big Tech has had its run, but the world is burning around them.
dantheman91@reddit
LLMs are fantastic at doing an OK job at doing relatively simple and well documented asks. They are not good at solving problems.
The reality is that if you are worried about AI taking your job, you should probably find a better job where you offer more than a brainless pattern matcher could replace.
Cheses100@reddit
I think here’s where the bubble may be different this time. I’m finding myself able to do a significant portion of my coding work using only LLMs. Currently I have to guide them using my knowledge of software engineering and our systems, but they’re able to do reasonably complex tasks within a certain scope. Sorta like the type of work an intern could do, but done nearly instantly. Just from the efficiency gains there, I do believe a single engineer is maybe 2x more efficient than before. I’ve certainly saved hours at a time by having LLMs write some code for me.
The broader thing here is if you follow a lot of the research, there’s a ton of new advances in reinforcement learning for training LLMs. Some are specifically focusing on agentic tasks, which eventually can lead to much better agents not just for coding but for doing broader tasks within your company that a swe might do.
I’ve also been working pretty closely with LLMs for the past year and a half so I’ve seen the advances made in that time frame. It’s pretty insane how quickly stuff keeps improving, so it’s not a given just yet that this will be everything everyone’s promised, but I also think there’s a real possibility it will fundamentally change software engineering and maybe the economy as a whole over the next 2-5 years.
nuehado@reddit
No
Individual-Praline20@reddit
We are in a series of bullshit bubbles. All from the tech riches that use you for their own benefit. They are the new slavers. 🤷
sod1102@reddit
AI will have its place, but it won't be relied upon as heavily as many companies think. In fact I suspect many companies will rely on it too much, without putting the proper guardrails in place (governance, AppSec, senior human devs to review/validate), and they will end up getting burned big time. Vulnerability, loss of functionality, operational instability, data loss, reputational harm, etc will ensue. Those firms that wind up getting burned by going "all in" on AI for development will likely over correct in the opposite direction and the market will once again be favorable for junior devs. Unfortunately that may take a few years, however.
Garfish16@reddit
Yes, we're definitely in an AI bubble. No, this is nothing like the blockchain bubble. Blockchain was a technology looking for an application whereas AI has lots and lots of applications.
stick_it_in_your_bum@reddit
I’m working on a dark mode for a desktop app right now in win forms (don’t ask why, I’m gonna try to do away with that shit as soon as I can) and I asked ChatGPT how to implement this feature. While it gave some ok rudimentary examples it got stuck on how to implement it correctly for DataGridView. Idk man imo it’s reaallly undercooked and I’m tired of giving it chances to impress me. I’d rather just read the docs myself at this point. At least the docs don’t try to bullshit and lie to your face.
lordnacho666@reddit
There's a few overlapping issues:
- People being laid off is due to covid times ending, inflation, and interest rates normalizing. This happens on top of whatever else people are talking about, so other effects are often mistaken for it. The covid bubble deflates, and probably over-deflates considering how many people got hired in the golden years.
- There are two layers of bubble. The first is investment hype. Investors see opportunity, they all rush at the door, and some of them will lose their money. It's inevitable that a lot of invested money is just wasted. Pets.com was not a terrible idea about how to use the internet, but they messed it up. So yes, you will also see AI companies folding due to being badly run or just not winning the race they're in.
- The second is real changes to how people live. This is like the internet again. While the bubble popped, there were real effects on everyday life. You can see this already with AI, it has murdered SEO in a short time. People don't just do a search anymore, or at least when they do, they just read the Gemini summary and they're done. Or like us devs, we code in a different way than we used to. I just got Claude to code a fairly complex feature this morning. It would have taken me the whole week of doing donkey work debugging.
I don't think it's like blockchain. When did you last see anyone pay with a digital currency? There were articles and newsclips about people paying for pizzas with BTC. If it was going to take off that way, it would have. There's a speculative market that is growing, but it's not a massive everyday thing that affects us all.
So are we in a bubble? Certainly AI investments are a bubble, a lot of those firms are spending gigantic amounts of money on compute, and getting token amounts of money back for it. Consolidation will happen once we find some winners.
But the AI tech is here to stay. In ten years, vibe coding will still be a thing, for example. It's not like BTC where you won't be able to pay anyone for coffee anymore.
jcm95@reddit
Devs won’t be replaced, but demand will keep plummeting
BoBoBearDev@reddit
It is a bubble, but before it pops, it will rebubble. The concept of automation is not new and has been done for more than 5000 years. AI is just an evolution of automation and it will never stop and there is no limit to that either.
Equal-Purple-4247@reddit
I think a lot of people are either unaware of or did not notice the cloud revolution.
It has become more than just vertical vs horizontal scaling. The modern tech stack, with infrastructure as code, containers, orchestrations, service discovery, telemetry, consensus protocols, distributed storage / systems etc. are all a result of, or highly influenced by the cloud. New jobs like dev ops and cloud engineers have sprung up, and existing jobs like software engineers and architects have changed.
I'm not a fan of AI in its current form, but I'm confident that big cloud i.e. Google / Microsoft / Amazon will force AI upon us. It's the reason we're all using Teams. Much better tooling (eg. MCP protocol, cursor) will emerge, and slowly but surely, Big AI will augment our job scope such that we play a "supporting" role to AI. Tech ops will change (just like devops), architecture will change (like how everything is a microservice), our favorite frameworks and languages will change (like how everything is a webapp and desktop apps are almost dead), and SWE will change.
Just look at how they've transformed the industry with their cloud certification programs. Blockchain (or at least bitcoin) reached enough critical mass that it's listed on the stock exchange and will possibly be with us forever. IoT devices are still everywhere. They haven't given up on voice assistants since the first Siri. IMO AI is closer to "cloud" than it is to "blockchain", and cloud is everywhere.
AI is endemic.
guhcampos@reddit
We are always in some sort of bubble, AI being the current one. What "bubble" means here has some nuance though.
Right now there are thousands of AI startups around the World, many attempting similar goals, many chasing problems for solutions they provide, a few working in solutions to current problems.
From those thousands, a couple dozen will succeed and thousands will fail. If this is what a bubble burst looks like for you, be sure it will happen, and it will be relatively fast compared to other bubbles of this kind, because AI research costs are quite extreme. A normal software startup with a couple million USD in the bank might have 3 years runway if their team is small. If they want to heavily invest in AI, their cloud costs alone will eat this cash in a year.
DesperateAdvantage76@reddit
LLMs are an excellent alternative to Google, but I haven't seen them improve much since I started using them three years ago. I don't care what all the synthetic benchmarks say.
ImYoric@reddit
We are absolutely in an AI bubble (assuming that AI means GenAI).
This does not mean that AI cannot be useful, or that AI doesn't threaten jobs, or that AI will never improve. But it does mean that AI companies are insanely overvalued, that the current possibilities of AI are overhyped, and that the current economic model is unsustainable.
I believe that pretty much none of the currently hyped AI-based technologies will exist in 10 years' time. I also believe that several AI-based technologies that are currently passed over will cause considerable changes in the tech landscape and in society. I am, of course, unable to predict which ones.
Large_Carpenter_8459@reddit
GenAI sucks, but as a tool it is a massive productivity booster
Fidodo@reddit
Yes, but not like with blockchain or VR; more like the Internet bubble. Unlike blockchain or VR, AI and the Internet both have real substance backing them, but we still had an Internet bubble. It's not that the Internet wasn't capable; people were promising too much too soon, and just like with AI, many people did not understand how it could be utilized. We eventually achieved the vision for the Internet, but it took much longer than promised, and that vision was blurry at the start.
AI provides a real opportunity for a massive impact just like the Internet and just like the Internet a lot of companies will get it wrong. That will cause a bubble, but in the chaos, the companies that get it right have the opportunity to upturn the established order.
tepfibo@reddit
Yes, if electricity was a bubble. Or the wheel.
PhatOofxD@reddit
Unlike blockchain, AI is actually very useful for MOST companies.
How useful it is for most of them is overhyped, yes, and AI growth is honestly beginning to flatten out.
I wouldn't worry too much about it taking your career away. It would take others far more easily, and every time in history this has happened, it spawned a replacement industry.
agumonkey@reddit
Partly. I think the number of people able to juggle AI / LLMs for actual value is small. Most people around me are excited to set things up, but it's shallow fiddling AFAIK. I see no real depth, and no ability to overcome issues once their new AI-based tool starts to have problems.
Alive_Direction6123@reddit
AI will not replace software engineers. I do believe it will replace simple software developers and coders.
Deaf_Playa@reddit
Yes. The notion that we don't know why GenAI is able to produce semi-coherent thoughts is evidence. Until we figure out hallucinations, this bubble will remain.
__scan__@reddit
Yes.
Tango1777@reddit
In 15-20 years? Maybe, but even then it won't fully replace devs; our work will just evolve.

Anytime soon? No way. I use AI every day, good commercial models, and it definitely helps with everyday work, but it's wrong so often, and its mistakes and assumptions escalate very quickly. It cannot undo them; it just goes in the wrong direction all the way. Or it over-engineers code by a ton. If we just blindly accepted most of what it creates with only slight adjustments, in a year or two such a codebase would become unbearable to develop, maintain, and debug. The only way to handle it would probably be AI agent mode, which would only make things worse and worse. In the end you'd end up with an app that looks as if it was developed by a bunch of juniors without any supervision. Probably worse than that.

And that is where experienced devs are crucial, with or without AI. If we're going into fast, AI-based development, mediocre code gets created, and someone has to stay sane, control it, and take care of the quality. Experienced devs' expertise will be essential to control AI-oriented development. IF they even want such jobs instead of coding, because it's hella frustrating fighting with AI to stop acting dumb and do a decent job. Not to mention you now have to pay for both devs and AI tokens, which for the enterprise tier are not that cheap.

So IMHO AI is here to stay, but it's not taking over anything anytime soon. Most of the time, what I do with AI is tell it to stop implementing something wrong and do it as I say. But you need experience to know what the right way is. Devs who started learning development in AI times and don't know how to work without it will never have that knowledge. They can become seniors, but they won't have senior knowledge, nowhere near it.
JaneGoodallVS@reddit
Which commercial models?
My buddy is a CTO who uses Claude Code, and his company hasn't found productivity gains yet. It shifts work to code review and QA. There are too many little bugs to blindly trust the code. However, he suspects reviewers are too nitpicky about general code quality and is thinking about paring down code review, but that wouldn't solve the little-odd-bugs problem. One of the problems he's encountering is that the AI performs better when it has designs that work well for fleshbag eyes, so they can't just vibe code.
Gloomy_Actuary6283@reddit
I'm assuming we're talking about LLMs, which are just a subset of AI.
I think many companies will surely collapse. But I also see another reason for potential collapse, apart from the often problematic quality: LLM models are few in number compared to humans. People have different experiences, biases, histories, thoughts. That is the main driver of the variety of ideas, be it books, movies, songs, products, business ideas, concepts...
With a small number of LLMs, trained in similar fashion, taking over more and more areas of life, I have the impression the diversity of ideas may collapse, if business goes too far.
But other than that, LLMs will stay as a tool.
chaitanyathengdi@reddit
It's inflated for sure, but it'll have its uses. And no, I don't think this is the kind of AI that will lead to AGI and ASI one day. Those are totally different beasts.
Crazyglue@reddit
I was (and mostly still am) skeptical of this hype train since it started. Work gave me a free subscription to Cursor though, and it's been an incredible speedup for me. It's still not good enough to notice specific nuance or to stop itself from going down a really dumb rabbit hole. But for hammering out tests, boilerplate, and general structure, it feels like a 10x increase.
donjulioanejo@reddit
Let's be real, CEOs of companies like Google or Microsoft absolutely know the limits of AI.
But they're also selling shovels in the middle of a gold rush, so they're absolutely going to shill it and downplay any drawbacks.
data-artist@reddit
Yes, but this is how the Tech scam works. You are most likely going to be replaced by an offshore developer before you will be replaced by AI.
protomatterman@reddit
We are absolutely in an AI bubble. It's like the dot-com bubble of 2000. Everyone was rightly sure the internet would be big, but not exactly sure how or when. Everything was .com, even when it made no sense. It was terrible when the bubble burst. Maybe it won't be so bad this time.
Sheldor5@reddit
if you have to force a technology it's garbage
AI is big tech stealing money from big investors and lying to everybody to invest even more
none of the big AI companies make any profit (income covers only 10-30% of operation/development costs); they all rely on investments
and the term AI "Artificial Intelligence" is the biggest scam ever if you know its definition ...
keyless-hieroglyphs@reddit
I'm having similar disillusionment; the more I hear and learn, the more I think so.
The day the company tells me "AI is no longer optional" (as one company is said to have done), I'll quit.
MonochromeDinosaur@reddit
A bubble is only a bubble if it pops. Right now, regardless of whether there is one, the prudent thing is to ride the wave and have a contingency plan for the pop.
Crashed-Thought@reddit
VR and blockchain aren't things most people have touched, even by now.
AI is something every person I know has used.
I still think VR will have its time to shine, but AI is here.
TomTeachesTech@reddit
We are in an internet bubble
cromwell001@reddit
I wouldn't really call it a "scam" — same way blockchain or VR weren’t scams either. What you’re seeing is just the usual hype cycle, mostly driven by marketing teams trying to cash in while the buzz lasts. Like you said, we saw the same thing with blockchain.
I've been working in the blockchain space for about 7–8 years now. I still remember back in 2020–21, some marketing guy was telling me how NFTs were going to "change the world" and that we all needed to invest ASAP — but the dude had no idea what an NFT even was, or how blockchain actually works.
RedPandaDan@reddit
If AI is as amazing as they claim, why are we not seeing a massive spike in GDP from the economic boom as it brings us to the next golden age?
It will remain, but its not the wonder that the hype merchants are making it out to be.
zica-do-reddit@reddit
I think so. It's similar to the offshoring wave of the 2000s; eventually companies resumed hiring in Western countries. I think that as the market turns, companies will hire stateside again. The current AI stuff certainly helps and it's great tech, but it's no replacement for humans. Maybe when AGI is achieved and proven to work safely, but the jury is still out.
WorriedGiraffe2793@reddit
Yes it’s 100% a bubble
No-Row-Boat@reddit
Imagine the fallout of security issues and shit we have to fix if it is... My hourly rate will be insane.
No-Intention554@reddit
I think the dot-com bubble is a great comparison. It was clearly over-hyped to a ridiculous degree, but some of the dot-com companies are among the biggest companies today.
I think AI is in a similar bubble, it does have some value and some of the AI companies will probably find their place in the future.
hau5keeping@reddit
Not at all, these tools create genuine value. They make developers 10x or even 100x more productive. They raise the floor of code quality.
J-P-DOO11@reddit
If you are a 10x dev now because of AI, it means you were simply a 0.1x dev before it. And it raises the floor for code quantity, not quality.
If you really think you are 100x more productive now, it is either because you weren't productive at all, or because you are producing a ton of code blindly, sacrificing quality for quantity.
hau5keeping@reddit
RemindMe! 1 year "engineers who dont understand ai"
grizltech@reddit
That’s such a bs number.
Sheldor5@reddit
you forgot "/s"
_____c4@reddit
It's not a bubble; it's kind of like an awesome new tool. Think off-the-shelf cloud products, except it can be used in all areas of development and at a much cheaper price than other tools. It's also easier to integrate into your system. I think some areas will definitely become obsolete (think testing, front-end dev, infra development), but core engineering, systems design, and debugging will become more important.
No-Rush-Hour-2422@reddit (OP)
You and I understand that it's a tool. But the tech CEOs don't. They think it's alive. That's where the problem lies.
_____c4@reddit
Tech CEOs just say whatever increases their stock value, makes shareholders happy, and gets more customers; hence how they were all for Biden's policies and now they're all for Trump's. They did the same thing with big data, cloud, blockchain, and now AI. It's all optics.
SryUsrNameIsTaken@reddit
Whether or not they think it’s alive, they tell everyone they think it’s alive. I’d wager many of them don’t really think it’s alive.
supersnorkel@reddit
I do think AI is a bubble, but a way smaller one than, let's say, blockchain, as AI actually does have its use cases. I'm more wondering about when we're getting rug-pulled; no AI company is actually making money, and revenue has to go up somehow.
PM_ME_UR_PIKACHU@reddit
Just start writing all your code as ugly as possible and it will poison the AI.
SoggyGrayDuck@reddit
Do bubbles even exist anymore, or does the Fed think the impact is too big to let them pop? I'm so disgusted with US and global financial planning.
_FIRECRACKER_JINX@reddit
If YOU think the makers of AI will charge ANY less than YOUR FULL SALARY to replace you with their AI products, I have self-checkout software to sell you, friend. Self-checkout software is a glaring example of why this won't work out long term.
This is a problem that will be solved by capitalism. AI companies will charge 1x your salary or greater to replace you, because in an environment of ever-increasing profits that only go up and never down, that's the way it has to be financially to squeeze more ROI out of this business endeavor.
So NO, I don't think they're going to replace you. Not for long. Not until the AI companies start charging your wage or greater to replace you :(
Besides, nobody has addressed the "liability problem" of replacing your employees with AI.
What happens when the AI runs across a prompt-injected action? Who's going to pay out the damages if a human fails to catch the injected prompt? If it does harm?
Who's going to pay out the liabilities if the AI leaks anything proprietary? There are TikToks of AI leaking people's secrets to other people with the right prompting...
So I have a question for you: how much faith do you have in the insurance companies that will be responsible for insuring the business actions of AI, that AI is coming for your job, again?
RoadKill_11@reddit
I don't fully agree with the "auto-complete" logic.
We don't fundamentally know how human brains work either; isn't most thought/decision-making just pattern recognition and prediction on some level? (Given the context (current situation and past experiences), make a decision.)
I'm not saying we have AGI, but this line of thinking has flaws.
jonmitz@reddit
AI is only useful (and limited) for coding, it’s not useful for engineering
SpaceBreaker@reddit
Engineering as in building an entire System?
WholeRazzmatazz7658@reddit
I see a lot of parallels with the last decade of tech-hype cycles like blockchain, IoT, and VR, and I'm of the same mind as you that it's following a repeating cycle of hype then fatigue. It's a shame, because all of these technologies are amazing and provide incredible new capabilities. I think the hype train does them all a huge disservice by directing a ton of capital into those technologies and then pulling all of the money out as soon as payoffs aren't immediate. I've personally seen it destroy a number of promising applications before they could be refined. Like shitty cryptocurrencies instead of secure chain-of-trust/custody tools, or internet-enabled coffee makers instead of breakthrough medical monitoring or energy-efficiency applications. The money seems to want quick wins and has no patience for well-engineered applications that take time. If the investments dry up in generative AI, we'll likely be stuck with Italian brainrot and customer-service bots and none of the big breakthroughs the tech is promising.
Tiquortoo@reddit
Yes, and that's not a bad thing in general. Look at the hype cycle: new, cool things get high excitement, then a trough, then a period of "utility". The question with neat new tech isn't whether it will go through this cycle; it's the extremes of the peak and trough and the amount of utility. There is no perfect prediction. Blockchain ended up having relatively low utility in the near term, and AI is likely to have high utility. Both will go through a peak and a trough.
rooygbiv70@reddit
AI is transformative in some spaces and less so in others. Eventually (hopefully) enterprises will stop buying into the idea that there is some generalizable business use case for AI that will make "stuff" happen 20% more efficiently once you pipe your data into the model. When that happens, AI adoption and development ought to be more focused on areas where there actually is a clear use case. Whether that retraction will be significant enough to constitute the bursting of a bubble remains to be seen. I definitely think a lot of enterprises will see some significant investments not bear fruit.
Pentanubis@reddit
Most definitely. LLMs have brought some amazing science into a practical realm of use, but we were sold autonomous super-intelligence, and there is zero chance LLMs will achieve that. The bet was on the moonshot, and we aren't even out of orbit.
da_supreme_patriarch@reddit
The current hype around LLMs in software, and really in any other discipline that requires precision and determinism, is definitely overblown. For software, I am honestly baffled by what type of work people are doing where writing the code is somehow their bottleneck and not the million other organizational problems; and even then, LLMs cannot reliably follow a formal spec if their lives depended on it, so they aren't really that useful in established, large codebases. For other disciplines, people are gradually learning, sometimes the hard way, that LLMs are no substitute for attorneys and psychologists; you'd also have to be either very brave or very stupid to entrust current AIs with accounting tasks. All that being said, AI is not a bubble the same way blockchain was/is, because current LLMs are still pretty useful tools, both for average people in their day-to-day lives and for professionals in many industries, even with their limitations, while VR/blockchain are comparatively very niche.
duskhat@reddit
We’re absolutely in a bubble. It’s still useful, but the expectations are too high and the promises too wild
No-Rush-Hour-2422@reddit (OP)
Agreed
metaconcept@reddit
There's a bit of a bubble, but AI is genuinely disruptive technology. We're going to have a short period of disillusionment before scary AI arrives and the famines start.
Fyren-1131@reddit
News of our demise is greatly exaggerated; I think infinitely more so if you're American.
Bleeding-edge tech in Europe moves slowly, for a myriad of reasons. This is a good thing: it insulates us from high-octane hype and allows us to focus on things that matter (such as not worrying about our livelihood over the latest fad).
That said, I don't think AI is a bubble that will burst as such. It is useful, but right now they're selling solutions for problems that haven't been thought of yet. The problem is that this is pushed by deep wallets (Google, Microsoft, etc.), so it can't burst in the way I think people are expecting or hoping for. They'll find use cases, and they'll do so without hemorrhaging cash.
i_like_trains_a_lot1@reddit
It has some uses, but nowhere near the utility some people hype it up to have. And also nowhere near the price point they are advertising for many use cases.
I'm the CTO of a small startup, and I calmed down a bunch of people who fell for the hype by exposing the prompts and parameters to them. I didn't have time to tweak, and they quickly saw how inaccurate it is even in obvious cases; you have to include every single little detail in the prompt to get to like 80% accuracy.
Sure, it has some use cases, but you have to be very aware of the limitations to use it properly. We found many nice use cases in automated decision-making in our product, and some GenAI features for filling in missing data to a somewhat-good-enough state (in some cases missing data is worse than subpar data).
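Putting a number like "80% accuracy" on a prompt implies some kind of eval harness. A minimal sketch of the idea, where `call_model` is a placeholder stub and the labeled cases are invented for illustration (a real version would call whatever LLM client the team uses):

```python
# Minimal sketch of a prompt-accuracy eval.
# call_model is a stand-in; a real implementation would call an LLM API.
def call_model(prompt: str, case: str) -> str:
    # Toy stub: pretend the model flags anything mentioning a refund.
    return "yes" if "refund" in case else "no"

def accuracy(prompt: str, labeled_cases: list[tuple[str, str]]) -> float:
    """Fraction of labeled cases where the model's answer matches the label."""
    hits = sum(call_model(prompt, case) == label for case, label in labeled_cases)
    return hits / len(labeled_cases)

# Hypothetical labeled examples for a yes/no classification prompt.
cases = [
    ("customer asks for a refund", "yes"),
    ("customer asks about shipping", "no"),
    ("customer demands a refund twice", "yes"),
]
print(accuracy("Does this message request a refund? Answer yes/no.", cases))
# -> 1.0 with this toy stub; a real model on real cases is where the
#    "80% even with a very detailed prompt" observation comes from.
```

Running exactly this kind of loop in front of stakeholders, with real cases instead of a stub, is what makes the accuracy ceiling tangible.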
InevitableView2975@reddit
I think it's just going to replace the devs who do repetitive, easy stuff. It's not a magical thing. As a couple of YouTubers pointed out, the training data ChatGPT and other models had was average or below average, since that's most of the shared code online. I think it's just going to fizzle out and be used in some niche things. The one thing it'll leave us is that it may replace Google for some questions, such as why x happened or formed, since people just want to read the information ASAP instead of looking through sites.
But I must say I'm very annoyed by all the AI code and the AI-generated images and sites that look the same because of AI generation.
justleave-mealone@reddit
Idk I think expectations need to be altered, it’s not like the NFT bubble which was utter rubbish.
lab-gone-wrong@reddit
Sooner or later, the AI model/platform providers will have to stop subsidizing their customers and increase prices. That should pop the bubble, or at least let out a lot of its air, as underdeveloped prototypes and low value pet projects fizzle.
Absolutely a bubble but it can run as long as companies like Google and Microsoft have cash to throw at it. And that's potentially a long time.
originalchronoguy@reddit
Anxiety is fueling the fear more than anything.
AI, current GenAI, is more augmentation than displacement. Sure, there are specific jobs at risk. Those jobs were headed for displacement through normal automation anyway, with or without AI.
But anxiety, overall, is a bad thing. There is a general fear of upskilling / reskilling. A SWE can definitely upskill, outside of direct AI development, to stay relevant. Running models in prod requires infrastructure, data ingestion, data pipelines, HIL (human-in-the-loop) iterative re-evaluation, and guard-railing. All those jobs are more relevant today. All the problems I just mentioned are real opportunities today in response to the AI uptick. Those workloads can't be augmented or displaced.
ZenithKing07@reddit
!RemindMe after 24 hours
RemindMeBot@reddit
I will be messaging you in 1 day on 2025-07-09 20:12:49 UTC to remind you of this link