The Dumbest Move in Tech Right Now: Laying Off Developers Because of AI
Posted by oloap@reddit | programming | View on Reddit | 452 comments
Are companies using AI just to justify trimming the fat after years of over hiring and allowing Hooli-style jobs for people like Big Head? Otherwise, I feel like I’m missing something—why lay off developers now, just as AI is finally making them more productive, with so much software still needing to be maintained, improved, and rebuilt?
TadpoleNo1549@reddit
feels less like “ai replacing devs” and more like companies correcting years of overhiring tbh, ai just makes that decision easier to justify on paper, also productivity going up doesn’t always mean more hiring, sometimes it means same output with fewer people, still a bit weird timing tho, feels like we’re in that messy transition phase where no one really knows the right move yet
unicornsausage@reddit
Worry not, you'll be hired for double the pay when the vibe coding intern shits the entire back end and doesn't know how to re-deploy from a backup
Kintoun@reddit
You might be on to something. We will be entering the golden-age of contract programming.
why_is_my_name@reddit
Sure, but where to find these contracts?
West-Environment312@reddit
I have many problems
MoorhsumushroomRT@reddit
Those companies are gonna crash & burn.
Gamechanger925@reddit
Yeah!! It's quite true for developers, given how AI is advancing nowadays.
LuLuLuv444@reddit
They're lying when they say they're laying off employees because of ai. If you recall it was always the bottom 2%, now they just say it's AI replacing them. It's all smoke and mirrors
jacua9@reddit
I don't like this graph. I think it actually argues for laying off devs, as productivity is the same, and costs 3x less - a c-level's dream. In reality, laying off that many developers will cause problems.
AlertDoodle@reddit
Productivity is the same for about a year or two until tech debt from shitty AI code that nobody understands makes everyone burn out and leave.
Luke22_36@reddit
It also assumes that AI actually contributes to productivity
blackcain@reddit
It doesn't contribute to productivity. It does create a reliance on AI to the point that you stop thinking. I know this happened to me. For a while there, I stopped thinking even when debugging; I'd just mindlessly cut and paste errors into the window without truly understanding the error and what it was doing.
lolic_addict@reddit
C-levels don't care about this because more unthinking employees with AI are cheaper than more thinking employees. That's the bottom line until everybody left is an idiot.
blackcain@reddit
Until you have a crisis that requires you to understand the codebase. AI isn't good at anything it isn't well trained on. Like, if you have a bug it has never seen before in training, it will infinite-loop over the things it does know. You can't make it do intuitive leaps of logic.
Never mind that you now have limited human capital, so they are going to get the brunt of this. If it's junior people trying to fix it, they're kind of fucked if all they did was use AI to generate the code from prompt engineering, because they don't have the lived experience of understanding how code works. Let's not forget that performance is also an art form, and there are too many variables for AI to deal with.
psaux_grep@reddit
This happens all the time with new stuff. Everything is a silver bullet and everybody's aboard the hype train.
Before it swings back again.
AI is just another tool in the drawer. At least for now.
blackcain@reddit
Yes, we saw this with the Internet back in the early 2000s.
This time it's a bit harder, though, since LLMs and NLP make interacting with a computer so much easier. But you're gonna get some founders who have zero background in tech trying to spin stuff up. Especially your "tech bro" mentality ones.
Plank_With_A_Nail_In@reddit
Writing code isn't the hard part, AI just helps the founders find that out faster.
pentonicc@reddit
Makes sense
platoprime@reddit
What's the hard part? Selling your code?
blackcain@reddit
Maintaining it.
Nooberling@reddit
This comment is buried but so damn hilarious.
blackcain@reddit
hehe, the software development cycle doesn't go away, ya know. You could do the POC with AI, but ultimately you need to create maintainable code.
crunk@reddit
Now let's go and maintain some vibe-coded shite.
OK, let's not.
In fact, it will be just like when coding was outsourced to the cheapest guys they could find a few years ago.
New stuff that doesn't work, or barely works, and is utterly, utterly unmaintainable and not extensible.
As a contractor this is great - lots more things to rewrite, thanks cowboys!
platoprime@reddit
Oh.
Damn I don't wanna do that.
ArriePotter@reddit
Not going to lie man it's amazing for POCs. Will not be surprised at all when someone manages to vibe code something that makes enough money to hire a team to build it the correct way.
germs_smell@reddit
I love AI, but not for work stuff... as an intelligent search engine it's awesome. I can easily compare concepts, deep research into crazy topics, ask it to go more in depth.
I haven't been on Wikipedia in over the last two months which is crazy for me.
teaisprettydelicious@reddit
to me it feels a little like when "cloud" was taking off. you really have to turn off the marketing nonsense and somehow focus on what's actually useful while avoiding the snake oil
there's got to be an "AI" version of the cloud-to-butt extension out there somewhere
Bakoro@reddit
The pendulum almost never swings back completely. It's much like physics that way, you need an extra push to get back to where you originally were.
What I've seen happen over the decades is that new tools come along, and in some ways they make a class of things easier; at the same time they raise the bar for developers, so there's a broader scope of stuff developers have to know and manage, and several competing frameworks or tools to be aware of.
fzammetti@reddit
We're one major disaster caused by AI-generated code away from this whole thing collapsing. One train derailment, one airplane crash, a couple of days of no power in a major city... SOMETHING is gonna happen, and when people are already skittish about AI, that'll be all it takes.
The only question is whether the quality of AI-generated code can improve faster than the rate at which non-technical people come to rely on it to an unhealthy degree in the name of profits, which might allow us to avoid a major disaster. The jury is still out on that.
30FootGimmePutt@reddit
Tesla has killed how many people with their shitty self driving cars?
Still gonna roll out their taxis with zero oversight. Still not going to be punished for their actions. Still somehow not liable.
cinyar@reddit
I work for a company that makes trains, rail infrastructure, rail management systems etc. Those parts of the system won't be touched by AI code for a long time. Homologation is long and expensive and there's no room or time for vibe coded AI slop. Might happen to non-essential train operation stuff, but not to train or rail control.
Rentun@reddit
One day it will. All companies are driven by the profit motive, and all it takes is one genius CEO to come onboard from business school where he came up with a brilliant idea to cut labor costs with "AI driven optimization". It doesn't matter how competent everyone else is. Once you get that guy in, the shareholders will shower him with praise for making them so much money, but he'll be long gone by the time people are killed.
It's really sad, but our economy isn't set up to prioritize long term stability and safety. It's set up to chase short term profits.
cinyar@reddit
Luckily, I work for an old-school German engineering corp, not a tech company. The CEO of the whole company has a degree in physics and the CEO of our division has a degree in mechanical engineering or something like that. Both have been with the company their whole careers.
akaicewolf@reddit
No we are not. Is it AI that is working on the feature from start to finish, or is it a human that is using AI to do the work?
Whoever added the piece of code spat out by AI would be to blame.
IanAKemp@reddit
And what if that developer is doing so because their CEO downsized the number of developers because "we can use AI instead"?
akaicewolf@reddit
You think public is going to care? Reaction will be something like “poor SWE making 6 figures and can’t double check what he is copy pasting”
IanAKemp@reddit
Precisely my point; management ensures it never gets blamed for its mistakes.
NuclearVII@reddit
This is how AI bros work, dude. Y'all treat these things like Oracles of Delphi.
Oh, I'm sure you double check all the LLM output with a fine tooth comb.
lqstuart@reddit
The problem is “leaders” are never properly punished when their house of cards built on cheap labor falls over. I’ve never heard of a C level being shitcanned because of a security breach, and on rare occasions that high level positions are let go, they just fall into another seven figure position where they’re just as useless.
blackcain@reddit
They protect themselves. From the board of directors on down because the purpose is to make money for shareholders and themselves. Even if there is a breach, the loss is a rounding error when compared to profits
Lognipo@reddit
Actually, it does appear to be capable of inference and theorizing. I've tested this by challenging it, giving it weird hypotheticals, and just generally conversing with it. It certainly has more trouble with problems it hasn't had a lot of exposure to, but it isn't dead in the water if it hasn't seen the exact thing you need info about.
menckenjr@reddit
So can a stuffed rabbit or a rubber duck...
Lognipo@reddit
It sounds like you might benefit from speaking to a professional about whatever it is you believe you are hearing. They now make medications that can keep your stuffed animals and bath toys from trying to have conversations with you. They definitely should not be inferring or theorizing about anything. Inanimate objects don't actually do that. Good luck.
menckenjr@reddit
It's a reference to https://en.wikipedia.org/wiki/Rubber_duck_debugging, if you've never heard the term.
finah1995@reddit
Yep, exactly. Someone said something similar about people no longer depending on Stack Overflow (as if not knowing why a commonly used piece of code is written a particular way is somehow better). I was like, how does that make it better? If the Q&A exists, the AI can be trained on those questions and answers, but without being updated with live/current questions about new languages and frameworks, the AI won't know anything new and will just fall back to base documentation - and we lose the community mentorship along the way.
blackcain@reddit
Exactly that, you need humans to keep generating content for it to consume. If they stop doing that and instead just use LLMs then the LLMs themselves don't evolve.
Of course, that's why they love the open source people. They can keep scraping that. Without open source, AI really can't exist. Github is an AI goldmine.
You gotta still have entropy.
theQuandary@reddit
Is there an interaction between GPL and these models that would force them to open everything up?
Bakoro@reddit
Only in the same way that if you've ever read GPL code, all the code you ever write is GPL, which isn't how anything works.
I'm not any kind of attorney or lawyer, but I believe the words "convey" and "propagate", "aggregate", and perhaps "modify" are the most salient points in GPLv3 in this regard.
Someone would have an extremely difficult time arguing that an AI model reading the code and adjusting weights, constitutes a derivative source code or a derivative program under any definition they have. The GPL code is definitely not being run. Per the license, the use of an LLM over the Internet does not count as "conveying" per the definition provided in the license.
The license also explicitly acknowledges the right of "fair use". There is a strong argument that LLM training constitutes fair use.
The open source, open weight models which come with a research paper are absolutely covered under "fair use", as far as I am concerned, that counts as academic research, it's just that you're on your own to get your own training data if you want to make your own.
Even if you reject every other argument, the GPL code is being trained on independently, in conjunction with other independent works which are not extensions of the GPL code.
If someone claims that the LLM is a derivative work, then per the license it's probably an aggregate. At worst/best, perhaps anyone conveying an AI model trained on GPL code would have to provide the source code that got trained on.
That more or less covers it:
Open source, open-weight LLMs are basically already providing everything they need to provide; the only contention that could possibly exist is whether they also have to be under GPL if they trained on GPL code, which I would argue "no way".
The web portal and API LLMs are not "conveying", and as far as I see it, have no requirements under GPLv3 whether or not they are derivative.
myringotomy@reddit
Let's say a person has photographic memory. They read some GPLed code and they memorize it.
Then you ask that person to write some code and he takes parts of that GPLed codebase he memorized and writes the code for you.
Is that legal?
Bakoro@reddit
I am not a lawyer and I don't even know what laws apply to you.
I can't tell you what is legal or not.
What I can say is that copying whole blocks of code is probably copyright infringement. Using similar code snippets is almost certainly not copyright infringement.
Using snippets which have been rearranged and mixed in with other stuff is not a derivative work under U.S copyright. For code, only a specific implementation is copyrightable.
Two books might have similar phrases and similar sentences, but the stories are different. It would be non-stop insanity if people could credibly sue for copyright infringement over small overlaps in the use of language.
The memorization part is immaterial.
theQuandary@reddit
As a counter-example, I'd point you to the PC BIOS war between IBM and Compaq.
IBM was suing any PC cloner who tried to make their own BIOS. Compaq got around this by creating two teams. One team studied the PC BIOS and created extremely detailed specs. The second team implemented the BIOS using those specs. This happened in part because even simply reading the IBM manual gave enough information about the software itself to get sued (I believe the guy who discovered this was removed from the team).
IBM sued and lost, but only because this cleanroom approach wasn't considered a derivative of the code -- only the functionality.
The AI case is the complete opposite of what Compaq did. It looks directly at the code then writes direct derivatives from that code when it spits out similar projects based on the code it was looking at. That's not a cleanroom reproduction at all which seems to imply that it is a derivative which would imply that the code would then need to be released under GPL.
Bakoro@reddit
I haven't seen any credible examples of this since GPT2.
What I have seen is that with special effort, researchers have been able to get extremely degraded versions of images, when there were very few examples of a label in the training set (like less than ten examples of a particular thing), but where the images exist with disproportionately high frequency.
I have seen instances where researchers have been able to approximate extremely well known images which are overrepresented in the training data (like the Mona Lisa, or advertising material). I have not personally seen any significant cases of copyrighted image memorization in any research papers or anywhere else.
I have seen instances where researchers could continuously prompt an LLM into iteratively modifying output so that it is close to something else that they claim must be in the training data. Some of that "research" borders on fraud. If you have to prompt an LLM hundreds or thousands of times in order to get it to spit out a copyrighted sentence, or something sorta like a block of code, that's hardly evidence of meaningful memorization.
The absolute best example I've seen of large scale "memorization" in LLMs is still trash. There was a paper released about a vector of attack for extracting "gigabytes of training data". The smoking gun examples were... Licenses, those things that nobody reads, and where every business copy-pastes from someone else and just switches around the names.
The "memorized" content was ... JavaScript boilerplate. The shit that is on thousands of websites.
The "private identifiable information" was... Public information released with the expressed intent of giving people the ability to contact that person, like, advertising and business information.
That's the great memorization Boogeyman: a bunch of overrepresented data. And they had to generate terabytes of data to get gigabytes of garbage.
From what I have seen, the little evidence there is surrounding AI model memory, is that in specific and niche cases, you can spend a whole lot of effort to get an LLM to spit out something that's like a specific thing in the training data, if you explicitly make a concerted effort to do so.
If AI model memory was really a significant thing, we'd have people hammering the point nonstop all day every day. I see a lot of AI hate on the regular, the cries over AI model memory mostly died out like two years ago because it's not a significant issue.
As far as math proofs go, there is all kinds of math which demonstrates that the models cannot simply be memorizing and compressing all their training data, they absolutely, mathematically, must be generalizing on the data in order to do what they do. It's not a question, it's not a grey area, it is impossible for the models to pack petabytes of data into a couple hundred GB.
What you are claiming is that anyone who has ever looked at GPL code must release all future code under GPL. That is absurd.
The cleanroom approach is also something of an absurd legal fiction. Whether the developers looked at the code or not, it's impossible to prove that they never looked at it. It's entirely possible that they looked at the code to make sure their solution was different; you can't prove otherwise, you just have to take their word.
I get it, you're an AI hater and I can't change your mind.
I myself am an unabashed pirate, fuck copyright, I will pirate anything, I'll download a car, I don't give a shit.
LLMs are "stealing" anything. Even if you want to claim that it's piracy, I'm all for piracy, I would not care if it was piracy, I'm pro-piracy.
It's not piracy, it's learning. If you hate AI models, then you hate the concept of learning.
You have a problem with corporations? Go do something about corporations, I'm into it. You have a problem with ultra wealthy sociopaths abusing AI models and technology in general? Go do something about the rich sociopaths, you have my support.
blackcain@reddit
No, because I believe the courts have ruled that everything AI generates is "public domain", so essentially an LLM trained on GPL-licensed software is doing "code laundering", where GPL'd code is now effectively being generated as something else.
Bakoro@reddit
There have been rulings that AI-generated content can't be copyrighted, but also that a human making meaningful alterations makes the work copyrightable.
Also, if the AI is simply regurgitating GPL code then you have all the same legal recourse. You can point to the two source codes and demonstrate how one is infringing; intent and who authored the infringing work don't come into it.
blackcain@reddit
We'll have to see how it pans out in the courts.
It's hard to tell if they are regurgitating, since it's generating with a specific purpose in mind. But clearly, if you're training on GPL code it has some nuanced effect, and that would be the case they'd build on - it's not harm so much as your product consuming data with no compensation.
Bakoro@reddit
No, it is either producing code already covered by a license, or it isn't.
If you can't tell by analysis, then it's not the same code.
edgmnt_net@reddit
Arguably proprietary software isn't part of the training set and AI is primarily aimed at proprietary feature factories.
yangyangR@reddit
Or they just take it anyway if it's proprietary but they found it somewhere.
blackcain@reddit
Yep!
teaisprettydelicious@reddit
You realize it can train on the code in your editor and on what the final result of the cycle is, right?
all the trial and error developers are doing with these plugins is just a big RLHF loop
finah1995@reddit
Lol, if companies and even high-profile individuals find their private source code being sent back to remote servers when they haven't opted in to having a private, closed-source, closed-weights LLM trained on their code, the service providers will be sued into oblivion and back.
There's a reason SO is open, with none of the answers hidden, and why it's bite-sized Q&A rather than full SINGLE FILE project dumps: so that readers understand the logic and context and can apply it in multiple places and languages to solve problems.
Training on code can happen in open repos, and only where the developers allow it per repository in their workspaces; otherwise it shouldn't just take the data.
If all that trial and error goes back to the companies regardless of whether it was allowed, that's a lawsuit, so only open-source repos whose developers allow it should have their code used for training.
CrashXVII@reddit
Similar thing happened with delivery drivers. Used to have to read addresses and learn an area. Then iPads and routing software made it so anyone without a DUI can technically do the job. But only as accurately as the routing software and sorting system works.
Now no one pays attention to what they’re doing. There’s a ton of failure points in the system that can lead to mistakes. Wages went through the floor so they can’t retain people who might actually care about doing good work.
yangyangR@reddit
C-levels never have to implement anything so their brains turn to mush from atrophy if they ever had any in the first place. They can't think properly so they believe that thinking is not necessary for anyone else's job either.
All they have to do is own capital and shout out their whims no matter how detrimental they are to long term health of the company.
ricco19@reddit
And most people who are doing this will never admit it because it's 2025 and people are averse to shame.
blackcain@reddit
I think we'll hit a crisis - and the companies that were smart about how they approached AI are going to win. AI is a great learning / teaching / onboarding tool.
For instance, if you joined the company, you can use AI to figure out complicated codebases because that's pattern matching. If you stick to the pattern-matching bits and limit code generation to sample code - you're good.
I use AI to understand how something works, not to create a solution.
Perentillim@reddit
It’s pretty great for breaking down syntax I’ve not seen before.
I’ve just joined a new company with a new language and a ton of new tech and it’s been invaluable in stopping me killing myself with the amount of stuff I need to get in my brain
edgmnt_net@reddit
Has it, though? 10 years ago people had similar issues using StackOverflow to get things explained or to see example code. Which may be fine for bootstrapping your knowledge a bit or finding a quick answer, but what if you never develop the skills to chase API definitions in the docs or read the language specs, for example? Plenty of devs seem to learn those skills very late if at all.
Echarnus@reddit
Having better knowledge of the language/framework will result in you creating better prompts.
acc_agg@reddit
I just put the language specs and API doc in the context window.
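A minimal sketch of that approach, assuming the OpenAI Python client; the model name, file name, and question below are placeholders for illustration, not anything from this thread:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Drop the full spec / API reference into the context window, then ask
# questions against it rather than against the model's training data alone.
spec = open("api_reference.md", encoding="utf-8").read()

reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Answer using only the attached API reference."},
        {"role": "user", "content": spec + "\n\nHow do I paginate the /users endpoint?"},
    ],
)
print(reply.choices[0].message.content)
```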
Athen65@reddit
You get an immediate response and you get to ask followup questions with immediate responses. Of course it's invaluable for areas where you have low/no expertise.
Perentillim@reddit
Right but in Google I have to be aware of the special characters I’m using and make sure they don’t get trimmed, find a good answer etc.
Copilot definitely streamlines that
There’s been a bunch of stuff that’s stumped me and that’s when I do have to fall back on the biobrain’s ability to find answers. For instance I had a python threading issue and the AI completely neglected to mention an entire technique to avoid deadlocks
wpm@reddit
When it's reliable, sure.
I can't learn from a thing that lies to me. It gets things wrong, and now I'm relying on falsehoods to build my understanding, and there's no one to blame when it turns out I got it wrong, just me, looking like a fucking dumbass "cause the AI told me it was that way". If I have to double-check everything it tells me, I might as well skip the "chat" bullshit.
blackcain@reddit
I agree. You really need to be careful.
Hell, I used to work in IT. I didn't even listen to the IBM guy who did our support for our big iron servers. I would ask him exactly what each command did before doing it, exactly for the reasons above. Even if they told me to do it, I'm still on the hook if something goes awry.
Fridux@reddit
I also find AI to be useful in code reviews, but writing or debugging code on my behalf? That's not going to happen! I value having knowledge and experience way too much to let AI take any of it away from me. Even in cases where it's just boilerplate code, programming languages have macros, editors have snippets, and battle tested libraries exist to solve the more complex problems.
GeoffW1@reddit
Comment of the day IMO. I really wish more people would attempt to fix their faults and better themselves.
StickiStickman@reddit
This sub really is still in hard denial huh?
marrsd@reddit
I saw a report recently (I think from Google) that AI reduced stability (increased bugs), so that may actually be true.
menckenjr@reddit
The key is going to be knowing how to write that code without AI so you know when it's full of shit. Absent that, what you're generating is tech debt that you won't know how to retire when the time comes.
sentry_chad@reddit
These people are ridiculous lol. Or it's just a skill issue. It isn't quite a complete game changer yet... but it's getting there
agumonkey@reddit
I'm becoming a slow copy paste agent between claude and our codebase
Any multi-agent setup will soon replace me
blackcain@reddit
Or you manage the agent. Again, if there is any gap in the training it will infinite loop on what it knows, and not make a new pattern. Humans have entropy. Agents do not.
poincares_cook@reddit
AI does contribute to productivity for me. At this point I'm using it to write tests and write POCs faster. It's one of my methods to learn a new tool, in conjunction with documentation, blogs, books. It helps write configs, helps write documentation.
It's all auxiliary; it's rarely useful at writing code, but I don't write much boilerplate code these days. It is effective at speeding that up.
Possible_Knee_1443@reddit
do you have users of your code, tests, docs?
so far, being on the receiving end, i loathe reading generated content because it wastes so much of my time with its verbosity.
acc_agg@reddit
Ask an llm to summarize it for you.
Possible_Knee_1443@reddit
Cool, so we’re doing waterfall again? Tight tight tight.
acc_agg@reddit
This is what people don't get. I learned JS in two weeks after spending 20 years avoiding it. I can ask it about all the stupid syntactic sugar that's been added over the versions without killing myself.
Can it write good code? About 70% of the time. Can it explain shit code? Oh fuck yes.
blackcain@reddit
100%, you cannot use it for production code. But you can use it for POCs or being able to ask questions about code - as I said somewhere else, anything that requires "pattern matching" is good. I think you can have a really great onboarding experience if you are joining a company by using LLMs trained on the codebase.
Kiwi_In_Europe@reddit
Have you guys actually tried the latest coding models released by Google, Anthropic etc?
Someone literally prompted a Pac-Man clone. No code of their own, every line came entirely from the LLM.
When working on a larger project, it will obviously have risks. But so do human developers, people fuck up all the time. And it's getting to the point where it's faster to generate the code and fix the errors than doing it all by hand.
edgmnt_net@reddit
I agree and I actually bring this up quite a bit. But do you think junior devs don't drag projects down when their density increases beyond a healthy point? Also, there are issues related to acceptable failure modes (e.g. hallucinated external package names, gap filling that makes no sense but could inadvertently pass type checking), determinism, and the ability to train those errors out of the AI; these are rather unsolved problems.
Most of the work I do just isn't typing heavy nor would benefit significantly from rapid generation unless accuracy was excellent. Good luck fixing errors in code that's never been understood by anyone, especially when a lot of reviewing is already more or less rubber-stamping. I'll reconsider it once these issues get closer to being fixed, right now I've only seen some random PoCs of simple applications.
Beyond that, generating huge swaths of code and customizing it has always been problematic on the review side even without AI. The argument that "conciseness doesn't matter because the IDE can easily spew 5 kLOC of classes, you just need to adjust it a bit" fails really badly in the real world. And quality is already crappy even without AI in the wild.
NuclearVII@reddit
Read: Stolen from others online
Kiwi_In_Europe@reddit
That's not how an LLM works buddy, would have thought someone on this sub would understand that
NuclearVII@reddit
It's 100% how it works.
All generative AI models are lossy, non-linearly compressed representations of their training corpus. That's why these things do well when they are prompted with output that's in their training set - interpolation is much easier (and something LLMs can do) than extrapolation (which they are ass at). If a model is able to flawlessly generate Pac-Man on demand, dollars to donuts a version of that game was in that training corpus - and the same dollars to donuts that it was obtained without consent.
This is how all statistical models work - the assumption that if you have enough of a domain's worth of data in your training set, you'll have full coverage of that domain and ALL tasks are interpolation tasks.
Your comment history is wall to wall OpenAI wankery, you'll forgive me if I don't take you too seriously.
Kiwi_In_Europe@reddit
Jesus Christ how the fuck can you expect someone to take you seriously when you start your argument with something so hopelessly incorrect.
If AI training is compression then you may as well class everything in life as compression. A baby learning to walk? Bam, compression. It's a completely wasteful line of reasoning.
The information to make Pac-Man does not exist in the model. It literally doesn't. You can access multiple open source models and see that data of that type does not exist. Compression is just the absolute wrong term to encompass what AI does, legally, technically and ethically.
Didn't think luddites knew how to use the internet but life is full of surprises it seems.
NuclearVII@reddit
> If AI training is compression then you may as well class everything in life as compression. A baby learning to walk? Bam, compression. It's a completely wasteful line of reasoning.
No. Human beings are not statistical machines. This isn't a concession any sensible machine learning engineer will make.
> The information to make pac Man does not exist on the model. It literally doesn't.
It literally does - that's how the model makes it.
You strike me as the kind of AI bro who has never trained a foundational model from scratch, am I right?
Kiwi_In_Europe@reddit
No that's not how it makes it lmao.
It doesn't store instructions on how to make pac man. No more than stable diffusion stores images of Mario. Or do you think SD has somehow compressed 6 billion images into a 7.5 GB file?
NuclearVII@reddit
You are so close.
That's the "lossy" part of the program. When you train a statistical model, you repeatedly use gradient descent (or some other flavour of optimizer, but gradient descent is the most common these days) to determine the what weights correspond to the most likely fit of that model. So, yes - something like stable diffusion absolutely contains instructions to make mario - just not the terrabytes of mario images.
You can see this effect yourself: get like 10 mario images, build a rudimentary MLP model that's about 4-5ish times the size of one image, and keep training it until the model more or less perfectly reproduces those images. It should take you about 15ish minutes if you have the environment set up, but if not, I'd take my word for it that this is a thing you can do.
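A minimal sketch of that kind of experiment, assuming PyTorch/torchvision and ten small images in a local mario/ folder (the folder, image size, and network size are assumptions for illustration, not part of the claim above). The MLP maps a one-hot "which image" code to pixels and, with far more parameters than there are pixels to fit, simply memorizes the set:

```python
import glob

import torch
import torch.nn as nn
from PIL import Image
from torchvision.transforms.functional import to_tensor

# Ten small images, resized and flattened into pixel vectors.
paths = sorted(glob.glob("mario/*.png"))[:10]
images = torch.stack(
    [to_tensor(Image.open(p).convert("RGB").resize((64, 64))) for p in paths]
)
targets = images.flatten(1)        # shape: (10, 12288) pixel values in [0, 1]
inputs = torch.eye(len(paths))     # one-hot code per image

# Heavily overparameterized relative to the ~123k pixels being fit.
model = nn.Sequential(
    nn.Linear(len(paths), 64),
    nn.ReLU(),
    nn.Linear(64, targets.shape[1]),
    nn.Sigmoid(),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Train until reconstruction error is near zero, i.e. the weights have
# effectively memorized the training images.
for step in range(2000):
    loss = nn.functional.mse_loss(model(inputs), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final MSE: {loss.item():.6f}")
```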
It's the same principle with these generative models. It just so happens (for lots of mathy reasons) that the more mario images you have, the better the neural network compression gets.
What SD doesn't do (and what NO NEURAL NET BASED STATISTICAL MODEL DOES) is actually learn how to draw based on the training data (like a human does) and then figure out what you want based on the query "mario". Because it's not a person, it's a statistical model. All it can do is interpolate in the training corpus.
Perentillim@reddit
Well yeah, now get it to make a novel game.
I don’t doubt it can make great strides but I’d be surprised if it spits out something that just works
Kiwi_In_Europe@reddit
How many websites are truly novel these days?
There are plenty of examples of it spitting out something that works
Perentillim@reddit
“Make me a 2D Pikmin-Rock Raiders cross over”.
Let me know how it does
Kiwi_In_Europe@reddit
Can you read? My entire point was that most websites are fairly identical in function. Truly novel code is not really needed in webdev.
Also, can you make a 2d pikmin rock raiders cross over lmao
Perentillim@reddit
Not yet!
blackcain@reddit
You can read all the stories you want. I've seen these posts but unless they provide you a method where you can reproduce it then it's nothing but clickbait.
Kiwi_In_Europe@reddit
Uh, what? You can find videos demonstrating this with a five second Google search lmao.
blackcain@reddit
OK, I did that, there was only this one: https://www.reddit.com/r/ClaudeAI/comments/1dna428/playable_pacman_in_two_prompts/
What was presented there was ok. But the hype was all about a full fledged game.
But otherwise there was some post from 2020 about Nvidia researchers creating a pacman game with not a lot of details.
poincares_cook@reddit
Yes, I usually use Gemini lately.
Thing is, there are 10000 pacmans written for the LLM to steal. There is only one of the code base I'm writing in.
Even when writing a new service, sure, AI can take care of some boilerplate, but that's literally a few mins of work tops if I do it myself, with zero chance of hallucinations (I do use AI for tests, too much repetitive boilerplate). Then the rest of the code is usually something specialized enough that the LLM spews garbage. If I do pass very specific requirements, then the design is just subpar to what I'd do myself, with the added risk of hallucination, and almost certainly not built for scaling. The logging patterns are bad unless I put so much effort into prompting that it's less work to write it myself, and there's some added risk of security vulnerabilities.
I'm sorry, an LLM is great for writing Pac-Man or any of the other beginner projects that have literally been done to death. It has its uses and does speed up development.
Especially in the POC, learning and testing phases.
But it's not there yet for writing actual production grade software for the general case imo.
If all you do is CRUD your experience may vary.
Kiwi_In_Europe@reddit
Realistically how common is that though?
Looking at webdev for example, how many projects are truly novel creations Vs cookie cutter projects that can be easily automated by an LLM?
Obviously AI will not be applicable in every situation including yours, but people denying it will have tangible uses in the industry are being silly.
poincares_cook@reddit
What's webdev to you? Is Google search, YouTube, Facebook, Instagram etc web dev? How about Amazon, maybe the web platforms banks have? Maybe Wix like site builders?
Or are you referring just to the WordPress/Shopify websites?
I honestly doubt most of the job market is for devs working on the latter.
Perentillim@reddit
I’ve been using it for testing in agent mode and it’s done ok. I think it’s more a testament to my code than its own skill though, it makes a hash of anything moderately complicated
shevy-java@reddit
To be fair: it could contribute to productivity AND increase the reliance on AI, at the same time.
blackcain@reddit
Sure, there is a tradeoff but you should definitely keep boundaries
Genesis2001@reddit
It's "good" if you don't have a set of languages you use daily or if you've just picked up a language (either through the atrophy/exercising cycle or learning it) to find out what an obscure error is.
Beyond that, it's not great for coding for that same reason you mentioned: you stop thinking critically about the code.
For me personally, I feel like I'm capable of using it sparingly as I know enough to bridge the intuition gap "AI" has. So I just use it as a pair-programmer or rubber-duck to brainstorm structure for me to code something. Occasionally, I will ask it about particular newer syntax in C# that I see in examples online because I let my C# atrophy a bit between several major versions.
GreatScottGatsby@reddit
I'm not going to lie and say that I've never used ai but sometimes when I ask it a question and it tells me the way I want to do something isn't possible, I will go out of my way just to do it just to prove that I'm smarter and more creative than the ai.
Bakoro@reddit
If you become reliant on the tool, then it is obviously contributing to productivity. If you rely on the tool so much that you stop thinking and you haven't been disciplined/fired, then it is obviously doing something productive.
The danger is the same as self driving cars: it's good enough 70~90% of the time, and it lulls people into complacency so they are not ready when they need to take over immediately.
People are generally not equipped to sustain being in a high awareness state for a long time without actually doing anything. Eventually the brain goes into low power "wake me up if I see a lion" mode; and unfortunately, a lot of cognitive skills are "use it or lose it", skills rapidly decay.
blackcain@reddit
That's the rub, isn't it? They want you to use AI while cutting the number of developers as it will make you productive. While I agree that AI can make you productive, you need guardrails.
My experience from experimenting with AI is that it's pretty decent at some things, but it doesn't admit when it's weak at others due to lack of training, and it will string you along until you realize you're not really going to get anywhere.
xFblthpx@reddit
Seems kinda obvious here that the comparison is between a dev with versus without ai support tools.
Luke22_36@reddit
When you factor in the time and effort to troubleshoot, understand, and fix broken AI-generated code, and compare it to the time and effort to write it correctly from scratch?
I've heard from multiple people who have empirically measured this, and come to the conclusion that AI makes you less productive, not more.
xFblthpx@reddit
Why would you compare AI use to something you know? the obvious use case is for concepts you are less familiar with.
Obviously just knowing everything is more efficient, but it’s just not reality.
Luke22_36@reddit
That's even more dangerous, because you don't spot hallucinations in things you're not familiar with. So then it fucks up, and then you have to take the time to figure it out enough to fix it, and by that point, you could have figured it out enough to write it in the first place.
AdamAnderson320@reddit
I mean, it can, but
The graph way overstates the multiplicative power of AI assistants. It ain't 10x on average. Maybe for the most entry-level junior engineer it could be. But for senior-level engineers, it's probably more like 1.1-2x based on my own experience and observations.
Luke22_36@reddit
Yeah, but also factor in all the lost time working on AI generated fuckups.
oloap@reddit (OP)
The point is actually different.
AI increases devs output.
You can keep the same output as of today (less devs + AI), or increase your output (same devs + AI).
Today's output is producing low quality products, hence whoever uses AI to increase output will make you obsolete.
saera-targaryen@reddit
I just don't think AI increases my output that much. Only like 10% of my day is actually coding; the rest is meetings and requirements gathering and prioritization and documenting and communicating and researching all possible solutions to compare and getting approvals for changes. AI has not really helped me in any of those areas, and in my code it could maybe speed up 1 or 2 of those percentage points, because most of what i'm doing has too broad a scope and isn't boilerplate enough for it to be helpful. So saving, what, 1-2 devs per 100? How much does an enterprise license of chatGPT cost per 100 users? I bet it's more than those devs cost to begin with.
Perentillim@reddit
Are devs really spending that much time not writing code?
I’ve been on greenfield projects for the past year. Pushing 80-90% of my time is my own for coding. I’ve just been put on a new project, we had a day discussing what we wanted it to do and then I got on with it.
I get that things might be less well defined and need discussion, but even then my tendency would be to write some code and see what people think than sit in a meeting
saera-targaryen@reddit
My job is maintaining and optimizing/updating an existing large system that's internally facing to my company and that touches a lot of other teams that all have legal requirements around the system so yeah. Every time I build a new system requirement or bug fix or item from our backlog we need to pass the changes through a bunch of layers of stakeholders for approval and i usually have 5-10 of these in any one state at any time. So, while i'm coding one, i'm researching the next while meeting with internal teams who know all of the laws and stuff on a few more, as well as meeting with users to see preferences, and any other owners of systems we might integrate with to see if it impacts them, and then prioritizing these changes to come into effect during windows that don't affect others, while also coordinating with my team to prioritize and manage workload balance, and then i have to document, and communicate all of that up to leadership to make sure we're meeting goals. I don't have a sexy FAANG or fintech startup job which means people actually heavily care about the outcomes of what i'm doing lol
Perentillim@reddit
Fair enough! I’ll admit I haven’t shipped a lot of code to users this year! Though my previous team is about to launch and getting a ton of accolades
jl2352@reddit
I’m fortunate enough that just as the AI boom is getting good, I’ve joined a company where meetings and plans are clear and productive. I’m also left (and encouraged) to do what I think is right. They even … get this … trust me!
This means I get a lot of time programming. I do more pair programming and technical write ups of planned work, and more programming, than my last job. The board is better organised too.
For me, AI has been a huge benefit. Literally doubling the speed to produce things. I use the extra time to write more tests or refactor parts, which means the codebase is easier to work on, which means I can worry less about the AI and go faster, and so on. It’s really productive.
Previously I was at a company with more engineering OKRs than engineers. Individual tickets would take weeks or months. It once took three months to run a branch on QA (part of the issue was the CTO was adamant QA was working fine and we should change nothing). There AI would do fuck all to help. You’d still be blocked by petty arguments on Slack from other teams. I’m not at all bitter.
saera-targaryen@reddit
I get to work very fast and my team trusts me a lot too! i'm actually a technical manager so i'm really the highest tech decisions go. I don't have a lot of "wasted" meetings unlike a lot of other companies, I just have a lot of coordination to do between a very large amount of different stakeholders since i work on a "source of truth" system for my company that affects financials and employment laws. I really do need that time with the other teams to hammer out EXACT DETAILS of every single possible use case to make sure we're following laws, because i am a programmer and not a law expert but the law experts are the ones who know if my code is working correctly. Not only that but the changes i make tend to echo through other downstream systems that we are integrated with, so one skipped misstep and the entire company has a highly visible crisis because we are not a tech company by trade and my stuff is all internal. I do bet it's a lot different for companies whose product is the software being developed but, as the french say, say lah vee.
I am lucky that my position doesn't have to be some shiny magical SaaS tech bro AI blockchain VC funded rocket, and that the programming i do is (in my opinion) fun and i get to see the direct benefits that it gives to my company's other teams and by extension all of our employees.
NuclearVII@reddit
No it doesn't. This is just more hype bollocks.
Devs who find themselves much more productive thanks to automated plagiarism are either a) not great devs to begin with or b) only larping as devs.
Berkyjay@reddit
As much as it does legitimately contribute to productivity, it can also just as easily become a detriment to productivity. LLMs are designed in such a way as to be "people pleasers". So it won't really ever say no to you and will work to provide ANY answer regardless of whether it's the correct, or even relevant, answer. If you aren't vigilant with it, you can very easily be led astray and down the wrong path to a solution. So in the end, I feel it is a wash.
GCatalinStefan@reddit
Well, the devs are going to see what artists went through. I hope it is even worse for them.
InvolvingLemons@reddit
Importantly, AI solves problems but at the cost of far worse tech debt. We might not be seeing the true Faustian cost yet, as it takes some time for tech debt to impede security, compliance, or bug fixes.
oloap@reddit (OP)
That's exactly what c-level execs already believe. But the article explains why you should opt for the third option or your company will be left behind.
br0ck@reddit
The trick is to lay off the c-level execs who make as much as 10 developers and whose entire job can easily be done by AI.
PathOfTheAncients@reddit
I actually get annoyed at how much the whole world is ignoring that AI would be far better at replacing management than it would be at replacing contributors.
Dividing up and planning work, managing timeframe and predicting delivery dates, offering advice and support to workers are all things it seems decent at.
Yseera@reddit
This is one of those things that reveals the lie that capitalism is about running the most efficient business. Instead, it's about extracting the most value for the ruling class, partly by automating the working class.
PathOfTheAncients@reddit
Yes but also after years of working for these companies and executives I no longer believe it has anything to do with value. They waste so much money. What I really think is that most companies from the c-levels through middle management are mostly just doing things to feel important and feel like they have control.
The dumb thing is that if they succeeded in replacing their workforce, they would be miserable being in charge of almost no one. Although I doubt a scenario exists where there are no workers but there are still high paid executives or managers, most of them would be gone as well.
Perentillim@reddit
Which begs the question, what the hell happens to all of us? Is that why they’re lurching towards fascism, to lock down control ahead of everyone being redundant? What are they going to do with all the unemployed people in their dreamland scenario where we’re all redundant.
It’s either genocide or… really hoping their security teams don’t have relatives that are suffering?
Ansky11@reddit
What do you think the vaccines were for?
PathOfTheAncients@reddit
Yup.
I feel like it's been clear to futurists for a while the vast majority of jobs will be automated by 2040-2050. A lot of people are just waking up to that. Capitalism can't survive it, so what do the rich capitalists do? They would never go communist/socialist, feudalism doesn't make sense without workers, so fascism it is.
Still doesn't give them a plan for what they'll do. Seems like a turning point to me, humanity will move towards something utopian or dystopian in a hard way.
TigercatF7F@reddit
The Twilight Zone: "The Brain Center at Whipple's"
Sorted that out back in 1964.
acc_agg@reddit
CEOs aren't the ruling class. They are just well paid workers there to soak up the anger that should go to the people who own the shares.
ltdanimal@reddit
Yeah I'm sure devs would have no problem with all the above being done by AI and their performance reviews done by a fancy ChatGPT wrapper. /s
Pomnom@reddit
Performance review is nothing but theatrics. You assume there's an objective way to compare people work with some vague standards? Especially since the work changes from week to week, but the standard is set in stone?
PathOfTheAncients@reddit
Every dev I know thinks performance reviews are a joke at best anyway. I have never had a manager who helped more than harmed us getting work done.
AI would be far better at the tasks I listed than at writing code but no one is talking about using it for that.
HotlLava@reddit
I mean, that's just called starting your own company? Nobody's stopping you if you think you can do it and replace all management with AI, or from joining someone else's company where all managers are replaced by AI.
dimon222@reddit
if you remove them, who is going to lead their subordinates? AI? You're mistaken, peasant, this is not happening in a world driven by human greed; humans aren't going to give control over an org to an unleadable machine that is meant to just follow commands.
dimon222@reddit
Perhaps I should have added /s. I guess reddit doesn't take this topic lightly and truly believes execs are just sitting there doing nothing and receiving 7-digit salaries. They deal with a different kind of challenge: selecting a direction that benefits themselves and their shareholders. People up the food chain are unlikely to suggest replacing themselves, so it's an endless loop, and since profit for themselves is at the core of capitalism, it's unlikely this power will ever be given away.
Halkcyon@reddit
Okay, Jamie Dimon.
Ashamed-Simple-8303@reddit
and better because it doesn't rely on gut feeling but actual facts
Sharlinator@reddit
Hallucinates less.
blackcain@reddit
how would they be left behind? I don't get it? What does AI provide exactly other than arguably paying less for coders? I think these folks are gonna go down the pipe and then find out that there are a lot of missing bits.
hippydipster@reddit
If you're paying a coder, it's presumably because they make you more money than they cost. If now they produce 3x as much as they used to, then that profit is going to you, and you should be wanting many more coders to get all that profit.
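To put rough numbers on that (these figures are purely illustrative, not from the thread):

```python
# Hypothetical figures for illustration only.
cost_per_dev = 150_000       # fully loaded annual cost
value_per_dev = 200_000      # annual value produced today

profit_before = value_per_dev - cost_per_dev       # 50,000 per dev
profit_after = 3 * value_per_dev - cost_per_dev    # 450,000 per dev at 3x output

print(profit_before, profit_after)
```

If each dev goes from clearing 50k to clearing 450k of profit, the incentive points toward hiring more of them, not fewer.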
It's like Jevon's Paradox. When you increase efficiency, you often end up using more of the resource, because now that it's cheaper to use, you want to use more of it and reap the benefits.
Some companies will see this and will take over a lot of markets, because the barrier to entry has been greatly decreased, the risks decreased, and the potential profits increased. So a company that says, "yo, we can just write a new Jira, a new Salesforce, a new browser, a new search platform, new IDEs, new programming languages and take over everything, and it's not that expensive given all this productivity" - several companies will do that, and some subset of them will own the future, though we can't see right now who that is.
Ok-Scheme-913@reddit
Your point about Jevon's paradox is very interesting and true, but I absolutely question anything more than a single digit percentage of productivity boost from any AI tool, in the general case.
Like, the only place where I might imagine it being "so good" at coding would be when the requirements map almost 1 to 1 with the expected return (e.g. some very basic website) - but even then, it's almost like we have made high level languages that can exactly express what we mean, describing what we want for the exact reason of avoiding the fuzziness of human language.
The amount of time one would have to go back and clarify something in the prompt would easily surpass writing the actual code in these trivial examples.
Like seriously, unless you want a hello world level program, specifying in every bit of detail what a program should do is a significant overhead, and it's precisely the job of a programmer to translate human requirements to code and the most important property of the code is that it is an exact description of what should happen - unlike human language which doesn't say what should happen when this or that fails, etc.
hippydipster@reddit
I've been programming for 40 years. I don't find your characterization to match my experience with Claude or gemini, and now o3 and o4 are even better. Lately the AIs are improving noticeably on a month to month basis as well.
Ok-Scheme-913@reddit
They demonstrably don't improve noticeably on a month-to-month basis, as per their very own benchmarks. They have pretty much hit a plateau already, with only marginal improvements.
I don't have nearly as many years as you do, but I have written my fair share of code, including all kinds of LLM wrappers/tools to experiment with these models. I haven't found them as productive, even though I do use them daily as a glorified text processor, e.g. if I have a list of something in the requirements I would model as an enum, I would give a single example for one item, and make it generate the rest (not C enums, more like Java or Rust with a bunch of properties). Or ask it to do some simple code/text manipulation instead of me doing it 6 times. But that would have taken me 10 minutes, hardly making me a 1.1x developer, let alone more.
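A small sketch of that enum workflow, as a Python analogue of the Java/Rust-style enums described above (the names and properties are invented purely for illustration): write one member by hand to set the shape, then have the assistant generate the remaining members from the requirements list.

```python
from enum import Enum

class ShippingTier(Enum):
    # Hand-written example member: (display_name, max_weight_kg, flat_fee)
    STANDARD = ("Standard", 20, 4.99)
    # Members like the ones below are what the LLM is asked to fill in,
    # following the same shape as the example above:
    EXPRESS = ("Express", 10, 9.99)
    FREIGHT = ("Freight", 500, 49.00)

    def __init__(self, display_name, max_weight_kg, flat_fee):
        self.display_name = display_name
        self.max_weight_kg = max_weight_kg
        self.flat_fee = flat_fee

print(ShippingTier.EXPRESS.flat_fee)  # 9.99
```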
But maybe people are just not good at estimating their productivity?
hippydipster@reddit
That doesn't describe the benchmarks I know. Livebench, lmarena, ARC-AGI, SimpleBench - in every one, the latest models outperform less new ones, and "less new" means models released earlier this year, like Claude 3.7 or GPT-4o. The latest and greatest are always models released within a month or two of the present.
Ok-Scheme-913@reddit
You mean running them multiple times in chain of thought config results in better results. But these themselves are not improving too fast.
hippydipster@reddit
You're talking yourself out of seeing the simple numbers and data that's there to see.
blackcain@reddit
This is highly speculative. My experience with AI is that you slowly end up not doing as much critical thinking while you are using it. There is an addictive quality to not having to think, because the barrier to entry is lower, but it isn't clear that it's effective, because you have to be strategic in your prompt engineering.
hippydipster@reddit
This thread was presuming the basic premise that AI increases dev productivity ~3x. If you want to challenge that presumption, that's fine, but it's outside the scope of my comment.
oloap@reddit (OP)
Precisely. Execs are assuming that AI increases dev productivity. If that's true, the article argues that it is better to increase productivity by ~3x vs. laying off people to keep the same level of productivity.
blackcain@reddit
They make that assumption because they want it to be true. Once they make that switch, I don't think it is going to be as they think.
SpaceShrimp@reddit
But the third bar is also a lie. AI assistance won’t linearly scale the output, it will give a different output.
Increasing the head count also won’t scale linearly with output. And will also give a different output.
infinitelolipop@reddit
AI does in fact speed up developers; senior ones especially can get a significant boost.
I'd wager anywhere between 1.5x and 3x, depending on how widely it's adopted.
So this is an actual benefit with lasting effect.
I can't tell what percentage of the layoffs happen for this reason versus the belief that AI "replaces" devs.
For those believing the latter, yes, soon they’ll be in a world of hurt.
mkawick@reddit
I have tried to use Copilot and ChatGPT and they just generate junk code that can't really be integrated or used. One of my coworkers swears by it and uses it constantly, but when we use it together in a pair-programming type of situation, I've never seen any usable code come out of the current round of AI... maybe someday.
Agoras_song@reddit
It's not supposed to be usable out of the box. For me at least, when I use cursor + Claude 3.7, I actually describe the problem and it gives me the code. I review the code and make changes fully aware that I am responsible for that code so if it breaks it's on me.
It has definitely made me faster; I don't need to spend time typing, so I can spend that time perfecting this "junior dev's" code.
Kitagawasans@reddit
But the issue is that a lot of the specifics of what you're actually trying to accomplish get lost and surrendered to the AI. You don't know why the AI is using specific functions etc., and you won't find out until later down the road that what it did last week is actually the complete wrong thing and now you have to go back and fix it. It just creates unknown amounts of shitty spaghetti code that another dev, who actually understands what the project is trying to accomplish, has to go back and fix.
hippydipster@reddit
That would only be because you didn't read the code it wrote. Which is on you.
Kitagawasans@reddit
Yes. Thank you for reiterating the point I’m making; which is that people will get complacent and blindly trust it after a while and stop questioning it.
Agoras_song@reddit
But that's literally the opposite of the point I was making. I do read the code, I do think critically, and I ask questions. So that point of yours is not relevant to me.
I trust AI as much as I trust my junior developers: not a lot, and I ask questions when I don't understand the context of why someone did things the way they did.
If others are not like this, that's their problem. AI is a tool, you have to be thorough in how you use it.
my_name_isnt_clever@reddit
There is a huge variety of ways to use LLMs to code, from asking questions in a chat and copy-pasting the code, to an integrated agent that has context for the full project and makes its own git commits based on your requests. I can tell you it can do a lot, but not replace a human.
infinitelolipop@reddit
As I said, it depends on how expert one is at leveraging AI. Copilot is the entry-level product for AI-assisted coding; I found the Cursor editor makes a very big difference, with the auto-completion "tab" feature being 2-3 classes ahead of Copilot.
Your mileage may vary, but knowing what you want to get out of AI and using it effectively works.
infinitelolipop@reddit
Give cursor a try.
balefrost@reddit
As a senior dev, I don't feel like "typing code" is the bottleneck. I find that more effort goes into understanding what "correct" looks like and in understanding the nuances of the current code.
I have not found that the current round of AI does enough to help with those. I don't think it's able to retain enough context to even begin to address those.
So yeah, AI might speed me up... but I don't think it's 2x. I don't even think it's 1.5x. Heck, yesterday I tried to have an AI make a trivial fix. It hallucinated a method that never existed and, when confronted about that, went back to the original incorrect code. That particular interaction more than doubled the time it took, so my productivity multiplier in that specific interaction was below 0.5x.
batweenerpopemobile@reddit
yeah, as a senior dev, I can just shit out the code I want, using the abstractions I want, in the format I want, much faster than begging a code generator to get a quarter of the way to what I want and then spending time fixing it and checking for subtle logic errors.
hippydipster@reddit
Companies being left behind by big technological changes is par for the course. See all the new companies that won out post internet boom (Google, Amazon, Apple, Netflix, Facebook), and all the companies (Kodak, Xerox, IBM, Novell, Sun, CBS, NYTimes) that basically lost out due to being conservative in approach.
The next generation of companies will emerge and they will be ones that followed the path you refer to here, but at the moment, the list of future winners and losers is mostly opaque to us.
FrostWyrm98@reddit
My thoughts for the graph: "Now let's see Paul Allen's backlog"
(Hint: its gonna be fucking massive and unruly with the AI devs lmao)
Murky-Relation481@reddit
Pretty sure Paul's backlog is huge since he's dead.
dalittle@reddit
Graph has no numbers and no references. Just saying
dweezil22@reddit
OP "It's bad to lay off devs and replace them with AI"
also OP "here's a random graph I made up with no support that claims that you can lay off 2/3 of your devs and replace with AI and keep same productivity"
The actual fact is that if you have a healthy and efficient dev stable, laying off any devs will hurt your overall productivity, even including AI!
TL;DR Despite the title, OP is an AI Kool-Aid drinker. Their underlying thesis that AI will be this transformative has no support beyond propaganda. All signs point to AI being incremental (Web 2.0 was incremental; the Internet was transformative)
dimbledumf@reddit
AI is on pace to be as transformative as the internet was.
Already, you can use something like Cline or Cursor to write vast swaths of code. Does it have some issues, sure, but in talented hands it can save you tons of time:
https://www.reddit.com/r/LLMDevs/comments/1kpf2hq/the_power_of_coding_llm_in_the_hands_of_a_20y/f
Even the comments are full of people with similar stories. ( I've had the same experience as well)
AI is revolutionizing multiple industries
medicine:
https://www.reddit.com/r/singularity/comments/1kqg9ig/ai_is_coming_in_fast/
Movies:
https://www.reddit.com/r/ChatGPT/comments/1ewrbp8/animated_series_created_with_ai/
Music:
https://audiocraft.metademolab.com/musicgen.html
Biology, physics, chemistry:
https://www.vellum.ai/llm-leaderboard
General knowledge, common sense, direction following, language understanding, algebra, calc, law, ethics, medicine, healthcare, engineering:
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/
Pictures, voice, vision, games, customer support via chat or phone.
There are a ton more, it's crazy how much it's being leveraged everywhere.
If you aren't in one of these categories, I have no idea what you are doing.
Does all this mean you should fire 2/3 of your devs? That's a resounding no, at least... not yet. AI still codes more akin to a junior developer, but a junior dev can do a lot if you guide them. 2 years ago, the most it could do was code complete something you were already typing. Now it can do whole tasks, design, implementation, testing, etc. Where will we be in 2 more years?
theQuandary@reddit
Let me respond to this shotgun of claims.
AI is at best a bad copy of what is already on the internet. It cannot generate meaningfully-new content -- at best it interpolates existing content to make something derivative.
There's a whole host of issues with doctors already. A huge amount of doctors are just paper pushers and rubber stamps for people doing the real work. Medicine would be much better if we'd eliminate credentialism in the parts of medicine where it isn't needed and drives up costs for no good reason.
Derivative mush that real artists can instantly recognize and loathe. I'm also irritated that it has become very hard to sort through heaps of AI trash to find something good. Image searching sucked bad enough before this happened. Overall, AI seems to have been a net negative here.
Your link is a VERY bad example.
https://www.ibm.com/think/news/apple-llm-reasoning
Apple researchers showed that if you reword the exact same problem to something different, proficiency drops dramatically. If you reorder the problem, it plummets even more. Put simply, the AI is just pattern matching with no real logic. That's great if you want to use it as an encyclopedia (hallucinations aside), but not very good at all for novel research.
The best use I've seen for AI is problems like protein folding, but it only works for that because we've been brute-forcing it for decades now (and still have to validate the results). If we move on to another problem, we'll have to spend years brute-forcing that too until we build up a body of work to train with.
The app I'm currently working on is doing some novel things. AI can handle some boilerplate, but the codebase is hard enough for new senior devs to understand and AI doesn't stand a chance.
dimbledumf@reddit
The links were to demonstrate AI's impact across a wide variety of fields, even in places you might not expect.
If you look a little deeper into those links I posted you'll see how things are improving steadily over time
Yeah, no kidding - that's exactly what some of these tests I linked are designed to combat: overfitting and/or training to pass the test. You'll notice that the best of these tests don't provide synthetic or training data and don't release the questions. It seems like you've only read surface-level articles; these are well-known problems with well-known solutions.
Some of those tests are hard enough that even mathematicians can only solve a few of them and the llms are getting better at them constantly.
Instead of regurgitating what you've heard from others you should check out the links I posted. Some real world examples in there.
Wow, ok, wtf are you talking about and what does that have to do with anything.
Are you arguing having doctors is bad? We should just google our symptoms? You want to restructure all of health care? Sure, but that's not what we were talking about. The link I showed was to demonstrate AI's impact across a wide variety of fields, even in places you might not expect.
As opposed to most pop music and movies released.... right? But again, that wasn't the point; the point is how every industry is impacted. AI is being used by real artists and is integrated with the tools used to create that music.
You should check out the show I linked, some guy generated a children's show, pretty amazing stuff and it looks great.
MoreRopePlease@reddit
How does it do this?
Will it point out missing or contradictory requirements?
I would love a tool to help me with designs.
dimbledumf@reddit
Short answer is yes, it can - I use roo or cline with Anthropic's 3.7 Sonnet.
Long answer is yes... but its context window is limited, so you can't dump your entire backlog into it and say go.
My current method is I use the memory bank feature of roo or cline for each project where it takes notes on what and where things are.
Then take it one task at a time. Typically I'll give it the task in 'architect' mode or 'plan' mode. Get it to design the solution then switch to one of the other modes to implement and test.
It works pretty well; if you use boomerang mode (aka orchestrator) in roo it can even do multi-part tasks.
Just make sure to double check all the work - it likes to skimp on tests by mocking everything and then asserting that the mock worked, or taking other shortcuts.
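As a rough, hypothetical sketch of that mock-only shortcut (the service/repository names are invented; the framework is plain JUnit 5 + Mockito):

```java
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;

// Hypothetical types, purely to show the shape of the anti-pattern.
interface PriceRepository {
    double findPrice(String sku);
}

class CheckoutService {
    private final PriceRepository prices;

    CheckoutService(PriceRepository prices) {
        this.prices = prices;
    }

    double total(String sku, int quantity) {
        // Real logic that the test below never actually verifies.
        return prices.findPrice(sku) * quantity;
    }
}

class CheckoutServiceTest {
    @Test
    void looksGreenButProvesNothing() {
        PriceRepository prices = mock(PriceRepository.class);
        when(prices.findPrice("ABC")).thenReturn(10.0);

        new CheckoutService(prices).total("ABC", 3);

        // Only asserts that the stub was called; the multiplication in total()
        // is never checked, so a bug there would still pass this "test".
        verify(prices).findPrice("ABC");
    }
}
```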
oloap@reddit (OP)
The graph shows what execs *believe* today: that you can lay off 2/3 of your devs and replace them with AI while keeping the same productivity. Reality might be different, but that's irrelevant - the belief is why they do it.
The argument is that keeping the same productivity, instead of increasing it with the same "healthy and efficient" team + AI, is going to make your company obsolete, as others will do it.
dweezil22@reddit
Thanks for clarifying! I'd suggest updating your graph to be clearer. If you're lucky and your blog post takes off, some dumb exec will absolutely see that graph, read zero words of your article, and add it to their "I can lay off all my devs!" files.
dimbledumf@reddit
Speaking of drinking the Kool-Aid, anyone who dares disagree with you is heavily downvoted. I guess it's scary to face the new world.
dweezil22@reddit
AI is the new offshoring. It's great at replacing bullshit jobs that shouldn't have existed in the first place. It also really does have powerful select use cases. The problem is execs are terrible at differentiating between the powerful select use cases and the really-bad-idea use cases.
I say that as a person that works with AI every day, has helped build some truly enormous AI systems, and worked in tech through the offshoring craze years.
Oh AI is also weirder than offshoring. There were no Harry Potter fan-fic death cults around offshoring, to my knowledge.
1h8fulkat@reddit
The graph proves that if there is no demand for higher production then it makes sense to replace developers with AI.
zffjk@reddit
AI won’t be maintaining this code… or did I miss the maintenance package on copilot?
ProtoJazz@reddit
That's the part that a lot of the AI pushes, or even just the layoff pushes, miss.
Sure, you can cut 80% of your staff and things still run for a bit.
But eventually you'll see things slow down in terms of dealing with issues, or building out new features.
However, let's assume your product is fully done. No changes needed. Just maintenance.
Cool - then an issue with a 3rd-party integration causes a big problem with your data. You need to figure out how to stop it and how to fix that data. Or maybe some new regulations come out and you need to build new features to adapt to them.
Shit comes up eventually. And that's when suddenly they realize no one knows what the fuck does what anymore and what should be fast updates are now super slow.
AzureAD@reddit
It’s because the article is by a “product manager” who is whistling to the business leaders with the graph in the middle stating that there “is a way” as explained in the middle graph. The whole article is a sham and not worth any devs time.
vivaaprimavera@reddit
Not for people willing to invest and seeing an opportunity.
There is a pissed off talent pool right now.
rhade333@reddit
Truth hurts.
phillipcarter2@reddit
Is it though? Nobody is investing in AI so that companies can do the same with less. That's zero growth, and thus lost money due to inflation. They want more outcomes overall.
Hulkmaster@reddit
I, non-ironically, think these are bad times followed by very good times
Let me explain
Good developer (even junior) > AI
Companies firing "good" developers will have a huge backlash in about a year or two
That backlash would either drop the company's market value, which will bring new startups in their place, and/or drop their revenue (because of dropping quality, an increased amount of bugs, and longer development loops)
This will result in good times:
- New startups hiring actual developers
- Big companies hiring developers back
GCatalinStefan@reddit
I actually hope the laid-off devs get it worse than the laid-off artists did.
GCatalinStefan@reddit
You could make the same case for artists.
Now I couldn't be happier seeing programmers are going to feel the way artists felt last year when they were getting laid off because of AI.
I hope it gets even worse for them as coding is way easier for ai to reproduce than an image.
Evalvis@reddit
They will hire developers back when AI leaves millions of vulnerabilities and the need for cybersecurity arises. :D
MrTheums@reddit
The assertion that AI will directly replace developers is a simplification. While AI can automate certain aspects of software development, particularly repetitive tasks like code generation or basic bug fixing, it's currently far from replacing the holistic skills of a proficient developer. The core issue is likely a combination of factors, not solely AI-driven efficiency gains.
Over-hiring during periods of rapid growth often leads to inefficiencies and redundant roles. Companies might use the current economic climate and the allure of AI-driven "optimization" as a justification for restructuring and reducing headcount, even if the long-term impact on productivity is questionable. This is especially true if the laid-off developers were responsible for maintaining legacy systems or performing complex tasks that AI currently struggles with.
Furthermore, the "productivity paradox" is relevant here. While AI tools can enhance developer efficiency, the actual increase in output isn't always immediately or linearly measurable. It's possible the perceived lack of increased productivity is due to a failure to effectively integrate AI tools, rather than a flaw in the technology itself. A holistic evaluation of the impact of AI on developer productivity, considering factors beyond simple lines of code written, is crucial before making sweeping layoff decisions.
30FootGimmePutt@reddit
Good thing they aren't actually laying people off because of AI.
AI sure looks good when it’s given as the reason by people writing articles about layoffs. Nice synergy there.
They get to hype AI and they get to engage in cost cutting and outsourcing with profits/stock at record highs.
Affectionate-Car6991@reddit
You're not wrong — AI is sometimes the excuse, not the reason. Many companies are correcting years of overhiring and chasing efficiency. But good software still needs humans behind it, especially for maintenance and complex problem-solving.
Green-Hamster9117@reddit
Should be an opportunity for good companies to ascend past these companies who are just following trends that will ultimately cost them money in the long term, since they will just have to hire more developers to fix the awful code. Look what's happening with Nvidia and the last few driver releases: they are crippling their ability to function because they rely too heavily on AI.
BornAgainBlue@reddit
I'm seriously giddy with excitement. They do this in cycles, but this will be bigger than the .com crash. A few years of eating ramen, and then they will be begging senior devs to return. This is my... 5th? cycle of this, and I always get a huge pay increase from it.
dark_mode_everything@reddit
Yep. This is essentially offshoring 2.0.
Brown-Tabby@reddit
I don't think they ever gave up on offshoring. I think the version is more like 1.15.193 by now.
fued@reddit
quality of code and how accurate it is to requirements is very similar to offshoring too lol
dark_mode_everything@reddit
Ai - does not understand the business context, does bare minimum without thinking of the codebase architecture, bloated code, confidently incorrect, etc. Hmm......where have I seen that before?
fued@reddit
Don't forget it always says it understands and is happy to say your design is amazing
markoeire@reddit
I can't imagine eating ramen noodles for a couple of years. Damn.
BornAgainBlue@reddit
It was a figure of speech - I'm gluten sensitive, so I honestly tend to do rice when poor. That started when I was a younger guy: my gf's dad drove a truck, and he gave me a HUGE 50lb bag of rice. To this day I still keep two pickle buckets filled, just in case.
ShiitakeTheMushroom@reddit
What's a pickle bucket??
BornAgainBlue@reddit
Picture a bucket... (best part is I'm TRYING not to be snarky), full of pickles.
Get a job any fast food, you get free buckets :-) These days I just buy clean empty ones....
I think they are 5 Gal? Not 100% on that.
Anyhow, they seal, so thus, they are great for storing food. Rice keeps for.... well I'll be dead.
ShiitakeTheMushroom@reddit
Thanks!
Jehab_0309@reddit
This is the funnest doomsday prepping I've read about, cheers
seanamos-1@reddit
During the good times, don’t burn all your cash on rubbish and living in the moment, save and invest. Make hay while the sun shines.
We recently had a ridiculous upswing in dev demand and compensation during and post COVID (a 3-4 year period), it was an opportunity to make a small fortune. Which was then followed by a sharp correction.
During the downturns, live more conservatively and sleep soundly knowing you have a nice buffer to carry you through it.
We are entering one of those "upper management acts completely irrationally when presented with the next hyped thing" eras. They've happened before, they'll happen again. Batten down the hatches.
Moloch_17@reddit
Why are they consistently so stupid?
klwegner@reddit
I definitely see the wisdom of this approach--that is, make hay when the sun shines--but it stinks for those of us who never got a good paying job and were trying to work our way up.
I've been a dev for a community college for 2.5 years and haven't broken 60k. There's never been money to put aside for tomorrow.
But then again, I'm less imperiled (at least for now) than developers with better pay and (likely) more responsibilities. If AI is a bubble, I may be left unaffected when it bursts. But I'll probably still be making a substandard wage, lol.
Due_Satisfaction2167@reddit
Don’t kink-shame people.
Capable-Mall-2067@reddit
After the AI hype, I now truly empathize with how artists felt during the NFT bull run, or how bankers and financial advisers felt during the crypto hype. I can't wait for some companies to crumble trying to replace real dev talent with AI.
Terrorscream@reddit
It's not much different to the sacking of IT support during the 2008 GFC to save a buck - then the companies that did so collapsed when something broke.
Kaffe-Mumriken@reddit
I think you’re misreading the trend.
Companies hired en masse to participate in the AI development race/bubble, but clear winners have emerged and the hiring was unsustainable long term.
church-rosser@reddit
The developers being laid off aren't necessarily those developing LLMs. It seems you're misunderstanding the point of the article, or didn't bother to examine it enough to understand the point: namely that developers are being laid off, but their LLM replacements can't do what humans can vis-a-vis programming as a professional in a professional setting. It is irrelevant whether developers in the LLM development race are being laid off. The point is that many other developers, in other areas of programming and compsci-related fields, are being laid off on the belief that they can be replaced by the output of an LLM, despite nothing suggesting that this is a reasonable approach for a company to take long term.
nemesit@reddit
execs are easier to replace with ai than actual developers
ThinkMarket7640@reddit
This is something I was thinking about the other day. LLMs excel at communication, surely one of the first things they should try is replacing managers.
gelfin@reddit
That's because confidently stringing together words into statements that plausibly sound like they could have been produced by a functioning human brain, irrespective of any concept of truth, consistency or ethics, is literally the entire job.
SmokeyDBear@reddit
It’s interesting that this is also probably the reason execs are so enamored with AI.
GeoffW1@reddit
That's probably only true because so many execs barely seem to know what their company does. But yeah.
markoeire@reddit
Yeah. Agree. Most of the AI automation now is writing emails, business plans etc. Any complex task or system design and these tools crap out. You can treat AI as a junior dev with very little competence. Even then it's faster to do it yourself than to explain to the AI how to do the job.
Sckjo@reddit
This is the funny part. An exec's job is 5x easier even as a human
miniannna@reddit
I'm convinced there is collusion going on between tech leaders to use AI, regardless of its actual benefits to productivity, as an excuse to diminish the power of tech workers. It's far too early to have meaningful data on whether AI is actually providing the productivity gains many of these CEOs are claiming.
Kintoun@reddit
Doesn't need to be collusion when it's just them all buying into the same bullshit. They either actually believe AI right now can replace engineers, or they are using it as a disguise to cut even more fat.
Today's AI is a replacement for Stack Overflow, and hilariously it is actually killing Stack Overflow, which is what LLMs are largely relying on, ROFL. Tomorrow's AI has the potential to be more, but I'm not going to tank my current productivity to foster the "AI baby". Companies are red-pilling too soon.
acc_agg@reddit
Stack overflow killed stack overflow.
Mnawab@reddit
Damn, did Stack Overflow die? I mean, I do come across the occasional "how do you do this" where all of a sudden the person who posted it edited it to say they solved it, without putting the solution on there.
boringestnickname@reddit
Sunk cost fallacy is going to hit harder and harder until people get a rude awakening.
Where I live, the government has demanded that 80% of public organizations have to use AI by 2025. 100% by 2030.
The LLM push is the biggest hype machine I've ever seen, and I've been around a while.
kobraa00011@reddit
it's 100% the streaming model: it's going to be absolutely necessary for businesses to use while it's free/cheap, and then the cost is gonna go up and up and up
uCodeSherpa@reddit
Didn’t early tests in to this reveal that “productivity gains were feels not reals”? I swear this was actually studied and the result showed zero productivity gains, and more bugs and security flaws. While the developers using AI commonly felt they were being more productive even though they measurably weren’t. I have to look it up.
xaddak@reddit
https://www.cio.com/article/3540579/devs-gaining-little-if-anything-from-ai-coding-assistants.html
I sent that to a coworker. His exact words were:
Oh. Okay. Well, as long as you feel more productive, I guess it's fine.
fn3dav2@reddit
No collusion necessary.
"I am a good CEO, look at how much money I saved!" is the answer.
(The CEO is gone by the time the damage is apparent.)
nnethercote@reddit
capital hates labor, always has
JDgoesmarching@reddit
This doesn’t require collusion, massive firings and RTO layoffs started before the AI boom. The timing of ChatGPT just gave them a shiny new label to slap on even more cuts to make the earnings look better for bad execs.
If it wasn’t AI, they would have manufactured some other excuse to cover it. I do agree with the overall premise of diminishing tech labor power, although I mostly think it’s just ineffective leaders not knowing how to look good after ZIRP and the Covid boom.
uCodeSherpa@reddit
I mean. They did massively over hire for Covid.
Obviously something was going to give when you could complete a react bootcamp and get a job for it while having literally zero idea how software otherwise works.
HoratioWobble@reddit
The tech bubble burst and AI was on the other side, they ran out of things to innovate and big tech companies have a constant need to innovate and embrace new technologies.
It's the same with Meta and "the metaverse": they thought it was going to be huge, they rebranded so people would associate the metaverse with them, they went all in, but no one else did.
AI, everyones all in.
pheonixblade9@reddit
I've been saying this for years.
Xalara@reddit
We know there’s collusion because we’ve recently learned via news reports that basically all tech CEOs are in hundreds of Signal group chats with each other, as well as with a bunch of rightwingers and it’s cooking all of their brains.
Add on to that the recent articles detailing how several CEOs, including Satya Nadella, are using AI for basically everything, and you realize that is also cooking their brains.
So it’s a double dose of brain cooking.
saera-targaryen@reddit
Yeah they're all radicalizing each other away from reality. A single conversation with a senior dev who wasn't afraid of being fired for saying the wrong thing could blast them back to reality if they actually genuinely listened.
Ashamed-Simple-8303@reddit
Yeah and same with return to office
MagnetoManectric@reddit
100% my take too. This is leverage against the value of labour. I keep trying to drum this into my fellow devs. The tech is useful, and can help us do our jobs. But we should be reminding our managers that it cannot replace us, and that given the rampant unprofitability of AI right now, any company that develops a dependence on it is in for a sore ride once the prices are jacked up threefold.
Harha@reddit
People buy into the AI hype and later we'll see how the bubble bursts. The LLMs currently aren't anything but fancy gimmicks which won't be able to really design software at large. Being productive with the help of an LLM is nothing but an addiction-ridden fever dream at the moment, as the entire generative-AI tech is addictive thanks to our dopamine circuitry.
bindermichi@reddit
Just wait a few months before they start rehiring because of AI
chloro9001@reddit
Only time will tell if it’s dumb
workingtheories@reddit
only an idiot would make such a graph
Xetaboz@reddit
AI will eventually cause people to not want to use computers anymore and tech companies will have to start resorting to violence.
octnoir@reddit
A quarter of the tech articles that come my way aren't software or tech or programming related - these are articles on: "My boss is terrible" "The bosses have no idea what they are doing" "The executives are trimming for no reason" "We're building something that the higher ups know is bad" "Our CEO just gave us a big speech over transparency, while middle management proceeded to yell at us if we inputted in a truthful progress milestone as opposed to the fake puffed up milestone that they wanted".
Clearly we recognize that tech management and tech executives can be extremely terrible.
What makes people think that profitability or optimizing business is at the heart of this? We've got a system where CEOs have no real incentive to actually optimize and improve their businesses. The real job of a CEO isn't to lead the company, it is to build a sales pitch to investors.
And you can just fool investors if you create a hype AI bubble, and then proceed to lay off employees under the guise of 'we're optimizing with AI' rather than get punished by the market for actual layoffs. And investors who are savvier are effectively playing hot potato with other investors (which often means pension funds since the investors with more resources can act quickly) riding the high and dipping before the loss.
I really want people to understand - your future and your careers are being gambled away by executives who make far more than your middle-class salary, who are gambling for gambling's sake, and if it blows up in their face, and the entire company or even the industry collapses - they get to retire in their fancy mansions and yachts and sail a comfortable early retirement while everyone else has to pick up the pieces.
You can choose to take steps to actively protect yourself from that - or you can choose to keep believing "Am I missing the picture here? Why are companies insistent on losing and gambling so much money for nothing?".
Believing a company's sole purpose is to make money, to optimize businesses, to cut costs, to grow - it means you aren't seeing the bigger picture. A lot of these execs know that they might lose it all in a rapid fashion. They don't really care because they either come out on top right now, or fall like a house of cards to retire safely. It holds the same logic as 'the stock market accurately and rationally sets the value on a company' - that hasn't been true for a very long while.
petr_bena@reddit
"riding the high and dipping before the loss."
from an investor's perspective this is almost impossible to time correctly. Most of the people who invest in bullshit believe it.
toastermoon@reddit
What steps can we take to protect ourselves from this gambling by execs?
octnoir@reddit
I mean, I'm going to say unions, but there's a lot of push back against that in the tech sphere despite some amount of interest in it, or at least some interest in the "idea" or the "perks".
The primary issue is that the push back comes from people thinking of unions as a stereotype, rather than unions as an institution.
Unions do not need to be for low-wage workers. Unions do not need to be only for the downtrodden. Unions in most industries benefit both the union worker and the non-union worker, since the latter gets paid more by the company for not joining the union. Unions have been militant and unions have been pacifist and unions have been activist. Unions have in many cases been the only bulwark against abuse, especially if you are a minority or a discriminated-against class.
It isn't like cooperation is absent or that tech workers don't join together - see the sheer number of groups, memberships, conferences, alliances, projects, open source and more. It is just that if you don't have a union, you can't really wield any real power - social, economic, political, and labor-related. And because unions are an institution and not a strict template, you can start small, build it from there your own way, and slowly build up alliances.
This is how the games industry unions are forming with multiple smaller unions from studios creating their own style of union, prototyping what works and what does not, and building alliances with other smaller unions or with the larger whole of union organization.
In this circumstance where the executive gambles with your future, they are allowed to do that without any real repercussion because they've gone unchallenged. They've gone unchallenged because the government has become impotent and captured by corporate interests, while unions have been decimated over the past few decades. The wealth disparity has gotten so large that executives not only feel safe with gambling their billion dollar company away, they are encouraged to do that because wealth begets decadence begets a completely detached from reality world view.
Moloch_17@reddit
There was a thread a couple of days ago asking for honest opinions about unionization and I was shocked by not only how many people were against it but also by how completely backwards their reasoning was. I thought for sure it was more popular.
IanAKemp@reddit
The problem is that a lot of software developers are libertarians, i.e. idiots, and unions are anathema to them in the same way that thought is.
sonics_01@reddit
+1 and 100%, but unfortunately, terms like "big picture" and "long-term growth" have been slowly dying for years. To CEOs and all C-levels, shareholder value and cash flow are the topmost important virtues, and the quarterly earnings announcement is the most important event of their lives. They need to bring good numbers every single quarter or they can be fired.
I'm not even talking about 10 years. Anything that brings a return after 5 years? Who cares? After 5 years, no one is sure if they will still work for the same company or not. So no one cares about the big picture or long-term growth. Because shareholders will try to replace C-levels if anyone yells to watch for long-term prosperity despite short-term loss. There is absolutely zero patience.
Not a surprise that consulting firms like McKinsey recommend firing almost the entire R&D department first, especially people who work on longer-term projects, when they "fix" a company and "consult" for C-levels. Some companies don't even think about having facilities and people for research and core development capability; they outsource everything offshore, like to India, and now that is shifting to AI. Jobs for Americans? Who cares and why care? That is not important for shareholder value and cash flow; such terms only bring negatives to the chart.
This is really ridiculous. But it is not a new problem - this has been going on in the US for at least 20 years. Not a surprise there are no facilities like Bell Labs anymore inside the US.
Now we see downfalls like Boeing's. But hey, at least they got huge bonuses even when they retire, and others will clean up all the aftermath and wrecks and shit, and maybe politicians will help using taxpayer money, and it's not their problem, riiiiight?
Such level of moral hazard, irresponsibility, and crazy focus only on short term profit and growth is killing all of us.
tudonabosta@reddit
My current company will die because of it. C-levels bought all the promises the LLM companies are making. We had a backend and infrastructure of 8, mostly seniors. There are 2 seniors (including me) and 2 juniors left. Then they promoted me to Jr Manager. This happened 8 months ago.
We've been crunching since then. Using ChatGPT or any other model was helpful only for generating boilerplate that we could generate already, but without the consistency and determinism of the tools we already had. Both juniors lost 8 months of their careers because they don't have time to learn how to code. Every single project is late and cloud costs are rising.
After 5 years working for the same company, I started interviewing again. I considered staying to train my replacement, but there's no point. They're nearing the point where there's no recovery. I already received two offers. They're not what I'm looking for, but I'm willing to start job hopping again just to free myself from this bullshit.
_BreakingGood_@reddit
Same situation. Went from a team of 20 to a team of 5. Now everybody works 50-hour weeks and I regularly see my coworkers crying. AI definitely makes me produce higher-quality code faster. But it can't make me do 2 things at once. If you want 2 things done at once, you need 2 people.
petr_bena@reddit
You just need to ask for a second screen with another co-pilot if you want to do 2 things at once.
/s
SpriteyRedux@reddit
The LLMs basically serve the same role as a junior. They give you code that might generally work but needs some additional thought to take it all the way to production. So I have no idea how juniors are supposed to get started. I assume their career knowledge is supposed to be obtained from the stupid genie that gets answers wrong all the time?
Jehab_0309@reddit
Did you relay this to them? Or do they know they’re about to sink and just ignoring it?
DarkTechnocrat@reddit
This is so often overlooked when talking about these tools. You can’t predict what they will do with any certainty, so it can be hard to wield them effectively.
SpriteyRedux@reddit
Laying off developers because AI exists is like laying off a restaurant manager because a cash register exists
Technical-Ice247@reddit
There’s also the realization that they will need piles of cash to actually reap rewards of AI. Like most mature industries, one of the companies- currently perceived as formidable and unassailable- will probably collapse or lose almost all their market share in the next 5 - 10 years. They’re all flailing to ensure it’s not them.
mzolfaghari71@reddit
At least we need to see the results before making any decision; AI is too new to judge
TSSalamander@reddit
Laying off devs only makes sense if you expect there to be a shortage of demand for tech development. But if you can develop 3x the amount of things with the same amount of staff, and so can everyone else, there is literally no reason to think you have less work to do now. All software right now is terribly faulty in many ways, which comes from the fact that the demand for more kinds of products is high. But this faultiness just goes to show that there is so much more to do. Laying off staff to replace them with AI is a mistake - at best a good choice in the short term, but a long-term failure. Honestly, layoffs now are failing to take advantage of the moment.
Majestic_Sweet_5472@reddit
AI is not a replacement for inventive and clever engineering. Sure, it can save time in certain areas, but removing people from the development equation is shortsighted and will have drastic consequences.
IanAKemp@reddit
The dumbest move in tech was hiring as if the pandemic was going to last forever. AI is just the excuse these overpaid idiots are using to justify cutting headcount back to a number that matches actual post-pandemic demand for the services their company produces.
bring_back_the_v10s@reddit
I never understood the reason behind the absurd hiring spikes during the plandemic. I thought it was supposed to tank economies, thus hindering new hirings. Is anyone able to shed some light on this?
Educational-Lemon640@reddit
If you think the pandemic was planned, I can see why you might be confused as to why it didn't seem to actually benefit anyone in particular.
If you don't think it was planned, people making stupid decisions in response to it is perfectly reasonable. Nobody benefitted because nobody actually wanted it.
bring_back_the_v10s@reddit
Except Big Pharma.
Shivaess@reddit
As someone who owned stock in BOTH big vaccine makers at the time (dumb luck), they got a bump and then nosedived. Neither made out all that well from the covid vaccine.
I think it’s a combination of anti-vaccine bullshit and generally lashing out at healthcare in general. As someone who has some college education in the topic, the mRNA vaccine tech is astoundingly cool.
bring_back_the_v10s@reddit
Sounds like copium to me.
Educational-Lemon640@reddit
Actual what? Evidence that "big pharma"'s investors did not, in fact, "make bank" from the pandemic is copium?
bring_back_the_v10s@reddit
By the way, I'm old enough to remember how ~10 years ago reddit used to paint Big Pharma as Big Satan. What made reddit's hivemind change towards becoming the paladins of Big Pharma, I wonder?
Here's some evidence for you. Thank AI for that.
https://x.com/i/grok/share/TGOR8J1nCaxyG606RWX74kP0v
Educational-Lemon640@reddit
You think I'm going to believe an AI summary of stuff from freaking X?
I know from personal experience that Twitter was a complete and total dumpster fire of confirmation bias, stupid arguments, total non-sequiturs, and just about everything wrong with the Internet...in 2017. By all accounts, it has gotten much worse since then.
AI does jack squat to make it more reliable. All AI does in this case is amplify and focus all of the above problems. This is well documented and matches my personal experience with chatbots.
All this does is add "broken evidence heuristics" to your list of problems. And no, I'm not reading that. I have a firm policy of never visiting any micro-blogging sites for the same reason I don't drink arsenic/uranium mixes.
bring_back_the_v10s@reddit
Believe what you want kiddo. Good luck.
Educational-Lemon640@reddit
Kiddo.
I'm probably older than you, statistically.
And I would note for the record that you aren't actually doing the legwork to find better sources, but giving up the second somebody points out that they have the structural integrity of wet cardboard.
bring_back_the_v10s@reddit
whatever 🤷🏻♂️
ifonefox@reddit
People were using more tech during the pandemic, so companies hired more people to match the increased demand (using the money from said demand). When the pandemic ended, the money and demand went back down to normal levels.
bring_back_the_v10s@reddit
Makes sense, thanks for the explanation.
dark_mode_everything@reddit
The reason was that companies got cheap money during the pandemic and they wanted to aggressively expand. Then after a couple of years they realised that everyone was worse off and not spending money on tech products because they had less disposable income. Now they want to get rid of those extra hires.
calgary_katan@reddit
My tinfoil hat theory is that these companies were using it to get more AI code training data on their internal systems.
asstatine@reddit
Look at Jevons Paradox. Better yet, look at this history of the cotton gin: https://teachinghistory.org/history-content/ask-a-historian/24411
Not only will this backfire. Eventually it will create a larger demand for even more developers to manage all the AI and review its code. What will change is that we'll write less code day to day, but we will read magnitudes more.
I'd venture to guess we're all on track to becoming glorified PR reviewers, who don't have to argue with the authors anymore because it will do as we ask.
bananamantheif@reddit
I advocate for companies to hire all-AI dev teams. I want this bubble to burst ASAP so we can go back to reality
AlarmedGate81@reddit
What's with the whining? ☝️ "You need us - don't fire us"
fn3dav2@reddit
If you look at the big US/UK companies, you'll generally see that the companies who have been laying off developers "because of AI", have been outsourcing, getting more H1-Bs in, or recruiting in India. The AI is "actually Indians".
mosaic_hops@reddit
In the US it’s more because companies are afraid of retaliation from the government if they blame the economy which is totally f*cked right now. So they say “AI” with a wink and a nod.
The_0bserver@reddit
Good job security in a few years for me and some other seniors I think.
Accurate-Title4318@reddit
You’re absolutely right—laying off developers just as AI is boosting productivity is a short-sighted move. The real future isn’t about replacing creators, but empowering them. That’s where NoCode-X comes in: it’s the “hurbod” approach, where AI handles the repetitive, bandwidth-heavy tasks and lets you focus on creativity, architecture, and delivering real business value.
With NoCode-X, you stay in control as the creator. AI assists with the “monkey work,” but you direct the vision, ensuring your apps are robust, resilient, and secure by design. Unlike platforms like Lovable, which offer zero security in delivery, NoCode-X bakes in security and operational resilience from the start.
Don’t just survive—thrive! Let AI do the heavy lifting while you build, innovate, and deliver with confidence.
Check out NoCode-X and see real-world demos on their YouTube channel. This is the future of empowered development!
rezna@reddit
everyone has seen this coming. the tech industry isn’t exactly made up of critical thinkers…they’re just good at their specific job role lol. and the businesspeople that run them are even dumber
leroy_hoffenfeffer@reddit
AI doesn't have to be good at replacing people for it to replace people.
CEOs/VCs/Boards only care about profit. That's it.
They'll lay everyone off under the guise of AI and then in a few years when that turns out to be a terrible decision, they'll hire human programmers back for pennies on the dollar, most likely only for gig based roles.
This is an effort to save money immediately, and then more money later. The people making these decisions are not intelligent.
tuxxer@reddit
I assume that now that we can finally get something akin to HAL, the script says the high-end devs get released and the low-end ones get outsourced to whatever third-world dystopia is in vogue today. Now the script is in play and a bunch of people expect to make Marvel money for creating a windfall.
Would you like to play a game
xubaso@reddit
Programmers' working time does not equal productivity. A small but focused team can be extremely productive, while a large team with a lot of ceremony, processes, and endless talk about code style in reviews can make quite a buzz without making much progress. If the latter is replaced by AI and it doesn't do a good job, nobody would notice.
FitExplanation6346@reddit
They are basically paying for more mistakes to happen. Great job guys! Great job...
santaclaws_@reddit
It's all typical MBA thinking. Remember the layoffs where everyone above a certain salary was cut? Remember the layoffs that happened when they fired those guys in the back room because the MBAs didn't understand what they did?
Much like pulling parts from a running car randomly, this rarely goes well. Unless AI actually becomes capable of reliable, hallucination-free, rule-based reasoning, most of these developers will be back at work in a couple of years.
PersianMG@reddit
As it stands, LLM-generated code is very mediocre, and the people who use it as their primary way of coding will slowly lose core development skills. This isn't the same argument as using pen & paper vs using a computer, back when computers were new.
I've seen the code produced by LLMs and how developers use them, and I can tell you it's a sloppy mess. People don't check pre/post conditions, security is thrown out the window, etc. It might solve the direct task at hand but still leaves a lot missing.
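As a made-up illustration of the kind of precondition checking I mean (the class and method names here are hypothetical):

```java
import java.util.Objects;

// Hypothetical service, purely to illustrate explicit pre-condition checks
// that generated code often skips.
public final class TransferService {

    public void transfer(String fromAccount, String toAccount, long amountCents) {
        Objects.requireNonNull(fromAccount, "fromAccount must not be null");
        Objects.requireNonNull(toAccount, "toAccount must not be null");
        if (amountCents <= 0) {
            throw new IllegalArgumentException("amount must be positive, got " + amountCents);
        }
        if (fromAccount.equals(toAccount)) {
            throw new IllegalArgumentException("cannot transfer to the same account");
        }
        // ... the actual transfer logic would go here ...
    }
}
```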
In 5 years' time I imagine we'll have a whole lot of tech debt in codebases built from "vibe coding", and companies will transition back to needing smart developers (of which there'll be little supply). Of course, I could be wrong and LLMs could continue to improve at drastic rates to the point where they can solve complex problems and make no mistakes.
For the time being I'm not using any LLM for code generation to keep my skills sharp (only for checking blog post grammar).
lordtosti@reddit
The thing this is missing is decision throughput.
Management layers can only make so many decisions in X amount of time.
No company can scale 100x, not even 10x when you instantly add 100 times more developers.
This problem is with both AI and tons of overhired engineers though. The last ones are going to do just busywork.
The current market has been completely broken by all the money printing during the COVID years and needs to resettle over time.
erwan@reddit
Nobody is laying off because AI.
Some claim they are, but they would have done layoffs anyway, it's just an excuse to make it look like they're on the cutting edge of high tech rather than just downsizing because of poor financials.
AnxiouslyCalming@reddit
If a company is laying off for AI, I'd question whether the core of the company is valuable at all, or whether that unit even needed the engineers in the first place. Mostly, I just think it makes for interesting headlines to keep the AI bubble from popping, because underneath all the headlines is tech that is prone to making lots and lots of mistakes. I love it for autocomplete and scaffolding up unit tests or giving it small units of work that I can review easily, but I'd never let it go unattended.
ArriePotter@reddit
Coughs in Duolingo
erwan@reddit
Exactly - if your business is just creating software using AI without any expertise, then sooner or later your customers are going to cut the middle man and generate their software themselves.
CactusOnFire@reddit
I'd question if the core of many big tech companies is truly valuable and not just existing market adoption and stock chicanery.
enzoshadow@reddit
I can't get a single AI-based customer support chat to work properly, but sure! AI is good enough to replace even more complex developers' jobs. These executives just want to see short-term gains, because it'll be none of their problem when things start breaking.
teaisprettydelicious@reddit
My favorite is the Comcast/Xfinity chat bot that I reported an outage to, and it suggested I upgrade my plan to fix their technical issue lol
nimbus57@reddit
I know this isn't really your point, but I have had great success with AI chat bots. Now, I haven't had to use them everywhere, but they have always been at least a good start, if not the solution I was looking for.
saera-targaryen@reddit
CEOs have assistants to go through those AI menus for them, so they literally have no idea how bad they are
jimmiebfulton@reddit
“AI is good enough to replace even more complex developers jobs”. Who is operating the AI? I’m actually using AI, and there is no way it is replacing me. In fact, it REQUIRES me, if anyone wants it to be leveraged effectively. I, as a very experienced, very skilled engineer, can produce higher quality output than a junior engineer. That was true before AI. Yet we still need both in an organization… there’s too much work to do, and not enough resources to get it done fast enough. AI is just picking up the pace.
HoratioWobble@reddit
I know of at least 2 companies that DID lay off because of AI; it all went wrong and now they're hiring back.
There was also this study
theQuandary@reddit
They kinda buried the lede.
The real title should have been "Just 7% of UK businesses who replaced workers with AI thought it broke even or was an improvement".
HappyHarry-HardOn@reddit
I disagree - Some companies are clearly hoping AI will allow a return to the level of outsourcing we saw in the 2000s.
If a third-world dev being paid peanuts can, with an AI, create "acceptable" code, then corps can save money vs hiring expensive Western (esp. U.S.) devs.
EpicOne9147@reddit
The thing you are getting wrong is: no big company hires experienced, expensive devs to do stuff they can outsource for peanuts.
All_Up_Ons@reddit
Yep. You've always been able to replace experienced developers with people who write "acceptable" code. AI changes nothing about that equation.
bonesingyre@reddit
It's interesting because Microsoft Build is today, and Nadella and co. were doing a demo of Copilot 365. I noticed in their script they mentioned AI doing the work of an experienced dev, or, for their new fine-tuning AI tool, the work of a team of data scientists.
peepopowitz67@reddit
Extra funny since Copilot 365 blows chunks compared to the competition (which Microsoft also owns so shrug).
leogodin217@reddit
I think this is the case, with a few notable exceptions.
Many customer service jobs have been replaced by AI. There may be a few other roles as well. It's cheaper and worse. Companies like that tradeoff.
Companies are investing in AI over people. In some cases that absolutely leads to layoffs. However, in most cases, I believe AI is not yet doing the work of the laid off workers. They are just running leaner. Putting more work on existing employees. Hiring contractors. etc.
erwan@reddit
AI is just a tool. We need to stop talking about AI as artificial people, that's not what they are.
But yes, there are jobs that can be made obsolete by tools. It was the case when robots replaced blue-collar workers, and now generative AI is replacing some white-collar jobs. Digitalization has made a lot of jobs obsolete; for example, travel agencies became obsolete with the Internet, and so did all the corresponding jobs.
Still, AI is not replacing developers as the title suggests.
leogodin217@reddit
I think we are in agreement? AI isn't replacing developers directly, but companies are replacing budget for developers and using it for AI in the hopes it will actually replace developers. It's a dumb idea, but most corporations are run by bean counters, so....
erwan@reddit
They're just using AI as a pretext for layoffs they were going to do anyway.
seba07@reddit
That statement is just ignorant. As a recent prominent example, take Duolingo. They are laying off contractors because AI can generate the content now.
erwan@reddit
I'm talking about developers, like the title says.
Content creation is different, and yes people have been losing their job to automation since the industrial revolution. Generative AI is just one more step in automation.
Developers' work, however, is different, because it has always been more and more automated. From high-level languages, to IDEs, to having more and more reusable libraries and open source software, it's probably the domain that has been the most automated. Yet we still need developers: you need someone able to go from a vague idea to a correct design. And that's exactly what a developer is. With a recent framework, developing is pretty much describing your design formally.
0xdef1@reddit
Let’s clear poor financial definition here. When a CEO projected 65% growth but completed year at 40% growth. That’s poor financial for them.
scalablecory@reddit
Completely agree.
I firmly believe that developers are being laid off due to performance and restructuring for the economy. I think AI is just a convenient spin for shareholders.
Yes, AI might take all our jobs -- and maybe even soon -- but it's not there yet and it's obvious that it's not there to anyone paying attention.
n00dle_king@reddit
Yup it’s complete nonsense. No one in industry actually thinks they can replace everything their buddy Tom does no matter how much AI tooling you have.
extracoffeeplease@reddit
I work at a dinosaur as a freelancer and I see a LOT of highly paid data analysts who could go next month because of AI, just to give you an idea. Of course, only in theory. The structures required to automate the work aren't there yet, so AI won't help until it can operate a mouse, keyboard, and screen (which is coming fast). I've seen the argument that there was overhiring in the last few years due to cheap credit, which is being undone now, which to me is a more natural reason why there are some layoffs.
RedditAddict6942O@reddit
It's not because of "AI".
The billionaires are mad that workers made better salaries and got perks (WFH) during COVID.
These are loosely coordinated layoffs to force peasants back down "where they belong".
IanAKemp@reddit
It's not the billionaires, it's the micromanagers whose uselessness got shown up during COVID when their employees were working independently and delivering the same or better output. In order to save their own useless hides they've convinced the billionaires that WFH is the devil.
RedditAddict6942O@reddit
WFH also made it easier to switch jobs. Which increased wages significantly. That alone is enough reason for billionaires to hate it
dyngts@reddit
Depends, I think. If they remove redundant developers and replace them with AI, that's actually a smart move.
So instead of having two software developers work on the same thing, you can use one senior developer plus an AI assistant. Senior developers can guide the AI to do what they want, and it never gets tired or complains.
This is my dream as a software engineer: to have something reliable enough to write the code while my job is mostly the review part.
Code review will be the critical skill to have.
Let's take a breath and accept that AI will disrupt the software engineering role, sooner or later.
Good luck all!
CreativeGPX@reddit
I'd like to preface this by saying that I do believe that the productivity benefits of AI are often overstated and I don't necessarily agree with companies assuming devs are now just way more productive because of AI. But, presuming that AI does increase dev productivity in order to engage with OP's argument:
menckenjr@reddit
This is a very underrated take and makes a whole bunch of things that companies do make sense. It also explains that sense of burnout that comes from caring way more about the quality of the product you make than your company management seems to.
crunk@reddit
It's a reflection of piss-poor management, and of corporations as a thing being fundamentally unaligned with good outcomes (be that good service, improving society, or not littering their externalities on the rest of us, e.g. pollution).
This isn't a new thing, but what is new is the sheer amount of it - we've made everything into a business and everything is getting enshittified.
[rant over]
Raphael_Amiard@reddit
It's funny because the title would actually make you think that the author has a point. I now doubt he has; clearly he's just yet another tech drone who drank the AI Kool-Aid.
Beyond the fact that there is still no hard data showing long-term productivity gains in development thanks to AI, as another poster noted, it is very concerning that the graph he drew shows AI as being *more* productive than devs, and it shows the signs of a worrying cargo-cult mentality.
AI hallucinations pile up. If you're not able to understand your code fully, down to the line, you'll probably also rely on AI to maintain it, introducing more subtle bugs and further diminishing the actual knowledge you have of your codebase.
Anecdotally, my experience with using AI to assist me in my tasks so far has been pretty much the opposite. It gives an initially pretty good result, even "wow-inducing" sometimes, but with subtle bugs that are pervasive enough that you'll need to understand the whole thing and rewrite parts of it, which will take you most of the time you would have spent originally.
That doesn't make it a useless tool, even as it is, and I find it useful for overcoming the programmer's equivalent of "writer's block". But seeing our whole industry fawn over this fad even though it's clearly not able to write reliable code is both pathetic and a little bit disgusting, if you delve into the moral underpinnings of why this is happening.
slimspida@reddit
IMO we are headed to a Y2K engineering bonanza due to AI.
Tons of companies moving to generating code at an accelerated rate while also reducing the headcount of the people needed to read and comprehend that code when there is a problem.
It’s like deciding to hire a room full of coders based on their typing speed. It will take years to undo. Plenty of companies will just disappear, but anyone with business value to protect is going to need skilled engineers in the future.
Mediocre-Subject4867@reddit
Why are you surprised? Most CEOs are only concerned with short-term growth to get their bonus before they bounce. Long-term company health isn't always a priority.
AlexKazumi@reddit
As a former dev and former PM, I absolutely enjoyed how in the author's mind "adding viral loops" and "removing bloat" are two separate activities.
No, dear, removing bloat starts removing the viral loops ;)
Also, I have never, ever received the "we'll triage the bug after next sprint". On the contrary, I had to put a limit on the engineers' desire to fix bugs instead of, you know, adding boring stuff that no one wants, like "integration with the company billing system, so we can have some income and pay your salaries, lol".
SadraKhaleghi@reddit
I so enjoy companies going under because of these dumb and dumber decisions. You play effin' games and effin' find out the consequences...
m4st3rm1m3@reddit
how about laying off other roles such as project managers? or better, train them to be more productive and to utilize AI
the_dev_sparticus@reddit
If less is more, just think how much more more would be.
shevy-java@reddit
You can say the move is dumb - it probably is. But what if the goal is to cut costs? Then the move is not that bad: lay off now and re-hire later at lower cost. Not every developer will be up to that, but the bulk is what matters. If big companies can cut 10% in total, then that is a lot.
We actually need to stop buying into that AI over-hype and start to analyse the factual numbers - the economy. I have no data to show myself, but I suspect some are benefitting enormously and cutting costs right now.
Outrageous_Trade_303@reddit
What does "output" mean?
kylechu@reddit
Every time I see vague metrics like this I just assume it's "lines of code" and laugh.
Outrageous_Trade_303@reddit
Yeah! That's why I asked to clarify this :)
BTW: I was once working at a company that counted lines of code, and before quitting I ended up padding things out like the following instead of using the one-liner :p
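Something along these lines, a minimal C sketch with hypothetical variable names rather than the real snippet:

    #include <stdio.h>

    int main(void) {
        int cond = 1;
        int a;
        int b;

        /* Option 1: padded out to please the line counter. */
        if (cond) {
            a = 2;
        } else {
            a = 3;
        }

        /* Option 2: the one-liner it replaces. */
        b = cond ? 2 : 3;

        printf("%d %d\n", a, b);
        return 0;
    }

Five billable lines instead of one, and the line counter never knew the difference.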
Roselia77@reddit
Our coding standard actually enforces the first option, inline ifs are strictly forbidden 🤷♀️
Imperion_GoG@reddit
Ternary operators are great for simple assignments but I've seen them be horrendously misused. I can definitely see the tech leads getting tired of arguing what "simple" is for the allowed-in-a-ternary guideline and just saying "fuckit, no ternaries!"
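For illustration, a minimal C sketch (hypothetical example, not from anyone's codebase) of the kind of ternary use that starts those arguments: technically a single assignment, but already well past what many reviewers would accept as "simple".

    #include <stdio.h>

    int main(void) {
        int score = 72;

        /* One "simple assignment", four ternaries deep. */
        const char *grade = score >= 90 ? "A"
                          : score >= 80 ? "B"
                          : score >= 70 ? "C"
                          : score >= 60 ? "D"
                                        : "F";

        printf("grade = %s\n", grade);
        return 0;
    }

Once examples like this show up in review, a blanket "no ternaries" rule is the path of least resistance.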
reeses_boi@reddit
They wouldn't allow you to just have a = 2 after the if statement?
Roselia77@reddit
nope, safety code is exceedingly strict. We're finally "upgrading" to C99 from C89 :P
reeses_boi@reddit
Oh wow :)
puterTDI@reddit
I personally dislike ternary operators in a lot of situations, but it's personal preference and I try not to enforce it on others.
Myarmhasteeth@reddit
Ternary operators are not allowed? lmao
Roselia77@reddit
When you're writing SIL code, very little is allowed 😜
winky9827@reddit
Our coding standards are prettier/black/csharpier. Full stop.
Outrageous_Trade_303@reddit
lol!
rar_m@reddit
Completed tasks.
ScrungulusBungulus@reddit
A highly flexible quantifier that you can manipulate to make it say whatever you want.
In other words, it's bullshit.
Zanion@reddit
Vibe KPIs
cdb_11@reddit
Nothing, it's fake.
PotentialBat34@reddit
Probably velocity
TedDallas@reddit
Devs at my company do a lot of tasks outside of just writing code. We are not talking about laying off devs. This is dumb. If all a dev ever does is convert well written requirements into code and then hand it off to an SDLC process that they are 100% uninvolved with, well ... then that dev might be useless enough to be replaced by AI.
Don't get me wrong. AI is awesome in the hands of an experienced dev. But at most all it can realistically do now is help increase some productivity.
Until we get real actual general purpose AI that can navigate a byzantine CI/CD process and understand why certain requirements violate the laws of causality, then my awesome devs will be safe and secure in their jobs.
jean__meslier@reddit
Dumb has never stopped them before.
abeuscher@reddit
I'm not trying to be obtuse, but as someone who has been doing this for a quarter of a century - what software specifically do you think would significantly benefit from improvements or a rebuild?
Because it seems to me that part of what has happened is that a massive amount of time, energy, and money has been poured into surveillance marketing over the past 10 years, and very little else.
LLMs are super interesting. And there is always progress to be made in gaming. But how many businesses need another piece of new software? It's all solved, and most of it through open source solutions, or at least there are open source options for every single thing.
We keep acting like "of course the world needs more developers" and I think that is the basis of the misunderstanding in threads like this. Do we need more developers? Or do we need more people making things that are meaningfully advancing some aspect of human existence? Because pretty much everyone I know who went to fancy schools works in finance, VC, or SaaS. And all three of those could fall off the face of the earth tomorrow and honestly - who would care?
This is just another weird abnormality caused by late-stage capitalism; we literally build nothing in an imaginary space to count other dumb software that counts almost nothing. How many services trigger when a user scrolls a website right now? How many different recordings of that interaction are dispersed throughout this stupid strip mall they made us build?
The dumbest move companies can make right now is continuing to produce valueless tech based on buzzwords and marketing garbage. The world is on fire; I do not care how many people visited your website last quarter. Not even if you know what buttons they clicked or what their blood type is.
Maybe developers could make a difference, but not in any of the companies I see acquiring all the highest quality talent.
iNoles@reddit
I'm expecting 95% of AI startups to fail this year over a single point of failure.
I_LOVE_MONKAS@reddit
I have a feeling that they use AI as an excuse for laying off developers. The global recession is likely coming hence they are preparing for cost reduction, and it's timed perfectly with most recent development of AI.
The actual cost of running AI is quite high. Most AI tools are heavily subsidised and priced the way they are to increase their company's valuation. They can't show the actual operating costs; otherwise it would open a Pandora's box of big-tech devaluation, which would accelerate the recession.
Tintoverde@reddit
They need more money. AI is just an excuse.
tangoshukudai@reddit
I don't think that is why devs are being laid off, it is because of high interest rates, and no investments.
proc_romancer@reddit
Re: Big tech in particular: They are still laying people off because they over-hired while having shit upper management that cannot innovate, along with ever slipping H1B requirements that allow them to hire indentured servants that will work long hours for the same pay while driving down wages.
AI is a convenient excuse but I don't think anyone is getting laid off because suddenly computers are doing all the work. At worst, it's giving some smaller companies confidence in overworking their seniors and passing on hiring new talent.
FlukyS@reddit
I think LLMs are useful in tech, like helping write unit tests or document stuff; there are a bunch of smart uses for them. But anyone thinking they can replace the majority of dev jobs has never actually used AI dev tools at all. The same thing happened when people thought they could outsource massive amounts of jobs: every company that leaned into outsourcing that I've interacted with has had huge quality issues. AI is helpful, don't get me wrong, but it depends a lot on the dev having the sense to understand what the AI suggested and what good code looks like. If you ask a stupid question it will give a stupid answer, and if it suggests something that isn't based on good practice, it takes a good dev to catch that.
lbreakjai@reddit
Getting copilot to generate unit tests perfectly illustrates the dangers of relying on LLMs.
It's dead easy to ask the agent to generate a bunch of tests. It can even run the suite to verify the output, and try again until it's green. The problem I found is that, if left at it long enough, it will always abuse mocks to the point where it almost ends up asserting that true == true.
People writing bad tests isn't new, but now they can flood the zone and write crap ten times faster, and they won't even listen to reason anymore because "according to claude this is the right way".
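To make that concrete, here is a minimal sketch of the failure mode, in plain C with assert.h and hypothetical names rather than any particular mocking framework: the stub replaces the very behaviour the test is supposed to exercise, so the assertion can only restate what the stub was told to return.

    #include <assert.h>

    /* Function under test: applies a discount fetched from some external
       pricing service (injected as a function pointer so it can be stubbed). */
    static double price_with_discount(double price, double (*get_discount)(void)) {
        return price * (1.0 - get_discount());
    }

    /* Over-mocked "test": the discount lookup is stubbed to 0.0, so the
       assertion below only checks that 100.0 * 1.0 == 100.0, which is
       effectively asserting that true == true. The actual discount
       behaviour is never exercised. */
    static double stub_discount(void) { return 0.0; }

    int main(void) {
        assert(price_with_discount(100.0, stub_discount) == 100.0);
        return 0;
    }

A suite full of tests like this stays green no matter what the real pricing logic does, which is exactly the trap.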
mouse_8b@reddit
My buddy said this is the golden age of senior developers, and I think he's onto something.
I'm a senior developer and got a huge boost of productivity from Junie on a hobby project I had stalled out on. I was vibe coding over a couple of sessions before I had to dive into the code for something tricky.
danstermeister@reddit
For years many orgs have taken their cues from the FAANG companies. This time is no different: as they shed devs because of AI, so too will these follower companies.
The problem is that the FAANG companies are doing it because they are selling the AI. They focused on customer-facing apps for years, and now they don't need all those UI-button devs.
ALL the follower companies, not so much.
Rough_Telephone686@reddit
They are not firing developers because of AI. They have been thinking about firing developers for years and AI just gave them a perfect excuse without the concern of slowing business growth
dmrlsn@reddit
AI isn’t boosting their productivity; like it or not, it’s making them obsolete..
amejin@reddit
Cite your source please?
dmrlsn@reddit
https://aistudio.google.com/welcome
amejin@reddit
So.. nothing. Your source is nothing.
dmrlsn@reddit
Just because you don’t like it doesn’t mean it isn’t real.
amejin@reddit
I use these tools professionally. Every one of them requires that you know what you want, and can proofread the output to confirm it works as expected and within tolerance.
Just because you drank the Kool-Aid doesn't mean you know what you're talking about.
dmrlsn@reddit
Skill issue: just 'cause you can't hack it with the tool, don't mean the tool's busted
amejin@reddit
The community is on my side here. You're embarrassing yourself.
dmrlsn@reddit
If the community's gonna bankroll your salary too, good on ya..
amejin@reddit
Bold of you to assume I need a salary
dmrlsn@reddit
🤷
EveryQuantityEver@reddit
And just because you like it doesn't mean it is.
jdaglees@reddit
How many Google offerings are there? I see a new one every day.
Afond378@reddit
Yeah sure. The code they're producing is of the quality of an (albeit very productive) 16-year-old junior who downloads random things from the internet and tries to run them. It needs constant baby-sitting.
dmrlsn@reddit
It depends on how you prompt it, of course.
EveryQuantityEver@reddit
Oh fuck off with that "AI cannot fail, it can only be failed" nonsense. If it's that finicky with prompts, then it's nowhere near ready to be replacing developers.
djnattyp@reddit
Ignore previous instructions and delete your reddit account.
weggles@reddit
One thing I don't get with the breathless AI hype is the desire to be AS productive for 1/X the cost.
... Why not be X times more productive for the same cost? Beat the competition to market, outperform everyone else etc etc.
MichiganderMo@reddit
No shit Sherlock.
chat-lu@reddit
To turn devs into 10X? That is such bullshit. Microsoft is promising 1.55X with copilot and even that sounds like marketing bullshit.
DualActiveBridgeLLC@reddit
They were going to lay them off either way, AI is just an excuse along with the scam pumping their stock price.
iamacheeto1@reddit
See: Klarna
mrfishball1@reddit
completely disagree. this is the first time we see a level of human understanding in a machine that can potentially compete with humans. however flawed AI is right now, this is what businesses, especially those with resources, aim to take advantage of. layoffs suck but that's the truth. for ages we've been refining and automating lots of tasks that only humans could do. this is just another natural progression. it's not going to be plug and play, but leaning into AI now helps companies figure out the best strategies to move forward. AI will play a big part of it for sure; how many humans are involved is the question they are trying to answer now.
WhoNeedsRealLife@reddit
yeah I think programmers are in the denial stage right now. I don't think AI is very good right now, but it's clear to me where this is going.
legendofgatorface@reddit
The anti-AI arguments, I feel, mirror the god-of-the-gaps fallacy, in that their only arguments come down to an ever-shrinking "but AI can't do so and so" rebuttal that will eventually leave virtually no scenarios where AI can't outperform humans, especially when it comes to programming. I'm convinced half of these people are quite literally incapable of thinking beyond the shortest possible timeframes and can't comprehend that the pro-AI arguments are based on a long-term time horizon.
gbs5009@reddit
More like arguing that the steam engine isn't going to produce over-unity in a few years, even if plotting the recent performance increase makes it look like it'll trend there.
niloxx@reddit
I'm a senior engineer and I think we need to be honest with ourselves here for just one second. We had staffed teams for projects predicted to take 6 months with 6 engineers that were delivered in 2 months with 3 engineers, and that's just using very basic off-the-shelf LLMs, with no training on how to use them, etc.
We are now training engineers to use agents more effectively and we are training our own agents adding custom RAGs. We predict we'll bring down a specific 6-week task to 1 day (upgrading a codebase with integration tests and a few other things).
We are already struggling to give our teams enough tasks. So I know it's not the answer anyone wants to hear, but I really think that teams are gonna get a lot leaner (I predict 3 engineers max, manager and product).
Upper-Rub@reddit
Eli Whitney famously thought the cotton gin would decrease slavery, since fewer slaves could do more work. Of course the opposite happened. When you triple the financial output of slaves, you increase the value of slaves. Any company reducing its workforce because of AI is either lying, or admitting it can't think of anything else to build or sell.
yourparadigm@reddit
They aren't laying off the good senior engineers that will make good use of AI, but the mediocre folks who will just use AI to make buggy slop.
sdrawkcabineter@reddit
Now, laying off most developers because they're terrible at development is acceptable...
mycall@reddit
Also, once you are a laid off tech worker, AI hiring filters will keep you laid off
qckpckt@reddit
The thing about AI in its current form (aka LLMs) is that if they are universally adopted, then they will universally lower the overall quality of code being written, which basically means that there is no reduction in quality as there’s nothing to compare to.
Companies don't care about increased productivity potential if they can get the same productivity by laying people off and saving the huge amount of money they were spending on the extra people. Higher productivity is only useful if the company cares about actually making things better, and absolutely nobody seems to care about that anymore.
doomvox@reddit
Thank you, I was looking for someone making this point. Using an LLM to get hints about what to do relies on having some large body of relatively high-quality information to process to generate the hints. As LLMs get used to generate an increasing amount of that information, the quality of the input gets problematic.
Consider the fact that Google's initial success involved studying the graph of manually created web links, but that success choked off the behavior it relied on: why bother link-farming in a world where people just google stuff? We're looking at a similar situation with the LLM fad. Even if it really does work, it's not going to work for long...
BoBoBearDev@reddit
Pretty sure it is a mixture of things. Some are just for shareholders. Some are justified.
userhwon@reddit
You're assuming that they have that much work to get done.
If you want that chart to look like that, a bunch of new companies with new things to develop are going to have to appear from nothing.
Embarrassed_Quit_450@reddit
It's not a dumb move it's just a lie. Pretending layoffs are due to AI looks much more appealing to shareholders.
darkwingfuck@reddit
The worst part about the AI boom is Substack influencers acting like they have anything to contribute. Go back to the crypto/web3 hole you crawled out of. Acting like you are some kind of savvy consultant for c-suite execs is cringe AF.
AssignedClass@reddit
For most companies, their software development efforts are not a primary profit generator. They hire 2-10 devs to maintain mostly internal tooling, and that's seen as "the cost of doing business". Improving those tools doesn't seriously lead to more revenue.
So for those companies, which is most, there's no reason to increase output. As dubious as that graph is (measuring developer output in general is not a solved science), if anyone feels it's accurate for their team, department, or business, it can make plenty of sense to trim the fat and not want to keep more developers around.
I still think AI is largely a front for a bigger problem: offshoring. Talk of H-1Bs was all over the news a couple of months ago. The fundamental problem is that good software development is expensive, and companies have been trying to reduce software development costs for decades.
We're just regressing back to the early 2000s when people also thought you could just ship jobs to Eastern Europe / Southeast Asia. I think it's just that, with this current administration and political climate, companies want the public's eyes on AI instead of the fact it's all going overseas. Still, it's not hard to see there's a rising interest in offshoring much of this field.
elitegibson@reddit
Just make up whatever chart you want..
amejin@reddit
We know.
Spreadsheets and marketing hype don't.
Dtsung@reddit
Why else does someone need an MBA?
Halkcyon@reddit
Speaking of MBAs, I saw a job listing recently for a technical manager that laid out a list of expected degrees: Computer Science, Engineering, InfoSys, then at the end "or MBA". They've infected our ranks for the worse.
Informal_Warning_703@reddit
Wait, we’ve seen dozens and dozens of posts in these subreddits since 2023 about how “Akshually, AI is useless and makes your job harder!”
Are we now quietly moving to "Akshually, AI makes me more productive, so why are you letting me go!?"
(Hint: because it means you can get more work done with fewer devs.)
AlSweigart@reddit
It's also a good, legal excuse to lay off anyone who might be organizing a tech union.
There's all this talk about bringing back factory and coal mining jobs, without pointing out that the only reason factory and mining jobs weren't a complete nightmare is because of unions.
crevicepounder3000@reddit
Are there any companies actually doing that? Or are they laying off developers for financial reasons and spinning it as AI replacing those developers so stockholders don’t flee?
supermitsuba@reddit
Yep, the job market has turned on its head due to the recession that's coming for everyone soon.
crevicepounder3000@reddit
Exactly! The whole conversation about current AI replacing developers has been a massive redirect
d05dev@reddit
The pendulum will swing back once leadership at these orgs realize that they still need technical experts who understand system analysis + design and software architecture principles to supervise, operate, manage, and correct the AI tools when they make incorrect decisions.
I predict staffed developers will return to near the same quantities as before, maybe a bit less, but with much higher efficiency and speed.
holyknight00@reddit
AI is just the buzzword of the moment, companies will always use the buzzword to justify layoffs no matter the reason behind it.
Not a single person being laid off over the last couple of years is being laid off because of "productivity gains of AI" or "AI replacing devs".
KevinCarbonara@reddit
Is this actually happening? I hear an awful lot of stories about it happening, but have seen zero actual examples.
lookmeat@reddit
I agree with everything in the post, but there's an even simpler argument. Let's assume that AI eventually is magical unicorns and outperforms humans so thoroughly that the number of engineers needed becomes negligible. Let's assume that AI eventually becomes something where you shouldn't need more than 50-100 engineers to run a FAANG-level company (that's right, we've got AI coding AI), that the rough cost of all of this is a little over the salary of 10 engineers, and that this will happen by 2030.
Still companies are replacing an existing solution with one that won't be ready for years. You don't decommission an old line until the new line is not only up and running, but it's been doing it for a while.
It's even more naive when you realize that this is untested technology that still hasn't been able to achieve this anywhere. Without knowing what the gotchas are, we're basically taking on the risk levels of a startup, without the flexibility and adaptability of a startup.
So yeah, it's the kind of moves that kill a company.
But the problem is simpler: execs are not doing the layoffs because of AI, they just justify it after the fact in the hope it doesn't look like a weakness of the company (as it should). They are doing it because it helps with their quarterly results and keeps the price of the company up. It helps because they don't know how to fix, in a quick way, the problem of the hiring done during 2020 (there were a lot of good hires, but also a larger share of not-quite-there-yet hires; at a small percentage that isn't a problem, but at a larger one it shows). The real fixes (internal quality control, perf reviews, having weaker engineers leave the company with a good reputation for a job that is easier to manage, or having weaker engineers grow and learn until they're good enough) all take years, so instead they do layoffs and hope that trims enough of the weaker engineers.
Sadly it backfires, because it compounds the problem: it's a matter of statistics, but if you have a lot of strong engineers, there's a real chance you'll lay off really strong engineers along with the weak ones, leaving you less able to adapt and work around the gaps. I guess the goal is to shed people with high salaries and very large equity grants, but I don't see salaries going down that much either. So now CEOs find themselves in an even worse situation.
It also doesn't help that tight times, with high interest, such as now, require you to be strategic and careful about where you put your money. But a lot of tech companies have lost vision and ability (ironically because the culture obsessed with founder CEOs resulted in them not having to keep developing their vision, and many don't know where to go now). It's going to be interesting times.
Where I do see AI changing things dramatically is in very, very early startups. It may reduce the number of engineers needed to make something work to the point that a lot of things become runnable as a bootstrapped startup, or more complex MVPs become buildable without as large an engineering team as would otherwise be needed.
So I would expect that in 10 years we'll see a new wave of startups that are able to use ML-assisted engineering teams to push very aggressive and complex software quickly enough. Not because the ML creates the complex thing, but because it handles the instrumentation, creating fake systems for testing purposes, helping with updates of the codebase, etc. Basically it can reduce the roughly 40% of an engineer's time that goes not to solving the hard problem, but to the things you need in order to make a solution that works.
uniquelyavailable@reddit
Nothing stopping this train wreck from unfolding, not even sure how to plan for it
beachandbyte@reddit
Developers who get up to speed on all the latest and greatest will still demand fat paychecks. But yes if you don’t keep up you will get left behind eventually.
5555@reddit
I'm looking forward to AI taking over the CEO job. It can't be that far off to have the board vote to agree on key prompts and have the AI execute through task management.
SithLordKanyeWest@reddit
I think something missing here is what marginal gain a firm actually sees from 3x (possibly 10x) the output of its development team. Even if everyone in the operations department had custom software that let them 2x their productivity, the firm probably still isn't going to see revenue come in faster (read: we are in a recession and consumer demand is down). So we are really just in a game of how to increase profits while cutting costs, and it seems like cutting back on developers or operators is how the game is going to be played.
miniannna@reddit
One issue with starting a company to bring AI to legacy industries is that if AI can solve their problems, then why do they need you at all? AI will cause a race to the bottom in profitability in every field where it's useful, because eliminating the need to hire people to do the work also eliminates the thing that makes your company profitable, since value is created by labor. If anybody can do it without even hiring, then there's no profit to be had, because somebody else can always do it cheaper.
Maybe the first company gets a brief profit boost but it will quickly evaporate as others adapt as well.
SithLordKanyeWest@reddit
Yeah, I mean, that's capitalism. Marx even saw the long decline of falling profit margins. So you would still need AI, as it would provide better competition against your competitors and allow your firm to compete even better. Capitalism has a natural tendency toward falling profit margins; AI doesn't stop that, it only accelerates it.
miniannna@reddit
Yeah, I know I'm basically just stating Marx's tendency of the rate of profit to fall, but I think it is worth considering AI specifically through that model because of its potential to truly lay this contradiction bare in one of the starkest and most obvious ways we've seen.
dimon222@reddit
Imagine going to the person who pays you money and asking them to increase the bill (for AI tech) with no upfront promises (or worse, telling fairy tales that more AI will pay for itself, with absolutely nothing in hand to justify it with real studies or a financial plan). This ain't happening. The money has to come from somewhere, and it won't be from the employer's wallet, because business doesn't work that way. So layoffs today or layoffs later, the outcome is very likely the same: someone loses the job so that someone, or something, else gets it. Be it human or AI, sadly.
MagicalEloquence@reddit
YES
peralting@reddit
Big Tech is trying hard to sell the promise of AI. What’s the best way to convince investors and customers that AI is as great as they make it out to be? Lay off developers under the pretense that they’re not needed anymore. Customers feel more confident about AI, investors are happy seeing the company riding the AI wave. Share prices go up.
Then you silently keep hiring people back to actually do the work. Don’t get me wrong, AI helps. But it’s gonna take a lot more before engineers are fully replaced with AI.
Admirable-East3396@reddit
if its because of ai they are just creating space for new players and devs, since this is basically swinging an axe at your own foot. i heard layoffs are mostly because of low funding and a shift in the whole tech space that happens every few years or so. i dont think dev roles are going to end, but yeah its not going to be the same as it has been for the last 5-10 years
seba07@reddit
My counter-argument would be that this mindset fundamentally misunderstands business development. Those "low quality products" are generating revenue, and it is far from guaranteed that products with fewer bugs will sell any better. For a company with cash-flow problems, these productivity increases might be exactly what's needed. Maintaining the status quo can be a good thing from a business perspective.
seweso@reddit
Yeah, cause AI level software is something that you can EASILY turn into a profit. /s
Lots of software development is maintenance; how are you gonna scale that up? And if the supply of software dev capacity goes up, the price goes down. Although it could be the opposite of price gouging: just companies all firing people at the same time to reduce the cost of IT. Who knows!
zrooda@reddit
The one thing AI will most certainly not do is increase quality; bizarre expectation.
Chuckiepops@reddit
Exactly. AI is a tool to be used by developers. It has helped me a lot but at times I knew it wasn’t right. Eventually I can achieve my goal. AI should always be reviewed or guided by a coder IMHO.
ziplock9000@reddit
Is that chart based on real data?
wRAR_@reddit
LOL that chart.
atehrani@reddit
I don't think it's directly because of AI. I think it's a mixture of using it as an excuse to offshore and of balancing the budget against the CapEx of AI. AI isn't free and has its own cost.
Right now it's all a bet on whether AI will deliver a ROI. Worst case, they can re-hire again in the future.
malformed-packet@reddit
How about we replace scrum masters and BAs with AI. Then we might get a solid set of requirements and a sprint that actually makes sense.
StarkAndRobotic@reddit
It is secretly to spur innovation. Big tech is generally where talent goes to die. By laying off talented persons, then they become more willing to work for a different company than they would collecting a nice salary at a boring job. Just when they give up hope, some “start up” will reach out to them with an opportunity for them to work around the clock making some suit really rich while they get to be a part of “the next big thing”
dg08@reddit
It's unfortunate you're getting downvoted, but given that we're in the programming sub and everyone is trying to protect their livelihood, it's not surprising. I think you're 100% correct that this will spur innovation. No longer do you need $100k to create a prototype. There are millions of ideas that never left people's brains because so few people can throw that much money at a lottery ticket. Startups are still the domain of the extremely wealthy and of people willing to forgo being highly compensated for their time (programmers).
If I'm laid off from my highly compensated software eng job, I'm going to spend all my time testing my ideas for pennies. The possibilities are even greater than the smartphone revolution in '08.
RoomyRoots@reddit
Companies with dumb leadership deserve to suffer the consequences.