Am I being paranoid, or is the 'AI will replace software developers' narrative just a way for the incompetent tech leads, managers and CEOs to hide their own incompetence?
Posted by patmull@reddit | ExperiencedDevs | 231 comments
So far, I haven't seen any coders who are less productive than they were pre-2023. Of course, some people are less productive when they switch to vibe code mode, but usually those who refused to use it stayed the same, while those who use it meaningfully are more productive. Most people I've seen are willing to learn new things and adapt. While some people miss the old times, I think the majority of the community is generally positive and excited about being able to build more things.
Contrary to what we hear from CEOs, investors and fake AI gurus who suddenly became AI experts in 2023 despite having worked in completely different fields, powerful models' ability to generate fast prototypes exposes the incompetence of those who should provide a clear vision of the product and its requirements. I see many team leaders suddenly talking like spiritual gurus or wannabe Steve Jobs about the future of tech and how AI will change everything. I also don't know if they're secretly vibecoding some supermodel AGI, or what on earth they're doing all day. Since last year, they seem to be busier than ever, yet they're struggling to perform simple tasks such as updating database credentials or designing a functioning system architecture.
CEOs and senior management are finding it more difficult than ever to specify software requirements and provide meaningful new ideas about products. I feel like they have become so addicted to using chatbots that their brains have basically imploded into 'AI dementia'. When I repeatedly asked for a clear vision or requirements, they provided me with an AI-slop Word file generated by Claude.
I generally feel like this is a trick used by non-coders to make higher management and investors think they are irreplaceable, protecting their jobs while dumping the problems on developers. Unfortunately, coders are paying the price because they don't like dealing with this kind of dirty business politics. They are often introverted people who struggle to stand up and speak out for themselves. AI is just code involving maths, after all. Most software developers understand how it works much better than the people giving talks on panels about AI. At many business conferences there is talk about AI, yet not a single person on the panel is a software developer!
We should be much more vocal about this, otherwise the fools will be in charge for years to come. Of course, the situation will eventually correct itself, and it seems that some companies are starting to hire again. However, we can help avoid future hype and misguided thinking if the software development community is more vocal.
Sorry for the rant, but I've been missing this perspective in public discussions...
alexs@reddit
AI will not replace software developers but is going to massively change how we work. I don't think anyone has really figured out how transformative this is going to be yet.
Dimencia@reddit
If nobody's figured it out in 4 years, maybe it's because it doesn't actually make anything better
alexs@reddit
Everything changed in December 2025. The SOTA models hit a capability threshold that has made them substantially more practical.
Dimencia@reddit
Just to be clear, the models take text tokens as input and output the single next token to complete the text. Anything beyond that is not the model. It doesn't matter how much they make it recurse each prompt back into a new one; it's still pretty much the same underlying model (albeit with slightly more training), and it can't actually write maintainable code
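That single-step loop can be sketched with a toy stand-in (nothing here is a real model; the vocabulary and probabilities are invented purely for illustration):

```python
def toy_model(context):
    # Stand-in for an LLM: returns a probability over a tiny vocabulary.
    # A real model scores ~100k tokens; this one just continues a fixed phrase.
    phrase = ["the", "cat", "sat", "<eos>"]
    nxt = phrase[min(len(context), len(phrase) - 1)]
    return {tok: (0.9 if tok == nxt else 0.05) for tok in phrase}

def generate(model, context, max_tokens=10):
    # Autoregressive decoding: score, pick one token, append it, and feed
    # the extended context straight back in. That loop is the whole mechanism.
    out = list(context)
    for _ in range(max_tokens):
        probs = model(out)
        token = max(probs, key=probs.get)  # greedy pick of the next token
        if token == "<eos>":
            break
        out.append(token)
    return out

print(generate(toy_model, []))  # ['the', 'cat', 'sat']
```

Everything an agent framework does sits outside this loop; the model itself only ever performs the single-token step.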
alexs@reddit
You clearly don't have enough experience working with them and building software with them to be making this assessment. Do yourself a favour and try using something like Claude Code for a week or two.
Dimencia@reddit
Claude can write code that works, but that's only the bar for a junior dev. It can't write maintainable code that's well designed. No amount of instruction can prevent Claude from just filling your codebase with slop and actively making it harder to maintain
alexs@reddit
This is clearly nonsense because with unlimited prompting Claude is equivalent to typing in every line of code yourself.
Dimencia@reddit
That's worse. You do realize how that's worse, right?
alexs@reddit
I'm just pointing out that your argument is incoherent and that you might want to try and get some more experience with this thing you have such strong feelings about.
Dimencia@reddit
It was a The Good Place joke, but we can't post memes here. But "it doesn't write slop if you explicitly dictate every line it writes" doesn't make the point you seem to think it does
FortuneIIIPick@reddit
For me, it's a better search engine, usually, but not always.
carterdmorgan@reddit
Not trolling, but have you used Claude Code or Codex? They’re incredibly powerful tools. We left the “LLMs are a better search engine” phase a bit ago.
Dimencia@reddit
They haven't significantly improved in about 4 years - we just invented tools that can recursively loop back into the LLM prompts to do longer tasks that would be too much for it to contain within a single context window, but it's basically the same LLM. That doesn't make it any better at the part it's bad at, writing maintainable non-slop code
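The "recursively loop back into the LLM" part can be sketched roughly like this, with a hypothetical `llm` callable and a hypothetical tool table (this is not any vendor's actual API):

```python
def run_agent(llm, tools, task, max_steps=10):
    # The outer loop the new tools add around the same base model: call the
    # model, run whatever tool it names, and feed the result back in.
    transcript = [f"Task: {task}"]
    for _ in range(max_steps):
        action = llm("\n".join(transcript))         # one ordinary model call
        if action.startswith("DONE"):
            return action                           # model says it's finished
        name, _, arg = action.partition(" ")
        result = tools[name](arg)                   # e.g. edit a file, run tests
        transcript.append(f"{action} -> {result}")  # recurse the result back in
    return "gave up"

# Demo with a scripted "model" that calls one tool, then stops.
script = iter(["shout hello", "DONE: finished"])
print(run_agent(lambda prompt: next(script), {"shout": str.upper}, "demo"))
# DONE: finished
```

The point of the sketch: the tooling extends how long a task the system can chew on, but each iteration is still the same underlying next-token model.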
The whole divide over AI is really because people have different jobs. Some devs are just there to write code that works, and ship it - and yeah, unfortunately, AI can do that. Other devs are paid to write good code that can be maintained for a long time, and AI only makes that worse
Ok-Hospital-5076@reddit
I have used it. In fact, I am using Codex and Claude. They are search engines: they find what you ask them. Yes, the tooling is excellent; they generate code and execute it. But the core of it is still a knowledge base and search capabilities. You input a garbage prompt, they give you a garbage response. They aren't making decisions anytime soon.
kowdermesiter@reddit
A search engine doesn't plan, execute and reason until it reaches its goal. Coding agents use LLMs with search-engine-like qualities, but they do so much more.
Cars aren't just faster horses either.
Eexoduis@reddit
Search engine -> LLM is more like bareback horse riding to riding in a horse drawn carriage. Definite upgrade in many aspects but several notable drawbacks and far from the pinnacle of transportation evolution
kowdermesiter@reddit
We are absolutely not at the final stages of AI and I like your analogy too.
alternatex0@reddit
I think "reason" is still a loaded word. Hard to define with the accuracy of other terms used in software engineering. As for planning and execution, all relational databases plan and execute queries, the only difference is it's (I believe) generally a deterministic approach. But I'd wager something being non-deterministic doesn't necessarily make it intelligent.
kowdermesiter@reddit
It is loaded, but in this context I think it's understandable: an LLM's "reasoning" is an extra step of building context and steering itself toward a more deterministic output.
If you bring up intelligence, then it also needs to be defined, and the more you think about it, the more general you have to make it. LLMs are not on par with human-level intelligence, but neither are dogs. That doesn't mean they are not intelligent. To me, intelligence is a spectrum: problem-solving on unseen tasks based on existing knowledge. Agents/LLMs can definitely do that beyond their training data.
adzx4@reddit
You can't seriously compare these two as 'planning' with the only difference being determinism. We do not know what or why LLMs output what they do in their reasoning (see DeepSeek R1 Zero), but we know it increases their generalizability and performance on benchmarks.
The scope of the tasks a relational database can take on is also completely different from an LLM's. Imagine giving the inventor of the relational database Claude Code at the time of invention.
alexs@reddit
Complete nonsense. I can one-shot complex Bayesian modelling simulations with Opus. You couldn't do that a year ago, and it absolutely requires making decisions based on context.
alternatex0@reddit
Is this not anthropomorphizing AI chatbots?
All algos more or less make decisions based on context, the only difference is LLMs are such complicated black boxes that we don't fully understand how the decision paths are taken. That doesn't mean LLMs make decisions in the same manner that intelligent beings do.
alexs@reddit
Only if you think making a choice is some special quality innate to humans.
By decision I just mean that these systems will suggest different solutions based on the context available to them.
Do traditional tree-based expert systems make decisions? Kind of yes, actually, IMO.
NUTTA_BUSTAH@reddit
No decisions are made. Choices are weighted based on probability and one is picked. Hope it is the correct one (== validate).
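That weighting-and-picking step, written out in a few lines (the token names and scores are invented for illustration):

```python
import math
import random

def sample_next(scores, temperature=1.0, seed=None):
    # Softmax the scores into weights, then make one weighted random draw.
    # The "decision" is exactly this draw, nothing more.
    rng = random.Random(seed)
    weights = {t: math.exp(s / temperature) for t, s in scores.items()}
    r = rng.random() * sum(weights.values())
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # guard against float rounding

# Low temperature makes the top-scoring token win almost every time.
print(sample_next({"return": 2.0, "raise": 1.0, "pass": 0.5}, temperature=0.1, seed=0))
```

Whether that draw counts as a "decision" is exactly the disagreement in this subthread; the mechanics themselves are just this.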
alexs@reddit
This is some theory of mind 101 nonsense that doesn't stand up to scrutiny. Please educate yourself.
TheCharalampos@reddit
Not decisions.
Meeesh-@reddit
Sure, but you also need to realize that a large part of a software engineer's coding work is searching and referencing resources.
They aren’t good enough to use unsupervised, but they can do many tasks autonomously and they generally are good at following instructions. They can follow style guides, they can debug issues and fix them, they can read documentation and write code from it.
It’s not going to replace humans any time soon, but if I can turn a 60 minute task into 1 minute prompt and 5 minutes for review, then that’s a huge time save.
alexs@reddit
I think they absolutely do replace humans in some ways. A curious generalist can now take on much more complex and advanced work than would have made sense in the past.
bobsbitchtitz@reddit
Without an advanced understanding of the underlying concepts, it's almost impossible to tell if output is valid or invalid.
alexs@reddit
The models are good enough that you can guide them into creating working validation systems if you are prepared to do a bit of research yourself on the topic.
NUTTA_BUSTAH@reddit
You can never ensure the validity of your validation system if you don't have a deep understanding of the underlying concepts, no matter how hard or how long you prompt. You might gaslight yourself into believing you understand the concepts, but you don't.
alexs@reddit
It's not a paradox because your fundamental position is just wrong.
You cannot "ensure validity" as some platonic concept. You just reduce uncertainty and move it around.
The only real way to measure correctness is by seeing the system in action and observing outcomes under different conditions. This turns correctness from a boolean into a statistical property of a system.
You absolutely can use LLMs to accelerate testing and validation in this way. By exploring the space more quickly, you can build both better intuition and better statistics on what works and what doesn't.
adzx4@reddit
But you can go through the process of getting the LLM to generate code, which helps you understand what's going on, gives you reference websites you can look up to validate, and lets you slowly learn the concept over time to the point where you can confidently validate what the LLM is doing. You just need to be extremely proactive and fight the laziness of letting the LLM do everything, which most people can't.
I've learned a whole load of things like this: async Python, various packages, uv for Python, memory profiling, DynamoDB... I could keep going for a while, but these are things I can say I have a fair understanding of, through using an LLM, having started out as a novice.
NUTTA_BUSTAH@reddit
Yep, it's a good search engine.
TheCharalampos@reddit
At that point why not just do the thing?
adzx4@reddit
Read my comment again
TheCharalampos@reddit
At that point why not just do the thing?
wannabepinetree@reddit
The question is not, "can you do the thing" but rather "how can you learn to do the thing". I think most people are underestimating how much these tools will be used by beginner programmers to learn how to code, even if they are imperfect tutors.
bobsbitchtitz@reddit
If you have a good source I'm willing to change my opinion. I use chatgpt, claude and cursor daily so I'm always willing to expand skillset.
Ok-Hospital-5076@reddit
I agree. I can do much more work with Agents. I didn’t mean to say the coding agents are no better than google. They are very effective. I am happy to save time on typing.
zdubbzzz@reddit
Dude if you're using Codex and Claude as search engines instead of building fully agentic workflows around them, you're doing it wrong and are way behind. Those tools are not just iterative vibe coding tools
carterdmorgan@reddit
I guess I have a different understanding of the term “search engine” than you. I don’t think of search engines as something that produces “novel” output, like agentic coding tools do.
I am not disputing at all that these tools require heavy supervision and are incredibly prone to misuse in the hands of a non-engineer. But if I were hiring an engineer today and asked them how they use LLMs and they said “I only use them as a search engine. I write all my code by hand.” I’d be very skeptical of their understanding of where the field is headed.
Ok-Hospital-5076@reddit
Fair enough. When I say it's a search engine, my understanding is: a tool that hooks you up to a knowledge base and lets you search it is a search engine.
A model is only as good as its training data, and the quality is only as good as the internet. It's much more effective than Google due to sorting and streamlining: a natural evolution of traditional search engines.
Agents are more or less some pretty good QOL tools around the models.
djnattyp@reddit
They're search engines... that sometimes find what you ask them, sometimes don't find what you ask them, and sometimes find things that don't exist.
Prompts randomly fail - in different ways.
They're bullshit engines.
TheCharalampos@reddit
As someone who's used both, how do they transform work? It's pretty similar; it's just that this tool does a lot of things where other tools used to do one, or a couple at best.
Tired__Dev@reddit
I started vibe coding to figure out how far I can take it. For me, it's the rate at which I'm able to learn something new, sniff out code smells, and, if there's a problem, tackle it right away instead of going down an internet rabbit hole. Knowing how to use AI to onboard yourself into a new project is game-changing, and will probably get rid of those who wrote a million lines of undocumented code to code themselves into a codebase.
The barrier to learning something is just lower. You can actually pivot into an entirely new domain very quickly. If you have any sort of brain and are able to ask the right questions to challenge yourself, you'll get good extremely quickly.
TheCharalampos@reddit
That's a very fair point, mentally grasping a new codebase or setting things up are tricky and I can see how it would be genuinely helpful.
pardoman@reddit
Fully agree. It can write code pretty damn well these days.
FortuneIIIPick@reddit
You're welcome to use what you want, how you want.
psychicsword@reddit
I also use it as a way to quickly write code in a syntax I don't fully know but to do something I know specifically what I want to do.
That is still mostly a better search engine because it saves me from needing to Google "switch statement syntax c++" or similar.
JoeCoT@reddit
If you know what you're doing, AI tools can make coding and especially tracking down bugs massively faster. It's definitely a huge force multiplier. A little bit of understanding of AI tools + a large understanding of coding means you can get things done significantly faster.
Where it falls down is that it's a force multiplier for senior developers, not necessarily for junior developers. Junior devs don't know when the AI is wrong and don't know when to force a correction. Senior devs are used to delegating and giving enough explanation to get the thing done.
And it gets worse when management gets involved and tries their hand at AI coding themselves -- that's where "vibe coding" comes in. They code massive projects and have no idea how or why they work, and the senior devs have to follow along with a shovel.
Just senior devs using AI tools is a massive lift. It's everyone else using them that's going to drag us all down.
punio4@reddit
Just like blockchain did, right?
ploxneon@reddit
This guy is being down voted but he is right. So far my experience is the kind of people who claim to be "faster" with AI should probably not be using it.
The hype outpaces the value for sure
RandyHoward@reddit
Not at all, there is a lot of value with AI in our daily workflow. A lot of people, especially non-technical people, definitely overestimate that value, but that doesn't mean it has no value at all.
I've personally seen a significant boost in speed in my own work. Just this morning I picked up a ticket to make a database column not nullable. That meant finding all the places in the code where the value could become null and either throwing an exception or fixing it. Simply identifying all of those places can take quite a bit of time for a dev, and a dev may not even find them all. But AI not only found every place that needed updating, it also provided the updates. It had to touch 7 different files: models, actions, factories, seeders, and tests. It would've taken me most of a day to do that without help from AI. Instead I let AI handle it, and spent 45 minutes reviewing what it produced. Nearly a full day of work reduced to about an hour.
If you think there's no value to AI in our work, and refuse to embrace it, then you're going to find yourself unemployable within the next few years. But don't vibe code. That leads to pain and misery.
EggsFish@reddit
To me this is the disconnect - if you were able to validate that it found every usage of the column in 45 minutes, then it sounds like you actually could have found every usage yourself in 45 minutes, right? Otherwise how do you know the AI didn’t miss anything?
RandyHoward@reddit
Because there aren't many other places where the logic resides. Sure, I know that we've got models, actions, factories, seeders, and tests that need updating. But I guarantee you that I'd miss one or two spots on my first pass. Maybe I can identify all the spots within 45 minutes, maybe not, but I sure as hell can't also make the updates within 45 minutes. And I know it didn't miss anything because we have a robust test suite and all the tests pass.
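The shape of that ticket, sketched against SQLite with made-up table and column names (a real codebase would do this through its framework's migrations, not raw SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, plan TEXT)")
conn.executemany("INSERT INTO users (plan) VALUES (?)", [("free",), (None,)])

# Step 1: backfill existing NULLs so the constraint can be added safely.
conn.execute("UPDATE users SET plan = 'free' WHERE plan IS NULL")

# Step 2: SQLite can't add NOT NULL to a column in place, so rebuild the table.
conn.executescript("""
    CREATE TABLE users_new (id INTEGER PRIMARY KEY, plan TEXT NOT NULL);
    INSERT INTO users_new SELECT id, plan FROM users;
    DROP TABLE users;
    ALTER TABLE users_new RENAME TO users;
""")

# Step 3: any code path that still writes NULL now fails loudly, which is
# what lets the test suite catch the spots a human (or an AI) missed.
try:
    conn.execute("INSERT INTO users (plan) VALUES (NULL)")
    enforced = False
except sqlite3.IntegrityError:
    enforced = True
print("NOT NULL enforced:", enforced)
```

The constraint plus the test suite, not the reviewer's eyeballs, is what makes the 45-minute review defensible here.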
alexs@reddit
The hype always outpaces the value, but don't let that blind you to there actually being some value.
valkon_gr@reddit
Blockchain and the metaverse had zero impact on our day-to-day work; now everyone codes and works with AI.
creaturefeature16@reddit
LLMs accelerate the wrong parts. Contrary to the narrative, we didn't need to write code faster, and most especially not code we didn't fully understand, in huge swaths we couldn't review in reasonable time frames.
This whole "AI does the coding, and the human in the loop is the orchestrator" is entering a weird stalemate of skills vs. management. Anthropic's own study was quoted:
This is the constant contradiction around LLMs that have people talking out of both sides of their mouth: If we're to be the orchestrators of these coding agents, how will we be able to do so if the skills that enable that process are actively slipping away with continued usage of said coding agents?
One-Vast-5227@reddit
Same problem with supervising self-driving vehicles. Pattern repeats. So-called entrepreneurs working overtime to separate so-called investors from their money
Noblesseux@reddit
Self driving cars have like a whole array of self-contradictory statements that are used to encourage their adoption tbh. Like a lot of the things people say they're going to do are self contradictory or kind of fall apart when you think about them too much, which tends to be the case with basically everything SV tries to push because they're basing their scenarios on Sci-Fi while ignoring practical implementation issues.
IPv6forDogecoin@reddit
Maybe I haven't been keeping up with the self-driving car hype, but what contradictions are they claiming?
Noblesseux@reddit
The way they suggest you "solve" parking is by having the cars just kind of constantly drive around or drive themselves out to another location which actually doubles the amount of VMT/traffic being generated at minimum.
In order for them to solve traffic, they need to be orchestrated. Meaning there's likely going to be billions of dollars worth of infrastructure and no company is going to pay to build that and then let you use it for free. It'll likely either be a very expensive subscription add on for your car or they'll forgo that entirely and just make you pay for every use while not owning the car at all.
This one is just generally kind of silly if you think about it. What if someone vomits in your car? What happens if the car breaks down at a remote location without you in it? You're massively increasing the wear and tear on your vehicle which will turn into maintenance costs that would offset most of any money being made. You'd also likely have to give a cut to whatever company managed the platform people use to hail it, so you're just basically breaking your car for chump change.
VeryLazyFalcon@reddit
Simple: make the cars owned by one company that is responsible for orchestration, maintenance and delivery; you just rent rides.
But hear me out, what if we optimize and connect these cars into one for bigger capacity? And if they are constantly riding around town, why not put them on tracks for even better efficiency?
Polite_Jello_377@reddit
Trains, crabs and trees
dramatic_typing_____@reddit
Do you understand though why people claim self driving cars solves traffic? What you mentioned here does not refute it.
cholantesh@reddit
Why not?
Quarksperre@reddit
I mean this extends to a large chunk of the internet.
Movies that you can stream anywhere sounds great, but all movies for 10 dollars a month was never sustainable. What actually happens is, as soon as a platform has cannibalized the previous system, it has to make money and starts raising prices.
In the end you probably pay more for movies and series per month than 15 years ago. With some added disadvantages.
Same with music. Same with Amazon. Same with everything.
While this is happening all remaining places to meet up die. And in the end the reason is just that meeting with other people is very cheap and fun. And no company really can profit a lot from it.....
One more point: why are gyms the most successful form of activity/sport right now? Why not some random ball game?
Because it's by far the most expensive way to be active. That money is used for ads. Those ads pull in more people. Sponsored influencers and so on. It's all a vicious circle.
Yeah, that's probably enough rambling for today.
vcxzrewqfdsa@reddit
Curious, how are AVs following this pattern? Like, R&D for self-driving models is slowly becoming more and more black-box? Because from my understanding AVs don't rely on generative AI and use other ML models
Arctan13@reddit
Their so-called money, even
FrenchCanadaIsWorst@reddit
It’s not even just an atrophy problem, when Claude code generates thousands of lines of code in minutes, it’s extremely tedious to go through and review and understand all of it, even if your coding skills aren’t atrophied. So in most cases people are going to opt to just trust the AI until something breaks.
Colt2205@reddit
"Never write more than you can support" is basically a core lesson that is getting tossed out the window right now. I'm dealing with that issue with the AI innovation projects at work, with one particular one being rewriting a legacy application from one stack to another stack. That makes the skill atrophy problem worse since now someone isn't maintaining their old skillset and has to pick up a new one, which will probably not go beyond basic level because of the AI use.
And it doesn't seem to matter how much or how many people write this and similar stories out. It's like there is this disconnect between two sets of people.
allllusernamestaken@reddit
Plan mode. Tell Claude to implement changes in small chunks, let you review them, and continue until done. Your workflow ends up being a mix of pair programming and PR reviews; a steady stream of changes made but in small, digestible parts.
A simple (but real) example was an enum with 6 values that has to be mapped to match a third-party API spec. Claude writes the mapping function, makes me review it, plans for 7 tests, and adds 1 at a time for me to review. I never had to review more than ~10 lines of code at a time, I know exactly how it works, and it took 30 seconds of review -> approve -> repeat -vs- 3-5 minutes of writing the code and copy/pasting test cases.
Is 2-4 minutes of time saved revolutionary? No, but it adds up.
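A hypothetical version of that kind of change (the enum values and vendor strings here are invented, not taken from the comment above):

```python
from enum import Enum

class OrderStatus(Enum):
    NEW = 1
    PAID = 2
    PACKED = 3
    SHIPPED = 4
    DELIVERED = 5
    CANCELLED = 6

# Internal enum -> the strings a third-party API spec expects.
_API_STATUS = {
    OrderStatus.NEW: "created",
    OrderStatus.PAID: "payment_received",
    OrderStatus.PACKED: "ready_to_ship",
    OrderStatus.SHIPPED: "in_transit",
    OrderStatus.DELIVERED: "completed",
    OrderStatus.CANCELLED: "void",
}

def to_api_status(status: OrderStatus) -> str:
    # Fail loudly on an unmapped value instead of sending garbage upstream.
    try:
        return _API_STATUS[status]
    except KeyError:
        raise ValueError(f"unmapped status: {status}") from None

# The kind of small, reviewable checks the workflow above adds one at a time:
assert to_api_status(OrderStatus.SHIPPED) == "in_transit"
assert {to_api_status(s) for s in OrderStatus} == set(_API_STATUS.values())
```

Each mapping entry and each assertion is a ten-line-or-less diff, which is what keeps the review-approve-repeat loop fast.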
SLiV9@reddit
Code MRs should be reviewed by someone other than the author. An MR made by a human supervising an AI should be reviewed by a second human. It is that person that is suffering from review fatigue because the resulting MRs are massive and full of nonsense that the author thought would be nice but couldn't be bothered to type out themselves.
TacoTacoBheno@reddit
And you've used 30% of your monthly tokens in 3 hours xd.
It'll be interesting to see what happens as they start raising token prices
Dimencia@reddit
LLMs accelerate whichever part you use them for.... it's not the LLM's fault that people are using it to write code, instead of using it to bounce ideas off of for design and architecture. It's the engineer's job to decide how to use a tool most efficiently, and it was obvious 4 years ago that making it write code was not the answer. But there are many people encountering it for the first time now, and it'll take them some time to realize how much worse it makes everything
rwilcox@reddit
Yes - I would argue that the best reading engineers can do right now is to read about the Theory of Constraints / Critical Chain. (TL;DR: it's about bottlenecks and constraints.) The Goal and the few books around it.
I'm of the opinion that developers are looked on as the bottleneck because they're the only thing in a typical Scrum implementation that gets measured. But really, if you accelerate the dev process you'll either burn through incoming work too fast, or just have piles of not-yet-shipped-to-customers work downstream.
Or, what I've actually seen: generated code saves you time up front, but then ends up making assumptions you have to spend extra time to undo. Or, if you need to extend it, you're spending extra time to understand this whole huge mess, because there's nobody around who understands the code ("I generated it and it worked then!")
AaronBonBarron@reddit
Undoing assumptions is especially painful due to the literal/concrete code style that AI seems to default to. It's hard to modify and even more difficult to extend.
Zeragamba@reddit
everything in one file, massively long functions, arrowheads of death. These things were trained on the majority of publicly written code, but it just turns out that most of the code was garbage.
adzx4@reddit
To be fair, I just tell the LLM how I like the code to be written and make some edits and renaming most of the time; through this I can mimic my own style
Weekly_Mammoth6926@reddit
Agreed, LLMs write bad code if you don't tell them how to write good code. I've managed to get my copilot-instructions to the point where the code is almost always exactly how I would have structured it, or better, because it's foreseen a future problem I'd missed.
It's very simple to just instruct the agent to separate different concerns into different files or whatever and write functions with a single responsibility. I also now get good, accurate docstrings for free that are kept up to date automatically, which is great not only for me when coming back to the code after some time but also useful for agents in other sessions to quickly pick up the required context.
adzx4@reddit
Right! I can't believe 'everything in one file' is seriously someone's gripe; it just shows there is a huge gap in people's skill at using the models
ITBoss@reddit
> Yes - I would argue that the best reading engineers can do right now is to read about the Theory of Constraints / Critical Chain. (TL;DR: it's about bottlenecks and constraints.) The Goal, Critical Chain, and the few books around it.
For a more dev/IT-focused book, I love The Phoenix Project. It was inspired by The Goal but is more relatable since it's focused on an IT environment.
nonsense1989@reddit
I would shout out its "sequel/lore expansion", The Unicorn Project, too.
I'd argue Phoenix is more of the CI/CD-DevOps type, where Unicorn sits more at the product/development intersection.
Both books are great reads
rwilcox@reddit
I recommend the Phoenix Project with hesitancy: while it’s good I feel like I have to give homework for the homework (“read this but also read Juran’s On Quality, DeMarco’s Slack, and Out Of The Crisis”. Probably need to add Critical Chain to that list too.)
DrSlugger@reddit
All my seniors spend all day triaging incidents, validating test results, and sitting on zoom meetings. They then get pinged by everyone else to help them fix issues with their code.
Anybody who thinks the bottleneck of dev work is the coding itself is not looking at the systems correctly.
ProbablyPuck@reddit
I'll add Control Theory to the study list. I also predict that Category Theory is going to become dramatically more important.
Your skillset will have to change, which requires a heavier reliance on the science.
theDarkAngle@reddit
Yeah and just anecdotally, the experience of using agentic tools is at best, kind of a slog, and at worst, both slower and worse than using no AI at all or using AI only as like a super Google.
I didn't think so at first but I've been through several iterations with these tools and it's always the same experience. Initial euphoria and the feeling that "now anything is possible", slowly giving way to disillusionment and finally, "actually this doesn't really help overall".
Lately I've felt like a slop editor, so finally, last week, I disabled Claude Code, and I'm actually kind of enjoying the process again, and having stories actually pass QA the first time I submit them, no problems. Sure, it takes longer to get them into QA now, but the overall time to QA approval is already faster, despite still having some atrophy and having to stop myself from going "let's just re-enable Claude"
dysprog@reddit
Writing the code is the part of the process that I enjoy. Getting AI to write the code while I do all the other stuff is taking away the fun part.
It's like hiring a guy to fuck your wife for you so you have more time to spend at work. Entirely backward.
I'm going to start calling it Cuck Coding to drive the point home.
Wonderful-Habit-139@reddit
This initial euphoria is what makes it difficult to resist the addictive nature of LLMs, even as it actively makes you dumber.
And yeah I've had the same realization that you've had around 2025, where I just completely dropped AI code generation, went back to my trusty tools, and started growing and learning again, and I'm still extremely productive.
The issue is... It feels like a slog and makes us slower when we use AI to generate code, but for a lot of people, they have writer's block, and face so much difficulty in just "getting started", and can only really start getting things done if they can see some code in front of them where they only have to edit it a little bit.
It does make the code worse since they don't think as hard about the problem and the cleanest way they can figure out to solve the problem, but the alternative to using AI for them is basically taking many more hours just to figure out how to start implementing the features. And it's more mentally taxing.
VRT303@reddit
I disagree. There's a phase of junior-mid level where you need to struggle and debug to train yourself.
But the hardest part has always been reading code. And not just in reviews, but when changing something that was built last week while you were on holiday, or ten years ago by someone else.
That does not atrophy at all from spec driven LLM code generation.
coworker@reddit
Your argument assumes humans must be able to read and understand code unassisted by AI. This will not always be the case. Much like you don't read and write assembly any more, eventually we will not read and write the current "high level" languages.
Also, AI speeds up every aspect of the development cycle, not just coding. You're falling behind if you're not using AI to write tickets, design docs, and PRs.
Astral902@reddit
"This will not always be the case." Care to elaborate: how do you know this for certain?
creaturefeature16@reddit
Natural language isn't just another abstraction layer, though. A higher level of ambiguity is not a higher level of abstraction. This has already been discussed for 50 years.
RE: speeding up other aspects - that's basically my point. Those AI-generated tickets are being met with facepalming. Those design docs and PRs are 10x longer than they need to be. It's not speeding the cycle up, it's just making it seem that way because it made everything busier and more intense. The latest reporting is that we can't seem to find the productivity gains you're talking about.
coworker@reddit
Why are you assuming natural language?
creaturefeature16@reddit
wtf is this word soup of pseudo-science
please get lost
VictoryMotel@reddit
It's ai bro speak for people to rationalize not knowing how to program. They think they are going to "leap frog" and "leave behind" all the people who actually understand software by typing in ai prompts.
It's like the people who think they could beat a trained fighter by eye poking and groin kicking. They don't know what to say when you point out that the other person could do that and everything else better.
coworker@reddit
I'm sorry you don't understand but the world will still move on without you. Good luck!
coworker@reddit
PS those articles are about general industry and not software development specifically lol
recycled_ideas@reddit
Citation needed.
Writing tickets is not an aspect of the development lifecycle, it's a way to record work, and someone still has to describe what the work is going to be in the first place.
Design docs are meaningless if no one is going to take the time to actually think about the problem or verify that the solution actually matches the design.
And if you're using AI to design the code, write the code and review the code at its current level of ability you're a moron, because it's not there yet.
You've also missed, testing, requirements gathering, support, performance monitoring and a dozen other parts of the software development lifecycle.
AI is useful, sometimes, in limited ways, but the people who think it's great are always the worst developers.
coworker@reddit
I see someone's not a staff+ level engineer.
recycled_ideas@reddit
Of course an over-promoted dimwit thinks that AI is the answer to everything.
chickadee-guy@reddit
Your comment shows a complete ignorance of how abstraction and assembly work in computer science.
I'd suggest just zipping it when you don't know what you're talking about; you make yourself look like an idiot.
coworker@reddit
Explain or GTFO
33ff00@reddit
Yeah I didn’t need research for this beyond witnessing the absurdity first hand after about two weeks of use.
Mizarman@reddit
A true senior dev knows the code is not the hardest part; figuring out the complexity of the features and data is. The people who are awed by AI are juniors, noobs, or non-technical people who have not gotten there yet.
Weekly_Mammoth6926@reddit
I think it's kinda the other way around. Because writing code isn't the hard part, it's seniors who have the most to gain: if we can automate most/all of the code-writing bit, we can then spend more time on the actually complex bits.
My view is that juniors should be restricting their AI use quite a bit so that they can develop that code writing skill and gradually integrate more AI as they advance into doing the more complex work.
I've actually noticed it being a problem for me when I began working with a new language a few months ago that was very different from the languages I'm most familiar with. I found that I was picking the language up much slower than before and really struggled to review the AI output.
creaturefeature16@reddit
Agreed, but therein lies the rub. An engineer who had decades of coding, friction and experience logged, and had the opportunity to learn that complexity and grow that wisdom is one thing...that takes time and experiences.
What is happening right now is a trend where developers, who've never had that longevity or the years of friction that led to that deep understanding, are being moved into those same type of "higher level" ways of thinking, requiring the same skills to manage the AI agents that the senior engineer took decades to obtain.
That can't end well.
eurodollars@reddit
Director and I had a conversation about “hand writing code” once a week so we don’t atrophy
micseydel@reddit
Is once a week enough? How are you measuring it?
eurodollars@reddit
We aren’t and it will be bad
alexs@reddit
I think the atrophy of coding skills is a real risk, and it's part of why I hate using Claude Code, it hides too much of the actual code from me. However LLMs are a massive lever if you are curious enough to actually read through the output and use them as a collaborator to help teach you what's going on.
We 100% have not figured out the right interaction patterns yet and I am confident that Claude Code is heading in the wrong direction but I don't think "LLMs accelerate the wrong parts" is really defensible. LLMs accelerate EVERYTHING, you get to choose the parts.
creaturefeature16@reddit
Something the industry at large is pretending to ignore is that not all developers think the same, and that coding is planning.
I love what Dax (the creator of OpenCode) recently said in an interview discussing "Spec Driven Development":
LLMs fill in that ambiguity with assumptions (or hallucinations), which results in: more review, more agent revisions, more tokens burned, and more disconnection from what is being created. Even with the most precise, clearly-structured prompt, an LLM may still generate a hallucinated method because at its core, it is a next-token prediction engine—not a compiler. You cannot replace a deterministic system with a probabilistic one and expect zero ambiguity.
lurco_purgo@reddit
Yes! That's the biggest thing for me. "Just write out the exact specs and the LLM will make it happen" - but writing out exact specs upfront is incredibly hard! I need to write some code before I have a complete picture of the functionality in my mind. Code itself is how the spec takes shape - I think through data flow, modules, and functions (especially in the context of an existing codebase, where there's always an element of rediscovering its limitations), not in some abstract business logic, unless I'm truly in "LLM take the wheel" state of mind I guess.
Being forced to construct a full specification in natural language, just because "the LLM codes faster and you can focus on orchestrating", is asking me to work in a more abstract, error-prone, and time-consuming way than simply writing the code myself. And yeah, LinkedIn will say it's a skill issue, and usually I'm the first to think that the problem is me and not the rest of the industry, but after these couple of years I really think I'm a decent developer when it comes to architecting code on the spot.
So are the LLM prophets (at least the big names, like Addy Osmani, etc.) that much better at doing this upfront, or do they simply not care about the quality of what their "agentic swarm" actually produces?
And another thing: "You'll get left behind" - why the fuck would I? I mean, like I mentioned, I'm easily swayed by others, so I try to keep track of all these new agentic flows that everyone is talking about (MCPs, skills, harnesses, subagents, etc.). It's... really not that deep? What exactly do we fall behind on? A bunch of markdown files? Sure, there is some skill in managing those agents (some stuff you can't enforce through MARKDOWN, for example), but overall this is pretty basic stuff. Should I really divert my attention from developing my "manual" skills, architecting and prototyping my own templates, diving into trade-offs and performance, deep diving into a specific tech stack (Kafka, Redis, I don't know, some auth setups)? There is ALWAYS so much to learn as a developer, especially if you're truly interested in your craft. Focusing on AI instead because of FOMO doesn't seem like the best bang for your buck, in my opinion. You can always dedicate a few weeks to catch up, and I'm sure any decent developer will easily speed run through all of it.
creaturefeature16@reddit
You're spot on throughout this entire post.
The whole "you're going to get left behind!!!111" bullshit is the most eye-roll inducing.
They said that 3 years ago, yet everything I learned 3 years ago is either exactly the same (type a prompt, get an output) or the tools are completely different (use ChatGPT... no wait, use CoPilot. Noooo, use Cline! Wait no, it's Cursor. Wait wait, it's Claude Code. Ahh, now it's OpenCode. No, Windsurf! Migrate to Antigrav! etc...).
And yet, underneath it all, is the same stupid workflow of what I call "prompt & pray".
If you didn't even glance at LLM tooling until yesterday, not only would you NOT be behind in any way, you'd be fully caught up within a day.
alexs@reddit
I am sure it CAN result in disconnection from what you are working on, but it doesn't have to.
creaturefeature16@reddit
I do agree that LLMs accelerate all things at once and we get to choose (this is happening in all industries where LLMs can be of use). Currently, the industry is using them in the worst ways, like measuring "productivity" through token churn and KLOC.
To reduce the chances of said disconnection, I imagine we'd be doing a hybrid blend of manual coding + LLM assistance, which ultimately turns the tools into more of a glorified "smart typing assistant", rather than the "prompt & pray" workflow that Claude Code and the like offer.
squidgyhead@reddit
What is the study that you are referencing here?
micseydel@reddit
It would appear to be https://www.anthropic.com/research/how-ai-is-transforming-work-at-anthropic
ShiitakeTheMushroom@reddit
We still have mathematicians that are great at math, despite the existence of calculators.
creaturefeature16@reddit
In programming, the equivalent to the calculator would be the compiler.
Your analogy is meaningless.
Fidodo@reddit
LLMs can be used for quality code with the right workflows, but doing so requires thoughtfulness and effort, just like any best practice.
Using them to offload critical thinking will always end in failure, but AI can still be used if the goal is quality.
My current workflow is to use them for green field prototyping, and after the first pass I use that experiment to refine the spec based on where friction emerges, then depending on how bad the friction is I either aggressively refactor what was built, or I refine the spec and try again. If code is now cheap there's no reason to hold onto bad code. Throw it away.
I think this approach can be refined and made more efficient, but it needs to be done thoughtfully. While others are turning their brains off, I'm using LLMs to help me think more deeply about problems than ever before. We always had the option of being lazy and shitting out crap code for the sake of speed, and that has led to disaster. If that worked, we wouldn't have best practices.
luckyincode@reddit
I saw the Star Trek episode about this.
ConspicuousPineapple@reddit
I'm getting way more value from LLMs when I need to explore some codebase or come up with design ideas to address a complex-but-not-uncommon task. Incredible time saver.
Assuming that coding agents are only good for writing code is shortsighted. Although they're also pretty good at that if you don't let them do everything by themselves, but ask specific things one by one instead.
Wonderful-Habit-139@reddit
Not trying to completely dismiss AI's value as a search engine, even though it's more expensive than a Google search.
But even for exploring codebases, I've had much better results going through the codebase myself, with the help of an editor and a language server (where I keep inspecting types of everything that's being used). Because while going through the code like this, I'm able to get used to the coding conventions, I get to perform some cleanup along the way, and when I do start working on tickets I have a much better map of the codebase.
And it's not like this exploration takes that much time. It could take an entire workday or two, but then you have a much deeper understanding than if you used AI. With AI, you're going to find yourself on day 3 with a less robust understanding, while still being on day 3 anyway, absent some kind of deadline.
Sometimes people need to invest more time and effort initially, rather than being like "I need to implement this in one week" and then you find yourself on month 2 still implementing more features and fixing bugs on a codebase that is in a much worse state because of the compromises that were made for the sake of initial speed.
ThirdWaveCat@reddit
The "paradox of supervision" should be called the "Ironies of Automation".
https://en.wikipedia.org/wiki/Ironies_of_Automation
creaturefeature16@reddit
Wow, this is pure gold! I'm writing an article about this right now, and this is a perfect slot. Truly nothing new under the sun, eh. Thanks!
ThirdWaveCat@reddit
There's a PDF under external links. It is a remarkable paper in the detail it goes into, covering everything from cognitive load, to generating counterfactual tests based on experiential knowledge (gameday testing), to how a skill premium develops in industries that undergo rapid automation of entry-level work (accountants' wages rose after spreadsheets decimated entry-level work).
"There is some concern that the present generation of automated systems, which are monitored by former manual operators, are riding on their skills, which later generations of operators cannot be expected to have."
It is curious that they would launder this concept. I could criticize them either way, whether they did it knowingly or unknowingly.
Daimler_KKnD@reddit
You're grasping at straws. Skill deterioration only becomes an issue if we stop, or get stuck, at the current level of AI automation, which is a close-to-impossible scenario. We'd need a global disaster to even slow it down, and a complete collapse to stop it.
Currently AI progress is not stopping and it's not slowing down. As it advances, there will be fewer and fewer skill requirements for supervisors/orchestrators, until we reach a point where they won't be needed at all. The current sentiment is more or less this: in the optimistic scenario, we reach that level for some tasks in 1-2 years; in the worst case, it takes 10-15 years.
WhenSummerIsGone@reddit
Apparently the recent Opus release is a step backwards, if you listen to early reviews.
creaturefeature16@reddit
🥱🥱🥱
Go take it over to r/accelerate, r/singularity and your r/buttcoin (lol) subs, kid.
Daimler_KKnD@reddit
Your assessment of my Reddit profile exposes your problems and why you misunderstand what is happening in AI: you barely understand what is going on around you, you make a lot of incorrect assumptions based on that poor understanding, and then you immediately jump to wrong conclusions.
Just FYI, I am a software architect/dev with decades of experience. And r/buttcoin is actually a sub that mocks and makes fun of bitcoiners and cryptobros.
creaturefeature16@reddit
Don't really care, you might as well be
creaturefeature16@reddit
I don't really care, buzz off
DowntownLizard@reddit
Feels like the skill issue that makes good engineers that much more useful. That doesn't mean senior, that means good. Those who put in the effort to learn are more valuable than ever, because they are moving faster and producing good code.
Goducks91@reddit
Yep. It's not too much of a concern now. But if AI stagnates, you're going to have a huge gap of experienced developers in like 20 years.
DowntownLizard@reddit
I feel like the skills required to be an engineer will look dramatically different and yet not change on a fundamental level. Engineers are effectively just people with domain knowledge who are good at solving problems, which never goes away, because we are good at creating new problems to solve.
I would say 'developer' as a skill is slowly dissolving, if you consider that to be mostly the act of writing code. What you need to know to write good code is going to change so much, so fast, that it's hard to even speculate on. Will people need to know how code works at the same levels, or will LLMs be writing better code than any of us could have anyway?
I could even argue a lot of support roles are more in danger of becoming obsolete. Why do I need a team around me when I have way more free time and I'm fully capable of being my own devops, writing my own IaC, etc.? The number of things that I'm watching become a bottleneck to how fast I and many others are able to move is crazy. We are now sitting around like, hey operations, you locked us out of everything so we can't even help unblock ourselves here...
chickadee-guy@reddit
It's explicitly a cudgel to try to deskill and devalue SWEs.
The goal of the technology, and of the literal trillions of dollars funding its launch and forced adoption, is that an LLM + a nontechnical person can run an IT department. All evidence points to that being a sham and a scam.
It's clear Wario and Sam found an infinite money glitch in today's capitalism by promising to cull labor while glazing capital. It's unclear how much longer they will get this kind of funding, though; it appears to be drying up, and providers are raising prices significantly, to the point where an agent costs more than a human.
dorkyitguy@reddit
Let’s be clear about the goal of AI: to get rid of all employees. In that way, this is unlike any other technological advancement. And we will eventually be out of work because no matter what skills people think they have, AI will eventually be able to do it better or faster.
chickadee-guy@reddit
There is 0 evidence that LLMs have improved in any meaningful way since GPT4
Formal_Gas_6@reddit
bruh. been sleeping under a rock? still only using chatgpt chatbot?
chickadee-guy@reddit
I've used Opus 4.7 with Claude Code, Codex, and Gemini, huh?
Formal_Gas_6@reddit
Have you tried running local models? If all you use is frontier models, you might be tempted to make this claim, but no, the models have really improved a lot.
Manic5PA@reddit
It's not that progress has stopped exactly, but it's clear the technology is well into the realm of diminishing returns at this point, and the great promise will not be realized. The technology has hard limitations.
bonzai_science@reddit
This is why I stopped using Reddit. This is obviously not true.
throwaway682345@reddit
looks like you're still using it to me pal
WalidfromMorocco@reddit
They improved the harness and tooling around the models, but they sell that as model improvements.
Olangotang@reddit
Yep. Claude Code throws the prompt at Opus multiple times before getting the right format. Outside of the harnesses, these models are still dumb fuck LLMs.
nishinoran@reddit
This is obviously not true.
dorkyitguy@reddit
Oh well if it hasn’t happened yet then nothing to worry about
RespectableThug@reddit
I assume it’s a typo, but calling Dario Wario is hilarious.
chickadee-guy@reddit
Nope was completely intentional. Just imagine him in the outfit and yellow hat
itsgreater9000@reddit
in my head sam is waluigi and wario is dario
RespectableThug@reddit
I can see it lol
CactusOnFire@reddit
I could see Wario running an AI company
Double_Try1322@reddit
Feels less like a conspiracy and more like hype plus unclear expectations. AI exposes weak requirements more than it replaces devs. Good teams get faster, unclear teams just get more confused.
Known-Context-160@reddit
ChatGPT:
Q: I am around 100m far from the car washing service, do you think it is more environment friendly to walk there to get my car washed or better to drive there?
A: At ~100 meters, this is one of those rare cases where the answer is actually very clear: walk.
Driving such a short distance is disproportionately inefficient. The engine runs cold, fuel consumption per meter is terrible, and emissions spike during that short start-stop cycle. You’d burn more fuel in those few seconds than you would by just avoiding the trip entirely.
Walking, on the other hand, is essentially zero-emission and takes maybe a minute or two. The only mild inconvenience is the walk back after dropping the car off—but even that’s trivial at that distance.
So environmentally (and honestly practically), walking wins by a wide margin.
Known-Context-160@reddit
The main problem is that, for industry, cheaply obtained quick visible results beat anything else in the world. Short term it works, but long term it builds a rotten foundation of AI-generated mediocre crap that will one day have to be sorted out by senior programmers.
Alarmed-Knowledge579@reddit
This is nothing new. This already happened before: first to the skilled textile workers in England in the early 19th century, then to agricultural workers, then to skilled metal-processing workers, other craftsmen, etc.
The machine is taking work from us. It is cheaper, predictable, and obedient. Programming is skilled work; it requires our full dedication: thinking, analyzing, planning, and coding. We programmers are the craftsmen of this century.
This system is a victory of capital and technology over us. AI is the instrument to reduce need for humans in production process.
Ambitious-Garbage-73@reddit
I don't think you're paranoid. The ugly thing is LLMs made vague leadership harder to hide behind sprint length. Ten years ago you could hand engineering a fuzzy direction, burn six weeks, and then blame complexity. Now a model can spit out three plausible implementations before lunch, so the bottleneck gets exposed fast: nobody actually decided what the product is, which tradeoff matters, or what 'done' means beyond some half-baked Figma and a doc full of maybes. That's why so much manager talk drifted into guru language. Once code got cheaper, the expensive part became judgment, and a lot of people built their careers in the old fog.
BetterWhereas3245@reddit
I hate how accurate this is. At my previous org, the dysfunction was entirely on management and their complete lack of direction, not slow programmers.
Dimencia@reddit
They're going through the same phase most devs did 4 years ago when we first learned about LLMs and tried using them for everything. It took a few years to realize they only make things worse, but until that happens, obviously you have to think they're going to take dev jobs. But LLMs aren't even good at the one thing they're supposed to be good at - even 'AI slop' communication from leadership is reliably worse than what humans can do
small_e@reddit
I don't know if it will replace software engineers. But it will definitely replace coding.
arstarsta@reddit
Bad companies should just die. So while looking for a new job, I just don't care anymore at my current one. Before, I would say "this will lead to X problems"; now it's just "yes sir" and build something pointless. I do use the opportunity to experiment with new libraries and patterns I wouldn't want to try on code I cared about.
My current app's production database is SQLite, because the boss wanted Claude to do what it can do.
Lgamezp@reddit
Yes. And the sad thing is it's working. Dumb idiots crying that they are going to be replaced.
It just points out that these idiots would/will be replaced regardless of AI.
Sadly, corporate greed will take a few good ones down with the others.
sailing_oceans@reddit
Looking through these comments, I can tell there's a lot of bizarre and cope thinking going on.
Great, writing code wasn't the hard part, or a good engineer does xyz and AI can't... Well, consider:
1) Nobody can identify whether you are a 'good engineer' when your resume gets thrown into a pile of 700 AI-created resumes whenever you need your next job.
2) AI can absolutely and certainly do a lot of things, or speed them up. Great, your judgement is important... well... now you have 10x the competition, as everyone else has time to weigh in and think and so forth, because they can blast every idea with AI. The idea that every engineer is solving big profound problems is absurd.
Much of that time is still grunt and tedious tasks. If there are 50 tasks and suddenly AI can handle 30 of them, then you don't need a team anymore.
NiteShdw@reddit
AI is most effective when used by a skilled practitioner.
It's a tool, not a replacement. It's like how the IDE helped many developers improve speed by doing background compiling to surface errors immediately, or auto lint/format on save, or be able to jump to a function definition.
This is another tool. I've vibe coded a few side projects, and while they work, it's still a pain in the butt to get the little details right without manual intervention. It's good at some things and bad at others.
It won't replace engineers but it will change how engineers do their jobs.
WeHaveTheMeeps@reddit
I’ve been wanting to write a post about this, but I’m disabled and had trouble writing code. I could design systems and all of that, but actually writing code was fucking hard.
AI removed the biggest obstacle: me.
That said, I can't just hand it a massive task and expect it to be done right.
Acrodemocide@reddit
I'm a big fan of AI, and I use it to code personal projects. I work as an engineering manager over multiple teams, and we're working on effectively adopting AI into our development processes.
I believe the AI hype causes more issues with effectively adopting it for software development than anything else. For a while, there was tremendous pressure on engineering to vibe code apps similar to what Lovable does. I had to explain to people, in more technical detail than they probably cared to hear, why it doesn't necessarily work that way for what we are doing.
In my opinion, strong engineering managers recognize the technical nuance around AI adoption and protect their engineers from the hype that is out there. Most of it comes from people who want to sell something.
Software engineers will not be replaced, and AI is not a magic wand. It's a powerful tool that will change much of the work we do in engineering, but we still need people who can read and write code. Adoption of AI must be done thoughtfully, and will likely follow a different path for each organization based on its needs. To just expect engineering to churn out new features en masse from AI is not realistic and completely moves away from software engineering best practices. Organizations that make this move don't understand engineering and will find themselves in a mess.
qtechno@reddit
Not gonna replace them, but it is definitely making software careers a much less interesting prospect. 10-15 years ago, you more or less had a guaranteed job. I took the world's worst-timed sabbatical last year, and finding a job now is proving very difficult.
RobertKerans@reddit
Yeah, I got made redundant last year then had to take a break for family stuff and hoo boy it has not been great trying to re-enter the job market. New financial year at start of April has helped in terms of number of opportunities, but even then all the interviews I've had have resulted in rejections: great feedback but each time they've managed to get someone who's been a more exact fit for the tech they're using. Was at the point of adjusting salary expectations right down and basically removing "senior" from my CV, but I've got two second stage interviews next week so hopefully not needed, shall see. Good luck in your hunt, anyway!
Foreign_Addition2844@reddit
I have barely written any code in a year. Made 250k telling cursor what to do.
androidguest64@reddit
Some elites will kill anyone for saying this, but shouldn't LLMs be solving poverty, world hunger, and climate change?
philip_laureano@reddit
From my own experience so far, AI will replace engineers who have rested on their laurels and thought that tenure and domain knowledge alone would keep their jobs secure.
The reason I say they'll get replaced is that when people start using AI to see the cracks in what those engineers have built over the years, that tenure will be hard to defend if the AIs end up revealing disturbing things that used to take someone with deep expertise to find.
For example, if (without me breaking any NDAs) I can use these AIs to combine git commit history + Jira tickets + APM telemetry + distributed log queries + docs in confluence to trace what was built versus what was said was built over the years and it turns out that there's some gaping holes found in the architecture, then that's an awkward conversation to be had if it shows that it was a house of cards all along.
The power of AI here isn't its ability to code per se. It's the ability to investigate and integrate multiple data sources in a matter of hours that would take a human weeks to do the same investigation by hand.
And that information asymmetry is where the real leverage lies. An experienced developer who can find all the pain points, and the minimal points of intervention to fix them, is the one who will never run out of work any time soon.
daddyplsanon@reddit
For example, if (without me breaking any NDAs) I can use these AIs to combine git commit history + Jira tickets + APM telemetry + distributed log queries + docs in confluence to trace what was built versus what was said was built
How? How does one even get started on building something like this?
philip_laureano@reddit
It's just OpenCode + MCP servers for Atlassian, Github CLI, AWS/Azure CLIs.
Then get Opus 4.6 to run on it and tell it you have all those sources available.
Works even better when your company uses Jira ticket IDs in their commits and PRs.
The combination of these data sources with a SOTA LLM can help you find out where all the bodies are buried.
For example, if you had an architect in 2019 do a microservices migration for your entire org, you can ask Opus to trace the flow of data through all 100+ microservices by looking at the exception logs + Github PRs + Jira tickets + customer tickets (probably also in Jira) and tell you where that architecture falls apart, with receipts and line numbers
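A trivial slice of that tracing doesn't even need an LLM: indexing which commits reference which Jira tickets is plain text matching. A minimal Python sketch (the commit subjects and ticket IDs below are made up for illustration; real input would come from `git log --pretty=%s` and the Jira API):

```python
import re

# Hypothetical sample of git log subject lines.
commits = [
    "PLAT-101 migrate auth service to microservice",
    "fix typo in readme",
    "PLAT-205: add retry logic to payment gateway",
]

TICKET_RE = r"\b[A-Z][A-Z0-9]+-\d+\b"  # Jira-style IDs like PLAT-101

def extract_ticket_index(subjects):
    """Map each ticket ID to the commit subjects that mention it."""
    index = {}
    for subject in subjects:
        for ticket in re.findall(TICKET_RE, subject):
            index.setdefault(ticket, []).append(subject)
    return index

index = extract_ticket_index(commits)

# Commits with no ticket ID are the ones you can't trace back
# to any recorded requirement - the interesting "gaps".
untraced = [s for s in commits if not re.search(TICKET_RE, s)]
```

The LLM's value-add is in the next step the comment describes: cross-referencing that index against telemetry, logs, and docs, which is fuzzy matching a script can't do well.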
ilyas-inthe-cloud@reddit
Not paranoid at all. I have been using AI tools daily since they got good and my productivity went up, not because the AI writes better code than me but because it handles the tedious stuff faster so I can focus on architecture and actual problem solving. What I see from management types though is this weird thing where they think because a model can spit out a prototype in 10 minutes, that means developers are slow. But the prototype is never the hard part. The hard part is the requirements, the edge cases, the integration with legacy systems, the things you only discover when you actually start building for real users. The people pushing the "AI will replace devs" narrative the hardest are usually the ones who never understood what developers actually did in the first place.
goby-sourceman@reddit
I think not. It's a great tool for a developer, with lots of oversight, but you still need to stay in practice and review what it does. Also, I think the idea of it helping a "non-developer" be a developer is fictional.
Example: we had a new hire pass the interview. He instantly, on his first day, started using AI. He was given a simple task: write some unit tests and also, if he had time, research a bug in one specific package.
This turned into a four month task (same package both tasks).
In total he two approved code reviews in all that time and one had 9 revisions.
He used AI the entire time from day one. Even to reply on chat and write emails. Blamed it for issues.
Tl:dr: AI is a good tool but it doesn't replace skill or people and it must be used in the right hands.
daddyplsanon@reddit
He pushed code directly, no review or anything. Twice, after being told not to by management and senior engineers.
I mean, this part is on you guys: why didn't you set up protected branches? No one should be able to simply push their code into a shared branch. It's crazy not to set up branch protection rules. No one in our company can push code directly into a protected branch (dev, prod, release, etc.), even by accident, and it's simple enough to set up.
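On GitHub, branch protection is a few clicks in repo settings or one API call. A minimal sketch of the payload for the documented `PUT /repos/{owner}/{repo}/branches/{branch}/protection` endpoint follows; the check name and reviewer count are placeholders you would tune for your repo:

```python
import json

# Payload for GitHub's branch protection endpoint:
#   PUT /repos/{owner}/{repo}/branches/{branch}/protection
# Send it with `gh api` or any HTTP client, using a token with repo admin rights.
protection = {
    # Require PRs with at least one approving review before merging
    "required_pull_request_reviews": {"required_approving_review_count": 1},
    # Require the named CI check to pass, and the branch to be up to date
    "required_status_checks": {"strict": True, "contexts": ["ci/build"]},
    # Apply the rules to admins too
    "enforce_admins": True,
    # No extra push restrictions (the API requires this field to be present)
    "restrictions": None,
}
print(json.dumps(protection, indent=2))
```

With that in place, a direct push to the protected branch is rejected server-side, no matter who (or what agent) attempts it.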
infamouslycrocodile@reddit
Why did this get a downvote? This is absolutely what a non-developer with fake credentials will do to reap a high-reward technical job.
OkLettuce338@reddit
I’m sure the answer is only one or the other
TheCharalampos@reddit
The tech by itself would replace very few people. However due to many of the people in charge it is replacing (badly) far far more.
It's all about reasons outside the actual technology: either underhanded business aims or people falling for AI company propaganda.
MisterFatt@reddit
Idk, but I just mentioned to my friends that my inbox is as busy as it's been since like 2022.
infamouslycrocodile@reddit
It's been great: vibe code at work because of pressure from management. It digs a deeper hole of tech debt: 10% of it looks great and works now, and 90% will keep my colleagues employed for years to come.
Slow and steady for personal projects.
Things like ensuring pagination of records instead of querying entire tables, proper testing and composition, checking security components line by line, etc.
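The pagination point is the classic example of a line-by-line check that vibe-coded output tends to skip. A minimal sketch of keyset (seek) pagination with SQLite; the table and column names are made up for illustration:

```python
import sqlite3

# Toy table: 250 rows with sequential integer primary keys
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany(
    "INSERT INTO records (payload) VALUES (?)",
    [(f"row-{i}",) for i in range(1, 251)],
)

def fetch_page(conn, after_id=0, page_size=100):
    # Keyset pagination: seek past the last-seen id instead of using OFFSET
    # (or SELECT-ing the whole table), so the query stays an index range scan
    # no matter how deep into the data you page.
    return conn.execute(
        "SELECT id, payload FROM records WHERE id > ? ORDER BY id LIMIT ?",
        (after_id, page_size),
    ).fetchall()

page1 = fetch_page(conn)                          # ids 1..100
page2 = fetch_page(conn, after_id=page1[-1][0])   # ids 101..200
```

The contrast with `SELECT * FROM records` loaded into memory, or `LIMIT 100 OFFSET 10000` which still scans the skipped rows, is exactly the kind of thing a human reviewer has to catch.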
cybersophy@reddit
AI works best for me as a sounding board as I develop ideas and reusable patterns.
I'm a full stack developer building my own products so I can make decisions that maximize my own long term productivity and the debt-free utility of the things I build.
The code that I write is very carefully considered to maximize future utility and minimize technical debt. I don't want to have to rearchitect common things for new projects unless I'm addressing some kind of flaw. Having AI directly touch any of this is totally out of the question.
What I use AI (Claude, mainly) for in development is to validate architectural patterns against scenarios I might not have thought of, gaining greater awareness of things that might test my architecture down the road; for general brainstorming that is easily ignored when it's wrong and cherry-picked; and as a very good search engine for products and techniques, to keep me up to speed on areas I might not be tracking.
So it's basically like an assistant that has strong skills in very specific areas that can save me tons of research time but fails to meet the bar for important tasks that I'm very particular about, like writing code, which I actually love to do when it's good, tight, clear reusable code that feels like I forged a new tool.
Idea-Aggressive@reddit
From what I can tell, it keeps a lot of incompetent developers around. I've completed a contract role, and the developers at this company (popular infra tech; not the core product, but the so-called full-stack devs working on Next.js etc.) are just poor, terribly bad! It's unbelievable given how popular the company is. So developers are being replaced by developers in the age of LLMs; it'll take months if not years to catch these people. Terrible, absolutely terrible. I could not believe it. I wish I could name the company.
vaynah@reddit
Developers are more productive - we don't need a lot of developers. I don't get what's so hard to understand here?
andrewharkins77@reddit
Also, all the tech solutions, such as MCP with an LLM driving decision-making, turn out to be reinventing the wheel, but worse. This reeks of Web3.
humanguise@reddit
Anecdotally, it takes me like a week to gather and clarify requirements, a day or two to generate the spec from that, and a day to generate the code, and then the PR is stuck in review for a week. The actual coding part is pretty fast, at least 5x faster, but every other step is still slow. There is a limit to what I can enforce as a senior IC if the organization is not adapting. It doesn't help that we don't do ad hoc meetings for specs and that everything and everyone is spread across timezones, so I'm stuck making a ceremony of it every time with more people than necessary in the meeting. This code will probably move around a billion dollars this year, so it's important to get it right on the first shot. People are also asking me to change stuff in the review more liberally because it's so easy now. Recently, our product manager changed the requirements multiple times after the initial work was completed. I have witnessed catastrophic attempts to vibe code by non-technical people as well.
dekai-onigiri@reddit
AI isn't about replacing anyone; it's a very clever financial mechanism for transferring infinite amounts of money to those who are in the loop. Anyone who actually has technical knowledge knows that those tools require experience, may make things even slower and more complicated, and optimize the parts that weren't the problem in the first place. That doesn't matter, because as long as the story is propped up, the investments keep flowing and the whole thing is kept afloat. The reality is that the global economy is in the shitter, the easy technological wins are in the past, and we're in the phase of squeezing the most out of what is already there through the enshittification of everything. Using AI as an excuse to fire people happens because no one wants to admit that the global economy is fucked and will only get worse from now on.
Beginning-Cream7813@reddit
How much of this is cope and wishful thinking? For years our profession has been told (and people have no problem internalizing) that you are always supposed to be learning on your own, adapting to new technologies, and never, ever acting like a stick in the mud. Maybe AI coding really is substantially different from any previous technology and it really is the end for our profession*
*Yes, I know there's some of you who are really smart and hard working and think you'll be amongst the last devs standing, good luck with that.
alternatex0@reddit
My personal experience is that the devs around me whom I considered competent are at the same level of competence with AI as without, and the ones I considered incompetent are likewise unchanged by AI usage. So I don't quite buy that AI is a magical SWE-competence silver bullet. Though I have noticed that it can fool some people who are incapable of accurately assessing productivity.
Healthy-Dress-7492@reddit
What does reduce productivity is treating your employees like shit by paying them poorly, rounds of layoffs followed or preceded by frivolous spending, mandating work in the office. Turns out unhappy people only do the bare minimum. Management doesn’t understand AI but it’s a useful tool to get money and use for marketing/sales. It’s also useful for specific kinds of small problems. It just is still far from doing what they dream it will.
rover_G@reddit
Yes
Tacos314@reddit
A little of both, AI is going to replace some software developers, not all and not for all tasks.
In reality, no one really knows right now; we are just learning how to use agentic AI. I'd be less worried about software engineers and more about scrum masters and a lot of middle management.
AI is really good at paperwork.
pizzathlete@reddit
Good tool for developers. Excellent tool to scare employees for employers.
ankittw@reddit
Sometimes we get advice that makes more sense. Whenever a technology like this comes along, this doom and gloom happens. I was talking to an elder about why AI might take jobs, and he, a banking veteran, said they felt the same when computers came to the banking and finance sectors. It's the same energy now. We don't know how it will look, but it can turn out for the better too. We just don't know it yet.
roynoise@reddit
This is a large part of it, yes. That, and an excuse to abuse the native workforce or steal their work and export it offshore with more popular optics (e.g. "ai efficiency").
honestduane@reddit
It's also a way for them to hide the labor fraud that's happening right now with visas. Companies are lying on federal forms, claiming that the people they just laid off are not available to hire in the market. These are people with exactly the skills that are needed, but they cost more than somebody offshore, so the end result is executives violating their Caremark fiduciary duty by destroying the business continuity of their companies in an absolutely fraudulent way.
DontDoxMe3352@reddit
In the company I'm working at right now, we went through:
1. You have to use it at least once each week, to prove you're trying the tools.
2. You have to specify how much you're using it, what parts were made with it, and how much time you saved.
3. Now only the agents are allowed to change the code: you specify the Jira task, the agent generates a plan, you revise the plan, another agent executes the plan, you review the code, do any tests, and then deliver.
For step 3, we had to build the set of rules, skills and MCPs for Claude to do it on its own. It fucked up on any feature that was the least bit complex, and we had to manually intervene a lot so as not to blow the sprint. Now we're at a point where they've rolled out proprietary software (vibe-coded, I'm sure) to decrypt the Claude rules at runtime, so we don't have access to the rules themselves anymore, because they're worried about their IP.
This last one is where I fully gave up, I was trying to keep up with them, and trying to make it work a little bit, but they jumped the shark.
composero@reddit
Well, GitHub Copilot will probably be changing its pricing policy going forward. I know our company may be bowing out of using AI in our workflows at scale, as I can't imagine the company willingly paying for token consumption beyond what Copilot was doing.
Calm_Possession_8463@reddit
Look at it this way: for less than $100 per developer per year, right now, today, you can double to triple their output (note: not necessarily what you want).
This means that right now, currently today, big tech can have the same amount of output as their current labor force with only say 1/3 - 1/2 of the employees. The reason they are keeping the employees is that they are continuing to train the LLMs.
In a matter of years, the quality will improve, and then they could probably get away with perhaps 20% of the labor force to simply supervise and direct the LLMs.
It’s not paranoia, it’s just a question of funding and time. I’m currently considering which other fields I can reskill in so that in 5 years time I’m not fighting with the 80% of software engineers who no longer have work.
nates1984@reddit
Less than $100 per year?
I'm calling bullshit.
Calm_Possession_8463@reddit
Look at the cost of tokens. At big tech scale, the cost to the company is actually way less than that per developer.
chaitanyathengdi@reddit
It's a way for the AI companies to sell their shit.
bloodisblue@reddit
I take my view from a June 2024 article about AI's future, which so far has largely been proven correct about the speed and scope of LLM improvements: https://situational-awareness.ai/
We're currently in the crazy acceleration period, and similar to Moore's law, if they continue on the current trajectory they really will get to the point where they can improve themselves faster than humans can. From that point on most knowledge work jobs are 100% at risk.
The scary bit is that at current trends this might only be 2 years away. Not 10, not 20, it could happen in 2028.
I don't think that the narrative is driven by this article's basis of looking at compute input -> output growth as a driver for improvements. (I may be misremembering the exact mechanic). But the narrative has some chance of being correct.
Personally I'm expecting the narrative to be wrong, maybe as a self-preservation optimism bias, and that we've already picked the low hanging fruits and progress won't be able to scale exponentially. If this is the case, funding will dry up for these billion dollar training runs and very likely we'll have AI assisted developers for the next 50 years.
TLDR: There is a lot of truth in the narrative, and if AGI is going to show up it'll be in the next 2-3 years. If it doesn't show up in that timeline then it's likely 30+ years away (we'll need some other type of breakthrough which can't be predicted).
mxldevs@reddit
It's not just from the top.
It's also coming from new coders who previously wouldn't have been working in software in the first place because they didn't have the patience to sit down and actually think about code design and the actual code writing.
It's one thing for 20 yoe engineer to be saying he doesn't need to write code anymore, but you have even students or juniors saying there's no point in learning to code because they can just ask their AI to give them the code they need for review.
And of course, they position themselves as experienced AI engineers and managers gobble that right up and have them working on major features while senior devs who haven't really fully bought in to it are stuck with reviewing AI slop.
sahuxley2@reddit
Automation has been replacing what we do since before AI. 15 years ago I helped write a ticketing system to track bugs and features. 10 years ago I wrote a password authentication system. 5 years ago, I wrote a website shopping cart and payment processor. All of these things have been automated, and you can't get hired to build them from scratch anymore. It makes much more sense for the company to purchase what's already been built.
And yet, we haven't run out of things to do. We just learn to use these tools and move on to solving more complicated and difficult problems. Anyone who thinks we're going to run out of problems to solve lacks imagination.
DestinTheLion@reddit
I am highly extroverted, what can I do to help these coders who are getting fucked by dirty business politics.
triptyx@reddit
Never attribute to malice what can be attributed to [ignorance].
I work at a smaller company, but one of the best things that happened to us was our owner going to an AI app boot camp over a weekend. Walking into it, he was convinced that he would be able to turn out an entire customer portal from a 90-page AI-generated PRD that he had created in an afternoon on ChatGPT.
Monday rolled around, and he had a pretty interesting-looking front end with some made-up static data, but he ran into massive issues with role-based security, logins, actual database support, etc.
He now sees AI as an excellent force multiplier for the developers, and the development group has embraced AI-driven development with both hands, but I think he also has a far more realistic appreciation of what it can and cannot do when it comes to creating a solution that has to work on a publicly exposed IP at scale.
Defiant-Duck6537@reddit
the "ai dementia" part feels a bit harsh
Aleks_Zemz_1111@reddit
You aren't paranoid. You are witnessing the Industrial-Digital Collapse.
In my world, I operate a multi-million-pound Gietz ROFO 870. If my manager gave me AI Slop instead of precise technical tolerances, the machine would fail, and someone would get hurt. In software, management thinks they can hide behind Vibe Coding because the crash is invisible until the technical debt liquidates the company.
These AI Experts are just the digital version of the lazy operator who vanishes when the real work starts. They use the hype to inflate their value because they've lost the ability to actually architect a solution. They are betting that your Introversion will keep you quiet while they rent out your labor to cover their lack of vision.
Stop waiting for a Visionary Leader. They don't exist in the corporate machine anymore.
Stop trying to fix their AI Dementia. Use the time they waste on panels and spiritual guru talks to build your own digital equity. If you can understand the math of the AI better than the people selling it, you shouldn't be a janitor for their slop. You should be the Architect of your own assets. Audit your Sovereignty: if this company went bankrupt tomorrow, how much of the System Architecture do you actually own?
wavereddit@reddit
It's hard to articulate, but it's driving productivity gains.
It's not going to replace saas.
It will also help folks launch million-dollar businesses.
Worth_Composer_6116@reddit
any data on coder productivity changes?
Delphicon@reddit
First of all, yes you’re on the right track. A few things are true:
What I have come to understand is that money is dumb and power is lazy. So starting from the very top (the shareholders) everything is about numbers and narratives.
What the talkers understand that us engineers don’t is that personal advancement is related to helping people more powerful than you advance their own narrative.
It’s just supply and demand but for narratives. Shareholders have demand for AI narratives which trickles down to upper management having demand for AI narratives which trickles down to middle management to you.
And one of the easiest narratives is “AI replaces software engineers” and the middle manager would like to tell that story.
You can’t really change the demand side but you can affect the supply side. For example, you could find a good way for middle managers to iterate on requirements with AI. Like a design tool.
That would solve your problem AND it allows them to tell the story that you’ve automated the bottleneck in the software development process.
In summation, you can’t save them from themselves but you can play along and make it work for you.
Delphicon@reddit
This all said, I actually do think software engineering will disappear sooner rather than later.
But it’s going to look different, it’s going to require tools that don’t exist yet.
Serious-Ebb-5787@reddit
isn't it a bit extreme to say all ceos are clueless about ai?
Calm_Foundation_4072@reddit
the "ai dementia" bit is a vivid way to describe management confusion
Foreign_Yard_8483@reddit
I think you were really just venting, not asking a question. I found your text confusing.
AI came to help; it's being subsidized; it has incorrigible flaws; it's a huge help; it will take jobs; the Big Techs sell it as something out of this world; managers buy the idea; and in practice it's not quite that. It's all of this jumbled together.
Onedome@reddit
The problem is the narrative being forced down from the investor to upper management.
Yes, it's true that devs are still productive, but it's also true that AI in the right hands is just as productive, if not more. To most of us devs it's just another tool, like Stack Overflow was for years. To others it's a means of bypassing the ladder a junior climbs to become a senior. As a tech lead, I can use AI to improve my delivery speed and quality, and do most of my team's work if they're sick or refuse to work. From others it comes out as slop. But one thing that's constant is the narrative pushed down from investors to senior management that AI is the future, and if you and your team aren't using it, you're losing.
AI can't replace SWEs, but it's being sold as if it can replace most of them, and companies have bought into that.
DisneyLegalTeam@reddit
Oh thank god. We almost went 24hrs without this same post.
PTTCollin@reddit
Bro, this post is posted in here like twice a week. Just find one of the hundred other threads with the exact same complaints and read those responses.
DisneyLegalTeam@reddit
Seriously, there are only 5 posts in this sub anymore, and 3 of them are about AI & CEOs.
valkon_gr@reddit
Posts like that will age like milk when our jobs change completely. Reddit, and this subreddit especially needs to accept it. We are not going back to 2015, it's over.
Murky_Citron_1799@reddit
Yes, it seems like management forgot all the principles of good engineering and just repeats "use AI" like a stroke victim.
SleepingCod@reddit
You assume they cared about good engineering to begin with. The goal has always been what can we get out the fastest with the least windfall.
Dense_Gate_5193@reddit
no, it's just that the bar for actual engineering became that much higher.
if the problem has already been solved -> AI can handle it.
most "engineers" were rote-memorization developers. they can remember and retain solutions to problems that have been solved, but they cannot actually innovate. your traditional "app developer" role as you know it, where teams worked on one product: those days are literally over.
code is free, ideas are not.
Engineering used to be about advancing humanity to the next stage in automation.
then it became about whether or not the text was centered in a button. people's entire careers are built on knowing the rendering system of a particular framework.
Now, AI can handle that. what did we as software engineers expect? we chose a self-automating career field. if you’re implementing what already exists, AI can easily do that.
outside of what has already been done, you have to draft it and write it. AI sucks at coming up with anything new. hallucination rate (in my experience) is significantly higher when you ask AI to combine two things that haven’t been combined before.
that’s where real software engineering is at now.
Expert-Reaction-7472@reddit
this is so paranoid and cynical
Grenaten@reddit
The title is true. I did not read the rest.
-_MarcusAurelius_-@reddit
Have AI summarize it for you
hitanthrope@reddit
The situation with LLMs, at base, is actually fairly simple.
They are quite good at reading and writing code and are getting better. Code in high level languages is basically, “structured natural language”, and we made billions of lines of it available for free, so perfect work for these things.
Everyone is trying to figure out what the fuck this means for us all, and nobody knows. People who have a pre-existing agenda, will see the solution to their problems as they gaze into the void.
What I think, is that all of my multiple decades of experience is now available in concentrate.