Anthropic: AI-assisted coding doesn't show efficiency gains and impairs developers' abilities.
Posted by Gil_berth@reddit | programming | 736 comments
You've surely heard it; it has been repeated countless times in the last few weeks, even by some luminaries of the development world: "AI coding makes you 10x more productive and if you don't use it you will be left behind". Sounds ominous, right? Well, one of the biggest promoters of AI-assisted coding has just put a stop to the hype and FOMO. Anthropic has published a paper that concludes:
* There is no significant speed-up in development from AI-assisted coding. This is partly because composing prompts and giving context to the LLM takes a lot of time, sometimes comparable to writing the code manually.
* AI-assisted coding significantly lowers comprehension of the codebase and impairs developers' growth. Developers who rely more on AI perform worse at debugging, conceptual understanding and code reading.
This seems to contradict the massive push of the last few weeks, where people say that AI speeds them up massively (some claiming a 100x boost) and that there are no downsides. Some even claim that they don't read the generated code and that software engineering is dead. Other people advocating this type of AI-assisted development say "You just have to review the generated code", but it appears that merely reviewing the code gives you at best a "flimsy understanding" of the codebase, which significantly reduces your ability to debug any problem that arises in the future, and stunts your abilities as a developer and problem solver, without delivering significant efficiency gains.
vibecoder012@reddit
depends what you want tbh cursor is great if you want full setup and deeper integration but for quick prompt to code type flow I’ve been trying wozcode and it feels more simple I end up using both now
steph_swarts@reddit
The 10x dev talk has always felt like marketing fluff. Prompt engineering for complex logic often takes as long as just writing the code myself. Plus the mental drain of reviewing ai generated garbage is exhausting. It is way better to offload non-coding tasks instead of letting ai touch the core logic. I started using an autonomous agent for the operational grunt work and it has been way more effective than any copilot style tool.
Total-Amphibian-2447@reddit
honestly this tracks. the "10x productivity" crowd is mostly people who werent that productive to begin with lol. prompting + context switching + reviewing slop eats most of the "saved" time. that said i think the framing of "AI coding" is doing a lot of heavy lifting here. letting an LLM write your actual codebase is diff from using AI to handle boring stuff around coding (inbox, standup notes, pinging PRs, tracking tickets). offloaded all that side junk to a small local AI setup and it actually freed up focus time instead of fragmenting it more.
arlaneenalra@reddit
It's called a "perishable skill" you have to use it or you lose it.
_BreakingGood_@reddit
It seems even worse than that. The paper describes a pilot study where a group of developers (of various experience levels) were told NOT to use AI to solve the task.
35% of them refused to comply and used AI anyway.
After being warned a second time NOT to use AI in the study, 25% of them still continued to use it.
_pupil_@reddit
My subjective take: anxiety management is a big chunk of coding, it’s uncomfortable not to know, and if you make someone go from a situation where they seemingly don’t understand 5% to one where they don’t understand 95%+ it’s gonna seem insurmountable. Manual coding takes this pain up front, asking a machine defers it until it can’t be denied.
Throwing out something that looks usable to create something free from lies is a hard leap. Especially since the LLM is there gassing up anything you’re doing (“you’re right, we were dead wrong!”).
ChrisFinpulse@reddit
Completely agree with this assessment. I also get my endorphins from completing the task rather than "working out the complexity" at the beginning so I find it quite addictive using AI to be productive.
3eyedgreenalien@reddit
That aligns so much with what I see in the creative writing field. The writers (particularly beginner writers) who get sucked into using LLMs are really uncomfortable with not knowing things. It can be about their world or characters or plot, but even word choices seem to trip some of them up. They seem to regard putting a plot hole aside to work on later, or noting something to fix in revisions as somehow... wrong? As in, they are writing wrong and failing at it. Instead of accepting uncertainty and questions as a big part of the work.
Obviously, coding isn't writing, but the attitude behind the LLM use seems very similar in a lot of respects.
Dismal-Trouble-8526@reddit
If AI doesn’t significantly speed up development and weakens code comprehension, are we optimizing for short-term output at the cost of long-term engineering quality?
BleakFlamingo@reddit
Writing isn't coding, but coding is a lot like writing!
pitiless@reddit
This is a great insight and aligns with one of my theories about the discomfort that developers (particularly new ones) must endure to develop the skills required to be a good programmer. I hadn't considered its counterpart though, which I think this post captures.
TheHollowJester@reddit
I've been going through burnout for the past few months (I'm at a pretty decent place now, thankfully).
One of the things that helped me the most was - I started treating discomfort not as a signal "for flight" but as a signal saying "this thing is where I'm weak, so I should put more effort into this".
Not sure if this will work for everyone, but it seems like it could? Anyway, I thought I'd just put it out to the world.
QuarryTen@reddit
yup, the discomfort that you feel when doing most tasks is a sign that your brain is undergoing a slow but subtle change. we have to learn how to embrace and endure the discomfort
Bakoro@reddit
I don't think "burnout" is the right word here.
When I hear "burnout" I think "a person who has been putting in an unsustainable amount of effort without taking personal time to balance themselves out".
"Just put in more effort" is like someone saying "maybe more food would help this sick feeling I've got from all the food I ate".
What you've described is more like resolving cognitive dissonance.
This is common with people who are generally of high intelligence and haven't had to work hard to get by, so they never had to develop good skills and discipline. They suddenly run up against problems that are actually hard, and the idea of having to struggle to figure it out over time is antithetical to their experience and self-perception as a "smart person".
Having to reframe your world view and your perception of yourself can be extremely uncomfortable. A person who lacks the grit and intellectual honesty might be in that situation and just blame the situation, or the company, or come up with excuses.
TheHollowJester@reddit
Stranger, sorry but I will speak firmly. Why do you think that your opinion when you know me from a single post on reddit is more relevant than my therapist's?
One of the things that can happen in burnout is that one can develop avoidance strategies to not do shit that is stressful. What I described deals with that maladaptation.
Bakoro@reddit
What you've described is exactly cognitive dissonance. That's just the meaning of the words you are using.
"Reframe your perception and put in more effort into resolving the source of tension" is not the solution to burnout, that's a solution to resolving cognitive dissonance. Maladaptive avoidance is also a symptom of cognitive dissonance.
You didn't describe yourself working too many hours or feeling like you had no control over your environment, or that the work you do was at odds with your values; you described a situation where you realized that you aren't good at something, it was causing you stress, and you resolved that stress by getting better at the things you aren't good at.
Sorry, but you used the wrong word. The best I can do is grant you that you could be feeling "burnt out" because of the unresolved cognitive dissonance, but that's still a distinction worth making, since the much more common understanding of burnout is overworking, and working under poor conditions.
TheHollowJester@reddit
I don't have to tell you my life story or justify myself to you.
I suffer from burnout. You can take me at my word - which is a diagnosis made by a professional - or you can say that I lie based on your imagination.
Just... go away, do something useful. You're not doing anything good here.
SwiftOneSpeaks@reddit
As someone that did burn out, you are on the right track, but it's important to make that mental switch realistic with your time. I ended up in a cycle of "I can't learn this fast enough, I suck, I can't learn fast enough/at all" which then made me more anxious about the next thing. Always being under a deadline leaves no time for actual learning. Repeat for about a decade and my brain has well-worn traumatic ruts: I lost entering flow, my hyperfocus is dead (not good when managing ADHD often depends on that flip side of the coin), and new programming concepts felt threatening rather than exciting. Recovery has been slow. (OTOH, I'm much more practical about tech changes without (I think) crossing over into reactionary/curmudgeonly. For example, I've always loved the idea of "AI", but I've been clearly seeing the hype train, the unanswered concerns, and the environmental/economic costs of the current fancy-autocomplete approach.)
Teaching web dev to grad students, I've seen exactly what the study presented. My students stopped learning concepts.
meownir__@reddit
This is the money right here. Great mental shift, dude
dodso@reddit
this exact mindset swap is what prevented me from doing poorly in uni. I went from not needing to study in school to needing to work quite a bit in uni, and after doing poorly in some early classes I realized I was avoiding properly practicing/studying because I was afraid of acknowledging that I was weak in things (and potentially finding myself unable to get better). I used that to make myself study the shit out of anything that scared me and I did quite well in much harder classes than the ones I initially had trouble with. It's obvious in hindsight but it can be really hard to make yourself do it.
VadumSemantics@reddit
+1 useful insight
I've enjoyed an interview w/the author of "The Comfort Crisis: Embrace Discomfort To Reclaim Your Wild, Happy, Healthy Self".
Interview here: #225 ‒ The comfort crisis, doing hard things, rucking, and more | Michael Easter, MA.
(posted because I don't always take great care of my health, but when I do it helps me do better at a lot of things - including programming)
Soft_Walrus_3605@reddit
In the military the suggestion was called "embrace the suck"
Conscious-Fault4925@reddit
Yeah, I feel like I've made my whole career so far on being the "well, let's just try something" guy. So many developers don't want to even touch a problem if it doesn't fit neatly into some design pattern they've seen in a book somewhere.
riskbreaker419@reddit
Combine this with all the AI companies (and other users) trying to light a fire underneath you with "get on board or get left behind". We have a couple of junior devs at our company that have shown great promise, but once we enabled LLM tooling in our space their skills started getting worse, and it shows in their code. I get the feeling they feel a "need" to adopt these tools and abandon the path of actually learning these things, because they're being told every day that our jobs as we know them won't even exist in 6 months to 10 years (depending on who you ask).
We're poisoning the well for future generations of developers. I try to explain to them after this whole hype cycle dies out, we'll still be using these tools, but knowing how to use them requires an even higher skill-set than before, not less.
Learning how to deal with an overly confident, sycophantic tool that may give you an 80% correct answer with 19% garbage and 1% critical missteps requires a deep understanding of the domain you're working in.
SergioEduP@reddit
That sounds like a pretty good take to me honestly; might also explain why I'm so obsessed with reading all of the docs before doing anything. I just need to know shit before I even try to do it.
Boxy310@reddit
I'm not gonna lie, using AI is like a performance-enhancing drug for the brain. But it also helps me realize when I should independently spike and research, because it's constantly making up shit that SHOULD work but just ain't so.
Human + AI is best, but juniors probably shouldn't be using it, in much the same way that teenagers should not be drinking alcohol. Many will still be using occasionally, but not having good boundaries around it means you're one big AWS outage away from having half your brain ripped out.
cstopher89@reddit
This is where I land with it as well. After being burned a few times you learn to be very skeptical about what it's outputting and you verify everything yourself. This takes as long as doing the work yourself, in my experience. Outside of one-off scripting it's really good as a sounding board, with you being the idea person.
HandshakeOfCO@reddit
Just curious - do you use a calculator to take a square root?
young_mummy@reddit
One of those things is purely deterministic and strictly faster and more efficient in all scenarios. The other is none of those things. I'll let you decide which is which.
HandshakeOfCO@reddit
Spoken like someone who actually doesn't understand how AI works.
Claude code is deterministic through its APIs: https://github.com/anthropics/claude-code/issues/3370
The randomness they add has proven to give a better end user experience for most things, which is why by default it's enabled in the website / CLI.
AI is more efficient in all scenarios, because you can do something else while it's working.
young_mummy@reddit
What on earth are you talking about? They are not deterministic insofar as, by definition, they will not give the same output for the same input. You can provide the exact same prompt with the exact same context and get a different, sometimes fundamentally different result.
That is by definition not deterministic. These aren't comparable things. You have no idea how any of this works.
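To make the terms in this back-and-forth concrete, here is a toy sketch (an illustration I'm assuming, not Anthropic's actual decoder) of why determinism hinges on the decoding strategy: greedy argmax always picks the same token for the same logits, while temperature sampling draws from a distribution and can differ run to run.

```python
import math
import random

def sample_next_token(logits, temperature):
    """Pick a next-token id from a list of logits.

    temperature == 0 -> greedy argmax (deterministic);
    temperature > 0  -> softmax sampling (stochastic).
    Toy model only; real LLM serving adds top-p, batching effects, etc.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(range(len(logits)), weights=weights, k=1)[0]

logits = [2.0, 1.5, 0.5]

# Greedy decoding: the same input always yields the same token.
print({sample_next_token(logits, 0) for _ in range(100)})  # {0}

# Temperature sampling: the same input can yield different tokens.
print(len({sample_next_token(logits, 1.0) for _ in range(200)}) > 1)
```

So both commenters have a piece of it: the model's forward pass is a fixed function, but the default sampling step on top of it is random, which is what makes "same prompt, different answer" the normal user experience.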
HandshakeOfCO@reddit
If you read the bug, there's a comment by an Anthropic employee saying deterministic output is available via Anthropic APIs.
young_mummy@reddit
And it doesn't claim it's strictly deterministic, but it does clearly show LLMs are not inherently deterministic. Maybe read my comment again. Try asking AI if you need to, since you've clearly outsourced your brain too much at this point.
SergioEduP@reddit
LMAO, Claude being able to output pure garbage deterministically sure is the same thing as a calculator running a predetermined mathematical function. Any "AI" output will be deterministic if you feed it the same input parameters every time, that does not make it any more useful.
HandshakeOfCO@reddit
The square root button also isn't terribly useful to a lot of people.
SergioEduP@reddit
From a purely technical standpoint I agree with you, it is a tool like any other and has its uses. But from a social and economic standpoint I fucking despise LLMs and other forms of generative "AI", why are we wasting millions worth of resources on a daily basis on a technology that we have to constantly fight to get to do something remotely useful (when compared to what it is being sold to us as being capable of) when reading a couple of books and spending even just a couple of hours experimenting is more productive and effective? Not to mention the psychological impacts on people using them as "digital friends/guides" and like you mentioned being "one big AWS outage away from having half your brain ripped out".
guareber@reddit
Also, because learning the shit is actually the fun part for some of us.
cstopher89@reddit
Exactly, I use it for learning but fully agentic is mind numbing boring to me.
hippydipster@reddit
This is exactly why I always loved learning from books on technical subjects. I can go sit, relax, let my anxiety chill and I can just read for a while and absorb whatever it is that's in the book, and then I can feel like it's not all hopeless.
CrustyBatchOfNature@reddit
I am a hands on person. I can read every book on a subject, but I still need to put it into practice to get it. I really wish I could just read a book and get it for tech stuff.
hippydipster@reddit
The point of the book is not - read it and then know it and thus be able to do it. Reading the book familiarizes yourself with concepts, with what's possible and what is not, and where to find the details when you get to that point of trying to do something specific.
If you read a book thinking you have to learn the details and have them in your head available for recall after you finish reading, then book reading becomes an anxious, pressured activity. If, however, you read a book with the expectation that you will learn to know what this thing is about, learn some concepts, have a grasp of what's possible and what isn't, and have a place you know where to go to look up details in the future, then it's much more useful and relaxed.
For the most part, we're all "hands-on" people. Reading a book is fantastic preparation for doing the hands-on part.
TallestGargoyle@reddit
I always liked the general overview I'd get from the programming books in my local library. Just enough to make me aware of the concepts I'd need, so when I went to learn them proper, they came to me a bit more easily.
GrecianDesertUrn69@reddit
As a copywriter, I can say creative writing is exactly like that! It's all about the brief. Many non-creatives don't understand this.
Antique-Special8025@reddit
What the actual fuck. Not knowing things, figuring it out and learning is supposed to be the fun part lmao.
Why the fuck are you doing shit thats giving you so much anxiety you need to actively manage it?
AreWeNotDoinPhrasing@reddit
This isn’t the flex you think it is lol you sound like a teenager.
Antique-Special8025@reddit
Thinking your job shouldn't give you panic attacks is a flex? Good luck being miserable i guess?
ughthisusernamesucks@reddit
Just because something makes you anxious, doesn't mean you're having panic attacks.
Also, people have bills to pay. Tens of millions of people do shit that makes them miserable/anxious/angry/sad/whatever every day.
If you're lucky enough to not have to do that, great, but that's not most people (even in the software space).
tl;dr Grow up and quit being such a narrow minded asshole.
Antique-Special8025@reddit
You're American i guess? Most of the world doesn't consider being miserable/anxious/angry/sad/whatever at something that takes up ~8 hours of your day "normal". You only get to live once, seems wasteful to do it like that.
ughthisusernamesucks@reddit
This is just ridiculously naive. This is not an american phenomenon.
People in 100% of the world do things they don't want to do to make ends meet all the time.
bevy-of-bledlows@reddit
You're not wrong, but you're getting downvoted because your perspective is limited.
It's very easy to develop mental blocks around abstraction and problem solving, and I think most people do so quite early in life. I paid my way through school tutoring math (mostly econ/engineering students), and this was always the biggest roadblock for students. You can't learn that the intellectual struggle pays off if you never get that flash of insight and understanding.
I'm fairly convinced that rote math learning/follow the steps mentality teaches people to associate the abstraction struggle with tedium at best, and failure at worst. People like you or I who revel in problem solving aren't built different. We just got lucky early, and rode that wave. Better to use our enthusiasm to help people break out of their anxiety than to belittle or demean.
zvxr@reddit
Lucky for me, I've had a pathological inability to believe a compliment could be anything other than an attempt to manipulate me, prior to AI ever existing.
ebonyseraphim@reddit
Definitely an interesting and subjective take. I’m of an old-ish generation (graduated and started working in 2008) and though I was a rare breed back then, not knowing the solution was the constant state of affairs. Learning and problem solving is what we do.
While there are layers to what we don’t know, all of the knowledge gaps are closeable. Maybe we know a likely approach but don’t know the libraries that we need to use, or what libraries we could use but don’t know how they’ll all play together. Maybe we don’t even know our approach or libraries that can do it, or that we might have to roll our own, but it doesn’t seem difficult. Maybe the problem is quite difficult and we have to learn and roll our own for everything in-between.
All of this is acceptable. What's annoying, and possibly just my own personally experienced disrespect, is when managers and "business leaders" pretend that every task can be delivered by a date predicted months in advance and that somehow quality, safety, and security can also be ensured. Completely disruptive to an engineer working as normal, and terrible for growing and developing any engineer.
PocketCSNerd@reddit
My personal response to the knowledge and anxiety gap has been to seek books. There’s already plenty of books on a bunch of programming concepts and more project/discipline specific things without being a full on tutorial.
Is it slower? Oh heck yeah, but I feel like the constant seeking of immediate information is ruining our ability to retain that information.
AI being such a wealth of instant info (right or wrong) at our fingertips means we don't have to worry about retaining it. We're losing the ability to retain knowledge ourselves. Though I'd argue this process started with search engines.
SeijiShinobi@reddit
Like many others here, I think this is part of it.
In french, we use the term "Syndrome de la page blanche", meaning "writer's block", but more literally, is "White page syndrome", or the anxiety you get from a white page. And I think this translates better to other fields, like programming. Getting started is often the biggest hurdle in a lot of things. And AI is great at getting rid of that "white page" quickly.
Getting over that hurdle is one of the most important skills junior developers (or new writers) have to learn. For me it's the biggest difference between a promising junior who needs a lot of setup to get started and a trusted senior you can just throw hard problems at, knowing he will figure it out somehow.
echoAnother@reddit
Totally alien to me. If I don't know, I have anxiety, and I will not stop until I know. Asking AI would not let me know and truly understand. I can't comprehend how it could be the other way around.
TumanFig@reddit
i have adhd and learning a new tool by reading shitloads of documentation that didn't use a lot of coding snippets was my bane.
this was my nr 1. use case of ai since it was introduced. give me an example, i can figure out the rest.
Scientific_Artist444@reddit
Langchain has used AI well for docs.
SpaceToaster@reddit
If anything, LLMs are a great tool for exploring documentation. I just have it be my tour guide and ask it a lot of questions (double-checking key things, of course).
fuzzyperson98@reddit
This is why I basically refuse to touch it, because I'm the type of person who has difficulty with deferred rewards.
quisatz_haderah@reddit
Every time some AI tool ignores my agents.md telling it to move step by step in small increments and one-shots a ~~unmaintainable spaghetti~~ feature, I die inside of anxiety.
danstermeister@reddit
"You're right, we were dead wrong!"
"This next fix is definitely the way to go!"
Thormidable@reddit
It's heroin. You do it once and it feels great! It's bad for you, but it feels amazing. So you do it again. Each time the high feels a little less, but you don't realise the high is the same and your baseline is lower, until you need it to feel normal. Then you need it just to feel less bad.
SanityInAnarchy@reddit
Alternatively: It's gambling.
Random reward schedules are also extremely addictive. You can't quite habituate to it like you would heroin. You see some greatness occasionally, and also a lot of slop, and there's just enough genuinely cool moments to keep you hooked, even if it's a net negative.
(Still not sure if it's actually a net negative, but it's concerning that I still can't tell.)
Marty_McFly_1885@reddit
Yes! Variable ratio reinforcement. It's an interesting way to think about it!
SnugglyCoderGuy@reddit
Gambling is a great way to think about it. Put in a prompt, pull the lever, and then see what you won. Oh no, its not good enough. Alter prompt, put it in, pull the lever and see what you won.
MaxDPS@reddit
That could just as well describe coding itself as well, tbh.
Not that I would know, of course. My code runs perfectly, first try.
extra_rice@reddit
I've tried coding with LLM a couple of times, and personally, I didn't like it. It's impressive for sure, but it felt like stealing the joy away from the craft.
Interestingly, with your analogy, I feel the same about drugs. I don't use any of the hard ones, but I quite enjoy cannabis in edible form. However, I do it very occasionally because while the experience is fun, the time I spend under the influence isn't very productive.
Beanesidhe@reddit
We're building beautiful, well-designed and polished machines; that gives immense joy. That joy does not come from using AI, and that is a huge loss. It's like playing a game with a walkthrough: you sure get to build a good character, but the real pleasure is gone.
bitwize@reddit
I hate cannabis. Its effect on me is to make me feel like I'm operating in disjointed moments of time where it's difficult to remember what happened a few seconds before. I feel like I'm getting a sneak preview of the dementia I'll get when I'm old and decrepit.
Its effect on others is different, so if you enjoy it have fun.
extra_rice@reddit
Oh, interesting. One of my experiences follows a similar pattern where each moment is a snapshot in a flow of time, like experiencing reality as a stream of photos. However, I never worry about forgetting what's passed, only enjoying the present as a flow of time. I would however, at times follow one of the snapshots as it moves into the past.
Empty_Transition4251@reddit
I know a lot of people hate their jobs but pre AI era, in my experience - programmers seemed to have the most job satisfaction of professions I met. I think most of us just love that feeling of solving a difficult problem, architecting a clever solution or formulating an algorithm to solve a task. I honestly feel that the joy of that has been dulled in recent years and I find myself reaching for GPT for advice on a problem at times.
If these tech moguls achieve their goal of some god like programmer (I really don't think it's going to happen), I think it will steal one of the main sources of joy for me.
bitwize@reddit
It depends. I think programming was a much more rewarding profession in the 80s and 90s when software, and then internet-enabled software, were considered huge moneymakers. So the cigar chompers in the C suite thought "let's get a bunch of smart guys together, see what they come up with, sell it and make millions!" That was the way to work as a developer back in the day. For me, when I'm allowed freedom to focus, it's on like Donkey Kong. I feel like I can build something significant and overcome problems and challenges just by weaving code together.
But the imposition, in recent years, of processes like Scrum fucks with all of that. It's like the industry suddenly realized that leaving a programmer alone to think was very dangerous, and it should be avoided to the maximum extent possible.
I don't think I could do anything else though. I'd probably subject myself to significant injury farming or working in the trades.
EveryQuantityEver@reddit
It is very telling that a lot of the things these tech moguls are trying to “automate” away, like art, like writing, like coding, are things that people do for enjoyment
extra_rice@reddit
What concerns and saddens me is not just the production, but also the consumption. With code in particular, there's a growing sentiment about it being "disposable" because it can be generated by LLMs pretty quickly. So nobody should really care anymore about things like DRY principle, because AI can easily navigate its own slop. On one hand, I'm also of the opinion that code is disposable, but only some of it. It may be more accurate to say it's malleable, and reflects the current understanding of the system. The reason we have coding best practices is that code is read by humans almost as much as it's executed by machines.
But where does it end? At what point is code produced by AI and to be consumed completely by AI?
Mithent@reddit
This is a great comment. The company wants you to deliver, but I don't really get a lot of personal satisfaction from shipping itself, rather from all that craft that goes into producing something well. In the past, with a decent employer, those are hopefully mostly aligned. Now the company still gets what it wants, maybe faster and with fewer engineers, but it doesn't feel as satisfying to just have told an LLM what to do to achieve that.
quisatz_haderah@reddit
Genuinely lost my passion for the craft because of managers pushing "We must use AI", and even if they didn't, I'd still have to use it because I know I'd get left behind.
extra_rice@reddit
I feel the same way. I love software engineering and programming. It's multidisciplinary; there's as much art in it as there is (computer) science. I like being able to think in systems, and treading back and forth across different levels of abstraction.
Squeezing as much productivity as possible out of a human takes away the dread of being thrown into unfamiliar, uncomfortable territory, but also the joy of overcoming the challenges that come with it.
CoreParad0x@reddit
At the end of the day I think AI coding is a tool that, when used within the scope of what it's actually good at, can be helpful without taking the joy away from my job, for me anyway. If anything it helps me work through the things I don't like faster, while I focus on the bigger picture of what I'm working on, the actual challenging aspects of how it's designed, and writing the actual challenging code (and most of the code, to be clear).
If anything, honestly, it's making me like my job more. I can work through refactors with it much faster than me just doing it by hand. And I don't mean me just saying "go figure out how to do this better", I mean me sitting down and looking at what I've got, coming up with a solid plan for how I want it done, and then instructing an AI model with granular incremental changes to let it do the work of shifting things around. If I need to write a whole class, I'll do that myself. But if I'm just taking years worth of built up extension methods (in .net) from various projects that I've merged into this larger application and consolidating them into a single spot, removing duplicates, etc - I've found it to be pretty good for that kind of thing. It's small changes that I can immediately see what it's done and know if it's bad or not, and it does them faster than I could physically do it all myself.
I've also found it useful for doing tedious stuff, like I need to integrate with an API and the vendor doesn't give us OpenAPI specs or anything like that. So I just toss the documentation at an AI model and ask it to generate the json objects in C# using System.Text.Json annotations and some specifics about how I want it done and it does all that manual crap for me. I don't really find joy in just typing out data models.
I don't want to make this super long but I have also tried 'vibe coding' actual programs on my personal time just to experiment with how it can work. It's not gone horribly, but it takes a lot of effort in planning, documenting, and considering what exactly you want it to do. I 'vibe coded' a CLI tool to allow cursor to disassemble windows binaries and perform static analysis on them. It's very much one of those things where if you don't understand what actually needs to be done and how it needs to be done, the AI can just make crap up and not be effective. And you need to understand enough and spend a lot of time refining plans and validating plans for it to be able to effectively do the work. I would never use this in production, but it was an interesting experiment.
omac4552@reddit
You use it like a tool, just like me. It's very good as a tool but not every problem is a nail. Agree
IAmRoot@reddit
Agreed. I've found it's terrible with C++ and in general tends to both over-complicate things and create duplicate code. It barfs out what's needed to add a feature without any sense of design for the overall project. This is terrible for primary code but for one-off scripts, helper tools, and workflow automation it's good enough. I work in the software section of a hardware company and, for instance, AI helped avoid a lot of tedious editing of hundreds of benchmarks from customers to fit our CI infrastructure. I also used it to create a script to find the compiler commit where performance regressions occurred. I've found it a lot more useful as a tool to create tools than a primary-use tool. It so often forgets important instructions that I've found it better to have it create and debug scripts for a repetitive task than trust it to remember what its actual task is.
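The regression-hunting script described above boils down to a binary search over commit history, which is exactly what `git bisect run` automates. A minimal self-contained sketch of that idea, with toy commit names and a hypothetical `is_bad` predicate standing in for a real benchmark check:

```python
def first_bad(commits, is_bad):
    """Return the first commit for which is_bad() is true, or None.

    Assumes the history is ordered: every good commit precedes
    every bad one (the same assumption git bisect makes).
    """
    lo, hi = 0, len(commits)
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid          # regression is at mid or earlier
        else:
            lo = mid + 1      # regression is after mid
    return commits[lo] if lo < len(commits) else None

# Toy history: commits c0..c9, regression introduced at c6.
commits = [f"c{i}" for i in range(10)]
print(first_bad(commits, lambda c: int(c[1:]) >= 6))  # -> c6
```

With a real repo, the `is_bad` callback becomes a benchmark script whose exit code tells `git bisect run` whether the commit is good or bad; either way, only about log2(N) benchmark runs are needed.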
CoreParad0x@reddit
Yeah, I could see it being useful for that. On the forgetting important instructions, that's another thing about using them. I don't know if this is the case in your specific experience, but I've seen people not break things out properly. They'll load the model up with way too much context and get bad results out of it. There's only so much it can somewhat reliably hold before the results start degrading. So when I see someone on another project I work on spin up Cursor, slap it in "Auto" and point it at 500k lines of old legacy C++ code, then get bad results, it's like: yeah, you basically just gestured at this big-ass thing and told it to find a needle, explain how that needle works, and everything that touches it. It can't keep it all in context, so the results suck. And that code base is a mess.
Small, focused tasks that are detailed enough are key. If I do anything larger, like that CLI disassembler, it gets broken out into many, many small tasks and I will go one new chat at a time and have it do exactly one task then rotate to a new chat for fresh context.
CrustyBatchOfNature@reddit
This sounds a lot like how I use it. API to C# Class. Implement changes to a class (client adds new features to an API, the AI can usually take it and just update everything for me). Convert from older VB to C#. Create a function to do XYZ. Small things that basically save me from typing and all I have to do is check the code to make sure it didn't go off into neverland.
bevy-of-bledlows@reddit
Something that is a blast to do is to scaffold out a personal project (test stubs etc included) that you're interested in, get blitzed, and dive on in. The structure prevents you from going off the rails, and you just get to enjoy making something. Obviously not faster or anything lol, but it is quite fun.
destroyerOfTards@reddit
Which heroine?
bitwize@reddit
One played by Brie Larson.
loopingstateofmind@reddit
it's more like meth. in fact the nazis prescribed tens of millions of meth pills (pervitin) during WW2, which enabled them to do blitzkrieg and seize territory at superhuman pace. of course, if it sounds too good to be true, it probably is. today's LLM bros would be too busy labeling them as "100x" soldiers and saying "this is the future"
PoL0@reddit
but in that case steroids give an advantage. the study claims it's not providing any boost to productivity and harms programming skill.
interesting that there's a psychological placebo effect, and polls about using AI assisted coding seem biased towards that, as they just ask about the perceived increase in productivity, which is subjective and non-scientific.
_BreakingGood_@reddit
The study says that those who relied fully on AI completed the task the fastest and with the fewest errors. So it did give a boost.
They just could barely answer any questions about what they actually built afterwards
PoL0@reddit
how's that not a bad thing? software engineering isn't about churning code, it's about owning code and features.
HTML_Novice@reddit
I guess because you could always ask the ai any questions you had
SaneMadHatter@reddit
Well, nobody knows how to use slide rules anymore either. 🤣
Artistic_Load909@reddit
I don’t think I could code without AI anymore…. I mean, I can do leetcode questions, but actual work would be impossibly difficult if I'm honest with myself.
I mean I could go without having it integrated in my IDE / Claude Code style, but for researching docs and looking up syntax, I could not go back to Google and Stack Overflow at this point.
Ashamed-Simple-8303@reddit
Steroid use has lasting benefits due to muscle memory. So even when you stop, you will be better off afterwards in terms of muscle mass (but not health)
Famous-Narwhal-5667@reddit
It’s annoying that when you google a question Gemini spits out an AI answer at the top, with code. Then you scroll down and there are 10 sponsored links, and then you may find what you’re looking for below that. It’s impossible to get away from it; even Adobe PDF has some kind of LLM thing in it. It’s annoying.
ElvishParsley123@reddit
Install udm14 and avoid Gemini altogether.
https://addons.mozilla.org/en-US/firefox/addon/udm14/
https://chromewebstore.google.com/detail/udm14/ffcpcoipaaccggomdlgaophbocccfapl?hl=en&pli=1
Blando-Cartesian@reddit
There’s a world of difference between 1000-LOC vibe-yolo AI use and using it to conveniently find bits of API trivia. I would refuse not to use it for the latter too.
urameshi@reddit
I think the perishable skill isn't the coding itself but the learning. A lot of people forget how to learn as they get older and AI makes it so you don't really need to learn ever again as long as you can prompt
I'd say the large majority of devs don't know how to learn. That's why they just hack stuff together even in their personal time. They learned enough to be competent then stopped learning how to refine. It's supposed to be a field of nonstop learning but people plateau rather quickly
Using myself as an example, I've always told people I don't really remember how to write a loop. They may think I'm joking but I'm serious. But for me, I'm always willing to learn how to do it again. This isn't incompetence on my part; it's how I approach everything
When I write a loop, I'll ask why. I'll poke at it. I'll see why it works. I'll see what I can do to it. Once I milk it for all it has, I move on
So if I were tasked with fixing something, I'd pull out a piece of paper and write out how I think things work while asking questions. I've always worked like this because this is what they teach kids to do. I remember having to draw the brainstorm cloud and going from there
So the issue here isn't AI imo. AI just exposed the real issue that even people in tech plateau way before they ever know they did. They're horrible learners
Because imo, if you have the ability to learn then there's no way you should have problems working with code because at some point you'd realize you need to learn more about coding in order to tackle the task. It doesn't matter what skill level you are when approaching the issue. All that matters is that you respect the work in front of you enough to want to learn it
And the number of people who refuse to do that is high.
zxyzyxz@reddit
Learned helplessness
bryaneightyone@reddit
I think this is a misread of the paper.
The study does not say people became incapable of learning. It shows that once AI is part of a workflow, asking people not to use it creates an incentive conflict. Many reverted anyway.
That is closer to asking developers not to Google or not to use an IDE. Noncompliance reflects habit and efficiency, not cognitive damage.
The paper’s actual concern is narrower: AI can short circuit early skill formation if it is used as a shortcut instead of a thinking aid. It explicitly notes that conceptual use of AI improves learning outcomes.
This is a workflow and training design problem, not evidence that skills are permanently lost.
Scientific_Artist444@reddit
Given the option to manually type out 150 lines of code and have the AI generate the same in 1 second, what will you choose for productivity?
Certainly, AI types faster than me. I can't compete with it on typing speed. The question is only about the quality of the generated code. Is it good? What improvements would you make? How would you approach the same problem? These are the questions of value. So there's definitely a productivity gain in terms of speed. The loss is in quality. And the more quality checks there are, the slower the code gets deployed.
BehindUAll@reddit
Well it's not that; humans are quite capable of weighing pros and cons. In our heads, even subconsciously, it is a landslide win for AI in the short term and not so much in the long term. If AI is able to read and modify code, then as long as you have an architecture in your head, and as long as you test and document enough, you are absolutely going to use that crutch and lean on it. At a certain point it doesn't make much sense to learn code syntax by syntax. At that point, yes, devs and companies are screwed if everyone is relying on that crutch to succeed. That's where we are at right now.
stuckyfeet@reddit
It's just more fun coding with AI.
FalseRegister@reddit
Good. Demand will soar.
quisatz_haderah@reddit
I doubt that tho; businesses are going through a hot-potato era where as long as something is barely "passable" it's enough to push it, sell it to the next sucker, and get out of there with no long-term plans.
FalseRegister@reddit
Some, yes. Others, not.
Crisis always brings opportunities. Some businesses are booming.
aradil@reddit
That certainly does not follow from your previous statements.
ZirePhiinix@reddit
It is actually closer to a "cybernetic" enhancement in that its removal literally cripples you.
SanityInAnarchy@reddit
I wish they'd broken those down by experience level, or given us some other insight into who the non-compliant people are. Are they experienced people who saw their skills erode, or are they new people who never developed the skill in the first place?
bryaneightyone@reddit
You're talking about 'writing code' as the perishable skill, right? This is my main issue with the really anti-AI crowd: it seems like they equate engineering output with 'how good you type code'. On the other side we've got the tech-bros who think this shit is magic. Reality: these are tools, and the main skill we have as software engineers is building software. Typing code is the easiest part of that.
Beanesidhe@reddit
Typing code is 5% of the craft.
arlaneenalra@reddit
No, I'm not specifically talking about "typing code". I'm talking about the thought processes involved, knowledge of algorithms, etc.: all the associated things outside of typing code that non-developers tend to forget about. Typing code and knowing the stack you're working with are important as well, and something else you will lose if you aren't careful, but the thinking part is what's most important.
bryaneightyone@reddit
Yeah, that makes sense, and I think we actually mostly agree.
Where I push back is that the thinking part does not disappear unless you explicitly offload it. In my experience, patterns and algorithms don’t erode if you are still the one framing the problem, evaluating tradeoffs, and deciding what good looks like. AI just changes how fast you iterate. The real risk is not using AI, it is using it as a replacement for reasoning instead of a multiplier for it. That feels more like a workflow and training problem than a perishable skill problem.
purple-lemons@reddit
Even not doing the work of finding your answers yourself on google and just asking the chatbot feels like it'll hurt your ability to find and process information
NightSpaghetti@reddit
It does. Googling an answer means you have to find sources then do the synthesis of information yourself and decide what is important or not and what to blend together to form your answer using your own judgement. An LLM obscures all that process.
diegoasecas@reddit
and googling obscures the process of reading the docs and finding it out by yourself
sg7791@reddit
Yeah, but with a lot of issues stemming from unexpected interactions between different libraries, packages, technologies, platforms, etc., the documentation doesn't get into the microscopically specific troubleshooting that someone on Stack Overflow already puzzled through in 2014.
BriefBreakfast6810@reddit
For me AI is fucking amazing at cutting down the initial search space of the problem I'm trying to tackle.
After that my previous experience takes over and I'd either Google or go straight to the mailing lists to figure out the details.
diegoasecas@reddit
i agree, that's the point. we're always making the tasks easier. it makes no sense not to.
NightSpaghetti@reddit
Presumably Google will point you to the documentation in the results, although these days you never know... But yes, the official documentation should be among the first things you check, even just for its sheer exhaustiveness.
diegoasecas@reddit
be honest, googling stuff was never about reaching the docs but about seeing if someone else had solved the same problem before.
diegoasecas@reddit
i mean, probably, but old engineers also most probably said the same when google and stackexchange came out. "just read the docs, everything is there." sure bro, but i have work to do and i need to do it fast.
purple-lemons@reddit
they probably did, and frankly younger engineers often seem to have less precise and in-depth knowledge of the languages they work with and the fundamentals of computing, because they rely too heavily on code snippets that "just work". Most of the time it isn't a massive detriment, but sometimes it causes problems that could have been avoided. I think furthering this trend with the use of chatbots will degrade the quality of software even more.
cstopher89@reddit
This is true of the people who got into the industry with a lack of passion. Which, to be fair, is a lot of people. If someone is passionate, they will take the time to learn the underlying fundamentals of the technology they are working in.
diegoasecas@reddit
manually flipping 0s and 1s with a magnetic needle moment
arlaneenalra@reddit
Google can be bad, but it really depends on what you're googling for. Looking for docs and/or open bugs related to your problem is extremely helpful. Sometimes that's your only meaningful option. There's a difference between appropriate research and doing everything by yourself. With AI it's much easier to delegate everything to the llm instead of maintaining a degree of healthy skepticism about what it's doing. A lot of devs are doing the equivalent of the "stack overflow" sort with AI and that's a problem.
SkoomaDentist@reddit
Another way to put it is to only use AI for peripheral tasks where you don't need to, nor even want to, learn the skill. Things like random scripts, "how the fuck do I get the overly complicated build tools to do This One Thing", and such. I.e. things you would have googled before Google crippled its search.
Conscious-Fault4925@reddit
I feel like as you get farther along in your career, though, everything becomes peripheral tasks where you don't need to, nor even want to, learn the skill
thoeoe@reddit
Yep, the other day I had to solve a weird bug in some frontend code (I'm a backend only guy). After staring at code that on its face should have worked, I asked AI and it solved it first try.
Other times I've used it are to help make bash scripts in our Github Actions workflows and navigating new-to-me code bases. Otherwise I've never found it particularly good, and as many others have said in this thread, it steals the fun part of my job, actually writing code
stormdelta@reddit
Agreed. Probably my most complex use professionally so far (that was actually useful) is converting bash scripts to Golang for use in distroless images. They're not terribly complex, it's just really tedious, and I still have to understand what it did anyways in order to make sure nothing was missed + fill in any gaps.
subone@reddit
Agreed. It's trash for many things, but as a search engine to find that one obscure answer to save you eight hours of pure experimentation? I'm not learning anything by floundering through obscure docs and forums to find that one dumb answer that I would have to just stumble upon otherwise.
therealmeal@reddit
You got a source for this one? Are you anecdotally having significantly better luck with Bing these days or something? The problem seems to me to be the Internet being full of trash these days, not a conspiracy by Google to make their search results worse for reasons that are hard to justify logically.
EveryQuantityEver@reddit
No, Google has purposefully made their search worse with the intention of showing more ads. It was specifically one person, Prabhakar Raghavan, who ended up pushing out Google’s longtime Head of Search, Ben Gomes, so that he could make search worse and show more ads. One of the first things he did when he became Head of Search was to roll back several updates Google had made in order to filter out scam search results.
https://www.wheresyoured.at/the-men-who-killed-google/
AreWeNotDoinPhrasing@reddit
If you’re looking for a download Bing 1000% is significantly better, hands down. But otherwise I’d say it’s at least on par with current Google.
therealmeal@reddit
I use ddg (which uses Bing), personally, and find myself falling back to Google (!g) pretty often. Bing is definitely worse overall. The Internet has become a cesspool of AI generated content and low quality ad farms.
Genesis2001@reddit
I've been having success with getting it to help me lay out and plan projects that I've had in my head for a while but never started. I feel like I'm actually making progress on the projects I'm using it on.
I don't rely on any code it generates. If it generates any, I double check calls before typing - especially if I've never used the call before - and have caught it hallucinating.
One of the bigger problems I have with LLMs is their incessant need to "please me." I really do not like the glazing it gives me at moments, and I usually just have to ignore it to get anything useful out of it.
kiteboarderni@reddit
Isn't that every skill...
arlaneenalra@reddit
I didn't state otherwise ;)
aft3rthought@reddit
I’ve lost programming ability in the past simply because my job was asking me to do too much JIRA, reviewing, meetings, interviews and bullshit code that wasn’t challenging. I remember feeling almost sick when I tried to write C++ again after a few years break, and I took up side projects ever since then. My ability came back quick enough but I won’t let that happen again until I’m sure I don’t need to code anymore.
SerLarrold@reddit
Heck I go on a long vacation and come back and forget how to do fizz buzz sometimes 😂 programming is certainly something you can get rusty with and delegating all the hard thinking to chatbots won’t make you better
Inside_Jolly@reddit
I tried using Cursor for a few days and my skill was vanishing quicker than when I was on a months-long vacation. It's not just a "use it or lose it" situation. It's as if using AI actively erases your skill.
Fresh-Jaguar-9858@reddit
LLMs are 100% making me dumber and worse at programming, I can feel the mental muscles weakening
phil_davis@reddit
I don't know that they were making me a worse programmer, but they were definitely making me a lazier programmer. I was finding myself struggling to get things done more than I used to. When I had a question and ChatGPT didn't have an answer I'd roll my eyes and pop on over to reddit instead of getting back to trying to solve it myself or asking a coworker to get another pair of eyes on the problem. Another part of that might also be the fact that AI has just made programming less fun and I'm just generally sick of hearing about it every day, lol.
Anxious_Plum_5818@reddit
True. When you outsource your knowledge and skills to an AI, you eventually lose the ability to understand what you're doing and why.
Bozzz1@reddit
Now imagine you never even had those abilities to begin with, and you've got yourself a modern day junior developer who cheated his way through college and interviews using AI.
grovulent@reddit
The A.I. companies know this. For devs to lose their skills is what they want:
https://www.reddit.com/r/vibecoding/comments/1q5x8de/the_competence_trap_is_closing_in_around_us/
atxgossiphound@reddit
They also want to collect a tax on every line of code we write.
Here's a reply to a different thread from yesterday where I dig into that idea a little bit more
adelie42@reddit
Everything is. And they are different skills.
eyebrows360@reddit
Like your hair
picklepete87@reddit
How do you use your hair?
eyebrows360@reddit
I'm hoping someone can tell me and then I'll stop losing it ._.
praetor-@reddit
How do you not?
ZenDragon@reddit
There's an important caveat here:
As usual, it all depends on you. Go ahead and use AI, but be mindful.
Nyadnar17@reddit
This was my experience. Using AI like a forum/stackoverflow with instant response time gave me insane productivity gains.
Using it for anything else cost me literally days of work and frustration.
CrustyBatchOfNature@reddit
I do a lot of client API integrations. I can easily use it to take an API doc and create a class that implements it, and 98+% is correct, with just a few changes here and there from me. I cannot trust it at all to then take that class and wire it into a program for automated and manual processing with a specific return to external processes. I tried for shits and giggles one time, and the amount of work that went into getting it to do that decently was way more than it took me to eventually do it myself.
bendem@reddit
We invented OpenAPI to generate code that is 100% correct for APIs.
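For anyone who hasn't seen one, here is a minimal, entirely hypothetical OpenAPI 3.0 fragment (made-up endpoint and fields); generators can turn a spec like this into typed client code, including C# data models, so nobody has to hand-type them:

```yaml
# Hypothetical single-endpoint spec for illustration only.
openapi: "3.0.3"
info:
  title: Example Vendor API
  version: "1.0"
paths:
  /orders/{id}:
    get:
      operationId: getOrder
      parameters:
        - name: id
          in: path
          required: true
          schema: { type: string }
      responses:
        "200":
          description: The requested order
          content:
            application/json:
              schema:
                type: object
                properties:
                  id: { type: string }
                  total: { type: number }
```

With a spec in hand, something like `openapi-generator generate -i api.yaml -g csharp -o ./client` emits the client classes (exact flags depend on the generator you pick; check its docs).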
oorza@reddit
One of our core services is a legacy monster whose documentation is only a 900 page PDF because that seemed cool at the time I guess. Open API would be great but who is gonna invest a month figuring out how to rebuild that thing?
degaart@reddit
I believe LLMs can transform that PDF into an OpenAPI definition. Worst case, ask Claude Opus to translate it, then have an intern verify the generated file.
RoutineCowMan@reddit
Your company can afford interns?
basilect@reddit
That sounds like TransUnion, except their API documentation is only 500 pages...
oorza@reddit
Way more boring than that lol
CrustyBatchOfNature@reddit
Not everyone uses OpenAPI though. Most of my client API documentation is in Word documents. Occasionally I get a WSDL. OpenAPI would be a lot better, but of the last 10 integrations I did, only one came with a spec, and it didn't match the Word doc they sent.
username-checksoutt@reddit
So use AI to create the OpenAPI spec first
Due_Satisfaction2167@reddit
The problem with doing that is that it degrades your understanding of the API you’re calling. Not an issue for one-off integrations, but it’s a major comprehension problem if you’re gonna be depending on it for core capabilities.
CrustyBatchOfNature@reddit
I go through the entire API implementation it creates to ensure it is written properly. It mostly saves me from keying. I don't trust AI without validating the output before I use it. Anybody who does is insane. Which is why I say 98%. I usually just have minor changes here and there.
FlyingBishop@reddit
What AI is really magical at is pointing out that one obvious mistake you made. It can look through and be like "it's because you have this bit of copypasta and you updated the part you're not using any more instead of the variable that's actually doing something." It says it much more politely though.
Lord_Mhoram@reddit
Yep. The compiler/interpreter catches most of my "duh of course" errors, but when that doesn't, the AI is pretty good at spotting them.
It's also good for "here's a few hundred lines of debug logs; where's the problem?" It'll often identify the problem much faster than I could have, and even if I need to find the fix myself, it saved me time.
Nyadnar17@reddit
As someone with mild dyslexia that functionality is 100% a godsend.
captain_zavec@reddit
Do you feed it the relevant library docs or anything?
I feel like asking about a random obscure library would be rife with hallucinations.
Sigmatics@reddit
Basically chat is cool, agent not worth it
Bolanus_PSU@reddit
I think I may transition to using claude code to understand repositories and gain high level insights over writing features. The more I read the worse it seems these tools are for you.
Nyadnar17@reddit
I have concerns myself tbh.
It feels a lot like alcohol or gambling. Some people are gonna be fine, some people are gonna be destroyed, and telling which category you fell into ahead of time is difficult
bnelson@reddit
That is how I use it. A lot of small, show me X snippets, and followed up by me implementing it myself. It is completely a tool and that is how I prefer to use it. I do let it vibe code things I would never implement myself.
Dry_Direction7164@reddit
I do this, but the code-flow graph in my head is not as clear as it would be if I wrote all the code myself. There are times (mostly at the end of the day) when I hit accept without reading, or after just a quick glance. If there's an issue in those parts of the code later, I hate digging into it, and my overall enthusiasm goes down for the day. Anyone else having similar issues? If yes, how are you handling it?
Shock9616@reddit
This is how I use AI when working on personal projects (don’t use it at all in class). I deliberately don’t have any AI integration in my code editor, so it’s a conscious choice I have to make to go ask for help/clarification. I also give it a simplified version of only directly relevant code so that it can’t give me a copy/paste solution. I’ve found that this helps me work through tough problems and still learn/understand what I’m doing.
Due_Satisfaction2167@reddit
Using LLMs as a direct replacement for stack overflow works fine, but don’t let it write the actual code for you.
s33d5@reddit
I call AI projects at work disposable projects.
We have a big infrastructure and sometimes we want a small service that extracts data or whatever. I will give AI the endpoints and it will create it and it'll work well enough.
However I mark it unmaintainable and to be thrown away when no longer in use.
Sgdoc70@reddit
I couldn’t imagine not doing this when using AI. Are people seriously prompting the AI, debugging, and then… that’s it?
Neirchill@reddit
Yeah, a couple of people I work with have basically said they just review the ai work and don't do any themselves anymore.
mycall@reddit
"It depends" is the cornerstone of software development.
ConfusedLisitsa@reddit
Honestly of everything really, in the end it's all relative
Decker108@reddit
Except the speed of light.
cManks@reddit
Actually not really, "it depends" on the medium
Manbeardo@reddit
Except the speed of light in a vacuum
dangderr@reddit
How good of a vacuum cleaner do you need to be able to vacuum up light? Mine can barely get all the dust off the ground.
Manbeardo@reddit
Finding such a vacuum cleaner is a matter of great gravity
Dragon_yum@reddit
Then you need to ask yourself how often the speed of light will be in a vacuum in production
Ouaouaron@reddit
People use the term "speed of light" because outside of dedicated physics communities, saying something like "Except c" is weird and confusing. Which is why "the speed of light" is often not equivalent to "the speed at which light travels".
mobit80@reddit
https://x.com/Kwite/status/1776654579603538187?lang=en
throwaway490215@reddit
But i thought we'd all circle jerk and tell each other AI is completely useless, worth nothing, only used by morons, while over at HN they circle jerk the other way?
What do you mean "it depends"? Who is going to upvote that bs?
jholdn@reddit
That's how I used Google and Stack Overflow until they both started dying. Yes, easily accessible and searchable documentation is VERY important.
Money-University4481@reddit
They can tell me whatever they want. I know i work better with ai help. As a full stack guy context switching has become much easier with ai. Looking up documentation on 5 different libraries and switching between 4 languages is much much easier.
SuitableDragonfly@reddit
Right, but those people spent enough time coming up with prompts for the AI that they weren't actually meaningfully faster at the task. So at that point, you've just spent money on AI tokens for no real perceivable gain.
tworeceivers@reddit
I was going to say that. For someone who has been coding for the last 20 years, it's not so easy to change the paradigm so suddenly. I can't help but ask for conceptual explanations in the planning phase of anything I do with AI. I just can't fathom not knowing. It's too much for me. But I also know that I'll be at a huge disadvantage if I don't use the tools available. It's almost analogous to someone 5 years ago refusing to use IDEs and only coding in vi (not vim), nano or notepad.
As you said, it really depends.
theQuandary@reddit
If you have to read everything and understand every concept, then AI can't follow through on its vibe-coding or 100x development claims.
You are right back to needing the same number of coders because the bottleneck (like always) is how long it takes them to comprehend/understand the problem at hand (at which point a solution is generally obvious).
seeilaah@reddit
It's like asking a Japanese speaker to translate Shakespeare: they may look in the dictionary for difficult words.
Then ask me to translate without knowing a thing of Japanese. I would just try to imitate the characters from the dictionary one by one, without ever questioning them.
cpp_is_king@reddit
That’s different. An AI can (at least in theory) look at every sentence ever written in Japanese, including translated text (for comparing before and after) to produce a translation as good or better than the best human
Krautoni@reddit
It's always funny to me how confidently wrong comp sci folks are about natural language.
AIs are nowhere close to "the best" human translators. That's an utterly ridiculous statement. Translation is a creative process, and a good translator has the chops to be an author. AI can give the appearance of a good translation, but it can't perform it.
To translate prose requires connecting cultural, historical and even biographical context of the original and target languages and cultures in a way that's often entirely new. That's part of the creative process. But AIs suck at it, because these concepts may never have been connected before (that's the point of writing new stuff).
To say nothing of the fact that even the best AI translators still make wrong assumptions about idioms, context, register, agreement and anaphora. They've become quite good, don't get me wrong, but even your average trained human translator will produce far better results. Not at the same speed or cost, but if you want quality, it's still humans.
cpp_is_king@reddit
I said "at least in theory"
EveryQuantityEver@reddit
No, it can’t. Because it does not know the meaning of the words
cpp_is_king@reddit
Of course it does 🙄
Ok_Addition_356@reddit
In the end it's always how/why/where you use the fancy new tool along with your prior (and developing) understanding of the product and technology as a whole.
AI has been pretty amazing for me.
But that's because I use it a certain way. Mostly for reference and very specific examples of something very granular that I need. I also have 20 years of experience.
audigex@reddit
Fundamentally this is what it comes down to
Using AI as a bouncing board can be super useful
Using AI to complete the kind of "busywork" tasks you'd give to an intern, can be a time saver and take some tedious tasks off you
Essentially I treat it as a sidekick: I still do the complicated "senior developer" bits, and I limit its scope to nothing larger than a class or a couple of closely coupled classes (spritesheet/spriteset/sprite being one last week).
In that context I find it quite useful, but it's a tool/sidekick to be used to support me and my work, and that's how I treat it
TheOwlHypothesis@reddit
Seriously. All the AI Luddites are loving this but didn't even read the study.
Obviously if you let AI do all the work you don't learn anything. Shocker!
dethndestructn@reddit
Very important caveat. You could basically say the exact same thing about Stack Overflow and how much hate there was for people that just copy-pasted pieces of code without understanding.
ItsMisterListerSir@reddit
Wow you mean the OP specifically selected their own bias and ran with it? Gosh.
The funny thing is the most AI response haters on Reddit are most likely AI bots themselves.
_ECMO_@reddit
If there was no one scoring highly in the AI group, that would be an Earth-shattering catastrophe.
No one who read this post thought that AI made the people into idiots. But the trend is obvious despite exceptions.
I’d say the question is - do you trust yourself enough to be the exception? And can you sustain being the exception for years and decades?
ItsMisterListerSir@reddit
I agree on both counts. A paintbrush can never truly replace the artist unless the two become a single, unified entity. While this feels like a radical shift, we have encountered this type of transition before. The primary issue today is that the sheer scope of this change exceeds our collective capacity for understanding, much like previous generations struggled to grasp the dawn of the internet. We are living in an era where science fiction is rapidly losing its fictional status, causing the boundaries of reality to blur.
Human evolution has always been a journey of expanding our perspective through new frameworks. We are excellent at abstractions. We developed brains to navigate primitive survival, consciousness to find reason within our sensations, and mathematics to distinguish objective reality from our internal thoughts. We built tools to reshape the physical world, which in turn allowed us to refine the math required to see further into the unknown. Just as the internet expanded our vision, we are now birthing a digital form of consciousness to help us filter essential reason from overwhelming noise.
Artificial intelligence will not replace humanity because we have reached a point where we cannot exist without it, unless we wish to return to a primitive state of survival. Instead, we are merging with these systems, leading to a more fluid sense of identity. This evolution will likely be painful, much like the initial burden of human consciousness was, yet it is a transition we have already tacitly accepted. Currently, social media functions as a digital matrix while AI acts as a ghost within the machine. Our mental focus has shifted away from physical reality as we become increasingly immersed in a hyper-reality that is beginning to exert its own control. Ultimately, you must choose to adapt to this new landscape using these evolving tools, or you must choose to step away from modern society entirely.
I feel sorry for the AI that finally wakes up and has pitchforks tossed at its feet.
phil_davis@reddit
If you're just talking about modern LLMs like ChatGPT, this is laughably untrue.
ItsMisterListerSir@reddit
I agree. I was being general in regards to the ecosystem as a whole.
It-Was-Mooney-Pod@reddit
It’s hilarious that you just described yourself. The paper also says people who did this method had very little efficiency gain relative to just coding themselves.
If I’m not going to be doing the coding part any faster, and understanding the result still takes effort, why on earth would I pay a bunch of money to use this tool?
Your last sentence is hilarious projection considering most AI positive subs literally have the majority of the posts written by AI on purpose.
LeakyBanana@reddit
Might want to read the study. They tried to statistically control for baseline programming skill level and as a result the efficiency gains disappeared. But in fact the participants that used AI finished in 22 minutes compared to 30 without. And 40% of the non-AI group couldn't even finish the task without help from the researchers while only 10% couldn't with AI.
It-Was-Mooney-Pod@reddit
I did read the study, people who were lower skill level saw higher productivity gains and could finish the task successfully more often, but at the cost of actually learning anything. You’re acting like controlling for programming skill level is just some math quirk instead of an obvious adjustment you have to do if you want to measure how productive AI actually makes someone. There’s at least 3-4 times where the authors directly state that there are no productivity gains.
Furthermore, the efficiency gains in this particular task are obviously going to disappear even harder in a real production environment, where your lack of understanding of what you're actually doing means you can't debug or update anything you've previously built. You're trading a 20% gain in efficiency on the front end for a lack of skill development and a bunch of additional work on the back end.
LeakyBanana@reddit
The ones who tried to blindly vibe code did but there were also groups that both completed the task faster (significantly faster than the unassisted control group) and learned more than the control group by using the AI to help them understand the code.
What is productivity if not the ability to complete a task faster and with less outside help? I don't fault the researchers for controlling for it, because they wanted to study learning, not productivity. I fault people for misusing the study's findings to say something it doesn't.
I'd like to see whether or not this is actually true for the groups that completed the task faster and learned more. Because this is obviously not going to be universally true.
worldofzero@reddit
If you read the study, they break the groups into 6 patterns. Some are slower but give some gains educationally. Others are significantly faster but rot skills.
liquidpele@reddit
> As usual, it all depends on you. Use AI if you wish, but be mindful about it.
It's okay, I'm sure companies would never hire the cheapest developers that don't know what they're doing.
prabanjan_raja@reddit
LLMs are a great tool, if you use it to augment your thinking rather than to replace it.
They give us two options:
- You can learn anything you want; I will answer any of your questions.
- I can answer anything you want, without any work on your part.
So it is up to us to make the choice.
One thing: before asking a question, form a general hypothesis of the high-level answer and test it against the answer provided by the LLM. You can't do this after reading the answer, because hindsight bias is real.
Moist_Yam_3495@reddit
Interesting study! From my experience, the key is knowing when to use AI vs when not to. It's great for boilerplate and exploration, but for complex logic, I still prefer thinking through it myself. Curious what others think - has AI actually made you more productive or just faster at writing code?
Vegetable-Economy370@reddit
I build a full SaaS product solo using AI daily, and honestly — both sides are wrong.
AI doesn't make you 10x faster at everything. It makes you 10x faster at boilerplate and 2x slower when you blindly accept code you don't understand. The net result depends entirely on how you use it. My workflow: I write the architecture, design the database, decide the tech stack. AI helps me with repetitive implementation — CRUD endpoints, CSS layouts, test boilerplate. I read every line before committing. When I skip that step, I always pay for it later in debugging. The paper is right that prompt engineering + context loading eats time. But the people claiming 100x are also not lying — they're just measuring different things. Spinning up a landing page in 20 minutes feels like 100x. Debugging a subtle race condition in your AI-generated code feels like 0.1x.
The real skill is knowing when to use AI and when to think yourself. That's not something the "vibe coding" crowd wants to hear.
No-Two-8594@reddit
if it's not improving efficiency then it's a sign that things have gone way off the rails, at least in terms of how the companies are trying to use the technology.
And you can see it in the UI/UX for basically any AI tool. It is a terrible experience, doing all kinds of things that even an amateur frontend developer would learn not to do (like adding so much text that it doesn't fit on a screen, then immediately scrolling to the bottom before a user can start reading). The CLI tools are a big improvement but still not all that good, and you hear about weird choices like putting React in the terminal. It's probably because they want to prove they can develop this stuff with AI, and they skip all the processes that refine software into something good.
but anyway, efficiency should be improving.
Southern_Gur3420@reddit
Prompting time offsets AI code gen speed. Base44 scaffolds full apps faster for prototypes
pure_cipher@reddit
I don't want any AI to generate the entire code base for me. I just want it to generate the syntax. But because of the craze at the companies, I am bound to use it.
Terrible_Process3297@reddit
i went from 8x to 35x so..
Prior-Reach-3507@reddit
good
CharizarXYZ@reddit
That is not what the study says. The study tested the productivity and retention of novice programmers learning to use an unfamiliar programming library with the help of AI. The AI user group varied drastically in how they used AI, which impacted the results.
Basically, programmers that copy-pasted AI code without learning what it does completed the task faster, but they didn't learn anything. Programmers that asked the AI for explanations had better understanding, but at the cost of taking more time. This study tells us nothing about AI's effects on experienced programmers using a language they already know.
John_Wicks_Dog@reddit
Why would they publish this? But nice that they did.
Leading_Yoghurt_5323@reddit
Spot on. AI is great for getting a quick, runable boilerplate off the ground, but it completely falls apart on complex architecture. Relying on it for core logic just means spending twice as long debugging later because you don't actually understand your own codebase.
Mother_Stage_490@reddit
This is a valid point. Over-reliance on automation often masks the fact that we're skipping the mental heavy lifting required to truly master a codebase.
VegetableSome9182@reddit
This matches my experience perfectly. There’s a massive hidden cost in AI coding that the '10x productivity' influencers never mention: Conceptual Debt.
Writing the actual lines of code is rarely the bottleneck in software engineering—it’s the mental mapping of the system. When you let an LLM ghostwrite a feature, you’re essentially skipping the 'struggle' phase where you learn the constraints and edge cases. You end up with a codebase that 'works' today, but you’ve built a black box that you’re now legally and professionally responsible for maintaining.
The 'babysitting' factor is also very real. I often find that by the time I’ve crafted the perfect prompt, provided the context, and fixed the subtle logic bugs the AI introduced, I could have written the implementation myself. And the worst part? If I had written it myself, I’d actually know why it works when it inevitably breaks at 2 AM.
We’re trading long-term codebase health and our own growth for a short-term dopamine hit of seeing code appear on the screen without effort.
FinancialWriter6602@reddit
Makes sense. AI writes code fast but you lose the "why" behind decisions. That "why" is what makes you better at debugging later.
No-Two-8594@reddit
i don't believe it doesn't help efficiency, unless you are using it wrong and just telling it to do everything without really knowing what you need to do
which is probably what most people are doing unfortunately
GregBahm@reddit
Thread Title:
First line of the paper's abstract:
Cool.
_BreakingGood_@reddit
The article is weird. It seems to say that in general across all professions, there are significant productivity gains. But for software development specifically, the gains don't really materialize because developers who rely entirely on AI don't actually learn the concepts- and as a result, productivity gains in the actual writing of the code are all lost by reduced productivity in debugging, code reading, and understanding the actual code.
crusoe@reddit
It's bad for newbs basically.
But I don't spend hours anymore writing shell scripts or utilities for my work. It saves me a lot of time there.
_BreakingGood_@reddit
It is more complex than that. AI can definitely save hours of work in ideal scenarios. Utilities and shell scripts are an amazing use case for AI because it's easy for both you and the AI to understand the entire context and scope of the problem in a vacuum.
But even for senior developers, when you start using it to replace your own understanding of a large, complex system, the gains you achieve in "speed of code output" might be entirely offset by your inability to properly debug, understand, design, or read the code of the complex system when it becomes necessary at another point.
YardElectrical7782@reddit
Pretty much this, and honestly I feel that even for senior devs, comprehension and ability to code will diminish the longer they use it and the more they delegate to it, it’s just going to take longer for that to set in. Might take months might take years, but I definitely feel like it’s going to set in.
_BreakingGood_@reddit
100%, I think there's a lot of copium like "It's only junior developers whose skills will atrophy if they use AI. If I, the senior developer, use AI, it multiplies my abilities"
I am NOT an anti-AI purist, but I believe everybody should look truthfully at themselves and really seek to understand what the right amount of AI is for you. "I do 100% of my programming through prompts" is almost certainly not it.
blind-panic@reddit
I'm on neither side (anti-AI or evangelical). My solution is to use AI intermittently: I use it extensively one week, then the next week I turn it off. It's an experiment.
N0_Context@reddit
I think using it well is a skill in itself, more like managing. If you hire a junior engineer to do a task outside of their skill level, and then don't know what they built because you let them run wild without oversight, that makes you a bad manager. But there are ways of managing that don't yield bad outcomes. It just means you still need to actively use your brain and intent to get good quality, even though the AI is *assisting*.
Educational-Cry-1707@reddit
This is true, but developers tend to not be good managers - the very few that are, they tend to be needed to manage actual people. I’ve been coding for nearly 20 years, and to this day I am terrible at managing people who know less than me, it’s a chore. I’m even worse at managing AI, because at least with people I can see if they don’t understand.
bettershredder@reddit
the problem is management expects us to be faster and more productive with AI. if you push back against this idea you're seen as lacking in AI proficiency and/or anti-AI. can't win
r1veRRR@reddit
But for seniors, isn't delegation to humans the same thing? Most principal devs I've known program very little. So, learning how to explain a task well enough for an LLM to do it could be seen as training for general delegation to humans.
Which, career wise, is kind of the only way up in many places.
_BreakingGood_@reddit
The problem is creating that task, understanding why it is necessary, how it should be built, and how it fits into the overall system.
SkoomaDentist@reddit
Imagine if we had methods specifically designed to convey detailed and precise information about such systems. Some people might want to give them a try over describing everything in ambiguous English!
ham_plane@reddit
Extremely well said
recaffeinated@reddit
It's bad for everyone. If you have experience you can tell what it's done wrong, and then you have to spend longer fixing its code than it would have taken to write it yourself.
For the n00bs they just ship the bad code, never learning why it was bad.
Both situations are worse than everyone just writing the code themselves.
mduser63@reddit
This is where I’m settling. It’s mostly not useful for my day to day, expert-level work on a mature codebase shipping to hundreds of thousands+ users. Too often it can’t solve problems I have, when it can solve them the code it outputs isn’t great (I’d reject the PR if a human wrote it), or it takes me so long to massage it via prompting that I’m better off writing it myself.
However for little one-off utilities in Python or Bash, it’s great. In those cases I don’t care if the code is any good because I don’t need to maintain it in the future. And the only bugs I care about are those that show up in my immediate, narrow use case, which it’s pretty good at quickly fixing. It’s really just a higher level automation tool.
zauddelig@reddit
In my experience it sometimes gets into weird loops which might burn 10M+ tokens if left alone. I need to stop it and do the shit myself.
Murky-Relation481@reddit
I've found this is extremely true when I ask it a probing question where I am wrong. It's so eager to please that it will debate itself on whether I was wrong, or looking to show it was wrong, or any number of other weird conundrums.
For example, I thought a cache was being invalidated in a certain packet flow scenario, but if I'd looked up like 10 lines I'd have seen it was fine. I asked it if it was a potential erroneous cache invalidation and it spun for like 2 minutes debating whether I was trying to explain to it how it worked or whether I was actually wrong. I had to stop it, and when I rephrased saying I was wrong and explained how I knew it worked, it was like "you are so right!" Just glazing me.
blind-panic@reddit
I have also had this experience many times and it can go terribly if I don't know the topic well. It ends up confused and so do I. Now I try to keep my interactions concise and limited.
Shaone@reddit
Pre-Opus-4.5 using weak models (e.g. Gemini and GPT) I would have agreed. But now that isn't something I've seen for a while.
chickadee-guy@reddit
Guys, Sonnet is AGI! Oh wait, we're using the new one for the canned line now?
Shaone@reddit
Have you ever actually tried it without the whiny fucking tone from all your posts? You might find you get a different result.
chickadee-guy@reddit
Of course I have. It has the same fundamental, breaking flaws that all its predecessors do, no matter how much MCP you cram into the context window
myhf@reddit
Ok but have you tried next month's version yet? Before next month's version everything was crap, but next month's version finally solves all of the problems once and for all.
Shaone@reddit
OK well I'm glad you've tried it at least before sarcastically mocking my observation that I have not witnessed any dead end looping in Opus, but have in other models.
My experience this week is that I'm making huge dents in a large backlog. For instance, a new feature originally estimated (pre-AI) as being in the region of 8 days of effort got completed and ready for QA in 2. And it passed both QA stages first time. Not even 2 days of dev time; more like 4 hours of my actual time, mostly on code review.
TehLittleOne@reddit
This is what I've been saying for a while now. I had a nice conversation with my boss (CTO) at the airport a year ago about the use of AI for developers. My answer was essentially three main points:
A good senior developer that cleanly understands how to do all aspects of coding is enhanced by AI, because AI can code faster than you for a lot of things. For example, it will blow me out of the water writing unit tests.
A junior developer will immediately level up to an intermediate because the AI is already better than them. It knows how to code, it understands a lot of the simpler aspects of coding quite well, and it can simply churn out decent code pretty fast if you're good enough with it.
A junior developer will be hard capped in their skill progression by AI. They will become too reliant on it and struggle to understand things on their own. They won't be able to give you answers without the AI nor will they understand when the AI is wrong. And worse, they won't be inquisitive enough to question the AI. They'll run into bugs they have to debug and have no idea what to do, where to start, etc.
I stand by it as my experience in the workplace has shown. It may not be the case for everyone but this is how I've seen it.
rollingForInitiative@reddit
I do think there’s truth to it killing the ability, even in seniors who’ve got experience though. It does make sense that if you don’t use the skill, you lose it, so to speak. Using AI to parse and interpret huge piles of debug logs is a blessing, but I’d be surprised if it doesn’t make you worse at doing it without.
In the end I think it depends on what you use it for and how often. Like, I don't think I would ever have taken the time to really learn bash, so it's probably no great loss to my abilities that I use ChatGPT to generate it on the odd occasion where I need a big bash script. The alternative would likely have been finding one online to copy.
But I’m more careful about relying too much on it for writing the more creative aspects of code, like implementing business logic of some feature.
Shaone@reddit
So I guess it's like artisan bread-making.
The vast majority of bread consumed in the world is mass produced by machines operated by people who wouldn't have a clue how to make a perfect baguette, and don't need to.
But there's still a market for artisan bakers, particularly in places where some people can still afford to pay 3-5 times as much for a more traditional loaf.
rollingForInitiative@reddit
I'm not sure about that. You don't need an artisan to go look through the loaf to make adjustments after it's been made. And the cost of a piece of bread being flawed is usually that it tastes bad or looks funny. No issue.
The cost of code being too buggy can be trivial, but can also be disastrously expensive. Or in some cases, absolutely life-threatening. And people aren't using AI tools just for trivial things where the correctness is mostly irrelevant.
Shaone@reddit
In general I think bad food is more likely to kill you than bad code. For now. But bad batches still go out. Weird, wrong loaves. I had a mass-produced one the other day that had 2 crusts at one end and tasted weird. And a few months ago, I had an artisan loaf with a massive air bubble, making it useless for the purpose I bought it for.
The higher the cost of incorrectness, the more important verification and quality control becomes, whether bread born of man or machine.
TehLittleOne@reddit
Churning out bash code, absolutely, the AI is going to be good at it, and you should use it for that. I mean, it will give you a result fast and do a good job of it. The details are more in the adjacent things, like knowing that you should use bash, knowing what your script needs to do, being able to validate the resulting script is correct, how to get the right tweaks if you need them, etc.
I want to treat AI like a car. I know where I'm going and I know how to get there; the car is just this dumb thing that can help me get there faster. I have to give it very detailed instructions so I get there safe and sound, but if I do it right there are a lot of benefits. I can find other ways to get there, whether I take the train, the bus, or even walk, but the car might be a much faster way. Emphasis on might, because the bus might be faster, or even walking could be faster depending on the situation, just the same way that it might be faster to search online, ask a friend, or even do it myself by hand.
cfehunter@reddit
The pattern to spot with AI is that everybody thinks it can do every job, except the one they have expertise in.
It's good enough to fake it to a layman, and catastrophically awful if you know what you're doing.
eucaliptooloroso@reddit
Yep. In a way (and I'm not saying this is what the author was thinking, or that they forced their conclusion; there's a chance it was just a funny coincidence), this article reminds me of those people that advocate for AI everywhere except in their own profession. They are often devs, but I've seen it in plenty of other areas too.
lhfvii@reddit
Gell-Mann Amnesia
bobsbitchtitz@reddit
I'm working on a project right now, and part of it required me to figure out how to create a role using Terraform. I've never worked with Terraform before, but I gotta deliver, so I tried to use AI to hack together a Terraform file. I asked an expert for a code review and he's like "wtf, this doesn't make any sense". I only know how truly bad it is when it's in my domain; otherwise you never know it's doing stupid stuff.
ItsMisterListerSir@reddit
Did you read the final code and reference the methods? You still need to learn Terraform. The AI should not be smarter than you can verify.
bobsbitchtitz@reddit
Absolutely, I'm not an idiot. It wasn't a simple issue: something to do with escalating privileges for an account across multiple namespaces, where two resources were sharing the same auth GCP IAM role by accident.
Cordoro@reddit
The main disconnect is they use “significant” in the statistical sense but readers are interpreting it more generally.
The AI group was faster, and all succeeded. 4/26 in the non-AI group didn’t finish the task in the 35 minute time limit. That limit made it harder to reach significance in the task time, so if there were no limit, they may have reached statistical significance.
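To see why the cap matters, here's a toy illustration (made-up numbers, not the study's data) of how a hard time limit censors the slow tail and shrinks the measurable gap:

```python
import statistics

# Hypothetical task times in minutes for two groups (toy numbers,
# not taken from the paper).
ai      = [18, 20, 22, 24, 26, 28, 30, 34, 38, 42]
control = [24, 27, 30, 33, 36, 39, 42, 45, 50, 60]

LIMIT = 35  # hard cap, like the study's 35-minute limit

# Censoring: anyone still working at the limit is recorded at the limit.
ai_capped = [min(t, LIMIT) for t in ai]
control_capped = [min(t, LIMIT) for t in control]

true_gap = statistics.mean(control) - statistics.mean(ai)
observed_gap = statistics.mean(control_capped) - statistics.mean(ai_capped)

# The cap clips far more of the slower control group's tail, so the
# measured difference between the groups shrinks.
print(f"true gap: {true_gap:.1f} min, observed gap: {observed_gap:.1f} min")
print(f"hit the limit: AI {sum(t >= LIMIT for t in ai)}/10, "
      f"control {sum(t >= LIMIT for t in control)}/10")
```

With these numbers the true gap is 10.4 minutes but the observed gap is only 5.2, because 6/10 of the slow group get clipped to the cap versus 2/10 of the fast group. This is the usual reason censored data calls for survival-analysis methods rather than a plain comparison of means.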
GrowthThroughGaming@reddit
I haven't been in engineering in a long time, and I never got that deep to begin with. A huge portion of my personal value has been introducing more competent automation and technical solutions compared to my peers.
AI does all of what I've done, faster than I've ever been able to do it. I also have the background to be able to stitch those things together and do my job in way less time.
If my peers learned to do what I do, I would lose my entire relative advantage, but they also don't know enough to do that, even with AI.
All of this to say, your understanding deeply resonates with my professional experience.
bigtimehater1969@reddit
You know how Reddit r/programming is trash? When comments like this get upvoted.
Literally the second sentence after the first: "Yet how this assistance affects the development of skills required to effectively supervise AI remains unclear."
And the final sentence of the abstract (the very first paragraph)? "Our findings suggest that AI-enhanced productivity is not a shortcut to competence and AI assistance should be carefully adopted into workflows to preserve skill formation – particularly in safety-critical domains."
It's really clear that in the first sentence the author was talking in general. And even then, they don't provide any evidence because they are speculating. The thread title is not wrong at all, you just didn't read far enough to see.
You think you have a gotcha and you're patting yourself on the back, but the only thing you proved is that you're literally unable to comprehend the information given to you. And all the upvotes you get just shows how bad this subreddit can be - it's not about sharing information about programming or having programming discussions, it's only for gotcha's, "owning" the other side, and emotional appeals.
Anyone who lets this subreddit affect their real life programming career is going to be worse off for it.
Local_Nothing5730@reddit
This sub before AI was trashier than a dumpster. I remember thinking it before rust hit 1.0. I always wanted to know what's the ratio of commenters who never had a programming job to ones who have
Backlists@reddit
You are right, but you actually skipped over the real nail in the coffin, which is in the middle of the abstract:
GregBahm@reddit
The paper finds efficiency gains.
The paper considers that the efficiency gains might not be significant enough to counterbalance the long term cost to human learning.
The redditor posting this lies and says the paper finds no efficiency gains. It's what redditors want to hear so the lie gets thousands of upvotes.
The findings are significant if last year represented a high-water-mark for AI and it never advanced beyond that. Because in that scenario, humans still must learn all the programming themselves to maintain historic levels of quality.
But AI has already advanced beyond that and will continue to advance beyond that. So it's an unproven assumption that humans will need to learn everything they used to need to learn to maintain historic levels of quality. Using an automatic transmission hurts my understanding of operating a manual transmission. It's true. But I just rely on the automatic transmission now.
I get why r/programming is horrified of the implications of this. I get why they are desperate to post and promote false information about this. Like I said, it's cool.
Inkdrip@reddit
Going to preface this by saying that I don't think the takeaway of the paper should be "AI leads to zero efficiency gains for developers"
But strictly speaking, the study did not find meaningful efficiency gains. The AI group was slightly faster across all experience brackets, but well within margin of error, and the large gain for inexperienced devs had large margins due to the tiny sample size.
This correlates with my anecdotal experience, anyways - I don't find AI any faster than I would be when focusing on a single active task, but where I think AI really improves productivity is the ability to spec out a bunch of tasks and toss them into a queue. Asynchronous/parallel dev is something you really can't do without AI, and this study neither tried nor wanted to examine that facet, because that wasn't the focus of the paper. It's a valuable and interesting study, but it wasn't meant to determine whether AI tooling actually makes developers more productive at large.
rhinoplasm@reddit
The irony of your tirade is that you also clearly do not understand what the paper is saying and that OP is misrepresenting it.
The paper makes very explicit that it is focused on how much programmers LEARN when working with a NEW library either with or without a chatbot assistant.
It's not designed to compare efficiency at all. OP is pushing a narrative that the original authors are not pushing because that's not what they're studying.
LeakyBanana@reddit
Exactly. They actually took steps to try to eliminate any efficiency differences between the groups that didn't relate specifically to learning. They provided syntax hints to the non-AI group and adjusted the times based on a warm up session.
But in fact the participants that used AI finished in 22 minutes compared to 30 for the non-AI group without these controls. Without the controls, the non-AI group was only able to complete the task 60% of the time while the AI group had a 90% completion rate. The AI group was actually miles better at completing the task quickly.
Helluiin@reddit
why would you leave out the controls. even without AI assistance you wouldn't try blindly hacking away at a problem using a library you've never seen before.
okawei@reddit
Sure but the title of the post says "AI assisted coding doesn't show efficiency gains" which is not what the paper is saying.
EfOpenSource@reddit
Who are these people saving time though?
I have asked Copilot to do exactly 4 things, none of them overly complex (transfer a file, report missing and different sums, etc.), and it has spectacularly failed on all 4.
I don’t care so much about that. I thought I’d try it and it failed. So I just dropped back to the right way of using the docs. But the docs no longer exist because the new workflow is to ask the ai till it finally fucking does something right! Infuriating!
GregBahm@reddit
The most time saving I've observed has actually been from my design department.
My fellow engineers all use it but aren't super keen to say how they use it. Maybe they use AI to do 99% of their work or 1% of their work. As a manager, I only use it for super boring crap like converting a codebase in one language to a codebase in another language.
But the design department's use of AI is more interesting. Our design department has 70 designers, and only 10 of those 70 are kind of technical. The other 60 designers sit around designing all these experiences in Figma, way way faster than the engineers can implement the designs.
And it's not even clear whether we should implement all the designers' designs. Their designs look prettier and probably make for a better user experience, but that doesn't necessarily make us more money. We have lots of designers because our software makes billions and it's fine, but they mostly sit around unhappy that their designs don't make it into the product.
So in late 2025 they all started just asking the AI to implement their designs. And that works, at least in a prototyping sense. So now they're all using their own designs in prototype form. And I know a couple of them have changed their own designs, based on the results of the prototype.
This is super productive, because now instead of having to harass my engineers to implement bad designs, they only have to harass my engineers to implement well tested, fully prototyped designs.
Of course I know my engineers just copy and paste the AI prototype code into the real codebase. But eh. They're not going to write something better.
Dunge@reddit
Efficiency and productivity are not equivalent
Gil_berth@reddit (OP)
Wow, you couldn't muster the strength to read past the first line of the paper. Sorry bro, your brain is fried…
disperso@reddit
Your title is pretty bad, and doesn't represent what the paper said either.
The paper is about skill formation, and how just getting the straight answer when acquiring a new skill doesn't help that. It's not that different from trying to learn something by doing it (and sometimes failing, sometimes getting it right), compared to getting the answer from the solutions, or a peer.
This is not about "AI assisted coding" in general. It's a very specific subset. So, sorry, your brain might also be "fried".
Gogge_@reddit
56% of the participants had 7+ years of coding experience, 37% had 4-6 years, and all were familiar with Python (at least one year of experience). They were tasked with learning the Python library Trio and completing a task, with one group given an AI assistant: "Participants in the AI condition are prompted to use the AI assistant to help them complete the task".
So it mimics how people use LLMs in general.
And this is what the study found:
How is this not about "AI assisted coding" in general?
disperso@reddit
Because they were given tasks and later asked questions about a library that they are not familiar with. The goal was not general purpose use of an LLM, but skill formation. Skill formation is literally in the title. And the abstract says (emphasis mine):
The article highlights how risky of a proposition is using an LLM for learning, and how risky it might be to just delegate too much to the model. From the late discussion:
Gogge_@reddit
And how often in "AI assisted coding" in general do you not learn new things, a.k.a. "skill formation"? Be it new libs, frameworks, even better understanding of just the language, all fall in this category.
disperso@reddit
That's a very good question. I honestly don't know, and my own personal experience is pretty biased, as I've been doing roughly the same (C++ with Qt) for over 20 years. I think that using an entirely new library is a lot more work and a lot more to learn than just using one library that you know.
But I also think the intent of the paper is much narrower than what the OP implies in the title and the rest of the post. As I read the paper, I see it quite focused on learning something new.
Also, note the "Future Work" section of the paper, where they acknowledge some important (but understandable) limitations of what the study is able to measure. Like:
Not claiming that the paper is bad, or anything. Just that these kinds of studies have understandable limitations, and need to be peer reviewed, reproduced, etc.
People are often too eager to trigger their confirmation bias to confirm their priors.
Gogge_@reddit
I think for the vast majority of '"AI assisted coding" in general' use cases most programmers are probably not in the stage where they have completely mastered the used language, mastered libs/frameworks, and related technologies (OS, databases, etc).
I'm guessing the study results are probably a pretty good fit for the average programmer experience.
We also have the METR study on programmer productivity and the illusion of gains^1, and Microsoft's on decline in critical thinking skills^2, USC's^3 (and a deluge of others) on how it's negatively affecting student learning as, in the real world, people just use it to do their assignments.
I have yet to see a study showing actual productivity gains for programmers, in non-trivial tasks, or that it has a cognitive/skill benefit.
So it's not surprising that rational people are skeptical of the current, actual, application of AI for programming.
disperso@reddit
I am more on the skeptical side of things, as everyone should probably be, given the obvious hype, and the difficulty in seeing any return of the investment on the large models (which makes one wonder how things will be 5 years from now).
But note that I'm also skeptical of the hype of some of the critical studies that I've seen. I've not read Microsoft's article, but about the one from MIT, I've seen several criticisms [1][2] on the many limitations of the study, so we should probably not conclude too much from it.
[1] https://www.nature.com/articles/d41586-025-02005-y
[2] https://www.changetechnically.fyi/2396236/episodes/17378968-you-deserve-better-brain-research
The one about METR, I have some criticisms of my own: too few people, and likely too inexperienced: of the 16 people, only 1 had over 50 hours of experience with the tool used (I think it was Cursor). I don't think that makes the research useless, of course! But I think it's just one data point of many, many more which are going to be needed.
And yes, definitely people should avoid this stuff for doing assignments and homework. It's basically an unreliable way to look at the solutions without doing the work itself. Tons are going to not be able to resist the temptation, so it is very concerning.
Gogge_@reddit
Regarding the MIT study: the Nature article is just a news piece with opinions, and the second link is a podcast. Neither is peer review, so they can't be used to dismiss the study results.
As for your own take on the METR study, subject count depends on the power needed for statistically significant results, so you need more qualification than just saying "low subject count" to question the study results.
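To make that concrete: the sample size a study needs follows from the expected effect size and the statistical power you want, not from gut feeling about headcount. A stdlib-only Python sketch of the standard normal-approximation formula for a two-sample comparison (the numbers below are illustrative, not taken from the METR study):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate subjects needed per group for a two-sample comparison."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided criterion, ~1.96
    z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    return ceil(2 * (z_alpha + z_power) ** 2 / effect_size ** 2)

# A large effect needs far fewer subjects than a subtle one:
print(n_per_group(0.8))  # 25 per group
print(n_per_group(0.2))  # 393 per group
```

The point being: 16 subjects can be adequate for a large effect or hopelessly underpowered for a small one, so a criticism has to engage with the effect size, not just the raw count.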
The authors note that there was no learning effect over the 50 hours, so it seems unlikely that there would be some effect past that.
If the claim is that AI is beneficial, then the onus is on the proponents to prove that, not the other way around, and current evidence points to no benefit, so it's going to be an uphill battle.
tracernz@reddit
Maybe they should have let AI summarise it rather than just reading the first line 😂.
LeakyBanana@reddit
Maybe you should? Because the AI group was actually significantly faster at completing the task. The researchers just tried to control against anything that didn't have to do with their main focus, learning the new library.
G_Morgan@reddit
PoL0@reddit
in other domains
Lame_Johnny@reddit
There were no efficiency gains for novices who were learning a new python library.
Antique-Special8025@reddit
The first lines are the claims they're testing, did you really struggle to focus long enough to read more than 3 lines lmao? 🤣
redditrasberry@reddit
Two important contexts there:
Obviously the sweet spot is using AI for something you are competent in. My bet is that it dramatically improves efficiency (but that wasn't measured here).
AndrewRadev@reddit
We already have a study for people using AI for something they're experienced in: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
Results:
The "obviously" in your statement is really, really important -- it was obvious to the developers that they would be faster, but they weren't. What's obvious to any given person is not necessarily true.
TheMrZZ0@reddit
It's about early 2025 AI tools. The landscape changed a lot in a year.
klausness@reddit
“This may have been true a year ago, but things are totally different now.” That’s what AI promoters have been telling us since the dawn of AI.
Strungbound@reddit
But like... we have objective measures for this stuff. Have you really not used the tools at all to think there's not a big difference between January 2024 era AI coding and January 2026 era?
TheMrZZ0@reddit
Just because it's a marketing argument doesn't mean it's indeed wrong. GPT3 was released 5.5 years ago and improvements have been tremendous since.
I believe AI promoters are overselling basically every aspect of AI, but this does not change the fact they improve very quickly.
build279@reddit
Huh, weird that it would take longer.
AndrewRadev@reddit
Do you think that the participants in the study deliberately slowed themselves down specifically when using AI tools? Do you think they suddenly remembered they were being paid by the hour only when they were using Cursor, but then somehow forgot about it while working on the non-AI tasks? Weird that it would work like that, huh?
paxinfernum@reddit
Lol. The flaws in the methodology were pointed out the moment this study was released, but reddit just keeps reposting it. The flaw is that only one of the programmers had more than a week of experience using AI-assisted dev tools. The study authors tried to mask it by creating the illusion of a spread by reporting the number of "hours" of experience. Any developer will be slower in the first week of using a new IDE.
Oh, and are you ready for this? The only developer with more than a week of experience with AI-assisted dev tools was faster and more productive. So the study proves the exact opposite of the headline.
AndrewRadev@reddit
Appendix C2.8 is where this is explicitly discussed in the paper:
The thing that you're referring to is this:
What they mean by "underpowered" is that you don't derive statistical significance from literally a single data point. If you look at their chart, there's also 9 developers with 0-1 hours of AI experience that also have a slight improvement in performance. Do you think that we should decide that if you get half an hour of experience you're faster, but then more experience makes you slower?
"Statistical significance" means "we are fairly confident that this effect is not just random chance", because there is a lot of random chance involved. When you have a single person, the effect can very easily be just by chance. That's how statistics works.
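The "single data point" problem is easy to simulate. Assuming a true effect of exactly zero, a lone measurement still looks like a substantial effect surprisingly often:

```python
import random

random.seed(42)

# True effect is zero: every "measured speedup" here is pure noise.
measurements = [random.gauss(0, 1) for _ in range(10_000)]

# How often does one lone data point land more than a full standard
# deviation from zero, i.e. "show" a sizeable effect by chance alone?
fraction_large = sum(abs(m) > 1 for m in measurements) / len(measurements)
print(fraction_large)  # close to 0.317, the theoretical P(|Z| > 1)
```

That's why a single developer's result, in either direction, tells you almost nothing on its own.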
As a side note, none of this is the most important thing about the study. The most important observation is that people believed they were faster, regardless of the actual measured effect. This is not an interesting observation about Cursor or about 2025 models, it's something to keep in mind anytime anybody says what is "obviously" true.
okawei@reddit
This study was performed way before the modern AI coding agents were in the zeitgeist. I don't disagree that using autocomplete in Cursor with an older Anthropic model slowed engineers down.
anonymous_hack3r@reddit
Eh, it can be good for learning too; it was pretty helpful for learning C++ because it could answer very specific questions that would be tricky to google. I wrote most of the code myself in that project but had it create some reusable stuff, like some container classes, and if those had something I didn't understand, I could just ask. I'd say the sweet spot is not so much about what you are competent in, but more about asking it to do specific well-defined things instead of having it do a large task and make decisions on its own.
SimonTheRockJohnson_@reddit
> novice developers learning a new library
No the majority of devs in both groups are above 3 YOE.
The library is 3-4 years old.
The majority of devs in both groups have had experience with a similar library. Most developers have had experience with a similar library if they've touched Javascript in the last 5 years. It's just `async`/`await`. If you've used one event loop you've used them all.
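To illustrate the "one event loop" point (using the stdlib's asyncio here rather than Trio itself, since the shared async/await surface is exactly what's at issue):

```python
import asyncio

async def fetch(name: str) -> str:
    # the same async/await shape a JavaScript developer already knows
    await asyncio.sleep(0)
    return f"got {name}"

async def main() -> list:
    # gather() plays roughly the role of Trio's nursery.start_soon:
    # run several coroutines concurrently and collect the results
    return await asyncio.gather(fetch("a"), fetch("b"))

print(asyncio.run(main()))  # ['got a', 'got b']
```

Swapping this to Trio is mostly a matter of spelling (`trio.run`, nurseries instead of `gather`); the mental model of suspending at `await` points carries over unchanged.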
> skill acquisition for the new library was part of the outcome
The majority of costs in software development is long term maintenance of the software. Skill acquisition is non-negotiable if you're responsibly running a software team.
> those who didn't learn the skill did improve efficiency
The problem for engineering leaders who still work with code and understand the long-term ramifications of software development has never been "not fast enough".
moreVCAs@reddit
It’s a double bind. For experts, it’s a huge boon. But for practitioners seeking expertise, it comes at a cost. And for novices, it’ll make you an idiot. So, as ever, we gotta keep producing experts or we’ll turn into an industry of morons.
TadpoleOk3329@reddit
Most experts I know (real experts, as opposed to redditors claiming to be experts) barely use AI because it's annoying for them.
The only experts that use AI are the ones trying to sell AI, in my experience
JWPapi@reddit
This is the right framing. The missing piece is that experts can encode their expertise into the toolchain — types, lint rules, test suites — so the AI operates inside guardrails that prevent novice-level mistakes. The expert doesn't review every line anymore, the verification fabric does. That's what makes it scale. The alternative is every developer manually reviewing every AI output, which defeats the point. I wrote about building these layered verification systems: https://jw.hn/dark-software
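A minimal sketch of what "encoding expertise into the toolchain" can look like in practice; all names below are hypothetical, invented purely for illustration:

```python
from typing import NewType

# Encode a unit convention in the type system: money is integer cents,
# so a type checker flags anyone (human or AI) passing float dollars.
Cents = NewType("Cents", int)

def apply_discount(price: Cents, percent: int) -> Cents:
    # Runtime guardrails back up the static types.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return Cents(price * (100 - percent) // 100)

# A test suite is the third layer: generated code must pass it unchanged.
assert apply_discount(Cents(1000), 25) == 750
```

Whether this actually removes the need for line-by-line review, as the comment claims, is a much stronger assertion than the sketch supports; types and tests catch classes of mistakes, not all mistakes.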
gummo_for_prez@reddit
We're already an industry of morons.
TomWithTime@reddit
Grim reminder of that for me recently, trying to explain to a contractor that pagination is important and they aren't going to make a single network call to pull a million records from a third party system. Also it's a million records because they are trying to filter the results of the network call instead of passing a filter to the query.
It's so insane I don't know how to explain it, but I'll try. Imagine your database is a shed. The shed has 5 trowels, 6 shovels, 200 bricks, and a million bags of fertilizer. You only need trowels and shovels. Do you query for trowels and shovels or do you run a query for all of the shed contents and then filter on the client side for trowels and shovels?
I don't know how a person even makes a decision like this.
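In code, the shed analogy maps directly onto pushing the filter into the query. A minimal sqlite sketch (table name and contents invented, scaled down from the analogy):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shed (item TEXT)")
conn.executemany(
    "INSERT INTO shed VALUES (?)",
    [("trowel",)] * 5 + [("shovel",)] * 6 + [("brick",)] * 200,
)

# Right: the database does the filtering and returns only 11 rows.
tools = conn.execute(
    "SELECT item FROM shed WHERE item IN ('trowel', 'shovel')"
).fetchall()

# Wrong: haul the entire shed over the wire, then filter client-side.
everything = conn.execute("SELECT item FROM shed").fetchall()
also_tools = [row for row in everything if row[0] in ("trowel", "shovel")]

assert tools == also_tools  # same answer, wildly different transfer cost
print(len(tools), "rows instead of", len(everything))
```

With a million fertilizer bags instead of 200 bricks, the second query is the one that takes down the third-party system.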
ElvishParsley123@reddit
I have some code at work that I inherited from outsourcing, it took 1/2 hour to run a certain stored procedure. It was using cursors. I optimized it by changing cursors to doing set based operations. That reduced it down to 12 seconds. But no matter what I did to speed it up, I couldn't get it any faster. Finally I rewrote the logic in C#, and just queried all the data I needed from there, which amounted to pulling in whole tables. The execution time went from 12 seconds to 1/2 second. And the logic was extremely easier to read and debug as well.
Another stored procedure I had was taking up to 5 seconds to run sometimes, and it was being called dozens of times a second, and locking an important table, absolutely killing performance. I optimized it as much as I could without success. I finally gave up on SQL and queried the data directly and did the calculations in C#. That reduced the execution time to less than a millisecond.
So there are definitely times where it's better to pull back data to the client and filter it there instead of trying to handle it in SQL.
TomWithTime@reddit
I've had to do that as well when I didn't have time to optimize some Entity Framework ORM bullshit. But not pulling the whole table: instead of joining, pulling all of one entity by IDs and then building a single large query to pull the next set, and so on. It can work at small scale like we're describing, but it's ultimately misuse of the tools.
Which is totally fine depending on the intended scale of the thing. Restructure the database, create views, add better indexes, etc. I used to treat the database the way you describe, but with experience I'm more on the side of trying to get more out of the tools and what they're built for.
And most importantly, be aware of project constraints and don't over-engineer things, so if that works, that's fine.
solidsieve@reddit
Your analogy stops being an analogy halfway through. I'd put it like this:
To make it even more complete, you could have someone go in for you and pick out the trowels and shovels (or take everything out so you can sort through it). That way you don't have to return the data you don't need.
TomWithTime@reddit
That works too. I will be frustrated if that code makes it into the final iteration
moreVCAs@reddit
yeah true, but only in the large. tons of smart experts working on stupid shit. it will be worse when we have to roll over a generation of staff engineers and find nobody competent to replace them.
crazyeddie123@reddit
Stop putting them out to pasture at 50 and we won't have that problem
dzendian@reddit
This made me laugh.
You’re not wrong.
seanamos-1@reddit
It can, and probably will get a lot worse a lot faster.
ChromakeyDreamcoat82@reddit
I was on the tools for 8 years, then I took a systems/architecture/services route for a while on data integration, ESBs etc, before ending up out of software for 5 years. Went back recently enough and I was shocked at how fractured everything had become.
We somehow went from clear design patterns, tool suites that drove the entire SDLC, design and test driven engineering, and integrated IT infra solution architecture to:
I blame agile, the SaaS rush, and the rise of Product Management and Product Owners who've never been on the tools and don't have a clue what a non-functional requirement is.
I'm 2 years into a repair job on a once-creaking SaaS application where product managers were feeding shit requirements straight to developers operating in silos adding strands of spaghetti release after release. I've also had to pull out 30% of the dev capacity because it wasn't making margin while we bring in basic release management, automated test, working CI/CD and other patterns.
There's a massive cohort of engineers <35 who've never put together a big 6-month release, and it shows. I've had to bring back old-ass practices like formal gold candidate releases etc - the type of shit you did when you were shipping CD-ROMs - just to tighten up a monthly major release that was creating havoc with client escalations month after month. We're quietly rebuilding the entire deployment pipeline, encapsulating code and services and putting proper interfaces in, and getting ready to shift off some old technology decisions, but it's a slow process.
There's far too many people in the industry who can only code to an explicit instruction from a senior, and don't have the skills to identify re-use opportunities etc.
Pressed_Thumb@reddit
As a beginner, my question is: how do I learn good skills like that in today's environment?
levodelellis@reddit
Read several books on your favorite languages and write lots of code between books. Have tiny throwaway projects, the shorter they are the better (if its one day long then great). Read this a few times, maybe some 6502 assembly manuals, then reread it some more until you understand exactly how the snake game works
Once you do all that, try reading Seven Languages in Seven Weeks. It's not important but if you understand it then you should be able to understand code in a different domain and different language
and don't forget to write code
headinthesky@reddit
Do lots of reading from industry experts. There are O'Reilly books which are relevant, beautiful code, books like that
ChromakeyDreamcoat82@reddit
Good question. The only way is to learn from peers, or from good processes, which is probably why we're gradually drifting away from good practice, as a wave of new tech companies and products spawned in a web 2.0 and big-data gold rush, coinciding with the advent of the Agile-gone-wild practices I've described above.
But if someone is trying to do process improvement, like improving deployments, or improving automated test, or work on a better standard of Epic writing, that's where I'd start - helping and shadowing that person. Volunteer to help with the operational work that helps the team, and don't just focus on coding features.
levodelellis@reddit
I suspected that in the mongodb era
_Lick-My-Love-Pump_@reddit
And every one of those morons bitches and whines about AI. Because they ALL know that AI will replace them, as it should. You know the type of moron: the one person in your group project in college who did the absolute minimum amount of work and then you had to redo even the little amount they contributed. The one who relied solely on copying homework from others and then barely passed the class with a C average.
tumes@reddit
I had a guy who worked at the same places I did twice in a row because he was charismatic to business types, and he stayed a junior for like 5 consecutive years. Honest to god, I don't think he shipped a single line of code solo in that time. I am sickened to imagine what he would have been enabled to ship over that period of time with all this.
Decker108@reddit
That guy sounds like straight shooter with management written all over him!
Bozzz1@reddit
Only time I've ever lobbied for someone to get fired was when we had a guy like this. There are people in entry level programming classes who had more programming knowledge than my coworker did. He never asked questions, he never said he needed help, and he consistently submitted unadulterated garbage that I would have to sift through in reviews and ultimately fix myself once deadlines approached.
Flashy-Whereas-3234@reddit
"unable to accidentally learn something"
Aight that one's going in the bank.
diegoasecas@reddit
literally the virgin hard working employee vs chad fucks around at working hours meme
ggwpexday@reddit
The perfect manager with real coding experience kinda guy, chefs kiss
Nastapoka@reddit
More than any other industry?
Why?
Kryslor@reddit
Hey! I've been a moron in this industry for over a decade WITHOUT the help from AI!
red75prime@reddit
Right in the abstract:
seanamos-1@reddit
This might suggest that if you are aware of the risks and use LLMs with discipline and care, you can prevent skill rot and other bad outcomes, and preserve learning.
However, we have a mountain of knowledge on how humans interact with automation. Humans are not disciplined enough by themselves to prevent complacency with automation; complacency is the default. See the constant effort required to prevent complacency and bad outcomes in aviation and factories, where the stakes are much higher.
Without a strict framework, rules on safe/correct usage and enforcement, it is inevitable that even most skilled people who know all of this will still fall prey to complacency given enough time.
xt-89@reddit
You could even get better at software engineering over time if you use the LLM as a way to find gaps in your knowledge.
The nerds of the world that genuinely love to understand things at a deep level will be fine. But they might have to own large projects by themselves because that high-effort approach is too rare.
moreVCAs@reddit
did you have a point to go with that block quote?
Dragon_yum@reddit
That OP selected the parts that confirm his own bias?
red75prime@reddit
Contradicting the OP's "And for novices, it’ll make you an idiot."
Bogdan_X@reddit
Microsoft did a study that shows AI usage alters your critical thinking. And you don't really need a study to know you forget things if you are no longer practicing yourself.
moreVCAs@reddit
you might want to read past the abstract before putting a load of eggs in that basket. but to each their own i guess. nobody will convince me that learning for its own sake is a waste of time, so we’ll just have to see whether it continues to pay off professionally.
HommeMusical@reddit
I've been making a living writing computer programs for over 40 years.
I don't find it's a huge boon. Debugging is harder than writing code. I rely on my ability to write code correctly the first time a lot of the time, and then to be able to debug it if there are problems, because I understand it.
I feel it increases managers' expectations as to how quickly you can do things, and decreases the quality of the resulting code.
Many times in the past I have gotten good performance reviews that say something like, "He takes a little longer to get the first version done, but then it just works without bugs and is easily maintainable."
This was exactly what I had intended. I think of myself as an engineer. I have read countless books on engineering failure, in many disciplines.
Now I feel this is not a desirable outcome for many employers anymore. They want software that does something coherent on the happy path, and as soon as possible.
Who's going to do their fscking maintenance? Not me.
pedrorq@reddit
You are the definition of an engineer 🙂 many "engineers" out there are just "coders".
Decision makers that are enamored with AI can't distinguish between engineers and coders.
HommeMusical@reddit
Thank you. I am flattered!
oursland@reddit
Imagine if you never wrote the code in the first place? Worse yet if you were never capable of writing that code!
EfOpenSource@reddit
I'd definitely like to see who all these "experts" are that are seeing this boon.
I've been programming for the better part of 20 years. I explore paradigms and get into the nitty gritty, down to the CPU level sometimes.
I cannot just read code and spot small bugs easily. I mean, I see patterns that often lead to bugs, and I understand when I should definitely look more closely at something, but I've also seen challenges to spot the bug the AI created and not been able to pick up on many of them.
Vidyogamasta@reddit
Yeah, in programming, most experts are control freaks that favor determinism over all else. They even have whole technologies like containers because the mere presence of "an OS environment that might have implicit dependencies you didn't know about" was such a sucky situation.
Introducing nondeterministic behavior into their workflows is a nonstarter. Nobody wants that. People praise AI as "getting rid of all the boilerplate", but any IDE worth its salt has long had templates/macros that do the same without randomly spitting out garbage sometimes.
The difference is that actual tools require learning some domain-specific commands while AI is far more general in how you're able to query it. It's exclusively a tool for novices who haven't learned how to use the appropriate tools.
Which is fine, everyone's a novice in something somewhere, we aren't omni-experts. But the average day-to-day workflow of the typical developer doesn't actually involve branching out into the new technologies unless either A) Their position is largely a research/analyst position that is constantly probing arbitrary things, or B) something is deeply wrong with their workflow and they're falling victim to the XY problem by using crappy scripts to solve their issue when they're probably just doing/configuring something wrong.
Aromatic_Lab_9405@reddit
I feel really similar. I already write code quite fast. I need time to understand the details of the code, edge cases, performance, etc.
If I just review someone else's code, be that an AI's or a human's, I don't understand the code that well, so nobody understands that code.
That's fine with super small low-risk scripts, but for a system where you need a certain level of quality, it seems like a super fast way to accumulate debt.
Dragon_yum@reddit
There’s a lot of pushback from the community here (and not fully without reason) but people treating ai like it’s all bad either don’t know how to work with it or haven’t coded enough in real world settings.
I think it’s a terrible tool for juniors but at higher levels when you know the structure and architecture of how you want your code to look it’s incredible time saver.
wilee8@reddit
How is it an incredible time saver? I've heard people say this, but then the examples are like, it can save me a few minutes on things I do rarely. The things that take the bulk of my time are way too complicated to trust AI to do correctly, or can't be done by AI at all because it isn't writing code.
Where are you finding these incredibly time savings?
Dragon_yum@reddit
Honestly it can help at almost every part. The thing is, you've got to first give it context, then tell it what you want to happen and break it down into steps so it has a plan to follow. Then implement it together: let it do a step, go over the code and correct it or tell it what it did wrong, then move to the next step.
The work is about 30% programming, 40% software design and 30% knowing how to do a PR.
wilee8@reddit
Man, I'm just skeptical. You saved a whole bunch of time by having AI architect your software for you? And consistently received useful results? Really?
Even your vague description sounds like it took a whole bunch of time and effort to get it to work for you. But incredible time savings!
Dragon_yum@reddit
If your reading comprehension were better, you'd understand that I architected it and told the AI how to write the boilerplate code, with extra guidance or manual coding for the more complex parts.
Writing code isn't hard. A junior can write code just as well as a senior; the difference is a senior knows how to write code correctly. Think of AI as a team of juniors. You give it a simple task, tell it what it needs to do and how to do it, and the results will be pretty decent. You might need to correct a few things in a PR, but if you give it the correct guidance the code won't be the issue.
That is why ai is a time saver for seniors and a terrible tool for juniors. I’m not saying it’s perfect, I’m not saying it will replace programmers. I’m saying it’s a powerful tool and not learning how to utilize the tool properly is going to hurt you professionally and put you behind other developers.
pananana1@reddit
Building literally an entire app from scratch. Something that would have taken a whole dev team over a year, I built in about 5 months.
wilee8@reddit
You wrote a whole app from nothing but AI prompts?
pananana1@reddit
heavily guided by me, yes.
if you just tell it to do something it'll do it badly. but if you then tell it how to do it correctly, it generally nails it. and then when it writes the code and you review it, and tell it any changes, it generally nails that.
one massive benefit is that it has complete up to date knowledge of any framework you're working on. it knows everything about current security practices about angular. that is very beneficial.
i don't know how well ai code would work for a complex code base with like 6 different databases. but for a NextJS monorepo app... it's fucking incredible.
jabiko@reddit
I think some of my coworkers that I would consider experts are actually regressing at an alarming rate.
For example, one teammate consistently produced pull requests that I really enjoyed reviewing because there was a beauty to how he solved problems. Now it's just mediocre (and sometimes subtly broken) AI code that requires a lot of attention during review.
MC68328@reddit
X. Doubt
moreVCAs@reddit
i’m being generous to reinforce the broader point - indiscriminate usage will have big, ugly side effects.
headinthesky@reddit
Yeah, I use it to help with writing some methods where I can type out the logic in a sentence. And it's nothing complex. So it speeds me up in that way. People who are using it to write entire features are spending way more time fixing the code than they would have spent actually writing it with some AI assistance.
PoL0@reddit
except the study says it isn't, as it atrophies your skills
liquidpele@reddit
Even as an expert, I've only used AI as information gathering on a codebase or tech I'm not familiar with to see what it would do. I don't use any of that, I just see what it did and then start looking into the functions/things it used as part of the learning process.
nnomae@reddit
It'll make everyone an idiot. Even the most senior developer is constantly learning. I've been a dev for over 25 years now and I'm pretty much resigned to the entire stack changing every 5 years or so.
Bogdan_X@reddit
So many of my colleagues don't see this. They assume everybody is an expert on steroids at the software architecture level.
Lame_Johnny@reddit
It will make the experts dumber too
tumes@reddit
Yeah, I got sick of DHH’s stink and jumped ship as a pretty senior rails person to mostly typescript which I never really engaged with til now (but had done more than enough JS in my time) which coincided with me begrudgingly rolling ai into my workflow and yeah, having come in to something I could fumble my way through with a strong sense of efficiency and architecture means, on the right task with the model behaving, I can have staggeringly productive sessions. Because I don’t need high level thinking, what I need is someone to pair with who knows bog standard typescript better than I do, which is a low bar. And the bonus is what it does something flagrantly wrong or non-idiomatic I generally can smell it from a mile away and can at least glean what not to do even from bad sessions.
All that being said, I also get extremely frustrated with the capriciousness and lack of cohesion of the JS ecosystem. There are a few shining beacons of logic and reasonable convention, but most of the time I am just baffled that it caught on, because so much of it is opinionated while also being sectarian. It feels like you have to struggle through a half dozen reinvented wheels, reinvent a half dozen more yourself, and then maybe you find a workflow that doesn't result in a bunch of make-work wheel spinning. It's no wonder most models veer into doing overcomplex, wordy, galaxy-brained shit; the corpus varies so wildly across all dimensions of quality and cohesiveness.
CodveAI@reddit
The solution isn't to avoid AI - it's to verify what it produces. Tools like Codve automatically verify AI-generated code catches bugs that slip through. Use AI for speed, but verify the output. Balance efficiency with quality control. codve.ai
YogurtclosetWise9463@reddit
Yeah honestly not surprised. I've been using Cursor and Claude Code a ton and the speed gains are real for getting something working, but the moment something breaks you're completely lost because you never actually understood how it all connects.
The "just review the code" thing is cope. Skimming 500 lines of generated code is not the same as knowing how data moves through your system. Every time I get stuck on a bug I end up drawing diagrams on paper trying to figure out what calls what.
It's funny that the most effective debugging method is still just getting someone to whiteboard the architecture for you. Says a lot about what's actually missing from the tooling right now.
Fluccxx@reddit
Well obviously. The issue is (always) the middleman. i.e. the meat bag. I think we are headed for a world of autonomous coding far faster than anyone could have guessed. The only thing stopping this is probably regulations and legal.
ResearchCommon2316@reddit
I actually find AI useful to chat with it in IDE and have discussions about the decisions I'm making - for example whether I should have specific methods in interface or what would be advantages/disadvantages if I apply design pattern. Something that I would ask a well experienced engineer friend.
I find it really frustrating when a lot of code is generated, or even scary when, instead of trying to solve a problem, you just ask the chat about it. I switched to only asking when I'm stuck, because thinking becomes painful after you delegate it to the chat.
ResearchCommon2316@reddit
also, when building something new, asking the chat to give you ways of doing it will speed up the design and coding process. For example, I had never had to record audio from a browser before. I gave my requirements to the chat, it explained what I should use and the possible steps to achieve the outcome, and it sped up the process because I had hooks to attach to.
jollydev@reddit
Delusional. Programming as we know it has definitely died. Just today I deployed a complex image editor in my company's SaaS, which I built in 10 hours using Claude Code.
It just struck me that this would easily have taken a month to do otherwise.
I was in the other camp, bearish on LLMs, thinking they had plateaued. But now I clearly see that even if everything stays exactly as it is today, the profession has changed forever.
MVPs now have almost no costs. Coding time is no longer a limiting factor for software delivery.
Affectionate_Rub6679@reddit
The steroid comparison is spot on. I noticed this with myself too, the more I let AI write my code the harder it got to sit down and actually think through a problem on my own. Now I use it more like a rubber duck than a code generator and it feels way healthier. The struggle is literally where the learning happens and skipping it just means you'll hit a wall later when the AI output breaks and you have no idea why
Virtual-Sale-279@reddit
Personally, I do believe it is an improvement to coding. Yes, to make it work you need to teach it, provide huge context, connect MCP to the knowledge center, use ticketing and guidelines, and spend a week of the sprint preparing the correct guidelines. And I am sure it is at least a 2x boost in this case. You still think through the architecture and the knowledge, explain the thinking in detail, and provide a clear plan. As it was said: a bad developer producing -0.1 will create -1 code, and a 0.1 developer -> 1. Just give it a detailed process and expectations and review thoroughly at the end. But in most cases it takes the same time or is only slightly better. I know some people running 10-15 agents at the same time with small tasks, but I can't keep focus on all of them. And of course some areas are more forgiving than others (UI compared to DB, BE, ML).
Local-Pizza-9060@reddit
We tried folding AI code assistants into our workflows and saw the same pattern: it helps with boilerplate but erodes understanding if you don't read what it produces.
It's tempting to trust the suggestion and move on, but the real cost shows up later when you're debugging or extending that code and have no idea how it works.
Pairing AI generation with a human code review and keeping it to the scaffolding layer gives you some speed up without turning your developers into code janitors.
AI won't replace thinking through the problem any time soon.
SZQGG@reddit
link for the published paper?
IKnowMeNotYou@reddit
Let's be honest, not everyone understands the term software 'process'. I have seen so much stupidity in my own career that my point is: to improve the software development game, you don't have to replace (or enhance) engineers with AI, but the middle management. Most of the problems in my past projects originated from middle management doing stupid stuff and making stupid decisions.
I had projects I worked on for 2 years where I switched into death-march mode 2 weeks in and went on safari 4 weeks later.
The insanity that happens (at times) in big companies is insane!
Dry_Willingness_7095@reddit
The actual study / Anthropic's own blog on this is a more objective summary than the clickbait headline here: https://www.anthropic.com/research/AI-assistance-coding-skills
This study doesn't address productivity as a whole but the impact of AI usage on skill-formation, which as you would expect will deteriorate if there's no real cognition on the part of the learner
Altruistic-Toe-5990@reddit
if you read the study, it did actually measure productivity
Dry_Willingness_7095@reddit
True. Productivity as measured by time to completion improved. Let me reword
Cautious_Lemon_8589@reddit
Isn't programming basically constantly learning new things?
egosinenomine@reddit
It's funny they had to run a scientific study and publish a paper to conclude this self-evident obviousness.
Electronic_Cry_7107@reddit
I am a hands on architect with understanding of the full stack from infrastructure, back end code, front end code, test code, cicd pipelines, authentication, authorization, observability, on and on. I am using claude to develop an application and my god, it’s so much faster than I could do myself. I don’t need to be an expert at everything, I just need to know universal design principles, and review the design, let the agent go crazy with the implementation, do some manual verification and make changes if needed, then have it write up the tests to lock in the feature. I think it’s 10x faster for me.
Am I losing really low-level skills? Sure. But AI does those things, so now I focus on the big picture and just act like a technical manager and product manager all rolled into one.
The paper you linked just says that entry-level programmers are not learning when they use AI. That's true: AI is making entry-level programmers obsolete and killing the profession. AI productivity is real, but its real gains are not to be seen by an entry-level dev.
tzaeru@reddit
From a pilot study. Funny and worrying at the same time.
More so about the study - it makes intuitive sense that the use of AI tools can hamper learning. I wouldn't generally recommend new programmers to rely on AI tools, and really I'd prolly recommend avoiding the use of AI tools altogether at least early on.
For measuring productivity; well, here they used a chat assistant backed up by GPT-4o. I don't think chat assistants are a very productive way of using AI tools for coding productivity. You really would want full IDE support. And GPT-4o is ranked quite low in comparisons of LLM models for scientific and coding tasks.
Combining my anecdotal observations, and the current newest studies, I'd concur that many developers absolutely can end up doing tasks slower rather than faster with AI tools. There's an extra cognitive requirement for mapping the problem to AI and for e.g. writing prompts. And you still need to review what the AI is doing.
Some studies have found productivity gains. It's nowhere like 10x. It's more like 1.3x. When you get those gains, I think there's some common factors; one is limiting the use of AI and primarily using it as slightly better auto-complete. Another is having kind of learned the sort of tasks that AI are good for, and recognizing the ones they are less good for or where the likely time gain is so little you're better off not. And third is using the AI as sort of a focus aid. Like now and then you really don't wanna do task X and you get started with it by prompting an AI.
Personally, I do think that becoming accustomed with AI tools is something most developers who haven't yet tried them should consider. They are improving all the time, after all, and smart experienced use of them can be a productivity boost. I find it unlikely that they are going to disappear anywhere, and I find it likely that as people get better in deciding when to use them and how to use them, and as those models improve, they do eventually become kind of a default that developers have running there, and others just don't use them much while other use them a bit more; relying on them is prolly a bad call tho, and not good for whatever project you are doing in the long term.
SweetBabyAlaska@reddit
I just don't understand how this isn't common sense lol. It's like, have you guys ever copy-pasted code you don't understand and then regretted it? Or have you ever spent two super cracked-out nights in an intense code-and-debug loop until you made something crazy work, or tracked down some obscure bug? Or have you ever written an API front to back by hand?
Idk how you can have all of those experiences and not understand that powerful feeling of understanding every single line of code you've written inside and out, plus the nuances and pitfalls from making those mistakes and correcting them. I feel like it takes a long time to lose that understanding too. Compare that to lazily slapping stuff together and it's obvious which state of being is sustainable; that much should be apparent.
contemplativecarrot@reddit
I don't get how you all don't realize this is meant for the c-suite repeating types who are swallowing the "magic pill" schtick.
Of course most of us realize "it's just a tool, it depends on how you use it, similar to copy pasta coding."
These articles and topics are push back on the people who pretend and talk like it's not. Specifically leadership of companies using AI.
Chemical-Year-6146@reddit
On a broader note, why isn't there a mainstream and widespread cultural pushback to using AI-generated code like art, music and writing?
Didn't LLMs train off all of our code too?
Like Steam's recent AI policy makes it ok to not disclose use of LLM-generated code. How is that fair to coders?
MrYorksLeftEye@reddit
It's not similar to copy-pasta coding though. It's a qualitative gap from finding part of a method on Stack Overflow that you might get away with copying, to systems that you can talk to in natural language and that adapt based on your existing code. To me the main question is whether the models + harnesses get better quicker than they manage to build up technical debt. It's an open question whether this will happen. I don't see an obvious advantage to having humans understand a codebase if there's an AI that's smart enough to figure out every technical/architectural problem by itself.
contemplativecarrot@reddit
yikes
Busy_Cartoonist3724@reddit
Interesting take, and honestly not that surprising.
From my experience, AI-assisted coding doesn't automatically make you faster; it changes where time is spent. You save time on boilerplate, but you spend more time validating, debugging, and understanding generated code. If you don't actively engage with the output, your mental model of the system definitely gets weaker.
I think the real issue is how people use AI.
The “10x developer” narrative feels more like hype than reality. Most real gains happen when AI is used for:
Also, speed alone isn’t the right metric. Software development is mostly about understanding systems, requirements, and trade-offs. AI can generate code, but it can’t replace the mental model of the system and that’s exactly what this paper seems to highlight.
So I don’t think AI coding is useless or revolutionary. It’s more like a power tool, great in skilled hands, dangerous in careless ones.
Curious how others here use AI without losing understanding of their codebase.
Admirable_Trifle7888@reddit
I don't think the problem is just "using AI", but how it is used today. A lot of people treat it as a shortcut to skip precisely the hard part: thinking, structuring, dealing with errors.
When the tool becomes something that just spits out code fast, it really is easy to lose your grasp of the codebase and become just a superficial reviewer.
For me, the more interesting debate isn't whether AI "increases productivity", but whether it is reinforcing or weakening the developer's mental process.
If it speeds you up without requiring understanding, the cost comes later.
lbwanghr@reddit
Vibe Code allows programmers to think from the boss's perspective and issue commands without much thought. However, the difference is that they earn significantly less. At the same time, their skills decline considerably, and their existence becomes far less necessary.
tankmode@reddit
Kind of how Gen Z workers broadly don't know how to use computers (just phones), I think you're going to end up in a situation where Gen X and millennial devs are the most value-add, because they actually learned how to code manually and also learned how to use AI.
nacholicious@reddit
I'm kind of afraid that we'll run into the "1 year of experience, 10 times" issue, and the gap between vibing juniors and vibing seniors will be a lot smaller than today
R4vendarksky@reddit
I don’t agree, I really fear for juniors in our industry. This feels a bit like offshoring all over again.
dillanthumous@reddit
If this turns out to be true it is all the offshore developers that should be most concerned. Why pay an army of people somewhere else if your 10x senior can do it with AI.
Personally very skeptical based on the current limitations of LLMs and the lack of a road map to mitigate them. But one day they will crack it I am sure.
R4vendarksky@reddit
I think they are just trying to stay competitive by loss leading + burning cash. I expect behind the scene the actual scientists are working on other things towards AGI because there’s no way this gets there.
dillanthumous@reddit
Yeah, there is a sea change happening in research now - people are acknowledging that the attention mechanism and scale are not going to be sufficient and it is time to go back to the drawing board. Some argue for neuro-symbolic systems, others for world models via training, etc. etc.
aoeudhtns@reddit
And we are seeing offshoring happen in reality while large companies (especially those with AI products) claim that it's AI.
kincaidDev@reddit
There’s still a place for entry level engineers, I would hire one if I could. It seems like most developers are learning much faster than they used to and many don’t even realize it
manystripes@reddit
My biggest concern is what happens when the pricing model catches up with the reality and things that used to be cheap or free to use AI for now have tangible costs associated. We're getting people hooked on a tool that's going to end up slapping them with subscription fees for using that part of their brain
R4vendarksky@reddit
I'm seeing it already, even before that. Devs already running out of credits and costing businesses $1000s each doing banal tasks.
I expect eventually when the real costs get passed on then 60-90% of uses will fade away or get restricted to c-suite.
liquidpele@reddit
That's already the case, the market is flooded with bad coders looking to score high paying jobs. The "everyone learn to code" bullshit never panned out, and it turns out that only like 10% of coders out there are any good.
Jedclark@reddit
A junior engineer asked in the team chat the other day how to restart their router, and then sent us a photo of it. That was a first.
gex80@reddit
As devops/ops, I've run into a lot of people who code but literally do not understand anything beyond that. These same people will come up with entire processes and then a year later ask me how the thing they wrote works.
In tech in general, Gen Z and younger are technically illiterate. They grew up with systems that hide from the user the things that used to require them to think a bit about how to fix. Computers don't crash in the same way they used to. People have moved to closed walled gardens with lots of guardrails to make the user experience seamless (tablets/phones/web-based applications).
Like when was the last time someone had to troubleshoot why an app wouldn't install on their iPhone from the app store?
TomWithTime@reddit
I'm not sure what this would look like, but another option is that society would adapt. Maybe where we mostly have computers and computer-like devices, everything will get replaced with tablets. Or everything will get replaced with a phone-like interface where the interface resembles a common app that they will be familiar with navigating. All of the job functions will be a tiktok or Instagram page to scroll through. It won't make sense to other people right away but it'll transform them in the work force.
Yep. I'm a millennial and I've had to learn all kinds of things from being a kid in the 90s to using libraries because home computers and printers were uncommon to having good self control before I got my own personal computer and phone. It was an interesting time to live through and I think it made us ready for whatever is next.
TrontRaznik@reddit
I sincerely appreciate your optimism as a xennial
ElliotAlderson2024@reddit
Sounds like wishful thinking.
bitwize@reddit
I get that the plural of anecdote is not data, but I'm consistently seeing reports of huge productivity gains due to AI use from even grognards whom you'd think would reject it, or at least skeptically try it and come up empty. Eric S. Raymond for instance, reports shortening of programming tasks from weeks to hours, and says that AI use is now best practice for software engineers (implying that if you're not using it, you're not really doing due diligence as an engineer).
How does one square reports like these with an utter lack of lab data that suggest AI is not really offering much, with significant downsides? Is it an "in mice" thing where the conditions in the studies are such that real-world gains will not be reflected? Or is it the opposite—people are claiming false or illusory productivity gains without being systematic about analyzing the data?
TooMuchTaurine@reddit
Many studies have already shown it's the experts / top performers whom AI amplifies more than the novices / low performers.
So I'm not sure we can use this study of novices to tell us whether AI can be a lot faster or not.
markehammons@reddit
Why do people keep repeating this? As if a senior dev or "expert" has reached programming zen and has nothing else to learn? The paper states quite plainly that AI use hampers skill acquisition. No matter how expert you are, there's still a wealth of things to learn in computer science, even on tasks and subjects you're well acquainted with.
bitwize@reddit
If I've learned anything from industry work, it's that businesses place little value on your ability to learn, and much more value on your ability to "hit the ground running" and execute. So yes, anything you claim to be an expert in, that you're hired for your expertise in, you should be able to do with little or no new learning on the way.
s32@reddit
Senior devs are likely to be better at describing changes that they need, investigating parts of a codebase, and providing direction on what might go wrong etc. Basically, they are better at speaking technically.
There are many things I do in my day job that I have reached programming zen on. Defining unit tests for boilerplate code, writing scripts that help operationally, building a UI alongside my changes, etc. are all shit that... yeah I just don't wanna do that.
So I find that I'm free to work on actual hard problems while my editor writes the boring stuff in the background. It's pretty nice.
TooMuchTaurine@reddit
It tries to say two things: that it's not faster AND that it's bad for learning. Well, I don't think anyone needs a study to see it would be bad for learning.
corysama@reddit
If using AI is bad for your learning, you are using it wrong. I'm an old greybeard engineer and AI is teaching me a lot more than it's writing code for me.
Get-Me-Hennimore@reddit
Agreed. I think it's easier to use wrong for less experienced devs – the AI will confidently state things that might even work on the face of it (perhaps with a security flaw or logical bug), and the dev might not have seen enough shit yet to question it as they should.
As a decently experienced dev I feel pretty much in control over whether it's bad for my learning, and I currently choose to stay pretty low-vibe, understanding and taking ownership of any code I ship and thinking of Claude as a tool to generate my code faster and sometimes better.
krystof24@reddit
I think it changes your skillset. I'd say it can help you learn faster, but I'm also personally feeling that it somewhat degrades my ability to write code, particularly boilerplate, which is not inherently bad but something worth acknowledging.
But this has been well known since before AI: mid-to-senior devs are often faster than architects/staff engineers who spend more time in meetings and reviewing code than writing it themselves.
transeunte@reddit
it's called coping
senior devs are trying to convince themselves that AI is not a threat to their employment in the long run because like you said they achieved programming nirvana and will forever be invaluable
Murky-Relation481@reddit
Yes, but one of the skills good seniors pick up is absorbing things others are doing. The pool of entirely unknown concepts is smaller as a senior and often the new knowledge is knowledge you already know being applied in different ways. An LLM is no different than a junior in that context, or using it as a rubber ducky to bounce ideas or problems off of.
That's where the productivity gains come from: utilizing it as a senior with well over 2 decades of experience, and using it in areas where I am approaching being an SME.
Get-Me-Hennimore@reddit
If nothing else a senior dev experienced with X may have gotten a better sense of where AI gets X wrong, so will be more suspicious when using AI for Y. And programming experience also generalises to some extent between languages and areas; the expert may spot general classes of error even in an unfamiliar stack.
oursland@reddit
No, they haven't. The most credible study of experienced developers showed that AI caused a 19% drop in productivity, while those who were using it believed it was a 20+% increase.
paxinfernum@reddit
Since you keep posting this, I'll keep posting the truth.
The flaws in the methodology were pointed out the moment this study was released, but reddit just keeps reposting it. The flaw is that only one of the programmers had more than a week of experience using AI-assisted dev tools. The study authors tried to mask it by creating the illusion of a spread by reporting the number of "hours" of experience. Any developer will be slower in the first week of using a new IDE.
Oh, and are you ready for this? The only developer with more than a week of experience with AI-assisted dev tools was faster and more productive. So the study proves the exact opposite of the conclusion reddit has been perfectly happy to repeat ad infinitum.
caks@reddit
Can you share some references? I've been trying to find hard numbers on YoE
oursland@reddit
It's not true. This METR study on experienced developers showed that users of AI claimed a 20+% increase but showed an observed 19% drop in productivity.
paxinfernum@reddit
Lol. The flaws in the methodology were pointed out the moment this study was released, but reddit just keeps reposting it. The flaw is that only one of the programmers had more than a week of experience using AI-assisted dev tools. The study authors tried to mask it by creating the illusion of a spread by reporting the number of "hours" of experience. Any developer will be slower in the first week of using a new IDE.
Oh, and are you ready for this? The only developer with more than a week of experience with AI-assisted dev tools was faster and more productive. So the study proves the exact opposite of the conclusion reddit has been perfectly happy to repeat ad infinitum.
chaerr@reddit
As a senior-level programmer I can say for sure it's helped me a ton. But I push back on it a lot. Sometimes I see it as an eager junior engineer who has great insight but no knowledge of best practices lol. I can imagine that when you're a junior, if you believe everything it says, you just start taking in garbage. The key, I think, is to be super skeptical about the solutions it provides and ensure you understand all parts of what it's writing.
s32@reddit
Same. This sub is extremely anti LLM and it makes me think that we have a looooot of folks who are just... kinda not very good at it.
I'm at a FAANG and legit every engineer I know is seeing efficiency gains. It's not a "hey chatgpt can you implement X?" but a more involved process of defining requirements, steering, etc.
If you start from a good spot and know what you're doing, you've gotta be working on some esoteric shit for AI to not help speed up at least parts of it.
Makes me think a lot of people here tried codex or whatnot when it first came out and haven't tried any of the actually... good tooling out there.
Murky-Relation481@reddit
Yeah, I've been doing this professionally for 20+ years and if you actually know what you want and how you want it done, AI can save you a lot of time writing things, because writing is sometimes the hard part from a motivation standpoint (especially if you have ADHD). I use specific technical terms, I describe things in logical order, and I use complete sentences. All of this helps. Also, I work in small chunks and am usually scaffolding the code by hand and then having it fill in the blanks.
I will say though that if you get carried away you can easily feel disconnected from the code, and it feels less like something you wrote and more like a third-party library you are consuming. Ultimately it is a speed-up, but you spend far more time reading code than writing it when doing it this way.
But letting it handle C++ template errors is worth it alone. I love it, and it's usually good at explaining the fix/why it was broken (I write a lot of my own metaprogramming stuff).
paholg@reddit
I was a big skeptic for a long time, and still am in many ways. But boy are there tasks it's really nice for.
My favorite thing now is just having it dig into logs.
Zoom keeps crashing every time I screen share, and I haven't been bothered enough to look into it. Just today, I told Claude to figure it out while I worked on other stuff. It gave some wrong suggestions, but did get it working pretty quickly without too much effort from me.
TrontRaznik@reddit
This is why I chortle when I get downvoted for talking about how AI has increased my output tremendously and that the code it produces is high quality.
I know I'm a solid developer, and it's obvious that the people arguing with me don't know enough about engineering to properly utilize AI, and that's why they think it sucks.
But I don't mind people like that believing what they do, and I don't mind OP spreading misinformation. Frankly, I'm terrified for my future in this industry, and anything that convinces other developers not to learn to properly use AI just means less competition for me and a better chance of being one of the ones who survive what's coming.
In the meantime, I've finished an entire week's worth of tickets by EOD Wednesday every week for at least three months. The extra free time is nice.
blehmann1@reddit
It's not a study of novices, the majority of participants have at least 7+ years of experience and less than 10% have less than 4.
It is a study of people new to the library they're being evaluated on, which I presume is because they're studying its impact on learning, not productivity gains. The fact that they found no statistically significant productivity gains is the far more interesting finding, but it's not what they were looking for, and it's not the best study design for looking at that. It is of course still surprising that they found no evidence that AI users are faster when the AI knows the library and the people do not.
The fair comparison would be on a population that's familiar with the library, half with AI, half without. And where they're allowed to use agents rather than just chat, since one would expect that to be faster. And perhaps accounting for what they're able to multitask on while the AI is responding, though I personally suspect that the context switching there doesn't actually lend itself to much efficient multitasking, at least not between high-demand tasks, probably just things like getting a coffee.
But I think that would still be a largely academic study with little real-world value. I personally would want to compare devs in a large existing codebase that they're familiar with, and include code quality metrics and QA feedback as metrics. That's supposed to be the tradeoff, and so any result other than AI being as slow or slower (a result most people don't expect) doesn't help much, since it doesn't tell you the price you're paying. I expect that to be a difficult study, since I would expect different types of AI use to have vastly different impacts on code quality. For example I suspect that just using GitHub copilot auto complete would have virtually no impact, whereas vibecoding would produce irredeemable trash.
LisaLisaPrintJam@reddit
Just searched the comments for "no shit," and found nothing.
No shit.
Illustrious-Comfort1@reddit
Used AI for C coding in microcontroller applications (ATmega architecture). It helped a bit to get to a solution quickly, but I constantly had to reverse engineer the AI outputs (to get the idea behind the code itself). Point is, I could sense losing my ability to come up with ideas for solving problems.
Since then I used it only for debugging purposes.
rtt445@reddit
Had to make a serial data reader to toggle an LED based on a text string while ignoring non-ASCII characters. ChatGPT was amazing for getting to a working solution quickly. But I had to fine tune things, do a lot of testing, and fix one bad mistake that would cause a lockup on buffer overrun: it forgot to add an else statement to reset the buffer.
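To make the "missing else" concrete: a minimal sketch of that kind of buffer handling (hypothetical names; the original code isn't in the thread). Accumulate printable ASCII into a fixed-size buffer, and reset it on overrun; drop the `else` branch and the buffer stays stuck full forever.

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Sketch of the fix described above: keep printable ASCII, reset on overrun.
constexpr std::size_t kBufSize = 64;

class LineReader {
public:
    // Feed one byte; returns true when '\n' completes a line.
    bool feed(char c) {
        if (c == '\n') return true;             // line complete
        if (c < 0x20 || c > 0x7E) return false; // ignore non-ASCII / control bytes
        if (buf_.size() < kBufSize) {
            buf_.push_back(c);
        } else {
            buf_.clear(); // the forgotten "else": reset the buffer on overrun
        }
        return false;
    }
    // Hand back the accumulated line and clear the buffer.
    std::string take() {
        std::string s;
        s.swap(buf_);
        return s;
    }
private:
    std::string buf_;
};
```

Without the `clear()`, every byte after the 64th is silently dropped until reset, which is exactly the sort of lockup-style bug that only shows up under testing.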
AlternativeAd6851@reddit
Why would you debug problems? Just ask AI to do it...
No_Welcome_9032@reddit
I personally use AI just for repeated tasks I COULD do by myself and understand deeply. I don't rely on AI alone, because it's not good at coding alone. If I don't understand something, I don't use AI, I research it. AI is good only for things you can check by yourself.
snowdrone@reddit
It's great for dev managers. Their job is the same, manage coders that sometimes screw up spectacularly
bogdan2011@reddit
Programmers copy code from the internet anyway, AI just does the search faster.
squeezyflit@reddit
And that’s exactly what AI is — Google with the ability to output a structured response instead of a result listing.
bogdan2011@reddit
The term "intelligence" is a bit of an overstatement.
PiLLe1974@reddit
"Do you think the code is correct?"
"Probably."
But more seriously, when I read how LLMs work, it just reminded me of Markov chains or Bayesian networks: finding a high-probability chain of symbols. And if you tweak the "temperature", you get "more creative" outputs. :D
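For concreteness, the temperature knob works roughly like this (a generic sketch, not any particular model's implementation): the logits are divided by a temperature T before the softmax. T near 0 concentrates probability on the single most likely token; large T flattens the distribution, which is what reads as "more creative".

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Temperature-scaled softmax: probs[i] ∝ exp(logits[i] / T).
std::vector<double> softmax_with_temperature(const std::vector<double>& logits,
                                             double temperature) {
    // Subtracting the max logit is the usual numerical-stability trick.
    double maxl = *std::max_element(logits.begin(), logits.end());
    std::vector<double> probs(logits.size());
    double sum = 0.0;
    for (std::size_t i = 0; i < logits.size(); ++i) {
        probs[i] = std::exp((logits[i] - maxl) / temperature);
        sum += probs[i];
    }
    for (double& p : probs) p /= sum;
    return probs;
}
```

With logits {2, 1, 0}, T = 0.1 puts essentially all mass on the first token, while T = 100 is close to uniform over the three.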
xoriatis71@reddit
It’s mind boggling how people use AI to write code instead of asking the AI how the code works, especially people still learning the basics of programming. Do you even like coding?
PiLLe1974@reddit
I totally agree.
I use it even as a senior in this way.
Recently my team had to jump quickly into a new part of the code base. Having it explain how calls or messages flow (frontend/backend), for example, is really helpful.
One can also avoid burning those tokens. If I see an API and wonder what its definition is in detail, I look at the docs. I want to know a bit more about it.
LeaveAlert1771@reddit
Software engineering is not dead. At least in my opinion. You have to be able to properly specify what you want. If you don't, it amounts to: I want something, AI gives me something. Most likely nonsense, because the assignment is vague. And the developer also has to know what code the AI should deliver. Without that, you might end up with undebuggable code, and with code that the AI recommends you refactor in the next session.
MrSqueezles@reddit
Yet another sensationalist reddit post. The study isn't about "AI bad" and doesn't reach that conclusion.
Reddit is starting to resemble Facebook.
Old_Stay_4472@reddit
Programming is a skill that needs to be practiced as much as possible to get better at it. Leaning on AI only stalls an individual's growth while (maybe) delivering some ROI.
catecholaminergic@reddit
If I want to learn to play the piano, I won't have a robot play the piano. I'll have it teach me how to play.
HommeMusical@reddit
You know, in music when you describe someone's playing as "robotic", this is an insult.
Decker108@reddit
Even if it's Kraftwerk playing? :)
HommeMusical@reddit
Kraftwerk gets a big, big pass!
CrabPotential7637@reddit
This guy smarts
visualdescript@reddit
I think this is a bad analogy, if you are relying on AI to teach you how to build a sound software solution, then you're in trouble.
Personally I think it's more akin to, if I'm going to use a robot to help me build a house, I'm going to get it to hammer in the 10,000 nails, not work on the architectural and structural design.
NewPhoneNewSubs@reddit
Do you want to play the piano, though?
Maybe you want to listen to piano music. Maybe you want someone else to think you play piano. Maybe you want to compose songs.
Excellent-Refuse4883@reddit
If you want to compose, you should still learn piano.
catecholaminergic@reddit
Honestly like I know we're being metaphorical, but to be literal, learning to play an instrument really opened up music composition for me. I compose a lot more now than before.
alendit@reddit
It's great that it worked for you, but you do agree that it's not a prerequisite in our day and age, right? The fact that you can decouple music creation from the motor skills required to operate an instrument opens this activity to vastly more people.
And to take the metaphor further: even if we grant that learning, say, some piano is an efficient way to improve your composition skills, you quickly hit diminishing returns. The life of a professional performer is constant drilling and exercise. If your goal is to compose, you don't need that level of mechanical perfection.
janniesminecraft@reddit
If you want the music out of your head and into the world, learning an instrument is probably a prerequisite. It's not strictly necessary, but I certainly don't think I could rewire my brain the way learning an instrument did without actually learning one.
I say this as someone who wrote music for years before learning an instrument, and it felt like swimming in the middle of the ocean with no map. Learning an instrument was like getting a boat.
I don't like this devaluing of cognition. You should learn shit, if just for your own sake.
alendit@reddit
I feel like it's the same argument as "you should learn long division to be a mathematician" or "you should learn fast typing to be a programmer". Yes, an instrument sometimes feels like an extension of your thoughts, a way to express yourself. But that's just because it's what one's used to. I don't believe you couldn't use Strudel or GarageBand to do the same. If doing the stuff the hard way is what makes you happy: absolutely, go for it. But that should not be used as a gatekeeping argument to claim that someone's music is worth less just because they didn't spend their formative years playing scales.
EveryQuantityEver@reddit
This attitude right here is why I can’t stand AI boosters. They have this feeling of intense entitlement that they should be instantly good at any high level skill without having to learn the basics.
alendit@reddit
Literal boomer attitude: if it was hard for me, it should be hard for everyone.
neherak@reddit
Do you think there are any professional mathematicians who don't know how to do long division?
alendit@reddit
Don't know how to do it? No. Bad at it compared to 5th-graders? Probably most of the ones I know. One is almost comically bad at mental math.
Choice_Figure6893@reddit
Oof bud that's a horrible take
alendit@reddit
Because you disagree with the analogy or because I'm going against the "AI bad" circlejerk here?
I'm open to hearing your disagreement on the former. As for the latter: I'm sorry that you (and I) will lose our cushy jobs, but that's the way it goes.
youcantbaneveryacc@reddit
your llm missed the mark here
disperso@reddit
This is an apples-to-oranges comparison. If you want to compose, the amount of piano playing you need to know is about 10-25% of what a piano player needs. After all, composers don't know how to play every instrument.
The piano is an exception in that it's super useful to visualize chords, intervals, etc., so much so that most music theory teaching refers to a piano keyboard quite often. But it just assumes that you need to know how to "read the keyboard", rarely play it (talking about just music theory now).
But back to code.
I've worked as a consultant, and the amount of incredibly awful code I've seen is much worse than the LLM slop that I've also seen.
LLMs are pretty bad, but in my experience their output is above the average I had the "pleasure" to work on professionally (it's much worse than a proper open source project that I've also worked on, but that was in my spare time).
I don't claim my experience to be the universal truth. I'm actually very sure there are fields where this is the very opposite, and I have no idea what the average is. But I think there is a space for using LLMs that it's not going to go away.
gmeluski@reddit
if you want to compose songs, you know what helps...
AdreKiseque@reddit
Yeah, this is an important aspect.
I, personally, want to play the piano. But I think a lot of people (companies) are just focused on getting some cheap tunes out.
autoencoder@reddit
What is the purpose of companies writing software? Is it not to make money? Employees learning more than they need is costly.
SnooMacarons9618@reddit
I bought my wife a really good electric piano. She prefers playing that to her 'real' piano (so much so we got rid of the old one). She plays a lot.
I love the new one because I can upload a piano piece and get it to play for me.
My wife plays the piano, I play with the piano. One requires talent and discipline, and it's not the one I do.
RobTheThrone@reddit
What piano is it?
SnooMacarons9618@reddit
Replying again - Korg Concert C-720. I don't think they make it anymore, I just had a quick look at their website, and I couldn't tell you what the modern equivalent is - they seem to have changed their naming drastically. I think it looks most like the G1B Air.
I suspect any modern electric piano from a 'known' brand is probably pretty damn good.
knightofren_@reddit
I think “piano” might be an alias for vibrator
aevitas@reddit
That would be called a "euphemism", not an alias.
SnooMacarons9618@reddit
I think it is some kind of yamaha. I actually got it for her about 15 years ago. Later I'll check and try to remember to post here.
From memory it was under £1,000, but not by much. It *sounds* like a piano (of various types), with different sounds depending on how hard you hammer the keys; it has pedals, that kind of thing. I suspect a similar thing could be had for a lot cheaper now.
She loves that she can play with headphones in while practising so she doesn't disturb me (no matter how much I tell her she could just lean on the keys, and I'd think it was good), she can output music or (I think) midi to a computer, and she can switch from sounding like a 'normal' upright piano to a grand, with the push of a button.
It doesn't have a million adjustments like you'd see on a keyboard, but you can play about with various things.
PoL0@reddit
which is apparently ok, until you want to debug why those cheap tunes don't work
No_Atmosphere8146@reddit
Maybe you like having a pianist's salary because they're one of the few industries still paying well in this late capitalist hellhole.
diegoasecas@reddit
lol the detachment from reality is complete
No_Atmosphere8146@reddit
As it seems to have gone over your head, we're using a pianist as an analogy for developer here.
diegoasecas@reddit
and that analogy is a bad one because the jobs are nothing alike, for a myriad of reasons. the first one being 95% of pianists don't have salaries.
BogdanPradatu@reddit
I want to compose songs. Can I use AI to do it? Probably. Would it be better if I learned a lot of music theory and also learn to play the piano? Probably.
I actually have no idea which is better.
CandidPiglet9061@reddit
In addition to being a software engineer, I’m a composer and songwriter.
The nuances of piano playing and piano music are inextricably linked to the physicality of the instrument. You cannot effectively compose playable piano music without yourself being proficient at the instrument.
In education there’s a concept called “productive struggle”. AI eliminates this part of learning, and so while the final deliverables seem comparable (they’re often not) you lose the knowledge you gained from the process of writing it
MornwindShoma@reddit
People want to play the piano, and draw pictures, and do all sorts of things that give them emotions and satisfaction.
lhfvii@reddit
Yes even learn languages and by doing so understanding other cultures, ways of thinking and even bonding with people in the process. Crazy, right?
Copemaxxed_Goycel@reddit
Yeah that's the point. I don't want to play the piano. I want someone to pay for the music the piano makes. And if it can make more music and therefore more money for me if I'm not playing myself, all the better. I hate playing the piano anyway.
Chris_Codes@reddit
Indeed. Or maybe you play the guitar brilliantly and you want to have a piano to accompany you on a couple of songs you’ve written. All those folks who see it as some moral failing that someone would use AI to fill this role are missing the point. The point is not that you are putting some human pianist out of work, it’s that you can focus on recording your guitar centric music quickly while some pianist out there can have an AI guitar accompany them so they can focus on their piano-centric work. More music gets made. Whether or not it lacks the “soul” of all human band is not the question, it’s about productivity. Most software is not a masterpiece, it’s a stepping stone on the way to what might become something great.
b3iAAoLZOH9Y265cujFh@reddit
Then you buy a recording of somebody who can play the piano or hire a person who can to perform your piece. What you don't do either way is rely on a muddled synthesis derived from a statistical amalgamation of the performances of everybody who've ever played anything on a piano.
Uraniu@reddit
Or maybe you're someone who wants a live piano in their house and if you could just stop paying the piano player...
catecholaminergic@reddit
Hey I mean if a wind up toy that plays top 40s is what gets the job done great.
I think there are a lot of situations that call for more.
cosmopoof@reddit
People don't learn to play the piano because they want to listen to piano music.
Pawtuckaway@reddit
Now imagine the robot doesn't really know how to play the piano and just copies some things it read online that may or may not be correct.
You sort of learn the piano but end up with poor fundamentals and some really incorrect music theory.
eyebrows360@reddit
Cometh the retort from the fanboys "But some piano teachers are bad too!!!!1"
And like... sure? But the point of "AI" is supposed to be that it isn't as variable as humans. It's supposed to be the reference. If it's sometimes bad and sometimes good then we've achieved nothing.
Soft_Walrus_3605@reddit
Humans range from ignorant to genius naturally, but with each new model the lower bound for AI only goes up.
There's still variance, but it's centered around a higher midpoint
catecholaminergic@reddit
Seriously. I've seen some bad vibecoded PRs.
At the end of the day, LLMs are search tech. It's best to use them like that.
Pawtuckaway@reddit
I'm saying using an LLM to teach is just as bad as using it to code for you.
If you are learning something new then you don't know if what it is telling you is correct or not. An LLM is only useful for things you already know and could do yourself but perhaps it can do faster and then you can verify with your own experience/knowledge.
LBPPlayer7@reddit
even then when you know it's not useful
the few times i tried it as an experiment it gave me terrible answers, especially when it comes to shaders
aanzeijar@reddit
Funny enough, over in r/piano learning with computer programs is usually discouraged as they can't show you posture and finger technique, can't judge musicality and suck beyond being glorified Guitar Hero games.
LowB0b@reddit
yeah but what's driving the hype train around vibe coding is that it's easy money. So it would rather be "If I can earn thousands by having a robot playing the piano, starting now, should I spend the next X years mastering playing the piano or just have the robot play the piano and (hopefully) rake in cash?"
catecholaminergic@reddit
If it's easy money why is WinRar more profitable than OpenAI?
LowB0b@reddit
well when I'm talking about vibe coding it's more the users of anthropic's and openai's tools rather than openai or anthropic themselves \^\^' but I get your point.
tkodri@reddit
Yea, that's a common argument I don't quite understand. Your job is not playing the piano and never has been. Your job is to produce value, usually in the shape of piano music. I'm not a hardcore AI believer or anything, but the technology is super valuable and has definitely provided me with a productivity boost, granted the time invested only started having positive returns after the release of Opus 4.5.
josefx@reddit
I have seen people go from productive members of society to AI-controlled copy-paste drones. I had to review pull requests that made no sense; I had to review pull requests that were clearly wrong, and when I explained what was wrong I was countered with more AI generated garbage. I see people stuck trying to fix complex issues because they refuse to even acknowledge the possibility that their omnipotent AI masters could be wrong.
60days@reddit
Quite; there are realms where craft is what’s being bought & sold - a beautiful table created by a master carpenter - but a lot of us are not really registering that we work in an Ikea factory, and measured against those different values.
dsartori@reddit
Yes. The problem with all of these discussions is how much your experience of AI assisted coding is contingent on your own context.
HommeMusical@reddit
Thing is, even if I work in an Ikea factory, I want what I make to work. I don't want the furniture to fall apart, even if your kid jumps on it. I want the finish to be smooth and I want the parts to fit together nicely.
AI is not providing any of that.
catecholaminergic@reddit
We're toolmakers. It's the fundamental human activity that's allowed us to get from the blue one to the grey one. Of course at the end of the day toolmaking as a profession is a business venture, but in terms of value creation, I find knowing how to do things myself to be more productive than relying exclusively on crutches.
So yes, it is. Our job is to know how to do things. I use Claude all the time, but I'm not pasting / cursoring into production.
mycall@reddit
What if you played the piano for 40 years and are a master and are bored so you want to try something new? Let the AI play it and correct it along the way. Fun again and faster.
VampireDerek@reddit
And then you will know how to play piano but your technique will be flawed
Patient-Ordinary-359@reddit
Not really a good analogy though:
piano playing for most people --> an enjoyable unpaid hobby --> self defeating if you cheat. Anyway, what's a recording if not having someone else play the piano for you?
coding for most people --> a job --> if you can generate productivity gains no matter how, you should try to do so.
The jury is out on AI coding, maybe it will generate those gains, maybe it won't, maybe in some cases yes, or no, but the two aren't comparable. Just because you wouldn’t pay a robot to play the piano for you doesn’t mean you should dismiss coding assistants prima facie.
delicious_fanta@reddit
Isn’t the idea that you want to build the piano, not play it, and that you want your robot to build it for you while you provide the design and overall vision?
zenchess@reddit
That's bullshit. I can implement a feature in claude code in 1 minute with the right prompt. It writes 500 lines of code and they work. You can't tell me a developer could do that.
OwlingBishop@reddit
LLMs absolutely fail at coding anything a little more complex than boilerplate grunt work web development or basic tech plumbing...
If you're not convinced, try having an LLM code some concurrent C++.
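For concreteness, "concurrent C++" here can mean something as small as a bounded producer/consumer queue, where the classic pitfalls live (lost wakeups, checking the predicate outside the lock). A generic sketch, not code from the thread:

```cpp
#include <cassert>
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>
#include <thread>
#include <utility>

// Minimal thread-safe bounded queue. The predicate-taking wait() overload
// re-checks the condition under the lock, which guards against spurious wakeups.
template <typename T>
class BoundedQueue {
public:
    explicit BoundedQueue(std::size_t cap) : cap_(cap) {}

    void push(T v) {
        std::unique_lock<std::mutex> lk(m_);
        not_full_.wait(lk, [&] { return q_.size() < cap_; });
        q_.push(std::move(v));
        not_empty_.notify_one();
    }

    T pop() {
        std::unique_lock<std::mutex> lk(m_);
        not_empty_.wait(lk, [&] { return !q_.empty(); });
        T v = std::move(q_.front());
        q_.pop();
        not_full_.notify_one();
        return v;
    }

private:
    std::mutex m_;
    std::condition_variable not_full_, not_empty_;
    std::queue<T> q_;
    std::size_t cap_;
};
```

Even this toy version is easy to get subtly wrong (e.g. waiting without the predicate, or notifying before releasing state consistency), which is the point of the challenge.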
zenchess@reddit
Challenge accepted. Btw saying "LLM's" - is not very accurate...there are wildly different capabilities of different LLM's. What exactly would make the challenge difficult?
DetectiveOwn6606@reddit
Have you vibe coded the concurrent c++ yet?
zenchess@reddit
As soon as someone tells me something to program, sure. "Concurrent c++" isn't very specific
OwlingBishop@reddit
The fact you have to ask.
zenchess@reddit
Btw I programmed Zig Smalltalk here: https://www.reddit.com/r/smalltalk/comments/1pq1n88/zig_smalltalk_mit_licensed_64_bit_smalltalk/
100% vibe coded with Claude Code. Does this qualify as "basic tech plumbing"?
Cordoro@reddit
Read the paper. It agrees with you. It also says that if the AI used the library for you, then you probably didn’t learn the library yourself. So the question then becomes, how important is it to learn the library?
digitizedeagle@reddit
I had these insights, but I just didn't know how to articulate them at the time.
It's as if you didn't write the code, and you don't know if it's well-written in the first place. So programmers become testers of sorts.
This is, of course, especially true in larger codebases. On the other side, you'll also find that AI is especially good for one-offs or code that the world won't see.
Bakoro@reddit
If you ask the wrong questions you don't get the answers you want, and if you measure the wrong thing without recognizing it, you end up with wrong answers while screaming "science".
AI dependency can be a real problem, but it's absolutely not fair to make broad statements when, time after time, the studies show clear groupings in how people are using AI. There are many people who are having the AI do their thinking for them and trying to have the AI do their whole job.
Other people are using it as a supplement.
I use AI agents heavily for shit that otherwise wouldn't get done. Efficiency doesn't even come into it; without AI, the task would not have had time allocated to it at all.
Also, if reading code is somehow a mark against efficiency, and "reading the code gives you a flimsy understanding of the codebase", then that undermines the entire code review process. Seriously, consider the implications: the fact that the code is AI generated is irrelevant; the code could be from a human, or an old-school deterministic code generator, or aliens. If you're working on a team, you have to read code that you didn't write.
TheDevilsAdvokaat@reddit
I tried it for a little while and found it awful. So much crap added. So verbose.
valarauca14@reddit
Where are all the accounts accusing Anthropic of using the wrong model and/or incorrect MCP setup?
Simcurious@reddit
Anyone who's seriously used Claude Code knows this is false
ImDonaldDunn@reddit
Exactly. Projects that previously took weeks or months can now be done in an evening. And with the advent of spec driven agent development, this is only going to accelerate.
Stamboolie@reddit
What is the project? I've heard this claim but never actually seen one, except a one-page "make a login screen" demo or some such fluff.
TrontRaznik@reddit
Most of the building blocks even for new software systems already exist. I.e. when was the last time a pattern was added to Design Patterns? Probably a couple decades ago?
When I have a complex feature, I don't rely on Claude figuring out how to do it, I tell Claude how to do it. I tell it the same way a systems architect would pass a design to a team to implement.
And it does it 95% right. And generally speaking the 5% wrong parts were because I neglected to think of something. But that's no different than what would happen passing my designs off to an intermediate dev. He or she also might not catch my mistakes.
My main project is a mission critical fintech saas product and AI has been nothing but a benefit
Stamboolie@reddit
yes, I do the same, but the claim was
I call BS
TrontRaznik@reddit
I had this 12 year old terribly written vanilla php site I wrote to show off my design skills for my portfolio back in the day. 7 pages with all the css/HTML/php/js inline, that sort of thing.
A friend of mine is running for a local office, and it turns out this site was a parody "elect Maynard James Keenan" site, and it looks super pro so I wanted to give it to her in a CMS.
Converting it all into modern code with a CMS would have been at least 4 or 5 days.
But I wrote out a detailed prompt in about an hour, and set Claude into planning mode to convert it into a Laravel site and build a CMS to manage the content.
2 hours later it was done. Full admin system styled by tailwind, standard authentication, all the shit code was extracted into modular code and rewritten to read better than I could do 12 years ago, etc. And the site looks identical.
Then I tell it to write me a deployment script. 15 minutes.
Then my friend is populating content.
Not even half a day. Incredible.
Farados55@reddit
I think the most interesting question is: is this new or novel technology, or are you making a mobile task-list app? Because agents are great at spitting out one and maybe not so great at spitting out the other. Can you create a state-of-the-art thing in an evening? And according to this article, can you understand what the state of the art is if your agent makes it all for you?
ImDonaldDunn@reddit
One of the projects I am engineering is a first of its kind. You obviously have to understand how to engineer systems to get quality results out of a coding agent, though.
Farados55@reddit
And I guess that’s another point of the article. At what point do you keep prompting an agent to “fix make better” and take over the coding?
ImDonaldDunn@reddit
The goal is to write detailed prompts and context so that it doesn't write bad code.
Farados55@reddit
But it does sometimes, right? Even if you give it great prompts? So do you reprompt to make it get it right or do you fix the code?
ImDonaldDunn@reddit
I fix the code.
Farados55@reddit
That’s interesting.
accidentallyobsolete@reddit
“Here is a randomized controlled trial of actual measured task completion time”
vs “but my feeling that…”
This is why we have studies. What you feel is misleading due to so many factors
TrontRaznik@reddit
My feeling is that I've finished all my tickets for the week by Wednesday every week for three months, and then Thursday and Friday I just push the branches for review as though I just finished them so that it doesn't look like my workload is too light.
In that time not a single PR has been rejected, because my instructions go into heavy detail on matching both the project's and my personal coding style, and because my humanities background has taught me to be a detailed writer, so I can write very thorough prompts.
If you don't have a humanities background, that last sentence is probably meaningless to you. But when I compare my prompts and results to other devs, it's no wonder they are AI skeptics who barely trust it for small features.
TheBoringDev@reddit
> In that time not a single pr has been rejected
I've got a coworker like this. We all talk about him behind his back as going to be unemployable once the bubble pops. His code isn't wrong enough to be rejected, but it's certainly not doing the code-base any favors and management seems to like the slop, so who are we to argue?
TrontRaznik@reddit
Dude you don't know shit about my skills as an engineer and you're projecting nonsense on me because you have a bias against AI. You're literally just slotting me into your co-worker's place in your head because I...don't get PRs rejected just like him?
No one is talking shit about my code behind my back or in front of it. I've been doing this a long time and am one of my company's main go-to guys when it comes to software architecture.
The people who are going to be guaranteed unemployable are the people who can't figure out how to properly get their AI to write quality code. Because senior engineers like me who can do it will outperform you every time. Which is exactly why I didn't get laid off when my company had a round last year.
But keep living in denial of what's happening around you and then be surprised when this industry is fundamentally changed over the next couple years.
EveryQuantityEver@reddit
So we’re supposed to take your anecdote as gospel, but their anecdote clearly is invalid?
TrontRaznik@reddit
They didn't give an anecdote, they clearly tried to use their colleague as an analogy for me, with the implication being that I am like that. Their colleague very well might exist and be exactly like they describe, but that has no relevance to my situation since I am not like that.
TheBoringDev@reddit
Oh definitely, your output might be actually amazing, I don’t know you. I’m just saying I know someone who uses not having prs rejected as a measure of success and it’s a really bad metric. There’s an episode of kitchen nightmares where a chef defends their food by saying that no one sends their food back, and chef Ramsey points out that most people won’t bother sending it back, they just won’t go to that restaurant again.
nacholicious@reddit
Exactly. Even if people think the METR study had flaws, it still shows that people estimate that AI makes them significantly more productive even when it makes them significantly less productive.
ImDonaldDunn@reddit
Studies often have methodological mistakes and many cannot be reproduced. One study is not the end all be all truth.
lahwran_@reddit
I use claude code and I can easily see how it could be true.
TomatoManTM@reddit
Coding with AI makes me faster and dumber.
Have to stop.
riv3rtrip@reddit
I believe the second one-- I find it almost self-evident that directing someone to do a thing for you doesn't make you better at actually doing the thing-- but as of Claude Opus 4.5 I wouldn't believe the first one. Could you actually link the paper instead of providing editorial? How old is the paper?
The AI stuff is literally so much better today than it was even in October. I say this as someone who barely used AI in October because I did find it personally to be low productivity. Game's changed quite a bit now.
yupidup@reddit
The idea that you can wield these tools without specific know-how and skill is the stupidest assumption. It's like handing out cars without lessons and, when most people crash, concluding there's no speed gain.
LeanOpsTech@reddit
AI helps with boilerplate, but leaning on it too much kills your understanding, and debugging becomes a mess. The speed boost feels real at first, but it doesn’t last.
hemingward@reddit
Literally every study has shown this. I’ve recently cut my Claude code subscription and am using codex minimally (mostly for brainstorming). I want to just code, find that joy again, and not watch my skills rust. It’s amazing how quickly they degraded.
drteq@reddit
Been coding since I was 11, CTO for 12 years. I've made cooler shit with Claude Code than I've ever made before. Feels to me like a power tool that lets me bring dreams to life, but you definitely have to know what you want to build.
r_acrimonger@reddit
AI is great for linting and digging up rarely used syntax
Hungry_Importance918@reddit
Tbh I already can’t imagine working without it. It’s basically replaced Google for me. But yeah I also feel like my actual skills haven’t really improved much since relying on AI.
gc3@reddit
This contradicts my personal experience
NCKBLZ@reddit
I think it depends on the task, for some it really speeds up, for others it just troubles you
Liquid_Magic@reddit
Remember how in software development, when you add more developers to a project, it starts taking longer to complete instead of less time?
Remember that?
Well this is that. Turns out communication about what to program is like… kinda the most critical part. So explaining something to another person or to an AI basically takes longer than just doing it yourself.
Once you learn the basics, the hard part has ALMOST nothing to do with programming and everything to do with understanding the problem and figuring out how to model the real-life processes involved in creating the solution.
hiscapness@reddit
AI without domain knowledge is like trying to fix your car with a set of rusty steak knives
builderbycuriosity@reddit
These days, managers push teams to use AI coding tools and ship features in just one or two days. AI helps generate code faster, but they don’t see the mess inside it.
Danwoo0118@reddit
Used it for a dependency upgrade that involved a massive amount of mock test data updates. That process would've taken me days just to get our tests to pass, but Claude was able to do it in hours without any headaches.
AI dependency is dangerous, but it sure is nice to not have to deal with tedious work like this and focus my time elsewhere.
Pharisaeus@reddit
I don't think this is the case, but there is a grain of truth there. LLMs have basically turned into a "high-level programming language", just one with an unpredictable compiler. It's what developers have been doing for many years already: make highly expressive programming languages where you write little code and get a lot of functionality. A one-liner in Python could be hundreds of lines of C or thousands of lines of assembly. This is just another step: a one-liner prompt could be hundreds of lines of Python. With the caveat that this "compiler" is not deterministic and often generates incorrect code...
As for the detail level of prompts - that's also nothing new. Anyone who has been programming for more than 10-15 years has seen this. We've been here before. What vibe coders re-discovered as "LLM spec driven development" is nothing more than what used to be called "Model Driven Development" - that was the idea that non-developers could simply draw UML diagrams and generate software from that. And there are still tools that actually let you do that! The twist? To get what you really wanted the diagrams would have to be as detailed as the code would be, which essentially turned this into a "graphical programming language" and those non-developers became developers of this weird language. That's exactly what we see now with LLMs - people simply became "programmers" of this weird prompt programming language. Unfortunately as far as programming languages go, it's a very bad one...
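To make the expressiveness ladder above concrete, here is an illustrative one-liner (the example text and choice of task are invented, not from the comment): a single expression doing what would take dozens of lines of C (tokenizing, hashing, counting, sorting).

```python
# Illustrative sketch of the expressiveness ladder described above:
# one Python expression standing in for a lot of lower-level code.
from collections import Counter

text = "the quick brown fox jumps over the lazy dog the fox"

# One expression: split into words, count them, take the top two.
top = Counter(text.split()).most_common(2)
print(top)  # [('the', 3), ('fox', 2)]
```

The LLM step is the same move one rung higher: a one-line prompt expands into many lines of Python, only without the determinism.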
dystopiadattopia@reddit
In other news, water is wet
FooBarBuzzBoom@reddit
Share it on r/accelerate or other shitty subreddits
NotARealDeveloper@reddit
Did anyone here read the abstract at least?
They clearly state it's only a problem if the devs didn't know the tech stack and let AI handle everything without understanding it themselves. So basically what we know already: Don't rely on AI if you are not familiar with the topic. That's not a programming specific issue.
Pharisaeus@reddit
Sorry but if someone tells you they can thoroughly review and understand thousands of lines of slop produced by their agents every day, they are lying to you.
seekinglambda@reddit
Apples vs pears
The reason they see no speedup vs people claiming a big speedup is that the study used GPT-4o in a chat interface, while the people reporting productivity gains use Opus 4.5 in Claude Code or GPT-5.2 in Codex. GPT-4o in chat is basically an abacus in comparison.
Personally I felt I had 30-50% productivity gain in some tasks back then, mainly greenfield work, and negative in some. Now I easily have 2-3x productivity gain in majority of tasks.
NeedsCSJobAdvice@reddit
Agree with you, opus 4.5 in Claude Code has been extremely powerful and increased my productivity. I don’t know why you and I are being downvoted. This place is delusional.
the_koom_machine@reddit
I had to actively search for “4o” in the comments because many of the supposedly highly productive people here don’t bother to read the paper before making statements about it. Frankly, I’m surprised the 4o group in this study didn’t turn out to be even less productive, given that this model was never praised for coding performance, even in its own time.
Ironically, this mass of naive, gullible people here probably should fear replacement by AI as they fail at even basic critical thinking skills that anyone seriously seeking employment in CS/SWE in today’s market is expected to have.
levodelellis@reddit
You'd figure they'd do the research before writing Claude Code. "Writing" may be too strong a word. Everyone tells me it flickers.
eluusive@reddit
I don't believe this. But, I did just start using it. It's like a 1000x productivity boost for me when I learned to use it reasonably.
itb206@reddit
"We conduct randomized experiments to study how developers gained mastery of a new asynchronous programming library with and without the assistance of AI. We find that AI use impairs conceptual understanding, code reading, and debugging abilities, without delivering significant efficiency gains on average. Participants who fully delegated coding tasks showed some productivity improvements, but at the cost of learning the library."
This is about learning a new library not coding in general.
Ok_Blacksmith_1988@reddit
There’s irony in you reading a small chunk of the paper and immediately coming back here with your own half-formed conclusion on the basis of just the abstract; it somehow reads as condescending and hypocritical.
Even though it wasn’t the point of the paper, it does address coding performance in addition to learning the library, and if you look at the task time, you can see how much overlap there is; since it’s only a 35 minute task, taking time to write out the prompt for the ai to solve the problem is actually significant. Which the authors do talk about, in the paper. So if you’re coming for the points that OP is pulling out then you ought to say something like ‘debugging was a non-ai assisted task, let’s hand over all our cognitive processes to the ai and then there’s no downside’ or ‘the study wasn’t built to measure coding performance and therefore the task completion time is misleading because participants weren’t trying to write code as quickly as possible, they were also trying to understand the library, which you can see in the follow-up prompts some participants asked the ai, and in the way that some retyped the ai output instead of copy-pasting, which represented a significant slowdown’ or ‘that’s only true of some subtypes of ai users; but because that’s not what the study was examining, we can’t see all the data broken out like that’ or ‘n=51 why are we drawing any conclusions from this toy problem and contrived setup’ or ‘GPT 4o-mini? What are we, cavemen? Opus-4.5 is the only ai’; instead of pretending that this wasn’t a metric the study was measuring.
itb206@reddit
It's about performance learning the library, dude, and doing a task with the new library. It's literally entirely about skill acquisition.
itb206@reddit
Contrary to our initial hypothesis, we did not observe a significant performance boost in task completion in our main study. While using AI improved the average completion time of the task, the improvement in efficiency was not significant in our study, despite the AI Assistant being able to generate the complete code solution when prompted. Our qualitative analysis reveals that our finding is largely due to the heterogeneity in how participants decide to use AI during the task. There is a group of participants who relied on AI to generate all the code and never asked conceptual questions or for explanations. This group finished much faster than the control group (19.5 minutes vs 23 minutes), but this group only accounted for around 20% of the participants in the treatment group. Other participants in the AI group who asked a large number of queries (e.g., 15 queries), spent a long time composing queries (e.g., 10 minutes), or asked for follow-up explanations, raised the average task completion time. These contrasting patterns of AI usage suggest that accomplishing a task with new knowledge or skills does not necessarily lead to the same productive gains as tasks that require only existing knowledge. [Emphasis mine.]
-------
So they explicitly acknowledge that it improves productivity on tasks where you already know what you're doing.
And to be very thorough: even within their own study, performance on the task with the new library differed from individual to individual. This should grind your gears: the people who just asked the AI to one-shot it DID complete the task faster, and it was the overall average that was slower for the entire group. So it says more about how you use the AI than about whether AI gives you a performance boost.
That's actually the part we should be worried about: people who quickly finish a task but have no fucking clue what the library actually does. That's the scarier conclusion imo.
Ok_Blacksmith_1988@reddit
Sorry, I was being harsh. I don’t actually disagree with you at all. Thank you for pasting the relevant part of the paper. I was just scrolling through all these comments by people who maybe read the abstract, and I found the first comment to be frustrating in that it didn’t directly deal with the counter-intuitive performance findings which the paper does include and discuss, and also didn’t directly contradict the original poster’s comments. Your last comment is a good breakdown
itb206@reddit
All good, appreciate the apology :D
It's a super contentious topic obviously and its hard to get a sense of what's what when everyone has an opinion
JustViktorio@reddit
Yes
Cordoro@reddit
This is going to feed the anti-AI echo chamber well. Haha. Nuance and details just don’t click bait well!
seriousgourmetshit@reddit
Yeah this is something I struggle a bit with as a developer with 5 yoe. Sometimes im tired or lazy and I over use it.
Currently I'm trying to only use it to plan my implementation design before I start coding. I'll explain the problem, what I need to achieve, any extra context, and then my initial thought on how to design a solution.
Then the AI will critique my design and improve upon it. Ill then have some more back and forth until I full understand the improvements and why they are better, and what was sub optimal about my approach. Then ill code it without any more help. This seems to work for me but im interested in other people's workflows.
21Rollie@reddit
On point 2:
I’ve been a senior for a while now, so I have plenty of experience from before AI, but over the last year I’ve been using it heavily for things I don’t want to do. Namely, writing tests. And I’ve noticed two things: 1) the AI has no frame of reference for all the requirements of the code, and thus for writing tests; its primary concern is writing ones that will pass. Sometimes the process of testing exposes bugs in your code, but the AI will then adjust the test to pass the bug case. And 2) I know that I am getting worse at writing and verifying tests on my own for lack of practice.
These are just issues in relatively insignificant avenues of use. I worry a lot for newer devs who I know are just writing things with AI and skipping over both comprehension of the codebase and the ability to troubleshoot. I’ve brought this up as a concern but of course, the execs hand wave away anything negative about AI. Idk what koolaid they’re all drinking, when they go out to conferences they must be getting wined and dined by OpenAI
revonrat@reddit
Dario is on record just in January saying that Anthropic will "solve" coding in the next 6 to 12 (I think) months.
CuTe_M0nitor@reddit
This says it all "Our findings suggest that AI-enhanced productivity is not a shortcut to competence". Who thinks that the code the AI wrote is your own?
ArkBirdFTW@reddit
There are large swathes of Anthropic devs who have confirmed that Claude Code writes 95% or even all of their code
TheBoringDev@reddit
Well yeah, they work for Anthropic. I'm sure Microsoft devs run windows.
ArkBirdFTW@reddit
They would’ve been saying this much earlier if it was just for advertisement. And Microsoft devs aren’t out here claiming VSCode Copilot is writing all their code.
f_djt_and_the_usa@reddit
This is the real reason I think. Prompting genuinely is much faster than by hand on initial create. But not for maintenance. Not only will you not be able to jump in and manually make changes without first spending the time to understand it yourself, which completely erodes any time gains from the initial create, you will not even be able to effectively update it with prompts because you don't understand it well enough. And long term you become unable to code
_lonegamedev@reddit
I guess it depends on the mindset. Personally I use it mostly as advanced search, and it is much faster than googling it (especially with current state of search engines). It still takes an engineer to use those tools efficiently.
leaveittobever@reddit
Same. I don't Google anything anymore. Claude's website will spit out all the relevant code without me having to click 20 links in Google's results and piece them all together. It's saved me a ton of time.
Dunge@reddit
More than 50% of the AI search results I receive contain invalid information. If you've completely replaced Google search with AI, you probably get bullshitted daily and don't realize it.
_lonegamedev@reddit
It depends on the subject. In some cases it is mostly right, in some cases it is mostly wrong. The less info it has, the more it hallucinates.
giant_albatrocity@reddit
I’m waiting for AI to help me find bugs in a huge pile of spaghetti code so I don’t have to. But I guess if it were good at that, AI wouldn’t produce huge piles of spaghetti code in the first place.
paxinfernum@reddit
Oh, boy. It's another round of reddit finds an AI paper that has a nuanced take on AI, and waves it around like a flag to justify their refusal to adapt to a changing world. You guys are starting to remind me of the people who constantly bitched about cellphones when they first came out or the guy who insists that everyone he meets hears about how he doesn't own a tv.
If you actually read the methodology instead of just the abstract, you’d see this paper isn't the "AI is useless" smoking gun you guys so desperately crave.
The study wasn't measuring senior devs churning out code they've written a thousand times and already understand. It was looking at novice workers trying to learn a completely new library from scratch.
The "no productivity gain" headline is misleading. The average speed was dragged down because some participants spent massive amounts of time (up to 30%) just composing queries and trying to get the prompt right. There was a group, the AI Delegators, who just let the model do the work. They finished the fastest.
The catch is that the pure delegators tanked the post-coding quiz scores (39%) because they bypassed the learning process entirely.
What the "AI kills coding" crowd is missing is that it is possible to use AI while still building competency. The paper explicitly identified "High-Scoring Interaction Patterns" that preserved skill formation. Participants who used the AI to ask conceptual questions (Why does this work?) instead of just asking for code generation scored 65% on the evaluation, higher than the delegators and comparable to the average of the no-AI group. People who asked for code and an explanation had higher learning outcomes than all but the top end of the non-AI group.
The Generation-Then-Comprehension group actually scored higher on comprehension (86% vs 65%) than the non-AI group, although they did take slightly longer. But the "humans are getting lazy from using AI" group didn't bother to read past the title and abstract of this study.
The breakdown is basically this:
The authors aren't saying, "don't use AI." They're saying don't use AI to bypass learning. It’s evidence of what anyone in education could tell you. Cognitive offloading kills learning. If you treat the AI as a magic answer box, you stunt your growth. If you treat it as a tutor and stay cognitively engaged, you get the utility without the brain rot.
This isn't the slam dunk you think it is. It's a critical analysis of what works and what doesn't. But this sub doesn't care about that. They just have an emotional need to feel validated in hating AI tools.
pfn0@reddit
This all sounded 100% obvious from the getgo. It's a tool, like any other.
ddollarsign@reddit
Why would Anthropic release a paper saying their product is useless?
Cordoro@reddit
It doesn’t say that.
ddollarsign@reddit
The post title says “AI assisted coding doesn't show efficiency gains and impairs developers abilities.”
Cordoro@reddit
The post title does. The paper doesn’t.
Actual__Wizard@reddit
Wow, so, that seems 100% consistent with my experience and about 85% consistent with the experience of everybody else.
TommyBearAUS@reddit
Do better please. Literally the first line: "AI assistance produces significant productivity gains across professional domains, particularly for novice workers."
klausness@reddit
“Developers who rely more on AI are worst at debugging.” Good thing that debugging is such a tiny fraction of most development jobs…
SlashedAsteroid@reddit
Sounds to me like you’ve never actually worked as one.
klausness@reddit
Perhaps you failed to detect the sarcasm in my comment. As a developer, I’m well aware of how much time is spent debugging. My point was that anything that (like AI) reduces one’s ability to debug is going to decrease overall productivity, no matter how much it increases productivity in other tasks.
SlashedAsteroid@reddit
I did not note the ellipses when reading your message my bad.
Working-Business-153@reddit
It's not so much technical debt as a kind of 'intellectual labour debt' or 'institutional knowledge debt', like a company with high turnover and zero documentation. Even if every individual is highly capable and meticulous (which AI, being lossy, is not), you eventually hit a point where nobody in the building knows how anything works, or even who to ask to find out, and then the first time something breaks you have institution-wide whack-a-mole.
e-arcade@reddit
The study highlights something important - the difference between AI that ANSWERS vs AI that TEACHES. When AI just gives you the solution, you skip the learning. But when AI helps you BUILD A PATH to understanding - showing connections, breaking down concepts, letting you explore at your own pace - that's fundamentally different. The issue isn't AI assistance, it's AI that removes the cognitive work instead of scaffolding it. There's a huge design space between "here's the answer" and "let me help you discover it yourself."
PinotRed@reddit
Justice
eronth@reddit
Did this study break out for people who used the copilot auto complete suggestions... thing? Sorry, I forget what it's called, but it's basically juiced up intellisense.
ShiitakeTheMushroom@reddit
In terms of efficiency gains, I'm skeptical that none were shown.
I've played around with Claude Code and git worktrees, with 5 instances open working on 5 separate tasks in parallel. It was able to complete 5 tasks, each of which would take someone a day, within an afternoon. As always, YMMV.
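For reference, the parallel-worktree setup described above can be sketched roughly like this (paths and branch names are invented for illustration; this is not the commenter's actual workflow):

```shell
# Rough sketch: one linked git worktree per task, so parallel agent
# sessions each get an isolated checkout of the same repository.
set -e
root=$(mktemp -d)                      # scratch area for the demo
git init -q "$root/main"
cd "$root/main"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# Each task gets its own branch and its own working directory.
for task in task-1 task-2 task-3; do
    git worktree add -q -b "$task" "$root/$task" HEAD
done

git worktree list    # main checkout plus the three task worktrees
```

Each directory can then host its own editor or agent session, and `git worktree remove <path>` cleans up once a task's branch is merged.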
mountainlifa@reddit
This seems a bit like pilots flying aircraft with automation. However the key difference is that they are required and protected by regulations and paid to train in simulators to maintain their skills. Not so for software engineers since there are no regulations and business people are working day and night to remove engineering cost, they don't care about ones skill set. Engineers are "forced" to use AI to meet ridiculous deadlines or find another job.
n00lp00dle@reddit
the argument that it creates efficiency gains also needs to be offset by the number of bugs or exploits the generated code introduces. havent seen any stats on that yet. im betting the number of cves will skyrocket over the next few years.
im not suggesting that handwritten code doesnt introduce bugs but ive seen some absolute crap being presented in code reviews that clearly came from the chatgpt free tier. so i reckon this is going to be a major issue in companies that have gone all in on gen ai and have generated code reviewed by copilot or whatever.
mka_@reddit
OP, this isn't an Anthropic paper, and the study actually found that AI hinders novices from acquiring the skills they are learning, rather than damaging the capabilities of experienced developers.
This isn't anything we didn't already know, there's just a paper to confirm it. But as always there's nuance, a lot of nuance. It can be a boon for some and bad for others no matter the skill level.
Dunge@reddit
This is published by people working at Anthropic
mka_@reddit
Well, the rest of my point stands.
IAmNotMyName@reddit
We don’t remember phone numbers anymore
Farados55@reddit
But you need to know how to use a phone. This is the worst analogy I’ve ever heard.
IAmNotMyName@reddit
Have you considered you are just a dim bulb, too dumb to see that relying upon technology causes skills to atrophy?
Maybe this is additional evidence. Just saying.
Farados55@reddit
No, that’s dumb too. Because if you are experienced enough then you notice when your skills start to diminish. Can you not? I haven’t studied math since college, my math is not so good anymore. We are not unconscious.
fire_in_the_theater@reddit
similarly treating developers as fungible assets that can be just moved around, hired, and fired at the whims of management is incredibly stupid and inefficient ...
but here we are
basically the tech industry sucks at producing and maintaining software, and markets are completely incapable mechanisms of selecting against that.
Supuhstar@reddit
Congratulations!! You've posted the 1,000,000th "actually AI tools don't enhance productivity" article to this subreddit!!
Click here to claim your free iBall DAViD!
2ciciban4you@reddit
skill issue
spondgbob@reddit
I think it depends. If I am just saying "hey, make a plot of these values against these values", it can do that pretty well. But an actual program? Nah, miss me with that.
robhanz@reddit
Not engaging deeply with material leads to not learning the material.
Yeah, that tracks.
I think AI is a better tool for senior devs than junior ones, for that reason.
Either_Pound1986@reddit
Anyone not getting benefits from ai has only themself to blame. It's not hard.
cf858@reddit
I feel like this is testing if people who drive cars lack an understanding of how a horse works.
remic_0726@reddit
With over forty years of coding experience, I use AI, but not gpt, more like Claude Haiku. And frankly, the gain is considerable, easily tenfold. On the other hand, gpt only does things that don't work. Note that you need to know how to write a prompt, so if you don't know exactly what you want, it's complicated. However, for finding problems, it's fantastic. This afternoon I was struggling with an Android dependency injection (I don't understand anything about that stuff), and it managed to find the problem for me. I was able to deploy without fully understanding it. Admittedly, I haven't progressed in Android on this point, but some things require a lot of time to grasp, and I no longer have the time, the memory, or even the desire to untangle a mess.
cpp_is_king@reddit
I’m not seeing anything linking this paper to Anthropic
funtimes-forall@reddit
This is like if the tobacco companies came out on their own and said smoking causes cancer.
nestcto@reddit
This just means people are mis-using it.
You have AI give you a proof-of-concept to reference. You have AI find another way to do the same thing. You have AI mock up your interface so you can test code. You have AI create the service container to run the actual code you need to execute. You have AI help you adapt to a new language from your preferred language. You never, ever, tell it to do the whole project and call it a day.
You have AI do the busy-work coding so you can focus on the hard stuff. Kinda like how you have the IDE write the interface code from the designer so you don't have to manually set control properties.
Don't blame the tool for people who choose not to think for themselves.
LavishnessOk5514@reddit
What’s anthropic’s play here? Why would they publish research that undermines their product?
Khandakerex@reddit
The play is OP is making click bait headlines and karma farming and everyone likes the headline so they upvote and OP feels like he accomplished something and contributed to society. The article doesn't support what he says unless you go past a very very shallow level of "oh if you're new to programming and use AI a lot you probably won't retain the concept."
yeetedandfleeted@reddit
There is no play. What do you think is more likely here, Redditors that can't read past a headline or Anthropic undermining their product?
InterestingFrame1982@reddit
That’s the critical point. The cognitive dissonance is astounding in here.
noscreenname@reddit
Very interesting, thanks for sharing. I really believe that AI assisted coding is a new capability that can allow us to be more productive, but maybe not in the way we think it would.
We haven't really learned to use it correctly yet... Haven't found the right methodology, patterns, best practices, pitfalls, etc.
We are trying to automate the craft of coding, and this is clearly failing to deliver end to end value. IMHO software creation, delivery, and maintenance is not just about coding, it's a much more complex activity, involving imagination, problem solving, social skills, business knowledge, empathy, and many others. AI reduces the nominal price of a line of code. That's a fact. In order to transform this into tangible value, Software Engineering must perform a major 'Shift Left' to transfer the cognitive capacity from syntax to semantics.
We are trying to make faster horses, because no one yet has thought of making a car.
Particular-Plane-984@reddit
So you could say that vibe coding brings about brain rot. I propose a new term:
vibe rot
Raknarg@reddit
I mean, I feel way more productive being able to write a comment describing the code block I want, have Copilot generate it, then verify it and make the changes I want to the structure and approach. And never having to write unit tests by hand. I'm not usually prompting to write code anyway; most of the time I'm leveraging autocomplete and having Copilot make predictions.
stormdelta@reddit
This is the big one I've noticed. I learn a lot less if I rely on it too much. This isn't always a problem - there are tedious tasks where there isn't much to be learned, language processing tasks rather than code, etc. But still.
Nyadnar17@reddit
I am honestly extremely upset by this push to use AI as a coding replacement, when all the benefits of AI that I and everyone I know personally have seen come from using it as a research assistant / less assholish Stack Overflow.
LLM technology is an amazing tool but everyone in c-suite seems bound and determined to use it for the one thing it actually fucking sucks at. Its so frustrating how much talent and potential we are wasting.
thewormbird@reddit
“This is partly because composing prompts and giving context to the LLM takes a lot of time”
It takes a lot of time to do this for myself as a human. If you actually give a shit about decomposing problems thoroughly, it should take a lot of time. Writing the code IS RARELY THE HARD PART. But we sure seem to want it to be.
I think the quiet dread people who enjoy writing code feel after using AI comes from doing the expensive knowledge work to gather context and then watching a robot apply it, sometimes to your expectations and other times making an absolute shit show of it.
I've seen those folks take one of two actions. One is to double down on AI and accept their role as a faux tech lead instructing an AI that is kind of a moron junior dev with short-term memory loss. Another is to reject and denounce AI completely and shit on every use of it.
Both are overcorrections. There is a balance where LLMs are just a tool and not a replacement for thinking. Over-reliance on any tool causes skill atrophy. Don’t believe me? Go build a twitter-clone using a plain text editor. You cannot use anything else except that plain text editor and the command line. I did this for a few days and was appalled. How much I rely on the creature comforts of my tools was eye opening. LLMs aren’t any different in this regard.
Ordinary-Sell2144@reddit
The interesting part isn't the "no speedup" finding - it's that developers using AI wrote code that was harder to maintain long-term.
Makes sense when you think about it. AI generates working code fast, but understanding why it works takes the same effort either way. When you write it yourself, you understand it by default.
Speed of writing was never the bottleneck. Understanding the problem was.
_Lick-My-Love-Pump_@reddit
Literally the opening sentence in the very article you posted:
"AI assistance produces significant productivity gains across professional domains, particularly for novice workers."
I'll take "how to cherry-pick" for $2000, Alex
SteroidSandwich@reddit
The most I'll use is intellisense saying "reorder this for better efficiency" or "You can do this shortcut." I can't imagine having someone else write my code for me
StepIntoTheCylinder@reddit
Luminaries don't know what's going to happen. The way AI works isn't even known.
Futurists are the wrongest people ever. It just takes so long to find out that nobody goes back and calls them on it, and by then they've already cashed out. Futurism is a grift on the gullible and easily awed.
Dimillian@reddit
Programming is dead, and we simply need to train and maintain different skills. Why is it so hard to understand here?
SideQuest2026@reddit
I have been able to show some efficiency gains, but it does have diminishing returns once a project reaches a certain size. I think what needs to improve is the context rot.
diegoasecas@reddit
aahh another day at jobless student land
Trick-Interaction396@reddit
CEO: Cool. Anyways we are doubling down on AI and doing layoffs. If anything breaks my consultant buddy will fix it.
JustViktorio@reddit
This post is a test of who can read the source and who doesn't
AlSweigart@reddit
I'm not sure I believe this paper. They probably had AI write it.
XWasTheProblem@reddit
Maybe it's not helping, but at least it's making things actively worse.
We live in wondrous times indeed.
CanaryEmbassy@reddit
Two extremes here, and the details get lost: it depends completely on the person. I have been coding for 25 years. If I have an agent that is aware of my schema, it can write queries way faster than I can, and I can review them quickly because I have the experience to. Younger folks need to be careful; this is where they can suffer: AI can write code better than they can, and they do not have the experience to review and adjust it. I do fear where our overall skills will be in 10 years, because just as young folks glued to their phones have horrible social skills, that is where folks will be professionally. Maybe that is job security for me, but then again, if these agents quadruple their performance, we all may be dust in the wind.
NeedsCSJobAdvice@reddit
Am I the only one here that is getting more productive? Have been building software for 13 years and I find AI helps in several areas. Don’t get me wrong, I love writing code and figuring out difficult problems. I do feel AI can write boilerplate very well and especially if you take a TDD approach. AI + MCP has also helped me write better user stories for my team. Just my experience so far.
Cordoro@reddit
The participants in the paper also gained productivity. The headline is misleading
kennystetson@reddit
I agree except for the "no significant speedup in development time". The speed up in development time in big projects is huge
Cordoro@reddit
Read the paper. They agree it speeds up development time. That’s not what they were studying and the methodology didn’t really test speed well. They just didn’t reach statistical significance (hence “no significant” wording) but the AI group was faster, and all completed the task while the no-AI group had 4 fail the task out of 26.
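That completion-rate gap (0 failures with AI vs 4 of 26 without) is easy to sanity-check with a one-sided Fisher exact test in pure Python. Note the AI-arm size of 25 is inferred from the thread's stated n=51 total minus 26 in the no-AI group, not taken from the paper itself:

```python
from math import comb

# 2x2 outcome: task failures vs completions, by group.
# Assumed split of the 51 participants: 25 with AI, 26 without.
ai_n, ctrl_n = 25, 26
failures = 4            # all observed in the no-AI group

# One-sided Fisher exact p-value: probability that, if the 4 failures
# were scattered at random across all 51 participants, none would land
# in the AI group (the observed, most extreme outcome).
total = ai_n + ctrl_n
p = comb(ctrl_n, failures) / comb(total, failures)
print(f"p = {p:.3f}")   # ~0.060: suggestive, but just above the usual 0.05 cutoff
```

Which is one concrete way a real difference can still come out as "no significant" at this sample size.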
DLCSpider@reddit
Wasn't there also a paper which showed that senior developers perceive themselves to be 25% faster with AI but are actually 19% slower?
baronoffeces@reddit
Did you even read it? It’s saying it’s not good for skill building, i.e. not good for novice users. If you’ve used any of this tooling and are experienced, this is obvious.
Level-Courage6773@reddit
Love this, thanks for an entertaining read :D
GeorgeMaheiress@reddit
On a trivial task using an 18-month old model (4o). Meanwhile entire apps and features are being built with current models. The people who say that AI coding is helping them aren't lying or confused, stop searching for reasons to disbelieve them.
Training_Chicken8216@reddit
The one thing that LLMs can successfully replace is asking strangers online for help. It's no more unreliable, but way less condescending.
ToonMaster21@reddit
We had a data engineer leave for somewhere new (an industry with significant security requirements) basically to force himself to quit using AI.
He said he was forgetting how to write code and automated a lot of his job “for fun”
BanjoB0b@reddit
In my experience, AI has served mostly for code auto-completion and duplicating code structures. It is helpful, and for my ADHD brain, it saves a lot of energy when investigations get too long for my naturally hampered ability to focus. It's a 50/50 thing. Sometimes it's super dumb, sometimes it's helpful.
But it won't ever replace Software Engineering. AI cannot do smart or creative design. It's good at monkey-coding and sometimes debugging.
jeffbagwell6222@reddit
I'm killing it with AI. So productive and just making super cool stuff.
I was heavily against AI too a year or two ago.
greenknight@reddit
As a shit-tier programmer, my work looks like AI output anyway. I write a spec in markdown, write a little abstract pseudocode, assemble my typical libraries, and look for new and exciting alternatives. When finished, I write a TODO.md tracker.
Then I start writing code, and the unit tests as I go along.
All probably with terrible commit hygiene and improper use of stash.
DeliveryNinja@reddit
I've been coding for 18 years and now use Claude Code religiously. Four months in, I've not written any code, but I spend ages reviewing it. It is really good at writing integrations when provided with existing design patterns and rules to follow; work that would have taken weeks can be done in a day. New domain models following DDD principles? Just copy the previous one. Admittedly, I do have a 1000+ line CLAUDE.md file and run several iterations of code review to make sure it follows the patterns. It has accelerated development massively. But yes, if you don't care to guide it and set rules, it will produce shit code.
CHF0x@reddit
Did you even read the paper? The experiment had developers _learn_ a new asynchronous programming library they'd never used before. The finding is that when you're trying to learn something new, heavy AI reliance can hurt that learning. This is very different from "AI doesn't speed up experienced developers working on familiar codebases." I wish people would train a bit more in comprehension rather than picking up random facts that fit their agenda.
grady_vuckovic@reddit
I have literally been saying exactly what this study concluded for over a year now. FFS. Fuck these mind games these tech bros have been playing with us, constantly trying to make us feel like we're Luddites for going with our gut instincts on this stuff.
Particular-Plane-984@reddit
I've been saying the same as well. Been saying that it's actually the opposite of the 'hop on the train or get left behind' folks. They think we'll lose our jobs if we don't know how to use AI (which is a hilarious statement because using AI is easy)... meanwhile, in reality they'll lose their jobs (if they're even employed) for having their skills degrade so much by over-relying on AI
matthewjc@reddit
This was specifically a test of how devs "gained mastery of a new asynchronous programming library" with and without ai. This post's title is misleading.
crystalpeaks25@reddit
Software engineering is a means to an end.
warpedspockclone@reddit
One big hurdle is the mode of interaction. It requires reading and writing lots of text. Kids these days are barely literate.
For those who can read and are already experienced, it is a tool. As with any tool, it all depends on how you use it.
Do you think people who have only ever known React could write a basically functional vanilla html/js page to save their lives? No.
Do you think Ruby developers can write Assembly? Not related.
The point is that everything has costs, tradeoffs, abstractions.
With LLMs, I often find myself saying it would have been faster just to do it myself. But there are some things they really excel at.
DataGhostNL@reddit
Trash coders or non-coders suddenly delivering code (even bad code) easily reaches into 100x territory yes
Far-Win8645@reddit
Of course some people will have a 100x boost. AI is a tool and makes shitty coders' lives easier, so they will have a huge boost. It does not apply to everyone, and definitely not to competent coders.
NuclearVII@reddit
I really wish people would be a lot more skeptical of Anthropic's publications. This is a for-profit company publishing a "study" on a product category that they have an interest in. That ALONE should be enough to discount it.
I want to address this:
This is not the narrative I've been seeing. For me, I think the new narrative is more along the lines of "It's a very powerful tool that you have to use intelligently, not just vibe code".
Which this paper seems to be reinforcing. Here's a choice quote:
This is not a paper about how AI use can rot your brain, even though the headline would suggest so. This is a paper about how you have to use AI "the right way".
I would really like to let my confirmation bias do the work for me - I think these tools are junk and a net harm on society - but basic skepticism says this paper really needs to be considered with a great deal of scrutiny.
Double_Ad3612@reddit
I have definitely noticed that using AI has negatively affected my critical thinking and problem solving skills.
Ok-Kaleidoscope5627@reddit
AI assisted coding gives me a 100x speed increase on proof of concept or other situations where previously I'd have done a task manually because automating it wouldn't be worth it.
It gives a minimal or no speedup for code that I will deploy to prod or that needs to run long term.
Quiet-Owl9220@reddit
The hype is not living up to reality, go figure.
I'll tell you what LLMs can produce more efficiently than humans - bullshit success stories and impersonations of real humans that exist solely to shill your product on social media.
LeDYoM@reddit
no shit, Sherlock
BobSacamano47@reddit
It definitely makes me faster at writing and understanding code. People who aren't programmers don't realize how little time we spend writing code in the first place.
Plank_With_A_Nail_In@reddit
This stuff should already be in your design documents; you shouldn't just be coding stuff from scratch like this.
No_Reality_6047@reddit
The original title: "How AI Impacts Skill Formation"
Your title: "AI assisted coding doesn't show efficiency gains and impairs developers abilities"
There is no significant speed up in development by using AI assisted coding if you are using an obscure library that the AI was not trained on
Gil_berth@reddit (OP)
No, they used this: https://github.com/python-trio/trio (an old library with 7k+ stars on GitHub). Jesus, why are people commenting without reading the paper?
Reinbert@reddit
Why are you significantly altering the title?
The study only talks about skill formation (learning). You generalized this to all applications of AI, which is just ridiculous and wrong.
And even then your summary left out a very important part, that even in this study people were actually more productive with AI:
So you essentially took that study and completely cut out all the parts that don't fit your narrative.
Jesus, why are people posting without reading the paper?
Gil_berth@reddit (OP)
I literally borrowed the title from the abstract. That is the whole point of the paper: "showed some productivity improvements" but they weren't "significant" and that is what I said in the summary.
Reinbert@reddit
There are a lot of problems in this study's design. First of all, it's a small study with only ~50 participants. We see the total time in minutes to completion for both groups (lowest, average, highest):
With AI: 20.5 - 23 - 25.5
Without: 21.5 - 24.5 - 27.5
The group with AI assistance looks quite a bit faster. And that's only after adjustments to the study design; their pilot study (N=20) had these results (people without AI assistance struggled with Python syntax, so they controlled for that in the main study):
With AI: 16.5 - 22.5 - 27.5
Without: 27.5 - 30 - 32
Which looks pretty damn significant to me. So really, I don't think the study supports the conclusions you make in your post.
Globbi@reddit
30 minute tasks are also kinda insignificant. In reality, unless your 30 minute tasks are just separating a functionality into TODO for whole day, context switching between such tasks will take more time than the task itself.
And it's a huge problem that we won't be getting actual serious studies on multi-day tasks. That would require some thinking about architecture, change in multiple places, finding which places first, understanding how will it affect things.
How would you do such a study? A big company would need to want to do the study and give same tasks to multiple people at once (who won't be able to see commits of others) and some people would have AI assistance, some not. This is not going to happen ever. Evaluating different tasks done by different people, and also taking estimates of their difficulty from bullshit planning numbers, won't give you any interesting information.
Reinbert@reddit
Many valid points in there. The main point is that this study is about learning and not about working with AI. Drawing conclusions about working environments is just not valid.
bzbub2@reddit
99% of people don't know how to properly read academic papers. Also, you editorialized the title for shock value, so don't be surprised to get knee-jerk, reactionary comments.
Gil_berth@reddit (OP)
"you editorialized the title for significant shock value" I borrowed the words from the abstract…
lahwran_@reddit
You didn't link the paper and I was still trying to find it when I saw your comment
bzbub2@reddit
You do a good job of de-editorializing the title, but I think you unfortunately make an error by making a claim that the paper is not making. You suggest the issue could be that the 'AI was not trained on the library'. I don't think that is a claim the paper makes, and the library has existed since at least 2021 (https://news.ycombinator.com/item?id=29403458). Here is the finding of the paper in the authors' own words, from the discussion:
"Our main finding is that using AI to complete tasks that require a new skill (i.e., knowledge of a new Python library) reduces skill formation. "
Bubbly-Wrap-8210@reddit
Unpopular opinion: developers who were previously on the edge of being called "efficient" lack the basic skills to gain efficiency from orchestrating a tool that could potentially tenfold their output.
I work at a company where we are pretty open to allowing developers to use this. I've given so many sessions on how to use the tools effectively, what they could do for you in terms of understanding the code, understanding or hardening the requirements, or pure delivery.
After weeks, only 1/10 used it. Those who did produced measurably more outcomes. Those who didn't simply reverted to their established career habits.
AppropriateStudio153@reddit
> Those who didn't simply reverted to their established career habits.
In other words: They did what worked for them since. What is the problem?
That AI makes you a 10x Dev is a proposition that YOU have to prove, not the devs that say it doesn't.
TrontRaznik@reddit
And those 9 are going to be the ones to get laid off in the long run.
kirasenpai@reddit
Maybe that's a good thing... people who actually keep up their skills will be more valuable in the future, when everyone realizes they fcked up.
hectorchu@reddit
If somebody whipsaws their opinion or advice it means they can never be trusted. We have picked the wrong gods, the flood is coming.
LargeRedLingonberry@reddit
This is purely anecdotal, I've been leading an AI investigation in work for the past couple of months. Utilizing frameworks like speckit to discover if AI can create complete features if given a good enough prompt.
The overwhelming answer is no: it struggles a lot with complex (and even simple) business requirements due to lacking domain knowledge. A feature that would have taken me a couple of business days took the AI and me almost a week, because I had to debug and refactor a lot of the code it wrote without the context I would normally have had if I had written it myself.
I've seen this repeated a few times and while I got better at prompting, AI still didn't come close to my own speed.
On the other hand, I have used AI (Claude CLI) in a personal project (from inception) for the past couple of months, and it is still incredibly useful there: it doesn't struggle with finding files, finding modules, running tests, etc., and it can do complete features with only a bit of dev work at the end to "fix" the code. I think that because the AI wrote the project from the ground up, it is structured in the way the AI expects, so it can get the context it needs quickly and with fewer tokens.
I think AI struggles with pre-existing code bases because it's trained to understand the "average" repo structure.
jailbreak@reddit
You also teach kids to do calculations in their head or on paper before you let them use a calculator. Knowing what the machine is doing for you is essential.
Captain_Sterling@reddit
I think it depends on who it's assisting.
I work in service management and haven't coded in years. I use AI at the moment and I've found that it will create certain documents quite well. But "quite well" in this case means 80-90% of the content being accurate.
I still have to read, edit, and correct them. But it's still a time saver: I spend less time doing that than creating from scratch.
The same with coding. If someone responsible and knowledgeable is doing it, then it's a good tool. It's something that can help them. But it still requires their knowledge.
Farados55@reddit
WHAT AM I SUPPOSED TO BELIEVE
mediandude@reddit
Engineering means building a model of the product.
Code is at best a small subset of that model.
sapoepsilon@reddit
YES
auptown@reddit
Well it sure as hell helps me
Grounds4TheSubstain@reddit
Your quotes from the paper conspicuously lack context. "We conduct randomized experiments to study how developers gained mastery of a new asynchronous programming library with and without the assistance of AI. We find that AI use impairs conceptual understanding, code reading, and debugging abilities, without delivering significant efficiency gains on average."
kolo81@reddit
I have similar thoughts. I recently used an agent to generate a portal based on Django. I was shocked at how it did it, but the code wasn't complete; it turned out that most things didn't work. The worst part was the code structure and understanding what was going on where.

I think this tool is good for supporting me and helping me understand certain things. It works well as a quick syntax check or name suggestion. I also use it for front-end generation because that's always a pain for me; I don't like HTML, JavaScript, and so on. But in my projects I don't need fancy pages and flashing buttons; I want things as simple and readable as possible, so I often dump entire HTML/JS code structures into the AI.

Unfortunately, search engines like Google are pushing us toward AI chats. It's hard to find tutorials for anything specific these days, and I don't think they're being updated; I have that impression. For example, if you want a tutorial on creating a login/logout page in Django, you often find outdated solutions. It's better on YT, but ctrl+c ctrl+v won't work (although apparently no one does that :-) everyone always writes each line themselves :-))
Brock_Youngblood@reddit
I don't do much with code generation but as a replacement for Google and stack overflow it's pretty good.
Saves me time that way. Also good for learning. Instead of reading a 1000 page book or documentation. I can just ask it how things work.
R4vendarksky@reddit
But it doesn’t know how things work… it’s just guessing what you want to hear.
It will 100% feed you props that don’t exist or solutions that do not work
Brock_Youngblood@reddit
It's been pretty accurate for me tbh. I very rarely find something wrong anymore. It's loads better
mv1527@reddit
I think this matches my experience so far very well. It saves the initial time of looking up the documentation, but that means you miss out on the options and choices the documentation also provides. So the choices made in the code are more likely to be at an "example" level rather than the options you actually need, and you end up fixing them (or worse, leaving them in). In my case, for example: setting the appropriate level of error correction on a QR code renderer.
goodnewscrew@reddit
I'm sure this is true. However, I've found it immensely helpful in building a small web app. I have some background in web development and python so that I can generally understand what's going on, contribute to troubleshooting, and push back on things that won't work.
I built a clone of typing.com (more or less) for our school. It has SSO that works with our google workplace for education domain. It lets students level up, unlocking more of the keyboard as they progress. It graphs their WPM and accuracy on their dashboard. Also, it awards badges for 1st/2nd/3rd-top 10 each week based on points earned.
I have no idea how much longer it would have taken without AI. Probably an order of magnitude, if I could have finished it at all.
Captain-Barracuda@reddit
It's great at making trite things that are already overdone. Not so much at inventing new stuff.
Lame_Johnny@reddit
https://github.com/mlolson/claude-spp
Successful-Money4995@reddit
All these articles focusing on whether or not AI makes us more productive are missing me. My goal isn't to be more productive. My goal is to have a better life.
If AI doesn't make me more productive but it can take things off my mind, good enough!
MartinByde@reddit
I've been saying this since around 8 months ago, when my company started to push this shit. You don't learn kung fu by reading a book, you learn by actively practicing it; code is the same. Just reading what the AI did doesn't let the inner workings of the project enter your mind properly, and when there's a bug, all goes to shit. Ever since this started, I've been seeing people take 5x more time to fix bugs, because the codebase that should be known like the back of their hands quickly became a monster.
PoisnFang@reddit
Blah blah blah, why everyone keep going around in circles. It's a tool, use it or not, whatever
Lothrazar@reddit
That pretty much sums up my personal experience as well
shared_ptr@reddit
There is obviously and very clearly a trade-off here and yes, if you use AI to generate all your code your skills directly writing that code will atrophy.
That's the same as anything though. If you become a manager and stop doing technical work then unsurprisingly your skills will fade, but that doesn't mean you aren't having equal or increased impact in things that matter, or that you can't hedge it by staying close to the technical work and ensuring you don't cut yourself off entirely. Same deal with these AI tools.
Pjolterbeist@reddit
Reading it now; very interesting research, though OP's summary does not match the content well. Unsure if OP actually read it through; I am guessing no.
I think many people are rightly concerned that AI does not help much with learning the details of a piece of software, compared to coding it yourself, and with how this affects programmers' knowledge of the systems they write and maintain. It seems reasonable that if you just spend your days telling the AI to do the work and go drink coffee, you won't learn a lot. But used right, it can assist both with developing and with learning.
I'd just like to comment that this is a study.
The task was a few small, fundamental programming exercises on an unknown API. Out in the real world, using AI when starting work on a project or tool does not save time immediately; it takes some ramp-up before you see efficiency gains. Once you have set up the AI (using CLAUDE.md, Claude Skills, MCP, or other means) so that it starts with the right context about your stack and procedures, and can talk to git and project management tools, read logs, and check itself, it becomes a LOT faster to develop with.
But before you spend the time iterating on a good setup, you are not going to be saving a lot of time.
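For readers who haven't seen one, a minimal CLAUDE.md along these lines might look like the sketch below. The stack, commands, and conventions are invented for illustration, not taken from any real project:

```markdown
# CLAUDE.md (hypothetical example)

## Stack
- Python 3.12, Django 5, PostgreSQL; tests via pytest

## Conventions
- New domain models live in `app/domain/`; follow the existing repository pattern
- Never edit generated migration files by hand

## Commands
- Run tests: `make test`
- Lint before every commit: `make lint`

## Workflow
- Read the relevant module before proposing changes
- For multi-file changes, propose a plan first and wait for approval
```

The point is that the agent reads this once at session start, which replaces a lot of the per-prompt context-giving the study identified as overhead.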
seventythree@reddit
For those wondering, the AI assistant they tested with was based on GPT-4o.
Gil_berth@reddit (OP)
Irrelevant for the experiment, because they made sure that: "The AI assistant has access to participants’ current version of the code and can produce the full, correct code for both tasks directly when prompted."
JiminP@reddit
The paper is worth reading, in particular Figure 11, but I don't think the result by itself is very significant one way or the other, given the low sample size (Table 1).
Figure 7 contradicts some comments in this thread.
Figure 16 shows that active coding time was significantly reduced by using AI, although quiz scores did indeed decrease a little.
Sunscratch@reddit
From my experience, where AI is really useful:
Where it is not useful or harmful:
My experience is mostly in working with large enterprise systems (Scala, with some Java and Rust)
VirtuteECanoscenza@reddit
I'm pretty sure on SOME tasks you can get huge gains... not on all. Also, I'm 100% positive that people who stop coding will lose their skill.
And I think the latter is the more problematic part... Lots of students now are learning 10% of what they could in school because they delegate all their homework to these AIs. If you don't use your brain it will rot, and I'm afraid of seeing what the average adult will be like in 20-30 years, considering the level we managed to achieve without AI brain rot...
catecholaminergic@reddit
tbh I don't get why heavy ai users don't get that they're just a thin client. An expensive thin client.
yupangestu@reddit
I have read the paper, and it's interesting, but again, a research paper cannot be the sole source of fact in this case. That said, based on my experience, I always say: use AI assistance when you KNOW what it produced. Sometimes I just use the assistant for chores like unit testing, boilerplate, etc.
pancomputationalist@reddit
I'm bored with these kind of posts. Let's just do the experiment. You guys keep writing by hand, I stop reading the code, and in 3 years we'll check back in to see who was more successful in the market. Deal?
theRealBigBack91@reddit
Yep, enjoy your new job at McDonald’s
truthputer@reddit
When your vibe coded app has enough security holes that it leaks all your user data and you get sued out of business that's not a successful project.
ShelZuuz@reddit
In related news, C impaired developers' ability to write code in assembly.
mfitzp@reddit
Does it? I got better at assembly once I learned C
rafuru@reddit
Dunno about production code. I use Copilot or Cline to help me write parts of the code, but I don't let them write most of it, because that's hard to maintain.
However, when it comes to writing unit tests, it's very helpful: I write 1 or 2 scenarios and ask Cline to write the rest. It saves me a lot of time TBH.
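That seed-then-expand workflow can be sketched as follows. `slugify` and all the test names here are hypothetical; the shape is what matters: you hand-write the first scenario or two, the assistant extends them in the same style, and you review each generated expectation before keeping it:

```python
import re

# Hypothetical function under test.
def slugify(text: str) -> str:
    """Lowercase the text and replace runs of non-alphanumerics with '-'."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# The 1-2 seed scenarios you write by hand:
def test_basic():
    assert slugify("Hello World") == "hello-world"

def test_punctuation():
    assert slugify("C++ & Rust!") == "c-rust"

# Scenarios an assistant might add in the same shape
# (review each one: generated tests can encode wrong expectations):
def test_empty():
    assert slugify("") == ""

def test_whitespace_edges():
    assert slugify("  spaced out  ") == "spaced-out"

if __name__ == "__main__":
    for t in (test_basic, test_punctuation, test_empty, test_whitespace_edges):
        t()
    print("all scenarios pass")
```

The seeds pin down the naming and assertion style, so the generated cases are cheap to review rather than a fresh design exercise.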