Why agents DO NOT write most of our code - a reality check
Posted by ma_za_octo@reddit | programming | View on Reddit | 90 comments
TheRealSkythe@reddit
Are they saying they know LLMs create shit, but they build 'em and sell 'em anyway?
Klutzy_Code891@reddit
For me, I like to use it for demos: if I'm stuck and don't know how I want something to look (usually for websites), I ask it to make a demo. It's usually really glitchy, but it's nice to see what it looks like, how I would change it, and so on.
grauenwolf@reddit
A case study on how LLM coding was used at a company? Better downvote it and hide the evidence. We can't let people know how badly this stuff works in the real world.
phillipcarter2@reddit
I don’t think the takeaway is “how badly this stuff works” when the author’s conclusion is that it’s an essential tool in the developer’s toolbelt for ideation, writing tests, debugging and troubleshooting, and refactors; and that they’re great for constrained problems to the point where non-technical people can actually contribute code now.
bobbyQuick@reddit
This is an absurd conclusion. The article is literally about it failing to complete a basic coding task even after hours of guidance from a senior developer. It ignored all of their coding standards and introduced insidious data integrity bugs.
Relative-Scholar-147@reddit
Since Visual Basic, corporate cums when reading "non-technical people can actually contribute code now".
RICHUNCLEPENNYBAGS@reddit
Since VB? That was an objective of COBOL.
CondiMesmer@reddit
I think you read a completely different article, or maybe even had an LLM summerize it for you, because that's not remotely the conclusion that article came to.
The article lists constant failures and massive deal-breakers from AI agents, and they didn't even mention the big computing fees that come with it. What your comment is referring to is the small redemption they wrote at the end, saying it's really just good at small code snippets and some auto-completion, while also plugging their own AI company's product.
So you just ignored 90% of the content just to be able to misinterpret like a single paragraph at the end of the article. Hopefully your summerization LLM gets an update soon, because your critical reading skills are clearly not a viable tool here.
phillipcarter2@reddit
It’s spelled “summarize”.
jaspingrobus@reddit
Is it really the author's conclusion? I certainly wouldn't use the word "essential".
2this4u@reddit
You could, you know, read the link...
guepier@reddit
… So why didn’t you?
serrimo@reddit
Let me put it this way: it's pretty cool to have agentic tools, but if they're really essential to your workflow, you're in deep shit.
phillipcarter2@reddit
Are you? IME it’s fantastic at creating a bunch of test cases for a fairly mature codebase to the point where I can largely hand that job off. Without these tools, I wouldn’t have time to write tests as comprehensively.
amestrianphilosopher@reddit
Test cases are the last thing you should ever be generating with an LLM. The only way I could ever find one reliable as an assistant is if I wrote the test cases and the code it produced passed them. If you aren't thinking about how the code you're shipping should behave, you've got some serious problems brewing. There will never be a 1:1 single-prompt system that takes English and converts it to flawless code.
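The workflow I mean, sketched in Python (slugify and its rules here are hypothetical stand-ins):

```python
import re

# Human-written spec: the tests come first.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaced  out  ") == "spaced-out"
    assert slugify("") == ""

# Only this part gets handed to the LLM, iterated until the tests pass.
def slugify(title: str) -> str:
    return "-".join(re.findall(r"[a-z0-9]+", title.lower()))
```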
CondiMesmer@reddit
The only time I use auto-generated code is when I already knew what I was going to write and it's just saving me keystrokes, so I just use it as a glorified auto-correct. Except it's an intrusive auto-correct that usually pops up in VSCode and disrupts my flow of thinking, so I don't even use that lol.
flew1337@reddit
I kind of agree. It seems to be mainly used to generate tests when the logic is already implemented and coverage is required because of some arbitrary metric. To me, that's not writing robust tests, that's making your code appear compliant because your boss asked you to.
phillipcarter2@reddit
To the contrary, there's all kinds of things where increased coverage can handle things for you, and these tools are very quick at whipping up the cases. I had an example like this with some unicode fuckery, where the stdlib couldn't handle my use case efficiently (too many memory allocs), so I had to write my own routine. I could have come up with clever unicode use cases myself, but the LLM generated a dozen or so weird scenarios, one of them actually caused my code to fail, and so I fixed it. The point is it was faster to do this.
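For a flavor of what that looked like, a rough Python sketch (truncate_display is a hypothetical stand-in for my routine; the generated inputs are the interesting part):

```python
import pytest

def truncate_display(s: str, limit: int) -> str:
    # Naive stand-in: slices by code points. A grapheme-aware
    # routine is exactly where the inputs below bite.
    return s[:limit]

# The kind of edge cases an LLM can churn out in seconds:
EDGE_CASES = [
    "e\u0301" * 10,                # 'e' + combining acute accent, repeated
    "\U0001F469\u200D\U0001F4BB",  # woman-technologist emoji (ZWJ sequence)
    "a\u200Bb\u200Cc",             # zero-width space / non-joiner
    "\uFFFD" * 5,                  # replacement characters
    "\u0928\u092E\u0938\u094D\u0924\u0947",  # Devanagari with a conjunct
]

@pytest.mark.parametrize("text", EDGE_CASES)
def test_truncate_never_grows(text):
    # A cheap invariant; a real suite would also assert the cut never
    # lands mid-grapheme, which is the kind of check that finds bugs.
    assert len(truncate_display(text, 3)) <= 3
```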
flew1337@reddit
I did not say it was not useful. It can be useful when you are generating tests for something very standard like unicode. Even then, you are delegating your understanding of what you are testing to the LLM, which links back to what the other commenter was saying. If that's something you truly understand, that's a valid use. It just gets riskier when you are testing code with custom specifications.
My point is that a lot of people generate tests for their internal API because they have some coverage metric to attain. The tests are basically meaningless. Anyway, it's a consequence of the metric and not the tool. People were already writing shady tests. The new method just exposes it.
CloudsOfMagellan@reddit
Tests like this can at least catch unintended changes or point out what code needs to be changed if bugs are found.
Downtown_Category163@reddit
They're OK at building tests that pass, not so good for finding bugs and edge cases in your code
grauenwolf@reddit
That's not my experience. In my last attempt, half the tests were failing, and half of those failures were actual bugs in my code.
Granted this is in a fairly new project where I knew I was working fast and sloppy. I wouldn't expect it to be as useful in a more mature application.
grauenwolf@reddit
You shouldn't be generating all of your test cases, but I've found the LLM can find unexpected stuff.
I do know that I'm the type of person who will use code generators to create hundreds of property tests with the expectation that 99 out of 100 of them won't find a bug and probably couldn't find a bug. But that 1 in a hundred makes the exercise worth it.
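Something in the spirit of this Hypothesis sketch, assuming Python (normalize_path is made up for illustration):

```python
from hypothesis import given, strategies as st

def normalize_path(p: str) -> str:
    # Hypothetical function under test.
    return p.replace("\\", "/").rstrip("/") or "/"

@given(st.text())  # hundreds of generated inputs per run
def test_normalize_is_idempotent(p):
    once = normalize_path(p)
    # Property: normalizing a second time changes nothing.
    assert normalize_path(once) == once
```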
Bergasms@reddit
If your codebase got mature without tests, then I'm not at all surprised you love LLMs.
phillipcarter2@reddit
Christ.
When you work on something with millions of active users -- not millions of requests, millions of active users -- with an internal and external extensibility model, a marketplace for extensibility that entire 100M+ revenue businesses use as a major distribution channel, and an absolutely wild matrix of supported environments that parts of the software need to run in ... you're not reaching anywhere close to 100% test coverage.
There is no such thing as having comprehensive tests everywhere with big software that does big things.
So yeah, a good code generator that can follow fairly well-established patterns to get close to exhaustive testing is a significant boon, because once you cover most of the tricky use cases there's a long tail of things that could be tested, but there's no time to actually do that.
Bergasms@reddit
No one said anything about 100% coverage, you'd be stupid to aim for it, but writing actual tests with LLMs after the fact has been, and will continue to be, a recipe for garbage.
We've had better success using LLMs to generate input to exercise tests, because they're great at shitting out nonsense in bulk.
grauenwolf@reddit
I find LLMs to generate a lot of bad tests. But not so bad that I can't make them into useful tests faster than I could write on my own. So they're a net positive for me... when the crappy tools actually try and not just give up after one or two tests.
xtravar@reddit
You are absolutely correct. And it's not just about coding. It's about making PRs, doing research, and automated code review on PRs. That last one is like an awesome spellcheck - not a replacement for an editor/reviewer.
Obviously, more complex code and context isn't going to have good results (yet). But it's very helpful for a lot of things.
My team has automated refactoring PRs for a large framework migration, and then people look it over and sign off.
I need a PR to change a constant. I just tell the agent.
I need a bash utility script - usually gets it right after 1-3 tries.
I need to look into our data tables, it can gather the data and make graphs instead of me schlepping through it.
Saves tons of time on brainless tasks that weren't interesting to begin with.
Clearandblue@reddit
I've used it for exactly this recently and it has worked great. There were a few things that needed some massaging, but overall it saved lots of time.
Something I'm aware of is review fatigue, though. I'm already doing a lot of PR reviews, and I find I'm doing more on top of that with AI. First-world problems though, as you get more done with a team and with AI than you do on your own.
phillipcarter2@reddit
Yeah, the bottleneck shifting to more review is definitely real. Some folks have been doing okay with a combination of automations and review agents, but IMO it doesn't work very well yet. On the other end there's some promise in the "AI SRE" class of tool that can automatically read logs/traces/metrics for some services and let you know if the change is doing alright, but it's still a far cry from "we verified it does what it needs to in the real world". Toooons of work to do in developer tools for the AI labs if their goal is to get AI involved a lot more than it can be right now.
grauenwolf@reddit
It can be interpreted either way, which is still a bad thing in the minds of the AI zealots.
phillipcarter2@reddit
I prefer the interpretation be what the author wrote, not what AI or anti-AI zealots want it to be :)
grauenwolf@reddit
That's your right, but others have their right to their own interpretation.
Personally I don't put much stock in the author's conclusions. Far too often I've read academic papers in which the conclusion was not supported by the facts presented in the paper. So I tend to ignore the conclusions entirely and focus on the body of the content.
phillipcarter2@reddit
So you’re admitting to cherry-picking what you prefer? I mean, sure, if your workplace is pushing AI on you in a way that clearly doesn’t work, don’t let me stop you. But, woof, the author quite clearly wrote that AI has a place in the toolbelt.
grauenwolf@reddit
It's not "cherry picking" to read a set of facts and come to a different conclusion than the presenter of those facts.
Cherry picking is when you ignore facts, not opinions, that you don't like.
phillipcarter2@reddit
Okay, so you’re cherry-picking then. Got it!
irecfxpojmlwaonkxc@reddit
You just hear what you want to hear, don't you?
spaceneenja@reddit
It’s only cherry-picking when you do it, not when I do it.
egodeathtrip@reddit
brah, what are you both even arguing about, lol
mb194dc@reddit
Because if it didn't you'd just end up debugging it for much longer than just writing it yourself in the first place...
pm_plz_im_lonely@reddit
Every few days I check this subreddit and the top post is some article about AI where every comment is about how bad it is.
Decker108@reddit
Funny how drastically the corporate speech on AI benefits differs from that of in-the-trenches developers, isn't it?
knottheone@reddit
You've pulled back the veil. :) Every major subreddit is like this.
They have whatever their biased and usually uninformed view is and repeat the same process infinitely for years in a horrible circle jerk. They jump on, downvote, and attack people who disagree until they leave, then back to circle jerking.
hu6Bi5To@reddit
I'm stuck squarely in the middle of this debate, and it's a lonely place as most people seem to be at one extreme or the other.
AI agents are vastly more useful than the denialists are claiming. But that's only been true the past couple of months with the latest AI models (Claude Sonnet 4.5, GPT-5 Codex, etc.). They're good enough to handle non-trivial but small tasks on established codebases better than junior developers. They're better at finding bugs in code reviews than even the most experienced developer with an axe to grind (GPT-5 Codex especially).
But there are huge practical limits that still need to be overcome to get beyond that. Like the aforementioned "small" tasks, this is a hard limit set by the context size. I know sub-agents are a thing but something is lost and (to quote the old programming cliche) it doesn't scale. Context sizes are increasing, but that vastly increases the cost, so not by enough. Not to mention Context Rot is still a problem so you may not even want to use all of it for best results.
Yet wherever I look I see developers spending hours on trivial problems they could get an AI agent to do in two minutes (with fewer mistakes). Then I look the other way and see messianic people with 100-slide presentations on how Claude Code 2.0.34 changes everything! All you need is: instructions, and agents, and planning, and memory, and specs, and a million markdown files in twenty seven different locations, and ultrathink!, and, and... ...if it requires that much pre-preparation I'd be faster doing it myself the old fashioned way.
Desolution@reddit
"We made literally one attempt at doing something extremely difficult that people have now spent years getting good at .. and it didn't go very well!"
All the problems they had are real, but very solvable. Like, using the same thread to write code and verify code is a rookie mistake; use sub-agents or refresh the context.
BandicootGood5246@reddit
Been my experience too. I think the reason it gets overhyped is that people possibly overestimate how hard some of the things it does are.
A common one I hear is that it can generate unit tests really fast. But honestly, unit tests should already be pretty fast to write: once you have the first test case, the rest is mostly copy-paste with a few minor variations. And when an agent churns them out in a minute, you've then got to spend extra time checking that they're useful and valid cases.
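In pytest terms, the copy-paste part looks something like this (parse_int is a made-up example):

```python
import pytest

def parse_int(raw: str) -> int:
    # Hypothetical function under test.
    return int(raw.strip())

@pytest.mark.parametrize("raw, expected", [
    ("42", 42),    # the first case takes the thought...
    (" 42 ", 42),  # ...the rest are minor variations
    ("-7", -7),
    ("0", 0),
])
def test_parse_int(raw, expected):
    assert parse_int(raw) == expected
```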
And then when it comes to writing features, a lot of the time it's not doing a whole lot more than what you could do with copy-paste + search in the past; it might save you opening up a few websites and narrow down your search better some of the time. But, like copy-pasting code snippets, you still have to validate and check them, which often ends up being the harder part.
you-get-an-upvote@reddit
I want to work in your codebase :(
twigboy@reddit
They're clearly not a RelayJs user
I detest that GraphQL framework because of the amount of boilerplate required.
VoodooS0ldier@reddit
Yeah lol, maybe for very trivial unit tests, but once you need to integration test, these tools can become useful.
Absolute_Enema@reddit
Integration testing has been a solved problem (in the right kind of language) since the '80s at the very least.
RammRras@reddit
I like the tab completion, especially what Cursor does, but sometimes when the variable names are a little confusing it's very dangerous due to mistakes. Using search/replace and copy/paste is sometimes safer.
But so far my biggest win is the tab completion from LLMs; the rest is just code they've copied from GitHub or Stack Overflow, and it could be terribly wrong.
jbmsf@reddit
Most of the time, what matters is whether something has a predictable cost, not whether it has a minimal cost.
And most of the time, writing unit tests is predictable. So even if you manage to automate it away, you aren't impacting the underlying question: is X feasible?
zazzersmel@reddit
What value does “x % of code” even have as a statistic?
backfire10z@reddit
Dude, are you trying to brick my MSFT investments?
Difficult-Court9522@reddit
He’s
IE114EVR@reddit
You must be getting downvoted for your grammar. Which isn’t technically wrong… but weird.
Bstochastic@reddit
Finally, honesty.
terrorTrain@reddit
I'm writing an app right now for which I'm very heavily leveraging AI agents, using open code.
It's entirely about how you set it up. I set up the project and established patterns. Then I have a task orchestrator agent, which has project setup guidelines. It literally doesn't have write permissions. It's set up to follow this flow:
Meanwhile, I'm keeping an eye on the git diff as it's working to make sure it isn't doing something stupid, and if so, I'll interrupt it. Otherwise I work on reviewing code, and debugging the e2e tests, which it is just not good at.
The quality of code is high, test coverage is high, tests are relevant. But I've probably done about 3 or 4 months of work for a small team, solo and in about a month.
It baffles me when I see people saying the ai is just creating tech debt. Without the ai on this project, there wouldn't be tech to have debt. We would probably still be in the early phases of development.
Full-Spectral@reddit
A better idea would be that they don't write any of your code, IMO, at least if I'm ever going to be using it.
VeritasOmnia@reddit
The only thing I've found it consistently decent at is unit test coverage for your code with solid APIs to prevent future breaks. Even then, you need to carefully review to be sure your code is doing what it should because it assumes your code is doing what it should.
Full-Spectral@reddit
I get that for people who work in more boilerplate'ish realms with standard frameworks and such it would work better, aka in the cloud probably these days.
It wouldn't be too much use for me, since I have my own unit test framework and my own underlying system down to the OS, none of which it would understand.
theshrike@reddit
You do understand that you're in the 0.0000001% of all coders in your situation?
Full-Spectral@reddit
I didn't mean INCLUDING the OS, I meant just building on top of the OS without using third party stuff. That still obviously doesn't put me in the majority of course, but this kind of thing isn't that uncommon in larger companies and embedded work or regulated work where every bit of third party code becomes a documentation burden and concern.
ub3rh4x0rz@reddit
Do you write HolyC targeting TempleOS?
Full-Spectral@reddit
I meant down TO the OS not down to and including the OS.
LouvalSoftware@reddit
AI can't say the N word so it's probably WokeOS
JakeSteam@reddit
Interesting read, thanks. Your conclusion seems to match my own experience, where AI is definitely helpful, but an entirely different product from the seemingly magical one startups and influencers apparently use (with no actual output to show for it...)!
Good point about the mental model: for a non-trivial codebase, extensive AI use has a pretty negative effect on everyone working on it, especially if you're doing something new.
TheNobodyThere@reddit
I'm hoping that agents will get better over time, though I am highly doubtful.
What I am getting from AI agents is sometimes below junior-level code: methods that are hundreds of lines long, weird, difficult-to-read logic, one-letter variables. Sure, you can instruct it to make changes to improve the quality, but even then it won't be perfect, and I would have to do the final edit myself.
The main issue is that the agent doesn't really have full context of your project. It sends a bunch of your code to the LLM every time you ask it a question. It doesn't scan your codebase to look for design practices, patterns, or code styling to follow.
As a result you get average code advice for your problem based on publicly available code, which is unfortunately below average and often Junior level grade. Good code sits in thousands of private repositories and LLMs can't train on it. Nobody is sharing their good codebase with any LLM.
What I can imagine happening is companies running their own private LLM that is trained specifically on their private repositories. But even that gets tricky and who knows how much it would cost to be actually fast and useful. And that doesn't even consider technological shifts in programming that are very frequent.
In short, it's a tool that makes certain annoying parts of work easier.
sloggo@reddit
Just FYI, you can work around the follow-my-lead issues by deliberately asking it to create a readme for itself, where it produces a compressed document to establish context. These master guidelines can be maintained both automatically and by hand, to give you the best chance of getting something you’re happy with “out of the box”.
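A minimal sketch of what such a file might contain (everything below is hypothetical):

```markdown
# agent-context.md -- compressed project context for the LLM
## Stack
Python 3.12, FastAPI, pytest; dependencies pinned in requirements.txt
## Conventions
- Type annotations everywhere; builtin generics (dict[str, int], not Dict)
- No new dependencies without sign-off
## Off limits
- migrations/ is generated; never edit by hand
```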
blwinters@reddit
This and you can create “rules” for Cursor to follow. I need to do more of that.
FortuneIIIPick@reddit
The article's title is a facade, the ending of the article is like, [but hey, AI is great and will save the world!].
kuzux@reddit
The ending of the article is basically an ad for octomind.
BrawDev@reddit
Ah yes, the "I'll run this" "Oh this didn't work, let me try this"
And it does that 30 times for everything it has to do, because it isn't intelligent. It deals with text as it comes in. It's not actually aware that you need to do that regen step unless it knows, in that moment, at that execution step, that it has to, which it never does.
I can only agree entirely with this article.
YEP
Mine seems to do this
thisIsAFunctionWithAVeryLongNameSoAsSuchIWontCondenseItItillJustBeThisLong
???????
An AI is an LLM: it has a set of training data it gravitates to, and if you aren't using that training data's stack, you're effectively fucked.
I'm in the PHP world. Seeing people promote AI makes me fucking pissed, because I know how these LLMs work and I know what's required to train them. So when I try it with Filament 4, a recent upgrade from Filament 3, I'm watching an LLM give me Filament 2 code because it's fucking clueless as to what to do.
Try doing package development for your own API and watch it make up so much shit. You spend more time getting the AI Instructions right, which it half ignores anyway.
I refuse to believe anyone is actually using this in production to build. And if you are, it's an idea that we all could do within seconds anyway, and if you have any revenue it's just luck or marketing that got you customers.
grauenwolf@reddit
That's what my roommate keeps complaining about. The longer this goes on, the more legacy patterns it's going to try to shove into your code.
LouvalSoftware@reddit
It's so funny writing Python 3.13 code and having it recommend shit to support backwards compatibility to 3.8. Of course it doesn't have a single fucking clue about the deployment environment and how controlled it is...
grauenwolf@reddit
AI trained on specific versions would be so much more useful. But there's no way they'd spend the money on making special purpose AI because it would discredit the value of the whole internet models.
Radixeo@reddit
I'm seeing this in Java land as well. LLMs always generate the JDK 8 style .collect(Collectors.toList()) instead of the JDK 11+ .toList(). They're stuck with whatever was most prominent in their training data set, and Java 8 is the version with by far the most lines of code for an LLM to train on.
I think this will be a major problem for companies that rely on LLMs for generating large amounts of code in <10 years. As languages improve, humans will write simpler/faster/more readable/more reliable/easier-to-maintain code just by using new language features. Meanwhile, LLMs will continue to generate code for increasingly ancient language versions and frameworks. Eventually the improvements in human-written code will become a competitive advantage over companies that rely on LLMs.
jimmux@reddit
Svelte is always a struggle. It can convert legacy mode code, but it has to be reminded constantly.
I expect LLMs would be much less successful if we were still in that period of time a few years ago, when everyone was moving to Python 3, ES6 brought in a lot of JS changes, and React was still figuring out its basic patterns.
BrawDev@reddit
To me it makes sense entirely why these companies have been unapologetically just ripping copyright content, and hoping they moon rocket enough to make any legal challenges a footnote.
No chance in hell could OpenAI have such a model, without the rampant abuses it does in scraping everything online - and paying said compute bill on the dime of others while doing it.
BroBroMate@reddit
Yeah, I see Cursor PRs come into our Python 3.12 codebase that either lack type annotations or, if they have them, use the pre-3.12 style. And it never tries to fill in the dicts' generic args.
Instead of
And I was always perplexed as to why, but your point explains it - it was trained on older code.
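To illustrate the gap with made-up code (not the original snippet):

```python
from typing import Dict, Optional

# What the model tends to emit: pre-3.12 idioms, generics left unfilled.
def lookup_old(cache: Dict, key: str) -> Optional[str]:
    return cache.get(key)

# What a 3.12 codebase expects: builtin generics with args, PEP 604 unions.
def lookup_new(cache: dict[str, str], key: str) -> str | None:
    return cache.get(key)
```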
_dontseeme@reddit
Loss of mental model was the worst for me. I had a client that insisted I use ai for everything and paid for all my subscriptions and it got to the point where I just didn’t know what I was committing and could only rely on thorough manual testing that I didn’t have time for.
fisadev@reddit
My experience as well.
tegusdev@reddit
Have you tried Spec-Kit? I find its organizational features keep the LLM's focus much better than direct prompting does.
Its focus on feature development has made me a convert. It's still not 100% a "give it a task and let it go" solution, but it definitely relieves many of the pain points in your article that I've also suffered from in the past.
Hungry_Importance918@reddit
Not gonna lie, AI is def moving in that direction; you can kinda feel it getting closer every year. I'm lowkey hoping it takes its time though. The day it really writes most of our code, a lot of jobs will get hit hard, lol. Maybe I'm just extra cautious, but the sense of risk feels real.
thegreatpotatogod@reddit
I agree entirely with this article. AI is great at providing little reference snippets or simple helper functions or unit tests. It can even make complete simple projects if you like. It gets increasingly worthless as the project's complexity goes up, and starts adding more and more unnecessary changes for no clear reason, while still failing at the task it was assigned
reddit_ro2@reddit
Is it me, or is this conversational dialog with the bot completely off-putting? Condescending and dumb at the same time.
HolyPommeDeTerre@reddit
I liked the read
Andreas_Moeller@reddit
Thank you for posting this. I think it is important that we get multiple perspectives.
goose_on_fire@reddit
Seems a decent middle ground attitude.
I tend to pull it out of the toolbox when I get that "ugh, I don't wanna" feeling-- basically the same list this guy has, plus I'll let it write doxygen comments and do lint cleanup required by the coding standard.
But it does not work well for actual mainline code.
Spleeeee@reddit
If I see an “agents.md” or “Claude.md” file in a repo I immediately assume it is slop.