Code quality in the AI age
Posted by europe_man@reddit | ExperiencedDevs | 110 comments
This has been bothering me for quite some time now. With the advance of AI tools, we see more and more AI generated code in our projects. Now, stakeholders want speed, so they always value features over good and clean code. And, I don't blame them for it, makes sense from their perspective. Makes sense for the business.
But, due to fast pace, quality degrades, naturally. I mean, you can have standards, guidelines, guardrails, etc. However, at some point, if you become too strict then you become a blocker in the process and need to loosen up. Once that happens, code becomes average or below it, so there is pretty much zero chance of having a human glance over it. It always starts with AI and ends with AI. It is very hard to verify the intent behind the code or the bigger picture.
Now, some people push for this. We pass the torch from individuals (who might choose to leave a company at any point) to the AI that is always available. The argument for this is that you can always rewrite whatever feature you want as long as you use the AI. So, code quality, strict guidelines, etc. do not even matter, as you are far removed from the code and the solution. My argument would be, well, you need to verify changes, make sure they do what they are supposed to do, etc. Again, the counter-argument is that you can test it and ensure it works. This becomes a never-ending loop.
Obviously, I have my own opinion on code quality and how the code should look. But, that opinion is based on the pre-AI era. I still think it is relevant, I'd say even more than before, but, it is again my opinion. I can say that there are many other people with a similar opinion, but there are many (maybe even more) on the other side pushing for shipping fast without thinking about code or code quality.
tl;dr: What is your stance on code quality in this age? How do you ensure you are not the engineer always pushing back on solutions due to poor quality when most others think it is irrelevant because of AI? I don't want to give up on my standards, but, maybe they are too wired to the traditional coding world and should be heavily adjusted?
xt-89@reddit
Don’t blame AI because your company/team doesn’t have the good sense to be strict about guardrails.
europe_man@reddit (OP)
Where did you read that I blame AI? I am asking how others view code quality in the AI age, and if adjustments are needed.
Goducks91@reddit
Why would we need adjustments? Developers were pumping out shitty code prior to AI. The process of getting code merged hasn't changed; the only thing that changed is how we as individuals write it. It just places more importance on review, whether that be your own review or your teammates'.
europe_man@reddit (OP)
Because we changed the way we write the code? And, because code can be rewritten quicker now? So, it makes sense to lower the bar a bit due to this?
Jedibrad@reddit
I’m feeling similarly these days. Code has always been malleable, but it’s really turned liquid since December. If something is brittle, I’ll just… change it later.
There’s a happy medium. If you have too much code, the agent will suffer. Having constant +10,000 line PRs will be an issue in the world of agentic coding just like it is for human driven maintenance.
For software that will be maintained for decades (say, automotive firmware) this is a key factor. For B2B SaaS, startups, quick stuff... Code quality never was a factor.
xt-89@reddit
I think the core of the issue is that code volume grows linearly, but code coupling metrics generally grow superlinearly with code volume. Rigorous quality gating (even automated) and frequent refactoring can reduce the growth of coupling. The issue won't necessarily be that code quality is guaranteed to be inferior, just that it is incentivized to be inferior, since the individual developer can plaster over quality issues for longer. Even if the point where developers stop adding quality shifts, there are still market forces for it to exist. In highly important domains, you just spend proportionally more of your effort on code quality schemes.
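The superlinear-coupling claim can be illustrated with a toy counting argument (not from the thread): if any of n modules may depend on any other, the number of potential dependency pairs grows quadratically while module count grows linearly.

```python
# Toy illustration of superlinear coupling growth: among n modules,
# the number of potential undirected dependency pairs is n*(n-1)/2,
# i.e. O(n^2), while code volume grows only linearly in n.

def potential_couplings(n_modules: int) -> int:
    """Upper bound on undirected dependency pairs among n modules."""
    return n_modules * (n_modules - 1) // 2

for n in (10, 100, 1000):
    print(n, potential_couplings(n))
```

Ten modules admit 45 possible couplings; a thousand modules admit almost half a million, which is why gating and refactoring effort has to scale faster than headcount or code volume.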
Wide-Pop6050@reddit
This is harshly said, but it's basically it.
There has always been some reason or the other higher ups are prioritizing speed > code quality.
It's always been up to engineering management to establish code guidelines and be able to make the case for why it's important.
Ok_Individual_5050@reddit
I think it's fair to blame it though. Removing people one step from the code they're writing and encouraging them to go too fast makes the code worse. I don't see how that is taboo
HDK1989@reddit
Because it is still purely a business decision as to how fast your developers go. That decision shouldn't involve AI.
Companies were forcing their developers to work too fast at the cost of code quality long before AI, whilst others allowed devs to slow down and ship quality code.
xt-89@reddit
Sorry about that, I think I started replying to someone else on this thread and just applied it to the root. Also, my knee jerk reaction is to assume that’s the position on Reddit because it often is. But that’s just my bias and my mistake
fridaydeployer@reddit
Would you mind outlining what the «complex economic dynamics» are, as you understand them? Curious about your thoughts, because this sounds like the start of some arguments that might actually work.
xt-89@reddit
Sure. So let’s start with supply and demand as two dimensions. There’s also coding velocity, quality of software, quality of requirements engineering, strictness of quality gating, budget, and time. That makes 8 relevant dimensions that form some kind of surface.
Generally speaking, you want to find the location on that multidimensional surface that maximizes profit for the company over some time horizon. I haven’t done the modeling myself, but as an approximation, I can intuit that the optimal subsurface wouldn’t be the one that maximizes velocity above all else. In complex systems, you rarely have such a simplistic solution.
The people who claim that either are genuinely battling for the shortest term product market fit (i.e. super early stage startups that should be rewriting such systems in that scenario) or just haven’t thought enough about their actual situation. Incompetence is unfortunately common. Sometimes it’s because the incentive structure is broken and sometimes it’s because the leaders in question don’t actually know how to do that analysis.
hibikir_40k@reddit
This is the easiest time to push for more code quality. As more code is generated by one developer, you either shrink team size, or demand code quality, because otherwise defects go up.
LLMs can detect bad quality all by themselves though, it's just that few people are actually trying to use them for it. They can be the equivalent of an extremely advanced linter when you try. And you can make the recommendations actually mergeable, because the LLM understands the comments well enough.
If anything, you can make code reviews less contentious that way, because your 3 sentences can become a PR on top of a PR, and it all speeds up. Often there's pushback because implementing a suggestion takes time: you can make it take very little time, because you asked Claude Code to implement the fix yourself.
Bderken@reddit
NOW your stakeholders want speed??
FlailingDuck@reddit
Exactly. OPs entire premise is based on this. Stakeholders never cared, the engineers enforce quality. At my company the code quality checks are still automated and strictly enforced, AI hasn't impacted that.
If it has at your company, that's because you have lazy engineers who want some(thing) else to do the thinking for them, while they accept a paycheck to not do what they are paid very well for (think and make smart decisions).
positivelymonkey@reddit
Why you blaming engineers? My product manager shipped an 8000 line PR last week. Third one this month.
The last person that talked about quality got laid off.
FlailingDuck@reddit
Just because nobody else at your company enforces quality doesn't mean the onus falls on anybody but engineers. In some places that might be impossible, due to culture. But nobody else will care about code quality. The ones who care will leave such companies, and those companies are left with exactly the wrong kind of mentality.
Darkmayday@reddit
What is this logic? The onus is solely on engineers, but management can fire the engineers who care about quality? Sounds like the onus is on engineers and management.
FlailingDuck@reddit
When have you seen management care about quality? I mean the actual implementation of quality. Sure, they can diktat that all engineers must produce quality code; that's actually a baseline expectation of why they hired you. But I have never seen anyone other than engineers be the ones enforcing it.
Darkmayday@reddit
Right, but the point is engineers can't enforce it if management is firing the ones who care. Thus the blame isn't on lazy engineers like you said, it's on the management who fire those who care.
FlailingDuck@reddit
Am I missing the forest? You're talking specifics of toxic companies. That's not what we're discussing here. Did you forget OP's original point, that in this AI world quality is dropping? Yes, management has higher expectations of output because AI promises the world but is patently failing to meet expectations. Engineers are hired to make good decisions; the ones who let the AI slop proliferate are doing the role a disservice. When I read that POs are vibe coding 8000-line PRs: who is the gatekeeper letting them think that PR is worth any salt and letting the slop enter the codebase? It's engineers. They had the ability to shut that shit down and tell them these PRs are worthless/woefully under-specced.
I'm not even anti AI. AI should be able to produce higher code quality not less when used in the right hands. I use it everyday, when wielded properly it's a tool for good.
I understand the counter-point to this, "it's all management's fault", because essentially they pay the bills and many people still want a job. That doesn't mean the right course of action is to roll over and take it because Sam Altman wants a bonus this year.
Darkmayday@reddit
Rolling over to those who pay you is simply capitalism. I hate it too, but yeah, this one is management's fault, not "lazy engineers". In an ideal world we'd be unionized and have set standards, but we aren't.
unflores@reddit
We had a designer who came in and fixed a bunch of things. I think they touched 300 files without a clear change. I think I took a look at a few files, realized that it would be hell to validate and closed it. /Shrug
It's fine for people to vibe code certain things. My PO added an admin module to reuse existing admin apis to better group certain user actions. It was a small pr, easy to test and understand and could just as easily be thrown away.
It probably would have taken them as much time to specify it as to vibe code it.
I had a few pieces of feedback and I ended up taking the code over. I'm fine with that as a trade-off.
At the same time if someone came to me and told me that I have an agent now so they expect to see my 10x output, I would just tell them it doesn't work that way. If the company doesn't have money, then it has to make hard choices. If they need to take out enormous, potentially company ending debt... then they should have their eyes open about it. Ai still isn't their silver bullet.
CaptainCactus124@reddit
I’m sorry you work at a company where that happens
Zookeeper187@reddit
Code quality wasn’t the best even before. You will always start to hear “who wrote this?”, “we should rewrite this”, “this is legacy spaghetti” after ~2 year mark. The output is just much higher now and you will reach those points faster.
You can't keep up with nitpicks any more, like clean code, scoping, naming. Focus on the general architecture being done correctly, the implementation itself not causing any problems, performance to some degree, integration done without any edge-case problems, and testing being performed.
SingleAttitude8@reddit
Whereas now, no-one is accountable.
Animostas@reddit
No one was ever accountable. It was the "team's responsibility"
SingleAttitude8@reddit
But now not even the team is accountable. The company just tries to distance themselves with disclaimers such as "AI can make mistakes", and other responsibility laundering techniques.
Although the tide seems to be turning. Air Canada lost a court case recently for an AI chatbot error.
And Microsoft Copilot recently updated their terms and conditions clarifying that Copilot is for entertainment purposes only and not intended for actual productive work.
exergy31@reddit
It still says your name on the PR even if you used an agent to write it. You checked it in, you own it. Accountability stands.
ricktherobotguy@reddit
Code quality is even more important in the AI age. Bad quality burns tokens and context, and leads AI to make mistakes for many of the same reasons they cause humans to make mistakes.
Quality is also easier with AI. At one point I got tired of asking people to include a rollback knob for behavior changes, so I stuck a rule in the repository. I suddenly saw a massive uptick in changes with rollback knobs. Same thing for preventing utils/common/shared/etc dumping-ground classes. Same thing for keeping the readme/claude.md up to date with every change. I've also found people are more open to quality comments when they can just have AI fix the comments for them.
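A minimal sketch of what such a "rollback knob" might look like: a behavior change shipped behind a flag so it can be reverted without a redeploy. The names here (`NEW_PRICING_ENABLED`, `price`) are invented for illustration, not from the thread.

```python
# Hypothetical rollback knob: the new behavior is on by default, but
# can be switched off via an environment variable without a redeploy.
import os

def new_pricing_enabled() -> bool:
    # Set NEW_PRICING_ENABLED=0 in the environment to roll back.
    return os.environ.get("NEW_PRICING_ENABLED", "1") != "0"

def price(base: float) -> float:
    if new_pricing_enabled():
        return round(base * 1.10, 2)  # new behavior
    return base  # old behavior, kept alive as the rollback path
```

The point of the repo rule is that the old code path stays reachable until the change has proven itself in production.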
JuiceChance@reddit
I stand by my view on code quality during PRs. If the organization kicks me out then it means it is not for me.
europe_man@reddit (OP)
And, what is your view then? How do you justify that to the management and stakeholders? For example, I will review a PR and request changes. Some of the changes are related to code quality. Now, reviewing these things, explaining why it needs to be improved, it takes time.
It becomes challenging to cope with this, as the pace is slower due to PRs being rejected. Especially because POs maybe saw a feature in the dev environment, so they become overly excited and might push the decision to improve the code quality into the future. That future never happens. This was true even before AI.
JuiceChance@reddit
I don’t justify. The EM hates that, product hates it but I am not here for everyone to love me. Potentially, it is my personality but they don’t speak up as it always ends with them being put in their place.
fridaydeployer@reddit
How are they being put in their place? Intimidation, or actual arguments that stick? (I actually want to hear the arguments)
JuiceChance@reddit
No intimidation, bullying or anything like that, that is disgusting. Just a firm statement: "we are not cutting corners on quality".
fridaydeployer@reddit
OK. To me, that’s not an argument, that’s a statement. You’re not convincing anyone, just standing your ground. I guess that’s fine if it works for you. But as an EM, I’m looking for arguments that will bring people onto the same side, pulling in the same direction.
FlowOfAir@reddit
That's the thing, people pulling in the same direction is not going to happen.
Think of it this way. You're a professional, hired to do a job. You know how to do the job. You know the caveats. People without enough training cannot and should not tell you how to do your job.
If you were, say, a cook, you cannot have the store manager tell you you're being too slow if you're saying this is the recipe and that you cannot undercook the food. Even if customers get angry. You cannot risk food poisoning, unhappy customers, etc etc.
Same with software. There's only so many corners you can cut. If management hates it, let them fume. You know you're doing the right thing.
fridaydeployer@reddit
I don’t think I want to be on a team where the leadership has given up on getting the team members to pull in the same direction. So I’ll keep aiming for that, regardless of what you believe 😁.
The cook analogy is good, but I like to add that with software it’s actually possible to make something that solves users’ real problems with shit code quality. It won’t stand the test of time, and it won’t be easy to change, but the users don’t actually see the code, they only see the results. So that’s a little different from cooking, where customers consume the direct result. I agree with the main sentiment of your analogy, though.
Subject_Fix2471@reddit
I agree with you, the cooking analogy is also kinda rebutted with something along the lines of don't try creating a michelin star dish when someone just wanted a cheese sandwich...
Personally as an engineer I'll state what I feel is best, with reasoning. If management don't want to go down that road I don't see the point in assuming they're idiots/unreasonable/whatever else. I don't really know why I'd bother getting myself frustrated like that. I'll try and implement what they want, and if it goes wrong perhaps my initial scoping will provide an alternative approach.
Though I will say, some of the comments sound like pretty poor culture, so I do have empathy for some venting.
DWebOscar@reddit
These are very difficult arguments to make until an incident happens where quality is what saves things from total disaster. But TBH this is actually the argument: good quality code protects the team from unknown-unknowns. We don't have a reason to worry, until we do. When we do, you'll know it, but it might be too late.
another_dudeman@reddit
What was the argument for code quality before AI slop was a thing?
Unfair_Long_54@reddit
Just yesterday I found out I no longer care if they kick me out just because sometimes I'm using my brain.
onafoggynight@reddit
Some food for thought:
- Code quality was never a top priority at basically any company. Under time pressure, code quality is always one of the first things to be sacrificed.
- Devs, on the other hand, almost universally put a lot of emphasis on code quality. See posts all across this sub.
- Yet, we are apparently confronted with shitty code all the time.
I do believe there is objectively good / bad code. But I also believe that devs almost universally are very prejudiced. And that also, and especially, goes for AI.
So, assuming there are proper guardrails in place, I would really like to see a firm metric that establishes that AI produces worse code than the average dev at a company.
Because until then, this is basically "not invented here" in another shape.
bluetista1988@reddit
The "Good / Fast / Cheap" iron triangle has always been lopsided. The business will always want things just barely good enough but will always want faster and cheaper.
another_dudeman@reddit
We're all being theoretical still. We won't know the consequences (good or bad) for another couple years of agentic coding.
bighappy1970@reddit
Code quality is not important for most products.
Like everything in tech, there are trade-offs; know what they are, and use AI for what it’s good at.
If you don’t like change, you’re in the wrong career.
fridaydeployer@reddit
Thanks for writing this post! I’ve been meaning to express the same kind of concern myself, but you did far more eloquently than I would have.
I think some of the commenters are missing part of the point. What you are looking for is either arguments beyond «I like clean code» and «guardrails are important» that will help you convince your org that code quality still matters, or compelling arguments why we should actually not care anymore. And these need to be arguments that make sense in a business perspective, risks and rewards that translate to real money, not just esthetics.
If I’m reading you correctly (and I might be projecting somewhat), that is.
My go-to argument in the past has always been to write good code because it has to be read and understood later. That argument is falling apart (at least partly), because the counter-argument will be that Claude will be able to understand it, even if my variable naming is gibberish and the function is 1500 lines long. I’d like to argue that at some point, even AI will make the same mistakes that humans do when working with messy code, but that’s purely theoretical, I haven’t actually seen that happening yet. So I’m reluctant to bring up that point.
So the next argument is that ensuring high code quality is an insurance against the system turning into unmaintainable spaghetti, forcing a big rewrite every N years. This argument is probably valid, but has two weak points: 1. Humans have a hard time taking risks that are a couple years ahead seriously. Quarterly results are now, that spaghetti is still years into the future (also, see climate change for evidence). 2. This is also theoretical, we can never know that this will happen. The most evangelical of AI enthusiasts will argue that a big rewrite in 2 years will be just about a couple of prompts and having a coffee while the machine is doing the work (because exponential growth, etc).
That’s my two cents, I clearly haven’t solved this either.
FinsOfADolph@reddit
Part of me wonders making those arguments is so hard because business has destroyed the case for quality for their customers. Maybe this is just happening in my workplaces, but a lot of businesses have not cared about keeping a stable environment for their customers since ... At least the Great Recession. It also feels like leadership actively destroys any quality-centered metrics they themselves set if they get in the way of their next bonus or whatever.
Also - It's kinda hard to convince someone to care about quality when they promised you wouldn't be there to block them. Because you'd be unemployed.
fridaydeployer@reddit
I think Facebook’s «move fast and break things» ethos ruined a lot for the rest of us here. That may have been a good approach for them at the time, but it cemented a notion that that should be the standard for all teams in all situations (narrator’s voice: it shouldn’t).
FinsOfADolph@reddit
Very good point. Kinda made things harder for those supporting software down the line too.
un_mango_verde@reddit
We have never been able to predict the future, but that is not an argument for assuming current ideas about code quality are outdated. Why assume there will be continuous exponential improvement in AI? If LLMs hit a wall, we could go into a third AI winter. Sure, the current tech won't disappear, but it could stop improving at some point.
I don't see why we should default to "code quality is no longer important since LLMs will make it obsolete soon" and ask him to argue against that. That is not a given. "Code quality is still important until definitively proven otherwise" seems like a much more sensible position.
I'm not talking here about nice naming and formatting, that is subjective. There are studies out there about how automated testing and reducing various complexity metrics reduce the number of defects and increase iteration speed, you just need to look for them. Disregarding what the literature says right now based on an assumption of how LLMs will progress is just a bet, and the burden of proof is on his management to show that it's a bet worth taking (compared to adapting AI tech as fast as possible while keeping the same quality levels).
Honestly most places don't have the resources or size to actually study their software engineering practices scientifically and should arguably default to the status quo and risk free improvements, not huge bets they can't really even analyze properly.
fridaydeployer@reddit
Thanks, these are very good points!
single_plum_floating@reddit
Why would I ever want a product made by people who do not understand it nor care to? At that point, why am I even hiring you? Genuinely, at that point you are more of a liability than a goddamn literature writer using the same tool; at least he has some level of respect for his work.
fridaydeployer@reddit
Well, I’m actually not hiring people to produce beautiful code, I’m hiring them to help solve users’ problems. Good code quality happens to be one of the techniques that I believe is necessary to solve problems now and in the future. But what I’m saying is that it’s possible to argue that if the main job is to solve users’ problems, code quality doesn’t necessarily need to come into that equation.
And I’m pretty sure you have no idea about the coding knowledge behind some of the products you use. Sometimes it’s obvious, but we don’t have any guarantee that a good product is run by beautiful code.
onafoggynight@reddit
"that will help you convince your org that code quality still matters..."
It never directly did from a business point of view.
fridaydeployer@reddit
I disagree. It could be difficult to argue for, but the idea has been that spending a little more time now, buys you efficiency down the line. So it could make sense from a business point of view if you manage to not be short sighted. Whether business folks managed to do that in practice is a different question.
onafoggynight@reddit
That is most certainly true.
But that efficiency could have no economic value to start with. "Down the line" could mean at an irrelevant time horizon, or under very different economic constraints.
The point is, that bad code is never a blanket business / economic problem. And if you want to categorize it as such generally, that is mostly a hypothetical about the future.
It only causes business problems (defects, more maintenance, unacceptably slower change, ..) under specific circumstances.
That huge ugly module over there, that nobody really wants to touch, and that offends everyone with some good taste: if it runs and needs a minor change every few months, that's not a business problem.
fridaydeployer@reddit
Yeah, I think we basically agree here. It’s not that bad code never had business consequences, but more that they have a longer time horizon, which in some cases make them irrelevant, and almost always hard to make the case for.
For the case of the «huge ugly module» that works and rarely needs updating, I’ve tended to think of it as tech debt with different interest rates. A working module that’s «finished» and never needs to change has close to zero interest rate, and is therefore not the debt you need to attend to first. Of course, the Bank of Technical Debt is not to be trusted, so that interest rate might shoot up over night. But as long as it’s low, it can safely be left alone.
The more interesting discussion now, is about all the code that we change regularly.
yxhuvud@reddit
My view is that there currently is an imbalance, where some people and managers live in the delusion that code out fast is the only metric that matters. In reality, each place code is used has some necessary level of solution quality to solve the problem. Currently companies are producing MOAR code, but processes have not yet adjusted to ensure necessary levels of solution quality. Processes have also not adjusted to shifting levels of either expected or possible solution quality.
Solution quality can be addressed in many ways - better feedback loops, better analysis (done by AI or not) of existing systems, better processes etc. Here is where code quality comes in, as it is a tool that ensures code can be understood, changed and maintained over time. I would not be surprised if some of what we call code quality is pushed to the side, whereas the things that actually work will probably work for both humans and agents.
dbgtboi@reddit
This is what I tell people in my company who have super high expectations about AI and are pushing engineers for more
It's just a simple question, "if AI can automate a software engineer, why do you think you are safe from it and that your own job can't be automated, when your job is significantly easier to automate?"
SplendidPunkinButter@reddit
Even the Claude documentation tells you that there’s a finite context window, and that if you put too much stuff in Claude.md, then Claude will start silently ignoring essentially random parts of it.
What all of this means is that as your code reaches a certain level of complexity, Claude will not be able to handle it. Claude cannot hold the context of the entire program in its memory at the same time, and so it can’t sensibly make changes. And the more sloppy code bloat you add, the more of a problem this is.
All of which is of course a problem for humans too, which is why we invented things like abstraction and clean code. The whole point is you’re supposed to structure your code into chunks that can be understood one at a time, and so that you can look at what each chunk does without having to think about how each chunk works. Structuring your code like this, and doing it effectively, is more art than science, and AI sucks at it.
Don’t even get me started on tests. When you have a failing test, Claude is likely to just change the test to make it pass, even if you tell it not to. Because it’s not intelligent, and it’s not good at determining if it’s the code or the test that’s wrong.
dbgtboi@reddit
If your code is so complicated to the point where Claude can't handle it, then you have a severe tech debt problem and need to clean that shit up
Remote-Pen-8276@reddit
Hook ast-grep to Claude code and write a lot of rules, then include hooks for your code quality scans.
Make it so "good code" isn't subjective
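One way to sketch the suggestion above: run `ast-grep scan --json` from a hook and block the commit or agent step when any finding is at blocking severity. The exact JSON finding shape is an assumption here; check your ast-grep version's actual output before relying on it.

```python
# Hypothetical quality gate around ast-grep findings. The decision logic
# is pure so it can be tested; the actual scan runs via the CLI.
import json

def gate(findings_json: str, blocking_severities=("error",)) -> bool:
    """Return True if the change may proceed, False if it is blocked."""
    findings = json.loads(findings_json)
    return not any(f.get("severity") in blocking_severities for f in findings)

# In a real hook you would feed this from something like:
#   subprocess.run(["ast-grep", "scan", "--json"], capture_output=True)
```

Wiring this into a pre-commit hook or a Claude Code hook is what makes "good code" an enforced property rather than a reviewer's opinion.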
Ok_Individual_5050@reddit
But... It is? And loads of what makes code good is that the assumptions it makes line up with the real world, which a tool like that can't solve?
Remote-Pen-8276@reddit
I'm not saying that this would prevent everything, but it will prevent the worst of it.
LoveSpiritual@reddit
I don’t understand how it isn’t obvious that if the code is hard for humans to understand it’s hard for LLM’s to understand. Code quality becomes MORE important in the age of AI.
Moreover, good code becomes much cheaper to produce. Have one agent be a stickler for good code (however you want to define it), and have it feedback right into a coding agent, telling it to get it right this time. It may not be perfect, but it’s a darn sight better than 90% of the enterprise code I’ve seen at large companies.
We’ve got a lot to figure out, but good code is more important than ever.
Leading_Yoghurt_5323@reddit
“we can rewrite later” almost never happens… bad code just compounds until it slows everything down
SufficientBar1413@reddit
To be honest, you are not incorrect. If anything, the quality of code is more important now than ever, not less. 😅 AI simply facilitates the rapid generation of poor patterns.
What I have observed to be effective is a shift in focus: rather than contesting every line, it is essential to establish good boundaries (such as clear architecture, naming conventions, and interfaces) and allow tools like Cursor to fill in the gaps. You provide guidance, and AI carries out the execution.
Ultimately, the more significant transformation is that developers are evolving into end-to-end builders… encompassing code, product, and delivery. For instance, I will utilize Runable for the non-code aspects to avoid compromising quality merely to save time in coding. Striking a balance is more important than achieving perfection
remy_porter@reddit
If code can be generated, it means your abstractions are bad. You need a more elegant way to talk about the problem domain. This was true when you’d generate data access classes off of database schemas and it’s true when you use LLMs to generate code.
Immediate_Fig_9405@reddit
I think in the AI age, the term code quality will become irrelevant. The only thing that matters will be if the small modules that are generated pass tests and meet performance requirements. I can see a microservices arch taking over with a bunch of AI written black boxes. Humans only write the "contract" terms or specs that should be met.
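What a human-written "contract" for an AI-generated black box might look like, as a sketch: the spec is expressed as executable checks, and any implementation that passes them is acceptable. The `slugify` example and its rules are invented for illustration.

```python
# Hypothetical contract: humans write the checks, any implementation
# (AI-generated or not) that satisfies them may be swapped in.

def check_slugify_contract(slugify) -> None:
    """Spec: lowercase output, spaces become hyphens, idempotent."""
    assert slugify("Hello World") == "hello-world"
    assert slugify("already-a-slug") == "already-a-slug"
    # Idempotence: applying the function twice changes nothing.
    assert slugify(slugify("Mixed CASE Title")) == slugify("Mixed CASE Title")

# One possible black-box implementation the contract would accept:
def slugify(text: str) -> str:
    return "-".join(text.lower().split())

check_slugify_contract(slugify)
```

In this model, the contract file is the human-owned artifact; the implementation behind it can be regenerated freely as long as the checks keep passing.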
PressureAppropriate@reddit
I don't think it matters much anymore...
You can ask AI to rewrite whatever part from scratch as many times as it takes to get it right.
Good code is meant to be read and understood by humans. Take humans out of the loop...it's just code by and for a computer.
Don't get me wrong, I think we're building a house of cards and this will bite us in the ass one day or another but I don't care... I work for a shit company building shitty products that nobody would use if they had any common sense.
Pleasant-Memory-1789@reddit
My takes:
Code quality doesn't seem to matter as much anymore. Contracts, docs, and good tests are 100% more important for agentic development. Let AI handle the implementation (of course, still review and clean up the really stupid stuff).
I actually find myself writing higher quality code. In the past, sometimes I knew I needed to refactor and actually knew exactly what to do, but I was too lazy and would rather scroll on my phone. Now I can tell AI how to refactor the code - and I still get to scroll on my phone!
single_plum_floating@reddit
??? Sounds like they didn't matter in the first place for you.
Pleasant-Memory-1789@reddit
Those aren't mutually exclusive. I also said code quality doesn't seem to matter as much
hipsterdad_sf@reddit
The real issue nobody is talking about here is that AI generated code passes code review at a much higher rate than it should because reviewers pattern match on "does this look reasonable" rather than "do I understand what this does and why." When a human writes code, the PR usually tells a story of the decisions they made. AI generated code looks clean but there is no decision trail, so reviewers just skim it and approve.
On a previous team we started requiring that any AI generated code in a PR had to include a short paragraph from the author explaining what they asked the AI to do, what alternatives they considered, and what they verified manually. It slowed things down slightly but the number of production bugs from AI generated code dropped significantly within a couple months.
The ownership problem is the core of it. If you generated the code, you own the code. That means you need to understand it well enough to debug it at 3 AM when something goes wrong. If you cannot do that, the code should not be merged regardless of how clean it looks.
bossier330@reddit
I have many lived examples of how fast “meh let’s let it slide” can go off the rails fast. You slowly accumulate “invisible” tech debt, which makes future AI iteration worse, slower, and reinforces the debt. Keep PRs small and make sure you understand the architecture of what you’re shipping.
For personal projects, AI debt is fine. At scale, it’ll break you.
Mediocre-Pizza-Guy@reddit
Right now, using the best available models and tools on a sufficiently complex codebase means you will quickly run into 'walls' that AI cannot fix.
You describe the bug and tell it to fix it. It changes 8 files and says 'It is fixed'. The code doesn't compile. You prompt it again, it changes some more files. Repeat over and over, but no matter how many times, the bug is either not fixed, or some other obvious bug is introduced.
Right now, we are generating huge amounts of the debt for short term gains. Usually, it's not even a full feature, it's just the appearance of progress on the feature before we hit the wall.
If AI advances sufficiently, then the mindset of 'Who cares?' becomes perfectly reasonable. The code can be trash that no human can understand... Because AI will maintain it.
But that's a giant IF.
It's also a familiar IF. We have been through similar cycles - devs who started out with assembly said similar stuff about the 'slop' produced by higher level languages...but for most of us, that's not even a consideration.
Maybe one day all code will be handled by AI and instead of individual people prompting them we will just transition to detailed requirement driven development and the AI will effortlessly produce it in any underlying language. Or whatever.
But we can't do that yet. Because for as great as AI is, it hits the wall frequently, even on trivial tasks.
At the end of the day, I'm an employee. Ask my opinion and I'll share it, but I'm getting paid to execute someone else's vision.
If they want slop, I'll give them slop.
Eridrus@reddit
How much power do you have in the org? If the answer is not much, then you're just going to get steamrolled. Quitting is going to be an increasingly bad idea since the industry is rapidly adopting AI tooling everywhere.
My suggestion is to find some greenfield project, agree that it's going to be all-AI with no quality pushback, and basically let that be a sandbox for the tools.
I think we have a responsibility to figure out how to work with the tools as they get better. Historically, this has been a lot of manual review, but maybe going forward it's going to be far more about architecture than exact correctness of all the code.
To some extent, I think there has always been some set of code you can evaluate easily with testing, and some code where that is not sufficient, and being clearer on which is which will be important in the future.
Just using your influence to say things should stay the same is not going to be tenable, so try to do some deeper thinking.
crazyeddie123@reddit
I figure if the code is easier for me to read, it'll be easier for Claude to read as well. So it's still worth making it good and readable.
DutyStrategist1969@reddit
The vibe coding debate is really a code ownership debate. If nobody wrote it line by line, the question is who debugs it at 2 AM when it stops working. Speed without understanding looks identical to understanding until the first production incident.
neuronexmachina@reddit
Yup. I think any code that's merged needs to have an approval from the team that will be responsible for dealing with it going forwards. Approving is accepting that responsibility.
Inf3rn0_munkee@reddit
Code quality is still important to me as one of the most senior engineers in the team. The arguments for code quality are the same as they were in the non-AI age: bad quality means we will spend more time on it at an unknown future time. The unknown part there usually helps get the point across because velocity is based on predictability.
These days I'm looking for ways to put in the guard rails, like skills files in the repo and automatic AI code reviews that still need a human to verify and do the final review.
I am sometimes the blocker, I'll reject a PR if the design is terrible but make sure to explain exactly why it's terrible including how it will harm us in the future. I.e. it's not enough to just say "we don't accept bad quality" we need to help non technical people understand why bad quality is bad.
U4-EA@reddit
I wonder how things might change now that Anthropic and OpenAI are putting up their prices? If the subsidising completely stops, people may find that they have garbage code only the AI can make sense of, and doing so may be very expensive.
D-Alembert@reddit
For decades now, advanced compilers have turned source code into machine code that is optimized to the extent the compiler can manage, creating machine-code gibberish that we could understand but in practice don't.
Perhaps source code is going the same way. Perhaps we're heading to a future where source code isn't the source any more, just another layer of machine code for machines to deal with
coderstephen@reddit
If this is the case, we are doing it badly. Because most don't commit the prompts that were used to generate the code, just the code itself. That's like writing code, compiling to assembly, and then committing only the assembly to Git...
nkondratyk93@reddit
the 'standards and guardrails' framing misses something - guardrails that nobody enforces aren't guardrails, they're suggestions. real question is who actually owns the quality gate. on AI-generated code especially, if it's not someone's explicit job to reject it - it doesn't get rejected. speed pressure means the default is always ship it.
Perfect-Campaign9551@reddit
These posts are getting lame. I think they are just bots.
You can quite easily enter your guidelines and guardrails as prompts to the AI , you know
AI can write much better quality code than you. That's just a fact. Sorry to burst that bubble.
If you're worried about that, then prompt for it.
marzer8789@reddit
Than you, maybe. In my experience the code AI writes is never better than what I'll produce by hand, but it can be a good starting point or rubber duck.
duffedwaffe@reddit
The reality is that's how it's always been. They have never cared about clean or efficient code. Just working features.
The only difference is now they think they're experts because they can slam a bunch of sticks together using Lovable and call it a sculpture. Because they only see features, they see no gap between the AI demo and the hardened production juggernaut that STILL takes days/weeks to develop properly, even with AI helping.
c-digs@reddit
Code quality in the AI age can go up. PRs can become more consistent. Juniors can produce code like mid-career engs. But that requires careful setup of the operating environment for the agents to work in, and setting up the agents to write very high quality code.
-ScaTteRed-@reddit
Business users don't care how elegant your code is; they care about whether the product is reliable, fast, and solves their problem effectively. If you ship slowly, you lose the market, which affects your salary/KPI. That's why I value a world-class feature more than world-class code.
The code might not be perfect, but if the feature works well, that's what matters. It's not something to get overly attached to; it's just a few lines of code among hundreds of thousands in the project, across hundreds of repos in the company.
And if issues come up later, they can always be fixed, even by AI, while I am chilling with a cup of coffee.
SingleAttitude8@reddit
But is it actually reliable? Aren't systems becoming less reliable post-AI, because testing has become more superficial and self-fulfilling, rather than meaningful and fit for purpose?
It's like asking a student to write their own exam questions, then watching them pass the test with flying colours, only to fail in the real world.
-ScaTteRed-@reddit
Then how did you verify your system was reliable before AI came along? Just do it the same way.
Bugs always exist regardless of whether the code is written by AI or not. Don't assume AI code is always bad and human code is always perfect.
another_dudeman@reddit
Testing in prod baby!
europe_man@reddit (OP)
But, can they actually be fixed by AI? Here's my problem. Say you ship a feature quickly to prod and users start using it. Then, some weeks after, first bug comes in. Now, obviously, you use AI to fix it and afterwards you verify the result by testing, using the feature, whatever.
Now, since you are not really the owner of the feature, things can slip past you, and given the poor quality, you rely more on AI explaining what the code does than on reading it yourself, because it is hard to read. But AI says the fix is good to go, so you ship it. However, once it is in prod, maybe some other bugs pop up, maybe you broke something else. This lowers trust in the system, and customers notice very quickly.
onafoggynight@reddit
All of those hypotheticals and issues have existed before.
Artistic-Border7880@reddit
If I write Python code I don’t code review the assembly that the interpreter generates.
Medium to long-term we should get towards that direction.
I know that compilers are deterministic and an AI interpreter isn't… but I think the goal should still stand.
Security and guard rails become more important than ever obviously.
another_dudeman@reddit
This kind of thinking will get people killed
Esseratecades@reddit
You can't argue that AI can do things almost as well as a person but faster when you lower the standards for it to do so. I'm not just as good as you if I'm graded on a curve and you're not.
Nervous-Tour-884@reddit
I don't think code quality and going faster necessarily have to be so opposed to each other. You can have both to some degree. AI makes the fixtures that enable quality much easier to implement. The work it takes to implement TS, unit tests, playwright for automated testing, automated PR reviews as an additional quality gate, it all has got so much easier.
I type a lot less code, and generate a lot more value, but I now spend more time on QA and the fixtures that enable code quality.
Quality went up, but it wasn't automatic. It was because AI makes it easy to implement the fixtures that help enable quality.
matjam@reddit
Our projects have AGENTS.md files that document the projects coding standards and architecture.
I’ve shared my prompts with my views on design and implementation. I have an extensive agent prompt for code reviews that exhaustively checks for the kind of things I look for. I’ve refined it over months by manually reviewing afterwards and adjusting it for anything I notice it didn’t catch.
I’ve not needed to adjust it for a while.
This situation reminds me of what happened when the power loom showed up in the early 1800s. For centuries, weavers sat at their frames threading every pass of the shuttle by hand - slow, painstaking, skilled work. Then the Jacquard loom comes along and suddenly one operator with a stack of punch cards is producing in hours what used to take weeks. But all the weavers didn’t just vanish. The ones who made it were the ones who stopped thinking of themselves as people who push shuttles and started thinking of themselves as people who understand fabric. They learned to design the patterns, set up the machines, figure out what’s wrong when the output looks off. The craft went from making the cloth to directing how the cloth gets made.
That’s exactly where we are right now with AI and code. We’re not going to be hand-weaving every function and for-loop for much longer - that’s just the reality of it. But someone still has to understand the grain of the fabric. What good software actually looks like. How the pieces fit together. Where the machine is going to produce something brittle if you don’t step in. The job isn’t going away - it’s moving up a level. We’re becoming the people who program the loom, not the people who push the shuttle.
Or you know, you could start buying Sabots.
FlipperBumperKickout@reddit
I've yet to see it matter.
If a day comes where AI can't just rewrite something because the codebase is simply too bad, then we will know it mattered. Otherwise 🤷♀️
lordnacho666@reddit
I think the danger isn't so much in the old notions of code quality. AI is pretty good at making nice-looking code in the human style, and it is pretty good at cleaning up its tracks.
The danger is that you think you can build anything, and your code base ends up being a sort of meta-code made of nicely written pieces that put together form a huge spaghetti.
When you dive in anywhere, it doesn't have the classic code smells. No global state, plenty of test coverage, follows conventions.
But when you try to do something new, you are still in a swamp of complexity.
Most-Bookkeeper-950@reddit
This will change as the models continue to improve
SingleAttitude8@reddit
Exactly, it will get much worse.
More dependence on AI, less human accountability, more systems failing.
Minimum-Reward3264@reddit
Code quality was never the priority.
bystanderInnen@reddit
AI makes maintaining code quality easier. Bad code quality existed prior to AI; it's bad engineers.
Medium_Ad6442@reddit
It's true that bad code has always existed, but now you can multiply it much more easily with code agents. You can't say that everything is the same as before when actually it isn't.
bystanderInnen@reddit
But it can also help bring code in line with principles, add tests, etc. It works both ways; AI is just amplifying.