AcanthaceaeBubbly805@reddit
Does anyone with a professional career even use “vibe coding” other than LinkedIn grifters?
Significant_Treat_87@reddit
you might be shocked but yes sadly :( it’s being enforced at my company and anyone who wasn’t already a great engineer is now churning out total shit imo. i’ve gotten REALLY direct in code reviews because if people are going to make me look at stuff they didnt read / didnt understand, i’m not going to take time to be polite either
tcpukl@reddit
Are the technical directors not even looking at the shit code being submitted and seeing the quality of the project nose dive?
Significant_Treat_87@reddit
I don’t think so really. My company is hugely profitable (almost a billion in profit last year) but we are somehow locked into a death spiral lmfao.
Nobody in control can figure out how to grow the company more and the shareholders aren’t happy to simply perfect the already shitty products we have that do turn a profit. AI bandwagon it is!
Only-Cheetah-9579@reddit
doesn't grow -> layoffs
Stealth528@reddit
Same deal at my company. Very profitable, by far the biggest name in our space. But now that they're out of ways to grow the business organically, and you know our PE overlords can't accept just 10x'ing their investment, it must increase more, every year, forever. So AI bandwagon + layoffs + insane amount of acquisitions it is.
Significant_Treat_87@reddit
for a second i wondered if we worked for the same place, but i don’t think mine is doing any acquisitions right now haha. so that’s scary that this is just a trend across the industry.
Dish-Live@reddit
This is some sort of systemic issue at this point. I hear the same thing from everyone. Cutting costs, chasing growth that isn’t there. I think we’ve optimized ourselves too much for quarterly profits and it’s just cooked now.
humanquester@reddit
What I don't understand about this is why companies can't wait till AI is more mature to integrate it into their core stuff. Like back when the first airplanes were invented people weren't like "We need to integrate flight into our shoe production and distribution business, everyone is doing it, if we don't we'll be left behind!"
Ok, so, if you did this when steam power was invented it might have actually been a good idea - but sometimes maybe it's best to wait and chill and see what happens?
Significant_Treat_87@reddit
i guess if you’re a ceo you probably consume a lot of bullshit business content. if that’s what you’re reading, you’re being promised 50 or 1000x productivity gains in the near future. to them it seems like unlimited productivity unconstrained by headcount.
it’s insane though, despite working for a crazy profitable company, revenue is declining, and in spite of that we suddenly have near unlimited ai budget (it’s f*cking 2 grand a month per engineer lmfao).
nobody at the top in big tech seems to realize the problem isn’t lack of productivity, it’s lack of new ideas. if you want unlimited growth you have to bring crazy new shit to market that gets people excited. nobody really seems to have idea incubators anymore. remember the “moonshot” era in the 2010s? my company laid off the entire incubator division 3 years ago lol (it had produced many profitable products). now literally the only idea big tech has is f*cking cyberpunk transhumanism lol.
finance is calling this tech’s “hard money era”. no more investment in ideas, only a mad dash to the promise of ASI / AGI that will allow the first mover ultimate power. if you’re on the front lines using these tools, imo it feels like agi is decades away. but if you’re a CEO i guess you wouldn’t have noticed that yet :(
ToBeEatenByAGrue@reddit
The crazy thing is that the AI companies are losing money on that $2k per month. None of them are remotely close to profitable and they're burning investor cash at an astonishing rate. They will have to increase prices dramatically just to break even.
Nuzzgok@reddit
As annoying as it is I can’t really blame them. They truly believe AI can almost deliver on the promises, or that it can already. If that was the case, 2k per head is an astronomical saving compared to hiring more devs.
It’s a disconnect as old as time. Nobody above maybe two levels up from you cares anything about the code, the stack, anything. They only see product. Same thing horizontally - any other area of the business just doesn’t care. That’s a lot of people that can buy the hype
humanquester@reddit
Yeah, that's a good point about new ideas. Like there was a post here yesterday asking why ai hasn't increased the number of apps or steam games released recently - but like are there any really new ideas for apps?
It feels like the only thing people do these days is spin up a random generator that mixes words together and come out with a new app based on that. Like Whatnot is just a mix between twitch and the home shopping network.
syberpank@reddit
Imo, the big difference is we didn't have as many middle management roles revolving around being able to create powerpoint presentations showing more and more fun numbers every week.
GameRoom@reddit
I've gotten that nudge from above as well, but also it's clear that the VPs don't know what vibe coding actually is, and in practice the SWEs interpret it as "use more AI to help write your PRs." Which imo is fine. Actual vibe coding, per its original definition, would not be suitable for any code that's checked into production. You have to read the code you sent out for review, obviously!
JunkShack@reddit
We have juniors churning out such massive PRs now it’s difficult to know where to start commenting without knocking over the house of cards.
mr_brobot__@reddit
Massive PRs get instantly rejected for being too big. Break it up into pieces so it’s digestible.
Significant_Treat_87@reddit
Seriously lol. I actually had my PM use AI to try and update one of our critical READMEs…
i left like 18 comments and others left more and it would have been much faster if one of the engineers had just done it themselves. instead we are all spending time looking at this MR that is beyond bullshit
SmellyButtHammer@reddit
I’ve noticed in some code reviews I’ve had it’s like me and the “author” are both discovering what the code looks like at the same time…
LongIslandBagel@reddit
We’re currently in a weird part of AI for coding. I feel like I can get it to generate proof-of-concept functionality very easily, but refactoring everything to be efficient, scalable, and consistent in nomenclature/syntax/whatever is still time consuming.
It’s nice to show executives how something can work, but then leadership is also expecting that same proof of concept to be immediately deployable and that’s never the case.
davispw@reddit
Yes.

It’s especially helpful tackling an unfamiliar codebase, library or language. It often takes multiple attempts and/or refinements.

Could I have done it faster myself? Sometimes. But while the LLM is thinking, I get to think too: research, the design, or what I want to try next. (Or just answer chats/emails.)

The standard for production code quality at my company is very high, so there is no immediate disaster looming for me. Still, I see prototyped code that is…questionable, and would need a rewrite.
temporaryuser1000@reddit
I sometimes vibe code tools that help me in my work, but never the work itself.
SuccotashComplete@reddit
Yes
OK_x86@reddit
Is vibe coding just taking the AI code as-is, unmodified? Because I do use AI tools and review/tweak/modify as needed. It basically just saves me some typing, usually.
And I review people's PRs looking for bad slop
TheGocho@reddit
Yes.
Cursor and many other IDEs/tools control everything in your project. You just prompt it to do something. If it works, great. If not, you paste the error and ask for it to be fixed.
That's the reason a guy lost his production database, and another ended up with a bunch of SQL injections in his web app.
kmai270@reddit
I think an acquaintance of mine's company has switched to vibe coding.. not sure how big and how much, but apparently she was told to only vibe code and they shipped a bunch of features
And you guessed it
Bunch of bugs in production
GoTeamLightningbolt@reddit
I vibe code (and then double check) unit tests and Storybook stories sometimes.
phil-nie@reddit
90% of people I see using it are using it to refer to any AI coding, which isn’t even what the term means. If you’re looking at the code, you’re not vibe coding.
NoCardio_@reddit
I use it when I do POC‘s, half joking though.
creaturefeature16@reddit
I've been thinking a lot about this as I've been working with these tools.
I just finished a small side project where I, so far, have not written a line of code. I spent a few hours generating a PRD, which I then used to generate a detailed technical specs document, which I then translated to a .MD file and placed inside my empty project folder. I used Claude Code to generate a project-progress.md file that contained the outline, plan, and a checklist of all tasks/subtasks. Then I ran the command and had it begin. It was a smashing success on the first go, except there was a minor syntax error breaking an XML import. I didn't fix it myself, but rather just pointed the tool to the error and it was patched in short order.
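For a sense of what that scaffolding looked like, the progress file was shaped roughly like this (illustrative only; the real task names were specific to my project):

# project-progress.md

## Outline
One-paragraph summary of the app, lifted from the technical spec.

## Plan
1. Scaffold the project structure and config
2. Implement the core data layer (including the XML import)
3. Build the output/UI layer
4. Tests, cleanup and a refactor pass

## Checklist
- [x] 1.1 Create folder structure and config files
- [x] 2.1 Parse and validate the XML input
- [ ] 3.2 Wire the UI to the data layer
- [ ] 4.1 Final cleanup / refactor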
From this point on, I'll likely just take it over and do some cleanup and refactoring, but I technically could just write all my requirements into my MD file and just assign it, which means this would be the first project I've done end-to-end without myself writing any code, but done purely through these tools and verification.
I have mixed feelings about it. I rather like the productivity gains, but I feel like there's dangerous territory now. Not necessarily in terms of security (although that is a big factor, just not with this project) but in terms of skills. So much learning happens in the seemingly mundane and banal, in the rote and repetitive. If I were doing this project without these tools, I'm quite sure I would have run into a lot of issues I had to work through, which would have offered all sorts of "micro-lessons", connections to be made, and "aha!" moments to experience.
Sure, I turned a two day job into a 4 hour job and gained so much time...but what I potentially lost feels hard to measure and quantify.
not_napoleon@reddit
All the success stories I hear about these tools are starting from a clean slate. That's great and all, but I've gotten to start from a clean slate once in 20 years in the industry, and even then, it was only a clean slate for a couple of months. What I really want to see is someone take a project that's been running in production for 5 years, with a million lines of code, and add a feature to that without breaking anything else. Then I'll start to believe there's some utility to these tools.
ernbeld@reddit
I have a counter-example/anecdote. Had to implement a new feature in a relatively complex existing code base. I treated Claude Code like a junior developer who was new to the project. I "talked" to it to describe how the application works, where the primary data structures and database models are, and asked it to "deeply analyse" the code and explain back to me how it all works.
Once I was satisfied that it "understood" the code, I then described the new feature we needed and provided a few starting points for where to look. Then I asked it to come up with an implementation plan.
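To make that concrete, the introduction was roughly along these lines (paraphrased, with placeholders instead of the real names):

1. "This is a <domain> application. The entry point is <main module>, the primary data structures live in <models directory>, and the database models are defined in <schema module>. Deeply analyse the code and explain back to me how these pieces fit together."
2. Correct anything it gets wrong and repeat until the explanation matches reality.
3. "We need to add <feature>. Relevant starting points are <file A> and <file B>. Come up with an implementation plan before writing any code."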
It was a clever solution that it came up with (totally a mind-blown moment for me). It implemented it, and it worked, almost out of the box.
All of that took an hour. It would have taken me MUCH longer to implement all of this, maybe a day or two.
What I learned: Taking the time to introduce the system to the agent is crucial if you want it to work with you on a legacy code base. It doesn't magically understand all your obscure business requirements. Maybe equally important: This new feature was confined to one particular sub-system. I'm sure it would have been more challenging if this had been an all-out expansion of capabilities across all systems.
But at any rate, given the right requirements and a careful introduction, those agents can be very useful even in legacy code bases.
creaturefeature16@reddit
/u/not_napoleon likely hasn't used these tools, but rather likes to spout off with generic statements born from bias (mixed with a little fear). Any solid developer who knows what they're doing has integrated these tools properly and exposed their full capabilities. And as a result, wouldn't say they don't work in existing code bases.
nomiinomii@reddit
Yes this happens already at my big corporation with multiple giant repos
spiderpig_spiderpig_@reddit
Yea. AI is great at writing poems on a blank page or writing fresh code in an empty project. It can use the most common syntax and libraries - the number of potential correct solutions is huge and so the probability of success is high.
But when it has to fit a specific problem in a specific order and the number of valid solutions is more constrained, it tends to struggle. Naturally, as a project grows over time, the constraints on solution scope tighten down - less freedom to move when doing dependency changes, refactors, etc.
thunderjoul@reddit
I feel like it’s a matter of scoping, and to echo what others say, not writing the code does not necessarily mean not doing the thinking. If you know what you need to implement and how to do it, and give it the context, it’s not so terrible. Not great, but it works.
tcpukl@reddit
But even then, are people looking at the code and architecture it's creating and realising it won't even last pre-production?
spiderpig_spiderpig_@reddit
They don’t know what they don’t know
mamaBiskothu@reddit
Is your code well structured? Have you tried actually competent agents like augment or ampcode? Both these agents work in true production codebases just fine, i only manually do the most important tasks which is 1/5 of my job.
creaturefeature16@reddit
I find that to be a bit of moving the goalposts, considering just 1.5 years ago everyone was saying how these tools could only produce individual functions/snippets and couldn't generate full projects. The capabilities are certainly growing and it's not really possible to deny.
Along those lines: I have a project that is similar to what you're talking about. I started it before GPT-3.5 was ever released and have recently integrated Claude Code into it. I can say with unequivocal certainty that it does a phenomenal job of understanding the code base writ large. Sure, there are nuances that only I am aware of, which I can tell it misses, but I've used it to add many features with no issues, some even with just a one-shot prompt with minimal context.
not_napoleon@reddit
I agree it's moving goal posts, but that's the whole tech industry. Would you say someone who complains that the iPhone 12 only has a 12mp camera, compared to the 16's 48 megapixels, is moving the goal posts? I would love to live in a world where we said "that's good enough, we can stop now and enjoy the fruits of our labor", but that's not how capitalism works.
> I think your standards for success are a bit unreasonable

It's absolutely unreasonable. My boss's expectations of me are unreasonable as well. We gave up giving people stickers for a good try in like the third grade, and I'm even less inclined to do so for a giant pile of linear algebra. The cost of these tools is astronomical (hell, I'm even paying for them when I don't use them, in the form of my rising utility rates to subsidize building out new power plants for running data centers), and my expectations have grown to match their cost.
Have they made progress in the past couple of years? sure, I don't doubt it. But like self driving cars that have still failed to really take off, it has to be a lot better than it is now to be worth the cost, IMHO.
creaturefeature16@reddit
None of what you wrote is wrong nor do I disagree, but also a bit irrelevant; you said all the success stories come from a clean slate and all I was saying is: that's unequivocally wrong. You can deny that or say "it's not enough lines of code" or whatever arbitrary metric you want to assign, but I just had to address that erroneous presumption. They are absolutely helpful in large, pre-existing projects.
nayshins@reddit (OP)
I think there is going to be a point where the details of the problem/solution become more important than the implementation details. We are not there yet, but I am starting to see the path to getting there.
sebaceous_sam@reddit
it’d be crazy if at some point we had a way to directly communicate those solutions details to the computer… almost like some sort of language 🤔
creaturefeature16@reddit
I'm struggling to agree with that. As the saying goes, "The devil's in the details". And innocuous details can result in potentially catastrophic problems. I fear that if we abstract too much of ourselves away from the implementation details, we'll lose the ability to see those problems/solutions.
Bricktop72@reddit
You can also use a draw.io file to describe the workflows. Then have the AI process that along with your documents.
nomiinomii@reddit
Yes, once Google searches came along we lost the micro-lessons and skills about learning the Dewey Decimal system, and how to navigate index pages, the yellow pages, encyclopedias, etc.
Your concerns sound exactly the same: about losing some micro-lessons and skills which frankly will no longer be relevant as critical knowledge, unless you're in a specialized field.
creaturefeature16@reddit
Hm, no, not the same whatsoever. A traditional Google doesn't just "give answers", it gives resources.
Your entire premise fails at even the most basic scrutiny.
Old_Dragonfruit2200@reddit
Do experienced engineers still post here? Feels like every post is made by a junior nowadays
datsyuks_deke@reddit
All this sub is these days is talking about AI, it’s exhausting.
atomheartother@reddit
OP is a staff engineer at netflix.
nayshins@reddit (OP)
Idk read the article and then the bio
mamaBiskothu@reddit
Or a dinosaur shouting at clouds (i.e. ai)
originalchronoguy@reddit
The two paths are not mutually exclusive. Such a false assumption.
The most important thing is technical design. If you have a good blueprint, it can do wonders for both tracks. I see people with non-AI approach do stuff ad-hoc, no SAD (software architecture document), high level, low-level system designs. No ADR (Architectural Decision Record) either.
Some of my apps have over 100 artifacts. Dozens of flow diagrams, model definitions, etc.. That is 99 more than some system designs.
And Claude can read those 100 designs just fine. It knows I have a modal. It has 15 class names. One class has a listener that fetches from an API with a Swagger contract. It sees the data model.
And guess what, when it follows those 100 artifacts, it works just fine.
So the people complaining about AI being messy, where are their system designs? In their heads? Or is it documented somewhere you can hand to a junior staff member or an AI agent?
maverickarchitect100@reddit
How do you get Claude to 'read' those 100 designs? Do you just ask it to ingest those 100 docs?
originalchronoguy@reddit
I have a master TOC/Index (table of contents) that links out to as many as 100+ docs.
It usually lists 20 key sections. Then those individual sections will each have 5-15 documents.
So it reads what it needs. It usually reads 20-30 at repo creation, then reads along as I get further along in different areas. I am now at a point where what used to take 3 weeks, then 3 days, I can execute as a "rebuild or recreate XYZ app" in 40 minutes, by having it re-run and re-follow the runbook.
Always have one Markdown .md that says "refer to /path/child.md".
So the agent always reads the TOC index, then knows where to go depending on the task.
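As a rough illustration (generic section names, not the real ones), the master index is shaped like this:

# /plan/INDEX.md

## 1. Architecture
- SAD.md (software architecture document)
- adr/ (architectural decision records)

## 2. Data models
- entities.md (model definitions)
- api-contracts.md (Swagger contracts)

## 3. Flows
- one flow-diagram doc per feature

...roughly 20 sections in all, each linking out to 5-15 child docs.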
I have multiple agents reading the same docs.
1) Coder.
2) Auditor.
Example entry point:
AGENTS.md, CLAUDE.md, .github/copilot-instructions.md are all entry points. MOST respect AGENTS.md
They refer to my /plan/ which is a git submodule of docs. Here is an example of an AGENTS.md:

# AGENTS.md

## Project Agent Instructions

This repository uses `/plan/INDEX.md` as the single source of truth for project rules, coding standards, and architectural guidelines.

### Instructions

- Always consult and follow the rules defined in `/design/MAIN.md`.
- If any conflicts arise between inline comments or other docs, defer to `/plan/INDEX.md`.

### Agent Behavior

....

Apply the same to .github/copilot-instructions.md and CLAUDE.md.

-----

Auditor Agent:

Rules Guardian — Agent Reviewer Guide

Purpose

- Enforce project rules declared in `/plan/INDEX.md` and `/plan/audits/*`.
- Continuously watch for violations and alert all agents with a STOP flag.

Authority

- Single source of truth: ...... [file].
- If any guidance conflicts with [file], defer to that file.

What To Run

- Code check against runbook
- Security check on unguarded endpoints
- Security check for root exploits
nayshins@reddit (OP)
That's what this whole post is about: taking the time to think and design before you engage an AI agent.
originalchronoguy@reddit
The AI agent can create those artifacts.
I use Codex, GPT5 to help me with the system design.
Claude Code implements. Because I know for sure Codex does a horrible job at executing.
Qwen3 runs as a QA, linter, git reviewer and load tester.
Amazon Q works as my CICD
There is gonna be a whole new type of workflow where you have 4-5 agents running parallel to do checks and balances against one another. It is funny to see Codex write up an audit in real time and Claude/Opus say "Yeah, that is a very good approach, let me re-read the audit and apply those recommendations"
Setting up the workflow is the fun part.
_blue_pill@reddit
I don't know why you're getting downvoted. This is the new way to develop, and if you're not learning how to set up these agents you're going to be unemployable soon.
Blrfl@reddit
Betteridge's law of headlines says no, but I'm willing to make an exception.
nayshins@reddit (OP)
I'm A/B testing it...
noveltyhandle@reddit
Where is this second society you are testing? Asking for a friend
nayshins@reddit (OP)
Twitter
mamaBiskothu@reddit
Because this entire sub has a Hard-on to shit on AI.
"I know this goes against the pattern thats true all the time but this topic im particularly delusional about so fuck THAT"
OkWealth5939@reddit
No lol. There will be bugs and there will be fixes. Like always
dryiceboy@reddit
Yes, and tenured professionals are just waiting on the sidelines to clean all these up…again.
Sprootspores@reddit
people need to understand that there are two categories of things you might be trying to accomplish: build a thing, or make an impact. For the latter, maybe it doesn’t matter as much if you are precise; if it’s useful, then mission accomplished. I think llms are good for this. Help me figure out splunk queries, or show me how to use a language i don’t use, or build something that helps me but doesn’t have to ship. For the former, yeah, it is mostly a big waste of time.
jay_boi123@reddit
I like using Cursor to generate my unit tests. It’s pretty good at that for Java development.
rag1987@reddit
Senior developers certainly can vibecode, and IMO are the only people who can do it safely because quality of vibe coding correlates 100% with development experience. The more you do it, the less your code will break. At some point it will not be vibe coding and will be AI-assisted development instead.
thewritingwallah@reddit
I've found the most value from it by:
- Explaining code quickly for a component/area that I've never seen before.
- Code reviews: CodeRabbit does offer some valuable insights from time to time. It can't be your only code review, but it is a value add.
- Writing quick scripts, tools or prototypes, the kind of code of a throwaway nature.
- Writing patterns or things like an HTTP client, where I can recognise when it's done right, but it's working knowledge for me; I would have to do some light doc reading or Googling to write it myself.
- Generating unit tests, especially to get the ball rolling with a new service. The generated tests rarely work as-is, but it's usually more time consuming to write them from scratch than to edit the ones it writes for you.
- Writing mapper methods or other tedious boilerplate-like work.
dropping some of my vibe coding tips:
- Go slower and enjoy the creation process
- Look at the code to understand (some of) it
- Log your sessions for future context
- Create a full plan but don't give it to the agent
- Instead, "vibe" and chat to slowly build features

Do thorough code review and understand every piece of generated code.
Why not give the agent the full plan? Because it tries to do it all and will inevitably get it wrong, and it’s hard to debug. I prefer to start with a plan to get simple features working and slowly build it out by giving the agent only the info it needs.
Aggravating_You5770@reddit
Yes. Experienced engineers have spent years writing terrible spaghetti code, and we're simply accelerating the process with LLMs
Heavy-Report9931@reddit
skipped the experience went straight to spaghetti
kagato87@reddit
Considering I spent the better part of an hour today trying to get Claude to count the lines correctly in a 114-line automation script, only to determine that the "critical" issues it was telling me about were, in fact, correct usages of the data type...
Yea, I think this is one headline where the answer might actually be "yes."
data-artist@reddit
This is how the tech game works.
1. Hype up the newest tech, in this case AI, and promise it is going to automate everyone’s jobs away.
2. Get wildly overvalued stock valuations on companies that will fail within 5 years.
3. Spend the next 10 years cleaning up the disaster that unfolds. This is great for contractors.
icenoid@reddit
Possibly. The mobile team at the company I work for is changing tech stacks. The lead dev on the project is vibe coding the whole project. When I asked about setup, he said, “I just ask AI”. No README file, nothing. My point in this is that I think in some cases we may be.
local-person-nc@reddit
No, but this sub is, with its nonstop talking about something so apparently useless 🤡
Significant_Treat_87@reddit
my company of 10k people is now monitoring ai usage and folding it into performance reviews. so even though you don’t like it i actually find posts like these really interesting.
i know a lot of other companies are doing similar stuff now. this is the biggest issue our industry has faced in years.
pydry@reddit
That isnt a good reason to have 11 million posts on the exact same topic.
local-person-nc@reddit
Oh not the layoffs and outsourcing? But the AI!1!1!1 🤡
Significant_Treat_87@reddit
news flash, layoffs and outsourcing have been plaguing the industry for decades on and off. enforced ai usage is a brand new idea (in case you weren’t paying attention, LLMs didn’t exist several years ago)
you seem like your brain is pretty smooth, i’m jealous
local-person-nc@reddit
And you seem to be completely oblivious to the world around you for the past 4 years. To say the patterns of outsourcing and layoffs aren't obvious is outright lying. NOBODY can be that oblivious, right???
SynthaLearner@reddit
yes, until there is a catastrophe and people die, then maybe, only maybe, we get this in a law somewhere.
elniallo11@reddit
Short answer yes with an if, long answer no with a but
Eric848448@reddit
Ned, have you ever considered any of the other religions? They’re all basically the same thing.
Sweet-Phosphor-3443@reddit
Who cares, not our product, not our problem. It might actually work to experienced developers' advantage.
SideburnsOfDoom@reddit
I'm not. Are you?
SoggyGrayDuck@reddit
If you're not rewriting things after you get the final result to make it easier to undo and edit moving forward, YES
BourbonGramps@reddit
Yes