Getting more calls to fix ai generated codebases than actual new builds lately
Posted by CrafAir1220@reddit | ExperiencedDevs | 86 comments
About 10 years in, mostly consulting for smaller companies and early stage startups. The last few months something shifted in the kind of work coming my way.
Used to be people hiring me to build new things or extend existing systems. Now its cleanup, like straight up triage on codebases that are barely holding together.
The pattern is always the same. A non-technical founder pays someone to build their product. It works on the surface. Then users start hitting it and everything falls apart: slow queries, memory leaks, auth logic thats swiss cheese, error handling that catches everything and does nothing with it.
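For readers who haven't met it, the "catches everything and does nothing" handler looks something like this. A minimal Python sketch with invented names (`save_order_*`, the `db` object), not code from any real client:

```python
import logging

logger = logging.getLogger("orders")

def save_order_swallowed(db, order):
    # The anti-pattern: every exception is caught and discarded, so a
    # failed write looks identical to a successful one.
    try:
        db.insert(order)
        return True
    except Exception:
        return True  # the caller never learns the order was lost

def save_order_surfaced(db, order):
    # The fix: catch only what you can handle, log with context, and
    # let the caller see the failure.
    try:
        db.insert(order)
        return True
    except ConnectionError:
        logger.exception("transient DB failure saving order %s", order.get("id"))
        return False
```

Under real load the first version is exactly how data silently disappears until a user notices.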
When I actually look at the code its pretty obvious what happened. AI generated top to bottom. You can tell from the comments alone, that weird overly polite explanation style that no human dev writes. Algorithms that technically work but make zero sense for the actual use case, data models that look like someone asked "what are all the possible fields" and the AI just listed everything.
The thing is these founders arent stupid. They saw demos, believed the hype, hired a "developer" who was really just a prompt jockey, and got something that passed a demo but crumbles under real usage.
Im not anti AI at all. I use Glm-5.1 and Claude code daily for my own work and it genuinely speeds things up. But I also know when the output is garbage cause ive written enough code by hand to smell it. Thats the part you cant shortcut.
I think we're about to see a wave of this. Companies built on AI slop that need actual engineers to come in and rebuild the foundations, job security for experienced devs honestly but depressing that it has to happen this way.
Wise_Slice6303@reddit
The irony of ai creating more work for experienced devs is something nobody predicted lol. Cleanup contracts are gonna be a whole market segment.
BehindThyCamel@reddit
There have been plenty of people around the internet saying that in the last few months. Some of it was cope but it turned out to be true anyway.
aidencoder@reddit
I made my early money fixing codebases that were outsourced to India. At the time, all the founders and CTOs thought they were getting something for less, when it was a case of "buy cheap, buy twice."
I look forward to many more years of AI slop that needs fixing. My day rate will be appropriately high.
Graybie@reddit
I am still doing that! 40 year old software system written in a combination of Fortran, Pascal, C#, and C++, initially built by people who knew what they were doing in the 1980s, expanded by people who didn't, and followed up by 10 years of offshore contractor work on top of it. The sheer monstrosity is a sight to behold.
ButchDeanCA@reddit
I literally don’t envy you. That sounds like a complete nightmare.
Hopefully you have everything wrapped up in a test suite to confirm no functionality changes with amendments.
thy_bucket_for_thee@reddit
Why wouldn't you envy someone that is able to charge an obscene amount of money to do work? I used to fix outsourced test suites/pipelines for like $60k a job. Great work, people hate it but I like doing 2 months of work for nearly 1/2 my salary at the time.
Will probably get back into it because of this thread. The money is dumb, but it pays the bills just the same.
Graybie@reddit
I have to admit that I don't make an obscene amount of money, but I do have a stable job, decent income, work from home, and have none of the typical pain points of dev work at a large company.
Maybe I can leverage this experience into something more lucrative in the future.
Sunstorm84@reddit
Fixing test suites is nowhere near as bad as fixing the architecture of the application itself after 10 years of poorly written sloppy code.
It’s unbelievably frustrating work at times, but it does pay well, so that’s why I do it.
jumnhy@reddit
How do you land those gigs?
Graybie@reddit
We are slowly working on building test harnesses around the various modules and exes. None of it was built with testing in mind, so it is sometimes quite difficult to find good boundaries for where to add tests.
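One common trick when a legacy exe has no clean seams is a golden-master (characterization) test: run the binary on fixed inputs and diff its output against a recorded snapshot. A minimal Python sketch, where the executable path and snapshot location are placeholders, not anything from the commenter's actual system:

```python
import subprocess
from pathlib import Path

def run_legacy(exe: str, args: list[str]) -> str:
    # Capture stdout of the legacy binary; any behavior change shows up
    # as a diff against the recorded golden output.
    result = subprocess.run([exe, *args], capture_output=True, text=True, check=True)
    return result.stdout

def check_against_golden(exe: str, args: list[str], golden: Path) -> bool:
    actual = run_legacy(exe, args)
    if not golden.exists():
        golden.write_text(actual)  # first run records the snapshot
        return True
    return actual == golden.read_text()
```

It doesn't prove the old behavior is *correct*, only that refactors don't change it, which is usually the property you want while untangling a system like this.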
Calm-Inspector4711@reddit
sounds like the perfect storm of tech debt and poor oversight
divinecomedian3@reddit
Ugh, I had to take over a website originally outsourced to Indians. One of the worst codebases I've ever touched.
GlobalCurry@reddit
I worked on one a few years ago, everything was split into microservices that made requests back and forth between each other. Microservices have their use but this was just microservices for the sake of microservices.
kernelangus420@reddit
Not sure if my theory is correct but with outsourcing in general you can get good code and bad code.
If you pick the lowest bidder, they offer the lowest cost because their speciality is writing fast with no guard rails because their other clients specifically want this style of coding.
Unfortunately there is no way to know which bidder represents quality code until you've tested them out.
aidencoder@reddit
Never seen good outsourced code
dbenc@reddit
how did you land your first client? I'm interested in doing the same
PuzzleheadedLimit994@reddit
Hustle. I started on Upwork... eventually you will be able to network and the work will come to you through word of mouth.
BoutItBudnevich@reddit
Just curious, what's your day rate like haha?
Flashy-Whereas-3234@reddit
Can't wait.
Whole industry speed-running what we've learnt over the last 60 something years.
Sheldor5@reddit
"humans learn from their mistakes" my ass
FatefulDonkey@reddit
It boils down to context window lol
NeitherEchidna3491@reddit
“That men do not learn very much from the lessons of history is the most important of all the lessons that history has to teach.”
hyrumwhite@reddit
“This time it’ll be different”
Izkata@reddit
All three of these combined is like a tightly focused version of Strauss-Howe generational theory - simplistically, things have to be rediscovered and relearned every 60-80 years because the next generation thinks they know better and screw things up.
leprouteux@reddit
Some individuals do learn from their mistakes.
sweetnsourgrapes@reddit
Individually yes, eventually. Collectively, it seems the same mistakes occur over and over again.
Andrew5445@reddit
Collectives don’t really exist at the scale you think. It seems like the human brain is still stuck at small tribe mentally.
Zeragamba@reddit
because it's always a freshly minted fool that does the dumb thing
thy_bucket_for_thee@reddit
Because we aren't working collectively yet, our entire industry is dictated by what SV + VC thinks is profitable.
We don't even have a say as devs in our industry in what technology should be invested in. That's handled by incompetent non-engineer financiers gambling, sorry, investing, with pension funds
Abject-Kitchen3198@reddit
Yes they do.
mvpmvh@reddit
Could mean that humans learn from their own mistakes, not the mistakes of others.
positivelymonkey@reddit
You mean waterfall isn't the best way to develop software?
Surprised pika
Fidodo@reddit
60 years? I've seen the industry relearning the same exact mistakes every 10 or so years.
CrafAir1220@reddit (OP)
Amazing.
deepmiddle@reddit
This is a great way to put it. Everyone’s about to find out what good engineers actually get paid for
kantmakm@reddit
This is not a new thing. Cheap code for POC can work for demos + funding but needs to be refactored to actually work at scale - whether that code was generated by gemini, claude, Tristan the intern, or Ravi from Fiverr.
totallyrandom__@reddit
You are minimizing the situation to avoid dealing with the real issue. Before AI slop coding, this code would not have made it to production, or even have been built this badly to begin with.
Idea-Aggressive@reddit
Exactly, the OP is just rage baiting.
Cedar_Wood_State@reddit
Yeah, people seem to think every code base was high quality before AI or something lol. A lot of jobs have been rewriting spaghetti code left by previous devs for years
LateToTheParty013@reddit
You forgot Vijay
viktorianer4life@reddit
Write a grep rule for each failure shape in the codebase, run it as a pre-commit gate, then let the agent do another pass to fix whatever trips. You are not trying to teach it style, just fencing off the specific ways it gets things wrong. Four or five patterns accounted for most of it.
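A minimal Python sketch of that gate, runnable as a pre-commit hook. The two failure-shape regexes are made-up examples (in practice you'd derive them from the mistakes your agent actually makes), and the staged-file listing assumes git:

```python
import re
import subprocess
import sys

# Example "failure shapes"; replace with the patterns your agent keeps producing.
FAILURE_SHAPES = {
    "bare except that swallows errors": re.compile(r"except\s*Exception\s*:\s*\n\s*pass"),
    "SQL built by string formatting": re.compile(r'execute\(\s*f?"(?:SELECT|INSERT|UPDATE|DELETE)', re.I),
}

def scan_text(text: str) -> list[str]:
    """Return the names of every failure shape found in `text`."""
    return [name for name, rx in FAILURE_SHAPES.items() if rx.search(text)]

def staged_python_files() -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f.endswith(".py")]

def main() -> int:
    failed = False
    for path in staged_python_files():
        with open(path, encoding="utf-8") as fh:
            for hit in scan_text(fh.read()):
                print(f"{path}: {hit}")
                failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main())
```

A nonzero exit blocks the commit; the agent then gets pointed at whatever tripped and does another pass, exactly the fencing-off loop described above.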
jmaventador@reddit
Noticed this too but they are mostly contract roles. It seems they want to rehire to fix their slop and then get rid of you
ColonelKlanka@reddit
Yep, and that's what us experienced contractors are for, whether it be:
emergency fixes the perms either can't fix (as they are inexperienced or just no longer present coz of cutbacks) or don't want to fix, or
adding new features that the AI/current staff can't do.
Then once it's done, we leave, as we are seen as too expensive to keep - until the next issue appears.
Unfortunately no. 2 is still up for grabs by AI vibe coders.
aidencoder@reddit
Not always. My last contract was a decade long.
ColonelKlanka@reddit
Lucky you. Not very common though, I suspect (I've had 2 to 3 yr contracts). A decade is firmly in part-and-parcel-of-the-company land, I would think. But good for you.
Routine_Internal_771@reddit
That's fine for a contract role
Euphoric-Neon-2054@reddit
it's chill, our day rate just doubled 🤝
c0ventry@reddit
Was that way before LLMs too though... ever since boot camps became a thing and they got rid of Software Architects and went full stack for everyone... The general quality of software has gone to hell in the past decade.
Mizarman@reddit
How well can you vibe fix someone else's vibe code? I'm genuinely curious about that.
Maktube@reddit
Anecdotally, if you're a senior engineer, the someone else is very non-technical, AND they're willing to work with you on it, you can actually do really well. I work at a pretty small company, and the CEO and I have had a lot of success with him treating vibe coding as an extremely rapid prototyping tool.
We have an initial meeting to hash out broad goals and any requirements we can nail down ahead of time (we're pretty lax about this BUT it's important that there is an attempt, the context helps a lot later). Then he vibe codes the prototype, iterates until he likes the UI/UX/outputs, and we have another short handoff meeting, and I vibe(-ish) code a reasonable implementation.
Generally I throw out more or less the entire codebase, use the superpowers brainstorming skill to figure out a reasonable architecture, have the AI write the bare bones of it, and then guide it through porting over any needed algorithms/logic into the new framework.
It's been working SUPER well, I'm basically acting like a combination PM/tech lead for the CEO as stakeholder and the LLM as the dev team, which leaves me free to keep being an IC on my regular work (which is super specialized and performance critical, so vibe coding is very much not viable (yet?)).
hiddenhare@reddit
Really interesting, thanks. After you've baked in a higher quality level, what's your strategy for making changes to the code? Is the CEO still able to be involved in that process?
Idea-Aggressive@reddit
Where are you finding those gigs? I haven't seen them anywhere and I'm looking intensively across YC, HN, the common job boards, Discord, X, etc. Is it through an obscure contact you have, and somehow you're known to be available and able to solve these new issues?
I don't buy it!
flavius-as@reddit
Sounds like an opportunity to me.
The tables are turning.
typeof_goodidea@reddit
I've had three gigs in the last six months doing just this. That, or the original dev bailed once things got hairy and I have to tell them that the code is a trainwreck. I've been talking with friends about what others here (and OP) are saying -- that this is a growing market.
I also add a headache multiplier to my rate...
Athen65@reddit
This just occurred to me, but part of the issue with the whole "AI makes you faster" claim is that, even if it makes you faster and you just spend time reviewing that you would've spent coding, you also (hopefully) have someone else who has to review the code before it is accepted. Generally, AI likes to write a lot to do a little, so even if you're shipping faster, and even if it does it right the first time, you still either slow down the team or sacrifice some rigor in the review process, and therefore sacrifice quality.
I've already encountered this, where a coworker sends out a large PR which is fine, but only if it happens less frequently than it does now, which is about one every day. That might sound weird but I'm in more of a junior position and the other engineer is solo on another project, so it's a weird situation in general
UnderstandingDry1256@reddit
More jobs to devs haha.
Those who use AI smarter beat those who do it in a stupid way.
ProbablyNotPoisonous@reddit
A career of unfucking AI code actually sounds kind of fun, tbh.
chikamakaleyley@reddit
are these former clients or referrals?
are you charging a significantly higher rate than your base?
i'm kinda stoked
LeadingPokemon@reddit
Waiting for you to add the spam link to your post!
Bderken@reddit
Seriously, this is written by AI…
weightedslanket@reddit
At least he had it remove every apostrophe for authenticity
Cyral@reddit
100% to promote something. So many posts like this lately that are obviously AI written and made up
LeadingPokemon@reddit
Yep! The bot downvotes are infecting. Glad to meet you, fellow human. I am also a human. I work as a software engineer but I also have a lot of experience. Over 20 years of human experience. Trust me!
another_dudeman@reddit
AI generated code by normies is the new MS Access
MedicatedApe@reddit
How do you market and advertise your services?
Ambitious-Garbage-73@reddit
same shift here. founders think they bought an MVP and what they really bought was deferred debugging debt with a nice demo on top. once auth, retries and background jobs start stepping on each other you realize nobody ever designed the system, it just accumulated.
Fidodo@reddit
That sounds like a great way to get experience reining in AI to put quality first. I'm curious what lessons you have learned and what workflows you have?
I'm not anti AI either but I am firmly anti slop and pro quality. I see no reason why AI is an excuse to let quality drop. I think with the right AI assisted workflows quality should go up, not down.
BunchCrazy1269@reddit
I've just started a new role and my whole job is to unfuck a vibecoded React app. It's funny
Kpow_636@reddit
I'm two months into my new job, doing exactly the same thing. Someone at my organization vibecoded an app that became too complex for them.
ButchDeanCA@reddit
I’m using the term “unfuck” as appropriate henceforth lol
mechkbfan@reddit
Mixed feelings
I think it's wonderful that we're lowering the barrier of entry for people to prototype with AI tools
On the other hand, if I ever have to look for work again, I sincerely hope it's not just decoding whatever AI decided to build that day
muntaxitome@reddit
Same here. Honestly I don't really care, billable hours are billable hours. Generally it would have been faster to just write the whole thing from scratch than fix the vibe coding mess.
Wild_Competition_833@reddit
"The thing is these founders arent stupid." - you sure about that?
everything you described screams greedy and stupid with a nice dose of Dunning-Kruger thrown in for good measure.
Ma1eficent@reddit
The wave is here. And OMG, the jobs for technical forensic examinations in legal cases where the whole thing has turned into a mess of legal hallucinations and arguments over where that responsibility lies...
No_Comedian7332@reddit
Not anti-AI, but honestly I hope this is the way it goes. I was on a 4-person team; we were split into "squads" and each squad is now its own team. We used to have refinement meetings to make sure all the tickets and all the product requirements actually made sense. Now we are drinking AI-generated stories from a fire hose. Before, it would take 2 weeks for the 4 of us to get a polished feature out there. Now each of us alone is pushing a feature every 2 weeks. I'm not saying this is 100% bad, but I'm starting to see cracks, memory leaks, DB migration errors, things that we would have caught before because we had 4 sets of eyes looking at the PRs, now getting through. Again, it's not that we are lazy or don't care, it's just that the amount of work we have been tasked to implement is so large that there is barely time to look at the PRs.
mike3run@reddit
I guess they know that but the hope is the next model will fix it and so on and so forth
Stunning_Algae_9065@reddit
yeah this is becoming way more common
it’s not even that the code is “wrong”, it’s just… not built like a system
everything works in isolation but nothing really holds together once real usage hits
I’ve seen the same patterns... weird abstractions, over-engineered in some places, completely missing basics in others
and yeah that “AI comment style” is a dead giveaway lol
I use AI a lot too but you can’t just let it run end-to-end without actually understanding what it’s doing
feels like most of these builds skip the thinking part and go straight to output
so now instead of building features, people are basically reverse engineering their own codebases
wutcnbrowndo4u@reddit
How do you find these clients?
_k_ley@reddit
AI is an intelligence multiplier
If your intelligence is a fraction, then it makes things worse
Difficult-Celery-721@reddit
Wouldn't that be with negatives?
NickW1343@reddit
Sometimes it's a "developer" that crafted these time bombs we're hopefully going to be paid to defuse soon, but I think a lot of the time it's founders/execs that aren't technical, fell into a few week-long AI psychosis stint, and vibecoded out slop. We have a higher-up at work that made tons of Lovable apps and now he's bored of them, so our team is getting the repos to maintain going forward. Frustrating, but obviously a boon to the job market.
kadema@reddit
Same boat, but I'm doing the prompting. Going over code manually is so discouraged, I'm horrified every time I see somebody open 12 PRs in a day and all are merged
fedsmoker9@reddit
Can’t wait!
Manfluencer10kultra@reddit
I cheered a little (inside) when I saw this:
"But Python lambdas don't support type annotations inline. The cleanest way is to use local functions, or just leave the lambdas and suppress with a `type: ignore` comment or cast. Actually, I think Pyright might infer the lambda return type correctly even without annotating `e`, since it just calls a function. The real issue is that `e` itself is untyped. I could convert the lambdas to nested functions with proper type annotations, use `cast` on them, or add a type ignore comment — the nested function approach is probably cleanest since it keeps everything scoped within `_build_transitions`. I'm realizing the real issue is with type invariance — `list[dict[str, object]]` isn't compatible with `list[FsmTransitionDefinition]` because Pyright treats the dict value types as invariant. The solution is to change the return type annotation to use `FsmTransitionDefinition` directly instead of `dict[str, object]`." (Sonnet 4.6 (high))
Who was doing all the lazy typing and casting? GPT 5.4 (xhigh).
And I do have specific instructions, but they get forgotten from time to time... and then the drift quickly compounds.
One `Any` leads to a `cast()`, then wads of duplicated coercion utils and defensive coding, making it all ever so much more readable...
The real security concerns become hidden in plain sight.
You can't make AI development work on large code-bases (technically) or (morally, imho) on a production service that consumes other people's information, unless you:
1. know what you're doing in terms of security, and
2. have extensive tooling so you don't actually lose time on development.
clamjabber@reddit
Oh, I've been there too. I was working on a side thing, a legal AI app. The UI looked good, but there were lots of issues with scaling, and then you look at the codebase and decide whether to laugh or cry or both
BlueDolphinCute@reddit
The prompt jockey thing is real. I’ve seen multiple "senior devs" on Linkedin whose entire skillset is pasting requirements into AI and shipping whatever comes out.