The era of AI slop cleanup has begun
Posted by kcib@reddit | ExperiencedDevs | View on Reddit | 481 comments
I’m a freelance software engineer with about 8 years of experience mainly in early stage startups. At this point, I have a pretty steady flow of referrals. I don’t take every project on and not every one works out, but enough do that I can do it more than full time.
Lately, though, I have noticed a large increase in projects where they paid a ton of money for internal software and it does not work well at all. Tons of errors, unreasonably slow, inefficient and resource-hungry, and large security flaws. At first, I thought maybe people just hired bad developers. The bar is pretty low to call yourself a developer or even a software engineer anyway, but I'm seeing the same problems now on multiple projects.
When I take on a project, I always sign an NDA and go through their codebase to flag some upfront issues that I can bring up, because most of the time the people hiring me aren't technical and don't understand what the problem is. This is probably the 5th time now that a lot of the code was obviously AI generated. Comments in the code that were obviously written by AI, algorithms that are inefficient and make no sense, cluttered data structures, inconsistent coding patterns, etc.
It might be a few years before we start to see this on an enterprise scale, but I'm noticing this becoming a serious problem for small businesses and startups, especially when the founders / people in charge aren't technical enough to identify it ahead of time.
Glum-Psychology-6701@reddit
Ok, we'll revisit this in a year. Such posts are a dime a dozen, the same anti-AI rants with the same vocabulary... What makes a human programmer better than AI?
NoRegreds@reddit
Tell me that you never wrote code yourself without telling me you never wrote code yourself.
Glum-Psychology-6701@reddit
Tell me you don't know what you're talking about
Express_Trouble4156@reddit
I had Claude do a complete and utter meltdown during one of my sessions. I've been calling the problem of AI assistants being easier but not faster the "babysitter problem." I wrote about it here: https://chrisbeckman.dev/posts/the-babysitter-problem
Complex_Disaster1552@reddit
Shouldn't this technically be "improved" when/if AI coding becomes even better?
e.g. Lovable has a "review security" button now
geon@reddit
That's a very big "if". And even assuming it will eventually happen, there is no telling when. Can your employer/client afford to just wait for AI to improve?
Complex_Disaster1552@reddit
I agree on the "big if" - but if you see how much it has advanced, even from 4o to o3, it's hard to guess where we'll be in a few years
geon@reddit
I haven’t seen any meaningful improvement in the last 2 years. And there is no sign that the current architecture can ever produce better results.
Complex_Disaster1552@reddit
I guess ChatGPT 5 coming out this week will be a good example - if it has improved or not
AntDracula@reddit
Oops.
Complex_Disaster1552@reddit
Hahaha, good point. Although it has improved... a bit?
AntDracula@reddit
My honest assessment is that the rate of growth is slowing (it’s unlikely to literally regress, though people do feel that about GPT-5).
LLMs are not on the path to AGI. It’s not that they’re not useful, it’s that they’re vastly oversold.
Ok_Individual_5050@reddit
Where did you get the idea that you can take a tool that is notoriously good at creating security issues, ask it to review code for security issues and somehow get a better result?
Complex_Disaster1552@reddit
Ha, makes sense. I would say - especially in Lovable's scenario, it did do a good sweep of Supabase's settings
Elctsuptb@reddit
Yes, everyone here is coping hard
lokaaarrr@reddit
It’s pretty clear that current AI tools don’t produce better code than the person using them could have written. In many, but not all, situations they reduce the time needed.
The open question is how fast they will improve and if there is a practical ceiling.
KnowledgePitiful8197@reddit
The other question is what the limit of its improvement is, since it is extremely difficult to write reliable code.
peripateticman2026@reddit
Fair assessment of the state of the art.
Hawkes75@reddit
The number of times ChatGPT has suggested using "setTimeout" unironically to fix a legitimate issue makes me terrified of what I'm going to find in codebases over the coming years.
sarhoshamiral@reddit
Multiple times when I ask coding agents to fix a bug, they will delete the method in question, especially if it is not referenced anywhere else in the code. The concept of a public API is a bit unknown to them, and this is with prompts saying don't delete code.
Cube00@reddit
After around ten failed attempts Copilot finally said a file it couldn't fix was "corrupted".
It asked me to delete the contents of the file with the promise that it was ok because it would recreate it for me without the bug.
AreWeNotDoinPhrasing@reddit
Shit, I've had almost this exact conversation before with Claude, in CC.
oppalissa@reddit
Well, that's one way to fix it
UltimateTrattles@reddit
It regularly fixes type errors by just casting to any.
oppalissa@reddit
What AI are you using? Copilot is pretty good
UltimateTrattles@reddit
Copilot uses GPT.
I mostly use Claude via Cursor but also use other models. I've found GPT-5 and Claude to be the most reliable.
You need a good rules file to keep them on the rails and a spec to tell them what to build from.
RadiantHC@reddit
lmao
Hawkes75@reddit
That too - and I'm like sorry, linter rules.
JuiceChance@reddit
You will find money, a lot of money that you should charge those CEO, CTO idiots and greedy stakeholders.
No_Oil_6152@reddit
Agreed.
Bill them double to fix the issues they'd never have had if they hired human professionals to create their app.
Feel no shame for doing so; these same companies would happily see you thrown on the scrap heap because "savings"
oppalissa@reddit
I believe it's only a matter of time for new gen AI to fix the mess of the old AIs, and again and again.
Time to raise chickens
Skibidirot@reddit
so true
AmbitionHopeful7227@reddit
Or "that happens because you have an old version of a library" (not true), or "your environment is corrupted, just reinstall all lol"
DoctorWaluigiTime@reddit
I'm guilty and have done that on occasion. "Oh this works if
setTimeout(0)
happens, because something something race conditions or something something new thread happens." Guess ChatGPT looked at my code. Sorry everyone.
Breklin76@reddit
You don’t use ChatGPT for coding.
usrlibshare@reddit
The so-called "agentic AIs" don't do any better.
Gearwatcher@reddit
Pretty much everything does better than OpenAI models.
EvilCodeQueen@reddit
I don’t know why the downvotes. I haven’t tried them all, but Claude Sonnet is head and shoulders above ChatGPT.
Breklin76@reddit
Downvoted for calling out a non-coding AI. Hilarious.
Hawkes75@reddit
I use it as a shortcut to Google / StackOverflow. That's all it's good for, and not even that a lot of the time.
syklemil@reddit
The entire point of vibe coding is to ignore a lot of technical details. Which model someone is using is such a technical detail. Of course people are gonna vibe it up with ChatGPT, it's the most recognized name.
Choice-Wafer-4975@reddit
To be fair, I knew a guy who used that for basically every problem. Sequential async stuff not quite working right? Threw in a 1500ms delay until it works 90% of the time. He did it constantly - professional full-time programmer btw.
DrummerHead@reddit
I see you also met Gregory “loading bar” Waiterson
EvilCodeQueen@reddit
Hey, we all have our kinks.
TypeSafeBug@reddit
Halting problem vs infinite polling, who would win?
EvilCodeQueen@reddit
AI is learning from all code out there, not just the good code.
kaskoosek@reddit
Many codebases have no folders. Or basically one folder with all the files.
I don't know how a team of engineers reached that point.
Sea-Employment3017@reddit
The setTimeout "fix" is a classic AI hallmark. I've also seen it suggest `any` type casting for TypeScript errors and `.catch(() => {})` to "handle" promise rejections. The scary part is these "solutions" often work during development, so non-technical stakeholders think it's fine. Then production breaks under load or even under normal usage.
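To make it concrete, here is a minimal, made-up TypeScript sketch of those three patterns side by side (all names and values are invented for illustration, not taken from any real codebase):

type User = { id: string; name: string };

// Stand-ins so the sketch is self-contained.
async function fetchUser(id: string): Promise<User> {
  return { id, name: "demo" }; // pretend this is a real API call
}
function renderProfile(user: User | undefined): void {
  console.log(user?.name ?? "no user yet");
}

let currentUser: User | undefined;
const apiResponse: unknown = { id: "1", name: "demo" };

// 1. Masking a race condition: just hope the data has arrived after 1500ms.
fetchUser("1").then((u) => { currentUser = u; });
setTimeout(() => renderProfile(currentUser), 1500);

// 2. Silencing the type checker instead of fixing the type mismatch.
const user = apiResponse as any; // compiles, but the shape is never verified

// 3. "Handling" promise rejections by swallowing them.
fetchUser("1").catch(() => {}); // errors vanish, production debugging gets fun

Each one passes a quick demo; none of them survives real usage.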
JakubErler@reddit
Yeah exactly, every time I asked it for a JavaScript patch of a system we are using, it uses setTimeouts and the code is unreliable (it fires too early - the DOM element is not yet there; it fires too late - it blinks). I had to explain to the AI how to replace setTimeout and why. It replaces it, and several iterations later it quietly reintroduces it. Interesting. Maybe it is trained on low-quality code.
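For the curious, this is roughly the replacement I keep having to spell out: wait for the element instead of guessing a delay. A browser-only TypeScript sketch; the selector and timeout are placeholders:

function waitForElement(selector: string, timeoutMs = 5000): Promise<Element> {
  return new Promise((resolve, reject) => {
    const existing = document.querySelector(selector);
    if (existing) {
      resolve(existing);
      return;
    }
    // Fire exactly when the element shows up, not too early and not too late.
    const observer = new MutationObserver(() => {
      const el = document.querySelector(selector);
      if (el) {
        observer.disconnect();
        resolve(el);
      }
    });
    observer.observe(document.body, { childList: true, subtree: true });
    setTimeout(() => {
      observer.disconnect();
      reject(new Error(`Timed out waiting for ${selector}`));
    }, timeoutMs);
  });
}

// Usage: no magic 1500ms sleeps.
waitForElement("#widget").then((el) => el.classList.add("ready"));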
DrobnaHalota@reddit
There is a lot of cope in this thread. Next gen AI models will do the clean up for the current gen.
AntDracula@reddit
lol THIS is cope
TechnicianUnlikely99@reddit
What should we do? Should we leave the field asap and become nurses, plumbers and electricians?
That feels like the new “learn to code” that will be completely saturated in the next 5-10 years
HomelanderOfSeven@reddit
Well, they'd better hurry while the "AI" bubble is still floating
Several_Note_6119@reddit
Why do we think that AI slop won't be cleaned up by AI in a few years? AI is moving fast, so maybe in a few years it'll clean itself up to improve performance, maintainability, etc.
AntDracula@reddit
silentjet@reddit
The problem starts even before the code. Do you know what the problem is? The name! What is being called "AI" is not actually AI, not even an element of AI. We, as programmers, know that if you name an index variable as a sum variable, that would not actually make it a sum. Same here: a great technology, ML, was rebranded as AI, which it is not. But now the expectation and huge selling point is that it will be like the AI from futuristic movies, and it cannot deliver that, because it is not AI.
The other side of the problem is that people are starting to be scared of the rise of the machines, for their jobs, for their future, because that is how it has been promoted over the last 50 years of movies, and that is really bad.
And why is that happening? Because a small number of people want to sell you yet another piece of silicon for a tremendous amount of money... And they are doing that irresponsibly, at the cost of the entire planet's stability and peace.
Several_Note_6119@reddit
Sure, you can say today's AI isn't true AI. But that doesn't conflict with my original comment. Whatever version of AI we're using today, it can still be improved to help refactor code slop, implement maintainable design patterns, increase efficiency, etc. in the future.
mikaball@reddit
The day AI is able to do this, Humans will be obsolete in all aspects.
silentjet@reddit
oh well, no, in general, it can't, because to do so an intellect is required, especially in a complex code base. Obviously some small script with a very typical and much limited context can be improved through multiple hallucinations, but even that would not be conscious improvement...
Significant-Scheme-8@reddit
Small companies always try to cut costs... Before it was with junior devs, now with AI 🤣
ILikeBubblyWater@reddit
This sub has become an echo chamber of people believing they will be the saviors of the future
ashtonium@reddit
You just described the current state of the internet.
QueenAlucia@reddit
To be fair I think that will be true for a lot of seniors in any profession.
Fine_Tie_1576@reddit
I think AI generated code is here to stay and I don't think the solution is to try to stop it. I think the solution is to try to generate much better code with better prompts, maybe provide coding guidelines, templates, examples, clear system requirements, security requirements, data models and similar stuff. What do you think - if you provide these specifications, will the code quality be similar to a senior software engineer?
Constant-Listen834@reddit
For every job lost at a FAANG that became more efficient due to AI, two have opened up at a startup that got funding because they vibed up a POC.
I think the jobs will be there, just gonna shift to smaller companies hiring. If you pay attention you’ll probably have already seen this happening
pagerussell@reddit
FAANG aren't cutting because of AI. That's just their cover story.
All those companies are shifting away from being growth companies and towards simple extraction. They don't need new features, their monopolies are set, now they just need a maintenance crew and to pump their margins.
ohio_rizz_rani@reddit
Those are the cheap labour in other countries.
Meta started hiring developers in India starting 2025. Crazy!
Life-Principle-3771@reddit
Maintenance of massive, older services is among the hardest jobs in the industry. There is negative value in cheaping out on those positions.
plinkoplonka@reddit
They say AI.
it just means Affordable Indians.
Life-Principle-3771@reddit
This is correct, have worked at multiple FAANG and left G recently.
The truth is that AI is not really used that much in FAANG companies for development. Maybe a little bit for stuff like drafting reports/emails, and there is certainly a lot of interest in the MCP space... but the truth is that the performance/accuracy of AI code generation is negatively correlated with the size/complexity of a codebase, which is a big blocker for the massive and highly complex codebases that you deal with in those companies.
Plus at Google every CR has to be reviewed by an "expert" in the language, and a lot of AI generated code won't pass that review.
SpawN47@reddit
Yeah, the FAANGs are basically tech investment companies at this point.
No-Firefighter-6753@reddit
I always thought this
RegrettableBiscuit@reddit
Some are cutting due to AI, but not in the way they claim. They're firing people to free up capital to invest into building data centers for training LLMs, eroding the actual value they produce. Microsoft is a prime example: they are killing Xbox to shift budget over to AI. When the bubble bursts, they will have neither.
pavlik_enemy@reddit
Yep, their flagship products haven't changed and won't change. Meta doesn't even have a cloud business, which is the kind of thing that always requires new features, aka services.
Mission_Cook_3401@reddit
Yes, that seems right . From my experience there are many non technical people in companies “vibing prototypes” , then they bring in a dev to build it
zyro99x@reddit
can you give an example for that?
dpn@reddit
We actually are establishing processes around this. As engineers, a well-scoped feature delivered as a PoC actually makes delivering the final thing so much easier if a PM has implicitly signed off on their own vibed concept.
look@reddit
It can be a useful process for engineers, too.
With AI you can quickly create throwaway prototypes.
The part most people aren’t doing yet is learning from them to make the next one that you want to keep.
Brought2UByAdderall@reddit
The hard part is telling product owners that the thing they see working, even badly, can be disposed of because it's just a prototype.
look@reddit
That’s a human problem, not an AI one. If you have a dysfunctional product/engineering process, then it doesn’t really matter who or what is writing the slop.
OneMillionSnakes@reddit
In theory, yeah. However, at many orgs a proof of concept tends to become the production code rather quickly. Nothing is more permanent than a temporary solution. Places don't always practice good prototyping and testing.
arihoenig@reddit
Wait, you mean people weren't using it like this? That's how I've used LLMs since the beginning. That's the entire value proposition of LLMs.
Mission_Cook_3401@reddit
I’m on version 5 of a distributed zero trust NATS infra now ! So much fun , and it’s in Go!
arcanemachined@reddit
You should be very proud.
The National Association of Teachers of Singing is a very prestigious organization.
Mission_Cook_3401@reddit
Yes , they said I can’t sing, but the LLM told me that my singing is perfect
UpAllNight6969@reddit
P
Fidodo@reddit
It's basically a prototype spec. As long as it's properly sequestered to not make it to production (or feature flagged and encapsulated so it's isolated for an AB test), I think it's a totally reasonable approach. You just need to still know what you're doing and be careful about it so you don't let it leak into the actual codebase before you rewrite it.
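For instance, even a trivial gate like this keeps the throwaway path isolated (a contrived TypeScript sketch; the flag and functions are invented):

// Stand-in for a real flag service.
const FLAGS = { prototypeCheckout: false };

function stableCheckout(cartTotal: number): number {
  return Math.round(cartTotal * 100) / 100; // the boring, trusted path
}

function prototypeCheckout(cartTotal: number): number {
  return cartTotal; // throwaway logic lives only behind the flag
}

export function checkout(cartTotal: number): number {
  // The prototype only runs for users in the experiment,
  // and deleting it later is a one-line change.
  return FLAGS.prototypeCheckout ? prototypeCheckout(cartTotal) : stableCheckout(cartTotal);
}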
dpn@reddit
Yep good point, we use flags heavily and have already established ways of working that these pocs fit into.
OneMillionSnakes@reddit
I don't personally see much market evidence for this from an individual contributor point of view, but I do 100% believe that people will adopt workflows like this. Which I find really unfortunate because editing and reading code is often far more time consuming than writing it.
Left-Percentage-1684@reddit
Going by my friends experiences with artists, same thing.
Some guy generates a mockup of "the vibe" of what they want, concepts mostly.
Artist makes it well, actually good.
AaronsAaAardvarks@reddit
This sounds like a completely reasonable use of AI to me.
Trollzore@reddit
What’s the point of vibe coding concepts if you can just do that in figma with a designer
Its_me_Snitches@reddit
It sounds like you might take for granted that you know what Figma is and how to use it, not to mention that you have the mindset and thought process of a senior dev. There's a lot more going on under the hood in your mind; the knowledge of using tools that seem "simple" to you isn't that simple or common in general.
RotationsKopulator@reddit
Is figma something like ligma?
bumbledog123@reddit
Yeah also when I look at the figma from a PM I usually still have a billion things to specify. If they're making a prototype they might run into those considerations by themselves, and end up having a more clarified requirement
Trollzore@reddit
Yeah that’s a good point. Thanks for the reality.
drjeats@reddit
Concept art is a profession
They're turning to AI tools, but I think they are digging their own grave. Or at least the graves of people who aren't yet established
ikee85@reddit
What a waste of time and resources.
Mission_Cook_3401@reddit
As is life
Efficient-Design-174@reddit
this has been annoying in my experience. product person sending in suggestions with tidbits of regurgitated llm wisdom during technical design debates without really understanding the tradeoffs in the matter at hand. a special kind of hell.
it makes unnecessary arguments with the uninitiated people longer because now the uninitiated can fake their way into the conversation they shouldn't really be in.
DrobnaHalota@reddit
You come across as butthurt about being called out on your lack of expertise.
Efficient-Design-174@reddit
You're probably right. There is always something to learn. That's about the only thing I do know. Though a better word is tired and annoyed. Maybe it's a good thing my job is being automated away. Soon enough the noobs can feed each other's LLMs and leave me be in peace.
Fidodo@reddit
I'm honestly not against it, so long as they invest in replacing it. Doesn't make sense to invest in a quality MVP if you don't even have a market yet.
Dev_deep@reddit
I've seen this even in Enterprise. One team will basically vibe code a POC - Proof of Concept, get their stakeholders excited about it then toss it over the wall and some other team picks it up and has to make it production ready.
Codebase is filled with comments that LLMs love to include. Even in the CSS.
KaleidoscopeSenior34@reddit
I'm vibing up a POC and quite frankly I've gone through like 3 audits / revisions now to get to a properly architected state.
PoopsCodeAllTheTime@reddit
And this seems efficient to you? What is the complexity of the result?
KaleidoscopeSenior34@reddit
I mean, for what I'm doing, yes. Otherwise I'd burn out so much faster. Sometimes it's psychological, not purely about efficiency of shipping code.
PoopsCodeAllTheTime@reddit
Now that is something I can agree with, it doesn't feel that much faster to me, but it does feel a lot more chill.
Pruzter@reddit
It’s like when you can plug your brain into more of an autopilot vibe state, like when you play a video game. It’s not that you aren’t thinking at all, you just aren’t engaging your deep thinking at all. This can be more “fun”, but it’s also a lot more lazy. I guess there are trade offs, but in so many ways, it is just another iteration of what is happening on all levels of society. It’s like the programming equivalent of mindless scrolling videos of tik tok, just shamelessly feeding dopamine to your reptile brain.
PoopsCodeAllTheTime@reddit
Yes! It feels lazy, and as such, the results it produces are not any better than if my focus was actually engaged. I just walk it through the most baby steps while I lie back and scroll my phone or whatever. So in a way I can see how this could be a nice pleasantry for devs, but in no way does it produce anything new or improved. It doesn't even save time; I am just taking longer while chillaxing a bit, until it starts to fail at the simple tasks and I have to stop my laziness.
rainroar@reddit
Faang isn’t losing headcount due to ai, no matter how much they tell shareholders that’s the case
just_anotjer_anon@reddit
If AI was so good they'd increase *everyone's* productivity fourfold. They'd be building new projects instead of firing anyone; they'd simply have too high an ROI in that scenario.
rainroar@reddit
Exactly
Constant-Listen834@reddit
They absolutely are. Many large companies are
rainroar@reddit
They say they are. I’ve worked at multiple faang’s since the ai boom and it’s all lies to justify layoffs from over hiring.
apartment-seeker@reddit
How are you defining "AI boom"? Because if you've worked at multiple FAANG since ~ Jan 2023, then that doesn't sound that good tbh lol
rainroar@reddit
I’ve been in faang since 2013, across 4 companies. I changed jobs most recently in 2023.
Constant-Listen834@reddit
They don't need to justify layoffs lol
allywrecks@reddit
They absolutely do because while they're doing layoffs they're hiring up in cheaper global cost centers. If that was the main narrative instead of AI, they'd draw the attention of the America First crowd, many of whom don't need much justification to get pissed off at the tech oligarchs
rainroar@reddit
Yes you do when you spend a decade telling investors that hiring everyone at all costs was the only path forward. It makes you look pretty dumb when you realize cutting staff is the only path to increased margins.
I'm telling you, both places I've worked that claim to be "AI focused" are absolutely lying. Sure, we burn 3 million tokens a day because we were told to or else, but that isn't what's doing the work. Devs are doing the work.
teslas_love_pigeon@reddit
It's really something how others can't easily see through the farce of what it is. I really wonder if they've ever done any critical thinking in school, and no, not the insult, but an actual class on critical thinking.
All throughout my public schooling we were taught how to think more thoroughly regarding political news and events. About conflicts of interests.
Maybe they should cram this into the bootcamps?
shadowsyfer@reddit
Social media and tech influencers are to blame. Not all of them of course, but a large number.
teslas_love_pigeon@reddit
It plays a part, yes. It does feel accurate to say that 95% of tech content tailored to programmers is just engaging in boosterism with no real pushback or discussion of the political economy within tech.
It's all extremely surface level, but since most of these peddlers engage in access journalism I guess it's to be expected.
FireHamilton@reddit
It’s just people that aren’t in tech or jealous FAANG people lol
teslas_love_pigeon@reddit
I don't think this is it. People of all stripes know a bullshitter, it has nothing to do with class.
pavlik_enemy@reddit
You can look at Reddit - apparently it now has tons of user-facing analytics for every freaking comment. It was a significant undertaking but will it be useful to advertisers (obviously, regular users don’t care)? Who knows?
pavlik_enemy@reddit
They do need to, actually. Otherwise the CEOs would have to admit that they allowed bloat for years.
shadowsyfer@reddit
100% agree with you. They can market the effectiveness of their AI products, and justify layoffs. It’s all around winning.
eight_ender@reddit
Sweet, it used to be founders who did a Rails tutorial that generated this kind of work. Oh how things come full circle.
VisiblePlatform6704@reddit
This. I've been involved in startups for 13 years and the quality of MVPs is crazy bad (understandably). So AI doesn't change much there, maybe only the quantity of code.
dagistan-warrior@reddit
It is probably an improvement in code quality thanks to AI, compared to how it was with non-technical founders trying to put together an MVP on their own.
Dry-Aioli-6138@reddit
And the turntables have turned!
mothzilla@reddit
Sadly I think that the companies that "vibed up" will disappear once they get hacked, or investors realise they've sunk money into a ChatGPT wrapper.
So they won't be around long enough for people to be needed to fix the code.
pm_me_ur_happy_traiI@reddit
If you’re right, it’s the end of high salary days for devs. That small shop is going to pay 1/3 of what a bigger company would have, and the rest will be valueless stock options.
jhaand@reddit
I think a lot of contractors can do the clean-up work from all the vibe coders. You could even make a business model from it.
Create a small company with 5 dudes who specialise in testing, QA, refactoring, architecture archaeology and DevOps. You could go from one pile of AI slop to the next, I think.
SignoreBanana@reddit
Maybe eventually. It's going to be a long few years though
No-Row-Boat@reddit
Well the numbers aren't backing that thesis unfortunately.
800,000 layoffs, 70k jobs created in the last period?
All the AI optimists should start watching what Geoffrey Hinton is talking about. AI replacing us in the workforce is a more optimistic outcome.
enzamatica@reddit
That would be incredible. Now if only they didn't lowball.
quentech@reddit
That's the tough part, but if you find one that's willing to open the purse strings, you can have a ton of leverage as the magic fixer-upper in a small company.
quentech@reddit
I'll take it.
Small companies that need a lead wears-many-hats guy are my jam.
Puggravy@reddit
The jobs will get there once the AI bubble pops. It may be a couple of years yet until investors realize that none of these AI startups have any moat.
_Guron_@reddit
They called it "vibe checkers"
sangeli@reddit
100% agreed. Now every PM turned founder can vibe code their own shitty MVP!
poeir@reddit
Great... Now I'm going to have to do (at least) one more job.
sudosussudio@reddit
I go to startup meetups once in a while and already have referred founders whose sites were FUBARd to my freelancer friends. I personally wouldn’t touch them but there is certainly a lot of work to do!
Little-Bad-8474@reddit
There’s gonna be a ton of cleanup at FAANG too with our exec driven ignorant metrics.
wh1t3ros3@reddit
This shift is good; big tech has been allowed to consolidate into something that kills all competition.
PublicFurryAccount@reddit
This won’t change that.
They’ve already been consolidated and their ability to purchase any competitor remains.
Particular_Bet_279@reddit
https://x.com/icetigercream/status/1959822437547188666?t=REQZKQXTVUhR0jbhr9mh6w&s=19
Chatgpt even got this basic math fact wrong. It only works if you give it the exact context. Even after that you just need to hope it can prioritize the right things
JonnyBGoodF@reddit
Can confirm. Even at big Fortune 100 size companies there's a lot of vibe coded slop that works well for MVP but will absolutely fall apart under stress and within production environments once they scale to more users.
GWstudent1@reddit
I'm desperately trying to fight off vibe coding at the start-up I work at to avoid this shit. Managers' brains were permanently broken by agile/MVP/first-to-market product strategies. The only thing on their mind is "Just get it done. Make it work. We'll make it scalable later." And that pushes a culture of manufacturing shit code that doesn't stand up to the pressures of the real world, and going back to make stuff scalable is 10x harder than building it correctly the first time.
Regular_Tailor@reddit
The number of times I've heard "gold plating" from people when they mean "might not fail" is amazing.
krisolch@reddit
> Make it work. We’ll make it scalable later
Maybe they are correct?
Most startups should not care about scaling until they hit the problem of scaling. Until you've been a founder yourself you won't understand this. And yes, I'm a dev with 12 years of experience.
jl2352@reddit
They are correct, however on the surface that type of talk (get it done and care about scalability later) is easy to say and hard to do. It can lead to very bad results, like the shitty code the person describes.
I find it's better solved at the ticket stage. Where you can design features in ways that the feature does less, or has acceptable restrictions built in, and then you write good and proper code that fits those lower requirements.
GWstudent1@reddit
That’s workable when you get your code to run for 100 customers and then run into a scaling problem at 1000/2000 customers. I have like ~15 demo customers to work with and I know we’re aiming for hundreds of customers at some point. I’ve been given AI-written code that’s like O(N^2) that could be less complex. I know that’s going to hit the wall when we start selling. And I could keep provisioning more DTUs to keep refreshes from hanging, but why would I increase our bills that high when I could just write better code?
pavlik_enemy@reddit
If you have funding you should totally just throw money at the problem
The most difficult thing is to be able to tell people "told ya" without pissing them off, and then somehow get them to accept that optimizing the code will take a long time after plowing through features at breakneck speed.
GWstudent1@reddit
It’s also a startup I own in part, I’m not just an employee drawing a paycheck. It’s my fellow founders (who are not programmers) that are pushing AI slop onto the project that I know is not scalable. And going from $200 a month for database DTUs to $4000 a month is unsustainable for the amount of money we have.
pavlik_enemy@reddit
I guess you should pull rank. If they don’t trust your technical expertise why are you even working together?
break_card@reddit
This effect is probably costing economies billions each year. It’s the fundamental problem of enterprise coding. Rough hacky MVPs are needed to survive at early stages, but these will make your dev cycles lethargic and rotten as it scales to a medium sized business. It’s impossible to know the exact right time to revamp your MVP into a long term software system. It’s extremely hard for devs to convince business folks to make this investment.
jollydev@reddit
Most MVPs never result in products that get to scale to a medium sized business. It's a luxury problem to have a tech indebted product that actually has PMF.
Comparing two sloppy MVPs vs one high-quality MVP, I'll take the two sloppy ones all day to double my chances of finding PMF in a given timeframe.
Subject_Health_3182@reddit
This reminded me of a story from the early 2000s about a company with a smart bot that was better than Google. You typed your question, and while the info was "loading", a real human behind the screen googled it and then typed back the answer in chat.
DecodeBytes@reddit
I am building an AI Agent Framework at the moment, so I'm frequenting that world a lot. The tidal wave of crap is flabbergasting. My project is quite new, but already far more clean and structured, yet it sees nowhere near the traffic and hype of all the slop out there.
I often look at these projects and it's easy to spot them. READMEs full of emojis. Lots of extravagant promises, like a project that is less than a week old describing itself as 'Enterprise Grade'.
Reams and reams of documentation, some of it hilarious. For example, I came across one yesterday that was just a ton of prompts wrapping OpenAI APIs and tons of functions that were never even called; it was described as a 'Cutting Edge, Enterprise Grade, at-scale, Multi Agent Swarm'. In the docs it makes claims about SLAs and uptime. Yet it's a CLI!?
The interesting thing is, though, we have a very low star count (I know, vanity metric, but it helps make you discoverable), yet those who are turning up are seeing the potential and actually contributing. So in the end, I know we will come out on top of the slop.
fmgiii@reddit
Oh you're missing the boat if you're not making this absolutely intentional. It is such a fantastic opportunity that you can't possibly pass this up. Four decade+ IT architect here. Just signed my first ever severance agreement in my entire career. I'm essentially done in 7 months. Being replaced by a crew of outsourced devs from India. Very nice people. I have no qualms with these folks. People need to earn a living around the world, and I am just fine with that.
So I'm tasked with 'knowledge transfer' and of course 'getting sprint tasks completed' because nobody else can yet. Indeed the PR's are coming in from overseas, all AI generated. No surprises here. I fully expect that. The folks from overseas are not stupid. They know how to play the game. Especially with US based money flow.
So (insert Mark Manson's 'Subtle Art of Not Giving a F*ck') I start thinking to myself, why do I care so much? Is it because I poured my heart and soul into this code base? Giving it every bit of attention and honor that one would give to something that truly mattered? Or because I cared about the users of the app? Their lives? The impact they would have on others because this code base was of the best quality I could possibly produce? Yes, indeed that was the reason...but
...capitalism doesn't care. So WHY...SHOULD...I?
Introducing a $25.00/month Claude Code Pro subscription. Perfect! They want AI. They're going to get AI. And honestly it's been a JOY to BEHOLD such a fitting REVENGE!!! And so f'ing easy.
I just let Claude Code SLOP IT OUT FOR ME. And it gets reviewed and checked in!!!! The code works! Is it extendable? No. Does it fit into the context of the larger architecture? Just barely. Does it fit any overall standards? No. Does one function resemble another in terms of coding style and approach? Absolutely not! Is it code that will stand the test of time? Hell no. Will it be maintainable in a year? Not by anyone with less than 3 decades in the industry. And the list goes on, and on, and on of how this SLOP CRAP violates everything we have held up as quality until now.
Are the capitalists happy? Of course they are! Look at how 'productive' I am now! I can bang out a new radio-set by lunchtime!!! Full stack!!!
But the Beast lies waiting, for the future. Salivating. When the wall hits. When no one, anywhere, knows how the overwhelmingly expensive codebase got to the point where no change can be made to it without the entire castle falling to the ground like a house of cards.
Live the dream! SLOP YOUR WAY OUT! We deserved better!
pavlik_enemy@reddit
Given that very few people comment their code, if there are comments at all it's probably AI generated.
DogOfTheBone@reddit
LLM comments are generally hilarious. It'll be like
var playerSpeed = 500 // This is the player's speed
pavlik_enemy@reddit
And then there will be something like
speed = speed & 0x7F << 15
without any explanation
Sleisl@reddit
john carmack is stuck in the AI
Zywoo_fan@reddit
Prompt: "You're John Carmack ..."
TangerineSorry8463@reddit
// evil floating point bit level hacking
// what the f?
perk11@reddit
This is one of the 3 things I ask ChatGPT not to do in my custom instructions, and adding it there seems to work well.
AppearanceHeavy6724@reddit
"Excessive comments" help future code generation, though,
PhysicallyTender@reddit
or that could've been PirateSoftware's code.
pavlik_enemy@reddit
What's the deal with this guy? I know he's an entitled jerk, but did something happen with him and repetitive code?
PaleEnvironment6767@reddit
Turns out he doesn't actually know how to code properly. It's not the worst for a personal project, but then he acts like some elite coder with 20 years in game development. A simple "yeah I know it's shit but it works well enough for me" would've solved that whole issue for him. There are several people on youtube who have analyzed the little bits of code he has shown.
QueenAlucia@reddit
Plus how he keeps repeating "first second generation Blizzard employee" as if it's a flex or something.
pavlik_enemy@reddit
Found the video, this is freaking nuts. At one company I maintained a backend for a game (which survived four games as far as I remember) written in a heavily OOP language but without using classes, but hell, at least it didn't contain magic numbers.
WoodenPresence1917@reddit
The nested switch statements are a much bigger sin than the magic numbers, to be fair
pavlik_enemy@reddit
As far as I understand, long "wall of code" methods are quite common in gamedev and that's where he picked up this style
SpacemanLost@reddit
having spent well over 30 years in gamedev, and having seen a diverse set of codebases, I am going to say it is not that common
WoodenPresence1917@reddit
I think it's common in beginner coding, especially people not formally trained (eg, me 10 years ago, or half of the scientific programming code I review). Having said that, I've not seen any that is quite as inscrutable or hard to maintain as his.
pavlik_enemy@reddit
Yeah, it's also what every scientist is legally obligated to write. But this style is acceptable in certain areas, while having to remember that 42 means "Fireball" is not.
SnakeSeer@reddit
I once got
Thanks, Copilot.
cupofchupachups@reddit
Suddenly it all makes sense. The value of the variable varies
jdowl13815@reddit
I find comments annoying. They are usually signs the code wasn’t written in a readable way.
farox@reddit
Sometimes it's useful to document the why. Especially if it's not intuitive.
jdowl13815@reddit
I don't entirely disagree. There are situations where it is necessary. But it is also often someone just being lazy. They didn't want to refactor variables or add variable or function names that have meaning. They didn't want to break an overly long method doing too many things into readable, single-responsibility forms. There are definitely complicated situations that benefit from comments.
farox@reddit
Oh yeah, ideally it's elegant and understandable. But a lot of times you just have to write stuff that doesn't make sense unless you know those exceptions in the business rules, or that bug in the legacy API you're working with.
Hence the occasional "why". Never the "what", very rarely the "how".
zicher@reddit
I have definitely noticed that this seems to be a good indicator
LossPreventionGuy@reddit
if the comment has an emoji it's a guarantee lol
zicher@reddit
Right? What is it with all the emojis. They weren't in the training data, that's for sure.
Chris_Newton@reddit
There seems to be a trend for adding emoji to comments in documentation and blog posts, often highlighting a point that is made in the accompanying text. It seems reasonable that if those sources have been used as training data then an LLM would generate comments in a similar style.
I’m not sure I’d want to see emoji in production code, but as a presentation technique in documentation that features code snippets, it seems quite effective.
thephotoman@reddit
How much do you want to bet that it’s because Microsoft devs use Slack emoji in their comments, and the AI was trained on Microsoft’s code?
ElonTaco@reddit
I assume they trained it on tons of messages, tweets, etc.
Fun-Put198@reddit
I have
on my Netty configuration, and to be honest, I have become too lazy to delete some of them when copying lines that are actually useful
ryanstephendavis@reddit
failsafe-author@reddit
I actively try not to comment (and tell others the same).
It’s actually nice that AI comments, because it gives me another reason to go through and make sure all the code does what it’s supposed to (as I remove the comments).
geon@reddit
Surprising to see so many downvotes. This is the way.
"actively try not to comment" does not mean "neglect to comment even when necessary". Perhaps the downvoters are confused about the difference.
And "removing comments" presumably includes refactoring the code not to need them, not just deleting them willy-nilly.
failsafe-author@reddit
Yes, you are absolutely right about that last part, and I probably could have been clearer. I don’t just axe comments- I make sure the code is understandable and intuitive without them- and that’s why it’s useful if I’m going to “own” a bit of AI generated code. I need to make sure I understand it (and don’t just trust comments) before checking it in.
Bitter_Boat_4076@reddit
Maybe you guys should start writing helpful and meaningful comments.
Most of the time it's not about what the code does, but why.
failsafe-author@reddit
If I’m tempted to write a comment, I instead write a method named what the comment would have been.
There ARE times comments are appropriate, but it’s rare. Code should be self explanatory.
In general, comments are a smell.
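A trivial, invented TypeScript example of what I mean by turning the comment into a name:

// Before: the comment carries the meaning.
function totalBefore(prices: number[]): number {
  // drop refunds (negative amounts) before summing
  return prices.filter((p) => p >= 0).reduce((sum, p) => sum + p, 0);
}

// After: an extracted, named function carries it instead.
function excludeRefunds(prices: number[]): number[] {
  return prices.filter((p) => p >= 0);
}

function totalAfter(prices: number[]): number {
  return excludeRefunds(prices).reduce((sum, p) => sum + p, 0);
}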
Bitter_Boat_4076@reddit
> but it’s rare. Code should be self explanatory.
Again, I think the focus in your comment is on what the code does, not why.
How do you explain a specific requirement in a function name?
Example: during validation, we transform everything to lowercase except one set of properties that we store in upper case. Why? Because there's an international standard. Would you create a function `upcaseThisForInternationalStandard`?
Maybe we even agree, the problem may be that our "quantification" of "rare" differs.
Anyway, this kind of summarizes what I think: https://hackaday.com/2019/03/05/good-code-documents-itself-and-other-hilarious-jokes-you-shouldnt-tell-yourself/
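To put my validation example above into code, roughly, as a TypeScript sketch with invented field names (imagine something like currency codes, which the standard defines as upper case):

type PaymentInput = { email: string; currency: string };

function normalize(input: PaymentInput): PaymentInput {
  return {
    email: input.email.toLowerCase(),
    // Kept upper case: the downstream settlement system expects ISO 4217
    // currency codes, which are defined as three upper-case letters ("USD").
    currency: input.currency.toUpperCase(),
  };
}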
failsafe-author@reddit
No, I do understand the focus on why, not what. But I think the why should, as much as possible, be reflected in the code itself.
I WOULD consider a func along the lines of “transformForInternationalStandard”, but this example leads to a whole lot of other questions. Why are we doing this transformation here? Presumably if we are doing this kind of work, it’s in code that is all about making the response suitable for the consumer, including international standards. There’s a good chance that this concept is already reflected in the code you’re working in.
But, we actually do agree, and I have a good example of this.
At my company we have “refactor parties” where we take bad code and work on it for an hour, led by an expert in the language being used. So there was this one piece of code that was well commented, but very long. I took it and broke it out into several methods, removing nearly all the comments.
As we were presenting our refactors and I mentioned doing this, one of our developers suggested that we add comments as a linting error. That we should not be able to check in code with any comments at all. I am not that extreme, and the person leading the party asked specifically about one comment that had been in the original code- it was a “why” comment that would have been difficult to reflect in the code itself. I couldn’t remember it, but we went to the method in question, and I’d left the comment in (I wish I could remember the specific operation, but it doesn’t matter- I think you get the point). So we had a clear example of why making comments a linting error would be bad, and I’d left the comment in without even thinking about it.
Another example was a PR I reviewed the other day where a developer was setting up web service endpoints. He had all kinds of comments about how he was initializing the library and registering the endpoints. All stuff that anyone familiar with the library should know. I had him remove all of those comments. However, there was one comment about a piece of middleware for which the order mattered: it had to be registered first. For that, not only did I have him leave the comment in, but also expand on the why so it was very clear.
So yeah, I really think we do agree conceptually. I just think it’s infrequent that a comment adds clarity in good code.
I used to be someone who commented with the why in nearly every line. I now have the why in three other places before a comment: 1. High level documentation in the repo. This is often not suitable for what we’re talking about, but it can give context to what the code is overall trying to accomplish. 2. Unit tests. I am a TDDer, and view tests as enforceable requirements. So often I’ll have a test along the line of “must be uppercase to meet international standard”. 3. Method and variable names. As I said, any time I’m tempted to write a comment, I try to write a method instead that is named what that comment would be.
But there are exceptions, for sure. I will always think through “will a developer understand why I’m doing this without explanation” and try REALLY hard to do it without comments, but if I can’t, I’ll put the comment. And think I generally do a good job of getting this right. I rarely struggle to understand the “why” of my own code when I return to it, and other developers generally tell me they find my code easy to work with.
a_lovelylight@reddit
I find a lot of devs don't understand what's meant by writing "why" comments.
Sometimes you can't encompass a business decision in a variable or function name. Or sometimes you're doing something that's kind of shitty, but is necessary for various reasons (usually related to deadlines or some idiotic thing upstream that no one has time or will to fix), and it's helpful to have it documented right there where the next person to come along will likely have questions.
geon@reddit
Like a library that takes arguments typed as readonly, but actually mutates them, requiring you to make a deep copy before passing them in.
That *requires* a comment, or it will be removed.
pavlik_enemy@reddit
Comments are needed when the code does something weird and most of the code is pretty straightforward
ihxh@reddit
I don’t think you should not comment at all.
Code describes how, comments should describe why. It should give the reader the answer why solution x was chosen instead of solution y.
Commenting things like “add a to b” is useless, but something like “we need to add a to b because system c expects xyz” would be better.
If you have to explain a lot of the “why” in your code base then restructuring might be something to look into. But “why” can also be a non-code / business requirement.
failsafe-author@reddit
I’m surprised to be getting this much pushback- haha. I thought it was commonly accepted that comments are a smell. I suppose not.
But as I said in my other comment, I try to use methods in place of comments, and only fall back to comments when I cannot clearly structure what the code is doing in a way that is self explanatory.
I completely understand that comments are for why, but generally, with small, well named methods you don’t need comments.
pavlik_enemy@reddit
Sometimes it's better to break up a large method with some comments instead of splitting it into small methods that don't have a clear and obvious scope. I don't think "cleanupWeirdShitOurClientsThinkIsCSV" is a good method name.
geon@reddit
"cleanupWeirdShitOurClientsThinkIsCSV" is a bit passive aggressive, but a good scope. It lets the rest of the code be sane.
failsafe-author@reddit
I don’t think so either- lol. And it’s obviously contextual. Something being regarded as a “smell” means to think about it, but it’s not always wrong.
But in general, I avoid comments where possible, and take the impulse as a clue that I should write better code unless I can clearly articulate why I can’t describe the “why” in code. Usually I can.
FWIW, my experience is that other developers rarely have trouble coming in behind me and understanding my code.
pavlik_enemy@reddit
Well, I’m not that rigorous but generally, yeah. Most code should be straightforward and easily understood by a person familiar with the subject matter, be it online retail or a distributed database
devexus0@reddit
very easily avoided with instructions or rules though
gentlychugging@reddit
Yup. In strictly typed languages good code should not need many comments at all.
geon@reddit
It's about writing self explanatory code. Meaningful names, separation of concerns, separation of abstraction levels.
gentlychugging@reddit
Exactly
ndr_brt@reddit
Tests are the best comments
geon@reddit
Well. Tests are good documentation. Making the code testable to begin with is good for readability, which reduces the need for comments.
Alphasite@reddit
Eh. I’ve seen plenty of older code bases that need comments to explain wtf they’re doing.
Rhino_Thunder@reddit
That just means they were poorly written. Good code self documents
DizzyAmphibian309@reddit
Except when it doesn't. If you put a sleep in your code, usually to avoid a race condition with another process outside the codebase, and you don't comment it, you're a fucking idiot, because it's almost certain that someone is going to "optimize" the code later by removing that unexplained sleep and break everything.
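A contrived TypeScript sketch of the kind of sleep I mean (the scenario and names are invented):

async function publishReport(upload: () => Promise<void>): Promise<void> {
  await upload();
  // DO NOT REMOVE: the downstream indexer polls the bucket every 2 seconds
  // and chokes if we signal completion before it has seen the new object.
  await new Promise((resolve) => setTimeout(resolve, 2500));
  console.log("report published");
}

Without that comment, the delay looks like dead weight and someone will delete it.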
Rhino_Thunder@reddit
Sure you should explain “whys” but you shouldn’t need to comment “hows”
usrlibshare@reddit
If I should explain the "whys", then the code can hardly be called "self documenting", now can it?
PaleEnvironment6767@reddit
No but see comments are stupid except for when they actually aren't as people keep pointing out. They just wanted to briefly pretend they write perfect code that doesn't require comments, which is pretty much impossible in any real life business setting.
Alphasite@reddit
Strong disagree. Every code base where I’ve seen people apply this is usually full of foot guns where people are afraid to change things.
Three very obvious examples where this falls over:
1. You're interacting with an external system with quirks. You really have to explain that shit.
2. The domain is complex. It needs to be documented. People aren't wizards who know the minutiae of a spec; having it explained in situ helps reduce the WTFs per line. You really need to explain odd behaviours outside your control.
3. Backwards compatibility concerns: the moment you've got clients depending on weird behaviours or random features, you have to explain why certain things still exist. Maybe there was an error you fixed before, but there's a hundred-million-dollar customer who will drop you if you change the "feature".
pavlik_enemy@reddit
My two favorite examples of this are the complex regex that existed to filter input from a certain client (which I promptly simplified) and a code in a core backend of a multi-billion corporation that checked if a supplier belonged to a company they acquired a couple years ago
usrlibshare@reddit
No, it doesn't. That opinion was refuted a long time ago.
TalesfromCryptKeeper@reddit
This is a very anti-junior dev stance smh
Rhino_Thunder@reddit
I think it can be something for juniors to aspire to. They aren’t expected to be perfect out of the gate. I certainly wasn’t
johnpeters42@reddit
Good code mostly just needs comments to explain why it works the way it does. // don't do seemingly obvious X because edge case Y
Rhino_Thunder@reddit
Agreed
newEnglander17@reddit
I strongly disagree and I harp on my coworkers to use comments more often. When you’re constantly being thrown into new code and trying to find something quickly, comments save a lot of reading and help speed up your search. It also explains the context of the logic. You can see the code does something but you don’t know why they programmed it that way.
nullpotato@reddit
We had one engineer who wrote so many comments it made it harder to read their code. Literally every other line was a comment just stating what the code did in English. But this is the only time I've seen too many comments actually be a hindrance.
newEnglander17@reddit
Yeah that’s obviously a bad use of them. But if you have a few code blocks that can be quickly summarized, why make your coworkers do extra unnecessary work reading each line.
tobidope@reddit
From my experience it's better to refactor and give the code blocks function names (extract method). People tend to change code without changing comments. A method with parameters tells you input and output and hopefully has a good name. A comment every ten lines tells me you have large methods. I can read code faster than I can read prose.
nullpotato@reddit
Because he was a bad engineer and a worse programmer. I dunno, but we always got a chuckle out of seeing the scripts pop up randomly.
usrlibshare@reddit
Edge cases, usage examples, hints when it can break, external conditions, side effects.
And that's before we talk about the obvious reasons to comment, which is auto-documentation via Javadoc, docstrings, godoc, etc.
No, good code doesn't "comment itself". That opinion was refuted decades ago.
pavlik_enemy@reddit
It doesn't even take statically typed languages; with good naming you don't need inline comments for most applications people write. "I'm iterating over the list of order lines and summing the prices". Yeah, no shit.
Frameworks and libraries need docstring-type comments, but not inline.
nullpotato@reddit
A good comment would be why you need to iterate over the list for the sum instead of just accessing the current sum variable. If the code contains the full context, then it absolutely doesn't need a comment.
Wandering_Oblivious@reddit
yeah usually if I comment on code it's to let the dev know:
I know that this code is janky and offer some context for why we went with a given solution
Apologize to them for the gordian knot they are about to try and untangle
pavlik_enemy@reddit
Yeah, it happens sometimes. You know it's a shit code but you can't think of a better way to do what it has to do
nullpotato@reddit
Or it's done some shit way because it turns out to be an unsolvable math/CS problem, but the business needs it to work exactly like this.
Wandering_Oblivious@reddit
or sometimes you do know a better way, but frankly business needs & deadlines & team capacity trump technical efficiency in most cases unless it's a MAJOR performance differential. thems the bricks.
TraceyRobn@reddit
I comment so that when I come back to the code 6 months later, I know what it does, and why it's there.
Wandering_Oblivious@reddit
There are few things in life more painful than coming across some NIGHTMARE inducing code, then running a `git blame` ....only to see your own name pop up.
Opinion_Less@reddit
Lol. Oh no. Do people think I'm generating everything??
kcib@reddit (OP)
Depends, are your comments “This function is for X and here’s some very informative info that you should know” or is it “✅Here is a function with the correct types”
geon@reddit
vs.
GrapefruitMammoth626@reddit
If comment says “Yolo” you know it wasn’t AI generated, but some absolute cowboy who has long left the company.
pavlik_enemy@reddit
I think I've left my share of
// Fuck this shit
comments in various codebases. Left way more
Fuck CI
commit messages
nullpotato@reddit
For being trained on open source repos AI sure doesn't swear in the comments like I would expect.
GrapefruitMammoth626@reddit
The commit messages are usually useless. It’s the PR that’s usually got some useful information. You question a particular line and see comment like “Hope this works now” on the change.
New_Enthusiasm9053@reddit
No, mine are "This function is for X and here's some very informative info that you should know", but said info is incorrect and wildly out of date.
NuclearVII@reddit
"if you're reading this, line 2456 broke - I knew it would. Sorry, future me"
sciencewarrior@reddit
TODO: treat corner cases
last updated: 07-23-2019
SnooTangerines4655@reddit
Hahha this is hilarious
Jdjdhdvhdjdkdusyavsj@reddit
It's an interesting choice that so much effort seems to have gone into putting an emphasis on AI writing comments for its code. It's like everyone over at the AI companies knows there's no real chance of their models going back and fixing something. They're over-commenting, at a cost, because they know humans are going to have to come back and deal with it. They're paying some cents per token to write comments like
Usercomment = true // set usercomment to be true
Over time that cost adds up and it's entirely wasted; no one is ever going to need that comment. Why do they do it? Because they don't know what they're writing and they don't know what a useful comment is, but they know it's a human who will have to deal with it.
PetroarZed@reddit
It's funny, I often have to go in and comment code I have AI rough out for me, because it comments only useless things ("This method does the thing this method obviously does based on name and brief inspection") and doesn't have the "understanding" to write comments of value.
GrapefruitMammoth626@reddit
Interesting point. I make an effort to write meaningful comments, particularly on legacy code where I had to scratch my head to understand why something was done a particular way, or to add some context that just isn't available otherwise.
I'm mindful when using AI to generate code to look at the code thoroughly myself and rewrite generic comments in my own wording or, if lazy, coach it into better comments. But it does have a tendency to over-comment, and a lot of the comments just narrate the output as well, e.g. "// Now the user can see the toggle button"
Extension_Thing_7791@reddit
Do you think the issues can be fixed by AI?
As an experienced developer who does 70% of coding by hand, I get really annoyed when teammates push out AI slop that works but is a huge pain in the rear to maintain or change.
Lately I've been using the strategy of applying more slop on top of the slop with AI and caring less about what comes out, because if the execs don't care, why should I? But it is a bit like living on the edge, because I don't want something bad to happen and then get things pinned on me.
sailnlax04@reddit
I'm cleaning up AI slop in my own code constantly
xyonix_ai@reddit
We recently had someone on our podcast who's working on a 'composable software' solution to address exactly this issue (the company's called Bit Cloud if you wanna check it out). Non-technical vibecoders will continue to muck up codebases until a solution is found. Not sure if composable software is that solution, but it sure was interesting.
It seems like the role of software developers is heading towards more of an orchestration direction vs. an actual creation direction. Very curious to see how the next generation of developers works. Pretty concerned they won't know how to code at all, but who's to say?
Inside_Topic5142@reddit
Yep, seeing this everywhere. Business owners think they’re saving money using AI tools or low-code platforms like Lovable or Firebase, then end up paying double for cleanup. And it’s not just bad code. It’s Frankenstein software: bloated, fragile, and impossible to scale or extend.
You rarely see this from firms with real engineering depth. Companies like Accenture, Capgemini, TCS, Infosys, or Radixweb engineer for longevity, not just "it works for now."
AI helps when guided by solid fundamentals. Without that, it’s just fast-tracking future failures.
movemovemove2@reddit
If there's budget for a version 2 rewrite, why not? Even without AI, a lot of startups need that sooner rather than later.
robbyrules530@reddit
I was just promoted to a more senior position, so I do a lot more reviewing now, and it appears I'm just the guy that reads and corrects the AI-generated code from contractors.
It can be hard to tell how much a developer audits the AI code, but when they leave those stupid comments in, it's a dead giveaway lol.
Competitive-Fact-313@reddit
Vibe coding is a way to fill pockets that already have a hole made by the vibe coders.
kruvii@reddit
Yep, time to start your own consulting business if you have a lot of connections at AI-adopting orgs.
forbiddenknowledg3@reddit
All these AI guys are convinced we can "code" with natural language and that you don't need to open the source files anymore.
Like they think it's another abstraction layer similar to a compiler.
They fail to understand that AI is not deterministic like previous abstractions lmao.
I've had AI generate tests without any assertions, saying "it's too complex to test", or write O(n^2) algorithms that could easily be optimised with a set/map. If people really think we should ignore such code, well... good luck to them.
Also the entire point of code is details. You will always need a human to nail those down.
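A tiny illustration of the set/map point (function and variable names are made up): the first version rescans a list for every item, the second builds a set once and then does constant-time membership checks.

```python
def flagged_slow(items: list[str], blocked: list[str]) -> list[str]:
    # O(n * m): "x in blocked" walks the whole blocklist for every item.
    return [x for x in items if x in blocked]

def flagged_fast(items: list[str], blocked: list[str]) -> list[str]:
    # O(n + m): build the set once, then each membership check is ~O(1).
    blocked_set = set(blocked)
    return [x for x in items if x in blocked_set]

if __name__ == "__main__":
    items = ["a", "b", "c", "d"]
    blocked = ["b", "d"]
    assert flagged_slow(items, blocked) == flagged_fast(items, blocked) == ["b", "d"]
```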
peripateticman2026@reddit
Non-determinism is indeed the killer.
Jedkea@reddit
An LLM with a temperature of 0 is deterministic though?
That is: same prompt in + same weights + deterministic next token selection = same exact output.
Ok_Individual_5050@reddit
It's deterministic in the sense of getting the same output every time. It doesn't give you the predictability of knowing what the output will be for a given prompt in advance though.
If I write Rust, the input is a million miles away from the output. But there are certain contracts about how the input is transformed into an output that never change. I can build a mental model of what machine code will actually get run after I compile the project. That's what LLMs are missing.
Jedkea@reddit
Which is the definition of the word. Complexity does not break determinism.
What do you mean by knowing the output in advance? I don’t think you can know the output before running the rust compiler either. For all you know, the compiler will run forever and never stop given an untested input. You can make a pretty damn good guess that it will terminate and output what you expect, but it’s not guaranteed until you step through the system.
I do get what you mean I think, but these “deterministic arguments” as a way of writing off ai are invalid. LLMs are simply more complex than a rust compiler, but they are both cut from the same cloth.
Ok_Individual_5050@reddit
No. They are absolutely not cut from the same cloth. And you *should* be able to understand at least roughly what the machine code your compiler produces is going to do. If I use u32 I know for a fact I'm going to get an unsigned 32 bit integer at the end. LLMs make no such guarantees and can never do so.
Jedkea@reddit
You can’t even know if the compiler will ever exit, let alone other more complicated claims. Do you disagree?
Both are deterministic state machines. Do you disagree there?
peripateticman2026@reddit
That's the issue though, isn't it? The creators of the LLMs can set the temperature to 0 for some specific things they want to be deterministic (censorship terms etc.). However, if you set that across the board, then it effectively defeats the purpose of using an LLM to begin with, unless the model has exact data based on its training.
https://old.reddit.com/r/LocalLLaMA/comments/1j10d5g/can_you_eli5_why_a_temp_of_0_is_bad/ discusses some of these issues.
Jedkea@reddit
You set the temperature yourself in the api call, the provider doesn’t. I am a contributor to quite a few ai coding tools, and reproducibility is a well known trait. For coding, you are pretty much always using a temp of 0. I believe you might misunderstand how these things work if you think exact training data would be needed to get a “wanted” result with 0 randomness.
That link you shared is not scoped to coding. For something like creative writing or chats, you would indeed want some randomness.
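For anyone who hasn't touched the raw API: a minimal sketch of setting the temperature per call with the OpenAI Python SDK (the model name and prompt are placeholders; even at temperature 0 with a seed, providers only promise best-effort reproducibility, not bit-identical output forever).

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Write a function that dedupes a list."}],
    temperature=0,  # greedy-style decoding: the same prompt tends to give the same output
    seed=42,        # best-effort reproducibility, not a hard guarantee
)
print(resp.choices[0].message.content)
```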
DealDeveloper@reddit
LOL
How do you manage HUMAN developers who are also non-deterministic?
How come you don't realize that it is easier to manage AI than humans?
peripateticman2026@reddit
You must be joking, right?
Recently, for instance, we had a problem with getting Claude to use some specific MCP servers to achieve a task. Apart from the times that it failed because "the prompt was too big" (not the human prompt, but the internal state generated by the same human prompt - also a mark of non-determinism), it still managed to not use specific MCP servers because it chose not to, even after being explicitly told to, with the exact command from said MCP servers listed out. Non-deterministic. The only way to know what it is doing is by logging verbosely and inspecting the logs - at which point it becomes an overhead, not a valuable tool.
Furthermore, when using Replit, it would do the same instruction "do X so that Y" 10 different ways when prompted 10 different times (and with a scarily massive difference in the number of files and configuration it chose to create/delete/edit). Now imagine you have a couple of dozen such changes created and checked in by Replit, and you wish to go back to a specific point and redo some changes - there's no way to be sure of the result without looking at all the changes it made and knowing how they affect everything else in the codebase. This is not an issue with a human being, regardless of how bad the developer may be.
AI is here to stay - I am not a Luddite - but we are not there yet where we can have it replace humans completely, even as a junior intern.
DealDeveloper@reddit
No, I'm not joking.
You used Claude, Replit, and MCP and got bad results.
Try not using all that. Try going through your comment sentence by sentence and solving the problem step by step.
Would you be willing to bet money (using an escrow account) that a LLM (combined with some custom code) cannot replace humans completely . . . now?
How much?
peripateticman2026@reddit
This makes zero sense. So don't use LLMs the way their creators require them to be used? The whole discussion was about non-determinism, and it clearly demonstrates that.
If you're creating a personal CRUD app, that doesn't matter. For anything beyond that, it definitely does.
Sure, but it's a meaningless exercise. What parameters? What domain? What skills? What budget? Projects range from a simple calculator to real-world applications.
Now, if you show me a fully functioning company started, run, and managed entirely by LLMs with no human intervention, and delivering production quality, then that'd be something. We're not there yet.
_TRN_@reddit
Human developers are not non-deterministic in the same way AI is.
No_Lingonberry1201@reddit
TBH the whole "programming in natural languages so that business people can code too" thing has been around for longer than I have been alive. I mean, COBOL was meant to be easily used by everyone, even non-techies, and please stop laughing.
ern0plus4@reddit
I never used COBOL, only read some pages of a '70s book (and that was also 30 years ago), but I still remember code like: "add amount to total". Yes, it was originally intended to be human-language-style.
farox@reddit
But we fixed that with BASIC and office automation!
No_Lingonberry1201@reddit
*insert non-committal grunts here*
Perfect-Campaign9551@reddit
People don't even care about quality in software anymore. They just want it fast, get to market, let the users suffer... And then six months later the issues start getting escalated and we get to put out the fire.
Management screws us from both directions. They want speed in developing and speed in fixing, while speed was the cause of the issue in the first place.
DealDeveloper@reddit
"They fail to understand AI is not deterministic like previous abstractions lmao."
If only there was a way to check LLM output with deterministic code.
They don't fail to understand . . . you forgot SAST-like checkers exist.
Re-read your comment and replace "AI" with "human developer".
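To make the "check LLM output with deterministic tooling" idea concrete, here is a minimal sketch that runs Bandit (a Python SAST tool) over a directory of generated code and fails if it reports anything; the scan path and the fail-on-any-finding policy are assumptions for illustration, not a recommendation.

```python
import json
import subprocess
import sys

def scan_generated_code(path: str) -> int:
    """Run Bandit over `path` and return a non-zero exit code if it finds issues."""
    result = subprocess.run(
        ["bandit", "-r", path, "-f", "json"],  # -r recurse, -f json for machine-readable output
        capture_output=True, text=True,
    )
    findings = json.loads(result.stdout).get("results", [])
    for f in findings:
        print(f"{f['filename']}:{f['line_number']} {f['issue_severity']}: {f['issue_text']}")
    return 1 if findings else 0

if __name__ == "__main__":
    sys.exit(scan_generated_code(sys.argv[1] if len(sys.argv) > 1 else "src/"))
```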
PeachScary413@reddit
Finally bröther, now we just need to convince all (or most) of the devs in training to drop out and become plumbers. Then we will have our payday 🫡
eatlobster@reddit
Have no fear! Most of the devs in training are lost without AI.
VenBarom68@reddit
Yes but what are you arguing?
When I started out 20 years ago I would have been lost without a search engine and internet forums.
I remember at my first internship the lead dev changed the technical interview round, and candidates needed to write some Java on a computer without internet connection and only a local javadoc. I remember thinking how utterly stupid that was.
Is this different?
CharmingBudget1047@reddit
When I started programming I had the realization that copying and pasting code from LazyFoo's OpenGL tutorials and running it wasn't actually teaching me anything.
The week after I started, I went back and tried to do the same thing from the tutorial but from memory, and I couldn't do it. Even though I had done and "written" it properly the first time I went through the tutorial, I had never actually engaged with it and tried to understand what was going on (I didn't know why there were parentheses around some words, for example).
I only started to figure stuff out when I started to write my own code, of course based on the tutorials and stuff I was reading, but *I* was writing it and asking questions in my mind instead of just accepting it.
I feel the same with AI, it's like it's taking over a part of the job that is actually important.
And I also question how much more efficient the thing is to begin with. Ask a robot to write code, and you still need to check and fix and *understand* what it did, because at the end of the day you are responsible for that piece of code. To me it seems like the same amount of work, just with the "write the code" part a bit faster and a lot less reliable, and that is by far the smallest part of programming.
drjeats@reddit
Yes it is fundamentally different, because you ultimately need to synthesize the information you get from web searches.
LLMs can make the random shit you copy paste from StackOverflow compile/run and sort of work without any critical thinking on your part.
Using it like a more targeted search/SO is a different thing from what is being discussed here.
Frogman_Adam@reddit
The way a lot of tech companies are overusing AI though, they’ll be better placed to take the jobs!
gomihako_@reddit
Bröther, I have seen the light. It came from...the pipes.
BillBillerson@reddit
What makes you a qualified plumber?
Well, I have a lot of experience with CI/CD pipelines.
RegrettableBiscuit@reddit
I'm an expert at Internet, which is a series of pipes.
SignoreBanana@reddit
You might be surprised to know the principles are very similar.
ings0c@reddit
The pipe was flowing end to end but made too many droplets 😟
kryptoneat@reddit
Calm down Mario
TalesfromCryptKeeper@reddit
In the pipe, five by five
locvs@reddit
My company decided to use AI to train a chatbot and automate the ordering process through it. It was such a complete disaster. The orders in our ERP were all manually created; 80-85% were fine, the rest were a mess with no clear structure. This was my main concern, which I voiced: our data was simply not good enough to train an AI on. No one was willing to clean up and improve the structure of the existing data.
Shit In Shit Out!
ocakodot@reddit
Could you elaborate on what problems you generally face? I would assume bad memory management in both the back end and front end. AI is not mindful when it comes to creating closures. It sometimes produces multiple similar states too.
gravity_kills_u@reddit
Cleaning up slop code from overseas already. AI code can join the party, I guess.
giyutanzen@reddit
Hey man, I wonder how you positioned yourself as a self-employed dev. I'm also a DevOps engineer with 7 years of experience trying to achieve this. Can you give some roadmap or guidance?
Typical_Newspaper408@reddit
I've done a couple of projects with AI that came out like a lion, but now I'm in "post-AI build dread". Impossible to maintain. Luckily I didn't bet the farm on them, and I needed to do it to learn what to do and what not to do.
Fact is, you have to review any code that AI generates for you if it's not a total throwaway. Great for writing one-off shell scripts. Complex, long-lived software? No, because when you hit the wall, it hurts.
DoctorWaluigiTime@reddit
A similar rush occurred when offshoring got huge 20+ years ago.
Same song, different lyrics.
Consultants will never run out of work as long as a C-suite person has a gleam in their eye about the latest 'what if we get fast and cheap and just roll the dice on "quality"' trend.
ComposerTurbulent631@reddit
I keep saying it.. Pepperidge Farms remembers the gravy train engineers hopped on to clean up the hype behind Y2K.. this is just the next scheduled train..
https://futurism.com/companies-fixing-ai-replacement-mistakes
6stringNate@reddit
As an aside - how do you like being freelance? How does it compare in terms of earning and freedom that you get from a traditional corporate gig?
kcib@reddit (OP)
I started by actually working for startups and then I moved into freelance in 2022 so I had 2 or 3 projects to start with and then it went from there. I would say there’s pros and cons like everything.
Pros:
- Work whenever you want
- You decide what you get to work on (when you have enough deal flow that you're not desperate)
- Work anywhere you want
Cons:
- You have to pay for your own health insurance
- If you work hourly, which is very common, then if you don't work, you don't get paid. I do miss PTO
- Especially in the beginning, you need to have good networking skills and be able to form strong relationships with stakeholders
6stringNate@reddit
Thanks for the advice! How much vacation or breaks do you end up taking in a year? And how do you plan those out?
kcib@reddit (OP)
Honestly, I don’t really take much extended time off. I also still work a lot. In a way you can say I traded a 9-5 job for a 9-7 job with no benefits for the illusion of freedom
6stringNate@reddit
Haha makes sense - and important to keep in mind! Thanks for being honest
LuckyHedgehog@reddit
You also need to be prepared to justify hours billed. "Why did this 100 hour project take 120?!?"
Also, negotiating the structure of your contracts is something lots of devs have 0 experience doing and struggle with, and can get you in trouble if the company wants to try taking advantage of the wording of the contract.
mechkbfan@reddit
If a manager is surprised by the hours, then that's a fault of our own for not effectively communicating issues/scope creep.
Often people don't care if something is a bit late, they care more that they weren't told earlier and surprised by it.
I learnt that lesson once and now over communicate anything related to money to the person holding the bag as early as possible.
I know it's a random example, but if it's somewhat linear, then by the 50-60 hour mark it should be clear you're about a day behind and they should be told at that point.
LuckyHedgehog@reddit
If you're just working 40 a week then sure, that's fine and probably not an issue.
What I'm talking about is getting hired to complete a project for X hours. Or fixed bid projects that suddenly gain scope based on "implied" requirements from the contract even though it was never discussed during initial requirements gathering.
Building in language to your contracts to cover those scenarios takes experience, and having the ability to push back on the client when they ask for things outside of the contract while maintaining a good relationship with the client is another skill. There is more to being a freelance dev than coding that lots of people overlook
mechkbfan@reddit
Yeah fair enough. I've deliberately stayed away from fixed projects for that explicit reason.
Did a couple, one went well, other was shit, but overall not fun.
LuckyHedgehog@reddit
There are companies that prey upon these types of contracts to intentionally try getting free work. It takes experience and skill to scope things out ahead of time and push back on the client when that happens. It may mean losing that client altogether and you need to be comfortable walking away from that job. But it is certainly something a lot of devs don't consider when thinking about freelance work; the actual software development is not the hardest part of the job in many cases
mechkbfan@reddit
Agreed on all points
I've almost forgotten to be grateful since I've currently got a great client
ryanstephendavis@reddit
I've learned to start that conversation almost immediately so the people who are trying to get a bunch of free advice go away
kcib@reddit (OP)
This is a great call out. It is definitely uncomfortable at first to press people on contract and payment terms because it is not uncommon for people to gaslight you about this stuff.
TheBear8878@reddit
Do you do any platform like Upwork or just word of mouth stuff? How did you get your first freelance gig?
kcib@reddit (OP)
No, it’s all word of mouth. To get started, I used to do in person networking events or conferences.
WishfulTraveler@reddit
Where can I source projects?
kcib@reddit (OP)
Networking. I used to go to conferences and networking events. Part of the game is realizing that a lot of networking is about establishing relationships over the long term which is hard when you’re starting out and needing work.
codemuncher@reddit
I averaged 400k at Google, sounds like this can be potentially competitive?
kcib@reddit (OP)
It can be but unless you already have a large network of incoming work, it will take years to get to that level of income.
jiog@reddit
If we assume the existing rate of progression with LLMs, then yes, we will see it in enterprise-level companies. However, it won't be slop as we extend context windows and intelligence.
Far-Race-622@reddit
A dev I really rate has started using A.I. to reply to us and I am trying to find a tactful way of saying that his old communication style of 'yup', 'nup' and 'fixed, take a look' was preferable to screeds of "great question, it's important to have your php version up to date and here's why....So now that you understand why it's so vital to have xyz, here's my game plan for how I think you can proceed from here for optimal success with..."
Impressive_Rest_3540@reddit
Probably another reason trillions are being spent: the leaders realise it's not enough, we need to hit superintelligence or it's all gonna collapse soon enough.
mikaball@reddit
Do we hit superintelligence?
chandra-mouli@reddit
From reading all the conversations I sense a clear conclusion being drawn: that AI can only do POC-level coding and not production-grade code. Which, in my view, is not true at all. I work at one of the Fortune 10 companies of the world and I have vibe coded things into a scalable and bug-free production environment too.
My take is: if you were already doing production-grade coding, you will continue to do that with 20-30% more efficiency using vibe coding.
bhh32@reddit
Question: how do you know it's production grade? You vibe coded it. You didn't check the code to ensure it was accurate, efficient, secure, and designed/architected well. There is no such thing as bug-free code, only bugs yet to be found. If you were writing production-level code before AI, you should know this and never claim that you vibe coded bug-free software.
EvalCrux@reddit
You do check the code fully, test, and run it in multiple environments.
blinkdesign@reddit
The definition of vibe coding is not to check
chandra-mouli@reddit
I think that's the misunderstanding we will start moving away from, and everyone will eventually conclude that design patterns and design diagrams are still an essential part of code.
I saw one very senior principal architect vibe coding his own application; every few prompts he would prompt it to create an ERD of the codebase, see how the overall picture looked, and change method placement or separate concerns as he saw fit from the ERD.
Vibe coding might be new, but the experience you have matters a lot when building a production-grade application.
mikaball@reddit
There are already many tools to design and generate ERD that are more predictable than any AI. You have to come up with a better example. This is the kind of stuff I don't want to replace with AI.
I have written code generators in the past to perform such tasks. I would be more keen to use AI helping me write such tools than using AI to replace such tools.
EvalCrux@reddit
Absolutely incorrect.
blinkdesign@reddit
"A key part of the definition of vibe coding is that the user accepts code without full understanding.[1] Programmer Simon Willison said: "If an LLM wrote every line of your code, but you've reviewed, tested, and understood it all, that's not vibe coding in my book—that's using an LLM as a typing assistant."
I don't agree with this style of working at all. But if you're going to use the term "vibe coding" to describe using an LLM normally, then that is inaccurate.
EvalCrux@reddit
It’s a state of mind man…lol
dm_me_your_accordion@reddit
This seems true, I have gone from replacing outsourced slop to AI slop
kcib@reddit (OP)
the final boss: outsourced AI slop
ern0plus4@reddit
a program written by an ai which is trained exclusively on outsourced stuff
lokaaarrr@reddit
I think there is a useful observation here. If you're not going to hire good engineers and adopt good practices, AI slop may be no worse than what you would have gotten.
cabropiola@reddit
I'm a semi-experienced software engineer and I'm vibe coding a quite big project alone as a side hustle. The important things are clear instruction files, patterns, and testing. The code is not amazing, but there's nothing that cannot be improved and fixed with resources, so the post-MVP-stage cleanup is already considered in my roadmap in case the project works. What I want to say is: you can price in the mess.
mikaball@reddit
That's actually a sane approach. Fail fast or price in the costs of refactoring. But managers should be well aware of this.
F0tNMC@reddit
I’m not surprised at all. AI is trained on all the publicly available code. So take all of that code and get the average and that’s what AI is using to generate code. As a professional software developer into my third decade of coding I can safely say that most of the code I see is bad to mediocre and less than 10% is good and a smaller percentage is excellent. It’s absolutely no surprise that AI produces almost all bad to mediocre code in large volumes.
I trust it to explain code pretty well. I trust it to read documentation and find stuff for me. I trust it to write boilerplate scaffolding code and testing code. I never trust it to write core functionality. And until we teach it to distinguish good code from mediocre code, I don't really see it getting better anytime soon.
seunosewa@reddit
You can teach AI some of that stuff with comprehensive prompts
ern0plus4@reddit
It works in smaller units. You can polish your prompt until it's worth fixing the smaller issues by hand and you are satisfied with the concept as well.
If you know how LLMs work, you don't even try to create bigger program with them.
Ok-Kangaroo-7075@reddit
It can implement, but you have to write precise functional and design specs, sign off the plans, and test the result. Time savings are marginal but exist. It is nice if you don't know the actual language, because knowing it is really not necessary, and that saves you a lot of time learning new languages.
F0tNMC@reddit
I agree, but I rarely take that route and when I do, I don’t really see much if any speed improvement. For languages I already know, I’m faster to write the core code. For languages I don’t know, I’d rather take the time to learn the language. For scaffolding and testing code, I don’t think I’ve written any directly in a couple years and I don’t miss it at all.
DealDeveloper@reddit
"until we teach it to distinguish good code from mediocre code, I don’t really see it getting better anytime soon."
Incorrect; What is another solution?
How would you handle code written by humans who sometimes forget best practices?
BigManWalter@reddit
First time, I try and teach them the right way to do it. Second time, I rewrite it myself and remember not to let them touch that part of the code base moving forward.
Ok_Individual_5050@reddit
If they refuse to learn and improve, you performance manage them. Never been able to PIP a matrix of weights
No_Oil_6152@reddit
Software dev with 30 years pro experience here.
I have noticed in the past decade or so that being seen to complete tickets quickly is the way to be viewed as a "Rockstar" even if the code written to satisfy the AC is utterly shit, lacks exception handlers and has no jsdocs/xmldocs. Who cares if your code fails? The burndown chart is looking good, just throw it on the Tech Debt!
Vibe coding and "AI" is going to make this worse. A lot worse.
Those who aren't using AI are going to be deemed slow.
The customers expect faster turnover of features now.
So we can cleanup AI slop, but we will be expected to do it at a faster pace. How is this going to be viable? What we really need are tech leads and CTOs with spines, who can tell the customer the truth, that AI has broken their product.
mikaball@reddit
Customers will have to accept reality anyway when no one can fix it in a timely manner. But then it will be too late. Maintenance costs will surpass any costs saved in early development.
North_Resolution_450@reddit
This is cope. The amount of additional work is still small compared to how many jobs it has eliminated.
ILikeBubblyWater@reddit
Yeah this sub has become an echo chamber of devs that can't deal with change
oldDotredditisbetter@reddit
shoo shoo vibe coder
ILikeBubblyWater@reddit
You are like the people who thought IDEs were toys because their notepad did just fine.
Ok_Individual_5050@reddit
The comparison between IDEs and the current batch of code roulette tools is just hilariously inappropriate.
apartment-seeker@reddit
The disparity in utility between NFTs and LLMs is so vast that you are either just trolling or sadly a moron
pinkwar@reddit
100%.
mikaball@reddit
Yes, "Code Slop Engineers" will be in great demand in the future. Proportional to "Vibe Coders". But the former will be paid substantially more.
balletje2017@reddit
My company (huge one, lots of departments, products, processes) had for a few years a "fast code solutions" team to quickly automate all kinds of processes in easy-to-use applications and do it as fast as possible. These apps get churned out SO quickly by a combination of low-code engineers, offshore engineers, and AI. Then it gets handed over to the functional service management team, who, when they have to manage it, find out it's not performing, and when they do their quality control audit they find the code is often horrendous. They have a few senior developers on hand to clean up all this slop of code before they will take over the functional management of these applications.
stuartlogan@reddit
This is spot on and honestly something we're seeing more and more at Twine too. Companies are coming to us now specifically asking for developers who can clean up AI-generated messes.
What's interesting is that it's not just the obvious stuff like weird comments or inefficient algorithms. It's the deeper architectural decisions that make no sense when you actually understand the business requirements. AI can write code that compiles and even passes basic tests, but it has zero understanding of scalability, maintainability, or real-world edge cases.
The security issues you mentioned are particularly scary. I've seen codebases where AI just copied patterns from somewhere without understanding the security implications. SQL injection vulnerabilities, hardcoded secrets, you name it.
The ironic part is that fixing AI slop often takes longer than just writing it properly from scratch would have taken in the first place. But non-technical founders don't realize this until they're months in and burning cash on a system that barely works.
I think we're going to see a whole new category of developer emerge - the "AI cleanup specialist." Similar to how there used to be specialists who cleaned up offshore development disasters, except now it's AI disasters.
Your point about the bar being low for calling yourself a developer is key here. Anyone can prompt ChatGPT and get something that looks like working code. But when that code needs to handle real users, real data, and real business requirements... that's when the house of cards collapses.
Keep documenting these patterns you're seeing. There's probably a good blog post or even a business opportunity in systematically cataloging the most common AI code smells.
Silent-Okra-7883@reddit
You are bang on correct. I am also seeing it on my projects.
Jesta23@reddit
I'm a civil engineer and I have taught myself some basic programming. In Civil 3D there are LISP routines that are very basic programming. I have tried all the AIs, trying to get them to write me some pretty basic LISP routines, and they fail spectacularly every time. I am shocked that it is capable of writing any real code that can work.
CyberDumb@reddit
It mostly works with web dev code because that code is abundant enough for an AI to be trained on. I am in embedded and it is fairly useless except for testing.
CaseClosedEmail@reddit
I mainly use it for IaC and it's only good for autocomplete.
dudesweetman@reddit
Try it next time you need to translate a small datasheet into a header file. Sure, you need to double-check the results, but you were going to do that anyway.
empireofadhd@reddit
I used to work at a company where they wanted to replace their core systems.
Problem was for the previous 10-15 years they had no in house devs, just business. The business asked for this and that and always got it but it was always a new app.
When they started the migration they tried a lift and shift and worked on that for 8 years. However, it failed. There was too much stuff, and the consultants hired more underconsultants, so the company had no idea what was going on. In the end they had to do a write-off of about 50% of operating profits. All that time was wasted.
Then they started a cleanup where they just deleted all the customizations. This took about 5 years. After that they could start the migration. After 10 years they have migrated parts of it.
I think the real issues will show up when companies want to merge, sell off a part, or do these kinds of deep restructurings, but that will be 5-10 years from now.
Instigated-@reddit
Yes, those most championing AI focus on the speed at which something new can be developed. However, in the long term the real difficulty is in how easily it can be maintained.
We’re at a point where devs are feeling pressure to use AI to deliver, however may not yet have learned how to use it effectively to maintain code quality (or when to use it and when not to use it).
Often people find it easier to write code than to read it. It’s one reason why people hate legacy code and want to rewrite into the way they prefer it.
If someone has been the one to write the code, it is easier to debug, or evaluate additions to the code. They know it more intimately. Or they can ask questions of another person who wrote the code.
However trying to make sense of buggy code written by AI will be more challenging.
What is your approach with these ai generated projects? Do you take them on or pass?
mpvanwinkle@reddit
I’m not so sure this is true. It’s not that I think you’re wrong, it’s that I think the real impact of AI might be to make software something that is disposable, much like electronics have become. Cost of maintenance will be high, but cost of replacement may be relatively cheap.
Not trying to defend AI but I just think it’s important to acknowledge how the economics can change the technology.
Pristine-Moose2337@reddit
I think your second paragraph is spot on, but I'm not totally sure that your premise of disposability applies to large scale software projects.
In many ways, that sort of software has more in common with industrial equipment like a large power plant than it does with consumer goods that have become relatively cheap and disposable, like a microwave.
There are certainly components in industrial systems that get swapped out rather than repaired, but companies don't just replace a whole turbine in a hydro power plant because it's due for maintenance.
I'm sure there will be some increase in replacement, but how many software systems are actually designed and maintained in a way where things are sufficiently decoupled that it's easy to swap out a database engine, rest api, or microservice? The projects where that's possible are typically well designed to begin with.
It'll definitely be interesting to see how this all plays out.
mpvanwinkle@reddit
Good points all, it's definitely not an apples-to-apples comparison. But just for the thought experiment, an example I was thinking of was a music app (that I may or may not have worked on) that has a microservice for "history". Every time a user plays a song it writes, and whenever a user views their history it reads. Very simple. Now in the old world, this history service has to be maintained by a team, it has to be upgraded, and it needs to at least have a consistent enough software pattern that if a feature needs to be added, it is relatively easy for a dev to add the feature while ensuring they didn't break the other features.
Now imagine the new world. The cost of writing such a simple API is trivial because of AI. As long as we have an OpenAPI spec and a set of contract tests, you can validate that the API works as expected. Does the code quality matter in this case? If the product team wants to add a feature and the cost of rewriting the entire API from scratch according to the new spec and contract is trivial, is that a problem?? Does the code organization and maintainability matter, or is it OK as long as it does what it says it does?
I'm being a devil's advocate a bit here, I know. I personally hate what AI is doing to the day-to-day of my job, but I'm also trying to accept it and stay ahead of where the industry is going. "Disposable" software is something we have to at least consider.
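To make the contract-test idea concrete, here is a minimal sketch against hypothetical /history endpoints (the routes, JSON fields, and base URL are invented for illustration): if a regenerated service still passes checks like these, the argument goes, its internal code quality matters far less.

```python
# Run with pytest; requests is the only dependency. Assumes the rewritten
# service is already running at BASE_URL.
import requests

BASE_URL = "http://localhost:8080"  # hypothetical deployment address

def test_history_write_then_read():
    # Contract: POST /history records a play and returns 201.
    play = {"user_id": "u1", "track_id": "t42"}
    assert requests.post(f"{BASE_URL}/history", json=play, timeout=5).status_code == 201

    # Contract: GET /history?user_id=... returns 200 and includes the recorded play.
    resp = requests.get(f"{BASE_URL}/history", params={"user_id": "u1"}, timeout=5)
    assert resp.status_code == 200
    assert any(item["track_id"] == "t42" for item in resp.json())
```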
Pristine-Moose2337@reddit
I think that depends on whether you also have automated integration tests or something to validate that letting the LLM rewrite the whole microservice didn't produce a bunch of new bugs.
I can totally see people trying it and there could be an argument made that a sufficiently well-trained model that produces incomprehensible code is another layer of abstraction similar to a compiler producing native machine code, or a transpiler producing minified, standards-compliant ecmascript.
The problem I'm still seeing is the "last mile" issue that others were talking about. If the output is either too much work to validate, or inconsistent from run to run, it could easily lead to people backing away from it before it becomes good enough to become the new standard. There's a lot of people throwing money at it in hopes that it will be able to do things exactly like you're positing though. We're living in interesting times.
Instigated-@reddit
The key issue is how it scales and stays reliable in production, and how easy it is to eliminate bugs and problems that arise in production.
At the moment: ai is not nearly as good as a skilled software engineer. People often advise to consider it a highly productive overconfident junior engineer. Creates a lot of mediocre quality code fast and doesn’t really know how to improve it beyond a level (and may break things trying).
This is fine for a MVP or early stage startup, but not for production systems that require high reliability and scale.
Some people are sure ai will quickly improve, as it has quickly improved over the last couple years, thus negating this issue.
However, how is ai trained to improve?
Initially it was trained on public codebases. Many of which are not production level code.
Next stage, they hired a bunch of "AI trainers" to provide code solutions, which resulted in better model training. However, this work is largely insecure, poorly paid, and not very rewarding - so it's students, people in developing nations who may have little live production experience, and mediocre software engineers rather than the best software engineers doing the training.
This “training” is primarily on small concrete examples, not highly complex production products.
So all these massive improvements to the ai models have been the lower hanging fruit.
I have not yet seen a plan revealed how they are going to close that last gap to get it as good as a highly skilled experienced software engineer. This “last mile” is by far the hardest to close.
So we will have ai work that speeds up the easiest 50-80% of work. However that last bit of work will be harder for a human to fix because they did not write the technical solution, cannot talk to anyone who wrote the technical solution, do not understand the decisions that led to that solution, etc.
The last 20 odd years have had significant discussion & approach about how to write “clean maintainable” code for a reason, and many are now throwing that out the window.
mpvanwinkle@reddit
I totally agree there will always need to be an expert engineer for the “last mile”. I also agree that AI makes that job harder in many ways. I was only objecting to the notion of maintainability being a critical concern because in a world where AI is doing 80 percent of the “work” … rebuilding a component is quicker and likely more effective than maintaining an old one.
Ok-Kangaroo-7075@reddit
Oh yeah but it still requires rigorous testing pipelines, otherwise you will introduce tons of unexpected side effects. This will bring the importance of regression testing to the next level.
The problem is that these AI slop systems usually have zero regression testing, have 20x implementations of the same function, etc., so this is an absolute nightmare.
Likely it will just add a new stage of development, which is not bad. Clients can vibe code what they want and devs can implement that as actual functional code. It is not terrible and can likely remove some back and forth / miscommunication.
geon@reddit
That's the niche excel sheets occupy today. Low quality software that only needs to work most of the time, and someone will review the result anyway.
Real software needs to be high quality, otherwise why even bother?
Replacing organically grown spreadsheets with real applications when the sheets reach the limit of their potential is very common. The same thing is now happening with ai generated code. As OP states. Nothing really changed.
geon@reddit
Exactly. Quickly pushing out more bad code is the problem, not the solution.
SignoreBanana@reddit
"How to use it effectively to maintain code quality"
I'm curious what you mean here. By my experience, the problem with AI isn't so much "code quality" as opposed to "how to take a big problem and find the most efficient solution through it."
It seems allergic to efficiency.
lokaaarrr@reddit
Unless a system is a failure or POC, the majority of the lifetime cost will be in maintenance. Design for maintenance.
asdis_rvk@reddit
Indeed reviewing code is often more difficult and less rewarding than writing it yourself. If you have to fix AI-generated code, this is a shitty job because the "creative" part was done by AI, while the tedious part is on you. It's the exact opposite of what we were promised.
You can go very fast with AI, but you accrue technical debt at a much higher speed too. Few people bother with technical debt but it's not an abstract concept.
TheWorkplaceGenie@reddit
Been feeling this shift too — suddenly we’re not just building things, we’re curating, debugging, and reverse-engineering piles of half-baked auto-generated junk.
Honestly, it's a strange time to be a dev. The velocity is up, but the quality? Not so much.
I’ve started leaning more into roles that emphasize architecture, integration, and human review — basically, acting as the adult in the AI room. Feels like that’s where the value is headed.
Anyone else shifting toward “AI janitor” mode lately?
Mobius00@reddit
Well at least we'll have work forever fixing the ai bugs. sounds awful though.
nivix_zixer@reddit
I told my friends, I'd love to start a company based on this idea. Track all the companies very vocal about fully switching to AI at this stage, then target them hard to fix their junk in 6 months.
Problem is I've never started a company before, so it feels doomed to fail. But someone could. Just a group of awesome devs who hunt down these opportunities like sharks.
enumora@reddit
One of my clients vibe-coded a fairly comprehensive data platform last year before the latest round of tools. They actually did a solid job with business logic, but the architecture and code were a mess and involved a lot of snippets from Claude and ChatGPT copy/pasted into the codebase.
I don't fault them for taking initiative, because doing so enabled them to increase their revenue substantially. They eventually realized it couldn't scale to meet their needs, so they opted to pay someone to manage it.
While the knee-jerk reaction for many is probably "ugh, but then you have to clean up a mess!", here was the reality:
They opted for full rewrite, so I was able to start greenfield just leveraging their business logic as an input to a system. They effectively did their own product scoping by prototyping.
They're already making lots of money with the thing they made, so they don't blink an eye at the cost. Typical seed-stage / Series A founder is doing everything they can to avoid paying cash, because they don't have any.
I think there will be a lot of these types of engagements, and it can actually be quite lucrative for anyone that doesn't mind getting their hands dirty.
What I find problematic is the people branding themselves as developers / agencies and just pushing out vibe-coded slop - not to be confused with code that's been written with assistance from AI by people that actually know what they're doing. That said, there have always been shitty developers and shitty agencies.
arihoenig@reddit
I don't think you'll see much of it frankly. The new stuff that is being generated is quite good; well structured and clean and (if you prompt for optimization) pretty performant.
The stuff you are seeing is just "too early" early adopters and that will be a transitory phase.
Western-Image7125@reddit
End of the day, the problem still as it always has been is developers who have no experience and no idea how to use the tools in front of them and now one of those tools is AI
Perfect-Campaign9551@reddit
You are probably just vastly underestimating how many bad developers are out there.
We (as software developers) don't have an organization or a union or anything that declares quality.
With software becoming more and more a part of every product, it is becoming more noticeable how badly most software is created.
_L4R4_@reddit
Underrated comment!!
fkrkz@reddit
Too many business owners are oversold on the prospects of AI. In the next 5 years, no one will learn how to code properly anymore.
Shenanigansandtoast@reddit
Everyone is trying to find a shortcut around paying for quality development by qualified developers.
infil__traitor@reddit
"more than full time". Did chatgpt write this?
fafnir665@reddit
Obv not using ai to clean up ai is the problem here
robert323@reddit
Job security 👏
seriouslysampson@reddit
I wouldn’t say these are exactly jobs, but maybe short term contracts. More than half these ideas that are getting vibe coded won’t go anywhere even with good code.
draeneirestoshaman@reddit
it’s already in fortune 500’s in the form of offshoring to india lol
OverOnTheRock@reddit
who in turn vibe-code their assignments. .... just another layer of abstraction
draeneirestoshaman@reddit
vibeshoring is real
peripateticman2026@reddit
To be fair, much easier to clean human slop than AI slop. For now.
Sea-Employment3017@reddit
This resonates hard. I'm seeing similar patterns: code that "works" but is unmaintainable and inefficient. The telltale signs are precisely what you described: inconsistent patterns, over-engineered simple tasks, and those AI-generated comments.
The scary part is that non-technical founders don't see the problem until it's expensive to fix. They received low-cost development services that will ultimately cost three times more to rebuild correctly.
This cleanup work might become a whole specialized niche. "AI code remediation" consulting.
Then_Product_7152@reddit
I disagree, the worst AI will ever be is now
Early-Surround7413@reddit
It's basically the same thing as Cheap Indian code. It's cheap now and will cost 3X as much in the long run for someone to come fix it.
TopSwagCode@reddit
Worked as a consultant some years back, before the AI train, and that was still exactly my job :P Cleaning up some rubbish someone else had conjured up. Nothing new, be it AI generated, cheap outsourcing, or a bunch of interns / student workers.
There has and will always be need for good engineers that can build stuff right.
These AI models are built on top of god knows how many example apps and prototypes that haven't followed any best practices. Even some youtubers / streamers "preaching" good / clean code produce actual garbage that doesn't take basic edge cases into account.
People keep saying that developers will lose their jobs, scaring off new people from joining the field, because they trust what they have been told, that we don't need more people in tech. On one side it's sad, and on the other side it's going to ensure I have a steady stream of $$$$$ coming in for the next many years.
Alternative-Wafer123@reddit
I worked at the top 3 global banks as well as a FTSE company; software built by vendors is always shipped at super poor quality. Long latency and high resource usage, and no one cares about performance unless it hits production with a real, large impact. The in-house "tech" guys are not skillful enough, and when the vendor hands them shit, they just accept it.
AlexxxNVo@reddit
How long has AI coding been around for us? Just a few months... say a year. When AI coding gets better, when models can finally code in one or two shots, that will change. A large codebase vibed today will, in a year or so, be cleaned up by a probably better model than the one originally used. It will just be a matter of time before AI code is part of doing business, a regular part like testing. Any coder who does not use AI for something will be replaced by one who does.
Rockdrummer357@reddit
I find that AI is a fantastic tool when used properly. You have to understand what you're doing and what it is doing. If you don't understand one or both of those things, you're inevitably going to end up with a pile of dog shit.
Experienced people who know what they're doing, however, will get great use out of it because it takes the mental load off of the "boring"/boilerplate stuff and allows you to keep your focus more on the core problem and architecture.
successfullygiantsha@reddit
I was looking up the worst MLB record on ChatGPT and it gave me a clearly wrong answer. Seems like something that simple should be an easy search, but it got it wrong and went on and on about how it was right.
br0ast@reddit
My entire career has been slop cleanup for "10x developers", architects, seniors, juniors, and offshore resources. Now I do this with AI agents and it feels no different.
smithereens_1993@reddit
If you’re crazy enough to enjoy this kind of work it’s a really good time to be in the biz.
I built my career on fixing people’s slop, so now I’ve just shifted to doing it with AI’s slop.
The big thing I found is that a lot of the really bad stuff boils down to a comprehensive list of things that, if you handle them, mean you'll at least be okay. I've started running that audit for people (as a paid service, https://vibeapprescue.com) way ahead of doing the implementation work, and it has been crazy effective.
Haunting_Welder@reddit
Well, for early stage you are usually just creating prototypes for fast iteration anyway, so AI slop at a startup would be expected. What took several junior engineers in the past is now performed by one vibe coder.
keto_brain@reddit
As a consultant I haven't noticed an uptick; I just think that's the popular new complaint. Ever clean up WebSphere or Oracle ERP? Don't get me started on IBM BPM or ODM. I've seen shit (can I say that here?) code for most of my career. You really think AI is making it worse? I've seen more systems fall over than I care to count. AI might be the new "way" bad software gets built, but there have been bad engineering teams for decades and even worse engineering leaders.
Fit-Notice-1248@reddit
I was going to say this. Even before AI and without AI, awful/shitty code was and still is being produced, especially when teams don't have any engineering practices at all
keto_brain@reddit
AI is the new excuse for devs who are stuck in the same old job, cannot get promoted, and don't understand what they are building, and especially for those who don't want to hear the truth.
saintpetejackboy@reddit
Every time somebody sees shitty, junior-level code now, they will just assume it is AI.
Downtown-Jacket2430@reddit
yippee!
europeanimmigrant@reddit
Can you send some of those jobs my way please 🙏
therealchadius@reddit
These LLMs are trained on publicly accessible demo code, not professional enterprise software. The person who coined "vibe coding" specifically said it's for throwaway weekend projects that you have no plans to touch afterward, but investors/VCs/C-suites think they can make money off of replacing human made professional work. Every decade they try and every decade it fails.
MrCallicles@reddit
Not sure at all. When a team uses Cursor or whatever, I'm pretty sure that all the company code is pulled and used for training...
(How fucked up is it when C-suites blindly agree to pay to let other companies steal all their assets...)
BigManWalter@reddit
Cursor adds your company code to the context but that's not the same as training on it.
Context is like short term memory, it kinda understands it but it forgets things easily.
Training is long term, and it's done in massive batches costing millions of dollars each time a training run happens.
MrCallicles@reddit
Yes, that's exactly what I'm saying.
Data is the goldmine, I think it's insane to believe that companies like OpenAI don't use all the data they can put their hands on to train models, legally or not.
United-Pollution-778@reddit
Shitty developers
Cheap outsourcing
GloomyShake3434@reddit
I left my current startup when I saw that AI slop was taking over the codebase. I realised that we already had an unmaintained codebase with no documentation, and with the director forcing the team to use Cursor, this will definitely lead to a disaster.
Moving_Forward18@reddit
The cost to fix everything that LLMs are going to screw up will be very, very high. But when it kicks in, there will be a lot of good developers consulting at high rates.
Kiri11shepard@reddit
Two options:
- AI keeps improving, and eventually it will be better even than experienced developers, just like chess and go. Then no jobs in software.
- AI still keeps improving, but doesn't cross a "junior developer" level for a while — then even more jobs will be needed to clean up this mess.
holbanner@reddit
Well, I'm part of the hit team in my company that takes on such cleanup missions.
We used to do "audits", but now we can smell AI bullshit just from the introduction talk by the managers/leads.
HoratioWobble@reddit
It's gone in waves like this for the last 20+ years, I started Freelancing back in 2008 and would almost exclusively pick up work on freelance sites from failed outsource teams.
Yet, almost 20 years later - we have continued growth in outsourced teams and my experience is still the same, they vastly miss the mark in terms of delivery and quality.
This isn't the big revolution you're hoping for, this is now business as usual and yes there will be failed AI projects that need cleaning up but the same people who employed them - will continue to employ the same techniques in other businesses.
pinkwar@reddit
The exact same thing happened when companies started outsourcing the work to cheap developers.
Vivid-Blacksmith-122@reddit
I guess we don't have to worry about Judgement Day just yet then.
pinkwar@reddit
We're only getting started.
Pretty much like a dev: usually your first iteration is not the final solution.
That's how cursor works with the "thinking". It goes back and refactors the code to follow the rules.
If you don't give it rules it will continue to produce slop like an unhinged junior dev.
Give it some time and these pipelines will all run behind the scenes.
People are only starting to figure it all out and how to make it work.
TinyCuteGorilla@reddit
Oh yes 100% my company just did this. Some designer who knows CSS but nothing more created a full on React app. The code was a mess so they hired a freelancer React guy to fix it up.
terrapin1977@reddit
Who just knows CSS?
NUTTA_BUSTAH@reddit
Almost every aspiring developer in training who buys a seat at the first bootcamp to learn JS, which then seems too complicated, so it gets outsourced to LLMs.
Born_Dragonfly1096@reddit
a "designer" just like they said
rokky123@reddit
Personally I think this is OK for startups. But the security flaws alone should be enough to keep AI on a leash in an enterprise setting.
Alternative_Song7610@reddit
Some of it isn't even AI slop; it's crap created by companies like builder.ai that have low-paid engineers just doing the bare minimum to pretend they are working on a solution. Shocking that the lie lasted so long and got so much funding.
SignoreBanana@reddit
I gave a stab at "vibe coding" a non-critical audit script for our org. What Claude drummed up seemed to have no clue as to how to make an efficient effort. Its attempt would have taken me 10 hours to run through.
After doing some docs digging and trial and error work, I got the script to run in under a minute.
When leadership asks for us to be AI FIRST, I hope they know what the first attempt looks like, because it will be a fuck load more expensive.
lornemalw0@reddit
This, combined with the junior gap, is going to fuel the next upward cycle.
CauliflowerIll1704@reddit
I've seen some people at a small startup I worked at who I could tell were vibe engineers.
When you hopped on a call with them to consult about a problem, or to do some pair programming, you could see that they didn't understand the code they'd been working on for years. You'd see flows that really didn't make sense at all, comments explaining very simple code, etc.
Folks, don't skip out on the fundamentals, and if you don't have a CS degree, please, please at least take a community college course or something to learn the basics.
Middle_Ask_5716@reddit
Next year's LinkedIn posts:
How to use AI to fix broken AI code
haikusbot@reddit
Next year's LinkedIn posts:
How to use AI to fix
Broken AI code
- Middle_Ask_5716
henryeaterofpies@reddit
I told my sales team they need to be looking for opportunities like this, because we'll be able to get paid a lot of money to fix this, and it's going to be consistent work for a while.
Aggressive-Diet-5092@reddit
I assumed when LLMs came along that I would no longer have to debug things step by step, but now I end up going deeper into the code, every variable, every method, to figure out where things are breaking in the spaghetti code created by the LLM. The advantage of an LLM is more in getting an answer from the documentation with a single question rather than going through all the docs again, with the accuracy depending on the training data.
FunRutabaga24@reddit
lol. We went from fixing outsourced/offshored code to fixing AI/vibe-coded code. Time marches on.
Trollzore@reddit
You're more or less tasked with productionizing a POC/MVP app that was vibe coded, to make it more scalable. That's all.
farastray@reddit
Invariably, there will be a lot of this. I also predict there will be a huge amount of software written that completely disregards what it takes to actually make software useful and successful, e.g. product design.
I think that even though the technical part can be cleaned up fairly easily, the human errors of designing genuinely useful software are more subtle to fix, design, and deploy. There is a massive influx of people who are tangentially technical: smart enough to unleash AI on a problem or a UI or whatever, but too dumb to understand how you build a successful business from it.
Becoming a good enough engineer to build working software took me maybe 4-6 years. Building software that humans enjoy using and that provides value took me probably more like 15-20 years.
RogueJello@reddit
This sounds like possible reverse survivorship bias. Nobody is going to call you in to fix something they vibe coded that worked out.
drahgon@reddit
No such thing. The inability to see the failures is the entire basis; you cannot reverse it... then it is optimal.
grizzlybair2@reddit
Is it really different from cleaning up slop from early-2000s apps?
lokaaarrr@reddit
I have worked with lots of truly great software from the early 2000s; why would it matter when it was written?
pavlik_enemy@reddit
I think it was exactly in the early 2000s that all the good practices became mainstream in both business and infrastructure software. 20 years later, we're writing software in kinda the same way.
grizzlybair2@reddit
In the 2000s, we were rewriting all the junk legacy apps from the '80s/'90s. That's really all software development is in my experience: rewriting legacy apps from 15+ years ago. For every microservice that's made correctly, there are 10 that aren't.
lokaaarrr@reddit
That seems like a weird generalization based on where you worked. I improved and built all sorts of great systems.
grizzlybair2@reddit
Probably. They are likely archaic now as well.
lokaaarrr@reddit
I spent 15 years at one place. I worked on many generations of systems in the same space.
grizzlybair2@reddit
Probably pretty stable then. I was a consultant most of my career, and almost everything was just a rework. Heck, I'm about to modernize an app that's only 5 years old. Its main function is to search for some data, and if more than 5 people on a team of 75 search at the same time, the database crashes and locks out users for 30 minutes. People still make junk.
lokaaarrr@reddit
Rapid growth, thus the need to keep re-designing things. Went from 2k to 100k employees.
NoIncrease299@reddit
Ah, the '00s offshoring boom. I remember it well.
KnowledgePitiful8197@reddit
AI has access to crappy code and cannot learn to write code that is bug-free, because that is a different kind of mastery that isn't available for it to train on yet. And mission-critical code is not something that gets shared.
dc91911@reddit
As long as we have non-technical HR looking at our resumes and semi-technical managers doing the hiring, the AI slop will continue to increase.
Fidodo@reddit
This already happened with the first wave of outsourcing where companies tried to hire out to the cheapest possible overseas contracting firms with zero oversight or integration and were shocked when the deliverables they got were shit.
tjsr@reddit
I embarked on a little project of my own a few weeks ago to try making something using Copilot and prompts.
It's great for getting the very basics off the ground: the boilerplate and scaffolding that take a lot of effort and that you don't do frequently when you work on established projects. But wow, once the app starts having interactions between various components, it really starts crumbling.
It also really struggles with understanding and handling edge cases, or anything that, say... wasn't covered on Stack Overflow 😆
ryanstephendavis@reddit
Agree, except I'm already seeing this at enterprise companies as well... I like your NDA idea of looking at the code first; if it's AI slop, charge more 🤘😆🤘
Independent_Grab_242@reddit
I was thinking that the AI slop era has begun, but the AI of tomorrow will clean it up within 2 years.
Gloomy_Freedom_5481@reddit
exactly, for some reason it seems to be taboo around here to think that
chaitanyathengdi@reddit
AI cannot clean up AI. It can only generate more sophisticated garbage that's harder to clean up.
Independent_Grab_242@reddit
The current AI, yes, but not the AI in two or five years. Be honest with yourself.
AnimusCorpus@reddit
Following the Industrial Revolution, there were plenty of people who saw the trajectory of development and predicted that very soon we'd all have robot butlers, flying cars, and the end of disease. Turns out technological progress isn't linear, though.
This isn't as trivial as predicting the trajectory of a ball. You may be right, but there is no certainty in what will and won't be possible in 2 years' time, and making assumptions about the future simply isn't wise.
Mindless_Ad_6310@reddit
Or logs/console output with emojis or checkmarks. Looks nice in the console output, though.
No-District2404@reddit
This is one side of it, and there will be another big consequence: I don't want to know the quality of a junior engineer or a fresh grad right now. I don't blame them, but they were born into the AI era. And if we consider that companies didn't hire them due to layoffs and AI, there will be a shortage of decent juniors and also a lot of exits from the sector. It looks like senior engineers will be extremely valuable for fixing the mess created by the AI hype in the short and medium term.
drnullpointer@reddit
I have been telling people from the start of this whole AI thing that I am happy keeping my development skills sharp. There will be a huge demand for people who can clean up the mess, and I will be happy to help with it, for a good price.
mauriciocap@reddit
2030 is the new Y2K 💪
dpn@reddit
Had someone on my team excuse a bad idea because they ran with something AI suggested. I was like... bruh, I'm your manager, I knew this was AI code when I reviewed it 🤣 I want to talk about the actual solution plz.
We are AI-positive as an org, but clearly the experience of the operator makes a big difference.
Sea-Frosting-50@reddit
Where are you seeing/sourcing these projects?
Simple-Quarter-5477@reddit
How are you networking and marketing yourself to find startups to bring you on?
Beginning_Opinion377@reddit
I'm feeling this pain; it's like the new wave of shitty WordPress sites we have to fix.
itsbett@reddit
Honestly, this gives me a lot of hope in case I choose to switch industries and make better money. I spend a lot of time developing my skills and understanding of mostly older languages, primarily as a hobby, but it was motivating to me that it also made me more employable. I was worried that the perception of AI's usefulness might make my skills less valuable for my career.
Currently, I'm fairly underpaid but immensely comfy at my job. AI is banned/illegal to use where I work because of proprietary languages, hardware, and software. My job also rides on politics, so I worry that I'll need to switch industries when my skills are perceived as AI-replaceable.
So thanks for the good read and info sharing
geeky-head@reddit
Have you tried the paid versions of Claude?
Pedroxhp@reddit
lmao bro pls stop
-Knockabout@reddit
Already?!