How to deal with juniors shipping AI slop code?
Posted by theop04@reddit | ExperiencedDevs | View on Reddit | 200 comments
I am not against AI usage at all, in fact, I encourage it; however, I have spent countless hours reviewing AI slop PRs, and whenever I ask them why they made certain decisions, they just give me this blank look back, or come up with some bad explanation.
On several occasions I have watched their debugging sessions, and their first instinct was just to plug the entire code chunk into Claude. Like just look at the stack trace…
I am tired of it. I have tried pushing them to develop a conceptual understanding of their code rather than treating it like a black box, but I am unable to enforce this.
I’m worried that we are about to enter a dark time where the majority of junior engineers have a lack of the fundamentals and intuition that make a GREAT engineer. Especially since the juniors that will be rolling in the next few years probably never even coded before AI… scary.
I don't know. Maybe it’s just me, but I am exhausted.
Zulban@reddit
Sometimes I send them this, which I wrote, instead of a CR: why I'm declining your AI generated MR
authentic_developer@reddit
The thing worth testing in review isn't the code - it's whether they can predict what the AI produced before you look at it together. If a junior genuinely understood the problem, they can walk you through roughly what solution they expected and why. If they can't, the AI solved a problem they couldn't frame, which means they also can't verify the output or debug it when it breaks.
That's the distinction worth enforcing: AI-assisted thinking vs. AI-substituted thinking. The former uses the model to accelerate a solution they already understand. The latter delegates the problem entirely and reviews the output with no baseline to check it against. Both look identical in a PR.
In review, instead of just requesting changes, try asking "what does this function return if this input is empty?" before explaining what's wrong. If they can answer, they understand it and the problem is style. If they can't, that's the conversation - and it's a faster diagnosis than reading 400 lines of diff looking for red flags.
lunatuna215@reddit
Have you always attempted to have your cake and eat it, too?
Shazvox@reddit
My rational mind can't help thinking "Good, more work for me". I think I might be a bad person...
Hot_Adhesiveness5602@reddit
Create a FAQ or a checklist that they have to do before creating a PR.
FastHotEmu@reddit
Many colleagues think that in a few years we'll be needed more than ever because of this exact issue.
Stellariser@reddit
Their reliance on AI is like copying someone else’s work to pass exams, you might get a result but you’ve gained zero new skills. They’re just going to stagnate.
ninj0etsu@reddit
This is the result of encouraging AI use as you say you do
sweaterpawsss@reddit
You can't force them to stop using AI, but you can enforce standards around the end results in code review. This is going to require a lot of effort on your part if you want to do thorough reviews and maintain a high standard without just rubber stamping the slop. But that's the only way. Otherwise they will keep shipping slop until the system becomes unmaintainable/undiagnosible, because they lack the experience to know the hole they're digging for themselves before it's too late.
If their MRs are too large (thousands of lines of diff), refuse to review their changes until they are broken down into more manageable chunks.
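A size rule like this is easy to automate in CI so it never depends on the reviewer's mood. Below is a minimal sketch, not a prescription: the 500-line budget, the `origin/main` base branch, and the function names are all assumptions to adjust per team.

```python
# Hypothetical pre-review size gate: refuse to review MRs whose total
# diff exceeds a line budget. Threshold and base branch are illustrative.
import subprocess

MAX_DIFF_LINES = 500  # arbitrary budget; tune per team

def diff_size(numstat_output: str) -> int:
    """Sum added + removed lines from `git diff --numstat` output."""
    total = 0
    for line in numstat_output.splitlines():
        parts = line.split("\t")
        if len(parts) < 3:
            continue
        added, removed = parts[0], parts[1]
        # Binary files show "-" in numstat; count them as 0.
        total += int(added) if added.isdigit() else 0
        total += int(removed) if removed.isdigit() else 0
    return total

def mr_fits_budget(base: str = "origin/main") -> bool:
    """Return True if the current branch's diff fits the budget."""
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return diff_size(out) <= MAX_DIFF_LINES
```

Wired into a pipeline step that fails the build, this turns "please break this up" from a negotiation into a precondition.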
CorrectPeanut5@reddit
What will stop them is that CoPilot is about to get a lot more expensive, and they are going to start running out of AI credits to ship slop. It's going to get hard to ship like that without thinking about well-crafted planning, prompting, and compressing.
marcodave@reddit
Or they might just use "free" models with limited capabilities, causing even worse slop
sweaterpawsss@reddit
Idk, maybe. I feel like one way or another AI is here to stay and it is going to remain prolific as a tool for generating code. Better to adapt to what that means now than wait for an inevitable demise that may never come.
DanFromShipping@reddit
It's here to stay, but I don't think it'll be here to stay at its current cost: more or less free for the dev, a relatively negligible cost for most larger companies. Once it becomes more expensive, I'm thinking it follows the model of people who've only had the deep pockets of big companies jumping to small startups and still trying to architect weird, inefficient Incredible Machines in AWS.
sweaterpawsss@reddit
Yeah, I wouldn't be surprised if individual access became more restricted (either directly or via prohibitive pricing that's only accessible for enterprise customers).
CorrectPeanut5@reddit
Agreed. It's here to stay and I'm a daily user. But there's definitely an art and science to skills, prompts, planning and tooling.
buggedcom@reddit
teach them proper AI workflows...
teach them how to use /plan, and then create a skill /refine-plan that will iterate over the plan, internally score it, and improve it until it meets a certain score threshold.
teach them how to run /review themselves before creating PRs
teach them about putting in AI guardrails, i.e. test cases, harnesses etc., to make sure scope doesn't go too far
create skills for the AI that use internal skeleton-generation tooling instead of simply creating stuff "on-the-fly"
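One concrete form of the guardrails mentioned above is a characterization test that pins current behavior before an agent is allowed to touch the code. This is a sketch: `slugify` is a hypothetical function standing in for whatever is being refactored, and the pinned values were read off its current behavior.

```python
# A guardrail in the sense above: a characterization test capturing what
# the code does today, so an agent-driven rewrite can't silently change it.
# `slugify` is a made-up example function, not anything from the thread.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

def test_slugify_behavior_is_pinned():
    # Expectations captured from the existing behavior; any AI refactor
    # must keep these green before the PR is even opened.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Ship  It  ") == "ship-it"
```

The point is less the test itself than the ordering: pin the behavior first, then let the agent loose inside that fence.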
CodelinesNL@reddit
Questions like these show that many companies have fundamentally flawed PR processes. You're not supposed to have a single 'senior' reviewing the junior's code. Everyone should be reviewing everyone's code.
Also the PR state is not the moment where you start questioning the technical approach they are taking. That should be done BEFORE they start implementing the code.
So your system was already pretty broken before AI.
uniquesnowflake8@reddit
Code review. “Request Changes”
MCFRESH01@reddit
And when it happens enough times they should learn to drive the AI better and make manual changes when needed. AI is fine as long as the user actually understands the problem and why the change is being made and the tradeoffs
Daemontatox@reddit
Trust me, it doesn't help. They keep shipping code that's filled with emojis and BS, and after many iterations it becomes too much of a hassle, and management doesn't care as long as the tickets are being closed.
utihnuli_jaganjac@reddit
They just put the comments in copilot and push more slop
Spider_pig448@reddit
Seriously. End the thread. If you want to keep up with the times, train your LLM code reviewer to have your standards too and gate every pr that doesn't meet them. AI is the key to higher code quality this way.
mrbuh@reddit
This is the way. Fight fire with fire.
I have Claude doing a first pass review on every PR that comes my way. Things that fail linting or other clearly established and documented standards just get auto-rejected with a message why. If that passes then it does an actual review with the rules and guidelines that I carefully crafted into its skill.
If it passes that and Claude thinks it's good, then I read its summary output first and then go review the PR myself.
The worst slop never even touches my eyes. I still have final approval on anything that gets past my slop gatekeeper.
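The two-stage flow described here can be sketched tool-agnostically. In the sketch below, the check functions and the review callable are injected placeholders for whatever lint, CI, and agent tooling is actually in use; none of the names come from the thread.

```python
# Sketch of a two-stage review gate: cheap mechanical checks auto-reject
# with a reason; only diffs that survive reach the AI review, and the
# human reviewer keeps final approval either way.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Verdict:
    ready_for_human: bool
    message: str

def gate(diff: str,
         cheap_checks: List[Callable[[str], Optional[str]]],
         ai_review: Callable[[str], str]) -> Verdict:
    # Stage 1: mechanical checks; the first failure auto-rejects
    # with a message explaining why.
    for check in cheap_checks:
        problem = check(diff)
        if problem is not None:
            return Verdict(False, f"Auto-rejected: {problem}")
    # Stage 2: the AI review summary, which the human reads
    # before doing their own pass.
    return Verdict(True, ai_review(diff))
```

The design choice worth copying is the asymmetry: mechanical rules get hard rejection, while the AI review only produces a summary for a human who still owns the approval.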
ILikeCutePuppies@reddit
Claude will find issues forever with the code. There is always a [[discard]], noexception or something to add.
You can teach it some things but it's typically not gonna pickup the things a senior is going to pickup.
luluhouse7@reddit
I’d be curious what directives you use. That seems super useful for personal projects where the only reviewer is me myself and I, and by that point I’m totally code blind.
mrbuh@reddit
It was a collab effort with me and a coworker. We defined skills to review PRs, separate skills for each of the main code repos I review at work.
Each one has shared/similar directives for the dead simple baseline shit like
"if anything in CI fails, auto-reject and add a comment giving them a link to the failed bit."
"The linting and style rules are in [this file] in the same repo, auto-reject anything that fails them with a comment linking to that file."
"Don't declare vars that never get used"
Those alone catch the worst offenders.
Then there's some intermediate stuff like
"All PRs must have a ticket number in their description and/or branch name. If there's no ticket, auto-reject. If there is a ticket, go read the ticket and see if this PR meets the stated goals "
"Prefer human-readable syntax" with some examples and counter-examples to stop AI from jamming in a kilobyte of minified json
Past that it gets pretty specific depending on the repo and the goals, but things like ensuring tests are added where appropriate, language specific things.
Then the final part of the skill tells it to summarize all of the above for me, and to draft (but not post until I approve) a comment for the PR.
It's been a real 80/20 situation where the simplest "don't fail linting validation" rules catch all of the worst slop and at worst give prompts to THEIR agent that it needs to be aware of those requirements.
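The ticket-number rule above is the kind of baseline check that doesn't even need an LLM; a plain regex pass works. This is a sketch assuming Jira-style IDs like `PROJ-1234` (adjust the pattern to your tracker); the function name is made up.

```python
# Hypothetical baseline check: a PR must reference a ticket in its
# branch name or description, or it gets auto-rejected.
import re
from typing import Optional

TICKET_RE = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")  # e.g. PROJ-1234

def find_ticket(branch_name: str, description: str) -> Optional[str]:
    """Return the first ticket ID found, or None (caller auto-rejects)."""
    for text in (branch_name, description):
        match = TICKET_RE.search(text)
        if match:
            return match.group(0)
    return None
```

Running this before any model is invoked keeps the cheap rejections cheap.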
_5er_@reddit
Sure, but you should provide a meaningful reason for requesting changes, and doing this on slop code over and over again can be exhausting.
On one side, you should give good feedback to the junior so they can learn. But on the other side, there is so much slop code you can barely keep up.
ILikeCutePuppies@reddit
That is part of your role as a senior engineer.
uniquesnowflake8@reddit
Thought this was obvious: yes, you need to say what needs changing too
coworker@reddit
Unfortunately too many people on here think "slop" is obvious and can't actually articulate any, even perceived, problems with the code.
xXxdethl0rdxXx@reddit
These assholes are just as bad as the slop submitters.
A half-way competent review culture should stop this slop from being shipped and continually submitted. "This PR has AI vibes to me, denied" is basically just as poor craftsmanship.
lordbrocktree1@reddit
I used to agree with you, now with the amount of slop that gets handed to me, I have a default response that I copy in whenever it’s clear they passed it on to AI without thought. I don’t have time to do all their thinking for them while they just prompt ai and then wait for my reviews to give them the next iteration of fixes.
coworker@reddit
You are simply being lazy. Just have your own AI generate the criticism with little to no effort on your part. I suspect the actual problem is that there is nothing objectively wrong with the submission and your criticisms are too opinionated
lordbrocktree1@reddit
My boilerplate response is something akin to:
If I see 2 or more of these, I stop reviewing and post the whole thing and ask them to “take another stab at it before I will finish reviewing it”
It’s literally the equivalent of me just writing a few random thoughts on a sketchpad and sending it to my editor and saying “hey here is my book ready to publish, let me know any parts that aren’t ready to ship”
And then expecting that they are going to do all the hard work of actually writing and telling me what my plot should be.
We pay well enough that they need to put in the effort for themselves.
Revilo62@reddit
I mean... I'm sure most can, it's just not really worth the time if the author didn't actually put any investment into understanding their own code.
I have colleagues' PRs out with 100+ comments about what's wrong; they just have Claude read the comment and "fix" it. I then have to go re-read it and add another few dozen comments. It gets really exhausting trying to understand code that has no logical thought behind it and touches dozens of files.
coworker@reddit
Don't allow PRs to be big enough to have so many comments. Also tighten up your requirements so you don't need to specify behavior at review time
Control_Is_Dead@reddit
https://rfd.shared.oxide.computer/rfd/0576#_llms_as_programmers
So much of this document I agree with and try to get anyone in leadership to read it. At least on my team I've gotten most people on board with this bit.
IlllIlllI@reddit
Yeah, love to go through a slop PR and leave meaningful comments only for the responses to clearly be also fed into the same AI that produced the slop in the first place.
20 minutes of my time, 15 seconds of theirs.
sarhoshamiral@reddit
Unfortunately, you are assuming the dev actually reads your comment. The new trend seems to be just letting agent handle it so you essentially end up arguing with a model in PR comments.
As a senior dev, if I am doing that I may as well just start from scratch with proper prompt and do the change myself.
Treebro001@reddit
This puts the vast majority of the actual thinking onto the slop PR reviewer, sadly, and becomes a huge bottleneck.
I think this is not the only answer. You need a systemic and cultural change within the team to really fix something like this. And the problem is so new I don't really see any clear answers.
JeanRalphioTheSecond@reddit
Teams need to be aligned about their goals and values. If we care about maintainability and long term reliability, we can go from there. If some are in this boat but others are thinking ai will fix my slop if something breaks or velocity all costs, but others aren’t, there will be issues
YK5Djvx2Mh@reddit
What do you do when you don't get a chance to review really anything because someone always instantly rubber-stamps every PR?
kronik85@reddit
Speak with the reviewer. Highlight the rubber stamping when the bugs eventually hit production. Escalate up the chain of command. Change branch protections to require more than one approval. Designate a code owner whose approval is required.
Kaimito1@reddit
Then a conversation with whoever is spamming stamps is required.
They shouldn't just be spamming that
FarYam3061@reddit
I honestly wonder what people did before AI.
another_dudeman@reddit
Same here, it has been like, six months of LLMs capable of this level of slop
ButchDeanCA@reddit
There was a higher bar to entering this field and we wanted to learn properly.
LongjumpingAd9079@reddit
Deffo best advice; ain't nothing worse as a junior dev than getting rejected
recursive_arg@reddit
Sure there is, it’s a junior dev getting LGTM reviews on all of their PRs after a 5 second review.
floriv1999@reddit
The issue with this is that you essentially become the vibe coder yourself. If somebody just copy-pastes the ticket and opens a PR with the result, it is now your responsibility to correct, essentially, the AI. So why do you need the middleman in this case? This is especially relevant if the junior doesn't even read your feedback and just forwards it to the agent.
uniquesnowflake8@reddit
Yeah they are likely just plugging your suggestions back into the agent. But it’s still better than shipping slop
Junior devs will need to showcase a lot of critical thinking to demonstrate value at this point
NewFuturist@reddit
The get them to review each other's code, and then hold them to account if they don't pick up each other's obviously bad code.
Sofi_LoFi@reddit
Have them ask Claude “why was my PR denied?” if they’re so lazy about it
jambalaya004@reddit
I’m in the same boat. I recently had to write an essay in a review (this junior has almost 3 years xp btw) on why a single class shouldn’t have one public method with 20 private methods handling everything from reading files, tokenizing the file’s bytes, calling different web APIs, and so on. I also had to explain why 1300-line classes were not ok lol. We also have problems with juniors never running the code AI produced, or only testing one small part of the generated code. A new interesting comment recently has been “but the tests were green? What’s the problem?”
We have seen a sharp decline in juniors who used to show great progress before the company rolled out agents. It’s really sad to see this happening.
All you can do is reject their PRs and try to guide them on what to do better. If they don’t get with it after a while, hopefully you have built enough of a paper trail to light a fire under them.
pfc-anon@reddit
Ship senior level slop to assert dominance.
k032@reddit
Junior devs making slop wasn't anything new, just now we have Junior devs making AI slop.
Suggest changes in the reviews. The good devs will learn and take the ideas.
feverzsj@reddit
Fire them!
Some_Developer_Guy@reddit
I decline their PR; if they object, I ask for an in-person walkthrough where they explain and demo.
❌
355_over_113@reddit
No worries, my juniors have mastered the art of talking like YouTube code-influencers, parroting whatever Claude told them.
havok_@reddit
Oof. I had this. I suggest a change, junior says “didn’t work” I suggest we jump on a call so we can debug. He comes back 20 minutes later : “I looked at the conversation and Claude had done it wrong even though I gave it the docs”.
No shit man. But why didn’t you look at what it did the first time? Waste of my bloody time.
Leather-Rice5025@reddit
What to do when it's your manager doing the slop coding?
ConsiderationSea1347@reddit
Or the other seniors. I am on a team where no one has a clue what is being shipped except me and they are irate that it takes me longer than them to close out tickets. But I am the only one reading the code and it is obvious. My team was kinda plucky and stupid before, but they were kinda nice because they didn’t have AI psychosis, now they all think they are gods because they can open PRs which edit 200 files in two days of work.
355_over_113@reddit
My juniors think they are gods
havok_@reddit
2 days? That’s rookie numbers
Additional_Rub_7355@reddit
Become a manager too
matjam@reddit
Get an ai to analyze a bunch of PRs you consider high quality. Get it to extract principles you need to see in a PR before you’ll look at it. Have the agent review the slop and only ping you when it passes the AI anti slop review.
TechMan61@reddit
Yeah I'm really worried about it. My go-to when there is an incident is look at logs, debug, go to metrics etc.
For new code I try to reason about architecture before even writing a single line.
Using AI for either of these will atrophy existing skills, and lead to ignorance for juniors.
PhysiologyIsPhun@reddit
I might get down voted for this, but once I identify a specific service and error in logs during an incident, I'll often spin up a Claude agent to investigate while I comb through in parallel. First rule of incidents is to stop the bleeding ASAP. If Claude helps me do that, I'm doing my job.
DanFromShipping@reddit
If it's a critical production issue that is losing the company money every second it exists, I agree. The reality that I've been in though, is that there's also multiple classes of "issues" that are much lower priority and can exist in prod for days or weeks or months even, without costing any money. For those, using AI to just do the work for you robs you of critical career growth.
That said, a lot of devs just want to survive through to senior and hop into management asap, where they no longer write code. Maybe for them, AI is worth it to just fake it till you no longer need to fake it.
PhysiologyIsPhun@reddit
Yeah, definitely a good skill to build. I'm just extremely happy I started my career way before AI. My company has basically quadrupled its expectations for dev output since Claude became ubiquitous, so I genuinely don't have time to write everything by hand anymore. I still make sure I fully understand what I'm asking it to implement and review the code thoroughly (usually takes a few cycles of this to get it right), but it still saves me a whole lot of time. Especially because of the fact I work as a full stack engineer kind of between 2 teams and always get consulted for Ops related stuff. I'm context switching all the time, and being able to have a Claude session going and have it recap what we've done so far when I have to drop everything to put out a fire has been really helpful as well.
I mean I still wish I could go back to development before AI, but it's undeniable it's extremely helpful and devs that refuse to leverage it are going to fall behind
AudioRevelations@reddit
Lots of good stuff here. I've also been dealing with this lately and my thoughts are basically this:
Teams and codebases require a certain level of maturity to use AI successfully.
This is assuming you're working on code where a bug slipping through actually has real impact.
AI is an accelerator, and it can very quickly accelerate you off a cliff if you're not careful. More senior teams can generally point things in the right direction, but you can also as a senior help build guard rails to help keep the AI-equipped juniors on track.
In my case we've basically decided that our codebase does not have enough safety nets in place to take massive AI swings with more junior engineers (or even seniors for that matter). We don't have the ability to verify to reasonable satisfaction that we haven't broken things, so it's too risky. We've essentially told our team they are more than welcome to use AI on small focused changes, but massive refactors/features are banned and will be rejected.
In the meantime we're prioritizing trying to make our codebase more resilient and safe to make changes to in hopes that we can expand scope and enable these tools more safely someday. But we're trying our best to balance the speed increases with the extra risk.
This paper also could be worth a read if it hasn't come across your feed - it helped give me some useful vocabulary like "cognitive debt" and "intent debt" that are really useful when talking about these systems and their impact.
shozzlez@reddit
Are we sure that their way of doing things is “wrong”. Maybe a deep understanding of code is not the skill that’s needed for what they are expected to contribute.
PhysicalSession594@reddit
Ok, so here is what is partially working for me:
1. I have created some Claude workflows and best practices which ship with the repository, so everyone gets them.
2. If a dev is junior, I ask them to never use bypass-permissions mode; always stay on ask-permissions and watch what Claude (or any agent) is doing.
3. In the PR we have an auto code reviewer for essential vulnerabilities.
4. This one is your call since it's token-exhaustive and time-consuming: make a plugin like superpower mandatory for them; it automatically enforces a long but good software engineering cycle.
Final point, which honestly I am still doing by hand: for critical pieces I review the code and close the PR myself. I am not relying on AI for this right now, but points 1-4 above help reduce a lot of slop before it ends up with me.
tiger-tots@reddit
Five years from now is gonna be a real mess for this industry. Either we are all unemployed because AI did get that good. Or we are gonna be missing an entire generation of early mid career engineers.
spez_eats_nazi_ass@reddit
Juniors should not be using these tools given their current half assed state. I welcome the downvote to hell.
NonProphet8theist@reddit
It's a really valuable tool at multiple levels. How it's used by juniors is what matters.
They shouldn't vibe code entire features they don't understand of course, but use the AI to ask questions, gain clarity, and learn something new for the next iteration.
Do that for 6 months of sprints... aaand boom that's 12 or so iterations of on-the-job deep learning. That's how I onboard to new projects these days and it works great.
GoodishCoder@reddit
They should still be going to seniors with their questions and seniors should still be working with them until they understand how to solve problems on their own and why we take certain approaches. Offloading mentorship to ai is doing everyone a disservice.
NonProphet8theist@reddit
That's what pull request reviews are for
GoodishCoder@reddit
If the only time you're mentoring your juniors is in pull requests, you probably shouldn't be considered senior.
NonProphet8theist@reddit
Riiight because I don't check one box so I'm not senior. Are you a hiring manager??!
Honestly I'm around for whatever anyone needs, but you gotta keep in mind that not everyone speaks up when they need help. Good mentorship isn't mind-reading. Often times the only opportunity I'll get to mentor is through code review.
I find this particular method effective because it's straight to the point. I can spend a half hour on another Teams call explaining something, or I can make 2 meaningful comments on a pull request, link to docs or other code, and let them learn their way.
Sounds like delegation.... something a senior does, just sayin' 🤷♂️
GoodishCoder@reddit
Part of a seniors job is mentoring juniors which also includes recognizing where they need help and proactively helping them. Comments on a PR isn't mentorship.
You're essentially taking a sink or swim approach because you're too lazy to explain things.
NonProphet8theist@reddit
Hell yeah I'm lazy dude. It keeps me sane
kayinfire@reddit
You're the very first person I've seen online who has clearly expressed my view of how LLMs should be used. I'm glad to see that at least not everyone is just indiscriminately delegating implementation to the AI. That "ask questions, gain clarity, and learn something new for the next iteration" bit strikes me as so obvious, yet it seems to be such a minority view.
Four_Dim_Samosa@reddit
Yeah. It can be an augmentor, but you need to do the validation. Push back on the LLM if something doesn't make sense. Make it find proof behind its claims and cross-check its responses. It's still a probabilistic model, not a deterministic one.
luluhouse7@reddit
Yes!! I’m shocked that most people don’t treat it like that (not just for coding but other tasks too). It really feels like the way to do it and honestly seems pretty close to what you’d experience if you work with a mentor or colleague.
NonProphet8theist@reddit
Nice! I also think when less experienced devs do just delegate to AI and it turns into slop, it points out more than just their incompetence in coding, but faults within the team itself.
So while the hammer might come down on the junior dev, it's actually the vague requirements given by product and/or a failure to properly translate those requirements into workable-enough chunks for a junior. When stories aren't scoped properly, that is when the AI slop over-engineering happens.
I fight with scoping stuff constantly on my own stories, even as a more senior-level dev. That's been the real challenge with AI-powered development—devs can be faster and more efficient than ever, but product is still slow AF
frugal-grrl@reddit
This
SlaminSammons@reddit
Nah you're 100% right. AI can provide actual quality code when someone who knows what they are doing is using it. Juniors don't know what they are doing, hence the slop.
grizzlybair2@reddit
While I agree with you as it's dangerous, you see the biggest gain for a junior in terms of actual code produced. But yea they will implement dumb and inefficient things.
aeroverra@reddit
I'm impressed. The OpenAI-sponsored bots seem to have missed this comment.
Irish_and_idiotic@reddit
That’s quite the username sir.. 😆
dnullify@reddit
My team was just told our entire codebase is being replaced with a ground up AI-first entire rewrite. The whole thing is AI generated from a large comprehensive set of specs and plans. All future development work will be done through plans, and we are not to worry about the implementation details or architecture.
Guardrails are test suites and performance benchmarks. The team is half juniors and mid-levels, and development velocity has been so fast that AI-first is basically the only way to move. We're expected to ship features so comprehensive in such short dev cycles that even a dedicated principal doing all reviews can't read more than 10% of the code.
What a time to be alive.
shan23@reddit
Please report back once these hit production with client/user data.
Repulsive-Hurry8172@reddit
Isn't that a normal side effect of AI use, though? They have no context of the app. Manual coding, tracing which things link to which, and building a mental map of it all in their heads is what leads to the point where they understand and can "prompt better". Are they given time to understand the codebase first?
It's also possible they DGAF about learning. If so, just replace with juniors who have brains that are not addled by AI
Curious_Owl197@reddit
Get your manager to review it and tag it under "ship 5x faster with ai"
CrushgrooveSC@reddit
shame them
Thundechile@reddit
And I've read this exact post (multiple times) before. Post slopping?
ConsiderationSea1347@reddit
It is happening everywhere right now and we are all trying to figure out how to deal with it. At my company the more AI code you generate the more esteemed you are, so the slop race is on and I am just looking around in horror because we are a pretty big name in IT, cyber infrastructure, and security.
high_throughput@reddit
Create/mock a bot that just copy-pastes the ticket text, generates a slop PR, and deflects all questions with "I don't know, the AI wrote it". Call it Marvin.
Tell your juniors that they don't need to copy-paste tickets and generate slop, because Marvin can already do that faster and cheaper.
They need to consider what value they can bring that Marvin does not.
Serengade26@reddit
So why are tickets being written with unclear requirements? Garbage in garbage out
kronik85@reddit
That's all I hear.
Serengade26@reddit
You think writing half ass tickets to feed a game of 20 questions is good practice? Insanity.
Why don't we expose YAML to the CEO and just let him fill out what he wants?
GoodishCoder@reddit
Unclear and incomplete requirements are just a part of development. Eventually you gain enough experience to know what's missing, what questions to ask, and where some pitfalls might be. Expecting perfection in all tickets is unreasonable.
HokieSnare@reddit
This is one of the things I jumped on early with my team. My philosophy is you can use AI, but you can't submit anything you can't explain yourself. What I preach, though, is to use it as a learning opportunity. Maybe it does something that they don't understand, so go learn how that solution works and determine if you think it's appropriate. It fast tracks the learning in targeted areas that are relevant to the projects they're working on.
tnerb253@reddit
Unfortunately this is the culture companies are shifting towards. I agree they should have baseline knowledge on what their code is doing, but I don't really blame them when they're expected to crunch out more tickets now. I don't really have universal advice here, seems like a culture issue.
kronik85@reddit
It's not (necessarily) a culture issue.
I work in industrial embedded development and juniors are just turning their brains off when it comes to development.
We're not an AI first company, no one is pushing them for faster output, and we don't expect high velocity / do sprints etc.
The appeal of AI is doing the work for you.
I'm glad I'm not first learning programming these days, seems real tough to focus on the learning vs the doing.
tnerb253@reddit
For reference, when I say culture issue I am referring to the interview process that is vetting these engineers. The people pushing their AI slop and not being able to defend their code seems like the company did a poor job of vetting that particular engineer. Also lack of mentorship/guidance for juniors seems to be an issue here. It's becoming less popular for companies to willfully invest in their juniors and expect them to hit the ground running.
SpudroSpaerde@reddit
Plenty of juniors in the pool, give feedback on pushing slop and if they don't improve just cycle them out for new ones.
Overseer55@reddit
Yep. There needs to be space to learn… but ultimately, a junior (or senior) shipping slop is a net negative and needs to be axed.
LongjumpingAd9079@reddit
How does one "cycle" a developer
Drayenn@reddit
New chat in copilot
Sir_lordtwiggles@reddit
"Give me an exit interview I can give to the next joiner"
aeroverra@reddit
Overseas contractors pretty much..
FreeWilly1337@reddit
Isn't that what coffee is for?
Mahler911@reddit
You send them to a nice farm family upstate then bring a new one home from the store.
Dyledion@reddit
Turn 'em off and then on again.
LongjumpingAd9079@reddit
🤣
SpudroSpaerde@reddit
Wood chipper
Astral902@reddit
I already read this exact same post a few days ago.
abrandis@reddit
Hate to break it to you, but you're the "old man developer" yelling "get off my lawn". The industry is changing very quickly, and the old idea of PRs being meticulously analyzed is a vestige of a bygone era.
Today agents do both the coding and the review, and they give you a report, which you can trust or not… The days of hand coding really are in their final days…
I get your concerns about loss of technical skill etc., but realistically, in the near future these AI tools will develop more efficient languages/frameworks that may not be human readable, or at least not meant to be, and to work at the speed of AI you won't have the luxury of investigating all the AI slop; you'll need to rely on the bots for that more and more…
Sw429@reddit
I'm sorry, this subreddit is for experienced devs. As in, people who actually know what they're talking about.
abrandis@reddit
What did I say that was wrong?
Healthy-Dress-7492@reddit
Ban AI? They're juniors; they should be able to raw-dog it, then use AI once they've proven they know wtf they're doing.
DeterminedQuokka@reddit
The same way you handle anyone giving you ai slop. You tell them to go away and come back when they have looked at it more.
istareatscreens@reddit
"How to deal with juniors shipping AI slop code?" Start saving and get ready to retire early, because this career is going to the dogs.
dc0899@reddit
If they can't explain and can't make a good case for their design, make it a PR blocker.
Sherbet-Famous@reddit
Review the code?
mpanase@reddit
I don't care whether they wrote it by hand or using AI.
They need to explain it, and it needs to meet the quality standards of any other review, including the granularity and scope.
Wouldn't be the first time I've set up a hook to automatically decline any PR with more than 200 lines changed.
Honestly, set clear rules and be inflexible.
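A hard size gate like the one described above can run as a plain CI step. This is a minimal sketch, assuming a Python CI script and a `origin/main` base branch; the 200-line threshold and the function names are illustrative, not a standard tool.

```python
import subprocess

MAX_CHANGED_LINES = 200  # the threshold from the comment above; tune to taste

def changed_lines(numstat_output: str) -> int:
    """Sum added + deleted lines from `git diff --numstat` output.
    Binary files show '-' in both count columns and are skipped."""
    total = 0
    for line in numstat_output.strip().splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added == "-" or deleted == "-":
            continue  # binary file; line counts unavailable
        total += int(added) + int(deleted)
    return total

def pr_too_big(base: str = "origin/main") -> bool:
    """Run in CI; True means the PR should be auto-declined."""
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return changed_lines(out) > MAX_CHANGED_LINES
```

Wiring `pr_too_big()` into a required status check makes the rule inflexible in exactly the way the comment suggests: nobody argues with a red check.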
tomqmasters@reddit
Doesn't matter if it's AI. Slop is slop.
Qwertycrackers@reddit
Impossible to stop the tide. You are correct about the dark age.
chikamakaleyley@reddit
whats the consequence of you not approving PRs based on the amount of slop?
VegetableTraining733@reddit
pair programming sessions can help them learn better practices and debugging skills
thethirdmancane@reddit
Use the principles of software engineering
Tricky_Tesla@reddit
Create reviewer skills (multiple if needed) that reflect your rigor/style, have them run the produced code through those, and attach the report to the PR. You still need to review, but at least you know it went through something better than the juniors themselves.
Meanwhile, just for karma, coach your juniors on better practices in the nicest way possible.
Additional_Rub_7355@reddit
Let's keep it simple. A junior is supposed to learn, right? Well they aren't learning anything it seems, and they certainly aren't using their brains much if agents do the thinking.
Adventurous_Bend_472@reddit
Create an agent, ask the AI to check for whatever you'd deny it for, paste in the code, and then paste its response in the comments when you deny the PR. GG
powdertaker@reddit
You fire them.
Decent-Lab-5609@reddit
Lint their ass. That'll be enough to start giving them a headache. Then you can come in to help with actual answers.
GoodishCoder@reddit
How available are you to them during the development process?
Be a mentor to them, pair with them on some stories, and make the time you block off to help them through stories a no-AI time.
I've seen a lot of seniors lately shift to an AI first approach and encourage juniors to lean on AI before going to them, then when the junior asks a senior, the senior just throws it into copilot. This isn't fair to the juniors and they'll never learn that way.
My onboarding process for new juniors hasn't changed since AI took off. We work together until they understand enough to walk on their own then I slowly remove myself from the process until they can run on their own.
ATN5@reddit
Lmao I always find it funny when I see someone say “AI SLOP” 😂. It’s like a computing slur lol
thealienmessiah@reddit
My experience as a mid-level engineer is that, given the speed at which teams are expected to develop and iterate on features, it's basically impossible to keep up without AI generation and still have WLB. Juniors no longer have the privilege of taking two weeks on a single HTTP endpoint and fully understanding it. My experience from my first year feels drastically different from what juniors coming up now are getting.
goaty_mcgee@reddit
Jesus Christ how can I not find a job?
BunchCrazy1269@reddit
I have a checklist of the things I'm always catching. I scan the PR for a minute, and if I see an issue showing the dev hasn't self-reviewed etc., I'll just chuck it back.
ZenEngineer@reddit
Explain to them that if all they do is copy-paste from Claude, then it's better and cheaper to just fire them and wire Claude into your ticketing / code review systems.
They have to skill up and be able to give the feedback that you're giving. Some of these things should never get to your desk. If they don't their days are numbered.
On the other hand, you have to adapt a bit. Nits about variable names, coding style, and specific code on specific lines don't matter so much anymore. The devs should take a slightly higher view and worry more about maintainability. Even LLMs need code that is understandable, documented (in function names, comments, or tests), and broken down into reasonable components (to fit into a context window and be readable by people). You can give feedback on that, and the juniors should learn to check those things before submitting.
Constant-Drama4510@reddit
sounds like they rely too much on AI without understanding basics
FinalDevice@reddit
Enable AI code reviews. Copilot will complain about the same slop that it generated. Generate an expectation that the PR author responds to all AI feedback before requesting a human review. Note "responds" -- they don't have to implement every AI-suggested change, but if they're not going to implement the change they should be able to explain why they think it should stay the way it is.
Require a human review before merging. It doesn't matter whether a junior wrote bad code or used an AI agent to generate bad code. Suggest changes.
If we are actually entering a time where the majority of junior engineers have no idea what they're doing, then we're 3 - 5 years away from a massive talent shortage. Play along in the meantime, keep your skills sharp, and then enjoy the massive paycheck when the talent shortage hits.
Drayenn@reddit
I'm amazed people code like this. We have an AI vibe coder and he's fantastic: takes time to understand code, refactors it a lot, listens to PR feedback… I don't have any issues with him because he stays quality-first.
Mr_Bombastic93@reddit
At least they’re developers. Today I had to set up a development environment for my non technical ceo so he can attempt to use Claude code to ship meaningful changes
Looz-Ashae@reddit
Block MRs in code review until they fix them on their own. Narrow the onboarding and training corridor with pre-push hooks, code-style validation, AGENTS.md files placed all over the repo, and AI skills that do the job the way you intend. Narrowing the path is the only way to keep people from fucking everything up. Engineering is scary anyway.
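One of the pre-push checks mentioned above can be as simple as scanning the diff for debug leftovers before it ever reaches review. A minimal sketch, assuming a Python hook script; the patterns are illustrative examples of house rules, not a real linter's rule set.

```python
import re

# Patterns that often signal an unreviewed diff; purely illustrative --
# extend this list to encode your own team's rules.
FORBIDDEN = [
    re.compile(r"\bconsole\.log\("),
    re.compile(r"\bdebugger\b"),
    re.compile(r"\bTODO\b.*(AI|generated)", re.IGNORECASE),
]

def offending_lines(diff_text: str) -> list[str]:
    """Return added lines in a unified diff that match a forbidden pattern.
    Lines starting with '+++' are file headers, not additions, so skip them."""
    hits = []
    for line in diff_text.splitlines():
        if line.startswith("+") and not line.startswith("+++"):
            body = line[1:]
            if any(p.search(body) for p in FORBIDDEN):
                hits.append(body.strip())
    return hits
```

A pre-push hook would feed it `git diff @{u}...HEAD` and exit non-zero when the list is non-empty, bouncing the push before a human ever has to comment on it.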
Super-Research-6952@reddit
could it be more about the training approach
moaning-at-urinals@reddit
I wrote a Claude Code skill that reviews PRs and catches the usual mistakes. Once that passes, I'm willing to look at code from someone who's been pushing slop.
vivec7@reddit
Ask them to walk you through it, regardless of whether you understand it or not.
Juniors won't enjoy being put on the spot like that. And it gives you an opportunity to express how disrespectful it is to offload that work onto others.
If they don't seem fazed by that, make it a group call and ask the offending junior to talk through their "great solution to this problem" in front of their peers, because you want the others "to learn from them".
I like this approach because it gives them the chance to show that they do actually understand what was written. It doesn't say "AI bad", it just sets the expectation that they should understand the code produced to this level.
Over-Veterinarian338@reddit
sounds like it's time to open a junior developer recycling center
Dawido090@reddit
Just reject their PR
SoCaliTrojan@reddit
They didn't learn AI in school. Until they have AI classes it's up to you to teach them how to use AI properly.
tnerb253@reddit
I graduated in 2018, fortunately; I believe that was around the last era before ChatGPT was on the rise. I can't even imagine how college students navigate school nowadays. Cheating's never been easier.
ninetofivedev@reddit
People not even realizing that chatGPT has only been mainstream for a few years, not a decade...
tnerb253@reddit
"around the last era before chatgpt was on the rise"
I think I realize that pretty clearly, doesn't exactly take a rocket scientist to count to 4
ninetofivedev@reddit
Lol. An era is 4 years. Got it.
Overall_Gazelle5107@reddit
dude! I deal with seniors doing so!
iPissVelvet@reddit
Unpopular opinion but juniors have always needed supervision and wrote unshippable code. It’s just that with AI they can write semi-passable code at high velocity which puts senior reviewers under pressure.
Teaching juniors has always been an investment, that hasn’t changed before, and it hasn’t changed now.
Fidodo@reddit
If someone isn't curious and doesn't care then you can't really force them to. You already missed the opportunity to fix the situation. They shouldn't have been hired in the first place.
At this point I don't compare raw performance in technical interviews. You still need to be competent, but the deciding factor for me isn't how fast or completely they solve the problem; it's whether they show genuine interest in figuring out why something is broken when things go wrong. Are they genuinely interested in understanding how things work?
At this point I just select for curiosity.
PeterHickman@reddit
You use version control, don't you? Every feature request and bug gets passed back to them, because "they know their code best". Git blame is my nemesis.
Nekadim@reddit
Programmers push slop whether it's AI or themselves. Slop is slop.
MethodAppropriate470@reddit
The same way we did before AI came on the scene. PRs? Code review? How does anyone not know it's AI slop before it ships?
melesigenes@reddit
So I think one of the reasons they debug by copy-pasting the error into Claude is that they haven't seen how else to do it, and they follow the path of least resistance. Claude is faster than them at debugging, so they default to Claude. What would help is if you sat down with them and they interactively watched you figure out where the error is coming from and how to fix it.
My proposal would be for seniors to show and role model best practices rather than just telling them what they’re doing is wrong or testing their knowledge or just rejecting their PR. The good ones will follow suit and try to imitate you.
My seniors when I was a junior would pair program with me when I was super stuck and it instilled a lot of deep principles within me.
frugal-grrl@reddit
Yes, I would pair with them for sure.
And have lunch and learns where each junior is assigned a part of the codebase to walk the team through.
AndyKJMehta@reddit
Found the bottleneck
ninetofivedev@reddit
THE SAME FUCKING WAY YOU DEALT WITH JUNIORS SHIPPING BAD CODE BEFORE AI. PLEASE USE YOUR FUCKING BRAIN. SORRY, I'M MAD TODAY!
TheBrownestThumb@reddit
At meta everyone is pretty much forced to ship AI slop. You're competing with the rest of your team/org to ship as fast as possible and if you take your time doing good work you WILL fall behind. Honestly it all works out for the better because sooner or later everyone's going to be sitting on a massive mountain of fragile slop that needs real expertise to fix.
allknowinguser@reddit
AI is a multiplier: if they were sloppy before, you're going to get slop 10x faster. There are some good skills (Claude) that can review code well enough with a good prompt. Encourage that to lessen the slop.
Financial-Grass6753@reddit
No PR is shipped without tests for added code. No code duplication > X% (say, 3). No PR is more than, say, 500 lines, anything more requires an explanation. No manual review until CI shows full green.
The one who has trouble understanding gets to enjoy a ticket or two writing E2E tests. Any deviation from the standard (which has to be explicitly defined somewhere) = request changes; three rounds of requested changes = a research ticket on best practices for X. The DoD on the best-practices ticket is a 1:1 with a senior.
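The "no PR ships without tests" rule above is mechanically enforceable in CI from the list of changed paths alone. A minimal sketch, assuming a Python codebase with a `tests/` directory and `test_*.py` naming — those conventions, and the function name, are assumptions for illustration.

```python
from pathlib import PurePosixPath

def needs_tests(changed_paths: list[str]) -> bool:
    """True if the PR touches source code but no test file.
    The path conventions (tests/ dir, test_*.py names) are illustrative."""
    source, tests = False, False
    for raw in changed_paths:
        p = PurePosixPath(raw)
        if p.suffix != ".py":
            continue  # docs, configs, etc. don't trigger the rule
        if "tests" in p.parts or p.name.startswith("test_"):
            tests = True
        else:
            source = True
    return source and not tests  # source changed, zero tests touched
```

CI fails the build when `needs_tests(...)` returns True, and the author either adds tests or writes the required explanation in the PR description.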
tnerb253@reddit
Working with you sounds like fun
Mahler911@reddit
Seriously. I'm imagining a company where everyone wears the same clothes and has the same haircut and they are trained to all blink in unison.
tnerb253@reddit
My last job all my team did was nit pick/gate keep PRs which was half the reason our tickets from the last sprint constantly rolled over. Then we would get a lecture from the TPM about why our tickets keep rolling over
Financial-Grass6753@reddit
Nah, I just love precisely defined rules.
Not necessarily you need a ban hammer of any kind for a fuckup or two, but when the behavior repeats and repeats till oblivion - measures must be taken.
tnerb253@reddit
There's a line between a precisely defined rule and gatekeeping a PR, though, when you come up with 50 different rules on the fly. This kind of thing should be a conversation at standup, not a 20-comment back-and-forth thread on someone's PR. There should be rules, yes; I'm not discrediting that, nor am I saying criticism in PRs isn't beneficial. But people who nitpick your PR all day or all week are another level of thing you probably don't want to deal with.
LongjumpingAd9079@reddit
Say 500 lines! Looks good to me squire!
Sypnoticklt@reddit
Teach them that code can be generated by AI, but the changes are still owned by them and their responsibility.
If they cannot explain the code generated by AI, then they need a talking-to about responsibility.
PatchyWhiskers@reddit
Ask for detailed descriptions in the PR of what it does and why.
throwaway_0x90@reddit
What metric are you measuring them with? Does the code work? Are they meeting their deliverables? Did the number of bugs or outages increase?
You could just push back on all their PRs but eventually they'll complain to management. What would you tell your manager that they would agree with?
Just be careful you're not making a hostile work environment. If you cannot show a measurable problem that management understands, then you're the problem employee.
bystanderInnen@reddit
Times are changing old man
Party-Lingonberry592@reddit
If the code is really bad, put them on bugfixes. Bug fixing is a great way to learn about horrible design decisions and sloppy coding. It's how most of us became "Experienced Developers"
FooBarBuzzBoom@reddit
Reject any PR.
RobArtLyn22@reddit
A good place to start would be to go back, read what you wrote, and think about the true nature of the problem: by encouraging AI use, you are aiding and abetting it. You need to actively discourage AI use, or the juniors you're working with now will never become competent seniors who actually understand how things are supposed to work. There is no substitute for doing the fundamentals, and AI use as a junior is the antithesis of that.
HoratioWobble@reddit
Junior devs shouldn't be using AI. Developers should have a strong understanding of their work and impact before using it as a tool otherwise you're always going to get slop.
And if you have a new dev giving you AI generated code that you don't think is slop - you shouldn't be using AI either.
BoBoBearDev@reddit
Demand more unit tests, so it can catch their mistakes.
melesigenes@reddit
They’ll just make AI write the tests and AI is really bad at writing tests because it doesn’t really know what you want
margmi@reddit
Tests are literally one of the best uses of AI. Tell it the business rule, ask it to write a test that validates the rule. Repeat.
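The "state the rule, get a test" loop above works because a well-defined business rule pins down exactly what the test must assert, boundary included. A sketch with a hypothetical rule — the $100 free-shipping threshold, the flat rate, and the function names are all invented for illustration.

```python
# Hypothetical business rule: "orders of $100 or more ship free;
# otherwise shipping is a flat $7.99."

def shipping_cost(order_total: float) -> float:
    """Toy implementation of the stated rule."""
    return 0.0 if order_total >= 100 else 7.99

# The kind of test you'd expect the model to hand back when given
# exactly that rule -- note it covers the boundary, not just the ends:
def test_shipping_rule():
    assert shipping_cost(150.00) == 0.0   # over the threshold: free
    assert shipping_cost(100.00) == 0.0   # boundary: exactly $100 ships free
    assert shipping_cost(99.99) == 7.99   # under the threshold: flat rate
```

If the junior can't state the rule that precisely, the model can't either — which is the failure mode melesigenes describes below the fold.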
melesigenes@reddit
You’re right. The point I would want to make is that it’s a great use of AI if you know exactly what you want ie if your business rule is well defined and clear in your mind. I think juniors in general use AI without first defining the business rule and therefore the AI doesn’t know what the user really wants and therefore spits out useless tests.
LongjumpingAd9079@reddit
Slop tests as well
paranoid_throwaway51@reddit
I've left comments like "refactor via rm -rf *" or "run git branch -d (branch_name)" on code reviews before.
tantrumwaahh@reddit
Escalate? Feed their slop into Claude and tell it to make something up to reject the PR? 🤣
Spiritual-Theory@reddit
They should talk through the plan ahead of time with a Senior dev, so the plan and expectations are in place before they start. They can be coached to recognize what the solution will look like, and how to keep it small. Take advantage of the AI time speed up to work with them collaboratively before they start on a project.
spdfg1@reddit
The same way we dealt with juniors shipping sloppy code before AI: review, request changes. They will either learn as they go or not.
pr00xxy@reddit
That is your first problem, but also your relief. If their output is your responsibility, you enforce it. If it's not your responsibility, you tell whoever is responsible, make sure it's in writing, and wait for the inevitable system outage it's going to cause. Then, when shit hits the fan, you remind them who was right.
In the meantime you polish that resume and consider what you value in your profession.
If it's one thing this sub taught me it's that you can only do so much when things are not your problem.
Yourdataisunclean@reddit
Help the ones who want to be helped. Mentoring is incredibly important, and juniors with their heads on straight will understand. Make it explicit that you're willing to help them level up if they put in the work, and let the ones who aren't willing select themselves out.
ATXblazer@reddit
Just don’t approve the pull requests until they can explain it to you
Effective_Hope_3071@reddit
Honestly the best thing for me is I have a pretty low token limit because my team is async and by the time I get to coding the team has burned most of the quota.
I think that's the true middle ground, you get scarce tokens so you only use it when you're completely stumped.
Glad-Researcher2738@reddit
I couldn't care less about the majority of junior engineers not knowing how to do things. The market and the business will decide. The ones with the skills the business needs will prevail.