First time in a position reviewing pull requests and finding it difficult.
Posted by PM_ME_CATS_THANKS@reddit | ExperiencedDevs | 45 comments
Somehow I've ended up spending a lot of my career (10 years) developing things solo and unfortunately lack experience working in a team.
For the past couple of years I've been a contractor on a niche project for a large organisation, almost entirely as the solo dev. Recently we had some more funding come in and a couple more contractors were brought in to work on a particular new feature.
This means it's now down to me to review all pull requests, and I can't tell if I'm doing any of it right.
One issue is that so many of the pull requests are huge changes of (likely AI-written) code. With a lot of it, it's not clear at all what is actually going on. Sometimes a pull request is a couple of thousand changed lines made in just a few days. I do blame myself partly for not laying out and enforcing a better strategy for making smaller pull requests.
I go through it line by line and point out some of the obviously weird stuff (why is AI so obsessed with using the any type, and with casting things to any that already have a type?). But some of it is also just things like endlessly nested if statements, which will technically work but just aren't good to maintain. Sometimes I point out code that is just bafflingly weird, but it still gets merged by someone else.
The added complexity is also immense and for dubious benefit, but is necessary for the sake of this new feature.
I don't want to keep saying to people "please just make this less of a mess". We're on a deadline and they probably don't have the time to fix it all. I also want to trust that they know what they are doing and not dismiss their work just because I can't follow all of it. In the past I worked at an awful job (when I was still quite new) where basically everything I made was rejected by the person who was in charge of merging pull requests. I don't want to be like that, but somehow I'm starting to learn why they maybe felt that way.
I also realise though that once the other contractors are gone, I'm going to be the one going through the code trying to figure out bugs and making tweaks.
Then again maybe none of it matters any more because I can just hand it over to the AI slop machine? Honestly I use AI pretty minimally in my day to day work, but maybe this is the way now.
Is this relatable to anyone out here? How strict are you about what code gets merged?
DeepHomeostasis@reddit
When you can't get them to break PRs up (deadline, contractors, whatever), the workaround I usually fall back to is reviewing commit by commit inside the giant PR. Most git hosts support diffing a single commit. If the contractor has rebased into something with at least some logical boundaries, you get manageable chunks even if they refuse to split the PR itself. It's still slow, but it beats reading 2000 lines in one diff.
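If you want to script that loop, here's a rough Node/TypeScript sketch (the function names are invented; it assumes a local checkout of the PR branch with main as the base branch):

```typescript
import { execSync } from "node:child_process";

// Pure helper: turn `git rev-list` output into a clean list of SHAs.
function parseRevList(output: string): string[] {
  return output
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0);
}

// Walk the PR branch one commit at a time instead of reading one giant diff.
// Run from inside the checked-out PR branch.
function reviewCommitByCommit(base: string = "main"): void {
  const shas = parseRevList(
    execSync(`git rev-list --reverse ${base}..HEAD`, { encoding: "utf8" }),
  );
  for (const sha of shas) {
    // `--stat` gives a quick map of each commit before you open its full diff.
    console.log(execSync(`git show --stat ${sha}`, { encoding: "utf8" }));
  }
}
```

Swap `git show --stat` for plain `git show` once you've picked which commits deserve a full read.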
PR size and AI casting things to any are different problems. PR size you negotiate as a process rule. The any casting you fix with a linter that fails the build before the PR ever lands in your queue. Splitting them apart might make the workload feel more tractable.
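To make the linter half concrete, here's a contrived TypeScript sketch of the pattern it should reject. The User shape and function names are invented; the rules referenced in the comments are real typescript-eslint rules:

```typescript
// Assumes typescript-eslint with "@typescript-eslint/no-explicit-any" and
// "@typescript-eslint/no-unnecessary-type-assertion" set to error in CI.
interface User {
  id: number;
  name: string;
}

// Flagged: `user` already has a type; routing it through `any` erases
// the compiler's checks for no benefit.
function badName(user: User): string {
  return (user as any).name;
}

// Preferred: keep the declared type and let the compiler verify the access.
function goodName(user: User): string {
  return user.name;
}
```

With those rules failing the build, the `any` argument happens once, in config, instead of on every PR.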
Exac@reddit
Immediately close a PR that casts to any unless it is a clear mistake. It doesn't belong in professional work, and you should be clear about that. If it is a junior dev making a mistake, be gentle with them. If it is a senior dev letting AI write their PR, just email their manager and HR and let them sort it out.
thy_bucket_for_thee@reddit
"Hey Anon, we hear you're delaying projects because of certain feelings you have? Sorry, but if you want to continue to have your job, you need to start being a team player and approving these PRs. I'm starting the PIP paperwork now. I don't want to submit it, so let's circle back and see how you're doing in a month."
PM_ME_CATS_THANKS@reddit (OP)
This is one thing I always insist on being fixed. It's baffling how much it comes up, sometimes several times in the same file.
VRT303@reddit
Make your pipeline fail for any then?
pindab0ter@reddit
Yes, just set up a linter that disallows any.
blissone@reddit
This one is pretty easy to fix with an agent skill outlining the use of types; at least that is my experience in Python with the latest models.
ozziegt@reddit
This would be a rule, not a skill
Euphoric-Neon-2054@reddit
Hilariously funny idea to email HR about flagrant abuse of Any.
qxxx@reddit
I noticed it too, the PRs are huge af.. even juniors can now produce a crapload of code in a few hours.. The code works technically, but is unmaintainable in my eyes. You will need to use AI to add more stuff to the mud.
In the beginning I asked them to refactor, to simplify, to use some design patterns... but I stopped caring, because in our company only a few devs care and most of them just produce slop. Juniors reviewing code of juniors... "LGTM"... Then after a few weeks I need to touch that code because some strange bugs are happening, and I am like WTF. I need to use AI just to explain to me how the code works lol. Or I try to delegate the bug to the original developer. I don't want to touch their crap. xD
And the boss is happy - because we add features all the time = $$$
RandomPantsAppear@reddit
I think the first step, especially in your scenario, is to make it so less crap makes it to PR in the first place.
It is fucking GOLDEN to have a claude agent (which you can add repo-wide) that does nothing but sit there and stylistically audit the code, force the writer to go back and revise. It does require you to awkwardly put your bureaucratic preferences in english language form though.
It's not perfect of course, but it can be very helpful in at least minimizing the AI blast radius.
but_why_n0t@reddit
There's a difference between being pedantic and being right, and you seem to be on the right side of the spectrum.
No one should be making a 1000-line change or refactoring code willy-nilly. Start rejecting PRs like that unless they come with a pre-approved design doc.
If you don't understand it, you don't review it. Easy. Make this an expectation. Return the PR to author as many times as necessary.
You need to define standards. Publicly define the coding and PR standards. Mention that you want to make onboarding easier by taking away this ambiguity, or that you want to make sure code is maintainable and easy to scale.
hardwaregeek@reddit
Yeah, this is as much a social problem as a technical one. I think your instinct is right that continually blocking pull requests will just annoy people and make you the bad guy.
Are you constantly under a deadline? Or is this a temporary situation? One of the places with the worst code I've seen was where the team was always under a deadline and "needed to make deliverables", even though that was completely artificial. If that mindset can shift a little, that will help a lot.
Otherwise, maybe fight AI slop with AI slop? There are AI reviewer bots and they can probably catch the super low hanging fruit like any types or super bad cyclomatic complexity. You can also get AI to split up a pull request into multiple ones, or even rework a commit chain to be more legible. Of course, there's linters, but it seems like the team will just circumvent them. Maybe you can encourage them to ask the AI to fix the lint issues? It won't be perfect but at least it's a low-cost option. These are all imperfect, mediocre solutions, but unless you address the team always feeling under fire, I don't know if anything will change.
PM_ME_CATS_THANKS@reddit (OP)
It's a temporary thing; once the project is completed it'll likely just be me working on it again, and when the budget runs dry I guess work will stop.
We don't exactly have a hard deadline but on standups someone will say "when can we deploy this to production?", often just a day or two after the big PRs land. I always say "not yet" or "when it's ready", but it's clear people want it done and dusted.
This is all nice advice, in particular I think I will try and use AI to review (a few people have suggested that). It's a bit late to apply it to this project, but I definitely will the next one.
Shazvox@reddit
It depends on how time critical things are. But one rule I never break is that I must understand what is being submitted otherwise my approval is just a rubber stamp and we'd be better off just not code reviewing at all.
When it comes to code quality and the size of PRs, you need to start formulating some rules so that people know what's expected of them.
Maybe start linting to avoid things such as nested if statements and other bad practices. Stop those things from ever becoming PRs.
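The nested-if cleanup usually amounts to guard clauses. A minimal TypeScript sketch, with invented names, of the before and after:

```typescript
// Hypothetical order shape, purely for illustration.
interface Order {
  paid: boolean;
  items: string[];
  shippingAddress?: string;
}

// Before: technically works, painful to maintain as conditions accumulate.
function canShipNested(order: Order): boolean {
  if (order.paid) {
    if (order.items.length > 0) {
      if (order.shippingAddress) {
        return true;
      }
    }
  }
  return false;
}

// After: early returns make each failure mode explicit and keep the
// happy path un-indented. Same behavior, flat structure.
function canShip(order: Order): boolean {
  if (!order.paid) return false;
  if (order.items.length === 0) return false;
  if (!order.shippingAddress) return false;
  return true;
}
```

A complexity lint (e.g. a max-depth rule) can enforce the flat style mechanically instead of the reviewer arguing it PR by PR.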
imsexc@reddit
Have you watched the Netflix K-drama Start-Up?
Here's the question: "What do you want to be? A good person or a CEO? Don't be greedy. You can't be both. Choose one. Just one."
Answering this question helped me in how strict I've been in reviewing code. If it doesn't pass my standard, I won't approve it. You can get approval from someone else, fine, I don't care. I've expressed my professional opinion in reviewing. If team decided to ignore it, the ball is not in my court anymore.
This is similar to how there's sometimes a dissenting opinion from a single Justice in a US Supreme Court decision, where the judge expresses disagreement with the ultimate decision along with the rationale behind it.
Ideally, through code review we want better-quality code getting merged, an opportunity for both author and reviewers to grow their skills and knowledge through good-faith discussion and debate, and a trail of history that can serve as additional documentation of why a block of code was written a particular way.
We are still responsible for guiding them toward the implementations we actually prefer, and for explaining why we reject something, instead of just rejecting without any guidance on how to make it better.
That's just me.
Ill_Ad_7616@reddit
You manage AI with deterministic or semi-deterministic automation. First, immediately inject every tool you can to offload your review scope: stop-hooks, pre-commit, pre-push, CI hooks. I also put a merge queue in place for my team.
This gives me code lifecycle events where I can make things happen and essentially inject requirements. The examples you gave can be caught by a linter and automatically rejected. You cannot keep up with reviewing AI output as a human. Then start reviewing requirements and tests. Do not look at the code, look at the tests only. It's test review, not code review. If you can't do this with AI-generated code, your culture is broken for the era of AI. Before my team sees a PR now, it has been reviewed and tuned through 4-24 rounds by AI reviewer agents with different scopes of concern.
You can try to fight the slop by rejecting reviews, but that is in my opinion a bad solution. You rejecting them means you have to teach them; if the tools reject them, the tools will teach them as they chat with AI about why it's failing SAST, why the linter is yelling at them, or why the automated reviewers say their code is horribly repetitive. You will not be able to train these devs to organically be solid developers on your own, and I would argue the incentive for them to be good traditional devs is gone in most industries already; now AI is the developer and they are mini product owners of sorts.
VRT303@reddit
Comment and reject anything you're not comfortable taking the fall for when prod goes down on a weekend or money is lost.
Communicate what would make the process easier for you, and push back or involve your team lead.
mightshade@reddit
Code getting merged by someone else over your objections is a red flag for me. If you were just remarking "this looks a bit weird", I can see how someone else might disagree and accept the code anyway.
Any stronger feedback should at least be answered with "I see where you're coming from, but I think it's acceptable because xyz, can we merge anyway?" and waiting for your reply.
Also, isn't your role to be lead dev or similar? In that case, merging a PR over your disapproval is a clear and problematic disrespect of your role. You're lead dev on paper but de facto irrelevant.
Talk to your Scrum master/agility master/manager. It's their job to either enforce your role (or define a new team setup). And don't be afraid to make use of your title. If you carry the responsibility for the project, this also gives you certain privileges (e.g. to say "no").
Finally, "we need to go fast" has always been a welcome excuse for producing poor code. Taking on tech debt can occasionally speed you up in the short term, but a) not without a plan for how and when to pay it back, and b) it can't be the norm, otherwise it'll slow you down very soon.
kamilc86@reddit
The code coming in is going to be AI generated whether you like it or not, so the move is to get equally structured about how you review it. There is a real difference between dumping a PR into Claude and asking "is this good" versus running a loop where you hit deterministic checks first (linting, type coverage, tests passing) then ask the model to explain specific modules, trace the control flow, and flag anything that looks generated without domain understanding. That second approach works surprisingly well if you ask pointed questions instead of vague ones.
The bigger question is whether you want to keep reading every line yourself or shift to reviewing at the architecture level. If you build a principled AI review process, the question for a 2000 line PR stops being "can I follow every line" and becomes "is this architecturally sound, are the module boundaries clean, and do the tests cover the actual behavior." That is a more sustainable place to operate from when you are the only one left on the project.
Responsible-Rock-680@reddit
A 2,000-line PR isn’t really a review request, it’s a rubber-stamp request. Make reviewability the rule: what changed, why, and how was it tested.
z960849@reddit
Automate, automate, automate
Additional_Rub_7355@reddit
Of course it's difficult, that's why we need a human to do it. Same thing with most things moving forward in our damn field.
KandevDev@reddit
10 years solo + jumping straight into code review is harder than it should be because you are missing the muscle for the social half. the technical half (catching bugs) is the easy part. the hard part is reviewing code in a way that does not make the author defensive, does not let bad code through to avoid friction, and balances "this is wrong" with "this is just different from how i would do it".
two rules that helped me: (1) every "this should be X" comment requires you to articulate why the existing version is wrong, not just different. half my early review comments evaporated under this test, they were stylistic preferences not actual issues. (2) if you have more than 5 comments on a PR, your review failed. step back and ask whether the PR needs to be smaller, or whether you need a synchronous conversation instead of asynchronous comments.
The_Real_Slim_Lemon@reddit
If it helps, we all struggle with reviews in this era
Setting some ground rules is good. Where I work there’s a bot that does an initial code review - we set the rule that you first post in draft mode, then do a self review + action any bot comments (or reply with the reason why not), then you get a proper review. Saves so much time reviewing slop, plus it picks up issues a human wouldn’t.
If I had to be responsible for juniors, then I’d have more blunt rules - max size of method, max lines in a PR, and idk what else just for my own sanity. And require unit tests - guarantee that their slop at least works.
AcceptableSimulacrum@reddit
Enforce what you can automatically. You can ban the any type w/ eslint/prettier.
authentic_developer@reddit
One process change that helped me here: make a PR-description-or-no-review rule. The author writes a short "what changed, why each chunk is needed, what risks exist" block at the top. If it's missing, you close without reviewing and ask them to add it. You stop being the bottleneck and the burden shifts onto whoever wrote the slop. Pair that with a soft cap on diff size, maybe 400 lines for non-test code, anything bigger gets split. The any-cast stuff is real but the harder fight is the process, not the line-level. Once people have to write what they did, they generate less of it.
ciynoobv@reddit
I find that a good strategy is to have them walk you through the pr first, if they can’t even explain what their own code does (because they were hands off with llms) then that is an instant rejection. If they can explain then you have more context for the changes. And you can start to actually review the changes.
vasilescur@reddit
I'm so sorry. I feel my own experience resonating from your words and I know the desperation and the hopeless sense of dread as your code gets taken over by AI. As the rules and order and sense you've crafted for yourself and maintained so carefully, on principle... succumb to the drunken staggering march of entropy.
There's no going back. I'm so sorry.
Business has to move at the fastest speed it can, or else somebody else's business will. The time pressure will only get worse, and the percentage of brainpower used, lower.
I don't know if this is a storm we can weather. I've started swimming with the stream and hoping I'm wrong.
How to review a PR has completely changed in the past year. Now the answer is, use Claude. Fight fire with fire.
roodammy44@reddit
Indeed, the first line of defence against this is to get very strict with linting - that can solve the problems with the anys.
Then you get another AI to review the PR. If the dev is using claude, use GPT. It’s kind of amazing what they catch.
After it has got through those rounds, the syntax and styles should generally not be a problem and you are down to the philosophical issues. For that it’s best to sit down and discuss the PR with the dev. If the dev is vibe coding then tell them that they must read through all of the PR, understand it and correct madness before it is sent on.
HalveMaen81@reddit
I encourage _everybody_, whatever their team set-up, to make use of Conventional Comments when it comes to Pull Requests. Completely changed how I view the process, both as an Author and Reviewer.
ItSeemedSoEasy@reddit
This is just AI slop, why are you reviewing it?
The other dev clearly hasn't, why are you doing their job?
Do you have an AI policy?
Just reject it outright without reviewing it and dress down the dev doing it severely.
Ask for them to be fired because it's only going to get worse.
cobalt-jam88@reddit
The useful mental split is: does this code create future work for someone else? If yes, that's a real request change. If it's just not your style, that's a comment at most, not a blocker.
In practice the categories that actually matter are: will someone misread this in 6 months, is there a correctness edge case, or does it conflict with patterns the rest of the codebase has already settled on. Everything else is preference, and blocking on preference when you're the only one who's owned this project solo for years is going to read as gatekeeping to the new devs pretty fast.
Michaeli_Starky@reddit
Reviewing AI-generated code is hard even for those of us who have been reviewing PRs daily for decades...
flamecrow@reddit
Push back. Always small, well-defined pull requests. You know better. Trust yourself.
polypolip@reddit
From what it sounds like the other people on your project don't know how to work in a team.
PRs need to be manageable in size. If they are not, reject them until they are. If the authors are upset about it and management forces you to go through the large PRs, one way to do it is to go through them together with the person who wrote the code and have them explain everything as you go. If they can't - you guessed it - reject the PR. If they can't respect your time, they can waste some of theirs.
ILikeCutePuppies@reddit
They can easily have AI break the PRs up and test each part that needs it individually. It's the only way you can be sure it's stable.
It's OK to make big changes with AI to figure out the direction, but for non-prototype stuff it needs to be broken down. You also need more people reviewing the code to bring the burden down.
And you need a lot of tests with AI. It's too risky not to have them as an extra precaution.
veryspicypickle@reddit
Reject them. Otherwise you risk reputational damage.
Imoa@reddit
If you’re in charge of the other devs, or will face reputational damage by endorsing the bad code - push it back with comments. If it’s too much to review, tell them to document all changes in PR comments, get them in a call and tell them to walk you through the changes.
If you’re not in charge of them there isn’t much you can do.
randomInterest92@reddit
This is the only thing that works with slop slingers.
There is a balance between "I'm vibecoding everything" and "i write everything manually" .. the optimal spot is somewhere in-between and the slop slingers need to be nudged towards doing more manual work (mostly reviewing and understanding)
awjre@reddit
You're going to get blamed (and eventually fired) for allowing AI slop into your codebase.
However you can fight fire with fire here. Before you review any PR get AI to do a review. I've found this to be highly useful https://github.com/trailofbits/skills
Also use AI to identify PRs that are being merged despite comments and get AI to review them using security skills.
Then start calling people out about poor use of AI and vibe coding: if people are going to use AI (which you have now clearly embraced), you expect agentic engineering approaches.
You also still expect human-consumable PRs, and for contributors to have read and fully understood the code (and to own it).
Do not let this stand. It's your job on the line.
whitehorrse@reddit
Smaller pull requests are actually better both for humans and for robots to review. I get triggered seeing 1000 lines (if it's not fixtures or tests taking up most of it). On previous, less mature teams (before AI), I'd leave a note with my review that it took a long time and that splitting it into smaller PRs would save time. Historically that might have been a little painful, but robots are very good with git commands.
OG_Wafster@reddit
I recently kicked back a PR that changed far too much in one go and made the author break it up into manageable pieces.
Refactor into a new base class without changing the code. Then in a separate PR make changes to that code. And so on.
I caught many issues in the incremental PRs that were impossible to see in the original large PR. You have to hold the line.
CraZy_TiGreX@reddit
A couple of thousand changes in a PR is a non-reviewable PR, unless it's a one-off, which I doubt.
By what you're saying, they are using AI and they barely know what they are doing.
Get Claude Code Max ($200), pull the PR onto your machine, and get it to review it.
At the same time, talk to your boss about the situation, but they most likely don't care about quality or maintainability.
The bug situation is going to be insane, from my own experience. We had 3 contractors in a team for 6 months, I spent 3 fixing massive bugs after they left...
me_hq@reddit
If you take responsibility for the project you need to set some standards. Require comments explaining logic and reject if the author cannot justify every line of code. Do not hesitate to ask ‘why’ and ‘what for’. And double your fee.