How are you doing code reviews?
Posted by CyberWrath09@reddit | ExperiencedDevs | 119 comments
Curious how folks here are actually doing code reviews these days, especially on teams where most devs are 5+ years in. I am trying to recalibrate our process and I am realizing we've kind of drifted into a weird middle ground.
Context: mid-size B2B SaaS (~60 engineers), mostly backend and infra heavy, regulated customers but not medical/finance-level regulated. We are fully remote, roughly 6 time zones, and historically pretty async.
Right now our process is: open PR, CI runs, an AI bot does a first pass comment dump, then a human approval is required from someone at or above your level for risky areas, or any senior for low-risk stuff. We do not have formal size limits, just the “keep it small” mantra. Unsurprisingly, that has turned into 50-line PRs and 1500-line PRs coexisting.
Having some issues with staying organized.
makeitrayne850@reddit
Code reviews should emphasize clear communication and understanding of the context behind changes, regardless of PR size.
weIIokay38@reddit
On a previous team it was always the priority to review other people's PRs as soon as possible. Like if you're "coming up for air" after a focus session, check a saved GitHub search for PR reviews and go through them. Not every team does this (on most teams I've been on the SLA is 1 day max), but I feel pretty strongly that feedback has to happen quickly and ideally be non-blocking.
Anything that’s more than ~1000 LoC (excluding schema changes, artifacts, etc.) I won’t review and will request changes on. I encourage engineers to use small, stacked PRs so they’re easily reviewable. I’m not spending time on large PRs that are hard for me to follow or review. If anyone complains I put my foot down and escalate.
In my reviews I focus on areas that other engineers don’t usually cover, which is usually the tests. A lot of our engineers will just generate the tests and not give them even a once-over, so I add some pain there to ensure that folks know what to test, what good tests look like, that they’re not accidentally writing slow ones, etc.
If there’s anything that I find myself repeating in PR reviews, I spend time writing a lint rule. Usually I use ast-grep to do it because it’s fast and easy to set up by writing YAML rules. Biiiig ast-grep stan. I’ve also written custom instructions for Claude for things like “does this description / comment make sense?” for things that are hard to lint.
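For illustration, a minimal ast-grep rule of the kind being described might look like this (the rule id, pattern, and message are invented for the example, not taken from their actual setup):

```yaml
# rules/no-console-log.yml — hypothetical rule flagging stray console.log calls
id: no-console-log
language: typescript
severity: warning
rule:
  pattern: console.log($$$ARGS)
message: Use the structured logger instead of console.log
```

Running `ast-grep scan` with a rules directory like this turns a repeated review comment into an automatic check.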
1One2Twenty2Two@reddit
50 lines PR: 4 comments, 5 requested changes
1500 line PR: LGTM 😎
weIIokay38@reddit
If it's over 1k lines (excluding lock files and artifacts) I usually ask for them to split it apart. Pretty easy with most tooling now. I don't have the link, but some study showed that dev attention just vanishes after like 500 LoC for reviews.
CyberWrath09@reddit (OP)
Lol yeah, that sounds way too familiar. The tiny PRs get picked apart and the giant ones somehow slide through untouched. I’m starting to think our “keep it small” rule needs more actual structure, or at least clearer expectations around what gets real scrutiny.
How does your team handle oversized PRs, do you push back, or just roll with it?
dweezil22@reddit
Re: oversized PRs, devs are expected to explain "Why is this so big" in the description. Devs that have earned more trust tend to get more leeway. Worst case, they're told explicitly to go split it into smaller chunks (and I encourage devs to be up-front about this rather than just ignoring the PR or rubber stamping it).
Small PRs that start to drift toward bike shedding are called out: "Is this level of discussion necessary here?"
Most important perhaps. "Nits" are treated like nits and may be ignored at the PR author's discretion.
On my teams I've generally seen everyone working in good faith and going the extra mile (so a reviewer will leave a nit, clearly suggesting it may be ignored, and the author is 60/40 likely to handle it anyway).
I also explicitly warn new-to-the-team devs that they may have some very long PR back and forths as they ramp up, and it's not hazing, it's the team helping them learn (making sure to point out that my first PR on the team had 35 comments and I closed it and made it into 3 smaller ones; no one is above the law).
yourparadigm@reddit
I tell people to split it up into smaller PRs or at least individual commits that can be reviewed separately.
thekwoka@reddit
Classic Bike Shedding.
It's easy to have a strong opinion about a small PR, and hard to have a strong opinion about a big PR.
Creepy_Ad2486@reddit
It's also generally easier to grok the changes in a small change set vs something that's hundreds of lines across dozens of files.
1One2Twenty2Two@reddit
It depends. If the PR is huge and within the scope of the ticket, I'll have a tendency to accept it and I'll just spend more time to review it.
If the PR is huge because it has stuff that is out of scope, I usually ask for the items outside of the scope to be removed, but I ensure that we create tickets so those items don't fall through the cracks and we can actually plan them at some point.
Creepy_Ad2486@reddit
I work with a guy who creates his own backlog items outside of sprint planning, opens MRs for them, and then there's all kinds of stuff outside of the scope of the original out of scope item. It's maddening.
AlaskanX@reddit
My refactoring PRs are like 100+ files and hundreds if not thousands of lines of code. They also have tons of scrutiny and more thorough automated testing than any other code I write. Usually I improve test coverage in the process.
My other PRs I generally take the approach of smaller is better, unless an entire feature request necessarily touches many files.
BogdanPradatu@reddit
1500-line PRs get ignored by me for longer, so unless someone else approves, the author will have a hard time merging until I review everything.
bwmat@reddit
And I'll just keep on telling my manager that my task is done and waiting on review, works for me lol
bwmat@reddit
Am I a total weirdo who will just go and take half a day to do a review if the PR is large, instead of just giving up? (it might have something to do with the fact that I create large PRs sometimes lol)
NoJudge2551@reddit
Primary_Ads@reddit
this is the most fair system I've heard
CtrlAltSysRq@reddit
I have noticed a pattern. Some people, I open their PR, and I already know what it's about because we chatted about it briefly. I know vaguely their approach and what they asked about. I know roughly what they know, so I have suspicions about what they probably got exactly right and what bits they may not realize thar be dragons. Their details are perfectly fine (syntax, method factoring, etc). Regardless of how many LOC it is, it's never too much. And their PR description fills in any gaps. I skim it a bit, maybe offer one bit of advice or an aside remark, and I approve. And it's like this every time.
Then there are some people who routinely open huge PRs where I have practically no idea what they're doing; it's so much code I barely know what they even hope to get from me aside from a stamp, and it's littered with so many questionable micro-decisions that I don't even know where to start. And they do this routinely. And they have a track record of causing incidents with small mundane changes buried in these huge things - touching core libraries with huge blast radii casually because they needed something random, or are "fixing a bug."
And people typically clearly belong to one of these two camps and if you try to give feedback, the huge PR-ers will have strongly articulated reasoning for why their PR just has to be this big.
Yet everyone else manages to never need to do this. Most mysterious.
Western_Objective209@reddit
In my dept we basically had code reviews for core dependencies, and everything else was up to the team. Then the dept instituted mandatory code reviews, and this is basically what happens.
failsafe-author@reddit
This is the way.
Jyzerman9@reddit
Small PRs capped at about 300 lines, we try and have mandatory tests (hit or miss for us), and architecture notes. Seniors review design, mids review implementation. AI bot (Coderabbit for us) runs first pass, but its feedback is non-blocking.
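A cap like that can be enforced mechanically in CI; here's a hedged sketch (the script, the lock-file exclusion, and the 300 threshold are assumptions for illustration, not their actual setup):

```shell
#!/bin/sh
# Hypothetical CI step: fail when a PR exceeds ~300 changed lines,
# excluding lock files. GITHUB_BASE_REF is set on GitHub Actions PR builds.
BASE="origin/${GITHUB_BASE_REF:-main}"
CHANGED=$(git diff --numstat "$BASE"...HEAD -- . ':!*.lock' \
  | awk '{added += $1; deleted += $2} END {print added + deleted}')
if [ "${CHANGED:-0}" -gt 300 ]; then
  echo "PR touches $CHANGED lines (cap is 300); consider splitting it." >&2
  exit 1
fi
```

Making the step report-only instead of `exit 1` would match the "non-blocking" spirit of the AI first pass.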
Toxin_Snake@reddit
What do you do if you develop a new feature? Do you merge it in unfinished parts to keep the PRs small?
HenryJonesJunior@reddit
Yes. Large features should be gated by an experiment or flag and should be deliverable in smaller parts which each function and are able to be understood.
For instance, instead of "New Feature Foo (3000 lines)", you could have something like:
"Refactor common code into helper class" (250 lines) "Add new RPC stubs" (100 lines) "Implement RPC X with tests (150 lines)" "Implement RPC Y with tests (150 lines)" "Have UI call X when feature enabled (300 lines)" "Integration tests for feature X (300 lines)"
etc.
At each stage, the project builds and all tests pass. Each piece is individually much easier to review and understand.
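The flag-gating step can be as simple as a call-site check; a toy sketch in Go (the flag name `new-foo-rpc` and all function names are invented for illustration):

```go
package main

import "fmt"

// Flags is a minimal stand-in for a real feature-flag service.
type Flags map[string]bool

func (f Flags) Enabled(name string) bool { return f[name] }

// handleRequest only reaches the new code path when the flag is on,
// so partially merged feature code stays dark in production.
func handleRequest(flags Flags) string {
	if flags.Enabled("new-foo-rpc") {
		return callNewFooRPC()
	}
	return legacyFooPath()
}

func callNewFooRPC() string { return "foo-v2" } // merged early, unreachable until flagged on
func legacyFooPath() string { return "foo-v1" }

func main() {
	fmt.Println(handleRequest(Flags{}))                    // flag off -> legacy path
	fmt.Println(handleRequest(Flags{"new-foo-rpc": true})) // flag on -> new path
}
```

Each incremental PR can then ship behind the flag while every stage still builds and passes tests.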
Western_Objective209@reddit
Do need to understand the trade offs though; this increases the development time of the feature by 3x easily
HenryJonesJunior@reddit
It doesn't. You don't pause while waiting for a code review; you continue building on top of the change. You can have multiple things out for review at once.
Reviewing small changes is dramatically faster than reviewing large ones. Small changes I can respond to within an hour or two because it will take me minutes. If you send me a 3000 line code review I will put it off for a day until I have an hour to dig through it - and in the end still ask you to split it into multiple smaller changes.
Western_Objective209@reddit
I mean it definitely does, even if you continue building you are adding friction to the process and adding overhead by having to design your code in a way that breaks it up into smaller components.
Sure if you decide that you will make sure it takes longer, then it will take longer either way, but that's a choice you are making
Qinistral@reddit
Ya I’m surprised by your downvotes. Ignore what is best practice for a second. More PRs is going to take longer no matter what if only due to additional context switching, but also when people ask for changes and you started on the next branch it’s a PITA to reconcile them.
Also, as you said, new code is more likely to be refactored, and gating that churn on PRs is also a large waste of time. You're getting eyeballs ($$) on code that is dead and being thrown away.
Western_Objective209@reddit
Yep, I understand wanting to make reviewing code easier for everyone involved, but resources are finite, and writing a single feature that requires 6+ code reviews across several tightly coupled branches is prioritizing easier code reviews over just about everything else.
port888@reddit
What happens if a work-in-progress feature suddenly requires a major refactor down the road, but already have parts merged into the main trunk (including DB schema changes with migrations already applied) via incremental PRs? Do you just eat the cost of merging the refactor and migrate forwards with the column/table drops?
HenryJonesJunior@reddit
When you work in multiple small changes, it's unusual to have long-lived branches. When a piece is ready, merge it to main. So these kinds of radical merges don't happen particularly often, but yes, if they do you just take a few minutes to do the merge.
No_Top5115@reddit
How do you do the merging? I tried this and PR 2 merges into PR 1, and I ended up with a long chain that needed to be reviewed in order. And when there was a change in PR 1, it would sometimes impact all the other PRs and take ages.
HenryJonesJunior@reddit
Most of the time it doesn't. With small reviews it is uncommon to encounter radical changes that redo everything.
The exact workflow varies depending on your source control and tools, but git and mercurial both support chains of commits, and every company I know still using a Perforce derivative has tooling to enable chains of CLs as well such that you just rebase/evolve and resolve any merge conflicts.
gammison@reddit
Don't work on PR2 until PR1 is merged and call it out in standups that PR1 needs to be vetted and cleared before you can work on PR2 more.
likeittight_@reddit
Someone who actually gets it 🤣😭
Toxin_Snake@reddit
I see. Maybe I will try to push for this approach in the future.
likeittight_@reddit
This is the only way. One PR per feature is the sign of a very inexperienced team
firestell@reddit
As someone on a team that mostly does this, what is your opinion on stacked PRs? We make a bunch of PRs that are reviewed individually and eventually merged into a big PR implementing the entire feature.
feanorsilmarils@reddit
Not the person you replied to. The odd couple is OK, but stacking many can easily lead to a chain of merge conflicts in a busy repository. Plus, with many small releases going out incrementally, it's obvious to spot which one breaks the dev environment when it happens, rather than having to sieve through a large set of changes.
likeittight_@reddit
Well put.
likeittight_@reddit
This sounds something like Gitflow? I’ve never tried it, so can’t comment on something like this specifically from experience but in general I’m not a fan of long lived branches, would rather get the code into main and shipped ASAP - bit by bit is ok.
That said - the approach you outlined could certainly be appropriate for some situations.
robben1234@reddit
Does your source forge properly support stacking diffs?
If I recall correctly, when stacking MRs in GitLab, if you merge the first one with squash enabled, the entire chain becomes annoyingly conflicted. And if you merge it from tail to head and then into main, what was the point of stacking?
Striking_Celery5202@reddit
Unless it's a very small feature, it usually ends up taking more than one PR. Ideally you should use some kind of flagging, or at the very least not hook up the new code until the last PR (if the work spans multiple releases, for instance).
thelochteedge@reddit
I'm with the others. We are trying to implement a strict 400-line cap, but I don't know how we solve the "unfinished feature" part of things. With back-end APIs I suppose it can somewhat be done in parts, but when it's unfinished I'm never sure what to do.
likeittight_@reddit
I mean, of course all of this depends on the product, codebase, feature, etc., but in general just make sure the unfinished parts are not hooked up or visible to customers, and make sure the existing regression suite passes. There are various ways of implementing gates with feature flags etc.
bwmat@reddit
What tools does one use for this 'pulling out bits' process?
Sounds like you basically have to manually reimplement it...
likeittight_@reddit
Tools? Git, you need to be extremely proficient with it. Structure commits properly, cherry-pick, rebase, merge, etc.
Reimplement? No
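For the mechanical part, one common recipe (branch names and SHAs below are placeholders, not real refs) is to cherry-pick the finished commits into stacked branches, or to soften the whole branch and re-stage it hunk by hunk:

```shell
# Sketch only — <refactor-sha> etc. are placeholders for your own commits.
git checkout -b part-1 main
git cherry-pick <refactor-sha-1> <refactor-sha-2>   # just the mechanical refactor
git push -u origin part-1                           # PR: part-1 -> main

git checkout -b part-2 part-1
git cherry-pick <feature-sha-1> <feature-sha-2>     # the behavior change
git push -u origin part-2                           # PR: part-2 -> part-1

# If the work is one big messy commit, soften it and re-stage by hunk:
# git reset --soft main && git add -p && git commit
```

If the original commits were structured well, this is mostly bookkeeping rather than reimplementation, which is the point being made above about structuring commits properly.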
bwmat@reddit
I know about cherry-picking and interactive rebase (don't do much of the latter), but actually splitting up the completed work into parts that make sense is usually going to be a ton of work...
And I do refactoring to clean up the implementation in the original PR
jcgl17@reddit
I think patchset-oriented code review together with trunk-ish-based development is probably the way to go (and of course feature flags). The Linux kernel is a pretty good example of a project where this works.
Specific_Ocelot_4132@reddit
Feature flags
teratron27@reddit
Strict limits are hard. I had a PR a few days ago that had 300+ lines just for the tests. The actual code was ~200 lines of implementation and another ~100 of generated code (Go with sqlc). I wouldn't have been able to get to the right implementation without writing the code and tests at the same time (so no way to split them up, even if it was an unused path).
FanZealousideal1511@reddit
So what do you do if you have a feature larger than 300 lines? Or if you are setting up a new service?
Specific_Ocelot_4132@reddit
Feature flags
bwmat@reddit
I've always felt the cure is worse than the disease here
GreedyCricket8285@reddit
No offense but this sounds like a terrible place to work.
CyberWrath09@reddit (OP)
This sounds clean - I think we need to head in this direction.
phoenixmatrix@reddit
I've helped countless organizations, small and large, optimize their processes over the years, but I've been struggling with this for the first time at my current one.
We have some pretty mature AI users in house. They can pump out 20+ high-quality, full-feature PRs with production-quality code, tests, etc. in a single day.
But we still want to do code reviews. Both as a matter of principle, but also because our enterprise customers mandate it as part of SOC2 reviews and DDQs.
That became a huge bottleneck, because even if the PRs are small and we spread the load, it's pretty costly to do that many reviews. And we don't want to slow down high-quality code from going to production.
We've done some stuff. Usage of AI reviews didn't really help, because while it improves quality overall, it doesn't really save time on the final approval, nor does it reduce context switching.
Our biggest win has been moving to stacked PRs (Git Town or Graphite). That lets us keep PRs really small while allowing people to keep working and not be blocked, ever. It's only a partial win though, and our rigid infrastructure policies invalidate reviews of stacked PRs every time the diff changes underneath them.
Next, I've been increasing my risk appetite. Since most of the code in our org is high quality, I pushed to spend a bit less time on PR reviews. We already have REALLY comprehensive automated tests, and solid linting, so I can often eyeball a PR, make sure there's not some really obvious architecture or security mistake, and just stamp it.
We've architected our infra so reverts take just a few seconds, so if something breaks it's easy to roll back. So far over the last few months we haven't had issues.
If the quality of the work lowers over time, I'll have to go back to more comprehensive reviews, but hopefully by then we figure out better ways to handle it.
deveval107@reddit
One problem with absolute line-of-code limits is tests: do you include them or not? We generally don't, since a 10-line code change may have a 500-line test (mostly just object seeding).
Foreign_Addition2844@reddit
I just click approve. Im not paid extra to be thorough.
GreedyCricket8285@reddit
Usually rubber stamped. Maybe a quick once-over. Folks put too much effort into PRs.
mixedCase_@reddit
Every project that I've seen doing it like this is a shit codebase, and every single codebase I've seen that can be categorized as decent or better has been either a product of no more than 4 elite-tier devs or does strict PRs.
Idk man, I know some companies just want to shovel shit but if it's a project worth giving a damn about rubber-stamping leads to hell.
RedditNotFreeSpeech@reddit
I stopped doing them. I would constantly catch so many issues. Not just code issues but requirement issues where teams didn't push back.
Pushing back during code review only causes drama and resentment. Since I can't stop caring, I simply removed myself. Let those teams do whatever stupid things they want. No longer my problem.
Horror_Jicama_2441@reddit
Me too. It's the good thing about being in a bad team/company, you can just stop reviewing and nobody says a word. You can just do it, and the world keeps turning. You have your reviews with your manager, the topic is not raised, and you get a great bonus. There is literally no difference.
Crazy! But it actually works.
RedditNotFreeSpeech@reddit
Yeah if no one else cares, why should I? I'll just do my best work on whatever I'm working on. If someone asks me to review something they did I'll happily do it but I'm not the code standard police anymore.
You want to commit code that you obviously never tested because it doesn't compile? Not my problem!
SawToothKernel@reddit
My boss actively discourages them.
Chromanoid@reddit
That is not good. Are you at least encouraged to do pair programming or to discuss design choices? A second opinion (from buddy checks in the military and other gear-heavy jobs, to peer review in science) is essential to excellence.
downtimeredditor@reddit
My general code review goes like the following
And if everything is satisfied then approve
DjChopper24@reddit
It's often hard to avoid large PRs, so I encourage a couple of practices to help manage them:
We've had much more success with the first approach.
FlailingDuck@reddit
Spend 3 days completing a ticket, wait 6 days for it to get through code review.
jl2352@reddit
I promote a few things people find unorthodox, but IMO it works. It works very well.
It works. It saves a fuck load of time. However some things to note:
When everyone is onboard and you’ve built up trust. It works really fucking well.
fake-software-eng@reddit
Stamps for days. Give stamps, receive stamps. We are all publishing AI slop code now anyways.
Brief_Praline1195@reddit
Open, check name, approve or reject
thunderbolt_gio@reddit
I've introduced automated PR reviews through our coding assistant, which acts as a first line of defence on huge PRs. Of course, we still require human reviews for actual approval, but this helps the reviewer speed things up a bit by quickly getting context on the changes and any critical areas of interest to review thoroughly.
WaltzHungry2217@reddit
yeah it's kinda hard. I saw a company ataraxy-labs building gitarsenal, to have an agent set up the environment and test against it like a human does. Not sure how good they are; they also just came out with their second version and I'm still playing around with it. Looks kinda cool tho.
Shookfr@reddit
First, an important note about our process: we don't view code review as the place to catch bugs or verify specs. While it happens, most of these are covered by tests (manual or automatic). This allows us to submit small PRs (<100 LOC) that do one thing well, even if they don't cover the full picture yet.
Secondly, since PRs are mostly technical, anyone can and should handle them. This helps our juniors catch up.
Our process is as follows:

- Create a design task for a US
- Design is validated by another team member
- US is split into tasks
- Each task corresponds to a PR
Once all tasks are done, the Story is tested by a peer and the product team.
teratron27@reddit
We’re aiming to add as much context to the PR description and (linear) tickets as possible. Then if it gets too much we do a pair review on a call
No-Analyst1229@reddit
LGTM
likeittight_@reddit
You forgot the emoji 🚀
graste@reddit
Whatever is necessary at the moment, depending on circumstances. Usually merge requests are reviewed vis-à-vis (office, video call), though the reviewer has usually already had a look at the diff/task. Level/depth of review depends on the risk of the change. Basic rules like changelog entries being present, code style, tests, etc. are checked via CI. Some projects have automatic review apps deployed when MRs are created (and stopped when the MR is closed), so anyone can have a look at the built (UI) feature, and some headless browser tests can run against them. This helps with DevOps/env changes as well, since deployments of those environments should work flawlessly after the other CI jobs (tests etc.) have run successfully.
Basically some sort of Ship / Show / Ask https://martinfowler.com/articles/ship-show-ask.html
twilightnoir@reddit
We have one approval required for team projects, two approvals required for the main monolith. We have a CODEOWNERS file that maps parts of the monolith to the team that owns each part; they're auto-included on the PR, and you'll need an approval from that team as well if your PR modifies code in their part of the monolith.
We have no mantra or guidance for PR size. That seems like a useless metric. It needs to be as big or small as it needs to be to fulfill the acceptance criteria.
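The auto-inclusion piece described above is typically a CODEOWNERS file; a hypothetical sketch (the paths and team handles are invented):

```
# .github/CODEOWNERS — the last matching pattern takes precedence
*                       @acme/platform-reviewers
/monolith/billing/      @acme/billing-team
/monolith/auth/         @acme/identity-team
```

With branch protection set to "require review from Code Owners", GitHub then blocks merging until the owning team approves.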
Goldziher@reddit
It depends on the org culture for me. I like doing them. Not for boring shit, but for interesting code or for where there is KT or teaching/learning
13--12@reddit
Unless you're doing something complex, we do post-commit code reviews. You commit and push something to main (with the whole CI run, of course), then ask for someone to review. If there are change requests, you just push those to main as well.
considerphi@reddit
Last place I was at... CI runs, anyone can review, no hard limit on lines. No AI.
I'm surprised at all the senior-only reviews, we let anyone review but trusted the process to ensure that if something complicated was happening, the implementer would know to get a review from a senior or someone who knows that part of the code base well.
sus-is-sus@reddit
If you have a nit, keep it to yourself or send them a DM. Do not put stupid comments in a PR. Only make a comment if there is a business reason, a logical error, or some other bug.
CallinCthulhu@reddit
No, if you think it needs to be addressed, leave it. It keeps a record, is a good way to spread best practices.
Just preface the comment with Nit:, and don’t block
sus-is-sus@reddit
No. You embarrass the person and you are just feeding your own ego.
kronik85@reddit
If your code is embarrassing do you not think others should learn how to avoid the same mistake?
CallinCthulhu@reddit
Anybody who gets embarrassed by nits in a code review isn’t long for the job
vijoc_@reddit
This depends on context. I've been in teams where we would typically offer suggestions in PR comments, which the author was free to ignore. Things like "Could X be a better name for the variable?" or "Maybe this could be moved to a named function?". So comments that are more about offering a second perspective or opinion on a subjective topic rather than pointing out a logical error or a violation of an agreed convention.
sus-is-sus@reddit
If the code runs and meets business requirements and is easy to read, just ship it.
vijoc_@reddit
That seems like a low bar for development. If the reviewer with a fresh pair of eyes finds potential opportunities to clarify the code, why ignore them? If the author after a second look disagrees, then fine. But more often than not, I've found this has resulted in the code becoming clearer and thus making maintenance easier.
sus-is-sus@reddit
I agree that simplifying the code is a reason for a comment. Nits are not though. Just shoot them a dm.
vijoc_@reddit
Ah, it seems we may have a different understanding of the term "nit" then. In a broader sense, I would consider any subjective suggestions to be quite "nitpicky". For truly egregiously "nitpicky" suggestions I preface the comment with "Nit:", e.g. "Nit: typo" or "Nit: capitalization". Still, I would point them out but not block the PR.
It may well be that I'm operating from a poor understanding of the term, not being a native English speaker myself.
sus-is-sus@reddit
No you understand it. But maybe you never worked with someone that has a comment on every single PR before.
vijoc_@reddit
I suppose this comes down to how we interpret the role of PRs and their reviews. To me, a PR says "here are my suggested changes to the code to add this feature / resolve this issue, any comments?".
If I review a PR, I will typically have comments. Usually they're suggestions that are not blocking the PR but more about what I thought of when reading the suggested changes. "This took some effort to understand", "The connection between this and X is not quite obvious" and things like that. These could be considered nits, but these are exactly the kinds of comments I would like to have on my PRs: someone with none of the context I've built in my head reading through the code and pointing out where I may have cut corners.
Logical errors should, ideally, be caught in tests. Quite commonly I'll point out missing test cases, or have them pointed out to me.
sus-is-sus@reddit
It really depends what the size of your team is. In startups we really don't have time for any of this.
vijoc_@reddit
I'd challenge that. If we're already slowing down delivery by doing PRs, I don't think pointing out such low effort fixes and even addressing them is much overhead. Say there are a couple of typos and a suggestion to refactor some logic to a function, how long does that take? And if the PR is approved already, do the quick changes and merge it afterwards.
But again, this assumes a certain context of tooling and processes around PRs and their reviews.
sus-is-sus@reddit
There is a bunch of time between when the PR is submitted, when it is reviewed, when it is fixed, and then re-reviewed. Especially on fully remote teams in different timezones.
On these type of teams there may only be 3 or 4 devs. Everyone is senior level. Velocity is lightning speed.
There are limited funds so time to market is the most important thing
vijoc_@reddit
It seems to me that the underlying assumption here is that every comment / update requires a new review. That's not what I'm advocating for. If a comment is a suggestion then the PR is approved, and the author may or may not address the comment before a merge.
Again, this depends on context and it seems to me that we're coming at this with different assumptions. If the process is such that any comment needs to be addressed and requires a new review then sure, find a way to deliver the non-blocking comments outside that process (e.g. as DMs like you suggested). If, however, we can approve a PR while adding comments as suggestions, then I would suggest doing so.
icenoid@reddit
You do PRs? We have mandated pair programming; the theory is no PRs, merge right to main.
GoTheFuckToBed@reddit
I am sorry, I am unable to put my internal guide into words. It is something like: understand the change, check if it works.
HoratioWobble@reddit
I always find the line limit thing a bit weird. In some projects, say React, 1500 lines can be a fairly mundane component and tests
throwaway_0x90@reddit
This sounds just fine to me if nobody is complaining about it.
But 1,500-line PRs? I don't see why that's allowed.
Inside_Dimension5308@reddit
I have realized that the more process we put in, more uncomfortable it gets.
The best attempt we have made is to assign senior developers to juniors as primary reviewers so that everyone contributes. And obviously, there is an OKR associated, where contributions and TAT for merging PRs are considered.
Other than that the process is what you have already mentioned.
Man_of_Math@reddit
AI tooling isn't a silver bullet, but I do think it could prevent 1,500-line PRs from existing. Configure a rule to have a product like r/ellipsis reject the PR if it's over a certain size. Because they're using LLMs, you can specify things like "don't include JSON config files in the line count".
I’m a founder at r/ellipsis and I get a lot of downvotes here but for this use case in particular, AI code review bots are very helpful
organic@reddit
aim claude at it
pplmbd@reddit
my previous company was great: MRs were small, AI was integrated but merely for catching minor errors, so humans could focus on the implementation design. Communication was clear; there were some cases where seniors argued on principle, but it was mostly civil.
But at my current one, I can't even rename a test case because the current format is "readable". Bruh, it's like reading a camel centipede. The worst part? The majority of the team agrees with me but won't call out their shit because of a "respected" minority.
siscia@reddit
What problem are you trying to solve?
In well-oiled teams, PR size is usually not a big issue; people make PRs as big as they need to be, but not bigger.
chikamakaleyley@reddit
my team is about 4, but i've also done this on a team of 8
Our tasks are broken down small enough, and the development pace is fast enough, that this works. We all kinda have the same skillset and are familiar with what each of us is currently working on.
Basically, once my task is in a state where it's almost there but needs some refinement, or, like, where I just need to add unit tests, I'll create the PR anyway, but as a draft. I'll post to the team channel and just casually ask for some eyes for a first pass. I don't really care about any of the post-commit checks failing; in most cases they will.
usually someone takes a look and anything they find here is usually things that would be of bigger concern. The nice thing is - I wasn't done anyway so it's easy for me to make these changes. There's possibly a second casual pass as i near completion, usually the change requests are minor by now. Once its complete, and actually "ready for review" - this final pass - is pretty low stress and the approval is easy to get
So this does a couple things for me: the big concerns surface early while the change is still cheap to rework, and the final approval becomes almost a formality.
And since the engineers are generally okay with me doing this, I do the same for them, and it just works.
opideron@reddit
The standard I use is that my job as a code reviewer isn’t to QA the pull request, but just to make sure it’s readable and doesn’t contain obvious mistakes.
If it's a 1500-line PR, then I look at the quality of the changes. Sometimes 1500 lines make sense, and our job as senior SWEs is to discern whether it's 1500 lines of slop or not. Usually there is a good reason it's 1500 lines. For example, if it's a brand-new feature that will get QA'd anyway, I'm just looking at the big picture of how the files/methods are added and whether they are consistent with what is already there. I'm not looking for nitpick-level stuff, but I am looking for bad overall patterns/architecture.

But if it's 1500 lines for something simple, such as adding a field to an API response, I'll be much more critical unless it's justified. How could that be justified? Sometimes badly done legacy code requires updating 20+ files just to add a field, because each and every file/object needs that field added. Refactoring typically isn't an option, as that would be an even larger PR.
The other factor to keep in mind is that if certain people keep submitting awful PRs, that might be a sign to not keep those people on your team. Unless they're junior and you're actively training them to submit better PRs, of course. The point of PRs is to figure out who needs to improve their code quality and provide training for that. If you're continually dealing with awful PRs, that's an anti-pattern.
_giga_chode_@reddit
LGTM - approved
RandyHoward@reddit
We run a lot like you do. CI and AI review first, then someone with write permissions has to approve. We all have write permissions though, so anybody on the team can approve. We're all pretty experienced, most of us with 10+ yoe. We have no rules on PR size, but we don't typically see giant PRs. Our tickets usually result in smaller PRs because they're scoped well.
Morazma@reddit
I open the branch in Windsurf (AI code editor) and save the results of a git diff to a file, then ask Claude/Gemini to review the changes with reference to the new files.
Then I review their points and discuss them with the AI if I think they're wrong or I need more clarity.
Then I do my own quick review of general patterns used etc.
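One practical wrinkle with this workflow: on bigger branches the diff won’t fit in a single prompt. A rough sketch (the split strategy is just one obvious choice) that breaks a unified diff into one chunk per file, so each chunk can be reviewed separately:

```python
def split_diff_by_file(diff_text: str) -> list[str]:
    """Split a unified diff into per-file chunks.

    Each file's section in `git diff` output starts with a
    'diff --git' header line, so we cut on those boundaries.
    """
    chunks: list[str] = []
    current: list[str] = []
    for line in diff_text.splitlines(keepends=True):
        if line.startswith("diff --git") and current:
            chunks.append("".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("".join(current))
    return chunks
```

Feeding the model one file’s diff at a time also tends to keep its comments anchored to concrete lines instead of generalities.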
danknadoflex@reddit
Just type LGTM
evergreen-spacecat@reddit
PR size is not just a line count. The PR should cover the entire context of the change. 1500 lines is fine if it’s basic changes to DTOs or something.
CyberWrath09@reddit (OP)
Yeah, that makes sense, context really does matter more than the raw line count. I think part of our struggle is that some of our big PRs aren’t just DTO churn, so they end up harder to review than they look on paper.
Curious how your team decides what’s “big but fine” vs “big and needs to be split”?
Rschwoerer@reddit
Why is the big PR so big? Is it a large task or story that could have been broken down? Or is it complicated because it’s complicated? I’m ok with some large changes if they make sense, sometimes it’s harder and less clear to break up a single change just for the sake of a review.
dash_bro@reddit
In general, if your process is working, I think you should stick to it.
Unless there are issues, or you predict there will be, there's no need to fix anything, imo
CyberWrath09@reddit (OP)
I’m trying not to fix things that aren’t actually broken. We’re mostly fine day-to-day, it just feels like some cracks are starting to show with bigger cross-team work picking up.
Do you have any early warning signs you look for before a review process actually becomes a problem?
dash_bro@reddit
Usually someone short-circuiting a process or procrastinating on a task, then AI-slopifying it at the last moment.
I've found package imports in the middle of a file, inside a damn try...except, because a dev "forgot" to move the import to the top of the file. Pure AI slop.
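That particular smell is cheap to catch mechanically, by the way. A small sketch using Python's `ast` module that flags import statements nested inside functions or try blocks (whether you want to ban them outright, e.g. for optional dependencies, is a team call):

```python
import ast

def nested_imports(source: str) -> list[int]:
    """Return line numbers of imports that are not at module top level."""
    tree = ast.parse(source)
    top_level = {id(node) for node in tree.body}  # direct children of module
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, (ast.Import, ast.ImportFrom))
        and id(node) not in top_level
    ]

bad = "import os\ntry:\n    import json\nexcept ImportError:\n    pass\n"
print(nested_imports(bad))  # [3]
```

A check like this can run in CI or as a pre-commit hook, so the slop never reaches a human reviewer.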
If there's cross team collaboration I usually sit down with their lead and see if we agree on conventions and ownership for whose code goes where and who signs off on what.
It can be a nightmare if people keep randomly updating their conventions without letting others know, and you've got headstrong engineers on either team.
We've also piloted with coderabbit for code reviews as a first pass before code gets pushed, it seems to have a net positive effect so far. You might want to give that a shot!