Is a large, mechanical PR better than a smaller, more complex one?
Posted by rhyso90@reddit | ExperiencedDevs | 23 comments
I ran into this while reviewing a PR and wanted to check how others approach it.
The PR was refactoring a shared component in our monorepo to align with an updated design system. As part of that, some variant names needed to be renamed (e.g. updating existing variant values to match new design tokens).
This meant updating usage across a large number of files. The change at each call site was simple and mechanical (just switching to the new variant names, e.g. error → danger), but it touched a lot of places.
As part of the review, there were two possible approaches:
1. Apply the change directly everywhere
- Many small, explicit updates across the codebase (rename at each usage)
- Each change is trivial and easy to review in isolation
- No additional logic introduced
- End state is clean and fully aligned with the design system
2. Introduce compatibility/abstraction to reduce file changes
- Fewer files touched in the PR
- Keep supporting the old variant names
- Add mapping logic inside the component (e.g. mapping old variants to new ones)
- Additional conditionals/tests to support both old and new APIs
- Defers full migration to follow-up work
- Introduces indirection where the value passed at the call site isn’t the value actually used internally
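To make the indirection concrete, the mapping layer in option 2 might look something like this. This is a hypothetical sketch; the names (`Variant`, `LegacyVariant`, `normalizeVariant`) are illustrative, not taken from the actual codebase:

```typescript
// New design-system variant names.
type Variant = "danger" | "warning" | "success";
// Old names kept alive for backwards compatibility.
type LegacyVariant = "error";

const legacyMap: Record<LegacyVariant, Variant> = {
  error: "danger",
};

// Inside the shared component: translate old names to new ones,
// optionally warning so callers know the old API is deprecated.
function normalizeVariant(v: Variant | LegacyVariant): Variant {
  if (v in legacyMap) {
    console.warn(`Variant "${v}" is deprecated; use "${legacyMap[v as LegacyVariant]}".`);
    return legacyMap[v as LegacyVariant];
  }
  return v as Variant;
}
```

This is exactly the indirection being described: a caller passes `"error"` but the component internally renders `"danger"`.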
My instinct during the review was to push for (1), even if it means a large file count, because:
- reviewing many simple, obvious changes feels easier than understanding new abstraction/mapping logic
- it avoids introducing temporary complexity into a core shared component
- it ensures we fully migrate to the new design system immediately
- it doesn’t rely on follow-up PRs that may or may not happen
- The migration is type-safe and self-validating. Any missed updates are surfaced by TypeScript at compile time
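The type-safety point can be illustrated with a minimal sketch (names are hypothetical): if the component's `variant` prop is a string-literal union, removing the old name makes every un-migrated call site a compile error, so the build itself verifies the migration is complete.

```typescript
// After the rename, "error" is no longer a member of the union.
type Variant = "danger" | "warning" | "success";

interface AlertProps {
  variant: Variant;
}

function alertClass(props: AlertProps): string {
  return `alert-${props.variant}`;
}

alertClass({ variant: "danger" }); // ok
// alertClass({ variant: "error" }); // compile error: any missed call site fails the build
```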
A counterpoint raised was that it would cause a large file count (around 200 extra files) in the PR. While this is true, they're all one-line (one-word, really) changes that can easily be checked off as 'viewed', whereas the alternative was adding more lines of code that increase cognitive overhead and require deeper validation than straightforward, mechanical changes.
Curious how others think about this in a review context:
- Do you optimise for fewer files changed, or for lower per-change complexity?
- Where do you draw the line between acceptable duplication vs introducing abstraction?
- Are temporary compatibility layers in shared components worth it, or do they tend to stick around and become tech debt?
Interested in how this is handled on other teams/codebases.
vilkazz@reddit
If a big mechanical thing does a single change over codebase as required by a migration, I don’t see anything bad with it.
Such a change should already be aligned beforehand and reviewing it should be as easy as scanning all the changes making sure they are all in place. Boring but doable.
IMO big PRs aren’t bad because they are big. Complex, convoluted, unreadable PRs with multiple underlying changes are bad, and those are usually big.
Mabenue@reddit
This isn’t what humans are good at. If we can run a tool that can verify the changes are correct, there’s no good reason for a human to scan each line of code; it’s pointless.
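The kind of tool being suggested can be very small. As a hedged sketch (the function name is hypothetical): for a pure rename, substituting the old token for the new one in each changed line must reproduce the new line exactly, so anything extra that snuck into the diff fails the check.

```typescript
// Confirms a before/after line pair differs only by the agreed mechanical
// rename (e.g. error -> danger), so a reviewer need not eyeball each line.
function isPureRename(before: string, after: string, from: string, to: string): boolean {
  // Replacing every occurrence of `from` with `to` in the old line
  // must yield the new line exactly; any other edit fails.
  return before.split(from).join(to) === after;
}

isPureRename('variant="error"', 'variant="danger"', "error", "danger"); // true
isPureRename('variant="error"', 'variant="danger" onClick={x}', "error", "danger"); // false
```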
HiSimpy@reddit
Useful framing. The win is classifying PRs by risk, not just size: mechanical, behavioral, and architectural. Then assign matching review depth and owner. That keeps velocity on low-risk changes while preserving rigor where regressions actually happen.
ManyInterests@reddit
As you present it, the smaller and simpler approach is best. And number of files is not important for what makes a change big or complex. You can have very small code changes with complex interactions and big potential impacts that require more extensive review than a simple change that touches hundreds of code sites. At face value, I'd take your approach.
There isn't one answer here, though. There could be more considerations for an approach here than are being presented in your post. Sometimes it is absolutely critical and non-negotiable to keep compatibility layers or that one-step migrations are not practical for one reason or another that may not be about the code at all. I would take a step back and make sure you're considering the full picture to make sure you're not tunnel-visioned on the wrong details.
ElderberryNatural527@reddit
The setup you’ve presented doesn’t make any sense.
You’d do option 2 if this was an API or library, the function signature changed, and you wanted to maintain backwards compatibility. You’d mark the old function as deprecated, log a warning when that unused parameter is defined. Then you’d incrementally migrate the callers over to the new method.
If it’s just a mechanical fix to code that doesn’t have external consumers, then the answer might be both options in consecutive commits. Atomic commit zealotry irritates me, but the behavior change and the mechanical refactor should definitely be in separate commits. Even if you squash commits when the PR is merged.
Basically, at the core of it, you’ve presented a false dichotomy.
LittlePrettyThings@reddit
Why not do option 1 but in more PRs?
So instead of 200 files in 1 PR, 50 files in 4 PRs. It makes the PRs themselves less cumbersome to review (sure, they're menial changes, but it's still cognitive load and patience on the reviewer). That way it can also be split among more reviewers, if that's an option.
Flashy-Whereas-3234@reddit
In the context of a large monolithic codebase, I sometimes do both.
The mechanical sweeping change of #1 is ideal. If there are variants then I'll make each variant its own commit and mention in the PR that you can review it in series. I'll always do this.
If I have an esoteric evil codebase and I can't guarantee I've sniped all calls (maybe there's a history of magic), I'll introduce some other mechanism to log/adapt/compensate for that evil.
If I have multiple teams crawling around in the code or a lot of inflight work that might also contain a usage - well I'd hope there are tests but I have trust issues - I'd also introduce a linting rule, or the Adapter with a warning to indicate, hey, stop doing this, it's already deprecated.
The worst are inflight PRs where the tests all pass but when the two branches meet, shit happens.
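The linting-rule idea above can be sketched with ESLint's built-in `no-restricted-syntax` rule. This is an illustrative fragment, not a drop-in config: the selector and message assume a JSX-aware parser and a component whose deprecated value is the string `"error"` on a `variant` prop.

```typescript
// eslint.config.ts (flat config) — illustrative sketch only.
export default [
  {
    rules: {
      "no-restricted-syntax": [
        "warn",
        {
          // Flags JSX like <Alert variant="error"> so inflight work gets
          // nudged toward the new token before the shim is removed.
          selector: 'JSXAttribute[name.name="variant"] Literal[value="error"]',
          message: 'Variant "error" is deprecated; use "danger".',
        },
      ],
    },
  },
];
```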
smoss_benchwright@reddit
Go with 1, I've seen a "temporary bridge" live in production for months several times.
If it really is as simple as you say for each individual line change then reviewing it should be simple enough and you get it over and done with in 1 shot.
Leading_Yoghurt_5323@reddit
i’d strongly prefer (1) here. 200 boring changes is usually cheaper than adding compatibility logic that everyone now has to mentally carry forever
cjthomp@reddit
Great question, ChatGPT
ZukowskiHardware@reddit
No, smaller always better.
teerre@reddit
I mean, in this contrived example the first option is obviously better. But in reality there's no situation where you cannot rename a portion of the sigils per PR. So, as usual in this type of question, the answer is smaller and simpler; no need to choose.
dash_bro@reddit
Option 1 if no risk of breaking and tests actually catch this sort of thing with unintended effects
Option 2 otherwise with reviewers owning those pieces of code. Ideally you can move from option 1 to 2 handily if you do multiple commits for the changes and then cherrypick commits and raise parallel PRs, all with reviews by people who actually interact with them
Then squash and merge away : hope your CI system is robust ~
ContraryConman@reddit
The large PR can work. On my old team, we used to do the following:
- Write a design document for the refactor that would need to be approved like a pull request would
- When you open the PR, basically say "this is an implementation of the previously agreed upon refactor"
- Optionally, if it's very large, we'd do a live code review. Just take 15 minutes of the team's time walking through the changes live and taking any major concerns, so that any comments on Gitlab/Bitbucket are kept to minor nitpicks
throwaway_0x90@reddit
Subjective.
Also, there's a third option.
Comprehensive-Pin667@reddit
2) sounds like a way to hell. The abstraction you build will end up not being temporary, and you now have two APIs and some extra complexity. It's fine to change many files at once if it's a mechanical change like this that your IDE can probably do. The chance of introducing an error this way is also significantly lower than with the abstraction layer idea.
edgmnt_net@reddit
I also agree with the first approach. But in a more general sense, the changes should be reviewable. The Linux kernel people use semantic patches via Coccinelle (added to commit descriptions) for exactly such purposes. They are machine-checkable so you can be confident nobody snuck something else in or made a mistake. An alternative for other ecosystems is to describe the change and let reviewers duplicate them with an IDE by following instructions, then check that it results in an identical diff (or no diff versus the change).
hibbelig@reddit
If it is possible to split the change in two, then that might be a good idea. One of the two changes would be a small, manual change with a low line count, and the other would be one that touches many files but can be done with an automatic refactoring.
If the developer tells me: I used the IDE to rename this method, or to inline that method, or something like that, then I wouldn’t have to review each line carefully. Because I believe that the IDE does it right.
But if I discover that the developer snuck in a different change into the supposedly automatic refactoring then I stop trusting them.
Whether these two commits are in the same PR is a different question, that might depend on team culture.
If the programming language doesn’t allow such refactors, then this approach can’t be used.
VRT303@reddit
PR size doesn't matter. Commits do, and how easy they are to review.
marzer8789@reddit
1 all the way.
A PR should be either:
- "wide and shallow", i.e. touches potentially many files but only makes trivial/mechanical changes in each
- "narrow and deep", i.e. large, complex changes in only a few places
Anything else is just impossible for humans to review in any sustainable way
ivancea@reddit
You don't optimize a PR based on file metrics, you do it the way you should, period. You only think about files when splitting PRs or making a step-by-step approach. Or, of course, based on product needs (POCs, MVPs, etc).
So here, I don't think 200 files is too much, I've seen bigger mechanical PRs. The only important part in a PR like that is having a very clear, localized and highlighted "real" change so reviewers can distinguish between logical and mechanical changes.
Your second option looks like tech debt, so they're not really equivalent. I would only go with that as a first step before the mechanical PR, if there are many other contributors that would otherwise see conflicts.
And don't forget to pass an agent over a PR like that, if allowed by company policies. They're fast reviewing many files, and they're good at catching anything that's unexpected.
AnAge_OldProb@reddit
If it’s literally just a rename (or anything that’s basically a codemod) then just do both at the same time. I’d split and migrate it if there are durable changes eg new db schema required or complex refactors (or even just major deletions).
VinegarZen@reddit
If there is no risk of something breaking with option (1), that’s my preference. Splitting it up is only valuable if there is a risk of something breaking.