Is the norm now that PRs are basically rubber stamps
Posted by Sea_Cap_2320@reddit | ExperiencedDevs | View on Reddit | 72 comments
I started a new job at a startup about four months ago, and the whole process is now an "AI-first" approach being pushed on us: we should just vibe-code all of the requirements and the apps. The startup is self-sustaining, it's cash-flow positive, and it's looking to get some funding in a few months for expansion, but holy shit it's bad.
The startup had two developer founders who left; their code is a mess, and I mean a complete mess, but I understand it from their point of view: they needed to land the customer, so they took shortcuts in typical startup fashion.
Then a CTO joined and pushed for a complete rewrite, which happened after about a year, and now we are going for the third rewrite (hurray!). The principal engineer is coding, the CTO is coding...? Seniors get to code, but they don't get to design anything and they must ask the CTO for implementation details?
Anyways, the PR review is basically just LGTM-ing Claude-generated code. I don't understand: is this the norm now, or have we just gone insane? We have Claude write the code, Codex review it, a human rubber-stamps it or runs it through Gemini to appear smart and raise some issues, then Claude writes the tests and it's just merged. Is this the norm now? Is it a one/two-man show where developers are just orchestrating agents, is that what it is now?
Melodic_Crow_3409@reddit
They should not be rubber stamps. Otherwise just let people push to the main branch. That’s what rubber stamping PRs is.
mamaBiskothu@reddit
This is of course poor judgement on my part to share this opinion in this inane sub, but there's actually a middle ground.
For us, PR approval doesn't mean you have fully vouched for the code in the request. It's more of a "yeah, that generally makes sense" sanity check: overall, is the change what we need right now or not? Sometimes it's not, and we just say go back to the drawing board. Sometimes the reviewer can decide to look at the code to check whether certain nuances are handled correctly.
The core responsibility for code quality lies fully with the committer. If they fuck up, QA will catch it, or internal users on staging will catch it. If they fuck up more than once (which has happened once), they go on rubber-stamp probation until they change.
As long as the interaction is between two senior engineers, this is the contract. If it's an engineer and a non-tech person vibing something up (which we generally don't allow for product features, only for internal dashboard features), then the reviewer takes the traditional full responsibility.
This works very well for us. We are much more productive and no one wastes time reading AI slop.
MaleficentCow8513@reddit
Yep. I think a lot of people don’t have premerge test jobs or any type of automated QA. If you have those things, it gives huge confidence in your team’s merge requests
MrJakk@reddit
At least you can still test in staging most of the time
Melodic_Crow_3409@reddit
What is this “test” you speak of?
Works on my box. Ship it!
thephotoman@reddit
This is why we have Kubernetes: so that we can just ship what's happening on your machine.
Careful_Ad_9077@reddit
What do you mean works on my box? The change is simple, just a +1 here, concatenating another variable there. We don't need to test that, or even check if it compiles.
Sea_Cap_2320@reddit (OP)
Oh it happens. The CTO sometimes changes infrastructure and pushes to main. Another guy came in, refactored the infrastructure, and in doing so accidentally dropped the staging environment and migrated a staging secret to prod. We only found out because a customer noticed their stuff was broken. Devs connecting to the prod environment to run migrations...
SolarNachoes@reddit
So it’s the norm until they do even worse and destroy a multi-million dollar product. Then what?
AAPL_@reddit
there’s CI but yea
Electrical_Try_634@reddit
Feels like it. The actual implementation of code sped up to a significant degree, so now the bottleneck is humans reviewing and testing the code. Depending on your product that might have always been the bottleneck.
Dipshit middle managers told dipshits in the C-suite that we can shotgun 10x as many lines of code into a repo as we could before AI. Dipshits in the C-suite heard that as "we can ship features 10x faster."
I want to ask my CTO if I gave him a personal helicopter that resulted in a 90% reduction in his commute, could we expect 10x the overall performance out of him? Or are there bigger bottlenecks to getting work done than one narrow part of his actual day-to-day accelerating?
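The helicopter bit is just Amdahl's law: when only one slice of the work speeds up, the overall gain is capped by how big that slice is. A quick back-of-the-envelope sketch (the 10% commute share is an invented number for illustration):

```python
def overall_speedup(fraction_accelerated: float, local_speedup: float) -> float:
    """Amdahl's law: total speedup when only part of the work gets faster."""
    return 1.0 / ((1.0 - fraction_accelerated) + fraction_accelerated / local_speedup)

# Commute is, say, 10% of the CTO's day; the helicopter makes it 10x faster.
print(overall_speedup(0.10, 10))  # ~1.10x overall, nowhere near 10x
```

The same math applies to AI codegen: if review and testing dominate the cycle, a 10x faster "writing code" step barely moves total throughput.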
mirageofstars@reddit
Why did you do a rewrite if you guys aren’t gonna focus on code quality/etc?
hell_razer18@reddit
Just be careful about whose code it is. Understand their role. Just because it's so easy now, a VP jumped in to code even though it's not his main responsibility. AI made the bridge so easy, but don't lose track of what your real job is.
And be ready to make the architecture easy to replace, or pluggable, because everything can change quite easily these days. As long as the foundation is solid, good to go. If it's not, it might be problematic down the road.
throwaway09234023322@reddit
I had an idiot in management in an AI meeting asking if people are still reviewing PRs and why we are still reviewing PRs. 😂😂😂 I laughed my ass off on mute.
Majestic-Watch-2025@reddit
Wow. I'm management and I would have laughed my ass off off mute
TheMightyTywin@reddit
That’s actually nuts. Even if they let AI review the AI-generated code, they would discover that AI-generated code is FAR from perfect.
AI-generated code is more like a rough draft. The happy path is usually fine; anything else, you're rolling the dice if you don't review.
TooMuchTaurine@reddit
To be fair, human code can be similar depending on the human.
HatWithAChat@reddit
Which is also why we review human code
Sea_Cap_2320@reddit (OP)
You have been downgraded to a code monkey, just churn out code, that is the only thing that matters
mxldevs@reddit
Worse, we would be the code monkey's secretary that takes notes for the code monkey to churn.
fsk@reddit
Worse, you are there to take the blame when the monkey makes a mistake, even though the workload is so high that you can't check everything the monkey does.
ProbablyBsPlzIgnore@reddit
Just a tip if you want to do it that way: put the tests and the application code in separate repositories, otherwise you're just telling Claude to add some green tests for the form.
sharpcoder29@reddit
Interesting idea, not sure how much I like it, but I will chew on it.
hypothetician@reddit
I’ve rejected multiple PR’s from principal engineers in the last couple of weeks because they were full of hallucinated api calls.
It’s not a great time to be easing off on the review process.
sharpcoder29@reddit
How are people submitting PRs without at least testing locally? Especially a principal.
FastHotEmu@reddit
The emergence of LLMs has made it awfully clear that most developers don't truly care about quality, they just want to see their PR merged so they can finish the task on the board.
I am not sure where LLMs are going to learn good programming practices if we are producing verbose garbage code.
throwaway_0x90@reddit
No,
Because at some point, a human will have to take the blame if there are outages or bugs. That human is the one that better make sure absolute garbage isn't getting merged if they want to keep their job.
lezojeda@reddit
With LLM-driven development I think PR authors should be the ones responsible, especially in companies where everyone just stamp-approves PRs. It shouldn't be this way; I don't like it, and I miss when code in master was everyone's responsibility, since the review process was at least more serious than what it's become today, at least in my company and probably in OP's.
throwaway_0x90@reddit
That's how it is at Google. We've been warned that the author on the PR is more than 50% responsible for that code.
lezojeda@reddit
It's a shame that it has come to this really but it doesn't surprise me. PRs are for checks and automated tests nowadays.
lezojeda@reddit
The same has been happening where I work since last year. I feel like I'm the only one who cares about not doing stuff like `as any` in TypeScript, or about trying to reduce LOC, considering that AI takes more time the bigger the codebase is. And I feel like I'm just annoying at this point since everyone else is just LGTM-ing every PR.
mechkbfan@reddit
Get ready for your fourth rewrite
circalight@reddit
If it's the norm, let me know what company you work for. Some hackers want to know.
lawrencek1992@reddit
I think all of us are writing code with agents now at my company. We have an AI reviewer which automatically runs on PRs and gives feedback based on rules we defined and maintain, but it’s not enough for a merge. You have to get a human approval. I’ve written a PR review command for Claude which many of us use, but it’s not a hands-off thing. It walks you through each piece of proposed feedback, and once you choose which ones to keep, it asks for review type and what you want to say before leaving a review for you. While it’s running I also take a look at the PR in GitHub, potentially adding additional comments.
I don’t think it makes sense to take meaningful human review out of the loop. I think using agents to augment your own review ability is great—Claude has caught things I’ve missed. But at the same time I’ve caught things Claude missed (usually based on system and business knowledge it didn’t have context for).
ADDSquirell69@reddit
You should be looking for another job because this startup ain't going to make it
The_Real_Slim_Lemon@reddit
I work at a large company and volunteer at a larger one - the vibe is very much “you are responsible for any code with your name on it”. Both are moving towards AI first, but both are very much insisting on humans being in the loop at all points
Upstairs_Snow5195@reddit
I have had a hard time lately keeping up with the sheer volume of code being produced, tbh. God forbid I request changes or take more than 10 minutes reviewing a +2000/-700 line PR; I immediately get management harping on velocity.
Sea_Cap_2320@reddit (OP)
Wait, management pushes back on you if you review code?
Upstairs_Snow5195@reddit
Yeah... it's mostly "manager ICs" that get pissy if I actually give them a real review rather than a quick green button. Very unchill power dynamic.
attrox_@reddit
It's exhausting, especially when the one who opened the PR didn't self-review it. I doubt some of them even run their PRs through the AI reviewer.
bossier330@reddit
You will very quickly end up in a non-understandable and increasingly difficult to iterate codebase if you rubber stamp PRs. If you don’t understand the intent and high level execution of a PR that you approve, you’re letting the agents dictate your architecture, and it will fail.
me_myself_ai@reddit
Has been for a long time in companies with bad Eng culture, sadly. Many, many managers think that reviews are best handled autonomously by the team, which inevitably means that they’re not factored into work estimates and are frequently rushed.
AI is honestly a nice fallback for the worst of the worst. At least an AI review can catch obvious shit, and apply the relevant style guide(s)
defenistrat3d@reddit
Review PRs with LLM assistance. One of the best uses: when you know the author of the PR doesn't understand a concept, you can have Claude write a detailed explanation in the PR on your behalf. Just read it first. It's a time saver and it spreads knowledge, which is exactly what PR reviews are for.
Void-kun@reddit
No?
grizzlybair2@reddit
I rubber-stamp half my team's PRs, but there also has to be evidence of local testing where applicable. The other half needs a more thorough review. My PE had to fix one of my executive director's 100+ file PRs last week. It took them 50+ commits to fix the PR, since it originally just got rubber-stamped.
AbstractLogic@reddit
In the fast-paced world of startups, time to deliver is more important than stability. You're in a race against time. There is only so much runway (money) before the whole thing shuts down.
In a stable corporate environment with stable customers and revenue then a stable feature set and stable releases are more important.
It’s a spectrum.
binarycow@reddit
What's funny is my organization has a (mostly) stable environment and has stable customers.
We threw all that away so we can be more like a startup.
MrJakk@reddit
Most of the people on my team are pretty good, or I don't know the context of their change, so it's practically a rubber stamp. But for those who I do or did have issues with, their PR basically gets a response that implies, or straight up says, they aren't changing it.
When I first joined the team the staff (now principal) brought some good ideas but just loved stuffing everything in if else statements instead of leaving the function early when issues are found. Super hard to read.
Reorg and he’s off the team for a while. Another reorg and he’s back and his if else are stronger than ever. So annoying.
All that to say I guess a lot of the time PRs feel like that anyway and then there are power dynamics like senior vs principal and nobody really wants to get into the discussion.
So I’d assume the AI is worse. I am only just starting with it so I’ll see I guess.
binarycow@reddit
Before I review, I get the context.
Vinegarinmyeye@reddit
I was made redundant at the end of 2023, and I've had some other life priorities since - I've not really been working in this space since before "vibe coding" became a term. (It's going to be kinda weird to readjust - a lot has happened).
I've worked places where PRs were kinda a formality, we all trusted and had enough confidence in each other's ability that you'd give it a quick once over, push it to staging. Gotta get them JIRA points down.
I've also worked places where I'd do 10 minutes of work and spend the following few months in change control meetings.
I've personally not experienced it, but I reckon I'd feel pretty uncomfortable if the org was heavily reliant on an AI agent, AND had prod access, AND no person was reviewing PRs.
Strikes me as the kinda powder keg that I wouldn't want to be stood anywhere close to.
OAKI-io@reddit
rubber-stamp PRs plus ai-first pressure is how the mess compounds. the review bar probably needs to move from “did you write this by hand” to “can you explain the diff, the risks, and how it was verified.” if nobody owns that, the codebase just turns into archaeology faster.
mrothro@reddit
It's good that you're doing cross-model review, that definitely catches more than just letting a model review its own output.
Personally, I use the agents to do directed reviews: sure, I'll have it do a general pass, but I will also ask it to verify certain things specifically. For example, I might ask about failure modes or bounded contexts. Or hot-path DB calls.
But I will also hand-review "sensitive" things, like auth handling. I don't need to look at every CRUD operation, but I need to know 1) it is secure and 2) data won't be corrupted.
This is really the only way you'll be able to keep up with the flood of code.
x-jhp-x@reddit
No.
GongtingLover@reddit
It's a rubber stamp until there is a production issue lol
NoCardio_@reddit
More like a gauntlet at my company. We have some very bright principals, though.
spez_eats_nazi_ass@reddit
I yell at developers when I submit a PR and it just goes to complete 2 seconds later.
SoulTrack@reddit
What I've been doing is putting Claude headless in my GitLab pipeline. We prompt it with questions related to requirements, code style, and team preferences, and we ask it to find any potential design or implementation issues. So our MR process is more automated nowadays and feels less like a rubber stamp. I hate reviewing code and this has helped a ton in finding design issues.
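For anyone curious what that setup might look like, here is a minimal sketch of a GitLab CI job that runs Claude Code in headless (print) mode over the merge request diff. The job name, file names, and prompt are all invented for illustration; it assumes the `claude` CLI is installed in the job image and `ANTHROPIC_API_KEY` is set as a masked CI/CD variable:

```yaml
# Hypothetical GitLab CI job: headless Claude review of a merge request.
ai-review:
  stage: test
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
  script:
    # Diff the MR branch against its target branch (merge-base diff).
    - git fetch origin "$CI_MERGE_REQUEST_TARGET_BRANCH_NAME"
    - git diff "origin/$CI_MERGE_REQUEST_TARGET_BRANCH_NAME"...HEAD > mr.diff
    # -p (print mode) runs Claude non-interactively; the diff is piped in as context.
    - claude -p "Review this diff for requirement gaps, code style and team preference violations, and potential design or implementation issues." < mr.diff | tee review.txt
  artifacts:
    paths: [review.txt]
```

Surfacing the result as a job artifact (or posting it as an MR comment via the API) keeps the AI feedback visible without replacing the human approval step.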
colorblooms_ghost@reddit
This is the way AI is driving the industry, yes. Before AI, code reviews could be a significant investment of time but were not that much of a bottleneck. Say you spent time 4:1 writing code vs reviewing it/waiting on reviews. You could get a potential short-term speedup of 25% by pushing straight to main, which of course you would almost immediately lose, and then some, to increased defect rates and tech debt.
But if suddenly code is getting slopped out 5-10x faster than before, then review becomes a very significant bottleneck. If you think Claude Code should be making your engineering output 5-10x, the only way to achieve this is by cutting quality control. And that only really makes sense if you think Claude is so good that it either doesn't need quality control, or will fix issues so quickly that you don't need to be too concerned about the defect rate.
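The arithmetic in that comment, made explicit (the 4:1 write-to-review split and the 10x generation speedup are the figures from the comment; the rest is just a toy model):

```python
def cycle_time(write_units: float, review_units: float, gen_speedup: float = 1.0) -> float:
    """Time to ship one change: writing (possibly AI-accelerated) plus review."""
    return write_units / gen_speedup + review_units

before = cycle_time(4, 1)       # 5 units per change
no_review = cycle_time(4, 0)    # 4 units: the ~25% short-term gain from skipping review
with_ai = cycle_time(4, 1, 10)  # 1.4 units: review is now ~71% of every cycle
```

Once generation is 10x faster, review goes from 20% of the cycle to over 70% of it, which is exactly why it becomes the pressure point.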
Personally I think you have to suffer AI psychosis to believe the tools are this good, but there is some sort of internal logic to it.
Stellariser@reddit
The solution to getting ‘features’ out faster has almost always been to cut quality in different ways.
MusicGusto@reddit
Yes, pretty much. I used to carefully review as much of my colleagues’ work as I could while still maintaining my own productivity. I was someone who raised the bar on my team when it came to code review.
However, with the sheer volume of large, half-baked PRs being produced by AI, I’m finding it impossible to keep up. I have to be much more selective about what I choose to review in detail. Sometimes that means focusing on high-level details, like architectural choices or system-level impacts, rather than code quality. It’s not sustainable to constantly spend more time reviewing a change than the author spent actually writing it.
PeteMichaud@reddit
Don't worry about process here. They are going to fuck around until they crash and burn, just ride it out and never stop networking and interviewing. If you just casually keep yourself out there, then in a year or so some amazing job will show up and then you move over there. Don't put your heart into a shitty place that won't appreciate you, just punch your ticket and keep your head down while you look.
Physical-Compote4594@reddit
Here’s my take. If you are generating code N times faster, it needs to have a maintenance burden of 1/N otherwise you’re in the code equivalent of a gambler’s ruin scenario.
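A toy model of that claim, with all numbers invented for illustration: if every shipped unit of code adds a recurring upkeep cost, then producing N times more units per sprint multiplies the standing maintenance load by N, unless the per-unit burden also drops to 1/N.

```python
def maintenance_load(sprints: int, units_per_sprint: float, burden_per_unit: float) -> float:
    """Recurring maintenance hours per sprint after shipping for `sprints` sprints,
    assuming each shipped unit costs `burden_per_unit` hours every sprint thereafter."""
    return sprints * units_per_sprint * burden_per_unit

# Baseline: 10 units/sprint at 0.5h upkeep each -> 100h/sprint after 20 sprints.
baseline = maintenance_load(20, 10, 0.5)

# 10x generation with unchanged per-unit burden -> 1000h/sprint: the ruin scenario.
ten_x = maintenance_load(20, 100, 0.5)

# 10x generation only stays level if the per-unit burden falls to a tenth.
sustainable = maintenance_load(20, 100, 0.05)
```

Eventually the recurring load eats the whole sprint, which is the "gambler's ruin" point: you can no longer afford to ship anything new.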
teerre@reddit
I never understood this take. If you now have much more time because allegedly LLMs are writing the code, reviews should be much better. Ask the clanker to generate a single-use debugging tool for this one PR. It's no work at all, right?
Electronic_Yam_6973@reddit
I stopped caring when management decided that we only hire contractors that rotate every 6 - 12 months instead of hiring FTE’s that care and take ownership of the code.
Manfluencer10kultra@reddit
Until the Russians and U.S. send nukes into the stratosphere (weekly prayer for me), yes.
GoodishCoder@reddit
Personally I still review PRs. There's no value add for PRs if you're not actually reviewing them
Early_Rooster7579@reddit
After claude and codex review, sure.
sweaterpawsss@reddit
Kinda the norm, which is unfortunate because review is one of only two places where humans can interrupt and guide the AI development loop (the other being the prompting/requirement generation).
I kind of get it…reviews take time. You have to read and understand the changes, you have to take the time to debate different approaches, you need to re-test after making changes. Startups wanna go brrrrr and do stuff at max speed. But if you’re at the “third rewrite” stage that kind of implies that quality and maintainability are becoming big concerns. At that stage, you need to take some time to think about the big picture architectural patterns you’re defining, and you simply can’t do that if you don’t even review what decisions AI makes in the first place.
Giving AI good specs up front helps, but I’ve seen it fill in the gaps with really bad ideas. And honestly, most people still don’t break down problems sufficiently or go into enough detail with their prompts. Part of that is it’s hard to get a design 100% right up front…a lot of discovery happens in the process of development and testing. You can still have good back-and-forth dialogues with AI to push it in the right direction and iterate…but again, that kind of implies review and understanding from a human.
BoBoBearDev@reddit
Has always been.
HomemadeBananas@reddit
In my company we use Claude Code and Codex to write pretty much everything but expect pushback if you’re just straight vibecoding without a care for the code quality.
The general rule everyone is supposed to apply: does this look like the code I would write myself? If not, stop being lazy and fix it, or at least tell Claude what the issue is; it's pretty good at fixing things if you iterate on the output.
Beneficial_Map6129@reddit
rubber stamp as long as the PR isn't too dumb