Does AI-generated code change the review process itself?

Posted by Lost-Albatross5241@reddit | ExperiencedDevs

I've been trying to put my finger on something with AI-assisted code review.

Not the usual "AI writes bad code" thing. Obviously it does sometimes. I mean the review process itself.

Sometimes the code looks fine enough that you don't reject it right away. But something feels off and you can't quite say what.

So you read it again. Check another file. Ask the AI what it was trying to do. Maybe run a test, grep around, look at a related function. Then somehow you're back staring at the same block again.

And idk, at some point it feels less like reviewing and more like orbiting the decision.

You're doing review-like activity, but you're not really making a clean call.

That's the part that feels weird to me.

The "human in the loop" is technically there, sure.

But if the human is stuck between "this is probably fine" and "I still don't trust this," is that actually the loop we want?

Have you seen this?

Especially with infra / backend / security-ish code?

So how do you deal with it?

Or is this not really a thing and I'm just overthinking it?