How much time do you spend reviewing AI-generated Python code before pushing?

Posted by Desperate_Crew1775@reddit | Python | View on Reddit | 66 comments

Genuine question — not selling anything.

I've been using Cursor/Copilot heavily for the last few months and noticed I spend almost as much time reviewing and fixing the AI's output as I would have spent writing it myself.

Specifically for Python and FastAPI:
- Missing auth checks on routes
- Pydantic models that don't validate edge cases
- Async functions that look correct but have subtle race conditions
- Exception handling that swallows errors silently
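As an illustration of that last bullet, here's the kind of pattern I mean (a hypothetical helper, not from any real codebase): the broad `except` returns a default and the failure vanishes, versus a version that re-raises with context.

```python
def parse_port_bad(raw: str) -> int:
    """The pattern AI tools often emit: error is swallowed silently."""
    try:
        return int(raw)
    except Exception:
        return 0  # caller can't distinguish "port 0" from "parse failed"


def parse_port_good(raw: str) -> int:
    """Surface the failure instead of masking it."""
    try:
        return int(raw)
    except ValueError as exc:
        raise ValueError(f"invalid port value: {raw!r}") from exc
```

In network ops terms: the first version is the config that "applies cleanly" and then fails at 3am; the second fails loudly at deploy time.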

My background is network operations (20 years), so I'm paranoid about production failures. I'm curious whether this is just me or a wider pattern.

Questions:
1. How long do you spend reviewing AI code before each commit?
2. What's the most common class of bug AI code introduces in your experience?
3. Have you tried any automated review tools (CodeRabbit, Qodo, etc.) — were they useful or too noisy?

Not looking to pitch anything — genuinely trying to understand the problem space before deciding whether to build something.