security review is becoming an afterthought in ai-driven development

Posted by minimal-salt@reddit | ExperiencedDevs | 15 comments

half my team has been relying heavily on claude for coding and honestly they started skipping manual security checks when the generated code "looked clean" and passed basic tests.

last month we deployed a nextjs app where one teammate had claude generate the auth endpoints. everything worked perfectly in dev and staging. three weeks later we discovered a subtle sql injection vulnerability in the user search function. claude wrote syntactically correct code that sanitized most inputs but missed one edge case.
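to give a rough idea of the kind of thing that slipped through (not our actual code, the table and column names here are made up): the search filter was bound as a parameter, but a sort column taken from the request went straight into the query string, and query parameters can't cover identifiers like column names.

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the usual PG* env vars

// the search filter itself was parameterized correctly...
export async function searchUsers(name: string, sortBy: string) {
  // ...but the sort column came from the request and was interpolated
  // straight into the SQL, so whatever string the client sends gets
  // executed as part of the ORDER BY clause
  const { rows } = await pool.query(
    `SELECT id, name, email
       FROM users
      WHERE name ILIKE $1
      ORDER BY ${sortBy}`,
    [`%${name}%`]
  );
  return rows;
}

// the fix: never interpolate identifiers, allowlist them instead
const SORTABLE_COLUMNS = new Set(["id", "name", "created_at"]);

export async function searchUsersFixed(name: string, sortBy: string) {
  const column = SORTABLE_COLUMNS.has(sortBy) ? sortBy : "id";
  const { rows } = await pool.query(
    `SELECT id, name, email
       FROM users
      WHERE name ILIKE $1
      ORDER BY ${column}`,
    [`%${name}%`]
  );
  return rows;
}
```

the vulnerable and fixed versions look almost identical at a glance, which is part of why it sailed past review and basic tests.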

it made me realize the team was trusting ai output too much without proper validation. i talked with them about improving our workflow and code quality, and we settled on three steps:

  1. review the latest code they wrote with claude for at least 30-60 minutes
  2. use gpt-5 in cursor or warp to double-check the architecture and catch missing pieces
  3. before pushing a pr, scan the code with the coderabbit cli or vscode extension

it's improved our code quality significantly. the scary part was how confident claude sounded when explaining its security implementations to the team, which made it easy to assume everything was bulletproof.

question for the community:

how are others handling this balance between ai productivity and actual security review?