I built an AI that simulates how hackers would break your code before production — looking for feedback from devs
Posted by c0d3xxxx@reddit | learnprogramming
Hey everyone,
I’ve been working on a small side project and I’d really appreciate honest feedback from other developers.
The problem I kept running into is this:
Even experienced devs (myself included) often ship code that looks fine but still contains security issues like SQL injection, XSS, or authentication flaws, and we usually only find them after deployment or during an audit.
So I built a tool called CodeCrash.
What it does:
It takes your code and simulates how an attacker would try to break it.
Instead of just saying “you have a vulnerability”, it shows:
• what the exploit would look like in real life
• how an attacker would actually use it
• and how to fix it step-by-step
Example:
Instead of:
“Possible SQL injection detected”
It shows:
“An attacker could inject this payload → here’s how it bypasses your query → here’s the fix using prepared statements”
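To make that concrete, here's a minimal sketch of the kind of flaw the tool flags and the fix it suggests. This is my own illustration using Python and sqlite3 (the function names and schema are made up for the example, not output from CodeCrash): a login check built by string concatenation, the classic `' OR '1'='1` payload that bypasses it, and the prepared-statement version that treats the payload as plain data.

```python
import sqlite3

# Throwaway in-memory database with one user (illustration only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# Vulnerable: user input is concatenated straight into the SQL string.
def login_vulnerable(name, password):
    query = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(query).fetchone() is not None

# The payload ' OR '1'='1 turns the WHERE clause into a tautology,
# so the attacker "logs in" without knowing any password.
assert login_vulnerable("alice", "' OR '1'='1") is True

# Fixed: a prepared statement with ? placeholders keeps the payload as data.
def login_safe(name, password):
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone() is not None

assert login_safe("alice", "' OR '1'='1") is False  # payload no longer works
assert login_safe("alice", "s3cret") is True        # real credentials still do
```

The point of showing the payload alongside the fix is exactly the tool's pitch: seeing the tautology actually log you in is far more convincing than a bare "possible SQL injection" warning.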
⸻
🎯 Goal:
Make developers think like attackers, not just coders.
Right now it’s at a very early MVP stage, and I’m trying to validate whether this is actually useful in dev workflows or just “cool but unnecessary”.
⸻
🙏 I’d love feedback on:
• Would you actually use something like this in your workflow?
• What’s missing for it to be useful?
• Is this solving a real problem or overkill?
If anyone is interested, I can also give you early access to try it out.
Thanks in advance 🙌