When AI rewrites your code in seconds, what should tests actually protect?
Posted by Cumak_@reddit | ExperiencedDevs | 4 comments
Working with AI agents made me rethink how testing could work when code changes this fast. I wrote down my thoughts: test contracts, not implementation. Probably nothing new, but I'd appreciate feedback on whether this framing makes sense to you.
ExperiencedDevs-ModTeam@reddit
Rule 9: No Low Effort Posts, Excessive Venting, or Bragging.
Using this subreddit to crowd source answers to something that isn't really contributing to the spirit of this subreddit is forbidden at moderator's discretion. This includes posts that are mostly focused around venting or bragging; both of these types of posts are difficult to moderate and don't contribute much to the subreddit.
08148694@reddit
Article is kind of self-contradictory.
It first asks what tests are for, with the answer that they should test intent (item was added to cart) rather than implementation details (x function called n times).
It then says that because of AI we somehow need to change how we think about testing.
Well, if you just took your own advice from the start, your tests would already be fine. It's got nothing to do with AI; tests tightly coupled to the code will be fragile to code changes either way.
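The intent-vs-implementation contrast above can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical `Cart` class (not from the article or thread); the names and methods are made up for the example.

```python
# Hypothetical Cart class, invented purely to illustrate the contrast.
class Cart:
    def __init__(self):
        self._items = []  # internal storage; an implementation detail

    def add(self, item):
        self._items.append(item)

    def contains(self, item):
        return item in self._items

    def count(self):
        return len(self._items)


# Implementation-coupled test: reaches into internals, so it breaks
# if Cart switches from a list to a dict, even with identical behavior.
def test_add_implementation():
    cart = Cart()
    cart.add("book")
    assert cart._items == ["book"]


# Contract test: asserts only the observable intent ("item was added"),
# so any rewrite that preserves the contract still passes.
def test_add_contract():
    cart = Cart()
    cart.add("book")
    assert cart.contains("book")
    assert cart.count() == 1
```

The first test is "fragile to code changes either way", human- or AI-written; the second survives any rewrite that keeps the public behavior intact.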
Cumak_@reddit (OP)
You're right that contract-based testing isn't new, and the article doesn't claim it is. The argument is more about urgency than novelty: we've tolerated implementation-coupled tests for decades because change was expensive anyway. AI makes change cheap, so the cost of bad tests becomes harder to ignore. That probably didn't come through clearly. Thanks for the read and feedback
roguelodge@reddit
If you continually rewrite your code, it is always going to be untested, buggy, and insecure. So why bother testing, or even talking about it? Also, it's obvious your article was written with AI too.