Instruction-Driven Development vs Outcome-Driven Development vs Vibe Coding
Posted by max_bog@reddit | ExperiencedDevs | 21 comments
I have been writing code with AI agents for a while now, and I have gone through what I think are three distinct ways of working with them. These are my thoughts about where this is all heading.
Before we continue, I should say that while I’m enthusiastic about AI, I don’t believe we can replace developers with coding agents yet. Still, I think it’s important to keep up with technology and explore where its limits are. What I describe below works best in ideal conditions, but in reality we rarely have the resources to create them.
Instruction-Driven Development (IDD)
IDD is where most developers start, and where many prefer to stay. The workflow looks like this: you assign a small task that takes the agent about 2-5 minutes, you check the results yourself, you make edits or tell the agent what to fix, and you repeat. Over and over.
It already feels super inefficient. You become the bottleneck whose main function is to click a button in the browser and tell the agent “that doesn’t work.”
The main issue with IDD is that we give agents instructions the same way humans write code - step by step. This approach keeps the cognitive load at a level that human brains can handle. But agents are not humans and can handle much larger amounts of data at once. When we micromanage them with small, specific instructions, we are not using them for what they are good at, we are just recreating a worse version of pair programming.
Outcome-Driven Development (ODD)
The shift from IDD to ODD is about moving from managing coding agents to managing work that needs to get done. Instead of telling the AI how to implement something step by step, you describe what the final result should look like and let it figure out the path.
This sounds simple, but there is a mental block that makes it genuinely hard for engineers.
When we see a task, we always have something built in our head: how we would implement it, what patterns we would use, what the code structure should look like. Everyone has had that moment of reading another developer’s code and thinking “I would have done this differently.” With AI, this instinct kicks in hard. You see the generated code, it doesn’t match the picture in your head, and you start forcing the AI to rewrite it your way. Congratulations, you just fell back into instruction-driven development.
The reframe that helped me: treat the AI as another senior engineer who has their own preferences. If a colleague delivered working code that met all requirements, you probably would not ask them to rebuild everything with a different architecture just because it is not how you would do it. You would review it for correctness, security, and maintainability and move on. Same with AI. Accept the code if it is technically correct, even if it is not what you had in your head.
A lot of developers are already doing this nowadays, but the idea of ODD doesn’t stop here.
The natural next step is to stop reading the code altogether. Right now that sounds extremely silly and dangerous. But as you make fewer changes to AI-generated code over time, you build confidence. If you have a good way to verify the result, you will eventually reach a point where reviewing no longer leads to any changes.
Today, we already trust code we haven’t personally read. Every time you pull in a library, you are running thousands of lines you never looked at. You trust it because it has tests, a track record, and a community catching bugs. The same trust model can apply to AI-generated code, it just needs the right infrastructure.
If you have pipelines that automatically run not just unit and integration tests but also security vulnerability scans, performance benchmarks, architecture conformance checks, and static analysis against your team’s standards, and that deploy to close-to-production staging with full QA and monitoring, confidence and trust will build up.
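As a minimal sketch of what such a verification pipeline could look like, here is a gate runner that executes each check in order and stops at the first failure. The gate names and commands are hypothetical placeholders, not tools the post prescribes; in practice you would substitute your team’s actual test, scan, and analysis commands.

```python
import subprocess
from typing import List, Tuple

def run_gates(gates: List[Tuple[str, List[str]]]) -> List[Tuple[str, bool]]:
    """Run each verification gate in order; record pass/fail, stop at first failure."""
    results = []
    for name, cmd in gates:
        ok = subprocess.run(cmd, capture_output=True).returncode == 0
        results.append((name, ok))
        if not ok:
            break  # no point running later gates (or deploying) if an earlier one failed
    return results

# Hypothetical gate commands -- substitute your team's real tools
# (pytest, bandit, a benchmark harness, an architecture linter, etc.).
GATES = [
    ("unit+integration tests", ["python", "-c", "print('test suite would run here')"]),
    ("security scan",          ["python", "-c", "print('vulnerability scan would run here')"]),
    ("static analysis",        ["python", "-c", "print('style/type checks would run here')"]),
]

if __name__ == "__main__":
    for name, ok in run_gates(GATES):
        print(f"{name}: {'PASS' if ok else 'FAIL'}")
```

The point of the sketch is the structure, not the specific tools: the more gates like these run automatically on every AI-generated change, the less the human review step contributes.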
This won’t happen overnight. It will happen gradually as these pipelines get better and as failure rates drop. You will stop reading every line, then stop reading most lines, and eventually you will only look at code when something breaks. Maybe not intentionally; it will just happen once reading the code starts to feel useless.
The responsibility model will probably shift too. Today engineers own the code directly. But as AI takes on more of the implementation, ownership moves up a level. You stop being responsible for how it is built and start being responsible for what you chose to build it with and how you verify it works. It is similar to using a third-party service: when their API goes down, you are not debugging their code, but you are the one who chose that vendor, set up monitoring, and built the fallback. The ownership doesn’t disappear. It changes shape from writing correct code to defining correct outcomes and catching failures fast.
That is a big cultural shift for engineering, and we are not there yet. But I think the trajectory is pointing in that direction.
How ODD Differs from Vibe Coding
At first glance, Outcome-Driven Development might sound a lot like vibe coding. Both involve stepping back and letting the AI do its thing. But there is a key difference.
In vibe coding, you care only about the final product. Does it work? Does it look right? Ship it.
In outcome-driven development, you care about the whole outcome. The product, the code quality, security, maintainability, test coverage, and everything else that makes software actually production-ready. You are still defining what “good” looks like. You are just not dictating every step to get there.
Vibe coding is fine for prototypes and side projects. ODD is what you need when the code has to survive contact with real users and a team that has to maintain it.
The Shift in Mindset
For years, our value as engineers was tied to the code we write. IDD preserves that identity: you are still the one making all the decisions, just with a faster typist. ODD challenges it: you have to let go of your preferences and focus on outcomes. And whatever comes next will challenge it even further.
Strict-Soup@reddit
Loser
Melodic_Candle_5285@reddit
Perhaps you should watch your language.
CXgamer@reddit
I've watched his language and can confirm it's English.
Gabe_Isko@reddit
I still haven't run into an agent that can even remember the code it wrote. I feel bad for the engineers that are cleaning up your AI slop after you.
Melodic_Candle_5285@reddit
LLMs are stateless.
Gabe_Isko@reddit
Yeah, but the cli tools will re-feed stuff back into the context prompts behind the scenes. Except it still doesn't do a good job. But the whole "goal oriented development" approach is flawed.
WanderingStoner@reddit
I mean, if you let it have attribution in git it will know.
Gabe_Isko@reddit
Still can't believe people let these things make commits for them. I would probably just reject all PRs with commits that aren't reviewed by a human. Why should an AI agent care about the commit history?
WanderingStoner@reddit
if you want ai to remember what it wrote, then use the tools that are meant for it; one of which is git blame
ExperiencedDevs-ModTeam@reddit
Rule 9: No Low Effort Posts, Excessive Venting, or Bragging.
Using this subreddit to crowd source answers to something that isn't really contributing to the spirit of this subreddit is forbidden at moderator's discretion. This includes posts that are mostly focused around venting or bragging; both of these types of posts are difficult to moderate and don't contribute much to the subreddit.
carnivoreobjectivist@reddit
I do ODD but the bottleneck seems about the same as with IDD, it’s just less control enforced. I don’t care about how it’s coded if it works and is bug free, but I’ve still gotta make sure the code is good quality and that desired outcomes are achieved.
itix@reddit
Good tests are absolutely important.
carnivoreobjectivist@reddit
Ya I make the agent write those too lol unless you mean manual testing
itix@reddit
I have noted the same. IDD is okay for small tasks, but ODD is the way to create large blocks at once.
boring_pants@reddit
You know, I remember a time when our industry prided itself on being data-driven.
It's pretty cool that with AI you can just be like "chatgpt, write me a thousand words on ways to use AI. Don't worry about evidence or empirical data, just make shit up."
forgot_previous_acc@reddit
We follow DDD (Destruction-Driven Development)
Esseratecades@reddit
I stopped reading halfway through.
The problem with what you're presenting is that software systems are so large and complex that the "outcomes" are so vast that you can't actually understand if they've all been met without reading the code at some point.
Your agent can produce a screen that looks like it works, but you don't know if it actually does until some user breaks it. Depending on what your product is, that bug you didn't find could be catastrophic.
Sure you could form a thorough plan to test it, but how good of a plan can that actually be without essentially creating the instructions that you're supposing we don't pin it to? Even if we do find a bug in testing, how are we supposed to diagnose it and find the root cause without investigating the code if the bug is non-trivial? By the time you've concocted ways to protect your product from all of the risks, you've reintroduced all of the basic software engineering stuff you've eschewed in the name of speed.
Mestyo@reddit
Yes, let's discard the craft of programming and understanding of the internals of your product to improve velocity for no reason other than short-term profits for your superiors. Brilliant.
WalidfromMorocco@reddit
Both are equally dumb, but this ODD thing you've invented two minutes ago is literally just vibe coding. That's what vibecoders who don't understand the code do with LLMs.
It's even more dumb to advocate just describing the outcome and not even reading the code. Ask the best LLM out there to implement an authentication flow, and it will write the most fragile system, one a sneeze would make fail.
Third thing, when the production code fails at 2AM, it will be your ass on the line. Trust me, I'm living this hell right now. My boss instructed me to go full LLM on a huge codebase (backend, mobile, front-end, embedded software as well) and then got frustrated at me when I couldn't answer some of his questions on the fly.
I really hope for this dumb timeline to end.
CodelinesNL@reddit
You vibe-wrote your own personal definition of "spec driven development", which basically predates the moon landing.
blacklig@reddit
This subreddit is just rotting. It's just slopbots posting the same boring BORING shit about slopbots over and over.