I feel disconnected from the codebase if I adopt a fully agentic workflow; I must do something manually.
Posted by ImTheRealDh@reddit | ExperiencedDevs | View on Reddit | 139 comments
I've been using AI coding tools heavily for the past while, and I've settled into a workflow that works for me.
When I let AI go fully agentic on a feature, even with a good spec, I feel disconnected from the codebase. The code is there. It works. But I don't really understand how it works. I just know the result, not what it actually does. And that bothers me.
I got burned once. Wrote a short prompt, let AI implement a whole feature, went to test it, and the thing totally diverged from what I wanted. I couldn't even course-correct because I had no idea what it built. Had to scrap everything and start over.
Since then I've been doing spec-first, but with my hands in it. I generate a spec, then I argue with it, poke holes in it, point out flaws, architect it myself. Once it's workable I implement the skeleton by hand. The schema, the core logic, the architecture. Then I feed it back to improve the spec more. As I implement I find more flaws, keep iterating, and eventually the spec becomes the documentation of the feature itself.
The deterministic functions, the boilerplate? AI can have those. But the core stuff, I need to touch it. I need to write it. Otherwise I don't feel ownership of it. When the shit hits the fan in production, I need to be able to jump right in and know where the logic goes and where it breaks.
There's this concept in Warhammer 40k called the machine spirit. Tech priests pray to the machines, wave incense before them, maintain a spiritual connection to the technology. Before AI I never thought about "connection to the codebase" as a thing. I coded 100% by hand so the connection was just there. Now that I have the option to lose it, I can actually feel it. I need to feel the machine spirit of my code.
I know my approach is slower than the folks throwing unlimited tokens at the latest models with million-token context windows. I accept that tradeoff. I still give all the f*cks I used to, and I want to keep giving them.
So my question to you all: do you feel this disconnect when you go fully agentic?
timetable23@reddit
The disconnection is real and it happens when you delegate without specifying. You hand the AI a vague task, it produces code you don't fully understand, and your mental model of the codebase degrades.
The fix: write the spec yourself, let the AI write the code, then review against the spec. The spec IS your connection to the codebase. You understand the intent, the edge cases, the failure modes - the AI just handled the syntax.
I've been using ClearSpec (https://clearspec.dev) for this workflow - it generates structured specs from conversations and feeds them into your IDE via MCP. The spec becomes the reference point for reviewing what the AI produced. You maintain understanding because you defined the requirements.
CodrSeven@reddit
Whatever you outsource, you don't gain from.
StatusPhilosopher258@reddit
yeah this is real, fully agentic has no ownership
best approach: stay involved or you'll lose understanding. spec-driven development helps, and tools like Traycer keep it aligned
idekl@reddit
Sounds like you're doing spec driven development the right way. As far as I can tell, SDD is the ideal way to program in today's AI-first paradigm.
Chickenfrend@reddit
Writing extremely detailed specs up front and no code is the opposite of what I like to do. I've used bmad at work and I kinda hate it, it takes a long time and markdown is worse than code.
igharios@reddit
Unfortunately, software engineering is changing; systems thinking, reviewing, and the like are the technical things SEs will do, and now they're expected to know more about specs and requirements. The world is flipped.
Chickenfrend@reddit
I don't think we're going to replace code with spec. Yes, senior engineers do systems thinking, but there's an iterative process as you build code, learn new requirements, refine it, etc. Having the spec be the source of truth instead of the code, where the code is actually what determines what the software is, is backwards.
Maybe I'm wrong and we'll all be spec programmers now. I hope I'm wrong, because I prefer writing code to writing documentation and talking to AIs who are LARPing as project managers.
igharios@reddit
I am doing a side project with bmad. I am 2 weeks in and more than 25% done. I'd been wanting to do this for years, but my rough estimate was 5 engineers and 2 years.
Think where these models were 12 months ago, and look 12 months ahead. Sooner or later software engineers won't be writing code.
Chickenfrend@reddit
Well, if code won't matter in a year, most of us will be out of a job. We'll see how that goes.
I do think AI tools are useful. I just don't think bmad specifically is the way to go. It's too hands off and specification is a lot of work. I'll say, AI makes it a lot easier to start a project. Your side project must be really large if you'd have estimated a need for 5 engineers and two years for it. I'll be curious how easy it is to continue as you keep going past that initial 25%.
igharios@reddit
I am building an invoicing and scheduling system
another_dudeman@reddit
In the old days programmers needed to know all this stuff too. I'd argue they still did, right up until LLM-heavy workflows.
MORPHINExORPHAN666@reddit
Yeah, that's because you didn't write the code yourself; it really is that simple. There is none of your own intention in the codebase that you are reading.
If code was something you could "prompt, then review", as the AI crowd likes to put forth, then "reviewing until you feel ownership" would be something that would be possible outside of generated code. Anyone who has been forced to maintain monolithic legacy codebases that they did not create will tell you that this is not true.
ritzkew@reddit
From Algorithms to Live By: in scheduling theory this is called "busy waiting", where the CPU is technically running instructions but producing zero useful output. The AI equivalent: you're generating code, accepting code, reviewing code, debugging code. Lots of motion. But comprehension is built through friction, not throughput. The commit history shows progress. Your mental model doesn't. Productivity is measured in problems you understand, not tokens consumed.
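A minimal Python sketch of the busy-waiting idea, for anyone unfamiliar (hypothetical names, just to illustrate the concept): one thread spins through millions of instructions while producing nothing, while a blocking wait would simply yield the CPU until real work arrives.

```python
import threading
import time

def busy_wait(stop: threading.Event, spins: list) -> None:
    # "Busy waiting": the CPU is technically running instructions
    # (checking a flag over and over) but producing zero useful output.
    while not stop.is_set():
        spins[0] += 1  # lots of motion, no progress

def blocking_wait(stop: threading.Event) -> None:
    # The alternative: yield the CPU until something actually happens.
    stop.wait()

stop = threading.Event()
spins = [0]
t = threading.Thread(target=busy_wait, args=(stop, spins))
t.start()
time.sleep(0.1)  # the real "work" happens elsewhere for 100 ms
stop.set()
t.join()
# spins[0] is now large: the thread was busy the whole time, yet did nothing.
```

Swap "iterations" for "tokens" and that's the workflow being described: the counter climbs while the mental model stands still.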
vickie5nickers6792@reddit
do you find any part of ai-generated code easy to trust or understand?
RobertKerans@reddit
It's the same as maintaining any code you didn't write. AI doesn't change that, it's not creating a new type of code (it can't do that, it can only glue together whatever is in its corpus).
PartBanyanTree@reddit
it's not even the same as maintaining code written by someone else.
I've done that; eventually you get into the headspace of the previous dev(s). Good or bad, you get this "aha! I know what they would have done" and you're right. Maybe it's just a different way of thinking, maybe it's "what's the most brain-fucked choice to make here", but it becomes expected; you get the thought process. Over time you can change it to your improved way of doing whatever it is, or you do it the way the ancestors of the codebase did, to be consistent or because that's just how it's done. You can tell who worked in what section based on how they formatted the code and named their variables.
with AI that never comes. There's never a critical mass where the AI's instincts are something you can guess, no reason to think the 10th screen will be similar to the prior 9. You can't see how the system grew, nor intuit its intent. There is no point of view, and that means every new line of code has to be understood from scratch.
johnpeters42@reddit
Can confirm the first part, at least. Some of my former cow-orkers wrote hot garbage, but at least they generally wrote the same type of hot garbage, so I can wade through it faster than I could some complete stranger's hot garbage.
MORPHINExORPHAN666@reddit
LMAO I know exactly what you mean!
Wonderful-Habit-139@reddit
There's definitely a difference. AI code is usually more verbose, it doesn't shy away from repeating code rather than reusing it, and there's no real thread of logic running through how it was written.
This results in code that is usually 2 to 10 times (or more) longer than what a human would've written. And it hides mistakes that are different from human mistakes.
Hot_Slice@reddit
I'm noticing that AI doesn't do a good job of cleaning up after itself. If I start with a simple vertical slice and then start making more similar workflows, it doesn't go back and automatically refactor. So there's a lot of semi-duplicate code. Even when I've told it to refactor it tends to leave vestiges of the old way, and then I have to tell it even more explicitly to remove them.
If you're thinking "who cares? The AI maintains it": I think that the semi duplicate code also consumes more tokens and it's better for context management to have nicely named helper functions. So I continue to manually review and tweak every MR.
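To illustrate the semi-duplicate pattern, here's a hypothetical before/after sketch (invented names, not from any real MR): two near-identical exporters of the kind the agent leaves behind, and the single well-named helper a manual refactor collapses them into.

```python
# Before: what the agent tends to leave after building two "similar workflows".
def export_users_csv(users):
    lines = ["id,name"]
    for u in users:
        lines.append(f"{u['id']},{u['name']}")
    return "\n".join(lines)

def export_orders_csv(orders):
    lines = ["id,total"]
    for o in orders:
        lines.append(f"{o['id']},{o['total']}")
    return "\n".join(lines)

# After: the manual refactor -- one nicely named helper, which also means
# fewer near-duplicate lines eating context tokens on the next prompt.
def export_csv(rows, fields):
    lines = [",".join(fields)]
    for row in rows:
        lines.append(",".join(str(row[f]) for f in fields))
    return "\n".join(lines)

users = [{"id": 1, "name": "ada"}]
assert export_users_csv(users) == export_csv(users, ["id", "name"])
```

The behavior is identical; the difference only shows up the third time the agent (or you) has to touch it.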
Izkata@reddit
I've also seen it remove "why" comments I'd put in for an unclear piece of code when another dev uses it for something unrelated in the same file.
MORPHINExORPHAN666@reddit
I don't find any code hard to understand given enough time. Even with higher complexity systems, I can generally hold a mental model of the system after some note taking, testing, and sketching. As far as "trust", I don't actually trust any code I didn't write myself. Even when I'm forced to work collaboratively - and I think everyone should be doing this - I review any existing parts of the codebase, and manually review any changes that are pushed. That way we all keep each other honest, and the collaborative spirit tends to benefit from it.
The problem with AI code is that the intentionality is not there like it is with code produced by a human. There is no familiar, human thread that I can follow to its logical end. There aren't the common hints and signs of a human stumbling. AI completely breaks down, logic-wise, multiple times over the course of its "generative process", if you want to call it that. It may not be totally apparent in the small projects that crop up on GitHub, but in real projects where generative AI was used heavily, trying to parse the codebase is like trying to get into the shoes of 3 or 4 separate, mentally deficient robots with schizophrenia.
I fail to see the utility in AI and I think the hype is mostly from the uneducated or the people who have been taking shortcuts in their career, and do not want to put the time into truly understanding what they are doing or how to best accomplish a goal.
PriorApproval@reddit
i do think the maintenance part also becomes easier if search + refactoring both are correspondingly easier - which is possible. it still requires review though
ImTheRealDh@reddit (OP)
I haven't been forced to do that, but that sounds not fun at all.
MORPHINExORPHAN666@reddit
It can be good money, but it will burn you out. The worst part is that the hiring managers at these companies will have been made acutely aware of how intimate you need to become with the codebase in order to fix issues without causing more, and they will lie and reassure you throughout the entire process that it "is not as bad as you'd expect". That, or they'll put out job postings for .NET and leave out the fact that the applicant will be tied to a codebase that supports an early version of .NET Framework. Very fun stuff :)
ImTheRealDh@reddit (OP)
Hahaha, I haven't had a chance to try that kind of "fun". Thanks for the heads-up.
PartBanyanTree@reddit
I once got tricked into a job where I would be doing ".net security work". I knew .NET but was pretty nervous because I'm not some security guru or anything, but such a challenging premise and growth opportunity!
To start the 1yr contract, though, I needed to familiarize myself with the codebase, do some bug fixes, etc... Before long I realized I was permanently stuck in this ancient C++ system that had been going for 20+ years. It was enormous, comprised of multiple pieces of software that had been acquired over the years and smashed together. There was C++ and Delphi and .NET and VB, and when you think about the huge volume of man-years that went into one subsystem, it was crazy. I'd be modifying files that svn history assured me hadn't been touched in 14 years (they'd clearly switched version control systems a few times).
I watched another contractor get the ".net security work" handed to him... it was some kind of modal dialog feature that took him 3 weeks to do; basically it was nothing. I hadn't done C++ since school and it had been a decade since I'd cared about manual memory management. I did grunt-level, painful bug fixes on code that was legacy as soon as my fingers left the keyboard. I got paid and finished my year and did not renew my contract (I had life plans at the end of that year, so I didn't want to try quitting mid-contract to find a shorter interim contract).
xdevnullx@reddit
This, written as I learn how to run an STA-threaded WinForms app in a separate process so I can treat it as a remote HTTP service.
The app's original authors baked Infragistics UI controls into their business logic, and it drags 2010 along with it.
xdevnullx@reddit
I only recently learned the term "desirable difficulty". I've been calling it "friction".
Direct_Earth2958@reddit
machine spirit concept is kinda interesting here
Sudden-Rate9539@reddit
Yeah, and imagine now when someone joins you to write the code together :D
There are several agents with memory that know your codebase well, and when someone asks questions, such helpers/agents could guide you on how exactly your hook or auth works.
immbrr@reddit
A strategy I've had a lot of success with is:
- Iterating on the design/spec with AI as I would with a coworker/pair programmer
- Writing the core parts of the implementation myself
- Using AI for the long tail of fixes etc. to get everything working right and fully connected
I only use pure vibe coding for things I care less about (e.g. small UIs for improving my personal workflow).
AlertHovercraft6567@reddit
Yes. I can't trust AI-driven code, somehow. Not until it does what it says, not until I've seen what's under the hood. And I've found a lot of bloat and multiple iterations over the code. I need to be hands-on. It looks like AI is good at prototyping, helping out, scope refinement, and bug fixing, but the main logic we have to do ourselves. AI messes things up.
fallingfruit@reddit
When I was trying to understand the hype I would ask the AI to write systems and then I would try to do some work on top to patch the system or add a feature, using the AI code as a base.
What ended up happening is that the AI code was so astonishingly bad and hard to extend that I almost always rewrote everything it did, in less than half the LOC, and created a much better system. It was a huge fucking waste of time. I don't use them much any more because I am so constantly disappointed.
And yes, I know if you are building some stupid simple web app's CRUD endpoints then it's pretty good. I'm not.
valdocs_user@reddit
I had a realization recently. Attending to places where I notice that what the AI implemented differs from what I wanted is like counting the bullet holes on planes that come back from a sortie. It's the planes that didn't come back that were hit in bad places. If I can notice and correct it, it's not so bad, but how do I protect against mistakes or laziness from the AI that I wouldn't otherwise notice?
I haven't worked out a solution to this yet.
dystopiadattopia@reddit
Do everything manually if you value retaining your skills
brobi-wan-kendoebi@reddit
Nice try clanker
ImTheRealDh@reddit (OP)
How dare you people talk to us like that?
PoopsCodeAllTheTime@reddit
I mean, this has always happened to me. I would write some code, then enough time passes since I wrote it (let’s say 6 months), I come back to that code and it’s like I had never read it before, as if written by a stranger. Now it’s the same thing but instant. Code goes in, I check that it’s ok, and that’s it, the code is immediately disconnected from my memory. That’s fine tho, that’s how it is whenever we work on a codebase.
ImTheRealDh@reddit (OP)
But we still have some intuition about that code, though, since it came out of our hands. I am like that too, but the timeline is shorter.
PoopsCodeAllTheTime@reddit
I suppose open source projects already deal with this often if they are heavily contributed by community. Engineers that are in charge of reviewing PRs and writing little code in large orgs, they also deal with this feeling.
So yes, you are correct with the loss, but also, it seems this should be manageable.
It is a different skill, it might be more uncomfortable to do, perhaps an increase in difficulty. But…. Bright side? Still solvable, and also proof that ai isn’t replacing engineers even if it replaces some of the work, if anything it might make the hard skills even more relevant
ImTheRealDh@reddit (OP)
Indeed, I agree with your take that this is solvable; I'm trying to solve it.
PoopsCodeAllTheTime@reddit
So this wisdom I heard a while ago:
A good project isn’t the one that you know by heart so you can change with ease, a good project is one where a stranger can come into the code base and find the relevant piece of code with ease, the stranger doesn’t need to know the entire repo in order to make their fix
I think this goes hard and points the right direction given the new limitations
ImTheRealDh@reddit (OP)
That hits hard, I will remember it, thanks.
SpritaniumRELOADED@reddit
I feel disconnected from the landscaping if I adopt a fully mower-based workflow; I have to trim a few blades with scissors and make sure everybody sees.
ImTheRealDh@reddit (OP)
If you're that interested in a mower-based example, here's one: You tasked an agentic mower to mow your client's backyard, let it run automatically for an hour, then came back to see the result. You glanced at the backyard and determined it's just good enough. All the grass is cut clean. It also cut all the client's flowers in half. It also killed a cat that got curious about the mower.
SpritaniumRELOADED@reddit
That's a really sad story you made up
rupayanc@reddit
the disconnect is more serious than most people admit, because when production goes sideways at 2am you need to debug by instinct, not by pasting errors into a chat window hoping the model remembers what it built. your spec-first plus write the skeleton yourself approach is the version that actually scales.
mosselyn@reddit
I'm not here to tell you the right or wrong way to engage with AI as I am totally unqualified to do so, having retired a few years ago. However, maybe a little past history will help you frame the situation in a way that helps you:
Back when 3rd party utility libraries started to become widely available, a lot of us were on the NIH (not invented here) struggle bus. "How can I trust this code written by people I do not know to do these important foundational things in a reliable and performant way?" Think math libraries, I/O libraries, string utility libraries, etc.
The concerns weren't unfounded. Some of them were bad. All of them contained bugs to one degree or another. It was harder to identify problems and get them fixed since it wasn't your code base. And yet...
In the long run, the productivity gains won out and the libraries got better, and now none of us think too much about it. It's not "should I use a library to do X", it's "which library should I choose".
You don't feel ownership of those 3rd party libraries, nor should you. You feel ownership for what you build with them. I think this will eventually fall out similarly.
GreenOrg@reddit
AI-generated code isn’t the same as using a third-party library. With libraries:
ImTheRealDh@reddit (OP)
Thanks for taking the time to write out your perspective, really appreciate that. And yes, with the current pace of AI, I have to adapt my workflow from time to time, and I will remember your advice, thanks.
Btw, after retiring, are you having fun?
mosselyn@reddit
Yes, being retired is the f'ing bomb, at least for me. Pretty sure I was born to sit around playing games, reading books in the sun, and pursuing my other hobbies. May you be similarly happy when your opportunity arrives.
ImTheRealDh@reddit (OP)
Damn, that sounds fun. I long ago put those things down to start the grinding journey. Hope someday I can pick them up again, lightly.
seinfeld4eva@reddit
For me, the answer is just spend (much) more time reviewing every file. If I don't understand what it's doing or why it's done that way, I ask Claude to explain it. If I get to the stretch of some complicated code, I'll go back and try to read through it again and make sure I can explain it without having to ask Claude.
The hard part is that reviewing that much code becomes tedious, especially if there are a lot of changes. I try to see reviewing as something educational and relaxing, which I think it can be. I feel like if I can just learn to enjoy the reviewing aspect more, instead of seeing it as a tedious chore, then I can enjoy being a software engineer again and can see myself doing it for years to come.
lilcode-x@reddit
This right here. I 100% believe that code review is a skill that can be developed like anything else, and the industry is in a process of adapting to this new workflow.
Repulsive-Hurry8172@reddit
I feel like code review skills won't develop without knowing manual code writing. And the current state of the industry, which penalizes manual writing, will lead to fewer and fewer people with the ability to review.
lilcode-x@reddit
Guess we’ll see how it plays out. To your point, it could be that the reason I’ve been able to get better at code review is partly because I had already spent years hand-writing it.
-Knockabout@reddit
I still question whether it's actually FASTER in every case though. It feels like at some point you are just moving the time spent writing to the time spent prompting/reviewing.
lilcode-x@reddit
Honestly, it’s hard to say, but I think it’s more nuanced and task-dependent. Some tasks feel slower, others the same, and other tasks are significantly faster.
ImTheRealDh@reddit (OP)
Yes, there is definitely some review fatigue when I have to read too much code.
seinfeld4eva@reddit
I don't know how it is at your workplace, but there's no mandate for me to go super-fast. My team would rather I wrote high-quality code than a lot of code. Ideally, we want to do both. But if I tell people I need to spend a couple of hours reviewing my own generated code, nobody is going to hold it against me. I'm already going twice as fast as I used to -- there's no need to run around like a chicken with its head cut off.
dpekkle@reddit
Yeah, I think I've had to adjust to using AI, because I'm used to code being almost ready to push and PR by the time it's written, and AI gets it to that point super quickly.
That is, code that comprehensive would usually be of a quality that's ready to go. So I have an instinct that it takes about X minutes before I can wrap up and move on, when with AI there's still a lot of review to be done: working through it to understand, critique, and refactor.
GrapefruitMammoth626@reddit
It's more work to review code than to write it, a lot of the time. When you write it, you understand the intent. LLMs do silly things, like add a bunch of fallback code just because they don't have all the info they need, so they implement very cautiously. You can tell it what not to do, but there's always another thing you'll have forgotten to include. Nothing worse than looking at its verbose output and having to iterate.
SetTemporary5734@reddit
sounds like you're taming your own digital machine spirit
Business_Try4890@reddit
This is all so dumb lmao, it probably takes you more time than just doing it yourself, and it costs you, and that dumbass GPT 🤣
vowelqueue@reddit
Am I the only one that can tell this post is just obvious AI slop?
Repulsive-Hurry8172@reddit
Anthropic really wants you guys to send your code to their servers, guys. Need that training data from experienced devs badly
AngrySpaceKraken@reddit
I've been seeing a huge uptick of AI posts lately. They won't capitalise the first letter of a sentence, or say something generic like "That is totally the way it is, am I right??"
What's even the point?
Muhznit@reddit
Manipulation. The AI companies have way too much incentive to manipulate public opinion in their favor.
03263@reddit
Ugh. Software hasn't meaningfully improved human lives in 20+ years. My point being that not only are the companies not our friends, the technology itself isn't solving a social need. It's very impressive and ostensibly useful, but for what, I do not see.
Economic value cannot be entirely detached from social value... or can it?
nextnode@reddit
That is not how AI writes. It is not even grammatically correct.
Beneficial_Gas_3649@reddit
you might be overthinking the manual parts
Ready-Stage-18@reddit
Praise the Omnissiah
Lowetheiy@reddit
Omg is this Warhammer 40k reference? OMG OMG!
ImTheRealDh@reddit (OP)
From the moment I understood the weakness of my flesh, it disgusted me.
Yourdataisunclean@reddit
I craved the strength and certainty of steel.
hornynnerdy69@reddit
One day the flesh you call a “temple” will fail you
F0tNMC@reddit
Ah, but then you encounter the Riddle of Steel.
dc0899@reddit
The beast of metal endures longer than the flesh of men.
Ok-Target9965@reddit
reminds me of when i lost my keys and they were in my hand the whole time
Kooky_Town3336@reddit
forgot about that part
debbie7winkle2884@reddit
sounds like you've got your own machine spirit rituals going on
Any_Meaning1145@reddit
kind of reminds me of something i read before
Main_Set5297@reddit
i had something like this with AI-generated tests
Yourdataisunclean@reddit
Many of the tech priests are shown in the lore to not really understand how most technology works. The rituals they do often contain actual important elements of maintenance or other functions combined with unnecessary religious aspects, which then get passed down without a true understanding of what they are doing. One story even had an AI mocking a tech priest and calling him an ignorant witch doctor to his face as he did some of the rituals he thought were needed.
The machine spirit is also usually just a catch all term for behavior, programming or an actual AI wrapped up in their religion. Since actual AI (abominable intelligence) is banned in the Imperium of Man. Being fast and loose with what a machine spirit actually is helps them use certain machines they otherwise might not be allowed to use.
Overall they make an interesting parallel for reflecting on our own frequent misunderstandings of AI and complex systems.
masnth@reddit
We are on the right path to fuck up our own 40k version, maybe more than what 40k could imagine.
zero-dog@reddit
I’m exactly the same.
Electronic_Anxiety91@reddit
Agentic coding is a scam.
Cahnis@reddit
Try using a red theme on your IDE for the agent to run faster.
HiSimpy@reddit
This is a healthy signal, not resistance. Full automation without ownership checkpoints creates hidden risk. A practical middle path is manual checkpoints on design intent, risky diffs, and final acceptance so speed does not replace understanding.
rover_G@reddit
Do you review your Agent’s PRs?
Alert-Refrigerator22@reddit
I let it go with full permissions, but have absolutely clear first what "acceptance criteria" looks like - then spin up an agent to confirm everything the first agent has done following the plan.
This way, for me, I understand the why, get it to work per the plan, and cover it with tests later. I feel this methodology works for me :)
Leading_Yoghurt_5323@reddit
yeah because once you stop shaping the core logic yourself, you’re basically reviewing someone else’s code at high speed.
luca_ctx@reddit
What helped me most was forcing a paired planning pass before implementation. If I just tell the agent “build this” or “fix that,” I get disconnected from the codebase fast. If I make it walk through architecture options, boundaries, tradeoffs, and a staged plan first, I stay connected to the important parts even if I’m not tracking every function and parameter.
03263@reddit
Me too, but this is a rapidly fading business need. We're going to have to find our satisfaction doing something else. I'm worried about where and whether I'll find it. For me, the "art of code" has always been a primary motivator, more than building systems. Complex debugging and such.
kyoob@reddit
I wrote a Claude skill called Navigator mode that I use on my local. It's basically just instructions to act like it's sitting in the seat next to mine as a pair programmer. We look at the ticket, it asks clarifying questions that I answer, it tells me what to do, and I drive. It cuts the throughput way down, but at least everything that goes into the editor has to pass through my eyeballs and out my fingertips.
Sfacm@reddit
How has working in bigger teams worked for you? A lot of code is added by colleagues, and you still collectively owned it, right?
ImTheRealDh@reddit (OP)
Yes, if code is from others, I've already adjusted my expectations and read it carefully; if something is unclear I can just poke my colleague to explain it to me.
Sfacm@reddit
Wouldn't the same attitude work with AI code ?
ImTheRealDh@reddit (OP)
But I will get the real answer for why they did it that way. If we ask the AI, it will just go "Absolutely, that is my mistake..."
ecnahc515@reddit
Not necessarily. You ask it why it does something or how something works and it will usually just explain the rationale. If you see a flaw and tell it, it will definitely decide you're right, though. What's better is to prompt "I'm not sure X is correct, what about when Y happens?" and make it justify the approach. A lot of the time I find the agent can explain both why something isn't completely correct and why it's still a valid approach under specific circumstances. They can handle nuance much better than people give credit for.
Sfacm@reddit
Well, when I mentioned AI's BS to colleagues years ago, they said people do it too... IDK, I don't really use AI the way you do, but I am used to collective code ownership, hence my line of responses...
ImTheRealDh@reddit (OP)
Hahaha, got it, but copied code and spaghetti are easier for a human to admit to, whereas the AI just bends over with "Absolutely..."
need-not-worry@reddit
You can ask them, and if they're not a moron they should be able to answer for each and every line. Including "I copied it from SO and it's needed for some reason I can't grasp", or "oops, I copied it from somewhere else and forgot to delete it". They may have some false assumptions, but there will be assumptions. AI often just flip-flops when you ask why.
Sfacm@reddit
Really? I know it backtracks when it realises the mistake; what's flip-flopping?
need-not-worry@reddit
You ask it to explain and it backtracks. You point out the error in its backtrack and it backtracks again. A human sometimes does this too, but at least both parties learn something in the process.
Sfacm@reddit
I guess I am lucky in not using it much 😉
assemblu@reddit
The average person seemingly gets more street cred than an LLM for having free will and jotting down some code to get the expected output, somehow.
w3woody@reddit
Personally I hate having AI fully write my code--and I realized why the other day.
Because when I write code, I tend to approach it in pieces. For example, when I write a UI element in an iPhone app which presents a table, the first thing I do is write the most basic table thing that fills the space, and test it. Then I write the table thing to populate rows with the visual formatting I want (but with placeholder images and text), then test that. Then I wire up the data to populate the rows, and test. Then I wire up the interactive elements (but have those elements trigger test print statements) then test that. And so forth, and so forth.
Each iteration and each cycle of testing allows me to "grow" the code from nothing to something functional.
AI bypasses all of this. Instead, AI just spews out a big blob of shit--and sometimes it isn't even the blob of shit you want. Often the blob of shit doesn't work--and you're stuck debugging a big blob of shit that appears to have been written off the top of the head by a drunk intern who has no fucking clue.
What I'm learning is that for myself, the best way to work with AI is to have it write the code in stages--in the same way I work myself. Have it sketch the code to fill the space, verify and test it. Have it sketch the code to populate the rows with the visual formatting I want, tweak and test. And so forth. Some of the steps are so simple it's easier just to code it myself than ask AI; sometimes it's easier to have AI build the framework (like building the views in SwiftUI which populate the row), and tweak that.
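As a minimal sketch of what those stages might look like in SwiftUI (all the names here are hypothetical, not from the comment itself): each stage compiles and runs on its own before the next one is layered on, whether it's written by hand or handed to the AI as a single small step.

```swift
import SwiftUI

// Stage 1: the most basic table that fills the space -- run and verify it first.
struct SkeletonListView: View {
    var body: some View {
        List(0..<5) { _ in
            Text("Placeholder row")
        }
    }
}

// Stage 2: a row with the visual formatting, still using placeholder content.
struct ItemRow: View {
    let title: String
    var body: some View {
        HStack {
            Image(systemName: "photo")  // placeholder image
            Text(title)
            Spacer()
        }
    }
}

// Stage 3: wire up real data; Stage 4 adds interaction, starting with a
// test print before the real action goes in.
struct ItemListView: View {
    let items: [String]  // real data source plugs in here
    var body: some View {
        List(items, id: \.self) { item in
            ItemRow(title: item)
                .onTapGesture { print("tapped \(item)") }
        }
    }
}
```

The point isn't the specific views; it's that each struct is a checkpoint you can test before asking the AI (or yourself) for the next increment.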
But by using AI as a convenient way to look up APIs and to sketch simple elements--and NOT using it to just write the entire table--it allows me to build a project and still have a sense of what's going on.
Odd-Investigator-870@reddit
Enable Claude Code's learning modes then, to keep some cognitive understanding of the codebases you work in.
kyletraz@reddit
The "machine spirit" framing is perfect for this. There's something that happens when you write even just the skeleton by hand: you internalize the shape of the thing in a way that no amount of careful review of AI output quite replicates. I've landed in almost the same spot: let AI handle the parts where correctness is verifiable, but keep my hands on anything that involves a judgment call about structure or intent. Curious whether you find the spec-writing step itself has gotten easier with AI assistance, or if that part still needs to be mostly your own thinking first?
ImTheRealDh@reddit (OP)
When working with the spec, I generally provide it more context and ask it to come up with 2 or 3 aspects of the problem, which I then need to really think about. To sum it up: AI is my assistant.
kyletraz@reddit
That feeling of "the code works but I have no idea why" is exactly what pushed me to rethink how I was working with agents. When I let one run too far ahead, I'd come back for review and realize none of my own thinking was traceable in the result - someone else's intent baked into code I'd have to maintain. I built KeepGoing.dev partly to fix this: it captures what you were working on and what decisions were made, then hands that back as a full briefing at the start of every new AI session via MCP. You stay in the mental driver's seat even when the agent is doing the heavy lifting. Are you finding the disconnect worse at the start of a session, or more when you come back to review what it actually did?
Abadabadon@reddit
Make it do what you want then. Ask it to write the code, read it, revise manually or with AI, have AI create rule sets to follow your coding style, make it create documentation the way you prefer.
Make ai less like an apprentice and more like a printer.
nkondratyk93@reddit
same feeling. i handle this by keeping one task per session that i do fully manually - usually the thing closest to the actual system behavior. keeps the mental model alive. if i go fully hands-off for a week i lose track of what the code actually does vs what i think it does
ImTheRealDh@reddit (OP)
Good to know someone feels the same as me. Thanks for sharing
nkondratyk93@reddit
yeah it gets easier once you find a flow that actually fits. hang in there
professorhummingbird@reddit
Well. Duh? If you didn’t write or read the code you have no idea what’s there.
I don’t have your problem because I read the code. Practice good commit habits and read before pushing anything.
This saves you time in the long run.
kkingsbe@reddit
It’s all about how much of a leash you give it
Available_Trade_4259@reddit
what's the main issue you're seeing
Robodobdob@reddit
I certainly felt disconnected until I started using plan mode.
I almost always use plan mode now to ensure it gets the understanding of what I want and also to validate the plan. The more I plan, the more ownership I feel over the output.
I’ve been toying with the idea of opening a PR at this point to get a review on the plan from others before proceeding.
ImTheRealDh@reddit (OP)
Just to make sure I follow: plan the spec and its improvements?
unflores@reddit
I share your concern. Here is how I deal with it:
I use the agent to make my plan and help with discovery. If I feel like I understand all the points of integration well enough, I'll go full agentic development.
Otherwise, it's good to ask why. Why can't you understand it? Maybe the agent just pushes a solution through but the code it is working in isn't adapted for it.
I try to follow the paradigm of: make the change easy, then make the easy change. In the case where I don't feel like a comprehensible change is possible, I'll spend some time on refactoring the points of integration into something that makes sense to me. Make sure it's tested, maybe ask an agent to poke holes in my changes. By this point I'm accruing quite a bit of knowledge of the code and I can potentially move forward with the plan I created.
I've found that this allows me to keep better knowledge of code that an agent changes.
Minimum-Reward3264@reddit
Basically a day-one maintainer. Horrible experience
donhardman88@reddit
I had the same problem until I built a tool to solve it.
The disconnect happens because AI generates code but doesn't help you understand the codebase you're working in. You get the "what" but not the "how it connects."
I built Octocode - a semantic code search tool that indexes your codebase and lets you ask questions in natural language. When AI generates code, I can ask "how does this relate to the existing auth system?" and get actual relevant files with dependencies mapped.
The workflow that works for me:
1. AI generates the code
2. I search the codebase to understand how it fits
3. I trace dependencies to see what else might break
4. I review with actual context instead of guessing
It's not about replacing AI - it's about giving yourself codebase awareness so you're not flying blind.
Built it because I was tired of the "generate code → hope it works → debug in production" cycle. Now I actually understand what I'm shipping.
Open source if anyone wants to try it: https://github.com/Muvon/octocode
Infamousta@reddit
I have settled on letting it rip on early revisions for new stuff and then taking a couple days to refactor into something reasonable. My least favorite part of a project is when I don't know what I don't know, "you throw the first one away," etc. So I let agentic AI build the throw-away iterations and learn from the mistakes. Easy time savings on greenfield stuff.
If I'm in an established codebase I turn more towards using AI as a rubber duck. I.e., following the patterns we already have established, how could I do X? Is there a way to streamline this and generalize the pattern? And then I tend to engage in a debate about the current design where I feel the need to back up my arguments and critically think about it.
AI is not super helpful when you have a complex and working system, but it does keep me pretty engaged with the codebase and working through alternatives and potential refactors. I can't really conceive of only using it for vibe coding though.
ImTheRealDh@reddit (OP)
Thanks for sharing
No-Struggle-5369@reddit
sounds like a real-life boss fight
Abject-Kitchen3198@reddit
Whenever I try to generate large chunks of code, or doc/plan I feel overwhelmed and rarely go forward with it. If I do, it's a lot of back and forth to massage it into bite sized pieces that I feel comfortable to incorporate into the code.
Ok-Entertainer-1414@reddit
"fully agentic" is a scam invented by Big Token to make you buy more tokens. If you think it's a thing you should be doing, you're a dumbass who's fallen for marketing
truechange@reddit
I am leaning towards writing the entire Domain layer myself and let AI do the other layers (DDD/Clean Arch).
Disastrous_Crew_9260@reddit
Well, when I start working in a new repository I usually check the structure myself first, and afterwards when I encounter an issue somewhere I haven’t done changes before I ask agent for suggestions on locations for the changes if it’s not instantly clear to me.
If you just ask the agent to fix a ticket based on the ticket description you will be disconnected.
Use the agent to aid your understanding, not replace it.
ImTheRealDh@reddit (OP)
Indeed, that's what I do now
k3liutZu@reddit
I feel that’s the definition of “vibe coding” — when you don’t know/check the output.
You don’t want to do that for the important parts of your codebase.
NoStay2529@reddit
Pardon my ignorance, but I always get confused by the terminology. What exactly do you mean by spec? How the feature should work? The inputs you're providing and the outputs you should receive? Is that what it is? Or is it, in general, how each component of the entire system should work?
Leather_Secretary_13@reddit
machine spirit. that's accurate.
I made the mistake of implementing a "fun" change late at night deep in my specs a couple times. If you don't know the spirit of the code then it all blurs together. like a ship in a bottle.
Icy_Background_378@reddit
Best post I've read all week. I'm a student, but I do something similar. I usually write out broken half-code, a mix of comments, partially declared variables, and function calls, then have the AI fix "logic, syntax errors, and all //@todos". Keeps me focused on logic rather than syntax
Better-Avocado-8818@reddit
It’s because you are disconnected from it. Go slower and review everything, step in to write or correct manually when required.
ImTheRealDh@reddit (OP)
I get disconnected because I get disconnected, got it.
Kidding though, I do go slower and review more frequently now.