What are your habits / learned workflows that work well with AI
Posted by t0b4cc0@reddit | ExperiencedDevs | View on Reddit | 41 comments
Hi!
The more or less recent shift in our business comes with quite a few uncertainties, challenges, and a lot of tools trying to help.
llms, harnesses, and their tooling like mcp servers, skills...
About a year ago I gave AI a shot and was quite amazed by the models' capabilities. and while most of the harnesses and tooling sucked, I could see where the direction would go. I also happily accepted the benefits, as im usually not a fast-typing or super technical-correctness kind of coding person, so ai benefits me immensely. planning shit, talking to users and people who have a say over budgets and what directions their companies will go: half the work is talking to them. the other half is development, where most of the work for me happened on whiteboards and notes.
now i learned to fit with a lot of different styles: implementing features and fixing bugs from mail to prod in a 2h or 2-day shift, creating prototypes and small projects in weeks and months, and working on huge projects....
now all of these had one thing in common: 0 relevant documentation. which means no docs, or even wrong, outdated docs. much of the knowledge and requirements were collected by talking to people, and every other call could change things. the stuff that was written down was generally not well sorted: often in mails, teams chats, my personal folders, company folders, or somewhere on the customer side in various systems...
as i mentioned, the tooling back then sucked a bit. i found the model to be quite capable (even though back then it still made more errors) if fed the right amount of info, but it would fail horribly if i didn't curate information enough. and every failure with these systems is not just a dip but a step in the wrong direction.
naturally i tried to organize and automate this. or rather, as usual, i wasn't really being highly productive with my personal work but was exploring the capabilities, limits and possibilities by trying various things.
and no, this is not the point where I shamelessly plug my shitty vibecoded product. this is more about what i've tried and found.
i tried various ideas and systems, from memory bank systems and mcp servers to database-based approaches with vectors, relations, links etc., and i found there is a huge amount of potential. but often when i build something for myself im stopping once it helps me enough... (since i actually was working on something else, and family)
I also said: if i can hack something together in a few weeks that helps my workflow, cuts costs, streamlines the process of ai coding, increases the quality of output, and saves time and tokens... then imagine what the billions-heavy companies will produce down the line.
Here I was almost a year later, at the beginning of this year, revisiting and exploring the llm landscape. And I found very little of what I cared about, and what I did find didn't work too well. (besides better models, harnesses that are finally getting good, and the sheer quantity of context window / usage)
What I wanted to do was improve my coding work by providing amazing context to llms.
That means code. I dabbled in using compiler tools to create code maps/graphs with various ranking algorithms, filtering things and connecting code...
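The code map/graph idea described here can be sketched with nothing but Python's stdlib `ast` module. This is a minimal, hypothetical version, not the OP's actual tool: it builds a module-level import graph and ranks modules by fan-in as a deliberately crude stand-in for the fancier ranking algorithms mentioned.

```python
import ast
from collections import defaultdict
from pathlib import Path


def build_import_graph(root: str) -> dict[str, set[str]]:
    """Map each Python module under `root` to the modules it imports."""
    graph: dict[str, set[str]] = defaultdict(set)
    for path in Path(root).rglob("*.py"):
        module = path.stem
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                graph[module].update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                graph[module].add(node.module)
    return dict(graph)


def rank_by_fan_in(graph: dict[str, set[str]]) -> list[tuple[str, int]]:
    """Crude ranking: modules imported most often are likely core context
    worth feeding to an llm first."""
    fan_in: dict[str, int] = defaultdict(int)
    for deps in graph.values():
        for dep in deps:
            fan_in[dep] += 1
    return sorted(fan_in.items(), key=lambda kv: -kv[1])
```

A real version would also need call-level edges and filtering of third-party imports, which is where these projects tend to "get complicated", as the OP notes later.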
And also creating various project documentation systems. Some like the memory bank, others more sophisticated, like using a db and creating and categorizing information: requirements, bugs, features, progress, etc...
I was so sure there would be at least 3 usable products by now that I didn't bother with it (and that topic became less relevant to me due to a change in work)
Now there are of course a few hurdles that are not easily solvable, and there are many ways such tooling could work and be built. But there have to be so many ways that are just good enough that it really surprises me that I can hardly find anything relevant.
Ive used some systems that looked promising in the beginning but ultimately failed to keep up.
I guess I'd like some input from people working in this field on how you adapt. (not a vibe-written short blog post with a saas product linked)
PS: a recent example. i've built something adjacent to my work, in parallel, using claude code, then trying claude design, and also using codex to work on it. managing that is so annoying. claude knows its plans, codex doesn't have a lot, claude design is different again...
tmjumper96@reddit
This matches a lot of what pushed me into building AgentBay AI.
The hardest part with AI coding tools is not always the model quality. It is the context problem. Claude knows one version of the plan, Codex knows another, ChatGPT has a different thread, and then half the real project knowledge is still sitting in calls, notes, emails, tickets, or your own head.
I think the useful workflow is becoming:
keep code and docs as the source of truth
write down decisions as they happen
keep a lightweight handoff for the next AI session
use memory for the stuff that keeps mattering across tools
I am building AgentBay AI around that last piece. The goal is to stop re-explaining the same project context across Claude, ChatGPT, OpenClaw, coding agents, and other tools.
https://www.aiagentsbay.com
I do not think the answer is one giant chat history. It is better shared context that can follow the work across different AI tools.
chickadee-guy@reddit
Dont use it
t0b4cc0@reddit (OP)
a year ago I might have respected this opinion. but right now I can only think of rather specialized workplace/project situations where an llm wouldn't be extremely beneficial and shave off hours of writing code every day, or at least every other day...
chickadee-guy@reddit
Thats a skill issue on your end. The tools have horrific accuracy, reliability, and performance issues.
t0b4cc0@reddit (OP)
i did not find the recent models to be highly inaccurate.
a lot of it is just boring stuff that an intern could get right (and an llm does it excellently in no time), and for more complicated things, putting in the work to prime the session with what I need usually gives me surprisingly good results
chickadee-guy@reddit
The fact that whatever mess you just described is a productivity increase for you just says you're a mediocre/poor performer with little to no real responsibilities. You're telling on yourself
t0b4cc0@reddit (OP)
this is hilarious.
you might want to share with us what extremely high-tech special product you are working on.
Main-Drag-4975@reddit
I was the same but only when competing against folks with significantly less expertise and skill. Folks near my level are a lot harder to outpace when they make judicious use of the tools and I don’t.
chickadee-guy@reddit
I have yet to see this, all the folks at my level say it slows them down. Must be a skill issue on your end
Main-Drag-4975@reddit
Yes, at your experience level it’s normal to feel that way.
chickadee-guy@reddit
Eh, again, sounds like a skill issue on your end. Your inability to execute on actual tasks, or RTFM, without an LLM holding your hand just means that the tools elevated you from a poor performer into mediocrity.
false79@reddit
Are you writing novel code with every commit? I find it's easier to have a skill prepared for everything I do repeatedly
chickadee-guy@reddit
Yes? Its so much faster than the slop slingers.
Anyone who relies on claude code and skills spends all day trying to get their slop approved and then fixing the bugs they introduce
false79@reddit
Ain't no one writing unique code these days, bs.
The longer you're in this industry, the more it is the same.
chickadee-guy@reddit
I am, and again im by far the highest performer on my team. The people who are relying on LLMs for everything dont understand what they are pushing, spend all day getting their slop reviewed, and introduce nonstop bugs into production that they dont know how to fix.
Not following. I have 11 YOE and ive seen slop artists with more experience and less experience than me.
obelix_dogmatix@reddit
man you get triggered so easily. Clearly being secure is not something you learnt in those 11 years.
chickadee-guy@reddit
Huh?
false79@reddit
👋 22 YOE here. I don't advocate vibe coding with zero shot prompts. That will get you those results you don't want.
In my view, every line in code machine or human generated needs to be human reviewed and human accountable when it goes wrong. If you don't understand what is happening, it should not be merged.
It is a skill issue to know how LLMs work, break down tasks into smaller ones, provide it the sufficient context it needs, and let the reasoning do it faster than a human like you could.
I get that you love the control, but after years of doing this, I am not married to the code I write. You get hired to solve the same problems time and time again over the years. Imo, I would rather automate these patterns than hand-code it all over again.
chickadee-guy@reddit
If you are doing this, then LLM use is a productivity loss due to the extensive time spent reviewing and reworking the slop.
If this extensive, error prone, and nondeterministic "process" is faster for you than standard human development, its a huge skill issue on your end, sorry. Maybe its one of those situations where you got 1-2 years of experience, 20 times over? Adding context doesnt prevent hallucinations by any means, the context window of an LLM is extremely brittle.
I can tell your skills have deteriorated so much that you can't recognize slop. I don't code by hand because i love control; it's because it's literally faster, more efficient, and less error-prone than using an LLM. And yes, im using MCP, breaking the prompts and tasks into chunks, markdown instructions, Opus, the whole shebang, taken directly from Anthropic's documentation. It's still all slop.
If you cant recognize that, you have major skill deficiencies in multiple areas and id highly recommend you brush up before you fall even further behind.
false79@reddit
So good to hear. Karma I guess for the same attitude I had for the older folks in the office back then.
Young arrogant me could not be convinced back then. I don't think you can either. It's all good.
This applies to both of us: if you are good at what you do, you will always find work. Some though will do it faster and cheaper. It's just how the market is.
chickadee-guy@reddit
If there was data and real evidence showing real tangible productivity gains, id be immediately convinced. That data is nonexistent, and its been 4+ years of mandated LLM use.
Id suggest revisiting the drawing board and brushing up your foundational skills, or youll be responsible for a slop fire in production soon.
false79@reddit
You ought to re-read those studies. Too many people parrot the early-2025 study of 16 open source devs who used cursor for a week as a generalization of why AI does more bad than good.
The good starts with being cognizant of what you do often and automating it. I don't need a study to tell me whether AI is a waste when what I would have billed as 4 hours of work gets done in a fraction of that time by an agent under my observation. The more these smaller repeatable steps are automated, the more the dividends add up.
chickadee-guy@reddit
Feel free to show one that shows that AI makes open source devs more productive then. Ill wait.
Because the agent's output has to be reviewed and corrected? Agents fail 95% of basic business tasks. So you aren't automating the work.
You either have a skill issue that keeps you from recognizing the slop you ship, or a low/nonexistent quality bar for production.
I wouldnt be too proud to admit either of those things! So again, i suggest brushing up on those foundational skills youre letting deteriorate. Youll fall behind otherwise!
false79@reddit
I get how easy it is to dismiss something you're not experiencing, and like I said, there really isn't anything I can do to persuade you, given the stage of your career that I can relate to.
All these attacks you feel the need to make: you win, lol. Nothing will change, especially your preference to do things the best way you know how, and I am doing the same given all my experience. I'm all for doing more with less when done correctly. Somehow you need a 3rd party opinion to tell you otherwise.
chickadee-guy@reddit
Im not sure im following, you havent provided anything tangible to dismiss, you just told everyone you can ship slop and no one cares. Cool?
obelix_dogmatix@reddit
what a terrible take on a sub for “experienced” devs. Part of the job is evolving with tools. Just say you aren’t proficient at using the tool.
chickadee-guy@reddit
I only use tools if they increase my efficiency and make me more productive.
The time it takes to review and correct AI slop makes it a net productivity loss versus just using my 2 hands and brain.
You either have a skill issue in that you can't recognize slop and blindly ship it, or a low/nonexistent quality bar for production.
obelix_dogmatix@reddit
which is what I said. You aren’t more efficient or productive with it because you don’t know how to use it.
boring_pants@reddit
Really? Is this true for every tool?
I am less productive if I try to code while unicycling. Is that because I DOn't knOw HOw tO USE THE tooL?
I am less productive if I smear shit over my monitor. I guess I "JUst Don't KNow hOw TO USe The sHit-smeareD monITOR toOl"
Is it at all possible that some tools might not actually be good?
Is it a skill issue if I don't find that I get more productive if I use MS Word as my IDE? Do I just not know how to use the tool?
obelix_dogmatix@reddit
With AI agents, as someone else mentioned, the answer is unequivocally - if you are less productive with it, you aren’t doing it right.
chickadee-guy@reddit
Are the Anthropic and Codex docs the wrong place for instructions on harness setup? Because im following their guides to a tee. MCP, markdowns, subagents, hand crafted prompts, skills.
Did i forget to tell the model to make no mistakes?
KandevDev@reddit
three things that actually stuck for me over the last year:
write the spec before the prompt. a 5-line spec ("input X, output Y, must handle edge case Z, must not change file W") produces 10x cleaner code than "implement this feature please" with the same model. most "AI made a mess" stories are spec problems.
force AI to ask clarifying questions before generating. literally say "ask me 3 questions before writing any code". the questions surface ambiguity i did not know existed in my own head.
never let it merge to main directly. the "review your own AI code as if a stranger wrote it" muscle is the most valuable one to build, because the alternative is rubber-stamping and that is how the catastrophic bugs land.
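The "spec before the prompt" habit above can be made mechanical. As a hedged sketch (the field names like `off_limits` are made up for illustration, not any tool's API), a 5-line spec might be assembled into a prompt like this, with the clarifying-questions instruction baked in:

```python
def build_prompt(spec: dict) -> str:
    """Turn a 5-line spec into a prompt, and force the model to surface
    ambiguity before generating any code."""
    lines = [
        f"Input: {spec['input']}",
        f"Output: {spec['output']}",
        f"Edge cases: {spec['edge_cases']}",
        f"Must not touch: {spec['off_limits']}",
        "Before writing any code, ask me 3 clarifying questions.",
    ]
    return "\n".join(lines)
```

The point is not the code itself but that the spec becomes a checklist you fill in every time, instead of free-form "implement this feature please".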
t0b4cc0@reddit (OP)
im far from "implement me xyz" for most of my tasks. even simple shit that a junior might have gotten right with a one-liner ask gets a fat info dump on everything beforehand. but i wonder: do you start your context from 0 every time, pointing at recent code and new specs/feature infos, or do you use any kind of memory system to prime the session with the context around the current task? what kind of code discovery mechanism do you use? i hate that basic grep, for example, is "good enough" so to say, enough not to pursue further development on my code graph project, which started to get complicated and will only get more complicated with different systems and ways to integrate....
the clarifying questions thing is an interesting idea. i have a few such phrases and wordings to steer with, but explicitly triggering a feedback loop sounds interesting. what comes close to that is that im usually working in steps from plan mode, so most of the time questions will come up if there is some kind of uncertainty. so maybe this has been solved well enough for me by the models/harnesses in combination with my natural way to prompt and work? i usually work in a way where i already know what ill be getting before the implementation... this is definitely something that is very interesting to experiment with and easy enough to do.
thanks for the input
KandevDev@reddit
context strategy depends on the task shape. for a contained task (one feature, one or two files), i start with a fresh context and explicitly feed in only the relevant files + a 5-line spec. burning the context window on background loading just makes the model more likely to lose the plot.
for an ongoing task across multiple sessions, i write the spec + decisions to a CLAUDE.md or similar that the agent reads automatically. that file is the "memory" the conversation lacks. you would be amazed how much rework goes away when you stop relying on the agent to remember decisions you made in chat.
the failure mode of "let me load up all the context first" is that the agent treats everything in context as equally relevant, when actually 80% of it is noise for the current task.
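The "write the spec + decisions to a CLAUDE.md" habit above is easy to script. A minimal sketch (the file layout and helper name are assumptions, not a standard format; Claude Code does read a repo-root CLAUDE.md automatically):

```python
from datetime import date
from pathlib import Path


def log_decision(repo_root: str, decision: str) -> None:
    """Append a dated decision to CLAUDE.md so the next agent session
    starts primed instead of relying on chat history."""
    memo = Path(repo_root) / "CLAUDE.md"
    if not memo.exists():
        memo.write_text("# Project memory\n\n## Decisions\n")
    with memo.open("a") as f:
        f.write(f"- {date.today().isoformat()}: {decision}\n")
```

The design choice here is append-only: decisions accumulate as a dated log rather than being rewritten, so a later session can see not just what was decided but when.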
SinbadBusoni@reddit
Did you tell your chatbot to write this post with shitty punctuation and capitalization so that we wouldn’t suspect it’s chatbot slop?
t0b4cc0@reddit (OP)
no. i specifically do not give a shit about capitalization and punctuation on reddit. and with the recent situation of how text/content looks, im even enjoying the shit out of this totally horrible garbage-looking wall of shit.
just copy paste it into your fav chat, tell it to translate it to the language you like, set text lengths, and give directions on what you might or might not care about.
and honestly. 2 years ago id have carefully reviewed this for errors to be easy to read and digest.
SinbadBusoni@reddit
Did you make Grok write this?
originalchronoguy@reddit
Answer: Mostly Security. I do a lot of security related things at my 9-5. It is really time consuming.
I know the process, I know the dance with auditors. But it is time consuming.
So having it do the grunt work for me is a time saver.
Now on my personal projects, I can give my self-run business the same level of Enterprise-grade controls/guard-rails/governance I apply at my day job.
Then a lot more Ops related stuff. I mentioned in another post about saving hundreds of dollars a month on cloud hosting fees for my business.
It makes the $200 a month I pay for Claude worth it.
For day to day, I can build more QOL (Quality of Life) improvements for myself. I have standalone services that do checks here and there, like checking my ETL pipeline as an outside auditor, or giving me alternative critiques of my system design. I had a service that had a two-step queue and it was very brittle.
I redid it over 30 minutes of discussion: spinning up new workers, adding a sub-queue, and creating a new agent to run as a side-car.
The QOL things will grow and grow. I think people underestimate that. It is good to have an Alexa-style morning reminder that says: "Your ETL job failed over the weekend because of an infrastructure patch. The automatic patch updated MariaDB to 9.x, which now forces you to use new auth. Legacy auth is deprecated; you need to use caching_sha2_password. If you can't update your ETL, the workaround is to update these tables, or change your my.cnf. You have until April 30th to migrate your app or you will be EOL when this change is permanent. The quick solution is to revert using blue/green and restore from snapshot, then disable auto-update until you have a plan."
That was a real message. I don't keep track of MySQL changes, and my Quality of Life agent just gave me that rundown. I didn't even ask for it.
padetn@reddit
Use subagents with a dozen well written skills that can run independently for a while, and use git worktrees so you can work on several features at once.
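The worktree half of this workflow is easy to script. As a sketch, a helper that only builds the `git worktree` command lines rather than running them (the `wt-<feature>` naming and `feature/` branch prefix are assumptions; `git worktree add <path> -b <branch> <base>` and `git worktree remove <path>` are the real subcommands):

```python
def worktree_commands(feature: str, base: str = "main") -> list[list[str]]:
    """Build the git commands that give each feature (and each agent session)
    its own checkout, so parallel work doesn't clobber a shared working tree."""
    branch = f"feature/{feature}"
    return [
        # create a sibling directory checked out on a fresh branch
        ["git", "worktree", "add", f"../wt-{feature}", "-b", branch, base],
        # after merging, clean the worktree up again
        ["git", "worktree", "remove", f"../wt-{feature}"],
    ]
```

These could be fed to `subprocess.run` from inside a repo; the separation into "build commands" and "run commands" keeps the sketch testable without a real git checkout.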
t0b4cc0@reddit (OP)
yeah, only recently i created some skills and tried to use more tooling around that. but im not sure what's worth investing time in and what could even be a negative...
there's so many ways to equip these agents with good stuff
chefhj@reddit
I feel like it’s the nature of my current tasks and what they’re trying to accomplish, but I have not yet developed the level of kung fu required to use worktrees without getting clobbered by context switching.
It’s sort of the big skill I’m targeting to improve on.