“Coping” with agentic workflow adoption
Posted by sam-serif_@reddit | ExperiencedDevs | View on Reddit | 25 comments
Design professional, now in a more ‘unicorn’ front-end role. My job consists of gathering requirements from clients, translating them into spec, contributing to the front end, and validating QA. “Coping” is in quotes because I DO support using LLMs
Our company identified a big value-add last year: standardizing and maintaining product requirements will be much easier using agents to iterate on existing requirement documentation after client meetings, etc.
I like it, it makes sense, and I’m excited for this to be something that causes fewer fires.
Trouble is, the rhetoric I hear within our team is pretty demoralizing. It’s always “if you’re not doing this, it’s gonna be bad news for your projects” and “walk, do not run, to get your projects documented in this way.” Meanwhile, using AI in this way is a skill that a) isn’t always intuitive for me and b) isn’t agreed upon as a company-wide workflow
We’re a scrappy company, and it’s the Wild West of finding value in AI, so I understand the push to get us experimenting with what works and sharing those findings. There’s just an aspect of using LLMs in 2026 that is still glorified babysitting, and while it’s true that I would produce more valuable documentation of stuff that sometimes gets missed, I have trouble communicating how much it grinds at my soul
What I do not hesitate to use LLMs for: syntax, edge case sniffing, sanity-checking component architecture, CSS cleanup, supporting any and all contributing factors of my skilled craftsmanship
What I am being urged to do: automatically parse meeting transcripts AND REVIEW FOR ACCURACY, translate requirements into long form documentation AND REVIEW FOR ACCURACY, write out a suite of test cases AND REVIEW FOR ACCURACY
It’s exhausting, but I give myself grace that I’m a human and I can’t context switch as fast as the AI models they are addicted to talking to. Am I at fault for feeling largely miserable about the way our leadership is approaching this? How can I show up to work with positivity and not dread?
nkondratyk93@reddit
requirements drift is the real problem here. six months of agents iterating and the spec just quietly loses coherence.
reddit_is_a_weapon@reddit
Hey fellas, there was a previous post on this subreddit with the solution to this problem. Your leadership made a bet and they’re hoping for results while pushing you as hard as they can. But ultimately it’s up to you if that bet pays off.
sam-serif_@reddit (OP)
But I would take the bet too. I would also push for LLM-doctored acceptance criteria. I’d just do it in a way that’s compassionate to the human employees, instead of scrambling to prove that our ideas are valuable. I am burnt out from the tone.
reddit_is_a_weapon@reddit
You should mingle more with the managerial group to realize where compassion for the human employees fits into the new AI strategies.
pkmn_is_fun@reddit
out the door?
sam-serif_@reddit (OP)
For real! My team is 4 including our manager. It’s a real concern of mine
DutyStrategist1969@reddit
The framing of AI adoption as urgent is the actual problem. Teams that roll it out as just another tool in the chain get adoption. Teams that frame it as do this or fall behind get resistance. The tooling is not the issue. The change management is.
_sikandar@reddit
Suck it up, you're not a special snowflake
jaco129@reddit
The best part about being asked to review something that you seem to not believe is worth reviewing is that nobody can possibly know if you actually review that thing or not.
grizzlybair2@reddit
Until it goes to prod and doesn’t work; then you get the blame, and they check the story and the PR. We’ve already been told whoever reviews it is basically responsible lol.
throwaway1847384728@reddit
The reality in most companies is you’ll be viewed as a great developer for fixing the production fire after it happens.
Management doesn’t check commit history or PR reviewers, and has zero technical understanding to evaluate if a failure was preventable or not.
Most companies don’t reward people who spend extra time preventing production incidents.
You’re lucky if you work at such a place!
For the purposes of OP, management’s message here is probably “You need to move faster with AI. Oh, and yeah, definitely review everything, wink wink”.
grizzlybair2@reddit
Oh they expect us to get it all done quickly. Then complain about incidents. CTO and principal engineer reprimand engineers who are "guilty" of failing to find bugs before prod.
sam-serif_@reddit (OP)
It’s even just the process of getting something to review. My mind doesn’t immediately jump to “that was a great meeting, let’s run it through the LLM and see what it spits out”
jaco129@reddit
Yeah, I feel you. That’s just using it as a toy, when you’re gawking at it to see what it does. The better way is to throw the raw transcription somewhere your model can reference when asked a useful question about something discussed in the meeting. We just store the raw and the autogenerated Copilot summary in our Claude cowork space and move on with our lives.
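Concretely, it can be something as dumb as this. A minimal sketch; the folder layout, file names, and index are made up for illustration, not any tool’s actual convention:

```python
import json
from datetime import date
from pathlib import Path

def archive_meeting(root: Path, meeting_id: str, transcript: str, summary: str) -> Path:
    """Store the raw transcript next to its auto-generated summary so an
    agent (or a human) can pull up either one later."""
    folder = root / f"{date.today().isoformat()}-{meeting_id}"
    folder.mkdir(parents=True, exist_ok=True)
    (folder / "transcript.txt").write_text(transcript, encoding="utf-8")
    (folder / "summary.md").write_text(summary, encoding="utf-8")
    # A tiny index lets "what did we say about X?" queries find candidate
    # meetings without re-reading every transcript.
    index_path = root / "index.json"
    index = json.loads(index_path.read_text(encoding="utf-8")) if index_path.exists() else {}
    index[meeting_id] = str(folder)
    index_path.write_text(json.dumps(index, indent=2), encoding="utf-8")
    return folder
```

The point isn’t the script, it’s that archiving is a zero-judgment step you can do right after the meeting; the accuracy review only happens later, if and when someone actually asks a question about that meeting.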
sam-serif_@reddit (OP)
Cowork spaces sound like a great idea. I’ll have to do some investigation into that. I wish helpful ideas came from the top, not just anxiety
Leading_Yoghurt_5323@reddit
the issue isn’t AI, it’s how it’s used… if every step needs human validation, the system isn’t really runnable at scale yet
therealslimshady1234@reddit
Read The Great AI Leap Forward. It was never about increasing production, but always about maintaining power and control.
beefyweefles@reddit
lines up with everything I've observed, essentially empowers the worst inclinations and people in organizations
Adorable_Pickle_4048@reddit
I’ve got a few thoughts -
It sounds like you’re being overworked and you’re generally exhausted. Likely at having to review and manage a bunch of AI garbage.
In this regard, even the best tools available today aren’t really great at doc writing. It often takes a lot of revisions, edits, and formatting, and even then the models like to focus on odd things. I’d recommend capturing historical docs, plus the process/SOPs used to create them, as context to simplify that.
As far as a company-wide standard workflow goes, honestly there is no truly standard AI workflow that is not 100% automated. The guidance and tooling provided by the company still matter, though, and if your tools are garbage and don’t have good data, everything follows from that. I’m vocally pretty critical of bad AI tooling relative to the few good tools we’ve got. (Half of our company’s AI tools might as well be the same quality as a random vibe-coded GitHub repo trying to reinvent GraphRAG without a use case.)
Out of curiosity, how often are you reporting your findings to leadership or the broader team? Are you critical of their work or approach? It will definitely suck if you allow yourself to be a sink for all the incoming critique, rhetoric, and faux-leadership expectations that aren’t backed by real capability
If you get some good tools, I suspect there are a few things you could accelerate in your workflow. Any time the agent gets something wrong or doesn’t nail it in one shot, I treat that as a signal to update the steering/context docs that instruct and guide the agent.
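In practice that loop can be as simple as appending a dated note to a steering file the agent reads. A rough sketch, with a hypothetical file name and format (nothing here is a standard):

```python
from datetime import date
from pathlib import Path

def log_agent_lesson(doc: Path, task: str, correction: str) -> None:
    """Append a dated 'lesson' entry whenever the agent misses on the
    first shot, so the steering doc accumulates project-specific
    guidance instead of the same mistake recurring."""
    # Create the steering doc with a header on first use.
    if not doc.exists():
        doc.write_text("# Agent steering notes\n", encoding="utf-8")
    entry = f"\n## {date.today().isoformat()}: {task}\n- {correction}\n"
    with doc.open("a", encoding="utf-8") as f:
        f.write(entry)
```

Append-only with dates is deliberate: you can see which corrections are old enough to prune once the underlying model or tooling improves.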
You’re right to be concerned about the reviews for the specific kinds of docs you called out. Meeting transcripts and requirements are particularly sensitive documents, so fucking them up poses a large risk, and they’re derived from a client, not a model. That’s a communication gap, not a tooling gap; there’s no substitute for talking to the client. If the AI tooling poses a major inaccuracy risk to those docs, it may be worth questioning how the client loop operates. You definitely don’t want to destroy client trust
Anyways, I’m sorry your company is both flying blind and operating blind to the reality of your workload and of the capabilities/processes they’re providing. Hopefully some of this is useful; happy to discuss further if your experience differs
sam-serif_@reddit (OP)
Thanks for the input! my workload swings greatly just due to our pipeline and stuff but all things considered it’s quite manageable atm. In the past when I’ve been swamped I just kinda disregard some of the cognitive load that gets piled on top.
I do feel bad that I have a couple hours a week that could be spent digging into solutions but for some reason I don’t feel compelled. I’d prefer to embrace the future myself vs being forced to comply with an SOP for example.
The underlying exhaustion I feel might be due to noticing blatant patterns after my ~4 years here. We try our best, and it’s gotten better, but we don’t have a ton of proven experience in shipping successful projects, not to mention managing the morale of team members while doing so. Something in my gut tells me those two metrics are related.
Adorable_Pickle_4048@reddit
Yeah no problem dude. Morale and persistent patterns in the workplace are 100% a leadership problem. And definitely agree unsuccessful shipping reflects that trend.
If you sense the morale shift, your coworkers are probably on a similar page as you. Could be a good opportunity to rally the troops and start accumulating team and org level feedback so that leadership will start listening.
sam-serif_@reddit (OP)
I actually had a design team member reach out after she heard me give pushback to our manager about poorly defined processes. We ended up sharing the sentiment that this is no longer the job we signed up for, but that it’s doable if we can work together instead of feeling trapped.
She took a course in AI for UX Design and has been sharing some findings so I’m hoping to apply that to my work too
I’ve been here long enough, and they need me badly enough, that I don’t feel shy about speaking up anymore!
Adorable_Pickle_4048@reddit
That’s fantastic, and glad there’s already momentum building in your favor.
I’m recalling similar circumstances recently at my own company. All of our engineers, and our org broadly, are effectively at capacity and still behind on proposed deliverables. Luckily our leadership isn’t totally blind, but personally I’ve been taking more liberties to simply bring people together and be frank, in concrete terms, about what’s literally possible capacity-wise, what will actually help, and what will not
hipsterdad_sf@reddit
The "standardized component library via AI" pattern you are describing is one of the most common failure modes I see with agentic workflows. The idea sounds great on paper: feed the LLM your design system, let it generate components, then have humans review. In practice the LLM generates something that looks correct but subtly diverges from your actual patterns, and the review burden on the human becomes enormous because you are essentially diffing against an invisible spec.
The part about your mind not jumping to "let me run this through the LLM" is completely reasonable. That workflow only makes sense when the task is well defined and the expected output is easily verifiable. Meeting notes to action items? Sure. Translating a Figma comp into a component that matches your existing patterns? The LLM does not actually know your patterns, it knows patterns from its training data, and the gap between those creates work that feels like it should not exist.
What has worked for teams I have talked to: use the AI for the boring scaffolding (boilerplate, test stubs, repetitive CRUD) and keep the design system components human authored. The design system is where your product's opinion lives, and outsourcing your opinion to a model trained on everyone else's opinions is how you end up with a generic product.