Went back to Autocomplete after Claude Code & Codex! Agentic AI really is a trap!
Posted by StoriesWithGR@reddit | ExperiencedDevs | View on Reddit | 34 comments
My Background
I have ~20 YOE in software dev and have held CTO, Founder, SSE and Training roles.
The other day I had a fairly complex task: creating a crawler that needs capabilities beyond what the otherwise excellent CrawleeJS package provides. The details aren't necessary for this post, but FYI I had to implement a custom Session algorithm which would send requests with custom headers + Proxy Configuration under a certain rate limit / advanced timing controls. This is a frequently requested feature in the CrawleeJS community via GitHub Issues.
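To give a rough idea of the timing control involved, here is a minimal TypeScript sketch. `SessionConfig` and `RateLimiter` are my own illustrative names, not Crawlee APIs, and this is only the shape of the problem, not the actual solution:

```typescript
// Illustrative sketch only: SessionConfig and RateLimiter are invented names,
// not Crawlee APIs. The limiter enforces a minimum spacing between requests.
interface SessionConfig {
  headers: Record<string, string>;   // custom headers for this session
  proxyUrl: string;                  // proxy this session routes through
  minIntervalMs: number;             // rate limit: minimum gap between requests
}

class RateLimiter {
  private nextAllowed = 0;
  // `now` is injectable so the timing logic can be tested deterministically
  constructor(private minIntervalMs: number, private now: () => number = Date.now) {}

  // Returns how long the caller must wait before issuing the next request,
  // and reserves that slot.
  delayForNextRequest(): number {
    const t = this.now();
    const wait = Math.max(0, this.nextAllowed - t);
    this.nextAllowed = Math.max(t, this.nextAllowed) + this.minIntervalMs;
    return wait;
  }
}
```

Each session would hold one of these, so requests through its proxy stay under the limit regardless of how many crawler workers share it.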
I am no Luddite
Before the "pure" vibe coders jump at me, let me say I used Claude Code CLI / VSCode Extension, Codex all in Max / Extra High Effort reasoning across OPus 4.7 & 4.6, GPT 5.5 along with relevent plugins & skills added Eg Context7 to fetch the latest documentation
I used Plan mode to describe my requirements via a detailed Markdown file as prompt and got a reasonable plan where the AI justified its choices (I did this across all the models / harnesses mentioned above).
The problem is two fold:
1) The justifications given in plan mode can be based on hallucinations by the AI.
2) The code generated is kinda "correct" but is so suboptimal that it is unusable in production and not extendable in the future.
Hey but as long as the Code works right?
What I've learnt in the Business of Software.
Business doesn't really care about "correct" code. At times they don't even care about performance unless there is a sizeable impact on cloud bills etc. What we do care about is speed of iteration, and the number ONE enemy of that is tech debt. At least I've seen that time and again in my career. Tech debt leads to everything from competitors racing past you to large-scale outages / security issues.
In my case, the code would actually fail for some edge cases.
What AI is currently good at
- Very fast POCs / Prototyping
- A front end with popular frameworks like React + Tailwind. And even there I had a case where it totally messed up a complex layout management of shadcn's Resizers, even though it could perfectly read a well-specced Figma file with the Figma MCP.
- Learning about SDKs / APIs you're not familiar with, where you can provide some context of what you know and it provides good learning resources to go from there to what is required.
- Great for isolated functions / modules where scope is very very compact.
- Given an initial skeleton by you, it can flesh it out with production-level guards (try catch), building test case assertions, reasonably complex test data etc.
- Good Autocomplete even on plain English with Markdown / Code Comments
- Voice dictation with the many apps out there.
- Good Documentation (though it tends to be quite verbose there too)
What I ended up doing
- I then used Claude's web interface with Opus 4.6 to really deep dive into the Crawlee codebase. I was already very familiar with the SDK but this project needed me to understand its internals. The big thing is, it worked while the context was shallow; I just asked it a bunch of questions unrelated to my project.
- I then used GitHub Copilot / VSCode's autocomplete to sketch out my own solution (in effect nullifying the utility of "Plan mode")
- I then used Claude Code / Codex to fill in the implementation gaps all the while asking for complete type signatures and code snippets in plan mode and iterating many times.
My Result
I was actually satisfied at the end!
- The BIGGEST TAKEAWAY is that my skills didn't atrophy; rather, I actually learned a LOT!
- I achieved something MUCH MUCH FASTER than if I had done it without any AI.
- By using Claude Code / Codex in the beginning, I actually wasted almost a week!
What Worries me the most!
All this talk of "Super Productivity" makes one feel (especially the experienced folk) that we are no longer "good enough" or that we are "falling behind".
My response to this is
A) People & biz care about the ENTIRETY of the SDLC, not just the prototype. I think I can get there faster with a manual + AI approach, SO WHY THE HELL NOT?
B) With the recent direction of agents like Cursor going all in on the Plan + Implement interface rather than being a "sidebar" in VSCode, I really worry about the quality of both Software AND Software Engineers in the future.
Why am I writing this
If there's one thing so many years of experience have taught me, it's that even the best of us have blind spots. Does anyone agree / disagree with me? I am especially asking the beginner / intermediate devs who are "AI native". Or do others also have similar stories? It's a genuine question because I want to improve myself and hopefully pass on the improvements to my team as well!
PS: This wasn't written by AI; I hand-wrote it with the help of VSCode AI Autocomplete in a Markdown file!
skidmark_zuckerberg@reddit
I would be careful in considering AI good at FE development. AI turns modern frontend codebases into slop in my experience, unless you are someone who can prompt it well and who also understands what good FE architecture looks like. It's fine for POCs, but it takes a lot of effort to consistently get an LLM to produce good FE code, especially with loose libraries like React where best practices are not enforced. It's probably a bit better at something like Angular, since it's a framework and has very strict ways of doing things.
Oftentimes for React, it takes the most naive approach and spits out working code, but it doesn't care about state management, component re-renders, or any other common FE patterns that equate to a system that is maintainable and scalable. I think this has always been the problem with FE; to most (especially BE engineers), when things visually look fine, it's assumed to be all fine and dandy. But when you look into the code or run a profiler, it's an absolute nightmare.
originalchronoguy@reddit
I disagree. Agentic LLMs are a bigger threat to FE than to the backend.
You can design MCPs that study your design patterns and create proper design artifacts / CSS tokens.
The fact that I can MCP a Figma or Storybook style guide (source of truth) means I can have pixel precision based on factual specs: proper variable colors, proper leading / font kerning, correct breakpoints.
I gave an MCP agent to some backend developers and they re-did the entire front-end CSS tokens, better than what the front-end teams did. It pointed out so many anti-pattern inconsistencies. For example, a bulleted list had different margins and alignment depending on which pattern was using it. For 3 years, no human caught that. After producing the style tokens, the agent pointed out every contradicting artifact, showing that the official style repository is poisoned. The fact that it can execute and apply our corporate branding in a way that is indistinguishable from Figma mocks is extraordinary. Now we have tokens you can apply to any framework: Vue, React, Angular, and Mobile. Before, there were teams building components separately, with a lot of drift.
I've been told not to release the style tokens, as it means a lot of jobs are on the line if I can produce a more deterministic, non-contradicting set of consistent tokens following corporate governance on design.
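The kind of contradiction audit described here can be sketched as a small checker. The token names and data shapes below are invented for illustration, not the commenter's real tooling:

```typescript
// Invented for illustration: a tiny audit that flags component-level token
// values contradicting the style guide's base tokens (e.g. two bulleted-list
// patterns with different margins).
type Tokens = Record<string, string>;

function findContradictions(base: Tokens, perComponent: Record<string, Tokens>): string[] {
  const issues: string[] = [];
  for (const [component, tokens] of Object.entries(perComponent)) {
    for (const [name, value] of Object.entries(tokens)) {
      // A component redefining a base token with a different value is drift
      if (base[name] !== undefined && base[name] !== value) {
        issues.push(`${component}: ${name} is ${value}, style guide says ${base[name]}`);
      }
    }
  }
  return issues;
}
```

Run against a real token dump, a checker like this surfaces exactly the "different margins per pattern" class of inconsistency the agent caught.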
skidmark_zuckerberg@reddit
Yes, but UI design is not the entire job of a FE engineer; that's really my point. There is an entire underlying system that needs to be created. I use LLMs for UI work like this; UI design work is the least concerning part of my job. The real distinction is in systems design for FE. Not every application has a super complex UI, but a lot of SaaS companies have very complex UIs. Take Figma or Jira for example; this type of frontend engineering is not something an LLM can simply do. It takes a team of experienced FE engineers who understand good frontend architecture to accomplish it.
Too many people look at the UI and think it's all about design. Yes, that is a key part of it, but there is typically complex business logic underneath it and a system that enables it to be performant, scalable and, most importantly, maintainable.
duch-92@reddit
Exactly. As a FE dev, when I hear AI is good at FE, I recall when, a long time ago in my company, backend devs were doing frontend because the company wanted to save money on hiring devs. The mess is still unfixable after years. They were also saying that FE is easy.
skidmark_zuckerberg@reddit
I do full stack with a focus on FE, and in my last company we had pure backend devs doing FE work and there was a lot of fixing, mentoring and PR reviews to try and guide them into the correct ways of doing it.
Some examples; there was a server state caching strategy we implemented with React Query, and almost every PR I reviewed from a pure BE dev was taking the most naive approach and simply fetching data in useEffect calls, completely short circuiting the caching strategy we implemented. I also found they never considered rerenders which compounded this issue of using a side effect to hit the network for data.
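The short circuit is easy to see in a hand-rolled sketch of the shared-cache idea behind server-state libraries. This is NOT React Query's real API, just the principle that a per-component `fetch` in `useEffect` bypasses:

```typescript
// Hand-rolled sketch of the shared-cache principle (not React Query's API):
// every caller asking for the same key shares one in-flight promise, so the
// network is hit once. A naive per-component useEffect fetch skips all this.
const queryCache = new Map<string, Promise<unknown>>();

function cachedQuery<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
  if (!queryCache.has(key)) {
    queryCache.set(key, fetcher()); // only the first caller triggers the fetch
  }
  return queryCache.get(key) as Promise<T>;
}
```

Ten components rendering `cachedQuery("user/1", …)` cost one network request; ten components each fetching in their own effect cost ten, plus one more on every re-render that re-runs the effect.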
We also had a composable form system set up, where we built out reusable form input field components that were wrapped with React Hook Form controllers. The app was very config heavy, so everything needed its own config form, heavily influenced by specific business requirements. These components allowed you to compose any form as a React Hook Form. Most importantly, there was a validation library we used on top of it where you'd write out a form validation schema, and then a form schema test file. You'd take a TDD approach: create a validation schema, then write tests based on that validation and other business requirements first. So when you composed the form, you knew it was good to go. These devs would completely disregard this and create controlled forms from scratch with almost zero validation and no UI consistency. Then, after pointing it out in a PR, they'd make a fuss about needing to redo these things.
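The schema-first flow can be illustrated with a hand-rolled validator. Everything here is invented for illustration; the real setup used React Hook Form plus a validation library:

```typescript
// Invented for illustration: schema first, tests against the schema first,
// then compose the form knowing validation is already covered.
type Rule = (value: unknown) => string | null; // null means the field is valid

const required: Rule = v =>
  typeof v === "string" && v.trim() !== "" ? null : "required";

function validate(schema: Record<string, Rule>, data: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const [field, rule] of Object.entries(schema)) {
    const err = rule(data[field]);
    if (err) errors.push(`${field}: ${err}`);
  }
  return errors;
}
```

The TDD step is writing assertions against `validate` for each business rule before any form component exists, which is exactly what a from-scratch controlled form throws away.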
I think a pure BE dev looks at frontend and thinks it's simple, but it can be just as complex as any other system in software, especially in applications that have a very complex UI. Like I mentioned, modern FE is not pixel pushing; we aren't doing FE work in 2010 anymore. These are complete systems that have good and bad patterns. In my full stack experience, I know both sides of the coin are complex and I don't pretend that one is easier than the other. Sure, they both have simple aspects, but as a whole.. not so much. It's just interesting that people with little FE experience seem to think it's easily replaced with AI. If so, then why wouldn't the 'simple' REST API work a BE dev does also be easily replaced?
StoriesWithGR@reddit (OP)
What proof should I attach that this was hand-written? Since most of the comments are ignoring what I am actually saying and accusing me of using AI to advocate NOT TO USE AI!
throwaway_0x90@reddit
No human types things out like this so I'm just assuming you're lying. It doesn't help that your post history shows you've posted this to like 6 different subs.
2053_Traveler@reddit
If you have an argument, state it clearly and provide a couple brief arguments and then let people discuss. The shape of this long post looks sloppy and doesn’t clearly and concisely support the argument that Agentic AI is a trap. If you are arguing that being in control of the development process and using two decades of experience is helpful, I think most people here will agree with you so it’s not really a contradictory view.
StoriesWithGR@reddit (OP)
Fair point, even I like short posts, but I was very purposeful in doing this because
A) There will always be a "you didn't use tool X" or "you did it the wrong way", and I wanted to negate that, and
B) I wanted to take the reader through the journey of what I did initially, in case they fall into the same trap / have words of advice for me!
originalchronoguy@reddit
I would not use Claude's plan mode. I would create/use my own planning agent.
For larger projects, Claude's plan mode works well if and ONLY IF you have the expensive Enterprise version, which is similar to GitHub Fleet mode. Plan mode works great, as you describe, for compact workflows. It stores stuff in memory md. There is no way to do real "delegation" or hand-off, and things get lost in translation UNLESS you go with Enterprise, which is a LOT more than the $200 Max plan.
Creating your own planning agent may cost more tokens up front, but you can use a cheaper model or one from a different vendor. You then have control over delegation and hand-off. Have a fleet orchestrator agent do that manual hand-off.
KandevDev@reddit
the post describes a real pattern but i would push on the framing. the agent is not the trap, the lack of constraint is. an agent given a vague prompt produces vague code. an agent given a 5-line spec ("input X, output Y, must not change file Z, must pass these tests") produces code that is often better than what i would have written under the same constraint.
the way to know which side of this you are on: do you have a clear spec for what "done" looks like before you start the prompt? if no, autocomplete is correct, the agent will just make your bad spec into more code. if yes, the agent is a force multiplier. the trap is using agents as a substitute for thinking, which is a user-error not a tool-error.
StoriesWithGR@reddit (OP)
I gave it about 200+ lines of Markdown as a detailed spec doc, but I see your point. I think what I've learnt is that in order to add the constraints you mention here, we already need to be very well versed in the environment at hand. So the hard work of manual learning doesn't go away. It's all the BS hype marketing that makes me feel I can "get away with it". Thank you for your reply!
MoveInteresting4334@reddit
Are you sure?
There it is. I think we have differing views on what “hand wrote” means.
StoriesWithGR@reddit (OP)
I barely accepted any of the Autocomplete suggestions. Why would you think I would even do this? What's my end game? I'm not promoting any product, not karma farming; look at my Reddit history, I am barely a contributor, and I'm even advocating not to go gung ho on AI!
MoveInteresting4334@reddit
I’m not saying you accepted all suggestions. I’m saying almost all this text is suggestions you accepted.
As for why, if I had to guess, you’re larping as a dev with 20 YOE. You certainly don’t write like someone with that much professional experience, especially someone who has been a CTO.
StoriesWithGR@reddit (OP)
It's not accepted suggestions, and I have no way of proving it to you! I wrote this in 10 mins and didn't think Reddit needed a C-suite-level formal document! :)
MoveInteresting4334@reddit
If none of this text is suggestions from the AI, how were you using AI autocomplete?
Or just, you know, proper spelling and punctuation.
StoriesWithGR@reddit (OP)
Bro, I just literally typed it out as the thoughts were coming! I didn't think I'd be so bullied for it. I shouldn't even have mentioned the autocomplete; I thought I was doing the right thing by disclosing, because I wrote this in the VSCode text editor and just accepted a few spelling suggestions while in the middle of work! I will keep in mind for my next Reddit post that all spelling and punctuation should be right! Honestly, I didn't expect it to make a difference, and I already see a ton of useful replies. So anyway, thank you for your feedback!
sweetno@reddit
At this point, I start recognizing AI-generated posts by the sheer shape of their text.
Hawful@reddit
This absolutely is not entirely AI generated. This is a human voice. Possibly AI formatted, but I liked using em dashes prior to AI, so maybe I'm just more sensitive about these kinds of accusations.
sweetno@reddit
Then congrats, you've learned to write AI slop without AI.
TheophileEscargot@reddit
It doesn't look AI to me.
I don't think this text is AI, unless it's been through a "humanization" process to insert mistakes and take out the rhetorical flourishes.
StoriesWithGR@reddit (OP)
Why would I go to that much trouble for a post where I am not promoting anything, nor am I a frequent redditor? Anyone is welcome to see my history and see I have nothing to prove!
StoriesWithGR@reddit (OP)
This was typed out and I don't know how to prove it! I think this is going to be a big problem: how will we separate the genuinely well-thought-out posts from the AI-generated ones?
StoriesWithGR@reddit (OP)
I literally typed this out. Any Long post = AI ?
https://github.com/user-attachments/assets/d1962fc0-6a61-4a43-9362-fcfe31371127
Why would I use AI to type out a post where I am not promoting anything and advocating for a "safer" approach to using AI?
roger_ducky@reddit
Design first.
Check that the unit tests you asked the AI to write, as the "living documentation", actually read like documentation. Make sure they match your design.
Throw linters + security scanner + mutation testing framework at the codebase.
Once those pass, get a stupider AI to review the "PR" based on the design, the original story, and the info in the PR template, then have the original implementer respond to the comments about the "PR" and, this is the important part, ask the developer what to do next.
Human looks at the argument, the code, and original design. Expect to have about a third of it be “uh, totally questionable implementation there. Do it this way instead.”
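The gate ordering in this workflow can be sketched as a pipeline of pluggable checks. The stage names below paraphrase the comment; none of this is real tooling:

```typescript
// Stub pipeline sketching the order of review gates described above.
// Each stage is a pluggable check; the pipeline stops at the first failure.
type Check = { name: string; run: () => boolean };

function runGates(checks: Check[]): { passed: string[]; failedAt: string | null } {
  const passed: string[] = [];
  for (const check of checks) {
    if (!check.run()) return { passed, failedAt: check.name };
    passed.push(check.name);
  }
  return { passed, failedAt: null };
}

// Gates roughly in the order the comment describes (stubbed as always-passing):
const gates: Check[] = [
  { name: "design-review", run: () => true },
  { name: "unit-tests-as-docs", run: () => true },
  { name: "lint+security+mutation", run: () => true },
  { name: "ai-pr-review", run: () => true },
  { name: "human-review", run: () => true },
];
```

The human review deliberately sits last, once the cheaper automated gates have already filtered the obvious problems.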
StoriesWithGR@reddit (OP)
That seems like a rock-solid system. I did go with linters + tests + reviewing with another AI, but I think you have a more extensive setup.
Just going over your answer at a high level, I personally think one can expect productivity gains when all this is automated / packaged. At this point, I just feel that with the current state of AI it adds more overhead, and a manual + AI approach gives like 5x more productivity :)
roger_ducky@reddit
You have to do all the proper documentation for junior devs anyway. I find my project documentation gets way better when you get an amnesiac agent with ADHD involved in implementing stuff.
Everything goes sideways otherwise.
StoriesWithGR@reddit (OP)
Yes, I find the AI to be a very knowledgeable intern, but with very bad ADHD! Ironically, I have ADHD too!
Connect_Detail98@reddit
Yesterday I spent 20 dollars letting Claude loose on a problem. It went in circles for like 30 minutes until I decided to step in. My first Google search turned up a GitHub issue that Claude didn't find.
StoriesWithGR@reddit (OP)
EXACTLYYYYY I've faced this many times. Thank you for this!
Connect_Detail98@reddit
And someone will say "It's your fault because you didn't give it a prompt that guides it on how to properly debug, which includes searching GitHub issues...."
I guess this thing with an IQ higher than the average developer has no idea that one of the first steps to debug a problem is to Google the error and the observed behavior. Like, even a kid could figure that out.
bowlochile@reddit
Cool karma farming bro
StoriesWithGR@reddit (OP)
Not karma farming; look at my Reddit history, I am barely a contributor.