Is using lots of AI tokens per feature a sign of low efficiency?
Posted by arstarsta@reddit | ExperiencedDevs | View on Reddit | 31 comments
I have 8 years of backend experience, no frontend experience and only two months of Claude experience.
What I have noticed is that I'm clueless about frontend and just describe what I want like "a page of sales grouped by division" and the AI consumes tons of tokens.
But in backend I'm much more precise, saying things like "read table sales and remove the duplicates from the data structure in the insert function". It's even more noticeable when debugging, where I can point out where the error is in backend but not frontend. Most errors in backend I already catch when approving an edit, and I tell Claude to fix them before editing.
When I develop frontend I have reached the 5h Claude limit in 2h, but in backend I have never even gotten over 10% in 5h. I would say the complexity is about the same for backend and frontend if the code were written optimally. The token usage seems to be negatively correlated with my skills in the area.
In summary, Claude helps me be 50% more effective in backend for about $1 a day, while it enables me to do frontend at all, but at a higher cost and lower quality.
Future_Manager3217@reddit
I’d separate discovery tokens from implementation tokens here.
In the frontend case the model is doing two jobs: figuring out an architecture you don’t know yet, then changing code inside it. That will burn tokens even if the final feature is small.
One thing that helps: spend one explicit pass asking it to map the relevant FE components/state/API boundaries and propose acceptance tests. Then start a fresh implementation pass constrained to that map.
High token burn is a smell mostly when it doesn’t leave you with better constraints for the next run.
damnburglar@reddit
Define efficiency
arstarsta@reddit (OP)
Tokens / story points
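To make that concrete, here's a minimal sketch of the metric. The token counts are made-up illustrations, and `tokens_per_point` is a hypothetical helper, not anything Claude actually reports:

```python
# Hypothetical metric from this thread: tokens burned per story point.
# The token counts below are invented for illustration only.
def tokens_per_point(tokens_used: int, story_points: int) -> float:
    if story_points <= 0:
        raise ValueError("story points must be positive")
    return tokens_used / story_points

# A familiar domain (backend) vs an unfamiliar one (frontend),
# both estimated at the same story-point complexity.
backend = tokens_per_point(20_000, 5)
frontend = tokens_per_point(400_000, 5)
print(backend, frontend)  # the ratio, not the absolute numbers, is the signal
```

The caveat is that the ratio only means something if the story-point estimates are comparable across tasks and teams.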
Enum1@reddit
Building something with technology you have no idea about seems quite efficient to me.
What's the alternative? Spending months learning the technology?
Goingone@reddit
Tech lead FAANG?
You really think people should be developing things with AI if they don’t understand how the code works?
Unless AI gets smarter, the ideal path is knowledgeable people using AI as a tool to be more efficient. It should not be people creating code they don’t understand and assume AI is doing the correct thing.
Enum1@reddit
Good use of AI is to use it to learn the technology while building with it. The LLMs can give you the relevant information you need right then, faster than anything else before. It's not about blindly trusting AI-written code you don't understand, it's about learning 10x faster. That's the key to not building slop.
Goingone@reddit
AI tells you what you want to hear. I haven’t been given a suggestion yet that I can’t overturn by saying it makes no sense. So either the initial recommendation was nonsensical or it bends over each time I push back.
Either way….that is not an ideal learning tool.
aj0413@reddit
lol now we know why GitHub is dying
arstarsta@reddit (OP)
The results are quite buggy. The alternative is for management to get frontend resources for frontend work to ensure quality.
Enum1@reddit
Then you need a better harness, better QC, better skills etc.
You can use the LLM to educate you on it. What are common patterns, what are best practices...
The days when knowing the syntax/quirks of a programming language was a valuable skill for an SWE are over.
rovermicrover@reddit
You flip between saying the time for knowledge is over and saying you should use LLMs to teach yourself.
Did you even write this post? If you aren’t even writing the post then what is the point?
Do you think your laid off coworkers deserved it? That if they had just listened to your esoteric knowledge they too could have been saved? Do you think you are special?
Enum1@reddit
I don't see the need to get personal. Did I hurt you?
But I'll respond regardless. There are different skills you need as an engineer.
Knowing syntax is no longer one of them. Understanding higher-level concepts is. Breaking down high-level problems into actionable tasks is. Those have always been the core of being a good engineer, just more so now that the typing is automated.
rovermicrover@reddit
You sure like saying blank platitudes.
metal_slime--A@reddit
It's almost like "front end" requires actual depth of knowledge, and it might be an insulting concept to just deploy "front end skill" and one-shot a layout and function you can barely distill into a comprehensible description 😑 (OP, this isn't targeted at you specifically. I'm tired of people talking trash about how badly models perform when they prompt "make layout awesome" and expect magical results that read the prompter's mind.)
arstarsta@reddit (OP)
I agree.
What I'm interested in is whether you can tell there is a problem from how many tokens someone consumes for a task of a given complexity.
Is $100 worth of tokens for something that should take 50 lines a symptom that the developer doesn't know what they are doing?
theDarkAngle@reddit
Personally I don't think AI generation prompted by someone who doesn't really understand the target technology is good for more than prototyping, poc's etc.
I mean, you can ship it but it will probably have a ton of bugs and unstable behavior that is really hard to eliminate without a ton of expertise.
Even pure vibe coding, never touching code, a good engineer will have much better intuition about when to nudge the AI into considering a refactor rather than keep digging a deeper hole.
rovermicrover@reddit
Front end seems easier to describe behavior for, and you should already have all API inputs defined in some type of schema. My expectation would be that frontend would use fewer tokens.
In practice the front end engineers burn through a lot more tokens for simple changes than I do on the backend.
I think it in part comes down to the verbose nature of templates and css variables names causing token bloat. Output tokens are the most expensive after all and aren’t cached by any platform.
The nature of front end state management most likely doesn’t help much either with crazy data model coupling leading to context bloat.
arstarsta@reddit (OP)
I have seen another developer vibe-code a frontend with raw CSS and HTML. I personally would probably have told Claude to use a library like Bootstrap or MUI to avoid generating everything from scratch.
rovermicrover@reddit
I think your original question could be answered, but you would need a defined task in a specific codebase worked on by a good number of engineers with different skill and experience levels, and then come up with some type of matrix of general experience vs domain experience vs tokens burned.
I mostly use LLM auto-completion and targeted prompts for refactoring. My daily domain is also weird scoring and recommendation pipelines deeply coupled to arcane business workflows, so the LLMs just aren't as helpful to me. So my experience isn't typical for most software developers. Most of my job isn't even code, it's talking to people and gleaning out business practices. 🤷🏼♂️
arstarsta@reddit (OP)
Maybe token per story point in scrum?
rovermicrover@reddit
Not a terrible idea. It's all made-up metrics, but at least story points have a general consensus.
gfivksiausuwjtjtnv@reddit
Confounding factors at play here, your prompts, your repo, your FE setup…
For FE is your AI churning through tool use trying to figure out how everything works? Is the code absolute spaghetti?
EdelinePenrose@reddit
my humble suggestion is that you're asking the wrong question. your focus should be on learning frontend architecture over implementation details, and that will improve the quality of your prompts.
claude itself can guide you in this.
make sure you’re using adequate model settings.
i would also recommend using strict test driven development.
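As a minimal sketch of that TDD loop, using OP's "sales grouped by division" example (the function name `group_sales_by_division` and the data shape are assumptions):

```python
# Step 1: write the test first, so the model has an executable spec to hit.
def test_groups_and_sums():
    rows = [("EU", 100), ("US", 50), ("EU", 25)]
    assert group_sales_by_division(rows) == {"EU": 125, "US": 50}

# Step 2: only then let the model (or you) implement against that test.
def group_sales_by_division(rows):
    """Sum sale amounts per division from (division, amount) pairs."""
    totals = {}
    for division, amount in rows:
        totals[division] = totals.get(division, 0) + amount
    return totals
```

The point is that a failing test pins down the behavior before any tokens are spent on implementation, which keeps the model's edits scoped.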
arstarsta@reddit (OP)
I want to discuss an observation that using lots of tokens is a sign that people don't know what they are doing.
I'm not asking about my personal development here as it's no use for the community anyways.
NotMyGiraffeWatcher@reddit
I would challenge that theory. I think there's more nuance than just the number of tokens.
There is the quality and size of the prompt, as well as the scope of the feature and even the style of the developer.
I kinda think that if the numbers are in the top or bottom 10%, something might be weird, but it's not a true metric.
arstarsta@reddit (OP)
Yes, the scope of the feature matters, but let's assume that's a known quantity.
I would say the quality of the prompt is much better if you know the code, which leads to fewer tokens used.
EdelinePenrose@reddit
oh, in that case, probably not.
aj0413@reddit
The answer is yes.
It’s like when you’re trying to drag specs and reqs out of a client or stakeholder: the more questions you have to ask and the more you have to infer from their answers, the more time you both waste, right?
Using LLMs works the same, it’s just that you’re the client and them the engineer now
Efficacy rises with technical proficiency and knowledge of what exactly you’re asking it to do
Saying “update all my PRs with xxxx” can lead to token burn as it has to infer how to do that, probably defaults to API/GraphQL calls, etc… vs “use GH cli to update PRs in repos in directory” which will be much more efficient and effective
Same is true for coding tasks
Being a good communicator, understanding how context works, understanding the tools you and the LLM should be using, and understanding the tech stack all come together to affect efficiency
mxldevs@reddit
Yes, you're basically vibe coding, instead of instructing the AI what you're looking for.
skidmark_zuckerberg@reddit
Not saying you aren’t capable by bringing this up, but this is why experts in a given domain will still be needed even with AI. If you don’t know the architecture of a given side of the stack, you don’t know how to prompt it and course-correct it. It’s like a FE dev building an entire BE system. A pure BE dev would be more efficient with the LLM simply because they know what to do. Goes both ways.
throwaway_0x90@reddit
No correlation