AI is better at backend development than frontend and it’s not even close
Posted by Few_Homework_8322@reddit | programming | 13 comments
I’ve been experimenting a lot with AI coding tools lately, and it’s becoming pretty clear that AI handles backend development far better than frontend work. Backend logic tends to follow predictable patterns with cleaner input-output structures, so AI can reason through it and generate decent results with minimal context.
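To illustrate what I mean by "predictable input-output structure", here's a minimal sketch (the handler name, fields, and prices are all invented, purely for illustration): a typical backend handler is mostly parse, validate, compute, respond, and that shape is exactly what AI tools pattern-match well.

```javascript
// Hypothetical backend handler: the names and values are made up,
// but the validate -> compute -> respond shape is the point.
function createOrderHandler(body) {
  if (typeof body.quantity !== "number" || body.quantity <= 0) {
    return { status: 400, error: "quantity must be a positive number" };
  }
  const UNIT_PRICE = 9.99; // invented constant, purely illustrative
  return {
    status: 201,
    order: { quantity: body.quantity, total: body.quantity * UNIT_PRICE },
  };
}

console.log(createOrderHandler({ quantity: 3 }));  // { status: 201, ... }
console.log(createOrderHandler({ quantity: -1 })); // { status: 400, ... }
```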
Frontend, on the other hand, is where things fall apart. AI can build basic components and layouts, but as soon as you need real design quality, complex state management, or something that feels polished and professional, it struggles badly. It often produces UI that looks generic, inconsistent, or just wrong in subtle ways that a human developer or designer would never miss.
Backend code is easier for AI because it’s more about structure and logic than subjective design. But once the codebase grows or the project involves multiple services, even there AI starts to lose track. It does well in isolated chunks but can’t reason properly across an entire system or keep architecture consistent over time.
I’m convinced that, at least right now, AI is much more of a backend assistant than a frontend builder. Curious if anyone else feels the same way, or if you’ve had a different experience getting good results from AI in frontend-heavy projects.
Majik_Sheff@reddit
Fuck.
Off.
BlueGoliath@reddit
Sure would be great if this sub had moderators.
shevy-java@reddit
That depends. There are moderators that ban and censor tons of things. On #kde you may not criticize Nate, for instance, not even after he added the "donate now or else" widget-daemon. Old-school KDE devs would never have done that.
Few_Homework_8322@reddit (OP)
Thank you. Just curious about everyone's thoughts.
elmuerte@reddit
The quality of machine learning depends entirely on the quality of the information it was trained on. And the quality of information deteriorates as the technology it is based on evolves (well, at least changes). (This is also one of the things that causes technical debt.)
The quality deterioration was quite visible on Stack Overflow when you looked for questions about HTML/CSS/JavaScript. A correct answer from 2010, while perhaps still usable, was no longer correct in 2020.
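To make that concrete, here is a hypothetical sketch of that drift (the endpoint is invented): the 2010-era answer still runs, but it is not what anyone would recommend writing today.

```javascript
// Circa-2010 answer: XMLHttpRequest with a readyState callback.
// Still works, but considered legacy style by 2020.
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    console.log(JSON.parse(xhr.responseText));
  }
};
xhr.open("GET", "/api/items", true);
xhr.send();

// The same request a decade later: fetch with async/await.
async function loadItems() {
  const response = await fetch("/api/items");
  if (!response.ok) throw new Error("HTTP " + response.status);
  console.log(await response.json());
}
```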
And besides HTML/CSS/JavaScript themselves evolving a lot, the web development frameworks, tools, and libraries went through an enormous amount of change: complete overhauls, major shifts, abandonment, etc. People called this exciting. It was exciting to do major rework every year. They called backend work boring. Guess what... boring in software development is good.
So these bullshit generators, I mean, LLMs, are trained on historical data found on the internet. Trained on quantity, not quality. And while quality is already low across the internet as a whole, the quality of web technology information on it has decreased a lot over time.
As long as they train on quantity and not quality, you can expect the results from LLMs not to improve. You cannot tell an LLM to only generate a response based on web technology information that is less than 10 years old.
Note that I am explicitly talking about web technology, not frontend work in general. Creating an "ordinary" desktop frontend is a different category. It is also mostly boring (unless you used one of those volatile newer Microsoft UI frameworks).
shevy-java@reddit
I think SO has many more problems. It seems to have peaked years ago, around your 2010 mark, or perhaps as late as 2015. Since then it has really declined.
It still has useful content, and you were correct to point out that it now contains more outdated information than it did in the past. But people hardly seem to improve SO at all anymore, and new questions are rarely answered.
Zomgnerfenigma@reddit
excellent article for an orc
shevy-java@reddit
It was more of a goblin. But more seriously, the account smells like we are "interacting" with an LLM here. The karma system on reddit isn't perfect, tons of issues, but an account that after 3 years has -4 karma or something like that, and is trying to promote AI, is... a tiny bit suspicious indeed.
Big_Combination9890@reddit
So the thing that is building shit like this:
...is supposed to be better when things are more about "structure and logic"?
Yeah, and there is a very good reason for that: "AI", meaning LLMs (because everything sold to you as "AI" these days is one of these statistical sequence predictors), cannot reason.
And the evangelists may cry over this statement as much as they want, but unless they show original research, no one has to care.
shevy-java@reddit
Indeed.
In other words: if the data generated by humans is faulty, such as when those humans were idiots (accidentally or deliberately), the AI will also become a boomer idiot AI. A kind of dominance by dumbness, or by averageness.
shevy-java@reddit
I don't think AI is really better at much of anything, unless the work is primarily repetitive. How does AI get its data? It sniffs primarily after what real humans have written. Many solutions are of course simple - detect temperature via a sensor, do things based on that (see the sketch below). This will often look the same, at least for established methods and techniques.

For overall design and genuinely new things, you need to be able to comprehend the problem domain and potentially have new ideas. AI does not really do that. There was a recent example of a mathematician who solved a problem via AI by splitting it up into various subcomponents. But he had to micro-manage the AI through that guidance, so how much cleverness was actually in the AI? Humans could have done it too; in this case AI probably reduced the total cost, since you depend on fewer humans, but a human was still in the driver's seat at the end of the day. Skynet 3.0 isn't a reality yet.
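A minimal sketch of that sensor pattern, assuming a hypothetical readTemperature()/setFan() driver API and an invented threshold - the point is just how boilerplate this shape is:

```javascript
// Hypothetical "read a sensor, act on it" loop; the driver functions
// and threshold are stand-ins, not a real hardware API.
const THRESHOLD_C = 30; // invented threshold, purely illustrative

function readTemperature() {
  // Placeholder: a real implementation would query actual hardware.
  return 20 + Math.random() * 20;
}

function setFan(on) {
  console.log(on ? "fan: on" : "fan: off");
}

setInterval(() => {
  setFan(readTemperature() > THRESHOLD_C);
}, 1000);
```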
Also, I have a hard time trusting the humans in those greedy corporations that ultimately control the biggest AI tools, such as ChatGPT (now with pr0n functionality, as of yesterday or so; guess we need that for more productivity... or something). For instance, Google pushed its AI summaries into the search engine not long ago. What people don't realise is that, aside from those summaries often containing errors, Google now presents you with a private web. You are in the Google cage, the walled garden, the cemented ghetto. Just like in the song Hotel California, you can never leave.

I refuse to allow Google or anyone else to destroy the oldschool web. By using more and more of those top-down controlled tools, we lose options. We could already see that with Google ruining its search engine - this was evidently done to reduce costs, favour AI crap, and turn the open web into a Google web. Same with Chrome and the Chromium code base, and so on and so forth. So, when the OP concludes this:
"I’m convinced that, at least right now, AI is much more of a backend assistant than a frontend builder."
I totally disagree. I think AI use is crap and pointless in both cases, even IF people may create something useful with it. Too many people think AI has only positive use cases. There is a dark omen around it - look at Dohmke saying "everyone must embrace AI at GitHub"; the next day he "voluntarily resigned". That was one big, bad omen.
grauenwolf@reddit
When that happens I build a library or code generator. Try upgrading your skills to late 1990s standards.
femio@reddit
Even *if* true, this is offset by the fact that errors in backend code are significantly more costly than a janky UI.