Does AI add to or remove cargo culting in tech?
Posted by GolangLinuxGuru1979@reddit | ExperiencedDevs | View on Reddit | 26 comments
The tech industry is full of cargo culting. If Google does leetcode, then everyone should do leetcode. One company does agile, so we all should do agile. It happens even at a technical and architectural level. OOD? Everyone should do it. DDD? Yep, if you’re not doing DDD then your architecture is a mess.
I see this as the Tech Guru industry: a top-down mandate about how you should work and how you should think. It has some use, but overall it’s mostly an industry. The effect is that you can build brands, sell books, sell workshops, and get conference appearances based on these “insights”.
So does AI change this? We certainly see something like it taking form, such as spec-driven development. But I think it runs into a fundamental issue.
No one can say they’re an AI guru yet. The big thing about OOP is that it has existed for decades, it certainly looked like a clear evolution of modern software, and the first people to push thought leadership around it were industry veterans. You could assume they had scars.
But with AI, no one has seriously maintained an AI-generated code base for several years. Especially not code bases that have real customer impact and regulatory standards to uphold, where mistakes can get your company fined or sued, or where mistakes cost millions in damage to your company.
So is there even an opportunity for cargo culting? I want to make the distinction between cargo culting and FOMO. The industry is definitely in the FOMO stage of AI. But cargo culting means you’re trying to solve a specific problem, and there is a top-down solution or “industry-wide best practice” to solve it. Agile being a great example.
Do you believe AI enhances or reduces this effect in tech? If so, why?
Polite_Jello_377@reddit
AI is absolutely leading to huge amounts of cargo culting right now. Look at any company that has AI usage leaderboards
Mountain_Sandwich126@reddit
There is no maintenance with AI-generated code.
AI-assisted code, sure, because the decision maker is still the dev, for better or worse.
With AI-generated code there are no rules other than those defined in AGENTS.md.
Mountain_Sandwich126@reddit
If anyone hasn’t noticed, I have a disgust for any workflow that doesn’t have a dev making decisions.
skidmark_zuckerberg@reddit
Not super related, but I’ve been asked in two interviews now if I knew how to utilize AI tools in my workflow. If that’s the way of the world now, fine. But given that, technical assessment should be done with AI now. It makes no sense to force us to use these tools on the job, which actively erode our ability to code by hand, but simultaneously expect us to do technical assessments unassisted.
frogic@reddit
I’ve been thinking about this one. It has to be a mix of both, right? You need someone who can optimize AI use AND someone who can push back and correct course when it goes very, very poorly. Yet again, the ‘replacement’ of devs just made the job more complicated.
skidmark_zuckerberg@reddit
I agree, you need to be able to demonstrate that you can course correct and that you understand how to prompt to get what you need. I think, however, this can be demonstrated in systems design discussions. If you can show you understand how to architect a FE or BE, and translate that into an impactful prompt, you should have no issue babysitting an AI workflow to accomplish that.
In my last job we used Claude a lot over the last 6-8 months. It got to a point where I could feed it a requirements spec sheet, along with a code style markdown file that we had, and it would spit out an 80% working feature that looked like all the rest of the code written by hand over the last 8 years. The last 20% was manual effort or further prompting to fix it. That last 20% was where experience came into play, and it was the make-or-break piece that determined whether the code was slop or production grade.
frogic@reddit
Not being able to evaluate developers is basically the history of the industry.
Napolean_BonerFarte@reddit
I know that FAANG is moving away from leetcode towards AI-only interviews, with no actual coding by hand. So whatever they adopt, the rest of the industry is sure to follow in short order.
MoreRespectForQA@reddit
The way technical assessment was generally done before was usually cargo culted (leetcode being the prime example) and didn't make any sense either.
skidmark_zuckerberg@reddit
Not in the slightest. I understand why systems design is discussed; that’s the core of our day-to-day jobs. But we all know that there isn’t anything even remotely close to LC-level work at the typical job. Most of us work on web software and the systems around it. If I tried to reimplement some search algorithm, for example, I’d get called out in the PR review and asked why I didn’t just use X, Y or Z instead of wasting my time.
Inside_Course_5886@reddit
Interesting post!
Leading_Yoghurt_5323@reddit
feels like we’re in FOMO phase now, not full cargo cult yet… but it’s heading there once “best practices” get packaged and sold
justUseAnSvm@reddit
AI makes it easier than ever to cargo cult technical solutions that haven't earned their complexity with real use. LLM-based coding has removed a lot of constraints around the coding process: the time it takes to understand the code, thinking of solutions to problems, et cetera. You can open Claude or Codex, just code an entire app, and easily start solving problems you don't have.
That's cargo culting, and AI makes it much easier to do.
0xPianist@reddit
AI brings more cargo culting around being ‘AI driven’ and 10x.
Non-technical people and execs love this kind of crap 🙊
diplofocus_@reddit
Making something less cargo culted (both on the “general adoption of AI” front, and the lower level of “does this code actually do what’s needed, or did we just copy paste it because we saw 10 examples elsewhere”) usually requires thorough comprehension of the problem at hand and the tradeoffs being made.
Neither non-technical leadership, nor LLMs are great at this.
bushidocodes@reddit
From a technical standpoint, agentic development makes it easier to rapidly prototype, so you can actually have agents explore several alternatives for any problem quickly in parallel and then have efficient discussions with the model about the tradeoffs between them. That can lead into a very detailed plan. Then if something goes wrong down the road, a lot of things that were once one-way doors due to the level of engineering effort are now two-way doors given what agents can do. This is all great for the people who genuinely augment their intelligence with AI.
That said, some folks just use AI to outsource all of their thinking. Those folks would be "cargo culting" before. I assume they'll just defer to what the AI tells them now.
I don't really understand your comments about OOP. People were positioning themselves as thought leaders and gurus in the first five minutes after OOP was created. It's always like that in tech. The gurus for a specialization are set before any meaningful production system is in place. Can you point to any guru that has seriously maintained important codebases for many years? Guru types are always too busy exploring new shiny stuff and brand building for that.
single_plum_floating@reddit
I... have never gotten this line of thinking. You get the exact same thing from just talking to the chatbot and getting a rough idea of how your prototype will crash and burn in 10 minutes.
bushidocodes@reddit
Depends on how familiar you and the model are with the domain, I think. I'm pretty regularly getting dropped into domains that are completely outside my traditional software engineering background (kernel dev, C, C++), purely based on my agentic development and GenAI skills. Some of these problems involve gnarly legacy systems that frontier models aren't as good at (obscure COBOL dialects, etc.).
Some of them are just trying to automate manual business tasks to reallocate headcount. For example, my current employer has something like 100 developers who exclusively author XML Schemas and Stylesheets by hand to enable bi-directional conversion to and from PDFs. I have a small bit of experience with XML ontologies, but mostly as a consumer. I got a one-off request from an executive to look into this, so when a cancelled meeting left an unplanned empty slot in my calendar, I was able to generate three working end-to-end prototypes in an hour, along with a report detailing the tradeoffs between the approaches, and get that off to the requesting executive. In this case, most of the approaches that GenAI would have defaulted to would not have worked because of our strict security controls (missing libraries, blocked native code, etc.). When I spawned agents to explore the problem space, they were able to hit all of these roadblocks and determine workarounds, resulting in the three alternatives I mentioned above and suggestions for things that we could run through our security change management process.
The leverage of all this is a big deal. All the folks trapped in XML and COBOL-85 that I can liberate are going to get the chance to go to an employer-paid fullstack dev bootcamp to learn modern Java, React, and agentic tools.
High Agency + Agents is a way for folks to break inertia and get good things going.
Ambitious-Garbage-73@reddit
I think it does both but in different directions. for architecture decisions it removes cargo culting because you can actually prototype three approaches in an afternoon instead of just going with whatever the last conference talk said. but for coding patterns it adds cargo culting hard. people paste AI suggestions without understanding why it chose that pattern and now you have codebases full of factory abstractions for things that could be a function call. saw a PR last month with a strategy pattern wrapping a single if statement because that's what claude suggested
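for illustration, a contrived Python sketch of that kind of over-abstraction; the discount names are made up, not from any real PR, but the shape matches the anecdote:

```python
from abc import ABC, abstractmethod

# The over-engineered version: a full strategy hierarchy
# (hypothetical names) wrapping what is really one branch.
class DiscountStrategy(ABC):
    @abstractmethod
    def apply(self, price: float) -> float: ...

class MemberDiscount(DiscountStrategy):
    def apply(self, price: float) -> float:
        return price * 0.9  # 10% off for members

class NoDiscount(DiscountStrategy):
    def apply(self, price: float) -> float:
        return price

def price_with_strategy(price: float, is_member: bool) -> float:
    # Three classes and a dispatch, all to pick one of two branches.
    strategy: DiscountStrategy = MemberDiscount() if is_member else NoDiscount()
    return strategy.apply(price)

# The whole thing is equivalent to a single if expression:
def price_with_if(price: float, is_member: bool) -> float:
    return price * 0.9 if is_member else price
```

the strategy pattern earns its keep when strategies multiply or get swapped at runtime; wrapping one stable branch in it is pure ceremony.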
dbxp@reddit
I think it hits both sides. On the pro-cargo-cult side you've got vibe coders who don't really know what they're doing, but on the opposite side you've got the fact that senior devs can now get more done as they're augmented by AI, so they're higher value, have more political capital, and they're the people who are going to push back and scrutinise this sort of thing.
single_plum_floating@reddit
Nah. An AI can make up a plan for any coding theory you want. Hell, if you want to be funny, give it a personality and a gimmick and see what happens.
So yeah. Unless you know your theory from first principles, it's going to lead to a LOT of cargo culting. 99.99% of developers have no clue about advanced system design at scale. Me included, likely.
Marceltellaamo@reddit
I think AI is going to amplify cargo culting before it reduces it.
The difference now is speed. Before, cargo culting spread through blogs, conferences, "industry best practices". It took years. Now someone posts a workflow or pattern on Twitter or GitHub and within weeks people are implementing it without really understanding the tradeoffs.
What worries me more is that AI makes bad patterns look convincing. You can get something that feels like a solid architecture or clean abstraction, even if it’s fragile underneath. So it’s easier to believe you’re doing the "right thing".
I’ve already seen teams adopt things like agent workflows or spec-driven setups not because they solved a clear problem, but because it felt like the direction things are going.
The irony is that the more powerful the tools get, the more discipline you need to not blindly follow them. I'd like to know if others are seeing the same, or if this is just a temporary phase before things stabilize.
latkde@reddit
??? The act of using AI tools is often a cargo-cultish reaction to purported productivity gains by other people (often people who either sell AI tools, or who wanted a shareholder-friendly excuse for firing people). Similarly, many products now have AI features not because those features are useful, but because everyone else has an AI feature.
When generating software using AI tools, a lot of the generated code doesn't really make sense, following patterns that are inapplicable or even harmful in that context. One of my pet peeves is the excessive use of exception handling in AI-generated code, where 99% of the time the correct solution is to do nothing and just let that exception bubble up. The result is that AI-generated code is often excessively verbose, and much of this verbosity relates to detrimental functionality. In this sense, AI tools that generate code are cargo-cult machines, since they apply patterns without consideration for the meaning of those patterns.
You may also be underestimating the historical hype around OOP, or Agile. Nobody really knew what they were doing either, but that didn't prevent people from doing consulting, writing books, or giving talks. Today we can say that it has existed for decades, but back in the 90s OOP-style techniques had just breached into the mainstream and were just starting to be used in large-scale systems. Similarly, the Agile gurus didn't necessarily lead to successful outcomes (e.g. see the Chrysler Comprehensive Compensation System project, which ended up developing interesting techniques, but opinions diverge on whether project goals were ever met). Incidentally, the Venn diagram of 90s-era “Agile gurus” and “OOP gurus” is largely a circle.
titogruul@reddit
I'd say that right now AI helps you make informed decisions, thus cutting through the cargo cult (for you, not for others, of course).
Over time I suspect monetization will force AI information aggregation to be influenced by $$$, thus screwing it up, and it will end up serving $$$-driven cargo culting.
I see a parallel with Google search in the early 2000s and what company pressures had made it into by, say, 2015.
OldPurple4@reddit
The driver for this behavior has never been necessity. Tooling has almost never reduced it; at least in my experience, it’s made it more prevalent.
Consider this part of the entropy acceleration of the industry.