Why is the AI debate so incredibly polarized?
Posted by IllustriousCareer6@reddit | ExperiencedDevs | 221 comments
The impact it has on developers seems to be either completely game-changing or underwhelming. What’s causing this great divide?
Acceptable-Milk-314@reddit
Believe it or not, class war.
AvailableFalconn@reddit
Whenever I see an AI booster dev, I find it insane to see someone happy to trash their career - one of the lucky few in America who can fund a middle- to upper-middle-class life in 40 hours a week - just because it scratches their brain good
SansSariph@reddit
If AI ends up "working out" and fundamentally changing software dev, then there's no putting the genie back in the bottle. Being "pro AI" doesn't trash the career any more than being "anti AI" stops things from marching forward.
Some of the "pro" folks I work with have a kind of grim determinism about it. It's less about boosting and more like "the job has changed, adapt or get left behind". The evangelization is more "the sky is falling, you need to prepare" to anyone who will listen, vs brain scratching.
IAmADev_NoReallyIAm@reddit
I tried to prepare, I'm still losing my job. So...yeah, I still have mixed feelings about AI. Sure it's the future. It can be a productivity booster. Sure it helped me. It then also helped me right out of a job. Along with a couple hundred other people I work with. And there's thousands of others being laid off at the same time. And it's not like more jobs are being created. So... I don't know what I'm going to do... I'm too old to keep changing... I've been rolling with the changes and evolving since the early 80's. "I'm tired boss..."
Colt2205@reddit
It's definitely not AI alone that would be the reason for your job loss. It's almost always someone you never met who is going through an Excel sheet, looking at defined tasks and seeing how they align with the business strategy and financials. Even performance reviews generally don't carry much weight in the decision.
IAmADev_NoReallyIAm@reddit
Oh, I know.... It's not AI alone, I followed the money... Given the people that are being let go... and by their own admission... it's largely about the money... however... a lot of the people that were retained had a leg up because they already had some experience with the AI tooling.
So the rest of us get benched... And are expected to find jobs within the company... which are more and more requiring AI "experience"... and the cycle eats itself. Again, I'm not angry, I'm just tired. Well, no, that's not entirely true, I am angry, but it's not at the industry or the company, but I'm not going to get into that here because it's irrelevant and this isn't the time or place for it.
ninetofivedev@reddit
You've been working for 40 years. Buddy, it's time to retire. Make room for the next generation.
If you haven't been able to save up enough money despite working in one of the most lucrative careers for 40 years where the market has also been consistently more green than red...
Your story doesn't add up or what the fuck have you been doing?
Jaeriko@reddit
Some people actually like their job.
ninetofivedev@reddit
The person in question doesn't seem like one of those people, your argument is invalid.
Jaeriko@reddit
Oh okay, well if you've decided then I guess that's true. Thanks for revealing the truth to me!
ninetofivedev@reddit
I mean... it's like my coworker who complains about having to work overtime.
Nobody on our team is required to work overtime. And nobody is getting recognized for working overtime. And nobody is getting fired if they don't work overtime.
Either you're working overtime of your own volition, in which case, stop complaining. Or you just want to complain.
Early_Rooster7579@reddit
Do you think not using it is going to make it go away somehow?
FatHat@reddit
I think using it as much as possible would be the best way to stunt it. OpenAI/Anthropic lose money on heavy subscription usage.
notfulofshit@reddit
Agree. Use it a lot more. If the unit economics work, great. If they don't, let's utterly destroy these companies who have been on a warpath to lie to everyone.
hucareshokiesrul@reddit
The work done by the software I make used to be done by mailmen delivering documents and an army of secretaries sorting through an obscene number of paper files in file cabinets. That specific job doesn't exist much anymore because it's painfully slow, expensive, and error prone compared to the automated versions I work on. It feels a little hypocritical for me to say that automating what I do is where we should suddenly start drawing the line.
xMisterSnrubx@reddit
This is not apples to apples at all. This is an almost overnight destruction of many many complete sectors of professional and creative careers. Frankly, low skilled mailmen can easily transition to another low skilled job. This is going to decimate and destroy career pipelines for software engs, quality testers, biz analysts, project managers, security engs, then you have lawyers, designers, voice artists, actors, financial analysts, cpas, medical fields. And automation/AI is already starting up on driving, transporting, food services. Even China, fucking China, has outlawed firing people due to AI.
apartment-seeker@reddit
The base assumption that this should surprise us is faulty.
Other countries often balance the value of keeping people employed against corporate profits, etc., in a real way. American society does not, and, accordingly, neither does its culture. Capitalism is our religion, and rich people getting richer is seen by most as a good in and of itself.
xMisterSnrubx@reddit
Well, they do have suicide nets outside some company office buildings, and their human rights violations are somewhat well known.
apartment-seeker@reddit
lol true, they do have a shittier work culture in China and other East Asian countries
But there is also an understanding that the economy should lift all boats, and that the country doesn't exist to facilitate an economy where a very small percentage profit and everyone else takes what they can get.
AvailableFalconn@reddit
It’s bad when we offshored manufacturing too. Hope that helps.
Early_Rooster7579@reddit
Exactly. I’ve written code that has gotten people laid off multiple times over my career. Automation is always advancing
RelevantJackWhite@reddit
You say that as if they would have kept their jobs by not doing that
endurbro420@reddit
Yeah for most of us it has become a mandate to use it to keep our jobs. Luckily so many companies who claim to be “ai first” are just using copilot poorly.
Colt2205@reddit
Part of the company training is literally how to avoid burning too many tokens, because unlike normal SaaS they also have to charge by usage, as if they were a utility.
the_lark_@reddit
Eventually it will go from token usage as a sign of productivity to 'use it in moderation' because it's expensive
tcpukl@reddit
Are you joking?
I couldn't imagine working somewhere like that. That's insanely short sighted.
Tainlorr@reddit
Being anti AI will get you fired these days
K-Max@reddit
Not necessarily. It depends on the situation.
Tainlorr@reddit
For my job it's mandated for every ticket and we will not get promotions or raises if we don't embrace AI and figure out how to use it for productivity gains
airemy_lin@reddit
A year ago yes, but now it’s starting to take over. The current enterprise tooling from Anthropic and OpenAI is so good and the marketing is reaching a wide audience.
Boards and investors are viciously applying pressure down. I’m surprised to see the pressure at my current job and if it’s happening to us it’s probably super widespread at this point.
It doesn’t matter if your manager or your C suite (lol) is nuanced and risk averse.
alex88-@reddit
Imagine telling devs 20 years ago that they’re trashing their careers by using Google lol
We’re technologists, we adapt. There’s powerful things you can do with agents, and the end result is ultimately up to the engineer not the AI.
psyyduck@reddit
If you want a more fair and happier society, you should ask your favorite LLM how Europe does it. America has been steadily dropping in the World Happiness Report for a long time now. Yelling at data centers won't cut it.
ninetofivedev@reddit
I think you have it backwards. AI is taking over whether you like it or not. You can bitch and moan about it, or you can adapt.
IDoCodingStuffs@reddit
Complete with aggressive propaganda on both sides
TheRealStepBot@reddit
Baby and the bath water though. Becoming a bunch of luddites to oppose fascism or something is certainly a strategy amongst strategies
emitc2h@reddit
The real question everyone should be asking is who stands to benefit from this. The way things are evolving, it’s not the devs.
Glass-Chemical2534@reddit
completely agree, i’ve known i’ve wanted to be a swe ever since i learned there was a job on the computer in elementary school, did cs in high school and i’m a college sophomore now and i don’t even know how i feel about the field now :(
IDoCodingStuffs@reddit
Becoming luddites relative to treating this like 1800s people thinking defeating death was just around the corner because electricity can induce muscle jolts?
roodammy44@reddit
I would be fine in letting AI take my job, if it meant I could keep my house and car and keep eating. Unfortunately, society has decided the best way forward is to make everyone destitute like the 19th century.
CombatWombat1212@reddit
100%
Sheldor5@reddit
game changing = CEOs, managers, juniors, bad devs
underwhelming = experienced devs working on tasks more complex than CRUD endpoints
QuickShort@reddit
Maaan this is so far off my experience. I'm an experienced dev working on super complex work. No doxxing myself, but I'm a team lead at a YC-backed unicorn. It's really broadened the scope of what is possible for one person to do. I can't believe people aren't more excited by this.
Infamous_Birthday_42@reddit
I’m always baffled when people say this because I’ve seen the opposite. In big tech, the loudest proponents of AI I keep seeing are all Senior/ Staff/ Principal level. Feels like once a month we have a company-wide talk by some Staff-level engineer talking about how they used AI to deliver twice the features in half the time by redesigning their entire stack to be “AI native”, whatever that means.
Admittedly, a lot of this is probably brown-nosing, since you don’t reach Staff level without knowing how to deepthroat management a little.
But still, it’s always weird logging on here and seeing “only inexperienced/ bad devs use AI”. Then I go to work and my inbox has a video from an engineer with 20+ YoE who was the architect of some cloud platform feature that everyone uses telling us explicitly that vibe coding kicks ass and we should all be doing it.
phillythompson@reddit
Circle jerk look at me my work is better
ganancias@reddit
Stochastic parrot does not solve my problems, which are not boilerplate and don't have millions of training examples. And besides, only 5% of my job is writing code, the rest of the time I'm talking to people!
Early_Rooster7579@reddit
This is funny lol. Most of the places working on the biggest scale and hardest problems have heavily embraced AI.
xaervagon@reddit
That is honestly how I feel about it.
My two cents is that this tool has been absolutely amazing at chucking boilerplate, glue code, stuff that largely does get copy/paste/modified. It feels more like a failure of the industry to produce standards or interchangeable platforms. Instead we built a machine to force all the other machinery to play together despite all the vendors fighting bitterly to maintain lock in.
At the job, I'm watching kids basically outsource all of their brain to the AI; they won't make an effort to go out and learn or understand something unless explicitly told, and they don't appear to be developing real skills. I'm really left wondering how they're supposed to grow.
skalpelis@reddit
It’s an engine for mediocrity. It literally creates statistical mediocrity.
SarmackaOpowiesc@reddit
I don't think they are learning and it's why I'm pretty confident that I'm going to remain employable for a very long time.
semiquaver@reddit
This is cope, btw.
ninetofivedev@reddit
Don't take this the wrong way, even things that I've built that would be considered impressive were not really all that complicated.
The world runs on seemingly complex systems that probably could be 1/10th as complex as they actually are.
Take K8s, for example. On the surface, it's actually not that complex. For most people's workflows, you maybe setup the default behavior for deployments, set up service accounts, maybe some autoscaling, a controller ingress and load balancing, it gives you service discovery by default.
Not that complicated. Then your resume-driven developer comes in and decides that your platform needs an operator that orchestrates workflow lifecycles and 100 new CRDs, and fuck it, let's offload some of the decision making to AI instead of using your typical decision matrix because that'll be fun and exciting.
mikevalstar@reddit
The way I see it is the biggest divide is those that get excited about shipping features vs those that get excited by solving the puzzle
GumboSamson@reddit
With AI, there’s still a puzzle to solve.
But the puzzle shifts from “how do I make this feature” to “how do I solve the general problem of shipping features.”
LittleLordFuckleroy1@reddit
Such a terrible analogy because compilers are deterministic and rigorously tested. AI is neither of those things, and it can and will randomly hallucinate complete bullshit.
GumboSamson@reddit
I’m not saying that you should use AI instead of a compiler.
I’m saying that gaining comfort with higher-level abstractions can be deeply uncomfortable for engineers, and it isn’t the first time such a move has occurred in the industry.
(The first compilers weren’t great, either—so engineers who felt they weren’t trustworthy weren’t entirely wrong.)
LittleLordFuckleroy1@reddit
AI isn’t an abstraction. That’s the mistake you’re making.
GumboSamson@reddit
All languages are abstractions.
Including natural languages, like English.
LittleLordFuckleroy1@reddit
Not the same kind of abstraction as a compiler is to assembly or machine code. Language is not strictly defined. It’s a loose approximation of ideas. Thinking that LLMs do this for code is literally the entire problem — it’s completely wrong.
GumboSamson@reddit
I don’t think anyone is making that claim here.
LouisvilleBitcoiner@reddit
The compiler also doesn’t set massive piles of cash on fire, nor does it significantly accelerate climate change.
clutchest_nugget@reddit
Not the fucking compiler thing again 😭 at least it is a convenient way to identify people who have no idea what they’re talking about
Itsmedudeman@reddit
Understanding that there is a problem is part of the puzzle in itself. There are lots of mediocre devs who would go through their day not even realizing there's anything to solve until someone tells them.
New-Locksmith-126@reddit
you're not going to solve the "shipping features" problem with AI as much as you might try.
You can use AI along the way but moving too quickly in one area will impede you in another. There's no way to fast-forward reality.
mikevalstar@reddit
I think that's a fair assessment, but those types of puzzles have less provable/well-defined answers.
coderstephen@reddit
Yeah that's more of the type of problem a manager solves, not an engineer/mathematician solves.
GumboSamson@reddit
If you think putting together a Dark Factory doesn’t require engineering, have I got news for you…
mikevalstar@reddit
I don't think many people are making dark factories; they are trying to compile this quarter's numbers into a report for Janet before the next release cycle, while also trying to fix that bug with the date picker where it seems to be off by 1 day... but only sometimes.
SimianWorks@reddit
Not at all.
winter_limelight@reddit
I read somewhere that about 20% of the population get joy from thinking. I'm guessing that percentage is higher among developers. So this will feel quite existential for many people.
delventhalz@reddit
Indeed. If AI ever does manage to effectively replace me thinking through a problem (big if), I will be miserable.
AcanthisittaKooky987@reddit
Not gonna happen unless they figure out something different than LLMs, you're good! 😆
grimcuzzer@reddit
Yep. Thinking is one of my favorite pastimes, there's no way I'm giving that up.
morosis1982@reddit
If I can get a tool to write a whole bunch of bullshit connecting stuff and tests for me and solve the main problem myself....
Yeah
xMisterSnrubx@reddit
That’s what you would like (and I agree with that usage), but the C-suite don’t want you writing any code
bobsstinkybutthole@reddit
where'd you get this from?
I tried looking around and the closest looking thing i could find was this one
https://dtg.sites.fas.harvard.edu/WILSON%20ET%20AL%202014.pdf
Maleficent-Cup-1134@reddit
AI doesn’t replace thinking though…? Devs using AI tools are still thinking - just at a different level.
I imagine those who enjoy low-level thinking are the ones who don’t like AI, while those who enjoy high-level thinking love it.
the_c_train47@reddit
You can totally replace your thinking with AI if you’re lazy enough
Maleficent-Cup-1134@reddit
Fair, but most competent devs using AI tools aren’t using it to replace thinking. They’re using it to amplify it.
modelcitizencx@reddit
This is it, really. The idea that AI takes away the "thinking" part is IMO misinformation; AI only moves thinking to a different (higher) level. Instead of thinking thoroughly about every function I write, I think about the big picture of a feature/solution, technical design/architecture, and trade-offs between different solutions. I use AI to consult on these things as well, but ultimately I make the calls on which solution I'm going forward with, because I know what the priority is in the given domain.
You could say the divide is between low level programmers and high level programmers/product designers.
mikevalstar@reddit
I think a lot of people don't see it that way for themselves, and this drives them to dislike the tools/ai.
You can see a very similar thing with people who criticize others for using node or python or another "simple" language because it's too easy, they aren't real programmers.
BlackPresident@reddit
We’re not talking about “high-level” aka “non”-thinking thinkers when we say people who think.
ehs5@reddit
Are you implying that the developers who get excited about shipping features don’t get joy from thinking? Because I don’t think that’s true at all.
mikevalstar@reddit
it is definitely a shift in how you have to approach things. I've been a project lead for most projects I've been on for the last 6 or 7 years, if I didn't get joy in shipping features I'd hate my job; passing tasks to an AI or a jr developer are similar-ish tasks, and have similar-ish results.
This is obviously a simplification of things, and I like to take a middle approach. The people directing me to build applications have almost always valued shipping things over anything else, and I've always seen it as my job to balance stability, quality, cost, and time to market. So I generally enjoy my AI usage, but I still see the issues around it, and I need to manage those
Hydrogen_Ion@reddit
Shipping the feature is a different kind of puzzle!
Helix_Aurora@reddit
I get excited about shipping features, but AI is not automatically the fastest most efficient way to ship all things.
It's the belief that all things are nails and AI is a hammer that frustrates me most.
Effective_Hope_3071@reddit
What is the puzzle in this case?
mikevalstar@reddit
The puzzle is usually something like: "how do I turn x array into a categorized list b" and sometimes I see people invent puzzles to make their work more complicated and challenging when the puzzle is too simple.
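A trivial instance of that kind of puzzle might look like the following sketch (the items and categories here are made up for illustration, not taken from the thread):

```python
from collections import defaultdict

# Hypothetical input: "x array" of items, each tagged with a category.
items = [
    {"name": "apple", "category": "fruit"},
    {"name": "carrot", "category": "vegetable"},
    {"name": "banana", "category": "fruit"},
]

# One pass: bucket each item's name under its category to get
# the "categorized list b".
categorized = defaultdict(list)
for item in items:
    categorized[item["category"]].append(item["name"])

print(dict(categorized))
# {'fruit': ['apple', 'banana'], 'vegetable': ['carrot']}
```

The point of the comment stands either way: a puzzle this simple tempts some people to invent a harder one.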
Itsmedudeman@reddit
Sometimes the puzzle might be "how do I create an efficient API to do a graph retrieval of relationships using our current data models". This can be quite a lot more difficult, and might require thinking outside the scope of what AI can currently do if your data models just aren't built for it. And if the developer isn't aware that the problem isn't efficiently solvable with any straightforward approach, they aren't even going to think of prompting for other solutions, like creating a new data aggregation model for the use case, because AI only cares to solve what you put in front of it.
Effective_Hope_3071@reddit
Ah gotcha. Yeah, I'm definitely in the camp of "shortest path" to solving an issue first, and then refactor later if something needs to be more elegant.
I like puzzle solving, I don't like making my own puzzles lol
mikevalstar@reddit
I usually see it more when people do things like decide to move to microservices because a simple CRUD app is too easy. (And they definitely don't need 1 microservice per active user.)
New-Locksmith-126@reddit
I get excited about things not breaking.
If you want to "ship features" go be a PM
vom-IT-coffin@reddit
Yup
sleepyj910@reddit
And if it’s not hard, we aren’t necessary.
MoreLikeGaewyn@reddit
Maybe if:
- Its quality wasn't insanely suspect. (Yesterday, I asked it to query my votes table and calculate a comment's score (1 is upvote, -1 is downvote). Opus 4.7 looked at my code and decided to do 2 queries on the same table, sum them, and then subtract one from the other. 🙃)
- It wasn't threatening jobs
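For what it's worth, the single-query version the commenter presumably expected can be sketched like this (the `votes` schema and column names are assumptions, since the actual table isn't shown):

```python
import sqlite3

# Hypothetical schema matching the description:
# votes(comment_id, value), where value is 1 for an upvote, -1 for a downvote.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE votes (comment_id INTEGER, value INTEGER)")
conn.executemany(
    "INSERT INTO votes VALUES (?, ?)",
    [(1, 1), (1, 1), (1, -1), (2, 1)],
)

# Because upvotes are +1 and downvotes are -1, a single SUM yields the
# score directly -- no second query, no subtraction step.
score = conn.execute(
    "SELECT COALESCE(SUM(value), 0) FROM votes WHERE comment_id = ?", (1,)
).fetchone()[0]
print(score)  # 2 upvotes - 1 downvote = 1
```

The encoding does the work: storing votes as signed values makes "score" a plain aggregate.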
mikevalstar@reddit
I have seen developers I give tasks to make similar decisions, and in both cases I correct them... or if I really need to, I rewrite it. I'm usually more disappointed than angry or wanting to fire the agent.
Working_Noise_1782@reddit
...what about those who like ..huh, doing their workout while they wait for Claude's answers.
tacosdiscontent@reddit
I am on both sides. For my day job, I could not care less about the speed or amount of features shipped. It’s all about using the brain and solving problems myself while also being paid.
For personal projects I want to ship stuff faster to see the ideas succeed or fail (mostly) faster and not waste time working on something for a year, that was deemed to fail.
Now all I do at work and home is prompt and review. Hence the job and the whole profession now sucks big time.
The future is very, very bleak now. I don’t know how to do this for another 30 years or so, when most likely there won’t be a job anyway.
mikevalstar@reddit
I think a lot of people discount their ability to solve logic puzzles as a way to solve business problems. Our jobs are changing, but not truly going away.
In the early 2000s my job as a programmer was very different, computers were scary and new, I had a much more hands on role in defining the problems to solve, and I think this will ultimately be a return to that.
muuchthrows@reddit
This frustrates me so much. People trying to sell their vision of our AI future and every single time it just happens to align precisely with their own preferred style of working.
ham_plane@reddit
Oof, I think you nailed it
hemo5595@reddit
The most polarizing thing for me personally has been the engineering culture changes surrounding it rather than the technology itself (at least at my company). It seems like ai has made management lose touch with reality and we are being encouraged to abandon any kind of engineering best practices, push code as quickly as possible, and ignore things like code style or familiarizing ourselves with the codebase. At the same time as an engineer I feel like I’m just hitting the same bottlenecks that have always been there (external dependencies, product/design decisions etc)
MindlessTime@reddit
There’s definitely a part of these efficiencies that are like “To clear the way for AI, we removed all the firewalls, granted everyone permissions to everything, stopped requiring PR review, and demoted our security guy to janitor duty.” Under those conditions, things that used to take 3 days of waiting for approvals now take hours because no one’s blocking you. That part has nothing to do with AI.
OK_x86@reddit
The push is coming from high up. They've been sold this idea that we can significantly increase productivity using AI. So they sign up for these services and need to justify the exorbitant fees. When the improvements fail to materialize the upper management invariably assumes it's because devs aren't using the tool, because they're too lazy or because they are dragging their feet. So they demand more token usage to help materialize the promised gains.
It's a bit like a faith healer demanding that only with the utmost faith in God can healing occur. If the healing didn't occur it's not the faith healer's fault. It's because you didn't believe enough.
The reality is that a small portion of our time as devs is spent writing code. The rest of that is a combination of gathering requirements, meetings, testing, meetings, writing emails, meetings, writing reports, meetings, and some mandatory trainings, followed by meetings.
My tickets spend orders of magnitude more time in just PR alone than they do in coding.
If they want to fix the problem they're looking at the wrong things to automate.
non3type@reddit
For me it’s largely that key point of it being pushed from the top down without any real consultation with engineers. Many times it’s being pushed on us in the least helpful ways. Then there’s the times I’m asked a question as SME and I get the feeling my answer is being validated against AI. That’s not a trust building interaction lol.
Sfacm@reddit
Your management was in touch with reality?
hemo5595@reddit
I feel like there’s always been a healthy (or at least healthier) give and take between management and ICs. But right now it feels like there’s no check at all. This isn’t the first technology that has made devs more productive, but now it feels like there’s no pushback from devs, or even the ability to talk about the downsides of going all in on AI for programming. Though this definitely has a lot to do with the broader tech employment landscape too.
onefutui2e@reddit
Yeah, I see this. My engineering org is split between those who have gone all in on using AI for everything and those who are more nuanced.
The former group have higher initial velocity and ship faster, but there are more bugs and follow-up work. The latter group ship slower, but tend to be more feature-complete. But in terms of "when do we cross the finish line?" they're both about the same. In my experience, of course.
The challenge is that the former gets lionized while the latter find themselves constantly having to justify their approach. Even if in the end, the results are roughly the same. It doesn't help that faster initial velocity means something gets put in front of leadership faster.
-Nocx-@reddit
What I’m curious about is how organizations that are heavily AI focused intend to evolve their coding standards / styles. If we are consistently generating tons of code based solely on historical knowledge - where does the innovation come from?
At what point do developers run into a problem so many times such that they ask - how do we do this better? It’s possible that I’m off base, but when I’m writing code it’s those moments that drive me to do something new. If developers aren’t having as much time with code, my impression is that those moments will happen less often.
DontDoxMe3352@reddit
Yeah, the forced top-down decisions of having to use AI no matter what really feel like trying to force a rectangle into a round hole. It always comes down as a "you have to use it and be more efficient with it", while they provide no guidance on what they expect, how much money they're burning on these tools, or what actual efficiency gains we're seeing across teams. The bottleneck stuff I think became even worse, because like you said, the old ones still keep consuming our time, and additionally we now have to spend extra time fine-tuning the rules and skills so that the agents can have some basic understanding of our architecture, limitations, standards and whatnot.
iegdev@reddit
I didn’t become a developer to be a project manager. I became a developer because I enjoy using code to solve problems.
I couldn’t care less about shipping features. Professionally I rarely care about what I’m working on, it’s not mine and it’s probably something I’ll never use anyway. It’s just a job.
But now AI is just another excuse for billion dollar corporations to fuck people even more.
Final_Dog_9578@reddit
Right. My priority is my craft. My employer's priorities are shipping features today, and again tomorrow (though they are not blind to the idea of maintainability, etc.). We understand each other well enough that we make common purpose.
Having said that, I lucked out in that, being of some age, I let my VP know several months ago that I would retire in early 2026, so I was heading out of the door just as the AI tsunami was coming in the windows.
ninetofivedev@reddit
It's a few different things:
Certain individuals have varying degrees of tolerance for adoption and change. They will latch on to the way they know how to do things and instinctively put up resistance to it.
ODD: Oppositional Defiant Disorder. Some people are hard-wired to be this way. About 3% of adults.
Wait and see, waiting for the hype bubble to pop. AI was adopted early, at stages far less capable than today, and it has incrementally improved since. There is a large swath of people who tried it back in 2024 and don't even realize how much more powerful it has become.
They think it threatens their livelihood. It's that simple; it's the same reason people hate outsourcing. In fact, let me stop this all right here.
TLDR: AI is basically treated the same as outsourcing. Someone or something else is doing their job at a fraction of the cost with varying results for quality and people don't like it.
IllustriousCareer6@reddit (OP)
Eh. I think you should read the question again.
ninetofivedev@reddit
Sorry. Your shit post got removed
crazylikeajellyfish@reddit
Ah, yes, the one popular debate of our time that's incredibly polarized...
I don't think it's unique to AI, all the communication platforms we use are designed to amplify the most extreme viewpoints. The perceived average is never the actual middle ground.
MindlessTime@reddit
I don’t fetishize or demonize tools. It’s just a tool. Learn how it works. Use it where it makes sense. Don’t use it where it doesn’t. People need to chill out.
ganancias@reddit
It's become drastically more capable in recent months. So much so that for 2027, all bets are off. Try any prompt, then try it again in six months. The change is happening too fast for chill sensemaking. People are freaking out for good reason.
IllustriousCareer6@reddit (OP)
This is how I approach it too. At least it would be, if my boss allowed me to.
semiquaver@reddit
It’s been linked to political tribes, just like covid and vaccinations were. Kiss of death for a concept.
ganancias@reddit
Interesting part is that anti-AI and anti-vaxx have the same "billionaire conspiracy" mentality.
Neverland__@reddit
Because the hype is over blown.
Yes great tool, yea limitations. Both can be true
ganancias@reddit
Fewer limitations and more capability with every model release. You're skating around as if the puck is sitting still.
ExperiencedDevs-ModTeam@reddit
Rule 9: No Low Effort Posts, Excessive Venting, or Bragging.
Using this subreddit to crowd source answers to something that isn't really contributing to the spirit of this subreddit is forbidden at moderator's discretion. This includes posts that are mostly focused around venting or bragging; both of these types of posts are difficult to moderate and don't contribute much to the subreddit.
ketralnis@reddit
What “debate”? Use it or don’t. Why do you care what other people think?
Zulban@reddit
It's not. Like most issues you're only hearing from the loudest voices.
PlasmaFarmer@reddit
I avoid talking about AI in the workplace because I'm in the center: it has uses and it's here to stay, but in its current state and form it is severely misused, and that misuse causes huge problems that managerial types just don't comprehend. For my centrist opinion I'm always labeled as the opposite side by whichever side I'm talking to. It's like both sides are sick with an AI fever and you can't discuss things with them anymore, because the conversation quickly escalates to tribalism and an 'us vs. them' mentality. This is even more damaging than the generated slop code. People have lost their sanity.
pheaver83@reddit
I work for a company where most developers are like you. It's great. Really healthy discussions and practical applications of AI.
xMisterSnrubx@reddit
Enjoy it buddy. I know several companies now that don’t even have devs in the pipeline anymore. Biz/Product > JIRA ticket > AI > PR > AI2 PR review and merge > AI deploy to Prod. The only thing that will stop this insanity is lawsuits.
tinycockatoo@reddit
What sector? This is actually insane
Twerter@reddit
Do those products go down? I'm kind of curious how big these projects are, and whether they even have important persistent data.
MagnetoManectric@reddit
Same here! There are booster types about, but the debate is quite balanced and there's been little pressure from management. Various AI lunch and learns do pop up in the calendar, and managers do talk about it, but it's kind of... there if you want it. Engineers use it to varying extents depending on their personal style. I'd actually like to think this is how most places are and you hear about the hyper-booster places here because drama sells, and there are bots about trying to astroturf the impression of this being the norm everywhere.
We have co-pilot PR reviews on by default, which I do find useful. I prefer using it that way round, having the LLM be in the pilot monitoring role. I don't like to be pre-empted or have the code written for me, it's like when you're playing solitaire and have someone hanging over your shoulder the whole time, pointing out every card they think you should click. Annoying, distracting, patronizing.
I like to treat the chat sidebar like a stack overflow answer generator, and keep it in ask mode. It'll generate you a great stack overflow answer. It won't be exactly what you need, it won't be in quite the right style, but it'll give you a great basis for a solution. This seems like quite a reasonable, token efficient way to use it, and infinite stack overflow answers are a useful boon indeed.
Throwaway__shmoe@reddit
I’m the same. The reason I stay silent, and toe the line with the bare minimum usage I can get away with without rocking the boat at work is because I think the concept of “AI psychosis” has legs. I grew up in a fairly chaotic home and learned it’s a bad idea to try and question or reason with people on certain topics that they have adopted as existential. They’ve somehow tied their ego so tightly to the concept they can’t understand a contrary viewpoint, and act out irrationally. In my case growing up that was emotional abuse, sometimes physical as well.
ReachingForVega@reddit
I like to think of myself as an LLM realist. I'd been working with ML, OCR and NLP for a decade delivering provable and repeatable benefits.
Along come LLMs and they have their use cases but most I see are overhyped or faked online. Ask a consultancy to show one they have delivered and they struggle.
Coding and creative writing for drafts seem like their best uses so far. Agents seem really flaky when not wrapped in a deterministic process.
Heavy-Focus-1964@reddit
funny how that’s a controversial stance
angellus@reddit
In a world divided by the polarities of social media and other propaganda, it is the one who actually uses critical thinking that is the real threat.
Centrist is always the controversial stance because you are not feeding into the us vs. them mentality (class wars all the way down).
FastHotEmu@reddit
Only one side is lying constantly, though. Reminds me of an old joke:
What's the difference between a car salesman and an AI salesman? The car salesman knows when he is lying.
noobnoob62@reddit
Hard agree. It’s going to be table stakes going forward; it has already begun to reshape how work gets done and will continue to. But so many on the pro-AI side are absolutely *convinced* that AI will inevitably lead to the jobpocalypse, and suggesting anything to the contrary makes you a luddite
WebMaxF0x@reddit
I resonate so much with this, I also have a stance in the middle.
I was so excited at the possibilities when the first AI's came out.
Now I feel like I was labelled an AI luddite because I wanted to be prudent, saying we need to invest in safeguards like better automated tests, linters, documentation, etc. Otherwise there's a risk the AI will break things (and honestly, before AI, things were already breaking a lot due to missing those same safeguards).
coderstephen@reddit
It's genuinely useful for some things, but I'm so tired of it being shoved down my throat as if it is useful for all things that it makes my outlook more negative than perhaps it could be.
ironman288@reddit
The divide is people who learned how to leverage a new tool vs those who are resisting it because they think the tool is going to replace them.
AI is a nail gun instead of a hammer; somebody still has to operate it. But that operator is getting a lot more work done than a couple of guys with hammers.
gamesbrainiac@reddit
I think you're initially excited, and once you've used it for a few months and now have to maintain code that you did not write, and don't quite understand, you begin to sour very quickly. Also, they keep changing the bloody models. This is why I think we're in Phase 1, and Phase 2 will be when local models are powerful enough to power agents. That's just my two cents, but I believe this will follow the same trajectory as mainframe computers where you used to buy time.
Fenix42@reddit
The goal is not AI on your system. It's AI they can rent to you.
MagnetoManectric@reddit
and that's why it sucks so bad. i'd be actually excited about all this if it took the form of models that ran on your devices, could train themselves, and there was a rapid march of better and better local accelerator chips. that'd actually be scifi
gamesbrainiac@reddit
I get that, but they wanted Mainframes to keep being mainframes, so that they could sell time. Technology has a way of upsetting those who birthed it.
thephotoman@reddit
There are two groups of people:
The first group was already leveraging automation tools quite heavily before AI became a thing. To this group, AI is much more expensive, less reliable, and less predictable than their directory of scripts, without much in the way of real gains.
The second group didn’t automate much before AI. When they got it and started using it as an automation tool, they got excited due to the productivity gains they experienced. When they see the first group yawning, they get indignant.
MagnetoManectric@reddit
That's a really great way of looking at it actually. And I do think you might be on to something there. Amongst my colleagues, the most excited about it are the guys who never had any ZSH scripts set up, left everything on default, didn't really know any shortcuts... not bad devs, but certainly a Type. The ones who already knew where all the buttons were and have their most used commands aliased are generally the ones that are less impressed
SansSariph@reddit
Don't forget the third group, that builds strawmen to make their preferred tribe look competent and reasonable instead of holding nuance
thephotoman@reddit
The group not using automation tools tends to work in computing environments that hide text automation.
What’s more, the computing environment that most aggressively hides automation is Microsoft Windows.
True_Fig983@reddit
100 times yes!
Fenix42@reddit
As someone with 15 years as an SDET, spot on.
Thundechile@reddit
A lot of the people talking about AI don't really know much about development, but they tend to present themselves as if they do. Then when they start giving advice (which is usually wrong), it starts to annoy people who actually know what they're doing in the dev world.
Challseus@reddit
Probably my #1 issue. AI is being used by so many, with little experience, who also happen to be the loudest about it.
njordan1017@reddit
I think this is more at play than most people realize
Noah_Safely@reddit
Business side never really knows/cares about engineering, it's a cost center to them. They don't care about best practices, long term maintenance etc. It's a short-term mindset for them.
They've been told genAI can fully replace SWE, which they didn't understand to begin with, so they have no basis to evaluate the truth of that. There are also a lot of piss-poor SWE out there to be fair.
Anyhow, those of us actually supporting stuff long term see the questionable results genAI provides: incredible at some stuff, godawful at others, and just a bit too inconsistent to fully trust. So we wanna keep our best practices around, our PRs, our understanding of the code.
The conflict is that the business side believes genAI makes us all 10x programmers, but the reality is much, much more nuanced. You should only employ genAI on problems you could solve yourself. How else can you review its PRs? How can it be safe otherwise?
ManonMacru@reddit
Because most of the technology is literally marketed as a software engineer replacement. This literally creates polarization by pitting software engineers against non-software people.
But this is so incredibly dumb in terms of impact, because I think software engineers are the best people to understand and leverage AI efficiently.
Fair_Local_588@reddit
And I think the key here is that we’re now visibly seeing companies frothing at the mouth to replace all of their devs with AI. Our industry went from being interesting, well-respected, and well-paid, to apparently a huge thorn in the side of businesses.
And I have to imagine that this desire was there the entire time; it just wasn't possible yet. And it's still not even possible now, which somehow makes it worse and more disrespectful to developers.
FatHat@reddit
Personally, I find AI to be a useful tool that still needs guidance from someone who knows what they're doing in order for it to not devolve into an unmaintainable mess. I don't really have beef with AI.. but the boosters make me want to bash my head against a brick wall. I would die happy if I never saw an "AI IS CHANGING EVERYTHING" piece ever again. Just stop. Please.
Eastern_Cup_3312@reddit
Reddit finally fully went to the shitter and is full of bots
If you want to know anything, ask the devs you know IRL
originalchronoguy@reddit
Yeah, we don't have these debates in real life. People just go about their way.
Eastern_Cup_3312@reddit
That's why I said it: ask the people you know.
Some friends from college are doing great on private companies, others abandoned, others doing fine on government jobs, one recently laid off.
Other group composed of people from a very exclusive background has many that compete for top 0.1% roles which pay handsomely
diablo1128@reddit
Generally speaking, I find the most vocal people on a topic are the ones who feel strongly for one side. Those in the middle, who think both sides have good and bad points, don't participate. This creates a lot of the polarization on a given topic.
mississippi_dan@reddit
This happens with all technology. You have two groups. The engineers who are excited about technological progress over the long term. They are the "This is just the first steps, but it's exciting, and just imagine what the future will hold" crowd.
Then you have the business crowd, who immediately decide that the future potential of a technology is already here. They overhype the technology and start mandating that we have to get ahead of the competition on adopting it. They push the technology into every facet of the company in order to cut costs. The results are always the same: you get short-term gains by laying entire departments off, until you realize the technology wasn't up to the task. Then you have to spend even more putting things back the way they were.
adelie42@reddit
People hate change.
byte-array@reddit
I think it is less polarized lately actually
tankerdudeucsc@reddit
AI needs data centers, and lots of them. They put up the DCs in populated areas, and then the infrastructure gets subsidized by the local residents.
They get cheaper rates than you, and they get their upgraded power lines paid for by you.
It's shitty economics for residential users.
laccro@reddit
My job as a senior engineer is to develop an understanding and expertise of the problem space. Then, I can use that understanding to solve the problem in a way that actually helps the users and makes things clean for future development and expansion.
Understanding the problem and how best to solve it is the key differentiator of experience of a junior to a senior. Building good mental models and recognizing patterns
Often, writing a problem down is a key to understanding it.
The most precise language for writing down problems in a computer is to use a programming language.
Therefore, to truly understand a problem, we must write the code.
LLMs are still helpful for getting the final details written down into a usable program, and filling in gaps. But if you want to actually add value as a Senior developer, you’d better be still writing the key parts of the code to ensure that you understand the system, because modern LLMs will not do that for you
Dimencia@reddit
New vs experienced devs. New devs think they know everything and the AI is even smarter than them, so of course it can do everything. Eventually, you get to a point where you constantly have to tell the AI how to do everything better because it just seems to always choose a terrible approach, and tries its hardest to sneak in bad practices when you're not looking, then you realize they're pretty much a detriment
Also notable is that AI code seems to work great, for a while. It can take months or years before the maintenance problems really come back and bite you, not to mention problems like no longer understanding the codebase for support issues, and most companies haven't been using it long enough to even see those costs, so people working at them think AI is perfect
kagato87@reddit
It chooses the bad approach because its training data is the internet and synthetic or simple problems.
It's REALLY bad for this in sql (my primary domain). Constantly regurgitating the same anti patterns over and over, even when the prompt and steering both explicitly tell it not to.
It's good at repetitive grunt work. Replace this specific pattern (merge to anti join, for example), convert that itvf to mstvf (the declaration and return statements differ), switch this CTE to a temp table (though I still have to remind it to conditionally drop every single time...).
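For anyone outside the SQL world, the "merge to anti join" rewrite mentioned above is the flavor of mechanical edit being described. A rough T-SQL sketch (table and column names are made up for illustration):

```sql
-- Before: MERGE used only as insert-if-missing (a common source of
-- locking and concurrency surprises in SQL Server)
MERGE INTO dim_customer AS t
USING staging_customer AS s
    ON t.customer_id = s.customer_id
WHEN NOT MATCHED THEN
    INSERT (customer_id, name) VALUES (s.customer_id, s.name);

-- After: the equivalent anti-join insert
INSERT INTO dim_customer (customer_id, name)
SELECT s.customer_id, s.name
FROM staging_customer AS s
WHERE NOT EXISTS (
    SELECT 1
    FROM dim_customer AS t
    WHERE t.customer_id = s.customer_id
);
```

Tedious to do by hand across dozens of procedures, trivial to describe to a model: exactly the kind of pattern-for-pattern substitution it handles well.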
Higher level it breaks down. Consistency isn't there (a design feature of AI is randomness), it repeats mistakes, and you have to regularly remind it to delegate to sub agents so it doesn't kill the context window...
VizualAbstract4@reddit
Remember how polarizing NFT discussions were?
Oh, caveat, it had to be with people who had a financial incentive for them to succeed.
Where do you think those NFT bros moved on to? Vibe coding.
And round and round we’ll go.
airemy_lin@reddit
I think your take is behind the times by a few years. I was a skeptic a few years ago as well but definitely not now.
Colt2205@reddit
It's a case where the technology is useful but also problematic due to the politics and financial implications.
From a business standpoint, there is a fear that if it is not adopted into processes somehow the company may not be as competitive as a company that does adopt it, with the only barriers at the moment being the possibilities of data breaches or other types of issues from the incorporation of LLMs.
Then there are the problems that have been ongoing for the past few years with copyright and public domain materials. There was a vision of the internet as a place where people could share information freely, annexed from any political entity, but now that freely shared information is getting farmed by corporations for financial gain. "Poisoning" images, FLAC files, etc., with data that disrupts LLM training has been the primary countermeasure, beyond inspection of AI-generated artwork and taking legal action.
The impact LLMs have on software development and the lives of people engaging in software engineering is just a piece of the entire thing.
flynnnightshade@reddit
It just is polarizing. The marketing for the AI products is basically evangelism; it doesn't match the reality of the tools' actual ability. Leadership at many orgs just sort of regurgitates this marketing straight to their technical staff and writes policies that affect those staff based more or less purely on marketing. At my own large org we have performance metrics tied to token consumption.
On the other side, some folks feel empowered; there are things the tools are good at doing. It has enabled some folks who have no background in software engineering (I'm drawing that distinction on purpose) to produce code, and they feel empowered by that. That's likely to have them singing its praises as well, and this will often leak over to the technical side once those folks start asking why you can't just use the code they produced.
Companies cite it as the reason for mass layoffs (it is not the reason; these layoffs were all planned well in advance), saturating the job market, so many folks have to take worse jobs. There's been a virtual elimination of hiring fresh software engineers at many companies.
Meanwhile the pricing for these tools is so subsidized by all the companies involved we have no idea what consumption of these tools will look like in the future.
You could go on and on, there's very little about the current landscape surrounding this topic that isn't polarizing.
brunoreis93@reddit
Because in the current day everything is
nettrotten@reddit
Fear.
bwray_sd@reddit
Personally I chose this career because I loved solving problems, I love the long debugging sessions and the reward that comes with finally finding the answer, I love working on a product update, putting it in prod, and watching it get used by our customers. The job itself has always been gratifying to me. An old coworker of mine said it best “this is the first job I’ve ever had where I truly learned something new every day”.
If we’re lucky enough to keep our jobs, salaries, and benefits, it won’t be the same job, it will be babysitting agents, writing prompts, and reviewing output, that’s not the job I signed up for and it sucks to see that being stripped away by quite literally the worst companies and people on earth. Not only are they ruining this career, they’re abusing our natural resources, ruining local ecosystems, and it’s already too late to stop it.
But it’s the career I chose, and this is the progression of the industry so while I hate it, I choose to learn to coexist and use it to solve problems, which comes with only a small percentage of the same gratification that solving the same problems used to come with.
cargo_cultist@reddit
Social media is turning any controversy into a flamewar. Probably the avalanche of LLM generated “opinions” doesn’t help.
TonyNickels@reddit
Wealth inequality
LousyGardener@reddit
It’s because there are huge differences in experience and in the variety of things “developers” work on.
Some developers are the automobile equivalent of brake techs. Others are certified engineers designing suspensions for F1.
BillyBobJangles@reddit
Using the tools they force me to use at work is a completely different experience from what I get using AI in personal projects.
If I was basing my opinions off what I saw at work I would think AI is crap and causes more harm than good.
But then I see what the good tools can do at home and am blown away..
I imagine anyone on the "this shit is useless" side hasn't fully seen what AI can do. There is so much slop out there that anyone who hasn't taken a deep look probably only looked at the slop and formed their opinion from there.
throwaway_0x90@reddit
Short Answer: Engineer egos hurt
Long Answer: No point typing it since mods will delete this post soon.
K-Max@reddit
I'm pro-AI, but much like any superpower, it can be used for good or evil or both. Speech is the same thing. A lot of other skills are too.
Bricktop72@reddit
Because it's industry-wide and potentially a massive change. Historically this type of divide has happened with most new technologies. Look at the industrial revolution, the printing press, or even the mass adoption of calculators.
OtaK_@reddit
Like with a lot of things: the loudest people about a topic are often the least qualified about it. And the world tends to like polarized opinions currently, so saying “I don’t care, it’s not for me” is akin to being a complete luddite
phoenix823@reddit
I think the simple answer is that a lot of people look at AI, don't see perfection, and write it off. Whereas other folks see how much it can already do well-enough and understand how big an impact that's going to have. I'm in the latter camp. Expecting an AI to be perfect is holding it to a bar we don't hold humans to in most circumstances.
YesIAmRightWing@reddit
As a contractor am pretty indifferent now
I use it to allow me to do as much work as I normally do and then take the speed gains and use them myself
That's when it works out anyways, when it doesn't work out it's ofc business as usual
Visionexe@reddit
In my opinion, if I look at my surroundings (at work, instead of on the internet), the people who think it's a complete game changer are underwhelming engineers, or people who don't program to begin with: managers, CEOs, etc.
elliottcable@reddit
I repeatedly feel like the same stuff is being missed, on this topic.
Mostly, what a skilled human brings to the table, is 1. breadth of 2. non-direct-business problem domains.
When you ask me to build a Widget, you express domain-specific constraints and problems to me. You need to express those to an AI, too. In both cases (ideally, at least; modulo “they’re still in the oven and have a ways to go”), you, the business-domain person, get out A Thing That Does What You Asked.
I, though, know about all the things you didn’t know to ask. I can spot, in your description or during implementation, ‘that sounds suspiciously like distributed state.’ ‘That’s going to require accessibility review.’ ‘There’s going to be a hot-loop here requiring especially sensitive performance-regression testing.’
There’s an array of, I dunno, prolly around 15-20 “things I know about,” after years of doing this. And, while it seems LLMs can speak effectively on any one of those, I constantly find them breaking down on the reasoning/analysis meta-task of *knowing what the delicate balance between those things needs to be, at this particular point in the architecture*.
Can I invoke some Super-LLM, and spend two hours enumerating every design-dimension I want to be handled carefully? Yes.
Can I apply human direction, and tell an AI to review from one of those particular angles? Also yes.
But getting an AI to keep all of them in its head at once, while architecting, and efficiently deploying the limited resource of reasoning/attention/time/testing/LoC-needing-maintenance-for-a-decade into an efficient allocation between those domains, has continuously seemed impossible for me. (And I have been putting a ton of effort into this, because I fucking hate feeling left-behind.)
Here’s the thing: I, mostly, get the feeling that other experienced engineers are on a similar wavelength (both pro- and anti-camps. And, not PMs/C-suite types; they’re often Full Koolaid in a silly way I’m not addressing here.)
But where the two camps differ most (… to come around to finally answering the OP?)
… is whether one believes you can solve the shortcomings of one LLM, by applying another, wholesale.
While I feel I’ve mostly seen both camps agree with me that, “yeah, sure, when I ask these Amazing LLMs to build me a thing, it’ll often have, say, security holes” … I find that the pro-AI camp tends to come back with “… but that’s the amazing part! I can just tell another LLM to rewrite it, and focus on security this time, and omg it totally does!!”
Which is, to me, just so totally missing the point. Because there’s limited attention-budget, whether you’re a human being or a GPU with a context-window; and re-engineering the entire thing from scratch means you’re going to completely detune the other parameters.
Put concretely: yes, we’re getting close to the point where you can “just tell the LLM to make my app accessible.” But we aren’t, and I am mildly convinced will-never-be, to a point where you can “just tell an LLM to make an app accessible, and oh by the way don’t forget-about-and-fuck-up the sequence-calculus correctness under CAP, don’t introduce any new security holes, ensure the parts that need to be performant stay performant, and thread observability correctly through all components without leaking PII, while you’re at it.”
And you totally can tell a human to do that last part. That’s literally what a human is. An intuitive-leap-making-machine. “This function I’m fixing for an unrelated ticket, shit, this will leak PII.”
We’re not perfect, either … but so far, my personal experience still leads me to feel there’s a qualitative, not quantitative, difference to how human engineers (or even beginners!) I’ve worked with, versus my experience watching Claude 4.7 1M Pro Ultra Max Reasoning Genius Mode unspool its stream-of-slop consciousness.
tl;dr the way LLMs need babysitting doesn’t lead me to believe they can reasonably be a net productivity bonus, even if they get significantly smarter/more-correct on narrow, targeted goals; because they seem to do an inherently poor job of intuiting correlated goals you don’t (know to) enumerate for them, or even of balancing multiple goals that are inherently in tension with the user’s stated goal. (“you can’t do this securely, you can’t do this and maintain accessibility, you can’t do this under CAP…”)
ReflectedImage@reddit
Only been looking at it for a week. It seems significantly faster than writing the code myself. It looks like if you just set it loose to go wild, defects start creeping in. Basically, right now we need to be coding supervisors more than direct coders.
But in 2 years, will it be able to do the job itself?
And furthermore, in 4 years, will a teenager with a couple of spare weekends be able to make and run a fully automated AI-powered SaaS company? It could potentially take out everyone: finance, lawyers, CEOs, the whole lot, and reduce the value of a software company to peanuts.
yon_@reddit
For me it’s multi-fold.
1) I’ve experienced some awful brain fog and feel like my skills have slipped (due to bad work conditions) so I value coding, teaching myself again and upskilling
2) I don’t like how it’s being forced into everything and anything to the point systems become unusable
3) The cost increase of hardware, I built my own homelab, I’ve run out of storage but in no situation can afford the increase to hardware costs, it’s a joke
4) Companies mandating AI usage, my company have made using AI an objective, by the end of the year they want 85% of every team using AI weekly if not daily. It’s a useless metric when you force it down people’s throats so hard
5) Security. With the number of vibe-coded BS apps out there built with zero understanding of security, how are we able to realistically trust anything? Sure, we can usually see they are vibe-coded junk, but spending more time having to sort through apps is just a headache
Outside-Storage-1523@reddit
Because arguing online usually brings out the best of us :)
gjionergqwebrlkbjg@reddit
In my experience it is vastly less controversial outside of reddit.
TheRealStepBot@reddit
Same reason we didn’t get rid of fossil fuels and start using entirely nuclear power. People are scared of the unknown.
Connect_Fishing_6378@reddit
Because on both sides of the debate you have a lot of people talking loudly who are ignorant or legitimately just bad actors.
On the pro side you have executives/shareholders who obviously have a profit motive, and AI bros who think they have a profit motive but much like crypto bros will mostly just lose money anyway. These are the people talking about AGI being 5-10 years away (we have no idea if this is the case), saying AI is going to replace all white collar jobs in the near future, and trying to push AI as a replacement for artists, and trying to inject AI into everything with reckless abandon.
On the con side you have a bunch of pundits and twitter users/resistors sticking their fingers in their ears and screaming; pretending that AI hasn’t actually gotten really good at writing code and certain other things, and refusing to accept that AI is absolutely going to impact basically every industry, change the nature of many jobs, and yes in some cases how workers are distributed across these jobs. Obviously the con side is less malicious and more driven by fear. Nobody likes living in a rapidly changing world, and sometimes it’s easier to convince yourself that the world isn’t changing than to try to adapt.
As with all things, the truth is in the middle. AI is really good at writing code now; it's still not that good at translating requirements into specific product features and architecture/design. It still takes a human to guide AI code generation, validate its outputs, and actively maintain a codebase over time. 97% of the code I've committed in the last 6 months has been generated. Does that mean I've been sitting around gaming while my ralph loop does all the work? Not at all. I've been doing hard engineering work that refines, corrects, and integrates that code.
Also, base model performance improvements are diminishing. A lot of the improvements in felt performance we’ve experienced over the last few months have not been from the models getting generally more intelligent, but things like domain-specific RL and improvements to domain-specific harnesses. A lot of this has been low-hanging fruit, like model architecture and pre-training scale improvements used to be a couple of years ago.
People always say “this is the worst the models will ever be,” and that's basically true, but it doesn't mean the models will get radically better than they are now barring major new scientific breakthroughs.
confusedthrowaway239@reddit
Cause I like being able to maintain a code base and understand why coding decisions were made. Trying to review AI-generated code and maintain code bases with large AI-generated chunks is pretty awful and miserable. I also fear that it's causing the skills of devs who use AI heavily to atrophy, and setting up juniors for failure, either by not giving them the opportunities to build understanding and skills, or by just cutting junior roles in the first place.
But also and more importantly, the way it’s being used and positioned is to drive down team sizes and wages, across all industries. It’s being used as an excuse to downsize teams across many industries, including in roles that aren’t actually improved by it. The social and economic impacts are pretty harrowing, even at this early stage, and if you aren’t seeing them, you might want to check how deep in your tech social bubble you are.
aaron_dresden@reddit
The quality of the output varies greatly, I find, based on the work it's doing, the size of the code base, and the level of undocumented in-house customisation in that language.
Another divider is whether it's helping the dev, or helping non-devs and other teams push code onto devs for review with no prior communication, no domain knowledge, and no validation that it achieves anything. For example, imagine a designer on a different product taking tickets for your tool and feeding them to AI to pump out low-quality changes for review, all in the name of speeding up development.
Then there’s as stated people who are just fearful that management will see AI as replacing the devs, compared to those who see it as a great tool to help them speed up and solve problems. There’s also that feeling that AI has just invalidated a lot of time spent learning programming languages, and how people will keep up skills to validate AI work in a world of AI doing more of the work.
raynorelyp@reddit
Polarizing? Public approval indicates it’s almost unanimously hated.
wrd83@reddit
I'm on both sides. Give it simple tasks and it solves them well; with good tooling you can leave an LLM IDE unattended for longer.
If the task is complex, it will get it wrong. And the task doesn't have to be that complex before bugs start showing up.
The big benefit for me lies in the fact that I can scale up and down usage towards my needs and focus on the harder parts myself and the easy stuff gets solved concurrently.
_hephaestus@reddit
A large chunk of it is that we have a broad SWE job title but there’s a wide variety of jobs represented here with varying levels of tolerance for “move fast break things”. AWS trusting AI with core functionality that relies on performance is vastly different than crud app webdev, and most of the field is crud app webdev. There are downsides to sloppy crud app webdev, but the consequences aren’t severe until you hit billing.
And then from there you have vastly different code quality expectations, some teams will push for a v3 refactor after having just refactored everything to v2, some teams just write lgtm on every PR and merge it in happily.
The way AI hits all of these different personas is different and prompts different results.
Mast3rCylinder@reddit
It's game changing but frustrating at the same time. Just reviewing the output, separating the slop from the non-slop, now takes a lot of time.
My skip also has a habit of asking me something every morning about some piece of code that AI found for him. Without AI he wouldn't have had time to read all of this.
Also **** these skills that run every x minutes and compare the Jira requirements against the merges, even though things change and never get documented.
dave8271@reddit
Why is the AI debate so polarized? Just look at the comments on this thread... "well, because there's two types of developer", "it's experienced devs vs inexperienced", "it's CEOs vs developers". That's why: the good old-fashioned human inclination to split the entire world into "us" and "them", where us is good and them is bad.
dbxp@reddit
Within software development I think it's more polarised in the US than here in Europe. The US has significantly higher wages than Europe and seems to have more status attached to the job. Here in the UK a software dev is like a teacher or a shop manager: a perfectly respectable job with reasonable pay but no special status. The polarisation in the US feels similar to how class can be viewed in the UK. The incumbents don't want new people entering and lowering their status; there is a pay element to it, but the language used makes it seem it's more about status.
ContraryConman@reddit
I think it's a few reasons
First, because when you say "AI" people aren't even referring to the same thing. Some people legitimately have not touched code in months and have automated all but the trickiest code writing to AI agents. They spend most of their time writing spec documents and unblocking the agents when they spin in loops. The second group only uses AI like a context aware stack overflow and auto complete. People in the second group tend not to realize or believe there is actually anyone in the first group.
The second is that there are two groups of how people relate to the practice of software engineering. Some people like writing code and solving difficult problems. These people like getting into the details on things and don't mind being stuck on one thing for a long time. They like a particular domain, language, or stack and like being experts in things. The second type of person likes having finished things. They don't care about the act of programming, as long as stuff gets shipped. To the second group of people, AI is awesome because their velocity went way up. The engineering or the act of writing code never mattered to them, just shipping. But to the first group, you are removing the entire reason this profession is even remotely interesting.
The last is that some people buy into the extreme depictions of the future of AI put forward by OpenAI and Anthropic, of a permanent underclass and of AI eliminating the role of anyone who doesn't adopt it... and some of us are normal
Ok-Entertainer-1414@reddit
It has literally been years now. If it was a real game changer, there wouldn't be a debate about it at this point; we'd know.
I think it's no coincidence that most of the posts sparking this "debate" seem to be written by LLMs. They or their investors are using their product to sneakily advertise it.
It would honestly be insane if there wasn't anyone with a large financial interest in LLMs who was using LLMs to astroturf about how good LLMs are.
hexkey_divisor@reddit
I have a weird take: LLMs are inherently stochastic; they are statistical machines and randomness is part of their functioning.
Sometimes they help, sometimes they don't.
Random reinforcement is a strong conditioning force. It's what makes gambling so addictive.
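That stochasticity can be illustrated with a toy sketch. This is nothing like a real model's decoding stack; the logit values are made up, and it only shows the core idea of temperature-scaled softmax sampling, where identical inputs can yield different outputs:

```python
import math
import random

def sample_token(logits, temperature=1.0):
    """Turn raw scores into a probability distribution and sample one index."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted random choice: the same input can produce different outputs.
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

random.seed(42)           # seeded only so the demo is repeatable
logits = [2.0, 1.0, 0.5]  # made-up scores for three candidate tokens
counts = [0, 0, 0]
for _ in range(1000):
    counts[sample_token(logits)] += 1
# The highest-scoring token wins most often, but the lower-scoring ones
# still get picked a meaningful fraction of the time.
```

Raising the temperature flattens the distribution (more randomness); lowering it toward zero makes the top token win almost every time, which is why "sometimes it helps, sometimes it doesn't" is baked into how these systems work.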
bighappy1970@reddit
Fear and insecurity. People generally don't like change, and people who don't use it day-to-day spout off plausible-sounding predictions of problems without acknowledging that they're coming from a position of ignorance.
This is nothing new, I’ve seen the same basic arguments about large technology changes several times over the past 3 decades.
It will settle down in a couple of years and be a non-issue; use of AI will be expected and boring.
uniquesnowflake8@reddit
Remember what happened during COVID? It was a major shift that brought out heightened emotions and spawned wedge issues
WhipsAndMarkovChains@reddit
We can’t have normal discussions like with every other tech tool because there’s like a weird religious fervor around the people pushing AI mandates.
Minute_Grocery_100@reddit
Most devs will have to start thinking much more like an architect and/or PM. That's a big shift.
steve_nice@reddit
It's because AI is awesome, but the way it was rolled out to the public has been disastrous.
The_Northern_Light@reddit
People are bad at updating priors. Add the extremely rapid pace of AI development and most people being far out of their depth, and you get a lot of people subconsciously reverting to emotional reasoning.
Also there's a whole moral panic thing, fueled by economic anxiety with a side of philosophical concerns we've been primed to have by decades of sci-fi about rogue AIs. (Even though the rogue person who owns the AIs is probably more worrying in reality.)
Some people emotionally need AI to be a bubble. They want to believe the data centers are poisoning their water for no good reason, and won’t hear any evidence to the contrary, on either front. They decided it was bad and that’s it.
Plus people are prone to hype cycles, and this is exaggerated when things are new. In addition to all the SWEs who are really impressed by AI talking about how effective it is, you also have literally all the people who were shilling for NFTs just a few years ago, and more than a few true-believer zealots.
And the reality of how good AI is has changed a lot very recently. We went from something like Claude being unimaginable sci-fi to it not even being state of the art in less than four years (ChatGPT launched less than 3.5 years ago!). People who used to be AI pessimists got converted recently and are actively trying to warn other people about what's actually happening, while struggling to come to grips with what's coming. (Including yours truly!)
Plus there's the scale of the money involved. People are simply not rational about money; they're emotional and narrative-driven first and foremost. And AI or not, I don't remember a time when emotions and narratives have been as charged as they are now. If you're not happy with the way you see the world going but you hear people are spending ungodly sums to build something you already decided was overrated, you're not going to update your priors, you're gonna dig in your heels.
dipstickchojin@reddit
Imagine losing your job and getting rehired at a lower rate after months unemployed, and it was all because a CEO was hard because FORTRAN was invented
hoopaholik91@reddit
Because either you now have a guillotine above your head with a 10% chance every year that it gets pulled, or you're giddy that you have control of a guillotine that you'll be able to drop 10% of the time.
markekt@reddit
It naturally engenders a lot of fear in the dev community, because we have a front row seat to its rapidly evolving capabilities, specifically as it relates to our own sense of worth. There’s a lot of coping from people who take pride in code quality, or claim they do, that refuse to accept that code quality has always been secondary to delivery to an organization. My wife sees AI as mostly about image generation and videos and social media slop. Like most others, she has no idea how much deeper it goes. The people most at risk from AI don’t even know it yet. We do know it.
BoBoBearDev@reddit
AI is an accelerator/multiplier of both human greatness and human slop. And the likelihood of running into human slop is much higher than human greatness.
davearneson@reddit
The vast majority of people debating AI pros and cons are speaking out of their arse.
testy_balls@reddit
It's a love hate relationship
that_tom_@reddit
Existential dread we haven’t seen since the dawn of the atomic age. What does it mean to be human?
ausmomo@reddit
Fear of losing relevance and/or job
CriticalOfBarns@reddit
Preexisting experience.
StubbiestPeak75@reddit
Money