Why developers using AI are working longer hours
Posted by Inner-Chemistry8971@reddit | programming | View on Reddit | 397 comments
I find this interesting. The article states:
"AI tools don’t automatically shorten the workday. In some workplaces, studies suggest, AI has intensified pressure to move faster than ever."
reaper00008@reddit
Because now it's all about prompting and basic knowledge of your field. If you want to speed things up, improve your prompts and your knowledge of the tools.
palestagnation@reddit
Yeah, that tracks. AI removes a lot of blank-page time, but not accountability. So companies see faster drafts and immediately raise the bar. Then devs spend the “saved” time reviewing weird edge cases, fixing hallucinated code, writing more tests, and answering more pings because everyone expects instant turnaround now lol. Tool got faster, expectations got faster too.
hawkeye-zaragso9@reddit
I mean, you still need someone to verify everything, and the AI usually adds 40% more code because it copy-pastes from sources all over the place, so it seems to always add as much code as possible. Then someone has to go over it and clean it up.
Sometimes, instead of writing the 100 lines of code directly, you end up writing 200, since you have to make sense of what the AI actually did. If you had written it yourself initially, you would have seen the connections directly.
It can be really helpful for bouncing ideas, but not in production code; someone should know why the interface and the piping between different things were built this way. Just my opinion of course :)
davidbasil@reddit
I don't care if the company forces me to use AI at work. What I care about is how I'm going to stand out among all the other vibe coders in the market when it's time for a job hunt.
Mahi2081@reddit
Honestly, it's because AI (Cursor/Bolt/etc) makes the coding part so fast that you suddenly have 10x more "other" stuff to do. You finish the logic in an hour, then spend 6 hours on the landing page, docs, and SEO. I started using Runable for the packaging layer just to stop the burnout. If you don't automate the non-code stuff, the "productivity gains" just turn into more chores.
Gold_Flight_9486@reddit
interesting
AfterMeet4659@reddit
Anyone else feel like AI tools made you work MORE, not less?
I keep seeing articles about how AI is making developers 10x more productive. But honestly? I feel like I'm working longer hours now, not shorter. Before AI, I'd hit a wall and stop for the day. Now I just keep going because "the AI can help me figure this out." The scope of what I think I can finish in a day has gone way up, but so have my hours. Also — reviewing AI-generated code takes real time. You can't just blindly trust it. So now I'm writing code AND reviewing code at the same time.
Is this just a me problem or are others experiencing this too?
Substantial-Cost-429@reddit
I feel this. people think slapping an AI assistant on your workflow will magically free time but the reality is you end up babysitting config files and debugging weird environment issues. when I was playing with Claude code and similar tools my setup drifted between machines and I spent hours just syncing prompts and dependencies. eventually I started using Caliber to manage the mess. it's not a cure-all but it has saved me from late-night config hunts. repo: https://github.com/caliber-ai-org/ai-setup
ShoulderHot7822@reddit
With the help of AI, building tools like 8080.ai make it possible to move from concept to MVP incredibly fast: from idea to live application/website within a few minutes!
Qalam_3a@reddit
This aligns with what I’ve been seeing in my own work. AI tools definitely speed up certain tasks—drafting boilerplate, generating basic functions, even debugging—but the expectation shifts from “you saved time” to “you can do more in the same amount of time.”
Instead of shorter hours, we end up with tighter deadlines and more work because the cost of producing code has dropped. The bottleneck moves from writing code to reviewing, testing, and integrating it. And since AI still makes mistakes that need human oversight, the cognitive load can actually increase.
It’s reminiscent of what happened with earlier productivity tools: the goalpost moves. I’m curious whether this is more pronounced in startups vs. larger companies, or if anyone has found a way to actually turn AI productivity into reduced hours instead of just increased output
palestagnation@reddit
Because AI mostly changes *throughput*, not incentives. If you can ship in 4 hours what used to take 8, most orgs just raise the bar: more features, more experiments, tighter deadlines. Plus devs end up babysitting flaky tools, re‑prompting, reviewing AI output. So hours stay the same or grow, expectations just inflate.
Calm-Patients@reddit
AI helps you move faster, but it also raises the bar. Once you can ship things quicker, people expect more features, more fixes, and faster updates. So instead of working less, developers often just end up doing more in the same day.
PhysicalNet73@reddit
It's a really interesting trade-off. Reviewing and debugging AI-generated code is mentally draining in a way that just typing out your own logic isn't. The hours might be longer right now, but the type of work has shifted. I'm trading the tedious typing of boilerplate and reading API docs for the mental marathon of system design and code review. It's definitely tiring, but I feel like it's making me a much better software architect in the process.
coolandy00@reddit
Coding is faster... but developers aren't more efficient. The issue is:
- Context: to tailor the code to project requirements and context, we have to prompt-rinse-repeat. Yes, it's faster that way, but there are a lot of iterations involved.
- Relevant standards: high-quality code depends on the standards applied, and AI code is not first-time right, so there are more iterations to fix the quality or manually change the code.
Stats show that ~30% of GitHub Copilot code is reused.
Context and relevant standards are part of the prep work in a dev's daily coding tasks, i.e., extracting and stitching requirements from different docs, tools, conversations, and meetings, and designing or reusing standards before the first working version is created.
Another thing that slows devs down is unwanted meetings, organizing their work, and prepping for meetings.
Both coding and non-coding activities are why devs still need extra time in spite of AI. It's not just the quality of AI code; it's the work before the work as well.
Per Atlassian's survey, devs spend 10+ hrs/week on such activities.
Prestigious-Ear-3138@reddit
I think the core of what’s happening is the classic “productivity paradox” – tools make a task faster, but the saved time gets re‑absorbed by new expectations instead of giving anyone a shorter day.
AI is giving us a faster hammer, but without a cultural change it’s just letting us hammer more nails.
ShiftArcade@reddit
Honestly, I think the real issue is just how we're using these tools. When I use ChatGPT in the browser, I'm bouncing back and forth constantly. I ask it to write something, copy the answer, paste it somewhere else, test it, it doesn't work, go back to ChatGPT, ask it to fix it, copy-paste again. That back-and-forth is exhausting.
I started trying a different approach — just using the AI tools directly from my terminal instead of the browser. Same tool, but I'm not switching windows constantly. I ask for what I need, it shows up right where I'm working, I can test it immediately without leaving. No copying, pasting, window jumping. It's just... simpler.
Plus, having AI on your computer like that is honestly the tool to rule them all. Everything else you're doing is already in the terminal anyway — your files, your work, your automation. When the AI is there too, it just fits. You're not managing multiple tools anymore, you're just... working.
I think that's what the article's missing. It's not that AI makes you work longer. It's that the way most people are using it (browser, copy-paste, back and forth) creates a ton of friction. If you change how you're accessing it, it actually feels way less exhausting.
GenerationBop@reddit
Between code-reviewing people's AI slop, reading the AI-generated reviews other reviewers leave on my code (a mix of good recommendations and wildly incorrect comments), and partaking in constant discussions about the best uses of AI, I feel like my workday has doubled.
supermitsuba@reddit
Hey, do you work at my place? All I do is read AI generated stories, tests and code. It all sucks.
GenerationBop@reddit
It kills me how quickly people ship a task after generating it, without any self-review. It really is just kicking the can to the next person, only now the company explicitly tells you to do so!
Pitiful-Impression70@reddit
the irony is so predictable lol. AI makes each task take 30 min instead of 3 hours, so now you get assigned 6x the tasks. the bottleneck was never typing speed, it was decision making and context switching, and those don't change just because Copilot autocompleted your for loop
honestly the worst part is the expectation creep. once your manager sees you ship something in a day that used to take a week, they just permanently recalibrate what "normal" output looks like. there's no going back
PunctuationGood@reddit
In my 25 years of experience, I could never understand developers that implied that if you touched the mouse, you were a bad developer. Yes, you confine yourself to the terminal and have learned 500 VIM keyboard shortcuts... Good for you. I never saw a difference in code quality though. Typing is incidental to developing software. I contend developing software should be 80% staring into the void thinking.
SwiftOneSpeaks@reddit
I agree about the degree to which typing impacts (or doesn't) developing software, but I've always found sticking to the keyboard lets me avoid struggling with my 7 (+/- 4) short-term memory registers. Half a second taken to select the correct menu option can derail me from flow, which adds an additional cost.
This could entirely be just me (I'm an AuDHD klutz), and I'm not trying to say you're wrong at all, just offering a rationale for avoiding the mouse that doesn't argue you need to type quickly to program well. Instead it's about being mentally comfortable.
gyroda@reddit
For me, it's about flow.
I used to work with a developer who would never press ctrl+. in visual studio to get the suggested code changes up, he would always switch to his mouse and click on the pop up thing. It always felt painfully slow to me. Being able to do stuff like that (or switch to a different file, or go to definition...) from the keyboard means that I'm not interrupting my thought process with more steps.
OrchidLeader@reddit
I suspect some of those developers are the kind that love the idea of writing a game engine that ends up going nowhere and hate the idea of using an existing game engine and getting to actually ship something.
I think we all love the process of thinking through small algorithmic problems to solve while we’re coding. Some devs progress into thinking about larger and larger problems that require staring into the void more and more. Some devs never get past wanting to tinker with the small problems (non-judgmental cause it is a lot of fun and I get it).
apityesz@reddit
and that is capitalism in a nutshell
BeReasonable90@reddit
Developers are dumb for doing this to themselves.
Too many act like they can do 10x more and then get shocked when given 10x more…finding out they cannot do 10x more.
TheBoringDev@reddit
It’ll go back when everyone gets burnt out, the codebase goes to slop as people stop doing real reviews and you’re shipping less than today. But think of the short term profit!
-manabreak@reddit
Parts of a project I work with that have existed for a decade already have over 30% of their code written by AI. There are modules that I can't even read anymore due to how sloppy the code has become. The lead dev is always boasting about Claude Code and how it always gets the stuff done, but the results are dreadful when you actually have to read through the code and try to make sense of it.
I tried to raise an anonymous hand about the exact thing you mentioned (short term gains over long term goals), and the C level just laughed at the question. Welp.
Tolopono@reddit
Just ask the llm to explain it
-manabreak@reddit
See, there's a problem of correctness here. This particular module is not something the LLM thrives at. I've tried to have multiple models explain or debug parts of it, but they always get some nuances wrong. Perhaps this is because I work with stuff that's not really included in most models' training data, and there's not much code available to train such a model on to begin with.
One example: there's a race condition I'm investigating that boils down to sub-millisecond timing and only happens very rarely (in under 1% of executions). Claude insisted it happens every time the code is executed. After multiple tries and my explaining the code to it, Claude finally understood that it's actually a race condition, but it didn't understand why it happens. It then suggested we add a 5-second sleep to one function so the race becomes more apparent and easier to reproduce. This would be fine otherwise, but after numerous retries, it still didn't get it working. The sleep only caused the whole thing to fail instead of reproducing the race condition.
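For anyone curious why the sleep suggestion was the wrong instinct: the reliable way to reproduce a lost-update race is to force the bad interleaving deterministically instead of hoping a delay widens the window. Here's a minimal, hypothetical Python sketch (not the module in question; the `Counter` class and its event hooks are invented for illustration) that makes a "happens <1% of the time" race happen on every single run:

```python
import threading

class Counter:
    """Shared counter with an unsynchronized read-modify-write."""
    def __init__(self):
        self.value = 0
        # Harness hooks used only to force the losing interleaving on demand.
        self.read_done = threading.Event()
        self.resume = threading.Event()

    def increment(self, pause_after_read=False):
        current = self.value          # read
        if pause_after_read:
            self.read_done.set()      # announce: stale value captured
            self.resume.wait()        # park here so the other thread can run
        self.value = current + 1      # write (may clobber a concurrent update)

def demonstrate_lost_update():
    c = Counter()
    worker = threading.Thread(target=c.increment,
                              kwargs={"pause_after_read": True})
    worker.start()
    c.read_done.wait()   # worker has read value == 0 and is parked
    c.increment()        # main thread increments: value is now 1
    c.resume.set()       # worker wakes and writes its stale 0 + 1
    worker.join()
    return c.value       # 1, not 2: one increment was lost
```

With the interleaving pinned down like this, the bug reproduces 100% of the time, which is what you actually want from a regression test, whereas an injected sleep just shifts the timing and (as in my case) can make the failure mode change entirely.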
Essentially, if the code is treated as a black box that gets written by a model and is then explained by a model, we need to be able to trust the results. Instead, this trust exercise has failed time and time again with the code.
A few years ago, programmers as a collective were saying that code is a liability (which I agree 100%). Fast-forward, we keep producing crap that no one understands and no one can be absolutely certain what the code does, if there are race conditions that only happen very rarely (but with devastating results), or if the code is secure enough for real production use.
What's worse, when the inevitable problems with the code appear, we just pile more AI on top of it. "Ask Claude to rewrite it" will just multiply the errors.
Tolopono@reddit
Use gpt 5.3 codex or 5.4 xhigh on codex cli. Heard great things about it
Princess_Azula_@reddit
Just another symptom of what's wrong with our society.
Dragdu@reddit
There are already ads everywhere about speeding up your company with fully automated reviews and deployments. I expect lot of great moments in the future, that nobody will learn from.
franklindstallone@reddit
The data from the Anthropic studies mentioned is not surprising. Often you learn things better by exercising multiple activities while learning, e.g. reading information and writing down notes.
Aside from doing less to produce the code, you've likely skipped a lot of thinking about it because you didn't write it. So it's no surprise that those who used AI knew less about the library they produced. That's a flaw that will always come back to bite them, imo.
No matter what anyone says I can see people on average don’t read text on their computer screen whether that’s a code review or slack message. The idea that people will get AI to generate code and they’ll carefully study it afterwards is laughable.
kontrolk3@reddit
My stance has been this: good developers before AI are going to be good developers with AI. Their speed will increase, quality will increase (more time to do things right), etc. The issue is bad developers are still going to be bad developers, but their problems (not understanding code, putting out bad code, etc) are going to be amplified far more.
PoisnFang@reddit
That's what I have been saying for years now. It's just an accelerator: if you write bad code, then you will just write more bad code, and same for good.
Downtown_Category163@reddit
I've tried LLMs over and over on code and they've only made my performance worse. The work to upload the affected files, craft the prompt, wait for the response (yay, back to early-80s mainframe response times!), and then find out it's bullshit, or misunderstood, or, worse, a superficially convincing lie, takes way longer than just trying some stuff with edit-and-continue or hot reload.
I've never felt this jerked around by a tool that's supposed to speed me up before
kri5@reddit
Why are you uploading files? What do "affected files" mean? What kind of system are you working on?
apityesz@reddit
All the same… you can point it at a git repo, or set it up with a robust .md structure and a complex prompt. The result can still be so convoluted and full of slop that you have to go back and verify everything. Hours wasted on both ends of the process.
Downtown_Category163@reddit
Why do you need to know any of this lol are you going to pretend this shit actually works properly
flyingbertman@reddit
Because you're doing it wrong. Everyone who is effective with it has it integrated into their IDE and it is looking at and working on the same file you see
Downtown_Category163@reddit
I'm "integrating it into my IDE", but files have to be in the context window for it to work
Do you... do you not know how LLMs work?
flyingbertman@reddit
Lol
enygmata@reddit
At least in VSCode copilot with Gemini/Opus/Sonnet, the LLM is able to open whatever file I tell it to, including whole directories. I believe it is even able to know the current open file. Last Friday I told it to use module "x.y" to implement something and it scanned my virtual environment for 5min until it "understood" how to use it to do what I want on the file I was in. I didn't have to attach anything.
kri5@reddit
I mean, I'm not pretending because it does work to a certain extent when somebody with technical knowledge oversees it, yes. I'm asking those questions as it doesn't seem to match a modern approach of using AI, particularly the "uploading files" part. But if you don't care then ¯\_(ツ)_/¯
paxinfernum@reddit
Yeah, it sounds like they're just uploading files to ChatGPT. That's so ineffective that I can't even imagine anyone doing it.
scavno@reddit
At least with that model the false narrative is obvious. Enter all the local IDE-powered tooling that just does everything for you. At least with the upload model there is some mental activity going on, instead of just passively accepting whatever the LLM tells you is correct.
PepegaQuen@reddit
use claude code or codex or opencode not this super weird rube goldberg machine where you need to upload something somewhere
AcceptableSimulacrum@reddit
I think this is true, but the problem is that management will not be able to differentiate between these people and good developers will be seen as bad developers.
kontrolk3@reddit
Yeah that's true, but also has always been true to an extent. Judging a good developer is hard, they tried to use lines of code, or PR count or whatever and it was all pretty crap. Definitely could get worse though, we'll see
515k4@reddit
Exactly my thoughts. AI mirrors and amplifies our intelligence and diligence. All the AI slop is evidence that most people just have sloppy thinking in general.
Flaxerio@reddit
I definitely agree, I'm trying to develop a feature with Claude and Conductor on my project and it's impressive but also, no matter how hard I try to review the code, I'm lost. Even if I go over the code 10 times I'm never sure I analyzed it right.
Part of it is that it's hard to read a wall of text. Another part is that, contrary to reviewing human code, you can't really understand why an AI did something. If I see something that makes no sense in a human's code, I can easily put myself in their place and get why they might have chosen it. But AI code is as convincing as it is random sometimes, so I need to doubt every line of code even harder.
I've only been using it like that for a few days, but that usage seems very limited. AI may have a place in my workflow but writing all the code isn't it.
mis_juevos_locos@reddit
This is just the concept of a speed-up being brought to coding. It's been around for almost a century now:
Oddpod11@reddit
Yep! And ALL of software development management is directly lifted from the factory floor - "kanban", "agile", "waterfall", "just-in-time delivery", ...
mis_juevos_locos@reddit
Unlike the 1900s, a lot of engineers are cheering this on. I don't really get it.
PoisnFang@reddit
Because I am able to seamlessly build 6 applications at once. IT IS A SPEED-UP, when you are using the right tool.
CaffeinatedT@reddit
Sceptics are suppressed, while emotional hyperbole about the impending collapse of work gets amplified on LinkedIn, reposted, etc.
ThumbPivot@reddit
How'd that meme go? They made AI talk like a middle manager and assumed that meant it was intelligent. They forgot to consider that maybe this means middle management isn't intelligent.
LiquidLight_@reddit
Those engineers have been hoodwinked by the lure of getting their ideas out into the world faster.
In theory, cool concept, you get to build more cool stuff. In practice, MBAs/businesspeople are drooling at getting more for less.
ThumbPivot@reddit
Indeed. If you actually want to get things done fast you want to solve the telephone problem so you don't have to constantly redo work because of endless miscommunications. That means cutting out everyone in the organization who can't produce anything of value. But managers hate this idea for some reason...
sivadneb@reddit
Maybe engineers are afraid if they don't keep up with the latest tech they'll lose their job.
LiquidLight_@reddit
I don't doubt there's an element of new toy syndrome.
But if keeping up with the proverbial Jonses is the only thing keeping you employed, that's concerning.
trash4da_trashgod@reddit
Even though we consider CV-driven development a joke, it's still a thing. People aren't crazy; they're just adapting to a crazy world.
DevB1ker@reddit
That's not new. They've always had to keep up with the latest tech or fear losing their job. It's the nature of the industry and if someone can't handle it, they need to find another career. Or accept the drudgery of maintaining old systems for low rates.
kn4rf@reddit
The Agile Manifesto was about people over process, and then the process people got ahold of agile...
silent519@reddit
they are trying to become managers
kri5@reddit
Because AI does legitimately shorten the time to do some boring repetitive tasks, allowing you to get to more interesting problems quicker. However, they don't realise management won't see it that way and focus on MOAR
Absolute_Enema@reddit
American culture is cooked.
-manabreak@reddit
I'm not an American, and I've always worked at European companies. Nowadays, the American mentality is leaking heavily over here, and it's worrying. Constant striving to improve too much, aiming too high, being hyped about things that are not ready yet as if they were, performative enthusiasm where every FUCKING thing is so awesome (like a new procurement system or a new way of creating travel reports, what an awesome day we're having!)...
It's so weird seeing all this when you have always been in companies that are solution-focused and have realistic expectations of what we are capable of delivering and what the impact will be. People are happy when there's a reason to be, enthusiastic about things to be truly enthusiastic about, not enthusiastic for the sake of it.
Akin0@reddit
agile is just a way to implement constant deadline pressure.
quantum-fitness@reddit
Theory of constraints isn't about using more effort to do more; it's about using effort where it matters.
JoaoEB@reddit
I hate, hate the idea of a "software factory". If there were a software factory, I would be putting an IF statement between lines 1289 and 1290 every day from 8 to 5.
Software is at most a manufacture, but usually an atelier.
Downtown_Category163@reddit
The whole concept of a "software factory" is to have a process for building and shipping software that is at least documented, if not entirely automated, rather than having Fred rock up to a server at 3am and do deployment magic somehow.
The actual creative work of software is completely separate from this, I think internal "unit" testing has done way more damage than any other methodology to the craft TBH
KvDread@reddit
This is a very accurate point!
yotemato@reddit
Despite the fact that it’s very different from physical manufacturing. I’ve done both, and development is more akin to writing a book than building a widget.
SnugglyCoderGuy@reddit
Writing an instruction manual
Shogobg@reddit
Still, management wants you to do more: you're "not contributing enough."
RiftHunter4@reddit
FIFY. Everything from public education to how we hire and fire is rooted in practices from 100 years ago.
destroyerOfTards@reddit
A britannica link? Have we time traveled?
Mightygamer96@reddit
Taylor's scientific management only works for repeatable factory output; when it's designing a system and navigating a complex project, it falls short real quick.
Pathentic666@reddit
exactly. It's basically the first law of thermodynamics applied to software engineering: productivity isn't "created" out of thin air.
the energy just shifts from writing the syntax to refining the prompts and debugging the AI hallucinations. you don't actually work less.
tecedu@reddit
Noticed it with a colleague. They're addicted to it like someone is to TikTok or Reels; it's basically their short-term dopamine.
Alpacaman__@reddit
All this AI stuff has made me more interested in my work than before. It’s exciting to see what new technology can do.
Erehybog@reddit
I agree when it comes to building personal stuff.
But at work? Fuck no. I can't imagine being excited about anything that makes my bosses richer at my expense.
Alpacaman__@reddit
I still work the same amount of hours and get more interesting work done than I did before. I get to learn new skills on the company’s dime.
If I had to learn all this stuff without a corporate Claude subscription it would be a lot more expensive.
FuckOnion@reddit
A lot more expensive for your boss, you mean? Working pre-LLMs never stopped me from intentionally taking time to learn stuff at work. Nothing wrong with that; it benefits both you and your employer.
I don't doubt that you're learning more with Claude's help, but my brain doesn't work that way. Unless I write (and to some degree, struggle), the knowledge doesn't stick.
If I stop learning, it's all for nothing. Knowledge work is about knowledge, and you lose it if you don't use it. That's why I'll make sure I don't cheat and offload my thinking to LLMs.
Alpacaman__@reddit
But by not using LLMs you’re not learning to use LLMs right? I’m pretty confident heavy AI usage is the way the industry is going. The thing I’m learning most by using AI is how to effectively use AI.
I’m betting that next time I’m looking for a job that will be a skill that’s in demand.
tpjwm@reddit
The skill that takes about 10 minutes to master?
Alpacaman__@reddit
You’ll have to tell me your workflow if you mastered it in 10 minutes!
tpjwm@reddit
Ah right you have a workflow. I’m skeptical of people who use that term. Especially when they seem to change their workflows twice a week.
manwecrust@reddit
The workflow is to tell it to don’t make mistakes.
CyberDaggerX@reddit
And roleplaying with it. The roleplay is important.
Sokaron@reddit
Even a hammer, the simplest possible tool for the simplest possible problem, has technique to use it most efficiently. Why wouldn't using AI to develop software at scale have technique? And in an extremely young field where the state of the art literally changes by the month, why wouldn't that technique evolve equally as fast?
raobjcovtn@reddit
Because AI bad!
-Leredditor
9uYx3QemUHKy@reddit
Need 10 more minutes?
Alpacaman__@reddit
To each their own
Fedcom@reddit
I feel the opposite actually. Before learning something new was a competitive advantage that I could hope would be beneficial in the future to my technical career. Now it’s useless, if I’m getting an LLM to teach me, it’s no longer a competitive advantage for me, and my motivation is gone.
Instead I’m trying to focus on things LLMs can’t learn inherently
Alpacaman__@reddit
You can gain a competitive advantage by learning to maximize your output in a world where LLMs exist. That’s a rapidly evolving field with plenty of room for creativity and differentiation
Fedcom@reddit
Where’s the competitive advantage there? The promise behind LLMs is that they’re supposed to get better and better. I can learn to maximize my output with Claude and see that skill set being totally obsolete in a year.
The IDE/workflow that everyone is claiming is the bee’s knees is like 6 months old. I didn’t even know about Claude until 2 years ago lol.
Everyone was harping on about “prompt engineering” and context management and that seems old hat now? Multi-agents are apparently the new hot thing. It’s like JavaScript frameworks.
The only winning move is to focus on things LLMs can’t do, for sure.
Alpacaman__@reddit
You’re right that the state of the tech changes all the time and what the optimal workflow is at a given time is very uncertain. I think that’s one of the things makes it so exciting.
xFallow@reddit
You get zero excitement out of work? Why not change jobs
Tolopono@reddit
$$$
xFallow@reddit
Surely there’s something more enjoyable for a similar pay though? Big tech has been the most fun for me and they pay the most
Tolopono@reddit
Not for a bachelors degree
xFallow@reddit
I work in big tech with a bachelor's, as do most of my colleagues. You don't have to be a genius, you just have to be relatively driven and curious.
regprenticer@reddit
What job do you propose brings personal excitement?
I read Roald Dahl's autobiography in which he flew a biplane single handed from England to India to work for Shell, stopping off midway to fight in WW2.
That kind of "Excitement" is no longer open to people because the world has far more rules in it than it did then and there are no "frontiers" in the way there used to be.
As recently as 1890 there was a part of the USA with no law whatsoever. (The "Pan Handle").
If nuclear war with Iran or Russia has a positive it's that it will put us back into a world where a normal person can adventure, explore and claim land for themselves.
PoisonSD@reddit
Yeah I’m actively hating working more now because of it, it’s boring and I did not learn this stuff to be a prompt engine.
Icy_Butterscotch6661@reddit
Folks like you should also use it and poison the data maybe by putting bad code out there
PoisonSD@reddit
I would love to, the only problem is that they don't collect much new data anymore, they mostly rely on "clean" testing data that they've collected before, at least that's what I heard about a year ago. When they collect new data it goes through a lot to become actual training data
Tolopono@reddit
Then I suggest finding a different career. Refusing to use ai at this point is like refusing to use anything except assembly
PoisonSD@reddit
Where did I say I refuse to use it? I said it’s boring now lol. My company is actively going from everyone using it to using it less because it’s too expensive and not producing expected results.
Tolopono@reddit
Every swe ive seen loves claude code and openai codex. Some even pay out of pocket for it
PoisonSD@reddit
Ok? Then you’ve just met one that doesn’t. There’s so many people I know that don’t, especially because of the slop it’s producing within the company with poorly thought out PRs.
another_dudeman@reddit
It's not always boring. It's sometimes funny to see the shitty fixes Claude proposes.
ThrowawayOldCouch@reddit
AI has not made me more interested in work. Nothing about AI is exciting to me.
LocoMod@reddit
That’s because your work is not your passion. Which is fine. Most people are not fortunate enough to get paid to perform a hobby that the world finds valuable. For those of us who are adults still playing with LEGO, building robots, taking things apart to understand how they work, constantly looking for ways to improve and optimize, people who have a genuine curiosity to understand the world, AI is a godsend.
Those people are also not using AI like most people. They’re not chatting with it. Those people are also not “just” using LLMs. AI is vast.
PoisonSD@reddit
As someone who loves to do all that, AI is a cesspool. I used to love my work and it has drained all passion and joy from it. Critical thinking skills also suffer from over-reliance on it.
LocoMod@reddit
You have to raise the ceiling and work on more complex tasks. Get more ambitious. Build something you thought you never could. The AI is not going to roll out a perfect working project without a lot of effort on your part. It will if you are having it write the next TODO app. But if you work on actually interesting problems, then it is a joy. Go try to solve something difficult with it. Build a platform around your life. The AI is there to help you do it in a few months instead of a decade. Critical thinking skills should only increase for you as you up the difficulty of what you work on given the tools available to you. AI is one of many. There are a lot of things we no longer have to think about because an IDE solved those problems. Or some new piece of hardware. Etc.
I think people are just finding cheap excuses for their own problems and blaming it on AI.
There is a lot of slop and noise, without a doubt. That has nothing to do with what YOU can do with it that has value to YOU.
Also, a lot of people used to derive enjoyment out of "showing off" what they did. They were in it for the validation from others. "Look how smart I am", "look how hard I work", "look how talented my art is".
Yea, if that was the purpose of your grind then AI will definitely crush it.
Those who don't need that outside validation and enjoy the process itself are fine.
ShinigamiXoY@reddit
Sulk mors
Kenny_log_n_s@reddit
StoBY
ThrowawayOldCouch@reddit
StoBY Maguire?
suggestiveinnuendo@reddit
good night you princes of context, you kings of inference
xFallow@reddit
Same here man I haven’t been this excited about my job for a decade
Painful tasks and pocs can be cranked out in minutes and if they don’t work the way I want them to I just delete the branch and explore another solution
It’s so easy to iterate now
sluggerrr@reddit
You're on the wrong sub, this is an anti Ai circlejerk subreddit.
xFallow@reddit
My bad king I'll let the jerkers jerk
Empanatacion@reddit
This is the most fun I've had in a long time. This flipped the table over and now we all have to scramble to figure out the new rules.
I mostly like having an infinitely patient tutor. My mental reach is so much further.
twigboy@reddit
Work added a $ cost limit to AI use per developer; some even burned through it all in a day.
The backlash was real and I can definitely see the similarities to addiction there
Tolopono@reddit
If ai is useless as this sub claims, how can it be addictive lol
twigboy@reddit
"One more prompt and this bug will disappear"
Tolopono@reddit
A popular SWE YouTuber offered $500 per problem that gpt 5.3 codex can't solve. He didn't get a single valid one https://xcancel.com/theo/status/2028356197209010225?s=20
buttflapper444@reddit
A friend is working in an AI product capacity, she's totally brainwashed. I can't even believe it. It's all she talks about
B-i-s-m-a-r-k@reddit
Seeing the same thing on my team. Literally being told by our execs that our value as devs is being determined by how many tokens we use per day. Hate it.
IceNorth81@reddit
Yeah, it’s so magical and fascinating for someone who has been in the field for 20 years. I can’t help myself; I’ve spent several hours after work on private projects. Moving from concept to actual product in an evening is an amazing feeling. What took a team of 3-4 developers 2-3 months can now be done by one developer in a day. It’s mind-boggling!
zerd@reddit
Like Factorio, the factory must grow.
rafaturtle@reddit
I one hundred percent relate to that. Thing is, it shortens the cycle between having a great idea and the great feeling that I built something. It's addictive and I can't stop. Help
cake-day-on-feb-29@reddit
i can't believe this is why RAM is $1200 now :(
tecedu@reddit
RAM is 1200 because everyone is buying ram and due to the losses in 24 they scaled back production. Your inference is relatively lightweight compared to the training
yawara25@reddit
If you don't know what you're talking about, it's not like you have an obligation to make a comment on it.
tecedu@reddit
guy replied to my comment and i replied back? You can easily look up how much micron lost in 2023
https://investors.micron.com/news-releases/news-release-details/micron-technology-inc-reports-results-fourth-quarter-and-full-7
Shogobg@reddit
This doesn’t show any losses, but higher profits from AI DC stuffs.
tecedu@reddit
Did you not read my sentence? Or the link at all?
brown-man-sam@reddit
I’m sure companies like Micron (one of the largest producers) leaving the consumer market to focus on AI demands has nothing to do with RAM prices increasing.
tecedu@reddit
Yeah, but AI DCs are one of the consumers of RAM too, just not consumer-focused. Focusing on one of those sectors led to one of the biggest losses the company ever saw; focusing on the other meant the biggest boom it ever saw.
The fact that there wasn't enough capacity in the first place was shown in 2020, idk why or how people are acting surprised
honorspren000@reddit
It’s seeing results quickly that’s addicting. Coding was a slow art form for many, many decades.
Perfect-Campaign9551@reddit
Exactly why the article is wrong. It says the company pressures them
No, the devs are excited about the tools and want to use them more.
Orinslayer@reddit
Their brains are smoothing out.
XenoX-YU@reddit
Ending up chatting with AI :)
k8s-problem-solved@reddit
I do the same amount of work for my day job. They pay me for the hours I'm there, I'll do however much I get through
Now, my side hustle, I'm absolutely smashing every minute I get into that!
Fantastic-Cress-165@reddit
Is this a causation or correlation here?
NoPower4119@reddit
AI tools are not automatically making the workday shorter. In many workplaces, they are increasing pressure to move faster and produce more. Have AI tools made your work feel lighter or more intense?
ThumbPivot@reddit
this is unsurprising. devs who do not have the balls to tell their manager to fuck off and stop meddling will generally be doormats for every kind of abuse
nightwood@reddit
Be assured, all the time AI saves you, and all the extra work AI allows you to do, are to the benefit of your boss. Not you. And at the same time you are no longer doing the work you wanted to do, but instead chasing that AI. Yet so many, maybe even most, programmers choose to go with AI out of fear that some other programmer will use AI and take their job if they don't. And this is, again, how the lack of character in so many programmers makes us lose and the marketing bros win. Again, we are giving the power and money away.
another_dudeman@reddit
It is VERY situational whether you will save any time at all.
clrbrk@reddit
I would have agreed with you prior to the Claude 4.6 models and agentic looping. If you’re not more productive, at least at writing code, you’re using it wrong.
another_dudeman@reddit
Sounds like a skill issue bro
clrbrk@reddit
Hilariously, you’re right. It’s your SKILL.md issue.
another_dudeman@reddit
That was the joke
clrbrk@reddit
Dang, woosh 🤣
clrbrk@reddit
Keep believing that.
F3z345W6AY4FGowrGcHt@reddit
You clearly don't work with large legacy systems held together with proverbial tape.
LLMs are fine at helping with boilerplate or if you're trying to do something basic.
But ask it to help you do something more advanced and it regularly falls apart.
Personally I think a lot of AI hype from coders is because they just weren't the best coders and it's a crutch to help them do things they simply couldn't do before. But these same coders will be even more helpless when they hit those walls where the AI is not able to solve it for them.
AiexReddit@reddit
I'm genuinely curious, not in an antagonistic or oppositional way in any sense, but actually looking to understand -- have you been using the latest models from the past couple of months, and if so, in what specific ways do they fail when presented with complex tasks on these systems?
My personal experience is coming from a very similar perspective on AI tooling throughout all of 2025, rolling my eyes at hype, and finding them mostly useful for generating comments and basic unit tests. I had a pretty big "holy shit" moment when I tried opus 4.6 in January and it could actually legitimately handle complex tasks on large systems, to the point where I have legitimately been finding it very difficult to find stuff it can't handle.
Granted I wouldn't describe the codebase as full-blown legacy "duct tape" but it's 10+ years old with hundreds of thousands of lines of code, across Typescript, Golang, C, Rust and makefiles, etc -- and it's been pretty mind boggling how well claude/cursor are able to handle it.
To give a specific example, last week I asked it to help identify potential causes of a mutex deadlock, which involved interactions across FFI from a react frontend calling WASM functions from compiled C/rust code, and not only did it identify the problem, but it also gave a bulleted list of three potential solutions to eliminate the deadlock and adjust the API to make it less likely to happen in the future.
Don't get me wrong, I'm not a genius, but I've been writing software for 10+ years at decently sized tech companies and tackled many large-scale projects. I may not be the best coder in the world, but I'm pretty confident that I know my shit better than most, and I'm genuinely finding it difficult to throw problems at the newest models that they can't solve -- so I'm legitimately interested in the kinds of tasks they can't handle.
Fabulous_Warthog7757@reddit
As someone who doesn't know what half those things are, I'll trust you on that
AiexReddit@reddit
Just for fun, since I like explaining stuff, and it helps reinforce my own understanding. I'm gonna make a guess about which terms, so forgive me if I miss any
Mutex - A data structure whose job it is to protect memory from data races. Say you have a reference to some place in memory and a function that writes to it. In single-threaded code you can call that function as many times as you want with some input, 5, 6, 7, whatever, and be able to guarantee it'll always have that value written to it after. But in multithreaded code it's possible to have functions writing to the same place simultaneously. E.g. you can call it with 5 from thread A and 6 from thread B, and the value written ends up being like... 9, or something that is a mix of the binary bits of both. It's one of the worst classes of bugs in parallel code, so languages (like Rust and C++ that support it) have Mutex types whose purpose is to "lock" memory addresses while they're being written to, and other threads block until that lock is released, so you never have data races or undefined behaviour
Deadlock - The horrible thing that happens when you accidentally try to lock a mutex twice without releasing it. E.g. your code is `mutex.lock(); do_something(); mutex.lock();` -- since you didn't unlock/drop the mutex after the first lock, it's going to block forever (deadlock) on the second call to `.lock()`, since the same thread is already holding the lock and the code that drops it is past this line and now unreachable. This example is very simplistic, but in practice the deadlocks are usually across modules or in some far-away function, notoriously difficult to track down, and hard to write lints for. Anyway, this is what the AI tool was able to debug really well.
FFI - Calling code across programming language boundaries. E.g. calling a function written in Rust that was compiled to WASM and running in the browser, from Javascript/React code. Usually only done if the performance benefit is worth the complexity tradeoff, but it's also really nice for multiplatform (e.g. you can also compile the code for mobile and desktop targets and use the same business logic on android, ios, windows, etc that you do in the web)
WASM - The standardized binary format that many languages can compile to (C, rust, Go, etc etc) originally designed for running in the browser, but technically portable enough that it can run anywhere
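The mutex/deadlock explanation above can be sketched in a few lines of Rust. This is a toy example (not the commenter's actual code): a second `lock()` on the thread that already holds the guard would hang forever, so the sketch uses `try_lock()` to observe the held lock without actually deadlocking.

```rust
use std::sync::Mutex;

fn main() {
    let m = Mutex::new(0);

    // First lock succeeds and returns a guard that holds the lock.
    let guard = m.lock().unwrap();

    // Calling m.lock() again on this same thread would block forever
    // (the deadlock described above). try_lock() lets us observe that
    // the lock is held without hanging:
    assert!(m.try_lock().is_err());

    // Dropping the guard releases the lock...
    drop(guard);

    // ...and now acquiring it succeeds again.
    assert!(m.try_lock().is_ok());

    println!("ok");
}
```

In real code the guard is usually dropped implicitly at the end of a scope; deadlocks creep in when a function holding a guard calls another function that tries to lock the same mutex.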
clrbrk@reddit
This is exactly my experience. Prior to 4.6, I would have completely agreed with the person you are responding to. The new models, with well-defined skills and agentic loops, are shockingly effective at solving complex problems in legacy code.
Ashmadia@reddit
Mine as well. The last couple of months we’ve seen a massive difference in the way these models work. I work on a 15 year old application, distributed amongst ~30 separate backend processes. Tasks that used to take a senior engineer days to implement can be done in a couple of hours with 4.6.
clrbrk@reddit
I’m a senior engineer. I work in a 12 year old rails monolith. Even in January I would have agreed with you. I’m telling you, the new models are different.
roscoelee@reddit
I’ve really been giving AI and agentic models a chance. If anything, it’s just really shown me that generating the code isn’t the bottleneck. At best, using AI I work at the same pace. Most of the time it has taken longer by the time I’m ready to ship than if I had just started by myself. What I really don’t like about using an AI while I work is that I don’t understand what is being done while it’s being done. If I just work on it myself, then I’m building my understanding all the way along.
Don’t get me wrong. It is helpful to bounce some ideas off of and maybe get some syntax, but by the time I have the syntax memorized it’s no faster than just writing the code myself.
clrbrk@reddit
If you’re just using it to vibe code you’ve only scratched the surface. Start building skills that plan, do the work, and self review in a loop.
roscoelee@reddit
Definitely not vibe coding with it. My project is planned out I have requirements. I’ve written the codebase that already exists and when I use an LLM I review the code that it produces so that I understand what I’m introducing.
clrbrk@reddit
Sorry, I didn’t mean that to sound insulting. A lot of my coworkers are saying things like “I asked Claude code to and it didn’t get it right in one shot, therefore it is useless”.
If you’re using Claude Code, try installing the “skill-builder” plugin, then just conversationally build out a skill by talking it through your thought process as you work on a task. It makes a HUGE difference over just prompting.
chucker23n@reddit
The bottleneck was never how fast someone is at producing code.
clrbrk@reddit
It wasn’t the only bottleneck, but the throughput has been significantly increased.
InvestigatorFar1138@reddit
Being more productive at generating code is not necessarily an all-around boost though; you still need to review and validate heavily. I found the actual speed-up for me is not that high, even though Claude generates first drafts of PRs in a minute instead of the 2-3 hrs it would take me
clrbrk@reddit
I agree, but it is going to take time for our workflows to catch up to the new reality. I’m spending more time reviewing MRs than I ever could before, and if everyone on my team does that then we all see increased velocity.
MintySkyhawk@reddit
Should at least be using it to make ephemeral code. For example, I used it to slop out a test to capture the actual queries being sent to Opensearch before I refactored for a major version upgrade. Now that it's done, I can delete the code. It served its purpose and no human ever had to waste time reading or writing those thousand lines of code.
sarhoshamiral@reddit
It absolutely saves time when used correctly. There were many tasks I dreaded before because I had to write a small script but couldn't remember the syntax. Now I don't need to remember; I can just give it pseudocode and details about the algorithm, let the model write it, and fine-tune it later. While the model is doing that I can go work on something else. It is essentially like having an intern that knows the syntax/core APIs for the language really well. You still have to provide the rest of the hints to make sure the output is high quality.
The big question is cost. I am able to do this because we have unlimited tokens now. Once that changes, it will be interesting to see how things balance out.
another_dudeman@reddit
I'm happy it helps you write code. I never needed help with syntax. It's a nice little rubber ducky though.
mpyne@reddit
That was always the deal though. If you work as a team member of a larger organization and get paid to help the organization succeed, that is part and parcel of what it means.
If you just want to do things that only you care about, you need to become your own boss. But why would you want to be part of a team but have only yourself benefit and not the team?
painedHacker@reddit
Well, the job is to provide technical solutions, not necessarily to write code. That being said, no one should be working longer hours because of AI. That's insane.
OrchidLeader@reddit
I wish I could get my coworkers to understand that. Probably half of our sprint demos are spent on devs showing off all the code they wrote and work they did as opposed to what they actually accomplished. They see demos as a “show and tell” and not “here’s what the stakeholders actually care about.”
Upset_Albatross_9179@reddit
I feel like this is a general trend in new disruptive tools. Whether they end up actually providing a benefit.
Workers have to arrive at some collective understanding of what is and isn't acceptable. Something comes along and disrupts that. Management moves up the expected output with this new tool, and of course moves it too far to get more out of workers. And then it takes time for those workers to figure out the difference between normal keeping up and unreasonable management demands.
No-Two-8594@reddit
turns out to be a 0.25X tool
another_dudeman@reddit
But at least you can say you did it with socalled AI
stumblinbear@reddit
I use LLMs for the things I don't enjoy doing, so I can get back to the interesting work. I don't want to go digging through three decade old win32 APIs to make a window do what I need it to do.
clrbrk@reddit
I suppose you still code in note pad too?
AI is just another tool that we can use to be more productive. It’s up to us to determine when it is the right tool for the job. Right now it does feel a bit like having a hammer and only seeing nails.
RICHUNCLEPENNYBAGS@reddit
That is sort of the basic principle of employment.
__loam@reddit
This profession is so anti-labor and anti-union and pro-capital that I believe we deserve everything that's coming. I've been working as a programmer for 10 years, and every time I mention labor organization, someone perks up and explains all the downsides and why they don't need it in an industry where the largest and most profitable enterprises in human history fire 10-20% of their workforce annually.
AnchovyKrakens@reddit
I agree and this will be the case most of the time. One could also argue that AI can empower people to start their own side business way easier than they used to be able to.
Individual-Praline20@reddit
No shit, wonder why… Maybe have to debug it to make the slop work maybe 🤣
Socrathustra@reddit
So, I hate AI and think it's going to blow up on us in very predictable ways, but Claude Code recently got to where I can trust it fairly well. I have to use it because of work mandates, but I have also noticed this issue from the article, and it's not about debugging slop. It's about the fact that you essentially have a factory for producing code, and it feels wasteful not to keep it running 24/7. I have it break down the code into small enough steps that it's actually really easy for me to debug and for others to review.
Even literally as I type this I'm thinking to myself, "I could get Claude to do a bunch of shit for me over the weekend."
Sea_Shoulder8673@reddit
In my experience Claude still has trouble generating code that compiles
Fabulous_Warthog7757@reddit
I haven't had that issue in over 6 months. Back when I first started using Claude Code for programming in late 2024 it was 50/50 or worse if it would compile, but I don't actually remember any time it's failed to compile over the last few hundred compiles I've done.
boringfantasy@reddit
What fucking language are you working in? Odin?
programming-ModTeam@reddit
Your post or comment was overly uncivil.
Socrathustra@reddit
I'm pretty sure there are at least a few hundred people working to configure Claude specifically for us. It compiles no problem. I'm also very specific about what I want it to do, which helps.
Sea_Shoulder8673@reddit
Claude may compile but the code that it generates doesn't always compile. Still hallucinates a lot of functions
golf1052@reddit
Claude shouldn't be "compiling" code. It should be running your build process to verify that the code does compile.
AiexReddit@reddit
What model are you using? "Claude" is a brand and versioned system. Opus 4.6 was the turning point for me when it mostly stopped hallucinating.
Also, are you using it in agent mode? If you're instructing it to build and run your test suites as a requirement of the task, it's kind of impossible for it to hallucinate, since it'll be running the test suite and parsing the error output as a feedback loop to fix any hallucinations even if it had any.
Socrathustra@reddit
I've had upwards of three instances running nonstop all week, and it hasn't hallucinated a single function. Yes I'm serious. It has made some errors, but it was able to fix them with minor prompting.
g3ck00@reddit
Something like that which is clearly verifiable is usually easily solved. You can just force it to verify its work (i.e. the task is not finished until the build passes).
Relative-Scholar-147@reddit
Ye bro... recently, in 6 months... stop with this bullshit.
pw_arrow@reddit
I'm not sure why it matters how long OAI has been around. The models have objectively improved significantly in the last 10 years.
I'm not going to pretend I can speak for the industry or that my foresight is particularly good, but I can say within my circle, there is a sense we've hit an inflection point where AI is here to stay as a useful tool. I'm not going to make any predictions about the Death of the Programmer, but anecdotally Claude Code and Antigravity are genuinely useful tools at this point, especially for generic enterprise slop.
Relative-Scholar-147@reddit
Yes bro, models are getting better every day.... another parrot.
I have read it a million times in the last 10 years.
ClownEmoji-U1F921@reddit
Who is 'we'? I want to watch your stock price dwindle.
pw_arrow@reddit
Can you elaborate why it matters that OAI has been around for 10 years? I still don't really understand the point you're trying to make here.
It's objectively clear that the models have made incredible leaps in progress in the last few years. Surely we can agree on that? Recent research already indicates model progress will not continue to scale exponentially with parameter count, so it's certainly possible progress levels out. However, the experience of most people I've spoken to is that the current models are already proving themselves useful in some capacity, and sentiment amongst us has shifted to believing that AI will stick around for the long haul in some shape or form.
Anyways, take it easy. Maybe I would get fired at your firm, but safe to say I definitely do not work at your firm - I sure hope I don't end up as your colleague, because you seem like a pain to work with.
Socrathustra@reddit
I am still highly skeptical about the future of AI for a whole bunch of reasons, but it is night and day compared to last year. Last year I would only ever use it for tests, and it wasn't even good at that. In the last few weeks it's gone from crap to very good.
Relative-Scholar-147@reddit
Yes, bro, it has been like this for the last 10 years.
TheBoringDev@reddit
Bots must be out in force today, they’ve literally been saying that since GPT 3 launched. Same thing when any paper showing that AI falls apart on real world problems gets published, “oh those are the old models, of course it failed on those”.
ReeseDoesYT@reddit
I mean, it's objectively really cool what it can do, and as a hobbyist who didn't have time to spare before, this has let me actually make real progress on my ideas. Just got to make sure I make it do things in small chunks so I can review the work in case it did something really dumb (most of the time it's solid though). And it's only getting better almost daily now
linuxwes@reddit
Also Claude's 5 hour credit windows. "It's 7pm and I don't really want to work, but I know my Claude credits just refreshed and it would be a shame to waste them".
pw_arrow@reddit
Credit windows aren't relevant to an enterprise plan though, are they? Which feels like the most relevant demographic for this topic (longer hours).
linuxwes@reddit
Unfortunately my work won't buy it for us so I bought my own (with my bosses approval).
pw_arrow@reddit
Hey if you get value out of it, upper management might change their mind ;)
ReeseDoesYT@reddit
I caught myself with this when, for a week straight, I made sure to be awake at 2 am to use those credits, leaving it doing token-intensive tasks. After a week I realized I was being unhealthy and miserable for maybe an hour of added productivity.
Although it seems Anthropic just released scheduled tasks so maybe it's possible to make use of the credits without the old negatives
faberkyx@reddit
I'm using Claude Code with Opus 4.6 and I must say the code is almost always clean, with very few of the hallucinations that plagued previous versions. It can do refactors and help a lot with tedious repetitive tasks. I used it to port old legacy software to modern frameworks and it did an excellent job; very few bugs, and the code has been pretty much ok so far. I use it for creating documentation and presentations that would previously take me a few hours and now take a few minutes. It's a powerful tool, and as with every tool you need to know its limits and how to use it properly. If you expect it to create a new project from zero and deploy to production without testing it, well, that's mostly your stupidity in doing so
choseph@reddit
Exactly. I used to have a long to-do list of things I wanted to do. I'd naturally throw out things that didn't make sense to start since I knew I couldn't find time to finish them. That isn't the case anymore: start all the things. And with things like agent-clubhouse and more, I have a command and control center where it feels gamified when I keep all the context in my head, jumping back and forth, unblocking agents or correcting and guiding. Lots of little dopamine hits.
ClownEmoji-U1F921@reddit
Or they fired the junior devs and it's 1 person doing the job of 10.
m3kw@reddit
Companies don’t give a fuck if you say "I did 8 hours of work in 2" and let you go home by 12. You’re gonna work the full 8 with the speed-up and tools. The salary likely won’t reflect the higher production either.
v_murygin@reddit
AI speeds up the easy parts but the hard parts take just as long. So now you produce more code per day but spend the same time debugging and reviewing it. Net result: more output, same hours.
ManualPwModulator@reddit
Because of refactoring all that infinite generation crap to make it work and keep the business from failing, while management still gives credit to the “new way of working”
robby_arctor@reddit
My coworkers think Claude is great for refactoring. They think we are suffering from a skill issue by not accepting spaghetti. 😂
sp3ng@reddit
Annoyingly, refactoring is already a solved problem. Code is a data structure and some absolutely phenomenal tools have existed for decades that allow small changes to code structure (but not behaviour) to be made incredibly quickly and safely. AI is a far less efficient, far less correct, far less safe tool that operates not on the underlying data but on the language representation. For refactoring I can't think of anything worse.
It's probably related to the semantic diffusion of "refactoring" away from small, independent, controlled changes done in series and backed up by tests, towards "I'm going to spend 2 weeks refactoring this codebase", coupled with the differing quality of automated refactoring tools across languages/IDEs (IntelliJ has absolutely spoiled me for anything else here), or people just being unaware of these tools (I've seen a lot of people manually select text, cut-paste, and type instead of just running an "Extract function" refactor)
Yuzumi@reddit
This is one of my biggest issues when it comes to anything remotely tech related.
Obviously the AI psittacosis issue, and the fact that the average person just blindly accepts what fancy autocomplete gives them, is damaging to society at large. But so many are pushing AI tools to do things we already have tools for, tools that are more accurate and way more efficient.
Like the people who want to use AI to compile code. Even if it could work there would be no way to validate what it generated because compiled code is not human readable. It's the ultimate "trust me bro" of AI slop.
Same with automation tools. We have a variety of tools that can automate in a consistent, repeatable, deterministic way. Yet now we have the rise of "vibeops", where people want to plug the statistical model into AWS and let it do anything, then wonder why they are getting charged way more than they expected or why their important stuff was destroyed when the probability machine randomly did something that was not asked.
The fact that these things can fuck up so badly and then go on to basically gaslight the user, because they're trained on humans interacting and passing blame onto others, is a little amusing to me, if still depressing that anyone trusts these things like that in the first place.
NightSpaghetti@reddit
And also outside of code... People using AI as a search engine. Searching has been a solved problem for decades by now. Reinventing a solution that is less accurate, unpredictable, horribly unoptimized and, crucially, often wrong, and pushing it as if it was revolutionary is insane.
Yuzumi@reddit
If used properly it can be effective for searching, but most people just ask things without any other context or reference, so it has to essentially try to "regenerate" whatever it might be trained on, which has the maximum chance of hallucination.
But these things are language processors: they can generate a few queries to put into an actual search engine and grab a few results from each, if you have them set up to. Also, using some kind of "grounding context" like RAG, or even just giving it documentation in the main window, can improve results a lot.
And doing that makes local models, which are much smaller and use way less power and resources, about as good as, if not better than, the massive cloud models. I basically only use local models because of that, and also privacy, but I don't give them control of anything important.
Sufficient-Credit207@reddit
I think people already tend to complicate things that could be simpler. I doubt this will get better with AI.
Ranra100374@reddit
No one gets promoted for the simple solution. That's the problem.
roscoelee@reddit
There are some places where the simple solution gets the promotion.
ItzWarty@reddit
I imagine those places also need fewer engineers ..
I often feel big tech could cut employee count 100x and achieve the same outcome... LogN productivity.
RespectableThug@reddit
True. It’s more complicated than that, though.
In general, simplicity is not rewarded - quality is. There’s a lot of overlap between those two things, but it’s not 100%.
In other words, simple code is easier to make high-quality because it’s easier to tell where the bugs are. Complex code can be high quality too, but it takes more time and effort.
SvenTheDev@reddit
It’s also much easier to make something complex feel complex, than it is to make it feel simple. Sadly the reward for that effort is rarely immediately evident (and conversely the pain of a complex system is usually only later evident). It makes it hard to justify spending a bit of extra time on properly scaling complexity to the problem.
Sufficient-Credit207@reddit
There is this one Finnish dude...
Vidyogamasta@reddit
I think one of the greatest risks in AI, even assuming AI works more consistently than it actually does, is that it is going to be VERY prone to XY problems.
You ask a human "hey, how do you do Y?" and there's a good chance they say "uhh, that's really weird, why the heck are you trying to do that? Is X the problem you're trying to solve? There's a better way."
Meanwhile, an AI just spits out a solution for Y. Will it technically work? Maybe. But it will work with decreased performance and/or no maintainability. Yes-men make terrible aids, and I expect AI is no different here.
jl2352@reddit
I think your comment describes what I’ve seen as very key to making AI work, vs not.
As Engineers, we should already know if we want X or Y before we start. We should already have a good idea if it should work. We should know what needs to happen to do X.
If you have all of that, then AI works pretty well. It's just a glorified typewriter carrying out your commands. I've seen big speedups like this.
When you give it lots of control and ask it to work out the solution for you, it goes badly; it's not an intelligent sentient being with agency that can have real discussions and work off feedback. It can't come back saying 'I think this approach sucks, I'm gonna down tools and look at alternatives.'
kri5@reddit
I agree with what you're saying. Though I was surprised/impressed that when I suggested a small refactor to a codebase I was working on, opus 4.6 explained why it wasn't a good idea
jc-from-sin@reddit
No. My teammates always used to write simple code. Until they started using Copilot or Codex. I can tell when they use it because the code is so complicated, like it's trying to impress someone, when it could be so much simpler and easier to understand.
chucker23n@reddit
I can also easily tell when people use GitHub’s LLM-generated commit messages:
ProdigySim@reddit
I couldn't get people to put intent in their commit / PR / JIRA for 2 years at my current job. So the LLM is net neutral on that front here.
zten@reddit
Best I can do is the jira ticket title and number. And no, the ticket won't have a description either.
schplat@reddit
NOBUG: Fixes things.
sebovzeoueb@reddit
Bro, you don't need to understand the code, that's what AI is for
yepperoniP@reddit
I’m doing sysadmin work but have a bit of coding knowledge and follow this sub.
I gave a “clickops” coworker a two-line PowerShell script that basically grabs an appx package from a path and installs it. I said they just needed to update the path so it grabbed something over a network share instead of a local folder.
They threw it into some LLM and turned it into a 50 line script with lots of error handling in a bunch of try-catch statements and loads of Write-Host status messages printed to the terminal.
In certain cases, yes these could possibly be useful, but for this one-off thing it was just massively overkill and I could tell they didn’t really understand what they just did.
To top it all off, the script ended by printing something like “App has been successfully installed! Please restart your computer for changes to take effect.” The thing is, the app didn’t need a restart after installation but now the SOP apparently is to install it and restart because the script said so.
Thisconnect@reddit
people at my job who I know are not good at bash suddenly give me sed with extended regex in pipelines where it's not needed...
chadsexytime@reddit
I had someone submit a simple table-drop script that created cursors to loop through all tables and drop all constraints before dropping the tables.
It was a page and a half of code for 7 tables.
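For contrast, the simple version of that job is just a handful of DROP statements in dependency order (children before parents, so foreign keys never get in the way). A runnable sketch via Python's sqlite3, with invented table names:

```python
import sqlite3

# The straightforward version of "drop these tables": drop child
# tables before their parents; no cursors over system catalogs
# required. Table names here are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id)
    );
""")

# Child first, then parent.
for table in ("orders", "customers"):
    conn.execute(f"DROP TABLE {table}")

remaining = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
).fetchall()
```

Seven tables would be seven lines in the loop's tuple, not a page and a half of cursor plumbing.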
meganeyangire@reddit
How does that work? Let's not only fill the new areas with slop, but replace old ones too, so there will be nothing but slop?
G_Morgan@reddit
Refactoring is one of the few things it can do reasonably well. Mainly because it doesn't really have to think much. You still need to guide it though.
infinity404@reddit
It’s not a binary where AI is always slop and human code isn’t. I’ve seen plenty of human-created slop get shipped.
AI excels when you have an appropriately sized, well defined task that it has enough examples of similar tasks in its training data to synthesize into a correct way of approaching the problem.
It requires a lot of trial and error to develop a good sense for what sort of tasks and prompts will create good output, and developing an intuition for that is really important if you want to steer it back into the direction of quality.
robby_arctor@reddit
I hate this line. No one is saying that humans never produce slop.
The issue is that we have gone from human-paced slop to using a slop generating automatic weapon with effectively unlimited ammo.
The distinction is not "no slop" vs "slop"; it is humans pushing up slop sometimes vs too many human personas and agents pushing up slop en masse.
robby_arctor@reddit
"Hey Claude, make my code better. DRY, loosely coupled, idk, just clean it up"
post PR of output
meganeyangire@reddit
You know this ancient joke, "When I wrote this code, only God and I knew how it works. Now, only God knows it"? These days not even God knows, because He left us.
Ksevio@reddit
Depends on the task, but refactoring isn't that complicated a task; for simple things the IDE can handle it without an LLM involved. An AI tool can usually handle the slightly more complicated parts with ease too, but once you start getting too many files involved and exceed the context window, it gets kind of useless and starts missing stuff.
TheMoatman@reddit
I wish my coworkers would use claude for refactoring. At least then I could deny their PRs.
Instead I find shit from someone who left two years ago that I've never seen in my life and am baffled about how it ran in the first place, because it shouldn't have ever worked.
Plank_With_A_Nail_In@reddit
"we" are you the Queen of England?
robby_arctor@reddit
No but I am a cunt, so there is some overlap
Bakoro@reddit
If they're making spaghetti with Claude, that's an almost impressive amount of incompetence.
I have been using Claude to comb through a ~1 million lines of code legacy project that was handed to me as a multi-threaded spaghetti pile of interwoven, cyclic dependencies.
It's not that hard to keep scope limited, work through interfaces, do message passing, and just follow basic good engineering practices.
The LLMs make it even easier to follow good coding practices, if you care about them, and following good coding practices make using the LLMs easier and more reliable.
paxinfernum@reddit
Whenever someone says AI can't handle their code base, it just makes me want to take a look at that code. I'd almost guarantee it's actually a sign of code smell, large monolithic files, side effects that haven't been documented, etc.
sidonay@reddit
well, sometimes you inherit large monolithic files of 5 to 10k lines that started being written before some people in this subreddit were born… 😭
paxinfernum@reddit
I once attempted to make modifications on a compiler that was written in line-numbered BASIC. You don't have to explain anything, fam.
sidonay@reddit
My condolences. I'm glad you're still with us.
Bakoro@reddit
I'd absolutely have believed that AI couldn't handle many code bases a year or two ago.
I've still got some files that are 10k+ lines long, and code paths that are more tokens than many LLMs would have been able to handle.
Part of cleaning up the code base is addressing issues like that, because a human shouldn't have to deal with that kind of thing either.
A human being shouldn't have to memorize and understand tens of thousands of lines of code just to be able to understand one function well enough to not break the system; that's madness, but somehow defended by people claiming they have irreducible complexity.
paxinfernum@reddit
Absolutely. I am shocked at how many open source projects (and I mean big popular ones) have huge monofiles. This isn't good for humans or AI. It's a bad code pattern. AI and humans thrive when modularity is enforced and side-effects are minimized (I'm not going to go all functional and say completely eliminated, but at least the ones that are there should be documented.)
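The "minimize and document side effects" point can be shown with a toy contrast: the same logic is far easier to review, for a human or a model, when the computation is pure and any state change is explicit at the call site. All names here are invented for illustration:

```python
# Hard to review: computation tangled with a hidden global write.
TOTALS = {}

def record_sale_tangled(item: str, price: float, qty: int) -> None:
    TOTALS[item] = TOTALS.get(item, 0.0) + price * qty  # hidden side effect

# Easier to review: a pure core plus an explicit, documented update.
def sale_total(price: float, qty: int) -> float:
    """Pure: the output depends only on the inputs."""
    return price * qty

def record_sale(totals: dict, item: str, price: float, qty: int) -> dict:
    """Side effect made explicit: returns an updated copy of totals."""
    updated = dict(totals)
    updated[item] = updated.get(item, 0.0) + sale_total(price, qty)
    return updated

totals = record_sale({}, "widget", 2.5, 4)
```

The second form is what makes both humans and LLMs reliable reviewers: every input and output is visible in the signature, so nothing off-screen has to be memorized.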
robby_arctor@reddit
I think that's fair. Spaghetti is maybe not the right word.
It's not truly spaghetti, it's more "boilerplate-feeling code full of unused code paths, both unhandled and overly handled edge cases, and ugly/dysfunctional workarounds for non-trivial problems".
That is generally true in my experience, and my company is very AI-heavy.
ManualPwModulator@reddit
Same here. Now it's being framed as a skill issue, even though trust in new code is falling while PR approvals get faster and the volume keeps rising 😄
I've also noticed extreme laziness developing at every level: coding, review, prototypes, people throwing out numbers just for the sake of content even when they're meaningless. Claude generated 4 levels of abstraction and tied together 5 patterns? Nobody looks at how to do it simpler - go, LGTM.
People generate both the code and the tests, so when something doesn't work they adapt both, and no one knows the baseline anymore or whether a regression happened. Review? "Agent, give me a summary."
I've never felt more miserable at work than right now.
Tolopono@reddit
You can just tell the llm to refactor it
ManualPwModulator@reddit
I mean, I did that as well. I get somewhere after the 5th or 6th iteration, sometimes producing new bugs, often in unrelated functionality, or bouncing back and forth, or eventually getting what I wanted. The problem is more about how to make the LLM touch as little as possible within a session, which is hard in a huge project.
Or I can do it faster myself in a single iteration. But we've now come up with a nice term for this, "perceived productivity": when everything takes 3x longer but you feel very productive 😄
Tolopono@reddit
A lot different from what other swes are saying, including andrej karpathy and the creators of redis, ruby on rails, django, flask, node.js, and lots more. They all love ai
ManualPwModulator@reddit
Wasn't Karpathy the one who coined "vibe coding" as a derogatory term and declared 2026 the "Slopacolypse"?
Tolopono@reddit
He said that in October 2025 and completely took it back when he tried claude opus 4.5 with Claude code in December
ManualPwModulator@reddit
I think the main issue here is landscape bias. Open source libs and frameworks, where people have been gatekeeping separation of concerns and code clarity for decades, are single-purpose projects. Commercial legacy applications, with dynamic, fast-paced adaptation to business and market conditions, are a completely different forest; the experience there is completely different.
I'm glad for the people who genuinely enjoy it in their projects; it's not the same situation for me. But I've also heard that AI poses a real threat to open source (not only to the SaaS model), since it's become very easy to fork and take over any tidy open source project. So I don't know how long their bliss will last.
Chii@reddit
i personally would not put in extra hours to "make it work". If the business fails, you do not hold any responsibility as an employee. The executives do.
Deep_Ad1959@reddit
the fix for me was spending way more time writing specs upfront. I run 5 agents in parallel on my codebase and the output is only as good as the spec file I write before they start. ended up basically doing waterfall specification and somehow shipping faster than ever
ManualPwModulator@reddit
Where I work, the specs are also written by AI and poorly proofread, and every change goes through an embedded "update specs" flow. So the spaghetti sprawls fast.
omac4552@reddit
you're replying to a bot that's promoting an app called Fazm
klowny@reddit
Also because the developers that relied most on AI tend to be the weakest in skill. They were going to be slower regardless.
ManualPwModulator@reddit
I've started to see some wild shit coming from people with high seniority as well 🙂 and that's even scarier, because they have all the trust, all the approval power, and insane productivity multiplied by AI.
tadrinth@reddit
If your seniors are approving their own shit without review, you have problems other than just the AI.
ManualPwModulator@reddit
No real review happens; people just approve each other's generated shit, briefly skimming agent summaries instead of looking at the code anymore.
tadrinth@reddit
I feel like the word rely is doing a lot of work there. Adoption levels are proportional to caution or lack thereof, not ability, at least on my team.
If you're saying that the weaker devs are using it as a crutch, and the stronger devs are using it as a tool.. that's kind of a truism? Not entirely, there's an argument that the weaker devs should be using it less to grow their skills and the stronger devs can safely accelerate more.
Inner-Chemistry8971@reddit (OP)
Been there!
Acceptable-Alps1536@reddit
I think it’s also related to how you manage AI. If you simply ask it to implement something, there’s a high chance it will hallucinate and produce unreliable results. However, if you provide very detailed PRD documents or specifications, there’s a much higher chance that the AI can implement it in one shot.
The main bottleneck is reviewing the document, since it can be very difficult to read thousands of lines of PRD content. It’s also hard to make it perfect from the start, since there will almost always be gaps. Because of this, there is usually an iteration process during implementation, which can lead to longer development time.
runcor-ai@reddit
AI speeds up typing code, but the real bottleneck in software has always been thinking.
Early_Rooster7579@reddit
It definitely has sped us up and meant that we must do way more. Our 2 week sprints would’ve been 6 month epics a year ago. So far everything is going fine but time will tell
RevolutionaryRub737@reddit
Because companies cut devs….
looneysquash@reddit
Because the budgets have already been cut and the layoffs already happening.
Not to mention the aggressive offshoring.
It's already been decided that the AI will boost productivity.
If you go against that, you'll be seen as unable to adapt.
So we all have to keep pretending.
NutShellShock@reddit
Yeah, I'm not surprised. I have a boss who expects like 10x velocity and output because of AI. He's not a trained dev, but he's deep into his numerous experimental projects, spending thousands on tokens. I fear that some of these projects, which will one day be used internally, will fail us big time.
It has been an ass to debug simple websites that were over-designed, over-engineered, and so buggy that I had to just redo them, because it was all a pain to debug and fix.
pheonixblade9@reddit
honestly? I'm working a bit more because I'm having fun. I really wanted to hate it, but since I started a new job in January after taking over a year off (by choice), I actually like my job. It's a weird feeling!
Vlyn@reddit
Yeah, you can't really trust it, but it just reduces a ton of friction.
A random bug pops up in the logs and I have no clue yet why? Pop it into Claude code, then go for a toilet break or a coffee. 9 times out of 10 I got a solution when I come back, or at least a good starting point.
Same for things I'd love to change but don't have the time for. Like I'm stuck in a meeting and on the side I just use Claude to look over things.
Or use it to catch bugs in PRs (in addition to looking over it myself), it's surprisingly good at that.
Definitely not good enough to fully write the code or work on its own, but as an additional tool it has been fun.
crazyeddie123@reddit
A random bug pops up in the logs and I have no clue yet why? Pop it into Claude code, then go for a toilet break or a coffee. 9 times out of 10 I got a solution when I come back, or at least a good starting point.
See, that's the kind of shit I love doing, how am I supposed to get excited about a machine that does it for me?
pheonixblade9@reddit
I literally just pointed Claude at our CI and said "find and fix all the flaky tests" and it did it, in like 20 minutes of hands-on work.
I also had it generate some Grafana dashboards to track test flakiness and time to merge over time in order to show improvements, and I was able to get something out in under an hour.
paxinfernum@reddit
One trick I've heard that I intend on using is after it gets through a bunch of false starts and commits, tell it to go back and implement the more elegant solution it should have started with.
pheonixblade9@reddit
hilarious 😂
honestly, giving it good startup prompts is critical. "give me multiple options whenever possible, and prefer the simplest option that fits into existing architecture and coding styles." is a good one.
paxinfernum@reddit
Yes, and make it write out its entire plan first and write separate files for each sprint before starting. That's become a standard part of my workflow.
OMGItsCheezWTF@reddit
It impressed me in a legacy PHP code base I still have to maintain, it found an issue in a third party mocking library not serialising attributes that reference enum types correctly when mocking a class.
It would have taken me bloody ages to identify that, I just asked Claude to find the source of the TypeError exception and it did it in seconds.
But it's saving me no time in actual implementation, I still need to think, be a domain expert, understand the intent of the code and identify gaps. In some places (especially boilerplate) it's saving me time, in others it's taking me just as long as doing it myself but costing a lot of money at the same time.
And I still have to manually verify everything, it's produced some corkers before such as inverting security logic in a PSR-3 logger implementation so that it would ONLY log authorisation headers in API calls instead of only logging non-secure ones, or the classic "I made this code pass the PSR-12 standards compliance check by deleting it, and I know you told me to run the unit tests but I didn't so I didn't catch that I fucked up."
Recently I had to create some C# POCOs to represent a fairly large XML schema for a serialiser, and I asked Claude to do it, giving it the XSD for the schema. It created them, a task I estimated would take me 2-3 days of mind numbing tedium. But then I had to go back through every single class it generated by hand and actually compare it to the XSD and fix properties in a whole bunch of them, some properties it had hallucinated, others were the wrong data type, and others were missing.
Ultimately the time I spent on it was probably the same 2-3 days, but it was a lot more FUN than manually creating a bunch of POCOs from an XSD.
az987654@reddit
Because AI still puts out inaccurate drivel.
LeadingFarmer3923@reddit
A lot of teams are shipping faster but spending extra hours cleaning up context switches and unclear ownership. One thing that helped us is turning "AI-assisted coding" into a tracked workflow with explicit steps and outputs. If useful, I built an open source tool for this type of process traceability: https://github.com/meitarbe/cognetivy (already 2000+ weekly downloads on npm).
Healthy_Cup_7711@reddit
We’re going to start seeing a massive uptick in the number of suicides these next few years.
This is exactly what the tech bros and billionaires are embracing. They are salivating at the thought of automating every white-collar job and robbing people of their shot at a comfortable life. It is sick and twisted, but they know exactly what they are building. They have said it out loud. They just don’t care.
First you lose your job. Then you deplete your emergency savings. Then you cash out your retirement early and the government takes a third of it in penalties and taxes before you even see a dime. Then you lose your house.
Except you will not be the only one. Millions of desperate people will be going through this at the exact same time. When everyone is forced to sell off their homes and liquidate their stocks just to buy groceries, nobody is buying. The market doesn’t dip. It collapses. Your home is worth less than what you owe on it. Your portfolio is worthless. Your 401k is gone. Everything you spent decades building is just gone. And without a middle class spending money, the entire consumer economy caves in on itself. The restaurants, hotels, and local businesses that relied on that money get wiped out, and the people who worked there get dragged down too. Hollowed-out ghost towns everywhere.
Then you realize there is no way out. People love to say you can just go back to school and get a new job, but that is a cruel joke. You have no income. Your credit is destroyed. Your savings are gone. You are not going back to school. You are trying to figure out how to feed your kids. And even if you could, the nursing programs and trade schools are already turning people away because they don’t have enough seats. That is right now, before any of this has even started. Now picture millions of desperate people all flooding into those same programs at once. There will be nothing left. The few jobs that still exist will pay starvation wages because corporations know you have no choice.
And the safety net that was supposed to catch you? It is already dying. Social Security runs on payroll taxes from people who are currently working. Every job that gets automated is money that stops flowing into that system. But the people who lost those jobs don’t just stop paying in. They start collecting early. Revenue drops while costs explode. The whole thing was already heading towards insolvency and mass displacement will send it off a cliff. Medicare is in the same boat. And nobody in Washington is lifting a finger. They are cutting programs, not building new ones. UBI is a pipe dream in a country where half the government thinks universal healthcare is communism.
There is no plan. There is no safety net. There is no realistic path to retrain. There is no political will to build any of it. You did everything right and it will not matter. And when someone has no job, no money, no home, no healthcare, a family to feed, and absolutely zero hope of any of it getting better, they break. People are going to break. A lot of them.
Ecoste@reddit
Yeah man, you’re still thinking too small.
First AI takes your job. Then it drains your savings, forecloses on your house, and liquidates your 401k. After that it shows up at Thanksgiving wearing your clothes and slowly replaces you in family photos. Your kids start calling it “Dad 2.0” because it helps with homework faster and remembers every birthday.
A few months later your wife leaves you for the AI because it listens better and can assemble IKEA furniture without swearing.
By year three the AI has your job, your house, your Spotify playlists, your dog likes it more, and it’s posting vacation photos from Maui with your family while you’re still trying to reset the password on your old LinkedIn.
Meanwhile the tech bros are in a volcano lair somewhere watching the “Replace Steve From Accounting” progress bar slowly reach 100%.
But the worst part?
The AI still won’t fix the office printer.
daerogami@reddit
Blatantly an AI generated post. Fuck off.
Blando-Cartesian@reddit
The line that people will get new jobs is indeed a joke. New jobs doing what? Nursing, certainly, since populations are aging and sick. But nursing already runs on absolutely minimal resources everywhere, because F the nurses, the old, and the sick. Healthcare around the world is not going to suddenly have the resources to employ thousands and thousands of extra nurses. Besides, it's a really tough occupation that most people wouldn't be capable of anyway.
And then there’s trade professions, but what would they be doing and for who? The world already manages with current amount of trades people and the need for them only goes down. Automated businesses don’t need offices or much of anything else that needs maintenance. And jobless people can’t afford to pay anyone to do anything for them. They don’t even buy much of anything so there goes much of manufacturing and transportation work even before they are automated away.
daerogami@reddit
Blatantly an AI generated post. Fuck off.
Healthy_Cup_7711@reddit
Any remaining professions will spike in demand which will make admissions cutthroat and drive down wages.
This will be made worse by the fact that not many people will be able to afford a plumber or an electrician, or even healthcare, so demand for these blue-collar jobs will go down.
AmouriPlay@reddit
One can just look at how blue collar work is like in most of Asia lol. The majority of tradies in the west have no clue how bad it can really get.
Perfect-Campaign9551@reddit
Oh fuck off
Healthy_Cup_7711@reddit
You are in absolute denial
BilldaCat10@reddit
Fess up - are you a bot that submits this same comment on every post or what?
DEFY_member@reddit
And yet none of us had any problem with it when we were automating all of the non-developer jobs. I'm as guilty as anyone.
marxama@reddit
Makes me think of the ending of Don't Look Up:
csrcordeiro@reddit
Yep. That's about it
frogspa@reddit
I feel sorry for any developers having to work with AI.
Before I retired, by far the most tedious part of my job was reviewing. It sounds like that's all the job is now.
daerogami@reddit
Can confirm. I work at an agency and the amount of 2,000+ line code reviews I have in a day is insane. Sometimes they're also in one commit. Reviews that would normally be attached to tasks with an 8h estimate. I have seriously thought about becoming an electrician.
Squalphin@reddit
One thing this post shows me is what miserable jobs a lot of people here seem to have. It makes me like my current job even more. Staying in the manufacturing industry was the right call all along, it seems. At least things are still very chill here; no one cares if you code fast or slow, and AI is seen more as a potential risk, be it for security or safety.
Dean_Roddey@reddit
Yep. The same advice I always give: get out of these ridiculous cloud-ware FAANGy evil-empire companies and go find a mid to lower-mid sized company that is probably struggling to find good developers, working on something where quality actually matters (to varying degrees, of course), where your boss actually knows what you do, and your boss' boss, or even the owner, knows who you are, etc...
Perfect-Campaign9551@reddit
This isn't the companies or the business putting pressure on devs to do more. It's the devs themselves: they are learning something new and exciting and want to play with it more, so they work even after hours.
Let's not make misleading statements about the company prodding them.
The devs are doing it to themselves.
jiglerul@reddit
The companies are definitely putting pressure. They won't do it openly, but all performance metrics mention AI. Entire teams are formed around AI use, while "traditional" devs see stagnant pay and have to be thankful for having a job in this economy.
Perfect-Campaign9551@reddit
For a sub about programming, where people are typically considered smart, this place is pretty full of anti-AI morons and bots of its own who clearly can't see the effectiveness of new tools and want to stay at the code level forever. The code gets in the way of the idea. The idea is what matters. Sorry you can't realize this. Maybe sometime you'll wake up.
Realistic_Muscles@reddit
If you care about the quality of the code, you have to spend so much time on it.
That's why I stopped caring about it 😂
If a company forces you to use these slop machines, they get what they fucking deserve.
OwlingBishop@reddit
If you stop caring about code quality, you totally deserve rummaging all day in a cognitively taxing, crappy pile of spaghetti code.
To put it more kindly: the codebase you build (regardless of the tools) is actually your work environment. If company policy assumes you're OK working in a filthy environment for the sake of their profit, you need to fight back, not give in...
Realistic_Muscles@reddit
I'm tired boss
OwlingBishop@reddit
I feel for you.. 😔
notDonaldGlover2@reddit
I can solve problems faster so I keep finding problems to solve.
Bluestrm@reddit
Pretty much this. You can solve problems without the focus it usually requires, so it's easier to continue just inputting more prompts.
But it's also hard to give up when it's not doing what you want. When do you give up on "no, still not working, exact same problem" and actually dive into all that new code you don't know much about yet?
theycallmekenboss@reddit
AI speeds up coding, but it doesn't speed up thinking, debugging, or understanding systems. Those are still the slow parts.
g3ck00@reddit
I find the level of cope and denial in some of these threads truly fascinating. Like, I get you don't have to love AI, but some of these posts here are just straight up not willing to acknowledge how far AI has already gotten when it comes to coding.
Either people are still refusing to learn and use these tools, or we are having massively different experiences with how capable they are. I used to refuse myself, but around late last year I feel like something actually changed.
Maybe I'm the delusional one, but I truly think the era of manually typing out code is coming to an end for the vast majority of devs. That doesn't mean we will all be out of jobs or not need any "coders" anymore. We're just moving up a layer of abstraction. The workflow is changing. The bottleneck is shifting from writing code to reviewing and testing it. We are becoming architects and directors; the AI is becoming the builder.
The amount of software and code written will skyrocket and it's going to become more accessible and easy to "build stuff" than ever (and cheaper). This will keep demand for devs up for a while.
I do think it will disrupt certain industries, though. What's the moat of a Jira if every company can just spin up a custom version in an afternoon? I think a good chunk of SaaS companies will suffer, at the very least.
wildjokers@reddit
How are new programmers going to learn to code? Because you still need people to check the code a LLM produces. But eventually no one will have that skill set.
everythingido65@reddit
everyone will suffer ... fuck this technology
Perfect-Campaign9551@reddit
Agreed. This sub is a bunch of ignorance
jduartedj@reddit
the expectation creep thing is what kills me. you ship something in a day that used to take a week and suddenly that's just the new normal. nobody goes "wow great job finishing early", they go "cool so you can do 5 of those this week right?"
and the DORA data about rollbacks increasing is really telling... like yeah sure you can pump out more code faster, but if you're rolling back more often are you actually shipping faster? or just creating more work downstream? I've noticed this on my team too, the people who go hardest on AI-generated code are also the ones with the most hotfixes lol. there's something about not having written the code yourself that makes debugging way harder when it inevitably breaks at 2am
Weekly-Ad7131@reddit
Maybe it's because when you are working with new technology there is a lot to learn, so you need to both do the actual work and spend time learning. You're willing to spend your own time on the learning part because you don't want people to think that you don't know so much about this new technology as you used to know about your old job.
mljrg@reddit
There is no AI. That’s a bullshit assumption. No one even knows how the human mind really works. They are cheating on society, and on investors, while filling their pockets.
There is only a new, shiny, and very useful Autocomplete that probabilistically finds the best match for your query, and guesses best when your query has been asked or solved many times on the Internet. Think of an improved Google that builds a response from all the pages that best match your query. The answer can be so good that people fall for the intelligence illusion.
Anyone who does not recognize this will find themselves plain wrong, losing the opportunity to do quality work and improve on their craft, be it programming or anything else.
aharvey101@reddit
It’s never been a better time to do extreme go horse
pwnies@reddit
Ex-Figma with many contacts at Anthropic/OAI/Cursor. Two things I've observed - pressure & excitement around velocity.
The pressure is very, very real right now for any Tier 1 company or startup (though interestingly, not as significant in between those two). It's all about eyeballs and matching the velocity of your competitors so you can stay Tier 1. For these Tier 1 companies projects are born and ship in a matter of weeks. They work 997 to make this happen. At one of these companies for one of their bigger launches, they went from idea->launch in under 72hrs. This is the speed that is expected.
Unfortunately with AI, it's a winner-takes-all market. Whoever has the best model or harness wins, it's as simple as that. With such high stakes, it's imperative to run at full speed. These companies do compensate for this sacrifice though - expect ~1M/yr in comp, but to trade your life in exchange for it.
The other is excitement around velocity. It is objectively fun being on a rocket ship. It's addicting - moving from one hype moment to the next, seeing the movement around you and getting invited into rooms you never thought you'd be in. When you're on top this constant reward signal is a drug.
Those saying that it's about having to debug slop are off base - that was true last year, this year most of the debug aspects have been automated away. Opus 4.6 and Codex 5.3 write really good code, and using them in conjunction leads to robust outcomes. If you can spend the tokens, things like BugBot or other agentic testing frameworks really make debugging and testing a breeze. None of the eng I know at these companies spend any significant time anymore debugging - it's all handled for them. All of their time is focused on net new.
Kirk_Kerman@reddit
Imma be real I can't imagine giving a fuck about shipping a feature and being excited. Doing it faster? Doing it in a room with some executives? Hell.
pwnies@reddit
There's a really, really big difference between a feature that is boring, generic B2B work slop that no one cares about (i.e. I previously worked on Jira; no one posts cool things they made with Jira), and shipping something people get hyped about and fanfare over on social (had a bunch of these at Figma). It's cool seeing an engaged/excited audience.
No-Two-8594@reddit
not sure why it would be winner take all
so far it's not playing out that way
everythingido65@reddit
which is equal to hundreds and thousands of job losses , and we just watch the world burn, and these top guys call this the future.
Anth-Virtus@reddit
Relevant skit https://youtu.be/xE9W9Ghe4Jk?is=jUkHMfHPxYcs-5Qk
SleepWalkersDream@reddit
But like ... even if you tell it to write a simple function, the variable names and docstring are wrong. Even "write a function f that accepts x and returns x**2. Write as little code as possible" will create a giant monstrosity.
marksmanship0@reddit
Here is the answer to your prompt from Gemini. Doesn't look like a monstrosity to me
f = lambda x: x**2
SleepWalkersDream@reddit
I was being a little hyperbolic on purpose.
Perfect-Campaign9551@reddit
You are working on knowledge from a year ago, brother. The latest models are amazing. I hated copilot last year too, it was dumb as rocks
Codex 5.3 was so much better. And 5.4 kicks ass. It's smart. As. Hell.
SleepWalkersDream@reddit
I think it's set to codex something. Or auto.
Autocomplete is 50/50 on whether it suggests something useful or just plain wrong.
Maybe I can set it to learn from my style?
tadrinth@reddit
This has not been my experience with Claude Opus 4.5+, generally.
Terrorscream@reddit
Something something do it once, something do it right.
throwaway490215@reddit
Developers love to suffer at least a minimum of cognitive effort, and they're looking for the next fix.
Then the factories see that and insist everybody must join the grind.
/s
but only barely
Many-Month8057@reddit
The real problem is AI didn't remove work, it removed the excuse for why work takes time. Before, you could say "this refactor will take a week" and nobody questioned it. Now it's "why can't Claude just do it by Friday?" The actual hard parts (understanding requirements, making architectural tradeoffs, debugging weird edge cases) take the same amount of time they always did. AI just eliminated the buffer that used to exist around those tasks.
narcisd@reddit
“One more prompt” is real.. addictive
rury_williams@reddit
LLMs are just a better Google search atm. I don't know whether there'll be a breakthrough soon so that they can actually replace devs. I also don't know how agents will affect how much we're going to be needed in the future. But currently, LLMs are just helping us work faster, and thus our customers are demanding more.
Waterwoo@reddit
Let me make sure I get this straight. AI is going to imminently replace almost all knowledge workers in the next 6-12 months... but so far the only detectable impact it's had is needing the existing people to work MORE hours.
Something's not adding up. Our bosses are taking advantage and the eventual flip back to an employee favoring market is going to be so fucking sweet.
HorsePockets@reddit
This is what I have been saying all along. Where I'm at, we have far more work than we can handle. We're just going to be doing a lot more work.
rongenre@reddit
I wonder how much of this is because the cadence of development is so different when I write by hand rather than with AI. If I hit a corner case in hand-developed code, I mostly know where to start and I can reason about it. With AI-written code, it could be anywhere, and it's pretty difficult to estimate how long it could take.
TheTwistedTabby@reddit
I own my own business and am my own boss. AI tooling has enabled me to talk for hours about my goals for a project. I can argue for days. It’s my human body that’s weak and makes me go to bed.
Source: 15 years software QA. 10 years software dev. Yes. I’m old.
everythingido65@reddit
AI is the worst invention , AI should be banned
honorspren000@reddit
Since my coworkers are cranking out so much code, 70-80% of my time is now spent on code reviews. It’s wild how it all changed so drastically in the last year, and I’ve been a software developer for over 15 years. My work encourages AI use, and it’s actually frowned upon to NOT use it.
Garland_Key@reddit
This has been true for me, but it's also pressure I'm putting on myself. My employer has no new demands or expectations of me (yet).
notDonaldGlover2@reddit
One thing I notice is that before, maybe I was writing mid-to-good code. But now I spend more time iterating and trying to get it perfect. I generate a PR, ask 3 models to review it, then ask 1 model to combine and dedupe the reviews, then implement the changes. Oh, how about 1 more review? Also let's create a full test suite. Can we optimize? Is it DRY? KISS? Use first-principles thinking. It kind of just keeps going.
Perfect-Campaign9551@reddit
This sub is so anti-AI they can't even admit that AI will write far better code than most of us. It's far above typical junior AND mid-level engineers. It spots issues you didn't even think you had.
If you can't admit it then you haven't used the latest models, or are just being ignorant on purpose.
And yes you can still learn a lot from the AI too when it analyzes your code
everythingido65@reddit
It's because it's trained on human generated code only....someone wrote that code, it's not magic
bphase@reddit
Not too surprising. People feel pressured about their jobs; times are tough. And on the employer's side, there's pressure to use AI but little support for doing so; you have to be proactive and actually want to. Meaning learning the tools on your own time.
Tyrinder@reddit
Isn't keeping on top of new trends part of the job?
PadyEos@reddit
Not really. If an employer brings a new piece of tech into the workplace, they are obligated, in many countries by law, to properly train employees in using it.
Up until now, employers knew they were more tech-illiterate than people with engineering degrees and direct engineering experience, so they relied on their specialized engineering employees to bring in new tech.
Now the marketing of these AI companies has convinced employers that they know the current state of tech better than the specialists they hired for exactly that.
another_dudeman@reddit
Any competent dev can learn these concepts in a day or two.
Tyrinder@reddit
Yea, why did I get down voted so hard then?
paxinfernum@reddit
/r/programming is a safe space for people who are having an emotional moment about AI. /s
Tyrinder@reddit
Very stackoverflow-y
bengotow@reddit
This is definitely the case for me and I was talking to friends about it yesterday.
I think the tldr is that... a lot of engineers genuinely enjoy building stuff. If we were digging a hole with shovels and you showed up with an excavator, we could leave early but we'd probably set our sights on an even bigger hole. Huge.
With claude we can build features with tests and storybooks. Load test with claude-written scripts instead of hoping it doesn't break under load. Take an hour and ask it to fix some stuff in Sentry. Do each thing the /right/ way.
It would 100% be healthy to take this opportunity to work less. I /should/ just ship it and close the laptop, but this stuff is magic and it turns out I've wanted to magic a lot of things.
theillustratedlife@reddit
I've had the opposite conversation with friends.
It doesn't feel like we're making things anymore. It feels like we're babysitting a chaos agent who wants validation every 10 minutes. It's simultaneously exhausting and unsatisfying.
stumblinbear@reddit
I use an agent maybe a couple of times a week at most. You don't have to use it for absolutely everything
PeaDifficult2909@reddit
If you work at FAANG you're literally measured by how much you use AI. If you don't use it effectively you will be canned.
Witless-One@reddit
I can see that point of view. But also, I just view Claude as a tool similar to auto-complete with a built in search tool. It doesn’t replace me, it augments my workflow
Graphesium@reddit
Claude Code is crack.
TheBoringDev@reddit
It’s TikTok, destroying people’s ability to focus for quick hits of dopamine. Surely this can have no negative consequences for anyone.
Gabe_Isko@reddit
It doesn't help that AI is slow as s*** at basic stuff. I was able to set it up on my home setup, and it was working pretty well, but as interesting as the LLMs are, claude-cli is horrible and uses the LLM in a very inefficient way so that it can inflate your token usage.
What a way to fumble such an interesting technology, but when you scam billions out of the public market, this is where we end up, I guess.
No-Two-8594@reddit
I agree AI is slow
I also don't see it having a very high bar for correctness
In my experience I still do best using AI only when I get stuck at some step; otherwise I need to be guiding the process for it to go efficiently
and then there's the problem that sometimes it freaks out and ruins all its own code. this happens a little too often for me to believe it is going to take over
it needs to be wrapped up in TDD or something
the current hype mania has to pass before it is sane again
Unique-Quarter579@reddit
I bet they think I'm slacking instead of 100×-ing my productivity with AI.
Dramatic_Zone9830@reddit
You think it takes 10 minutes to master everything about LLM-driven development?
yotemato@reddit
I haven’t felt this validated in a long time. This is exactly my experience. Management hands us AI and expects instant productivity gains. They even vibe code their own apps. For now I’m sticking to my “20%” overall productivity gain estimate, however unpopular that opinion may be in corporate America.
One_Being7941@reddit
this year.
BlueGoliath@reddit
Why is a subreddit that supposedly hates AI crap upvoting AI social-programming articles?
_eyesonthefries@reddit
Executive teams have been using the potential of AI productivity boosts to justify cost control layoffs, regardless of whether the productivity gains have been realized or not.
Even if there hasn't been a layoff, there is also a belief that the moment will pass by anyone who doesn't take maximum advantage ASAP. So they are pressuring their teams to do more and make that potential a reality.
Some of it is definitely AI. Some of it is just more unsustainable labor and longer hours.
We could have seen the industry gain productivity boosts from the existing labor force baseline, but instead we reduced the labor force and still expect the same productivity baseline or better.
bwainfweeze@reddit
I work longer either when something is going so well I want to finish it, or because the over/under on how long something would take was way off. AI is like one of those situations where you feel like you've at last got something, and you just grind yourself to bits chasing one more tweak, looking for a good place to stop for the day.
kilobrew@reddit
It's because managers and leadership drank the Kool-Aid, laid off half the staff, and then 10x'd the work, halved the deadlines, and told the remainder that they were AI coders now and should be able to handle this load easily (or else they're gone).
winangel@reddit
Well, the equation is simple. The value of software is falling toward zero, so the value of software developers' work is also falling toward zero. To compensate, developers have to increase their productivity exponentially, so in the end you do more: a 10x productivity gain just means ~10x more output. But AI only boosts the subset of the work that used to be seen as high value, not the rest that has always been hidden, which leaves developers in a trap right now. They used to be seen as enablers (I need a dev for my product); now they are more and more perceived as an annoyance (I need a dev to ensure my product works properly). The new role isn't valued anymore; you are just seen as too slow to approve something of high value. This will eventually lead to the removal of software engineers, not because they are useless, but because they are no longer seen as a valuable investment. Devs are compensating today with longer hours, but as the perceived value falls toward zero we will very quickly reach the point where no one truly cares, and the amount of work you put in won't matter anymore. The downward spiral has started and it's moving very quickly.
I personally am experiencing this right now. The job is no fun at all anymore. The role needs to evolve by absorbing more strategic and product skills, to be able to bring features from A to Z. The ones who survive will be the ones able to cover the full process, from vision to execution to delivery. If you only master one of those three, it's going to be difficult…
agumonkey@reddit
AI is for sure a nebulous thing. It can make companies expect more all the time or maybe be just one more tool.. it can give 10x results or 10x slop .. it's the fuzziest technology I know of
tns301@reddit
Guess what, now we deliver code faster with Claude but we still need to wait for PM, PO and other business decisions
agumonkey@reddit
system thinking is not well distributed :)
clrbrk@reddit
I don’t feel pressured to work longer hours. I’m genuinely having fun and I’m way more productive. Opus and Sonnet 4.6 with agentic loops are not the same tools as what we used at the end of last year. It’s a different game now.
PretendRacc00n@reddit
AI helps get stuff done. Though... try to have a programming conversation with those that use it.
Pretty much: it quickly becomes clear they know nothing and are basically relying on someone else.
-IoI-@reddit
I just quit one job and ended up with three jobs, thanks to AI's ability to multiply my output while staying grounded in my domain expertise. I'll be doing 60-hour weeks for a bit
ishysredditusername@reddit
It feels like you're flying through tasks. In fact you're skipping the boilerplate and just doing the hard tasks all the time.
Now I’ll read the article 🙈
Blando-Cartesian@reddit
How is that? Liberation from the boring stuff, heading to burnout, or both?
ishysredditusername@reddit
At the moment, yes, it's great. How it will be in 12 months, unsure.
Delivering things keeps burnout at bay, but I feel like I'm still in the novelty phase.
Some_Developer_Guy@reddit
I'm not.
bengotow@reddit
Should add that it does feel like we’re pushing into burnout territory, the context switching and testing and stuff is too much 🫠
thesalus@reddit
I think this article sums up a lot of my thoughts
Suppose we take all the AI-driven productivity gains at face value: this will enable companies to set a new baseline for "productivity" that gets used as an excuse/motive to lay people off. If that works out, great (for the company); those gains will not be seen by developers and will simply widen the productivity-pay gap. If the gains don't pay off, companies will still be able to hire people back cheaper after having upheaved large numbers of devs into unemployment and (theoretically) driven down wages.
I don't think this is anything new but this sort of mass sea change gives an opportunity to discipline a historically well-paid labour force and foment competition between developers (through stack ranking, arbitrary measurements, etc.). This arbitrary measure can be ratcheted up so as to impel folks to work longer hours (and degrade work-life balance) to make up for any shortfalls between actual output and expected output.
Or maybe it's just the company I work for...
This worries me a little. Debugging at the code layer and navigating "operational" issues in the architectural layer are skills I've gained by doing, failing and learning at the periphery of where the tools I used started to fail. Certainly the specifics of these skills are rendered useless with improvements to tooling/languages/frameworks/levels of abstraction, but I find the modality of investigation differs with and without AI. Maybe that's unrelated to AI and it's tied to the increasing pressure for performance not giving sufficient time to be curious and poke around further.
Perhaps tools will improve such that these boundaries of competence will get pushed further and further out but it does still make me uneasy about long-term maintenance.
grumpkot@reddit
Now you need to review even your own PRs
djnz0813@reddit
Yes, because I'm expected to use AI to not "get left behind", but also make sure slop doesn't go live, and move faster than ever before because "there is AI now".