Why I’m ignoring the "Death of the Programmer" hype
Posted by Greedy_Principle5345@reddit | programming | 268 comments
Every day there are several new posts on social media about a "layman" who built and profited from an app in 5 minutes using the latest AI vibe tool.
As a professional programmer I find all of these posts/ads silly at best.
Of course, AI is a useful tool (I use Copilot every day), but it's definitely not a replacement for human expertise.
Don't take these kinds of predictions seriously; just ignore them (Geoffrey Hinton predicted radiologists would be gone by 2021... how did that turn out?)
Mjolnir2000@reddit
I'm ignoring it because it isn't going to alter my behavior in any meaningful way. Either programmers are dead, in which case there's nothing I can do and I'm just going to enjoy writing code for as long as I can, or they aren't...in which case I'm going to enjoy writing code for as long as I can.
BeReasonable90@reddit
This. Plus you can just use it yourself and see it is overhyped anyways.
Could someone make a full AI coded X, ofc. People have been bragging about doing that with no-code solutions for decades too though. With enough force, you can make it work. The issue is, why bother?
If AI gets good enough to not need force and such to program to the point software developers are not needed at all, looking into it will not matter as you will get laid off either way. If it gets good enough to replace manual coding where software development changes, then you can just learn the updated tools at that point pretty easily. If it does not get good enough, then any time you spent on AI is a waste of time.
shooteverywhere@reddit
If it gets so good that nobody needs software developers, then it would be so good that nobody would need the companies that develop software either. You would just be able to look at your computer, tell it what to do, and it would create and run a program to fix the problem for you. At that point, it is bordering on AGI.
So....what is a company going to sell you if you can recreate 100% of their software stack for free at home just by talking to your computer?
dawson_fieldlog@reddit
this is interesting.
waozen@reddit
A lot of what AI companies are doing is retrieving, or arguably "stealing" (by ignoring GPL licenses), existing code from various sources, then regurgitating it back to users for a fee (with some alterations, often for legal reasons), and only if they prompt correctly (which often becomes a programming-like task unto itself).
ultimateedition@reddit
Being grounded in reality is important. Imagine you're a competent taxi driver in the 1920's who has just seen horse drawn carriages phased out for cars, and you come to a realization that your fairly recent job could be replaced by an automated machine driving taxis in the future so you panic and quit right then. Well you'd be a hell of a great forward thinker but at the same time you and 4+ generations of descendants could have been driving taxis before an inkling of your prediction started becoming a reality, and it's 2026 and we still have some taxi drivers.
It's no way to live to be perpetually in fear. Changes are always happening but right now it's today and today is fine.
turinglurker@reddit
ok... but at the same time, if you're a horse-carriage driver (blanking on the term for this) in like 1910, and you see a car for the first time. Not a bad idea to start thinking "shit this could replace me... maybe i should learn to drive and try to become a taxi driver"
psyanara@reddit
True enough, and though 99% of horse-drawn carriage drivers did vanish with the times, some still remain. Horses, in many forms, are a popular hobby that can also be profitable, even though bicycles, cars, and motorcycles exist.
aeric67@reddit
I agree with the sentiment, but there is behavior that could be savvy: anticipating a shift and learning or training to be ready for it. Which, in my opinion, is sort of a fun activity anyway.
BusinessWatercrees58@reddit
But what if you spend all that time training for the shift, but then you die before it happens? You wasted precious time.
No_Indication_1238@reddit
But what if you don't die? Then you'll be at the top of the food chain! It's all about risk management. Do you anticipate dying before the shift, or after?
BigHammerSmallSnail@reddit
Good logic
Pumpedandbleeding@reddit
Exactly. This is how all jobs have worked always. While employed you work and collect a paycheck.
You could always be fired for any reason, but who cares? You could die at any moment yet we go on living. Living is a subtle form of optimism. Working is also a subtle form of optimism.
Don’t stop dancing until the music stops.
Doomers predict the end of the world and a market crash every other day.
jaypets@reddit
"Until such time as the world ends, we will act as though it intends to spin on"
Kok_Nikol@reddit
And they're always wrong.
ummaycoc@reddit
This nerd branches.
muuchthrows@reddit
I agree. Learn the tools but ignore the doomerism.
Maximum_Musician5375@reddit
I think the truth is somewhere in the middle.
When ERP/CRM systems became common, accountants didn’t disappear—but their work changed, and they became more productive.
I think the same will happen with programming. AI won’t replace engineers, but it will change how much one person can do.
And we shouldn't forget about costs. Good AI tools aren’t really “free”—once you hit limits, it can get expensive, while a developer’s cost is more predictable (salary).
So it feels less like a replacement and more like a change in how the work is done.
marjinpilot@reddit
I agree. in fact having experience to monitor AI coding is very valuable. Speaking of which, looks like there are more and more job openings in software engineering.
GardenFree5017@reddit
the Hinton radiologist comparison is genuinely perfect and more people need to hear it 👏 "X profession is dead in 5 years" has been confidently wrong on repeat for decades now. vibe coding a todo app in 5 minutes and maintaining a production system serving millions of users are literally different galaxies fr 💀
the "layman built an app overnight" posts always conveniently skip the part where that app has zero error handling, no security, no scalability, and collapses the moment anything unexpected happens. that's where real programmers still eat.
AI is absolutely a powerful tool: Copilot, Claude, ChatGPT, Gemini, Grok, and Runable genuinely make good developers faster and more productive. but they make BAD developers confidently wrong faster too. the expertise gap isn't closing, it's just moving up the stack.
the death of programming hype sells clicks. actual companies are still desperately hiring engineers who understand systems deeply. that tells you everything 🔥
East-Ad7653@reddit
replacing radiologists has many legal challenges...
BUT a major hospital is taking the first step of replacing 90% of their radiologists with AI ... Link here
So adoption is slow... But as with all things... first it's a trickle, then it's a flood... Wait for it.
Same with SWE... Each step in the Software Development Life Cycle will eventually be automated..
everything from planning, designing, implementing, testing, and maintaining code ..
Right now the implementation (coding part) is practically dead...
next on the chopping block is the Design and Planning phases -- much harder to automate but doable...
and as always, testing and security are the last pieces to be solved... since these tasks don't generate any income... companies are not eager to invest in that yet...
Might be a good idea to start looking into cybersecurity against AI slop ...
Ok_Type_9093@reddit
No, I agree. I mean, humans are the real managing brains, but AI just does the heavy lifting, which I believe is good enough for now.
Spirited_Score_3201@reddit
You're right, but from what I've read, most of these "laymen" are product managers, marketers, or sales specialists. They understand clients' needs. One of the juniors on our team used to work as a bartender. He began as a vibe coder and became a good programmer. Code generated by an LLM is well-commented, which allows for learning best programming practices. After half a year, our junior became a really good programmer.
kay_jay_DAG@reddit
AI can process patterns and statistics far beyond what humans can handle.
But improving and guiding AI systems is still a human-driven process.
I think it’s better for us to become people who work with and refine AI, rather than just compete with it.
Roles may shift — some purely technical roles might decline, but those who manage and guide systems will still be needed.
Actual__Wizard@reddit
I'm being serious: I just canceled my last AI coding tool, which was github copilot that I kept because it was $10 a month.
I'm tired of having my productivity killed by "AI problems."
oadephon@reddit
You can turn off the tab autocomplete suggestions...
Actual__Wizard@reddit
How? I'm serious, that was an actual pain point in python. Sometimes you're trying to press tab to insert a tab and it inserts a suggestion instead...
LonghornDude08@reddit
Keyboard shortcut settings. But the name is confusing
Actual__Wizard@reddit
If I ever try it again, I'll look into it.
Just being serious: I was just writing some code and my stress level is like 25% of what it is when I use some dumb tool.
FearlessBoysenberry8@reddit
This is absolutely the reason I also am not using it. Glad I am not the only one, feels reassuring somehow.
I do chat with Copilot, but 50% of the time it is wrong and hallucinates anyways.
Actual__Wizard@reddit
The way the interface works needs to be ironed out. The experience overall is awful.
PFive@reddit
Pretty sure you can press escape to clear any autocomplete suggestion, then you should be able to press tab freely.
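And if you want the ghost-text suggestions gone entirely rather than dismissed each time, something along these lines in your VS Code settings.json should do it (writing this from memory, so double-check the exact key names against the Copilot extension docs):

```jsonc
{
  // Stop inline "ghost text" suggestions from appearing automatically.
  // Copilot chat keeps working; only the tab-completion behaviour goes away.
  "editor.inlineSuggest.enabled": false,

  // Alternatively, keep inline suggestions but switch Copilot off per language,
  // e.g. everywhere except Python, where the tab conflict was the pain point.
  "github.copilot.enable": {
    "*": true,
    "python": false
  }
}
```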
Actual__Wizard@reddit
Oh yeah I was spamming escape a lot.
CoffeeToCode@reddit
Can't your editor toggle it? I bind it to a hotkey so it only triggers on demand.
Actual__Wizard@reddit
What editor are you using? I was using VScode.
CoffeeToCode@reddit
I don't use VSCode so I can't help you. But have you seen this? https://www.reddit.com/r/webdev/comments/1gpni94/is_there_a_way_to_trigger_github_copilot/
Actual__Wizard@reddit
No, that's what I was looking for, but couldn't find. Thank you!
hayt88@reddit
Funny thing is: asking ChatGPT would most likely also have gotten you there if you hadn't found it via a search engine yourself.
Careful_Praline2814@reddit
You're using it as a coding assistant or advanced autocomplete.
This isn't the only way to use it, and if you're focusing on building whole systems and creating architecture, AI can be extremely useful, eliminating the drudgery.
DAVENP0RT@reddit
I feel attacked.
spaceneenja@reddit
What? You ask it questions in a prompt, it doesn’t run continuously.
markvii_dev@reddit
Bro in a programming sub never used an IDE 🙀
Nixinova@reddit
The autocomplete one does, and it can get very annoying, especially since tab and the arrow keys accept the suggestion.
spaceneenja@reddit
That sounds very annoying but also I am guessing takes 2 seconds to disable.
CoffeeToCode@reddit
They're talking about the inline code completion feature of copilot, not the chat.
stdanha@reddit
A pocket knife in the hands of a marine can do way more damage than a machine gun in the hands of a kindergarten student. Lovable + a pro programmer? you have no idea.
FragrantArt8270@reddit
I've been thinking about this topic since day one of LLMs hitting the mainstream... like most people.
For common tasks, you can say to an LLM or a programmer, "write me a function to compute the Fibonacci number." Easy peasy. The scope of common tasks will expand as LLMs get better. This only works if the task can be easily stated in English (or another spoken language).
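To be concrete, this is the level of task I mean; a trivial sketch that any current LLM (or first-year student) gets right:

```python
def fibonacci(n: int) -> int:
    """Return the n-th Fibonacci number (0, 1, 1, 2, 3, 5, ...)."""
    if n < 0:
        raise ValueError("n must be non-negative")
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fibonacci(i) for i in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```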
For specific tasks, you are going to have to write in English the exact code you want. You are basically going to have to write every line of code in English with excruciating accuracy. (There are industries where the design document is this detailed!) By the time you write the English correctly, you might as well have written the code.
Then there are the bugs. If you don't carefully write the English, you are going to get code that doesn't work how you intended. You, as the programmer, need to review the code to make sure you wrote in English what you meant.
There is no way a lay person is going to be able to write and fix all the complexities of software.
Marceltellaamo@reddit
I’ve gone back and forth on this over the last year. The tooling is impressive and it definitely changes how I work day to day. But most of the value I bring still comes from understanding tradeoffs, constraints, and what actually matters for the product.
The hype cycles feel louder than the actual shift on the ground. My workflow has evolved, but my job hasn’t disappeared, it just requires clearer thinking.
Curious what concrete parts of your day have actually changed because of AI, versus what’s mostly noise?
UltraPoci@reddit
Can't wait for when the bubble pops and people with actual expertise will be the ones who will be sticking around.
97689456489564@reddit
There likely is no bubble. If there is, LLMs are still going to be very dominant, anyway.
UltraPoci@reddit
There's clearly a bubble. What happens afterwards is anyone's guess
97689456489564@reddit
I am like 65% sure there's no bubble. Maybe I should bet on one of those prediction markets. (Sadly they're all owned by right-wing nutjobs.)
Complex-Lettuce7164@reddit
And you’re left wing and don’t understand the simple economic reason why it’s a blatant bubble. Moving money around companies and ‘promising’ to pay doesn’t generate capital like you think it should
97689456489564@reddit
Would you like to make a bubble-bet? (As in, we both stake money and pick a year and for the conditions we consider a valid "bubble burst".)
Also, I am not left-wing at all. I'm a neoliberal (socially progressive, economically pro-market).
Complex-Lettuce7164@reddit
You can make a bubble bet via the stock market! It's called shorting. You borrow some stock, sell it immediately, and then buy the same quantity back when the price is lower and give it back to the broker; the difference between what you sold it for and what you pay to buy it back is your profit. A 10x leveraged short on companies like Nvidia will genuinely create generational wealth when the market inevitably collapses.
Bet against me by going long: just buy stock and assume the growth will continue. That can be our bet.
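Back-of-the-envelope with completely made-up numbers, just to show the mechanics (ignoring borrow fees, margin interest, and the fact that losses on a short are unbounded):

```python
# Toy short position: all prices are invented for illustration.
borrowed_shares = 100
sell_price = 180.0        # borrow the shares and sell them immediately
buyback_price = 120.0     # later, buy them back to return to the broker

profit_if_it_drops = (sell_price - buyback_price) * borrowed_shares   # 6000.0
loss_if_it_rallies = (sell_price - 250.0) * borrowed_shares           # -7000.0 if it goes to 250 instead

# 10x leverage multiplies both of those numbers, and a roughly 10% move
# against you is enough to wipe out your margin entirely.
print(profit_if_it_drops, loss_if_it_rallies)
```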
97689456489564@reddit
RemindMe! 3 years
97689456489564@reddit
That is true, and I already am. It's just more satisfying to win a direct time-constrained bet.
All that being said, Hegseth unexpectedly ruining Anthropic's business relationships does throw a wrench in my predicted revenue trajectory (my company and all the companies I know all have DoD affiliations and we all use Claude for everything). So take my prediction to be in the counterfactual where that did not occur. Not sure how much it may affect NVDA in the long run, though.
Dean_Roddey@reddit
Are you kidding, there's a huge bubble, both in terms of hype and in terms of financials. People seem to think it's going to keep going forward at the same speed, and it's just not.
We got a huge jump because it was starting from almost nothing and some large companies suddenly realized that if they spent a giant amount of money and burned enough energy to run a city, they could take existing NN tech and make it actually do something. But that's not going to continue to scale.
And then it turns into a war between these large companies to 'own' this space, so they are pushing it and investing far more than is justified. I think it's worse than the internet bubble, and the internet was a tool with vastly wider applicability.
Rude_Philosophy_4631@reddit
I just hope once the bubble bursts we get regulations.
97689456489564@reddit
But you understand this is empirical, right? In, say, 5 years we'll learn if you were right or if I was right. I am predicting I will likely be right. You and I could take out a bet of some sort.
That said, I am making two claims:
ZealousidealBad2753@reddit
Why are there so many "diversity hires" in this sub? It’s ruining the industry standards.
texan-janakay@reddit
every new invention gets heralded this way - oh all that kind of job will go away - it never happens. The jobs may change, but there are still jobs.
AI is a great ASSISTANT, but it is not a people REPLACER.
Calm-Success-5942@reddit
For replacing jobs you need actual intelligence that can predict the outcomes of its actions. LLMs can't do that, and I suspect we are absolutely nowhere near in research on an AI that can reason and predict outcomes.
So LLMs are useful for summarizing info, getting advice, and learning surface-level knowledge on new topics. But not for making intelligent robots.
97689456489564@reddit
??? What happened to this subreddit? HN went through its confusing anti-AI phase but seems to mostly be out of it, now. Is this place going to lag behind by a few years? We're not on GPT 3.5 anymore.
fuscator@reddit
You just have to ride it out. Take your downvotes for not joining the anti-AI brigade, and in a few years people won't be able to pretend anymore.
AI assisted coding is getting better every few months and within some years (unsure exactly), pretty much every programmer will be using it.
Bookmark.
EveryQuantityEver@reddit
No, this vague hand wavy, "Someday it'll be awesome!" bullshit is getting tiring. You don't get credit for not making an actual prediction.
fuscator@reddit
Ok, within five years the majority of programmers will be using it, and if you're not you'll be at a disadvantage.
You in?
EveryQuantityEver@reddit
Again, where's the evidence for any of this?
97689456489564@reddit
If you have not spent at least a few days sitting down and seriously trying to use GPT 5.2 Codex with extra-high reasoning on a personal codebase, you shouldn't have an opinion on this one way or another. If you use it and it really is constantly fucking up, then I'd get it. But I am guessing you have not tried that particular model and harness.
The evidence is just seeing the line's slope. In 5 years, I would bet a lot of money with 95% confidence that most programmers at most American corporations will be using AI to write a significant percentage of their code. This is already the case at many tech companies right now.
EveryQuantityEver@reddit
That's a lot of words to not present any actual, concrete evidence. Again, back up your claims with actual data.
97689456489564@reddit
One can't really give evidence of "[X] will happen in five years". It's just a prediction. What I could do is set up some kind of concrete bet, but AI skeptics almost never agree to those because the whole point is blanket skepticism, not thinking about specific outcomes.
Although it's a far-right hellhole, Twitter is one of the best ways to see dozens of daily examples of AI coding being useful for many people (experienced devs, junior devs, completely non-technical people).
I could send you hundreds of Substack essays and tweets, like this most recent tweet by Karpathy (who was pretty skeptical a few months ago but is less so now), I could spend hours giving my personal experience and the experience of all of my coworkers and friends.
I could link you METR graphs, which tries to set some objective metrics: https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/
I could ask you to ask me to "create [X]" and I could try to have an AI agent do it and tell you exactly how long it took, what the prompt inputs and tool calls and responses were, how many conversation turns were required.
You could also just try it. If it sucks, okay. If it doesn't, maybe you can shift towards more epistemic humility in the future.
Maybe-monad@reddit
In a few years you'll have to pay for the actual costs of running the models, training costs and copyrights for data used in training.
97689456489564@reddit
Wanna take out a bet on whether cost per token for the frontier models will be higher or lower in 2028? This is empirical, so we can actually do a solid bet and learn which of us was right.
fuscator@reddit
No-one will bet you. They will furiously click the downvote button instead.
97689456489564@reddit
Yep. Even if they said they wanted to, they'd say that betting is degenerate, or something.
97689456489564@reddit
This place is one of the last holdouts. It's really odd.
fuscator@reddit
That's how the Reddit system works. Downvotes drown out dissenting voices so you tend to get echo chambers.
And Reddit is ridiculously anti-capitalist, so obviously they're going to be anti AI too.
It doesn't matter though. People can continue to pretend the trend isn't there, but at some point the magical internet points are not going to help, and they'll have to join reality.
TheBanditoz@reddit
I'll take you up on that.
I'll reply to this post in exactly a year and see how we're still feeling.
fuscator@reddit
A year? Fine. But I did say a few years. Let's say within a maximum of five years I bet most programmers are using AI-assisted coding.
97689456489564@reddit
I'll happily bet you on it.
Maybe-monad@reddit
We're on Opus 4.5 which hallucinates in weird ways and writes shitty code
97689456489564@reddit
It doesn't hallucinate much. The code is not really shitty usually, though sometimes it's incorrect or incomplete.
I will pay you $20 so you can get Codex if you want. Tell me GPT 5.2 Codex XHigh sucks. It's obviously amazing. Sure, I underspecify a refactor and it suddenly makes a function taking 20 args - it's not perfect at all - but it's very good at most tasks. And in 2027 the best model will likely be noticeably better.
Maybe-monad@reddit
AI brainrot is real
97689456489564@reddit
You people really are so lost. It's very bizarre.
Just try it! It's right there!
Maybe-monad@reddit
The only thing I find bizarre is how can you be so naive.
97689456489564@reddit
All you have to do is try them. It's disappointing how much ideology can influence people's behavior.
Maybe-monad@reddit
How do you think I know Opus writes shitty code?
97689456489564@reddit
Try Codex. If you try it and you write a blog post showing what you did and what it did and that it sucks, I will concede you're right.
Maybe-monad@reddit
I don't need you to concede anything, you will do it on your own when the time comes but there's much to learn until then.
97689456489564@reddit
Not to harp on this too much, but give this recent tweet and the replies a read. It fully fits my experience of using AI coding agents daily for months: https://x.com/karpathy/status/2015883857489522876
Maybe-monad@reddit
So you've been outsourcing your coding skills to LLMs and you've become worse at evaluating the output, in other words dependent.
97689456489564@reddit
I am using it professionally on several large software projects and at home for three large software projects in various languages. It is very good. I have seen thousands of other veteran devs report very good results. I gain nothing by being right in this argument, but I am pretty sure that you're going to change your tune within a year or two (or even tomorrow if you were to try it and attempt to follow best practices with it).
The "meta" right now is typically Claude Code for smaller tasks or more "action-oriented" tasks, and Codex GPT 5.2 XHigh for complex, large codebase changes (new features, big refactors, bugs that Claude Code has failed to find).
imeeseeks@reddit
No! The thing is you haven't tried enough. Like maybe you simply do not know how to write your prompt in a way the LLM can fully understand your requirement. Also you need to be really explicit about the way you want your code; it's easier to explain it than to write it yourself
/S
Maybe-monad@reddit
It's easier to have an LLM loop through an array of bugs while trying to fix a simple issue than to fix it manually; surely the person who will maintain it two years from now will have no trouble understanding what's going on /s
EveryQuantityEver@reddit
It still can't predict anything other than one word usually comes after the other.
2this4u@reddit
That's just ignoring facts. The technology is flawed but you can absolutely get an LLM to predict outcomes for its actions, that's almost exactly what it's designed to do.
Consider how they got one to run a vending machine (badly) for a long-term period: it didn't randomly submit orders, it used available data to make decisions (often poorly thought out) based on what would sell.
Being bad at something doesn't mean it's not doing the thing.
There are lots of things to point at as problems with LLM vibe coding; making shit up harms your argument because you can so easily be dismissed as someone uninterested in reality.
tgdtgd@reddit
An LLM is by definition a system that predicts the most likely next word. It does this by utilizing the largest possible preprocessed set of knowledge available. Aka a stochastic parrot.
There is no mechanism that allows it to understand what it is doing. Hence, e.g., the wrong-number-of-fingers problem. These issues can be mitigated, but it's more of a brute-force approach: do something, check it, redo it, check it, ... until the check doesn't raise any issues.
Is it possible that the parrot can be used to successfully operate a vending machine? It seems so. Can it produce the right decisions for future situations? Sometimes. Can it predict the outcome of something? Sometimes. Does that mean it can understand what it is doing? No. (See stochastic parrot.)
LLMs are great tools. But it is very important to know their limits.
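To make "predicts the most likely next word" concrete, the generation loop is conceptually just this (a deliberately silly toy with a hand-written lookup table; a real LLM computes the probabilities with a neural net over billions of parameters, but the outer loop looks much the same):

```python
import random

# Toy "model": context -> probability of each possible next word.
toy_model = {
    ("the", "cat"): {"sat": 0.7, "ran": 0.2, "exploded": 0.1},
    ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
    ("sat", "on"): {"the": 0.8, "a": 0.2},
    ("on", "the"): {"mat": 0.6, "keyboard": 0.4},
}

def generate(prompt, steps=4):
    words = list(prompt)
    for _ in range(steps):
        context = tuple(words[-2:])          # tiny fixed context window
        dist = toy_model.get(context)
        if dist is None:                     # nothing learned for this context
            break
        next_words, probs = zip(*dist.items())
        words.append(random.choices(next_words, weights=probs)[0])
    return " ".join(words)

print(generate(["the", "cat"]))  # e.g. "the cat sat on the mat"
```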
97689456489564@reddit
Your first paragraph is very, very wrong. It was wrong even in 2023. Do a little research.
KarasuPat@reddit
Enlighten us, how is he wrong?
97689456489564@reddit
Why not try to use any of the tools for yourself? The current ones.
KarasuPat@reddit
How is that an answer? I’m not talking about tools. You said his definition of an LLM is wrong. How is it wrong?
97689456489564@reddit
https://x.com/Sauers_/status/1965101031077150863
https://x.com/dionysianyawp/status/1965134873519145221
https://x.com/repligate/status/1965659230486364420
Base LLMs do do that. But no one uses those. They are heavily modified by many additional layers that alter how they function.
EveryQuantityEver@reddit
Their first paragraph is literally how these things work. If you want to dispute that, you have to provide actual fucking proof.
EveryQuantityEver@reddit
You don't have any facts in your corner. And literally the only thing an LLM can predict is that one word usually comes after the other. That's it.
RICHUNCLEPENNYBAGS@reddit
You don’t actually though. Lots of jobs have been replaced by dumb automation.
TwentyCharactersShor@reddit
Only if you know where it may be wrong, and only in areas that are not very niche.
Bloodshoot111@reddit
Yea, I work in automotive and we do have some limitations and specialties. And it's always so painfully wrong, even when I give it that info.
TyrusX@reddit
You are just not using the right "vibe topology", my friend! /s
FuzzyWizard834@reddit
interesting perspective
10K_Samael@reddit
Just gotta get good enough to fix AI code and you'll always be employed because now there is infinitely more shit code in prod creating liability
Top_Percentage_905@reddit
> As a professional programmer I find all of these posts/ads silly at best.
But the advertised nonsense is not targeting experts. In my home country the propaganda has reached new heights in media sources many smaller investors trust. In part because these 'journalists' are not programmers either, in part to get dumb money in so the smart money can get out.
Crazy-Platypus6395@reddit
If anything I think the idea of not having to write your own 1k-line class files has made more people interested in programming (note how I didn't say programmers) than ever before. This is more like the equivalent of when we brought 3D printers to the maker community. Less prep work, and the guys who know what they're doing do great things. Unfortunately, most will just print/slop out a few half-baked products and call themselves engineers/collectors.
goomyman@reddit
AI isn’t killing programming jobs as fast as AI spend is.
TomWithTime@reddit
I feel like OpenAI and the others are going to miss their window of AI being profitable. The more time vibe coders invest in these tools, the more people have a chance of becoming aware that there is more to programming than writing code, and that it takes more than having an idea to be successful.
I'm sure, just like with OnlyFans, a handful of vibe coders will achieve a livable amount of success. Unlike OnlyFans, everyone else is going to incur large AI token bills and get no returns to even cover their app listing fee.
No_Indication_1238@reddit
There are a ton of open source models for the vibe coders. All of that memory is not for your average Joe and his localhost website. They are gearing for servicing entire enterprise industries and their LLM workflows. Like, the ones that will emerge any day now...I said...any day...oh, automated chatbot and voice to text summarizers!
Unlikely_Eye_2112@reddit
I'm bracing for what will happen once they have to make some major price hikes. We're currently in the phase where they're burning venture capital. We pay for Copilot at work and it does help a lot when you treat it as cheap outsourcing that you give very detailed demands. But once it starts to cost enough to create a profit, the quality and the need for handholding become a problem where it's cheaper and easier to just go back to doing it all ourselves.
SupaSlide@reddit
I’m hoping for a future where models that can change a few functions across a couple files can run decently on device. I don’t need it to write all my code in one shot, but I do like transcribing something small and exact instead of typing it.
Unlikely_Eye_2112@reddit
Yeah that would be pretty good. There's stuff that can run on a Raspberry Pi with a specialised AI hat, but I haven't tried it.
For Copilot that we use at work I've found that I get the best results when I treat it like a junior and give it a small task with clear boundaries and a lot of context. It still requires some handholding but instead of hours it takes minutes.
CpnStumpy@reddit
The difficulty will be convincing the bosses that software can be written without this shit. Sadly, MBAs love burning cash in effigy to truths long past their prime.
TomWithTime@reddit
I hope that when that happens, if it ends up very expensive, it becomes a widespread act for many of us to stand together and tell our executives that we faked its usefulness in order to help them score extra investor money, but that it's time to stop pretending the current offering is going to benefit the company more than a couple of juniors we can train.
AI tools can always have a breakthrough where their accuracy improves and their cost dramatically reduces. We can adopt the tools when they are good. The current offering? Barely worth using, even with no cost to me.
Claude also made some catastrophic mistakes this week, so the first thing I'm doing next week is redoing my work from Thursday and Friday. Our bosses keep pressing us to use it more and delegate more to it. Ok, well, I've added another disaster case to my list and will keep faking it to appease the boss. I'll share this with you so you can exercise caution as well:
1. A task that will touch multiple packages that contain functions with the same name. Claude has a chance to misidentify them as the same function.
2. A task where you have packages named in a versioned sequence, e.g. thing-v1 and thing-v2, and you want to decommission v1 and bring over shared parts into v2.
#1 isn't that bad, but it could make the LLM confidently miss things during tasks for removing unused code. #2 is the new addition to the list that bit me this week. I gave a very detailed yet simple task and direction: look at any v1 references in v2 and copy them over to v2, into an existing file that makes sense, or a new file. And then it proceeded to copy everything from v1 over, reference or no reference, littering the v1 code all over the v2 code, and renaming my files to file_old.backup, and unfortunately I did not commit before that so it was all staged code mixed up together.
And unfortunately the deltas are large enough that the optimal next move is to copy out what I made and then reset.
improbablywronghere@reddit
My fear is that the bosses will have successfully browbeaten a generation of software engineers down to being "vibe coders" instead of lifting any non-engineers up to making a "viable living". I think the former happening is a much bigger deal than the latter, and it is an ongoing project.
ridicalis@reddit
Step 1: Spend $billions
Step 2: ???
Step 3: Loss
EEcav@reddit
spending money = projected growth = increased stock valuation… until no profit comes. as long as you sell your shares first it’s legal theft.
bluegrassclimber@reddit
are we talking about programming or stocks?
seanamos-1@reddit
You are typically right, but what makes this different than the typical case of "someone upstairs got a bee in their bonnet", is that the costs involved are astronomical, unprecedented even.
In the vast majority of cases, this is investment that will never come close to breaking even, so these companies are actively harming themselves. When the penny drops and the time comes to start righting the budget due to these massive losses, salaries are the biggest expense, so they get chopped first. It just takes this happening to a handful of big tech companies for there to be a giant ripple effect through the industry.
So unfortunately, there's going to be a winter for us devs sooner or later, and it's not going to be because LLMs took all the programming jobs; it's going to be because reckless spending on LLMs drained everyone's wallets.
SupaSlide@reddit
I’d argue a lot of companies are spending a lot more than “a bit”
And more importantly, if you’re at a publicly traded company, if they’ve made a big deal about it, the stock price is at the mercy of AI hype and if that collapses then layoffs are almost certain.
bluegrassclimber@reddit
Yes 100% agree
Martin8412@reddit
There’s no shares to sell until after the company IPOs and even then, most IPOs mandate lockup periods on stocks for people who got their allocation prior to the IPO. They usually get only a tiny share of their allocation at six months post IPO, and have to keep working there to get the rest.
But don’t let facts get in the way of your rage posting.
NoxinDev@reddit
You missed the last steps, where the gov would go after those thieves, but instead they lobby (bribe) to get the government to bail them out and start the cycle again. Yay, the cycle of life is beautiful!
pickyaxe@reddit
is this loss?
LowB0b@reddit
thank you for this I actually laughed. I hate this timeline so much, being critically online is doing so much damage to my mental health
feketegy@reddit
I am checking in with the clankers from time to time, to see where my job security is. I even prepaid a month for the Claude Opus 4.5 "frontier" model.
And let me tell you, I sleep like a baby.
hyrumwhite@reddit
I'm ignoring it bc I have a coworker who is doing everything "right" with AI. He's got agents working when he's not working. He's dropping 300-file PRs that we're all objecting to...
And the code sucks. It's half-baked, half-finished, and poorly architected.
Raknarg@reddit
yup just had to review a 2.5k line PR the other day my coworker admitted to being written "98% by AI", really felt like our reviews were the first time he actually looked at the code it generated.
2this4u@reddit
That's not doing things "right", that's exactly the kind of vibe code shit that means you're doing it "wrong".
9Q6v0s7301UpCbU3F50m@reddit
That’s where I’m at - when we first started using AI for everything I couldn’t believe the code it was writing - massive modules with massive functions, massive amounts of repetition, etc - needed to prompt it to write small functions, small modules, be DRY, etc. I wanted to ensure that the code was human-readable so that I stood a chance of understanding it and in case by some miracle we stopped using AI, and also noticed that the crap code that the AI wrote left to its own devices was tripping it up constantly because it was so convoluted.
hyrumwhite@reddit
That’s why I put “right” in quotes. It’s all the stuff that’s prescribed by the vibe bros.
kontrolk3@reddit
Good programmers before AI are still good programmers with AI, just faster and more efficient. Bad programmers before AI are still bad programmers now, just with the ability to create even more problems than before
ElectronicCat8568@reddit
I recently did my first code review on vibe coded code, and I didn't know it was vibe coded. My first reaction when I just scanned the code was "Oh my god, WTF is this?" Then I worried maybe it's actually too brilliant for me, so I took some time and reasoned through it completely, by running it, putting in break points, testing it. It was ridiculous. I could have made the same thing 10 times simpler. I honestly thought a lot about what to say to the other dev, because I still didn't know it was vibe coded. So I started with a basic question about one part, and the response revealed it was an AI choice, and it was vibe coded. And I was like "Ohhhhh. Good. They're not insane. AI is. Whew!" But I sure as hell hope I don't have to ever work on that fever dream. You vibe it, you bought it. That's gonna be my new motto.
Raknarg@reddit
Seeing the dogshit that AI writes and the kind of dogshit dev relying on AI too much turns you into makes me confident the human element isn't going anywhere.
NoxinDev@reddit
Glorified Markov chains are only impressive to the layman. The only person they are going to get rid of is the outsourced code monkey whose code was already nigh unusable; it's at best a side-grade.
At no point is producing slop code equivalent to the decomposition of real world problems, assigning the right tools and methodologies followed by the real work of debugging and ensuring quality and security. Writing it down and pressing compile is the simplest part of the job always.
We can forget about slop factories taking your job unless you truly deserve to lose it.
muuchthrows@reddit
Not disagreeing with your overall conclusion, but the glorified Markov chain trope is getting old and is probably not true. There is research showing the larger LLMs are developing at least some rudimentary internal abstractions, simply because it’s the most efficient way of performing some complex predictions.
ankercrank@reddit
Try getting an LLM to write code for more than a single narrowly scoped class and you'll find it digs itself into a massive hole and never pulls itself back out. I've seen MCP tools make an absolute monstrosity and spend hundreds of dollars in the process.
Saint_Nitouche@reddit
Last night, I used Opus to generate an entire blazor server app in around ten minutes. Now, it was a small app - a single-page calorie tracker for myself. But it produced the UI (it looked decent), the backend and the data model, in one shot. If what you wrote is what you've experienced, I respect that, but it's just not accurate to what a lot of people are seeing and doing.
CreationBlues@reddit
perhaps what is essentially a first semester introductory programming assignment whose domain has been hammered at in training by the companies isn't the most... effective... test of whether it can go toe to toe with someone who's been programming for more than 1 month.
gajarga@reddit
We still aren't allowed to use LLMs at work for much, but I've been playing around with GitHub Copilot on a small personal project for the past month or so. How I've described it to my coworkers is that it's like having your own personal co-op student, only that student is the fastest developer in the world, by like a factor of 50x.
Prompting even the newer Claude models is remarkably like supervising a very green developer. You have to nudge it in a lot of the same ways. If you keep it focused on tightly scoped, well defined problems it’s super useful. Give it too much to “think” about and it’ll go off the rails just as fast.
Amazing-Royal-8319@reddit
Have you used opus 4.5 much? It’s written a lot of code for me, and not just narrowly scoped classes. Game changer for productivity. I’m a senior engineer with extensive open source contributions to several well known python libraries prior to (and during) the rise of AI-assisted coding.
I don’t always get exactly what I want on the first try and building fuller features usually takes a few rounds of prompting/iteration, but we’re talking like 1-2 hours of intermittent prompting and review to build what would have taken 10-20 hours of painstaking thought and effort before.
alphanumericsheeppig@reddit
I've used Opus 4.5 quite a bit. I'm a principal engineer, and most of the work I've been doing in the past 2-3 years has been on niche B2B SaaS applications. I find most models do decently well at building stuff that's close to what already exists, or natural progressions/extensions to software that's already in the training data. But even Opus doesn't really handle the kinds of things I have to do on a day-to-day basis. It's useful if I need to scaffold a simple CRUD API quickly, but when it comes to complex business requirements, I'll spend all day arguing with the LLM giving me something that doesn't actually work when it would have taken half a day to implement it myself.
ankercrank@reddit
That's my exact experience. It's fine for tasks that are a step above an IDE's auto-complete, but I definitely wouldn't have it do structural changes to an app.
NoxinDev@reddit
I know I simplified it for comedic effect, and I'm well aware of the complexity differences, but in the end the majority of it is still smoke and mirrors to make it appear that it isn't simply predicting the best set of tokens to return for your input tokens.
The reasoning and abstraction "proof" tech CEOs tout is just intermediary tokens before the final tokens, another layer of the same thing that gets fed into the final result or produced purely to placate the user; is this intelligence? No.
Will LLMs as a technology in general get there? I very much doubt it.
Will neural nets in general get there... eventually.
EEcav@reddit
With all the investment following the big chat gpt breakthrough a few years ago, it doesn’t seem like the tech improvements are scaling with the money. They are getting incrementally better, but there hasn’t been a leap up as big as the initial one.
NoxinDev@reddit
I think this is all due to a fundamental lack of understanding, by the venture locusts and tech CEOs, of what LLMs are and how they function. The very nature of LLMs is that they are "stochastic parrots" (a phrase I love for describing them): they will repeat a learned set of tokens most of the time for a given set of initial tokens. They can never arrive at the promised ($$$) land of AGI; the architecture isn't teaching unique thought, it is entirely transactional/functional.
Like you said, it can get incrementally better, but what is needed for AGI is an entirely new and unseen architecture, something that comes out of gradual research, generally at universities, over years and years if not decades. You cannot rush a breakthrough of this caliber, and all of this prep of data centers and funding will indeed make better LLMs, but that tech isn't going to revolutionize the world any more than it already has.
FortuneIIIPick@reddit
Agreed. I had thought maybe quantum computing applied to AI could be the next big leap, but so far it's also only improving things somewhat, with no leap in sight yet.
BoringEntropist@reddit
Yeah, diminishing returns on investment are going to pop the bubble. The training cost of AI is growing much faster than its capabilities.
2this4u@reddit
Everyone you know is operating under the intelligence of predicting what to do based on available parameters, and you yourself know you occasionally make mistakes or even just say the wrong word sometimes.
So why do you think the same symptoms and processes in LLMs prove the technology is fundamentally broken? It is genuinely useful for debugging, writing unit tests, and writing rote functions like string manipulation. Why does it even matter whether the technology behind that is simple or complex?
buldozr@reddit
As someone who had to rewrite function implementations initially written by Claude Code, even string manipulation can be done in very naive and non-optimal ways, which manifests in real money wasted on processing of big datasets.
red75prime@reddit
And the best set of tokens for "[your problem specification]" is a well-written program.
red75prime@reddit
The sibling commenter decided to block me, so the answer is here:
If you sprinkle in RLVR for correctness, and RLHF for "best", and CoT for inference-time scaling, and inference-time RL for task adaptation, then you'll get something that goes beyond the training data set.
CreationBlues@reddit
Ahh, not exactly
"best" here is a word which means "most statistically represented in the training set, mod whatever rl sugar got sprinkled on top"
Whether a well-written program is statistically represented in the data set is a different question entirely...
goranlepuz@reddit
😉
FortuneIIIPick@reddit
> Helped me solved harder bugs much faster because there my main limiting factor is usually my own motivation and exhaustion of ideas of what to try or doublecheck.
Using it like a Google or SO replacement works pretty well. People who allow it to generate code they then put into a PR without even reviewing it and testing it themselves, first; are the people companies should not be hiring.
97689456489564@reddit
No, their overall conclusion is also wrong. There is some kind of collective forcefield that anti-AI ideologues wrap around themselves. Opus 4.5 is great at debugging but also great at coding. GPT 5.2 Codex XHigh is even better at coding.
muuchthrows@reddit
I agree. I think this is one of the largest problems in our field at the moment. AI can at one moment be extremely helpful while in the next moment being extremely dumb. Wrapping your head around this paradox is super painful, and a lot of the time the easiest path is to join one of the extreme sides.
usrlibshare@reddit
And so does a markov chain, only more rudimentary.
If we postulate true symbolic intelligence to be what we do, then a text-based autocomplete is symbolic as well, only its power of conceptualization is much more limited (all it knows and can work with is text sequences).
So no, the comparison is accurate.
It's always personal accounts, opinions, feelings. Or "vibes" if you wanna use that term.
Where is the data?
This has been going on for almost 4 years at this point, after more capex invested per unit of time than for any other technology before, more media attention and free advertising than ever before, and CEOs falling over one another to make the bigliest announcements about how awesome all of this is...
... surely someone, anyone, should be able to produce a single goddamn graph, study, ANYTHING showing in cold, irrefutable facts that this stuff is actually having a real world impact on par with the announcements, no?
I mean, with the iPhone, at this point after Jobs got on stage, we had an entire new industry around the smartphone platform. No one had to "feel", "believe" or "vibe" how good smartphones were; it was there, clear as day, for all to see.
So, this is all I ask of people: Instead of telling me how they "personally found" something to be...show me the data proving their point.
Because we sure as hell have data showing the exact opposite.
muuchthrows@reddit
Are you denying there has been progress? I'm hearing far more anecdotes from people I trust about the newer models such as Claude Opus than I ever heard about GPT-3 or 4. Don't ignore the forest for the trees.
I’ve also read the study you linked multiple times and initially I was one of the persons constantly linking to it. But it’s been over half a year already and it’s still the only study that’s being thrown around.
red75prime@reddit
Do you know what a Markov chain is? First. You can't write it explicitly even for a measly 100 token context window. Its representation wouldn't fit into the observable universe. Second. It is a set of discrete states with nothing resembling latent representations of DNNs.
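If anyone wants the arithmetic behind "wouldn't fit into the observable universe" (round numbers assumed; 50k is roughly the vocabulary size of modern tokenizers):

```python
vocab_size = 50_000          # distinct tokens the model can emit
context_length = 100         # a "measly" 100-token context window

# An explicit Markov chain needs one transition row per possible context.
possible_contexts = vocab_size ** context_length
print(len(str(possible_contexts)))   # 470 -> on the order of 1e470 rows,
                                     # versus roughly 1e80 atoms in the observable universe
```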
maikuxblade@reddit
Markov chains with some edge-case handling aren't that much more sophisticated, though. And they have to do that because otherwise it can't tell time or count the occurrences of a letter in a word, which ruins the illusion of intelligence for the laymen they need to impress in order for mass adoption.
strangescript@reddit
"I use co-pilot everyday" is your first problem. By far one of the worst tools in the group.
crazyeddie123@reddit
co-pilot is just the IM-ish thingy that lets you talk to the model. You're still using Claude Opus or whatever.
sivadneb@reddit
Y'all are making your judgement call based on your experience with CoPilot of all things. CoPilot is not the current SOTA.
97689456489564@reddit
This thread is full of very cloistered people. It's pretty bizarre.
InterestingFrame1982@reddit
Unfortunately, and understandably so (giving a nod to their angst), the thought of losing their medium for craftsmanship is deeply scary. As a software dev, it's borderline existential, so there will always be an immense amount of cognitive dissonance in these conversations. The paradoxical three-year debate about whether it's useful or not, and its rising temperature, is certainly some level of proof that the tools are getting better.
97689456489564@reddit
I just have never understood this. I am crafting more things than ever. The AI is a way to extend your ideas. It's a personal force multiplier. The latest models have rejuvenated many people's love for building software for themselves and others.
Like roon (OpenAI employee) said here: https://x.com/tszzl/status/2015253546372153347
crazyeddie123@reddit
What? No! Programming never sucked. I really don't get what this guy is on about... requisite pain?
Pain is going to be when I can't get paid for coding and have to try and find a real job.
yaxriifgyn@reddit
Be that someone who writes the code that trains the AIs. Without you the AIs can never learn new things. They can only generate combinations of the things they already "know".
This has always been the way human knowledge increases. Accessing the newest knowledge has just been a lot slower in the past.
crazyeddie123@reddit
Don't you need like a PhD to get one of those jobs?
jake-spur@reddit
Plenty of tech debt is being created by vibe coders. Expect to see Vibe Janitor job adverts in a few months.
Squalphin@reddit
Which are definitely not fun jobs if you had to experience a „rescue“ project at least once. Some of those will definitely be lost causes and a re-write will be easier to do.
Martin8412@reddit
It’s pretty much the same as cleaning up after outsourcing to India.
vytah@reddit
Artificial Intelligence can write much more code much faster than Actual Indians can.
ehansen@reddit
Already happening
mx2301@reddit
One angle that I feel is under-discussed is the fun aspect. I am still a student, working on the side as a DevOps dev. It is not my first programming/scripting job during my studies, but it is the first job where I am expected to finish tickets and issues as quickly as possible and make use of agents and LLMs. And I have to say, it is really not fun for me anymore. I kind of enjoyed finding and figuring out the solution on my own more than just describing the problem to the LLM/agent and then understanding what it came up with.
_3psilon_@reddit
I'm contemplating the same thing with 9 YoE as a tech lead!
Probably gonna make a post out of it to this sub later once I've pulled it all together. Basically, writing quality code and crafting architecture is a mentally fulfilling activity for many. You envision something, implement it, feel great when the whole thing starts working, and always have a sense of learning and accomplishment! I shifted into SWE because I passionately loved coding and how code all works together in some architecture.
On the other hand, reading existing code in order to understand it and find bugs - especially if it's subpar LLM-generated AI slop - is a boring chore compared to the joys of writing it. It's unfulfilling, mentally taxing and it's easy to miss important things.
It's not fun! I'm often guilty of procrastinating on PR reviews. Who enjoys reviewing a PR that's hundreds of lines long?!
LLMs shift engineering activity away from writing code towards reading code i.e. they take away the fun parts and introduce more of the unfun parts. They also lessen the human discussion and connection part and instead we're mindlessly chatting with a machine all day.
Heck, they even degrade our thinking: at least with web search, we had to make the final decision about a piece of information or adapt some solution to fit our problem. Now, we're spoon-fed generative solutions that may or may not work.
I like programming computers but I don't like outsourcing my thinking to probabilistic black boxes and labeling that as "productivity". I don't like the most fun parts being taken away from this profession.
Of course, many folks just treat code as a means to an end already. They may be good engineers who just don't like coding as much. Or they are managers, CTOs etc. who may have liked coding before but don't have the time for it any more. For them, it's totally understandable that whipping out Claude Code in order to create something between two meetings can be amazingly fun. But not for me!
I'm not sure where all this will go but maybe I need some exit plan out of this profession even though I have achieved quite much. I already barely tolerate the pressure from AI hype, the anxiety that it causes and the burnout that it may bring.
9Q6v0s7301UpCbU3F50m@reddit
This is where I’m at too. I took joy in my job as a freelancer crafting applications that in some cases have been running for almost twenty years now and that have remained easily extended and updated. But recently I took a job with a consultancy/product shop that has gone ALL IN on AI and we’re expected to let AI write, review, revise and test all of our code - we are expected to just prompt it and check over the results. It’s kind of working but the job became very joyless and I quickly started daydreaming about retiring early, or getting into a new line of work.
CSAtWitsEnd@reddit
It’s because programming is an art. Being an artist is fun. Telling a machine to poorly mash together other artists work resulting in sloppy art is not fun.
This whole “AI future” all the CEOs are begging for is just so dystopian and it’s actual loser behavior to lean into it of your own free will.
diegoasecas@reddit
why I'm ignoring your grifter slop (anti AI is a grift too):
no reason, i just can't be bothered enough to care
oadephon@reddit
AI isn't killing the job yet. Give it another year. Give it two years. Think about how bad Claude was early 2025 compared to now, it's a much more sophisticated tool. How much better will it get, and how quickly?
ochrence@reddit
How many years of limitless, no-strings-attached free billions of dollars and gigawatts of electricity do we think that is going to take, and why do you think we’re going to get there before the market’s exuberance runs out after all this outrageous overpromising?
I do not think people realize the enormous and unsustainable amount of resources constantly being sacrificed to sustain any of this progress and subsidize public use of this technology. Eventually the bills are going to come due, and my bet’s that it’s before we hit “AGI.”
oadephon@reddit
Idk, how long do you think it will take?
I think we get human-level AI in the next 2-5 years. LLMs are already incredibly smart despite all of their limitations, and with this level of investment, there will be more massive breakthroughs to come in short order.
Granted, 2025 didn't have a massive breakthrough, it was still just scaling previous breakthroughs, but even that scaling led to pretty large capabilities improvements for LLMs. Even if LLMs keep scaling at the METR eval levels, they will be very good at even more complex tasks by the end of 2026.
I'm pretty optimistic we'll see the end of human wage labor in our lifetimes, likely within the next 20 years.
Martin8412@reddit
LLMs aren’t smart. They don’t know anything.
oadephon@reddit
Nah they're pretty smart, they know and understand quite a lot.
MeenzerWegwerf@reddit
They do not understand. They do not know. They are just trained.
oadephon@reddit
They do understand, in the same way we do.
The way they work is by identifying extremely complex relationships between words. They "know" these complex relationships so well that they can predict the "best" word to come next, and they are right so much of the time that they can do useful work.
The way we work is by identifying extremely complex relationships between words (and their features). Just like you can identify that my first sentence/paragraph above stated my point, my second paragraph gave a deeper overview of my point of view wrt LLMs, and this third paragraph is reiterating my point in a different way, wrt humans. Your brain is just doing complex math to identify these relationships between words in a reddit post, and you understand them so well you can probably make a good prediction of what word I'm going to type next. Bananas.
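If it helps, here's that "pick the most likely next word" loop in toy form. The scores are hand-made stand-ins (a real model learns billions of weights to produce them), so this is just a sketch of the idea, not anyone's actual implementation:

```python
import math

# Hand-made "relationship" scores between a context and candidate next
# words. A real LLM learns billions of weights to produce these scores.
def score(context: str, candidate: str) -> float:
    table = {
        ("the cat sat on the", "mat"): 5.0,
        ("the cat sat on the", "dog"): 1.0,
        ("the cat sat on the", "sky"): -2.0,
    }
    return table.get((context, candidate), 0.0)

def next_word(context: str, vocab: list[str]) -> str:
    logits = [score(context, w) for w in vocab]
    exps = [math.exp(x) for x in logits]           # softmax numerator
    probs = [e / sum(exps) for e in exps]          # probability per word
    best = max(zip(vocab, probs), key=lambda wp: wp[1])
    return best[0]                                 # greedy: most likely word

print(next_word("the cat sat on the", ["mat", "dog", "sky"]))  # -> mat
```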
ochrence@reddit
It is hard for me to make any prediction about what will happen in my lifetime. Of one thing I am sure, based on my personal experience working in Big Tech and directly with AI myself: if superhuman intelligence comes about, if we even know how to define that, it's not happening with LLMs or similar model architectures. Period. Reinforcement learning – real reinforcement learning, not RLHF – seems like maybe less of a dead end if you're trying to get there. Even then, though, anyone claiming an imminent singularity needs to take a deep breath (or is trying to sell you something).
Right now we are in yet another boom-bust, FOMO-driven hype cycle familiar to anyone who has been following developments in technology for over a decade. I wish we would learn a lesson about this for once. Yes, this time we do have some more impressive tech than the metaverse or NFTs. However, there is so much money to be made in lying to the public about LLMs' capabilities right now, with little legal risk, that you simply cannot take these companies at their word. Nor should anyone trust what they have to share about quasi-repeatable and suspicious "benchmarks."
I do not blame anyone for being excited about what is a revolutionary stage in natural language processing, but natural language processing is not intelligence in and of itself. If you'd rather hear this from an esteemed researcher than from a random person on the internet, go take a listen to what Richard Sutton has been saying lately.
oadephon@reddit
I mean, even if you think the Transformer on its own isn't the key to AGI (which I'm unsure about personally, but it's a totally fair position), you should still be able to say that massive, yet-unknown breakthroughs are an inevitability, and that they will come soon.
I mean, I use Claude for coding. It has massively improved since early 2025, from my own subjective viewpoint.
I haven't watched anything by Richard Sutton, but I've seen and read plenty by Geoffrey Hinton, Yann Lecun, Francois Chollet, the leaders of Deepmind and Anthropic, etc. A quick google search shows that Richard Sutton himself thinks we're nearing AGI:
Maybe AI won't take all SWE jobs in the next two years, in fact, it almost definitely won't, but it will take all of our jobs, and soon.
ochrence@reddit
Check out Sutton’s recent hour-long interview with Dwarkesh Patel. He has decidedly hopped off the LLM hype train since the quote you’ve found, and articulates many of the points I’m trying to make better than I can.
I’m happy for you that you are finding benefit in the tech. I personally still have no use for any LLM coding tools at all in my daily life. I’m a software engineer and feel quite secure in my line of work (in no small part due to my very lucky résumé), but most of what I do and think about is way higher-level than typing code. I work with a lot of medium to large companies’ APIs at my current job, and even now I’m generally able to tell when I’m working with human-designed systems and when I’m not.
The closer you have to look and the more you have to rely on the outputs of even state-of-the-art models, the more they fall apart in very particular ways. The models will continue to produce more impressive simulacra of intentional design, but at the end of the day they will still be simulacra. Inefficiently generated simulacra, I’ll add, whose tremendous cost is being temporarily shielded from the public by a truly historic funding glut.
This is broadly getting a little bit motte-and-bailey. “Massive, yet-unknown technological breakthroughs” are obviously “an inevitability” for as long as humanity survives. These things still do not intrinsically imply that we reach AGI within the decade, especially when the first major con artist’s AI company goes belly-up and people start asking for their money back. Funding for this moonshot will not last forever.
Barrucadu@reddit
People have been saying "yeah it was a crapshoot before, but now the latest models are the real deal, programmers are cooked!" every 6 months for the past 3 years.
oadephon@reddit
I mean, it's been true every time. They just keep getting better.
We're on the ramp up to the end of human wage labor and all you guys can say is, "It's not as good as they say it is! AI bubble!!!"
EveryQuantityEver@reddit
No, it absolutely has not.
Barrucadu@reddit
They keep getting better, but they're still not very good. Based on current rates of progress I don't see any reason to believe we're at risk of LLMs replacing much of anything.
oadephon@reddit
Nahhh Claude Opus 4.5 is pretty good....
dxflr@reddit
Not everything's gonna be a linear trajectory. In the current state of LLM-driven AI, it looks to be approaching a horizontal asymptote.
oadephon@reddit
This is just false. Look into the METR evals. And keep in mind that as other evals saturate, they will look like horizontal asymptotes.
awj@reddit
METR is like a master class in lying with graphs.
The main graph they have, the one everyone cites:
oadephon@reddit
The y axis is just 30-minute increments...? https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/
They also have an 80% success rate one?
If people were incentivized to take longer on tasks than needed, that would mean the task horizon is still doubling, just a bit slower than it seems...?
I mean, there are good arguments against METR but none of those are it...
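To be concrete, the doubling claim is just compound growth; plug in whatever starting horizon and doubling time you believe (the numbers below are made up for illustration, not METR's figures):

```python
# Illustrative extrapolation of the "task horizon keeps doubling" claim.
# start_hours and doubling_months are assumptions, not METR's numbers.
def horizon_after(months: float, start_hours: float = 1.0,
                  doubling_months: float = 7.0) -> float:
    """Task length (hours) an agent can complete, assuming pure exponential growth."""
    return start_hours * 2 ** (months / doubling_months)

for m in (0, 12, 24, 36):
    print(f"after {m:2d} months: ~{horizon_after(m):.1f}-hour tasks")
```

Whether the curve actually stays exponential or flattens out is exactly what's in dispute.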
awj@reddit
Bro is about to learn the hard way what a sigmoid improvement curve looks like. This has happened so many times with AI it’s practically tradition.
Fun-Rope8720@reddit
Unreal progress. And with so much competition in the space the pressure to keep innovating is life or death.
This year we are going to have our minds blown by all the new tech.
iso_what_you_did@reddit
The "built an app in 5 minutes" posts conveniently skip the part where users actually try to use it and everything breaks.
AI is great for boilerplate. It's terrible at architecture, edge cases, and all the things that make software actually work in production.
Hinton's radiologist prediction is the perfect cautionary tale - AI augments expertise, it doesn't replace it. Software will be the same.
ciemnymetal@reddit
Saying AI can make anyone a software engineer is like saying a tractor and other machinery can make anyone a farmer. Tools don't make anyone an expert; years of study, knowledge and experience do.
neortje@reddit
The cURL project just ended its bug bounty because of a huge influx of fake AI-generated reports claiming security flaws in pieces of code that don’t have them.
For now I’d say the software engineering job is safe. Creating proper software with AI requires in-depth knowledge of programming to actually understand what the AI wrote.
temporaryuser1000@reddit
Think of the AI slop AI will be learning from
nemesit@reddit
even if developers are someday not needed for code, engineers will still be needed, because people rarely know or understand their own needs
97689456489564@reddit
AIs will eventually be better idea generators than humans.
nemesit@reddit
maybe in 100+ years but not anytime soon
97689456489564@reddit
I predict within 20 years there's a higher than 50% chance they will be, and within 30 years a higher than 75% chance. We could take out a bet; though of course it is very hard to evaluate this.
nemesit@reddit
what we have now is from the 70s, the computing power just wasn't there to make use of it. there's no way 20 years is enough
97689456489564@reddit
I am confident enough that I'd happily take out a bet on any of the predictions I have made here. (Commensurate with my specific confidence probabilities and timelines, of course.)
mrspoogemonstar@reddit
another day, another post where people just don't seem to get it. there is no sign of an end to llm scaling. the models are actually reasoning now - the barrier at the moment is the context window and memory.
EveryQuantityEver@reddit
Yes, there is. It's called money. They do not have infinite money, and this stuff is expensive.
They absolutely are not.
bureX@reddit
Chaining multiple prompts to try and avoid hallucinations.
2kdarki@reddit
Behold, a most curious and lamentable spectacle that has begun to infest the realm of true craftsmanship. One observes these self-anointed "developers," though their title is a garment ill-fitting and stolen, who profess to build digital kingdoms while possessing little more than the whimsical "vibe" of an architect.
They are the artistic equivalent of one who, with a grandiose flourish, commands a machine to produce a masterpiece, then has the gall to press their seal upon the canvas and proclaim themselves a painter of the old masters' lineage. The effort lies not in the mixing of pigments, the understanding of light, or the stroke of the brush, but merely in the utterance of a fashionable incantation.
Their code is not architected; it is wished into existence, a precarious tower of incantations copied from distant scrolls, held together by hope and the toil of the libraries they do not comprehend. They speak in vagaries of "energy" and "flow," mistaking the absence of rigour for the presence of genius, and believe that debugging is a spiritual journey rather than a disciplined exercise in logic.
Let it be known: to manipulate the very logic of machines, to command the silicon with precision, is a pursuit worthy of the title Developer. It is a vocation of structure, of relentless reason, of building with materials one thoroughly understands. What these "vibe coders" practice is not development; it is digital dabbling. They are not builders of empires, but mere decorators of rented rooms, blind to the foundations beneath their feet and doomed to bewilderment when the walls—as they inevitably must—begin to crack.
The court of genuine achievement has no throne for such hollow pretension. They may play at creation, but they reside in the gallery of consumers, mistaking the act of curation for the sweat of genesis. A most tiresome and decadent trend, indeed.
rolm@reddit
I have a theory that the pendulum will swing again, and programmers (esp. senior programmers) will be in high demand to clean up the mess created by overuse of AI. Most notably, to understand and maintain it.
Imnotneeded@reddit
Every day there are several new postings in the social media about a "layman" who build and profited from an app in 5 minutes using the latest AI Vibe tool.
- Are there any real case studies?
Adrian_Dem@reddit
real vibe coders take months to release an app. and usually not very complex ones.
they still put in the work, have analytical thinking, they just lack the coding skill.
there's no magic to AI, it's a tool in someone's hands, and it will perform as it is guided.
FortuneIIIPick@reddit
It's not even a tool. AI helps, but a tool is deterministic. AI will tell you it's not deterministic if you ask it; even if you configure it to be as deterministic as possible, its output can still vary day to day, which means it's not deterministic.
Some people tell me AI helps them, so it's a tool to them. But a tool is something you can depend on to work the same way every time, like a compiler. AI is not deterministic; its output can be different tomorrow than today, sometimes wildly different, and with hallucinations, so you have to use it with caution. If you're careful, it can help... but it is not a tool.
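To illustrate where the nondeterminism comes from (toy code, not any vendor's actual decoder): with temperature above zero the model samples from a probability distribution, so identical prompts can give different outputs, and even at temperature zero real inference stacks have other sources of variation.

```python
import random

# Toy decoder: pick the next token from a probability distribution.
# Purely illustrative; not how any particular product is implemented.
def sample_next(probs: dict[str, float], temperature: float) -> str:
    if temperature == 0:
        return max(probs, key=probs.get)       # greedy: same answer every run
    # sharpen/flatten the distribution by temperature, then sample
    weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(weights.values())
    return random.choices(list(weights), [w / total for w in weights.values()])[0]

dist = {"works": 0.55, "fails": 0.35, "segfaults": 0.10}
print([sample_next(dist, 0.0) for _ in range(3)])  # always the same
print([sample_next(dist, 1.0) for _ in range(3)])  # may differ run to run
```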
Squalphin@reddit
If you follow the vibe coding scene, you will notice that they only regurgitate code from existing open source projects with minor changes here and there. You would think that every month now at least one amazing project should surface, but nope, it’s just slop 🙄
buldozr@reddit
There was an announcement from the Cursor CEO that they've built a browser almost entirely with AI: https://www.reddit.com/r/programming/comments/1qdo9r3/cursor_ceo_built_a_browser_using_ai_but_does_it/
The 3+ MLOC of code they dropped did not even compile, was found to use massive existing OSS engines under the hood, and you can guess at the maintainability of 3 million lines of slop. Some people would counter with "you can use AI to iterate on it!", but I suspect it will turn into a money sink for LLM compute tokens with diminishing returns.
poladermaster@reddit
The 'AI replaced my job' narrative is strong, but tbh, debugging still requires a human brain (for now).
ARK7080@reddit
Many people only need a small tool. These apps are already enough for them.
bluegrassclimber@reddit
AI is creating lots of opportunity to innovate; if you don't resist it, you can get a slice of the pie. As a programmer, that's the best way to look at it, IMO.
cfehunter@reddit
It's worth staying on top of the tools, but for me personally it's a meeting-notes and documentation-search tool.
97689456489564@reddit
You all are like 2 years behind. This thread is like a time portal.
cfehunter@reddit
I'm trying out new stuff constantly, it's still not good enough.
It's fun to play around with, and I will be staying on top of things, but code assistants are just absolute shite. No matter what the twitter talking heads (who are literally selling these products) say.
97689456489564@reddit
You've tried Claude Code Opus 4.5 and/or Codex GPT 5.2 XHigh for at least a few days, trying to follow best practices?
cfehunter@reddit
Have tried Claude. Haven't tried codex. Maybe it's because I'm a C++ programmer on a mostly in-house tech stack, but it is bad.
bluegrassclimber@reddit
Yes, the AI is only as good as the repositories it's trained on.
It's getting pretty good for C# and is pretty established for other things like Python and Node. I can imagine C++, where pointers become an issue, is where it flails.
bluegrassclimber@reddit
yeah I'm saying to explore emerging products and markets
Dean_Roddey@reddit
As has been pointed out endlessly, but never addressed by the AI bros: why haven't any of the companies pushing this idea taken over the software industry? Why don't they vibe-code everyone else out of the industry?
The answer is obvious.
shokuninstudio@reddit
Nobody has the right to tell you what to do with your own computer, especially the cretins on LinkedIn and in the media who spread these fears. If you want to use generative AI models or not it is up to you. Your consumers and customers are interested in your story and journey, not in tech hype concocted by someone else.
If you read r/LocalLLaMA or r/StableDiffusion you'll see that a very large number of generative AI users do not believe the hype and are a lot more critical than those who aren't so deep into the technologies.
97689456489564@reddit
Largely because the local models aren't as good.
Dean_Roddey@reddit
And they never will be, because the whole thing is predicated on huge compute resources, and that's something the large companies currently fighting to control the space need to remain true, or their huge investments have zero chance of paying off. It's another way to destroy the personal computing revolution and move control back to large companies.
shokuninstudio@reddit
They talk about all size models, local and cloud, on those subs. The largest local language models are up there with the big cloud based models but they require an expensive workstation.
lambertb@reddit
Kurzweil’s predictions have been surprisingly accurate, as have Moore’s Law type predictions about chips. If you don’t think software development is currently undergoing an epochal shift, I don’t know what to tell you. The biggest software development organizations in the world publicly say that it is, and you can see them shipping improvements at an increasing rate. Not sure what evidence would convince you.
zambizzi@reddit
I scroll right on by. It's noise, at this point. The market will crash, the correction will shake out the malinvestment, and we'll continue to write code, in high demand.
drumnation@reddit
lol you’re not worried but you drop that you aren’t using the best ai system. Maybe you’d be more worried if you used better quality tools?
JamesTiberiusCrunk@reddit
Was really prepared for this to be obvious ChatGPT output.
The Problem
Why I'm not worried:
The Future:
Ok-Tie545@reddit
You’re absolutely right!
ThisSaysNothing@reddit
I think there might be a future, some decades from now, where the workflow will look like this:
1. A human and an LLM work together to produce a formal specification for a program.
2. The LLM uses a theorem prover to iteratively code up an implementation of the formal specification that is mathematically proven to be right.
3. Humans test the program to see if it actually satisfies their needs and check the formal specification for flaws.
In short: I think it is possible that programming will mean producing and checking formal specifications in the far future.
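For anyone who hasn't seen one, here's roughly what the smallest possible version of "a spec with a machine-checked proof" looks like in Lean (a toy property only; real specifications are vastly larger, and whether LLMs can drive a prover at that scale is the open question):

```lean
-- A toy "formal specification": a function plus a machine-checked proof
-- that it satisfies a stated property. Illustrative only.
def double (n : Nat) : Nat := n + n

-- Specification: the result of `double` is always even.
theorem double_is_even (n : Nat) : ∃ k, double n = 2 * k := by
  refine ⟨n, ?_⟩
  unfold double
  omega
```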
97689456489564@reddit
This basically worked in 2024. You don't need decades.
ThisSaysNothing@reddit
Well, yes. The fact that it is already possible is part of what lets me think that it is a realistic path to take. I also think that we are just at the start of leveraging theorem provers in general and using them to prove the correctness of some implementation against a formal specification specifically.
There is a lack of knowledge on how to effectively work with formal specifications. Perhaps it is possible to build tools to help with producing and evaluating formal specifications. Perhaps we could move towards reducing incidental complexity in our systems in a way that would let mathematical reasoning scale better.
My main point is that theorem provers could eliminate the core flaw of LLMs, namely that they are unreliable and hallucinate, and that both technologies are only at the start of their development.
AdInner239@reddit
AI is for software engineers what the calculator was for mathematicians. It did not replace them, it just made them way more efficient.
ehansen@reddit
AI is only going to kill the jobs of those who can't think critically, or who rely on AI to do their job for them. It's why it will be rough once us senior devs start leaving.
AtmosphereThink1469@reddit
I'll venture an analysis (these knobs are sketched in code below):
- If I don't give it the right context, the output is poor or it invents unexpected features.
- Sometimes, if I forget to tell it that a certain feature doesn't need exposed APIs, it builds them anyway, and if you don't notice, you've got a security hole.
- If I don't do several iterations, the output is poor.
- If I don't break the problem down into lists of microtasks, the output is poor.
- If the dataset is poor, the quality is poor.
- If the training is poor, the output is poor.
- If the chunking strategy is poor, the output is poor.
- If the retriever isn't built properly, the output is poor.
I could go on, but I'll stop here.
So, okay, AI is a very innovative tool, but all the data preparation, prompting, review, iterations to correct errors, etc., is human. So I'd say that currently 80% of the work is human. How exactly would it replace us?
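To make that concrete, here's a minimal sketch of the chunk-and-retrieve pipeline those bullet points describe; every knob in it (chunk size, similarity measure, how many chunks to keep, the prompt wording) is a human decision, and the names are just placeholders:

```python
# Minimal sketch of a chunk-and-retrieve pipeline. The "embedding" is just
# word overlap so the example runs on its own; real systems use learned
# embeddings, but the human-tuned knobs are the same.

def chunk(document: str, size: int = 200) -> list[str]:
    # chunking strategy: naive fixed-size split; a poor choice hurts retrieval
    return [document[i:i + size] for i in range(0, len(document), size)]

def similarity(a: str, b: str) -> float:
    # toy similarity: Jaccard overlap of words
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def retrieve(query: str, chunks: list[str], top_k: int = 3) -> list[str]:
    # retriever: rank chunks against the query, keep the best few
    return sorted(chunks, key=lambda c: -similarity(query, c))[:top_k]

def build_prompt(query: str, document: str) -> str:
    context = "\n---\n".join(retrieve(query, chunk(document)))
    return f"Using only this context:\n{context}\n\nAnswer the question: {query}"

# Everything up to here is human-designed; the model call that consumes the
# prompt is the only step that isn't, and its output still needs review.
print(build_prompt("what does the retriever do?", "some long internal doc " * 40))
```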
levodelellis@reddit
AI is a useful tool? I mean, sure, it's better than google when you want the name of a function. But I was told people use it for something completely different, and I have never seen anything produced beyond a toy in an alpha state
NuclearVII@reddit
Honestly, this.
I'm getting real sick and tired of the "AI can be a useful tool, but..." rhetoric. As far as I can work out, nothing in the literature can show that this 8 trillion dollar industry is able to produce anything of value.
The "AI is a useful tool, you gotta use it right" crap is on the same level as "Bitcoin is a store of value".
NickHalfBlood@reddit
I was engineering interfaces, data pipelines, and serving them reliably for humans.
Now I am doing it for agents and other protocols. There is more work now, and complicated work that hasn't been done before in a standard way, so LLMs can barely autocomplete it without fking up.
Like, I get that you can type "list down top trending inventory of my items" and AI will send your natural-language query to my systems in a structured way. But my system has to be even more structured and secure now. Earlier it would just be a "sort by trending" option in the UI, and it would work the same way.
Not that there aren't other benefits of AI, but don't shove AI down everyone's throat.
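A rough sketch of what "more structured and secure" means on my side: instead of free text, the agent has to hand my system something it can validate field by field (the names below are hypothetical, not a real API):

```python
from dataclasses import dataclass

# Hypothetical structured query an agent must produce instead of free text.
ALLOWED_SORTS = {"trending", "price", "newest"}

@dataclass
class InventoryQuery:
    sort_by: str = "trending"
    limit: int = 10

    def validate(self) -> None:
        # reject anything the old "sort by trending" UI button could never send
        if self.sort_by not in ALLOWED_SORTS:
            raise ValueError(f"unsupported sort: {self.sort_by}")
        if not 1 <= self.limit <= 100:
            raise ValueError("limit out of range")

# "list down top trending inventory of my items" should end up as:
q = InventoryQuery(sort_by="trending", limit=10)
q.validate()  # raises if the agent produced something the system can't serve
```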
bufalloo@reddit
Unfortunately it might be the perception and promise of AI productivity takeoff that leads companies to layoff their engineering orgs, even if the engineers were propping everything up. One tough reality is that software usually doesn't need to be great. It just needs to be good enough, so many businesses can scrape by with functional slop.
I guess it's up to us as programmers to shape our reality going forward. Can we forge our own path where we can build better than just 'good enough'?
torn-ainbow@reddit
I've worked for clients for decades. Something that has always been true is that people who are buying development always want more than their budgets will allow.
So I don't think it will be as simple as AI shrinking budgets and killing jobs. Scopes will rise to meet the budgets, and devs will be expected to deliver more in the same time. That effect will counter and cushion the efficiency effect, where we could otherwise deliver the same for less.
gjosifov@reddit
there is one thing that most decision makers miss
ZIRP is over and AI companies have to make money
for most companies OSS is free lunch and they are addicted to it
for them AI in the current form is more or less the same thing - free or very cheap lunch
and because AI is so expensive and money isn't cheap any more, AI companies will have to raise their prices, which will lead to cancellations and kill the AI companies in the process
a good example is the db market - if Oracle/Microsoft databases were cheap, it would be a no-brainer to use them
but instead many companies use Oracle/Microsoft databases only for very critical data, because they are expensive
captainAwesomePants@reddit
It is totally possible now for someone with only a little bit of programming knowledge to use AI to put together a custom program that does something they need and works well enough to use, and with AI they can get it done in hours or even minutes. And that's wonderful.
But also, ain't nobody building and selling solid apps out there without programmers involved.
red75prime@reddit
Which AIs? Transformer-based LLMs? LSTM-based LLMs? VLMs? LMMs? Is it a shortcoming of a specific architecture? Specific training methods? How do you know they are unable to "truly understand"?
akirodic@reddit
I'm 100% with you, but let's be honest. A lot of programming tasks that used to be the kind of work a junior would do can now be achieved with AI in minutes and reviewed/modified by someone experienced. So we need to adopt workflows that take advantage of this yet give beginners a path forward beyond vibe coding.
zwermp@reddit
You aren't gonna make it.
sealsBclubbin@reddit
It’s simply a new tool to add to the toolset, although LLMs will force us to change the way we work. We won’t have to spend as much time coding and can get an AI to do most of it. The real pain will come at code review time haha
ZogemWho@reddit
I agree. While I’ve been retired since 2019, today I’d use the tools to do the mundane stuff and get closer to the larger objective. That larger objective takes out-of-the-box thinking to solve a problem in a unique way, and AI, in its current state, can’t create those unique solutions. I’m talking about the kind of creativity that can lead to a patent. An LLM can’t create anything new; it just regurgitates what’s been done.
headykruger@reddit
The domain alone is enough to tell you the bias. Also, who uses Copilot? That’s enough not to take this seriously.
cottonycloud@reddit
I would not take the AI killing the industry predictions seriously. Maybe I’ll worry about it when I get laid off and have trouble finding a job.
Beautiful_Dragonfly9@reddit
Claude Code is amazing. It makes a lot of things possible that just weren’t possible for me a mere few months ago. I get to micro-manage several terminal sessions, check their outputs, and chain them. Still learning how to do it effectively, but if I know what I’m building, it’s great.
If I need to figure out what I need to build, I still have to face that myself. Interesting times ahead of us, for sure.
And it’s pretty amazing at non-coding tasks. It’s the first time I see a glimpse of AGI in all of this AI train. Not to be a doomer, but I don’t think AI will kill the jobs. It will not make the jobs easier. It will, however, make the competition insane. People using AI effectively will gain an edge you can’t compete with. Knowing things has never been more valuable than it is now. Knowledge still matters, and matters much more. Syntax is cheap; what matters is knowledge and experience in building, reading large codebases quickly, comprehension, acquiring new skills.