AI doom and gloom vs. actual developer experience
Posted by Any_Rip_388@reddit | ExperiencedDevs | 195 comments
Saw a NY Times headline this morning that prompted this post, and it's something I've been thinking about a lot lately. Sorry in advance for the paywall; it's another article with an AI researcher scared at the rate of progress in AI, claiming it's going to replace developers by 2027/2028, etc.
Personally, I've gone through a range of emotions since ChatGPT came out in 2022, from total doom and gloom to, currently, being quite sceptical of the tools, and I say this as someone who uses them daily. I've come to the conclusion that LLMs are effectively just the next iteration of the search engine and a better autocomplete. They often let me retrieve the information I'm looking for faster than Googling, they're a great rubber duck, etc. Maybe I'm naive, but I fail to see how LLMs will get much better from here, having consumed all of the publicly available data on the internet. It seems like LLM progress has logarithmically capped out until the next AI architecture breakthrough.
Agent mode is cool for toy apps and personal projects; I used it recently to create a basic JS web app as someone who is not a frontend developer. But the key thing here is that quality was an afterthought for me; I just needed something that was 90% of the way there, quickly. Regarding my day job: toy apps are not enterprise-grade applications. I approach agent mode with a huge degree of scepticism at work, where things like cloud costs, performance, and security are very important and minor mistakes can be costly, both to the company and to my reputation.
So, I've been thinking a lot lately: where is the disconnect between AI doomers and developers who are skeptical of the tools? Is every AI doom comment by a CEO/researcher just more marketing BS to please investors? On the other side of the coin, you do have some people, like the GitHub CEO (seems like a great guy as far as CEOs go), claiming that developers will be more in demand in the future and learning to code will be even more essential, due to the volume of software/lines of code being maintained increasing exponentially. I tend to agree with this opinion.
There seems to be this huge emphasis on productivity gains from using LLMs, but how is that going to affect the quality of tech products? I think relying too heavily on AI is going to seriously decrease the quality of a product. At the end of the day, tech is all about products, and it feels like the age-old adage of 'quality over quantity' rings true here. Additionally, behind every tech product are thousands, or hundreds of thousands, of human decisions, and I can't imagine delegating those decisions to a system that can't critically think, can't assume responsibility, etc. Anyone working in the field knows that coding is only a fraction of a developer's job.
Lastly, stepping outside of tech: other industries still rely heavily on Excel, and some, such as banking and healthcare, still do literal paperwork (pretty sure email was supposed to kill paperwork 30 years ago). At the end of the day I'm comforted by the fact that the world really doesn't change as quickly as Silicon Valley would have you think.
MathmoKiwi@reddit
I agree. Now that models have been trained on basically all of the good content on the internet, where is there to go? Especially as any future content online will likely be polluted.
There are efforts to train on synthetic data, but it has issues.
I agree that the next big 10x leap forward will require the next AI architecture breakthrough, something as big and game-changing as AlphaGo was. When will that happen? Might be in five years' time, or it might be in fifty.
Yes, culture changes far far slower than tech does.
For instance, just because email and online banking exist doesn't mean fax machines and bike couriers automatically die out overnight! Yes, it eventually happens, but it often takes longer to fully happen than people would predict.
Waterwoo@reddit
I really don't see how training on synthetic data will help much at this point. If you're still at the "trying to teach this neural net how language works and is structured" stage, sure, it would be very useful, except that there was more than enough real publicly available text to do that. If you're trying to actually teach it skills, I'm not seeing the value of giving it a billion examples representing only what existing AI already understands well enough to generate.
Tacos314@reddit
The productivity gains of LLMs are just that. One thing I find funny: I have better luck using LLMs for business tasks. If anything, non-developers need to be worried, especially anyone who creates the same reports/documents/outputs over and over again.
Waterwoo@reddit
Yep... GPT-4.1 in Copilot struggled to write basic unit test coverage for a React component for me yesterday. Meanwhile, if you told me tomorrow that all of the upper management and HR/Ops emails I've received for the past 2 years at work were written by an LLM, I wouldn't be the least bit surprised.
Right-Tomatillo-6830@reddit
Meanwhile:
Business guru: this is way better at writing code than doing business tasks..
nullpotato@reddit
It is way better at helping me fix the tone of an email than writing any non-trivial code.
Trevor_GoodchiId@reddit
On March 10th Dario Amodei said 90% of code will be AI generated in 3-6 months. September 10th 2025, 4 months from now. I have a todo set.
This mania got out of hand.
Null_Pointer_23@reddit
Even if that is true, what does it actually mean? Before AI, the majority of all code was most likely written by IDE/LSP autocomplete. If it's humans directing, correcting, editing, etc., then the 90% number doesn't mean much, imo.
Trevor_GoodchiId@reddit
He explicitly said "written by AI". I assume it means written by AI.
https://www.businessinsider.com/anthropic-ceo-ai-90-percent-code-3-to-6-months-2025-3
Waterwoo@reddit
He's laughably wrong then. Reminds me of this https://www.complex.com/life/a/kari-paul/women-will-have-more-sex-robots-than-men-2025
I mean if you count vibrators as a robot I guess that might be true.
Trevor_GoodchiId@reddit
100% of my code is coffee assisted.
Waterwoo@reddit
Well.. if you own a data center and script a bunch of LLMs to crank out useless boilerplate code 24/7 you could probably generate 90% of all code in the world by AI.
Whether it's any good or will ever be used for anything is a different story.
Far-Citron-722@reddit
You already see public companies making claims that 90+% of their code is "written with AI". Very easy to achieve: just mandate Cursor use across the company, and the adoption rate becomes the "code written with AI" rate, which is technically true.
Trevor_GoodchiId@reddit
Which public (I assume publicly traded) companies? YCombinator bros do claim that with not much to show.
fireblyxx@reddit
We all must suffer v0 now because CTOs desperately want it to build entire products in 10 minutes because one guy said he made a business that way.
The question at that point being: if anyone can just launch a service with a $20/mo v0 subscription, wouldn't a lot more people just do that, creating a shit ton of competition and devaluing the worth of any particular company?
Far-Citron-722@reddit
Yes, publicly traded. Instacart did in their latest earnings call. I checked the number; it is 87%, actually, I misremembered. "In Q1, 87% of our code was written with AI", to be precise, from the CEO's speech (she's moving to OpenAI soon and has always been a big believer in AI, so it's not the same as Bob Iger or Jamie Dimon making the same statement).
Bummykins@reddit
"with AI" is doing the heavy lifting there. Copilot autocompleted the last 30% of some lines in this PR? The whole feature was written "with AI"
Far-Citron-722@reddit
That's precisely my point. CEOs make loud proclamations, media gobbles it up, doom and gloom ensues.
People interviewing those CEOs do not have the expertise to push back on wild promises.
I would love for someone to ask Dario something like: How much time does a developer actually spend writing code? What is the biggest drain on developer productivity? How does an average codebase compare to the context window of the most powerful LLM?
Trevor_GoodchiId@reddit
https://www.supermarketnews.com/finance/instacart-goes-big-on-ai-stock-spikes-following-robust-q1
"developed with AI assistance"
creaturefeature16@reddit
Between Emmet, autocomplete and snippets, something other than my meager human hands have written at least 50% of the code before AI ever got into my IDE. It's a rather meaningless metric to me; generating code was always the easy part.
Any_Rip_388@reddit (OP)
2026: 90% of all code AI generated
2027: 100% of all code AI generated
2028: 25% of all code AI generated, code quality has gone to shit, senior devs called in to fix the mess
GoTeamLightningbolt@reddit
I will be worried about AI doom when companies switch to LLM bookkeeping. So far that hasn't caught on, for some reason.
GeneReddit123@reddit
I'll turn it on its head and say that the easiest way to get devs to enthusiastically support AI is allowing AI to manage fucking JIRA for you.
Let it track your work, GitHub commits, calendar meetings, etc., and ask you dev-friendly (rather than business-friendly) questions as needed, then automatically move tickets, tie up loose ends, and track your time for you. Not in the surveillance sense, but in the work-bookkeeping sense (with optional overrides, like devs can already do).
I don't want to have to answer the standup question of "what did you do yesterday" a single more time in my bloody life. Let AI keep our books for us, and let us actually do our jobs.
P.S. the same would apply to many other professions like doctors. People are afraid doctors use AI for diagnosis, whereas what doctors would really want is to have AI track patient inputs, paperwork, insurance submissions, prescription compilations (for the Dr. to only oversee as a medical professional for correctness, not as a bookkeeper), etc.
sol_in_vic_tus@reddit
The best use for LLMs is to do useless things that should not even be done in the first place, but at that point you really should just stop doing the useless things.
Waterwoo@reddit
True, but often stopping those things is a political battle the devs don't have the power to win, so "make this dumb machine bullshit its way through them" is the next best thing.
tolerablepartridge@reddit
That is a thing. Of course it's nowhere near being a full replacement for an accountant, but it definitely means you can do more with less headcount. This is used in production by many large companies.
doublesteakhead@reddit
Sorry this seems like a bookkeeping product with AI features, not an AI CPA
anonyuser415@reddit
Like who? That their site only mentions startups doesn't inspire confidence
Mysterious-Essay-860@reddit
Similarly, I'll worry when code quality starts trending upwards. After all, if AI can do my job, it should be fixing bugs, so we should see the number of bugs in code trending downwards.
Like... QA should be directly telling the AI to fix bugs, and it should do so, and then we have fewer bugs.
So, I wonder why that's not happening...
rco8786@reddit
Shouldn’t even need QA right? If AI was good it would just not write those bugs in the first place.
Sensanaty@reddit
We literally just fired every QA we had not even a week ago lol
"With AI, the engineers can do QA work faster than ever before!" Was the quote.
Mysterious-Essay-860@reddit
Well, I accept that we have a lot of legacy code to fix too
zoddrick@reddit
There are a few things preventing widespread usage of AI models in the day to day of most companies -
1) Costs - using models like Claude 3.7 Max and Gemini 2.5 Pro is expensive, especially at scale.
2) Red tape around what AI agents and tools can and cannot be used within a company.
Before we can hand AI to non-engineers, we need to get it into the hands of engineers first so they can become more efficient, which is going to be the first milestone of real usage inside the workplace.
Once we have done that then you will see the tooling support get better for non-technical people to utilize agents to fix problems without much hand holding from engineers.
LongUsername@reddit
Just start your prompt with "Don't write bugs" in the first place! /s
I literally asked an AI guy: "I keep trying to use ChatGPT to help me in the job, but it keeps hallucinating API calls and other things that look right, but when I dig in they don't exist or are wrong. How do I keep it from making things up?" and the answer I got was "did you tell it not to make stuff up?"
NuclearVII@reddit
And then, with his next breath: "these are really powerful tools that make me 5x as productive, you're just mad cause I'm gonna keep my job"
xt1nct@reddit
Sounds like he is on his way to being an AI consultant. $300 an hour to help companies switch to AI and fire all their staff.
After collecting his checks he will disappear and move onto the next fool.
Efficient_Sector_870@reddit
That guy is worth every penny
MoreRespectForQA@reddit
I'm not sure how you'd even measure this.
Mysterious-Essay-860@reddit
Generally in how many times per day I despair at an app on my phone, as a good starting point :D
creaturefeature16@reddit
And when people start hiring "vibe accountants".
Hziak@reddit
This is my perspective, as well. Until jobs that AI can actually replace completely, such as analysts, managers, accountants and lawyers - basically anything that takes data and outputs a predictable solution to a puzzle, answer to a question or completed math equation - get replaced, it’s just going to be a phase. AI will continue to be pushed irresponsibly quickly by companies who don’t have the experience or knowledge to utilize it correctly. It will make a huge mess of everything. Developers will be in demand again because they’re the only ones who can untangle the mess.
All I'm seeing is people looking for short-term gains by replacing payroll expenses with AI, and like all short-term thinking, it'll come back to haunt them. That, and gullible managers who think it's more important to look cool and trendy than to actually evaluate a solution to a problem you don't actually have. Nobody in the greater business world is actually serious about AI, as evidenced by them all still having their jobs…
Any_Rip_388@reddit (OP)
lol, yeah good point
Mysterious-Essay-860@reddit
You've come to broadly the same conclusion I did. I think it helps that I've been writing code since slightly after punchcards, so I've seen several "We're going to make engineers obsolete" technologies already.
My general comment on AI is it can't do an engineer's job, but it can play one on TV. Which is to say, it does what people _think_ engineers do, which is why lots of people expect engineers to be replaced, but actually it just eliminates a bunch of drudge work.
A lot of the amazing results from AI coding turn out to be someone feeding in a crazy number of prompts until one worked, then going "Look, it coded this amazing thing" and hiding that they can only get it to do exactly that one thing.
AI will focus an engineer's role into thinking about workflows, customer experience, resiliency, managing complexity etc, and less on the specifics of syntax, but it won't replace us.
All of that said, it's a hell of a rough time to be a junior right now, and sympathies to anyone starting their career currently. I believe it will change, but I know things suck currently.
ImYoric@reddit
Come on, COBOL is going to make engineers obsolete any day now!
(also PL2, LISP, SQL, Prolog, 4th generation languages, etc.)
Mysterious-Essay-860@reddit
Can you imagine training someone to write C using vim, then letting them use Python on a modern IDE? The leaps forward are already huge, but we forget they happened.
ExtremeAcceptable289@reddit
Pluginless vim or pluginned? I still main pluginned vim lol
ToughAd4902@reddit
How is this upvoted? Is this actually an ExperiencedDevs sub? The world literally runs on the software still being written, and maintained, by C developers who very often use vim. What
Mysterious-Essay-860@reddit
I'm not saying it can't be done, I'm saying that it's a lot lot easier to write Python in a modern IDE than it is to write C in vim.
Although why would you write C in vim, I say as someone who used to write C in vim? Are there lots of people working on systems where it's infeasible to edit the code in something friendlier?
MathmoKiwi@reddit
Can you imagine programming a computer in machine code...
https://www.youtube.com/watch?v=KsiwCcVvJ6A&ab_channel=LinusTechTips
ef4@reddit
Exactly, and after every one of those huge leaps in productivity it unlocked even greater demand for skilled programmers.
gcalli@reddit
I still like my vim. IntelliJ only for Java
quantum-fitness@reddit
The thing they don't get is that developers are engineers. Even if LLMs gave a 100× productivity boost, software would just get 100× more fancy.
It's just like how higher-level languages allowed people to do things that were impossible before.
BigDieselPower@reddit
Not only that but AI is very non-deterministic, so that one prompt that particular developer got to work will not work for others because the output will differ even if they follow the exact same chain of prompts.
Boom9001@reddit
There are always people assuming increasing worker efficiency will mean companies need fewer workers. Instead it always just results in increased output.
AI is that right now. Maybe it can become a huge generational leap like computing or electricity, where the entire makeup of the job market changes. But right now it's more like the first GUIs imo, huge productivity increases in a short time across many industries. But it's just making you do more faster, not entirely changing the work people do.
MathmoKiwi@reddit
I started much later than you, but I've still been around long enough to have programmed an Altair 8800 at uni (more as a learning experience, a uni lab, than for anything practical!) and to have used punchcards myself (as scrap paper and bookmarks, ha! There were tonnes of them lying around for anybody to grab and use).
And I too see these similarities and phases being repeated.
https://x.com/AStratelates/status/1923771565595857252
This is why it's so very important to learn the history of your field! History often repeats itself, or at the very least heavily influences the future.
prisencotech@reddit
AI is most useful in the hands of a skilled developer, but saying it'll get rid of developers is like saying a high tech state of the art band saw will make master carpenters obsolete.
angrathias@reddit
Or the way mechanisation got rid of farmers…oh wait, society went from 90% farming related to 2%.
Good thing everyone can just level up to the next set of jobs that AI cannot yet do, academic level research, oh…
PoopsCodeAllTheTime@reddit
The farming jobs that got done away with were the mindless mechanical work.
If you have the knowledge to harvest crops at low cost, you can literally turn a profit today, even with aeroponics or whatever. The knowledge of farming is still very scarce, even among farmers; that's why GMO seeds that can't be reproduced and Roundup (cancer-inducing glyphosate) have gained so much market share, to everyone's detriment.
Nikulover@reddit
Can you say that 10 years from now? Or in your opinion AI will never get to that level at all?
Mysterious-Essay-860@reddit
If AI gets to the point it is breaking down tasks, analyzing what the customer wants and turning it into something which can be engineered, considering failure cases at scale, etc. then it will probably also be replacing basically everyone else.
Which is to say I think we'll have bigger problems by that point.
Nikulover@reddit
Not extracting requirements, but how about just reading the user stories created by the PO/BA and writing code from that? The architects can also draw up the design and the AI just executes it. I mean, I've been at companies where devs only do that.
Mysterious-Essay-860@reddit
So the answer is "This varies"
I get user stories like "The customer wants to deploy relatively arbitrary code, in any location we support, and automatically gain monitoring, database backup & restore, alerting". That's off at the other extreme, but it's hopelessly vague for an AI to have any chance of implementing.
That said, the line between PM and engineer may blur somewhat more.
Nikulover@reddit
Man, we get stories like that and that's going straight to retro for "things that went wrong" by being very vague.
I don't know. I work on a trading platform at a big bank. We have well-defined roles. We get clear stories most of the time, with both business and technical acceptance criteria. The biggest complexity for us at the bank is all the integration we need to get our data, but our technical leads create technical documents to help us make sense of it. I just think what I do day to day is relatively simple, and I imagine AI could do it in the future. But I'm not sure. Hopefully I'm wrong.
Mysterious-Essay-860@reddit
I'm a technical lead, so I do get much much less well specified tasks and then partly I break them down with Product to go "Is this what you intended?" while also building technical documents.
In this case I think Product will realize they don't actually want to allocate enough engineers to ship this any time soon and we'll cut a lot of the requirements.
I think AI will just move us all further away from the raw code, though. I'll write higher level technical documents while seniors write documents at the level I used to, and they give them to mid/juniors who work with an AI to build and test it.
angrathias@reddit
I have the joy of working with large automotive manufacturers with data, and I don’t even get requirements, many of them are literally like ‘tell us what we need’ smh
jackjackpiggie@reddit
Probably one of the best takes I’ve read in a while. Well said.
codemuncher@reddit
Regarding “the specifics of syntax” - that’s not even the real problems with engineering and building systems anyways!
It’s literally automating the easiest thing, which leaves… everything else.
PragmaticBoredom@reddit
There was a Substack article from an unemployed developer about the “Great Displacement is already happening”. He blamed AI for his inability to get a job.
It got hundreds of comments, spent all day on the front page of Hacker News, and has spread all over Reddit.
But when people started trying to help him by reviewing his resume and portfolio (which he was sharing everywhere to try to get a job) it was very obvious where the problem was: His resume was in a weird format and had a big list of skills without context and his “full stack web developer” portfolio looked like something I’d expect high school kids to make in their HTML class (I’m not exaggerating, it was a black background with some centered yellow text in a quirky font).
The sad part is I do volunteer resume reviews in another forum and I see this scenario over and over again: People with terrible resumes blaming AI for their inability to get callbacks.
I think the developer market is correcting overall after a decade+ run of companies hiring everyone who could write any code at all and dragging their feet on firing anyone. I also think this is coming at the same time that AI has arrived, which has made it easy to blame AI for everything.
WatchStoredInAss@reddit
Bingo!
pootietangus@reddit
The doom and gloom is not about the current state of the tools, it’s about the vision for our world that is put forth by SV. It’s about robots replacing people. The end of work. UBI. Productivity gains for elite programmers and, as for the rest of us, well, who knows what will happen. And the tools aren’t anywhere close to replacing people right now, but the rate of progress is remarkable, and we can all see it happening in front of our eyes. And we’re all participating (or at least complicit) in this reshaping of our world into something that is, in the way it’s been presented us, ugly.
knowitallz@reddit
AI helps developers. It doesn't replace them.
EasyPain6771@reddit
Oh god thinking about the mountain of shit code developers are going to have to maintain.
UnnamedBoz@reddit
My colleague is writing SwiftUI code and making lots of basic mistakes that end up hurting performance. This is an iOS dev with 10 years' experience, but little to no knowledge of the framework.
It's astounding how bad it is. Fast? Good? Absolutely not; it's embarrassing for me to see, and I lost a bit of respect for him. More developers than I'd have thought have a Dunning-Kruger situation with LLMs, simply because they don't understand the basics of a framework.
vigoritor@reddit
Unfortunately some devs try using it to solve stuff they wouldn't be capable of otherwise. It should speed up your progress, not replace knowledge you don't possess. I also wonder if people now apply for jobs they wouldn't have applied for otherwise, because they think AI can close the gap. A lot of job descriptions sort of lead you there by asking things like "how do you use AI to boost productivity?"
PoopsCodeAllTheTime@reddit
The funny thing is that even today, sometimes, you still need to read the man pages, or you end up wasting so much time on the modern search engines
ImYoric@reddit
As a benchmark, I'm currently on a quest to find meaningful FOSS contributions made with Generative AI.
So far, no luck.
SporksInjected@reddit
OpenHands Commits
All of those are 100% AI generated. 2,037 commits this year.
ImYoric@reddit
I'm currently looking at https://github.com/All-Hands-AI/OpenHands/pull/8310 (it's one of the most recent PRs, updated a few days ago).
Here's one of the human comments, picked randomly:
I see 5 human-issued comments along these lines (the one above being one of the longest). As someone who has mentored 100+ fresh open-source contributors, it strikes me as handholding a complete newbie, the kind of thing that burns out FOSS developers fairly quickly.
Would you concur with this evaluation?
SporksInjected@reddit
It’s entirely possible that super verbose comment is AI written. It could also be someone very into prompting too though. I agree, I wouldn’t have that much patience.
If you’re writing in Python or JavaScript though, current gen LLMs are really sufficient as an assistant. I would highly recommend just trying something like Claude Code or Codex with a decent model (o4-mini or Sonnet 3.7) to see for yourself.
ImYoric@reddit
Yeah, I should really try one of the recent versions.
I don't doubt that GenAI can be very useful in some scenarios, I'm just a bit skeptical of doom-and-gloom prophecies/hype that suggest that GenAI can already replace developers.
SporksInjected@reddit
Yeah I mean there’s still definitely a requirement for a human in the loop right now. I will say though, there are definitely teams that would have asked for contractor help 3-5 years ago and don’t have to now. I’ve observed this myself a few times already but it’s with people that were already developers just moving to a new stack.
Even with that being the case, I don’t think people are going to decrease the amount of devs, they’ll just expect things to be done more quickly.
ImYoric@reddit
Do I understand correctly that all these commits are to repos that belong to the organization selling the agent, right?
(note that this does not mean that the commits are bad, just it's something to take with a pinch of salt)
SporksInjected@reddit
Yes that one is. The agent tool itself is open source though so there’s likely been lots of projects to use it. I was trying to find the Devlo agent since it’s centralized. It may give a bigger picture of how many people are using it.
pl487@reddit
Huh? There are probably more contributions being made right now with it than without it, meaningful or otherwise.
ImYoric@reddit
Can you show me a few?
pl487@reddit
The code is indistinguishable from traditionally written code. The decision to make the change was made by the human, the human made the commit and pushed it, but the contribution was made with generative AI.
teslas_love_pigeon@reddit
So you can't show anything then?
CoochieCoochieKu@reddit
Just read the reports from Stack Overflow, Hacker News, and Copilot, with all the statistics; no need to be so pedantic.
teslas_love_pigeon@reddit
It's not being pedantic; it's literally asking them to provide an example. Failing to do something that basic doesn't help their argument.
box_of_hornets@reddit
Seems like a bad faith request since it can't be proven though
teslas_love_pigeon@reddit
Yes, the person making the claims is acting in bad faith. Failure to prove claims is bad faith as well.
box_of_hornets@reddit
But there's no way to prove what code is written by LLMs. The best that can be shown is things like the StackOverflow survey that shows 70% of developers are using AI in their workflow. It is already ubiquitous and it is more likely that a very sizeable percentage of open source contributions are assisted by AI than there being no examples of that
SporksInjected@reddit
He’s definitely right. More devs use GitHub copilot than don’t.
teslas_love_pigeon@reddit
Bold statements require bold statistics, please provide some.
SporksInjected@reddit
It must be nice to just demand people do things for you. Anyway,
GitHub developer survey 2024:
“AI adoption is growing. The use of AI tools like GitHub Copilot is on the rise, with 73% of open source respondents reporting they use these tools for coding or documentation.”
The number is going to go up. We need to get used to that, especially when Claude can run unattended for $2/hr.
ImYoric@reddit
If you're speaking of Copilot-level, sure. But this entire post is about "AI doom", in the sense of "AI will take over our jobs". Copilot is something that, on good days, predicts the code you're about to write and saves you keystrokes – not something that threatens to take over any job, and in particular it's not what I would describe as "meaningful FOSS contributions made with Generative AI".
I'm looking for something a bit more "AI doom"-compatible, if you have examples at hand.
pl487@reddit
I'm talking about code written by conversing with an AI agent, with little to no hand-editing of the results, not just auto-complete. Code written this way is in every active open source project by now.
ImYoric@reddit
Do you have any example?
Kuinox@reddit
I did but it seems like you ignored my response.
ImYoric@reddit
I'm actually looking at your commit right now :)
Kuinox@reddit
btw I don't really agree with the person you're arguing with; LLMs aren't that widespread yet in OSS contributions.
Maybe if you count line autocomplete, or someone asking a chatbot a question while working on a PR, the numbers may be close, but to get any decent code it's hard to just ask the agent to do it.
Kuinox@reddit
Well, I really wanted to get magic-trace working. But perf threw a mysterious error even when I was using the correct flags. I emailed the Intel engineer who maintains intel_pt in Linux perf, which put me on the track that the dotnet JIT needed to understand JITDUMP_FLAGS_ARCH_TIMESTAMP (you can see what it does from my PR). Mind that, apart from the flag name, I had zero knowledge of these things and used an LLM to learn them.
For writing the code itself, I first tried Cursor without success. I wanted a minimal code change; Cursor only wrote a convoluted mess. I retried a few days later with the "hot new LLM", and with a bit of micromanagement through the chat, I got this diff.
Now I'm writing my own viewer, because magic-trace has tons of bugs for my use case and I don't want to learn OCaml right now.
FLOGGINGMYHOG@reddit
https://github.com/ggml-org/llama.cpp/pull/11453
Kuinox@reddit
here is mine: https://github.com/dotnet/runtime/pull/111359
Damaniel2@reddit
One of the main 'contributions' being made to FOSS projects are huge piles of useless security vulnerability reports being made by people feeding open source code into AI tools with prompts to find security issues. The 'issues' discovered are always non-issues, but people continue to clog projects with tons of these reports, especially with projects that pay bug bounties.
ICanHazTehCookie@reddit
curl had to add a verification process because those AI reports were, in their words, effectively DDoSing their capacity lol
TimurHu@reddit
Let me share an anecdote. I work on an open source graphics driver. Our project recently received an issue report where the person who reported the bug included an AI analysis of the problem.
The AI wrote a very technical explanation that on the surface seemed reasonable but when I actually started to look into fixing the bug, it turned out to be completely wrong.
Then we had some further conversation with the person who reported the issue and he told us that he actually "vibe coded" some shader compiler optimizations for us and shared some code and some explanations from the AI.
They were all wrong:
In conclusion, you can't "vibe code" a shader compiler.
Coincidentally, several open source projects recently added a policy against accepting AI generated code.
thisismyfavoritename@reddit
it has begun
Western_Objective209@reddit
I think people using tools like Cursor effectively are not just letting the LLM do everything. They have it write a bunch of code, review it, and make changes. I think open source projects are coming out a lot faster than they used to, but even if people are using Cursor/ChatGPT to generate a lot of code, they are most likely not going to advertise it.
MoreRespectForQA@reddit
I'm fresh out of those but I've got a stack as high as my arm of take home projects made with generative AI, some of which compile.
IngresABF@reddit
There is a possibility that AI could help with quality, eventually.
If, some years from now, energy is far cheaper and more ubiquitous, you could have agents running app interactions through all possible scenarios, similar to what we do with weather models now, or to the Purify tool we used to use for detecting buffer overflows in C many years ago.
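(A small-scale, present-day cousin of that idea is property-based testing. A minimal sketch, assuming Python and the hypothesis library; apply_discount is a made-up function under test, not anything from this thread:)

```python
# Minimal sketch: property-based testing as a small-scale version of
# "run the interaction through all possible scenarios".
from hypothesis import given, strategies as st

def apply_discount(price_cents: int, percent: int) -> int:
    """Return the discounted price in cents, never negative."""
    return max(0, price_cents - price_cents * percent // 100)

# hypothesis generates hundreds of (price, percent) scenarios and
# checks that the invariants below hold for every one of them.
@given(
    price_cents=st.integers(min_value=0, max_value=10**9),
    percent=st.integers(min_value=0, max_value=100),
)
def test_discount_invariants(price_cents, percent):
    result = apply_discount(price_cents, percent)
    assert 0 <= result <= price_cents
```

Scaling that from one function's invariants up to whole-app interactions is exactly the compute-hungry part the comment is pointing at.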
The_Startup_CTO@reddit
For me and others I've spoken to, agentic AI has sped up development of high-quality code by almost a factor of ten. It took me a good month of focusing on just learning how to do this, but I would say that's where the "danger" comes from: I'm now building a company where I will hire significantly fewer developers than I would have for a similar company just a year ago.
So one interesting question could be: What would you need to do differently to also get 10x dev speed increases with agentic coding in high quality? Some parts are out of your control: In a shit codebase, AI will just create even more shit. And you can't just un-shittify a codebase in one night (at least with current AI).
The main mistake I made initially was giving the AI chunks that were too big, which led me to review less thoroughly (as usual: the bigger the PR, the worse the review). "Create feature X" would get me to the feature faster initially, but the code it created would not let me grow it past the prototyping stage.
For me, TDD was extremely helpful here, mainly because it forces me to develop in small units.
Sure, this is slower development than in prototyping mode. But it's still 10x faster than manual coding, and gives the same, if not better, quality.
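(For illustration, the "small units, test first" loop described here might look like the following; a minimal sketch assuming pytest, with slugify as a hypothetical unit handed to the agent:)

```python
# Minimal sketch of the TDD loop described above, assuming pytest;
# `slugify` is a hypothetical unit, not code from the comment.
import re

# Step 1: write a failing test, small enough to review at a glance.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello, World!") == "hello-world"

# Step 2: have the agent implement just enough to make the test pass,
# then review the small diff before starting the next unit.
def slugify(title: str) -> str:
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)
```

The point isn't the function; it's that each unit stays small enough for the human review step to remain honest.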
AurigaA@reddit
So if you claim 10x dev speed you’re saying you can now do a years worth of development in about a month plus one week, and in a single year you can do 10 years worth of development? We should be hearing about you taking over the software industry very soon I imagine. 😂
/s
The_Startup_CTO@reddit
Yeah, it's quite an amazing speed-up. Though there have been more-than-10x speed differences between devs and teams before, so it's not as big as it sounds, and for many companies it's not the first investment they should choose.
austospumanto@reddit
Yours is the first well-informed comment I’ve found in this thread. I usually find /r/ExperiencedDevs to be a bastion of great discussion. But I guess no one is immune to FUD causing head-in-the-sand syndrome.
I lead an engineering team at a decacorn. This stuff is real. 75%+ of devs use Cursor or Claude Code daily. We use them in a high touch manner, often executing on several Linear tickets at once, touching base with the agents when they conclude small chunks of work.
For anyone reading this: try Claude Code in earnest. Try to tackle a few tickets with it. I promise you, at some point it’ll click and you’ll ‘get it’.
I would also highly recommend you try Gemini 2.5 Pro in Google AI Studio. Record a screen recording video (eg in QuickTime) of you using some website and giving a voiceover on a feature you’d like. Upload that video to Gemini. Tell it to build your idea as a single html file. Open that file in Chrome. It can reliably churn out interactive prototypes. It can take in around 50k lines of code as context. Experiment with it. Enable Grounding with Google Search.
It’s important to understand where we are at with this technology. The other top comments in this thread are verifiably incorrect.
The_Startup_CTO@reddit
Yeah, many devs here are afraid of AI as it threatens their jobs. In German we have this saying (from a great German poet): Weil nicht sein kann, was nicht sein darf (in this context roughly translating to "This would be bad for me, so it surely isn't true"). You can also see it from the downvotes on my post, without a single comment actually contradicting anything I wrote.
flerchin@reddit
"It is difficult to get a man to understand something when his salary depends upon him not understanding it"
--Upton Sinclair
I totally share your exact observations of AI tools. They're a useful tool in my toolbox, but they require me to apply even more brain because they're deeply flawed. I also might be blind to how much doom there could be.
Craiggles-@reddit
The only thing that bothers me is that leaders in the space are:
MagnetoManectric@reddit
My main set of contentions too. The boosters are all charlatans, and they don't really do much to hide it either. They're very open about their lack of moral scruples. Many are openly associating with a fascist regime.
It's a useful technology that's currently serving as the head of a doomsday cult comprised of some of the worst people imaginable, being pitched as a way to replace all the people who are compelled to work for them.
In a different, better socioeconomic system, LLMs would be an unalloyed good. But the structural issues with our society right now are so large that I just can't see them making our lives better. They'll simply be used as leverage against the value of labour, regardless of how capable they actually are of replacing us.
Repulsive-Hurry8172@reddit
It's also not just AI. All the recently hyped tech, like distributed ledgers and VR, can have targeted good uses. I can speak for VR personally: I play in it to work out with other people (it's just Zumba / workout dances in VRChat) and it has been beneficial for the players who do the same. But the techbros would rather use VR to sell you shit, keep you in their ecosystem, etc. Profit-only thinking.
Imagine what would have happened in the very early days of the internet if all the people pushing for it had thought of profit only.
MagnetoManectric@reddit
It's no coincidence that public research funding has only gone down and down since the times of Tim Berners-Lee. :(
I believe the early days of the WWW actually had strict prohibitions against using it for commercial purposes. People would go apoplectic over a single instance of advertising spam on their Usenet groups! In a way that seems quaint and kind of extreme now, but really, that kind of vigilance allowed the internet to grow strong roots as a force for good, even if only for a short while.
International_Lack92@reddit
Very well said
NuclearVII@reddit
Would that I had more updoots to give you.
abrandis@reddit
Technically they aren't stealing; they're reading it. Stealing would mean they were copying it verbatim... If you use that litmus test, then anyone in history who wrote a book, or listened to music, or saw a painting or other art form and then went on to create their own work based on that prior art, is also stealing... But don't fret: AI companies and their lawyers are already drafting contracts with most publishers to legally ingest data and give them a cut.
Craiggles-@reddit
Meta staff torrented nearly 82TB of pirated books for AI training — court records reveal copyright violations
This is a common problem for all major AI companies and not just for books.
abrandis@reddit
If they purchased the 82TB worth of books for training data would that change your mind?
Craiggles-@reddit
Change my mind in what way? That they are not morally bankrupt? It would definitely help their case at a minimum.
kaibee@reddit
yeah, that's why it's totally cool to shrink a JPEG by 1% and now you own the rights to it.
MrLyttleG@reddit
Musk being liar #1 on the first 2 points you mention and Trump being the dignified chief resister who aligns with the 3rd point
Theoretical-idealist@reddit
YES!!! You are cooking
franz_see@reddit
For me, LLMs are just like any other tool, like IDEs or code generators. They just help engineers translate thoughts into code much faster.
However, unless we keep code to a minimum, we're just generating more work to maintain.
Can we really use vibe coding internally? Maybe. Maybe not for customer-facing stuff, but maybe for internal tools used by ops, i.e. it would compete against no-code/low-code tools.
EmmitSan@reddit
You should look up the Gell-Mann amnesia effect
jenkinsleroi@reddit
The doomers are the ones who were never actually good at their jobs or never understood how things really work.
Unfortunately, that includes a lot of junior devs. Using an LLM can only get you so far if you don't understand how it works or what it generates.
tfandango@reddit
I’m waiting for the day when they decide writing prompts is too ambiguous and we should have some sort of special language where we can tell the computer what to do logically.
casey-primozic@reddit
Wait till you hear about Japan
letsbreakstuff@reddit
There's this project at work that's a nightmare to get set up to develop on. The documentation to set it up is complicated and points to other documents for other stuff that needs to be set up too. Sometimes those are out of date and link to updated copies. It takes some very careful reading to know the order of steps you have to do. You know, typical big-company enterprise stuff. It's gonna be nice when there's an on-prem AI that is aware of all that documentation and new users can just ask their IDE how to get set up.
TheNewOP@reddit
Regarding that interview, I'm not quite sure how much I can respect Kokotajlo's opinion. You see, he was a philosophy PhD candidate. He worked on governance at OpenAI. He was not an ML researcher, nor does he have the math credentials to even pass as one. To my knowledge, governance at OpenAI is a failed project and division.
And then there's the fact that even Sam Altman says that AI can't replace developers, directly contradicting Kokotajlo's opinion that we'll all be replaced before 2027-2028. And Altman's the person who would gain the most from it being true.
The_Big_Sad_69420@reddit
I agree with the analysis of the current state of AI tools.
I think what I would be worried about is precisely that: the next AI breakthrough.
I'm not enough of an expert on AI to have insights on the exact technology that made current LLMs possible, so I have no idea what would enable the next breakthrough. The current one also came as a surprise and has progressed very fast, which makes me anxious about if and when the next one will blindside us.
SituationSoap@reddit
The person from the 2027 AI report is a fucking idiot. If that person took what they're saying even remotely seriously, the correct response is not "we're doomed because of AI" it's that we should be imprisoning anyone who works on AI tools, burning down the data centers, and invading China so they don't make the same mistake. That person claims AI is an existential threat to humanity within the next 24 months. Responding to that with anything but overwhelming force is stupid.
That person is a huckster. They're selling the idea that AI is going to be a new god, and they want to set themselves up as the priest because only they understand it.
hippydipster@reddit
Is that an argument for why it's not possible? I'm not sure I understand the logical flow there. Just because people aren't blowing up data centers means nothing too radical is going to change in the next few years?
SituationSoap@reddit
No, I'm saying that if this person actually believes that humans face an existential threat within the next 24 months, he should be advocating for the use of overwhelming force.
I'm not advocating for overwhelming force because I think that guy is a fucking idiot and his takes are bad. If I believed him, I'd absolutely be protesting in the street that we need to turn the ship now, before it's too late. The way that I protest in favor of things like action taken to combat climate change.
Climate change is not an extinction-level event in the next 2 years with an obvious off switch, though. If it was, I'd be in the streets right now arguing that we need to hit the off switch as quickly as possible because I don't want literally everyone to die in the next three years.
The fact that he's not doing this means that he doesn't believe his own rhetoric. Because he's a huckster.
This is like Christians who talk about how the Rapture is definitely coming any day now, but who still contribute to their own 401(k) accounts. What you do is way more important than what you say.
hippydipster@reddit
You may have noticed protesting hasn't accomplished much, and I think it's reasonable that someone like Daniel Whatever thinks he'll have a better chance of positively influencing outcomes by managing his image in this respect. Advocating bombing data centers, à la Yudkowsky, seems to result in being taken less seriously.
I also think your overly emotional reaction here is a bit suspect.
Fidodo@reddit
Incorporating AI into more products will make them far more complex. They'll be more flexible and non-deterministic, and all of that will make codebases far more complex, with more edge cases to handle and more state to manage.
Right now companies are doing a piss-poor job of utilizing the potential of AI, and most new products I see are just another variation of RAG-in, summary-out. There's so much more that can be done. I think any productivity gains we get will be immediately used up as soon as programs catch up with the potential and explode in complexity.
Ok_Bathroom_4810@reddit
I think the main difference is that doomers are seeing how fast the technology is evolving vs developers seeing how the tools work right now. Developers see that AI does not cover the use cases required today, while doomers see that AI is evolving extremely rapidly with major advances coming out almost every month.
It is a bit too early to say exactly how it will play out. In my opinion, the people/companies that will get rich are the ones using AI to build products, rather than the companies building models. I think it is inevitable that model training and model usage will become commoditized and costs will drop rapidly, but I don't have a crystal ball and could be totally wrong.
I also think there will be tons of job opportunities in the AI space as people figure out how to use it effectively, but you never know. AI has already started taking graphic design and writing jobs, so it’s not that far fetched to think it could take developer jobs in a few years too.
jhartikainen@reddit
I'd say it boils down to this:
I don't know if there's really any real disconnect. You just tend to see the extreme ends of the reactions (hype/hate) more online because those who are somewhere in the middle don't care enough to spam their opinion.
mentally_healthy_ben@reddit
The weirdest category of AI reaction is that of the majority of people: the ones who don't use LLMs, or only use them for things like writing emails. "Oh yeah, ChatGPT is the AI thing, right? I've tried it a couple times for recipes."
Any_Rip_388@reddit (OP)
This is a solid take. Thanks for sharing
ares623@reddit
Developer experience doesn’t matter. If your CEO is under board pressure to push AI, it will be pushed
ButterPotatoHead@reddit
Copilot-type tools are just the next evolution of IDEs and aren't really a threat to the existence of software developers; they're just another tool to cut out some of the grunt work.
Everyone has seen that taking code directly from an AI and trying to make it work is futile. You still need actual engineers to not only architect and design it but to set up testing, pipelines, etc. Basically the coding itself is going to become a less important part of overall engineering.
However, AI is going to radically transform how data is used. You can now take 100 petabytes of call center transcripts and feed them into an LLM and ask it to identify trends, problems, improvements. It will be possible to spin up and train an LLM the way we currently spin up a database, and then connect them together with techniques like vectorization.
Doing things like pulling together 5-10 different large datasets of different shapes and sizes and quickly querying and analyzing them is going to become easy and will transform the types of problems that are solved.
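(As a rough sketch of the vectorize-and-query idea, assuming Python and numpy; embed here is a crude bag-of-words stand-in for a real embedding model, and the transcripts are invented examples:)

```python
# Rough sketch: index transcripts as vectors, then query by similarity.
# `embed` is a crude stand-in; a real system would call an embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    v = np.zeros(256)
    for word in text.lower().split():
        v[hash(word) % 256] += 1.0  # hash words into a fixed-size vector
    norm = np.linalg.norm(v)
    return v / norm if norm else v

transcripts = [
    "Customer couldn't reset their password after the update.",
    "Caller asked about refund timelines for damaged goods.",
    "User reported the mobile app crashing on login.",
]
index = np.stack([embed(t) for t in transcripts])

query = embed("app crashing on login")
scores = index @ query          # cosine similarity (rows are unit-norm)
print(transcripts[int(np.argmax(scores))])
```

A production system would swap embed for a real model and the numpy array for a vector database, but the mechanics are the same.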
kingofthesqueal@reddit
For anyone wanting more info on what this Times post is referencing, it’s this https://ai-2027.com
The main claim to fame by the guy being interviewed in the field is “predicting a large fraction of AI advancements and integrations from 2021-2026 before ChatGPT released”.
It made the rounds on Reddit last month, but many ignored the countless criticisms of the article and of these guys as a whole, and the dubiousness of how accurate he actually was over 2021-2026.
creaturefeature16@reddit
I hate this author and this site. It's pure conjecture presented as "research". The presumptions are pure guesses.
It reminds me of this graphic of "emerging technology for 2012 and beyond".
Apparently by now, we should already have interplanetary internet, telepresence, context aware computing...
kingofthesqueal@reddit
I just remember reading this comment a while ago regarding criticism of this whole thing: https://www.reddit.com/r/slatestarcodex/comments/1js1fgc/comment/mnd3n2h/?utm_source=share&utm_medium=mweb3x&utm_name=mweb3xcss&utm_term=1&utm_content=share_button
It didn't matter, though. r/singularity, r/accelerate, etc. ate this shit up and posted it all around Reddit as if it were an absolute certainty, ignoring so many of the issues with the article.
prescod@reddit
The disconnect is simple to explain.
They are researchers, looking at the difference between the problems they considered insurmountable five years ago and the progress they see today. They see all of the cutting edge stuff and extrapolate it into the future.
You are looking at a commercial product, delivered at scale, representing the best thinking of two years ago.
In other words: it is the early days of air flight, and you see the Wright Flyer and ask, "Why would that be disruptive to the ocean migration business? People will need to take boats to traverse the ocean for the foreseeable future." And they envision the 737, and they know long-distance oceanic migration is doomed.
Neither of you is wrong about the time frame you are looking at. But you aren’t looking at the same thing.
Have you heard that AlphaEvolve solved an algorithm problem that was open since 1969?
“AlphaEvolve’s procedure found an algorithm to multiply 4x4 complex-valued matrices using 48 scalar multiplications, improving upon Strassen’s 1969 algorithm that was previously known as the best in this setting.”
That’s research. It’s not a product you can buy in 2025. It’s a product you can buy in 2030.
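(For context, some standard arithmetic not spelled out in the article: Strassen's trick multiplies two 2x2 matrices with 7 scalar multiplications instead of 8, so applied recursively to 4x4 matrices, viewed as 2x2 matrices of 2x2 blocks, it costs

\[
7 \text{ block products} \times 7 \text{ scalar products each} = 49
\]

scalar multiplications. An algorithm using 48, here for complex-valued 4x4 matrices, therefore beats that long-standing recursive baseline by one.)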
lookmeat@reddit
The reason I suspect there might be a dot-com-bust-style correction in the AI space (though it's a much smaller section of the tech market right now, so I hope it won't be a dot-com-bust-size correction of the market) is that I see a lot of the attitude towards the internet in the late 90s. People assumed the internet was going to do all these magical things, in magical ways, with a lot of handwaving; we'd just say "computer, I want a pizza" and the pizza would just "appear" on my table, no explanation of the logistics of getting it there. When you asked "how do you expect to make a grocery delivery system cheap enough that people would use it", the answer would be "man, you don't understand, it's the internet: see that exponential growth? It's just going to keep going, baby", as if you downloaded the groceries.
Same thing with AI. There are things where I really think it'll be revolutionary, but way too many people are making ridiculous claims that leave a lot of hard, important questions unanswered, questions that get handwaved away by showing how AI is getting exponentially better by some arbitrary metric (even though the hard problem being asked about wouldn't be solved by AI even in their own model).
Something that I think will be a revolution in AI: janitorial work. There's a lot of work needed to keep a codebase at a large company in a good state, paying down tech debt and keeping the work going. When you make a breaking change, or deprecate some functionality in a library in exchange for a better way to do it, ideally you want the whole codebase to be updated. It turns out the easiest way (when it's entirely within a company) is for the people upstream to just go and change things. Smart engineers try to make these changes awk-friendly, so they can run a script over all the repos (one of the advantages of monorepos) and their files and then create the very large number of PRs to handle it (which can be automated; a rough sketch of such a pass follows this comment). With ML agents, you can just document how to handle the change and let agents scour the whole codebase and do the updates. The changes should be small and specific, and engineers can review them. Open source projects may write their "best practices" documentation or migration guides for a v2 release clearly and simply enough (a good thing either way) that an AI agent could follow them directly; point the agent at the docs and you get an "auto-updater" or "good-practices-code-fixer" for free. This frees engineers to make bigger, more aggressive foundational changes when needed, without wasting time on getting them actually adopted, letting an AI agent handle that instead.
Something that won't be the revolution: vibe-coders replacing engineers. To be fair, the main reason this won't work is something most engineers do not know, or would rather not know: their job is barely related to writing code. It is about translating ambiguous problems and solutions into concrete, mathematically rigorous, specific solutions (so clear that a machine could follow them), through a series of interactions, sharing, and iteration. AI agents aren't really better at this than humans, and they aren't really cheaper once you put all the costs in: you still need a person (the "prompt engineer") who translates ambiguous problems into prompts an agent can solve, and who understands the code the agent generates well enough to catch issues. That skill is just as rare and hard to acquire as software engineering, so you end up with the same number of employees at the same cost. And this assumes agents become better than really solid engineers, which is more about working in meetings, delegating tasks, and collaborating than about "just coding" (that's the difference between a junior and a mid). So if you need the same number of engineers as before, and they are just as hard to find, what gains offset the cost of those very expensive machines?
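(A minimal sketch of the janitorial pass referenced above, in Python rather than awk; the monorepo path and the deprecated/replacement calls are hypothetical:)

```python
# Minimal sketch of the janitorial pass: walk every project, apply one
# mechanical rewrite, and leave a small, reviewable change set per project.
# The path and the deprecated/replacement calls are made up for illustration.
import pathlib
import re

DEPRECATED = re.compile(r"\bold_client\.fetch\(")
REPLACEMENT = "new_client.get("

def migrate(project: pathlib.Path) -> int:
    changed = 0
    for source in project.rglob("*.py"):
        text = source.read_text()
        updated = DEPRECATED.sub(REPLACEMENT, text)
        if updated != text:
            source.write_text(updated)
            changed += 1
    return changed

for project in pathlib.Path("/monorepo/projects").iterdir():
    if project.is_dir() and migrate(project):
        print(f"would open a small PR for {project.name}")
```

The ML-agent version replaces the regex with "follow the migration guide", but the shape (many small, reviewable PRs) stays the same.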
PreparationAdvanced9@reddit
I agree with you but how do you come to terms with releases like this : https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/
creaturefeature16@reddit
The math subs have some reasoned takes that cut through the google PR hype machine:
https://www.reddit.com/r/math/comments/1kmnwsg/alphaevolve_a_geminipowered_coding_agent_for/
https://www.reddit.com/r/mathematics/comments/1kmma65/alphaevolve_improves_on_best_known_solutions_to_a/
Visual-Blackberry874@reddit
I agree completely. And I use them daily too, both inside and outside of work.
These tools are going to take us away from searching for things like “weather London weekend” to “I am going on a trip with my partner and children to London at the weekend, will we need to take wet weather gear?”
The end result is the same.
ecmcn@reddit
I wouldn’t go that far. A recent example: Yesterday I was dealing with some pretty ugly awk scripts I didn’t write, part of an old build system. A quick AI “explain what this script does” saved me 10 minutes of parsing through them manually. I still double-checked that the AI was right, but that was a lot faster than starting from scratch. That’s a very different experience and workflow from using Google to get to the same result.
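For what it's worth, you can even script that kind of first-pass check. A minimal sketch, assuming the OpenAI Python SDK; the model name and file path are placeholders, and any chat-style API would do:
```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
script = Path("build/legacy.awk").read_text()  # placeholder path

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Explain step by step what this awk script does:\n\n" + script,
    }],
)
print(resp.choices[0].message.content)
```
Same caveat as above: it's a starting point to verify against the script itself, not ground truth.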
Visual-Blackberry874@reddit
Yes that is one way I use them too, great for legacy or spaghetti code.
The scenario I described is how (I think) the general population will use them if and when they replace search engines.
ecmcn@reddit
You’re probably right about that. It’s even frustrating to watch people who don’t know how to google stuff.
Visual-Blackberry874@reddit
Well, that's the thing: a lot of non-techies will still write out a fuller sentence, whereas we've been trained for years to target specific keywords.
Which-World-6533@reddit
If you write down "Yes" and nail it to a noticeboard, it will be right most of the time.
Source: Live in London.
rashnull@reddit
It’s good. It will only get better. Don’t count your chickens till they hatch.
alfcalderone@reddit
Jesus, the video in that article. I'd rather slam my dick in a car door than watch ROSS DOUTHAT talk about AI. Jesus.
HansProleman@reddit
I think it's having enough technical knowledge to understand that LLMs are not magical, have no reasoning capability, and have no apparent evolutionary path to it. There's a ceiling on their performance and we have probably hit it.
And having enough experience of boosters to know that they should not be taken seriously. Remember full self-driving? I suspect this will be similar in terms of predictions turning out to be unachievable.
And really, I suspect many boosters and doomers do too, but the amount of media hype, investment and market share competition going on rewards sensationalism over sobriety.
originalchronoguy@reddit
I feel lucky in the sense that I get thrown into these types of projects, like I am on a scouting mission: sent in to explore and evaluate the technology. So I do have my opinions.
As for doom and gloom, I mostly read posts from people about how LLMs affect them personally. Most comments are anecdotal, in the vein of "it generates bad code, not up to par, etc." This is a very selfish, myopic take.
You will never read stuff from people that will say, "we used an agent to parse our infrastructure logs and can predict failures and potential real-time attacks from nefarious nation states. It has a high success rate, and we have prevented two major breaches." You don't read about that, but I've seen it first hand.
I can't disclose the work I do. I will say, it isn't even 100% or 90% accurate. At best, it is 70% decent. And that is what matters to some businesses. It means there is a future, and a reason not to be sidelined when it reaches greater accuracy. This is the worry: when it gets to 80%, then 85%, then 87% accurate.
My experience is generally positive because I am not using the LLM as a tool that affects me. I am not using it to write code. Rather, I am using it to find out how it hallucinates, how it gets things wrong 30% of the time, and then building tooling and ecosystems to have it learn from its hallucinations. That, to me, is fun: finding the holes in it while staying mindful of its future potential. And to me, there is enough work in this realm to keep people like myself busy for the next 5-8 years. Enough for me to retire.
xordon@reddit
You don't hear about it because it doesn't happen. Predicting failures? #doubt
Real-time attacks from nation states thwarted by an AI scanning your logs? Sure sure.
This is the kind of shit an AI would write, or someone paid to peddle AI nonsense. There are plenty of things AI is good at, but protecting you or your data from nation state attacks like this is delusional.
latchkeylessons@reddit
Having done a good amount of work in this space over the past many years, I'm going to say that the reason you don't hear the most horrible stories is because they get buried - perhaps not even metaphorically. But I'll provide a couple of examples, from headlines that got buried and from my own experience:
Algorithmic understandings of IR imagery in Yemen were used to bomb civilians where no enemy actors had been. The DOD acknowledged this. However, the news story got wiped within 24 hours from a few different outlets. I can't link the story - the news is gone. There are other accounts of this with similar outcomes - you need to be quite diligent to find them when they occur, and they're not on random, shady blogs from conspiracy theorists, but BBC, NYT, etc. from time to time. They are buried quickly. Did AI itself push the bomb button or whatever? No. A recent, naive high school graduate did, under threat of punishment from superiors. Did a team of sophisticated intelligence experts qualify the AI findings? Sometimes, sometimes not. There are DOD contractors on this sub who know these details.
In one engagement I worked on, AI was used with some home-grown algorithmic understandings of supply chain data to auto-ship parts and highly dangerous, controlled, toxic substances. During the initial engagement the client hired chemists to refine the data models and do internal audits. Then they decided that was too expensive, and they stopped - and let the "AI" decisions run free without qualification. The problem with industrial supply chains, at a high level, is that except at the very highest levels - the Apples and Exxons of the world - they are actually fragile and easily manipulated. Long story short, during an audit a couple of years after the firings, a LOT of material had gone missing, and one can reasonably assume it was trafficked, given its nature and value. Plausible deniability was in the hands of the executives involved, and there are no real regulations around AI "decision-making," so nothing changed upon investigation, there was no accountability, and last I heard that AI was still shipping highly dangerous material... who knows where? Someone, or several someones, was clearly making significant money off the back-channel deals there.
The real problem with "AI" in my book is the plausible deniability it lends to forcing harmful outcomes. There is no regulation, no enforcement - and no consequences. At the highest levels of decision-making, in companies and in governments, most everyone is complicit, either actively or via ignorance, and that does not look to be changing anywhere in the world.
So of course AI can be helpful in some scenarios and useful applications, but when we're talking about projects and businesses constantly chasing many billions of dollars, for the most part they're selling the plausible deniability above - because removing humans from a workforce is the biggest gain most companies at scale will ever see, so long as the revenue keeps coming in. And that last point would be the corrective: the game stops when there is no more revenue.
narcisd@reddit
I'll start worrying when an LLM can perform debugging; until then, it's just the next tool.
Sweet-Satisfaction89@reddit
Ross Douthat is a known village idiot.
pwouet@reddit
Dunno anymore. I read another thread this morning with a lot of experienced devs saying they were doing everything with AI agents now. I guess I need to try Cursor.
Which-World-6533@reddit
I'm hugely sceptical of that thread.
Every time I've tried AI it's been more of a hindrance than a help.
My conclusion is that the people who are the most in favour of AI are those who stand to gain the most.
pl487@reddit
I was previously skeptical just like you, and then I started using Cursor in Agent mode with the premium models. I was wrong.
kingofthesqueal@reddit
I’m very skeptical of AI as well, but it is important to note there is a dramatic difference between paid and unpaid versions.
I.e., ChatGPT 4o, o1, and o3 (though still limited in their own right) would blow away the 3.5 model many were stuck using in the free tier until the recent changes.
It's what makes people's takes on AI on Reddit so hard to gauge.
Plus, I think there's a ton of astroturfing by pro-AI interests to prop things up, though that's probably matched by people who are overly skeptical of AI.
Which-World-6533@reddit
And 4) The level of competence the people have in the task they are doing.
When I dig into the people who bang on about AI, it's hobbyists and managers who think it's the best.
Trevor_GoodchiId@reddit
At the very least, we'd see an uptick of prominent open-source contributions, proudly paraded by vendors.
Efficient-Life5235@reddit
It wears off once you start using it every day! I was surprised by how well it responded to my questions at first, but over time I got so frustrated with its answers that I decided never to use it again!
camelCaseCoffeeTable@reddit
May be an unpopular take, but I think many people here are fooling themselves about AI and using the current state of it to do so.
These AIs are getting better every day. Yeah, CEOs are saying hyperbolic stuff. But that article isn’t. That article isn’t saying our jobs are at risk in 4 months. It’s saying 2027 (and if you read it, he actually pushed it to 2028).
That’s 3 years away. Maybe we’ll hit some roadblocks. But if we don’t? Our jobs are absolutely in danger. The article spends some time talking about that - it will lead to political upheaval, it will lead to unrest. The article freely mentions that.
I’m somewhere in the middle. I’m somewhat dubious they’ll be able to scale up AI fast enough to take our jobs within the next 3 years. I think maybe some programming jobs will be lost, but there’s a lot of externalities that will slow things down: power consumption being a big one. Computing power may be another.
But I don't think these externalities will last forever. I do think there comes a day when AI will take coding jobs. The companies are clearly working toward that solution first, so saying "well, it's not taking accounting jobs" ignores the fact that they aren't currently optimizing to take accounting jobs; they're optimizing to take our jobs.
Idk how hopeful I am about the future, honestly. Idk how much faith I have in our government to step up and do something about it
nonades@reddit
This is one of the biggest issues I have with the tech. I don't think the environmental impact is worth something we already had.
sozer-keyse@reddit
This doom and gloom has been going on for 3 years, and it's taken my current job that long to even consider using GitHub Copilot. Figuring out a way to use generative AI while keeping sensitive data and code confidential is a minefield.
Using it on my personal time, I find it's only really useful for autocompletion, writing boilerplate code, and doing tedious stuff. Even then, what it spits out isn't always 100% the best and still needs human intuition to sanity check it and correct it. It works best when it's prompted to do something specific, and software developers are the most qualified to do that.
Most of these CEOs, researchers, and other wackos on LinkedIn bragging about how they don't need developers anymore because of AI are in for a very rude awakening once their codebases are filled up to the brim with bugs.
sampsonxd@reddit
You said it best: "toy apps are not enterprise grade applications". And the reality is, for most devs, guess what they're actually being paid to build. But all a CEO sees is the number going up.
Same thing with all the image generators: several years ago, everyone saw them and claimed all artists' jobs were obsolete. Reality is, in all this time, nothing changed. Turns out an artist does a lot more than spit out an image.
pl487@reddit
There will be winners and losers. There may be more of one than the other, we don't know yet.
We're already seeing the effects of increased developer productivity. I know my company isn't going to be hiring any more devs unless a significant chunk of the current team gets hit by a bus. If that's all that happens it's massive.
Awkward-Cupcake6219@reddit
Probably it's just me, but every time my project's complexity exceeds a certain (low) threshold, any AI suggestion becomes mostly inaccurate, while autocompletion on individual pieces is still good.
Despite my years of experience, I apparently still have a lot to learn, since people with zero coding experience boast about having built a SaaS that's making thousands thanks to AI.
Legitimate_Plane_613@reddit
The disconnect makes sense when you consider how LLMs and neural nets work.
They get trained to fit as much of the data as possible, which is going to land them firmly at 'the mean'. So those who are below 'the mean' are going to think it's great, those around the mean won't see the point, and those above the mean will see it as bad.
Now, think about the average quality of code they've had access to and ask yourself what does that look like? What does the distribution of developer competency look like?
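A toy way to see the 'mean' intuition (all numbers made up, and a caricature of how LLMs are actually trained): under a squared-error-style objective, the single best constant answer to a set of conflicting examples is exactly their average.
```python
import numpy as np

# Three conflicting "right answers" the model has seen for the same situation.
samples = np.array([2.0, 3.0, 10.0])

# Scan candidate constant predictions, measuring mean squared error for each.
candidates = np.linspace(0.0, 12.0, 1201)
losses = [np.mean((samples - c) ** 2) for c in candidates]
best = candidates[int(np.argmin(losses))]

print(best, samples.mean())  # both ~5.0: the loss-minimizing answer is the mean
```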
Which-World-6533@reddit
I think this is the best explanation yet why I don't see much benefit.
PickleLips64151@reddit
I read a research paper that determined AI/LLM usage drove down code quality and increased code churn. I haven't followed up on it lately, but I would estimate the results have not improved since the initial research.
FormerKarmaKing@reddit
AI Doomerism is good for product marketing, media publishers / ad sellers, and smart people with near zero technical skills that want to appear on trend.
iBN3qk@reddit
I work for a corporation and am in the room when evaluating tech for potential adoption.
Nobody actually sells any solutions that can replace devs at this point in time.
If you have anything, please come pitch it to us. I would promote a tool that really works, but I don’t want to waste my time with bullshit.
MoreRespectForQA@reddit
This "magic robots gun take er jerbs" insanity from investor owned newspapers is actually nothing new. Years before LLMs were a thing they would say similar hopeful things about robots/automation and absolutely would go off the rails attributing all kinds of magical powers to automation they clearly didn't understand.
I remember a study by Ball State University for example that used a mathematical sleight of hand to pretend that foreign (e.g. Chinese) factory workers were actually probably all robots and that therefore robots were taking over the economy. As a statement in English this makes absolutely no goddamn sense but if you slyly put it in an equation and then publish it then the investor-owned newspapers will wet their pants and publish it.
TheWhiteKnight@reddit
My take: the fear isn't that AI is replacing developers right now. The fear is that AI will make developers, especially junior ones, obsolete in a few years or so.
Maybe you'll no longer need to tell Agent Mode which files to pull into context but instead it'll just pull everything into context automatically and do things 100X faster/better than it can today.
My problem with the argument that AI will replace lots of developers in a few years is that you'll have to give it access to everything. Back-end code, front-end code, access to DB schemas, devops functionality ... everything.
Maybe a few years (or as soon as 2027) is too soon to be worried about. It's impossible to know what may come in .. 10 years? Who knows.
We do somehow have FAANG companies already saying things like "80% of our code is written by AI". This is a huge mystery to me. What are they talking about?
Regardless, the future is indeed uncertain. It's certainly not "stable" and thus should concern newbies IMO.
daedalis2020@reddit
You write a function. To keep the math simple say it’s 70 lines of code.
You use AI to generate 3 unit tests, 10 lines each.
Congrats, AI just "wrote" 30% of the code: 30 of the 100 total lines.
GammaGargoyle@reddit
I’ve worked in legacy codebases where you could refactor and remove 50% of the lines of code. Now imagine a legacy AI written codebase
AdventurousSwim1312@reddit
I find myself using AI for two things: boilerplate code (when I'm kicking off something fast and need a standard template) and tedious tasks (e.g., in frontend work, auto-translating into many languages, unit tests, docstrings, basic refactoring, bug fixing). For those, AI is really helpful and performant.
As soon as I switch to custom logic, though, or flow implementation, optimization, multi-repo work, etc., it becomes barely usable (both because it cannot contextualize well and because describing the logic and edge cases in enough detail is basically less practical in natural language than in code).
So I'd say it saves me a lot of headaches on repetitive and unpleasant tasks (who likes writing documentation?) while staying miles away from the core logic and the highest-value work in developing code.
So: actually useful, but not in the way it's marketed.