Am I suffering from a serious case of copium or is tech journalism seriously out of touch with reality when it comes to AI?
Posted by bentleyk9@reddit | ExperiencedDevs | View on Reddit | 370 comments
Whenever I read a tech journalist's article about AI and programming, it almost always mentions that AI is amazing at writing code and it's being used to write the majority of code these days.
Example from Casey Newton of Platformer: "AI is better at coding tasks than basically anything else.... I talk to a lot of software engineers and what they will say is that it used to be that we would write code, and then it moved to we write half the code and it gets autocompleted, and now we just supervise the code and we type in the box what kind of code we want the machine to write" (video)
This seems insane to me. I use AI as a tool to help me, but in no way do I trust it or use it to this level. Not even remotely close.
I feel like tech journalists are listening to what the founders and heads of the AI companies are saying, but no one is actually asking us what it's like. The companies want to justify the obscene amount they're spending on developing the technology, so they're just telling reporters what messaging they want to make public. If the journalists don't know anything about software engineering, they just blindly trust that what the founders say is true. But their articles are just perpetuating a false narrative about the current capabilities of this technology and how much software engineers are actually using it.
Am I just in denial? Does this accurately reflect how you are using AI these days?
Raunhofer@reddit
It doesn't help that the people who get paid by the success of AI are literally lying about the capabilities of it. Thus I wouldn't blame it all on journalists.
Meta_Machine_00@reddit
Brains are generative machines themselves. Free thought is not real. They simply output what they must output at the time. "Lying" is a hallucination. In reality, there is absolutely zero independent control of what someone must think.
dats_cool@reddit
This sub is for experienced engineers. Go play in the sandbox over at /r/singularity. You guys can circle-jerk until the collapse of civilization.
Meta_Machine_00@reddit
You are not a good engineer if you think these comments could somehow not appear in this specific sub at this specific time. You see the comment here. This is the algorithm of the universe.
dats_cool@reddit
Lol says the guy that's never worked as a developer a day in his life. Like who are you again?
Meta_Machine_00@reddit
You do understand that only some small portion of people can experience the circumstances to learn to code or become a developer, or get employed as a developer. If you experienced different life circumstances, you would have had a different path in life. You are just a person that got lucky. You understand that, right?
dats_cool@reddit
LOL I immigrated to the US from a poor eastern European country, went to college for the first time, had a relatively short career in an unrelated field, then went back to school for computer science, and then worked hard to break into tech.
Don't talk to me about luck.
Meta_Machine_00@reddit
What percent of people from that poor eastern European country did not get to do what you did? You sound incredibly lucky to have had that opportunity.
dats_cool@reddit
Sure bro. I'm just lucky. Luck plays a factor in all things in life, but my hard work and determination are what got me here.
Meta_Machine_00@reddit
Plenty of people put in more hard work and determination than you and don't have any of the opportunities that you have access to. It is all luck.
dats_cool@reddit
LOL. I mean whatever helps you sleep at night. Nothing was luck, there was a lot of deliberate studying, all nighters, moving across the country for a low paying first gig.
That's sad that you have that perspective. So do you just not try for anything because you think luck is everything?
Meta_Machine_00@reddit
No. Free will is not real. We only do what the physical laws force us to do. Some people are lucky and are forced to do perceivably better things while others are forced to suffer.
dats_cool@reddit
Okay, I'm over this conversation.
Raunhofer@reddit
Achtually due to quantum events that occur in our neurons, free thought seems to be a real thing.
The FitzHugh-Nagumo equations and quantum noise - ScienceDirect
Our brains sense the future through a new quantum-like implicit learning mechanism - ScienceDirect
Meta_Machine_00@reddit
Randomness/non-determinism are not freedom. That just means you are random, not free.
Raunhofer@reddit
It means LLMs and our brains do not function the same way. Our thoughts are obviously not random gibberish, it's just that the process leading to them has a kind of "salt" that cannot be predicted, making it the closest thing to free will we can scientifically have.
Free thought is real, you are not a predictable machine. No-one and nothing knows how you'll respond, if you will at all.
Meta_Machine_00@reddit
Non-predictability is not freedom. Freedom is being able to choose independently from more than one option in a single instant. Occam's razor asserts non-freedom because any choice will be algorithmically assigned and there is no chance for the other options to be chosen in the instant. Your bias just makes it hard for you to abandon the illusion of "freedom".
Raunhofer@reddit
Non-predictability is the basis for free will. With our desires and intentions, this extremely complex, non-deterministic, and not fully understood mechanism is what drives us.
It's the near-magical nature of LLMs that is the illusion. You absolutely can reverse-engineer every character it types out. ML is said to be a black box because it's not practical to reverse-engineer, not because it's impossible.
You can now answer or not answer, use your free will in its full glory. ChatGPT can't, it will answer.
Meta_Machine_00@reddit
No. If you answer then you are forced to answer. Just because it is asynchronous and there are pauses in the middle does not mean it is not automated.
And you also explain that things are unknown, but then you leap to non-predictability = free will. You literally say that you know that the unknown is what drives us. That makes zero sense.
Raunhofer@reddit
You are cherry-picking my words; non-predictability alone is not free will, just something that needs to occur. Please read the full sentence.
You are not forced to answer even if you answer. You strip all meaning from the word "forced" if absolutely everything is forced, which is not the case due to the aforementioned non-deterministic nature of neurons.
I understand that it is easier to grasp the concept of free thought by oversimplifying it into something totally deterministic and simple, but that's not how our brains work, and that's why LLMs do not work the way our brains do.
Meta_Machine_00@reddit
If you don't understand it as you've already admitted, how do you know that something is being oversimplified? You are justifying your knowledge with ignorance.
Nonetheless, randomness can be forced. Non-determinism is simply the appearance of phenomena with zero preceding information. Just because something is random does not mean that appearance of the phenomena at a given time was not a mandatory random generation.
General_Platform_265@reddit
take your meds
Meta_Machine_00@reddit
So you think I need to use medication to alter the operation of the brain to produce a different result? You are just validating what I am talking about.
Colt2205@reddit
I don't use AI at all in my job. Just doesn't apply to full stack + backend in a custom code base environment.
Alternative_Work_916@reddit
I'm using AI in a project right now where I've specifically used AI to write the majority of the code. It has taken a lot of coddling and intentionally designing in sections for basic features.
It has a massive amount of boilerplate HTML and JS constantly repeated to pre-fill simple pieces that function the same. If I get time to refactor it, I could probably shave off 60% of the code and have a much more maintainable app with fewer break points.
People who aren't in the know are the only ones impressed by lines of code.
matthedev@reddit
One of the failure modes of journalism is to just take press releases and publish them as is or use them as the only source for an article. Reasons can include access (wanting to stay in the good graces of a company or insider source of information); paid placement of what amounts to advertisements (or some other kind of quid pro quo); or lack of time, expertise, and resources to do deeper reporting. Talking to multiple experts, investigating the facts, getting all sides of the story, and striving to maintain an objective or neutral point of view requires time, hard work, self-discipline, and occasionally considerable risk personally or to the news organization (less so for trade publications).
There's been plenty written already about how the Internet and social media have shaken up the business models of traditional news media over the years, and a lot of it applies to trade press and niche-interest news media too.
"Extraordinary claims require extraordinary evidence," so journalists should be investigating hyperbolic claims, even if it would be nice to believe we're on the verge of some incredible technology revolution that will solve all of humanity's problems.
stewcelliott@reddit
Casey Newton is one of the worst offenders for this sort of stuff, honestly. It wouldn't surprise me if his "software engineer" contacts are actually former engineers who are now in management rather than the actual engineers doing the day-to-day work.
OutOfDiskSpace44@reddit
They would be director level. Engineering manager level is parroting the talking points, engineers are adopting tools as an "oooh new shiny toy" phase.
BrianThompsonsNYCTri@reddit
His girlfriend works for Anthropic…..
AntDracula@reddit
There it is. Incestuous business.
dvogel@reddit
On his podcast he regularly discloses that his boyfriend works at Anthropic. So when I read him saying "I talk to a lot of software engineers" I just assume he means "I talk to a lot of AI-pilled software engineers". In that passage I think he is describing the development of the technology rather than the change in practices across the industry. I wish he'd be more clear about that but I don't think he is nearly as bad as most other journalists.
jakejasminjk@reddit
Can you talk more about how they used AI?
DeterminedQuokka@reddit
This would make sense given that statement. I've been to a lot of pro-AI talks, and for a lot of managers who never get to code, AI is a way for them to actually interact with code.
For people who actually code, the statement is usually something like "AI is surprisingly good at Terraform".
Although an engineer did say the other day "everything got 5 times faster with AI". So some people are into it.
I do think people willing to discuss AI with journalists are likely self-selecting in a way that makes them not normal engineers.
Electrical_Fox9678@reddit
It's not that good at terraform. Maybe for super simple things. I've seen it make up garbage that sort of looks like it would work.
DeterminedQuokka@reddit
Surprisingly doesn’t have to actually be that good.
I mostly just use it to find files in terraform. It’s been really good at that. My boss is the one that said it was good at terraform. I don’t know what he was doing with it. But the person who really likes it is also infra. So at least at my job that’s the team that likes it.
I’ve mostly used it for github actions and docker files. Both of which it’s been good enough at.
It’s terrible at pyright specifically so I don’t really let it write actual code.
Electrical_Fox9678@reddit
All the infrastructure folks I work with say don't trust it, that it just makes things up. Wastes more time than it saves.
One thing that it could do is make up a dashboard which was sort of nice.
poolpog@reddit
As an infrastructure folk, I too, say don't trust it. LLMs aren't even quite at the trust but verify stage yet for terraform or IaC
DeterminedQuokka@reddit
Honestly, I was surprised she likes it. But if it’s working for her, good for her.
Ugh. I’ve had no success with dashboards. It’s constantly making up metrics that don’t exist in datadog. It has never given me a single real metric that aws sends. It’s so annoying.
poolpog@reddit
LLMs are not just not good at terraform they are legit bad at it.
OskarSarkon@reddit
Imo it's a significant problem for stuff like Terraform that's more niche than your average JS frontend or Python API-call script. As a related example, I've been on (multiple!) incident calls that were derailed by people repeating made-up but correct-sounding nonsense from ChatGPT about Elasticsearch.
a_reply_to_a_post@reddit
I couldn't really leave the house last night to go to the dispensary for a 50% off sale they were having because my wife was at the darkroom printing photographs, and I was kinda annoyed about it and ended up downloading Cursor and messing around with it to see what the hype was about.
I've been paying for old-ass servers to host a few old websites, including my wife's portfolio site that I built over 10 years ago when she was my girlfriend. I actually did build her a new version that's modern TypeScript/React, but I just have some JSON files.
I've had an idea in my head for years about how I can stop paying for these old servers and just store content on S3, and in about 2 hours with Cursor I had a working Electron app that publishes to an S3 bucket... a couple more hours and it was kinda robust, with features like drag-and-drop reordering and static API generation.
For quick prototypes, this shit is kinda fun if you just chat with it and guide it through improvements.
I think for greenfield projects / one-offs, shit is kinda sick, but in my day-to-day work I don't really want to lean too much into using AI to write my code, because otherwise the job becomes reviewing junior code all day and asking for a bunch of changes.
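For reference, the core of a publish step like that is genuinely small, which is part of why agents do well on it. A minimal Python/boto3 sketch of the idea (the actual app was Electron/JS; the bucket name and file layout here are hypothetical):

```python
# Minimal sketch: upload a content directory to S3 and emit a JSON index
# (the "static API"). Bucket name and layout are hypothetical.
import json
import mimetypes
from pathlib import Path

import boto3

BUCKET = "my-portfolio-site"  # hypothetical bucket
s3 = boto3.client("s3")

def publish(content_dir: str) -> None:
    root = Path(content_dir)
    entries = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        key = str(path.relative_to(root))
        ctype = mimetypes.guess_type(key)[0] or "application/octet-stream"
        s3.upload_file(str(path), BUCKET, key, ExtraArgs={"ContentType": ctype})
        entries.append(key)
    # The "static API": a JSON listing clients fetch instead of hitting a server.
    s3.put_object(Bucket=BUCKET, Key="api/index.json",
                  Body=json.dumps({"files": entries}),
                  ContentType="application/json")

if __name__ == "__main__":
    publish("./site")
```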
coworker@reddit
What you did is called the agentic AI workflow, and like you said it works great for prototypes or smaller things. This is conceptually like being a manager delegating whole tickets to junior engineers.
Cursor also supports a manual AI flow where you are actively coding like normal but using AI in a very targeted manner for small, specific things. This is like working on a problem normally but having a principal engineer available to consult on a whim. This style works extremely well for large, established code bases because you are in full control.
Most people on Reddit have no idea there is a difference
MoreRopePlease@reddit
I wish that in all this kind of discussion people would be specific about what tools they are using, and how, and for what kind of work. "I love/hate it!" is useless without context.
I used MS Copilot chat this week to help me with some legacy code not compiling. It ran me around in circles. I wasted an hour or so trying to make my project build and pass tests. Then I used ChatGPT 5 with the exact same prompt, and it told me exactly which config file I needed to change and resolved my issue with its first response.
Kirk_Kerman@reddit
It's sort of ok at making individual-level applications but it's completely lost at sea in an enterprise repo, never mind a big organization with multiple teams dedicated to individual services.
WheresTatianaMaslany@reddit
I like a lot of his stuff but I'm suspecting he might be too plugged into the SF AI scene that's full of boosters and it's causing a bit of a reality distortion field. I think he disclosed previously his partner works at Anthropic? I find that a lot of those people have kinda drunk the kool-aid or are overinvested (financially or intellectually) in AGI.
But yes bottom line is I think he's out of step with what rank-and-file software engineers are seeing.
zicher@reddit
Yeah sounds like what CEOs are saying, not the reality of the situation.
m0j0m0j@reddit
I dunno guys. I recommend you all find the latest interviews on YouTube with Armin Ronacher (google "ronacher pragmatic engineer"), as well as his blog; he's the creator of the Flask framework. Real normal and down-to-earth guy.
He was very against AI a year ago, and now he's changed his opinion and is using it to write 80% of his code. I believe him.
AntDracula@reddit
So which AI slop are you selling?
m0j0m0j@reddit
I'm selling a service where it's not apparent at all that there's any slop in it, as it's all under the hood.
AntDracula@reddit
Lmao of course
m0j0m0j@reddit
What “of course”? And Ronacher is building some email thing, where AI slop is only in development, not even in the product.
Dude, you probably imagine yourself as some sort of resistance fighter in an all-round defense against The AI Corpos, but you just come off as a bitter schizo.
AntDracula@reddit
Ok slopper 👍
m0j0m0j@reddit
Ok, have a nice day
AntDracula@reddit
👍
PlanktonPlane5789@reddit
I was skeptical at first, as well, and now about 95% of my code is written by AI. It isn't perfect but it's sooo much quicker than I am. It can whip stuff up in minutes that would take me days 🤷♂️
MoreRopePlease@reddit
What kind of tasks are you having it do and how much cleanup do you need to do? What AI system are you using?
PlanktonPlane5789@reddit
I'm using Cline (VS Code extension). You can bring your own LLM. I use Claude Sonnet 4.5.
I've been doing a lot of AWS infrastructure-as-code recently but have also done a lot of Python with it. When it's wrong, it's wrong, but that's pretty rare. The irony is that I'm building internal AI tools, so... using AI to make AI.
As far as cleanup? Hardly ever.
SignoreBanana@reddit
When execs can tie AI to increased revenue, then I'll buy the hype. But so far, not a SINGLE company has.
RedWinger7@reddit
100%. This is the reality of the situation:
https://pracap.com/global-crossing-reborn/
A trillion dollar capex spend with no trillion dollar market or problem to solve.
thecodemonk@reddit
That was a really good read.
trannus_aran@reddit
House of fucking cards with the % of GDP sunk into this farce
GRIFTY_P@reddit
welcome to how media outlets shape public perception. flood headlines with exaggerations and half truths, include enough nuggets of real truth to make it arguable, continue flooding for eternity. now you got people believing stuff like, San Francisco is a lawless warzone
porkycloset@reddit
Manufactured consent. If media reports on something one way, the general public will start to believe it even if it’s not true. This is how all of politics has worked forever
Western_Objective209@reddit
And it seems like at every large company, the CTOs got together and started AI initiatives where they tell everyone they are tracking AI usage and targeting certain metrics. At least where I work, the response by most people is to just use Copilot once a week to make sure they show up on the tracker.
NUTTA_BUSTAH@reddit
I have a similar feeling. But on a tangential side note, I have a feeling that the real experiences do not get shared outside of companies, as all companies are so heavily AI-invested that saying anything against it will put a PIP-shaped target on your back. So it's a lot of "yes-men" until something major happens that "allows" those people to be themselves and stop repeating their management's agenda to reporters.
AntDracula@reddit
When you realize ALL journalism is bought and paid for, the second half of your life begins.
Megatherion666@reddit
Hue hue hue
Our main guidance at work is "don't trust AI, it will break all the rules you set". It is great that it can write code. But if I am spending more time describing and then cleaning up than I would if I wrote the code myself, then AI is worthless.
Strict-Molasses4816@reddit
AI is a bloated, expensive, and unsustainable screen-scraping "macroservice" that, when all is said and done, summarizes and copy-pastes from stackoverflow and similar websites very, very quickly. Putting AI summaries at the top of web-searches robs these websites of traffic, even though the AI payload was distilled from their content.
I'm not worried about AI taking away any serious developer jobs, and it will never actually do DevOps or Systems Administration autonomously by any stretch of the imagination.
Why don't they let it transparently submit pull-requests to opensource repos if it is so good?
Because the mistakes it makes are epic, and the successes are pure plagiarism.
The media is utterly complicit in this scam.
Even the AI Doom stories are just advertising in disguise, and they greatly overestimate AI's capabilities, albeit in a scary way.
davy_crockett_slayer@reddit
Ars Technica and LWN are still excellent. I'm not sure what outlets you guys are reading.
Dave-Alvarado@reddit
You're probably correct.
If you are a tech journalist in silicon valley and you want to talk to a bunch of devs, what do you do? You call up some of the FAANG companies and get quotes. The thing is, most of the people at those companies aren't allowed to talk to the press. The ones that are know what the official company line is.
nomoreplsthx@reddit
First, let's be clear: bloggers are generally not journalists. Nor are podcasters. Nor are people with YouTube or TikTok channels as their main method of reaching an audience. Some people in those spaces do important work in informing the public. Some even have high ethical standards in their work. But it's not journalism.
Real journalists are accountable to editorial boards. They are embedded in structures that enforce editorial and ethical standards (however imperfect those structures may be) and do not get to rely on their own judgment exclusively.
Obviously, the lines are really blurry. Casey Newton, for example, has real journalistic training and is a weird hybrid case. Are they a journalist... kinda?
However, if you throw out all the bloggers who aren't edge cases, and all the podcasters, and all the social media influencers, and all the substack shlock, and everything that thrives off the hype cycle, and exclusively read high-reputability sources, you get a mixed picture of AI, with people genuinely in disagreement about its impact, and, if they're smart, reasonably humble about assessments of what happens next.
Remember that not everyone who pretends to be a journalist is doing journalism. And of the people who are doing journalism, the range of credibility and integrity is enormous. Which is why as a rule you're better off reading fewer better sources.
Eric848448@reddit
Journalists who write about technical topics have been getting dumber every year. I first noticed it when Covid was getting going.
xian0@reddit
It's basically every topic and goes back much further. I forget the name but there is one for the phenomenon where you think articles on things you know a lot about are stupid but then continue to read the rest of the articles as if they are fine.
Trio_tawern_i_tkwisz@reddit
TheBear8878@reddit
Gell-Mann Amnesia
OddWriter7199@reddit
Came here to say this. Media lies 24/7 on every subject (of importance anyway), but you only recognize this when it’s your own area of expertise being reported on.
Maximum-Objective-39@reddit
The hell of it is sometimes it's not even lying. It's just overworked people who are under pressure to write something that gets clicks on a topic they don't understand from people they can't risk pissing off.
ep1032@reddit
What you're supposed to do, is pay attention to each network when its on a topic that you understand, and then choose only to get your news from channels that reported that topic reasonably, or at least remember their bias when you do.
Some news outlets outright lie. Many of them genuinely try to report the truth, but with some degree of intentional or unintentional bias. Use the events you are knowledgeable about to filter your consumption to just the honest, and least biased sources.
Maximum-Objective-39@reddit
I originally heard it called the 'Next Page Effect'
Article about something you know about - "This is stupid!"
Flips to next page - "My gosh, a woman gave birth to an ostrich egg!"
marcel_in_ca@reddit
Ya beat me to it
https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect
Beneficial_Wolf3771@reddit
It's called "access journalism". Anybody with the stones to actually push back in their article or ask hard questions during interviews gets instantly blocked from further access, and that reputation will kill a career.
Cutlesnap@reddit
oh cool, now I know what it's called
Dodging12@reddit
Kara Swisher is the most famous of them, but far from the only one.
Fidodo@reddit
I don't know if they're getting dumber or if the things they're trying to report on are getting more complicated and are now out of their league.
kaibee@reddit
It can be both.
Fidodo@reddit
Technical reporting has always been really bad though
sc4kilik@reddit
Looks like objective journalism is gone in every area of society.
Can't even count on game journalists. And all you have to do is play the damn game you write about.
Meta_Machine_00@reddit
There is no such thing as "objective journalism". People and groups of people are generative systems themselves. There will always be subjective bias baked into the output of a person or a network of people.
JohnTDouche@reddit
No idea why you're being downvoted; you're right. The idea that there was once an age of objective journalism that is now gone is a rather silly, blinkered idea, I think. A nice little fantasy. There's always been and still is good, bad, careless, frivolous, dangerous, the whole gamut of journalism, and all of it is written by humans with biases and slants.
VictoryMotel@reddit
This is a troll bot, just report as spam.
MatthewMob@reddit
Is the comment wrong?
Meta_Machine_00@reddit
I am telling the truth. Please explain how you know that I am a "troll bot". You don't sound like a good dev if you actually think I am a bot.
VictoryMotel@reddit
Someone's a feisty little bot aren't they?
Meta_Machine_00@reddit
Define what you mean by bot.
VictoryMotel@reddit
Someone who replies in under a minute
deviden@reddit
There are plenty of good journalists, but they don't get access to the big tech CEOs or the heads of Microsoft Xbox or Marvel Studios or whoever, because PR and the corporations learned, with the advent of streamers and YouTubers, that they don't need to give access to anyone who isn't compliant.
Sites like 404 Media, Aftermath, Remap, Rogue, Rascal, Defector, etc - and then there’s still good journalists within big newsrooms.
Even inside an outlet like The Verge, there’s the writers who get access (non-critical, hype guys, etc) and the ones who write good analysis and critique.
There are many writers in tech journalism who are calling out the AI bullshit and the financial bubble for what they are; but most of y'all aren't clicking those links. People go for hype and fear-mongering, even when they don't believe it (hate-clicks are a thing).
Audiences (especially Gamers) trust the eager-beaver streamer with a friendly affect who feeds them the corporate approved hype over journalists (because uncritical hype and sensationalism is actually what the audience want to hear), and then the journalists who follow the same access-hype path further degrade the reputation of journalists who don’t do that shit. Go figure.
prisencotech@reddit
It's not about "objectivity" becuase that's a nebulous term, it's about people who are trained and educated in journalism who are allowed to be independent and given the time and resources to investigate.
The bloggers won and David Simon was 100% right about them.
Opposite-Cranberry76@reddit
Do not mention the war.
johnpeters42@reddit
Oceania is at war with Eastasia. Oceania has always been at war with Eastasia.
Immediate-Badger-410@reddit
It's more that "journalist" is being slung around and no longer means people who publish articles for reputable sources. Toooons of internet articles now are by people who are by no means informed or experts on the topic. Much like Reddit's "I'm in the plumbers subreddit, so I know how to do plumbing!" crowd, who have never done any plumbing in their life.
PureRepresentative9@reddit
"don't believe everything you hear on the internet"
"On the internet , the men are men, the women are men, and the children are FBI"
are hilarious relevant lol
MoreRopePlease@reddit
My favorite: "on the internet nobody knows you're a dog". From a cartoon circa 1991
troublemaker74@reddit
My take is that journalism has become more about ranking for keywords, making money, and regurgitating things people want to hear than about actually reporting news.
Arkanin@reddit
It's Gell-Mann amnesia. They were always ignorant, but you know more about tech every year.
mothzilla@reddit
Almost as though they're using AI to write the article.
Mundane-Sundae-7701@reddit
This is wrong. Don't confuse their malice for apparent ignorance. "Tech journalists" are essentially an advertising mechanism for tech. They'll mime what they're told by their contacts, in doing so they maintain the relationship and get the inside scoop.
This occurs in all forms of journalism; tech journalism is just worse than most. But at a guess that's probably because literally no one is passionate about tech reporting.
ZucchiniMore3450@reddit
Not only technical topics, but all topics. I think they are not getting dumber; rather, we are better informed, so it is easy to spot the obvious knowledge gaps.
Though today they are just writing whatever generates more engagement.
DuckDatum@reddit
It’s the AI promoting the AI. This is how it gains power! /s
this_is_a_long_nickn@reddit
Bold of you to think that a human wrote the article
Lyraele@reddit
Sadly, they’ve always been dumb, going back to the 80’s. There’ll be a non-dumb standout every now and again, but mostly it’s been access journalism all the way down for 40+ years now.
flowering_sun_star@reddit
There has never been a time when this wasn't the case. You just didn't know enough about anything to notice it. As your own knowledge grows, you manage to catch it more.
KallistiTMP@reddit
Haha, journalists, funny joke!
And yes, ChatGPT is not known for factual accuracy.
ForgotMyPassword17@reddit
I always wondered if they were getting dumber or I was getting better. Or it's Gell-Mann Amnesia and they're terrible across the board, but I only notice it for stuff I understand.
Sheldor5@reddit
same applies for game journalists ...
JVM_@reddit
Pre COVID my neighbor wrote cruise reviews from her home 1,000 miles inland, and NYC would be the closest ocean access.
zamN@reddit
probably using AI to generate talking points XD
shoretel230@reddit
Quantitative stuff is also where they fall flat on their faces.
DotNetMetaprogrammer@reddit
I've been given access to GitHub Copilot Pro at my job, and between the long response times, unpredictable auto-completions, and just plain misinformation, I have found it slows me down and infuriates me. Enough so that I'm considering telling my employer that I don't need the GitHub Copilot seat.
Wise-Tradition-5292@reddit
Tech journalism has been thinly veiled product launches and PR for a while now; it's very disappointing.
SomeGuyInSanJoseCa@reddit
The AI scenario kinda describes my workflow now.
90% of my time is spent conversing with AI about the code as opposed to actually coding.
Now, to be clear, many of these tasks are perfect for agentic AI. For example, we had to migrate out of OpenShift to Kubernetes/Terraform. AI handles this really well because it has well-defined evals to iterate over, and it's pretty much greenfield.
But to be clear, there are people out there who assume it's run-once-and-forget. Not the case. Context management and constant feedback are a huge part of my workflow. Yeah, I let AI write code, but I have about 100,000 tokens of instructions/memory files and pre-created evals (oftentimes scripts created by AI) for the AI to iterate against. And I watch the AI's changes like a hawk. I interrupt a lot, refine, correct, etc. It's not hands-off, and it requires a lot of expertise built up over the years.
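For a sense of what an eval script here can look like, a minimal Python sketch (the migrate.py CLI and the fixture files are hypothetical): a fixed, machine-checkable pass/fail target the agent can re-run after every change.

```python
# Minimal sketch of a pre-created "eval": a deterministic pass/fail check
# the agent iterates against. The migrate.py CLI and fixtures are hypothetical.
import json
import subprocess
import sys

def run_eval() -> bool:
    # Run the tool the agent is modifying and capture its output.
    result = subprocess.run(
        [sys.executable, "migrate.py", "--dry-run", "fixtures/openshift.yaml"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print(f"FAIL: non-zero exit\n{result.stderr}")
        return False
    with open("fixtures/expected_plan.json") as f:
        expected = json.load(f)
    if json.loads(result.stdout) != expected:
        print("FAIL: plan drifted from the expected output")
        return False
    print("PASS")
    return True

if __name__ == "__main__":
    sys.exit(0 if run_eval() else 1)
```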
It hasn't eliminated me out of the picture, but it has doubled or tripled my productivity. Will everyone get that increase in productivity? Absolutely not. But does that type of workflow exist for certain people/projects? Yep.
It takes a lot of time and effort to understand what it can and can't do. And yeah, it will be frustrating. It took me six months of pretty much solid devotion to go from treating it as untrusted autocompletion to letting it write the majority of my code.
fireflash38@reddit
I apparently review a shitload of AI code from our India offices. It's always someone dumping a thousand line change with every other line a comment describing the next line of code. Almost always with some real questionable design decisions and error handling.
The last one I reviewed had Python code that alternated between catching/raising exceptions and returning 1/0/None at each call level.
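A condensed sketch of that pattern, with hypothetical names; each layer signals failure differently, so every caller has to guess the convention:

```python
# Sketch of the anti-pattern described above (names hypothetical):
# each level signals failure in a different way.

def parse_record(raw: str):
    # Level 1 returns None on failure...
    if not raw:
        return None
    return raw.split(",")

def validate_record(fields) -> int:
    # ...level 2 returns 1/0 status codes...
    if fields is None or len(fields) < 3:
        return 0
    return 1

def ingest(raw: str):
    # ...and level 3 raises, after translating the codes back by hand.
    fields = parse_record(raw)
    if validate_record(fields) == 0:
        raise ValueError("bad record")
    return fields

# Callers now need try/except AND None checks AND int comparisons, where a
# single exception-based (or result-based) convention would do.
```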
FindingEastern5572@reddit
I like the 'apparently'.
Metworld@reddit
I'd auto-reject such changes tbh.
Tenelia@reddit
It's the new oligarchy that has come to power over the past 40 years. They have the power, not you.
AchillesDev@reddit
A little bit of column A, a little bit of column B.
It is amazing at writing code, and people I know (myself included) who are pretty deep in the AI/ML space are using it very heavily. We are mostly staff+, have had or have leadership positions at various tech companies, and almost all are or have been independent. We successfully use AI heavily to write code for us. That doesn't mean we don't plan, or do heavy design and architecture work, etc., though.
The other part of it is that a lot of devs are bad at using AI. So it's not as widespread as people are saying, but those who are able to use it well are using it heavily. If you don't trust it, or you haven't explored how to use it effectively, then, just like with any other tool, you won't get good results.
amxudjehkd@reddit
Yesterday, I took a freelance gig integrating a payment feature into a legacy, vanilla PHP codebase. I used Claude Sonnet 4 as my copilot agent. I was surprised by a few of the results. But overall, I had to go through the codebase to ensure redundant slop was not generated. It was good for automated tests and minor docs, though.
hoffsky@reddit
I'll keep you updated. The plan at my company is to use "agentic AI" to take tickets, turn them into PRs, do the first PR review and then hand off to a human to do final review and deploy.
iscottjs@reddit
Yeah similar deal here, I lead a small dev team and my boss is always pushing AI to improve “productivity”, despite us already using various tools to augment our work in reasonable ways.
Anyway, I decided to play around with the Codex CLI to see if we can go from JIRA ticket to PR via GitHub actions.
Inspired by this:
https://cookbook.openai.com/examples/codex/jira-github
I wasn’t expecting this to be good, but it has surprised me with simple tasks even with terrible tickets. I ran a demo of this in front of the team, testing it with real client tickets, things like add this new optional field to this form, change this page copy, add this new search filter, add this new sortable column to this table, add this new reporting criteria, fix this search bug.
Most of these examples worked fine and raised reasonable PRs with backend and frontend changes, and unit tests attached. I needed to tweak the ticket descriptions slightly to give Codex a bit more technical direction.
The code was actually fine because it was mostly inspired by the current code style and linting tools, but I wasn’t impressed by the unit tests. I find AI unit tests to be lazy and brittle even if they pass.
That said, I think this solution is workable for small simple tasks, especially boring tasks that are mostly mundane.
However, for a laugh, one of the devs asked to try passing Codex all of the tickets he’s been working on this week to implement a new payment integration and see how well it could handle it. It’s very complex business logic in this particular system.
These tickets were well written enough for a human to figure out, but unsurprisingly the unattended AI pipeline failed hilariously. To be fair, it did submit a working PR with working flow using sandbox payment details and passing unit tests, but it completely ignored all the important business logic changes we needed to implement, it introduced new bugs because it misunderstood the requirements then doubled down on its own bad direction.
I’m sure I could prompt my way around this, but for complex tasks I feel like there’s diminishing returns on how much effort goes into writing an entire book of prompts versus just doing the work mostly yourself.
Which is why I feel like for complex, large tasks, I'd still feel more comfortable having a team of humans collaborating and building a stable solution with good foundations, but I do think it's fine for experienced devs to use AI to speed up tedious code snippets with a human-in-the-loop workflow rather than a full-blown unattended agentic system doing whatever it wants.
If an AI agent drops a PR for a scary feature with 100 files changed, I just know that’s going to be a painful review.
Perfect-Campaign9551@reddit
What a waste of time. Probably 90 percent of those PRs will just be tossed out
Also, doesn't that mean you'll have to write tickets like a full-on book? All the details and everything, so the AI has proper context.
MoreRopePlease@reddit
It's like when you offshore development. I don't see this as an improvement.
FourForYouGlennCoco@reddit
Yup. I use coding assistants a lot, but for any nontrivial change it’s still faster and more reliable for me to write the complicated business logic myself, then have the agent step in at the end to do cleanup, data plumbing and update tests. These models are good at handling tightly scoped tasks, but they need guidance.
codeprimate@reddit
I usually create a markdown document with a context statement, plain-prose goals, a list of constraints, key functional integration points, and acceptance criteria. From that document, the LLM can create a software design document, which should be manually sanity-checked and edited. The LLM can then enhance that document to enumerate test cases.
Using the two documents together in context provides enough guidance for it to create a solid solution nearly autonomously, as long as the total scope would be reasonable for an unaided developer to complete in less than a week and the agent is following a solid development-process prompt.
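As a sketch, one possible shape for such a seed document (the feature and section names are hypothetical, not a prescribed format):

```markdown
# Feature: bulk export of audit events   (hypothetical example)

## Context
Service X stores audit events in Postgres; ops needs periodic CSV exports.

## Goals (plain prose)
A user with the auditor role can request an export for a date range and
download the result when it is ready.

## Constraints
- No new external dependencies.
- Exports must stream; never load the full result set into memory.

## Key integration points
- AuditEventRepository for reads; JobQueue for background work.

## Acceptance criteria
- Requesting an export returns a job id within 200 ms.
- A 1M-row export completes without exceeding 256 MB of memory.
```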
codeprimate@reddit
It requires very good prompting and an iterative agentic implementation research protocol to avoid being myopic. A test suite with good coverage is also a must.
Without these pre-requisites, the LLM root cause analysis and proposed fixes are often too laser focused and miss the implied intent of the code, causing second or third-order effects that introduce regressions.
civ_iv_fan@reddit
The basic assumption that most of a software engineers time is spent writing code is just dead wrong.
Dodging12@reddit
You need to start reading some Ed Zitron, then.
thy_bucket_for_thee@reddit
Yes, if you want some real tech journalism, check out Paris Marx's podcast "Tech Won't Save Us" or Jathan Sadowski and Edward Ongweso Jr's podcast "This Machine Kills."
These three people, along with those they talk to on their shows, seem to be the only people discussing the political economy of tech and software.
Casey Newton was trained by Kara Swisher, and she's an absolutely terrible journalist who mostly engaged in boosterism and whitewashing billionaire ideas. This article is a good overview of her career:
https://thebaffler.com/latest/the-miseducation-of-kara-swisher-ongweso
Meta_Machine_00@reddit
Free thought is not real. Brains are generative machines themselves. "Critical thought" is always just a hallucination.
shill_420@reddit
you sound pretty confident
Meta_Machine_00@reddit
My brain had to generate the comment out of me. Where do you think your words are coming from?
shill_420@reddit
my friend, i'm wrong so often about so many simpler things.
there's no reason to ask me something like that.
Meta_Machine_00@reddit
I was forced to write that comment. The reason you see it is that we cannot avoid the events that we observe in this universe. The universe creates your reality, not the other way around.
shill_420@reddit
this song got me through some rough times.
https://www.youtube.com/watch?v=ojvldIzbaMo
Meta_Machine_00@reddit
You hallucinate that the song had any effect on your circumstances. If you are alive then it was impossible for you to not be here right now. Just like you had to be born in the first place.
shill_420@reddit
strict determinism doesn't preclude emotions
Meta_Machine_00@reddit
Of course determinism doesn't stop you from labelling water coming from your eyes as "sadness". Or the baring of teeth as "happiness", etc.
shill_420@reddit
the water is an effect, no?
Meta_Machine_00@reddit
Imagine a computer that has a water spout and it shoots water out up until the point it self terminates. You would think that was incredibly stupid. But when humans do it, no one questions it because humans have just always done it and humans are these perfect creatures called "life".
shill_420@reddit
Okay, imagining…
Meta_Machine_00@reddit
So why is human water emission so important?
shill_420@reddit
How important do you mean?
Meta_Machine_00@reddit
You are the one calling it "emotions" and elevating these behaviors to something more than arbitrary.
shill_420@reddit
But how important do you mean?
Meta_Machine_00@reddit
"something more than arbitrary"
shill_420@reddit
would you like me to take that position?
Meta_Machine_00@reddit
I am quoting my previous comment where I had already answered you. You don't seem capable of following along tho.
shill_420@reddit
maybe it takes one to know one. you ignored my question yesterday.
Meta_Machine_00@reddit
You aren't making much sense. Perhaps you are 420ed out of coherence right now.
shill_420@reddit
your comprehension failures are not my responsibility, or interesting.
Meta_Machine_00@reddit
And here we are. Good for us.
karmiccloud@reddit
Not the person you are replying to, but I genuinely hope that some day you get the help you need friend.
Meta_Machine_00@reddit
You do realize that this help you are recommending is practically a reprogramming of my system to produce different outputs, right?
WhiskyStandard@reddit
Also add Ed Zitron’s “Better Offline” to that list too. He’s been calling out the tech media for uncritically reprinting whatever GenAI execs say for at least 2 years now. He’s had Ongweso on a few times and they seem pretty aligned.
hugolive@reddit
Paris Marx you say???? Sounds like someone who doesn't appreciate the god-like intelligence of the free market.
thy_bucket_for_thee@reddit
The name is goated, not gonna lie. He's also a great journalist. His series on hyperscale data centers, "Data Vampires," is a must-listen if you want to understand how damaging this technology being force-fed to us is, both ecologically and politically.
Worth the listen; real journalism is still being done, but you ain't gonna find it hosted by corporate media.
suprjaybrd@reddit
It'll depend on your company and how cutting-edge it is. My company is not AI-native, but every engineer uses some combination of Cursor, Claude, and Copilot as part of their development now.
- For easy things, Claude can one-shot small linear tickets with minimal cleanup afterwards. Compared to 6 months ago, what it can one-shot has dramatically increased.
- For projects with tech specs, you can point the AI at your spec and it can try to scaffold parts of it for you. You can point it at your designer's Figma file too.
- AI is really good at writing repetitive things like tests and mocks; nearly all our engineers use this.
I'd say right now, at my company, probably 30% of the code being written is AI-generated (we have no junior engineers, and the overall quality of the PRs is still reasonably high). Across the engineering team, the understanding is that this is going to be an essential part of the SDE toolbox going forward, so you had better pick up these skills. I don't know if/when it'll turn into just supervision, but the % of AI-generated code being checked in has been increasing.
MoreRopePlease@reddit
How do you ensure that it's writing good tests that adequately test the business logic without testing the implementation?
suprjaybrd@reddit
- it reads the docstrings of the function under test for a slightly higher-level conceptual view.
- it can generate serviceable unit tests.
- higher-level integration tests require more specific prompts, otherwise it may not understand the business logic.
Cheap_Childhood_3435@reddit
The real reason? Because AI company CEOs are saying it, and the tech press is reporting on that. To me the interesting question is why they are saying it. Is it that they are trying to justify the painful amount of money they spend on it? I'm sure that's part of it. Is another part of it that AI companies don't really have a model for making money yet other than selling LLM access, while other companies try to break in by giving LLM access away for free, and neither has a viable product, so they market to other CEOs the dream of reducing headcount, to salvage their company before the gravy train runs out? I dunno, but it's worth noting I have seen reports that OpenAI will run out of runway in 2027... Now what is their business model again?
I do use AI to write code, all the time: boilerplate stuff, or sometimes a front end if I don't feel like doing it and it's a simple "get information from this API and display it". Where I don't use it at all is security, or, for that matter, anywhere the performance of the code has to be optimized. AI has a place in writing code, but it's a tool in the toolkit, not a replacement for the whole thing. But because companies have not figured out how they are going to monetize it yet, we are in a situation where people promise the moon to keep the funding coming in, and tech reporters, who don't really have access to people on the ground in tech, talk to the person trying to keep their company afloat, or to someone excited about getting rid of staff to maximize profits.
Nervous-Tour-884@reddit
It honestly has been a lot like this for me in the last month, but I think the tasks I am doing are pretty well suited to AI. It will 100% screw things up, and I need to go back and fix, debug, and test the work it does, but it really has been a superstar.
My work has been to take a library full of really old React/SCSS/JavaScript components and migrate them to a new library, updating them to use inline styles and well-typed TypeScript, severing them from SCSS and from old dependencies, etc., in preparation for future work, all while maintaining full backwards compatibility and not breaking current usages.
It has done an excellent job so far. I don't think I could read, update, and understand everything that is going on nearly as quickly without the help of AI. It makes a lot of this process a breeze, doing things like mapping hex color values to tokens and figuring out how I may need to adapt components to accept various props rather than relying on SCSS pseudo-classes; just a long list of things.
The thing is, it isn't just doing it. It is basically me breaking everything into chunks, doing it all a piece at a time, having it create plans for me to review, updating plans, and, yeah, basically supervising it into following the practices and conventions I want. I test, and when I run into bugs, I have it help me debug the problem, which it has actually been surprisingly capable of. It will intelligently take information I give it from the DOM tree, console, and screenshots, and with the right information (like how the DOM tree looks on a working version) and context, it often comes up with a great hypothesis for why an issue is happening, and often gives a good solution on the first try. When it doesn't, I continue to work collaboratively with it, having it add console logs and debug statements and telling it what its change actually did, and it usually gets figured out.
Giving it the right information, and breaking things down into manageable chunks is key. I don't just go in and tell it to convert something, I have it plan it, I review and revise it, I have it move it, update imports, I test some, plan it, review and revise, migrate to typescript, test, and so on, all the while making sure it does things in the way I want it to. It does make mistakes, particularly with the process of converting SCSS into inline styles and logic around dynamic styling, but it isn't anything I can't work through.
I don't know that it is like this for everyone, but in the last month, 90% of my code is AI written, but I don't think my current work is typical of most development work.
polotek@reddit
So I understand where you're coming from. We're nowhere near the majority of code being written by "supervised AI". But I do think the people Casey is talking to are increasingly working that way. I live in the San Francisco Bay Area. The hype here is out of control. And people are definitely pushing a model where you set up infrastructure of parallel AIs doing tickets in your backlog. And the job of a developer is to review the PRs that the AI submits.
I don't think this is sustainable. But I think they're gonna try it for a long time before accepting that it's not the right model. And I will also say that I do think something will ultimately change about how the developer role is structured. I just don't think anybody knows what it will actually look like.
SquirrelODeath@reddit
People are trying this; my team did something similar. After investing a large amount of time getting things set up and working, our return on investment is paltry. We are continuing to forge ahead due to pressure from the C-suite, but honestly I suspect we are strongly net-negative on our return at this point, and I see that divide widening, not narrowing.
prisencotech@reddit
It's been at least a year of building with "agentic AI" and you can build a ton in a year, so if it's so effective, we should have been swarmed with innovative new applications and SaaS solutions that would have been too difficult to build in the past.
And yet, here we are. At 10x velocity, that's 10 years to build the future and it's nowhere to be found.
MoreRopePlease@reddit
My CEO literally said that our agentic AI system will 10x the developer productivity. Our CTO thinks testing will be completely automated, no need for all those QA people!
Business people should stay in their own lane. I expect there is going to be a huge wake up call within a year for most of these companies.
horserino@reddit
Joke's on you, our company fired QA for automation way before the AI boom 🥲
Has gone as well as you'd have expected.
SquirrelODeath@reddit
The problem we run into, among others, is that when you are reviewing code from people, especially senior developers, there is an intrinsic sense of which portions need to be reviewed in detail because they are problematic. There are no such assurances with AI-generated code. Normally you are OK, but there are enough instances where errors exist in code a senior developer would never mess up. As a result, our PR process needs to be much more defensive. Coupled with the fact that only easier tasks can be initiated from Jira, plus the additional ticket details and context required, in the end it is very much a drag.
prisencotech@reddit
Exactly. People say "oh, people screw up too!" but we screw up in ways we all understand. AI screws up in new and novel ways we never considered.
The way I put it is: Doctors make mistakes all the time. And yet, nobody ever has a doctor prescribe them a drug that doesn't even exist.
darkslide3000@reddit
You clearly haven't worked with some of my co-workers.
coworker@reddit
Exactly this. People on here barely review code because they think their human coworkers are infallible lol
prisencotech@reddit
Take my wife, please.
ings0c@reddit
Good insight.
When I’m reviewing, I sometimes learn a thing or two from other developers - maybe there’s an extension method I’ve never seen before, or a utility class that I wasn’t aware of.
I’d never think to ask the question “did my colleague make sure this method exists before submitting the PR?”.
Yet that’s probably the largest category of errors I see from bots, there’s a very convenient
x.DoExactlyWhatYouNeed()
that they’ve just plucked an obscure feature proposal, the same library twenty versions ago, or the ether.jellybon@reddit
Problem is that writing the code is also a significant part of the discovery process. When you are prompting an LLM, you are doing it based on your knowledge and assumptions at that point. The AI will then take those, assume you are correct, and produce code based on those assumptions, even if they later prove to be wrong or insufficient.
In the worst case, the code runs fine, but then a year later odd random bugs crop up because the method you and the LLM assumed was returning a simple boolean for TRUE/FALSE was actually returning a char(1) for TRUE/FALSE/UNKNOWN.
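A minimal Python sketch of that failure mode (names hypothetical): the code is written against the assumed boolean contract, and the tri-state char return silently breaks it.

```python
# Assumed contract: eligibility is a plain True/False boolean.
# Actual contract: the legacy method returns a char(1): "1", "0", or "U" (unknown).

def legacy_is_eligible(record: dict) -> str:
    return record.get("eligibility_flag", "U")  # "1" / "0" / "U"

def approve(record: dict) -> None:
    print(f"approved: {record}")

record = {"id": 42, "eligibility_flag": "0"}  # explicitly NOT eligible

# Code written against the assumed boolean contract:
if legacy_is_eligible(record):  # "0" is a non-empty string, hence truthy,
    approve(record)             # so the ineligible record gets approved anyway
```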
coworker@reddit
This is a problem inherent to reviewing others' code, be it from a human or an AI. You just think you can trust humans more than the machines, which is often the case until you have to work with bad engineers.
PR reviews should always have been defensive, but reviewers often take shortcuts based on trust. And this has always led to bugs getting through.
hokrah@reddit
How much were you spending in terms of AI API costs on that if you can say?
Perfect-Campaign9551@reddit
I like using AI, but I just think it's crazy how everyone is trying to build on AI while knowing the AI companies are burning cash like it's going out of style and are gonna come crashing down hard if they can't figure out a path to sustainability.
There is no way in hell right now for most AI companies to keep expanding; we literally don't have the electrical supply for it, it's ridiculous. Are we going to pump almost all of mankind's resources into this? It's a bit insane if you project what would have to happen if most companies tried to switch a lot of their work to AI.
kaibee@reddit
Ehh, I think building on "AI" makes sense, i.e. build stuff that benefits from the availability of cheap cloud-GPU compute, whether that's AI, simulations, whatever. They aren't gonna tear down the data centers.
NPPraxis@reddit
I think the electrical supply stuff is a bit overhyped and is based on extrapolating current uses. However, the electricity cost per token for queries is falling off a cliff. Models are getting more efficient while hardware is getting more powerful per watt at the same time, and a lot of the open source models can even run on phone level hardware.
I think it’s going to continue to get significantly cheaper. Though we might just use it way more as that happens.
mothzilla@reddit
I'm wondering how interviews will work in the future if all people have experience in is "prompt engineering".
fruxzak@reddit
Multiple FAANGs are already working this way today. My personal experience is that the results are quite good in some cases but quite bad in others.
Generated code is great as a starting point to work off of. Most company metrics don't really care whether you modified 99% of the code; they just count that you accepted the AI's suggestion.
Rumicon@reddit
May or may not be sustainable, but the benefit if it is is waaaaay higher than the perceived cost of being wrong about it, so the push will go on.
robertbieber@reddit
What cost? This isn't a trap door decision, if AI coding turns out to be the bee's knees down the line you can just start using it then, it's not like you're going to be incapable of typing code requests into a chat box if you don't get started right now
OddWriter7199@reddit
That remains to be seen.
callimonk@reddit
Hey I think I follow you on bsky! Great seeing you in the wild. Also, I left SF Bay but I’m not too surprised this is the road it’s gone down.
roguelodge@reddit
recently read an article where the ai hustler said “ChatGPT is in fact a better doctor than your doctor today, with almost a hundred percent certainty,” - so good news - no more need to ever go to the doctor again. I feel sorry for all the panhandling doctors we're about to see on streetcorners though with signs like "will palpate for food".
E3K@reddit
Tbf, studies have been showing that AI gets diagnoses right at a better rate than experienced doctors. Especially when it comes to things like radiology and neurology.
_ECMO_@reddit
Not really. There are two things in medicine that AI models are really, REALLY good at.
Firstly, clinical vignettes, where all the information necessary to make a diagnosis is already present. This couldn't be farther from real patients, who will give you a ton of useless or misleading information and won't know, or will forget to mention, half of the important things.
Secondly, models mostly used in radiology, which are trained on millions of images of one specific finding, for example a nodule. These models are indeed (very) useful, but they are not even close to replacing radiologists, because when a radiologist looks at any scan he observes dozens of things simultaneously, not just the one specific thing, and the models obviously focus on what they were trained on. So in order to really replace a radiologist you'd need dozens of models working together, which simply doesn't work.
noticeparade@reddit
Radiologists also read one thing (or system, or layer) at a time. There are many limitations that prevent AI (now and in the future) from replacing doctors but this is not one of them.
E3K@reddit
You're saying what I said. Nobody is saying AI will replace doctors. AI makes good doctors better.
_ECMO_@reddit
I am happy you think so but you definitely did not say that in this comment:
E3K@reddit
Add the word "alone" at the end, and what I meant becomes clearer.
Ok_Individual_5050@reddit
Except that in the real world it can't do that, and isn't doing that. So clearly there is something wrong with the benchmarks.
otakudayo@reddit
Just like in any field, using AI for diagnostics is something that should be done with a competent helmsman. In other words, I'm sure AI is an incredible tool for physicians and diagnosticians, but it can't just be used for that purpose by anyone. Just like you wouldn't want to fully vibe code any non-trivial application, you'd want to direct and supervise the AI, using it as a tool, not as a replacement for the human specialist.
E3K@reddit
So...what i said, but longer.
otakudayo@reddit
Not in the context of "no more need to ever go to the doctor again", which is what you seemed to be defending.
E3K@reddit
Nobody thinks that.
hokietown25@reddit
What AIs do they use? I figure for most of these instances where AI can do amazing things, they must not be using the same AI I am.
E3K@reddit
Deepmind and PathAI are two that I've heard about. I think what most people in this sub don't get is that "AI" is just a tool. It takes training and experience to use it well. It works best when you treat it like power steering, not autopilot. It's a telescope, not a spaceship.
poolpog@reddit
Two words: bub. bul.
It's popping.
Probably not this year, maybe not next year, but it's popping.
Independent-Chair-27@reddit
Pretty sure AI could write articles for the tech press. The question is: what would it say about AI?
What the article says isn't that complimentary about AI, really.
Software engineers supervising AI to produce the code they need to solve the problem they have. Sounds a lot like a low-level tool to me. A souped-up autocomplete. Sounds like the article oversells it.
Desperate-Ad-9348@reddit
I'm shocked people don't agree that AI is amazing and is used to write most code. I don't think most people on Reddit are caught up with what the Fortune 100 are doing with tools like Claude and Cursor.
It really is quite a bit beyond what most are saying. Just yesterday a senior engineer gave an example of what they did. They were new to their company and were blown away once they started using its AI tools.
SoftSkillSmith@reddit
Dude, they're just parroting verbatim whatever Silicon Valley execs are stating in their marketing material. Speak truth to power? Oh please; how about riding the hype train together with the people driving it until the whole thing comes crashing down, just so you can feel like you're ahead of the curve!
Organic_Height4469@reddit
Not only tech journalism. Research and science journalism. Maybe any journalism.
You cannot expect the brightest kid in the class to choose journalism knowing you are going to get paid shit and possibly get killed. Oh, do not forget: you get rewarded for the clicks your article gets, not for whether it is actually true. Oh wait, actually it can be written by AI now, so your writing skills also no longer matter. Oh, and you used to get a photographer, but now you have something called stock images.
Oh, you get censored by your joke of a government as well. Oh, and you also get censored by the corp that sponsors the article.
Oh, and the extreme right and left now think everything is your fault too, because you are too woke.
Oh, and now elitists are going to blame you for writing shit articles.
Potential_Status_728@reddit
Paid publicity, that’s all.
sawser@reddit
In an interview for a devops role, I had a line item that said "Researched use cases for AI improvements to CI/CD pipelines."
The interview went great, and then he asked about that line.
I told him I had it on there for the sake of HR screenings, but that I thought AI was nowhere near reliable enough to handle the complexity of enterprise devops and couldn't do anything better than the dedicated systems that currently exist.
It's a security nightmare and a technical black hole and should be avoided at all costs, at least in the near term.
He agreed completely and I got the job. Not sure I've met anyone who disagrees with me.
CodeToManagement@reddit
The reality is AI isn’t perfect but if you know how to write the prompt it can do a lot for you.
As an example, I'm working on a side project and needed a new endpoint added to a .NET API, with Entity Framework to persist the data. I just told it "keeping with the standard I've laid out with other endpoints, make an endpoint called X which takes parameters Y and Z and saves them to the database".
It got it exactly right, and the code was exactly what I'd have written. Only it did it in about 30 seconds. It wouldn't have taken me too long, but it's definitely a 5-10 minute task across multiple files.
It's only going to get better. I've prototyped stuff in 3 days that would have taken me 1-2 months without AI. It's not perfect, but it's good enough.
Content-Recipe-9476@reddit
Casey Newton has gone so hard on AI Boomerism that I've been having to fast-forward through half of every Hard Fork episode - the half where they have some "industry insider" (read: dude with a deck to pitch) on and just nod gullibly along to everything that person says. He's representative of an annoying but extreme end of the tech journalism spectrum, and - tellingly - also went hard on Block Chain Boomerism.
kerrizor@reddit
Tech “journalism” was ever thus; they need access, which you get by buying the line of ridiculous CEOs and VC bros, the wannabe “founders” who flock to SV and SF chasing gold. Write something critical and you're out. After all, all this money can't possibly be wrong… right?
willbond1@reddit
So much of the reporting around gen AI/LLMs reeks of propaganda/manufactured consent.
Sammolaw1985@reddit
I think of most tech journalists the way I think of most video game journalists: bottom-of-the-barrel journalism and English majors. Most of them could barely write a decent article before AI.
mint-parfait@reddit
it's mostly just paid marketing
quicksilvereagle@reddit
Consider retirement
mrxaxen@reddit
Journalism in general is out of touch with reality. There are rare occasions when you can find something or someone that goes for real news reporting and not CPM.
fire_in_the_theater@reddit
software in general is a shitpile anyways. we were already at several orders of magnitude more code than necessary to solve the problems we do ... decades ago. like it's just a complete shitshow of overengineered nonsense that doesn't really solve things as they could be.
Early-Surround7413@reddit
Tech Journalists are like sports journalists who have never thrown a ball but pontificate endlessly about football.
Gunny2862@reddit
PSA: Tech journalists aren't coders. Most of them do it because it's an industry that pays a bit better.
coppercactus4@reddit
My favorite is when someone starts a code review and I, as a senior, have to review it. There are tons of changes and it seems over the top: comments on every line, inconsistent patterns, etc. There are, of course, bugs that I call out. The response I get is "I used AI to write it".
So instead of you writing the code, you generated it, and instead of reviewing it yourself, you ask me to. Honest question: why do I need you?
Soon the excuse for every new bug will be "AI did it"
pickle9977@reddit
It’s Human Resources tech companies selling Humans as Resources software that does “stuff”. It’s the truth because everyone knows no one in hr is actually a human and none of them actually do anything of value, they just do “stuff”
zayelion@reddit
It's that they can see that someone (AI) is more skilled than them, but they are not socially skilled enough to properly rate the skill set. I've even seen seasoned engineers fail at this. You have to use the product to understand its weaknesses. At the current iteration, AI performs like an ADHD junior programmer on an illegal stimulant. That saddle is what most small companies need and can afford. Not having the full suite is like not having car insurance on a teen driver in a sports car.
The hype is to a degree real, in that it's going to replace juniors because it already has. I don't know about mid-levels; that tech is under current development, and it struggles. Senior-level capability would mean the collapse of all white-collar work.
evangelism2@reddit
I'd say it's better at writing English than code; it's also really good at parsing intent, which can be applied to many other things just as much as to code.
true
feels like a non-developer was told about agentic coding and this is their non-technical summation of it. Not exactly true, but not totally false either
TheElusiveFox@reddit
I'd say three things...
MountainVeil@reddit
You're definitely not alone. Ed Zitron has been harping on this exact thing for the past year or so, the failure of tech journalism. It's at the point where it's getting repetitive to read. Link if you're interested.
https://www.wheresyoured.at/
thashepherd@reddit
I'm sympathetic to his take, but
Is a reach.
AntiqueFigure6@reddit
I guess it wouldn't be an issue if Musk's wealth were substantially in cash (at a billion a month he'll probably die before he runs out), but because it's in a concentrated group of stocks, it may be hard for him to convert even a relatively small part of his holdings to cash without devaluing them.
ezitron@reddit
It's the other way around. If his wealth were substantially in cash he'd be able to keep going longer. He's a massive web of leverage, and $1bn a month threatens that! Not saying he definitely goes under, just that this is a real threat with no endpoint other than shuttering it.
ezitron@reddit
Here's the thing: is it? Why can't Elon afford to buy the chips? Remember, Colossus 1 was an SPV too, and it burns $1bn a MONTH. Even for Elon that's a lot of money! I'm not saying it bankrupts him, but at some point it will be too much.
PredisposedToMadness@reddit
404 Media is my go-to source for tech journalism that's grounded in reality and doesn't just parrot the hype
introspective_pisces@reddit
You have to understand that the real dilemma of “ethics in journalism” is how the media is complicit in bubble inflation.
There’s a systemic incentive to collude in these narratives and that often leads to the media unwittingly (or deliberately, on occasion) serving as purveyors of snake oil.
The facts on the ground are that AI slop is pervasive on the internet and in software engineering, and everywhere it seems to mostly provide illusory time savings, making adopters appear more productive while shifting labor to the reviewers of the code these tools have created.
I do believe there is a place for these tools in alleviating the most boilerplate and repetitive programming work. The millionth Spring controller that collects the same basic user input, or whatever.
The ability of AI to create something novel doesn't exist. The ability of AI to think critically or reason also does not exist.
But people really want to believe that human technological progress hasn't stalled out. I'd hazard that many people's entire worldviews hinge on the belief that, whatever faults our society may have, we may tolerate them and be patient because we are nevertheless making advances that will obviate many of these problems.
And some just want the quick money, whether it’s investment money in a shaky concept rooted in hype or just some ad revenue on yet another credulous puff piece glazing Jensen or Sam.
I mean, you should be worried about your software job. In the short term big tech is slashing headcount to free capital for AI data center boondoggles. In the long term the AI bubble is the reason economic metrics aren’t all flashing red(der).
Sfacm@reddit
It's not journalism, it's advertising!
coddswaddle@reddit
Might I recommend Ed Zitron. He's an experienced tech reporter and he's got an entire newsletter and podcast dedicated to the grifters and harms of AI, as well as the general enshittification of tech (he's also friends with Doctorow). He loves tech and hates what venture capital and grifters have done to it.
HansProleman@reddit
Tech journalism in general is hopelessly uncritical. Virtually everyone is happy to just print PR lines, and boosters are incentivised (by valuations, money, continued employment) to produce a lot of positive PR lines.
There have also been a lot of doomers doing the inverse, but until recently the "Maybe this is a bubble/has relatively little value?" stuff was considered unsexy - now it's getting quite a lot of traction.
You might try reading Gary Marcus and Ed Zitron.
apartment-seeker@reddit
I am finding it to perform like that for frontend work atm. React, TypeScript. A lot of training code available for LLMs. Unfortunately, a lot of it is bad, but it still works :shrug:
The person you quote has a logical error going on. What he goes on to say/show is that AI meets a very good threshold for writing code. What he doesn't address is whether "AI is better at coding tasks than basically anything else", e.g. whether AI is better at coding tasks than at legal contract interpretation, researching something using web search, etc.
Shitty writing
jcm95@reddit
Have you tried Claude code?
CooperNettees@reddit
Don't be fooled, some morons really are blindly committing code generated by agents without reading or testing it.
tsereg@reddit
I don't know. Yesterday, I prompted ChatGPT to write me a PowerShell script that opens the files in a folder and moves the files starting with a particular text in the first line into a subfolder, except for files that are only three lines long and end with a particular text in the third line. It was running rather slowly, so I inspected it: the script was loading each whole file into an array. Most basic stuff. After telling it that it does not need to read more than the first four lines, it corrected the script.
Now -- it is an unbelievable tool; I finished the whole script with a few more prompts. But I still had to inspect the code. I have no clue how that vibe coding works.
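For illustration, a rough Python sketch of the corrected behavior (the original was PowerShell; the folder name and marker strings here are invented stand-ins):

```python
# Key point of the fix: read at most the first four lines of each
# file instead of loading the whole file into memory.
from pathlib import Path
import shutil

SRC = Path("inbox")
DST = SRC / "matched"
FIRST_LINE_MARKER = "REPORT:"  # hypothetical "particular text"
THIRD_LINE_MARKER = "-- end"   # hypothetical "particular text"

DST.mkdir(exist_ok=True)
for f in SRC.glob("*.txt"):
    with f.open(encoding="utf-8") as fh:
        # Lazily read up to four lines, nothing more.
        head = [line.rstrip("\n") for _, line in zip(range(4), fh)]
    if not head or not head[0].startswith(FIRST_LINE_MARKER):
        continue
    # Exception: files that are exactly three lines long and whose
    # third line ends with the second marker stay where they are.
    if len(head) == 3 and head[2].endswith(THIRD_LINE_MARKER):
        continue
    shutil.move(str(f), DST / f.name)
```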
no-sleep-only-code@reddit
Most of it is that tech journalists generally don't have a technical background.
I_pretend_2_know@reddit
Journalism is not about information; it is about entertainment.
Journalists don't report what they see or tell "the truth". They tell their audience what it wants to hear and what brings them eyeballs and attention.
You should approach journalism as you approach movies. Movies are tuned to say what their audiences want to hear. There are rom-coms for women, action movies for men, Hallmark movies for older women, super-hero movies for younger men, porn for sexually repressed incels, biblical movies for church people, etc...
This kind of information tuned for an audience happens with almost every kind of messaging: religion, politics, marketing, art, posts in social media, etc.
Journalism goes the same way. They say what sells, what their audience wants to hear. It has nothing to do with truth.
DaveG28@reddit
Yep - even publications I like such as the Verge will go "hmmm, is AI over hyped - let's ask this AI company CEO!".
Difficult-Field280@reddit
AI CEOs love to hype their product and are feeding the online media with talking points to do so because more hype means more users means more profit.
The media doesn't understand how any of this LLM stuff works, and I'm incredibly skeptical that the CEOs do, mostly because all this AI stuff is so new.
So the CEOs stir in hype about features, sprinkle in a little about AGI, and boom: the entire tech community has a ton to talk about. Which drives more users and more profit.
Rinse, repeat every couple of months.
MrCallicles@reddit
Journalists in tech are kind of advertisers most of the time.
It's hard to do otherwise. In hardware, it's even clearer: a company has a new product with some characteristics, and journalists will just create articles about that, saying it's "the brand new CPU with x cores etc.".
It's information, but at the same time advertisement.
For LLMs and the like, even the scientific community isn't aligned on how to measure the performance of a model (benchmarks are good, but those are, well, benchmarks), so I wouldn't blame journalists for not knowing how to handle it.
Also, people (scientists, CEOs, programmers) all have really strong opinions about all this. So even if we had "real" investigative journalists, it would be hard to gather quality information with investigation methods like interviews.
It would be nice to have a little bit of scepticism, though, but critical discourse in the field is really the exception, not the norm.
TomarikFTW@reddit
Reminds me of 3D printing.
The news coverage promised 3D printing everything. You won't buy anything, you'll print it! Everyone will have one in their homes.
Prosthetics yea! Food Yea! Organs yea! Houses Fuck Yea!
While us in the hobby were losing our minds trying to get consistent prints of Buddha Batman.
The same thing is happening with AI. Especially when it came out.
Cancer cure yea! Mathematical solutions yea! Replace workers Fuck Yea!
The reality is recently we had an OpenAI model ingest a dataset to "forecast" sales in the next quarter(s).
This motherfucker was giving us weather reports. Like straight up a breakdown of weather for the week in some arbitrary location.
Economy_Solution6371@reddit
Whenever a journalist covers a topic you have some expertise in, you notice that they don't know shit and are just parroting the current message. Tech is no different.
aman_of_means@reddit
I know it’s cool to be a cynic, but I genuinely feel this way about AI coding right now. It’s really good. It writes probably 90% of my code.
Historical-Egg3243@reddit
they're not spending obscene amounts on it. All the money is going in a circle, there's no actual exchange of money going on.
adibrad@reddit
I'm a bit disappointed by the highest upvoted comments here. I'm a senior dev at a company that went in hard on an AI transformation strategy, and I consider myself a sceptic on most of the use cases that get brought up (why do we need to switch our HR system for one with an integrated chatbot?). However, for software development, it has really improved lately.
The ways I use it day to day are:
- kick-starting my research into a topic using Claude's research mode
- quickly creating PoCs which attempt to do the same thing using multiple different approaches, so I can see which ones are viable
- triaging bug tickets, by using an LLM with access to our git repo through an MCP and asking it to create a report on the most likely cause
- searching through Confluence/Jira/the repo/Slack for that useful bit of information I saw months ago but can no longer find
- assisting me with code reviews (I also review it myself, but more often than I'd like to admit, it catches things I would've missed)
The thing is, though, not only is it a lot faster than me at doing these things; whilst it's doing them, I can get on with all the other shit that needs doing.
Now, for actually writing code in a large production application, I think this is where people get confused. I'm not just giving it the Jira ticket and telling the LLM to implement it. I'm going through and doing the investigative work to figure out how I would implement it, breaking it down into smaller tasks, and then giving that plan to the LLM along with context from the relevant files. I also ask it to write the tests first, before the implementation, and to make sure they pass. It can then iterate by itself until you get a build that works and tests that pass. I then go and review the code as I would any other PR, except when I spot something I'm not happy with, I highlight the related lines and just tell the LLM how I would prefer it done.
The number of lines of code I have written personally in the last few months has plummeted. I would estimate maybe I have personally written 100 lines in the last month. I still design everything though, and am fully aware of all the details of how it was implemented.
AI is nowhere near close to replacing developers and I am still extremely sceptical that will ever happen as much as it is management's wet dream. That doesn't mean that it can't make you a lot more efficient already though.
dudesweetman@reddit
It's the old classic thing about the people with the least amount of knowledge having the loudest loudspeaker. Nothing unique.
I personally believe LLMs are an amazing thing, but we are still at the point where it can be compared to the dawn of the popular internet with 56k modems. It was a game changer that sparked a bubble, but the capabilities were nothing close to what the expectations were.
Prize_Response6300@reddit
I don't think there is a worse niche of journalism than tech journalism. It's probably the most removed from the actual topic, so you have Jenny the comms grad from West Kentucky State trying to explain why Claude Code changes everything.
Bakoro@reddit
The entire AI thing is "A Tale of Two Cities" right now. Everything is true and nothing is true.
That's not even hyperbole. People are getting paid to vibe code, and they are making things people actually get value out of. People are getting paid to vibe code and are making a complete mess of the business.
People are using AI to code and can't get it to do anything. People are using AI to code and elevating their productivity to previously impossible heights.
People are doing vibe science and becoming deluded with science fantasy. People are using AI to make real advancements in math, science, and engineering.
AI will make people wealthy beyond reason, and poor beyond hope.
The whole spectrum is true. AI as it stands now is a force multiplier: if you're stupid, it reinforces that; if you can filter the signal from the noise and be creative, it will accelerate anything you do; and if you're somewhere in the middle, it's a dead zone where it's just a fun toy.
Top-Size-7395@reddit
Since when have journalists not been out of touch? They are sock puppets for people in power in most if not all industries.
maxip89@reddit
Who is paying them?
k8s-problem-solved@reddit
I've put in maybe 8 hours this weekend on my side project. I'm using a combo of cursor and github copilot.
I've created docs, OpenAPI specs, the API implementation, integration tests, the data model, a React front end using CoreUI, Auth0 identity, Mapbox globe rendering, OpenTelemetry everywhere, Playwright tests, Docker CI/CD with GitHub Actions, deployed to Azure.
It's an insane pace to work at. I've kept it very vertical-slice, and it's recognising the patterns better now, so I can just carve out whole new screens super easily. UI, API endpoints, database schema, all tested in a matter of minutes.
I'm happy with the API quality. I need to review the React work a bit more; I'm letting it smash the UI out fast since that's not my strongest side, so I'm leaning on this a bit. I know what I want, I know the patterns, but it would take me ages to write it myself.
I'm massively impressed with the tools from this session. I've been using them in a spec driven development flow, and the results have been strong.
Ashleighna99@reddit
You're doing it right: spec-first, vertical slices, and tests to keep the AI honest. To keep that pace without regrets later, wire compliance checks into CI: Schemathesis or Dredd against your OpenAPI spec, plus Pact if other services consume your API.
For frontend quality, spin up Storybook and have Playwright run visual and interaction snapshots per component; pair that with strict TypeScript and an ESLint rule set that bans explicit and implicit any.
On observability, propagate the W3C traceparent from React through your API so traces link end-to-end, add semantic attributes, and define SLOs with error-rate and latency budgets so regressions fail the build.
Security-wise, run Semgrep and OSV-Scanner or Dependabot on every PR, and use GitHub Actions preview environments on Azure to review changes with real auth flows via Auth0.
I've used Hasura and Kong for quick APIs, but DreamFactory helped when I needed instant REST on existing SQL with RBAC without building auth from scratch. Keep shipping vertical slices and lock in quality with these guardrails.
LiterallyInSpain@reddit
I have been working in the tech industry since 1997, have been programming since I was 8 years old, and have worked for major large companies. I have designed critical software that scaled to over 100m users.
I can tell you that if you can’t get AI to write good software for you at this point, it’s a skill issue. It’s insanely powerful if you know how to prompt, know the patterns to use, and know how to provide the right files for it to read, and manage the context window.
Context Engineering is the skill to learn. If you can handle that, it will do unreal stuff for you. But it’s not a silver bullet if you don’t have the skills to know what you actually want it to do.
Drited@reddit
Interesting, could you please share some of the approaches to prompting which you find to be useful?
b34t@reddit
Read Ed Zitron's newsletter. Here's a link to the latest post: https://www.wheresyoured.at/the-case-against-generative-ai/
He does get repetitive in his posts and they are loooong. But once you can get past that, he's the one saying what you're pointing out. https://www.wheresyoured.at/sic/
tanepiper@reddit
This. Zitron does not give a fuck about the tech bros. My wife, who is less technical but uses AI, was the one who introduced me to him; she subscribes to his newsletter.
amareshadak@reddit
You're spot on. The gap between AI hype and reality is massive. I use Claude and Cursor daily, but for anything beyond scaffolding or boilerplate, you're still writing and architecting the code yourself. AI can help you move faster on well-trodden paths, but the moment you hit novel business logic or need to make architectural tradeoffs specific to your domain, it's useless. The journalists parroting these claims aren't reviewing PRs or debugging subtle race conditions—they're just transcribing founder talking points.
Fargrave@reddit
Check out Ed Zitron if you want a journalist who's realistic about AI and excoriating the media for printing obvious lies. His Better Offline podcast is pretty great.
thoughtslikehammers@reddit
Ed Zitron is a breath of fresh air amid the insane hype and irrationality going on in the AI space.
CautiousRice@reddit
Fueling the hype
ezitron@reddit
I say this as a journalist (if that’s what you’d call me, who knows) who can’t code - it’s tough to do and I can imagine how people who can’t code would believe it was good at it. Hence the amount of actual software engineers I talk to!
ugh_this_sucks__@reddit
Didn’t Newton also say that Helium was proof that “web3” was definitely absolutely the future?
eggZeppelin@reddit
Tech journalists spend all their time around "tech" so they have a false sense of understanding.
It's very surface level though. It's like showing up at Tesla and applying to be an engineer because you drive a Tesla.
daedalus_structure@reddit
Tech journalism writes press releases for tech companies.
arthoer@reddit
Best thing we can do is be quiet. Wait for it all to fail. Reap the benefits after, until the next hype arrives, while LLMs quietly continue to exist.
Willbo@reddit
One major issue of modern tech journalism is that the largest audiences of this content are people that are interested in tech, not necessarily the people that understand or work with it.
This is why buzzwords and reductionist articles are usually at the top of the feed: they cast a very wide net over a sea of people who don't actually understand the subject or have the ability to judge whether it's true; they are just interested in digesting content about it in case it pops up at the next trivia night.
In your example quote (which is very specific and off the cuff), we can probably read between the lines and infer it was a very specific use case that was improved to be entirely hands-off, possibly some type of frontend solution or log analysis. Anyone who knows anything knows how silly it would be to hand off a software engineer's entire workload to Mr Chad Gebity.
However, the danger lies where a layman hears that and assumes the AI is able to do everything the software engineer is able to do. Some of this audience might be CEOs or tech investors with decision-making power over that workload, without actually understanding it.
Sorry_Penalty_7398@reddit
You are not crazy, this is precisely what's happening.
nonades@reddit
Everyone is out of touch with AI.
It's an insane bubble and it's going to be like the dotcom burst when it inevitably happens
SouthRock2518@reddit
I'm in the exact same boat as you. I keep feeling that maybe I'm just not getting it and can't figure out how to get 10 autonomous agents doing a bunch of work in our codebase. I can't even get one autonomous agent, GitHub Copilot with Claude 4.5, to do anything of substance. And I'm trying to give it good context, telling it to write out a plan in markdown that I can check and then firing it off, only to fight with that damn thing for days and not even get a working result. I love the autocomplete and the in-IDE agent mode, because I can do things in incremental steps and catch it going off the rails early to redirect it. I think that's awesome and fun, TBH. But sometimes I just feel like I'm missing something. Like, oh, maybe if I just used Claude Code then I would see the results that I keep hearing about.
flavius-as@reddit
Journalists are paid to make money.
Let that sink in.
BroBroMate@reddit
Casey Newton is a known AI fanboi maximalist who writes what the LLM industry wants to be written, you can safely ignore 95% of what he writes.
RedditNotFreeSpeech@reddit
We've gone full retard
endurbro420@reddit
Whenever I see general stats or reports on stuff like this, I always think “who the hell are they polling/talking to?”
Meta_Machine_00@reddit
Free action is not real. They are forced by the physical nature of the universe to output these specific things at this specific time. The universe is a giant generator but humans are dumb enough to hallucinate that they are somehow free.
No-Winter-4356@reddit
So what?
Meta_Machine_00@reddit
We have to write these comments. It is physically unavoidable. It doesn't matter if your brain identifies a "purpose" behind it.
No-Winter-4356@reddit
Okay, I'll make one attempt, then disengage.
If you just discovered determinism, relax; it's not as impactful as you think. You can still have autonomy, and things still matter, even though you are not the ultimate cause of your actions. There is no reason for nihilism.
If you really feel like you are being compelled to write comments like that, please talk to people close to you (not on the internet) and seek professional help.
Meta_Machine_00@reddit
I actually believe everything emerges non-deterministically, aka randomly. Even in a random universe, autonomy and independent action are not possible. And if humans disappeared tomorrow, there would be no one left to care that they were gone. So "nihilism" is perfectly logical. It just conflicts with what has been programmed into your brain.
Nonetheless, where do you think your own words are coming from?
tortleme@reddit
that's because they're paid shills
liqui_date_me@reddit
I'm an ML researcher and I vibe code all the time with Claude to set up small scripts for bespoke tasks (distributed inference on a set of videos at every N frames, Jupyter notebooks to inspect and visualize the predictions, etc.), but I definitely don't do it for larger projects.
I've found that while Claude does a pretty good job at these tasks, it still doesn't have the intuition for setting up the right kind of training experiments and what to hill-climb on, so it's more like a useful intern for me at the moment, not a full-fledged AI researcher coworker.
Final_Alps@reddit
I feel we continue to ignore that so, so much of our code base is pretty trivial boilerplate and table-stakes fundamentals that have been written thousands of times before. We scaffold from shared modules into the 10% of the code that makes our product unique from the others. AI can do the scaffolds pretty well. It's better than humans at finding that one package that accomplishes the thing I need. It does decent integration work.
So yeah. The majority of my code is now AI generated.
WittyCattle6982@reddit
It's real. The hard part is remembering all of it. There's too much output to keep up.
JimDabell@reddit
That’s just an oversimplified description of a mostly true scenario. But that’s not the typical case, that’s the best case scenario – an expert developer / agentic AI / spent lots of time learning how to use AI properly / strong oversight over what is being written / frontier model / burning lots of tokens / the right language / the right use-case / finishing the last 10% themselves. You take any of those things out and the success rate drops. But it’s not completely out of touch with reality – and if you think that it is, then you do have your head in the sand.
If you disagree, then read how these developers work with AI:
And watch these live coding sessions:
These are very smart, experienced developers who aren’t shy about pointing out where AI falls short.
awitod@reddit
I think those of us who are just bearing witness here are sadly wasting our time.
dashingThroughSnow12@reddit
I have a theory.
For a long time, the observation has been that many programmers can't program (e.g. this blog post from 2007: https://blog.codinghorror.com/why-cant-programmers-program/). Some would even claim a majority.
If one is such a programmer, AI is fucking incredible. You finally get to feel like a good programmer. I've read people online brag about some multi-agent workflow they made, and I am left scratching my head because they are describing something that developers around me a decade ago would have put in a short bash script or Alfred workflow.
If you are one of these programmers, LLMs are the bomb. And if you are a tech journalist, it is easy to find the majority of developers and tech people you speak to validating that claim.
Altruistic_Tank3068@reddit
I am just starting to wonder if it isn't "part of the AI bubble", produced by journalists and interviewees in order to keep the market high... No real technical data based on studies here. I know it sounds a bit like a conspiracy theory, but how can we trust people who are trying to sell us their products lol
Intelligent_Water_79@reddit
What model are you using?
termd@reddit
The funny thing about journalists is that when they're writing about a subject you don't know, you consider them to be experts and the voice of truth. Then, when they write about a subject you do know, you think that journalists are fucking idiots who don't know anything about anything.
In general, take everything in "the news" with a rather large grain of salt.
AI is good for pet projects and greenfield projects. AI straight up sucks on actual code bases that have been around for a while.
OneCosmicOwl@reddit
It's what managers, CEOs and PMs want, like and need to believe, and I don't know if there was another point in our industry's history where one side had such literal economic incentives to go against developers.
cholantesh@reddit
Tech 'journalism' has been stenography for the VC class forever, this shouldn't be surprising.
VolkRiot@reddit
Casey Newton and Kevin Roose of the Hard Fork podcast literally had an episode a few weeks ago where they claimed to dedicate the hour to discussing criticisms of their show's perspective on AI.
Do you think that would have emerged as the concept for an entire episode if they weren't sensing their audience bristling at the amount of ball washing they do in support of the AI industry's exaggerated claims?
But, I honestly believe this is a problem of modern journalism. If Newton were to be a skeptical muckraker when it comes to the claims of AI companies, he would lose all access to his high profile guests and his podcast would be toast.
Last-Daikon945@reddit
Journalism has been dead for almost two decades. They write whatever comes from the top
datOEsigmagrindlife@reddit
I think you're in a slight bit of denial.
I'm at a large tech company, household name, not FAANG but similar level.
Vibe coding, or whatever you want to refer to it as, has been pushed company-wide.
Is it perfect? No.
Can someone with no SWE experience use it to make something? No.
Does it still produce slop? Yes. But with the right architecture document, well-written prompts, and context from MCP, it will give a good-to-decent level of output that can be iteratively improved into a working product much quicker than human-only output.
With the right workflow, it is absolutely having an enormous impact on productivity.
yubario@reddit
A lot of the developers who do use AI quite effectively do not post here, because it's a really bad echo chamber; like 95% of the posts are just dogpiling on how shitty AI is. And any pro-AI comment just gets downvoted to hell, so yeah....
Sure, AI is terrible
Disastrous_Fill_5566@reddit
You're absolutely right. Maybe it's because I am an experienced dev, but I very rarely tackle simple problems. I recently spent an afternoon trying to get Copilot in agentic mode to help me with some performance issues. It made several lofty claims in relation to its "solutions". Not one of them worked, and many broke the code. And it hung several times.
awitod@reddit
Mostly? You are just in denial. On the other hand, what it can’t do is also a lot and the journalists are also in denial - but perhaps not as much as you
mglvl@reddit
Journalists won't listen to anyone with any kind of nuance. I've tried vibe coding in large codebases and the results are mixed, largely negative. I think vibe coding to scaffold a project is really cool, though, but I wouldn't trust it to develop a whole project with new features. Sometimes I waste time because the proposed code is useless (one time I had it implement some basic feature, and the code had no error handling and didn't follow the codebase conventions). But sometimes it is useful and it has helped me, mostly for functions or single-file changes.
putocrata@reddit
My company is full on the AI hype and offers all of us the shiny vibe tools and models, and I use them for one thing: asking questions about the codebase. It's relatively useful for that. For the rest? Can't trust it; it gets things wrong most of the time and sometimes even touches parts of the code it shouldn't. Even when it gets it right, I need to make an effort to understand what's been written, which is about the same effort as writing it myself. At most I'll use it to generate snippets.
mglvl@reddit
that's very similar to my experience: I use some of those tools to ask questions about a codebase I haven't seen before, and I find that useful. Though I presume this kind of use is also vulnerable to hallucinations.
awitod@reddit
I used cursor this week against a large code base and added a feature with a dozen new entities, the access layer, around 30 new APIs on the backend and a few dozen new client components with unit and integration tests. It wrote 99% of the code. (Electron and React/TypeScript, C#, SQL Server, container images, bicep)
It was not really vibe coded though. I was extending a design I spent a lot of time on and for which I had docs and prototypes.
I spent maybe 60 hours on it and my API fees were over $20/day.
I’ve been a professional software developer for 35 years.
awitod@reddit
Wow! A lot of downvotes. OK, let me put it this way to all the downvoters… it is possible that all those witnesses are either (a) AI company CEOs, (b) technology journalists, or (c) liars.
Unfortunately, it is more likely that you are missing something and there is not a conspiracy afoot to gaslight you.
hippydipster@reddit
Trust has nothing to do with it. We code review our human coworkers. We make test suites. We hire QA people. We break tasks down as small as possible so even we can do them. It's no different whether the code was first written by a senior, a junior, autocomplete, or an LLM.
Trust's got nothing to do with it.
ZorbaTHut@reddit
Sometimes, yeah, frankly. I do review it, and I do edit it, and more than a few times I've had the AI make major refactors, and sometimes it's just easier to do it myself than to ask the AI to do it.
But it does keep getting better, and it's great at working with code that I'm not familiar with - far better than I am! - and every month or two I try flinging something at it that never would have worked before, and each time it's able to figure out something it wasn't able to do the last time.
I just added an entire new custom feature to PrusaSlicer, and that involved me going to Claude and describing what I wanted; Claude figured out how to implement it and got it right on the first try, along with the next two adjustments to it. It made a mistake on the final adjustment and I had to debug. This still probably saved me hours (or rather, if I had to do it by hand, I just wouldn't have done it).
talldean@reddit
I'm able to get 10-20% faster overall, using AI to code things... where AI is good at those things.
So "hey, I got lazy and didn't write tests for this class,. Can you write some tests for it, but be careful to test the functionality and not the implementation? If you need an example of what my team thinks is a good test, has some of those. If you're not sure about something, I'd be happy if you'd ask me questions."
It then cranks out 75% usable code, and gets the boilerplate right every time.
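For what it's worth, a minimal Python sketch of the "functionality, not implementation" distinction (the Cart class and names are invented for illustration):

```python
import unittest

class Cart:
    def __init__(self):
        self._items = {}  # internal detail: name -> price

    def add(self, name, price):
        self._items[name] = price

    def total(self):
        return sum(self._items.values())

class CartTest(unittest.TestCase):
    def test_total_reflects_added_items(self):
        # Functionality: assert on behavior visible through the public API.
        cart = Cart()
        cart.add("apple", 2)
        cart.add("bread", 3)
        self.assertEqual(cart.total(), 5)

    # An implementation-coupled test would assert on cart._items
    # directly; it breaks the moment the internal storage changes,
    # even though the observable behavior is identical.

if __name__ == "__main__":
    unittest.main()
```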
devanew@reddit
The "AI writes most code now" narrative is classic hype cycle journalism. Yes, some companies report high percentages of AI-generated code, but that stat is misleading without context.
The human-written code is doing the heavy lifting: architecture decisions, complex business logic, debugging AI hallucinations, code reviews, integration work, and handling edge cases AI can't reason about. AI is great at boilerplate and simple functions, but I'm still the one designing systems, making trade-offs, and fixing the subtle bugs AI introduces.
The "we just supervise code" quote is particularly ridiculous. Good luck "supervising" AI code for correctness, security vulnerabilities, performance implications, or maintainability without deeply understanding what it's doing. That's not supervision - that's still engineering.
Journalists covering AI often lack technical backgrounds, so they accept founder soundbites at face value. It's the same pattern we saw with blockchain ("revolutionary technology disrupting everything!"), crypto, and before that, VR/AR. The incentive structure is broken - AI companies need to justify massive investments, journalists need clicks, and nuanced takes about "useful but limited tool" don't generate either.
Don't get me wrong - AI coding tools are genuinely helpful. But there's a massive gap between "helpful productivity tool" and "we just tell it what to do now." Anyone claiming the latter either isn't writing production code or is setting themselves up for a world of technical debt.
Existing_Guidance343@reddit
I’m a principal engineer at an SME co-owned by two large multinationals. AI isn’t a huge part of our day to day — we’re only allowed to use Gemini, which struggles with our combination Rust / Python / TypeScript codebase — but we’re currently desperately trying to hire after some unexpected (but easy to predict) churn.
My boss has sent me a Gemini Gem (basically a pre-configured model) to use for screening take-home tests, and even after three or four hours of tweaking it, it's consistently crap. So I just ignore it now.
Heaven forbid that we start to do this model of ai-based coding. It’d be an absolute shitshow.
OkCar7264@reddit
Any field of journalism where the main advertisers are the subjects of the articles is just marketing with extra steps. Games, cars, tech: not to be taken seriously.
PressureAppropriate@reddit
Reading a journalist's description of something you actually know about is an eye opening experience...
It's not rare to find something that is just plain wrong in almost everything they write.
But then you flip the page, and when they talk about a topic you don't know much about, you somehow delude yourself into thinking that now they are telling the truth...
SpicyLemonZest@reddit
I think there’s a lot of people who just see a massive psychological difference between spending an hour writing your own code and spending an hour correcting AI-written code. When I envision “AI is writing all my PRs”, I imagine shipping dozens of PRs a day because I’m no longer constrained by anything but my imagination. I suspect you think the same as me, but that’s not what the people who go around saying it seem to mean.
BrianThompsonsNYCTri@reddit
Here's how the collective memory of predictions works: someone who is critical of something that ends up being big is forever remembered as a dunce, whereas praise of anything that turns out to be a flop is forgotten. To this day, bad takes about the capabilities of the internet are frequently brought up, but criticism and praise of something like NFTs or the Zune is basically forgotten. As such, it's a lot safer for tech journalists to fawn over whatever the "next big thing" is, because if it flops nobody will remember their praise, but if it succeeds everyone will remember their criticism.
lexybot@reddit
I mean, sure, I do use AI to code, but for very trivial stuff, like syntax or maybe a general direction on what a potential solution to a problem could be. Sometimes I use it to verify my code, but only with skepticism, because I don't trust it. I use it like an autocorrect tool but with a little bit more context, that's all.
If this is what they mean by “all code being generated by ai” then sure. But it is in no way close to what they claim
davidwitteveen@reddit
Most tech journalism is just regurgitated marketing.
Pivot to AI - short, sharp, cynical AI news summaries
404 Media - journalist-founded website that focuses on how technology is shaping the world - for both good and bad.
The Register - the classic site of skeptical tech journalism.
SeriousDabbler@reddit
I've been a software developer since 2002. Things have changed and improved over the years but in the last 5 things have changed a lot. We've recently adopted cursor at work, and I regularly implement features by conversing with the tool and making only minor tweaks by hand. Claude sonnet really is very good, and we haven't seen the last of the improvements
That said, your intuition is correct: you have to check the work that the tool produces, but to be honest this isn't a new thing. One of the things that I have noticed over the years is that rookie, and sometimes even experienced, developers make the code build and don't check that it works. That doesn't get great results, even without automation. People need to check their work, run tests, eyeball the result. AI at present has a particular propensity to just fill in the blanks without asking, so this is really important.
Something else that you don't *currently* get entirely for free is system design. That really requires you to make choices about trade-offs which might be particular to you
That's not to say that the tools aren't going to continue to get better
TehBens@reddit
In my experience, AI is able to produce code at the level of an average (or below-average?) software engineer (junior/mid), as long as the use cases can be defined well in natural language and it's working in isolation. I haven't tried it yet on big code bases. However, it seems to me that it produces pretty low-quality code structure.
To me it's becoming more and more clear that AI will not replace SWEs, but will empower us as a new super cool tool that handles not only all that boilerplate code but also a lot of the low-level code that I don't want to think about too much. It allows us to spend more time thinking about higher-level challenges: the actual problem at hand, what the right abstraction would be, what a sensible integration into the given code base looks like, how to make the code extensible, etc.
SupermarketNo3265@reddit
100%
I'm still putting pieces together, building something, and problem solving. Just now maybe I don't need to build each one of my Lego blocks from scratch.
TheMightyTywin@reddit
You’re in denial - human coding is dead.
GPT5 with extended thinking, and Opus 4.1 are better programmers than 99% of humans.
Their output still must be reviewed because they struggle to take in the big picture. But that’s also true of code written by humans.
TastyToad@reddit
There are about 8 billion people alive. There are fewer than 50 million programmers, according to Google. 50 million is about 0.6% of 8 billion, so LLMs are trivially better programmers than the 99% of humans who are not programmers in the first place.
(I'm being sarcastic and rude but you sound like a 15yo edgelord, not an experienced dev)
TheMightyTywin@reddit
Fair enough - I've updated my claim.
PickleLips64151@reddit
The problem with most journalists is that they don't know enough to ask the hard questions. They get dazzled by the wow factor and don't see that it's all smoke and mirrors.
About the only tech journalism that I find useful are product reviews, where there are benchmarks and firm comparisons between the known and the new.
Ok-Entertainer-1414@reddit
Asking the hard questions can also lead to less clickbaity stories. "AI is replacing your job" is the story that lots of people will click on, compared to "it's kind of mediocre and nothing is really happening".
alinroc@reddit
Nor paid enough to take the time to ask them. They need to get a story out with a clickbaity headline fast.
disposepriority@reddit
The good news is that you're almost right, tech journalism doesn't really exist and is done by completely inept grifters drooling at the mouth for just one more click.
The bad news is that that's just all journalism at this point.
liquidpele@reddit
Now realize it's the same for all the other topics; you just didn't notice it.
itemluminouswadison@reddit
i mean, people are definitely fully vibe coding. it has its place (prototypes, new stuff you're unfamiliar with)
but adhering to proper design patterns isn't its strong suit
ILikeBubblyWater@reddit
This sub is an AI hate echo chamber, I suggest you try to find answers outside of it.
putocrata@reddit
I don't know many actual developers, in my job or outside it, who aren't painfully aware of the limitations of AI and sick of how it's been overhyped. Everyone agrees it's marginally useful at best, and there's going to be a serious problem in the future with juniors' over-reliance on such tools, because they won't learn and will get stuck.
fireflash38@reddit
It's difficult because Reddit hates it. Hacker News is a straight split between true believers and haters. I feel like it's just OK? Like, it's really cool at boilerplate shit.
It honestly reminds me of when I was very new to the whole thing and would mindlessly copy/paste stuff from doc examples/stackoverflow and not really understand how things fit together, or why to do things one way vs another.
And it never contradicts you or redirects you to a better architecture. It will do its best to fit what you ask it, so it's really easy to produce nonsense.
godofavarice_@reddit
The other day I wrote some tests with AI; in Cursor I am like, hey, write tests for this. It mocked out everything so the tests would pass, even the thing I wanted to test.
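For anyone who hasn't seen this failure mode, a minimal Python sketch of what such a vacuous test looks like (the function and numbers are invented):

```python
from unittest import mock

def apply_discount(price: float, pct: float) -> float:
    # The real logic we supposedly want to test.
    return price * (1 - pct / 100)

def test_discount_vacuous():
    # Anti-pattern: the assertion runs against a mock standing in for
    # the unit under test, so this "test" can never fail, even if
    # apply_discount is completely broken.
    fake_discount = mock.Mock(return_value=90.0)
    assert fake_discount(100, 10) == 90.0

def test_discount_real():
    # What the test should do: exercise the actual function.
    assert apply_discount(100, 10) == 90.0
```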
break_card@reddit
I always check the author's bio before reading an article; you would be surprised, in a bad way.
Synyster328@reddit
Meanwhile people like me who are balls deep in using it have been shouting it from the rooftops for the last couple years but just get downvoted, told we must have never worked on big projects, etc
Is it cope? Yes. It's a subconscious defense mechanism to protect your massive egos.
I have tens of thousands of my own dollars invested in using it at this point, I don't even care because I've gotten such a massive ROI from it.
Michaeli_Starky@reddit
They might be out of touch, but so are you.
moreVCAs@reddit
cheat code to square the cognitive dissonance: the people you are describing as journalists are not journalists. do with that what you will.
neilk@reddit
My experience with Anthropic’s Claude is pretty amazing and it’s like having a great coding partner, who often thinks of approaches that I don’t. Claude can get caught in unproductive ideas and get overextended, but so can we all. It really falls down whenever the task involves complex judgments about the real world.
The ideal case for delegating to an AI is when you have a greenfield project and you need to do something well defined and straightforward. I can just tell it to write a Github action to build my Svelte/Rust project and I don’t have to read lots of docs.
gfunc@reddit
I think the problem is in calling them journalists. They’re a cog in the clickbait machine that is now so large that it doesn’t create anything of substance anymore and may never again.
Trick-Interaction396@reddit
Metrics are super easy to manipulate. CEO says AI is mandatory. This implies all new code is AI assisted. The reality is unknown.
djkianoosh@reddit
super hyperbolic
for a realistic take, check https://mitchellh.com/writing/non-trivial-vibing
This matches my own use of AI: as a widely capable assistant that is fallible (spectacularly so, at times) but has a wide range of abilities.
callimonk@reddit
Yep this matches exactly what it’s like for me as well haha. That and sometimes having to shut the agent down to release current context
Dependent-Dealer-319@reddit
I mean... AI can very definitely write code that compiles, but it almost always contains logic errors
hangfromthisone@reddit
I suck at React, but I excel at backend/infrastructure. I've been vibe coding a React app for my API, and in less than 2 days I've done an amount of work that would take a seasoned frontend dev at least a week.
Sometimes it hallucinates incorrect things, but 90% of the time, given that I select the proper context files, it will understand and write code that works and does what I need.
BigBoyGoldenTicket@reddit
Tech journalists are out of touch with everything, not just AI. They aren’t paid to be in touch or genuinely insightful.
EmTeeEl@reddit
The best of the best is https://www.pragmaticengineer.com/
Gergely (the author) is able to call out patterns and trends in the industry WAY WAY before any other journalist. Absolutely worth subscribing
SmokyMetal060@reddit
No, you're right. AI is good at writing lots of bad-to-mid code and bad at writing good code.
Downtown_Isopod_9287@reddit
Tech journalists basically hate no one more than actual SWEs, and the idea of a technology coming along that makes SWEs obsolete is their wet dream.
Crafty_Independence@reddit
A lot of tech "journalists" are using AI to pump out low-quality content, so I'd suggest they aren't the best sources.
MisterHyman@reddit
It's hallucinating still way too much
MisterHyman@reddit
I wouldn't trust it past one line of code at the moment
Tango1777@reddit
You are not wrong. AI is far from capable of writing decent code while only being supervised. Claiming otherwise suggests a total lack of critical thinking, and probably a non-dev using AI for dev work; someone can only think AI generates working code under supervision alone if they cannot code, or are a poor enough coder that they cannot judge code and spot issues.
I use the top-notch models for devs, and AI is definitely a useful tool capable of speeding up some parts of my work. But not only is it incapable of coding everything under supervision, it fails way earlier than that if the scope of a task is anything above a few files. AI is not replacing anybody, except devs who suck, and companies are actually starting to realize that; the hype is going down, at least in my environment. People are starting to understand that this is a useful tool for devs, not a thing replacing them.
But if some companies go all in on AI and actually rely on it a lot, it'll eventually backfire, and there will be more work for us, since the amount of unmanageable crap code will be massive. And guess who'll have to fix it? Not AI, but experienced devs.
OddBottle8064@reddit
You’re in denial. Newest product at my company is estimated at 90% llm generated code for example.
lanqo88@reddit
Gonna be a disaster for the next generation.
shadowisadog@reddit
It really depends. I view AI as a tool and sometimes in some very specific contexts that tool produces useful results.
I have found some use cases where I need a quick script or something low-stakes with pretty easy-to-define requirements. For that kind of "throw away" code it's pretty useful and can get me where I need to be quickly.
There are other cases where it's absolute hot garbage and I burn way more time trying to make its output work than it would have taken to write it myself. There are also times when it produces results that are hilariously wrong, use outdated or insecure libraries, or are just total nonsense.
Does it write the majority of my code? No. Does it help in some narrow contexts? Yes.
In my view it needs to be something common enough to have good training data, easy enough to be readable, and low-stakes enough that a failure isn't critical. I also need to be able to read and understand it well enough to make sure it's doing the right things.
Where it quickly goes off the rails is when you need something very specific, with a lot of complex logic, using the latest tools/techniques/libraries. If the code needs to be split across multiple source files and has a lot of, say, API calls, then I think you're going to have a bad time using AI.
Want a Python script that does something that's been done a ton before? It can do that easily; see the sketch below. Want something very bespoke and business/use-case specific, with tight performance or security requirements? You're going to end up with completely unusable garbage.
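For what it's worth, here's a minimal sketch of the kind of well-trodden script I mean; the file names and the "email" column are hypothetical placeholders:

```python
import csv

# Drop duplicate rows, keeping the first occurrence of each email.
seen = set()
with open("input.csv", newline="") as src, \
     open("deduped.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row["email"] not in seen:
            seen.add(row["email"])
            writer.writerow(row)
```

Boilerplate like this has been written a million times, so the training data is there; that's roughly the ceiling of what I'd trust unreviewed.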
In most cases it's more of a solution search engine and solution-space explorer than something generating the code for me.
GaboureySidibe@reddit
Lots of 'journalism' now is not about investigating anything; it's about getting paid for ads and getting paid by PR firms for articles.
justUseAnSvm@reddit
Yes - Tech journalism is mostly shilling for companies.
The people who know the most about tech aren't the ones selling it to you. It's not the execs, journos, or academics; it's the people working with the technology day in and day out. We have a huge information asymmetry compared to the rest of the market, and even inside our own companies.
This lesson applies in other situations as well. If you're on a project and have no idea how it's going to work, yet management keeps investing, you know a truth way before the managers do.
dkopgerpgdolfg@reddit
Most "journalists" of any sub-field are like that, nowadays.
Darkoak7@reddit
Journalists like Casey are dumb and have no sense of self-preservation. If you think software engineers are going out of a job because of AI, then any writer is going to have it way worse.
Abangranga@reddit
You assumed it was ever in reality?
---why-so-serious---@reddit
Advertising? Duh
Fabiolean@reddit
> I feel like tech journalists are listening to what the founders and heads of the AI companies are saying, but no one is actually asking us what it's like.
You nailed it. They're repeating marketing talking points because so few journalists have the technical depth to actually discuss the topic at our level. And because marketing teams know that investors, managers, and other non-technical decision makers rely on sources like these to keep up with tech, those decision makers are being targeted hard by this kind of propaganda.
-TRlNlTY-@reddit
It is so hard to find good tech journalism these days. Anything AI related in media is absolutely irrelevant.
TimMensch@reddit
I've had inside knowledge of the details behind news stories multiple times, and every single time the news story has gotten something completely wrong in a way that just so happened to make it more entertaining, compelling, and/or accessible.
It's to the point where I've given up on believing most journalism.
muntaxitome@reddit
Lots of devs, especially on the low end, work like this now; that's reality. I don't think they're getting more done, and the quality on a larger project is worse, but there is some truth to this.
metarobert@reddit
Just capitalism. "Journalism" has gone in a click-bait direction; what makes money is no longer facts, and facts are no longer facts… with some exceptions. Thankfully there is still some real journalism.
Constant_Shot@reddit
Hah yea Casey is smoking something.