rexspook@reddit
I can’t think of one good reason to remove this
erizon@reddit
LLM content is absolutely banned in April https://www.reddit.com/r/programming/comments/1s9jkzi/announcement_temporary_llm_content_ban/
rexspook@reddit
Well that’s a stupid decision.
sasik520@reddit
It's not just stupid, it's purely idiotic.
rexspook@reddit
Right? Like let’s ban the hottest topic in software engineering in a programming subreddit. Absolutely dumb.
richardathome@reddit
Worth reading or (I suspect) more AI slop judging by the AI slop picture on an article about AI?
RScrewed@reddit
Could be AI but it reads with some poetic and literary flair.
tedbradly@reddit
AI, Claude in particular, can write some decent English these days. It's not nearly as bad as the old days when everyone mainly used ChatGPT. Like what, 2 years ago? And it's far, far better than GPT-3 was, the model that started it all.
Anthropic hired a bunch of well-versed writers to read and manually judge the quality of chunks of generated text. They took those, I'm sure, vast numbers of (text, rating) pairs to tune how Claude writes.
Don't take it just from me enjoying Claude the most when reading information, and don't take it from the fact that they invested millions in reinforcement learning to adjust how it writes. Take it from that recent poll some newspaper ran. They had several categories of writing, like fantasy, science for the layperson, history, etc. For each category, readers got two chunks of text and were asked to pick the one they liked most. The human-written excerpts were high quality, stuff from popular books. Long story short, in every category, Claude scored about 50% or better.
AI be writing quite well these days. Well, Anthropic's AI at least.
case-o-nuts@reddit
Yes, thanks Claude. Now, what does tedbradly think, if he still does?
tedbradly@reddit
Oh, and my account is like 15 years old. Feel free to look at my oldest messages. They're all well-structured.
tedbradly@reddit
Some people scored As in English, so they can write... cogent, structured thoughts? It's also an IQ thing. Dimwits can't write too well even if they try, so they always demonize anyone who can.
literate_enthusiast@reddit (OP)
It's fine, it's about how "adopting AI by decree from leadership" means a gradual accumulation of slopware that nobody understands.
rom_romeo@reddit
Or even uses.
chefhj@reddit
We’ve already had to sit through a bunch of presentations for new dashboards that have been spun up that are somehow supposed to be better than the dashboards we already have
RScrewed@reddit
You're the OP, your opinion is obviously that "it's fine".
Sharlinator@reddit
It’s okay, but could be condensed a lot and the style definitely triggers my LLM radar.
sasik520@reddit
I cannot judge if it's AI, but imho worth reading.
A bit too long but it makes some really good points.
kintar1900@reddit
Why the hell was this removed?
Objective_Badger007@reddit
Seriously why?
erizon@reddit
LLM-related content is absolutely banned in /r/programming in April, and this is undeniably about LLM https://www.reddit.com/r/programming/comments/1s9jkzi/announcement_temporary_llm_content_ban/
kintar1900@reddit
The images, yes. The article? I disagree...and there's no way to prove it, so while I understand the reason for the ban, enforcement is a shitshow.
erizon@reddit
Not sure what the designated twin subreddit for LLM content is (they seem scattered; it would be great if the mods put exactly one in "Related reddits"), but LLM-related content seems to be an invasive species drowning out everything else by sheer numbers. It makes sense to designate a safe haven for non-LLM content, as otherwise it gets lost.
pysk00l@reddit
It seems any article criticizing the AI hype vanishes. This is the 2nd time I've seen a great article just vanish.
EC36339@reddit
The author complains at length about a lack of validation but fails to explain how and why validation is lacking, and whether this lack is inherent to AI or just general carelessness, which has, or could have, existed before AI.
For example, marketing teams have always been good at justifying their existence and budget with questionable and non-falsifiable claims. Development teams have always produced millions of lines of code with insufficient test coverage and excuses not to write tests. Managers have always allowed tech debt to pile up. We have always had insufficient documentation and specification of what correct outputs are, and if such documentation existed, nobody would read it.
The author also suggests that the solution would be wider understanding of how LLMs and ML work under the hood, without explaining why. It makes one wonder if they are just bitter that their recently acquired skills are no longer needed outside the major companies that produce the major generic LLMs that everyone uses now, rather than training custom models.
Since the author likes analogies: this is like complaining that the finance team uses Excel even though none of them has ever designed a parallel prefix adder and none of them bothers to check the sums with a calculator or pen and paper (when in reality, you can use a spreadsheet to verify results in a spreadsheet and ensure that your formulae and setup are correct. Oh, and AI can help with that job as well...).
There are a lot of problems with AI, but constantly complaining about the wrong things or about solvable or solved problems isn't helping.
It sure does generate clicks and Reddit karma, though.
Jejerm@reddit
/r/okbuddygpt
pancomputationalist@reddit
That comment had more meat than the rest of this thread. It even contained a typo, which is a telltale sign of human-written text. Instead of engaging with it, people just downvoted it and declared it GPT output. Ironic.
EC36339@reddit
I've said it before, and I'll say it again:
AI has better reading comprehension than most humans, sadly also most of all the "problem solvers" that work in our field.
tedbradly@reddit
Absolutely. Most bad code comes from the minds of dimwits. Lazy dimwits to boot. Not only are they not so good at learning complex stuff, they don't even try. I've been trying to convince my brother that reading 30-50 pages of a programming book of his choice that relates to his work is beneficial. Extremely beneficial. Instead, he simply tries to reinvent the wheel and brute-force his way to his solution, and along the way, he watches stupid YT videos from people without any authority to trust. The video could literally be made by a C student. Why take that risk when there are critically acclaimed books, written by people with like 35 years of coding experience, that condense an important aspect of coding into 450 pages you can read in 9 days @ 50 pages a day? I even told him to bill the time: every work day, dedicate 1 hour to reading 50 pages. Always stay learning, and always integrate legendary books as part of that learning. Yeah, you need to learn through trial and error with new technology, but you can't beat soaking up massive amounts of knowledge from a great programmer with decades more experience than yourself, who likely put hundreds of hours into how best to preach about some aspect of coding that his decades of experience taught him was important.
I even tell him that famous Newton quote: "If I have seen farther than others, it is by standing on the shoulders of giants." If we didn't transmit knowledge, every generation would be playing with sticks to create shelter, fire, tools, and weapons. Any programmer trying to do their job without external, massive boosts of knowledge contained in books is just cutting their salary in half.
What books to read? Depends on where you're at in your career and what you are currently working with. You've got generic programming books about programming itself, useful to absolutely everyone in any situation. You've got best-practices books for the language(s) you are using the most, which are must-reads for anyone using a language. And then you've got all the books about particular frameworks, which are must-reads if you're using those frameworks.
HommeMusical@reddit
Very interesting article, though padded.
I have to say that reading about the Great Leap Forward again now in 2026, and looking at where China is now, does make me think quite a bit.
In 1949, China was basically all peasants. Now they're the manufacturing giant of the world.
However, I still hate the Chinese ~~Capitalist~~ Communist Party.
PancAshAsh@reddit
The Great Leap Forward was an abject failure that killed tens of millions of people in the course of 4 years. At the end of it, China was not industrialized and was no closer to being industrialized than it was at the beginning.
countkillalot@reddit
It could be argued that the response to the tragic failure and disruption led to the actual industrialization and education of the 80s and 90s.
The historical question is twofold: 1: was the forest fire of the 50s a prerequisite to the welfare 50 years later, or was it simply an avoidable tragedy? 2: is there a situation where the forest fire could have made the economic miracle of the 80s and 90s impossible, and was it luck?
rexspook@reddit
God this sums up my feelings on where we’re at right now so well
notsofst@reddit
My biggest complaint about AI adoption is the lack of benchmarks. So many in-house bespoke tools produce a neat demo with a tailored prompt, then fall flat on their face with literally any other use case.
AI is cool and does new and neat things, but people need to stop abandoning engineering rigor. Test your AI products, folks. Establish a performance baseline.
I agree with the author that exec enthusiasm is just jet fuel on this fire, and it probably won't stop until it blows up in their face.
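To make "establish a baseline" concrete, here's a minimal sketch of what an eval harness could look like. Everything here is a hypothetical placeholder (`toy_model`, the test cases), not any real tool's API; the point is just: fixed test set, pass/fail checks, a number you track over time.

```python
# Minimal sketch of a baseline eval harness for an AI tool.
# toy_model and the cases below are hypothetical stand-ins --
# swap in your real tool and a test set wider than the demo prompt.

def evaluate(model_fn, cases):
    """Run every (prompt, check) pair and return the pass rate."""
    passed = 0
    for prompt, check in cases:
        output = model_fn(prompt)
        if check(output):
            passed += 1
    return passed / len(cases)

def toy_model(prompt):
    # Placeholder for the AI product under test.
    return "42" if "answer" in prompt else "unknown"

cases = [
    ("what is the answer", lambda out: "42" in out),          # passes
    ("summarize this ticket", lambda out: out != "unknown"),  # fails
]

baseline = evaluate(toy_model, cases)
print(f"baseline pass rate: {baseline:.0%}")  # prints "baseline pass rate: 50%"
```

Run it on every change to the prompt or model; a demo that only ever sees one tailored input is exactly the failure mode described above.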
megayippie@reddit
Nice write-up! I enjoyed reading it. It is as clear that LLMs are incredibly useful as it is that LLMs are nowhere near real experts.
countkillalot@reddit
The article could have been a very good tweet. But the analogy is thought-provoking.
sasik520@reddit
A tweet? 2k+ words?!
obetu5432@reddit
you almost figured it out
omgFWTbear@reddit
Some people have made careers out of books and speaking engagements that are one; maybe two tweets worth of content.
countkillalot@reddit
It really didn't need 2k words to make that observation
rom_romeo@reddit
Damn. This is a good read.