Show of hands: How many of you feel your stomach turn whenever you run into AI content?
Posted by ninetofivedev@reddit | ExperiencedDevs | View on Reddit | 136 comments
This can be a few different categories, ranked to your preference:
- Obviously AI generated content
- Content that involves the topic of AI
- AI adjacent discussions.
Let's focus on content and AI discussion more so than the others. I think I have a pretty good grasp on why people don't like it when they run into something they think is AI generated.
So for those of you who would self-describe as being anti-AI, where would you say those feelings come from?
The reason I find this so fascinating is that over the years, when a new technology is introduced to our industry, usually what you see is a number of people making some sort of effort to understand it, going so far as to actually use it.
AI, to me, feels more akin to GraphQL. As soon as it came on the scene, plenty of people jumped into understanding it, sure. But plenty also just quickly dismissed it without ever interfacing with it.
This is to say, my hypothesis is that so many people who are actively against AI don't seem to have bothered to use AI. Maybe they've prompted chatGPT a few times in the early days. Maybe they've tried hooking their company's github account to copilot.
Regardless, it's fair to say that what I've noticed is that some of y'all are straight up nasty with it. It induces vitriol like nothing else.
So without any more priming, I want to know: Where would you say those feelings come from?
Western_Objective209@reddit
Whenever I work with people who are anti-AI, they are noticeably slower to onboard and get PRs finished. They are fine in their comfortable codebases, but someone using AI can onboard and be productive in a week.
That's the big difference I've noticed tbh. There's also people who just genuinely don't know what they are writing with AI, but reviewing code with an AI is actually a lot more powerful than people give it credit for. Reviewing a +1k loc PR is really fast, because it makes tracing code much easier than doing it manually.
ProbablyNotPoisonous@reddit
You can't outsource understanding your codebase to AI. How would you ever verify the output?
Western_Objective209@reddit
you can use AI to make understanding easier. The main bottleneck is reading, and it's decent at connecting pieces and summarizing. If you are skeptical, just ask it to output files and line numbers, open them up in a text editor, and verify. It's really a lot faster than reading all the code yourself and navigating with a traditional IDE's built-in tools, and I'm saying this as someone who was really good at using a JetBrains IDE and always hated working on projects that weren't configured with it.
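A minimal sketch of that verification step, assuming the model emits `path:line` references (the filename and helper here are purely illustrative, not an actual tool):

```python
# Sanity-check AI-cited code references: given a "path:line" string,
# confirm the file exists and the line number is actually in range.
from pathlib import Path

def check_reference(ref: str) -> bool:
    """Return True if 'path:line' points at a real line in a real file."""
    path_str, _, line_str = ref.rpartition(":")
    path = Path(path_str)
    if not path.is_file() or not line_str.isdigit():
        return False
    with path.open() as f:
        return int(line_str) <= sum(1 for _ in f)

# Demo: create a two-line file, then verify references into it.
Path("demo.py").write_text("a = 1\nb = 2\n")
print(check_reference("demo.py:2"))   # True: line 2 exists
print(check_reference("demo.py:99"))  # False: past end of file
```

The point isn't the script itself; it's that a cited file-and-line claim is cheap to verify mechanically, which is what makes this workflow trustworthy at all.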
Spinal1128@reddit
I don't get this take. Using AI is a "skill" that can be obtained within an afternoon, or a few days at most; 90% of it is being able to articulate what you want, and the rest of the tooling isn't exactly complicated.
There's practically nothing to learn.
Western_Objective209@reddit
you would think so, but I see so many people who are bad at using it. I literally have to do pair programming sessions with all kinds of devs to onboard them to using claude code effectively. I see people getting stuck all the time so I just fix their PRs and point out things they missed/didn't test using it
hotsauce56@reddit
I don’t agree with your hypothesis. Just speaking for myself, I use AI all the time at work. I am also more on the anti-AI side. Not fully, but I’m also no evangelist for it.
When it comes to AI content, my stomach turns because it often just feels lazy. I have no way to know how much to trust the author, because I'm not even reading the author's voice. It's just word vomit from an LLM.
Also I’m just sick of it, it’s everywhere. In a way, it’s just boring to me now. That doesn’t mean AI isn’t a useful tool, but I don’t necessarily find interest in understanding how someone wrote their prompts and what came out.
supyonamesjosh@reddit
I feel like complete anti-AI people are hurting their cause. AI is an incredible search engine. It's so much faster than trying to find things in a traditional manner, and if you are searching for verifiable information you can check it before relying on it. When it says "Try this coupon code for pizza", it either works or it doesn't.
AI in any form of communication I am extremely concerned about. I am worried people are replacing themselves with a robot, which is really funny in a way. The best way to get AI to replace you is to act only in ways AI tells you to.
boring_pants@reddit
"People I disagree with are hurting their cause by saying things I disagree with" is the laziest rhetorical trick in the book. It makes no sense and has never made any sense. It's just a way to (try to) shut up dissenters.
upsidedownshaggy@reddit
I think a major issue is a lot of the pro-AI posting people, or at least the ones I see, aren't doing that. They're just trusting whatever the AI says wholesale. Everyone says it's a great search engine and then doesn't do the next step of verifying what it's given them.
max123246@reddit
The people who use AI who aren't infected by the AI evangelist flu don't have to publicly talk about how good it is. Because those people are insane, and I don't want to be seen anywhere close to them as they happily predict the end of society as we know it and the mass unemployment of people.
There's a lot of shitty things about LLMs, though the environmental impact is surprisingly a lot lower than I expected it to be. But the training on all of human data and then privatizing it is what I disagree with. The open weights models are cool though in my mind. LLMs really do provide a lot of value but it's a tool and so it will be used for evil, as we've seen time and time again.
supyonamesjosh@reddit
I feel like the flaw of the entire environmental concern is: are these people also complaining about steam? Movies? Literally everything has an environmental cost. And that's not even counting actually insane environmental costs like bitcoin.
supyonamesjosh@reddit
Well that just makes them bad at their job honestly, and that's nothing new. People infuriate me at work in non-AI situations already!
itsgreater9000@reddit
it was easier to debunk these people before. now with an army of sycophantic models at their disposal, every discussion becomes an infinite loop of responses that are half-witted or are barely cogent and trick the original author into thinking they must be right.
i basically have a policy now that if your voice switches over to just copy and pasting AI responses to what i'm talking about, i just stop responding. there isn't really a conversational end here, so i'd rather head it off and save everyone their time and tokens.
Tundur@reddit
I dunno, AI is incredible at documentation following a template, and explaining existing code. As a senior engineer on a team that tends towards junior, I previously spent a LOT of time documenting, a lot of time answering questions when inevitably the documentation has gaps, and context switching has a massive toll on productivity.
Now when someone asks about a service I designed 2 years ago, I can have the AI answer their question and just review the answer before I send it. Usually it jogs my memory enough to be confident in the output, and it means they get an answer almost immediately. I can continue with my primary tasks and keep the rest of the team out of my way.
That's fundamentally different to them asking the AI themselves, because they don't know whether the AI is right. So my 'seal of approval' on a response is still worth quite a bit.
hxtk3@reddit
When I see a piece of information media, I learn two things: the obvious one is its contents, and the less obvious one is that it was worth someone's time to create.
AI makes that second signal a lie and makes it harder to judge when something is worth my time to consume.
As a user of AI, I find it has addictive qualities in much the same way as social media and it’s easy to basically doom scroll by bike shedding a system design where time would be better spent actually building something myself and gaining understanding of the edge cases by running into them.
AvailableFalconn@reddit
It’s polluting our information ecosystem, it’s polluting Reddit, it’s polluting our environment, it’s polluting art, music, television, movies, it’s polluting our codebases. It’s even polluting our advertisements.
I was also a (vindicated) graphql hater btw. I’m sure it’s worth the squeeze for like Netflix, but even for the couple hundred engineer companies I’ve seen use it, it added as much complexity as it sapped.
annoying_cyclist@reddit
This is basically it for me (as someone who uses AI extensively to write code at work).
It's now trivially easy to concoct human-seeming troll posts for Reddit, fake AI videos for your IG feed or Facebook group, or superficially deep takedown/dox reports for someone you don't like. If you look at how many people engage with this obvious AI slop (here and elsewhere), we are collectively awful at identifying when people do this, and the responses we've evolved so far (looking for bot tells in writing, accusing one another of being bots) aren't exactly conducive to a healthy community. There are subreddits I've personally stopped participating in because of this: the people using AI slop to manipulate the community for their own ends, people reacting to that in ways that further undermine the community that used to be there.
The web used to be really good at helping niche groups who would never meet in real life find each other and interact. I feel like I'm watching trolls and astroturfers kill that dynamic with AI slop, and that makes me more than a little sad.
Main-Drag-4975@reddit
Same here but we have to remember that sock puppets are part of Reddit’s “fake it til you make it” origin story.
niveknyc@reddit
Great summary of how I feel too. Plus it takes the effort out of actually doing certain things, which gives far too many people the undeserved confidence that they're good at that thing now.
Normal-Excuse-1822@reddit
feels like we're living in an ai-generated simulation sometimes
knightcrusader@reddit
In the words of the ancient Internet prophet "The Tourette's Guy":
I'm tired of all these looters... and polluters!
chickadee-guy@reddit
I'm on a GraphQL team that also owns the database. It basically incentivizes customers to run SELECT *-style queries on all your data. We found multiple teams using our GraphQL API to essentially copy our database in real time and make their own.
The amount of effort spent combating this easily outweighs any benefits gained by 10-15x. We have to watch customer queries like a hawk and introduce all sorts of process and tooling to only allow "approved" GraphQL patterns.
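For what it's worth, one common guardrail for this is a query depth budget. A naive sketch (the budget and the example queries are hypothetical, and real implementations walk the parsed query AST rather than counting braces):

```python
# Naive guard against unbounded "copy the whole database" GraphQL
# queries: reject any query whose selection nesting exceeds a budget.
def query_depth(query: str) -> int:
    """Max brace-nesting depth of a GraphQL query string."""
    depth = max_depth = 0
    for ch in query:
        if ch == "{":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == "}":
            depth -= 1
    return max_depth

def allowed(query: str, budget: int = 4) -> bool:
    return query_depth(query) <= budget

shallow = "{ user { name } }"
deep = "{ user { orders { items { product { supplier { region } } } } } }"
print(allowed(shallow))  # True: depth 2 is within budget
print(allowed(deep))     # False: depth 6 exceeds budget
```

Production GraphQL servers usually pair something like this with query cost analysis and persisted/allow-listed queries, which sounds like the tooling being described here.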
Mountain-Dragonfly46@reddit
OMG, thanks for the (horror) story.
beefyweefles@reddit
I'll triple agree here. It's a horrible technology built for Facebook to drip-feed heroin to its users, and it's usually copied and abused by other companies.
mckenny37@reddit
Yeah, but think of all the upsides. Like, it's supposedly good at updating PowerPoint presentations.
Tired__Dev@reddit
I actually believe there's no real need for the web anymore. I can explain AI's issue rather simply through a scenario I'm going through. I'm going through PRDs/SoWs written by a bad product manager. What AI does is amplify how much they don't know, because they one-shot things instead of drilling into specifics with context the AI doesn't know.
A lot of web content doesn't serve anything. Hell, this comment doesn't serve much, and you could be a bot. A significant portion of web forums like Reddit were Q&A sites, but AI does that better now. Why are you asking for strangers' opinions when you can get something that has the entire web and has information about you to contour your answer in a way that's rather polite? For searching government sites, why do I want to go through a deeply nested navigation structure when I use Google to evade that anyways? With media, why do I really need to hear about events with emotion-injected bias? The entirety of the internet is funded on ads that get clickthrough .05% of the time anyways.
While people are saying AI will take our jobs, they're not understanding that it's replacing the actual things we work on.
Elegant-Avocado-3261@reddit
Biggest problem to me is that it churns out slop exponentially faster than we can moderate it
Majestic_Diet_3883@reddit
Overall quality of life doesn't seem to improve. Maybe it's just the present climate, but Spotify increasing the Duo plan price, Prime Video and Netflix also increasing subs AND adding an ad plan??? Are they all really strapped for cash after all the layoffs?
MonochromeDinosaur@reddit
The feelings come from knowing someone out there might be writing mission-critical, life-and-death-level software, and I've seen the kind of insidious buggy code that comes from LLMs.
matjam@reddit
easy things are now really easy. You have a system that has a feature that makes it really easy to onboard a new use case for X? great - the LLM will be able to use that to onboard X.
hard things are slightly easier, but they are still time consuming. You don't have a feature to handle Y, and it's super complex with many edge-cases you've not thought about? the LLM will help you get there a little faster but you will be still iterating towards a result and that takes human time to test, evaluate, identify issues, provide relevant feedback to the LLM, rinse repeat. God forbid Y is completely outside your normal expertise.
We use LLMs heavily at my shop and we've pretty much as a company decided we're all in but we are still tactical about it and realistic about what we can get it to do today, vs what we'd like to get it to do.
The dangerous people are the ones telling the CEO that we can fire 90% of devs because Foundry can replace them. Fuck those people.
Misty-knight200@reddit
My stomach turns when I run into AI-related posts in this sub.
NewFuturist@reddit
I have to say the assumption that "people who are actively against AI don't seem to have bothered to use AI" is the dumbest shit ever. It's so ignorant, and so obvious that the person posting it is a bad programmer who doesn't understand the ecosystem and real business uses of programming.
I've been actively trying to use AI ever since the initial release of chatGPT where Sam Altman said that it would replace us all. It couldn't even create a basic home page with balanced tags and curly braces.
I have full access to all models via a corporate subscription to Copilot. Claude Opus 4.6 is great sometimes. But too many times it has no idea what it is doing. Just last night I was unpacking its shit because it had duplicated code and done it incorrectly. It takes ages to come up with a good prompt, takes ages to run the prompt and then you sit there berating it "no you missed this, you did this incorrectly" etc.
If I have shit to do, I can't risk the two outcomes of using AI which is it one shots it and I'm super impressed or it acts like a junior that has no attention span and refuses to listen to you.
tchernobog84@reddit
Code from AI trying to solve a bit more complex problems is typically incorrect, verbose, and overcomplicated.
As the senior reviewer, that means I get 10 times more vomit from AI that I need to review, and I have to point out non-obvious mistakes to the author. Which takes double the time per review.
So it makes juniors twice as lazy. They stop thinking. And I get to think twice as hard for them.
And the question is why I'm anti AI slop for production code?
Professional-Dog1562@reddit
Wanted to listen to a leadership podcast, and all the recent episodes are about integrating AI.
Technical specs are written by AI.
Comments in Jira are written by AI.
I'm about to make an express rule on my team that we must communicate without AI.
druidgaymer@reddit
A lot of the reason I'm against AI is how people use it. I don't care if someone uses it to write code. Thing is, people are using it to scam people out of money or to lie to people. Creating fake images to fool people. It's depressing.
svekkxor@reddit
It doesn't pass the wife test.
It's the same reason I would get in an argument with my wife if I hired a nanny and a cleaner and then said: well, the house is clean now, the kids are taken care of. Great, right? Right?
The tasks may be done, but we didn't put in "the work". We didn't suffer through it together. And as with anything, happiness comes through the journey, not the destination. Especially in relationships.
And I believe the same thing happens with AI content. If I smell that a person has not put in the work, it immediately devalues the piece of content I see. Not because it's necessarily bad content or code, but because that person has 0 connection to that code. They did not put in the work, the thinking, the understanding. And you can spot it from a mile away. At least we think we can.
And this is true for anything in life. Maintaining friendships, falling in love, raising kids, .... You've got to be there, be present in the moment. Be connected to the thing you are doing. And that connection, through AI, is slowly fading as our brains are always looking for the easy dopamine hit. Every deep conversation you had with AI is a deep conversation you denied a loved one. Every generated code review is a bonding opportunity you denied someone: joking about their coding style or variable naming, or mentoring them on the details of your codebase.
If you really want to take this to the max, generate a slide deck of a topic you don't care about and present it in front of a group of people. That's what it feels like. You wouldn't be able to ace that presentation even if you tried.
And that's what generating things does, it slowly disconnects you from it. Slowly you stop caring. And if you say something about it, some will say with a smile "I didn't do this, the AI did" and use it as a shield. That's the one that stings the most. Try saying that to your wife. 🪦
Everyone is still navigating the AI adoption curve and figuring this out. So am I. It's still changing, evolving every day. I don't think it's all bad. I believe using it requires a certain level of maturity and self-awareness, because like it or not it's a giant dopamine trap, and we're not ready for this. Human brains crave novelty more than anything else. And if you're not careful it will suck you right in. As it still does with me.
Ok-Entertainer-1414@reddit
Specifically when it comes to AI generated content on Reddit:
lenfakii@reddit
Can I rant here about a team member of mine that, since I've known them, has never had an original message of their own on Slack, Jira, or otherwise? It winds me up no end having to interact with this default Claude character.
I recently started replying with "That's great, but what does {name} think instead of Claude?" and yup, canned LLM response back that they should use their own voice instead of CC. Fuck me :D
roger_ducky@reddit
I only dislike AI content where the person using it is obviously just one-shotting it without much effort put into editing or refinement.
In text, that means default AI tone of voice.
In pictures, it means bland/default composition or extremely inconsistent details.
In video, it means the generated content doesn’t stay consistent.
mothzilla@reddit
It's the dawning horror as I realise I'm not talking to a human, I'm talking to an LLM operating through a human.
mxldevs@reddit
People getting laid off because one guy could use AI to do the work of 5 people since AI boosts productivity greatly
He can't.
ninetofivedev@reddit (OP)
I agree with you that he can't.
But I also think perception is reality. Either they'll come to the realization that they put way too much stock in this tech.
Or it will actually become that good.
I don't believe the latter is coming.
But I also don't understand why engineers don't just sigh and wait. I'm old enough to remember CTOs pointing at magazine articles explaining why we needed to use some shitty FotY product. Someone needs to take away this man's Computerworld subscription.
itsgreater9000@reddit
I hate this saying because it's not really true. It might be true at your organization or something, but reality is reality; whether or not you perceive it accurately doesn't make perception reality.
mxldevs@reddit
If you received notice that you were going to be let go by end of the week, how long would you expect to be out of a job for?
ninetofivedev@reddit (OP)
3-6 months tops.
midasgoldentouch@reddit
What? How are you supposed to sigh and wait through getting laid off? Your bills getting laid off too?
ninetofivedev@reddit (OP)
I've been laid off before chatGPT. I know how to handle it.
automata_theory@reddit
You think you know how to handle it, until labor conditions change, or you encounter disability, and you find yourself in an unfamiliar position.
ninetofivedev@reddit (OP)
AI is going to give me a disability?
What are we talking about?
Shucks, I might get hit by a bus, better not get out of bed.
bogz_dev@reddit
okay, i can sigh and wait. how do i make money in the meantime though? start a new save?
Wonderful-Habit-139@reddit
Stream yourself sighing.
engineered_academic@reddit
9 women can make a baby in 1 month!
chmod777@reddit
Well one woman and unlimited ai tokens now. At least until the ai companies jack up the rates.
engineered_academic@reddit
Have you seen how much it costs to deliver the baby? They suck you in with the relatively low cost of conception then hit you with the huge bill once you are locked in. It's completely a scam.
ninetofivedev@reddit (OP)
When's the last time we tried? Are we sure this is still the case?
official_business@reddit
I don't care about the opinions of a jumped up TI-83. I find something deeply revolting when I read about people using these AI chatbots as surrogate friends or whatever. The "My Boyfriend is AI" subreddit just makes my skin crawl.
I don't really know how to describe it. I just don't think anything meaningful can be derived from a machine. It might be a convincing facsimile, but I'd rather talk to a dog.
Now we've got situations like this: https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/
Reading the entire rant from the AI bot was just annoying. The bot isn't alive, it doesn't have feelings, it's not real. Fuck it and its opinions. Dealing with something like that is such a monumental waste of time. I won't be manipulated or guilted by a machine.
These bots are polluting the internet and reddit with total slop.
What's to understand? It's a chatbot spewing out rubbish at varying levels of convincingness. It can trigger a dopamine response in someone craving a bit of human contact, but it's never going to properly fill that need. It's not real, it's not a person, and its opinions can fuck off.
The people running these bots can put out AI slop on a grand scale to try and manipulate discourse. It's going to be very corrosive for society as a whole I think.
The other aspect is AI programming where you now have people who've never coded telling me that a bot can do my job. This is another rant, but being told that my job can be done by an AI by someone who doesn't know how to code is just aggravating.
I've played around with the AI art generators. That doesn't make me an artist or qualified to talk about art.
barndawe@reddit
I'm anti AI for a few reasons, and none of them are directly the AI itself:
- the environmental impact
- the emboldening of people who don't really know what they're doing to suddenly think they're experts. I mean this in every area, e.g. art, music, writing, medical, etc.
- the greed aspect of flooding art and music with low-effort crap that ends up taking money from people who spent years perfecting their craft
- the greed of the corporations that run the available models
- increasing the output of bad actors, e.g. scammers and hackers
- that these models were trained on things that you and I wrote and created, and now it's being sold back to us at a price
dbenc@reddit
the last point in particular irks me. I'm sure there will be a legal reckoning
hdkaoskd@reddit
Only if we make it happen.
philosphercricketer@reddit
I'm waiting for governments to realise there aren't going to be many people paying tax after the AI takeover, and for corporations to realise that there aren't going to be customers trying to buy stuff.
And then CEOs/corporations who don't comply fall to governmental armies, unless they maintain armies or control energy themselves.
CatDawgCatDawg2@reddit
Why do you think model companies are "greedy"? There's dozens of open source alternatives. Anyone can compare price to performance and go with a non-frontier model from any number of providers.
Also they're not even profitable as long as they spend billions researching the next model lol. If they're greedy, they're not very good at it.
If you compare cost to value, the frontier models are winning because they're still delivering the best value.
ninetofivedev@reddit (OP)
I'm just going to say it: for everyone who cites the environmental impact, how many of you are just trying to add reasons that seem irrefutable?
Because listen... If you're worried about boiling the ocean, you need to add everything else you use to the list. These AWS server farms aren't exactly energy efficient.
th3gr8catsby@reddit
Don’t let perfect be the enemy of good. Just because something doesn’t 100% solve climate change doesn’t mean it’s not worth doing.
mashuto@reddit
I live near a massive amount of data centers. It's not just adding reasons for the sake of adding reasons. They are very much disliked, and they are pushing for more and more with all the AI spending.
They pull water from our rivers and can potentially contaminate them. The big river around here is heavily used for recreation and drinking water. They're often built right next door to neighborhoods, can be loud with constant noise, and they have been driving our electricity prices up.
Granted, the river also had a massive sewage spill recently, and the data centers existed before AI. But AI adoption is only accelerating this as an issue.
PolyChune@reddit
I guess the reason is in the “why”…
The AWS cluster could actually be running something of value or a useful service, whereas what the AI models produce is mostly stuff that's already available, bloated and pretty shallow.
It is a damaging use of energy with no value added.
ninetofivedev@reddit (OP)
The AWS cluster could be running something horrible too.
AI technology, not necessarily just LLM, could be used to cure cancer.
This is a terrible basis for your argument.
Noah_Safely@reddit
I feel like you just want to "win" the argument here. LLMs are obviously new, and obviously using a ton more compute capacity and power than anything else, including crypto. Why do you think they're building all these datacenters and power plants, and why are utility bills skyrocketing in areas with new AI-focused datacenters?
You have a clear agenda and bias, no matter if you're aware of it or not. It's screamingly obvious.
PolyChune@reddit
I think we’re all talking about generative AI. That AI has been trained on public work, thus lessening the value of it despite corpos trying to sell it back to us.
Also you declaring that i have a terrible basis for my argument does not make it so lol
WolfeheartGames@reddit
Over half of your criticisms are actually critiques of capitalism. Arguably all of them are. The environmental impact of AI is negligible compared to what we already do, and it's the only technology in production that can help us reverse environmental damage that already exists.
You said none of the critiques are directly about AI itself, so you recognize this already. It's about how people use the technology. Right now people are using it for what society incentivizes: money. We could just as easily use this technology to undo the core problem.
If every disgruntled developer used AI to tackle SaaS, we'd change the world within 12 months.
morinonaka@reddit
Also add: it centralises power into the hands of those that control the models, i.e. corporations.
PolyChune@reddit
Right, this is the new peak of corporate greed. Selling our own products back to us. If corporations are people, they need to be stripped of their rights and their work made a public utility.
sacrecide@reddit
The imprecision inherent in LLMs, aka machine learning, directly contradicts what the digital revolution was about.
OdeeSS@reddit
AI isn't the same as GraphQL.
My biggest issue with AI isn't its use to generate code. My biggest issue is AI being used to replace critical thinking, communication, and art.
rover_G@reddit
The low effort slop makes my eyes roll
Cool_As_Your_Dad@reddit
My stomach turns when I hear C-level execs think throwing AI at work will just make everyone 80% more productive and they won't need resources.
Heard it today again. Fire us all. Letting the company sink will make me smile.
StackedCakeOverflow@reddit
More than anything I hate how it's ruined the ability to approach anything with trust first. Everything we see on a screen now we have to doubt before feeling anything else. The business world has always been filled with masks and fake infographics that don't mean anything and deceptive buzzwords but at least you knew there was a person somewhere behind the layers.
Now? Scrolling through LinkedIn and emails feels like They Live somehow. Every post sounds the exact same because no one actually wrote it, and if no one actually wrote it, how can I trust it? It's an endless scroll of the same handful of rehashed phrases structured around the context like madlibs. Worse are the folks that generate emails and articles, only for the intended audience to then use AI to generate a summary of it! What are we doing here, people? What is your actual goal in shitting this crap out into the void, where your audience is going to flat-out ignore it (like I do if I recognize it's generated) or offload consuming it to another slopbot?
It's at the point now that I enforced no AI generated emails for my team. If you send me one or send a client one and I realize, we're going to have a talk. If I have to sit you down and teach you actual email etiquette like I had to learn in school then I will.
Forget B2B we're in the era of slop 2 slop.
fdeslandes@reddit
Kinda fed up with even more buzzword monkeys telling us how to do our job and people just not realizing that AI being up to their standards doesn't mean it's up to ours.
Fidodo@reddit
I'm not anti AI, I'm anti slop. There's no reason why we should lower our standards for AI, and there's no reason we can't incorporate AI into best practice pipelines where we keep quality up.
But this is not new. This industry has always been battling slop whether it's AI or human made, and I will continue to battle that slop. Quality always wins out in the end.
cbusmatty@reddit
Nah, at least I know what patterns it’s going to use as opposed to jr dev, offshore dev or dev who thinks he’s smarter than anyone else.
Give me predictable AI-generated content over offshore content any day
PolyChune@reddit
I'd take quality human work over both
cbusmatty@reddit
No way, not every human lol
PolyChune@reddit
True lol, I guess as long as there's good collaboration between humans. Idk man, it's definitely hit or miss lol
SemaphoreBingo@reddit
AI "art" was my gateway drug, it was (and remains!) so obviously and clearly awful that I was pre-disposed to be a skeptic when the machine started producing software. I'm going to be forced to use claude and similar sooner or later but until then (and probably even after) my views will continue to be "this sucks" and "lol.lmao."
I do have to interact with a couple ai autocompletes, it's nice when it correctly finishes writing the repetitive stuff for me, but not so nice when it finishes writing the repetitive stuff except with subtle errors, or when it starts making up fields and variable names.
wizzward0@reddit
I use Claude Code at work and think it's amazing at coding stuff. I give it very strict instructions and a plan for the implementation, and I usually only have to tweak a few small things. I also think it's amazing for pair programming.
But, I absolutely hate the grift I see on places like LinkedIn and reddit. It’s so nauseating. People who’re selling shit or obviously don’t understand the rubbish they’re outputting or using. Like the internet is genuinely becoming unusable.
FrankieTheAlchemist@reddit
This has got to be rage bait. Anyone comparing GraphQL to LLMs is so out-of-the-loop on both that I’m surprised they can operate a keyboard.
ninetofivedev@reddit (OP)
Most useful Reddit comment.
Jmc_da_boss@reddit
I've generally left most of the internet; essentially every single programming space anywhere has become infested with both slop and inexperienced people armed with Claude who now think they can participate in those spaces when they can't.
No_Structure7185@reddit
bc i don't like slop. if you wanna create content with AI, go for it, at least as long as it's not noticeable that it's AI.
mackstann@reddit
It's disingenuous. People post content that appears to be authored by them, but it's not. It's not even authored by a person. I want to connect with people and learn from their real thoughts. It's made all the more galling due to the over-confident tone that AI usually has.
ComputerOwl@reddit
And that's the same problem that exists at work: People lazily generate some nonsense code without thinking twice, assign someone else as a reviewer and thereby (try to) offload their own work to someone else.
I don't really care where the code comes from as long as the code has high enough quality that a human could have written it, but oftentimes, it just doesn't meet that bar. It looks good at first glance but once you start thinking about it, it reveals how deeply flawed it is.
mackstann@reddit
Absolutely. My team uses AI a lot, but we have to baby-step it and scrutinize it heavily, so our PRs aren't full of slop. I've seen PRs full of slop, and the back-and-forth to get them to reasonable quality is just painful.
Logical-Idea-1708@reddit
Well, I wouldn't say ALL AI content. Some of it is very entertaining. I think production cost may have stopped many creatives from ever creating
pl487@reddit
Those feelings come from the ego. They are a normal response to threats to the ego.
nameless_food@reddit
For me, it depends on how that AI generated content is being used.
Just like any new tool, we're still figuring out what it's good for and where it's not. We're in a golden era of experimentation where we get to screw around with frontier models for far less than it actually costs to generate their outputs. I do hope that locally hosted AI models get better and cheaper to operate.
dbxp@reddit
I think AI by itself is fine. It's more that it reveals issues with the attention economy.
I think the issues are with the full pipeline: companies make products people don't really want, then throw money at ads to try and sell them. This pushes money to the ad networks, which motivates the slop content. If more companies focused on being good at what they do rather than just throwing money at ads, there wouldn't be the money to motivate slop content.
As for the software developer side of things: I think there are a lot of overpaid American devs who think they should get paid six figures for basic React work. There's this weird first-world-problem blindness where every other industry has been hit by automation, and now that it's hitting tech they've suddenly realised it may impact jobs.
PatchyWhiskers@reddit
Being nasty to AI isn't a problem, they don't have feelings or emotions. People who use AI to polish or partially write their posts (presumably including this post) feel upset when people are nasty to them and treat them like bots. But if we wanted to chat to ChatGPT, we all know how to get there.
ninetofivedev@reddit (OP)
That's not what I meant.
No AI was used to polish this post. I purposely don't as it always elicits a negative reaction, and quite frankly, I feel like my unpolished work stands on its own merit.
mjbmitch@reddit
How come you haven’t adjusted your writing style given that it elicits the same negative response?
boring_pants@reddit
I think the worst is when people whose brains have atrophied completely through reliance on AI post a screed containing a thousand words but no information, in which they mindlessly regurgitate some "AI good, as long as you use it correctly like I do" spiel.
Obviously AI-generated text is still icky, but it doesn't give me the "this was once a human being, and now nothing is left except a glorified markov chain spitting out a string of words" feeling like these alleged blog posts do.
Chili-Lime-Chihuahua@reddit
One of my coworkers, former Amazon, uses LLMs to help write messages. Simple solutions become a pile of slop filled with placeholder values that are not correctly updated and emojis.
Don’t get me wrong, some of the tools have been incredibly valuable. I’ve solved things I probably never would have before. But there’s so much low-quality noise in the environment too.
dingo-liberty@reddit
is this bait? lmao
go on the internet for 5 minutes man. it's all AI slop. your post reeks of AI although you claim not to use it. will we ever know for sure whether you did or not? NO and that's the fucking problem right there.
i hope you get a fat check from big slop for throwing in the "oh they just haven't used it! that's why they hate it forehead" argument that i have to read every single day.
DeterminedQuokka@reddit
I saw a great explanation by Alberta Tech that the right position is to be an AI hypocrite: use AI all the time, but know it doesn't actually work that well.
I don't honestly care if docs are AI generated if they are good. I do care if a doc was written by AI and a human never touched it. But things that are really clearly just AI, I engage with a lot less. If the author can't be bothered to care, why would I?
If you think people who dislike AI don't use it, I would counter that people who really like AI don't bother to double-check it and notice it is almost always at least 30% wrong. And maybe what you are doing isn't important, and that's fine. But that's not true for my work.
ninetofivedev@reddit (OP)
Sorry, a pet peeve of mine is the ol' "all statistics are made up" trope. So I can't read past this.
fletku_mato@reddit
I got physically ill from just reading the title, which mentions AI.
ALAS_POOR_YORICK_LOL@reddit
If you got physically ill from reading a title, I feel like you've got some other stuff going on there
fletku_mato@reddit
I may be slightly exaggerating but it is pretty tiresome reading this same stuff over and over.
niveknyc@reddit
It's like, every time some dork makes one of these identical posts, they think they're adding something new to the conversation, like they've got a unique take that needs to be seen and discussed. The hot takes have been one of my least favorite byproducts of AI.
fletku_mato@reddit
The best ones are clearly written by an LLM. You can't be bothered to even write your own hot takes, but you expect others to read your carefully prompted wall of text and treat it with some amount of respect.
niveknyc@reddit
Exactly, it's self-exposing how little effort these people are putting in and why they're reliant on it. AI is a great tool when you're also using your own brain along the way.
merRedditor@reddit
When I see content that is very clearly a cool use of AI, I can enjoy it. When I see people trying to use AI to replace things that could be done organically with relative simplicity, not so much.
AI generated mockups of music videos with characters superimposed? Cool.
AI teaching language learning lessons with animated stories instead of a human teacher? Nah.
mint-parfait@reddit
I can't begin to explain how infuriating it is to receive AI slop generated PRDs from product managers 😄
ninetofivedev@reddit (OP)
That's why I send my PRDs to Anton.
He doesn't call me anything. He grimly does his work, then he sits motionless until it's time to work again. We could all take a page from his book.
uniquelyavailable@reddit
I love AI; I've been working with it basically my entire life. It is exhausting how it's everywhere, and I fear it will be used against us. Not because it is powerful, but because the people who control it are powerful and eager to misuse it for their gain.
PolyChune@reddit
It absolutely will be used against us; in fact, it already is. You will have to get a subscription to a model and be forced to update to the new one to keep up.
They are already selling humanity’s own work back to us.
It should really be a public utility
but_why_n0t@reddit
How is this relevant to experienceddevs?
ninetofivedev@reddit (OP)
What topics do you feel are and are not relevant to this sub?
Evinceo@reddit
Because buy more tokens, serf
kondorb@reddit
I don't really care that it's "AI", but I do care that it's crap. I hated crap content before, I still hate it, there's just suddenly more of it.
small_e@reddit
What pisses me off are slop readmes and PR descriptions. You could easily type something short and meaningful in two seconds… but no… send me this 10 section nonsense full of bs no one cares about.
Altruistic-Bat-9070@reddit
Honestly for me everything is fair game except human interaction.
As doom-and-gloom as people can be, I don't actually believe software dev or the arts will vanish because of AI; I just believe the jobs will change and the tools and art available will diversify. At the end of the day, if AI can't do it better or can't achieve the uniqueness of humans, there will be a place for human content.
The thing that instantly makes me angry is Teams or Slack messages obviously written by AI. The worst was a few years ago, when I got an HR complaint (not upheld) that was obviously written by AI. The coworker had obviously fed ChatGPT context that made them think they had a case against me and used it to draft all the letters and some emails to me, and it was so frustrating talking to a computer. The worst part is that whenever anyone does that, even now, they have plausible deniability, and it's so exhausting
Serializedrequests@reddit
It's a disempowerment technology, meant to replace self empowerment.
binocular_gems@reddit
I use AI tools at work, have used them basically as soon as they started to come out as products, and now have access to fairly unlimited use of Claude Code, Gemini, and Codex. I use them extensively and have found a good workflow with Claude Code. I enjoy using Claude Code for research, experimentation, helping to close bugs, and for specific controlled tasks in well-specced-out projects. I don't let the agents do their thing; I like to use them in a limited, interactive mode where I can check the work and validate results before moving to the next task or another part of the task. It makes me feel more like a code owner than a code watcher.

I have an objective at work to use AI as much as possible — I have to use it for my career objectives — but I also like working with it most of the time, and now that I have a good sense of the limitations, I rarely run into the issues I ran into 6+ months ago (looping, degeneratively bad code, inferring/hallucinating libraries and frameworks that aren't there, explicitly ignoring instructions, etc.). I still run into them occasionally, but rarely, and because I follow a spec-based workflow and make small, controlled commits with AI, I'm usually able to stay on top of it when it deviates into something crazy.
That said, I am broadly anti-AI: I think it's bad for society, bad for the humanities, and probably bad for the individual. My "opposition" to AI from an ideological point of view doesn't come from inexperience with it. For about two years I was anti-AI because I was using it and could see the very low quality of the output; that's definitely improved with tools like Claude Code and a firm set of guidelines and responsible, productive guardrails around it. I am still ideologically anti-AI. Its effect on prose is horrible: there's so much pure shit generated by AI that it actually makes human writing worth less, because it's hard to tell whether something you're going to use your human brain to read is actually worth reading, or whether it's just more robotic junk. The writing style of AI is shit — unoriginal, marketing-like shit. You can try to use prompts to get it to not write like shit, but it'll always regress to shit. I think the word slop is appropriate; it's overused, obviously, but it captures it well. I don't think AI is capable of producing art. Art is human and AI is not human; the value of something artful isn't intrinsic to the thing but is at least partially (or mostly) derived from the context, the human experience, that created it. AI can't do that, it won't do that, and even if AI produced The Old Man and the Sea after churning away at a million data centers for a million hours, it would be devoid of value, because it's just a statistical accident that produced it rather than the spark of human intentionality.
For what it's worth, I don't think that AI is actually artificial intelligence, I think it is applied statistics. And I do not believe that human intelligence is applied statistics (I'll frequently see that argument made).
That all said, I think it's something that is here, something we have to account for in human learning and its effect on human attention, and ultimately on human intelligence. I don't know how we do that. I don't know how to teach in the age of AI; I don't know how to train people in the age of AI. I used to feel confident giving advice to juniors and people interested in the software engineering field; now I don't feel confident about that advice at all. I'm worried about the influence of AI on my own career. I have a hard time imagining human-orchestrated software engineering existing in the same way it has in 10 years. It probably still will, but in a different way that I can't foresee at the moment.
At least in the software field, I don't think this is true at all. Lots of developers I know who are ideologically anti-AI use the tools frequently. It might be true of society at large, but I don't really think society at large is anti-AI; I think most people in the American workforce, in education, or in society generally are just kind of indifferent to it. They get annoyed when someone tricks them with an AI-generated photo, they don't like when people are abused or sexually exposed via AI-generated deepfakes, and they might worry about the effects in some ways, but they largely don't care. On the internet, the AI critics probably have a louder voice, but there's a reason for that: AI has come for quality internet content first, and the people most likely to object are the people who value high-quality internet content. (Side note: I also hate the word "content," but it's the word we have.)
mashuto@reddit
I have used it a bit recently to assist with some coding tasks. It's genuinely impressive what it can do. It has also made mistakes, written code that would be hard to maintain, and given me conflicting information. It's a nice tool that can help find things I may have missed, but I do not trust it blindly to just write code. Everything needs to be reviewed.
I don't hate it because of those shortcomings. I don't really hate it at all. But I am what I would consider anti-AI overall. People in decision-making positions seem like they don't want me to continue having a job going forward. AI very much feels like something the people at the top want to use to enrich themselves at the cost of everyone else. There is also the talk of environmental impacts: I happen to live very close to so-called data center alley, and it's becoming more and more of a problem here. The data centers take up a lot of land, they're loud, they pollute the water, and they're driving our electricity prices up.
_hephaestus@reddit
I don’t feel my stomach turn when I run into AI content writ large, but I do feel it when I hear non-technical stakeholders using it. A few weeks ago someone pitched using Claude to an executive as having more capabilities than greenlit by IT, it hallucinated outputs, and in general was the wrong tool for the job. Now I have to do the delicate dance of de-escalation to an excited executive with grand ideas.
The main difference with something like GraphQL is that GraphQL still requires some technical know-how to evangelize. Everyone knows LLMs.
PicklesAndCoorslight@reddit
I like to trick it. I taught one that humans have 12 fingers.
aioli_boi@reddit
I use Claude Code extensively. I'm running multiple workflows at once, leveraging skills + subagents in my day to day. I'm a huge champion of Claude Code on my team, pushing them to use it more and re-orienting my team around how to leverage it not only in their workflows but also shaping our team culture around it. I've used AI heavily for non-code use cases for the past 2-3 years.
AI slop is awful. What's the point of even logging into reddit anymore when 50% of posts are AI slop + 85% of comments are AI slop? Why don't I just have ChatGPT generate reddit posts and comments for me, since it's the exact same experience? Even worse, the AI slop generated on Reddit is significantly worse because there's a monetary/financial incentive behind it.
It's an awful experience to have to wade through so much noise to find anything of substance these days. How is this NOT a problem to you?
defmacro-jam@reddit
We don't serve their kind here.
BoBoBearDev@reddit
I love to play adventure game with AI, especially the one that generates stories with all sorts of porno details.
Fair_Local_588@reddit
I use Claude extensively for work and have tested LLMs outside of work to see how much they know. I have also used it for some simple tasks and as a second opinion when editing things I’ve written.
I don’t dislike AI content because it’s low effort, but because people are choosing to take themselves out of the equation and have the machine spit out some sanitized, median response. I guess this is me hating what it represents existentially: a way to strip away our humanity when interacting with other people.
On the quality front, it’s also way overhyped. I went through a phase where I listened to Claude’s recommendations for how to design something, but when I pushed back it would change its mind. And when I pushed back more it would change its mind again. Repeatedly until we were back at my original suggestion. And at each step it would say “great suggestion - this is a good change because X” even though we were moving through objectively bad options.
But if you don’t know any better, it sounds very polished and convincing. But it’s very often wrong.
ALAS_POOR_YORICK_LOL@reddit
It comes from feeling like it threatens both their job and their identity as smart creative nerdy types
Majestic_Diet_3883@reddit
I have the opposite experience. I've found that most tech folks who don't like AI have used it before.
For me it's that it resulted in a lot of redundant tools flooding the market, and also the spike of hardware prices
Antique_Pin5266@reddit
I don’t mind AI itself as a tool. It’s super handy
I hate all the other stuff that has come with it. As you said, all the slop, asinine expectations from idiot higher ups, all the layoffs
Thin_Mousse4149@reddit
Seeing TikToks where people are clearly reading a script generated by ChatGPT makes me irrationally angry. It has a very specific and annoying way of writing that is somewhere between condescending prick and inspirational speaker.
whitehorrse@reddit
Like the episode of Mad Men where he screams that the computer is going to make everyone gay.
hotsauce56@reddit
I don’t agree with your hypothesis. Just speaking for myself, I use AI all the time at work. I am also more on the anti-AI side. Not fully, but I’m also no evangelist for it.
When it comes to AI content, my stomach turns because it often just feels lazy. I have no way to know how much to trust the author, because I'm not even reading the author's voice. It's just word vomit from an LLM.
Also I’m just sick of it, it’s everywhere. In a way, it’s just boring to me now. That doesn’t mean AI isn’t a useful tool, but I don’t necessarily find interest in understanding how someone wrote their prompts and what came out.
Angelsonyrbody@reddit
I think that a lot of people's skepticism / cynicism about generative AI is rooted more in moral / ethical concerns than in concerns with its efficacy. So I'm not sure how/why them actually using it would affect that point of view.