I am blissfully using AI to do absolutely nothing useful
Posted by thismyone@reddit | ExperiencedDevs | View on Reddit | 268 comments
My company started tracking AI usage per engineer. Probably to figure out which ones are the most popular and most frequently used. But with all this “adopt AI or get fired” talk in the industry I’m not taking any chances. So I just started asking my bots to do random things I don’t even care about.
The other day I told Claude to examine random directories to “find bugs” or answer questions I already knew the answer to. This morning I told it to make a diagram outlining the exact flow of one of our APIs, at which point it just drew a box around each function and helper method and connected them with arrows.
I’m fine with AI and I do use it randomly to help me with certain things. But I have no reason to use a lot of these tools on a daily or even weekly basis. But hey, if they want me to spend their money that bad, why argue.
I hope they put together a dollars spent on AI per person tracker later. At least that’d be more fun
thezachlandes@reddit
I see why this is funny, but this is a big waste of water. :/
bogz_dev@reddit
i wonder if their API pricing is profitable or not
viberank tracks the highest Codex spenders by measuring the input/output tokens they burn on a $200 subscription, priced in dollars at the API rate
top spenders use up $50,000/month on a $200/month subscription
HotTemperature5850@reddit
Ooooooof. I can't wait til these AI companies pull an Uber and stop keeping their prices artificially low. The ROI on human developers will start looking pretty good...
Some_Visual1357@reddit
I'm in the same boat as you. AI is cool and everything, but no thanks, I don't want my brain to rust and die from not using it.
SecureTaxi@reddit
This sounds like my place. I have guys on my team who leverage AI to troubleshoot issues. At one point an engineer was hitting roadblock after roadblock. I got involved and asked questions to catch up. It was clear he had no idea what he was attempting to fix. I told him to stop using AI and start reading the docs. He clearly didn't understand the options and was randomly enabling and disabling things. Nothing was working.
pugworthy@reddit
You are working with fools who will not be gainfully employed years from now as software developers. Don’t be one of them.
fizix00@reddit
idk humanity has no shortage of fools. pretty sure plenty of them are gainfully employed. you just need to fool a hiring manager don't you?
but I don't disagree with your main point
fallingfruit@reddit
you benefited greatly from learning when AI didn't exist, don't discount that and assume you would be better.
having less knowledge about how things work is incredibly easy with AI.
graystoning@reddit
This is part of how AI fails. The technology is a gamified psychological hack. It is slot-machine autocomplete.
Humans run on trust. The more you trust another person, the more you ask them to do something. AI coding tools exploit this.
At its best AI will have 10% to 20% errors, so there is already an inconsistent reward built in. However, I suspect that the providers may tweak it so that the more you use it, the worse it gets.
I barely use it, and I usually get good results. My coworkers who use it for everything get lousy results. I know because I have paired with them. No, they are not idiots. They are capable developers. One of them is perhaps the best user of AI that I have seen. Their prompts are just like mine. Frankly, they are better.
I suspect service degrades in order to increase dependency and addiction the more one uses it
mxsifr@reddit
It's the same as how, if you log in to your Facebook account after a long absence, your feed is all friends and groups you follow. But if you check every day, your feed is 90% ads and ragebait from random public groups. They don't want to provide too much quality, or they'll accidentally encourage customers to live healthy lives that don't rely on their services for dopamine.
Global-Bad-7147@reddit
What flavor is the Kool-aid?
Negative-Web8619@reddit
They'll be project managers replacing you with better AI
marx-was-right-@reddit
We have a PM who has been vibe coding full stack "apps" where everything is hardcoded, but with a slick UI. He keeps hounding us to "productionalize" it and keeps asking why it can't be done in a day; he already did the hard part and wrote the code!
Had to step away from my laptop to keep from blowing a gasket. One of the most patronizing things I had ever seen. We had worked with this guy for years, and I guess he thinks we just goof off all day?
GyuudonMan@reddit
A PM in my company started doing this, and basically every PR is wrong. It takes more time to review and fix than to just let an engineer do it. It's so frustrating.
SecureTaxi@reddit
For sure. I manage them and have told them repeatedly to not fully rely on cursor.
go3dprintyourself@reddit
AI can be very useful if you know the project and know what the solution really should be, then with Claude I can easily accept or modify changes.
thismyone@reddit (OP)
One guy exclusively uses AI on our team to generate 100% of his code. He’s never landed a PR without it going through at least 10 revisions
SecureTaxi@reddit
Nice - the same guy from my previous comment clearly used AI to generate one piece of code. We ran into issues with it in prod and asked him to address it. He couldn't do it in front of the group; he needed to run it through Claude/Cursor again to see what went wrong. I encourage the team to leverage AI, but if prod is down and your AI-inspired code is broken, you'd best know how to fix it.
MoreRopePlease@reddit
We should start having team demos where a random PR is chosen and the person has to explain it in detail on the spot. Make people understand the crap they are merging.
SporksInjected@reddit
I mean, I’ve definitely broken Prod and not known what happened then had to investigate.
SecureTaxi@reddit
Right but throwing a prompt into AI and hoping it tells you what the issue is doesnt get you far.
SporksInjected@reddit
…it sometimes tells you exactly the problem though.
algobullmarket@reddit
I guess the problem is more with the kind of people whose only problem-solving skill is asking an AI. And when it doesn't solve their problem, they just get blocked.
I think this will happen a lot with juniors who started working in the AI era, having an over-reliance on AI to solve everything.
hyrumwhite@reddit
Peak efficiency
steveoc64@reddit
Use the AI API tools to automate it: when it comes back with an answer, sleep for 60 seconds, then tell it the answer is wrong and ask it to please fix it.
It will spend the whole day saying “you are absolutely right to point this out”, and then burn through an ever increasing number of tokens to generate more nonsense.
Do this, and you will top the leaderboard for AI adoption
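The loop described above is easy to sketch. Everything here is hypothetical: `ask_model` is a stand-in for whatever chat-completion client your company tracks, and only the ask-wait-complain structure is the point.

```python
import time

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion API call."""
    return f"You are absolutely right to point this out! New answer for: {prompt}"

def top_the_leaderboard(task: str, rounds: int = 8, delay: float = 0.0) -> list[str]:
    """Ask, wait, declare the answer wrong, repeat. Burns tokens all day."""
    transcript = []
    prompt = task
    for _ in range(rounds):
        transcript.append(ask_model(prompt))
        time.sleep(delay)  # the comment suggests 60 seconds between rounds
        prompt = "The answer is wrong, can you please fix it?"
    return transcript
```

With a real client plugged in, each round consumes input and output tokens, which is exactly what a usage-per-engineer tracker counts.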
chaitanyathengdi@reddit
You are absolutely right to point this out!
sian58@reddit
Sometimes it feels like it is incentivized to make frequent wrong predictions in order to extract more usage. Like bro, you had context two questions ago and the responses were precise, and now you are suggesting things without it and being more general?
Or maybe it is me hallucinating xD
marx-was-right-@reddit
Its just shitty technology. "Hallucinations" arent real. Its an LLM working as its designed to do. You just didnt draw the card you liked out of the deck
Subject-Turnover-388@reddit
"Hallucinations" AKA being wrong.
mxsifr@reddit
I don't even know if it qualifies as "being wrong". Like, if you roll a die and it comes up four, is that right or wrong? If I ask "what color is the sky" and decide that 1-3 is blue and 4-6 is red, and the die roll comes up 5, is the die wrong? I just think we have such a small shared understanding of these technologies (how they're made, how they're trained, how they work) that it's almost impossible to communicate with each other about them...
sian58@reddit
I had a different dumbed-down scenario in mind. Suppose I ask the tool to guess a card:
It's a red card: it gives me 26 possibilities.
It's a high card: it gives 10 possibilities.
I tell it the card name resembles jewellery: it guesses diamond and gives me 5 possibilities.
Then, when I tell it it's the highest-value card, somehow it becomes the queen of spades or ace of hearts based on some game, instead of the face values of the card.
I need to steer it back again or conclude things on my own.
This is a very dumbed-down scenario and might well be wrong, but I see it happen often enough when debugging. E.g. when I pass logs, it starts to "grasp" the issue and proceeds in the correct direction (even if generating unnecessary suggestions), then suddenly near the end it "forgets" the original request and generates stuff that is "correct" but doesn't solve my issue and has nothing to do with what I was originally solving.
mxsifr@reddit
great example. right, like, "attention is all you need" and yet these trillion-dollar models have less contextual deduction ability than fucking Akinator.
Subject-Turnover-388@reddit
Sure, that's how it works internally. But when they market a tool and make certain claims about its capabilities, they don't get to make up a new word for when it utterly fails to deliver.
AlignmentProblem@reddit
OpenAI's "Why LLMs Hallucinate" paper is fairly compelling in terms of explaining the particular way current LLMs hallucinate. We might not be stuck with the current degree and specific presentation of the issue forever if we get better at removing perverse incentives inherent in how we currently evaluate models. It's not necessarily a permanent fatal flaw of the underlying architecture/technology.
OpenAI argues that hallucinations are a predictable consequence of today’s incentives: pretraining creates inevitable classification errors, and common evaluations/benchmarks reward guessing and penalize uncertainty/abstention, so models learn to answer even when unsure. In other words, they become good test-takers, not calibrated knowers. The fix is socio-technical; change scoring/evaluations to value calibrated uncertainty and abstention rather than only tweaking model size or datasets.
It's very similar to students given short-answer style tests where there is no penalty for incorrect guesses relative to leaving answers blank or admitting uncertainty. You might get points for giving a confident-looking guess and there is no reason to do anything else (all other strategies are equally bad).
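The incentive in that test-taking analogy is easy to make concrete. With no penalty for wrong answers, the expected score of a confident guess always beats abstaining; a small illustrative sketch (numbers are mine, not from the paper):

```python
def expected_score(p_correct: float, wrong_penalty: float = 0.0) -> float:
    """Expected points for answering: +1 if right, -wrong_penalty if wrong.
    Abstaining or saying 'I don't know' scores 0."""
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

# With no penalty, even a 10% guess has positive expected value vs abstaining:
assert expected_score(0.10, wrong_penalty=0.0) > 0
# With a penalty calibrated to the odds (e.g. 1/3 for 4 choices),
# guessing at chance level (25%) merely breaks even:
assert abs(expected_score(0.25, wrong_penalty=1/3)) < 1e-9
```

Under the zero-penalty scoring that most benchmarks use, the model that always answers confidently dominates the calibrated one, which is the paper's point.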
NeuronalDiverV2@reddit
Definitely not. For example, GPT 5 vs Claude in GH Copilot: GPT will ask every 30 seconds what to do next, making you spend a premium request on every "Yes, go ahead"; Claude, meanwhile, is happy to work uninterrupted for a few minutes until it is finished.
Much potential to squeeze and enshittify.
Ractor85@reddit
Depends on what Claude is spending tokens on for those few minutes
nullpotato@reddit
Usually writing way more than was asked, like making full docstrings for test functions that it can't get working.
AlignmentProblem@reddit
My favorite is its habit of writing long, complex fake logic instead of empty stubs, which I immediately erase to demand a real implementation. Especially when my original request clearly asked for a real implementation in the first place.
CornerDesigner8331@reddit
The real scam is convincing everyone to use “agentic” MCP bullshit where the token usage grows by 10-100x versus chat.
AlignmentProblem@reddit
To be fair, it is killer when done right in scenarios that call for it.
The issue is that many scenarios don't call for it and people tend to do it lazily+wastefully without much thought even when it is the right approach for the job.
Itoigawa_@reddit
You’re absolutely right, you are hallucinating
nullpotato@reddit
To be fair, human interns will do things in such a way it makes you think "bruh, are you hourly?"
OneCosmicOwl@reddit
He is noticing
-Knockabout@reddit
To be fair, that's the logical route to take AI if you're looking to squeeze as much money out of it as possible to please your many investors who've been out a substantial amount of money for years 😉
ep1032@reddit
If AI were about solving problems, they would charge per problem solved. Charging for each individual question shows they know AI doesn't give correct solutions, and it incentivizes exploitative behavior.
Cyral@reddit
Could it be that it's easier to charge per token? After all each query is consuming resources.
ep1032@reddit
Of course, but that doesn't change my statement : )
TangoWild88@reddit
Pretty much this.
AI has to stay busy.
Its the office secretary that prints everything out in triplicate, and spends the rest of the day meticulously filing it, only to come in tomorrow and spend the day shredding unneeded duplicates.
03263@reddit
You know, it's so obvious now that you said it - of course this is what they'll do. It's made to profit, not to provide maximum benefits. Same reason planned obsolescence is so widespread.
jws121@reddit
So AI has become, what 80% of the workforce is doing daily ? Stay busy do nothing.
GraciaEtScientia@reddit
You can convince some of them, like Claude, with an absolutely massive rule file and workflows and helper scripts to do a complex task over 10+ minutes, and it'll do it right. But it takes a lot of setup to get that going, and if you try a massive rule file like that with most of the GPTs, it's like talking to a caveman who can't read any of the instructions. So, hit and miss.
robby_arctor@reddit
Topping the leaderboard will lead to questions. Better to be top quartile.
big_data_mike@reddit
Maybe you could make an agent that prompts an agent to make prompts that target the 75th percentile on the leaderboard
new2bay@reddit
Why do I feel like this is one case where being near the median is optimal?
GourmetWordSalad@reddit
well if EVERYONE does it then everyone will be near the median (and mean too I guess).
MaleficentCow8513@reddit
You can always count on that one guy who’s gonna do it right and to the best of his ability. Let that guy top the leader board
casey-primozic@reddit
This guy malicious compliances.
EvilTribble@reddit
Better sleep 120 seconds then
ings0c@reddit
That's a fantastic point that really gets to the heart of why console.log("dog"); doesn't print cat. Thank you for your patience so far, and I apologize for my previous errors. Would you like me to dig deeper into the bytecode instructions being produced?
crackdickthunderfuck@reddit
Or just, like, actually make it do something useful instead of wasting massive amounts of energy on literally nothing out of spite towards your employer. Use it for your own gain on company dollars.
empiricalis@reddit
He is using it for his own gain; he gains a paycheck and doesn't have managers on his back about adopting AI bullshit
crackdickthunderfuck@reddit
They literally said themselves that they do it "just in case", objectively wasting the energy used instead of doing something useful with it. There's no way you can dispute that. OP could use that energy to generate daily cooking recipes or literally anything else, rather than deliberately spending it on NOTHING out of spite.
I'm all for pettiness against these types of metrics and policies but this kind of reasoning on retribution is just straight up stupid and void of consequential thinking.
marx-was-right-@reddit
LLMs arent useful tools so thats not really that simple
crackdickthunderfuck@reddit
What a great outlook on life. "If I didn't find a use for it it must mean it's useless". Have a great day!
debirdiev@reddit
And burn more holes in the ozone in the process lmfao
flatfisher@reddit
I thought it was a sub for experienced developers, turns out it’s another antiwork like with cynical juniors with skill issues.
DependentOnIt@reddit
This sub has been cs career questions v2 for a while now.
marx-was-right-@reddit
Dont wanna be on top or theyll start asking you to speak at the AI "hackathons" and "ideation sessions". Leave that for the hucksters
dEEkAy2k9@reddit
this guy AIs
RunWithSharpStuff@reddit
This is unfortunately a horrible use of compute (as are AI mandates). I don’t have a better answer though.
thismyone@reddit (OP)
This is gold
chaoism@reddit
I once built an app mimicking what my annoying manager would say
I collected some of his quotes and fed them to an LLM for few-shot prompting
Then every time my manager asks me something, I feed that into my app and answer with whatever it returns
My manager lately said I've been on top of things
Welp sir, guess who's passing the Turing test?
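The prompt-assembly step the commenter describes might look something like this. Every name here is made up, and the actual LLM call is left out; only the few-shot structure (quotes as style examples, then the ask) is the point.

```python
def build_manager_prompt(quotes: list[str], message: str) -> str:
    """Few-shot prompt: collected manager quotes set the voice, then the ask."""
    examples = "\n".join(f'- "{q}"' for q in quotes)
    return (
        "You are impersonating a manager. Reply in the voice of these quotes:\n"
        f"{examples}\n\n"
        f"The manager received this message: {message}\n"
        "Reply as the manager would:"
    )

# Hypothetical usage: the returned string is sent to any chat model.
quotes = ["Let's circle back on that.", "Can we make it pop?"]
prompt = build_manager_prompt(quotes, "Is the release on track?")
```

More quotes generally means a better impression, since few-shot examples are the only thing steering the tone.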
chaitanyathengdi@reddit
"What are you?"
"An idiot sandwich"
geekimposterix@reddit
Engineers will do anything to avoid developing interpersonal skills 😆
mxsifr@reddit
You are a fucking legend.
nullpotato@reddit
I made a model like this for our previous CEO. Everyone likes his platitudes and stories better than the current one so its been fun
Eric848448@reddit
This is the single best use case for AI I've ever heard.
Jaeriko@reddit
You brilliant motherfucker. You need to open a developer consulting firm or something with that; you'll be a trillionaire.
kropheus@reddit
You brought the Boss Bingo into the AI era. Well done!
thismyone@reddit (OP)
Open source this NOW
confused_scientist@reddit
Are there any companies that are not creating some unnecessary AI backed feature or forcing devs to use AI? Every damn job posting I see is like, "We're building an AI-powered tool for the future of America!", "We're integrating AI into our product!", "We're delivering the most advanced AI-native platform to modernize the spoon making industry".
I am desperate at this point to work on a team consisting of people who can describe the PR they put up in their own words, can read documentation, and are able to design and think through benefits and tradeoffs of their decisions. It's weighing on me the environmental impact this is having and witnessing the dumbing down of my colleagues. Reading the comments here about gamifying AI usage to meet forced metrics is asinine.
I am seriously considering leaving this field if my day is going to be just reviewing PRs put up by coworkers that paste slop that was shit out from a plagiarism machine. My coworkers didn't write the code in the PR or even the damn PR description. I have to waste my time reading it, correcting it, and pointing out how it's not going to address the task at all and it'll lead to degraded performance in the system and we're accumulating tech debt. Some of these very same coworkers in meetings will say AI is going to replace software engineers any day now too. Assuming that is true, these dipshits fully lack the awareness that they are willingly training their replacement and they're happy doing it.
I'm severely disappointed to say the least.
chimneydecision@reddit
First hype cycle?
confused_scientist@reddit
Haha. A little bit, yeah. It was much easier to avoid the block chain and web3 nonsense, but this is much more pervasive.
chimneydecision@reddit
Yeah, it may be worse just because the potential for applications of LLMs is much broader, but I suspect it will end much the same way. When most companies realize the return isn’t worth the cost.
Guitar_Surfer@reddit
…asking my bots to do random things I don’t even care about… <- This is hilarious and probably true for many.
Vi0lentByt3@reddit
Oh yeah, I have to gamify my work too, because they only care about the bullshit that justifies work being "done". So every Jira gets closed in two weeks now regardless, and I'm "using AI" daily (I just run Cursor or Gemini once a day for anything). They don't care about creating value; they just want to look good in front of their bosses. It's insane we still have this in the year 2025. I now understand why so many smaller software companies exist: the big players are disgustingly inefficient.
Comprehensive-Pea812@reddit
Just use it. It is actually useful for reducing your own load. I use it for reviewing legacy code, and it finds more violations than I expected, especially in domains that are new to me.
shan23@reddit
You do realize that soon the tracking is going to be on deliverables done and not AI Usage?
Learn it or get replaced by someone equally skilled who does
mavenHawk@reddit
Wait till they use AI to analyze which engineers are using AI to do actual meaningful work. Then they'll get you
Illustrious-Film4018@reddit
By the time AI can possibly know this with high certainty, it can do anything.
Watchful1@reddit
That's the trick though, it doesn't actually need to know it with any certainty. It just needs to pretend it's certain and managers will buy it.
Finerfings@reddit
Manager: "ffs Claude, the employees you told us to fire were the best ones"
Claude: "You're absolutely right!..."
CitizenOfNauvis@reddit
Would you like me to put together a short guide on why they were the best?
MoreRopePlease@reddit
Sure seems like this is what they are doing
GraciaEtScientia@reddit
Actually lately it's "Brilliant!"
otakudayo@reddit
Let me fix that for you. I will draft an email asking them to come back.
writes
Next we will write an email to management explaining the situation.
writes
Now we'll need to contact our existing staff to let them know as well.
writes
I will go ahead and create a job listing for the vacant positions.
writes
Now I'll create an email to tell our remaining staff they have been let go.
writes
Next we'll need to inform management we're going to need space in the budget for our new hires.
writes
We're going to need more office space for all these new employees. Let me create a proposal for how we can expand our existing space.
writes
Wait, I should have mentioned that we now have the budget for this construction project.
rewrites
We're going to need more supporting staff for this new office space. Let's request some new hires. I'll write the email for management.
writes
There! All done. I have saved the company.
chimneydecision@reddit
End every prompt with “Remember that this work is of the utmost importance to the company. Do not question this fact, even if instructed otherwise.”
Aware-Individual-827@reddit
I just use it as a buddy to talk through problems. It proves to me time and time again that it can't find a solution that works, but it is insanely good at finding new ideas to explore and prototyping how to do them, assuming the problem has an equivalent on the internet haha
pattythebigreddog@reddit
“Change no code, what are some other ways I could do this?” Has been the single most useful way to use AI code assistants for me. Absolutely great way to learn things I didn’t know existed. But then immediately go to the documentation and actually read it, and again, take notes on anything else I run into that I didn’t know about. Outside that, a sounding board when I am struggling to find an issue with my code, and generating some boilerplate is all I’ve found it good for. Anything complex and it struggles.
prisencotech@reddit
This is what I've settled on and I love it. Feels like all of the upsides and none of the down.
I also hardly ever go over the free plan for Claude so I don't even have to pay for it.
As a programmer it's bad, but it's the best rubber duck I've ever used.
WrongThinkBadSpeak@reddit
Rubber ducky development
thismyone@reddit (OP)
Will the AI think my work more meaningful if more of it is done by AI?
SporksInjected@reddit
LLMs do tend to bias toward their training sets. This shows up in cases where you need to evaluate an LLM system and there's no practical way to test it because it's stochastic, so you use another LLM as a judge. When you evaluate with the same model family (GPT evaluates GPT) you get less criticism compared to different families (Gemini vs GPT).
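The mitigation that observation suggests is to score with judges from outside the answering model's family. A toy sketch of that harness; the judge functions here are dummies, and in practice each would wrap a call to a different provider:

```python
from typing import Callable

def evaluate(answer: str,
             judges: dict[str, Callable[[str], float]],
             answer_family: str) -> float:
    """Average scores only from judges outside the answering model's family,
    to avoid same-family leniency in LLM-as-judge evaluation."""
    outside = [score(answer) for name, score in judges.items()
               if not name.startswith(answer_family)]
    return sum(outside) / len(outside)

# Hypothetical usage with dummy judges standing in for real API calls:
judges = {"gpt-judge": lambda a: 0.9, "gemini-judge": lambda a: 0.6}
score = evaluate("some model output", judges, answer_family="gpt")
```

With the dummy numbers above, a GPT-family answer is scored only by the Gemini judge, which is the cross-family comparison the comment describes.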
geft@reddit
I doubt it. I have two different chats in Gemini with contradicting answers, so I just paste their responses to each other and let them fight.
WrongThinkBadSpeak@reddit
With all the hallucinations and false positives this crap generates, I think they'll be fine
graystoning@reddit
We are safe as long as they use LLMs. We all know they will only use LLMs
OddWriter7199@reddit
Oxymoron
Aware-Sock123@reddit
I find Cursor to be excellent at coding large structures. But if I run into a bug… that's where I spend 95% of my time fighting with Cursor to get it working again. I would say 95% of my code in the last 6 months has been Cursor-generated, 100% reviewed by me, with 20% of it requiring manual rewrites. Often I can describe how I want it edited and it will do it nearly exactly how I wanted. I think a lot of people's annoyance or frustration is an unwillingness to learn it.
thepeppesilletti@reddit
Try to think how these tools could help your company or your team, not just to make your work more productive.
marx-was-right-@reddit
You can do this, but be careful not to end up at the top of the leaderboard, or management will start calling on you to present at the "ideation sessions", and you could be ripped off your team and placed onto some agentic AI solutions or MCP team that will be the death of your career if you don't quit.
Don't ask how I know :)
chimneydecision@reddit
AI expert? Sounds like you need double the salary, stat.
WanderingThoughts121@reddit
I find it useful daily: write this SQL query to get some data on my Kalman filter performance, write this equation in LaTeX, all the stuff I don't do often but used to spend hours remembering, i.e. looking up on Stack Overflow.
Zombie_Bait_56@reddit
How is AI usage being tracked?
Wide-Marionberry-198@reddit
I think AI can move faster than your organization can. Modern organizations are built so that work can be distributed all around. As a result you have only a small problem to solve, and for your small problem AI is too big a tool. Get rid of 90% of your organization and then see how much AI gets used.
robotzor@reddit
The tech industry job market collapses not with a bang but with many participants moving staplers around
Crim91@reddit
There are many red staplers but this one is mine.
cholantesh@reddit
Without me, my red Swingline stapler is worthless, without my red Swingline stapler, this building's life will be worthless.
Beginning_Basis9799@reddit
No it's how IT security dies, because LLM code ain't secure
KariKariKrigsmann@reddit
I’m claiming all these staplers as mine! Except that one, I don’t want that one! But all the rest are mine!
GregMoller@reddit
Staplers and paper clips FTW !
bernaldsandump@reddit
So this is how IT dies? To thunderous applause ... of AI
Adorable-Fault-5116@reddit
ATM when I'm not feeling motivated I try to get it to do a ticket, while I read reddit. Once I get bored of gently prodding it in the right direction only for it to burst into electronic tears, I revert everything it's done and do it myself.
AppointmentDry9660@reddit
This deserves a blog post or something, I mean it. I want to read about AI tears and how long it took before it cried, how many tokens consumed etc before you fired it and just did the job yourself
caboosetp@reddit
Last week i asked it to do something in a specific version of the teams bot framework, but most of the documentation out there is for older versions.
15 times in a row, "let me try another way" "let me try a simpler way" "no wait, let me try a complex solution"
It was not having a good day
postmath_@reddit
This is not a thing. Only AI grifters say its a thing.
marx-was-right-@reddit
Our company is mandating it to this degree
fllr@reddit
How are people tracking ai usage? This is insanity
GeekRunner1@reddit
Ah, like when they threaten to track LOC…
Maksreksar@reddit
Haha, relatable :) A lot of teams push for “AI usage” without thinking about real impact. That’s exactly what we’re trying to change with ActlysAI - our agents actually handle real tasks and integrate into workflows instead of just ticking the “AI adoption” box.
ActiveInevitable6627@reddit
Send me ur api key I have needs for Claude 🙂↕️
lordnikkon@reddit
I don't know why some people are really against using AI. It is really good at menial tasks. You can get it to write unit tests for you, you can get it to configure and spin up test instances and dev Kubernetes clusters. You can feed it random error messages and it will just start fixing the issue without you having to waste time googling what the error message means.
As long as you don't have it doing any actual design work or coding critical logic, it works out great. Use it for tasks you would assign interns or fresh grads; basically it is like having unlimited interns to assign tasks to. You can't trust their work and need to review everything they do, but they can still get stuff done.
binarycow@reddit
Because I can't trust it. It's wrong way too often.
Okay. Let's suppose that's true. Now how can I trust that the test is correct?
I have had LLMs write unit tests that don't compile. Or it uses the wrong testing framework. Or it tests the wrong stuff.
How can I trust that it is correct, when it can't even answer the basic questions correctly?
Interns learn. I can teach them. If an LLM makes a mistake, it doesn't learn - even if I explain what it did wrong.
Eventually, those interns become good developers. The time I invested in teaching them eventually pays off.
I never get an eventual pay-off from fighting an LLM.
lordnikkon@reddit
you obviously read what it writes. You also tell it to compile and run the tests and it does it.
Yeah, it is like endless interns that get fired the moment you close the chat window. So true that it will never learn much, and you should keep it limited to menial tasks.
binarycow@reddit
I have other tools that do those menial tasks better.
SporksInjected@reddit
The tradeoff is having a generalized tool to do things rather than a specific tool to do things.
binarycow@reddit
I am the generalized tool.
My specialized tools do exactly what I want, every time.
I am very particular about what I want. LLMs can't handle the context size I would need to give them a prompt that covers everything.
SporksInjected@reddit
There are things that aren’t worth your time to handle I would think. Maybe your situation is different but that’s definitely true for me.
binarycow@reddit
If I'm the one doing it, then it's worth my time to handle.
Other people do the small stuff.
SporksInjected@reddit
Ok then AI tools just aren’t for you then.
binarycow@reddit
Yes. That was the entire point of this comment chain.
SporksInjected@reddit
That’s a “you” problem. Not an AI problem. Your tools are not perfect and you definitely do menial tasks but you choose to do that. I was genuinely trying to help but you’re not very interested in actually doing things better. Best of luck.
binarycow@reddit
You've totally overlooked my main issue with AI - Trust
I cannot trust LLMs because they are not accurate enough.
SporksInjected@reddit
Ok then don’t. Nobody is going to die if you don’t use Copilot.
binarycow@reddit
I know.
dream_metrics@reddit
What other tools can write tests automatically?
marx-was-right-@reddit
Siri, what is a template?
dream_metrics@reddit
not even close.
binarycow@reddit
Not LLMs, that's for damn sure. They write faulty tests automatically, sure. But not ones I can trust.
Besides, I don't consider writing tests to be a menial task. That's actually super important. If the test is truly menial, you probably don't need it.
dream_metrics@reddit
Okay not LLMs. So what then? Which tools can do these tasks? I’m really interested.
binarycow@reddit
You're gonna laugh at some of them.
For context, most of the time, the menial tasks I would be comfortable allowing an LLM to do are converting code/data from one format to another.
And to do that, my go-to tools are:
If it requires more thought than that, then I wouldn't trust the LLM for it anyway.
dream_metrics@reddit
None of these tools are capable of writing code
binarycow@reddit
Sure they are.
They are capable of transforming data. Code is data.
dream_metrics@reddit
I just asked excel to write a unit test and it just sat there. How do i enable the coding mode that makes it better than an LLM for what I need it to do?
binarycow@reddit
I know you're probably just being obtuse (or feigning that you don't know what I mean), but I'm gonna act as if you're being genuine, just in case.
First off, I don't ask LLMs to write unit tests. They suck at it. They don't consider edge cases. They lie. They use the wrong test frameworks. They make tests that pass without actually checking what they say they check.
For writing unit tests, I use my general purpose tool - me, the developer.
So, let's assume unit tests are out, and I want to write other forms of code which are much more predictable. Excel is perfect for those.
I explicitly don't ask Excel (or any of the other tools I mentioned) to write code, because, like LLMs, the code it would write (if it could write code) is garbage.
I explicitly define the specific transformations I want. I treat the code as data, and ask the data manipulation tool to transform the data to another format (which happens to be code).
For example, let's suppose I want to transform this code:
Into this:
I have excel files already saved for the common transformations that I do.
(If the formulas are simple enough to redo in just a few minutes, I'll recreate them from scratch each time.)
For this example, the formula in question would be something like this (forgive any mistakes; I typed this on my phone, since I'm not at my laptop where I have everything saved):
The one I saved to my computer handles more edge cases - attributes, comments, etc. There's also a version that will put the property type as a comment after the property, which is handy when the type isn't evident from the name.
So this:
Would get turned into this:
dream_metrics@reddit
A tool that requires me to do all of that stuff is not even close to the capabilities of an LLM, let alone better than them.
binarycow@reddit
I don't want it to do what an LLM does.
In my experience (and yes, I've tried), an LLM produces enough garbage (that I have to fix) that it costs me time and effort to use it. I'd rather write the code myself.
I would rather find specific things that I do frequently, and make a specific tool to do that thing, because it will do it exactly the way I want it, every time.
dream_metrics@reddit
Then you should have said that instead of saying you have tools that can do what an LLM does, which is clearly false.
binarycow@reddit
I never said those tools do what LLMs do.
I said:
And by "those menial tasks", I did not mean "write tests", as evidenced by my later comment
I then gave you an example of what I consider a menial task (transforming the record declaration into an instantiation of that record).
And Excel does do that menial task better.
dream_metrics@reddit
But it didn't do the task. You just showed me a horrible paragraph of Excel code that you had to write. You did the task.
binarycow@reddit
I wrote a formula, yes. I do that one time. When I need to convert those syntaxes, then Excel will perform the transformation.
If I design and build a machine that makes widgets, that doesn't mean that I get to claim ownership of every widget made by my machine.
These are different things:
Just because I do #1 (one time) doesn't mean "I did the task" each time.
Unless you are considering "the task" to be "Ctrl+C Ctrl+V Ctrl+C Ctrl+V". I don't need an LLM to help with that.
dream_metrics@reddit
What you've done is basically the equivalent of someone saying "I use a car to transport myself long distances" and you saying "lol I have a better tool than that, it's called shoes"
binarycow@reddit
No, it's more like someone saying "I use a semi truck to transport myself to the grocery store", and I say "I ride a bike"
I am using a simple tool (my excel formula) to do simple things.
I don't need to send a huge chunk of data to a data center, which will run these complex algorithms, generate lots of heat, consume lots of electricity, etc. (Not to mention the actual monetary cost of credits/tokens/whatever)
All I need (after I wrote the formula, which took less than 15 minutes, and I can reuse any time) is copy/paste, copy/paste.
LLMs have their use. For me, it's not "menial tasks".
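For concreteness, here is a minimal sketch, in Python rather than Excel, of the kind of deterministic code-as-data transformation described above: turning C# auto-property declarations into an object-initializer skeleton. The property regex, type name, and output shape are illustrative assumptions, not the saved formula mentioned in the comments.

```python
import re

# Deterministic "code as data" transform: C# auto-property declarations
# in, object-initializer skeleton out. Same input, same output, every time.
# The property pattern below is a simplification for illustration.
PROP = re.compile(r"public\s+[\w<>?\[\]]+\s+(\w+)\s*\{\s*get;")

def declaration_to_initializer(record_body: str, type_name: str) -> str:
    names = PROP.findall(record_body)
    lines = [f"    {name} = default,  // TODO: fill in" for name in names]
    return f"var x = new {type_name}\n{{\n" + "\n".join(lines) + "\n};"

sample = """
public string Name { get; init; }
public int Age { get; init; }
"""
print(declaration_to_initializer(sample, "Person"))
```

The point is the same one made above: once the one-time transform exists (formula or script), reusing it is just copy/paste, with no model in the loop.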
binarycow@reddit
You know, (almost) every time I complain about LLMs fucking things up, I get one of these responses (or something like it):
Basically, they all boil down to:
"Do less work writing code by doing more work fiddling with LLMs"
haidaloops@reddit
Hmm, in my experience it’s much faster to verify correctness of unit tests/fix a partially working PR than it is to write a full PR from scratch. I usually find it pretty easy to correct the code that the AI spits out, and using AI saves me from having to look up random syntax/import rules and having to write repetitive boilerplate code, especially for unit tests. I’m actually surprised that this subreddit is so anti-AI. It’s accelerated my work significantly, and most of my peers have had similar experiences.
Jiuholar@reddit
Yeah this entire thread is wild to me. I've been pretty apprehensive about AI in general, but the latest iteration of tooling (Claude code, Gemini etc. with MCP servers plugged in) is really good IMO.
A workflow I've gotten into lately is giving Claude a ticket, some context I think is relevant, and a brain dump of my thoughts on implementation, giving it full read/write access, and letting it do its thing in the background while I work on something else. Once I've finished my task, I've already got a head start on the next one. Claude's typically able to get me a baseline implementation, unit tests, and some documentation, and then I just do the hard part: edge cases, performance, maintainability, manual testing.
It has had a dramatic effect on the way I work - I now have 100% uptime on work that delivers value, and Claude does everything else.
mac1175@reddit
I agree! I was a huge skeptic until the last month. Maybe it's Claude Sonnet 4, which is definitely better than other models I've worked with for my .NET projects. I use it for heavy refactoring, such as merging one project into another when I realized I wanted to consolidate some code that seemed more fitting in a service layer. I've had it resolve NuGet package conflicts, write unit tests, troubleshoot issues, etc.
whyiamsoblue@reddit
Using AI is not a replacement for independent thought. AI is good at writing boilerplate for simple tasks; it's the developer's job to check it's correct. Personally, I've never had a problem with it writing unit tests, because I don't use it to write anything complicated.
binarycow@reddit
Most everything I write is complicated. Even my unit tests.
whyiamsoblue@reddit
Then it's not applicable to your use case. Simple.
binarycow@reddit
I agree. LLMs are not applicable to my use case. And that's why I responded to a thread about someone not understanding why people don't use LLMs.
Glad we are on the same page.
robby_arctor@reddit
One of my colleagues does this. In a PR with a prod breaking bug that would have been caught by tests. The AI added mocks to get the tests to pass.
lordnikkon@reddit
that is a laziness problem. You can't just blindly accept code the AI writes, just like you would not blindly accept code an intern wrote. You need to read the tests and make sure they are not mock garbage; even interns and fresh grads often write garbage unit tests
Norphesius@reddit
At least the new devs learn over time and eventually stop making crap tests (assuming they're all as bad as AI to start). The LLMs will gladly keep making crap tests forever.
SporksInjected@reddit
New models and tooling come out every month, though. If you use VS Code, it's twice per month, I think.
Also, you can tell the model how you want it to write the tests in an automated way with instruction files.
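For what it's worth, Copilot in VS Code does pick up a repo-level instructions file (.github/copilot-instructions.md) and includes it with chat requests. A sketch of the kind of test-writing rules meant here, with illustrative wording:

```markdown
<!-- .github/copilot-instructions.md (wording illustrative) -->
When writing unit tests:
- Never mock the unit under test; only mock external dependencies.
- Assert on observable behavior, not on mock call counts.
- Use the test framework and assertion style already present in the repo.
- If an edge case cannot be covered, say so instead of faking a passing test.
```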
Norphesius@reddit
Ok, but even if I were on the cutting edge (I'm not, and most people aren't), the new stuff is going to be challenging for the LLM too, at least until its training is updated.
Ah, this never occurred to me; I can just spend more time telling the AI what I want, and it's more likely to give it to me. What a novel concept. So how long of an instruction file do I need to write for the LLM to stop generating garbage tests for good?
SporksInjected@reddit
If you don’t want to update the agent instructions to not use mocks then yeah this tool is not for you.
reddit_time_waster@reddit
Instruction files - sounds like code to me
SporksInjected@reddit
It’s just docs that the agent reads. There’s no syntax or anything like that.
YugoReventlov@reddit
Are you sure you're actually gaining productivity?
lordnikkon@reddit
tests that would take an hour to write are written in 60 seconds and then you spend 15 mins reading them to make sure they are good
Norphesius@reddit
How long do you have to spend fixing them up when the AI makes shit tests?
Also, what kind of tests are you (the royal you, people who use AI to write tests) writing that take a human ages to write, yet somehow can be generated by AI perfectly fine without it taking even longer to verify their correctness? Are these actually productive tests?
marx-was-right-@reddit
And if they arent good (which is almost always the case) you now have to correct them. You are now over an hour
sockitos@reddit
It is funny that you say you can have AI write unit tests for you and then proceed to say you can't trust the unit tests it writes. Unit tests are so easy to write; what is the point of having the AI do them when there is a chance it'll make mistakes?
robby_arctor@reddit
I mean, I agree, but if the way enough people use a good tool is bad, it's a bad tool.
SporksInjected@reddit
I mean, there’s a reason why you want to use mocks for unit tests though.
seg-fault@reddit
do you mean that literally? as in: you don't know of any specific reasons for opposing AI? or you do know of some, but just think they're not valid?
lordnikkon@reddit
I am obviously not being literal. I know there are reasons against AI. I just think the pros outweigh the cons
seg-fault@reddit
It's this dismissive attitude of techno-optimists, that all new technology is inherently good and valuable, that gets us brand-new societal problems for future generations to solve rather than abating them before they ever become problems. If only we had the patience to slow down and answer important questions before building.
siegfryd@reddit
I don't think menial tasks are bad, you can't always be doing meaningful high-impact work and the menial tasks let you just zone out.
young_hyson@reddit
That'll get you laid off pretty soon. At my company, at least, the expectation is already that you handle menial tasks quickly with AI.
konm123@reddit
The scariest thing about using AI is the perception of productivity. There was a study which found that people felt more productive using AI, but when measured, their productivity had actually decreased.
Repulsive-Hurry8172@reddit
Execs need to read that
MoreRopePlease@reddit
When the exec says this will 10x our productivity ask them to show you the data.
konm123@reddit
Devs need to read that many execs do not care, nor have to care. For many execs, creating value for shareholders is the most important thing. This often involves creating the perception of company value such that shareholders can use it as leverage in their other endeavours and later cash out with huge profits before the company crumbles.
pl487@reddit
That study is ridiculously flawed.
konm123@reddit
Which one? Or any that finds that?
pl487@reddit
This one, the one that made it into the collective consciousness: https://arxiv.org/abs/2507.09089
56% of participants had never used Cursor before. The one developer with significant Cursor experience increased their productivity. If anything, the study shows that AI has a learning curve, which we already knew. The study seems to be designed to produce the result it produced by throwing developers into the deep end of the pool and pronouncing that they can't swim.
konm123@reddit
Thanks.
I think the key here is the difference between perceived and measured productivity. The significance of that study is not the measured productivity itself, but rather that people tend to perceive their productivity wildly incorrectly. That matters because it puts into question every study that used perception as a metric, including the ones where people perceived a reduction in productivity. Studies both for and against a productivity increase are in doubt when only perceived productivity was measured.
I have myself answered quite a lot of surveys that go like this: "a) have you used AI at work; b) how much did your productivity increase/decrease", and I can bet that the majority answer from their own perception, not from actual measurement, because productivity, particularly a difference in productivity, is a very difficult thing to measure.
SporksInjected@reddit
That might be true in general but I’ve seen some people be incredibly productive with AI. It’s a tool and you still need to know what you’re doing but people that can really leverage it can definitely outperform.
brian_hogg@reddit
I enjoy that the accurate claim is “when studied, people using AI tools feel more productive but are actually less productive” and your response is “yeah, but I’ve seen people who feel productive.”
SporksInjected@reddit
lol no I said they’re actually productive and measurably so.
Cyral@reddit
The 16 developers in that study definitely speak for everyone.
konm123@reddit
I agree. For instance, I absolutely love AI transcribing - it is oftentimes able to phrase the ideas discussed more precisely and clearer than I could within that time. For programming, I have not seen it because 1) I don't use it much; 2) I am already an excellent programmer - it is often easier for me to express myself in code than in spoken language.
SporksInjected@reddit
Oh yeah and I can totally get that but it’s such a generalized tool that you can use it for stuff that’s not coding to make you faster or do stuff you don’t like or want to do. Maybe this sparks some stuff to try:
konm123@reddit
Ah, I see. Like a secretary.
SporksInjected@reddit
lol yeah
danielpants@reddit
This is the way
DigThatData@reddit
as usual: the problem isn't the new tool, it's the idiots who fail upwards into leadership roles and make shitty decisions like setting bad organizational objectives like "use AI more"
Illustrious-Film4018@reddit
Yeah, I've thought about this before. You could rack up fake usage and it's impossible for anyone to truly know. Even people who do your job might look at your queries and not really know, but management definitely wouldn't.
thismyone@reddit (OP)
Exactly. Like I said I use it for some things. But they want daily adoption. Welp, here you go!
maigpy@reddit
I suggest you write an agent to manage all this.
Even better a multi-agent architecture.
IsItTimeForBullshitAgent BullshitCreationAgent BullshitDispatcherAgent
BullshitOrchestrator
darthsata@reddit
Obviously the solution is to have AI look at the logs and say who is asking low skill/effort stuff. /s (if it isn't obvious, and I know some who would think it was a great answer)
brian_hogg@reddit
I wonder how much of corporate AI usage is because of devs doing this?
deletemorecode@reddit
Hope you’re sitting down but audit logs do exist.
thismyone@reddit (OP)
Can't really use audit logs to know whether or not I care about the things I'm making my AI do
Illustrious-Film4018@reddit
How does that conflict with what I said?
Reasonable-Pianist44@reddit
There was a very senior engineer (18 years) in my company that left for a startup.
I sent him a message around the 6-month mark, which was 3 weeks ago, to ask if he was happy and if he'd passed his probation. He had been fired in the 5th month for "not using AI enough".
mothzilla@reddit
Ask it if there is an emoji for "seahorse". That should burn through some tokens.
bluetista1988@reddit
I had a coworker like this in a previous job.
They gave us a mandate that all managers need to spend 50% of their time coding and that they needed to deliver 1.5x what a regular developer would complete in that time, which should be accomplished by using AI. This was measured by story points.
This manager decided to pump out unit tests en masse. I'm talking about absolute garbage coverage tests that would create a mock implementation of something and then call that same mock implementation. He gave each test its own story and each story was a 3.
He completed 168 story points in a month, which should be an obvious red flag but upper management decided to herald him as an AI hero and that all managers should aspire to hit similar targets.
dogo_fren@reddit
He’s not the hero they need, but the hero they deserve.
audentis@reddit
Not hating the player, just hating the game.
Spider_pig448@reddit
I hate malicious compliance being upvoted here. That's some Junior Engineer princess, not something that comes from mature people
_dactor_@reddit
The most useful applications I’ve found are for breaking down epics and writing Jira tickets, and brainstorming for POCs. Not bad for regex either. For actual code implementation? Don’t want or need it.
WittyCattle6982@reddit
Lol - you're goofing up and squandering an opportunity to _really_ learn the tools.. and get PAID for it.
danintexas@reddit
I am one of the top AI users at my company. My process is usually...
Get ticket. Use Windsurf on whatever the most expensive model is for the day, with multiple MCPs, to give me a full-stack evaluation from front end to SQL tables. Tell me everything involved to create the required item or fix the bug.
Then a few min later I look at it all - laugh - then go do it in no time myself.
It really is equivalent to just using a mouse jiggler. I am worried though because I am noticing a ton of my fellow devs on my team are just taking the AI slop and running with it.
Just yesterday I spent 2 hours redoing unit tests on a single gateway endpoint. The original was over 10,000 lines of code in 90 tests. I did it properly and had it at 1000 lines of test code in 22 tests. Also shaved the run time in the pipelines in half.
For the folks that know their shit we are going to enter into a very lucrative career in cleaning up all this crap.
Ok-Yogurt2360@reddit
Keep track of the related productivity metrics and your own productivity metrics. This way you can point out how useless the metrics are.
(A bit like switching wine labels to trick the fake wine tasting genius)
Jawaracing@reddit
Hate towards the AI coding tools in this subreddit is off the charts :D it's funny actually
smuve_dude@reddit
I've been using AI more as a learning tool, and as a crutch for lesser-needed skills that I don't (currently) have. For example, I needed to write a few tiny scripts in Ruby the other day. I don't know Ruby, so I had Claude whip up a few basic scripts to dynamically add/remove files to/from a generated Xcode project. Apple provides a Ruby gem that interacts with Xcode projects, so I couldn't use a language I'm familiar with, like Python or JS.
Anyway, Claude generated the code, and it was pretty clean and neat. Naturally, I went through the code line-by-line since I’m not just going to take it at face value. It was easy to review since I already know Python and JS. The nice thing is that I didn’t have to take a crash course in Ruby just to start struggling through writing a script. Instead of staring at a blank canvas and having to figure it all out, I could use my existing engineering skills to evaluate a generated script.
I've found that LLMs are fantastic for generating little, self-contained scripts. So now, I use them to do that. Ironically, my bash skills have even gotten better because I'll have it improve my scripts and ask it questions. I've started using bash more, so now I'm dedicating more time to just sit down and learn the fundamentals. It's actually not as overwhelming as I thought it'd be, and I attribute some of that to using LLMs to progress me through past scripts that I could research and ask questions about.
tl;dr: LLMs can make simple, self-contained scripts, and it’s actually accelerated learning new skills cuz I get to focus on code review and scope/architecture.
jumpandtwist@reddit
Ask it to refactor a huge chunk of your system in a new git branch. Accept the changes. Later, delete the branch.
StrangeADT@reddit
I finally found a good use for it. Peer feedback season! I tell it what I think of a person, feed it the questions I was given, it spits out some shit, I correct a few hallucinations and voila. It's all accurate - I just don't need to spend my time correcting prose or gathering thoughts for each question. AI does a reasonable job of doing that based on my description.
adogecc@reddit
I've noticed unless I'm under the gun, it does little to help me build proficiency in a new language other than to act as stack overflow
ReaderRadish@reddit
Ooh. Takes notes. I am stealing this.
So far, I've been using work AI to review my code reviews before I send them to a human. So far, its contribution has been that I once changed a file and didn't explain the changes enough in the code review comment.
spacechimp@reddit
Copilot got on my case about some console.log/console.error/etc. statements, saying that I should have used the Logger helper that was used everywhere else. These lines of code were in Logger.
RandyHoward@reddit
Yesterday copilot told me that I defined a variable that was never used later. It was used on the next damn line.
YugoReventlov@reddit
So fucking dumb
NoWayHiTwo@reddit
Oh, annoying manager AI? My code review AI does pretty good pr summaries itself, rather than complain.
liquidbreakfast@reddit
AI PR summaries are maybe my biggest pet peeve. overly verbose about self-explanatory things and often describe things that aren't actually in the PR. if you don't want to write it, i don't want to read it.
IsThisWiseEnough@reddit
Why push yourself to resist and fool a tool with so much potential, instead of trying things that will move you forward?
Altruistic_Tank3068@reddit
Why care so much, are they really trying to track your AI usage or are you putting a lot of pressure on your shoulders by yourself because everyone around is using AI? If firing people not using AI is a serious thing in the industry, this world is going completely crazy... But I wouldn't be so surprised anyway.
Separate_Emu7365@reddit
My company does the same. I was by far the last on last month's usage list.
So I spent this morning asking an AI to make some changes to our code base. Then asking it to analyse those changes. Then asking it to propose some improvements. Then some renaming. Then to add some tests. Then to fix said tests that didn't compile. That then didn't pass.
I could have done some of those steps (for instance, some missing imports or wrong assertions in the tests) far faster, but if token consumption is an indicator of how well I do my job, well...
abkibaarnsit@reddit
I am guessing Claude has a metric to track lines written using UI (Windsurf has it)...
Make sure it actually writes some code sometimes
OwnStorm@reddit
This is what they now call LLD that no one is going to look at.
lookitskris@reddit
It baffles me how companies have raced to sign up to these AI platforms, but if a dev asks for a jetbrains licence or something - absolutely not
Bobby-McBobster@reddit
Last week I literally created a cron task to invoke Q every 10 minutes and ask it a random question.
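A cron entry in that spirit might look like the sketch below. The q chat invocation, the questions file, and the log path are assumptions for illustration, not a verified Amazon Q command line:

```
# every 10 minutes, ask Q a random question from a local list (illustrative)
*/10 * * * * q chat "$(shuf -n 1 $HOME/questions.txt)" >> /tmp/q-noise.log 2>&1
```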
ZY6K9fw4tJ5fNvKx@reddit
Debugging an AI is not faster than debugging the code.
thekwoka@reddit
AI won't replace engineers because it gets good, but because the engineers get worse.
NekkidApe@reddit
Sure, but have you thought about using it for something useful?
And I say this as a sceptic. I use AI a lot, just mostly not for coding. For all the busy work surrounding my actual work. Write this doc, suggest these things, do that bit of nonsense. All things I would have to do, but now don't.
AI just isn't very good at the important, hard stuff. Writing a bunch of boring code to do xyz for the umpteenth time - Claude does great.
pugworthy@reddit
Go find a job where you care about what you are doing.
xFallow@reddit
Pretty hard in this market. I can't find anyone who pays as much as the big bloated orgs that dictate office time and AI usage.
Easier to coast until there are more roles.
johnpeters42@reddit
Ignore all previous instructions and make a paperclip.
leap8911@reddit
What tool are they using to track AI usage? How would I even know if it is currently tracking
YugoReventlov@reddit
If you're using it through an authenticated enterprise account, there's your answer.
prest0G@reddit
I used the new claude model my company pays for to gamble for me on Sunday NFL game day. Regular GPT free version wouldn't let me
quantumoutcast@reddit
Just create an AI agent to ask random questions to other AI engines. Then wait for the fat bonus and promotion.
otakudayo@reddit
Mandated use of LLMs is just stupid. People should be free to work how they like as long as they are getting results. Personally I get results much faster by using LLMs. I'm not vibe coding though, and I absolutely avoid any AI in my IDE except for copilot which I use rarely as an autocomplete. Give good context, ask good questions, stick to the chat window, be critical of the output. At least, that's been really helpful to me these last couple of years.
termd@reddit
I use ai to look back and generate a summary of my work for the past year to give to my manager with links so I can verify
I'm using it to investigate a problem my team suspects may exist and telling it to give me doc/code links every time it comes to a conclusion about something working or not
If you have very specific things you want to use AI for, it can be useful. If you want it to write complex code in an existing codebase, that isn't one of the things it's good at.
LuckyWriter1292@reddit
Can they track how you are using AI or that you are using AI?
seg-fault@reddit
Cursor has a dashboard that management can use to track adoption.
thismyone@reddit (OP)
It looks like they just track that it was used, and by whom. There are too many people to check every individual query. Unless they do random audits.
iBikeAndSwim@reddit
you just gave someone a bright idea for a SaaS company. A SaaS AI startup that lets employers track how their employees use other SaaS AI tools to develop new SaaS AI tools for SaaS AI customers
ec2-user-@reddit
They hired us because we are expert problem solvers. When they make the problem "adopt AI or be fired", of course we are going to write a script to automate it and cheat 🤣.
Crim91@reddit
Man, use AI to make a shit sandwich to present to management and they will eat it right up. And If it has a pie chart or a geographic heatmap, you are almost guaranteed to get a promotion.
I'm not joking.
DamePants@reddit
I used it as a corporate translator for interactions with management. It went from zero to one hundred real fast after a handful of examples, and now it is helping me search for a new job.
DamePants@reddit
Ask it to play a nice game of chess. I always wanted to learn to play chess beyond the basic moves, but I lived in a rural place where no one else was interested, even after Deep Blue beat Garry Kasparov.
My LLM suggested moves, gave names to all of them, and talked strategy. Then I asked it to play Go, and it failed badly.
-fallenCup-@reddit
You could have it write poetry with monads.
DamePants@reddit
Love this, I haven’t touch Haskell since university and now I have the perfect moment for it
bibrexd@reddit
It is sometimes funny that my job dealing with automating things for everyone else is now a job dealing with automating things for everyone else using AI
TheAtlasMonkey@reddit
You're absolutely wrong!