Shameless Copy/Paste use of Gen AI by Engineers/Executive Tech
Posted by Askey308@reddit | sysadmin | View on Reddit | 81 comments
Anyone else experiencing an increase of engineers (not juniors, who can potentially be forgiven) and tech executives using AI like ChatGPT/Claude to troubleshoot a problem, then copying the entire AI answer, not even rewritten, just copied, and mailing the clients the AI slop?
Then the clients reach out for you to make sense of it, just to realize that the AI answer has nothing to do with the problem, and you see that the engineer who handled the case has a title that includes either "Senior" or "Chief Exec of..." or similar?
We're seeing this more and more, and not just in the tech field. Everywhere, people just shamelessly copy and paste entire emails into GPT, generate an answer, and paste the reply directly to the clients.
raip@reddit
Yeah - but as long as it's accurate, I just assume they have an AI KPI metric that's becoming more and more common in orgs.
I never thought I'd say this, but I miss 2020. Getting my lack of AI usage brought up on my performance eval was a real kick in the nuts.
onbiver9871@reddit
I overplayed my hand on this one; we didn’t have a KPI directly related to AI use, but it was getting pushed on us a lot in the day to day, so I tried to thread the needle and act excited about certain use cases around productivity (specifically around spec driven development) that I could maaaaybe see as interesting. You know, play the game and all that.
Well, that was interpreted as real, genuine interest in AI, and my entire job description was formally pivoted: I am now being tasked, with KPIs, with making agentic solutions full time. Like, I got taken off projects I was very active on, that were not finished, and in which I was a strong IC.
So, ask an AI how I feel about that… :|
TargetBoy@reddit
Make an agent that replaces your direct superior.
skyxsteel@reddit
AI KPI????? Really???
kachunkachunk@reddit
It's a thing. I have a cousin that works at a certain company that uhh... provides carts instantly online... doing data science shit or somesuch. They are routinely measured on AI token spend.
simAlity@reddit
Don't those tokens cost the company money?
JoustyMe@reddit
The idea is that those tokens are multipliers for employee output.
simAlity@reddit
Reminds me of how Disney used to require its animators to recycle animation when producing new films as a way to save time and money on creating original artwork. Problem was, their animators didn't have encyclopedic knowledge of every bit animation ever produced. So they spent more time and money scouring the archives, looking for appropriate animation, and then redrawing it.
Valdaraak@reddit
Yes, and that metric is such that spending more makes you look better. It's completely ass backwards to how a company should be measuring AI use and its impact.
sobeitharry@reddit
Yep. Officially part of our goals this year. Even better is that they announced it, but we're too cheap to purchase seats for everyone, so you're SOL if you didn't ask for one early. People are pissed.
F0rkbombz@reddit
Same here and it was org-wide, although it was basically just “learn about AI and find ways to use it to improve efficiency”. Too easy.
skyxsteel@reddit
Do they actively check if you're using it for work? Or can you ask stupid questions for the sake of using up tokens?
sobeitharry@reddit
Both. One group is checking usage and then they are expecting summaries of actual productivity gains and wins. I created a tool that provides an ongoing report of hours saved, Eureka moments, code generated, etc. Ironically no one had thought to do that yet and I got a gold star from the C level. 🙄
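A tool like that can be as simple as rolling up self-reported entries into category totals. This is a minimal sketch, not the commenter's actual tool; the entry fields (`type`, `value`) and category names are invented for illustration:

```python
# Hypothetical sketch of an AI-"wins" rollup report. A real tool would pull
# entries from wherever usage is actually logged; this just sums categories.
from collections import Counter

def summarize(entries):
    """Sum self-reported wins by category (hours saved, eureka moments, ...)."""
    totals = Counter()
    for entry in entries:
        totals[entry["type"]] += entry["value"]
    return dict(totals)

log = [
    {"type": "hours_saved", "value": 2.5},
    {"type": "eureka_moments", "value": 1},
    {"type": "hours_saved", "value": 1.0},
    {"type": "code_blocks_generated", "value": 4},
]

print(summarize(log))
# {'hours_saved': 3.5, 'eureka_moments': 1, 'code_blocks_generated': 4}
```

The irony stands: the report only aggregates whatever numbers people choose to self-report.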
mugenbool@reddit
Same here. My performance goals are tied to org-wide AI initiatives. Kill me now
Liquidennis@reddit
SAME. It’s absolutely ridiculous.
Valdaraak@reddit
Amazon (?) literally judges people based on how many tokens they use.
By that I mean if you use fewer tokens, you're the bad one. This has led to "tokenmaxxing".
colonelpopcorn92@reddit
Much more common than you'd think, unfortunately.
Papfox@reddit
Both a KPI and monitored here. If I don't have at least three conversations with our LLM a day, it tattles to our VP that I'm not using it, and I get it from my manager.
BoredTechyGuy@reddit
Same here - it’s so dumb. You must create an agent for something, you must create an automation with it, you must …
It’s so fucking dumb. It’s a tool, use it where it makes sense, not everywhere…
Askey308@reddit (OP)
Accurate... phew, if only. Because then I could maybe learn something. But the last one was throwing stuff in about MS Defender XDR blocking the email before it reaches Exchange...
They (we) were troubleshooting the vendor's app, which fails halfway through install due to missing DLLs...
Phoned the vendor, spoke to the exact engineer in the mail trail, only to be told "but that was what I got from Claude." Like, dude, at least know your product and try to answer the relevant issue at hand, not some random email stuff...
Eventually they found that the latest published install package was faulty, and we were provided with another installer that worked.
What do you mean by "AI KPI"? You want to tell me qualified/experienced people are forced to use something that's prone to hallucinating information?
raip@reddit
Yeah, there's a dashboard in my org that shows my team's token usage. I'm at the Principal Engineer level and my team is filled with Staff/Senior Engineers and architects. It's going to be a huge part of our merit-based raises this year.
I'm very mid on AI and try to be very intentional with how I use it, same with social media. Sadly, it's becoming apparent that just like it's hard to compete at a certain level without LinkedIn, it's going to be hard to compete without compulsively using AI for stuff soon too.
Kat-but-SFW@reddit
Write a script to copy-paste its output into its input, then go do some real work while you fly to the top of your metrics.
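For the record, that's a few lines of Python. This is a sketch of the joke, not a real integration: `call_llm` is a dummy stand-in for whatever token-billed client an org actually uses.

```python
# Sketch of the metric-gaming loop above: feed the model's output back in as
# its next input. call_llm is a placeholder, not a real provider API.

def call_llm(prompt: str) -> str:
    # Dummy stand-in; a real call would hit a chat-completion endpoint.
    return f"Expanded thoughts on: {prompt}"

def tokenmaxx(seed: str, rounds: int = 3) -> int:
    """Loop output into input and return a crude 'tokens spent' count."""
    text = seed
    spent = 0
    for _ in range(rounds):
        text = call_llm(text)
        spent += len(text.split())  # whitespace words as a rough token proxy
    return spent

print(tokenmaxx("quarterly status report"))
```

Each round the prompt gets longer, so "spend" grows superlinearly — exactly what a token-maximizing KPI rewards.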
BlazneeX@reddit
I feel this in my soul.
Papfox@reddit
Yes. That's exactly what's happening. I have a KPI to use AI for everything and, if I don't use it, I will face discipline from my VP.
I'm very careful not to use it in a way that teaches it how to do my job, so it now knows a lot about building the Meshcore network and home-scale OPNSense firewalls. The thing is shit and I hate it. I've been forced to use it to generate code, so I'm now recorded as the author of Botocore 3 based software, but not only did I not learn Botocore, my coding skills are getting rustier because I'm not coding at all; I'm handing in Code Helper Gemini's homework.
I've ceased to give a shit. If the CEO thinks having shitty vibe code that was knocked together in 30 seconds by Gemini and nobody understands loose in our production environment is vital for our success then he needs to learn it's not, the hard way.
ChangeWindowZombie@reddit
I am, and it's a problem that will only continue to get worse. As AI gets perceivably better, blind faith in AI will continue increasing.
I explain AI as a teenager giving you an answer to make you happy, and it will fabricate that answer when needed (a when, not an if). It's your job as the user of the tool to decipher the reply and validate its accuracy, but most are skipping this step because that requires actual work, including those in IT.
A message from our CIO was escalated to me about how all the installed software on our firm laptops is causing excessive WMI calls, which is slowing the computers down significantly, and I was asked to investigate. I wasted two hours of my life proving that the AI was gaslighting and had no real evidence to support that statement. I then identified that the issue was only occurring on a specific generation of laptops, and quickly found it to be related to missing drivers after the Win11 in-place upgrade. This was escalated all the way to me because everyone else just accepted the AI response and didn't perform any actual troubleshooting.
DaisuIV@reddit
Clearly "AI" was trained on the internet; it's just exploiting Cunningham's Law.
justofit@reddit
I had a VP ask for laptop specs, take my three recommendations, throw them into Claude, not read Claude's output, and claim it recommended one it explicitly did not.
these tools are melting people's brains.
WindowsVistaWzMyIdea@reddit
Not at all
``
That double backtick is ending up in lots of code... just lazy not to remove it... even lazier not to test it.
M365Expert@reddit
Yes, it's crazy. I use AI a lot, but I always read the output and modify it before sharing. It "hallucinates" (lies) like crazy, and when I catch it, it's always "you're right, my bad, I should have researched that better".
ohfucknotthisagain@reddit
Why are executives and engineers in non-support roles handling support issues in the first place?
You've got a bigger problem with the organization than just reliance on AI slop.
This is a problem for management. They need to decide who should be supporting clients, and they need to decide what constitutes proper support.
And by "proper support", I mean this:
If I got irrelevant AI slop in response to a support ticket, I'd be looking for a new vendor or MSP immediately. I'm paying for answers, which means knowledge, judgment, and/or testing. I can get AI slop myself for free; I'm not paying you to get it for me.
Generico300@reddit
Management is the problem. If they knew how to handle such a problem, it would not be a problem in the first place.
Askey308@reddit (OP)
3rd party companies that also deal with our clients for their products.
bedpimp@reddit
Sweet summer child, this has been happening since the advent of search engines. GPT copypasta is just an evolved StackOverflow copypasta. macOS and Windows network stacks? BSD copypasta.
itishowitisanditbad@reddit
Throw their AI into your AI and ask it to be incredibly verbose, sneaking in instructions to any AI that might read it. 5 page minimum responses.
You know they'll do the same thing.
The goal is to get them to mail you a cupcake recipe or something.
Make a game from it. Have a little fun.
I'm working on an AI email responder for sales people that has strict instructions to book meetings at least 1 week out, then cancel and reschedule every single day, even though it's weeks out.
If people are going to be animals, make a zoo out of it, yknow?
PS_Alex@reddit
Ask for Queen Elizabeth II's recipe for drop scones
TheLexoPlexx@reddit
That's the proper "make lemonade"-approach.
Askey308@reddit (OP)
Now this is a fun idea....dangerous...but fun. Love it.
standish_@reddit
Don't forget that some document formats support white text on a white background, which a silly meatsack won't be able to read, but a superior silicon stack will. SiliStacks also make short work of Morse code. See: "The Morse Code Hack That Made an AI Agent Spend $200,000" by Dave's Garage.
RepulsiveDuck331@reddit
Seeing this constantly, within the organization as well as from non-techies citing AI findings. I personally don't think it can be stopped; instead, it's going to keep growing for a while.
I believe there's a positive in this, though. Out of all the noise, there are cases where I get to explore some useful ifs and buts, which to me is a lot of fun (I love tech).
However, I completely agree it's a golden time for "fake it till you make it".
F0rkbombz@reddit
Yup, and they end up getting ignored by everyone with a brain.
If someone can’t be bothered to write their own email, IM, or even use their own brain to do their job, then they aren’t owed anything, not even a response.
Tyr--07@reddit
It's sloppy and it wastes people's time. It's disrespectful. I don't care if people use AI as a tool, but I've had a situation where a consultant claimed to be an xyz consultant and clearly knew fuck all about it. While we were trying to support that company, they kept sending us clearly AI slop answers because they didn't know what they were talking about.
If we told them things weren't possible, they'd find a Microsoft link that said the exact same thing, but they thought they were being clever by showing us the material, or by arguing with us based on the AI hallucination.
The person was a complete waste of time. I just refused to deal with them or even look at their emails. Made it someone else's problem.
Windows95GOAT@reddit
One of our friends said he refuses to mail anymore without AI, and now he's upset his colleagues said they stopped reading his emails at all because of the lack of effort lmao.
Valdaraak@reddit
Ask him what he plans to do when they cut him after realizing he can be replaced with an AI.
FreakySpook@reddit
Is he also slop posting overly enthusiastic 500 word updates on LinkedIn about paradigm shifts in how we perceive and solve problems using AI?
Valdaraak@reddit
I'll tell you what our retainer MSP's CTO told his team when they got caught doing that:
"If you just copy and paste output, you become an AI middleman. And what happens to middlemen? They get cut when the client decides to just talk straight to the AI."
Jazzlike-Vacation230@reddit
critical thinking is about to become a commodity
dogs_gt_cats@reddit
IMHO as long as companies are pressuring senior/chief level staff to use AI as much as possible while simultaneously cutting any junior and mid-level staff to force its use, this is going to continue getting worse.
LaDev@reddit
I've had AI responses copied to argue different things. I stop responding once you insert an AI response.
Library_IT_guy@reddit
I've seen it from sales people trying to sell us stuff. Shamelessly quoting ChatGPT as a "source", and clearly formatting things to make them look worse.
Jokes on those assholes though, I used ChatGPT to do a total cost of ownership comparison between all the vendors trying to sell us managed print services.
It's an arms race now. One side uses AI to bullshit, the second side uses AI to see through the bullshit. The megarich tech bros win both ways. The future sucks.
ImCaffeinated_Chris@reddit
We are in an ai push company wide. The company is employee owned and full of ridiculously smart people. They are taking it very seriously. Everything needs to be human checked. We have biweekly AI meetings to go over new features and use cases. They created policies before they even started using it. They understand it's a tool.
There isn't a technical solution to a management issue. We tell people not to just copy n paste ai answers to customers. If they do, that's an HR issue.
dllhell79@reddit
I'll do you one better. We have one guy in our org that is clearly just reading directly from AI responses in meetings. He is suddenly more verbose and using words and phrases he's never used before.
Centimane@reddit
Let this be a lesson to you:
Titles don't make people smart.
Normal_Choice9322@reddit
You should get over this because it's not going away
n00lp00dle@reddit
i saw an email thread the other day asking a senior for direction about a warning from checkov or something. one of the seniors responded with instructions on how to fix it. detailed bullet points. except it was a completely hallucinated fix for an issue that didnt map to the warning. it took the code and just made up the warning and the solution. thankfully someone else caught it, but it was a funny thread that im sure will be a source of embarrassment at christmas parties.
pawwoll@reddit
Yes — and a lot of people across engineering, consulting, support, product, legal ops, and management are noticing the same pattern.
The real issue usually isn’t “using AI.” Most experienced engineers already use tools like OpenAI ChatGPT or Anthropic Claude as:
The problem is unreviewed delegation.
You can usually spot it immediately because the output has telltale characteristics:
And the higher the seniority, the worse it can look, because clients assume:
Instead, they often pasted:
What’s especially damaging is that LLMs are optimized to produce plausible communication, not necessarily verified diagnosis. A junior doing this usually reflects a lack of experience. A senior doing it without verification is a process failure or complacency.
There’s also a cultural shift happening:
So the incentive becomes:
The downstream effect lands on people like you who then have to:
Ironically, the best engineers tend to use AI quietly:
You usually can’t even tell AI was involved.
The worst usage pattern is not “AI assistance.” It’s:
That’s what clients are reacting to.
_the_r@reddit
Oh yes, I know that very well. I have to deal with at least 2 tech leads who are unable to use their brains and put everything into Claude, even when it would take less time to read it themselves.
dioptase-@reddit
welcome to the decade of the dumb fuck.. you share the road with those people every day
Askey308@reddit (OP)
I feel you. We have one here doing the same, but the rest of us trialed GPT and Claude and we're just, nah, it's still easier to RTFM and forum hunt than to go down GPT rabbit holes.
It does work nicely for quick script generation in a few niche areas, so it has its time and place, but it's definitely far from a replacement.
GaelinVenfiel@reddit
Yep. Been trying to use it for basic troubleshooting and some scripting. Since things change so much... it just spits out outdated commands and quotes solutions for the wrong versions of the apps I support.
But I have to keep trying to see if it gets better. So far, it wastes a lot of my time.
The more expensive options may work...but we will never see that. And right now the prices are rock bottom. Tokens cost more than people NOW.
Gonna be interesting.
shimoheihei2@reddit
Copy and paste is such a weak, 2023 use of AI. They should be fully integrated into their agentic CoWork. But really, developers have been relying on Stack Overflow since forever. This is nothing new, just another level of laziness.
Bazzatron@reddit
When all the engineers are disenfranchised by middle management shoving AI down our throats, it's tough to really be upset that you have a workforce of unengaged prompters.
Tech has had a sledgehammer taken to it, and I think we're all sore.
korewarp@reddit
Engineer? Nah.
Gengineer.
Askey308@reddit (OP)
I love this. Request to steal your CGO/Gengineer terms? 🤣
Avas_Accumulator@reddit
An engineer still has to think and verify; that was true even before AI was a thing. The math formula for a critical structure? It must be double-checked. No engineer worth their salt would just take AI answers at face value without going over them, and if they do, well, that salt ain't salting.
eri-@reddit
AI is interesting in the sense that it kind of acts as a great revealer.
Usually, the main driver behind those AI answers is "I don't know the answer myself." It's not laziness.
AI just reveals exactly how lucky you, as a customer, were that shit kept running all those years.
A bit overdramatized, obviously, but..
mbhmirc@reddit
I see internal staff doing this to IT to try and argue why they need some exception or change, but since the AI has no context of the other side, it's always written one-sidedly to please the person, not to solve the issue.
___frostbyte___@reddit
Lol I’m somewhat guilty. One of our Saas guys hit me up and was like “what’s wrong with this script, why is it failing?” And since the systems engineer is on PTO, I was the only other guy he reasonably could come to for a potential quick fix. So what do I do? What anyone else would: drop that bitch into Copilot and say “what’s wrong with this?” And of course, he said “change x to y”, which is what I told him to do 10 seconds later lol. And it worked like a charm.
KandevDev@reddit
the part that gets me is when the AI answer is wrong AND the engineer did not catch it because they did not actually read it. had a vendor senior dev reply to a debug ticket with claude output that contained a code block referencing a function that does not exist in their product. the screenshot included claude's confident "this should resolve the issue". the function literally did not exist. nobody on their side noticed before sending.
the level we have collectively dropped to is not "people use AI", it is "people forward AI". the difference is whether the person hits send having actually read it.
Anxious-Community-65@reddit
A junior reaching for AI to fill a knowledge gap is understandable. A senior engineer or CTO copy-pasting an unverified AI answer to a client is just laziness, but preached as productivity, efficiency, and whatnot. AI doesn't know your client's setup, their history, or the three weird exceptions in their config.
kagato87@reddit
Even if you include those three weird configurations in the prompt...
BigMikeInAustin@reddit
A dude sent me a 6-paragraph AI output for something that could have been 1 sentence.
It took me long enough to read it and find the one sentence I needed that I almost pasted his response into AI to get the main point.
kagato87@reddit
If they're doing copy/paste from AI, they're doing it wrong.
At least load up Code and the extension!
OK serious answer. I had a client get an AI to write a request. Complete with speculative fixes. I returned the favor, burying "fixed, try again" in an equal volume of AI slop. It's the only time I use it for emails.
I haven't had support vendors try it on me yet.
justinDavidow@reddit
You mean a person, like a real physical human being, is still doing the copy/pasting?
Wow, so behind the times. ;) (mostly kidding!)
With agents these days, no human intervention is even required for such a flow. I'm always shocked when I come across some high-level exec with an openclaw agent swarm doing emails for them, only to tell them:
"You should see to your bot, it's not very good. You can see all the stitching and slop all over these messages."
Liquidennis@reddit
I thought everyone just started writing in organized bulleted lists suddenly, weird.
uncertain_expert@reddit
We banned the practice immediately after the first AI email was sent to a customer. In that case the answer wasn’t wrong, but it was clearly AI generated and that is not the impression we want to give.
Og-Morrow@reddit
Tired of writing the same thing over and over, might as well get AI to do it.
Civil_Inspection579@reddit
Yeah, and the worst part is you can instantly tell when nobody actually read the output before sending it lol. AI is great for drafting or brainstorming, but blindly copy-pasting technical explanations to clients is just irresponsible, especially from senior people.
Askey308@reddit (OP)
Right. Then there's also the blind copy-and-paste of emails directly into AI to generate a response. I understand Copilot can now also just reply on your behalf? I may be wrong about this, as we don't integrate that for our clients, yet.
skyxsteel@reddit
We have… or rather, had, a shitty sysadmin at work who was fully riding AI. Once it came to light he was leaking information, he got axed to a tier 1.
Apartheid20@reddit
My MSP is rolling out and encouraging the use of AI recaps for time efficiency. It’s so fucking stupid
ngjrjeff@reddit
One of our helpdesk is doing this to reply to user tickets on troubleshooting. 😆