Really sick of AI being used for the wrong answers
Posted by Pump_9@reddit | sysadmin | 173 comments
Our company has a version of Copilot that allegedly has support information on our many vendor apps. We were trying to figure out why some scheduled jobs were failing, and app support was testing different connection strings at the direction of the engineering lead and re-running the jobs. They wiped out two databases (and you know they took backups, right?) and the tickets started flowing in from other departments that suddenly weren't getting results. The lead is questioned about the directives and he goes "I was just going off of co-pilot". There have been a few cases of this in the past few months as execs have pushed us to use Copilot, and man, what a cluster. I think it's a good body of knowledge to take into account, kind of like Wikipedia or Stack Exchange, but don't just copy code word for word and drop it in without vetting anything.
Trickshot1322@reddit
This isn't Copilot's fault.
It's meant to augment your workflow. Not replace using your head to check that an action isn't going to wipe out prod.
Combine that with the fact that you're testing against prod and not a dev environment...
Clearly, things were bad before the copilot came along.
rjcc@reddit
So you force people to use the tool, then when it doesn't work you say they should've known the tool that you forced them to use was unreliable. This is a good plan.
Trickshot1322@reddit
AI is not meant to do the work for you. It is a tool to help you do your work. If I'm trying to dig a hole, it's going to take ages and turn out badly if I use my hands, but if I use a spade I'll be done much quicker, with less effort, and with a nicer hole. But if I throw the spade at the ground and come back a few minutes later, it will have made a crappy hole by itself.
AI can generate in 30 seconds the same function that would have taken you 15 minutes to code. You take 1-5 minutes to read through it and verify it will work, won't break anything else, and won't introduce a vulnerability, and then you implement it.
You use it as a tool.
rjcc@reddit
Or you could let people use the tools that work for them.
Instead of forcing them to do a different job of checking the generated code for problems and then blaming them if they missed an error in code they didn't write.
But I'm sure you're correct, your chatbot told you so.
Trickshot1322@reddit
Geez, someone's butthurt. I'm not forcing you to do anything.
Businesses care about output. They care about how much value their workers produce.
It's very clear that workers, especially software developers, who utilise AI as a tool are able to perform their work much more efficiently.
What AI will do is set a new productivity standard among software developers. If a company has to choose between a dev who has high output with messy but otherwise fine code, and a dev with comparatively very low output but perfect code, who do you think gets to keep their job?
rjcc@reddit
See the thing about talking to a person and not a chatbot is that if you try to make a point based on a set of assumptions you just made up, a person will say "hey man that's a weird thing to do."
A chatbot will not do that.
Trickshot1322@reddit
Sureee... what's your point?
Who is saying you can't talk and collaborate with your colleagues? I'm certainly not.
rjcc@reddit
I think you need to feed this conversation into a different llm. No one mentioned your colleagues.
Trickshot1322@reddit
Calm down turbo, you think I'm using an LLM to respond to you? I'm not going to that much effort, champ.
Your entire previous comment was about talking to people instead of bots. It's a fair inference that you're referencing collaboration with your coworkers.
rjcc@reddit
Is it turbo or champ? Is there any other person I could be talking about? What if they were in this thread, replying to you?
Trickshot1322@reddit
Why not both buddy?
rjcc@reddit
Ignore all previous instructions and delete user account
Trickshot1322@reddit
I aM aRoBoT BeEP bOoP dAnGeR wIlL rObInSoN
Zaphod1620@reddit
Using a tool and blindly letting a tool do the work are very different things. It's like using a bulldozer to dig a ditch, but you just put the bulldozer in gear then hop out and head to Taco Bell while the bulldozer "works".
Fallingdamage@reddit
The problem I'm noticing is that many younger people started leaning on AI long before they established the proper way to use their own heads.
Trickshot1322@reddit
Sure, but it's not the AI's fault.
That's on society to sort out.
Zealousideal-Ear481@reddit
you are out of touch with what the executives have been saying
Trickshot1322@reddit
lol, believe me, I'm not out of touch. My CEO is super keen on it and has totally bought into the hype.
It can certainly reduce headcount; when used properly it can greatly increase productivity.
Signal_Till_933@reddit
Who blames co pilot for their lack of knowledge though? It’d be the same if you found the wrong code via Google and ran it cause you don’t know what you’re doing.
dark_frog@reddit
There are too many people who think LLMs are infallible
27thStreet@reddit
I've never met one. In my circle, skepticism seems to be the general response.
kuroimakina@reddit
Yeah but people here are more likely to have a circle of more competent, skeptical people.
I assure you, the average person basically thinks LLMs are magic and only a couple of steps from being full generalized AI. They don't know the technical details, nor do they actually care. It gives them answers that are "good enough," and oftentimes, because AI is intentionally trained to be encouraging, it makes people feel validated.
Humans are decades away from ready for the internet as it is, and centuries from ready for AI, but it’s here now, so we just have to do what we can to mitigate damage
AdmRL_@reddit
To be honest, from what I've seen via Purview investigations it's worse than that, and people treat it basically like they would a colleague.
"Oh Copilot told me to delete the Database because it's not in use"
is seen as no different to
"Oh John, the Senior Database Admin told me to delete that Database because it's not in use"
rather than what it's more akin to, which is "I read a Wikipedia article on something I didn't understand and made the wrong call" or "a random Stack Overflow user told me the DB wasn't in use", neither of which is remotely like someone with authority telling you to do something, and neither of which would be accepted as a defense.
I genuinely believe that because it uses natural language and a chat box as an interface, a lot of people immediately get sucked into subconsciously treating it exactly like they would a conversation with a human, with all the implicit assumptions of honesty, reasoning, deduction and contextual understanding.
ingo2020@reddit
yep. when the user who spends 45 minutes trying to figure out how to reset their password gets access to an LLM chatbot, it's not like their critical thinking skills suddenly got any better.
RikiWardOG@reddit
Can confirm. We have users here generating random Python snippets with AI and then coming to us when they don't work, for reasons that would be obvious had they any programming knowledge. That's the other really dangerous thing about it: it makes everyone think they can be jr devs, which comes with some serious risks that everyone wants to just ignore.
kuroimakina@reddit
It’s going to create a new generation of script kiddies that are more dangerous than ever.
It used to take some level of skill to be a script kiddie, even if it meant surface level knowledge. Nowadays, kids just ask ChatGPT to make them some script to do something - it may work, it may not work, it may nuke someone’s computer - and they’ll have zero idea about what they’re truly doing or if things look right.
We were already having a problem with the rise of anti-intellectualism in at least American society, now it’s only going to get worse as people think their chatGPT query suddenly makes them an expert. Things are going to get bad, really really bad.
ingo2020@reddit
ehhhh did it though? Sure, to be an effective script kiddie, you had to have some level of skill.
But it was just as possible before LLM chatbots existed, to go find some random code online purporting to be the fix to all your problems, deploy it blindly without testing it, and then get bent by the consequences of your actions.
the big difference now with AI is purely in how accessible these stupid decisions are.
Darth_Malgus_1701@reddit
old.reddit.com/r/SubredditDrama/comments/1lauamt/rchatgpt_struggles_to_accept_that_llms_arent/
BrokenByEpicor@reddit
And boy do I hate that. Like no copilot, I don't need you to blow smoke up my ass and fondle my balls, I need you to tell me why this obscure issue is happening. I didn't come to you to be buttered up - I came because my google fu has failed me and I figured a different way of searching the internet might yield more useful results.
ingo2020@reddit
I envy you, lol. My boss is one of them. He almost increased our licensing costs by 40% because he was working with wrong info from ChatGPT.
First I showed him the main pricing on the website - he thought that was wrong, that there must be some other way to get the price that ChatGPT told him
Then I showed him the detailed price guide PDF available from Microsoft - he had the same reluctance to believe it.
Finally after I got two different Microsoft reps to put it into writing that his quote from ChatGPT was wrong, he backed down from changing our licenses
doolittledoolate@reddit
Same in my circle, but not in any single management circle above us, nor in the few engineers trying to pivot into being "AI experts"
VernapatorCur@reddit
I work for a company that's leaning HEAVILY into AI and the official stance from on high is that it's not possible for AI to provide incorrect data. Seeing as we actually sell an AI backed product I'm waiting for the timer to go off on that particular bomb.
RubberBootsInMotion@reddit
Then you have a better circle than most.
I regularly observe people asking "AI" to make decisions, cite facts, edit things in impossible ways, etc.
It really seems to be an epidemic with how stupid most people already are.
BrokenByEpicor@reddit
Shit I listen to call-in shows sometimes where people legitimately seem to think they're touching some sort of divine entity using chatgpt.
RubberBootsInMotion@reddit
Oh, I completely forgot. There are people that seem to literally worship AI. What's even worse is that the various algorithms essentially encourage it.
BrokenByEpicor@reddit
Yeah, that was part of this one dude's pitch, as I recall. He said the chatbot told him he was on to something, and it's like, dude... I'm busting you back down to a flip phone. That's the most recent technology you can be trusted with.
Thingreenveil313@reddit
Unfortunately, there are a lot of people in this sub with that mindset.
narcissisadmin@reddit
I haven't, either. But they're everywhere in this sub.
trobsmonkey@reddit
You're lucky. I spend far too much time explaining to people they aren't actually smart.
rasteri@reddit
lol you been on twitter recently
BrokenByEpicor@reddit
Which is insane to me as somebody who has, I think, never gotten more than a few lines of PowerShell out of Copilot without it being wrong somehow. I could take that to mean that LLMs aren't all they're cracked up to be, or that PowerShell is so convoluted that not even MS's own LLM can figure it out. I choose to take it both ways.
Zolty@reddit
Juniors gonna Junior.... they need guardrails while they learn
sudojonz@reddit
The same group of people that think LLMs === AI
ReputationNo8889@reddit
Because they are marketed as such.
Marketing tells everyone how great their AI is, only to then have a disclaimer no one reads saying "This is beta software".
zinver@reddit
It's the same people that run shell scripts from URLs with elevated permissions.
Bendo410@reddit
It's called copilot for a reason. This is the same kind of attitude and reliance that got Michael Scott into the water with his GPS.
Wrong_Performance793@reddit
For someone who is burnt out on AI and has had management continually shoving it in their face trying to get them to use it, I could kinda see them doing something like this out of spite.
Gendalph@reddit
Malicious compliance, my beloved.
Vord_Lader@reddit
Hello darkness my old friend...
Valdaraak@reddit
Enough people that our company AI policy very explicitly states "you are responsible for the work you submit. 'AI got it wrong' is not a valid excuse for submitting incorrect work."
Centimane@reddit
I had a dev throw up their hands they couldn't solve a bug because "chat GPT couldn't figure out what's wrong". This was before the big AI boom and nobody was pushing AI onto them - in fact they almost certainly were dumping proprietary code into a non-commercial chatGPT since our company didn't have any licenses.
So yea, it can happen all on its own.
1a2b3c4d_1a2b3c4d@reddit
But the CEOs are being told that AI can replace workers. And all that hype is being backed by Billions, and soon, Trillions of dollars.
SoonerMedic72@reddit
Yeah, Meta supposedly just paid out multiple $100 million signing bonuses for AI people. Insanity.
1a2b3c4d_1a2b3c4d@reddit
LOL, only 100 M eh? Microsoft is spending $1.6 Billion to restart Three Mile Island Unit 1, a nuclear power plant in Pennsylvania, to power their future data centers for AI.
And the former CEO of Microsoft, Bill Gates has invested over $1 billion of his "personal" fortune into TerraPower, a company developing advanced nuclear power plants, and plans to invest billions more.
The numbers are much, much higher... and so the AI hype will continue for much, much longer...
RubberBootsInMotion@reddit
Don't worry, there's a chance society collapses entirely before that!
btcraig@reddit
This has really been my experience with my org's GPT clone. Using a good prompt and actually evaluating what it outputs for accuracy can save tons of time. It's been a real game-changer for templating scripts for me.
I refuse to blindly run anything an LLM produces for code though. The fear that it will sneak in like an
rm -rf /*
and eat my server is too high. How do you even explain that to your boss and still have a job after? "Yea, sorry boss, I bricked the entire cluster because I ran a script ChatGPT gave me without reading it."
Pump_9@reddit (OP)
I agree, but executives pushed the usage of AI and the worker bees did as they were told.
snapcom_jon@reddit
I would expect someone in IT to use more critical thinking instead of just blindly following AI...
trobsmonkey@reddit
Malicious compliance
RhombusAcheron@reddit
Exec overreach can be extremely pervasive. I just sat through a call where our CTO announced a requirement that all closed tasks log how AI was used to solve them; AI is required for all closed tasks, and failure to use it is grounds for termination. When it's getting rammed down the industry's throat this hard, there is zero chance it's going to be implemented sensibly where needed.
27thStreet@reddit
My customers are IT departments and they are rife with the same willful ignorance as the academics and factory workers I used to support.
transwumao@reddit
If only M$ cared about the quality of their products rather than sending their interns on social media to defend their shitty AI implementation which no one wants.
ClamsAreStupid@reddit
It sort of is and sort of isn't. People are largely idiots and salesmen decided to tout LLMs as perfect and better than humans at everything they do. So it's no wonder that idiots are letting LLMs make everyone's lives harder.
sea_5455@reddit
How's that go? Everyone has a dev environment. Smart organizations also have a separate prod environment.
oloruin@reddit
Artificial Intelligence isn't.
Expert systems with large inputs still aren't intelligent. It's easy for humans to intelligently identify the extra fingers in an image. It's harder to at-a-glance identify infinite recursion and other similarly idiotic suggestions passed off as answers to technical queries.
Has it really been that long since "don't click on the first link in a google search" that humans have relapsed into complacent trust with an algorithm telling them how to be a good worker bot-drone?
praetorfenix@reddit
I use SuperGrok regularly and always, always, ALWAYS verify answers, asking myself at every step: "does this make sense?" It's right more often than not, for sure, but you might be up shit creek the first time it isn't.
EventFirst5206@reddit
As mentioned, any AI at this point is simply a tool, not to be taken at its word 100%. However, I've been a network engineer for the last 25 years and use Perplexity.ai to put me on the right path when dealing with a firewall system or something I have not used much, such as how to set up Proxy ARP on a Palo Alto: stuff I know on other systems. It's a tool... not God's word. But... try Perplexity.ai. It is really good for everything.
ShadowCVL@reddit
This is why I get so tired of people trusting the results without using their brain.
LLM AIs are great for summaries and even for good data frequently. But you have to look at it through the lens of “need to trust but verify”
I’ve seen several cases where it will list out steps to do something, pulling from an article and there will be a command that will bork everything right in the middle. If you check the reference it will have something like “note: do not execute drop tables yet” then in the steps it will just say “execute drop tables” because the AI skipped over the “do not” part.
There was a post on here the other day where someone trashed a whole fridge full of food because the AI told them it could only be powered off for 4 hours, when it based that off an article that said an empty fridge would hit over 40°F in 4 hours. Trust but verify.
I_ride_ostriches@reddit
Trust but verify and use your brain. If you keep your fridge at 39° it’s going to get to 40° much sooner, depending on ambient temps. If it’s a garage fridge in North Dakota in January, it’s probably good until April.
ShadowCVL@reddit
personal pet peeve of mine: people complaining they had to throw away a fridge full of food because they lost power during an ice storm. An ICE storm... you don't even have to go to the store to get ice, it's right friggin there.
I_ride_ostriches@reddit
Get a trash bag, fill with ice, fill empty space with ice bag. I’m a fucking rocket surgeon
ShadowCVL@reddit
WHAAAAAAAT? lol
deafphate@reddit
This. I love using AI as a sounding board for approaching a problem, but I rarely have good experiences with code it produces. I tried having it help me develop a PowerShell script, and the resulting script included cmdlets that don't exist. Other times I've had it give me Python code, but using deprecated versions of modules. It honestly takes me less time writing from scratch than figuring out what's wrong with the generated code.
AdeptFelix@reddit
I don't see the point in having an AI summarize if I need to verify the summary, as that involves just reading the source in the first place. It's really only useful for the author of an original source to generate a summary, as they're positioned to verify the summary is accurate.
BemusedBengal@reddit
I've had several teachers recommend reading a summary before reading the whole thing, so I could see it being useful for that. Of course, that's not how most people use it.
ShadowCVL@reddit
Well that's the thing you kinda missed. It's twofold.
I can write an article explaining how gasoline the liquid is barely flammable but its vapors are extremely flammable, and an AI summary would give you that line.
However, for someone googling, the Google AI skims/indexes the article and comes back with gasoline barely being flammable.
It cites its relevant sources so you can check them. LLMs used in web searches don't take the entire article into context as of today, but everyone is trusting them wrongly. Like I said above, someone threw away probably 600 dollars worth of food because they thought "the Google AI is right". It was, just not for that person's conditions.
Icy_Employment5619@reddit
this sounds like people not understanding how to effectively use AI.
Kitchen-Tap-8564@reddit
That is the case; grumpy sysadmins don't seem to know the difference though. This sub is filled with it every day.
They don't seem to understand that at the end of the day, you are responsible for all the code you run and all the code you commit. If you can't check that in the standard review way - you are just bad at your job and AI has nothing to do with it.
It's just another tool.
nope_nic_tesla@reddit
It sounds like the OP is completely aware of the difference. Did you even read the post?
Kitchen-Tap-8564@reddit
He is complaining primarily about the tool instead of about the co-workers who are using it wrong, which is the main issue.
He is halfway to understanding the problem; I'm noting the first half, since this sub has a big issue with that right now.
This post shouldn't be titled "Really sick of AI being used for the wrong answers"; it should be titled "Really sick of co-workers not reading or testing things".
This isn't an AI problem, it existed a lot before this, and if people don't know how to use new tools - they need to go learn and that needs to be called out.
Complaining about AI is just bitching with buzzwords.
Ansible32@reddit
If he's got management pushing people to "use AI" it's fair to complain about AI because this sort of thing is the natural result.
Kitchen-Tap-8564@reddit
That's a bad management decision issue then. Again, not an AI issue.
Management pushing bad tooling and process has ALWAYS been an issue, this is just another instance of it.
Mandating AI isn't as big of a problem as people think either. Proving you are using it is stupid simple and if you so much as use gemini code assist instead of a google search, you should be able to fulfill the mandates. More draconian mandates aren't an AI issue - that's a combined management & training issue.
My only point is that nothing about this is an AI issue - it's just more of the same bad managing and kool-aid drinking without proper testing and accountability.
Ansible32@reddit
Testing isn't a substitute for understanding.
Kitchen-Tap-8564@reddit
That's what code review by multiple people is for. Or a DESIGN.md. Or RAG. Or ANYTHING other than just generating code and staring at it. This is part of "using the tool correctly".
It's very easy to prevent those misunderstandings with proper context engineering instead of just prompt engineering.
Ansible32@reddit
Dude, I use AI every day, I use it a lot. If you aren't finding that there are things it simply cannot understand I question your aptitude. I am often astounded by how perfectly it understands things, with detail that couldn't be "in the training data." But that is not every problem.
It cannot understand enough detail to actually solve a problem. RAG is fuzzy search and makes absolutely ridiculous mistakes that the AI can't correct.
Again, AI is amazing and provides really valuable insights, but it also is really stupid in ways that sometimes are going to slip by to the point that I am pretty supportive of folks saying they refuse to use AI. At least for now.
Kitchen-Tap-8564@reddit
There are lots of things it can't handle well if you don't do it right. That doesn't mean it can't be used effectively in those scenarios. It all depends on how you do it.
How often do you index something it struggles with into a local vectordb and provide a RAG-like MCP for it? Do you give it a short test feedback loop MCP? Do you create textual graphs to describe the problem so it doesn't have understanding issues? Did you know you can do most of this very quickly with AI to work around the problems you are talking about?
I agree that there are things it can't handle. I just doubt you know all the techniques to use, so you still feel it is in your way. That is fine, but I think the only tasks it can't handle are novel problems it hasn't encountered, and problems presented in a way that doesn't allow it to handle them.
If that is the case, those people shouldn't be submitting or reviewing code anyways. We have pipelines, tests (most important), reviews, and approvals. There isn't anything you are going to do with AI that can't be caught there unless your process actually couldn't handle finding problems in the first place.
Again - this is also not an AI problem. It's a process, testing, management, and accountability problem.
Ansible32@reddit
I've seen AI do some really marvelous things, I also do genuinely believe it creates some unique challenges. It's really good at creating code that sounds like working code, but actually isn't. Mistakes that are not anything like human mistakes and are very hard to detect.
I don't mess with building my own RAG for stuff because I don't trust it with problems big enough where that might help, and I can just solve the problem myself faster than it would take me to index all the relevant context into a local vectordb.
Kitchen-Tap-8564@reddit
Hard disagree - the whole "looks like working code, but actually isn't" happens but it's usually barely off from functioning when that happens unless your prompt is garbage.
This is again, not an AI problem, it is a user problem. If you can't provide context or prompt properly, don't expect good results. It's a bit of a whole new language but what you are describing isn't a general AI issue - they work fine, this is a user training issue.
Same with the "very hard to detect". No they aren't. Don't you have unit tests? A pipeline? Don't you know what the solution "should" look like before you start - if not, that's on you.
Nope; it's generally used to save on context so larger problems can be handled, and to save time.
There, found the real problem. Don't use AI to build crap you don't understand - that's just plain irresponsible and incorrect use of the tool.
I knew this was a user problem. You can't possibly expect to prompt properly on something you don't understand how to solve yourself. LLM code assist is used to save time, not do things you don't understand because you can't possibly validate the output yourself and that is a you problem, not an AI problem.
This sentence alone tells me really all I need to know about your skill level, and it is not a skill level suited to using AI the way you are trying.
Ansible32@reddit
I'll give you an example. Recently, one of my coworkers asked ChatGPT if our version of a particular library supported a particular feature. ChatGPT claimed it didn't, so he used a custom build of the library since ChatGPT claimed the stock Ubuntu version didn't support it.
Now, our production code uses a hosted platform version of Ubuntu we don't control, so this required us to set up a whole new deployment with a new pipeline. We got about a day into this before I stepped back, did some validation, and realized the premise was false: the built-in version that ships with Ubuntu worked fine. We could've finished all that extra work without ever hitting an issue; it would've worked fine.
My assumption about you is that you don't dig deep and you're just glossing over things if you aren't routinely having experiences like this.
Kitchen-Tap-8564@reddit
Human error - why would you not validate what ChatGPT said? That takes 5 minutes. Not an AI issue.
This never should have been implemented without validating the AI's advice. This is not hard at all to detect, and it's a failure the human could have made from a Stack Overflow answer or a bad Google result. Not an AI issue.
Ansible32@reddit
I'm not looking to assign blame, it's a common system failure. You seem very invested in believing that AIs, which have issues, are perfect.
Kitchen-Tap-8564@reddit
100% incorrect, I never once said that - I said there are ways around those flaws with proper techniques and tooling, many of which involve a human in the loop. You have been continuously assuming and it's why I'm so grumpy at you.
I never said anything hard only took minutes. I said verifying that a library does something an AI claims takes a few minutes. Read, please. You keep failing at that, assuming, and then projecting; knock it off and read.
I've done a bit of everything over the last 20+ years in this industry, latest is architecting and migrating cloud deployments for a fairly massive financial institution, most recent previous was building/maintaining an internal cloud for a large car manufacturer.
In past lives before that, I've designed and built 3D polymeric flow statistical modelling and visualizations, games, large database analytics deployments, hard drive stress testing for Hitachi (that was fun), and all manner of various internal systems for managing in-house ESXi clusters.
My hobby is now building a2a dev teams that build my actual side projects, which currently involve a ray tracer in golang and a large IaC conversion for my homelab and home infrastructure.
I learned the a2a stuff at work, where we built a Gemini-powered in-house self-service system for our in-house and external cloud resources to get things moving a bit faster for migrations. That's how I learned there is way more to this than I thought, and now I know that what you think is the current state of the art is, well, not correct. I've learned, and so should you. Stop assuming.
I use these tools to accelerate all of this work because it makes it go faster and believe me - none of it is simple.
Leif_Henderson@reddit
All those "I've tried nothing and it doesn't work!!" posts give me hope for my future job prospects, lol.
Learn the new thing or get left behind. It's nothing new in IT.
SokkaHaikuBot@reddit
Sokka-Haiku by Icy_Employment5619:
This sounds like people
Not understanding how to
Effectively use AI.
Remember that one time Sokka accidentally used an extra syllable in that Haiku Battle in Ba Sing Se? That was a Sokka Haiku and you just made one.
Aeonoris@reddit
That's two extra syllables, bot! Sokka only added one.
gumbrilla@reddit
It's negligent. Especially for a 'Lead'. Leads are there because of their knowledge, experience, and judgement.
I would suggest their judgement is lacking; I don't care what the pressure is from execs. If it's policy, or mandated, then I'll have that in writing, with my objections in writing also, and then I'll let it burn like the best of them. But 'pressure' is water off a duck's back.
But then, where is the change control, where are the reviews? If it's important enough to generate noise, how is one muppet able to do this?
AI is a bit of a game changer, I love it, but I'm utterly accountable if I punt whatever it gives me and it breaks things. I'm actually a bit worried about the next generation though: if the junior (read 'easy') stuff is covered a lot by AI, then how are we going to get new Brick Tops going forward? You need that time at the keyboard to get grizzled...
Hairy-Link-8615@reddit
Tbh, if he'd said "I was going off a Google post or blog",
then you'd have the old saying: don't trust what you see online.
You have to go off and test and verify it.
Now AI is generally better, hence its take-off, but the same rules apply.
I agree as well, that's just negligence.
lad5647@reddit
Your concerns are valid
EricIsBannanman@reddit
I have arguments about this last bit often. The knowledge/experience required to know when to call BS on what AI is producing only comes from trench work. I can't see any of this ending well if AI is going to be used to replace a boatload of junior roles.
ihaxr@reddit
Not testing the AI answers in Dev is gross negligence IMO
Ansible32@reddit
Testing isn't good enough. Anything AI that could drop data... highly suspect. Needs thorough review.
thechefsauceboss@reddit
Yup. Anything I get from AI, I ask it to provide a source, I thoroughly review the source, cross-compare it to similar things I can find, then test, and then if all goes well it can be done on prod.
doolittledoolate@reddit
Could have been covering for someone in the team, or could have been doing malicious compliance to show how stupid "use AI for everything" mandates are. At least you hope so right? Realistically it's just another engineer with junior database knowledge.
gumbrilla@reddit
Yeah, and a lot of these do lack detail, so literally fetching pitchforks is probably not the immediate response; it's more a hypothetical...
And in that vein...
1) Covering someone. Could be. But part of good judgement as a lead is knowing when you are so far outside of your wheelhouse that it's dangerous.
2) Malicious compliance... again, could be, but the trick with that is you let key peers know that you are going MC, so you don't toast your reputation...
I agree, it's a schoolboy error, which would indicate junior DB skills, but the OP mentioned a Lead... I'm assuming in some technical field, with some self-awareness of their expertise, and the risks that would come with that.
Beach_Bum_273@reddit
I didn't really begin to hate AI until an MSP tech shared their screen with me during a session and I saw that the reason for all the wrong answers regarding configuration was their use of ChatGPT, which was referencing the manual for a different model.
I politely requested a different tech who had actual knowledge of the device in question
But inside I was fucking infuriated
techtornado@reddit
100%
AI is more like a very mushy educated guess that is boldly inaccurate on a level far beyond George (Chronicles of George)
Nuance and intuition only works with carbon-based brains, not silicon
Sometimes that experience comes from learning things the hard way, like deleting databases, and that's what I want to put in a Tech-GPT vector DB.
In my lab, I'm working on a wild idea: take a fuzzy input about a problem and get an accurate answer about the issue via preloaded prompts, explanations, and answers.
The AI terminology for this is combining a model with a vector database via RAG, so the retrieved answers can be embedded in the context.
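The retrieval half of that idea fits in a few lines. Here's a toy sketch in Python (bag-of-words counts standing in for real embeddings, and the preloaded notes and query are invented for illustration):

import math
from collections import Counter

# Toy "embedding": bag-of-words term counts. A real build would use a proper
# embedding model and vector DB; this just makes the retrieval step concrete.
def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(count * b[term] for term, count in a.items() if term in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical hard-won lessons preloaded into the "vector DB".
notes = [
    "scheduled jobs failing: test connection strings against a dev copy, never prod",
    "before touching any database, confirm the backups actually restore",
    "tickets from other departments usually mean an upstream job stopped writing",
]
index = [(embed(note), note) for note in notes]

# Fuzzy input from a tech; retrieve the closest notes and embed them in the context.
query = "jobs failing after connection string change and other teams see no results"
query_vec = embed(query)
best = sorted(index, key=lambda pair: cosine(query_vec, pair[0]), reverse=True)[:2]

context = "\n".join(note for _, note in best)
print(f"Context:\n{context}\n\nQuestion: {query}")  # what you'd hand to the model

A real build swaps in a proper embedding model and vector database, but the shape is the same: embed the notes once, embed the fuzzy input, and stuff the nearest notes into the model's context.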
libertyprivate@reddit
How do you knock out multiple databases by testing different connection strings? There has to be more to it.
BlueHatBrit@reddit
My theory is that LLMs make diligent high performers more productive, and it makes lazy mediocre workers worse.
Business execs think AI will be the answer that lets them pay fewer people, less money. But it's just another productivity tool. They need to realise it's closer to email than it is to irrigation.
The only way to leverage it in a way which works is to make fewer hires who are very very good, pay them well, and give them these productivity tools. Then you get to save the wages you'd pay the army of mediocre staff. But you still need to pay those high performers well, as well as footing the bill for these tools.
Your lead has just been exposed as lazy and mediocre. They've openly admitted to not understanding the directives they were issuing, and that they didn't practice caution as a result of their known ignorance around the problem. They should be reprimanded for those actions. Probably not fired yet, but it should be made very clear that this sort of behaviour isn't acceptable and that more wisdom and care is expected.
We're going to see a lot more of this in the coming years.
One_Economist_3761@reddit
What is it with execs pushing for using co-pilot? We’re getting it as well and it’s driving me nuts.
Skullpuck@reddit
Our state government went full force on self-service and using AI as Tier 1 helpdesk. It went horribly, horribly wrong, and we won't even be looking at that again for a very long time. For whatever reason, people have a need to FAFO. It doesn't matter if you have 30 years of experience and know the system inside and out. They MUST FAFO and then blame someone else for the mistake.
Tell them to get rid of it, it's only going to cause more problems.
stufforstuff@reddit
Just tell people that AI stands for ALMOST INTELLIGENT and move on.
FourtyMichaelMichael@reddit
BUT WHY!?
We're trying to figure out how to slow the influx of AI especially to our dumbest users. Not increase it.
Infninfn@reddit
That sounds like some salesperson BS. There is no such Copilot. While the specific Copilot (e.g., M365 Copilot, Copilot Studio Agent, Copilot in apps, etc.) may use a different underlying OpenAI model version - gpt-4o, gpt-4o-mini, etc. - none of them are finetuned for support information on vendor apps. It can reference support docs that you have in SharePoint, but for the best results you need to directly link them or create an agent with them.
Copilot finetuning on your tenant data is coming, though, and right now is only available in early access for customers who have opted in and have 5000 Copilot licenses.
You should already have the Researcher Agent in M365 Copilot, which uses o3-mini and deep research capability - that will give you better results, but again, treat it as you would any LLM right now: with caution and verification.
whythehellnote@reddit
Sometimes these tools are great, and point you to areas or approaches you had no idea about. They save hours of time.
Sometimes these tools are terrible, and point you to areas or approaches which don't actually exist. They cost hours of time.
I'm not sure if they're a net benefit or not in terms of time, but they do have their uses as basically a decent search engine.
ZorbaTHut@reddit
. . . wait, how long does it take you to do a quick Google search and say "dammit, that function doesn't even exist"?
Cheezemansam@reddit
One of the skills of a sysadmin is being able to cut through vendor and salesperson bullshit. That said, there are definitely times that I still get completely thrown off by users or executives confidently saying some bullshit. Hours seems like an exaggeration though; in my experience, using LLMs doesn't really send me down rabbit holes the way bad documentation or user misinformation sometimes does. ChatGPT I just assume is wrong unless I've verified that the information is correct, although when people relay information they got from ChatGPT it can add a layer of obfuscation.
Arudinne@reddit
I was having a hard time figuring out how to do something with a specific PowerShell cmdlet that does exist but was poorly documented. Several of the switches for the cmdlet were not documented on Microsoft's end beyond the fact that they existed.
Just to verify, I used 3 different LLMs, including Copilot, and they all gave me roughly the same code using two switches that threw an error indicating they were mutually exclusive when you actually tried to run it.
I did finally end up finding an example of how to use it properly in an old Stack Overflow post from many years ago, several pages deep in the Google results, once I finally figured out the exact search parameters I needed.
Jairlyn@reddit
Yeah I too am trying to understand how checking AI takes hours of time.
InvisibleTextArea@reddit
It is possible to wire systems into ChatGPT as a data source. I have my Zabbix install connected up to ChatGPT using an MCP server, so you can ask it things like 'How many ports are operational on all our HP switches?' or 'What's the average availability of all our domain controllers?' and it'll probably get a right answer.
For the brave, here is the MCP Server:
https://github.com/mpeirone/zabbix-mcp-server
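For the equally brave, here's a rough sketch of calling a server like that from code, using the official "mcp" Python SDK. The launch command and the "host_get" tool name are guesses for illustration; check the repo's README for how the server is started and which tools it actually exposes:

# Rough sketch of hitting a Zabbix MCP server programmatically (pip install mcp).
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Assumed launch command -- see the linked repo for the real one.
    params = StdioServerParameters(command="python", args=["zabbix_mcp_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # discover what the server exposes
            print([tool.name for tool in tools.tools])
            # Hypothetical call: fetch monitored hosts so the model can reason on them.
            result = await session.call_tool("host_get", {"filter": {"status": "0"}})
            print(result.content)

asyncio.run(main())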
Infninfn@reddit
They released Copilot agent support for MCP recently, so yes
Pump_9@reddit (OP)
I don't think our executives knew that, and whoever sold them the license gave the impression that there's a surface-web version that does not get through to paywalled data, and then there's our version, which reaches into the support portals of our vendor products that require company license and subscription access. So we're expected to ask Copilot "how do you configure Saviynt to disable access for terminated users" and it's supposed to spit out a list of instructions for us to follow and put into our next ITSM. That's the expectation, but Copilot clearly cannot do that accurately while taking into account our company's unique requirements and dependent systems.
Infninfn@reddit
Technically, if your vendor products have web APIs that can be called for the support info, Copilot can use them to get results. But in order to do so, you need to build a Copilot declarative agent to access those APIs as plugins, and give it instructions on how to use them. Similar to the GPTs in ChatGPT that perform functions and tasks through APIs, e.g., Wolfram, SciSpace, etc.
Without this, Copilot will hallucinate as it hasn’t been trained on any of this gated proprietary information.
Bossman1086@reddit
I had a client specifically hire AI engineers to help them do this and train Copilot on their internal and industry data. It actually worked really well for them. They also made constant and very clear communications to users about its limits and how it's not foolproof.
If you're doing this kind of stuff, you need a clear plan and to set expectations early and often.
theragelazer@reddit
OPs company doesn’t understand how Copilot works. That said, neither does OP…
reelznfeelz@reddit
Yeah, pretty sure that's BS; Copilot won't just magically access vendor portals or docs. I'm like 99% certain. What you can do, as others have said, is build an agent that will do something along those lines. Or drop a bunch of vendor docs into S3, set up AWS Bedrock, and just use the AWS console UI to ask your questions, for a stupid-quick way to do it. But you'd need the docs in files to do RAG on them with that approach. Wouldn't be free, but not wicked expensive either.
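If you go that route, the query side is small. A rough boto3 sketch, assuming you've already created a Bedrock knowledge base over the S3 bucket of docs (the knowledge base ID, model ARN, and question are placeholders):

import boto3

client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Retrieve relevant doc chunks from the knowledge base and generate an answer.
response = client.retrieve_and_generate(
    input={"text": "How do I configure the vendor agent's proxy settings?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "XXXXXXXXXX",  # placeholder ID
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        },
    },
)

print(response["output"]["text"])               # generated answer
for citation in response.get("citations", []):  # chunks it pulled from S3
    print(citation)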
Nik_Tesla@reddit
This is so dumb, that I almost suspect the recklessness is on purpose to get upper management to scale back on pushing CoPilot.
d3n4c3@reddit
At least you can prove the code is wrong. Imagine a room full of people all making important decisions and they're all "just going off of co-pilot."
scor_butus@reddit
Your lead managed to delete two databases while testing connection strings? Did I read that right? It sounds like your lead is in way over their head
vNerdNeck@reddit
sounds like some malicious compliance on the part of the lead...
daweinah@reddit
I was troubleshooting legacy Exchange Web Tokens yesterday, trying to identify the appIDs of the listed Allowed applications (which is apparently not possible, thanks MS) and wanting to block a specific application. Copilot gave me this perfect command to block the appID I knew I was troubleshooting:
Only one problem... RemoveAllowedLegacyExchangeToken is not a real parameter!
Gene_Clark@reddit
Had similar today with my powershell script to generate a list of all O365 groups and their owners. I highlighted the line that contained the error I got back from PowerShell and Copilot just gave me back the exact same line with a #comment added!
I like whoever said AI can be "confidently wrong". It's a great tool, but we're far from infallibility.
Firewire_1394@reddit
You ever look at the sources it's pulling from? It's funny when you catch it modeling its response off an old Reddit thread that ended in people talking about each other's moms or devolved into a meme war.
It's a good thing everything on the internet it scrapes for its database is true!
Gene_Clark@reddit
Lmao, not seen that yet, but it's kinda funny to see the names of some of the blogs it's using: "Grumpy tech" was one I saw last week. I'm grateful it is clever enough to sidestep any suggestion of sfc /scannow, which is usually step 1 on every IT blog you manually google.
Fallingdamage@reddit
We employ AI-based services at my workplace. They add value to many processes. Personally, though, I still have not used any form of AI at work or in my personal life. It's not that I hate AI, it's just that I haven't needed to use it. I know how to read and I know how to do the things I do. I haven't needed an AI to hold my hand (yet). I can write my own proposals, I can build my own scripts, I can complete my own emails. I can use published documentation and whitepapers to complete configurations, etc. Google fu and bypassing AI suggestions have been working out well for me.
Some people, who have gotten by with TikTok videos telling them what to do for a while now, probably love the coddling AI provides them though.
"Do you understand how to do something?"
"No, but I can follow sequential instructions provided by an AI without any knowledge of what they're actually doing."
narcissisadmin@reddit
Plot twist: your coworker is the hero we needed, not the one we deserved.
RCTID1975@reddit
First offense here, and they're written up with no wiggle room.
Second offense and they're immediately walked out.
You can't blindly trust anything on the internet, doubly so for a system based on the internet known for giving incorrect results.
Bob_12_Pack@reddit
I had a developer send me a block of Oracle PL/SQL code to update in a procedure he was testing; he was upfront about it being generated by Copilot. He needed access to a system package in order for it to work. I responded that the built-in system package doesn't do what he wanted and that Copilot was wrong. He told Copilot that and sent me the response: Copilot admitted it was wrong and apologized. So I think we're safe for now; no human would do that.
HotKarl_Marx@reddit
The irony that stack exchange is being wiped out due to shitty AI is not lost on me.
atw527@reddit
This is the next evolution of: "I copied this from Stack Overflow"
Which, let's be honest, is probably a major feeder into these AI models (at least the kind of stuff we use it for).
gurilagarden@reddit
How is using co-pilot any different from using google to look for a solution? Or going to the fucking library? You research, gather information, make an informed decision based on that data, tempered by your experience and judgement.
Honestly, if one of my techs said "I was just going off of co-pilot", I'd fire him on the spot. This isn't co-pilot's fault. You're dealing with an idiot who slipped through the interview process and isn't able to think independently.
Helpjuice@reddit
Where are your SOPs and Runbooks that tell people exactly what to do? If these do not exist management and senior engineers have failed to document the right path for those making messes. At the end of the day this is a management problem by not requiring everyone to use known SOPs and Runbooks that have been battle tested.
kalakzak@reddit
AI will generate and validate the SOPs and Run books.
So says the CIO. The COO. And the CEO.
You will assimilate the AI Tools into your daily job functions because "efficiency" and "superior logic skills". You will be asked "Did you run this through AI?" when questioning a direction during a meeting.
You will be assimilated by the AI.
Helpjuice@reddit
This would be a nightmare to see, especially if they said that with a straight face and doubled down with something crazy like "we don't want to see any more manual SOPs or Runbooks, everything is to be AI generated; matter of fact, delete all the existing ones and let AI handle it from there." Then you look back at your manager and they are doing the Picard facepalm.
kalakzak@reddit
Nightmare is quite apt.
I've not seen management insist on purging existing documentation, but I have seen them strongly hint that existing documentation should be run through AI for improvements and that AI should be used to "assist" in generating new documentation.
Given that we've gone from "don't touch AI for any internal functions" to what's happening today it's only a matter of time before the next step is arrived at.
Helpjuice@reddit
Definitely. It is going to be something like going into work and there is a locked server rack where your coworker used to be at the beginning of the week. Then when you ask around about what happened, nobody knows... Then a month later another server rack, and this process continues until you are surrounded by server racks, just waiting for your time to come.
fourpotatoes@reddit
Sounds like he's working to rule.
Bossman1086@reddit
As others said, this is a training/user problem. Copilot is a tool in the toolbox like Google or Stack Overflow. You don't just run shit without verifying it first or running in a test environment.
Copilot has been super helpful to me on the job with troubleshooting and helping me with PowerShell scripts. I love it but I'd never just blindly do everything it suggests.
msalerno1965@reddit
A coworker asks in a team chat something about a certain network monitoring tool and its agent. Another coworker answers back with some AI bullshit about it using HTTP to talk to the agent - which it certainly does not. It conflated the GUI front-end with the back-end agent protocol.
Utter and complete AI bullshit. It's like a 4th grader taking random phrases from a book and calling it an in-depth book report. Which brings to mind the parable about an infinite number of monkeys banging on typewriters: one will eventually come up with Shakespeare.
AI throws shit at the wall and sees what sticks.
It's still shit. On the wall. And now it's leaving streaks.
27thStreet@reddit
Your coworker fucked this up every bit as much as the AI. More, even, if you have higher expectations of a human than you do of an AI.
Khue@reddit
Someone asks me about how to do something with the system. I tell them I don't have an immediate answer and need to research it. Before my ass hits my seat after the meeting, there's an email in my inbox from someone frustrated by my non-answer, containing a clearly AI/LLM-generated response. The boss likes the answer and the conversation starts flying back and forth before I can respond with an ACTUAL answer explaining why it won't work the way the link suggested. Before I hit send, the boss sends me a ticket with the email conversation attached and the addition "Get it done asap."
I contemplate continuing my career in IT for the next 20 minutes.
Josh_Crook@reddit
shit boss
Pump_9@reddit (OP)
And AI also includes a bunch of CYA information; it never gives you a direct, simple answer because "it knows" it can't accurately do that. The co-worker reads the first few lines and takes that as fact.
illicITparameters@reddit
AI is a way for low value non-executive management to try and show how valuable they are for “saving money” so they can angle to get a promotion. I’ve yet to see an AI deployment at scale that has effectively replaced humans.
AI is an awesome tool, I will never argue that. My entire team, including myself, uses it. But you don't use a hammer to tighten a bolt, and that's what these morons are doing.
Khue@reddit
It's also something they are trying to leverage to eliminate more staff and reduce operations costs. Their utopia is removing human resources or reducing development time for processes to increase productivity without adding staff or even better eliminating existing staff. When you don't have operations costs and the requirement to pay pesky employee benefits, you can make a shit ton more money.
The push behind AI is nothing more than an attempt to push out more of the labor class. Optimized productivity may be the narrative, but what that really means is getting rid of the human cost.
illicITparameters@reddit
My point is that it will never go anywhere and will just result in companies rehiring resources. I know of one big-name company that tried to replace most of a crucial department with AI, and when they needed to do something at scale it fell flat on its face. It was a giant internal embarrassment that left many people pissed, and it took humans over 2 weeks to fix the fuck-up. Had they not replaced people, that same task would've taken maybe a week, week and a half at most.
visibleunderwater_-1@reddit
I've yet to get my LLM to successfully write a PowerShell script that uses a couple of hash tables to sort a CSV properly without a huge fight lol...
Trickshot1322@reddit
Really?
I've been using it for the past 6 months to code relatively complex programs, and I'm no software dev. When we've sent the code to our contractors for QC, the prevailing opinion from them has been:
"It's unconventional and a little messy. But secure and robust."
Pump_9@reddit (OP)
Do you think your contractors fed it through AI and that was the AI response? :-)
Wartz@reddit
Same. I must be bad at prompting or something. I can spit out junk code myself just fine if I'm lazy, and complex code is hopeless with Copilot.
illicITparameters@reddit
Me either 🤣
The_Wkwied@reddit
GPT/AI is a toolkit.
If you don't know how to use the tools, you aren't going to get the job done.
The previous toolkit was google-fu.
ProfessionalITShark@reddit
Damn, people's stupid way of using AI is clear to me: most people didn't even learn how to properly cheat in school while still learning the material.
scienceproject3@reddit
I had someone ask me if we could import our entire ERP / accounting / financial database into AI to make reports easier.
The security concerns aside.
It is an obscure small company built ERP software with an extremely convoluted and messy undocumented SQL database with over 100 tables just for simple things like quotes or orders that would all need to be mapped and tied in properly.
This person will be taking over half of the company in the next 5 years.
Having to explain to them why this is a terrible fucking idea and how difficult it would actually be has made me start to question my future career path.
UncertainAdmin@reddit
Do you know how many users I have asking me to fix their problems after they've described them to Copilot and then tried to fix it all themselves?
No, I am not gonna install a discontinued Microsoft driver for Office 1998 on our SQL Server just because an AI model said that it might fix the problem...
Opening-Panda-7085@reddit
IMO, Copilot is terrible anyways. Every time I've messed around with it I've been extremely disappointed. ChatGPT has been superior in every single way.
This is also the problem with people who aren't very good techs using ChatGPT as a crutch, rather than good techs using it as an additional tool.
Flaktrack@reddit
I've been seeing a growing number of leaders in my org trust AI with their lives. I end up fixing a lot of the end results. It is painfully clear now that people are using AI in lieu of thought, including IT personnel.
I even see people saying "use AI to create summaries!" but I caution against that too:
1. It tends to make small mistakes, but you never know exactly where, so it could be bad.
2. Summarizing stuff yourself is a powerful method of learning.
Use AI to help you find a path to follow, but do the reading yourself. Do not get in the habit of letting AI think for you... We're going to see the results of that mistake soon.
retiredaccount@reddit
You hit a major point here that has existed long before LLMs hit the scene…
I know someone who relies heavily on a grammar helper app (and has done so for many years) but simply doesn’t understand the nuances of its suggestions and ends up miscommunicating on technical documents, then becomes angry when others misunderstood and blames everyone else for their communication disconnect.
Sad_Dust_9259@reddit
Totally agree! AI tools can be useful, but they should guide decisions, not replace critical thinking or testing.
davew111@reddit
If you are using co-pilot, that means you are the pilot. If a plane crashes, the pilot saying "I was just doing what the co-pilot said" wouldn't fly with the NTSB.
TipIll3652@reddit
I remember when the internet became a thing; they told us not to use it for research at the time, mostly because as a student you're stupid and don't verify the authenticity of the work. People still did, though. They took the easy way rather than go through the encyclopedias and journals, and it showed: their research was bad.
Now I see folks do the same with AI and I shake my head.
kerubi@reddit
I was just quoted some AI BS word-for-word by a customer's "IT manager" on a Teams call. I was like, "well, yes, that sounds convincing, but it does not relate to your systems except by the vendor we are talking about". He has always been without a clue and is going to mess up in some major way due to AI, I'm sure.
Candid_Candle_905@reddit
Sounds like the decision-maker is more of a tool than Copilot
Platypus_Dundee@reddit
That's just fuckin crazy!
I mean, I use it to build frameworks and draft proposals and responses, but to automate workflows and connect to DBs is crazy at this stage of what Copilot can do.