"But ChatGPT said..."
Posted by prettyyboiii@reddit | talesfromtechsupport | 348 comments
We received a very strange ticket earlier this fall regarding one of our services, asking us to activate several named features. The features in question were new to us, and we scoured the documentation and spoke to the development team about them. No one could figure out what he was talking about.
Eventually my colleague said the feature names reminded him of AI. That's when it clicked - the customer had asked ChatGPT how to accomplish a given task with our service and it had given a completely hallucinated overview of our features and how to activate them (contact support).
We confronted the customer directly and asked "Where did you find these features, were they hallucinated by an AI?" and he admitted to having used AI to "reflect" and complained about us not having these features as it seemed like a "brilliant idea" and that the AI was "really onto something". We responded by saying that they were far outside of the scope of our services and that he needs to be more careful when using AI in the future.
May God help us all.
JaschaE@reddit
They are everywhere.
Analog photography subreddit "Hey, is it true that...?"
Everybody with experience: NOPE
OP, 2 days later: "So, I have read *300-page, dense theoretical work from the 70s* now and it and ChatGPT say I'm right."
Sure buddy, you read that...
Mccobsta@reddit
I've seen people make posts asking why their camera doesn't do what ChatGPT says it can do
We're getting dumber, and people believe autocorrect more than the manual they never looked for
JaschaE@reddit
In all fairness to the AI Victims: I could not tell you when I last held a useful manual for a somewhat current appliance.
The booklet I received with my Sony camera is 1cm thick and starts with "which way to point a camera if you have never seen one before", which I find discouraging.
A brother embroidery machine I tried to troubleshoot even had a table with common issues and how to properly diagnose and fix them. Literally every single one was "Send it to brother for repair."
Mccobsta@reddit
We've definitely lost our way in how to make an actually good manual
Especially consumer-oriented stuff, where they just don't really bother
meitemark@reddit
Well, first you need 200+ pages of what NOT to do with your item. Then you get 2 pages of proper use, and 20 pages of possible faults, plus a note that any faults are yours and that you need to pay to get them fixed. Or just buy a new one.
OR you can try the maker's website, which will get you to subscribe to at least 5 different things at the low, low price of $29 per month (auto-recurring, and up to $129 after a year) before you get access to AI support (first) and then outsourced Indian support.
FatManBeatYou@reddit
Reading the damn instructions, or just Googling the model would probably take less time and be more accurate.
RogueThneed@reddit
Except for where they read the fucking AI summary of the results, and it's wrong.
psychopompadour@reddit
I assume they get the AI to read it out loud as well
RogueThneed@reddit
LOL!
Yesterday I saw an excellent video of 2 AI chatbots talking to each other by phone. SO understanding. SO apologetic. SO smarmy. And they couldn't end the call themselves, so it got surreal pretty fast.
mrstabbeypants@reddit
Can't serve that up and not give the sauce, mate.
RogueThneed@reddit
Oh wow! Thank you! Now I'll have to see if I can find it.
mrstabbeypants@reddit
Please do! I could use a laugh.
peccator2000@reddit
Got a link?
Mccobsta@reddit
People don't think that anymore
They'll either go to an LLM or ask on reddit and get annoyed people telling them to do that
FauxReal@reddit
RTFM, did it ever work?
Mccobsta@reddit
Of course not. Even error codes on screen telling the user what the problem is don't work
Like, people will post 20-year-old cameras with errors along the lines of "memory card not recognised" and ask online why their 20-year-old camera is showing an error that they've not read in the slightest
I want to be helpful and encourage new people to go deep into the hobby but damn it wears you down
limeypepino@reddit
Ugh. The worst. I have a pile of tickets along the lines of "It's not working. There is an error on the screen. We need this back up ASAP!" Still waiting on a response as to what those mysterious errors are. Probably a simple fix but without any information beyond "an error" not much I can do.
vinyljunkie1245@reddit
Reply that because they haven't provided a specific error message, the fix will be tasked to support. This will take 6-8 weeks, because support has no point of reference to begin from and so needs to examine everything related in depth.
Or they can just supply the error message.
HaElfParagon@reddit
I just ask for a screenshot of the error code, and throw in waiting for feedback.
After 1 business week, if they haven't answered, it auto closes.
HaElfParagon@reddit
I've been getting a lot of those. "I'm getting errors on my screen when I try to open quickbooks!!!!"
"Okay, what's the errors?"
It's a screen saying they need to launch as an administrator to update quickbooks... and they have local admin and could do so if they bothered to read or think...
Grant_Son@reddit
The amount of helpdesk calls I took over the years where the user would tell me something wasn't working.
Me: "is there an error message?"
User: "yes it says windows has encountered a blah blah blah."
Me: "Thanks. I've never seen an error message that says blah blah blah, and I'm fairly sure the helpful part of the message, the bit that tells me what the problem is, is in the part that you replaced with blah blah blah."
PraxicalExperience@reddit
Nah, nah, the response is: "Oh, great, yeah, when the error is blah blah blah you need to yadda yadda yadda." Then close the ticket.
Grant_Son@reddit
I'll keep that in mind
peccator2000@reddit
Show me the Luser who does not click away pop-up windows but reads the damn error message!
Grant_Son@reddit
Oh yeah. The days of exchange active sync & password changes.
User: I haven't had any email on my phone for a week. See... Opens email app & cancels the password prompt
🤦‍♂️
FauxReal@reddit
I felt that last sentence in my bones!
andypanty69@reddit
No. People can't even read the RFCs that gave us the internet; just look at the number of websites that don't allow valid email addresses.
peccator2000@reddit
A friend of mine whose company is pretty much a one-man ISP asked for my help thinking up a good number plate for him. I told him: B - IP 791. He said he will love me forever for this.
Flyrpotacreepugmu@reddit
Yeah, I have one website where every time I try to log in, it sends a link to my email. First thing after clicking that link to finish logging in, it asks me to confirm my contact information, and won't let me do so because the email address I obviously just used isn't valid.
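This complaint is easy to reproduce. A minimal sketch (the regex below is an illustrative naive pattern of the kind many sites roll by hand, not taken from any particular website): it rejects local-part characters like `+` and `'` that the email RFCs explicitly permit.

```python
import re

# A typical home-rolled "validator". It omits several characters that
# RFC 5321/5322 allow in the local part, such as + and '.
NAIVE = re.compile(r"^[A-Za-z0-9._-]+@[A-Za-z0-9-]+\.[A-Za-z]{2,}$")

def naive_is_valid(address):
    """Return True if the address matches the naive pattern."""
    return NAIVE.match(address) is not None

print(naive_is_valid("plain@example.com"))               # True
print(naive_is_valid("o'brien+newsletter@example.com"))  # False, yet RFC-valid
```

The second address is perfectly legal mail-wise, which is exactly how "the email address I obviously just used isn't valid" happens.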
uprightanimal@reddit
To be fair, RFCs aren't exactly thrilling reads for most.
lemachet@reddit
I dunno, have you read 1149?
psychopompadour@reddit
Omg, one of my favs!... I'm sad I have favs
FauxReal@reddit
That's been amended by 2549, it adds QoS.
ralphy_256@reddit
And actually has an (attempted) implementation.
https://web.archive.org/web/20140215072548/http://www.blug.linux.no/rfc1149/
jobblejosh@reddit
I beg to differ. 418 is an absolutely thrilling read
amberoze@reddit
Ftfy
jimicus@reddit
Including Office 365 and Google.
Fraerie@reddit
As someone who used to hang out on slashdot and usenet - nope, it has never worked.
FauxReal@reddit
You made CmdrTaco cry.
Fraerie@reddit
I still have one of the "in the cool green glow" Slashdot t-shirts.
FauxReal@reddit
Oh cool, I've never seen a Slashdot shirt before... or since, for that matter.
I got a six digit Slashdot ID, would have been 5 if I didn't lurk for years first.
Fraerie@reddit
I suspect something similar; I think I was a low-ish 6 digit because I lurked. I'd have to see if I even still have my account details.
XkF21WNJ@reddit
Anymore? Did they ever?
Beginning_Method_442@reddit
Google is AI... Just sayin'
amberoze@reddit
We have a thing for this in Linux communities. RTFM.
Sure, you can ask AI, but always verify the information by RTFM.
Someone else may have done it before and posted about it...4 years ago, but you still need updated information, so RTFM.
In essence, nothing beats RTFM.
Just Read The Fucking Manual.
peccator2000@reddit
That's for amateurs.
RTFS: Read The Fucking Source! Isn't open source great?
psychopompadour@reddit
I was on our L1 Service Desk like 6 years ago and told a coworker to RTFM and he was like "what's that mean?" Could be generational nerd slang, but he was only like 5 years younger than me and not even an idiot, actually a coworker I respected! So I did a quick survey of our 20-some coworkers and about half of them had never heard this acronym (it did skew younger, but not entirely). I didn't even know what to say.
nymalous@reddit
I read through about half of the reply that you were replying to before I figured out what it meant. And I'm closer to 50 than to 40. (To be fair, I didn't get into this end of computing until recently.)
I'm not sure why it clicked, but it did click before the end of the comment.
amberoze@reddit
From my experience in my almost 20 years in IT professionally, it's not the age that determines if someone has heard of RTFM. It's their experience with open source software.
MrT735@reddit
Or when they started gaming. It used to be you'd need the 72-page manual to know what to do in games. Now you either get a handholding tutorial or watch some streamer doing it.
ThatBurningDog@reddit
Funny story, computing and IT teacher in secondary school was a really nice guy, but he was close to retirement and his patience threshold was not especially high.
Anyway, he makes a joke about where you might find information about a task you don't know how to do. "You RTFM".
Everyone looks blank.
"Read. The. Manual."
Everyone is briefly like "oh, that's sensible", and then has a bit of a giggle because teachers aren't supposed to swear. A hand goes up, and a genuine question is asked...
"What does the "F" stand for?"
peccator2000@reddit
Then there is RTFS: Read the fucking source!
grunkle_dan78@reddit
unfortunately most of the google searches will give some kind of ai bs as the first answer
FatManBeatYou@reddit
Yeah, I've had to develop new muscle memory to scroll past that nonsense.
FearlessSyllabub8872@reddit
New? This muscle memory is forged in the fires of years of "sponsored" search results.
nymalous@reddit
I was just going to say that myself.
In addition, I have my search engine set to disable AI unless intentionally activated by me.
Kaltenstein23@reddit
Even google throws an AI overview in first place...
JustAnotherSolipsist@reddit
If you add -ai to the end of your search it removes the AI
Tatermen@reddit
Not when Google now includes AI generated slop at the top of the results.
A little while back, when trying to look up some quantities for a recipe, it very, very confidently tried to tell me that 500 grams and 2.5 kilograms were the same weight.
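For the record, the arithmetic the AI overview flubbed is a one-line sanity check:

```python
# Kilograms to grams: the check the AI overview failed.
def kg_to_g(kg):
    return kg * 1000

print(kg_to_g(2.5))  # 2500.0 g, i.e. five times 500 g, not equal to it
```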
Ancient_Skirt_8828@reddit
RTFM has always been a problem.
jockmcfarty@reddit
20-odd years ago. 3am.
Me (Tech Support): Why are you calling me about this? It's on page 1 of the manual (that I wrote).
Them (System Operations): We don't have time to read the manual.
FatManBeatYou@reddit
But they had more time to ring you up at 3AM? Make it make sense.
psychopompadour@reddit
Well see, one involves reading, and the other doesn't, so
feor1300@reddit
The problem is when you Google the model the first answer you get will be AI generated, so people just assume that must be right.
dreaminginteal@reddit
Google is now pushing AI nonsense as the top result in most searches. Sadly, Googling something isn't going to help much.
JanB1@reddit
At this point I'm using AI chats to find the damn documentation about things, because Google seems to have decided to stop offering a good search engine. I also regularly have to deal with hardware and/or software documentation that doesn't care to go into detail: it just states "do this and that and you will get the result" without giving even the slightest hint of what to do if you get an error before the result. Or you need to make an account first to get the documentation at all (and then get contacted by sales, or get spam).
geekwonk@reddit
if youāre going to insist on using AI for this purpose then i recommend perplexity since itās search-first and is built to provide sources for its claims
narielthetrue@reddit
The number of people that mix lose and loose is also getting larger.
Or someone arguing with me that complement and compliment are the same thing, or else why would autocorrect not say complement is wrong?
The world is fucked, bro. FUCKED
FantasmaNaranja@reddit
i complement you on your grammar, i too, find that people are being too lose with it, nowadays...
Sairenity@reddit
to* lose
you can fir a few more mistakes in yo shit, c'mon
FantasmaNaranja@reddit
Toulouse? but im not french...
syntaxerror53@reddit
Toulouse or not too loose.
That is the question.
rleaff1@reddit
fir
Sairenity@reddit
fugg :DDDDD
borkman2@reddit
Benis :D
Sairenity@reddit
ebin :DDDD
andypanty69@reddit
I compliment you with your complement.
thereddaikon@reddit
My latest peeve is everyone using then when they mean than. It's almost as infuriating as "funnily enough". Autocorrect has gotten worse. Seems instead of trying to match the word with use and meaning now it matches based on what other people type. So instead of making helpful corrections it tries to sabotage my typing with the crowdsourced illiteracy of Zoomers and gen alpha.
Frahal@reddit
Autocorrect is horrid, how the heck do you get habanero from haha? And don't get me started on acronyms, relative of mine typed ttfn (Ta Ta For Now) and the phone autocorrected to Mitch.
MrT735@reddit
I'm still trying to train my new phone to not give me the American spellings for everything.
But the use of "should of" needs to be punished; bring back the stocks in the market square for offenders.
thereddaikon@reddit
I think the larger problem is the general loss of standards in society. On its own one grammar mistake becoming widespread isn't a problem. And when someone says it's not a big deal they're right. But taken in its totality, the loss of all spelling and grammar, no public shame, no social contract, just doing what you want and having "your truth" really just results in a shitty place. We didn't land on the moon by acting this way.
MrT735@reddit
A lack of consistent standards is what lost the Mars Climate Orbiter. The navigational software expected input in metric, and the ground team used software that sent figures in imperial.
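That failure mode is essentially a missing unit conversion at an interface boundary. A hedged sketch of the fix (the figures below are illustrative, not the mission's actual telemetry): make the unit explicit instead of assuming it.

```python
# Pound-force seconds to newton seconds; 1 lbf is about 4.448 N.
LBF_S_TO_N_S = 4.4482216

def to_newton_seconds(value, unit):
    """Convert an impulse reading to N*s, refusing to guess the unit."""
    if unit == "lbf*s":
        return value * LBF_S_TO_N_S
    if unit == "N*s":
        return value
    raise ValueError(f"unknown unit: {unit}")

# A figure read as N*s when it was really lbf*s is off by ~4.45x,
# which is roughly what doomed the orbiter's trajectory estimates.
print(to_newton_seconds(100.0, "lbf*s") / to_newton_seconds(100.0, "N*s"))
```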
fevered_visions@reddit
I also run into rein/reign and "tow the line" a lot lately
wrincewind@reddit
The magic of āØLarge Language ModelsāØin action~
thereddaikon@reddit
Your username makes me miss Prachett. He would probably have a funny discworld story that's a metaphor for LLMs.
wrincewind@reddit
CMOT tries to get rich quick by running "hex for the people" and he charges a penny a go, but it's just a bunch of imps that agree with whatever they're told.
Rainthistle@reddit
The one that kills me is folks who swap 'apart' and 'a part', which are pretty much diametrically opposed.
Puzzleheaded-Joke-97@reddit
Yeah, I see that alot!
Rainthistle@reddit
Yeah, it's almost as bad as 'a lot' and 'alot'.
fevered_visions@reddit
ooh I'm not the only one with that pet peeve
TychaBrahe@reddit
complement means that one thing completes another. Like scrambled eggs and bacon are a good breakfast, but some nice crispy hashbrowns complement it.
A compliment is something I like hearing.
joe_lmr@reddit
"I complimented the chef for complementing the dish with hash browns"
Trinitykill@reddit
I contemplated complimenting the dish if it weren't for the commonly complemented constipation.
brother_of_menelaus@reddit
The bacon is complementary to the eggs. $7.99
The bacon is complimentary with the eggs. $4.99
Floresian-Rimor@reddit
Affect & effect. I swear even publishers these days don't know the difference.
bobk2@reddit
"different from" is correct; "different than" is not.
abecedaire@reddit
Mine is phenomena instead of phenomenon when it's singular! I keep seeing "a phenomena" everywhere and it drives me nuts lol.
hates_stupid_people@reddit
Most people have stopped calling out "should of" on social media, and some people are defending the use.
narielthetrue@reddit
I know, and that makes me so mad.
Granted, if Futurama is a prophecy of any kind, that will be the norm in 1000 years
Prior-Task1498@reddit
OK but those are honest mistakes. Asking chatgpt about technical topics and insisting its answers are true is much worse.
aspiegrrrl@reddit
discreet: adjective
careful and circumspect in one's speech or actions, especially in order to avoid causing offense or to gain an advantage.
"we made some discreet inquiries"
discrete: adjective
individually separate and distinct.
"speech sounds are produced as a continuous sound signal rather than discrete units"
IAMA_Plumber-AMA@reddit
Breath and breathe have essentially swapped meanings at this point.
Queer_Echo@reddit
It's not even autocorrect, it's predictive text in a fancy hat. AI ruined autocorrect; it's why any autocorrect using AI ends up suggesting misspelled words: it offers the most common spelling, and if a word is commonly misspelled it'll suggest the misspelling.
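A toy illustration of that claim (the counts below are invented, and real autocorrect is far more elaborate): a suggester that ranks purely by how often the crowd typed each form will happily "correct" toward a popular misspelling.

```python
from collections import Counter

# Hypothetical corpus counts where the misspelling is more common.
observed = Counter({"receive": 40, "recieve": 55})

def suggest(word, vocab):
    # Treat any observed anagram of the typed word as a candidate
    # (a crude stand-in for edit distance), then pick whichever form
    # the crowd typed most often.
    candidates = [w for w in vocab if sorted(w) == sorted(word)] or [word]
    return max(candidates, key=lambda w: vocab[w])

print(suggest("receive", observed))  # "recieve": the misspelling wins on frequency
```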
StorminNorman@reddit
The rise of "payed" in place of "paid" is giving me mad "literally means figuratively now" vibes.
Evanisnotmyname@reddit
My wife thought her brand new MacBook was broken and defective, reinstalled the OS, and was arguing to bring it to the Apple Store, all because the yellow mic icon would pop up on the screen
Far-Win8645@reddit
You are right. But lately, the manuals are basically useless.
I bought a new washing machine a month ago and the manual did not even specify how long each program takes
andypanty69@reddit
But I bet it tells you how to connect it to "the cloud" so it can be updated, controlled remotely and definitely not be disabled when the subscription model comes in next year.
LupercaniusAB@reddit
Shit, most of the things that I interact with now have hieroglyphs instead of words for their "manual".
I hate it.
NonNewtonianResponse@reddit
The camera is a machine. ChatGPT is also a machine. Obviously, ChatGPT must have more insight about how its fellow machine works than any human would
Miro_the_Dragon@reddit
The amount of times I've seen someone post something along the lines of "ChatGPT said my teacher is wrong" (and the same with textbooks and grammar books instead of teachers)... Like yeah, sure, teachers can make mistakes, and it's okay to question things that seem off. But...it's concerning to see so many people go to a chatbot to verify a professional person or resource *sigh* (and some of them will still argue with "but ChatGPT said..." after being told by several native speakers that their language teacher is right and ChatGPT is hallucinating)
SalemTheKit@reddit
I think the worst version of that story was hearing a friend's younger sibling get into a ChatGPT fight with their teacher where both were using LLMs to try and prove their point and, of course because sitcom logic, both were saying they were right.
Only recently did I learn how many teachers at my local high school constantly use LLMs and image generation for so many things and it just kinda ruined me because I know so many of them mean well.
I_am_normal_I_swear@reddit
Gen X and older Millennials are the last ones to truly understand how computers work. We tore it apart to see what was there and then booted it up to see how the software worked.
I mean, look at us all becoming HTML pros because of Geocities and then later MySpace.
syntaxerror53@reddit
Internet Experts because they knew what HTML and HTTP stood for.
Golden_Apple_23@reddit
tore it apart, popped the hood, fiddled with DIP switches, knew how to create master/slave drives (and the differences!) and dealt with the unholy COM1 issues of Soundblaster.
PraxicalExperience@reddit
Auuugh having to edit config.sys and fuck around with dip switches to try and find that one combination of IRQs and DMAs that it allowed and was open...
Golden_Apple_23@reddit
yeah. I look down, plug in my SSD drive and it's immediately recognized, I can change the drive letter on the fly...
Do I miss the old days? HELL NO.
Am I nostalgic for the effort needed and the satisfaction gained from getting your unique system to work properly? Yeah.
psychopompadour@reddit
I think in part that was because at the time, it was very difficult to use a computer for anything except very specific functions (like a single program for work, etc) if you didn't at least KIND OF understand how it worked. Same for websites... if you wanted your LiveJournal to look non-generic you HAD to learn some html. The tech was its own gatekeeper to some extent. Nowadays UX has been made super friendly and easy, which is good because it allows ANYONE to be online making content, but also bad for the exact same reason, haha
fresh-dork@reddit
gpt always says you're right. i thought people knew that by now
JaschaE@reddit
No, that's just what the haters claim, of course chatty agrees with you when you are always right! /s
Wiiplay123@reddit
I really want to make a "ChatGPT made me delusional" reference here, but I also don't want to spoil how crazy it gets.
Sarke1@reddit
You're absolutely right!
nagi603@reddit
More and more people even in work are nothing more than conduits between their colleagues and ChatGPT. Zero individual thoughts, zero expertise, just a dumb pipe.
Blurgas@reddit
ChatGPT also had an aneurysm over a seahorse emoji
JaschaE@reddit
An aneurysm requires a brain. It's just a malfunction. (Personally trying not to humanize the chatbots.)
PendragonDaGreat@reddit
Analog photography is especially bad for it, I've found, since at the moment the newbs almost outnumber the ones with experience. I'm super glad that film is seeing a resurgence and projects like Harman Phoenix and Lucky C200 are happening, because my inner film goblin wants more film (and more options). But holy hell, the misinformation trains are full steam ahead at times.
Kodak-Alaris, please, I would go feral for Ektachrome in 100ft spools, my bulk loader is waiting.
No_Buy2554@reddit
It's because AI is built that way. Its primary function is to complete the task that's asked of it, with the answer being correct a lower priority. So if it's prompted in a way to get a certain result, it will give that result to the user even if it's not correct, so it can complete the task.
psychopompadour@reddit
Well, I don't know that it's that complex... I feel more like the way LLMs work, it's more that it's very difficult to ensure correctness. I'm pretty sure that if it were easily possible, that would be implemented by at least SOME companies. I notice specialized non-LLM AI (which is to say, just what we're currently calling super-complex algorithms) is usually far more accurate and useful in what it produces (for example, that medical one that folds proteins, or the astronomy one that looks for patterns in light waves)... however, that stuff needs to be built and maintained by people who understand how it works and what kind of results it produces, and you can't sell that to the general public as a computer talking to you, so yeah.
No_Buy2554@reddit
This info was given to us by one of the AI companies themselves when they came in to do our training on the tool. They were up front that more of the training time would be spent verifying the results of our prompts for that reason.
JaschaE@reddit
If you put it that way, I feel a weird kinship...
Cypher_Aod@reddit
Link? That sounds interesting!
JaschaE@reddit
300-page dense theory? "The Negative" by Ansel Adams of course ^^
Cypher_Aod@reddit
Oh awesome, I love Adams's work, I'll track down a copy.
Screwed_38@reddit
Well of course, take the 2 things that say you're correct while ignoring the 100s of things that say you aren't.
Confirmation bias is a bitch
No-Society-6118@reddit
man, that sounds like a wild ride! ai can be super hit or miss, huh? for customer support stuff, i use chatpirate for this kind of thing and it usually gives pretty solid info. just gotta be careful with the whole "hallucination" thing, lol.
SpaghettiAndSlaps@reddit
Bruh, this is wild but also kinda expected now. People be trusting AI outputs like gospel without checking. ChatGPT can def help, but it ain't perfect or omniscient. Gotta keep that healthy skepticism or you end up activating features that don't even exist lol. AI's dope, but still gotta use your brain too.
Master-Hamster3879@reddit
@mods This is a bot. Look at its comment history; e.g. the combination of comments starting with "bruh," "vibes", "classic", "lol fr tho".
All bots make the same stupid variation of responses on posts. Once you see it it's pretty obvious.
How to spot bots
Thulak@reddit
We do graded e-learning tests to onboard our engineers. We regularly receive tickets about errors in the tests, and engineers arguing for more points, which we encourage. (Rather have people think than blindly trust.)
One new hire decided to copy-paste the questions into our company-internal version of ChatGPT. We have a couple of catch questions that the AI gets wrong 100% of the time (so far), so it is fairly obvious, though it hadn't happened before. This user wrote a ticket proudly stating that the AI gave them these answers and therefore they must have a 100% score. They also claimed her colleagues confirmed her answers, without giving a single name.
Safe to say she did not get the extra points.
PackYourEmotionalBag@reddit
Adjunct professor here... I have an assignment on XML that I've been using for the last 6 years.
Every layperson I've asked to do it gets it right on the first try, but about 85% of my students get it wrong, and we have an in-depth discussion on assumptions and overthinking.
Until this year, when 100% got it right. From the other assignments I know that this class is not far and above my other classes, nor so far below that they wouldn't fall into the overthinking trap. I'm just grading a classroom full of copy/paste from an LLM. No longer do we get to have the discussion on overthinking, because no one is thinking at all.
The field they are going into is niche, and LLMs constantly hallucinate when asked anything beyond the cursory for the field... it has invented entire libraries in C# that just don't exist, and its knowledge of playing with this data in Python is just as bad. (Staying intentionally vague.)
Nihelus@reddit
Sounds like you need to have a discussion about cheating with AI, and about becoming brain-dead idiots if they don't start thinking for themselves. Could bring up just how stupid they'll look, or the jobs they'll lose, if they just trust everything AI says without thought.
PackYourEmotionalBag@reddit
I do when I can... I try to enforce that using AI isn't the problem, but usually a Google search will get you closer to the right answer, since there are around half a dozen websites that have the best, most up-to-date information.
I explain that with our niche field there is a small sample pool that AI can pull from, and that there are old newsgroups catalogued by Google that have information from the infancy of our standard which doesn't apply anymore.
I try to drive home that once you are in the job, there might be times where AI could be useful, but unless you understand the data first you are setting yourself up for embarrassment and a fast track out of the field.
The part that really makes my brain hurt: there are 2 professional tests that these students can take, and in this niche field, with the number of grads, it's really a good idea to take them to set themselves apart. There is a proctor, no notes, no book; how these students think they are going to pass that is beyond me.
I was finally told by the dean that the students are adults, that I've warned them, that I've encouraged thinking over relying on LLMs, and that at this point I care more about their success than they do. They are paying to learn, to get their degree, and to prepare for the tests... I'm providing an environment to do all that; it's up to them to use it or waste it.
Icarium-Lifestealer@reddit
Now I'm interested in that XML question. I'd expect few laypeople to even know what XML is, let alone answer questions about it more reliably than IT students.
Lord_Dreadlow@reddit
Cisco IP phones use .xml config files.
StorminNorman@reddit
Out of interest, how many laypeople do you think know that?
PackYourEmotionalBag@reddit
I laid out a hypothetical application and then showed the XML file that would need to be created for the configuration of the application.
I then pitched an addition to the application, to have it do something else, and asked what additional fields should be added to the XML (while maintaining proper formatting).
It's really not an XML question so much as using XML as a stand-in for "can you parse a document with markup?"
Laypeople look and say "oh! I see a field called 'Email' that contains the email address, and the new application needs a phone number field, so let's add that under a second nest", because they are just doing a 1:1. But my students typically try to get too creative and end up going in a different direction, or they are too confident, don't check their markup, and we run into syntax errors.
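For anyone curious what that kind of 1:1 extension looks like in practice, here's a toy version (the config, field names, and values below are all invented for illustration, not the professor's actual assignment):

```python
import xml.etree.ElementTree as ET

# A stand-in config in the spirit of the assignment described above.
original = """\
<application>
  <user>
    <name>Jane Doe</name>
    <email>jane@example.com</email>
  </user>
</application>"""

root = ET.fromstring(original)
user = root.find("user")
# The layperson's move: mirror the existing fields and add one more.
phone = ET.SubElement(user, "phone")
phone.text = "555-0100"
print(ET.tostring(root, encoding="unicode"))
```

The "too creative" failure mode would be restructuring the document or hand-editing the markup and breaking well-formedness, which a parser like the one above refuses to load at all.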
beachedwhitemale@reddit
God help us.
LupercaniusAB@reddit
Oooh, I'm a layperson lurker!
paulmp@reddit
I hope I am never in a position where my fate is decided by a jury of these types of people. They are the types that go "well the police wouldn't have arrested them if they didn't do it".
RatherGoodDog@reddit
"And how would you feel if you hadn't eaten breakfast this morning?"
"But I did eat breakfast this morning"
"Yes, but how would you feel if you hadn't?"
"I don't understand"
dogman15@reddit
No imagination.
LordTimhotep@reddit
I recently saw a kids' show about how the brain works. They had an experiment about how people react to what they're being shown as evidence.
They had a number of kids who were told they were going to watch a press conference about someone, but that there was one completely true fact: the person being talked about was innocent.
Then they watched the press conference, in which the person was blamed for stealing money. It was said that they had stolen before, and a very grainy video was shown as proof (and that video could have been anybody).
They asked the kids afterwards if the person was guilty, and more than half were sure the person was (even though they had been told differently beforehand).
They did some other things after that which I can't remember, but this part really stood out to me. These are the people that would also take the result ChatGPT gives them as fact.
paulmp@reddit
I was pretty much constantly in trouble as a kid because I questioned everything and everyone, mostly out of curiosity; I was under the mistaken impression that questions were a valid way of seeking to learn and understand. Turns out many neurotypicals find that to be a challenge to their authority, or think I'm trying to argue with them.
Intelligent-Luck-954@reddit
Did you read about the judge who said that while on jury duty?
Ahielia@reddit
How can you be a judge without being trained as a lawyer first?
Intelligent-Luck-954@reddit
Welcome to the world of elected judges
paulmp@reddit
Geez... he said the quiet bit out loud.
Flog_loom@reddit
Holy fuck.
knoxaramav2@reddit
Unfortunately that has been a problem long before AI.
paulmp@reddit
I wasn't saying that it was a new issue, I was pointing out that there would be a significant overlap in the people who make up these two groups of people.
MrWolfe1920@reddit
Wow, they just openly admitted to cheating like that?
esqew@reddit
At my company (400k+ employees globally), using AI for post-training exams (except where explicitly permitted) is a fireable offense. I'm frankly shocked it's not this way elsewhere.
Thulak@reddit
We're a smaller company (fewer than 1,500). We work in such a niche field that most new hires have never worked with our products or anything similar. Add on top that they need to understand some surface-level polymer chemistry, so we have to do a lot of in-house training. The company philosophy is still a "results matter, how you got there isn't that important" kind of thing, but it's shifting. For that reason the tests are "open book", or rather "open PDF". Despite that we frequently get results of 60-70% on some topics. The consequence is usually more training for said new hire. In terms of AI usage... I don't have to like the policy, I just have to deal with it.
whizzdome@reddit
I would be interested to know more about the questions that AI gets wrong 100% of the time.
Thulak@reddit
It's niche knowledge that isn't widely available. Since the answers are usually multiple choice, AI tends to go for the lowest or highest values that aren't outlandish. Hasn't failed a single time.
Nevermind04@reddit
We have a series of benchmark tests we use to gauge the progress of graduate engineers as they're going through the first two years with us. We also have catch questions to identify AI usage. Because the stakes are so high with the work we do, we have a strictly enforced policy against AI use. We don't allow it at all. You either learn to be an engineer or you wash out of the program.
We have a two strikes policy. After the first blatant use of AI, we don't directly accuse a candidate, but we meet with them one-on-one and (hopefully) put the fear into them. We explain why it's so essential that they actually learn and understand every single part of the project they're working on. They must become subject matter experts. If they do it again, that's considered gross negligence under their contract and they're gone.
We've had a handful of first strikes so far but nobody has made it to strike two thankfully. But that day is coming.
eichkind@reddit
That sounds like she also shouldn't get the job...
Seroseros@reddit
She's probably in the C-suite now.
Thin_Pomegranate9206@reddit
AI slop is affecting IT as well. A couple of times when I needed to escalate an issue, I got AI garbage sent back to me. The most notable was when it included solutions requiring software from an outside vendor with a subscription service. Pissed me off enough to call him out. It's making everyone dumber, destroying our environment, and negatively impacting our economy.
HaElfParagon@reddit
I'm on the other side. Having idiots submit a ticket, then when I tell them how it will be resolved I'll get an argument back "Well ChatGPT says..."
Well, when ChatGPT signs my checks, I'll start listening to it.
Alpha433@reddit
Dude, it's spreading everywhere now. I do HVAC work, and so many posts on the HVAC advice sub, and even customers IRL, start with "ChatGPT said" and then finish with some of the dumbest shit ever.
It's not even only old people, either. It's all ages.
HaElfParagon@reddit
Yuup. Get a lot of it in the homelab and datahoarder space too.
They'll post a link to some shady Chinese site and go "Is this 8TB SSD for $5 okay? ChatGPT says it's highly reliable?"
RogueThneed@reddit
Why would it "only" be old people? Everyone has been trained to accept that computers are right, and that used to be reliably true. If anything, younger folks are more likely to blindly accept generative AI output because they don't know enough about the world to be cynical.
Future_Direction5174@reddit
Back in the '80s we were told "never say it's a computer error; computers just do what they are told. Someone, somewhere, told the computer how to do it and got it wrong".
Had a computer told to "round down when the sum is ###.05 or less". Further multipliers were then used. The legislation said "always round up". Half of the annual bills that year went out undercharging the recipients. They weren't corrected, because the cost of fixing the error AND rerunning everyone's bills, plus the subsequent delay in collecting payments, would have far exceeded any potential loss. The company decided that since only the people who had been undercharged would be aware of the fact, and if they complained they would have to pay more, it would most likely never come to light. It didn't, and it was corrected long before the following year's bills were calculated.
So yeah, computers can make errors but a human started the ball rolling in the first place.
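For the curious, the kind of rounding-rule mix-up described above is easy to sketch. This is illustrative only: the story doesn't give the real amounts or precision, so the whole-unit rounding and the sample bill value here are assumptions.

```python
from decimal import Decimal, ROUND_CEILING, ROUND_FLOOR

def round_legal(total: Decimal) -> Decimal:
    """What the legislation required: always round up to the whole unit."""
    return total.to_integral_value(rounding=ROUND_CEILING)

def round_buggy(total: Decimal) -> Decimal:
    """What the program was actually told: round down when the
    fractional part is .05 or less, otherwise round up."""
    if total % 1 <= Decimal("0.05"):
        return total.to_integral_value(rounding=ROUND_FLOOR)
    return total.to_integral_value(rounding=ROUND_CEILING)

bill = Decimal("123.05")
print(round_legal(bill), round_buggy(bill))  # the two rules disagree by a whole unit
```

On a borderline bill like 123.05 the legal rule charges 124 while the buggy rule charges 123, which is exactly the silent undercharge the story describes.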
mrhashbrown@reddit
I recall some basic polls and studies showed that digital literacy is lower for older people (learned it later in life) and younger people (exposed to it very early but did not use tools/software that still required critical thinking to use appropriately), yet the middle-aged Gen X and Millennial groups have stayed mostly level.
Makes sense when you grow up with technology as it emerges, but such tools still relied on analog tools/data to a certain extent. Now the analog part is really disappearing and I think that's what has made technology feel much less grounded, with AI at the forefront.
Born-Entrepreneur@reddit
It's been an ongoing concern of mine. Yes, technology is much more accessible and usable now that we don't have to muck with config files to squeeze a mouse driver in there with Doom, or set up our IRQs.
But it's gone too far with phones especially sanding all the edges off. People don't understand even basic concepts like the file system, they never engage with it because each app has its own wrapper around it and you never work with the basic system. For example my ex had no idea that the Downloads folder existed on her phone until I pointed it out to her, where we discovered 85 copies of the same PDF menu or form she had downloaded time and again, not knowing she already had it.
mrhashbrown@reddit
Yeah I wouldn't be surprised if most people were unaware of the files app on their phones. And I don't blame them because trying to manage files on a phone is a mess, especially iOS where everything is so heavily compartmentalized by app you can barely figure out where anything is.
Liked how you described it as "sanding all the edges off", think that's a perfect way to put it. It's an effort to simplify that is hurting more than it's helping imo
CallMeSmigl@reddit
I am an audio engineer. Since DAWs are super complex, I sometimes need help troubleshooting. Whenever I task an AI to help I get the weirdest hallucinations. Whole menus and workflows that don't exist are being quoted. The suggested solutions would also usually break something else. Get smart at CTRL+F-ing your way through manuals and documentation, people. Don't just blindly listen to AI.
HaElfParagon@reddit
Just wait until the QA people all get laid off and the documentation is written by AI :D
Demache@reddit
Same happens in car repair (and honestly any technical skill). People in car subs asking for buying advice or repair advice come up with some truly bizarre questions and claims because "ChatGPT said". Like half the conversation is just people going "whoa whoa hold your horses" and convincing OP that the chat bot made shit up.
NiiWiiCamo@reddit
"But Chad-She-Bee-Dee said...".
As much as I hate people that use that phrase as a rebuttal to facts, at least it tells me I'm probably dealing with someone without any critical thinking skills.
I believe LLMs are a great tool for certain applications, the same way a jackhammer is a great tool for certain applications. Thing is, we all know that, but these are the same people that buy the "31-in-one hammer-screwdriver-spanner" tools for $5 and tell you they're better than the proper tools.
No point in arguing with them.
zanderkerbal@reddit
Also like a jackhammer, if you use it for anything outside of its narrow set of applications you will make a complete mess of everything.
PhantasyAngel@reddit
Bro it's fine for typing on a keyboard, watch keyboard splits in half, desk collapses and floor now has a small dent with concrete showing through
See it's perfect.
Also it works when using the office printer!
zanderkerbal@reddit
To be fair, if this subreddit has taught me anything, it's that sometimes a jackhammer might be the right tool for dealing with an office printer.
vaildin@reddit
A jackhammer is the only tool that should ever be used on a printer.
EquipLordBritish@reddit
It's like a quantum computer, it's great for things that you can easily verify are true yourself, but not great for everything else.
zanderkerbal@reddit
With an emphasis on the "easily," yeah. Humans are really bad at checking large or even medium amounts of mostly-correct-looking automated output for errors.
etihw_retsim@reddit
I had to double-check a data dictionary generated by an LLM from DDL and a database description document. The output LOOKED really good, but it made up so much stuff. And that's after we did a decent amount of prompt engineering before getting a half-decent output. And that was just doing basic formatting and acronym lookups from a fairly limited amount of data.
zanderkerbal@reddit
Yeah, double-checking is a task that makes people's eyes glaze over at the best of times, but the way LLMs work via "what is probable to come after this?" makes their mistakes extra insidious. They're better at doing things that look right than things that actually are right.
Golden_Apple_23@reddit
exactly, the right tool for the right job. Know your tools' strengths and weaknesses.
spaceraverdk@reddit
Well, every tool can be used as a hammer, once.
Ich_mag_Kartoffeln@reddit
At least once.
spaceraverdk@reddit
Some, more than others. 🤣
fresh-dork@reddit
brb, coding up an interface called ChadCBD. kind of like gpt, but half the time he just wants to smoke up
NiiWiiCamo@reddit
nice, since Chad-She-Bee-Dee is already great at hallucinating the jump in quality should be minimal
EquipLordBritish@reddit
Don't forget the sunglasses and popped collars!
xienwolf@reddit
It immediately tells you they have no desire to engage in any critical thinking, and may flat out be incapable of it.
So... don't bother trying to tell them that GPT was wrong. They understand WRONG, and if they believed something for a moment, it cannot be wrong.
Instead, break out every logical fallacy that amuses you and explain how their answer could be CONTEXTUALLY correct, but in this case...
thereddaikon@reddit
There absolutely is a point to arguing with them: showing they are wrong, and belittling them. Not doing so furthers the erosion of standards and gets us closer to Idiocracy. More people should be publicly shamed for being idiots. The day we stopped doing that is the day we started on the slow fall to where we are now.
Arterra@reddit
I read once that empathetic approaches have a higher chance of bypassing people getting defensive at being wrong and entrenching themselves. The thing is, it's hard to keep caring about the sheer number of people slowly throwing their mind and agency into the trash. I have to believe the middle ground is a bland rebuke and then bypassing them entirely, because I can't muster the energy to help or talk down to what feels like the entire world.
Mickenfox@reddit
It's OK you can just get ChatGPT to argue with them.
Sairenity@reddit
... the one valid use for LLMs might have been found, holy shit. What's even better: once your target took the bait, the thread ought to become more and more nonsensical as the LLM starts hallucinating more
CDRnotDVD@reddit
I think this is still the best use: https://www.technologyreview.com/2024/09/12/1103930/chatbots-can-persuade-people-to-stop-believing-in-conspiracy-theories/
MutantArtCat@reddit
Probably also the same people that end up in a canal or a storefront because their navigation system told them to go right.
Defiant-Peace-493@reddit
"The machine knows!"
FyneHub@reddit
The best part is him doubling down and saying the AI was "really onto something", like you should just build the features because ChatGPT thought of them. At that point just forward the ticket to OpenAI and let them handle it.
Tools_for_MMs@reddit
I had a customer question if his copy of Black Ops 7 or Battlefield 6 (can't remember which) was fake, bc chatGPT said it didn't exist.
Don't know how he got that, bc when I asked it, it gave me the correct info.
Sandwich247@reddit
Gosh dang it, it was always going to happen I just wish we had more time
Slinkypossum@reddit
I work in education and there's two camps regarding AI. Those who won't touch it and those who are all in and want to use it for everything. I've given several presentations on its proper use and emphasize the importance of watching out for hallucinations. Most of the time I feel like all they hear is Charlie Brown's parent noises from my mouth.
nymalous@reddit
I've not found AI to be helpful in my understanding of material (working on a degree in mathematics and data science, including learning a programming language for the latter).
I'm listening to the Charlie Brown Christmas album while I read this. :)
Demonicbiatch@reddit
I belong mostly in the first category, though i have used it for text generation. I also tried it for something else which it got very wrong and couldn't correct when asked multiple times. I also prefer to teach analog with pen and paper, no calculator. Until we are doing assignments that need the technology of a math program. Then i teach the niche and smart use of that program. I also remember being forbidden from using Wolfram Alpha back when i was in school...
MusicBrownies@reddit
'Charlie Brown's parent noises' - great reference!
pennyraingoose@reddit
Throwback to high school when I was describing Charlie Brown parent noises to my English class and said they had "horny voices." 😳
Speijker@reddit
We get so many questions recently from users saying "I asked ChatGPT how to do X in Outlook/Excel/Whatever, but I can't find it. Please fix". Smart people mind, engineers and technicians...
The cake was taken by a highly paid IT consultant who needed a CLI tool and couldn't figure out how to set it up. Walked him in person through installing the tool through PowerShell, showed him how to start it and get to the login. Even opened a browser tab with the step-by-step manual, showing every line he needed to type to start, connect, and get going... He came back half an hour later with "I asked ChatGPT how to use the CLI tool, and it said to check here if it's installed, but I can't find it?". Dude, you're looking nowhere near the Control Panel or Programs & Features, and you just stood next to me when we installed and ran your tool...
/rant
Seriathus@reddit
High-Paid Consultant = Halfwit-Pandering to C-suites.
meoka2368@reddit
The company I work for decided to make an AI agent to help us diagnose and troubleshoot issues. It apparently has access to our product and features, but I haven't bothered testing that.
Instead, I threw something generic at it.
"The computer says limited or no connectivity. What should I try?"
It came back with a list of things like checking cable and DNS settings.
"How would DNS be involved in getting an IP?"
It said it wouldn't.
I asked why it suggested that then.
And it deflected the question.
Needless to say, I don't use it.
azurecrimsone@reddit
Failed DNS resolution can result in a "limited or no connectivity" error, depending on how generic the application/OS error messages are. However, the list is missing checks for whether the machine has a network to talk on, an IP address, working IP routing, and whether the protocol it needs is blocked by a firewall/NAT.
I'd say one of the main purposes of DNS is getting the IP addresses associated with domain names (there are other types of DNS records, but A/AAAA records are among the most important). So it's not an entirely useless suggestion; DNS is involved in getting an IP (though it should have mentioned DHCP if there was no local IP).
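That layered ordering (link/DHCP, then DNS, then actual reachability) can be sketched as a stdlib-only triage script. A rough sketch only: the probe address, port, and messages are made up for illustration, and a real tool would check routing and firewalls far more carefully.

```python
import socket

def diagnose(host: str = "example.com") -> str:
    """Triage 'limited or no connectivity' from the lowest layer up."""
    # 1. Do we have a usable local IP at all? (link layer / DHCP)
    try:
        probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        probe.connect(("192.0.2.1", 80))  # UDP connect() sends no packets
        local_ip = probe.getsockname()[0]
        probe.close()
    except OSError:
        return "no usable local IP: check link / DHCP first"
    # 2. Does name resolution work? (a DNS failure can also surface
    #    as a generic connectivity error, as noted above)
    try:
        addr = socket.gethostbyname(host)
    except socket.gaierror:
        return f"have IP {local_ip} but DNS fails: check resolver settings"
    # 3. Can we actually reach the host? (routing / firewall / NAT)
    try:
        socket.create_connection((addr, 80), timeout=3).close()
    except OSError:
        return f"DNS OK ({addr}) but connection fails: check routing/firewall"
    return "all layers OK"
```

The point of the ordering is the one made above: each check only means something if the layers below it already passed.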
meoka2368@reddit
See, you explained a way that it might be possible for DNS to be involved.
The chatbot went the route of contradicting itself instead.
azurecrimsone@reddit
Exactly! I probably should have made that issue clearer for the people who would otherwise trust a chatbot.
iamdisasta@reddit
You had my upvote even before I started to read your text.
Ironically, I think AI helps bring back some natural selection.
I once overheard a patient in my doctor's office arguing with staff to get a prescription. They insisted he had to wait for the doctor to check and approve that medication.
"But ChatGPT totally suggested these tablets for my symptoms, I can show you!"
Stryker_One@reddit
Great. Go get the prescription from ChatGPT.
MonkeyChoker80@reddit
You laugh, but I fear there's someone out there trying to make a "Chat MD" that can prescribe pills...
thereddaikon@reddit
IBM tried that for years with Watson before chatgpt was a thing.
Sporkmancer@reddit
To be fair, Watson Health wasn't an LLM. That said, they sold it off in 2022 because of the limitations of the types of AI that were (and still are) available. Since then, they have changed direction into making WatsonX, which is an LLM just like ChatGPT but not intended for medical usage (though the best usage of LLMs is still chatbots that shouldn't be trusted for accuracy).
Flog_loom@reddit
What came of this?
thereddaikon@reddit
I haven't checked in a while, but last I heard it was a flop.
Flog_loom@reddit
I remember advertisements.
Hina_is_my_waifu@reddit
There's already medical ai that physicians use.
mrhashbrown@reddit
Well insurance would never support that as a "pharmacy", so any kind of service like that would be DOA.
But applying AI to current hospitals and their in-house pharmacies could be a problem, especially as hospital management is all about cutting costs and stretching every dollar they have. I'm even curious to what extent the Alexa AI has infiltrated Amazon's pharmacy home delivery service.
At least most doctors aren't typically dumb enough to risk prescribing something blindly. They know just about anything they do exposes them to litigation and losing their license, which is why they often have to be pretty rigorous with diagnosing before offering a prescription.
Stryker_One@reddit
Damn, and here I thought that I'd be able to get insurance to pay for my street pharmacist. /s
Squeezemyhandalittle@reddit
It's done. I know someone making it.
EquipLordBritish@reddit
Followed immediately by a 'mysterious' uptick in prescription drug use and overdoses.
nachohk@reddit
You laugh, but some of us have had such abysmal experiences with doctors that it's hard to imagine even the chatbots doing any worse. I am genuinely excited for a lot of doctors to lose their jobs to computers.
Ok_Bandicoot6070@reddit
It'll just give you the WebMD answer of stage 5 everything cancer when you input your symptoms.
Stryker_One@reddit
Stage 5 everything cancer? Is that like Jeremy Clarkson's Double Ebola?
Skerries@reddit
but it's got GP in its name
EdricStorm@reddit
One of my favorite things I saw on here recently:
AI doesn't know facts. It just knows what facts look like.
SporesM0ldsandFungus@reddit
Let them know about the man who trusted ChatGPT to lower the table salt in his diet and ended up in the hospital for nearly a month with psychosis due to bromism (bromide overdose):
After using ChatGPT, man swaps his salt for sodium bromide—and suffers psychosis
TL;DR: Man asks ChatGPT how to lower his consumption of table salt (sodium chloride). ChatGPT tells him to substitute it with sodium bromide, which he orders online. While it was used as a sedative 100 years ago, doctors stopped prescribing it because it makes you hallucinate and go crazy until your kidneys flush it out. Dude used it for cooking until he couldn't stand or speak coherently.
sleepydorian@reddit
Great, you can sue ChatGPT when you have an adverse reaction. Oh wait, that's not how it works, so you'll have to wait until the doctor, who is actually liable, approves it.
vidoeiro@reddit
I wouldn't trust even a helper AI trained on medical data for just that purpose unless it's used by a doctor. People who trust general-purpose LLMs with medical stuff are insane.
MuckRaker83@reddit
As a hospital-based provider, AI has given me nothing but headaches and patients who are certain about things they know nothing about
FantasmaNaranja@reddit
Before, someone had to have at least a baseline of knowledge to even be able to google something to prove themselves right.
Now ChatGPT spits out reasonable-sounding nonsense within seconds, even if you have no idea what you're asking for.
BubbleWrap-Booty@reddit
Bruh, honestly, this just proves people take AI way too literally sometimes. Like, ChatGPT ain't some magical genie that can activate hidden features; it's just spitting out info based on what it's been trained on. If you ask it stuff that sounds legit but isn't real, you'll get hallucinations. Customers gotta chill and double-check, not just trust everything blindly. Use AI as a tool, not gospel.
WaytoomanyUIDs@reddit
They will hallucinate about stuff in their dataset too. It's a fundamental design flaw of the current generation of LLMs: the designers decided to prioritise generating human-seeming text over accuracy.
weisswurstseeadler@reddit
I work in sales for SaaS - somehow AI has gotten a lot worse over the last weeks.
I mostly use it to summarize stuff, go over websites and whatnot.
Even for summaries regarding our own products, with the right sources provided, the output has been flawed nearly 100% of the time.
Damn they even messed up simple calculations when I gave them the numbers.
Dunno what's happening lol.
Golden_Apple_23@reddit
LLMs are not good at math. They're word prediction machines. Calculators are great with numbers and their words are limited to things like BOOBIES
WaytoomanyUIDs@reddit
IIRC OpenAI looked at passing through anything maths-related to Mathematica or Wolfram Alpha and decided they didn't want to pay a licence fee to Wolfram.
__wildwing__@reddit
I was helping my daughter with algebra last year and used ChatGPT for tutorials. I'm competent enough in math that I could figure out when the answer it gave was wrong and tell it to recalculate. Both of us were still "showing the work" and actually doing the steps ourselves, but being able to have the process broken down was a huge help.
VincibleAndy@reddit
Wolfram Alpha is great for doing this with math and it's been good at that for like 15 years now.
MrDoontoo@reddit
Wolfram Alpha definitely saved me multiple times in college, seeing any question I had immediately broken down into explainable parts was super useful
RogueThneed@reddit
If it's that old, it's probably not chatGPT?
MrDoontoo@reddit
Wolfram Alpha is not ChatGPT.
RogueThneed@reddit
Right, that's my point. Software can do lots of stuff well! but chatgpt is not one of those things. But too many people don't recognize the difference.
__wildwing__@reddit
Is that free? I'll have to check it out.
I did AP Calc/Phys in high school but pretty much all of that has slipped away.
InspectorTiny1952@reddit
I don't know where this quote about AI is from, but it's sure stuck in my mind:
"After spending billions of dollars, Microsoft has finally invented a calculator that's wrong some of the time."
KnottaBiggins@reddit
I find I get a kick out of proving AIs wrong all the time. They usually come back with "you are correct, I was going by a site that has since been discredited. But I am only an advanced search engine, and not truly intelligent. I can only scour the web and tell you what I find."
AIs that we can interact with, such as ChatGPT, are nothing but extremely well-programmed chatbots.
LogicBalm@reddit
I've begun telling everyone that the first question they should ask an AI is whether or not they should trust an AI to answer their question, and what kinds of situations are never appropriate for AI to be the trusted authority.
AI is actually pretty good at getting that question correct, and it helps a ton for people like this to hear directly from the AI that they should never trust it for anything where there is no room for failure.
From ChatGPT: "Bottom line: Use AI for information. Use professionals for decisions."
ask_compu@reddit
but if the AI says to never trust it then i shouldn't trust the statement that it says to never trust it! so therefore i must trust it completely with matters of life and death forever!! checkmate! /s
LogicBalm@reddit
Then you just feed this paradox back into ChatGPT to destroy it forever!
ask_compu@reddit
it'd probably just go "absolutely right, brilliant!"
fresh-dork@reddit
never trust information from the AI - read the sources. it likes to lie about the information too
pennyraingoose@reddit
I just read a post from a librarian frustrated with people coming in looking for books AI has made up, like the library is secretly hiding them somewhere.
Usually I was the kid that skimmed the directions, skipped one part, and got the completely wrong answer because of it. Now when I hear stories like all these here, I feel like I'm one of the few that heard and understood the part about ChatGPT being a language model when it was first released. It can talk to you just like another person, which is cool, but it doesn't actually know facts. It strings together text that makes sense based on what it's trained on.
I know of a company that's building their own calendar / event scheduling system that uses AI to customize the frequency of meetings. If you want a meeting on the 10th of each month, you tell it that and it'll create a recurring series. But it doesn't know what business days or holidays are, and there's no functionality to adjust one-off events in a series. So if the 10th happens to fall on a weekend or holiday, you're just fucked. But it's totally better than Google's calendar system.........
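For what it's worth, the business-day handling that system is missing is a few lines of stdlib code. A minimal sketch: the holiday set here is a made-up example, and real systems also need a roll-backward option and region-specific calendars.

```python
from datetime import date, timedelta

# Hypothetical holiday set for illustration; a real system would load these.
HOLIDAYS = {date(2025, 1, 1), date(2025, 12, 25)}

def next_business_day(d: date) -> date:
    """Roll forward past weekends (Sat=5, Sun=6) and known holidays."""
    while d.weekday() >= 5 or d in HOLIDAYS:
        d += timedelta(days=1)
    return d

def monthly_on_the_10th(year: int) -> list[date]:
    """A 'meeting on the 10th of each month' series that respects business days."""
    return [next_business_day(date(year, m, 10)) for m in range(1, 13)]

# May 10, 2025 is a Saturday, so that month's meeting slides to Monday the 12th.
print(next_business_day(date(2025, 5, 10)))
```

Nothing AI-shaped about it; it's exactly the kind of deterministic rule a scheduling system should just implement.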
EquipLordBritish@reddit
Yeah, use AI to try to find the information. You have to verify it found what you wanted by reading the sources.
shaggy24200@reddit
The problem with artificial intelligence is that it has intelligence in the name despite it not being anywhere close to that.
Fibbs@reddit
haha thank god all those big corps are rushing to replace us all with AI.
peccator2000@reddit
I had to work with a completely broken CSV format and tried to use the normally great CsvHelper NuGet library. It didn't work, so I asked ChatGPT for help, and it kept me busy all day doing nothing but adding ridiculously complex custom classes. I blew the deadline. Then I spent about half an hour writing a parser myself. That is often the case with AI: it drives itself into a corner and can't get out. Even on simple math problems, I have to give it a hint before it continues and solves it.
PowerShell had no problem reading those shit files with Import-Csv.
Tyko_3@reddit
Idiots have evolved
RatherGoodDog@reddit
Would you prefer Advanced Idiots or Actual Indians? They're both differently bad when applied to tech.
Tyko_3@reddit
Man, you're putting me in a tough spot over here
cwthree@reddit
Just when you think you've idiot-proofed something, the world comes up with a better idiot.
castlerobber@reddit
We have a pilot project going with Copilot. I needed to see when a certain program object was last used on my IBM midrange, so I asked Copilot what the command was. Instead of just retrieving DSPOBJD from IBM documentation, Copilot hallucinated a command called DSPOBJU, complete with parameters, that has never existed on the platform. It didn't even mention that I could also get the information via SQL, from a system service IBM has added. Search engines and other AIs gave me correct, complete answers.
Meatslinger@reddit
Growing up, I think everyone knew that one friend who was absolutely dead certain that you could get Mew in Pokémon Blue/Red by moving a truck that didn't exist, simply because they'd seen it said elsewhere and took it as gospel.
ChatGPT is that kid.
The-Choo-Choo-Shoe@reddit
My brother sacrificed his Pokémon Blue save file to test this.
tvoretz@reddit
In That One Friend's defense, the truck is real. Mew's not under it, but there really is a truck just off screen in Vermillion City's port.
Meatslinger@reddit
Every good urban legend is rooted in at least some truth, I suppose. I'll admit I had the opposite thing happen here: I spent so long accepting that it was wholly debunked that I never thought to look it up again.
Quick-Whale6563@reddit
You do need to go out of your way to get to the truck (iirc you need to complete the events of SS Anne and then lose to a trainer without leaving the boat, so it never leaves port; then come back when you have Surf), and it's quite literally just a piece of decoration that doesn't do anything. I think in the remakes they put a Lava Cookie underneath it as a reference.
giftedearth@reddit
Also, there is a convoluted way to get a Mew in RBY without an event or external device. It probably wasn't linked to the playground rumours because it wasn't found until the mid-2000s, but it could have been if some kid had gotten stupidly lucky.
Quick-Whale6563@reddit
It was entirely unrelated to the truck, though.
DoctorPlatinum@reddit
"ChatGPT said..."
"Grok said..."
Well shit, Dr. Dre said... NOTHING, you idiots! Dr. Dre's dead! He's locked in my basement!
tubegeek@reddit
At least you didn't forget about him.
DoctorPlatinum@reddit
Nowadays LLMs wanna talk like they got somethin to say
but nothing comes out with they boops and blips just a bunch of gibberish
AI agents act like they forgot about Dre
tubegeek@reddit
AI is the ultimate "sucker MC."
The001Keymaster@reddit
Sold cars when internet pricing was first becoming a thing. On the website people used, you'd pick a manufacturer and then configure a car to get the price. Except every option the manufacturer had on every car they made was available to add to any car.
People would come in with a sheet and a price. A Corolla with no anti-lock brakes, red leather seats, a spoiler that could only be added to a Supra, etc. Then they'd get mad when I'd say that car doesn't exist.
But but my printout right here says it exists!!!
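The missing piece in that configurator was a per-model allow-list check, something like this sketch (the catalogue, model names, and option names are invented for illustration):

```python
# Hypothetical catalogue; a real one would come from the manufacturer's data.
VALID_OPTIONS = {
    "Corolla": {"anti-lock brakes", "cloth seats"},
    "Supra": {"anti-lock brakes", "red leather seats", "spoiler"},
}

def validate_build(model: str, options: set[str]) -> None:
    """Reject option combinations the factory never offered."""
    invalid = options - VALID_OPTIONS.get(model, set())
    if invalid:
        raise ValueError(f"{model} cannot be built with: {sorted(invalid)}")

validate_build("Supra", {"spoiler"})      # a real combination passes silently
# validate_build("Corolla", {"spoiler"})  # would raise ValueError
```

Same failure mode as the ChatGPT stories in this thread: a system happily emitting plausible-looking configurations with no validation against what actually exists.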
MoneyTreeFiddy@reddit
If only the website had a compatibility checker for aerodynamic parts that maybe couldn't be installed...
A Spoiler Alert, perhaps
tubegeek@reddit
Ouch. EXCELLENT use of spoiler text, gotta hand it to you. Angry upvote.
ThePugnax@reddit
Reminds me of a guy I talked to at work. He was on about using ChatGPT to make an HSE system for his business, as it was being audited. I tried pointing out that it's better to make an HSE system tailored to your business yourself than to have ChatGPT make something that looks decent to you but half-assed to anyone auditing the business. He did not agree.
tubegeek@reddit
Guy next to you in the bar says what?
R3D3-1@reddit
I'm glad that in programming, hallucinated replies become obvious very quickly most of the time.
StarblushBloom@reddit
Bruh, this just proves ppl gotta chill with expecting AI to do magic. Sometimes it spits stuff that sounds legit but is total BS lol. Gotta double check and not trust everything word for word. AI's dope but not a saint.
SaucyCyberSuccubus@reddit
Bruh, honestly this just shows how messy things get when ppl rely on AI too much without double-checking. ChatGPT can help but it ain't magic; you still gotta know your own stuff or you're just chasing ghosts. AI isn't gonna replace experience anytime soon, LOL.
EdanE33@reddit
My colleague uses ChatGPT to get the wrong answer on everything before he inevitably just asks me. While I'd rather not be his Wikipedia, I don't know why he bothers asking AI first.
Miffy92@reddit
"ChatGPT said X", "ChatGPT said Y"
ChatGPT told me you were the reason your parents divorced.
RogueWedge@reddit
Librarian - AI makes up references in resource lists. You want us to find items that don't exist... but look totally legit
Naf623@reddit
Gullible Predictive Text strikes again.
statisticus@reddit
So that's what it stands for. Thanks.
RogueThneed@reddit
I like that! Thank you!
KetchupKisses@reddit
Bruh, honestly gotta say ppl gotta chill a bit with how they treat AI convos. It ain't magic, it's just a tool to help brainstorm or get ideas. Hallucinations happen, that's on the user to verify stuff, not the AI alone. If a customer's trusting ChatGPT like it's gospel, they gotta be schooled on what it can actually do. No reason to blame the tech for user mistakes, ya know? AI's dope but not perfect, gotta keep it real with expectations.
dr_stevious@reddit
I had a "heated discussion" with a student about a rather esoteric database systems topic. The student used ChatGPT to support their arguments. However, ChatGPT was referencing my very own publications but making false claims and attributions about my work. It seemed to be conflating my work with that of others from adjacent subject domains.
I invited the student to read the source material for themselves, but at the end of the day they chose to go with ChatGPT's interpretation of reality instead.
CA-CH@reddit
I have seen people follow AI blindly and brick their Prod environment.
CupcakePanties33@reddit
Bruh, this whole "but ChatGPT said" flex is honestly wild. ChatGPT's dope but it's not gospel, people gotta do their homework too. If you just blindly follow AI hallucinations, you're just asking for trouble. Real talk, always double-check and question the source, even if it's a bot. No excuses.
Strait409@reddit
This is only the beginning.
Imagine support techs doing the same thing...
dontovar@reddit
As a support tech at a large hospital....
Ain't nobody got time for dat
To be prompting AI and "looking" for solutions that way
xd1936@reddit
Hey, I made a website exactly for this! Send this to them (maybe anonymously)
https://stopcitingai.com
MorpH2k@reddit
They should really lock down all the AI behind a moderately priced paywall or, even better, some kind of tech literacy test so that "regular people" have to put in some work to get access to it. Like force everyone to complete a course on how to use it properly and what it's actually good for.
Ruevein@reddit
Literally had a user tell me a software we use had a feature he wanted. Looked at his Google AI result and the article it cited was for a different product from the same company.
ThunderDwn@reddit
Even God can't put this genie back in the bottle.
AI is a scourge. Calling them "intelligence" is really, really stretching the definition. They're basically just a more focused google search wrapped in nicer words.
BandicootPresent9596@reddit
This is how executives and directors have thought all their life, I will just get someone or something else to do it.Ā
caribou16@reddit
People don't seem to understand that LLMs are basically just a step above typing a few keywords into Google and hitting that "I'm feeling lucky" button.
Anyone who's ever "discussed" a topic that they themselves are knowledgeable about should be getting all sorts of red flags from the LLM's responses. Maybe a slight inaccuracy here, a common misconception there, sometimes an outright fabrication.
So if you know LLMs aren't accurate about things you know well, how could you possibly think it's ok to trust it about something you DON'T know well?
Kneady-Girl@reddit
Bruh, this is wild but kinda expected tbh. People trust what AI spits out without checking if it's legit. Hallucinated features? Classic AI flex gone wrong. Support gotta set that boundary clear or else we'll get more of this kinda chaos. AI's dope, but gotta use 'em wisely or it bites back hard.
LilyDRunes@reddit
... I have cursed at ChatGPT so many times because I got so pissed that in one single sentence I said f*ck more times than I have ever said it out loud in my 20 years of living.
It thought that Pokemon Legends Z-A wasn't out until I told it to look it up.
I have also told it to look things up and gotten 5 different answers each time.
I like GPT because it can show me ideas I never would have had, and that has massively helped.
But
It is a tool, not an "omg lemme believe everything it says" type of thing.
Again, a tool that is basically on drugs...
Just-A-Regular-Fox@reddit
AI is like a finger pointing at the moon. Don't concentrate on the finger or you miss all the hallucinations. Or whatever Bruce Lee said.
pkinetics@reddit
One of the first questions I ask about any AI response on something technical: is this an abstraction or confirmed?
More often than not it will admit it's an abstraction and then check whether it can find an actual step-by-step.
RockerXt@reddit
I'm in electronic engineering and we take multiple complex math courses. I find ChatGPT is pretty good at explaining how to solve math problems without giving you the answer until you ask it to. Super useful tool for when I get stuck studying, though it does require integrity if you want to actually learn.
espositorpedo@reddit
The best description I have heard or seen regarding AI is that it is meant for collaboration, quite like what you are doing. Use it to support what you are doing, not replace knowledge or critical thinking.
Freifur@reddit
Can we please stop with the pro-AI propaganda that calls it "hallucinating" and instead call it what it actually is? e.g. LYING.
If I came to you irl, pretended to be a subject matter expert, gave you a load of advice/guidance, and spouted off a load of sources that turned out to be entirely fake, you wouldn't think I was hallucinating; you'd think I was a liar spouting a load of bullshit.
Sloper713@reddit
I get a lot of emails these days with misquoted or non-existent law, or sentences that don't actually make any sense because this stuff is very nuanced and complicated, and I've taken to just calling them out immediately: "Can you please clarify these sentences? What do they mean?" "Can you please clarify your source for these? These are patently incorrect. Here's the actual law:" "I see you may have used publicly available resources for this, however next time please double check by searching and reading the actual laws here:" etc.
Sujynx@reddit
A new sales woman came round with her laptop and said she'd lost a file she'd been working on all week. She didn't know its name or location, but she'd collected some data, asked ChatGPT to put it in "an excel", then continued to work on it.
To prove it had once existed, she showed me a notification that began "Are you sure you want to permanently delete this file?"
salttotart@reddit
Once again, AI can be great as a tool, but just as you wouldn't trust a machine to perform open heart surgery, you shouldn't trust it to do anything specialized. Always check it.
Bramble_Ramblings@reddit
I had VPs of finance complaining that they couldn't access shatGPT anymore once their own security team finally blocked it, so now the VPs and other upper management for this client are all up in arms because they can't use it for their jobs.
Another guy ran into a problem in Azure management because he used it to solve a problem, but whatever he blindly did messed up something else worse and we had to go back and undo everything.
Makes me absolutely terrified to know what kind of info they've got on companies simply because someone was too lazy to just Do The Work Themselves/Educate Themselves or didn't bother to ask anyone to show them how and explain it
MOS95B@reddit
-people who ask AI for specific technical advice
FantasmaNaranja@reddit
I fear the CEOs buying OpenAI and Midjourney company licenses and forcing their employees to use it, to then justify firing half of their workforce, more than I fear people who ask AI for advice.
thereddaikon@reddit
OpenAI being eaten by its own product is a good result for us. It means chatgpt will become less than useless in short order and we might end this madness.
mrhashbrown@reddit
The most tantalizing business case for AI is that a company can reduce FTEs and still get by with decent results because AI "automates everything" and "this frees up our current staff to spend their time elsewhere". At least that's the product pitch the C-Suites of the business world are hearing today.
As it turns out there's a lot of hidden costs involved with adopting corporate AI - training your employees how to use it properly vs. when not to use it, making sure your IT/Sec staff know how to implement correctly, making sure that same team knows how to configure and continuously stay on top of safeguards to minimize the risk to confidential company data, making sure the AI implementation is not violating the company's regulatory compliance, etc.
The latter one is probably what will cause a lot more headaches for the C-Suite than anything else because it potentially affects their B2B transactions. And funny enough, the end result could be the company forced to expand the IT/Sec/Compliance team to help better manage the AI tools as these issues pop up.
Wouldn't even be surprised to see some companies appoint a high-level VP or C-level manager for AI implementation, solely focused on making the implementation an ROI success. Not sure how well that's going to pan out if they're choosing to pay a guy $200-300k annually for that job on top of the AI tools' own costs.
Shadowrunner156@reddit
Well, if you look at the large scope, overall people are trying to make AI replace people rather than be a tool, but we've also all seen how it constantly fails when given those responsibilities.
RogueThneed@reddit
It's that Cory Doctorow quote though. It perfectly sums up business. I mean, it was true enough when sales people convinced execs that open plan offices were actually a good business idea (as opposed to just a money-savings idea), and that was just about money and the physical world.
"AI cannot do your job, but an AI salesman can 100 percent convince your boss to fire you and replace you with an AI that can't do your job"
TechnoEmpress@reddit
God has helped us by giving us the tools to help ourselves. Now grab that pack of TNT and drive to your closest AI DC. You'll know when you've arrived, the locals have 19th century diseases because of the polluted water and air.
l1nux44@reddit
I feel like this is just showing us the dangers of surrounding ourselves with spineless yes men. -_-
matthewami@reddit
So every exec with an OpenAI pro account?
FlameEyedJabberwock@reddit
That, but I think the previous comment was a not-so-subtle political statement.
l1nux44@reddit
Bingo
trunksshinohara@reddit
Both of my jobs told me within the last week that I need to get on board with using AI for everything I do, despite me pointing out that all the information it gives is possibly wrong.
myychair@reddit
Just ask it about something you already know if you want to see how often it's wrong.
Strongit@reddit
And yet my job mandates that we use copilot and our home built AI bits at least 20 times a month. We're all doomed.
LipsServicesOnly@reddit
lol honestly this is wild but not surprising. AI can def make stuff up when it doesn't have the facts, so gotta double check esp for stuff tied to real features or settings. ppl relying on it blindly are gonna run into these hallucination traps way too often if they don't keep their guard up. Mad respect for double checking tho, saved tons of headache. AI helps but ain't perfect, yet.
raspirate@reddit
Had a similar one just yesterday. A user was using copilot to do something with a spreadsheet, but something was bugged in the copilot app and none of the links were actually clickable, so they asked copilot where the links were and it hallucinated some semi-plausible explanation about problems with the user's environment when it was literally just a bug with copilot itself... So they put in a support ticket.
Buddy, I need you to understand that trying to use AI to do your job and then getting broken output and asking me to fix it is just one step removed from asking me how to do your job...
Automatic_Yard_425@reddit
I've had issues with ChatGPT hallucinating settings in its own menus before.
It spent months directing me to the bottom left corner, saying that's where the settings menu was, and telling me to click on a menu and a sub-menu within it that don't exist.
It did make me laugh when they slightly updated the UI recently and put the menu button in the bottom left corner, when it used to be in the top right.
Our ERP team has to directly show people that the menu they are referencing in their tickets doesn't exist before they will accept that they could possibly be wrong.
SassyDesire@reddit
Bruh, this whole story just screams how ppl blindly trust AI without thinking lol. Like yeah, ChatGPT's dope but it's not gospel, sometimes it just makes stuff up or misleads. Always double-check, especially for real important stuff. This ain't some magic switch that solves everything on its own. Gotta keep it real.
RogueThneed@reddit
"Sometimes" it makes stuff up? It's a fucking word prediction device. It doesn't know anything. It doesn't know anything. It can't know anything.
And those AI overviews of our search results are often wrong, by the simple expedient of not including the weird "not" in a sentence.
Moquai82@reddit
Philosophical zombies.
It all boils down to sapience/self conscious vs NPC/philosophical zombies/troglodytes.
Honest_Relation4095@reddit
I mean....the customer could pay for it.
prettyyboiii@reddit (OP)
That's not how we operate.
MOS95B@reddit
Pay for developing a feature the product was never intended for? Yeah, not many customers are going to fork over that kind of money
UnjustlyBannd@reddit
I work for an MSP and one of the helpdesk guys (I'm field/Engineering and sometime help cover TAC) is constantly using Gemini for answers. Then he comes over asking why the fix isn't working. We've told him since day 1 to NOT use or remotely trust it.
MarzipanGamer@reddit
I have some hope for the future. My son is in middle school, and rather than saying "no AI", the teachers are adding examples of appropriate vs inappropriate use onto the assignments. That seems a better approach than a flat-out ban.
AshleyJSheridan@reddit
Seems like the teachers are finally learning.
Back in my day it was Wikipedia. Every teacher told us not to use it, because having a source of information that could be collectively changed by many people was not trustworthy (as opposed to the textbooks they preferred we rely on, which had to be updated every year to correct mistakes).
It wasn't until university that professors understood it was fine, as long as you understand the difference between primary, secondary, and tertiary sources.
The damage was done by then though. To this day, people across the world still argue that Wikipedia is not a good source of information, because of what they were (incorrectly [ironically]) taught.
Blackby4@reddit
Okay now you're just making me feel old. Wikipedia barely existed when I was in school, and teachers didn't even know what it was to be refusing sourced & cited info from them.
HasFiveVowels@reddit
Yes! I can't seem to get my kids to use it. I'm over here like "you need to learn how to best use it because it will probably be a big part of your life". I think I've effectively banned them from using it by suggesting that they should use it.
Zeewulfeh@reddit
@grok is this true
CertainlyEnough@reddit
AI helped lawyers with citations of imaginary court cases. The judges were more than irritated.
TK0927@reddit
shudder