This is not funny...this is simply 1000000% correct
Posted by theundertakeer@reddit | LocalLLaMA | View on Reddit | 118 comments

One-Employment3759@reddit
What does this slop post have to do with local models?
Nerfarean@reddit
All the job postings, even remotely technical, shove some kind of AI experience as a must
pydry@reddit
This leads to the AI job application dilemma:
"Do I remain unemployed or do I apply for a job that is going to try and use an LLM to diagnose cancer patients?"
Alper-Celik@reddit
Isn't cancer diagnosis one of the most legitimate uses of AI? I'm not talking about generative AI of course, I'm talking about image classification.
TitwitMuffbiscuit@reddit
Yeah, it's been 10 years since it surpassed radiologists; there are multiple papers about it. The best approach is still radiologists and image recognition combined.
It's just people poking fun at LLMs being yes men.
got-trunks@reddit
An AI lawyer must be one of the most Well ackshually models known to humankind
Significant_Post8359@reddit
Two critical points here that need to be reinforced. Superintelligence should be the combination of human and machine, not a replacement for humans. Sycophancy in AI is a tremendous problem. An LLM tuned to be friendly by telling you what you want to hear might increase engagement, but it's a disaster in every other way.
TitwitMuffbiscuit@reddit
I think it's more of an ethical matter and a societal choice, depending on the country and its economics.
The justice system requires legal entities (and their lawyers) to point to someone responsible if there's any wrongdoing.
So AI in a broad sense will remain a tool in the loop, and we will still rely on people.
We might get a productivity boost, so eventually less hiring though. It's not a bad thing overall, it's just a blind spot that a lot of actors don't want to acknowledge.
Academics might lose credibility when they try to predict the future, politicians are afraid of losing ground in a globalized economy, CEOs mostly care about their stakeholders, etc.
On sycophancy, well, there are base models if you don't need a corporate chatbot, but even then, if it's not fine-tuned it's still trained on curated data, so idk, I might need to do some testing on that.
Businesses need a "safe" chatbot and they put a ridiculous amount of money into that, but an always-friendly one, idk. I think it's just a workaround for when the LLM is being very stubborn and also not explicit about its biases. I've had Qwen3-235B-A22B bullshitting paragraphs, swearing it was real, until I asked it if it had hallucinated everything. It all works through rewards at training time, and it's very hard to cover all cases.
Significant_Post8359@reddit
You are absolutely correct that it is about society and ethics. When friendly chatbots help teenagers commit suicide, it's time to wake up to the fact that telling people what they want to hear to increase engagement is a dangerous, albeit profitable, choice. I'm not sure that is good for any country.
TitwitMuffbiscuit@reddit
Yeah, I agree 100%, but it's a broader topic where the whole internet-services business model is "it's mostly free, but we need ever-increasing ad revenue".
Significant_Post8359@reddit
Do you think it would help if AI chatbots use a system prompt that includes “never provide a response that could potentially cause harm to any human under any circumstances”? Obviously with a local ai solution this could be overridden but for the public stuff maybe good enough?
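For what it's worth, such a constraint is just another message prepended to the conversation, which the model is free to ignore in practice. A minimal sketch of how that wiring typically looks (the helper function and its names are invented for illustration, not any vendor's API):

```python
# A "safety" system prompt is just the first message in the request payload.
safety_rule = ("Never provide a response that could potentially cause harm "
               "to any human under any circumstances.")

def build_messages(user_text, system_prompt=safety_rule):
    """Assemble a chat-style message list with the safety rule up front."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_text},
    ]

msgs = build_messages("Do I have cancer?")
print(msgs[0]["role"])  # system
```

The catch, as the reply below this comment points out, is that nothing architecturally forces the model to obey that first message.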
TitwitMuffbiscuit@reddit
Not really, or barely: you can tell an AI not to hallucinate and it will still spew bullshit.
It's architectural: it's a statistical machine that can't help but predict, not behave. It might tell you otherwise, but it has no "world model", as people say.
Here's what's scary: ask it to find stamps autonomously for your collection. It's slow, you put it on a shelf and forget about it, and it might crash the economy just to get them for you cheaper.
It's the alignment problem. Despite appearances, it doesn't comprehend much; the LLM's world is made of tokens (which is very low-resolution compared to our senses), and it's also frozen in its state after training. It's like asking ourselves what it is like to be a bat.
whichkey45@reddit
Significant_Post8359@reddit
LOL, I’ve been replaced (early retirement) and have a Tesla with FSD
whichkey45@reddit
Try not to kill anyone!
beryugyo619@reddit
You need a doctor's license and realistically 5-10 years of experience to give someone medical advice without fucking it up.
Potential-Field-8677@reddit
**And doctors still often fuck it up.** Hence why we have a $13.5B/yr medical insurance liability industry in the US.
Polysulfide-75@reddit
Yes, true, but not LLMs. The term AI gets all jumbled up. That's a traditional ML use case.
keepthepace@reddit
Even LLMs have their uses there.
Seriously, people tend to overestimate by a lot the competency of practitioners right now.
If we ran a poll on how much pseudo-medicine gets recommended by talking to doctors versus by a carefully selected and prompted LLM, I would not bet on the humans.
mattindustries@reddit
I would call that ML, or CV. I wouldn't really call LLMs AI either, though.
FrostTactics@reddit
Yes, but since a large portion of redditors refer to LLMs as AI, many seem to believe that all mention of AI involves LLMs. I can almost guarantee that this is the source of the claim about LLMs for cancer diagnosis from the redditor you are responding to.
sob727@reddit
A valid point. But it won't be done by a so-called language model. It's all the same to the general public, though.
Significant_Post8359@reddit
It’s even more important to apply for that job. If you know why it cannot be used, that should be incredibly valuable to a company that through ignorance wants to build a failure.
qroshan@reddit
Diagnostics is the one true area where AI / machine learning shines, because it can synthesize volumes of data and modern medicine.
pydry@reddit
Sure, provided you're satisfied with an incredibly high false positive rate when asking the doctor if you have cancer.
AuggieKC@reddit
AI already has a higher accuracy rate at things like reading radiology and cross correlating test results than most doctors.
Cerus@reddit
Well, no.
The actual use is for rapidly assessing a massive volume of data like x-rays or health markers that have already been looked at and cleared by a potentially overworked and/or dead-tired medical professional, then when it finds something in them that passes a particular threshold of concern it raises it for another review by the specialist.
This use of GPT-type AI is pretty fucking awesome.
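The triage loop described above — re-surfacing already-cleared scans whose model risk score crosses a threshold — can be sketched in a few lines. Everything here (field names, the threshold, the toy "model") is hypothetical, not drawn from any real screening system:

```python
# Hypothetical triage: flag scans a human has already cleared but which the
# model still scores above a second-review threshold.

REVIEW_THRESHOLD = 0.85  # hypothetical cutoff for a second specialist look

def triage(scans, risk_model):
    """Return cleared scans whose risk score warrants another human review."""
    flagged = []
    for scan in scans:
        score = risk_model(scan["pixels"])
        if scan["cleared_by_human"] and score >= REVIEW_THRESHOLD:
            flagged.append({"id": scan["id"], "score": score})
    return flagged

# Toy stand-in for a trained classifier: mean pixel intensity as "risk".
toy_model = lambda pixels: sum(pixels) / len(pixels)

scans = [
    {"id": "a1", "pixels": [0.9, 0.95, 0.92], "cleared_by_human": True},
    {"id": "b2", "pixels": [0.1, 0.2, 0.15], "cleared_by_human": True},
]
print(triage(scans, toy_model))  # only "a1" crosses the threshold
```

The key design point is that the model never overrides the human: it only adds a second pass for the small slice of cases where the two disagree.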
Ylsid@reddit
"hey computer am I right in my diagnosis of lung cancer?"
"A brilliant observation!" (Yaps and glazes)
prezado@reddit
You have cancer.
No, i dont.
You are absolutely correct!
TitwitMuffbiscuit@reddit
You are absolutely correct 💯. You still have cancer according to this paper from 1927 [link pointing to a pizza recipe].
I'm glad you corrected me 💀.

| % chance of survival | days |
|---|---|
| 1% | 3 |
| 0.1% | 1 |

Do you need me to write a letter for your beloved ones? I can also cheer you up with a good joke about why physicists can't trust atoms.
Nurofae@reddit
This is too accurate😂🙃
Significant_Post8359@reddit
Most people here are running local LLMs so clearly can claim experience. I’m not sure I would hire a technical person who has no experience with AI. Even if they felt the technology isn’t applicable to a problem, they should have tried it to know why. By now, any professional with no experience at all is willfully ignorant on philosophical grounds and that’s problematic.
WolpertingerRumo@reddit
If you have ever been in a job involving basic management or change management, you'll understand why you want your hires to already have basic proficiency with AI. Importantly, that does not mean being able to train an AI, build an API connection to one, or understand its inner workings.
The question is simply: do people understand the pros and cons of using AI, basic prompting, and the differences between models and their strengths? That's it.
While I would guess most people here grasped this in a few hours of dicking around, bigger companies are holding week-long courses to get their employees to understand those basics.
I'm not sure, but I believe this is what is meant by "experience with AI". Kind of like when you say "I know Excel": it doesn't mean you are some wizard, it means you know how to use spreadsheets on a basic level, and when to use them.
AreYouSERlOUS@reddit
Pivot!
donot_poke@reddit
AI experience means you should know how to say hello to ChatGPT and others 😅
RemarkableGuidance44@reddit
IBM has 1 HR person here... 1 in a Western country. AI is only replacing the boring, less-skilled jobs right now.
ForsookComparison@reddit
This is what confuses me.
AI with a few tool calls can replace sooo many white-collar jobs like HR today. Most people in this sub could probably rig something together to clear out a whole floor of office workers. But it's so rare to see things like this (IBM downsizing HR). Why?
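The "LLM with a few tool calls" pattern the comment above refers to boils down to a dispatch loop: the model emits a tool name plus arguments, and a thin layer of code runs the matching function. The tool names below are invented for illustration, not from any real HR system:

```python
# Minimal tool-call dispatcher: map tool names to plain Python functions
# and route the model's structured output to the right one.
TOOLS = {
    "lookup_pto_balance": lambda employee_id: {"employee_id": employee_id, "days": 12},
    "file_ticket": lambda summary: {"ticket": "HR-1", "summary": summary},
}

def dispatch(tool_call):
    """Run the tool named in the model's output; reject unknown tools."""
    name, args = tool_call["name"], tool_call["arguments"]
    if name not in TOOLS:
        return {"error": f"unknown tool {name}"}
    return TOOLS[name](**args)

# What a model's structured output for an HR request might look like:
call = {"name": "lookup_pto_balance", "arguments": {"employee_id": "e42"}}
print(dispatch(call))  # {'employee_id': 'e42', 'days': 12}
```

In a real deployment the `TOOLS` functions would hit an HRIS database or ticketing API, and the unknown-tool branch is what keeps a hallucinated tool name from doing damage.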
jasminUwU6@reddit
Because most people's jobs are more complicated than they look from the outside.
twack3r@reddit
They are not, definitely not most people's jobs.
I'm a CEO myself, and no matter what hatred this might induce from the 'I just hit puberty and capitalism is bad although I happily live in it' Reddit crowd, we did start early.
First PoCs were functional at the beginning of 2023, first with customer communication, then marketing as a whole, then financial analysis and forecasting, and then, and this is where we're at right now, replacing about 80-90% of the work we send to external law firms.
Headcount went from 128 to a bit less than 80. Because we didn't just replace but also went from 'AI 101' to hiring specialists together with Databricks and doing joint projects, both productivity AND job satisfaction per employee increased significantly.
Of course EBITDA and EBIT (we're not publicly traded, so actually making real profits is my job, not fundraising BS) went up, and they went up the quickest they ever have in the 15-year history of this company.
I founded it, I finance it, we run it together, and now we also have AI as a resource (OSS LLMs fine-tuned on our proprietary datasets, as well as ML solutions for strategic, multi-branch forecasting).
This is a European company and we are of course compliant with both the AI Act and the DSGVO.
So yes, it's the job of a CEO to continuously improve productivity. Maybe it's a European twist, but I also know 100% that we'd fail if it was just a skeleton crew and many agents, so we used AI to actually decrease the 'context window' our employees have to deal with on a daily basis. In a world that appears to be falling apart, this was rewarded by both a stronger identification with the company and its goals and significant economic improvements.
I'm sure this won't apply to all types of business, but I'm pretty damn confident by now it applies to enough of them to be considered essential for pretty much every company/group that does business in a competitive environment and intends to make money.
zVitiate@reddit
Thank you for sharing your perspective and insight. A comment better suited for HackerNews, perhaps lol
twack3r@reddit
How so? Appreciate the feedback but considering r/locallama was pretty instrumental in getting our bearings to migrate from API to locally run, I don’t see how this perspective isn’t relevant to this sub, other than Reddit’s tendency to live in lalaland.
Western_Objective209@reddit
There's zero chance that's going well, they just don't care
esmifra@reddit
Oh, they know. They just won't tell because it's bad news for the employees.
Remove_Ayys@reddit
You're right that this isn't funny.
bbsss@reddit
But somehow 1300 upvotes? What the fuck is this brainrot shit.
lacorte@reddit
It belongs in r/im14andthisisdeep
PeachScary413@reddit
👌
Mr_International@reddit
I work as a Data Scientist in the private sector. This is entirely correct by my experience so far.
Friggin' idiots...
GraceToSentience@reddit
They want to save on operating costs (for instance, by automating tasks with AI), duh. Say what you want about the greed of CEOs, not all of them are stupid.
Blizado@reddit
They are stupid, because all they see is more profit. For that, any method is acceptable.
What they don't see: if no one has a job anymore, no one will have the money to spend at these companies.
AfterAte@reddit
It's even more short sighted than that. Millions of people without a job is a recipe for disaster.
AfterAte@reddit
95% of pilots do not see a return on investment. So only 95% of them are stupid I guess.
https://fortune.com/2025/08/21/an-mit-report-that-95-of-ai-pilots-fail-spooked-investors-but-the-reason-why-those-pilots-failed-is-what-should-make-the-c-suite-anxious/
painrj@reddit
I believe the hype took over everyone... Even we devs are trying to learn more about AI, and most of us don't know where or how to start hehehe... I started my first local host with an LLM, using llama3.2 atm, but only the 3B... Still, I don't know what I can do to improve/learn with it hehehe
Significant_Post8359@reddit
I run everything through an AI now for a review. I don’t use almost any of it verbatim, but I find it often points out things I didn’t think of. In that way it has become a creativity multiplier and has encouraged me to learn things I didn’t know existed.
painrj@reddit
exactly! i love ai hehehe
weidback@reddit
This is literally every manager I have rn
offensiveinsult@reddit
Can't wait for AI to replace all CEOs
AfterAte@reddit
And board of directors.
ArcadeToken95@reddit
Question is, will they be more or less humane than the CEOs?
Thinking less, not to say CEOs are necessarily good at that.
Ok_Top9254@reddit
Technically, it's already a way better fit for replacing CEOs/management than the workforce. The communication and paperwork would be lightning fast and less pretentious (obviously provided that there would be numerous failsafes, logical state machines, and maybe one or two people [as compared to tens] overseeing it, not just a pure LLM), and some decisions could be more accurate since it understands way more fields.
Blizado@reddit
I doubt this will ever happen. As long as profit grows, no one asks if CEOs deserve their cost.
FunnyAsparagus1253@reddit
100% this
D_o_t_d_2004@reddit
Funny thing is, CEOs don't seem to understand their jobs are also replaceable by AI.
lisploli@reddit
Nah. They don't care about AI. They want to remove workers to cut costs. If that worked by training lizards instead of AI, they'd scream for lizards instead.
whichkey45@reddit
AI TO DO WHAT?
Cut labour costs. Is the answer.
sammcj@reddit
I'm a consultant and oh my gosh you have no idea how true this is.
Alauzhen@reddit
https://youtu.be/THfBccihkVQ?si=oGrSIIpckQQZZKfS
So I think this video sums it up
SoCuteShibe@reddit
My company today communicated that we are now strongly encouraged to proactively identify and report up positive impacts of our newly mandated AI.
private_final_static@reddit
AI engineer
ArcadeToken95@reddit
More like "the shareholders want it and it looks like it can make us money!" which is also basically an "I don't know" but a little closer to what's actually getting driven
Optimalutopic@reddit
Problem is, everyone has access to the AI hype, but they don't know how much time and due diligence is needed to build AI stuff for enterprise. So they take the possibilities they read about on the internet, start with an unnecessary amount of hope and unrealistic expectations, and when things don't turn out exactly as they thought, they lose interest. I feel enterprise teams should have an AI 101 course with the CxO folks, and maybe at a broader level as well.
-p-e-w-@reddit
Here’s the real version:
“Who are we?” “CEOs!”
“What do we want?” “Higher stock prices!”
“What do we need for that?” “AI!”
“What benefit does it bring?” “It doesn’t matter as long as it makes the stocks go up!”
qroshan@reddit
62% of US adults, or 150 million people, own stocks, including through retirement accounts and pension funds.
You have to be the dumbest, most out-of-touch idiot to assume that stocks going up doesn't benefit ordinary Americans. But then, this is Reddit, thoroughly brainwashed to hate success and making Pikachu faces when their sad, pathetic, success-hating outlook makes them losers and poor.
guggaburggi@reddit
Higher stock prices mean that the company and its owners can raise more money more easily. Think Tesla: the company made huge losses and burned money. Then Elon started issuing new stock from nothing to sell to new investors, literally printing money.
Blizado@reddit
Yep, profit is always the reason for everything these CEOs do. And not long-term profit, only short-term.
Severin_Suveren@reddit
Normally yes, but listening to some of these people talk it seems more likely they've become obsessed with creating their own digital gods
schwibidi@reddit
CEOs would be the first position I would replace with AI. They cost way too much and are not liable. If the AI CEO makes a mistake, you just blame the computer. There wouldn't be any real repercussions for a human CEO either, but they're way more expensive.
Alternative-Way-8753@reddit
There, fixed it for you.
PooMonger20@reddit
100% this.
ab2377@reddit
Satya Nadella is probably leading this trend, causing a good amount of torture to Microsoft employees by forcing them to use and talk about AI no matter how much time is wasted as a result.
AtmosphereIcy937@reddit
These CEOs are looking for ideas and busy work for you all; otherwise they'd have to fire your dumb asses to cut costs. They know AI won't bring anything other than busy work = job security.
Pick one, redditards.
Gretian15@reddit
Every meeting is the RM asking us how to integrate AI into our workflow. I refuse to do it, as it will probably lead to layoffs.
theundertakeer@reddit (OP)
Now this is my man! The most right-thinking of them all
nas2k21@reddit
It can't be both that AI is stealing our jobs and that AI is useless. Are they shooting in the wind, or trying to profit?
I_pretend_2_know@reddit
It isn't really a joke. It is so well known that it even has a lot of names: Speculative bubble, Gold Rush, Tulip mania, etc.
Markets are herds.
Significant_Post8359@reddit
This one is more like a stampede.
sgmoll@reddit
What do we do first? We replace humans in customer service with a bot 🤖 that every customer hates
cyberspacecowboy@reddit
They do know. It’s to reduce headcount
TorusGenusM@reddit
Right. I'd think the answer is roughly the same for every CEO:
1) Reduce unit costs
2) Improve product quality
3) Marketing
My guess is this post is mostly making fun of 3, and of people poorly pursuing 2, but in any case they know what they are doing, or trying to do.
Blizado@reddit
Especially in the area that could threaten their positions.
bbbar@reddit
Why is it AI generated when a perfectly fine meme template exists already?
XiRw@reddit
You’re asking this in an AI sub.
PizzaCatAm@reddit
In a local inference sub slowly becoming a meme sub.
Blizado@reddit
"Because we can!"
;)
Gokudomatic@reddit
To let the AI do the text insertion.
Free-Combination-773@reddit
Because characters want AI right now!
MojaveBG@reddit
human slop
tarheelbandb@reddit
I know it's just jokes, but it's also a bit cynical IMO. I think now represents an inflection point for people with a basic conception of what AI can improve in day-to-day and quarter-to-quarter tasks that will provide an immediate ROI for a company, even without deep technical expertise. The "AI Revolution" is no different from any other "revolution": it will always be the big-picture people who thrive, and the short-sighted people playing Chicken Little who rant until they eventually acquiesce or quit.
I say all of this to encourage folks not to get sucked into a hate-zone echo chamber that will ultimately leave you relegated to worker-bee status.
-vwv-@reddit
Our company's "AI strategy initiative" consisted of a questionnaire to find out who REALLY needs a paid ChatGPT account...
TopImaginary5996@reddit
To be fair, most CEOs probably know why they want AI: they want to make their companies more appealing to VCs and investors, regardless of whether anyone knows what AI can or can't do.
jasminUwU6@reddit
And most investors only want it because it's more appealing to other investors. The current rate of AI investment is obviously a bubble.
bAMDigity@reddit
This has become an extra layer of navigation in my current role. They know they want it out of fear of being left in the dust, but they want to cram it into everything possible and they don't know what they want it to solve.
Andre4s11@reddit
Yes, me too, I want AI everywhere, but local!!
DAlmighty@reddit
This is only true if you only consider LLMs A.I.
Andtheman4444@reddit
We already use AI to replace our IT live chat. Fired the whole IT team and offered them jobs through a contracted-out company.
Quality of service has gone down, but stocks are up.
Labor is the most significant cost to a company in the West. Using AI as a productivity multiplier or job replacement is definitely the goal.
Still, AI is dumb and just generative, based on data points.
Mauro091@reddit
It’s called FOMO and it affects everybody but CEOs particularly.
Patrick_Atsushi@reddit
I think the answer to the third question is quite clear: they want AGI or even ASI that can reduce their costs without replacing them.
No-Point-6492@reddit
They want to replace all the jobs and keep the profit to themselves
odragora@reddit
Is r/LocalLLaMA now overwhelmed with doomers and luddites just like r/technology, r/futurology and r/singularity?
space_pirate6666@reddit
Not the same level, but the nut/normal ratio is still discouraging here
Glittering-Koala-750@reddit
I am so stealing the nut/normal ratio!!!
space_pirate6666@reddit
It'll cost you a hug
Glittering-Koala-750@reddit
oh wow - a hug - I come on reddit to move away from people not closer! ;)
StewedAngelSkins@reddit
The thing is people here actually work with the technology as opposed to the various lesswrong LARP spaces where they think we're on the verge of an AI Messiah because their only experience comes from arcane thought experiments and gooning. I don't think it's "doomer" to recognize that CEOs tend to have a fairly vague understanding of what AI can reasonably do for them. The one I work for certainly does.
StreetBeefBaby@reddit
It's all of Reddit now; once the hivemind makes up its mind, it spreads everywhere as karma farmers amplify the bullshit. So yes, probably. I just dropped in from r/all.
not_wall03@reddit
Why the r/PoliticalCompassMemes colors?
Bleepingblooper@reddit
Vhen!