Reality check from the Microsoft AI Tour: "Agents" hype, the enterprise disconnect, and peak AI Fatigue
Posted by Relaxation_Time@reddit | sysadmin | 163 comments
Just got back from the Microsoft AI Tour in Zurich. Honestly? Nothing has fundamentally changed since my last visit to these events two years ago. They just scrubbed "LLM" and "GenAI" from all the slides and replaced them with "Agents" sprinkled on top of absolutely everything.
The FOMO is unreal. They declined tons of registrations, but still packed 3,000 people into the venue. Obviously, everyone wants to see where the industry is heading, but the sheer scale of it is overwhelming. You just get bombarded: agents for security, DBs, finance, science, GitHub, productivity agents, agents to replace humans, agents to help humans, agents for alerts... My head is still spinning.
The Good Stuff
I still genuinely enjoy the keynotes. The Americans know how to put on a show — it’s not just a boring slide deck about "increasing ROI"; it’s a full-on theatrical performance with lighting and staging. Judson Althoff knows how to work a room and actually performs his 1.5 hours on stage. Honestly, he’s much more engaging than Satya (Satya can be a bit dry). Though I did walk out halfway through when the boring hands-on demo started.
The hallway track is where the real value is. I had a great chat with some MS experts about an unreleased product (Microsoft Discovery). My company would definitely be interested in an agent layer sitting between our scientists and our databases. But here lies the core issue: Microsoft's vision of scientists effortlessly building and maintaining these agents and the reality of our labs are two completely different universes. More on that later.
A quick comical side note: NVIDIA. They were supposedly the main partner of the event. Built a massive booth. I walked up to chat and got a very clear signal: if we aren't ready to buy clusters and train a $50M-$100M foundational model for chemistry, we are basically of zero interest to them as clients. Fair enough.
"Agents" vs. Enterprise Reality
A little context: 2-3 years ago, I was that guy. I was the one yelling at every meeting about how we urgently needed to implement LLMs and chatbots. I argued for email/calendar connectors, saying that yes, it costs money, but the productivity boost would be insane.
Now, Microsoft is on stage saying the exact same things: they are "observing incredible productivity growth." Meanwhile, on a 40-meter screen in a massive hall, right after a grandiose speech about becoming a "frontier company" and transforming the very nature of work, they demo... sending a calendar invite via Copilot chat. Seriously?
In reality (and our internal metrics plus professional forums back this up), things look very different.
For simple tasks, LLMs are top-tier: translating text, outlining a presentation, or summarizing an existing doc. But the moment you tackle heavy-lifting — the kind that could theoretically save hours a day (massive documentation, complex PM tasks, Jira organization, tricky vendor emails, annual financial reports, contract/invoice analysis) — trusting the LLM becomes practically impossible.
Every output, every report has to be micromanaged and read under a microscope. There are almost always hallucinated numbers, clunky sentences, or entirely missed details. The absolute worst is when the neural network loses context. You write a prompt regarding an email to Mike and Elena, and the logic flips: what was meant for Mike goes to Elena, and vice versa.
It just makes you want to give up. You have to double or triple-check the results. In long documents, it turns into pure hell: you have to fix the logic, scroll up and down, rewrite entire blocks, which then breaks the flow of the rest of the text. The "Editing Tax" for AI BS ends up taking more time and energy than just writing the damn thing from scratch.
And you know what this leads to? On stage, they preach about the shifting labor market and how HR needs retraining programs for those who "don't know how to build agents." This is completely disconnected from reality! I have an entire department of auditors who are terrified to click the wrong button in ServiceNow, let alone cobble together neural networks from scripts.
As a result, people lose their patience, lose confidence in the tools, and just quietly stop using them. Our metrics show a massive spike in month one, followed by a 70-80% drop-off in active usage. I’m talking about internal corporate chatbots with access to company files. This is peak AI Fatigue.
Microsoft confidently claims from the stage that their agents are ready to replace humans. But on the ground, these "agents" are mostly just the same old LLMs wrapped in fancy scripts and system prompts. They inherit the exact same issues with context, hallucinations, and AI fatigue. The only difference is that now, instead of catching this AI BS in a Word document, we are going to have to debug it in broken business processes.
Suspicious-Bug-626@reddit
This is why the agent hype feels so off to a lot of enterprise people.
The demo assumes clean data, clean permissions, clean workflows, clean ownership, and users who know exactly what they want.
Most actual companies have none of that.
So the agent does not remove the mess. It just runs through the mess faster and makes the cleanup harder to trace.
This is also why we think so much about system understanding at Kavia. Not because agents are useless, but because acting inside a messy enterprise system without context is usually where the pain starts.
HotTakes4HotCakes@reddit
Here it is yet again. That not-at-all subtle signaling to every company out there that if you have employees that aren't immediately leaping on to the AI train, you should do something about it. Is "retraining" now the code word for "get on board or get fired"?
And we're supposed to rope HR into this shit now, expect them to turn our employees into being good little copilot users?
It was bad enough when we, the IT department, were expected to be selling this shit to every other department.
It's remarkable how little they seem to appreciate that if the shit was as good as they claim it is, no one should need this level of pushing.
Alaknar@reddit
Well, in their defence, it might be PTSD from Windows Mobile and the original Edge browser. Both were absolutely excellent products, both died because people associated "Microsoft" with crap.
ig88b1@reddit
Honestly, Edge suffers the same issue. If you need to tell me to try Edge when I install Windows, as a pop-up, pinned to my taskbar and Start menu, nag me when I try to download Chrome or make it my default, and reset Edge to my default every other week/update, there's zero chance Edge is a good product.
Alaknar@reddit
Here's the tricky bit: Edge (Chromium-based) is literally just "Chrome but better in every regard".
axonxorz@reddit
Windows Mobile/UWP is crap.
Edge Classic was a great product only in retrospective deference to abject shittiness.
Alaknar@reddit
Spoken like a true Guy Who Never Used Windows Mobile.
Well, OK, let me rephrase - if you wanted the phone to be a portal to all the social media apps and what-not, sure, it sucked because all the third party social media companies were desperately fighting to kill it in the crib.
But if you wanted your phone to be a good phone with a snappy OS, great camera, and incredible interoperability between your PC and the phone, there was none better at the time.
pdp10@reddit
That would be the one written from nearly scratch, in between the one that was a fork of Mosaic, and the one that was a fork of Firebird that was a fork of Netscape that was a rewritten Netscape that was a rewritten Mosaic?
Alaknar@reddit
Well, because it was. File Explorer is a browser window that lists local drive contents, for instance.
Or, at least, it used to be. I'm honestly not sure if they switched File Explorer over to use Edge the way it once used Internet Explorer, but that used to be the case.
WhereDidThatGo@reddit
It was nice that the original Edge browser wasn't just more Chromium but I don't know that I'd call it an excellent product. I'd call it an acceptable to good product that appeared better than it was because it was replacing Internet Explorer.
Alaknar@reddit
It was super fast and super lightweight. That, to me, makes for an excellent browser.
Nightcinder@reddit
but copilot is somehow worse than chatgpt while using chatgpt and both suck compared to claude
cosmin_c@reddit
This is the gist of it. When something is pushed this aggressively, it's because it lacks the chops to get adopted organically. And that means it isn't worth the hassle in the first place.
Having to go through documents just to fix what the LLMs hallucinated sounds like a peak dystopian nightmare for example. And you can see issues even with LLMs that appear to be all right. I've been using one the last few days to summarise and work on stuff that I'm unfamiliar with and it even casually dropped kanji in the middle of an explanation, whereas we absolutely never touched on Japanese or anything similar (worked with English and a bit of Romanian). WTAF.
kjasdiw43@reddit
Same thing with ESG, EVs, green co2 bullshit. The globalists invent a problem, then build an economy around it trying to convince the population it's for greater good, while wasting enormous amount of wealth around it and profiting at the same time.
MGMan-01@reddit
Sir, you are speaking to system administrators.
kjasdiw43@reddit
Sysadmins get fucked by the system as well - hence this thread.
awful_at_internet@reddit
And wee babby t2s
Check out this sweet script i wrote, it only breaks prod in two different ways!
Ron-Swanson-Mustache@reddit
I recently purchased and rolled out Claude Enterprise for about 5% of our company.
I was showing my employees an instance of how I used it to work on an issue (disabling direct send while figuring out why my SMTP relay server was showing "Anon" instead of "Partner").
3 separate times it ran to a dead end and said "Open a ticket with Microsoft saying 'XYZ' as it should be working", only for it to go "Oh yeah" when I suggested something else.
At the end, I had already fixed all the issues but kept pushing it to see where it would go. It kept wanting to create a rule instead of just disabling direct send. It would ignore what I said I wanted to do and repeatedly tell me to do the rule solution. I shared the text history with my employees to show them the problems I was seeing and their response was "It was arguing with you".
But it was great for putting together PowerShell scripts for certain, specific tasks. Though it would occasionally use old on-prem commands in an EO environment.
tesseract4@reddit
This is the part I can't get past: If you need to be an expert in the topic to ensure it's not bullshitting you, why can't you just do the work yourself?
Dotakiin2@reddit
If you know a programming or scripting language, but not so well that you never need to check the documentation, it could help. That still requires that you know it well enough to tell what it will do though.
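A hypothetical example of the kind of plausible-looking generated code that only bites you if you know the language at exactly that documentation-checking level (the function and scores below are invented for illustration, not output from any particular model):

```python
# Hypothetical "looks right at a glance" generated code.

def top_three_buggy(scores):
    # Reads fine, but list.sort() sorts in place and returns None,
    # so slicing the result raises a TypeError at runtime.
    return scores.sort(reverse=True)[:3]

def top_three(scores):
    # sorted() returns a new list, which is what was actually intended.
    return sorted(scores, reverse=True)[:3]

print(top_three([70, 95, 88, 60, 91]))  # [95, 91, 88]
```

The buggy version is exactly the kind of output that passes review from someone who doesn't know that `list.sort()` returns `None`, which is the point being made above.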
cosmicsans@reddit
Yeah this is where it shines for me.
I've been a lead for a few years and my programming skills have atrophied. Our organization flattened, getting rid of all leads, and I'm now expected to program as my main output again.
I am still quite capable of reading the code, and knowing what patterns to apply when and why, etc. Or when I have to write Terraform which I haven't done in forever. It's pretty easy to validate that the output is doing what I want when I give it simple enough instructions.
I would never trust it with "build this whole thing for me" but "I need to refactor this function to use this pattern" works well.
Ron-Swanson-Mustache@reddit
You can, but it does make your work more efficient. I can generate and proofread scripts faster than I can write them.
Increased efficiency isn't the same as being able to replace employees. That's a problem, because the AI vendors are trying to sell the C-suite on the idea that it can.
mangeek@reddit
Doing it collaboratively when you've been told that "this is a 'yes AND' attitude environment" and the other person isn't understanding why the LLM output doesn't make sense is a true nightmare.
Me: "That line about the [whatever] is weird and wrong, that would cost us millions of dollars for no reason."
Person: "Well that's what ChatGPT said, so you have to explain why it would lead us wrong."
Me: "It's wrong because it specifies very non-standard settings that will cost a fortune to implement. It's not actually needed for that category of system."
Person: "But ChatGPT says we need it."
Me: "OK, but I know we don't."
Person: "I'm asking Gemini now..."
Me: "Hang on, can we work through 'what we are asking'?"
Person: "I'm already done, Gemini says we should do it."
Me: "But what did you ask Gemini? There's context here that's important. I want to make sure it's included."
Person: "You are frustrating me. Two AIs say we need this. Everyone will just know if it's too weird or expensive that it just doesn't apply to them."
Me: "That's not how standards work, we actually need to write it... so the words... are taken literally."
Person: "Leave it in, we will see if it gets through when we do a final pass in Claude."
cosmin_c@reddit
And after they sack the last competent person, employ yes-people, and implement stuff that a chatbot hallucinated, they still will not understand why it's all going to shit.
F0rkbombz@reddit
“It's remarkable how little they seem to appreciate that **if the shit was as good as they claim it is, no one should need this level of pushing.**”
exactly.
InnovativeBureaucrat@reddit
For all those complaining about Teams, when was the last time you used Sharepoint?
In an organization like mine where Sharepoint is highly controlled and therefore not maintained, Teams was a breath of fresh air.
MandelbrotFace@reddit
2 years ago I was at a service provider conference in Manchester where they had Microsoft speaking. It shocked me how aggressively Microsoft were pushing Copilot, telling companies to ignore the voices/concerns of employees who question it as they "will be left behind". It's like, dude, you're here to sell a product, don't tell me how to run my business or treat my employees!
BreathDeeply101@reddit
Your employees are hurting our shareholders and we need you to take action on that, mmm'kay?
ProfessionalITShark@reddit
I also feel like they don't understand that HR is typically the least technically adept department across many organizations.
At so many companies I have worked at and with, the emails from HR or HR vendors were written slightly worse than phishing emails, and the biggest pushback against phishing tests came from HR, because they kept failing them.
I once had an HR lady tell me she couldn't wait for the trend of computers to go away so they could go back to paper and pencil. This was in 2019. She was in her late 20s.
PapaDuckD@reddit
This is the same as what happened with Teams vs. Slack/Zoom/Cisco.
The first sale - even of a product that isn't all the way there yet - is going to carry organizational inertia that is hard for competitors to overcome. But you only enjoy that benefit if you can get the product in to a sufficient degree in the first place.
It is easier to sell into a place and be the first provider into a new technology space than it is to sell into someone who's already embedded with a competitive product - even if they're unhappy with that product.
As a partner that MS pays to help make this happen, I can tell you that's exactly what's going on.
_--_---__--_--_-_-_-@reddit
They know it's not, but everyone is too afraid to be the one admitting the Emperor isn't wearing any clothes due to the stupid amounts of investment depending on the bubble not popping. So you get more and more naked courtiers pretending everything's cool, and that everyone's eyes are the problem
wrosecrans@reddit
Obviously they do realize it. They are just liars. No company would spend millions upon millions trying to force adoption if they genuinely believed they had a product that sold itself.
A few years back, Tesla did basically zero advertising. The Tesla Roadster was a disruptive product, and they had no problem selling their full production capacity. So any money spent on a big ad campaign in those days would have been wasted, and anybody pushing such a strategy would have been promptly fired. Microsoft and all of their peers aren't seeing the sort of adoption they keep insisting is happening or is at least inevitable. If they were, no manager in the company would be permitted to spend so much money trying to drive it. And yet they push and they push: in all their most valuable user interaction surfaces, in the millions they spend on events like the one OP is writing about, on PR, on consultant white-paper theater.
If you look at old advertising from the 80's when there was a push to drive microcomputer adoption, or from the 90's in the early days of the web, it was quite different. The tone of pushing AI is much more like the more recent hype waves for things like Crypto and NFTs, where a drug addict is trying to fight you in an alley about his vision of the future and demanding that you believe him or he'll do everything in his power to ruin you.
xRyozuo@reddit
As a marketer, I think the issue is “AI” is coming out feet first because OpenAI basically forced everyone else to come out with their half baked cookies. It’s a solution looking for a problem and the industry right now is saying let the users “find problems we can monetise while we subsidise discovery with tokens”.
From the marketing, it seems they’re focusing on business leaders who look at labor expenses and want to reduce that number.
AlexisFR@reddit
Teams is still the best work chat tool in the market though.
Clovis69@reddit
Only if you can't use Slack or Discord or notes folded into paper airplanes and tossed from desk to desk
AlexisFR@reddit
Discord? At work?
HeKis4@reddit
Give them a few months for them to start talking about an "AR department", as in "Agentic Resources", on the same page as HR.
awful_at_internet@reddit
Fuck you, dont you put that evil on me Ricky Bobby
kremlingrasso@reddit
Teams can't get the fucking status color right half of the time to this day. And I should blindly trust their AI's output.
Relaxation_Time@reddit (OP)
Exactly. I feel like this kind of massive paradigm shift might only be happening internally at Microsoft—if at all. And even then, I highly doubt every single employee there is actually running a whole "zoo" of personal agents on a daily basis.
But they seem absolutely convinced that the world has already moved on and everyone else just needs to catch up. I guess that's exactly why they're pushing these narratives so aggressively from the stage.
I wonder if there’s anyone from MS lurking here who can share what this actually feels like from the inside?
heapsp@reddit
Microsoft gave a key demo at one point where they rolled out agents for HR to access the HRIS and give reviews to employees. They had to implement a failsafe in the demo because the AI agent started accessing information it wasn't supposed to during certain employee reviews and totally botching them. They used it as a demo for their killswitch on the AI agent portal instead of an actual demo for using the agentic AI successfully.
Ferretau@reddit
And right there is the concern - who needs an insider/outsider releasing the information when your AI agent will do it on your behalf.
IAmMcLovin83@reddit
The AI fatigue point is real. Anecdotally, every organisation I have spoken to tells the same story: big spike at launch, then a quiet retreat back to old habits.
The thing nobody says at these events is that most enterprises have never sorted out the basics: identity sprawl, ungoverned data, inconsistent permissions, years of shadow IT. Microsoft has been selling the tools to fix that for years through Purview, Entra, proper classification and governance. But that work is boring and it does not fill a 3,000-person venue, so they skip to the flashy stuff.
Without that foundation, Copilot just becomes a very confident tool rummaging through a filing cabinet where nothing is labelled and half the documents are wrong. The hallucinations and context failures you describe are often a data hygiene problem the AI is making visible and embarrassing, not purely an AI problem.
The bigger concern is that Microsoft has bet the company on this. If enterprise adoption plateaus or keeps dropping off the way your metrics suggest, that is a serious problem for them. They need this to work. Which is probably why the messaging stays theatrical rather than honest about what the prerequisites actually are.
acquiesce88@reddit
That's like at my current role: providing support for equipment I was never trained on and have never seen in person. My manager just told me that all the documentation is in a directory with a chatbot on top, and that I should just use the chatbot to troubleshoot customer problems. He was sold on the ideal AI reality. But if it's not in the documentation, and I don't understand how it works, then I'm left to figure it out on my own. And my limited understanding of the customer base and how they're using the product, combined with my limited knowledge of the product, makes for a poor support system and me an unhappy employee. Sorry for the ramble.
GolemancerVekk@reddit
I implement software pilots. The craziest thing I constantly run into is companies that don't know their own processes.
I mean obviously those processes are in effect at some level but the knowledge is split among dozens of people. Sometimes it's stored in files but it's anybody's guess how many, whether they're digital or paper, or if they're documented at all. Often it's pure folklore.
Zenkin@reddit
One of my primary job duties is helping our customers write up DR plans and then execute them. I've often said that if these guys had their business operations written down, we wouldn't have anything to do. But none of them seem to have anything written down, much less digitized.
webguynd@reddit
It also doesn't get the budget internally. Execs will go all in on "Agentic AI" but reject every single request to buy the tooling to manage it. They will happily spend billions on AI but refuse to spend a fraction of that on the tooling around it.
destinyos10@reddit
Ever since this absurd mess kicked into gear, I’ve been remarking how all of this would have landed far less solidly if we, as an industry, hadn’t been completely fumbling enterprise document search. Surprise, data needs categorisation and organisation to be searchable, but companies tossed everything into SharePoint and ended up with a mess, when they really just needed to hire some librarians.
tiredrich@reddit
I'm running the training at my work at the moment for copilot and they're surprised when it turns to data protection, sensitivity, metadata etc
It's not just about AI chatbots, it's how the company is organised and run as a whole. If you're not seeing the complete picture, implementation is going to be embarrassing and fail.
IAmMcLovin83@reddit
This is the way. The fact that people are surprised when data protection and sensitivity come up in a Copilot training says everything about how these rollouts typically go. They sign up expecting a magic chat assistant and nobody told them they were also signing up for a crash course in how badly organised their company's data actually is.
You are doing them a favour even if it does not feel like it in the room. Most organisations only have that conversation after something goes wrong and a shareholder report ends up in the wrong hands or an agent helpfully summarises a document nobody was supposed to see.
To be fair, Microsoft does stress the importance of this stuff. The problem is that organisations desperate to show AI wins to their own leadership move faster than they should, and Microsoft's revenue machine is not exactly incentivised to slow them down. So the guidance exists, it just gets cheerfully ignored by everyone in a hurry, which turns out to be almost everyone.
SMS-T1@reddit
The statement about data hygiene (and infrastructure / configuration / policy hygiene I would argue) rings very true to me and should be mentioned more. Thank you for bringing it into the discussion.
IAmMcLovin83@reddit
Appreciate that, and you are right to broaden it. Data hygiene is just the most visible symptom. The configuration debt, policy gaps, and infrastructure assumptions underneath are just as bad, and somehow always "someone else's problem" until an AI agent confidently acts on all of it at once.
The fact that this conversation happens more honestly in Reddit threads than at a 3,000-person Microsoft event does say it all. Maybe the real agents were the sysadmins we ignored along the way.
kremlingrasso@reddit
I keep steering every AI conversation back to "foundational data" quality. Entra owner employee IDs, ServiceNow CI owners, purchase unit price and quantity, HR org structure and job titles, group memberships, technology taxonomy, SharePoint page currency, etc, etc. Everything is filled in to the absolute bare minimum as quickly as possible, never maintained retroactively, and completely siloed. And you're supposed to build your agents on top of that? VPs look at me like I just kicked their dog in front of them. And this is my 6th Fortune 50 company and it's exactly the same issues.
IAmMcLovin83@reddit
This is exactly what I spent years working through with customers. The pattern you are describing is identical every time: data entered to the bare minimum to close the ticket, never touched again, owned by nobody, and completely invisible to anyone upstream until something breaks or an AI confidently serves it up as fact.
The dog-kicking reaction from VPs is real. In my experience, the conversation tends to die the moment someone calculates what a proper remediation effort would actually cost in time and people. It is much easier to approve a Copilot license and hope for the best.
Curious how you have navigated it across those six companies. Have you found any approach that actually gets traction, whether that is framing it as a risk conversation rather than a data quality one, finding a single exec who gets it, or something else entirely? And when you raise it now, are organisations any more receptive given how visibly the AI tools are failing without that foundation, or is it still the same wall?
Also genuinely curious whether you have seen any pockets of real Copilot or AI adoption that have actually stuck. Even with all of this, there must be use cases or teams somewhere across six Fortune 50s where something landed well.
Walbabyesser@reddit
Feels like this guys back then
Hey_HaveAGreatDay@reddit
I work cloud and AI at Microsoft and I realize there’s a huge fatigue on this. I won’t bring it up to my accounts honestly because if they want to hear about it they’ll ask me. Some do, some don’t.
Anyways, tomorrow I have to follow up with about 200 customers across my territory (I didn’t get to go to the event) on how their time was at the AI tour and this is very helpful.
I treat my accounts like people and don’t sell them on hype and buzzwords. It’s gotten me in trouble a couple times but fuck that, I’d rather my accounts reach out to me with questions than avoid me like the plague and decimate their budget by trying to do something themselves.
i_am_fear_itself@reddit
This guy sales engineers.
rubmahbelly@reddit
Thanks for your impression and thoughts. Gives me hope that we are not obsolete in the next decade.
Also, the hyper-aggressive marketing stems from the fact that they invested billions in datacenters, along with the other big players. If the reality is a poor adoption rate of AI, they will have to write that off.
Somenakedguy@reddit
I don’t see any chance of the SE role going away in the next decade, although I am biased. Customers and prospective customers want to talk to a real person who at least plausibly understands the products and services they’re buying or want to buy
I do think the lines between account executive and SE will start to blur though. I can see the roles being merged in the enterprise space of companies looking for a smaller number of people who can do both roles simultaneously for the biggest accounts
acquiesce88@reddit
That would be ideal, wouldn't it? Someone who understands the needs of the customers / end users AND understands how a product actually works, and can easily translate between the two worlds. Like a router.
CreativeGPX@reddit
I always thought that Microsoft was one of the better-positioned companies if this turns out to be a bubble.
When the bubble bursts, I think Microsoft will definitely be there to pick up the pieces while other companies that literally only exist to sell AI struggle to pivot into something else.
StinkMaster90@reddit
I've heard that the datacenters used for AI would need all their graphics cards replaced with SSDs or whatever the actually profitable thing is, which would cost a fortune, especially since those H100 AI graphics cards would be worth way less once the AI bubble pops. Do you think that's true?
CreativeGPX@reddit
I guess the point is that they don't "need" to do anything. They already paid for the graphics cards. If it's best to throw the cards in the garbage and sell the land, they can do that. If it's best for them to turn off the data center and wait a couple years for the staggered hardware upgrades they'd be doing in the cloud anyways they can do that. If it's best for them to lower prices to stimulate demand they can do that. Yes, if the bubble bursts Microsoft's market cap and profit margins will probably take a hit as it adjusts (just like most companies) but the point is that these aren't existential threats for them.
I feel like one reasonable possibility is that the AI bubble bursting just leads prices from all the major cloud providers to fall (even if primarily for certain kinds of workloads) and that those lower prices will lead to various new kinds of use cases to become feasible.
timbotheny26@reddit
Shouldn't Google, Amazon, Apple, Nvidia, etc. also be relatively fine since - like Microsoft - they get shit loads of money from all of their other services/products?
webguynd@reddit
Yeah. The big losers of an AI bubble burst are going to be the model companies (OpenAI, Anthropic, xAI, etc.) along with anyone whose entire business model is repackaging one of said models into a fancy wrapper: Lovable, Cursor, etc. If your whole business is "we repackage ChatGPT into an IDE" then yeah, you're not going to last very long.
Microsoft is at least model agnostic, so no matter who survives and who doesn't, they still get to sell "run xyz model on Azure." Likewise for AWS and GCP. Google is kind of unique because they have their own model and sell compute, and basically have enough cash to just stick around until everyone else goes away.
thirsty_zymurgist@reddit
Depends on if they leveraged themselves into the position, and how much. From what I hear, yes, all of the companies you mentioned should fare pretty well through this. One that might surprise you (but shouldn't), who will have big trouble making it if the bubble bursts sometime in the next 2-3 years, is Oracle.
CreativeGPX@reddit
Yes. The red flags for the bubble are being unprofitable, extracting no cash from AI today, and having no business outside of it.
I think the companies you mention are all profitable, are extracting cash today from AI hype, and have substantial other products/features/markets aside from AI.
Old_Ad_208@reddit
I read recently that Microsoft is running out of datacenter capacity and needs to build datacenters to keep up with growth.
Frothyleet@reddit
It would have taken you much less time to write this comment if you'd leveraged Copilot for Reddit (tm)
wrosecrans@reddit
Is there any pushback happening internally? Or is all of the pushback external, with the customer-facing people eating all of the heat, so that it never makes it up to management how negatively some people are responding to the onslaught?
Hey_HaveAGreatDay@reddit
I guess I did my pushback last year when I had a terrible manager. They’d yell at me every week for not having enough AI opportunities and I’d tell them it’s because my customers don’t want to hear it and the finance industry doesn’t want to Guinea pig that shit lol.
My new manager just expects that if the conversation shows up that I can speak to it in an effective manner.
Relaxation_Time@reddit (OP)
Thanks for the honest perspective. It’s refreshing to know there are still people who value human relationships over the buzzwords. Good luck with those 200 follow-ups—that sounds like a tough grind!
Hey_HaveAGreatDay@reddit
It’s not because AI lol (kidding)
Universespitoon@reddit
Agents, I wonder when they'll start putting "authorized" as a prefix?
Anti-Virus segment needs a new villain.
Infninfn@reddit
In fairness, having your entire tenant's documents and data automatically added to the semantic index vector DB, then be RAG'able and searchable minutes later, is a technical feat. What's lacking are the tools and integration, and most importantly, the autonomous agents everyone wants. That said, Copilot Cowork will power some advanced business uses without the need to code a solution, and OpenAI will eventually catch up on their business tooling and be in a position to tell Microsoft how to properly do integration.
I don't doubt that model capabilities will get to the point where business can fully rely on AI to perform job functions exactly as they specify, according to their specific requirements and processes. It's just a matter of time.
But the fact remains that Microsoft and everyone else doing AI business productivity have been selling a pipe dream.
RussEfarmer@reddit
The true value in AI is the ease of information retrieval in the enterprise, where shops with scrappy or nonexistent information flows, dealing with insane amounts of siloed data, all of a sudden have a way to get things moving without re-architecting everything. Counterintuitively though, the same shops that end up in that situation are the ones least likely to blow a bunch of money integrating AI into everything.
hannahranga@reddit
Not convinced AI is actually reliable enough for that. I've seen some coworkers dump in documentation, ask for answers, and then it's a dice roll on the accuracy. Sounds good even when it's dribbling shit tho :/
RussEfarmer@reddit
A lot of it is the implementation. Most AI searching across wide document sets uses something called RAG (retrieval-augmented generation), which has a few good ways of doing it and many, many, many really bad ways of doing it. Given that it's pretty new you'll find far more bad than good, but it's quite reliable when done correctly.
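To make the "RAG" term above concrete, here is a minimal sketch of the retrieval step: rank document chunks against a question, then build a prompt containing only the top hits. This is a toy using bag-of-words cosine similarity in place of a real embedding model and vector index; the sample docs and prompt wording are made up for illustration.

```python
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    # Crude stand-in for an embedding: lowercase word counts.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    # Return the k chunks most similar to the question.
    q = vectorize(question)
    return sorted(chunks, key=lambda c: cosine(q, vectorize(c)), reverse=True)[:k]

def build_prompt(question: str, chunks: list[str]) -> str:
    # Only the retrieved chunks go into the model's context.
    hits = retrieve(question, chunks)
    context = "\n".join(f"[{i}] {c}" for i, c in enumerate(hits))
    return ("Answer using ONLY the sources below and cite them as [n].\n"
            f"{context}\nQuestion: {question}")

docs = [
    "Backups run nightly at 02:00 and are retained for 30 days.",
    "The VPN requires MFA enrollment through the self-service portal.",
    "Printers in building B are managed by the facilities team.",
]
print(build_prompt("How long are backups retained?", docs))
```

The "many really bad ways" usually live in this step: poor chunking, weak ranking, or stuffing everything into the prompt, which is why the same model can feel reliable in one shop and like a dice roll in another.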
Infninfn@reddit
That is a search use case, which is valid and important but not far removed from existing Sharepoint Search/Syntex functionality. No business would easily justify the cost of the Copilot license across their entire userbase just to do search.
Phrown420@reddit
Literally the only thing I use copilot for at work, ask it to find the definition of a very specific acronym or find a reference to a control in a ton of documentation.
03263@reddit
So it's like having a rather dumb intern do all your work and trying to monitor and correct it?
WhitePower252@reddit
Except your intern is also arrogant about being correct, but when you call them out on being wrong and try to correct them, at least the AI will acknowledge it was wrong.
03263@reddit
Though never learns from its mistakes
Jazzlike-Vacation230@reddit
AI as it is today, aka an LLM, is just a pack of do/while loops in a trench coat working overtime. As soon as the ads start coming in, it's gonna become ad-infested slop like Google became 7 years ago. And eventually no one will use it. You see it already: these companies are paywalling average users now. So like, yeah.
TheCharlieFoxtrot@reddit
LLMs are a dead end if you're trying to do multi-step, long-lived, complex tasks. Yann LeCun called it out in 2022 and he is building out his own AGI architecture + company now. Another recent red flag: the OpenAI CFO wants to push the IPO out until 2027, and you know the numbers they show to VCs and media are not valid with that detail. This language-based smoke-and-mirrors scam will blow up spectacularly, might even take Oracle with it.
FlyingBishop@reddit
did you actually go to this thing or did you just ask an AI to write a review of what going to an AI conference might be like
GargantuChet@reddit
I’m old enough to remember when Microsoft claimed that a new filesystem was needed to make Windows search work. Now they push storage to OneDrive and Sharepoint and claim that Copilot is needed to make search work.
bureX@reddit
Oh god I remember this one. Also, every single Windows component was to be rewritten in .net
pdp10@reddit
... until it's slow enough to sell new hardware.
Then it turns out that Electron is more useful than .NET and just as slow, so they used that, instead. What was the actual business goal, again?
aes_gcm@reddit
Search worked in Windows XP, and that was over 20 years ago.
Khue@reddit
ReFS?
MeccIt@reddit
Vista flashbacks
Relaxation_Time@reddit (OP)
This comment is gold.
F0rkbombz@reddit
If the rest of their agents are anything like the Security Copilot agent, they are gonna have a tough time getting people to spend money on them.
We got access to Security Copilot after they moved it into E5 and we immediately saw why nobody was paying for the add-on licenses originally. It’s basically worthless, and I assume most of their other agents are as well.
Drew707@reddit
Did they say much about implementing this shit in Fabric?
knawlejj@reddit
We've spent the last 6 months going from a SQL based PBI structure to Fabric based PBI. That part is basically done and now we're starting to test layering on Copilot with specific agents to Sales, Procurement, etc. and their associated semantic models.
Some people are more comfortable using a chatbot like experience, while others prefer the typical dashboard reporting structure. Doesn't matter to me as long as we're using the same datasets for both from an integrity standpoint.
Drew707@reddit
How have the agents been so far?
knawlejj@reddit
It's been fine and I see little risk so far; we've made sure our security groups through PBI are applied so people aren't seeing data they couldn't see in PBI before. Things will get more interesting if/when we decide to let the agents take action and do things.
Drew707@reddit
What kind of impact to your CU usage have you seen?
Relaxation_Time@reddit (OP)
Actually there was not that much Fabric this year. But it was the thing 2 years ago.
Drew707@reddit
Thanks. I really need to spend an afternoon learning about MCP servers and LLM integrations, but I just haven't found the time lately, and shit is moving so fast.
My latest task has been to look into replacing/augmenting PBI as our delivery layer with Claude.
mangeek@reddit
What I have heard from vendors when we're off-the-record is that most of their clients are still at the "staff using generic chatbots" or "strap a chatbot on the website" stage, and some have implemented some "run agents in parallel with regular workflows and measure their effectiveness" and have stalled there. There are a few niche use cases where a really well-organized team has good luck with an agent speeding-up a very specific important step in process (like coding-up a security detection based on feeding manually-picked logs from an attack).
What frustrates me is that upper management is getting sold on the idea that regular staff are going to be writing their own agents for stuff, that this will let them automate things. That might be true for a small percentage of the users, but most people in the world probably can't verbalize breaking their own work down to specific-enough repeatable steps, let alone do that in an environment that would let them test and publish an agent for colleagues or reliably know how to test its effectiveness. Management is being sold "everyone will be able to ask the robot for an oil change and get one" but 95% of users don't really know the dipstick from the wiper fluid reservoir, so they can't describe to an agent what the real steps and dependencies of the "oil change" would be.
I don't mean that as a derogatory thing about users, but most people in most seats are following word-of-mouth or lightly-documented workflows that have nuanced and difficult to explain differences, and most of them have a lot of access to data that's disorganized and basically impossible to set any automation on.
aaiceman@reddit
If a user can't put in a ticket to accurately describe their issue, they can't be expected to put in a prompt to accurately describe what they need done.
ycnz@reddit
Wildly overestimates programmers' ability to describe issues in tickets here.
CreativeGPX@reddit
Yeah, I'm a developer and used to teach software development and this reminds me of what I tell people who think you have to be good at math to be a programmer or new programmers who focus too much on "I can't remember every single feature of the programming language". The primary skill that a decent developer has been honing for years is just precisely understanding and describing how something should happen. The fact that it's in a programming language or that the one who "reads" the program is often a computer is all secondary to that and is usually not the part the people are actually struggling with (although they think they do) as they are learning to program. They're still going to be among the most qualified to come up with precise, unambiguous, complete, efficient and maintainable instructions for how things should be done regardless of whether they're speaking in plain English with a person/AI or in programming languages.
It's kind of like lawyers. Everybody thinks they know what they want out of a contract, but businesses still have dedicated lawyers to write the contracts because they are going to think of things that ordinary employees will not.
Ordinary people speak in a way that presumes the person listening isn't just listening, they're applying their expertise to actively add missing information, eliminate ambiguity, make some executive decisions, etc. If what they are saying is dumb, wrong or turns out to be very inefficient the listener might actively change what they do from what they were explicitly told. People don't realize how many problems in their communication is solved this way because it's just normal. A shift to having an AI that will do whatever you say and precisely what you say requires a shift in that communication and it also pushes a lot of that work (and accountability) that was going on the listeners back onto the speaker.
poorest_ferengi@reddit
I've always said, programming languages are tools and anybody can learn to "swing a hammer."
The real skill is understanding how to manage both the great and terrible thing about computers. The great thing being they do exactly what you tell them to do, the terrible thing being they do exactly what you tell them to do.
SkiingAway@reddit
You need to be relatively good at math to get a CS degree - which is not exactly the same as being a good programmer/dev, I agree.
Dekklin@reddit
Remember that computer instructions experiment where the teacher tells you to explain all the steps to make a peanut butter and jelly sandwich?
"Put the peanut butter on the bread", and you end up with a jar of peanut butter flattening a whole loaf of bread that's still in the bag.
coderguyagb@reddit
This tracks with my experience almost exactly. Small-scale demos work well; the moment you tell it to work on a production-grade system, you're quicker doing it yourself.
lordmycal@reddit
To make matters worse, the Copilot functionality for GCC and GCC High tenants is neutered compared to their commercial capabilities, making Copilot literally the worst choice to implement from a pure functionality perspective.
JLChamberlain63@reddit
In my experience it's more like lobotomized than neutered. It's so braindead as to be completely useless. Claude for gov does a much better job but the tokens are so throttled we usually can only do a few prompts a day.
apple_tech_admin@reddit
Unfortunately, we can no longer use Claude because that drunk bastard at DoD..excuse me “DoW” had a temper tantrum.
apple_tech_admin@reddit
PREACH! From my experience, Copilot in GCC environments is so bad that it truly results in negative time saved. If copilot was going to be nerfed that hard for GCC, GCC-H and DoD (the last one I completely understand), why in the hell did we waste hard earned tax dollars on bullshit? And further salt on the wound, my agency is tracking copilot usage.
frymaster@reddit
That's not what I'm hearing from multilingual staff and from translators. If you mean "it's easier than copy/pasting text into Google Translate", sure, but if you're wanting customer-facing professional-quality documentation, you have the same problem you've identified about needing to micromanage and the editing taking more time; it's just that unless you are also a fluent speaker of the other language, you might not realise the issues.
bv728@reddit
Very much this. It's not TERRIBLE at texts that are very straightforward, low-jargon, and practical, but nearly every text you need to translate isn't in all three of those categories.
If you want to translate two strangers talking about the weather or some simple directions, it'll probably come out okay. If you want anything genuinely technical, or where anyone speaks non-literally, or the discussion includes place names or location labels, there's a very good chance you need to review and revise. Which means you need to understand both languages and manually translate.
Dotakiin2@reddit
I've tried reading some novels through translation, both with different AI models and Google Translate. Translate is decent but very inorganic, while AI removes the author's voice almost entirely and frequently makes major mistakes. Anything more important than leisure would need an actual translator.
Fallingdamage@reddit
I'm going to make some assumptions based on my own observations, correct me where I deviate too much.
There seems to be an obvious push to replace conventional products and programs with AI, 'agents' being LLMs that get set loose with the required permissions to interact with a system on a user's or business's behalf to accomplish a task or maintain an asset, among many other things. This isn't augmenting existing products so much as replacing them with the AI aether. Humans are not really software, and we think of AI like a human but far more energy-intensive. The overhead AI needs to complete even simple tasks poorly far exceeds the energy required for a linear piece of software to do the same thing, yet suddenly we want to replace these structures of code and rules with a prediction engine.
Suddenly, writing a simple script using LLM prompts consumes more power than a household uses in a day, and it still can't get the message right. These synthetic brains are being grafted onto everything and sold as a total replacement for labor (or an accelerant, as you implied). It's actually adding more overhead and creating more mistakes than if people did it themselves.
How can we really use AI in a way that expends less energy and keeps details of a job accurate and structured?
Quick story: back 20 years ago, there was an amplifier company trying to build a cheaper, digital amp that sounded just as good as the old analog ones. Digital amps got a lot of flak: they worked well, but were rather 'low rent' options that could not deliver like the old equipment musicians were used to. Some engineers at the company were able to digitally model a lot of different amps and effects, but could never build a single solution. Eventually, some of them had the crazy idea to instead build and model solid-state versions of all the different amps and popular effects, then wire each one together via analog channels. That way the output of each one fed into the next to give the impression that you had a series of analog devices processing the source. The solution worked and they were able to build and market an amp that sounded 95% as good as the classic devices their customers loved.
IT admins and engineers use a lot of software tools, scripting, and scheduled tasks to perform work and get predictable output/outcomes. Professionals in charge of grafting AI onto their filesystems and databases, and creating virtual workers, are also creating a lot of risk, no matter how well the AI is trained. Instead of giving a trained model access directly to your resources, why not give the AI access to your tooling? Example: you don't ask AI to back up your database. You give it access to execute the human-built-and-tested scripts or tools that do it. That way the AI is triggering the task, but the AI isn't doing the task itself. The outcome will be predictable, as the linear tooling is still doing the work. The agent uses the tools provided to it to assist in the process, but the agent never directly touches the data being put at risk.
Don't try to use AI to replace existing products. Use AI to manage and operate existing tooling the way a human would. If the AI needs to alert us about a problem, it's alerting us about a failure in the tooling, not a failure in the AI's reasoning.
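The "agent drives the vetted tooling, never touches the data" idea above can be sketched as a dispatcher that only exposes a fixed allowlist of pre-approved tools, each wrapping an existing, human-tested procedure. The tool names and the stub backup routine here are illustrative, not a real API.

```python
from typing import Callable

def run_db_backup() -> str:
    # Stand-in for the tested backup script the team already trusts.
    return "backup OK: db.dump written"

def rotate_logs() -> str:
    # Stand-in for an existing log-rotation job.
    return "logs rotated"

# The agent can only ever see this registry, not the data behind it.
TOOLS: dict[str, Callable[[], str]] = {
    "db_backup": run_db_backup,
    "rotate_logs": rotate_logs,
}

def dispatch(tool_request: str) -> str:
    """Execute an agent's tool request only if it is on the allowlist."""
    if tool_request not in TOOLS:
        # The agent asked for something outside its sandbox: refuse
        # and surface it to a human instead of improvising.
        return f"REFUSED: '{tool_request}' is not an approved tool"
    return TOOLS[tool_request]()

print(dispatch("db_backup"))        # backup OK: db.dump written
print(dispatch("drop_all_tables"))  # REFUSED: ...
```

The point of the design is exactly the comment's: a failure shows up as a failure of a known tool with a predictable output, not as a failure of the model's freeform reasoning against live data.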
Delusionalatbest@reddit
Having been in the room for an MS & Partner meeting about a client's security project....
A couple of the MS guys were frothing at the mouth to say "lack of AI" and "I don't see copilot in the slides" in front of their boss.
JFC, they can't and shouldn't be getting AI until there's a minimum viable service for security in their estate.
The push and panic to sell copilot SKUs (+ now E7) is being driven with an extremely hard sell. Chatting to some long timers in the industry, it seems to be the biggest push from MS sales side in quite some time.
I'd be worried where things go from here. I don't see the agent swarm disappearing jobs any time soon. I do worry for MS staff in the long run as the current trajectory is not sustainable.
burnte@reddit
I was the guy who in response said, "no, we need to sit this year out, let the players create useful products, and come back later when the fog has cleared."
This is what I argued. It's a lot harder to read someone else's code than to write it.
I'd rather have a human write two sentences than an AI write five paragraphs.
JosephRW@reddit
We are out of the era of easy gains and people haven't realized the curve has fallen off. When it's the same solution divided into smaller wrappers and more "guard rails", you know you're reaching the edge of capability and they're just creating a new demo to show some C-level.
We are literally at the point of NFTs before their crash. The juice isn't worth the squeeze. Making users "learn" how to use something that doesn't have a form and then asking them to also be dorks about something that isn't their job isn't a great way to get your product to sell.
Tools have handles that stay handles, you can't hammer a nail with a cup of jello with some rocks in it.
tsaico@reddit
So far, i have found this to be spot on, in the MSP space, we get micro slices of this view from many different companies.
What we have been seeing is that AI should really be treated like any new employee. You have to watch and cross-check its results for a period of time and train it like you would a new employee, and someone needs to be checking its output on the regular, because it learns across the whole org and starts to change output based on that. Watching it becomes a job unto itself.
So it "saves" money in the sense that now everyone gets an assistant, but as with human assistants, you don't get a 100% doubling of productivity, since now some of the manager's time is spent managing the assistant. To compound the issue, not everyone is a good "manager" or communicator, so you end up getting a ton of slop over a period of time, since the AI is learning from every "manager" regardless of whether they're good or not, and merges all that into the same model, further diluting the good training.
So far, we found the "built in" AI for Gemini or Copilot to be most effective, since it is simply included, no extra tokens/cost needed, and most people use it for the superficial "make this email sound like I am more confident in my skills" and "take notes for me because I didn't" tasks.
The only thing where I have seen real improvement is digesting logs. I will toss in logs from switches or Event Viewer to do a poor man's SIEM type of analysis: "read all these logs and tell me some ways to improve my network or workstations." I am planning on building an agent to digest and aggregate the logs across a client to see if this will actually save me headaches.
klauskervin@reddit
All of this AI stuff seems so over the top considering the SMB I'm a part of, and most others I work with, have no resources or infrastructure to utilize any AI features beyond basic troubleshooting or general knowledge questions. This is probably another instance where only the larger orgs are going to benefit, and they are going to take the SMBs out of the market since the SMBs can't compete with the big orgs.
Frostyazzz@reddit
There is a clear gap between operational reality and the inflated claims coming from hype driven AI vendors and low quality “vibecoding” advocates.
I work in the real world, where AI fatigue is no longer theoretical. It is measurable, and it is affecting serious business decisions. What remains surprising is how often budget discussions with decision makers are still distorted by inflated spending on token heavy AI projects, many of which cost significantly more than equivalent work delivered by a skilled developer at a fraction of the price.
So far, the market has tolerated a remarkable amount of irrational spending. I hope that period is about to end soon. Businesses are now absorbing the cost of buying into AI hype without sufficient scrutiny, and many are paying a premium for solutions that deliver less than conventional engineering would have.
HouseMDx@reddit
What a great writeup of the conference and the additional context provided was awesome. AI will have a long-term impact in some way, but it's likely not what Microsoft, Nvidia, OpenAI keep pushing.
Equivalent-Peak-5213@reddit
For all the talk of how revolutionary LLMs and AI are, it's quite astonishing that there are literally the same fundamental flaws in the output as there would be with a broken shell script from the early 90s.
Tweed_Beetle@reddit
The 70-80% drop-off is more mechanism than fatigue. The verification loop on writing-side agents costs more than the task itself, and people figure that out around month two and quietly stop. Coding agents don't have the same pattern because the verification loop is built in. You get a compiler, a type checker, a test suite, and a stack trace when the model is wrong. The model can be wrong all day and the work still moves forward, because the cost of catching the error is low. On a vendor email or a board summary, the human is the verifier, and human verification of LLM output costs more than just writing it because the verification has to cover not just spelling but whether the model picked the right detail to lead with and whether it accidentally inverted the relationship between two parties.
There's a workflow-level distinction underneath this that Microsoft deliberately blurs. Ambient agents, where the work happens autonomously and the human reads the result later, versus supervised agents, where a human triggers the action and reviews the output before it ships. Every working production agent in the wild is supervised. Ambient is what gets demoed at the Tour because per-seat pricing requires the agent to do enough work autonomously to justify the cost, but supervised is what actually works. The economics of supervised-agent work don't fit the licensing model, so the demos never feature them.
For the Discovery use case you mentioned, the supervised-versus-ambient distinction is the load-bearing one. A supervised agent that lets a scientist say "show me all binding assays from Q1 2024 against this endpoint" and returns a queryable view is a real productivity gain because the verification step is the scientist scrolling through the result and saying yes or no. An ambient agent that handles the whole flow end-to-end, from drafting the protocol through running the query to emailing the result to the lab, produces output the scientist still has to verify by re-running the query themselves, which collapses straight back into the editing-tax pattern you described.
ScannerBrightly@reddit
Can someone explain to me how an AI bot connected to my Calendar makes you able to "observ[e] incredible productivity growth."???
Ferretau@reddit
I think it highlights how desperate the company is for their offering to produce an ROI. They've bet the company on it, and if it doesn't return more subscribers willing to pay more, then those who evangelized it are going to feel the swing of an axe.
cdoublejj@reddit
In the gaming and home space, normies and boomers are talking about Linux. I wasn't even gobsmacked when my older uncle, who I haven't seen in a year or two, told me he was trying out Linux.
Which is to say: not just failing to get ROI, but actively pushing people away.
Ferretau@reddit
I see that as an effect of their policy of forcing AI down the throats of their customer base within Win 11, Office 365, etc. without consultation, and including the agents with no path to remove/shut them down. I note you do have registry settings, but that doesn't remove the agent from the PC; it still runs. Removing people's choice when it comes to AI/LLMs will force them to vote with their wallets/computers, as we are starting to see ordinary people decide that Linux might not be so bad to try out now.
i_am_fear_itself@reddit
With their renegotiated contract with OpenAI at least they don't have to continue sharing with them. The downside is copilot is such a horribly incompetent product that if MS has to stand on their own AI capability, I don't see how massive layoffs aren't on the horizon.
awful_at_internet@reddit
The funny part is layoffs won't fix it, because they'll still have the same morons trying to enforce the productivity gains the sales team says they should see, so they'll just end up with fewer people available to edit the slop.
wrosecrans@reddit
Can't come soon enough. As far as some users are concerned, that axe should be less of a metaphor and more of a sincere reenactment of the French Revolution.
Previous-Low4715@reddit
And so they should. They’ve burned thousands upon thousands of employees on this bonfire.
timbotheny26@reddit
God, I can't wait for this stupid god damn bubble to pop already. I know various cracks are starting to show, but it needs to move faster, and it needs to happen in a way that really wakes the higher-ups at Microsoft the fuck up.
Diasom@reddit
Just a question: was AI used to write this? The reason I ask is that Elena is a name that AI loves to use.
Relaxation_Time@reddit (OP)
Haha. I wrote the whole thing myself. Used AI for spellcheck. I have some Belarusian roots, so Elena is just a name that popped into my head. Quite popular in my homeland.
svideo@reddit
This entire thing is AI, it has every single tell. Em dashes everywhere, Elena, contrastive negation everywhere (it's not this it's that), starting sentences as Honestly comma, it's every single paragraph of the OP, this is 100% AI.
Relaxation_Time@reddit (OP)
Sure, you are correct. I definitely used AI to clean the text up. To be honest, I’m a pretty bad writer and I struggle to put my thoughts on paper in a logical way. My process is to just "dump" my thoughts out first, and then I ask the AI to polish it, clean it up, fix the punctuation, and check the grammar — all without changing the core idea or losing my original, "choppy" authorial style.
After that, I proofread the text several times to strip out any words I wouldn’t normally use and to make sure the AI didn't sneak in any of its own "hallucinatory" thoughts.
So while yes, it is AI, it’s more of a symbiosis. The core ideas, my personal observations, and the actual experience of attending the AI Tour — AI simply isn’t capable of that yet.
FeedTheADHD@reddit
Saying you only used it for spell check is disingenuous. Your post and all of your comments read like you pasted it straight out of a prompt without even trying to mask the parts of it that LLMs shoehorn into everything they output. A lot of people here sift through AI slop all day long and it rubs me the wrong way when it's clear something was copy pasted from a prompt and it's not clear which parts of this are your actual original human thoughts.
Relaxation_Time@reddit (OP)
There is even some kind of funny irony in this. I wrote a post about AI fatigue and now I'm experiencing this very effect on myself.
tastyratz@reddit
This makes your appeal feel dishonest. If you're looking to tell us this is how you feel about AI but everything is obviously just an AI prompt output then that says something completely different.
It's not just spellcheck that's happening here and it's easy to spot.
cdoublejj@reddit
What a waste of time and life, but hey, you had fun, so maybe not so much.
Josh_Fabsoft@reddit
I feel this so hard. The "agents everywhere" rebrand feels like watching the same movie with different subtitles. The enterprise AI fatigue is real because most of these solutions are solving problems that sound impressive in demos but don't translate to actual workflow improvements.
What's particularly frustrating is how these big vendor events create this artificial urgency around adoption. The FOMO marketing works on executives who then pressure IT teams to implement "AI agents" without clear use cases or success metrics. Then when the pilot projects don't deliver measurable ROI, everyone gets burned out on AI altogether.
The disconnect you're seeing is that most enterprise AI tools are still too complex and require too much organizational change management to be worth the effort. Companies end up spending months on implementation and training just to get marginal improvements over their existing processes.
The real opportunity isn't in these flashy "agent" platforms that try to do everything. It's in focused solutions that solve specific pain points without requiring a complete digital transformation. Sometimes the best AI implementation is the one that works quietly in the background and just makes people's daily tasks a little bit easier.
The industry will eventually move past this hype cycle, but right now we're stuck in the phase where every vendor has to slap "AI agent" on their product to get meeting invitations. The companies that will succeed long-term are the ones building practical tools that deliver clear value without the marketing theater.
No-Preparation7805@reddit
Feels like we moved from “AI will solve everything” to “AI everywhere but no one knows what actually works”.
A lot of noise, not enough real value yet.
HeKis4@reddit
alwayshasbeen.jpg ? I really can't see how this could be any different.
nut-sack@reddit
Oh? You’re not seeing the 100 billion Sam Altman pumped into training the last 2% gain they got out of the model? /s
Now let’s all watch as OpenAI requests a $1T bailout. Either he gets it and the American public pays for this. Or he doesn’t and OpenAI goes bankrupt.
the_star_lord@reddit
I've been asked to build agents for some critical processes in social care work, and the AI just keeps making stuff up, even when told to stick to the facts in the provided docs.
I'm glad it's not just me struggling with anything past the "quick wins".
Business leaders are foaming at the mouth for all this to work so they can lay us all off. So they will eat up all the noise and buzz words.
Proper_666@reddit
The 70-80% drop-off you're describing matches what we see across our customer base (mostly LATAM enterprises). It is consistent: month one spike, then quiet retreat.
The root cause isn't that AI tools are bad; it's that organizations deploy them without a methodology for how humans and AI work together. They hand people Copilot and say "be productive." That's like handing someone Terraform and saying "be DevOps."
Your "editing tax" observation is the key insight: when AI generates output faster than humans can review it, the bottleneck shifts from creation to verification. If you don't restructure the workflow around that shift, you get exactly what you described: people spending more time fixing AI output than writing from scratch.
We've been working on this problem for two years. The short version: structured AI development (explicit intent before generation, quality gates during generation, team review after generation) eliminates most of the editing tax because the AI gets better input. The 70-80% drop-off happens when people use AI as a magic box. It doesn't happen when they use it as a tool within a structured process.
The data hygiene point from IAmMcLovin83 is also dead on. AI makes your existing governance gaps visible and embarrassing. That's actually useful if you treat it as a diagnostic rather than a reason to abandon the tools.
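To make the "quality gates" idea concrete, here's a minimal sketch of how we think about it (hypothetical names, not a real product): every draft carries its stated intent, and generated text only reaches a human reviewer after passing a set of explicit checks. The gates themselves are placeholders; the point is that review happens against declared intent, not a magic box.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Draft:
    intent: str                # explicit intent declared before generation
    text: str                  # AI-generated output
    failures: list[str] = field(default_factory=list)

def run_gates(draft: Draft, gates: dict[str, Callable[[Draft], bool]]) -> Draft:
    """Record the name of every gate the draft fails; reviewers only
    see drafts with an empty failures list."""
    draft.failures = [name for name, check in gates.items() if not check(draft)]
    return draft

# Example gates -- swap in whatever your team actually cares about.
gates = {
    "has_intent": lambda d: bool(d.intent.strip()),
    "under_word_limit": lambda d: len(d.text.split()) <= 200,
    "no_placeholders": lambda d: "TBD" not in d.text,
}
```

The editing tax drops because reviewers stop re-deriving what the output was supposed to be; the intent is attached to the draft, and the cheap mechanical checks already ran.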
csorfab@reddit
Not going to read this AI slop
Ok-Measurement-1575@reddit
This is not a bad bit of slop.
I think the tldr is, it's all still kinda niche and if you're not an expert in your field and also quite handy in general, expect to get bitten every time the wheels fall off.
Matsuda_109@reddit
The “editing tax” point is spot on. Feels like AI shifts effort instead of reducing it — from creating to constantly verifying.
I’ve noticed the same pattern: amazing for quick wins (summaries, drafts), but the moment stakes go up, trust drops to zero and you end up double-checking everything anyway.
DrAtomic1@reddit
The whole Frontier companies thing is a big marketing miss too imho.
GremlinNZ@reddit
It's a fascinating time to observe... As much as being strapped to a train can be, when you don't know who's in control (and highly suspect no-one is).
The C-suite sees gains just by saying they're investigating AI. Share price goes up! Well boys, we're sitting on a gold mine here! First mover advantage etc. Companies selling themselves know that the C-suite is all about AI, and if they don't highlight it and a competitor does, they lose.
So now our printer suppliers claim AI gains... A fucking printer. Companies fall over themselves to clear out staff, while others like nVidia point out it costs more. Adding IT has always fixed bad process in the past so how could AI fail? /s
Uber spends its annual budget in 4 odd months because it basically forced staff to use it. Meanwhile today, Copilot can't get commands right to fetch logs because it left out a pair of quote marks.
That said, if you can get some sanity in the mix, it can be effective. But instead, let's just buy some more licences? Only one trying to win is Microsoft...
UnprofessionalPlump@reddit
So glad to be validated about this Microslop AI fatigue. I’m beyond puzzled why we think it’s a good idea to let the autocorrect on roids do the thinking for us.
No-Rip-9573@reddit
Agents are often just the good old “solution looking for problem” - How do we make the orgs use our LLM more? Let’s invent some use cases to promote.
Damet_Dave@reddit
Anyone going through ELA negotiations right now will likely tell you the same thing we are seeing: they are trying to cram E7 licenses down our throats.
All the standard street drug pushing tactics.
They are clearly giving us aggressive first-pass pricing that is meant to fool the C-suite negotiators into thinking Copilot is free. They are screwing with other offerings to make it look like a no-brainer.
Then you look at the hidden costs, like agentic usage costs in SharePoint, app creation, etc.
They intend to make every customer an AI shop willingly or not.
Degenerate_Game@reddit
All tech conferences are an annoying circlejerk of meaningless words and you can't convince me otherwise.
Jony_Dony@reddit
The permissions point is the one nobody wants to fix first. Agents hitting production with overly broad access is exactly how you get a Copilot that confidently surfaces documents the user shouldn't have seen. The "AI isn't ready" conversation is often really a "our IAM is a mess" conversation in disguise.
Walbabyesser@reddit
Key phrase: "completely disconnected from reality". The whole AI train is a fever dream of lies and ill-fated illusions.
Opposite_Bag_7434@reddit
Yep, this is pretty consistent with what we are finding, but with different AI platforms. Same fatigue rate, which is a big surprise.
I personally believe I’ve found a couple of sweet spots. Still the same problems can happen so every result has to be checked.
enterprisedatalead@reddit
This honestly doesn’t surprise me.
Feels like a lot of this “agents” push is just the same LLM stuff with a new label on top. The demos always look smooth, but once you try to use it for real work, it falls apart pretty quickly.
I've seen the same pattern: great for small things like summaries or drafts, but anything complex turns into double-checking everything. That "editing tax" you mentioned is real.
Also agree on the drop-off. People try it, get excited for a few weeks, then slowly stop using it because it’s more effort than expected.
Feels like the gap between demo and actual enterprise use is still pretty big.
Curious if you’ve seen anything that actually worked well in production, or is it mostly still in that “demo stage”?
Darrelc@reddit
Could not be happier to be having a break from the industry, what incredible timing.
Thanks for giving up a part of your soul for this, interesting read which backs up my unobjective assumptions lol