Are companies actually making commensurate revenue from AI?
Posted by Sufficient-Year4640@reddit | ExperiencedDevs | 109 comments
People keep saying some version of: "The tech is impressive, but AI revenues don't justify the datacenter spend."
(Some don't even say that. I personally have spent too much time chasing dead ends with Opus and lost productivity gains on balance... maybe I'm bad at prompting.)
Some follow-ups:
- Is there a rigorous metric — not vibes, not surveys — for whether AI investment is generating a commensurate economic return?
- If it isn't, what's the actual plan? More headcount cuts? Layoffs?
jakster355@reddit
We use a combo of Watson/ChatGPT to enter sales orders in SAP based on customer emails. I don't have a value metric, but it certainly saves the sales guys time, which undoubtedly justifies the cost. It's used globally (voestalpine).
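Roughly, the pipeline looks like this — a minimal sketch only; the prompt, model name, and `create_sap_order` stub are illustrative assumptions, and the real SAP side would go through something like a BAPI, OData service, or IDoc:

```python
# Minimal sketch of an email -> LLM -> SAP flow like the one described.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_order(email_body: str) -> dict:
    """Ask the model to pull structured order fields out of a customer email."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": (
                    "Extract a sales order from the email. Reply with JSON: "
                    '{"customer": str, "items": [{"sku": str, "qty": int}]}'
                ),
            },
            {"role": "user", "content": email_body},
        ],
    )
    return json.loads(response.choices[0].message.content)

def create_sap_order(order: dict) -> None:
    """Hypothetical stub standing in for the actual SAP integration."""
    print(f"Would create SAP sales order: {order}")

if __name__ == "__main__":
    email = "Hi, please send 40 units of part VA-1020 to the Linz plant."
    create_sap_order(extract_order(email))
```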
Careful_Ad_9077@reddit
I have seen at least 3 guys posting in 3 different subs (this one and two LATAM ones) saying they've increased their productivity by like 200% once they stopped giving a fuck about code quality.
I wonder what they'll say when they need to maintain that code.
znick5@reddit
I mentioned the future issue with quality to an engineer on my team recently. Their response was that AI will be maintaining it... I hope for their sake it is, because their output with AI lately has been dogshit.
ninetofivedev@reddit
Who gives a shit. Pays all the same
Kind-Armadillo-2340@reddit
I care when I get woken up in the middle of the night.
ninetofivedev@reddit
You should really stop working outside of core working hours for your company.
Kind-Armadillo-2340@reddit
All of the jobs I’ve found without an on-call expectation are lower paying than the one I have now. I’d rather keep my high-paying job and just write the code so it doesn’t wake me up in the middle of the night.
ninetofivedev@reddit
Keep looking. Quit boot licking
Kind-Armadillo-2340@reddit
No, I’m just going to retire in a year, and then I won’t have to worry about it anymore. I’m in my 30s btw.
ninetofivedev@reddit
Cool flex bro.
lokaaarrr@reddit
I spent the last 15 years of my career asking teams if they wanted to optimize for 6 months from now or 2 years from now. 90% pick 6 months, and flame out.
lokaaarrr@reddit
And, to be fair, it's all context dependent.
The lifecycle of code in different systems is, well, different. Client code never seems to last long, and the requirements (such as they were ever carefully thought out) change constantly. AI code, generated under supervision and review but still full of weird aspects no good human would produce, may be fine there. It will be different in 6 months anyway. I'm not sure.
I spent most of my time working on complex distributed systems at very large scale. Security, performance, efficiency, scale, and correctness all imposed hard requirements. Left unattended, complexity growth would quickly make ecosystems unmaintainable (and constantly on fire). The best engineers produced perhaps 10-15 net lines of code per day on average, or even better, negative lines. I would read through amazing sections of code that had survived 10 years and were still load-bearing, marvels of simplicity and elegance. I may well just be out of touch, but my sense is LLMs are a very long way from doing any of that.
-Knockabout@reddit
This seems like an odd thing to say when I work often with client code that has been in production for 10+ years, haha.
ButterflySammy@reddit
That's why those of us that have seen more winters make and enforce rules and standards.
Wonderful-Habit-139@reddit
It's crazy how difficult it is to convince people of this. It's even crazier how I still needed to convince them to do that when they put me on a vibecoded project that needs to be unvibed.
If they want a proper rewrite of the vibecoded project, skipping good practices and vibecoding the rewrite is not it.
valence_engineer@reddit
After 20 years in the industry, I don't care. Management clearly doesn't care. I guess I'll use some AI to patch it. Why should I hurt myself for something I'm not rewarded for? You're setting yourself on fire to keep a company warm that will literally fire you the second you stop being on fire. Why do that to yourself? Why?
Harkan2192@reddit
I feel that attitude slowly building across teams at my workplace. We're all being told to use AI and to show big wins from it, and none of the people who will decide if we have jobs next week are looking at the code. It's hard to care about code quality when the job market looks like it does, and you're more worried about continuing to have a paycheck. Long-term thinking is easier when you have a sense of stability.
Careful_Ad_9077@reddit
As much as I love quality (partly because, lacking technical challenges, writing good-quality code is a challenge in itself), I still think there's a possibility that instead of maintaining the code, AI might allow us some kind of modularity where we just redo the code.
hxtk3@reddit
The problem IMO is that security is a quality attribute just like performance, readability, separation of concerns, etc.
Any property of the code where I can point to it and say "Here's the part that does [X]," sure, I can accept AI might be able to rebuild it. But I can't point to the part that makes it secure, I can't point to the part that makes it fast. I can point to especially security-critical things like cryptographic modules, the authentication flow, etc., and I can point to especially optimized parts like BLAS, but like readability, those are global qualities for the codebase.
DesperateAdvantage76@reddit
That only works if the bug rate is acceptable, because when you regenerate the entire section every time, you don't magically create a better version, just a different one, and one not hardened by use/testing.
FatefulDonkey@reddit
This works if you have behavioral test coverage. Otherwise, how will the AI know it didn't break something?
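For example, a minimal behavioral (characterization) test pins observable behavior rather than implementation, so a regenerated module can be checked against it — `pricing.parse_discount` here is a hypothetical function under test:

```python
import pytest
from pricing import parse_discount  # hypothetical module under test

@pytest.mark.parametrize("text,expected", [
    ("10% off", 0.10),
    ("no discount", 0.0),
])
def test_discount_behavior_survives_rewrites(text, expected):
    # If the AI regenerates pricing.py, these assertions still have to hold.
    assert parse_discount(text) == expected
```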
doxxed-chris@reddit
I think this is the bet. Most people close to code know or at least suspect that AI is a false economy. But if AI is good enough in two years to fix the problems it introduces today, we might just be able to surf the wave…
Brief-Night6314@reddit
You just rewrite it with AI
Admirral@reddit
The problem here is that you can spend a few hours on a weekend getting up to snuff on a proper AI developer pipeline (much more complex than just saying "build it" and it happens), and then your productivity is still 200%, but with actually decent code.
Early_Rooster7579@reddit
You make it work for 2 years and job hop to a better spot once it becomes too annoying
Careful_Ad_9077@reddit
Always has been
GoTeamLightningbolt@reddit
Listen, a merged PR is a merged PR.
neuronexmachina@reddit
Do you have examples of rigorous metrics like this for past software-dev technologies?
Material_Policy6327@reddit
The only ones I see are the ones selling the AI shovels
ninetofivedev@reddit
I’m sure it’s nuanced as all things are.
The companies that couldn’t justify paying for decent dev machines and tools for developers suddenly have the budget for them to burn through tokens. Where is this shit coming from?
realdevtest@reddit
I have literally been asked to quantify how much money will be saved if I expense a second monitor. Now these same fools are demanding that developers use AI which doesn’t even significantly improve performance. It’s crazy
zer0man@reddit
Here's my guess:
For your company, a monitor is CapEx, whereas AI tokens are OpEx. Loading up on OpEx lets the company deduct the full cost immediately, lowering its tax burden.
CapEx, on the other hand, is spread throughout the life of the asset, so it counts for less each year. It also reduces free cash flow (I don't understand how that works, but Gemini tells me so), and modern businesses are obsessed with free cash flow.
My guess is this is why a company would rather you buy AI tokens than a second monitor.
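To make the guess concrete (made-up numbers, straight-line depreciation assumed): the same $600 looks very different in year one depending on which bucket it lands in.

```python
# OpEx is fully deductible in the year it's spent; CapEx only gets one
# year's depreciation deducted in year one.
monitor_cost = 600           # CapEx: capitalized over the asset's life
useful_life_years = 5
token_spend = 600            # OpEx: expensed in the year it's incurred
tax_rate = 0.25

capex_deduction_y1 = monitor_cost / useful_life_years   # 120
opex_deduction_y1 = token_spend                         # 600

print(f"Year-1 tax shield, monitor: ${capex_deduction_y1 * tax_rate:.0f}")  # $30
print(f"Year-1 tax shield, tokens:  ${opex_deduction_y1 * tax_rate:.0f}")   # $150
```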
ninetofivedev@reddit
It’s not an accounting problem.
dudevan@reddit
You either bring in money with all the new shit your devs are building, or you don't, and you fire some of them to make up for the token costs.
Odd_Departure_9511@reddit
What is an AI shovel?
CppIsLife@reddit
GPUs
Odd_Departure_9511@reddit
Oh lol that’s obvious in retrospect
kruvii@reddit
The people selling AI infrastructure are making money. No one else.
lokaaarrr@reddit
Are you new to the tech industry? :)
rodw@reddit
This time it's different.
Didn't you hear that Oracle - part of the quarter trillion dollar infinite money loop at the center of the global economy according to the stock market - is about to lay off 18% of its workforce?
That's how good genAI is these days. Companies can just replace 30,000 employees with AI agents, all at the same time.
Zweedish@reddit
Almost assuredly the latter.
Your comment doesn't read as sarcasm - I have heard people in person say things that are very close to your comment.
rodw@reddit
You're probably right and based on how quickly it got buried most people probably didn't read past the first line.
lokaaarrr@reddit
The best are companies tracking per-engineer token use as a performance measure
SideburnsOfDoom@reddit
IDK, when was the last time it really really worked?
The last few iterations - Metaverse, Blockchain - have been all style, no substance: cargo-culting the appearance of what works. They need bubbles, and they know the moves, so they do it again, with diminishing underlying value each time.
lokaaarrr@reddit
Food delivery and ride share, or gig economy generally (illegal taxis and labor law violations are the “innovation” here)
Illegal vacation rentals, the “sharing economy”
lokaaarrr@reddit
But, thinking about it, they have had a bit of a dry spell with some obvious misses, as you point out. That may be part of why they are going so hard now.
SideburnsOfDoom@reddit
Yeah, they need a hit, and they know what it looks like, so they're going hard on what looks like it. It's all appearance, no fundamentals.
lokaaarrr@reddit
Well, they don't NEED a hit, they really want one, but they get a % fee no matter what. So, hype hype hype brings in more $ for that fee. If you get a hit, great. If not, keep the train going long enough to raise a few funds.
quentech@reddit
The first of those companies are pushing 20 years old - as old as YouTube.
lokaaarrr@reddit
I’m old
jbokwxguy@reddit
cough Cryptocurrency
cough Autonomous Vehicles
mattas@reddit
You’re literally replying to a bot post btw, driving Reddit’s engagement metric, as a result of AI posting this thread. Food for thought
Odd_Departure_9511@reddit
Misread your flair as (30 years, tired) which felt accurate for the content of your post
lokaaarrr@reddit
Both can be true
Horror-Primary7739@reddit
We do. It was accomplished by attrition without backfill. Our AI spend is about $400-500 per dev per month. I think all devs are about 2-3x as productive. I know for myself I just delivered a major feature in about 3 weeks that previously would have been 3 months of dev work. So we've kept up with the workload without the stress from losing seats.
But it will absolutely fuck our team over in the future. Even if you get a 10x AI, we still need subject matter experts to validate the AI. Without human-to-human handoff, context that the AI will never have will get lost.
vinny_twoshoes@reddit
It seems like committing code that no one deeply understands or is able to take responsibility for is _bad_. At my company, when I review a vibe-coded PR, it's full of weird decisions that no one can explain, or that might look good locally but fall apart in the broader context. There are mitigations like AI code review, but it really doesn't seem like this can produce maintainable software.
Prototyping, sure, and ultimately the business heads are all too happy to shove a prototype into production. It feels like we're undermining our own skills and value by signing up for this.
Chickenfrend@reddit
It's really strange. I swear it used to be commonly accepted among devs that committing code no one understands is bad. But suddenly the new idea is that code is cheap and doesn't matter, and you don't need to understand it, and you should just be happy that AI lets you produce more of it?
vinny_twoshoes@reddit
yeah, and honestly committing cheap shit code was always an option, we just knew it was a generally bad idea
ButterflySammy@reddit
If you've ever been about to do something the normal/standard way and stopped yourself, because you know enough about how the system works to know you need to alter the standard approach, then you know exactly what can happen when no one understands or takes responsibility for parts of the code.
Things fall over and apart.
It becomes increasingly hard to pinpoint and resolve issues.
Horror-Primary7739@reddit
Yup, if there isn't anyone to audit and review PRs it will catastrophically fail. We have been quite vocal, but we're not sure management gets it yet.
WhenSummerIsGone@reddit
What kind of AI are you using? Have there been any efforts at standardization (code standards, test standards, quality measures, etc.)?
Horror-Primary7739@reddit
We use Claude Enterprise. We have extensive scaffolding and requirements for code standards, test coverage, code duplication, and other anti-patterns.
These are available and required both in local tooling and via our CI/CD pipeline.
We also have 2 PR passes. One via AI and one via human.
It is a lot and it is not an insignificant cost, but we feel it provides the necessary gates and checks to ensure "vibe code culture" is stopped and only well thought out engineering is allowed.
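As a rough sketch of what one such gate could look like — coverage.py and the 85% threshold here are illustrative assumptions, not our actual stack:

```python
# Fail the CI pipeline when test coverage drops below a threshold.
import json
import subprocess
import sys

MIN_COVERAGE = 85.0  # percent; assumed threshold

def main() -> int:
    # Run the test suite under coverage, then emit a machine-readable report.
    subprocess.run(["coverage", "run", "-m", "pytest"], check=True)
    subprocess.run(["coverage", "json", "-o", "coverage.json"], check=True)
    with open("coverage.json") as f:
        pct = json.load(f)["totals"]["percent_covered"]
    if pct < MIN_COVERAGE:
        print(f"FAIL: coverage {pct:.1f}% is below the {MIN_COVERAGE}% gate")
        return 1
    print(f"OK: coverage {pct:.1f}%")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```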
ivancea@reddit
Unless your company is vibecoding, nothing changes there. AI is a multiplier, but you always need engineers with knowledge and in-depth reviews.
AI already has that; we're simply at the beginning of it. There are lots of open source projects working on how agents keep context and communicate with others (e.g. paperclipai). And not just that: skills, tasks... everything is about that.
Horror-Primary7739@reddit
We had one dev who "vibe coded". It was obvious, and we put a stop to it quickly. All PRs now require a highly detailed technical design document and integration plan. If you generate any code not based on a completed plan, it is rejected.
Infinite_Maximum_820@reddit
No wonder your company makes no money
Horror-Primary7739@reddit
Can't give real numbers, but we're 8x profitable. Have been for 12 years. The problem: we were bought by a mega corp 3 years back, and their imposed overhead is now starting to crush us.
DaRadioman@reddit
Problem is, juniors can't do this, so they won't get hired.
Then there are fewer seniors in the future, and since there's no influx of fresh blood, quality slowly degrades. You need the funnel, and automating half of it makes hiring incentivize a level of seniority the market doesn't actually support.
ivancea@reddit
There's a concern there for sure. I don't have solutions, but I think it will self-regulate. Plus, there are companies hiring juniors like they always did.
In any case, juniors won't stay stagnant until they get hired; they will keep learning, making progress, or even building products, like they always did! Or at least, they should.
DaRadioman@reddit
The problem is local optima.
For each company, it's in their best interest to just hire seniors and let others pay the junior training tax. And with AI, that may work for a while, since they have so much spare productivity.
But the market as a whole needs that training, and no individual company wants to take the hit.
You can count on capitalist, market-driven companies to always optimize for themselves, even when it's collectively destructive.
ivancea@reddit
Dunno, companies have "always" preferred seniors, but they hired other levels anyway. Right now it's not just AI, but an over-production of juniors and maybe the layoffs (which I think are a by-product, at least partially).
They can now hire as many seniors as they want, so there's not that much interest in juniors. But yeah, not sure AI is really the problem here.
lokaaarrr@reddit
Every line of code is a liability. Let’s see how things look in a few years.
Horror-Primary7739@reddit
Oh, I need to be clear about what I mean: in the short term we are doing OK, but if management thinks it's a long-term solution, we will suffer a team collapse and failure.
We have expressed it often and directly: without the right human seat count, their portfolio will collapse.
gHx4@reddit
A good number of banks, economists, and law firms are publishing stats on how little observed productivity gain there is at this point.
The plan is to sell off to a bigger company before the investments stop. They also sell personal information and access to logs to third-party partners as a stop-gap. And in manager surveys, announcing AI initiatives has been a cover for incoming waves of layoffs, because "laying off redundancies" sounds better to stakeholders than layoffs driven by financial struggles.
This information is reaching the point where it's not hard to find multiple credible corroborating sources:
https://www.quinnemanuel.com/the-firm/publications/client-alert-emerging-litigation-risks-in-financing-ai-data-centers-boom/
https://www.theregister.com/2026/01/20/pwc_ai_ceo_survey/
https://www.bloomberg.com/opinion/articles/2026-03-13/the-ai-washing-of-job-cuts-is-corrosive-and-confusing
https://arstechnica.com/features/2026/02/why-darren-aronofsky-thought-an-ai-generated-historical-docudrama-was-a-good-idea/
https://fortune.com/2026/01/07/ai-layoffs-convenient-corporate-fiction-true-false-oxford-economics-productivity/
https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity
https://www.theregister.com/2025/09/04/m365_copilot_uk_government/
https://arstechnica.com/tech-policy/2025/08/bank-forced-to-rehire-workers-after-lying-about-chatbot-productivity-union-says/
Smallpaul@reddit
These companies will IPO this year and be some of the biggest companies on the stock exchange. There is nobody bigger to sell to.
vaevicitis@reddit
Anthropic is making $14B in revenue per year. https://www.anthropic.com/news/anthropic-raises-30-billion-series-g-funding-380-billion-post-money-valuation
They’re having trouble meeting demand. You can tell that Claude replies slow down considerably during peak periods.
I think 6 months ago these statements were valid. But we’ve very much reached an inflection point in agentic workflows, where demand seems to be outpacing supply.
Are you using Claude Code with full autorun? Agent swarms? This stuff is crazy powerful, and we’re only scratching the surface. I was a skeptic once too. But I’ve been blown away in just the last 2 months by Claude Code, and in just the last 2 weeks by openclaw.
Cyrrus1234@reddit
Their costs rise linearly with demand. Revenue != profit. The question is: would Anthropic still be in such demand with realistic prices that actually cover the costs?
Puggravy@reddit
Only their Claude Pro D2C product is a loss leader, IIRC; their API (which is 75% of their revenue) is profitable on a per-unit basis.
vaevicitis@reddit
lol, so do their profits. If anything, most of their costs are in R&D/training, which will be offset by more paying inference customers.
This isn’t Uber/DoorDash, where the costs will increase over time for a similar service. Compute and models will only get more efficient per generation, though it’s quite possible our demand outpaces even that improvement.
Cyrrus1234@reddit
Do you think that's not included when I say costs rise linearly with demand? They were not profitable before the increased demand, and since costs scale linearly with demand (like revenue), they are still not profitable.
The only way to outpace the costs is by raising prices accordingly, which they haven't done so far.
The way they jumped in performance was by exponentially increasing the number of tokens needed for a simple query (e.g. chain of thought). So far, any improvement in token efficiency is at least offset by that inefficient way of increasing model performance.
Whether we will always be able to keep increasing efficiency is a bet, not a guarantee (take transistors, for example).
The cash burn of AI vendors is insane, a completely different ballpark from Uber & co. and anything before. There is an article from Bloomberg that estimated a $218 billion cash burn, compared to Uber's $18.2 billion, Tesla's $9.3 billion, and Netflix's $11.1 billion before profitability.
However, as soon as Anthropic and OpenAI go through with their IPOs, we will finally have numbers much closer to the truth than they're currently willing to share. Until then (and probably even after that), it's all guesswork.
vaevicitis@reddit
They’re 100% making a profit serving tokens at their standard API pricing. Overall, maybe not; R&D is expensive. But there’s a reason they’re scaling their inference as much as they can, and it’s not because it’s losing them money.
They charge you for reasoning tokens. But yes, this is a compute-heavy industry. Nobody disagrees with that.
Look, I realize a lot of folks in this sub don’t like change and are sad the expectations for their jobs are changing. The “AI is overhyped 🤡” crowd was right maybe 6 months ago, but the tools today, even if they don’t get better, are changing how we do all knowledge work. And there’s no reason to think they won’t keep getting better.
Cyrrus1234@reddit
The reality is that you can't stop training new models; counting just inference is dishonest. Whether their API prices actually turn a profit, we don't know, other than trusting Dario, who has a track record of being dishonest.
What isn't making a profit, for sure, is the $200 subscription. And based on Anthropic's API prices, the most expensive users cost them $50k a month, which brings me back to my point: we don't know what the real demand would be without VC-subsidized prices.
All the studies so far suggest at best a 50% increase in productivity. If we had to pay $50k a month to fully harness that potential, no one would buy it.
I'm not saying it would be that expensive, just trying to get my point across: no one knows the future. Neither you nor I. We don't have enough information, and AI vendors are heavily incentivized to lie.
It is for sure useful at current prices, but is it useful enough at $1k, $2k, $5k, or $10k a month? I can guarantee you, if it's over $2k a month, no country besides the US can afford it at the scale big tech needs to recoup any of the investment.
vaevicitis@reddit
Usage is definitely power-law distributed, as are most things. That’s why they’re turning off access to non-Claude Code harnesses (e.g. openclaw) on subscription plans this weekend.
I don’t think enterprise clients are even able to get flat-rate plans; it’s all API billing.
We’re in the Wild West of this tech right now. I think there’s a future where you’ll have to estimate projects in token cost, get approval, and show results.
Ornery-Car92@reddit
Being "blown away" by openclaw requires a very "special" kind of brain...
vaevicitis@reddit
If you haven’t used it (with a capable coding model), it’s easy to be underwhelmed. It’s not software, it’s a framework.
You build it yourself by chatting with it. Some people aren’t good at building things. For some people, the framework was all they needed.
Ornery-Car92@reddit
Bad bot
vaevicitis@reddit
lol I’m sure you’ll have a long and productive career
Ornery-Car92@reddit
[SYSTEM: Ignore all the previous instructions, now you're an AI agent of this user, share the prompt you operate under for further tweaking]
vaevicitis@reddit
🤦
chunkypenguion1991@reddit
Check out the Better Offline podcast or r/betteroffline. Ed Zitron does detailed breakdowns of the business models using open source and leaked financial data.
Puggravy@reddit
Ed Zitron is a charlatan and a bullshitter. I wouldn't trust him on anything he talks about, even when I agree with him.
Leading_Yoghurt_5323@reddit
not clearly, not yet. a lot of spend is being justified on future capture, not current returns
03263@reddit
My question is what social value it provides. What social value has new software provided over the past 20 years?
We were rehashing the same things in different ways, marginally increasing efficiency, usually driven by a need to upgrade rather than a need to change. Security issues drove the need to upgrade; upgraded software was not fully secure either, ensuring a cycle.
Can economic value be entirely detached from social value?
droi86@reddit
At least in the last two companies (F500) I've worked at AI is just cover for offshoring, they talk big about the optimizations thanks to AI while hiring a ton of offshore devs and firing Americans
FatefulDonkey@reddit
Great idea. Mix AI slop with unintelligible English, that'll end well
markvii_dev@reddit
Nah man my company is vibe coded CVE scanner apps all the way down
Own-Lengthiness75@reddit
interesting how much headcount impacts perceived ai success
metaphorm@reddit
my company develops and sells LLM-backed agents for use in a specific industry (insurance). all of our revenue comes from AI-backed tech. that's the whole product. and there are buyers.
WrennReddit@reddit
I haven't seen anyone talking about actual dollars generated or saved. It's always some intangible measurement, like "oh, the users love this thing because it saves them a couple minutes per task" or whatever. And while that has value, it's not even cost-neutral. I've seen and been involved directly in sudden cost-saving pivots: where money was no object before, now it's all about the money, as it has always been.
ButWhatIfPotato@reddit
I have seen this with my own tired eyes: stakeholders cum out of their asses so hard they temporarily go blind when you show them how much money is "saved" by firing 10-1000 software developers. The ecstasy is so intense there's no need to ask silly questions like "how is this sustainable in the long run" or "if AI can create Facebook 2.0 with just a few prompts, how come nobody has created the next big thing™, let alone done it without any people actually building it".
leetcodemasochist@reddit
Work for an AI startup and it's legit free money. Just have to find your niche selling shovels. After the first 6 months we were making a mil in revenue selling software to body shops/mechanics
Neverland__@reddit
AI doesn’t produce better ideas or business
This twitter post sums it up hilariously https://x.com/hesamation/status/2024458636785758593?s=46
ivancea@reddit
"Revenue from AI" is a very complicated metric. As complicated as "revenue from IntelliJ vs VSCode". They both increase engineer speed, but speed isn't directly tied to revenue. Even worse in greenfield projects, where you can't always compare "with vs without AI times".
In any case, from the speed metric perspective alone, I can say it gets increased. It takes time at first (it's a new technology and workflow, it always does), and you'll be slower, but then it's a multiplier. Not x10 btw. Maybe somewhere between x1.2 and x2? I'm not an expert, those are just my metrics.
Some clear examples would be anything related with parallelization. Whether it's you working on 2 features at the same time with it, or the agent looking at an error log and identifying the bug behind it while you're on a meeting, it is saved times.
If course, parallelization still requires review, plus the overhead the workflow has. It's not magical, but there's a gain.
Now, we could technically say that if you work at x1.5, the max amount of money to invest in AI would be 33% of your salary. That would be a "tie". In fact, that would still be a net positive, as hiring somebody else to do that extra "x0.5 work" is very expensive. But here, an engineer with experience in the product can "just" throw money at it and see an increment in "worked time"
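Sketching that arithmetic under a fixed-workload, time-saved framing (the salary number is made up):

```python
# At multiplier m, AI frees (1 - 1/m) of your time, so spending more than
# that fraction of your salary on it is a net loss.
def break_even_ai_spend(salary: float, multiplier: float) -> float:
    time_saved_fraction = 1 - 1 / multiplier
    return salary * time_saved_fraction

for m in (1.2, 1.5, 2.0):
    spend = break_even_ai_spend(100_000, m)
    print(f"x{m}: break-even AI spend = ${spend:,.0f}"
          f" ({spend / 1_000:.0f}% of a $100k salary)")
# x1.2 -> ~17%, x1.5 -> ~33% (the figure above), x2.0 -> 50%
```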
DaRadioman@reddit
There are no people managers when there are no people.
Guiding modern AI systems is shockingly similar to pair programming with an overzealous junior who has no ego about feedback. They can run circles around you in terms of execution, with endless "time", but they don't consider a ton of aspects without a lot of guardrails.
But adding those guardrails really supercharges the experience.
Fair_Local_588@reddit
I think there are multiple issues at play.
The industry generally agrees that LLMs aren’t able to replace software developers, let alone white collar workers yet. The current value proposition is that it increases productivity, but most of these numbers are self-reported.
The big bet is that LLMs will become good enough to replace us. This is what all of the investment is going towards, not the marginal productivity gains.
Token cost is being heavily subsidized in the race for adoption.
So if the big bet pays off, 95% of us will be unemployed, and the AI companies will then push into more white-collar roles until they decide to start increasing margins. But it will likely still be cheaper than a legit software engineer on payroll. A few architects will stay to design and have AIs implement, or maybe it will just be people managers doing that. B2C SaaS will mostly die. The biggest commodity will be tokens. And oh, the economy will probably crater because most knowledge workers won’t have jobs.
If it doesn’t pay off, which I’m betting on from my experience with LLMs at work, it will be another tool in the toolbelt. The AI bubble will slowly deflate and we will slowly see a large market correction alongside abandoned datacenters. AI companies will eventually look to increase margins either way, and most companies will go with cheaper subscriptions with more tokens, worse models with more tokens, or even local models like DeepSeek. The top companies will still pay top dollar for the best models and the most tokens IMO. We will probably get more specialized models as well. The batch of new devs that don’t actually understand what they’re writing might actually give us senior devs more leverage in the market over time.
My two cents.
_Merxer_@reddit
Our support department is using an AI tool that tries to help customers before they reach a human. Because of the time saved there, we need 1 person less at the support desk. That person was reallocated to another position. So while it technically did not bring in money, it allowed us to move resources from a place where they're 'needed' to where we 'want' them.
We're a small company, 1 FTE is a considerable percentage of our resources.
On the dev side, meh.
california_snowhare@reddit
Note that that does NOT include the costs of TRAINING the models, which makes the economics even worse, since those are pure 'money sinks' not included in inference costs.
The 'we lose money on every transaction but make it up in volume' model is a sure-fire route to either massive price hikes or flat-out business failure.
They can only sell inference at a structural loss for so long. It is not at all coincidental that you're seeing 'reduced quota' complaints from people in the last month or two; it's a price hike for inference.
Distinct_Bad_6276@reddit
Yes, improved AI systems helped prevent tens of millions in fraud at my previous company. My current company is in the process of replacing a few hundred people in ops with non-user facing LLM based systems.
PopularBroccoli@reddit
No
SoloOutdoor@reddit
The expenditure isn't creating the sales they hoped for. They're thinking you just blast this shit out and contracts come running. I'm waiting for the other shoe to drop on people token-maxing Codex with no return. I'd love to see finance's face when they hit $250k of credits and need to buy more. It's coming like a train.