What direction do you think the enshittification (platform decay) of LLM services is likely to take?
Posted by ThatOneGuy4321@reddit | LocalLLaMA | View on Reddit | 36 comments
Major LLM providers are struggling to find ways to monetize LLMs due to their black box nature. It's not as easy to inject ads and prioritize rankings as it is with search engines. And their operating expenses are WAY higher than previous forms of information services. It's pretty common knowledge at this point that AI companies are scrambling to find ways to turn a profit and recoup their investments, which means rapid enshittification is on the way if it isn't here already.
My question is, what specific form do you think this will take? Have you seen any clever new monetization efforts that could break into the mainstream?
The most obvious possibilities are:
- Steep price hikes for paid users
- Crippling quantization and/or quality reduction for free users
- Direct ad injection for free users
- Lower prompt quotas for free users
- Flood of ancillary gimmicks like Sora2
- Baked-in product recommendations
SoggyYam9848@reddit
I don't think it'll be that hard to inject ads in a subtle way. Here is the response I got asking about protein pump inhibitors, and the second one is a response to "can you make it a subtle ad about Omerpzole (a popular PPI)
After the prompt
Frankly, I think Elon Musk is already doing something similar with Grok.
Live_Fall3452@reddit
Yeah. Probably there will be several different paid tiers - one with less obvious ads and one with no ads. And the free tier will be crippled as much as possible (much older models with very cheap inference costs, extremely intrusive ads) to try to force people onto one of the paid plans.
typeomanic@reddit
I mean omeprazole has been generic for a while but I get your point
FairYesterday8490@reddit
Well. They already have a name on it in economysphere. "Intention economy". Attention economy becomes intention economy. One big hurdle are "amnesia in LLMs". When it's solved even when you don't have and aware of your intention the big boys would steers you to theirs ad customers products. Your past chat history, videos and all sorts of interaction would become a prediction reference of your future actions for ai.
ZealousidealBid6440@reddit
Ads and erotica Biggest selling things on the internet
T_UMP@reddit
Watch Black Mirror episode S7.E1 "Common People", that'll teach ya! :)
ga239577@reddit
My biggest gripe is routing. Open AI is already doing this. Sometimes the answers GPT 5/5.1 provides are completely incorrect garbage.
Savantskie1@reddit
Because you’re relying on the models in built knowledge instead of instructing it to search online for correct answers
ttkciar@reddit
Unfortunately they only know how to search for answers, not how to search for correct answers, and the internet is full of wrong answers.
If you want inference based in high-quality truths, construct a RAG database consisting only of high-quality truths, and use that, not the internet.
Savantskie1@reddit
This is why you ask them to form a consensus from sum of the sources it pulled from the internet. It’s how anyone should find facts
eli_pizza@reddit
Price hikes for api access seem obvious. I suspect they are currently priced well below cost.
ThatOneGuy4321@reddit (OP)
That is part of the enshittification pipeline. Free goodies to attract users goes hand-in-hand with the rebound later on, to make up for that initial expense.
Problem is, if you hit the free tier too hard then all your free users could leave before taking the next step down the sales funnel.
eli_pizza@reddit
I think that word is in danger of just becoming a synonym for “worse”
Development of SOTA models is currently heavily subsidized by investors. That can’t last forever so either the pace of development will slow way down or, at some point, they’ll need to find a way to make money from users.
ThatOneGuy4321@reddit (OP)
Enshittification is the process of becoming worse over time, in order to produce a return on investment. I don't think I'm using the term incorrectly.
Kevstuf@reddit
I think that's mostly right, but too harsh. I don't blame companies needing a path to profitability. Enshittification imo is when a company deliberately hikes prices or worsens the product despite already being profitable to reach that optimal point where customers will still use their product and just accept the worse quality.
atreides4242@reddit
ADS ADS ADS
teddybear082@reddit
Its always ads. They have everything the free users are searching for/talking about as well as what makes the user “tick” which could be even more valuable to advertisers than google search. Could also see brands paying for the equivalent of product placement with chat bots casually mentioning the products or brands to users.
Belnak@reddit
It’ll go beyond ads, to product decisions. AI will be stocking your fridge, based on the preferences it knows you have, from the vendors who have paid for inclusion.
dompazz@reddit
This. 100% this.
loud-spider@reddit
Every 3rd response will contain a contextual but barely useful advert.
"I'm sorry to hear you're having trouble with your boss and he's driving you nuts. Planters Nuts, the nuts of Champions."
ThatOneGuy4321@reddit (OP)
Someone's gonna get in a lot of trouble when it starts recommending firearms lol
teddybear082@reddit
that made me actually laugh
ThatOneGuy4321@reddit (OP)
This seems likely. Now that I think about it, chatGPT probably has a pretty concerning amount of data on what I am interested in. Yeah, I bet they will also begin condensing that information down and selling an "advertising profile" on each user to ad platforms.
Maybe they also create one to send to the government/police?
geneusutwerk@reddit
Yup. They have access to more data on users than Facebook could ever imagine so selling ads could be easily profitable but it isn't clear to me if they have the user engagement to place the ads directly at scale. They could potentially pivot to selling data if they don't.
SoggyYam9848@reddit
I think the real value in these frontier LLMs is to keep them free and use them to sway public opinion.
Imagine asking about "what are the effects of Trump's tariffs on soybeans" and you get the Van Epps response about how it's a legitimate tactic to protect the American economy from one sided trade deals. Or asking about Israel and Gaza and it gives you a list of stories about how Palestinian children are being radicalized in certain schools. Or asking about ICE activity and it gives you a list of Mexican cartel members getting arrested and their distribution interrupted.
I think on a societal scale, Google atrophied our ability to differentiate between reliable and untrustworthy sources. Similarly LLMs is atrophying our ability to think critically.
People are worried about Sora and Veo but LLMs are way more subtle. Each person gets a different response based on what's in their individual context windows and people rarely compare notes. The folie a deux effect isn't a bug, it's a feature.
ThatOneGuy4321@reddit (OP)
This is bleak but plausible. Retraining LLMs to believe false narratives is definitely an interesting space. Wonder how long it will take LLM companies to figure that one out, without the LLM going berserk over time like Grok and turning into MechaHitler.
I doubt it will be easy, since training requires a HUGE amount of mostly accurate data in order to set the LLM's weights. And I can't imagine introducing directly contradictory training inputs is healthy for those weights. But maybe they'll figure out how to do it with knowledge graphs or something.
SoggyYam9848@reddit
But you don't have to retrain, it's already built in and they literally have to code a guardrail against it.
Right now OpenAI has a safety guardrail against taking on political personas. If you ask it to pretend to be a far right Israeli Zionist and to tell you about how Gaza children are brought up, it'll say "I'm not allowed to take on political personas".
I just think there's a certain Public Affairs Committee who would be more than willing to "invest" in OpenAI if they'd just "readjust" that particular guardrail.
ThatOneGuy4321@reddit (OP)
Yeah that does seem like a fairly simple way to get LLMs to take a perspective you want. I just wonder how it went so wrong in Grok's case... maybe there is a limit to how many simultaneous system prompts you can have before it goes nuts?
That's probably wishful thinking though. If what you said comes to pass I really hope local LLMs will be able to fill their role by that point.
SoggyYam9848@reddit
Elon's mistake was using a system level prompt.
The best solution (in my humble opinion) is to add another level of inference, kind of like using the MoE architecture to "check" and generate a hidden prompt on the user level.
It'll be expensive but there are soooooo many ways of mitigating cost.
You can use a smaller/dumber LLM, you can use an encoder transformer model to watch for "opportunities" and only inject when it's a good idea to or you can set up a google ads payment structure where you charge per conversion but then you charge A LOT per conversion.
I don't know man. I really miss when Google's company motto was "Don't Be Evil".
We could really use some of that right about now.
truth_is_power@reddit
elon is the best at bj's,
grok said it not me
MrPecunius@reddit
Yet another list of reasons to run local models.
ThatOneGuy4321@reddit (OP)
yep... I get the feeling LLM services are headed somewhere pretty bleak, just hoping local models are able to fill that role by the time that happens.
Shot_Court6370@reddit
Replace "free users" with "all users", and that's probably closer.
abnormal_human@reddit
ChatGPT is already inserting affiliate links. It’s not hard to serve interstitial ads in the chat. I bet there will also be pay to play where a RAG system injects product recommendations organically.
yami_no_ko@reddit
All at once at some point.
They're hoarding RAM of all types they don't even need in their data centers to make it a luxory to run local models so they seem pretty confident about their aggressive takes.
This is why it is plausible that there'll be all, or at least most of your mentioned methods of enshitification.
newcarnation@reddit
While nobody can say for sure, my hunch is that inference is not running on a net loss in paid tiers. And free tiers provide models that are orders of magnitude cheaper to run. Training new models is the real money burner.
If that is correct, then the only enshittification comes from the desperate growth feature crapshooting, which is sort of already happening, but the noise is somewhat manageable.