Cloud AI subscriptions are getting desperate with retention. honestly makes me want to go more local
Posted by remoteDev1@reddit | LocalLLaMA | 24 comments
Ok so two things happened this week that made me appreciate my local setup way more
tried to cancel cursor ($200/mo ultra plan) and they instantly threw 50% off at me before I could even confirm. no survey, no exit flow, just straight to "please stay." that's not confidence lol
then claude (i'm on the $100/mo pro plan) started giving me free API calls. 100 one day, 100 the next day. no email about it, no announcement, just free compute showing up. very "please don't leave" energy
their core customers are software engineers and... we're getting laid off in waves. 90k+ tech jobs gone this year. every layoff = cancelled subscription. makes sense the retention is getting aggressive
meanwhile my qwen 3.5 27B on my 5060 Ti doesn't give a shit about the economy. no monthly fee. no retention emails. no "we noticed you haven't logged in lately." it just runs
not saying local replaces cloud for everything. cursor is still way better for agentic coding than anything I can run locally tbh. but watching cloud providers panic makes me want to push more stuff local. less dependency on someone else's pricing decisions
anyone else shifting more workload to local after seeing stuff like this?
CrypticZombies@reddit
paying for cursor.. joke's on you
a_beautiful_rhind@reddit
I never really paid for any cloud besides some $10 deals. Didn't experience this.
What I do see instead is free inference more or less drying up compared to past years. Are you actually costing them $200 a month in compute? If they keep you at $100, maybe next month you forget to cancel?
Hopefully you actually got free API calls, not ones the meter simply miscounted. With other providers, those have occasionally shown up on a later bill.
TheDailySpank@reddit
I had someone who used to treat ChatGPT and Grok like they were almost sentient do a 180 within the last month. They finally accepted local generation because of Gemma 4 and a one-time $500 video card buy-in. It's fast enough for them and that's one less subscription.
VampiroMedicado@reddit
What GPU? 26B A4B?
TheDailySpank@reddit
5060 16GB added to an existing 2060 Super 8GB on some Intel box with 64GB RAM. IDK tokens/sec; I just suggested the additional GPU so they could do it at home, even if a little slower.
They're testing out Gemma, various flavors of Qwen 3.5, and GLM that I know of.
Plastic-Stress-6468@reddit
Cancelled chatgpt in November and got a one month free deal.
I think it was 5.1 being more useless than ever that pissed me off, and Gemini or Grok was much more useful by comparison, so I switched over.
Then came February and Gemini got lobotomized, and Grok's new 4.2 heavy ended up just being 4x 4.1 thinking duking it out - which to be fair is still better than Gemini since it hallucinates less and actively searches the web so it won't be confidently wrong - but it gave me the push to finally look into running things locally again.
I tried running ollama back in August last year and local models were just kinda shit on my 4090 relative to SOTAs at the time. Now my 5090 is actually usable running qwen3.5 and gemma4 with 120k context. It's actually viable for work now. Though now I regret not buying something like an Asus GX10 or anything with a Mac-style UMA architecture for the same 3k-ish spend.
deejeycris@reddit
You should try OpenAI again, it got better and doesn't have Anthropic's absurd quotas.
VampiroMedicado@reddit
Between the free models which one do you consider better for general chatting? If you tried them extensively I mean, early this year Gemini Flash was very good but as the main comment said it got lobotomized. I was using Claude Sonnet but it's very stingy for the free account.
LanternOfTheLost@reddit
Their billing page has a bug. ChatGPT claimed they were giving me a month off in January, then February, then March. That date kept updating every month, and the bills kept coming too. Except for January.
o0genesis0o@reddit
How good is the 27B on your 5060Ti? I guess you need to partially offload layers to CPU, regardless of context window, right?
I have the 4060Ti 16GB that is still running OSS 20B and Qwen 30B. If the 27B doesn't run that badly, I could spend a weekend swapping the model.
DepressedDrift@reddit
Correct me if I'm wrong, but if you can run a 30B model, you can definitely run Gemma 4 26B A4B at Q4 quantization
o0genesis0o@reddit
Yeah, I think with some expert offloading, I can even run Q6 with at least 65k context.
I'm more curious about OP's claim that they can run the dense 27B on the 5060Ti. Last time I ran Devstral 24B, it was too slow to be practical for agentic coding. Just wondering if they have some magic config to make the dense 27B viable.
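For a sense of why the dense 27B is tight on a 16GB card, here's a rough back-of-envelope VRAM sketch. All the numbers are my own assumptions (bytes-per-param at Q4 including overhead, and a made-up but plausible layer/head config), not the model's real architecture:

```python
def model_vram_gb(params_b: float, bytes_per_param: float = 0.55) -> float:
    """Approximate VRAM for quantized weights, in GB (~0.55 B/param at Q4)."""
    return params_b * bytes_per_param

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                ctx: int, bytes_per_elem: float = 2.0) -> float:
    """KV cache size: 2 (K and V) * layers * kv_heads * head_dim * ctx * bytes."""
    return 2 * layers * kv_heads * head_dim * ctx * bytes_per_elem / 1e9

# Hypothetical 27B dense config, purely illustrative:
weights = model_vram_gb(27)                       # ~14.9 GB at Q4
kv_fp16 = kv_cache_gb(48, 8, 128, 32_768)         # fp16 cache at 32k ctx
kv_q8   = kv_cache_gb(48, 8, 128, 32_768, 1.0)    # q8_0 cache halves that

print(f"weights ~{weights:.1f} GB, kv fp16 ~{kv_fp16:.1f} GB, kv q8 ~{kv_q8:.1f} GB")
```

Under these assumptions the Q4 weights alone nearly fill 16GB before any KV cache, which is why partial CPU offload (and slow speeds) would be expected.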
DepressedDrift@reddit
I think using the recently introduced turboquant, KV cache quantization, and another setting (fast something) might reduce the VRAM usage.
I'm eyeing a 9060xt as it's the only affordable 16GB card out there so I really hope this can make it work lol
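For the KV-cache part specifically, llama.cpp already exposes flags for this. A sketch (model filename is a placeholder, and flag spellings can shift between builds, so check `llama-server --help` on your version):

```shell
# -ngl 99              : offload as many layers as fit on the GPU
# -fa                  : flash attention (needed to quantize the V cache)
# --cache-type-k/v q8_0: roughly halves KV cache size vs fp16
# -c 32768             : context length; KV cache grows linearly with it
llama-server -m gemma-27b-Q4_K_M.gguf \
  -ngl 99 -fa --cache-type-k q8_0 --cache-type-v q8_0 -c 32768
```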
o0genesis0o@reddit
Isn't it easier to grab a 5060Ti? nvidia pricing is undesirable, but not having to debug is nice. I've had zero problems with CUDA on Linux since I got my 2060 mobile. Meanwhile, my mini PC with AMD has been lying dormant since January because kernel 6.19 messed up the 780M iGPU under compute workloads, like hard-crashing the entire display driver.
If I have the money, I'll replace my 4060Ti with a 5090, and then move the 4060Ti to the mini PC via oculink. That way, I'll have 3 GPUs running for three kinds of models at once.
DepressedDrift@reddit
Where I am, NVIDIA plus sales tax makes it sooo expensive unfortunately. I would end up paying 30% more for a 5060 Ti than a 9060 XT.
z0_o6@reddit
I’m working on a stack using LiteLLM as a router, a tailnet, and my 5090 as the workhorse for as much as possible, with Claude API escalation, LLMLingua for compression, and some headroom as well. It’s still a work in progress, but it should yield a significant reduction in token usage without neutering capability. It also naturally strips a lot of minable data out of any cloud queries.
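The exact stack isn't spelled out, but a minimal LiteLLM proxy config along those lines might look like this. Model names, the port, and the tailnet hostname are all placeholders; the idea is just "local first, escalate to Claude on failure":

```yaml
model_list:
  - model_name: local-big
    litellm_params:
      model: openai/local-27b          # llama.cpp's OpenAI-compatible endpoint
      api_base: http://gpu-box.your-tailnet.ts.net:8080/v1
      api_key: "none"
  - model_name: claude-escalation
    litellm_params:
      model: anthropic/claude-sonnet   # placeholder model id
      api_key: os.environ/ANTHROPIC_API_KEY

router_settings:
  fallbacks:
    - local-big: ["claude-escalation"] # escalate when the local model errors out
```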
silenceimpaired@reddit
I avoid cloud because cloud providers made my hardware, specifically RAM, more expensive.
Have you tried the new Gemma 4?
SpicyWangz@reddit
As a whole I want to get away from cloud, and I do whenever possible. But there's a lot of more complex situations that require me to fall back on it. I probably do around 50/50.
I would love to get away from it completely eventually, mainly because of how bad data centers are for a community.
Electrical_Date_8707@reddit
dude it's so good, I have no idea what google was thinking with this one
silenceimpaired@reddit
Yeah, I’m liking it. I think as local models get better they are also getting more brittle and changeable. I hope I’m wrong, and I guess with Apache licensing we will see.
remoteDev1@reddit (OP)
not yet but I've been seeing really good things about it, especially after the kv cache fix landed in llama.cpp. was worried about the vram usage at first but sounds like it's way more usable now. probably trying it this week
MrHaxx1@reddit
Of all the issues with cloud AI, your issue is that they are giving you discounts???
misha1350@reddit
Yes, it speaks to problems with their liquidity. It's clearly a downward spiral. It's going to get worse and worse for them.
draconisx4@reddit
Spot on. Local setups let you sidestep vendor lock-in and keep tighter reins on your AI runtime for real safety and control, especially when cloud services get pushy like that.