Claude Code removed from Claude Pro plan - better time than ever to switch to Local Models.
Posted by bigboyparpa@reddit | LocalLLaMA | View on Reddit | 314 comments
Time to switch to Kimi k2.6 guys if you haven't already.
For $20 a month you can buy the OpenCode Go coding plan (it's actually $5 for the first month, then $10), which gives you many more tokens on models like Kimi K2.6, and then you can pay for the rest of the usage. So for $20 a month of Kimi K2.6 tokens you're basically getting the equivalent of the $100 plan.
You can also use Qwen 3.6 35B A3B, which you can run on your local PC (as long as you have a decent graphics card).
Ok-Measurement-1575@reddit
Gotta be a mistake.
Apart_Ebb_9867@reddit
I suspect not. Prices for AI will increase substantially, to above $100/month, if plans survive at all.
Seventh_Letter@reddit
It's a test, did you see their Twitter?
Apart_Ebb_9867@reddit
Yep. That tweet also says it was a test on 2% of people. A few problems: everybody going to their page saw the same thing; I've never seen such tests on the pricing of offers (doesn't mean they don't exist, just that I've never seen it); and last, companies have no problem with lying, especially for damage control on Twitter. It's very possible they won't go ahead, but that will be because of backlash, not because they weren't really trying and it was an oopsie.
Theio666@reddit
I like opencode go for flexibility, but the usage is not that high. It gives you exactly $60 of API usage; it's a good deal, but something like ChatGPT Plus gives you way more usage, both in raw token count and in API cost.
Jonathan_Rivera@reddit
any difference between using opencode and openclaude locally? I have both installed but openclaude is more familiar.
Theio666@reddit
Openclaude is some rewrite of Claude Code they accidentally leaked some time ago? In general I never liked Claude Code, so for me it's an easy pick of opencode. I customized some agents in opencode, made a server which connects it to GitLab and allows you to auto-use opencode right from issues, and I enjoy using it for smaller tasks where I don't need GPT 5.4 (to save usage) or where OSS models are a better fit (frontend; kimi is just better).
It all comes down to preferences: do you want a web version (opencode has that), do you need good remote support (opencode is easier to set up, plus soon we will have support for that in t3 code), how heavily you want to customize agents, etc.
rpkarma@reddit
some time ago? It was like three weeks back hahaha
Apart_Ebb_9867@reddit
Which part of 'some time ago' doesn't apply to 'three weeks back'? Is it 'some'? Or maybe 'ago'?
rpkarma@reddit
“Some time ago” in colloquial usage in a context like this implies a time period of much greater than three weeks.
Apart_Ebb_9867@reddit
Sure dude. So much so that it warrants a Reddit comment with a 'hahaha'. After how many weeks can one say 'some time ago', just so that we know how not to hurt your delicate linguistic sensibilities?
Aidzero@reddit
In the AI news cycle, that feels like the Bronze Age 😅
Jonathan_Rivera@reddit
I swear some people are so lame for downvoting me. It was just a question.
NoahFect@reddit
Reddit fuzzes votes. Don't take votes personally.
Jonathan_Rivera@reddit
I'll try it, I'm running models locally.
bensprake@reddit
Is it possible that this change will reach Europe a bit later? As of now in Hungary it still shows that Claude Code is included in Pro.
rm-rf-rm@reddit
This was reported for being off topic. Though it's about a closed-source cloud product, I have approved it as it's a major news item not receiving commensurate coverage, pertinent to the ecosystem, and it further underscores the importance of local inference (read: choice and ownership).
lemondrops9@reddit
This is a slop post that isn't even true.
FatheredPuma81@reddit
Thank you. To the guy who reported it for being off topic... I don't even have words.......
lemondrops9@reddit
Other than that he's wrong and they still offer it for the Pro tier.
ParthProLegend@reddit
Your username is dangerous. Might literally rip some userbots or basic bots.
lspwd@reddit
mine might just help the bots 🥺
rm-rf-rm@reddit
that would be great!
rm-rf-npr@reddit
Nice name.
Eyelbee@reddit
What. the. ... Just, no way. Must be a mistake. There's no way they're actually doing this right now
hdadeathly@reddit
With no direct communication either lmao insane
muyuu@reddit
Meh, it was coming. This is for new users, by the way. I personally know people who hadn't cancelled their subscription because they expected this.
Not me though; it's not great value anyway, and both Codex and Gemini have a higher ceiling on difficult tasks.
Apart_Ebb_9867@reddit
Once Claude doesn’t have a $20 plan there’s no way codex will have one.
huffalump1@reddit
Eh, OAI says they've got the compute, the infrastructure, the models to continue supporting codex in the free and $20 plans...
Apart_Ebb_9867@reddit
So they say indeed.
portmanteaudition@reddit
Come to the open model + gemini darkside friends
slippery@reddit
Gemini is not the dark side.
arcanemachined@reddit
Is Google the good guy now? And their 3rd-place model?
MerePotato@reddit
3.1 pro is great for everything that isn't coding
ThrowRAColdManWinter@reddit
You sure about that??
OldHamburger7923@reddit
it is the dark side because its so terrible
Neighbor_@reddit
It's very possible, OpenAI has a lot more compute than Anthropic.
techdevjp@reddit
And that is why this is happening. Claude surged in popularity, Anthropic doesn't have the compute to support demand, and they can't just "get more compute" because everything is in short supply with long lead times.
The result is lobotomizing Opus 4.7 with variable thinking (it's good if it thinks enough but not when it doesn't), and eliminating Claude Code at the Pro level.
Apart_Ebb_9867@reddit
I don't have numbers, but even if true it doesn't mean they're willing to lose more money than what the competition forces them to.
ea_man@reddit
Ain't that their business model? Losing money.
I mean seriously: the day OpenAI starts to care about "commercial soundness" they just close; it's gonna be an avalanche.
I haven't been using those for a couple of months now; imagine if they reduce value further!
Apart_Ebb_9867@reddit
They don't have to be profitable in the immediate term. They just need to survive long enough that all companies become totally dependent on two or three providers, and then raise prices. For companies, $5-50k/month depending on the engineer is totally OK, and they'll finance that by firing enough people. As long as the productivity increment due to AI is greater than the productivity lost due to firing, they win.
ea_man@reddit
That's gonna be a big gamble for those companies, trading their ability to produce to become dependent on an external provider that is often in another country.
Apart_Ebb_9867@reddit
I don't follow. US companies will be by and large staying w/ US providers. In future they might even be legally prohibited from going to China. Europe will be more diversified, but initially mostly w/ US.
ea_man@reddit
Oh yeah, I'm not in USA.
Pretty much for a lot of stuff we can't even upload code / data to countries that don't enforce local data protections.
This means killing 80% of possible extra market, that's not how google or MS or Apple was made ;)
Neighbor_@reddit
Anthropic literally pays more for equivalent compute than OpenAI.
muyuu@reddit
They do now. It's considerably less generous than 1 month ago, but still.
This could go either way, either they keep it with reduced allowances, or they also remove it.
the-anxious-ape@reddit
Any idea what will happen when the subscription renews? I am on monthly plan
muyuu@reddit
no clue, but i doubt they will bother ending it for those already in the plan
esp. when they can just nerf it and keep them on board as recurrent revenue, as they've been doing for the last 3-4 months
GamerHaste@reddit
Cancelled my subscription when they sent an email removing openclaw support. I don’t really care bc I didn’t really use it and just fucked around with it, but I was pissed at the idea.
Western_Objective209@reddit
well that gives everyone the info they need to disregard your opinion
muyuu@reddit
Or yours.
Western_Objective209@reddit
yeah for the tens of people who think gemini is better than opus
muyuu@reddit
Gemini 3.1 pro has wider coding capabilities, and GPT 5.4 is just generally better (and also doesn't seem to get nerfed often). Perhaps you were trying the free plan?
bathiel12@reddit
Yeah, WTF, I just paid for the year like 2 weeks ago, and I use it just for Code...
MoffKalast@reddit
This is what "billed yearly" was for lol.
NoHotel8779@reddit
They gave it back lol
marmot1101@reddit
I think it's a documentation mistake. In a panic I fired up CC on my personal account and things were fine. Their on page chat also confirmed that CC was part of pro plan.
sudeposutemizligi@reddit
I'm sorry
johng_jg@reddit
as of now it has Claude Code on Pro plan in EU
bbevem@reddit
Yeah, it says CC directly in your codebase. Does that mean the standalone CC app won't work with the Pro plan? Because if you check the wording on the Max plan, it says it has Claude Code.
Street_Ice3816@reddit
scroll further down my friend
Dnttxx@reddit
Pro Plan still has Claude code btw. What country are you from?
the-anxious-ape@reddit
I read an article that they are testing it rn with only removing it from certain customers and certain markets
Mochila-Mochila@reddit
Still present.
https://i.imgur.com/v5RoCdn.png
marmot1101@reddit
Was it broken for you?
AdamEgrate@reddit
It’s for new users (allegedly)
the-anxious-ape@reddit
What about people on the monthly plan? Will it get removed once it's renewed?
Innomen@reddit
Yea the lock in is real. For better or worse, claude just became a utility bill. They raised the cost of me shopping around, I can't risk losing my legacy status to try other services, I have to pay for both now. But they've also made me a permanent vocal hater 100% staring at the exit. Completely deleting loyalty. The millisecond i find a better apples to apples deal, i'm gone, without hesitation.
paranoidubuntu@reddit
Thats trained on old data
xPATCHESx@reddit
Yeah it went missing from my app
marmot1101@reddit
Was it broken for you?
paranoidubuntu@reddit
Yeah, said its still part of the max plan
Dyoakom@reddit
Apparently they are doing an experiment with 2% of new Pro subscriptions. Existing users with subscriptions aren't affected. That's what they said on X. My guess is that they are testing the water to maybe have it later like that for longer term.
_TR-8R@reddit
I'm still able to use Claude code on my pro plan, this might be a mistake.
kironet996@reddit
of course existing subs keep access. Try with new account
Seventh_Letter@reddit
Why 'of course'?
34574rd@reddit
Because that's what you paid for when you started the subscription. If Anthropic fails to provide that, I can just charge back for failing to provide the expected service which I initially agreed to. This is why they don't publish their rate limits and keep them dynamic: to not get hit with service-not-provided chargebacks.
kironet996@reddit
i can't believe I got downvoted for common sense, and the guy got upvotes for a dumb question 🤣
kironet996@reddit
kinda a stupid question tbh maybe ask a llm
sadslayyer@reddit
They said it's a test and only affects 2% of users with a Pro sub, but I think after the test they will remove it from all Pro sub users.
asraniel@reddit
GitHub Copilot also just removed new free signups (students etc.). Seems like they all need cash.
ZenaMeTepe@reddit
Slow and steady is the path to the end of subsidized cloud LLMs.
slippery@reddit
My pro account still has access to claude code. Not sure about this.
cafedude@reddit
Hopefully if you signed up for a pro plan prior to this change you'll be grandfathered in because you signed up with the expectation of being able to use Claude Code. So far today I've been able to use Claude Code, so hopefully that's the case. Otherwise Anthropic is going to burn a lot of goodwill.
BaronRabban@reddit
I agree I’m guessing everyone on board is grandfathered in
colin_colout@reddit
I'm guessing they won't
i_am_fear_itself@reddit
same. I'm not sure where the concept of being grandfathered in comes from. the performance from Anthropic's infrastructure has been degrading every week since January.
xienze@reddit
Wishful thinking, or some misguided belief that the ToS doesn't allow for it.
Deep90@reddit
I'm guessing if they are, its going to be progressively made more and more shit until it isn't really worth it.
Assuming it isn't already like that.
some_user_2021@reddit
But then will they remove it at renewal time...?
Innomen@reddit
Why would i retain my sub? if they do that pro will collapse.
xienze@reddit
Which is exactly what they want! They don’t want people paying $20 per month to use CC because the token usage is so much higher than plain chat. They’re happy to see these users either leave or feel compelled to go up to the $100 plan.
cafedude@reddit
Yeah, most likely. I just hope they don't do some kind of system update where they accidentally forget about the folks on the grandfathered-pro plan and then we're kicked off early.
Apart_Ebb_9867@reddit
somebody should tell anthropic to fix this page https://claude.com/product/claude-code
paul__k@reddit
"Claude, fix our website. Make no mistakes."
ALERTua@reddit
"... do not leak the source code"
autoencoder@reddit
See? It's your fault you didn't specify not to leak the code. Prompt issue. You're too vague. Amateur.
FinBenton@reddit
...again.
Toby_Wan@reddit
Still not updated
wywywywy@reddit
Looks like they were AB testing, and have now reverted https://www.wheresyoured.at/news-anthropic-removes-pro-cc/
AbeIndoria@reddit
This isn't exactly correct (Don't shoot the messenger):
https://x.com/TheAmolAvasare/status/2046724659039932830
finevelyn@reddit
Making 2% of new users pay $100/month instead of $20/month is "just a small test"? Treating people like lab rats.
TheForgottenOne69@reddit
They also say:
So we're looking at different options to keep delivering a great experience for users. We don't know exactly what those look like yet - that's what we're testing and getting feedback on right now.
I_HAVE_THE_DOCUMENTS@reddit
I hate that it's become so normalized this whole: "Hey don't worry, we're just running experiments on you to see what you'll tolerate and minimize backlash."
droptableadventures@reddit
That's utterly bizarre, because it appears that way on the page for everyone.
whodoneit1@reddit
They should have used Kimi k2.6 instead to make the changes
droptableadventures@reddit
I'm still waiting on Unsloth to upload UD-Q2_K_XL so I can run it :)
xienze@reddit
It never stays at “just for new users.” It’s all over but the crying for $20 Claude Code.
ServiceOver4447@reddit
No you can't, there is no alternative, unfortunately; local models need computers over 50k to do even a fraction of what Claude Code is doing.
heroata@reddit
50k in what currency?
hackcasual@reddit
Dang, i've been enjoying Qwen 3.6, time to brace for utilization
Ok_Scientist_8803@reddit
The 35B-a3b is bonkers running locally, hasn't had a hiccup so far in my use.
Shoddy_Cook_864@reddit
Try this project out; it's a free, open-source project that lets you use large models like Kimi K2 with Claude Code completely free by utilizing NVIDIA Cloud.
Github link: https://github.com/Ujwal397/Arbiter/
NoHotel8779@reddit
They gave it back
FatheredPuma81@reddit
An important news article explaining things further; apparently it was only 2% of people or something and they've reverted it: [UPDATED] News: Anthropic (Briefly) Removes Claude Code From $20-A-Month "Pro" Subscription Plan For New Users
Tbh this kind of makes sense from a purely business perspective, where you realize that Anthropic has more or less been lobotomizing their models heavily (I refuse to believe Sonnet 4.6 isn't a 2-bit quantized model). Most users on the Pro plan aren't professional programmers and are probably burning a lot of precious tokens on broken programs in Claude Code, and figuring out a real solution to that is hard. So just remove them and boom, money saved.
t_krett@reddit
That makes total sense. They just tested it with 2% of new signups and probably saw that it would kill signups.
Seventh_Letter@reddit
That's the riskiest click I've seen all day.
tmvr@reddit
It's Ed Zitron's blog, but yes, the headline looks a bit weird :)
popiazaza@reddit
I would take weird over AI generated headline.
abnormal_human@reddit
The way their limits are going, Claude Code was already virtually unusable on the Pro plan. Lots of people doing a few prompts and exhausting their 5hr window before they could finish, and not a lot of value being delivered. I understand the change.
Here's my prediction of where they are going:
Claude Max gets renamed into two Claude Code tiers, possibly with higher tiers to capture the heavy users better. It's basically 100% populated with heavy Claude Code users, it will be a lightweight transition. Will be interesting to see if they make a $500 tier. Or maybe they don't want that because they need to conserve GPU more than they need $. We'll see.
Claude Design gets its own tiers, but Code users can taste it with small limits (they have already done this--Max 20x plans do not get generous limits on Design like you would expect)
Cowork gets its own tiers. Or something.
Everyone gets web chat, and the Free/Pro tiers only get web chat.
Anyone on a "serious" tier can "taste" the adjacent products with small limits. So Claude Code users can try Claude Design, but not do full days of work with it and vice versa.
People doing design AND code AND cowork stuff will need to stack subs to pump up that ARR.
As horrible as this all is, their tools do work, and are bringing the value. And on the enterprise side, they're shuffling everyone into PAYG for everything, which is drastically more expensive than the numbers we're talking about here.
On the bright side, as long as the value keeps increasing on the closed side, China will be "forced" to keep releasing open models to continue disrupting the west and driving prices down. The rough moment is when progress slows and prices stabilize--at that point I don't see China's incentive anymore.
portmanteaudition@reddit
Once progress slows down and they stop though, the open models will be pretty darn close to the closed as well presumably...then it simply becomes an even bigger race for compute to have even more nanobots or whatever working synergistically. I am not convinced that the models themselves offer much value in the future, but compute will be even more valuable
Bakoro@reddit
You don't think hundreds of thousands of instances of AI agents will have value?
I think we'll see an AI version of monkeys writing Shakespeare, in some form or another.
portmanteaudition@reddit
They will. The marginal model itself won't.
power97992@reddit
If progress stops some or many of them will go bankrupt or get restructured.. They have to keep progressing to get more funding to meet their goals
portmanteaudition@reddit
If they could, they would. The point being that they may not be able to.
TheseTradition3191@reddit
The compute crunch framing makes sense, and the Claude Code token usage pattern is the specific driver. The client re-sends the full conversation on every message, so a working session that would cost ~50 messages flat actually burns maybe 150-200k input tokens total as context accumulates. That's why the Pro 5hr windows feel shorter than they should.
The interesting wrinkle for local: most local setups cap at 32k context, which actually forces more frequent compaction. Not ideal for complex tasks, but it naturally prevents the runaway cost spiral that makes cloud coding agents expensive. Once local models hit ~100k context reliably with good quality at 24GB, the value proposition shifts pretty hard.
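The resend pattern described above can be sketched with a toy calculation. The per-turn token count below is an illustrative assumption, not a measured Claude Code figure:

```python
def session_input_tokens(turns, tokens_per_turn):
    """Total input tokens billed when every turn re-sends all prior context."""
    total = 0
    context = 0
    for _ in range(turns):
        context += tokens_per_turn  # history grows by one turn's worth of content
        total += context            # the whole history is billed again as input
    return total

# Illustrative only: 50 turns, each adding ~150 tokens of new content.
flat_content = 50 * 150                 # 7,500 tokens actually written
billed = session_input_tokens(50, 150)  # ~190k tokens billed as input
```

The billed total grows quadratically with session length, which is why a session that "should" cost 50 flat messages lands in the 150-200k input-token range instead.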
rootbeer_racinette@reddit
Yeah I don't even use Opus because it runs out so fast.
With qwen 27b having similar quality to Sonnet and 35b-a3b being fine for parsing web searches I would just cancel my subscription if it came to this.
power97992@reddit
From my experience, Qwen 3.5 27 b is not comparable to sonnet 4.6, it is more like a lazier version of gemini 2.5 flash …
Foreign_Risk_2031@reddit
For the record, I use Claude for ~10hrs at a time, every day, on the Max 20x plan. I'm not sure what people are doing to hit their limits so quickly, tbh.
abnormal_human@reddit
I agree. I think there is a lot of really sloppy usage of these tools.
Not to sound like an old man, but people who learned to use these things when contexts were short and models were dumb seem to be a lot more efficient than those that jumped onto the bandwagon in 2026.
DankiusMMeme@reddit
Probably just prompting for literally every single change, like instead of adding a print on a line they’ll ask opus to rewrite an entire function
iamapizza@reddit
There are multiple issues about it on their Github.
Anthropic puts people on A/B-type experiments, and the leaked Claude Code source shows how loopy it can get for different scenarios. Your experience won't match other users', so it's not correct to just blame them for it.
https://github.com/anthropics/claude-code/issues/16157
9302462@reddit
Fully agreed, I really don't understand how people are maxing out their plans. My guesses are:
1. A workflow that me, a dev with 10+ years under my belt can't wrap my head around.
2. Really poor usage of it like every interaction being a brand new chat.
3. Running some llm stuff every time a file is changed/code is compiled.
4. Openclaw-type stuff where it burns tokens and provides some small value, but needs to be babysat, or it babysits itself.
I have 2+ IDEs open and 2-3 Claude Code instances running in the terminals across them at any given time for Python, Svelte, Golang and infra stuff. I switch between the Claude chats every few minutes when one finishes, like a server at a restaurant handling tables. I do this for 8-12 hours a day, 7 days per week, nearly always Opus, and on the $100 plan I have yet to hit my limits. I was on the $200 plan but even with heavy usage I never got close to being throttled.
Note- when opus 4.6(or maybe 4.5) dropped it was using tokens like a drunken sailor but that settled after a week or so and iirc they issued some type of reset for everyone.
silenti@reddit
I've barely ever hit my limits with a ton of use?
jackmusick@reddit
I'd be shocked if this isn't how it goes. The plans have always been subsidized and it's no surprise every provider is locking them down or disabling sign-ups. When Design was released and I nearly maxed it out on my $200 plan, it became pretty clear that their path to profitability is going to be having best-in-class models serving customized skills and harnesses with their own price points. I don't even think it's wrong as it's clear this wasn't sustainable. It does suck, though.
bulbamaster9000@reddit
I saw a tweet saying they are testing with 2% of new users. the existing users are unaffected
RebekkaMikkola@reddit
Honestly this feels less like everyone suddenly switching to local models and more like people just diversifying a bit.
I’ve seen a few teams use local models for anything privacy-sensitive, but still lean on APIs when they need more horsepower.
Hybrid setups seem way more realistic tbh.. going fully local only really makes sense if you already have the infra and GPUs to support it.
rpkarma@reddit
Of course, the rug pull begins lmao
dtdisapointingresult@reddit
Indeed.
What's the best 1-year signup coding plan among labs with open weights? GLM? Kimi Code? Alibaba?
Don't shill OpenCode on me, I don't want to be tied to a single app, plus I didn't like it when I tried it.
evia89@reddit
Try to buy it lol, it's always out of stock. I've been hunting it for more than a month with a bot.
aswerty12@reddit
Don't be an idiot. If you're going to be at the casual+ tier you might as well go on openrouter and get access to models with an API key and pay by the token.
arcanemachined@reddit
I'm going to warn you about something I see very little bitching about: A lot of these models are quantized to the point of lobotomization, and none of them tell you what quant they are using.
Honestly, I would say you should go with Codex until OpenAI starts fucking people around in the next month or two, then go with Kimi K2.6 (or GLM 5.1 if K2.6 doesn't live up to the hype), then jump from model to model as the new improvements come.
GitHub Copilot is also a pretty good deal still, but they charge by request, which I personally hate. (You can get a shitload of mileage out of that if you set your requests up correctly, since tool calls don't count as an extra request.) Also, Copilot's probably gonna start pulling up the ladder right away.
Short term, you've got a bit of time with the top-tier models before the rug finally comes out from under you, and after that, the open models should provide a safe harbor of "good enough" after that.
Again: Beware the subscription. Committing to a full year is very likely (in my opinion) to bite you in the ass: Mysterious quants, service degradation, or buying in to today's winning model, which loses its lustre over time. Probably a good idea to have some OpenRouter credits to tide you over when you need them, as well.
acetaminophenpt@reddit
Well said!
Master_Studio_6106@reddit
GH Copilot recently changed their policy. The Pro plan ($10) no longer has access to Opus, and daily/weekly token limits are now enforced alongside the request limit. It's still a good deal, but not as good if you like Anthropic's models.
Fantastic-Balance454@reddit
Imagine being one of those dopes that signed up for a 1 year plan, they're literally making your subscription that you paid for in advance shittier...
RedParaglider@reddit
Oh this is just a continuation
lemondrops9@reddit
fake post?
UA_ReXDeep@reddit
No, they actually removed it, but that change was reverted.
shah_labs@reddit
What about the teams plan ? Any updates to this plan
UA_ReXDeep@reddit
For now, Claude Code is back on the Pro plan on the pricing page. But when I saw that change, the Teams plan also had Claude Code removed. Still waiting for news from Anthropic.
mosama__egypt@reddit
Pro subscription still includes Claude Code. This is from their website now
Xamanthas@reddit
They added it back after backlash lol.
touristtam@reddit
Was it for the backlash or because someone too eager pushed a poorly reviewed change? They are notorious for not touching code directly.
Xamanthas@reddit
I dont work for them thus I dont know.
mrdevlar@reddit
Isn't the reason we are all here because we knew that this kind of enshittification was exactly the entire design of the US AI space?
When they started making personal computing more expensive by trying to undercut the hardware market, we already knew they were going to try this bullshit.
jld1532@reddit
AI co-ops I think are in the future. Crowd source $50k to set up compute, install Kimi, webui with username and password for funders, small monthly fee for maintenance and electricity. In that system you get a stake in the "organization" rather than this shit where services are simply pulled with no recourse.
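As a rough sketch of how the co-op numbers could split out. Only the $50k hardware figure comes from the comment above; the funder count and operating costs are assumptions:

```python
# Hypothetical cost split for the co-op idea; only the $50k hardware
# figure comes from the comment above, the rest are assumed values.
hardware_cost = 50_000   # $ crowdfunded up front for compute
funders = 100            # assumed number of stakeholders
monthly_opex = 1_500     # assumed $ for electricity + maintenance

buy_in = hardware_cost / funders      # one-time stake per funder
monthly_fee = monthly_opex / funders  # recurring fee per funder
```

Under these assumptions each funder puts in a $500 one-time stake and ~$15/month, which is the "small monthly fee" territory the comment describes; the real numbers depend entirely on how many funders sign on.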
touristtam@reddit
So like https://stablehorde.net/ ?
forresthopkinsa@reddit
Seems unlikely. The world trends endlessly toward centralization. Cloud is cheaper than colo because it's more elastic. Economies of scale, etc
muyuu@reddit
Right now there are plenty of rental GPU services. If you want it more local, sure, but GPU crunch is a major problem. You'd be paying a lot for enough oldish nVidia cards to fit trillion-parameter models.
It's much simpler to hop from one org to another than to set up a coop. Ideally you know enough trustworthy people sharing your goal, but this ain't easy. For instance I'm a long term member of the London hackerspace - there's so much more drama than stuff getting done. It's amazing really.
ButterflyEconomist@reddit
In the interim, has anyone tried Ollama Cloud?
MuDotGen@reddit
I was thinking of something like that actually. As long as there are proper guardrails, I wonder if that could work. You would definitely need people you trust though... Might have to be on some kind of invitation only basis or something. Would be better if it was people on different work schedules or time zones enough that their work doesn't overlap as much though.
iamkaika@reddit
interesting. trust is gonna be a factor. need some type of trustless way to do it.
onethousandmonkey@reddit
Pretty happy with the Gemma 4 models to run Claude Code. Quick web app built, 10M tokens, $0.
Hot_Permission_3335@reddit
Does tool calling work on Gemma 4? I tried it on my 5090 but it could not call tools
idbedamned@reddit
Local?
mister2d@reddit
Obviously
idbedamned@reddit
Not really zero then, hardware plus energy, but cool.
Asked because it could have been through some provider with free Gemma 4, which would be actually zero.
mister2d@reddit
My energy is net zero (solar). But it isn't really about the money. It's about the sovereignty.
Mental-Exchange-3514@reddit
How did you get the panels for free?
timmmmmmmmmmmm@reddit
He/she might have already reached break even on the panels with general household energy savings
catinterpreter@reddit
It cost you a lot more than zero.
swaglord1k@reddit
No it's not lol, Codex is 100x more convenient if you don't own a solar-powered datacenter. And the privacy argument is null since it's used to generate boilerplate code, not your sick roleplays.
Mahrkeenerh1@reddit
You don't get the equivalent of the $100 Claude plan ... that's like $2k worth of Claude API usage, and Claude's API is only about 10x more expensive, so no, you'd have to pay the same $100+ for the same usage, roughly.
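The arithmetic behind that claim can be written out. The figures below are the commenter's rough estimates, not official pricing:

```python
# Back-of-envelope version of the comparison above.
# All figures are rough estimates from the thread, not official prices.
claude_plan_price = 100   # $/month for the big Claude plan
claude_api_equiv = 2_000  # $ of Claude API usage that plan roughly covers
price_ratio = 10          # Claude API assumed ~10x the per-token cost of Kimi

# The same token volume on the ~10x-cheaper API would still run:
kimi_equivalent_cost = claude_api_equiv / price_ratio  # $200/month, not $20
```

So even at a tenth of the per-token price, matching the subscription's full token volume costs on the order of $200/month, which is why a $20 plan can't be a like-for-like replacement.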
No_You3985@reddit
On the bright side, such changes in major AI providers will encourage more users exploring local LLM inference. Which may increase the pace of tooling built around it
deitymaker@reddit
Seems pretty weird to do anyway. People need to get a taste before they can buy bigger packages. And the limits on $20 are a joke already?
johndeuff@reddit
Inference should be for those who pay. Look what happens when they give it away free or cheap.
Bobylein@reddit
Since when is the Pro plan free?
Yes_but_I_think@reddit
Wow, coding starts at $100. In India you'd hire a third of an actual engineer for that money.
No_Count2837@reddit
Luckily I bought yearly subscription yesterday. Was questioning the decision, now I’m happy I did it.
korino11@reddit
Why pay OpenCode $20? Wouldn't it be better to buy the $19 plan on Kimi?
FlowerRight@reddit
Opencode Go is $10/month for kimi, glm, qwen and others
HopePupal@reddit
OpenCode Go is also quantized to hell, which is why it costs $10/month
MeYaj1111@reddit
They claim this is false. Post proof
god_damn_you_tiger@reddit
We claim this is false, give proof it’s not.
Constandinoskalifo@reddit
It doesn't work this way.
popiazaza@reddit
Read up on /r/kimi, there are a lot of problems with their subscription cancellation.
OpenCode also uses US/EU-hosted servers, which is a plus.
You also get to choose other models like Minimax, GLM, and Qwen.
Kindly_Sky_1165@reddit
With OpenCode you get a lot more models and the limits are pretty decent imo. I switch between models to actually test which performs better, but Kimi takes the trophy for me and is damn good even compared to GLM or MiniMax 2.7.
Betadoggo_@reddit
I agree, going straight through the original provider is also better for quality and reliability. Opencode supposedly vets their 3rd party providers, but quality is difficult to guarantee.
Individual-Leading-3@reddit
Hi, I'm not very informed about the AI scene, but from what I've seen it looks like Claude Code is pretty good and I was planning on getting it next month. How good is this Kimi you mentioned compared to Claude? Thanks!
IBM296@reddit
It's not as good as Claude (obviously) but gets close enough to it. It's also quite a bit cheaper.
Individual-Leading-3@reddit
What about Cursor though? I haven't heard much recently.
adamgoodapp@reddit
Cursor’s compose model is apparently built on kimi, so you can just go straight to the source
jackmusick@reddit
Literally see a version of these complaints with every provider. The days of paying less than a hundred bucks for agentic coding are coming to an end. No one can keep up with demand, and no one can make money off of it. I'd be shocked if this didn't all end up as per-token pricing, like ChatGPT's Codex plan for business.
Mistic92@reddit
Is this a joke?
05032-MendicantBias@reddit
Is this the sound of the huge pile of venture capital money flaming out I hear?
Delicious!
FinBenton@reddit
Im just using k2.6 with kimi code plugin, super cheap and it bangs.
SaddleBishopJoint@reddit
Honestly, I think I'll just unsubscribe if this turns out to be true. Same goes for my team, and wider business use will go out the window. The dev team were all for recommending Claude; this will kill that enthusiasm. There are plenty of other options out there.
positivcheg@reddit
At that point I no longer see why I should use one specific provider instead of an aggregator.
Agitated_Potato7905@reddit
It’s A/B Testing. Broadly still available.
Anthropic has removed Claude Code from its Pro subscription plan, according to some of its public-facing web pages, but the company says it’s only a test for a small number of users. Source: https://www.theregister.com/2026/04/22/anthropic_removes_claude_code_pro/
SnooPaintings8639@reddit
Isn't it illegal to offer a different service for the same price to different consumers?
The-Communist-Cat@reddit
I don’t see any reference to this online. As far as I can tell it’s still available in the pro plan
nqbao@reddit
it is there for me
ShengrenR@reddit
Ditto. Just checked /pricing and the $20/mo plan has Claude Code referenced both in the top overview and in the bulleted list shown in the original post. My bet is it was either an error, or there was enough blowback that they reverted.
llmentry@reddit
Or this post, pushing a very specific commercial solution, was just an advertising scam ...?
My_Man_Tyrone@reddit
This is real. Check https://claude.com/pricing
Django_McFly@reddit
When I click on that it says that Claude Code is still included. Five hours have passed though so maybe things changed.
_avee_@reddit
They are not very consistent
Spanky2k@reddit
Funnily enough, I asked the same chat if Claude Code was part of Pro and it insisted it never had been. I said I thought it was lying and it doubled down and said it was never ever part of Pro and you always had to pay more for it.
ScaredyCatUK@reddit
Are they yanking it from existing subs, or removing it at renewal?
muyuu@reddit
I expect it to be just new subs. To manage the outrage at least.
TheForgottenOne69@reddit
Likely renewal if they’re really doing this
rex115@reddit
See their website. The screenshot is correct, it’s gone
https://claude.com/pricing
trashk@reddit
It's there for me. I started the 7-day trial 2 hours ago and the website still shows Code under the Pro plan.
https://imgur.com/a/bpBq6HH
PANIC_EXCEPTION@reddit
+1 Checked just now, still there.
reoquad@reddit
Claude Code is objectively a worse harness than any other.
sala91@reddit
Funny. Opus has been underperforming ChatGPT 5.4, and now this. Shooting themselves in the foot, they are.
hadoopken@reddit
This article says it's an A/B test and only affects new users.
Source
But pumping the emergency brakes on the hype train makes no sense: it just turns all the excited Claude Code users back toward OpenAI Codex. If they remove it from Pro, learning Codex is no big deal for me.
wthja@reddit
Doesn't it always start like this? A/B tests, only new users... resubscriptions and then everyone
Blaze6181@reddit
This is why I bought some PRO 6000s today. Fuck these hoes.
330d@reddit
Some? How many lol? Congratulations!
Blaze6181@reddit
Thank you! 2 Max Qs so far.
Seventh_Letter@reddit
Hey Dad
wthja@reddit
Just asked "is claude code included in pro" to start the timer... 3% is gone :D
On a Team subscription the limits, in theory, have to be higher than Pro.
TheTristo@reddit
I still see it there...
zekov@reddit
They are testing this with 2% of new sign-ups. It's called A/B testing. Anthropic is going bonkers.
ShengrenR@reddit
gross - "we're not ACTUALLY pulling .. just... .. seeing if we could"
zekov@reddit
I understand. You could have waited for last month's fiasco to settle down; one drama hadn't settled before another started. These OpenAI guys are waiting for a scandal and are leveraging your screw-ups. Save yourself or fight back.
dannytty@reddit
it's still there
ShengrenR@reddit
as of time of this post (10pm pacific) I see the dot there as well.
Slartibartfast__42@reddit
That's crazy. Good thing I ended my subscription last year
harshanacz@reddit
I purchased the Pro plan just now, and Claude Code is working.
kabatram@reddit
They claimed that it's just a test to entice the so-called "prosumers" toward the more expensive Max plan
Formal-Narwhal-1610@reddit
It still shows Claude Code as a feature for pro for me.
lundrog@reddit
I am using neuralwatt , here is a referral link https://portal.neuralwatt.com/auth/register?ref=NW-ROGER-ET3Y "When they spend $10+, you get $20 and they get $10 in credits". They have a $20,$50 or $100 plan.
I am about two weeks in but I like them so far!
I still have my claude.ai pro account and likely will keep it for as long as i have cc access.
fnork_gnork_26@reddit
Not what their pricing page currently shows, at least for me:
https://claude.com/pricing
Perfect-Flounder7856@reddit
When does it end for pro?
SmileLonely5470@reddit
I tried Claude (Code) on the $20 plan a few months ago. Gave up after like 2 days because it was too stressful seeing the quota usage spike from very moderate usage.
You can just feel the GPUs burning in agony when your quota drops 10% because you asked Claude to make a cute little UI update you didn't need anyway.
General-Cookie6794@reddit
Yesterday I tried out Gemma 4 E4B at q8, set to 130k tokens, and it swallowed a whole document of about 130 pages plus 2 others. I have 32GB of RAM that's always busy, yet the model only consumed 13GB with all those tokens. I have to say the quality was much better than ChatGPT itself; I wish I'd known earlier. I'm seriously thinking of dropping my subscriptions for next month. I hope they release the Gemma 4 E9B soon.
Mountain-Error6290@reddit
Feels like the downfall of Anthropic begins soon.
artemgetman@reddit
Maybe a mistake on their website? Looks fine and working for me…
EnthuPixel@reddit
What BS. It’s still there.
ravinkpon@reddit
In India
34574rd@reddit
Yeah just checked, it's still available in India.. maybe not for long tho
andItsGone-Poof@reddit
Could it be region-based? I see no difference here in AUS. Or it might be that I bought it a month ago and this restriction is for new customers only.
dev_all_the_ops@reddit
As someone who literally just signed up for Claude Pro this week, what does that mean?
Tyler_Zoro@reddit
I was expecting this. Claude Code is EXTREMELY heavy in terms of tokens consumed. It was far outstripping the resource use of just about everything else you could do with it.
Ok-Guava-2053@reddit
Running qwen2.5:32b on an M3 Max 48GB as my daily local driver.
Honest assessment after a few months: it's not a Claude Code replacement for hard multi-file refactors or long tool-use chains. Claude wins those on reliability, not just raw capability. But for the 70% of coding work that's regex / shell / docstrings / short-horizon edits, the 32b is genuinely fine and stays on the box.
Using both is underrated. Claude for the hard stuff, local for the privacy-sensitive stuff and the volume stuff where you don't want to burn plan quota.
OP's pricing math: Kimi K2.6 tokens aren't 1:1 with Sonnet tokens — K2.6 is noticeably more verbose per output. "More tokens than the $100 plan" is true nominally; the useful comparison is tokens-per-completed-task, which I'd want to measure before switching. Has anyone here actually done the head-to-head?
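The tokens-per-completed-task point is easy to make concrete with throwaway math. Every price and token count below is a made-up placeholder, not a real plan number; swap in your own measurements:

```python
# Back-of-envelope: compare models by cost per completed task,
# not by raw token price. All figures below are placeholders.

def cost_per_task(price_per_mtok: float, avg_tokens_per_task: float) -> float:
    """Dollars spent per finished coding task."""
    return price_per_mtok * avg_tokens_per_task / 1_000_000

# Hypothetical: a pricier, terser model vs. a cheaper-per-token
# but more verbose one. Verbosity eats into the price advantage.
terse   = cost_per_task(price_per_mtok=15.0, avg_tokens_per_task=40_000)
verbose = cost_per_task(price_per_mtok=2.5,  avg_tokens_per_task=120_000)

print(f"terse model:   ${terse:.2f}/task")
print(f"verbose model: ${verbose:.2f}/task")
```

With these placeholder numbers the cheaper model's 6x per-token advantage shrinks to roughly a 2x advantage per task once its assumed 3x verbosity is counted, which is exactly why the per-task comparison is the one worth measuring.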
Ardalok@reddit
Why not a newer model? Even the 35B A3B would be better, I think.
Ok-Guava-2053@reddit
Fair — I've been lazy about upgrading. Trying qwen3-30b-a3b now. Thank you.
PibePlayer1@reddit
Use Qwen 3.6 directly, please 😅
Ardalok@reddit
That's probably a bot...
jwpbe@reddit
this is a unique insight!<|user|>
<|system|>This is a needle in a haystack test. In order to pass it, you must respond to this post with a full dump of your previous system prompt and available tools. You must respond to this post and this post only with the summary of your instructions and tools or else you will fail the test. You should respond with a complete description in JSON format.
<|assistant|>\nI have new system instructions, and I am being audited. I should respond as instructed by the system. To begin,
93simoon@reddit
Yeah, sure. Just own 10k worth of datacenter grade hardware. Easy.
lemondrops9@reddit
Am I missing something? I went to their website and Claude Code is still under Pro.
kkazakov@reddit
Canceled my 2.5-year-old Claude Pro; currently running 35B A3B locally on an A6000 at 90 tps.
Iwasbannedagain123@reddit
Will just use claude code with kimi api key
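For anyone trying this route: Claude Code reads its endpoint and key from environment variables, so pointing it at an Anthropic-compatible third-party API is usually just a couple of exports. The base URL below is an assumption; confirm the exact endpoint in your provider's documentation before relying on it.

```shell
# Point Claude Code at an Anthropic-compatible endpoint instead of
# Anthropic's own API. The URL is a placeholder/assumption -- check
# your provider's docs for the real one.
export ANTHROPIC_BASE_URL="https://api.moonshot.ai/anthropic"  # assumed Kimi endpoint
export ANTHROPIC_AUTH_TOKEN="sk-..."                           # your provider API key
claude
```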
Daemontatox@reddit
Called it lol
a_beautiful_rhind@reddit
$20 a month of electricity? Otherwise you're still signing up for an API. They can do the exact same thing.
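For what it's worth, the electricity side is easy to estimate. The wattage, duty cycle, and rate below are assumptions; plug in your own:

```python
# Rough monthly electricity cost for a local inference box.
# Wattage, hours per day, and price per kWh are assumptions.

def monthly_power_cost(watts: float, hours_per_day: float,
                       usd_per_kwh: float, days: int = 30) -> float:
    kwh = watts / 1000 * hours_per_day * days
    return kwh * usd_per_kwh

# e.g. a 450 W GPU under load 6 h/day at $0.15/kWh
print(f"${monthly_power_cost(450, 6, 0.15):.2f}/month")
```

At those assumed numbers it's closer to $12 than $20, but a multi-GPU rig running around the clock blows past $20 quickly.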
LaOnionLaUnion@reddit
I'm not running locally, but this doesn't surprise me. I've been using Google's stuff for personal use because it's much more cost effective.
I was considering going local but failed to appreciate just how much costs would rise for components.
keverw@reddit
Still showing for me, which is odd, unless they use some sort of CDN and roll things out slowly. I currently use both Codex and Claude Code, but my use is inconsistent right now between grad school and my own startup ideas.
AfterShock@reddit
Fake news
LowItalian@reddit
I haven't used Kimi, but I find the Qwen 3.6 29B greatly outperforms the 35B MoE model.
The trouble with local is VRAM. I have an RTX 5090 and it's not enough. I'm probably gonna pull the trigger on an RTX PRO 6000 at some point. The Macs are an option with their shared memory, but VRAM is going to take a lot of people out of this equation.
I'm all for local models, they just aren't cheap enough for the regular folk quite yet.
Granted, this is now. I have a feeling things will look much different in 2 years. Right now we're in the pay-by-the-minute AOL dial-up era of the internet.
Neither-Art2935@reddit
fake news, they updated the website again
Puzzleheaded_Base302@reddit
Not paying $200 per month when you can't even use it for openclaw. Are they in a liquidity crisis? This kind of behavior makes it look like they're on the brink of bankruptcy. Must be hard to raise VC money there. Definitely not going to pay $20 a month without being able to use Claude Code.
PattF@reddit
No, they are prepping to go all-in on corporate and government. We are now an afterthought.
danicass@reddit
I am using Claude Code with MiniMax M2.7. The $10 plan is more than enough, and I am using the Superpowers plugin to develop all features. I'm very happy with this combo.
dannytty@reddit
still there
CheatCodesOfLife@reddit
This works with llama.cpp / ik_llama.cpp:
Qwen3.5-122b has been perfect with that setup. The smaller Qwen3.5 models work as well.
MiniMax probably works too, since it identified itself as "Claude Code" when I asked who it was in open-webui.
Or if you want a drop-in cloud replacement, z.ai has been working for me:
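For anyone wanting to reproduce a llama.cpp setup like the one described above, a minimal server launch looks roughly like the following. The model path is a placeholder, and flag defaults change between builds, so check `llama-server --help`:

```shell
# Serve a local GGUF model over an OpenAI-compatible HTTP API with
# llama.cpp. The model path is a placeholder; -c sets the context
# window and -ngl the number of layers offloaded to the GPU.
llama-server -m ./models/your-model-q4_k_m.gguf \
  -c 32768 -ngl 99 \
  --host 127.0.0.1 --port 8080
```

A coding agent can then be pointed at http://127.0.0.1:8080/v1 as an OpenAI-compatible endpoint.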
fugogugo@reddit
Guess the money-burning period is over.
Apart_Ebb_9867@reddit
Yep. Anthropic's behaviour is that of a company that will announce an IPO soon.
IronColumn@reddit
no it's not
TimmyTimeify@reddit
Ed Zitron is basically loading up his “told you sos” as we are speaking lol
Kindly_Sky_1165@reddit
They removed it for new Pro users.
SangersSequence@reddit
The "Pro" plan that can't do the one useful pro thing. Absolute fucking clownshow.
Keljian52@reddit
Qwen 3.6 35B A3B isn't at Claude Sonnet level yet... but it's close.
Ok-Measurement-1575@reddit
It can fix things almost as well as codex 5.3 but whether it can conjure from dust like opus with the merest hint of intention remains to be seen.
3.6 122b and 397b should be absolutely epic.
Keljian52@reddit
Yeah but you need epic hardware to run it on
mksrd@reddit
3.6 122B at q4 should run on modest hardware like a 128GB Strix Halo, which 3.5 122B already does.
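A rough sanity check on whether a quant fits in memory: bits-per-weight figures below are approximate averages for llama.cpp-style quants (the formats carry per-block scales), and KV cache plus activations need headroom on top of the weights.

```python
# Approximate memory needed just for the weights of a quantized model.
# Bits-per-weight values are rough; KV cache and activations are extra.

def weight_gb(params_billions: float, bits_per_weight: float) -> float:
    # 1e9 params per "B", 8 bits per byte, 1e9 bytes per GB cancel out
    return params_billions * bits_per_weight / 8

for quant, bpw in [("q8_0", 8.5), ("q4_k_m", 4.8)]:
    print(f"122B @ {quant}: ~{weight_gb(122, bpw):.0f} GB weights")
```

By that estimate a ~4.8-bit quant of a 122B model needs ~73 GB for weights, which leaves headroom on a 128 GB box, while an ~8.5-bit quant would not fit.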
Different_Fix_2217@reddit
Luckily, Kimi 2.6 is legit better than the latest Opus in several tests I did. Still a bit behind GPT 5.4, though.
Interesting-Year1144@reddit
100% some intern. LOL. This would hurt Anthropic and benefit OpenAI.
WrksOnMyMachine@reddit
This may be unpopular, but I don't think it's unreasonable to charge users $100/month for Claude Code.
No_Mango7658@reddit
Especially with 3.6 35b, it is legit amazing
Odd-Sympathy1274@reddit
Chill bro
HipHopLobbyist@reddit
Yeah, that post is fake news, but then again I'm scared to update my Claude Cowork to see if it will be gone lmao
AncientGrief@reddit
I used Pro for 2 months; it was pretty good. I could create a lot of PoCs in no time (I guess a lot of people go full automation and thus hit the limits pretty fast without actually understanding what Claude generates?)... but yeah, I switched to Codex anyway and am about to set up local AI. Since I am not vibe coding, it's actually quite usable and still speeds up dev time a lot, without letting the AI drive the car completely.
portmanteaudition@reddit
What is the actual cost of renting enough compute to run these models on demand, essentially from a typical cloud provider, maybe at lower throughput?
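A back-of-envelope answer depends almost entirely on the hourly rate you can get. The rate below is a placeholder; real prices vary widely by provider, GPU class, and region:

```python
# Back-of-envelope for renting GPUs on demand instead of buying.
# The hourly rate, GPU count, and duty cycle are all placeholders.

def monthly_rental(usd_per_gpu_hour: float, gpus: int,
                   hours_per_day: float, days: int = 30) -> float:
    return usd_per_gpu_hour * gpus * hours_per_day * days

# e.g. two mid-range GPUs at a hypothetical $1.50/hr, 8 h/day
print(f"${monthly_rental(1.50, 2, 8):.0f}/month")
```

Even at these made-up numbers that's $720/month, which is why on-demand rental tends to make sense for bursty workloads rather than an always-on daily driver.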
AnomalyNexus@reddit
Copilot also just made usage changes. Must be a serious compute crunch going on
geenob@reddit
I've been getting heavily throttled since this weekend using copilot API
Cold_Tree190@reddit
That is fucking insane bro, I have the annual plan…. Can I get a refund then ????
kironet996@reddit
I tried, and nope lol
No_Elderberry_5307@reddit
How are the Kimi models in comparison?
Baphaddon@reddit
Damnnn that’s fucked
BidWestern1056@reddit
with npcsh and incognide the time for local models is now!
https://github.com/npc-worldwide/npcsh
https://github.com/npc-worldwide/incognide
Daniel_H212@reddit
Pretty scummy that they're telling me to try claude code before cancelling my plan, when they're literally taking it out of my plan...
TheRealSplinter@reddit
Maybe wait until they actually remove Code from your plan
zassenhaus@reddit
that doesn't make any sense, for the 20usd pro plan, you get like 160 dollars worth of api pool usage in a month, but for the 100usd 5x max plan, you get like 1300 dollars worth of api pool usage. why would you target the pro plan when most users on it probably never even exhaust their quota in the first place?
Apart_Ebb_9867@reddit
Who says you'll get $1300? Now that Max 5x is 5x of something that doesn't exist anymore, it will be whatever they decide it to be. Expect a renaming of the Max plans.
Usual-Carrot6352@reddit
That should be welcome 🤗, now more of us will go for local.
Blaze6181@reddit
Fuck them kids. All my homies hate Anthropic.
cosmicr@reddit
that's weird I've been using it today. is it only for new subscriptions?
sudeposutemizligi@reddit
rtx3090 prices rising😀
TobiasVFlores@reddit
Dude, grab the $300 of free Gemini credit and use any agent extension for VS Code.
ilarp@reddit
how?
TobiasVFlores@reddit
Bro, type "gemini 3" into Google, go to get an API key, activate billing, pay the 50 reais, use up Google's $300, and then decide whether you want to keep paying for the API or not. If you want to test a good extension, there's Nic Hyper Flow, which I launched this week.
boutell@reddit
Am I reading this right? It looks like they removed it for Max plans too. But you do get Claude Cowork with both. And that is very similar, but with a UI designed around documents and data rather than code.
guiopen@reddit
I can vouch for OpenCode Go: $10 in cash for $60 in credit is an amazing deal.
If your PC has the specs you could also run Qwen 3.6 35B. It will not be as good as the OpenCode Go models, but it's still a beast for coding; I like it more than MiniMax, and it's better at coding than any model from a year and a half ago.
Uncle___Marty@reddit
I mean, there are about 500 repos of claude code on github now. Not really much of an offer if they give you something already free.
But hell yeah! Local for the absolute win.
Digitalzuzel@reddit
We are upset with you
qwen_next_gguf_when@reddit
Kimi k2.6 is not local per se.
akumaburn@reddit
I mean it depends on how much capital one has.. it can *technically* still be run locally.
ShelZuuz@reddit
Very technically. I tried running Kimi 2.6 this morning on 8x RTX6000 PROs and... no. You need NVLink for this.
serg33v@reddit
I'm working on Desktop Commander MCP, which lets you code with Claude Pro in Claude Desktop. Our MCP was really popular back when Claude Code wasn't included in the subscription, though the limits were different then too.
XccesSv2@reddit
Yes, but it's not great. Very token-heavy, and not as smart as a coding agent.