Title: Is Anthropic’s new restriction really about national security, or just protecting market share?
Posted by LuozhuZhang@reddit | LocalLLaMA | 27 comments

I’m confused by Anthropic’s latest blog post:
Is this really about national security, or is it also about corporate self-interest?
- A lot of models coming out of Chinese labs are open-source or released with open weights (DeepSeek-R1, Qwen series), which has clearly accelerated accessibility and democratization of AI. That makes me wonder if Anthropic’s move is less about “safety” and more about limiting potential competitors.
- On OpenRouter’s leaderboard, Qwen and DeepSeek are climbing fast, and I’ve seen posts about people experimenting with proxy layers to indirectly call third-party models from within Claude Code. Could this policy be a way for Anthropic to justify blocking that kind of access—protecting its market share and pricing power, especially in coding assistants?
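For context, the "proxy layer" trick alluded to above boils down to request translation: a shim accepts a request in Anthropic's Messages API shape and rewrites it into an OpenAI-style chat-completions payload, so a client built for one API can be pointed at a third-party model behind the other. A minimal sketch, assuming only the public request shapes of the two APIs; the model name is illustrative and no real endpoint is called:

```python
def anthropic_to_openai(body: dict) -> dict:
    """Translate an Anthropic Messages API request body into an
    OpenAI-style chat-completions request body."""
    messages = []
    # Anthropic carries the system prompt as a top-level field;
    # OpenAI-style APIs expect it as the first message.
    if "system" in body:
        messages.append({"role": "system", "content": body["system"]})
    messages.extend(body.get("messages", []))
    out = {
        "model": body["model"],
        "messages": messages,
    }
    # Copy over the parameters both schemas share.
    for key in ("max_tokens", "temperature"):
        if key in body:
            out[key] = body[key]
    return out


if __name__ == "__main__":
    req = {
        "model": "deepseek-chat",  # illustrative model name
        "system": "You are a coding assistant.",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": "Write a hello world."}],
    }
    print(anthropic_to_openai(req))
```

A real proxy would sit behind an HTTP server, translate the response back the other way, and be configured as the client's base URL, but the payload rewrite above is the core of the trick.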
Given Dario Amodei’s past comments on export controls and national security, and Anthropic’s recent consumer terms update (“users must now choose whether to allow training on their data; if they opt in, data may be retained for up to five years”), I can’t help but feel the company is drifting from its founding ethos. Under the banner of “safety and compliance,” it looks like they’re moving toward a more rigid and closed path.
Curious what others here think: do you see this primarily as a national security measure, or a competitive/economic strategy?
full post and pics: https://x.com/LuozhuZhang/status/1963884496966889669
One-Employment3759@reddit
I mean, this seems a bit rich given that they operate in the USA and all the tech CEOs are aligning themselves with Orange Mussolini
TechnicalInternet1@reddit
Elon and Zuck have not been as anti-China as Anthropic has been. Would argue the orange hates Anthropic even more lol
LuozhuZhang@reddit (OP)
😯
beezbos_trip@reddit
Isn’t Anthropic itself subject to an authoritarian regime? If they really cared about that, they’d speak up for change immediately. They are in SF, and the city will become a prime target of federal forces at some point if things continue to escalate.
Gamplato@reddit
From reading the screenshot, it’s unclear how this would be about market share. Aren’t they explicitly reducing their own market share in this statement?
LuozhuZhang@reddit (OP)
What would happen if people using DeepSeek-R1 or Qwen Coder inside Claude Code were forcibly blocked? They might end up restricted to using only Anthropic’s own models.
Gamplato@reddit
Oh, it wasn’t clear this was about Claude Code. I thought maybe it was the chat app.
Vatnik_Annihilator@reddit
What does this have to do with locally run LLMs
ASYMT0TIC@reddit
If they're worried about Claude serving "authoritarian objectives", they'd better break camp and move their headquarters out of the authoritarian country they are based in.
Zeikos@reddit
"Adversarial nations", adversarial to what? To the formation of an AI oligopoly?
Say what you want about China, but if it weren't for their open-source contributions, where would we be?
MisterBlackStar@reddit
Yeah, also take a look at the names of top talent in each AI company lol.
ubaldus@reddit
bla bla bla...
LuozhuZhang@reddit (OP)
Be nice
lodg1111@reddit
It is more or less two things.
1. Marketing strategy -- the forbidden-fruit effect.
2. Being politically correct, to minimize the risk of getting into what Nvidia has been facing.
No_Efficiency_1144@reddit
I don’t agree with the views of Anthropic but I think their views are genuine. As in they seem to genuinely attract working researchers who have the Anthropic set of beliefs.
LuozhuZhang@reddit (OP)
Is it really that simple, or is there something we don’t know going on behind the scenes?
No_Efficiency_1144@reddit
The thing is, Anthropic's consistency is very high when it comes to their views. They held these views two years ago, when there were only two good LLMs in the world (GPT-4 and the original Claude). They did not adopt them after the DeepSeek release. They also put way too much time and money into safety for it to be just marketing. Early on in particular, Claude was by far the strictest LLM due to their “Constitutional AI” fine-tuning, and they actually lost a ton of money by driving away customers at that time. It seems to be a real belief.
ForsookComparison@reddit
Grok-Code, 5-mini, all of these open-weight China releases for cheap... people are eventually going to realize that paying $15 per million output tokens to be throttled while using a silently quantized Sonnet is some BS.
LuozhuZhang@reddit (OP)
Yeah. Totally BS
BumblebeeParty6389@reddit
They don't want Chinese model makers generating datasets via Claude and creating open-source models that steal away their customers with an attractive price/performance ratio. It was always like that; they just weren't specifically targeting "China" before.
Haoranmq@reddit
collect data from the whole internet to train a model and claim "THIS IS OUR DATA!"
LuozhuZhang@reddit (OP)
lol exactly
LuozhuZhang@reddit (OP)
I think the main motive is protecting market share in data distillation and coding, and the second is capturing more users to turn their data into an advantage for Anthropic’s own models.
LostMitosis@reddit
Anthropic has figured out a way to generate revenue from fear-mongering. So far it's working for them, and you don't change something that's working; you double down.
LuozhuZhang@reddit (OP)
Could this be tied to how narrow Anthropic’s revenue model is? Most of their enterprise income is concentrated in coding, but they’re now facing a flood of fast-rising challengers in that same space.
Massive-Shift6641@reddit
yes, they are losing their shit that based chinese brothers will eat their market share away. nothing new.
LuozhuZhang@reddit (OP)
😂 Guess the next move will be blocking those Chinese models inside Claude Code.