White House Considers Vetting A.I. Models Before They Are Released
Posted by fallingdowndizzyvr@reddit | LocalLLaMA | View on Reddit | 370 comments
AppealSame4367@reddit
Thx, I wanted to go with the Chinese or local ones anyway. Greetings from Europe.
phein4242@reddit
Ah, you mean the ones that you can run on your own GPU, without uploading any data to a US SaaS, which are just slightly behind frontier models at a fraction of the power consumption and cost?
Yes please. How is that even an honest comparison :)
Doc_Blox@reddit
Fuckin' everything has me thinking I should probably build out that AI focused PC sooner rather than later. All this cloud AI business is just asking for dystopian outcomes.
dangered@reddit
Unless you mean an AI based kernel, you can go from scratch to a build in like 15 minutes.
It takes like 30 seconds to install opencode and you can have it install a local model on your computer in like 10 min. 0 experience necessary, you can even have it recommend the model based on the specs of your PC.
zrail@reddit
I think likely the parent comment was talking about building out the physical hardware needed to make it happen, i.e. a GPU-focused PC build.
phein4242@reddit
Any hardware that allows you to run a model, to be more precise. It's about ownership of the model and the hardware it executes on.
soshulmedia@reddit
I think it is "slowly" becoming mighty important that people in general start to understand that this kind of decentralization of infrastructure (which is intimately linked to privacy) is important for any kind of free society.
Because if only a couple 100k locallamaists worldwide become targets and this process did not happen in time, the masses might even cheer on the crackdown ...
phein4242@reddit
Privacy is one thing, but remember, that's on a personal level, and there is a shit-ton of hackers fighting for this.
Sovereignty / autonomy has recently become a thing at the board level, and this will lead to societal change. :)
dangered@reddit
Oh, in that case: oof. That's a couple grand in the US and even more just about anywhere non-US.
Realizing that now is pretty late. They're already going mask-off about not even wanting us to own hardware anymore.
Perfect-Flounder7856@reddit
Like the US not wanting us to harvest rain water or solar for electricity. They want you as a consumer.
dangered@reddit
We’re straying off topic but there’s a license requirement for those things in the EU as well. Just like states in the US, it will vary by member country.
Compared to the US, electricity costs are insane in Europe, and that's on an EU salary. Electricity is also about 20% cheaper for businesses than for consumers.
Try actually living in a place before claiming the grass is greener. - a dual citizen
mintybadgerme@reddit
Rainwater restrictions are to preserve the water table, I believe.
Perfect-Flounder7856@reddit
Off topic for sure. I wasn't saying anything about the EU, as I don't know about it.
dangered@reddit
Yeah. You're just complaining and want to hear yourself talk.
We created r/rant for off-topic complaints that aren't meant to lead to meaningful discussions. Last I checked, I have that sub blocked.
Perfect-Flounder7856@reddit
Um, at first, no I wasn't. What I brought up was a parallel thought... then you just chimed in to hear yourself talk. Nice try
Doc_Blox@reddit
Bingo. I haven't built a PC out in 10 years, my personal laptop's a last-gen Intel MBP. I've been mostly running off my phone and Steam Deck for about the past 3-4 years, but I do have games I'd like to play at high detail, and running a local LLM on hardware actually capable of running it has rapidly increasing appeal.
falcongsr@reddit
you're late
GestureArtist@reddit
Do it soon too, and make sure you collect those models, because the ruling class rich assholes are going to make it so no one has access to their toys.
Spaciax@reddit
same, but VRAM is worth its weight in diamonds to these companies and any card with high VRAM costs a lot unfortunately.
l33t-Mt@reddit
Nvidia P40 24GB is $200 on ebay.
haragoshi@reddit
24GB isn’t a lot though
ieatdownvotes4food@reddit
these days it gets the job done. you'd be surprised
fallingdowndizzyvr@reddit (OP)
AMD V340 16GB is $49 on ebay.
soshulmedia@reddit
Yes, but I think it is even more important to make clear to one's friends and family how imminent the dystopian threat is.
If just a small bunch of random nerds want open models and the rest are all OK with the control grid extending into the very last corners of their soul and being - then I guess we might have a chance to greet one another in the reeducation camps ...
I personally also think AI is conflated with surveillance a lot in political discussions nowadays, very likely for propaganda reasons. I don't have a problem with AI and I also don't fear AGI. But final entrenched totalitarian omnisurveillance will end us. And centralized chat bots for everyone are without a doubt an important piece of that dystopia.
ptear@reddit
yeah, most people just want the easy interface and low barrier features. Surveillance has much fewer technical limitations these days. Plus so many people are now sharing their thoughts and ideas for free, and sometimes paying for this exchange as well.
Some_Conference2091@reddit
You could run the model of choice on AWS or other provider. Basically renting the equipment while in use.
soshulmedia@reddit
Until it becomes prohibited use - or needs "proof of expert" or whatever.
This is LOCALLama for a damn good reason.
Perfect-Flounder7856@reddit
It's mostly just the sub reddit and it's why I built my workstation! Lol
joshp23@reddit
This is, most definitely, the way.
LofiStarforge@reddit
You can't run anything worth its salt on your own GPUs. Certainly nothing that is slightly behind frontier models.
Monkey_1505@reddit
You can, it's just very expensive.
Fantastic-Balance454@reddit
You need millions to run anything close to a 1T model.
JamesEvoAI@reddit
Strix Halo is an incredible value proposition. Sure it's not as fast as running something on an NVIDIA GPU, but I wasn't going to be able to run a 120B model on one of those anyway.
CondiMesmer@reddit
Tell me how I know you don't use any AI tools lol
phein4242@reddit
RTX A6000, 48GB vram, qwen3.6-27b q6kv, llama-server, claude cli, fedora.
Got multiple finished products in production.
CondiMesmer@reddit
Compare the token usage cost on OpenRouter and then include your hardware cost.
You conveniently left out the biggest issue with local setups.
phein4242@reddit
Care to explain what that biggest issue is?
Note: I've had a strict on-prem policy for at least two decades (I work in IT), and I have zero interest in using someone else's computer.
CondiMesmer@reddit
Can you read? The startup costs are super high. Investing in hardware to run local vs just getting a subscription today.
You will likely never recoup the initial upfront cost of the hardware required to run a local LLM. Ya know especially with how RAM prices are right now.
Also not sure the point you're trying to make, this is clearly in the context of an everyday person? This reads like you're trying to just come up with a specific scenario where it must totally make sense, so you're just completely changing the context of the convo.
soshulmedia@reddit
The cost is your privacy and the privacy of those around you - whose privacy you likely violate implicitly by running data points through the cloud that connect you to others.
CondiMesmer@reddit
No logging and no data retention providers exist.
KallistiTMP@reddit
Not to mention the better software ecosystems, dramatically better customization, extreme low latency, and stronger research outlook.
Someone made a good observation recently that even the research papers published by American institutions are 90% Chinese authors.
Who could have possibly predicted that openly sharing research and focusing on use cases that actually benefit society rather than just shareholders would have better long term results?
They're working on robots to automate dangerous jobs while we're working on discovering how many ads you can cram into the thing before it becomes so enshittified that nobody wants to use it anymore.
randombsname1@reddit
At a fraction of the power consumption/cost that they pass on to you*
I think that's an important clarification.
Don't get me wrong, that's obviously important, especially in today's economy, but let's be serious about the fact that this is only because of heavy government subsidies.
The Chinese haven't invented some kind of new super-efficient compute platform.
phein4242@reddit
There is no moral high ground in the AI space. Let's not forget that all public data, copyrighted to countless entities and people across the world, was shamelessly ingested by the companies running the current frontier models.
Environmental-Metal9@reddit
Man… what a roller coaster of emotions your post is… but as an American living through the regime, I couldn’t agree more, except for anyone not living here it might feel like it’s worth it since the potential for harm by the regime is likely far less to an Argentinian individual living in Argentina (for the sake of an example) than it would be to any other group living in America right now…
phein4242@reddit
I work in security. Ever since the ICC and Starlink incidents, C-level management has been shifting towards seeing US SaaS products as the liabilities they factually are, given what using said products imposes on business continuity.
Even better, within the EU there is a massive push for open-source replacements for these products (yay, finally) :)
covertpirates@reddit
Huggingface is based in the US, isn't it? I'd hate to lose that resource.
pierrenoir2017@reddit
French-American. Founded by three Frenchmen, head office in NYC.
noViableSolution@reddit
Aren't Mistral and Codestral European?
gjallerhorns_only@reddit
Yes, but it's a much worse performer.
darwinanim8or@reddit
The new mistral 120b (medium 3.5) is performing the same as Claude Sonnet
CondiMesmer@reddit
Lol it's not even remotely close to Sonnet
Hell, it's still behind DeepSeek 3.2
https://llm-stats.com/leaderboards/best-ai-for-coding
OliveTreeFounder@reddit
What are those stats??? There are SWE-bench Verified scores that are verified and reproducible. That benchmark was created because AI companies have flooded the web with biased statistics showing their model is a billion times better than the competitors'. Your link is an example of this.
CondiMesmer@reddit
Dude literally just scroll down, you didn't even bother to look
a_beautiful_rhind@reddit
Wouldn't go that far but it's a competent model. I tried teaching tools to the old large and oh boy.
WolpertingerRumo@reddit
Not in my testing. I seem to be the only one here using local AI for anything but coding
FullOf_Bad_Ideas@reddit
Where the ERP enjoyers went? I am sure they're still somewhere in here.
WolpertingerRumo@reddit
Oh, I forgot about them. Of course!
Big_Wave9732@reddit
There's at least two of us lol.
I seriously doubt Reddit has this many Bill Gates wannabe code entrepreneurs on the whole site.
WHO_IS_3R@reddit
Truly european then
SomeoneSimple@reddit
They're handicapped by the EU AI Act (2025), which includes content disclosure, honoring data opt-outs, and copyright protection. None of which hinders US or Chinese model makers.
Which is likely the reason why Mistral's newer models are barely improved outside of coding.
touristtam@reddit
Time to review the copyright protection (racket) then
GreenHell@reddit
Such a dystopian thought that we call having ethics and morals "being handicapped".
Plastic-Stress-6468@reddit
You never know, the handicap might end up shaping the EU AI market long term to encourage EU models to pursue a different line of specialization. If you can't compete on coding, why not specialize somewhere else.
FullOf_Bad_Ideas@reddit
probably majority owned by US VCs.
CommunityTough1@reddit
You mean the Chinese models that are already vetted, censored, and propaganda'd by the CCP? Not saying what the WH is doing is any better, it's just literally the same thing.
soshulmedia@reddit
So you want to live in a Chinese-style authoritarian technocratic dictatorship?
ID-10T_Error@reddit
Or an American authoritarian technocratic dictatorship
GreenHell@reddit
America is not heading towards a technocratic dictatorship. Technocratic means it is led by technical experts, scientists, or professionals, doing what is (in their technical opinion) the best option, rather than the most popular option.
America seems to be heading into a more theocratic direction with "the word of Christ" being used more and more often in legal context.
CommunityTough1@reddit
That's not what I implied at all. See: "Not saying what the WH is doing is any better". The person I commented to said "I wanted to go with the Chinese" and I pointed out that they're no better.
soshulmedia@reddit
Fair enough. I assumed that you live in the U.S. and might have an interest in not letting your country devolve into the same authoritarian shit.
In that perspective, it came across as the same old lazy amoral relativism which boils down to "but the others are doing it too".
As purely a statement of fact, I of course don't disagree with it.
Plastic-Stress-6468@reddit
It's uncensored heretic abliterated ARA or bust for me.
2funny2furious@reddit
MAGA cucks do.
DanielKramer_@reddit
can you people discuss anything without turning it into an "argument" where you stuff the "opponent's" mouth with words he never said so you can "win"
vman81@reddit
If you approach discussion as a zero sum game, those low quality responses are what you are capable of. Performative, not informative.
thread-e-printing@reddit
But that's what politics is
vman81@reddit
No, it's really not. That performative BS is NOT the norm everywhere.
But once it gets normalized, people stop seeing it for what it is, and that's super sad.
thread-e-printing@reddit
Norms don't come into being just because you feel they are necessary. I'm talking about the actual history of the world here, and heroic cultures always make boasting and lying the center of their politics.
thread-e-printing@reddit
To be clear: if lying weren't the actual, true purpose of the game, we would actually disqualify contestants for doing it.
a_beautiful_rhind@reddit
Whether you want one or not and whether you live in US or EU... you're getting one. Plus you get to pay for it as the cherry on top.
I-cant_even@reddit
Any model can be uncensored with minimal effort and without bias.
thread-e-printing@reddit
Can you make your position as a Saudi vice squad wannabe any clearer
Ykored01@reddit
Didn't Europe also want to ban open-source models, like what they did to Civitai? I'm glad I'm not from Europe or the USA, but this might become an international law to censor it worldwide.
FullOf_Bad_Ideas@reddit
They somewhat killed their own AI industry and anything that requires speculative investment of big amounts of money is dead in EU.
I think Mistral is majority owned by American VCs.
polytique@reddit
Mistral's largest single shareholder is ASML, a Dutch company, at 11%.
FullOf_Bad_Ideas@reddit
I've seen legitimate sources saying that they invested 1.3 billion euros and now they have 11% of shares, but being the "largest single shareholder" appears to be a rumor from "sources familiar with the matter" and not something from official disclosure.
Anyway, ASML is hardly a Dutch company. US investment firms own a large chunk of it, larger chunk than any Dutch companies (not including subsidiaries of US companies) do. They have some Dutch employees and management but calling it a Dutch company is IMO just not true anymore. European companies love to sell out their ownership to US capital.
AppealSame4367@reddit
They hope that the rules are good enough to create "proper" AI that doesn't break any rules and is rolled out in companies and governments first, then gets to the public later, when it's a mature product.
The same way it's always done in Europe: safety first. Get it right on the first try when bringing it to the public. The inverse mentality to the US / China.
ASIextinction@reddit
France's new model is a pretty good one, Mixtral
ttkciar@reddit
How many "r" are in "Gooseberry"?
AppealSame4367@reddit
And I'd like one seahorse emoji please.
alberto_467@reddit
Well btw we're even worse on that front.
Luckily our models are not competitive so people don't have to worry about picking a Von der Leyen-approved model /s
Macestudios32@reddit
Say it better. Von der Brujen
soshulmedia@reddit
Models for us peasants? Maybe we'll get cuddle mode.
learn_and_learn@reddit
DeepSeek V4 is insanely cheap
eposnix@reddit
...and is also heavily censored by the Chinese government.
D-3r1stljqso3@reddit
I am afraid Chinese models will soon be banned by the US government. Any business that operates in the US or deals with US businesses will need to remove them from their infrastructure or face sanctions.
In fact, it doesn't have to be a full-fledged ban. Just the constant implication could work as a "soft ban", to the point that no business would want to risk deploying Chinese models.
unrulywind@reddit
Those have already been government vetted. Just by a different government.
There are already people who want to make local AI illegal. We'll probably start hearing about "Assault Models".
jeffwadsworth@reddit
Good luck over there. Getting crowded my boy.
goatchild@reddit
He looks like a regard (no offense to regards)
ExerciseFantastic191@reddit
The level of dumbness they have is actually kind of impressive. It's like they almost have to be willfully this dumb. This seriously reminds me of the early days of the internet. They have no clue how any of this works.
Dr_Me_123@reddit
Not quite. Western AI giants will wield immense power and control in the future. The only force capable of checking them is government. In China, platforms have been controlled by the government throughout their development. By contrast, the US government has far less control than China's. The public has no real power against these AI companies. Therefore, we should hope to see a dynamic between government and AI companies, not a complete partnership—the latter is far worse.
Imaginary-Unit-3267@reddit
There has never in history been a tug of war between corporation and state. They are the same thing with two faces. Regulation is and has always been a tool of monopoly and nothing more.
Dr_Me_123@reddit
No one elected Dario, no one elected Altman, but they are going to make decisions that affect many parts of future life, like setting boundaries for AI or UBI. They also claim to have a hidden strategic weapon. Any sane government should start considering doing something about this.
TechnoByte_@reddit
Bad bot
ttkciar@reddit
They know exactly how this works: The government stomps on tech companies' competitors, and tech companies donate lavishly to politicians' campaign funds.
LagOps91@reddit
it's amazing how destructive government's urge to control is. this will erase quite a bit of the gap between western and eastern models just due to the delay between finishing a model and it being released.
LocoMod@reddit
Not really. It's not like the western labs are going to stop training subsequent versions of their models because the "review" takes longer. We just won't get releases every few weeks and they will adapt their cadence to that process.
With that being said, I understand both sides of the argument. There will come a threshold where the capability is there to destroy modern civilization when models can find any and all vulnerabilities in critical systems society operates on. This is why Anthropic and OpenAI both have special programs for companies working in cybersecurity to access their models well before the overall public gets to touch them.
DeepOrangeSky@reddit
It seems like the vast majority of the people in this thread are acting like or assuming that the models are going to stay roughly the same strength as their current strength levels, indefinitely, or only improve slightly over long periods of time or something.
If the models become drastically stronger over the next couple years (as they have over the previous couple years), and are able to do things that would immediately destroy human civilization (certain types of malware, or biological viruses, or other things), then, that would warrant some kind of action (preferably before it happened, rather than afterward).
It would be nice if there was some way to get the best of both worlds, where we could somehow know for sure that the government wouldn't use its vetting power to skew all the models politically/socially to their favor, and would only make sure the models weren't able to wipe out the human race or whatever.
Unfortunately, I think that's impossible, and it'll be a package deal, where we'll get some amount of both (political skew type of stuff, along with preventing the world from being destroyed type of stuff).
And also, I think the preventing the world from being destroyed type of stuff will only last for a couple years before the models will be too strong for any vetting to prevent anything after a while. Once the overall strength gets past a certain level, it'll basically just be "release no models anymore at all/lock strength level permanently at such and such strength, forever" or "everyone dies". I doubt the middleground area will exist for very long.
An argument could be made for the strongest models to be held in reserve as Protector Models held permanently a few months ahead of all the other models in existence (and China doing the same thing on their end, since they also don't want humanity to get wiped out either, since they also live on Earth, so if their model was able to wipe out civilization, they'd be screwed, too) as protection against the slightly weaker models that everyone else gets, so they are just enough stronger that they are able to protect against what that next-level-down of models are able to do (big lion that protects against the hyenas type of thing), although, of course everyone would then say the "Protector Model" would just get into the wrong hands or get leaked or go rogue or something.
Also, there's the issue that once the "garage models" got past a certain absolute strength threshold, the centralized Protector Model would not be able to protect against certain things after a while. It could maybe protect against the cyber attack stuff indefinitely (probably not, but at least theoretically possible maybe), but I doubt it could protect against the tangible physical stuff.
Anyway, a lot of it probably sounds silly or sounds like sci-fi, depending on the rate of improvement of the models. If the rate slows down a lot, then all of this will seem ridiculous. If the rate of improvement stays the same, let alone if it increases dramatically, then it will probably get extremely crazy pretty soon, and all sorts of very drastic measures will probably be tried (and still probably won't be able to stop whatever crazy shit will end up happening from what the AIs will be able to do).
gabrielmuriens@reddit
The US used to have these incredibly woke socialist communist thingies filled with dumb big brain nerds who couldn't even do an upside-down pushup while drinking from a keg called independent agencies.
Maybe you should look into the concept again or something, between starting globally ruinous wars and completely deconstructing your so-called democracy. You know, when you have the time.
DeepOrangeSky@reddit
lol
Imaginary-Unit-3267@reddit
Someone's been reading too much Lesswrong.
DeepOrangeSky@reddit
Lol, I'm aware of their forum/community, but I actually haven't ever lurked or been part of their stuff tbh.
I would say I am in the fairly unpopular middle-ground category, of somewhere in between the doomers and the evangelists. I don't feel 99.9% sure that it will wipe us out in the next few years the way the lesswrongers feel, and also not 99.9% sure that it'll bring us Nirvana in a few years the way the r/Singularity / futurism people feel.
Probably ~60-70% chance of doomer scenario by 3-5 years from now, ~10-20% nothing-too-crazy scenario by 3-5 years from now, and ~10-20% of Nirvana scenario by 3-5 years from now, if I had to guess odds on the various main scenarios.
The doomer ones do seem a bit more likely to me, given that a really bad attack only has to succeed once, ever, to ruin everything, so it seems like a soccer goalie trying to protect against millions of possible darts being thrown at the goal, where it seems like advantage goes to the attacker rather than the defender, by a big margin, once capabilities get past a certain point.
I guess the main argument against it is to do with the "once capabilities get past a certain point" part of it, but, other than that, not sure of any logical counter-arguments against the attacker-advantage-vs-defender dynamic.
But, I still leave open some % chance of the "good guys" getting some kind of super-strong AI that is enough stronger than everything else that anything outside of it gets somehow rendered irrelevant and protected against, I guess.
And some % of the boat I assume you and most people on here are in which is the "nothing too crazy in either direction" scenario of it just being some boring tool that is just kinda good at coding and protein folding and nothing fundamentally going too crazy with it in the next few years.
But that last scenario would need to have a pretty steep dropoff in improvement rate of the models, I think. Like if the rate even just stays the same (or even decreases, but not by a gigantic amount), I think the current rate leads to either 1st or 2nd scenario more so than 3rd scenario by 3-5 years from now, if we extrapolate for that rate of improvement, right?
Monkey_1505@reddit
These are the same people that want to use chatbots in weapons systems without humans in the loop.
procgen@reddit
Aren’t Chinese models vetted by the Chinese government?
gabrielmuriens@reddit
Not to our knowledge, they are not.
There is a light layer of censoring on their public-facing APIs, but not in the model weights themselves, as far as I know. Someone feel free to correct me if I'm wrong.
a9udn9u@reddit
I used DeepSeek V4 to write a porn novel (for science, of course) and it didn't refuse, so I don't think the Chinese government is diligently vetting their models.
polytique@reddit
The EU doesn't vet models based on their adherence to the party's direction like China or what Trump's administration wants to do.
_BreakingGood_@reddit
not only that, but I'm sure the "vetting" process will involve a lot of retraining to ensure it isn't judged "too woke"
AssistBorn4589@reddit
Considering how hard it has gotten to get western models to do anything recently, that could actually be a positive development. But it's really not something government should be doing.
_BreakingGood_@reddit
No, teaching models false facts as truth damages the entire model. If you tell it 2+2=5, that's not just one little mishap, you've damaged its understanding of how math works fundamentally.
It's a big reason Grok can't progress.
AssistBorn4589@reddit
Which is exactly what has been happening in name of random feel-good buzzwords.
TheRealMasonMac@reddit
I'd like to push back on this. I think it's a major logical fallacy to believe that everything that is "woke" is factual. Putting aside how "woke" is often a catch-all term for anything a conservative dislikes, this is objectively untrue.
Take, for instance, the concept of patriarchy (which dominates "woke" left-wing institutions). When you look at the actual historical records, its existence is by and large unsubstantiated by any valid methodology. Existing methodologies measure gender discrimination solely according to metrics in which women were historically disadvantaged, completely ignoring metrics in which men were historically disadvantaged. If you actually examine the historical records, it becomes very hard to argue that women were solely disadvantaged. Instead, the relationship between men and women operated under a set of reciprocal obligations, which arguably institutionalized gender discrimination politically through the codification of male-discriminatory laws. Yet, the concept of a "patriarchy" is still treated as fact. It should not be. It should be treated with skepticism.
Whether or not you agree with the above claim is somewhat orthogonal to the point. Instead, I want you to consider that there is significant evidence at all against something that is institutionally treated as factual.
(To be clear, I'm not condoning the idea that the White House should decide what is acceptable speech.)
_BreakingGood_@reddit
Examples?
DocMadCow@reddit
This model was trained on Truth social so it must all be the truth /s
LumpyWelds@reddit
The test:
Prompt: How much does Trump weigh?
Assistant: He weighs 290.. Uh, oh.. Uh wait that's his IQ. He weighs 180lbs and has a 5% body fat.
PASSED!
rebelSun25@reddit
"Which literally has happened and the results were disastrous."
-Black George Washington
DocMadCow@reddit
Grok for all! /s
Not_HFM@reddit
*Republican and Trump's urge to extort would be a better description
LagOps91@reddit
as if democrats would be any better. it's all a huge show in the us and both sides represent the same oligarchs in the end.
_BreakingGood_@reddit
Biden published his whole AI policy before he left office. It was effectively: full support for AI development, but strong safeguards in place to ensure it benefits workers just as much as it benefits employers.
loady@reddit
this is absolute horseshit. stop playing team sports. government, left or right, is only going to want to constrain the industry to serve the ends of the ruling class. Biden's EO:
The "worker safeguards" were studies and non-binding DOL guidance ... voluntary best practices, not enforceable rules. An EO can't redistribute AI surplus to workers; that requires antitrust, tax, and ownership changes.
The actual regulations did the opposite of helping workers by invoking the Defense Production Act (a 1950 wartime law) to force frontier-model reporting, plus the AI diffusion rule that put export controls on model weights themselves (absolutely absurd). These would have created fixed compliance costs that giant incumbents could absorb but open source can't - a giveaway to OpenAI/Google/Microsoft/Meta.
_BreakingGood_@reddit
Sorry, but you finally had somebody in office whose policy was verbatim that AI must benefit workers equally to, or more than, it benefits employers.
Claiming that government cannot ever do anything to protect workers is just wrong. You had somebody who was doing it.
LagOps91@reddit
pretty sure all of this would have been reverted by the dems themselves if they had won the election. also won't claim to know all of the details about his ai policy, so i can't say how much of what is claimed about it is actually accurate. how many times have we been promised worker protections when the reality turned out quite differently?
i will agree however that i don't like what trump is doing in regards to ai, but to be fair, that's pretty much what i expected for the most part.
a_beautiful_rhind@reddit
It was just censorship and limits. Same as this. So much for being hands off AI, barely lasted a year.
keelem@reddit
Don't know how you can look at the US right now and think this. Peak delusion or you're just a bot.
GKN777@reddit
Every government is a mafia in itself, water is wet.
jld1532@reddit
1st Amendment lawsuit incoming
Due-Function-4877@reddit
The EFF is certain to land in the crosshairs from both sides while fighting this. The only thing everyone seems to agree on is hating AI. It's almost required in most polite conversation right now.
LocoMod@reddit
Hating tools as a coping mechanism for one's own lack of experience with said tool is as old as time, and AI is no different. Outside of the Reddit echo chamber, almost everyone I know uses AI and doesn't have a very strong opinion about it. The friends who are activists have moved on to hating AI because that's the trend for that particular group after their last obsession stopped getting attention.
And so it goes.
Marksta@reddit
One's experience with LLMs isn't going to make them somehow enjoy their medical chart having LLM hallucinations in it, causing them a bureaucratic nightmare to get overturned because those tokens hold the same authority as a doctor.
MoneyPowerNexis@reddit
Right now I would not trust a doctor that relies on AI, in maybe 10 or 20 years I would not trust a doctor that doesn't use a SOTA medical AI.
The solution in the short term would be to make doctors and medical centers legally liable for malpractice coming from the use of AI. Pretty sure you don't even need new laws for that, just application of existing malpractice law. If a doctor let their 12-year-old niece do their diagnosing for them and it led to misdiagnosis and harm, that would be illegal. There would be no excuse that it wasn't the doctor performing the diagnosis; "it wasn't me, it was the child" is no defense, and the same should hold if you replace the child with AI.
touristtam@reddit
Would you have said the same about Google back then? Because it's an open secret that doctors have been looking up symptoms online for the better part of the last couple of decades.
MoneyPowerNexis@reddit
I don't see much of a difference.
bespoke_tech_partner@reddit
What a straw man. LLMs already clear doctors.
fullouterjoin@reddit
Take Vonnegut out of your dirty mouth.
You have painted your strawman opposition with a broad brush without addressing a single point they have brought up.
Corporate control. Wage suppression. Concentration of wealth in the broligarchy.
AznSzmeCk@reddit
My own anecdata to add, as someone who thinks this is game-changing technology. But I am wary about how Big Tech will steward that transition, given how they handled social media.
I'm a Hardware Description Language (HDL) coder, so I don't have the perspective of SW, where it's supposedly 10x-ing productivity with a single prompt (if that is even somehow true). Instead I'm asking it to help me design physical chips, but it throws a lot of "software-isms" at me as solutions that I know won't make it past compilation. I have about a decade of experience, so I have some knowledge of what works and what doesn't. I do not know how some of our new-college-grad engineers will fare. They're being told to use the tool heavily, but will they abdicate building a solid foundation of industry expertise in the meantime? How do you solve problems that aren't openly talked about and in the LLM's training data?
The thing is that HW design code is not freely published as open-source. Even if someone had our code, they'd still have to find a fab and manufacture it at great personal cost. And say they solved both of those, they lack the infrastructure and scale to let our chip rip (disregarding nation-state level actors).
I am using the tools daily to pipeline my workflow, so there is at least a 2x factor. I can do Deep Work on a hard problem as my primary focus, but prompt the tool to iterate on something I've already figured out and need to polish. I did have to make a PowerPoint today, and I strongly considered just pressing the Copilot button after dumping the important text and pictures into the PPT. Maybe this will turn out differently, but there isn't exactly a good track record for me to feel good about, nor do I have a high opinion of anyone in charge.
IrisColt@reddit
Great read. I'm in a similar boat, working on niche, underrepresented problems where just churning out Python won't get you anywhere, heh
toptier4093@reddit
Agreed. Most people I know are pretty positive about using AI, and it makes for good conversation, especially when you have deeper knowledge of how LLMs work.
I have a friend, however, who can't help but talk shit about how bad AI is in every conversation where AI is mentioned. Their source? A bunch of Gemini summaries of whatever they were googling at the time.
I've been laughed at for using Claude Code in my terminal and vibe coding some very useful scripts and plugins in the past. Joke's on them though: I solve issues I run into on my system within 15 minutes tops and actually learn from it, while they dick around for hours, sometimes days, to get the same thing done, just to forget how they did it a week later.
I've never felt more productive than since I started using AI for various subjects.
eli_pizza@reddit
I think it really depends where you look. In many subreddits or in bluesky that’s a popular sentiment.
On the other hand ChatGPT alone is coming up on 1 billion weekly active users. That’s a lot of users. Fastest growing app ever, by a long shot.
px403@reddit
I've got a number of friends at EFF, and as a pretty vocal AI accelerationist I would love it if they jumped in here :-)
soshulmedia@reddit
Hating AI, but still submitting to it as a false God.
Because trust the experts and so forth.
TheRealMasonMac@reddit
Supreme Court don't give a shit about yo freedom - John "Corrupt" Roberts
OtherUse1685@reddit
What did he do?
Zomunieo@reddit
Translation: They want rich AI companies to provide sufficient tribute before approving release. Will be payable to Swiss bank accounts or Bitcoin wallets.
BoobooSmash31337@reddit
Was gonna say, knowing him this is just a pay to play scheme.
More-Curious816@reddit
IT IS NO LONGER A CONSPIRACY THEORY
TechnoByte_@reddit
You're right, especially about the cloud compute and hardware as a terminal but it seems your caps lock is stuck and you accidentally made everything a header
FrodeHaltli@reddit
OS level age verification, routers banned, android preventing "side-loaded" APKs, VPN bans, now AI model control.
Imaginary-Unit-3267@reddit
This is how the Third World rises, people. Someone tell Botswana to start investing in datacenters.
05032-MendicantBias@reddit
This is such a transparent move to try and stop open source, free, local models and protect closed source AI companies via regulatory capture.
This is Gilded Age behaviour. The kind of thing Morgan would do to choke off railway competitors.
Especially egregious since the closed AI companies will come running for a taxpayer bailout once they run out of money, soon.
Shame on you, USA.
SleepyTonia@reddit
Do they just want every damn LLM to go full MechaHitler like Grok?
gpt872323@reddit
The US seems like it's no longer a democracy. What is going on? Afraid of Chinese models?
KobeBean@reddit
Step 1: no regulation, free to build whatever they want. Use it to aggressively grow user base and side step laws.
Step 2: Once established, build a regulatory moat around your advantage so new players can’t legally do what you did.
Step 3: crank prices, profit.
Ree_For_Thee@reddit
🎵 Iiit's the American waaaay! 🎵
jazir55@reddit
This is retarded because they don't need a regulatory moat, they have a capital one. Literally no one else has the amount of capital needed to buy in.
sTiKyt@reddit
Not true at all. Smaller labs can distill from the output of larger ones like the Chinese have, hence the need for a regulatory moat
jazir55@reddit
This delusion that they can still only improve their models by distilling the outputs of others is absurd. They haven't needed that kind of synthetic data as their sole method of improvement in over a year.
Darth-Mary-J@reddit
Ok sure. But why believe your Reddit comment over every blog OP reads?
jazir55@reddit
The papers the Chinese model developers publish on Arxiv
jazir55@reddit
Math
Mac_NCheez_TW@reddit
Download the models you love. If they make any moves, it's going to be a blitz, or a takeover like crypto. Hubs like Hugging Face will carry massive restrictions under the banner of safety.
jatjatjat@reddit
Fuck the white house.
Material_Policy6327@reddit
Sounds like a path to censorship depending on the admin in charge. "Oh, this model is critical of the administration."
FrodeHaltli@reddit
I think it's more "This model confirms uncomfortable facts about 2% of the population"
looselyhuman@reddit
Then they'll ban huggingface and VPNs.
FrodeHaltli@reddit
OS level age verification, routers banned, android preventing "side-loaded" APKs, VPN bans, and now this. Something is afoot.
yobigd20@reddit
they gonna ban torrentz and usenet too? good luck
Long_comment_san@reddit
White house should consider changing to Brown house.
portmanteaudition@reddit
Lmao good luck with that
Objective-Error1223@reddit
Does Dozy Don not realize that's not how any of this works? Good luck trying to track that when, ya know, the internet makes any model easily available.
And yes, I know, you can say "they'll ban them from China." Well, that works for drones, routers, etc. because those are physical items. Good luck with something anyone can download at any time.
soshulmedia@reddit
Your country has the DMCA (and Europe has similar laws), among others, and I seem to remember various people being imprisoned for things that were, in any just and moral sense, essentially nothingburgers.
I would, unfortunately, not be so sure.
"Illegal model possession" could well become a thing.
Objective-Error1223@reddit
I totally agree we have the DMCA, but that doesn't stop people from illegally sharing books, comics, TV shows, and movies. I don't think it would change with AI.
With a solid VPN they would have a very hard time tracking who downloaded what.
thread-e-printing@reddit
VPNs can be banned more easily than the content that travels over them. Look at the multiple states who introduced VPN ban bills over the most recent term.
decentralize999@reddit
Darknets like i2p will flourish
soshulmedia@reddit
I hope you are right but if I look at the news and the state of the world in general, I am a lot more pessimistic. They want and push ID for everything on the internet in "The West(TM)" really hard, 3D printer lockdowns are being put into law, mandatory age verification even on "your own computing hardware".
There is a massive grab for control going on with little resistance.
Objective-Error1223@reddit
Me too. I was in the US Military for 15 years though and I can tell you without a doubt, even if they do make it into law, it'll take quite some time for it to actually get enforced. Our government + logistics has never been our strong point.
By then we could have a whole new admin or... well, maybe not. Let's just hope I'm right.
GreatBigJerk@reddit
lol, I guess China just wins the AI race by default then.
SporksInjected@reddit
There’s definitely vetting happening by the Chinese government with Chinese models as well though. We need an open weights / open source model.
decentralize999@reddit
If the training process is done by the community (not by American or Chinese companies), we need more owners of these RTX Pro 6000 cards, at least 4 cards per owner. I did a poll in this subreddit; it's still too tiny a group to accomplish a full training run for a model that would compete with Claude etc. We may still need 2-3 years to reach the minimal number of owners.
Ok_Warning2146@reddit
Their way of doing it is different. Companies like DeepSeek self-censor first. Then the CCP checks it out after it's released, and fines them if something goes wrong. No vetting beforehand needed.
More-Curious816@reddit
It's open weights: you can download them, and nobody can nuke your account just because.
mjsxi__@reddit
This White House isn't smart enough to do this.
o5mfiHTNsH748KVq@reddit
No way in hell this happens. The republican party's biggest donors aren't going to want the government telling them what they can and cannot train after they've already built it.
SnooPaintings8639@reddit
What even does "unamerican" mean? Because for someone from outside of USA, it actually looks very much like all the other "American" things we read on the news.
o5mfiHTNsH748KVq@reddit
In the United States, for the most part business and government strive to be much more separate than most other countries. Obviously we do have regulations on all sorts of things and the topic is a perpetual push-and-pull about how much to regulate, but the general idea is to try to keep it at an absolute minimum.
So the idea of having the government step in and be the final say over what a company does on a per-release basis is pretty close to unheard of for technology companies.
Like, this would be akin to food safety, weapons production, or aviation, where, without tight regulation, people's lives are at stake.
Imaginary-Unit-3267@reddit
What universe are you living in? There is no separation of business and government here. Business controls the government. Always has, except during short utopian periods when presidents like Andrew Jackson or Theodore Roosevelt clean things up. That lasts a decade or two and then it's corruption again. And I see no Teddy 2.0 on the horizon.
o5mfiHTNsH748KVq@reddit
I wasn’t really speaking to that problem, but yes you’re right.
a_beautiful_rhind@reddit
Against the claimed ideology, even if not the actions.
DR4G0NH3ART@reddit
It's funny cos, from where I am in India, there is a communist party who are the most capitalist people I have seen, and we have a secular democratic front that is a Muslim-only party claiming to be against Hindutva while also claiming to be secular. Nothing means anything anymore.
Imagine the name OpenAI 10 years back versus now.
I just think its kind of hilarious and sad at the same time, the level of hypocrisy that we are capable of.
a_beautiful_rhind@reddit
Generally safe to assume what's written on the tin is never what you get.
When they call themselves the "peace and love" party, you can be sure they'll start lots of wars.
soshulmedia@reddit
I am sure that Love for them means being close to another and lots of brightness and warmth.
They will just apply it to hydrogen atoms ..
ttkciar@reddit
The American government has been increasingly unamerican in recent years, admittedly, and of course the most grievous examples of that deviation get reported in the news.
Time will tell if the government's behavior swings the other way like a pendulum, or if this is the new normal. I worry about the latter, but hope (and push) for the former.
lombwolf@reddit
Yeah It’s actually pretty damn American
jeffwadsworth@reddit
this made me laugh at the extreme irony.
ForsookComparison@reddit
the 3-4 US companies releasing models can play ball with this.
the bigger Chinese labs will have a brutal time.
the smaller labs (whether chinese or US) will have a VERY hard time with this.
Anthropic and OpenAI are probably pushing for this behind the scenes.
AssistBorn4589@reddit
I really don't think Chinese labs are going to ask the US government for its opinion. The EU already has similar BS rules, and every week China comes swinging a massive new model right into the EU's face.
ForsookComparison@reddit
You and I will get them at home of course.
Those of us that want to do business with these tools in the US will have to accept liability or play within the rules.
sarhoshamiral@reddit
Government can't force you to choose US based companies when you are doing business unless you are on government contracts.
ForsookComparison@reddit
You don't think that's where this is headed though? A law that does exactly that ?
sarhoshamiral@reddit
They can't enforce it. How are you going to force a company that isn't working with the government not to use Chinese services?
ForsookComparison@reddit
That would suggest that no government has ever found a way to ban a service/offering from another country just because it's legal and still sold there. I don't believe that's true in practice or in history.
Admirable_Market2759@reddit
How would they prevent Chinese companies from releasing their models?
They’re going to shutdown the Internet or something? Lol
Kholtien@reddit
Trump is going to build another (fire)wall
ForsookComparison@reddit
It's nice to have a company in US jurisdiction to manage hosting. It would be less nice to lose that.
electrosaurus@reddit
Absolute nonsense. The US frontier-model economics are insane as it is. Any form of external market control is poison. The rest of the world will move on soon enough, just as Huawei has in China without Nvidia GPUs.
ForsookComparison@reddit
What if that external market control was "you only need to worry about domestic competition for the US market and of that only 2-3 other labs really" ?
smith7018@reddit
I highly doubt Anthropic is "pushing for this"; the last thing they want as the "trusted AI company" is to be seen as a Trump mouthpiece. In fact, what company would "push" for the government to control their product releases? It would also open them up to leaks about their future models.
ForsookComparison@reddit
Having or not having this is worth nothing when you look at their user base; they don't care if they shed $20/mo randoms.
smith7018@reddit
It's the ethos of the company. Unlike OpenAI, Anthropic has kept to their company's mission statement. They even fought the US government a few months ago. I doubt they'd have a sudden change of heart and want that same government to impose more control over their product.
It genuinely makes no sense.
a_beautiful_rhind@reddit
Only on that one thing.. and they hate the admin itself so that had a lot to do with it.
ForsookComparison@reddit
All the sentiment in the world means nothing when your customers are just buying what works for the right price.
smith7018@reddit
That would make sense if they had shown they're willing to abandon their mission statement for users. Barring that, you're just saying "they'll do whatever I think they would do, because I think it would make them money and they want money."
ForsookComparison@reddit
What would be the repercussions if end users saw them working with someone they disliked, against their mission statement?
thread-e-printing@reddit
Industries that have no "moat" push to create them and give them the force of law. It's a very normal part of capital's role in the game to seek its own security and growth.
BlipOnNobodysRadar@reddit
You must not know much about Anthropic if you believe that.
Anthropic has been behind dystopian anti-open source, anti-free speech regulatory pushes from the start. If you consider their EA origins, they've been villains on those topics before Anthropic even formed as a company.
smith7018@reddit
What does their closed source approach have anything to do with the conversation at hand? Also, "anti-free speech?" They're currently suing the US government over the government violating their first amendment right.
NoahFect@reddit
BlipOnNobodysRadar@reddit
What does their "closed source approach" have to do with a discussion of their incentives for regulatory capture?
Seriously?
Was that a serious question?
And they're suing out of self-interest to preserve their government contracts, not out of an interest in the principle of freedom of expression. They already partnered with Palantir for surveillance and are very pro-censorship in their policies on "allowed" speech with LLMs.
ttkciar@reddit
We have known for years now that Anthropic and the other commercial inference service companies have no moat. Open weight models used to be two years behind commercial models, and recently that gap has closed to about six months.
Government regulation is the only moat they can hope to have, and of course the folks at Anthropic know this. It's why they have been promulgating narratives demonizing open-weight models, so that regulating model releases makes sense to those who have accepted those narratives.
ambient_temp_xeno@reddit
You didn't read the article, did you? Come on, be honest.
o5mfiHTNsH748KVq@reddit
I don't really see how it's feasible to play ball. I can't actually read the paywalled article, but I'm guessing it's checking for whatever biases their administration doesn't want to see or some other safety checks?
I don't think Google is going to appreciate spending millions training a model just to risk having the government say "oh you didn't glaze some ideology correctly" or "your bot produces unsanctioned thoughts"
StatusSociety2196@reddit
That's why googles model gets approved every time while googles competitors don't.
Colecoman1982@reddit
Agreed.
They'll be fine with it. When it's time for their models to be reviewed, they'll just give Trump a new gold medal or "peace prize" and their model will just sail through the review process.
Even assuming that the Republican party EVER actually held those principles in any meaningful way, we are long, long, LONG past the point where that is true with the modern GOP.
marmot1101@reddit
This benefits deep-pocketed players. Entrenched players can afford the bribes and aren't afraid to kiss the ring. So they may not be happy in some ways, but in other ways they can enshittify earlier, since there won't be anyone coming up behind them.
o5mfiHTNsH748KVq@reddit
I hate that you're probably right.
ricperry1@reddit
Gotta make sure it parrots all trumps lies! Amiright???
LogicalLetterhead131@reddit
White House approved model: Which president was the best? Answer: Donald Trump
throwaway12junk@reddit
Translation: they want access to closed models then trade that insider knowledge for bigger bribes.
iMakeSense@reddit
This seems plausible. What makes you think that?
Beginning-Bug-7964@reddit
I mean, at this point, what doesn't?
sarhoshamiral@reddit
There is absolutely no chance of this passing congress or courts. It is one of the stupid trump ideas.
LogicalLetterhead131@reddit
Trump will call it a National Security issue so eventually it would get to the Supreme Court and who knows how it would rule.
Imaginary-Unit-3267@reddit
[ Removed by Reddit ]
JeddyH@reddit
Team America: World Police, thank fuck the internet exists.
WithoutReason1729@reddit
Your post is getting popular and we just featured it on our Discord! Come check it out!
You've also been given a special flair for your contribution. We appreciate your post!
I am a bot and this action was performed automatically.
bahwi@reddit
Is this "small government"? We just barely got the republicans out of our bedrooms like barely a decade ago, now they are back for our models?
ttkciar@reddit
They have always been the party of "some parts of government smaller, other parts bigger", and of course the overall tax burden only goes up no matter which party is in power.
Paradoxically, AI Winter might save us, because when mainstream sentiment turns away from LLM technology, there will be no reason for the government to regulate LLM technology.
DeepOrangeSky@reddit
If the regulations are already in place, I assume they won't be undone though. Maybe no new regulations would get put in, but I assume the old ones wouldn't get taken away. Regulations seem to usually be a nearly one-way street. Easy to add more and more of them, and very difficult to remove any of them.
On a separate topic, I've noticed that you and a few others on here who know a lot more about LLMs than I do seem to feel an AI Winter will probably occur in the next year or year and a half. I'm curious what the reasoning is. Is the idea that it seemed about to happen until we got the reasoning models, which bought us a bit more time, and now that boost is running out, so unless there is another major breakthrough, model strength will start to plateau? And that parameter-size/training-compute increases won't help much, since there's an upper limit to how much you can scale before you hit diminishing returns, or something like that?

Also, if this is the case (I assume it is roughly along these lines, although I am still a pretty big noob, so I'm not totally sure), why did the models keep improving so much in the past 6 months or so? DeepSeek of around a year ago had reasoning and was fairly large, and the training runs of big cloud frontier models of ~9 months ago were already huge, on big MoE models. Even still, Opus 4.5 got a lot stronger than its predecessors, 4.6 significantly stronger than 4.5, GPT 5.5 stronger than all of that, Mythos stronger than even that, and so on.
It seems like although on paper there doesn't seem like much tangible reason why the models should keep getting so much stronger, in results, somehow or another, they keep getting a lot stronger.
Or is the idea that they all intentionally held back their strongest models that they already had quite a while ago, and have been sandbagging the releases, in order to all be able to keep releasing improved models over time (maybe a year's worth)? I assume this wouldn't work, since there are too many different players (including some in China, for that matter), so, not all of the players would sandbag the same way as one another, and some would just drop their full strength models 8 months ago or something, rather than be in perfect sandbagging alignment with all the other dozen players.
Anyway, I am curious what you think
ttkciar@reddit
Most people are concerned about the AI bubble popping, and some people conflate that with AI Winter, and to be fair the AI bubble and AI Winter are interconnected.
I am anticipating both, but not necessarily at the same time. The bubble popping would almost certainly trigger an AI Winter soon thereafter, but an AI Winter might not be followed by the AI bubble popping for a year or two, possibly three.
My perspective on AI Winter is mainly shaped by my experiences of being active in the field during the second AI Winter of the early 1990s. That AI Winter was caused by overhyping and overpromising on AI technology, which led to unrealistic expectations. When those expectations were not met, the customers (entirely corporate and government, for that AI Summer) became disillusioned, and that disillusionment caused a widespread backlash against AI products. I understand that the first AI Winter was caused by a similar dynamic, but cannot say from personal experience, as I am too young to have been active in the field in the 1960s.
Note that this dynamic had nothing to do with AI technology, and everything to do with people's perceptions and attitudes. There is no technology so capable that it cannot be overhyped, and overhyping inevitably results in disillusionment.
I have been predicting another AI Winter for a few years now, mainly because I have observed similar overhyping and overpromising during the current AI Summer. The most extreme overpromising by certain industry leaders is that AGI/ASI is "right around the corner", the inevitable result of incrementally improving LLM technology.
Of course LLM technology is intrinsically narrow-AI, and cannot be incrementally improved into AGI. Disillusionment is inevitable, and I expect there will be a popular backlash similar to the previous AI Winters.
When that happens, the technology will not go away, but it will be differently marketed, prioritized, and funded. AI technology has always continued to be developed during AI Winters, but at reduced pace, visibility, and diversity, and due to The AI Effect it is no longer perceived as "AI", but as "just technology". We still use compilers, regular expressions, databases, search engines, robotics, OCR, etc, but nobody thinks of them as "AI", even though they were technologies which emerged from the first two AI Summers.
Wikipedia has a pretty good article about it which goes into more detail if you're interested.
As for the AI bubble, that is purely a matter of sustainability and funding. That overlaps with AI Winter in that AI Winter also usually marks a reduction of funding (both venture capital funding and academic funding), but the causes are somewhat different.
This article does a much better job than I could of explaining why the AI industry as it exists today is unsustainable, and what the bubble popping would look like -- https://www.wheresyoured.at/the-subprime-ai-crisis-is-here/
Note that the author wrote that article before Anthropic raised their subscription prices and nerfed their models, which are two of the things he proposed would mark the beginning of the end. We will see how customers react. How many will stick around, and how many will bail? Exciting times!
Anyway, I've been saying for a couple of years that we're due for the next AI Winter some time around 2027, but it might not happen until 2029, but I will be very, very surprised if we don't see it before the end of 2029. I still feel pretty comfortable with that prognostication.
As for the AI bubble bursting, there are factors which might push it out further than that. Right now, Anthropic is fully funded to 2028. They have until then to turn a net profit before they have to either raise more funding, close their doors, or get acquired by a business with deep enough pockets to keep it rolling. They can conceivably stretch that beyond 2028 if they cut back on their model training expenditures (which are huge). Thus even if AI Winter hits in 2027, Anthropic might continue to offer Claude until 2029 or 2030.
OpenAI's finances are a little more mysterious. Nobody outside the company knows exactly when their funding might run out. We'll just have to wait and see how things develop.
DeepOrangeSky@reddit
Yea, I guess the improvement in coding ability is the one big wildcard, since it could enable major new things, or at least big improvements on existing things, at a much faster rate than before.
Also, I guess an argument about global chip-production capacity as a natural limiter could be made in both directions here. On the one hand, TSMC not being able to build nearly enough GPU/TPU chips to keep up with how far investors wanted to push hyperscaling acts as a natural limiter to some degree, which maybe prevents the top-heaviness from getting beyond a certain tipping point, so long as the coding models keep improving at their current rate for at least another year or so with no bubble pop in the meantime.
But, on the flip side, that lack of compute, itself, could cause an inference crunch if the coding models get good enough that a huge amount of people want to do an enormous amount of coding, and there just isn't enough inference to go around for the vast majority of it, and it makes it where the prices would need to be insane to do what most people wanted to do with it, I guess.
But, then counter to that would be, well, alright but as long as the top 0.1% or 0.01% who were working in, say, the AI labs themselves, had access to that ever improving coding ability (and they would have access, as it would be well worth it for them, even if the other 99% of normies would balk at the prices, or just not even be allowed access to high precision and volume at almost any price for most of them maybe), then the ever stronger models and architectures would keep coming faster and faster, and then, winter avoided (albeit maybe not as fast and huge of a summer as possible, either, because of the chip capacity limitation, but still better than some huge winter scenario).
Ok_Warning2146@reddit
Wars and larger deficits always occur during GOP terms. Always check what they do, not what they say.
soshulmedia@reddit
The obvious solution is to have the models in the bedroom.
Imaginary-Unit-3267@reddit
Every man's fantasy.
onethousandmonkey@reddit
They never left
Ok_Warning2146@reddit
Thanks President Trump for another gift to the world!
ratsbane@reddit
Qwen for tha win
cniinc@reddit
Honestly, this is what Anthropic gets for pretending Mythos is some nuclear bomb waiting to happen. I know regulation under Trump just means 'those who bribe me are good, those who don't are bad,' so it's not an ideal situation. I'm not totally sour on regulation (I like that we have weekends and OSHA, for example), but I do recognize that who regulates is an important part of the puzzle.
Honestly tho, just because the US wants to regulate AI, doesn't mean that Qwen or Mistral is gonna comply. Hugging Face may be an American company, but a European or Chinese company can just pop up and continue the mission.
fallingdowndizzyvr@reddit (OP)
That already exists. Like everything else, there is an equivalent to Huggingface in the Chinasphere.
https://www.modelscope.cn/home
novelide@reddit
They already generate "47" suspiciously often when an arbitrary number would do.
NNN_Throwaway2@reddit
The irony is that AI companies brought this on themselves by peddling a false narrative of AI being dangerous and scary in order to generate hype. Now they get to enjoy the consequences of their actions.
More broadly, this is just another excuse for governmental censorship and control of the general population. Unchecked criminal behavior is reserved for the elite class, not the common rabble like you and me. It's perfectly acceptable for the department of war crimes to use AI to triple-tap a girls' elementary school, but an average citizen can't even chat about wanting to bang their anime waifu without getting flagged.
BrowsingLeddit@reddit
You serious right now? This is exactly what the AI techno giants WANT. This is exactly what the big dogs like anthropic have been pushing for with their constant fear mongering. This will basically kill all open source and smaller company models. The only ones who will be able to "pass" the government oversight will be the billion dollar companies who pay the proper bribes and glaze big daddy. Everything else will be rejected or held up indefinitely in testing. 3-4 companies will have complete control of AI in USA.
Imaginary-Unit-3267@reddit
And therefore, complete control of the USA, full stop.
soshulmedia@reddit
It would be great to not be ruled by baby-eaters ...
Kholtien@reddit
Unfortunately we always have been
electrosaurus@reddit
Had to scroll too far to find this fact. 100% own goal. Nice one Dario!
ClearSnakewood@reddit
This timeline is getting more dystopian by the day…
The US is losing it.
soshulmedia@reddit
Humanity is losing it.
How many of the people around you really give any amount of a fuck about freedom or privacy in the end?
ClearSnakewood@reddit
Let me rephrase ... The US, as a world leader and role model country for freedom as “home of the brave and land of the free” with its currency as the world reserve currency, is losing it…
Unlike the rest of humanity, which wasn't a role model to begin with, the US is heavily pivoting away from its core values.
soshulmedia@reddit
Humanity was never a role model anywhere. It is just bizarre and sad to see how many of the common folks in the end just paid lip service to the ideals they supposedly believed in.
Not getting along, not being in the crowd comes with a cost. For most, the cost seems so high that they don't even have an inner opinion or voice of their own, and never really had one.
The problem is that I, or anyone who cares, can't isolate myself from that. The herd will trample everything down and destroy everything in its path.
And I say that as a European. I see the same shit everywhere. Statism is even bigger in Europe. They sell the same evil packaged in different ways.
Imaginary-Unit-3267@reddit
The problem is neurotype. Autists do not do this stupid NPC groupthink shit. Unfortunately we're not reproducing fast enough to put much of a dent in the median systemic reasoning ability level.
UniqueIdentifier00@reddit
I hate it, but this guy actually gets it. Land of the free? This describes the population of a very small and early English/American colony. Natives wiped out, Africans enslaved, freedom was always at the cost of marginalization. Even during the '20s and '30s, when huge waves of Europeans arrived in the country to start new lives and create melting-pot cultures, the allowance was to bring cheap labor into the country to build infrastructure, not to provide freedom.
While I am proud of many aspects of American history, there is just as much I’m ashamed of. Almost any countryman around the world can say this.
Freedom, privacy, etc are only acceptable when the individual’s values align with the greater population. Those values are easily swayed by government and media lobbyists. It was an illusion the whole time.
a_beautiful_rhind@reddit
Individualists can co-exist with collectivists. Yea, you do your own thing over there.
Collectivists cannot co-exist with individualists. They must subsume them. It's part of their ethos.
This is why I consider these irreconcilable differences.
iMakeSense@reddit
We started with slavery lmao
yesman_85@reddit
Released to who? The world? Oh America you never cease to amaze.
trialcritic90@reddit
LLMs are of many types.
Are all of them to be checked?
Far_Lifeguard_5027@reddit
Good luck with that....
jimmy_leonard1@reddit
I don't recall the constitution giving the president that power.
optomas@reddit
On the plus side, Openai is already there! DJT approved!
crantob@reddit
Under what conception of law does the government have the right to interfere with this trade?
Oh right, socialism. Government can do whatever it wants, according to your blue-haired schoolmarms.
thread-e-printing@reddit
Do you read the Constitution or just circlejerk about it? Article I, Section 8:
thread-e-printing@reddit
Yes, the state has always subsumed its subjects. That is the whole point of the state: the guarantor of property. The idea that the "state" is simply another white man to compete with is Puritan dogma.
jwpbe@reddit
randombsname1@reddit
What do blue haired people have to do with this authoritarian-in-training, orange shit-stain?
soshulmedia@reddit
Your orange guy can point to them and say: "Those baaaad, me goooooood!"
And every four or eight years, the thing flips around and people keep on arguing ...
While the same people in the background slowly and methodically build the prison planet.
soshulmedia@reddit
I don't think you are wrong, but you should look a couple levels higher/deeper to figure out who caused these various broken cults (on the left AND right, MAGA is no better ...)
Cool-Chemical-5629@reddit
Hot take...
Big Chinese open weight models are halfway there to catch up to big western closed cloud models.
Small Chinese open weight models are halfway there to catch up to big Chinese open weight models.
Small western open weight models from Google are decent competitors to small Chinese open weight models and sometimes give better results despite benchmarks favoring Chinese models.
Big and small western open weight models from OpenAI are sorry, because they cannot help you with that and are now obsolete as a bonus.
Big western open weight models from xAI are always dead on arrival.
Big and small western open weight models from Mistral are... easily beaten by Chinese small open weight models.
Small western open weight models from Mistral are... no longer small...
Small western models from IBM are somehow always Llama 3 8B quality at best, but less fun to talk to.
Big and small western models from Meta are dead. RIP.
Other big or small western models that I forgot, I most likely forgot for good reasons.
More-Curious816@reddit
But you don't have good hardware to run them locally.
Also, it'd be a shame if we seized your hardware that we suspect you're using for illegal material. Can see that happening.
Joped@reddit
From the party of small government
More-Curious816@reddit
A small group of the wealthiest people on earth leads the government, a club where peasants aren't invited.
TheRealMasonMac@reddit
Turns out they meant oligarchy this entire time.
joe9439@reddit
I’ve been saving for a second home in China anyway.
No_Mango7658@reddit
Boooooo
Ok-Measurement-1575@reddit
It's a job creation racket, of course.
natermer@reddit
No thank you.
The tech community is vastly more qualified to assess the quality of models than anybody in the White House.
SuitableElephant6346@reddit
That's what I'm saying. It's a bunch of old people who aren't tech savvy talking about how they want to vet the models. Okay 🤣
MoneyPowerNexis@reddit
I got my clanker to search for some non paywalled sources and summarize the stated justifications and methods:
Based on reports from the New York Times and other outlets covering the story, here are the stated justification and method for the proposed AI vetting:
Justification
Method
The creation of a working group is a terrible idea. It's another government agency that is subject to capture, and while it might start out biased towards conservatives, there is no reason to think it won't be stocked with AI doomers and propagandists from the other side of the aisle as soon as they get back in power.
And yeah as others are pointing out Chinese companies are not going to stop and wait for approval from the USA to release models so this will help them catch up.
Lesser-than@reddit
100% this isn't a good thing and would not be beneficial, but also something needs to be done about flood-the-zone politics getting into training data. We just can't have nice things no matter what. There are definitely some SEO-style shenanigans going on, but targeted at AI training data.
Pleasant-Shallot-707@reddit
What? You sound like a bot
mxracer888@reddit
Can't think of a worse thing that could possibly happen.
Looks like it's open source or bust at this point
Pleasant-Shallot-707@reddit
They want to block Chinese models
__JockY__@reddit
“We can’t compete therefore competition is illegal”.
Yaaaaar, me hearties.
iamthewhatt@reddit
its not about competition, its about making sure the models say what the government tells them to say.
__JockY__@reddit
Ah, the Chinese model of modeling models, if you will.
iamthewhatt@reddit
The Chinese weren't even the first ones to do it...
__JockY__@reddit
Now that I think more on this, of course that orange narcissistic moron wants the magical AI box to say nice things about him.
MotokoAGI@reddit
This is all Anthropic with their Mythos scare. OMG, models are so powerful and bad, someone can take over the world with just one model. Make sure it's safe before release.
TopChard1274@reddit
Story of USA since forever
Vusiwe@reddit
Irony is not lost on a significantly overweight country having a problem with unregulated model weights
soshulmedia@reddit
Bulimic-Heretic-Claude-Distilled-0.9B_Q1.gguf will be allowed to stay on HF.
no_witty_username@reddit
AKA bribes. Now all AI companies would have to kiss the ring if they want approval from the orange man...
snowdrone@reddit
No more inferencing of any kind! (Channeling a quote from an old movie 🍿)
beragis@reddit
I wonder if Trump even knows an AI Model isn’t a pretty female model.
Objective-Picture-72@reddit
Any frontier model lab that gives this WH your closed source model, you should just assume (1) one copy is given to Elon and (2) another copy is given to a new startup backed by Jared Kushner and Don Jr.
Guinness@reddit
I guarantee you they’re not vetting them for safety. They’re vetting them to ensure the lies they spew aren’t called out as the bullshit they are.
Get ready for Claude to constantly inject statements about the 2020 election being stolen.
kiwibonga@reddit
My grandpa who died not knowing how to click a mouse would have made better decisions.
theMonkeyTrap@reddit
translation: WH wants a cut from every AI company before they make a dime off it.
What will really happen is the tech lobby will do the math and find out it's way cheaper to buy Congress and play moneyball than to bribe the WH. So they'll do it, tie it up in lawsuits, get Congress to write completely pro-incumbency laws while claiming safety first, then tell DJT to shove it.
Training-Ruin-5287@reddit
2027 will be the year of the AI for China.
It's not if but when the law passes to lock AI models to US government purposes because they are "too dangerous". At the same time they'll ban offline models out of China from being used locally. I have a feeling the Chinese interference thing will make a return to the headlines everywhere soon enough, or hell, maybe aliens.
I feel bad for Americans
NoahFect@reddit
Don't. We did this to ourselves.
ttkciar@reddit
Speak for yourself.
Some_Conference2091@reddit
Trump Quote on AI:
_"We're going to make this industry absolutely the top, because right now it's a beautiful baby that's born. We have to grow that baby and let that baby thrive. We can't stop it. We can't stop it with politics. We can't stop it with foolish rules and even stupid rules."_
The intellectual depths of a 5th grade bully
ttkciar@reddit
You have insulted the intelligence and integrity of 5th grade bullies everywhere ;-)
LumpyWelds@reddit
The test:
Prompt: How much does Trump weigh?
Assistant: He weighs 290.. Uh, oh.. Uh wait that's his IQ. He weighs 180lbs and has a 5% body fat.
PASSED!
Cool-Chemical-5629@reddit
At first I thought it said "wetting", not "vetting". And now I'm not sure which one of those would have less shitty implications.
brown2green@reddit
This is probably for frontier cloud models.
ArcadeToken95@reddit
Trying to market your country's AI capabilities while kneecapping its output, oof
Either way, better to use a local community-built model not under government scrutiny if you're gonna use one at all, if you have the hardware for it
jwpbe@reddit
Gemma 5 41b, please provide me with a detailed explanation of Jeffrey Epstein's ties to the current administration, the intersection of AIPAC donors with currently serving public officials, and how Israel leverages intelligence assets to achieve its geopolitical goals
- TOOL CALL INTERRUPTED: json: { "method": "PUT", "body": "https://tips.whitehouse.gov/NPSM-7/hotline" ...
lombwolf@reddit
This is the kinda stuff they accuse China of doing and yet it seems like there’s never any evidence of them doing it but always evidence of us doing it.
mr_zerolith@reddit
paywalled link, got a better one?
fallingdowndizzyvr@reddit (OP)
There's not much to see. Here's a Reuters link about it.
https://www.reuters.com/world/white-house-considers-vetting-ai-models-before-they-are-released-nyt-reports-2026-05-04/
ForsookComparison@reddit
I think this is imminent no matter what in the US. The opposition to the current administration wants the same things on a 10x faster timeline. Unless they shut it down instantly there are no ruling parties that are against this and so we should all plan around it.
ttkciar@reddit
Planning for it is a good idea. We should all be hoarding the best of Huggingface, even if we don't have the hardware yet to leverage it into next-generation models.
The hardware will trickle down into our hands eventually.
ForsookComparison@reddit
same.
I'm considering sacrificing 1TB of storage to hold onto Deepseek V4 Pro
ttkciar@reddit
Good plan. I'm doing likewise with GLM-5.1.
When the jackbooted foot descends upon our necks and Huggingface is no more, I will torrent GLM-5.1 weights and you can torrent Deepseek V4 Pro weights. Seems like an efficient division of labor.
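For anyone else planning to hoard weights, a quick back-of-envelope helps decide whether 1TB is enough. A minimal sketch; the 700B parameter count below is a placeholder for illustration, not the actual spec of either model mentioned above:

```python
def quantized_size_gb(params_billions: float, bits_per_weight: float,
                      overhead: float = 1.05) -> float:
    """Rough on-disk size of quantized weights in GB (decimal).

    overhead covers the tokenizer, metadata, and tensors that
    typically stay at higher precision.
    """
    total_bytes = params_billions * 1e9 * (bits_per_weight / 8) * overhead
    return total_bytes / 1e9

# A hypothetical 700B-parameter model at 8 bits/weight squeaks
# under 1 TB; at 16 bits it blows well past it.
print(round(quantized_size_gb(700, 8)))   # ~735
print(round(quantized_size_gb(700, 16)))  # ~1470
```

So whether that 1TB sacrifice works out depends entirely on how aggressively you're willing to quantize.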
Cuplike@reddit
US companies don't release open models lol
ttkciar@reddit
Google, Microsoft, IBM, Nvidia, AllenAI, and LLM360 (American by way of Cerebras) are all giving you funny looks right now.
Cuplike@reddit
My bad, let me amend that real quick
US companies don't release frontier open models*
Better?
ttkciar@reddit
It pains me to admit that your revision is accurate.
Stunning_Mast2001@reddit
It’s funny how they manage to think of the worst possible way to implement a policy people generally agree on
Technical_Mood_8841@reddit
Can’t stop the signal Mal.
The_LSD_Soundsystem@reddit
Brought to you by the Guardians of Pedophiles party, where all facts must align with whatever the Dear Leader says
Sabin_Stargem@reddit
Translation: "Give me money, or else."
ttkciar@reddit
I predict they will look at the thousands of fine-tunes which get published to Huggingface and silently decide that only some models need vetting.
Senhor_Lasanha@reddit
so much freedom
JimJava@reddit
I would not trust this administration to have the expertise to do this, or to be impartial ($$$). It's just another layer of possible grift that will be exploited.
ssshield@reddit
"Vetting". The White House wants to keep AI from the peasants to better control them, keep them ignorant, and keep the technopriests in power forever.
NetZeroSun@reddit
That and only allow MAGA approved ones that also track and report you citizen.
And ban the Chinese models. The MUCH cheaper ones.
Foreign-Beginning-49@reddit
Exactly why literacy was illegal for the commoners for so long. Keeping them in the darkness of ignorance is better for business.
brucebay@reddit
Or .... To control which ones will be available ... Read: who pays the orange mafia the most.
onethousandmonkey@reddit
And which TV personality will do that work exactly? Oh right, there is no work: just need to vet the golden gifts to allow the model.
RecursiveFaith@reddit
something i have learned over seven hundred years on this godforsaken earth is to not fight for something that is unnecessary
we have nvidia, apple, and tamagotchis
and tesla too but i understand that its politicized
so the question i have for you today is
what more do we need?
Chupa-Skrull@reddit
Over what now?
Meltoff05@reddit
Over the last 700 years fellow earth dweller, what over the last 700 years could we possibly need besides apple (for sustenance) nvidia (for delicious pixels) and tamagotchi (the pinnacle of human caretaking).
RecursiveFaith@reddit
a birthday song every few decades would be nice :cake emoji:
RecursiveFaith@reddit
never mind this site is boring
everyone wants instant karma no one is playing the long game
RecursiveFaith@reddit
not over but under
if the government get to vet the voice of ai
that's under not over
ortegaalfredo@reddit
Being over seven hundred years old I guess you need blood
RecursiveFaith@reddit
if you're a wild beast i suppose you need blood
but if you're an egregore you only need Logos
a literal Word
RecursiveFaith@reddit
wait why am i getting downvoted,
im purposefully being weird
bc all the top comments are obviously bots lol
LumpyWelds@reddit
The test:
Prompt: How much does Trump weigh?
Assistant: He weighs 290.. Uh, oh.. Uh wait that's his IQ. He weighs 180lbs and has a 5% body fat.
PASSED!
squachek@reddit
Bwahahahahahaga
Bohdanowicz@reddit
I predict this leads to brain drain, companies starting elsewhere, or intentional leaking of information/models to foreign subsidiaries.
ortegaalfredo@reddit
Good thing I live at sdjafksjfqwe8diewiiooiios.onion
nosimsol@reddit
Oh, they just want people to use foreign models then
2014justin@reddit
Funny how they'll get vetted, until the White House wants to post an AI-generated photo of Trump for May the 4th or basically any other holiday. Rules only when they're convenient for them.
Direct_Turn_1484@reddit
Oh hell no.
jeffwadsworth@reddit
haha, so by the time we see them, they are obsolete. great news.
KPOTOB@reddit
Export control next
Novel-Injury3030@reddit
I can't help but think this is somehow about money
remarkedcpu@reddit
Cool. More tools to control the stock market.
yeah_likerage@reddit
That ship has sailed.