DeepMind will delay sharing research to remain competitive
Posted by mayalihamur@reddit | LocalLLaMA | View on Reddit | 123 comments
A recent report in the Financial Times claims that Google's DeepMind "has been holding back the release of its world-renowned research" to remain competitive. According to the report, the company will adopt a six-month embargo policy "before strategic papers related to generative AI are released".
In an interesting statement, a DeepMind researcher said he could "not imagine us putting out the transformer papers for general use now". Considering the impact of DeepMind's transformer research on the development of LLMs, just think where we would be now if they had held back that research. The report also claims that some DeepMind staff left the company because their careers would be negatively affected if they were not allowed to publish their research.
I don't have any knowledge of the current impact of DeepMind's open research contributions. But just a couple of months ago we were talking about the potential contributions the DeepSeek release would make. As the field gets more competitive, it looks like the big players are slowly becoming ~~Open~~ClosedAIs.
Too bad, let's hope that this won't turn into a general trend.
Equivalent-Apple5656@reddit
What I was thinking is that they can wait for other researchers to publish their papers, DeepSeek or whoever, and then DeepMind can quietly publish its own paper without delay, nobody the wiser, and declare that's what they had already done six months ago.
t98907@reddit
Jürgen Schmidhuber had already published ideas similar to Transformers. Even if Google had delayed the release of the Transformer paper, a similar concept would likely have emerged from another research group.
Considering the subsequent careers of the Transformer authors, it's clear that publishing the paper significantly benefited them. Given that even Google struggled to release a fully polished Gemini model in a timely manner, delaying the publication of the Transformer would likely have resulted in a valuable technology remaining buried within Google for many years. Such a delay would have been a considerable loss for the AI community. Fortunately, that didn't happen.
jubilantcoffin@reddit
Lots of "revolutionary" things that DeepMind supposedly did were variations on research others had already published, but bolstered by Google-sized hardware resources and PR machines.
This stuff is massively overrated.
Appropriate_Cry8694@reddit
Yeah, that's inevitable. They'll definitely become more closed and try to build a regulatory moat because of "safety".
atineiatte@reddit
>In an interesting statement, a DeepMind researcher said he could "not imagine us putting out the transformer papers for general use now"
Neither can I. If only capitalists had realized the full value of the research earlier :(
BusRevolutionary9893@reddit
LLMs and many other things could never have been created in a socialist "utopia". That evil capitalism is what funded the creativity and the incentive.
Salt-Powered@reddit
LLMs require extensive funding precisely because of evil capitalism. In a "socialist utopia", as you put it, we wouldn't be so dependent on proprietary technology, and the available LLMs would be leaps and bounds better due to shared research, processing power (something like folding@home), and talent. Why do you need to buy an NVIDIA GPU, and why aren't they freely available again?
FickleAbility7768@reddit
In a socialist state, Nvidia would never have been founded. The government would never fund some Chinese mf who wants to create compute different from the CPU. CPUs are amazing and they're doubling every 18 months. It would make no sense to waste the people's money on whatever this GPU thing is. Maybe they'd give it a little money, but it wouldn't be sufficient.
Salt-Powered@reddit
Again with this. I don't know where you are all getting this shared "government dictatorship" fantasy.
Not only would people in a socialist utopia be able to fund things of their own volition, but the government would also be interested in the actual well-being of its people, and entertainment is included in that. People don't exist to work; they exist to exist, and that requires a varied array of activities along with solid leisure.
Gaming is a very efficient form of leisure, so it would be invested in. GPUs also have other uses than gaming.
FickleAbility7768@reddit
I’m talking about in the 90s. GPUs were a waste by most standards. Heck even AI was a pipe dream; especially neural networks.
Socialist governments invest with consensus. As in majority should agree to invest in something. For example, space race or highways.
But the majority of innovation happens when you are contrarian and right.
Salt-Powered@reddit
Again: governments wouldn't have monopolies on investment and/or production. A company could easily exist; it would just be heavily regulated, and the founder wouldn't become a billionaire from it.
Even so, people invested in those GPUs with their wallets under capitalism, so I don't see why they wouldn't happen under a different system. They would be a minority stake at the beginning, just as under capitalism, and would gain further prominence through social interest.
FickleAbility7768@reddit
The only reason VCs make risky investments is that 1 in 100 will become so big that the other 99 can fail. They can only recoup the failures of the other 99 by making a fuck ton from that 1 big hit.
A government would never make that risky bet. And since investors couldn't make huge returns, they wouldn't take as much risk either. You'd turn them into current European investors, but even worse. There's a reason Europe doesn't have innovative companies.
Salt-Powered@reddit
I'm sorry but I can't discuss what doesn't make sense. I guess Mistral, Stability etc don't exist for you.
The only thing American investors seem to be contributing to society is higher levels of debt. I hope you don't need medical attention soon.
Equivalent-Bet-8771@reddit
China is socialist and they're rapidly increasing their capabilities.
bolmer@reddit
"socialist"
Equivalent-Bet-8771@reddit
They're not capitalist and they're not actually communist.
Olangotang@reddit
Officially they are "State Capitalist".
bolmer@reddit
"Socialism with Chinese characteristics" officially. Which is State Capitalist by everyone else's definitions.
FickleAbility7768@reddit
China is not socialist. Deepseek was not started by a government
thetaFAANG@reddit
I get that perspective, it's just that they would never have been able to rationalize developing the infrastructure necessary to leverage LLMs; they would never have found it, because it's a single organization run by committee.
Whereas capitalist societies are infinitely numerous organizations, relying on the permission to fail to incentivize taking a chance at making something useful. It's selective evolution in an infinite, ongoing Cambrian explosion of pathways.
Communist societies are then able to leverage some outcomes for their own efficiencies.
It's not really about the ideology, it's about how many organizations are competing: 1 competing with itself, versus 5 in one sector, versus 500, versus 500,000, etc.
BidWestern1056@reddit
yes but they are not infinite, and there usually aren't even several options, because of the tendency towards monopolization in industry. if we had a functioning govt that prevented such monopolies then we would have proper competition, but the market makers write the regulations that make it impossible for newcomers to even start.
thetaFAANG@reddit
Yes, capitalism is vulnerable to a winner-take-all outcome. That doesn't negate how that winner got there, amongst infinite permutations of competitors.
Salt-Powered@reddit
Competition doesn't work in capitalist societies quickly enough, or we wouldn't be where we are today. Collaboration however, would go a long way. I'm sure you would prefer to have better working conditions and salary as much as your boss would prefer to have your loyalty.
Trennosaurus_rex@reddit
Yeah probably not. In a socialist utopia no one would be working.
Purplekeyboard@reddit
Robots do all the work?
Trennosaurus_rex@reddit
Who pays for the robots?
Salt-Powered@reddit
I don't understand why that would be the case, as there is still food, shelter and medication to produce, and that wouldn't happen magically. It's not about living off the government, but about working together towards a common goal.
Example:
Phone R&D has slowed down because it's not profitable, and manufacturers offer a confusing selection of models to get consumers to pay for the more expensive ones.
Or
There could be a limited amount of phone models, made to last and easier to repair with some modularity sprinkled in.
Honestly, you could have looked this up yourself.
alongated@reddit
In a socialist utopia you wouldn't be able to convince the masses to spend a percentage of their taxes on something like LLMs. Not only would you be unable to convince the masses, you wouldn't be able to convince the 'higher up' folk either. That is why it took so long for something like this to happen.
Salt-Powered@reddit
Then it's not a utopia? Also, convincing people to help is easier when the tools are there to help them, not to further their unemployment.
Turkino@reddit
We probably wouldn't be where we are currently when it comes to the field if it wasn't publicly shared.
mycall@reddit
I truly hope open source models will be the way forward.
Olangotang@reddit
China is going to pop the bubble with their drive-by open releases, possibly adding onto the (immediate) recession woes. They don't need profit, just to take down the US Economy.
Iory1998@reddit
Stop regurgitating what you hear in the US media. Why would China want to tank the economy of its biggest trade partner? How can that benefit it? Can't China just truly want to help advance the world? Or is that inconceivable for any country except the US?
I could understand the argument that China might benefit from cheap software development, since you need HW to run it. And China is the world's largest HW manufacturer. Imagine if AI models could be incorporated into every single electronic device. Who would benefit from that? Well, the world's largest HW manufacturer. Why not let software become a commodity, so everyone can easily develop software that fits any HW, instead of one country controlling most of the software?
TheElectroPrince@reddit
Even the US is not helping advance the world out of the goodness of its heart. Every country is out for its own interests, no matter what systems of government they use.
Of course China would want to wreck America's economy, the same way that America wants to wreck China's economy. It just so happens that China is less inhumane in doing so, compared to America's wage slavery, lack of proper healthcare, rapidly diminishing political freedoms (and upcoming genocide of minorities), and the brutal neocolonization of MANY overseas countries.
No country is truly innocent and they're all at each other's throats for world domination and securing the safety of their citizens and systems of government.
SidneyFong@reddit
Projecting a lot.
siwoussou@reddit
this might be true for now, but AI could presumably change our perspectives. especially if it comes with efficiency enabled abundance
InsideYork@reddit
The industrial revolution happened over 100 years ago already.
GlowiesEatShitAndDie@reddit
Cope.
Hey_You_Asked@reddit
bruh China released the number one economically empowering thing to the world for fucking free and with an open license
as long as that stands true, you have no basis for what you're saying
a_beautiful_rhind@reddit
China are going to be bwos and make anime real.
mycall@reddit
I hardly think the best model can take down the US economy, but it is a challenge nevertheless.
curryslapper@reddit
exactly. it's not like Google didn't have the resources to do it at that scale.
it's that you need an ecosystem to iterate and progress the research
Tim_Apple_938@reddit
Sam Altman is literally a venture capitalist 😂
Expensive-Soft5164@reddit
That's Google leadership for you: they make $4M a year for their "vision", then lay off the people under them.
Baphaddon@reddit
I mean they did do a week of insane releases regarding their research
kvothe5688@reddit
i mean six months is good. The number of research papers they have published in the last 2 years is second to none. if other companies were eating your core business by using your research, any company would take this strategy. a six-month embargo is not evil; not publishing research at all, like most other ai companies do, is definitely evil. there is a risk of losing search to chatbots already. also, losing chrome would definitely hurt them.
Snoo_64233@reddit
There is nothing evil about not releasing anything at all. They paid for this research. Their money, their choice.
Also don't cry about people using their work, if they release it for free.
Podalirius@reddit
That way of doing things is stupidly inefficient, enough so that most of the researchers smart enough to do the research consider it immoral. Would you want to spend your career researching something someone else has already discovered? Does it really not seem like a waste to you?
Snoo_64233@reddit
"Would you want to spend your career researching something someone else has already discovered?"
I have no clue what you're talking about.
InsideYork@reddit
There are more than market forces. Researchers want to publish.
Lucyan_xgt@reddit
Keep licking those boots goddamn
Snoo_64233@reddit
So you want to work for free?
mexicanocelotl@reddit
Lol do you know how they trained those models? On whose data?
mayalihamur@reddit (OP)
For now, it’s six months. But once principle gives way to "staying competitive", you’ll soon see it stretch to a year, then five, and eventually, indefinitely. It is a race to the bottom.
allegedrc4@reddit
Then you do the research and release it for free. Easy, right?
tedivm@reddit
The only reason I don't see this happening is that you can't keep talent if you aren't willing to let them publish, and you certainly can't recruit talent that way. A six month delay isn't going to bother most people, but a year or longer will.
starfallg@reddit
That's not a big factor once your team has enough recognition.
virtualmnemonic@reddit
It depends on how big the team is. Is the rapid progression of AI we've seen the result of a large joint collaborative effort or a few brilliant minds? If the latter, they will definitely want the name recognition for their work.
Apprehensive_Rub2@reddit
Slippery slope fallacy. If they were interested in that kind of disingenuous IP protectionism, they wouldn't be releasing this statement; they would just include less and less info in their research papers, à la Meta.
To me this seems like they very intentionally want to avoid that outcome, but (like me) suspect that Google has leapfrogged them in reasoning benchmarks by pretty directly cribbing their RL research and having way bigger datacenters.
Not saying Google definitely did do this, I am saying if I was the product manager for Gemini when r1 came out, I'd be an idiot not to do this.
farmingvillein@reddit
Yeah, but flip side is they have very few ways to keep their research from leaking into the community, at least in the current IP climate.
6 months honestly is probably close to the maximum they can realistically pull off for anything deeply material.
mexicanocelotl@reddit
Lol sounds like a skill issue from closedai. Deepseek publishes their research...
Iory1998@reddit
I agree with your take that labs may take steps to protect their own research. That's appropriate.
Though I believe DeepSeek has published the most papers in the last 2 years.
cyan2k2@reddit
>not publishing research at all like most other ai companies are doing is definitely evil
Who is "most"? I literally don't know any important player who doesn't release papers.
Also, an embargo won't help. It just slows down collective validation and iteration. Most major scaling leaps were only realized through years of open sharing, scaling laws, data choices, etc. You know, the kind of stuff that's hard to evaluate and benefits from multiple data points collected by the whole community. Even OpenAI knows this and published arguably the two most important papers in regard to LLMs.
Take "Attention Is All You Need". Between that paper and GPT-2, more than six months passed, and Google did absolutely jack shit with it because they didn't believe in scaling or emergent abilities.
So keeping the paper private wouldn’t mean Google would’ve run OpenAI’s experiments. They probably wouldn’t have, because scaling was basically the opposite of the direction DeepMind was focused on at the time. So we'd either still be playing with BERTs and discussing sentiment analysis all day, or at least the last few months of progress wouldn’t have happened yet. But Google still wouldn’t have a moat, and even in the worst-case scenario, 100% privacy, not even closed-source online models, they still wouldn’t know what they actually discovered.
But in no scenario would the field be in a more advanced state.
binheap@reddit
Afaik, OpenAI has not really released papers recently. Their index seems to suggest a bunch of product releases.
https://openai.com/research/index/
Anthropic kind of does but probably not anything that you can use to improve your own LLMs. It's a lot of interpretability research which is important but probably not going to be embargoed by anyone.
GreedyAdeptness7133@reddit
I always wondered why companies didn’t do something like this already. But it could slow down research given the benefits of getting external input on your research.
_supert_@reddit
Even academic collaboration with industry has a worse lead time.
LagOps91@reddit
yeah, very disappointing. holding the entire field back just to make more profit. but then again, if you think you lose all your advantage by writing some papers, i suppose the gap can't have been too large in the first place.
thatonethingyoudid@reddit
Companies like OAI built their whole business off of the research DeepMind freely shared in 2017. Google realized what a massive fuckup this was from a biz standpoint.
"Meanwhile, huge breakthroughs by Google researchers—such as its 2017 “transformers” paper that provided the architecture behind large language models—played a central role in creating today’s boom in generative AI."
Can't blame them for wanting to re-gain and protect the lead in the field -- which will end up being the most valuable tech of this century (AGI).
Amgadoz@reddit
This is major BS. OpenAI built its business from the hard work of its talent and its religious belief in scale. Google had plenty of time to train GPT-1 before OpenAI did. They had plenty of time to train GPT-3 after the release of GPT-2, but they didn't.
A core contributor to GPT-3 said he was afraid Google would train a GPT-3-level model before OpenAI, given their resources (compute, data, talent, money), but they never did.
ab2377@reddit
let them keep all their talent and investment, take away the attention paper, and tell me where they'd get: nowhere near chatgpt's success.
paulo2p@reddit
That OpenAI doesn't exist anymore
thrownawaymane@reddit
Yes, Google wasn’t hungry. They didn’t have to be.
They do still get to be frustrated that people built on their land.
Tbh I still think Google taking on all of the reputational risk of a GPT rollout going bad would have been catastrophic; it is better for them to have a foil that's a startup.
CoUsT@reddit
I agree partially but then many people took upon their work and improved things, found new things, just made things better overall.
Open collaboration is great, it just sucks they had very little from opening the baseline ground work to the public.
Maybe they could use patents in some way, so that anything built on top of their work earns them a few % from the companies that use their research?
xugik1@reddit
The transformer deep learning architecture was invented by Google Brain researchers in 2017, not DeepMind.
slightlyintoout@reddit
They're not holding anyone back by not immediately publishing research... The 'entire field' is still free to do whatever research they want.
Hundreds of billions of dollars in value has been created on the back of 'attention is all you need'. OpenAI wouldn't be anywhere near where they are without it. Meanwhile, OpenAI has closed models etc.
I think it's a perfectly reasonable thing for Google to do.
Ansible32@reddit
Gemini wouldn't exist if they hadn't released the "attention is all you need" paper. All those "hundreds of billions of dollars in value" wouldn't exist. How much poorer will we all be (including Google) because of their stinginess?
slightlyintoout@reddit
Attention is all you need was released in 2017!!!
But yeah sure let's all get upset about them sitting on research for six months.
Five years from now we will be AT WORST delayed by 6 months from where we would otherwise be, assuming no one else is doing any other research in the meantime.
Ansible32@reddit
That six months number seems meaningless, I expect they will be sitting on things much longer than that if they are actually worried about people playing catch-up. The article says they wouldn't have released the transformers paper at all, which seems plausible.
RobbinDeBank@reddit
It’s a shame that their future papers will be 6 months old when they are released, but that’s miles ahead of ClosedAI's 0 papers. As long as DeepMind keeps publishing, I’m fine with it. They’ve been at the forefront of AI research for such a long time, with so many valuable contributions to the field.
diligentgrasshopper@reddit
And then you have deepseek open sourcing their flagship AND the entire research behind it for other companies to directly make money off of.
nderstand2grow@reddit
I mean, they have no obligation to share their work publicly and for free, just the same way companies don't have to release any open source models either.
Inkbot_dev@reddit
They will lose very intelligent researchers if they decide to go that direction. Being able to publish is quite important to a lot of people.
BootDisc@reddit
Yeah, but at some point, corporate espionage or just company intermingling takes over and you might as well share.
Ultramarkorj@reddit
Damn, only now have people realized: the AI "ELITE" crowd is 10 years ahead of us, keeping a bunch of enthusiasts excited. They've already worked out how to coordinate; you can see it's always in sequence... and the prices are all similar. Only OpenAI, the actual leader in AI, set that absurd price, because it's dictating the race.
But we're in a coordinated theater lol
GamerGateFan@reddit
The timeline might have been out of sync a little, but there were unoptimized recurrent neural networks combined with attention around when Google published their paper.
OpenAI would have taken the pre-Google-paper stuff, made it generative, and eventually gotten it optimized regardless of the paper. Maybe they would have gone down BERT's path first, but they had the people who would probably have gotten to the generative approach eventually.
The only thing holding back papers does is keep the people who are actually willing to do the work from moving forward as fast. Because it is obvious that Google was not willing to do the work, and it is likely they are so dysfunctional that they will continue to be unable to realize the gains they are imagining.
YearnMar10@reddit
„and eventually optimized them“
Yes, that’s how research and science work. Even if there are pretty smart people at DeepMind, this will just delay overall progress in the field. But it’s a competitive company, after all…
charmander_cha@reddit
It's always good to remember how the community loves to talk nonsense like "competitiveness is good".
When it should be talking about how collective, community-driven work, with a free flow of information, is best for humanity.
Whoever asks for competition is just another accelerationist idiot hoping that humanity will end, because the only plausible alternative for humanity is that everyone has the right to access information, so we can all enjoy the things that are the fruit of humanity, not megalomaniacal companies that should be destroyed.
Evening_Ad6637@reddit
I totally agree with your comment. And I really hate reading "competition is good" every time.
Yes capitalist competition can certainly be a motivation, but it is an extrinsic motivation and as such it promotes progress mainly through people who love the attention and fame and not the underlying topic itself. Such a system also rewards narcissistic behavior and facilitates the formation of monopolies. This system is poison for the development of humanity and its cultures driven by genuine diversity and creativity.
Capitalist competition based on envy and jealousy makes it almost impossible for people with intrinsic motivation to become relevant and gain recognition. Many people seem to forget this when they supposedly wish for more competition…
CoUsT@reddit
While true, it's good to remember that people are competitive in nature and it's hard to just group up together as "humanity" - a collective - and work on things together. Someone along the way will certainly try to exploit their position and just make money or whatever.
In ideal world we would have that global collaboration but the second best thing we can get is competition.
charmander_cha@reddit
Where have you been all this time? It's nice to browse Reddit knowing that there are people with this mindset and not just far-right scum.
mikew_reddit@reddit
The opposite of competition is a monopoly.
I don't see how a monopoly is any good because that removes all pressure on pricing.
SwagMaster9000_2017@reddit
On the topic of safety, can someone explain how everyone having access to dangerous AI is more safe than just big corporations having access?
I don't trust Google, OpenAI etc. but I don't trust the general public either given how quickly safety and censorship guardrails get taken off open models.
robberviet@reddit
Totally understandable.
spac420@reddit
oh have the turns have tables lolol
cnydox@reddit
6 months? It will soon become permanent if you really want to be competitive
SquareWheel@reddit
Considering how much advantage they lost by publishing their once-world leading research, I can understand it. Six months is still quite reasonable, and better than we see from OpenAI and others in the commercial space.
Serprotease@reddit
In this field where everything is going fast, from a researcher point of view, 6 months is quite some time. There are no rewards in publishing second.
You can be sure that openAI bled talent because of this policy and that quite a few researchers will look for other places to work after this announcement.
mayalihamur@reddit (OP)
This is fake competitiveness and I believe engineers fail to understand the social complexity behind real competition. Competition dies when people try to keep their research to themselves and on the contrary thrives when findings and advances are publicly presented, discussed and enriched in an uncontrollable, contingent environment.
Once their minds are corporatised, I think people lose the ability to acknowledge that we have rapidly evolving LLMs thanks to this ongoing exchange between ideas, not merely because some indispensable geniuses in DeepMind invented the transformer model. DeepMind is practically saying "I am going to benefit from whatever free, open research there is but will keep my own closed."
OpenAI became ClosedAI, and I am afraid DeepMind is on its way to become ShallowMind.
TheRedfather@reddit
The funny thing is that back in 2023 Google had an internal memo leaked that said this:
“The uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch.
I’m talking, of course, about open source. Plainly put, they are lapping us. Things we consider “major open problems” are solved and in people’s hands today.”
(Source for the quote: https://semianalysis.com/2023/05/04/google-we-have-no-moat-and-neither/)
Surely then DeepMind knows that open source is coming for them and is trying to limit that. Quite a shame.
doorMock@reddit
This "open source" you are talking about is still very dependent on mega corporations publishing their models and research. Universities barely mattered in the LLM field, and I don't know of any breakthroughs coming from some random GitHub profile. The breakthroughs came from Google, Meta, Microsoft, Deepseek and so on.
Linux doesn't need funding to progress, LLMs do though, so I don't know what you are laughing about.
TheRedfather@reddit
I'm not laughing? Literally the opposite - I called it a shame.
You do realise that not all open source comes from random Github profiles? You seem to be conflating open-source with for-profit. Many of the same mega corporations that you quoted have pushed open source in the past for strategic reasons (e.g. building an ecosystem as with Android or creating new standards/protocols as with MCP), and it's helped create competition, scale and innovation.
Zuckerberg has himself been vocal about Meta wanting to be open source (or at least open-weight). And one of your examples, Deepseek (which is very much not a mega-corporation but until recently a startup launched by a hedge fund manager with a fraction of the funding), is a case-in-point that smaller players CAN find smart ways to be competitive. There's also a lot of open source tooling being built (by for-profit startups) around the LLM ecosystem like Firecrawl, Browser Use etc.
You're correct that the wider open source community is reliant on the mega corporations releasing their models and research, in part because training foundational models is expensive (for now). But there's also an argument to make that the big corporations that choose to wield open-source/open-weights to their advantage could win.
bill78757@reddit
I often think about what it would be like if ChatGPT was the only llm and nobody outside openAI knew how it worked
OpenAI would for sure be the most valuable company in the world, the hype would be insane
foldl-li@reddit
Is Google acting fast to become as closed as possible?
mrtie007@reddit
sounds like MBA meddling. flashback to the time IBM tried to patent the fast Fourier transform before realizing Gauss had discovered it 200 years earlier. these things are discovered, not invented; engineers realize this, MBAs don't.
defaultagi@reddit
Transformer came from Google Brain, not DeepMind
ionthruster@reddit
Google Brain was merged with DeepMind to make Google DeepMind, so "we" works for both of their past incarnations.
__Maximum__@reddit
This is what closedAI did. Those greedy fucks started this.
Umbristopheles@reddit
As an accelerationist, I say, "BOOOOOO!!!!!" Hopefully the moat has evaporated and the whole world is off to the races. So if DeepMind discovers something, everyone else will too in short order.
No-Break-7922@reddit
I don't think any major advancement has been made in LLMs since GPT-3.5 Turbo anyway (yeah yeah, I'm aware of all the test results and the hype), so I highly doubt we're missing out on anything major. The methods are all out there; it's a question of who has more and better data and more and better computational resources.
brahh85@reddit
They already did this back in 2023: https://www.businessinsider.com/google-publishing-less-confidential-ai-research-to-compete-with-openai-2023-4
Insisting on it may be a desperate way to tell the markets "hey, we are here, we have revolutionary IP, don't sell our stock because of the recession, buy us".
But the truth is that if you don't develop things fast and release fast, you get killed by Chinese or European companies that will do it anyway.
They still think the success of closedai was because it took advantage of what Google created, when the truth is that Google didn't take advantage of its own products and was overtaken by others, because of this stupid strategy of delaying things.
We don't give a fuck about shit done 6 months or a year ago; we are focused on the open-weight companies that release fresh models this week. In AI, a year ago is like a decade ago. People want up-to-date research, not out-of-date companies.
romhacks@reddit
If they keep it at 6 months, I'm personally fine with it. In our capitalistic world, companies need that for competitive advantage, and 6 months seems reasonable. However I can easily imagine them stretching it longer and longer before not releasing research at all.
JustinPooDough@reddit
This seems totally fair to me tbh
SadWolverine24@reddit
Yeah, 6-months is not much time.
LanceThunder@reddit
The UN should step in and start an organization that scoops up anyone who doesn't want to work for a private company. Give those people whatever they ask for and open-source all their work. Build shrines to them and treat them like heroes.
ConfusionSecure487@reddit
who publishes such an article on the 1st of April?
JLeonsarmiento@reddit
I think the genie is out of the bottle at this point anyway.
218-69@reddit
6 months is nothing. We've been using sdxl for almost 2 years now.
And they're doing the most for open source if you count their papers, the other companies are just monetizing their shit.
Enough-Meringue4745@reddit
lol the old trump tactics
Secure_Reflection409@reddit
Easy come, easy go.
Trennosaurus_rex@reddit
Makes sense.
segmond@reddit
oh well, me too. I'm going back to my cave with my prompts.
computer-whisperer@reddit
April fools -- I hope???