AMD is skipping RDNA 5, says new leak, readies new UDNA architecture in time for PlayStation 6 instead
Posted by Odd-Onion-6776@reddit | hardware | View on Reddit | 201 comments
hey_you_too_buckaroo@reddit
Misleading title. If you read the article it's clear they're dropping the RDNA architecture and replacing it with UDNA. They're not skipping anything. They're just changing architectures.
bubblesort33@reddit
I think it could be more than that. When AMD switched to RDNA it was a large departure from GCN. If they aren't calling it RDNA5, I'd assume it's vastly different from RDNA4. To me it would seem like they probably started work on RDNA5, which was just an iteration of RDNA4, but then dumped it for a ground-up overhaul. Oftentimes these revision names are arbitrary. Like RDNA3.5 on mobile could have easily been called RDNA4. But I think an entire architecture name change signals something beyond just a minor iteration.
lightmatter501@reddit
UDNA is “let’s do the same thing we do for CPU and share chiplets between datacenter and consumer”. It means that the compute capabilities of AMD consumer GPUs are likely to get a massive bump, and that future AMD APUs will effectively look like scaled-down MI300A chips. The whole point is to not need both RDNA and CDNA.
I think the idea is that many of the rendering features we currently have directly in hardware could probably be done via GPGPU methods on a modern GPU, so if you just amp up the raw compute power you can make the GPU more flexible and easier to build.
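Just as a rough sketch of what "doing it in compute" means (my own toy example, nothing from AMD): bilinear texture filtering is the kind of lookup a fixed-function TMU normally handles, but the math is perfectly expressible as an ordinary kernel. The texture contents, sizes, and coordinates below are made up purely for illustration.

```cuda
// Toy sketch: bilinear texture filtering done in plain compute,
// i.e. the kind of lookup a fixed-function TMU normally does.
#include <cstdio>
#include <cuda_runtime.h>

__device__ float bilinear(const float* tex, int w, int h, float u, float v) {
    // Map normalized coordinates to texel space, centered on texel middles.
    float x = u * w - 0.5f, y = v * h - 0.5f;
    int ix = (int)floorf(x), iy = (int)floorf(y);
    float fx = x - ix, fy = y - iy;
    // Clamp-to-edge addressing for the four neighbouring texels.
    int x0 = max(ix, 0), x1 = min(ix + 1, w - 1);
    int y0 = max(iy, 0), y1 = min(iy + 1, h - 1);
    float top = tex[y0 * w + x0] * (1.f - fx) + tex[y0 * w + x1] * fx;
    float bot = tex[y1 * w + x0] * (1.f - fx) + tex[y1 * w + x1] * fx;
    return top * (1.f - fy) + bot * fy;
}

__global__ void sample(const float* tex, int w, int h, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float u = (i + 0.5f) / n;              // sweep u across the texture
        out[i] = bilinear(tex, w, h, u, 0.5f); // sample the middle row
    }
}

int main() {
    const int w = 4, h = 4, n = 8;
    float host_tex[w * h];
    for (int i = 0; i < w * h; ++i) host_tex[i] = (float)i;  // dummy texture

    float *d_tex, *d_out;
    cudaMalloc(&d_tex, sizeof(host_tex));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemcpy(d_tex, host_tex, sizeof(host_tex), cudaMemcpyHostToDevice);

    sample<<<1, n>>>(d_tex, w, h, d_out, n);

    float host_out[n];
    cudaMemcpy(host_out, d_out, sizeof(host_out), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i) printf("%.2f ", host_out[i]);
    printf("\n");
    cudaFree(d_tex); cudaFree(d_out);
    return 0;
}
```

The point isn't that this would be fast (dedicated TMUs exist precisely because it's a hot path), just that the function itself is plain compute.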
whatthetoken@reddit
If this ends up true I'm all for it.
Lightening84@reddit
AMD already combined compute and graphics with their Graphics Core Next (GCN) and it didn't work out very well. Don't get your hopes up.
AMD doesn't have a great track record with future proofing their designs using structurally sound systems architecture. They are always playing catch up with bandaids and duct tape.
porcinechoirmaster@reddit
Depends, really.
In 2012, when GCN launched, supercomputers and datacenters with GPGPU support were working on pretty wildly different problems than consumer games. Workloads were commonly things like large simulations for weather, material science, nuclear particle interactions, finite element analysis, and the like. FP64 data was common, which is why you saw FP64 performance artificially lowered on consumer parts: games almost never use FP64, so it was a way to offer market segmentation as well as cut die size. Games were in a bit of a transition period (this was right around the move from forward to deferred rendering and shading, which has some oddities for hardware requirements) but were still leaning pretty heavily on FP32.
Today, however, the gap between server/datacenter workloads and gaming workloads has narrowed a lot. Compute is a huge part of game rendering, with a much greater focus on general-purpose compute utilized by compiled shaders. Additionally, with ML being the main driver of GPU use in the datacenter, the need for FP64 has dropped relative to the need for memory bandwidth/capacity and FP32/FP16/INT8 performance... both of which drive game performance, too.
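If anyone wants to see that segmentation for themselves, a crude microbenchmark like the one below (my own sketch, with arbitrary iteration counts and launch sizes) times a dependent multiply-add chain in float and in double. On consumer GeForce/Radeon parts the double run typically comes out many times slower, while HPC parts narrow the gap considerably.

```cuda
// Rough sketch of the FP64 segmentation point: the same multiply-add chain
// timed in float and in double. Sizes below are arbitrary picks.
#include <cstdio>
#include <cuda_runtime.h>

template <typename T>
__global__ void fma_chain(T* out, int iters) {
    T acc = (T)threadIdx.x, a = (T)1.0000001, b = (T)0.0000001;
    for (int j = 0; j < iters; ++j)
        acc = acc * a + b;                              // dependent chain
    out[blockIdx.x * blockDim.x + threadIdx.x] = acc;   // keep the work live
}

template <typename T>
float time_ms(T* out, int blocks, int threads, int iters) {
    cudaEvent_t start, stop;
    cudaEventCreate(&start); cudaEventCreate(&stop);
    cudaEventRecord(start);
    fma_chain<T><<<blocks, threads>>>(out, iters);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float ms = 0.f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start); cudaEventDestroy(stop);
    return ms;
}

int main() {
    const int blocks = 1024, threads = 256, iters = 1 << 16;
    float* f; double* d;
    cudaMalloc(&f, blocks * threads * sizeof(float));
    cudaMalloc(&d, blocks * threads * sizeof(double));

    fma_chain<float><<<1, 32>>>(f, 1);   // warm-up / lazy module load
    cudaDeviceSynchronize();

    float ms32 = time_ms(f, blocks, threads, iters);
    float ms64 = time_ms(d, blocks, threads, iters);
    printf("FP32: %.2f ms, FP64: %.2f ms, ratio ~%.1fx\n", ms32, ms64, ms64 / ms32);

    cudaFree(f); cudaFree(d);
    return 0;
}
```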
Lightening84@reddit
I don't know how different things were then versus now, because Nvidia managed to do just fine with something called a CUDA architecture.
Aside from some pipeline changes, the concept is the same.
porcinechoirmaster@reddit
Despite the name, CUDA isn't really an architecture, it's an API with some platform features. It's responsible for exposing a relatively stable software interface to applications, which it does quite well.
The AMD equivalent to it would be ROCm, not GCN. GCN is a GPU architecture, like Kepler or Ada Lovelace.
Lightening84@reddit
A CUDA core is a hardware structure.
porcinechoirmaster@reddit
That's a branding decision. nVidia decided to double down on calling all of their GPGPU stuff CUDA, and so all the general purpose nVidia cores are called "CUDA cores." AMD calls them stream processors. They fulfill the same role, which is "programmable compute work."
The real architectures, meaning the details of those cores and what they're capable of, are named after scientists: Maxwell, Kepler, Ada Lovelace, etc. Each one of those architectures features pretty different implementations.
Saying a CUDA core is an architecture is like saying a CPU is an architecture.
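To make that concrete, here's a minimal sketch (mine, nothing official): the same CUDA source runs on Kepler, Maxwell, or Ada, and what the API actually exposes is a compute capability and an SM count, not "CUDA cores" as an architectural unit.

```cuda
// Minimal sketch: one kernel, one API. The architecture underneath only
// shows up as a compute capability reported by the runtime.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];   // runs on whatever "cores" the GPU has
}

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    // The API exposes the SM count and the architecture generation
    // (compute capability); "CUDA cores" is a marketing-level number.
    printf("%s: compute capability %d.%d, %d SMs\n",
           prop.name, prop.major, prop.minor, prop.multiProcessorCount);

    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %.1f (expected 5.0)\n", y[0]);
    cudaFree(x); cudaFree(y);
    return 0;
}
```

The per-SM "core" count nVidia quotes changes from generation to generation; the source code doesn't.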
wiwamorphic@reddit
They have MI300A right now -- HPC labs like it, so I hear. A quick check on their specs/design seems to suggest that it's fine.
Lightening84@reddit
The MI300A is not a combined architecture meant to unify gaming and compute, is it?
wiwamorphic@reddit
You're right, it has too much FP64 hardware and presumably too much interconnect/memory bandwidth.
VenditatioDelendaEst@reddit
Yeah, but back then the guy with the perf prediction spreadsheet said to go wave64 for compute. Maybe this time his spreadsheet says wave32 is good for the goose and gander.
lightmatter501@reddit
Part of that was because everyone was using fairly limited shader languages to program GPUs. C++ for GPU programming is a whole different beast and lets you do things the shader languages couldn't.
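For example (a toy sketch of mine, not anything specific to UDNA or ROCm): a single templated kernel instantiated for different element types and functors, which the GLSL/HLSL of the GCN-launch era simply couldn't express.

```cuda
// Toy sketch: device code written as ordinary C++, with templates and raw
// pointers, which early shader languages didn't offer.
#include <cstdio>
#include <cuda_runtime.h>

// One generic kernel instantiated for any element type and any functor:
// a code-reuse pattern that wasn't expressible in ~2012 shader languages.
template <typename T, typename Op>
__global__ void transform(const T* in, T* out, int n, Op op) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = op(in[i]);   // each thread reads/writes its own element
}

struct Square { __device__ float operator()(float v) const { return v * v; } };
struct Negate { __device__ int   operator()(int v)   const { return -v; } };

int main() {
    const int n = 8;
    float *f; int *k;
    cudaMallocManaged(&f, n * sizeof(float));
    cudaMallocManaged(&k, n * sizeof(int));
    for (int i = 0; i < n; ++i) { f[i] = (float)i; k[i] = i; }

    transform<<<1, n>>>(f, f, n, Square{});   // float instantiation
    transform<<<1, n>>>(k, k, n, Negate{});   // int instantiation
    cudaDeviceSynchronize();

    for (int i = 0; i < n; ++i) printf("%.0f/%d ", f[i], k[i]);
    printf("\n");
    cudaFree(f); cudaFree(k);
    return 0;
}
```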
skepticofgeorgia@reddit
I’m no expert but my understanding is that the chiplet design of Ryzen CPUs means that they’re extremely Infinity Fabric/bandwidth constrained. Given that GPUs usually deal with much more bandwidth than CPUs, do you really think that the GPU division’s overcome that hurdle? Not trying to be a dick, genuinely curious
lightmatter501@reddit
The top 3 supercomputers in the world use chiplet-based AMD GPUs or APUs; I think they have that issue handled. The MI300X has better memory bandwidth than an H100, so if they reuse that interconnect it will be fine.
Strazdas1@reddit
If you are going by the TOP500 list, then the largest clusters (for example Meta's) aren't even on the list.
skepticofgeorgia@reddit
Ah, I hadn’t heard that about the MI GPUs, I guess I need to pay better attention. Thanks for the explanation!
wiwamorphic@reddit
5.3 TBps vs 1.6 for H100 (4.8 for H200). For reference, a 4090 is ~1.
hal64@reddit
It's handled for compute; gaming is a different beast. It's much more latency sensitive. If RDNA 3 is any indication, it looks like AMD didn't solve it then.
sdkgierjgioperjki0@reddit
You can't use CoWoS in consumer products, it's too expensive and it is too supply limited to "waste" on low-margin products. Also it's only the top 2 supercomputers, not 3: Frontier and El Capitan.
onetwoseven94@reddit
AMD has already dumped much of the dedicated hardware for rendering. There’s no dedicated vertex fetch or tessellation hardware like Nvidia has. The remaining HW like TMUs and ROPs is essential for any GPU that actually does graphics. But the insistence on using general compute instead of dedicated HW is why AMD is far behind Nvidia on ray tracing and AI.
fkenthrowaway@reddit
Vega all over again.
Vb_33@reddit
Yea it's GCN part 2. They've already done something similar.
nismotigerwvu@reddit
I know it's not directly related, but the concept of ditching fixed function hardware for general purpose on GPUs will always light up a giant, flashing neon sign saying "Larrabee". AMD has been down this road before though, and very recently at that, and I doubt they would be getting behind this push without being absolutely sure that the underutilization woes of GCN are resolved in this.
dern_the_hermit@reddit
Another way to look at it: It's like convergent evolution, and the reason it makes you think of Larrabee is because industry experts probably saw this sort of trend coming a long, long way away... it's just that some early efforts were just too early, half-baked. Similar to AMD's Fusion initiative (I assume that's what you were referring to) which basically needed around a decade to become a modestly respectable product.
Hendeith@reddit
There will be major changes, because they are fusing RDNA and CDNA so they no longer have two architectures. I don't think this is a decision that will be beneficial to the PC/laptop market in the long term.
This is a cost cutting decision. AMD is losing market share on PC, they will focus on bringing improvements for market previously targeted by CDNA with less focus on anything else. Generations that will bring major gaming improvements will be generations that they also plan to use for consoles, since here they have proper financial incentives to focus on improving gaming performance.
xXxHawkEyeyxXx@reddit
I thought the purpose of the split (RDNA for gaming and CDNA for workstations/servers) was so that they could offer something that's appropriate for the use case. Unfortunately, right after the split, compute became a lot more important on consumer hardware (ray tracing, upscaling, frame gen, AI models) and AMD couldn't compete.
sdkgierjgioperjki0@reddit
None of those things you listed are "compute". They are their own categories of dedicated acceleration.
onetwoseven94@reddit
You’re getting downvoted by inane pedants for being correct. The reason why AMD is behind Nvidia in all those categories is precisely because AMD uses compute for those tasks and Nvidia uses dedicated HW.
hal64@reddit
Everything in his parentheses is a compute workload, which, like all compute, you can have a dedicated hardware accelerator for.
sdkgierjgioperjki0@reddit
Then raster is also a compute workload.
xXxHawkEyeyxXx@reddit
Everything done on a computer is theoretically a compute workload.
Also you could do everything using the CPU if it had enough compute power. I think there are ways to run some fairly recent games on the CPU, but you need something like Threadripper or Epyc to get acceptable results.
shroddy@reddit
Do you have some actual benchmarks? It would be really interesting to see how the latest 128-core Epyc does and which GPU it can actually beat.
xXxHawkEyeyxXx@reddit
Slightly older but here:
https://www.anandtech.com/show/16478/64-cores-of-rendering-madness-the-amd-threadripper-pro-3995wx-review/4
shroddy@reddit
I wonder how they actually did that? Does Crysis have a CPU render mode? Or do they use some kind of external tool like SwiftShader? Is that the reason it is limited to 32 threads (which makes the 64-core Threadripper slower than a 16-core desktop Ryzen)?
hal64@reddit
Nvidia GPUs are more focused on compute than gaming; that's why they gimp the memory, so large AI models are forced onto workstation cards. Nvidia keeps trying to make games use their compute, like DLSS, ray tracing, etc.
"Ryzening" Radeon is a necessary move for AMD's chiplet strategy. They need consumer AI-capable GPUs and much better economies of scale.
BadAdviceAI@reddit
AMD is barely making anything on consumer GPU sales. That's why they are shifting to UDNA. Make it best for AI, then add gaming stuff after the fact.
Radeon is basically dead. They need to rebrand it. I think they made $12M profit in 2023 on GPU sales. 2024 is prob less.
hal64@reddit
Ironically, if AMD had listened to AI devs back in the early days of ROCm, i.e. around 2015, they'd be in a much better position in AI.
U3011@reddit
Someone suggested what you said a long while ago and they were downvoted. I love supporting AMD, especially in their revival period, but Radeon has floundered for far too long and Nvidia is killing it with their stack. ATI cards were the only ones you'd buy if you had the money for the best hardware long ago. Those days are sadly gone.
theQuandary@reddit
It may be worse in gaming perf/area, but it should be better at compute/area which is better for other stuff.
Hendeith@reddit
Yes, true. If you want a card for AI and HPC without reaching for workstation GPUs, then it's a benefit for you, because regular Radeons should be better at this starting with UDNA 1.
thehighshibe@reddit
tbh RDNA 1 was literally just GCN 5 in a trench coat and some efficiency increases
Jonny_H@reddit
I think it shows there's more to "consumer-visible difference" than design specifics.
In terms of core design, the addition of wave32 was a much larger change than the difference from RDNA1 to RDNA2; it's just that the subsequent tweaks and larger dies in RDNA2 made a bigger difference in actual performance.
detectiveDollar@reddit
Heck I'm pretty sure the 5700/XT were rumored to be the 680/670 internally.
yimingwuzere@reddit
This, even AMD were saying that many parts of RDNA1 were lifted straight out of GCN
Capital6238@reddit
+1
It's always developing on top of the existing stack. Anything else would be horrible. Look at Intel and their compatibility issues. You just don't throw everything away and start from scratch, except if you have to.
thehighshibe@reddit
Oh definitely. I just remember AMD advertising it as a fresh start and all-new, and then it ending up being closer to GCN 5.5 than the actual fresh start we'd been hoping for after the Vega series' performance.
Reminds me of how 343 called the slipspace engine all-new when it was just Blam! with new parts bolted on.
And then RDNA 2 brought us what we wanted in RDNA 1, and finally matched Turing, but by then of course NVIDIA launched Ampere and stayed one step ahead (and then the same with RDNA3 and Ada lovelace)
Elegant_Hearing3003@reddit
Graphics workloads look very different from AI workloads, which look very different from HPC workloads today. Graphics needs its dumb hardware RT units (software RT is faster, see Tiny Glade, etc., but whatever, that ship sailed) and needs texture units, and the way the workload is distributed and done is very specific.
HPC needs FP64(+), and AI (currently) needs as much variable-precision floating point matrix multiplication as it can get and nothing else. It doesn't make any sense to try and match those wildly varying hardware requirements to the same basic hardware blocks.
Arctic_Islands@reddit
they are not dropping anything. rdna 5 = udna
VanWesley@reddit
Yeah, "skipping RDNA 5" implies that they're jumping straight to RDNA 6 for some reason, but definitely just sounds like a change in the branding.
chmilz@reddit
And even if they were "skipping" a generation, that's still just marketing. The next release is the next release, whether it's tomorrow or next decade.
kingwhocares@reddit
Wasn't a unified architecture already planned for RDNA 5, and thus RDNA4 expected to be a simple refresh?
COMPUTER1313@reddit
Yep, they’re combining the previously separated compute and gaming focused archs.
Elegant_Hearing3003@reddit
"Next GPU is UDNA, aims to launch mid 2026" is what the headline should say.
As far as I know they were hoping for late 2025 maybe, but 6-ish months after that is unsurprising. And yes, this fast turnaround is why there are only 2 mainstream RDNA4 GPUs; mostly skip RDNA4 and go straight to the next arch.
Also I believe the console here is the new Xbox handheld and home consoles, not the PS6 which is still slated for later than 2026?
scytheavatar@reddit
Source is claiming PS6 will be either Zen 4 or 5, which also means it might be releasing soon enough that Zen 6 is not an option.
Elegant_Hearing3003@reddit
Yes, I read the headline, which is wrong, as the PlayStation hardware team just recently finished the PS5 Pro, which they publicly touted as having worked hard on just to meet the quick deadline at all; they've not had time to work on the PS6 yet.
PorchettaM@reddit
UDNA being a 2026 release is plausible. The PlayStation bit sounds a lot more questionable; it would imply a very early PS6 release.
Afraid-Department-35@reddit
2026/2027 for ps isn’t that early. PS4 released in 2013 and ps5 in 2020. We just got the ps5 pro, ps4 pro released in 2016. Ps6 in about 3 years is pretty plausible based on history. With that being said, if UDNA is available in 2026, it’s unlikely ps6 releases side by side with it, maybe a year or 2 after it matures a little.
Vb_33@reddit
UDNA in 2026 makes sense since it's supposed to replace RDNA5 which would typically be due 2 years after RDNA4.
The PS6 using UDNA (sure) but combined with a Zen 4 (2022) or Zen 5 (2024) CPU while launching in 2027/2028 is ridiculous. Even if it launched in 2026 why would it use a CPU from 2022?
Bluedot55@reddit
The same could be said about the ps5 launching with zen 2. Consoles are often designed for maximum gaming performance for a given die area, and if you can get more performance from having a smaller cpu and allocating more resources elsewhere, it may be a good tradeoff.
PMARC14@reddit
Yeah, but choosing Zen 2 was specifically because it was AMD's small core design at the time. They made further optimizations to reduce die usage, but they missed optimizations in Zen 3 that didn't really blow up die usage. But AMD now has a separate small core series in the Zen C line that already covers most of the optimizations a console design would want. There's no reason not to choose the latest-architecture small core before beginning your console design, so unless we are getting jumped by the PS6 in 2027, it would make more sense to use Zen 5C designs.
Bluedot55@reddit
They do, yeah, but the C line is its own thing, and may perform worse for something like gaming than you'd expect, since it heavily cuts back on cache, which AMD has also demonstrated is really important for games, unless they re-add it and lose a lot of the density improvements.
It's also partially getting those density gains from being fabbed with a density-focused library, as opposed to a clock-focused library, so unless they want to switch the entire PS6 die to a density-optimized library, or use chiplets, there goes the other half of the density improvements.
AirSKiller@reddit
Not really, the PS5 used a 1 year old architecture. If UDNA releases at the end of 2026, that puts the PS6 at the end of 2027, which sounds plausible to me
JackSpyder@reddit
It's likely they'd release desktop products for refinement before a console. Sony and Microsoft would surely want demonstrably successful live products before they agree on a product selection, including refinements on the driver side.
yimingwuzere@reddit
The PS4 uses the GCN architecture.
PS4 release: Nov 2013
First GCN cards available for PC? Jan 2012
Zednot123@reddit
First RDNA was mid 2019, PS5 late 2020.
Seems to be a repeating time line.
yimingwuzere@reddit
It seems to me that Sony waits for AMD to release a new GPU architecture and fine-tunes it further before rolling out the improvements in the PlayStation.
PS4 used a custom GCN 2 (same as Radeon 7790 and 290X).
PS5 used a custom RDNA2.
I would presume that PS6 will use a "second gen" UDNA, even if AMD doesn't heavily advertise second-gen UDNA as being distinct from the first generation (like how 7790 was marketed together with GCN 1 cards).
Zednot123@reddit
Or is it the console makers who ask for changes/updates to the architecture, which AMD then incorporates in later releases?
Hard to say really.
the_dude_that_faps@reddit
RDNA2 launched after the consoles, though.
JackSpyder@reddit
Fair I was just guessing! I stand corrected!
Strazdas1@reddit
Why? 2026 has been expected console release cycle ever since microsoft data leaked from court documents.
theholylancer@reddit
I think it's likely setting the ground for it; the PS6 gets the next-gen, refined version of it.
Jdogg4089@reddit
I think this generation might be even longer than the last, especially with the PS5 Pro. We are seeing diminishing returns with hardware now. So much more compute required for little graphical increase. The real focus now is on lighting, and RT is really expensive at the moment, so hopefully we get some sort of breakthrough for that instead of having to just upscale everything to push through poor optimization.
Jeep-Eep@reddit
I find a 2026 PS6 implausible, but them wanting at least a gen of UDNA to shake out the teething troubles of the re-merged family before consoles makes sense.
Boreras@reddit
RDNA2 released on GPUs and the PS5 at the same time, 2020. The Pro is ahead of RDNA5. PS4 and Pro were also congruent with GCN2/4.
You're right that in both cases they started at the second iteration of the architecture.
PorchettaM@reddit
I guess so, the only way I can make sense of it is if "PS6 uses UDNA" is being said fairly loosely, and the final product ends up being some sort of "UDNA 1.5" similarly to how PS5 isn't really full RDNA2 and PS5 Pro isn't full RDNA4.
ItsMeSlinky@reddit
None of the PlayStations are straight Radeon architecture. Sony picks and chooses the silicon features it feels best suit its use case and APIs, and then has AMD make it to order.
So most likely, Sony wants specific features for PS6 that will end up in UDNA.
Jeep-Eep@reddit
Much like RDNA had a gen before consoles and RT to shake out shit like the power filtration sensitivities.
OscarCookeAbbott@reddit
2027 PS6 is probably about right, so 2026 for early UDNA is highly plausible.
pomyuo@reddit
The article never implied a specific architecture generation, it simply says UDNA, which could mean UDNA 1, UDNA 1.5, UDNA 2, or other as opposed to an RDNA architecture.
imaginary_num6er@reddit
AMD's Jack Huynh already confirmed UDNA will start the same number where RDNA left off. So if RDNA4 is the last of the series, UDNA5 will be the first.
AMD's marketing people get paid to make more confusing names to keep their jobs
Vushivushi@reddit
It'll be funny, yet unsurprising if they go from RDNA4 to UDNA5 but skip RX 9000 for something entirely different.
imaginary_num6er@reddit
Probably UDNA Ai100, Ai200, etc. like their laptop lineup. Would be funny if Gigabyte has an "Aorus Infinity Ai TOP" AIB card on top of that.
pomyuo@reddit
That's not really a confirmation, and my comment still has the same meaning, we'll see the numbering when it's announced
Xavilend@reddit
No, it would signify when they lock in the hardware for pre-production and development; a console won't launch with a chipset made anywhere close to its release date. It would likely be Christmas 2027 for hardware sales.
memtiger@reddit
I don't think it's saying "in time for a PS6 release in 2026". It's just saying that UDNA will be ready by the time that PS6 begins development/production.
Sony would likely want at least a year of UDNA availability before they'd have something optimized for it and using any new features.
greggm2000@reddit
The PS6 using UDNA sounds plausible to me, but I don't think it'd come out until 2028, when on Desktop we'd be getting UDNA2 at that point.
RandyMuscle@reddit
That’s what I’m thinking. 2028 is entirely plausible for a PS6.
windozeFanboi@reddit
My zen4+rtx 4000 gonna be ripe for upgrade around that time frame...
Although, i'm more excited about the SoCs that will pop up around then or before that.
AMD Strix Halo 2 vs Qualcomm Next vs Nvidia whatever .. All those that compete against Apple Pro/Max.
Raiden_Of_The_Sky@reddit
I have a feeling that somebody will actually go Mediatek + NVIDIA for next gen. That's only natural to do.
alvenestthol@reddit
Why does Mediatek have to be involved though, Nvidia already has their Tegra-like for Switch 2 and automotive, and the Grace-Hopper for infra (server)
the_dude_that_faps@reddit
SoCs are more than just cores and GPUs. The uncore is so important today that it can make or break your design. Take Arrowlake vs Lunarlake. And what about other IP that Nvidia just doesn't have or has but may not be as competitive? Say, a modem, wifi, BT, USB, ISP?
I don't think Nvidia ever showed any kind of technical ability to build a competitive SoC. Their tegras were a failure in the market.
Partnering with an established player, to me, makes sense.
Nointies@reddit
A tegra powers one of the most successful consoles ever made. In what world is it a failure.
the_dude_that_faps@reddit
That's a weird take. The AMD SoCs that powered last-gen consoles were also wildly successful. Is anyone going to argue that due to that AMD is suddenly competent enough to compete with such SoCs in the open? Technologically those were crap (Jaguar cores, anyone?) and the same is true of the Tegra inside the Switch, regardless of how much those consoles sold.
A console is a closed system without competition. Nvidia is not looking to build parts for a closed system. Whatever they build will have to compete with Apple, Qualcomm and whatever Intel and AMD make.
In the open, it failed. I remember the original Surface with a Tegra 3 and Windows RT. A steaming pile of turd not just because Windows RT was crap.
Whatever success it had on the switch is based on the fact that it is a nintendo console more than the fact that it has an Nvidia badge. I mean, the 3DS succeeded in a world where the ps vita was several times faster.
Nointies@reddit
I think its successful in the console space, which is all it needs to be.
the_dude_that_faps@reddit
Way to ignore everything that was said. How is Nvidia translating that success with the switch into something else? How is it all that it needs to be? Will they bring that success to other markets?
It has not worked for AMD, which has sold millions of SoCs for both MS and Sony since 2011 and it didn't work for Tegra. Where are all the tegra phones? Tegra tablets? Tegra PCs? Exactly. Tegra is a failure.
GenericUser1983@reddit
The Tegra in the Switch is in an odd spot - it failed horribly for phones/tablets etc, so Nvidia had a pile of them they needed to get rid of for cheap, and thus offered Nintendo a really good deal. The Switch itself success is mostly due to the novel form factor + Nintendo's popular first party games, the exact SoC going into it didn't really matter.
Nointies@reddit
And it was a successful product in that space. Switch 2 is also going to be on a Tegra successor.
Kryohi@reddit
Again using a repurposed old product on an old architecture, made on a cheap Samsung node.
LAUAR@reddit
Why would console manufacturers go NVIDIA? Have the reasons behind picking AMD over NVIDIA changed?
Strazdas1@reddit
Nvidia offers a better upscaler and RT, something AMD has such poor support for that Sony went and made their own upscaler instead.
BarKnight@reddit
ARM
Nointies@reddit
Arm has basically no benefits in the console space if you're not already on ARM.
Losing backwards compatibility is disastrous nowadays.
SomniumOv@reddit
Why do you assume you would need to sacrifice Backwards Compatibility ? A Rosetta2-like solution fits within the purview of console design.
Nointies@reddit
Because that would sacrifice backwards compatibility. If I buy a new console and it runs games worse than an old console, that is not going to be a good selling point.
People need to not pretend that x86-to-ARM translation is costless. It's not. It has a lot of costs.
SomniumOv@reddit
Why would that be the case? The cost of Rosetta 2 was not bad enough to wipe out multiple years of CPU advancement. You can go with a very similar setup, i.e. with specific hardware acceleration for the translation layer in the SoC.
Nointies@reddit
Let me turn this around.
What are the benefits of swapping over to ARM, right now, for a console.
SomniumOv@reddit
For Microsoft specifically, getting an Nvidia GPU could be big, establish clear tech leadership and maybe get new killer ML features built-in.
For Sony, nothing, they wouldn't do it.
Nointies@reddit
Microsoft isn't releasing another console.
SomniumOv@reddit
This is wrong.
https://www.rollingstone.com/culture/rs-gaming/xbox-console-future-cloud-ceo-phil-spencer-1235166597/
Nointies@reddit
Surely phil spencer would never lie!!!
every indication is they're not making another lol
SomniumOv@reddit
Every indication of what? Some rumors you've heard and somehow state as known facts?
They've stated multiple times that they are working on next gen, stating it will be "the biggest generational leap".
Nointies@reddit
Every indication from their actual business.
Advertising that all sorts of shit is an 'xbox', the fact that the Series X is selling absolutely abysmally, the fact that the xbox division is getting the microscope on them, the fact that they can't even release fucking games.
SomniumOv@reddit
wut.
Irrelevant to the console itself. "You can play Xbox on anything" has been part of the messaging since they launched their cloud offering, before the Series X launched. Are you overreacting to the latest ad?
It's not competitive with PS5, but it's selling ok, you're getting caught in "if you're not first you're last" mentality, it's still a healthy platform even if it's not where they would want it to be. Series S/X has outsold the Gamecube FYI.
All the more reason to make a bolder departure of a product, like a very ML-feature-focused Nvidia collab, with the opening towards an architecturally compatible handheld variant with similarities to the wildly successful Nintendo Switch (i.e. what if the Series S next-gen equivalent is a handheld).
While I'm sure they'd love to have better console-seller first-party products, and have made a lot of acquisitions to try to improve that, the money in consoles is still mostly in the third-party fees.
Nointies@reddit
Outselling the gamecube, another console that was regarded as an overall failure is not impressive. Also the Series S has sold a lot more than the Series X, they are not the same product and we shouldn't act like they are.
Trying to sell a 1000 dollar Nvidia machine learning console is not the future.
Third party games are also not releasing on Xbox because developers don't want to develop for the series S and X.
The only reason they're making a handheld is because they want to ape the Steam Deck, and I suspect it'll be much more similar to that than a Switch, especially given development is being headed up by the Surface team. It's probably targeted more as a Steam Deck competitor than anything else lmao.
SomniumOv@reddit
Oh so they ARE making a console.
Nointies@reddit
I consider a handheld meaningfully different than a console.
Xbox will not be releasing something that can compete with playstation meaningfully.
SomniumOv@reddit
Commercially, probably yeah, they only did that once, in 2005. In form factor and hardware specs they likely will. Now we'll see in a few years, shall we? I'll keep referring to their official statements and you'll keep referring to... whatever feelings you're having.
vlakreeh@reddit
If those arm cores are fast enough and you’re able to get the proper licensing it’s not the end of the world. Modern arm cores are easily faster than Zen 2 with emulation overhead and with good emulation like Rosetta (or prism now that it supports vector instructions) you can definitely run x86 games on arm. I could see nvidia strapping x925 cores on a 4080 class gpu and undercutting AMD just to keep AMD out of consoles.
BarKnight@reddit
That's what they said about the Switch.
Rumors are out there for a portable Xbox streaming device
Nointies@reddit
The Wii U and Wii were on PowerPC RISC; it's not the same as moving from x86, and the Switch was a massive success because it was so different format-wise that it didn't need backwards compat.
An Xbox streaming device is going to suck because it's a streaming device. Because they all suck.
Raiden_Of_The_Sky@reddit
The only reason I know is that AMD can produce SoCs with both the CPU and GPU being viable enough. Previously all NVIDIA solutions were completely dedicated, which made consoles more expensive.
I mean, they wouldn't collaborate with Mediatek if something like this wasn't on their mind I guess. Their GPU leadership is too strong to not be used in consoles.
Nointies@reddit
their GPU leadership hardly matters in the console space. Price is what matters more.
Raiden_Of_The_Sky@reddit
They CAN push more performance at lower manufacturing price and provide a lot more features. Look at Ada vs RDNA 3. Nvidia architectures are so far ahead from what AMD does at the moment.
Nointies@reddit
Nvidia is not going to be undercutting AMD on cost on all things to steal away the console market.
Especially since 1. AMD already won the PS6 contract, which it appears Nvidia didn't even compete for (intel did) and 2. There is no next generation Xbox and probably won't be.
Pyrostemplar@reddit
Well, one of the tidbits surrounding the initial Xbox series was that Microsoft vowed never to work with nVidia for a console ever again. How true those rumors are, well, it is anyone's guess.
Nointies@reddit
Nobody is going Nvidia besides Nintendo.
AMD is already locked in for the PS6
There is not going to be a real successor to the Xbox Series X; I think that's the last true Xbox we ever see.
sascharobi@reddit
Really? That was it?
Nointies@reddit
Its accurate lol.
I don't think Microsoft is ever going to release a true console again. You'll get a streaming stick maybe.
tukatu0@reddit
Hm, they'll probably release more Xbox Series variants. The handheld could be a portable Series S. Switch Lite-type situation for $300.
I don't see any point, but I am no longer an Xbox customer, so what do I know.
sniperxx07@reddit
I don't think Nvidia will be interested in wasting their capacity for AI GPUs on consoles, and Nvidia is working on its own ARM processor, so it won't be interested in partnering.
Firefox72@reddit
Pretty sure at this point the next gen consoles are locked in.
FreeJunkMonk@reddit
The Playstation 6, already?? It feels like the PS5 has barely been around
SomniumOv@reddit
It's been here for 4 years already, more than half a generation by the standard of both previous ones (7 years).
Since GPU generations seem to be slowing down (was 2 years recently but current one will have gone 2 and a half+), that puts you pretty close to a full 7 year generation.
Yeah having the consoles launching around the same time as Nvidia 30 series and barely being in the 60 series era when they're replaced isn't great.
Strazdas1@reddit
I disagree. The more often consoles get replaced, the less time they spend with obsolete hardware holding down game development.
SomniumOv@reddit
That's probably wrong, actually. As we've seen with the current generation, more commonality between the old gen and the new one meant the crossgen period lasted a lot longer (including cases like Jedi Survivor where a next-gen only title later got backported to benefit from the old gen installbase).
If the jump next time is as little as it's likely to be (~2060-2070 performance to ~6060-6070?) that's dire, the crossgen period would last a very long time.
Strazdas1@reddit
This is primarily due to the insane decision from Microsoft that every game must be Series S viable, which is an absolute heaping trashpile of hardware.
ResponsibleJudge3172@reddit
This is of course the third time in a row AMD is said to be fast-tracking development of GPUs to leapfrog the competition by being early and faster on iteration.
Third time's the charm, I guess.
SoTOP@reddit
Who is saying that? There was nothing about leapfrogging competition or faster iteration.
ResponsibleJudge3172@reddit
"AMD is skipping RDNA5". Also looking at the timeframes
SoTOP@reddit
That is purely your own interpretation. For example, I can provide you with the opposite interpretation: if RDNA5 were released as once planned, maybe it would arrive sooner and actually be faster than UDNA (at least per area/transistor count), because no time would be spent integrating CDNA capabilities and no architectural compromises would be needed to add those additional capabilities that gaming has no use for.
What we know about UDNA so far is that it's a mesh of RDNA and CDNA architectures specifically to optimize development costs; there haven't been any indications that it will be superior to either separately.
Another example is RDNA4. In theory it's "just" a bit-optimized RDNA3 plus some small RT improvements, so using your outlook it should have a much faster release cycle than usual, yet there hasn't been any speed-up.
scytheavatar@reddit
The whole point of UDNA is that AMD wants to get out of split architectures, cause the gaming GPU business has crashed for them. They need their AI customers to justify still making GPUs. Also both Sony and consumers are putting pressure on AMD to follow Nvidia and have hardware upsampling so "optimizing" for gaming is moot.
SoTOP@reddit
That is what I said.
CalmmoNax@reddit
Just wait for Big Udna-vi, surely then AMD will win.
GenZia@reddit
I've no idea why they squeezed PS6 in the headline/article.
It's barely relevant, even for clickbaiting.
For one thing, I highly doubt the PS6 will be on N3. Sony/MS will have to wait for N1 if they want to offer a generational performance uplift (~2-3X) over the base PS5 and XSX.
And N1 won't begin production until ~2028 (IIRC), and that's assuming everything goes as planned.
scytheavatar@reddit
AAA game development has crashed and burned, like what exactly is the point of "generational performance uplift"? The next real generational performance uplift will be when Path Tracing is ready for primetime, and that will require much more than 2-3X.
tukatu0@reddit
That's only if they want a ~300 mm² die or something that could be cheap. Theoretically it seems very possible to me the PS6 will cost at least $1000. $700 before tariffs. Who knows what the market could support in 4 years.
sniperxx07@reddit
They actually might wait for N1... so that N3 gets cheaper 😂. I'm not the technical guy, but from what I've heard each node is more expensive than the previous one even after considering the increase in density, so they'd wait to save cost. Although I think you are correct.
sascharobi@reddit
PS brings more clicks.
GlobalEnvironment554@reddit
Maybe they just renamed rdna5 to udna, for marketing
Minute_Path9803@reddit
Way to tank their stock, you'll see. And now they want to get into phones.
Sometimes you have to realize you should stick with what you know best.
Unless AMD wants to start out as OnePlus did back in the day, where they actually did offer great value, it's not a good idea in a saturated market.
forking_shortballs@reddit
So if this is true, the 7900xtx will be their highest-end gpu until 2027.
CatalyticDragon@reddit
It is speculated RDNA4 will be a mid-range-only part, but even if that trend continues, mid-range parts in 2026 will be more powerful than the 7900 XTX, which was released two years ago.
High-end RDNA3 parts didn't sell, so they cancelled high-end parts to give those a chance to shift, but it's not something they can afford to still be doing three years out.
unityofsaints@reddit
How can you skip something that doesn't exist yet? This is more like a cancellation, or maybe a renaming, depending on how similar / different RDNA and UDNA are / were.
anival024@reddit
This is such a shitty headline and "article".
Even if this is true, nothing is being "skipped". If RDNA4 launches in 2025, and the successor launches in 2026 named "UDNA", then nothing was skipped. RDNA5 was never an architecture for any announced products. If they're renaming/rebranding RDNA4's successor to UDNA because it's markedly different, so what? That doesn't imply anything was skipped.
Nvidiuh@reddit
There is actually a zero percent chance the PS6 is releasing before fall 2027.
Psyclist80@reddit
I think they are just shifting the naming; after all, the name RDNA5 hasn't ever been confirmed, and Q2 2026 lines up with a yearly cadence after RDNA4.
imaginary_num6er@reddit
Is it true that they’re going to start with “UDNA5”? Because people will definitely wonder what happened to UDNA1-4
Psyclist80@reddit
No one said UDNA5, just UDNA.
SomniumOv@reddit
Windows 9 ate them too.
gumol@reddit
Aren't RDNA and UDNA supposed to have architectural differences? If you can rename RDNA to UDNA, then how "Unified" will it be?
Ghostsonplanets@reddit
No? UDNA is basically the unification of CDNA and RDNA under GFX13.
shalol@reddit
Aka unification of the professional and gamer software stacks, like CUDA
Kryohi@reddit
Hopper and Ada are very different architectures. And ROCm already supports both CDNA and RDNA.
ResponsibleJudge3172@reddit
Who said that's not what RDNA5 was all along? That's the point.
A5CH3NT3@reddit
You're thinking of RDNA and CDNA. When AMD split their compute and gaming architectures after Vega. UDNA is the rumored re-unification of their architectures back into just the single type.
gumol@reddit
yeah, but if you just rename RDNA to UDNA, then have you really reunified anything?
CHAOSHACKER@reddit
If UDNA gets all the extra instructions and capabilities from CDNA, why not? Current CDNA is already GFX11 based (RDNA3)
gumol@reddit
oh, so it wouldn’t just be renaming. That makes sense
Jedibeeftrix@reddit
.... yes!
if the new architecture (formerly known as "RDNA5") has been engineered to be capable of also fulfilling the use cases that currently require a separate architecture (currently called "CDNA").
gumol@reddit
oh, so it wouldn’t just be renaming, but also reengineering. That makes sense
ThermL@reddit
I think what the OP is actually musing is that previous references to "RDNA5" were actually UDNA before they came up with the UDNA naming scheme.
"If you can just rename...." Yeah, we can just rename anything. It's kind of what we do in the PC space.
MortZeffer@reddit
Does the U stand for upscaling
SturmButcher@reddit
More like Unified
Kaladin12543@reddit
Since this is a unified architecture, will this have high end GPUs?
sascharobi@reddit
Hopefully. If not, I don’t need it.
noonetoldmeismelled@reddit
PS6 is definitely 2018+ depending on when TSMC N1 is ready, and N1 will already have its lines hogged by Apple/Qualcomm mobile and Nvidia/AMD datacenter. The Nintendo Switch is at almost 8 complete years on market as Nintendo's core hardware product. Consoles are a software platform that can last a decade now. The PS4 is already a product over 10 years old that still sells software and in-game purchases.
UDNA I'm excited for. Just keep pushing ROCm
FreeJunkMonk@reddit
> 2018+
2028?
noonetoldmeismelled@reddit
yup. Fixed
dudemanguy301@reddit
Buckle up, people. ML acceleration is going to be mainstream for 10th gen, and the term "neural rendering" is going to boil in the public discourse even harder than upscaling or ray tracing.
constantlymat@reddit
I wonder if this time AMD will treat the buyers of the last generation of their GPU architecture better than they treated Vega64 buyers who didn't even get seven years of full driver support before support was sunsetted.
JV_TBZ@reddit
Highly doubt PS6 releasing before 11/2028
Advanced_Parfait2947@reddit
This does not bode well.
I was looking at upgrading my Radeon 6800 non-XT next year. If the 8800 XT doesn't offer a significant boost then I guess I won't have any choice but to switch to Nvidia.
Arx07est@reddit
7900 XTX is over 40% upgrade over 6800 XT tho...
BTTWchungus@reddit
Doesn't the 7000 series have AI cores over the 6000?
Arx07est@reddit
Yes, it means no FSR4 for 6000 series.
GreenDifference@reddit
A 40% upgrade is meh; it's at least 200% when I need an upgrade.
Advanced_Parfait2947@reddit
But I don't need a GPU with 20GB of memory.
I'd argue that 12 would be fine since I game at 1440p.
Also, switching to a 7900xtx means I also have to upgrade my PSU
fishuuuu@reddit
You aren’t paying for the memory. That’s a silly reason to not buy.
Requiring a new PSU is another matter.
Advanced_Parfait2947@reddit
In my mind there is such a thing as overspending.
Why spend so much on a GPU I won't even fully utilize? If I targeted 4K I'd consider the 7900 XTX, but at 1440p it's overkill.
uzuziy@reddit
I don't think just a 40% increase in raw performance is worth the extra you're paying, especially when the 7000 series doesn't have any extra tech to offer over the 6000. FSR 4 might change that but we have to see.
PorchettaM@reddit
RDNA5 is/was never going to be a 2025 release either way.
hey_you_too_buckaroo@reddit
What are you talking about? This article isn't about the 8800x series. That's still using RDNA. They're just switching architectures after RDNA4 to UDNA.
dabocx@reddit
Well, it should be a massive boost for RT over yours. Maybe FSR4, depending on which cards do and don't get it.
someguy50@reddit
Maybe they should just skip to UDNA2 and leapfrog the competition. Are they stupid?
RealPjotr@reddit
This surely isn't new? Half a year ago I read RDNA4 would not be anything special, but the next generation would be, as it was a new ground up design.
TechAficionado3@reddit
How is this self-promoting spam from PCguide allowed here?
fishuuuu@reddit
Is this old news?
TheAgentOfTheNine@reddit
I think this is like when your birthday is on Xmas and your two gifts are fused into one. RDNA5 and CDNA whatever-iteration are fused into UDNA with a bit of this and a bit of that. It's not like they are radically different archs to begin with.