Next-Gen AMD UDNA architecture to revive Radeon flagship GPU line on TSMC N3E node, claims leaker
Posted by fatso486@reddit | hardware | View on Reddit | 159 comments
SherbertExisting3509@reddit
UDNA needs to have:
Fixed function RT cores like Intel and Nvidia so that UDNA doesn't struggle with path tracing (where lots of ray triangle intersections are required)
Low precision matrix math cores like Intel's XMX engines or Nvidia's tensor cores
A great encoder like Nvidia's NVENC or Intel's Xe media engine.
Same great low occupancy performance and bandwidth as RDNA
AI based frame generation with MFG support
AMD equivalent to RTX AI and other Blackwell features on day 1
Bug and issue free day 1 drivers
*insert standout killer feature here*
And most importantly:
A good launch price and marketing campaign.
(an optical flow accelerator would be nice to have, but not required, as Intel's frame gen demonstrated)
IANVS@reddit
So, impossible for AMD.
SherbertExisting3509@reddit
The B580 showed us that consumers will only buy a card from a competitor if it has feature parity (or close to) with Nvidia, has RT performance close to Nvidia and is priced well at launch.
It's not that people won't buy AMD no matter what they do; it's that AMD doesn't release compelling enough products to pull customers away from GeForce. Consumers want RT, DLSS, Frame Gen and MFG even if they're buying entry-level cards.
TK3600@reddit
More like AMD has no good distribution outside western markets. Their shit is overpriced as fuck in places like China.
Allan_Viltihimmelen@reddit
AMD should try to innovate something new rather than chasing Nvidia's tail all the time.
Like Zen(1), which was something new with scalable potential as the technology improved. Intel wasn't scared until Zen 3, when AMD was suddenly neck and neck with them, which came as a big shock.
Gachnarsw@reddit
That's a lot of targets to hit all at once. AMD just doesn't have as much money to put toward R&D as Nvidia. That doesn't mean it's impossible to compete, just that they have to be selective about where they focus their resources.
I'm hoping UDNA can be a Zen moment. In that there was Zen 1 and Zen+ before Zen 2. AMD has executed Zen upgrades well overall, and I'd like to see a similar evolution on UDNA.
But Intel was complacent before and during early Zen. Other than pricing Nvidia hasn't been.
DYMAXIONman@reddit
They just need to get RT performance up and improve FSR so it's not shit.
MrMPFR@reddit
The AI stuff just needs to be ported from CDNA, so that's already done. RT is getting improved with RDNA 4 already with BVH traversal in HW + coherency sorting like SER and TSU, and getting Mega Geometry-like HW acceleration should be possible as well.
A Zen moment with GPUs is just impossible; CPUs are a much higher-margin product than GPUs. But let's hope they won't do the usual slot-in pricing BS.
sachi3@reddit
RDNA4 hasn't even been announced and there's already UDNA leaks? bruh
DaBombDiggidy@reddit
Steve from HUB was hearing at CES last year that this gen was a bit of a dud and their issues would only be fixed with a new architecture. Wouldn't be surprised if that new architecture is getting the pedal-to-the-metal treatment from AMD for that reason.
CrzyJek@reddit
UDNA is basically CDNA. They are merging the stacks like Nvidia does. AMD's Instinct is already chiplet and working well... so it makes sense to unify the architecture.
Vb_33@reddit
UDNA is GCN.
CrzyJek@reddit
GCN developed into CDNA.
EmergencyCucumber905@reddit
Where did you hear that?
R1chterScale@reddit
Iirc there is some tech in current RDNA that's very useful that's being ported over (besides the obvious RT stuff)
Kepler_L2@reddit
UDNA is basically RDNA with big matrix cores tbh.
Qesa@reddit
The same story occurs pretty much every gen. There's apparently a very profitable cycle for tech influencers of
reddit_equals_censor@reddit
i mean that is not surprising at all.
it takes many years to develop gpus or cpus.
for cpus amd straight up is using 2 leap frogging teams developing the architecture.
of course the question is how reliable leaks are for far away releases.
constantlymat@reddit
Also the rumour mill from Computex 2024 basically said they buried the flagship RDNA4 line because the chiplet design did not scale the way they were hoping for.
AMD's execs can spin whatever tale they want; they didn't actually abandon the high-end sector of the market "because 1440p for $500 is where the gamers are at".
They did it because their architecture bet failed and it was the best strategy with what they had this generation.
reddit_equals_censor@reddit
that goes against everything i heard, which stated that high end chiplet rdna4 didn't have any problems at that point, but that committing a ton of resources into a high end, expensive to make card, one that competes for tsmc packaging capacity and engineering time that could instead go toward gold shovels for the ai mines, would have been a bad decision.
how cheap could they have made it? who would have bought it?
those rumors certainly made sense.
please link me the sources for what you heard if you got them at hand.
devinprocess@reddit
5090 is a bad gold shovel too compared to data center AI products. Yet tsmc has no issues with it.
Vb_33@reddit
No, the 5090 is monolithic. No CoWoS involved.
RealThanny@reddit
The 5090 uses zero advanced packaging. None of nVidia's consumer GPUs use any advanced packaging, meaning they don't compete with any of their data center products in the packaging bottleneck.
Not so with AMD. Their top RDNA 4 chips would have used advanced packaging akin to what the MI300 chips have. That means each would-be 9080 or 9090 (assuming the same terrible naming scheme they ended up choosing) would take the place of an MI300 chip in the packaging pipeline, which is slower than every other part of the production process for those parts.
It literally comes down to a choice between gross margin of a few hundred dollars at most on a consumer graphics card or a gross margin of a few thousand dollars minimum on an AI/Compute module.
It's not a big mystery why that choice was made. What is a mystery is how anything will be different for UDNA. Giant monolithic chips like nVidia? Simpler MCM? Or will TSMC have sufficiently improved their packaging capacity by then?
Setepenre@reddit
I think the dichotomy between CDNA and RDNA might make RDNA high end less cost-effective.
The 4090 chip is AD102, which is used for other products (L20/L40) in their line-up, including GPUs that can be used for AI. Hopefully UDNA will enable AMD to do similar things, maybe repackage one of their MI datacenter GPUs as a high-end consumer card like the Titan cards used to be.
reddit_equals_censor@reddit
YES and it actually hurts in many ways to have things split.
you have to develop 2 architectures, then develop a set of cards based on those architectures and THEN (and that one hurts a lot) you gotta get masks and everything from tsmc to be able to start producing them and that last one is insanely painful. so you want to make as few dies as possible, or reuse them as much as possible.
and with a unified architecture with unified gpus as well, you are taking way less risks.
something doesn't sell to data center? well don't worry, sell it to gamers.
gamers don't want a card? off to datacenters more go.
and you inherently want high end cards for datacenters, so there is way less risk in making some quite expensive new chiplet design.
i mean the theoretical dream would be reusing core chiplets in most gpus and just scale the number of chiplets or how cut down they are, instead of producing several different dies.
the zen approach kind of.
remember that a zen chiplet approach for gpus actually wasn't possible until not that long ago, especially for cheaper gaming cards. cheaper as in compared to data center.
the reason being latency and insane bandwidth requirements.
it can also be a big marketing/mindshare win.
for example amd didn't have anything to really compete at the time, so what they did was just release some vega 20 dies that were bad bins into gaming/workstation-ish versions with 16 GB vram in 2019.
they didn't make a ton, but they could launch a decently high end and cool product.
the titan cards weren't that. the titan cards existed purely to get people to accept vastly higher prices for gaming cards.
they strapped some "workstation" use case stuff in the driver/hardware early on, but then stopped.
it was purely to get people to eventually accept 2000 us dollar gaming graphics cards...
something amd could do, however, is actually sell good value cards with double the vram that only cost the vram difference more.
just for marketing reasons that can make sense already. doing that even with midrange gaming cards makes sense.
releasing a 32 GB honestly marketed 9070 xt (what were they smoking with that name???) would give people something to talk about, good value if it's sold for just the vram cost difference, and it would equal the vram of nvidia's 2000 us dollar card. not the performance of course, but it certainly would help optics.
so lots of money to save, lots of marketing wins, lots of reduced risk. wins all around.
Setepenre@reddit
Titan V was a cut down V100. Titan X was a cut down P100. Titan RTX was the first not being a datacenter GPU, not rebranded.
Massive_Parsley_5000@reddit
I think this is exactly where this is all going.
It's much easier for AMD to cut down dies from pro cards, clock them 2x as fast, give them a much higher TDP, and say they are for gaming than having two distinct architectures and having to cut dies for both.
Although, to meme a bit, I have to say....
And somehow, GCN returned.... 😂
Zenith251@reddit
Except people will buy consumer 5090's, like they did 4090s, for CUDA dev work.
Very few people are buying RDNA consumer GPUs for LLM work.
If AMD is confident that consumer UDNA high-end will sell to prosumers and small businesses, not just gamers, I can see why they'd want to invest more wafer space.
PitchforkManufactory@reddit
The 5090, like the 4090 and Radeon VII, isn't the full chip. They're the cutdown leftovers of what the big boys are buying.
Vb_33@reddit
These leaks are old. RDNA5 was supposed to do this, and UDNA is RDNA5.
acc_agg@reddit
Welcome to amd, where the next generation will always fix everything wrong with this generation. Forever.
Decent-Reach-9831@reddit
Every chip company is like this
BinaryJay@reddit
It wasn't long after I bought the 4090 near launch that there were blackwell rumor articles talking about how it was going to be twice as fast. At some point we just need to accept these things for what they are, clickbait with little reason to actually pay much attention to until products actually come out.
nismotigerwvu@reddit
Well it doesn't take a leak to figure out that AMD plans to return to a full product stack with their new architecture. The exact reason isn't really even necessary, but we know that either there was a back-breaking issue in continuing RDNA, to the point that RDNA4 is only being used in select market segments and RDNA5 was canned at or near tape-out, or it could just be down to CDNA being more effective in these workloads once you tack the fixed-function graphics hardware back on. Regardless, AMD never had such a limited role for RDNA4 in mind and they are going to want to pivot back to business as usual ASAP. As others have already stated, late 2026 is a very typical timeline for a successor anyway, so this is just a nothingburger.
imaginary_num6er@reddit
Red Gaming Tech was doing RDNA 4 leaks during launch week for RDNA 3 because he had been saying RDNA 3 would be a 300% uplift compared to RDNA 2.
DktheDarkKnight@reddit
These leaks are usually on point though. Even the earliest RDNA 4 leaks mentioned that it will have only mid-range cards.
Extra-Advisor7354@reddit
Eh that seems like a logical take though. They couldn’t compete at the high end with 7000, and without a big node jump they obviously couldn’t compete with 9000.
juGGaKNot4@reddit
Muhahaha "leaks" a month away from launch aren't reliable let alone these.
But that's fine you can just say it was the target but it wasn't met and dumbasses will still buy your shit
reddanit@reddit
With the actual amount of time new silicon development takes, there genuinely are things that are basically set in stone literal years in advance. Obviously some things can change, especially whenever something either slips its timeline or gets cut/abandoned. In general though there are genuine pieces of info that can leak even earlier than this specific UDNA thing.
Obviously, if somebody is giving you specific FPS numbers a year before expected launch of a product you can and should laugh at them.
theunspillablebeans@reddit
I remember seeing it on Reddit not too long after I bought my 3070Ti that AMD was considering downsizing their line up to mid-range only. Of course leaks are inherently unreliable because they almost always have to protect their sources.
Recktion@reddit
Most credible leaks have been accurate. 2 years ago the leaks said mcm didn't work out as planned and AMD wasn't going to compete at the high end. It's not hard to tell what leaks are likely true if you use your brain.
BWCDD4@reddit
Nah not all of them, we have genuinely known for years since just after or before rdna3 was launched that rdna4 was going to be monolithic and they gave up on the high end.
kingwhocares@reddit
Intel had Celestial ready even before Battlemage dGPU release.
TheAgentOfTheNine@reddit
Wait until you hear about UDNA2
PAcMAcDO99@reddit
I think it is because it could be coming next year instead of 2027 or 2028
TK3600@reddit
Make AMD Great Again
NGGKroze@reddit
If for some reason UDNA releases next year, RDNA4 will be a bad purchase.
In a sense it will be like the 40 series non-Super and Super variants - where you either get more performance for the same price (the 4070 Ti Super even bumped the VRAM) or you get the same performance for less money (4080S).
This also means UDNA will compete with the 50 Super series as well.
I wonder if AMD will try to compete with the 90-class card from Nvidia or whether they will settle again for the 80 class.
jonydevidson@reddit
All tech is always a bad purchase. We're seeing generational improvements in tech coming out every year now in a bunch of product lines. TVs and soundbars are one example.
If anything, I would bet it's due to engineers and researchers having access to LLMs - in my case it supercharged my productivity to a point where I can get stuff done in a day which would've previously taken me 2 weeks, and can try out new stuff in days which would've previously taken me months.
So now more than ever, all tech is a "bad purchase" according to your philosophy.
Tech depreciates ridiculously fast. The moment you buy it, it's already lost 20% of its value. Two years later it's usually at -50% (unless it's a PlayStation 5).
You're buying it for what it is right now, and for what it'll give you right now.
Numerous-Complaint-4@reddit
Well I'm not an engineer (yet) in a tech sector, but I doubt that LLMs really help with that; the things getting researched are mostly not public and kept secret.
Geddagod@reddit
W mindset
jonydevidson@reddit
The main breakthrough point of an LLM is the way it lets you interact with knowledge. The things getting researched still rely on math and established physics principles. You can add your internal data to the LLM and have it interact with it the same way it does for everything else.
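In its simplest form, "adding your internal data" is just retrieving the relevant notes and putting them in the prompt. Below is a minimal sketch of that idea in Python, assuming made-up documents and a naive word-overlap ranking; build_prompt and the example docs are illustrative, not any particular product's API.

```python
# Minimal sketch of "adding internal data": rank internal notes by naive
# word overlap with the question and prepend the best matches to the prompt,
# so the model answers against your data instead of only its training set.
# build_prompt() and the example docs are illustrative, not a real product API.

def build_prompt(question: str, internal_docs: list[str], top_k: int = 3) -> str:
    q_words = set(question.lower().split())
    ranked = sorted(
        internal_docs,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    context = "\n---\n".join(ranked[:top_k])
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

docs = [
    "Test rig B runs the engineering sample at a 330 W power target.",
    "The thermal limit for the sample is a 95 C hotspot.",
]
print(build_prompt("What power target does test rig B use?", docs))
# The resulting string is what you would send to whatever chat model you use.
```

Real setups swap the word-overlap scoring for embeddings, but the principle is the same: the model answers against the context you give it.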
Strazdas1@reddit
Poorly? i got better results from GPT pretending to be an idiot than giving it all the correct keywords. Also it has a severe tendency to repeat the same answers with a word replaced for different questions.
jonydevidson@reddit
Right, that's why the company got a $150bln valuation, the product is actually shit and we're all idiots for getting work done 10-20x faster using it.
Numerous-Complaint-4@reddit
Well at least with chemical math ChatGPT is very fucking stupid if it is a little more complicated, and can't really calculate something right
jonydevidson@reddit
It absolutely can
Numerous-Complaint-4@reddit
Depends I guess how complex your question is, but even o1 doesn't know how to work sometimes
jonydevidson@reddit
if you ask it to do the calculations in python and then run it, it's right every time
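A minimal sketch of what that looks like, assuming a simple ideal-gas example (the function and numbers are illustrative, not from the thread); the point is that the arithmetic comes from running code rather than from the model predicting digits:

```python
# Instead of letting the model do the arithmetic "in its head", ask it to
# emit a script like this and execute it, so the numbers come from Python
# rather than from token prediction. Illustrative example only.

def ideal_gas_pressure(n_mol: float, temp_k: float, volume_m3: float) -> float:
    """Pressure in pascals from pV = nRT."""
    R = 8.314  # J/(mol*K), universal gas constant
    return n_mol * R * temp_k / volume_m3

if __name__ == "__main__":
    # 2 mol of gas at 300 K in a 10 L (0.010 m^3) vessel
    p = ideal_gas_pressure(n_mol=2.0, temp_k=300.0, volume_m3=0.010)
    print(f"Pressure: {p:.0f} Pa ({p / 101_325:.2f} atm)")  # ~498840 Pa, ~4.92 atm
```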
Strazdas1@reddit
ive yet to see it generate a python script i didnt have to fix.
jonydevidson@reddit
It does so for me on a daily basis.
Numerous-Complaint-4@reddit
Well im going to try this, thanks stranger
ResponsibleJudge3172@reddit
You are using the wrong AI tool for that.
Numerous-Complaint-4@reddit
Do you know any AI for that use case?
PitchforkManufactory@reddit
well there you go lol. All the big tech companies have their own derived LLMs now that have been approved by legal for internal use.
Numerous-Complaint-4@reddit
Well if so that's probably plausible, but even your o1 has problems solving complex thermodynamic questions; that's why I said that, at least for engineers working at the edge of technology, they might not be of real use.
crshbndct@reddit
What? MLA OLED is still the best TV you can get. The only thing that’s improving is a bit of brightness, but they are mostly at the bright enough level already.
Soundbars are getting incremental improvements but nothing amazing each year.
Decent-Reach-9831@reddit
Highly debatable. At the same price, I would choose VA Mini LED every time.
I really disagree. Even the brightest minileds are not quite bright enough for realism, and these oleds are nowhere near that.
https://youtu.be/5FdDUrHl5RE
crshbndct@reddit
We can agree to disagree on that one :-)
jonydevidson@reddit
Not for long, there's already stacked and tandem OLED that was at CES just a few days ago, launching in TVs this year (with Apple already using tandem OLED).
FloundersEdition@reddit
Consoles will likely use second gen UDNA. Going with RDNA4 should be a good purchase for 7-8 years, until next gen exclusives launch. It will carry you through the high demand phase of a new console generation (2028-2029?).
First gen UDNA will likely age poorly for gamers like Vega/RDNA1 did, with a lack of next gen gaming features. It will focus on AI and professional use and do well there, like Vega.
You shouldn't expect a new sub-$700 chip in 2026 either. CES for a 9070XT-priced product, ~November 2026 only for enthusiast/AI devs.
MrMPFR@reddit
100%. UDNA is a pipe cleaner architecture like RDNA 1, UDNA 2 is when things will get interesting.
FloundersEdition@reddit
Yeah, PS4 and XBone launched on GCN2 as well. They need a dev kit for the new architecture/cache hierarchy first, otherwise API development can't start. UDNA will basically be the common ground/lowest common denominator between both consoles, in a similar fashion to RDNA.
Custom features like different ROPs, surprisingly high clocks (probably a longer pipeline), primitive vs mesh shaders, packed FP16 & ML instructions, cache scrubbers and Sampler Feedback will be added, and they need a couple of quarters for APU development vs just a GPU. Vendors will also want higher yields, higher density implementations and a test chip with time to fix stuff.
AMD will implement the best of both worlds for themself.
MrMPFR@reddit
Interesting, just hoping we'll see both consoles prioritize functionality over time to market. I don't want another rushed PS5. Not having mesh shaders and sampler feedback on PS5 is plaguing recent games.
FloundersEdition@reddit
AFAIK, this is not true. The additional upgrade from primitive shaders to mesh shaders is not that big. Sampler Feedback got zero support because Epic has an in-house software solution anyway, and SFS seems to have a lot of CPU demand.
MrMPFR@reddit
Indeed, but it's still inferior, which is why the PS5 Pro moved to mesh shaders, and barely any devs have bothered to add support for any of these and continue to use the old pipelines, holding back graphical fidelity.
SFS is not the same as Sampler Feedback, and isn't software going to be inferior to HW acceleration? And is that tool available for other game engines? I couldn't find anything on Sampler Feedback CPU overhead, can you include the link?
FloundersEdition@reddit
Regarding SFS: I think NXGamer said it in an interview with MLID, he has good connections to software devs.
XSS 10GB is a way bigger limitation in driving new features as well as keeping last gen/GCN and Pascal alive. Add more meshes/triangles = way more memory required. Add a BVH structure = more RAM and bandwidth. Add both and the BVH size explodes.
The Alan Wake devs also said they deactivated mesh shaders on PC, because while speeding up RDNA2, it slowed down Nvidia. I don't know how Nvidia screwed that one up, since they invented it. PS5 APIs, engines and polygons are basically so customized, it doesn't matter anyway.
MrMPFR@reddit
The CPU overhead issue is probably due to bad code, Microsoft explains a mistake devs can make here (search for "performing many clears"). Not the first time we've seen devs botch new functionality. Alternatively the SFS is the MS implementation on XSX specifically and is a lot more extensive than the PC version so that could explain the difference also.
Forgot about the XSS, yes that's holding back gaming massively + Pascal and Polaris buyers that haven't upgraded in the mean time. Doubt it would ever become an issue with the barebones RDNA 2 RT implementation + devs can always choose to disable RT effects for many of the consoles, which they already have.
I can find nothing to suggest that, only an improved fallback compute shader released around March 2024 which massively increased performance on older GPUs. RTX Mega Geometry runs on triangle clusters that look identical to meshlets in UE5 and I'm 99% sure it requires mesh shaders to work, which likely explains why only AW2 has confirmed game integration. Conclusion: The game probably still uses mesh shaders.
FloundersEdition@reddit
Actually Remedy implemented Mesh Shaders in AW2, but the per primitive culling feature wasn't used because of Nvidia running slower https://x.com/Sebasti66855537/status/1845091074869690693
Disabling RT and disabling Mesh Shaders demands building a completely new pipeline and the game will look completely different. If devs try to support newer features and older cards from both vendors (not to mention Arc), that's basically 4x the work for devs, artists and quality insurance. That would increase budget and delay the game. They just don't bother for another 2-3 years until these cards become obsolete.
MrMPFR@reddit
Thank you for the link. I wonder if they followed this to the letter; the degradation on NVIDIA HW is very odd: https://docs.vulkan.org/samples/latest/samples/extensions/mesh_shader_culling/README.html#_per_primitive_culling
Can't argue with that. Graphics rendering being in no man's land rn is a huge problem and the example of RT on PC, limited ray tracing on console and a fallback of baked lighting + some limited real time non RT GI for everything = big mess for developers.
noiserr@reddit
The 9070xt is not a high end card. It's not like you're shelling out $2K for a GPU which will be obsolete in a year or two.
DYMAXIONman@reddit
The rule with graphics cards is to be 20% faster than consoles with enough VRAM. The RX 6700XT for example will not need to be replaced until the next Playstation comes out.
Dangerman1337@reddit
I mean if you're in the market for a 1440p $500 card right now then RDNA 4 will serve that market fine.
NGGKroze@reddit
That is true, and 1 year is a long time as well (for example, I could have waited 1 more year for the 50 series instead of the 40 Super).
https://www.techpowerup.com/329003/amd-to-skip-rdna-5-udna-takes-the-spotlight-after-rdna-4
Based on TPU UDNA GPUs will enter production in Q2 2026 which means perhaps Q4 2026 release (maybe Q1 2027). Now that will be good spacing from RDNA4 (close to 2 years).
I think however UDNA will focus more on AI as well, akin to Nvidia so on top of the generational improvements, AMD could bring more goods on the software side as well.
Depending on how UDNA performs, it could release between the 50 Super and 60 series, which could be a tough spot for AMD as well (could UDNA compete with the 50 Super, or should consumers just wait for the 60 series from Nvidia).
SomniumOv@reddit
That's it, the GPUs don't have prices announced and we've already hit the "Wait for next gen, then you'll see" part of the AMD Fan Hype Cycle.
DuranteA@reddit
If only AMD could deliver with the same level of consistency and reliability as their fan hype machine.
PalpitationKooky104@reddit
bot?
MumrikDK@reddit
He says to the lower scale game celebrity.
Dreamerlax@reddit
I was accused of being a bot on the AMD sub. Guess they label any criticism as bot behaviour I suppose. 🤷🏻♀️
SuperDuperSkateCrew@reddit
I was accused of being a bot in the Nvidia subreddit for criticizing my poor experience with the 6750XT
Dreamerlax@reddit
I'm having issues with Adrenalin. I'm a bot, I guess.
SuperDuperSkateCrew@reddit
Yeah I’ve been having nothing but driver issues since my update to Windows 11 and my system is basically unusable, system crashes as soon as I launch a game.
Upgraded to the 6750XT from a GTX 1070 and wish I would’ve stuck with Nvidia.
Dreamerlax@reddit
I've been burnt by that before, not falling for it anymore.
JensensJohnson@reddit
Wait for ~~RDNA 2/3/4~~ UDNA
unknown_nut@reddit
Some people here are already saying wait for UDNA 2.
SenorShrek@reddit
wait for ~~Polaris Vega VEGA 2 RDNA 1/2/3/4~~ UDNA
Akait0@reddit
AMD has been competitive with Nvidia almost every gen, because competition doesn't only happen in the high end.
But if you wanna go there, AMD was competitive in the RX 6000/RTX 3000 series, despite people claiming they wouldn't be able to compete based on the previous gen.
People still bought Nvidia even if it was a clearly bad choice (RX 6800-RTX 3070/Ti). It also happened in previous gens (RX 570 - GTX 1060 3gb)
Folks here act like AMD being competitive equals same raster performance and previous gen raytracing as Nvidia, for half the price. And even if AMD decided to bankrupt itself to please them, they would still claim Nvidia is actually better because X or Z. It's not gonna happen.
Firefox72@reddit
Really unfair to lump RDNA2 in there.
Yebi@reddit
First gen will probably have teething issues, wait for UDNA 2
Hellknightx@reddit
I was thinking of getting one of the new RTX 50X0 cards, but now I might just wait for UDNA 4. There's a chance they might be on par.
Doubleyoupee@reddit
Wait for Vega. Poor Volta
Exist50@reddit
Huh? No one's claiming the RX 9070 is going to be a flagship offering. That has been abundantly clear for a while now. The rumor is AMD will have a wider stack for next gen.
INITMalcanis@reddit
The next gen Hypecycle is revving up already?
theholylancer@reddit
It has to, because this gen there is nothing to be excited about.
It seems they had a pricing strategy; either it was torpedoed by Nvidia's preemptive price drops, or it was always going to be nearest-Nvidia minus $50.
Which isn't exciting when you don't have a top tier card, are fighting in the 70-class arena, and still don't have a proper price advantage since Nvidia is playing hardball down there. And I am not sure if AMD wants to do a $450 card to fight the 5070, which would actually give it some real fans again.
Vb_33@reddit
People always say Nvidia doesn't even think of AMD but Nvidia always reacts and makes it hard for AMD to ever make inroads via pricing.
theholylancer@reddit
I think it's more like Nvidia is way, WAY smarter about it.
AMD has been talking about going after the midrange for a while now, with leaks from a gen ago. So Nvidia set the prices before anything else.
AMD, when they do change their mind, acts like the 7600 launch, with hours to go and everyone scrambling. Or even a month or two post-launch when the cards aren't selling, and only in select markets because the rest of the world gets the new prices even later for some reason.
Part of it is also that Nvidia is the dominant one and everyone else has to follow it, but god damn they are smooth when they're faced with pricing pressure.
Vb_33@reddit
Nvidia specifically reacts to AMD and has for ages. Most price cuts are a reaction to AMD. Nvidia's game bundles were also a reaction to AMD, who pioneered that idea. Now this time Nvidia was proactive, but that is not always the case.
MrMPFR@reddit
100%. They did not expect this "aggressive" pricing by NVIDIA.
The only reason why AMD hasn't announced anything is because they want to price the 9070 series as high as possible without any backlash. If they were serious they would just go 9070XT $499 and 9070 $399. But TBH I'm more inclined to believe it'll be 9070XT $649-599 and 9070 $499-449 :C
And now it seems like the RDNA 4 deep dive has been postponed yet again. We're not getting these cards until February :C
BleaaelBa@reddit
Can't postpone something which didn't have a date set.
MrMPFR@reddit
you're right. I've corrected my comment.
kontis@reddit
The best way to get more performance from the same number of transistors is to do specialized ASICs (like video encoding) or at least specialized cores in the architecture (like Tensor, RT etc.).
Universal compute is less performant for specific tasks, but far more flexible, dev friendly, and allows more innovation.
AMD giving up on a gaming-specific architecture and pushing compute/server arch into Radeons means they are willing to sacrifice raw gaming performance for the future of AI.
However, if neural rendering takes over gaming completely this decision may be the right bet even for gaming. We will see.
Vb_33@reddit
If ML and RT don't continue to take over the industry there's no way forward.
gokarrt@reddit
the amd gpu hypecycle is a perpetual energy machine
OutrageousAccess7@reddit
rdna 4 is a stopgap product like the rv670.
kingwhocares@reddit
Should've just gone for an RX ~~8600~~ 8060 and ~~8500~~ 8050, because the RX 7600 is based on 6nm.
TheElectroPrince@reddit
Those are reserved for the new S-series iGPUs.
ibeerianhamhock@reddit
Yep. I think 50 series is a stopgap too. This is going to be a really short GPU gen after a really long one.
MrMPFR@reddit
RDNA 1 was also kinda a stopgap. RDNA 2 was the real deal with DX12U support and much higher clocks.
Strazdas1@reddit
the "next gen will fix it" leaks are happening even before this gen launches.
BinaryJay@reddit
Rumor: RDNA3 supposed to compete with the 4090.
Reality: It didn't come close.
Rumor: Because of some kind of bug that prevented it from reaching the clock speeds they thought it should. RDNA '3.5' would fix this bug, clock speeds would soar on the next product revisions and beat the 4090.
Reality: Even RDNA4 is showing no signs of this happening by any metric.
Lesson Learned: When it comes to rumors about AMD GPUs, maybe don't get too excited until it is released.
ResponsibleJudge3172@reddit
You are being charitable.
The rumor was that Nvidia was biting the bullet and overpaying TSMC as payback for how Nvidia had shunned them by going with Samsung.
They needed to do this while also doubling power consumption because they were worried that AMD would triple performance, be cheaper than Nvidia, and use a smaller die. Later the rumor was reduced to RDNA3 being 20% faster while 50% more efficient than an RTX 4090 Ti.
DehydratedButTired@reddit
AMD rumors are usually so far off for the gpu side. I’ll believe it when I see it.
Dangerman1337@reddit
The only thing that really makes me question it is the use of N3E if it's chiplet based next year, because wouldn't it be more logical to use TSMC N3P for any GPU chiplets? Especially if they're reviving the Navi 4C IOD/interposer tech.
ProperCollar-@reddit
You're absolutely dreaming if you think that's the case. Do you genuinely think AMD was sitting on something competitive with the 5090?
Kryohi@reddit
Competitive is a big word that depends on a lot of things, but the 5090 is "that good" mostly because it's huge (750mm2, 512-bit bus). Reaching that kind of performance (at least in raster) is not hard if you throw a lot of silicon at it, e.g. doubling everything in Navi 48; the problem is to actually make money from it. Which is hard if you don't have a lot of potential consumers willing to spend $2000+ on it.
InformalEngine4972@reddit
No one buys a 5090-tier card to play on low settings (RT off).
The reason an AMD competitor won't sell is exactly that. They need to also match Nvidia in ray tracing and DLSS, which they don't, and they are still 2 generations behind.
RT level on Blackwell is 4.5 out of 5.
RDNA 4 just reached level 3, which matches Ampere.
EbonySaints@reddit
I'm certain that there's one fool with more money than sense who plays CS2 or RS:S at 1080p Low on a 4090 just for "the frames", and I'm certain that there will be one with a 5090.
ProperCollar-@reddit
The only thing that makes them a fool in that scenario is they're likely CPU-bound, not GPU-bound.
EbonySaints@reddit
What I was trying to imply is that they were probably skill-bound more than anything hardware related.
Decent-Reach-9831@reddit
Many such cases!
dudemanguy301@reddit
If you are going to reference Imagination Technologies' "levels" classification, you should probably mention it so that people who don't already know what you are talking about have some context. An account-walled paper from a few years ago is a bit obscure.
SirActionhaHAA@reddit
Wrong.
InformalEngine4972@reddit
Maybe correct me instead ? Your comment helps no one.
SherbertExisting3509@reddit
AMD's ray accelerators are far behind Intel's and Nvidia's RT cores in ray triangle intersections per cycle (which matters in PT)
RDNA2/3 = 1 per cycle
Battlemage=3 per cycle
Ada = 4 per cycle (2x over ampere)
dudemanguy301@reddit
When he’s referring to “levels” on a 1-5 scale he’s specifically referring to a classification made by imagination technologies. He provided no context on this but it’s pretty clear to anyone that has seen the papers before.
https://blog.imaginationtech.com/introducing-the-ray-tracing-levels-system-and-what-it-will-mean-for-gaming/?hs_amp=true
GARGEAN@reddit
Technically they are even behind Ampere, since by Ampere NV already had parallel hardware ray and BVH calcs. RDNA4 still seems to be stuck doing rays only on their shader units.
MrMPFR@reddit
Not true. The issue is the shared resource approach and the lack of concurrency, if it's unchanged from RDNA 3 (we still don't know). Doing RT in the TMUs is not a good idea for path traced games.
As for BVH traversal, the leaks + PS5 Pro patents suggest RDNA 4 has BVH traversal HW acceleration, + Cerny confirmed there's some sort of divergence mitigation akin to NVIDIA's SER and Intel Arc's TSU. So the HW functionality is most likely close to Battlemage and Ada Lovelace, but the performance will fall behind.
MrMPFR@reddit
There's still nothing suggesting Blackwell is on level 4.5. Ampere RT is level 3, Ada 3.5. At best Blackwell is level 4. Still no scene hierarchy generation in hardware, although CPU overhead will be massively reduced with clusters and RTX Mega Geometry.
RDNA 4 = level 3.5. Cerny talked about managing ray divergence in hardware.
GARGEAN@reddit
Do remember that only part of that silicon goes to raster. You can MAYBE reach the 5090 with a comparably huge die in raster if you are AMD. Raster, RT and AI all at the same time? Lol. LMAO even.
RealPjotr@reddit
This was known almost a year ago, back when AMD first said RDNA4 was not going high end.
AMD tried the chiplet design that is so successful on the CPU side. It failed to reach its targets in RDNA3 and they saw it wasn't going to get much better with RDNA4. So they dropped it for RDNA4, leaving the high end for a generation.
We'll see if it makes a return or not for RDNA5, but it will be a rethink and retake of AMD GPU architecture. They need to aim for AI/data center at least as much too.
DYMAXIONman@reddit
Chiplet usually comes with downsides compared to monolithic. As long as Nvidia is not using chiplets, AMD can't really move in that direction.
Decent-Reach-9831@reddit
It also comes with upsides
CrzyJek@reddit
It'll be back after RDNA4 once they unify. Instinct is already chiplet.
MrMPFR@reddit
TSMC is moving fast on packaging tech, it'll likely be a lot better in late 2026 than with RDNA 3. It'll be interesting to see if AMD goes monolithic, 3D stacked (compute tile on top of base tile with IO, mem phys, and infinity cache) or takes the MCM route with UDNA.
Gachnarsw@reddit
If they go MCM, I'm wondering if AMD will share compute tiles between client and data center like with Zen. AFAIK CDNA tiles don't have ROPs, TMUs, or RT hardware.
Is it possible, feasible, or desirable to put those graphics blocks on a separate tile and maintain competitive performance and power?
I wouldn't be surprised if the answer is no MCM yet.
MrMPFR@reddit
Interesting. Can't say which one it'll be but looking forward to hearing more about it. Perhaps we'll get another AMD Architecture day where they'll spill the UDNA beans. Fingers crossed.
king_of_the_potato_p@reddit
RDNA4 feels like a stopgap, so I wouldn't be surprised if it's a short-term line.
BarKnight@reddit
I would say it's not going to sell well, but RDNA3 already set a low bar there.
king_of_the_potato_p@reddit
I'll be curious about UDNA. At the moment I'm using an XFX RX 6800 XT Merc for 4K. It's been a solid card, but hopefully something with more VRAM, and maybe an upscaler for browser streaming, will be a thing by then.
Ok_Fix3639@reddit
“Flagship” just means a card for the $999 price point like previous generations.
ibeerianhamhock@reddit
This whole card generation is going to be short lived. I see 60 series dropping next year. It's basically a waste of money to buy a new card this year if you have a 40 series or a 7900+ amd gpu imo.
ResponsibleJudge3172@reddit
RDNA3 and RDNA4 were "leaked" to launch 1 year after the previous gen. That never happened. Just saying
SceneNo1367@reddit
RDNA2 launched 1 year after RDNA1.
Kryohi@reddit
I'm interpreting "next year" as at some point in 2026, so it is believable imho. Q3 or Q4 2026 isn't unlikely, and it would be closer to two years after RDNA4 tbh.
Mi400 is also planned for 2026, so if UDNA is what these leakers claim, I can see consumer GPUs being launched a few months after that, provided no big problems arise in drivers.
Dangerman1337@reddit
If I was AMD I'd be launching UDNA & Zen 6 (X3D) together ASAP. Zen 6 X3D & a 512-bit UDNA Card sounds like a killer combination that upsells the AMD brand.
If they can get chiplet GPUs really working (Orlak & Kepler implied N4C was in good shape but AMD canned it because they got spooked by GB202, which in hindsight was a huge mistake) then AMD can get a competitive lineup against RTX 60/Blackwell-Next/Rubin. If multi-GCD can work very well and they can launch UDNA chiplet GPUs next year that compete against RTX 60 then they should do that, no excuses.
The only thing that makes me question it all is TSMC N3E and not N3P. I mean, it would only make like 5% performance difference perhaps, but against RTX 60 a top UDNA card needs all it can get. Though I suspect any low end, 128-bit die will still be N4P/N4X (a 12GB GPU that's at least 3070 Ti+ performance would be great).
JakeTappersCat@reddit
If I was AMD, I'd skip to UDNA3 right away. Why wait?
Dangerman1337@reddit
I think a lot of stuff changed since then; AMD thought they had a huge winner with RDNA 3, with N31 competing directly against AD102, but of course then they started to say in public "actually it's a 4080 competitor". And trying to fix RDNA 3 as much as possible pushes things back.
DeeJayDelicious@reddit
We've known about Strix Halo 2 years in advance too.
But frankly, a new chip using what should then (in 2027) be the 2nd best node available isn't really surprising.
basil_elton@reddit
Rumors about future AMD GPUs - rumors, not patches in the LLVM or Linux kernel - have a lower probability of turning out to be true than a coin toss.
Various-Debate64@reddit
Radeon VIII