RTX 5070 Ti vs RX 9070 XT - DLSS 4 vs FSR 4 Performance Compared
Posted by Antonis_32@reddit | hardware | View on Reddit | 155 comments
ShadowRomeo@reddit
TLDW: 5070 Ti and 9070 XT performance is nearly identical in raster, with the 5070 Ti being only 5% faster on average, but the gap narrows to nearly equal once upscalers are turned on, probably due to anomalies like SW Outlaws being significantly slower with DLSS than with FSR 4.
I would also like to point out that while DLSS 4 may appear slightly slower than FSR 4 at the same respective quality modes, DLSS 4 still has the edge in overall image quality. One may argue that DLSS Balanced looks equivalent to FSR 4 Quality, which swings performance back in favour of RTX GPUs.
IMO, based on my own use case, I barely use DLSS Quality nowadays because I find Balanced to be the sweet spot: still good image quality, but more performance. DLSS Performance is the lowest I will go if I really want to extract more performance; I find it acceptable, but it is noticeably worse than native.
AreYouAWiiizard@reddit
This isn't just raster though, quite a few of the games tested are using RT.
Exajoules@reddit
The RT titles are fairly modest in terms of RT requirements though. Ratchet and Clank and Spider-Man 2 have pretty much the exact same RT scaling on the RTX 3000 series as on the RTX 5000 series.
For example, the raster performance difference in Spider-Man 2 and Ratchet and Clank is almost exactly the same as the performance difference when RT is turned on.
In Ratchet and Clank the 9070 XT is 16% slower with RT on, and 14% slower with RT off. In Spider-Man 2 the difference is the same for both raster and RT. So even if you changed those two games from "RT" to "raster", the overall performance difference wouldn't really change.
AreYouAWiiizard@reddit
A lot of those games are using RT but aren't specified as such, for example F1 25, Assassin’s Creed Shadows (forced afaik), The Altar (I think it's just software Lumen but still technically RT), Borderlands 4, Mafia (Lumen but idk which), Marvel Rivals (Lumen, not sure if software or hardware on selected preset), Stalker 2 (Lumen, not sure if HW). Along with the ones already mentioned.
Exajoules@reddit
All of your mentions use software RT based on shaders, not RT hardware. Assassin's Creed for example goes from being
AreYouAWiiizard@reddit
Even if they are using software RT, that doesn't mean it isn't RT.
Exajoules@reddit
No, but that software RT is mostly using shaders, aka rasterised performance - so you are just testing raster performance with extra steps.
AMD718@reddit
I see. So a better and more balanced method would have been to select only the games that do the best vs. AMD and only use those in the selection.
Exajoules@reddit
No, the point is that the RT results don't change the overall picture, as 2 of the 3 RT games show the exact same relative results if you measure their raster performance instead of their RT performance.
AMD718@reddit
They used the highest non-PT ray tracing setting and did not include path tracing, as it is only in a handful of games and everyone already knows that's Nvidia's wheelhouse. It remains to be seen how that changes post-Redstone. Your issue is that they didn't choose RT games that more heavily favor Nvidia. The point is you can always select games that favor each side and basically shop for the result you're looking for. HUB attempted to use a balanced and representative mix here.
Exajoules@reddit
The point is that 2 of the 3 games don't affect the results at all (if they had used raster numbers instead of RT, the relative results would've been the same). Sure that is good to know, but it makes the claim "raster and RT" a tiny bit misleading imo, as there were so few games tested, and only one of them had a decently demanding RT setting. Both SM2 and RC:rift apart have the same RT scaling for the RTX 3000 series as the RTX 5000-series for example, clearly showing that these "RT" titles care more about raster than actual RT performance.
This isn't a jab at HUB, as they mention PT results towards the end, but it was a response to the commenter.
Aggravating-Dot132@reddit
Outlaws is an outlier, just like Stellar Blade and Dying Light: The Beast, so it's not really worth pointing out.
ExplodingFistz@reddit
What's with the performance degradation of AMD in Stellar Blade? Seems like they didn't optimize this game at all in their drivers.
OftenSarcastic@reddit
No. TLDW: in mixed raster and ray tracing across 22 games, the 5070 Ti is 5% faster at native and 2% faster with quality upscaling.
In the raster-only results across 16 games that they showed from their previous video, the 9070 XT is 3% faster with the latest drivers.
If you eliminate Outlaws, the difference changes from +1.9% to +2.4% in favour of the RTX 5070 Ti.
If you eliminate Stellar Blade, the difference changes from +1.9% to +0.8% in favour of the RTX 5070 Ti.
If you eliminate both outliers, the difference changes to +1.3% in favour of the RTX 5070 Ti.
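For anyone curious how a single game can swing an average like that, here is a small sketch. The per-game ratios below are made up purely for illustration; only the method (a geometric mean of per-game FPS ratios, recomputed with outliers dropped) reflects how these summary deltas behave.

```python
# Illustration of how one outlier game moves a geometric-mean performance delta.
# The per-game ratios are hypothetical, not HUB's data.
from math import prod

# ratio = 5070 Ti FPS / 9070 XT FPS per game (hypothetical values)
ratios = {"Game A": 1.03, "Game B": 0.98, "Game C": 1.05,
          "Outlaws": 0.80, "Stellar Blade": 1.25}

def geomean_delta(values):
    """Percent advantage for the 5070 Ti implied by the geometric mean of the ratios."""
    gm = prod(values) ** (1 / len(values))
    return (gm - 1) * 100

all_games = geomean_delta(list(ratios.values()))
no_outlaws = geomean_delta([v for k, v in ratios.items() if k != "Outlaws"])
no_outliers = geomean_delta([v for k, v in ratios.items() if k not in ("Outlaws", "Stellar Blade")])

print(f"All games:        {all_games:+.1f}%")
print(f"Without Outlaws:  {no_outlaws:+.1f}%")
print(f"Without outliers: {no_outliers:+.1f}%")
```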
Exajoules@reddit
While true, it is also slightly misleading (unless I missed something). In the test they had only 3 RT games, and two of those (Spider-Man 2 and Ratchet and Clank) have almost the exact same performance delta between the 5070 Ti and 9070 XT in raster as well. Spider-Man 2 is pretty much identical, and Ratchet and Clank goes from the 9070 XT being 16% slower with RT ON to 14% slower with RT OFF - not really a big difference.
The only RT game which really impacted the results here is Cyberpunk, and even there the highest RT setting wasn't used.
OftenSarcastic@reddit
It's only misleading if you start from the assumption that the 5070 Ti must be inherently much faster than the 9070 XT at ray tracing across a wider selection of games. However, if you look at reviews, that's not really true; it's an assumption based on previous generations.
If your favourite games are Stellar Blade and Cyberpunk 2077 then sure just look at them, but for a healthy overview presented to a general audience it's worth looking at a wider selection of games.
From TechPowerUp, May 1st:
From ComputerBase, March 5th:
It's only when you venture into path tracing that there's a significant persistent shift for the newest generation.
Exajoules@reddit
I think you misunderstand; it's not that the 5070 Ti > 9070 XT in RT, but that the RT numbers in their newest benchmark barely change the raster picture at all, as 2 of the 3 RT games included have almost identical relative performance in raster as well. Heck, even removing the Cyberpunk result would still leave the 5070 Ti similarly ahead per their numbers - which actually contradicts their previous video, where they somehow got the 9070 XT to be faster than the 5070 Ti with the newest drivers (a result no other reviewer managed to reproduce).
ShadowRomeo@reddit
Yep, my bad, already edited my main comment to correct my mistake.
motorbit@reddit
one might be wrong, though.
ShadowRomeo@reddit
Except they may be right this time, because DLSS 4 still looks better than FSR 4 at their respective quality modes; this has already been shown by multiple review outlets.
AMD718@reddit
So we watched HUB's video that's the topic of this post, but didn't bother to watch HUB's videos on FSR4? They are very close now. It was the case with FSR3 that DLSS could look better at a lower quality level, but that is no longer the case with FSR4.
Firefox72@reddit
DLSS4 definitely looks better, but it's slowly reaching the diminishing-returns stage where the differences get harder and harder to spot for most people.
DLSS3 was already more than good enough for the vast majority of users. FSR4 looks as good or better than it in most cases.
Jellyfish_McSaveloy@reddit
The benefit of the better upscaler, even with diminishing returns at the top end, is that you can reduce the render resolution and get more performance for the same image quality.
At 1440p, DLSS4 Balanced (transformer) looks better than DLSS3 Quality (CNN), which is fantastic. Especially if the point is that DLSS3 is already 'good enough'.
BlueGoliath@reddit
"Multiple review outlets" have repeatedly used basically still images to compare image quality when DLSS / FSR issues are half motion related.
Take what they say with a grain of salt.
Aggravating-Dot132@reddit
Better is subjective though, especially in Quality mode. DLSS has wider support, is more common overall, and is thus better fine-tuned, but in games like Horizon Forbidden West the Quality mode is identical and the only way to see a difference is Performance mode.
Yes, technically. However, this very video was aimed at a real-world scenario. The absolute majority of players will play at native or Quality at 1440p, not 1080p Performance.
lifestealsuck@reddit
If you stand really fcking still maybe.
OddMoon7@reddit
Something I don't see mentioned as well is just the fact that DLSS gives you really good framegen compared to FSR.
AMD718@reddit
The most usable form of fg is 2x and there DLSS, FSR, and XeFG all provide a similar fg experience.
gusthenewkid@reddit
Framegen isn’t good in any game I’ve tried it in.
RedTuesdayMusic@reddit
I got a fantastic kick in the groin the other day
AMD718@reddit
Sure, one could argue that if it makes them feel good, but they'd be wrong.
ClerkProfessional803@reddit
I haven't budged from DLSS4 performance mode. I just can't see the difference anymore. Against DLSS3, all three modes are a night and day difference.
Aggravating_Ring_714@reddit
Possible 10+ years of driver support vs approximately 3-5 years? 🤣
From-UoM@reddit
Ironically, this longer support is what makes DLSS4 slightly slower than FSR4.
Nvidia can make DLSS faster by moving from FP16 to FP8 (FSR uses FP8). But that would mean the RTX 20 and 30 series losing support, or having to maintain two different versions of DLSS.
At 4K Performance, it's currently 1.50 ms for DLSS FP16 on a 4080 and 1.312 ms for FSR4 FP8 on a 9070 XT.
Nvidia could potentially get close to 0.75 ms by moving to FP8 on a 4080.
They could make it even faster with FP4 or NVFP4 on Blackwell.
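A rough sketch of where that ~0.75 ms estimate comes from, under the assumption (not a measured fact) that the upscaler is compute-bound and that FP8 tensor throughput is roughly double FP16 on hardware that supports it natively:

```python
# Back-of-the-envelope: if the network is compute-bound, halving the precision
# (FP16 -> FP8) roughly doubles tensor throughput and so halves the pass cost.
# Timings are the ones quoted above; the 2x factor is an assumption.

dlss_fp16_4k_perf_ms = 1.50    # 4080, DLSS SR FP16, 4K Performance (quoted)
fsr4_fp8_4k_perf_ms = 1.312    # 9070 XT, FSR 4 FP8, 4K Performance (quoted)

fp8_speedup = 2.0  # assumed ideal FP16 -> FP8 throughput gain
dlss_fp8_estimate_ms = dlss_fp16_4k_perf_ms / fp8_speedup

print(f"Hypothetical DLSS FP8 cost on a 4080: {dlss_fp8_estimate_ms:.2f} ms")  # 0.75 ms
print(f"Quoted FSR4 FP8 cost on a 9070 XT:    {fsr4_fp8_4k_perf_ms:.3f} ms")
```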
Aggravating_Ring_714@reddit
Sure, but then again DLSS Balanced looks better than FSR 4 Quality, so it's not an apples-to-apples comparison.
Shidell@reddit
I've only ever heard people say the two are comparable.
Source?
Disregardskarma@reddit
They’re close, but the transformer model is generally a notch above what FSR does
the_dude_that_faps@reddit
The only comparisons I've seen say they're comparable, with DLSS4 being a notch sharper but more susceptible to ghosting.
Considering that FSR4 is above DLSS3, or at worst comparable, I don't think the transformer model is notably better. Or at least, not by enough that running a whole quality setting lower looks on par.
inyue@reddit
People have been saying they're comparable since FSR1.
Kryohi@reddit
You're assuming that would even work. LLMs can be easily converted to lower precision, but accuracy still goes down. Most other types of models degrade much faster with decreasing numerical precision. At the very least, they would need to retrain it from scratch, probably increasing the number of parameters.
KARMAAACS@reddit
Slower but it looks better. I'd take that trade off because it means I can move down to balanced or performance and get equivalent image quality and then race ahead in performance.
Mind you I'm not saying FSR4 is bad, but relative to DLSS4 it's worse and the transformer model itself is not perfect. But if I can move down a tier and get basically a wash in image quality versus a higher tier of FSR4, I'd say NVIDIA's made the right choice.
It's ironic that AMD was always billed as being more consumer friendly and putting out open source stuff and supporting older products, when NVIDIA's the one being the gold standard here. It's kind of embarrassing for AMD.
I don't think NVIDIA even has to drop support for 20 series or 30 series. They could easily make a new branch of DLSS that leverages this FP8 or FP4 and thanks to their DLL swapping and the NVIDIA app they could detect 50 series and 40 series and use that enhanced performance version instead for the relevant cards. The model is trained on all the same data, so it's just about instruction set at that point and balancing it with quality. I suspect that's what DLSS 4.5 will be about, NVIDIA's just waiting to release it with 50 series SUPER probably.
Balu2222@reddit
DLSS4 already uses FP8, as stated in the DLSS4 paper; that's why it's slower on the RTX 30 and 20 series.
https://research.nvidia.com/labs/adlr/DLSS4/
From-UoM@reddit
DLSS4 upscaling still uses FP16.
DLSS4 ray reconstruction uses FP8.
Balu2222@reddit
But then, if FP8 is not a support problem for RR, why are they using FP16 for SR? You can still run FP8 RR on the 30 and 20 series; it just runs at half the speed.
From-UoM@reddit
Probably for exactly the reason you said: performance. You can use FP8 on FP16 hardware through emulation, but as shown, the performance tanks.
And DLSS SR is way more popular and used by many 20 and 30 series owners.
So using FP8 wasn't on the table. But I do think we will see FP8 or NVFP4 versions soon.
Balu2222@reddit
But performance wouldn't tank; it would run the same as now on the 20 and 30 series, just at half the speed of the 40 and 50 series, like RR does now.
From-UoM@reddit
It would. There is a real cost to emulation when converting FP8 to FP16. You need to factor that cost in, as it is not free. Remember, the 20 and 30 series can't run FP8 natively.
That's why RR drops performance much more on the 20 and 30 series than on the 40 and 50 series.
And since DLSS SR still uses FP16 and can run natively, there are similar performance drops throughout the 20 to 50 series.
Balu2222@reddit
I don't think you need to emulate anything with these data formats; you just put the FP8 numbers in the FP16 ALUs and the leftover bits get padded. And the DLSS RR perf numbers show this. RR is using FP8 sparse.
The RTX 4070 Ti has 641 TFLOPS (FP8 sparse) and the RTX 3080 Ti has 296 TFLOPS (FP16 sparse); that's 2.17x the peak perf, and the RR cost is 2.09 ms vs 3.97 ms, a 1.9x difference in time. Remember that not every part of the RR pipeline and AI model is calculated in FP8, so there will never be a one-to-one match.
The other example is the RTX 3070 at 162 TFLOPS (FP16 sparse) and the RTX 2080 Ti at 114 TFLOPS (FP16 dense); the perf difference is 1.42x and the RR cost is 6.06 ms vs 8.2 ms, a 1.35x difference, so this is close too.
So I don't think there is any emulation cost in play.
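For anyone who wants to sanity-check that argument, here is a small sketch that just recomputes the ratios from the figures quoted above (no new measurements):

```python
# The ratio check being made above: does relative RR cost track relative tensor
# throughput? If the older card paid a large extra emulation cost, its RR time
# would be worse than its compute alone predicts, so the RR speedup would
# clearly exceed the compute ratio. Figures are the ones quoted in the comment.

pairs = [
    # (label, faster-card TFLOPS, slower-card TFLOPS, faster RR ms, slower RR ms)
    ("4070 Ti (FP8 sparse) vs 3080 Ti (FP16 sparse)", 641, 296, 2.09, 3.97),
    ("3070 (FP16 sparse) vs 2080 Ti (FP16 dense)",    162, 114, 6.06, 8.20),
]

for label, fast_tflops, slow_tflops, fast_ms, slow_ms in pairs:
    compute_ratio = fast_tflops / slow_tflops
    rr_speedup = slow_ms / fast_ms
    print(f"{label}: compute {compute_ratio:.2f}x, RR speedup {rr_speedup:.2f}x")
# Both pairs land close together (2.17x vs 1.90x, 1.42x vs 1.35x), which is the
# "no emulation overhead" argument being made here.
```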
From-UoM@reddit
It's not about the cost in TFLOPS. It's about the ms cost.
Even if, let's say, the 3090 Ti and 4070 Ti had the same ms cost for the RR portion, the 3090 Ti would still take longer TOTAL TIME as it has to do two tasks:
the ms cost of converting FP8 to FP16, and then the ms cost of running RR. Two tasks, which increases the total ms cost.
Meanwhile the 4070 Ti can run RR from the get-go. One task.
Balu2222@reddit
Ok... so riddle me this: if the 3080 has around half the tensor compute power of the 4070 Ti, and that alone means RR has double the cost on that GPU, then where is the additional cost of this emulation?
From-UoM@reddit
Twice the compute doesn't mean twice the performance.
The 5090 is almost 1.9x the compute over the 5080 but it isn't running 1.9x faster than the 5080 now is it?
It's what, ~50-60% faster in games?
So the reason the gap is actually that big for the 4070 Ti is that there is additional compute happening on the 3080, leading to that big gap.
Balu2222@reddit
You can't compare a small neural network that is close to 100% ALU-bound to the complexity of game rendering. Just look at the RR cost in the developer guide. The RTX 5090 has 1676 TFLOPS (FP8 sparse) vs the RTX 5080's 900 TFLOPS (FP8 sparse), and the cost of RR at 4K is 1.83 ms on the 5090 and 3.17 ms on the 5080. The 5090 has 1.86x the compute power and runs RR 1.73x faster - almost identical, less than a 10% difference.
From-UoM@reddit
You are forgetting that the 5090 also has almost double the bandwidth. The gap should be much higher given how bandwidth-dependent inference is. But it isn't.
Balu2222@reddit
Once again, these are really small neural networks that are almost 100% ALU-dependent. A large part of the network fits inside the chip and isn't even touching the memory interface during inference. Just look at the RTX 4080 and RTX 5080: the 5080 has only 15% higher compute perf (900 TFLOPS vs 780 TFLOPS FP8 sparse) but 34% higher bandwidth (960 GB/s vs 717 GB/s), yet it only runs RR 14% faster (3.17 ms vs 3.62 ms at 4K). If RR were bandwidth-dependent, the RTX 5080 would run it faster, closer to the memory bandwidth increase.
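A quick ratio check of that claim as code, a sketch using only the figures quoted above:

```python
# Does RR cost track compute throughput or memory bandwidth? Using the
# 4080 vs 5080 figures quoted in the comment.

specs = {
    "RTX 4080": {"fp8_tflops": 780, "bandwidth_gbps": 717, "rr_4k_ms": 3.62},
    "RTX 5080": {"fp8_tflops": 900, "bandwidth_gbps": 960, "rr_4k_ms": 3.17},
}

compute_gain = specs["RTX 5080"]["fp8_tflops"] / specs["RTX 4080"]["fp8_tflops"]
bandwidth_gain = specs["RTX 5080"]["bandwidth_gbps"] / specs["RTX 4080"]["bandwidth_gbps"]
rr_speedup = specs["RTX 4080"]["rr_4k_ms"] / specs["RTX 5080"]["rr_4k_ms"]

print(f"Compute gain:   +{(compute_gain - 1) * 100:.0f}%")    # ~15%
print(f"Bandwidth gain: +{(bandwidth_gain - 1) * 100:.0f}%")  # ~34%
print(f"RR speedup:     +{(rr_speedup - 1) * 100:.0f}%")      # ~14%
# The RR speedup tracks the compute gain, not the bandwidth gain, which is the
# ALU-bound argument being made here.
```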
From-UoM@reddit
Oh shit. I just realized something.
DLSS RR runs SR as well. So the total cost minus the DLSS SR cost will give the actual DLSS RR cost.
It's even written in the docs that DLSS-RR will execute DLSS-SR on its own.
So:
3080 Ti: 8.49 - 2.08 = 6.41 ms for just RR (used the 3090's DLSS SR numbers for the 3080 Ti)
For the 4080 it will be 3.62 - 1.50 = 2.12 ms.
There. That's the cost of running RR.
Not 8.49 vs 3.62, but rather 6.41 vs 2.12.
DLSS SR numbers here -
https://github.com/NVIDIA/DLSS/blob/main/doc/DLSS_Programming_Guide_Release.pdf
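A sketch of the subtraction being proposed here, using only the figures quoted in this exchange; whether the subtraction is even valid is exactly what the next reply disputes.

```python
# Proposed decomposition: treat the published DLSS-RR cost as
# (upscale + denoise) and subtract the standalone DLSS-SR cost to estimate the
# denoiser on its own. Figures are the 4K numbers quoted above.

rr_total_ms = {"RTX 3080 Ti": 8.49, "RTX 4080": 3.62}  # DLSS-RR cost as quoted
sr_only_ms  = {"RTX 3080 Ti": 2.08, "RTX 4080": 1.50}  # DLSS-SR cost (3090 figure used as a stand-in for the 3080 Ti)

for gpu in rr_total_ms:
    denoise_estimate_ms = rr_total_ms[gpu] - sr_only_ms[gpu]
    print(f"{gpu}: estimated RR-only cost {denoise_estimate_ms:.2f} ms")
# RTX 3080 Ti: 6.41 ms, RTX 4080: 2.12 ms under this assumption.
```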
Balu2222@reddit
SR and RR are two different things. SR only does upscaling, while RR does upscaling and denoising. RR upscales the raw noisy image and then runs the denoising. So it requires a different upscaling network than the one inside SR, because SR's network works on clean images while the one in RR upscales noisy images. RR is a complete upscale-and-denoise package, so the stated millisecond costs are for the whole pipeline, upscaling included.
And it's clearly written in the DLSS RR developer guide on page 11:
"When DLSS-RR is enabled, it effectively overrides DLSS-SR execution (in other words, DLSS-SR is no longer being executed)."
If the stated RR cost did not include the upscaling cost, there would be no need to state the cost of RR at different resolutions.
And you can't take the SR cost from the SR developer guide, because it's a different upscaling model built for clean images.
From-UoM@reddit
Read the doc:
1) DLSS-RR overrides DLSS-SR.
Which means DLSS-RR is running an upscaler within itself that replaces the DLSS-SR integration.
2) DLSS-SR is no longer being executed.
So upscaling is happening (you can still select Quality, Balanced, etc. modes), and it's not the DLSS-SR integration doing it, as that's not being executed.
Meaning DLSS-RR runs both the upscaler (FP16-based, which we know from the DLSS-SR doc) and the denoiser (FP8-based, which we know from the article).
That's why the very first line doesn't mention a 16-bit algorithm like the DLSS-SR documentation does - because there is both 8-bit and 16-bit involved.
If it was only doing denoising, they would have said an 8-bit algorithm was running, as we know DLSS-RR denoising is FP8.
Sevastous-of-Caria@reddit
Do old graphics cards even get useful drivers that aren't just security updates? There's an endless sea of complaints from 30 series and lower owners about how new drivers make performance WORSE.
Aggravating_Ring_714@reddit
I mean, yeah? Compare Vega cards in new games vs their Nvidia counterparts. Characters aren't even displaying properly in Alan Wake 2; the game is unplayable on Vega while it's great on the GTX 1000 series.
Sevastous-of-Caria@reddit
I think the Vega 56-specific bug was fixed? Or at least it affects both 10 series GPUs and other cards that don't support DX12 mesh shaders.
https://youtu.be/zhKPnYKcyK0?si=QUzZ815c4J2fzUqE
Radiant-Fly9738@reddit
Yeah, true that, but on the other hand there's getting new FSR features vs being left behind. I'd say it all depends on whether you keep your GPUs for a long time or not.
ShadowRomeo@reddit
The RTX 20 series, a 7-year-old GPU architecture, is still getting new features up to this year, and it's already been shown that RDNA 2 is more than capable of running a lite version of FSR 4 while still providing a decent image quality uplift.
There is simply no defense on AMD Radeon's part for flat out refusing to bring newer features to their most popular GPU architecture, let alone the basic day-1 game optimizations that would ensure future games run fine on it.
Instead, what these RDNA 2 GPU owners get is second-class-citizen treatment and no guarantee of future games working correctly, and they have to rely on third-party modders to get a proper fix.
Radiant-Fly9738@reddit
I'm one of those RDNA2 owners, so believe me, I feel the pain, and I wasn't defending AMD. I'm mad at their latest driver decision as it affects me a lot.
ChobhamArmour@reddit
This is absolutely fucking hilarious considering AMD actually is working on FSR4 for RDNA3 and below given the leaked INT8 version. Meanwhile Ampere users like myself were left without DLSS FG and are thankful to AMD for FSR FG.
Hayden247@reddit
Why are you so bent about FG? You got DLSS4 upscaling and ray reconstruction. RDNA2 is NOT getting FSR4 INT8, because AMD has cut driver support down to mere maintenance drivers, which historically has meant extremely limited support, basically just keeping the GPUs working and no more. Upscaling, in my opinion, is far more important than frame generation, especially since upscalers can also be used for native anti-aliasing, which for DLSS4 and FSR4 is superior to standard TAA. That's why people often think quality upscaling looks better to begin with: the algorithm is much better at temporal anti-aliasing than standard TAA.
And the original comment WAS talking about FSR4 INT8; that is the "lite" version of FSR4 running on RDNA2 that was mentioned. It's just that, as of now, it's an unofficial version modded in from a leak, and on RDNA2 it requires downgrading to drivers from 2023 to work properly. But yes, that was what was being talked about.
Qsand0@reddit
It's like lithium battery vs tubular 😂😂
ShadowRomeo@reddit
If AMD never reverses its stupid decision to put RDNA 2 into maintenance mode, I think it's more than fair that buyers weigh that aspect as well, especially future buyers who are going to buy used.
Right now, RDNA 2's used market value is about to plummet solely because of AMD's stupid decision.
Firefox72@reddit
Really comes down to regional pricing in the end.
Where I live the 9070 XT is 20% cheaper than the 5070 Ti, or even more if you want a decent model of the 5070 Ti. That makes it a no-brainer.
The 9070 is also just 5-7% more expensive than the 5070.
Plank_With_A_Nail_In@reddit
Always buy whatever card is the cheapest; the premium for "decent" models is never worth it.
Framed-Photo@reddit
Some of the bad models like the Ventus are loud as shit and run 10 degrees hotter or more than other comparable models.
I think paying an extra 50 bucks to get a card that's quiet and has features like a dual BIOS, while leaving tons of room to OC if you want, is worth it at this price point.
AIgoonermaxxing@reddit
I will say that almost every problem that comes from buying a cheap card (temperatures, fan noise, coil whine) can be greatly mitigated by undervolting it at no performance cost. Sure, you can do that on the higher-end cards too and reduce their temps and noise, but they're already going to be running pretty quiet, so it's not going to be as big an improvement as doing it on a cheap card.
That said, they are worth it if you do want to overclock and push power limits, but I think your average PC owner will be far more apprehensive about doing that than about undervolting.
Toojara@reddit
It can help, but often the problem is the VRAM running too hot. You can't undervolt the memory on many cards which means the options tend to be quite limited. It also tends to limit how much you can actually adjust the fans without cooking the memory.
Keulapaska@reddit
Looking at TPU data, I see no 50-series card with poor VRAM cooling, so even with lowered fan speeds they should be fine. Yeah, some Ampere cards had high VRAM temps, mainly the 3090 due to the backside modules, but in general VRAM temps are kind of a non-issue on most cards.
Toojara@reddit
Most of them seem to be good, but I barely see the cheaper models in their 50xx reviews. I'd expect to find a couple among the MSI Ventus, Gigabyte Windforce, Zotac or Inno3D Twin models. On the 5070 Ti for example:
https://www.techpowerup.com/review/msi-geforce-rtx-5070-ti-ventus-3x/40.html
The fans are running way louder than on any other card, clocking in at 40-odd dBA, and the memory is still running at 70C. Definitely not a problem as is, but put that in a less ideal case, try ramping the fans down to 30 dBA, and that'll be in the 100C range.
Keulapaska@reddit
And do you have data showing it'll increase that much from this sort of fan speed drop? Also, if the cooler/fans are crap, will it even be enough to cool the core on that card at 30 dBA?
I don't have a GDDR7 card, but I doubt it's much different from GDDR6X, and I do have a Ventus 4070 Ti. Sure, not stock fans; I zip-tied on 2 Noctua NF-A12s (due to horrible fan motor whine and to get better low-speed noise, as the Noctua fans are silent until 35-40%, not because of cooling performance), but it's at least somewhat comparable to that cooler type.
A quick 30-min AFK test on a TW:WH3 map at ~270-275W (stock core, but +1500 mem OC to try and emulate 4080 mem speed, or in case GDDR7 draws slightly more power) at 55% fan speed: memory was around ~82C with a max of 84C. Maxing out the fans, the memory temp goes down to ~72C.
Now let's undervolt, as that was the original point, to 0.9V and around 170-175W, still keeping the +1500 memory OC, at 50% fan speed. So we have 100W less heat radiating, and it results in ~72C memory temp, so a non-issue. Even at 30% fan speed, which is completely inaudible, it's ~82C memory temp, so that huge drop didn't really affect the temp much.
So yeah, memory temps are not really an issue.
Framed-Photo@reddit
On Nvidia cards specifically, the quality of the fans the card comes with matters way more because you cannot lower the speed below 30%. So sure undervolting can help, but if you have shitty fans then they're gonna keep being shitty even with that undervolt, and you'll hear them even at the minimum RPM.
Spending a little extra at the start gets you a MUCH better experience overall, and it's almost always worth it imo.
Now that being said, that's not me saying MSRP cards are bad, they're not always. It's me saying to look at reviews (techpowerup is a good source for thermal and noise data) and get a card that performs well.
If that means paying an extra 50 then go for it. If it means you aim for a specific MSRP model (Asus prime seems to be the best for nvidia this gen) then that's also great.
fiah84@reddit
The alternative to spending more on a card with better fans is to get your own fans and use those. I have an MSI Ventus 3X because it happened to be the cheapest I could get, and it so happens that you can really easily remove the shroud and fans on mine. So I just strapped on two 120mm fans; worked a treat. After my wife bought a 3D printer, I designed and printed some mounts and funnels to reverse the fans and blow the hot air straight out of the case, which works even better.
Framed-Photo@reddit
Sure, and that helps, but you can also do that with a better card and still get far better performance. It's not just the fans that are better; the heatsink on something like the Ventus has fewer heat pipes and is physically smaller in terms of cooling surface area.
Sadukar09@reddit
The Ventus models aren't even MSRP anymore unless they're on "special" here.
Their even shittier Shadow cards are MSRP.
At that point, just get another brand's better MSRP card - that is, if you can deal with their other problems.
Gigabyte's really bad this gen with fan-stop noise (and cracked PCBs in previous gens), PNY has cheap capacitor issues, and ASUS/Zotac have questionable warranties.
You're kind of limited to the FE (also hard to get, unobtainable for some cards, hard to repair, issues w/2nd PCIe board) at this point.
ASUS Prime cards tend to be decent if you can deal with their RMA shenanigans.
IANVS@reddit
What are the issues with PNY specifically?
Sadukar09@reddit
https://overclock3d.net/news/gpu-displays/geforce-rtx-5090-capacitor-explodes-with-heatsink-bending-force/
https://www.igorslab.de/en/cause-research-capacitor-defect-on-a-pny-rtx-5070-with-announcement-and-prior-warning/
IANVS@reddit
I see...thank you.
jtj5002@reddit
Some of the MSRP cards seriously cheap out in the worst places. The 5060 Ti and 5070 are known for extremely high VRM/PCB temps, and no MSRP cards other than the FE and ASUS Prime put thermal pads under the backplate.
Also, some of the AIB cards run about 10 degrees cooler for $100 more. That's about the difference between the best 360 AIO and a Peerless Assassin, and people gladly pay for that all day, along with $100+ for RGB reverse fans.
teutorix_aleria@reddit
This isn't true, and even if it were, there are differences in build quality and features like dual BIOS which can be useful.
Toojara@reddit
Yep, tends to come down to the individual card in question and local pricing. Many of the cheap models are good but as an example I wouldn't touch the cheap Acer or ASRock cards with a pole due to their fan curves.
fmjintervention@reddit
Nothing wrong with entry level ASRock cards in my experience. I had an ASRock B580 Challenger and it ran cool and quiet. The fan curve was a little aggressive from factory, but that's really just ASRock choosing to lean further towards the low temps side of the trade-off rather than prioritising low noise. There was nothing wrong with the actual cooler itself and with a 5 minute fan curve adjustment it ran much quieter and still stayed very cool, about 75C under full load.
crshbndct@reddit
I've got the cheapest model of 4060, and a "premium" 9070 XT.
The 4060 is loud even at idle. The 9070xt is silent at full load.
DominantDo@reddit
Even for the 5060ti 16gb?
maarcius@reddit
Just avoid the MSI Ventus. I have a 4060 Ti and it is very loud. I will upgrade it because of the noise, not because I need a stronger GPU. It is garbage.
DominantDo@reddit
What about gigabyte? I've heard about thermal putty issues or something along those lines
Individual-Sample713@reddit
Their Gaming X cards are great; I had a 1060 overclocked to 2080 MHz and the fans never went above 50%.
AMD718@reddit
This is bad blanket advice. Always do your research and buy the card that's the best match for your requirements and preferences. Sometimes that will be the cheapest model. Other times that model will be too noisy, too hot, or lacking in some other way.
danny12beje@reddit
That's absolutely not true.
Flaimbot@reddit
I think it's more about how loud they are and their visual design in this context. Other than that, you're right ofc.
mityboss@reddit
The issue with the AMD cards is that they almost never go on sale. I can find the RTX 5070 selling for $680 Canadian, and the cheapest 9070 I have seen was $800. So there is a considerable price difference, at least in Canada. The 9070 XT, on the other hand, has seen some decent price drops; last time I checked, it was going for $880, which is way below the 5070 Ti's price point ($1090). I say, if you have around $800 Canadian to spend, get the 9070 XT. If you have $600-$700 CAD to spend, try to score a 5070 for $680. Once the 9070's (non-XT) prices drop, the 5070 will be irrelevant.
crshbndct@reddit
The correct card to get always depends on regional pricing, and specific features.
fmjintervention@reddit
Yep, regional pricing is everything. E.g. the cheapest 9070 XT available in Australia right now is $1049 AUD, while the cheapest 5070 Ti is $1289. At those prices it's pretty hard to justify the 5070 Ti. You'd have to be really, really keen on a specific Nvidia feature for it to make sense.
5160_carbon_steel@reddit
There was a 9070 TUF going for $716 (511 USD) at CC not too long ago. I would've gotten one if I hadn't recently built my PC; I just didn't want the hassle of selling my existing 7800 XT.
mityboss@reddit
Looks like I missed that one :( I got my 5070 for $710 and honestly can't complain. I wish the 9070s went on sale more often, because I see a 5070 below $700 almost every week. Either way, I'm glad these prices are coming down. They were way above MSRP at launch.
5160_carbon_steel@reddit
I see your other response to me in my inbox, but it isn't showing up for some reason. Here was the thread on bapccanada, couldn't believe my eyes when I saw it.
Yeah, I'm glad that GPU prices are finally coming down. Sucks that RAM prices are skyrocketing now instead; the DDR5 I got with my $550 CPU/mobo/RAM bundle is now going for $240-300.
mityboss@reddit
9070 tuf for $716. That’s incredible if true. I haven’t seen anybody post it on r/bapcsalescanada
Aggravating-Dot132@reddit
The 5070 needs an MSRP of $450 to be reasonable. Same with the 9070 non-XT needing a $500 MSRP.
Blue-150@reddit
By 'most cases' do you instead mean a few edge cases? That seems to be more the reality.
dexteritycomponents@reddit
Sir, I play exclusively the latest and most unoptimized AAAA titles. And with those games, I can only play at 4K max with the highest-resolution textures, DLAA, and frame generation enabled.
Blue-150@reddit
Fair point, I didn't think of it like that lol
Plank_With_A_Nail_In@reddit
12GB is looking to be good enough for this generation for most gamers though.
Aggravating-Dot132@reddit
No.
The point is the chip itself is pretty strong, but when you try to activate all the fancy stuff, 12 GB of VRAM becomes a problem. Try Cyberpunk, for example: the card can handle path tracing, but after 20 minutes it will need a restart due to the VRAM buffer filling up.
I don't see any point in buying a 12 GB GPU these days, not for $550+.
firaristt@reddit
These things aren't judged by today's needs. My 3080 10GB was perfectly fine at 2K and is still mostly fine since I upgraded to 4K, but here and there I feel the VRAM is just not enough anymore. 12GB is not enough for a new card. That shaves a few years off the happy-usage duration.
inverseinternet@reddit
It's just got a shorter use-by date than it should have.
mr_tolkien@reddit
Yeah, the 9070 XT is the #1 seller in Japan, for example, because of pricing.
960be6dde311@reddit
I'm always buying NVIDIA GPUs regardless of performance and price. Better quality hardware, features, and drivers cost a premium and I'm okay with that.
yourdeath01@reddit
FG, RT, and even DLSS are superior on Nvidia; I don't see how saving maybe $50-$100 tops is worth it.
InevitableSherbert36@reddit
Yes, but only by a little—around 5% at 1440p according to TechPowerUp.
$150. The cheapest 9070 XT on PCPartPicker is $600; the cheapest 5070 Ti is $750.
KARMAAACS@reddit
People said this same stuff about RDNA2 and RDNA3, and look how it turned out: you got gimped support years down the line compared to the competition, inferior software technologies like the lack of ray reconstruction and upscaling (it took AMD years to catch up while NVIDIA owners were just using them), game performance was all over the place (some games used RDNA2 and 3 really well, like CoD, while in other games it was a whole tier of performance lower), and the drivers sucked for ages on RDNA3. The whole "you're saving money" thing by going AMD is kind of a myth. Also, don't get me started on the RTX 20 series vs RDNA1: RDNA1 aged poorly and got left behind because it doesn't support RT acceleration or have specialised HW support for upscaling. You were much better off buying a 2070 SUPER over a 5700 XT.
InevitableSherbert36@reddit
That's all valid. I'm just saying that the price difference is larger than what yourdeath01 stated.
lifestealsuck@reddit
Wow, the difference between the 9070 XT and the 9070 is minuscule.
starburstases@reddit
Yeah, from about 10% at full resolution, scaling down to about 5% with Performance mode upscaling. That gap will just about disappear with a power-unlocked BIOS.
ThermL@reddit
Well sure, but it's not apples to apples.
I can do all the same board power limit mods to the XT as I can to the baseline 9070, and then the XT would go right back to being 10% faster. The one thing I can't do on a 9070 compared to the XT is get my 4 CUs back, and that's why you pay the 50 extra dollars. (Or, actually, I can't even get a $550 9070 at Microcenter; they're weirdly all more expensive than the 9070 XTs at my location.)
Noreng@reddit
No, you can't.
The 9070 can be crossflashed with a 9070XT BIOS pretty easily, and all you'll notice is that the GPU is slightly faster and produces a lot more heat. You can also increase the power limit of the 9070 XT, either by shunt-modding the card, or by tricking the VRM controller with a softmod in Linux. The BIOS mod reports correct power draw, while the shunt mod and/or Linux mod does not report correct power draw.
The performance scaling isn't particularly great beyond 250W for Navi 48 regardless, so even though the 9070 XT is easily capable of hitting 400W in a lot of games under the normal 3450 MHz clock limit, you're not going to see a 10% performance improvement going from 304W to 400W.
While the 9070 has 1/8 of the WGPs per shader engine disabled, there are still 4 shader engines, 64MB of L3 cache, and a 256-bit memory bus at 20 GT/s. The fact of the matter is that the number of shader engines has a larger impact on gaming performance than WGPs/CUs.
Ashratt@reddit
I always run my XT @ 250 W, it's such a no-brainer.
Cross-checking with an FPS meter and my watt meter, it's just laughable what an additional 100 watts gives you (itsfuckingnothing.gif).
Noreng@reddit
It still scales; it's actually scaling better than Nvidia's 5080 over the same interval. The problem is that it's still well past the point of diminishing returns.
buildzoid@reddit
It would actually be even less of a gap after all the mods, because 9070s are sandbagged from the factory with their low power limit. 9070 XTs aren't.
starburstases@reddit
The 9070 has a lot more to give when going from 220 to 300W than the XT from 300 to 400W
techraito@reddit
Some 9070s could even be flashed with XT firmware lol
Extreme_Fondant_338@reddit
Nah, fuck AMD after the recent news.
And fuck RDNA 4 because of UDNA.
ExplodingFistz@reddit
Yup, the easy choice is the 5070 Ti. AMD will just drop support for the 9000 series after 3 years.
Mediocre-Award5044@reddit
It is nearly 2026; why are they still barely testing RT?
veryjerry0@reddit
Almost every game released today still doesn't have RT lol ... not everybody plays AAA games.
godfrey1@reddit
really need that 9070XT to play balatro and hades 2
venfare64@reddit
Don't forget Hollow Knight Silksong and late game Stardew Valley farm benchmark.
Elysium_nz@reddit
Let’s be real, RT is overhyped. I turned mine off because it’s actually annoying.
CatsAndCapybaras@reddit
It's because most of the time RT doesn't look any better.
galaxyhmrg@reddit
It's HWU, dude; they'll die on that hill.
redstej@reddit
Gaming performance is about the same. The price premium on the Nvidia card buys you CUDA. If you don't know what CUDA is and why you might need it, get the AMD card.
moofunk@reddit
Until you run into the people who bought an AMD card for gaming and are now trying to do generative AI or 3D rendering with it: they complain it doesn't work or is slow, and everyone tells them they should have bought an Nvidia card for that.
Jeffrey122@reddit
CUDA, better upscaling and frame gen image quality, more support for upscaling and frame gen, better power efficiency, better memory efficiency, better RT, much better PT.
Let's not forget all the other stuff.
myst01@reddit
As much as AMD shat their pants with the nonsense of shutting off RDNA 2 Windows drivers... if you actually need CUDA, you get a 5090 or better - it's serious work and an actual investment (unlike hardware bought to play games, which is just a purchase). Frame gen is one of the worst ill-conceived pieces of marketing bullcrap ever witnessed. Frame gen is just added input latency for anything that's not a cutscene.
For anything else, the 9070 XT is a perfectly fine card, especially under Linux.
AnechoidalChamber@reddit
17:07 ( https://youtu.be/uDligqbbPGs?t=1027 ) is why I'd still choose the 5070 Ti.
But that's only because I'm a cutting edge tech freak.
dorting@reddit
I've always said that the best card of this generation is the 9070: it's a beast that sometimes competes with the 5070 Ti while costing significantly less. The 5070, on the other hand, is never to be recommended unless you find a big discount; by comparison, it should cost a good 20% less to be able to compete.
Creative-Expert8086@reddit
Where I live (Singapore), 5070 << 9070 XT = 5070 Ti, while in China 5070 = 9070 XT << 5070 Ti in terms of pricing.
Liambp@reddit
I recently switched from AMD (7800 XT) to Nvidia (5070 Ti), mainly to check out Nvidia's features. To be honest, DLSS and multi frame gen are cool but haven't really impacted my gaming experience much. Path tracing still carries too much of an overhead to be usable. Overall I am happy enough with my 5070 Ti, but one thing I miss is AMD's Adrenalin software. Sure, Adrenalin is loud and pushy, but at least it feels like it is from this century, unlike Nvidia's hard-to-find control panel.
AbrocomaRegular3529@reddit
Path tracing doesn't carry too much overhead unless you only want to play at 200 fps on everything.
Indiana Jones and Cyberpunk 2077 with path tracing on and everything mostly ultra/high still get you 80-90 FPS with DLSS Balanced or Quality. 80 FPS in the worst case is super playable IMO.
The game looks too flat without path tracing.
I also upgraded, like yourself, from a 6800 XT (basically identical to the 7800 XT) to a 5070 Ti, and I replayed all the games I liked with path tracing ON, such as Alan Wake 2, Indiana Jones, and Cyberpunk. They look like entirely different games.
iDontSeedMyTorrents@reddit
Is there a reason you don't use the new Nvidia App?
Liambp@reddit
The only reason is that I never realised it had more functionality than it used to. The last time I used it, I still had to open the old control panel to make any detailed adjustments like enabling G-Sync. The new app seems to have a lot more features. Thank you for prompting me to try it again.
kuddlesworth9419@reddit
In the UK the prices are weird. The 9070 isn't much cheaper than the 9070 XT, so it just makes sense to go for the 9070 XT instead. If you don't want to pay silly money, the 5070 is the only card that isn't crazy expensive unless you go further down the stack.
noiserr@reddit
The 9070 non-XT is really the value king in this bracket if you can get good prices in your region. In Australia the 5070 Ti is 55% more expensive while offering 9% more performance.
heyyoudvd2@reddit
Here in Canada, the cheapest 9070 XT is $865 CAD ($617 USD), the cheapest 9070 is $800 CAD ($571 USD), and the cheapest 5070 Ti is $1090 CAD ($778 USD).
Easy decision.
AntiGrieferGames@reddit
After what happened with AMD soon dropping RDNA 1/2 driver support, I would rather get an Nvidia card at this point, especially for the longer driver support.
In some cases, with a good deal, you can get similar performance for the money.
ClerkProfessional803@reddit
The 9070 XT seems amazing, but do anticipate AMD putting it in maintenance mode 5 years from now. UDNA is coming. Also, the MSRP shenanigans at launch left a bad taste in some people's mouths.
Zeor_Dev@reddit
Comparing fake performance vs fake performance so developers can get lazier about optimizing games, which in turn leads to more flawed "GPUs" built to compensate for that laziness. A win for GPU manufacturers, a loss for gamers being cheated.
Griswo27@reddit
DLSS isn't fake performance.
From-UoM@reddit
That's not how you calculate the cost of upscaling at all.
Let's say a game runs 1440p at 60 fps; that's 16.67 ms. With DLSS 4K Quality (internal 1440p) it runs at 50 fps; that's 20 ms. So the cost of upscaling is 20 ms - 16.67 ms = 3.33 ms.
That's just an example
Nvidia lists the following for the 4080 (closest to the 5070 Ti) using FP16 at Performance upscaling:
4K - 1.50 ms, and 1080p - 0.38 ms
https://github.com/NVIDIA/DLSS/blob/main/doc/DLSS_Programming_Guide_Release.pdf
Page - 6
Now for the 9070 XT, AMD lists Performance mode (FP8) at:
4K - 1316 µs, or ~1.32 ms, and 1080p - 352 µs, or 0.352 ms
https://gpuopen.com/manuals/fidelityfx_sdk2/techniques/super-resolution-ml/#performance
So there you go.
The actual cost of DLSS (FP16) vs FSR (FP8).
DLSS is probably still FP16 to fully support older cards like the RTX 20 and 30 series.
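A tiny sketch of the frame-time math in that first example, using the same illustrative numbers (60 fps native, 50 fps with the upscaled output):

```python
# Upscaler cost = frame time with upscaling to the higher output resolution
# minus frame time at the same internal resolution without it.

def frame_time_ms(fps: float) -> float:
    """Convert frames per second to milliseconds per frame."""
    return 1000.0 / fps

native_1440p_ms = frame_time_ms(60)      # ~16.67 ms, 1440p native
dlss_4k_quality_ms = frame_time_ms(50)   # 20.00 ms, 4K Quality (1440p internal)

upscale_cost_ms = dlss_4k_quality_ms - native_1440p_ms
print(f"Estimated upscaler cost: {upscale_cost_ms:.2f} ms")  # ~3.33 ms
```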
kulind@reddit
Slow day for HUB, beating a dead horse.
Icynrvna@reddit
They do love to do vids regarding upscalers.
Antonis_32@reddit (OP)
TLDW:
A) Models compared:
i) MSI RTX 5070 Ti Ventus 3X
ii) Sapphire RX 9070XT Pulse
B) 22 Game Average, 1440p Ultra Settings:
RTX 5070 Ti:
Native TAA: 92 FPS, 1% 71 FPS
DLSS 4 Quality: 118 FPS, 1% 90 FPS
DLSS 4 Balanced: 128 FPS, 1% 96 FPS
DLSS 4 Performance: 137 FPS, 1% 102 FPS
RX 9070 XT:
Native TAA: 88 FPS, 1% 69 FPS
FSR 4 Quality: 116 FPS, 1% 88 FPS
FSR 4 Balanced: 124 FPS, 1% 95 FPS
FSR 4 Performance: 134 FPS, 1% 101 FPS
RX 9070:
Native TAA: 80 FPS, 1% 62 FPS
FSR 4 Quality: 108 FPS, 1% 83 FPS
FSR 4 Balanced: 116 FPS, 1% 88 FPS
FSR 4 Performance: 127 FPS, 1% 95 FPS
RTX 5070:
Native TAA: 72 FPS, 1% 57 FPS
DLSS 4 Quality: 95 FPS, 1% 74 FPS
DLSS 4 Balanced: 104 FPS, 1% 80 FPS
DLSS 4 Performance: 113 FPS, 1% 86 FPS
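For reference, here is what those averages imply for the 5070 Ti's lead over the 9070 XT at each setting (a small sketch using only the numbers above; note these are simple FPS averages, not the per-game geomean quoted elsewhere in the thread):

```python
# Convert the quoted average FPS figures into the 5070 Ti vs 9070 XT gap per mode.

rtx_5070_ti = {"Native TAA": 92, "Quality": 118, "Balanced": 128, "Performance": 137}
rx_9070_xt  = {"Native TAA": 88, "Quality": 116, "Balanced": 124, "Performance": 134}

for mode in rtx_5070_ti:
    gap_pct = (rtx_5070_ti[mode] / rx_9070_xt[mode] - 1) * 100
    print(f"{mode:<12} 5070 Ti ahead by {gap_pct:.1f}%")
# Native ~4.5%, Quality ~1.7%, Balanced ~3.2%, Performance ~2.2% on these averages.
```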