[Videocardz] NVIDIA GeForce RTX 5060 Ti and RTX 5060 may get 9GB GDDR7 variants with 96-bit bus
Posted by NeroClaudius199907@reddit | hardware | View on Reddit | 86 comments
trmetroidmaniac@reddit
These things are already bandwidth starved, these variants would be awful just like 3060 8GB...
the__storm@reddit
Same memory bandwidth as a 980 Ti lmao
Shitty_Human_Being@reddit
980 Ti was amazing though, 11 years ago.
Dr_Cunning_Linguist@reddit
Didn’t the 980 Ti have a 256bit bus?
Shitty_Human_Being@reddit
384-bit I think. It's been a while.
soggybiscuit93@reddit
With slower memory
asaprockok@reddit
Isn't GDDR7 faster anyway? Compared to GDDR6/6X on a 128-bit bus, a 96-bit GDDR7 bus would still be faster.
MonoShadow@reddit
The 5060 Ti already uses GDDR7, on a 128-bit bus.
asaprockok@reddit
Ah, I see, I commented before reading the rumor article. To be honest, I don't think a 25% reduction will be that much of a problem, since most of the target users for this SKU are gamers and there are a lot of better workstation GPU options on the market.
KuroNanashi@reddit
It's a fairly sizeable reduction to be fair; the resulting ~336 GB/s is somewhat anemic for a card like the 5060 Ti. The example they posed is actually fairly interesting, because the 3060 8GB went from 360 > 240 with most of the other specs the same, and it performed 15% slower on average, sometimes up to 30% in bandwidth-limited scenarios, as you might expect.
For a card that's already well reviewed, seeing a version with a gig of extra memory, assuming it's a nice bump, only to discover it performs worse feels disingenuous.
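The ~336 GB/s figure and the 3060 comparison above can be sanity-checked with quick arithmetic; a minimal sketch, using the bus widths and data rates quoted in the thread (rumored figures, not confirmed specs):

```python
def bandwidth_gbs(bus_bits: int, gtps: float) -> float:
    # Peak theoretical bandwidth: (bus width in bits / 8 bits per byte) * data rate in GT/s
    return bus_bits / 8 * gtps

print(bandwidth_gbs(128, 28))  # 5060 Ti as shipped: 448.0 GB/s
print(bandwidth_gbs(96, 28))   # same 28 GT/s memory on a 96-bit bus: 336.0 GB/s

# The 3060 8GB example: 360 -> 240 GB/s was a ~33% cut,
# larger than the 25% bus-width cut discussed here.
print(1 - 240 / 360)  # ≈ 0.333
```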
Ok-Parfait-9856@reddit
The memory will be clocked at 36 Gbps, which will make up for the smaller bus
WuWaCamellya@reddit
So does the base 5060, only the 5050 is on G6.
Vb_33@reddit
5060 is not bandwidth starved.
vegetable__lasagne@reddit
Is the 5060 actually starved? It performs similarly to a 4060 Ti but has 55% more bandwidth. Shrinking to 96 bit drops it 16% more.
kingwhocares@reddit
RTX 5060 Ti 8GB vs 16GB: very little performance impact for that.
Noreng@reddit
The 4060 Ti has significantly slower memory than a 5060 Ti, and it's not exhibiting signs of bandwidth-starvation.
StaysAwakeAllWeek@reddit
It's going from GDDR7 to GDDR7. The bandwidth is marginally higher, the capacity is increased by 1GB and the production cost is reduced.
Literally nothing to complain about.
hackenclaw@reddit
This is untrue lol.
Try comparing the 4060 Ti vs the 5060.
steve09089@reddit
That 1GB VRAM is truly going to open new frontiers
Noreng@reddit
It will have the "funny" effect of increasing performance in games that are VRAM-starved, and reducing performance in games which aren't.
krilltucky@reddit
Can you explain how it would reduce performance? Coz of the low bus width?
Noreng@reddit
The standard 5060 has a 128-bit GDDR7 bus operating at 28 GT/s.
This "upgrade" seems to be a 96-bit GDDR7 bus operating at 36 GT/s. Which means less bandwidth.
Ok-Parfait-9856@reddit
Isn’t the bandwidth about the same despite the smaller bus, given the faster memory clocks?
Noreng@reddit
It would need to be 37.33 GT/s for the narrower bus card.
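The 37.33 GT/s figure follows directly from the numbers in this exchange; a quick check, assuming the rumored specs (128-bit at 28 GT/s for the standard card, 96-bit at 36 GT/s for the variant):

```python
def bandwidth_gbs(bus_bits: int, gtps: float) -> float:
    # Peak theoretical bandwidth: (bus width in bits / 8) * data rate in GT/s
    return bus_bits / 8 * gtps

standard = bandwidth_gbs(128, 28)  # 448.0 GB/s
variant = bandwidth_gbs(96, 36)    # 432.0 GB/s -- slightly less, despite faster memory

# Data rate the 96-bit card would need to match the 128-bit card:
needed = standard * 8 / 96
print(needed)  # ≈ 37.33 GT/s
```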
derI067@reddit
didn’t think i’d see a double-digit bit bus on a gpu in the big 2026 but here we are
techraito@reddit
Perfect for Spiderman 2! It uses somewhere between 9-10GB of VRAM even on 1080p with ultra textures! It'll perform slightly better than the 8GB cards, but will still occasionally dip to 10-15 frames.
TurtleCrusher@reddit
Very rarely does my 6800XT go over 9GB of VRAM at 4K. It is the right call.
hackenclaw@reddit
Three memory chips seem to be cheaper than four memory chips now; that 1 GB is just a positive side effect.
The real goal is to cut memory chips.
I won't be surprised if we see a 128-bit 12GB 5070, or an 18GB 5070 Ti or 5080.
TemptedTemplar@reddit
Those were on the table up until a few months ago.
https://videocardz.com/newz/nvidia-geforce-rtx-50-super-refresh-faces-uncertainty-amid-reports-of-3gb-gddr7-memory-shortage
The "delay" might push them too close to a 60 series launch, or they may keep them around and push everything else back.
vegetable__lasagne@reddit
It could save a couple watts, in which case I'd like to see a low profile 75W version or a fanless regular sized card. It'll probably perform similar to a 4060.
Plenty_Demand8904@reddit
I would love to see passively cooled cards make a comeback
pacoLL3@reddit
It genuinely will on Reddit, considering how utterly insane this place is about VRAM.
Vb_33@reddit
1GB does make a difference, hell I wish the 3080 would have had 11GB vs 10GB.
TheOtherWhiteMeat@reddit
Soon: 9.1GB variant with a 72-bit bus.
mezuki92@reddit
How much will the price increase for an extra 1GB of VRAM?
Die4Ever@reddit
this layout is probably cheaper not more expensive
3 chips of VRAM instead of 4 chips, but higher density
and slightly higher yields due to disabled silicon of the memory bus
batter159@reddit
3060ti had a 256-bit bus.
Plenty_Demand8904@reddit
and?
Vb_33@reddit
570 had a 320bit bus and a 520mm² chip.
JustHereForCatss@reddit
I love that NVIDIA treats the bus like limbo, how low can it go?
frostygrin@reddit
And people will be using these cards to run games with DLSS and FG, starting from ~ 720p 50fps. It's not the future I anticipated. :)
VaultBoy636@reddit
Can't wait for the 32bit clamshell RTX 8050ti
Caddy666@reddit
back to the isa bus.
phylter99@reddit
Pin for pin compatible with the 486DX.
neshi3@reddit
32-bit is for datacenter GPUs... we normal consumers will get 8 or 16-bit chips /s
wusurspaghettipolicy@reddit
Charlie : I can go lower
rstune@reddit
They got Barbados Slim as VP of engineering
WhoTheHeckKnowsWhy@reddit
and we are all Hermes, standing back in impotent bewilderment at his audacity.
260X@reddit
I'm old enough to remember when the GTX 260 came with a 448-bit wide bus.
Then Fermi cut things down to 256-bit, though it didn't matter much because Nvidia's memory controllers couldn't keep up with the then brand-new GDDR5, for some bizarre reason.
Kepler initially shaved things down to 192-bit, which was mostly fine since the controllers finally caught up with GDDR5, though the GTX760 went back to 256-bit.
Then we got a 128-bit GTX960 but it didn't matter in practice because of delta color compression.
Pascal went back to 192-bit and the same deal with Turing and Ampere.
Then we got the 128-bit RTX 4060.
And now, we are getting a 96-bit RTX 5060.
djlemma@reddit
Anybody remember the R9 Fury X? Kind of a flop of a card but it had a 4096 bit memory bus.
Dangerman1337@reddit
The 600 series, with the 680 being a 256-bit, ~300mm²-ish die, was a bad sign IMV. I mean, the 700 series was a Kepler refresh with the 780 and 780 Ti being true high-end cards, but the 680 and 670 were a sign of Nvidia thinking they could get away with it.
railven@reddit
And they did get away with it. The 700 series wasn't so much a refresh as the rest of the lineup NV had kept for enterprise, because AMD couldn't even beat the 560-tier successor with their top GPU.
As another poster said, specs don't matter when NV's Civic-tier engine runs circles around AMD's Ferrari-tier engine.
NV has been getting away with it, and as AMD continues to use more advanced nodes only to lose, NV will keep getting away with it.
260X@reddit
Nvidia evened things out with the “dual-core” GTX 690 and the Titan OG a.k.a "Big Kepler."
Besides, Kepler was a huge departure from Tesla and Fermi of yore, with a massive bump in shader cores (the GTX 680 had exactly 3x the core count of the GTX 580) and the “hot clock” philosophy dropped.
It was a bold yet conservative and calculated move, and one could argue that Nvidia used Kepler as a stepping stone for the forthcoming Maxwell.
Noreng@reddit
The GTX 680 was much more comparable to a GTX 560 Ti than a 580, since the SMX in Kepler was based on the revised GF104/6/7 SMs rather than the GF100 SMs.
Noreng@reddit
If you're old enough to remember the GTX 260 with a 448-bit bus, you might be old enough to remember the 128-bit 6600 GT, or even the 128-bit Geforce4 Ti4800.
The GTX 260 was kind of an outlier in the grand scheme of things. The fact that the HD 4870 could beat a GTX 260 with literally half the die size and bus width should be proof enough that the GT200 was an extremely bloated design.
The reason for Fermi cutting down the memory bus size was because of GDDR5's increased signalling requirements making PCB routing more difficult. Even the bigger GF100 went down to a 384-bit bus.
The GTX 960 was occasionally slower than the GTX 760, so the memory bus cutback in that scenario did actually matter. Maxwell relied on larger caches to reduce memory traffic, not delta color compression (which was added with Pascal).
Turing was on an old node when it launched (TSMC 16nm with reticle limit increase), that's why the memory bus width was kept up despite adding GDDR6. Ampere was similarly on an old and unused node (literally nobody else wanted Samsung 8nm at the time). The same argument could be said for Blackwell, but there's a huge silicon shortage.
260X@reddit
Yes, I am, and I do remember the 6600 and 7600 GTs. But I would never support technological regression.
After all, would you be willing to go back to quad-cores, if not dual-cores with HT?
The HD 4870 was on GDDR5. It was a cutting-edge product. The GT200-toting GTX 260 had to make do with GDDR3.
In fact, the 448-bit GTX260 had slightly lower bandwidth (112 GB/s) than the HD 4870 (115 GB/s).
Nvidia claimed that Maxwell had an up to 8:1 compression ratio.
Noreng@reddit
You're too hung up on marketing numbers. Memory bus width isn't a design goal, it's a means to an end. A "128-bit" GDDR7 bus has 176 data lines, which is practically as many data lines as the 192-bit GDDR6 bus of an RTX 4070.
The same thing applies to core counts. If Intel were to release a 20 GHz Arrow Lake superchip, but with only 1/4th the number of cores, I would buy it in a heartbeat, as it would be a ridiculous upgrade.
Yes, the HD 4870 used GDDR5 to bridge the memory bandwidth gap, but the difference in die size was still on the order of 2x.
I misremembered about delta color compression; it was introduced with Fermi. Regardless, the difference in memory bandwidth requirements was driven by cache size.
NeroClaudius199907@reddit (OP)
Have you said thank you once?
HookLeg@reddit
I’m holding out for a Voodoo refresh.
Caddy666@reddit
might be along sooner than you think....
https://github.com/fayalalebrun/SpinalVoodoo
fastgtr14@reddit
Crying 😭 in nostalgia
ekvq@reddit
As someone with a ~~hobbled~~ LHR 3080 with 10GB, I can say from experience that this won’t do much to fix VRAM problems
TheChosenMuck@reddit
i guess they wanna sell their stock of overpriced memory before the price drops
DemoEvolved@reddit
wtf is the point of 1gb of ram?
ballmot@reddit
...But isn't this tier of GPU already bandwidth starved? So now they plan on advertising the extra 1 GB VRAM in one hand while stabbing their customers in the back with the other. Fantastic.
Igor369@reddit
As long as the price is good enough.
Grouchy_Advantage739@reddit
I’m surprised they haven’t gone with 8.5gb with the last 0.5 being slower DDR4 memory, that’d really bring back fond memories of the GTX 970.
Tgrove88@reddit
I remember I got 2 settlement checks for my 970 SLI after they got caught lying about that 😂😂😂
SilverKnightOfMagic@reddit
Remember when Nvidia had two different versions of the 4070? One with GDDR6X and one with GDDR6. Same price.
Flimsy_Swordfish_415@reddit
can we stop treating videocardz garbage as a fact?
AfterIssue6816@reddit
Oh my god, impressive 🖕
hamatehllama@reddit
These rumors don't make much sense. The cost of 9 gigs of VRAM is higher than what Nvidia saves on binning chips with one fewer channel.
soggybiscuit93@reddit
Are 3x 3GB chips more expensive than 4x 2GB chips? By how much?
Kryohi@reddit
It isn't.
timfountain4444@reddit
What is the point of one more GB of RAM? I'm really surprised that NV didn't go the other way and start selling them with 6GB, given how much they are carping on about DRAM costs....
gomurifle@reddit
It's all about efficiency. How much performance can be squeezed out of a paper-thin bus and a dollop of memory.
nonaveris@reddit
Scraping the bottom of the barrel there?
Paed0philic_Jyu@reddit
Let's see it run DLSS5.
Beneficial_Common683@reddit
Remember GTX 970 3.5GB Vram ?
Hour_Firefighter_707@reddit
I shared the Videocardz article in this sub when the rumors of the 9GB 5050 first popped up. I argued that it was not too bad: the VRAM amount was going up and the memory bandwidth was also slightly higher. At the same $250, I'd rather have 9GB of G7 than 8GB of G6.
I have nothing but fire breathing rage for this. Absolutely nothing they can say will make this digestible. What the fuck?
ChaoticCake187@reddit
Do we know the approximate price of 24Gb G7 modules? I'm wondering if 3x24Gb is actually cheaper than 4x16Gb.
Ruzhyo04@reddit
You wanted more vram? You got it... *monkey paw curls*
Aggravating-Dot132@reddit
Sounds like a black mirror episode, lol