How come a GPU with upgradeable VRAM isn't a thing yet?
Posted by BrawlX@reddit | hardware | View on Reddit | 41 comments
Other threads have already answered why they likely won't be a mainstay (cost, compatibility, demand, etc.), but that doesn't explain why no one has attempted to do one yet.
Feels like this makes sense for the current market, as VRAM is becoming more and more important for a game to reach higher frame rates and improve stability.
ecktt@reddit
It has actually. My Matrox Mystique was VRAM upgradeable.
Also, ASUS recently changed the DDR5 slots on their premium boards, claiming that the empty ones act as antennas that interfere with signal integrity. VRAM already runs at very high speeds, so adding upgradable memory slots would compromise the achievable memory speeds.
Strazdas1@reddit
well, if you use CAMM, then you'd have one always-occupied slot. you could just replace it with a different module.
surf_greatriver_v4@reddit
You will be sacrificing a fair bit of performance to be able to do so, not to mention screw with the form factor and cooling, AND increase costs
Strazdas1@reddit
unless you do CAMM memory? then most of that is not an issue?
Icarus_Toast@reddit
Don't forget the most important part: GPU manufacturers don't want to let up the reins on their tiered pricing model. They literally have no incentive to do this.
COMPUTER1313@reddit
"8GB VRAM forever"
COMPUTER1313@reddit
"Hey want to buy a RTX 3070 Ti with 8GB VRAM?
Omotai@reddit
There used to be graphics cards with upgradable memory, quite a long time ago (think the 90s).
It's really not possible with modern cards because signal integrity can't be maintained with socketed memory at GDDR speeds. Regular system memory is much, much slower than video memory for this reason.
jigsaw1024@reddit
yeah, I remember adding DIP chips to my EGA card to take it from 64KB to 128KB! Wild times!
cowbutt6@reddit
Even regular system memory performance is running into barriers caused by modular memory. https://en.m.wikipedia.org/wiki/CAMM_(memory_module) may alleviate that, in the short term, at least.
Just_Maintenance@reddit
Would be kinda harsh to slot 12 DIMMs on an RTX 4090 and still lose half the performance.
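Back-of-envelope check of that comparison (a rough sketch assuming DDR5-6400 UDIMMs at 64 bits each and the 4090's stock 384-bit, 21 Gbps GDDR6X; all figures approximate):

```python
# Rough bandwidth comparison: 12 socketed DDR5 DIMMs vs. an RTX 4090's soldered GDDR6X.
# Assumes DDR5-6400 UDIMMs (64-bit each) and stock 21 Gbps GDDR6X on a 384-bit bus.

dimm_bus_bits = 64      # one UDIMM: 2x 32-bit subchannels
dimm_rate_gtps = 6.4    # DDR5-6400
dimms = 12

gddr_bus_bits = 384     # RTX 4090 memory interface width
gddr_rate_gbps = 21     # GDDR6X per-pin data rate

dimm_bw = dimms * dimm_bus_bits * dimm_rate_gtps / 8   # GB/s
gddr_bw = gddr_bus_bits * gddr_rate_gbps / 8            # GB/s

print(f"12x DDR5-6400 UDIMMs: ~{dimm_bw:.0f} GB/s")
print(f"RTX 4090 GDDR6X:      ~{gddr_bw:.0f} GB/s")
print(f"Ratio: {dimm_bw / gddr_bw:.0%}")
```

Even a full dozen of the fastest common DIMMs lands at roughly 60% of the 4090's soldered bandwidth, before counting the signal-integrity penalty of the sockets themselves.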
Versorgungsposten@reddit
Tbh I think we'll also soon see fixed onboard RAM for CPUs, like Apple did. It's just way more efficient to pack the GPU and VRAM, or the CPU and RAM, directly together, rather than adding all the overhead (like increased trace distance) that comes with modular systems.
balrog687@reddit
Just like a console. I'm fine with it as long as the hardware lasts a decade.
Kyrond@reddit
The issue I worry about is artificial pricing segmentation.
Just like Macs and phones, RAM might come at extreme markup because you have no other option.
jakejakeson123@reddit
Intel's Lunar Lake has on-package RAM, but they said they won't do it again going forward.
Versorgungsposten@reddit
Intel is not quite the measure of state-of-the-art that it used to be.
Platinumjsi@reddit
My S3 ViRGE from 1998 had user-upgradable RAM.
JohnDoe_CA@reddit
The bit rate per pin has gone up a little bit in the past 26 years.
lifestealsuck@reddit
I think it's possible, but it would add cost to the board and the VRAM module. So much that a GPU plus an 8GB VRAM module could end up costing more than a GPU with 16GB of VRAM soldered on, like today.
And you'd get less performance because of latency.
JohnDoe_CA@reddit
Latency would be such a small impact on performance. It’s all about signal integrity. And the huge area and power cost to work around it.
UnsaidRnD@reddit
The same reason we don't have GPUs with longer viability. Nvidia and AMD are huge moneymaking machines that must sell each and every one of us something every 3-5 years, if not more often.
I would love to live in a world where both these companies downscale and give us a new GPU model every 10 years, and game/app devs just milk the hell out of it, like console generations.
JohnDoe_CA@reddit
Ah, the benefit of ignorance! It makes life so much simpler.
Google “PCB back drilling”. Learn that it's a technique to remove tiny stubs in the pathway between the GPU and the GDDR pins, reducing reflections that would otherwise fuck up the data eye. And now imagine that you'd go through all that effort and then… insert a freaking connector on that signal path.
We're talking 20+ Gbps bit rate per pin. That's manageable in serial links that have humongous SERDESes and complex FEC schemes, but you can't do that on wide buses and chips that are area and power critical.
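To put the wide-bus point in rough numbers (pin counts and rates are approximate, and the PCIe comparison is mine, added for scale):

```python
# Sketch of why 20+ Gbps per pin is a different problem on a wide parallel memory bus
# than on a narrow serial link. Figures are approximate.

gddr_data_pins = 384    # e.g. an RTX 4090-class memory interface
gddr_rate_gbps = 21     # GDDR6X per-pin data rate

pcie_lanes = 16         # PCIe 5.0 x16 link, lanes per direction
pcie_rate_gtps = 32     # PCIe 5.0 per-lane rate

gddr_aggregate = gddr_data_pins * gddr_rate_gbps   # Gbps across the whole bus
pcie_aggregate = pcie_lanes * pcie_rate_gtps       # Gbps per direction

print(f"GDDR6X bus:   {gddr_data_pins} pins at {gddr_rate_gbps} Gbps "
      f"-> ~{gddr_aggregate / 8:.0f} GB/s, minimal per-pin circuitry budget")
print(f"PCIe 5.0 x16: {pcie_lanes} lanes at {pcie_rate_gtps} GT/s "
      f"-> ~{pcie_aggregate / 8:.0f} GB/s, with heavy per-lane SERDES/equalization")
```

A serial link can afford per-lane equalization and FEC because there are only 16 lanes; spending that kind of area and power on hundreds of memory pins isn't practical, which is why the channel itself has to be clean, short, and connector-free.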
Trey_An7722@reddit
Geometry doesn't allow for it. At GDDR speeds the DRAM has to be right next to the GPU and all signal lines have to be clean and short. No sockets, no optional slots, etc.
Plus, these things aren't meant to be disassembled. And no one is willing to spend $$$ to certify the chip against all the DRAMs on the market.
monocasa@reddit
CAMM should allow for GDDR modules.
Just_Maintenance@reddit
The fastest CAMM is expected to get for now is barely 8.5 GT/s.
Modern GPUs basically start at 16 GT/s, high-end ones run at 21 GT/s, and the next generation with GDDR7 is expected to start at 32 GT/s.
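At a fixed bus width, that rate gap translates directly into bandwidth. A rough sketch (the 256-bit width is just an illustrative mid-range figure, not tied to any specific card):

```python
# What the per-pin data-rate gap means in bandwidth terms at a fixed 256-bit bus width.
# Rates are taken from the comment above; the bus width is illustrative only.

bus_bits = 256

rates_gtps = {
    "LPDDR5X on CAMM (~max for now)": 8.5,
    "GDDR6 (entry)": 16,
    "GDDR6X (high end)": 21,
    "GDDR7 (expected)": 32,
}

for name, rate in rates_gtps.items():
    bw = bus_bits * rate / 8  # GB/s
    print(f"{name:32s} {bw:6.0f} GB/s")
```

Even at the same bus width, today's CAMM rates leave roughly half to three-quarters of the bandwidth on the table.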
monocasa@reddit
The 8.5GT/s has more to do with the LPDDR on those sticks rather than the CAMM form factor.
III-V@reddit
Even if this were the case (no idea), it would be a mess to cool.
Cj09bruno@reddit
Not at all, it would be the same as normal GDDR; you'd just lose a few mm of height.
simo402@reddit
Because it adds cost and lowers performance. The future is less and less modular.
sascharobi@reddit
Unfortunately true.
__some__guy@reddit
Form factor, signal integrity, and Nvidia prefers if you buy their server cards for 10x the price (2x the VRAM).
icrazyowl@reddit
By the time you need more VRAM, your GPU is usually too weak to use it.
Firefox72@reddit
You do understand how VRAM is put onto a GPU board right?
BrawlX@reddit (OP)
Was thinking more of a modular based solution, like with those modular smartphones from a while back.
Unlikely-Today-3501@reddit
Smartphones? They were never modular.
itsapotatosalad@reddit
Fairphone
Firefox72@reddit
That is never gonna happen. And we all know why.
First of all, it's probably not worth the design changes to allow it.
Secondly, why would Nvidia offer you a VRAM upgrade when they can just limit high VRAM to expensive cards or future generations and get you that way?
Nicholas-Steel@reddit
The obvious implication they're making is switching to a socket approach for the VRAM. This would significantly complicate things if you want to maintain similar performance to the current approach.
gold_rush_doom@reddit
I think there was a one off card where this has happened, I remember seeing it on Lazy Game Reviews or Linus Tech Tips.
opensrcdev@reddit
Tom's Hardware had an article that talked about using PCIe-attached memory or SSDs to augment VRAM. Interesting concept. https://www.tomshardware.com/pc-components/gpus/gpus-get-a-boost-from-pcie-attached-memory-that-boosts-capacity-and-delivers-double-digit-nanosecond-latency-ssds-can-also-be-used-to-expand-gpu-memory-capacity-via-panmnesias-cxl-ip
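Worth noting that a PCIe/CXL-attached pool is a capacity tier rather than a bandwidth substitute. A rough sketch of the gap, assuming a 4090-class card and a PCIe 5.0 x16 link (figures approximate, not taken from the article):

```python
# Rough sketch of why PCIe/CXL-attached memory extends capacity rather than
# replacing local VRAM bandwidth. Figures are approximate.

vram_bw_gbs = 1008      # e.g. 384-bit GDDR6X at 21 Gbps (RTX 4090 class)
pcie5_x16_gbs = 64      # PCIe 5.0 x16, per direction, before protocol overhead

print(f"Local VRAM:         ~{vram_bw_gbs} GB/s")
print(f"PCIe 5.0 x16 (CXL): ~{pcie5_x16_gbs} GB/s per direction")
print(f"Expansion pool is roughly {vram_bw_gbs / pcie5_x16_gbs:.0f}x slower to stream from")
```

So this kind of expansion works as an extra tier for data that doesn't fit in VRAM, not as a drop-in substitute for it.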
calcium@reddit
You just explained why no one has done it:
If you think this needs to change, form a company to go do it. Show us all how we’re so wrong.