Is it possible to extend VRAM or even modify a GPU with extra components?
Posted by 1vertical@reddit | buildapc | View on Reddit | 7 comments
Say you have two graphics cards and one is busted, but most of its electronic components still work. Would it be possible to extend the VRAM, for example (even if it looks and is implemented in a hacky way), with these extra components?
I've always thought it's such a waste to chuck away a GPU just because some of its components don't work. I also imagine taking many GPUs apart and building one gigantic circuit in a room: one cabinet filled with memory chips, another filled with processing chips, another cabinet with other components, all connected via cables and such leading back to the computer.
I mean, a GPU is kind of a computer on its own if you think about it, plugged into the main one with its own memory, power unit and processor. It would be nice to slot in extra RAM in your GPU. It's 2023 for crying out loud.
Bluedot55@reddit
So, the reason you can't do that, for memory, is that it has to transmit a crazy amount of data. This means the transmission needs to run at a really high frequency, and a long run results in signal degradation, which makes the data useless unless you clock it lower.
So basically, to get high-speed memory, it needs to be right next to the die. A socket also dramatically degrades signal integrity, not to mention adds cost. So we're kinda stuck here.
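To put rough numbers on the distance problem, here's a back-of-the-envelope sketch (illustrative figures, not a signal-integrity simulation) comparing one-way trace delay to a GDDR6 bit time:

```python
# Rough arithmetic only: real designs care about skew, reflections, and
# length matching, not just raw delay. Figures below are ballpark values.

DATA_RATE_GBPS = 16          # typical GDDR6 per-pin data rate
PROPAGATION_CM_PER_NS = 15   # rough signal speed in a PCB trace (~half of c)

bit_time_ps = 1e3 / DATA_RATE_GBPS  # one bit time in picoseconds (62.5 ps)

def delay_ps(trace_cm):
    """One-way propagation delay for a trace of the given length."""
    return trace_cm / PROPAGATION_CM_PER_NS * 1e3

for cm in (2, 10, 30):
    print(f"{cm:>3} cm run: {delay_ps(cm):7.1f} ps one-way "
          f"vs {bit_time_ps:.1f} ps per bit")
```

Even a couple of centimeters spans multiple bit times, which is why every trace in the bus has to be length-matched to tiny tolerances; a cable or socket run makes that hopeless at these speeds.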
A skilled tech can actually remove the actual GPU and memory chips and transfer them from one board to another, but the number of people who can do that, and have the gear to do it, is limited.
If you want a prime example, look at laptops. You can buy a laptop with LPDDR5-7200 soldered, or you can get a laptop with socketed DDR5-5400.
So, would you take a GPU that's 20% slower and 10% more expensive just so you could replace parts?
Opening_Initial6323@reddit
You are wrong: there are people who have doubled the VRAM with soldering and programming.
Pleasant_Hawk_9699@reddit
"A skilled tech can actually remove the actual GPU and memory chips and transfer them from one board to another, but the number of people who can do that, and have the gear to do it, is limited."
deleted_by_reddit@reddit
The way you're describing it is not possible, no.
There are people out there redneck engineering GPUs to increase their VRAM by soldering on more or larger-capacity memory chips, but it's only possible for the most technically savvy folks. It's completely unrealistic for most people to do.
Elianor_tijo@reddit
That's a loaded question. Short answer: no.
Longer answer:
For VRAM, it is possible, but it requires specialized tools. There are hundreds of solder balls on those chips; this needs machines meant for the job and can't be done without the proper equipment. In addition, every time you de-solder and solder new memory, there's a risk the chips won't be perfectly aligned, and that will cause issues. You also have to make sure the memory chips are of the right type for the GPU (the actual chip; the entire assembly is a video card, even if we often refer to it as a GPU) to handle. That means not all memory chips will work: if the card uses Micron GDDR6, you'd better use Micron GDDR6. That basically leaves you with chips of the same generation in a higher capacity.
Finally, the vBIOS may need some modding to work properly. That's involved, but it's doable and has been done before.
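The constraints above can be sketched as a checklist. Every field name here is hypothetical, purely to show the kinds of things modders verify before attempting a swap:

```python
# Hypothetical compatibility checklist for a VRAM-doubling mod.
# All dict keys and values are made up for illustration.

def swap_looks_feasible(card, new_chip):
    """Return (ok, per-check detail) for a proposed memory chip swap."""
    checks = {
        "same memory generation":   card["mem_type"] == new_chip["mem_type"],
        "same vendor (safest bet)": card["vendor"] == new_chip["vendor"],
        "GPU can address density":  new_chip["gbit"] in card["supported_gbit"],
        "vBIOS strap/mod exists":   new_chip["gbit"] in card["vbios_gbit"],
    }
    return all(checks.values()), checks

# Example: a GDDR6 card moving from 8 Gbit to 16 Gbit chips of the same make
card = {"mem_type": "GDDR6", "vendor": "Micron",
        "supported_gbit": (8, 16), "vbios_gbit": (8, 16)}
chip = {"mem_type": "GDDR6", "vendor": "Micron", "gbit": 16}

ok, detail = swap_looks_feasible(card, chip)
```

If any single check fails, the mod is a brick waiting to happen, which is why these projects are limited to cards the modding community has already mapped out.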
Now, assuming all that is covered, there is the issue of whether the GPU can handle the extra memory. That's where the bus width comes in. Think of the memory capacity as the overall number of cars you can fit on a road; the memory bus is the number of lanes on that road. If the bus is too narrow (a lower number of bits), you won't be able to transfer data to and from the VRAM fast enough to take advantage of the extra capacity. For a current example, look no further than the 4060 Ti 8 GB and the 4060 Ti 16 GB. It's not quite to the point where the 16 GB variant can't take advantage of its VRAM in some cases, but there's a reason most reviewers say the 16 GB variant is bad value.
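The road analogy boils down to a one-line formula: peak bandwidth = bus width × per-pin data rate. Using the published 4060 Ti figures (128-bit bus, 18 Gbps GDDR6), with the older 3060 Ti (256-bit, 14 Gbps) for contrast:

```python
# Peak memory bandwidth in GB/s = bus width (bits) * data rate (Gbps) / 8.
# Doubling capacity changes neither term, so bandwidth stays the same.

def bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits * gbps_per_pin / 8  # divide by 8 to go from bits to bytes

rtx_4060_ti = bandwidth_gbs(128, 18)  # 288 GB/s, for both 8 GB and 16 GB
rtx_3060_ti = bandwidth_gbs(256, 14)  # 448 GB/s on the wider bus
```

Both 4060 Ti variants land at the same 288 GB/s, which is exactly why the extra 8 GB often can't be fed fast enough to matter.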
As for slotting in extra RAM like on a motherboard: that's a big fat nope. If you ever look at a video card PCB, you'll notice the VRAM chips are extremely close to the actual GPU die. There's a reason for this: the farther the VRAM is from the GPU die, the more signal degradation and latency there is. Signal degradation is an issue considering the clock speeds at which VRAM works; latency is pretty much self-explanatory. There actually used to be swappable VRAM chips on ancient video cards; they went the way of the dodo due to the issues I mentioned.
That's one issue; another is that PCB design isn't simple, especially for something as complex as a graphics card, let alone a PCB with more than one GPU on it.
What you're describing is a bit like how computing clusters work. The main difference is that each cabinet has racks containing CPUs, RAM, etc., or a bunch of video cards in the same rack. The interconnect speed between racks and cabinets is already a concern, and those links are nowhere near as fast as local RAM channels. This was already a concern 20 years ago, by the way.
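A rough peak-bandwidth ladder (published or typical figures, rounded) shows why memory sitting behind a cable or network link can't stand in for on-card VRAM:

```python
# Approximate peak bandwidths in GB/s for each hop a "memory cabinet"
# design would have to cross. Figures are rounded published/typical values.

links_gbs = {
    "GDDR6 on-card (4060 Ti)": 288,   # soldered millimeters from the die
    "PCIe 4.0 x16 slot":       32,    # ~31.5 GB/s usable, rounded
    "100 GbE network link":    12.5,  # 100 Gbit/s / 8
}

baseline = links_gbs["GDDR6 on-card (4060 Ti)"]
slowdown = {name: baseline / gbs for name, gbs in links_gbs.items()}
```

Even the fastest interconnect out of the card is roughly an order of magnitude slower than on-card VRAM, and a cabinet-to-cabinet cable is worse still, which is why clusters keep memory as close to each processor as physically possible.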