[Der8auer] Investigating and Fixing a Viewer's Burned 12VHPWR Connector
Posted by Berengal@reddit | hardware | View on Reddit | 125 comments
P_H_0_B_0_S@reddit
Wish he and Thermal Grizzly would make an updated WireView that has the same per-cable real-time current monitoring the Asus Astral cards have, so that measurement could be added to any GPU with this connector.
This would resolve one of the biggest current issues with the connector: the user gets no warning when there are imbalances and some pins are carrying over-spec current. Visual inspection and making sure the connector is fully seated don't cut it, and a clamp meter only measures at that point in time. It might well have saved this user. Not all of us can afford the Astral premium, so having a product that brings that feature to all cards would be great.
Though understand if they don't want to have a product that sits in this fubar connector ecosystem...
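For illustration, the kind of per-wire check such a product could run might look like this. A minimal sketch; the pin readings, the 1.5x imbalance threshold, and the 9.5 A per-pin figure are assumptions for illustration, not a real WireView or Astral API:

```python
# Hypothetical per-wire watchdog for a 12V-2x6 connector. The 9.5 A
# per-pin limit and 1.5x imbalance threshold are illustrative
# assumptions, not vendor-published logic.
PER_PIN_LIMIT_A = 9.5
IMBALANCE_RATIO = 1.5

def check_pins(currents_a: list[float]) -> list[str]:
    """Return warnings for over-spec or badly imbalanced pins."""
    warnings = []
    avg = sum(currents_a) / len(currents_a)
    for i, amps in enumerate(currents_a):
        if amps > PER_PIN_LIMIT_A:
            warnings.append(f"pin {i}: {amps:.1f} A exceeds the {PER_PIN_LIMIT_A} A rating")
        elif avg > 0 and amps > IMBALANCE_RATIO * avg:
            warnings.append(f"pin {i}: {amps:.1f} A is {amps / avg:.1f}x the average")
    return warnings

# Example: ~575 W card (~48 A total) with two pins hogging the load
print(check_pins([11.8, 11.5, 6.0, 6.2, 6.3, 6.1]))
```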
P_H_0_B_0_S@reddit
And I was heard (more likely it was already being worked on).
https://overclock3d.net/news/gpu-displays/thermal-grizzly-fixes-12vhwr-with-its-wireview-pro-ii-connector/
Can't wait.
DOSBrony@reddit
Shit, man. What GPU do I even go for that won't have these issues? I can't go with AMD because their drivers break a couple of my things, but I also need as much power as possible.
Strazdas1@reddit
Anything with low power draw so it never overloads the cable. 5070 Ti or below if you have to stay on Nvidia.
reddit_equals_censor@reddit
WRONG. 5070 card melted 12 pin nvidia fire hazard example here:
https://videocardz.com/newz/user-reports-melted-power-cable-after-using-geforce-rtx-5070-graphics-card
5080 cards melt as well.
so NO, you are absolutely NOT safe with a 5070 ti or below.
that is a fact.
Strazdas1@reddit
the images in that article do not show a melted cable? and i never said anything about a 5080. That was another poster.
reddit_equals_censor@reddit
the first picture in the article, left.
you can see the melted cable on the left going into the connector. so it melted in the connector, and it melted the cable outside of the connector as well.
Strazdas1@reddit
I see physical damage on the cable insulator, but I don't see melting.
fuettli@reddit
So what is that damage in the connector right where the totally-not-melted cable fits in?
https://i.imgur.com/bi1qFxU.png
Strazdas1@reddit
As pointed out in the article itself, that is clearly a bent pin.
fuettli@reddit
No, the socket below that bent pin.
Strazdas1@reddit
I'm not sure what you are seeing, I just see a normal socket.
fuettli@reddit
https://i.imgur.com/waAeGnx.png
you can't see these "white" spots?
Strazdas1@reddit
Those look more like reflections from the camera or imperfections in the plastic mold than melting.
fuettli@reddit
What a coincidence that it's right where the burnt cable is, right?
Reactor-Licker@reddit
5080 and below have the same safety margin as the “old” 8 pin connector considering their power draw.
https://en.m.wikipedia.org/wiki/12VHPWR
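Back-of-envelope, assuming the commonly cited per-pin ratings (~8 A for Mini-Fit Jr HCS pins, ~9.5 A for 12V-2x6) rather than exact datasheet values:

```python
# Rough connector safety margins: rated 12 V pin capacity vs. card
# power draw. Pin ampacities are commonly cited figures, treated as
# assumptions rather than datasheet quotes.
V = 12.0

def margin(card_watts: float, pins: int, amps_per_pin: float) -> float:
    """Connector 12 V ampacity divided by the card's power draw."""
    return pins * amps_per_pin * V / card_watts

print(f"8-pin PCIe at 150 W:  {margin(150, 3, 8.0):.2f}x")  # ~1.92x
print(f"12V-2x6, 5090 ~575 W: {margin(575, 6, 9.5):.2f}x")  # ~1.19x
print(f"12V-2x6, 5080 ~360 W: {margin(360, 6, 9.5):.2f}x")  # ~1.90x
```

On those assumed numbers, a ~360 W 5080 does sit near the 8-pin's margin, while a 5090 does not.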
reddit_equals_censor@reddit
WRONG.
we know this, because 5080 cards keep melting. so they clearly don't have the same safety margins as an 8 pin pci-e or 8 pin eps power connector.
we are even seeing 5070 cards melting now.
so NO, there is no safe amount of power you can pull through these 12 pin nvidia firehazard connectors.
derating them would NOT be enough.
you are NOT safe at any power you draw from this garbage.
the only safe nvidia 12 pin fire hazard connector is one thrown into the garbage and never used.
___
if you are wondering about a possible explanation for why the nvidia 12 pin fire hazard can melt and fail at even very low loads: one possible explanation is that the connections are VASTLY weaker because they are vastly smaller. again, a possible explanation, in case you needed one. not THE explanation, just one of many in a list of found and possible causes for 12 pin nvidia fire hazards melting.
kopasz7@reddit
Their server cards (PCIe) use the 8-pin EPS connector (e.g. A40, H100). But then you need to deal with their lack of active cooling, either via added fans or a server chassis with its own fans.
Strazdas1@reddit
the new server cards use 12V connectors too. They just have lower power draw, and we don't hear of any melting from them as a result.
kopasz7@reddit
https://images.nvidia.com/content/Solutions/data-center/a40/nvidia-a40-datasheet.pdf
Freaky_Freddy@reddit
This issue mostly affects the XX90 series
If you absolutely need a 3000 dollar GPU that has a random chance to combust, then the Asus Astral has a detection tool that might help
evernessince@reddit
The 5090 astral is a whopping $4,625 USD right now. $1,625 for current detection is nuts.
Kougar@reddit
Incredible... I guess that's one way to slowly kill off a successful brand regardless of how good the product is. Doesn't matter how good the performance is when it's crazy expensive and has a design flaw that causes AIBs to deny warranties, because at the end of the day people can't risk that much money simply going up in smoke. Especially when GPUs now need to last 5+ years just to make the value worth it.
reddit_equals_censor@reddit
you are missing the part where it is an actual fire hazard.
are you willing to have your home burn down with you or your children in it, to run an nvidia card?
YES the chance is very low, but it exists.
for other fire hazard products, full immediate recalls are made, and very often government enforced if the company slacks.
so again, it is not just the product or your whole computer dying.
but a true fire risk.
which should disqualify it from any sane person buying it.
fallsdarkness@reddit
I liked how Roman appealed to Nvidia at the end, hoping for improvements in the 60xx series. Regardless of whether Nvidia responds, these issues must continue to be addressed. If Apple took action following Batterygate, I can't think of a reason why Nvidia should be able to ignore connector issues indefinitely.
reddit_equals_censor@reddit
you seem to have a misconception about how apple works.
here is a great video about it (it is also entertaining):
https://www.youtube.com/watch?v=AUaJ8pDlxi8
apple's approach is to deny an issue exists at first; then, when that doesn't work anymore, they will massively downplay the issue.
when that doesn't work anymore, lawsuits might come in. the result of the lawsuits might be an extended warranty replacement program on a specific product, if bought in a specific time period.
apple will then lie to customers about this program; they will NOT honor the warranty program for units sold outside the EXACT time period specified, despite having the exact same issue.
and they will also NOT have an extended warranty program for different units, like bigger-screened laptops from the same year, OR products released afterwards that still have the exact same engineering flaw and were designed AFTER the lawsuits about that flaw were already rolling.
so apple literally does the barest minimum possible while gaslighting customers as much as possible, scamming them in whatever way they can.
so in lots of ways nvidia now acts like apple, however we somehow didn't see a proper big lawsuit, which unlike for apple would in this case require a recall, because we are dealing with a fire hazard here.
and both nvidia and apple's anti consumer pure evil should get punished to the maximum.
the interesting part is that apple is doing lots of it to increase profits. remember, it is a double design decision. 1: have terrible engineering flaws that break the products. 2: make the design as unrepairable as possible, so that a repair is straight up impossible or unreasonably expensive by design.
but nvidia? nvidia isn't making more money pushing the nvidia 12 pin fire hazard onto customers. and they tripled down by now.
that can't even be answered with greed... they tripled down on the nvidia 12 pin fire hazard.
but yeah giant lawsuits, or governments without a lawsuit forcing nvidia to do a recall and drop this fire hazard is required.
and the 2nd one is somehow extremely unlikely, as governments in the usa, uk and other places are busy trying to murder innocent trans children and making life worse for everyone.
so yeah i guess we might have to wait for an nvidia 12 pin fire hazard caused house fire with maybe even dead people to start a lawsuit big enough to end this nvidia 12 pin fire hazard nightmare?
because nvidia somehow does not want to step away from this fire hazard.
ryanvsrobots@reddit
What do you think Apple did after batterygate?
Reactor-Licker@reddit
They added an indicator for battery health that was previously entirely hidden from the user, as well as the option to disable performance throttling entirely (with the caveat that it turns itself back on after an “unplanned shutdown”).
Still scummy behavior, but they did at least acknowledge the issue (albeit after overwhelming criticism) and explain how to “fix” it.
detectiveDollar@reddit
They also switched to a battery adhesive that can be easily debonded by applying power to a few pins, allowing for much safer and easier battery replacements.
TopdeckIsSkill@reddit
Apple was forced to do it by a judge after losing a case
Leo1_ac@reddit
What's important here IMO is how AIB vendors just invoke CID (customer induced damage) and tell the customer to go do themselves.
GPU warranty is a scam at this point. It seems everyone in the business is just following ASUS' lead in denying warranty.
Jeep-Eep@reddit
And I am fairly sure this connector was the thing that drove EVGA out of the GPU AIB business because it destroyed their main competitive advantage in their main market.
crafty35a@reddit
EVGA never even produced a GPU with this connector so I'm not sure what you mean by that.
Jeep-Eep@reddit
Yeah, they did the math after the connector was forced on them and realized it was going to bankrupt them, so they got out of dGPU.
crafty35a@reddit
Odd conspiracy theory to suggest EVGA knew the connector would be a problem and got out of the GPU business for that reason. All reporting I've seen about this suggests they left the business due to Nvidia's pricing and bad profit margins for the AIBs.
TaintedSquirrel@reddit
Also wrong.
Yeah they left the video card business. And the mobo business. And pretty much all businesses. They stopped releasing products 2+ years ago. Closed the forums, closed their entire warehouse.
The company is almost completely gutted; it's basically just a skeleton crew handling RMAs now. It has nothing to do with Nvidia. The most likely answer is the CEO wanted to retire early but didn't want to hand the company over to someone else.
Nvidia was just the fall guy.
crafty35a@reddit
Yet it's been reported by reliable sources (Gamers Nexus, see the article I linked).
I'm sure it was a factor, that doesn't change the reporting that I mentioned earlier though. More than one reason goes into a decision like that.
shugthedug3@reddit
ahem
crafty35a@reddit
Is Gamers Nexus not considered reliable? Honest question because I have not heard anything to that effect.
shugthedug3@reddit
Has a habit of raising drama where there is none for clicks. In this case presumably just relaying bad information, but I also haven't heard any corrections issued, especially given it's quite obvious EVGA wound down their business for more than just one reason.
crafty35a@reddit
Of course, I've already said I agree there was more than one reason EVGA shut down all operations.
But this reporting was based on a meeting with the CEO. And it's not like they reported it was the reason that EVGA shut down all operations, only that it was the reason they terminated their partnership with Nvidia.
TaintedSquirrel@reddit
Article is 2 and a half years old, I'm sure it was "accurate" at the time. We now know the CEO is a liar.
crafty35a@reddit
Feel free to give some more recent sources.
TaintedSquirrel@reddit
A source for what? He said they were pulling out of the GPU market, they pulled out of all markets. He lied.
crafty35a@reddit
The fact that they pulled out of other markets a bit later does not mean that the stated reason for pulling out of the GPU market was a lie.
Frankly, I don't know what point you are trying to make. My original point here is just to refute the ridiculous claim that the 12vhpwr connector is the reason that EVGA shut down.
ryanvsrobots@reddit
That makes zero sense, the failure rate is like 0.5%. They had worse issues with their 1080 Tis blowing up.
Nuck_Chorris_Stache@reddit
Would have been more than that with the new power connector
airfryerfuntime@reddit
EVGA was toying with exiting the GPU market during the 30 series. I doubt it had anything to do with this connector. They likely just got tired of the volatility of the market.
Jeep-Eep@reddit
I dunno, this shit looks just about right for the final straw.
GateAccomplished2514@reddit
EVGA 3090 Ti FTW doesn’t exist?
Deep90@reddit
They made a few 4090s iirc but they never went into full production.
whelmy@reddit
they made a few 4090s and probably some lower-end SKUs, but they never went to market, so only engineering samples (ES) are about
shugthedug3@reddit
I see the whole Reddit talking about why EVGA stopped producing GPUs story has been re-written again.
ryanvsrobots@reddit
"I am fairly sure" you just made this shit up.
pmjm@reddit
The situation is a little complex, because technically it's not the AIB's fault either. This spec was forced upon them. I understand why they wouldn't want to take responsibility for it.
At the same time, it's a design flaw in a product they sold, so it's up to them to put pressure on Nvidia to use something else. Theoretically they would be within their rights to bill Nvidia for the costs of warrantying cards that fail in this way, but they may have waived those rights in their partnership agreement, or they may also be wary of biting the hand that feeds them by sending Nvidia a bill or suing them.
But as a customer, our point of contact is the AIB, so they really need to make it right.
hackenclaw@reddit
Is it possible for them to go out of spec by just doing triple 8 pin?
or add custom load balancing on each of the pins?
karlzhao314@reddit
Evidence says no.
If Nvidia allowed board partners to go out of spec and use triple 8-pins, there absolutely would have been some board partners that would have done so by now.
Nvidia for some reason also appears to be intentionally disallowing partners from load balancing the 12V-2x6, as evidenced by the fact that Asus has independent shunts for each pin... that still combine back into one unified power plane with its own shunt anyway. This is a monumentally stupid and pointless way to build a card, save for one possible explanation I can think of: that Asus foresaw the danger of unbalanced loads, but had their hands tied in actually being able to do anything about it. Detection, not prevention, was the best they could do.
VenditatioDelendaEst@reddit
Presumably the GPU only has input pins for one shunt. A tricksy AIB could use multiple shunts and a passive resistive summing circuit, but maybe Asus didn't think of that?
Ar0ndight@reddit
yeah imo you're spot on.
We know that Nvidia has been more and more uptight when it comes to what AIBs can and can't do, and I wouldn't be surprised if power delivery was yet another "stick to the plan or else" kind of deal.
Kougar@reddit
No, NVIDIA requires AIBs stick to its reference layouts with few exceptions. There is a reason not a single vendor card has two 12V-2x6 connectors on it, not even the ~$3,400 ASUS Astral 5090, which is power-limited even before it's put under LN2. NVIDIA controls the chips and allocation; the only real choice AIBs seem to have is to simply not play, basically the EVGA route.
Blacky-Noir@reddit
Nobody forced them to make, or sell, those products.
Yes, Nvidia is a shitty partner. It's been widely known for 15+ years. Yes, Nvidia should not be left off the hook.
But let's be real, AIBs are selling those products. They are fully responsible for what is being sold, including from a legal point of view.
jocnews@reddit
The spec was forced on them, but so was the responsibility. They have to take those grievances up with Nvidia.
redditorium@reddit
?
flgtmtft@reddit
customer induced damage
redditorium@reddit
Thanks!
GhostsinGlass@reddit
Since Nvidia seems to have no interest in rectifying the underlying cause and seems to have prohibited AIBs from implementing mitigations on the PCB, my thoughts are thus:
Gigantic t-shirt again. We're six months away from Roman showing up to do videos in a monk's robe.
Z3r0sama2017@reddit
Or PSUs doing the load balancing from now on, as Nvidia are incompetent
shugthedug3@reddit
To be completely fair, it has been pointed out to me this is how it is done in every other application. Load balancing and fault detection are on the supply side, not the draw.
slither378962@reddit
The PSU could just do current monitoring per-wire. But instead of melted connectors, you'd just get sporadic shutdowns! Well, at least it didn't melt.
And we'd be paying for this extra circuitry even if we didn't need it. Let the 5090 owners foot the bill!
shugthedug3@reddit
Yes, that would be an acceptable way of dealing with a fault. It's how it works for everything else.
Also we do need fault detection, that's a basic feature expected of a PSU and it's pretty crazy to read people saying they don't want it.
slither378962@reddit
The optimal solution as far as I'm concerned is load balancing. Which supposedly works.
Simply connect those wires to different VRMs. According to buildzoid. Probably doesn't cost anything. Just good circuit design.
Then, per-wire current monitoring shouldn't be necessary. You might still have it on, say, high end PSUs for even more safety, but I'm not convinced it would be worthwhile. I don't feel the need for this extra cost on any other connector.
Strazdas1@reddit
You could technically restrict max output per wire, but I'm not sure that would fix the issues. The likely result would be the GPU crashing after the voltage drops.
VenditatioDelendaEst@reddit
The only cheap way would be to intentionally use high controlled resistance, like with 18AWG PTFE-insulated wires or somesuch. But that would compromise efficiency and voltage regulation.
The ludicrously expensive way would be a little bank of per-pin boost regulators to inject extra current into under-loaded wires.
GhostsinGlass@reddit
Eh, shouldn't the delivery side be dumb and the peripheral be the one doing the balancing? If only because the PSU doesn't know what is plugged into it, despite the connector only really having one use at this point.
Still feels like the PSU ports should be dumb by default.
Strazdas1@reddit
Yes, PSU does not know, so it cannot do the load balancing.
Strazdas1@reddit
you cannot do load balancing on a PSU. PSU does not have the necessary data for that.
Xillendo@reddit
Buildzoid made a video on why it's not a solution to load-balance on the PSU side:
https://www.youtube.com/watch?v=BAnQNGs0lOc
viperabyss@reddit
You mean rectifying the underlying cause of DIY enthusiasts who should've known to plug everything in properly, but don't, because of "aesthetics"?
I just love how reddit blames Nvidia for this connector, when it's PCI-SIG who came up with (and certified) it.
PMARC14@reddit
Nvidia is part of PCI-SIG, but they also get the lion's share of the blame because they are the majority implementer. They could back down, but it is clear they are the main people pushing this connector, considering no one else seems interested in using it.
Strazdas1@reddit
To be fair, Nvidia was the one who proposed this (together with Intel, if I recall), so the blame is valid. PCI-SIG also carries blame for not rejecting it.
GhostsinGlass@reddit
Calm down please, it's Sunday.
fallsdarkness@reddit
Just making room for massive muscle gains after intense cable pulling
der8auer@reddit
hahahahhaa the t-shirt comment made my day <3
Berengal@reddit (OP)
tl;dw - More evidence for imbalanced power draw being the root cause.
Personally I still think the connector design specification is what should ultimately be blamed. Active balancing adds more and more points of failure, and with higher margins in the design it wouldn't be necessary.
shugthedug3@reddit
Yeah it's obviously too close to the edge with the very high power cards.
Thing is though... why are pins going high resistance? There have to be manufacturing faults here.
cocktails4@reddit
Resistance increases with temperature.
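For reference, the standard linear approximation for copper is R ≈ R0(1 + αΔT) with α ≈ 0.00393/K. A small sketch with a made-up contact value:

```python
# Copper resistance vs. temperature, linear approximation.
# The 6 mOhm starting contact resistance is an illustrative value.
ALPHA_CU = 0.00393  # 1/K, temperature coefficient of copper

def resistance_at(r0_mohm: float, t_rise_k: float) -> float:
    return r0_mohm * (1 + ALPHA_CU * t_rise_k)

# A contact that heats 100 K above ambient:
print(f"{resistance_at(6.0, 100):.2f} mOhm")  # ~8.36 mOhm, ~39% higher
```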
shugthedug3@reddit
Sure, but take for example his testing at the end of the video: see the very wide spread of resistances across pins... it shouldn't be that way. I think it has to be manufacturing tolerances, on either the male or female end, with some pins just not fitting snugly.
VenditatioDelendaEst@reddit
That resistance was measured after the connector overheated for probably several hours, and after Der8auer went gorilla on it trying to unplug it with fused plastic.
There was obviously an imbalance, because the melting happened, but an imbalance doesn't have to be high resistance. The maximum contact resistance is a toleranced parameter. The minimum is not.
Alive_Worth_2032@reddit
And can increase over time due to mechanical changes from heat/cooling cycles and oxidation occurring.
username_taken0001@reddit
Pins having higher resistance would not be a problem (at least not a safety one; the GPU would just not get enough power, the voltage would drop, and the GPU would probably crash). The problem is that some idiot thought to use multiple cables in parallel on different pins. This causes the issue, because the moment one cable fails or partially fails, the others have to carry more power. Connecting two cables in parallel when one of them is not able to handle the current by itself (so the second one is just a backup) is just unheard of, and such a contraption should definitely not be sold as a consumer device.
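The parallel-path problem here is just a current divider: current splits inversely with resistance, so one degraded contact silently shifts load onto its partner. A sketch with made-up contact resistances:

```python
# Two 12 V wires in parallel: current divides inversely with each
# path's resistance. Resistance values are illustrative assumptions.
def split(total_amps: float, r1_mohm: float, r2_mohm: float):
    i1 = total_amps * r2_mohm / (r1_mohm + r2_mohm)
    return i1, total_amps - i1

print(split(16.0, 6.0, 6.0))   # healthy pair: (8.0, 8.0) A
print(split(16.0, 30.0, 6.0))  # one bad contact: (~2.7, ~13.3) A
```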
VenditatioDelendaEst@reddit
I'm pretty sure active balancing costs 2 extra shunts and 3 1k resistors. Or rather, "semi"-active, where you reuse the phase current balance of the VRM to balance the connector, by round-robin allocating phases to pins.
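A toy model of that "semi"-active idea, to show why it caps per-wire current: if each supply wire feeds its own round-robin group of VRM phases, the per-wire current is set by phase allocation rather than by contact resistance. Phase and wire counts below are illustrative assumptions:

```python
# Round-robin allocation of VRM phases to supply wires. Since each
# phase draws (roughly) an equal share, each wire's current is fixed
# by how many phases it feeds, not by its contact resistance.
def per_wire_current(total_amps: float, phases: int, wires: int) -> dict[int, float]:
    amps_per_phase = total_amps / phases
    load = {w: 0.0 for w in range(wires)}
    for p in range(phases):
        load[p % wires] += amps_per_phase
    return load

# ~48 A total (575 W card), 12 phases, 6 supply wires -> 8 A per wire
print(per_wire_current(48.0, 12, 6))
```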
SoylentRox@reddit
The correct solution - mentioned many times - is to use a connector suitable for the spec, like the XT90: rated for 1080 watts, and more importantly, it uses a single connection and a big fat wire. No risk of current imbalance, and large margins so it has headroom for overclocking, future GPUs, etc.
Z3r0sama2017@reddit
It's wild. The connector on the 3090 Ti was rock solid. I don't remember seeing any posts saying "cables and/or sockets burnt". Yet the moment they removed it for the 4090? Posts everywhere. Sure, there was also a lot of user error, because people didn't push it in far enough, but even today there are reddit posts of people smelling burning with the card in the system for 2+ years. And the 5090? It's the 4090 shitshow dialed up to 13.
liaminwales@reddit
Some 3090 Tis did melt, Nvidia just sold fewer of them than 3090s, so fewer posts were made.
Strazdas1@reddit
8-pins melted too; everything has a failure rate. This connector is just a bad design that increases it.
Tee__B@reddit
The max power draw of the 4090 and 5090 compared to the 3090 Ti doesn't help.
-WingsForLife-@reddit
The 4090 used less power on average than the 3090 Ti; it really is just the lack of load balancing.
Tee__B@reddit
Sure, it's more efficient, but it can and does go way higher. My 5090 at stock spikes above 600 W.
RealThanny@reddit
The card was designed for three 8-pin connectors, and the 12-pin was tacked on. That meant the input was split into three load-balanced power planes. So that's three separate pairs of 12V wires, with each pair limited to one third the total board power (i.e. 150W per pair). Even if one of the pair has a really bad connection, forcing all the current over the other wire, that's still only 12.5A max.
The 4090 has no balancing at all, so it's possible for the majority of power to go through one or two wires, making them much more prone to melting or burning the connector.
The 5090 is going to be much worse due to the much higher power limit.
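The 3090 Ti worst case described above works out as follows (a quick check, using the figures in the comment):

```python
# Worst case per wire on a three-plane, load-balanced 450 W design:
# each plane is capped at 150 W across one pair of 12 V wires.
PLANE_WATTS = 150.0
V = 12.0

pair_amps = PLANE_WATTS / V  # 12.5 A shared by the pair
# Even if one wire of the pair fails completely, the survivor
# carries at most the whole plane: still only 12.5 A.
print(f"worst case per wire: {pair_amps:.1f} A")
```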
Jeep-Eep@reddit
Yeah, the connector... it's not the best, but balance it and/or derate it to the same margin as the 8-pinners and you're gucci.
GhostsinGlass@reddit
Just say 8-Pin.
Jeep-Eep@reddit
Okay, but the burden of the message remains - use these blighters like the old 8 pin style - derate to 50%, use multiple connectors, load balancing on anything over half a kilowatt - and they'd probably be roughly as well behaved as the 8 pin units.
GhostsinGlass@reddit
Yeah, All I did was tell you to say 8-PIN, whatever you are crashing out about here has nothing to do with what I said.
conquer69@reddit
There was never any evidence of that either. It's clear that even a brainrotten pc gamer can push a connector in correctly.
If the card isn't plugged in correctly, then it shouldn't turn on.
ficiek@reddit
That's not true, there were plenty of reports of that happening back then as well, mostly due to the connector not being fully seated.
Quatro_Leches@reddit
You wouldn’t see many devices with less than 50% margin on the connector current rating
Jeep-Eep@reddit
Yeah, and the performance of the 5070 Tis and 9070 XTs that use them is telling - run it like the old standard and it's pretty reliable, and you still have a board space savings.
PostsDifferentThings@reddit
With the price of GPUs these days, complaining about the added cost of some ICs for power balancing is like a Bentley owner complaining about the added cost of tinting the windows on their 2025 Continental GT.
Like seriously, what the fuck are we talking about here? Too much cost? GPU's are more expensive than used cars these days. Fucking add the cost.
woozie88@reddit
Thank you kindly.
Oklawolf@reddit
As someone who used to review power supplies for a living, I hate this garbage connector. There are much better tools for the job than a Molex Mini-Fit Jr.
DNosnibor@reddit
While they're certainly not perfect, the Mini-Fit Jrs seem mostly fine (PCIe 6/8-pin and EPS12V 4/8-pin are pretty reliable). 12VHPWR is Micro-Fit, not Mini-Fit Jr, and it definitely has major problems.
venfare64@reddit
Hey, didn't know you browse reddit. Thank you for your past contributions on PSU reviews. >!you're also definitely better than Johnny, because he doesn't seem to understand that the connector has a bunch of design flaws, and he tries his best to defend it.!<
Oklawolf@reddit
If it wasn't for Jon, I'd have never gotten started reviewing. He's also a friend of mine, so I don't really appreciate people taking shots at him.
Leo1_ac@reddit
Hey Johnny Guru. Respect man. Thank you.
jerryfrz@reddit
OklahomaWolf isn't Jonny.
Jeep-Eep@reddit
Team Green's board design standards are why I ain't touching one for the foreseeable future.
ZekeSulastin@reddit
… were you of all people ever going to touch Nvidia anyways?
Jeep-Eep@reddit
I do have fond memories of my EVGA 660ti back in the day.
Hewlett-PackHard@reddit
It's like they fired all their electrical engineers and just let AI do it.
Lisaismyfav@reddit
Stop buying Nvidia and they’ll be forced to correct this design, otherwise there is no incentive for them to change
starcube@reddit
As soon as there is a competitor offering the same performance... oh wait.
TheSuppishOne@reddit
After the insane release of the 50 series and how it’s freaking sold out everywhere, I think we’re discovering people simply don’t care. They want their dopamine hit and that’s it.
Strazdas1@reddit
the vast, vast majority of people do not follow tech news and will not even be aware of the issue until it hits them personally.
THiedldleoR@reddit
A case of board partners being just as scummy as Nvidia themselves, what a shit show.
BrightCandle@reddit
Clearly no user error in this one; we can see the connectors are fully in. The connectors on both sides have welded themselves together. The only place this can be fixed is the GPU. They need to detect unbalanced current on the GPU for this connector, for safety reasons. This is going to burn someone's house down; it's not safe.