Why does Nvidia want so badly to stick with 12VHPWR?
Posted by Public_Courage5639@reddit | buildapc | View on Reddit | 228 comments
It's a bad connector that literally melts, and it ruins their reputation (that plus the paper launches, fake MSRPs, multi-frame generation, etc., but that's not the topic). Why do they absolutely want to keep it? 6+2 works great and is very reliable. What benefit do they get from using 12VHPWR over that?
UpperRearer@reddit
Just to make things difficult for regular people, create some air of exclusivity, and make lateral movement more tedious. Good luck explaining any of that connector pin stuff to younger family members that just want to buy a new GPU. Then again, Nvidia is too far into the bubble of their own making to pretend to care about consumer goods these days. If you're not a data center, you might as well not exist to them.
djGLCKR@reddit
They co-sponsored the development of the connector alongside Dell; PCI-SIG designed it. Sunk cost fallacy, perhaps.
NTC197@reddit
Nvidia is one of the most influential members of the PCI-SIG. Also, Nvidia developed the first iterations on their own (with Astron) before they brought the connector into the PCI-SIG.
aragorn18@reddit
Ultimately, we don't know. But, it's not like the connector has zero benefits. It's much smaller and allows a cheaper and more flexible design for the PCB.
UltimateSlayer3001@reddit
“cheaper” -> found our answer lmao
Unicorn_puke@reddit
Yup, rather than 3-4 of the traditional 8-pins, they want to just use 1 connector to save like $3.50 a board.
Ask_Who_Owes_Me_Gold@reddit
1 connector is also easier for the user and requires less cabling.
itherzwhenipee@reddit
Yes, because connecting 2-4 cables to one splitter is so much easier than to connect 1-2 cables to the GPU?
Ask_Who_Owes_Me_Gold@reddit
One 12 VHPWR cable with one connector on each end is easier than running multiple cables or messing with splitters.
Forward_Golf_1268@reddit
If only they didn't melt.
Unicorn_puke@reddit
Yep. I'm not against it if it works. I have the nitro+ 9070xt and it uses that connector hidden under a cover plate. It's hard enough moving 1 cable in there. No way they could do that design with 2 or 3 connectors
dugg117@reddit
And the crazy part is they could just use 2 8-pins and still have more headroom than the nonsense they're using.
dbcanuck@reddit
$3.50/board x 150,000 4090 cards sold = $525,000.
It's not immaterial, although given Nvidia's market cap it does seem penny wise / pound foolish.
chsn2000@reddit
I mean, you need to take into account the failure rate x cost of warranty to understand the marginal difference
ExampleFine449@reddit
What about space on the board? Would there be more than enough room to put that many connectors on it?
Unicorn_puke@reddit
We have 3-4 slot GPUs as the norm now. Most are at least 300mm in length. They don't give a fuck how big they make these things.
Haunting_Summer_1652@reddit
So $3.5 a board means like $50 for the final price I'm guessing
Lightbulbie@reddit
Naw they're pocketing the savings and still overcharging.
Swimming-Shirt-9560@reddit
selling less for more, ngl if i'm Nvidia shareholder that's exactly what i want them to do to maximize my profit
Money_Do_2@reddit
Small gain, big risk if it comes back to bite them. Probably wont tho.
pizzaking10@reddit
Prioritizing shareholder profit is the harbinger of decline.
SpurdoMonster@reddit
love me free trade
RCB1997@reddit
Free trade is good. Regulators not doing their jobs because corporations donate to political campaigns is bad.
jamothebest@reddit
free trade means no regulations lol. 100% free trade is bad.
RCB1997@reddit
That's not exactly true. Individual jurisdictions are still supposed to keep companies in check within the rules/regulations of their countries. Free trade means no tariffs, taxes, or arbitrary market protection for certain markets. Free trade ≠ zero regulation.
jamothebest@reddit
All of those rules/regulations are restrictions on trade, hence it’s not technically free trade.
What you’re saying is more what free trade means in practice rather than in theory. Free trade is bad because it allows businesses (mostly bigger corporations) to take advantage of consumers.
Regulations are good and should be encouraged but not all regulations are good.
PiotrekDG@reddit
Monopoly, more like.
oscardssmith@reddit
The savings are more than just the connector cost. It's also the smaller PCB which means easier cooling.
Sleepyjo2@reddit
It's also dramatically easier to design, with fewer required components (beyond the physical connectors themselves).
It's not just "haha it's cheaper and they're being cheap".
emelrad12@reddit
Ye, probably something they need as those cards are getting crammed so hard.
Yuukiko_@reddit
They also took out the load regulators, so probably more savings there.
HSR47@reddit
The issue isn’t the connector in and of itself.
The issue is Nvidia's unrealistic expectations (600W per 12VHPWR connector has never been realistic or safe), and their bad designs not doing enough to detect potential issues.
If Nvidia had designed its cards better, and if they’d stuck with 300-450W per connector, melting likely never would have been a serious issue.
If they want to keep pushing 600W through a single-connector, they need to revise the spec to add at least two more conductor pairs to it.
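The headroom point above can be put in rough numbers. A back-of-envelope sketch, assuming ~9.5 A per power pin for 12V-2x6 and a commonly cited ~8 A per Mini-Fit Jr pin for the PCIe 8-pin (both per-pin figures are assumptions, not official worst-case values):

```python
# Back-of-envelope connector headroom on a 12 V rail.
# Assumed per-pin ratings: ~9.5 A (12V-2x6), ~8 A (PCIe 8-pin).
V = 12.0

def capacity(power_pins, amps_per_pin, rated_watts):
    """Return (max_watts, headroom ratio vs. the spec'd rating)."""
    max_watts = power_pins * amps_per_pin * V
    return max_watts, max_watts / rated_watts

hpwr = capacity(6, 9.5, 600)   # 12V-2x6: 6 power pins, spec'd at 600 W
pcie = capacity(3, 8.0, 150)   # PCIe 8-pin: 3 power pins, spec'd at 150 W

print(f"12V-2x6: {hpwr[0]:.0f} W max, {hpwr[1]:.2f}x headroom")  # 684 W, 1.14x
print(f"8-pin:   {pcie[0]:.0f} W max, {pcie[1]:.2f}x headroom")  # 288 W, 1.92x
```

Under these assumptions the 8-pin runs at roughly half its physical capacity, while a 600 W 12V-2x6 runs at almost 90% of it, which is the "never been realistic or safe" argument in a nutshell.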
adadhead@reddit
inhale a big glass of water
buildapc-ModTeam@reddit
Hello, your comment has been removed. Please note the following from our subreddit rules:
Rule 1 : Be respectful to others
^(Click here to message the moderators if you have any questions or concerns)
KillEvilThings@reddit
I wouldn't say it's that simple. I had far smoother power delivery and less USB hub stutter with my 12vhpwr instead of 2x pcie connectors. When I wasn't using the 12vhpwr on my Ti Super, I saw lots of transient power draw from the PCI connector on the mobo which probably resulted in some weird issues.
Now, could it be that the board and GPU power management was shit? Absolutely. But it's never completely straightforward.
I think if 12VHPWRs are kept to 400W at most, they're great. However, above that, you're literally playing with fire.
lafindestase@reddit
Connectors worth maybe a dollar each are what they’re worried about on these $500-$2000 devices…?
the_lamou@reddit
Everyone is focusing on the "cheaper" part where the reality is that a 5090 would take like three 8+2 connectors.
eBazsa@reddit
I don't really understand this take, as the Corsair SF series comes with 12VHPWR cables rated for 600W with a dual 8-pin end on the PSU side.
the_lamou@reddit
Corsair has always been weird about not following the official specs and doing their own thing. That's not a good thing.
eBazsa@reddit
As someone already said in this thread, the EPS connector has only one extra live wire, but it's rated at 300W. 1 extra wire, but double the performance?
The 6-pin PCIe connector is rated for 75W with either 2 or 3 live wires, the latter matching the 8-pin standard, yet being rated for half of its power. Even if we consider two wires, the rated wattage "shouldn't" be 75, but 100W.
To me it seems that this standard is not really consistent with itself. The power rating of each connector considers widely different rated currents and I don't see any reason for this.
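The inconsistency described above can be made concrete by computing the implied current per live wire for each connector (the wire counts are the ones cited in this thread; treat them as assumptions):

```python
# Implied current per live wire at 12 V for each spec'd connector.
# Wire counts per the thread: 6-pin sometimes has 3 live wires.
V = 12.0

specs = {
    "PCIe 6-pin": (75, 2),
    "PCIe 8-pin": (150, 3),
    "EPS 8-pin":  (300, 4),
    "12V-2x6":    (600, 6),
}

per_wire = {name: watts / V / wires for name, (watts, wires) in specs.items()}
for name, amps in per_wire.items():
    print(f"{name:10s}: {amps:.2f} A per wire")
# Ranges from ~3.1 A (6-pin) to ~8.3 A (12V-2x6): the specs clearly
# assume very different safety margins per wire.
```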
itherzwhenipee@reddit
Mechanically, by wire gauge, the 8-pin can deliver around 280W, but the PCIe standard says 150W. So the PCIe standard completely ignores wire capabilities.
eBazsa@reddit
And I highlighted the contradiction in the PCIe standard. 1 extra wire seemingly always doubles the power rating, which doesn't make too much sense to me.
aragorn18@reddit
Four
itherzwhenipee@reddit
Nope, looking at wire thickness, the 8pin can handle around 280W, so it would have been 3.
the_lamou@reddit
Yup, I think you're right and I miscounted. So imagine having to plug four cables in to get shit running. It would be absolutely idiotic. The 12V2x6 cable isn't perfect, but it's the best of a bunch of bad options.
jott1293reddevil@reddit
Have you seen the size of 5090 AIB cards? They've 100% got the space for 4 connectors. We've had cards with three for a decade that were much smaller.
the_lamou@reddit
And I have the space to put a full grand piano in my living room, but that doesn't mean I want to deal with that headache.
The only real problem with 12V2X6 is that it's still too easy to not plug in properly, and that PCI-SIG (the people who actually created the standard) really fucked up in limiting it to 575W (with 600W at the "+" level) rather than designing a true future-proof connector.
Realistically, they should have created a standard for a 1,000W connector, and specced it to be safe at that power level. As it currently stands, the connector is functionally obsolete since the 6090 is unlikely to draw less than 600-650W.
jott1293reddevil@reddit
I’ve seen so many of these burnt out posts that I don’t even believe they’re all “not plugged in properly” anymore. A lot look like manufacturing defects where one or two pins are a tiny bit too short causing overloads on other ones that are making proper contact. Basically as you say they made a cable with too tight a tolerance for the 4090 and 5090 power draw. You are entitled to your opinion that 4 cables would be too big a hassle. From my perspective the 12v is the only reason I haven’t bought a 5090. If Nvidia weren’t such control freaks I’m convinced someone would have made an AIB card with 4/5 8pin pcie cables and I’d have grabbed one. If I’m dropping $2500 on a video card I don’t think it’s too much to ask for a little peace of mind to go with it
the_lamou@reddit
By "too many" you mean like 10? Because the actual number of burnt connectors has been miniscule.
PM_ME_NUNUDES@reddit
Nobody "needs" a 5090. If you're doing AI work for a professional business, you're not buying 5090s, if you're gaming at 1440p, you can do that with any decent current gen cards.
The 5090 is for a tiny percentage of gamers who want to play new titles at 4k. You don't need to do that unless your job is literally "4k game reviewer".
the_lamou@reddit
Or "photo/video/3D rendering". Or physical modeling (for example in physics or chemistry). Plenty of professionals in AI use 5090s in home labs and for prototyping, because moving up to the next step (RTX PRO) costs about 3-4x a 5090 and isn't necessary unless you're working with full FP32 production models. I'm currently deciding between whether I actually need to spend $9k on a PRO or if I can get away with building a functional prototype on two 5090s that I can resell when I need to move up to a workstation or server card.
jott1293reddevil@reddit
Haven't been counting, but yeah, it's dozens now across the last two generations. I had one on preorder and cancelled it after I saw der8auer's vid on it. Genuinely not sure what to buy now. Last time I bought a GPU was when the 4000 series dropped. Let's just say my finances have improved since then; I'm gaming on a 5120x1440 monitor and my current card can't give me a smooth experience with ray tracing enabled. I was completely prepared to pay for a 5090, but I'm just not sure now. I don't want to reward stupid design with a purchase of this size, but unfortunately there's 0 competition for good performance at that resolution. I will probably crack and buy one, but I feel about it the same way I would if I was buying a car and the only option was a Cybertruck.
the_lamou@reddit
Well, no, it's hundreds to thousands if you count the last two generations. But that's not what we're talking about, and the connector changed from the 40XX to the 50XX so not sure why you'd count them unless you just wanted to feel included in the conversation.
The one where he used a known bad power supply on a cable well outside its spec range? He got one result that no one has been able to replicate, including systems integrators who pointed out that his equipment had to be out of spec because they had never seen similar behavior, and that there's no way he would have been able to touch the cable if it was at the claimed temperature. Then he pretended it was some definitive proof despite not doing even the bare minimum of proper experimental diligence.
That has got to be in the top ten worst click-bait garbage videos in tech of this decade, and it's sad that people still make decisions based on it.
The difference being that the Cybertruck is a genuinely bad car with huge issues across the range, whereas the 5090 is actually a great card that has a relatively minor issue which to date has affected fewer than a dozen or two cards. It's not a great look, but it's also highly over-blown for 5090 cards. It's the "MSG will kill you" of tech.
The 12V2X6 connector is not great, but a) that's not NVIDIA's fault, since the standard is written by PCI-SIG, not NVIDIA, and b) your chance of having a connector melt is tiny enough that you're more likely to damage your 5090 through a bad OC. If you want a 5090, get one. The connector shouldn't stop you.
Zorboids@reddit
Sure, but not many PSUs come with nearly enough connectors for that.
anirban_dev@reddit
Most 1000W PSUs before the ATX 3.0 mandate of including the specialised connector would have had 6 connectors. They don't now precisely because of the 12V2x6. So no, NVIDIA aren't solving a problem here; they created it themselves.
Rebelius@reddit
But my motherboard takes 3, so 6 would not be enough if I needed the 4 a 5090 requires, assuming we don't trust the pigtail cables.
My PSU has 5 plus the 12V-2X6, and that's enough.
anirban_dev@reddit
If you're using a CPU that actually needs 3 connectors, then I would imagine your suggested PSU would be over 1000w if you also have a 5090.
Rebelius@reddit
An NZXT C1200 has the same connectors as my C850.
I don't need all those connectors, I have a 9800x3d, but the motherboard takes 2 8-pin and a PCIe. It just says in the manual to connect them all, I'm sure it could do without one of the 8-pin CPU connectors if I really needed it for the GPU.
buildzoid@reddit
your 9800X3D only needs 1 EPS 8pin and a 24pin. Any other connectors on your motherboard are optional.
LA_rent_Aficionado@reddit
I'm running 3 5090s and I have absolutely no desire to route 12 8-pin connectors and make my case into more of a rat's nest. Once I get a 4th, even more so.
It's imperfect; however, as a person who got back into builds a few years back, I will say the 12V-2x6 is way easier from a user standpoint than bulky 8-pins.
anirban_dev@reddit
Hope you are able to recognize how much of an outlier you are in the non AI GPU market.
LA_rent_Aficionado@reddit
Yes
ime1em@reddit
How much did 3 cost vs just buying a RTX 6000 pro?
LA_rent_Aficionado@reddit
Not far off, granted I got 2 before even hearing about the 6000. My plan is to likely sell one and swap it out with a pro at some point.
ime1em@reddit
https://www.reddit.com/r/nvidia/comments/1kmxnmu/comment/msdx9zi/?utm_source=share&utm_medium=mweb3x&utm_name=mweb3xcss&utm_term=1&utm_content=share_button
The 6000 has good gaming performance too.
LA_rent_Aficionado@reddit
I definitely want one. I figured they'd be impossible to get near MSRP and go for 12k or something; doesn't seem to be the case though.
iAmBalfrog@reddit
The apple connector special
VengefulCaptain@reddit
My ancient 850W PSU had 5.
Any PSU that would be used with a 600W card would have enough plugs. You might have to email the manufacturer to get additional PCIE power cables though.
Rebelius@reddit
The NZXT C1200 has 5 PCIE/CPU and one 12V-2X6. My motherboard uses 3 of them, so if you swapped the 12V-2X6 for an extra PCIE/CPU connector, you wouldn't have enough left over for a 5090.
VengefulCaptain@reddit
Why would you swap a 600w output with a single 150w output?
If they removed it they would probably add 4 PCIE power connections.
GolemancerVekk@reddit
That's because they also cheap out, or gouge you for extra cables. There are brands that put sufficient cables in the box to match the wattage, like ADATA/XPG, which is why I keep recommending them, on top of them being tier A.
elonelon@reddit
wait, 8+4 or 8x4 ?
aragorn18@reddit
8x4
elonelon@reddit
why the heck on earth do we need that kind of power for 1 GPU?
aragorn18@reddit
The 5090 can pull 600W. Each 8-pin connector is rated for 150W.
eBazsa@reddit
Corsair's own 12VHPWR cable is rated for 600W while using only two 8-pin connectors on the PSU side.
skylinestar1986@reddit
4 is ok. I don't mind having a connector like the motherboard power connector.
aragorn18@reddit
This would actually be 33% larger than the standard ATX motherboard connector.
Ouaouaron@reddit
Which wouldn't just be ugly, but would take up a significant amount of space on their fancy FE circuit boards
maewemeetagain@reddit
It's also cool in concept to have a connector powerful enough that one cable can power a high-end GPU; it looks nicer in a build and it's much simpler to cable manage.
The issue, of course, is the execution. NVIDIA really wants it to work and I get why. Unfortunately, it just isn't compatible with NVIDIA's desire to juice up these cards as much as possible. I mean, we're already seeing some really high-end 4090s and 5090s that have two 12VHPWR connectors, entirely defeating the purpose of the biggest benefit the connector is supposed to have.
itherzwhenipee@reddit
How dafuq does a cable tree of four cables going into one look nicer than two cables?
maewemeetagain@reddit
I'm talking about the single 12VHPWR cables that run straight from the power supply on newer units, not the adapter.
Frankie_T9000@reddit
I was surprised that the 5060 Ti I had used one single 8-pin connector.
Eduardboon@reddit
Yeah, I couldn't plug in 3 6+2 cables because I only had 2, forcing me to use the Y-plug most GPUs come with (even though you shouldn't do this).
The 12VHPWR was an easy order from Seasonic and made space in my case.
I do not want it to melt my 5080 however, so maybe I should put some ice on it
Reworked@reddit
Maybe it's me, but I would imagine "not losing market share due to not even bothering to do basic planning of the connector" is a fair sight cheaper.
aragorn18@reddit
Losing market share you say?
https://store.steampowered.com/hwsurvey/videocard/
GolemancerVekk@reddit
Yes. The vast majority of cards there may be Nvidia but they're 1000-3000 gen, with a handful of 4000 thrown in. People used to buy Nvidia, true. They can't afford it anymore, the cost/value ratio isn't there.
Scarabesque@reddit
They also can't afford AMD anymore. In many markets their latest two GPUs are more overpriced than NVidia.
GolemancerVekk@reddit
It's not about AMD vs Nvidia, it's about home users being priced out of the market altogether.
Caramel-Makiatto@reddit
You literally said that nvidia is losing market share. That implies that AMD is selling more cards than Nvidia. It's inherently an Nvidia vs AMD statement.
GolemancerVekk@reddit
The market is no longer just home PC... consumer card sales are dwindling in favor of datacenter cards and other professional segments.
You're going to say that's semantics since it also says "Nvidia" on datacenter cards, but you and I are never going to buy a datacenter card. As far as we're concerned it might as well be a completely different producer.
"Our" cards are going extinct and it's very little comfort if they have brand A or brand B on them while that's happening.
So yep, you're absolutely right, Nvidia is not losing market share... except soon that won't matter to us anymore.
Wiggles114@reddit
Yeah, but the cards are now bigger than ever. I'm no engineer, but I find it hard to believe it's impossible to fit 4x PCIe 8-pin on these behemoths.
CmdrSoyo@reddit
Because the PCI-E spec only allows for 150W for an 8pin and 75W for a 6pin they wanted a new connector because their 600W GPUs would have needed something like 4 8pins. That makes their cards look extremely power hungry (because they are) and is bad pr. It also takes up a lot of space and makes the PCBs more expensive.
Why didn't they just recertify the PCI-E 8-pin to 300W? I have no idea. You can put 300W through a 6-pin and it would likely only get a bit warm. Still better than the 12VHPWR.
mikelimtw@reddit
Because Jensen is too proud to admit he made a mistake with that connector.
itherzwhenipee@reddit
Because Nvidia wants to be like Apple, creating their own ecosystem.
AdministrativeFeed46@reddit
coz they can shrink the pcb and have smaller coolers. less materials, less cost.
StomachAromatic@reddit
This isn't a question that actually desires a real answer. This is asked to simply complain about Nvidia.
thelord1991@reddit
The problem is not the connector. The problem is the GPUs unevenly suck power through the cable.
Der8auer tested it and noticed that the biggest load goes through 1 or 2 wires while the others are barely used. That's why mostly 1 or 2 of the pins melt.
So it's basically Nvidia's fault. They don't give their GPUs the tools to spread the power draw evenly over every one of the pin connectors.
ununtot@reddit
Because with their "all power to one rail" approach they would also burn down 8-pins.
Matttman87@reddit
I'm not an expert, but from my understanding the engineering is sound; it's the implementations that have been bad. It appears that simply adding per-pin amperage monitoring could detect a bad cable and prevent the literal fires. When one pin starts to fail, or makes a poor connection, it creates resistance, so the other wires compensate with more current than they're rated to handle, and the excess energy is expended as heat, aka fires.
But that increases cost, so is it an implementation oversight or a deliberate cost-savings measure that's led to all these connector failures? That's the real question.
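A minimal sketch of the per-pin monitoring idea described above. The 9.5 A limit and 90% warning margin are illustrative assumptions, not from any spec; a real card would do this in firmware against shunt-resistor readings:

```python
# Hypothetical per-pin current check (illustrative thresholds only).
PIN_LIMIT_A = 9.5      # assumed per-pin rating
WARN_FRACTION = 0.9    # warn at 90% of the limit

def check_pins(currents_a):
    """Classify a list of per-pin current readings (amps)."""
    if any(i > PIN_LIMIT_A for i in currents_a):
        return "throttle"   # some pin exceeds its rating
    if any(i > PIN_LIMIT_A * WARN_FRACTION for i in currents_a):
        return "warn"       # load is unbalanced; connector may be bad
    return "ok"

print(check_pins([8.3] * 6))            # balanced 50 A draw -> ok
print(check_pins([11.9] * 5 + [0.5]))   # one dead pin -> throttle
```

The point is that a dead or high-resistance pin shows up immediately as overcurrent on its neighbors, so the card could throttle and warn long before anything melts.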
Noxious89123@reddit
That didn't fix the problem though, it just alerts you that there is a problem.
FarkGrudge@reddit
It absolutely fixes the problem because the card can monitor current ratings then and throttle if it’s exceeding on any pin, producing a warning to fix it.
The connector, when properly inserted, is rated for the current being supplied. It’s the cases where it isn’t properly inserted (on either end) that are melting as the card doesn’t know the pins are being exceeded and just keeps pulling current above their ratings.
Noxious89123@reddit
Right.
So now you've got a graphics card that doesn't work properly, because it either throttles itself or keeps warning you to reseat the cable.
This is not a fix.
This would be like having a punctured tyre on your car, and instead of replacing or plugging the tyre, you instead decide to fit a tyre pressure monitoring system and a foot pump.
It isn't fixed if you have to keep fuckin' with it.
Melodic-Letter-1420@reddit
No, the engineering is not sound if the design requires all wires to work in order to avoid fatal failure.
A building or bridge is designed to survive some nuts and bolts not being in working condition, with tolerance in mind, because failure may cost lives.
Matttman87@reddit
Again, I'm not an expert, but it's my understanding that it isn't until 2 of the power pins have completely failed that the current pushed through the remaining ones exceeds their rating, which suggests there is redundancy. But sometimes those pins haven't completely failed and are still letting some current through while expending some as heat (which also increases resistance), so the power supply has to send more current, and so on until there's enough heat to melt it.
The real problem is that the system doesn't know which pins have that increased resistance, so when the GPU requests more power, the power supply can't tell which pins can safely handle it and which pins will expend that extra energy as heat. If the system had some way of monitoring each pin, it could a) better balance the load and b) alert you to a poor connection to be remedied before a catastrophic failure, whether that means simply re-seating the connector or replacing the whole cable.
FarkGrudge@reddit
Um, what? The connector and wires are rated for a specified current and VA. If properly inserted on both ends, the current draw is evenly distributed and below the ratings of the connector. This is basic electrical engineering design, and is sound.
The issue is what happens when they’re not properly connected (or a wire is broken) causing the other pins to exceed their ratings, and the design of the cards is such that they cannot detect and throttle to protect in this case until it’s fixed. That’s the design issue - not the connector.
Melodic-Letter-1420@reddit
I didn't even mention that the connector is not rated for the intended use or electric engineering.
I'm talking about the lack of tolerance and redundancy. Real-world isn't always perfect and good engineering accounts for that. One broken wire causing catastrophic failure isn't sound engineering.
coolgui@reddit
The 12VHPWR connector was developed by PCI-SIG, not Nvidia directly. It's the "official" way anything over 200W is meant to be powered. There is nothing stopping them from just using multiple 8-pins like others, but I guess they just don't want to admit it hasn't worked out very well. They keep thinking it's fixed, but it doesn't seem to be. I guess they are stubborn.
Mike_Prowe@reddit
Look who sits on the PCI-SIG board of directors and spot the Nvidia employee
coolgui@reddit
AMD, Intel, Nvidia, Qualcomm, IBM, etc... I'm not sure that means that much.
mkdew@reddit
Nvidia and Dell were the main sponsors for 12VHPWR. I know you guys want to just blame it on AMD and Intel.
Mike_Prowe@reddit
I know that, but saying PCI-SIG developed it as if Nvidia had no say or influence is kinda weird.
coolgui@reddit
I never said they didn't have input, but all the members supposedly tested and vetted it. Maybe there is nothing at all wrong with 12VHPWR but rather Nvidia's implementation of it? I don't know enough to make accusations, just something to think about.
aragorn18@reddit
PCI-SIG basically just adopted the 12 pin proprietary connector Nvidia invented for the RTX 30 series. All they did was add the 4 sense pins that are basically unused.
ChaZcaTriX@reddit
Nvidia also didn't invent it; they took an existing Molex Minifit connector that fit their requirements.
From the enterprise side of things, we're also starting to get systems with Minifits as a standard connector for modular power supplies. If this gains traction, we'll be able to swap modular power supplies freely without replacing the cables.
lichtspieler@reddit
AFAIK the connector used for the NVIDIA 3000 series was the Molex Micro-Fit 3.0 12-pin connector.
Seasonic example: https://www.hardwareluxx.de/images/cdn01/430C81B7A3BA4ED38895A31759E495F0/img/C23AC996B29B458CAFCA64C7569353BC/Seasonic-MicroFit-Adapter-00001_C23AC996B29B458CAFCA64C7569353BC.jpg
coolgui@reddit
Oh okay. That probably explains why they are so stubborn about it. You'd think they'd have scrapped it with a new interface by now.
skylinestar1986@reddit
4 series. Not sure if it's ok. There are failures.
5 series. Not ok. There are failures.
6 series. We have failed 2 times. Shall we fail again?
Seriously, I hope this is not repeated in the 6 series.
No-Actuator-6245@reddit
My guess is aesthetics. A lot of people who drop big money on systems want the most aesthetically pleasing design. Minimising cables and connections really helps.
Kilo_Juliett@reddit
It's funny because the connector is the only reason that I'm hesitant to buy a 5090.
poipoipoi_2016@reddit
Yeah, I'm skipping the entire 5000 series.
I need a 4090 to do AI work with, and lordy, let the 5080 Ti/Super/whatever the 24GB version ends up being called bring those back under 2 grand.
GladlyGone@reddit
What do you need to use AI for in your work? All I see is AI slop posted everywhere on YouTube/Reddit.
MindbenderGam1ng@reddit
Big difference between generative AI slop/copyright infringement like ChatGPT, Gemini, etc. and the useful stuff. AI is revolutionary in fields like medicine and law: not for the purpose of having a robot that can do surgery, but because it can make research and parsing millions of documents exponentially faster, leading to breakthroughs and more efficient use. I'm generally an AI doomer, but many desk jobs that are not slop-related at all will revolve around AI. It's also extremely useful for upscaling (controversial with DLSS frame gen, but think of things like restoring archival footage).
PsyOmega@reddit
Running AI models that use lots of vram reduces the 'slop' factor
Fitnny@reddit
Ditto.
SagittaryX@reddit
Not just the cables themselves, but the PCB and card design as a whole. There's no way they could have done the 5090 flow through design if they needed 3-4 8 pin cables to go into it as well.
f0xpant5@reddit
They knew they were going for the 450W+ end of things, and sure, this cable is a lot nicer and neater than 4x 8-pin; shame it wasn't designed with a bit more overhead.
It seems rare to have issues with a card drawing ~400W or less.
No-Actuator-6245@reddit
The latest findings from reviews suggest the cable/plug could have worked without issues if NVidia hadn’t tried to save cost and remove all load balancing. The 3090Ti used the earlier version and never had any problems, it also had load balancing.
f0xpant5@reddit
Yeah that would also be a great addition, cost cutting to the rescue again eh.
The-Great-T@reddit
Personally, I love the 3×8-pin connectors on my 3080. I like having as many pins on my card as a motherboard. It makes a nice, broad line.
Unicorn_puke@reddit
It's 2025 - lady line
The-Great-T@reddit
Fuckin' LOL
RichardK1234@reddit
sunk cost fallacy
BillionaireBear@reddit
Yeah they spent millions designing it I imagine, not throwing that down the drain. Not to mention 12vhpwr is not inherently bad, it’s just flawed right now. Very surprising it’s not fixed yet tho
Kurgoh@reddit
I'm not surprised at all, fixing it costs more money than the alternative. Unless a lot of pcs burst in flames and burn down apts and houses so that it becomes big news, nvidia probably doesn't really care to fix it.
BillionaireBear@reddit
They already spent millions more fixing it with ATX 3.1, 3.2, 12V-2x6, whatever the hell it is lol. But yeah, Nvidia wasn't the most incentivized to fix it. That mess did make the news, but for 5 min, then more exciting worldly news came out and people moved on. The 4090 came out in fall of 2022? Wowzers
BillionaireBear@reddit
I was wrong, as people mentioned, made by PCI-SIG but Nvidia did obviously invest their own time/money into developing around it. Seems they were the biggest proponents alongside Dell. There's still something to be said about mass producing faulty items, but that's what recalls are for I suppose.
eduardopy@reddit
Are you sure Nvidia designed it? It literally is the standard put forward by PCI-SIG. Nvidia simply followed the standard, literally.
KarelKat@reddit
Also newer, smaller, fancy pants sense lines = better
tjlazer79@reddit
Ego and pride. They can't admit that it's not as reliable as what it replaced, especially after they spent years saying it was all user error. I got flamed and attacked on another subreddit a few years ago for saying it was a bad design, when it was happening to the 4090s. Fuck Nvidia. I still got an EVGA 3080; keeping it at least one more gen, then more than likely going with AMD.
RunalldayHI@reddit
As an EE, this subject is extremely far from being anywhere even close to complicated.
Those wires are in parallel. If length, temperature, and clamping force on the pins remain the same, there is literally no way for those cables to have an imbalance of current; this is a very basic fundamental of electricity.
Don't get me wrong, a shit cable, kink in the cable or even something as dumb as tugging on the cable or smashing it against the glass, can spread the molex pins enough to create a resistive hot spot, which may eventually lead to failure, it doesn't take a lot to wear down these cables if you keep messing with them.
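The parallel-wire behavior described above can be sketched numerically: current splits in proportion to conductance (1/R), so a pin with a worn or spread contact sheds its share onto the others. The 5 mΩ / 50 mΩ contact resistances and 50 A total draw are illustrative assumptions:

```python
# Current split across parallel pins by conductance (1/R).
# Illustrative contact resistances: 5 mΩ healthy, 50 mΩ degraded.

def pin_currents(total_amps, resistances_ohm):
    """Split a total current across parallel pins in proportion to 1/R."""
    g = [1.0 / r for r in resistances_ohm]
    g_total = sum(g)
    return [total_amps * gi / g_total for gi in g]

healthy = pin_currents(50.0, [0.005] * 6)
print([f"{i:.2f}" for i in healthy])   # ~8.33 A each, within a 9.5 A rating

degraded = pin_currents(50.0, [0.005] * 5 + [0.050])
print([f"{i:.2f}" for i in degraded])  # bad pin drops to ~0.98 A; the other
                                       # five rise to ~9.80 A, over 9.5 A
```

Note the failure mode: the degraded pin itself carries less current, but the remaining good pins are pushed past their rating, which matches the melting pattern described elsewhere in this thread.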
OZIE-WOWCRACK@reddit
12VHPWR is fine. The PCB needs sensors to keep it in check. And for 5090+ it needs two (2) 12VHPWR connectors. The prototype needed four 8-pins lol
Lunam_Dominus@reddit
Because if your GPU dies you buy another one.
Warcraft_Fan@reddit
9 out of 10 times it's user error, like not plugging it all the way in, or reusing an old cable where one pin may have stealthily slipped out of place, forcing the remaining 4 or 5 wires to handle a heavier load.
Nvidia wanted to use this connector up to 600W but failed to consider one or two failing wires or connections causing overload and melting, because they were too cheap to mandate separate load monitoring.
GuyNamedStevo@reddit
The problem is the voltage regulation on the cards, not the connector.
Nuclear303@reddit
that is part of the problem. The other part is that the wires are all joined at one spot on the card - that's part of the specification. The wires having a smaller cross-section doesn't help either: if the resistance on one of the wires is higher than on the others, it will get hot, and because it's a small wire, it will just melt in this scenario
rccsr@reddit
I assume because, with GPUs hitting 600+ watts, a 4x8-pin is a bit excessive.
Might as well just reuse the connector for all the GPUs.
SpaceCadet2000@reddit
Maybe consumers as well as the industry as a whole need to realize that 600 watt GPUs are obscene and shouldn't exist. So much energy usage and heat load just to play a game is extremely wasteful.
They can't just keep upping the power each generation anyway, it's completely unsustainable, and only used as a crutch to mask shrinking generational gains.
michael0n@reddit
Wait for the review from Gamers Nexus "You need two 4x8pin on each side of the card and a 1200W psu to make it work. Welcome to PC gaming 2027. It gets around 50fps with raytracing on 4k".
KillEvilThings@reddit
Well on the other hand, 4k lol. Also lol raytracing, the world's most overhyped, inefficient, insanely well marketed bullshit.
I'm of the mind that, when I can one day play native 1440p at 200+ FPS with full pathtracing, THEN I'll give a shit. Because I need DLSS just to play with full PT/RT, and usually FG on top of that, which ultimately squeezes out all the fine minute details that are the whole fucking point of RT. So it's like, why the fuck are we even bothering.
Sure I get 100+ FPS on a Ti Super at 1440p with DLSS Quality + FG on 2077 at max. But 30-40FPS native with no AI bullshit looked 100X more beautiful.
613_detailer@reddit
Most PSUs don't have four independent 8-pin connectors either.
Noxious89123@reddit
All of the ones you'd be using with a 600W graphics card do.
No one (that isn't a moron) is using a <600w PSU for a card that needs 400w+ of power.
613_detailer@reddit
Most have a maximum of three cables, with one or two splitting into two connectors at the end. I've never seen one with four totally independent cables.
Some manufacturers (e.g. Corsair) state that the split cables can be used for up to 150W per connector, for a total of 300W per cable. Not all PSU manufacturers are that clear, so I'd rather not use the split pigtails if I can avoid it.
bassgoonist@reddit
You don't need separate cables. Any remotely good PSU comes with heavy enough wire for the double-ended ones to be fine.
dabocx@reddit
It lets the pcb be smaller
MadShartigan@reddit
It lets the card and cables be prettier. A triumph of form over function.
alc4pwned@reddit
Not really, the issues with the function aren't because of how small the connector is. It should be totally possible to make a connector of that size work.
FragrantGas9@reddit
The connector is one thing, but doing power monitoring and balancing requires some space on the PCB after the power connector. There was absolutely no room for that on the 5090 FE PCB. I think that decision to make that cooler ended up trickling down for all the 5000 series. They could have changed course after the issues with 4000 series but stuck with it to make that FE design possible.
MadShartigan@reddit
Sure, in a manufacturer-controlled environment, which PC assembly is not.
The old 8-pin connectors had a safety factor of 1.68 for 150W. The newest 12V-2x6 connectors drop that safety margin to 1.14 for 600W. It's asking for trouble.
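The arithmetic behind those safety factors can be sketched as below. The per-pin current ratings (7 A for 8-pin Mini-Fit style terminals, 9.5 A for 12V-2x6) are commonly cited figures, used here as assumptions rather than spec quotes.

```python
# Safety factor = connector current capacity / power it is specced for.
# Per-pin amp ratings below are commonly cited figures (assumptions).

def safety_factor(power_pins, amps_per_pin, volts, rated_watts):
    """Ratio of what the pins can physically carry to the spec rating."""
    capacity_w = power_pins * amps_per_pin * volts
    return capacity_w / rated_watts

# PCIe 8-pin: 3 power pins at ~7 A each -> 252 W capacity vs 150 W spec
pcie_8pin = safety_factor(power_pins=3, amps_per_pin=7.0,
                          volts=12, rated_watts=150)

# 12V-2x6: 6 power pins at ~9.5 A each -> 684 W capacity vs 600 W spec
v12_2x6 = safety_factor(power_pins=6, amps_per_pin=9.5,
                        volts=12, rated_watts=600)

print(round(pcie_8pin, 2))  # 1.68
print(round(v12_2x6, 2))    # 1.14
```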
FragrantGas9@reddit
And that was necessary for the 5090 FE dual pass through cooler design. I have the feeling that design was some executives pet project or ‘baby’, and that heavily weighed into the decision not to revise the power balancing requirements, because it would have made the 2 slot dual pass through cooler design on the 5090 FE impossible.
viperabyss@reddit
It's a bad connector that literally melts, ONLY IF people don't plug it in properly.
12VHPWR is part of the PCIe 5 standard published by PCI-SIG. It reduces the space needed for the connector (just compare a single 12VHPWR to 4x PCIe 8-pin), and reduces the space needed for the VRM/MOSFETs.
If only people knew how to plug a connector in.
erasedisknow@reddit
The connector isn't what's causing the melting, it's how the cards are wired immediately after it connects to the board.
All of the power pins get funneled into one line, so if something goes wrong, you're potentially shoving the GPU's entire power draw down one pin of the cable.
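To illustrate why the merged rail matters: a card that ties all pins to one rail can only measure the total current, so a badly skewed load looks identical to a healthy one. A hypothetical sketch (not any vendor's actual firmware; the 9.5 A per-pin limit is an assumed figure):

```python
# Hypothetical sketch: two very different situations are
# indistinguishable to a design that merges all pins into one rail.

def rail_total(pin_currents_a):
    """All a single merged-rail design can measure."""
    return sum(pin_currents_a)

def within_limits(pin_currents_a, per_pin_limit_a=9.5):
    """What per-pin shunt sensing could check (limit is an assumption)."""
    return all(i <= per_pin_limit_a for i in pin_currents_a)

balanced = [8.3] * 6                        # every pin within limits
skewed = [14.0, 13.0, 12.0, 5.0, 3.0, 2.8]  # three pins overloaded

print(rail_total(balanced))   # ~49.8 A - looks fine
print(rail_total(skewed))     # ~49.8 A - looks identical
print(within_limits(balanced))  # True
print(within_limits(skewed))    # False
```

With only the merged total available, the firmware has no signal to throttle on until something is already cooking.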
FragrantGas9@reddit
The dual pass through 5090 FE design was probably some high up executives ‘baby’ at Nvidia and that required no power balancing because there’s absolutely no room for it on the PCB.
Nvidia engineers may have suggested changing the power delivery design to improve safety but may have been overridden by an executive who demanded that the 5090 FE with 2 slot cooler be put in production. So they approved the power connector and delivery again for that card, and that essentially meant it was ok for any card, since it’s expected to handle 600 W on the 5090. And the AIB partners are happy to continue with that because it’s cheaper for them to produce.
Dyrosis@reddit
The connector is a bad design because it's harder and more expensive to add the additional power infrastructure in the tiny area that connector takes up on the board.
Force board makers to spend more board space accommodating the power delivery and maybe they'll allot some of that 'dead' space to functional load balancing.
erasedisknow@reddit
The connector itself may be flawed but the internal wiring of the card itself definitely isn't helping things.
https://youtu.be/kb5YzMoVQyw?si=ynMdWLHOtgDm2Bvn
Dyrosis@reddit
iirc Nvidia doesn't specify how the power delivery systems should be handled, only the quality and type of power delivered to the computational/memory units. The power conditioning and load balancing is all on the manufacturers.
It's similar to how Intel doesn't handle the power delivery for the CPU, only specifies the quality and types of power required, the rest is on the motherboard.
Just the CPU-mobo power delivery is customer facing, as it's 2 separate purchases and can be a marketing point, whereas the GPU power delivery is not.
erasedisknow@reddit
So then why do their board partners keep building their cards with the same flaws instead of internally wiring them like the 3090 Ti?
beefcreamgarlicbread@reddit
Because despite what all the know-it-alls here like to say, it's literally the PCI-SIG spec for 12VHPWR and 12v-2x6, it's not just an nVidia thing. AMD cards that use it will be wired up the same way.
Dyrosis@reddit
Cuz it's cheap and easy, and they try to jam all the power intake into a small area around the connector. Use bigger connectors and maybe they'll spend some of the resulting 'dead' space on load balancing. Refer to my first reply.
viperabyss@reddit
Imagine fixing all these with one simple trick: just plug it in properly.
And funnily enough, none of the datacenter and enterprise customers have issues with 12VHPWR. Only a very small population (~0.04%) of consumers have this problem.
erasedisknow@reddit
The datacenter cards are likely wired differently inside to help balance the load better, because Nvidia can't fuck them over without losing their main source of revenue.
viperabyss@reddit
Highly unlikely, as the board sizes are very similar.
crazydavebacon1@reddit
I'd say most of it is user error: 1) buying junk card brands, and 2) not plugging it in correctly.
chi_pa_pa@reddit
8-pin pcie connectors are typically like this too as far as I can tell... The pins are just way thicker, and it's way less total power per connector
jasons7394@reddit
Not really. Power is split over different cables so it can't push everything down one wire.
Plus they can safely handle 300W on their own to handle any spikes.
Noxious89123@reddit
Clever load balancing is far less necessary when the connector has a 100%+ safety margin.
The 12V-2x6 connector has functionally zero safety margin; even on paper it's only ~14%.
FragrantGas9@reddit
I have a personal conspiracy(?) theory about it.
A high up executive at Nvidia basically fell in love with a pre-production design mock up of the current 5090 with the double blow through cooler. That design necessitates the 12v2x6 connector, without any power balancing circuits on the board, because the PCB has to be small enough to fit between the twin pass through fan solution.
Surely there was talk of changing the connector or adding balancing/monitoring circuitry, but they knew it would not be possible on that 5090 design. An executive weighed in the decision and pushed forward with the same connector to make that 5090 FE cooler design possible.
All the other cards in 5000 series are basically victims of the 5090 FE design requiring that power setup. Once they were able to say it is “safe” enough for a 600 watt card, that cheap, unsafe connector and power balancing design was considered “OK” for all the other models. And AIB board partners were happy to comply with that because it’s cheaper for them to produce.
Plane-Inspector-3160@reddit
Why not 2 to offset the 600w going through one of them
courage_and_honour1@reddit
This is easy, and most importantly: demand by far outweighs supply. No matter what, people will still buy a 5090/5080, as the alternatives are less preferable in terms of performance.
IrrationalRetard@reddit
I see you're also a Buildzoid enjoyer 😎
Ok-Ruin4177@reddit
It looks cleaner than using multiple connectors and looks sell. Notice how almost every new case has a glass side panel.
olov244@reddit
amd board partners are trying to jump on it too, it's sad
it will be funny when nvidia jumps ship and amd is on it
ahoypolloi69@reddit
Here's a perspective. When I built my system last year with a 1200W Thermaltake and a 7900 XTX, I had to use a piggyback for two of the 8-pin connectors. The supply has the 12VHPWR connector but fewer 8-pins, which caused the problem. It works only because it's a large supply with thicker-gauge cabling; that piggyback wouldn't work with a sub-1000W supply that uses thinner wire.
A move like this could be anti competitive, and will restrict people from upgrading to a 3x8 pin if they have a newish supply with fewer 8pin connections.
All the people building systems today with a power supply that has 12v connector and only 2 8-pin, are going to be restricted from upgrading to a high end AMD offering if AMD doesn't switch.
autobulb@reddit
Semi related: why hasn't the power output of the PCI-express port itself gone up over time? It's been 75 watts as far back as I can remember. Not safe enough to pump more through the motherboard?
ime1em@reddit
Maybe they want more cards to break, including the workstation ones that use the 12V cable, so that's more money for them.
Yasuchika@reddit
It can't be anything other than lower cost at this point.
HorrorsPersistSoDoI@reddit
Money. Money is always the answer to any question.
KFC_Junior@reddit
because 12VHPWR is better than traditional 6+2s; the issue stems from the 90-class cards drawing so much power while having zero safeguards.
Some advantages: all cards use the same connector; no need to check how much power your cable can deliver (if I was using two 6+2s I wouldn't be able to OC my 5070 Ti with a flashed BIOS to draw 370W); it's smaller; it's cheaper for Nvidia lmfao; and it looks better since it's one smaller cable.
valqyrie@reddit
Objectively false. There are people using 250-300W cards with this type of connector who report melting. 12VHPWR in its current application is significantly inferior to the tried-and-true 8-pin connectors.
Also, I OC'd my 7900 XT to draw 390-395W on 2x8-pin and used it for two years; your 5070 Ti would do just fine if the PSU and cables aren't some trash-quality ones.
Last but not least: looks alone aren't worth having a fire hazard in a PC case.
KFC_Junior@reddit
Where are the 250-300w cards that melted, I havent seen any. Yes you can prolly run 400w through 2 6+2's but is it a good idea? No not really. Ive seen those melt as well
valqyrie@reddit
Literally saw a 4070ti or something like that (a sub 300w card) had melted connectors yesterday. With a little bit of search you can find it.
cowbutt6@reddit
There was a 5070ti the other day.
RoawrOnMeRengar@reddit
You would totally be able to draw 400w out of 2 8 pin safely. Also for aesthetic, my 3x8 cable extension looks much better than any 12vhpwr cable I've seen.
KFC_Junior@reddit
i use strimers but the 12vhpwr one being so slim into my gpu looks a lot better than something the size of my atx plug wouldve been
gbxahoido@reddit
Because it's necessary.
An 8-pin can only deliver 150W while a 4090 under full load draws 450W; triple 8-pins would require a larger PCB, more cables, less airflow...
Stronger GPUs draw more power (a 5090 at full load draws 575W), so sooner or later they'd have to abandon 8-pins and use something that can deliver much more power.
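The connector-count arithmetic from that comment, as a quick sketch (assuming 150 W per 8-pin per spec, and 75 W supplied by the PCIe slot):

```python
import math

# How many 150 W PCIe 8-pin connectors a card would need at a given
# board power, after the 75 W available from the slot. The 150 W and
# 75 W figures are the spec ratings, not headroom-adjusted numbers.

def eight_pins_needed(board_power_w, slot_w=75, per_connector_w=150):
    return math.ceil(max(board_power_w - slot_w, 0) / per_connector_w)

print(eight_pins_needed(450))  # 4090-class board power -> 3 connectors
print(eight_pins_needed(575))  # 5090-class board power -> 4 connectors
```

Which lines up with the four 8-pins mentioned elsewhere in the thread for a 575 W card.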
RoawrOnMeRengar@reddit
The 8-pin is rated for 150W but can safely deliver more than that pretty easily; there are many overclocking BIOSes for the 7900 XTX, like the XFX and ASRock ones, that let you push power draw to 550W+.
LOSTandCONFUSEDinMAY@reddit
There's also the 8pin EPS connectors (cpu power connector) that can do 300w.
And nvidia has used it on their server GPUs before.
Xccccccrsf@reddit
The EPS connector is the sole reason the 150W spec isn't relevant anymore: it has just one more 12V pin, but double the power rating? Doesn't sound right. Especially when people (miners) ran up to 400W through 8-pin PCIe (HCS Plus terminals) long-term without issues.
LOSTandCONFUSEDinMAY@reddit
It's to do with most PCIe 8-pin cables being overbuilt beyond what the spec assumes.
The PCIe 8-pin spec is based around 20-22 gauge wire, but most cables now use 16-18 gauge. Since this isn't enforced, the official spec remains at 150W; technically a single 16-gauge wire can transfer the full 150W.
EPS enforces stricter specifications so can go closer to the actual limit of cables being used. I agree it's strange why pcie remains the main GPU connector.
Funny thing about 12vhpwr, the wires can handle over 900w however the connector is only rated at 660w. This is backwards from almost every other power standard where the wires are the failure point.
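A rough sketch of that gauge math, using commonly quoted chassis-wiring ampacities per AWG. These figures are assumptions (real limits depend on insulation, bundling, and length), not numbers from any spec:

```python
# Approximate chassis-wiring ampacity per AWG (assumed figures).
AMPS_BY_AWG = {22: 7.0, 20: 11.0, 18: 16.0, 16: 22.0}

def wire_power_w(awg, volts=12.0):
    """Rough max power one wire of this gauge can carry at 12 V."""
    return AMPS_BY_AWG[awg] * volts

# A single 16 AWG wire comfortably exceeds the full 150 W spec...
print(wire_power_w(16))      # 264 W
# ...while the 20 AWG the spec was written around cannot:
print(wire_power_w(20))      # 132 W
# Six 16 AWG conductors, before any bundling derate, land well past
# the ~900 W figure mentioned above:
print(6 * wire_power_w(16))  # 1584 W
```

Which is why the wires outlast the connector: the failure point in 12VHPWR is the contact interface, not the copper.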
Legal_Lettuce6233@reddit
Most of these people are wrong and talking about shit as if nVidia is evil and stupid.
Fewer connectors means much simpler designs in terms of power delivery; easier to scale up or down. Simplifies VRMs too.
Simplified PDNs also allow for smaller PCBs, less heat output, and are just easier to use.
People here, myself included, are VASTLY underqualified to even attempt to cover everything at play here. If they were qualified enough, they'd work for Nvidia.
Here's a good read regarding power on GPUs.
https://www.monolithicpower.com/en/learning/resources/predictive-transient-simulation-analysis-for-the-next-generation-of-gpus
ScimitarsRUs@reddit
People will still buy the cards. They need incentive.
Tunir007@reddit
Just wanted to ask if the connector on the Sapphire 9070 XT Nitro+ is the same or different, because I've heard some people say it's not as bad as the Nvidia ones. I'm seriously considering buying that card, but idk if I should just get a non-OC 2x8-pin model like the Pulse instead.
flesh0119@reddit
It uses an extra 8pin iirc.
No-Upstairs-7001@reddit
It opens up a whole new market in power socket water cooling 🤣 or fire suppression
_Metal_Face_Villain_@reddit
i actually don't know but if i were to guess I'd say that it probably makes the card very slightly cheaper to make
MasticationAddict@reddit
For their highest end cards you need 4 of those connectors to meet power requirements, and this forces boards to be larger just to fit four bulky connectors and route the traces for them
If they push the connector specifically for their top end cards the problem arises that fewer options will be available to supply to them
It was actually Intel that designed the connector and lobbied for its adoption, whereas NVidia has been spending a lot of money trying to fix the connector and it's likely that sunk cost that is preventing them from dropping it
prrifth@reddit
I have two cards that use 12VHPWR and haven't had any issues. Most times I see someone post their melted connector it's a third party cable, and most of the ones that aren't third party are improperly seated cables. Even the old 8 pin connectors would have some failure percentage, so the fact that there are some OEM cables seated fully that melt doesn't show that it's happening more frequently than with 8 pin, I'd really want to see some actual stats on failures with properly seated OEM cables for both 12VHPWR and 8 pin before I believe anything.
The amount of dumb shit I see people posting like no thermal paste on their CPU, thermal paste in the LGA, stickers left on their CPU heatsink, CPU fans not plugged in, monitor cable plugged into the motherboard - really makes me wonder how much of the issue is PEBKAC.
Gold-Program-3509@reddit
I'm sure twitch kids don't push it in fully... I mean, it's the vendor's fault, but it's somewhat user error also
SpaceCadet2000@reddit
Demonstrably false. Or are you suggesting that Der8auer doesn't know how to plug in a connector?
cowbutt6@reddit
My take is that a change to anything that isn't obviously more capable would be seen as an admission by Nvidia that there are things wrong with the 12VHPWR / 12V-2x6 connector and their implementation of it. And that in turn opens an invitation for legal action, both from consumers and partners.
Norgur@reddit
The connector isn't the issue. Having Zero load balancing is.
YuYuaru@reddit
With how much these cards consume, it's the best way they could think of for now. Imagine a GPU with four 6+2 pins in an 11L case. I had to twist my hands around just to build a 5070 Ti in a CH160; I can't imagine people fitting four 6+2 pins in a Fractal Terra case.
alancousteau@reddit
If you don't understand something in today's world just look at the cost and 99% of the time you will understand.
skinnyraf@reddit
The current architecture of power delivery to PC components is just not ready for 0.5+ kW power draw. 12vhflr is just a hack to allow such insane wattage without redesigning everything, e.g., by increasing voltage delivered to graphics cards.
911NationalTragedy@reddit
More aesthetically pleasing. Fewer steps to plug in for newbies. Probably cheaper to produce. And it carries more power in one lane.
iKeepItRealFDownvote@reddit
Don't need to have multiple wires running through the system, only gotta worry about one, and it looks way better than having two/four.
ArchusKanzaki@reddit
So in the future, they can stick 2 of them to make a 1200W GPU
Shadowcam@reddit
My guess is they wanted something smaller and flexible to go with some of their board/cooler designs; but they were too cheap to add any real safety measures, or to double the cables on the higher wattage cards. There's also the fact that other companies are involved in the spec too, and pretending there's nothing wrong is easier than rattling the supply-chain.
At best we might see a gradual rollout of more boards with monitoring, like the Asus Astral, with model refreshes. I wouldn't expect any serious change though. Next gen, it's possible they build more involved safety precautions into the cards; that's a lot more practical than trying to go backwards with connectors when some power supplies are now sold on the new spec by default.
jessecreamy@reddit
People telling me they don't like this connector doesn't mean it's a bad connector.
A random guy on Reddit accusing Nvidia's PCB designers of "ruining their reputation"...
Exostenza@reddit
My best guess is because they are cutting costs by making the PCB as small as possible and having one tiny connector allows them to have such a small PCB. Just look up an image of the 5090 PCB and you'll see what I mean. Honestly, I have no doubt that it's about cost cutting because there really isn't any other upside to it.
cheeseypoofs85@reddit
A couple reasons: they wanna feel warm and fuzzy inside for creating something new, and they don't wanna look stupid for switching back after sticking with it when it failed MISERABLY in its first generation.
belinadoseujorge@reddit
looks like they're getting Apple's disease, like: "oh, we need so badly to push 600W through this small area" (remember when Apple was obsessed with USB-C?) "and we're gonna do it; doesn't matter if it upsets our customers by making them buy new power supplies or potentially burns down their homes. We're gonna do it because we have money, and if we have money we can do anything"
Unique-Client-4096@reddit
Apple isn't obsessed with USB-C. USB-C is the industry standard for charging phones and other small devices. In fact, if Apple had it their way, they would never have transitioned to USB-C, as it's not their creation.
You're likely thinking of their proprietary Lightning cable, which they used until the EU forced them to start using USB-C.
ArgentNoble@reddit
Actually, Apple did create the USB-C. They are members of the USB-IF. In fact, their Lightning and Thunderbolt connectors are USB-C connectors, just reversed (the male and female connectors are swapped from a standard USB-C implementation).
The reason Apple didn't want to switch to USB-C is that they don't fully own the patent; it belongs to the USB-IF as a whole. They fully own the Lightning and Thunderbolt connectors and can charge licensing fees to third-party manufacturers.
OzymanDS@reddit
No, he's thinking of the MacBook line, which, in the generation before the M1 chip, had a design whose only ports were four USB-C and one 3.5mm jack. Truly cursed dongle dependency.
ed20999@reddit
cost cutting
laci6242@reddit
sunk cost fallacy
Effective_Top_3515@reddit
To make it look nicer in their marketing slides. Doesn’t matter if it’ll potentially start a fire. Just make sure you’re next to your pc when it happens so you can unplug the psu cable asap