PCIe 7.0 is launching this year – 4x bandwidth over PCIe 5.0
Posted by gurugabrielpradipaka@reddit | hardware | View on Reddit | 184 comments
Unplayed_untamed@reddit
I literally haven’t seen a single pcie 6.0 device. And pcie 5.0 is often not even on every port on a motherboard and also shares lanes. I genuinely don’t think faster ssds are even going to matter. If anything due to windows getting worse and ddr5 being annoying, my computer feels slower now than it did 5 years ago on windows 10. Also, the same problem occurs with internet speeds, like who needs 5/10 gig lan? What provider even has that? Most cities hardly have 1gig lan. Maybe if you have a NAS and sync to your computer over Ethernet?
psydroid@reddit
Providers in Europe are starting to offer 8-10 Gbps internet. It's really not far off and still many times slower than your hardware can handle.
Tomi97_origin@reddit
Because they don't exist yet.
The reason these standards exist is that enterprise customers need them, just as they needed faster LAN.
hackenclaw@reddit
That could be useful if NVMe SSDs that use a single PCIe lane existed.
seklas1@reddit
Didn’t even know PCIe 6 was a thing already. Cool stuff. Wonder when we will need that in consumer electronics.
constantlymat@reddit
Even PCIe 5.0 in consumer electronics is still struggling to offer real world benefits to users.
We tried a 4TB Corsair MP700 Pro for video editing in the office vs our WD SN850Xs and we really couldn't tell the difference.
We still had an old machine with a Sandisk Extreme Pro PCIe 3.0 one and we can tell that one is slower but between PCIe 4.0 and PCIe 5.0 in a typical workflow it just didn't offer a tangible benefit.
Meanwhile the 4TB WD SN850X cost €200 apiece as a bulk purchase, and the Corsair €600.
kingwhocares@reddit
Honestly, the benefit I see is from expansion cards. Imagine putting an M.2 expansion card on a PCIE 6.0 x1 slot with PCIE 4.0 m.2. Or something like a USB expansion card.
akuto@reddit
This. With how frugal new mobos are when it comes to drive connectivity, the only way I see myself upgrading to a new CPU is with an expansion card with 6 or more SATA ports or lots of M.2 slots. I've had too many SSDs fail to trust a single drive with anything.
NeverDiddled@reddit
It is interesting to me you have had so many SSD failures. I have the opposite anecdote. I have been running SSDs for nearly 15 years now, starting in Raid 0. I have never had a single failure, and my longest run was 12 years on a Raid 0 array. I've now owned 10 different SSDs, 7 of them are still used daily, and the remaining three were only retired when I ran out of SATA ports.
Meanwhile I have had platter drives fail left and right in that time. I believe 6 failures. It is just funny how easy it is for anecdotes to differ.
Scrimps@reddit
Same with me. I am a computer engineer and have been working in tech since '04. I have had SSDs in my computers since they first became commercially available.
Never had a failure.
I still have an Intel X25-m that was used and reused as a main drive or a cache drive that is completely functional.
akuto@reddit
Maybe I'm just cursed or something. At work I've only seen classic HDD drives fail, but at home it's the opposite. I still have a working PC with a 2002 HDD.
One SSD that died failed so bad the PC didn't even want to boot when it was connected, which resulted in a 2TB paperweight.
topazsparrow@reddit
I'd be running your computer off a double conversion UPS to clean the power with that many failures. That's sketchy.
Melbuf@reddit
How's the power in your area? That's the only thing I can think of. Do you use a UPS?
akuto@reddit
The power is stable, but I'm not using a UPS. All PC shutdowns were graceful though. I'm using a Corsair Rx PSU connected to a high-quality Brennenstuhl surge protection strip, so it should theoretically handle filtering any noise well.
But I was planning on getting a UPS anyway, to have more peace of mind during BIOS updates, so maybe it will help. Thanks for the suggestion.
chx_@reddit
As an aside, since this is a hardware sub: Brennenstuhl has an ingenious power strip which plugs into a UK power socket on the wall and provides four UK and two EU sockets. I live in Malta; it's indispensable, as the wall is UK but many, many devices are EU, and I loathe converters, even the proper ones you screw on.
LowSkyOrbit@reddit
I've had only 2 SSDs die. Both were used as cache in a Synology NAS. My own fault, because they were small drives and not the NAS-grade ones Synology typically calls for.
SeventyTimes_7@reddit
I have a friend still using a crucial M4 that I bought in 2011. And I’ve been in IT for 10 years and never had an SSD fail in the datacenter or end user devices.
Calm-Zombie2678@reddit
I've never had an HDD fail without it being physically damaged (like being dropped).
I have a 1TB HDD in my OG Xbox that I bought in 2009. It spent 5 years in my main rig before I used it in an external enclosure, then I modded my Xbox during COVID, and she still goes.
akuto@reddit
I've seen 2 HDDs die in my work NAS, from two different manufacturers and both from lines supposedly fit for NAS use. But that was back when I had no idea of the distinction between SMR and CMR drives.
Now we are on WD RED CMRs and they have been stable for a few years, but I don't know if it's just a roll of the dice or if there really is a difference in reliability.
Calm-Zombie2678@reddit
Yeah, my sample size is fairly low. The five 4TB drives I have in my NAS have been chugging along since 2018. There were 6, but I dropped one putting them in a new system (upgraded from a 2009 Xeon to a Ryzen 3600).
kwirky88@reddit
I’ve had about a 10% failure rate by year 5ish of operation, dating back to about the early 2010s.
Top-Tie9959@reddit
I remember early SSDs seemed to have somewhat bad controller-firmware-related failures. It was interesting: everyone was worried about the flash getting burned up from too many writes, but the most common failure was SandForce controllers shitting the bed.
Alpacas_@reddit
This. I ended up getting an X670E.
I don't need 4x M.2 and 6 SATA, but I have it.
I was upgrading from an i5 3570K to a 7950X, and that was nice.
Justhe3guy@reddit
At that point it may be your motherboard destroying your SSD’s
RobsterCrawSoup@reddit
The other thing I could imagine benefiting consumer devices is that having more data over fewer lanes could help eGPU setups.
GreatNull@reddit
They are unfortunately extremely unlikely to materialize, and if they do, host card pricing will be unpleasant, despite the obvious utility for us consumers and prosumers.
Why? The host card would have to be an active one with a PCIe switch onboard enabling this 1xGEN6 <---> 16/32xGEN4 operation, and those are ungodly expensive even for ancient PCIe gens. I don't even know if there are any that would support switching between different PCIe gen signalling.
Current and much simpler designs for PCIe 4 usually start at 600 USD and up. Only limited PCIe 3 based adaptors are reasonably priced. Look up HighPoint adaptors; they are among the cheapest ones.
A much saner, cheaper, and likelier solution would be the proliferation of x1 slots and x1 controllers. No MB vendor seems to be implementing this, nor storage controller vendors. Samsung might be the first, with a 4xG4/2xG5 controller hybrid.
kingwhocares@reddit
Do they actually need to if it's the entire motherboard that uses PCIE 6.0?
razies@reddit
It's just not how PCIe works. Imagine an x16 connection like 16 wires. You can route all 16 wires to a single device, or you can split it into 4 connections, each with 4 wires. You can do this split passively as long as you tell the BIOS to bifurcate the link into 4 devices.
But you can't passively change the speed that a wire operates at. Both ends need to agree. In your example, you would want 4 M.2 slots with PCIe 4.0. That's 16 wires running at PCIe 4.0 speed on one end. But the other end is supposed to be PCIe 6.0 speed with fewer wires. That upgrade from 4.0 to 6.0 speeds requires an active PCIe 6.0 capable chip.
That's also why we don't see many x2 PCIe 5.0 SSDs. Because if you plug those into a x4 PCIe 4.0 slot, you will get x2 PCIe 4.0 speeds (bad). If you plug them into a x4 PCIe 5.0 slot, you are wasting 2 lanes of PCIe 5.0. It's a chicken and egg problem.
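Rough numbers to make this concrete (per-lane rates from the published specs; the Gen 6/7 effective figures are approximate):

```python
import math

# Approximate usable bandwidth per lane, per direction, in GB/s.
# Derived from published per-lane transfer rates; treat as ballpark figures.
PER_LANE_GBPS = {
    "3.0": 0.99,   # 8 GT/s, 128b/130b encoding
    "4.0": 1.97,   # 16 GT/s, 128b/130b
    "5.0": 3.94,   # 32 GT/s, 128b/130b
    "6.0": 7.56,   # 64 GT/s, PAM4 + FLIT (approximate effective rate)
    "7.0": 15.13,  # 128 GT/s, PAM4 + FLIT (approximate effective rate)
}

def link_bw(gen: str, lanes: int) -> float:
    """One-direction link bandwidth in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

# Passive bifurcation only redistributes lanes at the SAME generation:
assert math.isclose(link_bw("4.0", 16), 4 * link_bw("4.0", 4))

# Matching 4.0 x4 devices to a 6.0 x1 uplink means a different per-wire
# signalling rate on each side, so it needs an active switch:
print(link_bw("6.0", 1))  # ~7.6 GB/s on the uplink wire
print(link_bw("4.0", 4))  # ~7.9 GB/s on the device side
```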
kingwhocares@reddit
I was talking about something like this Reddit post or this YT video where M.2 SSDs are attached to a PCIe slot. All motherboards already come with a PCIe x1 slot, hence my assumption.
jamvanderloeff@reddit
That's not doing any version conversion there, that's just a straight cable of x1 slot to x1 at the drive, so if you've got only a PCIe 4 drive you only get PCIe 4 x1 speeds to the motherboard, even if the motherboard could've had a PCIe 6.0 x1 slot that matches the full speed of the PCIe 4 x4 the drive wants.
kingwhocares@reddit
I was talking about motherboard that supports PCIE 6.0. That PCIE x1 slot would then finally be attractive.
jamvanderloeff@reddit
Only if you've also got a PCIe 6 drive but don't want the speeds a full PCIe 6 x4 link could get you
razies@reddit
Both my comment and the top comment in that Reddit thread answer your question.
You can either passively use bifurcation to distribute the same number of lanes, or, if you need to increase the number of lanes (which you will if you want to use a PCIe x1 slot for multiple SSDs), you need an active PCIe chip in there. The latter is gonna be more expensive than the former.
Cohibaluxe@reddit
While I totally agree this makes sense in theory, in practice this requires a switch to perform the signalling and those are not cheap, and are unlikely to ever become cheap.
We could see it as a feature on top-tier motherboards costing thousands of dollars (which, to be fair, is also the market segment that is clamoring for more PCIe expansion slots the most) but don’t expect to see it on mainstream consumer boards.
Top-Tie9959@reddit
It's worth noting switches were fairly cheap back in the PCIe 1.0 days. Motherboard makers used to include them on high-end boards for more connections. Then Broadcom bought out the manufacturer of the best ones (PLX) and did what Broadcom does, which is jack the prices to the moon to screw over the captive server market.
ASMedia has started to fill the gap somewhat, but that has taken years.
mckirkus@reddit
It's also for laptops. A GPU running in an x16 PCIe Gen 5 slot only needs four lanes on PCIe Gen 7 (per-lane bandwidth doubles twice from Gen 5 to Gen 7, so x16 becomes x4).
Using fewer lanes apparently cuts down on power requirements.
COMPUTER1313@reddit
Especially useful for ITX desktops and laptops.
GhostReddit@reddit
PCIe 5 is for enterprise SSDs.
GPUs don't need it, and they usually have 16 lanes anyway. The standard for SSDs is x4 though and with 50-100+ TB drives coming out the interface speed is starting to become a limiting factor. Your 8TB M.2 card isn't going to run into this problem, they don't have the power or the parallel channels to be able to push data this fast.
future_lard@reddit
I'd rather have 4x more PCIe 5 lanes than PCIe 7.
seklas1@reddit
I think the problem with PCIe 5 SSDs is that they are using PCIe 4 controllers still. So whilst the initial read speeds are around 14GB/s, once cache fills up, it slows down to the same speeds as PCIe 4.0 drives. Might be incorrect now, but that’s the only thing I remember reading about them when the first PCIe 5.0 drives launched.
GreatNull@reddit
That's not true; what you are seeing is SLC cache exhaustion in play.
It affects all TLC/QLC based SSDs in the consumer market. It can be avoided by using enterprise-grade SSDs, which have much larger caches available for performance stability reasons.
If "PCIe 5 SSDs" were using PCIe 4 controllers, you would never be able to reach 14GB/s transfer speeds at all.
gomurifle@reddit
So you are saying they need to release gen 5 SSDs with bigger cache then...
GreatNull@reddit
Yes, and it's not happening, for cost-optimization reasons unfortunately.
The entire product design is based around this, and the only way to "fix" it would be to massively over-provision the drive with additional NAND flash, which would completely eat up the margin or push the end price up massively, most likely both.
There is no market for such a product, and AFAIK there has not been a single consumer SSD design that goes against this design principle.
Given all that you can:
constantlymat@reddit
I only know the Corsair one was supposedly "the best" one available at the time of testing on the German market.
Omotai@reddit
This is why I was happy to see the 990 EVO come out. The one that can operate in both 4.0 x4 mode and 5.0 x2 mode. The benefit from a 5.0 x4 drive over a 4.0 one is basically impossible to realize in the real world unless you have an extremely unusual workload, so I'd personally rather just see drives come out that use fewer lanes since PCIe lanes are a precious commodity on consumer platforms.
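For reference, the arithmetic behind that dual-mode design, as a quick sketch (spec-sheet per-lane rates, ballpark only):

```python
# Per-lane bandwidth (GB/s, one direction), 128b/130b encoding applied.
GEN4_LANE, GEN5_LANE = 1.97, 3.94

print(f"4.0 x4 mode: {GEN4_LANE * 4:.1f} GB/s")  # ~7.9 GB/s
print(f"5.0 x2 mode: {GEN5_LANE * 2:.1f} GB/s")  # ~7.9 GB/s, same drive, half the lanes
```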
Stahlreck@reddit
Wouldn't the most obvious benefit be that at some point pretty much everything any consumer would ever need could just be an x1 lane? So you could have more lanes for other stuff?
jamvanderloeff@reddit
Mainstream platforms already have more lanes than most consumers know what to do with.
ghenriks@reddit
If you're talking about people with a GPU and one storage device, sure.
But look at any current motherboard's specs, and all the caveats about features being downgraded due to PCIe lane scarcity once you use 3 drives and more than one PCIe slot say that many of us don't have more lanes than we need.
upvotesthenrages@reddit
Sure, but this is the exact same shit people said about PCIe 4.0 and 3.0.
It's not about the average user today, but the extreme user. In 5 years the extreme user of 2024 is the average user of 2029.
Massive GPU, multiple SSDs + whatever else you can pin on there. But in 2029 the average stuff will be equivalent to the super extreme user in 2024.
account312@reddit
I doubt it. It's not the '90s anymore.
doscomputer@reddit
I mean, one gen to the next isn't going to be a massive difference at all, especially when the SSD controllers on the market aren't at full gen 5 performance yet.
The performance benefits are real, but I have a hard time believing you hit 7GB/s with video editing, let alone the ~15GB/s that PCIe 5 tops out at. Unless you're trying to scrub through 15 minutes of uncompressed 4K footage at a time, you would never see a difference.
I personally see the difference plainly in game load times, comparing PCIe 4 NVMe to SATA3 SSDs, so 7GB/s does make a difference compared to 0.3GB/s. When we actually are at PCIe 7 and have nearly 50GB/s of bandwidth, and therefore much lower latency, I think people will again see the reason to upgrade. And at 50GB/s, that's literally approaching DDR4 bandwidths. Even if video editing isn't so IO heavy, a lot of other applications will thrive once we finally get those speeds. I'm betting 2030 at the soonest.
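Back-of-the-envelope on when you'd actually feel it, assuming a hypothetical 20 GB of game assets and a pure sequential read (real loads are usually CPU/decompression bound, so the gaps shrink further):

```python
# Best-case time to read a hypothetical 20 GB of assets at various link speeds.
ASSET_GB = 20
LINKS = [
    ("SATA3 SSD", 0.55),               # ~550 MB/s
    ("PCIe 3.0 x4 NVMe", 3.5),
    ("PCIe 4.0 x4 NVMe", 7.0),
    ("PCIe 5.0 x4 NVMe", 14.0),
    ("PCIe 7.0 x4 NVMe (future)", 50.0),
]
for name, gbps in LINKS:
    print(f"{name:27s} {ASSET_GB / gbps:5.1f} s")
# SATA to Gen 4 saves ~33 s per load; Gen 4 to Gen 7 saves under 3 s.
```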
rpungello@reddit
I'd argue newer PCIe versions aren't beneficial to consumers because they enable faster speeds, but because they can do the same speed with fewer lanes. Consumer CPUs don't have enough pins for the crazy PCIe lane counts of Xeon/EPYC, so if you could have your 8GB/s M.2 drive wired to just a PCIe 7.0 x1 slot rather than PCIe 5.0 x4, you've just saved 3 lanes that can be allocated elsewhere. Now repeat that process, and suddenly those 20-30 lanes you get on consumer chips go much further.
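A toy lane-budget sketch of that idea (the 24-lane CPU and the device list are made up, but in the ballpark of current consumer platforms):

```python
# Same devices, wired at two different generations, on a 24-lane CPU.
CPU_LANES = 24
wirings = {
    "Gen 5 wiring": {"GPU": 16, "NVMe A": 4, "NVMe B": 4},  # all 24 lanes used
    "Gen 7 wiring": {"GPU": 4, "NVMe A": 1, "NVMe B": 1},   # same bandwidth, 6 used
}
for name, devs in wirings.items():
    used = sum(devs.values())
    print(f"{name}: {used} lanes used, {CPU_LANES - used} left over")
```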
jamvanderloeff@reddit
Further for what though? The number of consumers that want anything more than GPU + maybe two SSDs + whatever the chipset provides is tiny
rpungello@reddit
Gamers yes, but many homelabbers use consumer products to build out their homelabs as they're more affordable and way less power hungry. Having extra lanes for HBAs, high-speed NICs, etc... is always nice.
jamvanderloeff@reddit
Homelabbers aren't going to want to spend the extra pricing on PCIe 5/6/7 compatible controllers
rpungello@reddit
Eventually that'll all become standard though, just like how you can't buy PCIe 2.0 hardware anymore.
jamvanderloeff@reddit
Sure you can, there's lots of PCIe 2 gear still around and even some still being produced, homelabbers buy lots of it.
rpungello@reddit
Can you link me a PCIe 2 motherboard I can buy new, with warranty, today?
Salty_Host_6431@reddit
The funny thing is that in consumer desktop use, the difference between SATA SSD and PCIe SSD performance is relatively minimal for most applications. It’s nothing like the performance jump from a HDD to a SSD. Like I have to wait an extra second or 2 for my game to load (and often not even that). Don’t even notice it. It’s not like the old days where you got a coffee while waiting for a program to load into the system.
theQuandary@reddit
The advantage is eGPU, where you're limited to only 4 lanes.
An eGPU on PCIe 7 x4 would have the same bandwidth as an internal PCIe 5 x16.
jamvanderloeff@reddit
The faster you go the less possible and more expensive the external connection gets.
theQuandary@reddit
Oculink can supposedly go around 1 meter for PCIe4.
Articles claim that PCIe 5 and 6 will both have the same trace length limitations (no idea about gen 7 yet), but I couldn't find any info about that other than that people may need to switch from oculink to another connector. In any case, 1m still offers a lot of length that can be cut before the cable is too short to be useful.
Another thing to mention is optical connections. These are really desirable in server environments, and a working group was formed in 2023 (the PCI-SIG Optical Workgroup) to make this happen. It would essentially eliminate the interference/impedance issues that are the primary length limiters, at which point the big limiters become spectrum and the communication latency that increases with fiber length.
sautdepage@reddit
Worse, it seems that with high-speed PCIe & NVMe and other stuff, lanes are being eaten up crazy fast nowadays.
There was an interesting discussion where most X870 boards on the market halve the main GPU's PCIe bandwidth if you want to populate all NVMe slots. That's nuts!
I remember on AM4 that an X-series board with the additional chipset gave you all the bells and whistles, including 8 SATA ports. Not anymore.
Here's the spreadsheet: https://docs.google.com/spreadsheets/d/1NQHkDEcgDPm34Mns3C93K6SJoBnua-x9O-y_6hv8sPs/edit?gid=755628141#gid=755628141
BlueGoliath@reddit
Hopefully motherboards start offering fancier bifurcation options. There is no reason to dedicate 16 PCIe 7 lanes to one slot when you could run it at PCIe 4 or 5 speeds and use the other slots for a second GPU or something.
ritz_are_the_shitz@reddit
I wish there was a way to merge lower-gen lanes into a higher tier, or split higher-tier lanes into lower-tier lanes. Because if I could take a PCIe 5.0 x16 slot and split it into two sets of x16 PCIe 4.0 lanes, that would be perfect. One set for the GPU, one set for four M.2 drives.
karatekid430@reddit
Um there is but you won’t like the price
gurugabrielpradipaka@reddit (OP)
I have a 2TB Corsair MP700 Pro that cost me an arm and a leg. Before that I had a Samsung 980 Pro, and still I cannot see the difference in Windows performance. And even before that I had a Samsung 960 Pro, and I saw no difference when I moved to the 980 Pro. Is Windows maybe treating all of them the same?
blueredscreen@reddit
Why do you keep buying them then?
BlueGoliath@reddit
Latency matters more for general desktop usage than raw speed.
Zednot123@reddit
Yeah, an old crusty Optane drive from 2017 will give a better Windows experience than drives with far higher raw transfer rates.
yesfb@reddit
3.0 is still more than operational in 99% of cases. There are a lot more important things about a drive than its speed.
constantlymat@reddit
I am not arguing PCIe 3.0 is not operational or serviceable for the majority of use cases.
I am just stating we were able to tell a difference in our video editing workflow.
yesfb@reddit
I think that comes down more to the speed/quality of the actual drive than the interface itself.
Intel Optane drives are far and away some of the best out there, and they're 3.0 only.
Kryohi@reddit
If you actually need to constantly read and write tens or hundreds of GBs, you're definitely going to see the difference between 3.0 and 4.0, even against Optane. Meanwhile, Optane drives are still the best option as an OS drive, for example.
Gullible_Goose@reddit
In the year between 5.0 SSDs hitting the market and me leaving my PC retail job, I never saw anyone buy any of those drives. The price for the little performance improvement you get is laughable
SpeculationMaster@reddit
Maybe instead of bandwidth they should focus on more power delivery, so we don't need PSU cables.
ResponsibleJudge3172@reddit
I blame Microsoft for taking almost 2 years to get an SDK out for DirectStorage and still only releasing a half-baked solution.
capybooya@reddit
Samsung has experimented with Gen5 x2 / Gen4 x4 drives. I suspect by Gen6 or Gen7 there will be no reason to use x4 for SSDs anymore. Hopefully those lanes can be used for other things and don't just end up being cut by AMD/Intel, though.
guzhogi@reddit
I could see two x2 drives or four x1 drives instead of a single x4 drive, giving more storage to those who need it (video editors, AI, etc.).
gurugabrielpradipaka@reddit (OP)
Me too. I have PCIe 5.0... but where is PCIe 6.0? In what hardware?
Frexxia@reddit
Data centers
youreblockingmyshot@reddit
6.0 stuff hasn’t really hit the market yet. There’s a lag between launching the spec and products coming out and being tested and added to the integrator list.
CheesyCaption@reddit
We could already benefit by making 4x slots standard and giving the rest of the mobo a bit more real estate at the very least.
Melbuf@reddit
lol this, my first thought was "so we're just skipping 6?"
DeMichel93@reddit
7.0 and 6.0 are aimed at enterprise; hell, even 5.0 is kinda aimed at that market. Consumers won't see any difference between 4.0 and 5.0 with the current work that's done on workstations.
Enterprise, on the other hand, gets more and more complex work done on DB servers, AI stuff etc., and a 5.0/6.0/7.0 storage array on the backend is something worth investing in, especially when Ethernet is getting faster and faster, approaching if not already exceeding 1Tbps bandwidths.
From-UoM@reddit
PCIe 6 is heavily used in the data centre.
kontis@reddit
When? We desperately needed it 20 years ago for gaming.
Remember Ageia PhysX? Remember how Nvidia's GPU PhysX couldn't do anything to gameplay and was only for eye candy?
All that tech was ruined by PCIe's bottleneck making fast transfer of GPU data back to the CPU impossible. It's getting better now, but multiple frames of stalls were common, so it was basically unusable for real-time games.
Capable-Silver-7436@reddit
Man, PCIe 5 is hardly even used on the consumer side, and 4 still hasn't even been pushed to half its limit.
MeatyDeathstar@reddit
I'm not sure why you got downvoted. For the normal consumer, 4.0 x16 is only half utilized, since even a 4090 only saturates 4.0 x8.
DerpSenpai@reddit
Actually, it would be pretty dope to get a list of GPUs and the necessary bandwidth before you see drops in performance. Especially helpful for OcuLink and TB4 and TB5 systems.
Tman1677@reddit
Standards are not designed for the “normal consumer”. For a normal consumer 3.0x16 is still generally plenty for anything.
Capable-Silver-7436@reddit
people like big numbers over facts on this sub.
Zarmazarma@reddit
What's your proposal? They stop developing new PCIE specs until hardware catches up and it's actually a problem? What benefit is there to that?
GarethPW@reddit
Faster lanes are great for those who want additional devices though. Real PCIe expansion shouldn’t be locked behind a workstation tax.
Rjman86@reddit
The downside is that devices that can use fewer, faster lanes are pretty expensive too (e.g. look at the price of a gen 4 x1 10GbE NIC vs a gen 3 x4 10GbE NIC).
It is nice that we finally have a flagship GPU (5090) that should be able to run at x8 with no performance loss now (on gen 5), so I hope PCIe 7 keeps that happening for a future generation.
Capable-Silver-7436@reddit
i agree
1leggeddog@reddit
Standards outpacing the actual products more and more
Frexxia@reddit
It's not actually outpacing enterprise hardware, which is the only place you'll see this in the foreseeable future
ResponsibleJudge3172@reddit
Approaching VRAM bandwidth.
Frexxia@reddit
That's a moving target though
Stingray88@reddit
Imagine a future where you use a second PCIe slot for a VRAM upgrade for your GPU. Latency wouldn’t be as great, but if the bandwidth is there…
steik@reddit
Except VRAM bandwidth will continue to improve as well.
Stingray88@reddit
Sure, but there could be a workload that could benefit from having significantly more VRAM, even at the expense of slower bandwidth.
steik@reddit
That's literally how GPUs already utilize RAM :) but yeah, you may be able to achieve lower latency with a "dedicated PCIe VRAM card" I suppose.
liliputwarrior@reddit
This is how CXL came to existence.
Wood_Berry_@reddit
That's actually a really cool concept! Nvidia would hate it. haha
simplymoreproficient@reddit
But is it approaching onboard vram latency?
SERIVUBSEV@reddit
Just in time for upcoming gaming APU/SoC era.
gatorbater5@reddit
perfect. the rtx8060 2gb will require a pcie7 1x slot.
MutekiGamer@reddit
By the time pcie 7 is mainstream they will be teasing pcie 10
Balance-@reddit
Bit of a weird title. Only the specification is being finalized this year (2025) - not the actual launch of PCIe 7.0 products.
So while the specification may be published this year, we won't see PCIe 7.0 devices for several more years.
ThePillsburyPlougher@reddit
Title seems pretty clear to me, that’s how these things typically go.
RScrewed@reddit
Glad I never have to work with you.
If you told me something was "launching next year" and all you put together was a spec, I'd think you're disingenuous.
Massive_Parsley_5000@reddit
Welcome to r/hardware. Some of the most insufferable assholes on reddit post here, unfortunately.
-WingsForLife-@reddit
I've been slowly trying to stop interacting with the majority of the internet. I've noticed that a lot of people tend to try to "win" arguments and/or just be plain rude when responding to anything.
I don't understand why this is happening; this isn't how people interact in real life.
Massive_Parsley_5000@reddit
I've been on the internet a long time, since my very early teens (unfortunately, lol...). I used to look down on the ignore function, but I find myself using it all the time nowadays, because of exactly your last paragraph.
If someone just responds to me like a total asshole, I just immediately block them. I started doing it because I thought to myself, if this was another person standing across from me in real life having a personal conversation and they said something like that /at best/ I would never speak to them again. Why in the hell do I not have the same respect for myself and my time online?
It's made my internet browsing so much better, personally.
RedditAdmnsSkDk@reddit
Ignorance is bliss.
Obviously you're going to be happier if you're running through life with blinders on.
Massive_Parsley_5000@reddit
That's a funny take.
Would it be "blinders on" if I walked away from an asshole in reality instead of giving him the attention he wants?
This is the exact same logic you see out of children complaining when people ignore them when they're acting like shitheads.
In reality, I have decided that your opinions aren't worth my time anymore, that my life is better without you in it, and I'm showing you the door 🤷♂️
I mean, if you're so "right" why does it matter that an "ignorant" person like me doesn't see your bullshit anymore? 🤔 Is that not a qualification for your "correctness", or maybe there's another reason entirely you're acting like an asshole to begin with, and sucking the oxygen away hurts because that's all you were after to begin with.
RedditAdmnsSkDk@reddit
Because an ignorant world is a shitty world.
It's so very easy to just declare anything that doesn't suit you as "asshole behaviour" and block it away. It creates echo chambers.
I'm sure you have no issue seeing how it's problematic if "famous/powerful" people surround themselves with yes-men, which is what the block feature encourages.
The less you accept being challenged the lower your tolerance for it gets and the more ignorant you will become. Which is the slippery slope heaps of older people slide down.
Exist50@reddit
Tbh, not terribly far off what some startups do. But yes, disingenuous.
ThePillsburyPlougher@reddit
PCIe is a spec not a product, it’s not like this is ambiguous language.
youreblockingmyshot@reddit
PCIe 6 doesn't have any items on the integrators list yet, as official testing hasn't even been offered yet.
kuddlesworth9419@reddit
Do any modern GPUs saturate 3.0 yet?
Xanthyria@reddit
Yes they absolutely can. I believe even on a 6600XT you get a couple percent performance loss—so scale that up and you’ll see more.
gatorbater5@reddit
yah OP shoulda specified pcie3.0 x16. 6600xt only uses 8 lanes.
Top-Tie9959@reddit
This always seems like an annoying game. Now I have to buy a newer motherboard to get enough bandwidth for my budget card. Why not just build the card with more lanes instead?
PolarisX@reddit
Because that is part of how they saved money designing the card. I'm not trying to be smart; it's an easy way to save cash, and most people never even know.
gumol@reddit
There's a difference between "do GPUs saturate 3.0" and "do video games saturate 3.0".
GPUs can easily saturate PCIe 3.0. PCIe 3.0 is only 16 GB/s, which is basically nothing for a modern GPU. There’s a reason why some companies make custom CPU-GPU interconnects with like 300 GB/s bandwidth - which GPUs can also saturate.
However, video games are not bottlenecked by PCIe speed.
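Where the 16 GB/s comes from, as a quick sketch (the ~1 TB/s VRAM figure is a ballpark for a current high-end card):

```python
# PCIe 3.0 x16: 8 GT/s per lane, 128b/130b encoding, 16 lanes, 8 bits per byte.
pcie3_x16 = 8e9 * (128 / 130) * 16 / 8 / 1e9  # ~15.75 GB/s per direction
vram = 1000.0                                 # ballpark high-end GPU VRAM bandwidth, GB/s
print(f"PCIe 3.0 x16: {pcie3_x16:.2f} GB/s (~{vram / pcie3_x16:.0f}x slower than VRAM)")
```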
RBeck@reddit
The issue was always that Intel was stingy with PCI-E lanes. If you could move to 4.0 that doubled the bandwidth, so you could use 8x for your video card and have a lot of bandwidth for SSDs or other devices without even relying on the actual chipset.
Atheist-Gods@reddit
A 4090 will, but that’s about it.
Cyphall@reddit
Some professional sectors need to stream truly gigantic amounts of data to the GPU, so yeah.
MeatyDeathstar@reddit
The 4090 just BARELY saturates a 3.0x16. We're talking less than a 2% difference in performance between 4.0x16 and 3.0x16.
hardolaf@reddit
The 4090 can burst up to about 70% of the bandwidth of PCIe 4.0x16 with steady state significantly below that according to the PCIe analyzer that I stuck it into. This shows up in gaming as poor 1% lows and a very small decrease in average FPS.
Exostenza@reddit
I mean, the 4090 was the first GPU to saturate PCIe 3.0, so we still have a hell of a long way to go just to saturate 4.0, never mind 5.0... I know NVMe drives can benefit, but you really need a large server to benefit from those speeds. I also know more lanes means more USB, M.2, devices etc., but I think PCIe 5.0 already provides much more than any consumer would care for. This is probably only going to be useful for datacenters for quite a while.
Tomi97_origin@reddit
That's not quite right. You are thinking about gaming and you would be right in that workload.
But when you try running local AI models, especially if you want to offload parts of the model into RAM, you will saturate the connection like nothing.
Jrix@reddit
What technological breakthroughs allow this consistent doubling of bandwidth?
Tomi97_origin@reddit
Well they defined the specifications of PCIE 6.0 in 2022 and there are still no devices on the market.
The specifications are just the first step and it takes quite a while to get to production from those.
Sylanthra@reddit
So this is a standard, right? It doesn't tell you how to actually implement a device that can achieve it, just what it is supposed to achieve. What's stopping them from defining PCIe 7-25 right now, where each stage is 2x the previous one, and letting the industry catch up whenever?
Tomi97_origin@reddit
That's not quite it. They also define the signaling, error correction mechanisms, and encoding, among other things.
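Roughly what changed at each step, summarized from the public spec overviews (the encoding/overhead notes are approximate):

```python
# Per-generation physical/link-layer changes (approximate summary).
GENERATIONS = [
    # (version, GT/s per lane, signalling, encoding)
    ("1.0",   2.5, "NRZ",  "8b/10b (20% overhead)"),
    ("2.0",   5.0, "NRZ",  "8b/10b (20% overhead)"),
    ("3.0",   8.0, "NRZ",  "128b/130b (~1.5% overhead)"),
    ("4.0",  16.0, "NRZ",  "128b/130b"),
    ("5.0",  32.0, "NRZ",  "128b/130b"),
    ("6.0",  64.0, "PAM4", "1b/1b FLITs + mandatory FEC"),
    ("7.0", 128.0, "PAM4", "1b/1b FLITs + FEC"),
]
for ver, gt, sig, enc in GENERATIONS:
    print(f"PCIe {ver}: {gt:6.1f} GT/s  {sig:4s}  {enc}")
```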
Darkomax@reddit
Meanwhile I'm still on PCIe 3. The standard seems to be evolving fast; I still feel like PCIe 4 was released yesterday.
definite_mayb@reddit
Yeah... Planning on keeping my 5800x3d on a b450 for a long time... By the time I upgrade it will be 3 generations ahead
Flyinmanm@reddit
Lol I was just considering upgrading my 5600x to a 5800x3d on my existing b450.
dssurge@reddit
5600x is a pretty good CPU, and unless it's limiting you in some tangible way there's no real reason to upgrade.
Flyinmanm@reddit
It's because my son is using my old pc which can't run windows 11 securely.
I inherited a hand me down b350 pc and put my 5600x in it for him.
I'll put the 5800x3d in my case.
chronocapybara@reddit
B450 3700X still going strong.
Atheist-Gods@reddit
PCIe 3 lasted a long time, but every other generation has been about 3 years. It's noteworthy that PCIe speeds aren't the bottleneck, so there isn't a reason to push onto the newest version.
Dangerman1337@reddit
Wonder if 6.0 will be skipped; by the time 5.0 is saturated, we may as well skip 6.0.
robbiekhan@reddit
Useful for all the GPUs and things leveraging massive amounts of bandwidth that even PCIe 4 can't provide... oh wait, that's not accurate at all. /s
III-V@reddit
There it is; the obligatory "but PCI-E version X isn't even saturated yet!" comment.
Wrong-Historian@reddit
I suspect we will see a reduction in the number of lanes used. You already see this with PCIe 5.0 x2 SSDs. Ultimately, reducing lanes while retaining bandwidth will reduce cost, as it requires less PCB real estate (i.e. fewer traces), fewer pins on the BGAs, less transistor/chip space used for the PCIe controller, etc.
sishgupta@reddit
Yeah, I wonder where the value starts to make sense. The problem with PCIe 5, for instance, is that it's costly due to the increased requirements for traces in the motherboard. So even though you see PCIe 4 slots with fewer lanes, PCIe 5 is still hard to come by, because the increased signalling speed requirements increase costs.
wrathek@reddit
Someone educate me here - is there a reason beyond cost savings that would stop a cpu mfr from having lanes that are multiple speeds? Like i get that pcie is backwards compatible, but wouldn't it be beneficial to have like 8x4.0 lanes, 4x5.0, 2x6.0?
burnish-flatland@reddit
There are x4 and x8 gpus, for example some RDNA2 and RDNA3 models. They still use the full x16 slot and just don't connect anything to the unused lanes, likely because of reasons of structural stability.
Wrong-Historian@reddit
I know. I have an RX6400 that runs on 4.0x4. But obviously that's not enough bandwidth for high-end GPU's. I'm talking 6.0x8 or 7.0x4 just becoming a standard, where either the CPU has less integrated PCIe lanes (cheaper, more power efficient), or is at least able to use the other lanes that it now doesn't need for the GPU to be used for other devices like NVME drives. Obviously having x16 lanes in your CPU PCIe controller but just using x8 of them provides no benefits.
Tystros@reddit
current GPUs that run AI models that are too large to fit in their VRAM are definitely maxing out whatever PCIe speed they support when they need to get stuff from RAM. there are a lot of regular consumers in r/StableDiffusion who would really benefit from more speed for that. But it's still a small niche overall, yeah.
cocacoladdict@reddit
Why isn't it named 6.0?
Tystros@reddit
Because that's the older version, which only has 2x over 5.0? They always 2x every generation.
capybooya@reddit
Is the PCIE interface a bigger problem than the (relatively) low bandwidth of DDR5?
Atheist-Gods@reddit
No. A 5090 will likely show no difference on pcie 4 vs 5.
Tystros@reddit
real 5.0 x16 bandwidth seems to be roughly 50 GB/s and good DDR5 memory on dual channel platforms is quite a bit faster, and way faster on quad channel platforms.
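In numbers, a sketch using theoretical peaks (DDR5-6000 dual channel picked as an example config):

```python
# PCIe 5.0 x16 vs dual-channel DDR5-6000, theoretical peak GB/s.
pcie5_x16 = 32e9 * (128 / 130) * 16 / 8 / 1e9  # ~63 GB/s per direction
ddr5_dual = 6000e6 * 8 * 2 / 1e9               # 8 bytes/transfer x 2 channels = 96 GB/s
print(f"PCIe 5.0 x16: {pcie5_x16:.0f} GB/s | DDR5-6000 dual channel: {ddr5_dual:.0f} GB/s")
```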
DumyThicc@reddit
They need to create architectures that take advantage of these new speeds, which GPUs currently aren't even close to reaching. Cost comes down to supply and demand: the process isn't optimal and there is very low demand atm.
It's essentially pointless to use PCIe 6.0 at the consumer level, and in most cases even businesses have no need for the technology currently either.
But the main problem is that adoption of these technologies takes a long time, and if they did create hardware that used this tech IMMEDIATELY then the prices would be astronomical.
Tystros@reddit
current GPUs that run AI models that are too large to fit in their VRAM are definitely maxing out whatever PCIe speed they support when they need to get stuff from RAM
DumyThicc@reddit
I don't consider people that use local LLMs regular consumers. That is still a very small niche compared to who these cards are targeted at.
While I'm clearly in agreement that we should have more and faster VRAM for consumers etc., the price point will keep those people away regardless.
Tystros@reddit
I think the people who buy a 5090 are probably like 50% gamers and 50% AI users.
AI that people run on their GPUs is also not primarily local LLMs I think, but primarily image and video models.
DumyThicc@reddit
I doubt that, considering I have 2 friends that got a 4080S and a 4090 and have ZERO development experience. They haven't even touched an IDE at all. The closest they got to doing anything related to the field is moving files and pasting some commands they found for CMD and hitting enter.
Most people buy this just for clout or for gaming. That's their target anyway.
50/50 could be possible, but my assumption is more like 20/80 in favor of gamers that don't even touch AI. Now, I'm talking about consumers, cause this is technically a consumer-grade product. That doesn't mean some businesses don't utilize the 4090 etc. But again, most regular CONSUMERS don't utilize AI, and I don't consider the people that fiddle with AI to be the majority in the REGULAR consumer category.
Tystros@reddit
We're talking about running local AI models and you talk about some development experience with IDEs and whatever? People who run AI local models are mostly not devs. They use regular easy to use GUIs where they enter a prompt and get fun images or videos or texts. They have no development experience with anything. They are regular consumers.
DumyThicc@reddit
Bro, you seem to be confused here. People that have never opened an IDE was used as an example of the skillset these individuals have and their interest level in AI.
Someone that won't even open up an IDE won't set up an LLM. How did that not click?
Tystros@reddit
No, you seem to be confused. The people who use local LLMs or local image generation or video generation AIs are not primarily devs. They are people who use 1-click-install solutions on their PC where they double click something and a pretty GUI opens where they can use the AI. Without any programming knowledge. They don't even know what IDE means.
DumyThicc@reddit
Sure bro, this is going nowhere.
You just don't understand how an example works.
It's the effort involved, along with the usability. Most individuals don't touch either IDEs or AI. They, like you state yourself, click on a pretty game and shoot people or whatever. They have zero interest in AI or in development.
They grab the 4090 for clout and nothing else. Idk how to explain this in an easier-to-understand way.
Tystros@reddit
You say using a local AI program is more difficult than "clicking on a pretty game and shooting people or whatever." And I say this is not true. Using a local AI tool is actually easier than playing a game. For most games you at least need to be somewhat familiar with WASD movement etc., which takes a while to learn for people who are new to it. For using AI, you don't need to know anything other than how to type text on a keyboard.
DumyThicc@reddit
I'm saying that is NOT where the focus is when a "regular" user purchases these cards.
Tystros@reddit
And I say that is what the focus is for 50% of people who buy a 5090. I say 50% of them are such AI users who do not play any demanding games, they only use demanding AI.
DumyThicc@reddit
For the 5090, sure, we have no usable stats for that yet. It's also considerably more capable for AI; that is not something I am denying. Its use of FP4, with its significant gains, is a huge benefit for AI, hopefully that's still accurate. I'm aware that in their tests it led to fairly positive results.
Regardless, my point stands for REGULAR users, which I don't consider AI users or developers to be. Most regular users are buying the 40 series for clout or for gaming performance, and those people heavily outnumber the developers etc.
The 600k number on Stable Diffusion etc. doesn't matter, since you don't need a 4090 or 4080 to run Stable Diffusion.
It is not 600k 4090 users.
Th3Loonatic@reddit
The faster it goes the more complex it becomes to implement. The timing of signals, the eye diagram, the logic complexity. All of it becomes more complex.
cp_carl@reddit
Because PCIe 6 exists and is double the data rate of 5?
cocacoladdict@reddit
Due to the wording I thought 5.0 was the latest and they skipped 6.0 for some reason.
Otherwise I don't understand why they compare it to 5.0 and not 6.0.
Atheist-Gods@reddit
5.0 is just what's on the market and what people are familiar with. PCIe 6 is already overkill, so Intel, AMD, and the motherboard vendors haven't implemented it yet.
Lille7@reddit
Because there aren't really any 6.0 products yet?
AngelicBread@reddit
I think the idea is that more people are familiar with PCIe 5.0 because it's in a lot of consumer electronics, so that's used as the frame of reference. I agree, though - it is confusing.
defchris@reddit
Because 6.0 is already out, but not yet adopted. 7.0 is the followup.
Physical-King-5432@reddit
PCIe 5 already seems new to me.
pmerritt10@reddit
I wouldn't worry about this too much in all honesty. We don't have CPUs currently available that could process all that traffic. It would just end up being a bottleneck.
hunterczech@reddit
Meanwhile I'm still on PCIe 3.0 on my still quite recent build.
RayphistJn@reddit
I'm at 3.0, and even that isn't an issue yet. Pass.