[News] DDR6 Set for 2027 Mass Adoption as Memory Giants Reportedly Finalize Prototype Designs | TrendForce News
Posted by imaginary_num6er@reddit | hardware | View on Reddit | 243 comments
Aggrokid@reddit
So can we expect AM6 around 2028?
bubblesort33@reddit
I hope. It's been less than 2 years between each of AMD's CPU generations so far. Zen 6 should be around the middle of next year (July 2026?) if this generation is as long as the last. Which might mean we even get Zen 7 on AM5, in the middle of 2028.
greggm2000@reddit
Zen 7 on AM5 seems unlikely, AM6 and DDR6 looks like they’ll be out at roughly the same time, so it would make sense for one to use the other. We’ll get “launch day” DDR6 ofc, just as we got “launch day” 4800 MT/s DDR5 at 3x the price of DDR4 back in 2021 when it launched.
bubblesort33@reddit
I mean it's 3 more years until DDR6 is here, so if it's not Zen 7 it should be Zen 6+.
greggm2000@reddit
That's not where the rumors are pointing at this time though (yes, rumors, especially this early, can easily be wrong). Besides, 2028 matches up with both DDR6 and Zen 7, so.. why not use it, especially if DDR6 will offer the bandwidth that Zen 7 presumably needs?
But ofc we will have a much better idea of things by early 2027, well after Zen 6 has launched and AMD gives us more official info on what to expect. If it will use DDR6, I would actually expect them to say that at CES in Jan 2027.
Jeep-Eep@reddit
I've said before that's just the de jure date; it may take longer to get DDR6 into a state suitable for client use, and AMD has found 'caching and older RAM' to be a winning combo, so they may lag significantly behind Intel if not going for dual-format shenanigans.
Vb_33@reddit
Intel will also have a big cache years before Zen 7 tho.
Jeep-Eep@reddit
Well then, more reason to think client DDR6 will wait until it's good and ready.
greggm2000@reddit
That's a possibility, sure. But it might not take longer, and more to the point, would AMD really introduce a new socket and chipset (AM6) using DDR5, especially when by the following year or 2 years, DDR6 would be ready? Idk, they could, but would they? We'll see. Persuading people that a new socket is reasonable is a lot easier when there's a new DDR generation as part of it, IMO.
Anyway, we'll get (official) clarity on this sometime in 2027, after Zen 6 has been out for a little while, I bet. Maybe at CES in Jan 2027, which is when they have announced CPU + GPU stuff in the past.
Jeep-Eep@reddit
Depending on the design of DDR6, AMD may try to make the pinouts for AM6 intercompatible with AM5 in such a fashion that an AM6 chip will fit in an AM5 socket but not vice versa.
greggm2000@reddit
If they did, that would be the first time, as far as I'm aware. I'd be very surprised if they did that, however.
Jeep-Eep@reddit
While true, they have all the parts in their techbase to make it work if it can be done and it would drive a lot of sales that wouldn't otherwise happen.
theholylancer@reddit
I am wondering if the speeds presented, esp the faster "up to" 17,600 MT/s, are a hint
namely, it was said that you can't get that kind of speed with normal RAM in our existing 4 slots, and that you'd need soldered or CAMM or something like that.
either they need to mandate 2 slots only (and most of us who use faster RAM are already doing so) with a good tech upgrade, or mandate something like CAMM2.
i.e. there are DDR6 mobos for AM5 and AM6, with AM6 aiming for higher speeds and CAMM2, and AM5 with normal DDR6 at slower speeds, with either forced 2 slots, or all 4 slots but it won't go as high.
unless I am mistaken and they are able to get those speeds on slotted ram.
Caffdy@reddit
there's not gonna be DDR6 mobos for AM5; next year is the last iteration of the platform and the DDR6 JEDEC spec is not even out yet. It takes time for memory, MoBo and chip manufacturers to design new systems to accommodate new RAM types
theholylancer@reddit
was that set in stone? I know AMD was wishy washy about that
and well the other thing would be if Zen 7 gets what intel was always doing, a memory controller that works with either DDR5 or 6, and that could be in theory easier if it was designed to work with 2 IODs, one for DDR5 and DDR6.
but yeah, it could just be completely abandoned after 2 gens, and then it's welcome back to the Intel situation: what upgrades?
Jeep-Eep@reddit
Yeah, and AMD's cached CPU tech is a perfect match for dual format.
Nuck_Chorris_Stache@reddit
Probably set in fibreglass, like most other PCBs
NewKitchenFixtures@reddit
CAMM2 makes so much sense as an evolution.
It's like going from the old SECC2 connectors in Pentium 2/3 to sockets. So much signal integrity advantage, and more mechanically robust.
Jeep-Eep@reddit
Easier to install too.
No_Hornet_1227@reddit
Not gonna happen man. Zen 7 is gonna be DDR6
AIgoonermaxxing@reddit
Can't wait to grab a cheap 13950X3D in 2032 so I can drop one into my B650 lol
XavandSo@reddit
My poor little 5800X3D...
shugthedug3@reddit
Still a monster of a CPU. They're going to be around for years to come.
Jeep-Eep@reddit
If I had PCIE 4.0 and could be sure it was a dying DIMM on the last build, I'd have gone for one of those and a new RAM kit no questions asked.
INITMalcanis@reddit
Still a great CPU. Faster CPUs coming out doesn't make yours one bit slower.
XavandSo@reddit
Just more concerned it will actually last that long or be viable. I already know Battlefield 6 is going to make it sweat.
INITMalcanis@reddit
I get you. I have a 5800X non-3D, and honestly it's OK for everything I do. Would a 9800X3D be faster? yes, sure! But it would also cost me like £700+ to upgrade CPU, motherboard & RAM and I'm frankly not feeling 700 quid's worth of performance constraint yet!
I'll see what Zen6 brings. And if that's on AM5, then honestly... I might just keep on with the 5800X unless it dies or something.
nismotigerwvu@reddit
That's exactly the same boat I'm in. The performance uplift from the 1800X I started with on my X370 board to the 5800X in there now is astounding. Being able to stretch it out through the AM5 generation is just unprecedented value.
renrutal@reddit
If it does make any game you absolutely want to play viable, then get a new platform, but after you've played the game for a while with your current rig.
Personally, I'll wait to replace my 5800X3D only when the second generation of AM6 12-core X3D chips come out.
Hayden247@reddit
Don't worry man, it's still equivalent to a Ryzen 7600X or 7700. It'll go fine until Zen 7 on AM6, and that will be the perfect moment to upgrade; even if you cheaped out with a Ryzen 5 then, it'd still be a big jump.
Besides, remember consoles use underclocked Zen 2 CPUs and next gen won't be until 2027 or 2028. You'll be right until then I think; the 5800X3D is no longer the best of the best, but it still isn't far behind a brand-new Ryzen 5 9600 for gaming.
Tuna-Fish2@reddit
Honestly, I think getting a 5800X3D and skipping the AM5 generation was probably the smart move. There's still no real need for anything better, and we are 2 or 3 releases away from the new platform.
SpeculationMaster@reddit
that thing will last another 10 years
Skulkaa@reddit
It's still fine . I feel no need to upgrade mine yet
jigsaw1024@reddit
Maybe AMD will stop supporting AM4 with new chips by then.
Strazdas1@reddit
AM4 hasn't had new chips in many years. Releasing binned chips from the same production run as a new SKU does not count.
Hayden247@reddit
Yeah, this is what annoys me when people say AM4 isn't dead... I mean technically sure, AMD still releases stuff for it, but it's all just different bins of existing silicon, usually on the cut-down side, as the 5800X3D is still the fastest on there. There hasn't been some "Zen 3.5" or whatever, it's just binning.
And apparently DDR4 is now starting to rise in cost to produce which is a sign it's getting old and DDR5 is getting more cost effective... which makes sense when 6 is just a few years away.
Strazdas1@reddit
The big players stopped producing it, so supply got lower. There are plenty of secondary producers though, like the chinese memory fabs that lag behind cutting edge.
Zanerax@reddit
Chinese government gave them the instruction to focus on DDR5 shortly after the non-Chinese majors announced they were pivoting. Price-shocked the DDR4 market pretty badly.
Haven't paid attention since then, but I've got to imagine someone is going to decide to stay in the market given suddenly margins will be great.
Strazdas1@reddit
Even if that is the case, nonchinese secondary manufacturers can pick up the slack with increased margins now.
theholylancer@reddit
the question is whether they are actual new production from the lines, or batches from a warehouse, meaning either they somehow overproduced to this degree, or had that many duds that needed binning to shove out.
At this point tho, it seems they were fighting with AM4 for the cheap low-end stuff, and AM5 wasn't for that, as there is still no Ryzen 3 stuff, and unless you go for Ali specials you won't get a sub-$100 CPU on AM5.
I assume as AM6 comes out, AM5 becomes that in place of AM4, but I do wonder if that will now be the actual way things are and value = previous gen socket.
fastheadcrab@reddit
No, that seems incredibly unlikely. A chip company will not be making thousands or millions of chips to stick in a warehouse for years. Economics shows that the cost of that storage quickly becomes very unprofitable.
It's more that Zen 3 production is incredibly cheap and all the problems are worked out, plus they have a huge installed base of compatible users. They can just do a few production runs here and there for a very low cost.
Old x86 chips were made for decades or more, and some modern embedded devices still use ancient silicon designs.
Strazdas1@reddit
AMD has been known to do exactly that in the past, spending years trying to sell off old models they overmanufactured.
fastheadcrab@reddit
Continuing the production of and selling older generation chips is not at all the same as selling off years of excess inventory. There are no credible sources that even remotely suggest AMD built years' worth of AM4 chips and is now selling them off. People often confuse the selling of fused-off and binned chips at lower price points with AMD selling old inventory.
Anyone with basic economics knowledge can see that it's incredibly unprofitable to keep years' worth of chips (millions) on hand in storage. It would literally have been better to hold a fire sale or throw them into the dumpster.
Strazdas1@reddit
fused-off chips like the 5700X3D are just chips that were made to be 5800X3Ds but weren't good enough. They were literally produced before, didn't meet the standard, and are sold as a different SKU now.
einmaldrin_alleshin@reddit
I honestly would not be the least bit surprised if Zen 3 was still in production by then. It's still their most cost-effective arch, and they can sell it for less than anything made on more advanced processes.
Tuna-Fish2@reddit
DDR4 prices have started to rise. AM4 will no longer be the most cost-effective platform for long.
einmaldrin_alleshin@reddit
It's still a long way from that happening. If you want to build a basic office PC with an IGP and 16 GB of memory, the cheapest AM4 option (5600G) will cost around 200€ for mainboard, CPU and memory; on AM5 you'll have to pay at least 270€ for an 8600G platform.
GumshoosMerchant@reddit
zen 2 is still kicking around in their mendocino chips, so i wouldn't be surprised if zen 3 is in whatever comes after
hackenclaw@reddit
the way I see it, they will move AM5 as budget tier, just like they did to AM4.
By 2028 4nm should be budget node while CPU in AM6 will be using 2nm/3nm.
ResponsibleJudge3172@reddit
Budget node doesn't mean cheap. It will remain $17,000 per wafer, but that's nothing compared to the A14 node costing $45,000 per wafer.
TotalManufacturer669@reddit
Ryzen 5 5500X3D coming up.
ConsistencyWelder@reddit
I'd gladly take a 5950X3D.
Morningst4r@reddit
More like new Zen 2 CPUs with incomprehensible model numbers
JunosArmpits@reddit
With an "AI" shoved in there somewhere
PrivateScents@reddit
5500x3D on the way
LuminanceGayming@reddit
"new"
hieronymous-cowherd@reddit
How dare you
AnxiousJedi@reddit
false
NerdProcrastinating@reddit
I wonder if it would be feasible for AM6 to move to memory attachment being on the processor substrate?
i.e. CPU substrate footprint extended to also contain CAMM2 connection point(s). This means the CPU socket becomes physically larger, but has fewer pins as the motherboard doesn't need to handle memory.
It would provide many benefits:
boomstickah@reddit
If you look at a calendar you'll know that zen 6 will be out fall 2026. AMD is on an every 2 year cadence.
dabocx@reddit
AM6 is zen 7
boomstickah@reddit
My bad, misread that.
PMARC14@reddit
They are talking about AM6 the socket which AMD only seems to change for new memory technology
Jeep-Eep@reddit
I would expect 2029-2030; AMD will probably let Intel facetank the DDR6 teething issues, and it seems moderately probable that they'll be troublesome.
Laj3ebRondila1003@reddit
Bruh I'm still on ddr4
Jeep-Eep@reddit
TBH, I might have waited if I hadn't had signs of a failure that was hard to be sure if it was the DIMMs or the mobo.
input_r@reddit
Excited to see something new
reddit_equals_censor@reddit
that is quite some bullshit.
camm2 can NOT replace dimm designs in servers and it would be a TERRIBLE insane idea to do it on desktop.
the camm2 module was designed around laptops. it has a giant max height (as in how long it can grow with having more memory dies on it) of 68 mm.
the current modules you see for camm2 are "just" 40 mm height.
that is why you see the prototype camm2 motherboards have completely empty space next to the camm2 slot with a camm2 module in it, because they NEED to be able to fit the giant 68 mm version.
this also blocks all connectors at the edge of the motherboard, because those wouldn't fit there at all anymore with the camm2 module.
HOWEVER that isn't the biggest issue.
the worse issue is that you can not put 2 camm2 modules (each camm2 module you generally see is a "dual channel" module) on a board.
there isn't any place for it at all. there's no way to make space for 2 slots that each need 68 mm of height from the socket, to its left and to its right, i suppose.
so it would be a downgrade to have camm2 forced onto desktop.
and it certainly is NOT a replacement for standard dimm slots on the desktop.
so that section you quoted is absurd nonsense.
camm2 was designed to replace so-dimm only.
we also don't know if dimm slots will have issues in the ddr6 era. we don't know.
__
so again you DON'T want camm2 on the desktop. it would be a major problem actually.
if we need camm like modules, then the design to get is socamm modules or socamm like modules, because those are fixed in all dimensions, which means, that you can put 2 next to each other to get "quad channel" bandwidth.
or on your desktop motherboard you could have at least 2 socamm modules, which then would mean double the bandwidth of the single giant camm2 module, while taking up less space and not blocking off the edge connectors of the motherboard.
so again YOU DO NOT WANT CAMM2 ON THE DESKTOP! you want socamm on the desktop if anything at all.
based_and_upvoted@reddit
Your concerns are valid but the way you write is going to get you reddit'd by really annoying people instead of having people challenge and ask questions. I am interested though.
I think your concern with module size is only important for small mATX/ITX motherboards, right? Why can't motherboard manufacturers just say "we only support 40mm modules"?
About the dual channel: I thought CAMM2 was supposed to have good enough bandwidth for that not to be a concern. For example, in this link you can see that DDR5 CAMM2 had bandwidth and latency similar to dual-channel DDR5-7200 C38 (keep in mind people usually go for 6000 MT/s kits)
I am struggling to see the advantage of camm2 on desktop compared to DIMM modules, as well, if these data points remain the same. For laptops, if CAMM2 is as good or almost as good as soldered memory, then great.
Or are CAMM2 modules easier to improve on bandwidth and latency compared to DIMM modules, so that in the future it really is a better technology? Are DIMM modules reaching the end of what can be improved on?
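For context on those kit speeds, the theoretical peak numbers are easy to work out (a rough sketch, assuming the standard 128 data bits total across a dual-channel DDR5 setup; the real measured figures in that link will be lower):

```python
# Theoretical peak bandwidth of a dual-channel DDR5 configuration.
# DDR5 splits each DIMM into 2x32-bit subchannels, but two DIMMs still
# present 128 data bits total, the same as DDR4 dual channel.
def peak_bandwidth_gbs(mt_per_s: int, bus_width_bits: int = 128) -> float:
    """Transfers/s * bytes per transfer, expressed in decimal GB/s."""
    return mt_per_s * 1e6 * (bus_width_bits / 8) / 1e9

# Dual-channel DDR5-7200 (the CAMM2-comparable kit mentioned above)
print(round(peak_bandwidth_gbs(7200), 1))  # 115.2 GB/s
# The 6000 MT/s kits people usually buy
print(round(peak_bandwidth_gbs(6000), 1))  # 96.0 GB/s
```

So the CAMM2 prototype matching DDR5-7200 C38 puts it roughly 20% ahead of the typical 6000 MT/s dual-DIMM setup in peak bandwidth.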
TheHodgePodge@reddit
Maybe they can charge more for camm2 compared to dimm modules.
reddit_equals_censor@reddit
i mean if companies want to push a standard on desktop with full atx motherboards, then i'd say, that they should certainly fit all memory sizes of that standard.
we can fit any dimm size we want into our current motherboards without a problem. it would be a terrible idea to have a spec set up where you have to check the height of everything you buy: oh, did you get the right atx motherboard for the module that you bought? i guess it isn't a real standard then, is it? "your memory is too tall" / "your motherboard only fits the short modules"
is the kind of stuff that shouldn't exist in a standard, and it would be a downgrade compared to full sized dimm.
yip yip. so-dimm in comparison is at its end. that is what camm2 and lpcamm was developed for, to fix the laptop situation, where soldered memory VASTLY outperformed so-dimm, because so-dimm couldn't clock nearly as high.
a single camm2 module or socamm module or lpcamm module is "dual channel", so the equivalent of having 2 ddr5 dimm sticks in your desktop machine or 2 so-dimm sticks in your laptop.
but the issue is that that isn't enough for apus at all, at least ones with even half decent performance, and it also wouldn't be enough for higher core count workstations for example.
so just for laptops running the future amd halo apus (if amd isn't a piece of shit and won't block modules from getting used) requires 2 camm modules.
but lpcamm is vastly worse to use due to the modules having a very awkward shape, and camm2 is even worse, so that leaves you with socamm: a standard designed to be put right next to each other in servers, so it would also work great for high performance apus, as it keeps the traces with more than 1 module as short as possible and also takes up the least pcb space overall.
btw this:
just applies to desktop cpus. that is the current am5 cpu sweetspot.
the desktop cpus we use are WAY WAY WAY less bandwidth constrained than apus.
strix halo in comparison using a 256 bit memory bus to get 256 GB/s memory bandwidth is still somewhat memory starved.
so for future higher end apus in laptops for sure you want well almost as much bandwidth as possible.
PMARC14@reddit
While LPCAMM has a somewhat awkward form factor that I would prefer SOCAMM replace, it's still good enough for the APUs we expect, assuming they put in the work to ensure signal integrity. You just position the modules either on opposite sides of the centered chip, or facing down away from it like some old SODIMM designs but still way more compact, and you need at most two of those modules for the expected 4-channel memory on these APUs (I don't expect any higher, even if it would be cool).
reddit_equals_censor@reddit
yes lpcamm works well enough for laptops, but it is just worse than socamm, so why bother, right?
if socamm is just better and has fewer limitations (put 2 sticks right next to each other and keep traces short enough, etc...) and it also has increased production scale by being used in servers as well, then let's just use the better standard, right?
PMARC14@reddit
Well, we don't know what level of iteration each has had relative to the LPDDR and DDR standards, as SOCAMM is so far only backed by Micron and Nvidia, even though it hopefully is mostly derived from CAMM such that intercompatibility would be simple. If it is, I would expect SOCAMM to displace LPCAMM, but we need more signal from the rest of the industry that they would follow. Hopefully, once we start seeing DDR6 products, an updated, basically-SOCAMM design is possible where they "tuck" the extra chips hanging off inside the rest of the module.
reddit_equals_censor@reddit
that is incorrect.
micron, samsung and SK hynix have, i guess, already shown off their socamm prototype modules, as mentioned here:
https://www.tomshardware.com/pc-components/ram/micron-and-sk-hynix-unveil-lpddr5x-socamm-up-to-128gb-for-ai-servers
and Ian Cutress in his socamm video shows off prototype (or rather pre-jedec) modules from several companies. it isn't just one memory maker + nvidia making something one-off.
all major memory makers are in with it (i mean it makes sense, because nvidia... )
but yeah good news there i guess for you :)
PMARC14@reddit
Aw yeah great sign. Hopefully a DDR6 version can be agreed on by others.
Caffdy@reddit
he's one of the people around the site who have negative karma in the three digits from me. I'd like to think we just disagree a lot, but the reality is that he just loves to throw the most baseless and uninformed opinions around all the time
Gwennifer@reddit
Higher speeds and tighter timings become possible as the electrical noise drops quite a lot is my understanding. Teamgroup's own OC team was able to get a very impressive manual OC with a CAMM2 module already.
I'm also not understanding the footprint issue raised for PC desktop; I don't see it as an issue. It's definitely an issue in server and laptop where motherboards are essentially single-sided and horizontal space is at a premium. As far as I know there's nothing stopping a vendor from placing a CAMM2 module on both the frontside and backside of a consumer PC motherboard, other than not necessarily being in spec, complicating assembly, requiring another ground plane. There'd also need to be cooling to make it really work well, too--those new ultrasonic/pulsed air coolers would work just fine since the heat load is low, but that's adding a lot of cost.
Also, current prototype CAMM2 modules come in more than sufficient capacities for client. I believe the currently claimed max 128 GB capacity for 1 module is with 16 Gb density ICs? With 24 Gb density ICs, that should be 192 GB, which again is more than enough.
re:
AFAIK on Intel and AMD both, running 2 dual-channel kits (4 DIMMs) drops speeds significantly, as the memory controller can't keep up with the voltage & signalling, both of which CAMM2 improves. I consider this "can't" a moot point, because 4 DIMMs at speed is not exactly a usable configuration as-is on consumer desktop.
It's more like the DIMM form factor has a fundamental disadvantage in noise and trace length. In theory CAMM2 needs a lot more R&D to make this advantage known.
reddit_equals_censor@reddit
part 2:
if ddr6 doubles bandwidth compared to ddr5 (through clocks + increased per module bus width), then a halo apu could get with 2 socamm modules to at least 480 GB/s. (one lpddr5 socamm module is up to 120 GB/s shown in a presentation in a video by techtechpotato)
and with "just" 480 GB/s you are getting into somewhat reasonable range if you wanna push the gpu section of the apu really far.
for comparison, the 9070 xt, a modern graphics card with fairly modest bandwidth, is already at 644.6 GB/s.
so a 480 GB/s apu would have just 75% of the bandwidth of a current non-high-end, bandwidth-efficient graphics card.
so yeah comparing to actual graphics cards and their bandwidth requirements is what makes a ton more sense here and that basically shows you, that apus need bandwidth and are generally very starved.
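the back-of-envelope math above can be sketched out like this (the 120 GB/s per-module figure is from the presentation mentioned, and the 2x DDR6 scaling is an assumption, not a spec number):

```python
# Rough APU bandwidth estimate, using the figures from the comment above.
SOCAMM_LPDDR5_GBS = 120   # per-module LPDDR5 SOCAMM bandwidth cited above
DDR6_SCALING = 2          # assumption: ddr6 doubles per-module bandwidth
MODULES = 2               # two socamm modules on a halo-class apu board

apu_gbs = SOCAMM_LPDDR5_GBS * DDR6_SCALING * MODULES
print(apu_gbs)            # 480 GB/s

RX_9070_XT_GBS = 644.6    # a modest-bandwidth discrete gpu, for comparison
print(round(100 * apu_gbs / RX_9070_XT_GBS))  # ~74, i.e. roughly 75% of the card
```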
i mean i don't know. unless we're actually at amd, intel, samsung, micron, etc... it is impossible to know i guess.
to me i'd guess, that we'll see one more generation of dimm on desktop at least (memory generation of course and not cpu generation).
but yeah i don't know, i can't know, but i'd love to know!
maybe we can get some insight with ddr6 in how far it can stretch itself bandwidth wise.
if desktop still gets full sized dimm, but laptops and mini pcs get socamm for example, then extreme overclockers could give us a decent insight into how they compare a bit more and give us some sense at least.
i'd love to see some presentation by those companies on those topics.
but one thing seems quite clear, that if we need camm modules, then socamm seems to be the best choice from all that i can see.
Caffdy@reddit
Jeese, what the fuck are you talking about?
Vb_33@reddit
Is there anything pointing to so-camm coming to desktop? Sounds like amazing tech.
reddit_equals_censor@reddit
nope.
jedec is in the process of standardizing it as we speak, so it is very early and its target right now is servers. or rather its first target.
this video talks about socamm btw:
https://www.youtube.com/watch?v=BhPChFe5ugc
it is worth remembering, however, that beyond some company hype posts there is no reason to think that camm2 will come to the desktop either.
so the amount of stuff pointing to either one is about the same.
remember, that most of the camm2 stuff you saw thus far is companies producing some custom boards to test camm2 and to show it off, because it is good marketing, even if it never releases.
i can't even buy a freaking camm2 module at all (as in, just a module, for laptops etc. as well)
so don't think that camm2 is that far ahead in adoption, btw.
nvidia wants to push socamm HARD again for server though.
so having the industry throw lpcamm and camm2 in the dumpster and instead use socamm for laptops as a start just makes way more sense to me.
in the video you can see at 2:28 how tightly you can put the memory modules next to each other. if you use camm modules, that is how close you want them to be.
the lpcamm module instead is a joke vs that.
it has the non-memory-die electronics in a sectioned-off part, so you can't put 2 modules right next to each other, for some dumb insane reason.
so you are already screwed trying to use 2 lpcamm modules for the next amd halo apu (the big ones with high gaming etc. performance, eg: strix halo), or rather it would be a lot harder for no actual benefit at the bare minimum.
socamm just seems better in almost all regards.
and there aren't any capacity problems for any desktop use, because one module can be (for now) up to 128 GB.
___
and having one standard used by more than one industry benefits everyone.
having socamm be used in servers and laptops at least is better than having socamm just for servers and laptops have to deal with camm2 or lpcamm.
and having it on desktop AS WELL then would play more into that.
so yeah, let's just hope it or a similar design like it will become the standard for desktop and laptop as well in the future.
one can only hope.
greggm2000@reddit
Yes, you did persuade me in the other thread, I agree with you. SOCAMM surely has momentum though, given that Nvidia is using it already? So from my un-expert viewpoint, I could see SOCAMM on consumer desktop for DDR6. I guess we’ll see.
FourLeafJoker@reddit
This one just found out their new step dad is a CAMM module and he isn't happy about them marrying his mum.
reddit_equals_censor@reddit
maybe stop wrongfully assuming genders when you make nonsense comments, at least?
doctorcapslock@reddit
how ironic is it that this guy's user name is called "reddit equals censor" lol
Traditional_Yak7654@reddit
Any time it’s an unhinged wall of text it’s good ol Reddit equals censor. Dude should take their meds.
JuanElMinero@reddit
There's like a dozen of these vacuous rants in this comment section alone, thousands of words that say surprisingly little.
Can someone please teach this user about making a concise point and using the shift key?
kristenjaymes@reddit
Emily?
Pimpmuckl@reddit
Idk what the yapping about quad channel is anyway. Or not "quad", but whatever channels four dimms would be in theory.
Because news flash, memory controllers don't magically offer more channels just because there's more dimms present.
And CAMM still provides much better speed even vs dual-rank dimms in most benchmarks I've seen, so still a win?
As long as we have choice as consumers, it's a win.
Vb_33@reddit
Silly question but why can't they place the memory on the backside of the motherboard to make space on the front.
Strazdas1@reddit
makes board design and cooling a lot harder. Right now we don't need to worry about the backside because there's nothing there.
reddit_equals_censor@reddit
i can't think of a reason why they couldn't.
now i am not sure if camm2 or socamm can sit flat enough on the back of the motherboard to not touch the case, to be clear.
BUT even if that were an issue new cases could just come with a cut out on the back of the motherboard.
and having the memory on the backside of the board could possibly reduce trace length further.
you could also easier fit more modules or fit one on each side of the socket, which again could mean shorter traces.
but even then you'd still want socamm and not camm2, because you can get more bandwidth out of the same size and have the nice fixed dimensions, which are easier to deal with overall.
there is the downside of cooling the modules on the backside of the motherboard, as they wouldn't get any airflow. but maybe socamm at least could have less of a problem, because it has the pin connections below all the memory dies, unlike camm2, which only has some dies above the pins while other parts are dual-sided memory. i do, however, not know the exact cooling setups that could be used for camm2 and stuff.
___
but yeah memory on the back of the board with flat modules be it socamm or camm2 is certainly possible and could have advantages performance wise as well.
loozerr@reddit
68mm? There's been heatsinked dimms around that height for ages
reddit_equals_censor@reddit
68 mm from the socket to the RIGHT basically.
this is the jedec spec pdf:
https://www.jedec.org/sites/default/files/Tom_Schnell_FINAL_%202024-05-03.pdf
go to page 8 to see the bxx dc ddr5 camm2 module with a rendered image of it.
it is giant, and it needs its space on any desktop motherboard if they are dumb enough to push that spec on the desktop, and THAT is an issue.
the upwards height of dimm modules doesn't matter on the desktop, except for air cooler clearance in comparison.
so having super giant flat modules on the board is a problem IF they are dumb enough to try to push camm2 on the desktop.
socamm is free from this issue, because socamm has fixed dimensions all around.
LightShadow@reddit
You're not very creative are you?
VulpineComplex@reddit
the twine and corkboard store must love you
innovator12@reddit
Will consumer CPUs support quad channel memory?
Jeep-Eep@reddit
I suspect the next DDR5 mobo wave is gonna have a lot of it; they're gonna be clearing the teething problems of this form factor ahead of having to mix them with the teething troubles of DDR6.
p90rushb@reddit
Back in my day, we had RAMBUS.
ScepticMatt@reddit
Maybe even SOCAMM
Clean_Moose5071@reddit
I believe this is confusing LPDDR6 with DDR6. I don't believe JEDEC has released the DDR6 spec yet but LPDDR6 was released. LPDDR6 is expected to be used in CAMM2 and SOCAMM.
ConfuzedAzn@reddit
Is this CAMM2 compatible?
Jeep-Eep@reddit
Likely CAMM2 mandatory.
slrrp@reddit
Bruh I just upgraded to DDR5 like five days ago
BrightCandle@reddit
The march of technology in computers is relentless. No such thing as future proof; it all goes obsolete far too quickly. After 5 years every part is obsolete.
Jeep-Eep@reddit
Eh, modern PCIe, 64 gigs and a marque with a good warranty like G.SKILL or Corsair can come pretty close.
Homerlncognito@reddit
I'm on AM4 and I might actually skip DDR5 on desktop. The price was too high for too long.
Jeep-Eep@reddit
May well ride DDR5 to DDR7.
SuperDuperSkateCrew@reddit
Same, I’m waiting to build a new pc until GTA6 releases on PC so by that time DDR6 will for sure be available
Sh1rvallah@reddit
Might be DDR7 at this rate
SpeculationMaster@reddit
yeah, i am skipping DDR6 and waiting for DDR7 for my next upgrade.
Hot-Software-9396@reddit
I heard DDR9 is the one to wait for
-protonsandneutrons-@reddit
DRAM bandwidth is almost never a major bottleneck for consumer and/or gaming use cases, and DDR6 will be exceedingly expensive (I'd expect $300+ for 32 GB at launch) for meh latency/bandwidth combos.
Mid to late cycle is the best time to buy DDR, IMO, unless you frequently upgrade to re-use your DRAM.
Academic_Carrot_4533@reddit
You must not have experienced the era of single core cpus lol
Soggy_Association491@reddit
If they are going to redesign the mobo for CAMM2, then I hope they also move the GPU slot so fans can push hot air immediately out of the case, now that GPUs are using flow-through heatsink designs.
Nicholas-Steel@reddit
That's what the CPU heatsink and fan is for, after absorbing some of the heat and decreasing efficiency in cooling the CPU.
02mage@reddit
so i need to buy a new case as well lol
Soggy_Association491@reddit
Yes but think about the temp drop
02mage@reddit
my temps are fine
GenZia@reddit
No mention of latency.
Hardly a good sign.
Chicag0Ben@reddit
Let’s hope it has traditional ECC for ddr6 this time around to help with signals.
GenZia@reddit
Error correction is supposed to worsen latency.
Strazdas1@reddit
Ill take ECC over latency.
MaverickPT@reddit
Mind explaining why?
I see quite a few users here on Reddit requiring ECC full stop. But I personally fail to see how useful it is, bar some enterprise settings. I can't recall having issues that I could pin down to a ram bit flip. Could you please elaborate?
Strazdas1@reddit
I do data analysis for a living. I saw RAM errors corrupt my data in the past. Most people blame anything and everything when their RAM craps out. Once I started observing memory errors I realized half the software crashes I had were caused by memory, regardless of the actual error message.
Kryohi@reddit
Latency has been roughly the same for a long time, and in fact now it's being mitigated better than ever by DDR5, new CPUs and especially big L3 caches.
Bit flips in RAM are only getting worse due to increasing densities, increasing RAM sizes and increasing bandwidth. I don't know what I would prefer specifically for the DDR6 generation, but the trend here is clear and it makes sense to just implement ECC everywhere at some point.
Also, are you really sure none of your movies or games that you moved around at some point are completely clean? AFAIK big files/datasets can be easily corrupted in subtle ways without ECC, often without making them unusable.
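For readers curious how ECC actually fixes a flip: real ECC DIMMs use a SECDED code over 64 data bits, but the classic Hamming(7,4) toy code below shows the same principle, recomputing parity checks whose pattern (the syndrome) points at the single flipped bit. This is an illustrative sketch, not any product's actual implementation:

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword (3 parity bits)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]  # bit positions 1..7

def hamming74_correct(codeword):
    """Recompute the three parity checks; the syndrome they form
    is the 1-based position of a single flipped bit (0 = no error)."""
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1  # flip the bad bit back
    return c

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                                  # simulate a bit flip
assert hamming74_correct(word) == hamming74_encode([1, 0, 1, 1])
```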
Caffdy@reddit
I'm glad these regards who shun ECC are not in charge of designing new memory platforms/standards. ECC is the future as you said
f3n2x@reddit
DDR5 already has better bit flip protection than classic ECC because it's part of the automatic refresh cycle. Classic ECC only adds transfer protection which is pretty much irrelevant for 99.9999% of all users.
Traditional_Yak7654@reddit
Non registered ecc shouldn’t add any latency.
BrightCandle@reddit
They haven't been able to do anything about latency since SDR. We have been playing with the same fundamental latency range of 50-70ns for all the generations of DDR and it's not really worth talking about because it's not currently solvable.
PMARC14@reddit
Probably not a regression, but no improvements again at the start.
crshbndct@reddit
Latency is counted in the number of clocks to complete an operation.
3200 MT/s CAS 8 is the same absolute latency as 6400 MT/s CAS 16. If latency were measured in ns, you'd see very little change in latency overall from generation to generation.
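The arithmetic behind that equivalence: absolute CAS latency is the CL count times the clock period, and since DDR moves two transfers per I/O clock, one period is 2000 / (data rate in MT/s) nanoseconds. A quick sketch:

```python
def cas_latency_ns(cl: int, data_rate_mts: float) -> float:
    """Absolute CAS latency in nanoseconds.

    DDR transfers twice per I/O clock, so one clock period is
    2000 / data_rate ns.
    """
    return cl * 2000.0 / data_rate_mts

print(cas_latency_ns(8, 3200))   # 5.0 ns
print(cas_latency_ns(16, 6400))  # 5.0 ns - same absolute latency
```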
AnxiousJedi@reddit
Oh DDR5, we barely knew thee
Chicag0Ben@reddit
2021-2027? A bit shorter than DDR4, which isn't surprising considering the AI money pouring into memory makers' pockets.
bakgwailo@reddit
Both DDR3 and ddr4 lasted about a decade+, and multiple platforms from both Intel and AMD. DDR5 is pretty crazy short lived in comparison.
Noreng@reddit
2007-2014 for DDR3
2014-2021 for DDR4
dystopianartlover@reddit
DDR4 was 2014-2024
WuWaCamellya@reddit
If you don't consider DDR5 as being out until several years after release, then to be fair you'd have to apply that to DDR4 as well, since it was much more expensive than DDR3 at first, making it more like 2017-2023/4.
dystopianartlover@reddit
New hardware was being made for DDR4 in 2024. Was that happening for DDR3?
Noreng@reddit
I have an SSD introduced in 2018 with DDR3
dystopianartlover@reddit
Sure, and there are still new SBCs that use DDR3. But that's not the same as new CPUs, motherboards, and laptops being manufactured with DDR4 DIMMs. DDR3 and DDR4 were only commonly concurrent for one, maybe two years (though it was possible to build a DDR4 system in 2014), while DDR4 and DDR5 have been concurrent for most of DDR5's life. This is why people feel like DDR4 lasted so long in comparison to DDR5.
Noreng@reddit
In that case, we can probably extend DDR5 to 2030 at least
kingwhocares@reddit
Yep. People feel this way because, price-wise, DDR5 has only been a realistic option since 2023
Jeep-Eep@reddit
That's just when they'll be launching it, I suspect anywhere from 2-4 years before it's housebroken enough for client use.
CrzyJek@reddit
They were not a decade each. More like 7-8ish. Meanwhile DDR2 in consumers hands was from like 2004-2009.
DDR5 is not short lived in the grand scheme of things.
berserkuh@reddit
Isn't it? Most people I know had to go through two "lowest required memory" standards. Both 16GB and 32GB were on DDR4. I'm still on DDR4 with no real reason to upgrade.
PMARC14@reddit
Not really, those were also only 7 years each, and DDR5 will be 7 years old when the first consumer DDR6 boards and sockets are shown, and will probably continue to be supported well past 2030
NerdProcrastinating@reddit
Does anyone know how DDR6 differs from LPDDR6?
With this new generation of standards and how close they appear to be, I don't get why two standards exist rather than one?
steinfg@reddit
Because DDR6 is made with high capacity, high performance, error correction, and modularity in mind, while LPDDR6 is made with high performance per watt and low power modes in mind.
NerdProcrastinating@reddit
But on what knowledge basis do you claim that?
Both standards appear to be moving to 24 bit channels with a burst length of 24 to provide host controlled bits for ECC. This appears to be identical in terms of protocol. Underlying DRAM technology is the same in terms of capacity and modularity.
Whilst the maximum transfers/second specs differ, that appears to be related to clocking & energy targets rather than a fundamental difference. With how converged they appear to be this generation, it seems they could have been the same standard with different operating targets.
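As a back-of-the-envelope check on the 24-bit / BL24 point (these figures come from the comment above, not a released DDR6 spec): one burst moves 24 bits × 24 transfers = 576 bits = 72 bytes, i.e. a 64-byte cache line plus 8 spare bytes where host-managed ECC/metadata bits can live:

```python
channel_bits = 24    # channel width both standards reportedly move to
burst_length = 24    # transfers per burst (BL24)

bits_per_burst = channel_bits * burst_length   # 576 bits
bytes_per_burst = bits_per_burst // 8          # 72 bytes

# One 64-byte cache line fits with 8 bytes to spare,
# which is where the host-controlled ECC bits can go.
print(bytes_per_burst, bytes_per_burst - 64)   # 72 8
```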
Exist50@reddit
LPDDR is much higher performance than DDR.
steinfg@reddit
No, see any review comparing an lpddr5 laptop and a ddr5 laptop. Lpddr5 bandwidth is ass, latency is ass.
Exist50@reddit
That's completely wrong. LPDDR is much faster speeds than DDR, and the latency is comparable from a memory standpoint.
steinfg@reddit
Look at any benchmark that saturates memory bandwidth and you'll see both lpddr5 and ddr5 are similar, despite the clock differences
Exist50@reddit
No, you won't. It's not even close. That's why big iGPU parts are always paired with LPDDR. They need the extra bandwidth.
steinfg@reddit
No, they pair with lp memory because it's cheaper to manufacture, and igpu gaming doesn't need much cpu power - the most powerful igpu is weaker than a 1660. I thought we were talking about actual gaming laptops
Exist50@reddit
Also false. LPDDR is significantly more expensive.
But it does need bandwidth, which LPDDR provides. And again, latency is similar.
steinfg@reddit
Chips themselves yeah
Exist50@reddit
Motherboard is more expensive too. Soldered doesn't help it. If that's your thought, you can solder normal DDR.
AttyFireWood@reddit
I thought we were in Dance Dance Revolution 19, not 6.
Rami-El@reddit
so this means 192 bit when dual channel right? igpus are gonna be fire in the next few years
reddit_equals_censor@reddit
if we get the good ending it will be more than fire.
192 bit would be a single dual channel module,
BUT if we get socamm we can have 2 next to each other or on each side of the apu to get 384 bit bus and a MASSIVE memory bandwidth increase.
or rather this means, that there would be 0 excuse to solder memory next to apus anymore, which is the excuse, that strix halo uses for example.
Pimpmuckl@reddit
I want to have your CPU that morphs into having double the memory controllers out of thin air, that sounds great.
Because AMD sure as shit won't pack a 384 bit interface in their everyday-Joe chips.
reddit_equals_censor@reddit
strix halo already uses a 256 bit bus.
so if you want, for a START, high performance laptop apus that aren't unserviceable soldered-together shit,
you want socamm modules, so that you can have a nice pcb layout with small easy-to-handle socamm modules and already get the required bandwidth of those apus.
and who knows where desktop goes bandwidth wise.
maybe amd would like to see an increase beyond just going to ddr6, but probably not.
Pimpmuckl@reddit
That's why I said "in their everyday-Joe chips."
Halo might have a 384 interface. Maybe, maybe not, who knows.
But the general laptop chips we'll see won't have it.
No idea what your obsession is with socamm but my g, you gotta chill out.
reddit_equals_censor@reddit
why not?
strix point is clearly held back by its memory bandwidth.
so a future strix point needs a step up, and a 384 bit memory interface would make for a great well-rounded apu that amd can sell to higher-end handhelds as well.
socamm just seems to be a decently thought-out standard, unlike camm2 or lpcamm, and that is worth talking about whenever fan-peeps get excited about camm2 for its non-existent advantages on desktop....
Pimpmuckl@reddit
Money and power consumption
reddit_equals_censor@reddit
you can clock down the memory in special low power modes for a handheld for example if you really want to save a tiny bit more power.
but most importantly this doesn't matter, because you need the bandwidth, that you need. strix halo NEEDS 256 bit memory bus. it isn't a benefit, it absolutely NEEDS IT.
so if a future strix point like apu wants to have decently fast graphics, it ABSOLUTELY NEEDS the bandwidth.
steinfg@reddit
That's not how it works. Memory bus width is determined by the processor, not the memory format (dimm, sodimm, camm, socamm)
reddit_equals_censor@reddit
amd apus in laptops and mini pcs are already using "quad channel" memory setup of 256 bit bus.
that could be done with 2 socamm modules or 4 dimm modules, but dimm doesn't work in laptops. so-dimm doesn't work at all anymore as it can't reach the performance.
SO if you want laptops or mini pcs that aren't unserviceable soldered-together crap, then you want socamm for the REQUIRED bandwidth of those apus.
apus, that are already using 256 bit memory buses, which with ddr6/lpddr6 would be then 384 bit as ddr6 increases bandwidth per module (and camm modules are "dual channel" instead of "single channel" as well to not get confused here).
so your argument is already ONLY for desktop cpus. laptops already need 2 socamm modules to get the bandwidth, that they absolutely need with memory buses, that they already have.
and if amd wants to bring halo apus to their future desktop sockets, they'd actually need to double the bandwidth of the socket so 4 dimm sticks or 2 socamm modules.
they very much may not want to do that of course, but who knows.
__
and to be clear the comment i made was about socamm enabling vastly higher bandwidth to being able to put 2 next to each other and still keeping short trace length, etc...
i didn't say it was required, but rather that halo apus can easily use socamm in the future, instead of the soldered insult for a start.
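For scale, here's the rough peak-bandwidth math behind the 128 vs 256 vs 384-bit argument (the LPDDR5X-8000 speed is an illustrative assumption, not a product spec):

```python
def gbps(bus_bits: int, mts: float) -> float:
    # peak GB/s = bus width in bytes * transfers per second
    return bus_bits / 8 * mts / 1e3

print(gbps(128, 8000))  # 128.0 GB/s - typical 128-bit laptop APU
print(gbps(256, 8000))  # 256.0 GB/s - Strix Halo class 256-bit bus
print(gbps(384, 8000))  # 384.0 GB/s - the hypothetical 384-bit setup argued for here
```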
steinfg@reddit
Strix halo? One niche chip with a $2000 laptop and no available mini pcs (only preorders)
Framework tried to get camm working, AMD said it won't work on strix halo; the 256-bit bus necessitates weird trace layouts that won't properly work with camm or dimm.
I'm not buying strix halo. Most people are not buying strix halo. Most people are buying <$2000 laptops, and those are perfectly serviceable with camm. socamm is a proprietary and expensive format, only useful in multimillion dollar blackwell racks.
Hey, there's a rumor that Next-Halo is cancelled. That's good news for you. Less un-upgradable laptops in the future.
Hahhaahahahah, thanks for a joke. That's never happening, igpu boy.
The processor determines bus width, not the memory format.
Eh, already gave an answer above.
reddit_equals_censor@reddit
part 2:
so you already want more bandwidth on strix point like apus right now if you could have it, but you can't, because you don't have socamm yet in laptops that would make it reasonable to use 2 modules relatively easily for a nice mid range apu that isn't massively bandwidth starved.
ah yes amd with their big amounts of new apus, that they are pushing out will randomly cancel medusa halo for no reason. that sounds like a very reliable rumor /s /s
you like to repeat yourself after i made a comment about how apus are already using double the bandwidth of a dual channel setup today?
just repeat yourself for no reason, right?
what is your magical source, that socamm is a proprietary format, that is only allowed to be used by nvidia?
and what is your source, that socamm is more expensive than lpcamm/camm2?
anything?
just making things up?
is jedec just standardizing an nvidia only standard for funsies right?
steinfg@reddit
No, AMD could already do it if they wanted. Most Strix Point laptops are using soldered ram. 256 bit bus + 8 chips of soldered ram is cheaper and more compact than socamm. The only reason AMD doesn't go for 2x bandwidth is cost. They don't want strix point laptops to balloon in price.
Yeah, that happens when someone else is repeating the same false claim.
Jedec is also standardizing intel-only ultra-expensive MCRDIMM, so yes.
reddit_equals_censor@reddit
you are talking utter nonsense.
soldered on memory is not acceptable for an apu and indeed requires a solution.
the only way that you could possibly think otherwise, is if you are extremely anti-consumer and anti-right-to-repair.
steinfg@reddit
The solution is camm, and it's already gaining momentum. I never even said I want soldered memory, what are you talking about. I just said your dream of a $1000 quad-channel APU with upgradable memory is never happening. It's gonna be $3000 or regular bus width (128 lpddr5 / 192 lpddr6).
reddit_equals_censor@reddit
why?
tell me what is expensive about having double the memory bus width and 2 socamm modules?
is it the 5 us dollar increased price of the apu? (this is assuming we don't trade parts of the apu for higher memory bus)
or is it the dirt cheap memory modules, which are priced per GB, so 2 half-sized modules generally cost the same as 1 double-sized module?
so what magical fairy dust bs prevents amd from having a 384 bit lpddr6 apu at lower prices in the future?
it isn't the apu size, it sure as shit isn't the complexity of the motherboard.
so what is it?
there would be only one possible thing you could mention, which is amd's greed of wanting to massively overcharge for mid range apus of the future.
but that is a decision made by amd and assumes no meaningful competition as well, but it is NOT sth impossible to overcome.
and margins get thrown out of the window for the steamdeck 2 as that would be sold at or around cost again and it very likely will use a 384 bit memory bus.
the question is whether or not valve would want to be nice and use socamm for it, probably not, but oh well, guess what, that would be a dirt cheap "quad channel" device then.
it frankly is just absurd to think, that memory bus width for apus is gonna stay static no matter what, despite the 128 bit memory bus massively holding back performance of the apus and preventing options of increasing compute units a bunch to create a more well rounded apu.
it is just absurd to hold onto sth, just because it has been done like that for now.
steinfg@reddit
Everything except the keyboard I'd say 😁
You definitely live in a fantasy version of earth. That's an easy +$300, maybe more after the manufacturer margin. Have to recoup the extra litho mask cost.
Cheap ones are slower, we are going here for fast. But yeah they are cheaper than the rest of stuff here.
The PCB itself will increase in cost, it will have to include more layers.
so-camm is really expensive. It sacrifices any cost efficiency for the most compact form factor. camm is what all industry players agreed to, before nvidia came and demanded an expensive version that crams a buttload of ram next to their blackwell chips.
And after all those price increases this laptop becomes a niche product that loses to rtx 6070. So to recoup the R&D we have to throw another +$300.
reddit_equals_censor@reddit
why?
amd can have one socamm module on each side of the apu with memory controllers on each side of the apu to get the shortest traces, 0 cross over, and perfect separation.
or put differently: NO ADDED LAYERS! needed.
maybe you will still do it, because it makes things easier, but it clearly shouldn't be required.
what's the cost?
how is a socamm module more expensive or meaningfully more expensive than a same sized lpcamm module?
both have 4 memory chips on them.
so again why would a 32 GB socamm module be so much more expensive than a 32 GB lpcamm module?
i'd like to be proven wrong in my assumption, that it wouldn't be, so please provide me some evidence here.
steinfg@reddit
Connectors cost more (they have much finer pitch), PCBs for socamms cost more (finer traces, more layers for more complex traces), and smaller SMDs require a bit more complex assembly. I'm so tired talking to someone who believes in magical fairy dust (quad-channel for cheap, socamm in laptops).
Realistically thinking (I know, not your biggest strength), if medusa halo even comes out (latest rumors point to cancellation), then the best you can hope for is a $3K framework laptop with 2 camm modules, one on each side of the SOC. Cooling will be ass, but that's pretty much it.
reddit_equals_censor@reddit
one of the pre jedec spec socamm modules has 694 connections.
lpcamm has 644 connections. (ddr5 version)
or basically the same.
and the connector area is about the same.
where the heck are you getting the idea, that socamm magically uses a much finer pitch than lpcamm?
is that just sth, that you came up with?
no evidence provided that it costs more than the lpcamm design.
you make it sound as if putting lots of small smds next to each other is some expensive magical task.
as if you can't buy ssds full of tiny smds, or how about a 100 euro motherboard, that has the back of the socket stacked to the top with smds.
it is absurd to claim, that having lots of smds close to one another is a reason for prohibitively expensive production costs.
just absurd.
steinfg@reddit
The solution is camm, and it's already gaining momentum. I never even said I want soldered memory
reddit_equals_censor@reddit
this is nonsense.
first off, framework didn't try to get camm working on strix halo.
framework asked amd to see if memory modules of any kind were possible.
amd ran a bunch of simulations and sadly it would have come at such a massive bandwidth cost, that it would have made no sense.
but that doesn't explain why camm didn't work with strix halo, it just said that it didn't.
so the almost certain reason is actually, because the memory controller of strix halo was ONLY designed around soldered on lp memory.
it was never designed around using memory modules.
IF strix halo would have been designed around using 2 lpcamm or 2 socamm modules, then it would have been 0 problem at all is the reasonable assumption here.
amd didn't put a memory controller on strix halo, that can properly handle modules and that is almost certainly all.
strix halo is an example of a high performance apu.
high performance apus can be 1500 euros in a laptop as well.
strix halo happened to be the first in pc laptops and the price often charged doesn't reflect what it can cost.
furthermore cheaper apus will need in the future still higher bandwidth.
so your 1000 us dollar laptop can get its bandwidth needed for all its performance with 2 socamm modules.
as a reminder here apus with the exception of the playstation like consoles using gddr are VERY bandwidth starved.
steinfg@reddit
Nah, regular strix already sells for 1500, AMD will never eat profit margins. Strix halo is for $2500+ laptops
It does.
High bandwidth = expensive APUs. Cheap APUs will just remain cheap and low bandwidth (192b lpddr6)
That's a $3000 laptop lol. The most you can hope for in a $1000 laptop is a discounted strix point, not whatever you're imagining. I feel like you live in a fantasy land where everything is 60% cheaper than in the real world lol.
Playstation graphics portion is massive, and connected directly to GDDR6 , it's not an igpu. It's a whole ass dgpu with small integrated cpu 😆
Vb_33@reddit
What are the odds of us getting so-camm
steinfg@reddit
Zero
reddit_equals_censor@reddit
i have no idea to be honest.
if the desktop follows reason and logic, then we should get socamm like modules, WHEN they become necessary, which may not be with ddr6 on desktop, but ddr7 for the first time.
remember, that if there are no meaningful performance differences with ddr6, then it wouldn't make sense to get camm modules on the desktop YET with ddr6.
so we are going by the time frame, where it will be needed here, which could be years away possibly.
maybe amd will have the desire to push high performance apus on the desktop with memory modules, which could affect their decision making.
but yeah ddr6 on am6 could just be standard dimms and then we see what happens with ddr7 in 5 years after that or whatever.
while at the same time (hopefully) socamm gets used heavily in mini pcs and laptops.
maybe there will be more overlap with camm and dimm in the ddr6 era, which with socamm sized modules can absolutely make sense, as explained before NOT with camm2.
__
but yeah i don't know. i certainly hope, that we are getting a proper standard next.
the last standard to think about was nvidia's 12 pin fire hazard connector, which should have never made it past the drawing board, and nvidia has tripled down on the fire hazard by now despite the melted cards continuously coming in.
so yeah i have little hope in proper standards being chosen these days, or BAD standards getting pushed :D
(worth noting, that camm2 is SAFE, so it isn't in the same universe as nvidia pushing a fire hazard)
GenZia@reddit
iGPUs have always been just about to be 'fire' in the next few years.
I've been hearing this statement since Sandy Bridge, basically, because that's when Intel got serious about integrated graphics.
Personally, I only really used the IGFX on my SNB to encode videos and capture gameplay footage as my Fermis (plural) lacked one and software encoding was just too taxing on my Core 2 Duo.
Beyond QuickSync, the iGPU was basically useless to me and now, I don't even care as virtually all GPUs now come with encoders.
Gwennifer@reddit
I beg to differ, Lunar Lake is really quite competent on this front, and the big Vega mobile units are admittedly less performant but sufficient for a lot of gaming needs.
Caffdy@reddit
bruh, Strix Halo. Enough size. This guy is a joke
Homerlncognito@reddit
They're great on laptops for playing older games. For desktop they'll never make sense.
MaverickPT@reddit
...until your gpu (or drivers) conk out and that iGPU comes in clutch to either troubleshoot it or at least keep the system usable for a while, until the issue is fixed
Homerlncognito@reddit
Yes, but that's not really the context of this discussion. If you want to play games on a desktop, then a dGPU is simply a much better choice.
f3n2x@reddit
Or the exact opposite where Adrenalin opens every 5 seconds making the system unusable even if you don't actively use the iGPU until you rename RadeonSoftware.exe because AMD still doesn't know how to do software.
MaverickPT@reddit
I have a "driver only" install on my 7800X3D and never had issues with that. Maybe you could try that too?
f3n2x@reddit
Maybe, but I don't want to waste any more time on this to be honest. I've never missed an iGPU on AM4 and probably won't ever be using it on AM5.
Windowsrookie@reddit
Maybe for enthusiast high-end gaming like 4K or 240FPS+. But iGPUs are the direction the industry is heading. The current AMD iGPUs can play cyberpunk in a 25W handheld device. Apple M chips are performing quite well too. In another 5 years with DDR6, iGPUs will be more than powerful enough for most gamers.
Outside of reddit most gamers don't want to build gaming systems (and manage all the research and troubleshooting that comes with it). They just want to buy a device that runs their game, and iGPU's will do that.
Alive_Worth_2032@reddit
An iGPU that adheres to what makes the PC space the PC space - modularity and compatibility - will be constrained by memory bandwidth, since we are not going to throw more channels at CPU tiers where they aren't needed.
Cyberpunk is a five-year-old game. In 2015, on a laptop with a Skylake CPU, you could also play five-year-old games with acceptable performance. At least when drivers allowed for it.
They are in no way what we in the PC space refer to as an "iGPU". They are huge SoCs with novel memory subsystems and no modularity. They have more silicon dedicated to the GPU part of the SoC than even some mid range discrete GPUs.
In the PC space the iGPU has first and foremost been about what performance you can get for "free". From just utilizing what is already there. Bolting a giant ass GPU part to a CPU and making a large SoC that can't slot into existing modular infrastructure, increases costs. Sure you may be gaining other benefits, but cost is not one of them.
Living_Morning94@reddit
For most people it already is fire with Strix Halo.
Hopefully prices will come down... but as long as people are buying these stuff, paying premium for the 128gb model so they can run llm inference on it then there's very little reason for amd to do so.
Strazdas1@reddit
Ah yes, a 200 dollar GPU performance in a 1300 dollar chip is fire.
Living_Morning94@reddit
Which 200 dollar gpu has 128gb of ram to run llm inference on?
ResponsibleJudge3172@reddit
Quite irrelevant for OP talking about target market
steinfg@reddit
"Hopefully prices will come down..."
It's a giant 40-core igpu strapped to 9700X/9900X cpu, I have no clue why people expect it to be cheap.
Low enthusiasm for Strix halo proves that people don't want good iGPUs, they want cheap iGPUs. And AMD understands that now. GPU portion of amd laptop SOCs will be low priority from now on.
ResponsibleJudge3172@reddit
Even the performance is not as amazing when you actually look at the specs
Pimpmuckl@reddit
I wouldn't say AMD understands it, more so they actually have money to tape out more specific designs.
Just like Nvidia having like up to 10 different chips per GPU generation (AMD usually had like 3?) there's now a fair few options for mobile offerings they are getting ready.
And given how absurdly expensive the upfront costs of designing a chip are, it took a while for AMD to have the cash to do so.
Now we no longer have a "one size fits all" laptop die, so it makes sense to cut the igpu down when you also offer chips with a much larger igpu as well
Vb_33@reddit
Strix Halo gets outperformed by dGPUs in the same price bracket, which makes it pointless for gaming.
Living_Morning94@reddit
So? It's not really for gaming. It's 95% work with llm and 5% gaming
HuntKey2603@reddit
"They don't work for my usecase so you should hate them too" is peak blind r/hardware posting.
GenZia@reddit
And exactly what gave you that impression?!
I merely said I don't "care" about iGPUs, and indifference can't exactly be classified as "hate."
There's no need to make disingenuous straw man arguments to soothe your feelings (or whatever it is that you're trying to accomplish here) or mock an online community just because their views don't align with yours.
We are humans, after all, not a hivemind.
kingwhocares@reddit
The Ryzen 5 8600G igpu isn't bad. Very good for a new budget build.
Shadow647@reddit
It's a hunk of shit
Source: I have one (in my home server)
Vb_33@reddit
This is true but I'd like to add that in this era we now have handhelds which rely strictly on igpus so faster memory will be fantastic, the only thing better than this is seeing more actual chip maker competition in the handheld market, would love to see more Intel and Nvidia handhelds.
slither378962@reddit
And they still won't replace low-end GPUs!
SherbertExisting3509@reddit
It will probably be too late for Nova Lake and Zen-6 to support
It would likely be ready in time for Zen-7 and Razar Lake.
Dangerman1337@reddit
Razor Lake will still be iterative on LGA 1954. It's Titan Lake for Intel that'll be DDR6 most likely in late 2028.
Caffdy@reddit
given AMD 22-24 months iterations, seems like AM6 is gonna be on the DDR6 train in late 2028 as well
Dangerman1337@reddit
I mean the rumour for Zen 7 is A14 so yeah probably 2H of 2028, can see them doing both Zen 7 & Zen 7 X3D because X3D is very popular + push against Intel if Intel ends up doing Q4 releases for non-cache and then Q1 (say Q1 2029 for TTL) for extra cache variants.
I definitely think AMD is seeing how well X3D is doing and will do a single 16-Core CCD X3D (possibly with 128MB of additional 4nm cache if Zen 6 X3D will start having 96MB on 6nm) ASAP with Zen 7, hell I wouldn't be surprised if AMD starts doing X3D with a Single CCD day one of a new architecture release on Desktop.
Exist50@reddit
Razer Lake will just reuse the NVL SoC. So the first real chance to intercept would be a new socket for Titan Lake in '28-ish.
Chicag0Ben@reddit
It’s still in a weird spot since Nova Lake will be 6-12 months too soon for ddr6 and likely on a new socket. They would have to do the LGA1700 thing of making Nova Lake support both ddr5 and ddr6, introduce new motherboards with Razar Lake or whatever comes after Nova, and likely make that support just ddr6.
Vb_33@reddit
Just do the Arrow Lake thing of only supporting Arrow Lake in 1 socket.
Pimpmuckl@reddit
Given their financials, that seems like the obvious choice.
Consumer goodwill can be recouped by better end user pricing that they need anyway if even half of the ridiculous clock speed rumours of Zen 6 are true.
jhenryscott@reddit
Intel is a good example of a short-term-value spiral. They are extraordinarily hard to get out of. Worst case, it ends with private equity draining the last of the company's life, with a new socket every generation to squeeze short-term value out of forced upgrades until all the clients are sick of it.
SherbertExisting3509@reddit
Intel sat on their laurels and grew lazy.
Their Haifa Israel CPU team grew complacent, rotted from the inside, and couldn't keep up with AMD after they started competing again with Ryzen.
This was the team that designed Conroe (Core 2) and Sandy Bridge.
They grew their core's out-of-order window and die area size with Sunny Cove and Golden Cove
It grew so much that 4 Gracemont cores (Skylake IPC) were only slightly bigger than 1 GLC core.
Pat then forced them to design a core with modern methods i.e. from sea of fubs to sea of cells. They only managed a 17% IPC uplift in 3 years
For context, ARM managed to get a 15% IPC uplift in ONE year from the Cortex X4 to the Cortex X925
Their ONLY saving grace is their E-core (Intel Atom) team who achieved a 38% INT and 68% vector IPC uplift in 3 years and their cores use a third of the area of a P-core (1.7 vs 4.5mm2)
That's why Intel tasked them with designing a Unified Core based on the E-core design in Nova Lake (Arctic Wolf) it could be ready in 2028
In the meantime, the P-core team is designing Griffin Cove, which MIGHT implement some features of the canceled Royal Core project. How many end up in the final product depends on their competence. It COULD come in 2027.
callmedaddyshark@reddit
I live my life a quarter~~ mile~~ly report at a time. Nothing else matters
CatalyticDragon@reddit
Wild.
EPYC servers with 12 memory channels hit 614 GB/s per socket, so dual-CPU systems are already at ~1.2 TB/s of total bandwidth. DDR6 at these speeds would mean anywhere from 1.7 TB/s to 3.38 TB/s of total bandwidth.
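Those totals fall out of the simple peak-bandwidth formula. A minimal sketch (the 8800 and 17600 MT/s DDR6 rates are assumed endpoints of the speculated range, not confirmed specs):

```python
# Peak DRAM bandwidth = channels * bus width in bytes * transfer rate (MT/s).
def peak_bw_gbs(channels: int, bus_bytes: int, mt_s: int) -> float:
    """Per-socket peak bandwidth in GB/s."""
    return channels * bus_bytes * mt_s / 1000

# Current EPYC: 12 channels of 64-bit (8-byte) DDR5-6400.
per_socket = peak_bw_gbs(12, 8, 6400)             # 614.4 GB/s per socket
dual_socket = 2 * per_socket / 1000               # ~1.23 TB/s total

# Assumed DDR6 range, same channel count and width (speculative rates):
ddr6_low = 2 * peak_bw_gbs(12, 8, 8800) / 1000    # ~1.69 TB/s
ddr6_high = 2 * peak_bw_gbs(12, 8, 17600) / 1000  # ~3.38 TB/s
print(per_socket, dual_socket, ddr6_low, ddr6_high)
```
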
HuntKey2603@reddit
NPUs salivating.
CatalyticDragon@reddit
Yeah. And with server CPUs reaching 192 cores CPU inference is more than workable.
HuntKey2603@reddit
I was joking, as I said. Isn't GB200 like 500+ TB/s?
Double_Cause4609@reddit
That's a bit nuanced. I guess what you're asking is "Well, there's these really big GPUs with all these specs, and they're already not sufficient for inference workloads at scale in the cloud, so how could a CPU be relevant?"
But that's not quite fair. CPUs have a lot of advantages in how you can program them at a low level (they handle branching, sparsity, etc. better), and they're better suited to certain kinds of workloads (small models, for example, are a lot easier to program for them, since CUDA kernel launches would occupy most of the execution time).
CPU inference is already viable in a roundabout way in certain situations (if you control for dollars spent, there's this weird zipper like effect where CPUs and GPUs trade off for best performance per dollar in the low end), and there's a lot of cases (like having really complex models) where having an easy to program device really makes a difference.
I believe that, measured by workloads run, CPUs actually handle the majority of AI use currently; it's just that they're running a lot of "boring" algorithms like traditional machine learning, fraud detection, etc.
CatalyticDragon@reddit
Nope. The CPU-to-memory bandwidth on a GB200 NVL72 is 512 GB/s, which is already lower than EPYC servers. The bandwidth on the GPU side is 8 TB/s (same as the MI355X).
msolace@reddit
Not the part of the system we have the largest bottleneck on...
We need more PCIe lanes, faster lanes, and more ability to run local LLMs without impacting the rest of the system. So unless DDR6 also comes with triple the memory capacity for cheap, dunno.
TheComponentClub@reddit
Surprised they’re already talking mass adoption for 2027. Still feels like DDR5 hasn’t fully taken over, especially on consumer boards.
JuanElMinero@reddit
When was the last time you saw the release of a main line DDR4 board?
TheComponentClub@reddit
It’s not that DDR5 isn’t readily available, it’s more the affordability. Most people I know haven’t upgraded yet because prices are still a bit high and DDR4 still does the job.
JuanElMinero@reddit
It might also not make it to 2027 for all consumers.
Example AMD:
Going by the roughly 2 year release cycle of their latest platforms, Zen 6 is probably a mid-2026 release and still on DDR5. Unless there's a larger shift, their first DDR6 platform might well be 1H 2028. Datacenter parts are earlier, as per usual.
neutralityparty@reddit
I'll finally build my gaming pc
ConsistencyWelder@reddit
I'm more excited by CAMM2 becoming part of the standard. We need this to become the new normal, it's better in so many ways.
Slasher1738@reddit
Me too, but for SO-CAMM.
rilgebat@reddit
I'm presuming by 4 channels it means what most people refer to as "sub-channels", as with DDR5 splitting 1x64b into 2x32b. So presumably, at 24 bits per channel, DDR6 must have finally adopted ECC as standard?
Verite_Rendition@reddit
No. It means they're likely doing roughly the same thing as LPDDR6 and using a non-power-of-2 channel size. In this case, a 96bit module split up into 4x 24bit channels, rather than a 64bit module split up into 2x 32bit channels.
rilgebat@reddit
Unless I'm misunderstanding the way memory and memory accesses work, DDR5 halved the width but doubled the burst length to keep reaching the cache line size. Surely a 24b channel size would result in a fundamental inefficiency, as there would be no way to reach the cache line size without undershooting or overshooting.
steinfg@reddit
24b channel × 24 burst length = 576 bits per burst, of which 64 are ECC or metadata, so 576 − 64 = 512 bits are data, which fits perfectly into a 64-byte cache line.
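The arithmetic checks out; as a quick sanity check (the 24-bit channel and burst-length-24 figures are from the comment above, and are still rumored for DDR6):

```python
# Rumored DDR6 channel layout: 24-bit channels, burst length 24.
channel_bits = 24
burst_length = 24
burst_bits = channel_bits * burst_length  # 576 bits moved per burst
metadata_bits = 64                        # ECC/metadata portion of the burst
data_bits = burst_bits - metadata_bits    # 512 bits of payload
cache_line_bits = 64 * 8                  # standard 64-byte CPU cache line
assert data_bits == cache_line_bits       # one burst fills exactly one line
print(burst_bits, data_bits)
```
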
Verite_Rendition@reddit
You are correct. If it's like LPDDR6, then the rest of the data (32 bits per 12-bit sub-channel) is for metadata.
https://www.jedec.org/sites/default/files/Brett%20Murdock_FINAL_Mobile_2024.pdf
Mind you, the article here (which cites another article from the Chinese press) is highly speculative, so this item should be treated as a rumor rather than news.