Ryzen 9000X3D leaked by MSI via HardwareLuxx
Posted by the_dude_that_faps@reddit | hardware | View on Reddit | 252 comments
So, I'm not linking to the article itself directly (here: https://www.hardwareluxx.de/index.php/artikel/hardware/mainboards/64582-msi-factory-tour-in-shenzhen-wie-ein-mainboard-das-licht-der-welt-erblickt.html) because the article itself is about a visit to the factory.
In the article, however, there are a few images that show information about Ryzen 9000X3D performance. Here are the relevant links:
- https://www.hardwareluxx.de/images/cdn02/uploads/2024/Oct/proven_byte_os/msi_factory_tour_shenzhen_2024_article_068_1920px.jpg
- https://www.hardwareluxx.de/images/cdn02/uploads/2024/Oct/lucid_server_6a/msi_factory_tour_shenzhen_2024_article_069_1920px.jpg
- https://www.hardwareluxx.de/images/cdn02/uploads/2024/Oct/supple_circuit_r2/msi_factory_tour_shenzhen_2024_article_070_1920px.jpg
- https://www.hardwareluxx.de/images/cdn02/uploads/2024/Oct/modest_engine_57/msi_factory_tour_shenzhen_2024_article_071_1920px.jpg
There are more images, so I encourage you to check the article too.
DogAteMyCPU@reddit
Looks like my 5800x3d lives on
SharpMZ@reddit
I wish I had bought one when I could; the X370 motherboard I got 7 years ago supports it just fine. I guess I really should finally grab a 5700X3D while they are still available. After a 1700 and my current 3700X, that would be a pretty decent upgrade and still competitive.
I'll ride this new generation of CPUs out until we get AM6 and hope this old dog of a motherboard works for 10 years or so.
Lyonado@reddit
Yeah, the AliExpress $125 ones looking real tempting
Swatieson@reddit
Bullshit. Link please.
Lyonado@reddit
Looping back on this: I ended up getting one for $123. $14X base price, a $12 discount code, and then $20 off buying with Klarna.
stunt_penis@reddit
SZCPU got me mine. $135 + tax - $15 promo code = $137. Got it installed last week, works great. Took about 2 weeks to get to me.
hampa9@reddit
I got one for that price a few weeks ago there.
Swatieson@reddit
Oh yeah the usual "I got it this low but nobody else can" BS.
Mind you, OP is implying that price is available NOW.
bow_down_whelp@reddit
Cool the jets
Lyonado@reddit
Lemme see if I can dig it up, I think it's $150 and then someone had a $25 off code
Swatieson@reddit
So BS as usual.
Molestador@reddit
https://www.aliexpress.us/item/3256806921974699.html?spm=a2g0o.order_list.order_list_main.23.21ef1802yfBQKj&gatewayAdapt=glo2usa
the price fluctuates; i got mine at $135 pretty recently. lots of other people here on reddit confirmed it legit before i ordered.
you could try having better manners.
kikimaru024@reddit
What AliExpress CPUs?
All I see are €182 and up.
Lyonado@reddit
I think I misspoke and was talking about the cheaper ones at around $150 that you can stack a discount code on top of.
iFenrisVI@reddit
I'd get one from there, but it only ends up being like $30 cheaper in my currency, so I might as well just grab it locally and get it quicker.
Lyonado@reddit
Yeah, for sure.
No-Actuator-6245@reddit
The only concern I have, as someone with a B450 board and a 5800X3D, is whether PCIe 3.0 will be a limiting factor and by how much. Apart from that, I am comfortable I can drop in a 5080/5090.
shroombablol@reddit
techpowerup did a pci-e scaling test with a 13900k & 4090:
https://www.techpowerup.com/review/nvidia-geforce-rtx-4090-pci-express-performance-scaling-with-core-i9-13900k/
tl;dr: a negligible difference of a couple %
https://tpucdn.com/review/nvidia-geforce-rtx-4090-pci-express-performance-scaling-with-core-i9-13900k/images/average-fps-1920-1080.png
No-Actuator-6245@reddit
Thanks, I had seen that. My concern is the claims the 5080 will be faster than the 4090 and obviously the 5090 will be much faster so without reviews I’m concerned the gap will be greater. At this point it’s just an unanswered question than something that definitely will be a problem.
-CerN-@reddit
Doesn't make sense to get a 5090 if you're not going to pair it with a top CPU anyways. The 4090 is already CPU limited in most situations.
No-Actuator-6245@reddit
I can get enough FPS out of my cpu, it doesn’t have any problems pushing 240fps in the games I play at 1440p 240Hz and has an even easier time at 4k 120Hz/fps. What I can’t do is run the game settings above about medium with DLSS performance. I don’t need more fps, I need to be able to crank the settings up. I’m wanting a gpu that will do a good job for the next 4 years as I plan to skip a generation before upgrading. My 3080 has served me well for 4 years and totally skipped 4000 series. When I find my cpu/motherboard is a limitation it will get upgraded.
kyralfie@reddit
Have you seen the specs? It's absolutely not a given that 5080 will be faster than 4090.
No-Actuator-6245@reddit
I have. It is questionable, but being a totally different architecture, you can't compare the 2 generations on specs alone. If the 5080 doesn't beat the 4090, it will be the first time the new 80 class doesn't match/beat the outgoing 90/top tier. Given the 4080 had weak sales volume, which led to the 4080 Super having a lower price, I hope Nvidia realises that breaking the performance trend will require very competitive pricing to avoid another disappointing 80 class.
kyralfie@reddit
Well you definitely cannot but you can make an educated guess based on all the leaks. All I'm saying is that it's not a given that it does beat it.
First time? Wouldn't surprise me. The 70 tier used to beat the previous gen's flagship.
Plank_With_A_Nail_In@reddit
You can still buy 5800x3d second hand.
SharpMZ@reddit
They are really expensive second-hand here, it would probably make more sense to just get a new board, RAM and CPU instead. 5700X3D makes a lot more sense for the money.
BlueGoliath@reddit
Until developers release even more unoptimized games.
Kionera@reddit
Ngl the disappointing CPU uplifts on next gen helps with that too since devs can't just target a CPU that's significantly faster than current gen because it doesn't exist ~~unless you're cities skylines~~.
Z3r0sama2017@reddit
This is my hope. If the tech to brute force it doesn't exist, then they are forced to do work under the hood. It's why the 4090 being such a beast with a mighty uplift over the previous gen hurt everyone.
BlueGoliath@reddit
If a game needs more computational power than a 5800X3D then it's unoptimized.
BlueGoliath@reddit
Don't worry, Nvidia, AMD, or Intel will release some software solutions like DLSS but for CPUs that developers then use as a crutch.
lordlixo@reddit
There will probably only be enough incentive for us to upgrade when they release AM6.
reddit_equals_censor@reddit
yip ddr6, am6 and unified l3 cache 16 core with x3d cache will be the point.
am6 might also bring some more basic features back compared to HORRIBLE am5 lol :D
All_Work_All_Play@reddit
I'm OOTL what features did AM5 lose that AM4 has?
reddit_equals_censor@reddit
at the same price points or almost overall:
ecc support top to bottom, except msi (that was the case for am4, am5 was a shit show and only now gets a bit better)
sata ports! it was easy and cheap to get an 8 sata port board, and 6 sata ports was standard on am4, on am5 you can buy a 350 euro board with 4 sata ports and a middle finger.
8 sata ports STARTS at 490 euros... not making this up. there are 3 available 8 sata port am5 boards and the cheapest is 490 euros. actually it is 2, because one is just a different color version. and the other 8 sata port board is 1000 euros ;)
so basically there is one 8 sata port board available right now and it starts at 490 euros ;) hf
the cheapest usable 6 sata ports board on am5 starts at 230 euros (usable means, that graphics cards don't block the ports if you're wondering).
remember, that 6 sata ports was just the standard before hand.
but maybe you don't care about sata ports, well how about using 2 pcie devices, that require an x16 slot and you want both to run electrically at x8 to the cpu.
how about a debug segment display? oh that will cost you lol :D at am5 launch it was insane, now it is a bit better.
do you want 5 cents worth of audio jacks in an otherwise half empty i/o plate for the board?
remember the audio chips can all do 7.1 audio no problem and we had 5 audio jacks + optical for ages as the standard.
well how about frick you with am5: you get 2 audio jacks on the back now with 350 euro boards or higher, so you can't connect your surround sound setup for example :D
and other stuff, but here is a rant about this by gamersnexus:
https://www.youtube.com/watch?v=bEjH775UeNg
about how freaking horrible the removed features and the insanely priced artificial segmentation scam are.
itsabearcannon@reddit
If you need 8 SATA ports on a consumer motherboard, chances are you’re using it wrong.
Just get a used LSI 9300 on eBay for less than $50 and have a proper SATA HBA on whatever $100 motherboard you want, instead of trusting your data to some bargain basement onboard Marvell SATA controller.
reddit_equals_censor@reddit
now while that advice itself seems reasonable, it may not apply to lots of people.
in regards to audio. you may get a 150 euro 5.1 audio setup. i think back in the day the teufel 5.1 audio setup was for the price amazing. you WON'T spend 100 euros more when the audio setup cost you "just" 150 euros. 100 euros is a lot of money for lots of people.
and in regards to audio interference: in the manual of some of those 2 audio jack boards they show the dystopian way they may support 5.1 audio, which is using the front case audio ports. as in, the signal goes from the motherboard through the entire case to the front of the case, to then have 2 wires come out of the front of the case for no reason, IF your case has front audio to begin with. that certainly sounds like quite a terrible experience interference-wise.
and in regards to the hba. the lsi 9300. how are you using it, if you play on using 2 graphics card, both taking up 3 slots and thus taking up 6 full slots?
well you can't, unless you use a pci-e extender cable on one of the cards and having a case, that can mount a 2nd card vertically, while not blocking the first card in the standard way it is installed. and i believe the lsi 9300 would also preferably have 1.5 slots for it maybe, because you want/need a tiny fan on it as it expects server airflow i think.
clearly the proper solution is to build a zfs nas with a lovely hba. get a 16 port card to be ready for the future and be done with it, except now our proper solution is building an entirely new system. and worse you may not be able to afford running a 2nd system 24/7 due to electricity pricing and being poor as frick.
btw ebay is also not available to lots of people. for example people, who don't have a paypal or won't use paypal. so a new hba, that isn't leftover server hardware? uh....
also if you want to use a graphics card fully and you want the hba to bypass the chipset to remove a failure point, you need a board with 2 x8 electrical pci-e slots directly to the cpu.
wouldn't matter for a nas, because you just use whatever way to get a video output, but if you need to use the system for more than just storage you need a fast slot for your graphics card.
well that puts you at 400 euros for the motherboard already :D such bullshit.
____
again great advice generally, however stuff costs money and having basic features integrated certainly is how it still needs to be, especially when they cost near 0 or straight up 0 (5 vs 2 audio jacks on the i/o bracket may literally cost the same part wise) to have.
or like i said it may straight up not be possible to add an hba card, when you use 2 graphics card for example.
itsabearcannon@reddit
First off, nobody does this anymore. SLI support has been software-deprecated for essentially over a decade now, hardware-deprecated since the RTX 2000 series, and even older titles work far, FAR better on a single more modern (but still insanely cheap) card than two older SLI'd cards. The last "good" SLI configuration where you got good scaling on a lot of games was probably the GTX 285 back in 2009-ish, and nowadays you can find graphics cards people are giving away for free or cost of shipping that will outperform that setup.
Not sure what you think eBay is. PayPal is not the only payment option - you can use a credit/debit card, bank account, or many forms of direct payment without even having to have a PayPal account.
Also, I can find LSI HBA listings right now for the price I indicated that will ship anywhere in the world barring countries where it is literally illegal to import certain electronics from the US.
Again, if you're spending the money for a proper surround sound system, and not one cobbled together from five different brands of Goodwill speakers and wire you stole from a house under construction next door, chances are you've got money to get a used receiver from whatever local electronics resale site or store exists in your country. If you have a place to get cheap speakers, you have a place to get a cheap receiver.
reddit_equals_censor@reddit
that was supposed to be "plan on using 2 graphics cards" and not "play".
my bad.
and i know multi gpu has been bye bye for gaming for ages. i was thinking of uses like vms with pci-e passthrough, or applications that benefit from using 2 gpus, like blender, etc.
itsabearcannon@reddit
Okay let’s be real here.
If you’ve got money for two GPUs for virtualization or Blender workloads, you’ve got money for a purpose-made solution to that problem.
Plank_With_A_Nail_In@reddit
None of these are basic features though.
reddit_equals_censor@reddit
a 7 segment debug display is not a basic feature?
i guess you're working for the motherboard marketing people, if you actually wanna claim that lol :D
IANVS@reddit
All that penny-pinching and cutting down useful stuff just to fit more fucking RGB, print novels on the back of the board or cram more M.2 slots for a market where the vast majority of customers only use one, sometimes two...
I had a board from 2013 with a POST code display, Power, BIOS Flashback and Direct Key (to enter the BIOS) buttons, all USB3 ports, 8 SATA ports, a ton of PCIe ports and it cost me 140 EUR...and that was on the high side. Today I would have to pay probably at least 500-600 EUR for a similarly equipped board and I still wouldn't get all of that...but hey, at least it would glow like a goddamn Christmas tree and I could read marketing crap from heatsinks!
reddit_equals_censor@reddit
btw keep in mind, that all the rgb stuff is almost free, because leds and the rgb glow blend strips, etc... are also dirt cheap.
so i guess the only real feature added over time has been more m.2 slots on the board.
i mean hey sure, why not. but when you consider that boards now have the same or even FEWER total storage connections than they had before, it is quite an insult.
i think before it might be 1 or 2 m.2 drives and 6-8 sata ports, now it is 2-3 m.2 drives and 2-4 sata ports AND you pay more for even that. like i am not even thinking within the same price point in my mind when i compare those, but already throw some added money in for the new boards.
i can actually plug the board requirements i got with am4 into the am5 filters on geizhals. the results are 0 :D
ecc support, 2 pci-e x8 electrical slots to the cpu (so x16 slots that both run at x8 if both are used) with the first slot in the standard position, 8 sata ports, and we're already out :D
like we already threw out dreams of having 5 audio jacks and what not and burned the idea of a working csm (compatibility support module, important for certain stuff, including booting windows 7 if desired, or certain legacy hardware).
that's just what i bare-minimum need, and am5 at any price point, mind you, says NO! EAT SHIT! :D
also a random trend on boards now is to move the primary pci-e x16 slot one slot down. why? well, good question :D
because having it in the standard position creates no spacing problems at all. the standard position is the 2nd slot down from the 7 slots of a standard case to get a reference.
so what asrock is telling people now is, that instead of you being able to use their board with 2 x 2.8 slot graphics cards in a standard case, you now gotta buy a special case, that has 8 pci-e slots in the back, so that the bottom graphics card fits into the case now ;)
alternatively they could argue for the bottom slot to stay the same, but have the top slot in the standard position, so that a 3.8 slot graphics card does NOT block the 2nd slot and with ever bigger graphics cards, that would make sense, but why make sense, when you can just go wild and move things around instead :D
so the bar is quite low for am6 to jump over i guess :D let's see if board makers manage to take that jump :D
hope you don't mind the random rant about this shit motherboard industry, but hey, i guess it is in the spirit of gamersnexus as well :D
chapstickbomber@reddit
I bought a $700 X670E Hero and I get major GPU noise on the integrated audio even with an AX1600i and a dedicated breaker. A breakout 7.1 box works fine because it doesn't get nuked by the literally 600 GPU amps.
mrheosuper@reddit
Everything you said has nothing to do with AM5, it's not like AMD requires the cheapest board with 8 Sata must be $490.
All_Work_All_Play@reddit
You think AMD controls all those things?
reddit_equals_censor@reddit
they can or can not.
they generally don't.
they can require boards based on "chipset" to have a certain feature.
if amd wanted to, they could force motherboard makers to include a 7-segment debug display on ALL am5 boards.
they could force them also to include at least 6 sata ports and their orientation.
or, as a softer way, they could require it for the x870e sticker or whatever.
you want to be called that? alright, those are the requirements...
just like x870e boards are required to have certain usb capabilities compared to x670e.
amd is also in control of the chipset itself.
so how much io each chipset chip contains.
so you can certainly put partial blame on amd, or point out that amd could address the problem if they wanted to, but the main ones at fault are of course the motherboard makers.
All_Work_All_Play@reddit
Oh
Shogouki@reddit
You mean the little numerical LED that displays mobo debug codes?
reddit_equals_censor@reddit
YES,
as the gamersnexus video points out, getting a motherboard with that AND one that boots docp without issues cost them 500 us dollars!!!
it is a basic debug function to have and it also saves everyone money, because it means less returned motherboards, less support calls, less time to troubleshoot for system builders, etc... etc...
so it is crazy, that they are trying to segment the products with dirt cheap debug functions, that save everyone money in the chain.
just insane as gamersnexus points out and i agree.
Shogouki@reddit
Leaving basic features like that off of every mobo that isn't their top model pisses me off so much.
ezkeles@reddit
price
Swatieson@reddit
Disclaimer: I just bought one 5800x3d.
Why? The 9800x3d will be significantly faster and it seems like a good upgrade if you are a gamer.
feyenord@reddit
It still has enough power, but the problem is you're stuck with slower PCIe lanes for your GPU and NVMes. It's kinda planned obsolescence.
RamonaNonGrata44@reddit
Man, I'm absolutely livid. I contemplated upgrading to a 7800X3D back when you got Starfield Premium Edition with it. Didn't bother then, purely because I couldn't be bothered to rebuild my PC. Recently I thought I'd upgrade on the next game bundle; then they launched the Space Marine bundle, so I thought 'awesome'. There were loads of deals for it at £329, but they only lasted the weekend before going up to £350, so I waited, but now they've gone up to £379!
Now we find out after all this time that it's just a 10% upgrade. Should have upgraded a year ago and forgotten about it all!
JakeTappersCat@reddit
Yeah you should have
RamonaNonGrata44@reddit
Yeah, it was the perfect timing to extract the maximum value. Well-priced upgrade, get four years out of it. Then upgrade the CPU when the 11800X3D came out, and get a further four years out of it.
Now the value's just all over the place. The whole reason for upgrading was because I've got a standard 5800 that I pulled from a prebuilt, and trying to run 4K at high framerates, you just get so many drops!
Klinky1984@reddit
It really sounds like you don't want to upgrade. Don't upgrade unless you really think you'll use it.
Aware-Evidence-5170@reddit
You already waited this long
May as well wait for the 10800X3D :)
Vornsuki@reddit
This is really promising as someone who is looking to upgrade from an almost 10yr old machine (i5-6600k with a GTX 1060 6g)!
Folk keep saying to just pick up the 7800x3d but it's either completely sold out up here (Canada) or it's from some 3rd party selling it for $800+. It's about $630 from the retailers up here if it ever does come back in stock.
With a decent price, and actual stock, I'll be picking up a 9800x3d before year's end.
raydialseeker@reddit
I'd get an R5 7600 + B650E (PCIe 5.0) + 32 GB DDR5-6000 and a used 4070 Ti Super/4080 or a new 5080 depending on pricing. That gives you the best long-term upgrade path on AM5, and at 4K the difference between a 7600 and a 7800X3D is nearly nonexistent. It'll let you upgrade to the final AM5 X3D product later, instead of compromised Zen 5 or the out-of-stock 7800X3D now.
snowflakepatrol99@reddit
Who said he's using a 4k display? If you are on 4k, always go for the better GPU because that's far more important but if you are on 1440p or 1080p then 7800x3d is always the better purchase. 7800x3d is the clear CPU choice to buy unless 9800x3d is just as cheap and becomes available soon. It has by far the best upgrade path because every single GPU is bottlenecked by it so you'd only ever need to upgrade GPUs. 7800x3d is AMD's biggest mistake. The CPU is just too good. The soonest time for people to upgrade would be 10800x3d and that's if it can be run on their b650 boards and if the person is a competitive gamer and wants even better frames and even better 1% lows. Otherwise you can keep it for 5 years while being only a few percent below the best.
Z3r0sama2017@reddit
I mean the 7800X3D is still a great pick at 4K if you play a lot of simulation games. I know Zomboid and Rimworld really like that V-cache.
_OVERHATE_@reddit
Same situation but with a 7700k and 1080!!
Everyone tells me to go for the 7800X3D but it's sold out in Sweden, and other retailers are gouging its price like crazy, no thanks.
I'll just preorder a 9800x3d
Crusty_Magic@reddit
Similar setup here, 3570K and a 1060 6GB. Can't believe how long this setup has lasted me, but I'm ready for an upgrade.
Standard-Potential-6@reddit
7800X3D used is likely the move, unless you need a warranty or are very averse to used parts.
CPUs are probably my favorite part to get secondhand honestly, saved a couple hundred each on the 3900X and 5950X and both are still doing great.
S_A_N_D_@reddit
The used part market in Canada isn't the best compared to the US.
Plank_With_A_Nail_In@reddit
Buy from a US seller then.
S_A_N_D_@reddit
Once you add cross-border shipping you're not too far from the new price.
wogIet@reddit
I upgraded from a 6600k and gtx 970 to a 7800x3d and 7900xtx. Life changing
Plank_With_A_Nail_In@reddit
Not actually life changing though as you still doing all the exact same things as before. Still playing the exact same games with the exact same story and gameplay.
SleepTakeMe@reddit
Weirdo
cramsay@reddit
I had an i5-6600 in my old PC and mate.....that piece of shit was sluggish in everything like 5 years ago when I upgraded to a 3950x. It's not even about games, just everyday usage (web browsing, etc.) is so much more responsive.
bow_down_whelp@reddit
My daughter is still using that processor lol
Drewbacca__@reddit
Also hoping to upgrade my 6600k in the next 6 months!
Plank_With_A_Nail_In@reddit
Wait, you don't need to buy it today ffs.
TheJoker1432@reddit
Same here on my old 4570
TheCookieButter@reddit
Feel like I'm about to be in an awkward spot with my 5800x (non-3d).
Improvements over AM4 don't seem drastically meaningful for me, playing at 1440p or 4K with a 3080. Yet the 5700/5800X3D are both not worth paying for coming from my current CPU.
Z3r0sama2017@reddit
Yeah i have 5950x and it's good enough for 4k gaming and good productivity so 5800x3d would hurt more than it helps. Gonna hold off till I see how the 9950x3d rocks it in benches.
Dea1761@reddit
I am in the same spot. 5800x and 3070 (all I could get at MSRP at the time). I play at 3440 x 1440 on an Alienware DW. I can afford top end components, but my playtime is limited and I feel like the price to performance ratio has not been worth it. That being said I have held off playing a few games like cyberpunk, due to wanting to play it as a high end experience. I might give it one more generation.
TheCookieButter@reddit
Very similar, I've also held off playing Cyberpunk. I also had the same concerns about playtime, I've been putting a lot more time into retro games on my SteamDeck
Machevelli110@reddit
I'm in the same spot. I've got an upgrade itch though, and I really wanna scratch it. I'm at 1440p with my 4070 Ti and I think running a 9800X3D would be worth it. It would cost me around £400 for the upgrade after selling old gear, but these boards have that extra 4/8-pin for the CPU which I haven't got on my current PSU, so it's probably £550 all in. Worth it?
Swatieson@reddit
I downgraded from a 5950X to a 5800x3d and the smoothness is significant.
(the big chip went to a server)
Kittelsen@reddit
That's a nice tip, must have been excellent service at that restaurant.
snowflakepatrol99@reddit
5800x3d is a big boost to gaming. It makes games far smoother. 7800x3d makes it even better. Saying that you're in a weird spot when you have 2 clear upgrade paths that would make your gaming experience far better is weird. If you feel like your performance is good enough then that's fine but you have very easy and cheap upgrades that will make your PC a lot faster.
TheCookieButter@reddit
7800x3d would require a new motherboard and RAM, making it an expensive upgrade.
5700X3D would be a slight downgrade in productivity areas while a good upgrade in lots of games. Questionable whether that's worth £175.
EasternBeyond@reddit
There is no need to upgrade unless the processor isn't doing what you need. I have a 5700X with a 4090 and it's perfectly fine for my use case. I think I will skip until Zen 6 or Intel Core Ultra 3 in 2 years.
TheCookieButter@reddit
Yeah, that's my plan. Moving country next year, but I'll probably pack my mobo with CPU and RAM. Depending on the 5000 series GPUs I'll bring my 3080 along or not.
SkylessRocket@reddit
You're in an awkward spot because you don't feel the need to upgrade?
TheCookieButter@reddit
Just that when I upgrade I will have to upgrade everything. I missed having the best of AM4 which would have prevented that, but swapping from 5800x to 5700/5800x3D wouldn't be worth it now.
Dreamerlax@reddit
I'm sticking with the 5800X for several more years to come.
kyralfie@reddit
5700/5800X3D is a solid generational improvement over 5800X. It's def worth it.
baksheesh77@reddit
I also run a 3080 at 4k. I went from 5600x to 7800x3d this year, some games that could noticeably be on the chunky side or have intermittent framedrops seemed to perform a lot better. I was really satisfied with the upgrade, but I only paid $350 for the CPU.
RedditNotFreeSpeech@reddit
I had the 5600x and made the jump to 7800x3d during a microcenter starfield bundle. Even with a 5600 I was questioning it
imaginary_num6er@reddit
Looking like Zen 5% X3D
IKARIvlrt@reddit
No, those games are GPU limited, so it's more like a 10%+ increase.
SmashStrider@reddit
Zen 15%
IKARIvlrt@reddit
The GPU is the bottleneck in those games, so the uplift is probably more like 10% or more, which is pretty nice.
Sopel97@reddit
ok but how about factorio?
the_dude_that_faps@reddit (OP)
Is this a meme? I'm bad at these things, but I think it's a meme.
Sopel97@reddit
no, it's a legit question about a very popular game that shows some of the highest benefits of x3d
clingbat@reddit
Because if you play a larger map with a denser build, the cache advantage suddenly vanishes and performance drops to the same as, if not worse than, non-X3D chips. All these great results are on smaller maps, so frankly it's kind of bullshit, and most of the reviewers are aware of this.
Sopel97@reddit
false https://factoriobox.1au.us/results/cpus?map=f23a519e48588e2a20961598cd337531eb4bf54e976034277c138cb156265442&vl=1.0.0&vh=
jaaval@reddit
That seems to show x3d brings limited benefits and no longer tops the charts.
Sopel97@reddit
? it shows that x3d still gains roughly 20-30%
jaaval@reddit
Sure, more cache is better against similar compute power (that's kind of a no-brainer; I wonder why you ever thought that would not be the case), but it's no longer the strongest option, and extra compute power outweighs the extra cache.
Sopel97@reddit
I know this thread is long, but if you follow it up a little you'll find the claim that started it.
BatteryPoweredFriend@reddit
Anything that minimises the penalty of cache misses will improve UPS in Factorio. Larger caches, better prefetching, faster ringbus/fabric speeds, tuned RAM timings, etc. they'll all help.
People giving blanket statements like "big base no difference" are kind of burying the lede, as the Factorio devs have talked about this before on their blog. The important part they specifically mention is that all active objects are checked during each tick update.
So if a base is so big that you're constantly paging out into DRAM, then the tick rate's weakest link and bottleneck will obviously be how long it takes to fetch the data from DRAM.
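To make that concrete, here's a minimal sketch (Python/numpy, not Factorio itself; the working-set sizes and the "update" are hypothetical stand-ins for per-tick entity work) of how the effective throughput of a touch-everything-per-tick loop collapses once the working set outgrows the last-level cache:

```python
# Minimal sketch: time a tick-style "touch every active object" pass over
# working sets that fit in a big L3 (e.g. 96 MB on X3D parts) vs. ones
# that spill out into DRAM. Sizes and the update are stand-ins.
import time
import numpy as np

def effective_gbs(working_set_bytes, passes=20):
    data = np.random.rand(working_set_bytes // 8)  # float64 "entity fields"
    start = time.perf_counter()
    for _ in range(passes):
        data += 1.0  # one "tick": read-modify-write every element
    elapsed = time.perf_counter() - start
    return passes * working_set_bytes / elapsed / 1e9  # counts reads only

for mb in (8, 64, 512):  # inside L3 / near X3D L3 size / deep into DRAM
    print(f"{mb:4d} MB working set: {effective_gbs(mb * 2**20):6.1f} GB/s")
```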
Sopel97@reddit
I don't understand why people have been saying that the difference for larger bases diminishes, really, because it's still provably there https://factoriobox.1au.us/results/cpus?map=f23a519e48588e2a20961598cd337531eb4bf54e976034277c138cb156265442&vl=1.0.0&vh=
BatteryPoweredFriend@reddit
It's because the people claiming it makes no difference or is worse than the Intel option have never actually played the game.
And since when used as some sort of "gotcha!!!" evidence out of context, it conveniently fits into the narrative certain people here have been pushing. No different to someone claiming the gaming performance difference between a 3600 & 14900K is minimal, only because they tested with a GTX 280.
derpity_mcderp@reddit
IIRC that was only because the test factory was small enough to be processed in-cache, which made it really fast. However, when testing a large late-game factory, the cache was mostly irrelevant.
tux-lpi@reddit
That's sort of true, but it's also jumping from one extreme to another. That "late game" factory is 50k SPM (science per minute). That's insanely big.
One of the most hardcore factorio youtubers recently did 14k SPM (while aiming for 20k and failing). And it took weeks of mind-numbing effort, from one of the most experienced people.
It's not just a late game map. Approximately 0% of people will ever have to worry about a map this big!
the_dude_that_faps@reddit (OP)
That's a fair point. I don't play the game. It felt to me that the usual claims of speed for Factorio were best case scenario rather than realistic.
tux-lpi@reddit
Yeah, I just think it's somewhere in the middle! It's definitely not interesting to benchmark tiny maps, because you don't have performance problems on tiny maps anyway.
But picking one of the biggest maps ever made is also not a great benchmark, I feel, since people never get anywhere close to that point.
Sopel97@reddit
we're not comparing with intel https://factoriobox.1au.us/results/cpus?map=af7eda7ffc9a34b083ba82bfefb4178c791c8d04ce3e5b3cc6dd999605e8d509&vl=1.0.0&vh=
the_dude_that_faps@reddit (OP)
This was what I was looking for. I mean, it's not entirely irrelevant given that it is still a good 10% faster than non-X3D. But the gap narrows considerably.
This time around Intel might be more at a disadvantage than AMD, considering they went with an off-die memory controller.
However, I haven't seen Factorio tests on zen 5.
Zednot123@reddit
Depends, it may be that latency isn't as relevant as the sheer bandwidth requirement at those map sizes.
It's really unfortunate that we have no really good way of monitoring the bandwidth usage of applications. It would give a very clear picture of what scales mainly with latency versus bandwidth.
kyp-d@reddit
DRAM read / write bandwidth is reported in HWinfo for my Zen3.
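For a rough per-application number on Linux, one option is to estimate DRAM read traffic from last-level-cache miss counts. A sketch, assuming `perf` is installed, the generic LLC-load-misses event is supported on your CPU, and one 64-byte line fill per miss (hardware prefetch makes this an estimate, not a measurement):

```python
# Approximate an application's DRAM read traffic as LLC misses x 64-byte
# line fills. Assumes Linux `perf` with the generic LLC-load-misses event;
# prefetchers make this a rough estimate, not a measurement.
import subprocess
import time

CACHE_LINE = 64  # bytes fetched from DRAM per last-level-cache miss

def estimate_dram_read_gbs(cmd):
    start = time.perf_counter()
    result = subprocess.run(
        ["perf", "stat", "-x", ",", "-e", "LLC-load-misses", "--", *cmd],
        capture_output=True, text=True,
    )
    elapsed = time.perf_counter() - start
    for line in result.stderr.splitlines():  # perf stat reports on stderr
        if "LLC-load-misses" in line and line.split(",")[0].isdigit():
            misses = int(line.split(",")[0])
            return misses * CACHE_LINE / elapsed / 1e9
    raise RuntimeError("event not counted; is LLC-load-misses supported?")

# Hypothetical example: a Factorio benchmark run.
print(estimate_dram_read_gbs(["./factorio", "--benchmark", "saves/big.zip"]))
```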
eight_ender@reddit
No, Factorio screams on X3D; it's legit one of the best ways to run really complex factories.
the_dude_that_faps@reddit (OP)
Didn't HU test complex factories in Factorio and the performance gap was severely diminished?
Sopel97@reddit
vs intel, yes
jecowa@reddit
The CPU doesn't deal much with the graphical load - a graphics-lite game isn't going to be easier for it than a graphics-heavy game.
I guess you already know the X3D processors are great for gaming. Well, they are especially great for simulation games. Maybe Factorio is kind of a meme, but it's also a simulation game that allows massive factories. I am interested in the X3D processors for Dwarf Fortress, a game that traditionally uses ASCII-style graphics.
Also, the games they were testing in the screenshots look like they might not have been the best test for an X3D processor to show off its abilities. They all look more like first-person-shooter-type games instead of simulation games. Also, when they compared the 9000X3D to the 9000 non-X3D, they used Cinebench instead of a gaming load that would have allowed the X3D to shine. It's like they don't want to promote their new X3D chips. Oh, I see now these are MSI's labs, not AMD's, so they might not care as much about making the new products look good.
III-V@reddit
It's kind of a meme, as you have to build a ridiculous factory to need to worry about your CPU being able to handle things, but it's also a very popular game and it's really interesting to see how it scales with big caches.
FitCress7497@reddit
Why do I feel like AMD and Intel shook hands to shit on us?
III-V@reddit
We are probably not going to see big gains for a while. The industry has more or less hit a soft wall. The economics are starting to become crap, interconnect resistance is increasingly becoming a major problem, and until the industry decides how to work around those problems, I would expect small gains. GAA-FETs and backside power delivery (BSPD) will bring decent gains, but that's about it, unless they manage to transition to CFETs without too much issue (highly unlikely).
Geddagod@reddit
I find it hard to believe the industry has hit a soft wall in terms of performance when Apple is just beating Intel and AMD by decent margins while also consuming dramatically lower power. I would imagine Intel or AMD would need rapid progress, or one large design overhaul, in order to create cores as wide and deep as what Apple is doing, while also sacrificing area or power in order to clock higher to achieve higher peak performance (which is what I believe used to happen in the past).
Apple may have hit a wall, idk, but based on how Intel and AMD are doing vs Apple, I believe they have plenty of room to grow.
All the problems you described here seem to be manufacturing problems, there's a lot of architectural improvements I think AMD and Intel could do to at the very least match Apple in perf and/or power.
admalledd@reddit
Apple's M-series ARM processors aren't so simply comparable to either AMD or Intel; people really need to stop claiming they are with near-zero understanding of the differences involved.
Again and again, the main comparisons between the M-series and Intel/AMD have not been on the same process nodes. When they are on comparable nodes, the differences shrink significantly, if not outright disappear, and start coming down to things more related to power targets and die area. Apple and ARM are not really competing that well, actually. Sure, they did a heck of a lot of catching up to modern CPU architecture performance AND they have a much easier time with low-power domains; that isn't anything to sniff at, but it isn't anything unique, just something mostly no one has cared about for x64, since it tends to come at the cost of high-end many-core performance. I.e., Apple's M-series as designed today likely cannot support more than, say, 28 cores, given its current interconnect.
Apple is "winning" by simply paying 2-5x the dollars per wafer to be first to the new nodes, and finally applying many of the microcode/prefetch/caching tricks that desktop and server processors have been doing for decades that ARM often wasn't wanting to for complexity/cost/power reasons.
TwelveSilverSwords@reddit
Let's compare Lunar Lake and the Apple M3.
Die area: M3 is 146 mm² N3B; LNL is 140 mm² N3B + 46 mm² N6.
SoC bandwidth (not sure about this one, but Lunar Lake is higher overall): M3 at 100 GB/s, LNL at 136 GB/s.
Power: M3 at 25 W, LNL at 37 W.
Note that the P-core area for Lunar Lake includes the L2 cache area. Even without the L2, it's about 3.4 mm² IIRC, which means it's still larger than M3's P-core.
Since both are on N3B, this is an iso-node comparison.
M3 trumps Lunar Lake in ST performance, ST performance-per-watt, MT performance, and MT performance-per-watt (source: Geekerwan), and Apple does it while using less die area.
So tell me, why are Apple processors superior? It's not due to the node. It's because of their excellent microarchitecture design.
admalledd@reddit
Against LL: the M3 has significantly more L1 per core, and I would be shocked if most CPU benchmarks could take advantage of (or even be aware of) LL's vector units/NPU the way they knowingly do on M-series. Geekbench is a great tool for quick, surface-level testing, especially "does MY system perform how it should, compared to other similar/identical systems?". Without per-scenario/workload details (such as those given via OpenBenchmark, etc.) it is difficult to ensure the individual tests are actually valid. Further, Lunar Lake's per-core memory bandwidth is... not great unless speculative pipelining is really working fully, which is "basically never" under short-term benchmarks, while the M3 has nearly 4x the memory pigeonholes for its speculation.
Another thing is the memory TLB and page size. Outside of OpenBenchmark's database tests, I am unaware of any test/benchmark (not saying they don't exist, just that I don't know of them) that takes into account the D$ and TLB pressure differences due to the 4 KB page size on x64 vs 16 KB on the M-series. It is known that increasing page size, merging pages ("HugePages"), etc. can greatly increase performance; from databases to gaming the gains are often in the 10-25% range... if the code is compatible or compiled for it. By default, any and all code compiled (and thus assumed to be running) on M3s takes advantage of 16 KB page sizes, while anything on x64 has to be specifically compiled (or modded) for it, and the OS has to enable HugePages/LargePages (due to compatibility concerns).
You are also missing comparisons to AMD's own modern Zen 5 chips, which are a node behind (N4X) yet meet or beat the M3 within single-digit percentages, which we can hand-wave as 'competitive enough'. AKA AMD at least isn't losing to Apple by decent margins, which is part of the thesis above that I am trying to refute. Intel (assuming we can trust the results, which I hesitate to, due to a language barrier and unfamiliarity with the tests run) being within 5-10% at all is not "being beaten by decent margins"; a decent margin is normally, and consistently, 10%+. On LL's power usage in those same benchmarks: again, they aren't comparing iso-package, and even if they were, that has never been the performance argument. If a vendor wants a super-low-power chip, that is possible (though Intel seemingly has never had a good history of doing so), but it often sacrifices higher-power and higher-core-count designs. LL's actual cores and internal bus are going to be reused in Intel's 80+ core server Xeon chips. Apple doesn't care and designs for their own max core counts of "maybe twenty?" and lives/suffers with that.
In the end, you are still parroting the exact reasons I am so tired of the "b-but Apple chips are so goood!" lines: they are being engineered for entirely different uses from the ground up, over and beyond the differences that ARM vs x64 has alone. AMD at least is nipping at Apple's heels whenever they get a chance on a node even close, and can scale their designs up to 384 threads per socket. Apple's designs are good, don't get me wrong, and are very interesting, but the gulf between them is far less than people keep parroting. Super-low idle power is just not where the money is for AMD, so while they do try (partly due to mobile/handhelds, partly since low idle power can free up power budget when going big), the efforts are not nearly as aggressive as what Apple is doing.
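(To make the page-size point concrete: a minimal Linux-only sketch of opting a large allocation into transparent huge pages, the closest x86-64 analogue to the TLB relief Apple gets from 16 KB pages by default. The buffer size is arbitrary and madvise() is only a hint the kernel may ignore.)

```python
# Minimal sketch, Linux-only: back a large anonymous mapping with 2 MiB
# transparent huge pages to cut TLB pressure, roughly what 16 KB pages
# buy Apple silicon by default. madvise() is a hint, not a guarantee.
import mmap

SIZE = 1 << 30  # 1 GiB working buffer (hypothetical size)

buf = mmap.mmap(-1, SIZE)        # anonymous, private mapping
buf.madvise(mmap.MADV_HUGEPAGE)  # request transparent huge pages
buf[:mmap.PAGESIZE] = bytes(mmap.PAGESIZE)  # touch it so pages fault in
```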
TwelveSilverSwords@reddit
M3 P-core:
- 128 KB L1d
- 192 KB L1i
- 16 MB L2 (shared)
LNL P-core:
- 48 KB L0d
- 192 KB L1d
- 64 KB L1i
- 2.5 MB L2
- 12 MB L3 (shared)
Intel is spending as much on cache as Apple is.
That's a good point.
I don't think there's any limitation in the CPU core itself that would prevent scaling to such large core counts. There's an ARM server vendor called Ampere, who makes 128 core CPUs. Then there's also Nvidia's Grace CPU, Amazon Graviton etc... So there's nothing preventing Apple from making a CPU with 100+ cores. Yes, they'll have to design a new interconnect to scale to that many cores, but that should be peanuts for them.
TheRacerMaster@reddit
Are compilers such as Clang somehow managing to compile generic C (such as the SPEC 2017 benchmark suite) to use Apple's NPU (which is explicitly undocumented and treated as a black box by Apple)? I would also be surprised if Clang was generating SME code - it's probably generating NEON code, but it's also probably generating AVX2 code on x86-64.
Geekerwan's testing showed that the HX 370 achieved similar performance as the M2 P-core in SPEC 2017 INT 1T. Both the M3 and M4 P-cores are over 10% faster with lower power consumption than the HX 370. This also lines up with David Huang's results.
There are definitely tradeoffs with designing a microarchitecture that can scale from ~15W handhelds to ~500W servers, but I don't see why it's unfair to compare laptop CPUs from AMD to laptop CPUs from Apple. I also don't see why it's wrong to point out that Apple has superior PPW in the laptop space.
firaristt@reddit
They are built solid and engineering-wise very good, but it's a totally different architecture. That's one big difference, so we can't compare apples to apples. x86 and RISC designs are different and each CPU has different capabilities, so die area and physical size aren't really comparable in this regard. You can't use an Apple SoC like AMD or Intel CPUs; the platform is locked.
SleepTakeMe@reddit
Apple built their own processor line to have complete control over the hardware backdoor. That's about the end of the story.
Edenz_@reddit
Yeah strange that the industry has hit a wall but Apple and ARM haven't.
Plank_With_A_Nail_In@reddit
Intel were giving this exact same bullshit argument before Ryzen dropped.
III-V@reddit
No they weren't. They were actually saying that they were going to outpace Moore's Law with 10nm and 7nm.
itsabearcannon@reddit
Ah, yes. The heady days when Intel still knew how to get a node out on time.
Kryohi@reddit
Is it with CFETs that we're supposed to see new big gains in cache density? That could definitely help.
scytheavatar@reddit
Zen 6 is supposed to address the "interconnect resistance", so it is more promising for performance gains. It seems AMD underestimated how quickly their old chiplet architecture was reaching its limits.
the_dude_that_faps@reddit (OP)
I think process node improvements are not as large and AMD built zen 5 in the same node family as Zen 4...
On the other hand, both Zen 2 and Zen 3 are N7, IIRC... I don't know man. I don't know.
Lucas8Fire@reddit
Physics.
Looks like we're going to hit a wall pretty soon.
signed7@reddit
Tell that to Apple, who's already way ahead of Intel/AMD in efficiency/IPC yet still keeps making 20% year-on-year gains.
TwelveSilverSwords@reddit
Apple M4 is cracked. Released only 7 months after M3, but with a 25% improvement.
Meanwhile AMD took 2 years to deliver a 16% improvement.
input_r@reddit
Yeah I think this is it, we're going to start needing new materials to see major gains in the next decade
skinlo@reddit
Gamers aren't that important. Look at the performance improvements for Zen 5 in enterprise however.
Baalii@reddit
If these results are real, it may lend credibility to the rumors that the 9950X3D will come with two X3D CCDs, and that the clocks are closer to the non X3D chips.
BWCDD4@reddit
Why? It's the same percentage jump. It's more likely they just did what you're supposed to do if you're gaming with an xx50X3D chip.
Which is: disable the second CCD / set the game's affinity to only the X3D CCD, and you get the same performance boost as the xx80X3D chip.
I don't know how that issue still isn't cleared up, or why reviewers and benchmarkers never corrected themselves.
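For reference, the affinity half of that workaround can be scripted. A minimal sketch using psutil (`pip install psutil`), assuming, as on a stock 7950X3D, that logical CPUs 0-15 sit on the V-cache CCD; verify your own topology before pinning, and the executable name is hypothetical:

```python
# Sketch of the affinity workaround: pin a running game to the V-cache CCD.
# Assumes logical CPUs 0-15 are the X3D CCD (true on a stock 7950X3D, but
# verify your topology); on Windows you may need elevated privileges to
# modify another process.
import psutil

X3D_CPUS = list(range(16))  # logical CPUs of the V-cache CCD (assumption)

def pin_to_vcache_ccd(process_name):
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == process_name:
            proc.cpu_affinity(X3D_CPUS)  # restrict scheduling to those CPUs
            print(f"pinned PID {proc.pid} to CPUs {X3D_CPUS[0]}-{X3D_CPUS[-1]}")

pin_to_vcache_ccd("game.exe")  # hypothetical game executable name
```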
PT10@reddit
Because that's too many hurdles to expect normal users to deal with
BWCDD4@reddit
Normal users shouldn’t and don’t buy an xx50X3D.
It’s a prosumer chip for people that don’t want to drop a mad amount on a workstation chip.
PT10@reddit
That's just not true.
wintrmt3@reddit
Have you met normal users? They just buy a pre-built computer at their price point.
PT10@reddit
There were plenty of pre-builts with 7950X3D and 7900X3Ds and people just bought them because they were the "best" or "top of the line"
account312@reddit
Because if the user has to manually fuck around with process affinity, there's enough blame to go around to hand some to both the os and the hardware.
Frequent-Mood-7369@reddit
It would also be a great "do it all" CPU that doesn't require buying a 285K and praying for 11,000 MHz RAM just to have gaming performance competitive with an 8-core X3D CPU.
COMPUTER1313@reddit
Don't forget a $700+ Apex motherboard with two DIMM slots for the best possible RAM OCing.
Lucas8Fire@reddit
I mean, having 1/3 the cache compared to an x3d means you need to find that performance elsewhere.
Faster RAM is one way to do it.
bmagnien@reddit
It's not insignificant that whereas the 7800X3D regularly outperformed the 7950X3D in gaming, the 9950X3D now outperforms the 9800X3D. So going from a 7800X3D to a 9950X3D gets you not only the performance uplift of the gen-on-gen advancements, but also the added performance of the 16-core chip (which likely comes from higher clock speeds and better-binned CCDs). The 16-core chips will most likely have more OC headroom as well.
Berzerker7@reddit
The 7950X3D has outperformed the 7800X3D in cache heavy games for quite some time now. In normal gaming it’s more of a tie.
bmagnien@reddit
That’s interesting. Do you have a game as an example? And does it still run off just 1 ccd or does it spread the workload across all cores?
Berzerker7@reddit
Yup. MSFS is the big one you can see here: https://www.tomshardware.com/reviews/amd-ryzen-7-7800x3d-cpu-review/4
It spreads the load properly to all cores. The terrain generation and cache heavy actions are on the 3D cache CCD while things like plane system simulation go on the faster cores CCD.
bmagnien@reddit
Very interesting. Was just playing the closed test for MSFS24 last night, would be perfect timing
Berzerker7@reddit
My 7950X3D has been great for 2020. Will probably be better when 2024 gets final.
NotEnoughLFOs@reddit
If you look at the MSI's gaming performance slide carefully, you'll see it's a mess. Left side is "16 core Zen5 vs 16 core Zen4". Right side is "8 core Zen5 vs 8 core Zen4". FPS is larger on the right side (8 core) and the 13% generational uplift in Far Cry 6 is on the right side as well. And then at the top it claims that the 13% uplift is for 16 core.
BlackMetal81@reddit
7800x3d will last another cycle, I see. Fine with me, AMD! :o)
SanityfortheWeak@reddit
I had the opportunity to buy a 7800X3D for $230 3 months ago, but I didn't because I thought it would be better to wait for the 9800X3D. Fuck my life, man...
daNkest-Timeline@reddit
My mental shorthand for this generation is this.
Zen 5 = 5% better.
battler624@reddit
Well more reasons to get the 7800x3d i guess
Except it's double its MSRP atm.
thekbob@reddit
Glad I got a Microcenter bundle deal. Board, RAM, and chip for $500...
kyralfie@reddit
I swear I'll become a mod of this sub just to ban Microcenter bragging. It hurts our Microcenter-less feelings! Their deals are more like steals.
the_dude_that_faps@reddit (OP)
The 7800X3D is around its original MSRP and rising, at least wherever I can purchase it. If I had known this a few months ago when I could've gotten it for around $280, I would've. But at the current price, if the 9800X3D releases similarly priced, I see the 9800X3D as a no-brainer.
Especially since it is running 400 MHz faster, giving it a much smaller delta to the non-X3D parts. I mean, the Cinebench score shows a massive improvement, so that's probably clocks.
JakeTappersCat@reddit
Shhh let them buy 7800 X3D for $330 used. It will be funnier when AMD drops the 9800 to the same price in a month or two
NKG_and_Sons@reddit
You, me, and everyone else t_t
DiggingNoMore@reddit
The 9800X3D is clearly better than the 7800X3D. Buying the 7800X3D instead would be intentionally buying worse hardware.
battler624@reddit
at what price tho?
If the 7800X3D goes back to below $300, then the 9800X3D absolutely can't catch up.
That's the idea.
DiggingNoMore@reddit
I don't follow. The 9800X3D is the objectively better processor. It doesn't have to "catch up", it's ahead and always will be.
battler624@reddit
I am saying it's probably not worth the price.
By comparison, the 9700X is objectively a better processor than the 7700X; why do you think everyone hates it?
DiggingNoMore@reddit
Feels like /r/buildapc is leaking. I don't know who hates what or why, but you go ahead with the worse hardware if you want. Always buy the best hardware that's within your budget, don't gimp yourself intentionally.
If I had earmarked $500 for a processor and there was one that was $500 and one that was $400, but the $500 processor was only 5% better, I'm taking the $500 processor. Because it's the best processor within my budget. Intentionally buying worse hardware because its ratio of price to performance is better is wonky.
failaip13@reddit
The issue is you can put the $100 into a better GPU and possibly get way better gains, or get a better SSD, better RAM, a better case, etc. Or just save the $100; that's a decent amount of money.
Automatic-End-8256@reddit
Let's be real, it's a top-level processor; it's not getting paired with a 4060. $100 doesn't do shit in the 4080/4090 price range.
Lumpy-Eggplant-2867@reddit
No, it's the best processor if you have no budget. You get better performance for your money if you use the saved money on a better GPU instead of a 5% faster CPU.
DiggingNoMore@reddit
So you agree it's the best processor. I don't want the best performance for my money, I want the best performance.
NKG_and_Sons@reddit
No, it's the most normal thing in the world, lmao.
Yommination@reddit
I doubt it will. They'll just cut production.
battler624@reddit
They already did probably.
A_Monkey_FFBE@reddit
Big if
zippopwnage@reddit
Literally this year I want to build a new PC for me and my SO. I was looking at the 7800x3d and in the last months it got higher and higher in price. Fuck me I guess.
Scytian@reddit
That's what I would expect: a 5-6% increase from the new architecture and another 5-6% from higher frequency in cases that aren't GPU limited and where the large cache matters. Just a slightly better version of the 7800X3D for the same price (in the case of the 9800X3D).
gfy_expert@reddit
That's a significant jump in GHz over the 5700X3D and more frequency than the 5700X. If all-core clocks stay high AND RAM can run over 6400 at 1:1, it would be a tempting upgrade.
HypocritesEverywher3@reddit
At least it's not going backwards, I guess
SmashStrider@reddit
It shouldn't go backwards (cough cough, arrow lake, cough cough)
F9-0021@reddit
So much for the people expecting the 9800X3D to be 20% faster. Also looks like the 9950X3D is still pretty meh for gaming too.
skinlo@reddit
These are engineering samples however, so the real products might be slightly faster.
SmashStrider@reddit
It's possible, but I highly doubt that is the case, if the 9800X3D ends up being released within the next month.
Atheist-Gods@reddit
The slides have a disclaimer that review and retail units will likely outperform their test units.
the_dude_that_faps@reddit (OP)
That would've been crazy. Anyone expecting it was high on hopium.
SmashStrider@reddit
That was pretty much how much gaming gain I expected. Since there doesn't seem to be an increase in V-Cache amounts, and the actual gaming gains for Zen 5 are around 1-2%, I predict most of the gains for the 9800X3D should come from clock speed.
SlamedCards@reddit
aren't these best-case zen 5 games as well? (could be wrong)
basil_elton@reddit
AMD's marketing slides intended for public viewing vs MSI's factory tour slides which someone unknowingly screenshotted - one is clearly more trustworthy than the other.
SlamedCards@reddit
Ya, I'm saying the MSI slide is good. Far Cry had a decent Zen 5 uplift, so seeing X3D have a decent uplift there is not representative. Personally I'd be shocked if X3D on average in games is greater than 10% vs Zen 4 X3D.
basil_elton@reddit
Look at my comment history - this is exactly what I have been saying as well. But there never seems to be a shortfall of redditors eager to gaslight you into believing that somehow things will be much better with the launch reviews.
Geddagod@reddit
I think the problem is that when people do look at your comment history, they see you are incredibly biased. If I were you, I would not be going out and telling people to look at my comment history, but that's just me tho.
basil_elton@reddit
Of course someone who doesn't take leaks and rumors at face value and constantly tries to 'hope for the better' by throwing in additional variables as it suits them would say that.
Also, people like you who believe in flawed reviews, believe in a flawed way of representing data that doesn't take the subjective experience into account, and so on, would say the same things about me as well.
Nearly all the responses I get are just variations of the same accusation of bias. Nothing I've got in response so far in the past couple of days genuinely attempts to logically interpret the data different from my interpretation.
Geddagod@reddit
Like when?
Yea, I don't believe in subjective experience, because if it's not measured empirically, where's the proof?
Also, I pointed out the weird 2P scaling results. That doesn't invalidate the 1P scaling results though. You took up your complaints with the author of that article, who then told you the problem.
People like you who just don't like to see any review where Intel is doing worse than AMD would say the same things about me.
You got plenty of responses in that post where uzzi flamed you IIRC lol.
the_dude_that_faps@reddit (OP)
Black Myth Wukong? I don't know but I don't think so. That game is very GPU limited.
SlamedCards@reddit
The game with the large uplift was one of the cherry-picked ones AMD used during the launch event, though F1 would have been a worse cherry pick.
Turtvaiz@reddit
It's at the very bottom of the cherry picked titles. I'm not sure what you're getting at
SlamedCards@reddit
Of AMD's chart of games. Most were a lie (looking at CP77). Far Cry did actually have the uplift.
https://youtu.be/EgOHuVvaBek?si=ErI-mIhIBw8jRKCg
steves_evil@reddit
If there isn't any major overhaul with how the x3d cache operates on the Zen 5 cpus, then it should just be a similar uplift as going from the ryzen 7700x to the 9700x, but for the 7800x3d. The 9950x3d may be using 3d cache for both CCDs which should have stronger performance implications for its use cases.
the_dude_that_faps@reddit (OP)
There is, though: it is running at a higher clock speed. Cinebench (nT) is 18-28% faster for the 9800X3D vs the 7800X3D, whereas the 9700X vs the 7700 is more like 5%. Couldn't find a 9700X at 105 W vs a 7700X to see if the difference is maintained.
sl0wrx@reddit
The 9800X3D has to be pulling 150 W to achieve these CB scores. A 105 W 9700X gets about 24k in CB vs the 19k for the 7700X.
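A quick back-of-the-envelope check of those figures, taking the ~24k and ~19k scores above at face value (rough thread numbers, not official results):

```python
# Rough nT uplift implied by the Cinebench scores quoted above
# (approximate thread figures, not official benchmarks).
r23_9700x_105w = 24_000  # ~105 W 9700X, per the parent comment
r23_7700x = 19_000       # 7700X, per the parent comment

uplift = r23_9700x_105w / r23_7700x - 1
print(f"Implied uplift: {uplift:.0%}")  # ~26%, inside the 18-28% range cited above
```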
masterfultechgeek@reddit
Most games are limited by the GPU.
Throwing more CPU at the problem is like squeezing blood from a stone.
We'll likely see a bigger uplift once we jump to MUCH faster GPUs and/or once ray tracing FINALLY takes off... it's slowly getting there. Maybe in 2030.
conquer69@reddit
That's what people thought before the 3D cache CPUs. Then they got one and performance still increased despite being GPU bottlenecked.
Turns out the cpu matters in more scenes and key moments than the average gamer was aware of.
epraider@reddit
People have always claimed this, but I have always found my games to be throttled by the CPU far more than the GPU. The CPU is very important if you play a lot of open-world or sim-style games.
basil_elton@reddit
You need a CPU that is 30% faster or more to see a noticeable difference in the situations you describe.
Like, a 30% delta would take you from chugging along in the low 20s in Cities: Skylines to a tolerable 30 FPS.
General advice/opinions should not be based on very particular examples like this.
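A minimal sketch of that arithmetic (the 23 FPS starting point is hypothetical, just to make the 30% figure concrete):

```python
# Hypothetical numbers: a ~30% faster CPU in a fully CPU-bound scene.
base_fps = 23        # "chugging along in the low 20s"
cpu_speedup = 1.30   # CPU that is 30% faster
print(f"{base_fps * cpu_speedup:.0f} FPS")  # ~30 FPS, the "tolerable" threshold
```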
SomeoneBritish@reddit
Raytracing taking off will help reduce GPU bottlenecks?
the_dude_that_faps@reddit (OP)
It might put a bigger burden on the CPU to prepare those BVHs, I think?
basil_elton@reddit
Yeah, and if you are playing a game that burdens the CPU with BVH requests, all available testing points in the general direction of Intel, not AMD X3D.
the_dude_that_faps@reddit (OP)
Does it? What testing is that?
masterfultechgeek@reddit
I wouldn't use the word "bottleneck", because what IS the bottleneck in a system might change from millisecond to millisecond, and the term is used so loosely it's almost meaningless.
Here and now, even relatively old CPUs are "good enough" for feeding most GPUs and getting 100+ FPS in most titles. There comes a point where the performance is good enough that people really shouldn't care.
Ray tracing increases the demands on the ENTIRE system: there are both higher CPU and GPU demands. It falls harder on the GPU side, but the argument for something faster than a 5-year-old CPU becomes a lot stronger.
Gluecksritter90@reddit
There's always MS Flight Simulator, which is severely CPU limited.
leonard28259@reddit
At native 4K? I agree. While I personally dislike upscaling, plenty of people are using it, which makes the CPU more important. Also, there are plenty of unoptimized/CPU-heavy games where the additional performance would help brute-force extra frames (The Finals and Escape from Tarkov, for example).
As a 1080p 360Hz user, I'll take all the CPU performance I can get but I'm in a minority.
(I hope this doesn't sound snarky or anything. That's just my random perspective.)
Fluffy-Border-1990@reddit
You say this because you don't own a 4090. Most of the games I play are limited by the CPU, and a faster GPU is not going to do any good.
Jeep-Eep@reddit
Between the early benches, the rumors that something is 'different' with the architecture of the new cache (meaning more BIOS tweaks in the pipe), and Wukong being extremely GPU bound, these numbers don't mean much at all.
conquer69@reddit
Those weren't rumors. It was pure distilled hopium.
ApacheAttackChopperQ@reddit
My 7800x3d doesn't go above 70 degrees. I want to clock it higher.
If the 9800X3D runs cooler and gives me overclocking headroom while staying within my target temperatures, I'll consider it.
lovely_sombrero@reddit
Pre-release samples. Also, automated benchmarks can have weird results. So it's impossible to say.
fogoticus@reddit
Ah right. The release samples will be the 20-30% faster that reddit keeps promising us.
signed7@reddit
Yep. These aren't some early pre-release Geekbench leaks; these are marketing slides very close to release.
fogoticus@reddit
Likely what investors and all the higher-ups saw before OP got to see that tour.
Jeep-Eep@reddit
And one of the games is wildly GPU bound.
lovely_sombrero@reddit
I mean, it could be or it couldn't be. Most of the performance gain over the 7000X3D will come from higher clocks, so if these clocks are final, that's it; if not, it could be more. As I said, impossible to say.
Jeep-Eep@reddit
IIRC, High Yield brought up a stacked cache die, and that might bring some BIOS stuff that needs attention to stop it from being held back?
lovely_sombrero@reddit
A stacked 3D cache would probably improve performance even more, but I don't see it happening. They have to change something, though, because the L3 cache on the 9000 series is smaller and the 3D cache is bigger. It will be interesting to see what they did.
Wrong-Quail-8303@reddit
Interesting footnotes: "PR samples and retail samples expected to perform better".
PR samples = those sent to reviewers. We all suspected.
brand_momentum@reddit
Better: 3%
spazturtle@reddit
PR = pre-release.
They are saying that the final production chips, i.e. the pre-release samples and the retail chips, should perform better than these engineering samples.
dabocx@reddit
These are early engineering samples, of course later ones will be better
imaginary_num6er@reddit
Remember when PR samples for Zen 5 had to be recalled due to "quality" concerns? I don't buy the idea that reviewer samples won't reflect actual retail performance, since that is exactly how they performed with Zen 5.
vegetable__lasagne@reddit
So if:
- the 9800X3D is 0.3% slower at a fixed 5.2 GHz
- single-core, the 9800X3D is 2.2% slower than the 9700X
- the 9700X hits 5520 MHz single-core
then (100% - 3%) × (100% - 2.2%) × 5520 = the 9800X3D hits 5236 MHz single-core?
SirActionhaHAA@reddit
What?
They've got the same perf at the same clocks in CB R23, which is obvious enough because they're the same core. MSI's comparison slide was worthless; it was within margin of error.
How did ya get to 5236 as 97.8% of 5500?
kammabytes@reddit
They calculated 0.97 × 0.978 × 5520.
I don't understand why; it's probably just a mistake, because you'd use 0.997, not 0.97, to take 0.3% off a number.
vegetable__lasagne@reddit
My mistake, it should be 5382 MHz.
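Putting the corrected arithmetic in one place (all inputs are this thread's rough figures, not official specs):

```python
# Corrected clock estimate from the figures above: the 9800X3D is ~0.3%
# slower at a fixed 5.2 GHz, ~2.2% slower than the 9700X single-core,
# and the 9700X boosts to ~5520 MHz (rough thread numbers, not specs).
ipc_factor = 1 - 0.003    # 0.997, not 0.97 (the earlier slip)
perf_factor = 1 - 0.022   # 0.978
boost_9700x_mhz = 5520    # single-core boost

est_9800x3d_mhz = ipc_factor * perf_factor * boost_9700x_mhz
print(f"Estimated 9800X3D single-core boost: {est_9800x3d_mhz:.0f} MHz")  # ~5382 MHz
```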
Zohar127@reddit
As always, I'll be waiting for the HUB review. I'm currently using a 5600X, so I don't feel like there's a huge reason to upgrade now, at least with the games I'm playing, but if the 9800X3D at least represents good value, I might consider it time for an upgrade. I'll wait for all of the cards to be on the table, though, including Intel's new stuff.
TechyySean3@reddit
I'm just gonna upgrade from 5600x to whatever the best gaming CPU is at the tail end of the AM5 platform. It's still really good at 1440p if I temper expectations.
ConsistencyWelder@reddit
Interesting, even if these are just engineering samples with (expectedly) worse performance than the retail chips. It does suggest that both CCDs will have V-Cache this time, and that clock speeds on the 9800X3D are gonna be higher than usual, maybe as high as the 9700X's. That would explain why it performs better in MT than the 9700X, since it has similar clocks and gets help from the V-Cache?
Bluedot55@reddit
Cinebench doesn't benefit from the cache at all; it's likely a higher power limit. If the X3D has a default 105 W TDP, it may boost higher in all-core workloads.
Jeep-Eep@reddit
There is also some talk from, I think, High Yield about a stacked cache die, and that may bring changes that will need BIOS massaging to get the best performance.
Noble00_@reddit
Slowly, 3D V-Cache is carrying a smaller penalty, at least in the clock speeds we've seen; temps are another factor. The performance is expected: this is the same "Zen 5%" the internet has been talking about nonstop, so higher clocks and whatever changes were made to the V-Cache to improve bandwidth/latency are the only contributing factors that aren't generational. No surprised-Pikachu faces here. Interested to see a deep dive/micro-benchmarks on the V-Cache, since we already partly have die analysis of the TSVs on Zen 5 CCDs courtesy of Fritzchens Fritz and High Yield.
dervu@reddit
Those top titles are so misleading.