Hardware Unboxed ~ AMD Ryzen 7 9800X3D vs. Intel Core Ultra 9 285K, 45 Game Benchmark
Posted by Valmar33@reddit | hardware | View on Reddit | 299 comments
SmashStrider@reddit
According to some rumors, Intel is going to be releasing a 3D-Cache competitor soon, but for Xeons and server processors. So AMD is likely going to remain the only viable option for gaming for the next couple of years.
AnimalShithouse@reddit
Viable in what sense?
Are there a lot of games that won't already play well with either of these CPUs? Like, outside of Homeworld 3, both CPUs are posting "good enough" fps for these games, or both are kind of not perfect (e.g. Flight Simulator is brutal in general).
yondercode@reddit
budget and power draw options also dominated by AMD
high end gaming where "good enough" is not enough is dominated by AMD
it is literally the only viable option for gaming
AnimalShithouse@reddit
As long as they are always competing on cost, I'd agree.
If there are Intel deals where you're getting more cores for cheaper, I'd go Intel. AMD priced their X3D on AM5 at a super premium. And their non-X3D has a big gap.
yondercode@reddit
yeah i agree, from 12th gen onward intel provides really good value for cores rn, and especially the older ones are so damn cheap
funny how we ended up where amd is the gaming option and intel is the productivity option! just before zen 3 it was the complete opposite
Strazdas1@reddit
Yes if you are into the sim/builder/strategy genres. Vcache makes it night and day in how much better those run on AMD. It could be the difference between playing at 30 fps and 60 fps when models are complex enough.
crshbndct@reddit
Viable in the sense of a CPU that is posting mid 80 1% lows and 100-110fps average is going to be posting mid 30 1% and mid 60 average in games that come out in a year or three.
People want CPUs that last for 5+ years. If you buy the fastest one now, you can have that. If you buy from Intel, you’re going to have stuttering before you’ve even had one GPU upgrade.
AnimalShithouse@reddit
Sure. But we are using 1080p to gauge relative performance when most of these CPUs are destined for 1440p or higher, where we have been GPU limited for almost as long as 4K has existed.
So we're talking about future proofing for something when a different component will seemingly be the limiting factor for years to come.
Posted this elsewhere, but just as relevant here.
crshbndct@reddit
There are many reasons to benchmark CPUs in CPU-limited scenarios, and not GPU-limited ones. This was a discussion when Bulldozer came out, when Zen 1 came out, and now that Arrow Lake is out. I'm not going to explain it here, but if you want to know more, search for "why are CPUs benchmarked at low resolutions"
You can’t judge a fish on their ability to climb trees.
AnimalShithouse@reddit
Literally not at all what I said. I understand why they do it. It facilitates relative ranking schemes. It's mostly academic. They don't do higher resolutions because almost all of those CPUs will be fine at higher res, since the GPU will be the limiting factor. The very simple and fundamental question is:
At what point do you think the GPU will stop being the limiting factor at 4K (and maybe even 1440p)? And at what point will it stop being the limiting factor in a cost-competitive format?
We already have GPUs that cost maybe 4x more than CPUs and they are still the limiting factor in a gaming build. Even if in 2028 we have GPUs that aren't bottlenecking us, they won't be available at a competitive price. Most people will be in "mid range" territory with their budgets and they'll probably be GPU limited indefinitely. They'd probably be able to rock an AM4 X3D CPU into 2030, and if gaming at 4K is their schtick, they'll be fine.
For gaming, most cpus in the modern era are already going to be fine most of the time. We can benchmark for relative comparison purposes, but we're fighting for a gaming crown that mattered more 10 years ago.
Productivity benchmarks - totally different story. Cost/perf benchmarks - absolutely still important. Efficiency benchmarks - keep 'em coming. But gaming benchmarks for the sake of ranking when they're all basically winners? It just feels like something we're still doing for a generation of gamers that grew up in the 2000s, like it for the nostalgia, and miss the old Tom's Hardware CPU hierarchy figures.
crshbndct@reddit
https://www.youtube.com/watch?v=5GIvrMWzr9k
AnimalShithouse@reddit
Do they have this 40 min video in written form? Especially the segment towards the start where they talk about how for most people high end cpus aren't even going to be relevant?
crshbndct@reddit
Here’s a ten minute version:
https://m.youtube.com/watch?si=0FPiaZFt_JqxNABY&v=O3FIXQwMOA4&feature=youtu.be
crshbndct@reddit
https://www.techspot.com/article/2918-amd-9800x3d-4k-gaming-cpu-test/
Try doing a Ctrl+F for "longevity" and then read the longevity test section - it explains my point better than I can.
crshbndct@reddit
Ok. But how many people build a high end system, 4090, fast ram, fast SSDs, etc and then just throw a “good enough” cpu in it? Not many.
There’s still a point in finding the best one.
Especially because the new intel ones are actually getting to the point where they are too slow to feed a 4090
AnimalShithouse@reddit
Bold claims lol
crshbndct@reddit
I would direct your attention to the linked video in this post.
JonWood007@reddit
The thing is demands go up and the worse performing cpu will likely age relatively poorly.
AnimalShithouse@reddit
Sure. But we are using 1080p to gauge relative performance when most of these CPUs are destined for 1440p or higher, where we have been GPU limited for almost as long as 4K has existed.
So we're talking about future proofing for something when a different component will seemingly be the limiting factor for years to come.
JonWood007@reddit
I hate to have to pull this analogy again but its like this.
Say CPU 1 does 100 FPS, CPU 2 does 200 FPS.
You have a GPU that does 60. What do both systems get? 60.
Okay, so fast forward 3 years: CPU 1 is now doing 50 FPS, CPU 2 is now doing 100. You're trying to run a game at 60 on your aging system. System 1 won't be able to do it because the CPU isn't powerful enough, whereas CPU 2 will.
It baffles me how short-sighted you people are in buying components. Just because both systems are good enough today doesn't mean they'll work as well as they age.
Honestly, anyone who keeps insisting that because it's 4K it isn't a big deal should be forced to play on nothing but a 4090 with a 7600K until they get the picture. Because I'm tired of having to explain this over and over again.
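To make the arithmetic in the analogy above concrete, here's a minimal sketch. The numbers are the commenter's hypothetical 100/200/60 FPS figures and the assumed halving over three years, not measurements:

```python
# Rough sketch of the bottleneck argument above: delivered FPS is capped
# by whichever of the CPU or GPU is slower.

def delivered_fps(cpu_cap: float, gpu_cap: float) -> float:
    """The frame rate you actually see is limited by the slowest component."""
    return min(cpu_cap, gpu_cap)

# Today: both CPUs look identical because the GPU (60 fps) is the limit.
print(delivered_fps(cpu_cap=100, gpu_cap=60))  # CPU 1 -> 60
print(delivered_fps(cpu_cap=200, gpu_cap=60))  # CPU 2 -> 60

# Three years later, heavier games roughly halve each CPU's ceiling
# (the commenter's assumption).
print(delivered_fps(cpu_cap=50, gpu_cap=60))   # CPU 1 -> 50 (now the bottleneck)
print(delivered_fps(cpu_cap=100, gpu_cap=60))  # CPU 2 -> 60 (still GPU-limited)
```

Both builds deliver 60 fps on day one; the gap only appears once a CPU's ceiling drops below the GPU's.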
AnimalShithouse@reddit
I'll take the 5700x3d for cheaper and we can call it a deal.
JonWood007@reddit
The point was to force people to use such an anemic CPU and force them to put up with a stuttery mess so they learn to appreciate the value of a good CPU.
AnimalShithouse@reddit
They're both great cpus. My point is I can grab a cheaper and older AM4 cpu and I'm happy to live with that and a 4090 until the 2030s.
That's the reality. The best cpus for gaming from am4 era and even 13th gen Intel era will be fine enough to keep all the way up to 2030. We are going to be GPU bottlenecked for a very long time at 4k and the best gaming CPUs from am4/13th Gen Intel will be absolutely fine for gaming at 1080p almost indefinitely outside of maybe a couple of fringe games.
We should be giving advice to the masses, because that is who needs it most. Not the random whales who build PCs without meaningful budgetary constraints. Those individuals just always buy what is the best without compromise. They're vocal on Reddit, but a minority IRL.
JonWood007@reddit
In that sense I agree with you. Virtually all modern CPUs in the midrange $200-300 category perform the same these days and they'll all be good for years and years to come at this rate. The 7800X3D and 9800X3D are the best, yes, but the people who act like you need that for gaming and who upgrade to it are a small minority of gamers. Most will be fine with midrange parts for the foreseeable future, since you literally need 3D V-Cache just to get any meaningful performance boost at all, and AMD is saving that for their $450 top-of-the-line CPUs.
We seem to be in a new intel stagnation era where the two brands have hit a wall with conventional performance, so anything from 5000X3D, 7000, 9000, or 12th-14th gen intel are basically the new 2500k/2600k style CPUs for the time being.
I just don't like the "but but 4k" argument. No. CPU benchmarks should be designed to actually measure the power of the CPU, not go "oh well any CPU gets 60 FPS at 4k"... yeah. I could probably get the same performance out of a 4090 and a 1080 on a 7600k too. You see the point?
You don't measure how good a CPU is in GPU-bottlenecked setups. Ultimately, a 9800X3D WILL be a better CPU for longer. It's really just a matter of whether it's worth the money. And I view it kind of like a 4090-style halo product where it's "the best", but man, you're paying A LOT for the privilege of having "the best."
But yeah, I'd agree, midrange is a better value and I plan on using my 12900k until 2028-2030ish or so.
AnimalShithouse@reddit
Ya I'm mostly aligned with everything you're saying. I don't mind the 1080p relative benchmarks. I just don't like the conclusions being talked about like "wow AMD is destroying Intel here" without the caveat of "these are all incredible cpus" lol. AMD is legitimately beating Intel, it's just not really relevant in gaming for most people.
I'd rather support synthetic gaming benches more if there was just more of a disclaimer about what's a "useful enough cpu that you'll be fine most of the time".
Tbh, I mostly just look at productivity benches these days for those reasons. They are more meaningful to show how things are going to scale in a higher variety of workloads.
JonWood007@reddit
I've seen the 9800X3D beating intel flagships by as much as 50-75%. That's an ###whooping. And yeah if you're going for a 285k vs a 9800X3D, duh, go X3D.
The problem is that in an era where a 5700X3D, 13600K, 12700K, etc. are all like $200ish, well... suddenly that value proposition doesn't look as good. It's all about the value for the money.
But that doesn't mean that AMD isn't winning. Still, let's face it, compare even a 9700X to any of the CPUs I just mentioned. Take away the fancy cache and you're left with a relatively mediocre product. They call Zen 5 "Zen 5%" for a reason.
AnimalShithouse@reddit
Absolutely!
AMD is still winning, it's just not as relevant in most gaming segments. I thought of a car analogy while out and about. Braking distance, 0-60 mph times, and other performance metrics matter... to a point. Past a certain 0-60 mph time, it just stops being relevant for most people. But the flip side is that things like range and efficiency tend to matter a lot more and stay relevant in conversations. Modern CPUs are a lot like many modern cars - already fast enough for most practical scenarios. But there are some people out there buying Bugattis and AMD 9950X3D systems!
JonWood007@reddit
Well again imagine if every year your car gets 20% slower. That's what happens with CPUs as system requirements go up.
I look at CPUs with higher FPS ceilings as giving more longevity long term. Basically, if a car is capable of 200 MPH and another is only capable of 120 MPH....the 120 MPH car is gonna hit that 60 MPH bottleneck a lot faster than the 200 MPH one.
AnimalShithouse@reddit
Much like with PCs being GPU limited, most cars will start to become fire and braking limited lol. The engines are already all mostly fast enough.
JonWood007@reddit
Eh, eventually the CPU bottleneck will get you. And it's always a massive pain when it does. Remember, GPU performance scales, CPUs do not.
it's also easier to replace a GPU than a CPU.
AnimalShithouse@reddit
For sure! But not on the horizon. We spent like a decade cpu limited. Then storage limited. Now we're in the wonderful age of GPU limited (and exorbitant) :(.
JonWood007@reddit
Yeah true. I spent less on a 12900k than i did on my 6650 XT....
ConsistencyWelder@reddit
Yeah, the problem is that by the time Intel is ready to release its own 1st-gen cache, AMD will have moved on to their 3rd or 4th gen V-Cache. And I don't see any way around it, Intel will have to implement some form of cache.
Zednot123@reddit
Intel's first gen is already out when it comes to moving to large caches. Emerald Rapids has an ungodly amount of L3 and is essentially Intel moving in the same direction as the rest of the industry. They are already about to release gen 2 with Granite Rapids (same core as ARL).
Just because it's not being used on consumer side yet, does not mean Intel isn't changing things.
ConsistencyWelder@reddit
Good point. The cache on ER is not 3D stacked though, which is where they need to head if they want to be able to compete with AMD again some day. I suspect putting that amount of cache on the die of a consumer chip would be prohibitively expensive, which I guess is the reason they haven't done it. It would make the die ridiculously expensive and increase failure rates.
As I understand it the L3 cache is integrated with the core, right? So with an 8-core it would only be 40MB cache vs AMD's 96MB of L3 cache. Assuming they'd only put the extra cache on the P-cores.
I also expect that by the time Intel would be willing and able to put that amount of L3 cache on a consumer chip, AMD would have moved on to better implementations, and probably even bigger cache sizes. There are already talks of double or even triple stacking the current Vcache layers.
Zednot123@reddit
Hence why I mentioned Ponte Vecchio, which has the L2 in the base tile.
The developments are happening in parallel. They are both optimizing their architecture for large caches and building out the tech for where those caches will eventually at least partially end up.
Hard to say where cache amounts would end up on consumer once Intel finally brings it over. There's an announced Granite Rapids SKU with a cut-down core count that retains a much larger share of the max cache. So Intel definitely sees room in the product stack for larger cache/core configurations than the full-tile design.
ConsistencyWelder@reddit
The amount of cache isn't what's interesting here though, it's the 3D stacking and solving the issues related to it, not just performance but also linking it up and dealing with the heat buildup issues.
If just giving your CPU's more and more cache was the solution, AMD would have done it instead of stacking.
It's not economically feasible to make a consumer CPU with that much L3 cache, which is why neither Intel nor anyone else is doing it. They don't have the die space, the dies would become too expensive to manufacture and the failure rates would be too high. Making the cache modular like AMD is doing makes a lot more sense, since if you have a faulty cache, you're not losing the entire CPU as well.
Laxarus@reddit
How long did it take AMD to get to where it is today while Intel was leading the market? They could not compete with Intel on the high end, so they tried to compete on price/performance. And now, after years of build-up, they are also dominating the high end.
Intel does not seem to get that. The 285K is ridiculously expensive compared to what it offers.
285k MSRP $589
9800X3D MSRP $479
Who the hell would buy intel?
shmed@reddit
A large portion of high-end desktop sales are for workstations, not gaming PCs. The 285K is a 24-core part vs 8 cores for the 9800X3D. They are good at different things.
broken917@reddit
It is actually 8p 16e cores with a total of 24 threads, against 8 cores 16 threads.
The 24 vs 8 is a bit misleading. Because it is definitely not 24 normal cores.
Strazdas1@reddit
Those are 24 actual cores, as opposed to 8 hyperthreaded cores. Hyper-threading is vastly oversold.
broken917@reddit
Yes, 24 cores, but not 24 normal cores. Otherwise, the 16 core 9950x would be toast.
Strazdas1@reddit
Not normal cores, but a hell of a lot better than virtual cores from hyperthreading.
shmed@reddit
I never said double. The person I was replying to asked "who the hell would buy Intel". All I'm saying is not everyone is buying a CPU for gaming. There are other use cases, including some where the Intel CPU performs better.
the_dude_that_faps@reddit
This feels like Sandy Bridge vs Bulldozer. At least for games. It's madness.
It would be awesome if AMD also managed to become at least a quarter as competitive against Nvidia in the GPU market.
TZ_Rezlus@reddit
If AMD GPUs keep having issues, that won't be happening. I've had a 7900 XT for a year now and for the last two months it's been giving me nothing but problems, no matter the hardware or settings I've changed. It's my first AMD card in years and I wish I'd stayed with Nvidia instead.
the_dude_that_faps@reddit
Not to discredit your experience, but I'll just leave mine here for comparison. I have a couple of gaming PCs littered across my home. One (my main desktop) with a 3080, my main laptop with a 6800m, an HTPC with a 7900xtx and a secondary desktop for my family with a 6800xt. I even have a hacked together PC from Chinese Xeons and a mining 5700xt, and even that has been pretty stable.
My gaming stays mostly between my desktop and my HTPC and neither of them has had any issues.
Considering the amount of AMD GPUs I have, and the complete lack of major issues, I wouldn't say AMD is just fucked. Maybe it's just setup, maybe AMD isn't great on your particular selection of games, I don't know. Maybe just had bad luck. Or maybe AMD does suck. But it is workable for enthusiasts like me at least.
(Why so many AMD GPUs? I do a lot of Linux stuff outside of gaming and even today Nvidia is a pain).
Jofzar_@reddit
2500k sandy bridge was such a great CPU, so cheap, massive OC and cheap MB
AssCrackBanditHunter@reddit
Actually pretty nuts that you could have ridden that CPU for a decade
_Lucille_@reddit
I had one and did not replace it until the Ryzen 2600X.
Crimveldt@reddit
Mine lasted me 9 years. 5GHz overclock from day 1 to retirement. It's actually still alive over at my parents, chilling at stock clocks now doing the occasional print or web browse.
I'm hoping my 7800X3D can last me about as long but we'll see
VenditatioDelendaEst@reddit
A video linked elsewhere in the thread reports a measured wall idle power of 120W, so you should probably look into replacing that thing with a $80 eBay Skylake optiplex/elitedesk/thinkstation.
PCMasterCucks@reddit
A buddy of mine was gifted a "launch" 2500K and he rode that until 2021 playing indies and AAs.
996forever@reddit
2700K maybe, the 2500K not really. 4C/4T started really having issues by the mid late 2010s with the likes of AC origins and odyssey. 2700K could’ve been fine up till 2021 (not fast, but usable) but not 2500K.
MSAAyylmao@reddit
BF1 was the wakeup call for my 4690k, that game needed lotsa threads.
996forever@reddit
But as long as you have a 4C/8T part, you can use it even today if your expectation is just 60fps, with very few exceptions, even if it's older than that.
https://youtu.be/AFYz_cAkgNU?feature=shared
MSAAyylmao@reddit
Thats incredibly impressive, power draw is ridiculous though.
996forever@reddit
It is power hungry, but next to modern gpus it doesn’t seem like much.
VenditatioDelendaEst@reddit
That guy measured the AC-side idle power at 120 W, and 155 W overclocked (didn't have adaptive voltage back then, I guess?), comparable to an entire classroom's worth of laptops. Modern GPUs are only like 15 W at idle.
virtualmnemonic@reddit
You mean the 2600K (the 2700 doesn't exist), but yeah, it is a legendary CPU. You could overclock it to ~4.5 GHz with a $25 Hyper 212 for a significant performance boost, too. The IPC was excellent for the time.
I still have my 2600K machine running Windows 10 for a family member. Alongside an SSD it's a solid performer. And I built it over a decade ago.
conquer69@reddit
The 2700k does exist. It was a higher binned version of the 2600k.
https://www.intel.com/content/www/us/en/products/sku/61275/intel-core-i72700k-processor-8m-cache-up-to-3-90-ghz/specifications.html
thunk_stuff@reddit
Damn, so many of us here. For me the 2500k lasted through 2020. Zen 3 finally tipped the scales enough (2x single thread improvement plus all the extra cores). Subscribing to r/sffpc/ earlier in the year and seeing all the cool sff builds did not help things at all lol
RevealHoliday7735@reddit
I did, almost. Used the 2700k for 9 years lol. Great chip!
New-Connection-9088@reddit
I think I stuck with mine for almost 10 years. It really was a great CPU.
LobL@reddit
I had a 2600K for a looooong time, ran it at 5 GHz forever without any issues. The next generation was almost worse, since Intel cheaped out and glued the chip to the IHS instead of soldering it, so the OC headroom was much worse.
tomzi9999@reddit
I think it is good that they have one battle at a time, so as not to lose focus. They need to push the CPU division to take 35-40% server market share asap. Looking at the trendlines, I think that could happen within the next 2 generations.
Also we need some godly $200-$400 GPUs. Not top shit cards that only 0.1% of people buy.
bestanonever@reddit
Apparently, AMD's next-gen GPUs are all about the mainstream market. Which sounds pretty awesome to me. Let's hope they are some good ones.
Zednot123@reddit
That's just corpo speak for not having an architecture that can compete outside the midrange. It's Polaris all over again.
There's no re-focusing or change of strategy, they simply don't have a product to offer and are spinning the narrative. RDNA simply uses too much power and requires too much bandwidth at the higher tiers to be viable, just like late-era GCN.
bestanonever@reddit
While you might be right, Polaris was amazing for us players. Cheap, reliable, was a great consistent performer. I personally used a Polaris GPU for 4 years straight without missing a beat. Excellent architecture.
If they can spin another Polaris, they might have a winner in their hands, even if they can't capture the high-end just yet.
Geddagod@reddit
AMD has targeted the midrange with better value than Nvidia for a while now. It hasn't been a winner for them, but maybe it would be pretty good for us consumers. To be seen ig.
Dat_Boi_John@reddit
I disagree with this completely. I was looking at 4060 tier GPUs for a friend and AMD has literally nothing worthwhile in that price range while it's the most popular price range. And that's coming from someone who bought a 5700xt and a 7800xt over the 2070 and 4070 respectively, with a 3600 and a 5800x3d.
Most people see the Nvidia RTX 4090 dominating and say "I want the version of that card that costs $300", so they get a 4060. AMD neither dominates the top end nor offers good value (for new cards) in the $200-300 range.
The only time they did was when 6700xts and 6800s were available for those prices but that time is long gone, at least in Europe. Now you have the 7600 which doesn't offer enough value above the 4060, then the 7600xt which is horrible value and then the 7700xt which is both too expensive for the 200-300$ price range and offers significantly less value than the 7800xt.
So basically anything lower than a 7700xt from AMD isn't worth buying, meaning they are completely out of the most popular price range. Imo, a 7800xt is upper mid range, low high range, while the 4060 is the actual mid range, which AMD has largely abandoned with RDNA3.
VenditatioDelendaEst@reddit
If I look at PCpartpicker, I find the 7600xt is $20 more than the 4060, and the 7600 is $45 less. Looking at performance, I see that either the $20 gets you an imperceptible uplift but 2x the VRAM, or you save $45 for perf that is about the same.
I see these prices as pretty good, unless you have DigitalFoundry-tier hatred of FSR2.
Dat_Boi_John@reddit
Ah, maybe they have better prices in the US. In Europe, the 7600 is about 20 euros cheaper than the 4060 and the 7600xt is 60 euros pricier than the 4060.
So you get access to DLSS upscaling for 20 euros more compared to the 7600 or have to pay 60 euros more to double the VRAM and not have DLSS.
I use a 7800xt at 1440p and almost always prefer XESS upscaling over FSR because of the way FSR makes particles look (the kind of pixelated/painted looked), but I don't value DLSS significantly more at 1440p or 4K.
However, I generally value DLSS more at 1080p because the lower the internal resolution, the worse upscaling looks. And FSR really struggles if you go below 720p internally, while I expect DLSS to hold up better in that case.
bestanonever@reddit
They haven't targeted it as aggressively as they did with Ryzen CPUs when those weren't competitive, though. A slight discount against Nvidia's similar GPUs won't cut it.
Brickman759@reddit
Obviously the market wants high-end cards though. Nvidia owns like 90% of the discrete GPU market, and their low-end cards are barely competitive or do worse than AMD's offerings. They're bolstered by the flagships. AMD's strategy is awful.
djent_in_my_tent@reddit
You can grab a 3080 on US eBay for $350 right now and a 3070 for $275
From-UoM@reddit
Bulldozer was worse.
Arrow Lake at least reduced power usage.
Bulldozer was more power hungry and had less performance.
Geddagod@reddit
I swear people have started to call everything even slightly mid as the "next bulldozer" without realizing how much of a miss bulldozer really was for AMD.
the_dude_that_faps@reddit
It's not that the 285K is mid. It's that the gap is so large.
No_Guarantee7841@reddit
Thread scheduling is broken currently. So yeah not really a bulldozer but definitely nothing good either. At best if things get fixed, its gonna perform about the same as 14900k on average.
Valmar33@reddit (OP)
Thread scheduling seems to bring little improvement. Hardware Unboxed did a test with a CPU-intensive portion of Plague Tale: Requiem with the e-cores disabled, and there was barely an improvement.
Thread scheduling is basically broken for everyone who has different sets of cores ~ whether P vs E-cores or X3D vs non-X3D CCDs, it seems to be a fundamental limitation of the Windows scheduler.
There is also no easy way to automatically determine what set of cores an application may want, or why. Should the user decide, the OS, the application developer? Who is right?
No_Guarantee7841@reddit
You say it's broken in general, but in Homeworld 3 I don't see Ryzen dual-CCD CPUs suffering at all, whereas the Intel/AMD performance difference is about 100%, which is not normal... Also, I never said to disable all E-cores... That's just stupid, because you lose the L2 cache from too many cores. L2 cache is not shared like in previous Intel gens, which is why you are not going to see improved performance, plus there is no HT anymore, so you are literally leaving the CPU with just 8 threads.
Valmar33@reddit (OP)
Which dual CCD CPUs?
Hardware Unboxed decided to test it because they wondered if the E-cores were causing low performance.
Ditto for which dual CCD CPUs...
No_Guarantee7841@reddit
All dual ccd parts. You wont see a dual ccd part perform worse than a 5800x in those games. Which is the equivalent of a 285k performing worse than a 12600k.
Valmar33@reddit (OP)
Um... what about the 9800X3D? We're talking games here...
the_dude_that_faps@reddit
I have a hard time believing that the issue is scheduling. Arrow Lake has fewer complexities compared to Raptor Lake. It does not have SMT and its E-cores are faster.
It being worse than Raptor Lake points elsewhere to me. We'll see, I guess.
No_Guarantee7841@reddit
There are many games where turning off P-cores improves performance considerably (usually the same games that strongly underperform). Homeworld for reference: https://x.com/CapFrameX/status/1851368710659064105/photo/1
996forever@reddit
It's more like Kaby Lake vs Zen+. Good in productivity, bad gaming performance from memory latency.
Azzcrakbandit@reddit
If I'm not mistaken, the i7-7700k was only like 20% faster than a ryzen 1800x in single core performance and the 1800x had twice as many cores.
Cheeze_It@reddit
I don't know if it was 20% faster. Probably closer to 10%-15%. Maybe on an outlier game.
996forever@reddit
The "outlier" for 7700K vs Zen 1 was more like a 30%+ difference, like in Far Cry titles.
Morningst4r@reddit
The extra cores did nothing in gaming though. And the 7700k could get another 20% from overclocking. It wasn't a good long term buy, but it was definitely a better gaming CPU at the time.
Noreng@reddit
Even today it's a better gaming CPU
996forever@reddit
And then the 8700K remained the better gaming cpu than anything pre-Zen 3 even today. Skylake and its derivatives were simply outstanding for gaming for the time.
Zednot123@reddit
And still pretty much matches or even beats comparable Zen 3 CPUs if overclocked and tuned. The 5700X/5800X can generally beat it in newer titles thanks to having 2 more cores. But the 5600X generally falls to the wayside if both systems are maxed out, although the Zen 3 chip is faster at stock.
tukatu0@reddit
Somehow I doubt that. It would need to be like 30% more performant to beat a 5600X. Was that doable, and at what DDR speed?
Zednot123@reddit
The only time you see figures like that is in outlier titles, or if the tests are run with stock RAM. The 8700K is extremely held back when tested at JEDEC specs, whereas the 5600X does not gain as much, both from having a higher stock JEDEC speed and from all that cache making the impact smaller overall. The 8700K can push RAM into the 4000+ range, something Zen can't do without decoupling RAM/IF.
The 8700K is for all intents and purposes a 10600K run at slightly lower clocks. But they often overclocked into the 5+ GHz range, and overclocked performance would as a result end up a fair bit above a stock 10600K.
Here's how the 5600X and 10600K match up at stock frequency with 3600 RAM on both platforms: less than a 5% advantage for the 5600X.
A lot of 8700Ks could add another 200-300 MHz on top of the 10600K stock frequency and run RAM even higher. As I said, most Z370/390 boards can run ~4100-4200 with 2 sticks of SR B-die; for DR or 4 sticks you might have to settle in the 4000 range on some garbage boards. With additional performance gained from stuff like IMC/NB OC, there's another 5-20% to be had in a lot of titles above those 10600K numbers, depending on how much they like bandwidth.
And you even had another slight advantage over those 10600K numbers. The 8700K is not a cut-down die, so the ring bus is physically smaller and the cores are as close as they can be. The 10600K and other cut-down Intel SKUs actually had a 1-3% performance variance, depending on which cores were cut and how close or far the hops/latency ended up being. The 10600K can use the same die as the 10900K or the older 8-core die, which means that worst case you can end up with a CPU that has the central cores of a 10-core die disabled.
tukatu0@reddit
All this time I've been under the impression that the 9900K, 10400, and 12100 all performed within 3% of each other. Now I have to wonder if the 9900K also goes much above that. Well, I don't really want to know. Not too sure how many people will be buying them in the coming 2 years.
Zednot123@reddit
9900K when tuned and using faster ram lands somewhere above 10700K stock numbers. Because that is essentially what it is once you match/beat the frequency.
Noreng@reddit
That depends heavily on the benchmark. If you're looking at something mostly running in cache, the 7700K's single core performance was about 20% faster than an 1800X. If you're looking at stuff that's more memory-bound, the 7700K can be almost 100% faster in single core benchmarks
the_dude_that_faps@reddit
I remember buying a Ryzen 1700 and I don't think I ever saw a game that had almost 2x better performance. Maybe my memory fails me though.
buildzoid@reddit
really depends on what games you play
996forever@reddit
I already said good productivity performance. I specified gaming performance in the second part. Don't know why you had to bring up "single core performance" (most likely just derived from Cinebench) when I didn't mention it.
Cyphall@reddit
Zen+ wasn't that bad for gaming.
I remember replacing my 7600K with a 2700X because AC Origins was a stuttery mess on 4c/4t, and let me tell you, the performance in that game did not decrease one bit. In fact, the game was finally playable.
996forever@reddit
AC origins/odyssey was one of the OG killers of the Quad core non-SMT cpu though. The 7700K was significantly ahead of the 7600K.
COMPUTER1313@reddit
Even for productivity, Zen 5 regular is quite good at it. The 285K's platform (board and RAM included) is more expensive than the 9950X, and the 9950X still has the Zen 6 X3D upgrade route.
oathbreakerkeeper@reddit
Will Zen 6 be AM5?
Jeep-Eep@reddit
I hope this Revenge Of Small Die strategy plays out.
WTFAnimations@reddit
Intel just had its Bulldozer moment. Now to see if they survive as well...
DktheDarkKnight@reddit
At this point this is like beating a dead horse over and over again.
Intel is almost 2 generations behind at this point.
Shoddy-Ad-7769@reddit
People say this who don't understand how CPUs work.
AMD isn't two generations ahead. They simply have vcache. The 285k trades blows with the 9950x. As does the 265k and 245k and their respective competitors.
In the one very, very, very small segment of desktop gaming, AMD has 3D V-Cache, which gives it an edge, 100% because TSMC made it for them. Besides the difference of "TSMC making 3D V-Cache for AMD, and Intel Foundry not making it for Intel", the difference is minimal. And this is BEFORE we get a Windows performance patch for Intel to fix the fact that games are running on its E-cores instead of P-cores.
RHINO_Mk_II@reddit
How else would it work? AMD doesn't fab in house.
And yet here we are, 3 generations after 5800X3D and all Intel has come up with is this Core Ultra 200 nonsense.
Shoddy-Ad-7769@reddit
Yes their foundry is behind on 3d stacking. No doubt. But can't confuse that with intel design being behind AMD design.
RHINO_Mk_II@reddit
Then maybe it's not as simple as a one-generation catchup solution, is it?
Shoddy-Ad-7769@reddit
For the foundry? Probably not. For Intel design? Certainly. That's what AMD did... they didn't even plan for 3D V-Cache to go on the 5000 series at all. They simply slapped it on top at the last minute. How do we know this? They said it themselves.
SupermanLeRetour@reddit
Who owns the foundry and whose R&D it is, is irrelevant. What matters as a consumer is that AMD's CPU have a technology that Intel's CPU don't have, and that put them a good generation ahead. I don't get why you're so insistent to make the point that 3D V-Cache is TSMC tech and not AMD's, because it really doesn't matter at all in the end.
Earthborn92@reddit
AMD literally invented HBM (good Bryan Black), along with SK Hynix. That's why they were the first to use it.
Valmar33@reddit (OP)
My brother in Christ, have you not seen the benchmarks?
Half, if not more of the desktop gaming space massively benefits from that vcache.
So saying "one, very, very, very small segment" comes off as... how else to put it, incredibly salty.
Shoddy-Ad-7769@reddit
Desktop Gaming itself is a very, very, very small segment.
Valmar33@reddit (OP)
The way you're desperately trying to minimize desktop gaming is... quite laughable, frankly. Anything to defend Intel, I suppose.
Laptop gaming does not really exist for AAA titles, thus laptops are rather irrelevant for benchmarking.
Consoles... are consoles, and Sony and Microsoft both prefer AMD, who provides a far more efficient and powerful set of hardware.
HandheldAddict@reddit
Intel still has no answer to V-cache, that's why AMD is effectively 2 generations ahead.
They don't even have Vcache on the roadmap either.
Shoddy-Ad-7769@reddit
Vcache is a trademarked TSMC product. Intel will have its own stacked 3d cache. First it was doing a stepping stone of 2.5d cache. 3d stacking is certainly on Intel Foundry's roadmap, and has been for years. TSMC is just ahead of Intel. AMD isn't.
Earthborn92@reddit
And Arrow Lake uses TSMC for most of its tiles. On a more advanced node than Zen 5.
Still doesn't beat it.
I was also one of the folks who thought that Intel's fabs were holding back their design team. I think ARL comprehensively disproved that. AMD has the better designs.
Geddagod@reddit
I wouldn't be surprised if there was some very deep collaboration between AMD and TSMC for V-cache. I think it's pretty telling that no other company has yet to utilize 3D stacking SRAM onto their chips, afaik.
Word it however you want, in the end, AMD's products are ahead of Intel's in gaming.
strictlyfocused02@reddit
Wild how that person's entire comment boils down to "AMD isn't two gens ahead, Intel is two gens behind!" along with a perfect description of why Intel is two gens behind.
Just_Maintenance@reddit
I mean, Intel could also "just buy" 3D cache from TSMC. Arrow Lake is partially made by TSMC anyway (although not the packaging, which is where stacked silicon is installed).
Also, implementing stacked dies requires lots of work on AMD's side: they need to design the interconnect and the bus on both ends and then put it somewhere in the silicon.
Shoddy-Ad-7769@reddit
It's a long term commitment they would have had to have made years ago. They didn't and don't want to make that commitment because their plan is for their foundry to make it themselves.
Point being, people are confusing Intel foundry being behind TSMC with Intel Design being behind AMD design.
Geddagod@reddit
Intel's design side is also behind AMD's design side too though. Even ignoring 3D V-cache, simply looking at LNC vs Zen 5, or iso node comparisons of RWC vs Zen 4/Zen 5 or WLC/GLC vs Zen 3 should make it obvious that Intel is behind.
vedomedo@reddit
A part of me wants to sell my mobo and 13700k, and get a 9800x3d, buuuuut… I feel like saving that money and changing my 4090 for a 5090 will give way more performance at 4K in my case.
Qaxar@reddit
You're gonna get CPU bottlenecked (if the rumors of 5090 performance are true)
vedomedo@reddit
Well… literally everything will be bottlenecked by the GPU, but okay, sure.
Hell, I used an 8700K with my 4090 for a good while; upgrading to the 13700K gave me better 1% lows, but the averages were VERY similar. The same thing's gonna happen here: yes, the 9800X3D will perform better, but it won't be miles ahead.
Large___Marge@reddit
Not everything. Escape From Tarkov and Factorio have entered the chat.
vedomedo@reddit
Don't play either
Large___Marge@reddit
Hardware Unboxed’s deep dive on 9800x3D specifically 4k gaming from earlier this week: https://youtu.be/5GIvrMWzr9k?si=froGTT9f3MFqztUQ
vedomedo@reddit
I know, but like I said, a 5090 will be more impactful
Large___Marge@reddit
That really depends on if the games you play are heavily CPU bound. If they are, then your 5090 won't hit 100%. If they're not, then the 5090 will give you the bigger uplift.
vedomedo@reddit
While yes, that's true to a degree, at 4K you will always be GPU bottlenecked. There's no way in hell a 5090 won't be running at 100% in modern titles.
CatsAndCapybaras@reddit
this is a myth. certain games are heavily bottlenecked by the cpu at 4k. for an extreme example, late game factorio. less extreme: Rust, Tarkov, 7days2die. budget breakdown for cpu/gpu always depends on what people like to play
vedomedo@reddit
And I’m gonna repeat my answer for the third time. I don’t play those kinds of games, I simply… don’t. Literally no interest in any of them, or the genre they represent.
I mainly play the big games with RT and preferably PT. With those features and at 4K yes the cpu matters, but not nearly as much.
I used an 8700K with my 4090 for a year, and I remember the same conversation being a thing. Hurr durr bottleneck. People use that word without even knowing its meaning lol. Lo and behold, I upgraded to a 13700K, and you know what happened? My 1% and 10% lows got better, my average stayed more or less the same.
Obviously having higher lows is better, but come the fuck on. People like you make it sound like the machine won't even work lol. It will actually be fine, and the performance bump a 5090 is rumored to give is around 30% over the 4090, while upgrading the 13700K to a 9800X3D is anywhere from 4-15% or so depending on the title. My entire original comment was basically implying this simple fact. If I'm gonna spend money, I will spend it on the biggest upgrade, which in my case will be the GPU. And who knows, maybe I pick up a 9800X3D or whatever comes out in a year or two.
Happy now?
CatsAndCapybaras@reddit
I was replying to the statement "at 4k you will always be bottlenecked". Maybe reread your previous comments before going on a rant
Large___Marge@reddit
Again, Escape From Tarkov is a prime example. Hogwarts Legacy is another. Highly recommend you watch the video I linked.
yondercode@reddit
that's cap or you're using a very few extreme examples of games. i used a 10900K with a 4090 for a while and the CPU bottleneck showed in almost every game, especially with DLSS; upgrading to a 13900K helped massively
Finesse3Ways@reddit
Would a 5090 bottleneck a 14900k?
tukatu0@reddit
Everything would. The 4080 and 7900 XTX were already bottlenecked in around a third of 2023 titles. At 4K.
2024 has had some sh""" optimization though, so you'll have the 4080 rendering stuff at 1440p 30fps even on ultra. If a 5090 is 50% stronger than a 4090, then it would be, say, 80% stronger than those cards. But for simplicity's sake I will just say double the fps.
.... Well, I got ahead of myself. Point is, you'll be fine until you start hitting 150fps in games. That is when most modern games start to CPU bottleneck. Often you won't cross 200fps in 2018-2023 games.
Falkenmond79@reddit
You are exactly right. I'm a big fan of the new X3D CPUs and got a 7800X3D myself.
But if you play at 4K, especially right now, all you might get is an improvement in the 1% lows. Averages will stay mostly the same.
You might start to see a difference with the 5090 and 6090, when the GPU limit matters less for current games. For new games that make full use of that hardware, it will matter less again. In 5 years this might look different. By then the 13700K might get left behind like the 10700 is now, while today's X3Ds will be able to keep up.
vedomedo@reddit
100% agree.
noiserr@reddit
It also depends on the games you play. Like for instance if you play WoW. Having that v-cache in busy areas and raids is really nice.
Igor369@reddit
I for example can not wait to buy 5080 to play Doom.
Earthborn92@reddit
Playing through Doom Eternal (+DLCs) again with a 240Hz 4K OLED + 5090 + 9800x3D in the future doesn't sound like a bad time. :)
Igor369@reddit
...Doom eternal?...
airfryerfuntime@reddit
Does WoW really need that much vcache?
Stingray88@reddit
For crowded areas, absolutely. Most MMOs benefit a ton from the extra cache.
Earthborn92@reddit
GamersNexus benchmarks FFXIV for that reason I think.
MarxistMan13@reddit
Yes.
Zednot123@reddit
It frankly is a bit overhyped for WoW. The performance increase going from my tuned RPL system to my 7800X3D is barely measurable, if not nonexistent. But yeah, stock vs stock the X3D essentially gives you "OC/tuned RAM" levels of performance.
In some instances the RPL system is actually noticeably faster, like loading times.
tukatu0@reddit
That's why I look at the 7-Zip benchmarks, baby.
However, the 9950X is like 3x faster than 12th gen there. Not too sure what that would mean for a 9950X3D in gaming applications.
EnsoZero@reddit
At 4k you'll see almost no uplift for a CPU upgrade. Better to save for a GPU upgrade.
Stingray88@reddit
Depends on the game. I play a lot of factory sims and they’ll definitely see an uplift from a CPU upgrade. And not all of them are like Factorio and graphically simple. Satisfactory uses unreal engine 5 and looks gorgeous.
Earthborn92@reddit
I'm surprised that Satisfactory is not a more popular "standard benchmark title". They've used UE5 well; it doesn't stutter much.
Enabling software Lumen works well as a gameplay mechanic, kind of forcing you to light up walled factories properly.
Stingray88@reddit
To be fair, it only just released. I don’t think many want to benchmark on early access games, too many variables. But now that it’s out I agree it would make for a great benchmark. Particularly given someone could build a massive factory and that save file could be shared as “late game” benchmark.
Earthborn92@reddit
It's been a month already...we already have Dragon Age Veilguard in some benchmarks and that was actually just released.
Large___Marge@reddit
Upgrading from a 5800X3D netted me an insane uplift at 4K in my main game. 50%+ performance in nearly all 3 metrics.
Disordermkd@reddit
But note that OP was talking about going from a practically new high-end CPU, 13700K to 9800X3D. The uplift in 4K from one CPU to another won't make much of a difference.
Large___Marge@reddit
That's highly dependent on the game. If you're playing CPU bound games, then the uplift can be quite substantial, as it has been for my games.
Hardware Unboxed did a deep dive on this very topic last week: https://youtu.be/5GIvrMWzr9k?si=aAvYie9Bq0cPKI-n
TLDR: 17-20% average uplift in 4K across 14 games vs 7800x3D and 285k which are both a few gens newer than the 13700k.
Standard-Potential-6@reddit
Thanks for the link! Changed my view some. Note that this is using DLSS Balanced (58% render resolution) for all games.
Still riding out my 5950X and going from 3090 to 5090 here.
Large___Marge@reddit
NP! Glad you got something out of it. Yeah the DLSS is the detractor to the results for me but I guess they were going for a real world usage scenario since most people turn on DLSS.
Earthborn92@reddit
DLSS Balanced @ 4K is very reasonable.
I have even stomached DLSS Performance @ 4K to get pathtracing running reasonably for Cyberpunk. It's worth it, but very obviously upscaled.
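For context on the DLSS discussion above, here's a rough sketch of what those modes mean for the internal render resolution at 4K. The 58% Balanced figure is quoted earlier in the thread; the Quality and Performance scale factors are the commonly cited per-axis values and should be treated as assumptions here:

```python
# What "DLSS Balanced (58% render resolution)" works out to at 4K.
# Scale factors are per-axis; the GPU renders at the lower resolution
# and upscales to the output resolution.

SCALE = {"quality": 0.667, "balanced": 0.58, "performance": 0.50}

def internal_resolution(width: int, height: int, mode: str) -> tuple[int, int]:
    """Return the resolution the GPU actually renders before upscaling."""
    s = SCALE[mode]
    return round(width * s), round(height * s)

print(internal_resolution(3840, 2160, "balanced"))     # ~ (2227, 1253)
print(internal_resolution(3840, 2160, "performance"))  # (1920, 1080)
```

Rendering roughly a third to half as many pixels lightens the GPU load per frame, which is why CPU differences start to show up again even "at 4K" once upscaling is on.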
Disordermkd@reddit
Oh wow, okay. I didn't actually think it would be that impactful. I stand corrected
airmantharp@reddit
Think frametimes, not average framerates.
Averages over one second tell you nothing about how a game actually plays - see SLI vs. Crossfire for HD6000-series. It's why we got frametime analysis in the first place.
Large___Marge@reddit
Yeah it can be pretty stark in certain games. I expected uplift to be on the margins but have been pleasantly surprised. My 1% lows in 4k on the 9800X3D are better than my average framerate on the 5800X3D in Escape From Tarkov. I used to push about 180FPS average in Apex Legends battle royale, now it's locked at 224FPS (Nvidia Reflex FPS cap). Pretty mind-blowing improvement in experience so far and I haven't even tested all my games yet.
Tuuuuuuuuuuuube@reddit
Doing either at 4k is a waste of money
Raikaru@reddit
How do you know this?
Wooden-Agent2669@reddit
Paying 700 bucks for a mobo, CPU, and RAM just to gain ~2% is a waste of money.
https://tpucdn.com/review/amd-ryzen-7-9800x3d/images/relative-performance-games-38410-2160.png
Shrike79@reddit
Yeah, lets look at 4k native benchmarks that deliberately isolate gpu performance to evaluate a cpu. Genius.
What happens when you use DLSS or turn settings from ultra to high and shift the burden to the cpu? Oh right, the faster cpu has higher fps while the slower one remains gpu bound.
Ryzen 7 9800X3D, Really Faster For Real-World 4K Gaming? (Yes)
CPUs Matter for 4k Gaming
Wooden-Agent2669@reddit
Why are you using links that compare the Ryzen 5 3600 to the 7800X3D and the Core Ultra 9 285K to the 7800X3D when the topic was the 7700X vs the 7800X3D?
Shrike79@reddit
The 3600 vs 7800X3D is obviously an exaggerated scenario to disprove the "cpu doesn't matter at 4k" meme, but the same idea applies when talking about the 13700K vs the 7800X3D or any other processor.
The faster cpu is gonna be faster at 4k when it has more work to do (upscaling, lower settings, etc). Not exactly rocket science here.
The TechPowerUp and other 4K benchmarks people like to throw around are all 100% GPU bound, so of course there isn't going to be much separation. But hardly anyone actually plays at native 4K with everything on ultra when DLSS and high settings provide indistinguishable visual fidelity for the majority of people and better performance - if your CPU is up to the task.
Wooden-Agent2669@reddit
Sure, then use a link that uses the 13700K in those scenarios. Till then it's just hyperbole without any data.
Large___Marge@reddit
Here you go:
https://youtu.be/5GIvrMWzr9k?si=froGTT9f3MFqztUQ
Edit: TLDR: 9800X3D has a 17-21% average uplift in 4k across 16 games versus 7800X3D and 285k
Wooden-Agent2669@reddit
What are you trying to achieve by just reusing the link that shrike used?
It neither contains the 13700k nor the 7800X3D.
Large___Marge@reddit
I've watched and rewatched; it was part of the research I did prior to buying the chip. I've updated my comments to appropriately reference the 7700X instead of the 7800X3D, that's my mistake, and what I get for replying while I'm in the middle of doing something else. Regardless, the 17-21% in 4k against the 7700X and the 285k is accurate.
Here is the timestamp: https://www.youtube.com/watch?v=5GIvrMWzr9k&t=1640s
As I mentioned earlier, it depends on the title, but if you're CPU bound, you're likely to see a heavy uplift, as I have in the games that I play according to my tests with NVIDIA FrameView, further corroborated by Hardware Unboxed's tests in the link I provided.
Here's just another real-world example: https://www.youtube.com/watch?v=nDXE05RnepI I'm certain others will be coming out soon, including for the 13700k as more people get their hands on this chip.
conquer69@reddit
TPU has the lowest gains on the 9800x3d over the 7800x3d of any site. 3% at 1080p. I don't know if something went wrong with their testing or their game pool isn't a good representative.
DF has it between 15-20% faster once you exclude gains that get over 144 fps.
Wooden-Agent2669@reddit
TPU results align with Hardware Unboxed and Toms Hardware.
conquer69@reddit
No, they don't. HWU has the 9800x3d as 11% faster. TPU only at 3%.
Large___Marge@reddit
I just did it. And it's more like $900 after tax even with some insane deals and cash back. Sold my 5800X3D, mobo, and RAM for $420. Huge uplift in the games I play at 4K. Hasn't been a waste at all so far.
Wooden-Agent2669@reddit
So you can't read? The person has a 13700K. At 4K that's a 2% performance increase.
Large___Marge@reddit
Seems you're the one who can't read. 5800X3D shows 2.1% increase in the first list you linked. In my experience, the experience of many others, and Hardware Unboxed's own deep dive into the topic of 4k gaming, it has been much more than that in many games, and nothing in others. For me, and many others in CPU bound scenarios, the upgrade isn't a waste at all.
https://youtu.be/5GIvrMWzr9k?si=froGTT9f3MFqztUQ
Raikaru@reddit
He said either. That includes the 4090 -> 5090 upgrade. I’m asking how he knows that’s a waste of money? The 3090 -> 4090 upgrade was huge.
JuanElMinero@reddit
I think 'either' refers to both 13700k and 9800X3D, not the GPU.
Raikaru@reddit
Nope he replied he meant what i said
JuanElMinero@reddit
Oh, alright. Yeah, I can't make sense of that one.
vedomedo@reddit
Lol how is a 5090 a waste at 4K? That’s exactly where it ISNT a waste seeing as the resolution is gpu limited, meaning my 4090 is 100% utilized, so a better gpu would straight up give more performance
Street-Fishing8222@reddit
lmao poorest American 😂
vedomedo@reddit
I’m not american
Tuuuuuuuuuuuube@reddit
Yeah, it definitely would. It's still a waste of money though
vedomedo@reddit
Money literally only has one use, to be spent. We’ll all be dead and gone sooner than we’re aware. So do whatever makes you happy.
Tuuuuuuuuuuuube@reddit
Right, you can do whatever you want with it, and you can justify a 4090 to 5090 however you want. It's still a waste of money tho
vedomedo@reddit
I mean.. again, to YOU it might be a waste of money. Hell I love luxury watches as well, to most people that’s also a terrible waste, but to me is a hobby.
Then again, I have no interest in cars for example, so for me, a Porsche is a waste of money. See my point?
Tuuuuuuuuuuuube@reddit
No, because it doesn't make sense. A better example would be, "I love luxury watches. I bought last year's model and I'm going to buy this year's model even though it's only slightly better". The money spent relative to performance increase is nonsensical
vedomedo@reddit
That’s exactly how it works in the luxury watch world lol. You don’t even get anything «better» there so its even «worse»
Tuuuuuuuuuuuube@reddit
Right, so we're in agreement that even though you have the money, you're getting minimal or no gain for it. Which would be a waste
Zahand@reddit
Dude just give up. You're clearly in the wrong with your phrasing. You're speaking like an authority on what's worth it or not. Maybe it's a waste for you, but it may not be for others. And until we know actual numbers everything is conjecture. So please, stop talking like you're the authority of this shit.
Tuuuuuuuuuuuube@reddit
Right, it just is a waste though
Zahand@reddit
At this point I must assume you're just some teenager that thinks it's funny to troll people. Reminds me of the lone guys at the party meme.
Honestly just sad at this point
Tuuuuuuuuuuuube@reddit
Yeah, resort to personal attacks because I'm right. Spend a grand for minimal gains? Waste of money
Zahand@reddit
Lmao yeah you can believe whatever you want man.
vedomedo@reddit
But it’s not. A waste is subjective.
AssCrackBanditHunter@reddit
The money you're dropping on tinder gold is a waste, but we ain't judging you for that brotha
Tuuuuuuuuuuuube@reddit
Lol so many angry replies for such a reasonable take
Raikaru@reddit
Your take isn’t reasonable cause you don’t know the 5090’s performance gain over the 4090 and you don’t know what games he plays
Tuuuuuuuuuuuube@reddit
Right. It's still a waste though.
Raikaru@reddit
What are you gaining from this trolling?
Tuuuuuuuuuuuube@reddit
If I was trolling, what are you gaining from falling for it?
AssCrackBanditHunter@reddit
This is the best point you've made all thread
SupFlynn@reddit
The only reasonable point that he made all thread.
tucketnucket@reddit
You're on a hardware enthusiast subreddit talking shit about the hardware that will be the best consumer GPU. Maybe your opinion would do better in some kind of budget gaming subreddit that circlejerks about 1080p, 60fps being the gold standard.
Agreeable-Weather-89@reddit
For averages, probably, but how are the 1% and 0.1% lows?
Large___Marge@reddit
What games do you play? That should factor heavily in your decision.
vedomedo@reddit
Well obviously… but to answer your question, I play everything graphically intensive, especially games with RT/PT. CP2077 and Alan Wake 2 are truly the best examples.
Kougar@reddit
Depends on the games. Stellaris would be CPU-bound the entire way; the sim rate needed to maintain game speed is critical. But most regular games will be GPU-bound. My 4090 can't even sustain >100FPS in Talos Principle 2.
isotope123@reddit
Yes, it would, assuming the rumors of the 5090's performance hold true.
Stennan@reddit
The scary part is that node-wise, Intel is using technology that is 1 generation ahead of AMD (3nm vs 4/5nm).
Can we please get 8 big cores and lots of cache without E-cores?
Hakairoku@reddit
2 generations behind is one thing, but how they handled the controversy with their current gen is the most egregious shit, IMO.
Had they not been assholes, people would be a bit more sympathetic to their plight.
battler624@reddit
Intel 2 generations behind itself.
CavaloTrancoso@reddit
Is it "murder" an adequate word?
Strazdas1@reddit
murder is just multiplicative of crows.
Hakairoku@reddit
more like euthanized, it wasn't even a fair match up.
CarbonTail@reddit
I don't see how Intel is going to recover from this.
G4m3boy@reddit
Why is the testing still on 1080p?
Valmar33@reddit (OP)
Because it's a CPU benchmark; 1080p is used to remove the GPU bottleneck.
Firefox72@reddit
It's incredible how wrong Intel got it gaming-wise with this architecture.
They'd better hope it's a stepping stone to a Core leap forward, otherwise they have just dug themselves a massive hole.
JonWood007@reddit
Intel's historical strengths came from monolithic architecture. They abandoned it outright this gen. As such they regressed basically to Alder Lake performance wise.
BookinCookie@reddit
Chiplets aren’t the problem with Arrow Lake.
JonWood007@reddit
Yes they are. Adds latency.
BookinCookie@reddit
Nah, it’s the terrible SOC architecture, which is independent of the chiplet design. For example, Nova Lake will still use chiplets, but its SOC architecture was overhauled, which fixed the latency issue. Arrow Lake’s fabric design is atrocious.
JonWood007@reddit
You're being overly technical.
Berengal@reddit
I feel okay giving them a pass on the first instance of a new architecture. You don't really know which bottlenecks are going to crop up until you have the actual product in hand, and by that point you just gotta ship something even if it's a bit undercooked. Hopefully it's just a matter of correcting some missed assumptions, not something that invalidates the entire design hypothesis.
timorous1234567890@reddit
Meteor Lake was the first iteration. Arrow Lake is the second.
Berengal@reddit
They both used the same underlying architectures, didn't they? Just different tile configurations.
BookinCookie@reddit
They use (nearly) the same SOC architecture, but Arrow Lake has new cores.
Slyons89@reddit
I always tend towards automotive analogies.
This is like the first model year after Intel retired their mature, powerful V8 engine (disregarding the stability issues) and moved to a turbo four-cylinder platform. The first attempt at the newer, more efficient platform just can't beat the V8 that was perfected generation after generation. But eventually the turbo four should be able to surpass the older design, and with better efficiency.
We’ll have to let them cook. However, that definitely doesn’t mean people should be buying into this undercooked platform, and their sales numbers and reputation are suffering.
crshbndct@reddit
Tell that to Mercedes, who have just moved back to the V8
Geddagod@reddit
They already had MTL to test their specific chiplet and fabric implementation though. It just seems like they couldn't "fix" or iterate on MTL's design fast enough. I wouldn't be surprised if Intel knew internally, for a while, that they were screwed after seeing how MTL fared.
foldedaway@reddit
Monkey's paw: it's Pentium back to Celeron, electric boogaloo, all over again.
Agreeable-Weather-89@reddit
AMD's 1% lows beat Intel's averages in some games.
perfectdreaming@reddit
Reminds me of the Zen 1 vs. 7th-gen Intel Core comparisons.
Saitham83@reddit
The difference was that Zen 1 was cheaper and simply doubled core resources for productivity. It killed Intel HEDT over the next few generations. The 285K is great in productivity, but not that great compared to the 9950X. There's some resemblance, yes, but the environment and impact were very different.
koopahermit@reddit
Not to mention that only a single year later, the Ryzen 5 1600 was aging way better than the Core i5 7600K and was beating it handily in games released in 2018. Intel tried milking 4c/4t in the mid-range for too long and AMD exposed them.
I don't think the opposite is going to happen with Core Ultra aging better than what AMD offers.
bizude@reddit
Both of those CPUs aged like milk.
Shrike79@reddit
However the person who bought the 1600 could drop in a 5800x3d or 5950x or whatever else for a huge performance uplift. If the person who bought a 7600k wanted a similar upgrade they'd need to buy a whole new system.
So while the cpu aged like milk, the platform was fine wine.
JonWood007@reddit
You could've just spent an extra $100 on a 7700k and outlasted the entirety of am4 though.
Dat_Boi_John@reddit
My guy, as things are progressing, the 5800x3d is on pace to outlast the entirety of AM5.
JonWood007@reddit
...and I literally went from a 7700k to a 12900k and plan to do just that.
Dat_Boi_John@reddit
That's a different socket though and different ram, plus Intel's boards typically cost more. Someone could've easily bought a 2600 (or 1600, then sold it for a 3600) and then upgraded to a 5800x3d, while doing fine for the entirety of both AM4 and AM5 without buying a new board, ram, or having to rebuild their entire system when switching out boards.
JonWood007@reddit
Once again, no one buying in 2017 could've known how long AM4 would be supported, and tbqh, the most reasonable assumption was that the 3000 series was functionally going to be the last series on that board. People love to make this argument in perfect hindsight, but anyone pushing this in 2017 had no fricking clue what they were talking about.
Either way, Microcenter bundles exist, and I got a pretty affordable upgrade that wasn't more expensive than many in-socket upgrades would have been.
Dat_Boi_John@reddit
You're lucky to have a Microcenter. That just isn't a thing in most of the US and is certainly not a thing in Europe. Still, hindsight or not, going AMD was the right choice for AM4.
JonWood007@reddit
Again, if you were buying in early 2017, that choice wasn't clear. You were looking at a crappy architecture with inferior cores, with the 8-core models not even beating a quad-core 7700k, and no clear upgrade path.
People only love to trot out these arguments now because, as it turned out, you could eventually get a 5700X3D or something. But back then, the same people who do this now were going JUST YOU WAIT, MY CPU HAS MORE CORES AND WILL EVENTUALLY OUTPERFORM YOURS. And they never did. So those people quietly upgraded while I was getting 3300x/3600-level performance back in 2017.
This is what irritates me about this crap. You guys ALWAYS come up with new arguments. Intel is always bad, it's never worth buying, AMD is always great, it's the underdog, it's the CPU of the people, let's all worship fricking Lisa Su and treat her like CPU Jesus.
Ya know? It's getting old. I've been literally putting up with this crap SINCE 2017, when I DID buy a 7700k, and I DID have superior gaming performance to the entire AMD lineup for the first two generations of Ryzen, and I still had decent performance that competed favorably with the third.
But hey, you can upgrade your crappy $300 1700s for $450 5800X3Ds at launch, I guess. So there's that.
Ya see what I'm saying? It gets old.
Shrike79@reddit
Lol?
JonWood007@reddit
No, really. I literally did that. I understood both the 7600k and 1600 were terrible processors at the time and neither would age well, so I spent an extra $100 to have something that would last.
Also, no one in 2017 could've guessed we would functionally get 4.5 gens on the same board. We were thinking maybe 3 gens, so that would be going from a 1600 to a 3600... whoopie...
Anyone who bought AM4 lucked out, and even then, we didn't actually see something like the 5700X3D get as cheap as it is now until literally this year.
So....basically you're being captain hindsight here.
Shrike79@reddit
Ok, so you hung onto a 7700k for a long time and it worked out for you. Cool I guess?
But your post seemed to imply that the 7700k's performance was as good as the entirety of the am4 lineup. At least, that was my takeaway.
Not sure why you're bringing up the 5700x3d specifically when the 5800x3d was edging out the 12900k in gaming when it came out. Also someone who bought a 1600 wouldn't be limited to upgrading to a 3600. Even if am4 "only" lasted until 2020 they could've bought a 3950x if they wanted to.
I went and looked up some of the launch reviews to refresh my memory and back then if you wanted equivalent performance from an Intel cpu you would've had to fork out nearly $2k for an i9-7980XE. Actually insane.
JonWood007@reddit
Well, here's the thing. Most people have BUDGETS. They are UPGRADE AVERSE. They're not like the weirdo upper class on these hardware subs who seemingly have infinite money to always have the best of the best.
If I were to upgrade, it's really a matter of how much money am I spending, for how much improvement.
The 7700k was already so far ahead of the 1st-gen Ryzens that you didn't even see those CPUs start to surpass it AT ALL until third gen. The 3300x was on par with it; the 3600 was -15% single thread but +35% multithread, give or take, in gaming. And that was a $200 CPU. Am I going to want to spend $200 for a minor bump in performance like that? Unlikely.
5600x was more compelling. But it launched at $300, with AMD suddenly taking the single thread gaming crown and overcharging for it. So now we're talking like +25% single thread, +70% multithread or something. More compelling, but not until it gets cheap.
The 5800X3D cost $450 at launch. It never really dropped below $300. For all the talk of how you can UPGRADE ON THE SAME PLATFORM, DID YOU KNOW THAT? OMG! AMD kinda knew that they could price gouge people who actually want the most substantive upgrades from those crappy 1st and 2nd gen parts.
Like, this is what you people don't understand. People have budgets, and not everyone wants to pathologically upgrade every 1-2 generations. We wait until we have performance issues, and then we upgrade. I didn't even find a game that struggled on my 7700k until BF2042, and that... ran poorly... on everything.
And 3950x, WHY would I do that? It would have all of these weak cores and I'd barely get more performance than I would out of a 3600x/3700x.
Like... I know everyone on Reddit acts like they must sing Lisa Su's praises ALL THE FRICKING TIME, but honestly, unless you waited until like THIS YEAR, with the 5700X3D being $200 or less at times, I just don't see a worthy upgrade for the money.
And even then, I literally just went out and got a Microcenter combo deal for a 12900k a year ago. I mean, I could've gone AM5... but AM5's Microcenter combos seemed to be having RAM-compatibility issues given the parts they were selling with the bundles. So I bought Intel again. I don't have much of an upgrade path, but to be honest, is there even anything worth upgrading to on AM5 right now from, say, a 7700x, outside of an X3D chip? And even then, AMD knows what they're doing. They artificially constrained supply of the 7800X3D to kill it off and are now charging top dollar for the 9800X3D, knowing they can get away with it.
Eventually I can probably get equivalent or superior performance to that for less than the $480 they're selling those chips for. I mean, we really gotta factor in, ok, a 12900k is gonna last how many more years? At least 4 probably? Possibly more given even new chips like a 9700x or core ultra 245k get similar performance at like $300? Yeah. So...what are we really doing here?
By the time my chip struggles in games, I'll be able to leapfrog to such a higher level of performance that it diminishes the value of the in-socket upgrade path.
Because again, you guys really seem to overestimate how willing people are to drop $200+ or even as much as $450 on a new chip. That's a lot of money, and I really only upgrade when I feel like I HAVE TO.
Shrike79@reddit
You seem triggered as fuck lmao.
The price for the 5800x3d went down to about $300 a few months after launch. I know because I bought one and I still have the receipt sitting in my inbox (I paid $309 for mine, to be exact). Then I turned around and sold the 2700x I had for like $100 on eBay, so I basically got 12900k gaming perf for $200.
Pretty sweet deal if you ask me.
JonWood007@reddit
Nah I just hate how many fricking AMD sycophants there are on reddit and how rabid they are.
Either way, I also spent $200 on a 12900k. While getting a nice RAM and mobo upgrade in the process.
Shrike79@reddit
Actually, I made a mistake. I double checked and I actually sold my 2700x for $167.50.
So in total it was a $139 drop in upgrade.
Now you're telling me you replaced your 7700k with a 12900k for $200? I'm not sure I believe that; even when people were going crazy snapping up PC parts during the pandemic, I don't think you could have unloaded a 7700k to cover almost the entire cost of a 12900k.
JonWood007@reddit
https://www.microcenter.com/product/5006835/intel-core-i9-12900k,-asus-z790-v-prime-ax-ddr5,-gskill-ripjaws-s5-32gb-kit-ddr5-6000,-computer-build-bundle
I also didn't sell my old parts.
Point is, you guys REALLY overrate the value of drop-in upgrades, and it seems like you're just interested in rabidly pushing AMD every chance you get.
Meanwhile I got an upgrade for less than a 5800X3D cost new.
bizude@reddit
I think that's a fair point. However, as a true devil's advocate I would be remiss not to point out that this wasn't an option for most of them until Alder Lake was released, when mysteriously all of those older boards which previously didn't support newer-generation CPUs became compatible via BIOS updates :P
Reizath@reddit
Companies doing everything to milk customers, strange. Good that we have competition, right? ...Right?
looking at Intel CPUs and AMD GPUs
WS8SKILLZ@reddit
I disagree, I’m still getting 60fps in most games with my 1600.
996forever@reddit
60fps? A 13 year old 2600K would give 60fps.
bizude@reddit
I would argue that 60fps in "most" games isn't a high standard, and for the most part both CPUs would pass that standard.
Ryzen 1600 would be better in multi-threaded games, the i5 7600K would be better in primarily single-threaded games like e-sports, it really just depends on the games you're playing.
JonWood007@reddit
Yeah, we gotta keep in mind this time that we're comparing flagship halo products. This is more 7700k vs 1700. And honestly, X3D chips are expensive, so if you're below the $400 price class these chips are irrelevant, and there are solid Intel and AMD chips that both perform about the same. Remove the X3D buff and the two companies perform roughly on par. Except maybe the 200 series being an overpriced joke.
As Gamers Nexus pointed out last night, in the midrange you're having a 5700x3d compete with like a 7600x, a 12700k, and a 13600k, and yeah. You don't get 75% boosts in that price range. And given the 7800x3d/9800x3d are all 8 cores, I don't think we will see a "more cores" boost here. Maybe vs the 7600x3d; I mean, I already see occasional benches where non-X3D Intel chips can hold their own quite well against that one, but vs the 8-core models? Nah....
loozerr@reddit
That went the other way around though
perfectdreaming@reddit
Yes, still reminded me of it.
OGigachaod@reddit
Can't wait for the benchmarks after Intel fixes their CPUs. These benchmarks will be worthless soon enough.
996forever@reddit
Why did they launch a broken product in the first place instead of fixing it before taking buyers’ cash?
crshbndct@reddit
And they will revisit them then. Anyone buying today is getting this today. Never buy on promises, only on current delivered results.
Disturbed2468@reddit
Yep. Absolutely this. You never buy on promises because it will eventually bite back, and bite back hard.
ww352@reddit
ICU
short for Intel Core Ultra
Valmar33@reddit (OP)
Fucking roflmao
996forever@reddit
The fact that games with 40%+ gaps aren't even outliers
Purple10tacle@reddit
The actual "outliers" (which were shockingly plentiful) were even more brutal: literally more than double the 1% lows and games where AMD's 1% lows outperformed Intel's average, sometimes significantly so.
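(For anyone wondering how reviewers get those numbers: "1% lows" are usually reported as the average FPS across the slowest 1% of frames in a capture, though some tools use the 99th-percentile frame time directly. A rough sketch of the math, with made-up frame times, looks something like this:)

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical sketch: compute average FPS and "1% low" FPS from a list of
 * frame times in milliseconds, the way most reviewers report them
 * (average FPS across the slowest 1% of frames). */

static int cmp_desc(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x < y) - (x > y); /* sort descending: slowest frames first */
}

int main(void)
{
    /* made-up frame times (ms) standing in for a capture log */
    double ft[] = { 8.3, 9.1, 8.7, 25.0, 8.9, 9.4, 30.2, 8.6, 9.0, 8.8 };
    size_t n = sizeof ft / sizeof ft[0];

    double total = 0.0;
    for (size_t i = 0; i < n; i++)
        total += ft[i];
    double avg_fps = 1000.0 * n / total;

    qsort(ft, n, sizeof ft[0], cmp_desc);
    size_t worst = n / 100 ? n / 100 : 1; /* slowest 1%, at least one frame */
    double worst_sum = 0.0;
    for (size_t i = 0; i < worst; i++)
        worst_sum += ft[i];
    double low1_fps = 1000.0 * worst / worst_sum;

    printf("avg: %.1f fps, 1%% low: %.1f fps\n", avg_fps, low1_fps);
    return 0;
}
```

That's also why a big gap in 1% lows shows up as visible stutter even when the average FPS numbers look close.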
imaginary_num6er@reddit
Didn't they already do a comparison with the 9800X3D review?
teh_drewski@reddit
There are way more games in this comparison, so it both validates the launch review data and gives more detail to people who want a specific title benched (or at least makes it more likely they'll get their specific title's results).
Long_Restaurant2386@reddit
Intel's gonna nail 18A and then shit the bed anyway because they can't make a competitive architecture.
aecrux@reddit
I get one step closer to driving to microcenter every time I see a new review on the 9800x3d
quack_quack_mofo@reddit
Veilguard only gets 136 fps at 1080p? Damn
bobbie434343@reddit
They could have waited for the update Intel announced to improve performance.
skinlo@reddit
Why? This is the performance as it stands today.
I'm sure he'll do some further testing if/when Intel releases some updates.
jedidude75@reddit
There isn't going to be a "magical update" that fixes gaming performance on Arrow Lake. Any updates will be focused on fixing the games that noticeably have problems, like CP2077, but most of the gaming issues stem from the horrible memory latency, which is not getting fixed except with new hardware.
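(For context on what "memory latency" means here: it's typically estimated with a pointer-chasing loop where every load depends on the previous one, so the core can't overlap or prefetch the accesses and you see the full round trip to DRAM. Below is a rough, hypothetical sketch of the idea; the buffer size, iteration count, and use of rand() are arbitrary, and real tools like lmbench or AIDA64 are far more careful.)

```c
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Hypothetical sketch of a pointer-chasing latency test. Each load depends on
 * the previous one, so the CPU can't overlap the accesses; with a buffer much
 * bigger than L3 and a single random cycle (Sattolo's algorithm) the hardware
 * prefetchers can't help either, and the time per load approximates DRAM
 * round-trip latency. Sizes and iteration counts here are arbitrary. */

#define N     (64u * 1024u * 1024u / sizeof(size_t)) /* ~64 MiB, well past L3 */
#define ITERS (1L << 24)

int main(void)
{
    size_t *buf = malloc(N * sizeof *buf);
    if (!buf)
        return 1;

    for (size_t i = 0; i < N; i++)
        buf[i] = i;
    /* Sattolo's shuffle: guarantees the permutation is one big cycle. */
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = buf[i]; buf[i] = buf[j]; buf[j] = t;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t idx = 0;
    for (long i = 0; i < ITERS; i++)
        idx = buf[idx]; /* dependent loads: the actual latency chain */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    /* print idx so the chase loop isn't optimized away */
    printf("~%.1f ns per load (idx=%zu)\n", ns / ITERS, idx);

    free(buf);
    return 0;
}
```

On a chip where every one of those loads has to cross a slow fabric to reach the memory controller, that per-load number climbs, and latency-sensitive games feel it.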
bobbie434343@reddit
You may be right but we'll see.
COMPUTER1313@reddit
Then Intel should have delayed Arrow Lake if they didn't want reviewers to benchmark it.
juGGaKNot4@reddit
Like I'm waiting for Intel to launch a node on time
Valmar33@reddit (OP)
Why wait, when that'll be who knows how long? We'll get a review then, anyways.
Firefox72@reddit
When have these post-launch updates ever moved the needle? It's often just a few % at best, which hardly helps Arrow Lake.
dimaghnakhardt001@reddit
Was expecting to see power consumption numbers in the video, but nothing. Doesn't Hardware Unboxed touch on this in their videos?
broken917@reddit
Title : 45 game benchmark
I expected 45 game benchmark.
Leo9991@reddit
Just realized they banned me from commenting for asking if the Nvidia app video was paid for by Nvidia lol. I get it, they don't want negativity in the comments, but that seems a bit much. That video was strangely Nvidia-positive and almost seemed like an infomercial to me.
azn_dude1@reddit
You questioned their journalistic integrity without any evidence just because vibes were off. Not to mention the general reception about the new app is pretty positive when compared to Geforce Experience, so you probably just came off like a troll.
Leo9991@reddit
True true
TheMiserableRain@reddit
Honestly, exhibition matches where the competitors are as grossly mismatched as this should be banned. It just seems cruel at this point.