[Hardware Unboxed] Ryzen 7 9800X3D vs. Core i9-14900K: Who's Really Faster For Battlefield 6?
Posted by Comprehensive_Ad8006@reddit | hardware | 366 comments
Comprehensive_Ad8006@reddit (OP)
TLDR:
rebelSun25@reddit
Not gonna lie, the power usage alone discredits any reason to use the Intel.
Just imagine the power usage once the overclocking tunes it up a notch.
Instead of slandering the streamer, Intel fans should be screaming at Intel.
PT10@reddit
With overclocking my 14900ks runs bf6 at 210-220w.
You can't overclock it without also undervolting it unless you delid it.
LaffingGoat@reddit
So, at 5.9 all core you're pulling the same amount of watts HUB is pulling after locking it at 5.3GHz. And your 5.7 all core is running 5% to 10% more energy efficient than his 5.3. LOL. I can't believe people still watch this guy's benchmarks. This is weeks after he was acting like it was some kinda big deal that AMD improved their shitty release drivers, then credited a driver update with some giant leap in performance...utter rubbish. He's here to be the antithesis of User Benchmark. He just lies the other way.
buildzoid@reddit
Steve didn't lock the CPU to anything. 5.3GHz all core seems about right for intel's stock config.
PT10@reddit
Did the Intel Defaults profile in the newer BIOSes change the turbo ratios?
I know if you run a large all core load like Cinebench on stock, it will throttle down to 5.3 because of the crazy power/heat.
Games usually didn't have this problem. But BF6 seems to be the most CPU intensive shooter I've seen recently.
No_Raisin_1838@reddit
The stock v/f curve on Raptor Lake CPUs is not ideal. If you want your game to be sitting at 5.7+ all-core in game you really need to undervolt the CPU quite a bit (Buildzoid has quite a few videos on how to do this on a 14900K).
In general if you tune the CPU properly it looks like 16-18% is closer to the average gap between the CPUs per DannyZReviews:
https://www.youtube.com/watch?v=-5pyLavnlsY&t=661s
jerrylzy@reddit
DannyZ uses the 4090 which is pretty slow in BF6. His gameplay is mainly GPU bottlenecked. Check his GPU usage!
algorithmic_ghettos@reddit
If you're a typical working adult who games a couple of hours a day at most, Intel still has significantly better idle power consumption.
Ring bus degradation is why I won't consider a 14900k.
Sopel97@reddit
14900k idles at like 20-25W. that's not that much lower than single-CCD AMD CPUs
0xdeadbeef64@reddit
I thought the 14900K had a lower idle power usage.
My Ryzen 9700X has a CPU Package Power of 24-26W when browsing the web and watching YouTube. I'm using EXPO at 6000 MHz and SOC voltage at 1.15 V.
Dey_EatDaPooPoo@reddit
That's because you have it set to the balanced power plan in Windows instead of power saver (which there's no reason to be using when you're not gaming or running a production workload). Power saver will bring the idle clocks and voltage down and with that it'll go down to 15-18W CPU package power during idle and regular tasks like browsing and watching video.
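For anyone who'd rather script the switch than dig through Settings, here's a minimal sketch using Windows' built-in powercfg scheme aliases (assumes the stock Power saver scheme still exists on your install):

```python
import subprocess

# Switch the active Windows power scheme to the built-in Power saver
# scheme (alias SCHEME_MAX), then print which scheme is now active.
subprocess.run(["powercfg", "/setactive", "SCHEME_MAX"], check=True)
subprocess.run(["powercfg", "/getactivescheme"], check=True)

# To flip back to Balanced before a gaming session:
# subprocess.run(["powercfg", "/setactive", "SCHEME_BALANCED"], check=True)
```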
0xdeadbeef64@reddit
I changed the power plan from Balanced to Power saver to test this but HWINFO64 still shows around the same wattage for CPU Package Power.
Dey_EatDaPooPoo@reddit
Then that means you either don't have the chipset drivers installed or they're not up to date. On my CPU it brings down the idle power from that very same 25-30W down to 15-20W.
0xdeadbeef64@reddit
I have the latest chipset driver installed and Windows 11 is always up to date. I'm on the latest BIOS for my motherboard as well.
I would love to have reduced idle CPU power usage, but here we are.
hardolaf@reddit
The 9800X3D idles at 15W. If your PC is sitting above 20W with one, then the issue is something on the motherboard, a GPU idling way higher, or Windows not letting your PC idle (very common these days if you aren't turning off every new "feature" that Microsoft pushes).
Keulapaska@reddit
Yeah, even a 7800X3D can be down to 21W, though with only one monitor active and 6000 (or 6400) RAM of course. The idle power draw of the chips is kind of a non-issue; even rapid 8kHz mouse movement doesn't really spike it that much.
Though funnily enough, if you look at TPU's idle tests from the wall, their AMD systems draw more than the Intel ones overall. Maybe the X670E/X870E dual chipset is the culprit and single-chipset boards would draw less, but why that is, I have no idea.
hardolaf@reddit
Yeah, without specifying which chipset is in use for the tests, the numbers are kind of useless. I would expect the top-tier chipset to have far worse power efficiency as they're focused on performance not power usage.
Keulapaska@reddit
They do specify the test system specs in their reviews; it's an X670E board, so it does have 2 chipsets, which probably increases power draw over Intel or single-chipset AM5. But who knows by how much, as I couldn't quickly find anything on the power draw difference between motherboard chipsets.
Sopel97@reddit
I see, I was going by zen 4 figures, so for zen 5 it may actually even be the other way
algorithmic_ghettos@reddit
Make sure any mobo auto overclock function is disabled. Enable speedstep and C-states etc. In my experience that is enough to reduce idle draw to single digits (YMMV).
So much easier than having to micromanage your OS power plan and manually undervolt/stress test IF/VSOC to get idle draw down, which has been my experience with Zen.
jerrylzy@reddit
The 9800X3D with DDR5-8000 idles at ~20W, so what "significantly better idle power consumption" does the 14900K offer when it's also idling at 20-30W? https://imgur.com/a/z8LZLE8
tebee@reddit
Power consumption means heat generation. So the Intel processor is much harder to cool than the AMD one.
Also, typical working adults don't leave their PCs running the entire day, they only start them for a gaming session. So unlike office PCs, idle power consumption isn't that relevant for gaming PCs.
nepnep1111@reddit
That's not exactly how that works. Heat density plays a factor as well. A 14900K locked at 150w is easier to cool than a 9800X3D at 150w. It's a combination of what the power draw is as well as the density and if there are any severe hotspots. The 14900K requires more cooling, but it's also easier to get that heat off the CPU due to the decreased heat density.
wilkonk@reddit
Power consumption means heat generation. So the Intel processor is much harder to cool than the AMD one.
The biggest problem IMO is that it makes it harder to cool your GPU when the CPU is heating up the case that much.
Stingray88@reddit
Unless you use your desktop as a server as well, the idle power consumption comparison is totally bogus. All you save in idle power consumption is completely lost in just an hour or two of gaming per day… and it’s even more pointless for those of us with a dedicated NAS, because when I’m not playing games on my gaming desktop, I turn it off completely. It’s rarely ever idle.
Turtvaiz@reddit
But why would you care about that? If you're not using the PC it's probably turned off?
emeraldamomo@reddit
People should choose the superior product not be a fan of a giant mega corporation.
Tokena@reddit
I wonder how much the extra power contributes to expelled heat and room temperature.
soggybiscuit93@reddit
220W of CPU power consumption will result in 220W of extra heat.
100% of the CPU's power consumption will become heat in the room.
Portable space heaters, at max settings, typically output up to 1500W
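To put rough numbers on that (a back-of-the-envelope sketch, assuming a 2-hour daily session and the roughly 120 W gaming-load gap reported elsewhere in the thread, about 90 W vs. 210 W):

```latex
\Delta E \approx 120\,\mathrm{W} \times 2\,\mathrm{h/day}
         = 0.24\,\mathrm{kWh/day}
         \approx 88\,\mathrm{kWh/year}
```

Noticeable on a power bill over a year, and all of it ends up as heat in the room, though still well short of a space heater's 1500 W while it's running.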
fumar@reddit
Power costs are going up incredibly fast. Efficiency really matters too.
SilentHuntah@reddit
Turns out both AMD and Intel weren't just blowing smoke up our arses when the non-X3D and initial Arrow Lake variants touted efficiency gains.
Too bad Intel jacked up power consumption for this sku.
classifiedspam@reddit
Not only that, but it's all excess heat that has to go somewhere, added to the way higher running costs.
RephRayne@reddit
Netburst 2.0
dragonpradoman@reddit
Yeah, HUB's testing was horrendous; the Intel CPU was horrifically throttling and his RAM was unstable as he was using a Z690 board. DannyZReviews and Framechasers have some videos on the topic and the performance difference for them was closer to 2-5%, and yes, Intel's power usage was definitely horrific.
BTTWchungus@reddit
Bro, was Intel fucking astroturfing? There was no way their piece of shit Raptor Lake was keeping up with the 9800X3D, with all these "redditors" acting like it could.
Adventurous_Tea_2198@reddit
/r/intelstock
imaginary_num6er@reddit
Probably Accenture’s AI department telling Intel to astroturf since Intel outsourced their entire marketing department to them
DullAd8129@reddit
Timmy Joe PC Tech loves deleting users' posts, and I don't know why.
atatassault47@reddit
The intel can probably gain 30% more performance on an overclock, on LN2 with 500W power
Exist50@reddit
Even that probably can't get you 30%.
Tyz_TwoCentz_HWE_Ret@reddit
I haven't personally seen any doing over 300 watts without BIOS-level mods applied to the CPU (its PL2 is rated for 253W stock); at 300W you are hitting 6GHz on that chip. Not what I would do, but it's your money and you are free to do what you like with it (within law/reason). Cheers!
I don't go looking for what people are overclocking these days. We/I don't chase benchmark leaderboards these days.
Skeptical of 30% gains from OC alone, but I won't discount a very good silicon sample either.
YMMV
Jeep-Eep@reddit
Jesus Christ.
Tyz_TwoCentz_HWE_Ret@reddit
The Intel Core i9-14900K has relatively high power consumption under load, but it is noted for having lower idle power consumption compared to some other CPUs, typically around 20 to 30 watts. However, actual idle wattage can vary based on system configuration and usage. For comparison, the AMD Ryzen 7 9800X3D typically has an idle power consumption of around 35 watts. That is factual and you can search it yourself. No one is forced to run PL2; you can absolutely run PL1 and it will work, I promise you.
jerrylzy@reddit
The 9800X3D with DDR5-8000 idles at ~20W, so what "idle power consumption advantage" does the 14900K offer when it's also idling at 20-30W? https://imgur.com/a/z8LZLE8
Tyz_TwoCentz_HWE_Ret@reddit
We posted what it idles at on average, and it was stated that specific user cases can vary. So what, are you afraid of facts or something? No one said it offered any advantages, just factual data. I stated I'm not interested in what you want/buy/like; it's irrelevant to the factual data posted. AMD chips run DDR5 at around 6000MHz with CL28/30 timings, a well-known EXPO fact as well. Are you going to get upset about that too? 6000MHz CL30 is basically as fast as you want to go before diminishing returns happen on that chip, and that is well known/established. Clock speeds higher than 6000 mean the memory controller is no longer running 1:1, which funnily enough can sometimes lower performance. Again, all factual information.
LLMprophet@reddit
If you're that concerned about idle power consumption you can get CPUs that are lower.
Tyz_TwoCentz_HWE_Ret@reddit
So use PL1 and not PL2. This isn't rocket science. You absolutely have that choice available as the end user/owner, and it's up to you, the owner, to use it or not. Again, I don't care what you want to use; that is irrelevant to the posted information. It's a personal choice folks get to make, and it's your/their money, not mine. The above isn't an endorsement or recommendation; it is simple fact regardless of whether one cares about it or not. It won't change a bit due to anyone's feelings or likes. Cheers!
LLMprophet@reddit
Your desperation to defend Intel is just weird to the point of trying to sell idle power consumption.
Using your logic, just get the most expensive unoptimized chip and downclock it and target the lowest idle power consumption with no regard for money because it's a personal choice to do weird pointless stuff with it.
Why not take your own advice: if people want to get a 9800x3d you don't need to take it as a personal attack on your bizarre purchasing decisions.
Jeep-Eep@reddit
Like I either shut down or hibernate my rig when not at it. Idle draw is close to being something I do not give a shit about.
gusthenewkid@reddit
Ram oc and tune gives the 14900k the edge in the lows actually…
Blueberryburntpie@reddit
The 14900K was tested with DDR5-7200, with HUB mentioning that all of his 8000 MHz kits crash with the 14900K when XMP is enabled.
gusthenewkid@reddit
All A-die can hit 8000MHz.
jerrylzy@reddit
And the vast majority of RPL CPUs can't run 8000MT/s stably (not MHz, btw).
Pillokun@reddit
It crashes with these mobos because they're 4-DIMM boards, and the ones shown, the Prime and Maximus, are Z690 boards that often can't even do 7200MT/s. To go higher you need a two-DIMM board like the Apex or an ITX board.
rabouilethefirst@reddit
Well ackshually, if you liquid-nitrogen cool the 14900K and overclock it to 8GHz, it can just barely match the 9800X3D with 400W power consumption. Take that, nerds 🤓
Blueberryburntpie@reddit
Might as well as also disable all of the over-voltage protections.
"We're here for a good time. Not a long time."
gusthenewkid@reddit
Yeah, that’s XMP. I clearly said tuned..
No-Actuator-6245@reddit
Any evidence to back up this claim? Seems extremely unlikely
gusthenewkid@reddit
Check Dannyz reviews.
No-Actuator-6245@reddit
You made the claim, you provide the evidence, otherwise it didn't happen.
gusthenewkid@reddit
I don’t own a 9800x3d anymore. Idc what you believe
aintgotnoclue117@reddit
God, even if it had the 'edge' in lows, that assumes nothing of the 9800X3D's own ability to match it, in terms of pushing RAM, in terms of pushing frequency. These are two perfectly tunable things; why are we hedging bets on specific circumstances to justify and win hypotheticals?
bogglingsnog@reddit
I wonder if this is because of Intel's security risk mitigations or if their architecture is really just that far behind.
Franseven@reddit
Frame Chasers, with their cherry-picked overclocked RAM and CPUs (which btw are from a generation that produces CPU deaths by the minute, so he's lucky to have it working at all), is now saying that supposedly we all win. That's a defeated man changing perspective to avoid admitting defeat.
Active-Quarter-4197@reddit
https://youtu.be/Wyubzqb-VuY?si=jYXW-VxNy9m9Soqx
You can though
Tyz_TwoCentz_HWE_Ret@reddit
The 14900K can reach 253W (PL2) when hitting the 5.6 GHz max turbo frequency (as high as 300W at 6GHz, provided your chip will reach it). The TDP of the Core i9-14900K processor can also be defined as PL1, which is the effective long-term expected steady-state power consumption of the processor; it's far less than 220W. Selling anything else as truth is just an untruthful comment. These are settings that the user can absolutely control. I don't care what you buy, it's your money, but you shouldn't be told half-truths. That serves nobody well.
pianobench007@reddit
They need to show GPU power consumption. Those extra frames generate more GPU heat. Although I'll admit the Intel system will use more power overall and is CPU bottlenecked. X3D is the real winner here.
Oversemper@reddit
I knew the 14900K was gonna crush the 9800X3D in power consumption! Much more powerful!
TheMegaDriver2@reddit
I thought that my 12900K needs a lot of power. But in most everyday games it is below 100W most of the time. Even at high framerates it's like 130-140W. The 14900K just casually takes 100W more (yes, more performance, but man, the difference in power consumption is insane).
Pillokun@reddit
Nah, you run at more moderate settings vs the 14900K here in HUB's test. My 12700K at 1.4V with all restrictions gone will pull 200W even in WZ, but with lower voltages of 1.35-1.37V it gets down to 170-ish W even in the BF6 beta.
octatone@reddit
Perfect for heating your room in the winter! Suck it, AMD! /s
INITMalcanis@reddit
AMDiots will be sorry when they freeze this winter!
Firefox72@reddit
Not if my designated FX CPU heater has anything to say about it!!!
Blueberryburntpie@reddit
Make sure to also install a quad SLI of GTX 980 TIs.
roam3D@reddit
Seems i still have PTSD from troubleshooting these kind of systems, thank you!
kyleleblanc@reddit
Underrated comment. 🔥
HoldCtrlW@reddit
14900k also serves as a space heater. So if you think about it the clear winner is 14900k
Klorel@reddit
Damn, the delta is bigger than I thought
Gippy_@reddit
I mean, was there ever any doubt? The 14900K is just a super-overclocked 12900K from 2021 with extra E-cores stapled onto it. The 9800X3D from 2024 actually had "new" tech by putting the 3D V-Cache on the bottom, and it had a significant IPC increase over the 7800X3D.
AreYouAWiiizard@reddit
Well surprisingly there was for a lot of people since one of the first tests was this: https://i.imgur.com/UzDvXM8.png from Pcgameshardware and a lot of people parroted those results.
Falupa1@reddit
In what universe is the 9950X3D !2 times! as fast as a 9800X3D in games? That would be insane and people would shout it from the roofs...
AreYouAWiiizard@reddit
Yeah, it was pretty obvious their testing was flawed but it didn't stop a tonne of people reposting it.
dabocx@reddit
Those numbers make no sense, no clue why they published them in that state.
SherbertExisting3509@reddit
IPC doesn't always translate into gaming performance.
Gaming is a low-IPC, branch-intensive workload with poor cache locality.
Alder Lake's L3 performance isn't great and is likely closer to 86 cycles seen in Arrow Lake since both their L3 ring speeds are clocked at 3.8Ghz
L2 runs at the same speed as the core clock.
Intel decided to increase core private L2 so that more instructions/data can be kept closer to the core, which means that less miss traffic needs to be caught by the L3 before it reaches the main memory.
Increasing L2 from 1.25mb to 2mb likely didn't increase IPC across all workloads, but it likely improved gaming performance due to the unique needs of that particular workload.
Crucially, the L3 ring clocks were increased from 3.8GHz to 4.6GHz, which improved L3 latency and allowed it to better feed the core.
Core clocks were also increased from 5.3GHz in the 12900K to 5.7GHz in the 13900K.
jerrylzy@reddit
The 14900K is an overclocked 13900K, not 12900K. RPL actually improved in quite a few areas compared to ADL.
Gippy_@reddit
Yeah, the "improvement" of speedrunning CPU instability and reducing the used value of the CPU to zero
jerrylzy@reddit
Yeah, Intel pushed it way beyond safe limits to beat Zen 4, but I'm just saying it's not a simple overclock. It has IPC improvements and much bigger L2 & L3 cache size.
42LSx@reddit
What a useless comparison. Wow, a new CPU is faster than an old CPU? What an insight!
JonWood007@reddit
Not really surprising but good to see objective data given what certain "extreme overclockers" show.
My stock 12900k is maybe 10-15% slower than the 14900k in my own attempts which tracks because I run it at stock with xmp off. And while bf6 scales with cores it seems to max out around 20 threads so that's not gonna leverage the 14900k's entire potential.
Still one thing I will say is that power usage is a lot lower on my 12900k. Like peak usage on my system is around 150w with typical wattage being much lower. I really never see insane wattage for my 12900k outside of cinebench.
dragonpradoman@reddit
Ok but why the hell would you run xmp off, you’re just giving away free performance at that point. And you can get a lot out of a 12900k by just locking the cores at 5ghz you don’t even have to touch the voltage or anything
JonWood007@reddit
Because warranty. Also, it's not like i NEED more performance. And it adds more heat. I might turn it on on the tail end of the system's lifespan to get a little more performance but i just run stock.
dragonpradoman@reddit
Yeah, I kinda get it, but XMP doesn't necessarily void your warranty; if you just say you never turned it on they are unable to check and they will honor your return! But yeah, I get it, it's just that 4800MHz JEDEC can substantially drop performance in a lot of games, sometimes up to 15%!!!
JonWood007@reddit
Again I'm not exactly lacking frames so...off it stays.
SunnyCloudyRainy@reddit
VariousAd2179@reddit
That number is 6200 for AM5.
BrandHeck@reddit
Anyone know why the Intel chip is using 2 additional gigs of RAM throughout the benches?
fffffusername@reddit
Why are people so defensive of Intel here? Is it just stock owners?
bctoy@reddit
There's some truth to tuning the 13-14th gen from intel. I dunno about BF6, but with games like Stalker2/Oblivion which are based on the UE5 engine, disabling SMT helps. While you could do the same on 9800X3D, it being only 8-cores is not as ideal.
Then of course there's the power draw limiting the clocks for the Intel chips, while AMD is going full throttle all the time. You can easily see someone with these changes getting 10-20% improvements.
KARMAAACS@reddit
I'm a huge Intel fan (historically) and honestly their CPUs and the company is in shambles. Anyone who's an x86 customer should basically only buy AMD right now until Intel gets their act together.
But I do remember the 7800X3D people were exclaiming that the 13900K was a "bad buy" at the time because X3D beat it by 10-15% in games, meanwhile the AMD CPU got stomped in multi-core benchmarks by the 13900K.
There is no best CPU right now, until AMD makes a single CCD with 24 cores that's an X3D chip. Until then everything has its strengths and weaknesses. However, the 9950X3D might be the best all round chip, if you're willing to do re-boots and have a BIOS profile setup for gaming to make it basically work like a 9800X3D.
Geddagod@reddit
When the 13900k and 7800x3d comparisons first started, they appeared very similar in gaming on average. The gap was low single digits.
The 7950X3D was also a thing?
You really don't have to do all that. Even with the borked scheduler, the 9950X3D is still on average faster than the 285k/14900k, because many games actually do schedule the game on the right CCD, and in the cases they don't, the non V-cache CCD is still ~Intel's best.
KARMAAACS@reddit
After they fixed all the scheduler issues and Intel had to patch its issues, it was more like 8-10%; depending on the testing suite it could be as high as a 15% difference.
Yes, but required reboots to get the best gaming performance.
I never said the 14900K was faster than the 9950X3D in games. I simply said the 9950X3D was the best all round CPU if you're willing to do reboots and get the best performance in games or multi-core.
vandreulv@reddit
People hear "Intel is best", buy Intel, get defensive when it is not actually the best.
Brand name worship results in cult like behavior.
ItsMeeMariooo_o@reddit
Best at what? Gaming, obviously not. But intel is very competitive when it comes to productivity, especially when you consider the value proposition of it too.
Absolutely. And Reddit is the worst with this. Just look at r/buildapc... AMD is blindly recommended for any CPU setup with everyone assuming every person is building a PC with its sole purpose being gaming.
vandreulv@reddit
Mate.
When the overall difference between any high end, mid tier and low end CPU can be benchmarked within 5% of a similar, competing model...
Nobody gives a fuck what brand you are married to and why.
https://i.imgur.com/CNhpvmH.gif
ResponsibleJudge3172@reddit
That's why he is hella downvoted, right?
ResponsibleJudge3172@reddit
Sometimes people disagree with a reviewer. I know, crazy right? /s
JonWood007@reddit
Lol have you been here for more than a couple years?
Reddit is insufferable with DAE AMD GUD? circlejerking.
If the results were the opposite the amd crowd would be even worse. I would know because i remember the quad core 7700k spanking early ryzen's entire lineup in gaming and everyone was like BUT BUT MOAR COARS ARE THE FUTURE! And BUT BUT IF I TUNE THIS I CAN GET LIKE 2% BETTER PERFORMANCE THAN AN 8400 and THIS GAME ISNT OPTIMIZED FOR AMD and WELL IF YOU LOOK AT THE AVERAGE OF LIKE 30 GAMES (most of which were gpu bottlenecked or had better core usage) THE DIFFERENCE IS ONLY 13% or MUH 4K BENCHMARKS.
Seriously. It always baffles me to see people whine about intel owners not admitting x3d processors are the best. Like yes, we get it, they are. Sure you get some benchmarks showing extreme overclocking that reduces the difference. But I don't think people realize how obnoxious the pro amd crowd is on reddit. It really is.
X3d is the best okay. So what about a 7700x or 9700x? Or a 7900x or 9900x? Or a 7950x or 9950x? The gaming results aren't that different from Intel. BUT BUT I GET SIMILAR RESULTS WITH LESS WATTS. Maybe you do, but if the shoe was on the other foot would these people be pointing that out? Or would they just be doing the COMPETITION IS GOOD WHY DO YOU HATE COMPETITION INTEL BAD circlejerk the amd crowd always does on this site?
That said is the big concern "intel stock owners" or are we just sick and tired of the amd crowd on reddit being rabid and obnoxious yet again?
BioshockEnthusiast@reddit
I mean the Ryzen crowd was kinda right, 7th Gen can't even run win11
LLMprophet@reddit
Look at your own comment lmao
JonWood007@reddit
Hey someone has to call it like it is. Reddit has a massive double standard. Intel bad amd gud. It's the case when Intel is ahead, it's the case when they're behind. Sore losers AND sore winners.
porkusdorkus@reddit
It’s like Ford/Chevy. Competition is good but people get attached. The big-picture problem is that Intel was America's homegrown producer. AMD never made their own.
When Intel fails, we are reliant on Taiwan and China for all chips. It’s serious. Everything from satellites to drones relies on their silicon produced across the ocean and China is going to take Taiwan eventually.
If you look up the top shareholders for Intel and AMD, they are the same majority owners for the most part.
That’s partially why I still root for Intel, I’m hoping they get their shit together.
ConsistencyWelder@reddit
Wasn't that what Global Foundries used to be? AMD's own fabs?
ConsistencyWelder@reddit
This sub has always been more Intel fanboyish than r/intel.
Not sure exactly why, but yeah, probably some stock holders, but some have speculated that there are a lot of Intel employees here as well. Now probably ex-employees.
BNSoul@reddit
I found that using DDR5 6400 CL30 1:1 with tight timings and FCLK 2133 (on top of PBO and per-core curve optimizer) gives my 9800X3D a huge performance boost in this beta. Of course I'm aware you need to be lucky with your CPU to achieve a 6400 1:1 config so testing the way Hardware Unboxed did is more than fair for the vast majority of gamers. In terms of out of the box experience, the stock 9800X3D annihilates the Intel CPU both in performance and efficiency. Don't trust those YouTube channels that are trying to sell you stuff to help overclock your CPU since at the end of the day most samples are just not capable of extremely tuned configs.
bctoy@reddit
Yeah it was surprising that I could eke out another 10% in Oblivion with tightening timings on the 'sweet-spot' 6000MTs RAM. Probably would have waited for a 2-DIMMer if I knew RAM OC would help so much.
I see your vSOC is around 1.25V. That is what's keeping me from budging from the 6000MTs on my 64GB kit and of course that the mobo cannot do 8000MTs to compensate for the 1:2 UCLK drop.
jerrylzy@reddit
DDR5-8000 is even faster if your IMC & mobo support it.
Beefmytaco@reddit
Coming from what, 6000mhz and non-tight timings?
Either way 6400 CL30 is pretty darn good.
BNSoul@reddit
From 6000 CL30 EXPO to these settings (Zen Timings screenshot) https://i.imgur.com/CyaVEiD.png
Beefmytaco@reddit
Holy crap, you managed to get GDM off? Very nice!
Yea that one alone if you can manage is a good chunk of latency knocked off and is pretty hard to attain; I have yet to make it happen.
Timings look pretty good, but I think some primaries could be lowered a smidge more.
Ask the /overclocking sub if you can improve then, they're pretty good with memory timings.
BNSoul@reddit
thank you ! yeah some of the timings are not as tight as they could be but I was looking for 100% rock-solid stability (and peace of mind), there's a Discord server in Spain where overclockers discuss this stuff and when I asked for help I got the timings I ended up using.
Keulapaska@reddit
tWRWRSCL 6 and TWR 54?
Is this some super secret stability tech, or what is the reason for those over just 5/48? If it is, I guess I'll have to go try that, as my 7800X3D can't really do 6400 with normal timings at least, but I never thought of raising those...
BNSoul@reddit
yeah I was getting an error 3 or 4 hours into TestMem 5 and I was suggested to loosen a bunch of timings (such as the ones you're mentioning), now the system is 100% stable, maybe there's margin for improvement but I guess I just can't be bothered to tweak it further since I got y-cruncher 2.5B under 46 seconds.
greggm2000@reddit
What does that translate to in terms of % advantage over DDR5 6000 CL30?
BNSoul@reddit
In this game it's difficult to measure since every time you play it's different, but easily 7-10% faster 1% low fps. On the other hand, the gains in average fps are modest on 3D cache chips.
greggm2000@reddit
Thanks! Interesting!
Blueberryburntpie@reddit
Especially for those who never even go into the BIOS to enable XMP/EXPO, or bought a prebuilt desktop with locked down BIOS.
C0NIN@reddit
What's the newest, top offering CPU (excluding servers/workstations) from AMD to date?
0xdeadbeef64@reddit
In the test the 9800X3D was around 30% faster than the 14900K, but what struck me was the power usage: the 9800X3D around 90 W while the 14900K was around 210 W.
axtran@reddit
I have been trying to use a not so large case for my 14700K and it’s been a futile effort
BrightCandle@reddit
It's maddening that 220W is the usual long-term setting for Intel CPUs from the 13th and 14th gen. The actual losses from dropping it to 120W are really quite minimal and usually only show up when doing heavy multithreaded tasks. It's a lot easier and quieter to cool a 120W chip than a 220W one, and when you only lose 10% in 100% load circumstances it's worth the trade-off. These chips are killing themselves.
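For what it's worth, on Linux you can experiment with that kind of cap without touching the BIOS via the kernel's intel_rapl powercap interface; a minimal sketch (the zone path is an assumption that varies by system, needs root, and the setting doesn't survive a reboot):

```python
# Sketch: cap the long-term package power limit (PL1) at 120 W through the
# Linux intel_rapl powercap sysfs interface. Values are written in microwatts.
RAPL_PACKAGE_ZONE = "/sys/class/powercap/intel-rapl:0"  # assumed package-0 zone

def set_long_term_limit_watts(watts: int) -> None:
    # constraint_0 is normally the long-term (PL1) constraint for the zone.
    with open(f"{RAPL_PACKAGE_ZONE}/constraint_0_power_limit_uw", "w") as f:
        f.write(str(watts * 1_000_000))

if __name__ == "__main__":
    set_long_term_limit_watts(120)
```

On Windows the equivalent is the PL1/PL2 fields in the BIOS or Intel XTU.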
KetoSaiba@reddit
I'll add on to this.
Imagine if you're using an RTX 5080/5090 and a 14900K. You could actually be closing in on the upper limits of a 1000W PSU. That's wild.
Capable-Silver-7436@reddit
heck man i undervolt even my x3d chips for a reason.
Blueberryburntpie@reddit
In older houses that use 10 amp, 120-volt circuit breakers or daisy chain multiple rooms with a 15 amp circuit breaker, that doesn't leave much headroom for other stuff.
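For the arithmetic behind that (a rough sketch that ignores breaker derating and whatever else shares the circuit):

```latex
P_{max} = 10\,\mathrm{A} \times 120\,\mathrm{V} = 1200\,\mathrm{W}
\qquad\text{or}\qquad
P_{max} = 15\,\mathrm{A} \times 120\,\mathrm{V} = 1800\,\mathrm{W}
```

so a system pushing toward the limits of a 1000 W PSU, plus a monitor, already eats most of a shared 10 A circuit.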
imaginary_num6er@reddit
That’s why you pull out your microwave outlet and plug your PC instead to game
Blueberryburntpie@reddit
Funny thing is, my house's kitchen doesn't have a dedicated circuit for the microwave. It was built before electrical code required that.
I guess I'll just unplug the oven or the dryer machine and use an adapter...
SilentHuntah@reddit
My old folks' home from the 70s probably fits the bill. We can't run 1,000+ watt space heaters without the entire thing going pop. Thank goodness I did the majority of my gaming on laptops growing up.
lilotimz@reddit
Dedicated 15/20A circuit for the gaming PC like what the code requires for kitchen appliances or high powered appliances. :)
literally_me_ama@reddit
It's like giving a fat guy a monster energy drink before he goes on a run because you think it will make him faster. It might, but it is also just making the problem worse
JonWood007@reddit
My 12900k typically runs more like 125w in this game if I try to force a gpu bottleneck too. Insane to see raptor lake push that hard for like 10-20% more performance.
FinBenton@reddit
Yeah, that's what I do with my 14700K. I would prefer having AMD, but running this thing at 125W or something like that is a minimal drop, at least I can't notice any difference, and it's much more quiet and nice.
scrndude@reddit
If it fell 10% in performance wouldn’t it be about as fast as the prev Intel gen?
TGA0000@reddit
Corporate needs you to find the differences between this picture [13700K] and this picture [12900K].
They're the same picture.
jaydizzleforshizzle@reddit
Cause at the same time, if they turned around and did this, they would still be higher power and even less competitive.
ecktt@reddit
While the results speak for themselves, people are comparing chalk and cheese. The 9800X3D is a mid-core-count part that is basically optimized for games, while the 14900K has a lot of cores, most of which are never used. Intel receiving performance hate is akin to a WRC car being asked to be competitive in NASCAR. People forget that most of the PC market is not gaming. Google's AI suggests only 5% of the market is gaming focused, and I think that is being generous.
TheRudeMammoth@reddit
And you think people are buying 14900K for watching Youtube and opening Word documents?
It's not the best for gaming but we all know gaming is one of its main uses.
ItsMeeMariooo_o@reddit
Huh? The i9 is a good gaming CPU but its main thing is productivity. Reddit is so irrational when it comes to Intel.
jerrylzy@reddit
The 9950X is better at gaming and productivity. Who's being irrational here?
ecktt@reddit
Did the video use a 9950X?
And since you are going off topic, what about the value aspect?
Nobody is questioning the results of the video; just the point of comparing 2 products that were optimized for 2 different market segments.
jerrylzy@reddit
And I’m saying the 9950X is better in both productivity and gaming, thus rendering the “optimized for 2 different market segments” argument moot. And let’s be honest, most i9 users are gamers who want the best possible gaming performance.
ecktt@reddit
Then you missed the whole point and are arguing something completely different. Carry on.
jerrylzy@reddit
No I didn’t. There’s a reason why you’re heavily downvoted. Just take the L bro.
Not_OneOSRS@reddit
I’m not sure the 14900k is topping many productivity or efficiency charts either. What type of race are they actually meant to be winning in?
ItsMeeMariooo_o@reddit
C'mon now, the i9-14900k will be near the top of the charts for productivity while costing significantly less than a comparable ryzen cpu, and still being good at gaming in 2025. The i7-14700k is also an amazing value for a high end cpu.
jerrylzy@reddit
The 14900K is about the same price as the 9950X.
conquer69@reddit
But the subject being discussed is gaming.
Kyrond@reddit
These CPUs are very focused on gaming; the 14900K is already a WRC car. It is the reason why X3D exists; it mainly helps in games while it suffers in productivity. Desktop CPUs are about single-thread performance without regard for power.
Server CPUs are about performance per watt, laptop CPUs are about perf per watt, cheaper CPUs are about perf per $. The number of people who want powerful CPU, but not a server/HEDT and not Mac, is quite small. You can see it in sales.
Tyz_TwoCentz_HWE_Ret@reddit
the Intel i9-14900K is not a budget-friendly option. It is positioned by Intel as a high-end CPU, primarily aimed at gamers and professionals who require top-tier performance.
Eclipsed830@reddit
How do you think it would do against my 9950? (y)
Fine-Bandicoot1641@reddit
It gets obliterated by the 14900K and even more by the 285K.
Blueberryburntpie@reddit
Bulldozer: "Mmmm. First time?"
Darksider123@reddit
More cores, more powah
srona22@reddit
Intel can't fix their power usage.
F9-0021@reddit
They did. The 14900k is not a current generation part.
jerrylzy@reddit
so you want Steve to use an even slower CPU as comparison?
42LSx@reddit
What do we get from this video, that an old CPU is slower and less power-efficient than a new CPU? Shocker..
jerrylzy@reddit
The “newer” 285K is slower than the “older” 14900K. Shocker, I know.
42LSx@reddit
Yes, so what? Even better for the competition.
The first FX series or Willamette P4 were also slower than their predecessors; it's not like something like this hasn't happened before.
conquer69@reddit
Still behind even the 7800x3d in efficiency.
jerrylzy@reddit
tbf the 9800x3d is also behind the 7800x3d in efficiency
Anxious-Bottle7468@reddit
What's with the condescending "you don't understand why we don't bench in 1440p and 4k :)" shit?
No, I understand that 1080p makes the graphs more impressive and gets more clicks even though nobody is going to buy a 9800X3D and 5090 to play in 1080p.
"But this is a CPU benchmark" No it's a fucking game benchmark that's evaluating the entire system.
Derailed94@reddit
Also, despite what many people tend to believe, resolution has a literal CPU performance cost, and some settings in some games even scale with resolution, making that performance cost even more severe. Thinking that resolution can't potentially affect CPU performance is a very narrow-minded view to have, and if you have a GPU that is powerful enough to not run into bottlenecks at higher resolutions then there's no reason to NOT test higher resolutions, as the CPU load will generally be higher.
RealThanny@reddit
Increasing the resolution with the same aspect ratio does not increase CPU usage. It's conceivable for that to happen, if a game engine changed the way assets are drawn, or which are loaded, based on resolution (i.e. you could see this bush now at this resolution, so we'll actually render it). But no games actually do that.
In short, your contention is incorrect.
Derailed94@reddit
Wrong.
https://youtu.be/y51csslcGgk?t=198
DM_Me_Linux_Uptime@reddit
Wow, you've found one game that does this. You've completely and utterly destroyed the other guy's argument. 🙄
Luckily 99.9% of games don't scale their LOD's with resolution like Forza, and Battlefield games certainly never have, and since Battlefield is not Forza they don't need to test CPU's at 4K. 😎
bphase@reddit
Surely there is no game where resolution affects CPU performance more than it does GPU performance.
Hence lower resolution means CPU is more important than it is at higher resolution and thus lower resolution will better differentiate the CPUs.
Derailed94@reddit
That wasn't my point at all. It's not about resolution affecting the CPU more than it affects the GPU. My point is that in a scenario where the GPU has enough headroom that increasing the resolution doesn't end up in a GPU-related bottleneck, the CPU strain will increase, giving a better representation of absolute top-end performance.
bubblesort33@reddit
No it's not. It's a CPU test to check what the CPU can deliver, not a gaming PC benchmark. You needed to be told this because, by the looks of it, you didn't understand it.
innerfrei@reddit
Not the proper language for this sub.
b-maacc@reddit
The condescending remark is specifically for dullards such as yourself lol.
hardware-ModTeam@reddit
Thank you for your submission! Unfortunately, your submission has been removed for the following reason:
Pillokun@reddit
I do.. I only care about the highest possible fps. Sure, I have not owned a 5090, but several 6900/6950 XT, several 7900 XTX, two 4090s and two 9070 XTs, with all the gaming-suited CPUs since 2020: 3700X, 10700K, 11700K, 12700K, 12900K, 5800X3D, 13900KF, 7600, 7800X3D, another 13900KF, another 7800X3D and a 9800X3D, just to play at 1080p low, i.e. high refresh gaming with high refresh rate gaming monitors.
DM_Me_Linux_Uptime@reddit
Hope the next video explains it in caveman speak for you.
bphase@reddit
Because they've gone over it a million times and done multiple videos about it and people never learn. It gets old repeating oneself at some point.
caixote@reddit
Doing tests on 1080p promotes more CPU usage.
Yebi@reddit
Umm, no, it literally isn't
Delboyyyyy@reddit
Thanks for explaining in 3 paragraphs how you don’t understand why they don’t bench in 1440p and 4k. Maybe they need to be even more condescending to you? :)
BrotherAmazing6655@reddit
Comparing a CPU from 2024 against one from 2023...
InevitableSherbert36@reddit
Do you want Steve to use Intel's newer but slower 285K instead?
BrotherAmazing6655@reddit
Yes
usuddgdgdh@reddit
brainless
42LSx@reddit
"Brainless" is comparing old shit to new shit.
usuddgdgdh@reddit
this is literally how it's always been with X3D since it releases later; when Intel learns to make competitive products you won't have to complain as much
jedidude75@reddit
Just take the numbers from the 14900k and subtract 5%. There's your 285k benchmark.
nepnep1111@reddit
Too bad my 285K is destroying Steve's shit 14900K benchmark results. So that would be incorrect.
jedidude75@reddit
If you'd like to post your video showing your 285k vs 14900k fps comparison, I would love to see it.
nepnep1111@reddit
Iberian offensive at 1080p low https://youtu.be/B_C2DVFeBcg
Blessed-22@reddit
I have a 13900k that I got before I learned about the manufacturing/design faults. I've always believed Intel to be much better than AMD with less compatibility issues in regards to CPU intensive software and apps. Especially in games when I used to see AMD specific workarounds and mods to fix performance problems.
But all that said, I'm very close to switching to AMD these days. If the next gen of Intel is poor or my 13900 dies, I'm jumping ship
onewiththeabyss@reddit
What gave you that idea?
ItsMeeMariooo_o@reddit
History? I know everyone in these subs is super young, but us millennials know Intel held that crown for a very long time. And perceptions can be sticky.
onewiththeabyss@reddit
If we're going back in history then Intel have been beaten quite a few times by AMD alone.
timorous1234567890@reddit
How far do you want to go back? My 1st AMD system was a K6-2 which was okay compared to the P2 at the time, not quite as good but a lot cheaper. Before that I had a Pentium and before that a 486.
Athlon was much better than P3 and Intel really pushed P3 with the 1.13Ghz parts that needed a recall.
Athlon 64 was better than P4 for gaming but P4 was pretty good for productivity. The X2 vs P4D was another big AMD win. (P4D had to send data out to the northbridge on the motherboard and then back to the other CPU die, it was literally an on package 2S style setup).
Core 2 was a fantastic part though and Phenom was flawed. Phenom 2 was better but was still behind the Core 2 and Nehalem parts. Then with Sandy through to Skylake Intel had a large performance and efficiency lead since bulldozer was terrible.
If Intel had moved to 6c with the 7700K and then 8c with the 8700K I don't think Ryzen would have caught up anywhere near as well because a 6c 7700K would have easily had the gaming crown and productivity would have been a lot more comparable compared to 1st gen Ryzen. Then with an 8c 8700K Ryzen 2000 just loses at everything and who knows what that would have meant for Ryzen 3000 because AMD would have had less revenue in the early Zen years.
Through all of that time I have owned both makes (although more AMD) and neither have been prone to issues for my use cases. If the machine is setup well then it works well.
Soulspawn@reddit
I assume this is the last video on the matter; there is no better way to compare the two CPUs, and the outcome isn't even close. The 9800X3D won by over 30%; that's like winning the 100m sprint in 7 seconds while everyone else did it in 10.
Capable-Silver-7436@reddit
and the 14900k is still better at the game than the 285k... yikes
Blueberryburntpie@reddit
And at a fraction of the power. Put both CPUs in an airflow-choked Alienware desktop case and the 14900K won't have the power/thermal headroom to fully boost.
I bought one a few years ago when it was on sale and discovered that under heavy loads the CPU would throttle to base clock rates. Tech support said that is perfectly normal behavior. I returned the desktop for a refund.
ItsMeeMariooo_o@reddit
No shit. The i9 is a multi core workhorse that destroys the 9800x3d at things besides gaming.
AnechoidalChamber@reddit
The power usage was measured when gaming, not when doing productivity tasks.
ItsMeeMariooo_o@reddit
Another captain obvious statement. It's a silly comparison to specifically compare the i9 with an x3d chip for gaming only when the i9 is a multi purpose CPU.
It would be equally stupid to compare the x3d chip with the i9 for blender workloads only.
jerrylzy@reddit
What? It's completely reasonable to compare power consumption during gaming, especially when the Intel part is 30% slower.
Good_Season_1723@reddit
No it isn't, because the 14900K dwarfs the 9800X3D in productivity. Let me put it in words you'd understand: if HUB compared the 9800X3D to the 9950X / 9950X3D, you realize that the 9950X would draw a ton more power, right? Does that make the 9950X / 9950X3D dogshit?
AnechoidalChamber@reddit
If they're 30% slower while drawing more than twice the power... Yes, it would make them dogshit... for gaming.
Good_Season_1723@reddit
Doesn't that mean that if you care both about productivity and gaming, the 9800X3D is dogshit too?
AnechoidalChamber@reddit
Depends on your uses cases, which apps exactly you use, if they profit or not that much from multi-threading and what are your priorities, etc.
It's a case by case basis then.
Take me for example, I sometimes do productivity, but mostly apps that have crap multi threading optimization and I very rarely use well multi-threaded apps, also my priority was gaming longevity. Another priority was silence and minimal thermal dissipation in a very hot room in the summer.
So my best option more than a year ago before the 9000 series was released was the 7800X3D, so I chose it.
If my use cases and priorities would've been different, my choice would've been different and the 7800X3D might've been dogshit for me then.
See how that works?
Good_Season_1723@reddit
But your argument ends up with every CPU being potentially dogshit depending on the use case, in which case the word dogshit doesn't really mean much. Intel 11th gen, that was dogshit. The 14900K is definitely not dogshit. Not the best choice if all you do is gaming, but it's not crap.
jerrylzy@reddit
Yeah, every CPU can be dogshit for a specific use case, and the 14900k's are dogshit for gaming because it's 30% slower while consuming more than 2x the power. Glad we cleared that up.
AnechoidalChamber@reddit
Indeed, it means something crucial in the right context. As is the case here.
jerrylzy@reddit
It’s dogshit for gaming. What part do you not understand?
Good_Season_1723@reddit
It's the fastest non 3d cache cpu for gaming. Totally dogshit. Man you are a clown
GhostsinGlass@reddit
tl;dw The 9800X3D is much faster and much more efficient for playing BF6.
jerrylzy@reddit
My 14900K will destroy your 9800X3D with a simple tune. Trust me bro, wink wink.
Capable-Silver-7436@reddit
Man, I'm glad AMD made it so we don't need to buy the top-end thread-count chips for max gaming performance anymore. I don't need 32+ threads just for playing my games, nor the power usage that goes with it.
caedin8@reddit
Idk how these YouTubers aren’t going broke. This narrative hasn’t changed in years. The 14900k came out two years ago at this point?
We’ve known it loses for a long time. There’s been no new developments but they still gotta make content or they can’t pay their bills. Sucks.
Exist50@reddit
While I don't necessarily disagree, this seems to be in response to other outlets claiming very different results.
Jeep-Eep@reddit
Intel got bulldozed.
CarbonPhoenix96@reddit
Raptor lake makes bulldozer look like sandy bridge in terms of quality
MoreFeeYouS@reddit
Bulldozer had worse single-core performance than its predecessor Phenom. AMD also claimed it was an 8-core CPU and that its potential would be unlocked in the future. It never was.
dparks1234@reddit
Bulldozer was worse than anything Intel has done since 2010.
CarbonPhoenix96@reddit
Bulldozer just sucked, it didn't actively kill itself
dparks1234@reddit
Fair enough
jerrylzy@reddit
True. I'll give you that lol
Kionera@reddit
At least it was dirt cheap. As a poor college student I bought a FX-8320 + mobo + RAM all new for $120 and it ran all my games wonderfully back in the day, including BF4.
ExplodingFistz@reddit
Shocker
oakcliffn2acp@reddit
Spocker
jedidude75@reddit
Betty Crocker
Roxxas049@reddit
Crocker? I don't even know her.
Tokena@reddit
Knew her sister Ingrid. She preferred the 9800X3D over the 14900K.
SirMaster@reddit
I mean is anyone surprised??
The intel is 10nm and the amd is 4nm. And the amd is over a year newer.
Frexxia@reddit
No it's not, it's manufactured on Intel 7.
KARMAAACS@reddit
It's not really misleading. 10nm was rebranded to Intel 7, but that was only for Intel to 'save face' with the fact their nodes were slipping behind TSMC's cadence of nodes. At the end of the day the 10nm node was supposed to compete with TSMC 10nm, not TSMC 7nm. Intel pushed power and clock speeds to keep up with TSMC. If you look at the transistor density, it's really apparent that Intel 7 (~63 MTr/mm2) is nothing close to TSMC 7nm (90-115 MTr/mm2 depending on the 7nm product node type)
Frexxia@reddit
You're not comparing like for like. The density that's directly comparable to the TSMC number is 100+
https://en.wikichip.org/wiki/7_nm_lithography_process
KARMAAACS@reddit
Intel has never said which library they used of Intel 7 for Raptor Lake. In fact, Intel hasn't said what the transistor counts are for their Desktop CPUs for years now, ever since they transitioned to 10nm (Intel 7) which is suspicious because they used to tell you and brag about it. But as soon as they started having node trouble, that disappeared.
Anyways, I'm not quoting the wrong numbers. They could have used the SHP library which has a density of around 60 MTr/mm2, it says so in the source you provided that's the transistor density for the SHP library. How you can have such a vast density difference of almost 60% more transistors within the 'same node' family is bizarre, almost like it's a trick... They're getting that 100 MTr/mm2 number with smaller transistors that probably don't do most of the logic would be my guess. But the only confirmed thing is that we do not know which library was used and Intel's never clarified it or given quoted transistor counts, when they conveniently started having node troubles and needed to rebrand their nodes which is suspicious.
Vastly more ambitious? No. More ambitious? Yes.
Frexxia@reddit
You're not getting 100 MTr/mm2 for desktop CPUs on TSMC either. Just look at Zen 2 for instance.
KARMAAACS@reddit
Yep, it's almost like the transistor density as I inferred is a BS number picked as a best case scenario. The only difference is, AMD has been upfront and told us what node they're using and how many transistors their CCDs have because they can always throw TSMC under the bus for the node. Intel won't tell you what their transistor count is around the time when they started having node trouble. Gee I wonder why!
Geddagod@reddit
Intel 7 UHP hits ~63 MTr/mm2, but HD is still ~100 MTr/mm2.
Just because Intel used UHP libs for anything, because they wanted to push well above 5GHz for GLC (and improve iGPU yields for the desktop skus), doesn't mean the node is inherently undense.
Ofc cell area isn't the end all be all of density either, and IIRC Intel 7 had some weird routing/cell placement quirks that prob hurt density of actually implemented IP. But I doubt that makes it as bad as TSMC 10nm, from a TSMC 7nm competitor.
Intel 10nm was never supposed to compete with TSMC 10nm though. OG Intel 10nm also hit 100MTr /mm2.
KARMAAACS@reddit
So which library do you think Intel used for most of the Raptor Lake chips and what do you think is the transistor density?
Geddagod@reddit
Hence why in my above comment I mentioned this:
RPL used UHP cells sure, but let's also not pretend there isn't a marked improvement in Fmax over TSMC N7 AMD desktop chips either, they have what, a >20% ST boost?
But also, hard to blame the node when Intel generally has just achieved poor transistor density in their products, regardless of using internal vs external. Just check out their GPUs, for example.
KARMAAACS@reddit
So like me, you don't know the true transistor density. Thanks.
rumsbumsrums@reddit
And the 285K is even newer, in 3nm, yet slower. What matters is the end result.
KARMAAACS@reddit
True. But honestly, I feel like Arrow Lake was gimped in some way. I dunno what it is... if it's because Intel's using someone else's process for once, or if it's the core-to-core latency, or the memory controller, or even the fact it's a tiled/chiplet design and the fabric is too slow to keep up and not create bottlenecks, or maybe it's the fact there's no hyperthreading. Whatever it is, I just feel like Arrow Lake isn't really TSMC's fault, 3nm is obviously capable of higher clock speeds and better performance if AMD's beating Intel with an inferior node (which is agreeing with your point that yeah Intel is borking it even with a node advantage).
It's also clear Intel could make a really fast CPU if it was monolithic because they did make a faster chip in the 14900K and 13900K, especially in gaming versus Arrow Lake. There's just something about the Arrow Lake architecture that's bad in games and trash even with regards to clock speeds and multi-core performance. In the end, Intel's just in the gutter and it's not just a node problem as you rightly pointed out.
SherbertExisting3509@reddit
The reason why the Zen-5 part is 30% faster for gaming is because it has 96mb of L3 cache due to TSV stacking technology.
Gaming is a low IPC, branchy workload with poor cache locality. That means more cache improves performance dramatically as the CPU doesn't need to access the latency heavy system memory as often to retrieve information.
The base Zen-5 core has close to the same IPC as Lion Cove in gaming without 3d V cache.
Geddagod@reddit
Intel's 3nm parts aren't any closer to beating AMD's 4nm parts either
Kernoriordan@reddit
I run my 13700K undervolted (5.3Ghz @ 1.212v) and usually see power draw of less than 100W in game. I am however GPU limited by my RTX 3080 so a faster CPU wouldn’t get me much except better 1% lows (which aren’t an issue).
However I have found the performance to be great and the game makes use of most of my 24 threads!
FWIW I get 100-120FPS on 3440x1440 - High Settings Preset - DLSS Quality
Proof: https://imgur.com/a/X0c1qF2
fpspro97@reddit
same cpu here, tbh man I'm embarrassed about our cpu lol
LaffingGoat@reddit
Another mystifying benchmark from HUB. First of all, he's limiting the 14900k to 5.3GHz. I guess nobody noticed that. But how tf is he still pulling 200W when he's borking it? He's done it again. Every single benchmark he does, there's something about it that's incomprehensible.
Southern-Dig-5863@reddit
The reason for the large deltas in the HWUB video is because Steve is using Intel's default settings which significantly overvolts the CPU and causes it to downclock and have excessive power draw.
If Steve had used the motherboard tuned settings rather than Intel settings, the performance and power draw of the 14900K would have significantly improved.
You can tell by the 14900K clock speed at 5.3ghz. In gaming scenarios, the 14900K should be 5.7ghz all core.
I'm NOT SAYING Steve is at fault here, just explaining as an Intel Raptor Lake owner that a voltage tuned 14900K would have performed much better.
timorous1234567890@reddit
If the Intel default settings are bad that is an Intel issue that Intel need to fix.
cowoftheuniverse@reddit
Ye this was very disappointing to see. I think they have done this before too. Did they even mention or notice they are throttling at any point in the video?
Southern-Dig-5863@reddit
I'm pretty sure they didn't explicitly mention it in their video but admittedly I skipped through most of it. Heck, he may not even be aware himself and thinks that a 14900K at 5.3ghz is normal
cowoftheuniverse@reddit
The Testing Games channel, which usually runs stock, has their 14900K at 5.7 drawing 170W or so.
Do you know how the 13-14th gen chips usually behave at Joe Everyman settings? Is it as overvolted as HWU is here? They really should point this out if so.
Southern-Dig-5863@reddit
That's the thing with Intel though, there really isn't a "Joe Everyman" setting, which is why I was a bit lenient on HWUB. Technically speaking, there is nothing wrong with them using the Intel Extreme profile because that is what the manufacturer suggests. But this skews heavily towards stability and not performance and efficiency.
Anyone that knows anything about Raptor Lake knows NOT to use that setting, unless they want their CPU running hotter, slower and drawing more power.
The motherboard profile is just as accessible as the Intel Extreme profile but has far better performance and efficiency
dfv157@reddit
This is the same "motherboard optimized settings" Intel first blamed when RPL degradation first came into play? And then Intel told everyone to only use Intel defaults?
Southern-Dig-5863@reddit
The motherboard settings undervolt the CPU, but in the past they could also set the power limit to unlimited.
And undervolting a 14900K is not the same thing as what you are suggesting with the 9800X3D. That's overclocking.
dfv157@reddit
Clearly you have no idea what Curve Optimizer on AMD does if you think it's "overclocking"
Southern-Dig-5863@reddit
Wouldn't DDR5-6400 at 1:1 necessitate running the FCLK at 2133, which would be above the default?
I don't have an AMD CPU, but that sounds very different from underclocking a 14900K.
Regardless, it's not as though HWUB is averse to some level of tuning, as both their AMD and Intel rigs are using OC memory. The officially supported memory speed for both is DDR5-5600.
BlueGoliath@reddit
While that may be true, most people aren't going to go through the trouble of tuning their CPU's voltage. Testing it any other way than what's common among motherboard vendors (not even Intel spec, just commonly used settings) is misleading.
Southern-Dig-5863@reddit
While I agree that most 14900K owners likely wouldn't go through the hassle of tuning voltages (although there are big returns in performance, power draw and temps), just using the motherboard performance profile instead of the Intel profile would make a huge difference and would allow the CPU to hit its normal clock speed of 5.7 GHz instead of 5.3 GHz.
Ultimately though, this is Intel's screw-up and I can't blame the reviewers for using the manufacturer-recommended settings.
Roger_005@reddit
'Who'? Hardware Unboxed, you may really like processors, but there's no need to personify them.
960be6dde311@reddit
This video is so horribly dishonest. The real performance of Intel versus AMD is MUCH closer than this video portrays.
Dawid95@reddit
And your statement is based on what? Have you done any benchmarks on this matter?
Southern-Dig-5863@reddit
How about the fact that the 14900K isn't even running at its expected clock speed, which is supposed to be 5.7 GHz, whether through Steve's incompetence or by intent, because of the settings he chose in the BIOS.
Dawid95@reddit
The standard clock for the 14900K is 3.2GHz; 5.7GHz is the boost clock, which depends on many factors: https://www.intel.com/content/www/us/en/products/sku/236773/intel-core-i9-processor-14900k-36m-cache-up-to-6-00-ghz/specifications.html. Also, Steve mentioned he used the Extreme profile for Intel, which is not the default one and is more than the average user would set.
You can't even tell what the real clocks were for the Intel CPU from this video, because Steve didn't show clocks per core but for the whole CPU, which is lower because of the e-cores, so even there you are not correct.
Also, my question was about your statement that the real performance of Intel is closer than HU showed. Did you do any benchmarks like Steve to prove your words, or are you just talking shit?
Southern-Dig-5863@reddit
The all-core boost clock of 5.7 GHz should always be active during gaming workloads. If it's not, it's because the user didn't set it up properly.
The Intel Extreme profile is the default option for the 14900K. The other options are from the motherboard manufacturers.
MSI Afterburner reports clock speed from the performance cores, not the efficiency cores. I have a 14900KF at 5.8 GHz and it always uses the performance cores to measure the CPU clock speed.
Plus, using the Extreme profile on my CPU causes the cores to downclock due to being overvolted.
Also, the power draw is indicative of the Extreme profile's tendency to overvolt the CPU.
Here is a review from someone that actually knows how to setup hardware properly:
https://youtu.be/-5pyLavnlsY?si=Hin0AmM2cxj3dBFO
Dawid95@reddit
The benchmark you linked doesn't matter because each test was in a different match; it is completely invalid and can't be used to prove your point.
Changing the core or voltage settings is overclocking, and HU was showing the stock performance. If setting the Extreme profile in the BIOS causes the voltage to be higher, then that's the correct setting, because it's stock behaviour. AMD CPUs can also be tuned to gain extra performance. It would be interesting to see a proper benchmark with both CPUs overclocked and tuned, but you can't say the Intel CPU was incorrectly configured (when what you're really asking for is overclocking) in a benchmark intended to show stock performance.
Southern-Dig-5863@reddit
Stock performance with caveats, you mean. DDR5-7200 and DDR5-6000 aren't stock configurations for either Intel or AMD.
I'll say this again though: I don't blame Steve fully for this. It would have been nice for him to show some proper discernment and use a better setting than the Intel Extreme profile, but that would be asking too much of HWUB.
The blame falls mostly on Intel for overvolting these CPUs as a reaction to the instability from the Raptor Lake degradation fiasco.
Dawid95@reddit
It is asking too much, because you expect him to give the Intel CPU an unfair advantage by overclocking it in a benchmark meant to show stock performance.
bubblesort33@reddit
Dude says he manually modified the BIOS and locked the cores, which is overclocking. This is not the default behavior that 95% of users will experience. The default all-core boost for the 14900K is somewhere around 5.4 to 5.5GHz at stock. That is not stock behavior.
Southern-Dig-5863@reddit
You clearly have no idea what you are talking about
Good_Season_1723@reddit
Wrong, default behavior is actually 5.7GHz; 5.5 was for the 13900K. Unless you are limited by amps or power, a 14900K should be running 5.7 constantly in gaming.
Southern-Dig-5863@reddit
Don't tell the AMD droids and destroy their wet dreams LMAO!!
960be6dde311@reddit
I run a Ryzen 9950X but this is just blatantly dishonest. People love to hate on brands.
Illustrious_Bank2005@reddit
Intel users are mentally handicapped… but as expected, I can't defend Hardware Unboxed this time. The i9-14900K they tested was only clocking around 5.2 GHz. Even at its full performance it can't beat the X3D, but the i9 here clearly isn't able to show what it can actually do. I love AMD, but I'm also pissed off when a fair, clean, correct comparison isn't done. I like AMD, but I don't watch Hardware Unboxed videos; I don't like that bastard.
khensational@reddit
The 14900K settings are terrible. There's no denying the 9800X3D is faster out of the box due to the huge L3 cache, but Intel needs to be tuned, otherwise there's no point in getting an unlocked CPU and motherboard. For reference, a 14900K at 6.1GHz draws 225W in BF6, while HardwareUnboxed is using the Extreme profile and theirs is only at 5.3GHz yet pulling 201W. Also, Intel scales well with fast memory, so at DDR5-8000 it matches AMD's offering, but power draw is still double. I'm not quite sure if the E-cores are actually being utilized by this game; I've seen a 285K pulling similar power draw to the 14900K as well. The 9950X3D pulls about 160W in BF6, so I'm assuming some of the cores on the second CCD are being utilized.
bubblesort33@reddit
That's called overclocking, and reviewers don't use settings that 95% of users won't have by default. You can go watch an overclocking benchmark if you want, but this isn't that video. Maybe there will be one made by other channels, or this one if there is demand, but the point of the video is to test the out-of-box performance you can expect if you bought a pre-built and didn't do all the tuning. On top of that, most people with a pre-built PC are likely running worse than this, because most companies ship their systems with dogshit coolers that can't even cool that 400W thermonuclear disaster that is the 14900K.
khensational@reddit
You don't get the point, genius. HardwareUnboxed's Intel setting is bad. His 14900K is underclocked yet pulls power similar to an overclocked 14900K. Stock clocks for the 14900K are 5.6/4.4GHz. This is not a stock Intel, this is an underclocked one. Intel's power limit spec is 253W/400A, so it won't be a thermonuclear disaster pulling 400W.
bubblesort33@reddit
Intel has nerfed their CPUs over the years. They were killing themselves, so Intel had to issue BIOS revision after BIOS revision trying to fix the degrading-CPU issues. It's bad because they made a shit CPU that has been nerfed years after launch.
khensational@reddit
I agree. That's why I make sure to set my SoC voltage manually on my 9800X3D. I really can't trust ASRock anymore. I guess in this AMD sub people only know how to enable EXPO.
Southern-Dig-5863@reddit
Stop talking sense! I made the same point as well and I was of course down voted by the AMD droids that love HWUB slop and don't like anyone going against the prevailing narrative that the 9800X3D is the bestest CPU ever!
amazingspiderlesbian@reddit
What this video shows me is how much better the 1% lows are on Intel, relatively speaking. Even when the average difference was 30-40%, the 1% lows could differ by just 10-20%. Meaning the 14900K would be the more stable and smoother experience.
I experience the same thing with my 7800X3D. AMD is just a bit more stuttery than Intel.
crshbndct@reddit
Wow you are talking complete shit.
X3d parts have shown themselves to consistently be better at 1% scores.
amazingspiderlesbian@reddit
Bro, I'm talking about the video you're literally watching. The one the post is about.
The one that shows the 9800X3D has a much bigger difference between average and 1% lows than the 14900K does.
MdxBhmt@reddit
There's about a 1ms difference between the 1% lows and the mean frame time in both the Intel and AMD benchmarks. That ain't something you would notice on either system.
So just so we're clear, the frametime difference is 1.2ms for Intel and 1.1ms for AMD on the first benchmark at low settings.
May this be your friendly PSA that FPS is nonlinear, balloons at high fps, and should not be used this way to diagnose stutters.
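To make that nonlinearity concrete, a minimal sketch with illustrative numbers (not figures from the video): the same 40 fps gap costs very different amounts of frame time depending on the baseline.

```python
# FPS is the reciprocal of frame time, so equal FPS gaps are not equal
# frame-time gaps. Numbers below are illustrative only.
def ms(fps: float) -> float:
    return 1000.0 / fps

print(f"190 -> 150 fps: +{ms(150) - ms(190):.2f} ms per frame")  # ~1.40 ms
print(f" 80 ->  40 fps: +{ms(40) - ms(80):.2f} ms per frame")    # ~12.50 ms
```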
amazingspiderlesbian@reddit
I'm talking generally, not about this game specifically: about my 7800X3D having more random stutters.
But I used this game as an example to show that Intel's 1% low stability is stronger than the average performance would have you believe, since it's the topic of the post.
StoopidRoobutt@reddit
There's a pretty decent chance your stuttering is caused by something other than your CPU.
Got bloated windows installation? Got something that keeps polling hardware constantly? Unlocked frame rate?
Fast(er) CPU can, and will, expose you to new sources of stuttering. Something that was previously chugging along fine at a lower frame rate might get overwhelmed by a higher frame rate, causing a stuttering. Which is why unlocked frame rate is a bad idea, if you want smoothness.
amazingspiderlesbian@reddit
Nah. I've done a completely fresh Windows install. All my drivers are installed fresh after a DDU pass. I've got zero RGB software. I make sure to turn off all monitoring software, especially GPU power percent and voltage monitoring, because those can cause stuttering. I close every single program but the game. I don't have Discord, Chrome or anything running.
My system has a PCIe 4 NVMe drive as the main drive, and every single one of my other drives is also NVMe or SATA SSD. I'm running the latest chipset firmware for the 7800X3D and the most up-to-date BIOS as well.
And I'm running 64GB of DDR5-6000 CL30 tested for complete stability, with no undervolt on the CPU either.
And a 5090.
And the AMD CPU does just micro-stutter more often than my previous Intel one, with bigger random dips.
MdxBhmt@reddit
The 1% lows on AMD are higher than Intel's average. FPS cap your AMD system and you will eliminate any FPS jitter at a higher base rate.
It probably won't solve your issue though, because FPS jitter is not a stutter. 0.1% or 0.01% lows would be a better metric than the variance between mean FPS and 1% lows.
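For reference, this is roughly how 1% and 0.1% lows are derived from a frame-time log. Definitions vary between tools (this sketch averages the slowest frames), and the file name and column layout are assumptions, e.g. a CapFrameX/PresentMon-style export with one milliseconds value per row.

```python
# Sketch: compute average FPS and 1% / 0.1% lows from a frame-time log.
import numpy as np

frametimes_ms = np.loadtxt("frametimes.csv", delimiter=",", skiprows=1)

def low_fps(ft_ms: np.ndarray, pct: float) -> float:
    cutoff = np.percentile(ft_ms, 100 - pct)      # e.g. 99th percentile for 1%
    return 1000.0 / ft_ms[ft_ms >= cutoff].mean() # mean of the slowest frames

print(f"avg: {1000.0 / frametimes_ms.mean():.1f} fps")
print(f"1% low: {low_fps(frametimes_ms, 1):.1f} fps")
print(f"0.1% low: {low_fps(frametimes_ms, 0.1):.1f} fps")
```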
amazingspiderlesbian@reddit
My god, dummy. It's not the absolute number for the 1% lows, it's their proportion relative to the average.
The 9800x3d has at points in the video 191fps in average but only 126fps in the 1% lows
While the 14900K has only a 154 fps average but 120fps in the 1% lows.
The 14900k system would feel much more consistent and stable than the 9800x3d system.
Who cares about an extra 40fps average when it stutters down to just a few fps more than the 14900k 1% of the time.
Yes you can lock your fps to the 1% range to reduce the fluctuations but then you'll only be like 10% faster than the intel cpu. And the intel cpu can easily match or exceed that with a tiny amount of tuning unlike the average delta
MdxBhmt@reddit
Again, this is not how anything like this works. You don't know what you are talking about and you refuse to understand because you are deluding yourself.
If this was the source of your issue, FPS cap your system and enjoy the extra 6 FPS. This ain't a stutter, it's FPS jitter. Something completely different, looks different, feels different.
MdxBhmt@reddit
You talked about this benchmark in particular. Get your story straight or your bullshit together.
If anything, 1% lows are the better indicator for stutters, and Intel has way lower 1% lows than AMD. Again, if you have stutters, this is NOT the evidence you are looking for, confirmation bias aside.
FlimsySlide6281@reddit
In my house we've got a 14900K, a 9800X3D and a 7800X3D.
All of the systems have 32 gigs of RAM, with a 4070 Ti, 4080 and 4080 Super respectively.
Of the 3 systems, yes, the 14900K is the slowest most of the time, but only by like 5-10% in games.
But the number of times I've had to tinker with the AMD systems to get a game running right, or sometimes even to install the game without hiccups, still makes me say the Intel is the better system here.
I will take the performance hit, and the heat of course, from the Intel any day. I don't run into mysterious problems with that system. But I will say AMD has come a long way since I last owned an AMD system of my own: an Athlon Thunderbird 900MHz ;)
Just my personal experience
Brush_bandicoot@reddit
As someone who owns and uses a 14900KF, I can tell you it's a beast of a CPU, but most game developers don't know how to handle the power spikes. I ended up capping the power limit to 180W so it runs properly in games. It's nice for benchmarks, but in practical use you would probably need to cap it to actually use it. Also, most developers don't really utilize all of the threads and rely on a single core.
Kernoriordan@reddit
As a 13700K owner, I can tell you that BF6 makes use of multi-thread really well
Brush_bandicoot@reddit
did you try playing any Unreal Engine games with it, mate? Didn't get any crashes / out-of-memory errors etc.?
jerrylzy@reddit
If you have out of memory errors, you need to RMA your CPU ASAP, before Intel stops production of Raptor Lake parts.
Dangerous_Jacket_129@reddit
Why would your CPU cause out-of-memory issues?
jerrylzy@reddit
It's a typical symptom of raptor lake degradation.
ghaginn@reddit
I really wish Intel hadn't cancelled the "Royal Core" project, which would have featured very large cores with very wide decoders and "rentable units" for use by programs. We'd potentially have seen insane single-core performance.
greggm2000@reddit
Potentially, but one can’t benchmark hypothetical CPUs.
throwawayerectpenis@reddit
Framechasers in shambles.
SunnyCloudyRainy@reddit
When has he ever not been in shambles
throwawayerectpenis@reddit
It's funny, yesterday on YouTube some dude was promoting the guy as proof that Intel CPUs somehow run games smoother than "stuttery" X3D chips 😭
Blueberryburntpie@reddit
One of the Youtube comments suggested HUB should pay Framechasers for a "tuned" system and then benchmark it.
I've also read that Framechasers sometimes sells an overclocked config that turns out to be unstable, so that would be awkward if it came up in said benchmarking.
"This game benchmark run was a Did Not Finish because it kept crashing on the overclocked setting, so we'll mark it as 0 FPS."
NoireResteem@reddit
So basically the 9800X3D is the king of gaming and power efficiency, and it's not even close. We really need Intel to step up and be more competitive, because AMD will eventually go down the same path as pre-Ryzen Intel if there is no competition. As much as I love AMD chips right now, I don't want them to become complacent.
the_dude_that_faps@reddit
There is already no competition. Intel is surviving thanks to OEMs on laptops. That's it.
PsychologicalGlass47@reddit
Would matter a lot with how abhorrently optimized the game is.
CandiMan8@reddit
What? The game is optimised extremely well. It’s one of the most well optimised AAA games that has released recently. Why do you say it isn’t?
PsychologicalGlass47@reddit
You're joking, right?
CandiMan8@reddit
No? Have you even played it? There are so many benchmarks online showing how well it scales to older specs. A game like the new Mafia is significantly worse - that’s an example of an unoptimised game.
PsychologicalGlass47@reddit
I have, between the closed testing and 1st weekend the game was consistently drawing less than half of my rendered frames.
Granted, Hangar 13 is also quite underskilled for modern development. As opposed to bad devs in the case of Mafia, Battlefield has a track record of bad optimization in the form of their Frostbite engine. Everything from BF1 onwards has a disgusting CPU dependency.
CandiMan8@reddit
CPU dependency is a bit high sure, but I would take that over Slop Engine 5 any day. I don’t recall getting performance this comparable in any other modern release. Very few stutters, no shader compilation related issues, I tested the game on 1440p and 4k. Felt smooth as butter. Considering the object density, geometry and detail in most of the maps I am impressed.
PsychologicalGlass47@reddit
I wouldn't. Most games made by well-reputed and quite skilled developers, such as Ark Survival and Fortnite, are very well optimized and, in the case of ASA, can be some of the best looking games on the market. The only issue is that even my P6k can only run ASA at a modest FPS.
Stutters weren't at all an issue, which was nice compared to BF1 and BF5. In my case at 4K, my GPU was outputting 220fps, though my CPU (9950X3D) was choking that down to a modest 70-110. It was smooth, but I wouldn't at all call it well optimized if all 8 workers dedicated to the game can barely push over 100 between draw calls and compilation.
As said before, I'm not. A large majority of Call of Duty games have quite similar graphical fidelity and are more than capable of running at 4K/240. This game is somehow choked down to half of that on the same rig.
CandiMan8@reddit
You must be rage baiting now if you’re saying Ark Survival is well optimised. I’m shocked that you’re having issues on a 9950x3D. Sounds like an issue with your rig and not the game.
PsychologicalGlass47@reddit
It most definitely is optimized, so much so that with a more limited graphics option I can get 50% better performance than BF6.
The Pro 6000 is better than a 5090 to begin with; the fact that I'm getting equivalent FPS to your build with a GPU that's almost exactly twice as fast as a 5080 is absurd, especially as we're both running a similar CPU. If you were to run 4K native you would run into the same drop in performance to around ~70 fps. Neither your 5080 nor my P6k is being used to any meaningful degree, with my GPU only hitting about 15% utilization. I'd imagine yours isn't going a smidgen over 40%.
CandiMan8@reddit
Something's wrong with your system mate. I just booted the game up to test for you and my GPU is running at ~98% utilisation according to RivaTuner/MSI Afterburner at native 4K, max settings, no AA. Running between 75-90fps. I'm playing on a VRR display, so I don't feel any slowdowns or lag.
PsychologicalGlass47@reddit
Such as what? You've just said that you're hitting 100fps at 4K + DLSS; if native 4K max is "running at ~98% utilisation", then what is your CPU doing?
CandiMan8@reddit
I'm so confused? My performance is fine. So is a lot of other people's. It's an objectively well-performing game. My CPU is at 35-40% use. Just because the game doesn't run well on your non-gaming setup, what conclusions are people meant to draw from that? The game runs fine, my guy.
PsychologicalGlass47@reddit
Your performance may be fine for your build, but mine is not. Our CPUs are about as equal as can possibly be in terms of performance, yet my GPU is twice as powerful as yours and I still get performance equivalent to yours.
It's... A gaming setup. Go ahead and try to gaslight me about what the most powerful consumer-oriented GPU on the market can do, it's almost a perfect 100% uplift over every aspect of the 5080.
As said, the game barely reaches 20% gpu utilization. Are you saying that it runs as fine as Escape From Tarkov, a game that runs equally as bad?
CandiMan8@reddit
You have the same performance because you’re CPU limited like most people will be, including me. It doesn’t matter if you have 4 times the GPU performance. That’s not indicative of an unoptimised game, you’d run into the same issue in plenty of others that are CPU bound.
And no, Tarkov runs like arse. I haven't actually tried it on my system yet, but I know it'll still run worse than it should. The difference with Tarkov is that it doesn't scale down to weaker systems as well as BF6 does, from what I've seen and from personal experience.
PsychologicalGlass47@reddit
Once again, being CPU limited is widely considered to be the result of poor optimization. When the best gaming CPU on the market is hitching during static gameplay, there's a major problem.
CPU hitching isn't indicative of unoptimized games? Funny, because it's quite literally a symptom of poor scheduling... From shader compilation in BF1/BF5 causing massive stutters, to games such as EFT and Ark Evolved (pre-UE 4.27), there have been major CPU bottlenecks due to poor engine/game optimization.
Yeah, Tarkov runs exactly the same as BF6. Both games max out their tasked workers and both reach a staggeringly low 70-100fps at maximum. Both run poorly, both are constricted by the same issue.
CandiMan8@reddit
I have no hitches. Like you said, our CPUs are similar. I can’t account for the problems you are experiencing. Most of the videos I’ve watched of people testing the game also don’t have that issue. You seem to be the exception mate.
PsychologicalGlass47@reddit
Neither do I, as already said.
No shit you can't, though there seems to be a divide in the fact that you acknowledge you can't account for the issue as your GPU isn't as strong, though you're more than happy to dictate what the problem is.
madn3ss795@reddit
Something's very wrong with your system. My Ryzen 7700 with a mild undervolt runs BF6 at 150+ FPS just fine.
PsychologicalGlass47@reddit
Unsure of what it would be, seeing as any other game runs like butter.
madn3ss795@reddit
Maybe try parking the game to a single CCD?
PsychologicalGlass47@reddit
I already have, I'm running workers 0-7 with multithreading disabled on my primary CCD. There are no other applications running on my primary cores to begin with, all cores unparked, with strict affinity rules.
All of my cores are pinned at 100% utilization to begin with, yet my fps is still sitting in the double digits.
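For anyone wanting to experiment with the same kind of pinning, here is a minimal psutil sketch. The process name and the logical-CPU mapping are assumptions: with SMT enabled, the even logical CPUs usually correspond to the first thread of each physical core on CCD0, so pinning to them approximates "workers 0-7, no SMT siblings".

```python
# Sketch: restrict a running game process to CCD0's primary threads.
# Requires `pip install psutil`; run elevated so the affinity call succeeds.
import psutil

TARGET = "bf6.exe"                      # hypothetical executable name
CCD0_THREADS = list(range(0, 16, 2))    # logical CPUs 0,2,...,14 (assumed layout)

for proc in psutil.process_iter(["name"]):
    if (proc.info["name"] or "").lower() == TARGET:
        proc.cpu_affinity(CCD0_THREADS)  # pin scheduling to these CPUs
        print(f"pinned PID {proc.pid} to {CCD0_THREADS}")
```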
Eclipsed830@reddit
I'm having no problems on ultra at 4k 160hz with my 3080 + 9950x3d.
Grena567@reddit
3080 aint getting 160fps on 4k ultra in BF6
Eclipsed830@reddit
It dips into the 140s on the NYC level, but otherwise it'll hold 160 for most of the game (using my monitor's FPS counter).
Breadsticks4848@reddit
I have a 3080, it barely gets 70 fps.
Eclipsed830@reddit
Maybe it's my dlss setting? But I have no problem getting 160.
Breadsticks4848@reddit
You're not playing on ultra.
I'm running 4K, all high. It hovered 60-70.
I only took note of it because the gameplay felt similar to BF3 and 4 but looked way less smooth.
I'm used to hovering 140-150 in BF4 maxed out settings at 4K. Not in BF6 though.
You're doing something to lower your graphics fidelity, but it's not at 4K ultra.
Eclipsed830@reddit
I just checked... I am on ultra, but with balanced DLSS, no resolution scaling.
madn3ss795@reddit
4K with balanced DLSS is 1253p.
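The arithmetic behind that, using NVIDIA's published DLSS scale factors (Quality ≈ 0.667, Balanced ≈ 0.58, Performance = 0.50 per axis):

```python
# Internal render resolution at a 3840x2160 output for each DLSS preset.
for mode, scale in [("Quality", 0.667), ("Balanced", 0.58), ("Performance", 0.50)]:
    w, h = round(3840 * scale), round(2160 * scale)
    print(f"4K {mode}: {w}x{h}")
# Balanced -> 2227x1253, i.e. the "1253p" above.
```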
Eclipsed830@reddit
Yup, that would probably explain it then. lol
Eclipsed830@reddit
Actually strange... because I was curious and loaded up BF4 and was only getting around 90-110 fps on 4k ultra. Must be DLSS.
greggm2000@reddit
Probably they have DLSS Performance set as well.
Healthy_BrAd6254@reddit
What a waste. You'd get the same fps with a Ryzen 5 5600 if you're playing at 4k ultra with a 3080
porkusdorkus@reddit
The first question I would ask is what Battlefield 6's memory management is like. Just curious, as more of a software guy than a hardware guy.
WarEagleGo@reddit
Intel i9 1400k -- 228 FPS
AMD 8700X3D -- 321 FPS
Kyubi-sama@reddit
8700x3d huh? Never heard of it, so they released the APUs with X3D cache? I need to snag one then
LuminanceGayming@reddit
lol they corrected the 8700X3D but not the 1400K
meshreplacer@reddit
Apple saw the writing on the wall for Intel and decided it was better to jump ship. That benchmark pretty much says it all.
The fall of Intel started with Pentium 4.
Pillokun@reddit
"There are others that claim" is a burn meant to be directed at Frame Chasers, no doubt, and rightfully so. But even my tuned 12700K is very close to my tuned 7800X3D, with the AM5 system using a more power-hungry version of the 9070 XT, at 1080p low with FSR AA, i.e. native.
But Z690 4-DIMM boards usually aren't the greatest memory overclockers, so it has nothing to do with the CPU when it comes to the Z690 Maximus Hero. My 12700K on a crappier 6-layer 4-DIMM ATX board would only go to 6800, and on an ITX board the CPU will run 7800, but it takes fiddling to make it stable, as games might crash from time to time, so I went down to 7600 C34.
But holy moly, only the first map shows much better fps at those settings on the 9800X3D and 5090 compared to my tuned 12700K; the other maps are in line with what the 12700K gets, and the 7800X3D with tuned RAM is actually often better in the other maps than the 9800X3D/5090 combo. So guys, just tune the system a bit if you don't have anything against a day of fiddling.
Tuning is your friend, and yes, different uarchs react differently to it: for some it doesn't move performance that much, while others like Intel gain a lot, because they lack the huge cache and have a better memory subsystem with two DDR5 IMCs (and one DDR4 ;p). So yeah, some uarchs can close down a perf gap and others basically just get a minor perf increase. We've known this since Zen 2 vs Intel.
greggm2000@reddit
Kinda thinking the DDR4 on my 12700K is really holding me back.
jasmansky@reddit
TLDW, the graphics settings are low to highlight the CPU differences?
Single-Ad-3354@reddit
Why make a whole video about a question everyone already knows the answer to? Should be titled “how much faster is the 9800x3d in BF6”
StarskyNHutch862@reddit
There's still a lot of die hard intel fans that can't admit defeat.
ghostsilver@reddit
because they are making fun of the i9 owners who claim their "custom OC, custom parameters abcxyz 14900K" is unbeatable.
Reminds me of the old days on /r/Amd when Vega owners were like "well my Vega beats the 1080 Ti easily" while conveniently ignoring their WC setup and off-the-chart OC that was maybe stable for a quick benchmark and consumed double the power of the 1080 Ti.
a5ehren@reddit
Wonder what the same bench looks like with 5800 and 7800. Throw in Intel 12900/13900/265 for fun
battler624@reddit
Just like what I said before, PCGH made a mistake before.
RunForYourTools@reddit
Why use the 14900K? It's past generation... oh wait, the 285K is... crap LOL
PG705@reddit
My 14900KS with HT off and only 8 ecores with DDR5 at 8000C38 runs BF6 absolutely great. Memory tuning is your friend here.
Livy__Of__Rome@reddit
I hope Intel can get back in the game soon... Both for my stock value and the healthy competitive environment for the consumer.
Every sane person alive should want this. But wants and hopes are not enough, unfortunately.
Blueberryburntpie@reddit
TLDW: In some maps, the 14900K's average FPS is about the same as the 9800X3D's 1% lows FPS.