Ryzen 7 5800X3D vs. 9800X3D, Battlefield 6 Open Beta Benchmark
Posted by AccomplishedRip4871@reddit | hardware | 198 comments
The 9800X3D is 37% faster than the 265K at the 1080p Ultra preset, and 1% lows are 26% higher - tests were made in an online match.
The 9800X3D is also noticeably faster than the Intel 265K even at the 1440p Ultra preset; only at the 4K Ultra preset (no upscaling) does the game become GPU-limited.
As for 9800X3D vs 5800X3D, the difference is roughly the same as against Intel's 265K.
KaczkaJebaczka@reddit
The most infuriating thing was reading all those comments taking PCGH's testing so seriously. They were absolutely flaming how bad the 5800X3D is and how it's getting smoked by old Intels… literally framing the 5800X3D as a disaster CPU…
JonWood007@reddit
PCGH's results seemed really... off compared to other results I've seen.
Deathwatch72@reddit
It's really off compared to any other result using basically any other test. Honestly, his numbers were all over the place; something must have gone very wrong with his testing.
Z3r0sama2017@reddit
Maybe the owner of the benchmarking site that shall not be named has had a new acquisition?
QueenCobra91@reddit
I've got the 5800X and the game ran smooth as butter.
Vushivushi@reddit
Anecdotally, my 5800X3D has no problems driving my 9070XT on BF6 and I'm seeing similar results to HWUB as well as TPU's GPU results for the 9070XT.
autumn-morning-2085@reddit
This whole saga is bonkers, what are people really expecting or wanting here? If FPS dropped to the same level (across CPUs) in intensive scenarios, that would point to network code or a GPU bottleneck.
You know, an optimisation issue that's preventing the use of extra CPU power. Well, the video shows that issue doesn't appear - it's just a well-optimized title that will run fine on most hardware.
bubblesort33@reddit
I'm not sure network code would affect frame rate.
Personally I think it would be nice if they could build in some benchmark that simulates other players and action in some way. Simulating 64 players' controller inputs really isn't much CPU overhead - an Arduino would have many times the processing power needed to execute those inputs. No AI controlling those players, just a predefined list of inputs executed in order. But maybe that's harder than it sounds for some other reasons. I mean, Call of Duty has a benchmark.
They have an option to build custom maps and game modes. Maybe someone can figure something out.
Deathwatch72@reddit
The problem with 64 player sim is that you'd have to have 64 accounts and computers and a custom server with them all on it at once with networking cables running everywhere connecting them.
bubblesort33@reddit
I'm just talking about simulating the client, though, and what it sees and calculates. Your client is still getting all those inputs sent to it - players and buildings being updated, etc. I feel like there should be some way to simulate that.
Deathwatch72@reddit
Simulating the client takes resources, which could impact the way the actual game itself runs. Particularly if you're going to try and simulate 64 people on a single computer that's also running the client, it's not going to be indicative of real-world performance.
bubblesort33@reddit
You're already creating the animations and movement of those 64 people. You see their animations, movement, and bullets on your client in a live game, and all that stuff is run locally.
Simulating the input, like I said, needs less CPU than even an Arduino has. I've programmed enough to know that within 0.01 seconds you can likely simulate the mouse and keyboard inputs of a few thousand players. You're not simulating their AI; you're going through a repeatable list of saved actions. Even decade-old CPUs can iterate through millions of loop iterations per second.
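Roughly the kind of thing I mean - a toy Python sketch, nothing from the actual engine, with a made-up tick rate and action names:

```python
import time

# Pre-recorded, deterministic input log: (tick, player_id, action).
# In a real benchmark this would be captured once and replayed identically every run.
INPUT_LOG = [
    (0, 7, "move_forward"),
    (0, 12, "fire"),
    (1, 7, "strafe_left"),
    # ... thousands more entries
]

TICK_RATE = 60  # simulated ticks per second (assumed)

def replay(log, apply_input):
    """Feed the saved inputs to the client at the right ticks."""
    current_tick = 0
    i = 0
    start = time.perf_counter()
    while i < len(log):
        # Dispatch every input scheduled for this tick.
        while i < len(log) and log[i][0] == current_tick:
            _, player_id, action = log[i]
            apply_input(player_id, action)
            i += 1
        current_tick += 1
        # Sleep until the next tick boundary so timing stays deterministic.
        time.sleep(max(0.0, start + current_tick / TICK_RATE - time.perf_counter()))

# replay(INPUT_LOG, apply_input=lambda pid, act: None)  # plug in the client's input handler
```

The point being: stepping through a fixed log is basically free; the expensive part is whatever the engine then does with each input.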
AccomplishedRip4871@reddit (OP)
There's no saga; one website (PCGH) simply benchmarked CPUs inaccurately, multiple redditors (insane, judging by their attitude) trusted them and blamed Hardware Unboxed, in a pretty rude way, for "inaccurate" testing, and Hardware Unboxed corrected that website by doing proper testing - that's it.
TLDR: the 9800X3D is 35-40% faster than the 265K at 1080p/1440p; only at 4K (2160p) does this game become GPU-limited, end of story.
LoloTheWarPigeon@reddit
I saw that comment and was annoyed by it, but didn't realize it would lead to a whole saga. Is that redditor's account gone?
yeshitsbond@reddit
In other words, as GPUs become stronger, eventually the 9800X3D will also be 30-40% faster than the 265K at 4K.
autumn-morning-2085@reddit
Oh I agree, the saga is just the reddit drama. This has been a long-running hobby of PC "enthusiasts": trying to find the right conditions where the CPU isn't the bottleneck, as a means of defending their purchase or team of choice.
What about 4K? Untestable multiplayer conditions? What if I don't even need such high frame rates? What if I tune the RAM to where the PC doesn't even boot half the time or crashes constantly? What if I fry the CPU with crazy overclocks? What if I win the silicon lottery, huh, what then?
SmilesTheJawa@reddit
Kind of baffling that it's 2025 and we still have so many Redditors on this subreddit who willfully refuse to understand the absolute basics of benchmarking and the importance of repeatability in test runs.
YakaAvatar@reddit
But don't you understand? I'd rather have benchmarks that are completely useless for making any sort of CPU comparison due to wildly different scenarios, instead of consistent and accurate comparisons with a slightly lower CPU load !1!!
It would make my head hurt to subtract 20-30% of the frames in very CPU-intensive situations and make an informed decision based on that :(
SJGucky@reddit
As an example, I had a 5800X3D and a reliable PC site tested a certain UE5 game and got 81 FPS on average with that CPU.
But a few patches later I had repeated scenarios where it went below 40 FPS.
So 20-30% might not be enough...
The 9800X3D that I got last year bumped the FPS in that scenario to ~75, which is still a drop from ~95 FPS. So the 9800X3D gave me an 80-100% uplift in that scenario.
That said, you need a powerful GPU like a 4090 to actually hit that.
Since I want to upgrade to a 6080 or 6090, I will be upgrading again from the 9800X3D to a 10800X3D.
But I will either skip AM6 completely or upgrade to the last CPU before AM7. That depends if AM7 goes DDR7. I don't buy first gen memory.
jerrylzy@reddit
It baffles me why some people thought the 9800X3D would lead by a significantly smaller margin when it was already 30%+ faster with allegedly less CPU load.
timorous1234567890@reddit
It is not a common occurrence, but there are cases where the bottleneck in an empty street may be different from the bottleneck in a firefight, and that can lead to one scenario favouring one part and the other scenario favouring a different part.
It is not something you would typically expect though, especially if the empty-street scenario is truly CPU-bound, which we know it was because of the performance differences between different CPUs.
0xdeadbeef64@reddit
You see a similar lack of understanding in the game-test sections of CPU reviews: "Why don't you test CPUs in games at 4K? Who cares about 1080p? I don't play at that resolution!"
Zerasad@reddit
I think that, no matter what, both 1080p and 4K testing have their place. They measure different things, and both are good and informative if you are looking to buy a CPU.
jaju123@reddit
1080p and 4K resolution testing does not measure 'different things'. Why is this getting upvoted?
With a CPU, the theoretical maximum performance is what matters, and a GPU-bound benchmark will tell you nothing about how many FPS the CPU could possibly deliver.
If your CPU can deliver 250 fps in a game at 1080p, then it will deliver very close to that at 4K (probably 230-240). If your GPU only gets you 130 fps at 4K, that tells you you have some headroom on the CPU side. Testing at 4K will not tell you that you have any headroom, because your CPU will be half asleep.
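The mental model is basically a min(). A rough sketch using the numbers above (the 250/130 figures are just the example, not measurements):

```python
def effective_fps(cpu_cap, gpu_cap):
    # Whichever component is slower sets the frame rate
    # (first-order approximation; ignores driver overhead and other corner cases).
    return min(cpu_cap, gpu_cap)

cpu_cap = 250     # fps the CPU can feed at 1080p (roughly resolution-independent)
gpu_cap_4k = 130  # fps the GPU manages at 4K

print(effective_fps(cpu_cap, gpu_cap_4k))  # 130 -> GPU-bound today
print(cpu_cap - gpu_cap_4k)                # 120 fps of CPU headroom for future GPUs
```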
Zerasad@reddit
If I play at 4K and want to know whether I need to upgrade my CPU or not, I won't garner any information from 1080p benchmarks. That's where testing at 4K comes in handy. Yeah, I could try to piece it together from multiple CPU and GPU benchmarks made across multiple months, driver versions and game suites. Or I could look at the one CPU benchmark and see if there is a noticeable difference at 4K between my CPU and the latest and greatest.
Yes, if you want to compare two CPUs and see which one is more powerful, 1080p is the way to go. But if you are looking for "real world performance" for your purchasing decisions and you have a high-end card and a 4K monitor, you won't get a real answer from 1080p benchmarks.
ForksandSpoonsinNY@reddit
The CPU handles compute (player actions, networking, hitbox computations, etc.).
The GPU handles pixel pushing across resolutions.
GPU performance determines what resolution you can work from (1080p vs 1440p vs 4K).
If your GPU isn't bottlenecked, then the CPU only matters for framerate up to the point of what your video card can handle.
Zerasad@reddit
Except when it doesn't. Like here: https://www.youtube.com/watch?v=00GmwHIJuJY or here: https://www.youtube.com/watch?v=JLEIJhunaW8 or here: https://www.youtube.com/watch?v=G03fzsYUNDU
The CPU does a lot more than just "handling compute". You never know how two parts will work together until you actually put them in a system and test it out.
emn13@reddit
If you know about those corner cases, you surely know they are corner cases. Of course it's great we can get info even on those niches, but it's not something I imagine most people would (nor should) consider when making buying decisions (except perhaps to know to be leery of Arc). It's a good first-order approximation to consider the two bottlenecks of CPU and GPU as largely independent.
But as always, the devil is in the details.
Zerasad@reddit
Is it really a corner case if it affects most of the top-end Nvidia cards, and Intel cards - which is 2 out of 3 GPU manufacturers and like 70% of the new market?
But it doesn't even matter. It should be pretty obvious that one should test the actual configuration of PC parts and resolution that they want instead of trying to extrapolate from other benchmarks. But it feels like everyone is too smart for their own good and doesn't believe that direct testing has any merit.
For non-enthusiasts it is immensely helpful to see, if they want to buy a config for 4K gaming, which parts will give them what performance, so they don't have to compare and contrast 3 videos from 4 different creators to understand what they can expect.
emn13@reddit
It only meaningfully affects relative results on Nvidia cards at the kind of frametimes where the impact itself isn't meaningful anymore. You're talking on the order of 0.1 ms here. And sure, if you look at FPS and are running at 500 fps that suddenly looks meaningful, because taking the reciprocal of tiny numbers leads to large absolute values, but I can't imagine it matters for anything but a vanishingly small fraction (possibly 0) of the user base.
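To put numbers on the reciprocal effect (plain arithmetic, no measured data):

```python
def fps_after_overhead(fps, overhead_ms):
    frametime_ms = 1000.0 / fps
    return 1000.0 / (frametime_ms + overhead_ms)

# The same 0.1 ms of extra per-frame CPU work:
print(fps_after_overhead(500, 0.1))  # ~476 fps - looks like a "big" 24 fps loss
print(fps_after_overhead(60, 0.1))   # ~59.6 fps - essentially invisible
```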
Speaking of 0%, that's arc's market share, rounded to the nearest percent.
These issues are real, no question. But if you're emphasizing this to somebody that is trying to understand CPU vs. GPU bottlenecks, ...how helpful is that, really? It's non-obvious enough to understand CPU vs. GPU bottlenecks in isolation from each other first; and if you're a perf-geek: cool! let's go talk about all these near corner cases by all means.
For an absurd corollary: if you're explaining the net ground speed of a thrown ball when thrown from a moving vehicle, then plain trigonometry (really Pythagoras) is complicated enough and has high accuracy. You're not helping in most cases if you insist on correcting for relativistic effects, nor for time dilation due to Earth's gravity. Still fun to talk about and all, so if you want to show off the maths, I'm all ears - just let's not forget that we're doing that for fun, not for relevance.
Zerasad@reddit
Are we looking at the same thing? In the Nvidia video I linked, in Watch Dogs: Legion the 3090 gets 83 FPS with the 1600X, while the 5600XT gets 100 FPS. That's a 20% increase. At 4K in Horizon Zero Dawn the 1600X does 120 FPS with the 6900XT and 98 FPS with the 3090.
If you just look at the 1600X with the 6900XT (120 FPS) and the 3090 with the 5600X (138 FPS), you would expect to get the lower of the two - 120 FPS - when you combine the 1600X and the 3090, but you actually get 20% worse performance.
And the driver overhead is just one example. Stop getting hung up on it.
I'm not talking about bottlenecks. Nobody cares about bottlenecks. The average buyer is not going to care about bottlenecks. That is enthusiast stuff, forget about it for a second please. Look outside of comment chains 10 replies deep in r/hardware. Do you think your cousin who had an Xbox all his life is going to start researching bottlenecks when he's buying a PC? Is he going to start comparing GamersNexus videos to figure out the optimal configuration? Or is he going to look up one video that tests the CPUs at 4K and pick one within his budget that does well enough? (Or, more realistically, pick the Amazon top seller.)
This is what I'm saying. You are trying to outsmart an issue when it doesn't need to be outsmarted. Take your ball-throwing example: you could try to math it out, like you are doing - calculate the speed of the ball, air resistance, the speed of the vehicle - and try to extrapolate what's gonna happen. Or you could just sit in your car, throw a ball while you are moving, and see what actually happens.
emn13@reddit
Yep, for very low-end or extremely old CPUs the overhead can, in some games, be on the order of 1 ms. Still not hugely impactful, and for modern systems and most games it's way less. If you're on that kind of ancient system, stuff like PCIe version might matter too, so things get even more complicated; of course it varies game by game, and I'll bet it sometimes even depends on settings.
But your point about the guy replacing an Xbox I just don't follow. This is exactly the kind of person that should _not_ care about this kind of stuff; they should look at CPU and GPU benchmarks separately. The fact that in extremely unbalanced cases in a few games the heuristic isn't perfect doesn't diminish that. Such a person picking any modern GPU based on GPU-limited benchmarks and a CPU based on CPU-limited benchmarks will get pretty good results, and even where the heuristic is off, it's not off by a huge amount.
You could make the argument that even that level of technical decision making is too high a bar, but then they really shouldn't be picking their own parts at all; and most people will sanely ask a friend or two for a sanity check before pairing a new GPU with a low-end, low-cache AM4 CPU.
What's your proposed alternative anyhow? Testing all kinds of combinations is completely infeasible, and it's not even better, because the kind of disinterested buyer this is supposed to protect is never going to find that data if it's that messy, nor know which details they need to pay attention to and which not. That much testing would probably necessitate bad testing, too. It's not what experts (or interested amateurs) even do anyhow; we all find patterns - such as the Nvidia driver overhead and Arc's CPU usage issues - rather than remember or brute-force all kinds of combinations; the problem space is too complex to brute-force.
ForksandSpoonsinNY@reddit
The difference here is that all 3 videos are driver issues. The Battlefield 6 benchmarks are on a 5090 and AMD CPUs that are at least a year old. The only radical difference could be a badly optimized game engine or GPU driver tweaks.
Unless there are huge issues with things like branch prediction or some other software problem, the performance level of the GPU series remains the top contributor to performance.
In that case the playing field remains level, and what you're looking for is issues in software.
Zerasad@reddit
It's not "issues with drivers" did you even watch the videos? These cards hammer CPUs by offloading some workloads and that's why these weaker CPUs were struggling.
Also I'm not talking about benchmarking just Battlefield 6 on AMD CPUs I'm talking about in general 1080p and 4K CPU benchmarks.
I showed you examples on why you can't just extrapolate CPU performance willy-nilly. It DOESN'T MATTER why, it still is a real world scenario where you won't know what performance you get untill you test it out.
djuice2k@reddit
Huh?! You w0t mate? Of course you can correlate how much FPS you can get at 4K when using 1080p CPU limited data.
An example here..
You have a 9800X3D and a 5090 and you play lets say Rainbow Six Siege..
At 1080p = Avg FPS 570 / GPU Usage at 45%
At 1440p = Avg FPS 565 / GPU Usage at 80%
At 2160p = Avg FPS 370 / GPU Usage at 100%
Now with that data set, you can easily gather that if you weren't GPU-limited at 4K you should easily get 560-570 fps, as your CPU is the limiting factor here - the max it could ever generate was 570 fps.
Now if you only had 4K CPU testing data, and your performance data was all limited by the GPU being at 100%, you wouldn't be able to see the maximum performance your CPU is capable of.
Zerasad@reddit
I'm tired of seeing this argument and I'm tired of having to repeat myself. No, you cannot just extrapolate based on data from different configurations. See this comment: https://www.reddit.com/r/hardware/comments/1mmfxwo/comment/n82p6dt
And if I don't care about which CPU has the best performance at 1080p because I don't WANT to play at 1080p, then it doesn't matter which has the best theoretical performance. I want to know what CPU I need to game at 4K. And the 4K benchmark is what's gonna tell me that.
Morningst4r@reddit
They also have to be benchmarking your exact GPU or the 4K data doesn't give you any new information anyway. In a game like BF6 where you want the highest frame rates possible, seeing 4K benchmarks and thinking "oh well, all the CPUs get ~70 FPS at native 4K Ultra with a 5090, no need to upgrade" kind of misses the point. In the real world you'll turn down settings and use upscaling to get to 120+, so you need to know which CPUs consistently hit that.
Zerasad@reddit
Yea, in BF6 maybe, but in most of the games you want to play at 4K (single player story games like CP77, Dragon Age, Witcher, etc.) you want 4K60 and the CPU will matter very little. I also don't understand how people can be against more information. Do you not want the 4K benchmarks to exist? All it does is give you more datapoints, so you have more information to work with.
timorous1234567890@reddit
It takes time, so I would rather have a wider spread of games on test than 4K testing of a smaller game suite.
To me, adding Stellaris (or HoI4, since GN tests Stellaris), the latest Civ, and a factory builder to test the tick rate or turn time would be a far better use of the limited testing time available than 4K tests of the 12 or so titles they are testing at 1080p and 1440p.
timorous1234567890@reddit
Yes you will.
If you are aiming for 90 fps but you are currently stuck at 60 fps, then 1080p testing will tell you whether the CPU you own (or one with a similar performance profile) can even hit 90 fps in the games you want. If it can, then great: you don't need to upgrade the CPU and can just buy a beefier GPU to hit your performance target. If your CPU cannot, then no amount of GPU upgrades will get you over your performance target and you may need to upgrade both parts.
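In pseudo-code terms, the decision boils down to something like this (a rough Python sketch; the numbers are placeholders):

```python
def upgrade_advice(cpu_cap_from_1080p_review, current_fps, target_fps):
    """cpu_cap_from_1080p_review: fps your CPU (or a similar one) hits in CPU-bound 1080p tests."""
    if current_fps >= target_fps:
        return "No upgrade needed."
    if cpu_cap_from_1080p_review >= target_fps:
        return "CPU can already hit the target - a faster GPU alone gets you there."
    return "CPU is the ceiling - a GPU upgrade alone won't reach the target."

print(upgrade_advice(cpu_cap_from_1080p_review=140, current_fps=60, target_fps=90))
```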
This also does not factor in the multitude of games that are tick-rate or turn-time limited, which rarely get tested. Or even stuff that is heavy on the CPU like WoW or Path of Exile and other similar games with a lot of local calculations going on.
jaju123@reddit
Personally I buy the most powerful CPU I can get because I want it to last me more than one GPU because it's more hassle to change the CPU than the GPU. Thus, I don't look at 4k benchmarks at all because they're GPU bound for the most part. At a basic level you want to test with as little GPU stress as possible to enable the CPUs to differentiate themselves. If your CPU can give you 500 FPS at 480p then it can also give you basically 500 FPS at 4k as well. It tells you how much headroom you'd have in the future when better GPUs come out.
josivh@reddit
I'm also in that group: my setup was a 3700X and a 4090, playing at 4K. There are many people like us with a higher-end GPU who want to know if upgrading to AM5 is worth it. Without 4K benchmarks we wouldn't know that there are actually benefits.
You share your personal experience about how you always get the best CPU, that you find it more tedious to change a CPU, and that you don't play at 4K. And because of your personal experience, you believe there shouldn't be 4K benchmarks at all because they are useless. Not everyone shares your personal experience. And we shouldn't have to infer what frames we'd get from 1080p benchmarks just because 4K benchmarks are not useful to you.
Keulapaska@reddit
If you base your entire opinion on one benchmark, maybe. But if you have looked at different benchmarks across games and years, it becomes pretty easy to know how performance scales CPU vs CPU and how much FPS a certain GPU can get.
Also, if you have the hardware, you can just look at GPU usage yourself, which is a pretty good indicator of CPU bottlenecks; and if that isn't enough when you think you're CPU-bound and aren't 100% sure, just lower the res/settings to really see.
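As a rough sketch of that heuristic (the 95% threshold is an arbitrary assumption; in practice you just watch the overlay):

```python
def likely_bottleneck(samples):
    """samples: list of (fps, gpu_utilisation_percent) captured while playing."""
    avg_util = sum(util for _, util in samples) / len(samples)
    return "GPU-bound" if avg_util >= 95 else "likely CPU-bound (GPU has headroom)"

print(likely_bottleneck([(144, 70), (150, 65), (138, 72)]))  # GPU idling -> CPU is the limit
```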
josivh@reddit
Extrapolating data from multiple videos over the years or finding the 4K cpu benchmark that tells me all I need to know…hmmm tough choice
Can I ask why you are arguing so hard to remove 4K CPU benchmarks when you understand their use-case? You know it’s useful for people, but you are going out your way to argue against them. Why?
Keulapaska@reddit
I'm not saying remove them altogether, more that they aren't necessary to include in every benchmark, and it isn't the end of the world not to have them.
Obviously more data in a single benchmark is always better, and I would love more in-depth benchmarks showing way more data and configs. But given the time it takes to produce good data with multiple configs/settings (meaning both software settings and hardware OC stuff), there are limits to how much fits in a single video/article, and when comparing CPUs at 4K in modern games the results are going to be quite predictable if the highest-end GPU can't reach the FPS that CPU X achieves at 1080p.
conquer69@reddit
4K data isn't that helpful since it's mainly testing the GPU or GPU-related information. It's more of a curiosity than valuable data that justifies doubling the workload, especially when people are using upscalers these days.
Zerasad@reddit
It is helpful for you to see if it's worth upgrading your CPU and possibly Mobo if you are looking to game at 4K. It helps you make the choice. Instead of getting a 9800X3D, you can save that money and put it towards getting a new GPU, since you won't see that much of an upgrade off of a CPU alone. Even if a lot of people use upscalers, more information is still better than less.
conquer69@reddit
You need 2 data points: CPU and GPU performance. CPU benchmarks give you one, but you are trying to get the second (GPU numbers) from a CPU benchmark.
You need to look at GPU benchmarks for that information. The performance you will get is the lower of the two.
josivh@reddit
There is a subsection of PC players with AM4 and a 4090, like me, who need this data. I have no idea why you're so insistent on including less data in benchmarks when it's no skin off your back.
conquer69@reddit
I know you need the data. I said as much in my comment. You need to look for GPU data in GPU benchmarks, not in CPU benchmarks.
The entire argument is about people complaining about not having GPU data in CPU benchmarks instead of just looking for GPU benchmarks. They don't look for it because they don't know any better.
josivh@reddit
You think with am4 and a 4090 I’ll be looking for gpu benchmarks, to upgrade my gpu? Yikes
conquer69@reddit
If you need GPU data, you look for GPU benchmarks. If you need CPU data, you look for CPU benchmarks.
What exactly is stopping you from looking at both data points and using the lowest to estimate the performance of your new build?
josivh@reddit
Because I can just not waste my time and look at a single 4K cpu benchmark to guide my cpu choice 💀
conquer69@reddit
But that's the issue, you want information that is not meant to be there. You can easily look at the cpu benchmark and then at the gpu benchmark to reach a conclusion.
You are creating your own problem, complaining about it and then expecting others to fix it for you.
josivh@reddit
What problem am I creating? I’m chilling. We have decent YouTubers providing me the 4K benchmarks I need and you’re the one whinging that it needs to be taken out
wpm@reddit
I think it's also useful data just for shopping purposes. I know that personally I take a look at benchmarks for games I play or want to play for deciding on my builds. 4K benchmarks helped me decide to get a 32:9 1440p ultrawide (about 89% of the pixels as a 4K display), because I could look at benchmarks with my CPU/GPU and estimate whether I'd be getting playable frames per second at some given quality, with frame-gen/AI upresing, and so on. Even just getting in the ballpark can be a good boost of confidence you're making the right purchases when so much money can be on the line. It also helps save money. I have a 7800X3D and 2 AM5 systems in my house. I look at Microcenter's ads and there's always a "Get a 9800X3D! $429!!" and I go "hmmm, I should treat myself", but again, I can look at 4K benchmarks and go "With my GPU held constant, this purchase would marginally improve only a handful of my games that I play right now, i.e., not worth it, go put it in the 401K".
Also, 4K is a quadrupling of the workload, as two dimensions are each being 2x'ed in pixel count. There is definitely more work for the CPU to do that can affect the outcomes in small but subtle ways.
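The pixel arithmetic behind both numbers:

```python
pixels_1080p = 1920 * 1080   # 2,073,600
pixels_4k    = 3840 * 2160   # 8,294,400
pixels_uw    = 5120 * 1440   # 7,372,800 (32:9 1440p ultrawide)

print(pixels_4k / pixels_1080p)  # 4.0   -> 4K is exactly 4x the pixels of 1080p
print(pixels_uw / pixels_4k)     # ~0.89 -> the ultrawide is about 89% of a 4K panel
```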
conquer69@reddit
It's not that 4K data is useless, the problem is trying to get that data from a CPU benchmark. What you are looking for is GPU data which are a different set of benchmarks.
wpm@reddit
If that were true we would expect that all 4K benchmarks are identical across all CPUs, if the GPU is the same. Is that how it plays out?
conquer69@reddit
As long as you aren't cpu bound, basically yes.
wpm@reddit
Indeed.
But what if the added overhead and work the CPU needs to do for a game at 4K vs 1080p creates a CPU-bound situation?
Damn, that would be useful to know, huh.
conquer69@reddit
Maybe some cpu heavy elements scale with resolution but that's the exception. It's a lot of extra work for little to no pay off.
Those things are better covered with exploratory content like what Daniel Owen does.
mario61752@reddit
Actually, it's still useful because rendering at higher resolutions has a slightly higher CPU overhead. When HUB posted a video of them testing CPUs at 4K to "show" these whiners why they test at 1080p, they got the surprising result of the 9800X3D performing better at 4K, and it made that video confusing to watch as they tried to brush it off.
conquer69@reddit
I haven't seen a single scenario where the 9800x3d (or any other cpu) performs better at 4K than 1080p.
mario61752@reddit
It shouldn't. After all, I just said there is higher overhead at higher resolutions.
This is the HUB rant video I was talking about btw
https://youtu.be/5GIvrMWzr9k?si=umpJu2yRfIVq7E2Y
conquer69@reddit
That isn't 4K though. That's 1252p upscaled to 4K with DLSS, which has its own fixed overhead. It's why in CPU-bound scenarios enabling DLSS can actually lower performance.
mario61752@reddit
Good point, but for the sake of the argument for testing "real-life" scenarios, testing DLSS performance is viable too imo.
Here's another, earlier video of them doing it. There is still a minuscule difference in performance without DLSS. It's almost negligible, but I find it interesting.
https://youtu.be/98RR0FVQeqs?si=-jxsAJvk8ixV6wUD
conquer69@reddit
The performance cost of upscalers can be quite severe. Daniel Owen had a test where he was getting 300 fps at 1080p, and upscaling that same 1080p to 4K gave him 200 fps.
The performance of upscalers is measured in frametime cost, which in this case was 1.6 ms. That's not much when playing at 60 fps, but at 300 fps each frame only takes 3.3 ms - 1.6 ms is a lot then.
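The arithmetic, spelled out (it matches the ~300 -> ~200 fps example):

```python
frame_ms_300fps = 1000 / 300   # ≈ 3.33 ms per frame
frame_ms_60fps  = 1000 / 60    # ≈ 16.67 ms per frame
upscaler_cost   = 1.6          # ms, from the Daniel Owen example

print(1000 / (frame_ms_300fps + upscaler_cost))  # ≈ 203 fps: 300 drops to ~200, as observed
print(1000 / (frame_ms_60fps + upscaler_cost))   # ≈ 54.7 fps: barely a dent at 60 fps
```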
The performance cost of DLSS and frame generation is still a gpu adjacent topic and should be covered in GPU benchmarks, not CPU benchmarks.
MrPayDay@reddit
Even at 4K the CPU matters. If it didn't, you could play at 4K on a 6700K with a 5070 Ti and still get good percentile and minimum fps. You can't, though, because the 6700K is crap by today's metrics and acts more as a brake than anything. That's why 4K tests are important for CPUs as well.
NGGKroze@reddit
Even when GPU-bound, the CPU still matters for frame pacing. I learned the hard way that my old 5600 was just not enough in Cyberpunk RT/PT.
MrPayDay@reddit
Yes, especially RT and PT hammer the CPU with the geometric bounding volume hierarchy (BVH) work. The GPU can't help here; the CPU has to do these tasks.
TheLordOfTheTism@reddit
I didn't know how badly my 6700k was holding me back until I gained literally 40 fps by upgrading to a 5800x, and then I gained another 40 by swapping that for a 5700x3d. Yes I'm dead serious. The gains were in fortnite. Not exactly the first game that comes to mind when you think heavy on the cpu, and yet....
Keulapaska@reddit
40fps from what? 400 to 440? 100 to 140? 20 to 60?
Also, I'm guessing you never looked at your GPU usage (yeah, it's not perfect, but it's a good enough indicator, assuming the same clock speed, if the drops are big enough) to see that you were indeed CPU-bottlenecked. If you didn't know, then you didn't care enough to check.
MrPayDay@reddit
Exactly this. In the past the consensus was "below 4K the CPU does not really matter", but that has changed dramatically in the last few years with all the new engines, and especially with RT and PT, since the BVH work is a CPU task, not a GPU task. I gained 20-50% (!) higher and better percentile fps (depending on the specific game and settings) after jumping from the 13900K to the 9950X3D on the 5090; it's absurd. Buying a better GPU is only one part. The CPU and (to a small extent) the RAM matter as well.
conquer69@reddit
It matters but you don't need to test at 4K to get that information. If you get 300 fps at 1080p, you will get 300 fps at 4K if the gpu is fast enough.
You then look at gpu benchmarks and see which one will bottleneck you first.
The 4K data is important but that's gpu data and people are trying to get that information from a cpu benchmark.
MrPayDay@reddit
Yep, I agree here; we just need to explain and agree on the context of every test scenario.
josivh@reddit
Back when I had a 3700X and 4090 combo and the 5000 series didn't exist, these 4K benchmarks were a lifesaver. Upgraded to a 5800X3D and it was the most cost-effective upgrade I've ever made.
And it's sad that people believe we shouldn't have 4K benchmarks because it doesn't benefit their own scenario; some people just lack basic empathy.
only_r3ad_the_titl3@reddit
On the other hand, not testing at 4K will lead people to think they need a more expensive CPU than necessary.
CoronaLVR@reddit
All reviewers have to do to make those people shut up is benchmark at high resolution with DLSS.
Not only does it reduce load on the GPU, making the CPU the bottleneck again, it is also representative of how actual people use their PCs.
The main complaint about benchmarking at lower resolutions is that it's not realistic.
iDontSeedMyTorrents@reddit
As if that would shut anybody up.
NoBodY PlaYs 4K! sTeAM sUrVey SaYs 1080p Is mOsT PopUlAr!
ThAt'S noT HoW i PLaY!
The only way to shut anybody up is to test every possible permutation on Earth and you'll still find clueless idiots complaining about something.
Adorable-Fault-651@reddit
Plus the claims that 3d cache reduces lows and stuttering have yet to be proven at 4k by reviewers.
When their income is dependent on clicks they need graphs and drama to stay employed.
S4luk4s@reddit
Most people on 1080p and 1440p aren't GPU-bound, especially with FSR/DLSS enabled. And still, it makes no sense to test CPUs in GPU-bound scenarios - why do you think it does? The benchmark is for comparing CPU performance; in a few years you will definitely be able to tell the difference, even at 4K in certain titles.
Vb_33@reddit
Yes, but consider also the uselessness of said runs. People care about game benchmarks because games are real workloads that show how your hardware will behave when you actually play them. Except nobody plays Battlefield like this: BF is not a barren walking simulator, and it is a CPU-heavy game under normal conditions (lots of enemies, vehicles, destruction).
A lot of their testing isn't testing the Battlefield experience; it's more akin to a synthetic benchmark than real gaming.
AccomplishedRip4871@reddit (OP)
He explicitly said that it's very hard to test those scenarios because he will just die, which ruins the benchmark run - it's impossible to benchmark it the way you describe. And if anything, with more players, action and firefights, the network code also ramps up CPU work, as it must process and reconcile position updates from many players at once, which makes the game even more CPU-bound.
He used 2 PCs in the same match, and in some cases his friend helped him with testing - that's as far as he can reasonably go. I mean, he could try asking the BF6 devs for god mode so he can test CPU-intensive firefights without dying.
Morningst4r@reddit
DICE is probably the only one who could provide that data from telemetry on the performance of everyone's machine while they're playing. I've always thought it would be interesting for publishers of "forever games" like MMOs to publish stats on the best performing hardware in their games so people can buy accordingly. Games like Guild Wars 2 and WoW are near unbenchable in real life scenarios, and the most serious players wouldn't hesitate to buy a new CPU based on real world stats.
CatsAndCapybaras@reddit
I think this is a fundamental misunderstanding that is common amongst people who watch this kind of content. The purpose of benchmarking by hardware reviewers is not to show what your performance will be. It is to show a comparison between different parts for the purpose of buying advice.
Morningst4r@reddit
Ideally, it should be both. But the latter is useless without the former, and as Steve says it's unrealistic to get enough data for real world testing to have any value.
Vollgaser@reddit
Didn't he show in the video that the margin between the CPUs in action and out of action is basically identical, and that's why he tested the rest out of action? If the margins in action and out of action are the same, then testing out of action makes much more sense, as it is much easier to reproduce - dying completely invalidates the run.
jotarowinkey@reddit
Does this have to do with the 265K? I was on buildapcsales a while back and for a few days in a row there was a deal. Either the 265K is a lot better than it was at release or people are lying.
Hot-Interaction9637@reddit
But But I need my CPU benchmarks at 4K so that I have no idea how said game might perform 2 years from now when new GPU's come out.
ishsreddit@reddit
bro the post with PCGH had an absolute shitshow of a comment section lol. I'm not very active on the PC subreddits but r/hardware followers are definitely one of the most prone to bad data, followed by the spread of misinformation.
steak4take@reddit
Not just bad data, but bad takes, outright lies and not a small component of groupthink and subsequent bullying.
Darksider123@reddit
So many "experts" on reddit who just wanna shit on others who disagree with their views
Canadian_Border_Czar@reddit
I ran this with a 5800X3D and 5070 Ti and it ran beautifully @ 1440p High. Didn't test it out on Ultra, but I did consider turning the graphics down solely so I would stop getting blasted through dust clouds.
Cant see fucking shit in that game.
rng847472495@reddit
What I find strange is that if you go to the PCGH article, and the forum topic for that article, they aren't really calling the benchmark numbers nonsense, other than the 7950X3D numbers.
BNSoul@reddit
They're nonsense; I get twice the number of frames with the same settings on the same map using a 9800X3D. I wonder if they have "game mode" turned on at the BIOS level, effectively cutting the number of available threads for the 9800X3D in half. Also, their numbers for the 5800X3D and 7950X3D are messed up; they must have issues with Windows Game Bar, AMD chipset drivers and core parking. I'd say you can disregard all PCGH results until they fix their stuff.
Candle_Honest@reddit
I keep seeing comments of PCGH.. what is PCGH?
AccomplishedRip4871@reddit (OP)
Battlefield 6: Erste Benchmarks zur Open Beta
kredes@reddit
Are we expected to all know German?
AccomplishedRip4871@reddit (OP)
It's 2025, you have auto translate.
He asked a question, he got his answer - don't be rude.
kredes@reddit
I definitely was not trying to be rude, and I wasn't thinking about translating.
AccomplishedRip4871@reddit (OP)
Okay, I don't know German either and used auto-translate. Basically, it is a website that published inaccurate BF6 benchmarks; multiple redditors believed them and were then toxic/rude towards Hardware Unboxed because they considered HU's testing inaccurate. In a second video (the one in this post), Hardware Unboxed benchmarked the CPUs again with better methodology, and the CPU performance difference stayed roughly the same, unlike the data from that German website.
kredes@reddit
Thanks for the explanation, was also what i came to understand.
amorek92@reddit
Wish they tested games like Oxygen Not Included or Rimworld, rather than fps
rickybluff@reddit
Remember when Intel was always 5% better than AMD?
Jaz1140@reddit
User benchmark refuses to forget
bizude@reddit
I remember when people didn't understand the concept of a GPU bottleneck, too! ;)
David-EN-@reddit
they still don't
Professional-Tear996@reddit
So Steve calling my statement - that FPS is different when one actually attempts to play the game and moves closer to the objective - "nonsense" is invariably nonsense too, as demonstrated by the part of the video in which he fast-forwards.
The instantaneous FPS is 200-210 on the 9800X3D and 140-150 on the 265K before fast-forwarding, and after fast-forwarding to the action it is 170-180 on the 9800X3D and 120-130 on the 265K.
djuice2k@reddit
It's nonsense in that the delta between the 2 CPUs is still roughly the same, whether in combat or out of combat - around 40%.
Professional-Tear996@reddit
When did I indicate that I was saying that it changes the delta between different CPUs?
Remarkable_Low2445@reddit
You indicated your lack of understanding as to why Steve regards 'your statement' as nonsense.
It doesn't matter if the performance tanks close to the objective as long as it tanks similarly between systems.
conquer69@reddit
It's nonsense to expect benchmarks done in a firefight because they can't be replicated. The test needs to be repeatable. What Steve did here is an exception.
Professional-Tear996@reddit
They can be easily replicated when data is collected over a longer time-frame.
conquer69@reddit
It can't because all the scenarios are different in a map with real players in it. Maybe some buildings are completely destroyed, or they have 20 people fighting in them. There are too many variables.
Professional-Tear996@reddit
It can be replicated as long as the review isn't fixated on keeping the moment-to-moment stuff that is happening exactly the same.
errdayimshuffln@reddit
Then replicate it yourself. Prove it.
Even if you test for a long time, you still get a huge variance, and you end up averaging in any network bottlenecks and other bottlenecks which are not cpu related.
In this case, to be honest and transparent, you'd have to compare entire frametime graphs and even then you don't eliminate the issue because the same bench will result in a different graph each time.
Professional-Tear996@reddit
Getting a huge variance isn't a problem if you know the nature of the variance and how many standard deviations away from the average a particular data point is to consider it an outlier.
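For what it's worth, the filtering being described is just this (a sketch; the 2-sigma cutoff and the sample numbers are arbitrary):

```python
import statistics

def drop_outliers(fps_samples, k=2.0):
    mean = statistics.mean(fps_samples)
    std = statistics.stdev(fps_samples)
    # Keep only samples within k standard deviations of the mean.
    return [x for x in fps_samples if abs(x - mean) <= k * std]

runs = [172, 168, 175, 95, 170, 166, 178]  # one run ruined by dying / a server hiccup
print(drop_outliers(runs))                 # the 95 fps run gets discarded
```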
errdayimshuffln@reddit
It is still undesirable. For example, in experimental physics, physicists will redesign an entire experimental setup if the variance is too high relative to the absolute values themselves.
60 fps with a 20 fps standard deviation is terrible, for example.
b-maacc@reddit
lol I was wondering when you'd show up.
Professional-Tear996@reddit
I have shown up. Now refute what I said above.
only_r3ad_the_titl3@reddit
cant win against the AMD unboxed fanboys
b-maacc@reddit
No I think I'll just let you continue, it's entertaining.
Professional-Tear996@reddit
You entertain yourself by deliberately choosing to be lazy by not looking up the relevant timestamp in the video?
Slabbed1738@reddit
Keep going, im almost there
b-maacc@reddit
Yes, yes. More of this please.
PlexasAideron@reddit
How's that crow taste? Hope it's good.
Professional-Tear996@reddit
Ask Steve perhaps?
team56th@reddit
Huh - my takeaway is that somehow Arrow Lake isn't as terrible as I expected. I mean, of course it doesn't have the 3D cache thing so it's slower, but compared to stock Zen 5 it's not too bad. I guess the firmware-level changes did make things better?
FragrantGas9@reddit
It’s only really terrible when you consider the entire package:
To achieve these framerates with Arrow Lake, you are investing $310 in a CPU, buying more expensive fast CUDIMM memory (HW unbox tested with that for this video), a relatively expensive mobo with NO upgrade path (next generation will be a new socket again),
Compared, against 9800X3D which yes is $150 more expensive, but can use cheaper RAM, cheaper mobo, and does still have at least one more generation upgrade path, and offers ~35% performance improvement in this game.
Or compared against 9700X for same price, same cheaper ram, still an upgrade path, and the same performance.
That’s why the 265K is considered ass. Performance can be ok but overall value is poor.
Morningst4r@reddit
You don't need CUDIMM memory; some Hynix A-die should get you close to optimal performance. I don't know if there are any "cheap" motherboards that can push high memory speeds (any good budget 2-DIMM boards?), but if not, then that definitely hurts the 265K's value. Now that VRMs aren't usually a concern, memory topology seems to be the biggest differentiator, which is definitely a win for X3D.
AccomplishedRip4871@reddit (OP)
Being 40% slower sounds pretty terrible to me - the only advantage of Arrow Lake is cheap productivity performance thanks to the e-cores - but if you only care about gaming and play CPU-limited games, X3D is a clear winner.
Darkomax@reddit
I think they're far better than non 3D SKUs so there's that.
team56th@reddit
I mean my preconception was that it’s significantly slower than 14900K.
danglotka@reddit
I thought it was like a few percent slower
errdayimshuffln@reddit
Yeah it was like 2-5% slower at launch and ended up being on par on average after updates.
ElementII5@reddit
Not true anymore either.
https://old.reddit.com/r/hardware/comments/1md6hnf/amd_threadripper_9980x_9970x_linux_benchmarks/
The 9950x is about 30% faster than the 285k.
jerrylzy@reddit
Only under Linux, though. Most gamers use Windows.
JonWood007@reddit
I mean, yeah, but the X3D parts are only the high-end CPUs. If you're in the market for a $200-300 CPU, like a 14600K or 245K vs a 7700X/9700X, they'll likely perform about the same.
madmk2@reddit
Played the beta a bunch on a 265K and haven't noticed it dip below 173 fps (G-Sync) a single time, with a moderate OC applied. ARL's biggest problem is Intel's super conservative stock tuning. The chips are capable of so much more.
AccomplishedRip4871@reddit (OP)
That's not the topic of discussion - both the 265K and 5800X3D are capable of reaching that FPS (170+), but if you want more FPS, and more importantly higher 1% lows, your only option is the 9800X3D - you can't close the 40% gap by changing "Intel's super conservative stock tuning".
madmk2@reddit
It's not reaching, it's averaging, which literally means the gap closes significantly. That's how numbers work. In fact the gap becomes fairly small - which is significant when you're comparing a $300 to a $500 CPU.
AccomplishedRip4871@reddit (OP)
Except it's not.
In that section of the benchmark, at 1080p Ultra settings the 265K is averaging ~140 FPS; to reach the 173 fps mark you mentioned previously, you would have to improve CPU performance by 23.5% - good luck achieving that just by changing "Intel's super conservative stock tuning". Plus, we can use PBO2 with the 9800X3D and get +200 MHz on all cores with a few clicks, and the gap will remain the same.
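For reference, the arithmetic:

```python
def required_uplift_pct(current_fps, target_fps):
    return (target_fps / current_fps - 1) * 100

print(round(required_uplift_pct(140, 173), 1))  # ~23.6%, i.e. the roughly 23.5% figure above
```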
madmk2@reddit
I mean it literally is, i don't know what else to tell you. I wouldn't point it out if it wasn't so puzzling
AccomplishedRip4871@reddit (OP)
Great - to make your argument valid, you have to provide a proper benchmark and share it with people. Simply saying "I changed a few settings and now my 265K gaming performance is 30% faster" is not enough for me, I'm sorry.
madmk2@reddit
If you're actually interested and not just arguing because you're bored, I'm trying to record some gameplay for you.
AccomplishedRip4871@reddit (OP)
Yes, you can post your benchmark to HardwareUnboxed simply by commenting on this video on YouTube, they are responsive and if your data is really that impressive, they might re-test it with overclock, both on Intel and AMD.
madmk2@reddit
I went back to retest it and found the discrepancy. I was playing on High settings, and moving to Ultra drops the average from 170 down to around 150 (on the same map).
I looked into this a bit more and found a bit from Daniel Owen observing similar behavior on his 9800X3D.
Pretty interesting how much the graphics settings impact CPU performance in this game. Maybe someone else is insane enough to test the individual impact of each setting - something in there is hogging a bunch of CPU performance.
Comprehensive_Ad8006@reddit
So your increase in performance from the configuration HWUB used (stock settings but with 8200 CUDIMM CL40 RAM) isn't anywhere close to 23.5%.
A charitable estimate would be 140 fps > 150 fps being a 7.14% increase. More realistically it was around 145 in the video which equates to a 3.44% increase.
And like the other dude said, you could achieve a similar margin with PBO tuning too so the point is moot.
madmk2@reddit
It's 126 (from the HUB video) to 150, which is close to 20% - which is a fuckton considering that's just what I was able to squeeze out without even touching the stock voltage limit.
I don't know how much you could realistically squeeze out of the 9800X3D, which is beside the point I was trying to make anyway. ARL is pretty good once it's tuned properly, which is something Intel didn't manage to achieve with the out-of-box configuration.
Comprehensive_Ad8006@reddit
Then I'm confused, because the other guy was talking about the 1080p native data.
If you're talking about the 1440p result (which HWUB tested at native), I think you're even more full of shit, because in your own screenshots that you posted earlier you can just make out, through the blurry text in the top left, that you're upscaling from less than 1080p (it says "Scaled from 1995x835") and that you're using an ultrawide.
StarskyNHutch862@reddit
Would love to see it please post it here.
madmk2@reddit
https://imgur.com/a/5Jthueo - I even did the same map and tried the whole "running into the action" bit from the video. It's the in-game performance overlay, which is barely readable; I hope you can make out the numbers.
StarskyNHutch862@reddit
lmao how do you not understand that changing the settings and then standing in a tiny building is not a comparable benchmark... I mean really, man. Stop.
madmk2@reddit
The numbers in the bottom left are averaged from spawn to running into the action, like Steve did in his video. It's the same thing Steve did in his video.
DevastatorTNT@reddit
The last time they pushed stock voltages it didn't go so well, though.
Vb_33@reddit
That was a different issue.
gusthenewkid@reddit
It wasn’t, voltage kills the chips.
DevastatorTNT@reddit
"different" only if you believe Intel didn't encourage board partners to push stock voltages to close the gap with ryzen
The fucked up microcode exacerbated the issue, but voltage-related degradation on their node would have still been a thing years down the line
MrPayDay@reddit
These benchmarks are proof that the best and fastest GPUs available can't compensate for the deficit of an Arrow Lake CPU. Arrow Lake is literally putting the brakes on your system in BF6 compared to the X3D CPUs.
bubblesort33@reddit
I think it's just the fact that it has more cores, and this game uses them well. 95% of games aren't like this. But a 265K might actually outperform a 9600X or 9700X in gaming - I mean, it already outperforms a 9900X in multi-threaded productivity.
dripkidd@reddit
This is a single game, dude - nothing to take away from it.
ComputerBase always has up-to-date benchmarks, firmware updates and all that; you can use the little cogwheel to cherry-pick whatever game list you want:
https://www.computerbase.de/artikel/prozessoren/rangliste.89909/#abschnitt_aktuelle_cpugamingbenchmarks
Vb_33@reddit
Your link has Arrow Lake in general beating Zen 5 and losing to Zen X3D tho
dripkidd@reddit
Do you know what 'average' means?
Firefox72@reddit
I mean, only matching the 5800X3D, which is over 2 years older, isn't exactly "not as bad".
Exajoules@reddit
So something was definitely wrong with PCGH's tests I presume.
ResponsibleJudge3172@reddit
Or the other way round?
Exajoules@reddit
Seems unlikely. Tested twice, and PCGH's results don't really match real-life experience with these exact CPUs either.
My 9800X3D gameplay results are very much in line with HW Unboxed's results, and almost twice as fast as PCGH's. Heck, my 5800X's performance is almost on par with the 9800X3D results from PCGH, which is obviously bollocks.
jerrylzy@reddit
Same. I got 217 FPS avg at 1440p native ultra for one entire match.
BNSoul@reddit
This, exactly the same as my 9800X3D system which is twice as fast as what PCGH pictured in their benchmarks. They might have something misconfigured or maybe a bug going on (like the launcher overlay bug).
Zerasad@reddit
The PCGH tests were inconsistent with themselves. The 5800X3D was somehow 15% faster than the 7950X3D, while the 9950X3D was 64% faster than the 9800X3D; that doesn't make sense.
AccomplishedRip4871@reddit (OP)
https://youtu.be/1jKmxZ4sLkM?t=579 - timecode regarding PCGH.
TLDR: he thinks that their testing methodology is "hard to believe".
autumn-morning-2085@reddit
The 14600K vs 9800X3D results there make no sense lmao, why would anyone take those results at face value? Of course they are the result of bad testing methodology.
JonWood007@reddit
Or the 5800x3d vs 10700k.
jerrylzy@reddit
I just went through a full match in Breakthrough initiation at 1440p native Ultra + TAA with the GPU power-limited. I got 217 FPS avg from my 9800X3D with HVCI on, so it's safe to say 129.7 FPS at 1080p is a complete fabrication.
makistsa@reddit
Steve, if you are lurking here, can you tell us what RAM you are using?
6000 became the default for some reason because AMD can't really handle higher than that. Use 8000+ for once, like der8auer did, instead of doing the same thing over and over again.
At least have the test system at the start of the video.
HardwareUnboxed@reddit
6000 was never the default for Intel. For Alder Lake I used DDR5-6400, for Raptor Lake DDR5-7200 and for Arrow Lake CUDIMM DDR5-8200 has been standard, used from day one.
AccomplishedRip4871@reddit (OP)
Or just watch the video before complaining - it's the first section of this video (after the introduction).
Test configuration [9800X3D&5800X3D RAM used]
makistsa@reddit
The Intel one is missing; that's why I said to have that one at the start of the video.
Dawid95@reddit
He mentioned it in the previous benchmark video: https://youtu.be/NWIOU15pNpA?t=148
Intel platform was tested with DDR5 8200MT/s CL40
buildzoid@reddit
in the previous video he mentioned he used some 8000+ rated kit for arrow lake.
_PPBottle@reddit
In BF2042, RAM speeds did not seem to matter much, even on non-3D-cache processors; maybe that's also the case here.
Pumciusz@reddit
They say it in the video: 5800X3D with 3600 MT/s CL14, 9800X3D with 6000 MT/s CL30.
makistsa@reddit
For the intel
Necrone00@reddit
That part is from his previous video; it's CUDIMM 8200 CL40.
makistsa@reddit
Thanks!
Lambehh@reddit
There is quite literally a section of the video called test configuration that explains what each system uses. Did you even watch the video or just skip to the end?
makistsa@reddit
I am talking about the intel 265k
Necrone00@reddit
That part is from his previous video; it's CUDIMM 8200 CL40.
Lambehh@reddit
Ok then you are correct!
Necrone00@reddit
8000+ on AMD only increases fps by around 1-4% compared to 6000; not really worth it.
Pillokun@reddit
The 5800X3D here is slower than my tuned 12700K paired with a 9070 XT at 1080p low with FSR AA (native). Settings make that big of a difference, even at 1080p and even with a 5090. Heck, even the 9800X3D is just as fast; my 7800X3D is faster, or again just as fast, if we look at the fps span here.
conquer69@reddit
You are testing with different settings and a different gpu which has less cpu overhead. Can't really compare the data like that.
Pillokun@reddit
Sure. I got another 9070 XT and sold my 4090 just because I want the highest possible fps, and Nvidia has the overhead in some games like WZ and the BF6 beta, so both of my desktop PCs have a 9070 XT.
This was just a quick run without any serious recording effort, as the other system could have used the capture card instead of the phone I put on a tripod. But you can see that the CPU is still very capable in combination with the 9070 XT; it is actually slower than my 7800X3D, but not by much at all. I tested different settings, and the BF6 beta does not seem to load the CPU more at higher settings, at least I haven't noticed it yet.
I will of course tinker more with the BF6 beta and my 2 systems next time, as digging into settings and how different platforms behave in the titles I'm interested in is something of a hobby of mine.
cowoftheuniverse@reddit
I'm sure a tuned 12700K is quite fast, but remember to consider your settings when comparing. People are reporting that settings make something like a 20% difference to CPU load, so they don't just change GPU load.
Another possible issue is whether there is any driver overhead difference between AMD and Nvidia in this game.
Plank_With_A_Nail_In@reddit
Result: both CPUs play the game more than well enough, but one is better than the other - and you can only tell that if you measure the difference.
conquer69@reddit
You can definitely tell the difference between 260 and 160 fps. Especially if you have one of the new 240hz oleds.
EndlessZone123@reddit
I get more fps in Battlefield than in Rivals, which is supposed to be the more esports-oriented title.
BNSoul@reddit
Some people here (and in other related threads) are suggesting that the Intel CPU can improve its relative performance going from "barren" areas of the map to the hotspots where most players clash - nonsense. Also, PCGH says that in a certain area of this specific map the 9800X3D averages 120 fps while this other Intel CPU gets almost double that framerate. I don't know what they're doing, since I've been playing that map for some days now and my 9800X3D achieves 2x the framerate that PCGH got (9800X3D, PBO +100, 6400 CL30 1:1, per-core CO from -23 to -32).