Warhammer 40,000: Space Marine 2, GPU Benchmark (43 GPUs) 522 Data Points!
Posted by b-maacc@reddit | hardware | View on Reddit | 65 comments
berserkuh@reddit
Why is this so CPU dependent lol, there's like 0 systems apart from the horde AI
Aggravating-Dot132@reddit
CPU does AI, geometry, texture streaming, physics.
Geometry and physics here are the key part. The AI has to calculate all those pesky hordes so they don't clip into each other.
Strazdas1@reddit
We really should get another push to put physics on the GPU.
Aggravating-Dot132@reddit
Not needed, really. It will just bloat it with too much stuff.
Look at Starfield. 10,000 round potatoes flying in zero G with collision, and it's barely a problem for modern CPUs.
Strazdas1@reddit
Starfield uses the crappy Havok physics engine like all other Gamebryo (no matter how many times they rename it) engine games. 10,000 potatoes with poor collision is not impressive. Hitman's cloth physics is more impressive than that. GTA 4's driving physics is more impressive than that. And both of those are older.
RedTuesdayMusic@reddit
and wireframe
berserkuh@reddit
This game is pretty much identical to WWZ: Aftermath and the hordes there don't cause this big of a problem.
Soulspawn@reddit
The performance is all over the place. RDNA2 to RDNA3 is a massive difference, and that's just one example: in one test the 4060 Ti beats the 6800 XT at 1080p medium, but bump it to 1440p and suddenly the 4060 Ti is considerably slower than the 6800 XT.
Something isn't right with drivers or settings.
ultZor@reddit
For some reason? The 4060 Ti has a 128-bit memory bus and 288 GB/s of bandwidth; the 6800 XT is 256-bit and 512 GB/s. The higher the resolution, the more bandwidth is required. Same reason the 4070 Ti drops significantly at 4K: its bandwidth is enough for 1440p but constrained at 4K.
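A quick back-of-the-envelope check of those figures (the per-pin data rates, 18 Gbps and 16 Gbps GDDR6, are my assumption from public spec sheets, not from the video):

    # Rough sanity check: bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps)
    def vram_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
        return bus_width_bits / 8 * data_rate_gbps

    print(vram_bandwidth_gb_s(128, 18.0))  # RTX 4060 Ti -> 288.0 GB/s
    print(vram_bandwidth_gb_s(256, 16.0))  # RX 6800 XT  -> 512.0 GB/s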
Agitated-Silver@reddit
Nah, this difference is too much. The 3070 surpasses the 6900 XT at 1080p native ultra, which is crazy; no amount of bandwidth or optimization will make the 3070 faster than an AMD card two tiers above it.
Soulspawn@reddit
It is likely a factor, but I would've expected the 1% lows to take a big hit and they don't. In the video they even say something doesn't look right with the FSR numbers and mention the difference between RDNA2 and 3, so even they notice something isn't right.
Also in this thread, some people are saying the FSR option is a bit buggy.
NewRedditIsVeryUgly@reddit
About his comments on VRAM: 3 years ago, multiple people here were so confident that 16GB (wonder why...) would be the minimum around the time we are now. No amount of explaining about console graphics memory limitations or the prevalence of old 8GB GPUs helped them understand. Most developers don't push the limits of PC hardware; they simply adjust to what the consoles can do and add a bit extra for PC. The sad reality is that console generations are mostly what dictates graphical progress.
Strazdas1@reddit
Yes. According to the devs I talked to, 8GB VRAM is the target for PS5 developers, and probably the same for Xbox since it has the same amount of shared memory.
RedTuesdayMusic@reddit
There are dozens of games already that silently downgrade the visuals if you try to run high+ on 1080p with 8GB or 1440p with 10 and even 12GB. And before Horizon FW was patched to do the same, it was a stuttery mess on such graphics cards.
NewRedditIsVeryUgly@reddit
"Dozens"? I've been watching DF analysis of new games for a while, and I can maybe recall a couple games at most, and barely noticeable even then.
Berengal@reddit
With such a CPU heavy game I'm a bit worried about how good of a metric FPS is. It seems likely there's a bunch of simulations running that don't scale with FPS and would make the FPS numbers not scale linearly with the amount of processing done.
Strazdas1@reddit
It's a good metric. But it's a metric you want to test CPUs with. We need more CPU-testing games.
Deeppurp@reddit
Unfortunately GPU busy is a new metric, and it will be a while before it becomes common, just like with frame time and pacing graphs.
imaginary_num6er@reddit
That 4GB VRAM testing was impressive. The game honestly shouldn't even count as running when you can't see anything.
Strazdas1@reddit
It's funny how aggressively some game engines attack VRAM to make sure the frames remain consistent, even when they have to unload everything.
dedoha@reddit
Certainly it's interesting, but even in a game where 8GB of video memory is enough, Steve can't help himself and keeps fearmongering about VRAM issues.
Sopel97@reddit
tldr; heavily CPU bound
ADtotheHD@reddit
Makes me wonder what this game is gonna look like on PS5/Xbox Series X. Obviously the devs can optimize things better when you know everyone has identical hardware. That said, both PS5 and Xbox are only Zen 2 with peak frequencies between 3.4 and 3.8 GHz depending on the machine. I'm guessing there will just straight up be fewer enemies/less detail, similarly to how empty the cities were in Cyberpunk.
No_Name_Person@reddit
Digital Foundry already released their video on the consoles. TL;DW: they're able to run a locked 30 fps fine with a dynamic 1080-1440p resolution and FSR 2 reconstruction. The 60 fps mode drops the resolution to 720-1080p, but it's CPU bound when there are many enemies, and frame rates drop pretty badly. PS5 can drop as low as the mid 30s and Series X is around 10-15 fps better than PS5 in these heavy areas, but neither hits a consistent 60 fps. Hopefully PS5 gets patched at some point to match the Series X performance more closely; it could at least benefit more from VRR.
ADtotheHD@reddit
Damn.
Kinda feels like this is the first game of this "generation" that is really pushing up against hardware. I use the quotes loosely to encompass the console gen, but also the similar gen of PCs. This game really makes Zen3 look tired and firmly puts 2000 series cards to rest. Hell, 3000 series cards are really only keeping pace with DLSS.
Tasty_Toast_Son@reddit
It's been fantastic finally seeing my 5800X3D really spread its wings. Opening up Task Manager to see anywhere between 78-100% usage was quite the surprise. Not a bad one, mind you; I'm glad to see games are starting to spread wings of their own in regard to scale.
conquer69@reddit
The 2070 Super was getting 78 fps with medium settings at 1080p.
techraito@reddit
Not quite the first, but among the first few. Alan Wake 2 was more the first. It's not even poorly optimized, but a 2080 Ti can't even do 60 fps at 1080p high settings.
riklaunim@reddit
The alleged PS5 refresh will have a bigger GPU section but the same Zen 2 cores. Curious why they didn't decide to upgrade them as well... Still, if they were heavily GPU limited (and they probably are for most newer games), the Zen 2 cores aren't the main issue...
BuchMaister@reddit
My guess: changing the CPU architecture can break more stuff, meaning more time for devs and Sony to make sure everything works well with no issues. Zen 2 at 3.5 GHz definitely shows its limits, especially for keeping 60 fps; raising the clocks to 4.0-4.2 GHz, maybe even to the 3700X boost clock of 4.4 GHz, could see pretty significant improvements (4.4 / 3.5 ≈ 1.26, so up to ~25% in fully CPU-bound scenes). Maybe for the PS6 they should think about getting an X3D chip, probably Zen 6 or Zen 7, but there's still time until then.
Berengal@reddit
Short answer is you can't scale CPU requirements nearly as well or as easily as GPU requirements.
WJMazepas@reddit
"Pro" consoles last gen also didn't have a different CPU. It was the same but with an increase in clock.
I think this will be the same. It will probably run at max clock at all times instead of the variable clock we have today with the PS5.
They do this because it maintains the same logic in both consoles, and the Pro is meant to have better resolution and graphics, not to have more enemies on screen or something like that.
But now that I think about it, they do need to improve the CPU, because RT taxes the CPU a lot, and if they want a better RT console, they need a better CPU.
ADtotheHD@reddit
This is just a guess, but I'd like to think an educated one when looking at what Sony did with the PS4 as well as basic understanding of game development/optimization.
You don't update CPUs at mid-cycle refreshes because it potentially changes how devs approach how the game can execute at a core level on the console. Very generic exercise here, but let's say you have 4 cores total and you start dividing up the work. One core goes to AI. One goes to the number of enemies on screen... and so on. If you revise the CPU up a gen, make it faster, or give it more cores, it can change the entire behavior of how the game functions on that system. Updating the GPU typically has one purpose, which is higher resolutions/fill rate. You can keep the development of the game basically 95% the same, but when it comes to how many FPS and at what resolution it runs, having a "newer" or beefier GPU gives you some different capabilities. That's exactly what we saw on PS4: the PS4 Pro essentially allowed games to get to 1440p or even 4K where the base PS4 couldn't do it. Now with the PS5 Pro, it's probably gonna allow more titles to hit that 4K@60fps mark everyone wants. Mark Cerny actually said in an interview that the PS5 hardware wasn't explicitly designed with 4K@60 in mind and they thought way more devs would target 30 fps in their games. Somehow they got caught off guard with people wanting that buttery smooth experience on their 4K screens.
imaginary_num6er@reddit
It'll probably come to Xbox only as a reverse exclusive compared to Wukong, only for players to be disappointed and experience PS4 Cyberpunk quality.
CosmicNoodle42@reddit
Man... this game looks so freaking good.
RedTuesdayMusic@reddit
Over 600 NOK = high seas for me. Doesn't help that the campaign is short and features the uncustomizable worst marine chapter in 40K.
NeroClaudius199907@reddit
Unironically, I pray they just slap in frame gen and take their time optimizing for CPUs.
TalkWithYourWallet@reddit
It's not about optimisation
Look at the LODs and enemy counts. It's just extremely CPU heavy.
Cutting those back would be making a different game
kindaMisty@reddit
So was World War Z, and it didn't have CPU overhead like this.
TalkWithYourWallet@reddit
Different games.
They aren't comparable, in spite of using the same engine. They're two different tiers of complexity entirely; WWZ won't be as demanding.
kindaMisty@reddit
How so? They should be comparable. They're both mission-based horde games. I do agree that it's cranked up, but the issue here is that the game is severely single threaded and LODs alone won't fix it.
TalkWithYourWallet@reddit
Digital Foundry's testing with an R5 3600 doesn't suggest it's particularly single-thread bound
https://youtu.be/T9CwH7f1l1o?t=12m57s
The R5 3600 is being pegged pretty effectively across all threads
kindaMisty@reddit
I can see Core #5 on my 7800X3D get utilization of 90%+ while the other cores are hardly being touched. But okay!
regenobids@reddit
It's making the best possible use of those threads, but nothing is getting pegged. Seems both I/O and ST performance are lacking.
79-71-75-67-76-76-69-76-74-66-65-62
Doubt more cores is the answer for Zen 2 in any case.
We'll see soon enough. I think 14900K and 7800x3d/7950x3d will differentiate themselves greatly on this title, can't tell which will do a better job
On Zen 3, expect a similar spread for the 5800X, while the 5800X3D is likely to really peg the main threads far closer to 100.
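For anyone who wants to grab a per-core spread like that themselves, a minimal sketch (assumes the Python psutil package; the sample count and interval are arbitrary):

    # Print a per-core utilization snapshot once per second while the game runs.
    import psutil

    for _ in range(10):
        per_core = psutil.cpu_percent(interval=1.0, percpu=True)
        print("-".join(str(round(p)) for p in per_core))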
NeroClaudius199907@reddit
My CPU got hit harder in TLOU, and they had the option to lessen AI quality and stuff. It's not a different game.
ultZor@reddit
You cannot decrease the AI's quality or quantity because it's a multiplayer, cross-platform game. Every player must see the same enemies.
Regarding frame generation, they've already mentioned that it's planned for post-launch.
NeroClaudius199907@reddit
You can optimize the quantity and quality of AI even for multiplayer games.
ultZor@reddit
Yes, and I'd say they already optimized it, because it is running pretty well for what it is, without major dips or stutters. I'm sure there are still some bugs out there, but I wouldn't expect massive gains without massive compromises. For example, the Ryzen 3600X performs about the same as the PS5, which is to be expected, while the 7800X3D gets a massive increase (around 2x) in both average and 1% low fps, and so on.
I wouldn't compare that to TLOU at all; there you can stand in an alleyway looking at a wall, your CPU will get hammered for whatever reason, and the PS5 outperforms much more powerful PC specs.
Shamel1996@reddit
RX 6800 performance is really weird here, hoping it's just a beta driver problem
DazenTheMistborn@reddit
I feel like the disparity between the 7000 and 6000 series AMD cards in general was kind of weird. Maybe this just highlights the optimization focus that the devs/AMD had.
Here's to some quick patches for that and FSR.
Soulspawn@reddit
I noticed the same: the 6700 XT seems only barely better than the 7600, and the 6800 XT gets beaten by the 7700 XT in most cases, which is also odd.
Something isn't right, and then FSR adding only 10% to performance seems crazy.
OftenSarcastic@reddit
So performance on RDNA2 is kinda broken and FSR is kinda broken? Guess I'll skip this for a while.
ShogoXT@reddit
No, it works fine. It's the menu that is borked. Setting it the first time didn't enable it properly for me, and it ran like crap on my 6800 XT and 3950X.
After messing with the FSR slider, it seemingly wouldn't enable until performance mode. Then once there you can go back up to quality.
OftenSarcastic@reddit
The menu worked fine for me. FSR just doesn't seem to have as big a delta between native and quality mode as DLSS does in this game for some reason.
conquer69@reddit
Nvidia loses a lot of performance at 4K native for no apparent reason.
GARGEAN@reddit
FSR is kinda broken in terms of performance, but as far as image quality goes, it is (as usual) way behind DLSS, BUT to my eye better than many other implementations; shimmer and disocclusion artifacts are way less obtrusive than they are most of the time. They seem to have implemented some custom variation of FSR, which is interesting in itself.
masterfultechgeek@reddit
43 GPUs 1 game...
Man, I wonder how this ends...
djashjones@reddit
I'll wait a couple of years until there are fewer bugs/glitches and a cheaper price.
ShogoXT@reddit
FYI the FSR slider is kinda broken until you set it to performance. Then you can set it back to quality.
It didn't enable for me until then and my game was running like crap. So beware of settings not being enabled.
Sentinel-Prime@reddit
Why oh why didn’t they ship this with frame generation
WJMazepas@reddit
They are including it in a future update
TerriersAreAdorable@reddit
Looking forward to the CPU benchmark version of this video since it obviously doesn't push GPUs very hard.
b-maacc@reddit (OP)
Yeah, I'll be interested in seeing CPU benchmarks; looks like a good title for reviewers to add to their game suites for CPU reviews.
riklaunim@reddit
Wouldn't Ultramarines allow testing on Intel only? AMD is clearly Chaos, Nvidia are Necrons or Orks. Deathwatch would make awesome custom PCs though ;)