[Fully Buffered] Battlefield 6 on AMD FX...it's possible (no TPM required)
Posted by KingPetunia@reddit | hardware | View on Reddit | 64 comments
spacerays86@reddit
30-44 fps at 768p, some stutter, 98% CPU usage at 4.65 GHz
zoon_zoon@reddit
Considering that there's bf6 players younger than the cpu, I think it's still impressive
BlueGoliath@reddit
Making people feel old smh.
nightstalk3rxxx@reddit
It's so crazy to think about. In 2012 those 2002 CPUs seemed so ancient to me, and now the FX is basically that...
Intrepid_Lecture@reddit
2002 CPU = 2 GHz NetBurst, 1 core
2012 = 4.5 GHz Piledriver, 8 cores, 2x the IPC and about 4x the ST perf
It's about a 40x increase in MT perf.
PD to Zen 1 was 2x MT; Zen 2 was 2x Zen 1 MT; Zen 5 is about 2x Zen 2 MT, so... 8x-ish MT and 4x-ish ST.
Not quite the same uplift, but still.
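Rough sanity check of those multipliers in Python (the numbers are the estimates above, not benchmarks):

```python
# Back-of-the-envelope math from the estimates above (not measurements).
# 2002: 1-core NetBurst @ 2.0 GHz -> 2012: 8-core Piledriver @ 4.5 GHz
clock_ratio = 4.5 / 2.0                 # ~2.25x frequency
ipc_ratio = 2.0                         # ~2x IPC estimate
st_uplift = clock_ratio * ipc_ratio     # ~4.5x single-thread
mt_uplift = st_uplift * 8               # 8 cores -> ~36x, i.e. "about 40x"

# 2012 -> today: per-generation MT doublings (PD->Zen1, Zen1->Zen2, Zen2->Zen5)
mt_since_fx = 2 * 2 * 2                 # ~8x multi-thread
print(f"2002->2012: ~{st_uplift:.1f}x ST, ~{mt_uplift:.0f}x MT")
print(f"2012->now:  ~4x ST, ~{mt_since_fx}x MT")
```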
KingPetunia@reddit (OP)
And in 2012 you most certainly couldn't run the newest games on a CPU from '02... those became obsolete almost overnight around '08 or so...
einmaldrin_alleshin@reddit
Looking back, it's insane how quickly PC hardware became obsolete back then. Imagine having bought a 4080 when it was new, and struggling to even run games at full resolution now. But that was pretty much the expectation when you bought hardware around 2000.
hollow_bridge@reddit
I believe you're thinking of SSE4.1. The FX processors did support that; it was the previous-generation K10 that became partially obsolete because of it.
KingPetunia@reddit (OP)
True
Exciting-Ad-5705@reddit
There's bf6 players younger than the 30 series
certainlystormy@reddit
...5 years old?
Exciting-Ad-5705@reddit
Yes
KingPetunia@reddit (OP)
Oof that's interesting to think about lol
According_Spare7788@reddit
Possible? Yes. Playable? Hell no.
EndlessZone123@reddit
Depends on the 1% lows. You'd be surprised how many people played shooters at 30-40 fps.
shroddy@reddit
Isn't that still many console games even these days?
ParthProLegend@reddit
I play Alan Wake 2 at 15-20, same goes for Hellblade 1. Even Marvel Rivals.
BrushPsychological74@reddit
It's like playing through molasses.
letsgoiowa@reddit
Cap at 30 and use a controller and it's like the good old days of Halo lol
KingPetunia@reddit (OP)
I've played shooters in that FPS range. If it's running consistently and the input lag isn't terrible, you can get away with it, but most modern games start to do funky things at low FPS.
Noreng@reddit
The funny part is that it could be seriously improved, since a 9590 should be capable of more than 4.65 GHz, and a memory overclock would be extremely helpful.
JaredsBored@reddit
Hell, I got 5.3 GHz out of an FX-8350 after I stuck my PC's intake out the window on a sub-zero Fahrenheit night. It didn't last that long, but I was young and dumb, and learned a lot.
It didn't even overheat with a D-15 at full speed; I just pumped too much voltage and really degraded it. It still booted last time I tried, but it sucked power and was only stable at like 2-something GHz lol
Bugajpcmr@reddit
I had an FX-8350. It was thermal throttling non-stop. I undervolted it and lowered the frequency to get more stable performance, but it still wasn't the best experience. I decided to switch to Intel's i5-4690K and it was way better. Now AMD's Ryzen is king.
nightstalk3rxxx@reddit
Not sure why the downvotes, because what you say is true: the FX really wasn't a crazy good processor back then, even getting beaten by older Athlons in gaming.
Intel was crazy ahead in those times, but they really started to enjoy their monopoly a bit too much; after Skylake it went downhill hard.
nismotigerwvu@reddit
Comments like these cement just how old I really am. Things have ebbed and flowed quite a bit over time, but more or less the performance crown belonged to AMD from the launch of the original Athlon in 1999 until the release of Conroe in 2006. Granted, P6-based Pentium IIIs weren't terribly far behind (and had the lead for short periods here and there), but AMD held the lead on both frequency and IPC, with the biggest gap coming during the NetBurst era (a truly dreadful design). Similarly, there were some bright spots for AMD during Intel's run, where the Phenom II in particular was a very solid upper-midrange platform and the Athlon IIs derived from it were solid midrange-to-budget choices when taking cost into consideration.

That said, the construction cores (Bulldozer, Piledriver, Excavator) were even less competitive than NetBurst. There was essentially no redeeming factor to them (outside of super-niche APU-based builds where you could have a somewhat useful rig for less than the cost of a decent GPU). Either people have forgotten or simply weren't around for this era, and they apply the Ryzen shine to those pitiful FX chips.

Also, Zen 1 wasn't really all that great for gaming either; it was good value at the 1600-1700 model range, but it was still a generation or two behind Intel (again, the current success of the line gets retroactively applied here). Saying FX wasn't a crazy good processor is like saying malaria is an okay disease.
nightstalk3rxxx@reddit
Yeah, Zen 1 was really lacking, especially in the beginning. Its IPC still wasn't that strong, but it was basically a better FX: many cores, but this time at least with decent IPC and somewhat okay power consumption, which also sucked on those FX chips.
AMD really has come a long way since then, and I am very grateful for that, because I don't want to imagine where we would be right now as consumers otherwise.
nismotigerwvu@reddit
I think you're being a little too harsh on Zen 1 there. Even the top FX models struggled to match mid-to-low-end Intel offerings even in the tasks they excelled at (multi-threaded, integer-heavy workloads), whereas Zen 1 could claim some strategic wins (more in the HPC/server realms it was designed to thrive in) and generally landed closer to Haswell despite competing directly against Skylake. Again, there was no shame in snagging a Zen 1; the 1700X and 1600X were really the sweet spot for almost everyone at that point in time. The awesome thing is that those builds can (and in my case do) run a 5800X/5800X3D and remain VERY competitive, what, nearly a decade later. Conversely, many of those Intel builds from that era are e-waste now.
svenge@reddit
Zen 1's memory controller was completely ass, though.
nismotigerwvu@reddit
Yeah, the whole memory subsystem was basically released in beta form. I believe it was confirmed that the L2 latency was curiously high because the chip shipped with debug settings in the firmware. It's hilarious how much Zen+ gained from actually optimizing these sorts of things (and from the clock headroom the optimized process afforded, of course). If AMD hadn't been so broke at the time, I imagine that's about how Zen 1 would have launched.
Bugajpcmr@reddit
Just talking from experience, the FX had good specs on paper but in gaming it wasn't that good.
nightstalk3rxxx@reddit
Yeah, there was a whole lawsuit over calling it the first 8-core consumer CPU, because technically it was more like 4 modules with 2 cores per module.
It had horrible IPC compared to Intel and even some Athlons, resulting in very poor performance. Just imagine 8 cores in 2012; not even today do games utilize 8 cores reliably.
soggybiscuit93@reddit
FX had 4 "modules".
Each module had a single front end, L1 cache, and FPU, but 2x the ALUs.
AMD claimed they were 8 cores because the CPUs had 8 ALUs. But an ALU is just a subcomponent of a core, and in every other respect it was 4 cores.
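A toy sketch of that layout, assuming the simplified module description above (Python; the names are illustrative, not a hardware model):

```python
from dataclasses import dataclass, field

# Illustrative sketch of the Bulldozer module layout described above;
# resources are simplified labels, not a cycle-accurate model.
@dataclass
class Module:
    front_end: str = "1x shared fetch/decode"
    fpu: str = "1x shared 256-bit FPU (usable as 2x 128-bit)"
    int_clusters: list = field(default_factory=lambda: ["ALU cluster 0", "ALU cluster 1"])

chip = [Module() for _ in range(4)]                      # FX-8350: 4 modules
marketed_cores = sum(len(m.int_clusters) for m in chip)  # 8 integer "cores"
shared_fpus = len(chip)                                  # but only 4 FPUs / front ends
print(f"{marketed_cores} integer clusters, {shared_fpus} FPUs")
```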
noiserr@reddit
Nvidia does something similar with how they count CUDA cores.
ComplexEntertainer13@reddit
They have actually flip-flopped between architectures in how they count.
Turing, for example, had independent INT/FP units, which is why the jump in CUDA core counts was so large with Ampere, since the count has always been based on FP-capable units.
But actual realized performance in gaming was nowhere near that jump as a result, which is why the 2080 Ti trades blows with the 3070 despite the latter having 30%+ more "CUDA cores".
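To put rough numbers on it (the core counts are the public specs; the INT32 instruction-mix share is an assumed ballpark, not a measurement):

```python
# Rough numbers behind the 2080 Ti vs 3070 comparison above.
# Turing SM: 64 FP32 + 64 dedicated INT32 lanes (only FP32 counted as "cores").
# Ampere SM: 64 FP32 + 64 shared FP32/INT32 lanes (both counted).
turing_2080ti = 4352   # RTX 2080 Ti CUDA cores
ampere_3070 = 5888     # RTX 3070 CUDA cores
print(f"Nominal gain: +{ampere_3070 / turing_2080ti - 1:.0%}")  # ~+35%

# If roughly a quarter to a third of a game's instruction mix is INT32,
# many of Ampere's shared lanes are doing work that Turing's counted
# cores never did, so realized FP throughput lands close to parity.
```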
YNWA_1213@reddit
But they've gone back and forth on the ratios a half dozen times since they unified the shaders with Tesla. It's always fascinating to look back through GPU performance across the eras and see how manufacturers are really chasing optimizations for the latest rendering techniques, just to have to pivot every time the calculus shifts.
rilgebat@reddit
Single L1I. Each core had a dedicated L1D. The FPU was also really 2 independent FPUs when not executing 256-bit wide ops.
xternocleidomastoide@reddit
Those FPUs used a single scheduler, so they could only be used as 2 superscalar FPUs under the same thread.
That architecture was more like 2 independent threads that could each use a superscalar integer unit while sharing 1 superscalar FPU.
So basically, for stuff that was FP-intensive, like games, it looked like a 4-core, whereas for more integer-heavy use cases, like productivity, it looked like an 8-core.
rilgebat@reddit
Not according to John Bridgman's statement here
xternocleidomastoide@reddit
That John Bridgman is repeating what I just said regarding the shared superscalar FPU unit.
rilgebat@reddit
Unless there is something I'm not understanding, this claim:
Does not appear to be repeated in this statement:
Nor in:
xternocleidomastoide@reddit
Oops, sorry, I misread. His claim is wrong then.
The scheduler in the FPU cluster for AMD Family 15h is superscalar, not multithreaded, for the uOp bundles it gets from the instruction-fetch front end.
Which is why it sucked for FP loads (in terms of scalability).
rilgebat@reddit
Do you have a citation to support this claim? I can't make a judgement call myself, so it's your word against 2 AMD employees.
I would earnestly like to know more though, FX was an interesting architecture despite its flaws.
xternocleidomastoide@reddit
Not off the top of my head. I am just going with what I remember from comparative analysis decks (I was at AMD's direct competitor at the time). The integer clusters weren't SMT, so it wouldn't make sense for the FP to be. Family 15h was doing multithreading at the CMT level (not SMT).
It was an interesting arch, just not a good one for the use cases it would commonly execute. It was very similar to Sun's Niagara (which makes sense, because some of the folks from that team went over to AMD).
Toojara@reddit
On paper, but in practice it's a bit more complicated. The modules are split in a way where you can't get great performance from them with just one thread. The scaling ratio in FP from one to eight threads is typically ~6-6.5, only slightly worse than a "real" eight-core at ~7, which is really not a good thing: it just means the single-thread baseline is weak.
Practically though, the performance issues mostly stem from poor cache and memory latency, with a few other quirks.
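Restated as per-thread efficiency (illustrative only, using the midpoint of the ranges above):

```python
# Scaling figures from the comment above, restated as per-thread efficiency.
fx_fp_speedup = 6.25      # FX, FP workload, 1 -> 8 threads (~6-6.5x)
real_8c_speedup = 7.0     # conventional 8-core, ~7x
print(f"FX:            {fx_fp_speedup / 8:.0%} efficiency")   # ~78%
print(f"'real' 8-core: {real_8c_speedup / 8:.0%} efficiency") # ~88%
# Near-linear scaling despite 4 shared FPUs mostly means the one-thread
# baseline was weak, not that the module design scaled gracefully.
```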
KingPetunia@reddit (OP)
Yeah once AMD caught up enough on IPC, it really went downhill fast... then came Intel's debacle on 10nm...
Helpdesk_Guy@reddit
What?! No. Intel's 10nm™ cluster-f–ck had already been brewing since 2012, and by 2015 they got the first tape-outs with horrendous single-digit yields, yet still pretended and publicly claimed that 2016 would see 10nm in volume, then 2017, then 2018, then 2019, then 2020 … until finally, in 2021, it was "good" enough.
So by the time AMD had Ryzen in 2017, Intel was already full-stop into their 10nm™ sh!t-show and pretending to have "shipped" Cannon Lake by December 31, 2017.
Intel got effed only by themselves, royally, out of incompetence/arrogance/hubris. AMD then just casually dropped by to kick them down the cliff with Ryzen, Threadripper and Epyc.
So Intel dug their own grave years prior; AMD just made the coffin.
KingPetunia@reddit (OP)
Most games during the time of the FX were still very much single-core limited, which was never FX's strong point...
Valoneria@reddit
I remember rocking a new FX-4100 (I wasn't that good at PC specs) when Rome 2: Total War released.
A single turn took 30 minutes to compute, in real time, in the early game.
Didn't take me long to switch to a 4670K.
Vb_33@reddit
Bro Rome 2.. man I'm old. I remember the hype for Rome 2 like it was 12 hours ago.
xternocleidomastoide@reddit
FX cores were only good at heavily threaded integer use cases, which games are not. So even highly threaded games were basically seeing 4 very narrow (low-IPC) cores at best on this architecture, which is why Intel at the time, with 4 "fat" cores, was destroying these AMD parts at gaming.
HatchetHand@reddit
I'm glad bro is making videos again
YNWA_1213@reddit
I love his mix of novelty and exploration but still having the depth of technical information. Feels very tailored to this sub actually.
HatchetHand@reddit
I think it's charming to be nostalgic for Pentium 4.
He hunts down a lot of exotic hardware and finds ways to show how they can still be interesting in 2025.
itsjust_khris@reddit
This confirms for me something is severely wrong with Battlefield 6 performance on my laptop. I have a 7940HS and I'm getting similar performance, CPU is maxed out but not thermal throttling at all. Tried many fixes online nothing helps.
Phantom_Absolute@reddit
What GPU?
itsjust_khris@reddit
Nvidia RTX 4060. I don't remember the TGP in my particular laptop, but I believe it's around 100W.
I start matches around 70 fps, all lowest settings, Ultra Performance DLSS (native screen res 2560x1600); after a while performance drops to 30-50 fps with severe stuttering. This doesn't occur in other games like Cyberpunk 2077 or Indiana Jones. The CPU is absolutely maxed out the entire time, to the point of Windows itself being sluggish when alt-tabbing. HWiNFO doesn't indicate my CPU is thermal throttling at all; temps are high, but boost is maintained the entire time. Not sure what's going on here, but the game performs unusually poorly. I'm able to play previous Battlefield games at much higher settings just fine.
HugoVS@reddit
Did you test with everything on lowest graphics? Even textures and texture filtering? I was having a similar issue; in the end the "high" texture setting was causing it. Even though the UI's VRAM slider indicator was nowhere near the "max recommended", putting textures on minimum solved my problem.
ryemigie@reddit
Check that your laptop is in the vendor's performance mode.
Phantom_Absolute@reddit
Updated your drivers?
ryemigie@reddit
I remember my FX-8350 being stuck at 70% CPU usage (while the GPU was also at 80%) on BF4, as it only had 4 FPUs… truly incredible what they've done with BF6.
dstanton@reddit
If only AMD had designed those chips with a 1:1 FPU:core ratio. It wouldn't have fixed everything wrong, but it would have resolved a lot of the issues.
Toojara@reddit
It really wouldn't have. In raw throughput the core was ~fine, but the real problems were in branchy code, where the branch prediction and cache meant it couldn't always keep up even with Phenom IIs.
KingPetunia@reddit (OP)
It is a heavy game if you want to turn up the settings, but it does seem to be well optimized.
YNWA_1213@reddit
I was always wondering how they were going to enforce TPM 2.0 on Windows 10. So as long as you have Secure Boot, you can run Windows 10 ESU/LTSC and still play BF6, much like Valorant running on older hardware on Windows 10 but not on Windows 11. It seems the anti-cheats still rely on a Windows hook to tell them whether TPM 2.0 is enabled.