Intel's return to top with Nova Lake looks possible with more IPC uplift vs Zen 6
Posted by Geddagod@reddit | hardware | View on Reddit | 179 comments
The title of the article is:
"Zen 6 is done": Intel's return to top with Nova Lake looks possible with more IPC uplift vs Zen 6
Quoting SiliconFly over on Twitter. Mind you, SiliconFly is not related to the original leak in any way. The chosen headline really speaks volumes about the author's reporting.
RogueHeroAkatsuki@reddit
That leak feels like bullshit:
"Nova Lake CPUs could even catch up to Apple M5 in single-core"
Yeah, Panther Lake is 40-60% behind Apple in single-core and now suddenly the blue team is going to catch Apple?
That would be very un-Intel of them; they are famous for marginal uplifts between generations.
Alive_Internet@reddit
I have serious doubts. Their best current CPUs can't even catch up to the A18 Pro in the Neo.
United-Present-262@reddit
From a hardware engineer's perspective, these benchmarks can be classified, in my opinion, into these classes:
- Passmark measures the raw throughput of a CPU's ALUs/FPUs and VUs. Because its algorithms are low-level and small enough to stay IN the cache, it provides very little insight into the advantages of system-level integration, but it highlights the SILICON (by itself) advantage.
- Cinebench 2024 (Redshift) is designed to test pointer-chasing-heavy workloads (like in video rendering). This benchmark is highly sensitive to memory performance; if you hammer your DRAM speed in the BIOS, you will see a significant drop in single-thread scores compared to its predecessor, R23.
- Geekbench evaluates the entire software stack, making it the least reliable tool for comparing raw silicon or microarchitectural capabilities. It relies on OS-specific libraries, which allows manufacturers like Apple to delegate tasks to dedicated **hardware accelerators** (!!!!) or use specialized, hardware-aware optimization paths that mask the performance of the general-purpose silicon.
The A18 gets smoked HARD if you use Cinebench 2026 (the latest version), because it reveals the performance left on the table as the x86 cores (on AMD's side, at least) are starved by the memory subsystem and suffer from smaller L1I + L1D caches.
Embarrassed_Adagio28@reddit
Where are you seeing a 40%-60% difference in speed? I am seeing a 13% single-core difference between an M5 and a 285K on Passmark.
As an M1 owner whose chip is supposedly significantly faster than my 5700X3D in both single-core and multi-core, Apple's benchmarks are fake as fuck, because my 5700X3D shits on my M1 in every way except power usage.
RogueHeroAkatsuki@reddit
46% in CB24, 24% in GB6, if you compare single-core performance between the 285K and the base M5.
60% and 39% if you compare against Panther Lake mobile chips.
And no, those are not fake. Just check PugetBench numbers for Photoshop (it's a CPU-bound app) or compilation times. Those are typical tasks where single-core performance shows off, and surprise, Apple wins there.
nisaaru@reddit
A UMA APU with wide, soldered memory will always have an advantage in these tests. GB6 is also, imho, more of a system test.
RogueHeroAkatsuki@reddit
What do you mean by that?
Anyway, Passmark is kinda garbage. Cinebench and Geekbench are a lot better because those benchmarks are based on real-world tasks.
You can also check the most 'raw' and industry-standard benchmark - SPECint 2017.
https://blog.hjc.im/wp-content/uploads/2026/04/SPECint2017-details.png
It's easy to see that the gap between Apple and Intel is huge - there the M4 beats the Intel 358H by 41% in instructions per cycle.
sSTtssSTts@reddit
Cinebench is legit but GB is trash and not at all real world.
SPEC bench isn't worth bothering with either. Everyone plays stupid games with the hardware configs and compiler flags to pump their scores there too. Just look at real-world benches for games and various software.
nisaaru@reddit
Didn't notice you mentioned the base M5 spec; I assumed M5 Pro/Max with a 256-512-bit wide memory bus. About Geekbench: it uses OS libs, so there is a lot of optimization potential in macOS spreading the workload over all units of the APU. At least I expect there is far more integration between CPU/FPU and GPU for cache-coherence advantages in Apple's designs, vs a design where the GPU might just be some module added to the PCIe lanes for design flexibility in Intel's product lines.
RogueHeroAkatsuki@reddit
But is this 'cheating'? Performance is always a mix of both software and hardware, just like having a hardware accelerator for video codecs is not wrong. What matters is that the task gets done faster. No one is preventing Intel from creating an operating system for their own CPUs too.
There is no rose without thorns, though: Apple's unified memory has higher latency compared to traditional RAM, which obviously is not ideal for the CPU.
nisaaru@reddit
I am not saying it is cheating, or that it's not useful information for users, but it is not the full picture.
nanonan@reddit
The 285K has a PL2 "Extreme" setting of 295W. Is that an advantage?
ML7777777@reddit
The state of this sub I swear...
RogueHeroAkatsuki@reddit
Yeah, he is in denial. Big conspiracy theories that Apple is paying huge sums for benchmarks. It doesn't matter that almost every test (with software optimization) shows how powerful those chips are.
System0verlord@reddit
60% faster in cinebench single threaded, 40% faster in geekbench single threaded.
Your 5700x3D has a geekbench score of ~1,940/10,000
Your M1 has a score of 2,400/8,400
An M5 has a score of 4,300/17,000
An M5 Max has a score of 4,300/30,000
So your M1 is faster in single threaded applications than your 5700x3d, but slower in multithreaded stuff, and both get absolutely smoked by a base M5.
BlueSwordM@reddit
To be fair, my 5900X actually gets 2412, last I checked, in the latest Geekbench 6.
It'd be really interesting to see how fast a fully optimized 9950X would be.
System0verlord@reddit
9950X3D does 3520/22200ish.
CalmSpinach2140@reddit
Maybe because it's got a huge chunk of L3 cache on the X3D AMD chip. And what's the RAM on the M1, and is the M1 in an actively cooled device?
Geekbench shows the 5700X3D being faster in MT, and Cinebench shows the same. The M1 is faster in ST in both benchmarks.
Also, Passmark is very synthetic and not at all optimised for ARM. It lists the M4 as slower than the M3.
https://browser.geekbench.com/v6/cpu/compare/17499608?baseline=17503195
Geddagod@reddit (OP)
The fact that you are having to compare a DT-class chip vs a laptop one is pretty telling.
For the M5 vs the top-end PTL chip, Intel's newest laptop-generation chip, the gap is ~40% in GB6 and ~55% in Cinebench 2024.
It's weird how so many benchmarks that run natively on both ARM and x86 have Apple ahead or on par in ST. Guess SPEC2017, Cinebench 2024 and 2026, and Geekbench 5 and 6 are all Apple benchmarks.
Never mind that both Intel and AMD regularly use those benchmarks in their own marketing slides too. And SPEC2017 is regularly referenced at technical conferences; even RISC-V companies use SPEC2017 and SPEC2006 to estimate perf and IPC.
jocnews@reddit
He's comparing backwards though, and you appear to do so too. If Apple has a 60% higher 1T score in Cinebench 24, that doesn't mean Intel is 60% behind, unless you want to fail math tests in grade school.
RogueHeroAkatsuki@reddit
Yeah, I fixed it. It's nitpicking over words (the idiom was wrong), but anyway, the gap is huge. That's all that matters. And x86 can't do anything about it. Qualcomm, ARM and the other ARM core designers will only move forward and leave x86 behind.
Geddagod@reddit (OP)
Well, as I said in my comment above, M5 vs PTL, as in how much faster the M5 is vs PTL. Not PTL vs M5, as in how much slower PTL is versus the M5, or what percent of the M5's speed PTL reaches.
Funnily enough, if you go check the comment I was replying to, that guy was also measuring how much faster the M5 is versus PTL when he cited Passmark and quoted a 13% difference. Not how much slower PTL is versus the M5.
cabbeer@reddit
yeah, that would be the biggest jump in a generation since Core 2 Duo came out.
Hifihedgehog@reddit
Well, people doubted Core 2 Duo as well, though I seriously doubt this one. If it split the difference, though, I think many people here would be happy campers! AMD is in serious need of a wake-up call, since they have become very Intel-like, and in many respects worse, with their even worse naming system (who rebadges all generations of your in-production products as seemingly current generation to purposely confuse buyers?).
the_dude_that_faps@reddit
Let's not get carried away. It's not like they're releasing 10% performance boosts that require a new motherboard year after year. They're still a long way from Intel's worst.
Hifihedgehog@reddit
No, but the naming nonsense, where they rebrand older Zen generations as current generation and you literally need a decoder ring for the four-digit codes to know which one it is, is more peak Intel than Intel even was.
the_dude_that_faps@reddit
Is it really more tho?
Hifihedgehog@reddit
More of a different flavor. AMD, right around the start of Zen 4, stopped playing underdog and is now being opportunistically good enough, since their competitors aren't pushing them hard enough. While this isn't what Intel did, the naming nonsense is indeed something that older Intel would have considered and perhaps introduced if they'd had their way.
taz-nz@reddit
The Core 2 Duo got a lot of its IPC gains because Intel reclaimed them by abandoning the P4 NetBurst architecture and returning to an architecture (Pentium M) based on the Pentium 3, which had higher IPC than the P4.
Intel tried to hide this fact by only ever selling the P4 at higher clocks than the P3, but reviewers back in the day did clock-for-clock tests, and the P3 was noticeably faster than the P4.
Intel had a lot of easy wins with Core 2 Duo: it was a proper dual core, not two separate cores communicating via a slow FSB and chipset like the P4-D; it had improved caches compared to the band-aid fixes on the P4 Prescott; and it had faster memory. Combine those with the higher IPC of the P3/PM and you get the huge leap that was C2D.
It is unlikely they can find similar levels of gains with their current architecture, as they don't have any obvious weak points to address for easy gains; they are going to have to work hard on architectural tweaks to gain performance.
The Apple M5 had more cache, more clock speed and higher memory bandwidth than the M4 - all easy wins - plus some architecture improvements, making for solid all-round gains, but the M-series in general benefits hugely from its closely integrated, high-bandwidth, low-latency memory.
AMD's Zen 5 is clearly being held back by the fact that it shares the same IO die and memory controller as Zen 4, so AMD has an easy target for Zen 6: a new IO die with an improved memory controller, and possibly lessons learned from Strix Halo on closer package integration of the CPU and IO dies, all before they make any CPU core architectural or clock speed improvements.
monocasa@reddit
I wouldn't say they were "hiding" it. The P4 just targeted a different design space than the P3. It was designed to be more easily clocked higher, with a much longer pipeline (literally some pipeline stages were just for wire delays), at the expense of IPC, to get to the same perf overall. It failed because of the end of Dennard scaling. But had that not hit the industry like a brick wall, and had they been able to hit 10GHz like originally planned in the roadmap, the P4 would have wiped the floor with the P3. The second-order physics rules kind of changed out from under the design team.
taz-nz@reddit
Intel effectively refused to address the IPC loss; they just doubled down on the "more clock speed is more better" marketing until, as you said, they ran into a brick wall, at which point a CPU directly derived from the P3 architecture, with 30-40% lower clock speed, surpassed it in performance with ease.
So, while the P4 was designed to be easier to clock, time proved that if the same effort had been put into the existing higher-IPC architecture, they would have achieved higher performance sooner by clocking the P3 architecture at a slower rate. A Pentium 4 would have needed to run at 8GHz to match a Core 2 Duo at 4GHz.
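As a rough sanity check on that figure, treating performance as simply IPC × clock (a toy model that ignores memory and pipeline effects; the numbers are the ones claimed above, not measurements):

```python
# Toy model: performance ~ IPC * clock (ignores memory, pipeline stalls, etc.)
c2d_clock_ghz = 4.0   # Core 2 Duo clock from the claim above
p4_clock_ghz = 8.0    # clock the claim says a Pentium 4 would have needed
print(f"Implied C2D IPC advantage: {p4_clock_ghz / c2d_clock_ghz:.1f}x")  # 2.0x
```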
monocasa@reddit
It's not just marketing; it was a valid way to use process supremacy. Yeah, like 30-some-odd pipeline stages, but a much faster clock in exchange should actually be a net win.
Yeah, they were literally targeting 10GHz in the space that would be competing with 4GHz Core 2-style cores.
Once again, Dennard scaling hit the whole industry like a brick wall. Stupider-but-faster cores would have made sense had the physics not reached an inflection point faster than these large design cycles take.
It's the same root issue that made the Cell fail at its original design goals as well (it was originally targeting 4.5GHz in a consumer electronics device).
sSTtssSTts@reddit
Getting the clocks up is indeed a valid way to increase performance, but it was pretty obvious early in NetBurst's introduction that it wasn't going to hit 10GHz on air out of the box.
And without those crazy high clocks, NetBurst was going to be a flop performance-wise*.
Intel also initially tried to sideline the Israel team working on the Pentium M/Banias as a laptop-only chip. It was only after it became apparent to the execs that NetBurst couldn't hit its clock targets that they began pushing Yonah and then Merom into the desktop and other markets. Sure, it morphed into the Core 2 Duo, which was a great chip, but it all started off as a Plan B for laptops only!
*Sales-wise it was pretty much guaranteed to do well, since AMD was heavily fab-limited back then (and still is).
monocasa@reddit
Nobody said they were going to hit 10GHz initially; they were targeting ~2006 for that. However, NetBurst did hit the clock targets expected of its first iteration. The first NetBurst came out in 2000, and it wasn't until about 2002 that it was clear the free GHz lunch was over.
https://en.wikipedia.org/wiki/Dennard_scaling#/media/File:CPU_clock_speed_and_Core_count_Graph.png
And then Intel started on what would become Yonah in 2003, cancelled Tejas fully by mid-2004, released Yonah in January 2006, and had Conroe on the desktop side in July 2006. That's about as fast as you can move in the development timeline of leading-edge chips like this (which are generally on a five-year or so pipeline).
sSTtssSTts@reddit
Yeah, they were supposed to eventually hit 10GHz, but it quickly became obvious to Intel internally that it was going to be impossible. Two years into an architecture that was intended to last much longer - most of a decade, IIRC - is quick.
Publicly, Intel kept acting like everything was fine for a while, even though internally they were in freak-out mode. 2002 is years away from Intel publicly stating NetBurst was going away! I will say Intel handled NetBurst's failure better than Itanium's, but they were still clearly in denial for a year or two before they finally settled on the Pentium M derivatives as the path forward. That was why they tried to push things harder with Tejas, but that was an even bigger failure to hit clock targets, with even bigger hits to IPC.
That the development went fairly fast is a credit to those design teams in Israel, not to Intel execs' business or tech acumen. Again, they handled NetBurst's failure better than Itanium's, but it was open knowledge that the execs were pissed about it and in denial. They never would've tried to develop Tejas otherwise.
ComplexEntertainer13@reddit
The NetBurst approach would have worked 5-10 years earlier; it just happened to coincide with us hitting the "brick wall" of scaling.
It also did scale well initially. The Pentium 3 only reached 1.4GHz on 130nm, while the Pentium 4 Northwood launched all the way up to 3.4GHz with the Extreme Edition.
You could probably have squeezed another couple hundred MHz out of Tualatin on the same node, but 2x the frequency is a reasonable assumption, and a 3.4GHz Northwood P4 is faster than even a 1.7GHz P3 Tualatin.
taz-nz@reddit
Intel already knew NetBurst was dead before the release of Prescott; Prescott's large cache was an attempt to hide the bottleneck created by the 800 FSB (200MHz quad-pumped) without having to re-engineer motherboards and chipsets. I had a D1-stepping Northwood P4 2.8C that I overclocked to 3.5GHz with a 1000 FSB (250MHz QP) and DDR2-1000 (500MHz), and it was noticeably faster in benchmarks and games than my flatmate's 3.6GHz Prescott, despite my CPU having half the cache and a 100MHz clock penalty.
The 130nm Tualatin Pentium 3s were designed for laptops and optimized only for power efficiency; they never received the die layout optimization and other tweaks needed to reach higher clock speeds. The Banias Pentium M was an optimized Pentium 3 and shipped at 1.8GHz on 130nm, but it was still optimised for power efficiency, not clock speed. The Dothan Pentium M on 90nm was when Intel realised this was their path forward: despite being designed as a laptop chip, if given the power budget of a desktop CPU and pushed to its clock limits, it was faster than the desktop CPUs of the day.
This video shows what the performance was like compared to the P4 and Athlon of the time:
AOpen i855GMEm-LFS and Pentium M 780 'Dothan' - Overclocking and conclusions
Intel didn't want the P3 to compete with the P4; all the chips based on the P3 architecture post-P4 release were designed energy-efficiency-first, performance second, so we will never know what speeds they could have reached on 130nm if Intel's design targets had been different.
Hifihedgehog@reddit
IPC is a tricky thing, and quite frankly it is not a good measure of performance. Clock is just the general rate of the machine; the goal is to get more clocks in with less energy consumed. The best measure is performance-per-watt. Theoretically, you can have a device with a lower IPC yet higher performance and lower power draw than another. That's why I really don't use it much except for intra-family comparisons. Cross-brand, it can sometimes be awfully misleading.
ComplexEntertainer13@reddit
Aye. What people forget is that the fastest gaming CPU from Intel in the Socket 478 days was not the Northwood P4s, but a mobile Pentium M overclocked and running with a Socket 479 adapter/board on desktop. And it was often battling the A64 when overclocked, despite the suboptimal platform/setup for it.
Those of us in the OC community knew that C2D was going to be a beast once it was clear it was based on the old P3/Pentium M architecture path.
sSTtssSTts@reddit
Yup. I remember everyone getting hyped around late 2005.
Once it became obvious that NetBurst was on the way out and Conroe was coming - based on the fantastic Merom/Yonah cores but with even better clocks and IPC - people went nuts. Lots of people thought AMD might not even be able to stay in business.
asfsdgwe35r3asfdas23@reddit
It is impossible for Intel to match Apple Silicon's single-core performance. Apple achieves it using massive cores; the die size of the M chips is huge. They can do that because they do not need to make money from the chips themselves. Intel needs to make a profit on each chip it sells, so it needs smaller die sizes to keep the chips cheaper to produce. They tried to compete with Lunar Lake and they almost went bankrupt.
GHz-Man@reddit
Lol, who almost went bankrupt?
Exist50@reddit
That's not true at all. Intel's P-cores are pretty massive. Intel just doesn't have as good a core design as Apple does.
It's perfectly normal for its class.
Geddagod@reddit (OP)
Largely due to the huge L2, something I doubt would need to be as large if Intel's L3 and memory fabric teams were better. Not counting that (and even still including the L1.5 cache), LNC would end up marginally smaller than an A19 P-core.
RogueHeroAkatsuki@reddit
To be honest, I wonder why core size is even an argument. For me it's like phone/laptop battery capacity: who cares if it's 50 or 70 KWh? What matters is how long the device can run on it. In the case of cores: how fast and efficient it is. And well-used size can actually benefit both.
ClassroomDecorum@reddit
70Kwh is a car sized battery
RogueHeroAkatsuki@reddit
haha, right. thanks. Fixing
ResponsibleJudge3172@reddit
Whatever is negative.
Just like how Panther Lake is apparently not great because it uses the 18A node, yet no one cared about that when comparing Zen 3 vs Raptor Lake.
CalmSpinach2140@reddit
Keep in mind, the M5 P-core now includes 1MB of L2 cache. The A19 Pro doesn't have this.
Noreng@reddit
Even disregarding the L2, Lion Cove is a large core. It's not quite on the level of Apple's "super" cores, but still quite a bit larger than Zen 5. In many ways, it is somewhat puzzling that it doesn't achieve better performance.
RogueHeroAkatsuki@reddit
The reason is a bit different, and technical: x86 can't really compete with ARM on IPC (instructions per cycle).
Based on CB24, IPC is as follows:
Intel Core Ultra X9 388H (newest Intel flagship) - 100%
X2 Elite 96 - 125%
M5 - 177%
It basically means that to equal the single-core performance of an M5 at a 4.6GHz clock, Intel would need to boost to 9GHz.
------------
The reason for this lies in the architecture: thanks to fixed instruction lengths, ARM chips are a lot more effective at decoding instructions. x86 processors have variable instruction lengths, so decoders need to figure out where instructions start and end. It's a much more complex process than on ARM.
That's also why Apple can make big cores. For Intel it wouldn't make much sense, as they would gain almost nothing performance-wise: decoding would still be the bottleneck, and those extra ALUs would simply be starved.
That's why x86 is losing. Not just because of efficiency - performance-wise ARM is also better. If Apple wanted to make CPUs that suck as much power as Intel/AMD's offerings, then even on desktops it wouldn't be close.
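A toy sketch of the fixed- vs variable-length boundary problem being described (illustrative only; as replies below note, real x86 cores mitigate this with predecode bits, uop caches and other tricks):

```python
# Toy illustration: with fixed 4-byte instructions (ARM-style), the Nth
# instruction starts at byte 4*N, so a wide decoder can locate many starts
# at once. With variable-length instructions (x86-style), each start depends
# on the lengths of everything before it, so boundaries are found serially.
lengths = [3, 1, 5, 2, 4]  # pretend byte-lengths of five x86 instructions

fixed_starts = [4 * i for i in range(len(lengths))]  # known instantly, in parallel

variable_starts, offset = [], 0
for n in lengths:  # must be walked in order
    variable_starts.append(offset)
    offset += n

print(fixed_starts)     # [0, 4, 8, 12, 16]
print(variable_starts)  # [0, 3, 4, 9, 11]
```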
Michelanvalo@reddit
If ARM is that much more efficient than x86, then shouldn't the industry be pushing to move to ARM? Or is it because Intel and AMD co-own the licensing for x86 that they don't want to have to pay ARM?
DerpSenpai@reddit
The industry is pushing ARM though. Intel and AMD fight against it because if everything remains x86 they have a monopoly. If everything goes to ARM, AMD or Intel might not exist in 10 years.
Tgrove88@reddit
Doesn't AMD have ARM chips coming out (Soundwave)?
Geddagod@reddit (OP)
That's been rumored for a while; who knows what happened to it lol.
IIRC, weren't there rumors it would be fabbed at Samsung at one point, too?
Tgrove88@reddit
I just looked it all up. There are already shipping manifests of AMD Soundwave chips showing up. I think there are talks of AMD using Samsung 2nm for Epyc server chips.
Geddagod@reddit (OP)
Source? Mercury Research has that figure much lower (eyeballing it, 15-20%), which surprised me when I first saw it, cuz I thought it would be much higher.
DerpSenpai@reddit
It's 25%, not 35. My bad
Also
https://www.tomshardware.com/pc-components/cpus/report-claims-arm-chips-will-power-90-percent-of-ai-servers-based-on-custom-processors-in-2029-x86-and-risc-v-on-the-outside-looking-in
RogueHeroAkatsuki@reddit
It's already happening:
https://www.tomshardware.com/pc-components/cpus/report-claims-arm-chips-will-power-90-percent-of-ai-servers-based-on-custom-processors-in-2029-x86-and-risc-v-on-the-outside-looking-in
The only reason AMD and Intel are still in the game is compatibility with legacy x86 programs and games.
JQuilty@reddit
That claim is limited to servers for AI, where the GPU is the important component.
nanonan@reddit
It applies to the vast majority of the datacentre industry, which is mostly platform-agnostic.
JQuilty@reddit
The article literally specifies AI servers in the headline, blurb, and article itself.
DerpSenpai@reddit
They can compete when they switch to ARM; the reason they don't is to protect the monopoly.
ExeusV@reddit
https://chipsandcheese.com/p/arm-or-x86-isa-doesnt-matter
RogueHeroAkatsuki@reddit
This article is a bit misleading. The reason ARM is ahead is not the energy efficiency of decoding. The secret of modern CPUs lies in out-of-order execution. Sure, it's possible to do that with x86 too, but ARM decoders will always 'hit' an instruction to decode, while x86 decoders may miss. It means an ARM CPU can look far ahead in the instruction queue and decode more instructions, and a lot of those can be executed simultaneously. Decoding on x86 is a much more complex process; AMD is even using machine learning there.
However, it's obviously true that the ISA doesn't determine energy efficiency. History just proved it was faster to make an efficient processor fast than the reverse - to make a fast CPU also efficient. Both AMD and Intel are catching up there nicely though, especially in low-load scenarios.
jaaval@reddit
Decoding is not the big performance bottleneck in modern processors. Out-of-order queues are long, and decoders run quite a bit ahead of execution. And while x86 decoding is "complicated", the complication is fairly minor compared to the size of modern processors. We are talking about checking a dozen or so bits to know where the next instruction is. That is not many transistors.
bookincookie2394@reddit
Variable-length instructions are not a limiter on decode width these days. There are plenty of techniques to overcome that. The variable-length "tax" is only really for efficiency.
Numerlor@reddit
And comparing IPC between completely different architectures with different instructions makes total sense.
RogueHeroAkatsuki@reddit
Well, if you compare strictly IPC as instructions per cycle, then it doesn't make much sense.
By IPC I meant more like points per clock; maybe I should use that term instead. I used IPC because I feel people are more familiar with it and will understand. I will update my comment though.
Qsand0@reddit
The Panther Lake X7 is 30% behind the base M5 in single-core. Not sure where you're getting 60% from.
RogueHeroAkatsuki@reddit
https://nanoreview.net/en/cpu-compare/intel-core-ultra-x9-388h-vs-apple-m5
jocnews@reddit
You can't into reading or % math or both.
RogueHeroAkatsuki@reddit
? Another one in the denial phase?
The % there shows how much faster the M5 is compared to the 388H. What more do you expect? I'm confused.
exscape@reddit
M5 is 60% faster, therefore 388H is 37.5% slower.
If the 388H was 60% slower, the M5 would be 2.5x faster.
For example: if the 388H has performance 100 and the M5 160, 37.5% less is (1 - 0.375) * 160 = 100.
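A quick sketch of that asymmetry, using the same illustrative 100/160 scores:

```python
# "X% faster" and "Y% slower" are not the same number.
ptl, m5 = 100, 160                               # the illustrative scores above
print(f"M5 vs 388H: {m5 / ptl - 1:.1%} faster")  # 60.0% faster
print(f"388H vs M5: {1 - ptl / m5:.1%} slower")  # 37.5% slower
```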
RogueHeroAkatsuki@reddit
Ah, that. Thanks, it makes sense, though it's nitpicking over words. The gap is big anyway.
jocnews@reddit
No, your statement was misleading by a lot (60% vs 37.5%); pointing that out is not "nitpicking over words".
ritzk9@reddit
He never mentioned 30% though. He claimed Apple was 60% faster and showed that. The other guy brought in the 30%; not sure why this guy got downvoted.
Qsand0@reddit
Yeah (I'm the guy that replied). He didn't say 60% slower but 60% faster, so according to the link he shared, he's right.
Downvoters gotta chill.
jocnews@reddit
No, he had it wrong, but he has edited his posts now.
kingwhocares@reddit
I still don't see the point of single-core performance. Also, Intel isn't beating Apple unless it can get its "Software Defined Super Cores" working.
bookincookie2394@reddit
That proposal is entirely unrelated to what Intel's actually doing. Most of the inventors on that patent have already left the company.
wankthisway@reddit
Single core performance affects the vast majority of tasks.
RogueHeroAkatsuki@reddit
Really? Single-core performance is a lot more important for the average user:
it means the current M5 will still offer 'smooth' performance a few years longer than Panther Lake, even if the Intel chip has the upper hand in multi-core.
waxwayne@reddit
They weren’t always marginal uplifts. I remember Core Duo.
resetallthethings@reddit
Sandy Bridge was a pretty large jump too, as I recall.
waxwayne@reddit
God those were the days.
darthmaule_II@reddit
I remember the original Intel Core was a 40% increase... but yes, lately it's been marginal.
BigBangBoomerang@reddit
Worse, Nova Lake is next year's chip, by which time the M6 will be out.
snollygoster1@reddit
Intel is trying their 14th gen strategy: what if we just put in more voltage?
Geddagod@reddit (OP)
That's not a part of the leak, that's just SiliconFly's delusions.
Though I'm also assuming he is referring to NVL-S vs Apple, not NVL-H.
DerpSenpai@reddit
Yes it's his delusions
Known_Union4341@reddit
My prediction:
- Intel technically higher performance by a small margin, but at dramatically higher power draw. They overclock the piss out of the chips by default, and power efficiency goes out the window.
- AMD behind by a very small margin on non-X3D, but ahead by a moderate margin with X3D.
- End result: gamers still buy AMD, Intel gets relegated to heavy content creation, editing, streaming, etc. Intel will be forced to cut prices and AMD keeps theirs sky-high. No major market shakeup.
Question-master3@reddit
Nova Lake has bLLC CPUs, which should compete against AMD's X3D CPUs. Obviously we'll have to wait tbh, but gaming-wise I'm predicting they'll be basically the same.
Kryohi@reddit
The latency of that bLLC is going to be much higher vs AMD's X3D cache, since from what we know it's not vertically stacked but is closer to being another chiplet on the package.
ResponsibleJudge3172@reddit
Intel's Arrow Lake latency is already higher than AMD's, but that doesn't stop them vs base Zen 5.
Kryohi@reddit
It forces Intel to put an abnormally large L2 on their CPUs...
Geddagod@reddit (OP)
I don't think it's abnormally large when the Exynos 2600's C1 Ultra also has 3MB of L2 cache. Ofc, unlike LNC, they don't have the 192KB L1.5, but what they do have is more than 2.5x the L1D cache.
Known_Union4341@reddit
I'm aware, but I'm expecting AMD's next-gen X3D to have sizable improvements. Leaks have the bLLC chips at around 10% faster than current-gen X3D like the 9800X3D (which is great), but all AMD needs is zero improvement to the X3D tech and a sizable single-core improvement and they'll beat 10% no problem. The thing is, they'll absolutely have improvements to the X3D technology AND a single-core performance uplift. I just don't see Intel topping that with the bLLC chips.
Question-master3@reddit
True, that's quite a valid point. With how tight the competition is looking to be, I'm really looking forward to seeing what the pricing structure will be like.
Tuarceata@reddit
Yes, this is almost exactly my take. Intel's trajectory is improving, but I doubt they can catch up in performance AND efficiency in a single generation. Even if they do come close, I'm not sure people would want to be early platform adopters after Intel's recent mishaps.
Uptons_BJs@reddit
If true, this could be a big moment for Intel. They probably will never win the Apple account back, but major IPC lifts and single-core performance improvements could allow them to hold off Windows-on-Arm efforts.
If you compare Snapdragon X2 Elite vs Panther Lake benchmarks, Intel is still behind on single-core performance. They need to regain the single-core performance crown before app compatibility on Windows on ARM gets better, haha.
Geddagod@reddit (OP)
The gap between WoA and Intel mobile ST perf is so large that I doubt NVL-H will be able to catch up, even with a strong tock core's worth of IPC uplift (15-20%).
the_dude_that_faps@reddit
A couple of gens ago I might've been hopeful. Right now, I don't even know if they care. AMD and Intel are in deep shit if they can't even catch up to Qualcomm.
Qsand0@reddit
They won't catch up, because ARM will be coming with their own +20% YoY knockout punch for the umpteenth time.
DerpSenpai@reddit
Not 20%, but I bet the X3 Elite / Snapdragon 8 Gen 6 is 15% faster than its predecessor. It's using N2, after all.
Vb_33@reddit
Intel could offer double the performance of Apple's chips and Apple would still just stick to their vertical integration.
Sylanthra@reddit
Ah yes, the unreleased Intel CPU will beat the unreleased AMD CPU. What a great article.
ComputerEngineer0011@reddit
Isn't Intel going to use TSMC for Nova Lake anyway? If they're both using N2 or N2P with similar core counts, they're going to have similar performance. At that point you're just picking your favorite color, iGPU, motherboard, or cache size, and I don't think Intel is going to borrow 3D V-Cache from AMD.
RHINO_Mk_II@reddit
Real "my dad could beat up your dad" energy.
soggybiscuit93@reddit
The funniest part was that they quoted SiliconFly, who isn't even a leaker. He's just some guy who got laughed off the Anandtech forums, so now he spends his days shitposting on Twitter.
ResponsibleJudge3172@reddit
It's not quoting SiliconFly. It's quoting HXL
cultoftheilluminati@reddit
Worse, they claim it can even beat the Apple M5, for some stupid reason.
lol, so more like "my dad can beat up Hafthor Bjornsson" 😂
w1na@reddit
Is the M5 that hard to beat? We're talking a desktop CPU vs the entry-level M5 (not Pro, not Max, etc). I think in multi-core it should be able to beat it, although the M5 will use low power while the Intel goes 200W+?
-protonsandneutrons-@reddit
The M5 is not interesting as a GB6 multi-core target; the OP tweet is just inane. The M5 is a fanless CPU - older mid-range x86 CPUs (e.g., i5-14600K) beat it in GB6 multi-core.
The M5 is that hard to beat in single-core: 4300+ points in GB6, 200 points in Cinebench 2024, 15.35 points in Geekerwan's SPEC2017 geomean.
Sources: Mobile Processors - Benchmark List - NotebookCheck.net Tech, https://youtu.be/BKQggt9blGo?t=97
The hilarious part (for Apple): the M6 is the launch competitor to Nova Lake (NVL) & Zen 6, not the M5. The M6 is rumored to release later this year, just like NVL and Zen 6.
smarlitos_@reddit
The 14600K is a beast tho. The -600Ks always had most of the performance of the i7s and i9s, especially for gaming, just a few fewer cores. Still a 14-core beast that uses a good bit of energy tho.
venfare64@reddit
It would be even more hilarious if the M6, using N3P, wiped the floor with Zen 6 and Nova Lake despite Zen 6 and Nova Lake using the N2 node.
VastTension6022@reddit
M6 will almost certainly be on N2
cultoftheilluminati@reddit
In single-core, the base M5 shares perf across the lineup from what I can see; all score around ~4325 on Geekbench 6. It's diabolical, the lead that Apple has.
I can totally see it happening, given the rumors pointing to 52 cores, if Intel pumps ungodly amounts of power through that poor chip tbh, just to juice up their marketing and benchmark numbers (or what the kids call benchmaxxing these days).
klipseracer@reddit
But he can and he will... Just as soon as he is home from buying the milk.
ResponsibleJudge3172@reddit
Better than the YouTube video titled "Intel Nova lake might not be shit"
trmetroidmaniac@reddit
Intel have always promised big, but I can't think of anything recent besides Alder Lake and Lunar Lake that delivered.
DYMAXIONman@reddit
Panther. And Nova has an X3D competitor.
ChemistPretend4636@reddit
Panther lake?
grumble11@reddit
Good on the iGPU; the CPU part is not bad but isn't a big jump over LNL in isopower per-core performance, which means that in terms of single-core performance it's still mediocre relative to ARM.
It's also expensive and unavailable. I still kind of want an X7 358H though; it'd be great for a slimmer laptop that can last all day but still be fun to use.
SomeMobile@reddit
Panther Lake's whole claim was efficiency gains, not performance gains, so they didn't promise something they didn't deliver on.
Shit being unavailable and expensive is not really their fault right now.
Exist50@reddit
Their ST efficiency gains don't seem to have held up vs LNL.
DerpSenpai@reddit
Panther Lake is basically a fat Lunar Lake, performance-wise.
steve09089@reddit
It’s expensive and unavailable due to the PC market basically being dead thanks to AI, not because of anything inherent to Panther Lake.
Vb_33@reddit
Intel never claimed Panther Lake would be a big uplift on the CPU side, i.e. Intel delivered on its claims.
jhenryscott@reddit
Lmao, "relative to ARM", do you hear yourself?
grumble11@reddit
The Qualcomm X2 Elite Extreme is like 30% faster single-core than PTL's top-end chip. Apple's latest M-series is even better. Heck, right now the MacBook Neo is zippier single-core than the PTL chips, using a cellphone chip.
Single-core is what matters most for mainstream consumers browsing and so on, and even for (many) games, which tend to lean hard on a small number of main threads. Multi-core is for workstation stuff like compilation, rendering and whatever, and that's pretty niche.
jhenryscott@reddit
It's apples to oranges. Most people aren't buying an ARM PC unless they are using iOS, and those who buy iOS devices aren't considering x86.
CalmSpinach2140@reddit
I think you meant macOS. People do cross-shop for normal productivity use cases.
jhenryscott@reddit
You are correct.
grumble11@reddit
That is true, but when you look at the hardware itself it is way behind, and software compatibility is slowly improving for both Mac and ARM. It is reasonable to think that ultimately both will gain market share unless x86 improves a lot in power efficiency and single-core. The MacBook Neo alone will be cross-shopped with x86 computers, and many people will just get the Neo.
Qsand0@reddit
Nah, it's also 30% better while on par in multi-core.
CalmSpinach2140@reddit
PTL is on par with the M5 because it has 6 more cores. The high-end M5 Pro/Max SKUs are in a different league though; you need a 285HX drawing 200 watts to beat them.
Qsand0@reddit
No argument there. Just needed to point that out.
RumbleversePlayer@reddit
How about the refreshed Arrow Lake?
Flat_Pumpkin_314@reddit
They just matched the 14900K in gaming, what's so impressive?
a60v@reddit
Matching a 2-3 year old processor's performance is "impressive"?
Flat_Pumpkin_314@reddit
Are you slow? I said “what’s so impressive “
LuluButterFive@reddit
The 14900K is pretty fast, and the 270K costs much less.
VaultBoy636@reddit
It has high OC potential and low power draw. It can easily outperform a 7800X3D in games with a simple OC, and you have a truckload of cores.
windozeFanboi@reddit
I was surprised by that. It fixed some shortcomings of the Arrow Lake design.
III-V@reddit
I don't think they promised big on that
Geddagod@reddit (OP)
I mean, this is a leak, not something official from Intel, so there is that.
Idk how accurate Intel's very early projections to partners are, but ~1 year before the ARL launch we got leaked projections from Intel (presumably given to OEMs) showing ARL as a very small 1T perf uplift over RPL (Igor's ARL leak). So that was pretty accurate.
DYMAXIONman@reddit
Intel messed up with Arrow Lake, but their design team is typically good; it was just TSMC allowing AMD to decimate Intel. I think it'll be pretty close going forward.
SmashStrider@reddit
Are they really writing articles about tweets from random Twitter users now?
Tower21@reddit
Personally, I'm more interested in whether Intel can launch Nova Lake on 14A at any sort of scale. If it can, and with the performance SiliconFly is speculating, it could be a lucrative market for Intel in these supply-constrained times.
I personally hope the AI bubble pops soon, but one can see how it could be very beneficial for Intel if they can execute and it doesn't pop in the short term.
Geddagod@reddit (OP)
Nova Lake is too early for 14A; it's 18A plus some external TSMC node for some compute tiles (rumored to be N2P for the 8+16 desktop ones).
Tower21@reddit
That's my bad; I'm seeing a 2028 ramp for 14A.
Not sure why my brain thought 2026. Maybe just blissful ignorance that more fab space was coming.
Handsome_fart_face@reddit
The last time I bought Intel, they fried themselves, so I'm not buying Intel again. Not that I can afford the RAM anyway.
Nicholas-Steel@reddit
Both Intel and AMD require DDR5, and both have had recent issues with CPUs getting fried regardless of how the consumer used them.
imaginary_num6er@reddit
“Zen 6 is done”
Only for Intel to likely delay the launch window and price these with uncompetitive $/performance value propositions. Intel never misses an opportunity to miss an opportunity with a CPU launch.
jocnews@reddit
That part is ridiculous; they are literally quoting an uncritical internet rando. HXL's information is interesting, but it definitely doesn't tell you who will win this match.
imaginary_num6er@reddit
The same internet randos also suggested "Alchemist+", "DLVR on Raptor Lake", "Adamantine cache", "Big Last Level Cache" and other fake rumors.
EnigmaSpore@reddit
All Intel needs to do is stop f'n around with the sockets. Give us a long-running platform for once. Nova Lake seems like a winner, so stop doing loser shit like having the next gen or two use a new socket. Just stick with one socket for like 5 years. That's it. Do that and I would consider Intel again, but the lack of upgradeability sucks.
GOOGAMZNGPT4@reddit
Sockets are a non-issue for 99.99% of consumers, so it'll never happen.
No one cares that you want to spend $200 yearly flipping mid-range CPUs instead of just buying a flagship product; Intel is not going to build their business model and CPU development around that.
Exist50@reddit
OEMs don't like redesigning their boards every generation either.
EnigmaSpore@reddit
The whole discrete desktop market is moot by that logic.
Servers and laptops rule the industry, and then prebuilt desktops. DIY PCs are a very small part of the game in comparison, but still... for DIY sales, stop f'n around with the sockets. They don't need to change sockets every architecture, but they do anyway.
Their business model doesn't revolve around DIY, which means your point is moot. If they cared about DIY they would listen, but they don't, so AMD gets the business.
Numerlor@reddit
Prebuilts and office PCs still use normal CPUs, so redesigning sockets sooner for bigger CPUs or IO can make sense for those; otherwise you're quite limited in the changes you can make, even down to the positioning of chips on the package.
E.g., for the rumored 52-core Nova Lake with bLLC, I think you'd start starving the CPU of power a bit on LGA 1851, compared to the power consumer CPUs usually push to the cores, at least if you keep the usual safety margins.
erichang@reddit
not 99.99% of people still buying desktop, and certainly not even 90% of people who read those desktop cpu reviews.
HashtonKutcher@reddit
Trouble is, the chipsets still get updated. Just curious: did you have to get a loaner CPU from AMD at some point in order to update your BIOS? I seem to remember that was a thing.
Personally, for me it's a non-issue; 2 gens is enough. Usually I want to upgrade my motherboard anyway after a few years.
Also, I'm just gonna say it because no one else ever will, and I'll eat my downvotes... AMD may be a little faster, but I don't typically buy Intel so that I can get 600FPS instead of 500FPS; I buy Intel because the entire platform is just clearly better. The last time I had AMD, admittedly early Zen, I ran into a bunch of things that I never run into with Intel. There were a couple of instances where a new game came out and I just couldn't play until a patch arrived. Also, I never want to manually adjust memory timings or do any shit like that, and I found a lot of tuning to be necessary. Not to mention the number of seemingly random, unreproducible hiccups, system restarts, etc. was so high compared to what I was used to.
Anyway, my information is probably a bit out of date, but I'll probably just use Intel by default going forward unless things get really untenable for them. Not a fanboy, I swears it, some brand loyalty though. I've been using their shit for 30 years and it's always been good to me.
Hopperj6@reddit
Get that cheap INTC stock before it's too late.
AnechoidalChamber@reddit
Talk is cheap, show me reviews.
SignalButterscotch73@reddit
Pretty sure that would need a 50%+ performance uplift vs current-gen Intel, and I can't see that happening. Arrow Lake isn't Bulldozer levels of bad, but Intel hasn't hit any performance targets since... I actually can't remember. 9th gen?
soggybiscuit93@reddit
IPC != PPC
grumble11@reddit
So many get hyped about Intel products and then they're just 'okay'. LNL is genuinely efficient, but it gets there in part by not being very powerful. PTL isn't much better than LNL in single-core at isopower, but adds more cores and a nice iGPU as a good all-rounder; it's expensive and not widely available though. NVL looks cool, but it has way more cores than most people need - most people need faster cores, not more of them, and need those cores to sip power, not chug it.
It's unlikely to catch up to ARM solutions, since the cores are server-first and they come from an fmax/desktop design culture, not an IPC/mobile culture.
NVL could be a bigger jump than typical, since it's using materially improved P-cores and E-cores, it's improving a node, and it might have some more cache. Maybe what, 25% faster in single-core, driven by the node jump, a big design jump and some fixes to latency and cache? I'm interested to see what happens re: the NVL iGPU; rumours are it'll be quite a bit faster than PTL's.
windozeFanboi@reddit
A 25% single-core gain, plus the 5-7% Geekbench OS penalty, and the Nova Lake you speak of is at Apple M4 single-core performance, whatever the power consumption.
But I don't think we're getting such a jump.
grumble11@reddit
I figure best case you get 10% from the N3 -> N2 node jump, 10% from the bigger-than-expected design jump (Arctic Wolf and Coyote Cove), and maybe 5% from solving the ARL latency issue and bumping the cache up. It won't be at ARM level, but it might go from abysmal to just okay.
Similarly, you might get the iGPU up 10% on the node jump, 5% on design tweaks, and maybe another few percent if they add a couple more Xe3 units. That gets into the teens; it'll hit a bandwidth wall, but it's pretty exciting.
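For what it's worth, those factors compound multiplicatively, which is roughly where the ~25% best case lands (a sketch using the guessed per-factor numbers above, not confirmed figures):

```python
# Independent uplifts compound multiplicatively rather than adding up.
node, design, latency = 1.10, 1.10, 1.05  # the guessed factors above
print(f"Combined single-core uplift: {node * design * latency - 1:.1%}")  # ~27.1%
```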
Mammoth-Plane-6890@reddit
Leaks? Yawn. I'll wait for reviews.
cettm@reddit
They always sell the future.
GuiltyShirt3771@reddit
Sounds like they want to pump the stock