Intel Core Ultra 300 “Panther Lake” SKU lineup leaked, up to 16 CPU cores, 5.1 GHz boost and Arc B390 Xe3 graphics
Posted by Geddagod@reddit | hardware | 143 comments
grumble11@reddit
While this has pretty much been known for a while, you really do get the feeling, looking at these frequency and power figures, that Intel hoped for more from 18A. It's a good node and a good chip; I think it's targeted and designed well and will do well in the market, but it doesn't seem to improve on N3, and I know they were targeting better frequency and lower power than they got.
Spread_Uranus@reddit
Jaykihn, whose tweets these leaks are based on, says that the design characteristics cannot be extrapolated to the node itself.
So it is less likely that the clocks are limited by 18A. It is just what was decided for this particular product.
Exist50@reddit
Why would they suddenly decide to underclock it to such a degree?
Spread_Uranus@reddit
Give reasons why you claim this is an 'underclock' that was done 'suddenly'.
Exist50@reddit
It's literally a regression from the prior gen, for a node that was supposed to be "leadership", mind you.
I don't know why it's hard to acknowledge that 18A is simply underperforming.
heylistenman@reddit
Not great, not terrible. In defense of Intel, Panther Lake is pretty much a pipe cleaner product for 18A whereas Arrow Lake was manufactured on a fully mature N3 node.
Helpdesk_Guy@reddit
For what it's worth, I'm glad and somewhat relieved that the never-ending charade of 18A (and the ultimate sh!t-show it eventually amounted to again, after their blatant vapor-ware 20A stunt) is finally about to come to some lousy end, after years of constant shady re-schedules, even more bi-weekly Intel lies and basically +2 years of delays again.
So it will be a bit of … 'consolation' I guess, not having to constantly read that 18A-shit in every darn Intel news item.
hardware2win@reddit
What delays? 2025 node in 2025?
Helpdesk_Guy@reddit
What de— Seriously now!? Were you living under a rock the last few years, by any chance? o.O
18A is neither a 2025 node (it was once supposed to be 2H24), nor does it really come to market in 2025; the bulk of it (read: 'cause yields, again!) will be 1H26, most likely toward the very end of the first half of 2026.
So it's a 2H24 node which Intel is only able to offer in actual volume basically 1–2 years later. That's called »delay«.
hardware2win@reddit
How is it not a '25 node when there will be products on shelves in January '26?
Exist50@reddit
It was supposed to be a 2024 node. And given the perf downgrade and admitted yield problems, seems more like they're only delivering the originally promised metrics in '26 or even '27.
hardware2win@reddit
What's the name of the close-the-gap initiative?
5 nodes in 4 years... announced in 2021.
2021 + 4 = 2025
Exist50@reddit
It was supposed to start with Intel 7 in '21 and end with 18A readiness in H2'24.
https://img.trendforce.com/blog/wp-content/uploads/2023/10/17144859/intel-4-years-5-nodes.jpg
And now at the very end of '25, we're getting something more like what they originally promised for 20A, if that, and now they're saying yields won't be "industry standard" until as late as 2027. It's a disaster by any objective standard. Their failure with 18A literally got the CEO fired...
Helpdesk_Guy@reddit
Nope, it's actually even worse, as they made several changes midway.
Intel's initial 5N4Y claim actually was [10nm • Intel 7 • Intel 4 • Intel 3 and finally 20A].
Only later, in the middle of it, did they pull a bunch of switcheroos, and it became:
[~~10nm •~~ Intel 7 • Intel 4 • Intel 3 • 20A, and then 18A] — dropping 10nm™ at the beginning while adding 18A at the end, effectively shifting the whole roadmap window years into the future, by a whole process generation.
ProfessionalPrincipa@reddit
Arrow Lake was the pipe cleaner for 20A which in turn was deemed unnecessary to move into production because everything was going so well that they could use that groundwork to move straight to 18A. That's right from Intel's mouth. Now the goalposts shift once again.
Visible-Advice-5109@reddit
We all know that's a lie and 20A was a disaster. Why repeat it now? 18A may not be perfect, but it exists and these chips are coming soon.
Exist50@reddit
Probably in response to all the people assuming there must be some kind of upside.
Visible-Advice-5109@reddit
Those "people" are either bots or trolls. No sane person thinks Intel is crushing it with 18A.
heylistenman@reddit
All the cool human males are shitting on Intel!
Exist50@reddit
You'd be surprised how many people are willing to take Intel at their word on it.
Geddagod@reddit (OP)
Go to the intel stock subreddit lol
Visible-Advice-5109@reddit
Kinda making my point.
Geddagod@reddit (OP)
haha
Geddagod@reddit (OP)
I'm fairly sure that's what u/ProfessionalPrincipa was also implying.
JuanElMinero@reddit
IIRC Arrow Lake's compute tile is fabbed on N3B, which is pretty much TSMC's earliest N3 node for mass production.
Its other tiles are on a mix of mature 5nm- and 7nm-class nodes.
heylistenman@reddit
Perhaps fully mature was an overstatement, but it wasn’t the first product on N3B, the node was already in HVM for over a year at that point IIRC.
Geddagod@reddit (OP)
True. But the node itself may be kinda mid. It seemed much, much more complex than N3E, and it's not as if the node was any sort of highlight in the products Apple used it in.
Going back to A17 reviews and such, there were serious questions raised about N3B vs N4P.
SkillYourself@reddit
According to jaykihn0 it's a heat issue: BSPD and transistor density are flattening the VF curve at the high end. The ST power curve is substantially improved at lower frequencies over Lunar Lake and Arrow Lake.
hwgod@reddit
He doesn't make this claim. BSPD in particular was supposed to help most at high voltage, and the density does not appear to meaningfully differ from N3, which it still regresses against.
Best-case scenario, the node, for other reasons, can't hit voltages as high as N3, and the top of the curve is just capped. But you wouldn't expect thermals to be such an issue then.
Doesn't seem to be any evidence of the core-level VF curve benefiting from the node either.
soggybiscuit93@reddit
Says who?
BSPD improves signal integrity, lowers IR Drop, and helps improve density, at the expense of heat dissipation.
Idk how a technology that is known to increase heat density was supposed to improve fMax
hwgod@reddit
Intel. It was part of their own published results about PowerVia on the Intel 4 test chip. The gains range from ~negligible at low-V to around 6% at high-V.
Also explains why TSMC is not adopting it in the same way.
ResponsibleJudge3172@reddit
fMax at the same power consumption is higher, true. But the temperature is just too high. Both temperature and power consumption are important considerations.
Spread_Uranus@reddit
Jaykihn attributes only density to the supposed heat issues, not BSPD. Which makes sense because the process guy and Stephen Robinson both talked about having to space out the signal and power wires when working with BSPD.
If they prioritized density, then it is possible that frequency could not reach the theoretical maximum due to crosstalk or heat issues.
Geddagod@reddit (OP)
I'm curious to see the core's power curve itself. Or the core+ring, I forget what exactly Intel measures here for their software power counters.
Intel claims PTL has outright lower SoC power than LNL and ARL, so if the power savings are coming from better uncore design and not the node gains themselves....
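For anyone who wants to poke at those counters themselves, here's a minimal sketch, assuming a Linux box with the intel-rapl powercap driver (whether the domain maps to core, core+ring, or the whole package is exactly the ambiguity above, so treat the path as an assumption; reading it may also require root):

```python
# Minimal sketch: average package power from the Intel RAPL energy counter
# via Linux powercap. intel-rapl:0 is assumed to be the package domain;
# ignores counter wraparound, which is fine for a short window.
import time

RAPL = "/sys/class/powercap/intel-rapl:0"

def read_energy_uj(path=f"{RAPL}/energy_uj"):
    with open(path) as f:
        return int(f.read())

e0, t0 = read_energy_uj(), time.time()
time.sleep(5)                                  # measurement window
e1, t1 = read_energy_uj(), time.time()
print(f"avg package power: {(e1 - e0) / 1e6 / (t1 - t0):.2f} W")
```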
Exist50@reddit
If anything, seems strictly worse than N3, even compared to older iterations. That's not great for a node that was supposed to go toe to toe with N2.
LowerLavishness4674@reddit
18A has always been low density, but expected to compensate for it with very efficient frequency scaling past peak perf/W. It was always going to struggle in laptops, where you rarely push past the best perf/W due to thermal and battery constraints.
Wait for the desktop/server chips before you call it. With fewer thermal constraints and much higher power budgets, they should be able to push well past the perf/W peak, where it should continue to scale well and for a long time before it starts hitting severely diminishing returns.
Helpdesk_Guy@reddit
… and when is that supposed to happen? Nova Lake will be on TSMC's N2. Which desktop CPU line will be on Intel 18A, then?
Sani_48@reddit
Nova Lake is full on N2?
Helpdesk_Guy@reddit
That's what we know so far, yes. At least the performance-parts. The lower end is supposed to be 18A, I guess?
Exist50@reddit
High end NVL is N2, low end is 18A. At least for compute dies.
Sani_48@reddit
So the iGPU is probably TSMC too?
Do we know if that's because of poor performance or poor manufacturing capacity?
Exist50@reddit
18AP, iirc. Good enough (or at least, assumed to be when originally planned) not to be worth the extra cost of TSMC.
Poor performance. Intel can't afford to be constantly a node behind. It's just too big of a gap.
Exist50@reddit
Quite frankly, the only people "expecting" that seem to be people on the internet unwilling to admit Intel underdelivered with 18A. There's been no objective reason to believe that was even a target for the node. If anything, the exact opposite. This was supposed to be the node where Intel focused more on low voltage for data center and AI.
That is not the case. ST turbo in -H series laptops has always been high, nearly on par with the desktop chips. At the power budgets being discussed, there's plenty available to hit whatever the silicon is capable of, and Intel's never been shy about pushing the limits of thermals.
And again, when you adjust for the power/thermal envelope, you still see a clock speed regression vs even ARL-H. The best possible outcome for Intel is actually that they can't hit the same peak voltage but look OK at low to mid.
Server is typically low voltage, or mid voltage at most. It's lower down the curve than even laptop chips. And what desktop? Intel's using N2 for their flagship NVL chips, which should really have been an obvious indicator for how 18A fares.
LowerLavishness4674@reddit
Then there will be room to grow as the node matures, right?
If it theoretically should be capable of delivering more, it likely will do so eventually as the process matures. Isn't that what happened with 10nm (after a long, long, long time), which is what made it into a 7nm-class node?
Then again, it has always been said that 18A is supposed to be a relatively low-density, high-frequency node due to the very good voltage control enabled by BSPD. If that's the case, it will only really be allowed to shine in high-powered desktop chips like CPUs and GPUs, whereas power-limited things like Panther Lake won't be able to reap as many of the benefits of 18A.
grumble11@reddit
I actually suspect otherwise - speculation on my part, but I suspect that they were running into unexpected process issues when they pumped the frequency and/or voltage higher so they had to scale back performance. It almost feels like a process bug that they discovered when making the actual chips. They've also talked about kind of weak yields on 18A on their latest earnings call.
That makes me optimistic about the improvement opportunity in a more mature 18A if they can chase down these issues, and perhaps even more in 18AP if they can clean up these bugs. If they have a 'double upgrade' opportunity in both fixing the discovered 18A issues and implementing the originally planned 18AP improvements then it could end up being a somewhat bigger jump.
Don't think it'll be an N2 beater though. 18AP might beat the best of N3 in some applications, we'll see.
For that you'd want to look ahead to 14A where the CFO mentioned on the earnings call (and he was somewhat realistic in other areas so it's more believable here) that 14A has been a positive surprise and they're actually pretty excited about it. It's possible that if Intel gets more experience on High-NA versus TSMC that they might start having an edge later on. Maybe.
ProfessionalPrincipa@reddit
There's no doubt that given enough resources they can fix the node to the point where it's good enough for HVM in high performance chips. It's what they did with 10nm after all but it took them several years to go from poorly yielding Cannon Lake/Ice Lake to okay Tiger Lake and good Alder Lake.
If we take the talk about 18A "margins" from the recent analysts call to be a euphemism for something like node yields and a stand in for when it will be "fixed" then we could be talking 2027 before 18A gets to that point. A planned 2024 product delivered in 2027 is a big yawn and another stake in IFS.
hardware2win@reddit
How can it be a '24 product in '27 when it was a '25 node in '25 with products on shelves in January '26, rofl?
grumble11@reddit
2027 just seems like hitting baseline parametric yields. Being able to get to an actual improvement in terms of performance is a whole other thing. Not an easy road.
Geddagod@reddit (OP)
I'm excited for 18A-P. Intel subnode improvements have always seemed to bring decent uplifts (Intel 10SF being a notable example).
LowerLavishness4674@reddit
Yeah, I know they talked about yield issues, but everything indicated it was not a case of manufacturing defect density, but rather something else.
That would imply issues with hitting target frequencies, which, as you say, they should be able to iron out in due time. That is, as long as it isn't a fundamental issue, which I have a hard time believing given how unconcerned they seemed about it.
I'm fairly confident it will be ironed out before Clearwater Forest hits mass production.
Geddagod@reddit (OP)
Intel themselves claim they won't be hitting industry acceptable yield till 2027 for 18A.
Geddagod@reddit (OP)
NVL-S being external adds serious question marks to this.
I wonder if post NVL-S they go back to using further iterations of 18A though for even desktop, if they feel confident enough that they can hit high enough Fmax (even throwing away power and density) on those compute tiles.
Maybe for RZL or Titan Lake in '28 (going off memory for codenames lol).
On paper, Intel has caught up to TSMC (even N2) in SRAM density.
Generally speaking, Intel uses way higher-capacity caches than AMD; AMD uses smaller but faster caches.
This seems extreme. Intel has claimed BSPD is only what, high single digits at best improvement for Fmax IIRC?
soggybiscuit93@reddit
I wonder if the issues Intel's facing with 18A partially explain TSMC's very conservative approach to adopting BSPD.
I imagine in reality, 18A's (and AP's) fMax limits are almost entirely the reason for N2 in NVL-S.
Geddagod@reddit (OP)
Didn't TSMC delay their BSPD node too? I know they moved it from N2P to A16, but I'm unsure if there was a timeline shift in that as well.
Honestly, what's going on with N2P/N2/A16 seems to be kinda weird, timeline wise.
iDontSeedMyTorrents@reddit
6%
Exist50@reddit
And yet we see the exact opposite here. The leaker claims these go up to 65W, even 80W TDP. There's plenty of power to hit any ST boost the silicon can handle, yet it regresses vs N3B ARL. And that's with a year to refine the core as well.
Server chips are low voltage. Even lower typically than mobile chips. High-V only matters for client.
There is no reason to believe that at this time. Notice that Intel themselves are using N2 for their next flagship silicon, including desktop.
LowerLavishness4674@reddit
There is every reason to believe 18A will scale well with more power. That is literally the main point of backside power delivery. It offers better voltage control and less current leakage, leading to higher efficiency at any given voltage, but especially past the point where current leakage starts becoming more of an issue, i.e. at very high frequency.
A pipe cleaner running at low frequencies is to be expected. Wait for the desktop chips and you will see P-cores hitting 6+ GHz advertised boost, possibly 6.5 GHz.
Exist50@reddit
No, it's not. The main long term advantage of BSPD is to reduce the pressure on the metal layers as they do not scale like logic does. Better PnP is another advantage, but secondary. No one cares much about high-V these days.
As a reminder, ARL-20A was supposed to be the pipe cleaner, and they claimed they cancelled it because 18A was doing so well! PTL was supposed to be the volume product, a year after 18A was nominally supposed to be ready.
What desktop chips? PTL-S was cancelled long ago, and for NVL, they're moving the good compute dies (including desktop) back to TSMC on N2. A decision which should demonstrate the node gap quite clearly...
Also, ST boost for -H chips is in the same ballpark as desktop ones. They're already at the high end of the curve.
Helpdesk_Guy@reddit
Well, we already had rumors of Intel struggling to hit actual intended frequency-goals, no?
Though especially the actual performance-metrics of the node will be quite sobering I think.
The power draw of these SKUs is officially a 25W TDP. In reality, these parts are allowed to draw as much as 55W (2+0+4 or 4+0+4) and 65/80W (4+4+4 and 4+8+4) respectively.
So it remains to be seen whether these parts are any more efficient in daily usage, or just a side-grade to Lunar Lake.
soggybiscuit93@reddit
Intel's existing 2+8 parts have a PL2 of 57W, so seems to me OEMs are permitted to target the same PL2 they've been targeting on U series.
Which is a fairly large drop compared to existing H series. ARL-H PL2 is up to 115W, PTL-H PL2 will be up to 65/80W
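If you want to check what limits a given machine actually ships with rather than trusting leaks, here's a rough sketch on Linux via powercap (constraint_0 = PL1 / constraint_1 = PL2 is the usual convention, but treat the mapping as an assumption and verify per platform):

```python
# Rough sketch: read the programmed package power limits from Linux powercap.
# By convention constraint_0 is the long-term limit (PL1) and constraint_1
# the short-term limit (PL2); check the constraint_*_name files to confirm.
base = "/sys/class/powercap/intel-rapl:0"

for idx, label in [(0, "PL1 (long term)"), (1, "PL2 (short term)")]:
    with open(f"{base}/constraint_{idx}_power_limit_uw") as f:
        print(f"{label}: {int(f.read()) / 1e6:.1f} W")
```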
Helpdesk_Guy@reddit
AFAIK Intel was at 165W in mobile back then …
Exist50@reddit
The article claims it's "TDP", which would typically refer to PL1. Might be leakers getting their terminology mixed up, but I wouldn't necessarily assume a large drop in power limits.
soggybiscuit93@reddit
They definitely have their terminology mixed up. There's no chance PTL-H has a 65W/80W PL1.
Exist50@reddit
Willing to believe that, though then I have to wonder why it would be so hard to cool at such a substantially reduced PL2. Also if/how they could pitch this as an HX replacement.
soggybiscuit93@reddit
I imagine all core boost is probably easier to cool than single core turbo boost if heat density is an issue.
I think they'll probably have ARL-R for HX
Exist50@reddit
But that's exactly it. They said 65W vs 80W was in relation to cooling challenges, but either is more than sufficient for max ST boost. Idk, maybe reading too much into too little information.
At one point it sounded like they wanted to pitch PTL as a partial replacement. Guess it's not good enough though.
Exist50@reddit
Tbh, LNL battery life (note: not the same thing as loaded efficiency) with -U/-P/-H perf levels and market reach would still be a very good thing. It's just a shame to see Intel finally straighten out their SoC architecture only to be hamstrung by a subpar node.
Helpdesk_Guy@reddit
Of course, Lunar was quite competitive, yet it was mainly so on performance due to Intel basically sugar-coating the living benchmark-bar out of it via the on-package LPDDR5X-8533 RAM. That OpM majorly polished its efficiency metrics.
Though looking back over recent years, that's how Intel always masked rather underwhelming progress on their process technology: hiding the actual architectural inefficiencies and shortcomings behind an invisible wall of obfuscation, by only ever bundling it with newer stuff like newer PCI-Express versions (4.0/5.0) or newer, faster RAM generations.
So Lunar Lake, while basically very strong, was mainly so due to being propped up by OpM, and TSMC's processes of course.
Exist50@reddit
Nah, credit where credit is due. LNL made a ton of fundamental design changes that PTL should also benefit from. Yes, they also benefitted a lot from both having an actually decent node and on-package memory, and no, they're not on par with the likes of Apple or Qualcomm, but merely making ARL monolithic on N3 would not have delivered these gains.
Helpdesk_Guy@reddit
I said verbatim that Lunar Lake was quite competitive, and I meant it, unironically. It took long enough.
I haven't disputed that — the modular concept was surely ground-breaking for Intel, and urgently needed!
Though it's kind of ironic how Intel all by itself proved themselves liars, when it took them basically 6+ years since 2017 to eventually come up with a mere chiplet copycat and their first truly disaggregated chiplet-esque design …
And that for a design which, in their world-view, Intel had already been working on for a decade, or at least that's what they basically claimed when it was revealed back in 2018 – I called bullsh!t the moment I first heard it. Surprise, surprise, they again straight-up lied about it …
They most definitely had NOT worked on anything chiplets/tiles before, if it took them this long.
Just goes to show how arrogant Intel was back then, letting AMD cook their Zen in complete silence since 2012/2013 for what later became Ryzen. Still boggles my mind how Intel could let that happen …
In any case, we can't really deny the fact that Intel basically cheated on Lunar Lake using OpM, eventually creating a halo product, which ironically was quite sought after, but expensive asf to manufacture.
If AMD had been able to cheat like that (using OpM) in a mobile SKU, dealing some marked cards in such an underhanded manner, it would've ROFL-stomped Lunar Lake …
vegetable__lasagne@reddit
Why can't they have a low core count CPU but still keep the full iGPU?
Visible-Advice-5109@reddit
Because nobody would buy it.
Exist50@reddit
Both Apple's M series and LNL use decent iGPUs for their 4+4 parts.
Dood567@reddit
Apple gets their margins upselling memory and storage; they don't need to worry about the added complexity of more SKUs for the sake of price laddering.
Exist50@reddit
Apple absolutely does price ladder their SoCs.
I_Dunno_Its_A_Name@reddit
This is a minority use case, but is a very valid one. I run a home server (/r/homelab) and could use some extra GPU power for video encoding and some basic AI stuff. Though this is just a home server so I don’t want to put in a dedicated GPU due to the desire to not spend all of my discretionary income on my power bill. Right now I have a 9th gen i7 that is good enough for the basics of what I am doing, but I’ve been keeping my eye out for a replacement. I’m not alone in that need, but I do get this is a minority group. I’m sure there are other use cases though.
Known_Union4341@reddit
You gotta remember these CPUs don't have multi-threading. AMD gets 16 threads out of their 8-core Z1 and Z2 Extreme, but with these Core Ultras you get one thread per core and that's it. A full-on physical core is better than a virtual core, but the same number of cores with multi-threading is much better than the same core count without it. Intel is offering up to 16 cores, which is 16 cores/16 threads, but that should generally offer better multi-core performance vs an 8-core/16-thread part. If anything, it's nice Intel has the ability to improve core count offerings without significantly raising the power requirements.
I for one veered away from the MSI Claw 8 AI+ as a standalone PC solution because it lacked the multi-core performance I needed, with only 8 physical cores and no multi-threading. A 16-core offering is exactly what I need.
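If you want to sanity-check the core/thread split on whatever chip you're looking at, a quick sketch (assumes the third-party psutil package is installed):

```python
# Quick sketch: physical cores vs hardware threads on the current machine.
import psutil

cores = psutil.cpu_count(logical=False)    # physical cores
threads = psutil.cpu_count(logical=True)   # hardware threads (incl. SMT)
smt = "no SMT" if cores == threads else "SMT enabled"
print(f"{cores} cores / {threads} threads ({smt})")
```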
steve09089@reddit
Upselling
SkillYourself@reddit
I hope there's a 4+0+4+12 for handhelds but they might not consider adding a whole new tile configuration packaging line worth it over just giving the OEM a discount.
PastaPandaSimon@reddit
At this point, I just hope they have a handheld chip with just 8 e cores and a decent GPU. Maybe in time for an Intel + Nvidia chip.
hackenclaw@reddit
Even 4 e-cores are enough; don't forget we used to pair the i7-7700K with a GTX 1080/1080 Ti.
4 e-cores + an even stronger iGPU is probably a better combo for a handheld.
PastaPandaSimon@reddit
The e-cores are unlikely to be hyperthreaded though. 8 e-cores gives you 8 threads, though none of them share cores. It feels like a sweet spot for a handheld in 2025 imho.
TemuPacemaker@reddit
For those games back in the day, yeah. But modern games can and do use more than 4 cores.
Depends how low-end/low-power you want to go, of course. 4 cores + eGPU could be enough too, depending on your games: https://youtu.be/XCUKJ-AgGmY?t=278
ResponsibleJudge3172@reddit
We already have games tested on Skymont E cores. They are very fast
TemuPacemaker@reddit
Oh yeah, right, I think I missed (or forgot) that. Darkmont should be even better, but I'm guessing not by that much.
Geddagod@reddit (OP)
Honestly I wonder if the 4 e-core cluster on the compute tile is outright more power efficient than the 4 P-cores.
I wish some reviewers would do power efficiency testing with different core count configurations enabled on LNL and ARL. Just for curiosity's sake.
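Something like this would be easy enough to script on Linux: pin a fixed workload to each cluster in turn and compare wall time, plus the RAPL energy delta from earlier in the thread (the core indices below are made up; check your actual topology with lscpu first):

```python
# Sketch of the experiment: run a fixed workload pinned to different core
# sets and compare wall time (pair with a RAPL energy delta for J/run).
# Core indices are hypothetical placeholders; map them to your topology.
import os, time

def workload():
    sum(i * i for i in range(20_000_000))      # fixed amount of work

for label, cores in [("P-cores", {0, 1, 2, 3}),
                     ("E-cluster", {4, 5, 6, 7})]:
    os.sched_setaffinity(0, cores)             # Linux-only affinity API
    t0 = time.time()
    workload()
    print(f"{label}: {time.time() - t0:.2f} s")
```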
ResponsibleJudge3172@reddit
Yes it is. Memory uses power, and the IPC of the LPE cores should be past Zen 3.
TemuPacemaker@reddit
Intel makes e-core-only CPUs (N100 etc.) but at this point they're based on a several-generations-old e-core design. They're good enough for what they are, so they might not update them for a while still.
OddMoon7@reddit
It's not just more power efficient, but also much faster at low power which you kinda want since you'd be mostly GPU bottlenecked. The good thing is that they've made that part of their thread director tech.
xmrlazyx@reddit
IMO it's probably not economical for them to do it.
Gaming laptops/handhelds are already niche enough as it is; it doesn't make sense for them to package both, and for OEMs to carry both, for just a ~$100–200 delta between the units. Those who want the full GPU will just pay for it.
Front_Expression_367@reddit
Yeah, as of now only MSI looks committed to using Intel for their handhelds, and not only are they not that successful, they have also branched out to AMD for another line of their gaming handhelds.
GenZia@reddit
I can't say I'm entirely sold on the idea of having three core clusters humming inside my laptop.
Two clusters make sense. What am I supposed to do with three?!
Unless I'm mistaken, AMD seems to be sticking with full fat Zen 5 cores in both Strix Halo and "Fire Range," and frankly, it sounds like Intel is just trying to keep up in the core count race with its plethora of 'baby' cores.
Now, I'm sure those might be useful in some niche use cases, but I'm speaking from an average user's perspective.
I use my laptop for convenience and my desktop for compute heavy work.
Helpdesk_Guy@reddit
Aren't these Low-Power Efficiency-Cores not even usable by the user in the first place anyway (and only there for Windows' scheduler to manage)? As I understand it, LPE-cores are essentially placebo-cores for marketing.
jtj5002@reddit
LPEs are supposed to handle very low power tasks, but the last-gen Crestmont LPEs were just too weak to actually do anything.
4 Darkmont LPEs, in theory, should be a significant improvement.
Helpdesk_Guy@reddit
Makes you wonder why Intel even went all the way to integrate those and waste precious die space doing so …
iDontSeedMyTorrents@reddit
Because they still improve battery life under very light loads.
Helpdesk_Guy@reddit
How even, if these weren't even used with MTL!?
soggybiscuit93@reddit
The LP-E cores were a failure in MTL/ARL's design.
They're very useful in LNL and that trend continues with PTL/NVL.
They're definitely not just "placebo cores": LNL uses them more often than the P cores, and they can be activated and used in all-core workloads as well.
Helpdesk_Guy@reddit
As said, I owned an MTL machine back then, and back then you couldn't even address these cores at all, since you couldn't pin anything to them – so yes, back then they were in fact paper-cores for marketing reasons and basically useless.
Seems Intel managed to improve their performance a lot and make them actually useful, which is good!
soggybiscuit93@reddit
The idea is that because LP-E cores are off ring, you can power gate the rest of the cores when you're doing light tasks, getting much better battery life.
The failure of MTL/ARL's LP-E cores was that there was only 2 and they were very weak, so in practice, even a single web page would turn the ring back on and move the process back to the main cores, negating the entire point of them.
In addition, MTL/ARL had the LP-E cores on the SoC tile which made them functionally useless for full nT load.
LNL/PTL/NVL have much more powerful LP-E cores, 4 of them instead of 2, so common tasks can stay entirely within the LP island, and they're on the same compute tile, so they're also used in full nT workloads.
iDontSeedMyTorrents@reddit
They were, just not as often as Intel would have liked.
advester@reddit
They could be useful if they could check your email and stuff while the laptop is effectively sleeping.
Geddagod@reddit (OP)
The 4 Skymont cores already seem pretty good in LNL.
Front_Expression_367@reddit
Windows is supposed to use these LPE cores for lighter tasks such as web browsing, Word documents, or idling, so that it won't have to activate the rest of the cores -> reduced power draw and therefore good battery life. Meteor Lake introduced these, but they were so slow they were pretty useless outside of idling. Lunar Lake also effectively had 4 LPE cores and we've seen how good those are, and Panther Lake is supposed to build on that, but with an additional 8 E-cores for people who really value multi-core performance.
Helpdesk_Guy@reddit
The last info I had in the back of my mind was that these couldn't even be assigned by the user and were basically reserved for Windows itself. Is that still the case now?
Front_Expression_367@reddit
Yeah, I think you had to use a tool such as Process Lasso. I tried to assign HWiNFO64 to the LPE cores and it was rather painful, so I understand why these cores are never used otherwise lol.
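For reference, what Process Lasso does boils down to setting the process affinity mask, which you can also sketch yourself with psutil (the core indices are hypothetical; LP-E cores usually enumerate after the others, and whether the scheduler actually honors the pin on the LP-E island is exactly the problem described above):

```python
# Sketch: pin a process to an assumed LP-E core island via its affinity mask.
# Indices 12-15 are placeholders; check your machine's core enumeration.
import psutil

proc = psutil.Process()              # or psutil.Process(pid) for another app
proc.cpu_affinity([12, 13, 14, 15])  # restrict scheduling to these cores
print(proc.cpu_affinity())           # confirm the mask took effect
```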
Helpdesk_Guy@reddit
Yup, pretty much paper-cores for marketing-reasons alone basically.
Front_Expression_367@reddit
For MTL and ARL, yes. For LNL and PTL, though, I haven't tested them personally, but seeing the battery results of LNL chips makes me think the LPE cores on those are legit.
Helpdesk_Guy@reddit
Yes, MTL's LPE-cores were basically a dud, LNL was fairly workable, and PTL will hopefully be potent.
steve09089@reddit
They are definitely more than usable if it’s anything like LNL, which basically defaults to them.
If it’s more like ARL or MTL, it’s placebo except for S0 sleep.
soggybiscuit93@reddit
PTL's LP-E cores take after LNL's. ARL/MTL's LP-E core design is dead.
Helpdesk_Guy@reddit
I hope those are powerful enough to eventually act as the low-power booster these were once supposed to be.
Exotic9780@reddit
The LP cores in Meteor Lake/Arrow Lake were too weak to be usable as multithreaded boosters, this isn't the case with Lunar Lake and Panther Lake will be no different
SkillYourself@reddit
You are the niche use case. The average/median user is using H-class laptops.
These are meant to beat Strix Point and its refresh, which it will do and at better economics for Intel than Arrow Lake H.
ResponsibleJudge3172@reddit
The average user runs apps like Teams entirely off the LPE cores on Lunar Lake and Panther Lake.
soggybiscuit93@reddit
The average user is using U series
Geddagod@reddit (OP)
Doesn't seem like that will be the case till the end of '26 though. At least going by the earnings call a few weeks ago.
mediocre_sophist@reddit
Since when is any user determining what cores they want to use and when?
nanonan@reddit
Having to roll the dice on the scheduler doesn't make things better.
ResponsibleJudge3172@reddit
Modern schedulers treat all cores equally with E-core priority, because nowadays the difference between an E-core and a P-core is smaller than between AMD's X3D CCD and the higher-clocking CCD.
GenZia@reddit
Users aren't "determining" anything because they don't have to.
On smartphones, they've virtually zero control over their own hardware.
And on the x86 side, I don't think people are buying big.LITTLE hardware in droves, and even if they are, I doubt the multiple core clusters are their deciding factor.
And if I had a big.BABY CPU, I’d definitely be tempted to play around with it, and yes, I don’t see the big idea behind having three clusters, aside from marketing gimmicks and artificially inflated MT benchmark scores.
From what I've heard, Windows Scheduler has trouble dealing with just two clusters as it is. On Linux, the experience is, at best, tolerable.
At the very least, I have certain issues with Intel marketing these CPUs as having "16 cores." I know a lot of 'normies,' first hand, who have fallen prey to this deceptive marketing tactic.
FatalCakeIncident@reddit
It's been a thing for a while as a means to circumvent the Windows kernel's shitty scheduling, especially on processors with asymmetric core or chiplet arrangements. Tbf, Windows has slowly begun to catch up and schedule cores a little more efficiently, but some people still swear by apps like Process Lasso for manually controlling their process affinity.
mediocre_sophist@reddit
Windows Scheduler has come a long way but I mean come on, this is pretty pedantic. The commenter above is clearly not using a tool to assign cores to specific tasks they are just being annoying.
soggybiscuit93@reddit
From a user's perspective, CPUs are a black box and the inner-working details are irrelevant. What matters is results. They want, typically, a balance of performance, battery life, heat/fan noise, and cost, placing greater emphasis on one of these categories over the others.
The idea is that:
P cores = performance optimized
E cores = area optimized
LP-E cores = power optimized
The theory being to have different core types focus on different parts of PPA.
E cores and LP-E cores are the same microarchitecture. Just LP-E cores are off-ring so the ring (and rest of the cores) can be powered down in light-load scenarios.
Intel's biggest problem is that, looking at each of the cores, the P cores perform their designated role the worst. 200% larger than E cores for ~10% more IPC and ~15% more clockspeed is a pretty rough trade-off, and by the time you've loaded all P cores and need to load the E cores too, that clockspeed difference diminishes hard.
Intel's better off (and all rumors point this way) growing the E cores slightly and making that architecture the basis of the new P and E cores (like Zen vs Zen-C).
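Back-of-the-envelope, using the rough numbers above (illustrative assumptions, not measured die data), the trade-off looks like this:

```python
# Rough perf-per-area comparison from the figures quoted above (assumed).
e_area, e_perf = 1.0, 1.0        # E-core as the baseline
p_area = 3.0                     # "200% larger" => ~3x the area
p_perf = 1.10 * 1.15             # ~10% more IPC x ~15% more clock = ~1.27x

print(f"P-core ST perf vs E-core:   {p_perf:.2f}x")
print(f"P-core perf/area vs E-core: {p_perf / p_area:.2f}x")  # ~0.42x
```

So per unit of die area, the P core delivers well under half the throughput of the E core, which is exactly the trade-off being criticized.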
Exist50@reddit
In this case, they're even the same physical implementation. 100% identical to the compute cores. Also, for MTL/ARL they were neither power nor area optimized.
ConsistencyWelder@reddit
And with all the issues Windows already has with the scheduler, I fully expect it to never properly utilize the different cores. I expect a similar situation to previous generations, where it's sometimes necessary to disable the slow cores to eliminate stuttering in games, because tasks sometimes get offloaded to them, which brings the whole game down to a crawl.
Exotic9780@reddit
It makes sense to have three, as the low-power cores prioritize keeping power consumption low for battery life, but to achieve this they aren't attached to the ring bus, which hurts performance, so 8 regular E cores are also used.
ConsistencyWelder@reddit
16 cores, but only 4 of them will be fast. The rest will be slow cores.
Klaeyy@reddit
The E-cores are plenty fast for most use cases by now, and there are 8 of them in there. Aren't they even ahead of Skylake IPC by now? 8 of them will do a great deal with a good clock frequency.
Only the 4 LP cores are kind of weak. But they aren't attached to the ring bus and are really only used for offloading low-requirement background tasks and an idling machine, so that you can turn off the faster cores when they aren't necessary.
But for performant gaming and productivity it is basically a 12-core machine.
ResponsibleJudge3172@reddit
They are ahead of Raptor Lake P cores in IPC under 40W.
Front_Expression_367@reddit
The LPE cores being weak is probably the point anyway, or at least partially. They don't need to draw an insane amount of wattage to get to full speed because they don't scale well, but they stay fast enough at lower wattage to default to that power profile -> improved battery life.
Klaeyy@reddit
Yeah, after being reminded that the LPE cores are also Darkmont like the E-cores, I looked up the layout in detail. The difference is simply that the P and E cores are on a large cluster with L3 cache and a fast ring bus, while the LPE cores are separated from that ring bus and don't have any L3 cache (just access to a "memory-side cache" accessible by the NPU and the small cluster, but not the large cluster).
They are for sure also much more limited in power budget and frequency, but they can be very fast in general. The main use is being able to disable the large cluster at idle or low CPU usage to heavily reduce power draw.
Communication between the clusters is much slower, so anything that relies on synchronisation and communication between cores is only really fast (latency-wise) if it is kept within the same cluster.
The LPE cores can therefore actually be very fast, but only if the task stays within the cluster. Modern demanding games will therefore 99% solely run on the large cluster, because they would run slower if spread across all of them. The small cluster could be used to offload background tasks (if the power limits aren't hit), which could at least improve 1% lows and fps stability.
Exist50@reddit
It's also accessible by the large cluster, fyi.
Front_Expression_367@reddit
Yeah. The efficiency and overall battery life will probably come down to how Windows manages threads, which seems to be pretty decent now, with Lunar Lake getting pretty good battery life with the same design philosophy.
Exist50@reddit
Not for LNL or PTL. They're not crippled like the MTL/ARL cores. Should be full speed Darkmont. That's like 12th or even 13th gen big core.
soggybiscuit93@reddit
They're definitely ahead of Skylake now. Cougar Cove IPC vs Darkmont is gonna be like 10% better
Exist50@reddit
Darkmont should really not be that far off Cougar Cove. Certainly not slow.
gomurifle@reddit
Is this desktop or laptop?
Exist50@reddit
Laptop.
Mech0z@reddit
6 different top models with just 100 MHz between them is so stupid; that should be 1 or MAYBE 2 models for that core configuration.
FitOutlandishness133@reddit
Nice. My A770 is still slaying 4K ultra gaming in 2025. No stutters, no freezing, just responsive, beautiful gaming. Of course it's paired with a 14900K, so my iGPU technically has identical specs, but I keep temps down this way. Also have an RTX 50 series. I love Arc graphics. Bang for buck AT HIGHER RESOLUTIONS — that's where it's at. And I've had a 4090 (sold) and an AMD 9070 (sold). Never would I pay what they are asking for a 5090, just because I wasn't 100 percent thrilled with price per FPS on the 4090. Once you have had all of them you will see Intel is the way to go for 4K gaming (unless you like paying thousands of dollars). Arc doesn't have shadows and artifacts. RTX does. Bad. Blurs my picture even on the best settings possible.
Adventurous_Tea_2198@reddit
Everything I've heard about Panther Lake thus far hasn't made me regret going with Arrow Lake, despite the naysayers.