Panther Lake Geekbench Leak (it's good!!)
Posted by protos9321@reddit | hardware | 127 comments
The Panther Lake X7 358H is about 1.74x the performance of Lunar Lake in the Geekbench OpenCL test.
Do note that this should be on pre-release drivers/firmware, so by the time of release we could expect more performance (maybe even quite a bit more, considering Intel's current driver situation).
I have a few thoughts on this. There seem to be 2 big takeaways:
- Panther Lake performance
- Asus G14

- Panther Lake performance
So the Panther Lake 358H seems to be about 1.74x the performance of the Lunar Lake 288V, based on Geekbench OpenCL scores, while the 4050 Laptop is 2.5x and the 8060S is 3x.
I've taken a look at Notebookcheck gaming benchmarks for the 140V in Lunar Lake at 1080p high and compared them to the 8060S and 4050 Laptop, and I got approx 2.5x and 3x the performance for the 4050 Laptop and 8060S respectively. So the Geekbench scores seem to be a decent indication. Do remember that the 4050 and 288V numbers are averages of many laptop designs, so some could be lower power, whereas the 8060S should only be an average of 2 designs.
Do note that some of the gaming benchmarks seemed to be bandwidth limited, and in one case the 4050 could only manage 66% more performance than Lunar Lake, while the 8060S managed 2.25x the performance.
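To put the multipliers above in one place, here's a quick back-of-envelope sketch (all numbers are the leaked/averaged Geekbench OpenCL ratios quoted above, normalized to the Lunar Lake 288V; purely illustrative):

```python
# Relative standing implied by the Geekbench OpenCL multipliers above,
# with Lunar Lake 288V = 1.0. Leaked/averaged numbers, so rough only.
ptl_358h = 1.74   # Panther Lake X7 358H
rtx_4050m = 2.5   # 4050 Laptop (average of many designs)
rx_8060s = 3.0    # Radeon 8060S (average of 2 designs)

print(f"PTL as a fraction of the 4050 Laptop: {ptl_358h / rtx_4050m:.0%}")
print(f"PTL as a fraction of the 8060S:       {ptl_358h / rx_8060s:.0%}")
```

So on these numbers, the 12 Xe3 iGPU lands at roughly 70% of an average 4050 Laptop and under 60% of an 8060S.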
- Asus G14
Intel has never been used in an Asus G14 before, so this is huge. Also, Asus is using the 12-Xe-core config, which is interesting as it would be more expensive than the 4-Xe-core config.
This either means Asus thought the iGPU performance was quite important to them, or there's a chance it doesn't have a dGPU. The latter is much more interesting, as it would mean this is the first iGPU-based G14. Some of you may think this doesn't make sense, as Strix Halo would be a better fit. However, Strix Halo is a terrible product. It's got a really nice CPU and a decent GPU that performs 10-20% better than the 4050, but a Strix Halo laptop can cost as much as a 5070 Ti laptop and somehow have worse battery life than the 5070 Ti laptop despite having only an iGPU. Panther Lake could cost half of Strix Halo, while having 2x the battery life and higher single-core performance. Multi-core and GPU wise it would still be considerably lower than Strix Halo, but again, it could be almost half the price. Would you pay for a hypothetical Strix Halo G14 that costs the same as the 5070 Ti model but has half the GPU performance (even if the multi-core performance is 40% higher) and worse battery life?
If you look at perf per dollar, the 5070 Ti model would still be better, but at around $1200-1300, which I think they can price it at (remember PTL is cheaper for Intel to make than LNL), there really isn't any better premium laptop SoC. This could compete with the MacBook Air, Dell XPS (I don't remember its new name), Yoga Pro 7i and others, and also create a design for future iGPU-only performance laptops for Asus. At the end of the day this is still speculation, but it's possible.
Overall, with all the improvements to Xe3, memory bandwidth being increased to 153 GB/s, and the fact that Lunar Lake could only use 80 GB/s of its 135 GB/s of memory bandwidth (https://chipsandcheese.com/p/lunar-lakes-igpu-debut-of-intels), I think there's enough bandwidth for Intel to easily push 2x the performance. In fact, considering the 50% increase in size and the 25% higher clocks, even an IPC uplift of around 10% is enough to provide performance 2x higher than Lunar Lake. We'll just have to wait and see if they can do it.
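As a sanity check on that 2x estimate, the scaling factors mentioned above multiply out like this (assuming perfectly clean scaling, which real GPUs rarely deliver):

```python
# Back-of-envelope for the "2x over Lunar Lake" estimate: 50% more Xe
# cores, 25% higher clocks, ~10% IPC uplift. Ideal-scaling assumption.
more_cores = 1.50
higher_clocks = 1.25
ipc_uplift = 1.10
ideal = more_cores * higher_clocks * ipc_uplift

print(f"Ideal combined uplift: {ideal:.2f}x")  # ~2.06x
```

In other words, 2x only needs a small IPC gain on top of the known core-count and clock increases, provided bandwidth doesn't get in the way.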
imaginary_num6er@reddit
I really hope this is not it, since if people wanted no dGPU with a 14" there's the ASUS Zenbook or Lenovo Thinkpads. The whole point of a G14 is that it is a thin 14" that has a powerful dGPU.
Front_Expression_367@reddit
I mean, this could just be one variant out of the whole Zephyrus G14 lineup. The ASUS Zenbook is nice, but they usually only have one fan and an overall worse cooling solution, which caps the whole system to like 28W or lower, so not exactly the most proficient at gaming. Since the Zephyrus G14 has a far better cooling situation, it could push these iGPUs to a bit higher power limit.
imaginary_num6er@reddit
It would be very unlike them though. Like, they would have 1 SKU with no heat pipes to the GPU die location while the others will have them? What about their 5060 and 5070 Ti models that already use different heat pipes? They'd need 3 configurations to support at least all 3?
Front_Expression_367@reddit
If ASUS can make sure the demand for an iGPU-only Zephyrus G14 is there, they can easily do it (and it is Intel, so it will sell well regardless, I think). For the internals, I guess they can build on and further improve the Vivobook S or Zenbook S series, which already have 2 fans, although I don't know if that makes too much sense since I don't know much about the details of designing laptops.
LastChancellor@reddit
You just need either bigger heatpipes or start development on vapor chambers, the heatpipes on the Vivobook/Zenbook S series are pretty tiny
logosuwu@reddit
Eh, Asus has made ostensibly gaming focused laptops without dGPU before, like the Flow. It really depends on how well PTL can scale with power.
LastChancellor@reddit
Great, 1 year from now, while you're trying to look for the dGPU G14s, online stores all over the world will be overcrowded with listings for the base-model iGPU-only G14 instead, which those stores just can't sell because it's not desirable 🙂↔️
IlIIIlllIIllIIIIllll@reddit
I don't care at all about raw performance. Fell into that trap with molten hot chips, loud fans and dead batteries in 3 hours. All I care about is efficiency and performance per watt - are these still going to perform worse on efficiency than 4-generation-old M1 chips?
Alternative-Luck-825@reddit
338H, 4+4+4 cores, 10 Xe3: this spec is quite good to replace the 258V for handhelds.
grumble11@reddit
Given that PTL has 1.5x the Xe cores of LNL, more CPU cores with slightly improved architecture and given it's a new GPU architecture and probably a higher TDP, 1.74x performance actually makes sense. We'll probably get a bit more as the chipset matures in the market but I think the hopes that it'll match a 4050m are going to be hard to meet - the 4050 has more powerful on-paper physical hardware. That DOES NOT mean that PTL isn't a great chipset or fine for tasks that take moderate GPU horsepower (including low-to-mid games), but it's good to temper expectations. Upscaling can help here.
The G14 is not going to have a dGPU with the 12Xe core configuration. The 12Xe model is designed to not have one, and if they did have a dGPU they'd use the 4Xe configuration to pair it with.
Strix Halo is a great test bed for the 'big iGPU' framework, though it really needs faster RAM - DDR6 will help a lot. It's a more powerful product (beats a 4060), but yes costs a bit too much. Personally I'm excited by this design direction, it needs a couple more iterations.
The best laptop SOC is the M-series. What Apple has done with that chipset is phenomenal, incredible performance and amazing efficiency. The rest of the world is playing catch-up, though PTL nudges the gap a bit closer. I just prefer x86 and Windows to Mac. I agree that PTL laptops will compete fine with Macs, though the holistic competition is more than the chipset - it's execution on overall laptop design and ability for the hardware, firmware and OS to all play nice together (ex: sleep mode that reliably works).
protos9321@reddit (OP)
Strix Halo isn't faster than the 4060. Based on JarrodsTech, it's about 10% slower than the laptop 4060, so about 10% faster than the 4050 Laptop. If you remove the memory-limited games and use the full-wattage 4050 Laptop, even the Notebookcheck gaming average would probably see the gap reduce to 10%.
LNL was already better than the M series in some battery tests, and PTL simply has better battery life: lower SoC power draw and a better LP island. In this video (https://www.youtube.com/watch?v=cVSkLTfCZz8), PTL was using up to 20-25% lower power than Lunar Lake at times.
grumble11@reddit
Jarrod ran the 4060 in a full-size laptop (better cooling) at a way higher wattage. He compared it against a super-portable Z13 laptop at a much lower TDP. From what I can tell, that's the trend: the reviews used this specific, extremely power- and thermal-constrained laptop and compared it against much higher wattage 4060 setups. If you match the wattage, the performance differential changes quite a bit:
Framework Desktop With AMD Ryzen AI Max Offers Excellent, Linux-Friendly Performance Review - Phoronix
This gives a bit of flavour, though honestly up-to-date reviews of higher-TDP models are lacking in the market. The charts in that review indicate that an unconstrained 40-CU RDNA 3.5 iGPU setup run at similar wattage will beat the 4060m. Right now it's hard to find anything that isn't running the 395 in a very power-constrained envelope.
As for LNL, it did better than the M series in some battery tests but also performed worse than the M series; performance per watt is firmly in the M series' camp. I don't expect that to change. I'm not saying LNL is bad, and you don't really NEED a performance crown on this kind of laptop, but there's still work to do.
I am also very much looking forward to the PTL launch, I'm considering picking up a 12Xe3 model myself.
Vb_33@reddit
Wow that's pretty nice, shame the 5060m exists and RDNA3.5 is feature starved vs Ada and Blackwell.
porcinechoirmaster@reddit
What features, though?
So, like, yes, nVidia has an extra features list that's pretty... but in that particular market segment, I don't see a whole lot of reason to pick them.
Vb_33@reddit
Even the Switch 2 at 9W is flexing its RT prowess in Star Wars Outlaws, despite the Z2E being significantly better at raster. The Z2E's weakness isn't process or die size, it's architecture.
puffz0r@reddit
FSR4 isn't officially available on RDNA 3.5 yet.
protos9321@reddit (OP)
The only problem is going to be scalability and cost. In desktop GPUs, AMD, Intel and Nvidia compete fine (Intel has an issue with transistor density, but it looks like that is slowly getting resolved with Xe2 on LNL and Xe3 on PTL, and may be fully resolved by Xe3P on NVL). However, with iGPUs you have to worry about yield, advanced packaging costs, etc., which is why I came to the conclusion that Intel is most likely the only one that can make large iGPUs work financially (outside of Apple, as people are willing to pay a ton of money for low performance: in real-world apps/games, the M4 Max performs on par with or better than the 4070 while costing more than a 5080, and there's a massive difference between the two).
Strazdas1@reddit
So he tested in a proper environment?
protos9321@reddit (OP)
A few things here. The Z13 runs at up to 93W combined CPU and GPU. If you take a look at the 4060, it gets about 11% more performance going from 85W to 102.5W, after which it flatlines. Also, an 8845HS only consumes about 11W when pushing the GPU. So bringing the power down to 85W for the GPU plus 11W for the CPU brings us to 96W, which is about 3W more for about 1% lower performance, and this is a discrete GPU vs an iGPU. So Strix Halo can match a CPU + discrete GPU combo per watt, but unfortunately that's still pretty bad in terms of efficiency, as iGPUs by their nature should be able to pull less power than discrete GPUs. Even the VRAM on the discrete GPU is part of the power budget. (All the info above is from different JarrodsTech videos on the G14, Strix Halo and the 40-series cards.)
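The wattage accounting above can be laid out explicitly (all figures are the rough JarrodsTech numbers quoted in the comment, not independent measurements):

```python
# Power-budget comparison: Strix Halo (Z13) vs a CPU + discrete 4060
# setup, using the rough figures quoted from JarrodsTech videos.
z13_combined = 93       # W, Z13 combined CPU + GPU limit
gpu_4060 = 85           # W, near the 4060's efficiency knee
cpu_8845hs = 11         # W, CPU draw while the GPU is loaded
combo = gpu_4060 + cpu_8845hs

print(f"dGPU combo: {combo} W, Strix Halo: {z13_combined} W")
print(f"Difference: {combo - z13_combined} W")  # 3 W more for the combo
```

Which is the point being made: the big iGPU only roughly ties a dGPU system on total power, rather than beating it.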
The Phoronix link uses the full-powered Strix Halo: 120W vs 96W (with VRAM). How is this not the exact same issue that you stated?
The reason why you can't find the 395 in less constrained laptops is the cost. If OEMs could price it at $1500-1700, they could try to justify it and make more models, but the cheapest device with it costs around $2300. As I said, 5070 Ti pricing but with 40-50% less performance.
If you ask me, the only way to get reasonably priced large iGPUs is to go down the Intel route fully and separate the iGPU. You will still use similar space, but at least your yields will be better, as you don't need a single large tile. On top of this, you need lower-cost advanced packaging so that the cost savings from the better yield don't simply go to packaging. Advanced packaging at Intel should be far less expensive than at TSMC, as they have been doing it at scale since MTL, whereas with TSMC it's mostly for enterprise at a much lower scale. Considering that Intel on Intel should be much cheaper for Intel (no pun intended), I think Intel is probably the only company that can make large iGPUs at much more reasonable prices. Other companies can make large iGPUs, but the pricing would still remain a problem.
Perf/watt of PTL is way better than ARL/LNL at MT, though I agree that it's still not on par with M. The difference between Intel and Apple should start to decrease from next year with Nova Lake, and hopefully by 2028 they'll have the same or better perf/watt with their unified core.
As far as battery life between LNL and M is concerned, the LNL devices almost always had higher-res OLED screens, which end up eating battery. Take this for example (https://www.notebookcheck.net/Dell-16-Plus-laptop-review-A-wave-goodbye-to-the-Inspiron-series.1028836.0.html) vs the M4 13": LNL had about 50% better battery life with a battery only 15% larger, so effectively ~30% better (1.5/1.15), though it did have a lower 1080p resolution. So make of this what you will, but PTL should have similar or better battery life than LNL.
Vb_33@reddit
Won't get an LPDDR6 AMD Halo chip for a while.
grumble11@reddit
2027-2028 maybe? I'm not waiting around, but that seems to be when next-gen RAM does its thing.
FollowingFeisty5321@reddit
That's Medusa Halo which is slated to be a crazy big generational advance: 2nm, 48x RDNA 5 compute units, Zen 6 cores, 2x - 3x memory bandwidth.
But yeah that's 2 - 3 years away unfortunately!
grumble11@reddit
If that happens as advertised (which is a big if), then that is a beast. If priced appropriately and efficient enough, it would be an XX60 killer.
Vb_33@reddit
AMD Halo chips are expensive top-end chips for their niches. This ain't gonna be cheap, especially not with a 384-bit bus and 26 CPU cores on TSMC N2.
loczek531@reddit
Depends on the needs; price-wise it will most likely still compete with xx70m laptops, so a mobile 5060/6060 will still be much cheaper.
Slabbed1738@reddit
Should be 12-18 months away.
imaginary_num6er@reddit
Looks like I'll be buying a 5080 G14 then, since I'm not going to chance buying a system with Intel drivers that performs worse than a 5080. Just like how the G14 with a 4090 was only available in the 2023 model, but not in the 2024 release.
steve09089@reddit
I highly doubt they won't have a dGPU for all models of the G14; I doubt there will even be a model without a dGPU, since the whole point of the G14 is to be a small-form-factor laptop with a dGPU, and this iGPU performance doesn't even close the gap completely with a laptop 3050.
Take away the dGPU and it's just a fancy overpriced Zenbook with gaming aesthetics.
loczek531@reddit
They might have, just not in 12xe configuration
grumble11@reddit
No iGPU is going to perform anywhere near a 5080, so if that is your target and budget then your purchase decision is pretty easy
imaginary_num6er@reddit
Yeah. Like if someone in 2023 was waiting for 16GB VRAM laptops to decrease in price, they had to wait at least 2 years if they missed the 2023 model. The 4090 model still performed better than the 2024 versions
SERIVUBSEV@reddit
The GPU upgrade is good because after 2-3 years of hyping up AI and NPU performance, Intel realized no one cares about it when buying a laptop.
The next generation has shrunk the NPU while keeping the same 50 TOPS as last gen, making more space for the GPU. Although they now advertise GPU+NPU TOPS for marketing to those who want to locally run their AI girlfriend.
kingwhocares@reddit
This has more to do with Microsoft Copilot nonsense. They wanted NPUs with at least 40 TOPS to get that AI tag.
Homerlncognito@reddit
Does Copilot actually utilize NPUs?
Exist50@reddit
Copilot+, yes. That Recall feature demands it more or less constantly.
There's also the Windows Studio Effects, but that's much less demanding and more situational.
Strazdas1@reddit
Recall does not exist at the moment. The project was paused.
Exist50@reddit
It's back, as far as I can tell.
More to the point, when these CPUs were being designed, Microsoft was very insistent that it would be a major part of Windows going forward, a de facto requirement even. That's half the reason they partnered with Qualcomm as well. View the decisions from the hardware makers in that light.
And you're going to see this continue for NVL and beyond. It will take years to course-correct even if the NPU ends up going largely unused. The funny thing is, Microsoft wanted even more than we're getting. For example, for 2026 they requested 80 TOPS standard.
Strazdas1@reddit
How can you tell? There was no news claiming such.
Exist50@reddit
There was this 6 months ago. https://arstechnica.com/gadgets/2025/04/microsoft-rolls-windows-recall-out-to-the-public-nearly-a-year-after-announcing-it/
Where do you claim to see it was "paused" to begin with? Microsoft certainly didn't use that wording.
Strazdas1@reddit
Microsoft used the word "postponed" in their press release. Although it looks like it is being rolled out to Insider Preview versions of Windows now.
DYMAXIONman@reddit
Not sure. Most Copilot stuff is plugins integrated into software, and I think that uses a subscription. So it uses an internet connection.
kingwhocares@reddit
No idea but that's what Microsoft wanted.
DerpSenpai@reddit
A bigger GPU basically gives very positive reviews even if the rest of the product is very mediocre. Most reviewers are gamers, and Lunar Lake taught Intel that. No reviewer will say that the Lunar Lake CPU is slower in both single and multi core than a phone chip, for example.
Strazdas1@reddit
Most reviewers are not gamers. In fact, most reviewers that use games don't actually play them and benchmark in the first chapter/tutorial, resulting in data that's not representative of the actual game.
Hytht@reddit
It's useless because phone chips can only do that for a short duration in benchmarks and then throttle afterwards. They even go past 15W nowadays, which no phone can sustain beyond a few seconds. There are diminishing returns for Lunar Lake when going over 15W, and most laptops should be configured not to use more on battery or efficiency profiles.
VastTension6022@reddit
A phone will throttle, but a phone chip swapped into a Lunar Lake device would have no such limitations. You don't think it's notable that Intel's premium laptop chip line is outperformed by a Dimensity 9400 Chromebook or the upcoming A18 budget MacBook?
Hytht@reddit
Once those are released, Lunar Lake will be previous-gen. It's already 1 year old and built on a node performing similarly to that of the 8 Gen 3, which is outperformed by Lunar Lake.
Exist50@reddit
What chip are you referring to? Qualcomm didn't use N3 until the 8 Elite.
DerpSenpai@reddit
And the 8 Elite matches ST and MT of Lunar Lake at half the power
VastTension6022@reddit
The 9400 and a18 are last gen too.
Exist50@reddit
By the same logic, LNL will happily boost to around 30W.
steve09089@reddit
It’s not just reviewers that don’t care about slower single and multi core. Most people in general don’t care as long as it doesn’t affect their day to day tasks, and these days there’s more than enough processing power for basic tasks in most chips.
Battery life in less demanding applications, compatibility, and to a lesser extent graphics performance, on the other hand, are very tangible for most people.
BFBooger@reddit
> these days there’s more than enough processing power for basic tasks in most chips.
People said that 10 years ago. They said it 20 years ago.
But those old chips are pretty bad compared to today's even for every-day people.
Sure, a 2600K is going to be OK browsing the web, but something new will be notably faster for day to day tasks.
It's too bad that every web app these days is some bloated, massive-JavaScript-stack thing with 5000 lines of CSS, but it is what it is.
Qesa@reddit
And every desktop app is the above running in electron...
Sirts@reddit
Can you link to the test that shows 3x higher power usage? In terms of performance and efficiency, Lunar Lake held up quite well against the base Apple M3, which was Apple's newest laptop chip at the time of LL's release.
DerpSenpai@reddit
That is at idle, not PPW or max power. Also, Lunar Lake battery life is good when you throttle it hard.
PPW in Cinebench R24
Apple M4 - 14 (points per watt)
Lunar Lake - 5.36
AMD Zen 5 - 4
BFBooger@reddit
It's power targets too.
You can lower the clocks on the latter two here, drop performance by 15%, and cut power by over half.
Yeah, Apple still wins, but not by 3x.
DerpSenpai@reddit
Well yeah but then Apple would have almost 2x the performance 😅
Front_Expression_367@reddit
I mean, Lunar Lake allowed laptops to have some of the closest battery life to Apple MacBooks, even edging them out in some rare cases, and it ran x86 Windows, which means it is able to run more software in general. It did very well as a CPU lineup designed for lightweight, light-use laptops, so saying the rest of Lunar Lake is mediocre compared to its GPU is pretty unfair imo. People are not necessarily looking for 3x the benchmark scores when shopping for lightweight laptops anyway, so it is not like the reviewers are ignoring it for the wrong reason. They just know Lunar Lake laptops will still work very well for general use.
DerpSenpai@reddit
That battery life came at a cost of performance
Front_Expression_367@reddit
The performance cost is more that Lunar Lake was never as efficient as the likes of the M4, but for the most part these were the first of the Intel lineup to not completely throttle while running on battery.
Agloe_Dreams@reddit
I get the impression that this is really where things are going - big.little but in AI form - NPU for power efficient background AI tasks, GPU for general purpose tasks.
zdelusion@reddit
There are some tasks that show the potential without requiring you to run full blown models. Live Captions is kind of cool. And the power efficiency gains for stuff like camera blurring feel real and noteworthy. But you're right that for the hype MS and the OEMs threw behind NPUs, they haven't done a great job at leveraging them.
Strazdas1@reddit
Well, the users revolted and canned the crowning feature microsoft was pushing - Recall.
mduell@reddit
Where?
petuman@reddit
For example iOS had "hold to copy object from photo" for years -- it does object segmentation for that cutout stupid fast.
https://youtu.be/AkVVHa8aCL4?t=10
mduell@reddit
TIL that uses NPU not GPU.
markhachman@reddit
I think this is correct. My questions are what those background tasks are, how frequently they're used, what happens when they exceed the capability of the NPU, and if they can simply flip to the GPU (Windows ML?) in the absence of an NPU or if the NPU is fully loaded.
michaelsoft__binbows@reddit
Have the NPUs been made useful from the software side yet?
Strazdas1@reddit
Theoretically yes, but adoption is low.
dampflokfreund@reddit
Practically non-existent.
Noble00_@reddit
https://github.com/intel/ipex-llm
Haven't seen much discussion, but it seems you can run local models on the NPU exclusively.
protos9321@reddit (OP)
Isn't IPEX-LLM for GPUs? I think you mean OpenVINO (it supports CPUs, GPUs, NPUs and even ASICs).
Noble00_@reddit
You can check the repo; IPEX-LLM supports their NPUs.
Evilbred@reddit
Go on....
Natejka7273@reddit
Local Ai models thru SillyTavern. Most fine-tunes are 11-30B parameters.
dampflokfreund@reddit
The NPU is not used in inference programs like llama.cpp.
alvenestthol@reddit
ImageGen can use NPUs a lot better, although the tools to actually do that can be a lot more limited
FollowingFeisty5321@reddit
Lucy Liubot?
SirActionhaHAA@reddit
Brought to you from an r/intelstock regular lol
constantlymat@reddit
I wish Intel well because I'm a hardware agnostic who just wants good product at a reasonable price and we won't get that without Intel as a competitor in the market.
That being said, r/hardware has almost turned into a comedy subreddit when it comes to Intel news over the past few years. There's insane pro-Intel spin-doctoring going on. It's here on r/hardware that I learned that Reuters is apparently trying to bring down Intel and manipulate its stock.
I hope 18A is a smash hit for them that makes me look silly, but the current state of affairs is dire.
Strazdas1@reddit
Reuters has never been a reliable source. And I'm talking decades here.
logosuwu@reddit
It's also here where I get told insane takes like "18A is cancelled" lol
Exist50@reddit
This was the sub insisting it would be at least N2 tier, and that 20A was "ahead of schedule" and definitely not cancelled.
Visible-Advice-5109@reddit
I mean, if you said that today it would be a pretty bad take, but seeing as 20A was canceled, there was certainly precedent to question 18A in the past.
LuluButterFive@reddit
Zen 5 will be 20% faster than Zen 4
Rdna 4 is outselling nvidia
18a canceled, 18a yields bad
logosuwu@reddit
Zen 6 to 7GHz kappa chungus
PeteConcrete@reddit
Bro hides his post history, but a quick Google search shows me 99% of your posts are in /amd.
Pot calling the kettle black?
Qsand0@reddit
😂😂😂 Got the mf
Slabbed1738@reddit
Most of his comments are from this sub. There's ways around private profiles
gamebrigada@reddit
What the hell is this bashing of Strix Halo... You have so many bad assumptions...
The battery life is f'ing phenomenal. Yes, if you load the tits out of this thing it'll barely last over an hour. But I usually get 10+ hours of work done on a single charge, especially with the latest update from HP. This thing core-parks like crazy to utilize only what's necessary and is crazy efficient.
The GPU performance is a dogshit argument. The 5070 Mobile's lowest TDP is 50 watts, adjustable by the manufacturer based on cooling design. The ZBook Ultra configured at 65 watts for the entire SoC absolutely blows away a 5070 at a 55-watt TDP. You're confusing the Strix Halo GPU with the Strix Point GPU. Also, what CPU are you using that only draws 10 watts and doesn't bottleneck the 5070?
Calling the Strix Halo GPU an iGPU is really what shows how little you know... Strix Halo has more shader units, more texture mapping units, more compute units, more ROPs and more ray tracing cores than AMD's desktop 9060 XT. Strix Halo took a fat GPU, shoved it into an IO die and hooked it up to a couple of chiplets.
Price: yeah, of course it's expensive. It's a 9950X packaged with an overgrown 9060 XT, with a unique interconnect that lowers inter-chip latency while also lowering power consumption. Why the hell would AMD not sell it at a high price? Its only competitor is a $4k DGX Spark.
Strix Halo has no comparison from Intel, and it certainly will NOT be compared to the 358H given how small that GPU is and how few real cores that CPU has. The only thing you can reasonably compare Strix Halo to is the NVIDIA GB10.
Scion95@reddit
IIRC, the Panther Lake SKU with the full 12-Xe-core GPU only has 12 PCIe lanes, while the Panther Lake chips with 16 CPU cores and 4 GPU cores have 20 lanes. So I would guess that if the G14 has the full 12-core GPU, it doesn't have a dGPU, because the model with the full iGPU isn't really designed to be used with one.
Gloriathewitch@reddit
Correct me if I'm wrong, but isn't the 288V an efficiency-focused, small-core-count chip, while the H is more like an HX with better power efficiency? I feel like it wouldn't be 1.74x if you compared a 285H to the 358H, and this is a little disingenuous perhaps.
BuchMaister@reddit
Panther Lake has a broader TDP range than Lunar Lake: we're talking 8-37 W for Lunar Lake compared to 15-80 W for Panther Lake across the range (there are U, P and H variants). H should be in the range of 25-80 W. Actual power configuration is up to the OEMs. But generally speaking, yes, the H will consume more power and probably did for that test; for the same power you can probably expect about 20-30% better GPU performance.
grumble11@reddit
If it's a 20-30% performance increase at equal power but it has 50% more Xe units (12 vs 8), and they're Xe3 and not Xe2... that's not a good outcome. You would expect a minimum of 50%, probably 60%+, given it should also see a bump from the next-gen Xe unit architecture. 20% would be a disaster in my opinion.
BuchMaister@reddit
Not so much. Xe3 is not Celestial; it's Battlemage with some refinements to improve efficiency and utilization of the Xe cores. Xe3P will be Celestial, so in that sense what they're using with Panther Lake isn't "full next generation". Also remember they use the same process node for both the Lunar Lake GPU die and the Panther Lake compute die.
protos9321@reddit (OP)
This is a GPU comparison, not a CPU comparison. From Xe1 to Xe2, the OpenCL scores (which this comparison is about) went down, but actual performance across games and apps increased. So this could actually be almost a worst-case scenario. (Though it's probably not, as performance seems in line with Nvidia and AMD.)
Any_Carpenter_7605@reddit
The Core Ultra 9 285H has the Arc 140T, which does better than the Arc 140V in OpenCL and is similar to the Radeon 890M. Yet it performs the same or a little worse in games compared to the Arc 140V. Panther Lake's 12 Xe3 will certainly be better at gaming than both iGPUs, but OpenCL performance is not a good way to gauge it.
Johnny_Oro@reddit
Geekbench is a good gauge for productivity performance, not so much for gaming, especially considering Arc GPUs have struggled to use their compute power efficiently compared to their competitors.
I remember the B580 losing to the A770 and even to the A750 according to PugetBench. But gaming performance is a very different story. Xe3 will surely be even more efficient for gaming.
Kekeripo@reddit
Hopefully that translates well to real-world gaming performance. Bit of a shame that the 12 Xe seems to be exclusive to the big-boy 16-core chip. A 6- or 8-core version would have been lovely for handheld use.
Other than that, I'm baffled that AMD allowed itself to be overtaken by Intel in the iGPU space. They basically owned that space for the past 4 years and haven't even announced something to replace the RDNA 3.5 chips, outside the rebadged Z2 line.
Hopefully this will spark some rivalry to improve further.
GTRagnarok@reddit
I think calling it a 16-core is kind of overstating it, since every SKU except the lowest-end Core Ultra 3 only has 4 P cores. If this lineup is correct, half of the SKUs have the 16-core config. I bet the successor to the MSI Claw 8 will offer the 358H, with possibly a cheaper 338H option.
grumble11@reddit
The P-Core and E-Core performance isn't the same but it's a lot closer than it used to be. If they were to release a 16 E-Core chip, it would still be credibly a 16-core chip.
Intel's issue is material in terms of PPW and PPA; they're getting smoked by ARM-based solutions. The P-core team seems unable to deliver on PPW or PPA in particular, and their gen-over-gen improvements have been sluggish. E-cores are the future, and even then they need to improve a decent amount.
Had they kept the Royal Core team then they'd probably have a different direction on P-Core and maybe not do Unified Core later on (all E-Core), but that isn't what happened. I'm sad about Royal Core 2.0.
steve09089@reddit
This is an interesting change, that Asus is considering swapping the G14 to Intel, or at least making an Intel model. It seems to bode well for the overall efficiency of Panther Lake, or at least its performance and efficiency pairing.
Though I have my doubts it will truly be an iGPU-only model; the dGPU setup was kind of the whole point of the G14. I would put my money on them just testing iGPU performance as an overall evaluation of the platform, especially since this would still put the Panther Lake iGPU a decent bit behind a laptop 3050.
ComplexEntertainer13@reddit
Probably because consumers in those segments are wising up that Zen's performance advantage in gaming on desktop doesn't really transfer to laptops.
Also, AMD mixing architectures and its confusing product lineup are making people wary of Zen in the gaming laptop market in general. Nothing like buying "current gen" and ending up with a whole other architecture with considerably worse performance.
Intel might be renaming as well, but at least gaming performance has been generally consistent from Alder Lake onward.
Exist50@reddit
Does that not also apply to Intel?
ComplexEntertainer13@reddit
Yes?
I said
The point is that it doesn't really matter as much with Intel over the last few years. AMD has been hiding old chips in its stacks that, for gaming especially, can be a lot worse if you end up with the wrong chip.
Exist50@reddit
That's not true either, especially when you include literally anything but gaming.
Also, they're rebranding even Comet Lake now.
ComplexEntertainer13@reddit
In the aspect of this conversation it is.
Are you being intentionally obtuse? We are discussing why Asus might be changing back to Intel in one of their GAMING LINE OF LAPTOPS.
Show me the gaming laptop released recently with a Comet Lake CPU and how it is relevant to this discussion.
Exist50@reddit
No, as I said, not even that. You going to tell me gaming perf is the same between ADL, RPL, "ARL-U" (MTL-U-R), and actual ARL?
As I said, it also applies to gaming. But even gaming laptops do other non-gaming things. You don't get a laptop solely to game on, especially if you're going with an Intel iGPU, as this rumored chip selection would indicate.
If your argument is that inconsistent and misleading naming is a reason for them to choose Intel, you can't just ignore all the parts of Intel's lineup with inconsistent and misleading naming. Otherwise, just do the same for AMD.
torpedospurs@reddit
AMD is too complacent in the mobile segment. Zen 6 for laptops isn't expected until 2027. If I were Asus I'd go Panther Lake too. This is also why I don't think it will be an iGPU-only model.
Quiet_Honeydew_6760@reddit
No dGPU at all? Definitely not. I can't rule out an iGPU-only version with Panther Lake as a budget-friendly option, but this will definitely also be paired with a 5050, 5060, 5070, etc.
I wonder if this is a sign that Gorgon Point is a minor upgrade, and so the only improvements we'll see this year are from Intel.
I'm still hoping for a 16 inch strix halo laptop, maybe we'll finally see one at CES next year?
grumble11@reddit
I can kind of get it. They replace the XX50 series with a PTL model, since they figure this is for people who are willing to compromise on GPU performance but want to keep the price down and battery life up. It's the 'premium zippy thin and light' crowd. Then for people who are more invested in the GPU they offer more powerful solutions.
Essentially, this might be a viable alternative to the XX50 series, even though it's materially weaker, because a lot of people who buy XX50 series laptops really just want a moderately capable GPU but want to keep prices low and portability and battery life high. PTL can probably approach that market from the bottom.
Don't think it kills the XX50 series as the performance gap is significant, but I can imagine that it'll capture some of the 'better than a typical iGPU but okay if it isn't a very powerful GPU' market.
imaginary_num6er@reddit
I don't get it. What's the point of their Zenbook brand then?
Front_Expression_367@reddit
Zenbook is a general light-use, high-to-mid tier lineup. Like, really light use, to the point that they only integrate 1 fan into the cooling system, which really doesn't sound gaming capable.
grumble11@reddit
The form factor is a big selling point. Ultimately a lot of consumers buy a product, not specs - a nice screen or a nice size can be more important than a Cinebench score. The Z13 tries to be a pocket powerhouse basically.
protos9321@reddit (OP)
Agree with grumble11 that it's about the form factor. However, if all they want to provide is a nice screen/speakers etc., they can just go with the Zenbook. To go with the G14 there would probably be another reason. The only ones I can think of are 1) TDP and 2) lower cost than developing an all-new device for a TDP higher than 35W. Right now the only mainstream non-gaming laptops Asus sells that can make use of high-TDP SoCs are the Zenbook Pro and Vivobook Pro. As far as I remember, the Zenbook Pro has been MIA for a while, and the Vivobook Pro is not going to be as premium as something like a Lenovo Yoga Pro 7i. Instead of making an all-new design, Asus can just take the G14, make some changes, and sell it. On the inside, with no dGPU and lower cooling requirements, they could add another SSD and increase the battery capacity. On the outside, they can remove the power port and replace it with another USB-C, make all the USB-C ports Thunderbolt 4, and throw in a 1000-nit screen with a good anti-reflectivity coating and a better trackpad, and you could have what would arguably be among the best in the segment.
OutrageousAccess7@reddit
Yes, it's good for Geekbench. So how about real-world gaming performance? Radeon excels at Time Spy and Fire Strike and all that. Don't get ahead of yourselves.
BuchMaister@reddit
You can extrapolate from Intel's graphs (if they're worth anything) about 20-30% better performance at the same power. For real-world numbers, you'll have to wait.
Johnny_Oro@reddit
Given that it's a new architecture, Xe3 cores are going to be better utilized for gaming than Xe2 at the absolute least. And Xe2 was already fast, and was made even faster by a recent driver patch.
advester@reddit
That's not bad news. But it is measuring a GPU tile fabbed by TSMC. The critical thing for Intel is getting their foundry in order. We already know TSMC has good silicon.
KeyboardG@reddit
Double the cores brings 74% uplift. More news at 11.
grahaman27@reddit
8 to 12 = 50% more cores.
74% > 50%
Therefore -- this is surprising and good news :)
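[Editor's note] The arithmetic above can be sanity-checked with a quick sketch. The 8 Xe2 / 12 Xe3 core counts and the 1.74x Geekbench OpenCL ratio come from this thread; splitting the uplift into a core-count term and a residual per-core term is an illustration, not a rigorous scaling model (GPU performance rarely scales perfectly linearly with core count):

```python
# Lunar Lake 140V iGPU: 8 Xe2 cores. Leaked Panther Lake 358H: 12 Xe3 cores.
lnl_cores = 8
ptl_cores = 12
total_uplift = 1.74  # leaked Geekbench OpenCL ratio vs. the 288V

# Part of the uplift explained by core count alone (assuming linear scaling).
core_scaling = ptl_cores / lnl_cores            # 1.5x

# Residual gain per core from architecture, clocks, bandwidth, and drivers.
per_core_gain = total_uplift / core_scaling     # 1.74 / 1.5 = 1.16x

print(f"core scaling:  {core_scaling:.2f}x")
print(f"per-core gain: {per_core_gain:.2f}x")
```

Under that linear-scaling assumption, roughly a 16% per-core improvement remains on top of the 50% extra cores, which is why the 74% figure reads as more than just a wider GPU.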
KeyboardG@reddit
Intel Core Ultra X7 358H @ 1.90 GHz 1 Processor, 16 Cores
Intel Core Ultra 9 288V @ 3.30 GHz 1 Processor, 8 Cores
steve09089@reddit
This is an iGPU performance benchmark, not a CPU performance benchmark.
grahaman27@reddit
Talking about the GPU, right? Cores on the GPU are what matter for GPU benchmarks.
grahaman27@reddit
Lunar Lake had great iGPU performance but was held back by the CPU. Looks like that won't be the case with Panther Lake; more room to stretch its legs in every direction.
Great job, Intel, for giving us what we actually want.
Front_Expression_367@reddit
I am surprised there is a variant of the Zephyrus G14 that uses PTL before Strix Halo, tbh.