These new Asus Lunar Lake laptops with 27+ hours of battery life kinda prove it's not just x86 vs Arm when it comes to power efficiency
Posted by TwelveSilverSwords@reddit | hardware | View on Reddit | 161 comments
cap811crm114@reddit
I’ve wondered how much is SoC design. I have a 2019 16” MacBook Pro (8 core Intel Core i9) and a 2023 16” MacBook Pro (M2 Pro), both with 32 GB of memory. Granted, the Intel MacBook is four years older, but the battery difference is astounding. The M2 gets about four times the battery life (doing office type things - Word, Outlook, PowerPoint, etc).
I’m thinking that in the case of Intel there is a chip and Apple had to design around it. With the Apple Silicon the chip design folks are literally next door to the system folks, so they can be designed as a unit. “If we put the video decode on the M2 we can save a whole chip over here” or something like that.
I would think that there isn’t anything stopping Intel (or AMD) from some sort of cooperative arrangement with a laptop manufacturer to create an efficient x86 SoC (other than the small matter of cost - Apple can do it because of their volume).
mmcnl@reddit
Chip design is important but the vertical integration you mention matters less I think. I think Apple Silicon would work great on Windows too in theory.
BigBasket9778@reddit
Nope, the vertical integration is the most important part.
mmcnl@reddit
Why? Are you saying the chips without macOS are not that powerful?
moofunk@reddit
Many issues in OSes vs. the hardware can come down to bugs or lack of documentation of the hardware, so they just don't bother.
For Apple, it is quite an advantage as a HW developer that you can just email the OS guys to ask to fix a particular bug and have it done in a few days, instead of waiting months or years for a driver fix because Intel didn't bother to prioritize you, and the guy who wrote the driver got fired 2 years ago without Apple's knowledge.
Then also you have integrated testing, where you can carry out test cycles to a degree that would not be possible without the external vendor being in the room.
Vertical integration is wildly important for bug fixing against hardware problems.
mmcnl@reddit
I think the importance of this is overstated. Apple had no problems running iOS on Samsung ARM chips for years. Apple Silicon is fast because the chips are best-in-class. Performance is also great in Asahi Linux for example.
moofunk@reddit
And I think you're understating it, by ignoring things like power management, standby power consumption, management of power to externally connected units and sleep/wake performance, where macOS always has been so wildly much better than Windows.
Heck, there was a thread in this sub the other day about how Apple are the only ones that can do proper sleep/wake on laptops with months of standby time and immediate sub-second wakeup, because they've been doing the exact same thing on their phones since 2008.
Asahi Linux doesn't have access to power management features yet and has pretty horrible performance in that regard.
Negative_Original385@reddit
Yes because I pay 2000 USD on a laptop to put it to sleep 6 months and then wake it up instantly, then wait another 6 months... then replace it after waking it up 6 times.
moofunk@reddit
The point is that standby time is important for trusting that the device still has power after a week of not using it. If the device still has power despite months going by between uses, that's an indication of how little power it would use if left for a week.
Negative_Original385@reddit
I'd be ok with 2 days - over weekend. anything over that means you're not really using that device to any level that requires consistent performance / service
BigBasket9778@reddit
The most important one is latency.
Sure; throughput on the Apple chips is good on Linux, but that’s not really why they feel so good. Latency is, and the latency is because the scheduler and chip are designed together. You don’t have the same snappiness on Linux as you do on Mac OS X.
unlocal@reddit
None of the shipped Apple SoCs were ever “Samsung chips”; even the original S5L8900 design was heavily reworked.
Morningst4r@reddit
Apple's vertical integration is why they can build enormous chips with very few compromises. Intel can't drop a whole bunch of legacy features without breaking software compatibility. They can't just make only huge CPUs because most of their market wants cheap processors. Apple doesn't have to recoup design costs from the hardware, they can make them back on software.
mmcnl@reddit
But there is also Snapdragon (ARM) for Windows and it's still not as good as Apple Silicon. If you are saying that due to vertical integration Apple can afford more expensive chips, then that makes sense. But the chips by themselves are still far ahead of the competition and that's purely from chip design and not software optimizations.
darthkers@reddit
The point the person above you is trying to make is that because Apple has everything vertically integrated, it doesn't need to make a profit on each individual part, only on the whole. Whereas someone like Qualcomm has to make a profit on the chip they sell, and the OEM making the laptop has to make a profit on the laptop they sell. Thus the Apple chip design team has fewer restrictions, allowing them to make better chips.
If you look at Qualcomm's Android chips, they always have very little cache, usually even less than ARM reference designs. Here it's obvious that increasing the cache would give a good boost in performance, but Qualcomm is more concerned about the chip cost, thus increasing its profits.
LeotardoDeCrapio@reddit
Yup. AMD, Intel, and Qualcomm basically follow the same business model. So they have to make their SoCs with area/cost as a main optimization directive, not just performance/watt.
Apple's M-series is basically the idealish scenario where you aren't as constrained as the other SoC designers because your revenue comes from the end consumer.
M-series are basically 1 to 2 generations ahead in uArch (where they can go wild in terms of core width and cache), in process node (Apple can afford to pay for the risk runs for the node and has a huge silicon team within TSMC), as well as in packaging (M-series has had backside PDN as silicon-on-silicon years before Intel gets their GAA + BSPDN 18A process out).
On top of that Apple controls the Operating System as well as the APIs that are highly optimized because they have full visibility of the system within the organization.
Negative_Original385@reddit
You now have intel laptops with 25h battery life, you can relax
Prior_Jump3645@reddit
What volume does Apple have? They have 8% global laptop market share. No one buys MacBooks except Americans...
pianobench007@reddit
It is exactly that. If you look at early photos of the Apple M1, they had the RAM or memory on the package of the CPU. Now 4 years later, Intel has a similar design: Lunar Lake with tiles and memory on the package.
What that means is less power, because the memory now shares the same power from the CPU/SoC. If you go back to regular old ATX motherboards, you can follow the traces from the dedicated VRM to the CPU and the dedicated VRMs to the RAM.
RAM on a motherboard, and even SODIMM sticks on a laptop motherboard, requires 1.25 to 1.5 volts. So they need separate board power delivery and extra hardware. All of which requires power.
Lunar Lake and Apple Silicon lessen that due to on-package RAM.
AMD will likely follow suit soon. They have to. Just like AMD went with chiplets, Intel had to shift towards tiles. This industry is a follow and then lead style.
Nothing wrong with that. It's just how things go. I am of course on team PC but I understand why others are on team Apple. Not my cup of tea, as I am old school and still do my own oil changes. So I need to know how things work so I can have it last.
Exist50@reddit
As I explained to you in a thread the other day, this is complete nonsense, and I have no idea where you got it from. The power delivery is the same for on-package or on-board memory.
Bananoflouda@reddit
The memory controller needs less voltage, so there are power savings, just not from the memory chips.
Exist50@reddit
Yes, from the memory controller. Because the signal integrity is better. Nothing to do with the above claim.
ahsan_shah@reddit
Because Intel Core i9 was still using dated 14nm Skylake architecture from 2015.
NeonBellyGlowngVomit@reddit
Yeah. Technology improves with time. Imagine that. Anything four years newer is likely going to be better at every metric.
Not quite the earth shattering revelation you proclaim it to be. Everything else you've mentioned are all things that are already done by Intel and AMD. x86 CPUs are already SOCs to a substantial degree. Apple just makes their boards more proprietary.... "or something like that."
Sorry mate, but you basically spent 3 paragraphs saying nothing.
cap811crm114@reddit
Then I apologize for wasting your clearly superior time. I will refrain from making any comments here in the future.
NeonBellyGlowngVomit@reddit
Suddenly the posting quality on Reddit improved substantially.
Danne660@reddit
What is wrong with you?
Admirable-Lie-9191@reddit
Your comment is unnecessarily hostile please stop posting on here so I can enjoy reading this subreddit.
Thanks
ursastara@reddit
What's crazy is that at the time, the 2019 MacBook Pro you had was considered to have really good battery life lol. Yeah, Apple SoCs completely changed the game.
cap811crm114@reddit
I recently had a business trip that included three hours in the air. I knew that the power adapter for the M2 MacBook would draw too much power from the AC plug, so for the three hours I just ran off the battery. It went from 80% to 63% on that flight. Granted I wasn’t doing any videos or gaming, but I was using the WiFi. When I got to the client site I didn’t bother to plug it in because I didn’t need to.
TwelveSilverSwords@reddit (OP)
Microarchitecture, SoC design and process node are more important factors than the ISA.
Vb_33@reddit
Which is good news for x86 compatibility. Why settle for ARM's compatibility woes when x86 can yield good enough efficiency and compatibility?
vlakreeh@reddit
To play devil's advocate, when it comes to perf/watt in highly parallelized workloads Qualcomm and especially Apple outmatch Intel and AMD. Qualcomm's 12 cores with battery life similar to Lunar Lake is very appealing if you are looking for a thin and light laptop to run applications you know have arm native versions and you aren't gaming. As a SWE (so all the programs I wanted have arm versions) I was looking for a high core count laptop to replace my M1 MacBook Air and Qualcomm looked incredibly appealing with its MT performance while providing good battery life. I ended up getting a used MacBook Pro with an M3 Max because Qualcomm didn't have good Linux support, but if they did I'd definitely opt for it over a 4P+4E Lunar Lake design.
Negative_Original385@reddit
"to run applications you know have arm native versions and you aren't gaming" Successfully described 3% of the laptop population.
vlakreeh@reddit
You're like a month late, but... An incredibly large portion of laptop users are just using their browser, a handful of electron apps, and maybe some productivity applications like MS office with only a handful of applications that aren't natively ARM that'll run just fine under Prism. There's plenty of people where an ARM laptop would work just fine.
Negative_Original385@reddit
... but 15% less fine than without the emulation, on hardware that already gets less battery life than the latest generation Intel... can't see the point?
Vb_33@reddit
Hopefully Qualcomm get their shit together with Linux they have decent chips.
Particular-Crazy-359@reddit
Linux? Who uses that?
Negative_Original385@reddit
Lin-what?
Particular-Crazy-359@reddit
Linux is useless, can't even game on it
Strazdas1@reddit
"Linux? Is that something we can sue?"
-- Qualcomm exec
Abject_Radio4179@reddit
Isn’t it a bit too early to make those statements, before actual reviews are out?
This reviewer examined multiple power modes on a Snapdragon X elite laptop, and at full power when rendering in Cinebench the battery life is a mere 1 hour: https://youtu.be/SVz7oGGG2jE?si=E2vImax5c9zbTp3R
DerpSenpai@reddit
The X Elite has far better performance/watt in Cinebench R24 than Strix and Meteor Lake.
Abject_Radio4179@reddit
Perf/watt is purely academic if the battery lasts a mere 1h in Cinebench.
All I’m saying is to wait for independent reviews before jumping to conclusions.
Kagemand@reddit
Sure, but again, it's not about x86 vs ARM. Most IT departments aren't going to deal with the headaches of switching to ARM for relatively minor client performance gains.
XY-chroma@reddit
Qualcomm and TSMC*
FTFY. Agree on everything else.
Helpdesk_Guy@reddit
* with TSMC's 3nm backing it, you forgot to mention.
Strazdas1@reddit
So same as Apples chips?
Helpdesk_Guy@reddit
Yes, though it's not as if people aren't literally asking "Why settle for ARM's compatibility woes when x86 can yield good enough efficiency and compatibility?"
Strazdas1@reddit
Apple outsources almost everything they manufacture. But that's not the point here. The point is that you can have x86 perform just as well as the best ARM has to offer (Apple) when it is on the same production node. Ergo, ISA is not important.
Helpdesk_Guy@reddit
I never was talking about anything ISA either. All I was saying, with my '* with TSMC's 3nm backing it' was Intel needing TSMC for it, despite trying to be a foundry.
bigdbag999@reddit
Lunar Lake doesn't prove anything. The RISC vs CISC argument is a tale as old as time, and misunderstood. Of course ISA is meaningless in a debate about power efficiency, relatively speaking.
thatnitai@reddit
Why doesn't it prove it then?
Sopel97@reddit
because there's a fuck ton of differing assumptions
thatnitai@reddit
But a different ISA already means a different CPU, and that's sort of the point here - that ISA X isn't inherently more battery efficient than ISA Y. To somewhat prove this claim it's enough to find an example.
Sopel97@reddit
only the frontend needs to differ. If you take for example snapdragon and lunar lake then everything differs. Even including the platform that's outside of the CPU, while still contributing to the measurement.
No, that only proves the modern x86-based systems are roughly as efficient as modern ARM-based systems. It's a completely different claim.
thatnitai@reddit
When you say frontend, what do you mean? I don't follow.
Sopel97@reddit
the part of the cpu that decodes instructions
thatnitai@reddit
I don't think that's how it works. Risc vs cisc involves a lot more than just some instruction decoder logic... But I think I get what you mean.
MilkFew2273@reddit
There is no real RISC or CISC; the ISA is translated to microcode and microcode is RISC. The ARM vs x86-64 power debate is relevant to that part only: how translating and being backwards compatible affects internal design considerations, branch prediction etc. Gains are mostly driven by process at this point.
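To make that concrete, here is a toy sketch of the "CISC front end, RISC-like internals" idea. The instruction names and micro-op format are invented purely for illustration - real decoders are vastly more involved than this:

```python
# Toy illustration: an x86-style instruction with a memory operand gets
# cracked into several simple, register-only micro-ops. Instruction names
# and micro-op syntax are made up; this is not a real decoder.

def crack(instr: str) -> list[str]:
    """Split a CISC-style instruction into RISC-like micro-ops."""
    op, _, operands = instr.partition(" ")
    dst, src = [s.strip() for s in operands.split(",")]
    if op == "add" and dst.startswith("["):          # e.g. add [mem], reg
        addr = dst.strip("[]")
        return [
            f"load  t0, {addr}",     # read the memory operand
            f"add   t0, t0, {src}",  # ALU op works on registers only
            f"store t0, {addr}",     # write the result back
        ]
    return [instr]                                    # already "RISC-like"

if __name__ == "__main__":
    for uop in crack("add [rbx+8], rax"):
        print(uop)
```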
mycall@reddit
This makes me wonder why one CPU can't have multiple ISAs.
Sopel97@reddit
They kinda do already, as technically microcode is its own ISA. It's just not exposed to the user. Exposing two different ISAs would create very hard compatibility problems for operating systems and lower levels. It's just not worth it.
steve09089@reddit
The comment is probably under the assumption that it’s always been a widely held belief that ISA is meaningless to power efficiency in the grand scheme of things.
By this belief, Lunar Lake being super power efficient doesn’t prove anything because there was nothing to prove to begin with.
bigdbag999@reddit
Definitely not a widely held belief, as this post is evidence of, and the countless debates about ARM vs x86 on places like /r/hardware. But otherwise yes exactly.
For the uninitiated or those with some hobby-level knowledge, a great starting place to begin learning all about this kind of stuff: https://cs.stanford.edu/people/eroberts/courses/soco/projects/risc/risccisc/#:%7E:text=The%20CISC%20approach%20attempts%20to,number%20of%20instructions%20per%20program
My university coursework was a lot more convoluted than the material on this site; it's great.
LeotardoDeCrapio@reddit
I mean, that's an undergrad project presentation from 20+ years ago...
bigdbag999@reddit
I think it's still relevant to helping people understand basics, and is as effective as ever due to great illustrations and examples. I saw your other reply, obviously you get it, maybe you work in industry as I do (did, at this point). Don't you think we should try to share information for folks who passionately talk about things they don't really get?
LeotardoDeCrapio@reddit
Absolutely. Especially in this sub, with people literally going at each other over stuff they don't understand.
I was just bantering btw.
autogyrophilia@reddit
That's why x86 is basically RISC at this point.
LeotardoDeCrapio@reddit
ISA and microarchitecture were decoupled decades ago. It's a meaningless debate at all levels at this point.
CookbookReviews@reddit
Yeah but what is the cost? x86 complexity and legacy add logic, increasing the cost of the die. The Lunar Lake BOM is going to be higher since they're outsourcing to TSMC (I've read the cost is 2X, not sure if that source is valid). The Snapdragon X Elite is originally $160 (from the Dell leak), but due to the PMIC issue, it's really $140.
ISA does matter because it influences the microarchitecture which influences cost. ISA doesn't matter for speed but does matter for cost. Extra logic isn't free.
No-Relationship8261@reddit
Snapdragon x Elite is 171mm2
Lunar lake is 186 mm2
Cost issue is due to Intel fabs sitting empty. Not because Intel is paying significantly more to TSMC
TwelveSilverSwords@reddit (OP)
Lunar Lake.
140 mm² N3B Compute Tile.
46 mm² N6 PCH Tile.
Packaged together with Foveros.
X Elite.
170 mm² N4 monolithic SoC.
Since TSMC N3B is said to be about 25% more expensive than N4, it means the compute tile of Lunar Lake alone costs as much as a whole X Elite SoC. On top of that Lunar Lake also has an N6 tile, which is then all packaged together with Foveros. So clearly, Lunar Lake should be more expensive to manufacture than X Elite.
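A rough back-of-the-envelope under those assumptions - die cost proportional to area, N3B at ~1.25x the per-mm² cost of N4 as quoted above, N6 guessed at roughly half of N4, and Foveros packaging cost ignored entirely (the N6 ratio and the linear-cost model are assumptions, not thread figures):

```python
# Relative silicon cost, normalized to N4 cost per mm².
# Assumptions: linear cost with area, N3B = 1.25x N4 (quoted above),
# N6 = 0.5x N4 (a guess), packaging cost not included.

COST_PER_MM2 = {"N3B": 1.25, "N4": 1.00, "N6": 0.50}  # relative units

def die_cost(area_mm2: float, node: str) -> float:
    return area_mm2 * COST_PER_MM2[node]

lunar_lake = die_cost(140, "N3B") + die_cost(46, "N6")  # compute + PCH tiles
x_elite    = die_cost(170, "N4")                        # monolithic SoC

print(f"Lunar Lake silicon: {lunar_lake:.0f} units")    # ~198
print(f"X Elite silicon:    {x_elite:.0f} units")       # 170
print(f"Ratio: {lunar_lake / x_elite:.2f}x")            # ~1.16x before packaging
```

Even with these generous assumptions the silicon alone lands nowhere near 2x, which is roughly the point argued in the replies below.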
No-Relationship8261@reddit
I don't disagree, but 2x?
Like if the cost of adding N6 and foveros is so much they should have just built everything in N3B. It would have been cheaper.
Helpdesk_Guy@reddit
How did you even come up with that dodgy trick of deranged mental acrobatics, attributing a SoC's increased BOM costs (through extensively multi-layered and thus complex packaging), *while being outsourced* at higher cost to begin with, to being magically caused solely by Intel's latent vacancy in their own fabs?!
How does that make even sense anyway?!
Right … since Intel just hit the jackpot and magically ends up paying *less* for their own designs, while outsourcing them as more complex multi-layered and thus by definition more expensively packaged SoCs, than building and packaging it by themselves at lower costs.
Say, do you do stretching and mental gymnastics for a living? Since you're quite good at it!
No-Relationship8261@reddit
It's cheaper to build in house because you get to keep the profit of building the chip.
It's more expensive for Intel to use TSMC because they could have used their own fab and only pay the cost. It's not 2x more expensive because TSMC hates Intel or anything...
If in a hypothetical scenario, Intel fabs were already 100% busy, then the cost wouldn't be 2x, because then it would only be calculated as what they pay to TSMC.
That 2x rumor comes from the fact that basically Intel pays its own fabs to produce nothing, on top of what it pays TSMC to produce the chips.
If packaging was as expensive as the compute tile, no one and I mean no one would have used it... Like, bigger dies' costs scale non-linearly with area, but at 200mm2 it's not even close. (A 200mm2 chip is always better than 2 100mm2 chips packaged with Foveros. The only reason the 2nd option exists is because it's cheaper.)
You could argue that Intel is not paying 2x of what Qualcomm is paying for similar die space, as keeping the fabs open is irrelevant. But if you are thinking that, you should be answering the reply above me saying Intel doesn't pay 2x what Qualcomm does for a smaller compute tile...
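On the "bigger dies cost non-linearly more" point, here is a minimal sketch using a toy Poisson yield model. The defect density is an illustrative guess, not a real process figure, and wafer-edge losses and packaging cost (the other side of the tradeoff) are ignored:

```python
import math

# Toy Poisson yield model: yield falls exponentially with die area, so the
# wafer area spent per *good* die grows faster than the die itself.
DEFECTS_PER_MM2 = 0.001   # 0.1 defects/cm², assumed purely for illustration

def yield_rate(area_mm2: float) -> float:
    """Fraction of dies with zero killer defects."""
    return math.exp(-area_mm2 * DEFECTS_PER_MM2)

def wafer_area_per_good_die(area_mm2: float) -> float:
    """Silicon area consumed per good die (ignores wafer-edge effects)."""
    return area_mm2 / yield_rate(area_mm2)

one_big   = wafer_area_per_good_die(200)
two_small = 2 * wafer_area_per_good_die(100)

print(f"one 200 mm² die : {one_big:.1f} mm² of wafer per good die")   # ~244
print(f"two 100 mm² dies: {two_small:.1f} mm² of wafer per good pair") # ~221
```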
ExtremeFreedom@reddit
That's cost savings for the manufacturer; none of the Snapdragon laptops have been "cheap", and the specific Asus talked about in that article is going to be $1k, so the same cost that I've seen for low-end Snapdragon. Cost savings for Snapdragon are all theoretical and there is a real performance hit with them. The actual cost to consumers probably needs to be 50-70% of where they are now.
CookbookReviews@reddit
I'm talking about the BOM (Bill of Materials), not consumer cost. Many of the laptop manufacturers tried selling the QCOM PCs as AI PCs and upcharged (that's why you're already seeing discounts). Snapdragon X Elite has a lower cost and higher margin for QCOM than Intel chips.
https://irrationalanalysis.substack.com/p/dell-mega-leak-analysis
TwelveSilverSwords@reddit (OP)
Yup. The OEMs seemingly decided to pocket the savings instead of passing it along to the consumers.
laffer1@reddit
Snapdragon X isn’t cheap so far, but Snapdragon is cheap. A Dell Inspiron with an older chip was 300 dollars in May. It’s fine for browsing and casual stuff. Five hour battery life on that model.
The snapdragon x chips are a huge jump but they aren’t the first windows arm products
Fascist-Reddit69@reddit
x86 still has higher idle power than the average ARM SoC. The Apple M4 idles around 1 W while typical x86 idles around 5 W.
steve09089@reddit
Bro pulls numbers out of his ass lmao.
Even my laptop with H series Alder Lake can technically idle at 3 watts power draw for the whole laptop
delta_p_delta_x@reddit
That's not true. On my 8-core Xeon W-11955M (equivalent to an Intel Core i9-11950H), which is a top-end laptop part, I can achieve 1–2 W idle.
gunfell@reddit
That is not really true.
NerdProcrastinating@reddit
That's totally missing the point as your statement focuses on the ISA being the characteristic of significance as a causative factor behind SoC idle power usage.
Exist50@reddit
This is more about the SoC arch than the CPU, really.
ExeusV@reddit
People have been explaining it to naive investors for years on r/stocks
DerpSenpai@reddit
ISA matters for developing front-ends. On ARM you can make 10-wide frontends, while on x86 you can't.
EloquentPinguin@reddit
Where is the evidence for that?
Depending on what you are looking for, Skymont already has a 9-wide decode, Zen 5 has an 8-wide decode and a completely 8-wide frontend, so why should 10 be impossible? After the decoder the ISA also starts to matter a lot less. So Skymont's 9-wide decode (3x3) is very close to your "impossible" 10 figure.
No matter how wide x86 frontends were, people have always said "but (current width + 2) is not feasible in x86" and later it happens.
DerpSenpai@reddit
Skymont is 3x3 and not 9-wide, not the same thing
https://x.com/divBy_zero/status/1830002237269024843/photo/1
There are workarounds, but it's a tradeoff you wouldn't have to make if you made it simpler
EloquentPinguin@reddit
It surely isn't the same thing, but that only begs the question: "Does it matter that it uses a split decoder, or not?"
And without further evidence I would suggest the default: I don't know if it actually matters for throughput or significantly for PPA.
But we don't know how big the tradeoff is. For all we know it could be sub-1% and might be merely an implementation detail. What should not be overlooked is that decoding is not the most dominant part of the frontend. Branch prediction, dispatching, and scheduling are all super complex when building wide frontends, independent of the ISA. So the question is: does the split decoder matter? And the answer is: we don't have evidence to suggest either way.
The mentioned presentation "Computers Architectures Should Go Brrrrr" has been discussed at length in the RISC-V subreddit (ofc. especially from the RISC-V perspective) and discussed: https://www.reddit.com/r/RISCV/comments/1f6h7ji/eric_quinnell_critique_of_the_riscvs_rvc_and_rvv/
Especially camel coders comment about uop handling is worth checking out.
BookinCookie@reddit
Split decoders are better, especially with regard to scalability. Nothing’s stopping you from making something like an 8x4 32-wide decoder, which would be infeasible to create without the split design, especially on X86.
autobauss@reddit
Nothing matters, only consumer perception does
ConsistencyWelder@reddit
Why do we keep regurgitating Intel's claims as if they're facts? We shouldn't conclude anything about performance or battery life until we see independent, third party testing.
Intel has done this before: withhold review samples when they know they have a bad or mediocre product, talk up the hype, and release the product before the review embargo.
GlitterPhantomGr@reddit
I don't know if this can be trusted; in Lenovo's PSREF, the Yoga Slim 7i Aura Edition (Intel) lasted less than the Yoga Slim 7x (Qualcomm) on the 1080p local video benchmark.
Helpdesk_Guy@reddit
That has been the status quo for literal decades now … Not that I would endorse it, but you know …
Media-outlets getting their free stuff. He who has the gold makes the rules!
pastari@reddit
"Oh wow a reviewer broke embargo? ... oh wait its pcgamer, they're just regurgitating marketing numbers."
Click on comments to see how we're feeling about LL today and everyone is taking a first-party ~~benchmark~~ advertisement as truth. C'mon, guys.
teen-a-rama@reddit
Will believe it when I lay my hands on one.
aminorityofone@reddit
not enough upvotes. 27 hours as claimed by Asus.... have an upvote
Esoteric1776@reddit
27+ hours maybe for video playback at <200 nits, which is an irrelevant figure for the 99% and is being used as marketing BS
xpander5@reddit
Huh? SDR content is mastered at a peak brightness of 100 nits. Do you really believe that more than 99% of people are cranking brightness to above 200 nits?
Esoteric1776@reddit
I do, because I've heard numerous people complain about display quality on screens with under 200 nits. The average laptop in 2024 has 250-400 nits, with high end models pushing 500 nits+. Same goes for cell phones: the average range is 500-800 nits, with high end pushing 1000 nits. TVs average 300-600, with high end pushing 1000 nits+. Consumers in 2024 are used to having more than 200 nits on their displays, as most modern devices exceed that. It also begs the question: if 100 nits is the ideal brightness, why can most modern displays far exceed it? People also want uniformity, and having one device with substantially lower nits is not it.
Strazdas1@reddit
100 nits is so low it will give you eye strain quickly in a well lit room.
Rjman86@reddit
>200 nits is basically required if you want content to be tolerable if there's a window in the same room you're watching in, doubly so if the device has a glossy screen.
xpander5@reddit
Idk if you guys have bad eyesight or what but I do have a window and I lower the brightness because even 100 nits is too bright. (semi-glossy 27GR95QE)
Esoteric1776@reddit
Idk if you're on medications such as Tetracycline antibiotics, Retinoids, Diuretics, Antidepressants, or Antipsychotics, but these do have potential photosensitivity side effects.
InvertedPickleTaco@reddit
Keep in mind Snapdragon X laptops are shipping with 50-60 watt hour batteries. The first company to shoehorn in something in the 90 range will cross 20 hours of usable battery life easily without a node change.
Strazdas1@reddit
There are multiple apple laptops with 99Whr batteries.
TwelveSilverSwords@reddit (OP)
It's a year with so many exciting chips being released. Sad that Anandtech won't be doing deep dives on them. There is hardly any other outlet that does investigative analysis/reviews of hardware like they did.
I place my trust in Geekerwan and Chips&Cheese.
InvertedPickleTaco@reddit
Agreed. We are in an age of populist tech media, even fairly honest folks like GN tend to get caught up in emotional reviews rather than sticking to facts. I took a risk and went with a Snapdragon X Elite Lenovo laptop. Since reviews made absolutely no sense in my opinion and seemed oddly focused on gaming or editing 8K movies, I just had to buy and self review within the return time frame. Luckily it worked out and I won't be going back to X86 on mobile unless something truly astounding comes out.
TwelveSilverSwords@reddit (OP)
The Yoga Slim 7x?
InvertedPickleTaco@reddit
Yes. I regularly get 12-15 hours of battery life out of the machine. I use browser apps, Microsoft 365 apps, and Adobe Photoshop with no issues. Discord and even some of my x86 apps for automotive diagnostics work great too. I know that there are still emulation issues with some apps, but hopefully once ARM native versions are the norm rather than the exception these machines can sell well.
DerpSenpai@reddit
The only bad thing about that laptop is the trackpad; Microsoft is the only one of the Windows OEMs who got it right. The rest is really good.
InvertedPickleTaco@reddit
I've had no issues with the track pad, but I only use it when I'm writing emails on my couch or bed. That's just my experience, though, and trackpads do have some subjectivity when they're reviewed.
DazzlingHighway7476@reddit
and guess what??? some laptops are 70 watt compared to intel's 70 watt and intel wins, LOL!
and intel has better performance overall!
DerpSenpai@reddit
No it doesn't. The X Elite has far better multi core performance
DazzlingHighway7476@reddit
I said overall, not better cpu.
InvertedPickleTaco@reddit
I'm all for competition. It means better laptops for consumers.
Maximum_Stop6720@reddit
My digital photo frame gives 30 day battery backup
Strazdas1@reddit
your digital photo frame probably uses same tech as e-readers where they use extremely small amounts of power while the image is static.
CyAScott@reddit
I wonder how it performs when I put it in hibernate/sleep mode then boot it up again a few days later - is the battery percentage close to when I set it to hibernate/sleep mode? My biggest problem was not the duration but that going into hibernate or sleep mode drained the battery in a few days, which was very annoying for a laptop.
teen-a-rama@reddit
It’s always about sleep mode. Hibernate = powered off and OS state stored on hard disk, basically zero battery drain.
S3 sleep was supposed to save the state to RAM, but they got rid of it and now went all in on S0 (modern standby, like smartphones).
S0 has been a mess and S3 is quite power hungry too (can go as high as a couple percent per hour), so I'm looking forward to seeing if the claims are true.
Strazdas1@reddit
Memory can be hungry, and saving state to RAM means you have to keep it on - and you have to power ALL of the RAM, you can't power only the parts of it you use.
auradragon1@reddit
Cinebench R24 ST:
M3: 12.7 points/watt, 141 score
X Elite: 9.3 points/watt, 123 score
AMD HX 370: 3.74 points/watt, 116 score
AMD 8845HS: 3.1 points/watt, 102 score
Intel 155H: 3.1 points/watt, 102 score
Let's wait for benchmarks. So far, Strix Point has not equaled Apple and Nuvia chips in ST perf/watt.
One of the most important factors in battery efficiency is ST speed & perf/watt, because most benchmarks measure web browsing or "light office work", which depend on ST. You can always run ST at drastically lower clocks to improve efficiency but you sacrifice speed. On a Mac, the speed is exactly the same on battery as plugged in - right up until your battery drops below 10%, then Macs turn off the P cores.
In the slides Intel showed, they showed a power curve only for MT and not ST. This tells me Lunar Lake will still be behind Nuvia and Apple in ST perf/watt.
MT efficiency scales much more easily than ST for chip design companies.
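To put those efficiency figures in perspective, here is the implied average package power, working backwards from the list above (assuming points/watt is simply score divided by average package power, which is how these numbers are usually derived but is an assumption here):

```python
# Recover the implied average package power from the figures quoted above.
results = {               # name: (Cinebench R24 ST score, points per watt)
    "M3":          (141, 12.7),
    "X Elite":     (123, 9.3),
    "AMD HX 370":  (116, 3.74),
    "AMD 8845HS":  (102, 3.1),
    "Intel 155H":  (102, 3.1),
}

for name, (score, pts_per_watt) in results.items():
    watts = score / pts_per_watt
    print(f"{name:>11}: ~{watts:.0f} W average package power")
# M3 ~11 W, X Elite ~13 W, HX 370 ~31 W, 8845HS and 155H ~33 W
```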
somethingknew123@reddit
Points per watt will almost certainly be much higher. Intel itself said it designed Lunar Lake as a 9W part and you can see the result in the 37W max turbo power for all models.
For comparison, Meteor Lake's max turbo power was 115W. This means an ST test won’t have as much power pumping through it because the core will stop scaling much earlier.
My bet is points per watt is between X Elite and M3, much closer to M3.
DerpSenpai@reddit
Lunar Lake does not use 9 W at 5 GHz. It uses far more, and most laptops are not doing sub-20 W TDPs for sustained, let alone burst
somethingknew123@reddit
Duh.
ShelterAggravating50@reddit
No way the AMD 370 uses 31 W for a single core. I think it's just the OEMs throwing all the power they can at it
auradragon1@reddit
It’s package power.
conquer69@reddit
The issue with that test is that no one is running cinebench on battery. Even though I don't like it, the 24/7 video streaming battery life test is still more relevant than cinebench.
dagmx@reddit
The issue with the video streaming test is that a lot of these laptops with high battery life are FHD screens.
They’re streaming less data, decoding less data and rendering less pixels.
conquer69@reddit
I assume any competent reviewer would pick the same video resolution for all the laptops.
dagmx@reddit
Perhaps, but right now all these sites/youtubers regurgitating Intel's claims aren’t caveating that.
Either way, I guess the meta point is there’s a need for good reviewers who understand the different aspects of use to benchmark.
agracadabara@reddit
Is it really? There are too many factors here. All the tested systems have batteries in the 70Wh+ range. Most of these seem to have OLED panels. On movie content, which generally has very low APL ( including black bars for aspect ratio) OLED panels draw much less power.
You can’t draw any meaningful SOC efficiency data from video playback tests at all.
laffer1@reddit
On the flip side, most people who would buy an Apple m4, Qualcomm snapdragon or lunar lake laptop that care about battery life are going to do web browsing and office apps mostly. It’s not going to be heavy workloads. If it were, they would have to buy a fast cpu instead.
I personally care about multithreaded sustained workload like compiling software. I want 4 hours doing that. No one benchmarks that.
agracadabara@reddit
That’s the point I am making video playback tests are not representative workloads for most people. Most people aren’t using these systems to binge watch Netflix.
Likewise Cinebench is also not representative of general workloads, but it is a far better metric for measuring CPU efficiency than video playback. Video playback just tells you how efficient the media engine is on the SoC. Video playback is also dependent on system configuration. So manufacturers claiming a system achieves 27 hrs of battery life is mostly meaningless.
TwelveSilverSwords@reddit (OP)
Single core performance and efficiency are particularly important for web browsing.
That's why Cinebench 2024 ST is relevant.
laffer1@reddit
Even fairly low end new systems can handle web browsing though. In apple's case, it's not even representative since they have javascript acceleration. A browser specific or javascript-specific test would be better.
tacticalangus@reddit
Intel 288V scored ~130 in R24 ST according to the Intel press materials. Unsure what the power draw was.
I am reasonably confident that LNL will render X Elite obsolete for the vast majority of users. Real world battery life should be similar, far superior GPU performance, slightly higher ST performance but less MT throughput. Most importantly, no x86 compatibility issues.
gunfell@reddit
Cinebench is really not relevant. Like at all
coffeandcream@reddit
Yes,
Had high hopes for either AMD or Intel to slap both Qualcomm and Apple around, but here we are.
Not impressed at all, both AMD and Intel playing catch-up for quite some time now.
DerpSenpai@reddit
And Intel has a node advantage on the X Elite
auradragon1@reddit
Yep. The thing is, I think Lunar Lake will beat Strix Point in perf and perf/watt.
The worry I have for Intel is that Strix Point will be far cheaper to manufacture because it's on N4P, a mature node while Lunar Lake is on N3B. Therefore, Lunar Lake will be in limited quantities and have a high price.
The theme I see in Intel's execution has been that there are some good things in each generation they release, but there is always 1 or 2 fatal flaws.
For example, Meteor Lake - caught up in perf/watt vs mobile AMD but can't scale in core count and manufacturing difficulties and low raw perf.
Alder Lake - great perf, but very high power.
Raptor Lake - okayish refresh, but very high power and unstable.
Arc - Not bad $/perf but low raw performance and poor driver support.
Sapphire Rapids - generally good perf but poor core count scaling, not competitive perf/watt
There is always a "but" in every Intel product over the last 5.
Abject_Radio4179@reddit
Why do you assume that N3B is low yielding? Process yields go up with time. The yield numbers from 2023 are not applicable anymore in 2024.
Agile_Rain4486@reddit
Benchmarks don't prove shit; in real world usage like coding, MS Office, surfing, and tool use, the scenarios are completely different.
Snobby_Grifter@reddit
Strix Point is trash and shouldn't even be mentioned.
no_salty_no_jealousy@reddit
The Qualcomm X CPU hype didn't last long, did it? After Intel Lunar Lake released, I'm not sure if people are still interested in Arm CPUs on laptops, because with Lunar Lake you have better battery life but also no compatibility issues at all, since all apps and games run natively on Windows.
DerpSenpai@reddit
Remindme! 70 weeks
LeotardoDeCrapio@reddit
SnapDragon X was 1 year late. They lost most of their window of opportunity. So now they are stuck with an SoC with close to zero value proposition, except for a couple corner use cases. Thus they have a negligible market penetration.
Which is a pity because Oryon looks like a very nice core.
ChampionshipTop6699@reddit
That’s seriously impressive! 27+ hours of battery life is a game changer for laptops. It really shows how far power efficiency has come, not just with ARM but across the board. This could make a big difference for people who need long lasting performance on the go
trololololo2137@reddit
irl it's still 4 hours or less
mmcnl@reddit
It's not just about on the go. Longer battery life means fewer charge cycles too, which slows down battery degradation. It also means you're more likely to get equal performance on battery compared to being plugged in, even if it's just for an hour. And battery longevity also correlates with less heat and fan noise, so your laptop fan doesn't go haywire when you're on a Teams call while plugged in. Battery life is just one metric.
iindigo@reddit
Yep. The battery on my ThinkPad X1 Nano is in considerably worse shape than that of the 16” M1 Pro MBP that’s a similar age, even though the Nano has only seen a fraction of the usage that the MBP has because it eats through cycles like candy in comparison (and its awful standby times don’t help with this).
TwelveSilverSwords@reddit (OP)
And having to charge less often means your electricity bill is lower, and it's good for the environment too!
RaXXu5@reddit
laptop/phones are negligible on electricity bills.
Killmeplsok@reddit
Yeah, I'm running an XPS 24/7 (8550U, so not particularly efficient nowadays) as a home server. I was curious about its power consumption and decided to throw a power monitor behind it for a couple months; turns out it uses about 1.5 dollars a month.
yoge2020@reddit
I usually skip first gen products; glad I didn't jump on X Elite or Meteor Lake. Lunar Lake looks better suited for most of my needs.
iam_ryuk@reddit
https://youtu.be/ba5w8rKwd_c?si=-VFzvr5sE4IVBt7g
Found this channel last week. Lot of info in this video about the improvements in Lunar Lake.
TwelveSilverSwords@reddit (OP)
High Yield. Cool guy.
NeroClaudius199907@reddit
Improvements to the Lp 2e?
GoldenMic@reddit
Let’s see it in the real world first