Qualcomm Snapdragon X2 Elite Extreme Analysis, Benchmarks & Efficiency - Serious rival for Apple and a problem for AMD & Intel
Posted by Geddagod@reddit | hardware | View on Reddit | 77 comments
ishsreddit@reddit
The biggest differentiator for me isn't the performance. My travel laptop, which I use most of the time, has the Ryzen 7 5800U, and I suspect there won't be a real tangible difference in raw performance (I get 11k in multi-core R23). But having tried the Snapdragon X Elite (16GB spec), I was incredibly impressed by the smoothness and stability of Windows on ARM despite the lower memory. On W11 on x86, there are noticeable slowdowns and catastrophic stability problems on 16GB.
I still want to give it one more generation, but it's evident Microsoft will favor ARM over time. Sucks for those of us who are gamers, though. But I have found myself streaming more and more.
Angry_Homer@reddit
Why are people rooting so hard for Qualcomm to fail here? I understand they're not the greatest company around, but neither are Intel and AMD. More competition is always better, and tbh Lunar Lake, which gets glazed hard online, is a one-trick pony with good battery life but performance that gets demolished by even Apple's entry-level Mac chips.
GenericUser1983@reddit
The whole ARM ecosystem in general seems to encourage locked down machines that remove control from the actual end user, while x86_64 is much, much more open in comparison.
Angry_Homer@reddit
It's locked down in comparison to x86, yes, but not at all when compared to the apple silicon stuff that people won't shut up about. At least the storage isn't soldered down on these qcom machines (even if they're worse than a macbook in like every other way lol).
Again, I don't love qcom, but more competition in the PC space is always better. Would be great if another vendor could come in with an ARM chip that allows for a bit more freedom. Mayyybe Nvidia's upcoming thing will be better? Doubt it, but at least it's competition.
sSTtssSTts@reddit
I don't know the details but I've heard nothing but bad things from those who have dealt with QComm.
Consistently worse than MS is how they're generally seen in the industry.
I don't mind ARM but I've seen enough to know not to give QComm any of my money.
Would love to see a capable RISC-V processor pop up in the market. It's MUCH more open than everything else, and it's a fairly decent architecture too. Software is a mess for it right now, though, and I don't see it getting official Windows support any time soon, if ever.
empty_branch437@reddit
Because intel caught up and Qualcomm barely has anything going for it in laptops.
Exist50@reddit
Are we reading the same review?
GHz-Man@reddit
I thought in reviews, Panther Lake had essentially caught up to ARM in battery life? PCWorld got over 25 hours on the Galaxy Book6 Ultra.
Qsand0@reddit
If you're just watching local videos and typing documents yes. But it's still far behind in intensive tasks on battery
jocnews@reddit
Maybe a sense of protecting x86, but perhaps also Apple fans. They bear a grudge over Qualcomm's court feud with Apple, and likely just because Qualcomm powers Android phones.
Just look at their angryposting about Epic, of all things, a company that gives away free games throughout the year. SOOOO EVIL!
BandeFromMars@reddit
Because they're a company built on hubris, thinking they can just waltz into any market and own it. They didn't help themselves by telling Linux users to basically F off and by misleading people during the X Elite launch. And for me personally, they're the catalyst for this AI PC garbage that's been forced down our throats. The competition isn't sleeping either.
Ok_Pineapple_5700@reddit
No one is forcing you to use Copilot, and they were the first ones to push for 16GB of RAM. Companies were still selling 8GB in late 2024.
Show me where they said it? They said it was a small market, so they would be losing a lot more on chips than they are right now with Windows. Do you also blame Apple for not supporting Linux or allowing us to dual boot?
BandeFromMars@reddit
That's not the problem; the problem is that it's everywhere and annoying.
I'll give them that, but this would've happened eventually without them.
They said it without explicitly saying F off.
I get that, and a small market is a valid excuse, but don't promise Linux support and then not even attempt to support it.
Linux can work on MacBooks (Asahi Linux); most people don't bother with it because macOS is good enough for them. Apple never promised Linux support, though, unlike Qualcomm, who went through the trouble of upstreaming some kernel support and heavily hinted at full support with a roadmap. Yet people have gotten Linux to work better on Apple Silicon.
BobTheBootGuy@reddit
There are changes that have already landed in the kernel that add support for the X2. You can look them up on Phoronix.
BandeFromMars@reddit
Will that honestly even matter, when everything else Qualcomm has promised has either not gotten done or is half-assed?
Ok_Pineapple_5700@reddit
I mean, it's the Windows platform. You can complain about it, but idk what Qualcomm can do.
So they didn't say what you said they did.
I can't seem to find the roadmap; maybe I missed it.
Some Linux distros work too, but you get abysmal battery life. And saying macOS is good enough for most people really isn't saying much, because you can also say Windows is good enough for most people.
Exist50@reddit
Of all the companies in this space, why say that of Qualcomm? What specific action or statement of theirs do you take issue with?
What?
zzzoom@reddit
Windows-only
basedIITian@reddit
Because this is in essence a gaming subreddit masquerading as a Hardware one.
-protonsandneutrons-@reddit
I noted this in the Hardware Canucks video, but the perf / GHz seems stagnant in Cinebench 2024 1T and it's even weirder with the total power draw.
Even with a node shrink and two generations of uArch, the P-cores have nearly identical performance per clock in CB2024 and even higher power. Of course, maybe ~6W are somewhere else in the laptop, but it's unfortunate.
Geekbench 1T is much better with 15% higher Pts / GHz (sure, some SME), though this is a jump of two uArches (Gen1 Oryon vs Gen3 Oryon).
Would love to see joules by Notebookcheck.
trololololo2137@reddit
The M5 is less efficient than the M1 at peak load. Every efficiency gain is offset by pumping up clocks, which is also happening here.
-protonsandneutrons-@reddit
Nah, not every efficiency gain is offset by pumping clocks. Clocks are not the only variable, obviously: we need to look at the nodes and the microarchitectures, too. Per NBC's older tests + OP:
Note how a node improvement allowed the M4 to have higher clocks and higher Pts / GHz and higher Pts / W.
How are you defining "efficiency"? Perf / W, Joules, or merely power?
trololololo2137@reddit
For me, efficiency would be the amount of work done per Wh. I don't have an M1 machine available at the moment, but my M1 Pro MBP consumes around 30-35W at 100% load (8 P-cores), while a new MBA hits over 40W for slightly better multicore perf with just 4 P-cores (for a few seconds, because of the passive cooling, of course).
-protonsandneutrons-@reddit
This isn't measured by anyone these days, which is why I wrote:
//
You're mixing up 1T tests (my tables) and your results. These are obviously not comparable.
The M5 also has six E-cores that are also used in nT tests. Again, not comparable to 1T tests.
trololololo2137@reddit
I'm actually interested in a comparison: how many code compiles or some other things can be done on one battery charge (normalized for capacity)
I assume M5 will be 2x faster but it will be only a bit better in total compiles on one charge
dev_vvvvv@reddit
Since power is usually measured in watts, and watts are joules/second, doesn't it all come down to joules (or watt-hours, where one watt-hour is just 3600 joules)?
-protonsandneutrons-@reddit
Not always, as some benchmarks are fixed-task workloads (run 10 tasks as quickly as possible) versus fixed-time workloads (do as many tasks as possible in 10 minutes). With the former (see Geekbench, SPEC), CPUs can boost up to higher watts and ostensibly save energy by simply completing the task faster. But that's not necessarily true. As an example, if the SoC boosts to 30W for 5 seconds (150 J) instead of running at 10W for 10 seconds (100 J), you've completed the task twice as fast but consumed 50% more energy.
//
However, Cinebench is the latter, a fixed-time workload. So average watts should be directly proportional to total joules.
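The fixed-task energy point above can be checked with a quick back-of-the-envelope calculation (the wattages and durations are the illustrative numbers from the comment, not measurements):

```python
def energy_joules(watts: float, seconds: float) -> float:
    """Energy = average power (W) * duration (s)."""
    return watts * seconds

boosted = energy_joules(30, 5)     # boost to 30 W, finish in 5 s  -> 150 J
sustained = energy_joules(10, 10)  # hold 10 W, finish in 10 s     -> 100 J

# The boosted run finishes 2x faster but uses 1.5x the energy,
# so "race to idle" doesn't automatically win on joules.
print(boosted, sustained, boosted / sustained)
```

This is why fixed-task benchmarks need energy (joules) reported, not just average power, to say anything about efficiency.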
MissionInfluence123@reddit
The guys at anandtech's forums posted some results without SME
ASUS Zenbook SORA 14 UX3407NA - Geekbench
It's the 88 version, but it still boosts to 4.7 GHz.
3537 / 4.7 GHz = 752 Pts/GHz (or a ~6% IPC increase)
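The per-clock arithmetic above is just score divided by boost clock; a small sketch, where the 3537 / 4.7 GHz figures come from the comment and the Gen-1 baseline is a hypothetical number purely for illustration:

```python
def pts_per_ghz(score: float, clock_ghz: float) -> float:
    """Normalize a 1T benchmark score by boost clock for a rough per-clock figure."""
    return score / clock_ghz

x2 = pts_per_ghz(3537, 4.7)    # figures quoted in the comment above
gen1 = pts_per_ghz(2800, 4.0)  # hypothetical Gen-1 Oryon baseline, illustration only

gain = x2 / gen1 - 1  # fractional per-clock improvement vs the assumed baseline
print(f"{x2:.0f} Pts/GHz, {gain:.1%} per-clock gain over the assumed baseline")
```

Note this ignores SME and memory-subsystem effects, so it's only a crude IPC proxy.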
-protonsandneutrons-@reddit
Thank you for sharing. It does seem like a smaller upgrade for the two-year gap.
dev_vvvvv@reddit
I keep seeing people touting single core and multicore performance from synthetic benchmarks. Especially for non-amd64 devices.
Why should I care when the real-world performance doesn't match those results? For example, the Asus ProArt PX13 HN7306EA vs the Apple MacBook Pro 16 2026 M5 Pro:
The other games weren't benchmarked on Apple silicon, so I'm guessing they won't even run. These Snapdragons, of course, perform even worse. Though they at least seem to run the software.
CalmSpinach2140@reddit
Good, now check productivity apps like the Adobe suite, Blender CPU and GPU, DaVinci Resolve CPU exports, and HandBrake CPU encoding. The M5 Pro will come out ahead by a huge margin in those tasks.
Macs suck at AAA games due to game devs not optimizing for Apple's GPUs, which use TBDR. You mentioned CPU benchmarks, and not once did you relate those tests to a real-world app that uses the CPU.
Watch this video; it shows you real-world CPU tests. The M5 Max has the same CPU as the M5 Pro. You need the HX parts from AMD and Intel that use like 150-170 watts to match the M5 Pro/Max.
https://youtu.be/xDHZ1bEEeUI?si=ws5a4AdFZQu4yrCU
intronert@reddit
As they say - “Lies, Damn Lies, and Benchmarks”. :)
Hour_Firefighter_707@reddit
The most important performance metric outside of efficiency and power draw for a mobile SoC is single core performance. AMD and Intel are getting slaughtered here
OafishWither66@reddit
More than the performance, their drivers are horrible. I had the displeasure of trying to run a game on one of these once, and it can't even run games that aren't on DX11/12 because of shit driver support, and the games it does run, run like ass.
Admirable-Extent2296@reddit
Linux with Mesa (Turnip) fixes this. I just hope Qualcomm will get the X2 series to actually work properly with Linux this time.
DerpSenpai@reddit
A QC engineer apparently said they fixed ACPI this time, so perhaps? But I heard it from someone on r/hardware.
Admirable-Extent2296@reddit
Thanks for your (almost) always on-point replies in this sub! Together with some other people, you are the reason I bothered creating a new Reddit account.
Well, ACPI support doesn't really matter to me; device trees are easy to write, they are mostly a write-once-and-forget thing, and taking the firmware from Windows isn't a hassle.
With the semi-official Ubuntu concept ISOs, which nearly everyone uses, there is no need to do anything if your device is popular enough. I guess it may slightly help reduce the workload and open up using any(?) ARM64 distro, so that's a welcome change.
What bothered me was the high power consumption in S0 sleep, funky clock throttling, fan control, and the missing hardware video decode, hence why I skipped the X Elite after some deliberation. I kinda stopped caring and just went with an AS MBA + remote Linux server setup.
OafishWither66@reddit
I'm pretty sure Qualcomm won't, considering how closely they're working with Microsoft.
Admirable-Extent2296@reddit
No, the two are unrelated; it's not like they didn't fund X Elite enablement at all. It's just that some things are still broken.
Also, the exclusivity deal they signed a decade ago has expired, so there is no need to be joined at the hip.
Exist50@reddit
There's some serious doubt that an exclusivity deal existed at all. At least Ian Cutress claimed it didn't.
PastaPandaSimon@reddit
Apple and Qualcomm have an absolute lead in hardware at this point. The elephant in the room that keeps x86's 95+% market share is software support and Windows: Qualcomm's chips only perform like this in the small subset of software that was re-made/recompiled for ARM, and Qualcomm is hoping enough users stick to such silos and won't run into things that frustrate them into going back to Intel/AMD.
Artoriuz@reddit
Depending on how you test, AMD and Intel don't end up looking absolutely pathetic: https://blog.hjc.im/spec-cpu-2017
I have no idea why this guy sticks to GCC 12, and Linux vs macOS numbers are not directly comparable, as there are more variables than just the hardware. He is also missing the latest hardware from both Apple and Qualcomm. Still, the x86 CPUs seem to be able to keep up a little better than what we generally see with Geekbench.
Still though, even if we take these numbers into consideration, a desktop class 9800X3D with extra cache needs to clock at almost 6 GHz to compete with the M4 Pro, which is literally last gen mobile hardware from Apple...
basedIITian@reddit
The 8 Elite, a flagship phone chipset from 2024, is better than all current x86 laptop processors in David's SPEC results. I would say that's extremely embarrassing for Intel and AMD.
dfv157@reddit
This has been mentioned before, but both QC and Apple have been designing mobile CPUs for a long time, with the M series and X1/X2 being offshoots of the mobile core design. In mobile settings, 1T performance is king, and thus the CPU design has always been geared that way.
Intel and AMD both design big cores and threads meant for datacenters and unlimited power. You can see clearly that Zen cores have the exact same 1T performance as their equivalent-generation Threadripper/Epyc parts. Intel is the same with the P-cores compared to Xeon. Different design philosophy, different result. The issue for legacy x86 is that ARM is starting to catch up on the unlimited-power side of things, so they need to refine the strategy or start digging x86's grave.
Exist50@reddit
That's a contradiction. Datacenter is very power constrained, and no one's hitting max desktop boost clocks with typical SKUs. They do not have the same 1T perf.
Actually, if you look at power per core, a server chip gets roughly 2W/core, almost identical to a mobile chip. So it would be more accurate to say that AMD and especially Intel are designing for desktop rather than either mobile or server.
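The per-core power point can be sanity-checked with rough numbers (the core counts and package powers here are illustrative placeholders, not figures from the thread):

```python
# Rough W/core budgets for typical configurations (illustrative numbers only).
configs = {
    "server (96 cores @ 200 W)": 200 / 96,
    "mobile (12 cores @ 25 W)": 25 / 12,
    "desktop (16 cores @ 170 W)": 170 / 16,
}

# Server and mobile land around ~2 W/core; desktop is the outlier at ~10 W/core,
# which is the point being made about where x86 cores are really tuned.
for name, w_per_core in configs.items():
    print(f"{name}: {w_per_core:.1f} W/core")
```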
Artoriuz@reddit
I think it's pretty reasonable to say they design for servers and then just clock things up for the desktop with little concern for power consumption...
theQuandary@reddit
I notice he explicitly turns off the -flto flag (link-time optimization). I'm not sure if that's standard for SPEC, but it generally reduces performance on Apple machines.
Front_Expression_367@reddit
"The most important performance metric outside of efficiency and power draw for a mobile SoC is single core performance. AMD and Intel are getting slaughtered here."
Ironically, I think they lost more in multi-core than in single-core. PTL is about 30% behind in multi-core and 15% behind in single-core, while efficiency in Cinebench 2024 single-core is apparently about the same between them? Although that test also had the X2 losing to the X Plus and X Elite, so I'm somewhat skeptical.
basedIITian@reddit
It compares power efficiency at peak performance. Watch the Hardware Canucks video to see performance comparison at same power budget between all 4 vendors. You'll see a huge lead for Qualcomm vs Intel and AMD.
PastaPandaSimon@reddit
Qualcomm did well buying the Oryon core team, which did a very good job.
The main issues remain Windows, legacy software compatibility, emulation performance, and GPU/gaming, which are very tough sells on the $1000+ devices most of these target; at that price, those are likely unacceptable compromises for most.
Geddagod@reddit (OP)
Not sure; I don't think they have a significant lead over the "stock" ARM cores seen in Mediatek's chips.
And we know that even in the past, Qcomm, and now Xiaomi, both had better implementations of ARM cores (at least for perf and power) on their own SoCs than Mediatek.
bwat47@reddit
well with a name like elite extreme it has to be good
pythonic_dude@reddit
But there's no "Ultra", "Max" or at least "Pro" anywhere there, so I have my doubts.
crab_quiche@reddit
No mention of AI in the name either, so I don’t think it will be able to do anything like checking email or opening excel
BandeFromMars@reddit
Qualcomm really missed the opportunity to call the top chip the X2 Elite Extreme X2E-96-100-AI. Maybe next time.
Qsand0@reddit
Dont worry. Elon won't miss it with his next kid.
Keulapaska@reddit
Give it like 3 years and there will be snapdragon elite pro max
DIYfu@reddit
Yeah, they'll at least need to use the current naming scheme and add to it over time, before destroying something well established and going for something completely different for no reason.
dagmx@reddit
Good on it for competing with x86, but almost all of those charts have it behind the last-gen Apple processors in the same class (the Mx Pro), except for multicore.
A lot of the time it’s competing with the base M5 despite being a class higher.
Idk how they arrived at the title of “a serious rival” when this is pretty much where Qualcomm has been for a while now.
IBM296@reddit
Yup. Apple is ahead in GPU and single-core performance just like with the M3 and X-Elite in 2023.
Although the GPU gap is not as bad this time as it was before.
Ok_Pineapple_5700@reddit
It's their second generation. The CPU is great; the GPU could be improved with more developer adoption and better drivers. Room to improve, but a good chip for a second generation.
Hour_Firefighter_707@reddit
What matters is the price. At least in the USA, these 18-core X2 Elites cost the same as M5 based MacBooks. They will cost more in other markets, sadly, but because the M5 Pro has gotten more expensive this year, they have more wiggle room for pricing.
And they are quite a bit more powerful than the M5 in multi-core, so there is that. GPU will be a lot slower in real world tests (things like Blender are still unsupported) and Photoshop and general single core snappiness will also be worse. But judging by Speedometer and Jetstream performance of X Elite, these should still be around M4 performance.
For people who don't like macOS, that is pretty good.
Apple is way ahead, of course. I'd love to be able to run Windows on an M5 MacBook Pro. But it is what it is and this is the best we have for now
Eddytion@reddit
It’s been 4 years in a row of claiming serious competition to Apple, and it’s also been 4 years of lies, cherry-picked benchmarks, and comparisons to entry-level/previous-gen parts. I’ll believe it when I see it.
IndependentMilkDrink@reddit
I hope the linux compatibility is ok this time around.
TommyYOyoyo@reddit
Damn. Although performance is still lagging behind the M4/M5 families, the X2 Elite seems to have better efficiency, if the figures provided are accurate...
inverseinternet@reddit
x86 is archaic tech and soon to be consigned to history at this rate.
RealThanny@reddit
I heard the same thing over 30 years ago when PowerPC came out.
dev_vvvvv@reddit
Just 2 more weeks and the 3 day "special replace x86 operation" will be complete.
Beautiful_Ninja@reddit
x86 remains dominant due to backwards compatibility. It's archaic on purpose: a business has a reasonable chance of running proprietary 32-bit software designed for Windows 98 on a modern PC. Rewriting your infrastructure to move away from legacy x86 to support ARM is an enormous undertaking that is realistically not feasible for most businesses.
tuhdo@reddit
Can it run local LLM models at a reasonable speed? If not, it's not a rival to the MacBooks, and it's still not an appealing choice for everything else.
ham_bulu@reddit
Slower single- and multi-core performance (except against the base M5), less efficient, slower GPU, and what's not in the test: much more expensive.
Nothing in these test results suggests there's a serious rival for Apple Silicon. Not even remotely.
basedIITian@reddit
The CB26 MT scores for the A16 are higher than the M5 Pro MBP's. And of course MT performance and efficiency are both better than the MBA's. And the A14 and A16 are priced at 1150 and 1600 dollars each, the latter in a 1TB/48GB configuration. The best you can do at that price with a MBA is 1TB/24GB, and the MBP is a different price segment entirely.
ProZoid_10@reddit
Why is the GPU getting no love? The price is good. Competition is lovely; I hope they continue devouring each other.
empty_branch437@reddit
They said that last time
BandeFromMars@reddit
Yep, this'll probably end up being the world's most powerful e-waste just like the X Elite.
Noble00_@reddit
Woot! Reviews coming in.
Interesting to see the Elite Extreme use a different node than the regular Elite/Plus. One thing I dislike is how the X2E-90 isn't 'Extreme' while the 94/96 are. Would've made things much simpler if the X2E-9x SKUs were the 'Extreme' SKUs... Also, finally, power sensors can be monitored; something so simple needing a new gen, lol.
Unsurprisingly, the core arch is ahead of its x86 counterparts from Intel and AMD.
1T/ST is not quite M4 level in CB24 and comparable to the M4 Pro in GB6.6; 15% faster than PTL.
nT is ways ahead! In performance mode it competes with the 16-core Strix Halo (and M4 Pro), though it can't quite match the M5 Pro (with the same core count but a swapped heterogeneous design: 12 prime + 6 performance vs 6 super + 12 performance).