I Tested Snapdragon X2 Elite Early - Performance Preview
Posted by vlakreeh@reddit | hardware | View on Reddit | 85 comments
lazarus_ps@reddit
I'm interested in whether it will run MySQL, MariaDB, PrestaShop, Docker. I miss developer-focused tests.
xX_CommentTroll_Xx@reddit
this is the most edgy thumbnail i’ve ever seen
Basic_Adeptness_9273@reddit
I don't even get what he was trying to say
DYMAXIONman@reddit
It's not their "last chance". ARM has inherent advantages over x86, and Microsoft wants it to become the default for the Windows platform.
Basic_Adeptness_9273@reddit
I don't even get this thumbnail. It just popped up on YouTube. Last chance for what?
-protonsandneutrons-@reddit
Meta / rant:
The gen-on-gen comparison is kind of misleading. He is comparing the X2E 88 (18C, nearly highest X2E clocks) vs the X1E 78 (12C, slowest X1E SKU, no Turbo).
I do not know why so many laptop reviewers have this problem.
Desktop reviews usually get it. Nobody would test a 9900X (12C, nearly highest 1T clocks) vs a 7600X (6C, slowest 1T clocks) and then remark (3:45): "Let's be honest, this kind of performance uplift in a single generation is huge."
Mate, you are not comparing a single-generation leap. We also jumped like 5+ CPUs up in the product stack. The appropriate comparison does (or rather will) exist: the X2E 78, another 12C, low-tier SKU. We saw the same with the X9 388H (PTL) being compared to last-gen mid-tier CPUs.
//
Laptop reviewers seem notorious for omitting clocks / cores for each CPU in the charts: with the alphabet soup AMD, Intel, and Qualcomm use, it'd be beneficial to have more clarity on which SKU is which.
Desktop reviewers (usually) get it and it's a little easier because we can match by price, as well. If the X2E 88 is found in all the cheaper laptops that housed the X1E 78, it makes more sense. But until then, just match the SKUs or provide that context for users.
//
Also, it seems like this has slower perf / GHz 1T on CB2024? Some quick maths:
X1E 78 (108 pts / 3.4 GHz): 31.8 Pts / GHz
X2E 88 (146 pts / 4.7 GHz): 31.0 Pts / GHz
I would hope for more optimizations, but ASUS specifically said the Cinebench numbers were in line, so it is weird. However, it may be like Geekerwan's 8EG5 results, where the CPU actually notably throttles at 1T boost, so it can't hold the 4.7 GHz clock (thus dividing by 4.7 GHz is not accurate).
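A minimal sketch of the quick maths above (scores and clocks are the ones quoted in this thread, not official figures):

```python
# Sanity-check of the perf-per-GHz maths: Cinebench 2024 1T score
# divided by the quoted boost clock. Values are from the thread,
# not official spec sheets.
results = {
    "X1E 78": (108, 3.4),  # (CB2024 1T pts, clock in GHz)
    "X2E 88": (146, 4.7),
}

for sku, (score, clock_ghz) in results.items():
    print(f"{sku}: {score / clock_ghz:.1f} pts/GHz")
```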
Siats@reddit
IPC is preferably compared at the same clocks; it doesn't scale linearly, it decreases at higher clocks.
-protonsandneutrons-@reddit
Theoretically, yes, but so far in Qualcomm's Oryon uArches, it's been minimal loss of scaling, if any, in CB2024. Per Notebookcheck's database of Cinebench 2024 1T results,
Qualcomm X1E 78, 3.4 GHz 1T: 108 pts → 31.8 pts / GHz
Qualcomm X1E 84, 4.2 GHz 1T: 133 pts → 31.7 pts / GHz
Even with +800 MHz, it's nigh identical perf / GHz within the same uArch. Perhaps X2E is significantly different or the clock gap is not large enough in my example here.
//
Though, even if perf / GHz did not scale linearly, I expected something left in the tank at the boost frequency: dropping to 0%, or here -2%, scaling is a little unexpected.
Merbil2000@reddit
https://share.google/5eoFjfR41EynEhqkd
They advertised 154 points for the 88 SKU, so this is falling short of the mark.
-protonsandneutrons-@reddit
That's interesting. It'd not be the first time that shipping laptops don't meet Qualcomm's claimed numbers, though, so I'm hesitant to take the 1st party results at face value.
HC data: X1E 78 (108 pts / 3.4 GHz): 31.8 Pts / GHz
HC data: X2E 88 (146 pts / 4.7 GHz): 31.0 Pts / GHz
//
QC 1st party data: X1E 00 (132 pts / 4.3 GHz): 30.7 Pts / GHz
QC 1st party data: X2E 88 (155 pts / 4.7 GHz): 33.0 Pts / GHz
Though there is a large footnote that this is best-case perf, plugged in (per Qualcomm). I might've missed it in the video if Hardware Canucks did the same.
basedIITian@reddit
30 Watts configuration per Hardware Canucks. Probably will get a bit higher with 40 Watts.
Forsaken_Arm5698@reddit
On the other hand, Apple is incredibly strong in Cinebench 1T. Their lead is even bigger than in GB6 (which uses SME to 'inflate' the score).
-protonsandneutrons-@reddit
They are very dominant and it'd be interesting to see why floating-point is still so good for Apple. Though, as CB is purely floating point, it is a little limited.
It'd be nice to have a cross-platform integer benchmark, or for Geekbench's results to separate its integer vs floating-point vs SIMD-accelerated benchmarks (x86 CPUs have AVX 'inflation'; Arm CPUs have SME 'inflation'). You can hand-calculate it if you have the result URL, but reviewers usually only report the top-line geometric mean (65% int; 35% fp), so we're stuck with the mass of public, unverified GB6 database results.
CalmSpinach2140@reddit
Simply because M4 and later got a big boost in FP. Apple focused on it. Right now Qualcomm is in between M3 and M4 on a better process node.
Merbil2000@reddit
Uh, isn't it the other way around?
https://www.youtube.com/watch?v=YJaHi-gZESo
Qualcomm is 2 gens behind Apple in INT, but only 1 gen behind in FP.
TwoTimeHollySurvivor@reddit
There is no 'inflating' the score in GB6. It calculates the final result from the harmonic mean of all the subtests.
jaaval@reddit
It computes geometric mean of scores within integer and floating point workloads and then weighted arithmetic mean of the two categories.
Geometric mean is more robust against different test score scales but it doesn't mean that apple getting 2x scores of anybody else in object detection task wouldn't have an impact on the overall score. It's still a mean.
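As an illustrative sketch of that aggregation (the subtest scores are made up; the 65% int / 35% fp weighting is the one quoted elsewhere in this thread):

```python
from math import prod

def geomean(scores):
    # Geometric mean: robust to subtests with very different score scales.
    return prod(scores) ** (1 / len(scores))

def gb6_style_total(int_scores, fp_scores, int_weight=0.65, fp_weight=0.35):
    # Geometric mean within each category, then a weighted arithmetic mean
    # across the two categories, as described in the comment above.
    return int_weight * geomean(int_scores) + fp_weight * geomean(fp_scores)

# Made-up subtest scores: doubling one FP subtest (e.g. object detection)
# still raises the total, just by less than 2x -- a mean damps, not erases.
baseline = gb6_style_total([3000, 3100, 2900], [3200, 3000])
outlier = gb6_style_total([3000, 3100, 2900], [6400, 3000])
print(baseline < outlier < 2 * baseline)
```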
CalmSpinach2140@reddit
The new X Elite 2 also has SME support in GB6.
nithrean@reddit
do we have a sense of whether apple's architecture is really that far ahead or if it has a lot to do with the memory subsystem and the on die ram?
Forsaken_Arm5698@reddit
there is no "on die RAM".
It's "on-package RAM". But that doesn't make it magically better. It's just standard LPDDR but soldered on the same PCB as the SoC. Whereas normal RAM is soldered to the motherboard.
Noble00_@reddit
At first look I didn't notice, but it is rather fair. Especially since X1E is discounted due to the impressions it received. Can't say for sure, but I think QC will be more confident in raising the pricing since they seem more adamant on the performance of these vs competitors (Apple money).
But, tbh, I think ppl (consumers and the PC HW community) are more looking forward to how these will compare to Intel/Apple (...AMD) than their trial run that was X1E.
DerpSenpai@reddit
X1E laptops are ending but X1P are still going strong
Noble00_@reddit
Haven't looked, but I suspect X1P are bangers for entry level market for the battery life and snappiness
Merbil2000@reddit
The A18 Macbook will slay that market.
NeroClaudius199907@reddit
There wont be enough supply
Forsaken_Arm5698@reddit
That remains to be seen. People were saying the iPhone 16e will kill midrange Androids, but that has not panned out so far.
okoroezenwa@reddit
Who is saying that??? It is doing absolutely none of that at the price point Apple released it (even discounted). Same for this “cheap” MB if it gets priced stupidly.
Forsaken_Arm5698@reddit
That will eventually be replaced by the 6 core X2 Plus model I guess.
Merbil2000@reddit
Which has a potato GPU worse than the X1 Elite
Forsaken_Arm5698@reddit
If it's going into sub-$800 laptops, it's a non-issue. That market doesn't care about the GPU anyway.
DerpSenpai@reddit
Which is fine. Do you call a Ryzen 5 a potato GPU?
Ok-Candidate5141@reddit
Except that it will be paired with a discrete GPU in most cases.
DerpSenpai@reddit
Idk if QC wants to sell for that low. I reckon the X1P chip will be sold in non-Copilot+ PCs with 8GB sooner rather than later.
DerpSenpai@reddit
The 78 has the same MT score as the 80-84 SKUs btw, it's just single-core where it falls short.
Merbil2000@reddit
This feels like Qualcomm's Netburst era, and it's not looking good.
Merbil2000@reddit
This naming scheme is terrible. Not as bad as Intel or AMD, but there is no pattern and it's just ugly (dashes and a useless 100 hanging at the end).
theQuandary@reddit
I'd like to see something like X2D-11 through X2S-99 where S,A,B,C,D indicate performance tier and 11-99 indicate a single digit CPU + single-digit GPU performance score within that tier (so 19 would be the small CPU + big GPU aimed at console-type devices and 90 would be a desktop-grade CPU with no integrated GPU).
Forsaken_Arm5698@reddit
That sounds a bit complicated, and doesn't fit into Qualcomm's existing lettering scheme (P = Plus, E = Elite). I'd suggest a simpler solution:
X denotes the Snapdragon X brand, the first number (2) denotes the generation, the second number denotes the CPU, and the third number denotes the GPU. The suffix letter at the end denotes the tier (non/Plus/Elite).
I'd also argue that they have way too many SKUs. The 84 and 80 are identical, except for the fact that the 84 has 4.7 GHz dual core boost, whereas the 80 has 4.7 GHz single core boost. Every other SKU has dual core boost, so why this oddity? This SKU shouldn't exist imo.
I also think the 78 doesn't deserve the Elite suffix, since it has no boost whatsoever (which cuts down the ST perf significantly). But they did that in the last gen too, and have set a bad precedent.
theQuandary@reddit
I don't know that yours is significantly simpler. I'd argue that D-tier to S-tier is instantly understandable by most of the market that actually cares about specific SKUs and provides a lot more meaningful context than E vs P. For the remainder of the market, "bigger number = better" is probably good enough.
I agree about too many products. I think 3-5 SKUs would be plenty, but both laptop makers and other chip makers (eg, Intel and AMD) disagree.
One-End1795@reddit
This is a sponsored video. It says that very clearly. So, yes, the comparisons don't make sense and he says things that don't make sense, either. That is why he is getting paid to do it.
pomyuo@reddit
Hardware Canucks is literally a scam channel I don't know why they haven't been banned from Reddit
basedIITian@reddit
The barrage of Intel sponsored videos for PTL were definitely okay though.
One-End1795@reddit
The community should not normalize this behavior, period. They are posting paid-for reviews, and people act like this is normal. It isn't, nor should it be, regardless of who is paying them.
Merbil2000@reddit
It's not a review, it's a preview. Read the title.
The devices are still under embargo, so proper reviews will have to wait.
wintrmt3@reddit
No, it's an ad.
steve09089@reddit
Were any of them Intel sponsored, or were they OEM sponsored?
Ok_Pineapple_5700@reddit
It's been two years and some people are still gatekeeping chips on Windows
trololololo2137@reddit
"Idle-normalized platform power, idle periods removed"-> the idle power draw is horrible on the new chips and it would be embarassing to compare to our older gen on larger node
basedIITian@reddit
It's to isolate the core power. Andrei has commented about this in the past.
trololololo2137@reddit
i don't care about core power consumption. i care about the end result (which is what apple has done and destroyed intel and amd)
NeroClaudius199907@reddit
Why dont you just buy apple, it meets your ends
trololololo2137@reddit
i have one already and had x elite before returning it because it was barely functioning on release
basedIITian@reddit
it's a very interesting CPU architecture comparison. not everything will cater to your interests.
trololololo2137@reddit
its not a real comparison, its worthless marketing junk
Forsaken_Arm5698@reddit
In this sub, we very much value such discussions.
DerpSenpai@reddit
QC power consumption on the X2 will be far better than X1. What are you smoking? QC has always used this metric, and it isn't a bad one at all because it forces OEMs to adopt the best PMICs and DRAM. On Intel platforms you would see wild battery life differences because of budget ICs on the motherboard.
trololololo2137@reddit
I'd be more convinced if the video showed the battery life :)
By specifically excluding the impact of these components at idle? Of course at 100% load the DRAM power draw will be a drop in the bucket.
theQuandary@reddit
We already saw a massive efficiency boost going from X1 to 8 Elite. This is now the 3rd generation. It doesn't seem reasonable that they are going backward in power consumption with improved variants of the same uarch.
nithrean@reddit
has anyone found battery life comparisons between the new qualcomm's and intel's 3rd gen?
oguzhan377@reddit
2 or 3 more weeks.
DerpSenpai@reddit
Wait a few weeks for the official launch
IBM296@reddit
You won't find any till the laptops are released in March.
Forsaken_Arm5698@reddit
The GPU results are somewhat frustrating to see, considering that (as I feel) Qualcomm hasn't done their best with it. The GPU on the X1 Elite was so underwhelming because it was an overclocked mobile GPU on an older architecture.
This X2 Elite uses their latest architecture, has 33% more cores than their flagship mobile chip (2048 vs 1536 ALUs), and a ~42% higher clock speed (1.7 GHz vs 1.2 GHz). They could've gone for more cores and scaled up higher like Apple (A18 6-core -> M5 10-core, +66% more cores).
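As a rough sketch, theoretical ALU throughput scales with ALU count × clock (using only the numbers quoted above, and ignoring bandwidth, drivers, and architectural differences):

```python
# Theoretical ALU-throughput scaling from the figures quoted in the comment
# above; these are the thread's numbers, not confirmed specs.
mobile_alus, mobile_ghz = 1536, 1.2  # flagship mobile GPU (per the comment)
x2e_alus, x2e_ghz = 2048, 1.7        # X2 Elite GPU (per the comment)

scaling = (x2e_alus * x2e_ghz) / (mobile_alus * mobile_ghz)
print(f"~{(scaling - 1) * 100:.0f}% more theoretical throughput")
```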
DerpSenpai@reddit
The X2E GPU is good for what it is. They should not do an X9-type GPU and then have the drivers suck ass. This GPU is good enough for gaming and a good platform to improve the drivers on.
basedIITian@reddit
The only conclusion required is that it games at a very acceptable level for an ultralight SKU, unless we have decided Intel's last-gen and AMD's current-gen performance is suddenly inadequate.
Forsaken_Arm5698@reddit
I agree, but I am also disappointed because they had the potential to do much better.
DerpSenpai@reddit
This generation is the defining moment for WoA. Nvidia joining + QCs best effort. If they stick, Qualcomm can do far more in this space, including what you said. Much fatter GPUs
Forsaken_Arm5698@reddit
Well yes, it's good to see Qualcomm is pushing through and not giving up.
Looks at Intel discrete GPUs
DerpSenpai@reddit
Intel is also trying to push through their GPU effort. They might end up not doing many more discrete GPUs. The market isn't really there for it. For every discrete GPU they make, they need to have it be sold in laptops to make a profit on that die.
basedIITian@reddit
I agree in principle, but chasing gamers' approval is a fool's errand. Not only will they never be satisfied, but even if it were satisfactory they wouldn't choose to buy it anyway.
Ok-Candidate5141@reddit
Apple never chased gamers' approval. Yet made Pro and Max chips.
DerpSenpai@reddit
The economics are not there for Pro and Max chips from QC yet (technically the CPU in the Elite Extreme is Pro and Max level)
Ok-Candidate5141@reddit
If only QC sells these machines at M5 pricing.
DerpSenpai@reddit
Depends on the RAM-gadon.
I'm already prelisted to buy the Asus A16 with the Extreme variant and 48GB for 2000€. Seems like a great price for that amount of RAM these days.
Forsaken_Arm5698@reddit
The X2 Plus isn't a worthy M5 competitor because it has no boost clock and GPU is significantly cut down.
Ok-Candidate5141@reddit
True
Merbil2000@reddit
It'll be up to Nvidia to improve WoA's reputation, because with Qualcomm right now, it's in the gutter.
Noble00_@reddit
Looks like I opened reddit at a good time!
So it seems they are testing the *non extreme* variant: the lowest 18-core SKU, without the 5 GHz boost and 192-bit bus.
With limited testing, as you'd expect, great CPU performance. No nuanced power consumption analysis, but at similar TDP to the highest-end PTL: ~47% faster in CB R24 nT, ~12% faster in 1T, ~72% faster in CPU rendering in Blender, and ~47% faster in CPU transcode in Handbrake.
theregoesmyfutur@reddit
better than panther lake?
jaaval@reddit
In single-core performance it is assuredly better. Panther Lake is just slightly tweaked from the previous generation while this probably has more new things. Also, if the CPU has more cores it is practically always better in scalable multithreaded workloads.
But whether it's better overall depends on a lot of things. I'm more interested in how the SoC design works with power. Also I would like it to work with Linux.
DerpSenpai@reddit
CPU is better than Panther Lake, we already knew that; perf/W on X9 is the same as Lunar Lake, so QC will continue the lead here too.
Noble00_@reddit
Ehh, I'll wait for embargo (don't know when that is), but definitely is interesting and competitive. There'll obviously be some workloads here and there where PTL shines, but where workloads take advantage of the CPU's perf per thread this'll blaze ahead. I'm a bit tepid on the GPU, HW acceleration workloads, but from early testing, looks promising. Of course, it'll all come down to pricing and QC's aggressiveness in the SW ecosystem.
This all said, ngl, I am still impressed by the M5 (since they showed it on the graphs). On both the CPU and GPU fronts, it holds up really well despite being released months ago. Especially if you're in the US market where there can be good discounts (though the state of everything right now is pretty piss-poor so I can't really say where Intel/QC/Apple stands).
Merbil2000@reddit
What happened to that guy? ARM's strongest defender.
okoroezenwa@reddit
They took a break for unknown reasons.
Noble00_@reddit
C'mon, I wouldn't say he was ARM's strongest defender^(um, i think there are others who would fit that description lol). I think Swords was quite objective with their analysis, and did critique X1 SW pitfalls at the time. Plus their posts gave this sub interesting discussions. Quite a few usual suspects here carry on that legacy.