Core Ultra 9 285K Geekbench 6 scores leak out: 4% faster than 9950X in ST and 14% in MT.
Posted by Famous_Wolverine3203@reddit | hardware | View on Reddit | 401 comments
https://wccftech.com/intel-core-ultra-9-285k-arrow-lake-cpu-blazes-past-core-i9-14900ks-ryzen-9-9950x-benchmark-leak/amp/
“Versus its true predecessor, the Intel Core i9-14900K, the CPU scores an 11.7% lead in single-core and a 10.2% lead in multi-core tests. “
“The CPU ends up 8% faster than the Core i9-14900KS & 4% faster than the Ryzen 9 9950X in single-core tests. In Multi-core, the CPU scores a 5.1% lead over the Core i9-14900KS and a 14% lead over the Ryzen 9 9950X”
654354365476435@reddit
Great, if it's at similar power then it's very good. If it takes 3x the power then it's still useless.
Cute-Pomegranate-966@reddit
If it takes 3x the power that would be nearly 600 watts, so I highly doubt it's that bad.
SnooPandas2964@reddit
Intel has already reached the limit with heat. You really can't cool more than 300w effectively with any system. Doesn't matter how big the rad or how many fans; the CPU die area is only so big, and it becomes the bottleneck. Intel has pulled the 'let's just give it power' trick for the last time, I'm pretty sure. They better have learned their lesson after the 14900KS. I couldn't believe they were stupid enough to launch that thing when the 14900K was already struggling to breathe.
joshglen@reddit
They could up it if they start delidding / using better thermal junction materials from factory...
SnooPandas2964@reddit
Yeah, that could help some. But there is a reason that heatspreader is there. I'm sure it could at least be improved in some way, be that the material the heatspreader is made of, how thick it is, or whatever is between the IHS and the die itself. IDK how much extra room that would give them. Some, for sure. 400w? Probably not.
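For a sense of why die area, not radiator size, becomes the wall: heat has to cross the die-to-IHS interface before the cooler ever matters, and that interface resistance is set by the die's size. A minimal back-of-the-envelope sketch, with illustrative (assumed, not measured) thermal resistances:

```cpp
// Back-of-the-envelope die temperature vs. package power.
// Both resistance values below are illustrative assumptions.
#include <cstdio>

int main() {
    const double t_ambient = 25.0;   // deg C, case air temperature
    const double r_die_ihs = 0.10;   // K/W: TIM + IHS, set by die area
    const double r_cooler  = 0.08;   // K/W: cold plate + radiator + fans
    const double powers[]  = {200.0, 300.0, 400.0};
    for (double p : powers) {
        double t_die = t_ambient + p * (r_die_ihs + r_cooler);
        std::printf("%3.0f W -> ~%.0f C die temp\n", p, t_die);
    }
    // A bigger radiator only shrinks r_cooler; r_die_ihs stays fixed,
    // so past some power the die overheats no matter the cooler.
    return 0;
}
```

With these assumed numbers, 400w already lands near throttle territory even with a very strong cooler, which is the point above in rough form.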
joshglen@reddit
I currently use a liquid cooler and a non-delidded CPU; 350 watts is fine for many minutes of benchmarking / stress tests on an 11700K. Delidded, I've seen people go to 1 kW and higher.
SnooPandas2964@reddit
That's nice. Can it also do it for a sustained period of time, while the temps stay reasonable? Didn't think so. I said 300w, but that's not necessarily a hard limit in my head, just a rough one. I admit there is perhaps some wiggle room.
And the question is, would Intel ship a CPU without a heatspreader? I mean I wouldn't dismiss it as impossible, but you're going to expose the raw die, which could lead to chips and cracks, especially when installed by inexperienced people. Also, it's more important that the thermal paste (or liquid metal) application is perfect; otherwise you could be looking at some pretty hot hotspots. And worst of all, I don't even think there's a way to view the max hotspot temp for Intel CPUs. I could be wrong on that. I've just never seen any.
I just kind of doubt Intel would do this. If they do decide to go down the moar power route, I imagine it would come in the form of an improvement to the IHS, rather than removal of it. And I don't think Intel will be going that direction anyway.
1000w? What consumer CPU uses 1000w? Even the 14900KS doesn't draw half that with power limits removed. Let's say I took your word for it, that there was a consumer CPU that was overclocked to the point of drawing 1000w. Are we talking cooling using liquid nitrogen or something? I don't see any other way. Hardly a pragmatic solution for the masses. And it kind of defeats the point of the post.
joshglen@reddit
I agree with most of your points. I'm not saying Intel should ship without a heatspreader, just with a much better one that doesn't cause as much of the thermal bottleneck that delidding hopes to deal with. I personally use a Kraken X53, which isn't even the largest water AIO available, though surprisingly air coolers have been performing extremely well in the past few years, relatively speaking.
These water coolers look like they are rated for 500 watts or more, even for just their two-fan models. I'm by no means an expert, but from the cooling I've seen, there is a lot more capability than just 300 watts sustained for most coolers that cost over $100. https://www.enermax.com/en/search/label/500W%20TDP I've seen Thermal Grizzly Kryonaut get recommended, and it's what I always use; maybe that would help if you're not getting the level of cooling you expect?
The 1000w I saw was from a friend with a custom water cooling solution (although it was around $300 total, surprisingly not much more than the top-end AIOs available) for a delidded Intel 7980XE. The form factor is about the same, so I'm fairly sure the same cooling performance could be achieved with any delidded CPU if someone really, really wanted to for overclocking.
SnooPandas2964@reddit
Alright then, let's agree that improving the IHS would give Intel some more thermal headroom. And that delidding can provide even more headroom but is unlikely to happen as a default configuration.
joshglen@reddit
Yeah definitely. I'm not sure how the IHS compares to AMD CPUs, but upgrading it might give them a fairly cheap / "easy" way to get a leg up on AMD, at least for enthusiasts.
faverodefavero@reddit
I'd say ~450w is the limit, given how some GPUs can reach that much on air (even some versions of the 3080 did that). But yeah, very close to the limit, at least without a good custom loop (then maybe one could reach 500~550w safely).
Vb_33@reddit
Big GPUs have bigger dies, per OP's argument.
SnooPandas2964@reddit
GPUs also have a much more intimate mating system that isn't really possible with cpus, at least not in their current form.
faverodefavero@reddit
Fair enough.
least-weasel-420@reddit
The 3080 die size is 628 mm². So yea, match that die size with x86 and you're good to go. It's nowhere close.
systemBuilder22@reddit
They keep upping the power because they are bankrupt of design ideas to make their chips faster. They are now ripping features out of their chips (AVX-512, hyperthreading). I like a company that builds features in. Not one that rips them out...
onan@reddit
While I agree that Intel has spent years living in the land of diminishing returns, I don't think that removing features is a categorically bad thing. If a feature would be a bad idea to add (and some definitely are), then it is also a bad idea to keep.
In particular, hyperthreading has always been a dumb hack. So I won't be sad to see that go away.
SnooPandas2964@reddit
I agree for the most part; personally I don't see the point of hyperthreading anymore if e-cores can deliver more MT performance. Yeah, it helped when we had 2 - 4 cores. Now we have 24. And most of the time, most of them go unused. I think we should at least look at how it performs before saying it was a bad idea. I mean, if it was getting in the way of progress, then you remove it. Something Steve Jobs was famous for. Not that I was always happy with those decisions at the time...
onan@reddit
It was a rubbish idea right from the start. Designing your hardware to lie to your software is unlikely to ever be a good model.
I will look forward to not spending any more of my life explaining to developers and CFOs that servers at "50%" vcpu utilization are actually at full capacity.
SnooPandas2964@reddit
If you say so. But I totally thought hyperthreading really helped those Sandy/Ivy Bridge i7s later on in their life cycle, once games became more multithreaded. And didn't it help from the start with more multithreaded tasks? I realize after 6 cores you're getting into diminishing returns. But with only 2 or 4, you really think it was a bad thing?
Strazdas1@reddit
It depended on the workload. If your workload makes the CPU wait a lot for memory, hyperthreading will utilize those idle resources. If your workload does not make the CPU wait a lot, hyperthreading won't be useful.
onan@reddit
In the most absolutely optimal case, running a second thread on a hyperthreading fake core can get you maybe 10% more total throughput. (And I think that ratio is pretty constant regardless of whether you have 2 real cores and are faking 4, or have 40 real cores and are faking 80.)
But there are plenty of cases in which it actually impeded workflow optimization, either manual or automated.
And if you're talking about games in particular, my understanding is that those are what would have benefited most from having a full core truly reserved for their use, without risk of some other task nosing its way in and taking up timeslices because it thought it was also running on its own separate full core.
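If anyone wants to put their own number on SMT scaling, a crude test is to compare total throughput with one worker per physical core against one per logical core. A minimal sketch, assuming 2-way SMT and a purely ALU-bound workload (which, per the memory-stall point above, should show only a small gain):

```cpp
// Crude SMT scaling test: throughput with N threads vs. 2N threads.
// Assumes 2-way SMT; the work is pure ALU, so little gain is expected.
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

static void spin(long iters, volatile long* out) {
    long acc = 0;
    for (long i = 0; i < iters; ++i) acc ^= i * 2654435761L;
    *out = acc;  // keep the loop from being optimized away
}

static double throughput(unsigned threads, long iters) {
    std::vector<std::thread> pool;
    std::vector<long> sinks(threads);
    auto t0 = std::chrono::steady_clock::now();
    for (unsigned i = 0; i < threads; ++i)
        pool.emplace_back(spin, iters, (volatile long*)&sinks[i]);
    for (auto& t : pool) t.join();
    std::chrono::duration<double> dt = std::chrono::steady_clock::now() - t0;
    return double(threads) * double(iters) / dt.count();  // iterations/sec
}

int main() {
    unsigned logical = std::thread::hardware_concurrency();
    unsigned physical = logical / 2;  // assumption: 2-way SMT throughout
    const long iters = 200000000L;
    std::printf("1 thread per physical core: %.3e iters/s\n", throughput(physical, iters));
    std::printf("1 thread per logical core:  %.3e iters/s\n", throughput(logical, iters));
}
```

Swapping the XOR loop for a memory-bound workload is the way to see the larger gains described above.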
SnooPandas2964@reddit
I get that when you otherwise have enough horsepower, running a process on a hyperthreaded... thread, rather than a proper core, is going to lose you performance. But if you only have 2 or 4 cores and they are fully saturated, in that case it does help (it of course depends on the task), but potentially a lot.
I am talking about a pretty limited-in-scope scenario here and agree overall that we don't need it anymore. Especially if it's taking up die space that could be put to use on something else... we'll just have to see how the execution goes though.
rsta223@reddit
Was Core 2 a bad design because it didn't have hyperthreading but the prior P4s did?
System0verlord@reddit
Ah, so you’re a fan of Microsoft then. And Windows, I take it?
Rupobot@reddit
Intel said it will use 100 watts less than the 14900k
pianobench007@reddit
The 14900KS is the golden sample.
It was meant for users who want the best CPU without having to test a number of tray CPUs and then pick out the best-performing one, all for a few dollars more, rather than spending time benching each CPU.
You are supposed to run the 14900KS at 14900K speeds or lower. It is the golden sample.
Turtvaiz@reddit
Although I wouldn't say it's impossible...
steve09089@reddit
It should be on a node as troubled as N3B.
But then again…never say never
hi_gooys@reddit
It's 7 nm transistors, so the power draw is likely to be the same, and the e-cores are improved significantly. How do these CPUs not blow up? (just kidding)
Strazdas1@reddit
What you call 7 nm is actually a 28 nm gate length. And it's the best we've got, and likely will have, with this approach to transistors, because physics.
steve09089@reddit
I mean, you’re probably right the transistors are still larger than the advertised size, but I feel like N3B should have significant improvements over Intel 7 regardless.
Oxygen_plz@reddit
How is N3B a troubled node?
Fromarine@reddit
Yields, so you can't afford to be that selective with really high-clocking SKUs, I presume is his point.
DYMAXIONman@reddit
It'll be TSMC 3nm. The prior chip was on Intel 7.
soliozuz@reddit
Yeah, even the 14900K was having difficulty being cooled with traditional AIOs; EK ended up releasing a delidded version that was actually somewhat beneficial.
Alternative-Luck-825@reddit
Please stop with these sour and idiotic comments. It's really annoying to see. It's been mentioned multiple times that the TSMC N3B process is being used. With this transistor density, even if you wanted to run at 600 watts, it wouldn't be possible because the temperature can't be controlled.
PERSONA916@reddit
Depends on the use case. Workstation workloads are completely different than gaming workloads. If I leave my 10900K at the default settings with PL disabled, yeah, it will pull ~300W, but in gaming, even in the most CPU-intensive titles it never goes above 70W.
I literally could not care less about the power draw. My only concern about Intel right now is if the boost algorithm will kill my CPU and if Intel will deny it's a problem and refuse to honor the warranty.
Thercon_Jair@reddit
According to rumours, these chips will be produced on TSMC N3. That's what AMD wanted to do, but Intel bought out the supply, so AMD went with TSMC N4.
With Intel being a node ahead, it would surprise me if their chips turned out worse.
654354365476435@reddit
Love to see that. AMD has been on top enough; I want to see competition, I want AMD to be scared. I want double 3D V-Cache :)
3r2s4A4q@reddit
The 9950X has even worse CCD-to-CCD latency than Zen 4, so double V-Cache on Zen 5 is an even worse idea than it was on Zen 4. It would make more sense to double the size of the cache on a single CCD.
654354365476435@reddit
That's what I want, double on a single CCD.
Gex581990@reddit
The clocks would be sooo low tho.
Strazdas1@reddit
I think you misunderstood. I'm with the username with a bunch of numbers. Double the V-Cache. On a single CCD.
FlyingFingers2000@reddit
Nope TDP seems to only be 125W
654354365476435@reddit
And that means nothing
FlyingFingers2000@reddit
That means typical TDP will be 125W, max will be 250W, you're welcome.
654354365476435@reddit
TDP does not mean power use
FlyingFingers2000@reddit
TDP is not the same as power consumption; however, it relates to it, and therefore is a good indicator, which is why it is listed in almost all CPU specs and benchmarks.
654354365476435@reddit
It means nothing; Intel and AMD don't even measure it the same way.
FlyingFingers2000@reddit
It certainly does not mean nothing, it means what I explained; the fact that AMD and Intel measure it slightly differently does not mean that you can't compare the results between different generations of Intel (resp. AMD) processors.
Per Intel definition, relevant in that thread:
TDP stands for Thermal Design Power, in watts, and refers to the power consumption under the maximum theoretical load. Power consumption is less than TDP under lower loads. The TDP is the maximum power that one should be designing the system for. This ensures operation to published specs under the maximum theoretical workload.
That's exactly what it means.
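For anyone wanting to see how those two numbers interact: on recent Intel desktop parts, the 125W figure acts as PL1 (sustained) and the 250W figure as PL2 (short-term turbo), with a running average of power steered back toward PL1. A toy model of that behavior; the values, including the time constant, are illustrative rather than any chip's real limits:

```cpp
// Toy PL1/PL2 model: instantaneous power may reach PL2, but an
// exponentially weighted average must stay at or below PL1.
// All numbers here are illustrative assumptions.
#include <cstdio>

int main() {
    const double pl1 = 125.0, pl2 = 250.0, tau = 28.0;  // W, W, seconds
    double avg = 0.0;
    for (int t = 0; t <= 120; ++t) {                 // 1-second steps
        double granted = (avg < pl1) ? pl2 : pl1;    // turbo until budget is spent
        avg += (granted - avg) / tau;                // running average of power
        if (t % 30 == 0)
            std::printf("t=%3ds  power=%3.0fW  avg=%3.0fW\n", t, granted, avg);
    }
}
```

Run long enough, the average converges on PL1, which is why TDP tracks sustained draw reasonably well even though short bursts go far above it.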
Kryohi@reddit
I really wouldn't say "very good" for basically a 10% increase over last gen. That's similar to zen 5 over zen 4. But efficiency should be vastly improved, yes.
PainterRude1394@reddit
10% faster at 50% the power is very good.
Kryohi@reddit
It is
gokarrt@reddit
sorry, but you're performing mental gymnastics to get those numbers.
lightmatter501@reddit
Go look at the Phoronix numbers. The Windows scheduler appears to be drunk, and if you use Linux the CPU gets a lot better (on top of Linux being worth nearly a full CPU generation in some cases). I think the CPU is good but we have another "Windows doesn't understand e-cores" issue, except this time it may be treating some cores as e-cores for no reason, or something similar.
Strazdas1@reddit
Phoronix shows Zen 4 also performing better, so the Zen 4 to Zen 5 change isn't that big. They also show similar results on Windows when using the same testing workloads.
lightmatter501@reddit
Even for Ubuntu (historically one of the slowest distros), that is not true: https://www.phoronix.com/review/ryzen-9950x-windows11-ubuntu/2
gokarrt@reddit
right, but the underlying reason why the scheduler is suddenly so important is because they've nearly tripled inter-CCD latency this gen. even intra-CCD latency between cores is increased.
increasing AMD's biggest gaming weakness (latency) was certainly a choice. they might be able to mitigate this with process pinning, but unless that latency can be reduced via firmware, this is just a poor gaming chip.
lightmatter501@reddit
What’s important about the Phoronix numbers is that they have a lot of NUMA-aware HPC software in there which properly handles the inter-CCD latency. I think we’ve gotten to a point where games need to become NUMA/hwtopo aware to handle new CPUs properly, including looking at things like split l3 caches, differing core sizes, etc. Ideally game engines should handle this, but they don’t appear to.
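For what it's worth, the pinning half of that is doable today even without engine support. A minimal Linux sketch that confines the calling thread to one CCD; the core numbering is an assumption (CPUs 0-7 as CCD 0 is typical for a 9950X, but verify with lscpu or hwloc on real hardware):

```cpp
// Pin the calling thread to CCD 0 so its cache traffic never crosses
// the inter-CCD link. Core numbering is an assumption; check lscpu.
// Build with g++ on Linux (glibc provides pthread_setaffinity_np).
#include <pthread.h>
#include <sched.h>
#include <cstdio>

static int pin_to_first_ccd() {
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int cpu = 0; cpu < 8; ++cpu)  // assumed: CPUs 0-7 = CCD 0
        CPU_SET(cpu, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main() {
    if (pin_to_first_ccd() != 0) {
        std::perror("pthread_setaffinity_np");
        return 1;
    }
    std::printf("thread pinned to CCD 0 (CPUs 0-7)\n");
    // ... latency-sensitive engine work would run here ...
}
```

The harder half, deciding which threads belong together, is the hwtopo-awareness the comment is asking engines to grow.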
gokarrt@reddit
sure, and i agree that is an increasingly important side of the software/OS side of things.
however, even the minimum inter-CCD latencies are double the previous gen. i haven't personally dug into the linux gaming benchmarks to see the delta there, but it wouldn't surprise me that even with ideal process pinning this gen will be held back by those increases.
lightmatter501@reddit
That isn’t a great change, and I have a feeling that is the result of tuning the interconnect for having tons of chiplets on it.
gokarrt@reddit
well, there has been unending speculation whatever they goofed this gen was to cater to datacenter products, so i wouldn't count it out.
CCDs have never been good for gaming, but keep in mind, the 9600X is a single CCD and still had underwhelming gaming performance gen-on-gen, so it's not the only problem here.
lightmatter501@reddit
Some talks with the AMD chip architects from Ian Cutress and Cheese (of chips and cheese) made it seem like this gen was mostly about full AVX and overhauling some internals not touched since Zen 1. They hinted that a wider memory interface may be one of the changes coming down the pipeline, as well as a bunch of internal improvements to their EDA processes.
Kryohi@reddit
Absolutely not. Read any review of the 9950X. Only the 9700X gets lower numbers due to the reduced TDP.
conquer69@reddit
The tdp wasn't reduced. It's the same 65w tdp they used last year. The only thing they changed was adding an X to the name which isn't relevant.
PainterRude1394@reddit
Why do you think it's being called Zen 5%?
AK-Brian@reddit
We may end up with a Core Ultra 9%, Ultra 7% and Ultra 5% on the other camp at this rate.
Both companies seem to have gone with more of an architecture reset this generation, rather than ratcheting up raw performance. It's a weird time for the semiconductor industry as a whole, honestly.
PainterRude1394@reddit
Maybe! But right now the uplift on ultra looks better and there's a huge opportunity for efficiency increase on top of that.
It turns out, without a massive node advantage for AMD things are likely very close!
Geddagod@reddit
Intel has the node advantage now, and tbh it doesn't look like LNC is going to be much, if at all, more efficient than Zen 5.
PainterRude1394@reddit
Intel has a minor advantage, nothing like what AMD had before.
If Intel can have similar efficiency that's already a huge win. If they can also be faster that's just adding to it. And it looks like there's a good chance of both happening.
Geddagod@reddit
I mean Intel 4 and TSMC N5/N4 are similar nodes, at least based on the naming scheme Intel themselves are using. And yet RWC is marginally less efficient, something like 20-40% larger, and has similar ST frequency.
And now Intel is a full node ahead (TSMC N3B vs TSMC N4P), and even if perf/watt is similar, the density advantage N3 has is a massive help. So it's not minor. But even then, LNC only appears to have similar perf and similar perf/watt vs Zen 5 (though area is a big ?).
No, they need to have better perf and better efficiency considering they are using a better node. That's not a huge win, that's what should be expected.
PainterRude1394@reddit
Intel didn't use Intel 4 for the 14K series lol. It's okay to acknowledge the node gap was large before, when it was in AMD's favor.
Geddagod@reddit
RWC is redwood cove, in meteor lake.
I understand based on the fact I said RWC is 20-40% larger, you might think it was a full node behind and I was talking about Intel 7 RPC, but no, it's that much larger while being on the same node.
It really looks like Intel needs all the help they can get to get their cores to be competitive, because so far it looks like they remain behind AMD in their actual designs.
PainterRude1394@reddit
Intel has a minor advantage, nothing like what AMD had before.
If Intel can have similar efficiency that's already a huge win. If they can also be faster that's just adding to it. And it looks like there's a good chance of both happening.
Vb_33@reddit
The lower SKUs like the Ultra 5 and 7 should perform better than the Ultra 9, due to not being as clock-capped vs their predecessors.
jedidude75@reddit
From the meta-analysis voodoo2sli posted, the 9950X is 3% faster in gaming and 9% faster in applications.
Kryohi@reddit
9% is its integer IPC improvement, it should be a tad higher with a diverse set of benchmarks, but yes we're in the ballpark of 10% as with this Arrow Lake sample.
systemBuilder22@reddit
The Zen 5 switching fabric sux, just like Zen 2, upon which it is based, because they gave the Zen 5 project to the Zen 2 team.
SJGucky@reddit
That's the power of TSMC.
jaaval@reddit
AMD was criticized specifically for the lack of improvement in some applications, and for lacking efficiency improvements. In Geekbench, Zen 5 actually looks very nice. Arrow Lake looks about the same as Zen 5 in Geekbench; maybe it achieves similar scores at a slightly lower clock speed.
And both of these are very top of the line results, at least until apple M4 becomes available.
So it really comes down to what kind of power is required for high performance.
Kryohi@reddit
Gaming performance is mostly unrelated to stuff like GB scores. For that we'll have to wait for complete reviews, though I suspect we will end up in exactly the same situation as RPL vs Zen 4: Zen 5 vanilla losing, Zen 5 + X3D winning, Arrow Lake in the middle but close to Z5 X3D with their top part.
ResponsibleJudge3172@reddit
I agree, although that will continue to erode the popularity of Intel, since X3D wins are all that matter when comparing chips since 12th gen.
Vb_33@reddit
The good thing about i9s is that they excel at all workloads, including gaming. The 7800X3D is a poor performer in multithreaded workloads; what I'm curious about is how the 285K will perform vs the 9950X3D.
HandheldAddict@reddit
The 7800X 3D is a poor performer at multithreaded workloads
True, but the price also reflects that.
It's like an old school i5 2500K, if the 2500K had the gaming performance crown over the i7s.
PainterRude1394@reddit
Considering even the 14700K is close to the 7800X3D, there's a good chance the Core Ultras will also be close, but at far less power consumption, while likely stomping the X3Ds in multicore.
ExtendedDeadline@reddit
Very good meaning it's the fastest consumer cpu if it's beating the 9950x. Extra very good if it's even close to zen in power consumption, which I think Intel is trying to prioritize.
Kryohi@reddit
Your "extra very good" scenario is simply a repetition of the Raptor Lake vs Ryzen 7000 situation, but without a massive efficiency advantage for Ryzen.
I wouldn't call that very good, but also definitely not bad. It's a matter of perspective I guess.
Nointies@reddit
Nah, that's very good. Raptor Lake's biggest problem is power usage.
Fishydeals@reddit
As well as stability lmao
Nointies@reddit
Which is caused by its absurd power usage/voltage.
Fix that and be the fastest consumer CPU? ARL would be very impressive if that's all true.
Fishydeals@reddit
Wasn't it also due to oxidation and defects in production? I hope they figured it out with the new gen.
But more relevant competition is always welcome!
Strazdas1@reddit
As others pointed out, it wasn't, but if you are still afraid of it, remember that these are manufactured by TSMC and not IFS, so any foundry problems wouldn't be transferable.
Nointies@reddit
No. Oxidation was a separate issue that's been solved for a while and affected a limited number of chips.
PainterRude1394@reddit
Mostly just too high a voltage. There have been a lot of AMD fanatics spewing misinformation, so I get why people are persistently confused.
capn_hector@reddit
If AMD had cut their power in half or a third while increasing performance 10%, it would have been different. Improvements to efficiency are still improvements.
Cheeze_It@reddit
Isn't that what AMD did with Zen 5? Not in all cases but most?
SkillYourself@reddit
Not really...
https://x.com/HardwareUnboxed/status/1821436981328654462
Comparing iso-performance PPW is a marketer's sleight of hand.
Cheeze_It@reddit
So I am looking at designations and performance per watt.
At the 88w power level, per Gamers Nexus, they found that at times it was more efficient and at times it was less efficient. But on both ends it wasn't by a large amount. It seems the nuance is that when comparing it to a Ryzen 7700, it's a wash. If one has a Zen 4, it doesn't make sense to go to Zen 5 right now. Maybe when prices drop, then sure.
I personally don't have a reason to go to Zen 4 or 5 (I have a Zen 3 in both of my systems at home), but given time I might end up jumping to Zen 5 later (or Zen 6) whenever that comes about, and I will curve optimize the living hell out of it and let it run. My server is 24/7, and there any power efficiency helps a TON.
systemBuilder22@reddit
Zen 5 is a great improvement, but it seems they used the terrible Zen 2 switching fabric as a basis. So it has the 18-20% uplift for everything (e.g. nearly all massively parallel apps and Linux compiles), but if you have one single monolithic app you want to run across all cores for parallel speedup, a very tiny improvement (3-5%). It will be 18-20% in Zen 5+.
Fromarine@reddit
It is when you're comparing it to a 14900KS, which the majority of people would only have as theoretical performance due to its cooling requirements and extreme stability issues (not degradation; it's configured to 5.9GHz all-core stock, which is fucking absurd). It was at the point that, heat-wise, even 420s generally aren't enough without delidding.
Hikashuri@reddit
A 10% multithreaded increase over the predecessor, when it has 8 fewer threads due to dropping hyperthreading, is extremely good.
Slyons89@reddit
Since the high-end Intel Arrow Lake chips are on TSMC's 3nm process instead of coming from an Intel fab, it should have a massive power consumption reduction vs 14th gen at least. Might be somewhat similar to the 9950X, we'll see.
jucestain@reddit
Are the high-end chips really on TSMC? I figured it would be the other way around, but if so, that's a disappointment.
Slyons89@reddit
Supposedly they are. Why would that be disappointing? It should cut down their power usage drastically. Although on the downside, their max frequencies are also a bit lower.
jucestain@reddit
Disappointing in that it implies Intel's new manufacturing process is inferior to TSMC's.
8milenewbie@reddit
It's pretty surprising that there were people who thought Intel would be able to close the gap with TSMC.
Strazdas1@reddit
I mean, what timeframe are we talking about? Close the gap by 2024? Absolutely not. By 2028? Maybe. By 2030, if no more fuckups at IFS, then likely.
DYMAXIONman@reddit
They don't really have the ability to manufacture enough at this point.
EJ19876@reddit
It is more of a volume issue. Intel's EUV capacity is rather limited and they're allocating much of it to Intel 3 products (aka Xeons).
Intel 18A is the node Intel has long said will return them to "leadership" anyway. I don't know anyone who works in Oregon anymore, but the vibe there about 18A was optimistic as recently as late last year. Of course, that may have changed if they've encountered a big problem in the past year or so.
Hendeith@reddit
Two reasons really. 20A and 18A were supposed to put Intel back into the node game. If it turns out they are so uncompetitive Intel had to rely on TSMC then it means fundamentally nothing changed, there's still no competition for TSMC.
Secondly, TSMC is expensive, so that means we could see a price hike with the ARL release. Which is not something anyone would like to see.
soggybiscuit93@reddit
18A was the node Intel claimed would put them back into leadership. 20A is just a limited-release derisk of 18A, and 18A isn't expected to see volume until late next year.
Hendeith@reddit
According to Intel's own slides, 20A was supposed to put them on par with the competition, which I would already count as being back in the game. 18A was supposed to pull ahead.
soggybiscuit93@reddit
Which slides? 20A was never a big deal. 18A was always the important one
Slyons89@reddit
Makes sense. I’d prepare to be disappointed in October when they launch. They do at least have a chance to take/maintain their performance lead over AMD in desktop processors while catching up in power efficiency. Not all bad.
jucestain@reddit
Yea, if Intel is at parity with AMD that's still a win. The biggest drawback to Intel now is their horrible power inefficiency. If they have to do it via TSMC for now, then that's fine.
Over-Instruction-103@reddit
Intel doesn't have the capacity yet to mass produce its own chips; they are even losing a lot of money on their factories.
mrchase05@reddit
Yep, I too am more interested in seeing chip fabrication technology advance on Intel's side rather than a CPU partly outsourced to TSMC. I can buy AMD for that.
BookinCookie@reddit
Intel’s high end chips are rumored to be outsourced all the way until at least 2027. This isn’t a one-off thing.
Successful_Winner838@reddit
There has been no confirmation from anyone reputable that the compute tile for Arrow Lake will be anything but 20A
Slyons89@reddit
Guess we’ll find out pretty soon
phekahua@reddit
Why do you say that? Is the high power consumption a function of the node or the chip design?
Slyons89@reddit
The node
redsunstar@reddit
I remember that Arrow Lake was either Intel 4 or Intel 3 for the compute tile and TSMC for the GPU tile. It's only Lunar Lake that's TSMC for both.
Has something changed?
Slyons89@reddit
Lunar Lake is on TSMC N3B, and Arrow Lake (high end) is on a TSMC N3 node. For the CPU, not just the GPU tile.
The lower-end Arrow Lake chips are still on an Intel node from what I have read. But at least the 265K and higher are using TSMC N3.
redsunstar@reddit
Is that official information or leaks/rumours?
Geddagod@reddit
Officially ARL compute tile was never Intel 4 or Intel 3. All they ever confirmed was 20A.
Slyons89@reddit
Just leaks and rumors as far as I know, but the leaked power consumption numbers for Arrow Lake definitely corroborate it.
hanshotfirst-42@reddit
Why would it be useless? It’s a desktop chip, who cares about power efficiency? It’s not like your power bill is going to be dramatically different. Give me moar power.
654354365476435@reddit
Would you take 5% more gaming performance for 400W more power?
System0verlord@reddit
Probably, but mostly because I have a school bus radiator that I’m not using, which should have the cooling capacity required.
hanshotfirst-42@reddit
If I have the PSU and cooling to support it? For gaming purposes? At 4K? Absolutely
654354365476435@reddit
I would never consider that, ever; you must have cheap power or don't game much.
hanshotfirst-42@reddit
It’s like a 10-20 dollar difference monthly lol. That’s one meal from Chipotle.
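For anyone who wants to plug in their own rate, the arithmetic behind an estimate like that is simple; the hours per day and prices below are illustrative assumptions:

```cpp
// Monthly cost of an extra 400 W of CPU draw while gaming.
// Hours per day and electricity prices are illustrative assumptions.
#include <cstdio>

int main() {
    const double extra_w = 400.0, hours_per_day = 4.0, days = 30.0;
    const double kwh = extra_w * hours_per_day * days / 1000.0;  // 48 kWh
    const double prices[] = {0.15, 0.40};  // $/kWh: cheap vs. expensive market
    for (double p : prices)
        std::printf("at $%.2f/kWh: %.0f kWh -> $%.2f per month\n", p, kwh, kwh * p);
}
```

At a cheap US rate that lands around $7 a month, and around $19 at a high European rate, which brackets both sides of this exchange.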
soggybiscuit93@reddit
Would you pay a $15/subscription service to increase frame rates by 5%? Because it's functionally the same thing, except in this you also have louder fans and more heat in your room in addition to the higher operational costs.
hanshotfirst-42@reddit
Yes I would. But also my setup is in a different room than my bedroom, so it's less important.
654354365476435@reddit
For us it's around a 5x good pizza dinner a month difference (currency will not tell you anything, so I will compare it like this).
hanshotfirst-42@reddit
Well yeah in that context I can see how you are more power conservative
654354365476435@reddit
You guys in the US have cheap power, yes. But still, it will get hot in the room.
tuhdo@reddit
You will need a beefy AC to cool all that heat.
FearFar@reddit
Who really cares about that in the private sector, if you can still effectively cool it?
conquer69@reddit
You think electricity and cooling are free for companies? They care even more.
FearFar@reddit
I meant consumers who run one cpu
conquer69@reddit
Consumers also care. The additional 100-150w will heat up the room pretty good.
FearFar@reddit
Yes, that's very good in winter, and tbh 100 watts is a really low number.
conquer69@reddit
Not everyone lives in a place with cold winters.
FearFar@reddit
Yes but it is still in watt hours, and 100Wh is really nothing
tuhdo@reddit
Have you tried running Prime95 in the summer without an AC? Even with an AC, it's still uncomfortable.
Turtvaiz@reddit
A lot of people probably care. I'd fucking die if my Nordic apartment had an extra 600 W of heat when gaming.
There's also noise to think of, and electricity in server use
FearFar@reddit
Who would use an i9 in a server, I don't know. And exactly, that's why I don't even need to turn up the heater in my room, because of the PC. And for noise, you buy Noctua fans: zero noise.
Turtvaiz@reddit
Do you not have a summer there
FearFar@reddit
Yes, we get 30+ here and it's still absolutely fine.
MumrikDK@reddit
People who give a shit about noise, room temps, the environment or their electricity bill - which mostly is for people living places where it isn't borderline free.
FearFar@reddit
Electricity is still cheap, and when you switch from 70 watts to 200 you wouldn't even notice it. I have Noctua fans and zero noise, and with room temps in winter you don't need a heater. Easy.
Eriksrocks@reddit
Similar power/perf as the 9950X, agreed. Similar power as the 14900K, then no, it would still be terrible compared to Zen 5.
654354365476435@reddit
Agree. And it should be; they are using a better node than Zen.
Fromarine@reddit
It'll be lower, not similar, power. Similar power would honestly be a bit of a failing when you have a big process node jump, no HT, and no core count increase (even if there are now much bigger little cores).
ishsreddit@reddit
There is no way, right? The 13900K/14900K are already so close, and do it at 280w to 300w, don't they? Now they are on a superior node.
654354365476435@reddit
I hope they get competitive.
3Dchaos777@reddit
1.21 Jigawatts actually lmao
soggybiscuit93@reddit
3x the power of a 200W - 230W PPT 9950X?
anhphamfmr@reddit
The real-life perf gap is going to be wider than this. The 9950X got a huge boost in score from AVX-512 in Geekbench.
Pristine-Woodpecker@reddit
It got a huge boost because it has a ton more FP resources than Zen 4. Not just because of the full AVX512.
itsjust_khris@reddit
From the Chips and Cheese look at Zen 5, there isn't much benefit for anything using less than the full 512-bit-wide execution. Most of the upgrades are unusable by 128-bit or 256-bit-wide instructions. Even using AVX-512 instructions with less than 512-bit-wide data didn't see much of any benefit.
Pristine-Woodpecker@reddit
Larger internal register files and reduced latency for adds plus more cache BW are useful. Looks like 5 to 10% gain in AVX code. Unfortunately I think I've seen maybe only one review directly look at this ratio.
itsjust_khris@reddit
Interesting, the RPCS3 benchmarks I saw didn't see any improvement beyond margin of error, which was theorized to be because AVX-512 is most useful for the newer instructions rather than the width. However, your example does show some improvement, so it may be unfair of me to say there's universally no difference.
jedidude75@reddit
They are comparing it to the 14900K, which wouldn't benefit from AVX-512. It's 9% faster.
Rocketman7@reddit
Performance-wise ARL seems very solid at this point. I don’t expect to see surprises after launch in this area. What I want to see now is power numbers.
Hendeith@reddit
I just hope it won't have any terrible issues like instability, degradation, or power consumption, and won't be priced in some crazy way. My 8700K is waiting for a replacement, and Zen 5 turned out to be quite disappointing.
Ryrynz@reddit
If you're using it for gaming then don't write off the upcoming X3D chips based on the other releases.
Hendeith@reddit
Sure, X3D will be faster than Intel, but I'm doing a mix of gaming and work so I need those extra threads.
Steeze-God@reddit
If you're on an 8700K you surely don't. You need faster threads.
All_Work_All_Play@reddit
Part of me wonders if the 9000 series will have such reduced power usage that they can give us that AM4 chip we never got.
NeonBellyGlowngVomit@reddit
Mate.... Zen5 is hardly a disappointment compared to the aging Lake arch.
https://www.cpubenchmark.net/compare/3098vs6171/Intel-i7-8700K-vs-AMD-Ryzen-9-9900X
Hendeith@reddit
Not quite sure why you thought Zen 5 would be a disappointment compared to a 7-year-old CPU. I meant Zen 5 is a disappointment in general. With how little performance uplift it gets over the previous gen, it hardly makes sense to get it over cheaper Zen 4. Since ARL (at least so far) looks like a competitive product, then (unless pricing is terrible) it makes more sense to get ARL over Zen 4 and 5. I've had my CPU for 7 years; I can have it for a few more months.
NeonBellyGlowngVomit@reddit
The point you missed is that anything, including a disappointing generation, is going to be an improvement. If Zen4 wasn't disappointing and Zen5 matches Zen4. then whether or not Zen5 is disappointing really doesn't factor much into what you're getting in the end.
Hendeith@reddit
The point you missed, despite me literally spelling it out for you, is that, despite what you seem to think, people other than you are not idiots. I and basically everyone else know that upgrading from a 7-year-old CPU will lead to an improvement. That doesn't change the fact that Zen 5 is by all means a disappointment. Zen 4 is only a few % slower while also being cheaper. Right now I could get a 7800X3D instead of a 9700X and get a CPU that's both cheaper and faster.
Furthermore, since I'm already on a 7-year-old CPU, why would I get Zen 5 now (or Zen 4 for that matter) when I can wait 2 months and see if ARL is worth picking over both Zen 4 and Zen 5?
That's some mental gymnastics. Why wouldn't it matter that Zen 4 with 3D cache offers better performance at a better price than Zen 5?
NeonBellyGlowngVomit@reddit
Alright bro. Hope you enjoy continuing to be disappointed by every new product since that seems to be the cross you want to hang yourself on.
Hendeith@reddit
Shh shh don't cry, it's ok to admit you were wrong
NeonBellyGlowngVomit@reddit
Nice try. You're not going to convince me to go down to your basement to look at your 'rig.'
Admirable-Date-8625@reddit
It all depends on support. AM5 is still supported at least until 2027; hoping ARL doesn't become a joke like LGA1700 in terms of upgrades. Anyways, I'd wait for other benchmarks; Geekbench notoriously favors Intel, while the 9950X has proven to be faster than the KS in multiple applications.
psydroid@reddit
I've noticed that scores for AMD CPUs don't scale as nicely with higher numbers of cores as those for Intel, Apple and other CPUs.
I always thought this was a limitation of AMD CPUs, but your comment got me thinking that Geekbench may be the cause of this.
I should look more closely at Phoronix and possibly do my own Openbenchmarking runs, if and when I get/build machines with these CPUs.
So far I've been using laptops and SBCs from years ago because the performance uplift hasn't been high enough to buy new hardware. But that may change in the next generation in 2026.
soggybiscuit93@reddit
And AM4 is still supported in 2024 because they're still releasing new CPUs for it. But the CPUs they're releasing hardly matter. What matters is how much additional performance and how many generations are left on the socket, not vague date-based support.
Best-case scenario is that AM5 will have Zen 6 released on it, so if you're buying today, you can expect 1 more generation on your socket regardless of who you go with (AMD still hasn't confirmed Zen 6 is AM5).
Hendeith@reddit
From what we've heard so far, Zen 6 in 2026 will most likely be the last CPU for AM5, and in 2027 it might get either a refresh or X3D versions.
If Intel continues their approach of changing sockets every 2-3 gens, then in both cases I'm buying a platform that will last the same amount of time.
Toredorm@reddit
Do it. I went to a 9950X from an 8700K and couldn't be happier. The improvements over that generation are massive.
SnooPandas2964@reddit
Don't know why you're getting downvoted; glad you like your new CPU. But of course you're going to see big performance improvements from an upgrade like that. You would have seen huge performance increases from anything modern, even from a 12900K.
Toredorm@reddit
Right? I told him to do it bc an 8700K gets beat by an 11th gen i3. He needs the upgrade, and while it's not some "massive" upgrade from a 7950X... it is for him. He could buy a 7950X3D while they are on sale, but my post was to push him to get the upgrade.
SnooPandas2964@reddit
I bet Intel's RMAs will be higher whether it's stable or not. Since the idea we had that Intel CPUs fail extremely rarely has been shattered, now CPUs will be the first suspected component failure rather than the last.
Alternative-Luck-825@reddit
It's currently in the QS (Qualification Sample) stage. There will be approximately a 5% improvement in performance and power consumption optimization in the final version.
I expect the R23 score to be around 44,000, with a power consumption of 200 watts.
Rocketman7@reddit
Where did you hear/read about the 200W?
soggybiscuit93@reddit
The baseline power profile should be 177W PL2, and leaked base clocks suggest a lot of ARL's efficiency gains should be in the lower power range of the efficiency curve.
Whether these results are Baseline (177W PL2) or Performance (250W PL2) is the question. Another question I'm waiting to see results for is just how much performance the extra 73W nets, and whether it's even worth it.
capybooya@reddit
I don't know to what extent it builds on AL/RL; can we really trust there are no surprises, similar to how Z5 was a (negative) surprise for gaming? Geekbench is purely synthetic.
The MT benchmarks were pretty impressive for no SMT/HT, I have to admit that. But again, gaming will matter more for a large percentage of the crowd on this sub.
Beneficial_Cake_595@reddit
Ryzen 9000 isn't even a part of the conversation; that generation is a waste of silicon.
piitxu@reddit
At this point I'm rooting for Arrow Lake to destroy Zen 5. AMD urgently needs a slap on the wrist.
no_salty_no_jealousy@reddit
Competition is good for everyone; only fanbois in this sub are mad about it.
wh33t@reddit
Nvidia x86 CPUs when?
psydroid@reddit
Never, as those will be ARM CPUs instead.
Powerful_Yoghurt1464@reddit
Will Nvidia introduce its own ARM-based architecture like Apple? The X925 directly from ARM doesn't seem too promising, and just putting like 12 of those in a chip and calling it a day seems like a glorified Snapdragon X Elite, which is barely even keeping up with laptops and would be destroyed by desktops. Nvidia would need to make its own equivalent of the M4 Max to catch on with desktop competition.
psydroid@reddit
They are rumoured to be working on chips using standard ARM cores and probably know more about ARM's roadmap than we do considering their intention to buy ARM a few years ago.
Apple will probably still have faster chips for a long time to come, but I don't think the Snapdragon X Elite or a similarly performing chip is unpromising. If anything, I wouldn't mind having a chip with 12-14 X925 cores as a baseline and then seeing how things develop from there over the next few years.
Nvidia will most likely not be working on its own ARM microarchitecture either, since they wouldn't have had to make their intention to buy ARM clear, if that were the case. Most likely Nvidia knows that ARM is going to keep improving its standard cores and eventually catch up with Intel and AMD, if not Qualcomm and Apple.
If your goal is to run Windows and x86 applications with the same or better performance on ARM chips as on x86 chips, that is probably not going to happen for several more years.
Strazdas1@reddit
These are surprisingly under wraps. Makes me worried; maybe it's not going so great.
TheMissingVoteBallot@reddit
Agreed. If Intel can hire some competent marketing people to fix up the mess that is their degradation issue and create a processor that challenges AMD at the high end that's going to force AMD to play their hand and release the X3D "revised" processors sooner rather than later.
And maybe, just maybe these X3D processors will have microcode that doesn't cause these spurious issues.
mrchase05@reddit
At this point I would be thrilled if Intel would use 100% their own fabs; offloading chips to TSMC is a sad move. When things escalate between China and Taiwan, what will happen to CPU availability? That's the reason I'm not supporting AMD, since they rely 100% on TSMC for their products.
Slyons89@reddit
TSMC is also building factories in the US with that issue in mind.
But if that's your reasoning, be sure to avoid Apple products too, all their processors (both laptop and phone) are made by TSMC only.
mrchase05@reddit
I work in electronics product development and have seen many availability crises over a long period. I do not boycott TSMC; I just try to support companies' own chip fabrication whenever possible. Recent events like Covid and the Ever Given reminded us all how fragile some supply chains are.
But yes, I boycott Apple for the right to repair issues and for their whole ecosystem.
Slyons89@reddit
If China attempts to invade Taiwan, the entire global supply chain for tech is going to be so fucked up that Intel is not going to be operating normally either.
I guess I just don't understand what you mean by "I don't support AMD because they only use TSMC".
pianobench007@reddit
You guys are clowns.
Right now today at this very moment. China is in charge of China. That's right !!! We are already buying Chinese products.
Those motherboards? That ram? The GPU heatsinks, CPU AIO coolers etc.....
Designed in Taiwan/USA. Made in China.
mrchase05@reddit
Right now in the electronics industry there is a big change going on to move away from China. It's difficult to send components to China due to import restrictions. Everyone is trying to source their components somewhere closer. China is not as cheap as it used to be, and now the political tensions are high. I do not have an issue with Chinese manufacturing; I just said that I prefer Intel over AMD since it has its own fabs. In case ties are cut with China upon China invading Taiwan, it's good to have chip fabs still remaining somewhere else. Yes, everything will become crazy in that case and most components will not be available, but with a chip fab outside China/Taiwan there is still the possibility to continue manufacturing the main parts.
Having said all this, I do not buy Chinese owned and made cell phones or routers, due to security concerns.
Slyons89@reddit
No shit... my argument was that it's stupid to "not support AMD because they rely on TSMC".... if China invades Taiwan it will kick off a global conflict that will destroy supply chains. We won't be getting any new tech for a while, it doesn't matter that Intel has fabs in the US, they aren't going to be getting substrates from China, companies will not have the raw materials to make their products because they come from China... the assembly of most of these products is performed in China.
yabn5@reddit
Those fabs aren't leading edge; TSMC is holding back their most advanced fabs. As for those Samsung fabs, they're in Korea. Any Taiwan conflict has the PLARF targeting US troops in the theater; there is no reason to believe that USFK would be spared. At which point, what's a few extra at Samsung?
Slyons89@reddit
If China attempts to invade Taiwan the entire global supply chain is going to be so fucked up for years that it's not like Intel's supply chain is going to keep running normally either.
My point was that the other posters point of "not supporting AMD because they rely on TSMC" is kinda dumb.
yabn5@reddit
Running normally? No. But it’s going to be in a much better shape than needing new fabs from scratch to be built and equipped. You shouldn’t avoid buying AMD because of TSMC, but you shouldn’t hope that Intel fails either. Having a single monopoly on leading edge is bad, and no Samsung isn’t in a great place either.
Slyons89@reddit
Nobody is saying that. I'm just confused what this other guy means by "I don't support AMD because they rely on TSMC".
I'm not going to buy a worse Intel product just because they fab a chip in the US (or Ireland or Israel) and then ship it to China (which will be a problem in OP's scenario), Costa Rica, Malaysia, or Vietnam for assembly.
Slyons89@reddit
If it has better performance and costs the same or less, everyone should be happy.
Personally I think both Intel Arrow Lake and Zen 5 are going to look like "dud" generations compared to many others. But at least for Intel it should be a bigger bump than 12th > 13th > 14th gen, which were kinda just sad. Plus power efficiency should be massively improved since they are using TSMC for the high-end Arrow Lake chips.
SpaceBoJangles@reddit
I mean, competition would be nice. I just hope Intel has a good product and brings the price of Threadripper down. $5000 for a CPU is insane.
b3081a@reddit
Is it really 4% faster in ST? I found a lot of 9950X results with similar scores.
It seems to be using 6400 32-39-39-102 memory per the .gb6 file, which seems to be XMP and not JEDEC. The closest AMD result that I can find is this one: https://browser.geekbench.com/v6/cpu/7399952
Doesn't look nice for a 3nm core.
Famous_Wolverine3203@reddit (OP)
Zen 5 also benefits from AVX 512 in subtests like Background Blur and Object Detection. It has a 20% lead there.
I think Intel has the ST crown here.
the_dude_that_faps@reddit
What is pure ST according to you?
Famous_Wolverine3203@reddit (OP)
No AVX-512 acceleration. As it's frankly useless in ST applications, barring a few exceptions.
the_dude_that_faps@reddit
What are you even talking about? That's absurd. As an example, take a look at simdjson. It's a JSON parsing library that uses SIMD instructions to accelerate JSON parsing. This has multiple uses, not just on servers, as it could find its way into consumer workloads now that AVX-512 is finally supported on consumer parts. And that's just one example. Or take RPCS3 as another example...
C'mon...
Famous_Wolverine3203@reddit (OP)
“It could find its way into consumer workloads”
By your own words. And while RPCS3 is popular, I don’t think people are buying brand new 500 dollar CPUs for that.
the_dude_that_faps@reddit
I'm sure more than a few have been getting 12900k CPUs and disabling E-cores to get AVX512 support precisely because of that. Not many, sure, but we exist.
Anyway, my point is it's absurd to claim that using avx512 is somehow misleading or impure. It's not, the CPU has a feature that is available to use and software can make use of it and accelerate their workloads with it.
I doubt handbrake is as niche as RPCS3 anyway. So there's another one.
Noble00_@reddit
We can try to infer which workloads see more of a performance gain.
Not only do Background Blur and Object Detection benefit from AVX-512, Photo Library does as well [doc].
Here we can compare both CPUs' workloads (keep in mind both are just single data points, and well... ARL hasn't released yet).
https://browser.geekbench.com/v6/cpu/compare/7425534?baseline=7399952
In ST, all in all, the 285K is 97.9% the performance of the 9950X.
In ST, if we explicitly compare AVX-512 workloads, on avg the 285K is ~12.33% slower than the 9950X.
In ST, if we remove the AVX-512 workloads, all in all, the 285K performs similarly to the 9950X.
In MT, all in all, the 285K is 96.3% the performance of the 9950X.
In MT, if we explicitly compare AVX-512 workloads, on avg the 285K is 6% slower than the 9950X. (Interestingly, in Object Detection and Photo Library in MT, AMD performs worse, while in Background Blur, AMD wildly outperforms, looking like an outlier.)
In MT, if we remove the AVX-512 workloads, all in all, the 285K is ~97% the performance of the 9950X.
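What the comparison above amounts to is a geometric mean of per-subtest score ratios, with the AVX-512 workloads filtered in or out. A sketch of that computation; the scores below are placeholders, not the leaked numbers:

```cpp
// Geometric-mean ratio of per-subtest scores, optionally excluding the
// AVX-512-accelerated subtests. All scores here are placeholders.
#include <cmath>
#include <cstdio>

struct Subtest { const char* name; double arl, zen5; bool avx512; };

static double geomean_ratio(const Subtest* s, int n, bool skip_avx512) {
    double log_sum = 0.0;
    int count = 0;
    for (int i = 0; i < n; ++i) {
        if (skip_avx512 && s[i].avx512) continue;
        log_sum += std::log(s[i].arl / s[i].zen5);
        ++count;
    }
    return std::exp(log_sum / count);
}

int main() {
    const Subtest subs[] = {
        {"File Compression", 100.0, 100.0, false},  // placeholder scores
        {"Object Detection",  90.0, 103.0, true},
        {"Background Blur",   88.0, 100.0, true},
        {"Ray Tracer",       101.0,  99.0, false},
    };
    const int n = sizeof subs / sizeof subs[0];
    std::printf("ratio, all subtests: %.3f\n", geomean_ratio(subs, n, false));
    std::printf("ratio, no AVX-512:   %.3f\n", geomean_ratio(subs, n, true));
}
```

A geometric mean is the right aggregate here because Geekbench's composite is itself ratio-based, so one outlier subtest can't dominate the way it would in an arithmetic mean.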
b3081a@reddit
Does Geekbench 6 actually use AVX-512? I found Geekbench 5 uses AVX-512 in crypto and machine learning but at the same clock desktop Zen 5 doesn't seem to have any significant IPC advantage over mobile.
DueRequirement6292@reddit
It does on the subtests wolverine mentioned
b3081a@reddit
That's compared to Intel. I'm talking about AMD cores themselves with vs. without full AVX-512. Maybe I can try disabling AVX-512 in the BIOS and do some testing.
DueRequirement6292@reddit
Huh? I don’t think you can disable avx 512, also it doesn’t matter; the subtests use it when available. It’s available on recent AMD CPUs. Has nothing to do with Intel.
b3081a@reddit
There is an AVX512 option in the AMD BIOS.
DueRequirement6292@reddit
Cool didn’t know that thanks!
Real-Human-1985@reddit
Every benchmark is conveniently bad/wrong/cheating if AMD looks good in it. This of course despite the real-world fact that Intel keeps cheating in benchmarks.
Kryohi@reddit
As said many times, N3B vs N4P basically isn't a node advantage. The gains in all metrics (performance, efficiency and density) are minimal at best. It should be of no surprise that ARL and zen 5 will perform very, very close to each other. Gen on gen though, Intel will look impressive on efficiency, since the bar to surpass was veeery low.
BTW regarding MT performance, it should be noted that AMD already underperformed with zen 4 on GB6 compared to real world tasks. The 7950X was often losing vs the 13900K on many MT synthetic benchmarks, while winning in many other workloads. It's likely they'll lose vs ARL on GB6 MT, as we see here, but it will be a wash on actual usage, winning sometimes, losing sometimes.
ProfessionalPrincipa@reddit
???
Going by TSMC specs, N3 over N4P has a small advantage in power and performance and a not-insignificant advantage in logic density. N4P is advertised as 1.06x density over N5; N3 is 1.7x over N5, i.e. roughly 1.6x the logic density of N4P.
ResponsibleJudge3172@reddit
And N3B has single digit power advantage over N4P. That's the point.
ProfessionalPrincipa@reddit
It seems like you missed the point.
b3081a@reddit
Apple did gain some clock speed at the same power by moving to N3B though. It's minimal but it's there.
Famous_Wolverine3203@reddit (OP)
Apple also ramped up power significantly from A16 to A17 pro for said clock speed increases.
CalmSpinach2140@reddit
Provide the numbers. M3 still uses less than 6 watts for ST and M4 uses less than 8 watts for ST. This is a far cry from 20+ watts that Intel will probably use for the 5.7GHz ST.
Famous_Wolverine3203@reddit (OP)
This was already widely known. But I’ll indulge you.
https://youtu.be/iSCTlB1dhO0?feature=shared
Skip to 6:30.
The A17 Pro uses 5.79W of power while the A16 uses 4.57W. That's a 27% power increase for a 7% clock speed boost and a 10% boost in overall performance.
So N3 does not have major power improvements over N4.
mackzett@reddit
Which is also a main reason to ditch HT. HT alone on a 14900K accounts for 75-100w under load.
Famous_Wolverine3203@reddit (OP)
HT depends on the microarchitecture. Some architectures, particularly ones that are frontend-bound, benefit, while others don't.
mackzett@reddit
I didn't mention benefit at all. We all know HT has SOME benefits in SOME cases. I hope the saved energy is better used in a part of the CPU we do use more. That's all.
CalmSpinach2140@reddit
Nope, it will be like this till a true semiconductor breakthrough.
Active-Quarter-4197@reddit
Kind of fair bc Intel scales better with faster ram
CoffeeBlowout@reddit
Zen 5 has AVX 512 to lean on. Also that is very likely PBO tuned with an undervolt.
Don’t worry when Intel Ultra is paired with good memory and tuned it will also score higher.
b3081a@reddit
You can take a look at the details in .gb6, the clocks are stable but not above official Fmax (5750), so that represents official non-OC'ed best case performance.
CoffeeBlowout@reddit
That does not tell you if it was tuned/undervolted.
b3081a@reddit
For a performance-only comparison, voltage tuning does not matter if the clocks aren't tampered with.
CoffeeBlowout@reddit
That does not change the fact that this is a tuned run (memory tuning as well) compared to what we're likely looking at here from Intel.
Arrow Lake, once tuned, is likely going to score a fair bit higher. AMD didn't change the IMC, while Intel is now using 6400 MT/s as the stock supported speed. We will see boards pushing much higher MT/s, and we don't know how much tuning is left under the hood.
Zen 5 just ain't exciting and is gearing up to have already been surpassed by Intel in Oct.
b3081a@reddit
That 6400MT isn't stock speed. As I mentioned you can find memory timing info in .gb6 (32-39-39-102) which indicates it's an XMP overclocked config with lower than stock memory latency.
CoffeeBlowout@reddit
Again, the AMD is overclocked using 6400. Intel will be using 6400 stock. It does not matter if it's using XMP or not for tighter timings; the AMD did as well. The AMD chip is OVERCLOCKED on the fabric to the point of not being stable on many systems at that ratio. The silicon lottery will dictate if you can get anything above AMD's "sweet spot" of 6000, and that is still an overclock over the stock 5600 advertised speed.
Intel will very likely be able to run far higher in gear 2 vs 13th-14th gen IMCs.
Ryusuzaku@reddit
Tho while nice, it doesn't tell too much about multicore capabilities all in all, as AMD has always been slower than Intel in Geekbench multi. Even the 13900KS is faster than the 9950X.
But yes, Intel's upcoming chips seem to be somewhat faster than AMD's. And the X3D will most likely be faster in gaming.
DueRequirement6292@reddit
If Arrow Lake can approach or exceed the 9950X without AVX-512, that will be pretty exciting, because it suggests much better FP when AVX-512 isn't used, which is most workloads.
Famous_Wolverine3203@reddit (OP)
Intel’s architectures preceding Zen 5 always had better FP performance than AMD.
ResponsibleJudge3172@reddit
Too much focus on FP if you ask me. The entirety of 11th gen Rocket Lake's IPC gain was pretty much FP. Now even the E core Skymont focused very heavily on FP to match INT despite the INT IPC improvements, whereas Gracemont was much better in INT.
Pristine-Woodpecker@reddit
What's the perf uplift for Zen 4 vs Zen 5 on that benchmark? Zen 5 has a ton of extra FP resources which is what made the full AVX512 support possible, but those are also useful for normal AVX2 code.
DueRequirement6292@reddit
Iirc 23%
Pristine-Woodpecker@reddit
https://www.anandtech.com/show/21493/the-amd-ryzen-7-9700x-and-ryzen-5-9600x-review/4
This relies entirely on the compiler to generate the AVX512 code (no manual assembler), and there's a 26% FP performance uplift in Zen 5. Too bad they didn't run it with AVX512 disabled, because I'd expect that number to stay largely intact.
The AVX512 support in Zen 5 kind of buries the lede that the entire FP unit was beefed up. People who think Arrow Lake will have a massive advantage on regular AVX code will be very disappointed.
the_dude_that_faps@reddit
Compare mobile zen 5 to zen 4 then.
Pristine-Woodpecker@reddit
That doesn't work because mobile Zen 5 cuts down exactly in that area too; see e.g. https://www.hwcooling.net/en/zen-5-tested-mobile-core-differs-considerably-from-desktop-one/
the_dude_that_faps@reddit
You say "The AVX512 support in Zen 5 kind of buries the lede that the entire FP unit was beefed up." If you go with mobile Zen 5, pretty much everything extra about Zen 5 for AVX512 is removed. That would allow you to know if the entire FP unit was indeed beefed up. Or am I just not following you? (Entirely possible TBH)
SJGucky@reddit
So AMD is rightly concerned about Intel. At least until they fix their Zen 5s.
I hope that pushes AMD to release their 9800X3D earlier. :D
steve09089@reddit
It probably will if ARL does good in gaming.
SJGucky@reddit
There are games that benefit from more cache, those are dominated by the 7800X3D, but other games that don't use that cache are dominated by Intel.
ARL probably won't change that.
Question is, can the 9800X3D change that?
stormdraggy@reddit
A downclocked 9700"X" with a fat cache on top? Fat chance.
Strazdas1@reddit
Given current clocks on the 9000 series, it's possible the X3D variants won't be downclocked.
stormdraggy@reddit
nah, that's not true.
They didn't release a true 96/7X. They put out the base versions with fake efficiency gain claims, and are now dropping an AGESA patch to make them X variant processors, with resulting TDP and clock increases. The 9/50 are basically at the same clocks as before.
the_dude_that_faps@reddit
If the gap is less than 10% between this and vanilla zen 5, I don't think AMD will have much to worry thanks to x3D.
capybooya@reddit
The question is though, will the core parking/thread scheduling be more messy on Z5 3D or ARL?
Dangerman1337@reddit
I suspect AMD will launch X3D in time for Black Friday/Christmas sales. Would be a waste not to.
Eriksrocks@reddit
Black Friday/Christmas of this year? No way, we would have heard way more about it by now.
chocolateboomslang@reddit
I'd still be scared to touch a high end intel chip with the way they handled . . . stuff.
yabn5@reddit
It’s okay, with AMD you get security patches for the same period of time that the 14th gen is getting full warranty.
chocolateboomslang@reddit
Fun, I get to mail them my cpu and wait how long for them to send me a replacement?
sansisness_101@reddit
Well, if you're on Ryzen 3000 series or lower you're fucked, you have a permanent gaping hole in your security.
Strazdas1@reddit
They relented and agreed to fix 3000 series after all.
sansisness_101@reddit
Oh really, why didn't they do it in the first place tho
Strazdas1@reddit
No idea. Probably thought there weren't enough 3000 series users left to complain about it.
chocolateboomslang@reddit
Recently said they would fix it.
sansisness_101@reddit
You RMA the chip? Even if you bought the first 13900KS, you would still be in warranty range
chocolateboomslang@reddit
And what do they send you, another faulty 13900? Or a new chip that you now need to buy a new motherboard for?
shrimp_master303@reddit
The CPUs aren’t faulty
chocolateboomslang@reddit
I see now that the oxidation issue and instability are unrelated, but the point stands, they won't have enough CPUs to replace them for everyone. They're not stocking 13900k's and 14900k's into 2027-2028.
Strazdas1@reddit
The warranty requires them to make the client whole by offering the same or better. If they don't have the 14900K in stock they will send you the 15900K, or the 285K as they call it now.
jedidude75@reddit
Ryzen 3000 is getting patched, but your point stands for 2000 and 1000 series chips.
4everban@reddit
The handling part leaves a lot to be desired
shrimp_master303@reddit
Their handling of it was actually fine. People thought it affected like 50%+ of CPUs when it was more like 5%.
AHrubik@reddit
Now the question is, can we trust that it's stable and not rusting on the inside?
DjBass88@reddit
This. How do we figure this out, and can reviewers look into it without waiting for months?
Strazdas1@reddit
It would be impossible to find, or rather, no reviewer has the capital to even begin looking for these kinds of issues at launch. The cost would be 8 figures.
soggybiscuit93@reddit
That would be pretty crazy if TSMC fabs had the same oxidation issues that the Intel Arizona fab did in 2023
SlamedCards@reddit
It's made by TSMC
PotentialAstronaut39@reddit
Power?
Voltage spikes?
Famous_Wolverine3203@reddit (OP)
Have seen so many comments similar to the one above on this thread that it's starting to annoy me a little.
It's a Geekbench test. Where do you expect me to get you power and voltage graphs? Sorry if I'm being a bit snarky, but what is the point of this comment other than to ask for info that anyone can see is not available?
PotentialAstronaut39@reddit
We're not asking you.
We're asking Intel / testers.
Famous_Wolverine3203@reddit (OP)
You’re asking Intel to answer your questions in this thread?
Strazdas1@reddit
We know that some of the reviewers do read this subreddit.
PotentialAstronaut39@reddit
SMH
ashberic@reddit
The rumor, depending on who you believe, is +/- 30W compared to a 14900K.
I find it very hard to believe that they'll push the power higher. My guess based off absolutely nothing is 200-220W (~ -15%).
munchkinatlaw@reddit
Geekbench? Was there nothing worse they could find? Maybe Excel row creation?
Strazdas1@reddit
Excel Vlookup via VBA. Microsoft forgot to multithread that one, it runs on a single core only.
shuozhe@reddit
Wow, Intel really got rid of HT. Funny how a couple of years ago some believed SMT3 or even SMT4 was coming. Wondering now if the P cores will stay single-threaded and they move SMT to the Atom cores (like Intel did on the Phi).
Strazdas1@reddit
With plentiful E cores, a better scheduler, and SMT never being that great to begin with, no wonder they are getting rid of it. SMT made sense when we had 2-4 cores. It does not make sense when we have 8-20 cores.
BookinCookie@reddit
Both P cores and E cores will lack SMT for the foreseeable future.
Framed-Photo@reddit
Yeah after 9000 series I think I'm totally burned on any performance rumours.
They were never super accurate before, don't get me wrong, but I don't remember the last time the performance rumours were that wrong, same with the slides from the company.
Would love to get some actual competition back from intel though, especially if they can get their power consumption down.
Fromarine@reddit
Zen 5 has avx 512 which inflates geekbench 6 scores
pikob@reddit
Also if you really need a 16 core CPU, it's very likely you'll make good use of AVX512. Which really has substantial gains on 9xxx.
Strazdas1@reddit
Not true. I use workloads that are fully parallelized (they can use 32 cores if I give them that many) and use none of AVX-512. I'm just doing math.
ResponsibleJudge3172@reddit
Not really. AVX512 is way too niche, otherwise everyone would be singing praises of zen 5 rather than just certain Linux workloads by one reviewer. AVX2 is far more prevalent and noteworthy.
AVX512 is akin to FP64 performance on GPUs
the_dude_that_faps@reddit
Not really because the extra instructions are also available, and useful for 128-bit and 256-bit workloads.
The problem has been that consumer adoption has been terrible and even the implementations available had terrible compromises. This is not the case with either Zen 4 or Zen 5.
While we might not see huge adoption, that's very likely to change. Take simdjson. This library has adoption and many uses too.
So this idea that avx512 is only for HPC is absurd.
Fromarine@reddit
Not my point. I'm saying the increase applies to all workloads on Arrow Lake, not that it's an average that has been skewed by AVX 512 like on Zen 5.
Pristine-Woodpecker@reddit
I'm not seeing a subtest breakdown in the article, so what makes you claim this? Or is this just a guess/assumption?
Fromarine@reddit
Go to BenchLeaks on Twitter. They have the browser result where Geekbench breaks it down into like 20 categories. There was only 1 out of all 20 where Arrow Lake was worse. In the general ST score, integer and FP obviously improved, while crypto got roughly the same score as the 14900K, bringing the average down, but who cares about crypto? That was one of the places Zen 5 substantially improved over Zen 4 tho.
jocnews@reddit
Not much actually.
https://x.com/9550pro/status/1824057613434434042
lightmatter501@reddit
AVX-512 is real performance, game devs just need to learn how to do runtime feature selection to take advantage of it.
capybooya@reddit
Is it even realistic to expect AVX512 to be relevant in games when Intel has dropped it from its consumer lineup for several years?
itsjust_khris@reddit
From what I’ve read many games don’t need SIMD instructions at all. To truly benefit from them you likely have to hand-tailor the relevant code, which is incredibly difficult and, given the minimal use in most games, probably seen as not worth it.
I would like to be wrong since we see so many improvements in this area, but many games have launched with AVX2 or even SSE3 requirements and later dropped them because people complained, showing it wasn’t truly needed at all.
RE7 once needed AVX2 but it turns out only one scene of the game actually used it, so many on older CPUs just emulated AVX2 for that one scene and played the game just fine elsewhere.
lightmatter501@reddit
Factorio will make use of AVX-512 if it is present for substantial performance gains.
Any physics engine that needs to do collision checks can use SIMD to vectorize those checks.
Runtime feature selection is a function annotation that takes 10 seconds to add and creates a per-microarch version of the function and everything it calls based on what you specify.
The reason those requirements were dropped was because they were losing sales and probably saw refunds. In theory all CPUs running Windows 11 support AVX2, so AVX should be a safe baseline. They help performance, but every vector instruction can be emulated more slowly with other instructions, so yes, they aren't needed, they are just performance multipliers for hot loops.
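To illustrate, a minimal sketch assuming GCC 6+ on a Linux/ELF target (recent Clang also supports the attribute on x86); the function and values are made up for illustration, not taken from any game:

    /* Runtime feature selection via GCC's target_clones attribute.
       The compiler emits an AVX-512 clone, an AVX2 clone, and a baseline
       clone, plus a resolver that picks one at load time using CPUID. */
    #include <stdio.h>
    #include <stddef.h>

    /* Count axis-aligned boxes overlapping a query interval -- the kind
       of branch-free collision pre-pass that auto-vectorizes well. */
    __attribute__((target_clones("avx512f", "avx2", "default")))
    size_t count_overlaps(const float *lo, const float *hi,
                          float qlo, float qhi, size_t n) {
        size_t hits = 0;
        for (size_t i = 0; i < n; i++)
            hits += (lo[i] <= qhi) & (qlo <= hi[i]);
        return hits;
    }

    int main(void) {
        float lo[4] = {0.0f, 2.0f, 5.0f, 9.0f};
        float hi[4] = {1.0f, 4.0f, 8.0f, 10.0f};
        printf("%zu\n", count_overlaps(lo, hi, 3.0f, 6.0f, 4)); /* prints 2 */
        return 0;
    }

One annotation, three code paths, and the binary still runs on machines without AVX.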
hunter54711@reddit
is this actually true? I don't think I've ever heard of this
lightmatter501@reddit
If you objdump the binary you can grep for AVX-512 instructions if you don’t believe me.
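A crude but quick check (binary path hypothetical): AVX-512 code uses the 512-bit zmm registers, so they show up directly in a disassembly:

    objdump -d ./factorio | grep -c 'zmm'

A nonzero count means AVX-512 instructions made it into the binary.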
hunter54711@reddit
Yeah I ain't going through all that. I just did a quick Google and the only thing I saw was a Factorio dev saying they tried AVX, AVX2, and AVX 512, and they didn't see any measurable performance uplifts.
All_Work_All_Play@reddit
I can't find anything about it. I would be pleased if it were true. I know Path of Exile uses AVX, but I don't think it specifically uses the 512 implementation (and I'm not certain how well CPUs can reallocate unfilled 512-bit units).
DX12 apparently has AVX as part of the API, which makes me think Vulkan does as well.
buttplugs4life4me@reddit
It can be a huge benefit in simple stuff like collision detection, which basically means every game could benefit. It was such a slap in the face from CDPR that they patched Cyberpunk from requiring AVX, which was over 10 years old by that point, to requiring SSE only, which was 20 years old. Just do fucking runtime detection.
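For what it's worth, the detection itself is a handful of lines with GCC/Clang builtins. A minimal sketch, with made-up function names standing in for the real code paths:

    /* Manual runtime dispatch: probe CPU features once, then pick a
       path, so one binary serves AVX2 and SSE-only machines alike. */
    #include <stdio.h>

    static void collide_avx2(void) { puts("AVX2 path"); } /* placeholder */
    static void collide_sse2(void) { puts("SSE2 path"); } /* placeholder */

    int main(void) {
        __builtin_cpu_init(); /* initialize GCC's CPU feature probe */
        if (__builtin_cpu_supports("avx2"))
            collide_avx2();
        else
            collide_sse2();
        return 0;
    }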
Falvyu@reddit
What makes you think they removed AVX code instead of adding 'runtime detection' and alternate SSE/scalar code paths?
itsjust_khris@reddit
The thing is, to my knowledge there was no change in the performance of the game, suggesting that AVX was never a big factor in their code to begin with. It was disappointing, but ultimately my research points to the conclusion that AVX just isn’t a top item on a game engine dev’s list of improvements. I’m sure they’ve evaluated it, consoles support AVX2 now and many Sony games use it, but perhaps gaming truly isn’t looking to use SIMD, otherwise consoles wouldn’t cut down their vector units.
buttplugs4life4me@reddit
A lot of the AVX speedup is usually wasted by inefficient memory access patterns. If they aren't actually designing for SIMD and simply bolt it onto an existing implementation, chances are the speedup they achieve is within margin of error, simply because the memory access pattern is so inefficient that it swallows the gains.
Falvyu@reddit
Unless you're satisfied with auto-vectorized code, which is still useful but has its limits, runtime detection requires multiple algorithm versions.
Given that:
Experienced SIMD developers are not that common
SIMD development is time-consuming (both design and debug)
Software architecture must be designed to be compatible with SIMD operations.
Games are relatively complex software (at least compared to scientific/HPC software).
It's fair to say that it's more interesting for companies to target a common denominator (e.g. SSE). Yes, performance isn't optimal, but going further isn't worth the development cost (especially if said game is mainly GPU-bottlenecked).
nnevatie@reddit
Or they should learn to use ISPC, which does the selection automagically for them.
lightmatter501@reddit
Or use SYCL C++, which is a much more sane programming model and is more portable.
jedidude75@reddit
It's still only 9% faster than the 14900k.
rtnaht@reddit
14900ks should be compared to 15900ks.
Vb_33@reddit
Intel Core Ultra 9 285KS*
jedidude75@reddit
15900k doesn't exist
79215185-1feb-44c6@reddit
AVX-512 does shit for virtualization and compilation. Not all of us are content creators.
cuttino_mowgli@reddit
Dude, I'm just waiting for GN reviews at this point. All these "rumors" are just not worth it, especially from just one synthetic benchmark.
Famous_Wolverine3203@reddit (OP)
The Geekbench leaks for Zen 5 were all pretty accurate a few months before launch.
Flynny123@reddit
Yeah there was a lot of ‘maybe it’s an engineering sample’ type stuff that was probably in retrospect fully functioning chips.
Rupobot@reddit
Intel said its new chip will use 200 watts instead of the 300 watts that the 14900K uses, BUT AMD is coming out with a 9950X3D that I haven't seen a comparison on yet.
vk6_@reddit
Geekbench 6 also claims my Snapdragon X Elite is faster than my 5950x (when the 5950x is 56% faster in Cinebench 2024) so I don't really trust their multi core results anymore.
jocnews@reddit
Geekbench 6 MT scores are useless, and particularly so for comparing very different architectures like this, with very different thread counts and SMT / no-SMT.
The benchmark simply doesn't scale, so the MT test mostly gives results that speak about single-thread performance, not about multi-thread performance. That leads to those absurd results where 128core server CPUs don't beat phone SoCs.
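To put rough numbers on it (illustrative parallel fraction, not Geekbench's measured one): by Amdahl's law, a task that is only 60% parallel speeds up by at most 1 / (0.4 + 0.6/128) ≈ 2.5x on 128 cores, so a server chip with modest per-core speed can easily lose to a phone SoC with a few fast cores.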
Famous_Wolverine3203@reddit (OP)
We are not comparing ARM with x86 here though.
vk6_@reddit
My point is that I don't think Geekbench multi core results are a particularly good benchmark, and especially not with these rumors where nothing is known about other factors like power consumption.
Famous_Wolverine3203@reddit (OP)
Geekbench’s MT figures, while lacking, are comparable between flagship Intel and AMD offerings.
But you’re right, I wouldn’t use Geekbench for MT testing. But I wouldn’t use Cinebench either.
SPEC is a better comparison point.
no_salty_no_jealousy@reddit
That Core Ultra 9 result is without AVX512, so I would say it's impressive to beat the Ryzen 9950X: not only does Intel Arrow Lake lack AVX512, it also has fewer threads.
jocnews@reddit
AVX-512 only adds a couple of points in Geekbench: https://x.com/9550pro/status/1824057613434434042
It doesn't seem to use the instructions much.
Real-Human-1985@reddit
Another Geekbench victory to leave heads scratching on launch.
Fromarine@reddit
Zen 5 specifically got AVX 512 support, which boosts (inflates) Geekbench scores significantly. Turn off AVX 512 and it'll line up.
Pristine-Woodpecker@reddit
The AnandTech review got 26% in SPECfp using only the compiler generated AVX512 code. So I'm not sure. The entire FP unit in Zen 5 got a lot beefier, not just the full size AVX512 units.
steve09089@reddit
No AVX-512 this time to confuse people in ST and MT performance, so there should be some hope.
Of course, that all depends on if the latency is good or not. It could be extremely horrendous like Zen 5, destroying any ST lead in gaming.
INITMalcanis@reddit
Let's hope it can go 6 months without significant degradation.
shrimp_master303@reddit
95% of 13th/14th gen processors did not have issues with this
Over-Instruction-103@reddit
A few things to keep in mind: they're comparing results with the first tests of the 9950X on Geekbench, which scored barely over 20k. But if you look at recent results with recent BIOS updates, it can score between 23k and 24k.
Second thing, I'd wait for other benchmarks aside from Geekbench before actually starting to draw conclusions.
Ok_Assumption_3667@reddit
As Intel says, even the 14900KS is superior to the 9950X...
Astigi@reddit
Performance without power figures is useless.
Rais93@reddit
The 14900K performs better than the 9950X in GB6, can we already stop talking about this useless ass bench? It adds nothing to the table.
Vb_33@reddit
But Geekbench is great when comparing x86 to Apple
Rais93@reddit
A synthetic to compare two uarchs? Ok mate, we have a different understanding of what a benchmark is.
Geddagod@reddit
Does it? The chart from videocardz shows the 9950x beating the 14900k by ~10% in ST, and also beating it in nT. While the nT benchmark is a bit more divisive, what makes GB6 ST a useless ass bench?
Rais93@reddit
I am not sure which chart you are looking at. Open the OP's link from hh.
The 14900K is superior in MT by a very small margin (let's say equal), while for example TPU's synthetics show a 5 percent margin.
The reason Geekbench is the first bench to leak should ring a bell to you. It has no way to influence the launch of the product.
Geddagod@reddit
Oh sorry, I have mixed up the two websites lol. Should be from videocardz. That's on me.
I mean I agree the nT workload is a bit weird due to how it doesn't scale well, but still, the ST score is fine IMO.
It's because GB scores are easily able to be found by the public. You can see the latest GB results online.
Rais93@reddit
It's not; every other synthetic has public scoreboards. It's due to the fact GB is easy, free, and quick to run and can give a reliable hint whether an engineering sample is working as intended.
I've worked in the media and it has always been the case, there was always a benchmark that was the standard first leak for a launch. Once it was SuperPi, later Cinebench, but this trend is the worst.
The first versions of Geekbench (which was intended as an Android bench) were the most absurd crap but historically favoured Intel on desktops. Maybe that's the reason too.
Geddagod@reddit
If I run a Cinebench R23 run on my 12900H system, could you see it? Afaik, Geekbench requires users who don't pay to have their test results viewable to the public, meaning that if I were working at an OEM and had an early ARL sample and ran it on GB, people could see my score. I don't think Cinebench R23 works the same way.
Applies to a bunch of other benchmarks too.
Successful_Ad_8219@reddit
I'll wait for Phoronix benches. I think we have learned that Windows is a pile of shit performance-wise and can't be trusted to get the most out of these CPUs.
psydroid@reddit
I usually disregard all but the highest Windows scores on Geekbench, because I assume systems with much lower scores suffer from severe misconfiguration issues.
Famous_Wolverine3203@reddit (OP)
Too bad most people use their CPUs on Windows. Phoronix is more server-oriented.
Successful_Ad_8219@reddit
If the CPU performance is double digits better in another OS, then we should be putting pressure on Microsoft to fix Windows. But, whatever. I guess we'll throw our hands in the air and just say "Too bad".
Microsoft's captured audience just keeps taking it as it's given to them and they're completely fine with it. "Too bad".
Famous_Wolverine3203@reddit (OP)
You listed one of the reasons why Linux is better than Windows. If all the differences between Linux and Windows came down to CPU performance, Linux would have run away with market share a long time ago.
Linux isn’t being punished for the crime of having better CPU performance than Windows. There are various other factors that limit its reach. If this product were for servers, I’d have no issue with Linux being a focal point.
But as it stands most of these CPUs will run games and other apps on Windows.
Successful_Ad_8219@reddit
Even a small understanding of history, economics, advertising, business, etc, would find plenty of faults with that illogic.
A crime eh?
This constant conflation of Linux is for servers is historically untrue.
You mean the specific SKU? A firm maybe. The architecture that will exist all over all of this generations of products? Not by a country mile.
/thread
Famous_Wolverine3203@reddit (OP)
A complete mastery of the above topics still won’t change the fact that Windows has an order of magnitude more market share than Linux. And evaluating consumer CPUs on Windows is the more relevant of the two.
I spend my days hating a penguin operating system. Persecution complex among redditors is quite high this year.
Linux does serve a wider range of applications. But in the end, its share of the market these CPUs are primarily sold into, namely consumer operating systems, is quite small compared to Windows.
I wouldn’t disagree if you said you’d rather stick with AMD because you specifically prefer Linux. But to claim benches on Windows are irrelevant, the OS that most people use said CPUs on, is quite a stretch.
Successful_Ad_8219@reddit
1) No one is marketing Free on the shelves of retailers. Microsoft captured their audience via product that would make money on store shelves because the retailers get a cut. This isn't difficult to logic out. This does not give them some superior status. There are plenty of reasons why Linux didn't "run away with market share", and it has nothing to do with the performance of the OS, which is what this is about. So drop this ignorant conflation / non-sequitur.
2) Market share has nothing to do with Windows performing like shit. Again, a complete non-sequitur.
3) I never claimed benches in Windows are irrelevant. Not even close. A deliberate strawman.
If you continue to reply to me with these irrationalities, I'll continue to point them out until I'm bored of your nonsense. Please continue. It's easy sport.
Famous_Wolverine3203@reddit (OP)
You are right. My sincerest apologies. I’ll get your Linux Geekbench results as soon as I can good sir.
Successful_Ad_8219@reddit
And you reply with a sarcasm. That was easy.
aztracker1@reddit
Kind of my take as well at this point... I run Linux for casual usage and dev work, so my usage is at least closer to what Phoronix and some of the Level1 results seem to indicate vs. a Windows gamer. That actually made me a bit more excited to make the jump from my not too old 5950X.
hunter54711@reddit
Well at least Intel will deliver something competitive if AMD isn't. I'm hopeful for Arrow lake after the current disaster for 13th/14th Gen. I hope they genuinely do what AMD did and lower TDP across the lineup to more sane levels. It would start to make Intel look better if their brand wasn't attached to "hot, loud and power hungry"
astrobarn@reddit
Wait 👏🏻 for 👏🏻 reviews 👏🏻
Geekbench is also a bit unreliable in my experience.
ChumpyCarvings@reddit
What's the blue screen per hour rate? Need that graph
juhotuho10@reddit
Does it spontaneously combust though?
VegetablesOfDoom@reddit
That's on TSMC's N3 process, right? I guess a lot of the improvement would come from that.
YellowMathematician@reddit
Should I go with Zen 5 if most of my heavy use cases are in Matlab?
UsedSquirrel@reddit
Intel doesn't need benchmark wins right now, they need durability test wins.
yoontruyi@reddit
Is it going to break when you use it like their other new chips?
I would be afraid of buying Intel till I am sure that the chip is actually not going to break and they actually recall their broken chips.
Beremus@reddit
Good numbers, but very hard to trust the CPU itself. What if the same fiasco repeats?
DaDibbel@reddit
Don't be an early adopter, if you can help it.
thelastasslord@reddit
Here's where we find out if Intel's design guys are as good as AMD's.
Ar0ndight@reddit
I never thought I'd be saying this, but at this point my next upgrade might be Intel after all.
After the whole 13/14th gen debacle I was ready to hop on the AM5 train, but with Zen 5 in its current state that's out the window; I wouldn't be upgrading much while having to invest in an entirely new platform. If Arrow Lake can be ~15% stronger than a 9950X overall, with a strong showing in gaming AND good power consumption, I'm onboard.
There's still X3D, if AMD can positively surprise there (higher uplift than usual + scheduling on point) they're back in the conversation for me but until I see numbers I won't be betting on that.
MonkAndCanatella@reddit
Is this really "blazing"? It seems like a very incremental improvement
Psyclist80@reddit
Looking forward to strong competition at the top of the stack! Power use and gaming performance will be two key metrics to watch for! Looking forward to this launch, then the AMD X3D parts to try and counter (if needed). Keep pushing!
TheJoker1432@reddit
If power consumption is far reduced then it's great.
whatthetoken@reddit
The best part is this is unverified, without details on power usage and voltage. I'm sure Intel shoving 1.5V into the P-cores for 2 generations is a good reason to "Just trust" them.
ptr1337@reddit
Laughing on Linux with a 9950X:
https://browser.geekbench.com/v6/cpu/compare/7425534?baseline=7430435
Famous_Wolverine3203@reddit (OP)
Is this supposed to be a flex? Intel CPUs score better on Linux too.
ptr1337@reddit
Just wanted to joke around a bit. But yes, on Linux they also score higher.
Electrical_Tailor186@reddit
It’s hard for me to believe that anyone interested enough in the CPU market to check any benchmark will trust Intel enough to buy their product…
kobexx600@reddit
So you’re saying?
JakeTheCake72@reddit
I’ll wait till they have confirmed functional microcode.
steve09089@reddit
This is an improvement over the last GeekBench score, with competitive ST and MT, but…
GeekBench isn’t a good measure of Multi-Thread for scalable applications. Need to see CineBench results too.
Snobby_Grifter@reddit
Not amazing, but better than Zen 5, and probably with similar or better power draw. Assuming the upcoming x3d has similar uplift to base Zen 5, I could see new system builders being enticed back over to Intel.
If Intel ever brought back another level of cache, they might be able to dominate in all aspects, but it's Intel, so what are you gonna do?
BleaaelBa@reddit
RMA.
nanonan@reddit
Nice source.
zerokul@reddit
Now give me the power usage and the voltage fed to the P-cores, because Intel has a funny side behaviour of self-immolating their silicon into dust by over-volting the living hell out of their CPUs.
Nothing in their microcode updates for 13/14th gen gives me confidence that they're learning a lesson, besides putting the suffering on the user and hoping the CPU doesn't burn up before the warranty expires...
Intel broke that trust for me, to where I don't trust any numbers until all voltage behaviour and spikes get analyzed.
The only chips of theirs which I'm running are their mobile in a couple of ThinkPads.
F9-0021@reddit
Very promising. Especially since there should be a substantial efficiency improvement over Raptor Lake as well.