Intel CFO admits Arrow Lake missed expectations: “We didn’t have a good offering this year”, pins hopes on Nova Lake
Posted by -protonsandneutrons-@reddit | hardware | 213 comments
-protonsandneutrons-@reddit (OP)
All that money spent on TSMC.
SkillYourself@reddit
Videocardz cut out the context of the quote, duh.
Intel's mobile lineup is the one segment of the company gaining share while posting good profit so using TSMC was worth it. Probably should've used N4P instead of N3B but still better than refreshing I4 or I7 CPUs.
GhostMotley@reddit
I love my laptop with a 258V, thing is an absolute god-send, exactly what I was looking for.
Perfectly adequate performance for browsing the web and watching videos, extremely cool & quiet, fan hardly ever kicks in when on battery and it only gets mildly warm, I charge it like once a week and it's fully x86 compliant.
What has me concerned is Intel says Lunar Lake-MX was a one-off and they won't be doing it again. I guess we'll see how good Panther Lake is, but I would not want to go back to a hotter or louder laptop.
If neither Intel nor AMD can make adequate successors, I would have to bite the bullet and hope ARM improves for Windows laptops, or buy a MacBook Air.
Glum-Position-3546@reddit
I don't get it, laptops have fulfilled this requirement since like 2018.
Dangerman1337@reddit
A Darkmont-only SKU with more Xe cores and maybe stacked cache would be a sick laptop and handheld SoC.
ResponsibleJudge3172@reddit
Pantherlake ultra low power will replace Lunarlake. The issue was how they built the CPU, which made it too expensive for the prices they could charge, not the segment itself.
Scion95@reddit
But will low power pantherlake regress on performance compared to Lunar Lake?
So far, at least in the leaks, most of the Panther Lake SKUs have only 4 Xe cores in the GPU, except the top-end one, which has 12, but that's listed as Panther Lake H, not U.
It also has fewer PCIe lanes, like Lunar Lake, and only mentions LPDDR5X for the memory, not DDR5 like the rest of the lineup, which does admittedly make me curious about what's going on there, assuming the leak is even legitimate.
It sorta feels like, even if they have a low power Panther Lake, it won't beat Lunar Lake at the same power, and might perform worse. And it's hard to imagine how Panther Lake will get the battery life Lunar Lake does, how it will get the total system power at idle down low enough.
DerpSenpai@reddit
Windows on ARM has most consumer software running. Some games with anti-cheat aren't, but it's a matter of time. Still, if you like gaming I would buy an Nvidia laptop and not QC; their drivers are not very good.
ProfessionalPrincipa@reddit
Just say no to locked down Qualcomm laptops.
DerpSenpai@reddit
I agree, but the X Elite does work on some Linux laptops.
hwgod@reddit
And yet they've repeatedly called LNL a mistake that they won't do again. It's kind of a contradiction.
ResponsibleJudge3172@reddit
It's not just Lunarlake. Even Arrowlake mobile is far more competitive than desktop.
Mostly because, as Intel showed at launch, the higher the power, the lower the P-core performance gain over Raptor Lake.
At ultra low power, they claim 20% overall performance gains, which shrink to 10% at full desktop power.
hwgod@reddit
I mean, yeah, a 2 node jump will do that...
steve09089@reddit
The mistake with LNL was the on-package memory, which meant Intel had to handle more SKUs rather than the OEMs.
hwgod@reddit
No, the memory problem was Intel owning the BOM cost. But the problem is that the memory optimizations are a significant part of what makes LNL so good.
Scion95@reddit
I mean, they could have just. Only had one SKU for the memory. Not bothered with the -6 or -8 and just made all of them 32GB.
I don't think the 16GB ones are even in anything, last I checked, they seem like a waste, unless the packaging of the 32GB sometimes breaks the memory dies and they can sell them as 16GB.
...I mean, also, I'm not an expert, but Apple has 24GB and 48GB on package memory, with, last I checked, 128bit wide memory buses, at least for the base M4, so like. Intel had options, to optimize for what they needed for cost.
steve09089@reddit
16GB ones are definitely in a bunch of laptops, way too many laptops in my opinion.
grahaman27@reddit
18A should be cheaper and a great way to showcase the node. And if it's good, it's a competitive advantage
lijmlaag@reddit
Maybe, but Nova Lake will also be made by TSMC.
grahaman27@reddit
Perhaps; I think it's not confirmed. Most reports suggest Nova Lake will use a mixture: https://www.tomshardware.com/pc-components/cpus/intel-nova-leak-28-core-cpu
Professional-Tear996@reddit
Those who are downvoting you for saying what MJ Holthaus said at the Bank of America Securities conference should explain how Nova Lake uses N2 when it taped out in Q3 2024 and had pre-QS 8+16+4 CPUs being shipped around in July 2025, while N2 is only supposed to reach HVM in October-November 2025 according to TSMC themselves.
grahaman27@reddit
That's a good point. I'm just saying what the rumors say, but it does seem impossible based on the timeline for N2.
hwgod@reddit
What's impossible? N2 is ready end of this year, while NVL ships mid/end of next. There's no contradiction here at all.
Professional-Tear996@reddit
Where are the Zen 6 products then?
hwgod@reddit
A product doesn't have to align with the first availability of the node. Arrow Lake itself is a great example. Shipped a year or more after N3B was ready.
Professional-Tear996@reddit
If N2 is ready by the end of the year, why is the first product that uses it not ready as well by the end of the year?
Contrast it with PTL and 18A which are both ready and available by the end of the year.
Do you have any idea how stupid your logic is?
hwgod@reddit
As I just said, the design may not be ready yet. Just as was the case for ARL.
Lmao, first you claimed 18A was already ready, now you claim end of the year? After just claiming you can't have ES chips before a node is HVM...
You can't even keep your own argument straight. Can't tell if this is trolling or just deep ignorance.
Professional-Tear996@reddit
If the Zen 6 design is not ready, then what did they tape out, per the announcement made on 14th April?
PTL, which is on 18A, launches by the end of the year - is this so hard to follow?
hwgod@reddit
It's not ready for HVM. There's usually 1-2 years of work from A0 tapeout to launch.
You claimed 18A is HVM ready now. So by your own logic, why hasn't PTL already launched?
Professional-Tear996@reddit
Yeah we know it isn't ready for HVM because TSMC hasn't started HVM for N2 yet. This isn't arcane knowledge.
Its launch is scheduled as per Intel's statements - which is by the end of the year.
Intel doesn't have to satisfy your ass-backwards logic as a yardstick with which to measure their execution.
hwgod@reddit
Lmao, you just claimed HVM was necessary for ES chips.
Thus, by your own claim, 18A can't be ready.
hwgod@reddit
Yes, HVM is not required for engineering samples. This should be blindingly obvious.
18A isn't HVM ready either, so why are you not making the same argument for it?
Professional-Tear996@reddit
Go troll elsewhere.
18A is in HVM - by Intel's standards, not TSMC's - because Panther Lake is releasing in 2-3 months.
You still cannot explain how Intel got engineering samples - not A0 silicon - in the first half of 2025 - when TSMC and AMD are on record saying Zen 6 Venice is the first N2 HPC tapeout on 14th April 2025.
hwgod@reddit
No, they've said it's in risk production, same as N2 is.
Engineering samples can indeed be A0 silicon.
They announced that it had happened on that day. That does not mean that was the date of the tape out. In fact, if you bothered to read the announcement, they said it had already been brought up by that time.
Professional-Tear996@reddit
Keep believing this stupidity that 18A is still in risk production when Lenovo laptops with Panther Lake are being moved around for validation purposes.
Except A0 is the label given to first tape out and Nova Lake is currently at Pre-QS stage. Absolute horseshit take from the most illiterate member on this subreddit.
Ass-backwards logic. I said they announced the tape out on 14th April, not that they got the first silicon hot off the production line on that same day
hwgod@reddit
Engineering samples don't require a node to be in HVM. I already explained this for you.
...You don't seriously think that most companies go through multiple tapeouts before hitting any ES milestone, do you? If so, then lmao.
You used that date to claim that Intel couldn't have gotten their own N2 chips. It's clear you don't even understand your own argument.
Professional-Tear996@reddit
Laptops with model names are being moved. Not merely engineering samples. And laptop chips aren't even going to be sent to OEMs as ES but as part of RVP - which was available for PTL a long time ago.
Pre-Qualification means it has advanced much further than A0.
Nova Lake even had RVP a few months back.
They don't need to because I firmly believe that the initial Nova Lake parts will not use N2. All available, publicly verifiable statements - not rumors from locked Twitter accounts - point in that direction.
Exist50@reddit
A mixture, sure, but that just means 18A for the low end and other tiles. The compute tile for the high-end parts will all be N2. If anything, some rumors have them pushing it further down the stack.
hardware2win@reddit
How do you know?
Exist50@reddit
Intel's explicitly said their use of TSMC is for the compute tile. And why would they bother using a far more expensive (to them) node unless it's significantly better than 18A? And then of course they use the premium node for the premium products.
It's not exactly some secret that 18A doesn't actually compete with N2.
Plastic-Meringue6214@reddit
bro this guy posts in the intelstock sub, he's another one of those people that refuse to let themselves and others see the obvious. im ngl but i really wish mods actively banned these guys
Geddagod@reddit
Nah, only the intel stock subreddit bans people they disagree with.
Banning people with unpopular takes (or those who are invested in a company) seems like a pretty slippery slope lol.
ProfessionalPrincipa@reddit
Half of the fights on here are between "us" and investors trying to gaslight us.
hardware2win@reddit
Who is "us" and why would I care about what you believe about 18A?
ResponsibleJudge3172@reddit
It's AMD and even TSMC investors battling Nvidia and Intel investors in a battle royale.
Both financial interests and emotionally invested people
nanonan@reddit
Investors aren't fanboys, the term you're looking for is bagholder.
hardware2win@reddit
If you see "the obvious" then you can usually make money out of it :)
Exist50@reddit
Not even Intel's this delusional about 18A, which is the ironic part.
nanonan@reddit
Careful what you wish for; how do you think intelstock got that way?
hardware2win@reddit
NVL is after PTL.
Pugs-r-cool@reddit
It also doesn't make sense to quietly change fab midway through a generation, and it would be deceptive to consumers. A CPU made on N2 and one made on 18A will behave differently, and designs need to change depending on the fab. You'll end up with two CPUs both called a 465K or whatever, but with different designs and differing performance. If they do it quietly, then people will be screwed over when their CPU turns out to be the worse-performing version. If they're upfront about the fab used, then no one will buy the worse-performing version.
scytheavatar@reddit
Why would it be a "competitive advantage"? At best 18A was always going to trade blows with N2, beating it in some areas and losing in others.
Strazdas1@reddit
Being able to produce competitive chips more cheaply is a competitive advantage.
grahaman27@reddit
You don't know that. It "trades blows" with N2, sure. But 18A is Intel's node -- based on a design that works best for Intel's architecture. It's very likely it will be more optimized for Intel's needs.
Also, 18A has backside power delivery, which should allow CPUs to really stretch their legs and eke out every drop of power on the top end. Which should help benchmarks.
Exist50@reddit
It doesn't do that. The "optimistic scenario" the comment above mentions has moved into fantasy at this point.
It was supposed to be a foundry node and relatively better optimized for mobile, AI, etc. Markets that Intel historically did not prioritize. Might still have some vestiges of the old thinking, however.
You're making the same mistake people did with 10nm. Don't focus on the bullet point features. They sound cool, but in no way guarantee an exceptional node. Instead, focus on overall PPA claims, with some scepticism as to how data is presented. And more importantly, look at what the products themselves tell you.
ResponsibleJudge3172@reddit
Arrowlake HX testing by Geekerwan basically plateaued after 80W in performance, which I find interesting. Are they saying this is reversed or is it "we'll get 'em next time!"?
Geddagod@reddit
I think this is more of an "Intel 7 being such a mature and high-volume node" thing than an N3B/LNC design issue tbh.
We saw a similar story play out with Intel 10nm/7 as well. Intel 14nm was so mature that Intel wasn't beating 14nm Skylake's TVB turbo frequency until Raptor Lake. Even the 12900K had a 100MHz lower Fmax than a 10900K.
6950@reddit
That's what happens when you add too many '+' to a node lol, there is still no 6GHz processor except RPL.
Strazdas1@reddit
I wish Pentium 4 had succeeded. I'd love to have my 20 GHz CPU cores without needing stupidly complex multithreading for so many things. But alas, physics is what it is.
6950@reddit
I want a 69 GHz core then xDd
Strazdas1@reddit
69.50 GHz based on your username.
6950@reddit
Lmfao
Exist50@reddit
Even if the more lofty rumors don't pan out, AMD will probably hit that with Zen 6. Beyond that, it seems like a Pyrrhic victory. Piledriver was a frequency champion, but no one remembers it fondly.
6950@reddit
As long as we are talking Zen 6, 6 GHz seems more achievable than 7 GHz.
Exist50@reddit
Figure something in the middle is likely where it'll end up. N2 is a significant boost by itself.
Dangerman1337@reddit
Zen 5 is on N4P and Zen 6 skips N3, going to at least N2P, so I think it could hit 7GHz. From 7nm to 5nm, Zen 3 to Zen 4, AMD went from 4.7GHz to 5.4GHz (5800X to 7700X). Zen 5 on N4P does 5.7GHz. I think hitting 7GHz is more likely than people think. It may just fall below, but I can see Zen 7 hitting 7GHz across the board.
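The trajectory argued here can be sanity-checked with a simple ratio extrapolation; a rough sketch using only the clocks quoted above (a naive model, not a prediction, and real frequency scaling depends on much more than the node):

```python
# Back-of-the-envelope extrapolation using only the clocks quoted in the comment above (GHz).
zen3_peak = 4.7   # 5800X, 7nm
zen4_peak = 5.4   # 7700X, 5nm
zen5_peak = 5.7   # Zen 5 on N4P

node_jump_ratio = zen4_peak / zen3_peak        # ~1.15x across that one node jump
projected_zen6 = zen5_peak * node_jump_ratio   # apply the same ratio to the N4P -> N2-class jump
print(f"projected Zen 6 peak: ~{projected_zen6:.2f} GHz")  # ~6.55 GHz on this naive model
```

On that simple model the result lands in the mid-6 GHz range, which is roughly where later comments in this thread put current expectations.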
SherbertExisting3509@reddit
According to MLID, the AMD teams have achieved 6.4GHz, but they've currently hit a scaling wall that they're trying to overcome.
IF they succeed, then perhaps we might see 7GHz on 1-2 cores on N2X.
I suspect they won't be able to overcome the scaling wall in time for a 2026 release, considering how new and niche the N2X node is.
I would also imagine that achieving 6.4 or 7Ghz won't come for free. It wouldn't surprise me if IPC will end up suffering at lower clocks.
6950@reddit
Yeah, but for peak frequency we can't say by how much; all foundries quote power at the same performance, or performance at the same power, in their graphs.
ResponsibleJudge3172@reddit
Theoretically, N3B should stomp it in transistor performance.
phire@reddit
Hyperthreading doesn't really affect the maximum clock speed, and the area cost is minimal.
The real cost of hyperthreading is R&D. It's tricky to get right and adds quite a lot of complexity to the design. And that complexity then makes it harder to add other optimisations, as hyperthreading touches everything.
ResponsibleJudge3172@reddit
Intel themselves touted energy savings. Clockspeeds and energy efficiency go hand in hand
phire@reddit
Actually using both threads means higher resource utilisation, which might translate to higher power usage and therefore lower boost clocks.
But only when both threads are actually occupied. You only need to idle the second thread to get the peak clock speed, not delete hyperthreading entirely.
Strazdas1@reddit
It depends. Does your single thread get fully loaded? How much time does it wait for other threads? How well do you feed your I/O? You can, theoretically, have a process so efficiently designed for the core that the hyperthreading overhead actually reduces performance. We had this happen with Minecraft servers at one point.
Exist50@reddit
Not really. Peak clocks aren't power limited.
Scion95@reddit
IIRC, there's also the part where, the reason Intel calls it Hyperthreading is that, IBM came up with Simultaneous MultiThreading or SMT for their Power PC systems, and Intel came up with their own implementation to get a single core to behave like multiple cores in logic.
AMD licensed SMT from IBM, instead of creating their own version from scratch, and by all accounts. What I heard is that some of the Power 8, 9, 10 and so on CPUs can have, 4, 8, or a theoretically infinite number of logical threads on a single physical core, and. In practice, my understanding is that usually, hyper threading gives Intel a 30% increase to multi core performance, vs. having it off, while AMD's implementation of SMT gives them an increase of around 50%. Give or take, depending on the CPU arch in question.
Notably, ARM and RISC V cores don't tend to go for any kind of SMT.
So, it seemed to me like Intel wanted to drop SMT because, as you say, the complexity, but also because they may have felt that their SMT implementation just. Wasn't worth it?
Exist50@reddit
They support more, but certainly not infinite. And when you dig into the details, the 8t version seems to use something more like CMT underneath.
There are a couple of ARM cores with SMT. E.g. Cortex A65, Nvidia Vera CPU. RISC-V, might have to wait and see.
Front_Expression_367@reddit
Arrow Lake mobile does have better clocks than Meteor Lake. For example, the Core Ultra 5 225H has its P-cores clocked at up to 4.9GHz compared to 4.5GHz for the Ultra 5 125H, and its E-cores up to 4.3GHz as opposed to 3.6GHz. Similarly, the Core Ultra 7 255H's numbers are 5.1GHz and 4.4GHz as opposed to 4.8GHz and 3.8GHz for the Ultra 7 155H. I guess that is something.
Shadow647@reddit
Meteor Lake is trash overall, the LP cores on it should always be parked otherwise it stutters like something from the 1980s.
Tasty_Toast_Son@reddit
That's odd, the LP cores on my 125H haven't caused any issues. Hitting the throttle with virtualization, gaming, compressing and compiling is all pretty seamless. Even mundane tasks like spreadsheet editing, PDF reading, and word editing are buttery.
Front_Expression_367@reddit
I guess it doesn't deliver the best performance. In my use case though it does just fine. The only problem is when you wake it up from sleep and use it too quickly (seems like Windows still sucks at scheduling 3 types of cores). Otherwise it is actually pretty damn battery-efficient. And the iGPU is good enough. It is just alright.
Creative-Expert8086@reddit
Same with Arrow Lake; the only good LP E-core is Skymont.
Exist50@reddit
LNC was a big redesign for the P-core, including moving to a synthesizable methodology. Probably significant growing pains there. That and maybe the structure growth vs GLC can probably explain at least most of the clock regression. After all, SKT was a huge jump from Crestmont and didn't have any such problems on N3B.
Not sure why you say Apple floundered though. Their N3B chips are fine. Certainly no regressions vs their very mature N4 ones.
The SoC fabric is the real culprit. They screwed up on MTL, and ARL inherited that. NVL should borrow more heavily from the LNL baseline, hopefully with some further enhancements on top.
monocasa@reddit
Can you expand on this? If it wasn't synthesizable, then they wouldn't be able to dump a core into one of the big hardware emulators during pre-silicon.
Exist50@reddit
Not much to say, really. All Intel P-cores prior to LNC are not synthesizable. They're basically a bundle of custom circuit implementations. That's why they couldn't port between nodes. Intel themselves have talked about this a bit publicly, albeit indirectly.
Among other problems, yes. Starting to see why P-core specifically has fallen so far behind?
ClearlyAThrowawai@reddit
Holy crap, how did they get anything done?
Amazing they've kept up as they have if that's how they've been designing the core, and I wonder how on earth the thing works as well as it does.
Exist50@reddit
A PD team several times larger alone than most companies' entire CPU teams, high tolerance for silicon bugs, and low standards for "getting anything done". Makes the last decade's stagnation a bit more understandable, no?
masterfultechgeek@reddit
Higher IPC and higher clock speeds are opposed tradeoffs in design.
There was something like a 10% uplift in the P cores' IPC and a 30% uplift in the E cores' IPC (give or take, YMMV, etc.)
FWIW, I saw both 9% and 14% uplift figures - https://en.wikipedia.org/wiki/Lion_Cove - https://en.wikipedia.org/wiki/Arrow_Lake_(microprocessor)
Exist50@reddit
The E cores had no frequency regression though. This is entirely a P core problem.
masterfultechgeek@reddit
What you're saying is NOT incompatible with what I wrote.
Vb_33@reddit
Lost HT but gained chiplet design unlike Raptor Lake.
ResponsibleJudge3172@reddit
Mhm, say that to 5nm Zen 4, with SMT but the same clocks as Arrowlake.
ResponsibleJudge3172@reddit
AMD's objectively worse chiplet tech, but not design (it even shows up in how Intel's idle is still better), doesn't stop them from clocking the same as Arrowlake while maintaining hyperthreading.
wooq@reddit
I think they met expectations, more so than previous generations. The biggest win is they don't seem to be oxidizing or self-immolating; your CPU not failing due to engineering and design defects is probably the biggest expectation. Aside from that, there have been noteworthy efficiency gains and IPC is up, though some performance is lost due to ditching hyperthreading. They were priced unreasonably at release but have evened out; you can pick up a 245K for $200 and a 265K for $260. If Intel had released them at that price point, everyone would have said "AMD has the best chips, Intel has the best chips for the price" and they would have sold out instead of sitting on shelves for a year.
They still lag behind AMD's gaming-focused offerings in game benchmarks but trade blows in productivity. So much discourse online is about gaming performance, but computers are used for so much more than that.
Intel's stock dropping is because of the previous generations' engineering failures and resulting PR disasters and their continual slow slide on the enterprise side. Not because Arrow Lake is a bad architecture.
Exist50@reddit
Gaming is the single largest market for high perf desktop chips. And it's not like ARL is exceptional in the rest.
But the real problem with ARL is cost. It probably costs 2x or more for Intel to produce relative to RPL, but is at best an incremental upgrade for desktop. There's good reason for the CFO to consider it a failure.
Also, ARL is pretty objectively crap as an architecture. Basically everything good about it can be attributed to N3, but even then, the efficiency for mobile is still crap vs the likes of Apple or Qualcomm with similar nodes, and the perf is unremarkable vs AMD's N4 products. LNL at least has something to show for its costs, even if Lion Cove still sucks.
Strazdas1@reddit
ARL is exceptional in laptop power/heat management. Apple levels of exceptional. The competition and older chips have nothing on it.
Exist50@reddit
No. That claim is ridiculous. ARL is very, very far from Apple levels of efficiency, especially in battery life.
jmlinden7@reddit
High-perf desktop chips are a minuscule percentage of the total market for desktop chips, the vast majority of which go into prebuilt office PCs.
Exist50@reddit
Anyone who doesn't care about desktop performance is buying RPL, because it's cheaper.
And the office desktop is dying/dead. The vast majority of companies just deploy laptops now. The remaining desktops are either a) perf insensitive, so won't pay a premium for ARL, b) form factor constrained (e.g. POS), where mobile parts work fine, or c) fall into some productivity workload like higher end content creation or engineering. That last market is still smaller than gaming, and split with the workstation platform.
So if you're going to design an expensive mainstream desktop platform, which ARL is, it damn well better be competitive in gaming, or it has little reason to exist.
ResponsibleJudge3172@reddit
They should. But they won't. They buy 7600X as the reviewer of choice told them to
Exist50@reddit
Anyone who doesn't care about perf isn't watching reviews at all.
ResponsibleJudge3172@reddit
They ask what CPU do I buy or ask AI or ask their nerdy friend or ask a clerk, all of whom are informed by marketing and reviewers. After all, they need to know what's there, and what it's compatible with and total costs of everything.
jmlinden7@reddit
Correct
Also correct, but this also applies to AMD
Also correct, but because Intel has larger volumes, their design cost per chip is lower
Exist50@reddit
Yes it does. Which is why AMD frankly doesn't spend much effort on their mainstream desktop parts.
By "cost", I was mostly referring to unit cost. N3, Foveros, large dies, high platform current limits, etc all add up very quickly.
But I would be skeptical of the claim that Intel's R&D is lower per unit. Yes, Intel is much higher volume, but their desktop platform is far more boutique. In AMD's case, their mainstream parts reuse their mobile silicon, and the high end reuses the server compute die (at least for now). So the R&D for desktop is one IO die, which they reuse for multiple generations, and 1, maybe 2 chipsets (also reused).
Meanwhile, for Intel, let's just look at ARL. They have no true mainstream offering at all, 1 compute tile reused with mobile, 1 essentially dedicated to desktop (HX I'll call negligible), a dedicated desktop SoC die (so far no reuse), a dedicated GPU, and 1-2 chipsets. They're definitely spending a lot more for the amount of the market ARL covers. Right now cheap RPL is bailing them out in desktop market share. Think they really need to consolidate their platform/chiplet strategy, especially with the layoffs/budget cuts.
jmlinden7@reddit
I agree that arrow lake in general was a bad product. That being said, your logic is backwards. Companies don't design gaming CPUs and then downclock them for office PC use, they design office PC CPUs and overclock a couple of them for gaming.
Exist50@reddit
But that's not what Intel or AMD are doing. AMD's office PC offering is their APUs. The Ridge chips are very different. And then there's X3D, which they only really use on desktop for gaming. And obviously ARL doesn't really make sense as an office PC.
Alive_Worth_2032@reddit
Wrong, X3D was designed for specific server workloads. It wasn't even meant to exist on desktop, until AMD saw how damn good it was for gaming specifically. No other client workload really sees the same benefit across the board.
I can't remember where it was mentioned by some guy inside AMD, but it was a bit like how TR originally happened. The engineers and testers inside AMD were the ones pushing to make it a client product. AMD leadership had no plans of ever doing so originally.
jmlinden7@reddit
AMD's gaming CPUs are derived from their server CPUs, with the exception of the X3D series, which is gaming-specific but only accounts for a tiny percent of the total CPU market. It's the same general principle: gaming is just too small of a market to design a CPU primarily for gaming - at most, you could justify a small modification of an existing office or server CPU (like Intel did with the F series that lacked integrated graphics).
Exist50@reddit
I think it would be more accurate to say they leverage the data center silicon. If they were just overclocking office PC chips, they'd only have their Point SoCs. A new die, new package, de facto new platform - that's all well beyond just fusing off a GPU.
And on the Intel side, what does ARL exist for then? Again, it sucks even more as an office PC chip.
jmlinden7@reddit
ARL exists as a proof of concept that they can outsource desktop CPUs to TSMC
HorrorCranberry1165@reddit
I don't think that ARL is 2x more costly. The total amount of silicon is the same as for Raptor. It only contains an additional base die, which is very cheap, maybe a few dollars of additional cost, not counting 'fillers'. How did you calculate 2x more cost? The compute tile on N3B is not so costly as to make the whole CPU 2x more expensive.
Exist50@reddit
Actually not sure that's the case when you add it all up, but the comparison holds even if you assume the raw amount of silicon is equivalent.
Even if it's <$10, that's still a huge difference for the BoM. But I think the total with packaging costs exceeds that. Foveros isn't as cheap as it should be.
You're comparing effectively N7 vs first dibs N3 silicon (technically worse for N3B). At market rate, that's an enormous cost difference.
And then you add up the small things. Package layers, reticle inefficiencies, ICCmax increases, etc. ARL simply does not justify its cost profile.
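To make the shape of that argument concrete, here is a toy per-die cost model; every number in it is a placeholder chosen purely for illustration, not an actual Intel or TSMC wafer price or yield:

```python
# Toy model: cost per good die = wafer price / (gross dies per wafer * yield).
# All inputs are hypothetical placeholders; they only illustrate how a pricier leading-edge
# node compounds with lower early yield. Packaging and platform costs are not included.
def cost_per_good_die(wafer_price_usd: float, dies_per_wafer: int, yield_fraction: float) -> float:
    return wafer_price_usd / (dies_per_wafer * yield_fraction)

mature_node  = cost_per_good_die(wafer_price_usd=9_000,  dies_per_wafer=600, yield_fraction=0.90)
leading_node = cost_per_good_die(wafer_price_usd=18_000, dies_per_wafer=600, yield_fraction=0.80)
print(f"~{leading_node / mature_node:.1f}x per good die with these placeholder inputs")  # ~2.2x
```

Whatever the real silicon delta is, the Foveros packaging, package layers, and the other items listed above would then sit on top of it.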
ConsistencyWelder@reddit
With Intel it's always "the next big thing". Especially in this sub, Intel's comeback is always right around the corner, and when it fizzles out "we just have to wait for the next one, THAT is the ACTUAL big one".
Vb_33@reddit
To be fair Alder Lake was pretty great after years of Skylake clones.
ConsistencyWelder@reddit
Lunar Lake was only impressive because they cut half of the cores off to get better battery life. In reality the multicore performance was pitiful compared to AMD's offerings in the same segment:
https://www.cpubenchmark.net/compare/6393vs6143/Intel-Ultra-7-268V-vs-AMD-Ryzen-AI-9-HX-370
I'm sure AMD would have similar battery life and performance if they cut their CPUs in half and only offered them with non-upgradeable LPDDR5X RAM. So I'm not as impressed by Lunar Lake as r/hardware tends to be. A quad core with some "Celeron" style cores attached is not impressive in a high-end CPU.
Alder Lake was good though. Competitive. Although it was fundamentally flawed in the higher-end versions, the 13th and 14th gen copies of Alder Lake.
Raikaru@reddit
That legit already exists? The Z2 is quite literally that
ConsistencyWelder@reddit
We haven't seen the Z2 Extreme (which I assume you meant) in laptops yet, only a few "previews" in handheld gaming devices.
It would surprise me if it didn't offer similar battery life to Lunar Lake, but with better performance. But we'll see when the first reviews are out. It's a bit early to judge its performance when we don't have reviews.
Raikaru@reddit
No I said the Z2 for a reason.
ConsistencyWelder@reddit
Ah, I assumed you didn't want Zen 2, Zen 3 and Zen 4 compared to LL.
The HX 370 is based on Zen 5. The Z2 isn't.
Vb_33@reddit
I thought the Z2E had 3 Zen 5 and 5 Zen5C cores.
Geddagod@reddit
It's so ironic that you are claiming it's too early to judge when we don't have reviews for the Z2, when you paraded around a LNL ES Dell laptop, before LNL even launched, as proof of some performance claims, don't even remember what exactly tbh.
ConsistencyWelder@reddit
Huh? Did you mean to direct that at me?
Geddagod@reddit
Yes. After some digging, found what I was talking about.
ConsistencyWelder@reddit
As I said back then, if you have a better source of both LL and SP at 15 watts, go ahead and share it with us.
Both can be configured for 15 watts, so it's prudent to do so. And the best (and until you share your source, only) evidence we have says that SP is a good bit more efficient at 15 watts than LL. Which was the point I was making.
But besides, what does this have to do with the Z2 Extreme? Do you have evidence that the Z2 Extreme isn't as efficient (or more) as LL?
Remember, comparing an APU in a handheld to one in a laptop with much better cooling and a higher power limit is not exactly good practice.
Geddagod@reddit
And as numerous people have pointed out, drawing conclusions from a review where the laptop in question being reviewed, explicitly tells you not to benchmark on it, is ridiculous.
I never said it had anything to do with the Z2 perf comparison. I just said I found it hilarious how you talk about how it's too early to judge performance when we don't have reviews, when you had no problem judging LNL before it had reviews either.
ConsistencyWelder@reddit
As already said, numerous times: we have no reason to believe the results aren't accurate, and it's the best we have available. You still haven't shown us anything pointing to it being inaccurate. Even if it was an ES, it was run at the same parameters as LL is sold with. The only change they made was to make both CPUs run at a max of 15 watts, to make them comparable.
Please understand that we're not talking about the Z2. The Z2 is not a Zen 5 CPU. The Z2 Extreme... is.
Also, please understand that running an APU in a handheld and comparing it to a different CPU in a laptop, running with presumably better cooling and a higher power limit, is NOT what you want to do. They're not comparable. Unlike running two CPUs in a similar chassis, both at 15 watts, otherwise with the parameters the CPUs are sold with.
SkillYourself@reddit
And Krackan before that. Both lose to Lunar Lake.
This guy is just a weird troll constantly posting Passmark multicore of a 4+4 28W vs 4+8 60W CPU.
Geddagod@reddit
u/ConsistencyWelder 's hate boner for Intel, but especially LNL, is weird af lol.
ConsistencyWelder@reddit
Always makes you want to hear people's opinions when they resort to ad hominem instead of being factual.
You are not Intel.
Geddagod@reddit
I literally was "factual" right here, in a response to your comment, what are you talking about?
You are not AMD. Honestly though, I don't even think you stan for AMD or anything. You just have a weird hate boner for Intel.
ConsistencyWelder@reddit
Is that really that weird though?
Intel has been almost as anti-consumerist as Nvidia, and keeps acting like they dominate the markets they operate in, which is partially true I guess. They've been phoning it in when it comes to product development for a decade, and when they REALLY messed up with the high-end 13th and 14th gens, they denied the issue existed for a year, kept lying and gaslighting people and tried to blame motherboard manufacturers and people doing overclocking for a while. Until someone had enough and leaked their customers' failure rates with Intel's top CPUs. And proved Intel had been lying about knowing about this issue for a while. There's no way they didn't know, when it was already being talked about in the industry, but no one dared say it out loud because they feared Intel getting revenge and cutting them off.
I don't hate Intel. I hate how they've been operating for about a decade now though.
Geddagod@reddit
Yes. Personifying companies is always weird. Especially when it prevents you from being objective about product comparisons.
Any large company is going to be anti-consumerist when they have a large lead.
How much of this is "phoning it in" vs them just failing is extremely debatable.
This only applies for the oxidation issues, which again, only affected a small percentage of those processors.
Intel root caused it to a physical design flaw in their cores, which they obviously wouldn't have known about, exacerbated by, you guessed it, motherboard/voltage issues.
The whole "Intel coverup" shtick is less of a coverup and more of just incompetence. Much like your whole "phoning in product development" claim as well.
No, you hate Intel lol. Hence why you are unable to be objective about their products. Either that, or you are just unable to admit you were wrong about your Lunar Lake take, which you made up your mind about before the product even launched.
ConsistencyWelder@reddit
I'm not the one being awkwardly and inappropriately personal though. That was you.
You sound like the people making excuses for russia. "Any country of that size would want to invade and dominate their neighbors".
Not really, they stopped trying when they had a massive lead in the market. And now they keep failing at holding on to their market dominance.
Not really. They knew about the high failure rates way before someone leaked it, AFTER the oxidation issues were identified and dealt with. And they were for a while trying to use the "too much overclocking" explanation.
The root cause is that they pushed their CPUs too hard to try to stay competitive and retain their market dominance, when the competition had a better product that was much more efficient. They went balls to the wall with power to eke out the last bit of performance from their now-aging Alder Lake++ design.
There we go, making excuses for a company you probably don't even work for any more.
Nope. I hate the way they abuse their market dominance, are anti-consumerist and try to hinder healthy competition to cling on to a position in the market they don't deserve any more, and haven't deserved for a decade.
If I hated Intel I wouldn't just have bought a new Mini PC/NAS the other day with an Intel CPU. The CPU is absolute garbage, doing even the most basic Windows tasks pegs the CPU at 100% on all cores, but it was cheap. But that's about what Intel is to me these days, garbage, but cheap. If I hated them, I wouldn't buy their products. I just don't respect them. They need to earn that and they stopped doing that 10 years ago.
Geddagod@reddit
I'm personifying a person (you). You are personifying a company.
Wtf are you on about lmao T-T
Except for the fact that they were stuck on 14nm for ages because they were unable to move to 10nm, and their architectures were much more tied to nodes than other companies were due to how Intel designed their cores, which only really changed with LNC.
Because they thought they dealt with the oxidation issue, and thought the rest of the problems were mobo issues.
Because that's what they thought for the longest time, unless you think Intel was just able to immediately identify what specific circuit in the core was getting exacerbated by the voltage issues.
And they still maintain that the problem gets exacerbated by microcode and motherboard issues btw. Which they had to release several updates to attempt to fix. So clearly this was a longer process than what you think it was.
Mobo manufacturers regularly push past recommended settings of both AMD and Intel. This isn't new.
Even more damning for this position is the fact that Intel refreshed their mobile processors to hit the same Fmax as their desktop ones were hitting - 5.8GHz. If the problem was really that they were pushing too hard and that's why the CPUs were degrading, Intel obviously wouldn't bother releasing mobile CPUs that are able to clock just as high....
The "excuse" of them being incompetent?
If you didn't hate Intel, you wouldn't be calling their CPUs "garbage" lmao.
I'm sure Intel earning back your respect is on the top of their priority list. Unfortunately for you ig, Intel did earn a lot of respect (and market share) back in mobile with Lunar Lake, a processor you weirdly hate.
ConsistencyWelder@reddit
Where do you see a 60 watt CPU? The HX370 can be configured to 15 watts or up to 54 watts.
Are you being unreasonable in your defensiveness of Intel?
Scion95@reddit
I mean, Lunar Lake does make me wonder why AMD hasn't actually gone with on-package memory.
Like, even with Strix Halo, the memory chips are all on the board, not on the CPU package itself. Hence why the different OEMs are able to provide more different SKUs with it, from 32GB to 128GB, with 64 and 96GB options as well.
And, for the consoles and the steam deck. I remember that the PS5 cooler has something where it has dedicated cooling for the memory dies on the board, and. I mean, given how the Steam Deck APU's biggest claim to fame is how low in terms of power it can go, I also think having the memory directly on the package would help even further. But every image of the Van Gogh/Aerith/Z2 A chip shows that the memory is separate from the actual SoC/CPU/APU package itself.
I saw a review for a Lunar Lake ThinkPad 2 in 1 where the 258V can idle down to 1.5Watts, for the whole system.
That's with an IPS touch screen, bigger and higher resolution than any of the steam deck models.
The Van Gogh APU already idles and can go lower in power draw than basically any of the other CPUs out there, not counting Lunar Lake itself, at least with x86-64, but. Lunar Lake's improvements with the on-package memory and I think also their PMIC deal with Renesas makes me wonder how AMD could do, or could have done with Van Gogh, and if they could have gone even lower.
soggybiscuit93@reddit
We've had this debate before. Maxing out nT performance and dividing by power consumed is only one type of "efficiency", and not a super relevant one at that.
For the segment of device this is, consumers measure efficiency as power draw at ISO-task. Zen 5, ARL, LNL are all gonna feel equally snappy in web apps, office suite, etc. But LNL will do those tasks while consuming less power.
What people like about LNL is how well it can idle - and idle in this case doesn't mean sitting at the desktop. Scrolling through a reddit thread and reading it is essentially idle. Typing an email is essentially idle. LNL can do these tasks, and other common tasks like Teams, Excel etc., and use less power. More nT doesn't give you a better Teams experience. It doesn't make Word type faster.
The laptop I use to do my job has a CPU that's weaker than just LNL's E cores. If it was as simple as cutting off some cores, AMD would've done so and tapped into this heavy demand for thin laptops that run with a room temp chassis and no fan noise.
ConsistencyWelder@reddit
Great, LL wins in scenarios where you aren't using it.
soggybiscuit93@reddit
I gave very accurate, specific scenarios that millions of people use.
The office suite is one of the main reasons people buy thin and lights
ConsistencyWelder@reddit
As soon as you actually make it do something, the performance tanks.
LL is not efficient. It's frugal. Efficiency is "performance per watt", but LL sacrifices performance for longer battery life.
I'd rather have something that is frugal when it needs to be, but has the performance when I need it. Lunar Lake doesn't have the performance.
soggybiscuit93@reddit
Like you didn't even read what I said.
Efficiency at ISO-task. LNL and a 9950X are equally fast for every single application I use for my job.
ConsistencyWelder@reddit
Why does that keep happening to you?
ResponsibleJudge3172@reddit
No. You are assuming AMD will win based on nothing when Arrowlake H and Arrowlake HX exist. That's the real problem with r/hardware
Geddagod@reddit
ARL-H has better, or at least as good as, battery life versus Strix Point too
Should've would've could've
Those celeron style cores have similar IPC to Zen 4 lol
Regardless of how impressive you may find it personally, it's pretty clear the rest of the market found it pretty interesting.
shugthedug3@reddit
We must be reading a different sub
Exist50@reddit
Just wait till Intel shows some slides. Some folk have the memory of a goldfish for how these things turn out. You can still find people claiming 18A is competitive with N2.
SherbertExisting3509@reddit
I think Pat Gelsinger ordered the foundry team to take a lot of risk with 20/18A
In fairness to Intel, they did need to take this kind of gamble if they wanted to quickly catch up to TSMC
Intel needed to implement 2 new technologies with 20/18A:
PowerVia (backside power delivery)
RibbonFET (gate-all-around transistors)
plus dramatically improved logic and SRAM density compared to Intel 3
It's a lot of risk and engineering work for 1 node and many things needed to go right for Pat's plan to work
Conclusion:
It's no surprise that Intel ran into a few hiccups and delays with 20/18A
Combined with the rumored low-quality PDK, that explains why external interest in base 18A has dried up.
It's kind of like the previous 10nm disaster but I would argue that Intel needed to take this kind of gamble if they wanted to quickly regain node leadership like Pat wanted
I think a better strategy for the future is consistent good execution over 5-10 years not 1-2 huge leaps to regain the performance crown.
ResponsibleJudge3172@reddit
You must only be looking at top SKUs and only looking at efficiency, because Alderlake was a win in 90% of scenarios.
The 12600K was edging out the 5800X in all scenarios for a similar price to the 5600X.
The 12900KS was horribly inefficient. But it even edged out the 5800X3D.
hanotak@reddit
Wild that that was exactly what AMD was 10 years ago
Firefox72@reddit
To be fair Ryzen was the next big thing.
And it never fizzled out. Sure it took a few generations to get there but each was a positive step forward from the last.
input_r@reddit
But you could say the same thing about Nova Lake in 2030. "To be fair, Nova Lake did end up being the next big thing"
Before that it was Bulldozer is the next thing, Piledriver is the next big thing, Ryzen is the next big thing, OK really Ryzen 2000 is the next big thing.
Finally with 3000 it became competitive and with 5000 it hit its stride. But there were years of "just wait" with AMD CPUs, and it's still going on with the Radeon side.
DanielKramer_@reddit
Ryzen was clearly a big deal at the time, it wasn't hindsight. FX cpus were so extremely bad. Ryzen was the first AMD CPU release in my life that showed any promise at all
It got my stupid middle school ass to convince my dad to buy some AMD at ~$14
Larcya@reddit
Nah it wasn't until zen 3.
Ryzen was pretty underwhelming when it first released.
Exist50@reddit
AMD competing with Haswell/Broadwell was a very big deal at the time, even if not a clean sweep on day 1.
BouldersRoll@reddit
Just like AMD GPUs.
DavidsSymphony@reddit
Same here, I got a 9800X3D because it's the best gaming CPU on the market, but if Intel take back the lead I'm going back to Intel. Also, the 9800X3D is the first time AMD has taken the gaming crown in a very long time. The 7800X3D wasn't beating the 14900k.
unapologetic-tur@reddit
On what earth was a 7800X3D not beating a 14900k? It was head to head sure, but one chip was burning thrice the power of the other to even compete. Not being able to admit a loss itself is tribalism. Intel hasn't had the gaming crown since the X3D chips have been a thing.
Slabbed1738@reddit
7800x3d was faster in gaming and used way less power?
https://www.techpowerup.com/review/intel-core-i9-14900k/18.html
pituitarythrowaway69@reddit
It's more useful to look at meta reviews since there is always some variance among the test results of different reviewers. Not that it changes the outcome: the meta review shows the 7800X3D is 1.7% faster than the 14900K in gaming. Power draw while gaming is 54W vs 153W.
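Taking those meta-review figures at face value, the efficiency gap is easy to quantify; a quick sketch using only the numbers quoted above (gaming power draw varies per title and test setup):

```python
# Perf-per-watt while gaming, using the figures quoted above.
perf_7800x3d, power_7800x3d = 1.017, 54.0    # ~1.7% faster, ~54 W while gaming
perf_14900k,  power_14900k  = 1.000, 153.0   # baseline, ~153 W while gaming

advantage = (perf_7800x3d / power_7800x3d) / (perf_14900k / power_14900k)
print(f"7800X3D gaming perf/W advantage: ~{advantage:.1f}x")  # ~2.9x
```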
CatsAndCapybaras@reddit
The 7800X3D traded blows with the 14900K. Who won depended on the game.
bizude@reddit
I think your memory is short-lived. Ryzen has only held the gaming crown for this and the last generation.
anskyws@reddit
Can you say 5.9 BILLION???
__________________99@reddit
I don't have high hopes Intel will be back on par with Ryzen with Nova Lake. I'm betting it'll take another 2 or 3 generations before Intel has caught back up.
soggybiscuit93@reddit
ARL is decently competitive with vanilla Zen 5. The problem remains that vanilla Zen 5 isn't the only Zen 5 product for sale, and that X3D exists.
NVL vs Zen 6 is going to be mainly a generation focused on big increases in nT performance from both vendors. NVL's big change is in the overall SoC, which should fix the latency issues plaguing MTL/ARL.
NVL will also allegedly have a large bLLC version to compete with X3D.
Overall I'd say the next generation is looking very exciting from both vendors and I think it'll end up fairly close either way. I'm personally holding off on upgrading my Zen2 rig until that generation to see what to get.
SilentHuntah@reddit
People do seem to forget that both Zen 5 and ARL suffered from the same bandwidth bottlenecking issues, heavily suspected to be the result of using older I/O dies. Seems like both Intel and AMD are slotting in new ones with the upcoming architecture upgrades.
buildzoid@reddit
they both have a latency problem. The bandwidth problem is only an AMD thing because AMD made the infinity fabric really narrow.
SilentHuntah@reddit
I'm pretty sure it's the other way around. AMD going with chiplets was what improved yields but added to latency. Intel's approach with monolithic dies + Foveros largely prevented this. The memory bandwidth issues have been noticed with both Arrow Lake and Zen 5. V-cache largely papers over these issues for gamers.
Exist50@reddit
What memory bandwidth issue with ARL?
Nicholas-Steel@reddit
You do know what Infinity Fabric is used for right? It connects the chiplets and IO together and Buildzoid is saying it is intentionally underpowered (lacking appropriate throughput for what everything is doing) for cost reasons.
SilentHuntah@reddit
Yes, and the hope/plan with Zen 6 is that the new interconnect resolves much of the latency issues.
soggybiscuit93@reddit
Yeah, people will be disappointed when they realize ST improvements in NVL/Zen6 will be just fine.
But the overall performance should be a big increase just from really upgrading the SoC design. Sometimes focusing on traction can yield better results than more HP
Exist50@reddit
I mean, 10-15% freq + 10% IPC would make for a very solid generational improvement. Plus any SoC improvements.
soggybiscuit93@reddit
That's true. I'd rather keep my expectations low and be pleasantly surprised this time around.
Exist50@reddit
Another problem is that Zen 5 is far cheaper to produce than ARL.
matyias13@reddit
Genuine question, but how so?
Exist50@reddit
Less silicon, cheaper silicon (N4 vs N3B), no advanced packaging, etc.
cp5184@reddit
And zen 5 is much more energy efficient?
Exist50@reddit
For a desktop, the two are more or less tied. Some differences, especially at idle for Intel or gaming with X3D for AMD, but not big enough (in Intel's favor) that people will pay the premium for ARL at equivalent margins.
ResponsibleJudge3172@reddit
That difference doesn't materialize in the MSRPs or the general market, so....
Exist50@reddit
At minimum, it materializes in Intel's financials, which is why the CFO is talking about it. Also likely to partially explain the low OEM adoption. Intel simply has more room to discount RPL. Plus ARL also has higher platform costs.
maybeyouwant@reddit
Maybe, but based on consumer prices I don't see that.
Exist50@reddit
Well that's exactly the problem for Intel. It's more expensive to produce, but doesn't have anything that lets them charge more. Thus, their margins are tanked.
LividLife5541@reddit
I see no indication that Intel would ever catch up. Do you also predict that Bing will catch up to Google? (Well, it might but that's more because Google has utterly turned itself over to AI but you get my point.)
Why would Intel magically catch up? They have decades of cutting their most talented staff and hiring cheaper staff. They also don't have the process advantage anymore. AMD was living on a shoestring for a while and had some misfires but they were never incompetent, they were after all the ones who invented the x86-64 architecture while Intel was trying to make VLIW a thing, again.
Geddagod@reddit
Unified Core is the hopium that Intel can catch up. You don't have to have the process advantage either, since Intel can just go to TSMC for their more premium parts, while taking advantage of margin stacking for their lower end parts/tiles (since they already sunk a bunch of money into 18A R&D and buildout anyway).
I also think Intel has the volume to build more expensive to manufacture parts, but still get as good margins due to the economics of scale.
yeshitsbond@reddit
Never mind the offering, stop making people rebuy motherboards all the time. I probably would have bought the 265K if it wasn't for this idiotic strategy.
heickelrrx@reddit
It's rushed too; if the engineers had more time to fix that latency issue, it might smoke the competition.
The single-core and multi-core synthetic benchmarks show the CPU cores run really fast while consuming reasonable power. It performs great in productivity, while in gaming the latency becomes the main bottleneck.
Hytht@reddit
> It's rushed too; if the engineers had more time to fix that latency issue, it might smoke the competition
It's been there since Meteor Lake in 2023. How much more time are you going to give? And it's due to the tile-based design and removing the memory controller from the main CPU die.
Exist50@reddit
NVL has basically the same tile arrangement.
Reactor-Licker@reddit
If that’s the case, then what is the fix for Nova Lake? Just better routing and design of the transistors and data pathways?
Exist50@reddit
The "data pathway" on MTL is basically Frankenstein's monster. A bunch of different fabrics hastily stitched together, with the underlying implementation largely derived from a tool never really built for low latency. LNL onwards simplifies and optimizes it. There's still going to be some die-die penalty, and probably a bit of SoC overhead vs the RPL and prior implementation, but I'd imagine NVL should be able to recover a significant majority of the damage done without changing the fundamental chiplet arrangement.
SilentHuntah@reddit
Seems to contrast with Lunar Lake (notebook architecture) where it's integrated onto the CPU die.
heickelrrx@reddit
AMD's memory controller is separate from the core die, so it should be doable.
AccomplishedRip4871@reddit
No, the latency issues are fundamentally linked to architecture choices. The CPU cores and the memory controller are now on separate tiles, which greatly increases latency; what you describe as "more time to fix" would in reality require changing the architecture completely, which can't be done in the short to mid term.
It takes multiple years from blueprints to products on shelves - the Arrow Lake latency increase was a choice by Intel, not a rushed decision.
If recent rumors about increased cache on next gen Intel are correct, Intel finally might offer something valuable to gamers.
HorrorCranberry1165@reddit
How do you know that a separate memory controller greatly increases latency? Do you know that all their future desktop chips and Xeons will use a separate memory controller? AMD has a separate memory controller without greatly increased latency, so that is not the reason.
AccomplishedRip4871@reddit
It's how physics works.
Yes, that's why I said it's intended by design.
AMD's latency was always high thanks to that specific design choice; their X3D chips excel because the additional L3 cache removes the biggest bottleneck.
It is the reason; compare monolithic design latency to tiles, the difference is obvious.
Exist50@reddit
The tiles don't help, but it's the SoC architecture that really sinks MTL/ARL. Notice how the LP E-cores don't look any better despite being on the same die as the memory controller.
HorrorCranberry1165@reddit
As MLID said, NVL has some 10-15% more ST perf; that won't save it in gaming.
Exist50@reddit
I don't think 30-50% in gaming would be unreasonable. 10-15% frequency, 10-15% IPC, 10-15% mem subsystem.
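For rough context on how that total is reached, the three ranges compound multiplicatively rather than add; a quick sketch of the arithmetic with the quoted ranges (illustrative only, not a performance claim):

```python
# Compounding the quoted 10-15% per-area gains multiplicatively.
low  = 1.10 * 1.10 * 1.10   # 10% frequency * 10% IPC * 10% memory subsystem
high = 1.15 * 1.15 * 1.15   # 15% on each axis
print(f"compounded: {low:.2f}x to {high:.2f}x")  # ~1.33x to ~1.52x
```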
HorrorCranberry1165@reddit
I mean 10-15% more perf, not IPC alone. So increased frequency (if any) and faster memory are included in that speed-up.
Exist50@reddit
Then unsurprisingly, MLID is smoking crack. Anything short of 10-15% IPC from Panther Cove would be a disappointment given it's 2 years after LNC. Then 10-15% frequency is around what you'd expect from N2 vs N3B. And that's without touching memory.
ResponsibleJudge3172@reddit
That's the non-bLLC SKU.
jonermon@reddit
I actually like Arrow Lake and recommend it for people who want good workstation performance on a budget. The 265K is stellar at that use case.
TheAppropriateBoop@reddit
At least they’re admitting it instead of spinning it
Cute_Bar_2559@reddit
I mean, the blame goes to the CEO that they recently fired. They said the same shit for Arrow Lake and set it up as the next big thing, only to be generations behind AMD, let alone Apple silicon. Yeah, Lunar Lake was good, but it didn't have the raw power that the H chips are supposed to have. Hopefully they can back it up with the upcoming Panther Lake chips, but we don't have a set deadline for that either.
Pitiful_Hedgehog6343@reddit
Decent chips, they just need a big cache to compete with X3D. Non-X3D chips are essentially the same as Arrow Lake.
ListenBeforeSpeaking@reddit
This CEO doesn’t appear to be very media savvy.
Geddagod@reddit
It was the CFO making these statements, not the CEO.
Zinsner seems to be way more comfortable telling it like it is compared to the eternally optimistic Pat. Perhaps enabled by the whole "humbleness/humility" approach encouraged by LBT himself.
ListenBeforeSpeaking@reddit
Noted.
Telling people that the current line of products missed expectations can’t help what are already tough sales.
tux-lpi@reddit
It wouldn't have fooled anyone anyway. If anything, I'd rather have less corporate speak and bullshit when everyone already knows the product didn't sell.
Exist50@reddit
I think it's a good thing. Just calling a spade a spade, really. Better than saying "We don't know why people aren't buying our products, and next gen will be the same".