Tom's Hardware: "Intel says defect density at 18A is 'healthy,' potential clients are lining up"
Posted by Dakhil@reddit | hardware | 199 comments
GhostsinGlass@reddit
Okay, is that "Intel healthy" or "healthy healthy"?
XenonJFt@reddit
The latter. Intel Promise!
dj_antares@reddit
Nope. If the number is true, it is indeed healthy.
For comparison, TSMC's N7 was about 0.4 at the -Q3 mark (roughly three quarters before high-volume production), and N5 was about 0.3 at -Q3.
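For rough intuition on what a D0 number implies, here's a minimal sketch using the simple Poisson yield model (the 1 cm² die size and the "every defect is a killer" assumption are illustrative, not from the article):

```python
import math

def poisson_yield(defect_density: float, die_area_cm2: float) -> float:
    """Fraction of dies with zero killer defects under the Poisson model."""
    return math.exp(-defect_density * die_area_cm2)

# Illustrative 1 cm^2 (100 mm^2) die at the D0 figures quoted above.
for label, d0 in [("N7-like", 0.4), ("N5-like", 0.3), ("0.5 threshold", 0.5)]:
    print(f"{label}: D0={d0}/cm^2 -> ~{poisson_yield(d0, 1.0):.0%} defect-free")
```

The same D0 hurts larger dies far more, which is part of why small mobile dies tend to be the early-ramp vehicles discussed elsewhere in this thread.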
Exist50@reddit
Assuming they're measuring the same thing...
And if 20A/18A were that healthy, they wouldn't be canceling ARL-20A.
Famous_Wolverine3203@reddit
Yes, but wasn't ARL-20A slated to come before 18A?
Exist50@reddit
Yes, though the gap would have been relatively small. Probably around half a year.
sylfy@reddit
Would it be right to say that 20A to 18A is like TSMC’s N3B to N3E?
Exist50@reddit
Not really. N3E was a fairly significant change, and broke design compatibility with N3B. 18A is more like N3E to N3P.
B0b_Red@reddit
I am not sure this is happening.
Exist50@reddit
What is? 20A being unhealthy?
Legal-Insurance-8291@reddit
Kinda hard to believe anything they say at this point...
bob-@reddit
what have they lied about?
Legal-Insurance-8291@reddit
Go back to Pat's original "5 nodes in 4 years" announcement and then look at where we actually are. Everything has been delayed and now 20A is completely canceled.
OldGarlic5244@reddit
Intel 7 and Intel 4 already came to fruition. Stop with the lying. 20A was always a stepping-stone node.
nanonan@reddit
This idea that 20A was meant to be scrapped before finding a single customer is ridiculous. Where are you getting this from?
ProfessionalPrincipa@reddit
Intel 4 is used for a mobile-sized die expected to be shipped in relatively small quantities (something like 10% of the laptop market) and its desktop version was cancelled. Intel 3 "launched" almost 3 months ago with no sign of general availability yet? Now 20A is scrapped!
bob-@reddit
so having delays and not meeting the deadline for your own roadmap is tantamount to lying?
xpander5@reddit
"Lying" is the word you used. He wrote it's "hard to believe anything they say", that could mean them lying or them not delivering on their claims.
bob-@reddit
He just replied to me and actually used the word as well, so I don't know what you're on about.
Legal-Insurance-8291@reddit
What makes it lying is the fact Pat keeps repeating these things despite knowing they're false.
nanonan@reddit
Blaming board vendors for something that only microcode could fix, for one.
red286@reddit
Clearly the former.
No sane person talks about a "healthy rate of defects".
After all, they go on to state that it's four times higher than TSMC's.
swashinator@reddit
there's always a certain amount of defects in chip making from my understanding, so I think a sane person would talk about that.
Exist50@reddit
Many years ago, Intel had a perfect wafer. It was framed and given to the CEO.
phil151515@reddit
That wafer probably had many defects. The vast majority of passing chips are repaired, using memory repair for example.
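A toy sketch of what memory repair means here, assuming a simple spare-row scheme (the row numbers and data structures are invented for illustration; real parts blow fuses at test time):

```python
# Toy sketch of spare-row memory repair (all numbers hypothetical).
# At wafer test, rows that fail are remapped to redundant spare rows,
# so a die with a few array defects still ships as a "good" chip.
SPARE_ROWS = [1024, 1025, 1026, 1027]  # redundant rows built into the array
fuse_map = {}                          # failing row -> spare row (fuse table)

def repair(failing_rows):
    for row in failing_rows:
        if not SPARE_ROWS:
            return False               # out of spares: die really is scrap
        fuse_map[row] = SPARE_ROWS.pop()
    return True

print(repair([17, 389]), fuse_map)     # True {17: 1027, 389: 1026}
```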
Exist50@reddit
In this case, there were claimed to be 0. That's what made it so special.
SemanticTriangle@reddit
Do you have a link to a publication with information on TSMC's N2 process defectivity?
Exist50@reddit
If we're talking 18A, then you should compare to N3.
Fast_Wafer4095@reddit
Defect density is below the 0.5 threshold, but Broadcom is not yet pleased.
ProfessionalPrincipa@reddit
There's more to yield than absolute defect density. There's parametric yield too.
If Intel only has one of the two in good shape, that would still put it ahead of 10nm, which was horrific in both.
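To illustrate the distinction with hypothetical numbers: functional (defect-limited) yield and parametric yield multiply, so a decent D0 can't rescue a node whose chips miss frequency/power targets. A minimal sketch, all figures assumed:

```python
import math

def total_yield(d0: float, area_cm2: float, parametric_yield: float) -> float:
    """Good dies = dies without killer defects * dies that also meet spec.

    d0: killer defect density (per cm^2); parametric_yield: fraction of
    defect-free dies that still hit frequency/power targets (hypothetical).
    """
    functional = math.exp(-d0 * area_cm2)  # simple Poisson model, as above
    return functional * parametric_yield

# Same D0, very different outcomes depending on parametrics (invented numbers):
print(f"{total_yield(0.4, 1.0, 0.95):.0%}")  # healthy parametrics
print(f"{total_yield(0.4, 1.0, 0.50):.0%}")  # weak parametrics (10nm-like)
```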
Educational-Plan-113@reddit
What was 10nm? And why the heck did they have so many problems with it anyway?
Yawning_Creep@reddit
10nm (now Intel 7) is what all the 12th-14th gen (desktop) processors are built on. The major issues were aggressive transistor density scaling and the problems of pitch division and multi-pass litho patterning. Took a while to fix those problems but, obviously, the technology is healthy now.
OldGarlic5244@reddit
Nope. They used Intel 4 as well. Stop lying.
ProfessionalPrincipa@reddit
Well... Meteor Lake-S was cancelled.
Yawning_Creep@reddit
Hi. If you had bothered to read what I wrote, you would have seen that I was specifically talking about desktop processors. But now you mention it: Meteor Lake (which could be classed as 14th-gen laptop) was never officially branded as 14th gen; it got the Core Ultra branding. So you are wrong, wrong and wrong again. Such confidence while being wrong is hilarious.
ProfessionalPrincipa@reddit
I don't think they ever released official numbers (for obvious reasons) but the general gist of it is they tried to do/change too much all at once (density, use of materials like cobalt, quad patterning) leading to loads of defects. They also widely missed performance targets so even chips that were technically functional were awful. They reportedly sucked down more power than the 14nm chips they were supposed to be replacing. (Go to Ark and compare say a Kaby Lake-Y chip with an Ice Lake-U)
Educational-Plan-113@reddit
So how did they eventually sort it out?
ProfessionalPrincipa@reddit
I wouldn't know outside of random rumors so your guess is as good as mine. Relaxing the specs for things like density was one of the things I've heard which is why I always got a kick out of people who held Intel 10nm's density on paper as a big score against TSMC N7's density specs.
captainant@reddit
Remember the end of that movie Old Yeller?
Educational-Plan-113@reddit
You have a point there, last Gen didn't exactly go out on a high note 😂
pixel_of_moral_decay@reddit
Knowing Broadcom, I wouldn't be shocked if they're using the press as leverage for a discount.
It would be odd if Broadcom didn’t try and take advantage.
haloimplant@reddit
There are a lot of numbers and knobs to turn with these things.
Often what happens is the performance goes down while they tweak the process for yield. Sometimes it's small, 1-2%; sometimes it really sucks, 5-10% or more.
Exist50@reddit
Because they know more than Intel marketing is willing to say. And deceptive marketing is one thing. Deceiving customers is another.
Rocketman7@reddit
Could it just be a case of Broadcom being spoiled by TSMC's past processes? 0.5 does not sound bad at all.
Exist50@reddit
No, that doesn't make sense. Broadcom wouldn't care if it didn't matter to their product, and they would clearly have been given a defect density roadmap in advance.
III-V@reddit
There's nothing that says 0.5 is actually the threshold for what's considered good. The author incorrectly interpreted the article they referenced - the 0.5 number was just used as a basic example, with no comment on how good or bad it is.
dabias@reddit
It is mentioned near the end of the article:
Exist50@reddit
Average defect density. Intel may be reporting best wafer.
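A hypothetical illustration of why that distinction matters (the lot size and wafer-to-wafer variation below are invented numbers):

```python
import random

random.seed(0)
# Invented lot: 25 wafers with wafer-to-wafer D0 variation around 0.6/cm^2.
wafer_d0 = [max(0.0, random.gauss(0.6, 0.15)) for _ in range(25)]

mean_d0 = sum(wafer_d0) / len(wafer_d0)
best_d0 = min(wafer_d0)
print(f"lot-average D0: {mean_d0:.2f}, best-wafer D0: {best_d0:.2f}")
# Quoting only the best wafer can look substantially healthier than the
# lot average that actually determines cost per good die.
```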
imaginary_num6er@reddit
It’s the same “healthy” dividend Pat mentioned in a March 2023 earnings call before he axed 33% of it a month later.
logosuwu@reddit
Good. They should be aggressively cutting dividends and diverting that money into RnD and their fabs.
imaginary_num6er@reddit
Investors called Pat a liar though
Helpdesk_Guy@reddit
Which he is, by the very definition of it. A liar is a person who tells lies.
Exist50@reddit
RnD is being cut by billions. Fabs are the only thing Pat cares about now.
COMPUTER1313@reddit
As long as it’s better than 10nm.
Exist50@reddit
That sets the bar too low. Until they start making money on foundry, it's on borrowed time.
plushie-apocalypse@reddit
14900k healthy
InvertedPickleTaco@reddit
This is good news. I'd like to see some AMD use of the US Intel fabs.
ibeerianhamhock@reddit
I can actually almost believe this. They tapped TSMC for their Lunar Lake to let them really focus on their fabs and really pour money into them. It's a smart move: they aren't at parity with TSMC right now, and even getting there wouldn't buy them much. If they want their products/fabs/etc. to succeed, they either need to dominate fabs, or just dominate in design and use the leading-edge fab.
Apple has the best-designed CPUs right now and it's not close, but thankfully for Intel, Apple serves only one customer (itself) and charges a ton for its excellent hardware.
I think if they don't pull off 20A and 18A, they may as well close down their fabs and just do what they do best -- designing CPUs well. If they pull off the gamble, they'll be back on top again, at least for a time.
Exist50@reddit
Nah, they tapped TSMC because the entire point of LNL was to compete with Apple in low power, and you can't do that with the compromises forced by Intel Foundry. And CCG in general had been begging to go external for years.
They've burned down that safety net with their cuts to design. I think canceling Royal will end up haunting them even if Foundry somehow succeeds.
funny_lyfe@reddit
I don't agree about Royal Cove. The four-small-cores-acting-as-one-big-core approach is too much for the enterprise and VPS markets. Per-core licensing will be painful. As long as they have performant cores, they should be okay.
Exist50@reddit
That's the exact opposite of what Royal was. Royal was fundamentally designed to be the strongest ST core, bar none. RYL1.0 didn't even have any multithreading support. But under pressure to better utilize the massive area investment required, they added MT in RYL1.1/RYL2.0. But conceptually, it's not all that different than SMT. You're splitting the resources of one core, not combining multiple independent cores.
RegularCircumstances@reddit
But if it could only be used to the fullest extent in client (and again, desktops probably got the best of it), and area + power is a nightmare like previous cores (yes, even adjusting for, say, using TSMC N2 or A16), what's the point? If they straight up doubled GLC IPC, they'd only have about a 25% IPC advantage on Apple's M4 P-core as of today and 40%-ish on Qualcomm as of today, with those two both still scaling frequency and IPC (even at Apple, though it has slowed; AMD and Intel haven't exactly put it in fifth gear given how far behind they are).
Would Intel in 2028 having say a 15% ST lead (factoring in the others moving too) at 2-3x the peak power be something that I think could save them? I’m very skeptical. It would help big time but the long arc is really clear here at this point.
It seems to me like Intel would still have the same problems in many ways just dulled down (see how Lunar is a huge improvement but at the end of the day it can’t consistently beat Qualcomm on battery life as Intel showed, and I suspect ST performance/W is meh still) — but now they have real competition in the PC market that is both varied and far more competent than AMD was for most of its heyday, both via AMD itself and especially Qualcomm, Nvidia/MediaTek.
Royal from what I hear just really sounds like a niche sportscar throwing money and bloat at the wall beyond peers to an utterly ridiculous extent that isn’t worth the area/power cost. We also know the SMT hit to power in ST is real thanks to Intel so I’m skeptical the multithreading thing inherent to the core to amortize its size is the right direction.
Exist50@reddit
They were working on power as well. Overall, power and area efficiency should have been roughly in line with big core, if not better considering the lack of SMT with Core. Royal was created to help solve the "nightmare" of Intel's legacy cores.
That wouldn't be worth anything, but what about a 20% lead at iso-power? That's more in line with what they were aiming for.
Nah, it would have been huge, but able to justify that size. And they would scale down frequency/voltage to keep power in check. Like Apple's cores.
Eh, overstated. It's more complexity than anything else.
Strazdas1@reddit
Nah, they tapped TSMC because they had a 4 year old contract to fulfill. Without it no doubt Pat would have wanted to use his own foundries.
Helpdesk_Guy@reddit
Come to think, given his clowny appearances on stages, I'd guess you could even be shockingly right.
Exist50@reddit
There was no comparable Intel node. Lunar Lake doesn't make sense if you handicap it with Intel foundry. Wait for PTL if you want that.
Strazdas1@reddit
Sure, but the reason they used TSMC wasn't because they needed that node; it was because of the contract they had. They would have tried with IFS like every other time.
Exist50@reddit
Nah, not with LNL. For some history here, the early MTL proposal was also on N3. Intel's product teams didn't like being told "you need to compete with Apple" while simultaneously being victims of the fabs' uncompetitiveness, more difficult design, and lack of transparency. That's why MTL is the clusterfuck of chiplets it is, because that was the compromise between the two sides.
For LNL, it was clear that a) Intel Foundry was still failing, and b) the sole mandate of that product was to compete with Apple. Can't do that on an Intel node.
And really, you're reversing cause and effect. They bought a lot of N3 wafers specifically with products like LNL in mind. It wasn't merely a hedge. The original vision for SRF even had it using N3 and Skymont. Crestmont/Intel 3 was a schedule compromise.
WorldlinessNo5192@reddit
This would literally cause a global recession, so I don't think that's in the cards.
nanonan@reddit
Why on earth would that happen?
Helpdesk_Guy@reddit
Something, something … Meltdown?
And Spectre, SpectreRSB, RIDL, Fallout, Zombieload, Plundervolt, Zombieload 2, TPM-FAIL, NetCAT, SWAPGS, SPOILER, Foreshadow, Machine Check DoS, BMC flaws, TSX, Lazy FPU context-switching issues, etc. – I think I forgot about the other 80% of flaws before that on their Intel ME already, or their Atoms and so forth. Superb design much, I guess?
jucestain@reddit
If you command the labor of 120k highly educated people and $30 billion a year in resources and don't end up producing something useful, that qualifies for jail time in my book. Just a monumental waste of resources. But I'm still optimistic Intel can pull something off; we need more competition in the world, not less.
XYHopGuy@reddit
JAIL TIME??? C'mon man
jucestain@reddit
You don't think wasting $30 billion a year and the labor of 120k highly educated people is worthy of punishment? Literally name a worse crime.
III-V@reddit
Geez, if you think that's bad, wait until you see the government.
Exist50@reddit
Hold on, I think he may be onto something...
iwannasilencedpistol@reddit
You haven't worked a day in your life have you?
Traditional_Yak7654@reddit
If wasting time and money is worth jail time we’d all be behind bars.
XYHopGuy@reddit
Shit posting on reddit
pianobench007@reddit
What kind of loser consumer talk is this? Jesus Christ. You know Porsches are the best, but I don't go and hate on my 3-cylinder Toyota GR Corolla...
Just like we never hated on TSMC in the past for being behind Intel. Intel ushered us into finFET.
They gave us the good shit and it was an exciting time. I don't regret a single gaming session. Good times back then.
It's GAA time or ribbonFET and Intel and TSMC will bring us into this new era. And I am excited as hell for them to thrive!
NVIDIA is the goat. Yea. DLSS and DFG amazing shit. I play 120fps in cyberpunk 2077 and it's amazing. RT is at ultra (Not pathtracing)
All running on my little tiny 10700K self boosting to 5.1 GHz all by itself. 14nm is still viable.
ibeerianhamhock@reddit
So perhaps my comment came across like I have an objective, so let me clarify that I'm excited any time a company innovates. I see intel as a pretty huge innovator and I've been worried they would fail. I didn't want AMD to fail in the bulldozer days either and talked equally excitedly when Ryzen dropped, especially 3000 series when they really had it nailed down design-wise.
Probably there's a bit of pride and patriotism being an American citizen for me as well, it's cool to think one of our fabs may be on top again, but that's not really at the forefront of my mind.
pianobench007@reddit
Here is the skinny with AMD. They did not control their manufacturing. So when they split off GlobalFoundries, GloFo could not scale to what the PC market demanded. And GloFo, now being a separate entity, has no reason to continue supporting AMD.
Look at GloFo now. They have been shafted by AMD for TSMC. But no one is crying for them? Bottom line, this is a cut-throat industry with no loyalty, and I guess that is what I miss. I support Intel and AMD and TSMC. Without the big TSMC I wouldn't have my RTX 4080 with DLSS or DFG, which gives me 120 fps with all RT ultra settings on.
For Intel, they dominated AMD back then because guess what. Owning the manufacturing means you can guarantee delivery times to your customers. And Intel's customers are Dell, HP, Acer, Asus, Apple etc... so it wasn't Intel back then putting AMD down.
It was the demanding schedule of those customers looking to meet consumer demand for a new laptop before school. Or new laptop before Christmas.
Intel is in troubled times, but because they own the fab, they are still delivering. All that said, mobile demand is even more insane. And if TSMC can continue to meet this demand?
Well, excellent for them. But you need to remind yourself: Intel was king for decades. And even they fell. Because this nanometer manufacturing isn't easy.
It ain't easy producing banger after banger after banger. So far only Taylor Swift has continued to keep delivering.
But look at Kanye? And the other kings of pop. R Kelly? Banger no more. Britney, NSYNC, all of them can't keep producing bangers.
lupin-san@reddit
GlobalFoundries doesn't have anything beyond 12nm. Are you expecting AMD to stick with them when the industry has moved to smaller nodes?
pianobench007@reddit
Of course not. But that was my counter point to him saying that Intel should just throw in the towel in foundry.
Which is strange. Because you and I don't make these devices. But we want someone else to make them. Just not Intel?
Weird.
gavinderulo124K@reddit
PT in cyberpunk is so good though.
Faranocks@reddit
Historically Intel has had very high yields, and was faster at getting yields up compared to the competition. It was only with Intel 7/Intel 10nm that they never really got yields up at a respectable speed. Even though 14nm was a bit behind schedule, Intel was still ahead of TSMC and Samsung for YEARS. Even TSMC 12nm was a fair bit worse than Intel's 14nm+++++. I'm not sure what happened with Intel 10nm/Intel 7; they really shit the bed, and even now their yields are atrocious and plagued with issues.
b3081a@reddit
On the 10nm node they opted for quad patterning with DUV, which completely destroyed their yield. Meanwhile TSMC made some compromises on density and implemented N7 with double patterning while waiting for EUV to mature, and Samsung became an early adopter of EUV on their 7nm node in order to compete with TSMC.
Faranocks@reddit
That's pretty interesting. Bit off more than they could chew. Thanks for the info.
LeotardoDeCrapio@reddit
FWIW Yield data is extremely confidential. Almost everyone in this sub talking about yields are pulling stuff out of their butt.
nanonan@reddit
Sure, which is why Pat giving out those details smacks of desperation.
b3081a@reddit
TSMC always shows their D0 (defect density) trends in public slides, although they only give rough guidance. As for Intel, they've publicly stated it several times, including the 10nm issues back then and the recent 18A D0 stuff.
LeotardoDeCrapio@reddit
That's not the actual yield data.
Faranocks@reddit
Some stuff is easy to infer, other stuff isn't that confidential. For Intel they showed projections vs older nodes for 22nm vs 14nm. This isn't exact data, but a reference for what they expect. For example with Intel 10nm they stated in shareholder reports that 10nm yields were worse than they expected. Additionally they kept products on 14nm for generations and generations after they initially said they were hoping to release 10nm desktop CPUs. They wouldn't be refreshing 14nm for the 5th year in a row if 10nm production yields were good and on track from their initial projections.
Similar things from TSMC, maybe a little less from TSMC themselves, and a little more from their customers in terms of where the numbers are coming from. Wafer pricing can also show a bit of data.
Another way is just to track volume of what is being produced on a node. This is far from an exact science, and doesn't do much except give a slight insight to the numbers. If Intel is shipping millions of Intel 14nm chips years before TSMC is producing chips with a similar transistor density, it does say quite a bit about the nodes and yields from processes though.
LeotardoDeCrapio@reddit
A blurb on a shareholder report only gives an indication on the trend for the node.
Never mind that yield is extremely specific to a specific date/run/product/design. I.e. we can't extrapolate absolute yield data for a specific node.
Furthermore, there is a hell of a lot more going on in the background in terms of not just yield, but variability, binning, packaging, etc.
But we simply have fuck all access to actual data regarding yield. So people should just take those comments with massive grains of salt.
It's also really bizarre to read people, with bizarre emotional connections to tech companies, going at it regarding yield as if it was some sort of sport competition.
lupin-san@reddit
Intel has been like that for decades. Aiming for big gains in every generation. Until they hit a snag that makes targets unreachable at least in the short term and causes delays.
Contrast that to what AMD or TSMC have been doing. Their targets may not be as high as Intel's, but they're more realizable in the time frame allocated for them.
Faranocks@reddit
I mean, Intel 14nm was so far ahead of any other process at the time. Not only that, they were shipping stuff in quantity much earlier. Part of that may have to do with the fact they design the chips they make, which might speed up some design and refinement processes, on both sides.
10nm might have seemed like a reasonable gamble at the time given the speed they got 14nm to production. Even if they were behind by a year it might have been fine. Instead they delivered in limited quantities two years after initially promised.
Also intel has historically had a fairly conservative CPU improvement approach, with their well known 'tick tock' cycle. AMD was usually the one with bigger gambles, see bulldozer, ryzen, vega.
It's easy to point out how the tables have flipped, but I don't think that's because AMD or TSMC played it conservatively, or Intel didn't play it conservatively (ok, maybe TSMC always played it safe). Go back a few years and it was AMD on the ropes after a decade of increasingly disappointing CPU results. For a few years their GPUs were refreshed again and again, getting hotter and hotter with decreasing performance gains (sound familiar?).
It would not surprise me if Intel makes a comeback in one form or another. If they don't shelve it, their GPUs hold a lot of potential, outperforming AMD and Nvidia GPUs at certain points in the performance/power curve, while having less optimized drivers.
I say this as someone who bought a 5900x and later a 7800x3d over any Intel CPU. Before that I have had Intel CPUs from a 8750h to a 5820k and q6600. I hope Intel makes a comeback, as AMD also needs motivation to keep innovating.
Exist50@reddit
In practice, though, N7 density is comparable to or better than Intel 10nm density.
b3081a@reddit
TSMC simply executed their roadmap flawlessly while Intel had to make a ton of tradeoffs after they already spent a few years screwing up things. In the end, N7 family was way more successful than Intel 10nm.
Also, back then the majority of TSMC customers were designing low power SoCs which tend to have higher density than high performance desktop ones, while Intel was not a major player in mobile phones. The only high performance TSMC customer back then was AMD and their N7 CPU chiplets' density weren't as high as Apple or Qualcomm.
Exist50@reddit
Intel 4 also took them a very long time to get yields under control. It's been a running trend since 14nm.
Helpdesk_Guy@reddit
… and their 22nm, which they also 'successfully' covered up, concealed through another mobile-first launch and a minor delay.
theQuandary@reddit
The problem was marketing. TSMC has N7 -> N7P -> N7+ -> N6 -> N6E with a total uplift of around 18%.
Intel 14nm -> 14nm++ -> 14nm+++ -> 14nm++++ was essentially identical and even had a nearly identical 20% performance uplift too. This is without mentioning that the original Intel 14nm was better than TSMC 10nm.
But Intel was bad and TSMC was good.
As mentioned, they tried introducing too many technologies at once and the breakthroughs didn't happen when they planned.
If their research had gone as expected, Intel would have launched 10nm in Q4 2015 and TSMC would have caught up with N7+ in Q3 2019. Instead, a 4-year advantage became a 2-year lag.
I don't know of any reliable sources for this claim. Original 10nm was scrapped, 10nm-- that launched Cannon Lake was terrible. 10nm had bad yields, but 10nm++ Superfin seems to have decent yields and 10nm+++ Enhanced superfin aka Intel 7 seems to have perfectly fine yields while coming close to TSMC N7+/N6 density without needing EUV.
Helpdesk_Guy@reddit
Even worse, they relaxed its density to almost half, solely to be able to increase clock rates at the expense of much higher power draw. The performance uplift was bought dearly through worsened power efficiency, and it came into effect through higher throughput alone.
IPS (instructions per second) vs IPC (instructions per cycle). That's why their Core-series IPC stayed largely the same, to pretty much identical, for several generations in a row, while only the clock rates (and thus the resulting IPS) increased significantly.
As pointed out above, yes. Initially superior, yet it only went downhill from there density-wise with every new iteration of their 14nm.
Helpdesk_Guy@reddit
Wrong … and it keeps getting repeated again and again, held up as the prevailing notion, despite being just plain wrong!
No offense here and don't take it personal! But …
They were also late on their 14nm node before that, and their 22nm node before that. All of them had the very same issue: yields!
In fact, on every process since their 32nm they had trouble with yield issues and had to delay – it just didn't always appear as delays to the public, since the internal months-long sanity buffer (built in to provide for contingencies if things went wrong) still largely covered up the internal struggles they already had back then, thanks to their already-infamous execution.
See the issue here? Intel had trouble with yields and always (even publicly) delayed their nodes on ALL THEIR PROCESSES since their 32nm back in 2010. I mean, even Toshiba had their own 32nm process up and running, in high-volume production, and was shipping 32nm products about a year earlier than Intel itself.
Toshiba had their 32nm products on the market by February 2009, while Intel shipped its first 32nm CPU in January 2010.
Did you know that? That even Toshiba had their own 32nm up and running and was shipping 32nm products a year ahead of Intel?! And even then, Intel was only able to ship the lowest-end, bottom-line SKU (the 2C/2T Celeron G1101).
Sound familiar? With 22nm? Or their 14nm getting a mobile-first release? Or their broken 2C/4T Core i3-8121U with fused-off graphics on their 10nm disaster that no one could ever buy, released only through some unknown no-name Chinese retailer no one had heard of before, just to legally comply with earlier statements and appease their shareholders?
The thing is, it wasn't necessarily materials or overly high ambitions all these years, even well before 10nm. It was mainly their way of executing things, middle-management eff-ups, and the upper floors keeping their struggles under the hood on purpose.
Their 14nm was already delayed by several months to over a year (depending on which sources you're willing to trust…), and its delay was again royally covered up with a mobile-first launch. Yet their 10nm wasn't 'just a bit late'; it was delayed again and again for several years in a row, initially planned for H1 2015, then 2016, then suddenly 2017 and so forth.
Either way, their struggles to ramp up nodes and their everlasting yield issues on new processes – which always led to growing delays and lots of cover-ups via small-die and/or mobile-first launches, node after node, dressed up with well-written stories – had been a long time coming and had been building well before anything 10nm.
Yet after their 10nm, the gate-crasher and unpleasant friend Mr. Yielding finally made itself comfortable and was here to stay …
Intel just never had the balls to call up Dr. Sanity, let him clear the decks, and finally throw out Prof. Hubris and his gang for good – Hubris being not just Mr. Yielding's best drinking buddy, but a heavy drinker himself who always loves to drink the Kool-Aid while holding hands.
tl;dr: We've seen the hubris. And now we're seeing the scandals.
ConsistencyWelder@reddit
How many of us still believe Intel with anything though?
They could be right this time, but until we see proof I'm going to categorize this in "the boy who cried wolf" category.
constantlymat@reddit
Go to the thread about the Reuters article regarding the 18A setback. The most upvoted comments are all by users who claim the reporting is misleading and that the setback wasn't actually a setback but an expected delay and that everything is progressing splendidly.
HTwoN@reddit
Maybe you don't have to take the word of us plebeians in this sub. Instead, listen to Ian Cutress, who knows what he is talking about: https://www.youtube.com/live/Yx5nNdz7LQo?si=JN6SUOcQrJhsszxW&t=8951
Helpdesk_Guy@reddit
You know that Cutress has leaned hard toward everything Team Blue ever since his AnandTech days, right?
constantlymat@reddit
How does his stance that Intel should not yet sell off its foundry business dispute what I said?
constantlymat@reddit
Cutress was just paid by Intel to do a big promotional tour of their most modern fab a few months ago.
He was also completely wrong about the 13th and 14th gen problems with his initial assessments that downplayed the significance of the problem.
He is not infallible.
Legal-Insurance-8291@reddit
This sub is wild. The number of people who just accept anything Pat says as fact is absurd. And when you point out all the lies and show them the actual facts they call you a "hater" and say some stuff about how it's all the fault of nameless MBAs and not the guy who has actually been CEO for the last 3.5 years.
Helpdesk_Guy@reddit
I can assure you, that has been the actual case for several years, and it isn't limited to Gelsinger; it's Intel in general.
Exist50@reddit
I remember I got called a troll for saying 20A was broken and useless months ago. Intel just officially canceled ARL-20A today.
Famous_Wolverine3203@reddit
It's for the best, I think. Better than selling a subpar product. Let them focus all their efforts on an actually viable node, in this case 18A.
Legal-Insurance-8291@reddit
Yeah and Intel's line every time something bad has happened is that 18A will magically turn it all around. But that line is rapidly running out of room as the promised dates for 18A rapidly approach.
Strazdas1@reddit
Setback!
Exist50@reddit
To be fair, at least some of those users block anyone who says otherwise, so a lot of the upvote ratio is artificial.
DaBIGmeow888@reddit
SoftBank, Qualcomm, and now Broadcom have all said the same thing about Intel missing manufacturing milestones and untenable timelines. They are all consistent in saying things are not good.
pianobench007@reddit
I believe extremely strongly in Intel. They were the OG innovators of finFET that the rest of the industry followed. They will innovate with ribbonFET; give them time. Overclocking is only kept alive by Intel and NVIDIA currently.
NVIDIA gave us DLSS, and Frame Generation bangers. Intel followed with XeSS. Sure they are lacking in leading edge node but we will know soon enough in a year or two.
Once back, the real Ai revolution will begin again and another cycle of kickass shit will be upon us.
Just think.
Seriously think. In 2007, when the very first Crysis was launched/published by EA, we all thought that this must be the pinnacle of gaming. Open-world jungle physics and superior AI that could command an entire ARMY of troops against us.
We had active camo and active armor bullshit. Blah blah blah, all that, and we thought we were living the good life.
Little did we know. Come Cyberpunk 2077 and look at where we are at today. RT ray tracing ultra. Path Tracing. DLSS and Frame Generation. Sure all innovations by NVIDIA. But what else is there?
If it isn't NVIDIA innovating, it is Intel and AMD. And we need all of them to innovate. Intel brought the finFET transistors. They are the hardware guys. The crazy software guys are NVIDIA.
Without a marriage of the three or four, a ménage à trois (threesome), we would not be where we are today in gaming and graphics.
Seriously. Try cyberpunk 2077 again in 2024. Tell me it isn't amazing. I am playing on a 10700K 14nm+++ with 4080 and have 120FPS ultra raytracing DLSS and FG on.
It is butter smooth.
theQuandary@reddit
Crysis is still very close to the pinnacle of gaming and that's without installing the texture pack upgrades.
How can a mostly single-threaded game with a ~4.5GB vRAM limit from 17 years ago perform so well and look so good while providing such a huge and interactive environment? Even more interesting is how the "remastered" version manages to generally look a lot worse.
Prince_Uncharming@reddit
With all due respect…
What the fuck are you trying to say?
pianobench007@reddit
Seems counterintuitive with all the Apple gatekeeping providing shareholders infinite money glitches.
They are the PC disruptor. The Apple ecosystem ported Apple apps and x86 apps to Apple silicon. But all those very same apps must be downloaded on any iOS device first through iTunes, now finally through the App Store, with Apple not even bothering to do any upkeep on iTunes for Windows. Because they have a new money-making model: 30% on each transaction or download if the app was originally downloaded through their own store.
Intel and their partners have done none of this. Instead, Intel and their partners, along with AMD and NVIDIA, were the original champions of open standards; these standards allow everyone to contribute to the PC market, and even Linux itself, because of the standards.
Like ATX standards, PCIE, USB and more. Most of which were led by Intel.
Sure, Thread Director has yet to show its purpose. But you have to give them time. They've already done away with the issue of Hyper-Threading and those vulnerabilities.
They work with all of their partners equally in order to support new standards and innovation. Take for example Thunderbolt 4. A single standard cable that can transmit both data and power at full speed.
I don't see TSMC contributing to the PC in the same way that Intel has, both in the past and in the present. Intel pioneered the finFET, leading the way to today.
What I am getting at is they've all contributed. Some more than others sure in this current Ai innovation and the struggle. But Intel has always been there especially towards gamers. Just like NVIDIA has.
I am sure they've innovated more in the data center and beyond. I am not as insightful in that arena and what those customers want. From my understanding, Intel caters directly to their needs.
Prince_Uncharming@reddit
With no due respect…
What the fuck are you trying to say?
Brapplezz@reddit
With all due respect, are you intentionally obtuse?
Intel's been shitting the bed, yes. So everyone has lost trust and can't see past the issues. Which are actually not nearly as bad as we're acting.
You're not required to trust them. This guy just explained why he trusts Intel: they do have a great history of innovation and pushing the industry. He is hoping/trusting they can continue to do so, despite their major setbacks.
Arrow Lake itself is way more exciting than the AMD 9000 series. No more Hyper-Threading? Efficiency gains? Sounds quite promising. Should you buy the hopium? No, but be realistic. Intel isn't dead, or even close to it.
Real-Human-1985@reddit
Said the same about 20A last year along with showing LNL and ARL on 20A…..
nyrangerfan1@reddit
Let's say you're currently using TSMC or Samsung and you have a relationship with them. How do you think you would feel if all of a sudden Intel announced that you are now considering Intel? You think your relationship with TSMC or Samsung might be damaged a bit? But hey, what's more important: recognizing that challenge, or making someone on r/hardware happy?
haloimplant@reddit
these aren't personal relationships where people take offense to shopping around, having other potential suppliers gives you negotiating leverage
Exist50@reddit
If Intel had customers, they could say that without naming who they are.
Well Qualcomm did just that. TSMC and Samsung know their customers are always shopping around. That's just business as usual.
SlamedCards@reddit
Lunar Lake was never on 20A. Arrow Lake had 20A and external on the slide deck. 20A parts look to be Q1 next year.
Exist50@reddit
They just cancelled that, so...
Vb_33@reddit
When did that happen?
1600vam@reddit
Foundry customers do not normally announce which supplier they use, and foundries do not normally announce who their clients are. You should not expect announcements, I suspect we'll have to wait until products are out for die shots.
santasnufkin@reddit
That fact doesn’t mesh with what intel haters want to hear though.
LeotardoDeCrapio@reddit
Yup. Most people in this sub have fuck all understanding of how this industry works. It's fascinating to see people going at each other about stuff they have no clue about.
Exist50@reddit
"Fact", lol. So what about Qualcomm and Microsoft? Or even just saying they have customers in earnings...
Exist50@reddit
If nothing else, leakers usually cover it. When's the last time a processor came out that we didn't know the node of in advance?
This also hasn't stopped Intel from bringing up would-be customers like Qualcomm and Microsoft for soundbites.
soggybiscuit93@reddit
If we take Gelsinger's statement to be truthful, then there are a lot of companies testing out 18A, hence potential customers. Not until one signs a contract do they actually become a customer.
Darlokt@reddit
For IFS it's actually better to prioritize external partners over internal ones for production capacity, to get money and actual external customers into IFS. Also, at the time of Lunar Lake's (and partly Arrow Lake's) planning, it was unclear whether 20A and 18A would be on time and production-ready. To be sure of having a node to produce something like Lunar Lake, the smartest move at the time was to buy capacity at TSMC so Lunar Lake wouldn't be stalled if IFS failed in developing 18A.
CoffeeBlowout@reddit
https://www.reuters.com/technology/intel-manufacturing-business-will-see-meaningful-revenue-2027-cfo-says-2024-09-04/
12 potential customers for 18A in 2026.
Strazdas1@reddit
20A was always supposed to be internal, 18A for customers.
haloimplant@reddit
revisionist history
https://www.notebookcheck.net/Qualcomm-reportedly-ditches-Intel-20A-in-favour-of-TSMC-and-Samsung.739789.0.html
Johnny_Oro@reddit
Also I suppose 18A is the milestone the US gov wants intel to reach before handing them the CHIPS Act package.
Helpdesk_Guy@reddit
Sounds reasonable enough, after the cluster 10nm was, doesn't it?
Exist50@reddit
That's simply nonsense. They're skipping it because it's not healthy enough to use. They'd love to show the world a working demo with ARL-20A, but they can't because the node is broken.
greenfuelunits@reddit
Time and time again you are one of the only people on this sub that gets Intel. Intel at this point looks even less likely to stand the test of time than SMIC even if they are ahead.
Exist50@reddit
And frankly, that's only because I know one or two people that have worked at Intel. It's not complicated when it comes to these matters. You just have to discard any assumption that what they're telling you matches their own internal understanding.
Think the last time they were honest about foundry was when they announced the year delay for then-7nm.
Vushivushi@reddit
Sounds like they are in trouble until 14A.
Legal-Insurance-8291@reddit
They will be bankrupt before 14A ever sees the light of day if 18A doesn't work.
Exist50@reddit
No, they're skipping it because 20A is broken and CCG no longer has the funding to support a useless project.
shlorn@reddit
If this is news to you, then you haven't been paying attention. 20A was always planned to be a low-volume node; from a 2023 article:
https://www.tomshardware.com/tech-industry/intels-comeback-appears-on-track-ceo-gelsinger-says-18a-process-node-performance-is-a-little-bit-ahead-of-tsmcs-n2-but-intels-process-arrives-a-year-earlier-than-tsmcs
Exist50@reddit
Short-lived is not the same as low volume, much less the broken, useless state of 20A today. At worst, it was supposed to be like Intel 4, but 20A doesn't even meet that bar.
If you still believe this, I have a bridge to sell you. "Ahead of N2", hah!
DueRequirement6292@reddit
What happened? Why is 20A broken?
Exist50@reddit
Basically the same deal as 1276 for Ponte Vecchio. Just behind in yield, perf, etc. Not a state suitable for a real product. They'll get something working eventually, but this whole narrative of 20A being cancelled because 18A is just doing so well is a complete fabrication.
DueRequirement6292@reddit
That’s too bad, thanks
makistsa@reddit
Where did you get that from?
You spam every intel post with negative comments. Have you gone crazy? Are you a troll? What's going on?
Exist50@reddit
The same source that told me months ago that 20A was lagging badly and causing ARL-20A to be non-viable as a product. Now that's been proven true, and you think that means I'm trolling? You think that's just a coincidence?
Because that's the reality of Intel's situation, plain and simple. Which is apparent if you listen to anything but Intel marketing. If Intel marketing was honest, I wouldn't have anything to comment on, now would I?
makistsa@reddit
100 comments/hour is not "the reality", it's a full-time job.
catch878@reddit
Dude, you need to chill. The announcement that ARL parts are going to be exclusively external instead of 20A does not imply that the process is broken based on everything we know so far.
My understanding is that:
So if the only node customer is internal, and 18A is supposed to be the node where they regain leadership, and any resources used on 20A take away from 18A, then it makes far more sense to just skip 20A for ARL entirely if 18A is on track.
You COULD be right that 20A is broken, but we don't have any solid evidence this is the case.
Exist50@reddit
Mate, I've been saying that for months now. And now that we have the proof, you still think I'm bullshitting?
If it was a problem with N3 capacity, they wouldn't have ARL-20A on the roadmap to begin with.
They don't have a capacity problem. The issue is that the node isn't mature enough for production. So just like they did with p1276, they intercept a neutered version of the successor instead.
That is true, but I never claimed otherwise.
Because they're not being honest.
"Skipping" nodes isn't really a thing. ARL-20A existed as a ramp vehicle for the entire 20A/18A family. 18A is not an independent node. It's an iteration of the foundation established by 20A.
catch878@reddit
As far as we know, Intel 20A was only ever going to be utilized on low-end ARL SKUs. There was a leak about this in March but I don't know if it was ever confirmed by Intel. But if this leak is true, then ARL-20A parts were going to be lower margin than the N3 parts and not as lucrative, from a sheer business standpoint.
ARL-20A was almost certainly on the roadmap to help with 20A development. Feedback from the design team would have been invaluable to the foundry while trying to develop their PDK and process. On the flip side, spending time developing on both N3 and 20A gives the design team valuable experience in creating designs that can possibly be more flexible with regards to foundries. I admit, cancelling ARL-20A is probably the strongest point in your favor, but it's not the only possible explanation and not really the most compelling to me.
Sorry, I worded that poorly. I'm not saying there's a capacity problem with 18A; I'm saying that 18A is where it benefits Intel to focus all of its foundry resources, since that's supposedly their leadership node. If there's even a small chance that focusing all foundry development resources on 18A will left-shift that node, Intel can't not take that chance.
You're right that skipping nodes isn't really a thing, but this isn't a typical situation and Intel isn't a typical foundry yet. I'm fairly certain Pat has been pretty vocal about 20A not being much more than a parity node with TSMC and wasn't intended to be the node most customers intercepted. So if they've learned everything they can from 20A, they have no external customers and no plans to get any more, why wouldn't they then focus resources on 18A to try and left-shift it?
If that's true, Pat will be gone by the end of 2025Q1. Time will tell.
But on a more personal note, this "guilty until proven innocent" thing really gets on my nerves. I get where you're coming from and why you think this, but we have no confirmation that this is true. Intel's behavior over the past 10 years has personally fucked me over many times, but the Intel hate in this subreddit is so out of control that it's getting to the point where it's more about vibes than facts.
Exist50@reddit
That is for the exact same reason. CCG knew 20A was behind schedule and performance targets, so they increasingly tried to limit its scope in their product line. Clearly it hit a breaking point where even that was no longer worthwhile.
Originally, it was supposed to be much more important. Would you believe me if I said they even planned for a product on p1277 (the Intel 3 + PowerVia node)?
Where I take issue with this is the assumption that 20A learnings do not translate directly to 18A. Frankly, they're basically the same for the sake of this discussion. And one cannot simultaneously argue that ARL-20A exists to ramp p1278 and that it no longer matters for that purpose.
I think you misunderstand me. I'm stating this as a fact based on my own understanding of the situation. Which is why I've been saying it so long, and why you're slowly starting to see the proof of that come out. This is the sentencing, not the indictment, to be allegorical.
Ravere@reddit
Regardless of the reality, the cancellation of 20A can't inspire confidence in any prospective foundry customers.
"Use our new foundry nodes, we only use TMSC for our new chips" Not the greatest selling pitch.
Exist50@reddit
Which is the only reason they kept it around this long. But the narrative they're pushing now is better than "look at our apples to apples 20A chips be bodied by N3B".
imaginary_num6er@reddit
What the hell happened to Ericsson’s Intel 4 product?
Exist50@reddit
That's not a foundry customer. Ericsson contracted Intel's networking product group for a custom processor.
PainterRude1394@reddit
https://www.ericsson.com/en/portfolio/networks/ericsson-radio-system/ran-compute
DYMAXIONman@reddit
Isn't 20A just supposed to supplement the TSMC chips that they'll be using this year?
CoffeeBlowout@reddit
Yes.
nanonan@reddit
Zero actual customers aside from themselves isn't quite as promising as you are making it seem.
Asgard033@reddit
The article has actual defect rate numbers. Nice.
Spirited-Guidance-91@reddit
Man they have to be panicking to let that info out...
unityofsaints@reddit
Lining up at the exit door most likely
bubblesort33@reddit
When someone purchases silicon, do they pay per functioning die, or per silicon wafer?
theQuandary@reddit
Usually it's per wafer because yields vary with chip size and design in hard-to-predict ways and the fabs don't want or need to take on those risks.
There are exceptions though. TSMC's original N3 was entirely unusable and got scrapped for N3B, which also had yield issues, leading TSMC to roll back their transistor density for N3E.
Apple was forced to release M2 to keep momentum going with their new ARM laptops (and they couldn't skip a phone chip generation either). M3/A17P launched a year late on N3B and that only happened because TSMC agreed (likely to avoid breach of contract lawsuits and all the bad press) to eat the costs of defective chips (probably all defects past a reasonable defect rate).
lupin-san@reddit
Depends on the contract and the node's yields.
The foundry eats the cost of each defective chip if customers pay per functioning chip. Customers will want this if the node yields are bad.
The customer eats the cost of each defective chip if customers pay per wafer. If the node has really good yields, customers will want this arrangement. Customers get the most out of the wafer. You'll see this more in mature nodes.
Darlokt@reddit
Normally you pay per wafer, but prices fluctuate a bit depending on where in a node's production run you buy. Very early is usually a bit cheaper, as you are basically funding node development; the ramp-up of a node is the most expensive, and later the price tapers off again, with the biggest drop for a leading-edge node coming when the next leading-edge node arrives. You also normally negotiate a defect threshold: if the foundry for whatever reason produces far more defects than can reasonably be expected by the customer and the foundry, you basically get a wafer on the house to make up for it.
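A minimal sketch of the per-wafer arithmetic both comments describe; the wafer price, die count, and D0 figures below are hypothetical, and yield uses the simple Poisson model from earlier in the thread:

```python
import math

def cost_per_good_die(wafer_price: float, dies_per_wafer: int,
                      d0: float, die_area_cm2: float) -> float:
    """Effective cost per functional die when paying per wafer.

    All figures hypothetical; yield uses the simple Poisson model.
    """
    yield_frac = math.exp(-d0 * die_area_cm2)
    good_dies = dies_per_wafer * yield_frac
    return wafer_price / good_dies

# Hypothetical $17,000 leading-edge wafer, 600 candidate dies of ~0.9 cm^2:
for d0 in (0.2, 0.5):
    print(f"D0={d0}: ~${cost_per_good_die(17_000, 600, d0, 0.9):,.0f} per good die")
```

This is why, as noted above, customers push for per-good-die contracts when yields are shaky: per-wafer pricing shifts the whole D0 risk onto the buyer.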
juGGaKNot4@reddit
You can easily do 5 nodes in 4 years if you cancel 4 of them.
Exist50@reddit
Ok, then where are the non-potential clients? It's easy to make lists of customers you want to have.
der_triad@reddit
You were right about 20A btw, turns out it's not going to be used for ARL after all.
Exist50@reddit
Oh, good catch. Everything seemed to be trending that way, but I'm slightly surprised they outright admit it's dead. The whole "18A is going so well we're dropping 20A" spin is complete nonsense though.
Darlokt@reddit
20A was, like Intel 4, an internal development node to test the new features coming in 18A, with limited libraries etc. It is not uncommon to abandon a development node for a production run if its successor is coming along nicely, because the improved node (in this case 18A) normally shares production lines with the development node (in this case 20A), and it is simply smarter to take the step to the improved node if the PDK is ready, because it has more fleshed-out libraries, better support, and will keep improving, being an actual production target rather than a dev node. Also, from a design perspective, you can normally move your designs from the development node to its improved version with minimal modifications. They originally said processors would be built on 20A, like Intel 4, for the books, so its development would be funded by products produced on it; but in this case Intel is actually creating the leading-edge node, and that alone is reason enough to abandon 20A and take the leap to 18A.
Exist50@reddit
And if it met their claims, they would be using for ARL like they did Intel 4 for MTL.
When has TSMC done that in living memory? Name a single example.
That's an argument for updating a design if time permits, like they did with GNR/SRF. But that's not what's happening here. ARL-20A is canceled entirely.
SteakandChickenMan@reddit
N7+ : )
N10 was a bit different but also very limited in scope
Exist50@reddit
Fair, though that was design compatible with N6, so not really any churn for designers.
Probably shipped more than Intel 4 has.
III-V@reddit
The author said it will be primarily fabbed externally, not entirely. Saying it's 100% TSMC is not supported by the verbiage used.
Tulkonas@reddit
'With this decision, the Arrow Lake processor family will be built primarily using external partners[...]"
You missed the "primarily".
Exist50@reddit
The base die is technically from Intel. That's the only exception.
Exist50@reddit
Also, you should probably post that as its own article. I think it's big enough to merit it.
soggybiscuit93@reddit
Off topic, but I wanted to ask you what your thoughts are on this leak today listing a 265K as 20A and how valid it is
anival024@reddit
Potential clients maybe lining up to register interest in future consideration of possible production contracts.
Ch1kuwa@reddit
Is it comparable to TSMC N5 in transistor density?
hofmny@reddit
This is actually very good. Intel is mothballing 20A. They are transferring all of the team members from the 20A team to the 18A team. Why run 5 nodes when you can run 4? It's cheaper, especially if 18A has high yields and a low defect rate.
I also think the canceling of the 20A process has to do with Intel's cost-cutting measures. Again, this is probably a good thing and will save them a ton of money and help increase margins, due to economies of scale… If they can get external and internal orders all on one process node, that node will become cheaper over time for everyone involved, not to mention defect rates decrease and yields increase the longer you run a process node.
KirillNek0@reddit
Good to see Intel gets progress out of this venture.
Astigi@reddit
Very healthy for TSMC
mysticzoom@reddit
Let me translate that for you.
"Please use our 18A node or we're tits up! We need customers to make shitty yields so we can get better at it, PLEASE!"
kingwhocares@reddit
Defect density at 18A is 'healthy,' potential clients are lining up
jedrider@reddit
Says this with fingers crossed behind back.