Intel.com: Continued Momentum for Intel 18A [20A is scrapped, each Arrow Lake tile to be fully TSMC-sourced]
Posted by Helpdesk_Guy@reddit | hardware | 291 comments
ABotelho23@reddit
Lord, if 18A doesn't blow everyone's minds, Intel is toast. This is getting ridiculous.
Famous_Wolverine3203@reddit
It's a 15% perf improvement and a 30% density improvement. That puts it in the ballpark of N2/N3P in terms of power but slightly behind N3 in density.
It's not blowing minds, but it's no disappointment.
Legal-Insurance-8291@reddit
Being 3 years behind sounds pretty disappointing to me..
nyrangerfan1@reddit
Remind me, do either of those have GAA and backside power delivery?
Least-Psychology-925@reddit
18A has both.
Legal-Insurance-8291@reddit
If you're asking about the TSMC nodes then no, they don't. But those technologies are just a means to achieve higher density and performance. TSMC has smaller transistors than 18A which balances out for the lack of the technologies you mentioned.
TwelveSilverSwords@reddit
How is that 3 years behind?
Legal-Insurance-8291@reddit
Volume production of N3 was in 2022 and volume production of 18A will be in 2025. That's 3 years behind for an equivalent node.
zeey1@reddit
Even TSMC doesn't claim that. They claim N3P, which goes into production early next year, is the same as or better than Intel 18A. If Intel 18A reaches high-volume production next year (big IF), there would be node parity (or maybe 4-5 months behind), not 3 years. This is a claim made by TSMC itself. Intel's claim is different: they think Intel 18A is better than N3P.
I doubt Intel will achieve 18A production, as it has failed numerous times in the past. The market thinks the same.
Exist50@reddit
They're not wrong. Falcon Shores is on N3, not 18A, and that's 2026.
zeey1@reddit
Well, I am not an expert, but TSMC's claim that N3P is the same as Intel 18A makes no sense, as, for example, it doesn't include the crucial backside power delivery that was previously supposed to be there.
It seems Intel's claim, that 18A is better than N3P and similar to N2, makes more sense (hence why they changed their naming to 20A, 18A etc.).
But let's take TSMC's claim: even then, if Intel succeeds at the 18A node, it will be parity between both.
Cost and yield will be more important factors though.
Exist50@reddit
BSPD, GAAFET, etc are all means to an end, but they are far from the only ways to improve a node. Note that even Intel is only claiming 15% vs Intel 3 now, which is very much in line with a node shrink without those sorts of features.
Intel 3 isn't comparable to TSMC 3nm though. They essentially switched from a node behind in their naming to a node ahead.
zeey1@reddit
Claims... that even TSMC isn't making. Talk about being an Intel hater or TSMC fan.
Not a single tech follower is dismissing backside power delivery like you are, not even TSMC itself, which is implementing it next node.
Exist50@reddit
Intel's own numbers showed it at ~3% perf.
And lol, no one believes Intel 3 is comparable to N3. Intel's own design teams aren't even using it for flagships.
Legal-Insurance-8291@reddit
I guess time will tell. To be fair nobody posting here has the actual numbers and few could even interpret them if they did. What's more telling than any analysis on reddit is how potential customers are responding and none of that looks encouraging.
zeey1@reddit
Point is, if Intel achieves 18A production, a big if (due to its culture of incompetence), they will be at node parity (though Intel will call it node superiority, and TSMC will call theirs similar but cheaper).
A big question is the future; not sure why TSMC is reluctant to spend money on the newer machines that Intel bought.
Exist50@reddit
You're assuming TSMC isn't overestimating Intel either.
That was the same strategy they employed for 7nm, and it worked out very well for them.
zeey1@reddit
TSMC would never publicly do that.
Point is, you have to look at TSMC's claims and Intel's claims and then take whatever you want. Intel renamed its nodes by looking at TSMC; TSMC then switched names again.
Exist50@reddit
Lot of companies have. AMD themselves did so with Intel's server chips.
Look at what Intel's own product teams are using. Even in 2026, what's Falcon Shores on? TSMC.
They went from a node behind in naming to a node ahead. It's just marketing.
zeey1@reddit
There is no way Intel can put any current portfolio on Intel 18A. It's not ready yet. Anything designed before 2025 will be on TSMC.
Exist50@reddit
I'm talking 2026 products, more than a full year after Intel claims 18A will be ready.
zeey1@reddit
It's common knowledge that Intel has zero plans to use TSMC's "A16" process node in 2026. Anyone who follows technology news knows this.
Exist50@reddit
No, they're using N3, because it's still better than 18A. And lol, you're just lying if you claim any of Intel's actual '26 roadmap is public knowledge.
TwelveSilverSwords@reddit
But the first product with N3B (the Apple A17 Pro) shipped only in 2023Q3, only about a year ago.
If the rumours hold up, Panther Lake with 18A will ship in 2025Q4, two years after the first N3B product.
Anyways, if 18A is better than N3B and more similar to N3P, then can you say that 18A is 2 years behind?
Famous_Wolverine3203@reddit
They would be half a year behind. N2 goes into mass production 6 months after 18A does.
And it's very likely that you won't see any actual N2 products till late 2026 on Apple's A19.
tset_oitar@reddit
Yeah, the N2 transition is probably going to be quite slow. Celestial GPU, Venice Dense, ARM/Apple/QC SoCs, Rubin?, Nova Lake top end: they either won't use N2 or will launch in late 2026. AI companies will likely wait for A16 before jumping nodes. Can't imagine smaller players/AI startups using vanilla N2 either, due to sheer cost. Maybe Apple will be first with M6 in H1 2026.
nanonan@reddit
Sure, if it is N2 equivalent, but it looks like it will be N3 equivalent putting them years behind.
Famous_Wolverine3203@reddit
It is better than N3 in power. Equal to N3P in that regard. Considering we won't see N3P products till the end of 2025, they'd be at best half a year or one year behind.
Legal-Insurance-8291@reddit
2 years isn't exactly great either, especially in this industry. And quite frankly I don't believe it will actually see the levels of improvements you mentioned.
Geddagod@reddit
The problem is that even though it is coming a good bit late, TSMC doesn't exactly have anything dramatically better after this either, with the slowing down of area scaling.
Also, 18A being competitive with N3P is something TSMC itself believes, but it's not hard to believe TSMC highballed that estimate as well either lol.
Legal-Insurance-8291@reddit
TSMC will also release N2 in 2025 so by the time 18A comes out it will be almost instantly rendered obsolete.
Famous_Wolverine3203@reddit
But what product will use N2? It arrives too late for Apple's A19. And if N3 is anything to go by, we won't see adoption of N2 till late 2026 on Apple's A20.
Exist50@reddit
They could have someone else for the initial availability.
Geddagod@reddit
Ye, N2 itself is exactly why I said TSMC doesn't have anything dramatically better after this either. Compare N2 vs N3 vs previous gen node shrinks.
Also, rendered instantly obsolete? Srsly?
HeroYouKey_SawAnon@reddit
TBH 2 years in the modern chip landscape isn't bad at all. (Assuming it really is 2 years and parity performance wise.) TSMC gen on gen improvements are stalling, everyone wants a competitor, Samsung is in Samsung world doing their own thing, and Rapidus in Japan are partying and genuinely excited at the idea that they may be 2 years behind even that (2027).
Warma99@reddit
Would Arrow Lake Refresh be on 18A? Has the Tick-Tock cycle returned?
Exist50@reddit
No. No update to the compute die.
Geddagod@reddit
ARL-R on desktop is rumored to use the same node, PTL for mobile is rumored to use 18A.
Also, idk how comfortable I would be with calling 18A vs TSMC N3B a tick. PTL seems more like an optimization/tock (even if not core, with the surrounding architecture).
Vb_33@reddit
It's a refresh, it's gotta be on N3B again imo. It's not like Arrow Lake refresh is an Ivy Bridge/Broadwell type deal.
the_dude_that_faps@reddit
Is Panther lake really that far away?
Famous_Wolverine3203@reddit
Volume production of N3 in 2022 was a straight-up misdirection from TSMC. The original N3 node was scrapped. What we have is the fixed "N3B". That's why we didn't see any products on "N3" till late 2023. And everyone except Apple still hasn't adopted it.
TwelveSilverSwords@reddit
Lunar Lake is on N3B now.
Famous_Wolverine3203@reddit
Lunar Lake was taped out quite a while ago, considering how fast Apple iterated with M4 to go with N3E. And the fact that neither Nvidia, AMD nor Qualcomm is using N3B speaks to how poor the node was in the first place. Just look at the A17 Pro power increases.
Exist50@reddit
I don't think N3E is that much of a PnP improvement. IIRC, it was mostly cost and yield.
Famous_Wolverine3203@reddit
Yes, but the previous comment specified that Intel was 3 years behind because "N3" was in production since 2022. (TSMC pulling an Intel.)
I pointed out that we haven't seen any products on said node or its variants till now.
Exist50@reddit
Oh yeah, TSMC was also bullshitting about N3. Reality was the original N3 was canceled and replaced with N3B ~6 months later.
TwelveSilverSwords@reddit
The power exploded with M3 -> M4. 50% higher power consumption for single core (according to Geekerwan). Seems to be an issue of Apple's architecture, not the node.
Famous_Wolverine3203@reddit
Power exploded from the A17 Pro itself. That's an issue of N3B more than anything. And the fact that everyone stuck with N4P despite N3B being available speaks for itself.
TwelveSilverSwords@reddit
So doesn't this mean that N3E is also bad? In fact, that's what Geekerwan concluded from his M4 review. N3E wasn't a big improvement over N3B.
Famous_Wolverine3203@reddit
N3E was never supposed to be a major improvement over N3B in power. Less than 5%.
the_dude_that_faps@reddit
Seems to me like it's too soon to know given that the only data point we have is M4 and we have two variables: Apple's design and TSMC's N3E.
CalmSpinach2140@reddit
Power increased from M3 to M4 because of using HP cells and clocking the core to 4.5GHz. At same clock M4 is a bit more efficient.
Famous_Wolverine3203@reddit
The M4 uses HD cells for its E cores which have the same architecture as M3 E cores. No power efficiency improvements were noted at all.
the_dude_that_faps@reddit
It remains to be seen if the other improvements, like backside power delivery, will provide any additional benefits to products.
Exist50@reddit
That would be factored into the PnP numbers.
Exist50@reddit
I think you're overshooting in power. It's not that good.
Famous_Wolverine3203@reddit
You're probably right. This is a very optimistic projection. But N3P seems to be a reasonable assumption.
Probably not N2.
Hendeith@reddit
Intel isn't toast until 14A is bad. 20A and 18A were both meant to be relatively short-lived nodes. Intel is spending a lot on High-NA EUV machines that will be used only on 14A and beyond. Development of this node is what consumes a lot of Intel's funds and resources. If 14A is terrible, or delayed and delayed again, then Intel's foundry will almost certainly drop out of the bleeding-edge race.
hwgod@reddit
18A was supposed to be anything but short lived. What on earth gave you that idea?
Hendeith@reddit
Literally Intel. 18A was supposed to be replaced in 1.5 years with 14A. If that's not short-lived for a node then I have no idea what you expect.
hwgod@reddit
No, in 2 years, the typical timeline for a new node.
Hendeith@reddit
I have no idea why you say stuff like this when it's not in line with the official Intel timeline. 18A is mid-2025. 14A is 2026. That's 1.5 years at most, unless you want to stretch 2026 till mid-2027.
hwgod@reddit
18A is H2'24, according to Intel.
https://www.anandtech.com/show/17344/intel-opens-d1x-mod3-fab-expansion-moves-up-intel-18a-manufacturing-to-h22024
Hendeith@reddit
https://www.intel.com/content/www/us/en/newsroom/opinion/continued-momentum-intel-18a.html
18A is on track for production in 2025, 14A is supposed to start production in 2026. No matter what you say or how you put it there's no 2 years here.
hwgod@reddit
So a year late. Intel really loves moving deadlines every time they fall behind.
It won't be, for volume. The first possible intercept is '27.
Hendeith@reddit
Now you are just making things up. 18A is not a year late. 18A was always supposed to be production-ready in 2025. Even the link you posted said 18A was planned for 2025, but as a PR move Intel moved it to 2024 (which had no chance of happening).
18A is meant to be production ready in 2025. 14A is meant to be production ready in 2026. You are making things up.
18A was always on roadmaps for 2025. PR announcement from half a year ago was just a PR announcement and now even Intel ignores it saying that 18A is still on track for 2025. If it was planned for 2025 and will be in 2025 it can't be a year late as you claim. You could say that they didn't manage to push it out sooner than planned, but not that it's a year late.
nanonan@reddit
20A and 18A were both meant to be production nodes. One has already failed to do so. If the other is terrible or delayed I doubt their 14A plans will matter.
Hendeith@reddit
That's true, but as I said, Intel is banking heavily on High-NA EUV nodes. They bought all the High-NA EUV machines that ASML will produce this year. That's $2.4 billion total on 6 machines. They had very ambitious plans of starting 14A production in 2026.
If 18A is delayed or terrible then it will be a huge hit for their foundry, but at this point they have already invested a lot in 14A. If this investment turns out like 10nm did, or 20A now, then I can't see them trying to catch up.
ifq29311@reddit
yeah, how long before they cancel that and focus on 14A or sth, then cancel that and focus on 10A, and so on
Ghostsonplanets@reddit
18A won't be cancelled because that's what Panther Lake will use exclusively. But they'll focus right away on 14A because that's the one mobile ready and which Nova Lake should use.
tset_oitar@reddit
NVL is 18A-P, no? 14A mass production is scheduled for some time in 2027. That's next-generation timeline.
Exist50@reddit
Correct.
Famous_Wolverine3203@reddit
NVL is the tock generation right? New P cores.
Exist50@reddit
Yeah, Panther Cove/Coyote Cove.
miktdt@reddit
No Mont?
Exist50@reddit
Mont too. Was just responding to the specific big core.
TwelveSilverSwords@reddit
Which mobile chip vendor could use 14A? Mediatek? Qualcomm?
Famous_Wolverine3203@reddit
Unless 14A has UHD libraries, I wouldn't see mobile customers jumping in.
Ghostsonplanets@reddit
Who knows. I'm just sharing what Intel has said so far.
Legal-Insurance-8291@reddit
And Intel also said ARL would be on 20A, but here we are..
soggybiscuit93@reddit
18A and Intel 3 are gonna be around for at least the rest of the 2020s. Intel 4 and 20A were always planned to be one-year nodes that got scrapped. 20A was planned to be a single, low-end desktop die for a single generation.
20A is behind schedule. 18A is ahead of schedule, putting the launch of both nodes too close together to make pursuing 20A make sense.
Exist50@reddit
They're both behind schedule, but giving it another half a year to get healthy for 18A means you can at least make a product.
DaBIGmeow888@reddit
Yep, ever shifting goalpost
HonestPaper9640@reddit
It's On Tracks all the way down.
Helpdesk_Guy@reddit (OP)
Your comment is actually pure gold!
DaBIGmeow888@reddit
5 nodes in 4 years is achieved by cancelling a few nodes along the way. So much win!
Legal-Insurance-8291@reddit
It's factually 4 nodes in 5 years at this point.
Cheeze_It@reddit
My hope is that it's got really good performance per watt and has good idle wattage. Couple that with an iGPU and it'll be a great home/server CPU.
Apprehensive-Buy3340@reddit
Interesting tidbit.
Does anyone know what percentage of Arrow Lake was supposed to be made internally?
Exist50@reddit
They've been sidelining it internally for ages as 20A fell further behind schedule and perf projections. That's why the latest rumors showed it just for low-end desktop, but they had an N3 die that could do that sooner and better.
Geddagod@reddit
The wording in this announcement is a bit vague, do you think they cancelled 20A ARL outright, or do you think this is just pre-emptive spin on why ARL on 20A was going to be low end desktop only?
Exist50@reddit
Canceled outright. There'd be no sense in this announcement if it was "just" being marginalized. It's a perfectly sensible move given the health of 20A and the state of CCG budgets, but they wouldn't be doing this if 20A was healthy.
zeey1@reddit
Nope, they would still be doing it. That was the original plan to begin with, as Intel 20A is not a superior node to TSMC's offering. It was always a limited-run node.
nanonan@reddit
It was always limited to zero? How do you come up with this stuff?
zeey1@reddit
Limited to 5-6 months. That's the essence of 5 nodes in 4 years. Every Dick, Harry, Tom, Paul and baby knows that.
Where have you been all these years???
nanonan@reddit
Sure, but they fully intended to actually utilise 20A both for themselves and externally. Those plans failed, hence 20A is a failure.
zeey1@reddit
It was NEVER intended for external use, only for internal use. It was never viable for that reason, with a short life of just 8 months. It made no sense to do it, but Intel was too proud to go to an external fab.
nanonan@reddit
It was always intended for both external and internal use; Qualcomm for example was looking to utilise it. Then the external customers dropped them, then the internal customers dropped them, then the node was abandoned. None of that was Intel's plan, and it means zero return for the millions invested.
Exist50@reddit
No, canceling ARL-20A was certainly not their original plan, hence why it's news. If anything, they really needed it as a proof point for foundry customers. Who would sign on to use a node that Intel themselves can't justify shipping even after putting in the R&D?
Rumenovic11@reddit
Apparently it's saving them half a billion. So they could have weighed the pros and cons and decided that it was worth cancelling. Maybe if they were in a better financial position they would value holding their promise more? Who knows what the hell actually happens behind closed doors
Exist50@reddit
Where's that number coming from?
I completely agree that it makes sense to cancel ARL-20A, but it's not because 18A is doing so well.
Rumenovic11@reddit
Following the Citi tech conference on Wednesday, comment by analyst
ExeusV@reddit
Are those conferences available online?
Because I don't think this is it:
https://www.ibm.com/investor/events/citi-2024-global-technology-conference
ProfessionalPrincipa@reddit
There are so many people around here who were convinced 20A was coming within a couple of months because of Intel's ironclad roadmap. How could they be saving $500 million if it's getting cancelled at the eleventh hour?
Geddagod@reddit
Bummer
yUQHdn7DNWr9@reddit
20A is kill.
Dispator@reddit
Imagine if that was the entire announcement.
nugurimt@reddit
That's what they claim. Intel doesn't have a mass-production line up and running, so not even Intel knows what yields it'll have.
What we do know is that Intel's current internal volume is less than that of 2021, so its current Intel 7 yields still haven't caught up to its 14nm+++. Arrow Lake being exclusively on TSMC instead of using its own Intel 3/4 also hints at its manufacturing capabilities atm.
Legal-Insurance-8291@reddit
I mean, we know for a fact Fab 52 isn't complete yet, so obviously 20A/18A capacity is only from their R&D fabs.
Helpdesk_Guy@reddit (OP)
Dude, don't spill those precious red beans, okay?!
nugurimt@reddit
Yes, so it's way too early to talk about defects, as it's literally just R&D, not a real manufacturing line. Also, its former fabs have all been upgraded to at least 10nm, but its total internal volume is less than half that of before, which does suggest atrocious yields on its current nodes.
Kant-fan@reddit
Pretty sure Intel 20A was always meant to be fully internal and only for a small number of products. Now that they're in this financially undesirable situation, they would probably lose more money by using 20A on a small subset of desktop chips with low volume.
DaBIGmeow888@reddit
The bankers told Intel to cut the illusion and charade of 20A as an internal node and just outsource the remaining bit to TSMC.
imaginary_num6er@reddit
Sounds like they need an investment banker for a CEO
AHrubik@reddit
Do you want another Boeing? This is how you get it.
Helpdesk_Guy@reddit (OP)
Dude, we already have another Boeing! What else could go wrong by now? Not that I'd favor a bankster now, but…
zeey1@reddit
Makes sense, as long as 18A is able to reach the desired volume and work.
cmd__line@reddit
AVGO says it's got issues...
Willing-Log5759@reddit
Core 7, Core 9 = TSMC
anything below = Intel
that was the plan, if that's what you wanted to know
Apprehensive-Buy3340@reddit
I guess that's as close as we're gonna get without knowing how much each series sells, thanks!
DepthHour1669@reddit
0.40 per cm² would mean 1.02 defects per 14900K die area.
5nm is at D0=0.01 or so. This is very far from production.
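The arithmetic above follows from the standard Poisson yield model, Y = exp(-D0 · A): at D0 = 0.40/cm², a die of roughly 255 mm² (a back-of-the-envelope 14900K-class area, assumed here so the numbers match the 1.02 defects/die figure quoted) expects about one defect, and only roughly a third of dies come out defect-free. A minimal sketch, with the die area being an assumption:

```python
import math

def poisson_yield(d0_per_cm2: float, die_area_mm2: float) -> float:
    """Classic Poisson yield model: Y = exp(-D0 * A)."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-d0_per_cm2 * area_cm2)

DIE_AREA_MM2 = 255.0  # rough 14900K-class die size, an assumption
for d0 in (0.40, 0.15, 0.10):
    expected_defects = d0 * DIE_AREA_MM2 / 100.0
    print(f"D0={d0:.2f}/cm²: {expected_defects:.2f} defects/die, "
          f"perfect-die yield ≈ {poisson_yield(d0, DIE_AREA_MM2):.0%}")
```

This also shows why the thread's D0 milestones matter: dropping from 0.40 to the ~0.15 TSMC reportedly reaches before mass production roughly doubles the fraction of perfect dies on a die this size.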
Affectionate-Memory4@reddit
Industry convention has been that a D0 of 0.5 is considered a healthy node. It's not at production quality yet, but it's now at similar quality to how other nodes have been in the past.
For comparison, TSMC's N7 was about 0.4 at HVM-Q3. N5 was about 0.3 at HVM-Q3. So if the normal yield curve applies, 18A would be HVM-ready by mid-2025, where D0 should fall well below 0.2 (TSMC N3B is about this defect rate as of last quarter, which is to say they have yield issues), or ideally 0.1/mm².
dj_antares@reddit
Nice copy paste.
Affectionate-Memory4@reddit
No need to change what's already out there. Just realized my quote formatting didn't work though.
dj_antares@reddit
N5 was at 0.3 HVM-Q3. N3B is still at ~0.2 HVM+Q2 (last quarter).
It's not HVM-ready, but it's also not that far off. It should take another 2-3 quarters if everything goes relatively well.
phire@reddit
1.02 defects per 14900K doesn't mean that every single 14900K die is unusable.
A large chunk of the die is cache (or other SRAM blocks) and it's standard practice to include a few extra rows and fuses to swap them into place. So a defect that hits a cache line row is non-fatal.
And if the defect does hit something else, Intel can disable the core and sell it as a 14700K, 14700, 14600K, 14600, or even a 14900F if the defect hits the GPU. All these models are the same die as the 14900K, so Intel will also be disabling cores which are otherwise fine to make sales numbers.
It's only defects which hit something common like a memory controller or IO block that will make the die unusable.
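This binning argument plugs straight into the same Poisson model: if only defects landing in the non-redundant, shared logic actually kill a die, the sellable fraction is exp(-D0 · A_fatal) rather than exp(-D0 · A). The area split below is a made-up illustration, not a real 14900K floorplan:

```python
import math

D0 = 0.40            # defects per cm², the figure quoted above
DIE_AREA_CM2 = 2.55  # hypothetical 14900K-class die area

# Made-up floorplan split: only defects in shared logic are fatal.
FATAL_FRAC = 0.15    # memory controller, IO, fabric: no redundancy
# The remaining 85% (cores, GPU, SRAM) can be repaired or fused off.

perfect = math.exp(-D0 * DIE_AREA_CM2)                # sellable as a full 14900K
sellable = math.exp(-D0 * DIE_AREA_CM2 * FATAL_FRAC)  # sellable as *some* SKU
print(f"perfect dies: {perfect:.0%}, sellable after binning: {sellable:.0%}")
```

Even with ~1 defect per die on average, most dies remain sellable as a lower SKU under these assumptions, which is why a D0 that looks scary for a perfect-die yield isn't automatically fatal for the product line.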
Oxire@reddit
<0.4 is EXACTLY like TSMC's N10/N7/N5 defect rates before mass production. It takes TSMC half a year to reduce it to ~0.15 before they start mass production, and then they slowly keep reducing it to ~0.1.
If it was 0.15, they would have already started. They are not there yet, but they are ahead of schedule.
Helpdesk_Guy@reddit (OP)
Which one, the one from Intel as in Intel 3 or TSMC's N5-processes?
ACiD_80@reddit
Why is nobody mentioning that you can't just switch nodes so close to release... This was already planned quite some time ago.
Also, it's not bad news. It means 18A is already at a stage where they can just skip 20A.
DaBIGmeow888@reddit
You are a foundry; it's always going to be more expensive outsourcing to another foundry than producing it in-house.
Also, Apple, Nvidia, and AMD see Intel not trusting its own foundry; why would they abandon TSMC?
ACiD_80@reddit
"Also, Apple, Nvidia, and AMD see Intel not trusting its own foundry, why would they abandon TSMC?"
Aah, lol, don't be so silly/ignorant... Nvidia had no problem using Samsung's foundry.
Nvidia is already using Intel's advanced Foveros packaging fab in Mexico.
They just wanna barter... trying to use it as leverage. Won't work.
They can say whatever they want. If Intel has the best nodes, they will be fighting to get some capacity or they will lose their lead (especially for Nvidia this is important).
It's just that simple.
"It means HVM is going to be delayed even more than what Intel claims. In other words, it's going to have additional delays, as the yield maturity at this stage is worse than anticipated."
No, it doesn't. If you knew something about semiconductors you'd laugh at that article and the way the writer explains things that don't make sense.
By the way, just a small update: a Broadcom spokesperson already clarified that the article is false. They are still looking at it and have not made such conclusions.
-ciao.
DaBIGmeow888@reddit
Advanced packaging is easy stuff; that's why low-level China excels and dominates in packaging. Good job Intel, you are doing low-level stuff that China does.
Nvidia trusting Samsung fabs does not prove that Intel fabs are reliable. Intel would outsource to Samsung fabs well before doing it in-house; that proves my point: Intel doesn't trust its own foundry.
Intel needs to prove performance, and good yields, volumes, and lower prices. That takes many years to gain trust and prove reliability.
Broadcom is bound by NDA, so it's officially unable to issue an official statement. It's clear that this is unofficial, anonymous engineers leaking the 18A troubles. The fact it's consistent with SoftBank's, Qualcomm's, and others' assessment of the shoddy state of Intel speaks volumes.
ACiD_80@reddit
Well, easy non-advanced packaging is... well, easy, duh!
Intel is aiming for performance and the leading edge, not the most volume (yet).
Those things are very different.
DaBIGmeow888@reddit
It's a business, you still need to sell on performance and price. What good is it if you are 50% higher cost per unit because your yields suck? Nobody is abandoning TSMC for higher prices.
ACiD_80@reddit
18A will be very profitable, actually. Much cheaper than previous nodes. I forgot the exact numbers, but it was a huge difference.
Intel's advanced packaging is already more profitable than TSMC's CoWoS.
If you want the best performance and Intel has the best node, then you have to go with Intel or risk losing that advantage to the competition, who will go to Intel.
It's that simple.
nanonan@reddit
You know what would offset that enormous estimated cost? Actually selling your own silicon instead of a competitor's.
ACiD_80@reddit
They will, don't worry.
nanonan@reddit
They would a lot sooner if 20A was viable to go to full production, which it clearly never was.
ACiD_80@reddit
Well, 20A would have been sooner... But their focus/target has always been 18A. And that was already looking good, so they made the wise decision to push it out earlier by skipping 20A.
It's not such a huge deal... a couple more ARL SKUs using TSMC... so what.
nanonan@reddit
Nothing is being pushed out early as far as I can tell. Completely failing to develop a single product for your node is unprecedented in the history of semiconductors, as far as I can tell. I'd call an unprecedented failure at least a medium deal, if not a big deal.
ACiD_80@reddit
They are about 6 months ahead of schedule with 18A
Exist50@reddit
Yes, because 20A had been doing poorly, so N3B was CCG's main plan for a while.
No, it really means that 20A is so bad it does them more harm than good to show it.
No, it said it missed Broadcom's expectations/requirements. They'd be producing something in '26 at the earliest, and they still don't think 18A will be good enough for that.
ACiD_80@reddit
You again...
20A wasn't even a thing yet when they made those decisions, so no, it had not been performing poorly...
They cannot claim 20A to be bad and 18A to be good, as they are very related to each other.
Nowhere have they said that 20A is bad. In fact, they said that 18A has a defect rate of 0.4, which is good/healthy at this point.
So no, 20A is not bad...
All Broadcom said, according to the article, is that 'the company concluded the manufacturing process is not yet viable to move to high-volume production'. Which is weird, because no one expects high-volume production yet at this point. That is planned for somewhere next year.
It's a very 'strange' article, and people doubt if this is real.
nanonan@reddit
If going from all of Arrow Lake to only servicing low-end Arrow Lake didn't clue you in that it wasn't performing as expected, the cancellation should.
ACiD_80@reddit
Again, that decision must've been made a long time ago... You can't just phone TSMC and say you need x more wafers next month... They are fully booked for years.
Also, to quote Dr. Ian Cutress, whom you probably know, I hope:
"If Intel had known how good Intel 3 would turn out to be, they would probably not have used TSMC's nodes."
Also, they are skipping 20A because 18A is ahead of schedule and already looking good.
18A is actually a more refined/advanced version of 20A, so they can use the same fabs/machines no problem. It saves them around $500,000,000. It's a good decision!
nanonan@reddit
You can easily repurpose wafers, say from your delayed GPU line. Sure, they booked capacity years ago. They also planned ARL to be on 20A. That didn't work out, so they had to repurpose their TSMC allocation to cover up their failure. That's not a good sign of anything.
ACiD_80@reddit
Those plans for ARL on 20A... it's been quite some time since they last said that. That also caused many people to speculate it was all moved to TSMC. This did not happen recently.
Exist50@reddit
That's just false. They've changed its position in the roadmap multiple times, this being the final one. And 20A has been in development for a very long time.
Yes, both are bad. 20A is just unusably so, and they're hoping 18A is close enough behind that another half a year to cook will be enough to actually ship something.
Then you didn't read the article. One other key quote:
And they're talking about test chips today. So an actual product would be '26 or later, and Broadcom still doesn't think 18A is good enough.
No, but they expect it to be ready for when they need it, and they don't think Intel can meet that expectation.
Intel claims the end of this year.
"People" being those who've continually denied any and all bad news about Intel for the last half a decade.
Sure, Reuters, Qualcomm, and literally everyone else in the industry are all conspiring to lie about the health of Intel's nodes for... some unspecified reason? Is it that hard to acknowledge they're not doing as well as they claim?
ACiD_80@reddit
Those TSMC wafers needed to be reserved a long time ago.
You are a very tiring person, just making up stuff, bending facts so they agree with your fantasy and constantly spamming nonsense. I have said all I wanted to say about it.
DaBIGmeow888@reddit
If you can't argue against his argument, there is no point in resorting to personal attacks. It doesn't win you any arguments; it makes you look impatient and emotional.
DaBIGmeow888@reddit
It's not going to be viable next year either. Just look at how 20A got cancelled.
Exist50@reddit
To some degree, but the exact conditions have changed over time, especially as Intel's N3 products were delayed. And with ARL-R on the roadmap, N3 wafers are not a problem.
For their needs, at this time. Which are different than their needs at PRQ. It's funny how your best argument is that Broadcom doesn't know the very basics of choosing a fab, but you, with no information but Intel marketing, do. Lol.
Lmao. Sorry I keep correcting your false and often delusional claims.
I said the opposite. It's clear at this point you've once again resorted to lying when reality doesn't support your narrative.
DaBIGmeow888@reddit
Thank you for speaking the truth to combat these weirdo cult loyalists and their mental gymnastics. Don't forget, you have SoftBank, Qualcomm, Broadcom, all saying the same thing. Yield issues, significant delays, etc..
DaBIGmeow888@reddit
Dang, you got downvoted to shit.
makistsa@reddit
u/Exist50 can you stop spamming every intel post?
nanonan@reddit
He's been saying this for months, if not years. Now that he's proven 100% correct, you want him to shut up?
makistsa@reddit
Is he MLID? LOL
nanonan@reddit
Well seeing as his predictions were accurate I'd say no.
Helpdesk_Guy@reddit (OP)
What's wrong with him commenting and answering, when his replies seem to be fairly reasonable anyway?
Is he somehow not allowed to post like you and others do? It's hardly spamming, when most of his posts end up being upvoted for their fair reasoning, no?
makistsa@reddit
100 posts/hour attacking a certain company with words like "broken" and "lying", without any sources. Saying that he has his own sources.
1 hour ago, I saw him doing it again, I called him out, and he instantly deleted a lot of his comments and blocked me. Why did he delete so many comments? He knows what he is doing.
nanonan@reddit
When blocked, that user's posts will appear deleted. Log out and you can see them.
mockvalkyrie@reddit
Being an Intel hater isn't against the rules or anything. Even if it's poorly sourced.
makistsa@reddit
Intel haters are doing it so systematically. It's a job.
skycake10@reddit
Believing that dedicated haters are paid actors is much more deranged than being a dedicated hater
marmarama@reddit
The tragedy of the 2024 internet is that this is no longer true. Paid actors are everywhere on the internet, there's a whole industry dedicated to it.
Then you've got those people who do it because they want to influence the market, because they have money tied up in stock prices going up or down.
Obviously bandwagoning is everywhere too, and that's been the case since the dawn of social media, but IMO the problem with anonymous influence is getting worse over time, while you'd expect bandwagoning to stay more or less constant.
By default, it's easiest to assume that everyone on anonymous social media in 2024 has a monetary agenda of some kind, unless proven otherwise. I hate it, and it's corrupt as fuck, but that's where we are.
skycake10@reddit
Even if that's true, it's a way of living that makes you miserable for basically no advantage. Stop worrying so much about it.
marmarama@reddit
Sounds like something someone paid to downplay the "influence industry" would say. (I should probably add /jk).
Honestly it irritates me, because the internet was different once, but doesn't make me miserable. No more than knowing that hospitality staff aren't really pleased to see you, or that death is final, there are no re-runs, nothing but empty void with no thoughts.
It's just one of those things that is.
mockvalkyrie@reddit
Some people just need to bandwagon to feel good. Same thing happened in the bulldozer era of AMD
TwelveSilverSwords@reddit
Exist50 is a veteran of this sub. He often makes excellent points, and we respect him for that.
III-V@reddit
Yep. I don't always agree with him, but he tends to see things that are in my blind spots.
Helpdesk_Guy@reddit (OP)
Pretty much sums it up, yes! u/Exist50 is as much a veteran as others, and his comments are always a welcome orientation on what is FUD or a blast, even before the PR dust could settle (for me, at least).
His remarks and ability to sharply cut through PR nonsense in an instant are as beloved as u/COMPUTER1313 and how he always has some viable background story to tell. In most threads, both of them are mostly found persistently at the top.
DaBIGmeow888@reddit
Why do you make it so personal?
Helpdesk_Guy@reddit (OP)
So a rather personal issue then. Why don't you just chat him up to rest the disputes?
makistsa@reddit
I am done with this sub. I don't know if you are trolling or not. A personal issue, that a few people spend every hour of their day attacking Intel? No one with a job could do it.
Most people come here to talk about hardware. Half of the comments are from people with stock options.
Famous_Wolverine3203@reddit
He's been right about almost all things regarding Intel.
He rightfully called out 20A being a dud node when everyone said otherwise, including me.
Dude's legit. Some people have their heads so far up their arses they can't acknowledge that he's right about Intel's failings so far.
DaBIGmeow888@reddit
His replies are cogent and fact-based.
FembiesReggs@reddit
I don't like this future where the only fabs in the world are located on an island that's in the top 10 list of most likely WW3 causes.
Worldly_Apple1920@reddit
Taiwan was just an excuse to get free taxpayer money. Even Intel doesn't believe the China-Taiwan threat story, which is why they're outsourcing all of 20A's products to TSMC in Taiwan and increasing their manufacturing in Taiwan by 30%. It just shows you how gullible the US gov't is.
soggybiscuit93@reddit
Having all of the world's leading edge chip manufacturing on a single island in a single country is a crazy amount of risk over all. Advanced chip manufacturing is arguably already as important as oil.
Even if not war (which is a greater than 0 chance), there's natural disasters, revolution / political instability, etc. Countries collapse sometimes. It's incredibly risky.
Doesn't even make sense.
And also, just because the likelihood of Intel's node timeline slipping is greater than that of WW3 starting doesn't mean the latter is an impossibility. If anything, COVID exposed just how fragile the international supply chain is. It's not prudent to believe and behave as if this recent phenomenon of globalism is a permanent feature.
Exist50@reddit
Samsung exists, and TSMC has fabs elsewhere.
soggybiscuit93@reddit
TSMC's overseas fabs are not leading edge. Having only two fab companies that can make remotely leading-edge chips would absolutely be disastrous long term, geopolitically.
Worldly_Apple1920@reddit
US gov't hasn't even disbursed any CHIPS Act dollars to Intel yet, so it doesn't even sound like a priority for the US gov't like you make it out to be.
cp5184@reddit
Don't worry, the fab Intel's relying on right now is like 10 miles from Gaza... Tons of pointless, stupid (for Intel and other participants) drama around it too.
Dispator@reddit
I mean if TSMC is destroyed or fails due to war or something then Intel Foundries likely will be in prime position depending on how things play out. It's not like the experience and machines and intellectual property will go away.
Worldly_Apple1920@reddit
I think in a global nuclear WW3, you have bigger problems. Intel has to compete without a doomsday scenario.
Wh1teSnak@reddit
Are they going with N3B or N3E for Arrow Lake?
Helpdesk_Guy@reddit (OP)
Likely N3B, I guess; the allocation must've taken place several months before their announcement now to scrap their 20A node … which makes their claim to ever have had a working 20A process all the more dubious and questionable in the first place.
uKnowIsOver@reddit
N3B, not that it matters much between the two. They are more or less the same node:
N3B is denser, N3E has somewhere around 5% better power and performance on paper.
Wh1teSnak@reddit
Yeah I know but my understanding is that N3E is much cheaper than N3B. If they are going with N3B next gen is gonna be more expensive or their margins are gonna be very thin.
Ghostsonplanets@reddit
Intel got first share contract with TSMC for N3 back in 2019/2020 for the original N3 node (turned into N3B) under Bob Swan, when the foundry operation and R&D was being downsized. That's why they're locked into N3B for LNL and ARL. These projects were set in stone years ago.
Exist50@reddit
Nah, it's not a contract thing. N3E isn't design compatible with N3B and it's too big a headache for them to migrate. The N3 -> N3B transition was bad enough.
Kant-fan@reddit
Probably N3B same as Lunar Lake.
Fast_Wafer4095@reddit
Why is there so little focus on this? Even if Intel overall loses to its competition due to always being too late, proving that these new technologies work should be big hardware news, or am I wrong?
Helpdesk_Guy@reddit (OP)
Do they though?! Where's the actual proof that these new implementations of Intel's BSPD aka PowerVia and their GAA adoption RibbonFET really work? They canceled their 20A node without anything being manufactured on it, no?
So just because Intel says so doesn't make it true. They've been lying to everyone for years.
I could also just claim that I've invented a fully working time machine in my basement, and that I scrapped it completely after a few journeys through time (and fixing a few things …), having assessed that it could end up bringing more harm than good.
Granted, I have no witnesses and no one ever saw it but myself. Yet everyone still has to believe me and trust my claims, even if I can't prove anything I claimed about said time machine. And if someone disputes that I had a fully working time machine and did in fact engage successfully in time travel, they're just hating for no reason!
See how utterly ridiculous my claims become when I demand to be believed about said time machine, despite being unable to show proof of it ever existing, yet demand the right to dictate the truth about it regardless?!
gelade1@reddit
You need to know what those new designs are meant to do. Then look at what their competition already has, and you will see why it's really not that big of a deal.
Ghostsonplanets@reddit
GAA is already out in commercial products fabbed on Samsung SF3. PowerVia is noteworthy, but the industry at large isn't as interested, especially because TSMC has already said they will only release their more advanced buried power rail solution after they introduce a GAA-based node.
Exist50@reddit
PowerVia is a headache. Might be necessary eventually, but the ROI is mixed today. Intel at one point wanted to produce ARL on that Intel 3 + PowerVia "test node", but the design teams refused.
-protonsandneutrons-@reddit
To me, these are "tools" to reach PPA targets (power, performance, area), like DUV / EUV / High-NA. What tools are used is less interesting than the actual PPA delivered.
It's not like someone would pick a node only because it used EUV vs DUV.
Kind of like comparing CPUs based on frequency alone: it's just a tool. The actual data is performance & power.
buttplugs4life4me@reddit
I think because backside power delivery should've enabled higher density, and the fact they're still behind TSMC even with it is kind of underwhelming.
my_wing@reddit
Questions among questions ???
The PR statement is that D0 < 0.4. If this is the case, why does Intel not produce Panther Lake NOW?
If Meteor Lake is an indicator of Panther Lake's compute tile, i.e. 8.3mm * 8.9mm, then using the Murphy's Model of Die Yield from this site:
https://isine.com/resources/die-yield-calculator/
the yield is 70% (Intel 18A). And if this rumor about TSMC/Apple (which even mentions Samsung being at 50%) is anything remotely true:
https://technode.com/2023/07/17/tsmcs-3nm-yield-rate-reportedly-just-55-with-apple-only-paying-for-qualified-circuits/
then what is going on, Intel? Please release Panther Lake NOW.
70% yield is way better than 55% and way, way better than 50%. What is the mind of Intel management ???
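For what it's worth, the Murphy-model math above is easy to reproduce. A minimal Python sketch, where the 8.3 mm x 8.9 mm tile size and D0 = 0.4 defects/cm² are the comment's assumptions rather than published Intel figures (note the formula actually lands closer to 75% at these inputs, so the 70% above likely used a slightly different D0 or model):

```python
import math

def murphy_yield(die_w_mm: float, die_h_mm: float, d0_per_cm2: float) -> float:
    """Murphy's model of die yield: Y = ((1 - e^(-A*D0)) / (A*D0))^2."""
    area_cm2 = (die_w_mm / 10.0) * (die_h_mm / 10.0)  # mm -> cm on each side
    ad0 = area_cm2 * d0_per_cm2                        # expected defects per die
    return ((1.0 - math.exp(-ad0)) / ad0) ** 2

# Hypothetical Panther Lake-sized compute tile at the claimed D0 < 0.4 /cm^2
y = murphy_yield(8.3, 8.9, 0.4)
print(f"{y:.1%}")  # roughly 75% under these assumptions
```

The point of the comparison survives either way: at this die size, any D0 around 0.4/cm² puts yield comfortably above the 50-55% figures rumored for competitors.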
Separate_Paper_1412@reddit
Nah it's dead and buried. Even Samsung has better chances now. Intel might as well start selling off their foundry business.
NerdProcrastinating@reddit
On the bright side, it makes Arrow Lake more appealing from the get go without having to be as concerned about how it performs relative to Lunar Lake.
Helpdesk_Guy@reddit (OP)
Yup, it's only good for their reputation to eventually bring reasonable products, as the excessive power draw of their line-up brought them more than enough trouble and made them the butt of many jokes these last years.
I guess for most consumers it doesn't matter whether it's manufactured by them or TSMC (or Samsung, GloFo and others for that matter), as long as it's competitive and doesn't constantly need attention to cooling over worrying heat dissipation.
pianobench007@reddit
Except Intel products and packaging consistently run COOLER than equivalent AMD products. So it comes down to better packaging overall for Intel.
This is despite using slightly more wattage. So whatever wattage gain AMD has over Intel, the advantage is reduced due to the packaging advances Intel has.
Helpdesk_Guy@reddit (OP)
You know what heat-density is?
pianobench007@reddit
I am aware that Intel's IHS is larger than AMD's current package, which contributes to Intel CPUs running cooler. Better dissipation.
Both AMD and Intel complement each other. Seriously. Without AMD pushing tiles, Intel may have never gone that route.
Without ARM pushing a big.LITTLE design, Intel might not have launched a P/E-core design.
And if Apple hadn't done an on-package-memory CPU design, Intel might not have launched Lunar Lake with on-package memory, thus eliminating the dedicated VRM for RAM and saving power.
So I'm aware. But Intel also pioneered FinFET, which the industry then followed. Intel is pioneering backside power vias and many other new technologies.
If Michael Jordan hadn't worn Nikes, kids today might all just be wearing Uggs.
The point is that competitors complement one another. AMD could improve on the temperature design, but they decided against it for overclocking headroom.
It's known.
Helpdesk_Guy@reddit (OP)
Uhm, are you trying to play the buffoon here? xD That really has not much to do with the heat spreader …
If anything, by that logic AMD's SKUs would have to run cooler, since the heat is spread over a way larger (surface) area.
The smaller the process and the higher the density a given SKU's dies are fabbed at, the higher the local heat density, which makes heat dissipation tremendously harder. It's basically the same wattage in a much smaller spot, hence more difficult to cool.
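The heat-density argument boils down to simple arithmetic: the same power over less silicon area means a higher heat flux at the cooler interface. A tiny sketch with made-up illustrative numbers (not real die sizes or power measurements):

```python
# Heat flux (W/mm^2) is what limits how fast a cooler can pull heat out of a
# die, not total package power by itself.
def heat_flux_w_per_mm2(power_w: float, die_area_mm2: float) -> float:
    return power_w / die_area_mm2

# Hypothetical example: 120 W dissipated on a large die vs. the same 120 W
# concentrated on a die shrunk to half the area by a denser process.
big_die = heat_flux_w_per_mm2(120, 200)    # 0.6 W/mm^2
small_die = heat_flux_w_per_mm2(120, 100)  # 1.2 W/mm^2
print(big_die, small_die)
```

Halving the die area at constant power doubles the flux, which is why a denser node at the same wattage is harder to cool even though nothing about the cooler changed.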
pianobench007@reddit
There is much more to heat dissipation than that. This forum isn't really set up to discuss that freely.
You can just see the testing results yourself. And by the way, it's not a major issue, at least for consumers. At least for desktop enthusiasts.
We've seen that desktop enthusiasts still just want overclocking features and things like that. The other end just wants a fast chip, and cooling it is now up to them.
My Intel 14nm heating unit mostly runs at a max of 50C, while my other, efficient PC running a Ryzen 7 5800X runs at 63C with the same cooler. But the desktop sensor reports it using less wattage.
Anyhow, go figure.
VenditatioDelendaEst@reddit
Either your ambient temperature is below freezing, or you have never fully utilized either CPU.
Temperatures reported by the CPU are not comparable between vendors (or even between generations, across sufficiently large spans of time), and there is no reason to believe that "80Ā°C" from (for instance) a 12900K's telemetry has the same relationship to the highest temperature on the die as "80Ā°C" from a 5800X's telemetry.
Helpdesk_Guy@reddit (OP)
That has exactly nothing to do with ARM itself or their competitive standing, like at all.
Intel's overly glorified Hybrid architecture, aping ARM's well-established heterogeneous big.LITTLE/DynamIQ approach, is a mediocre and desperate stopgap solution, born solely out of necessity due to everlasting yield issues (they couldn't fab any bigger dies for good) …
It was for when Intel urgently had to increase core counts to try to keep pace in AMD's core war and not fall any further behind than they already were at that point. It's a hot mess, utter chaos and a scheduling nightmare, where consumers have to pay dearly for Intel's panic-driven fallout from competition and have suffered horrendous latency issues ever since …
They didn't regress the core count between Comet Lake's former 10C and Rocket Lake's 8C just for fun (which was unheard of before!), but because they couldn't viably fit any more cores on their 14nm, neither physically nor yield-wise, not to mention the outright insane power draws their SKUs already had at stock (253W TDP).
It's said that the best inventions are born out of necessity, that necessity begets ingenuity and is the mother of all invention. Yet Intel's Hybrid thingy is the second-worst implemented architecture to date (counting the industry's single worst, their prominent dumpster fire Itanium, for good measure) and as technically flawed as it can be, at least from the scheduling side of things.
Ever since, we've had slowdowns of 500-1500ms of interprocess latency on Windows, horrendous frame times and general sluggishness, which Intel never really addressed nor owned up to, as they never cared about it in the first place. The number one goal was to increase core counts, no matter what. A chiplet-based architecture would've been a thousand times better than their garbage hybrid mess, where they were stupid enough to intermix different cores with completely different ISA extensions and features (AVX-512, Hyper-Threading etc.).
Though Intel just didn't have AMD's very magic glue, which enabled a flawless and as-good chiplet implementation in the first place (InfinityFabric), so they instead brought their tiles approach (which is a quite well-implemented version of AMD's chiplet approach, given what Intel had at hand and at their disposal). Nope …
No, Intel was unfortunately stupid enough to even mess up their rather good copycat of AMD's chiplets by mixing in their already bad enough theoretical idea of a heterogeneous architecture, only to eff it up completely …
If Intel had only implemented Foveros/3D and (for sanity's sake) left it at that for once, to increase the amount of actual cores, they would've been just fine, and consumers would've gotten a pretty much as-good architecture to compete against AMD's ever-increasing core counts using their chiplets. Yet it's Intel after all, and they just couldn't resist the urge to completely f*ck up their quite good idea of their AMD-inspired chiplet copycat Foveros with their hybrid approach, and ruin it for everyone involved!
Talk about dangerously lethal, self-inflicted wounds that one tries to stop from bleeding with vigorously salted patches while they're trying to heal … and then wondering why it takes so long and why the debt holders alongside the official bailiffs come knocking.
You really just can't make up their stupidity; they have to constantly display it instead.
Helpdesk_Guy@reddit (OP)
Of course, competition is necessary. The worst example of its necessity was Intel only offering quad cores for a decade. Though AMD never pushed tiles, but chiplets. No, that's not the same at all!
I'm not even being pedantic here, as implementing chiplets rather than tiles requires a viable interconnect (the underlying fabric to interweave the compounds) in the first place. Intel lacked AMD's InfinityFabric, nicknamed MagicGlue™, and hence had to use tiles for that very reason, instead of going the same route with a chiplet approach.
Simply put, chiplets are smart (for lack of a better word), while tiles are rather dumb. Not because one comes from AMD or Intel respectively, which would entail another ideological trench warfare, but because chiplets are designed as completely autonomous pieces of silicon and act more like a platform controller hub (PCH) than a processor core or another part of embedded silicon.
Contrary to chiplets, tiles are way less autonomous and need to be implemented into 'something bigger'. They need a more take-it-by-the-hand approach and something overarching that takes control over them, since they're far less designed to act on their own and show self-reliance compared to chiplets.
To put it simply, picture a chiplet more like a self-reliant and autonomous add-in card in a PCIe slot, which can fully autonomously engage in compute loads, can in and of itself request and receive process loads and compute streams, and schedules brought-over or requested compute loads for itself (to send them out again once processing has finished), all while acting rather self-governing. Very much like a SCSI or FireWire device.
Tiles, on the other hand, are way less self-governing and act more like an embedded FPU. A tile is hopelessly useless and lost in place if there's no other overarching piece of silicon commanding it, taking charge and processing orders to be computed for it. Call it somewhat 'headless'. That's why their tiles have to be embedded into (onto, physically) a 'smart interposer' or base tile, which already implements the very governing function chiplets have on their own.
Intel's interposer tiles are NOT just mere physical bonding tiles, but first and foremost take over processing, scheduling and commanding functions for the tiles that get embedded/bonded atop.
That's what Intel's Foveros is: face-to-face (F2F) chip-on-chip bonding through extremely fine-pitched 36-micron microbumps. Talk about some fancy publicity-driven '3D packaging', with marketing again giving it a misleading packaging spin, when in reality it just incorporates the Active Interposer or Base Logic Die they talk about. It's not really a packaging thing, as the ability to stack them (3D) is only a consequence of the very necessity of having their active interposer/base logic die in the first place.
That being said, and all arrogance aside, it's AMD's InfinityFabric which is the very magic here, not really the chiplet approach in and of itself, that has given Intel a hard time and been a tough nut to crack ever since AMD came up with it. Since (every latency-related issue aside, and just for the sake of getting the argument and explanation across) you can put a chiplet pretty much anywhere on a PCI-Express bus, and it will just work …
A tile, not so much … since it lacks the overarching processing 'header' and administrative apparatus/system to be self-governing.
Having said that, AMD did just exactly that when they put their southbridge/chipset and controller chiplet back onto the motherboard with Ryzen 3000, moving their I/O-die chiplet (acting as the chipset) onto the board again as the AMD X570 chipset. They basically released that die from the on-chip compound structure of chiplets on the CPU, put it back onto the board, and separated it from the rest of their on-chip (CPU) chiplet dies.
You think you could just move a tile (or any of Intel's tiles for that matter) off the on-chip CPU structure and put it onto the board? I highly doubt that, since it wouldn't act the same. It lacks the governing structure/system.
That's, by the way, the very reason why Intel struggled so hard to bring out anything Sapphire Rapids and Ponte Vecchio: they never had the very interconnect they needed in place. Their attempt at a chiplet-like approach using their tiles, while having to take PCIe 5.0 as the very system interconnect (CXL was proposed before PCIe 5.0 was even fully drafted), backfired hard and brought them a lot of validation trouble, needing unheard-of amounts of samples and revisions until it ran at least stably, right?
Exist50@reddit
LNL's major benefit is the SoC overhaul from MTL. You don't get that with ARL. PTL should be pretty good though, even if the node isn't best in class.
NerdProcrastinating@reddit
Ah, okay I see that ARL (H at least) appears to mostly be a CPU tile refresh of Meteor Lake.
That's a shame. I was hoping more for a variant of Lunar Lake that would be more suitable for a future Framework mainboard.
der_triad@reddit
It seems PTL backtracks a bit on what makes LNL appealing though. As far as I know, PTL does not have the LPE cluster on the same die. The LPE cluster for PTL will definitely be much more capable than MTL's, but you're still forcing a D2D transition when moving off the LPE island. It seems unlikely that PTL will have the same efficiency as LNL in the 9-15W range :/
Exist50@reddit
That is not the case! PTL does have different die to die cutlines, but is way closer to LNL than MTL/ARL.
And anyway, that die to die transition isn't so problematic by itself. MTL's issues run deeper.
Probably not quite, but I think we'll see a large improvement in battery life and overall efficiency in the 15W+ range.
Legal-Insurance-8291@reddit
This whole argument is just getting ridiculous. 18A is a refinement of 20A so to say 20A is trash and behind schedule, but 18A is amazing and ahead of schedule is just a logical absurdity. Dunno how anyone can still be believing these lies at this point.
AlwaysMangoHere@reddit
Nobody is saying 20A is trash and behind schedule. It's cancelled because spending money on 20A production just for a low-end Arrow Lake doesn't make sense.
DaBIGmeow888@reddit
It made sense because it demonstrated to potential customers that Intel Foundry Services is capable of advanced nodes. If you can't even do that, how do you expect customers to abandon TSMC and join Intel when even Intel is outsourcing all of 20A's products and more to TSMC?
Johnny_Oro@reddit
AKA a billion-dollar marketing device with little profitability otherwise. Why keep investing in that when there's a more important project going on and financial resources are thin? If customers and the government wanted 20A, they'd have given Intel the capital investment Intel needs. But 18A is the milestone they really want to see.
DaBIGmeow888@reddit
Pat Gelsinger stood next to a 20A wafer in front of investors. Now conveniently 20A doesn't matter and it's a genius decision to bypass it, despite having spent hundreds of millions on production already. It's likely Intel was forced to abandon 20A due to insufficient yields, a problem that has plagued Intel for a decade since 14nm and 10nm, and if SoftBank, Broadcom, and Qualcomm are any indication, it's a problem that plagues 18A too. In other words, Intel needs to execute flawlessly or it's screwed.
Johnny_Oro@reddit
SoftBank left because Intel divested from ARM due to their poor financial reports. Broadcom only reported that 18A is not ready for high-yield production of their wafers as of now, which makes sense because it's only supposed to be ready next year, and Broadcom's own design is probably not even finalized yet; they haven't even reached a conclusion yet, as explained by the company's spokesperson. Read the news instead of just the headline. And Qualcomm already left Intel's 20A process over a year ago because they'd rather rely on the readily available TSMC process. That just gives Intel a bigger incentive to abandon 20A and leap straight to 18A, which was reportedly ahead of schedule.
In other words, you're making up a narrative out of incomplete or inaccurate information. Is 18A going to be successful? I can't tell, but there's no publicly available evidence yet that it'll be a failure.
DaBIGmeow888@reddit
SoftBank left a prospective AI-chip partnership with Intel due to Intel's inability to hit manufacturing timelines for volume and speed. It has nothing to do with ARM; that's a separate divestment event, since Intel needs money.
Broadcom's engineers questioned the viability of 18A due to low yields. You can argue they have a few more months to work out the kinks before HVM, and that low yields are acceptable at this stage. I am not optimistic, given the cancellation of 20A, the outsourcing to TSMC, and the decade-long yield issues with 14nm/10nm.
In other words, Pat Gelsinger's claim of good yields at this stage for 18A is refuted by unofficial Broadcom comments, which suggest HVM will be delayed further than it already is.
Johnny_Oro@reddit
Those are the articles I addressed. They are scant on details, unlike what the headlines lead you to believe, especially the second one. They did not refute anything. And you should've been less optimistic if they'd stuck with 20A despite the known setbacks and lack of customers compared to 18A.
nanonan@reddit
There is zero reason to abandon something that is perfectly good that you have invested millions into. It's obviously not perfectly good.
nanonan@reddit
If it wasn't trash it would be useful for more than just low end parts.
Legal-Insurance-8291@reddit
Originally it was supposed to be all of ARL. Then it was just low end, now it's canceled altogether. Read between the lines here. If it were great then why would it be only the low end? If it were on schedule why is it completely canceled?
soggybiscuit93@reddit
Could be a lot of reasons. Could be like 10nm SF, where it can be decent in certain scenarios, but better options exist. Could be that going all N3B is a cheaper option than dual sourcing
IlliterateNonsense@reddit
Please stop, you are going against the narrative
Helpdesk_Guy@reddit (OP)
In Germany, they have a famous saying for situations like that: "Some parts of these answers would unsettle the public."
It's diplomatic, implies possibly anything, and primes the recipient for the worst. They'll quickly stop asking any more.
That's how you handle problems like a boss: you can just toss the root cause, and no one dares to ask any further.
1600vam@reddit
20A served two purposes for Intel:
Because 18A is on schedule and healthy, there is no need to productize 20A. The limited volume that was tentatively planned for 20A would have introduced more cost, but would be worth it if 18A was late and more volume was expected on 20A. But that's not happening, so Intel is reducing costs by skipping 20A.
Grand_Can5852@reddit
Unless they plan to scale back 18A specs in the future so it's closer to what 20A was supposed to be.
Kant-fan@reddit
But who is actually saying that 20A is trash and 18A is amazing? Neither Intel nor critics are claiming both things to be true.
DaBIGmeow888@reddit
The /r/Intel subreddit is saying that.
DaBIGmeow888@reddit
Yes, nobody cancels a node halfway into production; these manufacturing choices are made years in advance. If it was such a genius move, it would have been done before spending on production.
Astigi@reddit
TSMC will be earning big from Intel for a very long time.
Helpdesk_Guy@reddit (OP)
A recent report from Goldman Sachs stated that Intel is increasing outsourcing dramatically between 2024 and 2025; I think it was over $15Bn in that time frame: $5.6 billion and $9.7 billion in 2024 and 2025, respectively.
Source: HardwareTimes.com Intel May Outsource Processor Chiplet Production Worth $15 Billion to TSMC Between 2024 and 2025
That number is now only going up, I guess. Wonder how they're going to pay for it …
As others already said here (and were downvoted by the rather delusional, of course), Intel is really just gaslighting investors at this point and giving shareholders the runaround. Since the decision to outsource ARL had to be taken months before already, for a launch in October, it must have been developed with TSMC's PDKs in mind, right?
ifq29311@reddit
Has Intel actually shipped any process between 10nm and 18A, or has it all been canceled/scrapped?
Ghostsonplanets@reddit
Intel Meteor Lake has been out in commercial products since late 2023 and uses the Intel 4 EUV node. Intel disclosed they have sold 10M MTL chips so far.
Xeon 6 servers Sierra Forest and Granite Rapids are also out, using the Intel 3 EUV node. This is the latest HVM node from Intel.
Intel 18A is being used exclusively for Panther Lake next year.
ifq29311@reddit
If they have those, why did they need TSMC for Lunar/Arrow Lake?
nanonan@reddit
TSMC is better.
Ghostsonplanets@reddit
Arrow Lake and Lunar Lake being fabbed externally was a decision taken under Bob Swan's leadership, which had no confidence in Intel Foundry getting out of the 10nm troubles in time and planned to downsize the foundry side.
So Intel signed a contract back in 2019-2020 for first share of the TSMC N3 node (cancelled and turned into N3B) for products originally meant to release in 2023, to regain product manufacturing competitiveness against Apple and AMD.
The idea was that Golden Cove products would release on Intel 10nm back in 2020-21, Ocean Cove on Intel 7nm in 2022, and Lion Cove would leapfrog AMD and match Apple by using bleeding-edge TSMC 3nm in 2023, with Intel relying on TSMC external manufacturing for the foreseeable future while the foundry was downsized or kept only for a minimal share of certain products.
Hence why Keller was preaching to Intel CCG to adopt modern methodologies and portable designs.
The whole strategy changed once Gelsinger became CEO, revived the foundry side and doubled down on it. But the contract signed under Swan needed to be respected, and as we can see, it is basically saving Intel from the dishonor of not having an internal manufacturing node available right now that can support the ambitions of their design side or be competitive with QCOM, AMD or Apple.
Exist50@reddit
LNL, yes, but ARL was supposed to be much heavier on Intel nodes. Which is why ARL-20A was on the roadmap to begin with.
yUQHdn7DNWr9@reddit
Probably that manufacturing costs are so high that Intel would have to sell every chip at a loss.
ElSzymono@reddit
Yes. Intel 4 (compute tile in MTL) and Intel 3 (Sierra Forest and upcoming Granite Rapids).
Kant-fan@reddit
They already launched a product using Intel 3.
Flynny123@reddit
If they subsequently identify issues with 18A they'd be in real trouble (legally) after this statement. So though it's tough for me to understand how 20A can be a disaster while 18A (a refinement of 20A?) is meeting expectations, I do think it's likely true.
One thing this highlights to me is that "5 nodes in 4 years" has been a very expensive way of introducing the kind of process iteration that they used to take for granted.
TwelveSilverSwords@reddit
I don't think 20A is a disaster. It's more like 20A isn't worth it. Why ramp a node just to fabricate a small number of chips on it?
nanonan@reddit
Why is it only good for a small number of chips? Or in reality, no chips at all? I'd say because it was a disaster.
Flynny123@reddit
I suppose if 20A good, 18A better, it does make sense
DaBIGmeow888@reddit
Arrow Lake was supposed to be 100% in-house, then reduced to a few SKUs, then chopped entirely in favor of outsourcing. If you read between the lines, I am not optimistic on 18A.
uKnowIsOver@reddit
Couldn't really see them ramping up a node for just a low end product with how much they have been cutting recently.
nanonan@reddit
Sure, ramping up a node that can only service the low end is probably not a great idea. Not as bad an idea as creating a node that can only be utilised by the low end though, that's their real fuck up and likely why they've abandoned it.
DaBIGmeow888@reddit
Yes, but how do you convince people to join IFS and abandon TSMC when you yourself are outsourcing your advanced nodes to said TSMC?
Helpdesk_Guy@reddit (OP)
Maybe they're trying the good ol' "Just have a li'l bit of trust in us – we've been doing this for ages!" …
Who knows, it might work all of a sudden for once. If you never try, you'll never know, right?!
Exist50@reddit
It only made it this long because they wanted a proof point for external customers. But there's not much value in such a demo when it shows you're far behind where you claimed to be. And of course CCG layoffs and cost cutting.
Helpdesk_Guy@reddit (OP)
Intel notes …
That means every product previously meant to use Intel 20A, like the upcoming Arrow Lake generation, is to be manufactured wholly by TSMC, tile for tile; Intel only takes over the packaging afterwards.
ThePandaRider@reddit
I think the Arrow Lake generation was always meant to be on TSMC 3nm since the order with TSMC dates back to the previous CEO and Intel was on the hook to use 3nm from that order. 20A was also supposed to be used but never for the top of the line SKUs. It was always questionable if 20A would see the light of day only being used in low end consumer SKUs. The datacenter lineup is more important and I think that one was always supposed to transition from 3nm to 18A.
nanonan@reddit
The plan was all 20A, then low end 20A, and now nothing 20A. Why people think this means 18A has promise is beyond me.
Kant-fan@reddit
Were there ever any other products meant to have Intel 20A other than Arrow Lake? Possibly Lunar Lake, but I know they had booked TSMC nodes years ago already.
nanonan@reddit
Rumour has it Qualcomm did design for it but abandoned it for the competition.
Ghostsonplanets@reddit
Lunar Lake was never 20A. Arrow Lake was meant to have a small 6+8 Desktop die on 20A for node showcasing purposes.
eugcomax@reddit
TSMC is going to add backside power delivery with N2, and it's expected in 2026-2027. How will Intel have backside power delivery next year if its chips are made by TSMC?
Exist50@reddit
That's quite a laughable spin from Intel. Wouldn't be surprised if this comes up in a shareholder lawsuit later. Looks better than admitting that 20A is broken to the point of uselessness, or releasing in mid '25 only to be beaten apples-to-apples by N3B.
ProfessionalPrincipa@reddit
Cancelling stepping-stone nodes is definitely a sign of them gaining momentum with their roadmap.
gelade1@reddit
look at yourself in the mirror here
you guys are something else really. endless entertainment
DaBIGmeow888@reddit
5 nodes in 4 years is achieved by skipping stop-gap nodes entirely. You can't make this shit up.
Real-Human-1985@reddit
Yeah, spin it as a good thing that your node is so bad you just had to skip it and potential customers got cold feet.
soggybiscuit93@reddit
20A was for internal use only.
ElementII5@reddit
How can you always so confidently state wrong facts. What is your end goal?
Real-Human-1985@reddit
Intel will make Qualcomm chips in new foundry deal
Intel's First High-Profile IFS Fab Customer: Qualcomm Jumps on Board For 20A Process
Intel's advanced manufacturing process has suffered another setback; Ming-Chi Kuo said that Qualcomm has stopped developing Intel 20A chips
-protonsandneutrons-@reddit
These are great sources. Thank you for finding them. Sigh, what a flop.
As far back as July 2021 when 5N4Y was announced, Intel 20A was designed for external:
vhk7896rty@reddit
so all the equipment intel has for 20A/18A, what is it currently doing? nothing?
gelade1@reddit
testing and training purposes.
pastari@reddit
Building momentum.
ThePandaRider@reddit
There isn't much EUV equipment right now. Most of it is likely to be used for 3nm Sierra Forest and Granite Rapids. Besides that they are likely working with foundry clients like Microsoft to produce test wafers and resolve defects.
soggybiscuit93@reddit
They have EUV machines producing Intel 3/4. The few lines that were gonna produce small volume of 20A in 2025 are being used for 18A
vhk7896rty@reddit
is there an estimate of when desktop processors on 18A will come?
soggybiscuit93@reddit
Desktop? From Intel, 2026. For laptop and server, 2025.
It enters high volume manufacturing in a few years. And they're using some of the equipment right now to make Meteor Lake and Sierra Forest.
clingbat@reddit
Honest question: why is everyone hyping Intel's 18A and talking about their "node leadership" when TSMC is already cooking A16 and is far enough along that Apple has already bought up a ton of A16 capacity?
I know they are taking different lithography approaches at this point, but I'm not seeing the clear Intel lead coming that some are claiming.
Johnny_Oro@reddit
18A will have a clear lead for at least a year if it's on schedule. Besides, having an advanced chip foundry outside TSMC at all is already a good thing.
soggybiscuit93@reddit
Mostly disappointed that we won't get 20A ARL vs N3B ARL comparisons now
ConsistencyWelder@reddit
In other words: "We couldn't make 20A work, so we're giving up and trying with 18A instead".
Letting the competition do your manufacturing doesn't sound like a win to me, yet they manage to make it sound like it is.
Worldly_Apple1920@reddit
lol, "We are so good, we're abandoning 20A." What kind of idiot believes this? Especially after Broadcom, SoftBank, and Qualcomm have shit on the viability of 18A in test units.
NewRedditIsVeryUgly@reddit
I was waiting to see how Arrow Lake performs before I upgrade. I'm not waiting for 18A, either Arrow Lake (on TSMC?) delivers or I'm going AMD.
Real-Human-1985@reddit
Arrow Lake should have 10% more IPC than Raptor Lake, but it is clocked lower.
Helpdesk_Guy@reddit (OP)
At least ARL shouldn't have any oxidation issues coming from TSMC, since they're at least able to handle their own processes …
Killmonger130@reddit
TSMC-sourced Arrow Lake should be efficient, I hope; I'd upgrade to it from my 12700k.
Real-Human-1985@reddit
Same PL2 as Raptor Lake and the i9 has an unlimited mode.
Lost_Ad_6278@reddit
It seems like Intel is making significant moves with their 18A plans and shifting more towards TSMC for Arrow Lake. This could have interesting implications for their future chip designs and production capabilities.
DaBIGmeow888@reddit
It's not a good sign to increase outsourcing. Intel is trying to be a foundry; it needs to attract customers. Outsourcing to TSMC is an admission of defeat that does the exact opposite.
Exist50@reddit
Yes. If the node isn't good enough for Intel's own design teams to use, they'll never get external customers for it. Like they're pitching 18A for AI while their own GPU/AI products will be made at TSMC.