TSMC schedules 1.4nm mass production for 2028, targets sub-1nm pilots in 2029
Posted by sr_local@reddit | hardware | View on Reddit | 53 comments
_hlvnhlv@reddit
I wonder what naming scheme they will use once they can't keep making shit up.
For those who don't know, the "nm" in a process node name used to indicate the gate pitch or something, and when they weren't able to keep shrinking it, they just started making it up
An example: a "3nm" node has a gate pitch of 40 something nanometers, not 3
Basically nothing uses features that small, and the improvements are usually in other places besides "smaller goes brrrrrr"
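To put rough numbers on that (these are approximate third-party estimates floating around, not official TSMC figures), a quick sketch:
```python
# Marketing node name vs. roughly reported physical dimensions.
# Values are approximate third-party estimates, not official figures.
nodes = {
    # name: (contacted gate pitch in nm, minimum metal pitch in nm)
    "7nm (N7)": (57, 40),
    "5nm (N5)": (51, 30),
    "3nm (N3)": (45, 23),
}

for name, (gate_pitch, metal_pitch) in nodes.items():
    label = float(name.split("nm")[0])
    print(f"{name}: gate pitch ~{gate_pitch} nm "
          f"(~{gate_pitch / label:.0f}x the label), metal pitch ~{metal_pitch} nm")
```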
R-ten-K@reddit
Process naming has traditionally referred to the effective resolution of the lithographic process, the limits of what the optics can reliably pattern. The fact that it roughly tracked the minimum gate length of an ideal planar transistor was more of a historical accident, one that held only from the ’70s through the ’90s.
For its intended audience, node naming is still a useful shorthand for the generation and capabilities of a given process.
If you’re not steeped in the EE side, it’s easy to misinterpret those labels. And when people don’t understand something, they often fill in the gaps with conspiratorial speculation that misses the mark.
account312@reddit
They’re not even remotely close to running out of SI prefixes. Though they’re kinda running low on distances that aren’t smaller than a silicon atom.
superkickstart@reddit
6.187142e+25 planck length process
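For anyone checking where that number comes from, it's roughly a 1nm label expressed in Planck lengths:
```python
# Rough check: 1 nm expressed in Planck lengths.
planck_length_m = 1.616255e-35
print(1e-9 / planck_length_m)  # ~6.187e+25
```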
gorion@reddit
They already use Angstroms. This node is called: TSMC A14
Anyway, TSMC, Intel, and Samsung already don't use "nm" in their node process names; marketing people and pseudo-journalists do.
E.g. "3nm", "5nm", "7nm"
gumol@reddit
TSMC website: "TSMC’s 2nm (N2) technology has started volume production in 4Q25 as planned."
https://www.tsmc.com/english/dedicatedFoundry/technology/logic/l_2nm
jmlinden7@reddit
The official name of the process node is N2; their marketing people decided to call it 2nm on the website for whatever reason
gumol@reddit
do TSMC marketing people not work at TSMC?
DerpSenpai@reddit
Eventually they will use the density per mm^2
ComplexEntertainer13@reddit
Of what? Not all transistors or libraries are the same.
SRAM density used to be a reasonable metric. But with scaling breaking down and transistor types scaling differently depending on what you are using them for, there are no perfect metrics.
DerpSenpai@reddit
Most likely a mixture of logic and SRAM; the industry will have to decide. The current naming just doesn't scale much longer.
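A toy sketch of what a blended metric could look like; the weighting scheme and the example numbers here are made up purely for illustration, not any foundry's actual proposal:
```python
# Hypothetical blended density metric -- weighting and numbers are illustrative only.
def blended_density(logic_mtr_per_mm2: float, sram_mbit_per_mm2: float,
                    w_logic: float = 0.5) -> float:
    """Weighted geometric mean of logic density and SRAM bit density."""
    return (logic_mtr_per_mm2 ** w_logic) * (sram_mbit_per_mm2 ** (1.0 - w_logic))

# Example: a node with 200 MTr/mm^2 logic and 40 Mbit/mm^2 SRAM.
print(blended_density(200, 40))  # ~89.4
```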
_hlvnhlv@reddit
That's a win to me.
But yeah, it's just too complicated really; it can't be boiled down to a single magic number
Different_Lab_813@reddit
Blame journalists, not TSMC; they haven't used nanometers in their roadmaps for a long time.
gumol@reddit
https://www.tsmc.com/english/dedicatedFoundry/technology/future_rd
"Summary of TSMC's Major Future R&D Projects"
"2nm logic technology platform and applications"
Seanspeed@reddit
We don't need to 'blame' anybody, as it's not actually a real problem in the first place.
TSMC's customers aren't you and me, they're major chip design companies that are not choosing a manufacturing process based purely on its name! lol They know what they're doing. It's not an issue.
Seanspeed@reddit
What is going to stop them 'making shit up'? :/
It's just a naming scheme.
III-V@reddit
I sure hope so. We've got a generation of pedantic redditors coming into power once these boomers finally croak. Then it's all over for these lawbreakers.
jmlinden7@reddit
It's supposed to be roughly a 2x improvement in transistor density per generation.
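That comes from the old cadence where each full node was named at roughly 0.7x the previous one, and 0.7 squared is about 0.5 of the area; a quick illustration:
```python
# Traditional full-node cadence: ~0.7x linear shrink per step,
# so area per transistor roughly halves (0.7^2 ~= 0.49), i.e. ~2x density.
node = 90.0  # nm -- starting at the 90nm generation
for _ in range(5):
    nxt = node * 0.7
    print(f"{node:.0f} nm -> {nxt:.0f} nm (~{(node / nxt) ** 2:.1f}x density)")
    node = nxt
```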
J_KBF@reddit
Angstroms, then electrons maybe lol
mrybczyn@reddit
They should start making up units.
Next node: 1.21 jigawatts!
Marco-YES@reddit
Jiga isn't a made up SI unit. It's a mispronunciation of Giga.
szakee@reddit
can't wait to browse instagram faster!
PoemPuzzleheaded8651@reddit
Slower.
Companies will use the faster chips to justify pushing shitty features with shittier code to slow down apps.
amidoes@reddit
It already happens with games
Crysis from 2007 looks better than most of today's games, which need shitty AI and upscaling with an RTX 5080 to not run and look like shit
Brickman759@reddit
Go back and look at Crysis again. It was mind-blowing at the time and still looks great. But it was easily surpassed by games that were coming out over a decade ago.
It's also built like garbage. Super inefficient, can only use one CPU core. It ran under 30fps on pretty much all hardware for at least 3-4 years after release.
Gronfir@reddit
The high end also moved from 1080p@30+ to 4K@120+, pushing 16 times the pixels per second.
Seanspeed@reddit
This reminds me of the stupid takes people have where GPUs get more powerful and gamers claim it'll just mean devs get lazier and stop optimizing.
Like if we all just still used GeForce 256s today, we'd totally still have games like Resident Evil Requiem and they'd totally run so much better!
BlueSiriusStar@reddit
I don't think devs are getting any lazier though. It's the corporate machine that looks at this tech as a panacea for all their problems, shrinking timelines and such.
Seanspeed@reddit
But timelines aren't shrinking. Game development timelines are WAAAY longer than they've ever been!
The reason games have issues or don't always run well (and they've never all run well) is largely because they're just seriously complicated pieces of software, and games are getting ever more advanced and complex. Then on PC specifically, you need the game to still interface with the operating system, and so there can be issues there as well (DX12 has notoriously been a source of many modern issues).
Almost never is 'more powerful hardware' a cause of these problems.
gumol@reddit
on consoles you also need to interface with the OS
TheFaithlessFaithful@reddit
Sure, but there's simply way less variety. It's a single hardware/software combo (or a few highly similar ones).
In contrast, on PCs you have people running a huge variety of hardware on a huge variety of OSs (many of which are behind on updates to drivers and the OS).
BlueSiriusStar@reddit
I mean, the game engine could also be an issue here. UE5 can be notoriously hard to run with Nanite and such. I just think game engines aren't optimised for the workloads we require today.
I mean, on every PC you still have to interface with the OS and the driver. Complicated doesn't automatically mean slower games.
lazyhustlermusic@reddit
Your take is shortsighted.
fatong1@reddit
Not to be pedantic, but is it though? Looking at the current software trends plus the over-reliance on LLM-generated code, I'd say no.
lazyhustlermusic@reddit
Would you expect the same code volume in 2026 as we had in 2006?
fatong1@reddit
Yeah, I know, but you have to agree it's a bit funny that modern software is just becoming more and more bloated with "features" no one wants. In a perfect world, codebase growth would decrease, not increase, while still implementing the same number of features.
lazyhustlermusic@reddit
How would code base decrease with feature additions?
gavinderulo124K@reddit
He said in a "perfect world", not in a realistic one.
lazyhustlermusic@reddit
You still didn’t answer how you’d expand functionality with less code
Flat_Pumpkin_314@reddit
It literally runs faster and faster every year bro
Seanspeed@reddit
Does this sub just hate hardware?
I don't get it. Process node jumps are one of the more inherently positive aspects of the progression of computing technology.
If the only thing you ever use a computing device for is Instagram, why are y'all even here?
Brickman759@reddit
Reddit incentivises snarky drive-by comments. Unless you're in a super small, well moderated sub, the comments are just a sea of cheap jokes that you've already read for the 50th time.
happydemon@reddit
Karma farming.
dabocx@reddit
It’s become more bitter as it’s become r/gaminghardware rather than general hardware
IguassuIronman@reddit
Realistically gaming is the most common consumer task that requires significant compute, so it's not surprising that's what people focus on
Johnny_Oro@reddit
I wouldn't be so sure. "What Andy giveth, Bill taketh away."
adevx@reddit
Isn't this coming close to vacuum cleaner marketing BS, e.g. a 3000-watt vacuum cleaner with barely any suction power?
Stelligena@reddit
No.
Going from 4nm to 3nm, assuming exactly the same CPU specs, will give about 40% more efficiency just out of the box, which also means about 40% more battery life.
smarlitos_@reddit
That’s actually crazy, that’s a lot of increased battery life
Sojmen@reddit
Why is it only 40%? Why isn't it ~80%, if area scales with the square? So 4x4 to 3x3, i.e. 16 to 9?
Stelligena@reddit
Because they are adding more cores or features to fill up the spaces.
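For the arithmetic upthread: if the names actually were linear dimensions (they aren't), the naive scaling would look like this:
```python
# Naive scaling if "4nm" and "3nm" were real linear dimensions (they aren't).
old, new = 4.0, 3.0
area_ratio = (new / old) ** 2      # 0.5625 -> ~44% less area
density_gain = (old / new) ** 2    # ~1.78x -- the "16 to 9" in the question
print(area_ratio, density_gain)
# In reality the labels don't map to any physical dimension, so the actual
# density/power gains are whatever the foundry publishes for that node.
```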
Prawira_81@reddit
Well, as long as RAM prices are still skyrocketing, not so much hype. I will use my imagination, lol
K33P4D@reddit
Why can't I hold all these gates