Patent about Intel Royal Core SMT implementation
Posted by CopperSharkk@reddit | hardware | View on Reddit | 27 comments
nittanyofthings@reddit
I'm so sick of patents. There hasn't been a true invention since 1970. They're just gatekeeping design patterns now.
jaaval@reddit
Oh really? We are still using 70s tech? Interesting.
trackdaybruh@reddit
Wait, so Intel is bringing back hyperthreading? Why did they kill it off in the first place?
R-ten-K@reddit
SMT is an optional component.
It was disabled in a few designs to reduce validation effort (and cut design cost/time to market). SMT also added complexity to big.LITTLE scheduling, especially in Windows.
hackenclaw@reddit
Hence the reason why phones have no SMT.
Really weird that Intel is bringing it back. They should have gone the phone-SoC route: one super core, a few performance cores, and finally efficiency cores.
jaaval@reddit
I don’t think any of the ARM client processors have SMT. Apple didn’t put it even in their biggest processors.
bookincookie2394@reddit
This was a proposal to go the opposite route, with large homogeneous cores that could dynamically transform into multiple throughput-oriented cores as needed. This would in theory allow each core to be made even larger, since their multithreaded mode would help justify the huge area cost per core.
hwgod@reddit
In LNC it wasn't "optional"; it was just straight up not included in the design. Part of the reason why there's no LNC server product. Intel didn't even think they'd need it at all going forward.
And the scheduling complexity wasn't a problem. Already accounted for with ADL, and SMT threads are bottom priority anyway.
R-ten-K@reddit
My point is that SMT is an optional uArch feature.
The complexity of managing the development and support of a SMT-aware and hybrid-aware (to avoid performance regressions) scheduler is not trivial.
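A hedged illustration of what such a scheduler has to reason about: on Linux, SMT sibling topology is exposed via sysfs files like `/sys/devices/system/cpu/cpu0/topology/thread_siblings_list`, and an SMT-aware scheduler uses it to avoid co-scheduling competing threads on the same physical core. The sketch below (purely illustrative, not any actual scheduler code) parses that kernel CPU-list format and groups logical CPUs into physical cores:

```python
def parse_cpu_list(s: str) -> list[int]:
    """Parse a kernel CPU list like '0,4' or '0-3,8' into logical CPU ids."""
    cpus = []
    for part in s.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return cpus

def physical_cores(sibling_lists: list[str]) -> list[tuple[int, ...]]:
    """Group logical CPUs into physical cores by deduplicating sibling sets."""
    seen = set()
    cores = []
    for s in sibling_lists:
        siblings = tuple(parse_cpu_list(s))
        if siblings not in seen:
            seen.add(siblings)
            cores.append(siblings)
    return cores

# Hypothetical 4-core/8-thread part where logical CPU n and n+4 share a core;
# each logical CPU reports the same sibling list as its partner.
sibling_lists = ["0,4", "1,5", "2,6", "3,7", "0,4", "1,5", "2,6", "3,7"]
print(physical_cores(sibling_lists))  # [(0, 4), (1, 5), (2, 6), (3, 7)]
```

On a hybrid part the scheduler additionally has to know which of these cores are P-cores and which are E-cores (where the sibling set is a single CPU), which is exactly the combined SMT-aware plus hybrid-aware bookkeeping being described.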
phire@reddit
Just because there weren't any LNC parts with hyperthreading doesn't mean it wasn't still optional in the codebase.
They don't rewrite the whole codebase for each CPU; it's a bunch of incremental changes. The current cores almost certainly have code going all the way back to Sandy Bridge, and probably code going all the way back to the Pentium Pro. They might even have small bits of code going back to the Pentium or even the 486.
TBH, I suspect it's more "somebody broke it, and we can't be bothered fixing hyperthreading before launching LNC" than pure "validation effort".
Geddagod@reddit
If Diamond Rapids doesn't have hyperthreading, would that change your mind about hyperthreading being optional in LNC (though technically DMR is supposed to use the next-gen P-core)?
mkaypl@reddit
A patent doesn't mean there's a product behind it.
Due_Calligrapher_800@reddit
Exist50@reddit
Two reasons. (1) They wanted to get LNC out the door ASAP, and LNC rewrote a lot of code. (2) It doesn't make as much sense in a world where you also have a small core for highly parallel workloads.
The trouble is that some workloads care about SMT for non-technical reasons (per-core licensing), and Intel decided they no longer had the budget to fund two different core teams.
jocnews@reddit
Well, when you get core configurations with 2x as many E-cores as P-cores, it would make more sense to put SMT on the E-cores.
Kind of pity (for Intel) that they didn't use that motivation to implement SMT on the unified core early, heh.
elkond@reddit
Because they wanted to simplify core design, but then a new CEO happened, one who coincidentally was on Intel's board of directors during the "arrogance and burning money" years.
zzzoom@reddit
We'll only know whether his bets would have paid off when/if 14A gets any large customers.
Exist50@reddit
18A was supposed to be that chance. Suffice it to say it was a failure.
tacticalangus@reddit
What do you think LBT has to do with "arrogance and burning money"?
elkond@reddit
He was on the board during the years that led to 14nm+++++++++++???
SlamedCards@reddit
Lip-Bu joined the board after Pat was hired, in 2022. 14nm and 10nm were during BK and Swan?
zzzoom@reddit
Sounds similar to NVIDIA's Spatial Multithreading in Olympus cores.
Exist50@reddit
Nvidia acquired a significant number of people from the Royal team. IIRC, their main CPU leads are ex-Royal. Not all of them went to AheadComputing.
So the similarity might be a lot more than mere coincidence.
Admirable-Extent2296@reddit
Who even decided to get rid of 20 engineers working on something that could have massively benefited the entire company and why? They are also taking ideas from that project, if I remember correctly. Clearly, they were worth their salt. Was this Pat's doing?
CopperSharkk@reddit (OP)
I wonder if Intel will implement this in Coral Rapids as well.
Exist50@reddit
Will that be Unified Core or the last P-core (Griffin Cove)? If the latter, then I'm not sure I see them putting in the effort for an architecture on life support.
The real question for Unified Core will be what is the easiest to implement. The Atom team will clearly have their hands full with the performance, ISA, and SMT asks all combined.
bookincookie2394@reddit
Traditional SMT seems cumbersome on a core as highly clustered as Royal, so using hard partitioning makes a lot of sense. Maybe we’ll see it in Unified Core?