Intel's new CEO says the company has 55% of the data centre market
Posted by -protonsandneutrons-@reddit | hardware | View on Reddit | 95 comments
Geddagod@reddit
Am I tweaking, or is that the entire article?
Like literally just that one sentence. Is there some sort of paywall or something that is blocking the rest of the article? Cuz it doesn't seem like it.
Strazdas1@reddit
So you want 3 paragraphs of fluff, then 3 paragraphs of the author's personal feelings, to get to the part that's actually of any interest - the quote? I prefer short articles like this.
Geddagod@reddit
The quote can be the headline, but there's tons of context, references to other analysts' market share graphs, and the state of the server market as a whole - how much market share is lost to ARM vs AMD, Intel's dense core competitors, etc etc - that can be mentioned.
This isn't an article. It's literally one sentence.
Strazdas1@reddit
The people reporting this have neither the need nor the qualifications to do this. Their job is to report facts so that other outlets can license them to spin their narrative articles - talking with analysts that agree with the author, making graphs pointing towards what the author wants the audience to look at, etc.
Geddagod@reddit
The funniest thing about this is that Reuters almost always does add context and all the other shit, despite apparently not having "the need nor qualifications" to do so.
Their job being just "reporting facts" doesn't mean they can't add context, and you can add context without "spinning narratives" either.
Strazdas1@reddit
Yes, Reuters are overrated and often introduce their own narrative.
Geddagod@reddit
Who the hell even rates Reuters for it to be overrated lol. No one likes them, at least not in this sub.
Also I'm fine with Reuters introducing their own narratives if they want. Since they also report the facts, and introduce context next, and then report their opinion last. Just ignore their "narrative" if you want.
TheComradeCommissar@reddit
Nah; the entire article consists of the headline.
Welcome to ChatGPT journalism…
ConsciousWallaby3@reddit
ChatGPT journalism is actually the opposite, with long-winded sentences and information being pointlessly repeated to fill up an article's worth. This is bad, but it's a different issue.
HelpRespawnedAsDee@reddit
Seriously, I know Reddit loves rewriting history but shitty clickbait has been a thing way before any genAI.
ImSpartacus811@reddit
This is kind of the opposite of clickbait.
The title is accurately advertising the contents of the article. Clickbait is where there's a mismatch between the title and the article.
thoughtcriminaaaal@reddit
Reuters is a wire service, isn't it? Isn't this normal for wire services?
amonra2009@reddit
cmon
"Reporting by Max Cherney and Wen-Yee Lee Writing by Ben Blanchard Editing by David Goodman"
2 Dudes recorded that huuuuuge information, sent the huuuuuge text and 8k video to another 2 Dudes, who spent all night writing and editing this huuuuge article. Show some respect!
thoughtcriminaaaal@reddit
I'm pretty sure that's how wire services and any serious journalistic outlet works. Reuters sends a number of reporters to Computex, an editor has to sign off on it, so 3 names get credited even for a very short piece. Reporting just two sentences being said by the bigwigs is maybe a joke to you, but it's probably important to other financial news outlets.
Schmigolo@reddit
Why write something nobody's gonna read?
damodread@reddit
Tbf what Reuters does is provide press dispatches. Pure information. Then newspaper journalists take up that info and write articles with additional context etc.
SkruitDealer@reddit
"Reporting by Max Cherney and Wen-Yee Lee Writing by Ben Blanchard Editing by David Goodman"
It took a team to get this sentence published. Such credits of shame.
TheComradeCommissar@reddit
Didn't AMD overtake Intel in the datacenter sales a few months ago?
Strazdas1@reddit
no, AMD is not even close in datacenter.
ARM has been eating Intels share more than AMD.
Alternative-Sky-1552@reddit
AMD surpassed intel in datacenter CPU revenue. Likely because if they have a use case that benefits from CPU performance they will go AMD and buy high end. If their use case mostly benefits from other things they will buy low end Intel.
Epyc has multiple times the performance of top end Xeon
Strazdas1@reddit
If intel has a 55% of total revenue, and the remaining 45% is shared by AMD, ARM and everyone else, how can AMD surpass Intel?
Alternative-Sky-1552@reddit
Thats 55% units sold. In revenue AMD passes Intel
Strazdas1@reddit
No, the article we are discussing this under states 55% revenue.
TheComradeCommissar@reddit
Not revenue; 55% of the existing servers currently have Intel CPUs.
Jensen2075@reddit
Well AMD has better profit margins on their datacenter products than Intel. Intel is giving discounts to customers in order to not lose market share.
Geddagod@reddit
Due to the MI300 series, yea. IIRC this quarter though Intel is back on top in terms of revenue, by a marginal amount.
In terms of just CPU sales though, Intel still has a decent bit more share.
ParthProLegend@reddit
100 mate it was fucking 100%.
It's an EPYC blunder
atape_1@reddit
Still has 55%, for now... Just a few years ago, the number was 90%, then 80%, 70% ...
rationis@reddit
Was about to point that out. They used to have nearly 100% not too long ago. 55% marks an extreme decline, and I suspect they will slip into a minority share of the market by next year.
salartarium@reddit
IBM Power was still pretty big in the 2010s. It will still exist on life support, with IBM still selling z mainframes and being so involved with chip making, like the Rapidus/Intel/Samsung tech sharing.
MrDunkingDeutschman@reddit
It's like glaciers melting due to climate change. First, slowly, then rapidly.
ThatOneGuyThatYou@reddit
I work with HPCs, and core count is "THE THING". The new EPYC 9006 chips that are supposedly coming with up to 256 cores (cores, not threads) are going to be the thing. The one major hiccup these days is that some software just won't run with AMD chips. That legacy code is probably one of the larger factors keeping Intel in the server space atm.
forreddituse2@reddit
I'm actually surprised it took this long for the datacenters to switch to EPYC. After Rome's release, Xeon family basically lost all competitive edges.
JackSpyder@reddit
I suspect if AMD had 10x the supply chain the number would be different. I wonder how much the ability to produce has limited their market share, given the entire world is fighting for TSMC fab space and Nvidia + Apple soak up a lot.
Apple takes the low yield hit early with tiny mobile chips best able to cope with early low yields, and refines the node. AMD chiplets can also soak medium-term low yields, again better than monolithic dies. Then Nvidia with colossal fat GPUs sweeps in once the node is refined, and with a price point so high for enterprise offerings they can bin chips down to pathetic 3k-a-card gaming offerings. AMD GPUs and Intel suck up remaining capacity as it's drip fed.
We really need production competition, both for total supply capacity and to prevent yet another monopoly stagnating progress. I suspect TSMC won't be as greedy as US fabs in drip-feeding progress to milk a gravy train, having seen how that nearly killed Intel in the past, but it's also a risk politically.
Strazdas1@reddit
Based on public talks from a few people who run datacenters, Intel still significantly beats AMD on availability and variety of SKUs, which certainly does not help AMD sell units.
JackSpyder@reddit
Oh yeah I'm sure availability is more an issue for AMD than Intel. I guess the variety comes from a lack of compelling products though. AMD presumably are supply limited on their high density parts as companies like the hyperscalers and supercomputers vacuum them up.
Either way, its great AMD came out top for a while. Intel now need to bring some thunder in the manufacturing space especially, and keep those fabs open to 3rd parties this time too.
Strazdas1@reddit
I guess it will all depend on if Intels bet with 18A works out or not.
JackSpyder@reddit
I hope it does! We need competition and we also need way more capacity.
Strazdas1@reddit
Likewise.
Geddagod@reddit
I don't think it's a TSMC limitation as much as it is AMD being conservative with wafer orders, or perhaps them just accurately forecasting the stubbornness of the market to move to AMD after so many years of Intel entrenchment.
Strazdas1@reddit
Wafer orders are expensive and AMD may think it risky to over-buy them lest they end up with another case where they overproduced so much they were trying to sell old stock for 2 years.
JackSpyder@reddit
Could be a bit of both. Other than at release, AMD seem to supply well enough outside of hyper-in-demand consumer X800X3D chips. But I've no idea what the enterprise acquisition market is like. I suspect interested businesses (cloud especially) place enormous up-front orders which they can forecast well. So it's mostly small players and consumers that are more unpredictable. Makes life easier I suppose. And bad yields can bin down to consumer to fill that demand as a byproduct of enterprise fulfillment. Chiplets were a big gamble that paid off enormously.
soggybiscuit93@reddit
Several reasons.
1) Most people don't want to switch first. Servers are more than just CPU performance. What are the quirks of the platform? Are there weird bugs like NIC instability? Driver issues? Better to let someone else adopt the first gen or two while we watch.
2) When is my mission critical on-prem software gonna get validated for Epyc? (we still have one or 2 programs we run in our datacenter that are technically only certified for Xeon). Hyper-V only recently added support for nested virtualization on Epyc.
3) When will I find the time to rebuild my VMs? Generally considered bad practice to move VMs between Xeon and Epyc. If you have an All Xeon datacenter and want to slowly switch to Epyc, there's more involved than simply migrating your VMs over.
4) Availability. Xeons are generally widely available, and Xeon servers often have better pricing (real pricing as listed by a supplier, not publicly available listed prices or MSRPs) and faster shipping dates. When I'm shopping servers, there's still a massive disparity in available SKUs for Xeon vs Epyc, with many more diverse options for the Xeon lines.
theevilsharpie@reddit
Rebuilding VMs is completely unnecessary in this case. Xeon and Epyc are both x86-64 CPUs, and can run the same binaries.
soggybiscuit93@reddit
It'll run, sure. But there are nuances. No live migrations of VMs between the two platforms, as well as the different chipset drivers, plus other quirks of the platform. You absolutely can introduce instability going between Xeon and Epyc. It's more than the ISA that matters.
theevilsharpie@reddit
Live migration isn't possible (at least not without resorting to a very limited "fake" CPU model like qemu64), but VMs can be trivially moved between CPUs if the VM is shut off first.
Chipset drivers don't matter in a virtual machine, and every platform has its fair share of quirks.
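The constraint being described - that a guest can only be live-migrated to hosts exposing every CPU feature it started with, which is why cross-vendor migration needs a lowest-common-denominator CPU model - can be illustrated with a small sketch. The flag lists below are simplified and hypothetical, not real full feature sets:

```python
# Sketch: why cross-vendor live migration needs a baseline CPU model.
# A guest can only land on hosts that support every feature it was started with.

def common_baseline(hosts):
    """Intersect the CPU feature flags of all candidate hosts."""
    feature_sets = [set(flags) for flags in hosts.values()]
    return set.intersection(*feature_sets)

def can_migrate(guest_features, host_flags):
    """A migration target must expose every feature the guest already uses."""
    return set(guest_features) <= set(host_flags)

# Hypothetical, heavily simplified flag lists for an Intel and an AMD host:
hosts = {
    "xeon-host": {"sse4_2", "avx2", "avx512f", "aes"},
    "epyc-host": {"sse4_2", "avx2", "aes", "sev"},
}

baseline = common_baseline(hosts)  # {'sse4_2', 'avx2', 'aes'}

# A guest pinned to the baseline can move either way:
print(can_migrate(baseline, hosts["xeon-host"]))          # True
# ...but a guest started with AVX-512 exposed can't land on the Epyc box:
print(can_migrate({"avx512f", "avx2"}, hosts["epyc-host"]))  # False
```

This is roughly what hypervisor tooling computes when you configure a shared CPU model across a mixed cluster instead of passing the host CPU through.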
SkruitDealer@reddit
Chipset drivers matter. The virtual machine is still an application itself that is subject to the quirks of the chipset. It's not about the number of quirks; it's that they are different quirks - new ones that your organization will have to solve anew. Better to let someone else solve them first.
theevilsharpie@reddit
Chipset drivers don't matter to VMs, because the VM is using an emulated/paravirtualized chipset that isn't going to change if the host processor changes.
New chipsets may have their own quirks, but that's not something specific to Xeon vs. Epyc -- that's just existing platform vs. new platform.
SkruitDealer@reddit
If the environment in which the VM runs has quirks, those can propagate to the VM, which is dependent on the host environment for non-virtual resources. Virtual machines virtualize the interfaces, but those interfaces still map to something on the host to function. If a chipset quirk causes issues with disk, RAM, threading, power management, etc. on the host system, the VM's ability to perform optimally will be impacted.
FranciumGoesBoom@reddit
Isn't it fun listening to people who have no idea of the complexities of a large modern datacenter?
At home I'll gladly move my VMs between the two. At my current job I'll even do it. But at my old job, hell no.
theevilsharpie@reddit
We run hundreds (sometimes thousands, depending on scaling need) of VMs on Google Cloud's E2 instance family. This is a cost-reduced VM family that will use whatever capacity Google Cloud has available at the time.
The VM is guaranteed to be running on an x86-64 host, but otherwise, it can be assigned to whatever underlying hardware. When I spin up a VM (or power-cycle it), I have no idea whether it will be running on an AMD or Intel machine.
Works fine. Our more performance-sensitive workloads may get switched over to the N2D VM family for consistency, but I've never encountered any reliability problems. Indeed, if you were to ask me whether the VMs I manage are running on AMD or Intel hardware, I genuinely wouldn't be able to tell you without running a query, because it can easily change day-to-day.
But alas, I guess my system just isn't complex enough. :P
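(For reference, the "query" mentioned above can be as simple as reading the vendor string from inside the guest. A minimal sketch, parsing `/proc/cpuinfo`-style text as it appears on a typical Linux guest - `vendor_id` reports `GenuineIntel` or `AuthenticAMD`:)

```python
def cpu_vendor(cpuinfo_text):
    """Return the CPU vendor string from /proc/cpuinfo-style text.

    'GenuineIntel' means an Intel host CPU, 'AuthenticAMD' an AMD one.
    """
    for line in cpuinfo_text.splitlines():
        if line.startswith("vendor_id"):
            return line.split(":", 1)[1].strip()
    return "unknown"

# On a live Linux guest you would read the real file:
#   vendor = cpu_vendor(open("/proc/cpuinfo").read())
sample = "processor\t: 0\nvendor_id\t: AuthenticAMD\ncpu family\t: 25\n"
print(cpu_vendor(sample))  # AuthenticAMD
```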
SkruitDealer@reddit
But that means that Google has already vetted the hardware and resolved any chipset issues for you. So back to the original point, chipset drivers do matter. Perhaps not to you, the end consumer of a VM cloud service, but the VM cloud service provider will definitely need to vet the hardware as they will have far more use cases than your own consuming organization, and any incompatibilities or inefficiencies with a new chipset will hurt their operating margins and reputation as a service provider.
theevilsharpie@reddit
I'm not sure why you keep getting hung up on "chipset issues", when the original point I was responding to was disputing the suggestion (never mind it being "best practice") to re-install VMs when the host CPU vendor changes, even though the CPUs are using the same architecture.
That's not necessary, if not simply due to a basic understanding of how CPU architectures work, then demonstrably based on the fact that a major world-wide hosting provider offers supported, SLA-backed service to the general public where that type of CPU switch is commonplace.
Even if you did have "chipset issues", re-installing VMs isn't going to do shit for you if your host hardware/drivers are unreliable, so it's still unnecessary.
theevilsharpie@reddit
That's still an old, mature platform vs. new, bleeding-edge platform, not Epyc vs. Xeon.
Indeed, if platform maturity is your main criterion, that's actually a benefit of Epyc right now. The current Granite Rapids Xeon performs more closely to the nearly three-year-old Epyc Genoa platform than to the current Epyc Turin. Xeon is also increasingly reliant on on-chip accelerators and custom instruction set extensions to bridge that performance gap, all of which require explicit OS and application support.
NuclearRussian@reddit
Any good strategies for getting a spread of suppliers/quotes? Our procurement (scientific context) got quite absurd prices on HPE Turin systems from their 'usual channel', and are resistant to looking around.
gamebrigada@reddit
HPE and reasonably priced servers aren't a thing.
soggybiscuit93@reddit
We need to do a comparative for any big ticket purchases, so 3 quotes. Usually it's between CDW and Insight. We've been moving towards Dell a lot recently - found them much more competitive than HPE, and you can go Dell Direct.
Geddagod@reddit
I think AMD's large lead in the server market is comparatively shrinking though, both with GNR and next with DMR, but with so many data centers off Intel by now, AMD getting into new systems should also be easier than it was many years ago. Seeing how AMD's market share will grow the next couple years will be pretty interesting IMO.
There's also the ARM and semicustom DC chips threat that both AMD and Intel have to deal with.
SlamedCards@reddit
Intel has less to fear about data center, cuz that 55% includes ARM/hyperscaler chips, while their enterprise share is much higher.
Intel will likely end up doing chiplets or fabbing hyperscaler CPUs IMO.
The worry is ARM eventually catching on in PC, where landing foundry customers is much harder. But it's Microsoft software so maybe not lol
SkruitDealer@reddit
When workflows on servers move to ARM - and they are rapidly because Linux on ARM is actually very well supported and cheaper to run (see the success of AWS Graviton) - developers will want their PCs to be ARM as well, as in, enterprise will start moving to ARM PCs. This is one of the main reasons x86 rose to PC prominence - developers want to develop on the same platform as their deployment target. Most deployment targets for our world of online services live in servers, so there is a lot of financial and technical incentives to transition.
You can argue that x86 will keep a foothold on PC with gaming and hardware support, but it will likely become increasingly niche, or legacy, because gamers and legacy hardware vendors are a much smaller market compared to corporate's.
Then developers for both software and hardware will start targeting the growing ARM PC market.
Then x86 will be dead, or at least on expensive life support, because there will be fewer financial incentives to support it, and fewer people to do it.
psydroid@reddit
I already consider x86 on life support, even though I have recently built some relatively new AMD AM4 systems that are compatible with Windows 11 and of course Linux. In practice I mostly use my RISC-V boards for lightweight duties.
The x86 systems are only there because of compatibility with legacy software and not so much an investment in the future, as I see ARM getting better price/performance with upcoming chips slated for 2026.
And in a few years RISC-V will also have sufficient performance for end users. For now it's just for developers and early adopters for embedded systems, which do get shipped in the millions or even billions.
simplyh@reddit
Are you sure? GNR was quite disappointing imo, as someone who was quite excited to see competition, I thought Turin was just so much better it wasn't even close. GNR only had 8800 MT/s memory going for it.. I know Intel 3 is not comparable to TSMC's N3, but it really felt like it was more than just a node difference.
I say this as a Genoa customer who was entirely willing to switch to GNR if it was competitive in our workloads, but it just wasn't.
Geddagod@reddit
Turin Dense definitely, but it's decently competitive against Turin Classic, which I would assume is going to be the main server volume driver for AMD. Turin Classic uses N4P, which I would bet is still better than Intel 3, but should still be in the same node class.
For the core-dense markets though, CLF looks like it will be at least competitive with Turin Dense when it launches, at least for the half year before it gets annihilated by Venice Dense lol.
I also want to add, comparatively even GNR vs Turin Dense is at best just as bad as, at one point, 64 core Rome vs 40 core ICL (which I feel like I should remind people, the 10nm ICL used did not even have higher perf/watt than their previous 14nm), or 96 core Genoa vs 60 core SPR.
I'm extremely surprised that GNR wasn't even as competitive with Genoa in your workloads though. On paper (and in many benches) GNR should be at worst comparable to Genoa. Better core counts, more memory channels, on a similar node and on similar IPC cores.
Though I guess there is a decent amount of variability in what "similar IPC" cores could mean; on average they are, but there are workloads that swing in either direction. Intel's different L3 setup could also be a culprit, since Intel has a way larger unified L3 (even if you go to SNC-3 mode), but way worse latency too.
Even AMD's own marketing doesn't reveal massive gaps - Turin and Turin Dense are 16 and 35% faster than GNR in specint2017, and a smaller 5 and 17% gap in specFP. Meanwhile at ISSCC AMD claims an even smaller gap in 1P vs 1P systems, a 2 and 23% gap respectively.
6950@reddit
The lib they used for GNR was the 3-3 fin variant, which is not the best fin variant for Intel 3. Also, Zen 5 is a better architecture vs what is essentially the 2nd refresh of Golden Cove, a.k.a. Redwood Cove.
Yup, if it had launched in Q4 '25 it would have had a year to run amok. I think Clearwater Forest will clearly beat Turin Dense in non-AVX-512 workloads due to the fact it's based on an improved Skymont.
lupin-san@reddit
DC doesn't switch as soon as something superior becomes available. Validation takes time and money. And then they have to look at the roadmap. Switching suppliers means doing the validation again, so they need to be sure that AMD can consistently deliver on that roadmap. When AMD released Naples, they were pretty much starting at the bottom. Rome is when DC started taking them seriously, and Milan is when the switch started to snowball. As long as AMD can deliver on their roadmap, they'll eat into Intel's market share.
sylfy@reddit
The “default naming” on AWS EC2 instances used to be Intel, in the sense that each generation is named with a letter followed by generation number, followed by an optional suffix denoting alternative configurations. So you would have M5 where M indicates a general purpose instance of the 5th generation - generation indicating AWS hardware generations, not the Intel or AMD processor generation.
M4, M5 was the Intel “default” configuration. M5a was the AMD config. “g” denoted Graviton, AWS’s own ARM processor, and so on.
With the 6th and 7th generation, i.e. M6x and M7x, there is no longer a "default" M6 or M7. Intel variants are labelled M6i and M7i. I'm sure this has had a noticeable effect in pushing some customers away from using Intel instances by default without trying alternatives.
hilldog4lyfe@reddit
New Data centers are for AI and use Nvidia GPUs. That’s why there’s a decline in %.
And before everyone downvotes this, I’ll just say that AMD is classy and epic and they’re 1 billion times much better than Intel and Nvidia
Kryohi@reddit
We are talking about CPUs, Nvidia still has close to 0 market share there.
rationis@reddit
A lot of it had to do with the buying culture. Even when Epyc became objectively better than Xeon, companies would still buy Intel because "No one ever got fired for buying Intel". As AMD started to pull ahead in the server and desktop products, people would still buy inferior Intel chips due to a decade of Intel being the only chips anyone in their right mind would buy.
Upgrade cycles are another reason. If a datacenter went through an upgrade cycle in 2020 while Intel still was competitive, they'd only just now be potentially updating their processors.
hilldog4lyfe@reddit
Everyone who buys AMD is a brave hero and people who buy Intel are not to be trusted!
Hell yeah I love this subreddit
ExeusV@reddit
How do you know?
LordAlfredo@reddit
Not all of it is Epyc, a lot of the big cloud players have been deploying a ton of Arm.
noiserr@reddit
Datacenter has long sticky contracts.
Alive_Worth_2032@reddit
The DC is slow to move. AMD still had some presence into the 2012-2014 time span, after Nehalem and Westmere had made them more or less obsolete several years earlier.
Strazdas1@reddit
Wasn't the number last year 75%? Is this counting non-CPU datacenters to decrease the percentage so fast?
PXLShoot3r@reddit
And they will never gain it back now that Amazon, Google etc. have their own hardware.
JackSpyder@reddit
This. The only Intel kit is the kit that hasn't hit its replacement lifespan. And I suspect it's concentrated in lower income firms with longer replacement cycles.
Does intel offer anything compelling to a refresh or new build right now?
YakPuzzleheaded1957@reddit
AMD actually was pretty competitive with Opteron in the mid 2000s, with \~20% market share. Then Intel Xeon started to dominate in the late 2000s to mid 2010s.
Bhavacakra_12@reddit
I know they're talking about data centers but in my country, the top processors being sold on Amazon are all AMD. Like top 20 (almost) exclusively.
HorrorCranberry1165@reddit
They have Diamond Rapids on 18A, which may be strong enough to stem further losses. But AMD will counter-strike with Zen 6 on N2. Having parity on hardware, they must start to compete on prices, but AMD have higher potential here.
Alternative-Sky-1552@reddit
Like, if they can hold market share against 2.5x performance, then with parity they will swipe 99% of it.
Flameancer@reddit
Didn’t Intel have like near at least 90% a few years ago? Honestly they have themselves to blame with their stagnation. They let AMD catch up and surpass with no real response. Hopefully they can get competitive again in the cpu space. 2017-2022 was a pretty good time.
heickelrrx@reddit
The Threat is less of AMD, but ARM based
AMD might be great, but Intel offers software advantages. Meanwhile ARM throws those advantages out the window, because customers are now even willing to design their own custom solutions.
CodeAndLedger5280@reddit
I wonder when the alarm bells will ring at Intel’s headquarters.
Odd_Cauliflower_8004@reddit
Yeah, total datacenters in the world, sure. Some people though have woken up and have been buying Epyc for quite a while... it's not gonna last as they replace more and more old Intel stuff with it.
potatojoe88@reddit
Mercury Research put it at 72% a few days ago. Given the lack of details in this article, I'm not sure this is real.
UsernameAvaylable@reddit
It might depend on how you count "datacenter market". Like hyperscalers that do inhouse chip production are no longer open to the market, right?
Maimakterion@reddit
It's 72% of x86, 55% by revenue of the entire DC market.
Geddagod@reddit
If it was a quote from the CEO himself, then yea it's real.
I imagine this is talking about total server CPU market share- which includes ARM CPUs, I don't believe that the mercury research report included those CPUs.
potatojoe88@reddit
Agreed the CEO would know, but I was wondering if the CEO actually said it. Usually the quote is at least accompanied by "while speaking at ...."
Total market makes more sense yeah. Mercury was just x86
sascharobi@reddit
55% of all architectures out there, 72% of x86.
SmashStrider@reddit
I think LBT might mean the total data center market (which includes CPUs and also GPUs, which Intel is tiny in). It is also possible that here, the competition might include ARM processors too, which would explain the contrast with Mercury Research's data.
travelin_man_yeah@reddit
The number is deceiving. Even though they may have that market share on the CPU/Xeon side, the DC GFX and AI side is a total trainwreck. They currently don't have a shipping DC GFX product (and Jaguar Shores is like two years out), Gaudi is a disaster and they don't have great software solutions.
Kougar@reddit
So what's the breakdown, how much of this was lost to AMD versus how much was cannibalized by ARM? Wendell has been dropping examples of major scale vendors migrating to ARM for years, it's not an insignificant number anymore.
Lopsided-Prompt2581@reddit
55 percent revenue-wise, but unit-wise it is more.