Intel's Server Share Slips to 67% as AMD and Arm Widen the Gap
Posted by imaginary_num6er@reddit | hardware | View on Reddit | 205 comments
PercsAndCaicos@reddit
Can someone explain like I'm five: if ARM is clearly the future, why does anyone bother researching x86? Obviously for now, for compatibility, but I mean… at what point do we just put all our eggs in that basket?
psydroid@reddit
What do you mean by researching x86? It's there for legacy software and for the very high-end, but doesn't have the best price/performance anymore.
aminorityofone@reddit
ARM is just another option, it isn't going to replace x86 anytime soon or probably ever. Recent ARM chips are just a wake-up call for x86 to start making better chips. Intel got far too lazy during the stagnation of the Bulldozer era.
auradragon1@reddit
Define this. Apple replaced x86 overnight. Most of AWS is now running on ARM.
aminorityofone@reddit
Too much legacy equipment runs on x86 and is not getting software updates to work on ARM. Much of this is mission-critical stuff. Just follow the news about how air traffic control is still using Windows 95 (or was it 98)... Then consider that x86 is getting better and more energy efficient. ARM is just creating competition.
auradragon1@reddit
Sure, but new software being written is usually compiled to both ARM and x86, or is just straight up browser based.
aminorityofone@reddit
There is an enormous amount of legacy stuff. Even at this latest Computex there was Windows 7 running.
noiserr@reddit
ARM is not the future. ARM is a fad. There is nothing ARM offers to people who need server compute over more performant x86 solutions.
scannerJoe@reddit
Arm has the considerable advantage that anybody can buy a license, leading to an incredibly lively environment with many different companies doing many different things. And the hyperscalers love having control over their hardware and making their own stuff. I don't think that Arm is going to replace x86 anytime soon, but they’re for sure going to play a sizable role in the server market in the foreseeable future.
Strazdas1@reddit
ARM has an advantage if you want to build your own hyperscaler and can afford your own chip design team. It has no advantage to your average server owner who orders ready-made racks and has 2 guys do all the work on them.
noiserr@reddit
There is no tangible advantage there. You still have one litigious company controlling the IP and you are stuck with an inferior solution.
Hamza9575@reddit
Because ARM has not done anything to convert x86 software to run on ARM without bugs and at least at the same performance or faster. There are millions of different pieces of software that only run on x86. No one is paying to patch them to run on ARM natively. There are hundreds of thousands of x86 games on the Steam store alone.
Dreamerlax@reddit
Probably on Windows. I run almost the same set of software on my Surface and MacBook Air and they are all ARM.
Exist50@reddit
There's been a lot of work on compatibility layers. Prism and Rosetta and the like.
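For what it's worth, a process can even check at runtime whether it's being translated. A rough Python sketch for the macOS/Rosetta case (Windows-on-ARM's Prism has its own mechanisms that I won't guess at); on Apple Silicon, sysctl.proc_translated reports 1 when the process is running under translation:
```python
# Minimal sketch: detect whether the current process is running natively
# or under Rosetta 2 translation on macOS. sysctl.proc_translated is 1
# when translated, 0 when native arm64, and the key is absent on Intel Macs.
import platform
import subprocess

def rosetta_translated() -> bool:
    if platform.system() != "Darwin":
        return False
    try:
        out = subprocess.run(
            ["sysctl", "-n", "sysctl.proc_translated"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip() == "1"
    except subprocess.CalledProcessError:
        # Key doesn't exist on Intel Macs; treat as not translated.
        return False

print(f"machine: {platform.machine()}, translated: {rosetta_translated()}")
```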
Due_Calligrapher_800@reddit
Arm is not the future. Most teams looking to the future are working on RISC-V, as it's open source and doesn't require a licence fee.
auradragon1@reddit
RISC-V is not the slam dunk that you think it is.
ARM charges a license fee but they also give you all the designs. All you have to do is to tell TSMC how many chips you want really.
So who's actually going to spend the billions in developing a "free" RISC-V core that can compete with new ARM designs every single year? Volunteers? Retired Apple CPU designers?
RISC-V will then have companies like Sifive who will then offer high performance RISC-V cores for a... fee!
The only thing RISC-V is good for is being free from political interference, as long as the design is open source and not proprietary. You're not going to find free and open source RISC-V designs that can compete against ARM cores.
monocasa@reddit
That's like asking "who's going to develop a free kernel that can compete with Microsoft every single year? Volunteers? Retired Windows kernel engineers?"
auradragon1@reddit
Open source software is much easier and lower cost to get started with.
In order to develop a cutting edge CPU that can compete against ARM, Apple, Qualcomm, AMD, Intel, you need cutting edge EDA tools that are very expensive.
A student eating ramen can contribute to the Linux kernel on a $200 laptop.
Chips are physical and permanent so verification and testing are far more costly.
Testing physical chips can take a long time.
There are many reasons why open source software can work but open source chip design has not worked.
monocasa@reddit
The approaching brick wall of the end of moore's law changes the calculus.
It pretty much guarantees democratization of these tools or equivalents.
That's why RISC-V has already pretty much dominated areas targeting lower gate counts, with open source designs like the C906/C910 being some of the most prevalent.
auradragon1@reddit
The end of moore's law arguably makes it even harder for free open source RISC-V to compete in the performance segment. Remember, we're talking high performance RISC-V cores competing against Qualcomm, ARM, Apple, AMD, Intel. That's what the person I replied to was saying.
The reason the slowdown in Moore's law makes it even harder is that you now need to pour even more R&D and resources into squeezing out as much performance as possible. Chips now glue multiple dies together. They're getting much more complicated in order to keep improving performance without Moore's law.
monocasa@reddit
The end of Moore's law means that the tooling and knowledge about how to make the best of the higher gate counts gets democratized.
And glueing multiple dies together isn't new. It ebbs and flows with the tech. The VAX 9000 used chiplets for example.
jmlinden7@reddit
The end of moore's law means that real life chips are harder (and more proprietary) to simulate, as opposed to just using standard transistors on a monolithic chip that behave in predictable ways that any university researcher can model.
monocasa@reddit
The end of moore's law means that all of that information gets democratized.
And if anything, these more complicated physics mean that fabs are less willing to let you use anything other than their standard cells.
jmlinden7@reddit
The information becomes locked down behind NDAs, since fabs have more to hide when they aren't just using bog standard transistors.
Yeah, they'll force you to use their own cells to simulate, which means you can't use the ones that university researchers open sourced.
monocasa@reddit
The information is already locked behind NDAs. So what'll happen is the information slowly leaks out and competitors open up the market, followed by the original vendors effectively opening their PDKs to be able to compete once their NDAs start being an issue for new business.
And not even university researchers use the university created PDKs. Those are just simple veneers for students like the fake ISAs like MMIX, DLX, LEG, and Y86. Actual research doesn't generally have an issue getting a license to a real PDK.
jmlinden7@reddit
Yes actual researchers have access to industry standard stuff, a random guy with a laptop will not
Plank_With_A_Nail_In@reddit
I think these two user accounts are bugged out AI bot accounts caught in a loop?
monocasa@reddit
Not everyone is a bot.
auradragon1@reddit
Nope.
https://semiengineering.com/first-time-silicon-success-plummets/
The success rate of silicon projects is declining due to increasing complexity.
jaaval@reddit
I very much doubt they are going to share any of their core designs with anyone.
monocasa@reddit
The hyperscalers are generally down to work together to homegrow anything they're currently buying from a supplier. The more complicated it is, the more willing they are to work together.
jaaval@reddit
They don’t seem to be working together on this.
monocasa@reddit
Not yet.
auradragon1@reddit
monocasa@reddit
They work on a lot together currently. The Linux kernel, OCP, etc.
auradragon1@reddit
To share industrial standards, yes, sometimes they work together. But to share AI chip designs that differentiate their clouds? Nope.
northern_lights2@reddit
I wonder why that is? Some of the best software I have used is open source. Why not the same for hardware?
The only reason seems that it may be proprietary / impossible to simulate? Why aren't PhDs publishing feasible core designs which beat everybody and are scalable with node shrinks?
jmlinden7@reddit
You can't easily compile and test hardware the way that you can with software.
Anyone with a laptop and a compiler can write, compile, and test code. You need thousands of dollars and specialized equipment to do the same with even a simple chip
Jonny_H@reddit
Also there's a lot of work required to get a performant result even if you already have perfect complete HDL, much of that work relies heavily on (closed) fab PDK specifics and IP, and extremely specific decisions that are hard to share between different products. You can't just click compile and end up with something remotely competitive.
So even if the HDL was all open and shared, companies would still be releasing very different actual products, and the long pipeline means the feedback cycle would be slow, all of which tends to make an open source contribution culture even more difficult.
Exist50@reddit
Eh, you can take shortcuts. A lot of IP, including whole CPU cores, is designed to be entirely synthesizable. If you need to get that last 10-20%, yes, you're going to need to put more work in, but for a lot of things the tools do a fine enough job.
Jonny_H@reddit
In my experience it's at least 20% in all three of area, power, and performance - which is more than the difference between a competitive product and one not even considered.
IP vendors like ARM spend a lot of time working with fabs like TSMC/Samsung to ensure their cores work well in common combinations so their customers can "just" click and drag. And big foundries have skilled teams that will do much of this for you, for a price.
Don't confuse "Someone has already done the work for common IP blocks" with "The work didn't need doing".
Exist50@reddit
IIRC, the SoC die cores for MTL/ARL are not hardened, for a real world example. But yeah, that's not something you can get away with for a compute-heavy use case. But many of the billion 1-2GHz A53/A55 implementations could probably take a penalty like that.
Jonny_H@reddit
I personally know people who work on optimising hardware layout for blocks used on the SoC portion of Intel chips :)
Maybe there's not a big "holistic" optimisation drive for the chip as a whole, but there are very much people working on hardware layout for things on the silicon.
Exist50@reddit
For most of the IPs, absolutely. That's just one notable exception that jumped to mind.
monocasa@reddit
I literally ran a gate level sim of an OoO RISC-V chip on my laptop today.
jmlinden7@reddit
Sims aren't compilations. They're, well, sims. Real life does not perform exactly the same as the simulations do. That's the difference between hardware and software.
You have to go through physical design, fabrication, and post-silicon testing to make sure that your sims are true to life.
Exist50@reddit
Not quite. You can run logical equivalence tests to guarantee your circuit is the same as the RTL you wrote. Then you only need to test the RTL. At least for functional behavior. Timing and such is a bit more involved.
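As a toy illustration of the idea (nothing like a real LEC tool flow, just the concept): describe the behavior once at the "RTL" level, build the same thing out of gate primitives, and exhaustively check they agree:
```python
# Toy equivalence check: an "RTL" 4-bit adder (plain arithmetic) vs a
# "gate-level" ripple-carry adder built from AND/OR/XOR primitives,
# exhaustively compared over all inputs.
WIDTH = 4

def rtl_add(a: int, b: int) -> int:
    return (a + b) & ((1 << WIDTH) - 1)

def gate_add(a: int, b: int) -> int:
    carry, result = 0, 0
    for i in range(WIDTH):
        ai, bi = (a >> i) & 1, (b >> i) & 1
        s = ai ^ bi ^ carry                       # sum bit from two XOR gates
        carry = (ai & bi) | (carry & (ai ^ bi))   # carry from AND/OR gates
        result |= s << i
    return result

assert all(
    rtl_add(a, b) == gate_add(a, b)
    for a in range(1 << WIDTH)
    for b in range(1 << WIDTH)
), "netlist does not match RTL"
print("gate-level netlist is functionally equivalent to the RTL")
```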
monocasa@reddit
I'm quite aware of what back end chip development looks like. The tooling behind that has been getting way better, as well as the ability to preverify what a chip will do when fabricated. That's why you see so many more A0/A1/B0 steppings getting released these days.
On top of that, what I did today is literally a compilation in this case.
xternocleidomastoide@reddit
Come on now, a GLS for an entire core. LOL
monocasa@reddit
Yes, I'm running a gate level simulator for an OoO RISC-V currently on my laptop.
xternocleidomastoide@reddit
I'd believe you if you were simulating a bunch of gates or cells at most. Nobody does a GTL for an entire core. There's literally no point, besides being prohibitively expensive in compute and space. We only do cycle accurate sims unless we really need to.
monocasa@reddit
Depends on the core size.
xternocleidomastoide@reddit
Mate, I do this for a living.
GLS of a smallish core with 100M gates takes 50+ GB and a couple days of compute on a beefy system.
I could buy it if you were doing a GLS analysis on a partition, on a laptop. And that will still take a few hours.
monocasa@reddit
There are OoO cores with 3M gates.
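Back-of-the-envelope, taking the ~50 GB per 100M gates figure above at face value (~500 bytes per gate; real simulators vary wildly, these are purely illustrative numbers), a 3M-gate core is very much laptop-sized:
```python
# Crude sizing sketch: gate-level sim memory footprint if we assume a
# flat ~500 bytes/gate, derived from the 50 GB / 100M gates claim above.
# Real tools differ enormously; this is only to show how the scale shifts.
BYTES_PER_GATE = 50e9 / 100e6  # ~500 bytes per gate (assumed)

designs = {
    "small embedded OoO core (~3M gates)": 3e6,
    "BOOM-class OoO core (~50M gates)": 50e6,
    "large core plus caches (~100M gates)": 100e6,
}

for name, gates in designs.items():
    gib = gates * BYTES_PER_GATE / 2**30
    print(f"{name}: ~{gib:.1f} GiB of sim memory")
```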
xternocleidomastoide@reddit
No, you don't do "this" for a "living." If your organization has to do full system GLS on a laptop. LOL.
FWIW A modern smallish out-of-order single-issue core (something like RISC-V BOOM) is still going to be around 50M gates. Maybe 20M if you prune a lot of stuff...
monocasa@reddit
Once again, at a lower gate count, it doesn't really matter.
And there's OoO cores like a Katmai, that were superscalar with about 5M gates.
Once again, your world isn't the whole world.
xternocleidomastoide@reddit
I wouldn't consider a 32-bit core from the mid 90s remotely "modern" LOL.
The computational and space complexities of GLS are objective quantitative matters, independent of the design or organization involved.
monocasa@reddit
A Katmai has a lot more going for it than a minimal single issue core, being superscalar with even a vector processor pipe. I brought it up because your 20M minimum is clearly wrong even from wikiable information.
Just like your "objective, quantitative matters" telling you what I'm currently running on my laptop are also very wrong.
xternocleidomastoide@reddit
Katmai was 32bit, supported only a few in-flight instructions, had tiny (by modern standards) register file and ROB, few ports per register, only 32KB L1 cache, and a small branch unit.
monocasa@reddit
Just like a lot of currently shipping embedded cores. Like the core I'm simulating (though this one is 64bit).
xternocleidomastoide@reddit
Oh, you're doing a shipping core now?
LOL
monocasa@reddit
All I said was that a lot of currently shipping cores fit what you said.
Do you have anything of value to add to the discussion? So far all of your gotchas have been absurd on their face.
xternocleidomastoide@reddit
Yes, I am aware a lot of cores fit what I said, because that is literally why I said it.
Unlike you, I have added actual quantitative data to this conversation. Fascinating that you feel those were "gotchas."
monocasa@reddit
Too bad your quantitative data has either been incorrect or irrelevant.
Let's review
Me: I'm currently running a gate level sim for an OoO RISC-V core.
You: Nobody does that, it'd take at least 32GB+.
Me: Which is why I have a laptop with almost 100GB, and also it doesn't take nearly that much.
You: The simplest OoO cores are 50M-20M gates.
Me: Here's an example of a superscalar OoO core with a vector unit at 3M gates.
You: That's not a modern core
Me: Lots of cores currently shipping are like this
You: So now it's a shipping core???
Me: That's not at all what I said.
Do you see how absurd this conversation is?
xternocleidomastoide@reddit
You do realize the one making the conversation absurd is you, right?
I am simply providing quantitative guidance about the sizing, computational and space complexities.
In any case. This is clearly not your domain, and I am not interested in exploring your lack of comprehension further.
cheers.
monocasa@reddit
All I said at the beginning was that I was currently running a GLS on my laptop. You said that was impossible because my laptop would need at least a third of the memory it actually has. Then proceeded to give other minimums that were even more off base.
It sounds like you've been stuck in your niche for far too long and can't comprehend what different workflows look like.
xternocleidomastoide@reddit
I gave you a low bound, as a benefit of the doubt. 20/50M gates with proper coverage is going to take 100+GB and run for more than a day, on a beefy workstation (esp if you're running Xcelium, VCS, Questa, etc).
If you are messing around with some RISCV toolchain + iverilog or whatever, that's cool. But there's no need to oversell that.
So as not to end up with your bizarre run of tangents; from an OoO RISC-V core, to the Pentium 3, to gaslighting about EDA workflows (to a guy with a PhD in the field and several tapeouts...)
monocasa@reddit
And I gave you an example of something 1/10th your low bound.
The point was to prove what I'm saying with easily available public information. That the lower bound for OoO cores was far lower than 20-50M gates. Unless you're going to make some bizarre argument that there's something about a RISC-V core that means it would take way more gates than an x86 core. And what exactly have I gaslit you about? Name exactly what I've said that's incorrect.
xternocleidomastoide@reddit
You can't extrapolate from the original P3 core to be a representative low bound for a modern 64 bit OoO RISC-V.
E.g. the smallest 64-bit OoO RISC-V single-issue core out there comes in at between 20 and 50M gates.
For a normalized design/test, the time/space complexity of GLS is not workflow specific.
Perhaps you know so little about this matter, that you don't realize how little you know. Alas, feel free to reinforce what has been quite obvious.
monocasa@reddit
I'm tracking down a difference between the cycle accurate sim and the RTL.
auradragon1@reddit
Because chips progress faster than software usually and are far more costly to produce.
You can make a useful application with one developer, a $500 laptop, and some coffee. Meanwhile, Apple is coming out with a new chip every year. How are volunteers supposed to compete against billions in R&D budget?
monocasa@reddit
The Linux kernel had thousands of people commit to it in just this past year.
And the vast majority aren't 'volunteers'.
auradragon1@reddit
So where is this mythical 4,000 ST GB6 RISCV core without licensing fees?
monocasa@reddit
That's like asking where all the Linux servers are in 1998.
auradragon1@reddit
In 1998, there were a ton of free and commercial UNIX servers already - just like Linux today.
Plank_With_A_Nail_In@reddit
How is it possible to be this stupid?
nanonan@reddit
They never claimed it would be free, they claimed it would avoid legal entanglements like with Qualcomm and ARM. Which it does.
auradragon1@reddit
nanonan@reddit
You seem to be under the misapprehension that everything related to RISC-V needs to be free in an open source sense, not in a license fee sense. It does not. You can be as proprietary and closed as you like.
ButterscotchFew9143@reddit
It is a slam dunk when the ecosystem can be built by many without licensing concerns. We'll see, but riscv is taking off many times faster than arm did at the same stage
anival024@reddit
You get access to the standard ARM IP. Not "all the designs". People look at what Apple and Nvidia and Qualcomm have done with ARM and think they can jump in and do the same thing. You can't. You need to create your own stuff.
auradragon1@reddit
"You" here does not mean me and you or some gamer degen. It means enterprises with resources and a will go make these chips.
Plenty of companies deployed their own high performance ARM chips including Nvidia, Amazon, Microsoft, Google, Meta, Baidu, Alibaba, Tencent, Fujitsu, Broadcom, HiSilicon, Ampere, Unisoc, Mediatek, Qualcomm, and so on.
spicesucker@reddit
I don’t think it’s as much “ARM is the future” as it is “ARM is licensable”. One of the big advantages Apple chips have over Intel / AMD is that the cores themselves are massive.
auradragon1@reddit
Apple cores are no bigger than Zen or Intel cores. It’s been proven many times.
Geddagod@reddit
Apple's cores are very large, also in terms of die area. Especially since they moved up to 3nm.
Their unique cache hierarchy (beyond L1) is what saves them a bunch in "CCX" and "core" area, since AMD, Intel, and even the "stock" ARM cores have core private L2 caches.
Apple also gets to save a decent bunch of area on the fact that they don't support 256 bit vector width like Intel, or 512 like AMD.
auradragon1@reddit
No.
https://www.reddit.com/r/hardware/comments/1fr8ws8/snapdragon_x_elite_die_shot_revealed/
Apple offloads SIMD tasks to a dedicated AMX processor or NPU depending on the optimization.
You should look at the whole die size of the SoC and then compare its CPU, NPU, and GPU performance to get a higher-level overview of performance/area, since it's too difficult to measure accurately in a way that includes all the caches and architectural differences. Once you do so, it's pretty clear that Apple's SoCs have better performance/area than AMD's and Intel's.
Geddagod@reddit
Note how the last Intel reference was RWC, an especially bad N5-class core in terms of area. Whether that be due to the worse node, Intel's older physical layout methodology, or whatever reason, they have significantly improved with LNC.
A LNC core without the L1.5 and L2 SRAM arrays alone is already the same area as a M4 P-core (actually I forget, I might have also included the tags in this area calc too lol), and that's not including all the logic that is associated with handling the core private L2 too.
As for Apple's N5/4 cores, it's important to remember that Apple didn't really start to compete with high end AMD desktop products in performance till their N3 parts. Zen 4 and Zen 5 both had 16 and 26% leads in specint2017 1t performance respectively according to david huang, making the area differences much more swallowable.
But I also want to point out- Zen 4 without the L2 block is outright smaller than a M2 core.
The problem is that for server processors, you don't have that option to offload it to those parts, and Intel and AMD have to incorporate AVX-512 or AMX as a per-core option, presumably for licensing, and also because it's just more performant.
So when looking at a core perspective, Intel and AMD obviously suffer from increased area from that while Apple does not. Something to keep in mind when comparing core area, especially considering the area difference does not seem to be insignificant-
For Zen 5, simply changing how AVX-512 is implemented causes a dramatic halving of the FPU block in desktop vs mobile for a 10% area reduction.
If you want to just compare CPU cores, you wouldn't have to look at the NPU and GPU.
I think two important measurements would be just CPU core+L1, and then the "CCX" area.
The Core + L1 because the L1 cache is way more incorporated into the core than the higher levels of cache, by a large degree, and also because the later levels of cache have a much more disproportionate impact on perf/area.
For example, Intel nearly doubling the L2 cache from GLC to RPC caused a significant area increase, while only providing a low single digit IPC uplift in most workloads. The area cost of those caches as a percent of total core area is only bound to increase too, as logic starts scaling much better than SRAM.
There might be more obvious CCX level power benefits, or be significantly better in specific workloads that isn't really represented by spec2017, or be a "future proofing" case as working set sizes grow, but from a raw perf/area perspective, it's disproportionately bad.
And why measuring CCX area is good should be clear, and yes Apple does very well against AMD and Intel here, but as I alluded to in my previous comment, this is solely because of their unique cache hierarchy, which seems to be a product of Apple's design team (the same team that then moved over to Qualcomm, hence Oryon sharing a very similar cache design), not because of the "core" area.
auradragon1@reddit
What? M1 had the fastest ST speed in SPEC2017 when it came out. What are you smoking? https://www.anandtech.com/show/17024/apple-m1-max-performance-review/5
That's why Zen CPUs have massive L3 caches.
That's AMD and Intel's problem. Since Apple SoCs are only consumer/prosumer, they're designed as such.
If AMD and Intel care so much about performance/area for consumer CPUs, they can rip out AVX512 for consumers. But they'd have no SIMD competition against Apple's AMX, NEON, NPU, and GPU combined offering.
Makes no sense. Apple does not sell standalone CPUs. They only sell SoCs.
Even just the CPU cores alone, we can clearly see that Apple's P cores are no bigger than AMD and Intel's P cores when combined with all the caches.
Healthy-Doughnut4939@reddit
Uarch wise, Apple's cores are HUGE.
In die area? Apple's M4 is a lot better than Lion Cove.
auradragon1@reddit
No.
Source: https://www.reddit.com/r/hardware/comments/1fr8ws8/snapdragon_x_elite_die_shot_revealed/
Kougar@reddit
Take your desktop: 12 performance cores are going to outperform 50 slower cores in most programs you're running on your system. Not everything can use that many cores, and single-thread programs and programs like games will always run best with performance cores.
ARM chips are basically giant core-count designs, and if your workloads can make use of all those cores then ARM is offering a valid product. But it isn't a one-size-fits-all solution, an ARM processor on your desktop is simply going to end up slower in low-thread count workloads. Similarly servers and enterprise processors can have very different uses and workloads, it's not a one-size-fits-all deal. The hardware needs to match the workload case, so ARM isn't going to be a replacement for all scenarios.
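Rough sketch with made-up per-core speeds, just to show how the crossover depends on the parallel fraction of the workload:
```python
# Amdahl's-law style comparison with hypothetical numbers: 12 fast cores
# at relative speed 1.0 vs 50 slower cores at relative speed 0.5.
def speedup(parallel_fraction: float, cores: int, core_speed: float) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial / core_speed + parallel_fraction / (cores * core_speed))

for p in (0.50, 0.90, 0.99):
    fast = speedup(p, cores=12, core_speed=1.0)
    wide = speedup(p, cores=50, core_speed=0.5)
    winner = "12 fast cores" if fast > wide else "50 slow cores"
    print(f"parallel fraction {p:.0%}: 12x1.0 -> {fast:.1f}x, "
          f"50x0.5 -> {wide:.1f}x ({winner})")
```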
nanonan@reddit
Apple has shown this isn't true at all.
Culbrelai@reddit
Arm is not the future. Its a fad just like PowerPC was
aminorityofone@reddit
Such a fad that apple has the best laptop performance in the world from it! Such a fad that there are no smart phones left that use x86!
auradragon1@reddit
Gamers live in bubble. They still think x86 is all that matters.
vlakreeh@reddit
Most computing devices sold nowadays are arm largely because of smartphones. Definitely not a fad.
brand_momentum@reddit
RISC-V > ARM
Jim Keller knows whats up.
Frothar@reddit
ARM's popularity is more to do with the fact that companies can make their own silicon, as only Intel and AMD can make x86. The performance difference is 99% how the chip is designed.
xternocleidomastoide@reddit
Software moves hardware, not the other way around.
There is a lot of x86 software. Ergo customers buy lots of x86 hardware to run that x86 software.
The end.
Blueberryburntpie@reddit
Also why IBM is still in business. Decades of legacy enterprise software that were originally coded on punch cards or terminals, and IBM is the only one selling modern hardware that maintains that level of backward compatibility.
ResponsibleJudge3172@reddit
Why would Intel and AMD choose to lose control of CPU development and create a full Arm monopoly?
Remember why people didn't want Nvidia to buy arm? Those same arguments apply to Arm itself
No-Relationship8261@reddit
RiscV is the future, people are just not sure whether to jump to arm first or not.
zeehkaev@reddit
Some people are not so sure if it is the future, x86 efficiency is also improving dramatically in the last decade.
randomIndividual21@reddit
man, just 5 or 7 years ago, Intel was basically invincible and killing AMD
aminorityofone@reddit
Intel sat on their laurels and didn't keep pushing for innovation when AMD was nearly dead from the failure of Bulldozer. We are seeing the results of that lack of foresight. Also, it's more like 10-20 years ago now. Ryzen launched in 2017 and AMD started to gain market share almost immediately after launch.
puffz0r@reddit
Intel kept trying to do moonshot projects in things that weren't their core business like Optane
Strazdas1@reddit
Intel wasted a lot of time trying to physically shrink the transistors and ended up not succeeding, bleeding a lot of talent and burning out the rest with no tangible benefits in a decade of wasted effort. Intel has not been the same since.
ButterscotchFew9143@reddit
And yet there's nothing that comes close, performance wise. I wish they had kept developing it.
Alive_Worth_2032@reddit
I mean, they did start as a memory business. If anything you could argue that Optane was an attempt to get back to their core business! ;p
Christian_R_Lech@reddit
I wouldn't say they were fully sitting on their laurels. A good chunk of the stagnation was due to 10nm taking such a long time to get working which ended up delaying new architectures that could've allowed an increase in performance.
However, in other ways, like core count, it would be accurate to say laurels were sat on unhealthily.
brand_momentum@reddit
Intel holds ~75% of x86 market share.
PotentialAstronaut39@reddit
Intel HELD ~75% of x86 market share.
996forever@reddit
They still do.
PotentialAstronaut39@reddit
Not for years they haven't according to independent third party data:
https://imgur.com/E9sm4t2
AMD went as high as almost 40% a few years back and Intel was down to almost 60%.
Now they hover around 35% for AMD and 65% for Intel.
Last time Intel was at 75% was around 4 to 5 years ago.
Blueberryburntpie@reddit
Last time I looked at Intel's quarterly financial statement, they were making almost zero profit from the enterprise sales. Meanwhile AMD has been raking in the money.
That suggests to me that Intel is pricing their server CPUs at a discount to slow the loss of their market share.
Kougar@reddit
They thought so too, but the full-year delay with 14nm was there 11 years ago. It was a big warning sign that Intel ignored going into 10nm.
Proglamer@reddit
man, just 40 years ago, DEC was basically invincible and killing Data General ;)
xternocleidomastoide@reddit
Ironically, Intel and AMD are basically on the road to becoming the IBM and DEC of this era.
ReplacementLivid8738@reddit
Tell us more
xternocleidomastoide@reddit
Talent goes where the volume and money are, and that eventually translates into performance.
The big players in the previous "paradigm" often become stagnant. There is somewhat a correlation with the levels of integration.
Mainframe -> Mini -> Micro -> SoC
IBM and DEC were the dominant players in the mainframe and minicomputer era.
Then intel and AMD took over, as microprocessors started to be where the market volume was, and eventually they surpassed in performance the previous systems.
Now we're seeing a similar shift, with the mobile SoC players like APPL, QCOM, etc starting to take over in terms of volume and revenue. That attracts more talent to them. And we're also starting to see how the mobile cores are surpassing the more "traditional" x86 in terms of performance.
ReplacementLivid8738@reddit
Where do Chinese vendors fit into that?
What about TPUs and other cloud specific from Amazon, Google, and NVIDIA?
xternocleidomastoide@reddit
It's too early to tell regarding Chinese vendors. Plus they are a bit behind in semi technology.
But I don't see it necessarily as a regional/nationality issue. As much as just general trends in terms of volume/integration regardless of where they happen in the world.
The ARM cloud stuff may be part of that new paradigm that is replacing Intel/AMD at the bleeding edge of performance. Losing those volumes would hurt intel/AMD tremendously.
puffz0r@reddit
The only reason the Chinese are already able to manufacture pseudo-5nm silicon despite most people knowledgeable about their tech saying they're 10-20 years behind, is because they've successfully poached top engineering talent from places like Apple and TSMC. Hell they've even gotten some former employees from Zeiss and ASML. Also, since the US has decided to weaponize its IP, don't look to the Chinese to continue producing ARM for the long term, Huawei's already going RISC-V due to sanctions.
xternocleidomastoide@reddit
You not being among those knowledgeable people, obviously.
scannerJoe@reddit
I mean they still make two thirds of all revenue, which I find impressive, given how strong AMD hardware has been for some years now. This really shows how important vendor relations, service/software environment, and pure inertia are in corporate markets. No surprise, the area where AMD does best - hyperscalers - is the least sensitive to these things.
SonOfHonour@reddit
Also because they've thrown any semblance of margin to the wind and are purely playing for market share defence right now.
Data centres and servers were the golden cash cow, and now they generate basically zero profit.
NetJnkie@reddit
Not surprising. I sell infrastructure and many of my customers are using AMD now or looking at making a shift in the future.
Strazdas1@reddit
I think that was mostly the case for consumer-facing OEMs rather than servers.
Blueberryburntpie@reddit
A while back, I saw someone claim that if they wanted to order a new AMD server rack from their vendor, the lead time would run to several weeks due to the volume of backorders. But if they wanted to buy an Intel server rack, it would be delivered in less than a week.
Are you seeing anything like that right now?
Dull-Tea8669@reddit
At my firm we basically order just AMD so can't really tell on the Intel lead times, but AMD is usually 3 weeks. At this point we only use Intel on half of our Windows plant, and the entire Unix plant is on AMD
cuttino_mowgli@reddit
I think that was true a couple of years back, when TSMC was fabbing most of the chips, and that included Intel's designs. Don't know if that's true today.
Rexor2205@reddit
Not OP but in my OEM's case it's most noticeable in the stock in the warehouse. Tons of Intel always available, AMD basically always on order. Even where i work in evaluation we have trays upon trays of all the newest Intel chips, while 9004 and 9005 is only really available for testing on request and otherwise probably already sold. Hell this even extends to the AM5 stuff ( 4000 Series, new GRADO)
tecedu@reddit
Not the same OP but buying a ton of hardware now, for 9004 we had a ton of backlogs and waits however 9005 looks to be far more in stock. Intel's fancy chips were the ones out of stock
NetJnkie@reddit
Heavily depends on which server OEM and chip. I also don’t deal in entire built racks. Just individual servers so that may also be different.
Awakenlee@reddit
Shouldn’t that be “close the gap” if Intel still maintains a majority?
monocasa@reddit
I read "the gap" in this case referring to the amount of market share that isn't Intel's. Which is a fair way to frame it, since as recently as 2020, non-Intel market share was a single digit percentage.
Strazdas1@reddit
It's not. It's a complete misinterpretation of what "the gap" means, to the point where communication breaks down.
996forever@reddit
That’s just the amount of sales of alternatives itself, not the “gap of sales” or gap of anything
grumble11@reddit
Gap in terms of new business. Server market is slow to turn over but when it does it is hard to reverse. Intel is getting crushed.
PainterRude1394@reddit
You seem confused.
Market share is new business. This is talking about server market share.
chefchef97@reddit
The gap in sales is widening
dj_antares@reddit
So if you go from 68% vs 32% to 67% vs 33%, the gap is widening from 36% to 34%?
account312@reddit
No, if they have 67% of sales and used to have more than 67% of sales, the gap is not widening.
Plank_With_A_Nail_In@reddit
Did you get dropped on your head as a baby?
996forever@reddit
Astonishing in the big year of 2025 people who want to comment on these things still don’t know the difference between market share and install base.
Thelango99@reddit
Pretty glacial speed indeed. I work for Inmarsat (now part of Viasat) and hundreds of our customers are still on old Dell R420XRs from 2016.
Pablogelo@reddit
"close the gap" usually is used when you're coming back from behind and reducing the advantage the competitor had over you.
MrBill_-_AlephNull@reddit
exactly, amd is closing the gap
Pablogelo@reddit
I'm dumb
Tarapiitafan@reddit
Market share gap is closing, sales gap is widening
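Tiny worked example with made-up unit counts, just to show how both can be true at once when the overall market grows:
```python
# Hypothetical illustration: the share gap (percentage points) narrows
# while the absolute sales gap (units) widens, because the market grew.
# Numbers are invented purely for the arithmetic.
years = [
    # (label, total market units, Intel share, AMD/Arm share)
    ("year 1", 100, 0.68, 0.32),
    ("year 2", 200, 0.67, 0.33),
]

for label, total, intel_share, other_share in years:
    intel, other = total * intel_share, total * other_share
    print(f"{label}: share gap {100 * (intel_share - other_share):.0f} pp, "
          f"unit gap {intel - other:.0f} units")
# year 1: share gap 36 pp, unit gap 36 units
# year 2: share gap 34 pp, unit gap 68 units
```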
Plank_With_A_Nail_In@reddit
It's shocking that people don't check the basic meanings of words or phrases before joining in a discussion. To be as confidently wrong as you are is astounding to me.
These are the first search result for "Market Share" and "Sales Gap" on google.
https://www.investopedia.com/terms/m/marketshare.asp
https://www.streak.com/post/gap-selling-method
account312@reddit
Look, someone who has never been wrong before and so recently heard of the concept that they still find the very idea shocking!
scannerJoe@reddit
Market share means share of sales in a timeframe, usually a quarter or year. In economics, a market is not a group of actual or potential users, it is a place where goods are bought and sold.
Healthy-Doughnut4939@reddit
Intel is barely holding on for dear life with their Xeon 6 lineup.
The Xeon 6980P is 20% slower in a single-socket and 40% slower in a dual-socket configuration compared to the EPYC 9755.
The 192-core Zen 5c EPYC 9965 outperforms the 6980P in nT performance, while Intel's only competitor in this space is the Xeon 6900E with 288 Crestmont E-cores, which had so little anticipated demand that it didn't get released to the public and was only sold to Amazon for AWS.
Intel also has no answer for 3D V-Cache on servers.
Intel needs to turn things around in the HPC/server market with Diamond Rapids or AMD is going to eat all of their market share.
travelin_man_yeah@reddit
Intel also has no data center GFX products. My Intel friend who did enterprise sales had to refer his customers to NVidia products after they EOL'ed the Max & Flex line. Supposedly Jaguar Shores will hit in 2027 but who knows if that will be on time given their recent track record.
Exist50@reddit
I mean, let's be real. PVC was delayed multiple years and ended up being pretty crap. Falcon Shores was delayed multiple years then cancelled. And there was Rialto Bridge cancelled in between. On time isn't even in the cards. The question is whether they get out something remotely usable to begin with.
travelin_man_yeah@reddit
Oh, I get it. The company is a trainwreck. They've got some really smart people but it's been mismanaged for so long, with so much damage done under the Krzanich regime. Pat had his issues too, but LBT is too old and has too many other interests to see any kind of turnaround all the way through. Even if they were to execute perfectly, it will be years until they get products and foundry in order. It's really sad; it used to be a great company.
Exist50@reddit
What concerns me about Lip Bu isn't his age or conflict of interests, necessarily. It just seems like Intel has a long history of bringing in external hires where they perceive problems to be (e.g. Murthy Renduchintala, Justin Hotard, etc), and for a variety of reasons it never seems to work. In many cases, they actively make things worse. I'm especially suspicious of anyone making Intel's problems sound simple. Implies a degree of arrogance that really doesn't tend to end well.
Academic_Carrot_4533@reddit
Jim Keller’s early departure from Intel was really the canary in the coal mine at the time.
Tuna-Fish2@reddit
The problem is that they are holding on for dear life at the cost of their margins. Granite Rapids-AP and Granite Rapids-SP are both much more expensive products to build than their more performant AMD competition. For now, AMD is content to get slow growth at nice margins, but if Intel's next gen performs much better, AMD has a lot of room to cut the prices.
What Intel needs is some of that magic glue that lets AMD build cheap yet powerful CPUs. EMIB is technically nice, but they clearly are not getting enough advantage to offset the costs.
6950@reddit
EMIB is the cheapest advanced packaging you can get. Don't forget Intel also gets margin for their foundry with Intel 3 Xeons. The problem is that server CPUs are slow to ramp due to the longer packaging/validation time required for both AMD and Intel.
Exist50@reddit
AMD doesn't use advanced packaging at all for server. Also smaller dies.
6950@reddit
They don't, but validation still takes time. Also, Zen 6 is advanced packaging.
Exist50@reddit
Yes, but general server validation timelines should be similar between Intel and AMD. Well, in practice I'd expect AMD to be better thanks to more rigorous pre-silicon validation.
FO-RDL, right? That should be cheaper still than EMIB.
puffz0r@reddit
Is the MI graphics accelerator line not considered "server"?
Exist50@reddit
Context being CPUs here.
bitNine@reddit
After building computers for 30 years I just built my first AMD machine. Best decision I ever made. 14th+ gen is crap. Even the latest core ultra 9 is inferior to my 9950X. Intel messed up big time.
bubblesort33@reddit
I'm always shocked Intel is still that popular. You hear so little about their server CPUs, at least on here.
moxyte@reddit
It's still that high? How? On perf/watt they stopped being relevant when the first EPYC hit the market, back like a decade ago.
Jess_S13@reddit
We haven't bought an Intel CPU in our servers since the 64c AMDs. They aren't competitive for our Hypervisors compared to the AMDs.
BarKnight@reddit
ARM could soon pass AMD in the PC market, so it's no surprise that it's also gaining server market share
Tradeoffer69@reddit
Lmao probably not, especially as both AMD and INTC move towards efficiency (they have already moved quite fast). If you get high efficiency and full compatibility, why bother with ARM?
BarKnight@reddit
https://www.msn.com/en-us/money/topstocks/arm-eats-into-intel-and-amd-market-share-in-first-quarter-say-citi-analysts/ar-AA1ELclA
AMD is at 21.1% and ARM is at 13.6%
They are much closer than you think
Which is probably what is upsetting people
Due_Calligrapher_800@reddit
The return rate of arm laptops is ridiculously high. People have been suckered into buying them by a false advertising campaign that the battery life is much better than x86. That would have been true two years ago but not anymore. Arm on windows had its shot but that train has now left the station. I don’t expect arm market share to exponentially increase here. They will have to fight tooth and nail with intel and AMD to make more progress.
vlakreeh@reddit
The return rate isn’t ridiculously high, Qualcomm themselves stated the return rate is within industry norms.
Prior to lunar lake, when the chip was launched, it absolutely was better than any x86 designs in terms of battery life.
Due_Calligrapher_800@reddit
“Within industry norms” doesn’t mean anything. Industry norm return rates vary from 1-15% depending on the segment. Windows on ARM (mainly Qualcomm) is closer to the 10-15% mark, or the upper limit of the “industry norm”. Amazon even had to flag them as frequently returned as a warning.
vlakreeh@reddit
Can you provide a source for this?
Plenty of ARM laptops don't have this and plenty of x86 laptops do, I don't think that's a great indicator of it being down to the processor vs just a bad laptop.
Due_Calligrapher_800@reddit
Not a single Intel Surface Pro has a frequently returned warning on it on Amazon. Every single Qualcomm one does. You are welcome to make of that what you will. If they had a good return rate, they would have published the figures instead of a vague statement of being "within industry norms". Perhaps Qualcomm should say what they think industry norms are, but you don't get that warning on Amazon unless it's north of 10%.
vlakreeh@reddit
Literally the first result: this X Plus based model doesn't have the frequently returned warning and has 4.6 stars. Meanwhile the current-gen Intel Surface Pros don't have a single review, hardly a fair comparison.
Intel nor AMD provide return rates for laptops using their CPUs.
Instead of providing a source you instead lied, make of that what you will.
Due_Calligrapher_800@reddit
On my Amazon (UK), on the official Microsoft Amazon store, so many Qualcomm laptop SKUs have a frequently returned warning and not a single Intel one does. I’m not lying.
noiserr@reddit
A company denies their product sucking and having a high rate of return. Amazon literally had a badge on these computers saying "Frequently returned item". https://www.tomshardware.com/laptops/snapdragon-x-powered-surface-laptop-7-gets-frequently-returned-item-warning-on-amazon
scannerJoe@reddit
Those are impressive numbers for Arm, no doubt, but the article doesn't really say what they actually refer to. “Processor market share“ in what market? The PC/laptop market? Does this include Apple? Chromebooks? More information is needed, IMO, to understand how well Arm is doing.
aminorityofone@reddit
The problem with that link is that it doesn't actually tell you anything. Is this ARM Windows laptops, Windows desktops, Apple laptops, Apple desktops, Chromebooks, etc.? It certainly seems that it is including Apple and Chromebooks. Other sites are saying it is only an estimate on top of that. It is estimated that Qualcomm only sold around 720,000 ARM laptops, which is a pathetically small amount.
Tradeoffer69@reddit
That study includes Apple on ARM, not IncompatibleDragon
BarKnight@reddit
Yes Apple makes PCs
monocasa@reddit
Maybe if ARM hadn't been suing Microsoft's main hardware partner over this.
dr3w80@reddit
Does that number include macOS, since that's 8.7% from Apple alone?
Exist50@reddit
Don't see why that wouldn't count for the same reason Graviton does.
brand_momentum@reddit
AMD proved all it takes is one architecture (+ reiterations) to close the gap. This is why nobody should write Intel off; they could also be one architecture away from widening the gap. It also helps that they've got something their competitors don't have: fabs.
puffz0r@reddit
AMD didn't close the gap due to 1 architecture, they closed the gap because they were consistent and on-time and delivered good performance increases and good power efficiency every time. Intel on the other hand kept missing their target release dates and had tons of security vulnerabilities that had very severe mitigations. It takes a confluence of factors to shift the market, not just having one miracle product.
brand_momentum@reddit
That's why I said + reiterations; the core design philosophy across Zen generations remains the same, while Intel experimented with reinventions and hybrid designs from 2017 onwards.
While Zen wasn't a miracle in the supernatural sense, it was a masterstroke of engineering, timing, and vision. The same way people counted out AMD at that time, some are counting out Intel now, I wouldn't be surprised if Intel gets their 'Zen moment' sooner than people think.
auradragon1@reddit
Do ARM revenue numbers account for hyper scalers making their own chips?
As far as I know, AWS, Google, Microsoft, Oracle do not disclose how many ARM chips they make/use.
Kryohi@reddit
ARM certainly knows that though, and I would imagine they are allowed to share aggregated numbers.
Exist50@reddit
That's not necessarily true. Why would they know for anything other than per unit licensing?
freeone3000@reddit
Per unit licensing tells you how many units.
Exist50@reddit
ResponsibleJudge3172@reddit
Don't these licenses have volume requirements and/or pricing (e.g. the Arm vs Qualcomm lawsuit)?
Exist50@reddit
Hence the one caveat on my above comment. But not all licenses charge per unit produced. Anyway, even if that information is shared with ARM, it would certainly be under NDA.
Tuna-Fish2@reddit
Since this is revenue share, that doesn't matter. When AWS makes a chip they deploy internally, that's 0 dollars of revenue share, even though it can be very impactful on the industry.
hollow_bridge@reddit
I doubt this if you're counting market as value, maybe capacity; but arm servers target cost not performance, they are almost always the budget option, so realistically I bet it's under 15%.
Exist50@reddit
Eh, the perf-optimized Graviton instances are pretty competitive.
nokeldin42@reddit
From what I understand from the article, it would completely ignore things like Graviton since Amazon isn't selling that to anyone as hardware.
This article purely tracks sales that a hardware vendor made. So ARM sales include companies like Ampere I guess.
TheAppropriateBoop@reddit
Intel still strong at 67%, exciting times ahead!
Healthy-Doughnut4939@reddit
Intel used to own 99% of the x86 server market
It's a huge fall from grace caused by a perfect mixture of incompetence, complacency and becoming too corporate.
AMD is an engineering focused company while Intel had a very corporate structure.
It started to change for the better under Pat Gelsinger's tenure and this process will likely continue under Lip-Bu Tan.
Forcing the product division to treat foundry as a separate business really helped weed out a lot of sloppiness.
Sevastous-of-Caria@reddit
My guess why it hasn't crashed to 0% is lower B2B ask pricing for Xeons (lower MSRP) and businesses being satisfied with Intel's post-sale support. Or a very lazy purchasing department.