New Intel Xeon 6 CPUs to Maximize GPU-Accelerated AI Performance
Posted by uria046@reddit | hardware | View on Reddit | 21 comments
Icy-Communication823@reddit
Unsurprisingly sounds like more AI bullshit to me. I can't see anywhere in that article any mention of anything that's different to a normal CPU upgrade that is AI specific. More AI crap.
simplyh@reddit
They have faster memory and more PCIE lanes than comparable EPYCs. People like to laugh at Intel but Xeons are absolutely still competitive as the host CPUs of big NVIDIA datacenter racks (which are a huge portion of data center spend today).
Icy-Communication823@reddit
So what's different about these Xeons that is AI specific?
Nothing.
Wyvz@reddit
They have a dedicated AI accelerator in each core.
Exist50@reddit
Which no one cares about when it's connected to an Nvidia GPU. Nor is it unique to these SKUs.
Wyvz@reddit
OP asked what's AI specific about it, I provided one. What is not understandable? I'm not justifying its existence, but I guess they have their own target audience.
Absolutely no one claimed it's new, not even the page he linked. It's simply improved over last gen, hence it's being marketed.
OP posted a marketing piece, so it has marketing terms.
Icy-Communication823@reddit
"These new processors with Performance-cores (P-cores) include Intel’s innovative Priority Core Turbo (PCT) technology and Intel® Speed Select Technology – Turbo Frequency (Intel® SST-TF), delivering customizable CPU core frequencies to boost GPU performance across demanding AI workloads."
There's nothing about "a dedicated AI accelerator in each core" - either in what I quoted, or the rest of the document.
IAAA@reddit
Ugggghhhh...
As a trademark person, this overzealous use of nonsense branding like the expanded versions of "PCT" and "SST-TF" is killing me. Also, they capitalized it as "Performance-cores" in anticipation of getting a mark. That's not going to happen.
Icy-Communication823@reddit
Where does it say that? I'm not seeing it.
Wyvz@reddit
https://en.wikipedia.org/wiki/Advanced_Matrix_Extensions
Icy-Communication823@reddit
Thanks. So it's been supported since 2020. There's nothing new here. Just Intel marketing again.
Wyvz@reddit
Supported only by their CPUs, obviously they will market features that are unique to their platform.
And they also improved it in Granite Rapids, for example by adding FP16 acceleration, and that's what they marketed.
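[Editor's note: a minimal sketch of how to check for the AMX features discussed above. On Linux, the kernel exposes the AMX feature flags (`amx_tile`, `amx_bf16`, `amx_int8`, and, on Granite Rapids, `amx_fp16`) in `/proc/cpuinfo`; the sample flags line below is made up for illustration.]

```python
# Check which AMX-related CPU flags are present in /proc/cpuinfo-style text.
# Flag names are the ones the Linux kernel reports for AMX-capable Xeons.
AMX_FLAGS = {"amx_tile", "amx_bf16", "amx_int8", "amx_fp16"}

def amx_support(cpuinfo_text: str) -> set:
    """Return the AMX-related flags found in the first 'flags' line."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return AMX_FLAGS & set(line.split(":", 1)[1].split())
    return set()

# Illustrative input; on a real machine read open("/proc/cpuinfo").read().
sample = "flags\t\t: fpu sse2 avx512f amx_tile amx_bf16 amx_int8 amx_fp16"
print(sorted(amx_support(sample)))
```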
Icy-Communication823@reddit
"New Intel Xeon 6 CPUs to Maximize GPU-Accelerated AI Performance" - it's marketing bullshit.
Wyvz@reddit
Well, all marketing is like that. But like I said they have a good reason to claim that.
Geddagod@reddit
The CPU they are pairing with Nvidia systems, the 6776P, boosts 8 cores to 4.6GHz, with a max turbo of 3.9GHz on the remaining cores and an all-core turbo of 3.6GHz. 64 cores total and 88 PCIe lanes.
Turin, meanwhile, has the 9575F, with 64 cores, a boost of 5GHz, and an all-core boost of 4.5GHz. 128 PCIe lanes. Even the 6980P only has 96 PCIe lanes.
When Nvidia went to EPYC Rome, core count and PCIe lanes were the given reasons. When Nvidia then went to SPR, ST perf was the given reason. Intel doesn't appear to have any of the advantages listed there with GNR vs Turin.
6950@reddit
The advantage Intel has is the IMC being on the same die as the CPU cores, which saves latency; that matters more for keeping the GPU fed. Not to mention Nvidia would have gotten quite the deal, with low lead times.
Exist50@reddit
GNR doesn't seem to have particularly good memory latency. That aside, where are you getting the claim that good memory latency is needed to feed a GPU? PCIe latency dwarfs memory latency. Also, Intel's PCIe subsystem is on a different die...
SteakandChickenMan@reddit
Technically Turin 2S is 160 lanes, vs 176 on GNR XCC and down, or 192 on GNR UCC. In 1S, GNR 1RIO has 136, but other configs are all less than 128.
fnur24@reddit
Note that in a 2S configuration the Xeons have more lanes than Epyc since 96/128 of the lanes are earmarked for cross-socket communication (i.e. 128/160 lanes usable, depending on configured xGMI link count) whereas Xeon's PCIe lane count already accounts for UPI lanes.
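[Editor's note: a rough sketch of the 2-socket lane accounting described above, using the numbers from this thread. Assumptions: each EPYC Turin socket exposes 128 SERDES lanes, and in 2S either 3 or 4 xGMI links per socket (taken here as x16 each) are repurposed as cross-socket fabric, leaving those lanes unusable as PCIe.]

```python
# 2S EPYC Turin usable-lane arithmetic under the assumptions above.
LANES_PER_SOCKET = 128   # SERDES lanes per Turin socket
XGMI_LANES_PER_LINK = 16  # assumed x16 per xGMI link

def turin_2s_usable_lanes(xgmi_links: int) -> int:
    """Usable PCIe lanes in a 2S Turin system for a given xGMI link count."""
    return 2 * (LANES_PER_SOCKET - xgmi_links * XGMI_LANES_PER_LINK)

print(turin_2s_usable_lanes(3))  # 160 lanes with 3 xGMI links per socket
print(turin_2s_usable_lanes(4))  # 128 lanes with 4 xGMI links per socket
```

This reproduces the 160/128 usable figures (i.e. 96 or 128 total lanes earmarked for the fabric) quoted above; Xeon's advertised lane counts already exclude UPI, so no such subtraction applies there.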
ElementII5@reddit
I think a good AMD alternative would be an SP6 SKU, but no Zen 5 SKU has been released yet. And those are 6-channel/96 PCIe lanes, so not quite comparable.
For AI servers the biggest concern is not bottlenecking the GPUs. That's pretty easily achieved with the low-core-count 6776P.
I think that at least in part Nvidia does not want to give AMD the extra business, which is understandable.
AngelicBread@reddit
Most cutting edge Nvidia AI data center compute trays use Grace CPUs and will soon use Vera CPUs.