Why do Apple and NVIDIA GPUs with similar transistor counts (≈90B) have such different ALU lane counts and performance?
Posted by jak_human@reddit | hardware | 79 comments
[removed]
the_dude_that_faps@reddit
I think it's fair to say that Apple's unified design, where memory bandwidth is shared with the CPU and still isn't as high as regular GPUs' because it relies on LPDDR5X, means the chip is filled with caches to compensate.
darknecross@reddit
The 80-core GPU has 23 MiB of SRAM: 192 KiB per core plus 8 MiB of L2.
The GB202 has 200 MiB of SRAM: 384 KiB per SM plus 128 MiB of L2.
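For reference, the per-unit figures quoted above do add up to the stated totals (taking 80 GPU cores for the M3 Ultra and 192 SMs for a full GB202 die):

```python
# Quick arithmetic check of the SRAM totals quoted above.
KIB = 1024
MIB = 1024 * KIB

# M3 Ultra: 80 GPU cores x 192 KiB each, plus 8 MiB of L2.
m3_sram = 80 * 192 * KIB + 8 * MIB
# Full GB202: 192 SMs x 384 KiB each, plus 128 MiB of L2.
gb202_sram = 192 * 384 * KIB + 128 * MIB

print(m3_sram // MIB)     # 23
print(gb202_sram // MIB)  # 200
```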
The M3 is optimized for FP16, not FP32.
https://developer.apple.com/la/videos/play/tech-talks/111375/
Slava_Tr@reddit
The Apple M3 is equally optimized for FP32 and FP16; it's just that FP16 is significantly more efficient than FP32. Similarly, with Nvidia, FP16 used to be twice as fast as FP32. However, starting with the RTX 50 series, Nvidia dropped this FP16 advantage, and now the ratio is 1:1. This is likely one of the reasons the RTX 5080 has more cores than the RTX 4080 despite having a similar transistor count.
You didn't account for the Tile/Frame buffer cache and the SLC (System Level Cache) for the M3 Ultra at all
The GPU has direct access to the SLC, which provides an additional 96 MB of cache in the M3 Ultra. This explains why the L2 cache isn't as large: the latency between SLC and GPU L2 is significantly lower than the latency between VRAM and GPU L2.
As for the amount of cache within the Apple GPU core itself, that remains unknown. If you have precise data on this, please provide a source
darknecross@reddit
Did you watch the video I linked?
I don’t count SLC as part of the GPU transistor count because Apple has in the past ripped out SLC on some of the iPad chips.
Cache data uses findings from here
https://github.com/vickiegpt/m3-chip-explainer/blob/main/gpu/README.md
hishnash@reddit
When Apple reduces SLC cache, the reason is that they are using a chip where some of the memory controllers have defects. Each block of SLC is tied to its own memory controller, so if you ship a binned chip (with fewer memory controllers, thus less bandwidth) you also lose SLC.
darknecross@reddit
No, the iPad chips had no cache whatsoever on the die. I don’t even think they’ve ever shipped a binned chip with a quarter of the memory subsystem turned off.
hishnash@reddit
No, that is not correct; they did have cache on the die for the memory controllers that are turned on. These memory controllers require their connected SLC; they do not work without it.
darknecross@reddit
No, for example on the A10X there is no L3 cache at all.
hishnash@reddit
The A10X absolutely has SLC. It is just mislabeled on many websites as L2. It is not L2; it is SLC, as it is shared across all parts of the chip.
darknecross@reddit
I don’t know where you’re getting your sources from but I can assure you that you’re wrong.
hishnash@reddit
I have a device with an A10X and I am not wrong. The SLC is required for the SoC to work; without it the display controllers, SSD controllers, etc. would not work at all.
darknecross@reddit
Are you mixing up the physical SRAMs with the cache controller? The physical SRAMs are gone, but the controller can run in bypass mode. If you have any evidence whatsoever that the A10X has its L3 cache I’d love to see it.
Slava_Tr@reddit
Thank you very much for this link. It contained useful files. For certain operations, you can allocate 256 KB inside the core itself. In other words, Shared Memory is not 32 KB, but 256 KB per GPU core. And that brings it to about 416 KB total per core instead of 192 KB.
And so what? That doesn’t contradict what I said. It doesn’t mean FP32 is not optimized, it just means FP16 is significantly more efficient
For example, take that quote (this is about M1, but that's not the point) from what you sent. It's still equally optimized.
I don’t know what the iPad has to do with this, since this GPU uses SLC cache. I’m wondering if, over time, they might make it similar to the 3D cache in Ryzen. Especially since Apple has already started using chiplets in the Apple M5 Pro/Max
Yeah, M3 is a relatively weak GPU architecture by PC standards, because it doesn't have Tensor cores and only has first-generation mesh shader support (while Nvidia already had it starting with the RTX 20 series). The M5 Max delivers similar RT performance and is significantly faster in tensor computations, while using about ~2x fewer transistors. This is because, with a similar TSMC process node, the chip area hasn't changed much; it's purely architectural optimization. The M5 Pro is about 15% faster than the top-end M3 Ultra, despite having 4x fewer GPU cores, because it includes a Neural Accelerator.
I hope the M5 Ultra will now have a single-chip GPU, a logical continuation of the M5 Pro/Max. It will improve the efficiency of the GPU cores compared to the other M Ultra chips. I got a bit carried away
darknecross@reddit
I don’t see how you’re getting 416 kiB of RAM per core. Are you double-counting somewhere?
M3 has dedicated ALU pipelines for FP16, compared to Blackwell which has a shared FP32/INT32 pipeline.
Are you saying that the Apple doc saying “Apple GPUs are highly optimized to execute FP16 arithmetic” doesn’t mean the GPU is optimized for FP16?
The Apple GPU and SLC are separate blocks, so it doesn’t make sense to include it when talking about the GPU transistor count.
Slava_Tr@reddit
Where did you get that from? The M3 GPU doesn't have dedicated ALU pipelines for FP16. It uses the same pipelines as FP32. The fact that it is optimized for FP16 does not mean it is less optimized for FP32
I already provided a link to the Apple file where it clearly states 256 KB. In the link you provided, it mentions 32 KB, but that is not shared memory for a GPU core; it is the maximum size for a single tile. In the GPU cores of M1–M2 it's 128 KB, while in M3–M5 GPUs it's 256 KB. That's why the calculated allocated memory came out lower.
Maybe I’m wrong somewhere, but this is definitely more than 192 KB, at least 256KB + L1 cache
darknecross@reddit
Again, it’s in the video on the Apple Developer website.
It’s around the 15m mark.
This is explicitly different from Blackwell which supports either FP32 or INT32 dispatch each cycle.
https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/quadro-product-literature/NVIDIA-RTX-Blackwell-PRO-GPU-Architecture-v1.0.pdf
For the memory size, are you talking about this metric?
This isn’t talking about physical memory at all. It’s saying the image block object can take up to 256 KB, which likely means L2 spillover.
Slava_Tr@reddit
Well again the Apple GPU is equally optimized for all three data types and FP16 is more efficient
A 256 KB spillover involving the L2 cache doesn’t make sense, because then there wouldn’t be enough L2 cache for all cores. Also, the M1–M2 have 128 KB
Plazmatic@reddit
IIRC it was actually the 3000 series that removed "full throughput" f16. Or rather the 3000 series "doubled" the cuda core count compared to the previous generation by doubling the instructions per clock cycle of *just* the fp32 lanes of warps/GPU SIMD units, which roughly had the results of actually doubling the arithmetic performance of games, but this caused the previously full throughput f16 lanes to now be half throughput in comparison.
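The effect of that doubling can be sketched with a toy issue-rate model. This is my own simplification (it ignores scheduling, operand delivery, and register-file limits entirely); the per-SM lane counts match the Turing/Ampere whitepapers, and the 36-INT-per-100-FP mix is Nvidia's commonly cited game average:

```python
# Turing: 64 FP32 lanes + 64 dedicated INT32 lanes per SM.
# Ampere: 64 FP32 lanes + 64 shared FP32/INT32 lanes per SM.

def cycles_turing(fp_ops, int_ops):
    return max(fp_ops / 64, int_ops / 64)

def cycles_ampere(fp_ops, int_ops):
    # INT32 must use the shared pipe; FP32 can fill both pipes.
    return max(int_ops / 64, (fp_ops + int_ops) / 128)

# Pure FP32: the "doubled core count" really is 2x.
print(cycles_turing(128, 0) / cycles_ampere(128, 0))              # 2.0
# With ~36 INT32 ops per 100 FP32: the real-world gain is smaller.
print(round(cycles_turing(100, 36) / cycles_ampere(100, 36), 2))  # 1.47
```

This is roughly why the doubling showed up as a 10-30% gain in games rather than 2x.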
Mina_Sora@reddit
Iirc dynamic caching with very efficient memory controllers on Apple Silicon also reduces some amount of physical cache needed for a similar task
Just_Maintenance@reddit
M3 Max has a 64MB shared cache though. M3 Ultra presumably has twice that, so 128MB of shared cache on top of all the dedicated caches.
darknecross@reddit
I’m not counting the SLC as part of the specific GPU transistor budget. In the past Apple has ripped out the SLC for its iPad chips.
hishnash@reddit
Very hard to count the transistor budget for any cache, as the density of cache (transistors per mm²) is much, much lower than logic. This is due to leakage; these days our nodes are getting so small that they are too small for cache cells if we built them at the minimum size the node would allow (physics just does not let us do that with current cache designs).
amethyst_mine@reddit
this is true but it does the opposite of answering OP's AI-generated question
ky7969@reddit
Why didn’t you ask the same LLM you used to make the post?
YeOldeMemeShoppe@reddit
You must feel proud to be able to literally copy and paste the same comment to every thread and call it a day. Instead of spending the time discussing the topic.
ky7969@reddit
That’s not what I’m doing lmao. Even if I was, how could you criticize me for that when the post is LITERALLY a copy and paste. Thanks for also adding nothing to the “topic”, asswad.
Personal-Tour831@reddit
If I were to simplify the reason: the Apple chip utilises a greater level of dark silicon each cycle, meaning fewer of its transistors are active and running at maximum clock frequency at any given moment.
In return, the Apple chip is able to operate at much lower power.
rorschach200@reddit
There is no puzzle.
Nvidia in Ampere (and since) went with double FP32 units per pipeline and went ahead counting that as "cores".
The number of decoder, operand gather blocks, schedulers, register file read ports, and most of other structures hardly changed. In practice that change increased performance in FP32 limited workloads by 10-30% depending on the case, and on average across the board - by under 10%.
Any given major design family over the history of its existence flip flops between oversubscribing ALUs or not oversubscribing ALUs compared to operand delivery depending on exactly where that particular design currently is in its design space WRT PPA, used process node, and target workloads.
So that whole ALU comparison business is pointless; you need to measure perf/mm^2 and perf/W in real applications end-to-end. That's it.
ResponsibleJudge3172@reddit
That's no longer the case with Blackwell, whose ALUs are Maxwell/Pascal-style 128 INT32/FP32-capable units with no branching.
Just_Maintenance@reddit
First, the GPU in M3 Max is about ~35% of the die area (https://youtu.be/8bf3ORrE5hQ?si=3IXpWfCDKVWx_IP6&t=504).
On M3 Ultra it's even less than 35%, as the die shots used in that video don't include the interconnect used to connect the 2 SoCs.
Regardless, why Apple uses more transistors per ALU we don't know and probably never will.
I would guess Apple's arch is just worse and Apple needs more transistors to do the same work. Apple's GPU is also probably focused on efficiency, so they spend more transistors on caches and whatnot.
Something else I suspect about Apple's silicon design in general is that they have a lot of trouble scaling wider. For example, I suspect the real reason Apple introduced the new "Super" and "Performance" cores is that their CPU clusters can only handle up to 6 cores, and they don't want to add a fourth cluster or figure out 8 cores per cluster. With that in mind it makes sense Apple would prefer trying to extract more performance out of the same cores.
Qesa@reddit
If you're just counting the GPU cores, then GB202 is also only about 60% GPU. But that's a very silly way to count, of course. Cache, memory controllers, display engines, I/O, video encoders/decoders, and NoC are all needed for the GPU to function. That brings GB202 back to 100% and M3 Ultra over 50%.
hishnash@reddit
For stuff like display controllers, the design choices have a HUGE impact on their size.
Apple's display controllers are huge since they each have all the local cache they need to not only do all the display stream compression locally without hitting VRAM, but also final compositing (yes, they do some of the system compositing, plus the AA on screen corners and the notch) as well as colour grading, etc. The goal for Apple is to let the display controller stay powered up to deliver frames while the rest of the SoC can fully sleep; this massively helps with low-power tasks. But it requires a LOT of die area compared to NV display controllers, which use the VRAM (much more power) and do not do any blending etc. themselves; even some later stages of the display stream encoding are done on the GPU function blocks (costing a LOT of power).
hishnash@reddit
Also, die area does not equal transistors. TSMC does not force you to pack all your transistors at the same density; you can (and do) adjust the density depending on the task the transistors are doing.
Apple does not have that many issues scaling wide; the reason they split the cores is the size of these super cores. If you have a multi-threaded task that can run on 6 cores, you will get better performance having 2 super cores and 4 performance cores running it than having 3 super cores. (A single super core is as large as 4 performance cores, but in multi-core tasks it is less than 2x faster than a single one.)
The reason they built the super cores was that many tasks are single-threaded, or maybe limited to just 2 threads. So they wanted a core that could push the IPC for these to get outstanding single-core performance (without high clock speeds and power), but if they built a CPU just out of these it would not be as good for multi-threaded tasks.
Mina_Sora@reddit
Apple doesn't specifically have trouble scaling the core cluster larger for their silicon. They design the SoC in its entirety as a whole for a balanced design, even down to memory controller efficiency; they are simply targeting a balanced approach for a certain goal with each release, afaik.
amethyst_mine@reddit
apple scales wider at a uarch level much better than any other manufacturer rn. they pioneered the ultra-wide decoders that pretty much every company uses nowadays. what you're saying may theoretically be true at a larger scale, but there is 0 evidence for it. apple's philosophy has been pushing IPC and keeping clocks lower, which needs more transistors in general but is also more efficient
Lower-Limit3695@reddit
Architecturally speaking, Apple and other low-power ARM GPUs use tile-based deferred rendering (TBDR), whereas Nvidia uses immediate-mode rendering (IMR), as does AMD.
TBDR splits the screen into tiles and processes them independently, sacrificing throughput for lower power draw and memory bandwidth requirements. It also waits until the last possible moment to do fragment shading, after visibility has been verified.
IMR processes triangles immediately through the whole graphics pipeline. This provides higher throughput at the cost of higher bandwidth and power requirements.
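The ordering difference can be sketched in a few lines. This is purely my own illustration (no real tiling, just the deferred-visibility part); a "primitive" here is just a list of (pixel, depth, color) fragments, and we count how much shading work each approach spends on the same scene:

```python
def render_imr(prims):
    """Shade fragments as draw calls arrive; work may be overwritten later."""
    fb, shades = {}, 0
    for prim in prims:                          # submission order matters
        for px, depth, color in prim:
            if depth < fb.get(px, (float("inf"), None))[0]:
                shades += 1                     # spent even if overdrawn later
                fb[px] = (depth, color)
    return fb, shades

def render_tbdr(prims):
    """Gather all geometry first, resolve visibility, shade each pixel once."""
    nearest = {}
    for prim in prims:                          # geometry/visibility pass only
        for px, depth, color in prim:
            if depth < nearest.get(px, (float("inf"), None))[0]:
                nearest[px] = (depth, color)
    return nearest, len(nearest)                # one shade per visible pixel

# Far quad drawn first, near quad second (worst case for IMR overdraw):
far = [((x, y), 2.0, "far") for x in range(4) for y in range(4)]
near = [((x, y), 1.0, "near") for x in range(4) for y in range(4)]
_, imr_shades = render_imr([far, near])
_, tbdr_shades = render_tbdr([far, near])
print(imr_shades, tbdr_shades)  # 32 16
```

Both produce the same final image, but the IMR path shaded every pixel twice; that wasted shading (and the bandwidth behind it) is what the deferred step avoids.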
gabeandjanet@reddit
This makes it sound like Apple is taking a novel approach.
Nvidia also used tile-based rendering, 15 years ago. It's why, at the time, their cards were more efficient than AMD's GCN.
Arm GPUs are way behind architecturally and performance-wise in modern games due to still using tile-based deferred rendering.
Lower-Limit3695@reddit
Nvidia's tiling approach using z-tests isn't as extreme as the TBDR approach used by Apple and other Arm devices. TBDR gpus perform an early sort to avoid overdraw that Nvidia's gpus do not do.
hishnash@reddit
Yes, for NV GPUs the draw call order is critical; you want objects closer to the camera to draw first so that they can obscure distant objects, thus culling them.
But if you consider an object at an oblique angle to the camera (so some of it is close and some of it is far away), you end up with a culling issue: draw it too soon and parts of it will be drawn that are later hidden by other stuff, but draw it too late and what it is hiding will be drawn before it...
On a TBDR GPU, as you note, all the geometry is handled first (within a render pass), and thus the draw call order does not matter (for non-transparent objects). So you save a lot of the complex preprocessing we see in game engines to sort the scene into the best front-to-back order for draw call submission. You also get much better obscured-fragment culling than an engine doing this sorting can, since objects are not all flat layers facing the camera.
hishnash@reddit
They are using an immediate-mode tile approach; they do not defer the fragment evaluation until all geometry from all draw calls within the render pass has been computed.
NV processes the geometry for a draw call, splits it into tiles, and directly starts rendering it before processing the geometry of the next draw call. This means that on the NV GPU the HW is unable to skip rendering parts of an object obscured by something else, if that something else is in a later draw call. In addition, on a TBDR GPU you do not write the G-buffer values out to VRAM until the render pass has completed; all blending etc. happens on chip, which means things like MSAA are almost free in most cases.
There is a HUGE difference between a TBIR and a TBDR GPU, both from a HW perspective and in how you program it to make best use of it.
tverbeure@reddit
This is wrong in many ways...
jocnews@reddit
Note that the TDP of the Apple GPU may be underestimated. Apple doesn't commit to any official value in its specs, and people mostly use telemetry data to estimate the GPU's power consumption, which likely isn't comparable to the TDP given by vendors that do disclose a value, because Apple's power management doesn't seem to be based around boosting clocks until a TDP or other limit is met (which is what AMD uses, for example). If your boost behavior doesn't do this, you will undershoot TDP in most tasks.
Furthermore, Apple's telemetry apparently doesn't capture the whole energy usage, because it is just model-based guesswork; there are no actual voltage and current measurements going on during operation. Apparently the discrepancy can be quite high during GPU compute loads, so that 60-80W value may possibly be much lower than what's realistic.
The big difference in power consumption is chiefly due to strategy: Nvidia pushes clocks higher, both when designing the architecture and when selecting the operating point on the frequency curve, because that increases the performance/area ratio. Basically you shift some of the overall cost of the chip and its operation to the consumer (higher power bill, but cheaper acquisition cost, since a given performance level is achieved at a lower die area).
hishnash@reddit
Apple's telemetry does get the raw power going into the VRM; this is not a guess.
The guesswork comes when you try to split out how this power is being used, how much is going to the CPU, how much to the memory, how much to the CPU fabric, SSD controller, etc how much of the memory bandwidth power draw is for the CPU vs the GPU etc.
If you just ask the system for the power draw of the GPU, then what you get is the power of the GPU, not including the memory controller, the memory, the fabric connecting GPU to CPU, or even the SLC cache layer.
PMARC14@reddit
While I don't think your stats are right, and some other reasons are covered elsewhere (TBDR vs. hybrid immediate-mode rendering), Nvidia's GPU arch does more with fewer transistors simply because it is a vastly better GPU architecture. I think people miss how bad the Apple GPU arch has been in comparison to other companies' because of how many transistors they put into it and how their design and accelerators cover for it. Looking at the advances, the M5 GPU has been one of the largest architectural step-ups for the entirety of Apple Silicon, putting it up as an actually competitive design.
hishnash@reddit
I would not say the NV arch is better.
What you can say is that games are written to target IMR GPUs rather than TBDR GPUs, and that shows when you compare them.
At an arch level Apple has been ahead of NV for a while, but this does cost transistors. Having a last-level memory pool that is dynamic and can be re-assigned between registers, cache, and threadgroup memory is HUGE, and would benefit NV a lot if they had it. And for graphics, having almost free 8x MSAA (with M5, or 4x with M4 and older) is also huge, but game devs would need to make large pipeline changes to make use of this, so we will mostly only see it on mobile.
PMARC14@reddit
That clearly does not shake out in any work tools, and you yourself listed cache optimizations Apple uses that, if Nvidia used them, would make it perform a lot better. Modern games are not written in a manner where their rendering pipelines make a huge difference to how they perform if provided a competent API, especially looking at modern deferred-rendering titles. As far as I can see, Metal is a pretty decent one, so native games on Apple GPUs still performing subpar despite that suggests architecture is the issue. Lastly, I am not sure where Apple having 8x MSAA for free comes up in the design docs; not that it would help in almost any modern deferred-rendering title, where MSAA is dead and buried compared to DLSS and other AI solutions.
Mina_Sora@reddit
https://youtu.be/_5yEcJfB6nk?si=9t91VaOBuSrTD5UP
It's literally from their video introducing developers to the changes in M5. MSAA is used before TAA on Apple. It's done automatically on die, so it's free-ish.
hishnash@reddit
With obscured-fragment culling (with an engine that uses the GPU properly), MSAA is only sampled along the edges of the visibility stencil, making it massively cheaper than on PC, where you can't apply this optimisation because you have no knowledge of where the edge of the visibility stencil is, since you don't know what is going to obscure or even intersect a surface.
hishnash@reddit
NV can't use these optimisations without making HW changes (and using more die area).
When you say a game is "deferred", this is completely different from the "deferred" in a TBDR GPU.
Most ports to Mac (AA and AAA games) are rather poor quality; they do not adapt how they issue commands to take the HW into account. These games are in effect treating the HW as if it were an IR GPU, and thus making it unable to use any of its HW features.
MSAA is very useful (if you build your pipeline to use it) even if you're using deferred rendering. There is nothing requiring you to use it only for the final render; you could, for example, use it for your detail shadow pass, and you can even use MSAA in a deferred pipeline. Remember, with a TBDR GPU you are paying the geometry compute twice (if not more) if your deferred rendering splits over multiple render passes, so you can still use MSAA on that final lighting render pass. That does not mean you can't also use an upscaler, or even do a mix where you output the depth or contrast mask from the raw (un-downsampled) MSAA samples and feed it as an input to the upscaler to help reduce its artefacts... (this is common in mobile games using upscalers today that still want things like power lines to stay visible).
Mina_Sora@reddit
Discounting Dynamic Caching, TBIR, on-chip SMAA, etc. for Apple when comparing with NVIDIA's architecture is already disingenuous; Apple's design primarily optimises for occupancy and utilisation, unlike NVIDIA's, iirc.
Polar_Banny@reddit
How, can you be more specific? Thanks
Mina_Sora@reddit
Basically, the post is under the assumption that the features Apple and NVIDIA GPUs have in their designs are primarily software or marketing, which isn't entirely correct. For example, Apple has most of the features supported on chip, such as 4x SMAA done on chip up to the M3/M4 GPUs, whereas on NVIDIA it's done in software.
hishnash@reddit
MSAA, not SMAA. By the way, the M5 series has 8x MSAA (not SMAA), and because this is on chip, along with how the fragments are sorted first in a deferred pipeline, if a dev uses the deferred pipeline properly the cost of MSAA (even 8x) is often less than 5%. Not to mention the complete obscured-fragment culling done in HW irrespective of draw call ordering, which just makes things so much simpler.
Mina_Sora@reddit
Oh typo lmao and yes
JaggedMetalOs@reddit
An important architectural difference to consider is that the M3 Ultra's GPU is an iGPU, so it needs to share and manage system resources like memory access with the CPU. The 5090's dedicated GDDR memory also has over twice the throughput of the M3 Ultra's unified memory, which likely drives the need for larger caches in the Apple GPU.
Mina_Sora@reddit
Apple's GPUs barely touch DRAM bandwidth because their throughput is designed around TBIR, iirc; that's the reason for the larger GPU caches on the Apple GPU, unlike what you're thinking. It's also why, from M5 onwards, the GPU can share the cache pool of the CPU under their implementation of unified memory for graphics rendering workloads ONLY (without losing cache bandwidth for the CPU clusters). Other workloads presumably require sharing bandwidth across the SoC, where the bandwidth goes to the CPU, Media Engine, and ANE instead.
Just_Maintenance@reddit
The memory bandwidth in Apple SoCs is overwhelmingly there for the GPU. The CPU barely uses any.
Mina_Sora@reddit
https://www.michaelstinkerings.org/apple-m5-gpu-roofline-analysis/
Besides this, TBIR rendering itself reduces DRAM bandwidth usage on the GPU, hence why it's used on phones. What?
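For context, the analysis in that link rests on the standard roofline formula: attainable throughput is the minimum of peak compute and bandwidth times arithmetic intensity. A sketch with made-up placeholder numbers (not measured M5 figures):

```python
def roofline_gflops(peak_gflops, dram_gb_per_s, flops_per_byte):
    # Attainable throughput is capped by compute or by DRAM traffic,
    # whichever bites first.
    return min(peak_gflops, dram_gb_per_s * flops_per_byte)

# High arithmetic intensity (lots of on-chip reuse): compute-bound,
# DRAM bandwidth barely matters.
print(roofline_gflops(10_000, 400, 100))  # 10000
# Streaming workload (1 FLOP per byte): bandwidth-bound.
print(roofline_gflops(10_000, 400, 1))    # 400
```

Whether a GPU "barely touches DRAM" therefore depends on the workload's intensity, which is where the two sides of this argument diverge.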
Just_Maintenance@reddit
You do realize that AMD and NVIDIA also use tiled based rendering? NVIDIA introduced it with Maxwell and AMD with Vega.
The memory bandwidth is there for the GPU. Otherwise why does M1 Max, M2 Max and M5 Max have double the memory bandwidth for the same CPU compared to their Pro counterparts.
Mina_Sora@reddit
AMD and NVIDIA use tile-based immediate rendering; Apple uses a distant derivative of PowerVR's tile-based deferred rendering, which is aimed at reducing bandwidth usage on mobile devices while keeping graphics power high. You can just google the difference between TBDR (Apple and mobile phones) and TBIR (dGPUs).
There's also literally a benchmark linked above that measured DRAM bandwidth usage and scaling for the Apple GPUs, the difference between an on-chip-bandwidth-dependent iGPU IP and AMD's DRAM-bandwidth-dependent RDNA iGPU IP, and how those differences affect GPU resource allocation and usage. There's also Geekerwan measuring the efficiency gained by the A18 Pro's Dynamic Caching implementation vs the A19 Pro's, as they correspond to the M3 and M5 iGPU generations respectively. There are literally many sources showing that DRAM bandwidth does not affect Apple's iGPU throughput much, if at all, beyond reads/writes to share data in the unified memory pool with the CPU and the displays.
You can literally go find evidence to the contrary from my sources instead of going with what you believe about how GPUs work.
Just_Maintenance@reddit
You're right on tiled deferred. It's superior to tiled immediate and reduces memory bandwidth usage. But what about everything that isn't rendering? For compute shaders it doesn't even apply.
In the article you posted there's nothing to suggest the Apple GPU is somehow independent of memory, or even that it uses little memory bandwidth. It clearly says the GPU can use hundreds of gigabytes per second of memory bandwidth and saturate the entire memory bus.
Of course, if you write an optimized program you can fit everything in the caches and never go to memory, but that goes for any GPU. What about badly optimized programs?
That article is quite excellent, it even quantifies how much better Apple's rendering pipeline is:
30-40% is a huge improvement, but not quite "Apple's GPUs barely touches DRAM bandwidth" like you said.
Ultimately, the proof is in the pudding. M5 Max has twice as much memory bandwidth as M5 Pro, even though they have the same CPU and same ANE. It's obvious that memory bandwidth is there to feed the GPU. Maybe not for games, but definitely for compute.
hishnash@reddit
That tile memory, and the access methods to it, also apply to compute; in that space we call it threadgroup memory, and the fact that we can access it through all the same samplers we would expect is huge.
Modern Apple silicon GPUs can also dynamically adjust, per thread group, the proportions of this memory used as threadgroup memory, cache, or register memory.
This helps occupancy a LOT, most of all if you have dynamically branching shader code. One of the big features of Metal is that you can read (and write) and call function pointers from any shader, just as you would on the CPU. To do this on most other platforms, you write a large lookup table in advance and read/write a value that you switch over, leading to very large uber-shader kernels where the driver must look at the most expensive branch (based on register usage etc.) and then only start that many threads. Even if you never end up taking that branch, the choice must be made up front, so most of the GPU sits idle.
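The occupancy cost of that up-front sizing can be shown with a toy model (my own illustration; the register-file size, group size, and register counts below are made-up numbers, not Apple's or Nvidia's actual figures):

```python
REGISTER_FILE = 65_536   # registers per core (hypothetical)
GROUP_SIZE = 256         # threads per thread group (hypothetical)

def resident_threads(regs_per_thread):
    # How many full thread groups fit in the register file.
    groups = REGISTER_FILE // (regs_per_thread * GROUP_SIZE)
    return groups * GROUP_SIZE

branch_regs = {"cheap_material": 32, "expensive_material": 128}

# Uber-shader: the driver must size for the most expensive branch up front,
# even if that branch is rarely taken.
print(resident_threads(max(branch_regs.values())))      # 512
# Dynamic per-group allocation: the common cheap path keeps far more
# threads in flight.
print(resident_threads(branch_regs["cheap_material"]))  # 2048
```

With worst-case sizing, a rarely-taken expensive branch quarters the number of resident threads; dynamic reallocation avoids paying that cost on the common path.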
It depends a LOT on what you're doing. If you're doing graphics and you write your pipeline in an optimal way (1 to 2 render passes per perspective, lots of tile compute shaders, etc.), you will find that the bandwidth use is tiny compared to the same computation on an IR PC GPU. There is a LOT you can do on die; even compared to other TBDR GPUs, where the tile memory is limited to just image buffers, Apple lets us store any C struct we like there.
Mina_Sora@reddit
There's more than just the GPU to feed on Apple Silicon; the Media Engine and ANE tap into it too. I was just trying to say that, specifically for graphics workloads, the GPU does the majority of its work on chip, which I thought was what OOP was asking about for their comparison with NVIDIA. OOP was using the M3 Ultra as the reference, not the M5 series, but I'm using the article as a citation for the baseline of how TBDR helps in very specific workloads under Apple's design, unlike NVIDIA's approach.
Just_Maintenance@reddit
I'm sorry but no, you're just wrong. The bandwidth is for the GPU on all generations every time. Again, maybe not for graphics, but unarguably for compute.
About the ANE. It's the same ANE from the iPhone all the way to the Max silicon. If it really could benefit from the 400-600GB/s of the Maxes, then it would be unbelievably memory starved on the iPhone chips with 50-80GB/s. Apple could put a significantly smaller ANE on the iPhones and get the exact same performance.
And of course, you can't explain why M1/M2/M4/M5 Max have twice the memory bandwidth for the same CPU compared to their Pro counterparts (or nearly the same CPU on M4 Max's case). Let alone the ANE, which is the same across the entire stack independent of memory bandwidth.
Also, we can just check this stuff. Just as a dumb example, my own M3 Max (400GB/s theoretical) on some workloads:
Memory bandwidth tracked via mactop. Qwen ran through LM Studio with llama.cpp. The Apple Foundation model ran through Apfel (which uses Apple's FoundationModels framework). For video transcoding I used ffmpeg with a 1080p H.264 10 Mbps video as input. Commands used for video transcoding:
I invite you to try the same and check how much memory bandwidth different workloads use. It's pretty interesting.
hishnash@reddit
If you look at games that have been ported to Apple silicon (like Cyberpunk) in the GPU inspector, you will very quickly see the reason for the low bandwidth (and lower perf).
Cyberpunk has the issue that they did not put in the work to target a TBDR gpu.
There are over 100 render passes for the main camera perspective, many of which have just one draw call with a full-screen quad for a visual effect.
On a TBDR GPU, these single-draw-call visual effect (full-screen quad) render passes take 10x as long to set up and tear down as they do to run the effect. All of these effect passes should have been merged into a single compute pass if they need adjacent pixel data; if they do not need adjacent pixel data, they should have been inlined within the main render pass as a tile compute shader.
In addition, the Cyberpunk port appears to run an additional render pass for each (and every) light that illuminates the scene, with repeated draw calls for 1000s of meshes (all from the same perspective). Not only does this mean all the vertex compute is repeated, but the setup and teardown cost is also increased massively.
In the end, when you look at these titles, they are extremely poorly ported, and that explains the very poor bandwidth utilisation: they are mostly sitting idle waiting for sync events and running the GPU in an almost completely serial manner. Due to how the shaders are configured, even within a render pass the GPU is unable to dispatch concurrent draw calls as it should.
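The setup/teardown overhead described above can be put into a back-of-the-envelope cost model. The 10x setup figure is taken from the comment above; the pass count is illustrative:

```python
# Toy cost model for render-pass overhead on a TBDR GPU.
# Assumption (per the comment above): setup + teardown of a render
# pass costs ~10x the work of a single full-screen effect.
EFFECT_COST = 1.0
SETUP_TEARDOWN = 10.0 * EFFECT_COST

def separate_passes(n_effects):
    # One render pass per effect: pay setup/teardown every time.
    return n_effects * (SETUP_TEARDOWN + EFFECT_COST)

def merged_pass(n_effects):
    # All effects inlined into one pass: pay setup/teardown once.
    return SETUP_TEARDOWN + n_effects * EFFECT_COST

print(separate_passes(100))  # 1100.0
print(merged_pass(100))      # 110.0
```

Under these assumptions, merging 100 effect passes into one pass is a ~10x reduction in total GPU time, which is why unmerged ports leave so much bandwidth unused.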
Mina_Sora@reddit
The DRAM on Apple SoCs has most of its bandwidth accessible on a per-core basis, which matters much more on the base chips, where it helps achieve the single-core performance they're known for.
amethyst_mine@reddit
But Apple's GPU doesn't have larger caches?
RealThanny@reddit
The actual CUDA core count is half what is advertised.
With Pascal and prior, each CUDA core had one primary ALU which could do FP32 or INT32.
With Turing, a dedicated INT32 ALU was added.
With Ampere, that dedicated INT32 ALU was changed to the same kind of combo ALU that Pascal had, meaning each CUDA core had one FP32 ALU and one FP32/INT32 ALU. The latter could do FP32 only when no INT32 is required, in batches of 32.
At the last minute, nVidia chose to advertise this configuration with twice the actual CUDA core count, for no sane reason.
I don't know any details of Apple's GPU architecture, but I'd guess they have dedicated FP and INT ALUs per compute unit, whatever they call it.
Once you count the compute resources correctly, the difference is far less stark than you think.
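A minimal sketch of the counting argument above, assuming an Ampere/Ada-style SM with 64 dedicated FP32 lanes plus 64 combo FP32/INT32 lanes (the utilization parameter is illustrative, not a measured workload split):

```python
def effective_fp32_lanes_per_sm(int32_pipe_share):
    """Ampere/Ada-style SM: 64 dedicated FP32 lanes, plus 64 combo
    lanes that run FP32 only in cycles not claimed by INT32 work.
    int32_pipe_share is the fraction of combo-pipe cycles spent on INT32."""
    dedicated = 64
    combo = 64
    return dedicated + combo * (1.0 - int32_pipe_share)

print(effective_fp32_lanes_per_sm(0.0))  # 128.0 (the marketed count)
print(effective_fp32_lanes_per_sm(1.0))  # 64.0  (INT-heavy worst case)
```

The marketed "128 CUDA cores per SM" is the pure-FP32 best case; any INT32 work in the stream pulls the effective FP32 lane count back toward 64.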
jocnews@reddit
I don't agree that this makes the "actual CUDA core" count different. ("CUDA core" is marketing nonsense, of course; the only thing that can really be called a core is the SM.) "For no sane reason" is not an appropriate way to put it, IMHO.
GPU workloads are mostly FP32, so having double the theoretical FP32 throughput is legit, and having only half the throughput for INT ops usually doesn't matter, because the share of those ops in typical code tends to match that balance.
Last but not least, this asymmetry only existed in the Ampere and Ada Lovelace architectures. Blackwell has done away with it: all pathways are fully symmetric, so an SM provides 128 """""CUDA cores""""" no matter which way you want to count.
RealThanny@reddit
CUDA is a software library which includes floating point and integer functions. If a "CUDA core" can't do integer math, it's not a real CUDA core.
Your agreement is not required for logical necessity to be true.
jocnews@reddit
A CUDA core is not a core, it's just one SIMD lane of a SIMD engine.
An SM is a GPU core that has multiple SIMD engines. Prior to Blackwell, these were optimised by having some of them process only certain op types, based on the usual distribution of ops in code. Similarly, CPUs can have simple and complex decoders, and simple and complex ALUs. Nobody says the simple units don't count...
ResponsibleJudge3172@reddit
I have fully given up on showing that Blackwell, despite whatever meager advances in performance, is an entirely different architecture from the last 3.
raptor217@reddit
Why did you ask this question in chip design, get told this isn’t an answerable question (and a flawed concept entirely) and then just ask another subreddit?
games-and-chocolate@reddit
Simple. There are two types of humans, let's say. Type A: can do very difficult math equations in their head and calculate the solution, no paper and pen needed.
Type B: can too, but the knowledge was used many years ago and has faded a bit, so they have to review it in textbooks to see how it works, then get paper and pencil to calculate.
Both types exist in real life, and the same goes for GPUs. One chip is just better: more efficient, wastes less time, wastes less energy.
Awkward-Candle-4977@reddit
The transistors are also used for caches.
And the clock speed difference is about 1 GHz.
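That clock gap compounds with the lane-count gap in the peak-throughput arithmetic. A sketch using approximate public figures (the ALU counts and boost clocks below are assumptions, not numbers from this thread):

```python
def peak_fp32_tflops(alu_lanes, clock_ghz):
    # Each FP32 lane can retire one FMA (2 FLOPs) per cycle.
    return alu_lanes * 2 * clock_ghz * 1e9 / 1e12

# Approximate figures: RTX 5090 (21760 lanes, ~2.41 GHz boost)
# vs an 80-core Apple GPU (80 x 128 = 10240 lanes, ~1.4 GHz).
print(round(peak_fp32_tflops(21760, 2.41), 1))  # ~104.9
print(round(peak_fp32_tflops(10240, 1.4), 1))   # ~28.7
```

Roughly 2x the lanes times roughly 1.7x the clock is most of the headline peak-TFLOPS gap, before any cache or architecture differences enter the picture.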
R-ten-K@reddit
Your estimation of the distribution of transistors within the M3 is wrong, and from there the conclusions you are drawing carry that uncertainty.
jak_human@reddit (OP)
No, it's not a question or a tech support request, it's just a discussion. Please read through the post; it's just me trying to start a discussion about transistor parity between two different chips.