Beating cuBLAS in SGEMM from Scratch
Posted by salykova@reddit | LocalLLaMA | 12 comments
A while ago, I shared my article here about optimizing matrix multiplication on CPUs, achieving performance that outpaced NumPy - Beating NumPy's matrix multiplication in 150 lines of C code
I received positive feedback from you, and today I'm excited to share my second blog post. This one focuses on an SGEMM implementation that outperforms cuBLAS with its (modified?) CUTLASS kernel across a wide range of matrix sizes. The blog walks through benchmarking code on CUDA devices and explains the algorithm's design along with the optimization techniques: inlined PTX, asynchronous memory copies, double buffering, avoiding shared-memory bank conflicts, and efficient coalesced stores via shared memory.

The code is super easy to tweak, so you can customize it for your projects with kernel fusion or just drop it into your libraries as-is. If you have any questions, feel free to comment or send me a direct message - I'd love to hear your feedback. Below, I've included performance comparisons against cuBLAS and Simon Boehm's highly cited work, which is now integrated into llamafile as tinyBLAS.
Blog post: https://salykova.github.io/sgemm-gpu
Code: https://github.com/salykova/sgemm.cu
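To give a concrete feel for the double-buffering and asynchronous-copy techniques listed above, here is a minimal sketch that uses CUDA's `<cuda_pipeline.h>` primitives instead of raw PTX. This is not the kernel from the repo - each thread computes a single output element and the matrix dimensions are assumed to be multiples of the tile size - it only illustrates how the next K-tile is prefetched into one shared-memory buffer while the current one is being consumed.

```cuda
// Illustrative double-buffered SGEMM tile loop with asynchronous copies.
// NOT the tuned kernel from the blog post: no register tiling, vectorized
// stores, or bank-conflict avoidance. Launch with dim3(TILE, TILE) blocks
// and dim3(N / TILE, M / TILE) grid; M, N, K must be multiples of TILE.
#include <cuda_pipeline.h>

#define TILE 32

__global__ void sgemm_double_buffered(const float* A, const float* B, float* C,
                                      int M, int N, int K) {
    __shared__ float As[2][TILE][TILE];
    __shared__ float Bs[2][TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;

    // Stage 0: asynchronously copy the first K-tile into buffer 0.
    __pipeline_memcpy_async(&As[0][threadIdx.y][threadIdx.x],
                            &A[row * K + threadIdx.x], sizeof(float));
    __pipeline_memcpy_async(&Bs[0][threadIdx.y][threadIdx.x],
                            &B[threadIdx.y * N + col], sizeof(float));
    __pipeline_commit();

    float acc = 0.0f;
    int numTiles = K / TILE;
    for (int t = 0; t < numTiles; ++t) {
        int cur = t & 1, nxt = cur ^ 1;

        // Prefetch the next K-tile into the other buffer while this one is used.
        if (t + 1 < numTiles) {
            int k = (t + 1) * TILE;
            __pipeline_memcpy_async(&As[nxt][threadIdx.y][threadIdx.x],
                                    &A[row * K + k + threadIdx.x], sizeof(float));
            __pipeline_memcpy_async(&Bs[nxt][threadIdx.y][threadIdx.x],
                                    &B[(k + threadIdx.y) * N + col], sizeof(float));
        }
        __pipeline_commit();

        // Wait until the copies for the *current* tile have landed, then sync
        // so every thread sees the full tile in shared memory.
        __pipeline_wait_prior(1);
        __syncthreads();

        for (int kk = 0; kk < TILE; ++kk)
            acc += As[cur][threadIdx.y][kk] * Bs[cur][kk][threadIdx.x];

        // Make sure all threads are done reading this buffer before the next
        // iteration's prefetch overwrites it.
        __syncthreads();
    }
    C[row * N + col] = acc;
}
```

On pre-Ampere GPUs these primitives fall back to ordinary synchronous copies, so the compute/copy overlap only materializes on sm_80 and newer.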
qnixsynapse@reddit
Interesting. (I am into SYCL these days for some reason)
Healthy-Nebula-3603@reddit
Is this still constrained by RAM bandwidth?
Would my Llama 3.3 70B Q4_K_M run faster than the 1.8 t/s I currently get on a Ryzen 7950X3D CPU with DDR5-6000?
LicensedTerrapin@reddit
This is for GPU inference as far as I can tell.
shing3232@reddit
Well, inference is also part of the training computation.
LicensedTerrapin@reddit
Okay, it's still about the GPU. That was the question.
AdhesivenessNo6700@reddit
Great resource for anyone working with CUDA and matrix operations!
shing3232@reddit
Could you make one for RDNA3 as well? lol
salykova@reddit (OP)
yess, work in progress!
shing3232@reddit
It would be awesome to implement this in llama.cpp as well - llama.cpp needs a lightweight implementation.
bkknqw@reddit
That's impressive, great work!
indicava@reddit
I’m hardly capable of understanding the specifics of your blog post; it’s way over my head. But it is very interesting work, and thanks for sharing!
It left me wondering how close your implementation is to something we will be able to test on “real world” use cases like model inference.
graphitout@reddit
Interesting. How much would it improve the inference speed of an LLM? The basic dot product attention will still boil down to matrix-vector multiplications when caching is used. But MQA will benefit from a faster matrix multiplication since multiple queries can be stacked to form a matrix.
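To make those shapes concrete (an illustrative example, not something from the post): with head dimension d and T cached tokens, per-token decode computes attention scores as a matrix-vector product, whereas stacking the g query heads that share a single KV head under MQA/GQA turns it into a small GEMM:

```latex
% Single query against the KV cache: a GEMV
(1 \times d)\,(d \times T) \;\longrightarrow\; (1 \times T)
% g query heads sharing one KV head (MQA/GQA), stacked into Q: a skinny GEMM
(g \times d)\,(d \times T) \;\longrightarrow\; (g \times T)
```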