Inside the Matrix: How does matrix multiplication work inside GPUs?

In this video, we dive into the mechanics of a GPU and learn how it performs matrix multiplication, the core computation powering deep neural networks and large language models. By the end of the video you'll learn an efficient formulation of matrix multiplication, how to compute it with tiling, and how kernel fusion works.
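As a baseline for the ideas in the video, the naive triple-loop formulation of matrix multiplication can be sketched in plain Python (a minimal CPU sketch for illustration, not GPU code; names are my own):

```python
def matmul_naive(A, B):
    """Naive matmul: C[i][j] = sum over k of A[i][k] * B[k][j].

    A is n x k, B is k x m, result C is n x m. Every element of C
    re-reads a full row of A and column of B, which is what makes
    this formulation memory-hungry on a GPU.
    """
    n, k_dim, m = len(A), len(A[0]), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = 0.0
            for k in range(k_dim):
                acc += A[i][k] * B[k][j]
            C[i][j] = acc
    return C


# Example: [[1,2],[3,4]] @ [[5,6],[7,8]] -> [[19, 22], [43, 50]]
print(matmul_naive([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
```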

00:00 - Introduction
02:40 - GEMM basics
03:24 - Naive implementation of matmul
04:19 - GPU memory hierarchy
05:34 - Memory thrashing of GPUs
06:00 - Memory efficient implementation of matmul
06:33 - Matmul with tiling
08:17 - GPU execution hierarchy
09:25 - Magic of power of 2
10:15 - Tile quantization
11:14 - Kernel fusion
12:24 - Conclusion
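The tiling idea from the chapters above can be sketched in plain Python: compute C one small block at a time, walking tiles along the shared dimension, so each loaded tile of A and B is reused many times. On a GPU those tiles would be staged in shared memory; this is a hedged CPU sketch, and the tile size and names are illustrative:

```python
def matmul_tiled(A, B, tile=2):
    """Blocked (tiled) matmul. Each (tile x tile) chunk of C is built by
    accumulating products of matching tiles of A and B, maximizing reuse
    of each tile once it has been loaded (into shared memory on a GPU)."""
    n, k_dim, m = len(A), len(A[0]), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i0 in range(0, n, tile):              # tile row of C
        for j0 in range(0, m, tile):          # tile column of C
            for k0 in range(0, k_dim, tile):  # tiles along the shared dim
                # multiply-accumulate one tile pair (min() handles edges,
                # the "tile quantization" case when sizes aren't multiples
                # of the tile size)
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, m)):
                        acc = C[i][j]
                        for k in range(k0, min(k0 + tile, k_dim)):
                            acc += A[i][k] * B[k][j]
                        C[i][j] = acc
    return C


# Example: a 3x3 matrix times the identity with tile=2 (uneven tiling)
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
I = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
print(matmul_tiled(A, I, tile=2))  # reproduces A
```

The result is identical to the naive version; only the loop order changes, trading nothing in correctness for far better data locality.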
Comments

Is there an open-source LLaMA or any other LLM code? All the open-source projects we see are built on already existing models.
GPT-Neo is one that a few people have transformed to be like ChatGPT.
OpenLLaMA does not show the model code; they do inference only. Is there a good repo which actually shows an implementation of the LLaMA paper?

kishoretvk