Tim Besard - GPU Programming in Julia: What, Why and How?

This talk will introduce the audience to GPU programming in Julia. It will explain why GPUs can be useful for scientific computing, and how Julia makes it really easy to use GPUs. The talk is aimed at people who are familiar with Julia and want to learn how to use GPUs in their Julia code.

Resources

Contents
00:00 Introduction
00:31 Back to the basics: What are GPUs?
01:26 Why should you use GPUs?
02:01 Vendor-provided toolkits use low-level languages, so it's time to switch to Julia
02:20 We now have Julia packages targeting GPUs from all major vendors
02:48 Founding principles of the JuliaGPU ecosystem
03:23 Principle 1: User-friendliness
04:54 Principle 2: Multiple programming interfaces
05:24 The main GPU programming interface: GPU arrays
06:43 The main power of Julia comes from higher-order abstractions; this is also true on GPUs
07:47 Array programming is powerful
08:23 Kernel programming gives us performance & flexibility
09:30 We don't want to put too many abstractions into kernel code; here is why
10:04 We want to keep consistency across the JuliaGPU ecosystem
10:47 Kernel programming features that we support
11:24 Support of more advanced features
11:37 What is the JIT doing behind the scenes?
12:37 Benchmarking and profiling
12:51 How to benchmark your GPU code correctly
13:46 You can't profile GPU code using standard methods; you must use vendor-specific tools
14:24 How do we ACTUALLY use all this?
15:32 We disable scalar iteration
16:09 Optimizing array operations for the GPU
17:13 Pro tip: Write generic array code!
18:21 Contrived example of using generic code
19:05 Let's write a kernel
19:36 Writing fast GPU code isn't trivial
21:02 Let's write a PORTABLE kernel
21:36 Pros and cons of kernel abstractions
22:07 Kernel abstractions and high-performance code
22:35 Conclusion
24:07 Q&A: Did you implement a dummy GPU type that actually runs on the GPU?
25:51 Q&A: What about support for vendor-agnostic backends like Vulkan?
27:12 Q&A: What is the status of projects like OpenCL?
28:45 Q&A: How easy is it to use multiple GPUs at once?
29:45 Closing applause
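Several of the sections above (07:47 array programming, 17:13 generic array code) come down to the same idea: code written against the abstract array interface runs unchanged on CPU and GPU arrays. A minimal sketch of that idea; `rmse` is an illustrative name of ours, not a function from the talk:

```julia
# Generic array code: this function works on a CPU Array and, unchanged,
# on a GPU array (CuArray, oneArray, ROCArray, ...), because it only
# uses broadcasting and a reduction rather than scalar indexing.
rmse(x, y) = sqrt(sum(abs2, x .- y) / length(x))

x = rand(Float32, 1_000)
y = rand(Float32, 1_000)

@assert rmse(x, x) == 0.0f0   # identical inputs: zero error
@assert rmse(x, y) > 0.0f0    # random inputs: positive error

# On a GPU this would be, e.g. (assuming CUDA.jl and a CUDA device):
#   using CUDA
#   rmse(CuArray(x), CuArray(y))   # same code, runs on the GPU
```

The broadcast `x .- y` and the `sum(abs2, ...)` reduction each compile to a single GPU kernel, which is why no device-specific code is needed.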

Comments

Tim Besard is a magician that just happens to work on GPUs.

kamilziemian

I tried the RMSE example on a oneAPI GPU (Iris Xe) and it is 2x SLOWER than the CPU. Am I doing something wrong or is the Xe really that bad?

mattettus
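Surprising timings like the one above often come down to how the measurement is taken: GPU operations launch asynchronously, so the clock must not stop before the device has finished (the talk covers this at 12:51). A minimal sketch, using CUDA.jl-style macros in the comments (other backends provide similar ones); the runnable part is the CPU analogue:

```julia
# GPU kernels launch asynchronously: the call returns before the work
# finishes, so a naive timer measures only launch overhead.
# With CUDA.jl one would write (assumed; requires a CUDA GPU):
#
#   xd, yd = CUDA.rand(10^6), CUDA.rand(10^6)
#   @elapsed CUDA.@sync sum(abs2, xd .- yd)   # waits for the GPU
#
# The CPU analogue, runnable anywhere. Note that the first call
# includes JIT compilation time, so time a second call:
f(x, y) = sum(abs2, x .- y)
x, y = rand(Float32, 10^6), rand(Float32, 10^6)
f(x, y)                 # warm-up: triggers compilation
t = @elapsed f(x, y)    # steady-state timing
@assert t >= 0
```

Whether this explains the Iris Xe result is unclear, but comparing an unsynchronized or cold-start GPU timing against a warm CPU timing is the most common source of such discrepancies.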

I'd like to learn Julia for finite element method analysis... which Package(s) should I focus on? Thank you

Ptr-NG

I feel like CUDA.allowscalar should be false by default

conradwiebe
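The sentiment above matches the talk's 15:32 section on disabling scalar iteration. `CUDA.allowscalar` is CUDA.jl's real switch for this; the GPU part below is a sketch assuming CUDA.jl and a device, while the CPU demo just contrasts the two access patterns:

```julia
# With CUDA.jl (assumed; requires a GPU), each scalar index into a
# CuArray forces a device-to-host copy, so it is best disabled:
#
#   using CUDA
#   CUDA.allowscalar(false)   # error on scalar indexing of a CuArray
#   xd = CUDA.rand(1024)
#   xd[1]                     # would now throw an error
#   sum(abs2, xd)             # fine: runs entirely on the GPU
#
# The same two access patterns on the CPU, for illustration:
function slow(x)              # element-by-element "scalar iteration"
    s = 0.0f0
    for i in eachindex(x)
        s += x[i]^2
    end
    return s
end
fast(x) = sum(abs2, x)        # one reduction, GPU-friendly
x = rand(Float32, 1024)
@assert slow(x) ≈ fast(x)
```

On the CPU both patterns are fast; on a GPU only the reduction form is, which is why erroring out on scalar indexing (rather than silently running slowly) is attractive as a default.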