Webinar: GPGPU Programming with Ada

GPUs have long gone beyond the sole purpose of rendering 3D graphics. They can now execute general-purpose applications requiring massively parallel computation, sometimes performing more than 100 times faster than their CPU counterparts. This order-of-magnitude advantage opens new areas of development, including video processing, machine learning, signal processing, trajectory prediction, physical simulation, cryptography and Monte Carlo simulation.

GPUs can be programmed in different ways today, using shading languages (HLSL, GLSL), GPU-specific languages (OpenCL, CUDA) or pragmas embedded in regular C code (OpenACC, OpenMP). These formalisms are either domain-specific or close to C syntax. While they bring the power of the GPU to the programmer's desk, these languages lack support for the kind of reliable programming that would be helpful in a very constrained environment. Debugging is often very difficult, and new classes of problems inherent to GPUs emerge, such as data dependencies, which are challenging to analyze and resolve.

The Ada programming language, together with its formally analyzable SPARK subset, provides an appropriate foundation, but these technologies have so far not been widely available for GPU programming. A solution is in progress, as AdaCore has initiated an effort to fill the gap. This talk will present the current status and the various options under consideration, and will invite participants to provide feedback and influence the direction of future development.

The following topics were presented:

- Brief overview of typical GPU architecture
- Specifics of GPU programming
- Usage of existing GPU libraries
- Using existing CUDA or OpenCL code with Ada (see the binding sketch after this list)
- Writing CUDA or OpenCL code directly in Ada
- Interlacing CPU and GPU code with OpenACC and Ada
- Perspectives on Ada 2020 and parallel loops
- Formally proving absence of data dependency and run-time errors with SPARK
- Forthcoming developments and how to influence the technical direction
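
As a small illustration of the "existing CUDA or OpenCL code" topic, the sketch below shows one way an Ada program can call an already-written CUDA kernel through a C-callable launcher. The saxpy_launch name and its signature are assumptions made for this example rather than anything taken from the webinar; only standard Ada interfacing features (Interfaces.C and the Import/Convention/External_Name aspects) are relied upon.

    --  saxpy.cu, compiled with nvcc and linked into the Ada program,
    --  is assumed to export:
    --     extern "C" void saxpy_launch(int n, float a,
    --                                  const float *x, float *y);

    with Interfaces.C; use Interfaces.C;
    with System;

    package Saxpy_Binding is

       --  Thin import of the assumed C launcher.  X and Y are passed as
       --  raw addresses; allocation and host/device transfers stay on the
       --  C/CUDA side of the boundary.
       procedure Saxpy_Launch
         (N : int;
          A : C_float;
          X : System.Address;
          Y : System.Address)
         with Import, Convention => C, External_Name => "saxpy_launch";

    end Saxpy_Binding;

On the Ada side, X and Y would typically be the 'Address of aliased arrays of C_float, and the nvcc-compiled launcher is linked in like any other C object file.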

This webinar took place on September 18, 2018.
Comments

It is much easier to generate Ada bindings for the CUDA and cuDNN libraries. I am using these for my deep learning project in Ada.

tusharbadyal
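
(For readers wondering what such a binding looks like: below is a minimal hand-written sketch for two CUDA runtime entry points. The C signatures of cudaMalloc and cudaFree are the documented ones; the Ada unit name and the error-type mapping are illustrative choices, and in practice a complete binding can be generated from the C headers, for example with GCC's -fdump-ada-spec switch, rather than written by hand.)

    with Interfaces.C; use Interfaces.C;
    with System;

    package CUDA_Runtime is

       --  cudaError_t is a C enum; mapping it to int keeps the sketch simple.
       type Cuda_Error is new int;
       Cuda_Success : constant Cuda_Error := 0;

       --  cudaError_t cudaMalloc(void **devPtr, size_t size);
       function Cuda_Malloc
         (Dev_Ptr : access System.Address;
          Size    : size_t) return Cuda_Error
         with Import, Convention => C, External_Name => "cudaMalloc";

       --  cudaError_t cudaFree(void *devPtr);
       function Cuda_Free (Dev_Ptr : System.Address) return Cuda_Error
         with Import, Convention => C, External_Name => "cudaFree";

    end CUDA_Runtime;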

I mentioned on the #Ada IRC channel that I wanted to do Ada->SPIR-V years ago, on 23/07/2015 in fact, where parallel blocks from Ada 202x would be used to offload work to the GPU. I wonder how much my thoughts influenced the AdaCore people in the channel?

Lucretia

Why does it sound like he's speaking through an analogue phone?

Lucretia