Parallel Programming with (Py)OpenCL for Fun and Profit

Gordon Inggs

## Overview

In this talk, I will introduce the basics of the OpenCL programming and runtime APIs, using examples run in Jupyter notebooks on a variety of devices. I will also help identify the situations where it makes sense to accelerate portions of a codebase.
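For a feel of what this looks like in practice, below is a minimal sketch of the host/kernel split that PyOpenCL exposes: the host side (context, queue, buffers, launch) is ordinary Python, while the kernel itself is OpenCL C passed to the runtime as a string. The array sizes and the `vadd` kernel name are illustrative, not taken from the talk.

```python
# Minimal PyOpenCL vector addition (assumes pyopencl, numpy and at least
# one OpenCL platform are available).
import numpy as np
import pyopencl as cl

# Host-side setup: pick a device, create a context and a command queue.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

# Input data lives in ordinary numpy arrays on the host.
a = np.random.rand(50_000).astype(np.float32)
b = np.random.rand(50_000).astype(np.float32)

# Copy the inputs to device buffers and allocate space for the output.
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel is OpenCL C, handed to the runtime as a Python string.
program = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out)
{
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

# Launch one work-item per element, then copy the result back to the host.
program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)
result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)

assert np.allclose(result, a + b)
```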

## Audience
This talk is aimed at anyone who loves the expressiveness of Python but has bumped into its performance limitations. I assume no background in HPC or heterogeneous computing, and will be using simple yet hopefully relevant examples, such as fundamental linear algebra and analysis applications.

By the end of the talk, provided it isn't a post-lunch slot, the audience should be ready to identify the hotspots in their code and start accelerating them using the CPUs, GPUs and FPGAs in their laptops and in favourite public clouds such as AWS, Azure and GCE.

pyconza2018

python
## Comments

At around the 12:00 mark you show program source code that appears to be C/C++. Is it possible to use Python code as the program source? If not, then this is essentially wrapping C/C++ code for execution on the GPU, correct? I ask because I have Python code that is easily parallelizable, and I'd like to leverage OpenCL for this, to see if it is faster than using numpy.apply_along_axis() + numba + multiprocessing, but I don't want to rewrite the code in C/C++. Can anyone comment? Thanks in advance for any suggestions.

monocongo
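As an illustrative aside on the question above (not a transcript of any answer given in the talk): in PyOpenCL the program source itself is OpenCL C, but the `pyopencl.array` module lets much element-wise work stay in numpy-style Python, with the corresponding OpenCL kernels generated behind the scenes. The array sizes and the expression below are made up for the sketch.

```python
# numpy-style arithmetic on device arrays with pyopencl.array.
import numpy as np
import pyopencl as cl
import pyopencl.array as cl_array

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

# Move numpy data onto the device; the result behaves much like an ndarray.
x = cl_array.to_device(queue, np.random.rand(100_000).astype(np.float32))
y = cl_array.to_device(queue, np.random.rand(100_000).astype(np.float32))

# The expression is written in Python; PyOpenCL builds and runs the
# matching OpenCL kernels under the hood. .get() copies the result back.
z = (2 * x + y).get()

assert np.allclose(z, 2 * x.get() + y.get())
```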