CppCon 2019: Hartmut Kaiser “Asynchronous Programming in Modern C++”

With the advent of modern computer architectures characterized by -- amongst other things -- many-core nodes, deep and complex memory hierarchies, heterogeneous subsystems, and power-aware components, it is becoming increasingly difficult to achieve the best possible application scalability and satisfactory parallel efficiency. The community is experimenting with new programming models that rely on finer-grained parallelism and flexible, lightweight synchronization, combined with work-queue-based, message-driven computation. The growing interest in the C++ programming language in industry and in the wider community increases the demand for libraries implementing those programming models for the language.

In this talk, we present a new asynchronous C++ parallel programming model that is built around lightweight tasks and mechanisms to orchestrate massively parallel (and -- if needed -- distributed) execution. This model uses the concept of (Standard C++) futures to make data dependencies explicit, employs explicit and implicit asynchrony to hide latencies and to improve utilization, and manages finer-grain parallelism with a work-stealing scheduling system enabling automatic load balancing of tasks.
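
As a rough illustration of futures making data dependencies explicit, here is a minimal sketch using only standard `std::async`/`std::future` (the stage functions are hypothetical; HPX exposes an analogous API in which the second stage would be attached as a non-blocking continuation rather than blocking a thread):

```cpp
#include <future>
#include <iostream>
#include <utility>

// Hypothetical stage functions standing in for real work.
int load()           { return 21; }
int transform(int v) { return v * 2; }

int main() {
    // Stage 1 runs asynchronously; the returned future represents
    // its not-yet-available result.
    std::future<int> f1 = std::async(std::launch::async, load);

    // Stage 2 consumes f1, making the data dependency explicit.
    // Here it blocks in f.get(); a runtime like HPX would instead
    // attach stage 2 as a continuation so the wait does not
    // occupy a worker thread.
    std::future<int> f2 = std::async(std::launch::async,
        [f = std::move(f1)]() mutable { return transform(f.get()); });

    std::cout << f2.get() << '\n';   // prints 42
}
```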

We have implemented such a model as a C++ library exposing a higher-level parallelism API that fully conforms to the existing C++11/14/17 standards and is aligned with the ongoing standardization work. This API and programming model have been shown to enable writing highly efficient parallel applications for heterogeneous resources with excellent performance and scaling characteristics.
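
For a concrete sense of that standards alignment, a minimal sketch using only the C++17 standard parallel algorithms (HPX provides implementations of these same algorithm interfaces, plus asynchronous variants):

```cpp
#include <algorithm>
#include <execution>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> v(1'000'000, 1.0);

    // C++17 parallel algorithms: the execution policy asks the
    // library to partition and schedule the work across threads.
    double sum = std::reduce(std::execution::par, v.begin(), v.end(), 0.0);

    std::for_each(std::execution::par, v.begin(), v.end(),
                  [](double& x) { x *= 2.0; });

    std::cout << sum << '\n';   // prints 1e+06
}
```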

Hartmut Kaiser
CCT/LSU
STE||AR Group

Hartmut is a member of the faculty of the CS department at Louisiana State University (LSU) and a senior research scientist at LSU's Center for Computation and Technology (CCT). He received his doctorate from the Technical University of Chemnitz (Germany) in 1988. He is probably best known for his involvement in open-source software projects, mainly as the author of several C++ libraries he has contributed to Boost, which are in use by thousands of developers worldwide. His current research focuses on leading the STE||AR group at CCT, working on the practical design and implementation of future execution models and programming methods. His research interests center on the complex interaction of compiler technologies, runtime systems, active libraries, and modern systems architectures. His goal is to enable the creation of a new generation of scientific applications for powerful, though complex, environments such as high-performance computing, distributed and grid computing, spatial information systems, and compiler technologies.


Comments

"Parallelization looses all its threadening behavior"

matrixstuff

It isn't clear from the code examples how the co_await steps get executed by different threads. co_await suspends the current coroutine, and the awaited work normally executes on the same thread as the suspended coroutine that is waiting for its result.

headlibrarian
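
On the co_await question: in standard C++20, the thread a coroutine resumes on is decided by the awaitable's await_suspend, not by co_await itself, which is how a runtime can move execution between threads. A minimal sketch (the resume_on_new_thread awaitable and task type here are illustrative, not HPX's actual machinery):

```cpp
#include <chrono>
#include <coroutine>
#include <iostream>
#include <thread>

// Awaitable that resumes the suspended coroutine on a new thread.
// The resumption thread is whatever await_suspend arranges.
struct resume_on_new_thread {
    bool await_ready() const noexcept { return false; }
    void await_suspend(std::coroutine_handle<> h) const {
        std::thread([h] { h.resume(); }).detach();
    }
    void await_resume() const noexcept {}
};

// Minimal fire-and-forget coroutine return type.
struct task {
    struct promise_type {
        task get_return_object() { return {}; }
        std::suspend_never initial_suspend() noexcept { return {}; }
        std::suspend_never final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() {}
    };
};

task demo() {
    std::cout << "before: " << std::this_thread::get_id() << '\n';
    co_await resume_on_new_thread{};
    // Prints a different thread id than the line above.
    std::cout << "after:  " << std::this_thread::get_id() << '\n';
}

int main() {
    demo();
    // Crude wait for the detached thread (demo purposes only).
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
}
```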

@23:48, it should be `return v < *p`, I think, right?

think

The first rule of thumb is really wrong. It should be "Parallelize until the payoff is no longer worth the extra complexity." That break-even point can be wildly different depending on the algorithm and/or type of application. Complexity kills, and no one cares how fast it is if it can't be reliably maintained. There's a huge culture of premature optimization in C++ these days.

deanroddey

I would argue that the magic and power of OMP and MPI is that they are NOT part of the C++ standard. I would not try to make HPX a part of C++.

ZapOKill