CppCon 2016: Bryce Adelstein Lelbach “The C++17 Parallel Algorithms Library and Beyond”



One of the major library features in C++17 is a parallel algorithms library (formerly the Parallelism Technical Specification v1). The parallel algorithms library has both parallel versions of the existing algorithms in the standard library and a handful of new algorithms inspired by common patterns from parallel programming (such as std::reduce() and std::transform_reduce()).

We’ll talk about what’s in the parallel algorithms library, and how to utilize it in your code today. Also, we’ll discuss some exciting future developments relating to the parallel algorithms library which are targeted for the second version of the Parallelism Technical Specification – executors, and asynchronous parallel algorithms.

Bryce Adelstein Lelbach
Lawrence Berkeley National Laboratory
Berkeley, California
Bryce Adelstein Lelbach is a researcher at Lawrence Berkeley National Laboratory (LBNL), a US Department of Energy research facility. Working alongside a team of hardware engineers and scientists, he develops and analyzes new parallel programming models for exascale and post-Moore architectures. Bryce is one of the developers of the HPX C++ runtime system. He spent five years working on HPX while he was at Louisiana State University's Center for Computation and Technology. He also helped start the LLVMLinux initiative, and has occasionally contributed to the Boost C++ libraries. Bryce is an organizer for the C++Now and CppCon conferences as well as the Bay Area C++ user group, and he is passionate about C++ community development. He serves as LBNL's representative to the ISO committee for programming languages and the ISO C++ standard committee.


Comments

I don't know if this is just me having a positive-focus day, but I found this a really instructive, clear talk. Well done for a subject this 'dry'.

KurtDonkers

Brilliant presentation, Bryce!!! I work extensively in parallel signal processing applications - very useful. Thank you for sharing!

CellularInterceptor

Excellent talk. Super clear explanations of fairly complicated topics. Thanks a lot, Bryce!

TheGokhansimsek

Glaring mistake at 18:15:

GNSUM requires an associative operation.
GSUM requires an associative *and* commutative operation.

*Neither* function is "fine for a non-associative op". Remember:

associativity of op means that op(op(a, b), c) == op(a, op(b, c))
commutativity of op means that op(a, b) == op(b, a)

IOW, if the operation is associative, you are allowed to reorder *applications* of the operation. If the operation is commutative, on the other hand, you are allowed to reorder *operands* . GSUM assumes it is allowed to do both.

TruthNerds

What's that "urinary" operator he keeps talking about?

Fetrovsky

I appreciate having this in C++, but I believe it will not be picked up by enough software engineers. I believe the CPU manufacturers need to supply silicon that makes it easier to effectively utilize the ever-increasing number of transistors.
It just makes so much more sense to have a new "genius" CPU than to force all software to adapt in a fundamental and exhaustive way, or otherwise stagnate.
Now that Intel owns Altera, turning software engineers into hardware engineers is the last thing they should demand, just because they are somewhat clueless about what to do with the ever-increasing integration capability.
Again, although it is useful for experts, I think this is not the approach that will take off and solve the CPU architecture problem we have had since 2000.

raymundhofmann