EU Regional School 2015 Part 2 with Dr. Timothy Mattson

Dr. Timothy Mattson - Parallel Computing: From Novice to “Expert” in Four Hours

Parallel computing for the computational sciences is quite old: the first commercially produced shared-memory computer, the four-CPU Burroughs D825, appeared in 1962. Roughly 30 years ago, parallel computing went through a sort of “Cambrian explosion” with “the attack of the killer micros”, resulting in a vast range of parallel architectures: VLIW and explicitly parallel instruction sets, the famous MIMD vs. SIMD wars (MIMD won), fights over network topologies in MPP supercomputers (hypercube, 3D torus, grid, rings, etc.), the dream of SMP and the harsh reality of NUMA, “easy to use” vector units (which are often quite hard to use), and more recently heterogeneous computing with the tension between CPUs and GPUs. As if the hardware landscape were not confusing enough, the software side is even worse, with abstract models (e.g. SIMT, SMT, CSP, BSP, and PRAM) and a list of programming models that would easily fill several pages. It’s enough to scare even the most motivated computational scientist away from parallel computing.

Ignoring parallel computing, however, is not an option. As Herb Sutter noted several years ago, “the free lunch is over”. If you want to achieve a reasonable fraction of the performance available from a modern computer, you MUST deal with parallelism. Sorry.

Fortunately, we’ve figured out how to make sense of this chaos. By looking back over all the twists and turns of the last 30 years, we can boil things down to a small number of essential abstractions. And while there are countless parallel programming models, if you focus on standard models that let you write programs that run on multiple platforms from multiple vendors, you can reduce the number of programming models you need to consider to a handful. In fact, if you give me four hours of your time, you can learn: (1) enough to understand parallel computing and intelligently choose which hardware and software technologies to use, and (2) the small number of design patterns used in most parallel applications. That’s not too bad. In fact, parallel computing is much less confusing than the other hot trends kicking around these days (anyone up for a 40-hour summary of machine learning over big data running in the cloud?).
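To make the point about standard, portable programming models concrete, here is a minimal sketch (not part of the original abstract) using OpenMP, one widely implemented standard for shared-memory parallelism across multiple platforms and vendors; the loop bound N and the harmonic-sum workload are arbitrary choices for illustration.

/* Minimal OpenMP sketch: parallelize a simple loop with a reduction.
 * Compile with an OpenMP-capable compiler, e.g. gcc -fopenmp sum.c */
#include <stdio.h>
#include <omp.h>

int main(void) {
    const int N = 1000000;
    double sum = 0.0;

    /* Distribute loop iterations across threads; the reduction clause
     * combines each thread's partial sum safely at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        sum += 1.0 / (double)(i + 1);
    }

    printf("harmonic sum of first %d terms = %f\n", N, sum);
    return 0;
}

The same source compiles as ordinary serial C if the compiler ignores the pragma, which is part of what makes directive-based standards like this attractive for portable parallel code.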
