Practical Parallelism in C++: MPI Synchronization

In this video we look at how we can synchronize prints in C++ with MPI!

Comments

Incredibly digestible video, much better than the content my class is putting out. Thank you!

JitinDhillon

0:45 First 3 non-negotiable MPI instructions
1:56 The way of synchronizing messages
2:38 Why do we need communication?
2:54 Instruction for sending a message


0:45 First 3 non-negotiable MPI instructions:
1. MPI_Init(&argc, &argv);
2. MPI_Comm_rank(MPI_COMM_WORLD, &rank);
3. MPI_Comm_size(MPI_COMM_WORLD, &size);
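A minimal sketch of how those three calls fit together (the file name and printed format are illustrative, not from the video; requires an MPI installation to compile and run):

```cpp
// hello_mpi.cpp -- compile with: mpic++ hello_mpi.cpp -o hello_mpi
// run with:                      mpirun -np 4 ./hello_mpi
#include <mpi.h>
#include <cstdio>

int main(int argc, char* argv[]) {
    MPI_Init(&argc, &argv);                // 1. start the MPI runtime

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // 2. this process's id (0..size-1)
    MPI_Comm_size(MPI_COMM_WORLD, &size);  // 3. total number of processes

    // Without synchronization these lines can interleave in any order.
    std::printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                        // shut the runtime down
    return 0;
}
```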

1:56 The way of synchronizing messages:
One rank prints out all the messages.

2:38 Why do we need communication?
Because we aren't in a shared memory space. Each process has its own isolated local memory, so the processes have to communicate explicitly.

2:54 Instruction for sending a message:
MPI_Send(buff_pointer, buff_len, datatype, dest, tag, comm);
// datatype: MPI_CHAR, etc.
// dest: destination rank of message.
// tag: message tag; can be any non-negative number (MPI_ANY_TAG is allowed on the receiving side), but is often the source rank.
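Putting the pieces together, a sketch of the "one rank prints everything" approach: every non-zero rank sends its line to rank 0, which receives and prints in rank order. The buffer size and message text are assumptions for illustration; this needs an MPI runtime to build and run.

```cpp
// sync_print.cpp -- rank 0 prints all messages in rank order.
// compile: mpic++ sync_print.cpp -o sync_print
// run:     mpirun -np 4 ./sync_print
#include <mpi.h>
#include <cstdio>
#include <cstring>

int main(int argc, char* argv[]) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char msg[64];
    std::snprintf(msg, sizeof msg, "Hello from rank %d", rank);

    if (rank != 0) {
        // tag set to the sender's rank, as the note above suggests
        MPI_Send(msg, std::strlen(msg) + 1, MPI_CHAR, 0, rank, MPI_COMM_WORLD);
    } else {
        std::printf("%s\n", msg);  // rank 0's own line first
        for (int src = 1; src < size; ++src) {
            // receiving from ranks 1..size-1 in order fixes the print order
            MPI_Recv(msg, sizeof msg, MPI_CHAR, src, src,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            std::printf("%s\n", msg);
        }
    }
    MPI_Finalize();
    return 0;
}
```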

aldolunabueno

Awesome video, very helpful for my parallel systems class. Best of luck with the remainder of your PhD program.

davidkooistra

Great content! Please continue this series!

omertarikkoc

Really great content. Saved me on a project!

TheBasyx

How can we find the limit of the receive buffer?

raghavv

1. Printing is about the least important activity in the sort of programs MPI is used for. 2. MPI is usually run on hundreds (if not thousands) of processes, so you don't want to print from more than one rank.

In other words: a bad example!

victotronics