Learning by doing :: MPI -- Collective Communications In MPI, Part 11

This video is part of the series "Learning by doing :: MPI", which serves as an introductory course on MPI (Message Passing Interface). In this video we talked about collective communications in MPI! This topic corresponds to chapter 6 of the MPI standard v4.1. It is a lengthy chapter with a lot of information to digest, so we followed the same pattern as for "Datatypes in MPI" by dividing the chapter into digestible parts of roughly 25 minutes each. In part 11 we continued our discussion by studying examples, in both intra- and inter-communicator scenarios, of the persistent collective communication operations MPI_Allgatherv_init, MPI_Alltoall_init, MPI_Alltoallv_init and MPI_Alltoallw_init. We then moved on to persistent reduction operations, namely MPI_Reduce_init, MPI_Allreduce_init, MPI_Reduce_scatter_block_init and MPI_Reduce_scatter_init, again in both intra- and inter-communicator scenarios. Finally, we had examples of the inclusive and exclusive persistent scan operations (MPI_Scan_init and MPI_Exscan_init, respectively). The examples in this part were covered fairly quickly, because the semantics are discussed in depth in the earlier parts of the collective communication series dealing with the blocking versions of these operations.
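All of the operations above follow the same persistent pattern: the `*_init` call creates the request once, and the same operation is then started and completed repeatedly. As a minimal sketch (not taken from the video), here is that pattern with MPI_Allreduce_init; it assumes an MPI-4-capable implementation such as mpich-4.x and must be launched with mpirun:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int sendbuf = rank, recvbuf = 0;
    MPI_Request req;

    /* Create the persistent operation once; the buffers, count, datatype,
     * reduction op and communicator are all fixed at init time. */
    MPI_Allreduce_init(&sendbuf, &recvbuf, 1, MPI_INT, MPI_SUM,
                       MPI_COMM_WORLD, MPI_INFO_NULL, &req);

    /* Start and complete the same operation several times, e.g. once per
     * iteration of a computation loop. */
    for (int iter = 0; iter < 3; ++iter) {
        MPI_Start(&req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        if (rank == 0)
            printf("iter %d: sum of ranks = %d\n", iter, recvbuf);
    }

    MPI_Request_free(&req);  /* release the persistent request */
    MPI_Finalize();
    return 0;
}
```

The same start/wait/free lifecycle applies to every `*_init` collective covered in this part; only the creation call and its buffer arguments change.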

* The latest MPICH version at the time of this video is mpich-4.2.2

Hope you learned something from this video. If you have further questions, don't hesitate to comment down below. Have fun!