TY - GEN
T1 - Full-duplex inter-group all-to-all broadcast algorithms with optimal bandwidth
AU - Kang, Qiao
AU - Agrawal, Ankit
AU - Träff, Jesper Larsson
AU - Choudhary, Alok
AU - Al-Bahrani, Reda
AU - Liao, Wei-keng
N1 - Funding Information:
This work is supported in part by the following grants: NSF awards CCF-1409601; DOE awards DE-SC0007456, DE-SC0014330; and NIST award 70NANB14H012.
Publisher Copyright:
© 2018 Association for Computing Machinery.
PY - 2018/9/23
Y1 - 2018/9/23
N2 - MPI inter-group collective communication patterns can be viewed as bipartite graphs that divide processes into two disjoint groups in which messages are transferred between but not within the groups. Such communication patterns can serve as basic operations for scientific application workflows. In this paper, we present parallel algorithms for inter-group all-to-all broadcast (Allgather) communication with optimal bandwidth for any message size and number of processes under single-port communication constraints. We implement the algorithms using MPI point-to-point and intra-group collective communication functions and evaluate their performance on the Cori supercomputer at NERSC. Using message sizes ranging from 256B to 64MB, the experiments show a significant performance improvement achieved by our algorithm, which is up to 9.27 times faster than production MPI libraries that adopt the so-called root-gathering algorithm.
AB - MPI inter-group collective communication patterns can be viewed as bipartite graphs that divide processes into two disjoint groups in which messages are transferred between but not within the groups. Such communication patterns can serve as basic operations for scientific application workflows. In this paper, we present parallel algorithms for inter-group all-to-all broadcast (Allgather) communication with optimal bandwidth for any message size and number of processes under single-port communication constraints. We implement the algorithms using MPI point-to-point and intra-group collective communication functions and evaluate their performance on the Cori supercomputer at NERSC. Using message sizes ranging from 256B to 64MB, the experiments show a significant performance improvement achieved by our algorithm, which is up to 9.27 times faster than production MPI libraries that adopt the so-called root-gathering algorithm.
KW - All-to-all broadcast
KW - Allgather
KW - Inter-group communication
UR - http://www.scopus.com/inward/record.url?scp=85055428643&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85055428643&partnerID=8YFLogxK
U2 - 10.1145/3236367.3236374
DO - 10.1145/3236367.3236374
M3 - Conference contribution
AN - SCOPUS:85055428643
T3 - ACM International Conference Proceeding Series
BT - EuroMPI 2018 - Proceedings of the 25th European MPI Users' Group Meeting
PB - Association for Computing Machinery
T2 - 25th European MPI Users' Group Meeting, EuroMPI 2018
Y2 - 23 September 2018 through 26 September 2018
ER -