TY - GEN
T1 - Double standards
T2 - 1996 ACM/IEEE Conference on Supercomputing, SC 1996
AU - Foster, Ian
AU - Kohr, David R.
AU - Krishnaiyer, Rakesh
AU - Choudhary, Alok
N1 - Publisher Copyright:
© 1996 IEEE.
PY - 1996
Y1 - 1996
N2 - High Performance Fortran (HPF) does not allow efficient expression of mixed task/data-parallel computations or the coupling of separately compiled data-parallel modules. In this paper, we show how a coordination library implementing the Message Passing Interface (MPI) can be used to represent these common parallel program structures. This library allows data-parallel tasks to exchange distributed data structures using calls to simple communication functions. We present microbenchmark results that characterize the performance of this library and that quantify the impact of optimizations that allow reuse of communication schedules in common situations. In addition, results from two-dimensional FFT, convolution, and multiblock programs demonstrate that the HPF/MPI library can provide performance superior to that of pure HPF. We conclude that this synergistic combination of two parallel programming standards represents a useful approach to task parallelism in a data-parallel framework, increasing the range of problems addressable in HPF without requiring complex compiler technology.
AB - High Performance Fortran (HPF) does not allow efficient expression of mixed task/data-parallel computations or the coupling of separately compiled data-parallel modules. In this paper, we show how a coordination library implementing the Message Passing Interface (MPI) can be used to represent these common parallel program structures. This library allows data-parallel tasks to exchange distributed data structures using calls to simple communication functions. We present microbenchmark results that characterize the performance of this library and that quantify the impact of optimizations that allow reuse of communication schedules in common situations. In addition, results from two-dimensional FFT, convolution, and multiblock programs demonstrate that the HPF/MPI library can provide performance superior to that of pure HPF. We conclude that this synergistic combination of two parallel programming standards represents a useful approach to task parallelism in a data-parallel framework, increasing the range of problems addressable in HPF without requiring complex compiler technology.
UR - http://www.scopus.com/inward/record.url?scp=14544272726&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=14544272726&partnerID=8YFLogxK
U2 - 10.1109/SUPERC.1996.183538
DO - 10.1109/SUPERC.1996.183538
M3 - Conference contribution
AN - SCOPUS:14544272726
T3 - Proceedings of the International Conference on Supercomputing
BT - Proceedings of the 1996 ACM/IEEE Conference on Supercomputing, SC 1996
PB - Association for Computing Machinery
Y2 - 17 November 1996 through 22 November 1996
ER -