Communicating data-parallel tasks: An MPI library for HPF

Ian T. Foster*, David R. Kohr, Rakesh Krishnaiyer, Alok Choudhary

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

High Performance Fortran (HPF) has emerged as a standard dialect of Fortran for data-parallel computing. However, HPF does not support task parallelism or heterogeneous computing adequately. This paper presents a summary of our work on a library-based approach to support task parallelism, using MPI as a coordination layer for HPF. This library enables a wide variety of applications, such as multidisciplinary simulations and pipeline computations, to take advantage of combined task and data parallelism. An HPF binding for MPI raises several interface and communication issues. We discuss these issues and describe our implementation of an HPF/MPI library that operates with a commercial HPF compiler. We also evaluate the performance of our library using a synthetic communication benchmark and a multiblock application.
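The library-based approach summarized above — each MPI "process" is itself a data-parallel HPF task, and the library moves distributed arrays between tasks — can be pictured with a short, hypothetical sketch of one stage of a two-task pipeline. All routine and constant names here (`HPFMPI_*`) are illustrative assumptions, not the actual binding defined in the paper:

```fortran
! Hypothetical sketch: one stage of a two-task HPF/MPI pipeline.
! Routine and constant names (HPFMPI_*) are assumed for illustration.
program stage
!hpf$ processors procs(4)
      real, dimension(1024) :: a
!hpf$ distribute a(block) onto procs
      integer :: taskid, ierr
      integer :: status(HPFMPI_STATUS_SIZE)

      call HPFMPI_Init(ierr)
      ! Each HPF task (itself data-parallel over 4 processors)
      ! is addressed by a single task rank.
      call HPFMPI_Comm_rank(HPFMPI_COMM_WORLD, taskid, ierr)

      if (taskid == 0) then
         a = 1.0                       ! produce data in parallel
         ! Send the distributed array to the consumer task; the
         ! library is responsible for gathering/redistributing the
         ! block-distributed sections on both sides.
         call HPFMPI_Send(a, 1024, HPFMPI_REAL, 1, 0, &
                          HPFMPI_COMM_WORLD, ierr)
      else if (taskid == 1) then
         call HPFMPI_Recv(a, 1024, HPFMPI_REAL, 0, 0, &
                          HPFMPI_COMM_WORLD, status, ierr)
         ! ... consume a in parallel ...
      end if

      call HPFMPI_Finalize(ierr)
end program stage
```

The point of the sketch is the interface issue the abstract raises: a send of `a` names a logically single array, but its elements are physically distributed across the processors of the sending task, so the library must reconcile the MPI point-to-point model with HPF's data distributions.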

Original language: English (US)
Title of host publication: Proceedings of the 1996 3rd International Conference on High Performance Computing, HiPC
Editors: Anon
Publisher: IEEE
Pages: 433-438
Number of pages: 6
State: Published - Dec 1 1996
Event: Proceedings of the 1996 3rd International Conference on High Performance Computing, HiPC - Trivandrum, India
Duration: Dec 19 1996 - Dec 22 1996


ASJC Scopus subject areas

  • Engineering (all)

