Design and implementation of a parallel I/O runtime system for irregular applications

Jaechun No*, Sung Soon Park, Jesus Carretero Perez, Alok Choudhary

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Scopus citations


We present the design, implementation, and evaluation of a runtime system based on collective I/O techniques for irregular applications. The design is motivated by the requirements of a large number of science and engineering applications, including teraflops applications, where the data must be reorganized into a canonical form for further processing or restarts. We present two designs: "collective I/O" and "pipelined collective I/O." In the first design, all processors participate in I/O simultaneously, making scheduling of I/O requests simpler but creating possible contention at the I/O nodes. In the second design, processors are organized into several groups so that only one group performs I/O while the next group performs the communication to rearrange data, and this entire process is dynamically pipelined to reduce I/O node contention. In other words, the design provides support for dynamic contention management. We also present a software caching method using collective I/O to reduce I/O cost by reusing the data already present in the memory of other nodes. Chunking and on-line compression mechanisms are included in both models. We present performance results on the Intel Paragon at Caltech and on the ASCI/Red teraflops machine at Sandia National Laboratories.
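To make the group-pipelining idea in the abstract concrete, the following is a minimal sketch, not the authors' implementation: with the processors split into `num_groups` groups, each step lets at most one group perform I/O while the next group overlaps the communication that rearranges its data, so G groups finish in G + 1 steps instead of 2G sequential phases. The function name and schedule representation are assumptions for illustration only.

```python
def pipelined_schedule(num_groups):
    """Build a step-by-step schedule for pipelined collective I/O.

    At step s, group s-1 performs I/O while group s performs the
    communication that rearranges its data; the two phases overlap,
    so only one group ever touches the I/O nodes at a time.
    Returns a list of steps, each a dict mapping group -> phase.
    """
    steps = []
    for s in range(num_groups + 1):
        step = {}
        if 0 <= s - 1 < num_groups:
            step[s - 1] = "io"     # group s-1 writes/reads its data
        if s < num_groups:
            step[s] = "comm"       # group s rearranges data for its turn
        steps.append(step)
    return steps

if __name__ == "__main__":
    # With 3 groups: comm(0); io(0)+comm(1); io(1)+comm(2); io(2)
    for i, step in enumerate(pipelined_schedule(3)):
        print(i, step)
```

Note that each step pairs one group's I/O with the next group's communication, which is the dynamic contention management the abstract describes: the I/O nodes see requests from only one group at a time.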

Original language: English (US)
Pages (from-to): 193-220
Number of pages: 28
Journal: Journal of Parallel and Distributed Computing
Issue number: 2
State: Published - 2002

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
  • Hardware and Architecture
  • Computer Networks and Communications
  • Artificial Intelligence
