TY - GEN
T1 - A prefetching prototype for the parallel file system on the Paragon
AU - Arunachalam, Meenakshi
AU - Choudhary, Alok
AU - Rullman, Brad
N1 - Publisher Copyright:
© 1995 ACM.
PY - 1995/5/1
Y1 - 1995/5/1
N2 - The initial performance results for the file system demonstrate that its performance is scalable. The access bandwidth seen by the user when prefetching is used is also scalable, and given a reasonable overlap between computation and I/O, the benefits of system-level prefetching can be very significant. In the normal mode of operation (without prefetching), data is transferred directly into the user's buffer, while with system-level prefetching the data is buffered; nevertheless, performance with prefetching is comparable even when there is no overlap of I/O with computation. We also compared the performance of system-level prefetching with explicit prefetching at the user level, where the knowledge of what and when to prefetch is available. We observed that the performance of system-level prefetching is comparable to that of user-level prefetching despite the extra level of buffering in the former, which demonstrates that the prefetching prototype's performance is not far below the expected best case. In any such implementation in large-scale software, there are a large number of parameters that can be studied and need to be evaluated. As part of future work, we plan to evaluate the performance of prefetching on much larger systems and to study its performance for a greater variety of workloads and access patterns. Furthermore, we plan to implement prefetching in other file I/O modes.
AB - The initial performance results for the file system demonstrate that its performance is scalable. The access bandwidth seen by the user when prefetching is used is also scalable, and given a reasonable overlap between computation and I/O, the benefits of system-level prefetching can be very significant. In the normal mode of operation (without prefetching), data is transferred directly into the user's buffer, while with system-level prefetching the data is buffered; nevertheless, performance with prefetching is comparable even when there is no overlap of I/O with computation. We also compared the performance of system-level prefetching with explicit prefetching at the user level, where the knowledge of what and when to prefetch is available. We observed that the performance of system-level prefetching is comparable to that of user-level prefetching despite the extra level of buffering in the former, which demonstrates that the prefetching prototype's performance is not far below the expected best case. In any such implementation in large-scale software, there are a large number of parameters that can be studied and need to be evaluated. As part of future work, we plan to evaluate the performance of prefetching on much larger systems and to study its performance for a greater variety of workloads and access patterns. Furthermore, we plan to implement prefetching in other file I/O modes.
UR - http://www.scopus.com/inward/record.url?scp=0039638012&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0039638012&partnerID=8YFLogxK
U2 - 10.1145/223587.223631
DO - 10.1145/223587.223631
M3 - Conference contribution
AN - SCOPUS:0039638012
T3 - Proceedings of the 1995 ACM SIGMETRICS Joint International Conference on Measurement and Modeling of Computer Systems, SIGMETRICS 1995/PERFORMANCE 1995
SP - 321
EP - 323
BT - Proceedings of the 1995 ACM SIGMETRICS Joint International Conference on Measurement and Modeling of Computer Systems, SIGMETRICS 1995/PERFORMANCE 1995
A2 - Gaither, Blaine D.
PB - Association for Computing Machinery, Inc
T2 - 1995 ACM SIGMETRICS Joint International Conference on Measurement and Modeling of Computer Systems, SIGMETRICS 1995/PERFORMANCE 1995
Y2 - 15 May 1995 through 19 May 1995
ER -