Parallelism should be frictionless, allowing every developer to start with the assumption of parallelism instead of being forced to adopt it only once performance demands it. Considerable progress has been made toward this vision on the language and training front: it has been demonstrated that sophomores can learn basic data structures and algorithms in a "parallel first" model enabled by a high-level parallel language. However, achieving both high productivity and high performance on current and future heterogeneous systems requires innovation throughout the hardware/software stack. This team brings two distinct perspectives to this problem: (a) the "theory down" approach, focusing on high-level parallel languages and the theory and practice of achieving provable performance bounds within them; and (b) the "architecture up" approach, focusing on rethinking abstractions at the architectural, operating system, run-time, and compiler levels to optimize raw performance. Through a PPoSS planning award, this team has begun to integrate these two perspectives. Extensive interactions among faculty and students at both institutions, including a workshop, have resulted in a pipeline of work, the identification of synergies, and this proposal. The team envisions a full-stack system that bridges from a high-level parallel language to heterogeneous node hardware. Investigating how to meaningfully bridge these layers so that high-level languages can be used frictionlessly is the aim of this proposal.
Effective start/end date: 10/1/21 → 9/30/25
- National Science Foundation (CCF-2119069-001)