Abstract
Address translation often emerges as a critical performance bottleneck for virtualized systems and has recently been the impetus for hardware paging mechanisms. These mechanisms apply similar translation models for both guest and host address translations. We make an important observation that the model employed to translate from guest physical addresses (GPAs) to host physical addresses (HPAs) is orthogonal to the model used to translate guest virtual addresses (GVAs) to GPAs. Changing this model requires VMM cooperation, but has no implications for guest OS compatibility. As an example, we consider a *hashed page table* approach for GPA→HPA translation. *Nested paging*, widely considered the most promising approach, uses unhashed multi-level forward page tables for both GVA→GPA and GPA→HPA translations, resulting in a potential O(n²) page-walk cost on a TLB miss for n-level page tables. In contrast, the hashed page table approach has an expected O(n) cost. Our simulation results show that when a hashed page table is used at the nested level, memory-system performance is no worse than, and sometimes better than, that of a nested forward-mapped page table, owing to shorter page walks and reduced cache pressure. This showcases the potential for alternative paging mechanisms.
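To make the asymptotic comparison concrete, the sketch below (not from the paper; the function names and the single-probe hash-lookup assumption are ours) counts the memory references of a two-dimensional nested page walk versus a walk whose nested level uses a hashed page table. It assumes a full walk on every TLB miss and ignores paging-structure caches and hash-bucket chaining.

```python
# Minimal sketch, assuming an n-level guest page table and either an n-level
# forward-mapped nested page table or a hashed nested page table that resolves
# each GPA->HPA lookup with a single expected probe.

def nested_forward_walk_refs(n_guest: int, n_host: int) -> int:
    """Each guest page-table access (plus the final guest data GPA) needs a
    full n_host-level host walk, giving (n_guest + 1) * (n_host + 1) - 1
    memory references in total."""
    return (n_guest + 1) * (n_host + 1) - 1


def nested_hashed_walk_refs(n_guest: int, probes_per_lookup: int = 1) -> int:
    """Each GPA->HPA translation is one expected hash-table probe, so the
    walk grows linearly with the number of guest levels."""
    return (n_guest + 1) * (probes_per_lookup + 1) - 1


if __name__ == "__main__":
    n = 4  # x86-64-style four-level page tables
    print("forward-mapped nested walk:", nested_forward_walk_refs(n, n), "refs")  # 24
    print("hashed nested walk:       ", nested_hashed_walk_refs(n), "refs")       # 9
```

Under these assumptions, four-level tables yield 24 references for the forward-mapped nested walk versus 9 for the hashed nested walk, illustrating the quadratic-versus-linear growth described in the abstract.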
| Original language | English (US) |
|---|---|
| Article number | 5476385 |
| Pages (from-to) | 17-20 |
| Number of pages | 4 |
| Journal | IEEE Computer Architecture Letters |
| Volume | 9 |
| Issue number | 1 |
| DOIs | |
| State | Published - Jan 2010 |
Keywords
- Computer Architecture
- Emerging technologies
- Hardware/software interfaces
- Virtual Memory
- Virtualization
ASJC Scopus subject areas
- Hardware and Architecture