Abstract
The Least-Recently Used (LRU) cache replacement policy and its variants are widely deployed in modern processors. This paper shows for the first time in detail that the LRU states of caches can be used to leak information: any access to a cache by a sender will modify the LRU state, and the receiver is able to observe this through a timing measurement. This paper presents LRU timing-based channels both when the sender and the receiver have shared memory, e.g., shared library data pages, and when they are separate processes without shared memory. In addition, the new LRU timing-based channels are demonstrated on both Intel and AMD processors in scenarios where the sender and the receiver share the cache, in both hyper-threaded and time-sliced settings. The transmission rate of the LRU channels can be up to 600 Kbps per cache set in the hyper-threaded setting. Unlike the majority of existing cache channels, which require the sender to trigger cache misses, the new LRU channels work with the sender having only cache hits, making the channel faster and stealthier. This paper also demonstrates that the new LRU channels can be used in transient execution attacks, e.g., Spectre. Further, this paper shows that the LRU channels pose threats to existing secure cache designs, and demonstrates that the LRU channels affect the secure PL cache. The paper finishes by discussing and evaluating possible defenses.
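The receiver's only requirement is the ability to tell a cache hit from a miss by timing a single load. As a rough illustration of such a timing primitive (a minimal sketch in C; the `rdtscp`-based measurement and fence placement are standard techniques, while the threshold value is an illustrative assumption, not a figure from the paper):

```c
#include <stdint.h>
#include <x86intrin.h>   /* __rdtscp, _mm_mfence (x86-64, GCC/Clang) */

/* Time a single load in cycles; a small value indicates a cache hit,
 * a large value indicates a miss (the line was evicted). */
static inline uint64_t timed_load(const volatile uint8_t *addr)
{
    unsigned int aux;
    _mm_mfence();                       /* order earlier memory operations */
    uint64_t start = __rdtscp(&aux);    /* timestamp before the load       */
    (void)*addr;                        /* the load being measured         */
    uint64_t end = __rdtscp(&aux);      /* timestamp after the load        */
    _mm_mfence();
    return end - start;
}

/* Illustrative threshold (assumption): L1 hits take well under ~100 cycles
 * on the machines assumed here, misses to DRAM take several hundred. */
#define HIT_THRESHOLD 100

static inline int is_cache_hit(const volatile uint8_t *addr)
{
    return timed_load(addr) < HIT_THRESHOLD;
}
```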
| Original language | English (US) |
| --- | --- |
| Title of host publication | Proceedings - 2020 IEEE International Symposium on High Performance Computer Architecture, HPCA 2020 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 139-152 |
| Number of pages | 14 |
| ISBN (Electronic) | 9781728161495 |
| DOIs | |
| State | Published - Feb 2020 |
| Event | 26th IEEE International Symposium on High Performance Computer Architecture, HPCA 2020, San Diego, United States. Duration: Feb 22 2020 → Feb 26 2020 |
Publication series

| Name | Proceedings - 2020 IEEE International Symposium on High Performance Computer Architecture, HPCA 2020 |
| --- | --- |
Conference

| Conference | 26th IEEE International Symposium on High Performance Computer Architecture, HPCA 2020 |
| --- | --- |
| Country/Territory | United States |
| City | San Diego |
| Period | 2/22/20 → 2/26/20 |
Funding
As shown in Figure 11 (top), compared to Tree-PLRU, the FIFO and Random replacement policies cause only a small overall degradation in the L1 data cache miss rate. Depending on the benchmark, the FIFO and Random replacement policies sometimes even achieve a lower miss rate than Tree-PLRU. Since an L1 miss can still hit in the L2, the overall CPU performance, measured by cycles per instruction (CPI) in Figure 11 (bottom), changes by less than 2% compared to the baseline. Thus, using a different replacement policy in the L1 data cache to mitigate the LRU side and covert channels incurs only a small overhead while increasing security. If the channels in all levels of the cache are to be mitigated, the replacement policies of all cache levels need to be changed.

X. CONCLUSION. We presented novel timing-based channels leveraging the cache LRU replacement states. We designed two protocols to transfer information between processes using the LRU states, covering both the case where the sender and the receiver share memory and the case where they do not. We also demonstrated the LRU channels on real-world commercial processors. The LRU channels require only an access (cache hit or miss) from the sender, whereas all existing state-based cache timing side and covert channels require the sender to trigger a cache replacement (a cache miss). Thus, the LRU channel has a shorter encoding time and a lower cache miss rate for the sender, and requires a smaller speculation window in transient-attack scenarios. We showed that the new LRU channels also affect current secure cache designs. Finally, we proposed and evaluated several methods to mitigate the LRU channel, including a modified design of the secure PL cache.

ACKNOWLEDGEMENT. We would like to thank the authors of InvisiSpec [36], especially Mengjia Yan, for their open-source code and scripts. Special thanks to Linbo Shao and Junwen Shao for helping with the gem5 simulation. We would like to acknowledge Amazon for providing AWS Cloud Credits for Research. This work was supported by NSF grants 1651945 and 1813797, and through SRC award number 2844.001.
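For the shared-memory case, one round of such a protocol could be structured roughly as sketched below. This is a simplified illustration under stated assumptions, not the paper's exact code: an 8-way set, the address layout, and the helpers `is_cache_hit` (the timing primitive sketched earlier) and `wait_interval` are all hypothetical.

```c
#include <stdint.h>

#define W 8   /* assumed L1 data cache associativity */

/* Timing primitive from the earlier sketch; assumed available here. */
extern int is_cache_hit(const volatile uint8_t *addr);

static inline void access_line(volatile uint8_t *p) { (void)*p; }

/* Crude stand-in for sender/receiver synchronization (assumption). */
static inline void wait_interval(void)
{
    for (volatile int i = 0; i < 100000; i++) { }
}

/* Receiver side of one round. set[0..W-1] are shared addresses that all
 * map to the same cache set; evict is a receiver-private address that
 * also maps to that set. */
int receive_bit(volatile uint8_t *set[W], volatile uint8_t *evict)
{
    for (int i = 0; i < W; i++)      /* prime the set in a fixed order;    */
        access_line(set[i]);         /* set[0] ends up least recently used */

    wait_interval();                 /* sender hits set[0] for '1', idles for '0' */

    access_line(evict);              /* force one replacement in the set   */

    return is_cache_hit(set[0]);     /* still cached => sender sent '1'    */
}
```

The property this sketch tries to capture is the one emphasized above: the sender performs only cache hits, while the single replacement that reveals the bit is triggered by the receiver.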
Keywords
- Caches
- Covert channels
- LRU
- Replacement policy
- Side channels
- Timing-based channels
ASJC Scopus subject areas
- Artificial Intelligence
- Hardware and Architecture
- Safety, Risk, Reliability and Quality