Abstract
The simulation of circuit-level timing errors has become a critical component in the evaluation of many emerging architectures. However, existing methods for injecting these errors tend to be either excruciatingly slow or rather inaccurate. We show that by dynamically building an error model through supervised learning, substantial speedups can be achieved with little divergence from cumbersome gate-level simulation. We demonstrate performance improvements of up to 40x while limiting the Root Mean Square (RMS) divergence between estimated and true error rates to 0.5% on a range of the SPEC CINT2006 benchmarks. This result is significant because it offers a practical path to accelerating simulation and easing the evaluation of new architectures.
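The abstract does not specify the features or learner used, but the core idea can be sketched: label a small window of operations with the slow gate-level simulator, train a lightweight model on cheap architecture-level features, then use that model to estimate error rates without further gate-level runs. Below is a minimal, self-contained Python illustration; the oracle `gate_level_sim`, the feature choice, and the error condition are all hypothetical stand-ins, not the paper's actual method.

```python
import math
import random

random.seed(0)

def gate_level_sim(op_a, op_b):
    # Hypothetical stand-in for slow gate-level simulation: pretend an
    # addition suffers a timing error when its result is wide (long carry chain).
    return 1 if (op_a + op_b).bit_length() > 20 else 0

def features(op_a, op_b):
    # Cheap features observable at the architecture level (bias + operand widths).
    return [1.0, op_a.bit_length() / 32, op_b.bit_length() / 32]

def rand_operand():
    # Operands with varied bit widths so the error condition is learnable.
    return random.getrandbits(random.randint(1, 32))

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Step 1: label a small training window with the slow simulator.
train = [(rand_operand(), rand_operand()) for _ in range(2000)]
X = [features(a, b) for a, b in train]
y = [gate_level_sim(a, b) for a, b in train]

# Step 2: fit a simple logistic-regression error model by SGD.
w = [0.0, 0.0, 0.0]
for _ in range(200):
    for xi, yi in zip(X, y):
        p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)))
        for j in range(len(w)):
            w[j] += 0.1 * (yi - p) * xi[j]

# Step 3: estimate the error rate of new operations from the model alone,
# and compare against the (normally unaffordable) gate-level ground truth.
test = [(rand_operand(), rand_operand()) for _ in range(2000)]
pred_rate = sum(
    sigmoid(sum(wj * xj for wj, xj in zip(w, features(a, b))))
    for a, b in test
) / len(test)
true_rate = sum(gate_level_sim(a, b) for a, b in test) / len(test)
print(f"true error rate {true_rate:.3f}, model estimate {pred_rate:.3f}")
```

In a real flow the labeling window would be interleaved with execution so the model tracks workload phases, which is what "dynamically building" the error model suggests.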
| Original language | English (US) |
| ---|--- |
| Pages (from-to) | 132-144 |
| Number of pages | 13 |
| Journal | Simulation Series |
| Volume | 49 |
| Issue number | 9 |
| State | Published - 2017 |
| Event | 49th Summer Computer Simulation Conference, SCSC 2017, part of the 2017 Summer Simulation Multi-Conference, SummerSim 2017 - Bellevue, United States (Jul 9 - Jul 12, 2017) |
Keywords
- Architecture-level simulation
- Circuit-level timing error
- Error rate
- Gate-level simulation
ASJC Scopus subject areas
- Computer Networks and Communications