TY - JOUR
T1 - Machine Learning-Based Temperature Prediction for Runtime Thermal Management Across System Components
AU - Zhang, Kaicheng
AU - Guliani, Akhil
AU - Memik, Seda Ogrenci
AU - Memik, Gokhan
AU - Yoshii, Kazutomo
AU - Sankaran, Rajesh
AU - Beckman, Pete
N1 - Funding Information:
This work has been partially funded by DOE grant DE-SC0012531 and by US National Science Foundation grant CCF-1422489. This material is based upon work supported by the U.S. Department of Energy, Office of Science, under contract number DE-AC02-06CH11357. We also gratefully acknowledge the computing resources provided and operated by the Joint Laboratory for System Evaluation (JLSE) at Argonne National Laboratory.
Publisher Copyright:
© 2017 IEEE.
PY - 2018/2/1
Y1 - 2018/2/1
AB - Elevated temperatures limit the peak performance of systems because of frequent interventions by thermal throttling. Non-uniform thermal states across system nodes also cause performance variation within seemingly equivalent nodes, leading to significant degradation of overall performance. In this paper, we present a framework for creating a lightweight thermal prediction system suitable for run-time management decisions. We pursue two avenues to explore optimized lightweight thermal predictors. First, we use feature selection algorithms to improve the performance of previously designed machine learning methods. Second, we develop alternative methods using neural network and linear regression-based methods to perform a comprehensive comparative study of prediction methods. We show that our optimized models achieve improved performance, with better prediction accuracy and lower overhead, compared with the previously proposed Gaussian process model. Specifically, we present a reduced version of the Gaussian process model, a neural network-based model, and a linear regression-based model. Using the optimization methods, we reduce the average prediction error of the Gaussian process model from 4.2 °C to 2.9 °C. We also show that the newly developed models using a neural network and Lasso linear regression have average prediction errors of 2.9 °C and 3.8 °C, respectively. The prediction overheads are 0.22, 0.097, and 0.026 ms per prediction for the reduced Gaussian process, neural network, and Lasso linear regression models, respectively, compared with 0.57 ms per prediction for the previous Gaussian process model. We have implemented our proposed thermal prediction models on a two-node system configuration to help identify the optimal task placement. The task placement identified by the models reduces the average system temperature by up to 11.9 °C without any performance degradation. Furthermore, these models achieve 75, 82.5, and 74.17 percent success rates, respectively, in correctly identifying task placements with better thermal response, compared with a 72.5 percent success rate for the original model. Finally, we extended our analysis to a 16-node system, where we were able to train the models and execute them in real time to guide task migration, achieving on average a 17 percent reduction in overall system cooling power.
KW - Thermal modeling
KW - high performance computing systems
KW - many-core processors
KW - operating systems
UR - http://www.scopus.com/inward/record.url?scp=85029161627&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85029161627&partnerID=8YFLogxK
U2 - 10.1109/TPDS.2017.2732951
DO - 10.1109/TPDS.2017.2732951
M3 - Article
AN - SCOPUS:85029161627
SN - 1045-9219
VL - 29
SP - 405
EP - 419
JO - IEEE Transactions on Parallel and Distributed Systems
JF - IEEE Transactions on Parallel and Distributed Systems
IS - 2
M1 - 7995115
ER -