Machine Learning-Based Temperature Prediction for Runtime Thermal Management Across System Components

Kaicheng Zhang, Akhil Guliani, Seda Ogrenci Memik*, Gokhan Memik, Kazutomo Yoshii, Rajesh Sankaran, Pete Beckman

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

81 Scopus citations


Elevated temperatures limit the peak performance of systems because of frequent interventions by thermal throttling. Non-uniform thermal states across system nodes also cause performance variation among seemingly equivalent nodes, leading to significant degradation of overall performance. In this paper, we present a framework for creating a lightweight thermal prediction system suitable for run-time management decisions. We pursue two avenues to explore optimized lightweight thermal predictors. First, we use feature selection algorithms to improve the performance of previously designed machine learning methods. Second, we develop alternative methods using neural network- and linear regression-based approaches to perform a comprehensive comparative study of prediction methods. We show that our optimized models achieve improved performance, with better prediction accuracy and lower overhead, compared with the previously proposed Gaussian process model. Specifically, we present a reduced version of the Gaussian process model, a neural network-based model, and a linear regression-based model. Using the optimization methods, we reduce the average prediction error of the Gaussian process model from 4.2°C to 2.9°C. We also show that the newly developed models using a neural network and Lasso linear regression have average prediction errors of 2.9°C and 3.8°C, respectively. The prediction overheads are 0.22, 0.097, and 0.026 ms per prediction for the reduced Gaussian process, neural network, and Lasso linear regression models, respectively, compared with 0.57 ms per prediction for the previous Gaussian process model. We have implemented our proposed thermal prediction models on a two-node system configuration to help identify the optimal task placement. The task placement identified by the models reduces the average system temperature by up to 11.9°C without any performance degradation.
Furthermore, these models achieve 75, 82.5, and 74.17 percent success rates, respectively, in correctly identifying task placements with better thermal response, compared with a 72.5 percent success rate for the original model. Finally, we extended our analysis to a 16-node system, where we were able to train the models and execute them in real time to guide task migration, achieving on average a 17 percent reduction in overall system cooling power.
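The Lasso linear regression model summarized above can be illustrated with a minimal sketch. This is not the paper's implementation: the sensor features, data, and hyperparameters below are hypothetical stand-ins chosen to show the key idea, namely that L1 regularization zeroes out uninformative features, which keeps the predictor small and its per-prediction overhead low.

```python
# Hypothetical sketch of a Lasso-based next-interval temperature predictor.
# Feature names and data are illustrative assumptions, not from the paper.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Synthetic training data: per-node features such as current core temperature,
# utilization, power draw, and a neighbor node's temperature (all scaled to [0, 1]).
n_samples, n_features = 500, 4
X = rng.uniform(0.0, 1.0, size=(n_samples, n_features))

# Ground-truth future temperature: a sparse linear function plus sensor noise.
# The zero weight mimics a feature that carries no thermal information.
true_w = np.array([30.0, 0.0, 15.0, 5.0])
y = 40.0 + X @ true_w + rng.normal(0.0, 0.5, size=n_samples)

# L1 regularization drives the weight of the uninformative feature toward zero.
model = Lasso(alpha=0.1)
model.fit(X, y)

# A prediction is a single dot product, so run-time overhead is tiny.
next_temp = model.predict(X[:1])[0]
print(model.coef_)  # second coefficient is (near) zero
```

At run time, a scheduler could evaluate such a model once per candidate task placement and pick the placement with the lowest predicted temperature; the dot-product cost is what makes the sub-millisecond overheads reported above plausible.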

Original language: English (US)
Article number: 7995115
Pages (from-to): 405-419
Number of pages: 15
Journal: IEEE Transactions on Parallel and Distributed Systems
Issue number: 2
State: Published - Feb 1 2018


Keywords

  • Thermal modeling
  • high performance computing systems
  • many-core processors
  • operating systems

ASJC Scopus subject areas

  • Signal Processing
  • Hardware and Architecture
  • Computational Theory and Mathematics

