Achieving autonomous power management using reinforcement learning

Hao Shen, Ying Tan, Jun Lu, Qing Wu, Qinru Qiu

Research output: Contribution to journal › Article

50 Citations (Scopus)

Abstract

System-level power management must consider the uncertainty and variability that come from the environment, the application, and the hardware. A robust power management technique must be able to learn the optimal decision from past events and improve itself as the environment changes. This article presents a novel online power management technique based on model-free constrained reinforcement learning (Q-learning). The proposed learning algorithm requires no prior information about the workload and dynamically adapts to the environment to achieve autonomous power management. We focus on the power management of the peripheral device and the microprocessor, two of the basic components of a computer. Because of their different operating behaviors and performance considerations, these two types of devices require different Q-learning agent designs. The article discusses system modeling and cost function construction for both types of Q-learning agents. Enhancement techniques are also proposed to speed up convergence and to better maintain the required performance (or power) constraint in a dynamic system with large variations. Compared with existing machine-learning-based power management techniques, Q-learning-based power management is more flexible in adapting to different workloads and hardware and provides a wider range of power-performance tradeoffs.
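
As a concrete illustration of the technique the abstract describes, the sketch below implements a tiny tabular Q-learning power manager in Python. This is a minimal sketch under assumed names and numbers, not the authors' implementation: the two-level workload state, the fixed tradeoff weight LAMBDA (a stand-in for the paper's constrained formulation), and the toy device dynamics are all assumptions made for illustration.

    # Minimal tabular Q-learning sketch for device power management.
    # Illustrative only -- the state encoding, cost weights, and toy device
    # dynamics below are assumptions, not the paper's implementation.
    import random
    from collections import defaultdict

    ACTIONS = ["sleep", "active"]           # hypothetical power modes
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration
    LAMBDA = 0.5                            # assumed power/latency tradeoff weight

    Q = defaultdict(float)                  # Q[(state, action)] -> expected cost

    def choose_action(state):
        # Epsilon-greedy; we minimize cost, so exploitation takes the min-Q action.
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return min(ACTIONS, key=lambda a: Q[(state, a)])

    def update(state, action, cost, next_state):
        # Standard model-free Q-learning backup, written for cost minimization.
        best_next = min(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (cost + GAMMA * best_next - Q[(state, action)])

    def cost_fn(power, latency):
        # Combined cost: power plus a weighted latency penalty. A fixed LAMBDA is
        # a simplification of the paper's constrained formulation.
        return power + LAMBDA * latency

    # Toy training loop: state is a coarse workload level (0 = idle, 1 = busy).
    state = 0
    for _ in range(10000):
        action = choose_action(state)
        busy = random.random() < 0.3                             # hypothetical request arrivals
        power = 0.1 if action == "sleep" else 1.0                # sleeping saves power...
        latency = 1.0 if (action == "sleep" and busy) else 0.0   # ...but delays pending work
        next_state = 1 if busy else 0
        update(state, action, cost_fn(power, latency), next_state)
        state = next_state

Run long enough, such an agent learns to stay active under a busy workload and to sleep when idle. The enhancement techniques the abstract mentions target exactly what this fixed-weight sketch lacks: faster convergence and adaptive maintenance of a required performance (or power) constraint under large workload variations.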

Original language: English (US)
Article number: 24
Journal: ACM Transactions on Design Automation of Electronic Systems
Volume: 18
Issue number: 2
DOIs: 10.1145/2442087.2442095
State: Published - Mar 2013

Fingerprint

  • Reinforcement learning
  • Hardware
  • Power management
  • Cost functions
  • Learning algorithms
  • Learning systems
  • Microprocessor chips
  • Dynamical systems

Keywords

  • Computer
  • Machine learning
  • Power management
  • Thermal management

ASJC Scopus subject areas

  • Computer Science Applications
  • Computer Graphics and Computer-Aided Design
  • Electrical and Electronic Engineering

Cite this

Achieving autonomous power management using reinforcement learning. / Shen, Hao; Tan, Ying; Lu, Jun; Wu, Qing; Qiu, Qinru.

In: ACM Transactions on Design Automation of Electronic Systems, Vol. 18, No. 2, Article 24, 03.2013.

@article{41e84dc919c145d389fe8302488e8855,
  title     = "Achieving autonomous power management using reinforcement learning",
  author    = "Hao Shen and Ying Tan and Jun Lu and Qing Wu and Qinru Qiu",
  journal   = "ACM Transactions on Design Automation of Electronic Systems",
  volume    = "18",
  number    = "2",
  articleno = "24",
  year      = "2013",
  month     = mar,
  doi       = "10.1145/2442087.2442095",
  issn      = "1084-4309",
  publisher = "Association for Computing Machinery (ACM)",
  keywords  = "Computer, Machine learning, Power management, Thermal management",
  language  = "English (US)",
}

TY  - JOUR
T1  - Achieving autonomous power management using reinforcement learning
AU  - Shen, Hao
AU  - Tan, Ying
AU  - Lu, Jun
AU  - Wu, Qing
AU  - Qiu, Qinru
PY  - 2013/3
Y1  - 2013/3
KW  - Computer
KW  - Machine learning
KW  - Power management
KW  - Thermal management
UR  - http://www.scopus.com/inward/record.url?scp=84878515224&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=84878515224&partnerID=8YFLogxK
U2  - 10.1145/2442087.2442095
DO  - 10.1145/2442087.2442095
M3  - Article
AN  - SCOPUS:84878515224
VL  - 18
JO  - ACM Transactions on Design Automation of Electronic Systems
JF  - ACM Transactions on Design Automation of Electronic Systems
SN  - 1084-4309
IS  - 2
M1  - 24
ER  -