TY - GEN
T1 - Hierarchical dynamic power management using model-free reinforcement learning
AU - Wang, Yanzhi
AU - Triki, Maryam
AU - Lin, Xue
AU - Ammari, Ahmed C.
AU - Pedram, Massoud
PY - 2013
Y1 - 2013
AB - Model-free reinforcement learning (RL) has become a promising technique for designing a robust dynamic power management (DPM) framework that can cope with variations and uncertainties emanating from hardware and application characteristics. Moreover, the potentially significant benefit of performing application-level scheduling as part of system-level power management should be harnessed. This paper presents an architecture for hierarchical DPM in an embedded system composed of a processor chip and connected I/O devices (called system components). The goal is to achieve savings in system component power consumption, which tends to dominate the total power consumption. The proposed (online) adaptive DPM technique consists of two layers: an RL-based component-level local power manager (LPM) and a system-level global power manager (GPM). The LPM performs component power and latency optimization. It employs temporal difference learning on a semi-Markov decision process (SMDP) for model-free RL, and it is specifically optimized for an environment in which multiple (heterogeneous) types of applications can run in the embedded system. The GPM interacts with the CPU scheduler to perform effective application-level scheduling, thereby enabling the LPM to achieve further component power optimization. In this hierarchical DPM framework, the power-latency tradeoff of each application type can be precisely controlled through a user-defined parameter. Experiments show average power savings of up to 31.1% compared to existing approaches.
KW - Bayesian classification
KW - Dynamic power management
KW - Reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=84879566659&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84879566659&partnerID=8YFLogxK
U2 - 10.1109/ISQED.2013.6523606
DO - 10.1109/ISQED.2013.6523606
M3 - Conference contribution
AN - SCOPUS:84879566659
SN - 9781467349536
T3 - Proceedings - International Symposium on Quality Electronic Design, ISQED
SP - 170
EP - 177
BT - Proceedings of the 14th International Symposium on Quality Electronic Design, ISQED 2013
T2 - 14th International Symposium on Quality Electronic Design, ISQED 2013
Y2 - 4 March 2013 through 6 March 2013
ER -