TY - GEN
T1 - A deep reinforcement learning based framework for power-efficient resource allocation in cloud RANs
AU - Xu, Zhiyuan
AU - Wang, Yanzhi
AU - Tang, Jian
AU - Wang, Jing
AU - Gursoy, Mustafa Cenk
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2017/7/28
Y1 - 2017/7/28
N2 - Cloud Radio Access Networks (RANs) have become a key enabling technology for next-generation (5G) wireless communications, capable of meeting the requirements of massively growing wireless data traffic. However, resource allocation in cloud RANs still needs further improvement in order to minimize power consumption while meeting the demands of wireless users over a long operational period. Inspired by the success of Deep Reinforcement Learning (DRL) in solving complicated control problems, we present a novel DRL-based framework for power-efficient resource allocation in cloud RANs. Specifically, we define the state space, action space and reward function for the DRL agent, apply a Deep Neural Network (DNN) to approximate the action-value function, and formally formulate the resource allocation problem (in each decision epoch) as a convex optimization problem. We evaluate the performance of the proposed framework by comparing it with two widely used baselines via simulation. The simulation results show that it achieves significant power savings while meeting user demands, and that it handles highly dynamic cases well.
AB - Cloud Radio Access Networks (RANs) have become a key enabling technology for next-generation (5G) wireless communications, capable of meeting the requirements of massively growing wireless data traffic. However, resource allocation in cloud RANs still needs further improvement in order to minimize power consumption while meeting the demands of wireless users over a long operational period. Inspired by the success of Deep Reinforcement Learning (DRL) in solving complicated control problems, we present a novel DRL-based framework for power-efficient resource allocation in cloud RANs. Specifically, we define the state space, action space and reward function for the DRL agent, apply a Deep Neural Network (DNN) to approximate the action-value function, and formally formulate the resource allocation problem (in each decision epoch) as a convex optimization problem. We evaluate the performance of the proposed framework by comparing it with two widely used baselines via simulation. The simulation results show that it achieves significant power savings while meeting user demands, and that it handles highly dynamic cases well.
KW - Cloud Radio Access Network
KW - Deep Reinforcement Learning
KW - Green Communications
KW - Resource Allocation
UR - http://www.scopus.com/inward/record.url?scp=85028330496&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85028330496&partnerID=8YFLogxK
U2 - 10.1109/ICC.2017.7997286
DO - 10.1109/ICC.2017.7997286
M3 - Conference contribution
AN - SCOPUS:85028330496
T3 - IEEE International Conference on Communications
BT - 2017 IEEE International Conference on Communications, ICC 2017
A2 - Debbah, Merouane
A2 - Gesbert, David
A2 - Mellouk, Abdelhamid
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2017 IEEE International Conference on Communications, ICC 2017
Y2 - 21 May 2017 through 25 May 2017
ER -