Cloud radio access networks (CRANs) have become a key enabling technology for next-generation wireless communications. Resource allocation in CRANs still needs further improvement to minimize power consumption while meeting the demands of wireless users over a long period. Inspired by the success of deep reinforcement learning (DRL) in solving complicated control problems, we present ReCARL, a novel framework for power-efficient resource allocation in CRANs with deep reinforcement learning. Specifically, we define the state space, action space, and reward function for the DRL agent, apply a deep neural network (DNN) to approximate the action-value function, and formulate the resource allocation problem in each decision epoch as a convex optimization problem. Under ReCARL, we propose two DRL agents: one has a regular DNN structure trained with the basic deep Q-learning method (ReCARL-Basic), while the other has a context-aware DNN structure trained with a hybrid deep Q-learning method (ReCARL-Hybrid). We evaluated the performance of ReCARL with both DRL agents against two widely used baselines via extensive simulation. The results show that ReCARL achieves significant power savings while meeting user demands, and that it handles highly dynamic cases well.
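The deep Q-learning setup the abstract describes (a DNN approximating the action-value function, updated from state, action, and reward) can be sketched as below. This is a minimal illustrative toy, not the paper's ReCARL implementation: the names `TinyQNetwork` and `dqn_step`, the one-hidden-layer network, and the generic state/action encoding are all assumptions for demonstration only.

```python
import numpy as np

class TinyQNetwork:
    """Minimal one-hidden-layer Q-network (a numpy stand-in for the DNN
    that approximates the action-value function Q(s, a))."""

    def __init__(self, state_dim, n_actions, hidden=32, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (state_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, n_actions))
        self.b2 = np.zeros(n_actions)
        self.lr = lr

    def q_values(self, s):
        """Forward pass: returns Q-values for all actions and the hidden layer."""
        h = np.maximum(0.0, s @ self.W1 + self.b1)  # ReLU hidden layer
        return h @ self.W2 + self.b2, h

    def update(self, s, a, target):
        """One SGD step on the squared TD error 0.5 * (Q(s,a) - target)^2."""
        q, h = self.q_values(s)
        err = q[a] - target
        grad_q = np.zeros_like(q)
        grad_q[a] = err
        # Backpropagate through the output layer, ReLU, and input layer.
        dW2 = np.outer(h, grad_q)
        dh = (self.W2 @ grad_q) * (h > 0)
        dW1 = np.outer(s, dh)
        self.W2 -= self.lr * dW2
        self.b2 -= self.lr * grad_q
        self.W1 -= self.lr * dW1
        self.b1 -= self.lr * dh
        return err

def dqn_step(net, s, a, r, s_next, gamma=0.9):
    """Basic deep Q-learning update: target = r + gamma * max_a' Q(s', a')."""
    q_next, _ = net.q_values(s_next)
    target = r + gamma * np.max(q_next)
    return net.update(s, a, target)
```

In a CRAN setting, the state would encode user demands and network status, an action would select a resource-allocation decision, and the reward would penalize power consumption; those mappings are exactly what the paper defines and are not reproduced here.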
- Deep reinforcement learning
- Cloud radio access network
- Green communications
- Resource allocation
ASJC Scopus subject areas
- Computer Networks and Communications
- Electrical and Electronic Engineering