Abstract
Unmanned aerial vehicles (UAVs) can serve as aerial base stations to enhance both the coverage and performance of communication networks in various scenarios, such as emergency communications and network access for remote areas. Mobile UAVs can establish communication links to deliver packets for ground users. However, UAVs have limited communication ranges and energy resources; in particular, over a large region they can neither cover the entire area at all times nor keep flying for long. It is thus challenging to control a group of UAVs so that they achieve a desired level of communication coverage in the long run, while preserving their connectivity and minimizing their energy consumption. Toward this end, we propose to leverage emerging deep reinforcement learning (DRL) for UAV control and present a novel, highly energy-efficient DRL-based method, which we call DRL-based energy-efficient control for coverage and connectivity (DRL-EC3). The proposed method 1) maximizes a novel energy-efficiency function that jointly considers communication coverage, fairness, energy consumption, and connectivity; 2) learns the environment and its dynamics; and 3) makes decisions under the guidance of two powerful deep neural networks. We conduct extensive simulations for performance evaluation. The simulation results show that DRL-EC3 significantly and consistently outperforms two commonly used baseline methods in terms of coverage, fairness, and energy consumption.
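The abstract does not spell out the exact form of the energy-efficiency objective, but one can sketch the kind of reward it describes: a term combining coverage, fairness across ground points, and energy cost. The following is a minimal illustrative sketch, not the paper's formulation; the function names, the use of Jain's fairness index, and the multiplicative combination are all assumptions for illustration.

```python
import numpy as np

def jains_fairness(coverage_scores):
    """Jain's fairness index over per-point coverage scores (1.0 = perfectly fair).

    Illustrative choice: the paper jointly considers fairness, but its exact
    fairness measure is not given in the abstract.
    """
    s = np.asarray(coverage_scores, dtype=float)
    if s.sum() == 0:
        return 0.0
    return s.sum() ** 2 / (len(s) * (s ** 2).sum())

def energy_efficiency_reward(coverage_scores, energy_consumed, eps=1e-8):
    """Hypothetical energy-efficiency objective: (fairness * mean coverage) / energy.

    `coverage_scores` holds one value per ground point (e.g., fraction of the
    period it was covered); `energy_consumed` is the fleet's energy cost for
    the period. Connectivity would enter as an additional constraint or
    penalty, omitted here for brevity.
    """
    fairness = jains_fairness(coverage_scores)
    avg_coverage = float(np.mean(coverage_scores))
    return fairness * avg_coverage / (energy_consumed + eps)
```

Under this reading, a DRL agent (the abstract mentions two deep neural networks, consistent with an actor-critic design) would be trained to choose UAV movements that maximize this reward over time.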
| Original language | English (US) |
|---|---|
| Article number | 8432464 |
| Pages (from-to) | 2059-2070 |
| Number of pages | 12 |
| Journal | IEEE Journal on Selected Areas in Communications |
| Volume | 36 |
| Issue number | 9 |
| DOIs | |
| State | Published - Sep 2018 |
Keywords
- UAV control
- communication coverage
- deep reinforcement learning
- energy efficiency
ASJC Scopus subject areas
- Computer Networks and Communications
- Electrical and Electronic Engineering