TY - GEN
T1 - Autonomous UAV with Learned Trajectory Generation and Control
AU - Li, Yilan
AU - Li, Mingyang
AU - Sanyal, Amit
AU - Wang, Yanzhi
AU - Qiu, Qinru
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/10
Y1 - 2019/10
N2 - Unmanned aerial vehicle (UAV) technology is a rapidly growing field with tremendous opportunities for research and applications. To achieve true autonomy for UAVs in the absence of remote control and of external navigation aids such as global navigation satellite systems and radar, minimum-energy trajectory planning that accounts for obstacle avoidance and stability control is the key. Although this can be formulated as a constrained optimization problem, the complicated nonlinear relationship between UAV trajectory and thrust control makes it almost impossible to solve analytically. While deep reinforcement learning is known for its ability to provide model-free optimization of complex systems through learning, its state space, actions, and reward functions must be designed carefully. This paper presents our vision of the different layers of autonomy in a UAV system, and our effort to both generate and track trajectories using deep reinforcement learning (DRL). The experimental results show that, compared to conventional approaches, the learned trajectory requires 20% less control thrust and 18% less time to reach the target. Furthermore, using the control policy learned by DRL, the UAV achieves 58.14% less position error and 21.77% less system power.
AB - Unmanned aerial vehicle (UAV) technology is a rapidly growing field with tremendous opportunities for research and applications. To achieve true autonomy for UAVs in the absence of remote control and of external navigation aids such as global navigation satellite systems and radar, minimum-energy trajectory planning that accounts for obstacle avoidance and stability control is the key. Although this can be formulated as a constrained optimization problem, the complicated nonlinear relationship between UAV trajectory and thrust control makes it almost impossible to solve analytically. While deep reinforcement learning is known for its ability to provide model-free optimization of complex systems through learning, its state space, actions, and reward functions must be designed carefully. This paper presents our vision of the different layers of autonomy in a UAV system, and our effort to both generate and track trajectories using deep reinforcement learning (DRL). The experimental results show that, compared to conventional approaches, the learned trajectory requires 20% less control thrust and 18% less time to reach the target. Furthermore, using the control policy learned by DRL, the UAV achieves 58.14% less position error and 21.77% less system power.
KW - Deep reinforcement learning
KW - actor-critic algorithm
KW - continuous trajectory tracking
KW - unmanned aerial vehicles
UR - http://www.scopus.com/inward/record.url?scp=85082396886&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85082396886&partnerID=8YFLogxK
U2 - 10.1109/SiPS47522.2019.9020508
DO - 10.1109/SiPS47522.2019.9020508
M3 - Conference contribution
AN - SCOPUS:85082396886
T3 - IEEE Workshop on Signal Processing Systems, SiPS: Design and Implementation
SP - 115
EP - 120
BT - 2019 IEEE International Workshop on Signal Processing Systems, SiPS 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 33rd IEEE International Workshop on Signal Processing Systems, SiPS 2019
Y2 - 20 October 2019 through 23 October 2019
ER -