TY - GEN
T1 - Deep reinforcement learning based resource allocation in low latency edge computing networks
AU - Yang, Tianyu
AU - Hu, Yulin
AU - Gursoy, M. Cenk
AU - Schmeink, Anke
AU - Mathar, Rudolf
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/10/12
Y1 - 2018/10/12
N2 - In this paper, we investigate strategies for the allocation of computational resources using deep reinforcement learning in mobile edge computing networks that operate with finite blocklength codes to support low latency communications. The end-to-end (E2E) reliability of the service is addressed, while both the delay violation probability and the decoding error probability are taken into account. By employing a deep reinforcement learning method, namely deep Q-learning, we design an intelligent agent at the edge computing node that develops a real-time adaptive policy for allocating computational resources to the offloaded tasks of multiple users, in order to improve the average E2E reliability. Via simulations, we show that under different task arrival rates, the learned policy increases the number of completed tasks, thereby decreasing the delay violation rate while guaranteeing an acceptable level of decoding error probability. Moreover, we show that the proposed deep reinforcement learning approach outperforms the random and equal scheduling benchmarks.
AB - In this paper, we investigate strategies for the allocation of computational resources using deep reinforcement learning in mobile edge computing networks that operate with finite blocklength codes to support low latency communications. The end-to-end (E2E) reliability of the service is addressed, while both the delay violation probability and the decoding error probability are taken into account. By employing a deep reinforcement learning method, namely deep Q-learning, we design an intelligent agent at the edge computing node that develops a real-time adaptive policy for allocating computational resources to the offloaded tasks of multiple users, in order to improve the average E2E reliability. Via simulations, we show that under different task arrival rates, the learned policy increases the number of completed tasks, thereby decreasing the delay violation rate while guaranteeing an acceptable level of decoding error probability. Moreover, we show that the proposed deep reinforcement learning approach outperforms the random and equal scheduling benchmarks.
KW - Deep reinforcement learning
KW - Edge computing
KW - Finite blocklength coding
KW - Ultra-reliable low-latency communications (URLLC)
UR - http://www.scopus.com/inward/record.url?scp=85056711995&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85056711995&partnerID=8YFLogxK
U2 - 10.1109/ISWCS.2018.8491089
DO - 10.1109/ISWCS.2018.8491089
M3 - Conference contribution
AN - SCOPUS:85056711995
T3 - Proceedings of the International Symposium on Wireless Communication Systems
BT - 2018 15th International Symposium on Wireless Communication Systems, ISWCS 2018
PB - VDE Verlag GmbH
T2 - 15th International Symposium on Wireless Communication Systems, ISWCS 2018
Y2 - 28 August 2018 through 31 August 2018
ER -