Experience-driven Networking: A Deep Reinforcement Learning based Approach

Zhiyuan Xu, Jian Tang, Jingsong Meng, Weiyi Zhang, Yanzhi Wang, Chi Harold Liu, Dejun Yang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

63 Scopus citations

Abstract

Modern communication networks have become very complicated and highly dynamic, which makes them hard to model, predict, and control. In this paper, we develop a novel experience-driven approach that can learn to control a communication network well from its own experience rather than from an accurate mathematical model, just as a human learns a new skill (such as driving or swimming). Specifically, we propose, for the first time, to leverage emerging Deep Reinforcement Learning (DRL) to enable model-free control in communication networks, and present a novel and highly effective DRL-based control framework, DRL-TE, for a fundamental networking problem: Traffic Engineering (TE). The proposed framework maximizes a widely-used utility function by jointly learning the network environment and its dynamics, and by making decisions under the guidance of powerful Deep Neural Networks (DNNs). We propose two new techniques, TE-aware exploration and actor-critic-based prioritized experience replay, to optimize the general DRL framework particularly for TE. To validate and evaluate the proposed framework, we implemented it in ns-3 and tested it comprehensively with both representative and randomly generated network topologies. Extensive packet-level simulation results show that 1) compared to several widely-used baseline methods, DRL-TE significantly reduces end-to-end delay and consistently improves the network utility, while offering better or comparable throughput; 2) DRL-TE is robust to network changes; and 3) DRL-TE consistently outperforms a state-of-the-art DRL method for continuous control, Deep Deterministic Policy Gradient (DDPG), which fails to deliver satisfactory performance on this problem.
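The actor-critic-based prioritized experience replay mentioned in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's implementation: the class and parameter names (`PrioritizedReplayBuffer`, `alpha`, `eps`) are assumptions, and the prioritization follows the standard proportional scheme in which transitions are weighted by the magnitude of the critic's TD error so that more informative samples are replayed more often.

```python
import random


class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay.

    In an actor-critic setting, each transition is prioritized by the
    magnitude of the critic's TD error, so samples the critic predicts
    poorly are drawn more often during training.
    """

    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha      # how strongly priorities skew sampling
        self.eps = eps          # keeps every priority strictly positive
        self.buffer = []        # stored transitions
        self.priorities = []    # one priority per transition
        self.pos = 0            # next write index (ring buffer)

    def add(self, transition, td_error):
        # Priority grows with the critic's TD error on this transition.
        priority = (abs(td_error) + self.eps) ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(priority)
        else:
            # Overwrite the oldest entry once the buffer is full.
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Draw indices with probability proportional to priority.
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idxs = random.choices(range(len(self.buffer)),
                              weights=probs, k=batch_size)
        return idxs, [self.buffer[i] for i in idxs]

    def update_priorities(self, idxs, td_errors):
        # After a training step, refresh priorities with the critic's
        # new TD errors for the sampled transitions.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = (abs(err) + self.eps) ** self.alpha
```

In a DDPG-style training loop, the critic's TD error for each sampled transition would be fed back through `update_priorities` after every gradient step, so the sampling distribution tracks the critic's current weaknesses.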

Original language: English (US)
Title of host publication: INFOCOM 2018 - IEEE Conference on Computer Communications
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1871-1879
Number of pages: 9
Volume: 2018-April
ISBN (Electronic): 9781538641286
DOI: 10.1109/INFOCOM.2018.8485853
State: Published - Oct 8 2018
Event: 2018 IEEE Conference on Computer Communications, INFOCOM 2018 - Honolulu, United States
Duration: Apr 15 2018 - Apr 19 2018

Other

Other: 2018 IEEE Conference on Computer Communications, INFOCOM 2018
Country: United States
City: Honolulu
Period: 4/15/18 - 4/19/18

Keywords

  • Deep Reinforcement Learning
  • Experience-driven Networking
  • Traffic Engineering

ASJC Scopus subject areas

  • Computer Science(all)
  • Electrical and Electronic Engineering


  • Cite this

Xu, Z., Tang, J., Meng, J., Zhang, W., Wang, Y., Liu, C. H., & Yang, D. (2018). Experience-driven Networking: A Deep Reinforcement Learning based Approach. In INFOCOM 2018 - IEEE Conference on Computer Communications (Vol. 2018-April, pp. 1871-1879). [8485853] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/INFOCOM.2018.8485853