Abstract
Experience-driven networking has emerged as a new and highly effective approach for resource allocation in complex communication networks. Deep Reinforcement Learning (DRL) has been shown to be a useful technique for enabling experience-driven networking. In this paper, we focus on a practical and fundamental problem for experience-driven networking: when network configurations change, how to train a new DRL agent to adapt effectively and quickly to the new environment. We present an Actor-Critic-based Transfer learning framework for the Traffic Engineering (TE) problem based on policy distillation, which we call ACT-TE. ACT-TE effectively and quickly trains a new DRL agent to solve the TE problem in a new network environment, using both old knowledge (i.e., distilled from the existing agent) and new experience (i.e., newly collected samples). We implement ACT-TE in ns-3 and compare it with commonly used baselines via packet-level simulations on three representative network topologies: NSFNET, ARPANET and a random topology. The extensive simulation results show that 1) existing well-trained DRL agents do not work well in new network environments, and 2) ACT-TE significantly outperforms two straightforward methods (training from scratch and fine-tuning an existing DRL agent) as well as several widely used traditional methods in terms of network utility, throughput and delay.
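The abstract describes combining old knowledge (distilled from an existing agent) with new experience (freshly collected samples). The sketch below is only an illustrative, minimal example of how such a combination is commonly expressed for an actor-critic agent: a policy-gradient and value loss on new samples plus a KL distillation term toward the old agent's policy. All class and function names, network sizes, and the loss weighting here are assumptions for illustration, not the paper's actual ACT-TE implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActorCritic(nn.Module):
    """A small actor-critic network (hypothetical architecture)."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.actor = nn.Linear(hidden, action_dim)   # policy logits
        self.critic = nn.Linear(hidden, 1)           # state value

    def forward(self, state):
        h = self.shared(state)
        return self.actor(h), self.critic(h)

def transfer_loss(student, teacher, states, actions, returns,
                  distill_weight=0.5):
    """Blend a standard actor-critic loss on new experience with a KL
    distillation term toward the existing (teacher) agent's policy.
    The 0.5 value-loss coefficient and distill_weight are illustrative."""
    logits, values = student(states)
    with torch.no_grad():
        teacher_logits, _ = teacher(states)

    log_probs = F.log_softmax(logits, dim=-1)
    advantages = returns - values.squeeze(-1).detach()

    # New-experience terms: policy gradient + value regression.
    pg_loss = -(log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
                * advantages).mean()
    value_loss = F.mse_loss(values.squeeze(-1), returns)

    # Old-knowledge term: distill the existing agent's policy into the student.
    distill_loss = F.kl_div(log_probs, F.softmax(teacher_logits, dim=-1),
                            reduction="batchmean")

    return pg_loss + 0.5 * value_loss + distill_weight * distill_loss
```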
| Original language | English (US) |
| --- | --- |
| Article number | 9274515 |
| Pages (from-to) | 360-371 |
| Number of pages | 12 |
| Journal | IEEE/ACM Transactions on Networking |
| Volume | 29 |
| Issue number | 1 |
| DOIs | |
| State | Published - Feb 2021 |
Keywords
- Experience-driven networking
- deep reinforcement learning
- transfer learning
ASJC Scopus subject areas
- Software
- Computer Science Applications
- Computer Networks and Communications
- Electrical and Electronic Engineering