Abstract
While Deep Reinforcement Learning (DRL) has emerged as a de facto approach to many complex experience-driven networking problems, deploying it in real systems remains challenging. Because of random exploration and half-trained deep neural networks during online training, a DRL agent may make unexpected decisions that degrade system performance or even crash the system. In this paper, we propose PnP-DRL, an offline-trained, plug-and-play DRL solution that leverages batch reinforcement learning to learn the best control policy from pre-collected transition samples, without interacting with the system. Once trained, the PnP-DRL agent can be deployed seamlessly, with no additional exploration and no risk of disrupting the running system. We implement and evaluate PnP-DRL on a prevalent experience-driven networking problem, Dynamic Adaptive Streaming over HTTP (DASH). Extensive experimental results demonstrate that 1) existing batch reinforcement learning methods have their limits; 2) PnP-DRL significantly outperforms classical adaptive bitrate algorithms in average user Quality of Experience (QoE); and 3) unlike state-of-the-art online DRL methods, PnP-DRL is ready to run without a learning gap, while achieving comparable performance.
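To make the batch (offline) RL setting concrete, the sketch below trains a Q-network purely from a fixed set of pre-collected transitions, never querying a live environment, which is the deployment property the abstract describes. This is a minimal illustration only: the network shape, hyperparameters, and the fitted Q-learning update are generic assumptions, not the paper's actual PnP-DRL architecture or algorithm.

```python
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Small MLP mapping a state (e.g., buffer level, throughput history)
    to one Q-value per candidate bitrate. Sizes are illustrative."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, s):
        return self.net(s)

def train_offline(transitions, state_dim, n_actions, gamma=0.99, epochs=100):
    """Fitted Q-learning over a fixed dataset of (s, a, r, s', done) tensors.
    Note: no environment is ever stepped -- learning uses only logged data."""
    q = QNet(state_dim, n_actions)
    target_q = QNet(state_dim, n_actions)
    target_q.load_state_dict(q.state_dict())
    opt = torch.optim.Adam(q.parameters(), lr=1e-3)
    s, a, r, s2, done = transitions  # a: long tensor of action indices
    for epoch in range(epochs):
        with torch.no_grad():
            # Bootstrapped target from a frozen network; zero for terminal states.
            target = r + gamma * (1 - done) * target_q(s2).max(dim=1).values
        pred = q(s).gather(1, a.unsqueeze(1)).squeeze(1)
        loss = nn.functional.mse_loss(pred, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        if epoch % 10 == 0:
            target_q.load_state_dict(q.state_dict())  # periodic target sync
    return q  # deploy "plug and play": act greedily via argmax over Q-values
```

A caveat worth noting: naive fitted Q-learning like this can overestimate the value of actions poorly covered by the logged dataset (extrapolation error), which is the kind of limitation of existing batch RL methods that the abstract alludes to in point 1.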
| Field | Value |
|---|---|
| Original language | English (US) |
| Article number | 9454317 |
| Pages (from-to) | 2476-2486 |
| Number of pages | 11 |
| Journal | IEEE Journal on Selected Areas in Communications |
| Volume | 39 |
| Issue number | 8 |
| DOIs | |
| State | Published - Aug 2021 |
Keywords
- Experience-driven networking
- batch reinforcement learning
- deep reinforcement learning
ASJC Scopus subject areas
- Computer Networks and Communications
- Electrical and Electronic Engineering