Actor-critic deep reinforcement learning for dynamic multichannel access

Chen Zhong, Ziyang Lu, M. Cenk Gursoy, Senem Velipasalar

Research output: Chapter in Book/Entry/Poem › Conference contribution

28 Scopus citations

Abstract

We consider the dynamic multichannel access problem, which can be formulated as a partially observable Markov decision process (POMDP). We first propose a model-free actor-critic deep reinforcement learning based framework to learn the sensing policy. To evaluate the performance of the proposed sensing policy and the framework's tolerance of uncertainty, we test the framework in scenarios with different channel switching patterns and different switching probabilities. Then, we consider a time-varying environment to assess the adaptive ability of the proposed framework. Additionally, we provide comparisons with the deep Q-network (DQN) based framework proposed in [1], in terms of both average reward and time efficiency.
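The actor-critic approach summarized above can be illustrated on a toy version of the problem. The sketch below is not the paper's implementation: the two-channel deterministic switching pattern, the tabular state built from the last (action, reward) pair, and all learning rates are illustrative assumptions, and the paper uses deep networks rather than tables. It does, however, show the core actor-critic loop: a critic learns state values, its TD error scores each sensing decision, and the actor adjusts softmax channel preferences accordingly.

```python
import math
import random

random.seed(0)

N_CHANNELS = 2

# Toy channel model (assumption, not from the paper): the "good" channel
# alternates deterministically every time slot, so a sensing policy must
# track the switching pattern to keep earning reward.
def good_channel(t):
    return t % N_CHANNELS

# Tabular actor-critic. The agent only observes the channel it sensed
# (partial observability), so its state is the previous (action, reward).
h = {}  # actor: softmax preferences h[state][action]
v = {}  # critic: state values v[state]
alpha_actor, alpha_critic, gamma = 0.1, 0.2, 0.9

def softmax(prefs):
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    z = sum(exps)
    return [e / z for e in exps]

def select(state):
    """Sample a channel from the actor's softmax policy."""
    probs = softmax(h.setdefault(state, [0.0] * N_CHANNELS))
    r, cum = random.random(), 0.0
    for a, p in enumerate(probs):
        cum += p
        if r < cum:
            return a, probs
    return N_CHANNELS - 1, probs

state, total, steps = (0, 0), 0, 5000
for t in range(steps):
    a, probs = select(state)
    reward = 1 if a == good_channel(t) else 0
    next_state = (a, reward)
    # TD error from the critic drives both updates.
    td = reward + gamma * v.get(next_state, 0.0) - v.get(state, 0.0)
    v[state] = v.get(state, 0.0) + alpha_critic * td
    prefs = h[state]
    for b in range(N_CHANNELS):
        # Policy-gradient update for a softmax actor.
        grad = (1.0 if b == a else 0.0) - probs[b]
        prefs[b] += alpha_actor * td * grad
    state = next_state
    total += reward

print(f"average reward over {steps} slots: {total / steps:.2f}")
```

With this deterministic switching pattern the (last action, last reward) state fully determines the next good channel, so the learned policy's average reward approaches 1; the paper's setting with probabilistic switching and more channels is what motivates deep function approximation instead of tables.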

Original language: English (US)
Title of host publication: 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 599-603
Number of pages: 5
ISBN (Electronic): 9781728112954
DOIs
State: Published - Jul 2 2018
Event: 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018 - Anaheim, United States
Duration: Nov 26 2018 - Nov 29 2018

Publication series

Name: 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018 - Proceedings

Conference

Conference: 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018
Country/Territory: United States
City: Anaheim
Period: 11/26/18 - 11/29/18

Keywords

  • Actor-critic
  • Channel selection
  • Deep reinforcement learning
  • POMDP

ASJC Scopus subject areas

  • Information Systems
  • Signal Processing
