Dynamic channel access and power control via deep reinforcement learning

Ziyang Lu, M. Cenk Gursoy

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

14 Scopus citations


Efficient use of spectral and energy resources is critical in wireless networks and has been extensively studied in recent years. In particular, dynamic spectrum access and power control have been addressed primarily via optimization and game-theoretic tools. In this paper, motivated by recent advances in machine learning and, more specifically, the success of reinforcement learning in addressing dynamic control problems, we consider deep reinforcement learning to jointly perform dynamic channel access and power control in wireless interference channels. We propose a deep Q-learning model, develop an algorithm, and evaluate its performance under different utilities and reward mechanisms. We compare against optimal centralized strategies that require complete channel information, which employ weighted minimum mean square error (WMMSE) based power control and exhaustive search over all channel selection policies. We highlight the performance improvements achieved with power control.
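The joint channel-access and power-control decision described in the abstract can be illustrated with a simplified sketch. The paper proposes a deep Q-learning model; the snippet below substitutes a tabular Q-learning agent over a joint (channel, power level) action space as a lightweight stand-in. All environment details (state count, reward shape, hyperparameters) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical problem sizes (not from the paper)
N_CHANNELS = 4          # selectable channels
N_POWER_LEVELS = 3      # discrete transmit power levels
N_ACTIONS = N_CHANNELS * N_POWER_LEVELS   # joint action space
N_STATES = 16           # toy quantization of the observed environment
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1         # learning rate, discount, exploration

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))

def decode(action):
    """Map a joint action index back to (channel, power level)."""
    return action // N_POWER_LEVELS, action % N_POWER_LEVELS

def step(state, action):
    """Toy environment: the reward is a placeholder for a rate-based utility."""
    channel, power = decode(action)
    reward = rng.random() * (1 + power) / (1 + channel)  # illustrative only
    next_state = int(rng.integers(N_STATES))
    return reward, next_state

state = 0
for _ in range(1000):
    # epsilon-greedy selection over the joint (channel, power) action
    if rng.random() < EPS:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(np.argmax(Q[state]))
    reward, next_state = step(state, action)
    # standard Q-learning update toward the bootstrapped target
    Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max()
                                 - Q[state, action])
    state = next_state
```

In the paper's setting, the table `Q` would be replaced by a deep neural network mapping observations to Q-values over the joint action set, which is what allows the method to scale beyond small quantized state spaces.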

Original language: English (US)
Title of host publication: 2019 IEEE 90th Vehicular Technology Conference, VTC 2019 Fall - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781728112206
State: Published - Sep 2019
Event: 90th IEEE Vehicular Technology Conference, VTC 2019 Fall - Honolulu, United States
Duration: Sep 22, 2019 - Sep 25, 2019

Publication series

Name: IEEE Vehicular Technology Conference
ISSN (Print): 1550-2252


Conference: 90th IEEE Vehicular Technology Conference, VTC 2019 Fall
Country/Territory: United States

ASJC Scopus subject areas

  • Computer Science Applications
  • Electrical and Electronic Engineering
  • Applied Mathematics


