Deep Reinforcement Learning Based Edge Caching in Wireless Networks

Research output: Contribution to journal › Article

Abstract

To offload data traffic in wireless networks, content caching techniques have recently been studied intensively. By caching a portion of the popular files at local content servers, these techniques allow users to be served with lower delay. Most content replacement policies are based on content popularity, which depends on the users’ preferences and, in practice, varies over time. Therefore, an approach to determine the file popularity patterns must be incorporated into caching policies. In this context, we study content caching at the wireless network edge using a deep reinforcement learning framework with Wolpertinger architecture. In particular, we propose deep actor-critic reinforcement learning based policies for both centralized and decentralized content caching. For centralized edge caching, we aim at maximizing the cache hit rate. For decentralized edge caching, we consider both the cache hit rate and transmission delay as performance metrics. The proposed frameworks are assumed neither to have any prior information on the file popularities nor to know the potential variations in such information. Via simulation results, the superiority of the proposed frameworks is verified by comparison with other policies, including least frequently used (LFU), least recently used (LRU), and first-in first-out (FIFO) policies.
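The LRU baseline named in the abstract, together with the cache hit rate metric, can be sketched as follows. This is a minimal illustrative sketch, not the paper's evaluation code; the cache capacity and request trace below are made-up assumptions.

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used replacement baseline: on a miss with a
    full cache, evict the file whose last request is oldest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # keys ordered oldest -> newest
        self.hits = 0
        self.requests = 0

    def request(self, file_id):
        """Serve one file request; return True on a cache hit."""
        self.requests += 1
        if file_id in self.store:
            self.store.move_to_end(file_id)  # mark as most recently used
            self.hits += 1
            return True
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)   # evict least recently used
        self.store[file_id] = True
        return False

    def hit_rate(self):
        # Cache hit rate: fraction of requests served from the cache.
        return self.hits / self.requests if self.requests else 0.0

# Illustrative request trace (hypothetical file IDs).
cache = LRUCache(capacity=2)
for f in ["a", "b", "a", "c", "b"]:
    cache.request(f)
print(cache.hit_rate())  # -> 0.2 (1 hit out of 5 requests)
```

A learning-based policy such as the proposed actor-critic framework replaces the fixed eviction rule here with a decision learned from the observed request stream, which is why it can adapt when file popularities drift over time.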

Original language: English (US)
Journal: IEEE Transactions on Cognitive Communications and Networking
DOIs
State: Accepted/In press - Jan 1 2020

ASJC Scopus subject areas

  • Hardware and Architecture
  • Computer Networks and Communications
  • Artificial Intelligence

