Deep actor-critic reinforcement learning for anomaly detection

Research output: Contribution to journal › Conference Article › peer-review

30 Scopus citations

Abstract

Anomaly detection is widely applied across a variety of domains, including, for example, smart home systems, network traffic monitoring, IoT applications, and sensor networks. In this paper, we study deep reinforcement learning based active sequential testing for anomaly detection. We assume that an unknown number of processes are abnormal at any given time and that the agent can probe only one sensor in each sampling step. To simultaneously maximize the confidence of the decision and minimize the stopping time, we propose a deep actor-critic reinforcement learning framework that dynamically selects the sensor to probe based on the posterior probabilities. We provide simulation results for both the training and testing phases, and compare the proposed framework with the Chernoff test in terms of claim delay and loss.
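To make the setup concrete, the following is a minimal sketch of the active-testing loop the abstract describes, under simplifying assumptions that are not from the paper: exactly one of `K` processes is abnormal, each sensor returns a binary alarm with assumed hit/false-alarm rates `P_ANOM`/`P_NORM`, and the actor and critic are linear functions of the posterior rather than deep networks. All names, sizes, rates, and the reward shaping below are illustrative choices, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes and observation model (assumptions, not from the paper).
K = 3          # number of processes; one sensor per process
P_ANOM = 0.8   # P(alarm reading | probed process is abnormal)
P_NORM = 0.2   # P(alarm reading | probed process is normal)
THRESH = 0.95  # posterior confidence required before declaring the anomaly
GAMMA, ALPHA, BETA = 0.99, 0.05, 0.05

def bayes_update(post, sensor, obs):
    """Posterior over 'which single process is abnormal' after probing one sensor."""
    hit = P_ANOM if obs else 1.0 - P_ANOM
    miss = P_NORM if obs else 1.0 - P_NORM
    like = np.full(K, miss)     # likelihood under 'abnormal process != probed sensor'
    like[sensor] = hit          # likelihood under 'abnormal process == probed sensor'
    post = post * like
    return post / post.sum()

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# Linear stand-ins for the deep actor (sensor-selection policy) and critic,
# both reading the posterior probability vector as the state.
theta = np.zeros((K, K))   # actor: logits = theta @ posterior
w = np.zeros(K)            # critic: V(posterior) = w @ posterior

def run_episode(learn=True, max_steps=100):
    truth = rng.integers(K)              # unknown abnormal process
    post = np.full(K, 1.0 / K)
    for t in range(max_steps):
        pi = softmax(theta @ post)
        a = rng.choice(K, p=pi)          # probe one sensor this sampling step
        obs = rng.random() < (P_ANOM if a == truth else P_NORM)
        nxt = bayes_update(post, a, obs)
        done = nxt.max() >= THRESH
        # Reward trades off stopping time (-1 per step) against a correct claim.
        r = -1.0 + (10.0 if done and nxt.argmax() == truth else 0.0)
        if learn:
            delta = r + (0.0 if done else GAMMA * (w @ nxt)) - (w @ post)
            w[:] = w + ALPHA * delta * post
            # Policy-gradient step: grad log pi(a) for a softmax-linear actor.
            theta[:] = theta + BETA * delta * np.outer(np.eye(K)[a] - pi, post)
        post = nxt
        if done:
            return t + 1, int(post.argmax() == truth)
    return max_steps, 0

for _ in range(300):                     # short illustrative training run
    run_episode()
```

The key design point mirrored from the abstract is that the policy consumes the posterior probabilities, so the agent can learn to probe the sensors that most quickly concentrate belief, rather than following a fixed schedule as in the Chernoff test baseline.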

Original language: English (US)
Article number: 9013223
Journal: Proceedings - IEEE Global Communications Conference, GLOBECOM
State: Published - 2019
Event: 2019 IEEE Global Communications Conference, GLOBECOM 2019 - Waikoloa, United States
Duration: Dec 9, 2019 - Dec 13, 2019

Keywords

  • Actor-critic framework
  • Anomaly detection
  • Deep reinforcement learning

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Networks and Communications
  • Hardware and Architecture
  • Signal Processing
