Detecting Adversarial Images via Texture Analysis

Weiheng Chai, Senem Velipasalar

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Scopus citations


Neural networks have been shown to be vulnerable to carefully crafted adversarial examples. Recently, new adversarial attacks, including dispersion reduction (DR), have been proposed and shown to be transferable across different computer vision tasks. This means that an ensemble of different defense/detection mechanisms can be evaded all at once. Unlike previous attack methods, the DR attack minimizes the dispersion of an internal feature map, providing state-of-the-art results. In this paper, we propose an algorithm to detect the adversarial examples generated by different adversarial attacks, including dispersion reduction, projected gradient descent, the diverse inputs method, and the momentum iterative fast gradient sign method. Our approach employs 1D Gabor filter responses, and detects adversarial examples generated from different surrogate neural network models and datasets with high accuracy.
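The abstract's detection approach rests on 1D Gabor filter responses as texture features. The paper's exact pipeline (kernel parameters, feature statistics, classifier) is not given here, so the following is only a minimal sketch of what computing 1D Gabor responses over an image could look like; the kernel parameters and the mean/std summary are illustrative assumptions, not the authors' method.

```python
import numpy as np

def gabor_1d(length=31, sigma=4.0, freq=0.25, phase=0.0):
    """Build a 1D Gabor kernel: a Gaussian-windowed sinusoid.
    Parameter values here are illustrative, not from the paper."""
    x = np.arange(length) - length // 2
    return np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * x + phase)

def gabor_texture_features(image, **kwargs):
    """Convolve each row of a grayscale image with the 1D Gabor kernel
    and summarize the responses with simple statistics (mean, std).
    A downstream detector could threshold or classify such features."""
    kernel = gabor_1d(**kwargs)
    rows = np.atleast_2d(image).astype(float)
    responses = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, rows)
    return np.array([responses.mean(), responses.std()])
```

A clean image and its adversarially perturbed counterpart would yield different feature vectors from `gabor_texture_features`, which is the kind of statistic a detector can be trained on.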

Original language: English (US)
Title of host publication: Conference Record of the 54th Asilomar Conference on Signals, Systems and Computers, ACSSC 2020
Editors: Michael B. Matthews
Publisher: IEEE Computer Society
Number of pages: 5
ISBN (Electronic): 9780738131269
State: Published - Nov 1 2020
Externally published: Yes
Event: 54th Asilomar Conference on Signals, Systems and Computers, ACSSC 2020 - Pacific Grove, United States
Duration: Nov 1 2020 - Nov 5 2020

Publication series

Name: Conference Record - Asilomar Conference on Signals, Systems and Computers
ISSN (Print): 1058-6393


Conference: 54th Asilomar Conference on Signals, Systems and Computers, ACSSC 2020
Country/Territory: United States
City: Pacific Grove

ASJC Scopus subject areas

  • Signal Processing
  • Computer Networks and Communications


