On Distributed Stochastic Gradient Descent for Nonconvex Functions in the Presence of Byzantines

Saikiran Bulusu, Prashant Khanduri, Pranay Sharma, Pramod K. Varshney

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We consider the distributed stochastic optimization problem of minimizing a nonconvex function f in an adversarial setting. All w worker nodes in the network are expected to send their stochastic gradient vectors to the fusion center (or server). However, some of the nodes (at most an α-fraction) may be Byzantines, which may send arbitrary vectors instead. A vanilla implementation of distributed stochastic gradient descent (SGD) cannot handle such misbehavior from the nodes. We propose a robust variant of distributed SGD which is resilient to the presence of Byzantines. The fusion center employs a novel filtering rule that identifies and removes the Byzantine nodes. We show that $T = \tilde{O}\left( \frac{1}{w\varepsilon^{2}} + \frac{\alpha^{2}}{\varepsilon^{2}} \right)$ iterations are needed to achieve an ε-approximate stationary point ($x$ such that $\left\| \nabla f(x) \right\|^{2} \leq \varepsilon$) for the nonconvex learning problem. Unlike other existing approaches, the proposed algorithm is independent of the problem dimension.
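The abstract describes the server-side loop (workers send stochastic gradients, the fusion center filters out suspected Byzantines, then takes an SGD step) but does not specify the paper's filtering rule. The sketch below is a hedged illustration of that loop, assuming a stand-in distance-to-median filter rather than the authors' actual rule; the function name `byzantine_robust_sgd_step`, the learning rate, and the toy objective are all hypothetical.

```python
import numpy as np

def byzantine_robust_sgd_step(x, grads, alpha, lr=0.1):
    """One server-side step of Byzantine-resilient distributed SGD.

    `grads` holds the w gradient vectors received from the workers,
    of which at most an alpha-fraction may be arbitrary (Byzantine).
    NOTE: the paper's filtering rule is not given in the abstract;
    as a stand-in, we drop the gradients farthest from the
    coordinate-wise median and average the survivors.
    """
    grads = np.asarray(grads)
    w = len(grads)
    med = np.median(grads, axis=0)               # robust reference point
    dists = np.linalg.norm(grads - med, axis=1)  # each worker's deviation
    n_keep = w - int(np.ceil(alpha * w))         # remove suspected Byzantines
    keep = np.argsort(dists)[:n_keep]
    agg = grads[keep].mean(axis=0)               # average the trusted gradients
    return x - lr * agg

# Toy run: minimize f(x) = ||x||^2 / 2 (gradient = x) with 10 workers,
# 2 of which send arbitrary large vectors.
rng = np.random.default_rng(0)
x = np.ones(3)
for _ in range(100):
    honest = [x + 0.01 * rng.standard_normal(3) for _ in range(8)]
    byz = [100.0 * rng.standard_normal(3) for _ in range(2)]
    x = byzantine_robust_sgd_step(x, honest + byz, alpha=0.2)
print(np.linalg.norm(x))  # converges near the stationary point x = 0
```

Without the filter, the two Byzantine vectors would dominate the average and the iterates would diverge; removing the farthest-from-median gradients keeps the update close to the honest mean.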

Original language: English (US)
Title of host publication: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 3137-3141
Number of pages: 5
ISBN (Electronic): 9781509066315
DOIs: https://doi.org/10.1109/ICASSP40776.2020.9052956
State: Published - May 2020
Event: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Barcelona, Spain
Duration: May 4, 2020 – May 8, 2020

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2020-May
ISSN (Print): 1520-6149

Conference

Conference: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020
Country: Spain
City: Barcelona
Period: 5/4/20 – 5/8/20

Keywords

  • Adversarial machine learning
  • Byzantines
  • Distributed optimization
  • Stochastic Gradient Descent

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering


  • Cite this

Bulusu, S., Khanduri, P., Sharma, P., & Varshney, P. K. (2020). On Distributed Stochastic Gradient Descent for Nonconvex Functions in the Presence of Byzantines. In 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings (pp. 3137-3141). [9052956] (ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings; Vol. 2020-May). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICASSP40776.2020.9052956