Robust Decentralized Learning Using ADMM With Unreliable Agents

Qunwei Li, Bhavya Kailkhura, Ryan Goldhahn, Priyadip Ray, Pramod K. Varshney

Research output: Contribution to journal › Article › peer-review

2 Scopus citations


Many signal processing and machine learning problems can be formulated as consensus optimization problems, which can be solved efficiently via a cooperative multi-agent system. However, the agents in the system can be unreliable for a variety of reasons, including noise, faults, and attacks. Such agents provide erroneous updates that steer the optimization process in the wrong direction and degrade the performance of distributed machine learning algorithms. This paper considers the problem of decentralized learning using ADMM in the presence of unreliable agents. First, we rigorously analyze the effect of erroneous updates (in ADMM learning iterations) on the convergence behavior of the multi-agent system. We show that the algorithm linearly converges to a neighborhood of the optimal solution under certain conditions and characterize the neighborhood size analytically. Next, we provide guidelines for network design to achieve faster convergence to the neighborhood. We also provide conditions on the erroneous updates under which exact convergence to the optimal solution is attained. Finally, to mitigate the influence of unreliable agents, we propose ROAD, a robust variant of ADMM, and show its resilience to unreliable agents with exact convergence to the optimum.
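The phenomenon the abstract describes — erroneous updates pulling the iterates toward a neighborhood of the optimum rather than the optimum itself — can be illustrated with a minimal sketch. The code below uses standard global-consensus ADMM on scalar quadratic costs f_i(x) = ½(x − a_i)², whose consensus optimum is mean(a); it is not the paper's decentralized edge-based algorithm or ROAD, and the `fault` parameter (an agent index plus a constant bias injected into that agent's primal update) is an assumed, simplified model of an unreliable agent.

```python
def consensus_admm(a, rho=1.0, iters=200, fault=None):
    """Consensus ADMM for min_x sum_i 0.5*(x - a_i)**2; optimum is mean(a).

    fault=(j, eps) makes agent j add a constant error eps to its x-update,
    a toy model of an unreliable agent (assumption, not the paper's model).
    """
    n = len(a)
    x = [0.0] * n          # local primal variables
    u = [0.0] * n          # scaled dual variables
    z = 0.0                # consensus variable
    for _ in range(iters):
        for i in range(n):
            # Closed-form x-update: argmin 0.5*(x - a_i)^2 + (rho/2)*(x - z + u_i)^2
            x[i] = (a[i] + rho * (z - u[i])) / (1.0 + rho)
            if fault is not None and fault[0] == i:
                x[i] += fault[1]   # erroneous update from the unreliable agent
        z = sum(x[i] + u[i] for i in range(n)) / n   # consensus (z) update
        for i in range(n):
            u[i] += x[i] - z                         # dual ascent step
    return z


a = [1.0, 2.0, 3.0, 6.0]
print(consensus_admm(a))                   # converges to mean(a) = 3.0
print(consensus_admm(a, fault=(0, 0.5)))   # settles near, but away from, 3.0
```

With all agents reliable, the iterates converge linearly to the consensus optimum; with one biased agent, they settle at a shifted fixed point — a concrete instance of convergence to a "neighborhood of the optimal solution."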

Original language: English (US)
Pages (from-to): 2743-2757
Number of pages: 15
Journal: IEEE Transactions on Signal Processing
State: Published - 2022


Keywords

  • ADMM
  • Decentralized learning
  • unreliable

ASJC Scopus subject areas

  • Signal Processing
  • Electrical and Electronic Engineering


