TY - JOUR
T1 - Robust Decentralized Learning Using ADMM With Unreliable Agents
AU - Li, Qunwei
AU - Kailkhura, Bhavya
AU - Goldhahn, Ryan
AU - Ray, Priyadip
AU - Varshney, Pramod K.
N1 - Publisher Copyright:
© 1991-2012 IEEE.
PY - 2022
Y1 - 2022
N2 - Many signal processing and machine learning problems can be formulated as consensus optimization problems that can be solved efficiently via a cooperative multi-agent system. However, the agents in the system can be unreliable due to a variety of reasons: noise, faults, and attacks. Erroneous updates lead the optimization process in the wrong direction and degrade the performance of distributed machine learning algorithms. This paper considers the problem of decentralized learning using ADMM in the presence of unreliable agents. First, we rigorously analyze the effect of erroneous updates (in ADMM learning iterations) on the convergence behavior of the multi-agent system. We show that the algorithm linearly converges to a neighborhood of the optimal solution under certain conditions and characterize the neighborhood size analytically. Next, we provide guidelines for network design to achieve faster convergence to the neighborhood. We also provide conditions on the erroneous updates for exact convergence to the optimal solution. Finally, to mitigate the influence of unreliable agents, we propose ROAD, a robust variant of ADMM, and show its resilience to unreliable agents with exact convergence to the optimum.
AB - Many signal processing and machine learning problems can be formulated as consensus optimization problems that can be solved efficiently via a cooperative multi-agent system. However, the agents in the system can be unreliable due to a variety of reasons: noise, faults, and attacks. Erroneous updates lead the optimization process in the wrong direction and degrade the performance of distributed machine learning algorithms. This paper considers the problem of decentralized learning using ADMM in the presence of unreliable agents. First, we rigorously analyze the effect of erroneous updates (in ADMM learning iterations) on the convergence behavior of the multi-agent system. We show that the algorithm linearly converges to a neighborhood of the optimal solution under certain conditions and characterize the neighborhood size analytically. Next, we provide guidelines for network design to achieve faster convergence to the neighborhood. We also provide conditions on the erroneous updates for exact convergence to the optimal solution. Finally, to mitigate the influence of unreliable agents, we propose ROAD, a robust variant of ADMM, and show its resilience to unreliable agents with exact convergence to the optimum.
KW - ADMM
KW - Decentralized learning
KW - unreliable
UR - http://www.scopus.com/inward/record.url?scp=85131741190&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85131741190&partnerID=8YFLogxK
U2 - 10.1109/TSP.2022.3178655
DO - 10.1109/TSP.2022.3178655
M3 - Article
AN - SCOPUS:85131741190
SN - 1053-587X
VL - 70
SP - 2743
EP - 2757
JO - IEEE Transactions on Signal Processing
JF - IEEE Transactions on Signal Processing
ER -