TY - GEN
T1 - On Distributed Online Convex Optimization with Sublinear Dynamic Regret and Fit
AU - Sharma, Pranay
AU - Khanduri, Prashant
AU - Shen, Lixin
AU - Bucci, Donald J.
AU - Varshney, Pramod K.
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - In this work, we consider a distributed online convex optimization problem with time-varying (potentially adversarial) constraints. A set of nodes jointly aims to minimize a global objective function, which is the sum of local convex functions. The objective and constraint functions are revealed locally to the nodes at each time, after an action is taken. Naturally, the constraints cannot be satisfied instantaneously. Therefore, we reformulate the problem to satisfy these constraints in the long term. To this end, we propose a distributed primal-dual mirror descent-based algorithm, in which the primal and dual updates are carried out locally at all the nodes. This is followed by sharing and mixing of the primal variables by the local nodes via communication with their immediate neighbors. To quantify the performance of the proposed algorithm, we utilize the challenging but more realistic metrics of dynamic regret and fit. Dynamic regret measures the cumulative loss incurred by the algorithm compared to the best dynamic strategy, while fit measures the long-term cumulative constraint violation. Without assuming the restrictive Slater's condition, we show that the proposed algorithm achieves sublinear regret and fit under mild, commonly used assumptions.
AB - In this work, we consider a distributed online convex optimization problem with time-varying (potentially adversarial) constraints. A set of nodes jointly aims to minimize a global objective function, which is the sum of local convex functions. The objective and constraint functions are revealed locally to the nodes at each time, after an action is taken. Naturally, the constraints cannot be satisfied instantaneously. Therefore, we reformulate the problem to satisfy these constraints in the long term. To this end, we propose a distributed primal-dual mirror descent-based algorithm, in which the primal and dual updates are carried out locally at all the nodes. This is followed by sharing and mixing of the primal variables by the local nodes via communication with their immediate neighbors. To quantify the performance of the proposed algorithm, we utilize the challenging but more realistic metrics of dynamic regret and fit. Dynamic regret measures the cumulative loss incurred by the algorithm compared to the best dynamic strategy, while fit measures the long-term cumulative constraint violation. Without assuming the restrictive Slater's condition, we show that the proposed algorithm achieves sublinear regret and fit under mild, commonly used assumptions.
UR - http://www.scopus.com/inward/record.url?scp=85127020220&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85127020220&partnerID=8YFLogxK
U2 - 10.1109/IEEECONF53345.2021.9723285
DO - 10.1109/IEEECONF53345.2021.9723285
M3 - Conference contribution
AN - SCOPUS:85127020220
T3 - Conference Record - Asilomar Conference on Signals, Systems and Computers
SP - 1013
EP - 1017
BT - 55th Asilomar Conference on Signals, Systems and Computers, ACSSC 2021
A2 - Matthews, Michael B.
PB - IEEE Computer Society
T2 - 55th Asilomar Conference on Signals, Systems and Computers, ACSSC 2021
Y2 - 31 October 2021 through 3 November 2021
ER -