A simple convergence analysis of Bregman proximal gradient algorithm

Yi Zhou, Yingbin Liang, Lixin Shen

Research output: Contribution to journal › Article

Abstract

In this paper, we provide a simple convergence analysis of the proximal gradient algorithm with Bregman distance, which yields a tighter bound than existing results. In particular, for the problem of minimizing a class of convex objective functions, we show that the proximal gradient algorithm with Bregman distance can be viewed as a proximal point algorithm that incorporates another Bregman distance. Consequently, the convergence result for the proximal gradient algorithm with Bregman distance follows directly from that of the proximal point algorithm with Bregman distance, and this leads to a simpler convergence analysis with a tighter convergence bound than existing ones. We further propose and analyze a backtracking line-search variant of the proximal gradient algorithm with Bregman distance.
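
For context, here is a minimal sketch of the update in question, written in generic notation that may not match the paper's (the composite objective $f+g$, kernel $h$, step size $\lambda_k$, and Bregman distance $D_h$ below are illustrative assumptions). The Bregman proximal gradient step for minimizing $f(x)+g(x)$, with $f$ smooth and $g$ convex, is

\[
x_{k+1} = \arg\min_x \Big\{ g(x) + \langle \nabla f(x_k),\, x - x_k \rangle + \tfrac{1}{\lambda_k} D_h(x, x_k) \Big\},
\qquad
D_h(x, y) = h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle .
\]

Because $D_{\phi_k}(x, x_k) = \tfrac{1}{\lambda_k} D_h(x, x_k) - f(x) + f(x_k) + \langle \nabla f(x_k),\, x - x_k \rangle$ for the kernel $\phi_k = \tfrac{1}{\lambda_k} h - f$, the same iterate can equivalently be written as a Bregman proximal point step,

\[
x_{k+1} = \arg\min_x \big\{ f(x) + g(x) + D_{\phi_k}(x, x_k) \big\},
\]

provided $\phi_k$ is convex (e.g., when $f$ is smooth relative to $h$ and $\lambda_k$ is small enough). This is only an illustrative identity in generic notation; the paper's precise assumptions, step-size rules, and bounds are given in the article itself.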

Original language: English (US)
Journal: Computational Optimization and Applications
DOI: 10.1007/s10589-019-00092-y
State: Published - Jan 1 2019

Fingerprint

  • Proximal Algorithm
  • Bregman Distance
  • Gradient Algorithm
  • Convergence Analysis
  • Proximal Point Algorithm
  • Backtracking
  • Line Search
  • Convergence Results
  • Convex function
  • Objective function

Keywords

  • Bregman distance
  • Convergence analysis
  • Line-search
  • Proximal algorithms

ASJC Scopus subject areas

  • Control and Optimization
  • Computational Mathematics
  • Applied Mathematics

Cite this

A simple convergence analysis of Bregman proximal gradient algorithm. / Zhou, Yi; Liang, Yingbin; Shen, Lixin.

In: Computational Optimization and Applications, 01.01.2019.

Research output: Contribution to journal › Article

@article{20ab773d23034a28a44f3769f20e5acf,
title = "A simple convergence analysis of Bregman proximal gradient algorithm",
abstract = "In this paper, we provide a simple convergence analysis of the proximal gradient algorithm with Bregman distance, which yields a tighter bound than existing results. In particular, for the problem of minimizing a class of convex objective functions, we show that the proximal gradient algorithm with Bregman distance can be viewed as a proximal point algorithm that incorporates another Bregman distance. Consequently, the convergence result for the proximal gradient algorithm with Bregman distance follows directly from that of the proximal point algorithm with Bregman distance, and this leads to a simpler convergence analysis with a tighter convergence bound than existing ones. We further propose and analyze a backtracking line-search variant of the proximal gradient algorithm with Bregman distance.",
keywords = "Bregman distance, Convergence analysis, Line-search, Proximal algorithms",
author = "Yi Zhou and Yingbin Liang and Lixin Shen",
year = "2019",
month = "1",
day = "1",
doi = "10.1007/s10589-019-00092-y",
language = "English (US)",
journal = "Computational Optimization and Applications",
issn = "0926-6003",
publisher = "Springer Netherlands",

}

TY - JOUR

T1 - A simple convergence analysis of Bregman proximal gradient algorithm

AU - Zhou, Yi

AU - Liang, Yingbin

AU - Shen, Lixin

PY - 2019/1/1

Y1 - 2019/1/1

N2 - In this paper, we provide a simple convergence analysis of the proximal gradient algorithm with Bregman distance, which yields a tighter bound than existing results. In particular, for the problem of minimizing a class of convex objective functions, we show that the proximal gradient algorithm with Bregman distance can be viewed as a proximal point algorithm that incorporates another Bregman distance. Consequently, the convergence result for the proximal gradient algorithm with Bregman distance follows directly from that of the proximal point algorithm with Bregman distance, and this leads to a simpler convergence analysis with a tighter convergence bound than existing ones. We further propose and analyze a backtracking line-search variant of the proximal gradient algorithm with Bregman distance.

AB - In this paper, we provide a simple convergence analysis of the proximal gradient algorithm with Bregman distance, which yields a tighter bound than existing results. In particular, for the problem of minimizing a class of convex objective functions, we show that the proximal gradient algorithm with Bregman distance can be viewed as a proximal point algorithm that incorporates another Bregman distance. Consequently, the convergence result for the proximal gradient algorithm with Bregman distance follows directly from that of the proximal point algorithm with Bregman distance, and this leads to a simpler convergence analysis with a tighter convergence bound than existing ones. We further propose and analyze a backtracking line-search variant of the proximal gradient algorithm with Bregman distance.

KW - Bregman distance

KW - Convergence analysis

KW - Line-search

KW - Proximal algorithms

UR - http://www.scopus.com/inward/record.url?scp=85064431245&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85064431245&partnerID=8YFLogxK

U2 - 10.1007/s10589-019-00092-y

DO - 10.1007/s10589-019-00092-y

M3 - Article

JO - Computational Optimization and Applications

JF - Computational Optimization and Applications

SN - 0926-6003

ER -