Hyperparameter Estimation for Sparse Bayesian Learning Models

Feng Yu, Lixin Shen, Guohui Song

Research output: Contribution to journal › Article › peer-review

Abstract

Sparse Bayesian learning (SBL) models are extensively used in signal processing and machine learning to promote sparsity through hierarchical priors. The hyperparameters in SBL models are crucial to the model's performance, yet they are often difficult to estimate due to the nonconvexity and high dimensionality of the associated objective function. This paper presents a comprehensive framework for hyperparameter estimation in SBL models, encompassing well-known algorithms such as the expectation-maximization (EM), MacKay, and convex bounding algorithms. These algorithms are cohesively interpreted within an alternating minimization and linearization (AML) paradigm, each distinguished by its own linearized surrogate function. In addition, a novel algorithm within the AML framework is introduced and shown to be more efficient, especially at low signal-to-noise ratios. Its performance is further improved by a new alternating minimization and quadratic approximation paradigm, which includes a proximal regularization term. The paper substantiates these advancements with a thorough convergence analysis and numerical experiments demonstrating the algorithm's effectiveness under various noise conditions and signal-to-noise ratios.
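To make the setting concrete, the sketch below shows the classical EM hyperparameter update for the standard SBL model, one of the algorithms the abstract places inside the AML paradigm; it is not the paper's new algorithm. It assumes the usual hierarchical Gaussian model y = Φx + n with n ~ N(0, σ²I) and x_i ~ N(0, γ_i), with a known noise variance; the function and variable names are illustrative.

```python
import numpy as np

def sbl_em(Phi, y, sigma2=0.1, n_iter=100, tol=1e-6):
    """Classical EM updates for the SBL hyperparameters gamma in the
    model y = Phi @ x + n, n ~ N(0, sigma2*I), x_i ~ N(0, gamma_i).
    (Illustrative sketch; not the paper's AML algorithm.)"""
    n, m = Phi.shape
    gamma = np.ones(m)
    mu = np.zeros(m)
    for _ in range(n_iter):
        # E-step: posterior covariance and mean of x given current gamma,
        # Sigma = (Phi^T Phi / sigma2 + diag(1/gamma))^{-1}
        Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.diag(1.0 / gamma))
        mu = Sigma @ Phi.T @ y / sigma2
        # M-step: gamma_i <- E[x_i^2] = mu_i^2 + Sigma_ii
        # (floored to keep diag(1/gamma) well defined as gamma_i -> 0)
        gamma_new = np.maximum(mu**2 + np.diag(Sigma), 1e-12)
        if np.max(np.abs(gamma_new - gamma)) < tol:
            gamma = gamma_new
            break
        gamma = gamma_new
    return gamma, mu
```

Components whose γ_i shrink toward zero are pruned from the model, which is how SBL promotes sparsity; the MacKay fixed-point variant replaces the M-step with γ_i ← μ_i² / (1 − Σ_ii/γ_i) and often converges in fewer iterations.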

Original language: English (US)
Pages (from-to): 759-787
Number of pages: 29
Journal: SIAM/ASA Journal on Uncertainty Quantification
Volume: 12
Issue number: 3
DOIs
State: Published - 2024

Keywords

  • alternating minimization
  • hyperparameter estimation
  • sparse Bayesian learning

ASJC Scopus subject areas

  • Statistics and Probability
  • Modeling and Simulation
  • Statistics, Probability and Uncertainty
  • Discrete Mathematics and Combinatorics
  • Applied Mathematics
