K-mappings and Regression trees

Yi Wang, Arthur Szlam

Research output: Chapter in Book/Entry/Poem › Conference contribution

Abstract

We describe a method for learning a piecewise affine approximation to a mapping f : R^d → R^p given a labeled training set of examples {x_1, ..., x_n} = X ⊂ R^d and targets {y_1 = f(x_1), ..., y_n = f(x_n)} = Y ⊂ R^p. The method first trains a binary subdivision tree that splits across hyperplanes in X corresponding to high-variance directions in Y. A fixed number K of affine regressors of rank q are then trained via a K-means-like iterative algorithm, where each leaf must vote on its best-fit mapping, and each mapping is updated as the best fit for the collection of leaves that chose it.
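
The abstract outlines a two-stage procedure: grow a tree whose splits are hyperplanes in X chosen to track high-variance directions in Y, then alternate between leaves voting for their best-fitting rank-q affine map and maps being refit on the leaves that chose them. The numpy sketch below is a minimal illustration under assumptions, not the paper's exact algorithm: the split rule in split_node (top principal direction of Y, predicted linearly from X, split at the median) and the rank-q estimator in fit_rank_q_affine (SVD truncation of an ordinary least-squares fit) are simplified stand-ins for the criteria used in the paper, whose keywords suggest partial least squares plays a role; all function names are illustrative.

```python
import numpy as np

def split_node(X, Y):
    """One tree split (heuristic sketch, not the paper's exact criterion):
    take the top principal direction of Y, predict its projection linearly
    from X, and split the node at the median of that prediction, i.e. by a
    hyperplane in X aligned with a high-variance direction in Y."""
    v = np.linalg.svd(Y - Y.mean(0), full_matrices=False)[2][0]  # top PC of Y
    t = (Y - Y.mean(0)) @ v                                      # projection of Y onto it
    w, *_ = np.linalg.lstsq(X - X.mean(0), t, rcond=None)        # hyperplane normal in X
    s = (X - X.mean(0)) @ w
    return s <= np.median(s)                                     # boolean mask for the left child

def fit_rank_q_affine(X, Y, q):
    """Affine fit Y ~ X A + b with A truncated to rank q via SVD.
    (A simple stand-in for the paper's rank-q regressors.)"""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    A, *_ = np.linalg.lstsq(Xc, Yc, rcond=None)        # (d, p) least-squares map
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    A_q = U[:, :q] @ np.diag(s[:q]) @ Vt[:q]           # rank-q truncation
    b = Y.mean(0) - X.mean(0) @ A_q
    return A_q, b

def leaf_error(X, Y, A, b):
    """Total squared error of one affine map on one leaf's points."""
    return np.sum((X @ A + b - Y) ** 2)

def k_mappings(leaves, K, q, n_iter=20, seed=0):
    """K-means-like alternation: each leaf (an (X, Y) block of points) votes
    for the affine map that fits it best; each map is refit on the union of
    the leaves that chose it."""
    rng = np.random.default_rng(seed)
    assign = rng.integers(0, K, size=len(leaves))      # random initial leaf-to-map assignment
    maps = [None] * K
    for _ in range(n_iter):
        # update step: refit each mapping on the leaves currently assigned to it
        for k in range(K):
            idx = [i for i, a in enumerate(assign) if a == k]
            if not idx:
                continue                               # empty cluster keeps its previous fit
            Xk = np.vstack([leaves[i][0] for i in idx])
            Yk = np.vstack([leaves[i][1] for i in idx])
            maps[k] = fit_rank_q_affine(Xk, Yk, q)
        # assignment step: each leaf votes for its best-fitting mapping
        new_assign = np.array([
            np.argmin([leaf_error(Xl, Yl, *maps[k]) if maps[k] else np.inf
                       for k in range(K)])
            for Xl, Yl in leaves])
        if np.array_equal(new_assign, assign):
            break                                      # assignments stable: converged
        assign = new_assign
    return maps, assign
```

In use, one would grow the subdivision tree by recursively applying split_node until leaves are small, collect each leaf's samples as an (X, Y) pair, and pass the resulting list to k_mappings; prediction for a new x then follows the tree to a leaf and applies that leaf's assigned affine map.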

Original language: English (US)
Title of host publication: 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2014
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 2937-2941
Number of pages: 5
ISBN (Print): 9781479928927
DOIs
State: Published - 2014
Externally published: Yes
Event: 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2014 - Florence, Italy
Duration: May 4, 2014 – May 9, 2014

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
ISSN (Print): 1520-6149

Other

Other: 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2014
Country/Territory: Italy
City: Florence
Period: 5/4/14 – 5/9/14

Keywords

  • Partial least squares
  • Piecewise linear regression
  • Sparse modeling

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering
