TY - JOUR

T1 - Recovery of sparse linear classifiers from mixture of responses

AU - Gandikota, Venkata

AU - Mazumdar, Arya

AU - Pal, Soumyabrata

N1 - Funding Information:
Acknowledgements: This research is supported in part by NSF CCF 1909046 and NSF 1934846.
Publisher Copyright:
© 2020 Neural information processing systems foundation. All rights reserved.

PY - 2020

Y1 - 2020

N2 - In the problem of learning a mixture of linear classifiers, the aim is to learn a collection of hyperplanes from a sequence of binary responses. Each response is the result of querying with a vector and indicates which side of a randomly chosen hyperplane from the collection the query vector lies on. This model provides a rich representation of heterogeneous data with categorical labels and has only been studied in some special settings. We study the hitherto unexplored problem of upper bounds on the query complexity of recovering all the hyperplanes, especially in the case when the hyperplanes are sparse. This setting is a natural generalization of the extreme quantization problem known as 1-bit compressed sensing. Suppose we have a set of l unknown k-sparse vectors. We can query the set with another vector a to obtain the sign of the inner product of a and a randomly chosen vector from the l-set. How many queries are sufficient to identify all l unknown vectors? This question is significantly more challenging than both the basic 1-bit compressed sensing problem (i.e., the l = 1 case) and the analogous regression problem (where the value instead of the sign is provided). We provide rigorous query complexity results (with efficient algorithms) for this problem.

AB - In the problem of learning a mixture of linear classifiers, the aim is to learn a collection of hyperplanes from a sequence of binary responses. Each response is the result of querying with a vector and indicates which side of a randomly chosen hyperplane from the collection the query vector lies on. This model provides a rich representation of heterogeneous data with categorical labels and has only been studied in some special settings. We study the hitherto unexplored problem of upper bounds on the query complexity of recovering all the hyperplanes, especially in the case when the hyperplanes are sparse. This setting is a natural generalization of the extreme quantization problem known as 1-bit compressed sensing. Suppose we have a set of l unknown k-sparse vectors. We can query the set with another vector a to obtain the sign of the inner product of a and a randomly chosen vector from the l-set. How many queries are sufficient to identify all l unknown vectors? This question is significantly more challenging than both the basic 1-bit compressed sensing problem (i.e., the l = 1 case) and the analogous regression problem (where the value instead of the sign is provided). We provide rigorous query complexity results (with efficient algorithms) for this problem.

UR - http://www.scopus.com/inward/record.url?scp=85108445700&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85108445700&partnerID=8YFLogxK

M3 - Conference Article

AN - SCOPUS:85108445700

SN - 1049-5258

VL - 2020-December

JO - Advances in Neural Information Processing Systems

JF - Advances in Neural Information Processing Systems

T2 - 34th Conference on Neural Information Processing Systems, NeurIPS 2020

Y2 - 6 December 2020 through 12 December 2020

ER -