TY - JOUR
T1 - Robust Kullback-Leibler Divergence and Universal Hypothesis Testing for Continuous Distributions
AU - Yang, Pengfei
AU - Chen, Biao
N1 - Funding Information:
Manuscript received May 12, 2017; revised October 10, 2018; accepted October 21, 2018. Date of publication November 9, 2018; date of current version March 15, 2019. This work was supported in part by the Air Force Office of Scientific Research under Grant FA9550-16-1-0077 and in part by the National Science Foundation under Grant CNS-1731237.
Publisher Copyright:
© 2018 IEEE.
PY - 2019/4
Y1 - 2019/4
N2 - Universal hypothesis testing (UHT) refers to the problem of deciding whether samples come from a nominal distribution or an unknown distribution that is different from the nominal distribution. Hoeffding's test, whose test statistic is equivalent to the empirical Kullback-Leibler divergence (KL divergence), is known to be asymptotically optimal for distributions defined on finite alphabets. With continuous observations, however, the discontinuity of the KL divergence in the distribution functions results in significant complications for UHT. This paper introduces a robust version of the classical KL divergence, defined as the KL divergence from a distribution to the Lévy ball of a known distribution. This robust KL divergence is shown to be continuous in the underlying distribution function with respect to weak convergence. The continuity property enables the development of an asymptotically optimal test for the universal hypothesis testing problem with continuous observations. The optimality is in the same sense as that of Hoeffding's test and stronger than that of Zeitouni and Gutman. Perhaps more importantly, the developed test statistic can be computed through convex programs, making it much more practical to implement. Numerical experiments are also conducted to evaluate its performance against a recently proposed kernel-based goodness-of-fit test.
AB - Universal hypothesis testing (UHT) refers to the problem of deciding whether samples come from a nominal distribution or an unknown distribution that is different from the nominal distribution. Hoeffding's test, whose test statistic is equivalent to the empirical Kullback-Leibler divergence (KL divergence), is known to be asymptotically optimal for distributions defined on finite alphabets. With continuous observations, however, the discontinuity of the KL divergence in the distribution functions results in significant complications for UHT. This paper introduces a robust version of the classical KL divergence, defined as the KL divergence from a distribution to the Lévy ball of a known distribution. This robust KL divergence is shown to be continuous in the underlying distribution function with respect to weak convergence. The continuity property enables the development of an asymptotically optimal test for the universal hypothesis testing problem with continuous observations. The optimality is in the same sense as that of Hoeffding's test and stronger than that of Zeitouni and Gutman. Perhaps more importantly, the developed test statistic can be computed through convex programs, making it much more practical to implement. Numerical experiments are also conducted to evaluate its performance against a recently proposed kernel-based goodness-of-fit test.
KW - Kullback-Leibler divergence
KW - Lévy metric
KW - universal hypothesis testing
UR - http://www.scopus.com/inward/record.url?scp=85056312568&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85056312568&partnerID=8YFLogxK
U2 - 10.1109/TIT.2018.2879057
DO - 10.1109/TIT.2018.2879057
M3 - Article
AN - SCOPUS:85056312568
SN - 0018-9448
VL - 65
SP - 2360
EP - 2373
JO - IEEE Transactions on Information Theory
JF - IEEE Transactions on Information Theory
IS - 4
M1 - 8528471
ER -