K-medoids clustering of data sequences with composite distributions

Tiexing Wang, Qunwei Li, Donald J. Bucci, Yingbin Liang, Biao Chen, Pramod Kumar Varshney

Research output: Contribution to journal › Article

Abstract

This paper studies clustering of data sequences using the k-medoids algorithm. All data sequences are assumed to be generated from unknown continuous distributions that form clusters, with each cluster containing a composite set of closely located distributions (based on a certain distance metric between distributions). The maximum intracluster distance is assumed to be smaller than the minimum intercluster distance, and both values are assumed to be known. The goal is to group data sequences together if their underlying generative distributions (which are unknown) belong to the same cluster. K-medoids algorithms based on distribution distance metrics are proposed for both known and unknown numbers of distribution clusters. Upper bounds on the error probability and convergence results in the large-sample regime are also provided. It is shown that the error probability decays exponentially fast as the number of samples in each data sequence goes to infinity. The error exponent has a simple form regardless of the distance metric applied when certain conditions are satisfied. In particular, the error exponent is characterized when either the Kolmogorov-Smirnov distance or the maximum mean discrepancy is used as the distance metric. Simulation results are provided to validate the analysis.
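To make the setup concrete, the following is a minimal illustrative sketch, not the paper's algorithm: plain k-medoids over 1-D data sequences, using the two-sample Kolmogorov-Smirnov distance between empirical CDFs as the distance metric. The function names (`ks_distance`, `k_medoids`) and the greedy farthest-point initialization are this sketch's own choices, not taken from the paper.

```python
import numpy as np

def ks_distance(x, y):
    """Two-sample Kolmogorov-Smirnov distance between the empirical
    CDFs of data sequences x and y."""
    x, y = np.sort(x), np.sort(y)
    grid = np.concatenate([x, y])
    # Empirical CDF of each sequence evaluated on the pooled grid.
    cdf_x = np.searchsorted(x, grid, side="right") / len(x)
    cdf_y = np.searchsorted(y, grid, side="right") / len(y)
    return np.max(np.abs(cdf_x - cdf_y))

def k_medoids(sequences, k, n_iter=50):
    """K-medoids (alternating assignment/update) on a list of 1-D
    data sequences, with the KS distance between sequences."""
    n = len(sequences)
    # Precompute the symmetric pairwise distance matrix.
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = ks_distance(sequences[i], sequences[j])
    # Greedy farthest-point initialization: start from sequence 0,
    # then repeatedly add the sequence farthest from current medoids.
    medoids = [0]
    while len(medoids) < k:
        medoids.append(int(np.argmax(d[:, medoids].min(axis=1))))
    medoids = np.array(medoids)
    for _ in range(n_iter):
        # Assignment step: each sequence joins its nearest medoid.
        labels = np.argmin(d[:, medoids], axis=1)
        # Update step: within each cluster, the new medoid minimizes
        # the total distance to the other cluster members.
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members) > 0:
                costs = d[np.ix_(members, members)].sum(axis=1)
                new_medoids[c] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return labels, medoids
```

For well-separated generative distributions the within-cluster KS distances stay small while the cross-cluster distances approach their population values, which is the regime in which the paper's exponential error-decay guarantees apply.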

Original language: English (US)
Article number: 8651294
Pages (from-to): 2093-2106
Number of pages: 14
Journal: IEEE Transactions on Signal Processing
Volume: 67
Issue number: 8
DOI: 10.1109/TSP.2019.2901370
State: Published - Apr 15 2019


Keywords

  • composite distributions
  • error probability
  • k-medoids clustering
  • Kolmogorov-Smirnov distance
  • maximum mean discrepancy
  • unsupervised learning

ASJC Scopus subject areas

  • Signal Processing
  • Electrical and Electronic Engineering

Cite this

K-medoids clustering of data sequences with composite distributions. / Wang, Tiexing; Li, Qunwei; Bucci, Donald J.; Liang, Yingbin; Chen, Biao; Varshney, Pramod Kumar.

In: IEEE Transactions on Signal Processing, Vol. 67, No. 8, 8651294, 15.04.2019, p. 2093-2106.

@article{8e6d146813c248e6aa4bb478927199a6,
title = "K-medoids clustering of data sequences with composite distributions",
abstract = "This paper studies clustering of data sequences using the k-medoids algorithm. All data sequences are assumed to be generated from unknown continuous distributions that form clusters, with each cluster containing a composite set of closely located distributions (based on a certain distance metric between distributions). The maximum intracluster distance is assumed to be smaller than the minimum intercluster distance, and both values are assumed to be known. The goal is to group data sequences together if their underlying generative distributions (which are unknown) belong to the same cluster. K-medoids algorithms based on distribution distance metrics are proposed for both known and unknown numbers of distribution clusters. Upper bounds on the error probability and convergence results in the large-sample regime are also provided. It is shown that the error probability decays exponentially fast as the number of samples in each data sequence goes to infinity. The error exponent has a simple form regardless of the distance metric applied when certain conditions are satisfied. In particular, the error exponent is characterized when either the Kolmogorov-Smirnov distance or the maximum mean discrepancy is used as the distance metric. Simulation results are provided to validate the analysis.",
keywords = "composite distributions, error probability, k-medoids clustering, Kolmogorov-Smirnov distance, maximum mean discrepancy, unsupervised learning",
author = "Wang, Tiexing and Li, Qunwei and Bucci, {Donald J.} and Liang, Yingbin and Chen, Biao and Varshney, {Pramod Kumar}",
year = "2019",
month = "4",
day = "15",
doi = "10.1109/TSP.2019.2901370",
language = "English (US)",
volume = "67",
pages = "2093--2106",
journal = "IEEE Transactions on Signal Processing",
issn = "1053-587X",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "8",

}

TY - JOUR

T1 - K-medoids clustering of data sequences with composite distributions

AU - Wang, Tiexing

AU - Li, Qunwei

AU - Bucci, Donald J.

AU - Liang, Yingbin

AU - Chen, Biao

AU - Varshney, Pramod Kumar

PY - 2019/4/15

Y1 - 2019/4/15

N2 - This paper studies clustering of data sequences using the k-medoids algorithm. All data sequences are assumed to be generated from unknown continuous distributions that form clusters, with each cluster containing a composite set of closely located distributions (based on a certain distance metric between distributions). The maximum intracluster distance is assumed to be smaller than the minimum intercluster distance, and both values are assumed to be known. The goal is to group data sequences together if their underlying generative distributions (which are unknown) belong to the same cluster. K-medoids algorithms based on distribution distance metrics are proposed for both known and unknown numbers of distribution clusters. Upper bounds on the error probability and convergence results in the large-sample regime are also provided. It is shown that the error probability decays exponentially fast as the number of samples in each data sequence goes to infinity. The error exponent has a simple form regardless of the distance metric applied when certain conditions are satisfied. In particular, the error exponent is characterized when either the Kolmogorov-Smirnov distance or the maximum mean discrepancy is used as the distance metric. Simulation results are provided to validate the analysis.

AB - This paper studies clustering of data sequences using the k-medoids algorithm. All data sequences are assumed to be generated from unknown continuous distributions that form clusters, with each cluster containing a composite set of closely located distributions (based on a certain distance metric between distributions). The maximum intracluster distance is assumed to be smaller than the minimum intercluster distance, and both values are assumed to be known. The goal is to group data sequences together if their underlying generative distributions (which are unknown) belong to the same cluster. K-medoids algorithms based on distribution distance metrics are proposed for both known and unknown numbers of distribution clusters. Upper bounds on the error probability and convergence results in the large-sample regime are also provided. It is shown that the error probability decays exponentially fast as the number of samples in each data sequence goes to infinity. The error exponent has a simple form regardless of the distance metric applied when certain conditions are satisfied. In particular, the error exponent is characterized when either the Kolmogorov-Smirnov distance or the maximum mean discrepancy is used as the distance metric. Simulation results are provided to validate the analysis.

KW - composite distributions

KW - error probability

KW - k-medoids clustering

KW - Kolmogorov-Smirnov distance

KW - maximum mean discrepancy

KW - unsupervised learning

UR - http://www.scopus.com/inward/record.url?scp=85063012599&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85063012599&partnerID=8YFLogxK

U2 - 10.1109/TSP.2019.2901370

DO - 10.1109/TSP.2019.2901370

M3 - Article

VL - 67

SP - 2093

EP - 2106

JO - IEEE Transactions on Signal Processing

JF - IEEE Transactions on Signal Processing

SN - 1053-587X

IS - 8

M1 - 8651294

ER -