PERCEPT-R: An Open-Access American English Child/Clinical Speech Corpus Specialized for the Audio Classification of /ɹ/

Nina R. Benway, Jonathan L. Preston, Elaine Hitchcock, Asif Salekin, Harshit Sharma, Tara McAllister

Research output: Contribution to journal › Conference article › peer-review


Abstract

We present the PERCEPT-R corpus, a labeled corpus of child speakers of American English with typical speech and with residual speech sound disorders affecting rhotics. We demonstrate the utility of age- and gender-normalized formants extracted from PERCEPT-R in training support vector classifiers to predict ground-truth perceptual judgments of "rhotic" (i.e., dialect-typical) and clinical "derhotic" /ɹ/ for novel speakers (mean of participant-specific f-metrics = .83; SD = .18; N = 281).
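
The abstract does not include code; the following is a minimal sketch of the general approach it describes: a support vector classifier trained on age- and gender-normalized formant features, evaluated on held-out (novel) speakers. The feature matrix, labels, and speaker IDs below are synthetic placeholders, not PERCEPT-R data, and the leave-one-speaker-out protocol and RBF kernel are assumptions for illustration rather than the authors' exact setup.

```python
"""Sketch: speaker-independent SVM classification of rhotic vs. derhotic /ɹ/.
All data here are randomly generated placeholders."""
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Placeholder features: rows are /ɹ/ tokens, columns are age- and
# gender-normalized formant measures (e.g., normalized F1-F3).
n_tokens, n_features = 400, 4
X = rng.normal(size=(n_tokens, n_features))
y = rng.integers(0, 2, size=n_tokens)          # 1 = rhotic, 0 = derhotic
speakers = rng.integers(0, 20, size=n_tokens)  # speaker ID for each token

# Leave-one-speaker-out cross-validation approximates generalization to
# "novel speakers": the held-out speaker's tokens are never seen in training.
per_speaker_f1 = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=speakers):
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X[train_idx], y[train_idx])
    per_speaker_f1.append(f1_score(y[test_idx], clf.predict(X[test_idx])))

print(f"mean participant-specific F1 = {np.mean(per_speaker_f1):.2f} "
      f"(SD = {np.std(per_speaker_f1):.2f}, N = {len(per_speaker_f1)})")
```

Reporting a mean and SD of participant-specific scores, as in the abstract, reflects that classifier performance varies by speaker; a single pooled score would hide that variability.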

Original language: English (US)
Pages (from-to): 3648-3652
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume: 2022-September
DOIs
State: Published - 2022
Event: 23rd Annual Conference of the International Speech Communication Association, INTERSPEECH 2022 - Incheon, Korea, Republic of
Duration: Sep 18 2022 - Sep 22 2022

Keywords

  • /ɹ/
  • child speech
  • clinical speech
  • mispronunciation detection
  • open access dataset

ASJC Scopus subject areas

  • Language and Linguistics
  • Human-Computer Interaction
  • Signal Processing
  • Software
  • Modeling and Simulation
