Automatic pronunciation assessment is the use of speech recognition to verify the correctness of pronounced speech,[1] as distinguished from manual assessment by an instructor or proctor.[2] Also called speech verification, pronunciation evaluation, and pronunciation scoring, this technology is chiefly applied in computer-aided pronunciation teaching (CAPT) when combined with computer-aided instruction for computer-assisted language learning (CALL), speech remediation, or accent reduction. Pronunciation assessment does not determine unknown speech (as in dictation or automatic transcription); instead, knowing the expected word(s) in advance, it attempts to verify the correctness of the learner's pronunciation and, ideally, their intelligibility to listeners,[3][4] sometimes along with often inconsequential prosody such as intonation, pitch, tempo, rhythm, and stress.[5] Pronunciation assessment is also used in reading tutoring, for example in products such as Microsoft Teams[6] and those from Amira Learning.[7] It can also be used to help diagnose and treat speech disorders such as apraxia.[8]
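Because the expected words are known in advance, a basic form of this verification can be reduced to aligning a recognizer's output against the expected text and flagging the expected words the recognizer failed to hear. The following minimal Python sketch illustrates that idea; the `asr_recognize` helper is a hypothetical placeholder for any speech-to-text call, not the API of any particular product.

```python
# Minimal sketch of word-level pronunciation verification. The expected
# text is known in advance, so the task reduces to checking which of the
# expected words the recognizer actually heard. `asr_recognize` is a
# hypothetical placeholder, not a real library's API.
from difflib import SequenceMatcher


def asr_recognize(audio_path: str) -> list[str]:
    """Hypothetical: run any speech-to-text engine and return its words."""
    raise NotImplementedError("plug in a real recognizer here")


def verify_pronunciation(audio_path: str, expected_text: str) -> list[tuple[str, bool]]:
    expected = expected_text.lower().split()
    recognized = asr_recognize(audio_path)
    matched = [False] * len(expected)
    # Align the two word sequences; expected words inside matching blocks
    # were recognized as spoken, and the rest are flagged for feedback.
    for block in SequenceMatcher(a=expected, b=recognized).get_matching_blocks():
        for i in range(block.a, block.a + block.size):
            matched[i] = True
    return list(zip(expected, matched))
```

Production CAPT systems typically work below the word level, using forced alignment and per-phoneme acoustic confidence scores, so this word-level comparison is only a first approximation.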
The earliest work on pronunciation assessment avoided measuring genuine listener intelligibility,[9] a shortcoming corrected in 2011 at the Toyohashi University of Technology.[10] Such intelligibility measures are included in the Versant high-stakes English fluency assessment from Pearson[11] and in mobile apps from 17zuoye Education & Technology,[12] but as of 2023 were still missing from products by Google Search,[13] Microsoft,[14] Educational Testing Service,[15] Speechace,[16] and ELSA.[17] Assessing authentic listener intelligibility is essential for avoiding inaccuracies from accent bias, especially in high-stakes assessments;[18][19][20] from words with multiple correct pronunciations;[21] and from phoneme coding errors in machine-readable pronunciation dictionaries.[22] In 2022, researchers found that some newer speech-to-text systems, based on end-to-end reinforcement learning to map audio signals directly into words, produce word and phrase confidence scores that correlate closely with genuine listener intelligibility.[23] In the Common European Framework of Reference for Languages (CEFR) assessment criteria for "overall phonological control", intelligibility outweighs formally correct pronunciation at all levels.[24]
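A common proxy for this kind of intelligibility measurement is the word error rate between the expected text and a transcription produced by a blinded listener or an end-to-end recognizer. The sketch below is a generic illustration of that proxy rather than any cited system's method; it computes word error rate with the standard dynamic-programming edit distance.

```python
# Minimal sketch of an intelligibility proxy: the word error rate (WER)
# between the expected words and a transcription, whether produced by a
# blinded human listener or by an end-to-end speech-to-text system.
def word_error_rate(expected: list[str], transcribed: list[str]) -> float:
    # Standard Levenshtein edit distance over words, via dynamic programming.
    m, n = len(expected), len(transcribed)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if expected[i - 1] == transcribed[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n] / max(m, 1)


# Illustrative only: one substituted word out of four gives a WER of 0.25.
print(word_error_rate("the quick brown fox".split(),
                      "the quick brow fox".split()))
```

A word error rate of 0 means every expected word was recovered; higher values indicate reduced intelligibility to that listener or recognizer.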
Although there are as yet no industry-standard benchmarks for evaluating pronunciation assessment accuracy, researchers occasionally release evaluation speech corpora for others to use in improving assessment quality.[25][26] Such evaluation databases often emphasize formally unaccented pronunciation to the exclusion of the genuine intelligibility evident from blinded listener transcriptions.[4] Promising areas of improvement under development in 2023 include articulatory feature extraction[27] and transfer learning to suppress unnecessary corrections.[28] Other advances under development include "augmented reality" interfaces for mobile devices that use optical character recognition to provide pronunciation training on text found in the user's environment.[29][30]
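In the absence of standard benchmarks, one way a system could be validated against listener data is to correlate its per-utterance scores with the fraction of words that blinded listeners transcribe correctly. The sketch below shows this with the Pearson correlation coefficient; all numbers are purely illustrative, not drawn from any cited study.

```python
# Minimal sketch of validating automatic scores against listener data:
# correlate per-utterance machine scores with the fraction of words that
# blinded listeners transcribed correctly. All numbers are illustrative.
import math


def pearson(xs: list[float], ys: list[float]) -> float:
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


machine_scores = [0.91, 0.45, 0.78, 0.60, 0.33]     # automatic scores
listener_accuracy = [0.95, 0.50, 0.80, 0.70, 0.40]  # listener transcription accuracy
print(pearson(machine_scores, listener_accuracy))   # ~0.99 for these numbers
```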
O’Brien, Mary Grantham et al. (31 December 2018). "Directions for the future of technology in pronunciation research and teaching". Journal of Second Language Pronunciation 4 (2): 182–207. doi:10.1075/jslp.17001.obr. ISSN 2215-1931. "pronunciation researchers are primarily interested in improving L2 learners’ intelligibility and comprehensibility, but they have not yet collected sufficient amounts of representative and reliable data (speech recordings with corresponding annotations and judgments) indicating which errors affect these speech dimensions and which do not. These data are essential to train ASR algorithms to assess L2 learners’ intelligibility."
Gao, Yuan et al. (May 25, 2018). "Spoken English Intelligibility Remediation with PocketSphinx Alignment and Feature Extraction Improves Substantially over the State of the Art". 2nd IEEE Advanced Information Management, Communication, Electronic and Automation Control Conference (IMCEC 2018). doi:10.1109/IMCEC.2018.8469649.
Wu, Peter et al. (14 February 2023). "Speaker-Independent Acoustic-to-Articulatory Speech Inversion". arXiv:2302.06774 [eess.AS].
Sancinetti, Marcelo et al. (23 May 2022). "A Transfer Learning Approach for Pronunciation Scoring". ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). pp. 6812–6816. doi:10.1109/ICASSP43922.2022.9747727. ISBN 978-1-6654-0540-9.