Automatic pronunciation assessment uses speech recognition to check how accurately speech is pronounced,[1][2] instead of relying on a human instructor or proctor.[3] Also called speech verification, pronunciation evaluation, and pronunciation scoring, this technology is used mainly for computer-aided pronunciation teaching (CAPT), in which it is combined with computer-aided instruction for computer-assisted language learning (CALL), speech remediation, or accent reduction.[4]

Pronunciation assessment does not recognize unknown speech (as in dictation or automatic transcription); instead, knowing the expected word(s) in advance or from a prior transcription, it attempts to verify the correctness of the learner's pronunciation and, ideally, their intelligibility to listeners,[5][6] sometimes along with prosodic features such as intonation, pitch, tempo, rhythm, and syllable and word stress, which are often inconsequential for intelligibility.[7] Pronunciation assessment is also used in reading tutoring, for example in products such as Microsoft Teams[8] and those from Amira Learning.[9] It can also be used to help diagnose and treat speech disorders such as apraxia.[10]
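Because the expected words are known, a typical implementation force-aligns the learner's audio to the expected phone sequence and scores each phone, for example with the "goodness of pronunciation" (GOP) measure. The following Python sketch is illustrative only; it assumes frame-level phone log-posteriors from an acoustic model and a forced alignment are already available, and the function and variable names are hypothetical:

    import numpy as np

    def gop_scores(log_posteriors, alignment):
        """Goodness of pronunciation (GOP) score for each expected phone.

        log_posteriors: (num_frames, num_phones) array of frame-level
            log phone posteriors from an acoustic model (assumed given).
        alignment: (phone_index, start_frame, end_frame) triples from
            forced alignment against the expected transcript.
        Scores near 0 suggest the expected phone was produced; strongly
        negative scores suggest a mispronunciation.
        """
        scores = []
        for phone, start, end in alignment:
            segment = log_posteriors[start:end]
            expected = segment[:, phone].mean()  # evidence for the expected phone
            best = segment.max(axis=1).mean()    # evidence for the best competitor
            scores.append(expected - best)
        return scores

    # Toy example: 10 frames, 3 phones, expecting phone 0 then phone 2.
    rng = np.random.default_rng(0)
    posteriors = np.log(rng.dirichlet(np.ones(3), size=10))
    print(gop_scores(posteriors, [(0, 0, 5), (2, 5, 10)]))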

Intelligibility


The earliest work on pronunciation assessment avoided measuring genuine listener intelligibility,[11] a shortcoming addressed in 2011 at Toyohashi University of Technology,[12] and incorporated into the Versant high-stakes English fluency assessment from Pearson[13] and mobile apps from 17zuoye Education & Technology,[14] but still missing in 2023 products from Google Search,[15] Microsoft,[16] Educational Testing Service,[17] Speechace,[18] and ELSA.[19] Assessing authentic listener intelligibility is essential for avoiding inaccuracies arising from accent bias, especially in high-stakes assessments;[20][21][22] from words with multiple correct pronunciations;[23] and from phoneme coding errors in machine-readable pronunciation dictionaries.[24] In the Common European Framework of Reference for Languages (CEFR) assessment criteria for "overall phonological control", intelligibility outweighs formally correct pronunciation at all levels.[25]

In 2022, researchers found that some newer speech-to-text systems, based on end-to-end deep learning that maps audio signals directly into words, produce word and phrase confidence scores closely correlated with genuine listener intelligibility.[26] In 2023, others assessed intelligibility using a dynamic time warping distance between Wav2Vec2 representations of learner speech and of known good speech.[27] Further work through 2025 has focused specifically on measuring intelligibility.[28][29]
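The latter approach can be sketched as follows. This minimal Python example uses the pretrained facebook/wav2vec2-base checkpoint from the Hugging Face transformers library and a simple dynamic time warping over frame embeddings; the 16 kHz mono audio arrays are assumed given, and the cited work's exact model and distance settings may differ:

    import numpy as np
    import torch
    from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

    extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
    model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base").eval()

    def embed(audio):
        """Frame-level Wav2Vec2 vectors (one per ~20 ms) for 16 kHz mono audio."""
        inputs = extractor(audio, sampling_rate=16000, return_tensors="pt")
        with torch.no_grad():
            return model(inputs.input_values).last_hidden_state[0].numpy()

    def dtw_distance(a, b):
        """Dynamic time warping alignment cost under cosine distance,
        normalized by the combined sequence length."""
        a = a / np.linalg.norm(a, axis=1, keepdims=True)
        b = b / np.linalg.norm(b, axis=1, keepdims=True)
        cost = 1.0 - a @ b.T  # pairwise cosine distances between frames
        D = np.full((len(a) + 1, len(b) + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                D[i, j] = cost[i - 1, j - 1] + min(
                    D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[len(a), len(b)] / (len(a) + len(b))

    # Lower distance suggests the learner's utterance is closer to a
    # known good rendering of the same text:
    # score = dtw_distance(embed(learner_audio), embed(reference_audio))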

Evaluation


Although there are as yet no industry-standard benchmarks for evaluating pronunciation assessment accuracy, researchers occasionally release evaluation speech corpora for others to use to improve assessment quality.[30][31][32][33] Such evaluation databases often emphasize formally unaccented pronunciation to the exclusion of genuine intelligibility as evident from blinded listener transcriptions.[6] As of mid-2025, state-of-the-art approaches for automatically transcribing phonemes typically achieve a phoneme error rate of about 10% on known good speech.[34][35][36][37]
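That figure is a phoneme error rate: the Levenshtein edit distance between recognized and reference phoneme sequences, normalized by the reference length. A minimal sketch, using ARPABET-style phone labels purely as an example:

    def phoneme_error_rate(reference, hypothesis):
        """Levenshtein edit distance (substitutions, insertions, deletions)
        between phoneme sequences, normalized by the reference length."""
        n, m = len(reference), len(hypothesis)
        d = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(n + 1):
            d[i][0] = i
        for j in range(m + 1):
            d[0][j] = j
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                substitution = d[i - 1][j - 1] + (reference[i - 1] != hypothesis[j - 1])
                d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
        return d[n][m] / n

    # "cat" /K AE T/ recognized as /K EH T/: one substitution in three phones.
    print(phoneme_error_rate(["K", "AE", "T"], ["K", "EH", "T"]))  # 0.333...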

Ethical issues arise in both human and automatic pronunciation assessment: validity, fairness, and the mitigation of bias are all crucial. Automatic assessment models should be trained on diverse speech data, and combining human judgment with automated feedback can improve both accuracy and fairness.[38]

Second language learners benefit substantially from using common speech recognition systems for dictation, as virtual assistants, and as AI chatbots.[39] With such systems, users naturally try to correct the errors they notice in the recognition results, and this use improves their grammar and vocabulary development along with their pronunciation skills. The extent to which explicit pronunciation assessment and remediation approaches improve on such self-directed interactions remains an open question.[39]

Recent developments


Promising areas for improvement under development in 2024 included articulatory feature extraction[40][41][42] and transfer learning to suppress unnecessary corrections.[43] Other advances under development include "augmented reality" interfaces for mobile devices that use optical character recognition to provide pronunciation training on text found in the user's environment.[44][45]

In 2024, audio multimodal large language models were first described performing pronunciation assessment.[46] Other researchers have carried that work forward, reporting positive results.[47][48]

In 2025, the Duolingo English Test authors published a description of their pronunciation assessment method, which they state was built to measure intelligibility rather than accent imitation.[49] While it achieves a correlation of 0.82 with expert human ratings, very close to inter-rater agreement and better than alternative methods, it is nonetheless trained on experts' scores along the six-point CEFR common reference levels scale rather than on actual blinded listener transcriptions.[49]
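Reported correlations such as the 0.82 here are typically Pearson correlations between automatic scores and averaged human ratings over the same responses, as in this sketch with purely illustrative numbers:

    import numpy as np

    # Hypothetical automatic scores and averaged expert CEFR-scale ratings
    # for the same six speaking responses (illustrative numbers only).
    machine = np.array([2.1, 3.4, 4.0, 2.8, 5.2, 3.9])
    human = np.array([2.0, 3.0, 4.5, 3.0, 5.0, 4.0])

    # Pearson correlation, the validity figure usually reported.
    r = np.corrcoef(machine, human)[0, 1]
    print(f"r = {r:.2f}")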

Further promising work in 2025 includes assessment feedback that aligns learner speech to synthetic utterances using interpretable features, identifying contiguous spans of words for remediation;[50] synthesis of corrected speech matching learners' self-perceived voices, which learners prefer and imitate more accurately as corrections;[51] and streaming versions of such interactions.[52]


References

edit
  1. ^ El Kheir, Yassine; et al. (October 21, 2023), Automatic Pronunciation Assessment — A Review, Conference on Empirical Methods in Natural Language Processing, arXiv:2310.13974, S2CID 264426545
  2. ^ Lounis, Meriem; Dendani, Bilal; Bahi, Halima (January 2024). "Mispronunciation detection and diagnosis using deep neural networks: a systematic review". Multimedia Tools and Applications. 83 (23): 62793–62827. doi:10.1007/s11042-023-17899-x. Retrieved 12 July 2025.
  3. ^ Isaacs, Talia; Harding, Luke (July 2017). "Pronunciation assessment". Language Teaching. 50 (3): 347–366. doi:10.1017/S0261444817000118. ISSN 0261-4448. S2CID 209353525.
  4. ^ Ehsani, Farzad; Knodt, Eva (July 1998). "Speech technology in computer-aided language learning: Strengths and limitations of a new CALL paradigm". Language Learning & Technology. 2 (1). University of Hawaii National Foreign Language Resource Center; Michigan State University Center for Language Education and Research: 54–73. Retrieved 11 February 2023.
  5. ^ Loukina, Anastassia; et al. (September 6, 2015), "Pronunciation accuracy and intelligibility of non-native speech" (PDF), INTERSPEECH 2015, Dresden, Germany: International Speech Communication Association, pp. 1917–1921, only 16% of the variability in word-level intelligibility can be explained by the presence of obvious mispronunciations.
  6. ^ a b O’Brien, Mary Grantham; et al. (31 December 2018). "Directions for the future of technology in pronunciation research and teaching". Journal of Second Language Pronunciation. 4 (2): 182–207. doi:10.1075/jslp.17001.obr. hdl:2066/199273. ISSN 2215-1931. S2CID 86440885. pronunciation researchers are primarily interested in improving L2 learners' intelligibility and comprehensibility, but they have not yet collected sufficient amounts of representative and reliable data (speech recordings with corresponding annotations and judgments) indicating which errors affect these speech dimensions and which do not. These data are essential to train ASR algorithms to assess L2 learners' intelligibility.
  7. ^ Eskenazi, Maxine (January 1999). "Using automatic speech processing for foreign language pronunciation tutoring: Some issues and a prototype". Language Learning & Technology. 2 (2): 62–76. Retrieved 11 February 2023.
  8. ^ Tholfsen, Mike (9 February 2023). "Reading Coach in Immersive Reader plus new features coming to Reading Progress in Microsoft Teams". Techcommunity Education Blog. Microsoft. Retrieved 12 February 2023.
  9. ^ Banerji, Olina (7 March 2023). "Schools Are Using Voice Technology to Teach Reading. Is It Helping?". EdSurge News. Retrieved 7 March 2023.
  10. ^ Hair, Adam; et al. (19 June 2018). "Apraxia world: A speech therapy game for children with speech sound disorders". Proceedings of the 17th ACM Conference on Interaction Design and Children (PDF). pp. 119–131. doi:10.1145/3202185.3202733. ISBN 9781450351522. S2CID 13790002.
  11. ^ Bernstein, Jared; et al. (November 18, 1990), "Automatic Evaluation and Training in English Pronunciation" (PDF), First International Conference on Spoken Language Processing (ICSLP 90), Kobe, Japan: International Speech Communication Association, pp. 1185–1188, retrieved 11 February 2023, listeners differ considerably in their ability to predict unintelligible words.... Thus, it seems the quality rating is a more desirable... automatic-grading score. (Section 2.2.2.)
  12. ^ Hiroshi, Kibishi; Nakagawa, Seiichi (August 28, 2011), "New feature parameters for pronunciation evaluation in English presentations at international conferences" (PDF), INTERSPEECH 2011, Florence, Italy: International Speech Communication Association, pp. 1149–1152, retrieved 11 February 2023, we investigated the relationship between pronunciation score / intelligibility and various acoustic measures, and then combined these measures.... As far as we know, the automatic estimation of intelligibility has not yet been studied.
  13. ^ Bonk, Bill (25 August 2020). "New innovations in assessment: Versant's Intelligibility Index score". Resources for English Language Learners and Teachers. Pearson English. Archived from the original on 2023-01-27. Retrieved 11 February 2023. you don't need a perfect accent, grammar, or vocabulary to be understandable. In reality, you just need to be understandable with little effort by listeners.
  14. ^ Gao, Yuan; et al. (May 25, 2018), "Spoken English Intelligibility Remediation with PocketSphinx Alignment and Feature Extraction Improves Substantially over the State of the Art", 2nd IEEE Advanced Information Management, Communication, Electronic and Automation Control Conference (IMCEC 2018), pp. 924–927, arXiv:1709.01713, doi:10.1109/IMCEC.2018.8469649, ISBN 978-1-5386-1803-5, S2CID 31125681
  15. ^ Snir, Tal (14 November 2019). "How do you pronounce quokka? Practice with Search". The Keyword. Google. Retrieved 11 February 2023.
  16. ^ "Pronunciation assessment tool". Azure Cognitive Services Speech Studio. Microsoft. Retrieved 11 February 2023.
  17. ^ Chen, Lei; et al. (December 2018). Automated Scoring of Nonnative Speech: Using the SpeechRater v. 5.0 Engine. ETS Research Report Series. Vol. 2018. Princeton, NJ: Educational Testing Service. pp. 1–31. doi:10.1002/ets2.12198. ISSN 2330-8516. S2CID 69925114. Retrieved 11 February 2023.
  18. ^ Alnafisah, Mutleb (September 2022), "Technology Review: Speechace", Proceedings of the 12th Pronunciation in Second Language Learning and Teaching Conference (Virtual PSLLT), no. 40, vol. 12, St. Catharines, Ontario, ISSN 2380-9566, retrieved 14 February 2023
  19. ^ Gorham, Jon; et al. (March 10, 2022). Speech Recognition for English Language Learning (video). Technology in Language Teaching and Learning. Education Solutions. Retrieved 2023-02-14.
  20. ^ "Computer says no: Irish vet fails oral English test needed to stay in Australia". The Guardian. Australian Associated Press. 8 August 2017. Retrieved 12 February 2023.
  21. ^ Ferrier, Tracey (9 August 2017). "Australian ex-news reader with English degree fails robot's English test". The Sydney Morning Herald. Retrieved 12 February 2023.
  22. ^ Main, Ed; Watson, Richard (9 February 2022). "The English test that ruined thousands of lives". BBC News. Retrieved 12 February 2023.
  23. ^ Joyce, Katy Spratte (January 24, 2023). "13 Words That Can Be Pronounced Two Ways". Reader's Digest. Retrieved 23 February 2023.
  24. ^ E.g., CMUDICT, "The CMU Pronouncing Dictionary". www.speech.cs.cmu.edu. Retrieved 15 February 2023. Compare "four" given as "F AO R" with the vowel AO as in "caught," to "row" given as "R OW" with the vowel OW as in "oat."
  25. ^ Common European framework of reference for languages learning, teaching, assessment: Companion volume with new descriptors. Language Policy Programme, Education Policy Division, Education Department, Council of Europe. February 2018. p. 136. OCLC 1090351600.
  26. ^ Tu, Zehai; Ma, Ning; Barker, Jon (2022). "Unsupervised Uncertainty Measures of Automatic Speech Recognition for Non-intrusive Speech Intelligibility Prediction" (PDF). Proc. Interspeech 2022. INTERSPEECH 2022. ISCA. pp. 3493–3497. doi:10.21437/Interspeech.2022-10408. Retrieved 17 December 2023.
  27. ^ Anand, Nayan; Sirigiraju, Meenakshi; Yarra, Chiranjeevi (15 June 2023). "Unsupervised speech intelligibility assessment with utterance level alignment distance between teacher and learner Wav2Vec-2.0 representations". arXiv:2306.08845 [cs.SD].
  28. ^ Geng, Haopeng; Saito, Daisuke; Minematsu, Nobuaki (August 2025). "A Perception-Based L2 Speech Intelligibility Indicator: Leveraging a Rater's Shadowing and Sequence-to-sequence Voice Conversion". Interspeech 2025. Rotterdam, The Netherlands: ISCA. pp. 2420–2424.
  29. ^ Phukon, Bornali; Zheng, Xiuwen; Hasegawa-Johnson, Mark (17–21 August 2025). "Aligning ASR Evaluation with Human and LLM Judgments: Intelligibility Metrics Using Phonetic, Semantic, and NLI Approaches". Interspeech 2025. Rotterdam, The Netherlands: ISCA. pp. 5708–5712.
  30. ^ Zhang, Junbo; et al. (August 2021), "speechocean762: An Open-Source Non-Native English Speech Corpus for Pronunciation Assessment" (PDF), Interspeech 2021, pp. 3710–3714, arXiv:2104.01378, doi:10.21437/Interspeech.2021-1259, S2CID 233025050, retrieved 19 February 2023; GitHub corpus repository.
  31. ^ Vidal, Jazmín; et al. (September 2019), "EpaDB: A Database for Development of Pronunciation Assessment Systems" (PDF), Interspeech 2019, pp. 589–593, doi:10.21437/Interspeech.2019-1839, hdl:11336/161618, S2CID 202742421, retrieved 19 February 2023; database .zip file.
  32. ^ Menzel, Wolfgang; Atwell, Eric; Bonaventura, Patrizia; Herron, Daniel; Howarth, Peter; Morton, Rachel; Souter, Clive (May 2000). "The ISLE Corpus of Non-Native Spoken English". Proceedings of the Second International Conference on Language Resources and Evaluation (LREC’00). Athens, Greece: European Language Resources Association (ELRA). Retrieved 13 August 2025.
  33. ^ Zhao, Guanlong; Sonsaat, Sinem; Silpachai, Alif; Lucic, Ivana; Chukharev-Hudilainen, Evgeny; Levis, John; Gutierrez-Osuna, Ricardo (2018). "L2-ARCTIC: A Non-native English Speech Corpus". Interspeech 2018. International Speech Communication Association (ISCA). pp. 2783–2787. doi:10.21437/Interspeech.2018-1110. Retrieved 13 August 2025.
  34. ^ Zhou, Xuanru; Lian, Jiachen; Cho, Cheol Jun; Prabhune, Tejas; Li, Shuhe; Li, William; Ortiz, Rodrigo; Ezzes, Zoe; Vonk, Jet; Morin, Brittany; Bogley, Rian; Wauters, Lisa; Miller, Zachary; Gorno-Tempini, Maria; Anumanchipalli, Gopala (August 2025). "Towards Accurate Phonetic Error Detection Through Phoneme Similarity Modeling". Interspeech 2025. pp. 4738–4742.
  35. ^ Alon, Yonatan (March 2021). "Real-time low-resource phoneme recognition on edge devices". arXiv:2103.13997 [cs.CL].
  36. ^ Yeo, Eunjung (October 2022). "wav2vec2-large-english-TIMIT-phoneme_v3". huggingface.co. Seoul National University Spoken Language Processing Lab. Retrieved 19 August 2025.
  37. ^ Lee, Jooyoung (June 2024). "wav2vec2-large-lv60_phoneme-timit_english_timit-4k". huggingface.co. Seoul National University Spoken Language Processing Lab. Retrieved 19 August 2025.
  38. ^ Babaeian, Ali (2023). "Pronunciation Assessment: Traditional vs Modern Modes". Journal of Education for Sustainable Innovation. 1 (1): 61–68. doi:10.56916/jesi.v1i1.530. Retrieved 2024-12-31.
  39. ^ a b Akhter, Elmoon (June 2025). "The Impact of Human-Machine Interaction on English Pronunciation and Fluency: Case Studies Using AI Speech Assistants". Review of Applied Science and Technology. 4 (2): 473–500. doi:10.63125/1wyj3p84.
  40. ^ Wu, Peter; et al. (14 February 2023), "Speaker-Independent Acoustic-to-Articulatory Speech Inversion", arXiv:2302.06774 [eess.AS]
  41. ^ Cho, Cheol Jun; Mohamed, Abdelrahman; Black, Alan W.; Anumanchipalli, Gopala K. (16 January 2024). "Self-Supervised Models of Speech Infer Universal Articulatory Kinematics". arXiv:2310.10788 [eess.AS].
  42. ^ Mallela, Jhansi; Aluru, Sai Harshitha; Yarra, Chiranjeevi (28 February 2024). Exploring the Use of Self-Supervised Representations for Automatic Syllable Stress Detection. National Conference on Communications. Chennai, India. pp. 1–6. doi:10.1109/NCC60321.2024.10486028.
  43. ^ Sancinetti, Marcelo; et al. (23 May 2022). "A Transfer Learning Approach for Pronunciation Scoring". ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). pp. 6812–6816. arXiv:2111.00976. doi:10.1109/ICASSP43922.2022.9747727. ISBN 978-1-6654-0540-9. S2CID 249437375.
  44. ^ Che Dalim, Che Samihah; et al. (February 2020). "Using augmented reality with speech input for non-native children's language learning" (PDF). International Journal of Human-Computer Studies. 134: 44–64. doi:10.1016/j.ijhcs.2019.10.002. S2CID 208098513. Retrieved 28 February 2023.
  45. ^ Tolba, Rahma M.; et al. (2023). "Mobile Augmented Reality for Learning Phonetics: A Review (2012–2022)". Extended Reality and Metaverse. Springer Proceedings in Business and Economics. Springer International Publishing. pp. 87–98. doi:10.1007/978-3-031-25390-4_7. ISBN 978-3-031-25389-8. Retrieved 28 February 2023.
  46. ^ Fu, Kaiqi; Peng, Linkai; Yang, Nan; Zhou, Shuran (18 July 2024). "Pronunciation Assessment with Multi-modal Large Language Models". arXiv:2407.09209 [cs.CL]. Note that Speak.com produced an earlier commercial system that they had not described in technical detail.
  47. ^ Ma, Rao; Qian, Mengjie; Tang, Siyuan; Bannò, Stefano; Knill, Kate M.; Gales, Mark J.F. (27 May 2025). "Assessment of L2 Oral Proficiency using Speech Large Language Models". arXiv:2505.21148 [cs.CL].
  48. ^ Shankar, Natarajan Balaji; Zhang, Kaiyuan; Mai, Andre; Shi, Mohan; Long, Alaria; Washington, Julie; Morris, Robin; Alwan, Abeer (August 2025). "Leveraging ASR and LLMs for Automated Scoring and Feedback in Children's Spoken Language Assessments". 10th Workshop on Speech and Language Technology in Education (SLaTE). Nijmegen, Netherlands: ISCA. pp. 1–5.
  49. ^ a b Cai, Danwei; Naismith, Ben; Kostromitina, Maria; Teng, Zhongwei; Yancey, Kevin P.; LaFlair, Geoffrey T. (July 2025). "Developing an Automatic Pronunciation Scorer: Aligning Speech Evaluation Models and Applied Linguistics Constructs". Language Learning. doi:10.1111/lang.70000. Proficiency [is] estimated by an ML classifier trained to predict the human CEFR rating of a speaking response
  50. ^ McGhee, Charles; Gales, Mark J. F.; Knill, Kate M. (August 2025). "Comparative Pronunciation Assessment and Feedback with Interpretable Speech Features". 10th Workshop on Speech and Language Technology in Education (SLaTE). Nijmegen, Netherlands: ISCA. pp. 36–40.
  51. ^ Yamanaka, Ryoga; Osa, Kento; Fujiwara, Akari; Geng, Haopeng; Saito, Daisuke; Minematsu, Nobuaki; Inoue, Yusuke (August 2025). "Synthesizing True Golden Voices to Enhance Pronunciation Training for Individual Language Learners". 10th Workshop on Speech and Language Technology in Education (SLaTE). Nijmegen, Netherlands: ISCA. pp. 209–213.
  52. ^ Nguyen, Tuan-Nam; Pham, Ngoc-Quan; Akti, Seymanur; Waibel, Alexander (17–21 August 2025). "Streaming Non-Autoregressive Model for Accent Conversion and Pronunciation Improvement". Interspeech 2025. Rotterdam, The Netherlands: ISCA. pp. 4163–4167.
  53. ^ Mathad, Vikram C.; et al. (2021). "The Impact of Forced-Alignment Errors on Automatic Pronunciation Evaluation" (PDF). 22nd Annual Conference of the International Speech Communication Association (INTERSPEECH 2021). International Speech Communication Association. pp. 176–180. doi:10.21437/interspeech.2021-1403. ISBN 9781713836902. S2CID 239694157. Retrieved 10 March 2023.