
Durham Research Online

An AI-Based Feedback Visualisation System for Speech Training

Wynn, Adam T., Wang, Jingyun, Umezawa, Kaoru and Cristea, Alexandra I. (2022) 'An AI-Based Feedback Visualisation System for Speech Training', in Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners' and Doctoral Consortium. Cham: Springer, pp. 510-514. Lecture Notes in Computer Science, 13356.


This paper proposes providing automatic feedback to support public speech training. For the first time, speech feedback is presented on a visual dashboard that includes not only transcription and pitch information but also emotion information. A method is proposed to perform emotion classification using state-of-the-art convolutional neural networks (CNNs). Moreover, this approach can be used for speech analysis purposes. A case study exploring pitch in Japanese speech is also presented.
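The abstract does not detail the implementation, but a CNN emotion classifier for speech typically operates on a time-frequency representation of the waveform. As an illustration only, here is a minimal numpy sketch of such a pipeline (log-spectrogram, one convolutional layer with ReLU, global average pooling, and a softmax head); all kernel values, weights, and emotion labels are hypothetical and not taken from the paper:

```python
import numpy as np

def frame_signal(signal, frame_len=400, hop=160):
    """Slice a 1-D waveform into overlapping frames (25 ms / 10 ms at 16 kHz)."""
    n = 1 + max(0, (len(signal) - frame_len) // hop)
    return np.stack([signal[i * hop: i * hop + frame_len] for i in range(n)])

def log_spectrogram(signal):
    """Windowed magnitude spectrogram on a log scale, shape (time, freq)."""
    frames = frame_signal(signal)
    window = np.hanning(frames.shape[1])
    spec = np.abs(np.fft.rfft(frames * window, axis=1))
    return np.log1p(spec)

def conv2d_valid(x, kernel):
    """Naive 'valid' 2-D cross-correlation; sufficient for a sketch."""
    kh, kw = kernel.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def classify_emotion(signal, kernels, weights, labels):
    """One conv layer + ReLU + global average pooling + linear softmax head."""
    spec = log_spectrogram(signal)
    feats = np.array([np.maximum(conv2d_valid(spec, k), 0.0).mean()
                      for k in kernels])
    logits = weights @ feats
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return labels[int(np.argmax(probs))], probs

# Usage with random (untrained) parameters, purely to show the data flow:
rng = np.random.default_rng(0)
signal = rng.standard_normal(16000)                      # 1 s at 16 kHz
kernels = [rng.standard_normal((3, 3)) for _ in range(3)]
weights = rng.standard_normal((4, 3))                    # 4 emotions, 3 features
labels = ["neutral", "happy", "sad", "angry"]            # hypothetical label set
label, probs = classify_emotion(signal, kernels, weights, labels)
```

In a trained system the kernels and weights would be learned from labelled emotional speech; the sketch only shows the shape of the computation a CNN of this kind performs.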

Item Type: Book chapter
Full text: Publisher-imposed embargo until 26 July 2023.
(AM) Accepted Manuscript
File format: PDF
Publisher Web site:
Publisher statement: The final authenticated version is available online at
Date accepted: No date available
Date deposited: 08 August 2022
Date of first online publication: 26 July 2022
Date first made open access: 26 July 2023
