The assessment of speech difficulty is at the core of translation and interpreting studies and is relevant to all aspects of language learning and cognitive testing. Educators and employers expend a great deal of time and effort selecting material that is appropriate for competence development and for discriminating between different levels of skill. In recent years, attempts have been made to use automated assessment to make the selection of texts more objective and less costly. A typical approach is to use readability or coherence measures to automatically grade texts. However, studies have shown that current tools do not handle difficulty assessment for speech well, and to date no speech-based automated assessor exists. With the development of automatic speech recognition technology, it is possible to turn a large number of speeches into transcripts and to find reliable indicators that predict the perception of difficulty by professionals and novices. This proposed project aims to provide an automated difficulty assessor for speech. It will use simultaneous interpreting as a case study, as this form of speech processing requires immediate local comprehension and production. The project will use a large sample of simultaneous interpreting performances from a diverse group of professionals at the United Nations (UN) and from student interpreters; the latter data will be collected at three time points over a two-year training program. Audio recordings will be used to build corpora of English-to-Chinese interpreting, comprising 45 hours of English speeches and their interpretations. Automatic speech recognition tools will be used, with novel techniques applied to accurately transcribe dysfluent, accented and spoken features in English and Chinese.
This proposed project will go beyond previous studies by adopting a longitudinal, expert–novice design; it will assemble linguistic and prosodic difficulty indicators that have psychological validity in speech processing. The difficulty levels of the English speeches will be judged by professionals and students, and the performances will be rated both objectively and with validated rubrics; these performance measures have been investigated in previous research by the PI and the Co-I. The project will deliver an automated software program and web tool for assessing the difficulty of English speeches. The assessor will help educators and employers gauge a speech’s level of difficulty without listening to the speech or having it interpreted first.
Effective start/end date: 1/01/24 → 31/12/25