This paper provides evidence regarding the extent to which different human and computer evaluations of spontaneous, engaged speech yield statistically significant measures of proficiency. Three types of measures were used: quantitative measures, common-sense notional measures, and comprehensive measures. The paper thus contributes to the growing body of literature describing the current limits of automatic systems for evaluating spoken proficiency, supports the continued development and implementation of hybrid systems, and offers suggestions for incorporating additional automatic analyses within a hybrid system. Data for this project were created and analyzed as follows: after four weeks of activities related to career development, twenty native English-speaking college freshmen and eleven native Chinese-speaking students in a Freshman English Composition class made recordings in English explaining their career preferences. We then conducted three experiments on both groups, first analyzing the recordings according to current quantitative measures of fluency, then through a qualitative perception study based on everyday notions of fluency, and finally according to the comprehensive rubrics used by the Educational Testing Service (ETS) for evaluating oral proficiency. Correlations reveal the extent to which each type of analysis contributes to our understanding of the speakers' proficiency. They also provide support for a variety of suggestions made by other researchers regarding future goals for enhancing automatic evaluation and utilizing hybrid systems.
Keywords: Spoken Proficiency, Computer Analysis, Human Perception, Correlation, Hybrid System
Assistant Professor of Spanish, Department of Languages and Cultures, West Chester University of Pennsylvania, West Chester, PA, USA
Professor of Linguistics, Department of English, West Chester University of Pennsylvania, West Chester, PA, USA