Proceedings chapter (Author postprint)
Methodological Issues in Evaluating a Spoken CALL Game: Can Crowdsourcing Help Us Perform Controlled Experiments?
|Published in||Proceedings of the Workshop on Speech and Language Technology in Education (SLaTE), Grenoble (France), 30-31 Aug. & 1 Sept. 2013|
|Abstract||We summarise a series of experiments carried out over the last three years on CALL-SLT, a speech-enabled, web-based CALL game for learning and improving fluency in domain language, focussing on the methodological aspects. In particular, we argue that our previous evaluations were systematically flawed by the lack of a control group. We present a detailed description of our most recent evaluation, in which 130 subjects, recruited using crowdsourcing methods, followed a short course in basic French over a period of one week; 24 subjects completed the course. About a third of the subjects (half of those who completed the course) were assigned to a control group that used a version of the system with speech recognition feedback disabled. Subjects in both groups showed significant improvements in language skills over the duration of the experiment, but the improvements were significantly larger for the non-control subjects. We conclude that this type of experiment opens up interesting new ways to attack the difficult problem of performing controlled experiments with CALL applications.|
|Keywords||CALL — Speech recognition — Evaluation — Methodology — Crowdsourcing|
|RAYNER, Emmanuel, TSOURAKIS, Nikolaos. Methodological Issues in Evaluating a Spoken CALL Game: Can Crowdsourcing Help Us Perform Controlled Experiments? In: Proceedings of the Workshop on Speech and Language Technology in Education (SLaTE). Grenoble (France). [s.l.] : [s.n.], 2013. https://archive-ouverte.unige.ch/unige:30953|