UNIGE document Conference Presentation

Comparison of the quality of two speech translators in emergency settings: A case study with standardized Arabic speaking patients with abdominal pain

Presented at the European Congress of Emergency Medicine (EUSEM 2017), Athens, Greece, 23–27 September 2017.
Abstract: In the context of the current European refugee crisis, 52% of the patients at the Geneva University Hospitals are foreigners and 10% do not speak French at all. In 2015, the languages that caused the most problems were Tigrinya, Arabic and Farsi. Several researchers have pointed out the serious problems such a situation raises for the quality, safety and equity of medical care. BabelDr is a joint project of Geneva University's Faculty of Translation and Interpreting and the Geneva University Hospitals. The BabelDr application is a flexible speech-enabled phrasebook. Its linguistic coverage is organised into domains centred around body parts (abdomen, chest, head, kidney/back). Each of the four domains has a semantic coverage consisting of a prespecified set of utterance-types, but users can use a wide variety of surface forms when speaking to the system. Each utterance-type is associated with a canonical source-language version, which is rendered into the target languages by suitably qualified translation experts. We compare BabelDr with Google Translate (GT). French-speaking doctors were asked to use both systems to diagnose Arabic-speaking patients with abdominal pain, based on two scenarios. For each scenario (appendicitis and cholecystitis), a patient was standardized by the Geneva University Hospitals. Participants were four medical students and six doctors, who each performed two diagnoses, one with BabelDr and one with GT. One week before each test, all participants received a short introduction to both systems and were given 30 minutes to practice. One of the doctors was excluded because her level of French was too low. We analyse the users' interactions with both systems, the quality of the translations, the participants' ability to reach a diagnosis with the two systems, and user satisfaction.
Translation quality was evaluated by three advanced Arabic translation students in terms of adequacy and comprehensibility. Adequacy was annotated on a four-point scale (nonsense/mistranslation/ambiguous/correct) and comprehensibility on a four-point scale (incomprehensible/syntax errors/non-idiomatic/fluent). For BabelDr, 93% of the doctors' interactions sent for translation were judged correct and 94% fluent; for Google Translate, the figures were 38% and 38% respectively. Light's kappa was 0.483 for adequacy and 0.44 for comprehensibility, corresponding to moderate agreement on the Landis and Koch scale. With Google Translate, 5/9 doctors reached the correct diagnosis, against 8/9 with BabelDr. Doctor satisfaction was also higher with BabelDr than with Google Translate (8/9 versus 5/9). More doctors were confident in the translation into the target language with BabelDr than with Google Translate, and more said they could integrate BabelDr into their everyday practice, especially in the emergency room. We conclude that Google Translate's output is less adequate, less accurate and less comprehensible than BabelDr's.
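The inter-rater agreement statistic used above, Light's kappa, is simply the mean of Cohen's kappa over all pairs of raters. A minimal sketch of the computation, using toy adequacy labels for three hypothetical raters (not the study's actual data):

```python
from itertools import combinations

def cohen_kappa(a, b):
    """Cohen's kappa for two raters' labels over the same items."""
    assert len(a) == len(b)
    n = len(a)
    labels = set(a) | set(b)
    # observed agreement: fraction of items both raters label identically
    po = sum(x == y for x, y in zip(a, b)) / n
    # chance agreement expected from each rater's marginal label frequencies
    pe = sum((a.count(lab) / n) * (b.count(lab) / n) for lab in labels)
    if pe == 1.0:
        return 1.0
    return (po - pe) / (1 - pe)

def light_kappa(ratings):
    """Light's kappa: mean pairwise Cohen's kappa across all raters."""
    pairs = list(combinations(ratings, 2))
    return sum(cohen_kappa(a, b) for a, b in pairs) / len(pairs)

# toy adequacy judgements from three hypothetical raters
r1 = ["correct", "correct", "mistranslation", "ambiguous", "correct"]
r2 = ["correct", "ambiguous", "mistranslation", "ambiguous", "correct"]
r3 = ["correct", "correct", "mistranslation", "nonsense", "correct"]
print(round(light_kappa([r1, r2, r3]), 3))
```

On the Landis and Koch scale, values between 0.41 and 0.60 (as in the study's 0.483 and 0.44) are conventionally read as moderate agreement.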
Full text
Supplemental data (80 KB), public document, free access
Citation (ISO format):
SPECHBACH, Hervé et al. Comparison of the quality of two speech translators in emergency settings: A case study with standardized Arabic speaking patients with abdominal pain. In: European Congress of Emergency Medicine (EUSEM 2017), Athens, Greece, 2017. https://archive-ouverte.unige.ch/unige:100812




Deposited on: 2017-12-22
