Conference presentation
English

Comparison of the quality of two speech translators in emergency settings: A case study with standardized Arabic speaking patients with abdominal pain

Presented at European Congress of Emergency Medicine (EUSEM 2017), Athens (Greece), 23-27 September 2017
Publication date 2017
Abstract

In the context of the current European refugee crisis, 52% of the patients at the Geneva University Hospitals are foreigners and 10% do not speak French at all. In 2015, the languages that caused the most problems were Tigrinya, Arabic and Farsi. Several researchers have pointed out the serious problems such a situation poses for the quality, safety and equity of medical care. BabelDr is a joint project of Geneva University's Faculty of Translation and Interpreting and the Geneva University Hospitals. The BabelDr application is a flexible speech-enabled phrasebook. Its linguistic coverage is organised into domains centered around body parts (abdomen, chest, head, kidney/back). Each of the four domains has a semantic coverage consisting of a prespecified set of utterance-types, but users can use a wide variety of surface forms when speaking to the system. Each utterance-type is associated with a canonical source sentence, which is rendered into the target languages by suitably qualified translation experts.

We compared BabelDr with Google Translate. French-speaking doctors were asked to use both systems to diagnose Arabic-speaking patients with abdominal pain, based on two scenarios. For each scenario (appendicitis and cholecystitis), a patient was standardized by the Geneva University Hospitals. Participants were four medical students and six doctors, who each performed two diagnoses, one with BabelDr and one with Google Translate. One week before each test, all participants received a short introduction to both systems and were given 30 minutes to practice. One of the doctors was excluded because her level of French was too low. We analysed the users' interactions with both systems, the quality of the translations, the participants' ability to reach a diagnosis with the two systems, as well as user satisfaction. Translation quality was evaluated by three advanced Arabic translation students in terms of adequacy, annotated on a four-point scale (nonsense/mistranslation/ambiguous/correct), and comprehensibility, annotated on a four-point scale (incomprehensible/syntax errors/non-idiomatic/fluent).

For BabelDr, 93% of the doctors' interactions sent for translation were judged correct and 94% fluent; for Google Translate, the corresponding figures were 38% and 38%. Light's kappa was 0.483 for adequacy and 0.44 for comprehensibility, corresponding to moderate agreement according to Landis and Koch. With Google Translate, 5/9 doctors found the correct diagnosis, compared with 8/9 with BabelDr. Doctor satisfaction was higher with BabelDr than with Google Translate: 8/9 versus 5/9. More doctors were confident in the translation into the target language with BabelDr than with Google Translate, and more said they could integrate BabelDr into their everyday practice, mostly in the emergency room. We conclude that Google Translate is less adequate, less accurate and less comprehensible than BabelDr.
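As a rough illustration of the phrasebook design described in the abstract, the sketch below shows how many recognised surface forms can map to a single utterance-type, whose canonical source sentence is then looked up in a table of pre-translated, expert-validated renderings. All names and example sentences are hypothetical and are not taken from the actual BabelDr system.

```python
# Minimal sketch of a speech-enabled phrasebook: surface forms -> utterance-type
# -> canonical sentence -> stored expert translation. Placeholder data only.

# Several surface forms recognised by the speech front end map to one utterance-type.
SURFACE_TO_UTTERANCE_TYPE = {
    "avez-vous mal au ventre": "abdominal_pain_present",
    "est-ce que votre ventre vous fait mal": "abdominal_pain_present",
    "souffrez-vous de douleurs abdominales": "abdominal_pain_present",
}

# Each utterance-type has one canonical source sentence.
CANONICAL_SENTENCE = {
    "abdominal_pain_present": "Avez-vous mal au ventre ?",
}

# Canonical sentences are pre-translated offline by qualified translators;
# the Arabic rendering is left as a placeholder here.
TRANSLATIONS = {
    "Avez-vous mal au ventre ?": {"ar": "<expert Arabic translation>"},
}

def translate(recognised_speech: str, target_lang: str = "ar") -> str:
    """Map recognised speech to its canonical sentence, then return the stored translation."""
    utt_type = SURFACE_TO_UTTERANCE_TYPE[recognised_speech.lower().strip("? ")]
    canonical = CANONICAL_SENTENCE[utt_type]
    return TRANSLATIONS[canonical][target_lang]

if __name__ == "__main__":
    print(translate("Est-ce que votre ventre vous fait mal"))
```

Because every output is a pre-translated canonical sentence rather than machine translation of free speech, translation adequacy is controlled at authoring time; the open problem is mapping varied surface forms to the right utterance-type.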
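The reported inter-annotator agreement is Light's kappa, i.e. the mean of pairwise Cohen's kappa over the three annotators. The snippet below is a sketch of that computation using scikit-learn; the ratings are invented placeholders, not the study's actual annotations.

```python
# Light's kappa: average Cohen's kappa over all annotator pairs (illustrative data).
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Adequacy labels from three annotators for the same translated utterances,
# on the four-point scale: nonsense / mistranslation / ambiguous / correct.
ratings = [
    ["correct", "correct", "ambiguous", "mistranslation", "correct", "nonsense"],    # annotator 1
    ["correct", "ambiguous", "ambiguous", "mistranslation", "correct", "nonsense"],  # annotator 2
    ["correct", "correct", "correct", "mistranslation", "ambiguous", "nonsense"],    # annotator 3
]

def lights_kappa(all_ratings):
    """Mean of Cohen's kappa over every pair of annotators."""
    pairs = list(combinations(all_ratings, 2))
    return sum(cohen_kappa_score(a, b) for a, b in pairs) / len(pairs)

print(f"Light's kappa: {lights_kappa(ratings):.3f}")
```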

Citation (ISO format)
SPECHBACH, Hervé et al. Comparison of the quality of two speech translators in emergency settings: A case study with standardized Arabic speaking patients with abdominal pain. In: European Congress of Emergency Medicine (EUSEM 2017). Athens (Greece). 2017.
Secondary files (1)
Supplemental data
Access level: Public
Identifiers
  • PID: unige:100812
1148 views
97 downloads

Technical information

Creation: 2017/10/26 10:59:00
First validation: 2017/10/26 10:59:00
Update time: 2023/03/15 07:42:10
Status update: 2023/03/15 07:42:10
Last indexation: 2024/05/02 19:52:02
All rights reserved by Archive ouverte UNIGE and the University of Geneva