Scientific article
OA Policy
English

Performance of machine translators in translating French medical research abstracts to English: A comparative study of DeepL, Google Translate, and CUBBITT

Published in: PloS one, vol. 19, no. 2, p. e0297183
Publication date: 2024-02-01
First online date: 2024-02-01
Abstract

Background: Non-English-speaking researchers may find it difficult to write articles in English and may be tempted to use machine translators (MTs) to facilitate their task. We compared the performance of DeepL, Google Translate, and CUBBITT for the translation of abstracts from French to English.

Methods: We selected ten abstracts published in 2021 in two high-impact bilingual medical journals (CMAJ and Canadian Family Physician) and used nine metrics of Recall-Oriented Understudy for Gisting Evaluation (ROUGE-1 recall/precision/F1-score, ROUGE-2 recall/precision/F1-score, and ROUGE-L recall/precision/F1-score) to evaluate the accuracy of the translation (scores ranging from zero to one [= maximum]). We also used the fluency score assigned by ten raters to evaluate the stylistic quality of the translation (ranging from ten [= incomprehensible] to fifty [= flawless English]). We used Kruskal-Wallis tests to compare the medians between the three MTs. For the human evaluation, we also examined the original English text.

Results: Differences in medians were not statistically significant for the nine metrics of ROUGE (medians: min-max = 0.5246–0.7392 for DeepL, 0.4634–0.7200 for Google Translate, 0.4815–0.7316 for CUBBITT, all p-values > 0.10). For the human evaluation, CUBBITT tended to score higher than DeepL, Google Translate, and the original English text (median = 43 for CUBBITT, vs. 39, 38, and 40, respectively, p-value = 0.003).

Conclusion: The three MTs performed similarly when tested with ROUGE, but CUBBITT was slightly better than the other two using human evaluation. Although we only included abstracts and did not evaluate the time required for post-editing, we believe that French-speaking researchers could use DeepL, Google Translate, or CUBBITT when writing articles in English.
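
For readers who want to reproduce this kind of evaluation, the sketch below illustrates the general approach described in the Methods: computing the nine ROUGE metrics for each abstract and comparing the three translators with a Kruskal-Wallis test. It is a minimal illustration assuming the open-source rouge-score and scipy Python packages, with toy sentences standing in for the study's abstracts; the helper names and example data are illustrative, not the authors' code or data.

# Minimal sketch: ROUGE evaluation and Kruskal-Wallis comparison (illustrative only).
from rouge_score import rouge_scorer
from scipy.stats import kruskal

def rouge_metrics(reference, candidate):
    # Nine ROUGE metrics: recall, precision, and F1 for ROUGE-1, ROUGE-2, and ROUGE-L.
    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
    scores = scorer.score(reference, candidate)
    return {
        f"{name}_{field}": getattr(score, field)
        for name, score in scores.items()
        for field in ("recall", "precision", "fmeasure")
    }

# Toy stand-ins for the official English abstracts (references) and the
# three machine translations of the French abstracts.
references = [
    "The patients were followed for two years after hospital discharge.",
    "We measured blood pressure at baseline and at six months.",
]
translations = {
    "DeepL": [
        "Patients were followed for two years after leaving the hospital.",
        "We measured the blood pressure at baseline and after six months.",
    ],
    "Google Translate": [
        "The patients were monitored for two years after hospital discharge.",
        "Blood pressure was measured at the start and at six months.",
    ],
    "CUBBITT": [
        "Patients were followed up for two years after hospital discharge.",
        "We measured blood pressure at baseline and at six months.",
    ],
}

# One ROUGE-1 F1 score per abstract and per translator, then a Kruskal-Wallis
# test comparing the three translators (repeated for each of the nine metrics in the study).
f1_by_mt = {
    mt: [rouge_metrics(ref, cand)["rouge1_fmeasure"]
         for ref, cand in zip(references, outputs)]
    for mt, outputs in translations.items()
}
stat, p_value = kruskal(*f1_by_mt.values())
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p_value:.3f}")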

Keywords
  • Biomedical Research
  • Canada
  • Humans
  • Publications
  • Search Engine
  • Translating
Citation (ISO format)
SEBOE, Paul, DE LUCIA, Sylvain. Performance of machine translators in translating French medical research abstracts to English: A comparative study of DeepL, Google Translate, and CUBBITT. In: PloS one, 2024, vol. 19, n° 2, p. e0297183. doi: 10.1371/journal.pone.0297183
Main files (1)
Article (Published version)
Identifiers
Additional URL for this publication: https://dx.plos.org/10.1371/journal.pone.0297183
Journal ISSN: 1932-6203

Technical information

Creation: 02/02/2024 7:22:29 AM
First validation: 03/18/2024 6:54:31 PM
Update time: 03/18/2024 6:54:31 PM
Status update: 03/18/2024 6:54:31 PM
Last indexation: 11/01/2024 8:55:41 AM
All rights reserved by Archive ouverte UNIGE and the University of Geneva