

Comparing the Quality of Neural Machine Translation and Professional Post-Editing

Jennifer Vardaro, Silvia Hansen-Schirra, Moritz Schaeffer

subject

Machine translation; Natural language processing; Artificial intelligence; Computer science; Terminology; Annotation; Quality assurance; Quality (business); German; Sentence; Transitive relation; Languages & linguistics

description

This empirical corpus study explores the quality of neural machine translation (NMT) output and its post-edited versions (NMTPE) at the German Department of the European Commission’s Directorate-General for Translation (DGT). NMT output, NMTPE, and the corresponding revisions (REV) are evaluated with the automatic error annotation tool Hjerson (Popović 2011) and the more fine-grained manual MQM framework (Lommel 2014). Results show that the quality assurance measures taken by post-editors and revisers at the DGT are most often necessary for lexical errors. More specifically, if post-editors correct mistranslations, terminology errors, or stylistic errors in an NMT sentence, revisers are likely to correct the same type of error in the same sentence, suggesting a certain transitivity between the NMT system and human post-editors.
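To make the transitivity finding concrete, the sketch below shows one way such per-sentence category overlap could be computed, assuming the MQM annotations for each pass are available as sets of category labels keyed by sentence ID. The variable names (nmtpe_edits, rev_edits), the sample data, and the function category_overlap are hypothetical illustrations, not the study's actual data or analysis code.

    from collections import defaultdict

    # Hypothetical per-sentence annotations: for each pass, a map from
    # sentence ID to the set of MQM error categories corrected there.
    # Category labels loosely follow the MQM typology (Lommel 2014);
    # the values here are invented for illustration only.
    nmtpe_edits = {
        1: {"mistranslation", "terminology"},
        2: {"style"},
        3: set(),
    }
    rev_edits = {
        1: {"mistranslation"},
        2: {"style", "grammar"},
        3: {"omission"},
    }

    def category_overlap(first_pass, second_pass):
        """Count, per error category, how often the second pass (revision)
        corrects a category in a sentence where the first pass
        (post-editing) corrected that same category."""
        counts = defaultdict(lambda: {"both": 0, "first_only": 0})
        for sent_id, first_cats in first_pass.items():
            second_cats = second_pass.get(sent_id, set())
            for cat in first_cats:
                key = "both" if cat in second_cats else "first_only"
                counts[cat][key] += 1
        return dict(counts)

    if __name__ == "__main__":
        for cat, c in sorted(category_overlap(nmtpe_edits, rev_edits).items()):
            total = c["both"] + c["first_only"]
            print(f"{cat}: {c['both']}/{total} sentences re-corrected in revision")

A high "both" share for a category such as mistranslation would correspond to the transitivity pattern reported in the abstract: errors the post-editor touched tend to be touched again by the reviser in the same sentence.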

https://doi.org/10.1109/qomex.2019.8743218