Authors: Eleftherios Avramidis, Aljoscha Burchardt, Sabine Hunsicker, Maja Popović, Cindy Tscherwinka, David Vilar, Hans Uszkoreit
Where published: LREC 2014
Abstract URL: https://www.aclweb.org/anthology/L14-1347/
Human translators are the key to evaluating machine translation (MT) quality, and also to answering the so far open question of when and how to use MT in professional translation workflows. This paper describes a corpus developed as the result of a detailed, large-scale human evaluation consisting of three tightly connected tasks: ranking, error classification, and post-editing.