Taking MT evaluation metrics to extremes: Beyond correlation with human judgments

M Fomicheva, L Specia - Computational Linguistics, 2019 - direct.mit.edu
Abstract Automatic Machine Translation (MT) evaluation is an active field of research, with a
handful of new metrics devised every year. Evaluation metrics are generally benchmarked …

Multi-hypothesis machine translation evaluation

M Fomicheva, L Specia… - Proceedings of the 58th …, 2020 - eprints.whiterose.ac.uk
Reliably evaluating Machine Translation (MT) through automated metrics is a long-standing
problem. One of the main challenges is the fact that multiple outputs can be equally valid …

Using contextual information for machine translation evaluation

M Fomicheva, N Bel - … of the Tenth International Conference on …, 2016 - aclanthology.org
Abstract Automatic evaluation of Machine Translation (MT) is typically approached by
measuring similarity between the candidate MT and a human reference translation. An …

The role of human reference translation in machine translation evaluation

M Fomicheva - 2017 - dialnet.unirioja.es
Both manual and automatic methods for Machine Translation (MT) evaluation depend heavily on professional human translation. In the …

[CITATION][C] Evaluation metrics and analysis of first annotation round

A Burchardt, F Blain, O Bojar, J Dehdari, YG Dcu… - 2017 - research.tilburguniversity.edu

[CITATION][C] Public Report M1-M18

F Blain, A Burchardt, O Bojar, C Dugast, YG DCU… - 2016