Hallucinations in large multilingual translation models

NM Guerreiro, DM Alves, J Waldendorf… - Transactions of the …, 2023 - direct.mit.edu
Hallucinated translations can severely undermine trust and raise safety issues when machine
translation systems are deployed in the wild. Previous research on the topic has focused on …

Measure and improve robustness in NLP models: A survey

X Wang, H Wang, D Yang - arXiv preprint arXiv:2112.08313, 2021 - arxiv.org
As NLP models have achieved state-of-the-art performance on benchmarks and gained wide
application, it has become increasingly important to ensure the safe deployment of these …

Robust neural machine translation with doubly adversarial inputs

Y Cheng, L Jiang, W Macherey - arXiv preprint arXiv:1906.02443, 2019 - arxiv.org
Neural machine translation (NMT) is often vulnerable to noisy perturbations
in the input. We propose an approach to improving the robustness of NMT models, which …
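
The snippet cuts off before the method details; below is a minimal sketch of the kind of gradient-guided word substitution that such adversarial-input approaches build on, assuming access to the gradient of the translation loss with respect to the source token embeddings. The function name, candidate scoring, and toy tensors are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: pick word substitutions whose embedding shift best aligns with the loss
# gradient, i.e. that increase the translation loss most to first order.
import torch

def gradient_guided_substitute(emb_weight, token_ids, token_grads, n_swaps=1):
    """emb_weight: (vocab, dim) embedding table; token_ids: (seq,) source ids;
    token_grads: (seq, dim) d(loss)/d(embedding) for each source token."""
    cur_emb = emb_weight[token_ids]                       # (seq, dim)
    # score[i, v] ~ first-order loss change if token i is replaced by word v:
    # (E[v] - E[token_i]) . grad_i
    scores = token_grads @ emb_weight.T                   # (seq, vocab)
    scores -= (token_grads * cur_emb).sum(-1, keepdim=True)
    scores.scatter_(1, token_ids.unsqueeze(1), float("-inf"))  # forbid no-op swaps
    best_gain, best_word = scores.max(dim=1)              # best replacement per position
    positions = best_gain.topk(min(n_swaps, len(token_ids))).indices
    adv_ids = token_ids.clone()
    adv_ids[positions] = best_word[positions]
    return adv_ids

# Toy usage with random tensors standing in for a real NMT encoder's embedding
# table and the gradient of the translation loss.
vocab, dim = 100, 16
emb = torch.randn(vocab, dim)
src = torch.randint(0, vocab, (7,))
grads = torch.randn(7, dim)
print(gradient_guided_substitute(emb, src, grads, n_swaps=2))
```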

Understanding and detecting hallucinations in neural machine translation via model introspection

W Xu, S Agrawal, E Briakou, MJ Martindale… - Transactions of the …, 2023 - direct.mit.edu
Neural sequence generation models are known to “hallucinate,” producing outputs that
are unrelated to the source text. These hallucinations are potentially harmful, yet it remains …
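
The paper detects hallucinations by introspecting how much the model's predictions actually depend on the source. As a rough, hedged illustration of that idea, the sketch below uses mean cross-attention mass on the source as a proxy signal; the proxy, the threshold, and the tensor layout are simplifying assumptions, not the paper's token-contribution measure.

```python
# Sketch: flag outputs whose decoder puts very little attention mass on content
# source tokens, a pattern often associated with hallucinated translations.
import torch

def low_source_attention(cross_attn, threshold=0.4):
    """cross_attn: (layers, heads, tgt_len, src_len) attention weights.
    Returns True if generated tokens, on average, attend weakly to the source
    content (here: every position except the final EOS)."""
    src_dist = cross_attn.mean(dim=(0, 1, 2))   # average distribution over source
    content_mass = src_dist[:-1].sum().item()   # ignore mass sunk on the EOS slot
    return content_mass < threshold

# Toy usage: a fake attention tensor where all mass collapses onto the last
# source position, so the output would be flagged for inspection.
attn = torch.zeros(4, 8, 10, 6)
attn[..., -1] = 1.0
print(low_source_attention(attn))  # -> True
```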

Domain adaptation and multi-domain adaptation for neural machine translation: A survey

D Saunders - Journal of Artificial Intelligence Research, 2022 - jair.org
The development of deep learning techniques has allowed Neural Machine Translation
(NMT) models to become extremely powerful, given sufficient training data and training time …

Faithfulness in natural language generation: A systematic survey of analysis, evaluation and optimization methods

W Li, W Wu, M Chen, J Liu, X Xiao, H Wu - arXiv preprint arXiv:2203.05227, 2022 - arxiv.org
Natural Language Generation (NLG) has made great progress in recent years due to the
development of deep learning techniques such as pre-trained language models. This …

AdvAug: Robust adversarial augmentation for neural machine translation

Y Cheng, L Jiang, W Macherey, J Eisenstein - arXiv preprint arXiv …, 2020 - arxiv.org
In this paper, we propose a new adversarial augmentation method for Neural Machine
Translation (NMT). The main idea is to minimize the vicinal risk over virtual sentences …
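
To make the "virtual sentences" idea concrete, here is a minimal sketch of training on convex combinations of the embeddings of two observed (or perturbed) sentences, mixup-style, rather than on discrete sentences alone. The padding assumption, the Beta parameter, and the function name are illustrative, not the paper's exact vicinity distributions.

```python
# Sketch: build a "virtual sentence" by interpolating the token embeddings of
# two source sentences; the NMT loss would be mixed with the same coefficient.
import torch

def virtual_sentence(emb_a, emb_b, alpha=0.2):
    """emb_a, emb_b: (seq_len, dim) token embeddings of two sentences,
    assumed padded to the same length. Returns the interpolation and lambda."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * emb_a + (1.0 - lam) * emb_b, lam

# Toy usage: the virtual embeddings replace a real sentence at the encoder input.
a, b = torch.randn(12, 512), torch.randn(12, 512)
virt, lam = virtual_sentence(a, b)
print(virt.shape, float(lam))
```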

Automatic testing and improvement of machine translation

Z Sun, JM Zhang, M Harman, M Papadakis… - Proceedings of the ACM …, 2020 - dl.acm.org
This paper presents TransRepair, a fully automatic approach for testing and repairing the
consistency of machine translation systems. TransRepair combines mutation with …
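
Below is a minimal sketch of the metamorphic-testing step this kind of approach relies on: mutate one word of a source sentence with a similar word, translate both versions, and flag the pair if the two translations are too dissimilar. The similarity metric, the threshold, and the fake translator are illustrative assumptions, not TransRepair's exact configuration.

```python
# Sketch: consistency testing of a translator via single-word mutation.
from difflib import SequenceMatcher

def consistency(trans_orig: str, trans_mutant: str) -> float:
    """Token-level similarity of two translations (1.0 = identical)."""
    return SequenceMatcher(None, trans_orig.split(), trans_mutant.split()).ratio()

def test_translator(translate, sentence: str, word: str, similar_word: str,
                    threshold: float = 0.8):
    """Returns (is_consistent, original translation, mutant translation)."""
    mutant = sentence.replace(word, similar_word, 1)
    t_orig, t_mut = translate(sentence), translate(mutant)
    return consistency(t_orig, t_mut) >= threshold, t_orig, t_mut

# Toy usage with a fake translator; a real test would call an MT system here.
fake = {"I like the red car": "Ich mag das rote Auto",
        "I like the green car": "Grünes Auto"}          # suspiciously different
ok, t1, t2 = test_translator(fake.get, "I like the red car", "red", "green")
print(ok, "|", t1, "|", t2)                              # -> False (inconsistent)
```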

On the use of BERT for neural machine translation

S Clinchant, KW Jung, V Nikoulina - arXiv preprint arXiv:1909.12744, 2019 - arxiv.org
Exploiting large pretrained models for various NMT tasks has gained a lot of visibility
recently. In this work, we study how BERT pretrained models could be exploited for …
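
One common way to exploit BERT in this setting is to initialize the translation encoder from a pretrained checkpoint and fine-tune on parallel data; the sketch below shows that pattern with Hugging Face's EncoderDecoderModel. The checkpoint names and the encoder-decoder wiring are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: warm-start an encoder-decoder NMT model from BERT checkpoints and
# compute the usual cross-entropy fine-tuning loss on a (source, target) pair.
from transformers import BertTokenizer, EncoderDecoderModel

tok = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-multilingual-cased",   # encoder initialized from BERT
    "bert-base-multilingual-cased",   # decoder: same weights + fresh cross-attention
)
model.config.decoder_start_token_id = tok.cls_token_id
model.config.pad_token_id = tok.pad_token_id

batch = tok(["How are you?"], return_tensors="pt")
labels = tok(["Wie geht es dir?"], return_tensors="pt").input_ids
loss = model(input_ids=batch.input_ids,
             attention_mask=batch.attention_mask,
             labels=labels).loss
print(float(loss))
```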

Why should adversarial perturbations be imperceptible? Rethink the research paradigm in adversarial NLP

Y Chen, H Gao, G Cui, F Qi, L Huang, Z Liu… - arXiv preprint arXiv …, 2022 - arxiv.org
Textual adversarial samples play important roles in multiple subfields of NLP research,
including security, evaluation, explainability, and data augmentation. However, most work …