Fine-Tuned Machine Translation Metrics Struggle in Unseen Domains
OPEN ACCESS
Date
2024-08
Publication Type
Conference Paper
ETH Bibliography
yes
Abstract
We introduce a new, extensive multidimensional quality metrics (MQM) annotated dataset covering 11 language pairs in the biomedical domain. We use this dataset to investigate whether machine translation (MT) metrics that are fine-tuned on human-generated MT quality judgments are robust to domain shifts between training and inference. We find that fine-tuned metrics exhibit a substantial performance drop in the unseen-domain scenario relative to both metrics that rely on the surface form and pre-trained metrics that are not fine-tuned on MT quality judgments.
Publication status
published
Book title
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Pages / Article No.
488–500
Publisher
Association for Computational Linguistics
Event
62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024)