Fine-Tuned Machine Translation Metrics Struggle in Unseen Domains

Date

2024-08

Publication Type

Conference Paper

ETH Bibliography

yes

Abstract

We introduce a new, extensive Multidimensional Quality Metrics (MQM) annotated dataset covering 11 language pairs in the biomedical domain. We use this dataset to investigate whether machine translation (MT) metrics that are fine-tuned on human-generated MT quality judgments are robust to domain shifts between training and inference. We find that fine-tuned metrics exhibit a substantial performance drop in the unseen-domain scenario relative both to metrics that rely on the surface form and to pre-trained metrics that are not fine-tuned on MT quality judgments.

Publication status

published

Book title

Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Pages / Article No.

488–500

Publisher

Association for Computational Linguistics

Event

62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024)
