Conference Paper
We explore the suitability of self-attention models for character-level neural machine translation. We test the standard transformer model, as well as a novel variant in which the encoder block combines information from nearby characters using convolutions. We perform extensive experiments on WMT and UN datasets, testing both bilingual and multilingual translation to English using up to three input languages (French, Spanish, and Chinese). Our transformer variant consistently outperforms the standard transformer at the character level and converges faster while learning more robust character-level alignments.
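The abstract does not spell out how the convolutions enter the encoder block. As an illustrative sketch only, not the paper's actual design, the following PyTorch module (the class name, hyperparameters, and placement of the convolution are all assumptions) shows one way an encoder block could mix information from nearby characters with a depthwise 1D convolution before standard multi-head self-attention:

```python
import torch
import torch.nn as nn

class ConvSelfAttentionBlock(nn.Module):
    """Hypothetical encoder block: a depthwise 1D convolution mixes
    nearby character states before multi-head self-attention."""

    def __init__(self, d_model: int = 512, n_heads: int = 8, kernel_size: int = 3):
        super().__init__()
        # Depthwise convolution over the character (sequence) axis;
        # padding keeps the sequence length unchanged for odd kernels.
        self.conv = nn.Conv1d(
            d_model, d_model, kernel_size,
            padding=kernel_size // 2, groups=d_model,
        )
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.ReLU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) character embeddings.
        # Conv1d expects channels first, so transpose around the conv.
        c = self.conv(x.transpose(1, 2)).transpose(1, 2)
        x = self.norm1(x + c)                       # local character mixing
        a, _ = self.attn(x, x, x, need_weights=False)
        x = self.norm2(x + a)                       # global self-attention
        return self.norm3(x + self.ffn(x))          # position-wise FFN
```

The design intuition, consistent with the abstract's claim, is that convolutions give each character state a summary of its neighbors, so attention operates over locally informed representations rather than raw single-character embeddings.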
Journal / series: 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020)
Publisher: Association for Computational Linguistics (ACL)