Abstract
In Switzerland, legal translation is uniquely important due to the country's four official languages and requirements for multilingual legal documentation. However, this process traditionally relies on professionals who must be both legal experts and skilled translators, creating bottlenecks and impeding effective access to justice. To address this challenge, we introduce SwiLTra-Bench, a comprehensive multilingual benchmark of over 180K aligned Swiss legal translation pairs comprising laws, headnotes, and press releases across all Swiss languages along with English, designed to evaluate LLM-based translation systems. Our systematic evaluation reveals that frontier models achieve superior translation performance across all document types, while specialized translation systems excel specifically in laws but under-perform in headnotes. Through rigorous testing and human expert validation, we demonstrate that while fine-tuning open SLMs significantly improves their translation quality, they still lag behind the best zero-shot prompted frontier models such as Claude-3.5-Sonnet. Additionally, we present SwiLTra-Judge, a specialized LLM evaluation system that aligns best with human expert assessments.
Permanent link
https://doi.org/10.3929/ethz-b-000729956

Publication status
published

Journal / series
Center for Law & Economics Working Paper Series

Publisher
ETH Zurich, Center for Law & Economics

Edition / version
First version

Subject
Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)

Organisational unit
03795 - Bechtold, Stefan / Bechtold, Stefan
Related publications and datasets
Is supplemented by: https://huggingface.co/collections/joelniklaus/swiltra-bench-67c569a2ada47e4549733deb
Is variant form of: https://doi.org/10.48550/arXiv.2503.01372
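The Hugging Face collection linked above gathers the SwiLTra-Bench datasets and models. As a minimal sketch of how one might enumerate and load them programmatically, the snippet below uses the collection slug from the URL on this page; the individual dataset identifiers, configuration names, and the "train" split are assumptions and should be checked against the dataset cards on the Hub.

```python
# Sketch: list the items in the SwiLTra-Bench collection and load one dataset.
# The collection slug is taken from the link above; dataset ids, configs, and
# split names are assumptions and may differ from the actual repositories.
from huggingface_hub import get_collection
from datasets import load_dataset

COLLECTION_SLUG = "joelniklaus/swiltra-bench-67c569a2ada47e4549733deb"

# Enumerate every item (datasets, models, papers) registered in the collection.
collection = get_collection(COLLECTION_SLUG)
dataset_ids = [item.item_id for item in collection.items if item.item_type == "dataset"]
print(dataset_ids)

# Load the first dataset; the default config and "train" split are guesses.
if dataset_ids:
    ds = load_dataset(dataset_ids[0], split="train")
    print(ds)
```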
ETH Bibliography
yes