Presentation Information
[4K5-GS-6c-02]Translation Quality Evaluation Using Large Language Models Incorporating Legal Translation Guidelines
〇Kota Kawano1, Isao Goto1, Takashi Ninomiya1 (1. Ehime University)
Keywords:
Large Language Models, Legal Translation, Translation Quality Evaluation, Error Detection
Legal translation requires a high level of accuracy, as it deals with specialized documents. While practical translation guidelines are essential in this domain, there has been limited research on systematically evaluating whether assessment methods can detect guideline violations. This study proposes a meta-evaluation framework for legal translation assessment. We construct a benchmark dataset consisting of translation examples that violate individual items of the Guidelines for the Translation of Japanese Laws into English, enabling item-wise measurement of violation detection performance. As a case study, we implement an LLM-based error detection method and evaluate it within the proposed framework. Experimental results show that explicitly providing the guidelines improves detection rates, demonstrating that the framework can quantitatively compare evaluation methods based on their ability to identify guideline violations.
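The guideline-aware detection step described in the abstract can be sketched as prompt construction: the guidelines are inserted verbatim into the instruction given to the LLM. The function name, prompt wording, and the example guideline item below are illustrative assumptions, not the authors' actual implementation or the official guideline text.

```python
# Sketch of guideline-aware prompt construction for LLM-based error
# detection (illustrative only; not the authors' actual prompts).

def build_detection_prompt(source_ja, translation_en, guideline_items):
    """Build a prompt asking an LLM to flag guideline violations."""
    numbered = "\n".join(
        f"{i}. {item}" for i, item in enumerate(guideline_items, 1)
    )
    return (
        "You are evaluating an English translation of a Japanese law.\n"
        "Translation guidelines:\n"
        f"{numbered}\n\n"
        f"Source (Japanese): {source_ja}\n"
        f"Translation (English): {translation_en}\n\n"
        "List the numbers of any guidelines the translation violates, "
        "or answer 'none'."
    )

# Hypothetical guideline item, paraphrased for illustration:
guidelines = [
    "Render defined legal terms using the standard bilingual dictionary."
]
prompt = build_detection_prompt(
    "(Japanese source text)", "(English translation)", guidelines
)
```

The experimental comparison in the abstract corresponds to running such a prompt with and without the `guideline_items` section and measuring, per guideline item, how often injected violations are flagged.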
