<?xml version="1.0" standalone="yes"?>
<Paper uid="W05-0836">
  <Title>Training and Evaluating Error Minimization Rules for Statistical Machine Translation</Title>
  <Section position="7" start_page="213" end_page="213" type="concl">
    <SectionTitle>
7 Conclusions and Further Work
</SectionTitle>
    <Paragraph position="0"> This work describes a general algorithm for efficiently optimizing error counts under an arbitrary loss function, allowing us to compare and evaluate the impact of alternative decision rules for statistical machine translation. Our results demonstrate the sensitivity of the translation process to the choice of loss function at the decoding and reordering stages. As phrase-based translation and reordering models come to dominate the state of the art in machine translation, it will become increasingly important to understand the nature and consistency of n-best list training approaches. Our results are reported on a complete package of translation tools and resources, allowing the reader to easily recreate and build upon our framework. Further research might lie in finding efficient representations of Bayes Risk loss functions within the decoding process (rather than using MBR only to rescore n-best lists), as well as in analyses of different language pairs from the available Europarl data. We have shown score sampling to be an effective training method for conducting these experiments, and we hope to establish its use in the changing landscape of automatic translation evaluation. The source code is available at: www.cs.cmu.edu/~zollmann/scoresampling/</Paragraph>
  </Section>
</Paper>