<?xml version="1.0" standalone="yes"?>
<Paper uid="P06-2065">
  <Title>Unsupervised Analysis for Decipherment Problems</Title>
  <Section position="11" start_page="504" end_page="505" type="concl">
    <SectionTitle>
9 Conclusion
</SectionTitle>
    <Paragraph position="0"> We have discussed several decipherment problems and shown that they can all be attacked by the same basic method. Our primary contribution is a collection of first empirical results on a number of new problems. We also studied the following techniques in action:
- executing random restarts
- cubing learned channel probabilities before decoding
- using uniform probabilities for parameters of less interest
- checking learned P(c) against the P(c) of a correct model
- using a well-smoothed source model P(p)
- bootstrapping larger-parameter models with smaller ones
- appealing to linguistic universals to constrain models
Results on all of our applications were substantially improved using these techniques, and a secondary contribution is to show that they lead to robust improvements across a range of decipherment problems.</Paragraph>
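One technique in the list above, cubing learned channel probabilities before decoding, can be sketched concretely: the decoder picks argmax over p of P(p) * P(c|p)^3, which sharpens the channel model's preferences relative to the source model. The following is a minimal illustrative sketch, not the paper's implementation; the toy alphabet, probability tables, and the `viterbi_decode` helper are all hypothetical.

```python
import math

# Minimal sketch (hypothetical, not the paper's code): decode a toy
# letter-substitution cipher by maximizing P(p) * P(c|p)^power in the
# log domain, with power=3 implementing the "cubing" trick.

LOW = 1e-12  # floor for unseen events; stands in for real smoothing

def viterbi_decode(cipher, plain, bigram, channel, power=3.0):
    """Return the plaintext maximizing P(p) * P(c|p)^power.

    cipher:  list of ciphertext symbols
    plain:   list of plaintext symbols (the Viterbi states)
    bigram:  dict (prev, cur) -> P(cur | prev); '<s>' begins the chain
    channel: dict (c, p) -> P(c | p)
    """
    def b(q, p):  # log source bigram probability
        return math.log(bigram.get((q, p), LOW))

    def ch(c, p):  # log channel probability, raised to `power`
        return power * math.log(channel.get((c, p), LOW))

    # delta[p] = best log-score of any plaintext prefix ending in p
    delta = {p: b('<s>', p) + ch(cipher[0], p) for p in plain}
    backptrs = []
    for c in cipher[1:]:
        new, ptr = {}, {}
        for p in plain:
            prev = max(plain, key=lambda q: delta[q] + b(q, p))
            new[p] = delta[prev] + b(prev, p) + ch(c, p)
            ptr[p] = prev
        delta = new
        backptrs.append(ptr)
    # follow back-pointers from the best final state
    best = max(plain, key=lambda p: delta[p])
    out = [best]
    for ptr in reversed(backptrs):
        out.append(ptr[out[-1]])
    return out[::-1]

# Toy tables: the source model prefers alternating 'ab...', and the
# channel mostly maps a->x and b->y.
bigram = {('<s>', 'a'): 0.9, ('<s>', 'b'): 0.1,
          ('a', 'b'): 0.9, ('a', 'a'): 0.1,
          ('b', 'a'): 0.9, ('b', 'b'): 0.1}
channel = {('x', 'a'): 0.9, ('y', 'a'): 0.1,
           ('x', 'b'): 0.1, ('y', 'b'): 0.9}

decoded = viterbi_decode(['x', 'y', 'x', 'y'], ['a', 'b'], bigram, channel)
```

Raising the channel to a power greater than one gives it more weight against the source model at decode time only, leaving the learned probabilities themselves untouched.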
    <Paragraph position="1"> All of the experiments in this paper were carried out with the Carmel finite-state toolkit (Graehl, 1997), which supports forward-backward EM with epsilon transitions and loops, parameter tying, and random restarts. It also composes two or more transducers while keeping their transitions separate (and separately trainable) in the composed model. Work described in this paper strongly influenced the toolkit's design.</Paragraph>
  </Section>
</Paper>