<?xml version="1.0" standalone="yes"?>
<Paper uid="P06-2122">
<Title>Inducing Word Alignments with Bilexical Synchronous Trees</Title>
<Section position="7" start_page="958" end_page="959" type="concl">
<SectionTitle> 5 Discussion </SectionTitle>
<Paragraph position="0"> The BLITG model has two components: the dependency model on the upper levels of the tree structure and the word-level translation model at the bottom. We hope that the two components will improve one another. The current experiments clearly indicate that word-level alignment does help induce dependency structures on both sides. Precision and recall on the dependency retrieval sub-task are almost doubled for both languages relative to LITG, which has only a kind of uni-lexical dependency in each language. Although 20% is a low number, the result is encouraging given that the dependencies are learned essentially by contrasting sentences in two languages. The results also improve slightly over ITG with the right-head assumption for English, which is based on linguistic insight. Our results echo the findings of Kuhn (2004), who found that, guided by word alignments between English and multiple other languages, a modified EM training procedure for an English PCFG can bootstrap a more accurate monolingual probabilistic parser.</Paragraph>
<Paragraph position="1"> Figure 4 shows an example of the dependency tree on the English side from the output of BLITG, compared against the parser output. We did not find that feedback from the dependencies helps alignment; understanding the reasons will require further and deeper analysis. One might guess that the dependencies are modeled but are not yet strong enough given the amount of training data. Since EM training suffers from local maxima, we may also need to adjust the training algorithm to obtain good parameters for the alignment task; initializing the model with good dependency parameters is one possible adjustment. We would also like to point out that the alignment task is simpler than decoding, where a stronger reordering component is required to produce a fluent English sentence. Investigating the impact of bilexical dependencies on decoding is our future work.</Paragraph>
<Paragraph position="2"> Acknowledgments This work was supported by NSF ITR IIS-09325646 and NSF ITR IIS-0428020.</Paragraph>
</Section>
</Paper>
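<!--
A minimal, self-contained sketch of the "informed initialization" idea discussed
above: seeding EM with non-uniform starting parameters to steer it away from poor
local maxima. It uses a generic IBM Model 1 style EM over an invented toy corpus;
the corpus, the seed values, and all names are illustrative assumptions, not the
paper's BLITG implementation.

from collections import defaultdict

# Toy parallel corpus: (English sentence, French sentence) pairs.
corpus = [
    (["the", "house"], ["la", "maison"]),
    (["the", "book"],  ["le", "livre"]),
    (["a", "book"],    ["un", "livre"]),
]

def em_model1(corpus, t, iterations=10):
    """EM for lexical translation weights t[(f, e)], normalized to P(f | e)."""
    for _ in range(iterations):
        count = defaultdict(float)   # expected counts c(f, e)
        total = defaultdict(float)   # expected counts c(e)
        for e_sent, f_sent in corpus:
            for f in f_sent:
                # E-step: distribute each foreign word f over all e in the pair.
                z = sum(t[(f, e)] for e in e_sent)
                for e in e_sent:
                    p = t[(f, e)] / z
                    count[(f, e)] += p
                    total[e] += p
        # M-step: re-estimate by normalizing the expected counts.
        for (f, e) in t:
            t[(f, e)] = count[(f, e)] / total[e]
    return t

# All co-occurring word pairs define the parameter space.
pairs = {(f, e) for e_sent, f_sent in corpus for e in e_sent for f in f_sent}

# Baseline: uniform initialization, the usual starting point for EM.
uniform = {p: 1.0 for p in pairs}

# "Informed" initialization: a few parameters seeded from a prior model (here an
# invented seed table standing in for pretrained dependency or translation
# parameters), the rest left uniform.
seed = {("maison", "house"): 5.0, ("livre", "book"): 5.0}
informed = {p: seed.get(p, 1.0) for p in pairs}

for name, init in [("uniform", uniform), ("informed", informed)]:
    t = em_model1(corpus, dict(init))
    print(name, round(t[("maison", "house")], 3), round(t[("livre", "book")], 3))
-->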