<?xml version="1.0" standalone="yes"?>
<Paper uid="H05-1009">
  <Title>NeurAlign: Combining Word Alignments Using Neural Networks</Title>
  <Section position="3" start_page="65" end_page="65" type="intro">
    <SectionTitle>
2 Related Work
</SectionTitle>
    <Paragraph position="0"> Previous algorithms for improving word alignments have attempted to incorporate additional knowledge into their modeling. For example, Liu (2005) uses a log-linear combination of linguistic features. Additional linguistic knowledge can be in the form of part-of-speech tags. (Toutanova et al., 2002) or dependency relations (Cherry and Lin, 2003). Other approaches to improving alignment have combined alignment models, e.g., using a log-linear combination (Och and Ney, 2003) or mutually independent association clues (Tiedemann, 2003).</Paragraph>
    <Paragraph position="1"> A simpler approach was developed by Ayan et al. (2004), where word alignment outputs are combined using a linear combination of feature weights assigned to the individual aligners. Our method is more general in that it uses a neural network model that is capable of learning nonlinear functions.</Paragraph>
    <Paragraph position="2"> Classifier ensembles are used in several NLP applications. Some NLP applications for classifier ensembles are POS tagging (Brill and Wu, 1998; Abney et al., 1999), PP attachment (Abney et al., 1999), word sense disambiguation (Florian and Yarowsky, 2002), and parsing (Henderson and Brill, 2000).</Paragraph>
    <Paragraph position="3"> The work reported in this paper is the first application of classifier ensembles to the word-alignment problem. We use a different methodology to combine classifiers that is based on stacked generalization (Wolpert, 1992), i.e., learning an additional model on the outputs of individual classifiers.</Paragraph>
  </Section>
class="xml-element"></Paper>