<?xml version="1.0" standalone="yes"?>
<Paper uid="W06-1615">
  <Title>Sydney, July 2006. ©2006 Association for Computational Linguistics. Domain Adaptation with Structural Correspondence Learning</Title>
  <Section position="11" start_page="126" end_page="127" type="concl">
    <SectionTitle>
9 Conclusion
</SectionTitle>
    <Paragraph position="0"> Structural correspondence learning is a marriage of ideas from single domain semi-supervised learning and domain adaptation. It uses unlabeled data and frequently-occurring pivot features from both source and target domains to find correspondences among features from these domains.</Paragraph>
    <Paragraph position="1"> Finding correspondences involves estimating the correlations between pivot and non-pivot features, and we adapt structural learning (ASO) (Ando and Zhang, 2005a; Ando and Zhang, 2005b) for this task. SCL is a general technique that can be applied to any feature-based discriminative learner.</Paragraph>
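The core step described above — learning pivot predictors from unlabeled data and extracting a shared representation from their weights — can be sketched as follows. This is a minimal illustration with random toy data, not the paper's implementation: it uses least squares in place of the modified Huber loss, and all array names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: documents, non-pivot features,
# pivot features, and the size k of the induced representation.
n_docs, n_nonpivot, n_pivots, k = 200, 50, 8, 4

# Unlabeled data from both domains: non-pivot feature matrix X and
# binary indicators P for the pivot features (frequent features
# occurring in both source and target domains).
X = rng.random((n_docs, n_nonpivot))
P = (X[:, :n_pivots] + 0.1 * rng.random((n_docs, n_pivots)) > 0.5).astype(float)

# 1) For each pivot, learn a linear predictor from the non-pivot
#    features; stack the weight vectors into W (n_nonpivot x n_pivots).
W, *_ = np.linalg.lstsq(X, P, rcond=None)

# 2) SVD of W: the top-k left singular vectors give a projection theta
#    mapping non-pivot features into a shared low-dimensional space.
U, _, _ = np.linalg.svd(W, full_matrices=False)
theta = U[:, :k].T  # shape (k, n_nonpivot)

# 3) Augment the original features with the induced correspondences;
#    any feature-based discriminative learner trains on [x, theta @ x].
x = X[0]
augmented = np.concatenate([x, theta @ x])
print(augmented.shape)  # → (54,)
```

The key point of the sketch is that theta is learned entirely from unlabeled data in both domains, so the augmented representation is available at training and test time regardless of which domain the labels come from.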
    <Paragraph position="2"> We showed results using SCL to transfer a PoS tagger from the Wall Street Journal to a corpus of MEDLINE abstracts. SCL consistently outperformed both supervised and semi-supervised learning with no labeled target domain training data. We also showed how to combine an SCL tagger with target domain labeled data using the classifier combination techniques of Florian et al. (2004). Finally, we improved parsing performance in the target domain when using the SCL PoS tagger.</Paragraph>
    <Paragraph position="3"> One of our next goals is to apply SCL directly to parsing. We are also focusing on other potential applications, including chunking (Sha and Pereira, 2003), named entity recognition (Florian et al., 2004; Ando and Zhang, 2005b; Daumé III and Marcu, 2006), and speaker adaptation (Kuhn et al., 1998). Finally, we are investigating more direct ways of applying structural correspondence learning when we have labeled data from both source and target domains. In particular, the labeled data of both domains, not just the unlabeled data, should influence the learned representations.</Paragraph>
  </Section>
</Paper>