<?xml version="1.0" standalone="yes"?> <Paper uid="P05-1044"> <Title>Contrastive Estimation: Training Log-Linear Models on Unlabeled Data[?]</Title> <Section position="8" start_page="361" end_page="361" type="concl"> <SectionTitle> 7 Conclusion </SectionTitle> <Paragraph position="0"> We have presented contrastive estimation, a new probabilistic estimation criterion that forces a model to explain why the given training data were better than bad data implied by the positive examples. We have shown that for unsupervised sequence modeling, this technique is efficient and drastically out-performs EM; for POS tagging, the gain in accuracy over EM is twice what we would get from ten times as much data and improved search, sticking with EM's criterion (Smith and Eisner, 2004). On this task, with certain neighborhoods, contrastive estimation suffers less than EM does from diminished prior knowledge and is able to exploit new features--that EM can't--to largely recover from the loss of knowledge.</Paragraph> </Section> class="xml-element"></Paper>