File: 05-lr/acl_arc_1_sum/cleansed_text/xml_by_section/concl/04/w04-2405_concl.xml

<?xml version="1.0" standalone="yes"?>
<Paper uid="W04-2405">
  <Title>Co-training and Self-training for Word Sense Disambiguation</Title>
  <Section position="6" start_page="35" end_page="35" type="concl">
    <SectionTitle>
5 Conclusion
</SectionTitle>
    <Paragraph position="0"> This paper investigated the application of co-training and self-training to supervised word sense disambiguation. If the right parameters for co-training and self-training can be identified for each individual classifier, an average error reduction of 25.5% is achieved, with similar performance observed for both co-training and self-training. Since these optimal settings cannot always be identified in practical applications, several algorithms for empirical parameter selection were investigated, both relying on a validation set: global settings, determined as the best set of parameters across all classifiers, and per-word settings, identified separately for each classifier. An improved co-training method was also introduced, which combines co-training with majority voting, smoothing the learning curves and improving the average performance. Applied with a global parameter selection scheme, this improved co-training algorithm brought a significant error reduction of 9.8% with respect to the basic classifier, which shows that co-training can be successfully employed in practice for bootstrapping sense classifiers.</Paragraph>
  </Section>
</Paper>
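
Below is a minimal sketch of the "co-training with majority voting" idea summarized in the conclusion above. It is only an illustration, not the paper's implementation: it assumes two feature views, scikit-learn-style base classifiers, a strict-majority vote over the classifiers trained in the last few iterations, and a fixed number of newly labeled examples per iteration; the function name cotrain_majority and all parameter names are hypothetical.

import numpy as np
from collections import Counter
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression


def cotrain_majority(X1_l, X2_l, y_l, X1_u, X2_u,
                     iterations=10, growth=20, history=3,
                     base=LogisticRegression(max_iter=1000)):
    # Keep working copies as Python lists so examples can be moved around.
    X1_l, X2_l, y_l = list(X1_l), list(X2_l), list(y_l)
    X1_u, X2_u = list(X1_u), list(X2_u)
    past = []  # classifier pairs kept from the most recent iterations

    for _ in range(iterations):
        if not X1_u:
            break
        # Train one classifier per feature view on the current labeled data.
        c1 = clone(base).fit(np.array(X1_l), np.array(y_l))
        c2 = clone(base).fit(np.array(X2_l), np.array(y_l))
        past = (past + [(c1, c2)])[-history:]

        # Collect predictions from every classifier in the history window.
        preds = []
        for a, b in past:
            preds.append(a.predict(np.array(X1_u)))
            preds.append(b.predict(np.array(X2_u)))
        preds = np.array(preds)          # shape: (2 * len(past), n_unlabeled)

        # Accept an unlabeled example only when a strict majority agrees;
        # voting over several iterations is what smooths the learning curve.
        accepted = []
        for i in range(len(X1_u)):
            label, count = Counter(preds[:, i]).most_common(1)[0]
            if count > preds.shape[0] / 2:
                accepted.append((i, label))

        # Move up to `growth` agreed-upon examples into the labeled set.
        batch = accepted[:growth]
        for i, label in batch:
            X1_l.append(X1_u[i])
            X2_l.append(X2_u[i])
            y_l.append(label)
        for i, _ in sorted(batch, reverse=True):
            del X1_u[i], X2_u[i]

    # Return the most recently trained pair of view-specific classifiers.
    return past[-1] if past else None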