<?xml version="1.0" standalone="yes"?>
<Paper uid="W04-0808">
  <Title>An Evaluation Exercise for Romanian Word Sense Disambiguation</Title>
  <Section position="8" start_page="0" end_page="0" type="evalu">
    <SectionTitle>
7 Results and Discussion
</SectionTitle>
    <Paragraph position="0"> Table 6 lists the results obtained by all participating systems, and the baseline obtained using the &quot;most frequent sense&quot; (MFS) heuristic. The table lists precision and recall figures for both fine grained and coarse grained scoring.</Paragraph>
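A minimal sketch of the MFS heuristic mentioned above: tag every occurrence of a word with its most frequent sense in the training data. The words and senses here are hypothetical toy data, not taken from the evaluation corpus.

```python
from collections import Counter, defaultdict

# Hypothetical sense-tagged training pairs; labels are illustrative only.
train = [("masa", "table"), ("masa", "table"), ("masa", "meal"),
         ("banca", "bank"), ("banca", "bench"), ("banca", "bank")]

counts = defaultdict(Counter)
for word, sense in train:
    counts[word][sense] += 1

# MFS baseline: map each word to its most frequent training sense.
mfs = {word: c.most_common(1)[0][0] for word, c in counts.items()}
print(mfs["masa"])   # prints "table", the majority sense in the toy data
```

Because the baseline ignores context entirely, any system exploiting contextual features has room to improve on it.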
    <Paragraph position="1"> The performance of all systems is significantly higher than the baseline, with the best system performing at 72.7% (77.1%) for fine grained (coarse grained) scoring, which represents a 35% (38%) error reduction with respect to the baseline.</Paragraph>
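As a consistency check on the figures above: error reduction is r = (best - base) / (1 - base), so the reported 72.7% fine-grained score and 35% error reduction jointly imply a baseline near 58% (the exact baseline is given in Table 6).

```python
# Implied MFS baseline from the reported best score and error reduction,
# solving r = (best - base) / (1 - base) for base.
best, r = 0.727, 0.35
base = (best - r) / (1 - r)
print(round(base, 3))   # 0.58
```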
    <Paragraph position="2"> The best system (romanian-swat hk-bo) relies on a Maximum Entropy classifier with boosting, using local context (neighboring words, lemmas, and their part of speech), as well as bag-of-words features for surrounding words.</Paragraph>
    <Paragraph position="3"> Not surprisingly, several of the top performing systems are based on combinations of multiple classifiers, which shows once again that voting schemes that combine several learning algorithms outperform the accuracy of individual classifiers.</Paragraph>
    <Section position="1" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
System Description
</SectionTitle>
      <Paragraph position="0"> romanian-swat hk-bo: Supervised learning using Maximum Entropy with boosting, with bag-of-words and n-grams around the head word as features.
swat-hk-romanian: The swat-romanian and romanian-swat hk-bo systems combined with majority voting.
Duluth-RLSS: An ensemble approach that takes a vote among three bagged decision trees, based on unigram, bigram, and co-occurrence features.
swat-romanian: Three classifiers (cosine similarity clustering, decision list, and Naive Bayes), with bag-of-words and n-grams around the head word as features, combined with a majority voting scheme.</Paragraph>
      <Paragraph position="1"> UMD SST6: Supervised learning using Support Vector Machines with contextual features.
ubb nbc ro: Supervised learning using a Naive Bayes learning scheme, with features extracted using a bag-of-words approach.
UBB: A k-NN memory-based learning approach, with bag-of-words features.</Paragraph>
    </Section>
  </Section>
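The majority-voting combination used by several of the systems above can be sketched in a few lines: each base classifier predicts a sense, and the ensemble outputs the plurality label. The function name and inputs are illustrative, not from any participating system.

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: list of sense labels, one per base classifier."""
    # Plurality label; ties are broken by first occurrence, as in Counter.
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical predictions from three base classifiers for one instance.
print(majority_vote(["bank", "bank", "bench"]))   # prints "bank"
```

With independent, better-than-chance base classifiers, such a scheme tends to cancel individual errors, which is consistent with the ensemble systems ranking near the top here.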
</Paper>