<?xml version="1.0" standalone="yes"?> <Paper uid="P06-3010"> <Title>Sydney, July 2006. c(c)2006 Association for Computational Linguistics A Hybrid Relational Approach for WSD - First Results</Title> <Section position="7" start_page="58" end_page="58" type="evalu"> <SectionTitle> 4 Experiments and results </SectionTitle> <Paragraph position="0"> In order to assess the accuracy of our approach, we ran a set of initial experiments with our sample corpus. For each verb, we ran Aleph in the default mode, except for the following parameters: The accuracy was calculated by applying the rules to classify the new examples in the test set according to the order in which these rules appeared in the theory, eliminating the examples (correctly or incorrectly) covered by a given rule from the test set. In order to cover 100% of the examples, we relied on the existence of a rule without conditions, which is generally induced by Aleph and points to the most frequent translation in the training data. When this rule was not generated by Aleph, we added it to the end of the theory. For all the verbs, however, this rule only classified a few examples (from 1 to 6).</Paragraph> <Paragraph position="1"> In Table 2 we show the accuracy of the theory learned for each verb, as well as the accuracy achieved by two propositional machine learning algorithms on the same data: Decision Trees (C4.5) and Support Vector Machines (SVM), all according to a 10-fold cross-validation strategy.</Paragraph> <Paragraph position="2"> Since it is rather impractical to represent certain KSs using attribute-value vectors, in the experiments with SVM and C4.5 only low-level features were considered, corresponding to KS1, KS2, KS3, and KS4. On average, our approach outperforms the two other algorithms. 
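The rule-application procedure described above works as an ordered decision list: rules are tried in the order they appear in the induced theory, the first matching rule classifies the example, and the unconditional default rule (the most frequent translation in training) covers anything left over. A minimal sketch of this scheme (not the authors' code; the feature names and Portuguese translations below are hypothetical illustrations):

```python
def classify(rules, example, default_label):
    """Apply rules in theory order; the first matching rule wins.

    rules: ordered list of (condition, label), where condition is a
    predicate over the example's features.
    default_label: label of the rule without conditions, i.e. the most
    frequent translation in the training data.
    """
    for condition, label in rules:
        if condition(example):
            return label
    return default_label  # fall back to the unconditional rule

def accuracy(rules, default_label, test_set):
    """test_set: list of (example, gold_label) pairs."""
    hits = sum(1 for ex, gold in test_set
               if classify(rules, ex, default_label) == gold)
    return hits / len(test_set)

# Hypothetical toy theory for the verb "come": two conditional rules
# plus the default translation "vir".
rules = [
    (lambda ex: ex.get("particle") == "back", "voltar"),
    (lambda ex: ex.get("subject_pos") == "NN", "chegar"),
]
test = [
    ({"particle": "back"}, "voltar"),
    ({"subject_pos": "NN"}, "chegar"),
    ({}, "vir"),  # no rule fires; the default rule classifies it
]
print(accuracy(rules, "vir", test))  # -> 1.0
```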
Moreover, its accuracy is considerably higher than that of the most frequent sense baseline (Table 1).</Paragraph> <Paragraph position="3"> For all verbs, theories with a small number of rules were produced (from 19 to 33 rules). Inspection of these rules shows that all KSs are exploited by the ILP system and are thus potentially useful for the disambiguation of verbs.</Paragraph> </Section> </Paper>