<?xml version="1.0" standalone="yes"?> <Paper uid="W06-3206"> <Title>constraint satisfaction inference</Title> <Section position="6" start_page="46" end_page="47" type="evalu"> <SectionTitle> 4 Results </SectionTitle> <Paragraph position="0"> We performed experiments with the memory-based learning algorithm IB1, equipped with constraint satisfaction inference post-processing, on the four aforementioned tasks. In one variant, IB1 was simply used to predict atomic classes; in the other variant, IB1 predicted trigram classes, and constraint satisfaction inference was used for post-processing the output sequences. We chose to measure the generalization performance of our trained classifiers on a single 90% training set / 10% test set split of each data set (after shuffling the data randomly at the word level), measuring the percentage of fully correctly phonemized words or fully correctly morphologically analyzed words - arguably the most critical and unbiased performance metric for both tasks. Additionally, we performed bootstrap resampling (Noreen, 1989) to obtain confidence intervals. Table 4 lists the word accuracies obtained on the English and Dutch morphological analysis tasks. [Table 4 caption: Word accuracies on morphological analysis by the default unigram classifier and the trigram method with constraint satisfaction inference, with confidence intervals.]</Paragraph> <Paragraph position="2"> Constraint satisfaction inference significantly outperforms the systems that predict atomic unigram classes, by a large margin. While the absolute difference in scores between the two variants on English morphological analysis is 5.4%, the error reduction is an impressive 27%.</Paragraph> <Paragraph position="3"> Table 5 displays the word phonemization accuracies of both variants on both languages. [Table 5 caption: Word accuracies on letter-phoneme conversion by the default unigram classifier and the trigram method with constraint satisfaction inference, with confidence intervals.]
Again, significant improvements over the baseline classifier can be observed; the confidence intervals are far apart. Error reductions for both languages are impressive: 26% for English, and 22% for Dutch.</Paragraph> </Section> </Paper>
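The evaluation procedure described in this section - word accuracy on a held-out split, percentile bootstrap confidence intervals in the style of Noreen (1989), and relative error reduction - can be sketched in Python. This is a minimal illustration, not the authors' implementation; the 80% baseline accuracy used in the usage note is an assumption implied by the reported 5.4% absolute gain and 27% error reduction, not a figure taken from the paper's tables.

```python
import random

def word_accuracy(gold, pred):
    """Fraction of words whose full analysis is predicted correctly."""
    assert len(gold) == len(pred)
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def bootstrap_ci(gold, pred, n_resamples=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for word accuracy,
    resampling test items with replacement (a sketch of the kind of
    bootstrap procedure Noreen, 1989, describes)."""
    rng = random.Random(seed)
    n = len(gold)
    scores = []
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]
        scores.append(word_accuracy([gold[i] for i in idx],
                                    [pred[i] for i in idx]))
    scores.sort()
    lo = scores[int((alpha / 2) * n_resamples)]
    hi = scores[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

def error_reduction(baseline_acc, system_acc):
    """Relative reduction of the word error rate: an absolute gain is
    divided by the baseline's error, 1 - baseline_acc."""
    return (system_acc - baseline_acc) / (1.0 - baseline_acc)
```

For example, under the assumed 80% baseline, `error_reduction(0.80, 0.854)` yields approximately 0.27, matching how a 5.4% absolute difference can correspond to a 27% error reduction.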