<?xml version="1.0" standalone="yes"?> <Paper uid="I05-2030"> <Title>Opinion Extraction Using a Learning-Based Anaphora Resolution Technique</Title> <Section position="6" start_page="176" end_page="177" type="evalu"> <SectionTitle> 4.4 Results </SectionTitle> <Paragraph position="0"> Table 1 shows the results of opinion extraction.</Paragraph> <Paragraph position="1"> We evaluated the results by recall R and precision P, defined as follows (for simplicity, we write &quot;A-V&quot; for attribute-value pair):</Paragraph> <Paragraph position="3"> R = (correctly extracted A-V opinions) / (total number of A-V opinions in the gold standard), P = (correctly extracted A-V opinions) / (total number of A-V opinions found by the system).</Paragraph> <Paragraph position="4"> In order to demonstrate the effectiveness of the information about the candidate attribute, we evaluated the results of pair extraction and opinionhood determination separately. Table 2 shows the results. In pair extraction, we assume that the value is given and evaluate how successfully attribute-value pairs are extracted.</Paragraph> <Section position="1" start_page="176" end_page="177" type="sub_section"> <SectionTitle> 4.5 Discussions </SectionTitle> <Paragraph position="0"> As Table 1 shows, our proposed ordering is outperformed by Proc. 3 in recall; however, it achieves higher precision than Proc. 3 and obtains the best F-measure. In what follows, we discuss the results of pair extraction and opinionhood determination.</Paragraph> <Paragraph position="1"> Pair extraction From Table 2, we can see that carrying out attribute identification before pairedness determination outperforms the reverse ordering, with 11% better precision and 3% better recall.</Paragraph> <Paragraph position="2"> This result supports our expectation that knowledge of attribute information assists attribute-value pair extraction. 
Focusing on the rows labeled &quot;(dependency)&quot; and &quot;(no dependency)&quot; in Table 2, while 80% of the attribute-value pairs in a direct dependency relation are successfully extracted with high precision, the model achieves only 51.7% recall with 61.7% precision for the cases where the attribute and value are not in a direct dependency relation.</Paragraph> <Paragraph position="3"> According to our error analysis, a major source of errors lies in the attribute identification task. In this experiment, the precision of attribute identification is 78%. The main reason for these errors was that the true attributes were missing from our dictionary. In addition, a major cause of error at the pair determination stage is cases where an attribute appearing in the preceding sentence triggers a false decision. We need to conduct further investigations in order to resolve these problems.</Paragraph> <Paragraph position="4"> Opinionhood determination Table 2 also shows that carrying out attribute identification followed by opinionhood determination outperforms the reverse ordering, which supports our expectation that knowing the attribute information aids opinionhood determination.</Paragraph> <Paragraph position="5"> While it produces better results, our proposed method still has room for improvement in both precision and recall. Our current error analysis has not identified particular error patterns -- the types of errors are very diverse. However, we at least need to modify the feature set to make the model more sensitive to modality-oriented distinctions such as subjunctive and conditional expressions.</Paragraph> </Section> </Section> </Paper>
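The recall, precision, and F-measure used in Section 4.4 can be sketched as follows. This is an illustrative Python sketch, not code from the paper; the function name, arguments, and counts are hypothetical.

```python
def evaluate(correct: int, gold_total: int, system_total: int):
    """Compute recall, precision, and F-measure for A-V opinion extraction.

    correct      -- correctly extracted attribute-value (A-V) opinions
    gold_total   -- total A-V opinions in the gold standard (recall denominator)
    system_total -- total A-V opinions found by the system (precision denominator)
    """
    recall = correct / gold_total
    precision = correct / system_total
    # Balanced F-measure (harmonic mean of precision and recall)
    f_measure = 2 * precision * recall / (precision + recall)
    return recall, precision, f_measure

# Hypothetical counts for illustration only:
r, p, f = evaluate(correct=60, gold_total=100, system_total=80)
print(r, p, round(f, 3))  # 0.6 0.75 0.667
```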