<?xml version="1.0" standalone="yes"?>
<Paper uid="I05-2036">
<Title>Svetlana.Hensman@comp.dit.ie</Title>
<Section position="7" start_page="212" end_page="212" type="evalu">
<SectionTitle> 6 Experimental results </SectionTitle>
<Paragraph position="0"> Each module of the system was evaluated separately. The first experiment estimated the accuracy of the sentence frames constructed by the role labelling module; it was performed on a randomly selected 2% of the verbs in the Reuters corpus and 7% of the verbs in the AAIU corpus. The parse trees produced by Charniak's parser were manually edited to avoid any errors due to incorrect parses. The results showed that the system identified the correct set of possible candidates for semantic roles for 90% of the verbs in the Reuters documents and 89% of the verbs in the AAIU documents.</Paragraph>
<Paragraph position="1"> Further experiments were carried out to evaluate the performance of the role assigning module. As a testbed we randomly selected 2% of the verbs in the Reuters documents and 15% of the verbs in the AAIU documents. From these, we analysed only those cases where the verb is a member of at least one VerbNet frame and the possible role candidates were correctly identified. For 60% and 70% of the remaining verbs, respectively, the algorithm identifies a single correct solution. In 3% and 4% of the cases, respectively, a partially correct result is found (in the majority of such cases the Agent, Patient and Theme roles are correctly identified, together with some incorrect ones).</Paragraph>
<Paragraph position="2"> In 11% and 9% of the cases for Reuters and AAIU, respectively, the algorithm identifies a set of possible solutions containing the correct one and several incorrect ones. For these cases the weighting function identifies the correct solution in 38% of the cases for the AAIU documents and 59% of the cases for the Reuters documents, while in 40% and 21% of the cases, respectively, it identifies the correct result together with one or more incorrect ones. We also evaluated the percentage of the syntactic patterns that the graph builder recognises: for the AAIU and Reuters documents, respectively, we can build a graph for 76% and 67% of the noun phrases, for 95% and 94% of the prepositional phrases, and for 91% and 97% of the subordinate clauses.</Paragraph>
</Section>
</Paper>
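As an illustration only, the sketch below shows one way the per-verb outcomes reported above (single correct solution, partially correct, a set of candidate solutions, incorrect) could be tallied into percentages. The Outcome categories, function name, and sample data are hypothetical and are not taken from the paper or its evaluation code.

```python
from collections import Counter
from enum import Enum


class Outcome(Enum):
    # Hypothetical outcome categories mirroring the breakdown reported above.
    SINGLE_CORRECT = "single correct solution"
    PARTIALLY_CORRECT = "partially correct (some roles right, some wrong)"
    CANDIDATE_SET = "set of candidate solutions (resolved by the weighting function)"
    INCORRECT = "no correct solution"


def outcome_breakdown(outcomes):
    """Return the percentage of verbs falling into each outcome category."""
    counts = Counter(outcomes)
    total = len(outcomes)
    return {o: 100.0 * counts[o] / total for o in Outcome}


# Dummy usage with made-up labels (not the paper's data):
sample = (
    [Outcome.SINGLE_CORRECT] * 6
    + [Outcome.CANDIDATE_SET] * 2
    + [Outcome.PARTIALLY_CORRECT]
    + [Outcome.INCORRECT]
)
for outcome, pct in outcome_breakdown(sample).items():
    print(f"{outcome.value}: {pct:.0f}%")
```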