<?xml version="1.0" standalone="yes"?>
<Paper uid="W04-0810">
<Title>A First Evaluation of Logic Form Identification Systems</Title>
<Section position="9" start_page="0" end_page="0" type="evalu">
<SectionTitle>8 Results</SectionTitle>
<Paragraph position="0"> Initially, 27 teams registered to participate in the Logic Form Identification task, and 6 submissions were received by the deadline. One submission was discarded because the file contained no valid data, and another was not included in the comparative results shown in Table 2 because it used manual parsing (parsing is not a necessary step in obtaining the LF). The part-of-speech information attached to some predicates was ignored when computing the scores; we plan to use it in further trials.</Paragraph>
<Paragraph position="1"> The results in the table are notably consistent. At the Argument level, precision ranges from 0.729 to 0.776 and recall from 0.655 to 0.777. The same trend can be observed at the Predicate level, where the results are slightly better. At the coarser Sentence level the results vary more, but a certain degree of consistency can still be observed: the Sent-A measure ranges from 0.160 to 0.256 and the Sent-AP measure from 0.386 to 0.510.</Paragraph>
</Section>
</Paper>
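The Argument- and Predicate-level figures above are precision/recall scores over predicate-argument structures. As an illustration only, the following minimal Python sketch shows how argument-level precision and recall could be computed from gold and predicted argument sets; the tuple representation and exact-match comparison are assumptions for this sketch, not the task's official matching criteria.

```python
# Illustrative sketch: argument-level precision/recall, assuming gold and
# predicted arguments are represented as exact-match tuples, e.g.
# (predicate, slot, variable). The official LF matching criteria are
# defined in the task description, not reproduced here.

def precision_recall(gold, predicted):
    """Return (precision, recall) of a predicted item set against a gold set."""
    gold, predicted = set(gold), set(predicted)
    matched = len(gold & predicted)
    precision = matched / len(predicted) if predicted else 0.0
    recall = matched / len(gold) if gold else 0.0
    return precision, recall

# Hypothetical arguments for one sentence's logic form:
gold_args = {("eat", "arg1", "x1"), ("eat", "arg2", "x2"), ("boy", "arg1", "x1")}
pred_args = {("eat", "arg1", "x1"), ("eat", "arg2", "x3"), ("boy", "arg1", "x1")}
p, r = precision_recall(gold_args, pred_args)
print(f"precision={p:.3f} recall={r:.3f}")  # precision=0.667 recall=0.667
```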