<?xml version="1.0" standalone="yes"?>
<Paper uid="W04-2422">
  <Title>Learning Transformation Rules for Semantic Role Labeling</Title>
  <Section position="5" start_page="0" end_page="0" type="evalu">
    <SectionTitle>
4 Discussion
</SectionTitle>
    <Paragraph position="0"> First, note that only one rule was learned in the verb-tagging phase: Lengthen region V if followed by chunk with tag=PRT. With earlier releases of the data the system did learn multiple rules, including lexically-based rules, but in later releases only this one rule was learned.</Paragraph>
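For concreteness, the application of this single learned rule can be sketched as follows. The chunk representation (dicts with `tag` and `region` fields) is a hypothetical illustration, not the authors' implementation:

```python
def lengthen_v_before_prt(chunks):
    """Apply the learned rule: lengthen region V if followed by a
    chunk with tag=PRT, absorbing the particle into the verb region."""
    for i in range(len(chunks) - 1):
        if chunks[i]["region"] == "V" and chunks[i + 1]["tag"] == "PRT":
            chunks[i + 1]["region"] = "V"
    return chunks

# Toy sentence fragment: "picked up the book"
chunks = [
    {"text": "picked",   "tag": "VP",  "region": "V"},
    {"text": "up",       "tag": "PRT", "region": None},
    {"text": "the book", "tag": "NP",  "region": None},
]
lengthen_v_before_prt(chunks)
# The PRT chunk "up" is now inside region V.
```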
    <Paragraph position="1"> Second, observe that the system actually did reorder rules after discovering them, as evidenced by the non-monotonic &quot;discovery order&quot; column. To attain this result, we used a look-behind of 2, i.e., the last three rules learned were candidates for reordering.</Paragraph>
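A minimal sketch of reordering with a look-behind of 2: the most recently learned rules are permuted and the best-scoring ordering is kept. The `score` function over rule sequences is a hypothetical stand-in; the paper does not specify its scoring details:

```python
from itertools import permutations

def reorder_tail(rules, score, k=2):
    """After learning a new rule, consider all orderings of the last
    k+1 rules (a look-behind of k) and keep the one scoring best."""
    head, tail = rules[:-(k + 1)], rules[-(k + 1):]
    best = max(permutations(tail), key=lambda p: score(head + list(p)))
    return head + list(best)
```

With k=2, only the final three rules are candidates for reordering, so earlier rules keep their discovery order.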
    <Paragraph position="2"> Third, several of the rules in the sequence are identical.</Paragraph>
    <Paragraph position="3"> In some cases, this seems to be because multiple applications of a rule were necessary to achieve full results (e.g. rule &quot;H&quot;, which extended an A0 or A1 region through joined NP chunks several times). In other cases, this seems to be one rule re-applying itself after another rule modified the results of its earlier application (e.g. rule &quot;E&quot;, which was affected by applications of rule &quot;H&quot;). Finally, note that only 23 transformations were found.</Paragraph>
    <Paragraph position="4"> The last few rules begin dealing with lesser-represented argument types like R-A0 and AM-NEG, but many types remain completely unaddressed by the system. We may be able to increase performance on those types by adding rule templates, or by lowering the system's learning-termination threshold. Rule &quot;K&quot; was created as an explicit attempt to recognize R-A0 and similar argument types, and seems to have been reasonably successful. There may be other relatively simple templates we can create to recognize other arguments.</Paragraph>
    <Paragraph position="5"> In future work, there are several avenues we would like to explore. The first-tag-wins assignment strategy mentioned above was adopted without investigating alternative strategies; in fact, we have not yet tried any others.</Paragraph>
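As an illustration only, a first-tag-wins strategy can be sketched as below; the proposal format (region, tag) is hypothetical, since the paper does not describe its data structures:

```python
def first_tag_wins(proposals):
    """Resolve competing tag proposals for the same region by keeping
    whichever proposal arrived first and ignoring all later ones."""
    assigned = {}
    for region, tag in proposals:
        assigned.setdefault(region, tag)  # later proposals are ignored
    return assigned

# Two rules propose different tags for region r1; the first wins.
first_tag_wins([("r1", "A0"), ("r1", "A1"), ("r2", "A1")])
```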
    <Paragraph position="6"> We also experimented with isolating common verb types into their own corpus: for example, if we train separately on the verb &quot;say,&quot; which represents nearly 10% of the target verbs in the training set and exhibits different argument patterns from other verbs, we achieve an F1 value of about 82% on this subset using only five learned rules. It may be possible to leverage this work by grouping other less common verbs by their VerbNet class(es).</Paragraph>
  </Section>
</Paper>