<?xml version="1.0" standalone="yes"?>
<Paper uid="W06-2909">
  <Title>Semantic Role Labeling via Tree Kernel Joint Inference</Title>
  <Section position="3" start_page="0" end_page="61" type="intro">
    <SectionTitle>
1 Introduction
</SectionTitle>
    <Paragraph position="0"> Recent work on Semantic Role Labeling (SRL) (Carreras and Màrquez, 2005) has shown that, to achieve high labeling accuracy, joint inference should be applied over the whole predicate argument structure. For this purpose, we need to extract features from the sentence's syntactic parse tree, which encodes the target semantic structure. This task is rather complex since we do not know exactly which syntactic clues capture the relation between a predicate and its arguments. For example, to detect the interesting context, the modeling of syntax/semantics-based features should take into account linguistic aspects such as ancestor nodes or semantic dependencies (Toutanova et al., 2004).</Paragraph>
    <Paragraph position="1"> A viable approach to generating a large number of features was proposed in (Collins and Duffy, 2002), where convolution kernels were used to implicitly define a tree substructure space, leaving the selection of the relevant structural features to the Voted Perceptron learning algorithm. These successful experiments show that tree kernels are very promising for automatic feature engineering, especially when the available knowledge about the phenomenon is limited.</Paragraph>
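The convolution kernel of Collins and Duffy (2002) can be sketched as follows: it computes, for every pair of nodes across two parse trees, the (decayed) number of common subtree fragments rooted at those nodes, and sums these counts. The sketch below is an illustrative minimal implementation, not the paper's code; trees are represented as nested tuples `(label, child1, child2, ...)` with string leaves, and the decay factor `lam` is the usual lambda parameter.

```python
# Minimal sketch of the Collins & Duffy (2002) subtree kernel.
# Trees are hypothetical nested tuples: (label, child1, ...); leaves are strings.

def production(node):
    """The grammar production rooted at a node, e.g. ('NP', ('DT', 'NN'))."""
    return (node[0], tuple(c[0] if isinstance(c, tuple) else c for c in node[1:]))

def c_delta(n1, n2, lam=0.5):
    """Decayed count of common subtree fragments rooted at n1 and n2."""
    if production(n1) != production(n2):
        return 0.0
    score = lam
    for c1, c2 in zip(n1[1:], n2[1:]):
        if isinstance(c1, tuple) and isinstance(c2, tuple):
            score *= 1.0 + c_delta(c1, c2, lam)
    return score

def nodes(t):
    """Yield every internal node of a tuple-encoded tree."""
    yield t
    for c in t[1:]:
        if isinstance(c, tuple):
            yield from nodes(c)

def tree_kernel(t1, t2, lam=0.5):
    """K(T1, T2): sum of common-fragment counts over all node pairs."""
    return sum(c_delta(n1, n2, lam) for n1 in nodes(t1) for n2 in nodes(t2))
```

With `lam=1.0` the self-kernel of a tree equals the sum over fragment types of the squared fragment counts, which makes small cases easy to check by hand.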
    <Paragraph position="2"> In a similar way, we can model SRL systems with tree kernels to generate large feature spaces. More specifically, most SRL systems split the labeling process into two steps: Boundary Detection (i.e., determining the text boundaries of predicate arguments) and Role Classification (i.e., labeling such arguments with a semantic role, e.g. Arg0 or Arg1 as defined in (Kingsbury and Palmer, 2002)).</Paragraph>
    <Paragraph position="3"> The former step detects the syntactic parse tree nodes associated with constituents that correspond to arguments, whereas the latter assigns the suitable label to such boundary nodes. Both steps require the design and extraction of features from parse trees. Since capturing the tightly interdependent relations between a predicate and its arguments is a complex task, we can apply tree kernels to the subtrees that span the whole predicate argument structure, generating the feature space of all possible subtrees.</Paragraph>
    <Paragraph position="4"> In this paper, we apply the traditional boundary (TBC) and role (TRC) classifiers (Pradhan et al., 2005a), which are based on binary predicate/argument relations, to label all parse tree nodes corresponding to potential arguments. Then, we extract the subtrees which span the predicate-argument dependencies of such arguments, i.e. Argument Spanning Trees (ASTs). These are used in a tree kernel function to generate all possible substructures that encode n-ary argument relations, i.e. we carry out an automatic feature engineering process.</Paragraph>
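The AST extraction step can be illustrated with a small sketch: starting from the full parse tree, keep the predicate subtree and each candidate-argument subtree whole, and prune every branch that covers none of them. This is only an illustration under simplifying assumptions; here target nodes are marked by label, whereas in the paper they are the nodes selected by the TBC classifier. Trees use the same tuple encoding as above.

```python
# Hedged sketch of Argument Spanning Tree (AST) extraction:
# the minimal subtree covering the predicate and its argument nodes.
# Trees are nested tuples (label, child1, ...); leaves are strings.
# Selecting targets by label is an illustrative simplification.

def prune(node, keep_labels):
    """Return the subtree of `node` spanning all targeted nodes, or None.

    A targeted node is kept with its whole subtree; any other node
    survives only if some descendant is targeted.
    """
    if not isinstance(node, tuple):
        return None  # string leaf: never a target on its own
    if node[0] in keep_labels:
        return node  # keep the entire argument/predicate subtree
    kept = [p for p in (prune(c, keep_labels) for c in node[1:]) if p]
    return (node[0], *kept) if kept else None
```

For example, pruning a sentence tree while keeping the verb and a PP argument discards the unrelated NP branches but preserves the internal structure of the kept constituents.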
    <Paragraph position="5"> To validate our approach, we experimented with our model and Support Vector Machines for the classification of valid and invalid ASTs. The results show that this classification problem can be learned with high accuracy. Moreover, we modeled SRL as a re-ranking task in line with (Toutanova et al., 2005).</Paragraph>
    <Paragraph position="6"> The large number of complex features provided by tree kernels for structured learning allows SVMs to reach the state-of-the-art accuracy.</Paragraph>
    <Paragraph position="7"> The paper is organized as follows: Section 2 introduces the Semantic Role Labeling based on SVMs and the tree kernel spaces; Section 3 formally defines the ASTs and the algorithm for their classification and re-ranking; Section 4 shows the comparative results between our approach and the traditional one; Section 5 presents the related work; and finally, Section 6 summarizes the conclusions.</Paragraph>
  </Section>
</Paper>