<?xml version="1.0" standalone="yes"?>
<Paper uid="P06-2010">
  <Title>A Hybrid Convolution Tree Kernel for Semantic Role Labeling</Title>
  <Section position="3" start_page="0" end_page="73" type="intro">
    <SectionTitle>
1 Introduction
</SectionTitle>
    <Paragraph position="0"> In the last few years there has been increasing interest in Semantic Role Labeling (SRL). It is currently a well defined task with a substantial body of work and comparative evaluation. Given a sentence, the task consists of analyzing the propositions expressed by some target verbs and some constituents of the sentence. In particular, for each target verb (predicate) all the constituents in the sentence which fill a semantic role (argument) of the verb have to be recognized.</Paragraph>
    <Paragraph position="1"> Figure 1 shows an example of a semantic role labeling annotation in PropBank (Palmer et al., 2005). The PropBank defines 6 main arguments, Arg0 is the Agent, Arg1 is Patient, etc. ArgMmay indicate adjunct arguments, such as Locative, Temporal.</Paragraph>
    <Paragraph position="2"> Many researchers (Gildea and Jurafsky, 2002; Pradhan et al., 2005a) use feature-based methods  ture syntactic tree representation for argument identification and classification in building SRL systems and participating in evaluations, such as Senseval-3 1, CoNLL-2004 and 2005 shared tasks: SRL (Carreras and M`arquez, 2004; Carreras and M`arquez, 2005), where a flat feature vector is usually used to represent a predicate-argument structure. However, it's hard for this kind of representation method to explicitly describe syntactic structure information by a vector of flat features. As an alternative, convolution tree kernel methods (Collins and Duffy, 2001) provide an elegant kernel-based solution to implicitly explore tree structure features by directly computing the similarity between two trees. In addition, some machine learning algorithms with dual form, such as Perceptron and Support Vector</Paragraph>
    <Section position="1" start_page="0" end_page="73" type="sub_section">
      <SectionTitle>
Machines (SVM) (Cristianini and Shawe-Taylor,
</SectionTitle>
      <Paragraph position="0"> 2000), which do not need know the exact presentation of objects and only need compute their kernel functions during the process of learning and prediction. They can be well used as learning algorithms in the kernel-based methods. They are named kernel machines.</Paragraph>
      <Paragraph position="1"> In this paper, we decompose the Moschitti (2004)'s predicate-argument feature (PAF) kernel into a Path kernel and a Constituent Structure ker- null nel, and then compose them into a hybrid convolution tree kernel. Our hybrid kernel method using Voted Perceptron kernel machine outperforms the PAF kernel in the development sets of CoNLL-2005 SRL shared task. In addition, the final composing kernel between hybrid convolution tree kernel and standard features' polynomial kernel outperforms each of them individually.</Paragraph>
      <Paragraph position="2"> The remainder of the paper is organized as follows: In Section 2 we review the previous work. In Section 3 we illustrate the state of the art feature-based method for SRL. Section 4 discusses our method. Section 5 shows the experimental results. We conclude our work in Section 6.</Paragraph>
  </Section>
</Paper>