<?xml version="1.0" standalone="yes"?>
<Paper uid="P06-1095">
  <Title>Machine Learning of Temporal Relations</Title>
  <Section position="3" start_page="37" end_page="37" type="intro">
    <SectionTitle>
1 Introduction
</SectionTitle>
    <Paragraph position="0"> The growing interest in practical NLP applications such as question-answering and text summarization places increasing demands on the processing of temporal information. In multi-document summarization of news articles, it can be useful to know the relative order of events so as to merge and present information from multiple news sources correctly. In question-answering, one would like to be able to ask when an event occurs, or what events occurred prior to a particular event.</Paragraph>
    <Paragraph position="1"> A wealth of prior research by (Passonneau 1988), (Webber 1988), (Hwang and Schubert 1992), (Kamp and Reyle 1993), (Lascarides and Asher 1993), (Hitzeman et al. 1995), (Kehler 2000), and others has explored the different knowledge sources used in inferring the temporal ordering of events, including temporal adverbials, tense, aspect, rhetorical relations, pragmatic conventions, and background knowledge.</Paragraph>
    <Paragraph position="2"> For example, the narrative convention of events being described in the order in which they occur is followed in (1), but overridden by means of a discourse relation, Explanation, in (2).</Paragraph>
    <Paragraph position="3">  (1) Max stood up. John greeted him.</Paragraph>
    <Paragraph position="4"> (2) Max fell. John pushed him.</Paragraph>
    <Paragraph position="5">  In addition to discourse relations, which often require inferences based on world knowledge, the ordering decisions humans carry out appear to involve a variety of knowledge sources, including tense and grammatical aspect (3a), lexical aspect (3b), and temporal adverbials (3c):  (3a) Max entered the room. He had drunk a lot of wine.</Paragraph>
    <Paragraph position="6"> (3b) Max entered the room. Mary was seated behind the desk.</Paragraph>
    <Paragraph position="7"> (3c) The company announced Tuesday that  third-quarter sales had fallen.</Paragraph>
    <Paragraph position="8"> Clearly, substantial linguistic processing may be required for a system to make these inferences, and world knowledge is hard to make available to a domain-independent program. An important strategy in this area is of course the development of annotated corpora that can facilitate the machine learning of such ordering inferences.</Paragraph>
    <Paragraph position="9"> This paper investigates a machine learning approach for temporally ordering events in natural language texts. In Section 2, we describe the annotation scheme and annotated corpora, and the challenges they pose. A basic learning approach is described in Section 3. To address data sparseness, we used temporal reasoning as an over-sampling method to dramatically expand the amount of training data.</Paragraph>
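The over-sampling idea above can be illustrated with a minimal sketch. The function below is a hypothetical illustration (not the authors' actual implementation): given human-annotated event pairs labeled BEFORE, it infers additional BEFORE pairs by transitivity (if A is before B and B is before C, then A is before C), expanding the training set without further annotation.

```python
def closure(before_pairs):
    """Expand a set of (a, b) BEFORE pairs to its transitive closure.

    Illustrative sketch of temporal-reasoning-based over-sampling:
    each inferred pair becomes an extra training instance.
    """
    pairs = set(before_pairs)
    changed = True
    while changed:
        changed = False
        # Chain any two pairs (a, b) and (b, d) into a new pair (a, d).
        new = {(a, d) for (a, b) in pairs for (c, d) in pairs
               if b == c and (a, d) not in pairs}
        if new:
            pairs |= new
            changed = True
    return pairs

# Three annotated pairs yield three more inferred ones (six in total).
annotated = [("e1", "e2"), ("e2", "e3"), ("e3", "e4")]
expanded = closure(annotated)
```

For the full set of interval relations used in actual annotation schemes, closure requires a transitivity table over all relation pairs rather than this single BEFORE rule; the sketch shows only the simplest case.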
    <Paragraph position="10"> As we will discuss in Section 5, there are no standard algorithms for making these inferences that we can compare against. We believe strongly that in such situations, it's worthwhile for computational linguists to devote consider- (Footnote: Research at Georgetown and Brandeis on this problem was funded in part by a grant from the ARDA.)</Paragraph>
  </Section>
</Paper>