<?xml version="1.0" standalone="yes"?>
<Paper uid="P03-1008">
  <Title>Syntactic Features and Word Similarity for Supervised Metonymy Resolution</Title>
  <Section position="2" start_page="0" end_page="3" type="intro">
    <SectionTitle>
1 Introduction
</SectionTitle>
    <Paragraph position="0"> Metonymy is a figure of speech, in which one expression is used to refer to the standard referent of a related one (Lakoff and Johnson, 1980). In (1),  &amp;quot;seat 19&amp;quot; refers to the person occupying seat 19. (1) Ask seat 19 whetherhewantstoswap  The importance of resolving metonymies has been shown for a variety of NLP tasks, e.g., machine translation (Kamei and Wakao, 1992), question answering (Stallard, 1993) and anaphora resolution (Harabagiu, 1998; Markert and Hahn, 2002).  (1) was actually uttered by a flight attendant on a plane.  In order to recognise and interpret the metonymy in (1), a large amount of knowledge and contextual inference is necessary (e.g. seats cannot be questioned, people occupy seats, people can be questioned). Metonymic readings are also potentially open-ended (Nunberg, 1978), so that developing a machine learning algorithm based on previous examples does not seem feasible.</Paragraph>
    <Paragraph position="1"> However, it has long been recognised that many metonymic readings are actually quite regular (Lakoff and Johnson, 1980; Nunberg, 1995).</Paragraph>
    <Paragraph position="2">  In (2), &amp;quot;Pakistan&amp;quot;, the name of a location, refers to one of its national sports teams.</Paragraph>
    <Paragraph position="3">  (2) Pakistan had won the World Cup Similar examples can be regularly found for many other location names (see (3) and (4)).</Paragraph>
    <Paragraph position="4"> (3) England won the World Cup (4) Scotland lost in the semi-final  In contrast to (1), the regularity of these examples can be exploited by a supervised machine learning algorithm, although this method is not pursued in standard approaches to regular polysemy and metonymy (with the exception of our own previous work in (Markert and Nissim, 2002a)). Such an algorithm needs to infer from examples like (2) (when labelled as a metonymy) that &amp;quot;England&amp;quot; and &amp;quot;Scotland&amp;quot; in (3) and (4) are also metonymic. In order to  Due to its regularity, conventional metonymy is also known as regular polysemy (Copestake and Briscoe, 1995). We use the term &amp;quot;metonymy&amp;quot; to encompass both conventional and unconventional readings.</Paragraph>
    <Paragraph position="5">  All following examples are from the British National Corpus (BNC, http://info.ox.ac.uk/bnc).</Paragraph>
    <Paragraph position="6">  draw this inference, two levels of similarity need to be taken into account. One concerns the similarity of the words to be recognised as metonymic or literal (Possibly Metonymic Words, PMWs). In the above examples, the PMWs are &amp;quot;Pakistan&amp;quot;, &amp;quot;England&amp;quot; and &amp;quot;Scotland&amp;quot;. The other level pertains to the similarity between the PMW's contexts (&amp;quot;&lt;subject&gt; (had) won the World Cup&amp;quot; and &amp;quot;&lt;subject&gt; lost in the semi-final&amp;quot;). In this paper, we show how a machine learning algorithm can exploit both similarities. Our corpus study on the semantic class of locations confirms that regular metonymic patterns, e.g., using a place name for any of its sports teams, cover most metonymies, whereas unconventional metonymies like (1) are very rare (Section 2). Thus, we can recast metonymy resolution as a classification task operating on semantic classes (Section 3). In Section 4, we restrict the classifier's features to head-modifier relations involving the PMW. In both (2) and (3), the context is reduced to subj-of-win. This allows the inference from (2) to (3), as they have the same feature value. Although the remaining context is discarded, this feature achieves high precision. In Section 5, we generalize context similarity to draw inferences from (2) or (3) to (4). We exploit both the similarity of the heads in the grammatical relation (e.g., &amp;quot;win&amp;quot; and &amp;quot;lose&amp;quot;) and that of the grammatical role (e.g. subject). Figure 1 illustrates context reduction and similarity levels. We evaluate the impact of automatic extraction of head-modifier relations in Section 6. Finally, we discuss related work and our contributions.</Paragraph>
  </Section>
class="xml-element"></Paper>