<?xml version="1.0" standalone="yes"?>
<Paper uid="J91-1003">
  <Title>met*: A Method for Discriminating Metonymy and Metaphor by Computer</Title>
  <Section position="3" start_page="55" end_page="55" type="metho">
    <SectionTitle>
PRODUCER FOR PRODUCT
</SectionTitle>
    <Paragraph position="0"> &amp;quot;I'll have a L6wenbrau. &amp;quot; &amp;quot;He bought a Ford.&amp;quot; &amp;quot;He's got a Picasso in his den.&amp;quot; &amp;quot;I hate to read Heidegger.&amp;quot;</Paragraph>
  </Section>
  <Section position="4" start_page="55" end_page="55" type="metho">
    <SectionTitle>
OBJECT USED FOR USER
</SectionTitle>
    <Paragraph position="0"> &amp;quot;The sax has the flu today.&amp;quot; &amp;quot;The BLT is a lousy tipper. &amp;quot;*.2 &amp;quot;The buses are on strike.&amp;quot; Example 9 &amp;quot;You'll find better ideas than that in the library&amp;quot; (Reddy 1979, p. 309). Reddy (1979) has observed that metonymies can occur in chains. He suggests that (9) contains a chain of PART FOR WHOLE metonymies between 'ideas' and 'library': the ideas are expressed in words, words are printed on pages, pages are in books, and books are found in a library.</Paragraph>
    <Paragraph position="1">  &amp;quot;He happened to die of some disease, though I don't know what the cause was&amp;quot; (ibid.). Yamanashi (1987) points out that basic metonymic relationships like part-whole and cause-result often also link sentences. According to him, the links in (10) and (11) are PART-WHOLE relations, the one in (12) is PRODUCT-PRODUCER, and the one in (13) is a CAUSE-RESULT relation.</Paragraph>
    <Paragraph position="2"> There has been some computational work on metonymy (Weischedel and Sondheimer 1983; Grosz et al. 1987; Hobbs and Martin 1987; Stallard 1987; Wilensky 1987). The TEAM project (Grosz et al. 1987) handles metonymy, though metonymy is not mentioned by name but referred to instead as &amp;quot;coercion,&amp;quot; which &amp;quot;occurs whenever some property of an object is used to refer indirectly to the object&amp;quot; (ibid., p. 213). Coercion is handled by &amp;quot;coercion-relations;&amp;quot; for example, a coercion relation could be used to understand that 'Fords' means &amp;quot;cars whose CAR-MANUFACTURER is Ford&amp;quot; (in Lakoff and Johnson's terms, this is an example of a PRODUCER FOR PRODUCT metonymic concept).</Paragraph>
  </Section>
  <Section position="5" start_page="55" end_page="58" type="metho">
    <SectionTitle>
2 A BLT is a bacon, lettuce, and tomato sandwich.
</SectionTitle>
    <Paragraph position="0"/>
    <Section position="1" start_page="56" end_page="56" type="sub_section">
      <SectionTitle>
Fass Discriminating Metonymy
</SectionTitle>
      <Paragraph position="0"> Grosz et al. (1987) note a similarity between coercion (i.e., metonymy) and modification in noun-noun compounds, and use &amp;quot;modification relations&amp;quot; to decide whether, e.g., &amp;quot;U.S. ships&amp;quot; means &amp;quot;ships of U.S. registry&amp;quot; or &amp;quot;ships whose destination is the U.S.&amp;quot; Hobbs and Martin (1987) and Stallard (1987) also discuss the relationship between metonymy and nominal compounds. Hobbs and Martin treat the two phenomena as twin problems of reference resolution in their TACITUS system. They argue that resolving reference requires finding a knowledge base entity for an entity mentioned in discourse (i.e., what that entity refers to), and suggest that the resolution of metonymy and nominal compounds both require discovering an implicit relation between two entities referred to in discourse. The example of metonymy they show is &amp;quot;after the alarm,&amp;quot; which really means after the sounding of the alarm.</Paragraph>
      <Paragraph position="1"> Hobbs and Martin seem to assume a selection restrictions approach to metonymy because metonymy is sought after a selection restrictions violation (ibid., p. 521). In their approach, solving metonymy involves finding: \[1\] the referents for 'after' and 'alarm' in the domain model, which are after(e0, a) and alarm(a); \[2\] an implicit entity z to which 'after' really refers, which is after(e0, z); and \[3\] the implicit relation between the implicit entity z and the referent of 'alarm,' q(z, a).</Paragraph>
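Step \[3\] above can be pictured as a search over a small domain model for an implicit entity z and a relation q(z, a). The sketch below is an illustrative re-encoding of the "after the alarm" example, not the TACITUS implementation; the knowledge-base contents and relation names are assumptions.

```python
# Tiny domain model: facts are (relation, arg1, arg2) triples.
# "s1" is a hypothetical event constant for the sounding of alarm1.
KB = [
    ("sounding_of", "s1", "alarm1"),   # s1 is the sounding of alarm1
]

def resolve_metonymy(target):
    """Find an implicit entity z and relation q such that q(z, target)
    holds, so that after(e0, target) can be read as after(e0, z)."""
    for rel, z, obj in KB:
        if obj == target:
            return z, rel
    return None, None

z, q = resolve_metonymy("alarm1")
print(z, q)  # the sounding event stands in for the alarm itself
```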
      <Paragraph position="2"> Like Hobbs and Martin (1987), Stallard (1987) translates language into logical form.</Paragraph>
      <Paragraph position="3"> Stallard argues that with nominal compounds and metonymies &amp;quot;the problem is determining the binary relation which has been 'elided' from the utterance&amp;quot; (ibid., p. 180) and suggests shifting the argument place of a predicate &amp;quot;by interposing an arbitrary, sortally compatible relation between an argument place of the predicate and the actual argument&amp;quot; (ibid., p. 182). Stallard notes that &amp;quot;in any usage of the metonomy (sic) operation there is a choice about which of two clashing elements to extend&amp;quot; (ibid.).</Paragraph>
      <Paragraph position="4"> Stallard's work has not yet been implemented (ibid., p. 184).</Paragraph>
      <Paragraph position="5"> Stallard (1987) also briefly discusses anaphora resolution. Brown (1990) is beginning research on metonymy and reference resolution, particularly pronouns. This should prove a promising line of investigation because metonymy and anaphora share the function of allowing one entity to refer to another entity.</Paragraph>
      <Paragraph position="6"> Example 2 &amp;quot;The ham sandwich is waiting for his check&amp;quot; (= the male person who ordered the ham sandwich).</Paragraph>
      <Paragraph position="7"> Example 14 &amp;quot;He is waiting for his check&amp;quot; (= the male person).</Paragraph>
      <Paragraph position="8"> This similarity of function can be seen in comparing (2), which is metonymic, with (14), which is anaphoric.</Paragraph>
    </Section>
    <Section position="2" start_page="56" end_page="57" type="sub_section">
      <SectionTitle>
2.3 Relationship between Metonymy and Metaphor
</SectionTitle>
      <Paragraph position="0"> Both metonymy and metaphor have been identified as central to the development of new word senses, and hence to language change (see, e.g., Stern 1931; Waldron 1967). Some of the best examples of the differences between the two phenomena come from data used in studies of metonymic and metaphorical effects on language change.</Paragraph>
      <Paragraph position="1"> Nevertheless, there are widely differing views on which phenomenon is the more important. Some argue that metaphor is a kind of metonymy, and others propose that metonymy is a kind of metaphor, while still others suggest that they are quite different (see Fass 1988c).</Paragraph>
      <Paragraph position="2">  Computational Linguistics Volume 17, Number 1 Among the third group, two differences between metonymy and metaphor are commonly mentioned. One difference is that metonymy is founded on contiguity whereas metaphor is based on similarity (cf. Jakobson and Halle 1956; Ullmann 1962). Contiguity and similarity are two kinds of association. Contiguity refers to a state of being connected or touching whereas similarity refers to a state of being alike in essentials or having characteristics in common (Mish 1986).</Paragraph>
      <Paragraph position="3"> A second difference, advanced by Lakoff and Johnson (1980) for example, is that metaphor is &amp;quot;principally a way of conceiving of one thing in terms of another, and its primary function is understanding&amp;quot; (ibid., pp. 36-37) whereas metonymy &amp;quot;has primarily a referential function, that is, it allows us to use one entity to stand for another&amp;quot; (ibid., their italics), though it has a role in understanding because it focuses on certain aspects of what is being referred to.</Paragraph>
      <Paragraph position="4"> There is little computational work about the relationship between metonymy and metaphor. Stallard (1987) distinguishes separate roles for metonymy and metaphor in word sense extension. According to him, metonymy shifts the argument place of a predicate, whereas metaphor shifts the whole predicate. Hobbs (1983a; 1983b) writes about metaphor, and he and Martin (1987) develop a theory of &amp;quot;local pragmatics&amp;quot; that includes metonymy, but Hobbs does not seem to have written about the relationship between metaphor and metonymy.</Paragraph>
      <Paragraph position="5"> In knowledge representation, metonymic and metaphorical relations are both represented in the knowledge representation language CycL (Lenat and Guha 1990).</Paragraph>
    </Section>
    <Section position="3" start_page="57" end_page="58" type="sub_section">
      <SectionTitle>
2.4 Literalness and Nonliteralness
</SectionTitle>
      <Paragraph position="0"> Much of the preceding material assumes what Gibbs (1984) calls the &amp;quot;literal meanings hypothesis,&amp;quot; which is that sentences have well defined literal meanings and that computation of the literal meaning is a necessary step on the path to understanding speakers' utterances (ibid., p. 275).</Paragraph>
      <Paragraph position="1"> There are a number of points here, which Gibbs expands upon in his paper. One point concerns the traditional notion of literal meaning, that all sentences have literal meanings that are entirely determined by the meanings of their component words, and that the literal meaning of a sentence is its meaning independent of context. A second point concerns the traditional view of metaphor interpretation, though Gibbs' criticism applies to metonymy interpretation also. Using Searle's (1979) views on metaphor as an example, he characterizes the typical model for detecting nonliteral meaning as a three-stage process: \[1\] compute the literal meaning of a sentence, \[2\] decide if the literal meaning is defective, and if so, \[3\] seek an alternative meaning, i.e., a metaphorical one (though, presumably, a metonymic interpretation might also be sought at this stage). Gibbs (1984, p. 275) concludes that the distinction between literal and metaphoric meanings has &amp;quot;little psychological validity.&amp;quot; Among AI researchers, Martin (1990) shares many of Gibbs's views in criticizing the &amp;quot;literal meaning first approach&amp;quot; (ibid., p. 24). Martin suggests a two-stage process for interpreting sentences containing metaphors: \[1\] parse the sentence to produce a syntactic parse tree plus primal (semantic) representation, and \[2\] apply inference processes of &amp;quot;concretion&amp;quot; and &amp;quot;metaphoric viewing&amp;quot; to produce the most detailed semantic representation possible.</Paragraph>
      <Paragraph position="2"> The primal representation represents a level of semantic interpretation that is explicitly in need of further processing. Although it is obviously related to what</Paragraph>
    </Section>
    <Section position="4" start_page="58" end_page="58" type="sub_section">
      <SectionTitle>
</SectionTitle>
      <Paragraph position="0"> has traditionally been called a literal meaning, it should not be thought of as a meaning at all. The primal representation should be simply considered as an intermediate stage in the interpretation process where only syntactic and lexical information has been utilized (ibid., p. 90, his italics).</Paragraph>
      <Paragraph position="1"> However, Martin believes that at least some sentence meaning is independent of context because the primal representation contains part of the primal content of an utterance and \[t\]he Primal Content represents the meaning of an utterance that is derivable from knowledge of the conventions of a language, independent of context (ibid.).</Paragraph>
    </Section>
    <Section position="5" start_page="58" end_page="58" type="sub_section">
      <SectionTitle>
2.5 Review Summary
</SectionTitle>
      <Paragraph position="0"> The metaphor literature contains many differing views, including the comparison, interaction, selection restrictions, and conventional metaphors views. AI research on metaphor includes all of these views. Of the AI research, only Martin's work has been implemented to my knowledge. Among the points raised are that metaphorical sentences exist that do not contain selection restriction violations and that metaphor requires interpretation in context. The much smaller metonymy literature stresses the selection restrictions view too. The TEAM and TACITUS systems both seem to process metonymies.</Paragraph>
      <Paragraph position="1"> The two main differences commonly noted between metonymy and metaphor are in their function (referential for metonymy and understanding with metaphor) and the kind of relationship established (contiguity in metonymy versus similarity in metaphor). No one to my knowledge has a working system that discriminates examples of metaphor and metonymy.</Paragraph>
    </Section>
  </Section>
  <Section position="6" start_page="58" end_page="63" type="metho">
    <SectionTitle>
3. met* Method
</SectionTitle>
    <Paragraph position="0"> In this section, the basic met* algorithm is outlined. The met* method is based on the selection restriction, also known as the preference. Metonymy, metaphor, literalness, and anomaly are recognized by evaluating preferences, which produces four kinds of basic &amp;quot;preference-based&amp;quot; relationship or semantic relation: literal, metonymic, metaphorical, and anomalous. Within the method, the main difference between metonymy and metaphor is that a metonymy is viewed as consisting of one or more semantic relationships like CONTAINER FOR CONTENTS and PART FOR WHOLE, whereas a metaphor is viewed as containing a relevant analogy.</Paragraph>
    <Paragraph position="1"> I agree with Ortony's remark that metaphor be viewed as contextual anomaly, but would suggest two modifications. First, not just metaphor but all of the preference-based relations should be understood in terms of the presence or absence of contextual constraint violation. Second, I prefer the term contextual constraint violation because \[1\] one of the phenomena detected by contextual violation is anomaly and \[2\] the selection restriction/preference (on which the met* method is based) is a kind of lexical contextual constraint. The section starts with an explanation of some of the linguistic background behind the met* method.</Paragraph>
    <Section position="1" start_page="58" end_page="60" type="sub_section">
      <SectionTitle>
3.1 Linguistic Background
</SectionTitle>
      <Paragraph position="0"> I have argued elsewhere (Fass 1989a) that understanding natural language (or semantic interpretation) be viewed as the integration of constraints from language and from context. Some language constraints are syntactic, while others are semantic. Some language constraints are lexical constraints; that is, constraints possessed by lexical items (words and fixed phrases). Lexical syntactic constraints include those on word order, number, and tense. This section describes three lexical semantic constraints: preferences, assertions, and a lexical notion of relevance.</Paragraph>
      <Paragraph position="1"> Preferences (Wilks 1973), selection restrictions (Katz 1964), and expectations (Schank 1975) are the same (see Fass 1989c; Fass and Wilks 1983; Wilks and Fass in press): all are restrictions possessed by senses of lexical items of certain parts of speech about the semantic classes of lexical items with which they co-occur. Thus an adjective sense has a preference for the semantic class of nouns with which it co-occurs and a verb sense has preferences for the semantic classes of nouns that fill its case roles. For example, the main sense of the verb 'drink' prefers an animal to fill its agent case role, i.e., it is animals that drink.</Paragraph>
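The preference check described here (e.g., 'drink' prefers an animal as its agent) amounts to testing whether the candidate filler falls under the preferred semantic class. A minimal Python sketch, assuming an illustrative is-a hierarchy (the hierarchy contents are not from the paper):

```python
# Assumed toy is-a hierarchy: each sense maps to its immediate supertype.
ISA = {
    "man1": "animal1",
    "animal1": "thing1",
    "car1": "vehicle1",
    "vehicle1": "thing1",
}

def is_a(sense, ancestor):
    """True if sense equals ancestor or lies below it in the hierarchy."""
    while sense is not None:
        if sense == ancestor:
            return True
        sense = ISA.get(sense)
    return False

def preference_satisfied(filler, preference):
    """A preference is satisfied when the filler is in the preferred class."""
    return is_a(filler, preference)

print(preference_satisfied("man1", "animal1"))  # True: men are animals
print(preference_satisfied("car1", "animal1"))  # False: preference violation
```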
      <Paragraph position="2"> The assertion of semantic information was noted by Lees (1960) in the formation of noun phrases and later developed by Katz (1964) as the process of &amp;quot;attribution.&amp;quot; Assertions contain information that is possessed by senses of lexical items of certain parts of speech and that is imposed onto senses of lexical items of other parts of speech, e.g., the adjective 'female' contains information that any noun to which it applies is of the female sex.</Paragraph>
      <Paragraph position="3"> Lexical syntactic and semantic constraints are enforced at certain places in sentences which I call dependencies. Within a dependency, the lexical item whose constraints are enforced is called the source and the other lexical item is called the target (after Martin 1985). Syntactic dependencies consist of pairs of lexical items of certain parts of speech in which the source, an item from one part of speech, applies one or more syntactic constraints to the target, another lexical item. Examples of source-target pairs include a determiner and a noun, an adjective and a noun, a noun and a verb, and an adverb and a verb.</Paragraph>
      <Paragraph position="4"> Example 15 &amp;quot;The ship ploughed the waves.&amp;quot; Semantic dependencies occur in the same places as syntactic dependencies. The (metaphorical) sentence (15) contains four semantic dependencies: between the determiner 'the' and the noun 'ship,' between 'ship' and the verb stem 'plough,' between 'the' and the noun 'waves,' and between 'waves' and 'plough.' In each semantic dependency, one lexical item acts as the source and applies constraints upon the other lexical item, which acts as the target. In (15), 'the' and 'plough' both apply constraints upon 'ship,' and 'the' and 'plough' apply constraints on 'waves.' Semantic dependencies exist between not just pairs of lexical items but also pairs of senses of lexical items. For example, the metaphorical reading of (15) arises because 'waves' is understood as being the sense meaning &amp;quot;movement of water,&amp;quot; not for example the sense meaning &amp;quot;movement of the hand.&amp;quot; Semantic relations result from evaluating lexical semantic constraints in sentences. Every semantic relation has a source (a lexical item whose semantic constraints are applied) and a target (a lexical item which receives those constraints). Other terms used to refer to the source and target in a semantic relation include: vehicle and tenor (Richards 1936), subsidiary subject and principal subject (Black 1962), figurative term and literal term (Perrine 1971), referent and subject (Tversky 1977), secondary subject and primary subject (Black 1979), source and destination (Winston 1980), old domain and new domain (Hobbs 1983a), and base and target (Gentner 1983).</Paragraph>
      <Paragraph position="5"> In CS, seven kinds of semantic relation are distinguished: literal, metonymic, metaphorical, anomalous, redundant, inconsistent, and novel relations (this list may</Paragraph>
    </Section>
    <Section position="2" start_page="60" end_page="62" type="sub_section">
      <SectionTitle>
</SectionTitle>
      <Paragraph position="0"> not be exhaustive -- there could be others). Combinations of these seven semantic relations are the basis of (at minimum) literalness, metonymy, metaphor, anomaly, redundancy, contradiction, contrariness, and novelty. Semantic relations belong to two classes, the preference-based and assertion-based classes of relations, depending on the kind of lexical semantic constraint enforced. The preference-based class of semantic relations, which are the focus of this paper, contains literal, metonymic, metaphorical, and anomalous semantic relations. The assertion-based class of relations is described at greater length in Fass (1989a).</Paragraph>
      <Paragraph position="1"> Figure 1 shows the met* method laid out as a flow chart and illustrates how the preference-based class of semantic relations is discriminated. A satisfied preference (diamond 1) distinguishes literal relations from the remaining three relations, which are all nonliteral.</Paragraph>
      <Paragraph position="2"> Example 16 &amp;quot;The man drank beer.&amp;quot; There is a literal relation between 'man' and 'drink' in (16) because 'drink' prefers an animal as its agent and a man is a type of animal so the preference is satisfied.  Metonymy is viewed as a kind of domain-dependent inference. The process of finding metonymies is called metonymic inferencing. The metonymic concepts presently used are adapted from the metonymic concepts of Lakoff and Johnson (1980). Two of the metonymic concepts used are CONTAINER FOR CONTENTS and ARTIST FOR ART FORM. In (19), for example, Ted does not literally play the composer Bach -- he plays music composed by him.</Paragraph>
      <Paragraph position="3"> As Figure 1 shows, a metonymy is recognized in the met* method if a metonymic inference (diamond 2) is found. Conversely, if no successful inference is found then no metonymy is discovered and a metaphorical or anomalous semantic relation is then sought. A successful inference establishes a relationship between the original source or the target (&amp;quot;one entity&amp;quot;) and a term (&amp;quot;another that is related to it&amp;quot;) that refers to one of them.</Paragraph>
      <Paragraph position="4"> Like Stallard (1987), who noted that &amp;quot;in any usage of the metonomy (sic) operation there is a choice about which of two clashing elements to extend&amp;quot; (ibid., p. 182), the met* method allows for metonymies that develop in different &amp;quot;directions.&amp;quot; A successful inference is sometimes directed &amp;quot;forward&amp;quot; from the preference or &amp;quot;backward&amp;quot; from the target, depending on the metonymic concept (more on this shortly). It is this direction of inferencing that determines whether the source or target is substituted in a successful metonymy. The substitute source or target is used to discover another semantic relation that can be literal, metonymic again, metaphorical, or anomalous. In Figure 1, the presence of a relevant analogy (diamond 3) discriminates metaphorical relations from anomalous ones. No one else (to my knowledge) has emphasized the role of relevance in the discovery of an analogy central to a metaphor though, as noted in Section 2.2, the importance of relevance in recognizing metaphors and the centrality of some analogy have both been discussed.</Paragraph>
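The decision topology described here (diamond 1: satisfied preference; diamond 2: metonymic inference, possibly chained; diamond 3: relevant analogy) can be sketched as a small driver loop. This is an illustrative rendering of the flow, not the CS implementation; the three helper predicates are hypothetical stand-ins supplied by the caller.

```python
def met_star(source, target, prefer, infer_metonymy, relevant_analogy):
    """Discriminate a preference-based semantic relation.
    prefer(source, target)        -> bool        (diamond 1)
    infer_metonymy(source, target)-> substitute or None  (diamond 2)
    relevant_analogy(source, target) -> bool     (diamond 3)
    Returns (relation, metonymy_chain)."""
    chain = []
    while True:
        if prefer(source, target):
            return ("literal", chain)
        substitute = infer_metonymy(source, target)
        if substitute is not None:
            chain.append((target, substitute))   # metonymies may chain
            target = substitute
            continue
        if relevant_analogy(source, target):
            return ("metaphorical", chain)
        return ("anomalous", chain)

# Hypothetical example in the spirit of ARTIST FOR ART FORM:
# "Ted played Bach" -- bach1 is inferred to stand for music1,
# after which the preference of play1 (for music1) is satisfied.
prefer = lambda s, t: t == "music1"
infer = lambda s, t: "music1" if t == "bach1" else None
analogy = lambda s, t: False
print(met_star("play1", "bach1", prefer, infer, analogy))
# ('literal', [('bach1', 'music1')])
```

Note how the loop makes a metonymy always co-occur with one of the other three relations: the substitute target is re-tested, so the final answer is literal, metaphorical, or anomalous, optionally preceded by a metonymic chain.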
      <Paragraph position="5"> Example 20 &amp;quot;The car drank gasoline&amp;quot; (adapted from Wilks 1978).</Paragraph>
      <Paragraph position="6"> The form of relevance used is a lexical notion -- i.e., the third kind of lexical semantic constraint -- that what is relevant in a sentence is given by the sense of the main sentence verb being currently analyzed. Thus, it is claimed that the semantic relation between 'car' and 'drink' in (20) is metaphorical because there is a preference violation and an underlying relevant analogy between 'car' and 'animal,' the preferred agent of 'drink.' A car is not a type of animal, hence the preference violation. However, what is relevant in (20) is drinking, and there is a relevant analogy that animals and cars both use up a liquid of some kind: animals drink potable liquids while cars use gasoline. Hence the metaphorical relation between 'car' and 'drink.' Metaphor recognition in the met* method is related to all four views of metaphor described in Section 2. Recognition is viewed as a two-part process consisting of \[1\] a contextual constraint violation and \[2\] a set of &amp;quot;correspondences&amp;quot; including a key correspondence, a relevant analogy. The contextual constraint violation may be a preference violation, as in the selection restrictions view of metaphor. The set of &amp;quot;correspondences&amp;quot; is rather like the system of commonplaces between tenor and vehicle in the interaction view. The relevant analogy is related to the comparison and interaction</Paragraph>
    </Section>
    <Section position="3" start_page="62" end_page="63" type="sub_section">
      <SectionTitle>
</SectionTitle>
      <Paragraph position="0"> views, which emphasize a special comparison or an analogy as central to metaphor.</Paragraph>
      <Paragraph position="1"> Moreover, the relevant analogies seem to form groupings not unlike the conceptual metaphors found in the conventional view.</Paragraph>
      <Paragraph position="2"> Example 21 &amp;quot;The idea drank the heart.&amp;quot; Anomalous relations have neither the semantic relationships of a metonymic relation nor the relevant analogy of a metaphorical relation. Hence the semantic relation between 'idea' and 'drink' is anomalous in (21) because 'idea' is not a preferred agent of 'drink' and no metonymic link or relevant analogy can be found between animals (the preferred agent) and ideas; that is, 'idea' in (21) does not use up a liquid like 'car' does in (20). This is not to say that an anomalous relation is uninterpretable or that no analogy can possibly be found in one. In special circumstances (for example, in a poem), search for analogies might be expanded to permit weaker analogies, thereby allowing &amp;quot;ideas drinking&amp;quot; to be interpreted metaphorically.</Paragraph>
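The contrast between (20) and (21) can be sketched as a relevant-analogy test: relevance is fixed by the sense of the main verb, and an analogy holds when the actual agent has a cell analogous to the preferred agent's cell for that relevant act. The cell contents and the analogous-acts table below are illustrative assumptions, not the CS data.

```python
# Assumed differentia cells: sense -> {act: object}.
CELLS = {
    "animal1": {"drink1": "potable_liquid1"},  # animals drink potable liquids
    "car1":    {"use1":   "gasoline1"},        # cars use up gasoline
    "idea1":   {},                             # no comparable cell
}

# Hypothetical table of which acts count as analogous to which.
ANALOGOUS_ACTS = {("drink1", "use1")}

def relevant_analogy(preferred, actual, relevant_act):
    """True if the actual agent has a cell analogous to the preferred
    agent's cell for the relevant act (here: using up a liquid)."""
    if relevant_act not in CELLS.get(preferred, {}):
        return False
    return any((relevant_act, act) in ANALOGOUS_ACTS
               for act in CELLS.get(actual, {}))

print(relevant_analogy("animal1", "car1", "drink1"))   # True: metaphorical
print(relevant_analogy("animal1", "idea1", "drink1"))  # False: anomalous
```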
      <Paragraph position="3"> The topology of the flow chart in Figure 1 results from needing to satisfy a number of observations about the preference-based phenomena, particularly metonymy:
1. literalness is distinct from the others, which are all nonliteral;
2. metonymies can occur in chains (Reddy 1979);
3. metonymy always seems to occur with one of the other three; and
4. metaphor and anomaly are the hardest to tell apart (and thus require the most extended processing to distinguish).</Paragraph>
      <Paragraph position="4"> Hence a preference-based semantic relation can be either a single relation or a multi-relation. A single relation consists of one literal, metaphorical, or anomalous relation. A multi-relation contains one literal, metaphorical, or anomalous relation plus  either a single metonymy or a chain of metonymies. All these combinations, but only these, are derivable from Figure 1.</Paragraph>
      <Paragraph position="5"> Note that in the met* method as presented in Figure 1, semantic relations are tried in a certain order: literal, metonymic, metaphorical, and finally anomalous. This ordering implies that a literal interpretation is sought before a nonliteral one (cf. Harris 1976). The ordering results from thinking about discriminating the semantic relations in serial processing terms rather than parallel processing terms, particularly the serial order in which selection restrictions are evaluated and metonymic inference rules are tried: satisfied selection restrictions (indicating literalness) then metonymic inference (metonymy) then violated selection restrictions (metaphor or anomaly).</Paragraph>
      <Paragraph position="6"> Gibbs (1984) criticizes the idea that literal and nonliteral meaning can be discriminated in ordered processing stages. My response is that if the met* method is viewed in parallel processing terms then literal, metonymic, metaphorical, and anomalous interpretations are all sought at the same time and there is no ordering such that the literal meaning of a sentence is computed first and then an alternative meaning sought if the literal meaning is defective. Gibbs' other main criticism, concerning the traditional analysis of sentence meaning as composed from word meanings and independent of context, will be discussed in Section 7.</Paragraph>
      <Paragraph position="7"/>
    </Section>
  </Section>
  <Section position="7" start_page="63" end_page="65" type="metho">
    <SectionTitle>
4. Collative Semantics
</SectionTitle>
    <Paragraph position="0"> CS is a semantics for natural language processing that extends many of the main ideas behind Preference Semantics (Wilks 1973; 1975a; 1975b; 1978; see also Wilks and Fass in press). CS has four components: sense-frames, collation, semantic vectors, and screening. The met* method is part of the process of collation. Fuller and more general descriptions of the four components appear in Fass (1988a; 1989b).</Paragraph>
    <Paragraph position="1"> Sense-frames are dictionary entries for individual word senses. Sense-frames are composed of other word senses that have their own sense-frames, much like Quillian's (1967) planes. Each sense-frame consists of two parts, an arcs section and a node section, that correspond to the genus and differentia commonly found in dictionary definitions (Amsler 1980).</Paragraph>
    <Paragraph position="2"> The arcs part of a sense-frame contains a labeled arc to its genus term (a word sense with its own sense-frame). Together, the arcs of all the sense-frames comprise a densely structured semantic network of word senses called the sense-network. The node part of a sense-frame contains the differentia of the word sense defined by that sense-frame, i.e., information distinguishing that word sense from other word senses sharing the same genus. The two lexical semantic constraints mentioned earlier, preferences and assertions, play a prominent part in sense-frame nodes.</Paragraph>
    <Paragraph position="3"> Sense-frame nodes for nouns (node-type 0) resemble Wilks' (1978) pseudo-texts.</Paragraph>
    <Paragraph position="4"> The nodes contain lists of two-element and three-element lists called cells. Cells contain word senses and have a syntax modeled on English. Each cell expresses a piece of functional or structural information and can be thought of as a complex semantic feature or property of a noun. Figure 2 shows sense-frames for two senses of the noun 'crook.' Crook1 is the sense meaning &amp;quot;thief&amp;quot; and crook2 is the shepherd's tool. All the terms in sense-frames are word senses with their own sense-frames or words used in a particular sense that could be replaced by word senses. It1 refers to the word sense being defined by the sense-frame so, for example, crook1 can be substituted for it1 in \[it1, steal1, valuables1\]. Common dictionary practice is followed in that word senses are listed separately for each part of speech and numbered by frequency of occurrence. Hence in crook2, the cell \[shepherd1, use1, it1\] contains the noun sense shepherd1 while the cell \[it1, shepherd1, sheep1\] contains the verb sense shepherd1 (in a three-element cell, the second position is always a verb, and the first and third positions are always nouns).</Paragraph>
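The arcs/node division and the it1 self-reference can be pictured in Python as follows. This re-encodes the Figure 2 sense-frames as dicts for illustration only; it is a sketch, not the CS representation.

```python
# The crook1 and crook2 sense-frames (cf. Figure 2) as Python dicts:
# "arcs" holds the genus link, "node0" holds the differentia cells.
SENSE_FRAMES = {
    "crook1": {"arcs": [("supertype", "criminal1")],
               "node0": [["it1", "steal1", "valuables1"]]},
    "crook2": {"arcs": [("supertype", "stick1")],
               "node0": [["shepherd1", "use1", "it1"],
                         ["it1", "shepherd1", "sheep1"]]},
}

def expand_cells(sense):
    """Substitute the defining sense for the self-reference it1,
    e.g. crook1 for it1 in [it1, steal1, valuables1]."""
    cells = SENSE_FRAMES[sense]["node0"]
    return [[sense if term == "it1" else term for term in cell]
            for cell in cells]

print(expand_cells("crook1"))  # [['crook1', 'steal1', 'valuables1']]
```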
    <Paragraph position="5"> Sense-frame nodes for adjectives, adverbs and other modifiers (node-type 1) contain preferences and assertions but space does not permit a description of them here. Sense-frame nodes for verbs and prepositions (node-type 2) are case frames containing case subparts filled by case roles such as 'agent,' 'object,' and 'instrument.' Case subparts contain preferences, and assertions if the verb describes a state change.</Paragraph>
    <Paragraph position="6"> sf(crook1,
   \[\[arcs,
     \[\[supertype, criminal1\]\]\],
    \[node0,
     \[\[it1, steal1, valuables1\]\]\]\]).

sf(crook2,
   \[\[arcs,
     \[\[supertype, stick1\]\]\],
    \[node0,
     \[\[shepherd1, use1, it1\],
      \[it1, shepherd1, sheep1\]\]\]\]).</Paragraph>
    <Paragraph position="7"> Figure 2 Sense-frames for crook1 and crook2 (noun senses)</Paragraph>
    <Section position="1" start_page="64" end_page="65" type="sub_section">
      <SectionTitle>
</SectionTitle>
      <Paragraph position="0"> sf(eat1,
   \[\[arcs,
     \[\[supertype, \[ingest1, expend1\]\]\]\],
    \[node2,
     \[\[agent, \[preference, animal1\]\],
      \[object, \[preference, food1\]\]\]\]\]).

sf(drink1,
   \[\[arcs,
     \[\[supertype, \[ingest1, expend1\]\]\]\],
    \[node2,
     \[\[agent, \[preference, animal1\]\],
      \[object, \[preference, drink1\]\]\]\]\]).
Figure 3 Sense-frames for eat1 and drink1 (verb senses)
Figure 4 The met* method (CS version)
Figure 3 shows the sense-frames for the verb senses eat1 and drink1. In both, the agent preference is for an animal but the object preferences differ: the preference of eat1 is for food1, i.e., an edible solid, while the preference of drink1 is for drink1 (the noun sense), i.e., a potable liquid.</Paragraph>
      <Paragraph position="1"> The second component of CS is the process of collation. It is collation that contains the met* method in CS. Collation matches the sense-frames of two word senses and finds a system of multiple mappings between those sense-frames, thereby discriminating the semantic relations between the word senses. Figure 4 shows the use of the met* method in CS. Figure 4 is similar to Figure 1 except that the diamonds contain the processes used in CS to check for satisfied preferences (diamond 1), metonymic inferences (diamond 2), and relevant analogies (diamond 3). The basic mappings in collation are paths found by a graph search algorithm that operates over the sense-network. Five types of network path are distinguished. Two types of path, called ancestor and same, denote kinds of &amp;quot;inclusion,&amp;quot; e.g., that the class of vehicles includes the class of cars (this is an ancestor relationship). Satisfied preferences are indicated by network paths denoting inclusion, also known as &amp;quot;inclusive&amp;quot; paths (see diamond 1 in Figure 4). The other three types of network path, called sister, descendant, and estranged, denote &amp;quot;exclusion,&amp;quot; e.g., that the class of cars does not include the class of vehicles (this is a descendant relationship). Violated preferences are network paths denoting exclusion, also known as &amp;quot;exclusive&amp;quot; paths. These paths are used to build more complex mappings found by a frame-matching algorithm. The frame-matching algorithm matches the sets of cells from two sense-frames. The sets of cells, which need not be ordered, are inherited down the sense-network. A series of structural constraints isolates pairs of cells that are matched using the graph search algorithm. Network paths are then sought between terms occupying identical positions in those cells. 
Seven kinds of cell match are distinguished, based on the structural constraints and types of network path found. Ancestor and same are &amp;quot;inclusive&amp;quot; cell matches, e.g., \[composition1, metal1\] includes \[composition1, steel1\] because the class of metals includes the class of steels (another ancestor relationship). Sister, descendant, and estranged are types of &amp;quot;exclusive&amp;quot; cell matches, e.g., \[composition1, steel1\] and \[composition1, aluminium1\] are exclusive because the class of steels does not include the class of aluminiums; both belong to the class of metals (this is a sister relationship). The remaining cell matches, distinctive source and distinctive target, account for cells that fail the previous five kinds of cell match. For more detail on cell matches, see Fass (1988a).</Paragraph>
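The five network paths and the inclusive/exclusive distinction can be sketched as follows. The toy supertype hierarchy and the function names below are invented for illustration; meta5's graph search and frame-matching algorithms are more elaborate than this:

```python
# Toy sense-network: each sense maps to its immediate supertype.
SUPERTYPE = {
    "car1": "vehicle1", "bus1": "vehicle1", "vehicle1": "thing1",
    "steel1": "metal1", "aluminium1": "metal1", "metal1": "thing1",
    "sound1": "thing1",
}

def ancestors(sense):
    """The chain of supertypes above a sense (excluding the sense itself)."""
    chain = []
    while sense in SUPERTYPE:
        sense = SUPERTYPE[sense]
        chain.append(sense)
    return chain

def path_type(source, target):
    """Classify the network path between two senses (the five types)."""
    if source == target:
        return "same"                      # inclusive
    if source in ancestors(target):
        return "ancestor"                  # inclusive: source includes target
    if target in ancestors(source):
        return "descendant"                # exclusive
    if SUPERTYPE.get(source) is not None and \
       SUPERTYPE.get(source) == SUPERTYPE.get(target):
        return "sister"                    # exclusive: shared supertype
    return "estranged"                     # exclusive: unrelated

def cell_match(source_cell, target_cell):
    """Inclusive if every term-wise path is inclusive; otherwise exclusive.
    (Distinctive matches, for cells with no counterpart, are omitted here.)"""
    paths = [path_type(s, t) for s, t in zip(source_cell, target_cell)]
    return "inclusive" if all(p in ("same", "ancestor") for p in paths) else "exclusive"

# Vehicles include cars; [composition1, metal1] includes [composition1, steel1].
assert path_type("vehicle1", "car1") == "ancestor"
assert cell_match(["composition1", "metal1"], ["composition1", "steel1"]) == "inclusive"
assert cell_match(["composition1", "steel1"], ["composition1", "aluminium1"]) == "exclusive"
```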
      <Paragraph position="2"> A kind of lexical relevance is found dynamically from the sentence context. This notion of relevance is used in finding the relevant analogies that distinguish metaphorical from anomalous relations; it is also used when finding CO-AGENT FOR ACTIVITY metonymies. Relevance divides the set of cells from the source sense-frame into two subsets. One cell is selected as relevant given the context; the remaining cells are termed nonrelevant. Collation matches both the source's relevant and nonrelevant cells against the cells from the target sense-frame. A relevant analogy is indicated by a sister match of the source's relevant cell (see diamond 3 in Figure 4).</Paragraph>
      <Paragraph position="3"> Five types of metonymic concepts are currently distinguished. Examples of two of the metonymic concepts, CONTAINER FOR CONTENTS and ARTIST FOR ART FORM, have already been given. The remaining three are PART FOR WHOLE, PROPERTY FOR WHOLE, and CO-AGENT FOR ACTIVITY.</Paragraph>
    </Section>
  </Section>
  <Section position="8" start_page="65" end_page="67" type="metho">
    <SectionTitle>
</SectionTitle>
    <Paragraph position="0"> Example 22 &amp;quot;Arthur Ashe is black&amp;quot; (= skin colored black -> PART FOR WHOLE). Example 23 &amp;quot;John McEnroe is white&amp;quot; (= skin colored white -> PART FOR WHOLE). In (22) and (23), the skins of Arthur Ashe and John McEnroe, parts of their bodies, are colored black and white, respectively.</Paragraph>
    <Paragraph position="1"> Example 24 &amp;quot;John McEnroe is yellow&amp;quot; (= limited in bravery -> PROPERTY FOR WHOLE). Example 25 &amp;quot;Natalia Zvereva is green&amp;quot; (= limited in experience -> PROPERTY FOR WHOLE). In (24), for example, John McEnroe is limited with respect to his bravery, a property possessed by humans and other animals.</Paragraph>
    <Section position="1" start_page="66" end_page="67" type="sub_section">
      <SectionTitle>
Example 26
</SectionTitle>
      <Paragraph position="0"> &amp;quot;Ashe played McEnroe&amp;quot; (= tennis with McEnroe --~ CO-AGENT FOR ACTIVITY).</Paragraph>
      <Paragraph position="1"> These concepts are encoded in metonymic inference rules in CS (see diamond 2 in Figure 4). The rules are ordered from most common (synecdoche) to least. The order used is PART FOR WHOLE, PROPERTY FOR WHOLE, CONTAINER FOR CONTENTS, CO-AGENT FOR ACTIVITY, and ARTIST FOR ART FORM.</Paragraph>
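The ordered application of the rules can be sketched as trying a list of rule functions in sequence and stopping at the first that yields a substitute metonym. The rule bodies below are stubs invented for the example, not meta5's code:

```python
# Hypothetical rule stubs: each returns a substitute metonym or None.
def part_for_whole(source, target): return None
def property_for_whole(source, target): return None
def container_for_contents(source, target): return None
def co_agent_for_activity(source, target): return None
def artist_for_art_form(source, target):
    # Pretend the only artist the stub knows about is a composer.
    return "musical_piece1" if target == "johann_sebastian_bach" else None

# Ordered from most common (synecdoche) to least, as in the text.
METONYMIC_RULES = [part_for_whole, property_for_whole,
                   container_for_contents, co_agent_for_activity,
                   artist_for_art_form]

def try_metonymy(source, target):
    """Apply the rules in order; return (rule name, metonym) or None."""
    for rule in METONYMIC_RULES:
        metonym = rule(source, target)
        if metonym is not None:
            return rule.__name__, metonym
    return None

assert try_metonymy("music1", "johann_sebastian_bach") == \
    ("artist_for_art_form", "musical_piece1")
```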
      <Paragraph position="2"> The first two concepts, PART FOR WHOLE and PROPERTY FOR WHOLE, are source-driven; the others are target-driven. The difference in direction seems to depend on the epistemological structure of the knowledge being related by the different inferences. PART FOR WHOLE metonymies are source-driven, perhaps because the epistemological nature of parts and wholes is that a part generally belongs to fewer wholes than a whole has parts, so it makes sense to drive inferencing from a part (source) toward the whole (target) rather than vice versa.</Paragraph>
      <Paragraph position="3"> In CONTAINER FOR CONTENTS (target-driven), on the other hand, the epistemological nature of containers and contents is that the containers generally mentioned in CONTAINER FOR CONTENTS metonymies are artifacts designed for the function of containing -- hence one can usually find quite specific information about the typical contents of a certain container, for example, some glasses as in (7) -- whereas the contents do not generally have the function of being the contents of something.</Paragraph>
      <Paragraph position="4"> Hence it makes sense to drive inferencing from the container, and the function it performs, toward the contents rather than vice versa. The same reasoning applies to ARTIST FOR ART FORM (target-driven). An artist has the vocation of creating art: that is his/her purpose.</Paragraph>
      <Paragraph position="5"> A further step in collation distinguishes metaphorical from anomalous semantic relations. Recall that a metaphorical relation contains a relevant analogy, as in (15) and (20), while an anomalous relation does not, as in (21). A relevant analogy is found by matching the relevant cell from the source sense-frame with one of the cells from the target sense-frame. If the match of cells is composed of a set of sister network paths between corresponding word senses in those cells, then this is interpreted as analogical and hence indicative of a metaphorical relation. Any other match of cells is interpreted as not analogical and thus an anomalous semantic relation is recognized (see Fass 1986; 1987).</Paragraph>
      <Paragraph position="6"> The third component of CS is the semantic vector, which, like the sense-frame, is a form of representation; but sense-frames represent lexical knowledge, whereas semantic vectors represent coherence. Semantic vectors are therefore described as a kind of coherence representation. A semantic vector is a data structure that contains nested labels and ordered arrays structured by a simple dependency syntax. The labels form into sets.</Paragraph>
      <Paragraph position="7"> The outer sets of labels indicate the application of the three kinds of lexical semantic constraints. The outermost set of labels is 'preference' and 'assertion.' The middle set is 'relevant' and 'nonrelevant.' The innermost set is the kind of mapping used: 'network path' and 'cell matches.' The nesting of labels shows the order in which each source of knowledge was introduced. The ordered arrays represent the subkinds of each kind of mapping. Five-column arrays are for the five network paths; seven-column arrays are for the seven types of cell match. Each column contains a non-negative number giving the occurrences of a particular network path or cell match.</Paragraph>
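As an illustration, a semantic vector of this shape can be written out as nested Python lists, with a helper for totalling an array's columns. The key names network_path and cell_match are my rendering of the labels; the numbers are those of the 'car drank gasoline' example analyzed later in the paper:

```python
# Column order for the five-column array: ancestor, same, sister,
# descendant, estranged.  Seven-column arrays add distinctive-source
# and distinctive-target columns at the end.
semantic_vector = \
    ["preference",
     [["network_path", [0, 0, 0, 0, 1]],          # one estranged path
      ["cell_match",
       [["relevant",     [0, 0, 1, 0, 0, 0, 10]], # one sister match
        ["non_relevant", [0, 3, 2, 0, 0, 2, 5]]]]]]

def column_total(array):
    """Cells in matched pairs count twice; distinctive cells count once."""
    return sum(array[:5]) * 2 + sum(array[5:]) * 1

relevant = semantic_vector[1][1][1][0][1]
nonrelevant = semantic_vector[1][1][1][1][1]
print(column_total(relevant), column_total(nonrelevant))  # 12 17
```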
      <Paragraph position="8"> The fourth component of CS is the process of screening. During analysis of a sentence constituent, a semantic vector is created for every pairwise combination of word senses. These word sense combinations are called semantic readings or simply &amp;quot;readings.&amp;quot; Each reading has an associated semantic vector. Screening chooses between two semantic vectors and hence their attached semantic readings. Rank orderings among semantic relations are applied. In the event of a tie, a measure of conceptual similarity is used.</Paragraph>
      <Paragraph position="9"> The ranking of semantic relations aims to achieve the most coherent possible interpretation of a reading. The class of preference-based semantic relations takes precedence over the class of assertion-based semantic relations for lexical disambiguation. The rank order among preference-based semantic relations is literal -> metaphorical -> anomalous.</Paragraph>
      <Paragraph position="10"> If the semantic vectors are still tied then the measure of conceptual similarity is employed. This measure was initially developed to test a claim by Tourangeau and Sternberg (1982) about the aptness of a metaphor. They contend that aptness is a function of the distance between the conceptual domains of the source and target involved: the claim is that the more distant the domains, the better the metaphor. This is discussed further in Section 5. The conceptual similarity measure is also used for lexical ambiguity resolution (see Fass 1988c).</Paragraph>
    </Section>
  </Section>
  <Section position="9" start_page="67" end_page="74" type="metho">
    <SectionTitle>
5. The Meta5 Program
</SectionTitle>
    <Paragraph position="0"> CS has been implemented in the meta5 natural language program. The meta5 program is written in Quintus Prolog and consists of a lexicon holding the sense-frames of just over 500 word senses, a small grammar, and semantic routines that embody collation and screening, the two processes of CS. The program is syntax-driven, a form of control carried over from the structure of earlier programs by Boguraev (1979) and Huang (1985), on which meta5 is based. Meta5 analyzes sentences, discriminates the seven kinds of semantic relation between pairs of word senses in those sentences (i.e., the program recognizes metonymies, metaphors, and so on), and resolves any lexical ambiguity in those sentences. Meta5 analyzes all the sentences given in Sections 3 and 4, plus a couple more metaphorical sentences discussed in Section 7.</Paragraph>
    <Paragraph position="1"> Below are simplified versions of some of the metonymic inference rules used in meta5. The metonymic concepts used in CS contain three key elements: the conceptual relationship involved, the direction of inference, and a replacement of the source or target. The metonymic inference rules in meta5 contain all three key elements. The rules, though written in a Prolog-like format, assume no knowledge of Prolog on the part of the reader and fit the role of metonymy shown in Figures 1 and 4.</Paragraph>
    <Paragraph position="2"> Each metonymic inference rule has a left-hand side and a right-hand side. The left-hand side is the topmost statement and is of the form metonymic_inference_rule(Source, Target). The right-hand side consists of the remaining statements. These statements represent the conceptual relationship and the direction of inference, except for the bottommost one, which controls the substitution of the discovered metonym for either the source or target: this statement is always a call to find a new sense-network path.</Paragraph>
    <Paragraph position="3"> Statement \[1\] represents the conceptual relationship and direction of inference. The conceptual relationship is that the source is a property possessed by the whole in a property-whole relation. The inference is driven from the source: find_cell searches through the</Paragraph>
    <Section position="1" start_page="68" end_page="70" type="sub_section">
      <SectionTitle>
</SectionTitle>
      <Paragraph position="0"> source's list of cells for one referring to a &amp;quot;whole&amp;quot; of which the source is a &amp;quot;part.&amp;quot; Statement \[2\] controls the substitution of the discovered metonym: the &amp;quot;whole&amp;quot; is the substitute metonym that replaces the source, and the next sense-network path is sought between the whole and the target.</Paragraph>
      <Paragraph position="1"> This metonymic concept is target-driven. The target is the &amp;quot;container&amp;quot; in a container-contents relation (\[1\]). The &amp;quot;contents&amp;quot; is the substitute metonym that replaces the target. The next sense-network path is sought between the source and the contents (\[2\]). Again, the inference in ARTIST FOR ART FORM is from the target. The target is a person who is an &amp;quot;artist&amp;quot; in an artist-art form relation. The occupation of the person is found by searching up the sense-network (\[1\]). The list of cells associated with the occupation is searched for a cell describing the main activity involved in the occupation (\[2\]), e.g., a cook cooks food and an artist makes art forms. Checks are done to confirm that any activity found is indeed making an art form, i.e., that the &amp;quot;making&amp;quot; involved is a type of creating (\[3\]) and that the &amp;quot;art form&amp;quot; is a type of art_form1 (\[4\]). The &amp;quot;art form&amp;quot; is the substitute metonym that replaces the target. A new sense-network path is computed between the source and the art form (\[5\]). I will now describe how meta5 recognizes some metonymies and metaphors.</Paragraph>
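The ARTIST FOR ART FORM steps \[1\]-\[5\] can be paraphrased in Python as below. The toy knowledge tables and helper names (isa, MAIN_ACTIVITY) are invented for the sketch and stand in for the sense-network and inherited cell lists:

```python
# Toy knowledge, standing in for the sense-network and cell lists.
SUPERTYPE = {"johann_sebastian_bach": "composer1",
             "musical_piece1": "art_form1", "compose1": "create1"}
MAIN_ACTIVITY = {"composer1": ("compose1", "musical_piece1")}

def isa(sense, kind):
    """Follow supertype links to test class membership."""
    while sense in SUPERTYPE:
        sense = SUPERTYPE[sense]
        if sense == kind:
            return True
    return sense == kind

def artist_for_art_form(target):
    occupation = SUPERTYPE.get(target)            # [1] find the occupation
    activity = MAIN_ACTIVITY.get(occupation)      # [2] its main activity
    if activity is None:
        return None
    making, art_form = activity
    if not isa(making, "create1"):                # [3] "making" is creating
        return None
    if not isa(art_form, "art_form1"):            # [4] result is an art form
        return None
    return art_form                               # [5] substitute metonym

assert artist_for_art_form("johann_sebastian_bach") == "musical_piece1"
```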
      <Paragraph position="2"> Example 19 &amp;quot;Ted played Bach&amp;quot; (= the music of Bach).</Paragraph>
      <Paragraph position="3"> In (19), between 'Bach' and the twelfth sense of 'play' in meta5's lexicon (meaning &amp;quot;to play music&amp;quot;), there is a chain of metonymies plus a literal relation. The chain consists of ARTIST FOR ART FORM and CONTAINER FOR CONTENTS metonymies. Both metonymic concepts are target-driven. In ARTIST FOR ART FORM the inference is from the ARTIST (the target) to the ART FORM (the source), so the substitute metonym replaces the target (the ARTIST) if the inference is successful.</Paragraph>
      <Paragraph position="4"> The sense-frames of the verb sense play12 and the noun senses music1 and johann_sebastian_bach are shown in Figure 5. The semantic relation results from matching the object preference of play12, which is for music, against the surface object, which is 'Bach,' short for 'Johann Sebastian Bach.' The preference is the source and the surface object is the target.</Paragraph>
      <Paragraph position="5"> sf(play12, \[\[arcs, \[\[supertype, perform1\]\]\], \[node2, \[\[agent, \[preference, human_being1\]\], \[object, \[preference, music1\]\]\]\]\]). sf(music1, \[\[arcs, \[\[supertype, \[sound1, art_form1\]\]\]\], \[node0, \[\[musician1, play12, it1\]\]\]\]). sf(johann_sebastian_bach, \[\[arcs, \[\[supertype, composer1\]\]\], \[node0, \[\[animacy1, dead1\], \[sex1, male1\], \[born1, 1685\], \[died1, 1750\]\]\]\]).</Paragraph>
      <Paragraph position="6"> Figure 5 Sense-frames for play12 (verb sense), music1 and johann_sebastian_bach (noun senses)</Paragraph>
      <Paragraph position="7"> We will follow what happens using the flow chart of Figure 4. (Enter diamond 1 of the chart.) The sense-network path between the source (music1) and the target (johann_sebastian_bach) is sought. The path is not inclusive because johann_sebastian_bach is not a type of music1.</Paragraph>
      <Paragraph position="8"> (Enter diamond 2 of the chart.) Metonymic inference rules are applied. The rules for PART FOR WHOLE, PROPERTY FOR WHOLE, CONTAINER FOR CONTENTS, and CO-AGENT FOR ACTIVITY are tried in turn, but all fail. The rule for ARTIST FOR ART FORM, however, succeeds. The discovered metonymic inference is that johann_sebastian_bach (the ARTIST) composes musical pieces (the ART FORM). The metonymic inference is driven from the target (the ARTIST), which is johann_sebastian_bach. The successful metonymic inference, using the ARTIST FOR ART FORM inference rule above, is as follows: \[1\] johann_sebastian_bach (the ARTIST) is a composer1, and \[2\] composers compose1 musical pieces (the ART FORM). Additional tests confirm \[2\]: \[3\] composing is a type of creating, and \[4\] a musical_piece1 is a type of art_form1.</Paragraph>
      <Paragraph position="9"> (Enter the leftmost statement box -- also step \[5\] of the ARTIST FOR ART FORM inference rule above.) The original target (johann_sebastian_bach) is replaced by the substitute metonym (musical_piece1).</Paragraph>
      <Paragraph position="10"> (Enter diamond 1 for a second time.) The sense-network path between the source (music1) and the new target (musical_piece1) is sought. The path is not inclusive. (Enter diamond 2 for a second time.) Metonymic inference rules are applied. The rules for PART FOR WHOLE and PROPERTY FOR WHOLE fail, but the rule for CONTAINER FOR CONTENTS succeeds. The successful inference, using the description of the CONTAINER-CONTENTS inference rule given previously, is that \[1\] a musical_piece1 (the CONTAINER) contains music1 (the CONTENTS).</Paragraph>
      <Paragraph position="11"> (Enter the leftmost statement box for a second time.) The direction of inference in the CONTAINER FOR CONTENTS metonymic concept is from the target (the CONTAINER) towards the source (the CONTENTS), so \[2\] the target (the CONTAINER) is replaced by the substitute metonym when an inference is successful. Hence in our example, the target (musical_piece1) is again replaced by a substitute metonym (music1). The source, which is music1, the object preference of play12, remains unchanged. (Enter diamond 1 for a third time.) The sense-network path between the source (music1) and the latest target (music1) is sought. The path is inclusive (music1 is a type of music1), so a literal relation is found.</Paragraph>
      <Paragraph position="12"> (Exit the chart.) The processing of the preference-based semantic relation(s) between play12 (via its preference for music1) and johann_sebastian_bach is complete.</Paragraph>
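The control flow just traced (try for an inclusive path; otherwise apply the metonymic inference rules, substitute the discovered metonym, and repeat) can be compressed into a small loop. This is an illustrative reconstruction, with the two inferences for (19) hard-coded rather than derived from sense-frames:

```python
# Hard-coded stand-ins for the sense-network and inference rules.
INCLUSIVE = {("music1", "music1")}
METONYM = {"johann_sebastian_bach": "musical_piece1",  # ARTIST FOR ART FORM
           "musical_piece1": "music1"}                 # CONTAINER FOR CONTENTS

def relate(source, target, max_steps=5):
    """Return the chain of targets tried and the resulting relation."""
    chain = [target]
    for _ in range(max_steps):
        if (source, target) in INCLUSIVE:
            return chain, "literal"
        if target not in METONYM:
            return chain, "no metonymy"
        target = METONYM[target]       # replace the target with the metonym
        chain.append(target)
    return chain, "gave up"

chain, relation = relate("music1", "johann_sebastian_bach")
assert relation == "literal"
assert chain == ["johann_sebastian_bach", "musical_piece1", "music1"]
```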
    </Section>
    <Section position="2" start_page="70" end_page="72" type="sub_section">
      <SectionTitle>
</SectionTitle>
      <Paragraph position="0"> After an initial preference violation (Johann Sebastian Bach is not a kind of music), the semantic relation found was an ARTIST FOR ART FORM metonymic relation (that johann_sebastian_bach composes musical pieces) followed by a CONTAINER FOR CONTENTS metonymic relation (that musical pieces contain music) followed by a literal relation (that music is music).</Paragraph>
      <Paragraph position="1"> Example 20 &amp;quot;The car drank gasoline.&amp;quot; There is a metaphorical relation between car1 and the verb sense drink1 in (20). The source is drink1, whose agent preference is animal1, and the target is car1 (see Figure 6).</Paragraph>
      <Paragraph position="2"> A metaphorical relation is sought after failing to find an inclusive network path or a metonymic inference between animal1 and car1, hence the network path between animal1 and car1 must be exclusive. The network path found is an estranged one. The second stage is the match between the relevant cell of animal1 and the cells of car1. In the present example, drink1 is relevant. The list of cells for animal1 is searched for one referring to drinking. The relevant cell in the list is \[animal1, drink1, drink1\], which is matched against the inherited cells of car1 (see Figure 7). A sister match is found between \[animal1, drink1, drink1\] and \[car1, use2, gasoline1\] from car1. The sister match is composed of two sister paths found in the sense-network. The first sister path is between the verb senses drink1 and use2, which are both types of expending (Figure 8). The second path is between the noun senses drink1 and gasoline1, which are both types of liquid (Figure 9). The effect of the network paths is to establish correspondences between the two cells such that an analogy is &amp;quot;discovered&amp;quot; that animals drink potable liquids as cars use gasoline. Note that, like Gentner's (1983) systematicity principle, the correspondences found are structural and independent of the content of the word senses they connect. Note also that the two cells have an underlying similarity or &amp;quot;ground&amp;quot; (Richards 1936) in that both refer to the expenditure of liquids. This second stage of finding a relevant analogy seems the crucial one in metaphor recognition.</Paragraph>
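The discovery of the relevant analogy can be sketched as below. The suffixes _verb and _noun are added here only to keep the two senses of drink1 distinct in one namespace; the toy network fragments mirror the expending and liquid hierarchies of Figures 8 and 9, plus the corresponding plough/sail paths used for (15):

```python
# Toy sense-network fragments: drinking and using are kinds of expending
# (Figure 8); drink (the noun) and gasoline are kinds of liquid (Figure 9).
SUPERTYPE = {"drink1_verb": "expend1", "use2": "expend1",
             "drink1_noun": "liquid1", "gasoline1": "liquid1",
             "plough2": "move_through1", "sail2": "move_through1",
             "soil1": "medium1", "water2": "medium1"}

def sisters(a, b):
    """Two distinct senses sharing an immediate supertype."""
    pa, pb = SUPERTYPE.get(a), SUPERTYPE.get(b)
    return a != b and pa is not None and pa == pb

def relevant_analogy(relevant_cell, target_cells):
    """Find a target cell whose verb and object are sisters of the source's."""
    _, s_verb, s_obj = relevant_cell
    for cell in target_cells:
        _, t_verb, t_obj = cell
        if sisters(s_verb, t_verb) and sisters(s_obj, t_obj):
            return cell
    return None

car_cells = [["car1", "carry1", "passenger1"],
             ["car1", "use2", "gasoline1"]]
found = relevant_analogy(["animal1", "drink1_verb", "drink1_noun"], car_cells)
assert found == ["car1", "use2", "gasoline1"]  # animals drink as cars use gasoline
```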
      <Paragraph position="3"> Figure 10 shows the match of the nonrelevant cells from animal1 and carl. The cell \[carl, use2, gasoline1\] has been removed. There are three inclusive cell matches as animals and cars share physical objectlike properties of boundedness, three dimensions, sf(drinkl, \[\[arcs, \[\[supertype, \[ingest1, expendl\]\]J\], \[node2, \[\[agent, \[preference, anlrnall\] \], \[object, \[preference, drinkl\]\]\]\]\]).</Paragraph>
      <Paragraph position="4"> sf(animatl, sf(carl, \[\[arcs, \[\[arcs, \[\[supertype, organism1\]\]\], \[\[supertype, motor_vehicle1\]\]\], \[nodeO, \[nodeO, .</Paragraph>
      <Paragraph position="5"> \[\[biology1, animal1\], \[lit1, carry1, passenger1\]\]\]\]).</Paragraph>
      <Paragraph position="6"> lit1, drink1, drink1\], lit1, eat1, food1\]\]\]\]).</Paragraph>
      <Paragraph position="7"> Figure 6 Sense-frames for drink1 (verb sense), animal1 and carl (noun senses)  \[behaviourl, solid1\], \[animacyl, nonliving1\], \[carl, roll1, \[on3, land1\]\], \[composition1, steel1\], \[driver1, drivel, carl\], \[carl, have1, \[4, wheel1\]\], \[carl, have1, engine1\], \[carl, use2, gasoline1\], \[caN, carry1, passenger1\]\]</Paragraph>
    </Section>
    <Section position="3" start_page="72" end_page="74" type="sub_section">
      <SectionTitle>
Fass Discriminating Metonymy
</SectionTitle>
      <Paragraph position="0"> Non-relevant cells of animal1 Non-relevant c~lJ~ of carl Cell matches (SOURCE) (TARGET} \[\[bour=dsl, distinct1 \] \]\]bounds I distinct1 \] i sam \]extent1, three dimensional1|, \[extent , three d mens ona \],13 e \[behaviourt, solid1|, \[behaviourt, sollidl\], ICell matches lanimacyl, livingt\], \[animacyl, nonlivingl\], ~2 sister \]composition1, flesh1|, \]composition1, steel1|, ~ cell matches \]animal1, eat1, food1|, J2 distinctive \[biol0gyt, animalt\]\] I source celts (of animalt) \]carl, roHt, \]on3, land1||, | \]driver1, drivel, carl|, 15 distinctive \]carl, hayer, \[4. wheelt\]\], target cells \[cart, hayer, enginet\], (of carl)  \[cart, carryt, passenger1|| Figure 10 Matches of non-relevant cells from animal1 and carl \[preference, \[\[network~oath, \[0, 0, 0, 0, 1\]\], \[cellmatch, \[\[relevant, \[0, 0, 1, 0, 0, 0, 10\]\], \]non_relevant, \[0, 3, 2, o, o, 2, 5\]\]\]\]\]\]  Semantic vector for a metaphorical semantic relation and solidity. Two cell matches are exclusive. Animals are composed of flesh, whereas cars are composed of steel. Animals are living, whereas cars are nonliving. There are two distinctive cells of animal1 and five distinctive cells of carl. Tourangeau and Sternberg's (1982) hypothesis predicts that the greater the distance between the conceptual domains of the terms involved in a metaphor, the more apt the metaphor. The proportion of similarities (inclusive cell matches) to differences (exclusive cell matches) is 3 to 2, which is a middling distance suggesting, tentatively, an unimposing metaphor. All of these matches made by collation are recorded in the semantic vector shown in Figure 11. The crucial elements of the metaphorical relation in (20) are the preference violation and the relevant analogy. In Figure 11, the preference violation has been recorded as the 1 in the first array and the relevant analogy is the 1 in the second array. 
Information about the distance between conceptual domains is recorded in the third array.</Paragraph>
      <Paragraph position="1"> The 'preference' label indicates that a preference has been matched (rather than an assertion). The five columns of the first array record the presence of ancestor, same, sister, descendant and estranged network paths respectively. When a preference is evaluated, only one network path is found, hence the single 1 in the fifth column, which indicates that an estranged network path was found between animal1 and car1. Cell matches are recorded in the second and third arrays, which each contain seven columns. Those columns record the presence of ancestor, same, sister, descendant, estranged, distinctive source, and distinctive target cell matches respectively. The 1 in the third column of the second array is the relevant analogy -- a sister match of the relevant cell \[animal1, drink1, drink1\] and the cell \[car1, use2, gasoline1\]. The 10 is the ten distinctive cells of car1 that did not match \[animal1, drink1, drink1\]. This is the match of 12 cells, 1 from the source and 11 from the target (see Figure 7). The sum of array columns is: ((0+0+1+0+0) x 2) + ((0+10) x 1) = (1 x 2) + (10 x 1) = 12.</Paragraph>
      <Paragraph position="2"> The 3 similarities, 2 differences, 2 distinctive cells of animal1 and 5 distinctive cells of car1 are the nonzero numbers of the final array. The 3 similarities are all same cell matches; the 2 differences are both sister cell matches. A total of 17 cells are matched, 7 from the source and 10 from the target (see Figure 10). The total of array columns is: ((0+3+2+0+0) x 2) + ((2+5) x 1) = (5 x 2) + (7 x 1) = 17.</Paragraph>
      <Paragraph position="4"> &amp;quot;The ship ploughed the waves.&amp;quot; In (15), there is a metaphorical relation between a sense of the noun 'ship' and the second sense of the verb 'plough' :in meta5's lexicon. Note that 'plough,' like 'drink,' belongs to several parts of speech. Figure 12 shows the sense-frames for the verb sense plough2, the noun sense plough1, which is the instrument preference of plough2, and the noun sense ship1.</Paragraph>
      <Paragraph position="5"> In (15), meta5 matches senses of 'ship' against senses of 'plough.' When meta5 pairs ship1 with plough2, it calls upon collation to match ship1 against the noun sense plough1, the instrument preference of plough2.</Paragraph>
      <Paragraph position="6"> First, the graph search algorithm searches the sense-network for a path between plough1 (which is the preference) and ship1 and finds an estranged network path between them, i.e., a ship is not a kind of plough, so plough2's instrument preference is violated.</Paragraph>
      <Paragraph position="7"> Next, collation inherits down lists of cells for plough1 and ship1 from their superordinates in the sense-network. What is relevant in the present context is the action of ploughing because (15) is about a ship ploughing waves. Collation then runs through the list of inherited cells for the noun sense plough1 searching for a cell that refers to the action of ploughing in the sense currently under examination by meta5, plough2.</Paragraph>
      <Paragraph position="9"> Figure 12 Sense-frames for plough2 (verb sense), plough1 and ship1 (noun senses)</Paragraph>
      <Paragraph position="11"> Figure 14 Matches of non-relevant cells from plough1 and ship1
Collation finds a relevant cell \[plough1, plough2, soil1\] and uses its frame-matching algorithm to seek a match for the cell against the list of inherited cells for ship1, shown in Figure 13 (for ease of reading, it1 has again been replaced by the word senses being defined). The algorithm finds a match with \[ship1, sail2, water2\] (highlighted in Figure 13), and hence collation &amp;quot;discovers&amp;quot; a relevant analogy that both ships and ploughs move through a medium, i.e., that ploughs plough through soil as ships sail through water.</Paragraph>
      <Paragraph position="12"> Finally, collation employs the frame matching algorithm a second time to match together the remaining nonrelevant cells of plough1 and ship1 (see Figure 14). The cell \[ship1, sail2, water2\] is removed to prevent it from being used a second time.</Paragraph>
      <Paragraph position="13"> Figure 15 shows the semantic vector produced. As with Figure 11, it shows a metaphorical relation. There is a preference violation, an estranged network path indicated by the 1 in the fifth column of the first array. There is also a relevant analogy, shown by the 1 in the third column of the second array: the analogical match of the cells \[plough1, plough2, soil1\] and \[ship1, sail2, water2\]. The second array shows that 11 cells are matched, 1 from the source and 10 from the target (check against Figure 13). The sum of the array's columns is: ((0+0+1+0+0) x 2) + ((0+9) x 1) = (1 x 2) + (9 x 1) = 11.
Figure 15 Semantic vector for another metaphorical semantic relation
In the third array, the match of nonrelevant cells, there is 1 ancestor match, 4 same matches, 1 sister match, and 3 distinctive cells of ship1. Fifteen cells are matched, 6 from the source and 9 from the target (see Figure 14). The totals are: ((1+4+1+0+0) x 2) + ((0+3) x 1) = (6 x 2) + (3 x 1) = 15.</Paragraph>
      <Paragraph position="14"> Semantic vectors can represent all the semantic relations except metonymic ones. The reason is that metonymic relations, unlike the others, are not discriminated by CS in terms of only five kinds of network path and seven kinds of cell matches. Instead, they consist of combinations of network paths and specialized matches of cells that have not fallen into a regular enough pattern to be represented systematically.</Paragraph>
    </Section>
  </Section>
</Paper>