<?xml version="1.0" standalone="yes"?>
<Paper uid="J91-1003">
  <Title>met*: A Method for Discriminating Metonymy and Metaphor by Computer</Title>
  <Section position="10" start_page="74" end_page="83" type="evalu">
    <SectionTitle>
6. Extensions
</SectionTitle>
    <Paragraph position="0"> Even for those semantic dependencies investigated, the interpretation of semantic relations seems to require more complexity than has been described so far in this paper.</Paragraph>
    <Paragraph position="1"> Consider the differences between the following sentences: Example 20 &quot;The car drank gasoline.&quot; Example 27 &quot;The car drank coffee.&quot; Intuitively, sentence (20) is metaphorical while (27) is metaphorical/anomalous. In (20), the semantic relation between 'car' and 'drink' is thought to be metaphorical, and the isolated semantic relation between just 'drink' and 'gasoline' is anomalous, but the sentence as a whole is metaphorical because it is metaphorical that cars should use up gasoline.</Paragraph>
    <Paragraph position="2"> In (27), the semantic relation between 'car' and 'drink' is metaphorical; the semantic relation between just 'drink' and 'coffee' is literal; yet the effect of (27) as a whole is metaphorical/anomalous. The object preference of 'drink' is for a drink, i.e., a potable liquid. It seems that it is metaphorical for cars to &amp;quot;drink&amp;quot; a liquid commonly used up by cars, e.g., gasoline, but anomalous if the liquid has nothing to do with cars, e.g., coffee, as in (27).</Paragraph>
    <Paragraph position="3"> The problem of understanding the differences between sentences (20) and (27) requires some further observations about the nature of semantic relations, principally</Paragraph>
    <Section position="1" start_page="76" end_page="80" type="sub_section">
      <SectionTitle>
Fass Discriminating Metonymy
</SectionTitle>
      <Paragraph position="0"> that the differences are caused by the combinations of semantic relations found in the sentences and the relationships between those relations. Below is a suggestion as to how deeper semantic processing might discriminate the differences between the two sentences.</Paragraph>
      <Paragraph position="1"> Before getting to the deeper processing, we need a better semantic vector notation. The better semantic vector notation, which developed from a discussion with Afzal Ballim, is a modification of the notation shown in Section 5. The key differences are reformulation by rewriting the five- and seven-column arrays in terms of the predicate-argument notation used in the rest of semantic vectors, and extension by adding the domain knowledge connected by every network path and cell match.</Paragraph>
      <Paragraph position="2"> Figure 16 shows the semantic vector in Figure 11 reformulated and extended. The advantage of vectors like the one in Figure 16 is that they record both how the sense-frames of two word senses are matched (i.e., as various kinds of network path and cell match) and what information in the sense-frames is matched (i.e., all the cells). For example, the part of Figure 16 that begins &quot;\[relevant, ...&quot; contains all the information found in Figure 7, the match of the relevant cell from animal1 against the cells of car1, both the types of cell matches and the cells matched. The equivalent part of Figure 11 only records the types of cell matches. Recording the contents of the matched cells is useful because it enables a deepened analysis of semantic relations. Such an analysis is needed to detect the differences between (20) and (27).</Paragraph>
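The extended vector statement described here lends itself to a nested predicate-argument structure. Below is a minimal sketch in Python of how such a statement might be held as data, modeled loosely on the vector statement for the drink1/gasoline1 match; the dictionary layout and accessor are illustrative assumptions, not Fass's actual notation:

```python
# Hypothetical encoding of an extended semantic vector statement.
# It records both HOW the sense-frames matched (the kind of network
# path or cell match) and WHAT was matched (the cells themselves).
vector_statement = {
    "source": {"explicit": "drink1", "implicit": "drink1"},
    "target": "gasoline1",
    "preference": {
        "network_path": ("sister", ["drink1", "gasoline1"]),
        "cell_match": {
            "relevant": [
                ("sister",
                 (["animal1", "drink1", "drink1"],      # source cell
                  ["vehicle1", "use2", "gasoline1"])),  # target cell
            ],
        },
    },
}

def relevant_cell_matches(vec):
    """Return (match_kind, source_cell, target_cell) triples for the
    relevant-cell matches recorded in a vector statement."""
    return [(kind, src, tgt)
            for kind, (src, tgt) in vec["preference"]["cell_match"]["relevant"]]

print(relevant_cell_matches(vector_statement))
```

Unlike the earlier notation, which kept only the kinds of cell matches, both the match kind ("sister") and the matched cells themselves can be read back from this structure.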
      <Paragraph position="3"> In the description of CS in Section 4, collation discriminates the one or more semantic relations in each semantic dependency, but treats the semantic relations in one dependency as isolated from and unaffected by the semantic relations in another dependency. What is needed is extra processing that interprets the semantic relation(s) in a later dependency with respect to the semantic relation(s) established in an earlier one. This processing matches the domain knowledge in semantic vectors, i.e., this processing is a comparison of coherence representations.</Paragraph>
      <Paragraph position="5"> [Figure 18: Vector statement of match of relevant cell from drink1 against cells of gasoline1 (noun senses)]</Paragraph>
      <Paragraph position="6"> In sentences such as (20) and (27) there are two key semantic dependencies. The first one is between the subject noun and the verb; the second is between the verb and object noun. In each dependency, the source is the verb (through its agent and object preferences) and the targets are the nouns. Semantic relations are found for each dependency. One way to detect the difference between metaphorical sentences such as (20) and metaphorical/anomalous ones such as (27) is in each sentence to consult the semantic vectors produced in its two main semantic dependencies and compare the matches of the relevant cells that are found by collation.</Paragraph>
      <Paragraph position="7"> Let us go through such an analysis using CS, starting with the first semantic dependency between subject noun and verb. In this semantic dependency in both (20) and (27), a relevant analogy is discovered as part of a metaphorical relation between the target car1 and animal1, the agent preference of the source drink1. The semantic vector in Figure 16 records the two cells that figure in that relevant analogy. Figure 17 shows the same information from the semantic vector but written as a statement.</Paragraph>
      <Paragraph position="8"> When the second semantic dependency is analyzed in (20), the target is gasoline1 and is matched against the noun sense drink1, the object preference of the source drink1 (the verb sense). A semantic vector is produced. The relevant cell found in the noun sense drink1 is \[animal1, drink1, drink1\]. Its match against \[vehicle1, use2, gasoline1\], a cell from gasoline1, is shown in the vector statement in Figure 18. The match is a sister match, indicating a relevant analogy.</Paragraph>
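The distinction between a sister match (a relevant analogy) and an ancestor match (not one) can be sketched as a path classification over a sense network. The tiny ISA hierarchy and the classification rules below are illustrative assumptions, not the actual CS sense-network:

```python
# Toy sketch of classifying the sense-network path between two word
# senses, using the path kinds the text appeals to: same, ancestor,
# descendant, sister, estranged.  The hierarchy is an assumption.

ISA = {                       # child sense -> parent sense
    "car1": "vehicle1",
    "human_being1": "animal1",
    "vehicle1": "thing1",
    "animal1": "thing1",
}

def ancestors(sense):
    """Chain of senses from `sense` up to the top of the hierarchy."""
    chain = [sense]
    while chain[-1] in ISA:
        chain.append(ISA[chain[-1]])
    return chain

def path_kind(a, b):
    if a == b:
        return "same"
    if a in ancestors(b)[1:]:
        return "ancestor"     # a dominates b
    if b in ancestors(a)[1:]:
        return "descendant"   # b dominates a
    if ISA.get(a) is not None and ISA.get(a) == ISA.get(b):
        return "sister"       # immediate common parent
    return "estranged"

print(path_kind("animal1", "vehicle1"))      # sister
print(path_kind("animal1", "human_being1"))  # ancestor
print(path_kind("animal1", "car1"))          # estranged
```

Under this toy hierarchy, animal1 and vehicle1 are sisters under thing1 (a relevant analogy), while animal1 dominates human_being1 (an ancestor match, hence no relevant analogy), matching the two cases discussed in the text.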
      <Paragraph position="9"> Now this is peculiar because &amp;quot;drinking gasoline&amp;quot; is anomalous, yet a relevant analogy has been found and this paper has argued that relevant analogies are special to metaphorical relations. One possible explanation is that differences exist between the recognition of metaphorical relations that concern agents and metaphorical relations that concern objects and other case roles. It may be that metaphorical relations are indicated by a relevant analogy, but only in selected circumstances. This needs further investigation.</Paragraph>
      <Paragraph position="10"> [Figure 19: Vector statement of match of relevant cell from drink1 against cells from coffee1 (noun senses)] To return to the analysis of (20), what appears to be important in determining that (20) is a metaphorical sentence is the comparison of the two pairs of matched relevant cells: \[\[animal1, drink1, drink1\], \[car1, use2, gasoline1\]\] \[\[animal1, drink1, drink1\], \[vehicle1, use2, gasoline1\]\] The two source cells are the same and the two target cells, \[car1, use2, gasoline1\] and \[vehicle1, use2, gasoline1\], are almost identical, indicating that the same basic analogy runs through the whole of (20); hence the sentence as a whole is metaphorical. Now let us analyze the second semantic dependency in (27). The target is coffee1 and is again matched against drink1, the object preference of the verb sense drink1, the source. The relevant cell from the noun sense drink1 is again \[animal1, drink1, drink1\], which matches against \[human_being1, drink1, coffee1\] from the target coffee1. This time, the match is an ancestor match and hence not a relevant analogy. Figure 19 shows this match of the relevant cell as a vector statement. Let us compare the two pairs of matched relevant cells for (27): \[\[animal1, drink1, drink1\], \[car1, use2, gasoline1\]\] \[\[animal1, drink1, drink1\], \[human_being1, drink1, coffee1\]\] The two source cells are the same but the two target cells, \[car1, use2, gasoline1\] and \[human_being1, drink1, coffee1\], are very different. The reason that the sentence as a whole is metaphorical/anomalous is the clash between these target cells. The basic analogy of a car ingesting a liquid does not carry over from the first semantic dependency into the second. The anomalous flavor of (27) could not be detected by looking at the semantic relations in the dependencies in isolation because one semantic relation is metaphorical and the other is literal. Neither relation is anomalous -- the anomaly comes from the interaction between the two relations.</Paragraph>
      <Paragraph position="11"> Figure 20 is a proposed representation for sentence (20). The left side of Figure 20 shows the knowledge representation part of the sentence representation: a simple case-frame based representation of (20). The right side of Figure 20, within the grey partition, is the coherence representation component of the sentence representation: abridged semantic vectors for the two main semantic dependencies in (20). The upper semantic vector is the match of the target car1 against the source animal1. The lower semantic vector is the match of the target gasoline1 against the source drink1, the noun sense. The upper abridged semantic vector indicates a metaphorical relation.</Paragraph>
      <Paragraph position="12"> The lower semantic vector also indicates a metaphorical relation though, as was noted earlier, &quot;drinking gasoline&quot; when interpreted in isolation is surely anomalous. The underlines in Figure 20 denote pointers linking the semantic vectors to the case frame. The grey vertical arrows show that the two semantic vectors are also linked together via the matches of their relevant cells. In those matches, the arrows are sense-network paths found between the elements of the two target cells. The network paths indicated in grey, that connect the two abridged semantic vectors, show processing of coherence representations. The particular network paths found (indicated in italics), a descendant path and two same &quot;paths,&quot; show that the same relevant analogy is used in both semantic relations -- that both semantic relations involve a match between animals drinking potable liquids and vehicles (including cars) using gasoline -- hence sentence (20) as a whole is metaphorical. Figure 20 is therefore unlike any of the coherence representations shown previously, because it shows a representation of a metaphorical sentence, not just two isolated metaphorical relations.</Paragraph>
      <Paragraph position="13"> [Figure 20: Sentence representation for &quot;The car drank gasoline&quot;]</Paragraph>
      <Paragraph position="14"> [Figure 21: Sentence representation for &quot;The car drank coffee&quot;]</Paragraph>
    </Section>
    <Section position="2" start_page="80" end_page="81" type="sub_section">
      <SectionTitle>
</SectionTitle>
      <Paragraph position="0"> Compare Figure 20 with Figure 21, a sentence representation for (27). The upper semantic vector again indicates a metaphorical relation between car1 and drink1. The lower semantic vector indicates a literal relation between drink1 and coffee1. What is important here is the match of relevant information discovered in the two semantic relations, as indicated by the three network paths. The paths found are two estranged paths and a sister path, indicating that the relevant information found during the two semantic relations is different: in one semantic relation, information about animals drinking potable liquids is matched against cars using gasoline; in the other, the same information is matched against human beings drinking coffee; but cars using gasoline and human beings drinking coffee are quite different, hence sentence (27) is anomalous overall.</Paragraph>
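The judgment described here suggests a simple decision over path kinds: a descendant path and two same paths yield a metaphorical sentence, while two estranged paths and a sister path yield an anomalous one. The rule in the sketch below is my own generalization from these two cases, not a rule stated in the paper:

```python
# Sketch of the judgment described above: classify the sense-network
# paths between corresponding elements of the two matched target
# cells, then judge the sentence from the collection of path kinds.
# Assumed rule: any estranged path means the relevant analogy fails
# to carry over between the two semantic dependencies.

def judge_sentence(path_kinds):
    if "estranged" in path_kinds:
        return "metaphorical/anomalous"
    return "metaphorical"

# (20) "The car drank gasoline": a descendant path and two same paths.
print(judge_sentence(["descendant", "same", "same"]))
# (27) "The car drank coffee": two estranged paths and a sister path.
print(judge_sentence(["estranged", "estranged", "sister"]))
```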
      <Paragraph position="1"> Note that in Figures 20 and 21, the coherence representation part of the sentence representation is much larger than the knowledge representation part. The detailed &quot;world knowledge&quot; about car1, the verb sense drink1, gasoline1, and coffee1 is all on the right side. It is interesting to contrast the figures with early Conceptual Dependency (CD) diagrams such as those in Schank (1973) because, rather than the large and seemingly unlimited amounts of world knowledge that appear in CD diagrams, the two figures present only the world knowledge needed to discriminate the semantic relations in (20) and (27).</Paragraph>
      <Paragraph position="2"> 7. Discussion and Conclusions This section reviews the material on metonymy and metaphor in Section 2 in light of the explanation of the met* method given in Sections 3-6. When compared with the AI work described in Section 2, the met* method has three main advantages. First, it contains a detailed treatment of metonymy. Second, it shows the interrelationship between metonymy, metaphor, literalness, and anomaly. Third, it has been programmed.</Paragraph>
      <Paragraph position="3"> Preference Semantics addresses the recognition of literal, metaphorical, and anomalous relations, but does not have a treatment of metonymy. In the case of Preference Semantics, the theory described in Wilks (1978) has not been implemented, though the projection algorithm was implemented (Modiano 1986) using some parts of CS to supply detail missing from Wilks' original specification.</Paragraph>
      <Paragraph position="4"> Gentner's (1983) Structure-Mapping Theory has no treatment of metonymy. The theory has been implemented in the Structure-Mapping Engine (Falkenhainer, Forbus, and Gentner 1989) and some examples have been analyzed by it, but not, to my knowledge, examples of metaphor or anomaly.</Paragraph>
      <Paragraph position="5"> Indurkhya's (1988) Constrained Semantic Transference theory of metaphor has no treatment of metonymy, anomaly, or literalness. It has also not been implemented: see Indurkhya (1987) for reasons why.</Paragraph>
      <Paragraph position="6"> Hobbs and Martin (1987) offer a relatively shallow treatment of metonymy without, for instance, acknowledgement that metonymies can be driven from either the source or the target. Hobbs' &quot;selective inferencing&quot; approach to text interpretation has been applied to problems including lexical ambiguity (Hobbs 1977; 1982b; Hobbs and Martin 1987), metaphor (Hobbs 1977; 1983a; 1983b) and the &quot;local pragmatics&quot; phenomena of metonymy (Hobbs and Martin 1987), but not anomaly. To my knowledge, Hobbs has yet to produce a unified description of selective inferencing that shows in detail how lexical ambiguity is resolved or how the differences between metaphor, metonymy, and so on can be recognized. Hobbs' earlier papers include a series of programs -- SATE, DIANA, and DIANA-2 -- but the papers are not clear about what the programs can do. It is not clear, for example, whether any of the programs actually analyze any metaphors.</Paragraph>
      <Paragraph position="7"> Martin's (1990) work is the only other computational approach to metaphor that has been implemented. However, the work does not have a treatment of metonymy.</Paragraph>
      <Paragraph position="8"> Martin's metaphor-maps, which are used to represent conventional metaphors and the conceptual information they contain, seem to complement semantic vectors of the extended kind described in Section 6. In Section 6, I argued that vectors need to record the conceptual information involved when finding mappings between a source and target. What metaphor-maps do is freeze (some of) the conceptual information involved in particular metaphorical relations. There is some theoretical convergence here between our approaches; it would be interesting to explore this further.</Paragraph>
      <Paragraph position="9"> Moreover, the metaphors studied so far in CS seem linked to certain conventional metaphors because certain types of ground have recurred, types which resemble Lakoff and Johnson's (1980) structural metaphors. Two types of ground have cropped up so far.</Paragraph>
      <Paragraph position="10"> Example 28 &quot;Time flies.&quot; The first is a use-up-a-resource metaphor which occurs in (20) and in (28) when viewed as a noun-verb sentence. Both sentences are analyzed by meta5. Use-up-a-resource resembles structural metaphors like TIME IS A RESOURCE and LABOR IS A RESOURCE which, according to Lakoff and Johnson (1980, p. 66), both employ the simple ontological metaphors of TIME IS A SUBSTANCE and AN ACTIVITY IS A SUBSTANCE: These two substance metaphors permit labor and time to be quantified -- that is, measured, conceived of as being progressively &quot;used up,&quot; and assigned monetary values; they allow us to view time and labor as things that can be &quot;used&quot; for various ends.</Paragraph>
      <Paragraph position="11"> Example 29 &quot;The horse flew.&quot; The second type of ground is motion-through-a-medium, a type of ground discussed by Russell (1976). This appears in (15) and (29), again both analyzed by meta5. Incidentally, it is worth noting that structural metaphors have proven more amenable to the met* method than other kinds tried. I assumed initially that orientational and ontological metaphors would be easier to analyze than structural metaphors because they were less complex. However, structural metaphors have proved easier to analyze, probably because structural metaphors contain more specific concepts such as &quot;drink&quot; and &quot;plough,&quot; which are simpler to represent in a network structure (like the sense-network of CS) so that analogies can be found between those concepts.</Paragraph>
    </Section>
    <Section position="3" start_page="81" end_page="82" type="sub_section">
      <SectionTitle>
7.1 Relationship between Literalness and Nonliteralness
</SectionTitle>
      <Paragraph position="0"> We return here to Gibbs' point concerning the traditional notion of literal meaning that \[1\] all sentences have literal meanings that are entirely determined by the meanings of their component words and that \[2\] the literal meaning of a sentence is its meaning independent of context. Although \[1\] and \[2\] are both presently true of CS, there are means by which context can be introduced more actively into sentence interpretation.</Paragraph>
      <Paragraph position="1"> At present, the meaning of a sentence in CS -- whether literal or nonliteral -- is not derived entirely independently of context; however, the only context used is a</Paragraph>
    </Section>
    <Section position="4" start_page="82" end_page="82" type="sub_section">
      <SectionTitle>
</SectionTitle>
      <Paragraph position="0"> limited notion of relevance which is generated by collation from within the sentence being analyzed: what is relevant is given by the sense of the main sentence verb.</Paragraph>
      <Paragraph position="1"> Nevertheless, because of this notion of relevance, contextual influence is present in semantic interpretation in CS. Moreover, the notion of relevance is recorded in semantic vectors (Figures 11 and 15) and the extended coherence representations discussed in Section 6. Hence, the processes and representations of CS possess basic equipment for handling further kinds of context.</Paragraph>
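The verb-driven notion of relevance can be sketched as a filter over the cells of a sense-frame. In the toy version below, a cell of the source's sense-frame counts as relevant when it mentions the sense of the main sentence verb; the cell inventories are illustrative fragments, and finding the matching target cell would still be collation's job:

```python
# Toy sketch of sentence-internal relevance: what is relevant is
# given by the sense of the main sentence verb.  Cell data here is
# an illustrative assumption, not the actual CS sense-frames.

MAIN_VERB_SENSE = "drink1"

animal1_cells = [
    ["animacy1", "living1"],
    ["animal1", "drink1", "drink1"],
]
car1_cells = [
    ["animacy1", "nonliving1"],
    ["car1", "roll1", ["on3", "land1"]],
    ["car1", "use2", "gasoline1"],
]

def relevant_cells(cells, verb_sense):
    """Cells that mention the main verb's sense, at any nesting depth."""
    def mentions(cell, sense):
        return any(mentions(e, sense) if isinstance(e, list) else e == sense
                   for e in cell)
    return [c for c in cells if mentions(c, verb_sense)]

print(relevant_cells(animal1_cells, MAIN_VERB_SENSE))
# [['animal1', 'drink1', 'drink1']]
```

Only the source side is covered by this filter; the relevant cell of the target (e.g., car1's \[car1, use2, gasoline1\]) is found by matching against the source's relevant cell, not by mentioning the verb sense directly.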
    </Section>
    <Section position="5" start_page="82" end_page="82" type="sub_section">
      <SectionTitle>
7.2 Relationship between Metonymy and Metaphor
</SectionTitle>
      <Paragraph position="0"> The met* method is consistent with the view that metaphor is based on similarity, whereas metonymy is based on contiguity (cf. Jakobson and Halle 1956). Contiguity, readers may recall, refers to being connected or touching whereas similarity refers to being alike in essentials or having characteristics in common. The difference comes from what and how the conceptual information is related.</Paragraph>
      <Paragraph position="1"> Example 1 &amp;quot;My car drinks gasoline.&amp;quot; Let us consider what is related first. In metaphor, an aspect of one concept is similar to an aspect of another concept; e.g., in (1), an aspect of the concept for animal, that animals drink potable liquids, is similar to an aspect of another concept, that cars use gasoline.</Paragraph>
      <Paragraph position="2"> Example 2 &amp;quot;The ham sandwich is waiting for his check.&amp;quot; However, in metonymy, a whole concept is related to an aspect of another concept. For example, in (2) the metonymy is that the concept for ham sandwich is related to an aspect of another concept, for &amp;quot;the man who ate a ham sandwich.&amp;quot; Regarding how that conceptual information is related: in the case of metaphor, the met* method assigns a central role to finding an analogy, and an analogy between two terms is due to some underlying similarity between them (the ground), e.g., in the analogy that animals drinking potable liquids is like cars using gasoline, the underlying similarity is that both animals and cars ingest liquids. In an analogy, the relationship between aspects of two concepts is purely structural. In metonymies, however, the relationships are &amp;quot;knowledge-laden&amp;quot; connections, e.g., PART-WHOLE and CONTAINER-CONTENTS.</Paragraph>
      <Paragraph position="3"> So in summary, &quot;similarity&quot; in metaphor is understood to be based on structural relationships between aspects of concepts, whereas &quot;contiguity&quot; in metonymy is based on knowledge-specific relationships between a concept and an aspect of another concept. These observations, I would argue, support the view that metonymy has primarily a referential function, allowing something to stand for something else -- a connection between a concept and an aspect of another concept. The observations also support the view that metaphor's primary function is understanding, allowing something to be conceived of in terms of something else: the role of analogy is especially crucial to this function.</Paragraph>
    </Section>
    <Section position="6" start_page="82" end_page="83" type="sub_section">
      <SectionTitle>
7.3 Metonymy
</SectionTitle>
      <Paragraph position="0"> The treatment of metonymy permits chains of metonymies (Reddy 1979), and allows metonymies to co-occur with instances of either literalness, metaphor, or anomaly.</Paragraph>
      <Paragraph position="1"> The kinds of inferences sought resemble the kinds of inferences that Yamanashi (1987) notes link sentences. An obvious direction in which to extend the present work is toward across-sentence inferences.</Paragraph>
      <Paragraph position="2"> Example 30 &quot;John drank from the faucet&quot; (Lehnert 1978, p. 221). Example 31 &quot;John filled his canteen at the spring&quot; (Ibid.). Metonymy seems closely related to the work on non-logical inferencing done by Schank (Schank 1973) and the Yale Group (Schank 1975; Schank and Abelson 1977; Schank and Riesbeck 1981). For example, Lehnert (1978) observes that just one inference is required for understanding both (30) and (31). The inference, that water comes from the faucet in (30) and the spring in (31), is an instance of PRODUCER FOR PRODUCT in which the faucet and spring are PRODUCERs and water is the PRODUCT. However, the inference is not a metonymy because it comes from unused cases of the verbs 'drink' and 'fill,' whereas metonymy occurs only in the presence of a violated selection restriction, which neither (30) nor (31) contains.</Paragraph>
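The condition just stated, that metonymic inference is attempted only when a selection restriction is violated, can be sketched as a guard on preference satisfaction. The sense hierarchy and the violating example below are illustrative assumptions ("John drank the faucet" is a hypothetical violation, unlike (30) or (31)):

```python
# Sketch of the guard described above: metonymic inference rules
# fire only when a selection restriction (a source's preference)
# is violated.  Hierarchy and examples are assumptions.

SENSE_PARENT = {"water1": "liquid1", "gasoline1": "liquid1",
                "faucet1": "artifact1"}

def satisfies(filler, preference):
    """True if `filler` is the preferred sense or a descendant of it."""
    sense = filler
    while sense is not None:
        if sense == preference:
            return True
        sense = SENSE_PARENT.get(sense)
    return False

def try_metonymy(filler, preference):
    if satisfies(filler, preference):
        return None  # preference satisfied: no metonymy is sought
    # Only now would inference rules such as PART-WHOLE,
    # CONTAINER-CONTENTS, or PRODUCER FOR PRODUCT be attempted.
    return "attempt metonymic inferences"

# 'drink' prefers a liquid1 object: water1 satisfies it, so the
# PRODUCER FOR PRODUCT inference in (30)/(31), filling an unused
# case, is never read as metonymy; a hypothetical "John drank the
# faucet" would violate the preference and trigger the guard.
print(try_metonymy("water1", "liquid1"))   # None
print(try_metonymy("faucet1", "liquid1"))
```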
    </Section>
  </Section>
</Paper>