<?xml version="1.0" standalone="yes"?>
<Paper uid="J89-3002">
  <Title>KNOWLEDGE REPRESENTATION FOR COMMONSENSE REASONING WITH TEXT</Title>
  <Section position="2" start_page="0" end_page="0" type="intro">
    <SectionTitle>
1 INTRODUCTION
</SectionTitle>
    <Paragraph position="0"/>
    <Section position="1" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
1.1 NAIVE SEMANTICS
</SectionTitle>
      <Paragraph position="0"> The reader of a text actively constructs a rich picture of the objects, events, and situation described. The text is a vague, insufficient, and ambiguous indicator of the world that the writer intends to depict. The reader draws upon world knowledge to disambiguate and clarify the text, selecting the most plausible interpretation from among the (infinitely) many possible ones. In principle, any world knowledge whatsoever in the reader's mind can affect the choice of an interpretation. Is there a level of knowledge that is general and common to many speakers of a natural language? Can this level be the basis of an explanation of text interpretation? Can it be identified in a principled, projectable way? Can this level be represented for use in computational text understanding? We claim that there is such a level, called naive semantics (NS), which is commonsense knowledge associated with words. Naive semantics identifies words with concepts, which vary in type.</Paragraph>
      <Paragraph position="1"> Nominal concepts are categorizations of objects based upon naive theories concerning the nature and typical description of conceptualized objects. Verbal concepts are naive theories of the implications of conceptualized events and states. 2 Concepts are considered naive because they are not always objectively true, and bear only a distant relation to scientific theories. An informal example of a naive nominal concept is the following description of the typical lawyer.</Paragraph>
      <Paragraph position="2"> 1. If someone is a lawyer, typically they are male or female, well-dressed, use paper, books, and briefcases in their job, have a high income and high status. They are well-educated, clever, articulate, and knowledgeable, as well as contentious, aggressive, and ambitious. Inherently lawyers are adults, have gone to law school, and have passed the bar. They practice law, argue cases, advise clients, and represent them in court. Conversely, if someone has these features, he/she probably is a lawyer.</Paragraph>
      <Paragraph position="3"> In the classical approach to word meaning, the aim is to find a set of primitives that is much smaller than the set of words in a language and whose elements can be conjoined in representations that are truth-conditionally adequate. In such theories &amp;quot;bachelor&amp;quot; is represented as a conjunction of primitive predicates.</Paragraph>
      <Paragraph position="5"> In such theories, a sentence such as (3) can be given truth conditions based upon the meaning representation of &amp;quot;bachelor,&amp;quot; plus rules of compositional semantics that map the sentence into a logical formula that asserts that the individual denoted by &amp;quot;John&amp;quot; is in the set of objects denoted by &amp;quot;bachelor.&amp;quot; 3. John is a bachelor.</Paragraph>
      <Paragraph position="6"> The sentence is true just in case all of the properties in the meaning representation of &amp;quot;bachelor&amp;quot; (2) are true of &amp;quot;John.&amp;quot; This is essentially the approach in many computational knowledge representation schemes such as KRYPTON (Brachman et al. 1985), approaches following Schank (Schank and Abelson 1977), and linguistic semantic theories such as Katz (1972) and Jackendoff (1985).</Paragraph>
      <Paragraph position="7"> Smith and Medin (1981), Dahlgren (1988a), Johnson-Laird (1983), and Lakoff (1987) argue in detail that all of these approaches are essentially similar in this way and all suffer from the same defects, which we summarize Copyright 1989 by the Association for Computational Linguistics. Permission to copy without fee all or part of this material is granted provided that the copies are not made for direct commercial advantage and the CL reference and this copyright notice are included on the first page. To copy otherwise, or to republish, requires a fee and/or specific permission. 0362-613X/89/010149-170503.00 Computational Linguistics, Volume 15, Number 3, September 1989 149 Kathleen Dahlgren, Joyce McDowell, and Edward P. Stabler, Jr. Knowledge Representation for Commonsense Reasoning with Text briefly here. Word meanings are not scientific theories and do not provide criteria for membership in the categories they name (Putnam 1975). Concepts are vague, and the categories they name are sometimes vaguely defined (Labov 1973; Rosch et al. 1976). Membership of objects in categories is gradient, while the classical approach would predict that all members share full and equal status (Rosch and Mervis 1975). Not all categories can be decomposed into primitives (e.g., color terms). Exceptions to features in word meanings are common (most birds fly, but not all) (Fahlman 1979).</Paragraph>
      <Paragraph position="8"> Some terms are not intended to be used truth-conditionally (Dahlgren 1988a). Word meanings shift in unpredictable ways based upon changes in the social and physical environment (Dahlgren 1985b). The classical theory also predicts that fundamentally new concepts are impossible.</Paragraph>
      <Paragraph position="9"> NS sees lexical meanings as naive theories and denies that meaning representations provide truth conditions for sentences in which they are used. NS accounts for the success of natural language communication, given the vagueness and inaccuracy of word meanings, by the fact that natural language is anchored in the real world. There are some real, stable classes of objects that nouns are used to refer to, and mental representations of their characteristics are close enough to true, enough of the time, to make reference using nouns possible (Boyd 1986). Similarly, there are real classes of events which verbs report, and mental representations of their implications are approximately true.</Paragraph>
      <Paragraph position="10"> The vagueness and inaccuracy of mental representations requires non-monotonic reasoning in drawing inferences based upon them. Anchoring is the main explanation of referential success, and the use of words for imaginary objects is derivative and secondary.</Paragraph>
      <Paragraph position="11"> NS differs from approaches that employ exhaustive decompositions into primitive concepts which are supposed to be true of all and only the members of the set denoted by lawyer. NS descriptions are seen as heuristics. Features associated with a concept can be overridden or corrected by new information in specific cases (Reiter 1980). NS accounts for the fact that while English speakers believe that an inherent function of a lawyer is to practice law, they are also willing to be told that some lawyer does not practice law. A non-practicing lawyer is still a lawyer. The goal in NS is not to find the minimum set of primitives required to distinguish concepts from each other, but rather, to represent a portion of the naive theory that constitutes the cognitive concept associated with a word. NS descriptions include features found in alternative approaches, but more as well. The content of features is seen as essentially limitless and is drawn from psycholinguistic studies of concepts. Thus, in NS, featural descriptions associated with words have as values not primitives, but other words, as in Schubert et al. (1979).</Paragraph>
      <Paragraph position="12"> In NS, the architecture of cognition that is assumed is one in which syntax, compositional semantics, and  naive semantics are separate components with unique representational forms and processing mechanisms.</Paragraph>
      <Paragraph position="13"> Figure l illustrates the components. The autonomous syntactic component draws upon naive semantic information for problems such as prepositional phrase attachment and word sense disambiguation. Another autonomous component interprets the compositional semantics and builds discourse representation structures (DRSs) as in Kamp (1981) and Asher (1987). Another component models naive semantics and completes the discourse representation that includes the implications of the text. All of these components operate in parallel and have access to each other's representations whenever necessary.</Paragraph>
    </Section>
  </Section>
class="xml-element"></Paper>