Kind Types in Knowledge Representation

AXIOMS

A question-answering system with the above database of facts and axioms can respond easily to questions such as (2) but would be unable to answer (3).

3) Are John and Mary part of an institution?

The questions in (3) can be answered by a system which has a taxonomic hierarchy with features at the nodes, such as KL-ONE (Brachman and Schmolze 1985). If Mary is human, Mary is a physical object, which has the feature "touchable". Similarly, since Mary is an animal, she can move herself about. KT employs such a taxonomy, and it is called an ontology to reflect the fact that KT reasons with such information as though it were true and complete, in contrast to generic information, which is probabilistic. The ontology is unique to KT and is based upon results in cognitive psychology, linguistics and philosophy. Another deficiency of the database in (1) is that it knows nothing about John, Mary and their relationship, even though English speakers share descriptions of the typical objects in the sets defined by the predicates Human, Teacher and Student. For example, it would be desirable if the system could respond as follows:

4) Is Mary intelligent? -- Probably so.
   Is Mary articulate? -- Probably so.
   Does John listen to Mary? -- Probably so.
   Is Mary educated? -- Inherently so.
   What does Mary do? -- Inherently, teaches.

The questions in (4) reflect the kind of things that average people think of when confronted with the predicates in (1) (Dahlgren 85). Why not have the AI system infer similarly? In order for such information to be useful, the system needs to know that "intelligent" is a probabilistic feature associated with the predicate Teacher. Therefore, if told ¬Intelligent(mary), it should be able to reason as in (5) while still reasoning as in (6).

A system needs the capacity to reason with prototype information associated with concepts. But the vastness of such information is an obstacle to its use in commonsense reasoning systems. The strategy employed in the KT system is to take advantage of the high degree of structure in prototype information in order to constrain it. Different types of kinds, such as artifacts, natural kinds and persons, are associated with predictably different types of information, and KT exploits these constraints.
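This behavior can be pictured with a minimal sketch in standard Prolog (KT itself is written in VM/PROLOG; the predicate names below, and the placement of John as a student, are illustrative assumptions rather than KT's actual encoding). Taxonomy links are treated as true and complete, while features carry an epistemic strength that separates "inherently so" from "probably so" answers:

    % Ontological taxonomy links, treated as true and complete.
    isa(teacher, human).
    isa(human, animal).
    isa(animal, physical_object).

    % Features at taxonomy nodes, tagged with an epistemic strength:
    % inherent features hold of all instances; probable ones are generic.
    feature(physical_object, touchable,   inherent).
    feature(animal,          selfmoving,  inherent).
    feature(teacher,         educated,    inherent).
    feature(teacher,         intelligent, probable).
    feature(teacher,         articulate,  probable).

    % Facts about individuals (John's sort is assumed for illustration).
    sort_of(mary, teacher).
    sort_of(john, student).

    % An individual inherits a feature, with its strength, from its
    % sort or from any ancestor of its sort in the taxonomy.
    has_feature(X, F, S) :- sort_of(X, Sort), node_feature(Sort, F, S).

    node_feature(Node, F, S) :- feature(Node, F, S).
    node_feature(Node, F, S) :- isa(Node, Super), node_feature(Super, F, S).

The query ?- has_feature(mary, intelligent, S). yields S = probable ("Probably so"), while ?- has_feature(mary, educated, S). yields S = inherent, mirroring the graded answers in (4).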
II. Diversity in the Lexicon

The task of representing meaning for sorts (common nouns) and predicates (verbs and adjectives) has been impeded by several philosophical problems which are yet to be resolved. The traditional approach, decomposition into conjunctions of other predicates, is notoriously defective: there is no principled way to select or limit the number of other predicates. Suppose the meaning of apple is represented as (7).

7) Apple → Fruit & (Red v Green) & Round & Size10

Why not add Growsontrees? The proposal to justify the addition of further predicates by the contrast with the meaning of other words has been rejected on a number of grounds (Dowty 79).

Predicate meaning representation is difficult because the domain of the cognitive model is the actual world, which is both open and unknown to a large extent. Humans can never be totally expert about the actual world. Moreover, the knowledge of predicates used by speakers of a natural language varies with expertise, with how precise the predicate itself is, and with context. Some psychologists maintain that the inherent openness of the actual world is dealt with cognitively by making clear (though possibly inaccurate) category cuts, and then reasoning about categories of objects, including the unclear cases, using prototypes (Rosch et al. 76; Smith and Medin 80). This view implies diversity of representations of predicate meanings across the lexicon. Some types of predicates will have criterial features (ODD NUMBER); others, such as names and natural kinds (LEMON), will not.

Because it represents sort and predicate meaning with prototypes, and because it uses first-order logic, KT differs in theory and results from systems such as KL-ONE. In KL-ONE, concepts are defined by their roles (descriptive elements) and their subsuming concepts (those concepts superordinate to them in the taxonomy). The concept ELEPHANT is defined by rolesets describing facts such as "has 4 legs", and by its attachment to MAMMAL. The claim is that any and every instantiation of the ELEPHANT concept has 4 legs. In contrast, descriptions in KT are probabilistic. The system accepts elephants with 3 legs, though it knows that elephants inherently have 4 legs. It accepts eggs which are brown, even though it knows that eggs are prototypically white. Further, in KL-ONE, since the descriptions are meant to be defining, non-defining associated information is not encoded. By contrast, KT encodes a great deal of information usually associated with a concept, without the implicit claim that it applies to all instantiations of the concept. ELEPHANT can have features "forgetful", "lumbering" and so forth, without claiming that all elephants have those features.

Another implication of the prototype model is that the content of features is seen as essentially limitless. In contrast, the semantic net model assumes that there is a manageable set of primitive concepts whose size is much smaller than that of the English lexicon, and that these are explicitly connected. In KT, only ontological relationships are stated as rules; the relationships between specific descriptions can be derived through problem-solving, but are not encoded. For example, in KL-ONE, the fact that both clouds and eggs are white is directly stated by a link from both CLOUD and EGG to WHITE. In KT, that both have a color is stated in the kind type PHYSICAL OBJECT, but that they both have the same color is reasoned at run time.
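As a minimal sketch of this contrast (again in standard Prolog, with assumed predicate names; this is not KT's actual machinery), prototype descriptions can be encoded as overridable defaults, so that a told fact about an individual is accepted even when it conflicts with the generic value:

    % Generic, prototype-level descriptions of sorts.
    generic(elephant, legs,  4,         inherent).
    generic(elephant, gait,  lumbering, probable).
    generic(egg,      color, white,     prototypical).

    % Facts the system has been told about individuals.
    told(clyde, legs, 3).
    told(egg7, color, brown).

    % An individual's description: an observed fact overrides the
    % prototype; otherwise the generic value is reported with its status.
    describe(X, _Sort, F, V, observed) :- told(X, F, V), !.
    describe(_X, Sort, F, V, Status)   :- generic(Sort, F, V, Status).

Here ?- describe(clyde, elephant, legs, V, S). answers V = 3, S = observed: the three-legged elephant is accepted, though ?- generic(elephant, legs, V, _). still knows the inherent value is 4.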
The diversity of information KT accepts is constrained by kind types, which predict that associated with ELEPHANT are features describing parts, because ELEPHANT is in the kind type PHYSICAL OBJECT. On the other hand, ELEPHANT does not have features describing its mode of construction, because it is not in the kind type ARTIFACT. Thus, the KT system predicts limitless numbers of possible descriptions which are constrained by types deriving from correlational constraints of the actual world.

The KT system differs from most other representations of the commonsense knowledge underlying natural language in taking the content of descriptions from psycholinguistic studies. Because of its empirical basis, KT responds to queries in a natural and human-like way. Though other formalisms could be used to represent empirically-derived models of human commonsense knowledge, KT lends itself to representing the diversity of information found in the data because it allows a virtually unlimited number of features, while organizing them with the kind types.

III. The Kind Types System

KT reads geography text and shows its understanding of the text by answering questions. Text understanding demonstrates the usefulness of the system, but many interesting problems in that area of research are not addressed by this work. KT is written in VM/PROLOG. It uses a parser, a first-order logic translator and a metainterpreter developed by Stabler and Tarnawsky (1985). It employs a set of databases which represent the commonsense ontology, the generic features for sorts, type information for the generic features, and kind types for the ontology. Below is a sample text representative of the English KT understands.

Sample Text

John is a miner who lives in a mountain town. His wife raises a chicken who lays brown eggs. The company-owned clinic is near the mine. The nurse monitors the health of the miners. She approves of John's diet.

III.1 The Ontological Schema

To capture ontological constraints, KT employs a top-level conceptual schema, some of which appears in Figure 1. The schema represents the major category cuts of the environment, that is, a commonsense ontology.

Figure 1. The Ontological Schema

ENTITY → (ABSTRACT v REAL) & (INDIVIDUAL v COLLECTIVE)
ABSTRACT → IDEAL v PROPOSITIONAL v QUANTITY v IRREAL
REAL → (PHYSICAL v TEMPORAL v SENTIENT) & (NATURAL v SOCIAL)
PHYSICAL → (STATIONARY v NONSTATIONARY) & (ANIMATE v INANIMATE)
NONSTATIONARY → SELFMOVING v NONSELFMOVING
COLLECTIVE → MASS v SET v STRUCTURE
STATIONARY → ¬MOVEABLE
TEMPORAL → STATIVE v NONSTATIVE
NONSTATIVE → (GOAL v NONGOAL) & (PROCESS v ACTIVITY v MOTION)
PROCESS → POSITIVE v NEGATIVE
ACTIVITY → OCCUPATIONAL v INTERACTIONAL
OCCUPATIONAL → AGRICULTURAL v MININGMANU v TRADE v SERVICE v EDUCATION
INTERACTIONAL → POSSESSIVE v ASSISTIVE v CONTACTUAL v CONFRONTATIONAL
MOTION → (FAST v SLOW) & (TOWARD v AWAY)

The goal is to encode an ontology which is consistent with an empirically verifiable cognitive model. As much evidence as possible was derived from psychological research.
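One way to picture the schema operationally (an assumed standard-Prolog encoding, not the system's VM/PROLOG source) is as cross-classification rules derived from Figure 1, plus a table stating which feature types each kind type licenses, as discussed above for ELEPHANT:

    % Stubs so that unclassified dimensions fail rather than error.
    :- dynamic abstract/1, collective/1, temporal/1, sentient/1,
               social/1, stationary/1, inanimate/1.

    % Top-level cross-classification rules from Figure 1.
    entity(E)   :- ( abstract(E) ; real(E) ),
                   ( individual(E) ; collective(E) ).
    real(E)     :- ( physical(E) ; temporal(E) ; sentient(E) ),
                   ( natural(E) ; social(E) ).
    physical(E) :- ( stationary(E) ; nonstationary(E) ),
                   ( animate(E) ; inanimate(E) ).

    % Sample base classifications for one individual.
    nonstationary(clyde).
    animate(clyde).
    natural(clyde).
    individual(clyde).

    % Kind types license types of features: physical objects have
    % parts; artifacts additionally have a mode of construction.
    licenses(physical_object, parts).
    licenses(artifact,        construction).

    kind_type(elephant, physical_object).

    may_describe(Sort, FeatureType) :-
        kind_type(Sort, K),
        licenses(K, FeatureType).

With this, ?- entity(clyde). succeeds through the PHYSICAL and REAL rules, ?- may_describe(elephant, parts). succeeds, and ?- may_describe(elephant, construction). fails, which is the kind-type constraint at work.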
The schema was developed to handle the predicates found in 4100 words of geography text drawn from textbooks. Despite the complexity of constructing a computer model of the ontology, two commonly-used simplifications, binary trees and planar branching in trees, were rejected. First, though binary trees have simplifying mathematical properties, they are not likely to be psychologically real. People easily think in terms of more than two branches, such as FISH vs BIRD vs MAMMAL, and so on, off of the VERTEBRATE node. Secondly, most representations assume that each node has a unique parent, but cross-classification is needed, since commonsense reasoning uses it. People understand, for example, that entities cross-classify as individuals or sets, and as real or abstract. This means that at each node, more than one plane might be needed for branching. Cross-classification is handled as in (McCord 85): a type hierarchy is generated which permits each node to be cross-classified in n ways. In the top-level rule of Figure 1, an ENTITY is cross-classified as either ABSTRACT or REAL, and as either INDIVIDUAL or COLLECTIVE. This corresponds to the claim that cognitively there is essentially a parallel ontological schema for collectives. For example, people know that herds consist of animals, so that herds are real and concrete. Thus we have the parallel ontology fragments in (8).

(8)
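Though the fragments in (8) are not reproduced here, the idea can be sketched as follows (an assumed encoding, continuing the standard-Prolog sketches above): a collective cross-classifies through the sort of its members, so a herd comes out real and physical because animals are, while classifying as COLLECTIVE rather than INDIVIDUAL:

    % Collectives and the sort of their members.
    member_sort(herd, animal).

    % Ontological attachments of the member sort.
    attach(animal, real).
    attach(animal, physical).
    attach(animal, individual).

    % Individual reading: a sort has its own attachments.
    ontological(Sort, Type) :- attach(Sort, Type).
    % Collective reading: inherit the member sort's attachments,
    % except that a collective is not itself an individual.
    ontological(Coll, Type) :- member_sort(Coll, M),
                               attach(M, Type),
                               Type \== individual.
    ontological(Coll, collective) :- member_sort(Coll, _).

The query ?- ontological(herd, T). returns real, physical, and collective, matching the intuition that herds are real and concrete while being sets of individuals.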