American Journal of Computational Linguistics
Microfiche 83

A LEXICON
FOR A COMPUTER QUESTION-ANSWERING SYSTEM

Department of Computer Science, Illinois Institute of Technology
Department of Linguistics, Northwestern University
This is an expanded version of a paper presented at the Fifteenth 
Annual Meeting of the Association for Computational Linguistics,
March 16, 1977. Accepted for publication September 30, 1978. 
Copyright © 1979
Association for Computational Linguistics 
SUMMARY OF
A LEXICON FOR A COMPUTER QUESTION-ANSWERING SYSTEM
An integral part of any natural language understanding system, but one which has received very little attention in application, is the lexicon. It is needed during the parsing of the input text, for making inferences, and for generating language output or performing some action. This paper discusses the principal questions concerning the lexicon as it relates in particular to a question-answering system and proposes a specific type of lexicon to fulfill the needs of this system.
Rather than make a distinction between dictionary and encyclopedia, we have a single global data base which we call the lexicon. Homographs are differentiated and phrases with fixed meanings are treated as separate entries. All the information in this lexicon is encoded in the form of relations and words or word senses. These form a large network with the words as nodes and the relations as edges. In addition the relations define semantic fields and these are used to treat problems of ambiguity. Relations are used to encode both syntactic and semantic information. Axiom schemes are associated with each relation and these are used for inferencing. The lexical relations then are at the heart (or brain) of the system for representation, retrieval, and inferencing.
For each relation we describe its semantics and the axioms appropriate to it. In the positing of lexical relations our approach has been influenced by the work of Apresyan, Mel'čuk, and Žolkovsky. The lexical relations we have posited are the traditional synonymy and antonymy, taxonomy, part-whole, grading, and approximately forty others. The whole set, deliberately left open-ended, is subdivided into nine subsets which include attribute relations, collocational relations, and paradigmatic ones.
Each relation has its own lexical entry giving its properties and telling how to interpret lexical relationships in a first order predicate calculus form. For example, the information for the lexical entry dog includes the statement dog T animal, that is, that a dog is a kind of animal. The lexical entry for T, the taxonomic relation, in its turn includes information which allows the statement to be interpreted as

    Holds(Ncom(dog,X)) -> Holds(Ncom(animal,X)).

The inventory of relations is expandable simply by adding lexical entries for new relations. In addition, having both the lexical entries and the relations in the entries expressed in the same notational form as that of input sentences, namely in a first order predicate calculus notation, allows for a consistent, coherent, and easily modifiable system for analysis, inference, and synthesis.
TABLE OF CONTENTS

1. Introduction
2. Design Decisions
   a. The Dictionary and the Encyclopedia - One Data Base or Two?
   b. Lexical Models - Componential Feature Analysis vs. Relational Networks
   c. Selection Preferences
   d. The Homonymy - Polysemy Problem - Criteria for Separate Entries
   e. Idioms
   f. Preliminary Design Decisions for the Lexicon
3. Some Theories of Lexical Relations
4. The Set of Lexical Relations
   a. The Classical Relations: Taxonomy and Synonymy
   b. Antonymy
   c. Grading
   e. Parts and Wholes
   f. Typical Case Relations
   g. Other Collocation Relations
   h. Paradigmatic Relations
   j. Inflectional Relations
5. The Organization of the Lexicon and the Semantic Representations
6. The Form of the Lexical Entry
7. Summary
Appendix I. The Semantic Representations
References
1. INTRODUCTION

The lexicon presented here is being developed as an integral part of a computer question-answering system which answers multiple-choice questions about simple children's stories. It thus must make information readily available for the parsing process, for building an internal model of the story being read, and for making inferences. Knowledge about words and knowledge about the world must both be stored in a compact but immediately accessible form.

Many decisions must be made, therefore, about the design of the lexicon. The first problem is to decide on an organizing structure. Should lexical and "encyclopedic" information be stored separately or together? Which items will have separate lexical entries? Which will be included in other entries? What about homonymy and polysemy? What connecting links between words and word senses will be recorded and how?
The next problem is to determine a characterization of word meanings. This leads to some deep theoretical questions. What kind of lexical semantic representations are appropriate? What is the structure of these representations? What are the semantic primes, the elements of that structure? The design of the lexical entry is thus subject to theoretical biases as well as the practical constraints of space, retrieval efficiency, and effective support of inference-making.
The decision to use lexical relations as fundamental elements of the structure of the lexicon has strongly influenced our design. Relations are used to encode both semantic and syntactic information. Axiom schemes essential to inferencing are associated with each relation. Relational information makes up a significant part of the lexical entry.
Lexical relations offer significant advantages. They allow us to generalize familiar inference patterns into axiom schemes. They can encapsulate the defining formulae of the commercial dictionary. They have an intuitive appeal which we believe reflects a certain measure of psychological reality. On a practical level they allow us to express both syntactic and semantic information in a form that is compact and easy to retrieve. They can be used in many ways. For example, the following paragraph from a test administered to first and second graders by a local school system says:

(P1) Ted has a puppy. His name is Happy.
     Ted and Happy like to play.
(Q1) The pet is a:  dog  boy  toy
In order to answer this question we need to know what pet means. In our lexicon the lexical entry for pet contains a simple definition: a pet is an animal that is owned by a human. In order to answer this question we also need to know that a puppy is a young dog. This information in predicate calculus form would be part of the lexical entry for puppy. We would, of course, need axioms of the same form as well for the entries for kitten, lamb, etc. Instead of such a representation we express this information by using a lexical relation, CHILD. The lexical entry for puppy contains CHILD dog. Similarly, the lexical entry for kitten contains CHILD cat, while the lexical entry for CHILD contains the axiom scheme from which the relevant axioms are formed when needed.
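The scheme just described can be sketched in modern terms. The following Python fragment is our illustration, not the original implementation (which predates the language); the predicate syntax and the helper names (child_axiom, axioms_for) are our own assumptions:

```python
# Illustrative sketch: a lexicon stored as relation triples, with the
# axiom scheme for CHILD kept once (under the relation's own entry)
# and instantiated only when an inference needs it.

LEXICON = {
    ("puppy", "CHILD"): "dog",   # the entry for puppy contains CHILD dog
    ("kitten", "CHILD"): "cat",  # the entry for kitten contains CHILD cat
    ("dog", "T"): "animal",      # taxonomy: a dog is a kind of animal
}

def child_axiom(w1, w2):
    # CHILD's axiom scheme: "w1 CHILD w2" means every w1 is a young w2
    return f"forall X: {w1}(X) -> young(X) & {w2}(X)"

def axioms_for(word):
    """Form the relevant CHILD axioms for a word only when needed."""
    return [child_axiom(w, v) for (w, r), v in LEXICON.items()
            if w == word and r == "CHILD"]

print(axioms_for("puppy"))
# ['forall X: puppy(X) -> young(X) & dog(X)']
```

The point of the design is visible here: one scheme under CHILD replaces a separate hand-written axiom in each of the entries for puppy, kitten, lamb, and so on.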
We treat verbs in a similar way. Corresponding to each case relation there is a lexical relation which points to typical fillers of that case slot. The lexical entry for bake2 includes TLOC kitchen. It also includes T make, where T is the well-known taxonomy relation, so that if the story says that "Mother baked a cake" we can infer that she made one, and CAUSE bake1, so that we can deduce that the cake has baked. The selection restrictions that help us tell instances of bake1 and bake2 apart can also be expressed compactly using the T relation. We also need to make deductions from main verbs in predicate complement constructions, deductions such as the speaker's view of the truth of the proposition stated in the complement as derived from the factivity of the verb. In order to answer several questions from the test cited above the reader must infer that everything that Mother says is true. Lexical entries for main verbs that take predicate complements contain pointers to the implication class. These relations can then be expanded to give the proper axioms.
The lexicon includes separate entries for each derived form unless the root can be identified by a simple suffix-chopping routine. Lexical relations are useful here too in saving space. The lexical entry for man contains PLURAL men. The lexical entry for went consists of PAST go. The lexical entry for death consists of NOMV die. There are, as well, lexical entries for some multiple word expressions such as birthday party, ball game, piggy bank, and thank you.
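The lookup pattern for derived forms might be sketched as follows. This is our illustration under stated assumptions, not the system's code; the relation names and the suffix list are simplified:

```python
# Illustrative sketch: irregular derived forms (went, death) get their
# own entries that simply point back at the root; regular forms are
# recovered by a simple suffix-chopping routine instead.

ENTRIES = {
    "man":   {"PLURAL": "men"},   # the root lists its irregular plural
    "went":  {"PAST": "go"},      # the derived entry names its root
    "death": {"NOMV": "die"},
    "go": {}, "die": {},
}

def find_root(form):
    entry = ENTRIES.get(form)
    if entry is not None:
        # a derived form's entry points at the root it comes from
        # (PLURAL is excluded: man's PLURAL points at the derived form)
        for rel in ("PAST", "NOMV"):
            if rel in entry:
                return entry[rel]
        return form               # the form is itself a root
    # no entry of its own: try the suffix-chopping routine
    for suffix in ("s", "ed", "ing"):
        root = form[:-len(suffix)]
        if form.endswith(suffix) and root in ENTRIES:
            return root
    return None

print(find_root("went"))   # go
print(find_root("dies"))   # die -- regular -s needs no separate entry
```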
As to the form of presentation here, the next section presents some of the practical problems and theoretical convictions which determined our most critical design decisions. Then, after a brief description of some earlier developments in the theory of lexical relations, we explain the system of relations which structure our lexicon, discussing each group of relations in turn.
Finally we describe the actual form of our lexical entries. 
2. DESIGN ~JECISIONS 
The lexicbn in this system must make information readily available 
for parsing, for building the story model, and for making inferences 
during question answerivg . So th knowledge about words and knowledge about 
the world must be stored in a compact but immedrlately accessible form. 
Therefore, many decisions must be made concerning the design of the lexi- 
con The problems involved includ~ the asganizatian of lexical and ency- 
clopedic i.nformation, the chotce of a lexdcal model and the determ3nafion 
of appropriate serantic primes, the representation of selection prefer- 
ences, the recognition and storage of homographs, and the criteria for 
establishing separate entries for idioms and other fixed phrases. Th i s 
paper attempts to develop some consxstent solutions to these problems, 
sol utlons which determine the design decisions for the lexicon in this 
ques t~on--rnsa erinp system. 
a. The dictionary and the encyclopedia - one data-base or two?
Any question-answering system must use lexical information in at least two ways, in parsing and in making inferences. The first critical decision that must be made is whether two separate data-bases are needed to support these separate functions or whether a single unified global data-base is better. Traditionally human beings have used two separate stores of information, the dictionary and the encyclopedia. Some linguistic and computational models of language have also been based on the assumption that information about words should be stored in two separate collections.
In Chomsky's Aspects model (1965) there are two separate storage places for lexical information, one in the base component and another in the semantic component. Kirr (1972) stored syntactic information in a "dictionary" and semantic information in an "encyclopedia".
Winograd (1971) has two separate word lists, one used by the parser and one by the semantic routines, even though the parsing and the semantic routines are very closely interwoven in his BLOCKS system.
Before deciding on whether to carry on this tradition one must ask whether there is really a clearcut distinction between these two kinds of lexical information. Is there a simple algorithm for deciding which data should go where?
Bierman, Bierwisch, Kiefer, and the Semantic Function of the Lexicon
Both the dictionary and the encyclopedia are ways of recording information stored in human memory. But human memory is probably not organized in the usual graphic form of an alphabetic word list; therefore alternative memory structures should be examined. One such alternative has been presented by Bierman (1964). In his system lexical-semantic fields are primary; they define the basic organization of semantic information. The function of the lexicon, if it has one in the semantic domain, is to index these fields, to store pointers to the location of a word in the various fields containing it. An appropriate image for such a system is a very large single page dictionary with language-specific nodes connected by semantic relations. (See also Werner 1969.)
Can the dictionary and the encyclopedia be distinguished in this context? Bierwisch and Kiefer (1970) assume that both kinds of information are contained in the same lexical entry. The distinction between lexical and encyclopedic knowledge corresponds then to the difference between the core and the periphery of a lexical entry, where

The core of a lexical reading comprises all and only those semantic specifications that determine, roughly speaking, its place within the system of dictionary entries, i.e. delimit it from other (non-synonymous) entries. The periphery consists of those semantic specifications which could be removed from its reading without changing its relation to other lexical readings within the same grammar. (ibid.: 69-70)

Unfortunately they do not specify whether the lexical-semantic relations which form the structure of the fields are part of the core or the periphery.
The major difficulty with this criterion is its instability. As new entries are added to the system, information sufficient to distinguish one entry from another may have to be shifted from the periphery to the core--and thus from the encyclopedia to the lexicon. For instance, suppose a new entry, "leopard--a large, wild cat," is to be added. The entire lexicon must be searched for entries which mention large wild cats. If one is found, say "lion--a large wild cat," then enough information must be added to both definitions to differentiate leopard and lion from each other.
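The instability can be made concrete with a small sketch. This is purely our illustration of the update problem; the core/periphery representation and the promotion policy are assumptions, not part of any described system:

```python
# Illustrative sketch: adding a new entry whose "core" duplicates an
# existing one forces information to migrate from the periphery (the
# encyclopedic side) into the core (the dictionary side).

lexicon = {
    "lion": {"core": {"large", "wild", "cat"}, "periphery": {"maned"}},
}

def add_entry(word, core, periphery):
    # find existing entries whose core no longer distinguishes them
    clashes = [w for w, e in lexicon.items() if e["core"] == core]
    for w in clashes:
        # promote a peripheral fact into the core to re-differentiate
        if lexicon[w]["periphery"]:
            lexicon[w]["core"].add(lexicon[w]["periphery"].pop())
    lexicon[word] = {"core": set(core), "periphery": set(periphery)}
    return clashes

print(add_entry("leopard", {"large", "wild", "cat"}, {"spotted"}))
# prints ['lion'] -- and lion's core had to change because of leopard
```

Note that the boundary moved in the lion entry even though nothing about lions was learned, which is exactly the instability argued above.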
Soviet Lexicography and the Lexical Universe.
Apresyan, Žolkovsky, and Mel'čuk run into the same difficulty of distinguishing dictionary and encyclopedic information in attempting to define the lexical universe of a word Co.

The main themes dealt with under the heading 'lexical universe' are: 1) the types of Co; 2) the main parts or phases of Co; 3) typical situations occurring before or after Co etc. Thus, the section lexical universe for the word skis consists of a list of the types of skis (racing, mountain, jumping, hunting), their main parts (skis proper and bindings), the main objects and actions necessary for the correct use (exploitation) of skis (sticks, grease, to wax), the main types of activities connected with skis (a ski-trip, a ski-race ...) and so on. Even these scanty examples make it clear that the information about the lexical universe is, at least partially, of an encyclopaedic nature. We say "partially" because genuine encyclopaedic information about skis (their history, the way they are manufactured, etc.) is not supplied here: the sections contain only such words and phrases as are necessary for talking on the topic, and nothing else. (1970:19)
The problem here is that "what is needed for talking about the topic" depends very much on who is going to do the talking. The definition of ski in Webster's New International (2nd Edition) begins:

One of a pair of narrow strips of wood, metal, or plastic, usually in combination, bound one on each foot and used for gliding over a snow-covered surface.
Apresyan, Žolkovsky, and Mel'čuk do not provide for three of the items mentioned here: what skis are made of (wood, plastic, or metal), what shape they come in (long and narrow), and where they belong spatially (on the feet). Yet these items could be essential in understanding inferences in a story.

It was snowing. Jim took out his skis.
He waxed the wooden strips....

You could need this information in answering questions.

Jim skied down the mountain....
What was he wearing on his feet:
slippers  skis  skates?

Although in English or Russian it is possible to refer to skis without knowing that they are long and narrow, it is not possible in Navajo where physical shapes determine verb forms. While the entry in Webster's goes on at length beyond the sentence given above, it does not include all the items which Apresyan, Žolkovsky, and Mel'čuk mention. This, however, is not surprising; the boundaries of the lexical universe are not well defined.
Difficulties in updating a system with separate dictionary and encyclopedia.

This lack of definition causes tremendous problems in a dynamic system. A "real" dictionary or encyclopedia, the one in a person's brain, is constantly changing. Information is added, corrected, and perhaps lost. A truly interesting memory model must be dynamic. The problems of updating this information are not easy to solve; the problem of distinguishing between dictionary and encyclopedic information in the updating process seems insuperable.
Recognizing definitions phrased in ordinary English is already difficult (Bierwisch and Kiefer 1970, Lawler 1972). Determining the reliability of such information is also a problem, and the dichotomy of dictionary and encyclopedia increases this difficulty. Unfortunately information does not come neatly packaged and marked "for the dictionary" or "for the encyclopedia". And addition of information to one part of the entry may necessitate updating other parts of the entry. For example, if we learn that record is a verb as well as a noun we need to add morphological information, describe the relations between record and write, and we should probably describe recording materials. Mention must be made that record is a factive, i.e., if someone records that something happened, one can assume that from the standpoint of the speaker the something really did happen. Which of this information is dictionary information and which is encyclopedic? And once this decision is made, information added to that entry may require additions to other entries. In the record example, the entries for erase and write would have to be updated. Also, a decision must be made on whether a new entry is needed and whether homography or polysemy exists for this new entry.
The work of Kiparsky and Kiparsky (1970), Lakoff (1971) and McCawley (1968) has shown that syntax and semantics cannot be separated into such neat compartments. But if syntax and semantics are interwoven then does it make sense to put syntactic information in one box and semantic information in another? The answer to this question given at least by generative semantics calls into question the traditional distinction between the dictionary and the encyclopedia.

We accept the generative semanticist arguments that syntax and semantics cannot be separated and thus do not separate syntactic and semantic information. Furthermore, as shown above there seem to be no practical criteria for distinguishing dictionary information from encyclopedic information. Thus our system has one single global data base. For brevity, and since it is a kind of collection of words, it will be called "the lexicon".
b. Lexical Models - Componential Feature Analysis vs. Relational Networks.
A second critically important decision involves the choice of an appropriate lexical model, the determination of what semantic primes to use and how they should be combined in lexical semantic structures. Two important competing models are provided by componential feature analysis and by relational networks. In a componential analysis model the primes are semantic features and words are defined by bundles of features. This is a natural extension of the distinctive feature approach to phoneme description which has been used to explain many phonological phenomena. Certain practical problems arise. The number of words in any language is far larger than the number of phonemes. The number of distinctive features which serve to discriminate them must be larger too. The word-semantic feature matrix for a given language would be vastly larger than the phoneme-phonetic feature matrix. In addition, this matrix would be extremely sparse. Also, it is not clear whether all the entries in this matrix could be +/- as in a phoneme matrix. Are semantic features either definitely absent or definitely present, or are some features present by degrees? The size of the componential analysis matrix would immediately introduce difficulties in a computerized model. Fortunately, both numerical analysis and document retrieval offer experience in handling immense matrices by machine. When a matrix is extremely sparse it turns out to be sensible to store a list of entries with row and column numbers. Here it would mean storing a list of features for each word. This, in fact, is close to Katz's proposal (1966).
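The sparse-storage point can be sketched directly. The words and features below are invented for illustration; nothing here is the paper's actual feature inventory:

```python
# Illustrative sketch: a word-by-feature matrix that is almost all
# zeros is kept as the list of nonzero (row, column) entries -- which
# amounts to storing a feature list per word, as the text notes.

WORDS    = ["dog", "stone", "idea"]
FEATURES = ["animate", "physical", "abstract", "human", "count"]

# dense form: one row per word, one column per feature, mostly 0
dense = [[1, 1, 0, 0, 1],
         [0, 1, 0, 0, 1],
         [0, 0, 1, 0, 1]]

# sparse form: keep only the columns that are actually present
sparse = {WORDS[i]: [FEATURES[j] for j, v in enumerate(row) if v]
          for i, row in enumerate(dense)}

print(sparse["dog"])   # ['animate', 'physical', 'count']
```

With thousands of words and features the dense matrix is infeasible, while the sparse form grows only with the number of features actually assigned.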
In a relational network model, however, the primes are relations and words or word senses. Relations connect words together in a network in which the words are nodes and the relations are edges. In fact, words are defined in terms of their relationships to other words.

These models differ radically in their approach to the critical lexical task of finding related words. In the componential analysis model related words share related features. Presumably, the more features two words share the more closely related they are. Thus, some kind of cluster analysis must be used to identify related words. In the relational network model the lexicon is formed from relationships between words. Thus related words are immediately available.
Both models, componential and relational, require a search for semantic primes. The componential analysis model requires the discovery of possibly thousands of semantic features. For a relational network model an inventory of lexical relations and their properties must be developed. This is apparently a significantly simpler task than the discovery of semantic features, for the number of relevant relations is probably quite small. Žolkovsky and Mel'čuk (1970) list about fifty in their paper.
Related to both of these models is the notion of semantic fields. Intuitively, semantic fields are collections of related words used to talk about a particular subject. Semantic fields seem to offer some help in coping with the problems of ambiguity and context. Many utterances, taken out of context, are ambiguous. But remarkably, people almost never perceive this ambiguity. They immediately choose the correct word sense and ignore the others. Apparently the topic of conversation determines a semantic field and the word sense chosen is the one which lies in this field. The semantic field somehow defines the verbal context. (Or as Fillmore 1977:59 phrases it, "meanings are relativized to scenes".)
The componential analysis model makes it possible to define distinct semantic fields, but getting from one word in the field to the others may take a significant amount of processing time. Every set of semantic markers can be used to define a semantic field; the field consists of all the words with definitions containing the markers. The smaller the number of markers the larger the field obtained. It is possible to decide immediately whether a given word is in the field or not, just by checking its list of markers.

In the relational network model related words are easy to find, but the boundaries of semantic fields are extremely fuzzy and indistinct. A semantic field can be defined by starting at a particular node and going a given number of steps in any direction. The semantic fields obtained this way, however, have very arbitrary boundaries and overlap considerably.
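The network-based field construction just described can be sketched as a breadth-first walk. The toy network below is our own; the choice of radius k is precisely the arbitrary boundary the text complains about:

```python
# Illustrative sketch: a semantic field as every word within k
# relational steps of a start node in an (undirected) word network.

from collections import deque

EDGES = {
    "ski":    ["snow", "binding", "wax"],
    "snow":   ["winter", "ski"],
    "wax":    ["ski", "candle"],
    "candle": ["wax", "flame"],
}

def field(start, k):
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        word, d = frontier.popleft()
        if d == k:
            continue                      # radius reached: stop growing
        for nbr in EDGES.get(word, []):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, d + 1))
    return seen

print(sorted(field("ski", 1)))  # ['binding', 'ski', 'snow', 'wax']
print(sorted(field("ski", 2)))  # one more step: 'candle', 'winter' leak in
```

Moving from k=1 to k=2 already pulls in candle via wax, showing how quickly such fields blur and overlap.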
Certain basic philosophical-psychological assumptions may create a strong bias for one of these models over the other. Someone who believes that semantic features exist as Platonic ideals or who accepts them as psychological realities may easily find componential analysis a most natural kind of description and regard the necessary search for features or sememes as highly relevant. Someone who feels that "There is no thought without words" would be much more likely to prefer a relational network description. A lexicon is, in an important sense, a memory model. Intuitions about our own internal memory models must have a strong influence on the lexicon we shape.

We have chosen a relational network model for both intuitive and practical reasons. We find lexical-semantic relations theoretically interesting (see Evens et al. ms). Useful inventories of these relations are available; in a later section we describe some of these sources. As will be shown, they provide a convenient way of storing axiom schemes for deductive inference.
As lexical semantic structures we use the same first-order predicate calculus notation in which semantic representations are written in the question-answering system -- meanings of words and meanings of sentences must have the same underlying form. As McCawley (1970) has argued, "dentist" and "doctor who treats teeth" must contain the same units of meaning tied together in the same way.
c. Selection Preferences
A third important problem to be faced in constructing a lexicon which is to support a parser is the problem of selection restrictions. Chomsky (1965) developed the theory of selection restrictions in order to block the generation of nonsense sentences in the syntactic component of his model. The lexical entry for frighten, for example, contains the information that it requires as object a noun with the feature [+animate], while drink requires an animate subject. If these conditions are not met, generation is blocked. Selectional restrictions seem much too restrictive. Trailer trucks drink diesel fuel and the earth drinks in the rain. In describing dreams we can invent perfectly appropriate sentences in which inanimate objects by the dozens get up, run around, and drink until frightened back to place. Still it is true that sentences like these are somehow more surprising than sentences in which cows drink from a brook and are frightened by lightning. We need some method of recording the ordinary, everyday ways in which words combine without excluding the unusual, the poetic, the metaphoric uses. We will call them selection preferences instead of selection restrictions.
Some truly semantic means of identifying semantic anomalies are needed. Raphael mentions this question rather casually, almost as an aside in the SIR paper. He draws taxonomic trees, one for the nouns and one for the verbs, from the vocabulary of a first grade reader. Then he makes statements like this:

1. Any noun below node 1 is a suitable subject for any verb below node 1'.
2. Only nouns below nodes 3 or 4 may be subjects for verbs below node 3'. (1968, p. 51)

He makes it clear that he is indeed trying to solve the selectional problem:

The complete model composed of tree structures and statements about their possible connections, is a representation for the class of all possible events. In other words, it represents the computer's knowledge of the world. We now have a mechanism for testing the 'coherence' or 'meaningfulness' of new samples of text. (1968, p. 51)
Werner (1972) has suggested a method for handling the selectional problem which uses noun taxonomies in very much the same way that Raphael does. His proposal includes an elegant way of storing selectional information within his memory model. In his network model, noun phrase arguments are connected to the verb by prepositions. The node representing the lexical entry for the verb has arcs connecting it to compound nodes, one for each preposition which can be used with the verb. The object of each preposition is a node in the noun taxonomy. This noun or any noun below it in the taxonomy may serve as an argument for the verb. Here is an oversimplified example of a network for sell.

Figure 1: Werner's Answer to the Selection Problem

This network says that sell takes a human subject, a thing as object, the preposition to followed by a human, the preposition for followed by money. The square brackets around [human] indicate that this is just a pointer to the top noun in the taxonomy for human beings. Any node in this taxonomy below the node marked human, whether it is Sam or a Navajo or Mother, can be used as a subject for sell. He does not use the verb taxonomy as Raphael does. Each verb has its own set of selection indicators.
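Werner's scheme, as we read it, can be sketched as a taxonomy-membership check. The toy taxonomy and slot names below are illustrative, not Werner's actual network:

```python
# Illustrative sketch of Werner-style selection: each argument slot of
# "sell" points at a node in the noun taxonomy, and any noun at or
# below that node satisfies the slot.

PARENT = {"Sam": "human", "Navajo": "human", "Mother": "human",
          "money": "thing", "rug": "thing", "human": "thing"}

def below(noun, node):
    """True if noun is the node itself or lies below it in the taxonomy."""
    while noun is not None:
        if noun == node:
            return True
        noun = PARENT.get(noun)   # climb one taxonomic step
    return False

# sell: human subject, thing object, "to" + human, "for" + money
SELL = {"subject": "human", "object": "thing", "to": "human", "for": "money"}

def check(args):
    return all(below(noun, SELL[slot]) for slot, noun in args.items())

print(check({"subject": "Sam", "object": "rug", "to": "Mother"}))  # True
print(check({"subject": "rug", "object": "Sam"}))                  # False
```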
In his discussion of the goals of a semantic theory Winograd describes semantic markers and selection restrictions, quotes Katz and Fodor (1964), and indicates that he intends to embody this theory in his system. But in fact semantic markers in the BLOCKS program are derived from a marker tree (Winograd, 1971, Figure 59) which is organized taxonomically. In the implementation process Winograd seems to have moved from a strict Katz-Chomsky position to a position somewhat closer to Raphael and Werner. The Raphael and Werner proposals are the guiding principles here, adapted to accommodate case-defined arguments.
The lexical entry for move1, the intransitive move, must tell us about selection as well as how to relate subject, object, and prepositional phrases to cases. The information is organized this way:

         grammatical function    case frame     selection information
move1 #  1. subject              experiencer    thing
         2. from                 source         thing, place
         3. to, into, onto       goal           thing, place

The numbers 1, 2, 3 indicate argument positions for the predicate calculus representation. The next column lists the grammatical function. Next come case indications. Last comes the selection information, the top node in the relevant part of the taxonomy. For move1 the subject is an experiencer. The source is usually marked by the preposition from. The goal is usually marked by a preposition like to, into, or onto. The selection information in column four is rather dull, since any argument can be a physical thing, and the source and goal can both be places. There is a rule that any physical goal can be replaced by a class of adverbs containing back and there, so these alternatives do not have to be listed.
An attempt is being made to use the verb taxonomy as Raphael suggested. In this lexicon go is marked as taxonomically related to move. The entry for go does not contain the information labelled # above. Instead, when this information is needed, the look-up routine climbs the taxonomic tree in the lexicon until it finds a verb which has this information and copies it from that entry. Thus it gets case-argument and selectional information for go from the entry for move. It is not clear yet whether this will really work with a sizable vocabulary.
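The look-up routine just described might be sketched as follows. The data layout and the verb "run" are our illustrative assumptions; only move, go, and the move1 frame come from the text:

```python
# Illustrative sketch: "go" carries no case-frame information of its
# own; the routine climbs the verb taxonomy (the T relation) until it
# finds an ancestor that does, and copies the information from there.

VERBS = {
    "move": {
        "T": None,
        "frame": [("subject", "experiencer", ("thing",)),
                  ("from",    "source",      ("thing", "place")),
                  ("to",      "goal",        ("thing", "place"))],
    },
    "go":  {"T": "move"},   # taxonomically related to move, no frame
    "run": {"T": "go"},     # hypothetical: two steps from move
}

def case_frame(verb):
    while verb is not None:
        entry = VERBS[verb]
        if "frame" in entry:
            return entry["frame"]   # found: copy from this ancestor
        verb = entry["T"]           # climb one step up the taxonomy
    return None

print(case_frame("go")[0])
# ('subject', 'experiencer', ('thing',)) -- inherited from move
```

Storing the frame once under move and inheriting it in go and run is the space saving the text is after; whether the inherited frame is always right for the subordinate verb is the open question it raises.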
This selectional information is treated as selection preference
and not selection restriction. Each candidate word sense for a verb is
checked for selectional preference. If no arrangement of the avail-
able noun phrase arguments is consistent with these preferences, another
word sense is examined. But if all word senses have been rejected on
the basis of selectional information, the sentence is not rejected.
Instead we look again at the candidate word senses and count for each one
the number of steps up the taxonomic tree we have to make to resolve
the conflict. The word sense which requires the fewest steps is chosen.
The hope is that the system will be able to "understand" simple metaphors
this way. It would be interesting to try to create metaphors by picking
noun phrase arguments close to but not under the nodes indicated by the
selection information.
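The preference-scoring step above can be sketched in a few lines. The noun taxonomy, the sense list, and both function names are illustrative assumptions; the scoring rule (count steps up the taxonomy, pick the sense with the fewest) is the one the paper describes.

```python
# Sketch of selection preference: score each candidate verb sense by
# how many taxonomy steps are needed to reach its selection node, and
# choose the cheapest. Data and names are illustrative assumptions.
NOUN_TAXONOMY = {"dog": "animal", "animal": "thing",
                 "idea": "abstraction", "abstraction": "thing"}

def steps_to(noun, target):
    """Steps up the taxonomy from noun to target; None if unreachable."""
    count, node = 0, noun
    while node is not None:
        if node == target:
            return count
        node = NOUN_TAXONOMY.get(node)
        count += 1
    return None

def best_sense(senses, argument_noun):
    """senses: list of (sense_name, selection_node) pairs."""
    scored = []
    for name, selection in senses:
        cost = steps_to(argument_noun, selection)
        if cost is not None:
            scored.append((cost, name))
    return min(scored)[1] if scored else None
```

A literal argument like "dog" satisfies an animal-selecting sense in one step, while a metaphorical argument like "idea" forces the climb toward a more abstract sense, which is the "simple metaphor" behavior the text hopes for.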
d. The Homonymy-Polysemy Problem - Criteria for Separate Entries.
Words with the same physical shape but different meanings constantly
cause trouble in natural language processing. In designing a lexicon we
must decide whether or not to create a separate entry for each variation
in meaning and type of use. Quillian is particularly interested in words
with multiple meanings and he experimented with several in his memory
model. In Quillian (1968) the word plant is treated as a three-way homo-
nym with three separate type nodes, each with a separate definition-plane:
PLANT1  Living structure which is not an animal, frequently
        with leaves, getting its food from air, water, earth.
PLANT2  Apparatus used for any process in industry.
PLANT3  Put (seed, plant, etc.) in earth for growth.
The type node for the first forms a disjunctive set with token nodes
pointing to the other two.
The word food has a single definition with alternative formulations:

    That which a living being has to take in to keep it
    living and for growth. Things forming meals,
    especially other than drink.

A polysemous word like this has a single type node and a single definition-
plane, but the two alternative definitions are combined with an OR link.
Apresyan, Mel'čuk, and Zolkovsky attack the homonymy-polysemy problem
with vigor. Graphically coincident words are considered homonyms, given
distinctive superscripts, and listed as separate entries, if their definitions
"have no common part" (Apresyan, Zolkovsky and Mel'čuk 1970:3). They do
not define "a common part," but they do give an example: KOSA1 (scythe),
KOSA2 (braid of hair), KOSA3 (spit). If two definitions have a single
common part, the word is classified as polysemantic with a single entry
divided into separate parts. They distinguish two types of polysemy. In
one case the difference between two words is regular. The relation of a
verb to its typical object is such a regular meaning change, e.g. record(v)
- record(n), fish(v) - fish(n), and aid(v) - aid(n). These regular vari-
ations in meaning are numbered with Arabic numerals, while irregular vari-
ations are numbered with Roman numerals. Thus part 3 of the lexical entry
for bow, the definition, might have the form:
bow  I. 1. To bend the head in assent or reverence. (vt)
        2. To submit or yield. (vi)
        3. To cause to bend. (vt)
        4. An inclination of the head. (n)
        5. A bent implement used to propel an arrow
           or play a stringed instrument. (n)
    II. 1. The forward part of a boat. (n)
        2. One who rows in the bow of a boat. (n)
There seems to be some redundancy between definition-elements and the
lexical functions. Shouldn't regular variations in meaning be captured
by regular lexical functions? If so, then the distinction Apresyan,
Zolkovsky, and Mel'čuk make between regular and irregular meaning variations
will be apparent from the form and need not be indicated by different no-
tation, such as Arabic and Roman numerals.
For convenience in lexical lookup we have a single physical entry
for each graphical form. Each word sense, whether irregular or regular,
is numbered separately with Arabic numerals. Thus the adjective is cool1,
cool2 is the verb to become cool1, and cool3 is the verb meaning to cause
to become cool1. Separate information about lexical relations, etc. is
stored for each subentry.
e. Idioms.
Idioms present a serious problem to the designer of an English lexicon.
Some criteria must be established for deciding which idioms deserve separate
lexical entries and how multi-word phrases should be stored.
When does an idiom deserve to be treated as a separate lexical unit?
Apresyan, Zolkovsky, and Mel'čuk (1970) and Kiparsky (1975) represent
opposite poles of opinion here. In the explanatory-combinatory dictionary
(ECD) of the Soviets, word combinations which have a definition of their
own or "a peculiar combinability pattern" have separate entries. Kiparsky
(1975) considers an idiom a separate lexical unit only if it involves
syntactic patterns which are no longer productive. Thus "house beautiful"
and "come hell or high water" are treated as units, but "make headway" is
not. Instead headway is defined as "progress" and marked as appearing
after make and lose. Kiparsky's proposal places a greater burden on the
recognition program, which would have to be able to retrieve and put to-
gether the pieces of the idiom using his lexicon. The system described
here follows Apresyan, Zolkovsky, and Mel'čuk, and treats fixed phrases
as units. In particular, all noun-noun combinations like piggy bank
and birthday cake are separately defined, although this is certainly a
productive part of English.
Judith Levi (1974, 1975) has proposed a theoretically elegant and
intuitively attractive method of generating these forms. According to
Levi the underlying structure for "birthday boy" is "boy-have-birthday"
and the underlying structure for "birthday cake" is "cake-for-birthday."
Then under certain conditions have, for, etc. can be deleted to give us
the noun adjunct expressions. Given these rules, she argues, it is not
necessary to treat these expressions as separate lexical items. While her
rules seem sufficient to allow us to synthesize these compounds correctly,
difficulties arise when we try to use them for analysis. The question-
answering system needs to be able to infer from "birthday boy" that the
boy in question is having a birthday, but to avoid inferring from "birthday
cake" that the cake is having a birthday. For correct recognition we need
to be able to recover the unique underlying structure if one exists. (For
a similar criticism see Downing 1977:814-15.) Levi's theory accounts for
the generation of new noun-noun compounds. However, in order to account
for the recognition process we need lexical entries for familiar fixed com-
pounds and her theory to analyze new compounds. We have used Levi's struc-
ture as a basis for our representation of compound nouns.
Noun-noun compounds have separate entries. A birthday cake is treated
as "a cake for a birthday." A ball game is represented as "a game that has
a ball." A piggy bank is defined as "a bank that is a pig."
The system is told that Jim has a piggy bank and asked what the bank
looks like. It could be argued that anyone with sufficient cultural know-
ledge ought to be able to answer this even if all the banks in his past
were shaped like bee-hives, but we need a place to write down this cultural
encyclopedic knowledge, and a lexical entry for piggy bank seems like a good
place to put it.
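The asymmetry between "birthday boy" and "birthday cake" can be sketched by storing the recovered underlying structure with each fixed compound. The table, the relation names (HAVE, FOR, BE), and the function are our illustrative assumptions about how such entries might look.

```python
# Sketch: each fixed compound entry carries its Levi-style underlying
# structure (head, relation, modifier). Layout is an assumption.
COMPOUNDS = {
    "birthday boy":  ("boy",  "HAVE", "birthday"),
    "birthday cake": ("cake", "FOR",  "birthday"),
    "ball game":     ("game", "HAVE", "ball"),
    "piggy bank":    ("bank", "BE",   "pig"),
}

def having_birthday(phrase):
    """Infer 'X is having a birthday' only when the stored structure
    licenses it, blocking the false inference for 'birthday cake'."""
    head, rel, mod = COMPOUNDS[phrase]
    return head if (rel == "HAVE" and mod == "birthday") else None
```

Because each entry records which relation actually holds, the boy-has-birthday inference goes through while the cake-has-birthday one is blocked, which is the recognition requirement argued for above.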
Becker in his work on "The Phrasal Lexicon" (1975) has produced
evidence on the Soviet side of this argument. His data suggest that
fixed phrases comprise approximately half of our spoken output and have
an independent lexical existence. He includes in his lexicon euphemisms
("the oldest profession"), phrasal constraints ("by sheer coincidence"),
deictic locutions ("for that matter"), sentence builders ("(person A)
gave (person B) a long song and dance about (a topic)"), situational
utterances ("How can I ever repay you?"), and verbatim texts (proverbs,
song titles, etc.). He claims that

    we speak mostly by stitching together swatches of
    text that we have heard before; productive processes
    have the secondary role of adapting the old phrases
    to the new situation....most utterances are produced
    in stereotyped social situations, where the communi-
    cative and ritualistic functions of language demand
    not novelty, but rather an appropriate combination
    of formulas, cliches, idioms, allusions, slogans...
    (1975:60)

He has collected 25,000 phrases for the phrasal lexicon.
Catherine Flournoy (1975) has found several hundred fixed phrases
in a computer study of Father Coughlin's speeches.
This is not a new
idea to students of oral epic poetry. Homer constantly used fixed phrases
to fit syntactic and metrical slots. Dawn is always "rosy-fingered";
Hector is constantly "tall Hector of the shining helm."

There is a serious space-time tradeoff here between parsing time
and lexical storage space. It is probably true that people possess and
constantly use a phrasal lexicon. Whether we should use storage space
for items which we can parse/produce without ambiguity is another question.
Currently we provide separate entries for any phrase that we cannot
parse and interpret correctly from the entries for individual words. Brief
entries for these phrases seem absolutely necessary for any practical re-
cognition scheme. These entries also seem to be the appropriate place for
indexing pointers to the cultural information necessary for making infer-
ences and answering questions about birthday cakes and birthday parties.
There are theoretical arguments for such entries as well. We believe, as
Becker does, in the phrasal lexicon, although we do not include entries
for any phrases that can be parsed and interpreted correctly without a
separate entry. Any complete system for language processing must also,
of course, contain rules like Levi's to provide an ability to process
novel forms.
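Recognizing stored phrases during lookup can be done with a simple longest-match scan, sketched below. The phrase table and function name are illustrative assumptions; only the policy (prefer an existing phrasal entry, fall back to individual words) is taken from the text.

```python
# Sketch of longest-match fixed-phrase recognition: known multi-word
# phrases are folded into single lexical units; everything else falls
# back to word-by-word lookup. Phrase table is an assumption.
PHRASES = {("piggy", "bank"), ("birthday", "cake"), ("make", "headway")}
MAX_LEN = max(len(p) for p in PHRASES)

def tokenize_with_phrases(words):
    """Greedily replace the longest known phrase starting at each position."""
    out, i = [], 0
    while i < len(words):
        for n in range(min(MAX_LEN, len(words) - i), 1, -1):
            if tuple(words[i:i + n]) in PHRASES:
                out.append(" ".join(words[i:i + n]))
                i += n
                break
        else:
            out.append(words[i])  # no phrase found; ordinary word
            i += 1
    return out
```

Greedy longest-match is only one possible design; a system worried about overlapping phrases would need backtracking, but for a lexicon of fixed noun-noun compounds the greedy scan is usually enough.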
f. Preliminary Design Decisions for the Lexicon.
The goal of this project is a lexicon sufficient for parsing, forming
semantic representations, and making inferences, compact but still allowing
rapid lexical lookup. The lexicon is a data-base for the question-answering
system, a combination lexicon-encyclopedia. Syntactic and semantic information
are combined in the same lexical entries. Lexical semantic representations
are written in the same form as the semantic representations for sentences,
in a many-sorted, first-order predicate calculus. Homographs which vary
in meaning or use are differentiated by Arabic numeral subscripts. Separate
entries are included for phrases with fixed meaning.

The lexicon is organized in terms of lexical relations. Semantic fields
defined by relations are used to handle problems of ambiguity and context.
The relations are used to express and retrieve many different kinds of
information, from past participles to selection preferences to proper
habitats for lions. Thus the system of lexical relations is crucial to
representation, retrieval, and inference.
3. SOME THEORIES OF LEXICAL RELATIONS
While developing our lexical relations we examined a variety of
relational theories in anthropology and linguistics and even collected
folk definitions of our own (Evens 1975). We have been particularly
influenced by the anthropological fieldwork of Casagrande and Hale (1967),
by the memory models of Raphael (1968) and Werner (1974), and most of all
by the ECD of Apresyan, Mel'čuk, and Zolkovsky (1970). But we looked at
each of these relational theories from the peculiar point of view of com-
puter question-answering and the particular lexical environment of chil-
dren's stories, adding and discarding relations to fit the problem.
Casagrande and Hale - Lexical Relations in Folk Definitions.
Casagrande and Hale (1967) collected 800 Papago folk-definitions
and sorted them into groups on the basis of semantic and grammatical
similarities. They produced the following list of thirteen lexical re-
lations. (Table 1)
Table 1. The Relations of Casagrande and Hale

    Relation           Word            English Gloss of Papago Definition
 1. Attributive        burrowing owl   but they are small; and they act like
                                       mice; they live in holes
 2. Contingency        to get angry    When we do not like something we get
                                       angry.
 3. Function           tongue          with which we speak
 4. Spatial            bucket          in which we get water
 5. Operational        bread           which we eat
 6. Comparison         wolf            they are rather like coyotes, but
                                       they are big
 7. Exemplification    sweet           as sugar
 8. Class Inclusion    crane           a bird
 9. Synonymy           amusing         funny
10. Antonymy           low             not high
11. Provenience        milk            we get it from a cow
12. Grading            Monday          the one following Sunday
13. Circularity        near            when something is sitting nearby
                                       we say near
Casagrande and Hale make no claim that they have found all possible
lexical relations. These definitions were collected as part of a study
of dialect variation in Papago and Pima. The words to be defined were
chosen because they might exhibit dialect differences and not to elicit
all possible defining formulae. They suggest for intuitive reasons
adding the part-whole relation to their list although they did not iden-
tify it in their data. They also provide an interesting discussion of
word association data in which they give stimulus-response pairs from
the Minnesota norms of Russell and Jenkins (1954) exemplifying each of
their lexical semantic relations (except for circularity). They cite
some word association pairs which do not have exact analogues in the
Papago definitions. These are "coordinate" pairs like "needle-thread"
or "bread-butter", "clang" responses like "table-stable", or sequential
responses "fish-bone" and "whistle-stop". They remark about the
bread-butter pair that the relationship involved between "bread" and
"butter" is similar to that discussed for contingency, except that in
the Papago sample, the contingency relationship is not used if both X
and Y are nominal concepts. Webster's Collegiate Dictionary does
not mention butter in the bread entry but it has a separate entry:
"bread and butter. Bread spread with butter; hence, Colloq., livelihood
. . . ." (p. 103) It does mention thread in the needle entry and needle
in the thread entry. This kind of association belongs in every lexicon.
Werner's Lexical Relations.
There are two ways to go from the study of folk definitions. One
way is to find or invent lexical relations to fit all the folk defini-
tions one can collect in a given language, and then look for more in
the formulas of published dictionaries of that language. The other is
to abstract a minimal set of language-universal lexical-semantic relations
and then attempt to express other proposed lexical relations in terms of
the minimal set. Werner has made substantial steps in this second di-
rection (Werner and Topper 1976).

Werner's basic semantic relations are the taxonomic relation (T),
the modification or attribution relation (M), and the temporal sequencing
relation, queuing (Q). These he calls "the basic cement of the organi-
zation of cultural knowledge and memory" (1974:173).
The relation of taxonomy (T), the one expressed in English by "a
canary is a (kind of) bird", is written (bird) T (canary) and is re-
presented in Werner's diagrams by a directed arc labelled T.

    (bird)
      |
      | T
    (canary)

The relation of modification or attribution (M), the one expressed
in English by "the yellow bird" or "the bird is yellow", is represented
by a directed arc labelled M.

    (bird)
      |
      | M
    [yellow]

These last two diagrams can be combined to express the idea that a
canary is a yellow bird.

    (bird)
      |
      | T
    (canary)
      |
      | M
    [yellow]
The queuing relation Q represents the idea of order or sequence.
For example, (Monday) Q (Tuesday). This relation is fundamental in the
representation of plans in Werner's memory model. "Knowing how... re-
quires the retention of temporal order. There are things to be done
first, second, and so on and usually nonsense results if the order is
changed (One can't drink the beer before the bottle cap is removed)."
(ibid, p. 11)
Relations like 'consists of,' 'part of,' 'cause of,' 'like' are
handled as complex relations and composed from the primitive relations
M and T using logical operators (not, and, or) and particular lexical
items. For the 'part of' relation he gives the example "the thumb is
a part of the hand" (ibid, pp. 50, 51):

    [part] of [hand]
      |
      | T
    (thumb)

This diagram essentially says "the thumb is a (kind of) hand-part."
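Werner's two-primitive scheme can be sketched as a small labelled graph. The encoding below is our own illustration (node names and the edge table are assumptions): composing a T arc with an M arc yields "a canary is a yellow bird", and 'part of' reduces to an ordinary T link to a complex node rather than a new primitive relation.

```python
# Sketch (our encoding, not Werner's notation): a network with only the
# primitive edge labels T (taxonomy) and M (modification/attribution).
edges = {
    ("canary", "bird"): "T",         # a canary is a (kind of) bird
    ("canary", "yellow"): "M",       # the canary is yellow
    ("thumb", "part of hand"): "T",  # 'part of' via a complex node,
}                                    # not a new primitive relation

def holds(a, label, b):
    """True when the network contains an arc a -label-> b."""
    return edges.get((a, b)) == label

def is_yellow_bird(x):
    """Compose the T and M diagrams: x is a yellow bird."""
    return holds(x, "T", "bird") and holds(x, "M", "yellow")
```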
This is an extremely elegant and general theory. Werner's claim
of linguistic universality seems well-founded. His model is in many
ways intuitively appealing, although we are not convinced that our basic
lexical relations and our basic semantic relations are the same.
Our decision to try to design a lexicon with a larger set of lexical
relations is really an engineering decision, based on two probably
temporary practical difficulties:

(i) We do not know how to prove theorems in Werner's model.

(ii) We believe that a variety of language-specific lexical relations
can produce a more compact lexicon with more efficient search routines.
Raphael's Semantic Information Retrieval program (1968) combined
a semantic net representation with a relational calculus which makes
inferences in this net. SIR inputs simple English sentences, trans-
lates them into node-relation-node form, uses a relational calculus
to prove theorems, asks for more information if needed, and answers
questions using those inferences. The relations which Raphael used
are:

    x ⊂ y        (An x is a y, e.g. A boy is a person.)
    x ∈ y        (x is a y, e.g. John is a person.)
    equiv[x;y]   (x and y are two names for the same thing.)
    owng[x;y]    (Every y owns an x.)
    own[x;y]     (y owns an x.)
    partg[x;y]   (Some x is part of every y.)
    part[x;y]    (An x is part of y.)
    right[x;y]   (x is to the right of y.)
    jright[x;y]  (x is just to the right of y.)  (ibid, p. 92)
Each relation R has an inverse R⁻¹. If aRb then the pair (R,b) is stored
on the property list of a and (R⁻¹,a) is stored on the property list of
b. For each relation there are axioms. Further axioms describe how
different relations interact. For instance, the set inclusion relation
has the following property:

    a ⊂ x ∧ x ⊂ y → a ⊂ y    i.e., set inclusion is transitive

The interaction between set inclusion and partg is expressed by the
axiom

    partg[x;y] ∧ z ⊂ y → partg[x;z]

In other words, if an x is part of a y and a z is a y then an x is part
of a z. For example, if you know that mammals have hair and that whales
are mammals, then you know that whales have hair.
Some of Raphael's relations represent particular information, some
represent generic information. It is the generic relations which cor-
respond to the kind of lexical relations we are working with: set in-
clusion, equiv, partg, and owng.
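The SIR-style machinery above, storing each fact under both arguments and chaining set inclusion with partg, can be sketched as follows. This is our reconstruction for illustration, not Raphael's code; relation names and the acyclic-taxonomy assumption are ours.

```python
# Sketch of SIR-style storage and inference. Each fact aRb is stored
# under a as (R, b) and under b as the inverse (R-inverse, a); the two
# axioms from the text chain subset with itself and with partg.
from collections import defaultdict

props = defaultdict(list)  # node -> list of (relation, other-node)

def add(a, rel, b, inverse):
    props[a].append((rel, b))
    props[b].append((inverse, a))

def subset(x, y):
    """Transitive closure of set inclusion (assumes an acyclic taxonomy)."""
    if x == y:
        return True
    return any(subset(z, y) for (r, z) in props[x] if r == "subset")

def partg(x, y):
    """partg[x;z] and y subset z  ->  partg[x;y]."""
    return any(r == "partg-of" and (z == y or subset(y, z))
               for (r, z) in props[x])
```

With mammals-have-hair and whales-are-mammals asserted, the partg check derives whales-have-hair, the example given in the text.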
Apresyan, Zolkovsky, and Mel'čuk.

The Explanatory-Combinatory Dictionary of Apresyan, Mel'čuk, and
Zolkovsky (1970) contains a wide variety of lexical relations. When-
ever they notice a lexical regularity, they invent a lexical relation
to express it. Their paper contains about fifty relations and they outline
ways of combining the given relations to get still more. Many of these
relations appear in an earlier paper by Zolkovsky and Mel'čuk, which em-
phasizes the importance of specifying the grammatical transformations
associated with each lexical pairing. Suppose a story says:
The prince's gift of a magic apple to Zamiya 
dismayed his mother. 
In order to represent this correctly or answer a question like "What
did the prince give Zamiya?" the system needs to know not only the lexi-
cal relation between give and gift but also the transformation which
carries one string into another. In this lexicon the accompanying trans-
formation will be indicated in the lexical entry for the relation, not
in the lexical entry for the particular words give and gift.
Most of the relations given in these two papers are as appropriate
in English as in Russian. Some, although appropriate in English, embody
more sophistication than seems necessary in this project.
The Soviet collection of relations is open-ended. They expect to identify
more in further lexicographical work and to discover further properties of
the relations already identified. This seems highly intuitive. It is
probably the case that people go on expanding their repertoires of lexi-
cal relations and learning their properties and that this learning con-
tinues to a much greater age than the acquisition of syntax. Lexical
relations can be added to our lexicon just by adding a lexical entry
for the relation. At this point the actual addition of entries can
only be done by internal manipulation. Eventually it would be preferable
for the system to "learn" such relations or at least accept them in English
form. The authors refer to their relations as functions and the examples
are written in functional notation: Figur(passion) = flame; Anti(beautiful)
= plain, ugly. Since these functions are definitely not single-valued, we
have used the term lexical relation, in deference to the mathematical con-
ventions.
4. THE SET OF LEXICAL RELATIONS
The research reviewed above and our own experience with children's
stories has led us to posit nine major categories of relations. See
Table 2. These categories do not have any internal structure as a set;
however many of the relations themselves seemed to share some commonality,
usually semantic, and so it became natural to group them into sets of
categories. Our category list begins with the more familiar and classical
relations of synonymy and taxonomy, and presents an expanded sub-catego-
rization within antonymy. The grading category includes a somewhat diverse
collection of three relations. The attribute relations and the part-whole
category seem firmly motivated. The next two categories consist of co-
occurrence or collocational relations. The last two groups of relations
are paradigmatic in nature.

The set of relations presented here is by no means complete. Indeed,
it is deliberately open-ended. Whenever a new lexical regularity is seen
in the data, a new relation is added. In order to make the system of re-
lations extensible, therefore, a separate lexical entry has been construc-
ted for each relation containing its special properties and associated
axiom schemes. (Examples of this appear below, for example, in section d.
In addition, definitions of properties, such as transitivity, and a discussion
of their use in this system can be found in Appendix II.)
There are several arguments for this methodology. Primarily we are
convinced that lexical relations do not constitute a fixed set of language-
universal semantic primes. We also feel that we have not yet discovered
the most appropriate collection for our own use. In addition we hope to
Table 2. TABLE OF LEXICAL SEMANTIC RELATIONS*

a. The Classical Relations
   1. T          taxonomy                    lion T animal
   2. S          synonymy                    amusing S funny1

b. Antonymy
   1. COMP       complementarity             single COMP married
   2. ANTI       antonymy                    hot ANTI cold
   3. CONV       converseness                to buy CONV(3-2-1-4) to sell
   4. RECK       reciprocal kinship          husband RECK wife

c. Grading
   1. Q          queuing                     Monday Q Tuesday
   2. SET        set - element               flock SET sheep
   3. STAGE      manifestation               ice STAGE water

d. Attributes
   1. MALE       male - unmarked term        drake MALE duck
   2. FEMALE     female - unmarked term      lioness FEMALE lion
   3. CHILD      juvenile - parent           calf CHILD cow
   4. HOME       habitat - object            Africa HOME lion
   5. SON        characteristic sound -      bark SON dog
                 animal
   6. MADEOF     substance                   ski MADEOF wood

e. Parts and Wholes
   1. PART       part - whole                horn PART cow
   2. CAP        head - organization         chief CAP tribe
   3. EQUIP      personnel - object          crew EQUIP gun
   4. PIECE      count - mass                lump PIECE sugar
   5. COMESFROM  provenience                 milk COMESFROM cow

f. Typical Case Relations
   1. TAGENT     typical agent               conqueror TAGENT to conquer
   2. TOBJECT    typical object              dinner TOBJECT to dine
   3. TRESULT    typical result              hole TRESULT to dig
   4. TCAGENT    typical counter-agent       loser TCAGENT to beat2
   5. TINST      typical instrument          needle TINST to sew
   6. TSOURCE    typical source              earth TSOURCE to sprout
   7. TEXPER     typical experiencer         lover TEXPER to love
   8. TLOC       typical location            kitchen TLOC to bake

g. Other Collocation Relations
   1. COPUL      special copula verb         to fall COPUL victim
   2. LIQU       destroying verb             to correct LIQU mistake
   3. PREPAR     verb which means prepare    to lay PREPAR table
   4. DEGRAD     verb to deteriorate         to decay DEGRAD teeth
   5. INC        increase verb               to mount INC tension
      DEC        decrease verb               to shrink DEC cloth
   6. PREPOS     preposition - object        on PREPOS list

h. Paradigmatic Relations
   1. CAUSE      cause - thing or action     to send CAUSE to go
                 effected
   2. BECOME     become + adj                to redden BECOME red
                                             to clean2 CAUSE*BECOME clean1
   3. BE         be + predicate              to neighbor BE near
   4. NOMV       process noun - verb         death NOMV to die
   5. ADJN       adjective - noun            solar ADJN sun
   6. ABLE       used in combination with    combustible EXPER*ABLE to burn1
                 case relations only
   7. IMPER      irregular imperative        go ahead! IMPER to talk

i.
   1. PAST       past tense - infinitive     went PAST to go
   2. PP         past participle -           gone PP to go
                 infinitive
   3. PLURAL     plural - singular           men PLURAL man

* The numbering matches that in the text.
model the acquisition of relations at some later point. And finally,
we are attempting to introduce some modularity of design into a diffi-
cult programming project.
a. The Classical Relations: Taxonomy and Synonymy.
Aristotle demanded that every definition begin with the statement
of the genus to which the term belonged. The genus now is called a super-
ordinate taxon and the relation between the term and its genus is labelled
as the taxonomy relation. Even today commercial lexicographers following
the classical tradition use taxonomy along with synonymy as the funda-
mental relations. These relations also have played an essential part in
attempts at question-answering. In Raphael's (1968) system they appear
as set inclusion and equivalence. In Simmons' (1973) system they are
called IMPLY and EQ. The inference-making scheme in Marx's question-
answering system is based on these two relations. For example, one of
his test paragraphs says that a dog is brown and the question asks, "Is
the animal brown?" (Marx 1972:224). A dictionary lookup of dog finds
the taxonomic relationship between dog and animal. Animal is substitu-
ted for dog and the two sides match. Marx uses synonymy in the same way.
Suppose the text says "John wants money" and a question asks "Does
John desire money?" (1972:229). A dictionary lookup finds that desire
is a synonym of want. The substitution of one for the other results in
a successful pattern match.
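The substitution matching just described for Marx's system can be sketched in a few lines. The relation tables and the function are illustrative assumptions; the strategy (substitute a superordinate taxon or a synonym and retry the match) is the one described in the text.

```python
# Sketch of Marx-style substitution matching: a text word matches a
# question word directly, through a taxonomy (T) link, or through a
# synonymy (S) link. Tables are illustrative assumptions.
T = {"dog": "animal"}                       # dog T animal
S = {"want": "desire", "desire": "want"}    # rough synonymy, symmetric

def matches(text_word, question_word):
    if text_word == question_word:
        return True
    if T.get(text_word) == question_word:   # substitute the taxon
        return True
    if S.get(text_word) == question_word:   # substitute the synonym
        return True
    return False
```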
1.) Taxonomy. The taxonomy relation T is expressed in many ways in
English; perhaps "is a kind of" is the most typical:

    A dog is a kind of animal.
    A dog is an animal.
    Dogs are animals.

The notation dog T animal is used to state this relationship. In the
lexicon it is represented by an edge from the dog entry to the animal
entry labelled T.
Werner's work on the taxonomy relation in memory models has shown
that this relation plays a crucial role in lexical theory as well as in
practical question answering. He has discussed the theoretical aspects
of the taxonomy relation at length (Werner 1969, 1972, 1973, Perchonock
and Werner 1969, Werner and Fenton 1970) and has used it in several studies
(Werner and Begishe 1969, 1970).

Casagrande and Hale (1967) and Raphael (1968) use the name inclu-
sion for this relation. It is certainly related to set inclusion. If
A T B then the set of objects named by A, the extension of A, is a subset
of the set of objects named by B, the extension of B. The set of dogs
is a subset of the set of animals. If we look instead at the intensions
of A and B, the sets of attributes implied by the terms, we again find
a set inclusion relationship but in the other direction. If A T B then
the intension of A includes the intension of B. The characteristics that
let us identify an object as a dog include the characteristics that make
it an animal. Because of the possible confusion about the direction of
the inclusion relation, it seemed like a good idea to use another name.
The term taxonomy is the natural choice since it is now well-known in
anthropology.
2.) Synonymy. The synonymy relation poses some difficult philo-
sophical problems. Do two words ever have the same meaning, or are there
always differences? What criteria can be used to decide whether two words
are synonymous? Apresyan, Zolkovsky, and Mel'čuk (1970:5) have attempted
to state a precise criterion: the two words should be semantically sub-
stitutable for each other; the meaning of one should be expressible through
the other in any context. But this criterion substitutes one problem for
another. How can one tell whether such a substitution is successful,
whether the resulting sentences have the same meaning? It can be argued
that different sentence forms exist precisely in order to allow the ex-
pression of differences in meaning. However impossible it may be to de-
fine synonymy precisely, this concept is used daily in ordinary discourse.
Dictionary writers use it constantly. To simplify matters it is assumed
here that the synonymy relation holds between two words whenever any of
the dictionaries in the bibliography defines one as the other. This should
be read as "rough synonymy" or "approximate synonymy."
b. Antonymy.
Antonymy has long been recognized as a lexical relation. Webster's
New Collegiate Dictionary, for example, regularly lists antonyms. Its
definition of cold includes "Ant. hot" (1951:161). (The definition of
hot, although it mentions cold, does not include "Ant. cold".) The same
dictionary defines antonym as "A word so opposed in meaning to another
word that it negates or nullifies every single one of its implications."
It is true that antonymy indicates some important facts about implications,
and these need to be captured, but it is not true that antonymy involves
negating every proposition in sight. The problem is that there are many
kinds of oppositeness of meaning.

We have found four separate lexical relations which correspond to
separate subcategories of antonymy: complementarity, antonymy proper,
converseness, and reciprocal kinship.
1.) Complementarity, isolated by Lyons (1968), is the kind of
oppositeness that holds between single and married or male and female.
The denial of one implies the assertion of the other; the assertion of
one implies the denial of the other.

    If John is married, then John is not single.
    If John is not married, then John is single.
    If John is single, then John is not married.
    If John is not single, then John is married.

This kind of relation seems to hold primarily between two adjectives or
two adverbs belonging to the same primitive concept. If we set up a
lexical relation COMP, then the appropriate axiom schemes seem to be,
for the case where Adj1 COMP Adj2: if Z2, looked at along dimension Z1,
has property Adj1, then it also has the property Not(Adj2), and vice versa.
In the notation used for the semantic representations in the question-
answering system this is stated:

    Holds(P(Z1,Z2,Adj1)) → Holds(P(Z1,Z2,Not(Adj2)))

If, on the other hand, it has the property Not(Adj1), then it also has
the property Adj2, and vice versa:

    Holds(P(Z1,Z2,Not(Adj1))) → Holds(P(Z1,Z2,Adj2))

(and similarly for adverbs). COMP is a symmetric relation. If A COMP B,
then B COMP A. In other words it is its own inverse; in this lexicon
if A is marked COMP B, then B is marked COMP A, and so inferences are
available in both directions. Anything marriageable is either married or
single, not both; if one term applies, the other must not.
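The COMP axiom schemes above can be sketched as a tiny inference step. The table and function are illustrative assumptions; the behavior, asserting one term denies the other and denying one asserts the other, follows the two schemes just given.

```python
# Sketch of COMP inference: the relation is symmetric, so it is listed
# in both directions, and truth values flip across the pair.
COMP = {"married": "single", "single": "married"}

def infer(word, truth):
    """From Holds (truth=True) or Not-Holds (truth=False) of one term,
    infer the complement's truth value; None if no COMP link exists."""
    other = COMP.get(word)
    return (other, not truth) if other else None
```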
2.) Antonymy. Lyons restricts the term antonymy to the situation where the assertion of one implies the denial of the other, but the denial of one does not imply the assertion of the other. Red and green are antonyms in this sense. If X is red, it is not green. On the other hand, if X is not red it does not have to be green. It could be blue or yellow instead. Hot and cold behave in the same way. If X is hot then it is not cold, but if X is not hot we do not know for sure that it is cold; it may just be lukewarm. We set up a lexical relation ANTI to express this kind of antonymy. Again it applies particularly to adjectives and adverbs belonging to the same primitive concept. The lexical entry for ANTI gives an appropriate axiom scheme for the case in which Adj1 ANTI Adj2: if Z2 is Adj1 then it is not Adj2.
Holds(P(Z1,Z2,Adj1)) → Holds(P(Z1,Z2,Not(Adj2)))
(Similarly for adverbs.)
Verbs may be included in this kind of antonymy. Consider the pairs love-hate and open-shut. For a child, at least, "X loves Y" may imply "X does not hate Y." The appropriate axiom scheme for verb1 ANTI verb2 would be: if a simple sentence containing verb1 is true, then the negation is true when verb2 is substituted for verb1:
Holds(R(verb1,Z1,Z2,Z3,Z4)) → ¬Holds(R(verb2,Z1,Z2,Z3,Z4))
Since such verb pairs do not appear in our examples, such problematical inferences have been avoided.
There are some important semantic realities here which are not being captured. There is a set of incompatible color terms: red, orange, yellow, green, blue, purple, brown, black, white. One can describe any small area of a physical object in one of these terms if it is forbidden to use hedges like turquoise and pink. Hot and cold, like big and small, are opposite ends of a scale. Between hot and cold, warm and cool can be placed somewhere. Binary lexical relations are not adequate here. Perhaps developments in the theory of fuzzy sets will eventually provide a better description.
There are logical problems here too. If the story says the toy is red, then we want to answer "no" to the question "Is the toy green?" But toys can be both red and green in spots, patches, or stripes. If the story says that the toy is red and green, we do not want to get lost in a self-contradiction.
Adjectives which imply grading (cf. section c below) involve potential self-contradictions of a slightly different kind. Lyons discusses the sentence "A small elephant is a large animal." The current representation for that sentence in our system would be:
N1 = Ncom(elephant,X1)   P1 = P(size,X1,small)
N2 = Ncom(animal,X1)     P2 = P(size,X1,large)
For more details see the section on semantic representations. But small ANTI large, so we must conclude from P1 that P(size,X1,Not(large)). The problem is that when we call something a small elephant we imply a comparison with some norm for elephants. However, this comparison does not appear in our representation. (This difficulty has also been discussed by Bierwisch 1969 and Simmons 1973.)
3.) Converseness. This is Lyons' name for a third kind of antonymy. As examples he gives the pairs buy-sell and husband-wife. This kind of oppositeness does not seem to involve negation at all. Rather it involves some kind of permutation of the associated individuals. Dale calls this relation reciprocity and explains it this way:
Buy and sell are reciprocals, as are give and receive. What distinguishes these from antonyms (which they are, in a sense) is that whenever a sentence using one of them is appropriate, there is another appropriate sentence using the other member of the pair. For example, John buys books from Bill has the same meaning as Bill sells books to John. He gave flowers to her has the same meaning as She received flowers from him. This is a sort of "semantic passive"--like the passive transformation in syntax, it presents the same meaning from a different point of view. (1972:144)
Whether Dale's sentences have exactly the same meaning or not is debatable, but anyone would agree that one implies the other. What is needed is some compact way to indicate what these other appropriate sentences are and to derive them when they are needed. Zolkovsky and Mel'cuk (1970) have a clever way of doing this for verbs. They use a notation of the form:
Buy CONV (3 2 1 4) Sell
to indicate that
X1 buys X2 from X3 for X4
becomes
X3 sells X2 to X1 for X4
We have borrowed this notation, applying it to cases rather than subjects and objects. It is interesting that the Soviets include regular syntactic passives in their discussion of this relation. Since in this system inferences are made on the basis of the fully formed semantic representations, from which passives have been eliminated, they need not be included here.
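The CONV permutation notation can be sketched as a small function. This is our reading of the notation, not code from the paper: we take the i-th number in the permutation to say which argument of the first verb fills slot i of the second verb (for (3 2 1 4) the two readings coincide, since the permutation is its own inverse).

```python
# Sketch of CONV: Buy CONV (3 2 1 4) Sell means that
# "X1 buys X2 from X3 for X4" becomes "X3 sells X2 to X1 for X4".
def convert(verb2, permutation, args):
    """Permute the arguments of verb1 into the slots of verb2."""
    new_args = [args[p - 1] for p in permutation]
    return (verb2, new_args)

verb, new_args = convert("sell", (3, 2, 1, 4),
                         ["John", "books", "Bill", "$5"])
# "John buys books from Bill for $5" ->
# "Bill sells books to John for $5"
assert verb == "sell"
assert new_args == ["Bill", "books", "John", "$5"]
```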
4.) Reciprocal Kinship. If we had followed Reichenbach (1966) in treating kinship relations as functions of several arguments, then we could have used CONV for pairs like husband-wife also. Since kinship and social relationships like teacher-student are expressed in terms of have, however, it makes sense to posit a new relation RECK for RECiprocal Kinship and other social terms. Husband and wife relationships are represented this way:
Len is Martha's husband.   R(have,X1,X2,husband)
Martha is Len's wife.      R(have,X2,X1,wife)
We want to be able to derive one of these sentences from the other, using the lexical information husband RECK wife, i.e. if X1 has X2 as husband then X2 has X1 as wife. The axiom scheme for A RECK B says that if X1 has X2 as A then X2 has X1 as B.
Holds(R(have,X1,X2,A)) → Holds(R(have,X2,X1,B))
Other kinds of converseness or reciprocity have not occurred often enough to warrant a separate relation and a separate axiom scheme. They are entered as individual inferences in each entry.
Antonymy seems to be a highly diverse lexical concept. 
With further 
study it may spawn still more lexical relations. 
c. Grading.
Grading relations, like antonymy relations, involve alternatives of some kind. Graded alternatives appear to be organized in lists or other kinds of formal structures. Our collection of grading relations is in a state of flux; many aspects of grading are still not properly defined.
1.) Queuing. The notation Q is borrowed from Werner but used in a very restricted sense to connect adjacent items on lists, as in Monday Q Tuesday. It could be read "is immediately followed by."
2.) Set-element. SET relates the name for the set to the name of the elements, e.g. flock SET sheep. This is the relation which the Soviets call Mult. This relation seems to be particularly well-founded psychologically, for English has many special words of this type: pride of lions, bevy of maidens, gaggle of geese, and it is certainly a source of word-play.
3.) Manifestation. By contrast the STAGE relation, as in ice STAGE water, seems very shaky. The axiom schemes are not satisfactory and some of the territory is covered by the CHILD relation described in the section on attribute relations.
There seems to be a gap in our collection here. We have no parallel to the comparison relation of Casagrande and Hale (1967). Of course in the most common type of examples, where the items related are taxonomic brothers, or cohyponyms as they are sometimes called, the comparison relation can be expressed by a combination of T and its inverse. Recent work by Litowitz (1977) suggests that comparisons are an important component of the defining strategy of children. The boundary between the grading relations and the attribute relations described in the next section is also uncomfortably arbitrary.
d. Attribute Relations.
According to Casagrande and Hale (1967:168), whenever "X is defined with respect to one or more distinctive or characteristic attributes Y" a definition is "attributive". Given this all-inclusive description it is not surprising that the attributive category was the largest in their sample. They propose several subcategories including stimulus properties like size and color, distinctive markers, habitat, behavior, sex, generation, and line of descent. But in order to facilitate inference we need to associate axiom schemes with each relation. Thus we have broken these subcategories into still more precise relations.
1.) Male. The relation MALE, as in drake MALE duck, relates the masculine to the unmarked term. We want to be able to infer that if something is a drake, then it is a duck and it is male, i.e.
Ncom(drake,Z1) → Ncom(duck,Z1) ∧ P(sex,Z1,male)
This axiom can be derived when needed from an axiom scheme in the lexical entry for MALE which says that whenever ZN1 MALE ZN2, then a ZN1 is also a ZN2 and it is male; i.e.,
Ncom(ZN1,Z1) → Ncom(ZN2,Z1) ∧ P(sex,Z1,male)
2.) Female. Similarly, FEMALE, as in lioness FEMALE lion, relates the name of the female to the unmarked term.
3.) Terms for juveniles. The most common attribute relation in our vocabulary is CHILD, which relates the term for the offspring to the term for its parent, as in puppy CHILD dog, kitten CHILD cat, lamb CHILD sheep. The lexical entry for CHILD contains the axiom scheme
Ncom(ZN1,Z1) → Ncom(ZN2,Z1) ∧ P(age,Z1,young)
When puppy and dog have been substituted for ZN1 and ZN2 respectively, we get an axiom that tells us that if Z1 is a puppy then Z1 is a dog and Z1 is young.
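The instantiation step just described can be sketched mechanically: the scheme lives in the relation's lexical entry with placeholders, and a particular word pair is substituted in on demand. This is our illustration of the idea, not the system's code; the string representation of the scheme is ours.

```python
# Sketch: the axiom scheme stored under the CHILD relation,
# with ZN1/ZN2 as placeholders for the related word pair.
CHILD_SCHEME = "Ncom(ZN1,Z1) -> Ncom(ZN2,Z1) & P(age,Z1,young)"

def instantiate(scheme, zn1, zn2):
    """Substitute a word pair into a scheme's placeholders to
    derive a word-specific axiom when it is needed."""
    return scheme.replace("ZN1", zn1).replace("ZN2", zn2)

# puppy CHILD dog yields: a puppy is a dog and is young.
axiom = instantiate(CHILD_SCHEME, "puppy", "dog")
assert axiom == "Ncom(puppy,Z1) -> Ncom(dog,Z1) & P(age,Z1,young)"
```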
4.) Habitat. The habitat relation we have called HOME, so that Africa HOME lion.
5.) Characteristic Sound. The relation SON was borrowed from the Soviets. SON relates an object and the verb expressing the kind of sound it produces.
to bark SON dog
to roar SON lion
to meow SON cat
to choo-choo SON train
This relation seems to underlie a crucial part of the vocabulary of young children. Why is such a tremendous amount of time spent teaching children words like moo? Was this information once life-preserving, or is it a way of teaching how sound is structured into words, the phonology of the language? For whatever reason, children who never see a farm are carefully taught to associate the sound moo with cows.
6.) Substance. The relation we call MADEOF, as in
ski MADEOF wood
relates an object to the substance of which it is made. Casagrande and Hale classify as provenience both batea: "which is made out of mesquite" and milk: "we get it from a cow" (1967:184). Since in English these relationships are expressed in different ways, for example, the ski is made of wood - wooden ski, but milk comes from a cow - cow's milk, and since the appropriate inferences are different (the milk was once in the cow but the ski was not in the wood), we chose to classify them separately.
As the vocabulary expands we expect the list of attribute relations to expand. Litowitz (1977) is currently collecting definitions from children and isolating further relations. Smith and Maxwell (1977) have identified certain attribute relations which occur repeatedly in defining formulae in Webster's Seventh: COLOR, TIME, LOCATION, SIZE, and QUALITY. These relations, among others, will eventually be added to our lexicon.
e. Parts and Wholes.
1.) Part-Whole. The relation which links finger to hand and carburetor to car we call PART:
finger PART hand
carburetor PART car
The PART relation seems to be crucial in the definition of many everyday objects. While it is clearly important in computer models of memory, it seems hard to isolate from natural English sentences. Raphael's (1968) SIR model used some subtle heuristics to determine whether a particular instance of the verb have should be represented by the part relation or the own relation. Sometimes dialog with a human is necessary to resolve the ambiguity. Simmons (1973) recognizes a three-way ambiguity in have which is represented variously as HASPART, POSSess, and ASSOC (1973:76):
Mary has long fingers      HASPART
Mary has money             POSSess
Mary has fun in the park   ASSOC
Apparently the part-whole relation is hard to identify in Papago also. Casagrande and Hale do not find it in their Papago sample. They classify as exemplification definitions which are translated into English as "cows have horns" and "horses have tails." However, on the basis of intuition and the word-association data of Russell and Jenkins (1954) they posit a fourteenth relation (1967:191):
Constituent: X is defined as being a constituent or part of Y.
The example given is cheek-face.
Apresyan, Mel'cuk, and Zolkovsky do not have an explicit part-whole relation but they do include two relations in this same area. We have borrowed both.
2.) Head-Organization. CAP relates the head to the organization.
chief CAP tribe
3.) Personnel-Object. EQUIP relates the associated staff to the organization or object they serve:
crew EQUIP ship
4.) Count-Mass. The relation PIECE, which carves a countable chunk out of a mass, also belongs to the part-whole family. For example,
lump PIECE sugar
item PIECE news
Jespersen was intrigued by this mechanism, which he named individualization (1933:209); he discovered and listed many such examples. This seems to be the relation which the ECD calls SING (Apresyan et al., 1970:11).
5.) Provenience. We include here also the relation COMESFROM, as in milk COMESFROM cow. This is one aspect of the relation which Casagrande and Hale (1967) call provenience. (It should possibly be listed as an attribute relation along with its close cousin MADEOF.)
Our current lexicon contains only two axioms for the part-whole relation. One is transitivity: if X PART Y and Y PART Z, then X PART Z. The other, borrowed from Raphael, connects PART and Taxonomy. Essentially it says that if all X's are Y's and all Y's have Z's as parts, then all X's also have Z's as parts. There is an extensive philosophical literature involving this relation. Martin (1971) presents a system of axioms for part-whole and a review of work by Lesniewski, Woodger, and Tarski.
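The two part-whole axioms can be sketched as a simple recursive check. This is our illustration under assumed toy data (the poodle/tail facts are ours), not the system's theorem prover: the first clause handles direct assertions, the second transitivity, and the third Raphael's axiom connecting PART with taxonomy.

```python
# Toy facts: (part, whole) pairs for PART, (sub, super) pairs for T.
PART = {("finger", "hand"), ("hand", "arm"), ("tail", "dog")}
T = {("poodle", "dog")}

def has_part(whole, part):
    """Does `whole` have `part`, under the two part-whole axioms?"""
    # Direct assertion: part PART whole.
    if (part, whole) in PART:
        return True
    # Transitivity: X PART Y and Y PART Z gives X PART Z.
    for (p, w) in PART:
        if w == whole and has_part(p, part):
            return True
    # Raphael's axiom: if whole is a kind of Y (taxonomy) and
    # Y has the part, then whole has the part too.
    for (x, y) in T:
        if x == whole and has_part(y, part):
            return True
    return False

assert has_part("arm", "finger")   # via transitivity
assert has_part("poodle", "tail")  # via taxonomy
```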
f. Typical-Case Relations.
Casagrande and Hale discovered that certain familiar objects, body parts, foods, tools, and other objects of material culture were most often defined not by the relations discussed above but rather by their use in daily life, by common activities associated with them. For example, under the "function" relation they classify examples in which "X is defined as the means of effecting Y" such as
eye: "...with which we see things"
money: "...we buy things with it" (1967:175)
The "operational" class includes examples in which X is defined as "the characteristic goal or recipient" of action Y:
bridle: "...which they put on horses" (1967:178)
What they call the "spatial" relation also seems to be of this same type:
grindstone: "...on which a knife is sharpened" (1967:177)
Folk definitions collected from speakers of English often are of this variety, sometimes combined with taxonomy, e.g. "a house is a building in which people reside" (Evens 1975:340). Children in particular seem to prefer functional definitions (cf. Ruth Krauss' collection of children's definitions, A Hole Is to Dig, 1952).
Apresyan, Mel'cuk, and Zolkovsky's system includes a family of functions S1, S2, S3, S4 which relate nouns and verbs or adjectives. Their semantic structures are based on grammatical relations. For verbs these are a subject relation, a direct object relation, and two kinds of indirect object relations. The functions S1, S2, S3, and S4 correspond to these grammatical relations. S1 relates the verb to its generic subject. S2 relates the verb to its generic direct object, etc. For example (1970:10):
S1(to sell) = seller
S2(to sell) = goods (that which is sold)
S3(to sell) = buyer, client, customer (the one to whom the goods are sold)
S4(to sell) = price (that for which the goods are sold)
The ECD also contains four other substantive relations (1970:11). The values are nouns. The arguments can apparently be verbs, adjectives, or nouns. First is Smod, which gives the noun denoting the mode of action: Smod(to write) = handwriting. Sloc gives the noun denoting the place of the argument: Sloc(action) = scene. Sinstr gives the noun denoting the instrument: Sinstr(communication) = means, Sinstr(to think) = brain. Sres gives the noun denoting the result: Sres(to hunt) = bag.
Since the semantic representations in the question-answering system are structured in terms of cases rather than grammatical relations, we have set up a group of "typical-case" relations, one for each case relation in our case system. The typical-case relation relates the verb to typical fillers of that case argument slot. Thus, corresponding to the semantic relation AGENT we have a lexical relation TAGENT. The fact that someone who bakes can be called a baker is expressed in our lexicon:
baker TAGENT to bake
The stuff that you eat is usually called food; food TOBJECT to eat. The result of digging is usually a hole; hole TRESULT to dig. When the Cubs beat the Cardinals, the Cardinals are the losers; loser TCAGENT to beat2.
The thing you sew with is called a needle; needle TINST to sew. (This is the Casagrande and Hale operational relation.) Most plants sprout from earth: earth TSOURCE to sprout. One who loves is called a lover; lover TEXPR to love. People usually bake cakes in a kitchen; kitchen TLOC to bake2. It should be noticed that the relation TLOC bears a close resemblance to HOME, which gives the typical habitat for an animal or other object. The Soviet relation Sloc seems to include both. It is not clear that semantic theory can justify using two relations here. We have made a distinction because our system of semantic representations treats nouns and verbs differently, so that the associated axiom schemes for TLOC and HOME are formally different. It would be possible to use only one relation and test the argument for part of speech before choosing an axiom scheme. Perhaps the real problem is in the system of semantic representations.
This particular choice of lexical relations is based on the particular case system being used. We claim, however, that the same basic scheme would be effective for a lexicon functioning with a different system of semantic representations based on any other set of case or grammatical relations. This is so since in this scheme, corresponding to each semantic relation in the semantic representation there is a lexical relation in the lexicon relating verbs and typical fillers of argument slots.
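The one-relation-per-case scheme can be sketched with a small lookup table. The entries below are the paper's own examples; the table layout and function name are ours, for illustration only.

```python
# Sketch: each typical-case relation pairs a verb with a typical
# filler of the corresponding case slot (T + the case name).
TYPICAL = {
    ("to bake", "TAGENT"): "baker",
    ("to eat", "TOBJECT"): "food",
    ("to dig", "TRESULT"): "hole",
    ("to sew", "TINST"): "needle",
    ("to love", "TEXPR"): "lover",
    ("to bake2", "TLOC"): "kitchen",
}

def typical_filler(verb, case):
    """Look up the typical filler of a case slot for a verb."""
    return TYPICAL.get((verb, "T" + case))

assert typical_filler("to bake", "AGENT") == "baker"
assert typical_filler("to dig", "RESULT") == "hole"
```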
g. Other Collocation Relations.
The relations in this group, like the typical-case relations examined in the preceding section, are basically cooccurrence relations. They connect words which cooccur constantly and point to words which have special meanings in particular contexts. This is an important part of the lexical knowledge of the native speaker, often neglected in dictionaries. Most of our relations in this group are borrowed from the Soviet lexicographers: COPUL, LIQU, PREPAR, DEGRAD.
1.) Special Copula Verb. The COPUL relation indicates the correct copula verb for nouns where be/become is not appropriate. For example, to fall is the special copula verb for victim, to fall COPUL victim, as in "Constance fell victim to Louis' charm."
2.) Destroying Verb. LIQU relates a noun and the verb which means to liquidate or destroy it. This seems to be useful in English as well, and some examples belong to a child's vocabulary.
to erase LIQU mistake
to wipe out LIQU traces
3.) Prepare for use. The relation PREPAR relates a noun and the verb which means to prepare the object, to make it ready for use. This is particularly useful in making deductions about why people are doing things.
to lay PREPAR table
to make PREPAR bed
to load PREPAR gun
4.) Verb to deteriorate. The relation DEGRAD connects nouns and the appropriate verbs meaning to deteriorate, to go bad.
to decay DEGRAD teeth
to wear out DEGRAD clothes
5.) Increase and decrease in activity. The pair of relations INCrease and DECrease connect nouns and special-purpose verbs for increase and decrease.
to grow INC child
to shrink DEC cloth
(In terms of the Soviet relations, INC(x) = Incep(Plus(x)) and DEC(x) = Incep(Minus(x)).)
6.) Preposition-Object. PREPOS behaves much like the relation which the Soviets call LOC. It links suitable prepositions to particular nouns. In English things go on lists, not in them. The fact that on is the appropriate preposition for list is recorded as on PREPOS list.
These are all collocational relations that we have observed in our data. Mel'cuk's ECD contains even more collocation relations but we have not included them because they seem too literary or too sophisticated for the vocabulary of children's stories. For example, Bon (Apresyan, Mel'cuk, and Zolkovsky 1970:13) points to attributes meaning "good":
Bon(conditions) = favorable
Bon(aims) = lofty
Both the typical-case relations and the other collocation relations which we have described are syntagmatic relations. They connect words with other words which cooccur frequently in natural language sentences, sometimes with special meanings. We turn now to a group of paradigmatic relations which connect words which express aspects of the same core of meaning as it appears in various contexts or in different parts of speech.
h. Paradigmatic Relations.
The relations which we have grouped together as paradigmatic relations are highly disparate in kind and importance. CAUSE, BECOME, and NOMV are, we believe, essential to the structure of the English lexicon; ABLE and ADJN seem potentially quite useful. There seem to be very few examples of BE. All except BECOME were influenced by the inventory of Apresyan, Zolkovsky, and Mel'cuk.
1.) Cause. Traditional dictionaries use cause constantly to describe relationships between verbs. Dennison (1972) defines to send as "to cause to go". Webster's New Collegiate (1951) defines to boil as "to cause to bubble..." (p. 96). Schank (1975) treats cause as the most important relation. McCawley (1975c), in discussing to open, argues for two lexical entries, open1 for "intransitive" uses: "the door opened" and open2 for "transitive" uses: "John opened the door." Open1 and open2 are related by cause: to open2 is to cause to open1. McCawley's formulation will be followed here.
The first and longest entry in Webster's New Collegiate Dictionary for open belongs to the adjective. The definition of the intransitive verb begins "to become open". This suggests a renumbering:
open1 - adjective - "the door is open"
open2 - to become open1 - verb intransitive - "the door opens"
open3 - to cause to open2 - verb transitive - "John opens the door"
Open is only one of hundreds of verb-adjective homographs in English. Cool behaves like open. We start with the adjective cool1, "the jello was cool". The intransitive verb cool2 means "to become cool1", "the jello cooled in the refrigerator." The transitive verb cool3 means "to cause to become cool1", "Jane cooled the jello in the refrigerator." Other verb-adjective homographs like clean show a different pattern; the intransitive verb is missing.
clean1 - adjective - The room was clean.
*clean - to become clean1 - *The room cleaned.
clean2 - to cause to become clean1 - Jane cleaned the room.
Not all verb-adjective pairs are homographs. Modern English retains traces of an old suffix -en which turns adjectives into verbs. To redden is to make or become red. Sometimes the verb and the adjective are etymologically distant: to age1 is to become old.
We need a lexical relation CAUSE relating send and go, open3 and open2:
send CAUSE go
open3 CAUSE open2
The appropriate axiom scheme for the case verb1 CAUSE verb2 tells us that if the sentence containing verb1 holds, then so does the sentence containing verb2. Formally,
Holds(R(verb1,Z1,Z2,Z3,Z4)) → Holds(R(verb2,Z1,Z2,Z3,Z4))
2.) Become Adjective. We also need a lexical relation BECOME relating age1 and old, open2 and open1.
age1 BECOME old
redden BECOME red
open2 BECOME open1
If verb1 BECOME adj1, then if the sentence containing verb1 holds, the object that did the becoming must now have the property expressed by the adjective, i.e.
Holds(R(verb1,Z1,Z2,Z3,Z4)) → Holds(P(ZC1,Z1,adj1))
where ZC1 is the primitive concept corresponding to adj1. (This axiom may conceivably react in uncomfortable ways with tense.) For the moment the relation between clean2 and clean1, the "cause to become" relation, will be compounded from CAUSE and BECOME. It will probably occur often enough to deserve a name of its own, perhaps MAKE.
3.) Be. The relation BE parallels BECOME very closely. While BECOME relates the verb of becoming and the predicate adjective, BE relates the verb of being and the predicate adjective. For example, to neighbor is the verb which means to be near.
to neighbor BE near
This is the inverse of the relation which the Soviets call PRED. For some reason it seems to be much less common than BECOME.
4.) Process Noun and Verb. NOMV relates a process noun and its verb. Death is the nominalization of the verb to die; death NOMV to die. (This is the Soviet relation V0 and the inverse of the relation S0.)
5.) Adjective and Noun. The relation ADJN, the inverse of the Soviet A0, relates adjectives and nouns, as in solar ADJN sun. This relation may have to be split into two or more pieces.
Magnus Ljung (1970) suggests that adjectives formed from nouns by adding -y, e.g. sunny as opposed to solar, mean "having more than a normal amount of" whatever the noun denotes. Adjectives in -al and -ful may present certain other semantic regularities.
6.) Able. The relation ABLE is used in combination with case relations only.
understandable OBJECT*ABLE to understand
literate AGENT*ABLE to read
legible OBJECT*ABLE to read
The Soviet version of this relation has different subcategories - Able1, Able2, Able3, Able4 - to indicate grammatical arguments of the verb. Able1(to burn) = combustible; combustible things are precisely those which can be subjects of the verb to burn. On the other hand, Able2(to eat) = edible, since edible things are those which can be objects of the verb to eat. Since the semantic representation system in the question-answerer uses cases to connect verbs and arguments, we handle different kinds of ABLE-ness by combining ABLE with a case.
7.) Irregular imperative. The relation IMPER comes directly from the Soviet inventory. It relates colloquial imperative expressions to the appropriate main verb.
fire! IMPER to shoot
go ahead! IMPER to talk
This relation essentially involves very irregular imperatives, and this brings us to the inflectional relations.
Inflectional relations are dull but useful. Regular noun plurals and verb forms are handled by a suffix-chopping algorithm, but words like men and sang defeat it completely. We get around this difficulty in essentially the same way as some commercial dictionaries do. A separate entry is included for these words. The lexical entry for men consists of PLURAL man. The entry for sang is PAST to sing; for sung we have PP to sing. The axiom-generator for PLURAL changes the number associated with the object if necessary and moves to the main entry to pick up other axiom schemes there.
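The combination of suffix chopping and separate irregular entries can be sketched as follows. This is our own crude illustration (the real algorithm handles far more than a final -s); the irregular entries are the paper's examples.

```python
# Sketch: irregular words get their own lexical entries pointing
# at the main entry, while regular forms are handled by a
# (deliberately crude) suffix-chopping rule.
IRREGULAR = {
    "men": ("PLURAL", "man"),
    "sang": ("PAST", "to sing"),
    "sung": ("PP", "to sing"),
}

def lookup(word):
    """Return (relation, base form) for an inflected word."""
    if word in IRREGULAR:
        return IRREGULAR[word]
    if word.endswith("s"):      # toy stand-in for suffix chopping
        return ("PLURAL", word[:-1])
    return (None, word)

assert lookup("men") == ("PLURAL", "man")   # irregular entry
assert lookup("dogs") == ("PLURAL", "dog")  # regular chopping
```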
The inflectional relations are, of course, paradigmatic relations, but are grouped separately because of their strong family resemblance and particularly uninteresting nature.
5. THE ORGANIZATION OF THE LEXICON AND THE SEMANTIC REPRESENTATIONS 
The lexicon is a large network in which the nodes are lexical entries and the arcs are lexical relations; all the arcs are double-linked. To represent the network in the data base, each entry contains a list of attribute-value pairs. Each pair consists of an arc (i.e. a relation name) and the name of the entry at the other end of the arc. Each lexical relation L has an inverse L'. If entry1 contains the attribute-value pair L-entry2, then entry2 contains L'-entry1. Each relation also has a lexical entry which gives its properties and also tells how to interpret lexical relationships in the predicate calculus.
For example, the entry for dog includes the information dog T animal (dog is taxonomically related to animal). The system uses the information in the lexical entry for T to interpret this as:
Holds(Ncom(dog,X)) → Holds(Ncom(animal,X))
The lexical entry for T also tells us that T is transitive. The inventory of relations is expandable. To add a relation we need only add a lexical entry.
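The double-linked storage of arcs can be sketched in a few lines. This is our illustration of the data structure, not the system's code; the inverse-naming convention (T vs. T') is adopted here for clarity.

```python
# Sketch: each entry holds attribute-value pairs (relation name,
# other entry). Adding an arc also records the inverse arc under
# the other entry, so inference is available in both directions.
INVERSE = {"T": "T'", "T'": "T", "COMP": "COMP"}
lexicon = {}

def add_arc(entry1, relation, entry2):
    """Record entry1 -relation-> entry2 together with its inverse."""
    lexicon.setdefault(entry1, []).append((relation, entry2))
    lexicon.setdefault(entry2, []).append((INVERSE[relation], entry1))

add_arc("dog", "T", "animal")
assert ("T", "animal") in lexicon["dog"]
assert ("T'", "dog") in lexicon["animal"]
```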
When the meaning of the word cannot be expressed solely in terms of lexical relations, a definition is added to the lexical entry, phrased in the same form as the semantic representations and using the same depth lexis. These lexical semantic relations are written in the same form as semantic representations for sentences. The lexical entry for pet includes the information that a pet is an animal which is owned by a human:
Ncom(animal,Z1) ∧ Ncom(human,Z2) ∧ R(own,Z2,Z1)
This becomes the axiom
Holds(Ncom(pet,Z1)) → Holds(Ncom(animal,Z1)) ∧ Holds(Ncom(human,Z2)) ∧ Holds(R(own,Z2,Z1))
If an individual Z1 is a pet, then Z1 is an animal and is owned by a human Z2.
Thus this lexicon is a relational network model with words and lexical relations as semantic primes. Definitions are written using lexical relations and first order predicate calculus formulas.
The design of this lexicon is independent of a particular representation scheme, and the lexical relations we propose can be equally useful in another context. Nevertheless an overview of the semantic representations is included here in order to enable the reader to understand the notation in the examples of lexical entries in the next section. Anyone who does not find notational problems attractive should skip these paragraphs; with the exception of a few lines of formal details, the rest of this paper will make sense without it.
An Overview of the System of Semantic Representations.
The question-answering system of which this lexicon is a fundamental part uses a first order predicate calculus system of semantic representations. As it reads a paragraph, the system makes an internal model of the story, identifying objects and events and the relationships between them. The representations are written in a first order predicate calculus so that they can be used in an existing theorem prover (Henschen, Overbeek, and Wos 1974). In a first order predicate calculus we are allowed predicates, functions, and quantifiers like "there exists" and "for all", but predicates are not allowed to be arguments of other predicates. This particular calculus is many-sorted; that is, there are many different classes of objects in the system.
Suppose a story begins:
Peter heard a meow. Mother said, "The kitten is hungry." She sent Peter to the store. He bought milk and a big, red lollipop.
As we process this story we need first of all to recognize the different entities in the story. Here we have seven individual objects:
X1 - Peter
X2 - meow
X3 - Mother
X4 - kitten
X5 - store
X6 - milk
X7 - lollipop
We can write Ncom(lollipop,X7) to signify that X7 is a lollipop, since lollipop is the common noun that names X7. The story mentions two properties of the lollipop; it is big and it is red. The lexicon tells us that red is an adjective of color, so we represent this property using a functional notation
P(color,X7,red)
Similarly,
P(size,X7,big)
records the fact that the lollipop is big. These properties are numbered and put on a list for convenient retrieval. We may write
P1 = P(color,X7,red)   P2 = P(size,X7,big)
This story also tells us some relations between entities. "He bought a lollipop" can be expressed as
R1 = R(buy,X1,X7)
since he refers to X1, Peter, and X7 is the lollipop. The third sentence in the story,
She sent Peter to the store
contains a relation R3
R3 = R(send,X3,X1)
and a property of that relation
P4 = P(direction,R3,X5)
The predicate fioZds is used to make assertions. To assert the third 
sentence we write 
Ho zd8 (P4) 
me connection between the milk and the lollipop in the last sentence 
is described by an interrelation I, I(and,X6,X7) so-that the whole sen- 
tence becomes 
Bozds(R(buy,~~,I(and,~~,X~)) 
(There is a rule to rewrite this later as 
" BoZds (R(buy ,xl ,x6)) A hzds (R(bujr , x1 ,X7) ) 
hue it is applied only if some kind of inference is required from this 
sentence, ,e. g., if a question asks,  id Peter buy some milk?") 
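The representation just described can be sketched in modern terms. The following Python fragment (the nested-tuple encoding is our own illustration, not the authors' implementation) builds the story facts and applies the on-demand rewrite of a conjoined object into two conjuncts:

```python
# Sketch of the story representation as nested tuples (our own notation):
# ('Ncom', noun, obj), ('P', concept, arg, value), ('R', verb, ...),
# ('I', op, a, b), ('Holds', term).

objects = {'X1': 'Peter', 'X2': 'meow', 'X3': 'Mother',
           'X4': 'kitten', 'X5': 'store', 'X6': 'milk', 'X7': 'lollipop'}

names = [('Ncom', 'lollipop', 'X7'), ('Ncom', 'kitten', 'X4')]
properties = [('P', 'color', 'X7', 'red'), ('P', 'size', 'X7', 'big')]
assertions = [('Holds', ('R', 'buy', 'X1', ('I', 'and', 'X6', 'X7')))]

def expand_and(assertion):
    """Rewrite Holds(R(v,a,I(and,x,y))) as its two conjuncts.

    Applied only on demand, e.g. when a question asks
    'Did Peter buy some milk?'."""
    _, (tag, verb, agent, obj) = assertion
    if isinstance(obj, tuple) and obj[0] == 'I' and obj[1] == 'and':
        return [('Holds', (tag, verb, agent, obj[2])),
                ('Holds', (tag, verb, agent, obj[3]))]
    return [assertion]

conjuncts = expand_and(assertions[0])
```

Deferring the rewrite until a question requires it, as the text notes, keeps the stored model small.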
To obtain these representations we, of course, need a great deal
of information from the lexicon (like the information mentioned above
that red is an adjective of color and that big is an adjective of size).
Lexical information is also used in setting up representations for
questions like

    1. What color is the lollipop?
       P(color,X7,?)

The answer to this question can be found by a simple matching process
because the story representation already contains this kind of lexical
information. A question such as

    2. Did Peter buy some candy?

requires further lexical lookup since the word candy does not appear
in the story. The answer is found using the lexical relation T (taxonomy
or class inclusion) between lollipop and candy - the entry for lollipop
includes T candy. Similarly, the entry for candy includes T-1 lollipop,
where T-1 is the inverse or converse relation of T, which relates the
same pairs of objects in the opposite order.
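The taxonomy lookup for question 2 can be sketched as follows (the dictionary format and function names are ours; the T and T-1 relation names come from the text):

```python
# Hedged sketch: answer "Did Peter buy some candy?" by following the
# taxonomy relation T upward from words that do appear in the story.

lexicon = {
    'lollipop': {'T': ['candy']},       # lollipop T candy
    'candy':    {'T-1': ['lollipop']},  # the inverse relation
}
story_facts = [('buy', 'Peter', 'lollipop')]

def taxonomy_closure(word):
    """All superordinates of word reachable through T."""
    seen, stack = set(), [word]
    while stack:
        w = stack.pop()
        for parent in lexicon.get(w, {}).get('T', []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def did_buy(agent, sought):
    return any(verb == 'buy' and a == agent and
               (obj == sought or sought in taxonomy_closure(obj))
               for verb, a, obj in story_facts)
```

The closure is transitive, so the same lookup would succeed even if candy were reached through several T links.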
Likewise a multiple choice question such as

    3. Where does milk come from: cats cows trees cars

can be answered correctly using the provenience relation COMESFROM listed
in the entries for milk and cow. The question

    4. Where did Peter go?

is represented

    P(direction,R(go,X1),?)

The lexicon is then used to look for connections between go and send.
The lexical entry for send includes the information CAUSE go. The entry
for the lexical relation CAUSE contains several axiom schemes. With send
and go substituted in the correct positions we get the axioms

    Holds(R(send,Z1,Z2)) ⊃ Holds(R(cause,Z1,R(go,Z2)))

and

    Holds(R(cause,Z1,ZR1)) ⊃ Holds(ZR1)
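The two CAUSE axioms above can be applied mechanically. A minimal sketch (our own encoding; the axiom schemes themselves are from the text):

```python
# Instantiate the CAUSE axiom schemes with send/go and chase them to
# answer "Where did Peter go?".

def apply_cause_scheme(fact, cause_relations):
    """Holds(R(send,Z1,Z2)) ⊃ Holds(R(cause,Z1,R(go,Z2))),
    with 'send CAUSE go' supplied by the lexicon."""
    _, (_, verb, z1, z2) = fact
    effect = cause_relations.get(verb)
    if effect:
        return ('Holds', ('R', 'cause', z1, ('R', effect, z2)))

def cause_effect(assertion):
    """Holds(R(cause,Z1,ZR1)) ⊃ Holds(ZR1)."""
    _, rel = assertion
    if rel[0] == 'R' and rel[1] == 'cause':
        return ('Holds', rel[3])

sent = ('Holds', ('R', 'send', 'X3', 'X1'))    # Mother sent Peter
caused = apply_cause_scheme(sent, {'send': 'go'})
went = cause_effect(caused)                    # Holds(R(go,X1))
```

Once Holds(R(go,X1)) is derived, the direction property of the send relation supplies the answer (the store, X5).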
In order to answer the question

    5. How old is the cat?

we must first identify the cat in the story, that is, recognize that a
kitten is a cat and then realize that it is a young one. The lexical
relation CHILD is essential to this task. The definition of kitten
consists of CHILD cat. The lexical entry for cat contains CHILD kitten.
The lexical entry for the relation CHILD contains axiom schemes which,
when kitten and cat are filled in in the proper places, tell us that if
X is a kitten then it is a cat and it is young. That is, if Ncom(kitten,X)
then Ncom(cat,X) and P(age,X,young).
In addition some questions force us to look at the interaction be-
tween two or more lexical relations. To answer the question

    6. What animal did Peter hear?

we need to know that a meow is a typical cat sound, which is expressed
by the lexical relation SON, meow SON cat. We also need to know that
a cat is an animal, cat T animal, and that a kitten is a young cat, as
above kitten CHILD cat.
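Answering question 6 thus amounts to chaining lexical relations until a word of the right class is reached. A hedged sketch treating the relations as edges of a small graph (the encoding is ours):

```python
# The answer chain for "What animal did Peter hear?":
# meow SON cat, cat T animal, kitten CHILD cat.

edges = [('meow', 'SON', 'cat'),      # a meow is a typical cat sound
         ('kitten', 'CHILD', 'cat'),  # a kitten is a young cat
         ('cat', 'T', 'animal')]      # a cat is an animal

def related(start, goal):
    """Is goal reachable from start along any chain of lexical relations?"""
    frontier, seen = [start], {start}
    while frontier:
        word = frontier.pop()
        if word == goal:
            return True
        for a, _, b in edges:
            if a == word and b not in seen:
                seen.add(b)
                frontier.append(b)
    return False
```

Peter heard a meow (X2); since the meow leads through SON and T to animal, the animal heard is the cat, i.e. the kitten of the story.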
This has been an extremely brief introduction to the semantic system
used in the question-answering scheme of which this lexicon is a part.
For those who are interested in the representations themselves Appendix
I contains a brief formal presentation. A more complete description is
in preparation. (M. Evens and G. Krulee, "Semantic Representations for
Question-Answering Systems.")

Lexical relations, we are convinced, are an extremely useful addi-
tion to any lexicon, whatever the underlying semantic system. The
axioms which are associated with each relation, of course, have to be
expressed in the semantic representations of the system in which the
lexicon is being used.
6. THE FORM OF THE LEXICAL ENTRY 
The most crucial step for the lexicographer is the design of the
lexical entry. Somehow all the different kinds of lexical information
previously decided upon must be neatly packaged into a compact, con-
sistent, and accessible package. The lexicon is a large network in which
the nodes are lexical entries and the arcs are lexical relations. Lexi-
cal entries can be found from an alphabetic list, so that the network
may be entered at any point. There is a subnetwork containing lexical
relations and their logical properties.

Each entry begins with the letter string which names it. Homo-
graphs are numbered 1, 2, 3, ... to prevent confusion. Thus, clear1 is
the adjective, clear2 is the verb 'to become clear1', and clear3 is the
verb 'to cause to clear2' or 'to cause to become clear1'. Entries contain:
(i) Category - Part of speech, sort, lexical relation, etc. 
(ii) Irregular inflectional morphology.

This latter is stated in terms of a special set of lexical rela-
tions -- PAST, PP (past participle), and PLUR(al) are the only ones
needed for our simple data-base. The lexical entry for make includes
PAST - made, PP - made. Made has a separate lexical entry but a very
short one:

    made  PAST - make
          PP - make

The lexical entry for child includes PLUR - children; the lexical entry
for children consists of:

    children  PLUR - child
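The morphology relations are ordinary attribute-value lookups, so a minimal sketch suffices (the dictionary layout is our own rendering of the entries just shown):

```python
# Irregular morphology as PAST/PP/PLUR attribute-value pairs,
# mirroring the made/make and children/child entries in the text.

entries = {
    'make':     {'PAST': 'made', 'PP': 'made'},
    'made':     {'PAST': 'make', 'PP': 'make'},   # short entry pointing back
    'child':    {'PLUR': 'children'},
    'children': {'PLUR': 'child'},
}

def lookup(word, relation):
    """Follow one morphology relation; None if the entry lacks it."""
    return entries.get(word, {}).get(relation)
```

Note that the relation is stored in both directions (made points back at make), exactly as the text's short entry for made does.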
(iii) Lexical relations and pointers to their values in the form
of attribute-value pairs. The lexical entry for puppy contains CHILD
- dog. The lexical entry for dog contains CHILD - puppy. The lexical
entry for the lexical relation CHILD tells us how to interpret these.
It contains an axiom scheme which when filled in tells us that X is a
puppy if and only if X is a dog and X is young: Ncom(puppy,X) means
that Ncom(dog,X) and also P(age,X,young). Information often classed as
derivational morphology will be included here; the lexical entry for
soap, for example, contains ADJN - soapy. Some of this derivational
information could be stated instead in general rules and probably
should be in any larger data base.
(iv) Parameters appropriate to particular categories.

(v) Definitions. These are in the form of logical inferences
that may be drawn when a given word is used, and which are idiosyn-
cratic enough not to be coded in terms of lexical relations. Only a
few words have definitions. Puppy, for example, does not because
the information that a puppy is a young dog is indicated by the lexi-
cal relation CHILD - dog. Pet, on the other hand, has a definition

    Ncom(pet,Z1): Ncom(animal,Z1) ∧ Ncom(human,Z2) ∧ R(own,Z2,Z1)

When this definition is retrieved it is transformed into the axiom

    Holds(Ncom(pet,Z1)) ⊃ Holds(Ncom(animal,Z1)) ∧
    Holds(Ncom(human,Z2)) ∧ Holds(R(own,Z2,Z1))

In other words, if some individual Z1 is a pet, then Z1 is an animal
owned by some human Z2.
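The retrieval-time transformation of a definition into a usable axiom can be sketched as follows (tuple notation and function names are ours; the pet definition is the one given above):

```python
# Turn the stored definition of pet into an inference rule by wrapping
# each side in Holds, then apply it to a concrete individual.

pet_definition = (('Ncom', 'pet', 'Z1'),
                  [('Ncom', 'animal', 'Z1'),
                   ('Ncom', 'human', 'Z2'),
                   ('R', 'own', 'Z2', 'Z1')])

def definition_to_axiom(definition):
    """Wrap head and body in Holds at retrieval time."""
    head, body = definition
    return (('Holds', head), [('Holds', term) for term in body])

def apply_axiom(axiom, fact):
    """If fact matches the head, bind Z1 and instantiate the body."""
    (_, head), body = axiom
    if fact[0] == 'Holds' and fact[1][:2] == head[:2]:
        binding = {head[2]: fact[1][2]}
        subst = lambda term: tuple(binding.get(x, x) for x in term)
        return [('Holds', subst(term)) for _, term in body]
    return []

axiom = definition_to_axiom(pet_definition)
facts = apply_axiom(axiom, ('Holds', ('Ncom', 'pet', 'X9')))
```

Z2 stays free in the result, matching the existential reading "owned by some human Z2".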
Omitted from this lexicon are the examples which are an important
and valuable part of other dictionaries. This system does not have
the generalizing power to use examples effectively and, in addition,
they occupy a great deal of space. The most natural way of handling
examples in such a model might be to accumulate them from semantic
representations of sentences which the system parses. The task of
organizing, pruning, and generalizing from examples is too formidable
to tackle here.
Nouns: Taxonomy seems to be the most important lexical relation
for nouns, but many others appear in the texts as well.

    dog T animal          A dog is an animal.
    cent T money          A cent is a kind of money.
    puppy CHILD dog       A puppy is a young dog.
    soil S earth          Soil is the same thing as earth.
    cake TRESULT bake     The typical bring-into-being verb for cake is bake.
    bubble TRESULT blow   The typical bring-into-being verb for bubble is blow.
The syntactico-semantic features are used in noun entries only.

    Feature Names    Feature Values
    Gender           Male  Female  Neuter
    Animateness      Human  Animate not human  Inanimate
    Number           Singular  Plural
    Count/Mass       Count  Mass

Originally, following Winograd, the number and count features were com-
bined into a single feature with three values: singular, plural, and
mass. But McCawley has recently (1975a) given examples of plural mass
nouns: clothes, guts, brains, etc. It is impossible to argue with
counterexamples from everyday language. The feature information can
be expressed compactly as a vector of 1's and 0's.
               Gender   Animateness   Number   Count
    Ted        100      100           10       10
    puppy      111      110           10       10
    sugar      001      001           10       01
    fish       111      010           11       11
    boat       011      001           10       10
    clothes    001      001           01       01
These features are used to determine pronoun choice, for example, not to
provide semantic information. Puppy is marked as having the feature
human so that the system can parse "the puppy who barks" and "the cat
who walks alone."
Definitions for nouns begin with the specification of the function,
Ncom or Nprop:

    BANK  Ncom(bank,Z1): P(location,R(save,Z2,Z3),Prep(in,Z1))

(A bank is a place where things are saved.) Smith and Maxwell (1977)
include here commonly understood metaphorical extensions, metaphorical
cliches (e.g. pitch-hell). These also can be expressed by lexical
relations (cf. the Soviet Figur function which gives figurative forms;
presumably pitch Figur hell). No obvious ones occur in this data base,
so that this item is not currently included.
Sample entry for puppy:

    Category:   common noun
    Relations:  S pup
                CHILD dog
    Parameters: 111 110 10 10

The relationship puppy T animal is not included. It can be inferred
from puppy CHILD dog and dog T animal. Omission of relationships
which can be easily inferred saves space but costs time. It is probably
the case that people actually store these relationships directly. The
fact that most, if not all, puppies in the child's world are pets is
not stored either. This is open to question.
Sample entry for pet:

    Category:   common noun
    Relations:  T animal
                RECIP owner
    Parameters: 111 110 10 10
    Definition: Ncom(pet,Z1): Ncom(animal,Z1) ∧ Ncom(human,Z2) ∧ R(own,Z2,Z1)

The word owner definitely belongs in the lexical universe of pet. We
can recover it from the presence of own in the definition and the fact
that owner TEXPER own. In a child's world, though, the pet-owner re-
lationship seems to be a reciprocal kinship relationship like daughter-
mother.
Non-Copula Verbs: Every non-copula verb entry includes case in-
formation, in the form of a list of one or more arguments. For each
argument we need four pieces of information:

    (i)   How it is realized syntactically: subject, object,
          or a list of prepositions.
    (ii)  The case(s) involved.
    (iii) Whether the case must be explicitly specified (OBLigatory),
          whether it is optional and unnecessary (OPT), or
          whether when absent it must be understood (ELLiptical).
    (iv)  Selection preferences: the top node of the taxonomy
          subtree.

(The classification names in (iii) are borrowed from the SPEECHLIS
project, Nash-Webber, 1974). The elliptical cases belong to verbs
which Chomsky (1965) marked [+object-deletion], which allow the object
deletion transformation. Such verbs are eat and read where the object
is easily understood. But this phenomenon also occurs with other asso-
ciated noun phrases, not just the object. The sentence

    John and Mary gave an alarm clock.

begs for a dative-experiencer in isolation, but sounds perfectly appro-
priate in answer to the question

    What did John and Mary give the Andersons for a
    wedding present?

For give both the object and the experiencer may be deleted. A sign on
the door saying "We gave" is acceptable because everybody understands
that it means "We gave money to the United Fund." For buy the argu-
ments are
ments age 
d ii iii 1v 
1 Subject agent, source OBL human, organization. 
2. Object objective OBL thing 
(In the Wall Street Journal dialect thls argument is 
ELLiptlcal.) 
3. From source OPT human, organization. 
4. For Instrument OFT money 
For gzve, they are 
1. Subject agent, source OBL human, organization. 
2. Object objective EL thing 
3. To, Object experieacer ELL human, organization. 
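The argument frames for buy and give can be sketched as data, with a check that enforces the OBL/OPT/ELL codes (the frame format and function names are our own):

```python
# Case frames with OBL/OPT/ELL status codes and selection preferences,
# following the buy and give entries in the text.

frames = {
    'buy':  [('subject', ['agent', 'source'], 'OBL', ['human', 'organization']),
             ('object',  ['objective'],       'OBL', ['thing']),
             ('from',    ['source'],          'OPT', ['human', 'organization']),
             ('for',     ['instrument'],      'OPT', ['money'])],
    'give': [('subject', ['agent', 'source'], 'OBL', ['human', 'organization']),
             ('object',  ['objective'],       'ELL', ['thing']),
             ('to',      ['experiencer'],     'ELL', ['human', 'organization'])],
}

def check_clause(verb, filled_slots):
    """OBL slots must be filled; absent ELL slots must be understood
    from context; OPT slots may simply be missing."""
    missing_obl = [slot for slot, _, status, _ in frames[verb]
                   if status == 'OBL' and slot not in filled_slots]
    elliptical = [slot for slot, _, status, _ in frames[verb]
                  if status == 'ELL' and slot not in filled_slots]
    return missing_obl, elliptical

# "We gave" is acceptable: no OBL slot is empty, but the object and
# experiencer must be recovered from context.
missing, understood = check_clause('give', {'subject'})
```

This is why the bare sign "We gave" parses for give but a bare "We bought" would leave an obligatory object unaccounted for.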
The next item tells us whether a verb is an action verb or not
(±ACTION). Action verbs and adjectives can appear in imperative
sentences but non-action verbs and adjectives cannot.

    Throw the ball!
    * Own the house!
    Be sensible!
    * Be tall!

These can also appear in embedded sentences dependent on imperative
performatives like order and tell.

    Sally told Sam to throw the ball.
    * Sally told Sam to own the house.
    Sally told Sam to be sensible.
    * Sally told Sam to be tall.

And they can take the progressive aspect, while non-action verbs and
adjectives cannot. This feature is important in calculating duration.

    Sam is throwing the ball.
    * Sam is owning the house.
    Sam is being sensible.
    * Sam is being tall.
The next item tells us whether the verb allows a regular passive
or not. Only those which do not allow a passive are marked. Apresyan,
Mel'cuk, and Zolkovsky treat this also using lexical relations. Eventu-
ally this will probably be computable from other information in the entry.
Some important items apply only to verbs that take sentential com-
plements. This includes the complementizer(s) the verb takes and whether
or not it allows not-transportation. The possible complementizers are:

    THAT   Mother said that Mike should move.
    FORTO  Mother told Mike to move.
    ING    Mother did not like Mike's sitting there.
    FROM   Mother prevented Mike from going.
Verbs like think which give us roughly synonymous sentences whether
not is in the main clause or the subordinate clause are said to permit
not-transportation.

    John didn't think Mary had gone.
    John thought Mary hadn't gone.

Many verbs do not permit not-transportation, of course. These sen-
tences are not synonymous:

    John didn't say that Mary had gone.
    John said that Mary hadn't gone.

This complementizer information is coded by adding to the entry: THAT,
FORTO, ING, FROM, or NOT, as appropriate.
The next item is the implicational structure of the verb. There
are seven such verb-classes and an eighth wastebasket class from which
no inferences can be made (Joshi and Weischedel 1973; Karttunen 1970;
Kiparsky and Kiparsky 1970); see Table 3. In this system factives are
the unmarked case since we always assume that we can assert arguments
unless we are explicitly told not to. The lexical entry for each verb
which can take a predicate complement and which is not a factive is
marked with its class name. Each class name appears in the lexicon
with its appropriate inference pattern. For a negative-if verb, for
example, this is:

    If R(V,Z1,S) can be asserted then S can be denied.
    If R(Not(V),Z1,S) can be asserted then S is in limbo.
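Each class's inference pattern is a small lookup table. A hedged sketch (class names and the table layout are ours, following the negative-if pattern just given; statuses are 'assert', 'deny', or 'limbo'):

```python
# Per-class inference patterns for predicate-complement verbs.
# Keys: whether the main clause is positive; values: what we may
# conclude about the complement S. Anything unlisted is 'limbo'.

PATTERNS = {
    'factive':     {True: 'assert', False: 'assert'},
    'implicative': {True: 'assert', False: 'deny'},
    'negative-if': {True: 'deny',   False: 'limbo'},
    'wastebasket': {},   # no inferences can be made
}

def complement_status(verb_class, main_clause_positive):
    """What may we conclude about the complement S?"""
    return PATTERNS[verb_class].get(main_clause_positive, 'limbo')
```

Since factives are the unmarked case, only non-factive complement-taking verbs need to carry a class name in their entries.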
    Class                  Implications       Examples (R: realize; S: Meg baked the cake)

    Factive                R(S) ⊃ S           Jerry realized that Meg baked the cake.
                           ¬R(S) ⊃ S
    Implicative            R(S) ⊃ S           We managed to finish the job.
                           ¬R(S) ⊃ ¬S
    Only-if                ¬R(S) ⊃ ¬S         They allowed Jim to visit China.
                                              Jim had an opportunity to visit China.
    If                     R(S) ⊃ S           Larry persuaded (forced) Bill to accept the job.
    Negative implicative   R(S) ⊃ ¬S          Larry prevented Bill from winning the game.
                           ¬R(S) ⊃ S          John failed to go.
                                              Hugh refrained from smoking.
    Counterfactive         R(S) ⊃ ¬S          Mary pretended that Ben went home.
    No implications                           Jerry wanted Meg to elope with him.

    Table 3. Classification of Main Verbs in Predicate Complement Constructions
    (adapted from Joshi and Weischedel 1973)
The next to last item is the performative classification. The
classification used is that proposed by McCawley (1975b) as an ex-
tension to the work of Austin and Vendler: Verdictive, Operative, Ad-
visory, Imperative, Commissive, Behabitive, Expositive (1-7). This is
really a luxury in a recognition-only system for children's paragraphs.
The only speech-act verbs involved in our data are say and tell. Per-
formative classification does interact with syntax (especially modals),
particularly in use with "would like to", "would", "will", and "let me".

The last item tells whether a verb takes indirect question (IQ). It
is probably the case that when factivity and performative structure are
understood, this item will be predictable. The IQ verbs are apparently
all expositives, but not all expositives are IQ's and the IQ classifica-
tion seems to cut across McCawley's subclassification of the expositives.
Presuppositions are included in the definition, at present, rather
than as a separate item.
Sample entry for bake1:
    (The cake baked in the oven.)
    (The rock baked in the sun.)

    Category:   noncopula verb
    Relations:  CAUSE bake2
                TLOC pan, oven
    Parameters: Args - 1. Subject - result, experiencer - OBL - thing.
                Action - Yes
Sample entry for bake2:
    (Mother baked a cake.)

    Category:   noncopula verb
    Relations:  T make
                CAUSE bake1
                TAGENT baker
                TLOC kitchen
    Parameters: Arguments - 1. Subject - agent - OBL - human.
                            2. Object - result - ELL - food, pottery.
                            3. For - experiencer - OPT - human, event.
                Action - Yes

This entry does not include bakery. A large lexicon could use a new lexi-
cal relation, STORE.
Sample entry for tell1:

    Category:   noncopula verb
    Relations:  T speak
                S say
                TAGENT narrator
    Parameters: Arguments - 1. Subject - agent - OBL - human.
                            2. Object - objective - ELL - story.
                            3. To, object - experiencer - OBL - human.
                Action - Yes
                Comp - THAT
                PERF (performative) - Expositive
                IQ - Yes
Sample entry for tell2:

    Category:   noncopula verb
    Relations:  T speak
                S command
    Parameters: Arguments - 1. Subject - agent - OBL - human.
                            2. Object - experiencer - OBL - animal, human.
                            3. Object - objective - OBL - Sentence.
                Action - Yes
                Comp - FORTO
                IS (implicative structure) - Null
    Definition: R(tell2,Z1,R(Z2,Z3),Z3)

(that is, if someone tells somebody to perform an action then he is
saying that he orders that person to perform the action.)
Copula Verbs: These are marked as verbs of perception or verbs of
motion as appropriate if they are not of the 'be-become-seem' variety.
Verbs of perception are marked with the perceptual sphere. This helps
to construct appropriate semantic representations. There is a close
relation between the following sentences and we need to make inferences
from one to another.

    Sally listened to the trumpets.              (active)
    Sally heard the trumpets.                    (cognitive)
    The trumpets sounded beautiful to Sally.     (flipped)

The third sentence is called flipped because its arguments are switched
from those in the first two. Sound is the flip perception verb for hear
(cf. Rogers 1972). Thus, the entry for the copula verb sound is marked:

    type   - perception
    sphere - aural
    flip   - hear
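The flip marking makes the inference between the flipped and cognitive sentences a matter of swapping arguments. A minimal sketch (the tuple encoding and function name are ours; the entry fields are from the text):

```python
# Use the flip marking on sound to recover the cognitive counterpart
# of a flipped perception sentence by exchanging the arguments.

entries = {'sound': {'type': 'perception', 'sphere': 'aural', 'flip': 'hear'}}

def unflip(rel):
    """('R','sound',stimulus,experiencer) -> ('R','hear',experiencer,stimulus)."""
    _, verb, stimulus, experiencer = rel
    flip_of = entries[verb]['flip']
    return ('R', flip_of, experiencer, stimulus)

# "The trumpets sounded beautiful to Sally." -> "Sally heard the trumpets."
cognitive = unflip(('R', 'sound', 'trumpets', 'Sally'))
```

The sphere field (aural) would similarly license inferences to the active sentence with listen.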
Adjectives: The first special item for an adjective is the primi-
tive concept. For red it is color; for big and small it is size.
The second item is the selection preference. For red it is thing;
for big it is thing, thought. The selection preference could probably
be stated once in the entry for the primitive concept and not repeated.
Since it is useful to have it readily available in parsing, it is
included separately in every adjective entry.

With adjectives as with verbs we often have causally related homo-
graphs. The warm in "warm coat" has a different meaning from the warm
in "warm pie." A warm pie has a temperature greater than room tempera-
ture, but a warm coat makes you warm. These are called warm1 and warm2
and are connected by CAUSE. How does one recognize which is which?
If the head noun is clothing or one of the 'furnace-stove-oven'
family or indeed anything else which has the function heat, warm2 is
assumed.
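The recognition heuristic for warm is a one-line lookup on the head noun's function. A hedged sketch (the lexicon fragment and sense labels warm1/warm2 follow the text; the dictionary itself is our own illustration):

```python
# Pick the causative warm2 ("makes you warm") when the head noun's
# entry carries the function heat; otherwise the plain warm1.

noun_functions = {'coat': 'heat', 'oven': 'heat', 'stove': 'heat',
                  'furnace': 'heat', 'pie': None}

def sense_of_warm(head_noun):
    """warm2 for clothing and the furnace-stove-oven family, else warm1."""
    if noun_functions.get(head_noun) == 'heat':
        return 'warm2'
    return 'warm1'
```

So "warm coat" selects warm2 while "warm pie" keeps the temperature reading warm1.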
Adjectives, like verbs, are marked 'Action - Yes' or 'Action - No'.
Lexical entries for adverbs are very much like those for adjectives.

The main strategy followed in the design of the lexical entry has
been to make it as compact as possible. It seems likely that more in-
formation will have to be added later.
7. SUMMARY 
This lexicon is designed to serve as the global data base for a
computer question answering system. It is therefore an integrated lexi-
con-encyclopedia, storing information needed for parsing, for development
of an internal model, and for making inferences. Syntactic and semantic
information are integrated into each entry.

Lexical entries are provided for all words which appear in the text
except for those derived forms whose roots can be recovered by a trivial
computation. Thus there are entries for went and gone but not for goes
and going, for unwanted but not for wanted. Entries are also provided
for some word combinations, such as birthday cake and thank you. Lexical
entries are tied together by lexical-semantic relations which provide the
internal structure of the lexicon.

Relations present both practical advantages and theoretical charms.
The most immediate practical advantages appear in the mechanisms for
saving space. Relations allow us to abbreviate entries, to state axiom
schemes once and produce particular axioms only when they are needed, and
to include selection preferences in a compact form. They are in one sense
a generalization of defining formulae already present in commercial
dictionaries. Thus there is a possibility that we can extract some
relation values automatically from existing dictionaries (cf. Smith
and Maxwell 1977). From a theoretical standpoint relations provide a
model of lexical memory with some modicum of psychological reality.
Lexical-semantic relations and the theory of semantic fields suggest
a tentative approach to the problem of identifying the context, of
finding the right frame or script.
Appendix I. The Semantic Representations.

This is a somewhat more formal description of the system of seman-
tic representations described informally in Section 5 (more details may
be found in M. Evens and G. Krulee, "Semantic Representations for Ques-
tion-Answering Systems," in preparation).

The representations are currently written in a many-sorted first
order predicate calculus with individual constants and variables, function
constants, and a predicate constant.

(1) Individual constants of each sort.
    The object constants are written X1, X2, ....
    Each corresponds to a unique object in the story.

(2) An infinite list of variables of each sort.
    The object variables are written ZX1, ZX2, ....
    When we do not wish to specify the sort of a variable it
    is labelled Z1, Z2, ....

(3) Function constants Ncom and Nprop.
    These are used to name objects.

    Ncom: common nouns X objects → names
        Ncom(boy,X1)      The boy went home.
    Nprop: proper nouns X objects → names
        Nprop(Anne,X2)    Anne went home.
(4) A. Function constant R, a five-place function.
    R is used to represent clauses with noncopula verbs.

    R: noncopula verbs X A X A X A X A → relations

    where each argument A may be an object, relation, property,
    interrelation, or name.

        R(hit,X1,X2,Z1,Z2)    The boy hit Anne.

    Most of the time this example will be abbreviated R(hit,X1,X2).
Unspecified arguments are often omitted for convenience in writing.
Within the system they are represented by variables and thus will match
anything. If the story says, "Donna sang a song", the internal repre-
sentation is R(sing,X3,X4,Z1,Z2) with Nprop(Donna,X3) and Ncom(song,X4).
The question "Did Donna sing?" becomes R(sing,X3,Z3,Z4,Z5), which
matches the statement from the story.
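The matching step just described can be sketched directly: unspecified arguments are variables, and a variable matches anything (treating any symbol beginning with Z as a variable, per the conventions above; the code itself is our own illustration):

```python
# Positionwise matching of a question pattern against a stored fact;
# variables (symbols starting with 'Z') match anything.

def is_var(x):
    return isinstance(x, str) and x.startswith('Z')

def match(pattern, fact):
    """True if every position agrees or holds a variable."""
    if len(pattern) != len(fact):
        return False
    return all(is_var(p) or is_var(f) or p == f
               for p, f in zip(pattern, fact))

stored   = ('R', 'sing', 'X3', 'X4', 'Z1', 'Z2')   # Donna sang a song
question = ('R', 'sing', 'X3', 'Z3', 'Z4', 'Z5')   # Did Donna sing?
answer_yes = match(question, stored)
```

A full system would record the variable bindings rather than a bare yes/no, but the match itself is this simple.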
B. Function constant P, a three-place function.
    P is used to represent expressions with adjectives and adverbs.

    P: primitive concepts X (objects, relations, properties,
       interrelations, or names) X modifiers → properties

        P(manner,X1,kind)    The boy is kind.
                             The kind boy.

C. Function constant I, a three-place function.
    I is used to represent expressions with conjunctions and
    conjunctive adverbs.

    I: interrelational operators X (objects, relations, properties,
       interrelations, names, or modifiers) X (the same) → interrelations

        I(and,X2,X1)    Anne and the boy.
                        Anne smiled because ....
D. Function constant Prep, a two-place function.
    Prep is used to represent prepositional phrases.

    Prep: prepositions X (objects, relations, properties, or
          interrelations) → modifiers

        Prep(to,X2)    to Anne

E. Function constants + and *, two-place functions.
    These functions represent arithmetic operations.

    +: objects X objects → objects  (similarly *)

F. Function constant Not, a one-place function.

    Not: noncopula verbs → noncopula verbs
         modifiers → modifiers
         interrelational operators → interrelational operators

        R(Not(sing),X2)             Anne did not sing.
        R(sing,I(Not(or),X2,X1))    Neither Anne nor the boy sang.
(5) Predicate constant Holds.
    This applies to every sentence at the top level.
    It represents the underlying performative in the narrative.

        Holds(R1)    where R1 asserts: Anne sees the boy eat.

    Relations often appear as arguments of properties.

        P(manner,R2,hungrily)     The boy eats hungrily.
        P1 = P(manner,X1,kind)    The boy is kind.
                                  The boy is very kind.

    The notation "P1 = P(manner,X1,kind)" merely indicates that
P(manner,X1,kind) is the first property formed in the representation
of a particular story.

Both noun and verb phrase complements are represented by writing
the subordinate relation or property as an argument of the formula which
represents the main clause. Relative clauses are represented as inter-
relations. Clauses introduced by relative pronouns are ordinarily treated
as conjoined main clauses. For adverbial clauses the conjunctive adverb
serves as interrelational operator: I(when,Si,Sj) or I(because,Si,Sj).
Generic relatives and other types of generic expressions are treated
as conditionals.
(6) Lists and the Model of the Story World.
    As the system reads the story it forms a model of the world
the story describes. The representations developed here are organized
in five separate lists:

    Lists of Individuals
    Lists of Names
    Lists of Relations
    Lists of Properties
    Lists of Interrelations

We have defined the following sorts. For each sort we have constant
symbols and an infinite supply of variable symbols.
    Sort                        Constant Symbols     Variable Symbols
    names                       N1, N2, ...          ZN1, ZN2, ...
    relations                   R1, R2, ...          ZR1, ZR2, ...
    properties                  P1, P2, ...          ZP1, ZP2, ...
    interrelations              I1, I2, ...          ZI1, ZI2, ...
    common nouns                (house, dog, ...)
    proper nouns                (Anne, Sam, ...)
    noncopula verbs             (go, sing, ...)
    primitive concepts          (color, time, ...)
    modifiers                   (red, on Tuesday, ...)
    interrelational operators
    prepositions                (to, in, ...)

When we do not wish to specify the sort of a variable, we call it
Z1, Z2, ....
Assume further the standard machinery of the first order
predicate calculus:

    The logical operators:    ¬  (not)
                              ∧  (and)
                              ∨  (or)
                              ⊃  (if ... then)
    The quantifiers:          (∀X)  (for all X)
                              (∃X)  (there exists X)

References 
Apresyan, Y. D., I. A. Mel'cuk, and A. K. Zolkovsky. 1970. Semantics and
   Lexicography: Toward a New Type of Unilingual Dictionary. In Kiefer,
   1970, 1-33.

Bach, Emmon and Robert T. Harms, eds. 1968. Universals in Linguistic
   Theory. Holt, Rinehart and Winston, New York.

Becker, Joseph D. 1975. The Phrasal Lexicon. In Schank and Nash-Webber,
   1975.

Bierman, A. K. 1964. Logic, a Dialogue. Holden-Day, San Francisco.

Bierwisch, M. 1969. On Certain Problems of Semantic Representations.
   Foundations of Language 5, 153-184.

Bierwisch, Manfred and Ferenc Kiefer. 1970. Remarks on Definitions
   in Natural Language. In Kiefer, 1970.

Casagrande, J. B. and K. L. Hale. 1967. Semantic Relations in Papago
   Folk Definitions. In Hymes and Bittle, 1967, 165-196.

Chomsky, Noam. 1965. Aspects of the Theory of Syntax. MIT Press,
   Cambridge.

Cole, P. and J. Sadock. 1977. Syntax and Semantics 8: Grammatical
   Relations. Academic Press, New York.

Dale, Philip S. 1972. Language Development: Structure and Function.
   Dryden Press, Hinsdale, Illinois.

Dennison. 1972. Webster's Notebook Dictionary. The Dennison Reference
   Library, Dennison Manufacturing Company, Framingham, Massachusetts.

Downing, Pamela. 1977. On the Creation and Use of English Compound Nouns.
   Language 53.4: 810-842.

Edmundson, H. P. and M. N. Epstein. 1972. Research on Synonymy and An-
   tonymy: A Model and its Representation. University of Maryland Compu-
   ter Science Center, College Park, Maryland, TR-185.

Evens, Martha W. 1975. Semantic Representations for Question-Answering
   Systems. Unpublished Ph.D. Dissertation, Department of Computer
   Sciences, Northwestern University, Evanston, Illinois.

Evens, Martha W. and Gilbert K. Krulee. In preparation. Semantic Re-
   presentations for Question-Answering Systems.

Evens, Martha, Bonnie Litowitz, Judith Markowitz, Raoul Smith, and Oswald
   Werner. ms. Lexical-Semantic Relations: A Comparative Survey.

Fillmore, Charles. 1977. The Case for Case Reopened. In Cole and Sadock,
   1977, 59-81.

Flournoy, Catherine. 1975. Distribution and Collocation. Unpublished
   paper, English Department, University of Chicago.

Fodor, Jerry and Jerrold Katz. 1964. The Structure of Language: Readings
   in the Philosophy of Language. Prentice-Hall, Englewood Cliffs,
   New Jersey.

Garvin, Paul, ed. 1970. Cognition: a Multiple View. Spartan, New York.

Henschen, Lawrence, Ross Overbeek, and Larry Wos. 1974. A Theorem-Proving
   Language for Experimentation. CACM 17, 3, 303-314.

Hymes, D. and W. E. Bittle, eds. 1967. Studies in Southwestern Ethno-
   linguistics. Mouton, The Hague.

Jespersen, Otto. 1933. Essentials of English Grammar. Allen and Unwin,
   London.

Joshi, Aravind K. and Ralph M. Weischedel. 1973. Some Frills for the
   Modal Tic-Tac-Toe of Davies and Isard: Semantics of Predicate Com-
   plement Constructions. Third International Joint Conference on Arti-
   ficial Intelligence, Stanford, August, 1973.

Karttunen, L. 1970. On the Semantics of Complement Sentences. In
   Papers from the Sixth Regional Meeting of the Chicago Linguistics
   Society.

Katz, Jerrold J. 1966. Semantic Theory. Reprinted in Steinberg and
   Jakobovits, 1971, 297-307.

Katz, Jerrold and Jerry Fodor. 1964. In Fodor and Katz, 1964.

Kiefer, Ferenc, ed. 1970. Studies in Syntax and Semantics. Reidel,
   Dordrecht, Holland.

Kiparsky, Paul. 1975. On the Treatment of Fixed Phraseology in Genera-
   tive Grammar. Paper delivered at the Eleventh Regional Meeting of
   the Chicago Linguistics Society, April, 1975.

Kiparsky, P. and C. Kiparsky. 1970. Fact. In Bierwisch and Heidolph,
   1970.

Krauss, Ruth. 1952. A Hole is to Dig: A First Book of Definitions.
   Harper and Row, Evanston.

Lagaly, Michael, Robert A. Fox, and Anthony Bruck. 1974. Papers from
   the Tenth Regional Meeting of the Chicago Linguistics Society. Lin-
   guistics Department, University of Chicago.

Lakoff, George. 1971. On Generative Semantics. In Steinberg and
   Jakobovits, 1971.

Lawler, John M. 1972. Generic to a Fault. In Peranteau et al., 1972.

Levi, Judith. 1974. On the Alleged Idiosyncrasy of Non-Predicate NPs.
   In Lagaly et al., 1974.

Levi, Judith. 1975. The Syntax and Semantics of Non-Predicating Ad-
   jectives in English. Unpublished Ph.D. Thesis, Linguistics Depart-
   ment, University of Chicago.

Litowitz, Bonnie. 1977. Learning to Make Definitions. Journal of
   Child Language, Vol. 4, 1-16.

Ljung, Magnus. 1970. English Denominal Adjectives. Gothenburg Studies
   in English Vol. 21. Studentlitteratur, Lund.

Lyons, John. 1968. Introduction to Theoretical Linguistics. Cambridge
   University Press, London.

Martin, Richard. 1971. Logic, Language and Metaphysics. New York Uni-
   versity Press, New York.

Marx, Stephen M. 1932. Deductive Question Answering with Natural Lan-
   guage Inputs. Unpublished Ph.D. Thesis, Computer Science Department,
   Northwestern University.

McCawley, James D. 1968. The Role of Semantics in a Grammar. In Bach 
and Hams 1968, 124-169, 

McCawley , James D. 1970. Semantic Representation, In Garvin,& 1970, 
227-247. 

McCawley, Jahee D. 1975a. Lexicography and the Count-Mass Distinction. 
First Annud Coniference of the Berkeley Linguistics Sociew, 1975. 

McCawley, Jmes Do 1975b. Remarkg on the Lextcography of Performatlve 
Verbs. Pregrmt. 

McCawley, James D 1975~. Cpnversational Ipplicature and the T exicon. 
Paper Delivered at Northwestern University 5/19 /7 5. 

Miller, George and Philip Johnson-~aird. 1976. Ldhguqe and Perception. 
Haward University Press, Cambridge, Massachusetts. 

Minsky, Marvin, ed. 1968. Semantic Information Processing. MIT Press,
Cambridge.

Naroll, R. and R. Cohen, eds. 1970. Handbook of Anthropology. Natural
History Press, Garden City, New York.

Nash-Webber, Bonnie. 1974. Semantics and Speech Understanding. BBN
Report No. 2896, AI Report No. 19, Bolt Beranek and Newman, Cambridge,
Mass.

Palmer, F. R. 1976. Semantics, A New Outline. Cambridge University
Press, London.

Peranteau, Paul, Judith Levi, and Gloria Phares, eds. 1972. Papers from
the Eighth Regional Meeting of the Chicago Linguistic Society. Lin-
guistics Department, University of Chicago.

Perchonock, Norma and Oswald Werner. 1969. Navaho Systems of Classifi-
cation, Some Implications for Ethnoscience. Ethnology, Vol. 8, No. 3.

Quillian, M. Ross. 1968. Semantic Memory. In Minsky, 1968.

Raphael, Bertram. 1968. SIR, A Computer Program for Semantic Informa-
tion Retrieval. In Minsky, 1968.

Rameh, Clea, ed. 1976. Semantics, Theory and Application. Proceedings
of the Georgetown University Round Table on Language and Linguistics.

Reichenbach, Hans. 1966. Elements of Symbolic Logic. The Free Press,
New York.

Rogers, Andy. 1972. Another Look at Flip Perception Verbs. In Peranteau
et al., 1972.

Russell, Wallace A. and James H. Jenkins. 1954. The Complete Minnesota
Norms for Responses to 100 Words from the Kent-Rosanoff Word Associa-
tion Test. Studies on the Role of Language in Behavior, Technical
Report 11. Department of Psychology, University of Minnesota.

Schank, Roger C. and Kenneth M. Colby, eds. 1973. Computer Models of
Thought and Language. W. H. Freeman and Company, San Francisco.

Schank, Roger C. and Bonnie Nash-Webber. 1975. Theoretical Issues in
Natural Language Processing. ACL Workshop, Cambridge, June, 1975.

Schank, Roger C. and the Yale A.I. Project. 1975. SAM - A Story
Understander. Research Report 43, Yale University, New Haven.

Simmons, Robert F. 1970. Natural Language Question Answering. CACM
13, 15-30.

Simmons, Robert F. 1973. Semantic Networks, Their Computation and
Use for Understanding English Sentences. In Schank and Colby, 1973.

Smith, Raoul and Edward Maxwell. 1977. An English Dictionary for
Computerized Syntactic and Semantic Processing Systems. In Zampolli
and Calzolari, 1977.

Steinberg, Danny D. and Leon A. Jakobovits, eds. 1971. Semantics,
An Inter-Disciplinary Reader. Cambridge University Press, Cambridge.

Webster's New Collegiate Dictionary. 1951. Based on Webster's New In-
ternational. G and C Merriam Company, Springfield, Massachusetts.

Webster's New International Dictionary. 1958. Second Edition. G and C
Merriam, Springfield, Massachusetts.

Werner, Oswald. 1969. On the Universality of Lexical/Semantic Relations.
American Anthropological Association Annual Meeting, November, 1969.

Werner, Oswald. 1972. On the Structure of Ethnoscience. Conference
on Methods of Structural Analysis, University of North Carolina,
April, 1972.

Werner, Oswald. 1973. The Synthetic Informant Model. Proceedings of
the 9th International Conference of Anthropological and Ethnological
Sciences. Mouton, The Hague.

Werner, Oswald. 1974. Intermediate Memory, A Central Concept in Ethno-
science. Preprint, Anthropology Department, Northwestern University.

Werner, Oswald and Kenneth Y. Begishe. 1969. The Anatomical Atlas of
the Navaho. Mimeo, Anthropology Department, Northwestern University.

Werner, Oswald and Kenneth Begishe. 1970. The Taxonomic Aspect of the
Navaho Universe. Proceedings of the 39th International Congress of
Americanists, Lima, Peru, August, 1970.

Werner, Oswald and Joann Fenton. 1970. Method and Theory in Ethnoscience
and Ethnoepistemology. In Naroll and Cohen, 1970.

Werner, Oswald and Martin Topper. 1976. On the Theoretical Unity of
Ethnoscience Lexicography and Ethnoscience Ethnographies. In Rameh,
1976, 111-143.

Winograd, Terry. 1971. Procedures as a Representation of Data in a
Computer Program for Understanding Natural Language. MAC TR-84. MIT
Artificial Intelligence Laboratory. Ph.D. Thesis, MIT.

Zampolli, A. and N. Calzolari, eds. 1977. Computational and Mathematical
Linguistics. Proceedings of the International Conference on Computational
Linguistics in Pisa, 1973. Olschki, Firenze.

Zolkovsky, A. K. and Mel'cuk, I. A. 1970. Semantic Synthesis. Systems
Theory Research, 19, 170-243.
