Treating Coordination in Logic Grammars 
Veronica Dahl 1 
Computing Sciences Department 
Simon Fraser University 
Burnaby, B.C. V5A 1S6 
Michael C. McCord 2 
Computer Science Department 
University of Kentucky 
Lexington, KY 40506 
Logic grammars are grammars expressible in predicate logic. Implemented in the 
programming language Prolog, logic grammar systems have proved to be a good basis for 
natural language processing. One of the most difficult constructions for natural language 
grammars to treat is coordination (construction with conjunctions like 'and'). This paper 
describes a logic grammar formalism, modifier structure grammars (MSGs), together with an 
interpreter written in Prolog, which can handle coordination (and other natural language 
constructions) in a reasonable and general way. The system produces both syntactic 
analyses and logical forms, and problems of scoping for coordination and quantifiers are 
dealt with. The MSG formalism seems of interest in its own right (perhaps even outside 
natural language processing) because the notions of syntactic structure and semantic 
interpretation are more constrained than in many previous systems (made more implicit in 
the formalism itself), so that less burden is put on the grammar writer. 
1. Introduction 
Since the development of the Prolog programming 
language (Colmerauer 1973; Roussel 1975), logic 
programming (Kowalski 1974, 1979; Van Emden 
1975) has been applied in many different fields. In 
natural language processing, useful grammar formal- 
isms have been developed and incorporated in Prolog: 
metamorphosis grammars, due to Colmerauer (1978), 
and extraposition grammars, defined by F. Pereira 
(1981); definite clause grammars (Pereira and Warren 
1980) are a special case of metamorphosis grammars. 
The first sizable application of logic grammars was a 
Spanish/French-consultable data base system by Dahl 
(1977, 1981), which was later adapted to Portuguese 
1 Visiting in the Computer Science Department, University of 
Kentucky, during part of this research. Work partially supported by 
Canadian NSERC Grant A2436 and Simon Fraser P.R. Grant 
42406. 
2 Current address: IBM Thomas J. Watson Research Center, 
P.O. Box 218, Yorktown Heights, NY 10598. 
by L. Pereira and H. Coelho and to English by F. 
Pereira and D. Warren. Coelho (1979) developed a 
consulting system in Portuguese for library service, 
and F. Pereira and D. Warren (1980) developed a 
sizable English data base query system with facilities 
for query optimization. McCord (1982, 1981) pres- 
ented ideas for syntactic analysis and semantic inter- 
pretation in logic grammars, with application to Eng- 
lish grammar; some of these ideas are used in our 
work described here. 
Coordination (grammatical construction with the 
conjunctions 'and', 'or', 'but') has long been one of 
the most difficult natural language phenomena to han- 
dle, because it can involve such a wide range of gram- 
matical constituents (or non-constituent fragments), 
and ellipsis (or reduction) can occur in the items con- 
joined. In most grammatical frameworks, the grammar 
writer desiring to handle coordination can get by rea- 
sonably well by writing enough specific rules involving 
particular grammatical categories; but it appears that a 
proper and general treatment must recognize coordina- 
Copyright 1983 by the Association for Computational Linguistics. Permission to copy without fee all or part of this material is granted 
provided that the copies are not made for direct commercial advantage and the Journal reference and this copyright notice are included on 
the first page. To copy otherwise, or to republish, requires a fee and/or specific permission. 
0362-613X/83/020069-23 $03.00 
American Journal of Computational Linguistics, Volume 9, Number 2, April-June 1983 69 
tion as a "metagrammatical" construction, in the sense 
that metarules, general system operations, or "second- 
pass" operations such as transformations, are needed 
for its formulation. 
Perhaps the most general and powerful metagram- 
matical device for handling coordination in computa- 
tional linguistics has been the SYSCONJ facility for 
augmented transition networks (ATNs) (Woods 1973; 
Bates 1978). The ATN interpreter with this facility 
built into it can take an ATN that does not itself men- 
tion conjunctions at all, and will parse reduced coordi- 
nate constructions, which are of the form 
A X and Y B, 
for example, 
John   drove his car through   and 
 A               X 
completely demolished   a plate glass window. 
          Y                      B 
where the unreduced deep structure corresponds to 
A X B and A Y B. 
The result of the parse is this unreduced structure. 
SYSCONJ accomplishes this by treating the conjunc- 
tion as an interruption which causes the parser to back 
up in its history of the parse. Before backing up, the 
current configuration (immediately before the inter- 
ruption) is suspended for later merging. The backing 
up is done to a point when the string X was being 
parsed (this defines X), and with this configuration the 
string Y is parsed. The parsing of Y stops when a 
configuration is reached that can be merged with the 
suspended configuration, whereupon B is parsed. The 
choices made in this process can be deterministic or 
non-deterministic, and can be guided by syntactic or 
semantic heuristics. 
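The backing-up choices can be made concrete with a small illustrative sketch (Python, and a deliberate simplification: SYSCONJ operates on ATN configurations in the parse history, not on raw token lists as here). Given the words before and after the conjunction, it enumerates every candidate (A, X, Y, B) decomposition of "A X and Y B":

```python
def candidate_splits(before, after):
    """Enumerate (A, X, Y, B) decompositions of 'A X conj Y B'.

    'before' is the token list preceding the conjunction (= A X),
    'after' the token list following it (= Y B).  Each split
    corresponds to one way of backing up in the parse history.
    """
    splits = []
    for i in range(len(before) + 1):      # A = before[:i], X = before[i:]
        for j in range(len(after) + 1):   # Y = after[:j], B = after[j:]
            splits.append((before[:i], before[i:], after[:j], after[j:]))
    return splits

before = "john drove his car through".split()
after = "completely demolished a plate glass window".split()
print(len(candidate_splits(before, after)))   # (5+1) * (6+1) = 42
```

Even this crude count grows with the product of the two segment lengths, which gives a feel for the combinatorial explosion discussed next.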
There are some problems with SYSCONJ, however. 
It suffers from inefficiency, due to the combinatorial 
explosion from all the choices it makes. Because of 
this inefficiency, it in fact has not been used to a great 
extent in ATN parsing. Another problem is that it 
does not handle embedded complex structures. Fur- 
thermore, it is not clear to us that SYSCONJ offers a 
good basis for handling the scoping problems that arise 
for semantic interpretation when conjunctions interact 
with quantifiers (and other modifiers) in the sentence. 
This latter problem is discussed in detail below. 
In this paper we present a system for handling co- 
ordination in logic grammars. The system consists of 
three things: 
(1) a new formalism for logic grammars, which we 
call modifier structure grammars (MSGs), 
(2) an interpreter (or parser) for MSGs that takes all 
the responsibility for the syntactic aspects of co- 
ordination (as with SYSCONJ), and 
(3) a semantic interpretation component that prod- 
uces logical forms from the output of the parser 
and deals with scoping problems for coordination. 
The whole system is implemented in Prolog-10 
(Pereira, Pereira, and Warren 1978). 
Coordination has of course received some treat- 
ment in standard logic grammars by the writing of 
specific grammar rules. The most extensive treatment 
of this sort that we know of is in Pereira et al. (1982), 
which also deals with ellipsis. However, we are aware 
of no general, metagrammatical treatment of coordina- 
tion in logic grammars previous to ours. 
Modifier structure grammars, described in detail in 
Section 2, are true logic grammars, in that they can be 
translated (compiled) directly into Horn clause sys- 
tems, the program format for Prolog. In fact, the 
treatment of extraposition in MSGs is based on F. 
Pereira's (1981) extraposition grammars (XGs), and 
MSGs can be compiled into XGs (which in turn can be 
compiled into Horn clause systems). A new element 
in MSGs is that the formation of analysis structures of 
sentences has been made largely implicit in the gram- 
mar formalism. For previous logic grammar formal- 
isms, the formation of analyses is entirely the responsi- 
bility of the grammar writer. Compiling MSGs into 
XGs consists in making this formation of analyses 
explicit. 
Although MSGs can be compiled into XGs, it seems 
difficult to do this in a way that treats coordination 
automatically (it appears to require more metalogical 
facilities than are currently available in Prolog sys- 
tems). Therefore, we are using an interpreter for MSGs 
(written in Prolog). 
For MSGs, the analysis structure associated (by the 
system) with a sentence is called the modifier structure 
(MS) of the sentence. This structure can be consid- 
ered an annotated phrase structure tree, and in fact 
the name "modifier structure grammar" is intended to 
be parallel to "phrase structure grammar". If extrapo- 
sition and coordination are neglected, there is a 
context-free phrase structure grammar underlying an 
MSG; and the MS trees are indeed derivation trees for 
this underlying grammar, but with extra information 
attached to the nodes. 
In an MS tree, each node contains not only syntac- 
tic information but also a term called a semantic item 
(supplied in the grammar), which determines the 
node's contribution to the logical form of the sentence. 
This contribution is for the node alone, and does not 
refer to the daughters of the node, as in the approach 
of Gazdar (1981). Through their semantic items, the 
daughters of a node act as modifiers of the node, in a 
fairly traditional sense made precise below - hence the 
term "modifier structure". 
The notion of modifier structure used here and the 
semantic interpretation component, which depends on 
it, are much the same as in previous work by McCord 
(1982, 1981), especially the latter paper. But new 
elements are the notion of MSG (making modifier 
structure implicit in the grammar), the MSG interpret- 
er, with its treatment of coordination, and the specific 
rules for semantic interpretation of coordination. 
The MSG interpreter is described in Section 3. As 
indicated above, the interpreter completely handles the 
syntax of coordination. The MSG grammar itself 
should not mention conjunctions at all. The interpret- 
er has a general facility for treating certain words as 
demons (cf. Winograd 1972), and conjunctions are 
handled in this way. When a conjunction demon ap- 
pears in a sentence 
A X conj Y B, 
a process is set off which in outline is like SYSCONJ, 
in that backing up is done in the parse history in order 
to parse Y parallel to X, and B is parsed by merger 
with the state interrupted by the conjunction. Howev- 
er, our system has the following interesting features: 
(1) The MSG interpreter manipulates stacks in 
such a way that embedded coordination (and coordi- 
nation of more than two elements) and interactions 
with extraposition are handled. (Examples are given 
in the Appendix.) 
(2) The interpreter produces a modifier structure 
for the sentence 
A X conj Y B 
which remains close to the surface form, as opposed to 
the unreduced structure 
A X B conj A Y B 
(but it does show all the pertinent semantic relations 
through unification of variables). Not expanding to 
the unreduced form is important for keeping the modi- 
fier relationships straight, in particular, getting the 
right quantifier scoping. Our system analyzes the sen- 
tence 
Each man drove a car through and 
completely demolished a glass window, 
producing the logical form 
each(X,man(X),exists(Y,car(Y), 
   exists(Z,glass(Z)&window(Z), 
      drove_through(X,Y,Z) 
      & completely(demolished(X,Z))))) 
This logical form would be difficult to recover from 
the unreduced structure, because the quantified noun 
phrases are repeated in the unreduced structure, and 
the logical form that corresponds most naturally to the 
unreduced structure is not logically equivalent to the 
above logical form. 
(3) In general, the use of modifier structures and 
the associated semantic interpretation component per- 
mits a good treatment of scoping problems involving 
coordination. Examples are given below. 
(4) The system seems reasonably efficient. For 
example, the analysis of the example sentence under 
(2) above (including syntactic analysis and semantic 
interpretation) was done in 177 milliseconds. The 
reader can examine analysis times for other examples 
in the Appendix. One reason for the efficiency is just 
that the system is formulated as a logic programming 
system, and especially that it uses Prolog-10, with its 
compiler. Another reason presumably lies in the de- 
tails of the MSG interpreter. For example, the inter- 
preter does not save the complete history of the parse, 
so that the backing up necessary for coordination does 
not examine as much. 
(5) The code for the system seems short, and 
most of it is listed in this paper. 
The semantic interpretation component is described 
in Section 4, but not in complete detail since it is tak- 
en in the main from McCord (1982, 1981). Emphasis 
is on the new aspects involving semantic interpretation 
of coordinate modifiers. 
Semantic interpretation of a modifier structure tree 
is done in two stages. The first stage, called reshaping, 
deals heuristically with the well-known scoping prob- 
lem, which arises because of the discrepancies that can 
exist between (surface) syntactic relations and intend- 
ed semantic relations. Reshaping is a transformation 
of the syntactic MS tree into another MS tree with the 
(hopefully) correct modifier relations. The second 
stage takes the reshaped tree and translates it into 
logical form. The modifiers actually do their work of 
modification in this second stage, through their seman- 
tic items. 
As an example of the effects of reshaping on coor- 
dinate structures involving quantifiers, the sentence 
Each man and each woman ate an apple 
is given the logical form 
each(X,man(X),exists(Y,apple(Y),ate(X,Y))) 
& each(X,woman(X),exists(Y,apple(Y),ate(X,Y))), 
whereas the sentence 
A man and a woman sat at each table 
is given the form 
each(Y,table(Y),exists(X,man(X),sat_at(X,Y)) 
   & exists(X,woman(X),sat_at(X,Y))). 
Section 5 of the paper presents a short discussion 
of possible improvements for the system, and Section 
6 consists of concluding remarks. The Appendix to 
the paper contains a listing of most of the system, a 
sample MSG, and sample parses. The reader may wish 
to examine the sample parses at this point. 
2. Modifier Structure Grammars 
The most fundamental type of logic grammar is 
Colmerauer's (1978) metamorphosis grammar (MG). 
Grammars of this type can be viewed as generalized 
type-0 phrase structure grammars in which the gram- 
mar symbols (terminals and non-terminals) are terms 
from predicate logic. In derivations, the rewriting of 
symbol strings involves unification (Robinson 1965), 
instead of simple replacement. 
F. Pereira's (1981) extraposition grammars (XGs) 
are essentially generalizations of MGs designed to 
handle (left) extraposition. In the left-hand side of an 
XG rule, grammar symbols can be connected by the 
infix operator '...', indicating a gap. When such a rule 
is used in rewriting, the gaps appearing in the left- 
hand side may match arbitrary strings of grammar 
symbols, and then the left-hand side is replaced by the 
right-hand side followed by the symbol strings 
matched by the gaps (in the same order). For exam- 
ple, the XG rule 
a,b...c...d --> e,f 
is really a rule schema 
a,b,X,c,Y,d --> e,f,X,Y 
where X and Y stand for arbitrary grammar symbol 
strings. There is a constraint on the use of gaps in 
rewriting called the bracketing constraint, for which we 
refer to F. Pereira (1981). However, our MSG inter- 
preter includes XG interpretation, so the use of gaps is 
in fact completely specified below. 
In XG rules, symbols on the left-hand side follow- 
ing gaps represent left-extraposed elements. For ex- 
ample, the extraposition of noun phrases to the front 
of relative clauses (with replacement by relative pro- 
nouns) can be handled by the XG rules: 
relative_clause --> rel_marker, sentence. 
rel_marker...trace --> rel_pronoun. 
nounphrase --> trace. 
where 'trace' marks the position out of which the noun 
phrase is being moved, and is used by the second rule 
above in conjunction with a relative marker to produce 
(or analyze) a relative pronoun. 
Pereira's implementation of XGs is a Prolog pro- 
gram that compiles XGs to Horn clause systems, which 
in turn can be run by Prolog for parsing sentences. In 
the compiled systems, extraposition is handled by the 
manipulation of a stack called the extraposition list, 
which is similar to the HOLD list for ATN's (Woods 
1973). Elements (like 'trace' above) on the left-hand 
sides of XG rules following the initial symbol are in 
effect put on the extraposition list during parsing, and 
can be taken off when they are required later by the 
right-hand side of another rule. Our MSG interpreter 
uses a reformulation of this same method. 
Since the grammar symbols in XGs (and MGs) can 
be arbitrary terms from predicate logic, they can con- 
tain arguments. These arguments can be used to hold 
useful information such as selectional restrictions and 
analysis structures. For example, in the rule 
sentence(s(Subj,Pred)) --> 
   noun_phrase(Subj),verb_phrase(Pred) 
each non-terminal is augmented with an argument 
representing a syntactic structure. (Here, following 
Prolog-10 syntax, the capitalized items are variables.) 
Manipulating such arguments is the only way of get- 
ting analysis structures in XGs. As indicated in the 
Introduction, a new ingredient in MSGs over XGs is to 
automate this process, or to make it implicit in the 
grammar. 
MSG rules are of two forms. The basic form is 
A:Sem --> B. 
where A-->B is an XG rule and Sem is a term called a 
semantic item, which plays a role in the semantic inter- 
pretation of a phrase analyzed by application of the 
rule. The semantic item is (as in McCord 1981) of 
the form 
Operator-LogicalForm 
where, roughly, LogicalForm is the part of the logical 
form of the sentence contributed by the rule, and Op- 
erator determines the way this partial structure com- 
bines with others. Details on semantic items are post- 
poned to Section 4 (on semantic interpretation). Ac- 
tually, the current section and Section 3 deal mainly 
with syntactic constructions which are independent of 
the form of semantic items. 
The second type of MSG rule looks exactly like an 
XG rule (no Sem is exhibited), but the system takes 
care of inserting a special "trivial" Sem, &-true. (Here 
the '&' is the operator for left-conjoining, described in 
Section 4.) Most MSG rules for higher (non-lexical) 
types of phrases are of this type, but not all of them 
are. 
A simple example of an MSG is shown in Figure 1. 
Following the notational conventions of XGs (as well 
as the simpler definite clause grammars built into 
Prolog-10), we indicate terminal symbols by enclosing 
them in brackets [ ]. Rules can contain tests on their 
right-hand sides, enclosed in braces {}, which are Pro- 
log goals. In this example, the tests are calls to the 
lexicon, shown after the grammar rules, which consists 
of assertions (non-conditional Horn clauses). 
This grammar, together with the semantic interpre- 
tation component, will handle sentences like the fol- 
lowing, producing the indicated logical forms: 
sent --> nounph(X),verbph(X). 
nounph(X) --> det(X),noun(X). 
nounph(X) --> proper_noun(X). 
verbph(X) --> verb(X,Y),nounph(Y). 
det(X):Sem --> [D],{d(D,X,Sem)}. 
noun(X):&-Pred --> [N],{n(N,X,Pred)}. 
proper_noun(N) --> [N],{npr(N)}. 
verb(X,Y):&-Pred --> [V],{v(V,X,Y,Pred)}. 

/* Lexical entries */ 

n(man,X,man(X)).  n(woman,X,woman(X)). 
npr(john).  npr(mary). 
v(saw,X,Y,saw(X,Y)).  v(heard,X,Y,heard(X,Y)). 
d(each,X,P/Q-each(X,Q,P)). 
d(a,X,P/Q-exists(X,Q,P)). 

Figure 1. A simple MSG with lexicon. 
John saw Mary. 
saw(john,mary). 
John heard each woman. 
each(Y,woman(Y),heard(john,Y)). 
Each man saw a woman. 
each(X,man(X),exists(Y,woman(Y),saw(X,Y))). 
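For concreteness, the proper-noun fragment of this grammar can be mimicked by a small recursive-descent sketch (Python rather than Prolog, purely illustrative; it covers only proper nouns and transitive verbs, and builds the verb's predication directly instead of going through semantic items and the interpreter):

```python
PROPER_NOUNS = {"john", "mary"}
VERBS = {"saw", "heard"}

def parse_sent(words):
    """sent --> nounph(X),verbph(X): parse a subject, then a verb
    phrase, plugging the subject into the verb's predication."""
    subj, rest = parse_nounph(words)
    pred, rest = parse_verbph(subj, rest)
    if rest:
        raise ValueError("trailing words: %r" % rest)
    return pred

def parse_nounph(words):
    """nounph(X) --> proper_noun(X) (the only nounph rule kept here)."""
    if words and words[0] in PROPER_NOUNS:
        return words[0], words[1:]
    raise ValueError("expected a proper noun")

def parse_verbph(subj, words):
    """verbph(X) --> verb(X,Y),nounph(Y)."""
    if words and words[0] in VERBS:
        obj, rest = parse_nounph(words[1:])
        return "%s(%s,%s)" % (words[0], subj, obj), rest
    raise ValueError("expected a verb")

print(parse_sent("john saw mary".split()))   # saw(john,mary)
```

Sentences with determiners ('each', 'a') need the quantifier-building machinery of the semantic items, which this toy omits.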
A larger example MSG is listed in the Appendix. 
This grammar includes rules dealing with extraposition, 
and the lexicon contains rules used by the MSG inter- 
preter in handling coordination. 
Now let us look at the formation of syntactic struc- 
tures by the MSG system. As stated in the Introduc- 
tion, syntactic structures are trees called modifier 
structure (MS) trees. 
Suppose that a phrase is analyzed by application of 
an MSG rule 
A:Sem --> B. 
and further rule applications in an MSG. (The Sem 
may be explicit or supplied by the system for the sec- 
ond type of rule.) Then the modifier structure of the 
phrase is a term of the form 
syn(NT,Sem,Mods) 
where NT is the leading symbol (a non-terminal) in A 
and where Mods is the list of modifier structures of 
the subphrases analyzed with the right-hand side B of 
the rule. The 'syn' structure is considered a tree node, 
labeled with the two items NT and Sem, and having 
daughter list Mods. 
As an example, the MS tree for the sentence "Each 
man saw a woman" produced by the grammar in Fig- 
ure 1 is shown in Figure 2. This tree is printed by 
displaying the first two fields of a 'syn' on one line 
and then recursively displaying the daughters, indented 
a fixed amount. 
sent &-true 
   nounph(X) &-true 
      det(X) P/Q-each(X,Q,P) 
      noun(X) &-man(X) 
   verbph(X) &-true 
      verb(X,Y) &-saw(X,Y) 
      nounph(Y) &-true 
         det(Y) R/S-exists(Y,S,R) 
         noun(Y) &-woman(Y) 

Figure 2. MS tree for "Each man saw a woman". 
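The display routine just described can be sketched as follows (Python, with syn(NT,Sem,Mods) nodes encoded as (NT, Sem, Mods) tuples; an illustrative reconstruction, not the authors' Prolog code):

```python
def ms_lines(node, depth=0, indent=3):
    """Render an MS tree as indented lines: the NT and Sem fields of
    a syn(NT,Sem,Mods) node on one line, then the daughters,
    recursively, indented a fixed amount."""
    nt, sem, mods = node
    lines = [" " * (indent * depth) + nt + " " + sem]
    for daughter in mods:
        lines.extend(ms_lines(daughter, depth + 1, indent))
    return lines

# MS tree for "Each man saw a woman" (Figure 2)
tree = ("sent", "&-true",
        [("nounph(X)", "&-true",
          [("det(X)", "P/Q-each(X,Q,P)", []),
           ("noun(X)", "&-man(X)", [])]),
         ("verbph(X)", "&-true",
          [("verb(X,Y)", "&-saw(X,Y)", []),
           ("nounph(Y)", "&-true",
            [("det(Y)", "R/S-exists(Y,S,R)", []),
             ("noun(Y)", "&-woman(Y)", [])])])])
print("\n".join(ms_lines(tree)))
```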
Let us now indicate briefly how MSGs can be com- 
piled into XGs so that these MS trees are produced as 
analyses. This method of compiling does not handle 
coordination metagrammatically (as does the interpret- 
er), but it does seem to be of general interest for 
MSGs. 
In the compiled XG version of an MSG, each non- 
terminal is given two additional arguments, added, say, 
at the end. Each argument holds a list of modifiers. 
If the original non-terminal is nt(X1,...,Xn), the new 
non-terminal will look like 
nt(X1,...,Xn,Mods1,Mods2). 
When this non-terminal is expanded by a non-trivial 
rule, then Modsl will differ from Mods2 by having 
one additional modifier on the front, namely the modi- 
fier contributed by the rule application. A rule is 
trivial if its right-hand side is empty. When a trivial 
rule is used to expand 'nt' above, Modsl will equal 
Mods2. 
As an example of rule translation, the first rule in 
Figure 1 is translated to 
sent([syn(sent,&-true,Mods1)|Mods],Mods) --> 
   nounph(X,Mods1,Mods2),verbph(X,Mods2,[]). 
(Here [X|Y] denotes the list with first member X and 
remainder Y.) 
Any non-terminal on the left-hand side of an MSG 
besides the leading non-terminal is given a pair of 
identical Mods arguments (because it contributes no 
modifier by itself). For example, the MSG rule 
rel_mk(X)...trace(X) --> rel_pron. 
would translate to 
rel_mk(X,[syn(rel_mk(X),&-true,Mods1)|Mods],Mods) 
   ...trace(X,Mods2,Mods2) --> rel_pron(Mods1,[]). 
For parsing a sentence with respect to the grammar 
in Figure 1, one would use 
sent([MST],[]) 
as start symbol (with MST unknown) and the parse 
would bind MST to the modifier structure tree of the 
sentence. 
Pairs of list arguments manipulated in the way just 
outlined are called "difference lists", and the tech- 
nique is common in logic programming. One part of 
compiling MGs to Horn clauses is the addition to each 
non-terminal of an argument pair for the terminal 
strings being analyzed. Pereira's compilation of XGs 
to Horn clauses involves one more argument pair for 
extraposition lists. So the compilation of MSGs to 
Horn clauses involves three argument pairs in all. In 
the MSG interpreter, described in the next section, 
only a single argument (not a pair) is needed for each 
of these three lists. 
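For readers unfamiliar with the technique: a difference list represents a partial list whose tail is left open, so that two lists can be concatenated by one unification step rather than by copying. Prolog's open tails have no direct Python analogue, but the closely related functional encoding (a list represented as a function from suffixes to complete lists) conveys the idea; this sketch is illustrative only:

```python
# A "difference list" as a function: given the eventual tail,
# return the full list.  Building a concatenation is a single
# function composition; the list is materialized only once, when
# the open tail is finally closed off.

def dl(items):
    """Lift an ordinary list into difference-list form."""
    return lambda tail: list(items) + tail

def dl_append(d1, d2):
    """Concatenate two difference lists (constant-time to build)."""
    return lambda tail: d1(d2(tail))

def dl_to_list(d):
    """Close off the open tail with the empty list."""
    return d([])

mods1 = dl(["det-node", "noun-node"])
mods2 = dl(["verb-node"])
print(dl_to_list(dl_append(mods1, mods2)))
# ['det-node', 'noun-node', 'verb-node']
```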
3. The MSG Interpreter and the Syntax of 
Coordination 
Our MSG processor actually has a bit of compiler in it, 
because there is a preprocessor that translates MSG 
rules into a form more convenient for the interpreter 
to use. 
An MSG rule 
A:Sem --> B 
is preprocessed into a term 
rule(NT,Ext,Sem,B1) 
where NT is the leading non-terminal in A, Ext is the 
conversion of the remainder of A into an extraposition 
list, and B1 is the conversion of B to list form. 
The notion and representation of extraposition lists 
used here are just the same as F. Pereira's (1981). A 
node in such a list is of the form 
x(Context,Type,Symbol,Ext) 
where Context is either 'gap' or 'nogap', Type is either 
'terminal' or 'nonterminal', Symbol is a grammar sym- 
bol, and Ext is the remainder of the list. We denote 
the empty extraposition list by 'nil' (Pereira used [ ]). 
The "left-hand side remainder" in a grammar rule 
(the part after the leading symbol) is converted to an 
extraposition list in a straightforward way, with one 
node for each symbol in the remainder. The Context 
says whether the symbol has a gap preceding it, and 
the remaining fields of an 'x' node have the obvious 
meaning. For the rule 
a,[b]...c --> d 
the extraposition list would be 
x(nogap,terminal,b,x(gap,nonterminal,c,nil)). 
The right-hand side of an MSG rule is preprocessed 
to a (simple) list form in the obvious way. Thus, a 
right-hand side (d,e,f) is converted to the list [d,e,f], 
and a right-hand side with a single element d is 
converted to the list [d]. 
As a complete example, the MSG rule 
a...b:&-p --> [c],{d},e 
is converted to 
rule(a,x(gap,nonterminal,b,nil),&-p,[[c],{d},e]). 
If the MSG rule exhibits no semantic item, then the 
preprocessor supplies the trivial item &-true. 
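The preprocessing step is a mechanical term transformation; the following sketch reproduces it in Python (the tuple encodings of the 'x' nodes and the 'rule' term are our own, and 'nil' is a plain string standing for the empty extraposition list):

```python
def to_ext_list(remainder):
    """Convert a left-hand-side remainder -- a list of
    (context, type, symbol) triples -- into a nested extraposition
    list of x(Context,Type,Symbol,Ext) nodes, ending in 'nil'."""
    ext = "nil"
    for context, typ, symbol in reversed(remainder):
        ext = ("x", context, typ, symbol, ext)
    return ext

# The rule  a...b : &-p --> [c],{d},e  becomes
# rule(a, x(gap,nonterminal,b,nil), &-p, [[c],{d},e]):
lhs_remainder = [("gap", "nonterminal", "b")]
rule = ("rule", "a", to_ext_list(lhs_remainder), "&-p",
        [["c"], "{d}", "e"])
print(rule[2])   # ('x', 'gap', 'nonterminal', 'b', 'nil')
```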
The 'rule' forms of the rules in an MSG are stored 
as assertions in the Prolog data base, to be used by the 
interpreter. One can understand 
rule(NT,Ext,Sem,B1) 
as the assertion: "There is a rule for the non-terminal 
NT with extraposition list Ext, etc." 
The rule preprocessor is listed at the beginning of 
the Appendix. 
Now let us look at the interpreter itself, which is 
listed after the preprocessor in the Appendix. 
The top-level procedure is 
parse(String,NonTerminal,Syn) 
which takes a String of terminals and attempts to parse 
it as a phrase of type NonTerminal, with the syntactic 
structure Syn. We should comment that 'parse' de- 
fines a top-down parser. 
This procedure calls the main working procedure 
prs(BodyList,String,Mods,Par,Mer,Ext) 
which parses String against a list BodyList of goals of 
the type that can appear in the right-hand side (the 
body) of a rule. The list of resulting syntactic struc- 
tures is Mods (one modifier for each non-trivial ex- 
pansion of a non-terminal in BodyList). The remain- 
ing three arguments of 'prs' are for stacks called the 
parent stack, the merge stack, and the extraposition 
list. These are initialized to 'nil' in the call of 'parse' 
to 'prs'. 
The parent stack serves two purposes. One is to 
control the recursion in the normal working of the 
parser. (The parser is much like an interpreter for a 
programming language - in fact, for a specialized ver- 
sion of Prolog itself.) The other purpose is to provide 
information for the coordination demon, when it backs 
up in (part of) the parse history. 
A non-empty parent stack is a term of the form 
parent(BodyList,Mods,Par) 
where BodyList is a body list, Mods is a modifier list, 
and Par is again a parent stack. A new level gets 
pushed onto the parent stack by the sixth rule for 'prs' 
and the ancillary procedure 'prspush'. This happens 
when 'prs' is looking at a body list of the form 
[NT|BL], where the initial element NT is a non- 
terminal that can be expanded by a 'rule' entry. If 
that rule is trivial (if its own body is empty), then no 
actual push is done, and 'prs' continues with the re- 
maining current body list BL. Otherwise, 'prspush' 
goes to a lower level, to parse against the body of the 
expanding rule. The items \[NTIBL\] and Mods from 
the higher level are saved on the parent stack (Mods is 
a variable for the remaining modifiers to be found on 
the higher level). 
Note that the body list [NT|BL] saved in the first 
field of the 'parent' term is more than is needed for 
managing the recursive return to the higher level. 
Only the remainder, BL, is needed for this, because 
NT has already been used in the parse. In fact, the 
rule that pops to the higher level (the eighth rule for 
'prs') does ignore NT in doing the pop. The extra 
information, NT, is saved for the second purpose of 
the parent stack, the backing up by the coordination 
demon. 
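The two uses of the saved body list can be sketched as follows (Python, an illustrative reconstruction; the actual 'prs' and 'prspush' clauses are listed in the Appendix):

```python
# A parent stack is either "nil" or a parent(BodyList, Mods, Par)
# node, here encoded as a nested tuple.

def push_parent(body_list, mods, par):
    """Suspend the current level before descending into an expanding
    rule's body: save the whole body list [NT|BL] (NT included) and
    the level's open modifier list."""
    return ("parent", body_list, mods, par)

def pop_parent(par):
    """Resume the suspended level.  NT has already been consumed by
    the parse, so only the remainder BL is resumed; NT is kept on
    the stack solely so the coordination demon can back up through
    it later."""
    _, (nt, *bl), mods, outer = par
    return bl, mods, outer

# Expanding nounph(X) inside the body list [nounph(X), verbph(X)]:
par = push_parent(["nounph(X)", "verbph(X)"], "Mods-variable", "nil")
bl, mods, par = pop_parent(par)
print(bl)   # ['verbph(X)']
```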
Before going into the details of coordination, 
though, let us continue with the description of the 
"normal" working of the parser. 
In normal parsing, there is exactly one place where 
a new 'syn' node is added to the MS trees being built. 
This is in the second rule for 'prspush', which handles 
non-trivial rule expansions. The addition of this node 
is in accordance with the definition of modifier struc- 
ture given in the preceding section. 
The pushing rule of 'prs' (the sixth rule) also ma- 
nipulates the extraposition stack. The extraposition 
component of the expanding rule is concatenated onto 
the front of the main extraposition list (being carried 
in the last argument of 'prs'). This is analogous to a 
HOLD operation in ATNs. Of course, if no extraposi- 
tion is shown in the rule, the extraposition list will 
remain the same. 
The third and fourth rules for 'prs' handle terminals 
in the body list. The first of these tries to remove the 
terminal from the string argument, and the second 
tries to remove it from the extraposition list (as in a 
VIR arc for ATNs). 
The seventh 'prs' rule tries to remove a non- 
terminal from the extraposition list (again, like a VIR 
arc). 
The last 'prs' rule is the termination condition for 
the parse. It just requires that all arguments be null. 
Now we can discuss coordination demons. All the 
rest of the interpreter rules deal with these. 
The first 'prs' rule is the one that notices demon 
words D. It calls a procedure 'demon', passing D as 
the first argument and all the rest of the information it 
has in other arguments. 'demon' takes control of the 
rest of the parse. In the listed interpreter there is only 
one 'demon' rule, one that tests whether D is a con- 
junction. It does this with the goal 
conjunction(D,Cat,Sem), 
which gives the syntactic category Cat for the con- 
junction D, and the semantic item Sem for a new mod- 
ifier node to be constructed for the right conjunct. 
The lexicon contains 'conjunction' entries as asser- 
tions. 
For understanding what the conjunction demon 
does, it is best to look at an example, as it would be 
parsed for the grammar in the Appendix. We will use 
the specific notation (for variables, etc.) given in the 
demon rule, and the reader should refer to that rule in 
the Appendix. It should be borne in mind that Prolog 
is non-deterministic; we will only state what happens 
on the successful path through the choices made. 
The example is 
John saw and Mary heard the train. 
When the demon for 'and' is called, the current body 
list is 
BL = [comps([obj-Y])]. 
The non-terminal comps(Comps) looks for a list 
Comps of complements, and in this case there is to be 
one complement, an object noun phrase. The MS tree 
constructed so far is 
sent &-true 
   nounph(X,def) @P-def(X,X=john,P) 
   verbph(X) &-true 
      verb(X,[obj-Y]) &-saw(X,Y) 
      | Mods 
   | Mods2 
Here the entry | Mods in the last daughter position for 
the verb phrase indicates further modifiers on that 
level to be put in the unbound variable Mods. (This is 
explicitly the same variable 'Mods' used in the demon 
rule.) Similarly, I Mods2 represents the remaining 
modifiers for 'sent' node. The variable Mods2 does 
not appear in the 'demon' rule, but will be referred to 
below. 
The parent stack Par available to the demon has 
two levels, and the two body lists are 
[verbph(X)],
[sent].
(Recall that we are describing the state of affairs in 
the successful path through the search space.) The 
recursive procedure 'backup' is called, which can look 
any number of levels through the parent stack. It goes 
to the second level, where the body list is [sent].
(Choosing the first level with [verbph(X)] would be
appropriate for the sentence "John saw and barely
heard the train".) In passing up a level, 'backup'
requires that the body list skipped over must be
'satisfied', which means that any pending goals in the 
body list (members of its tail) are satisfiable by trivial rules.
American Journal of Computational Linguistics, Volume 9, Number 2, April-June 1983 75
Veronica Dahl and Michael C. McCord Treating Coordination in Logic Grammars
When 'backup' does pass up a level, the modifier
list for that level is closed off. Thus Mods in the
tree displayed above will be bound to the empty list. 
(There are no more modifiers for that 'verbph' node.) 
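As a rough illustration (not the authors' Prolog from the Appendix), the behavior of 'backup' can be sketched in Python, with Prolog's nondeterminism modelled by a generator. The level dictionaries, their keys, and the 'satisfiable' test are assumptions of the sketch.

```python
def backup(parent_stack, satisfiable):
    # Each level is a dict holding its body list, the pending goals in
    # that body list's tail, and an open modifier list. Yield each level
    # the parse can back up to; to pass beyond a level, its pending goals
    # must be satisfiable by trivial rules, and its modifier list is then
    # closed off (bound to the empty list).
    for i, level in enumerate(parent_stack):
        if i > 0:
            prev = parent_stack[i - 1]
            if not all(satisfiable(goal) for goal in prev['pending']):
                return
            prev['mods'] = []   # close the skipped level's modifier list
        yield level

# For "John saw and Mary heard the train" the stack has a [verbph(X)]
# level and a [sent] level; both are candidate back-up points.
stack = [{'body': ['verbph(X)'], 'pending': [], 'mods': None},
         {'body': ['sent'],      'pending': [], 'mods': None}]
levels = [level['body'] for level in backup(stack, lambda goal: True)]
# levels == [['verbph(X)'], ['sent']]; stack[0]['mods'] is now closed ([])
```

On the successful path for the example, the second candidate (the [sent] level) is the one taken, and the [verbph(X)] level's modifier list has been closed in passing.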
As a single remaining daughter for the level backed 
up to, a new 'syn' node for the right conjunct is at- 
tached by the demon. This means binding the variable 
Mods2 in the above tree to the list consisting of this 
node. Now our tree looks like 
sent g-true
   nounph(X,def) @P-def(X,X=john,P)
   verbph(X) g-true
      verb(X,[obj-Y]) g-saw(X,Y)
   conj(and) Q*R-(Q&R)
      | Mods0
The variable Mods0 is to contain the list of modifiers 
for the conjunction node. This list will turn out to 
have a single element, a new 'sent' node for the re- 
mainder of the sentence, "Mary heard the train". 
Backing up to the [sent] level makes the non-
terminal NT=sent available to the demon, and the
parent stack Par1 at the [sent] level. The demon then
continues the parse by calling 'prs' with body list
[NT]=[sent], but with information pushed onto the
merge stack. The main item stored on the merge stack
is the body list BL=[comps([obj-Y])], which was
pending at the time of interruption by the conjunction.
The items Par1, Ext, and of course the old merge
stack Mer are also pushed on.
So now we continue parsing "Mary heard the 
train", but with another kind of demon lurking, the 
interrupted goal BL. The second rule for 'prs' notices 
this demon. When we are parsing and come to a goal 
that can be unified with BL, then we can try merging. 
This happens when we are looking for the comple- 
ments of "heard". This unification includes the unifi- 
cation of the object variable Y of "saw" with the ob- 
ject variable of "heard", so that "the train" will logi- 
cally be the object of "saw" as well as "heard". 
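The merging step is ordinary unification. A toy structural unifier makes the point concrete (a Python sketch with tuples as terms and strings beginning with '?' as variables; none of this is the paper's implementation): unifying the pending goal with the new one identifies the two object variables.

```python
def walk(t, env):
    # follow variable bindings to their current value
    while isinstance(t, str) and t.startswith('?') and t in env:
        t = env[t]
    return t

def unify(a, b, env):
    # structural unification over tuples; returns an extended binding
    # environment, or None on failure
    a, b = walk(a, env), walk(b, env)
    if a == b:
        return env
    if isinstance(a, str) and a.startswith('?'):
        return {**env, a: b}
    if isinstance(b, str) and b.startswith('?'):
        return {**env, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            env = unify(x, y, env)
            if env is None:
                return None
        return env
    return None

left  = ('comps', ('obj', '?Y'))   # goal pending from "saw"
right = ('comps', ('obj', '?Z'))   # goal arising while parsing "heard"
env = unify(left, right, {})
# env == {'?Y': '?Z'}: the two object variables are now one, so
# "the train" fills the object role of both verbs
```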
The procedure 'cutoff' called by the second 'prs' 
rule requires that no new unsatisfied goals have devel- 
oped in parsing the right conjunct (aside from the goal 
BL to be merged) and also closes off modifier lists in 
the local parent stack Par for the right conjunct. 
Then the merged parse is continued by a call to 
'prs', with BL as goal and with the parent stack, merge 
stack, and extraposition list popped from the merge 
stack. When this is completed, our MS tree is as 
shown in Figure 3. 
The meanings of the semantic items used in this MS 
tree, and their use in producing the logical form, will 
be explained in the next section; but it is worth stating 
now what the resulting logical form is: 
def(Y,train(Y),saw(john,Y)&heard(mary,Y)).
The reader may examine the analyses produced for 
other examples listed in the Appendix. 
sent g-true
   nounph(X,def) @P-def(X,X=john,P)
   verbph(X) g-true
      verb(X,[obj-Y]) g-saw(X,Y)
   conj(and) Q*R-(Q&R)
      sent g-true
         nounph(Z,def) @S-def(Z,Z=mary,S)
         verbph(Z) g-true
            comps([obj-Y]) g-true
               comp(obj-Y) g-true
                  nounph(Y,def) g-true
                     det(Y,def) T/U-def(Y,U,T)
                     noun(Y,[ ]) g-train(Y)
Figure 3. MS tree for
"John saw and Mary heard the train."
4. Semantic Interpretation and Coordination 
The overall idea of the semantic interpretation 
component was given in the Introduction. The rule 
system is listed completely in the Appendix. This 
system is taken essentially from McCord (1981), with 
some rules deleted (rules dealing with focus), and 
some rules added for coordination. 
For a discussion of MS tree reshaping as a means of 
handling scoping of modifiers, we refer to McCord 
(1982, 1981). Also, the reader may examine the ex- 
amples of reshaped trees given in the Appendix. 
We will, however, review the second stage of se- 
mantic interpretation, because the new rules for coor- 
dination are added here and because it is more central 
for understanding modifier structure. In this stage, the 
reshaped MS tree is translated to logical form, and the 
top-level procedure for this is 'translate'. This proce- 
dure actually works only with the semantic-item com- 
ponents of MS tree nodes. (Reshaping uses the first, 
syntactic components.) 
One semantic item can combine with (or modify) a 
second semantic item to produce a third semantic item. 
'translate' uses these combining operations in a 
straightforward recursive fashion to produce the logi- 
cal form of an MS tree. The ancillary procedure 
('transmod') that actually does the recursion produces 
complete semantic items as translations, not just logi- 
cal forms. For the top-level result, the operator com- 
ponent is thrown away. 'transmod' works simply as 
follows: The daughters (modifiers) of a tree node N 
are translated recursively (to semantic items) and 
these items cumulatively modify the semantic item of 
N, the leftmost acting as the outermost modifier, etc. 
So the heart of the translation process is in the 
rules that say how semantic items can combine with 
other semantic items. These are rules for the proce- 
dure 
trans(Sem0,Sem1,Sem2)
which says that Sem0 combines with (modifies) Sem1
to produce Sem2. In the typical case, this combination
depends only on the operator component of Sem0;
but there are exceptional cases where it depends as
well on the operator in Sem1. Furthermore, 'trans' is
free to create a new operator for the result, Sem2,
which can affect later operations. This happens with
coordinate modifiers. We often speak of Sem0
"operating on" Sem1, but "combining with" is the
more accurate term generally.
The only operators appearing in the small sample 
grammar in the Appendix are of the form g, @P, P/Q, 
and P*Q. Here P and Q are variables standing for 
logical forms. The listing for 'trans' in the Appendix 
includes only rules for these operators and their auxili- 
aries, although larger grammars involve other opera- 
tors. We will elucidate the effects of these four opera- 
tors with examples. The last one, P*Q, is used for
coordination. 
The operator 'g' is for left-conjoining. When 
g-man(X) operates on g-see(X,Y), the result is 
g-man(X)&see(X,Y). 
The operator @P is used for substitutions in its 
associated logical form. When @P-not(P) operates on 
g-laugh(X), the result is g-not(laugh(X)). 
The operator P/Q is used for forms that require 
two substitutions. When 
P/Q-each(X,Q,P) 
operates on g-man(X), the result is 
@P-each(X,man(X),P), 
which in turn can operate by substituting for P. 
Notice that @P and P/Q are similar to lambda(P)
and lambda(Q)lambda(P) respectively. But they also 
interact with other operators in the system in specific 
ways. 
To show these first three operators working togeth- 
er, let us look at the MS tree that would be produced 
for the sentence "Each man laughed". (Reshaping 
leaves this tree unaltered.) We throw away the syn- 
tactic fields in the tree nodes (working only with the 
semantic items), and show the successive stages in 
producing the logical form in Figure 4. In following 
the steps in Figure 4, the reader should refer to the 
'trans' rules in the Appendix, which are numbered for 
reference here. In each step of the translation, a node 
combines with its parent, and the 'trans' rule used to 
do this is indicated. 
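To make the recursion concrete, here is a minimal Python sketch of 'trans' and 'transmod' for these three operators (strings stand in for Prolog terms, and substitution is textual; the rule numbers in the comments follow the Appendix listing; this is an illustration, not the authors' implementation). It reproduces the derivation of Figure 4.

```python
# A semantic item is a pair (operator, logical-form-as-string).

def trans(sem0, sem1):
    # sem0 combines with (modifies) sem1, for the operators g, @P, P/Q
    op0, lf0 = sem0
    op1, lf1 = sem1
    if op0 == 'g':                       # left-conjoining (Rule 7)
        return (op1, lf0 if lf1 == 'true' else lf0 + '&' + lf1)
    if op0.startswith('@'):              # one substitution (Rule 6)
        return (op1, lf0.replace(op0[1:], lf1))
    if '/' in op0:                       # two substitutions (Rule 5)
        p, q = op0.split('/')
        return ('@' + p, lf0.replace(q, lf1))
    raise ValueError('operator not handled: ' + op0)

def transmod(sem, daughters):
    # the daughters' items cumulatively modify the node's item, the
    # rightmost combining first, so the leftmost acts as the outermost
    for d_sem, d_daughters in reversed(daughters):
        sem = trans(transmod(d_sem, d_daughters), sem)
    return sem

# The MS tree of Figure 4, step (1), for "Each man laughed":
nounph = (('g', 'true'), [(('P/Q', 'each(X,Q,P)'), []),
                          (('g', 'man(X)'), [])])
verbph = (('g', 'true'), [(('g', 'laughed(X)'), [])])
top = transmod(('g', 'true'), [nounph, verbph])
# top == ('g', 'each(X,man(X),laughed(X))'), as in step (6) of Figure 4
```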
(1) g-true
       g-true
          P/Q-each(X,Q,P)
          g-man(X)
       g-true
          g-laughed(X)        (Rule 7 applies)
(2) g-true
       g-true
          P/Q-each(X,Q,P)
          g-man(X)
       g-laughed(X)           (Rule 7)
(3) g-laughed(X)
       g-true
          P/Q-each(X,Q,P)
          g-man(X)            (Rule 7)
(4) g-laughed(X)
       g-man(X)
          P/Q-each(X,Q,P)     (Rule 5)
(5) g-laughed(X)
       @P-each(X,man(X),P)    (Rule 6)
(6) g-each(X,man(X),laughed(X)).
Figure 4. The working of 'translate'.
The operator P*Q appears in coordinate modifiers.
The first four 'trans' rules deal with it, and they create
auxiliary operators. The following example will make
clear how these are manipulated. The sentence is
"Each man ate an apple and a pear."
This example is shown in the Appendix, with the ini- 
tial syntactic analysis and the reshaped tree. In the 
reshaped tree, the 'sent' node has three daughters, the 
first being for the simple noun phrase "each man", the 
second for the conjoined noun phrase "an apple and a 
pear", and the third for the verb phrase with the ob- 
ject removed. 
If we perform all the modifications that are possible 
in this tree without involving the coordination opera- 
tor, and if we remove the syntactic fields, then the tree 
looks like the following: 
g-ate(X,Y)
   @P-each(X,man(X),P)
   g-true
      Q/R-exists(Y,R,Q)
      g-apple(Y)
      S*T-(S&T)
         @U-exists(Y,pear(Y),U)
Now the first 'trans' rule can apply to the lowest pair 
of nodes, and the tree becomes: 
g-ate(X,Y)
   @P-each(X,man(X),P)
   g-true
      Q/R-exists(Y,R,Q)
      g-apple(Y)
      cbase1(@U-exists(Y,pear(Y),U),S,T)-(S&T)
We have saved the modifier for "a pear" in the first 
argument of the 'cbase1' operator. Next, this item
operates on the g-true node, by application of the 
second 'trans' rule, and we get the tree 
g-ate(X,Y)
   @P-each(X,man(X),P)
   cbase2(g,@U-exists(Y,pear(Y),U),S,T,S&T)-true
      Q/R-exists(Y,R,Q)
      g-apple(Y)
Now, the third 'trans' rule is applied twice, to the two 
daughters of the 'cbase2' node, and we get 
g-ate(X,Y)
   @P-each(X,man(X),P)
   cbase2(@Q,@U-exists(Y,pear(Y),U),S,T,S&T)-exists(Y,apple(Y),Q)
Then, as the last step with coordination operators, the 
fourth 'trans' rule is applied to let the 'cbase2' node 
operate on the top node of the tree. This involves two 
recursive calls to 'trans', in which the two conjunct 
noun phrases operate on the material in the scope of 
the coordinate node. (In this case, the material in the 
scope is ate(X,Y).) This material gets duplicated, be- 
cause of the double application to it. The resulting 
tree now is 
g-exists(Y,apple(Y),ate(X,Y))&exists(Y,pear(Y),ate(X,Y))
   @P-each(X,man(X),P)
Finally, the @P node modifies the top node, and after 
discarding the operator (a 'g') in the resulting item,
we get the logical form 
each(X,man(X),exists(Y,apple(Y),ate(X,Y))
   &exists(Y,pear(Y),ate(X,Y))).
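The duplication performed by the fourth 'trans' rule can be sketched as follows (Python, with logical forms as strings and textual substitution, purely for illustration; 'apply_at' is a hypothetical helper, not the paper's cbase machinery): each conjunct operates on its own copy of the material in the scope of the coordinate node, and the results are conjoined.

```python
def apply_at(item, scope):
    # apply a semantic item whose operator is '@Hole' to a logical form
    # by substituting the form for the hole variable
    op, body = item
    return body.replace(op[1:], scope)

apple = ('@Q', 'exists(Y,apple(Y),Q)')   # left conjunct "an apple"
pear  = ('@U', 'exists(Y,pear(Y),U)')    # right conjunct "a pear"
scope = 'ate(X,Y)'                       # material in the coordinate scope

# the scope material is duplicated, once per conjunct:
duplicated = apply_at(apple, scope) + '&' + apply_at(pear, scope)
# -> exists(Y,apple(Y),ate(X,Y))&exists(Y,pear(Y),ate(X,Y))
```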
Near the end of the Introduction, examples were 
given of two syntactically similar sentences with coor- 
dination, for which the produced logical forms are 
quite different. For the sentence 
"Each man and each woman ate an apple", 
the reshaping stage produces a tree that in outline 
looks like the following: 
sent
   nounph "each man"
   conj(and)
      nounph "each woman"
   nounph "an apple"
   verbph "ate"
Then, the material for "ate an apple" will be in the 
scope of the conjoined noun phrase and this material 
gets duplicated, with the resulting logical form being 
each(X,man(X),exists(Y,apple(Y),ate(X,Y))) 
&each(X,woman(X),exists(Y,apple(Y),ate(X,Y))). 
On the other hand, for the sentence 
"A man and a woman sat at each table", 
reshaping moves the universally quantified noun 
phrase to the left of the existentially quantified con- 
joined noun phrase, and the tree is as follows: 
sent
   nounph "each table"
   nounph "a man"
   conj(and)
      nounph "a woman"
   verbph "sat at"
Then the only material in the scope of the conjoined 
noun phrase is for "sat at", and only this gets dupli- 
cated. (In fact, the scoping is like that for our earlier 
example, "Each man ate an apple and a pear".) The 
complete logical form is 
each(Y,table(Y), exists(X,man(X),sat_at(X,Y))
   & exists(X,woman(X),sat_at(X,Y)) ).
Notice that the logical forms for conjoined phrases 
in the above analyses share variables. For instance, 
the same variable X is used in both man(X) and 
woman(X) in the last analysis. This sharing of varia- 
bles arises naturally because of the unification of body 
lists that is performed during parsing by the 'merge' 
demon. It keeps things straight very nicely, because 
the shared variables may appear in another predica- 
tion, like sat_at(X,Y) above, which occurs only once, 
outside the conjoined phrase, but is related logically to 
both conjuncts. 
This sharing of variables presents no problems as 
long as the variables are quantified over (as they are 
by the existential in the preceding example). But it 
makes proper nouns less convenient to treat. If coor- 
dination were not being considered, it would be con- 
venient to parse proper nouns by the sort of rule listed 
in Figure 1 in Section 2, where the proper noun gets 
immediately unified with the variable X appearing in 
nounph(X). But if such a rule is used with the MSG 
parser, then a sentence as simple as "John and Mary 
laughed" will not parse, because the parser attempts to 
unify the logical subject variable with both 'john' and 
'mary'. 
Therefore, as the semantic item for a proper noun 
N, we use a quantified form, specifically 
@P-def(X,X=N,P), 
and this is carried through in most of the processing. 
However, the procedure 'translate', after it has carried 
out all the modification, calls a procedure 'simplify' 
which simplifies the logical form. This gets rid of 
some unnecessary 'true's and it carries out the substi- 
tutions implicit in the proper noun forms, by doing 
some copying of structures and renaming of variables. 
For example, the logical form for "John and Mary 
laughed" prior to simplification is essentially 
def(X,X=john,laughed(X))
&def(X,X=mary,laughed(X)).
But after simplification, it is 
laughed(john)&laughed(mary). 
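This substitution step can be sketched in Python over the printed string form of the logical expressions (the conjunct splitter and the regular expression are assumptions of the sketch, not the authors' Prolog 'simplify'):

```python
import re

def split_conjuncts(form):
    # split a logical form at top-level '&', tracking parenthesis depth
    parts, depth, start = [], 0, 0
    for i, ch in enumerate(form):
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
        elif ch == '&' and depth == 0:
            parts.append(form[start:i])
            start = i + 1
    parts.append(form[start:])
    return parts

def simplify(form):
    # carry out the substitution implicit in a proper-noun form
    # def(Var,Var=name,Body): replace Var by name and keep Body
    parts = split_conjuncts(form)
    if len(parts) > 1:
        return '&'.join(simplify(p) for p in parts)
    m = re.fullmatch(r'def\((\w+),\1=(\w+),(.*)\)', form)
    if m:
        var, name, body = m.groups()
        return simplify(re.sub(r'\b%s\b' % var, name, body))
    return form

form = 'def(X,X=john,laughed(X))&def(X,X=mary,laughed(X))'
# simplify(form) == 'laughed(john)&laughed(mary)'
```

Ordinary quantified 'def' forms, such as def(Y,train(Y),...), do not match the Var=name pattern and are left alone.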
In the sample analyses in the Appendix, we give in 
some cases only the logical form and in other cases the 
intermediate structures also (the syntactic analysis tree 
and the reshaped tree). Analysis times are in milli- 
seconds. These do not include times for I/O and con- 
version of character strings to word lists. Variables
are printed by Prolog-10 in the form _n, where n is an
integer.
5. Possible Extensions 
The main advantages of the formalism presented 
here are: 
• automating the treatment of coordination, 
• freeing the user of concern with structure-building, 
and 
• providing a modular treatment of semantics, based 
upon information given locally in each rule. 
While making a reasonable compromise between 
power and elegance on the one hand, and efficiency 
on the other, our present implementation could be 
improved in several ways. For instance, because the 
parsing history is kept in a stack that is regularly
popped - the Parent stack - some parsing states are no
longer available for backing up to, so the possibility 
exists for some acceptable sentences not to be recog- 
nized. 
We have experimented with modifications of the 
MSG interpreter in which more of the parse history is 
saved, and have also considered compiling MSGs into 
Prolog and using a general 'state' predicate which re- 
turns the proof history, but we have not as yet ob- 
tained satisfactory results along these lines. 
Another possible improvement is to use some se- 
mantic guidance for the (at present blind) backing up 
through parsing states. The parser already carries 
along semantic information (in semantic items) to be 
used later on. Some of this information could perhaps 
also be used during parsing, in order to improve the 
backup. Research along these lines may well provide 
some more insight into the dilemma of whether syntax 
and semantics should be kept separate or intermingled. 
It would also be interesting to include collective 
and respective readings of coordinated noun phrases, 
perhaps along the lines proposed in Dahl (1981). 
We do not presume that our general treatment of 
coordination will work for all possible MSG grammars. 
Care is necessary in writing an MSG, as with any other 
formalism. What we do provide are enough elements 
to arrive at a grammar definition that can treat most 
structure-building and coordination problems in a 
modular and largely automated manner. 
We have also investigated an alternative approach 
to coordination, which is not metagrammatical but is 
nevertheless more flexible than previous approaches, 
and involves still another grammar formalism we be- 
lieve worth studying in itself. We have named it the 
gapping grammar (GG) formalism, as its main feature 
is that it allows a grammar rule to rearrange gaps in a 
fairly arbitrary fashion. This will be the subject of a 
forthcoming article. 
6. Concluding Remarks 
We have described a new logic grammar system for 
handling coordination metagrammatically, which also 
automatically builds up the modifier structure of a 
sentence during parsing. This structure, as we have 
seen, can be considered an annotated phrase structure 
tree, but the underlying grammar - unlike other recent 
approaches to NL processing - is not necessarily 
context-free. The rules accepted are generalized 
type-0 rules that may include gaps (in view, for in- 
stance, of left extraposition), and semantic interpreta- 
tion, as we have seen, is guided through the semantic 
items, local to each rule, which help resolve scoping 
problems. The system's semantic interpretation com- 
ponent can in particular deal with scoping problems 
involving coordination. 
While the treatment of coordination is the main 
motivation for developing still another logic grammar 
formalism, we believe our notion of modifier structure 
grammar to be particularly attractive for allowing the 
user to write grammars in a more straightforward man- 
ner and more clearly. Also, because the semantic 
information about the structure being built up is de- 
scribed modularly in the grammar rules, it becomes 
easier to adapt the parser to alternative domains of 
application: modifying the logical representation ob- 
tained need only involve the semantic items in each 
rule. A related but less flexible idea was independent- 
ly developed for Restriction Grammars by Hirshman 
and Puder (1982). RGs are also logic grammars in the 
sense that they are based on Prolog, but they deal only 
with context-free definitions augmented by restrictions 
(which are procedures attached to the rules). In RGs, 
a tree record of the context-free rules applied is auto- 
matically generated during the parse. More evolved 
representations for the sentence, however, are again 
the user's responsibility and require processing this 
automatically generated parse tree. 
Another important point, in our view, is the fact 
that our system does not preclude context-sensitive 
rules, transformations, or gaps. This is contrary to 
what seems to be the general tendency today, both in 
theoretical linguistics (for example, Gazdar 1981) and 
in computational linguistics (for example, Hirshman 
and Puder 1982, Joshi and Levy 1982, Robinson 
1982, Schubert and Pelletier 1982), towards using 
context-free grammars (which, however, are often 
augmented in some way - through restrictions, local 
constraints, rule schemata, metarules, etc. - compen- 
sating for the lack of expressiveness in simple context- 
free grammars). This approach was largely motivated 
by the need to provide alternatives to transformational 
grammar, which on the one hand was felt by AI re- 
searchers to deal insufficiently with semantics and with 
sentence analysis, and on the other hand, as observed 
by Gazdar (1981), could not offer linguistically ade- 
quate explanations for important constructs, such as 
coordination and unbounded dependencies. Further 
arguments supporting this approach include claims of 
more efficient parsability, simplicity, and modularity. 
From the particular point of view of logic gram- 
mars, more evolved grammar formalisms make a great 
deal of sense for various reasons. In the first place, 
they provide various advantages that have been illus- 
trated in Dahl (1981), namely modularity and concise- 
ness, clarity and efficiency. A detailed discussion of 
these advantages with respect to augmented transition 
networks can be found in Pereira and Warren (1980). 
Furthermore, they include lower-level grammars as 
a special case. In particular, context-free rules aug- 
mented with procedures may be written, since even the 
simplest logic grammar defined to date (DCGs) allows 
Prolog calls to be interspersed with the rules. The 
greater expressive power allowed by more evolved 
formalisms, then, can only represent a gain, since it 
does not preclude more elementary approaches. Logic 
grammars, in short, seem to be developing - like other 
computer formalisms - into higher-level tools that 
allow the user to avoid mechanizable effort in order to 
concentrate on as yet unmechanizable, creative tasks. 
MSGs are intended as a contribution in this direction. 

References 
Bates, M. 1978 The Theory and Practice of Augmented Transition 
Network Grammars. In Bolc, L., Ed., Natural Language Com-
munication with Computers. Springer-Verlag, New York: 191- 
259. 
Coelho, H.M.F. 1979 A Program Conversing in Portuguese Provid- 
ing a Library Service. Ph.D. thesis, University of Edinburgh. 
Colmerauer, A. 1973 Un système de communication homme-machine
en français. Groupe d'Intelligence Artificielle, Université d'Aix-
Marseille.
Colmerauer, A. 1978 Metamorphosis Grammars. In Bolc, L., Ed., 
Natural Language Communication with Computers. Springer- 
Verlag, New York: 133-189. 
Dahl, V. 1977 Un Système Déductif d'Interrogation de Banques de
Données en Espagnol. Thèse de Doctorat de Spécialité,
Université d'Aix-Marseille.
Dahl, V. 1981 Translating Spanish into Logic through Logic. 
American Journal of Computational Linguistics 7(3): 149-164.
Gazdar, G. 1981 Unbounded Dependencies and Coordinate Struc- 
ture. Linguistic Inquiry 12(2): 155-184. 
Hirshman, L. and Puder, K. 1982 Restriction Grammar in Prolog. 
Proc. First International Logic Programming Conference. Marseille,
France: 85-90.
Joshi, A. and Levy, L.S. 1982 Phrase Structure Trees Bear More 
Fruit than You Would Have Thought. American Journal of 
Computational Linguistics 8:1-11. 
Kowalski, R.A. 1974 Predicate Logic as a Programming Language. 
Proc. IFIP 74. North-Holland, Amsterdam, The Netherlands: 
569-574. 
Kowalski, R.A. 1979 Logic for Problem Solving. North-Holland, 
New York, New York. 
McCord, M.C. 1982 Using Slots and Modifiers in Logic Grammars 
for Natural Language. Artificial Intelligence 18: 327-367. 
McCord, M.C. 1981 Focalizers, the Scoping Problem, and Seman- 
tic Interpretation Rules in Logic Grammars. Technical Report, 
University of Kentucky. To appear in Warren, D. and van 
Caneghem, M., Eds., Logic Programming and its Applications. 
Pereira, F. 1981 Extraposition Grammars. American Journal of 
Computational Linguistics 7: 243-256. 
Pereira, F. and Warren, D. 1980 Definite Clause Grammars for 
Language Analysis - a Survey of the Formalism and a Compari- 
son with Transition Networks. Artificial Intelligence 13: 231- 
278. 
Pereira, F. and Warren, D. 1982 An Efficient Easily Adaptable 
System for Interpreting Natural Language Queries. American 
Journal of Computational Linguistics 8:110-122. 
Pereira, L.M. et al. 1982 ORBI - An Expert System for Environ- 
mental Resource Evaluation through Natural Language. Univer- 
sidade Nova de Lisboa. 
Pereira, L.; Pereira, F.; and Warren, D. 1978 User's Guide to
DECsystem-10 Prolog. Department of Artificial Intelligence,
University of Edinburgh.
Robinson, J. 1982 DIAGRAM: A Grammar for Dialogues. Comm.
ACM 25: 27-47. 
Robinson, J.A. 1965 A Machine-Oriented Logic Based on the 
Resolution Principle. J. ACM 12: 23-41. 
Roussel, P.L. 1975 Prolog: Manuel de Référence et d'Utilisation.
Université d'Aix-Marseille.
Schubert, L. and Pelletier, F. 1982 From English to Logic: 
Context-Free Computation of 'Conventional' Logical Transla- 
tion. American Journal of Computational Linguistics 8(1): 27-44.
Van Emden, M.H. 1975 Programming with Resolution Logic. 
Machine Intelligence, 8. John Wiley, New York, New York. 
Winograd, T. 1972 Understanding Natural Language. Academic
Press, New York, New York.
Woods, W.A. 1973 An Experimental Parsing System for Transition 
Network Grammars. In Rustin, R., Ed., Natural Language 
Processing. Algorithmics Press, New York, New York: 145-149. 
