GENERALIZED AUGMENTED TRANSITION NETWORK GRAMMARS
FOR GENERATION FROM SEMANTIC NETWORKS
Stuart C. Shapiro 
Department of Computer Science, SUNY at Buffalo 
1. INTRODUCTION
Augmented transition network (ATN) grammars have, since
their development by Woods [7; 8], become the most used
method of describing grammars for natural language
understanding and question answering systems. The ad-
vantages of the ATN notation have been summarized as
"1) perspicuity, 2) generative power, 3) efficiency of
representation, 4) the ability to capture linguistic
regularities and generalities, and 5) efficiency of
operation." [1, p.191]. The usual method of utilizing an
ATN grammar in a natural language system is to provide 
an interpreter which can take any ATN grammar, a lexi-
con, and a sentence as data and produce either a parse
of the sentence or a message that the sentence does not
conform to the grammar. A compiler has been written
[2; 3] which takes an ATN grammar as input and produces
a specialized parser for that grammar, but in this paper
we will presume that an interpreter is being used.
A particular ATN grammar may be viewed as a program
written in the ATN language. The program takes a sen-
tence, a linear sequence of symbols, as input, and pro-
duces as output a parse, which is usually a parse tree
(often represented by a LISP S-expression) or some
"knowledge representation" such as a semantic network.
The operation of the program depends on the interpreter
being used and the particular program (grammar), as well
as on the input (sentence) being processed.
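Viewed this way, the essentials of an ATN interpreter can be sketched in modern notation. The sketch below is a hypothetical simplification in Python, not the interpreter this paper or Woods describes: the grammar representation (a map from state names to arc tuples) and the backtracking discipline are our own.

```python
# Minimal sketch of an ATN interpreter (hypothetical simplification).
# A grammar maps each state name to a list of arcs; each arc is a tuple
# (test, actions, terminal), where terminal is (next_state, consume?)
# or None for a POP arc.

def interpret(grammar, state, buffer, registers=None):
    """Try each arc leaving `state` until one leads to success; return
    the register structure as the 'parse', or None on failure."""
    registers = dict(registers or {})
    for test, actions, terminal in grammar[state]:
        star = buffer[0] if buffer else None      # the * register
        if not test(star, registers):
            continue
        regs = dict(registers)
        for act in actions:
            act(star, regs)
        if terminal is None:                      # a POP arc: succeed here
            return regs
        next_state, consume = terminal
        rest = buffer[1:] if consume else buffer  # TO consumes, JUMP does not
        result = interpret(grammar, next_state, rest, regs)
        if result is not None:
            return result                         # first successful path wins
    return None                                   # no arc worked: backtrack
```

A toy grammar for the two-word input `["a", "dog"]` shows the shape of the data this sketch expects; real ATN grammars are, of course, written declaratively rather than as Python lambdas.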
Several methods have been described for using ATN gram-
mars for sentence generation. One method [1, p.235] is
to replace the usual interpreter by a generation inter-
preter which can take an ATN grammar written for pars-
ing and use it to produce random sentences conforming
to the grammar. This is useful for testing and debug-
ging the grammar. Another method [5] uses a modified
interpreter to generate sentences from a semantic net-
work. In this method, an ATN register is initialized to
hold a node of the semantic network, and the input to the
grammar is a linear string of symbols providing a
pattern of the sentence to be generated. Another method
[4] also generates sentences from a semantic network.
In this method, input to the grammar is the semantic
network itself. That is, instead of successive words of
a surface sentence or successive symbols of a linear
sentence pattern being scanned as the ATN grammar is
traversed by the interpreter, different nodes of the
semantic network are scanned. The grammar controls the
syntax of the generated sentence based on the structural
properties of the semantic network and the information
contained therein.
It was intended that a single ATN interpreter could be
used both for standard ATN parsing and for generation
based on this last method. However, a special inter-
preter was written for generation grammars of the type
described in [4], and, indeed, the definition of the ATN
formalism given in that paper, though based on the
standard ATN formalism, was inconsistent enough with the
standard notation that a single interpreter could not be
used. This paper reports the results of work carried
out to remove those inconsistencies. A generalization
of the ATN formalism has been derived which allows a
single interpreter to be used for both parsing and gen-
erating grammars. In fact, parsing and generating
grammars can be sub-networks of each other. For example,
an ATN grammar can be constructed so that the "parse"
of a natural language question is the natural language
statement which answers it, interaction with representa-
tion and inference routines being done on arcs along the
way. The new formalism is a strict generalization in
the sense that it interprets all old ATN grammars as
having the same semantics (carrying out the same
actions and producing the same parses) as before.
(This material is based on work supported in part by the
National Science Foundation under Grant #MCS78-02274.)
2. GENERATION FROM A SEMANTIC NETWORK--BRIEF OVERVIEW
In our view, each node of a semantic network represents
a concept. The goal of the generator is, given a node,
to express the concept represented by that node in a
natural language surface string. The syntactic cate-
gory of the surface string is determined by the
grammar, which can include tests of the structure of
the semantic network connected to the node. In order
to express the concept, it is often necessary to in-
clude in the string substrings which express the con-
cepts represented by adjacent nodes. For example, if
a node represents a fact to be expressed as a state-
ment, part of the statement may be a noun phrase
expressing the concept represented by the node con-
nected to the original node by an AGENT case arc.
This can be done by a recursive call to a section of
the grammar in charge of building noun phrases. This
section will be passed the adjacent node. When it
finishes, the original statement section of the grammar
will continue adding additional substrings to the
growing statement.
In ATN grammars written for parsing, a recursive push
does not change the input symbol being examined, but
when the original level continues, parsing continues
at a different symbol. In the generation approach we
use, a recursive push often involves a change in the
semantic node being examined, and the original level
continues with the original node. This difference is
a major motivation of some of the generalizations to
the ATN formalism discussed below. The other major
motivation is that, in parsing a string of symbols, the
"next" symbol is well defined, but in "parsing" a
network, "next" must be explicitly specified.
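The scanning difference just described can be made concrete with a toy example. Everything in the sketch below is hypothetical (the node names, arc labels, and helper functions are ours, not SNePS'); it only illustrates that the generating "push" examines an adjacent node while the pushing level resumes on the same node it started with.

```python
# Toy illustration of the scanning difference between parsing and
# generating.  The fact node M1 stays current across the recursive call;
# only the callee looks at the adjacent AGENT node M2.  (Node names and
# arc labels here are invented for illustration.)

network = {
    "M1": {"VERB": "kiss", "AGENT": "M2"},   # a fact node
    "M2": {"NAME": "dog"},                   # the node for its agent
}

def gen_np(node):
    # the recursive "push" was handed the adjacent node
    return network[node]["NAME"]

def gen_statement(node):
    # descend to the adjacent node to build the noun phrase ...
    subject = gen_np(network[node]["AGENT"])
    # ... then resume with the ORIGINAL node still being examined
    return [subject, network[node]["VERB"] + "ed"]
```

By contrast, a parsing push would have consumed input symbols, so the caller would resume at a *different* position in the input.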
3. THE GENERALIZATION
The following sub-sections show the generalized syn-
tax of the ATN formalism, and assume a knowledge of the
standard formalism ([1] is an excellent introduction).
Syntactic structures already familiar to ATN users,
but not discussed here, remain unchanged. Parentheses
and terms in upper case letters are terminal symbols.
Lower case terms in angle brackets are non-terminals.
Terms enclosed in square brackets are optional. Terms
followed by "*" may occur zero or more times in suc-
cession. To avoid confusion, in the remainder of this
section we will underline the name of the * register.
3.1 TERMINAL ACTIONS 
Successful traversal of an ATN arc might or might not
consume an input symbol. When parsing, such consump-
tion normally occurs; when generating it normally does
not, but if it does, the next symbol (semantic node)
must be specified. To allow for these choices, we have
returned to the technique of [6] of having two terminal
actions, TO and JUMP, and have added an optional second
argument to TO. The syntax is:
(TO <state> [<form>])
(JUMP <state>)
Both cause the parser to enter the given state.
JUMP never consumes the input symbol; TO always does.
If the <form> is absent in the TO action, the next
symbol to be scanned will be the next one in the input
buffer. If <form> is present, its value will be the
next symbol to be scanned. All traditional ATN arcs ex-
cept JUMP and POP end with a terminal action.
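The effect of the two terminal actions on the input can be sketched as operations on a stack-style buffer. This is a hypothetical rendering in Python, not the paper's notation; `form_value` stands in for the optional <form>.

```python
# Sketch of the two terminal actions on an input buffer represented as a
# list whose element 0 is the top.  (Function names are illustrative.)

def jump(buffer):
    """JUMP never consumes the input symbol."""
    return buffer

def to(buffer, form_value=None):
    """TO always consumes the top symbol; an optional <form> value, if
    given, becomes the next symbol to be scanned."""
    rest = buffer[1:]
    if form_value is not None:
        return [form_value] + rest
    return rest
```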
The explanation given for the replacement of the JUMP
terminal action by the JUMP arc was that, "since POP,
PUSH and VIR arcs never advance the input, to decide
whether or not an arc advanced the input required know-
ledge of both the arc type and termination action. The
introduction of the JUMP arc ... means that the input
advancement is a function of the arc type alone." [2]
That our reintroduction of the JUMP terminal action
does not bring back the confusion is explained below in
Section 4.
3.2 ARCS
We retain a JUMP arc as well as a JUMP terminal action.
The JUMP arc provides a place to make an arbitrary test
and perform some actions without consuming an input
symbol. We need such an arc that does consume its in-
put symbol, but TST is not adequate since it, like CAT,
is really a bundle of arcs, one for each lexical entry
of the scanned symbol, should the latter be lexically
ambiguous. A semantic node, however, does not have a
lexical entry. We therefore introduce a TO arc:
(TO (<state> [<form>]) <test> <action>*)
If <test> is successful, the <action>s are performed
and transfer is made to <state>. The input symbol is
consumed. The next symbol to be scanned is the value
of <form> if it is present or the next symbol in the
input buffer if <form> is missing.
The PUSH arc makes two assumptions: 1) the first
symbol to be scanned in the subnetwork is the current
contents of the * register; 2) the current input symbol
will be consumed by the subnetwork, so the contents of *
can be replaced by the value returned by the subnet-
work. We need an arc that causes a recursive call to a
subnetwork, but makes neither of these two assump-
tions, so we introduce the CALL arc:
(CALL <state> <form> <test> <preaction or action>*
<register> <action>* <terminal action>)
where <preaction or action> is <preaction> or <action>.
If the <test> is successful, all the <action>s of
<preaction or action> are performed and a recursive call
is made to the state <state> where the next symbol
to be scanned is the value of <form> and registers are
initialized by the <preaction>s. If the subnetwork
succeeds, its value is placed into <register> and the
<action>s and <terminal action> are performed.
Just as the normal TO terminal action is the general-
ized TO terminal action with a default form, the PUSH
arc (which we retain) is the CALL arc with the follow-
ing defaults: <form> is *; the <preaction or action>s
are only <preaction>s; <register> is *.
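The relationship between PUSH and CALL can be sketched directly: PUSH is CALL with its three defaults filled in. The helper below is hypothetical (the real arcs also carry tests, preactions, and terminal actions, omitted here); it isolates only the form/register defaulting.

```python
# Sketch: the PUSH arc as a CALL arc with defaults filled in.
# `subnet` stands for the recursive invocation of a subnetwork;
# registers are a plain dict with "*" as the star register.

def call_arc(subnet, form_value, register, registers):
    """Recursively invoke `subnet` on <form>'s value and store the
    returned value into the named <register> of this level."""
    result = subnet(form_value)
    registers[register] = result
    return registers

def push_arc(subnet, registers):
    """PUSH = CALL with <form> defaulted to * and <register> to *."""
    return call_arc(subnet, registers["*"], "*", registers)
```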
The only form which must be added is
(GETA <arc> [<node form>])
where <node form> is a form which evaluates to a seman-
tic node. If absent, <node form> defaults to *. The
value of GETA is the node at the end of the arc label-
led <arc> from the specified node, or a list of such
nodes if there are more than one.
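GETA's behavior can be sketched over a semantic network stored as nested dictionaries. The representation and node names below are invented for illustration; SNePS' actual network structures are richer.

```python
# Sketch of the GETA form over a toy semantic network: node -> arc label
# -> list of target nodes.  (Representation and names hypothetical.)

network = {
    "M85": {"AGENT": ["M74"], "VERB": ["M75"]},
    "M74": {"ADJ": ["M80", "M81"]},
}

def geta(arc, node):
    """Return the node at the end of `arc` from `node`, a list of nodes
    if there are several, or None if there are none."""
    targets = network.get(node, {}).get(arc, [])
    if len(targets) == 1:
        return targets[0]
    return targets or None
```

In the interpreter itself, the second argument would default to the current contents of the * register when <node form> is absent.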
3.3 TESTS, PREACTIONS, ETC.
The generalization of the ATN formalism to one which
allows for writing grammars which generate surface
strings from semantic networks, yet can be interpret-
ed by the same interpreter which handles parsing
grammars, requires no changes other than the ones des-
cribed above. Of course, each implementation of an ATN
interpreter contains slight differences in the set of
tests and actions implemented beyond the basic ones.
4. THE INPUT BUFFER
Input to the ATN parser can be thought of as being the
contents of a stack, called the input buffer. If the
input is a string of words, the first word will be at
the top of the input buffer and successive words will
be in successively deeper positions of the input buffer.
If the input is a graph, the input buffer might contain
only a single node of the graph.

On entering an arc, the * register is set to the top
element of the input buffer, which must not be empty.
The only exceptions to this are the VIR and POP arcs.
VIR sets * to an element of the HOLD register. POP
leaves * undefined since * is always the element to be
accounted for by the current arc, and a POP arc is not
trying to account for any element. The input buffer is
not changed between the time a PUSH arc is entered and
the time an arc emanating from the state pushed to is
entered, so the contents of * on the latter arc will be
the same as on the former. A CALL arc is allowed to
specify the contents of * on the arcs of the called
state. This is accomplished by replacing the top
element of the input buffer by that value before trans-
fer to the called state. If the value is a list of
elements, we push each element individually onto the
input buffer. This makes it particularly easy to loop
through a set of nodes, each of which will contribute
the same syntactic form to the growing sentence (such
as a string of adjectives).
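The way a CALL arc seeds the called state's buffer can be sketched as a single operation. This is a hypothetical helper, not the interpreter's code; it shows only the replace-top and push-each-element behavior just described.

```python
# Sketch of how a CALL arc sets up the input buffer for the called
# state: the top element is replaced by the <form> value, and a list
# value is pushed element by element -- convenient for looping through
# e.g. a set of adjective nodes.  (Helper name is illustrative.)

def seed_buffer(buffer, form_value):
    rest = buffer[1:]                      # replace the current top element
    if isinstance(form_value, list):
        return list(form_value) + rest     # push each element individually
    return [form_value] + rest
```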
On an arc (except for POP), i.e. during evaluation
of the test and the acts, the contents of * and the top
element of the input buffer are the same. This re-
quires special processing for VIR, PUSH, and CALL arcs.
After setting *, a VIR arc pushes the contents of * on-
to the input buffer. When a PUSH arc resumes, and the
lower level has successfully returned a value, the
value is placed into * and also pushed onto the input
buffer. When a CALL resumes, and the lower level has
successfully returned a value, the value is placed into
the specified register, and the contents of * is pushed
onto the input buffer. The specified register might or
might not be *. In either case the contents of * and
the top of the input buffer are the same.
There are two possible terminal acts, JUMP and TO.
JUMP does not affect the input buffer, so the contents
of * will be the same on the successor arcs (except for POP
and VIR) as at the end of the current arc. TO pops the
input buffer, but if provided with an optional form,
also pushes the value of that form onto the input buf-
fer.
POPping from the top level is only legal if the input
buffer is empty. POPping from any level should indicate
that a constituent has been accounted for. Accounting
for a constituent should entail removing it from the
input buffer. From this we conclude that every path
within a level from an initial state to a POP arc should
contain at least one TO transfer, and in most cases, it
is proper to transfer TO rather than to JUMP to a state
that has a POP arc emanating from it. TO will be the
terminal act for most VIR and PUSH arcs.
In any ATN interpreter which abides by this discussion,
advancement of the input is a function of the terminal
action alone, in the sense that at any state JUMPed to,
the top of the input buffer will be the last value of *,
and at any state TOed to it will not be.
5. THE LEXICON

Parsing and generating require a lexicon -- a file of
words giving syntactic categories, features and inflec-
tional forms for irregularly inflected words. Parsing
and generating require different information, yet we
wish to avoid duplication as much as possible.
During parsing, morphological analysis is performed. 
The analyzer is given an inflected form, must segment 
it, find the stem in the lexicon and modify the lexical 
entry of the stem according to its analysis of the 
original form. Irregularly inflected forms must have 
their own entries in the lexicon. An entry in the lex- 
icon may be lexically ambiguous, so each entry must be 
associated with a list of one or more lexical feature 
lists. Each such list, whether stored in the lexicon 
or constructed by the morphological analyzer, must in- 
clude a syntactic category and a stem, which serves as 
a link to the semantic network, as well as other fea- 
tures such as transitivity for a verb. 
In the semantic network, some nodes are associated with
lexical entries. During generation, these entries, 
along with other information from the semantic network, 
are used by a morphological synthesizer to construct 
an inflected word. We assume that all such entries are 
unambiguous stems, and so contain only a single lexical 
feature list. This feature list must contain any ir- 
regularly inflected forms. 
In summary, a single lexicon may be used for both 
parsing and generating under the following conditions. 
An unambiguous stem can be used for both parsing and
generating if its one lexical feature list contains
features required for both operations. An ambiguous
lexical entry will only be used during parsing. Each
of its lexical feature lists must contain a unique but
arbitrary "stem" for connection to the semantic net-
work and for holding the lexical information required
for generation. Every lexical feature list used for
generating must contain the proper natural language
spelling of its stem as well as any irregularly in-
flected forms. Lexical entries for irregularly in-
flected forms will only be used during parsing.
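The conditions summarized above can be sketched as a data layout. The entry shapes and feature names below are hypothetical, chosen only to show one lexicon serving both directions: an ambiguous surface word carries several feature lists (parsing only), each pointing at a unique artificial stem whose own entry holds the spelling and irregular forms needed for generation.

```python
# Sketch of a shared parsing/generating lexicon (entry shapes and
# feature names are invented for illustration).  Each word maps to a
# list of lexical feature lists.

lexicon = {
    # unambiguous stem: one feature list, usable for both operations
    "kiss": [{"cat": "V", "stem": "kiss", "pastp": "kissed"}],
    # ambiguous surface word: two feature lists, used only in parsing;
    # each carries a unique but arbitrary "stem" into the network
    "saw":  [{"cat": "V", "stem": "see1"},
             {"cat": "N", "stem": "saw1"}],
    # the arbitrary stems hold what generation needs: spelling and
    # irregularly inflected forms
    "see1": [{"cat": "V", "spelling": "see", "past": "saw"}],
    "saw1": [{"cat": "N", "spelling": "saw"}],
}

def generation_features(stem):
    """Generation assumes an unambiguous stem: exactly one feature list."""
    lists = lexicon[stem]
    assert len(lists) == 1, "generation entries must be unambiguous"
    return lists[0]
```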
For the purposes of this paper, it should be irrelevant
whether the "stems" connected to the semantic network
are actual surface words like "give", deeper sememes
such as that underlying both "give" and "take", or
primitives such as "ATRANS".
6. EXAMPLE 
Figure 1 shows an example interaction using the SNePS
Semantic Network Processing System [5] in which I/O is
controlled by a parsing-generating ATN grammar. Lines
begun by "**" are the user's input, which are all calls to
the function named ":". This function passes its
argument list as the input buffer for a parse to begin
in state S. The form popped by the top level ATN net-
work is then printed, followed by the CPU time in
milliseconds. (The system is partly compiled, partly
interpreted LISP on a CYBER 173. The ATN grammar is
interpreted.) Figure 2 shows the grammar in abbrevi-
ated graphical form, and Figure 4 gives the details of
each arc. The parsing network, beginning at state SP,
is included for completeness, but the reader unfamiliar
with SNePSUL, the SNePS User Language, [5] is not ex-
pected to understand its details.
The first arc in the network is a PUSH to the parsing
network. This network determines whether the input is
a statement (type D) or a question (type Q). If a
statement, the network builds a SNePS network repre-
senting the information contained in the sentence
and pops a semantic node representing the fact con-
tained in the main clause. If the input is a question,
the parsing network calls the SNePS deduction routines
(DEDUCE) to find the answer, and pops the semantic
node representing that (no actual deduction is re-
quired in this example). Figure 3 shows the complete
SNePS network built during this example. Nodes M74-
M85 were built by the first statement, nodes M89 and
M90 by the second.
When the state RESPOND is reached, the input buffer
contains the SNePS node popped by the parsing network.
The generating network then builds a sentence. The
first two sentences were generated from node M85 before
M89 and M90 were built. The third sentence was gener-
ated from M90, and the fourth from M85 again. Since
the voice (VC) register is LIFTRed from the parsing
network, the generated sentence has the same voice as
the input sentence (see Figure 1).
Of particular note is the sub-network at state PRED 
which analyzes the proper tense for the generated 
sentence. For brevity, only simple tenses are included 
here, but the more complicated tenses presented in \[4\] 
can be handled in a similar manner. Also of interest 
is the subnetwork at state ADJS which generates a 
string of adjectives which are not already scheduled 
to be in the sentence. (Compare the third and fourth 
generated sentences of Figure 1.) 
7. CONCLUSIONS 
A generalization of the ATN formalism has been pre- 
sented which allows grammars to be written for gener- 
ating surface sentences from semantic networks. The 
generalization has involved: adding an optional 
argument to the TO terminal act; reintroducing the 
JUMP terminal act; introducing a TO arc similar to the 
JUMP arc; introducing a CALL arc which is a generaliza- 
tion of the PUSH arc; introducing a GETA form; clari- 
fying the management of the input buffer. The benefits 
of these few changes are that parsing and generating 
grammars may be written in the same familiar notation,
may be interpreted (or compiled) by a single program, 
and may use each other in the same parser-generator 
network grammar. 
REFERENCES
[1] Bates, Madeleine. The theory and practice of aug-
mented transition network grammars. In L. Bolc, ed.,
Natural Language Communication with Computers, Springer-
Verlag, Berlin, 1978, 192-259.
[2] Burton, R. R. Semantic grammar: an engineering
technique for constructing natural language understand-
ing systems. BBN Report No. 3453, Bolt Beranek and
Newman, Inc., Cambridge, MA, December 1976.
[3] Burton, Richard R. and Woods, W. A. A compiling
system for augmented transition networks. Preprints of
COLING 76: The International Conference on Computation-
al Linguistics, Ottawa, June 1976.
[4] Shapiro, Stuart C. Generation as parsing from a
network into a linear string. AJCL Microfiche 33 (1975),
45-62.
[5] Shapiro, Stuart C. The SNePS semantic network
processing system. In N. V. Findler, ed., Associative
Networks: Representation and Use of Knowledge by Com-
puters, Academic Press, New York, 1979, 179-203.
[6] Simmons, R. and Slocum, J. Generating English
discourse from semantic networks. CACM 15, 10 (October
1972), 891-905.
[7] Woods, W. A. Transition network grammars for
natural language analysis. CACM 13, 10 (October 1970),
591-606.
[8] Woods, W. A. An experimental parsing system for
transition network grammars. In R. Rustin, ed., Nat-
ural Language Processing, Algorithmics Press, New York,
1973, 111-154.

**(: A DOG KISSED YOUNG LUCY)
(I UNDERSTAND THAT A DOG KISSED YOUNG LUCY)
3769 MSECS
**(: WHO KISSED LUCY)
(A DOG KISSED YOUNG LUCY)
2714 MSECS
**(: LUCY IS SWEET)
(I UNDERSTAND THAT YOUNG LUCY IS SWEET)
2127 MSECS
**(: WHO WAS KISSED BY A DOG)
(SWEET YOUNG LUCY WAS KISSED BY A DOG)
3004 MSECS

Figure 1. Example Interaction
Figure 2. A Parsing-Generating Grammar (abbreviated
graphical form; terminal acts are indicated by "J" or
"TO")

Figure 3. Semantic Network Built by Sentences of Figure 1
(S (PUSH SP T (JUMP RESPOND)))
(RESPOND (JUMP G (EQ (GETR TYPE) 'D) (SETR STRING '(I UNDERSTAND THAT)))
         (JUMP G (EQ (GETR TYPE) 'Q)))
(G (JUMP GS (AND (GETA OBJECT) (OVERLAP (GETR VC) 'PASS)) (SETR SUBJ (GETA OBJECT)))
   (JUMP GS (AND (GETA AGENT) (DISJOINT (GETR VC) 'PASS)) (SETR SUBJ (GETA AGENT)) (SETR VC 'ACT))
   (JUMP GS (GETA WHICH) (SETR SUBJ (GETA WHICH)) (SETR VC 'ACT)))
(GS (CALL NUMBR SUBJ T NUMBR (SETR NUMBR *) (JUMP GS1)))
(GS1 (CALL NP SUBJ T (SENDR DONE) (SENDR NUMBR) REG (ADDR STRING REG) (JUMP SVB)))
(SVB (CALL PRED * T (SENDR NUMBR) (SENDR VC) (SENDR VB (OR (GETA LEX (GETA VERB)) 'BE))
      REG (ADDR STRING REG) (JUMP SUROBJ)))
(SUROBJ (CALL NP (GETA AGENT) (AND (GETA AGENT) (OVERLAP VC 'PASS)) (SENDR DONE) * (ADDR STRING 'BY *) (TO END))
        (CALL NP (GETA OBJECT) (AND (GETA OBJECT) (OVERLAP VC 'ACT)) (SENDR DONE) * (ADDR STRING *) (TO END))
        (CALL NP (GETA ADJ) (GETA ADJ) * (ADDR STRING *) (TO END))
        (TO (END) T))
(END (POP STRING T))
(NUMBR (TO (NUMBR1) (OR (GETA SUB-) (GETA SUP-) (GETA CLASS-)) (SETR NUMBR 'PL))
       (TO (NUMBR1) (NOT (OR (GETA SUB-) (GETA SUP-) (GETA CLASS-))) (SETR NUMBR 'SING)))
(NUMBR1 (POP NUMBR T))
(PRED (CALL PAST (GETA ETIME) T TENSE (TO GENVB))
      (CALL FUTR (GETA STIME) T TENSE (TO GENVB))
      (TO (GENVB) T (SETR TENSE 'PRES)))
(GENVB (POP (VERBIZE (GETR NUMBR) (GETR TENSE) (GETR VC) (GETR VB)) T))
(PAST (TO (PASTEND) (OVERLAP * *NOW))
      (TO (PAST (GETA BEFORE)) T))
(PASTEND (POP 'PAST T))
(FUTR (TO (FUTREND) (OVERLAP * *NOW))
      (TO (FUTR (GETA AFTER)) T))
(FUTREND (POP 'FUTR T))
(NP (TO (END) (GETA LEX) (SETR STRING (WRDIZE (GETR NUMBR) (GETA LEX))))
    (JUMP NPA (AND (GETA NAMED-) (DISJOINT (GETA NAMED-) DONE)))
    (JUMP NPMA (AND (GETA MEMBER-) (DISJOINT (GETA MEMBER-) DONE))))
(NPA (CALL ADJS (GETA WHICH-) (GETA WHICH-) (SENDR DONE) REG (ADDR STRING REG) (JUMP NPN))
     (JUMP NPN T))
(NPN (TO (END) T (ADDR STRING (WRDIZE (GETR NUMBR) (GETA LEX (GETA NAME (GETA NAMED)))))))
(NPMA (CALL ADJS (GETA WHICH-) (GETA WHICH-) (SENDR DONE) REG (ADDR STRING 'A REG) (JUMP NPM))
      (JUMP NPM T (ADDR STRING 'A)))
(NPM (CALL NP (GETA CLASS (GETA MEMBER-)) T (SENDR DONE) REG (ADDR STRING REG) (TO END)))
(ADJS (CALL NP (GETA ADJ) (DISJOINT * DONE) (SENDR DONE) * (ADDR STRING *) (TO ADJS))
      (TO (ADJS) T)
      (POP STRING T))
(SP (WRD WHO T (SETR TYPE 'Q) (LIFTR TYPE) (SETR SUBJ *) (TO V))
    (PUSH NPP T (SENDR TYPE 'D) (SETR TYPE 'D) (LIFTR TYPE) (SETR SUBJ *) (TO V)))
(V (CAT V T (SETR VB (FINDORBUILD LEX (+(GETR *)))) (SETR TNS (GETF TENSE)) (TO COMPL)))
(COMPL (CAT V (AND (GETF PPRT) (OVERLAP (GETR VB) (GETA LEX- 'BE)))
        (SETR OBJ (GETR SUBJ)) (SETR SUBJ NIL) (SETR VC 'PASS)
        (SETR VB (FINDORBUILD LEX (+(GETR *)))) (TO SV))
       (CAT ADJ (OVERLAP (GETR VB) (GETA LEX- 'BE)) (SETR ADJ (FINDORBUILD LEX (+(GETR *)))) (TO SVC))
       (JUMP SV T))
(SV (JUMP O (EQ (GETR TNS) 'PRES) (SETR STM (BUILD BEFORE *NOW (BUILD AFTER *NOW) = ETM)))
    (JUMP O (EQ (GETR TNS) 'PAST) (SETR STM (BUILD BEFORE (BUILD BEFORE *NOW) = ETM))))
(O (WRD BY (EQ (GETR VC) 'PASS) (TO PAG))
   (PUSH NPP T (SENDR TYPE) (SETR OBJ *) (LIFTR VC) (TO SVO)))
(PAG (PUSH NPP T (SENDR TYPE) (SETR SUBJ *) (LIFTR VC) (TO SVO)))
(SVO (POP (BUILD AGENT (+(GETR SUBJ)) VERB (+(GETR VB)) OBJECT (+(GETR OBJ))
           STIME (+(GETR STM)) ETIME *ETM) (EQ (GETR TYPE) 'D))
     (POP (EVAL (BUILDQ (DEDUCE AGENT + VERB + OBJECT +) SUBJ VB OBJ)) (EQ (GETR TYPE) 'Q)))
(SVC (POP (EVAL (BUILDQ (FINDORBUILD WHICH + ADJ +) SUBJ ADJ)) (EQ (GETR TYPE) 'D))
     (POP (EVAL (BUILDQ (DEDUCE WHICH + ADJ +) SUBJ ADJ)) (EQ (GETR TYPE) 'Q)))
(NPP (WRD A T (SETR INDEF T) (TO NPDET))
     (JUMP NPDET T))
(NPDET (CAT ADJ T (HOLD (FINDORBUILD LEX (+(GETR *)))) (TO NPDET))
       (CAT N (AND (GETR INDEF) (EQ (GETR TYPE) 'D))
        (SETR NH (BUILD MEMBER- (BUILD CLASS (FINDORBUILD LEX (+(GETR *)))))) (TO NPPA))
       (CAT N (AND (GETR INDEF) (EQ (GETR TYPE) 'Q))
        (SETR NH (FIND MEMBER- (DEDUCE MEMBER %Y CLASS (TBUILD LEX (+(GETR *)))))) (TO NPPA))
       (CAT NPR T (SETR NH (FINDORBUILD NAMED- (FINDORBUILD NAME (FINDORBUILD LEX (+(GETR *)))))) (TO NPPA)))
(NPPA (VIR ADJ T (EVAL (BUILDQ (FINDORBUILD WHICH * ADJ *) NH)) (TO NPPA))
      (POP NH T))

Figure 4. Details of the Parser-Generator Network

