Restrictions on Tree Adjoining Languages 
Giorgio Satta 
Dip. di Elettronica e Informatica 
Università di Padova 
35131 Padova, Italy 
satta@dei.unipd.it 
William Schuler 
Computer and Information Science Dept. 
University of Pennsylvania 
Philadelphia, PA 19103 
schuler@linc.cis.upenn.edu 
Abstract 
Several methods are known for parsing lan- 
guages generated by Tree Adjoining Grammars 
(TAGs) in O(n^6) worst case running time. In 
this paper we investigate which restrictions on 
TAGs and TAG derivations are needed in order 
to lower this O(n^6) time complexity, without introducing large runtime constants, and without 
losing any of the generative power needed to 
capture the syntactic constructions in natural 
language that can be handled by unrestricted 
TAGs. In particular, we describe an algorithm 
for parsing a strict subclass of TAG in O(n^5), 
and attempt to show that this subclass retains 
enough generative power to make it useful in 
the general case. 
1 Introduction 
Several methods are known that can parse lan- 
guages generated by Tree Adjoining Grammars 
(TAGs) in worst case time O(n^6), where n is 
the length of the input string (see (Schabes 
and Joshi, 1991) and references therein). Al- 
though asymptotically faster methods can be 
constructed, as discussed in (Rajasekaran and 
Yooseph, 1995), these methods are not of prac- 
tical interest, due to large hidden constants. 
More generally, in (Satta, 1994) it has been ar- 
gued that methods for TAG parsing running in 
time asymptotically faster than O(n^6) are unlikely to have small hidden constants. 
A careful inspection of the proof provided 
in (Satta, 1994) reveals that the source of the 
claimed computational complexity of TAG pars- 
ing resides in the fact that auxiliary trees can 
get adjunctions at (at least) two distinct nodes 
in their spine (the path connecting the root and 
the foot nodes). The question then arises of 
whether the bound of two is tight. More gen- 
erally, in this paper we investigate which re- 
strictions on TAGs are needed in order to lower 
the O(n^6) time complexity, still retaining the 
generative power that is needed to capture the 
syntactic constructions of natural language that 
unrestricted TAGs can handle. The contribu- 
tion of this paper is twofold: 
• We define a strict subclass of TAG where 
adjunction of so-called wrapping trees at 
the spine is restricted to take place at no 
more than one distinct node. We show that 
in this case the parsing problem for TAG 
can be solved in worst case time O(n^5). 
• We provide evidence that the proposed 
subclass still captures the vast majority 
of TAG analyses that have been currently 
proposed for the syntax of English and of 
several other languages. 
Several restrictions on the adjunction operation for TAG have been proposed in the literature (Schabes and Waters, 1993; Schabes and Waters, 1995; Rogers, 1994). Unlike the present work, all of those proposals have the main goal of characterizing, through the adjunction operation, the set of trees that can be generated by a context-free grammar (CFG). For the sake of critical comparison, we discuss some common syntactic constructions found in current natural language TAG analyses that can be captured by our proposal but fall outside the restrictions mentioned above. 
2 Overview 
We introduce here the subclass of TAG that we 
investigate in this paper, and briefly compare it 
with other proposals in the literature. 
A TAG is a tuple G = (N, Σ, I, A, S), where N, Σ are the finite sets of nonterminal and terminal symbols, respectively, I, A are the finite sets of initial and auxiliary trees, respectively, and S ∈ N is the initial symbol. Trees in I ∪ A are also called elementary trees. The reader is referred to (Joshi, 1985) for the definitions of tree adjunction, tree substitution, and language derived by a TAG. 
The spine of an auxiliary tree is the (unique) 
path that connects the root and the foot node. 
An auxiliary tree β is called a right (left) tree if (i) the leftmost (rightmost, resp.) leaf in β is the foot node; and (ii) the spine of β contains only the root and the foot nodes. An auxiliary tree which is neither left nor right is called a wrapping tree.¹ 
The TAG restriction we propose is stated as follows: 

1. At the spine of each wrapping tree, there is at most one node that can host adjunction of a wrapping tree. This node is called a wrapping node. 

2. At the spine of each left (right) tree, no wrapping tree can be adjoined and no adjunction constraints on right (left, resp.) auxiliary trees are found. 
The above restriction does not in any way con- 
strain adjunction at nodes that are not in the 
spine of an auxiliary tree. Similarly, there is 
no restriction on the adjunction of left or right 
trees at the spines of wrapping trees. 
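To make the restriction concrete, condition 1 can be checked mechanically on a grammar. The following sketch assumes a hypothetical encoding in which each wrapping tree is given by its spine (a list of node identifiers) together with a predicate telling whether a node's adjunction constraints admit a wrapping tree; none of these names come from the paper.

```python
def obeys_condition_1(spines, can_host_wrapping):
    """Condition 1: along the spine of each wrapping tree, at most one
    node may host adjunction of a wrapping tree (the wrapping node).

    spines: list of spines, each a list of node identifiers.
    can_host_wrapping(node): True if a wrapping tree may adjoin at node.
    """
    for spine in spines:
        hosts = sum(1 for node in spine if can_host_wrapping(node))
        if hosts > 1:
            return False
    return True
```

For instance, a grammar whose only wrapping tree admits wrapping adjunction at two distinct spine nodes would be rejected by this check.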
Our restriction is fundamentally different 
from those in (Schabes and Waters, 1993; Sch- 
abes and Waters, 1995) and (Rogers, 1994), 
in that we allow wrapping auxiliary trees to 
nest inside each other an unbounded number 
of times, so long as they only adjoin at one 
place in each other's spines. Rogers, in contrast, 
restricts the nesting of wrapping auxiliaries to 
a number of times bounded by the size of the 
grammar, and Schabes and Waters forbid wrap- 
ping auxiliaries altogether, at any node in the 
grammar. 
We now focus on the recognition problem, 
and informally discuss the computational ad- 
vantages that arise in this task when a TAG 
obeys the above restriction. These ideas are 
formally developed in the next section. Most of 
¹The above names are also used in (Schabes and Waters, 1995) for slightly different kinds of trees. 
the tabular methods for TAG recognition rep- 
resent subtrees of derived trees, rooted at some 
node N and having the same span within the 
input string, by means of items of the form 
⟨N, i, p, q, j⟩. In this notation i, j are positions in the input spanned by N, and p, q are positions spanned by the foot node, in case N belongs to the spine, as we assume in the discussion below. 
Figure 1: O(n^6) wrapping adjunction step. 
The most time-expensive step in TAG recognition is the one that deals with adjunction. When we adjoin at N a derived auxiliary tree rooted at some node R, we have to combine together two items ⟨R, i', i, j, j'⟩ and ⟨N, i, p, q, j⟩. This is shown in Figure 1. This step involves six different indices that could range over any position in the input, and thus has a time cost of O(n^6). 
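In code, the standard adjunction step amounts to matching the foot span of the derived auxiliary tree against the span of the host subtree. The sketch below uses a hypothetical tuple encoding of items, not tied to any particular implementation; one evaluation per choice of the six indices gives the O(n^6) bound.

```python
def adjoin(aux_item, host_item):
    """Classic O(n^6) adjunction step: combine <R, i2, i, j, j2> with
    <N, i, p, q, j>, yielding <N, i2, p, q, j2>.  Six independent input
    positions (i2, i, p, q, j, j2) appear in the premises."""
    R, i2, fi, fj, j2 = aux_item   # auxiliary tree: span i2..j2, foot gap fi..fj
    N, i, p, q, j = host_item      # host subtree: span i..j, foot spans p..q
    if (fi, fj) != (i, j):         # the foot gap must match the host span
        return None
    return (N, i2, p, q, j2)
```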
Let us now consider adjunction of wrapping 
trees, and leave aside left and right trees for 
the moment. Assume that no adjunction has 
been performed in the portion of the spine 
below N. Then none of the trees adjoined 
below N will simultaneously affect the por- 
tions of the tree yield to the left and to the 
right of the foot node. In this case we can safely split the tree yield and represent item ⟨N, i, p, q, j⟩ by means of two items of a new kind, ⟨N_left, i, p⟩ and ⟨N_right, q, j⟩. The adjunction step can now be performed by means of two successive steps. The first step combines ⟨R, i', i, j, j'⟩ and ⟨N_left, i, p⟩, producing a new intermediate item I. The second step combines I and ⟨N_right, q, j⟩, producing the desired result. In this way the time cost is reduced to O(n^5). 
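The two successive steps can be sketched as follows, again with a hypothetical tuple encoding (names like N_left are only illustrative). Each step mentions at most five input positions, which is where the O(n^5) bound comes from.

```python
def combine_left(aux_item, left_item):
    """First step: <R, i2, i, j, j2> + <N_left, i, p>  ->  intermediate
    item I = (N, i2, p, j, j2).  Five positions: i2, i, p, j, j2."""
    R, i2, fi, fj, j2 = aux_item
    N, i, p = left_item
    if fi != i:                 # auxiliary foot must start where N_left starts
        return None
    return (N, i2, p, fj, j2)

def combine_right(inter, right_item):
    """Second step: I = (N, i2, p, j, j2) + <N_right, q, j>  ->
    <N, i2, p, q, j2>.  Five positions: i2, p, q, j, j2."""
    N, i2, p, fj, j2 = inter
    N2, q, j = right_item
    if N2 != N or j != fj:      # right half must close the same foot gap
        return None
    return (N, i2, p, q, j2)
```

Composing the two steps on the items of the previous example reproduces the result of the single six-index step.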
It is not difficult to see that the above rea- 
soning also applies in cases where no adjunc- 
tion has been performed at the portion of the 
spine above N. This suggests that, when processing a TAG that obeys the restriction introduced above, we can always 'split' each wrapping tree into four parts at the wrapping node N, since N is the only site in the spine that can host adjunction (see Figure 2(a)). Adjunction of a wrapping tree β at N can then be simulated by four steps, executed one after the other. Each step composes the item resulting from the application of the previous step with an item representing one of the four parts of the wrapping tree (see Figure 2(b)). 

Figure 2: O(n^5) wrapping adjunction step. 
We now consider adjunction involving left 
and right trees, and show that a similar splitting along the spine can be performed. Assume that γ is a derived auxiliary tree, obtained by adjoining several left and right trees one at the spine of the other. Let x and y be the parts of the yield of γ to the left and right, respectively, of the foot node. From the definition of left and right trees, we have that the nodes in the spine of γ all have the same nonterminal label. Also, from condition 2 in the above restriction we have that the left trees adjoined in γ do not constrain in any way the right trees adjoined in γ. Then the following derivation can always be performed. We adjoin all the left trees, each one at the spine of the other, in such a way that the resulting tree γ_left has yield x. Similarly, we adjoin all the right trees, one at the spine of the other, in such a way that the yield of the resulting tree γ_right is y. Finally, we adjoin γ_right at the root of γ_left, obtaining a derived tree having the same yield as γ. 

From the above observations it directly follows that we can always recognize the yield of γ by independently recognizing γ_left and γ_right. Most important, γ_left and γ_right can be represented by means of items ⟨R_left, i, p⟩ and ⟨R_right, q, j⟩. As before, the adjunction of tree γ at some subtree represented by an item I can be recognized by means of two successive steps, one combining I with ⟨R_left, i, p⟩ at its left, resulting in an intermediate item I', and the second combining I' with ⟨R_right, q, j⟩ at its right, obtaining the desired result. 
3 Recognition 
This section presents the main result of the pa- 
per. We provide an algorithm for the recogni- 
tion of languages generated by the subclass of 
TAGs introduced in the previous section, and 
show that the worst case running time is O(n^5), 
where n is the length of the input string. To 
simplify the presentation, we assume the fol- 
lowing conditions throughout this section: first, 
that elementary trees are binary (no more than 
two children at each node) and no leaf node is 
labeled by ε; and second, that there is always 
a wrapping node in each wrapping tree, and it 
differs from the foot and the root node. This is 
without any loss of generality. 
3.1 Grammar transformation 
Let G = (N, Σ, I, A, S) be a TAG obeying the restrictions of Section 2. We first transform A into a new set of auxiliary trees A' that will be processed by our method. The root and foot nodes of a tree β are denoted R_β and F_β, respectively. The wrapping node (as defined in Section 2) of β is denoted W_β. 
Each left (right) tree β in A is inserted in A' and is called β_L (β_R). Let β be a wrapping tree in A. We split β into four auxiliary trees, as informally described in Section 2. Let β_D be the subtree of β rooted at W_β. We call β_U the tree obtained from β by removing every descendant of W_β (and the corresponding arcs). We remove every node to the right (left) of the spine of β_D and call β_LD (β_RD) the resulting tree. Similarly, we remove every node to the right (left) of the spine of β_U and call β_LU (β_RU) the resulting tree. We set F_{β_LD} and F_{β_RD} equal to F_β, and set F_{β_LU} and F_{β_RU} equal to W_β. Trees β_LU, β_RU, β_LD, and β_RD are inserted in A' for every wrapping tree β in A. 
Each tree in A' inherits at its nodes the adjunction constraints specified in G. In addition, we impose the following constraints: 

• only trees β_L can be adjoined at the spine of trees β_LD, β_LU; 

• only trees β_R can be adjoined at the spine of trees β_RD, β_RU; 

• no adjunction can be performed at nodes F_{β_LU}, F_{β_RU}. 
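The four-way split can be sketched on a simplified tree encoding in which a wrapping tree is given by its spine from root to foot, with the material hanging to the left and right of each spine node; this flat encoding and the function name are illustrative assumptions, not the paper's own data structures.

```python
def split_wrapping(spine, w):
    """Split a wrapping tree into beta_LU, beta_RU, beta_LD, beta_RD.

    spine: list of (label, left_material, right_material) from root to
           foot (simplified encoding).
    w:     index of the wrapping node W_beta on the spine.
    The upper chunks keep spine[0..w] (their foot is W_beta); the lower
    chunks keep spine[w..end] (their foot is F_beta).  Each chunk keeps
    only the material on one side of the spine.
    """
    upper, lower = spine[:w + 1], spine[w:]
    beta_LU = [(lab, left, []) for (lab, left, _) in upper]
    beta_RU = [(lab, [], right) for (lab, _, right) in upper]
    beta_LD = [(lab, left, []) for (lab, left, _) in lower]
    beta_RD = [(lab, [], right) for (lab, _, right) in lower]
    return beta_LU, beta_RU, beta_LD, beta_RD
```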
3.2 The algorithm 
The algorithm below is a tabular method that 
works bottom up on derivation trees. Follow- 
ing (Shieber et al., 1995), we specify the algo- 
rithm using inference rules. (The specification 
has been optimized for presentation simplicity, 
not for computational efficiency.) 
Symbols N, P, Q denote nodes of trees in A' (including foot and root), α denotes initial trees and β denotes auxiliary trees. Symbol label(N) is the label of N and children(N) is a string denoting all children of N from left to right (children(N) is undefined if N is a leaf). We write α ∈ Sbst(N) if α can be substituted at N. We write β ∈ Adj(N) if β can be adjoined at N, and nil ∈ Adj(N) if adjunction at N is optional. 
We use two kinds of items: 

• Item ⟨N^X, i, j⟩, X ∈ {B, M, T}, denotes a subtree rooted at N and spanning the portion of the input from i to j. Note that two input positions are sufficient, since trees in A' always have their foot node at the position of the leftmost or rightmost leaf. We have X = B if N has not yet been processed for adjunction, X = M if N has been processed only for adjunction of trees β_L, and X = T if N has already been processed for adjunction. 

• Item ⟨β, i, p, q, j⟩ denotes a wrapping tree (in A) with R_β spanning the portion of the input from i to j and with F_β spanning the portion of the input from p to q. In place of β we might use symbols [β, LD], [β, RD] and [β, RU] to denote the temporary results of recognizing the adjunction of some wrapping tree at W_β. 
Algorithm. Let G be a TAG with the restrictions of Section 2, and let A' be the associated set of auxiliary trees defined as in Section 3.1. Let a_1 a_2 ... a_n, n ≥ 1, be an input string. The algorithm accepts the input iff some item ⟨R_α^T, 0, n⟩ can be inferred for some α ∈ I. 
Step 1 This step recognizes subtrees with root 
N from subtrees with roots in children(N). 
  -------------  label(N) = a_i;
  ⟨N^T, i-1, i⟩

  -------------  β ∈ A', 0 ≤ i ≤ n;
  ⟨F_β^B, i, i⟩

  ⟨R_α^T, i, j⟩
  -------------  α ∈ Sbst(N);
  ⟨N^T, i, j⟩

  ⟨P^T, i, k⟩   ⟨Q^T, k, j⟩
  -------------------------  children(N) = PQ;
  ⟨N^B, i, j⟩

  ⟨P^T, i, j⟩
  -------------  children(N) = P.
  ⟨N^B, i, j⟩
Step 2 This step recognizes the adjunction of wrapping trees at wrapping nodes. We recognize the tree hosting adjunction by composing its four 'chunks', represented by auxiliary trees β_LD, β_RD, β_RU and β_LU in A', around the adjoined wrapping tree. 
  ⟨R_{β_LD}^T, k, p⟩   ⟨β', i, k, q, j⟩
  --------------------------------------  β' ∈ Adj(W_β), p < q;
  ⟨[β, LD], i, p, q, j⟩

  ⟨R_{β_RD}^T, q, k⟩   ⟨[β, LD], i, p, k, j⟩
  ------------------------------------------  p < q;
  ⟨[β, RD], i, p, q, j⟩

  ⟨R_{β_RU}^T, k, j⟩   ⟨[β, RD], i, p, q, k⟩
  ------------------------------------------
  ⟨[β, RU], i, p, q, j⟩

  ⟨R_{β_LU}^T, i, k⟩   ⟨[β, RU], k, p, q, j⟩
  ------------------------------------------
  ⟨β, i, p, q, j⟩

  ⟨R_{β_LD}^T, i, p⟩   ⟨R_{β_RD}^T, q, j⟩
  ----------------------------------------  nil ∈ Adj(W_β), p < q.
  ⟨[β, RD], i, p, q, j⟩
Step 3 This step recognizes all remaining cases 
of adjunction. 
  ⟨R_{β_L}^T, i, k⟩   ⟨N^B, k, j⟩
  --------------------------------  β_L ∈ Adj(N), X ∈ {M, T};
  ⟨N^X, i, j⟩

  ⟨N^X, i, k⟩   ⟨R_{β_R}^T, k, j⟩
  --------------------------------  β_R ∈ Adj(N), X ∈ {B, M};
  ⟨N^T, i, j⟩

  ⟨N^B, i, j⟩
  -------------  nil ∈ Adj(N);
  ⟨N^T, i, j⟩

  ⟨N^B, p, q⟩   ⟨β, i, p, q, j⟩
  ------------------------------  β ∈ Adj(N).
  ⟨N^T, i, j⟩
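The inference rules above can be run to a fixpoint with a standard agenda-driven deduction engine in the style of (Shieber et al., 1995). The following control loop is a generic sketch (item and rule representations are hypothetical); all grammar-specific content lives in the rule functions.

```python
def deduce(axioms, rules, goals):
    """Generic bottom-up deduction.  Each rule is a function that, given
    a newly derived item and the current chart, returns the items it can
    infer using that item as one of its premises.  Runs to a fixpoint."""
    chart = set(axioms)
    agenda = list(axioms)
    while agenda:
        trigger = agenda.pop()
        for rule in rules:
            for item in rule(trigger, chart):
                if item not in chart:
                    chart.add(item)
                    agenda.append(item)
    return any(goal in chart for goal in goals)
```

For the algorithm above, the axioms are the items produced by the premise-free rules of Step 1, and the goal items are ⟨R_α^T, 0, n⟩ for α ∈ I.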
Due to restrictions on space, we merely claim 
the correctness of the above algorithm. We now 
establish its worst case time complexity with re- 
spect to the input string length n. We need to 
consider the maximum number d of input posi- 
tions appearing in the antecedent of an inference 
rule. In fact, in the worst case we will have to 
execute a number of different evaluations of each 
inference rule which is proportional to n d, and 
each evaluation can be carried out in an amount 
of time independent of n. It is easy to establish 
that Step 1 can be executed in time O(n^3) and that Step 3 can be executed in time O(n^4). Adjunction at wrapping nodes performed at Step 2 is the most expensive operation, requiring an amount of time O(n^5). This is also the time complexity of our algorithm. 
4 Linguistic Relevance 
In this section we will attempt to show that the 
restricted formalism presented in Section 2 re- 
tains enough generative power to make it useful 
in the general case. 
4.1 Athematic and Complement Trees 
We begin by introducing the distinction be- 
tween athematic auxiliary trees and comple- 
ment auxiliary trees (Kroch, 1989), which are 
meant to exhaustively characterize the auxil- 
iary trees used in any natural language TAG 
grammar.² An athematic auxiliary tree does 
not subcategorize for or assign a thematic role 
to its foot node, so the head of the foot node be- 
comes the head of the phrase at the root. The 
structure of an athematic auxiliary tree may 
thus be described as: 
  X^n → X^n ... (Y^max) ...,                    (1)

where X^n is any projection of category X, Y^max is the maximal projection of Y, and the order of the constituents is variable.³ A complement 
auxiliary tree, on the other hand, introduces a 
lexical head that subcategorizes for the tree's 
foot node and assigns it a thematic role. The 
structure of a complement auxiliary tree may be 
described as: 

  X^max → ... Y^0 ... X^max ...,                (2)

where X^max is the maximal projection of some category X, and Y^0 is the lexical projection of some category Y, whose maximal projection dominates X^max. 

²The same linguistic distinction is used in the conception of 'modifier' and 'predicative' trees (Schabes and Shieber, 1994), but Schabes and Shieber give the trees special properties in the calculation of derivation structures, which we do not. 

³The CFG-like notation is taken directly from (Kroch, 1989), where it is used to specify labels at the root and frontier nodes of a tree without placing constraints on the internal structure. 
From this we make the following observations: 
1. Because it does not assign a theta role to 
its foot node, an athematic auxiliary tree 
may adjoin at any projection of a category, 
which we take to designate any adjunction 
site in a host elementary tree. 
2. Because it does assign a theta role to its 
foot node, a complement auxiliary tree may 
only adjoin at a certain 'complement' ad- 
junction site in a host elementary tree, 
which must at least be a maximal projec- 
tion of a lexical category. 
3. The foot node of an athematic auxiliary 
tree is dominated only by the root, with 
no intervening nodes, so it falls outside of 
the maximal projection of the head. 
4. The foot node of a complement auxiliary 
tree is dominated by the maximal projec- 
tion of the head, which may also dominate 
other arguments on either side of the foot. 
To this we now add the assumption that each auxiliary tree can have only one complement adjunction site projecting from Y^0, where Y^0 is the lexical category that projects Y^max. This is justified in order to prevent projections of Y^0 from receiving more than one theta role from complement adjuncts, which would violate the underlying theta criterion in Government and Binding Theory (Chomsky, 1981). We also assume that an auxiliary tree cannot have complement adjunction sites on its spine projecting from lexical heads other than Y^0, in order to preserve the minimality of elementary trees (Kroch, 1989; Frank, 1992). Thus there can be no more than one complement adjunction site on the spine of any complement auxiliary tree, and no complement adjunction site on the spine of any athematic auxiliary tree, since the foot node of an athematic tree lies outside of the maximal projection of the head.⁴ 
⁴It is important to note that, in order to satisfy the theta criterion and minimality, we need only constrain the number of complement adjunctions (not the number of complement adjunction sites) on the spine of an auxiliary tree. Although this would remain within the power of our formalism, we prefer to use constraints expressed in terms of adjunction sites, as we did in Section 2, because it provides a restriction on elementary trees, rather than on derivations. 
Based on observations 3 and 4, we can fur- 
ther specify that only complement trees may 
wrap, because the foot node of an athematic 
tree lies outside of the maximal projection of the 
head, below which all of its subcategories must 
attach.⁵ In this manner, we can ensure that only 
one wrapping tree (the complement auxiliary) 
can adjoin into the spine of a wrapping (com- 
plement) auxiliary, and only athematic auxil- 
iaries (which must be left/right trees) can ad- 
join elsewhere, fulfilling our TAG restriction in 
Section 2. 
4.2 Possible Extensions 
We may want to weaken our definition to in- 
clude wrapping athematic auxiliaries, in order 
to account for modifiers with raised heads or 
complements as in Figure 3: "They so revered 
him that they built a statue in his honor." This 
can be done within the above algorithm as long 
as the athematic trees do not wrap productively (that is, as long as they cannot be adjoined one at the spine of the other) by splitting 
the athematic auxiliary tree down the spine and 
treating the two fragments as tree-local multi- 
components, which can be simulated with non- 
recursive features (Hockey and Srinivas, 1993). 
VP "" ....... "-. S WB' 
Adv VP* S' NI~ VP 
so C S~ v NI~ 
I I 
that revered 
Figure 3: Wrapping athematic tree. 
Since the added features are non-recursive, this extension would not alter the O(n^5) result reported in Section 3. 
4.3 Comparison of Coverage 
In contrast to the formalisms of Schabes and 
Waters (Schabes and Waters, 1993; Schabes and 
Waters, 1995), our restriction allows wrapping 
complement auxiliaries as in Figure 4 (Schabes 
and Waters, 1995). Although it is difficult to 
find examples in English which are excluded by 
⁵Except in the case of raising, discussed below. 
Rogers' regular form restriction (Rogers, 1994), 
we can cite verb-raised complement auxiliary 
trees in Dutch as in Figure 5 (Kroch and Santorini, 1991). Trees with this structure may adjoin into each other's internal spine nodes an unbounded number of times, in violation of 
Rogers' definition of regular form adjunction, 
but within our criteria of wrapping adjunction 
at only one node on the spine. 
Figure 4: Wrapping complement tree. 
Figure 5: Verb-raising tree in Dutch. 
5 Concluding remarks 
Our proposal is intended to contribute to the 
assessment of the computational complexity of 
syntactic processing. We have introduced a 
strict subclass of TAGs having the generative 
power that is needed to account for the syntac- 
tic constructions of natural language that unre- 
stricted TAGs can handle. We have specified a 
method that recognizes the generated languages 
in worst case time O(n^5), where n is the length 
of the input string. In order to account for the 
dependency on the input grammar G, let us define |G| = Σ_N (1 + |Adj(N)|), where N ranges over the set of all nodes of the elementary trees. It is not difficult to see that the running time of our method is proportional to |G|. 
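Under a hypothetical representation in which the grammar is given as a list of its elementary-tree nodes, each paired with its set Adj(N), the size measure amounts to:

```python
def grammar_size(nodes):
    """|G| = sum over all nodes N of (1 + |Adj(N)|), where nodes is a
    list of (node_id, adjoinable_trees) pairs (illustrative encoding)."""
    return sum(1 + len(adjoinable) for (_, adjoinable) in nodes)
```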
Our method works as a recognizer. As for 
many other tabular methods for TAG recogni- 
tion, we can devise simple procedures in order 
to obtain a derived tree associated with an ac- 
cepted string. To this end, we must be able to 
'interleave' adjunctions of left and right trees, 
that are always kept separate by our recognizer. 
The average case time complexity of our method should be better than its worst case performance, as is the case for many other tabular 
algorithms for TAG recognition. From a more applied perspective, then, the question arises 
of whether there is any gain in using an algo- 
rithm that is unable to recognize more than one 
wrapping adjunction at each spine, as opposed 
to using an unrestricted TAG algorithm. As 
we have tried to argue in Section 4, it seems 
that standard syntactic constructions do not ex- 
ploit multiple wrapping adjunctions at a single 
spine. Nevertheless, the local ambiguity of nat- 
ural language, as well as cases of ill-formed in- 
put, could always produce cases in which such 
expensive analyses are attempted by an unre- 
stricted algorithm. In this perspective, then, 
we conjecture that having the single-wrapping- 
adjunction restriction embedded into the rec- 
ognizer would improve processing efficiency in 
the average case. Of course, more experimental 
work would be needed in order to evaluate such 
a conjecture, which we leave for future work. 
Acknowledgments 
Part of this research was done while the first 
author was visiting the Institute for Research 
in Cognitive Science, University of Pennsylva- 
nia. The first author was supported by NSF 
grant SBR8920230. The second author was sup- 
ported by U.S. Army Research Office Contract 
No. DAAH04-94G-0426. The authors would 
like to thank Christy Doran, Aravind Joshi, 
Anthony Kroch, Mark-Jan Nederhof, Martha Palmer, James Rogers and Anoop Sarkar for 
their help in this research. 

References 
Noam Chomsky. 1981. Lectures on government and 
binding. Foris, Dordrecht. 
Robert Frank. 1992. Syntactic locality and tree ad- 
joining grammar: grammatical acquisition and 
processing perspectives. Ph.D. thesis, Computer 
Science Department, University of Pennsylvania. 
Beth Ann Hockey and Srinivas Bangalore. 1993. 
Feature-based TAG in place of multi-component 
adjunction: computational implications. In Pro- 
ceedings of the Natural Language Processing Pa- 
cific Rim Symposium (NLPRS), Fukuoka, Japan. 
Aravind K. Joshi. 1985. How much context sensitiv- 
ity is necessary for characterizing structural de- 
scriptions: Tree adjoining grammars. In D. Dowty, L. Karttunen, and A. Zwicky, editors, Natural 
language parsing: Psychological, computational 
and theoretical perspectives, pages 206-250. Cam- 
bridge University Press, Cambridge, U.K. 
Anthony S. Kroch and Beatrice Santorini. 1991. 
The derived constituent structure of the West Germanic verb-raising construction. In Robert Frei- 
din, editor, Principles and Parameters in Com- 
parative Grammar, pages 269-338. MIT Press. 
Anthony S. Kroch. 1989. Asymmetries in long dis- 
tance extraction in a TAG grammar. In M. Baltin 
and A. Kroch, editors, Alternative Conceptions 
of Phrase Structure, pages 66-98. University of 
Chicago Press. 
Sanguthevar Rajasekaran and Shibu Yooseph. 1995. 
TAL recognition in O(M(n^2)) time. In Proceed- 
ings of the 33rd Annual Meeting of the Association for Computational Linguistics (ACL '95). 
James Rogers. 1994. Capturing CFLs with tree 
adjoining grammars. In Proceedings of the 32nd 
Annual Meeting of the Association for Computa- 
tional Linguistics (ACL '94). 
Giorgio Satta. 1994. Tree adjoining grammar pars- 
ing and boolean matrix multiplication. Computa- 
tional Linguistics, 20(2):173-192. 
Yves Schabes and Aravind K. Joshi. 1991. Pars- 
ing with lexicalized tree adjoining grammar. In 
M. Tomita, editor, Current Issues in Parsing 
Technologies. Kluwer Academic Publishers. 
Yves Schabes and Stuart M. Shieber. 1994. An al- 
ternative conception of tree-adjoining derivation. 
Computational Linguistics, 20(1):91-124. 
Yves Schabes and Richard C. Waters. 1993. Lexi- 
calized context-free grammars. In Proceedings of 
the 31st Annual Meeting of the Association for Computational Linguistics (ACL '93). 
Yves Schabes and Richard C. Waters. 1995. Tree 
insertion grammar: A cubic-time parsable formal- 
ism that lexicalizes context-free grammar without 
changing the trees produced. Computational Lin- 
guistics, 21(4):479-515. 
Stuart M. Shieber, Yves Schabes, and Fer- 
nando C.N. Pereira. 1995. Principles and imple- 
mentation of deductive parsing. Journal of Logic 
Programming, 24:3-36. 
