Stochastically Evaluating the Validity of Partial Parse Trees in
Incremental Parsing
Yoshihide Kato1, Shigeki Matsubara2 and Yasuyoshi Inagaki3
Graduate School of International Development, Nagoya University 1
Information Technology Center, Nagoya University 2
Furo-cho, Chikusa-ku, Nagoya, 464-8601 Japan
Faculty of Information Science and Technology, Aichi Prefectural University 3
1522-3 Ibaragabasama, Kumabari, Nagakute-cho, Aichi-gun, 480-1198 Japan
yosihide@gsid.nagoya-u.ac.jp
Abstract
This paper proposes a method for evaluating the
validity of partial parse trees constructed in incre-
mental parsing. Our method is based on stochastic
incremental parsing, and it incrementally evaluates the validity of each partial parse tree on a word-by-word basis. The incremental parser returns a partial parse tree at the point where its validity exceeds a threshold. Our technique is effective for improving the accuracy of incremental parsing.
1 Introduction
Real-time spoken language processing systems,
such as simultaneous machine interpretation sys-
tems, are required to quickly respond to users’ utter-
ances. To fulfill the requirement, the system needs
to understand spoken language at least incremen-
tally (Allen et al., 2001; Inagaki and Matsubara,
1995; Milward and Cooper, 1994), that is, to ana-
lyze each input sentence from left to right and ac-
quire the content.
Several incremental parsing methods have been
proposed to date (Costa et al., 2001; Haddock,
1987; Matsubara et al., 1997; Milward, 1995;
Roark, 2001). These methods construct candidate
partial parse trees for initial fragments of the input
sentence on a word-by-word basis. However, these methods suffer from a local ambiguity problem: partial parse trees representing valid syntactic relations cannot be determined without information from the rest of the input sentence.
On the other hand, Marcus proposed a method
of deterministically constructing valid partial parse
trees by looking ahead several words (Marcus,
1980), while Kato et al. proposed an incremental parsing method which delays the decision of valid partial parse trees (Kato et al., 2000). However, it is hard to say that these methods realize broad-coverage incremental parsing. The method of Marcus (1980) uses lookahead rules, which are constructed by hand, and it is not clear whether broad-coverage lookahead rules can be obtained. The incremental parsing method of Kato et al. (2000), which is based on context-free grammar, cannot feasibly handle a large-scale grammar, because the parser exhaustively searches all candidate partial parse trees in top-down fashion.
This paper proposes a probabilistic incremental
parser which evaluates the validity of partial parse
trees. Our method extracts a grammar from a tree-
bank, and the incremental parsing uses a beam-
search strategy so that it realizes broad-coverage
parsing. To resolve local ambiguity, the parser in-
crementally evaluates the validity of partial parse
trees on a word-by-word basis, and delays the decision of which partial parse tree should be returned until its validity exceeds a threshold. Our technique is effective for improving the accuracy of incremental parsing.
This paper is organized as follows: The next
section proposes a probabilistic incremental parser.
Section 3 discusses the validity of partial parse trees constructed in incremental parsing. Section 4 proposes a method of incrementally evaluating the validity of partial parse trees. In section 5, we report an
experimental evaluation of our method.
2 TAG-based Incremental Parsing
Our incremental parsing is based on tree adjoining
grammar (TAG) (Joshi, 1985). This section pro-
poses a TAG-based incremental parsing method.
2.1 TAG for Incremental Parsing
Firstly, we propose incremental-parsing-oriented
TAG (ITAG). An ITAG comprises two sets of ele-
mentary trees just like TAG: initial trees and auxil-
iary trees. The difference between ITAG and TAG
is the form of elementary trees. Every ITAG initial tree is leftmost-expanded. A tree is leftmost-expanded if it has one of the following forms:
1. [t]X, where t is a terminal symbol and X is a
nonterminal symbol.
[Figure 1 (tree diagrams): initial trees α1–α10 and auxiliary trees β1 and β2, built from the words "I", "found", "a", "dime", "in", "the" and "wood".]
Figure 1: Examples of ITAG elementary trees
2. [τ X1 ... Xk]X, where τ is a leftmost-expanded tree and X1, ..., Xk, X are nonterminal symbols.

On the other hand, every ITAG auxiliary tree is of the following form:

    [X* τ X1 ... Xk]X

where τ is a leftmost-expanded tree and X, X1, ..., Xk are nonterminal symbols. X* is called a foot node. Figure 1 shows examples of ITAG elementary trees.
These elementary trees can be combined by using two operations: substitution and adjunction.

substitution  The substitution operation replaces the leftmost nonterminal leaf of a partial parse tree σ with an initial tree α having the same nonterminal symbol at its root. We write s_α for the operation of substituting α, and s_α(σ) for the result of applying s_α to σ.

adjunction  The adjunction operation splits a partial parse tree σ at a nonterminal node having no nonterminal leaf, and inserts an auxiliary tree β having the same nonterminal symbol at its root. We write a_β for the operation of adjoining β, and a_β(σ) for the result of applying a_β to σ.
The substitution operation is similar to rule expan-
sion of top-down incremental parsing such as (Mat-
subara et al., 1997; Roark, 2001). Furthermore,
by introducing the adjunction operation into incremental parsing, we can expect the local ambiguity of left-recursive structures to be reduced (Lombardo and Sturt, 1997).
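The substitution operation above can be sketched over a simple nested-tuple tree representation. This is a minimal illustration under our own assumptions, not the paper's implementation: trees are (label, children) pairs, an empty child list marks an open nonterminal leaf, and terminals are plain strings.

```python
def has_open_leaf(tree):
    """True if `tree` contains an open nonterminal leaf."""
    label, children = tree
    if not children:
        return True
    return any(isinstance(c, tuple) and has_open_leaf(c) for c in children)

def leftmost_substitute(tree, alpha):
    """Replace the leftmost open nonterminal leaf of `tree` with the
    initial tree `alpha`; return the new tree, or None if the leftmost
    open leaf's label does not match the root of `alpha`."""
    label, children = tree
    if not children:                       # open leaf: substitution site
        return alpha if alpha[0] == label else None
    for i, child in enumerate(children):
        if isinstance(child, tuple) and has_open_leaf(child):
            new_child = leftmost_substitute(child, alpha)
            if new_child is None:
                return None
            return (label, children[:i] + [new_child] + children[i + 1:])
    return None                            # tree is complete: no site
```

For example, substituting the initial tree for "found" into the partial parse tree for "I" produces the tree corresponding to #3 in Table 1.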
Our proposed incremental parsing is based on
ITAG. When the i-th word wi is scanned, the parser combines elementary trees for wi with the partial parse trees for w1 ... wi−1 to construct the partial parse trees for w1 ... wi.
As an example, let us consider incremental pars-
ing of the following sentence by using ITAG shown
in Figure 1:
I found a dime in the wood. (1)
Table 1 shows the process of tree construction
for the sentence (1). When the word “found” is
scanned, partial parse trees #3, #4 and #5 are con-
structed by applying substitution operations to par-
tial parse tree #2 for the initial fragment “I”. When
the word “in” is scanned, partial parse trees #12 and
#13 are constructed by applying adjunction opera-
tions to partial parse tree #10 for the initial fragment “I found a dime”. This example shows that ITAG-based incremental parsing is capable of constructing partial parse trees of initial fragments for every word input.
2.2 ITAG Extraction from Treebank
Here, we propose a method for extracting an ITAG
from a treebank to realize broad-coverage incre-
mental parsing. Our method decomposes parse trees
in treebank to obtain ITAG elementary trees. The
decomposition is as follows:
• for each node η1 having no left-sibling: if the parent ηp has the same nonterminal symbol as η1, split the parse tree at η1 and ηp, and combine the upper and lower trees; the node η1 of the intermediate tree becomes a foot node.

• for each node η2 having only one left-sibling: if the parent ηp does not have the same nonterminal symbol as the left-sibling η1 of η2, split the parse tree at η2.

• for any other node η in the parse tree: split the parse tree at η.
For example, the initial trees α1, α2, α5, α7, α8 and α10 and the auxiliary tree β2 are extracted from the parse tree #18 in Table 1.

Our proposed tree extraction is similar to the TAG extraction methods proposed in the literature (Chen and Vijay-Shanker, 2000; Chiang, 2003; Xia, 1999). The main difference lies in the positions of the nodes at which the parse trees are split. While those methods utilize head percolation rules to split the parse trees at complement nodes, our method splits the parse trees
Table 1: Incremental parsing process of “I found a dime in the wood.”

word   #   partial parse tree
       1   s
I      2   [[[I]prp]np vp]s
found  3   [[[I]prp]np [[found]vb np]vp]s
       4   [[[I]prp]np [[found]vb np adjp]vp]s
       5   [[[I]prp]np [[found]vb]vp]s
a      6   [[[I]prp]np [[found]vb [[a]dt nn]np]vp]s
       7   [[[I]prp]np [[found]vb [[a]dt jj nn]np]vp]s
       8   [[[I]prp]np [[found]vb [[a]dt nn]np adjp]vp]s
       9   [[[I]prp]np [[found]vb [[a]dt jj nn]np adjp]vp]s
dime   10  [[[I]prp]np [[found]vb [[a]dt [dime]nn]np]vp]s
       11  [[[I]prp]np [[found]vb [[a]dt [dime]nn]np adjp]vp]s
in     12  [[[I]prp]np [[[found]vb [[a]dt [dime]nn]np]vp [[in]in np]pp]vp]s
       13  [[[I]prp]np [[found]vb [[[a]dt [dime]nn]np [[in]in np]pp]np]vp]s
the    14  [[[I]prp]np [[[found]vb [[a]dt [dime]nn]np]vp [[in]in [[the]dt nn]np]pp]vp]s
       15  [[[I]prp]np [[[found]vb [[a]dt [dime]nn]np]vp [[in]in [[the]dt jj nn]np]pp]vp]s
       16  [[[I]prp]np [[found]vb [[[a]dt [dime]nn]np [[in]in [[the]dt nn]np]pp]np]vp]s
       17  [[[I]prp]np [[found]vb [[[a]dt [dime]nn]np [[in]in [[the]dt jj nn]np]pp]np]vp]s
wood   18  [[[I]prp]np [[[found]vb [[a]dt [dime]nn]np]vp [[in]in [[the]dt [wood]nn]np]pp]vp]s
       19  [[[I]prp]np [[found]vb [[[a]dt [dime]nn]np [[in]in [[the]dt [wood]nn]np]pp]np]vp]s
at left-recursive nodes and at nodes having a left-sibling. The elementary trees extracted by our method are of the forms described in section 2.1, and can be combined from left to right on a word-by-word basis. This property is suitable for incremental parsing. On the other hand, the elementary trees obtained by methods based on head information do not necessarily have this property 1.
2.3 Probabilistic ITAG
This section describes probabilistic ITAG (PITAG), which is utilized for evaluating partial parse trees in incremental parsing. PITAG assigns a probability to the event that an elementary tree is combined by substitution or adjunction with another tree.
We induce the probabilities by maximum likelihood estimation. Let α be an initial tree and X be the root symbol of α. The probability that α is substituted is calculated as follows:

    P(s_α) = C(s_α) / Σ_{α′ ∈ I(X)} C(s_{α′})    (2)

where C(s_α) is the number of times the substitution s_α is applied in the treebank, and I(X) is the set of initial trees whose root is labeled with X.
1 For example, tree extraction based on head information splits the parse tree #18 at the node labeled dt to obtain the elementary tree [a]dt for “a”. However, the tree [a]dt cannot be combined with the partial parse tree for “I found”, since the substitution node labeled dt occurs in the initial tree [dt [dime]nn]np for “dime”, not in the partial parse trees for “I found”.
Let β be an auxiliary tree and X be the root symbol of β. The probability that β is adjoined is calculated as follows:

    P(a_β) = C(a_β) / C(X)    (3)

where C(X) is the number of occurrences of symbol X. The probability that adjunction is not applied is calculated as follows:

    P(nil_X) = 1 − Σ_{β ∈ A(X)} P(a_β)    (4)

where nil_X means that adjunction is not applied to a node labeled with X, and A(X) is the set of all auxiliary trees whose root is labeled X.
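The maximum-likelihood estimates in equations (2)-(4) can be sketched as follows, assuming the operation counts have already been collected from the treebank; the function and argument names are illustrative, not from the paper.

```python
from collections import Counter

def estimate_pitag(subst_counts, adj_counts, symbol_counts, root_of):
    """MLE sketch of equations (2)-(4).
    subst_counts[a]: C(s_a) for initial tree a;
    adj_counts[b]: C(a_b) for auxiliary tree b;
    symbol_counts[X]: C(X), occurrences of nonterminal X;
    root_of[t]: root symbol of elementary tree t."""
    # Equation (2): normalize substitution counts per root symbol.
    totals = Counter()
    for a, c in subst_counts.items():
        totals[root_of[a]] += c
    p_subst = {a: c / totals[root_of[a]] for a, c in subst_counts.items()}
    # Equation (3): adjunction counts relative to symbol occurrences.
    p_adj = {b: c / symbol_counts[root_of[b]] for b, c in adj_counts.items()}
    # Equation (4): probability that no adjunction applies at X.
    p_nil = {X: 1.0 for X in symbol_counts}
    for b, p in p_adj.items():
        p_nil[root_of[b]] -= p
    return p_subst, p_adj, p_nil
```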
In this PITAG formalism, the probability that elementary trees are combined at each node depends only on the nonterminal symbol of that node 2.

The probability of a parse tree is the product of the probabilities of the operations used in its construction. For example, suppose the probability of each operation is given as shown in Table 2. The probability of the partial parse tree #12, which is constructed by using s_α1, s_α2, s_α5, s_α7, nil_NP and a_β2, is 1 × 0.7 × 0.3 × 0.5 × 0.7 × 0.7 = 0.05145.

We write P(σ) for the probability of a partial parse tree σ.
2The PITAG formalism corresponds to SLG(1) in the liter-
ature (Carroll and Weir, 2003).
Table 2: Probabilities of operations

operation           probability
s_α1                1.0
s_α2                0.7
s_α7, s_α10         0.5
s_α5, s_α8          0.3
s_α4, s_α6, s_α9    0.2
s_α3                0.1
a_β1                0.3
a_β2                0.7
nil_NP              0.7
nil_VP              0.3
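The product computation for a tree's probability can be sketched directly from Table 2; the operation names (s_a1 for s_α1, a_b2 for a_β2, and so on) are our ASCII rendering.

```python
from math import prod

# Operation probabilities from Table 2.
OP_PROB = {"s_a1": 1.0, "s_a2": 0.7, "s_a7": 0.5, "s_a10": 0.5,
           "s_a5": 0.3, "s_a8": 0.3, "s_a4": 0.2, "s_a6": 0.2,
           "s_a9": 0.2, "s_a3": 0.1, "a_b1": 0.3, "a_b2": 0.7,
           "nil_NP": 0.7, "nil_VP": 0.3}

def tree_prob(operations):
    """P(tree) = product of the probabilities of its operations."""
    return prod(OP_PROB[op] for op in operations)

# Partial parse tree #12 is built by s_a1, s_a2, s_a5, s_a7, nil_NP, a_b2:
p12 = tree_prob(["s_a1", "s_a2", "s_a5", "s_a7", "nil_NP", "a_b2"])
```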
2.4 Parsing Strategies
In order to improve the efficiency of parsing, we adopt the following two strategies:

• If two partial parse trees have the same sequence of nodes to which ITAG operations are applicable, the tree with the lower probability can be safely discarded.

• The parser keeps only the n-best partial parse trees.
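The two strategies can be sketched as a single pruning step, assuming each candidate carries a signature encoding its sequence of operable nodes; the triple representation is our illustrative choice, not the paper's data structure.

```python
def prune(candidates, beam_width):
    """Prune a list of (tree, prob, signature) triples, where
    `signature` stands for the sequence of nodes to which ITAG
    operations are still applicable."""
    best = {}
    # Strategy 1: among trees with the same operable-node sequence,
    # keep only the most probable one.
    for tree, prob, sig in candidates:
        if sig not in best or prob > best[sig][1]:
            best[sig] = (tree, prob, sig)
    # Strategy 2: keep only the n-best survivors.
    survivors = sorted(best.values(), key=lambda c: c[1], reverse=True)
    return survivors[:beam_width]
```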
3 Validity of Partial Parse Trees
This section gives some definitions concerning the validity of a partial parse tree. Before describing the validity of a partial parse tree, we define the subsumption relation between partial parse trees.

Definition 1 (subsumption relation)  Let σ and τ be partial parse trees. We write σ → τ if s_α(σ) = τ for some initial tree α, or a_β(σ) = τ for some auxiliary tree β. Let →* be the reflexive transitive closure of →. We say that σ subsumes τ if σ →* τ. □
That σ subsumes τ means that τ can be obtained from σ by applying a sequence of substitutions and adjunctions. Figure 2 shows the subsumption relation between the partial parse trees constructed for the sentence (1).
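The subsumption test σ →* τ amounts to reachability under the two operations, which can be sketched as a plain graph search. The `successors` function is assumed given; in practice only trees whose yield is a prefix of the input sentence need be enumerated, so the search is finite.

```python
def subsumes(sigma, tau, successors):
    """Test sigma ->* tau, where successors(t) enumerates the trees
    reachable from t by one substitution or adjunction."""
    frontier = [sigma]
    seen = set()
    while frontier:
        t = frontier.pop()
        if t == tau:
            return True
        if t in seen:
            continue
        seen.add(t)
        frontier.extend(successors(t))
    return False
```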
If a partial parse tree for an initial fragment represents syntactic relations correctly, the partial parse tree subsumes the correct parse tree for the input sentence. We say that such a partial parse tree is valid. The validity of a partial parse tree is defined as follows:

Definition 2 (valid partial parse tree)  Let σ be a partial parse tree and w1 ... wn be an input sentence. We say that σ is valid for w1 ... wn if σ subsumes the correct parse tree for w1 ... wn. □
[Figure 2 (diagram): the subsumption relation among the partial parse trees #1–#19 constructed for “I found a dime in the wood.”]
Figure 2: Subsumption relation between partial parse trees
[Figure 3 (diagram): the subsumption relation of Figure 2, with the partial parse trees that are valid for sentence (1), i.e. those subsuming #18, marked.]
Figure 3: Valid partial parse trees
For example, assume that #18 is the correct parse tree for the sentence (1). Then partial parse tree #3 is valid for the sentence (1), because #3 →* #18. On the other hand, partial parse trees #4 and #5 are not valid for (1). Figure 3 shows the valid partial parse trees for the sentence (1).
4 Evaluating the Validity of Partial Parse Trees
The validity of a partial parse tree for an initial fragment depends on the rest of the sentence. For example, the validity of the partial parse trees #3, #4 and #5 depends on the remaining input that follows the word “found.” This means that the validity dynamically varies with every word input. We therefore define the conditional validity of a partial parse tree:

    V(σ | w1 ... wj) = Σ_{τ ∈ Sub(σ, w1 ... wj)} P(τ) / Σ_{τ ∈ T(w1 ... wj)} P(τ)    (5)

where σ is a partial parse tree for an initial fragment w1 ... wi (i ≤ j), T(w1 ... wj) is the set of constructed partial parse trees for the initial fragment w1 ... wj, and Sub(σ, w1 ... wj) is the subset of T(w1 ... wj) whose elements are subsumed by σ. Equation (5) represents the validity of σ conditioned on w1 ... wj: σ is valid for the input sentence if and only if some partial parse tree for w1 ... wj subsumed by σ is valid, and equation (5) is the probability mass of such partial parse trees relative to all constructed partial parse trees.
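Equation (5) can be sketched directly, assuming the probabilities P(τ) and a subsumption test are available; all names here are illustrative.

```python
def validity(sigma, candidates, prob, subsumed_by):
    """Sketch of equation (5): V(sigma | w1...wj).
    candidates: partial parse trees constructed for w1...wj;
    prob[tau]: P(tau);
    subsumed_by(sigma, tau): True if sigma ->* tau."""
    total = sum(prob[t] for t in candidates)
    mass = sum(prob[t] for t in candidates if subsumed_by(sigma, t))
    return mass / total
```

With the probabilities of the worked example in section 4.2, this reproduces V(#3 | I found a) = 0.875.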
4.1 Output Partial Parse Trees
Kato et al. proposed a method of delaying the decision of which partial parse trees should be returned as output until their validity is guaranteed (Kato et al., 2000). The idea of delaying the output decision is attractive. However, delaying the decision until validity is guaranteed may cause the parsing to lose its incrementality.

To solve this problem, our incremental parser returns partial parse trees of high validity rather than validity-guaranteed partial parse trees.
When the j-th word wj is scanned, our incremental parser returns the following partial parse tree:

    argmax_{σ : V(σ | w1 ... wj) ≥ θ} l(σ)    (6)

where θ is a threshold in [0, 1] and l(σ) is the length of the initial fragment yielded by σ. The output is thus the partial parse tree covering the longest initial fragment among those whose validity is at least the threshold θ.
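The output rule (6) can be sketched as follows, assuming the validities and fragment lengths of the candidates are precomputed; the names are illustrative.

```python
def select_output(candidates, val, length, theta):
    """Sketch of rule (6): among candidates whose validity meets the
    threshold, return the one yielding the longest initial fragment
    (None if no candidate qualifies).
    val[t]: V(t | w1...wj); length[t]: l(t)."""
    qualified = [t for t in candidates if val[t] >= theta]
    if not qualified:
        return None
    return max(qualified, key=lambda t: length[t])
```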
4.2 An Example
Let us consider a parsing example for the sentence (1). We assume that the threshold θ = 0.8.

Consider when the partial parse tree #3, which is valid for (1), is returned as output. When the word “found” is scanned, the partial parse trees #3, #4 and #5 are constructed, that is, T(I found) = {#3, #4, #5}. As shown in Figure 2, Sub(#3, I found) = {#3}. Furthermore, P(#3) = 0.7, P(#4) = 0.1 and P(#5) = 0.2. Therefore, V(#3 | I found) = 0.7/(0.7 + 0.1 + 0.2) = 0.7. Because V(#3 | I found) < θ, partial parse tree #3 is not returned as output at this point; the parser only keeps #3 as a candidate partial parse tree.

When the next word “a” is scanned, the partial parse trees #6, #7, #8 and #9 are constructed, where P(#6) = 0.21, P(#7) = 0.14, P(#8) = 0.03 and P(#9) = 0.02, and Sub(#3, I found a) = {#6, #7}. Therefore, V(#3 | I found a) = (0.21 + 0.14)/(0.21 + 0.14 + 0.03 + 0.02) = 0.875. Because V(#3 | I found a) ≥ θ, partial parse tree #3 is returned as output.
Table 3 shows the output partial parse tree for ev-
ery word input.
Our incremental parser delays the decision of the
output as shown in this example.
Table 3: Output partial parse trees
input word output partial parse tree
I #2
found
a #3
dime #10
in #12
the
wood #18
5 Experimental Results
To evaluate the performance of our proposed
method, we performed a parsing experiment. The
parser was implemented in GNU Common Lisp on a
Linux PC. In the experiment, the inputs of the incre-
mental parser are POS sequences rather than word
sequences. We used 47247 initial trees and 2931
auxiliary trees for the experiment. The elementary
trees were extracted from the parse trees in sec-
tions 02-21 of the Wall Street Journal in Penn Tree-
bank (Marcus et al., 1993), which is transformed
by using parent-child annotation and left factoring
(Roark and Johnson, 1999). We set the beam-width
at 500.
The labeled precision and recall of the parsing are 80.8% and 78.5%, respectively, on section 23 of the Penn Treebank. In the following measurements, we used the set of sentences for which the outputs of the incremental parser are identical to the correct parse trees in the Penn Treebank. The number of these sentences is 451, and their average length is 13.5 words. We measured the delays and the precisions for validity thresholds θ = 0.5, 0.6, 0.7, 0.8, 0.9 and 1.0.
We define the degree of delay as follows: Let s = w1 ... wn be an input sentence and o_j(s) be the partial parse tree output when the j-th word wj is scanned. We define the degree of delay when the j-th word is scanned as follows:

    D(j, s) = j − l(o_j(s))    (7)

We define the maximum delay Dmax(s) and the average delay Dave(s) as follows:

    Dmax(s) = max_{1 ≤ j ≤ n} D(j, s)    (8)

    Dave(s) = (1/n) Σ_{j=1}^{n} D(j, s)    (9)

The precision is defined as the percentage of valid partial parse trees among the outputs.
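The delay measures (7)-(9) can be sketched as follows. The example lengths follow Table 3 for “I found a dime in the wood”, under our assumption that the previous output stands when no new tree is returned at a word.

```python
def delays(output_lengths):
    """Sketch of equations (7)-(9). output_lengths[j-1] = l(o_j(s)),
    the length of the fragment covered by the output after word j."""
    # D(j, s) = j - l(o_j(s)) for each word position j.
    d = [j - l for j, l in enumerate(output_lengths, start=1)]
    return max(d), sum(d) / len(d)   # Dmax(s), Dave(s)

# Covered lengths per word for the outputs of Table 3: 1, 1, 2, 4, 5, 5, 7.
d_max, d_ave = delays([1, 1, 2, 4, 5, 5, 7])
```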
Moreover, we measured the precision of a parser whose delay is always 0 and which always returns the partial parse tree having the highest probability.
Table 4: Precisions and delays

            precision(%)   Dmax   Dave
θ = 1.0     100.0          11.9   6.4
θ = 0.9      97.3           7.5   2.9
θ = 0.8      95.4           6.4   2.2
θ = 0.7      92.5           5.5   1.8
θ = 0.6      88.4           4.5   1.3
θ = 0.5      83.0           3.4   0.9
baseline     73.6           0.0   0.0
[Figure 4 (plot): delay (number of words) versus precision (%), showing curves for Dmax and Dave and the baseline point.]
Figure 4: Relation between precision and delay
We call this parser the baseline.
Table 4 shows the precisions and delays. Figure
4 illustrates the relation between the precisions and
delays.
The experimental result demonstrates that there
is a precision/delay trade-off. Our proposed method
increases the precision in comparison with the baseline, while returning the output is delayed. When θ = 1.0, the output partial parse trees are guaranteed to be valid; that is, our method then behaves like the method of Kato et al. (2000). In comparison with this case, our method with θ < 1 dramatically decreases the delay.
Although the result does not necessarily demonstrate that our method is the best one, it achieves both high accuracy and short delay to a certain extent.
6 Concluding Remarks
In this paper, we have proposed a method of evaluating the validity of partial parse trees constructed in incremental parsing. The method is based on probabilistic incremental parsing. When a word is scanned, the method incrementally calculates the validity of each partial parse tree and returns the partial parse tree whose validity is greater than the threshold. Our method thus delays the decision of which partial parse tree should be returned.
To evaluate the performance of our method, we
conducted a parsing experiment using the Penn
Treebank. The experimental result shows that our
method improves the accuracy of incremental pars-
ing.
The experiment demonstrated a precision/delay
trade-off. To evaluate overall performance of in-
cremental parsing, we would like to investigate a
single measure into which delay and precision are
combined.
Acknowledgement
This work is partially supported by the Grant-in-Aid
for Scientific Research of the Ministry of Education,
Science, Sports and Culture, Japan (No. 15300044),
and The Tatematsu Foundation.
References
J. Allen, G. Ferguson, and A. Stent. 2001. An Ar-
chitecture for More Realistic Conversational Sys-
tems. In Proceedings of International Confer-
ence of Intelligent User Interfaces, pages 1–8.
J. Carroll and D. Weir. 2003. Encoding Frequency
Information in Stochastic Parsing Models. In
R. Bod, R. Scha, and K. Sima’an, editors, Data-
Oriented Parsing, pages 43–60. CSLI Publica-
tions, Stanford.
J. Chen and K. Vijay-Shanker. 2000. Automated
Extraction of TAGs from the Penn Treebank. In
Proceedings of the 6th International Workshop on
Parsing Technologies, pages 65–76.
D. Chiang. 2003. Statistical Parsing with an Auto-
matically Extracted Tree Adjoining Grammar. In
R. Bod, R. Scha, and K. Sima’an, editors, Data-
Oriented Parsing, pages 299–316. CSLI Publica-
tions, Stanford.
F. Costa, V. Lombardo, P. Frasconi, and G. Soda.
2001. Wide Coverage Incremental Parsing by
Learning Attachment Preferences. In Proceed-
ings of the 7th Congress of the Italian Association
for Artificial Intelligence, pages 297–307.
N. J. Haddock. 1987. Incremental Interpretation
and Combinatory Categorial Grammar. In Pro-
ceedings of the 10th International Joint Confer-
ence on Artificial Intelligence, pages 661–663.
Y. Inagaki and S. Matsubara. 1995. Models for In-
cremental Interpretation of Natural Language. In
Proceedings of the 2nd Symposium on Natural
Language Processing, pages 51–60.
A. K. Joshi. 1985. Tree Adjoining Grammar: How
Much Context-Sensitivity is required to provide
reasonable structural descriptions? In D. R.
Dowty, L. Karttunen, and A. Zwicky, editors,
Natural Language Parsing, pages 206–250. Cam-
bridge University Press, Cambridge.
Y. Kato, S. Matsubara, K. Toyama, and Y. Ina-
gaki. 2000. Spoken Language Parsing based on
Incremental Disambiguation. In Proceedings of
the 6th International Conference on Spoken Lan-
guage Processing, volume 2, pages 999–1002.
V. Lombardo and P. Sturt. 1997. Incremental Pro-
cessing and Infinite Local Ambiguity. In Pro-
ceedings of the 19th Annual Conference of the
Cognitive Science Society, pages 448–453.
M. P. Marcus, B. Santorini, and M. A.
Marcinkiewicz. 1993. Building a Large Anno-
tated Corpus of English: the Penn Treebank.
Computational Linguistics, 19(2):310–330.
M. Marcus. 1980. A Theory of Syntactic Recognition for Natural Language. MIT Press, Cambridge, MA.
S. Matsubara, S. Asai, K. Toyama, and Y. Inagaki.
1997. Chart-based Parsing and Transfer in In-
cremental Spoken Language Translation. In Pro-
ceedings of the 4th Natural Language Processing
Pacific Rim Symposium, pages 521–524.
D. Milward and R. Cooper. 1994. Incremental In-
terpretation: Applications, Theory, and Relation-
ship to Dynamic Semantics. In Proceedings of
the 15th International Conference on Computa-
tional Linguistics, pages 748–754.
D. Milward. 1995. Incremental Interpretation of
Categorial Grammar. In Proceedings of the 7th
Conference of European Chapter of the Associ-
ation for Computational Linguistics, pages 119–
126.
B. Roark and M. Johnson. 1999. Efficient Prob-
abilistic Top-down and Left-corner Parsing. In
Proceedings of the 37th Annual Meeting of the
Association for Computational Linguistics, pages
421–428.
B. Roark. 2001. Probabilistic Top-Down Parsing
and Language Modeling. Computational Lin-
guistics, 27(2):249–276.
F. Xia. 1999. Extracting Tree Adjoining Grammars from Bracketed Corpora. In Proceedings of
the 5th Natural Language Processing Pacific Rim
Symposium, pages 398–403.