Compilation of Weighted Finite-State Transducers from Decision Trees

Richard Sproat
Bell Laboratories
700 Mountain Avenue
Murray Hill, NJ, USA
rws@bell-labs.com

Michael Riley
AT&T Research
600 Mountain Avenue
Murray Hill, NJ, USA
riley@research.att.com
Abstract 
We report on a method for compiling decision trees into weighted finite-state transducers. The key assumptions are that the tree predictions specify how to rewrite symbols from an input string, and that the decision at each tree node is stateable in terms of regular expressions on the input string. Each leaf node can then be treated as a separate rule whose left and right contexts are constructable from the decisions made in traversing the tree from the root to the leaf. These rules are compiled into transducers using the weighted rewrite-rule compilation algorithm described in (Mohri and Sproat, 1996).
1 Introduction 
Much attention has been devoted recently to methods for inferring linguistic models from data. One powerful inference method that has been used in various applications is decision trees, and in particular classification and regression trees (Breiman et al., 1984).

An increasing amount of attention has also been focussed on finite-state methods for implementing linguistic models, in particular finite-state transducers and weighted finite-state transducers; see (Kaplan and Kay, 1994; Pereira et al., 1994, inter alia). The reason for the renewed interest in finite-state mechanisms is clear. Finite-state machines provide a mathematically well-understood computational framework for representing a wide variety of information, both in NLP and speech processing. Lexicons, phonological rules, Hidden Markov Models, and (regular) grammars are all representable as finite-state machines, and finite-state operations such as union, intersection and composition mean that information from these various sources can be combined in useful and computationally attractive ways. The reader is referred to the above-cited papers (among others) for more extensive justification.
This paper reports on a marriage of these two strands of research in the form of an algorithm for compiling the information in decision trees into weighted finite-state transducers.[1] Given this algorithm, information inferred from data and represented in a tree can be used directly in a system that represents other information, such as lexicons or grammars, in the form of finite-state machines.
2 Quick Review of Tree-Based Modeling
A general introduction to classification and regression trees ('CART'), including the algorithm for growing trees from data, can be found in (Breiman et al., 1984). Applications of tree-based modeling to problems in speech and NLP are discussed in (Riley, 1989; Riley, 1991; Wang and Hirschberg, 1992; Magerman, 1995, inter alia). In this section we presume that one has already trained a tree or set of trees, and we merely remind the reader of the salient points in the interpretation of those trees.
Consider the tree depicted in Figure 1, which was trained on the TIMIT database (Fisher et al., 1987), and which models the phonetic realization of the English phoneme /aa/ (/ɑ/) in various environments (Riley, 1991). When this tree is used in predicting the allophonic form of a particular instance of /aa/, one starts at the root of the tree and asks questions about the environment in which the /aa/ is found. Each non-leaf node n dominates two daughter nodes conventionally labeled as 2n and 2n + 1; the decision on whether to go left to 2n or right to 2n + 1 depends on the answer to the question that is being asked at node n.
[1] The work reported here can thus be seen as complementary to recent reports on methods for directly inferring transducers from data (Oncina et al., 1993; Gildea and Jurafsky, 1995).
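The numbering convention makes traversal trivial to implement. The following is a minimal illustrative sketch (not the authors' code; the environment features and the placeholder leaves are hypothetical stand-ins drawn from the discussion of Figure 1):

```python
def predict(tree, env):
    """Walk a tree stored as {node number: question or leaf distribution},
    using the convention that node n dominates nodes 2n and 2n + 1."""
    n = 1                                          # start at the root
    while callable(tree[n]):                       # internal nodes hold questions
        n = 2 * n if tree[n](env) else 2 * n + 1   # left on True, right on False
    return tree[n]                                 # leaves hold output distributions

# Fragment of the /aa/ tree: the root asks about lseg, node 2 about cp1.
aa_tree = {
    1: lambda e: e["lseg"] == 1,                   # is the /aa/ word-initial?
    2: lambda e: e["cp1"] == "alv",                # is the next consonant alveolar?
    3: {"aa": 1.0},                                # placeholder leaf (hypothetical)
    4: {"ao": 0.385, "aa": 0.289, "q+aa": 0.103,
        "q+ao": 0.096, "ah": 0.069, "ax": 0.058},  # leaf 4, from the text below
    5: {"aa": 1.0},                                # placeholder leaf (hypothetical)
}

print(predict(aa_tree, {"lseg": 1, "cp1": "alv"}))  # reaches leaf 4
```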
A concrete example will serve to illustrate. Consider that we have /aa/ in some environment. The first question that is asked concerns the number of segments, including the /aa/ itself, that occur to the left of the /aa/ in the word in which it occurs. (See Table 1 for an explanation of the symbols used in Figure 1.) In this case, if the /aa/ is initial -- i.e., lseg is 1 -- one goes left; if there is one or more segments to the left in the word, one goes right. Let us assume that this /aa/ is initial in the word, in which case we go left. The next question concerns the consonantal 'place' of articulation of the segment to the right of /aa/; if it is alveolar, go left; otherwise, if it is of some other quality, or if the segment to the right of /aa/ is not a consonant, then go right. Let us assume that the segment to the right is /z/, which is alveolar, so we go left. This lands us at terminal node 4. The tree in Figure 1 shows us that in the training data 119 out of 308 occurrences of /aa/ in this environment were realized as [ao], or in other words that we can estimate the probability of /aa/ being realized as [ao] in this environment as 0.385. The full set of realizations at this node with estimated non-zero probabilities is as follows (see Table 2 for a relevant set of ARPABET-IPA correspondences):
phone   probability   -log prob. (weight)
ao      0.385         0.95
aa      0.289         1.24
q+aa    0.103         2.27
q+ao    0.096         2.34
ah      0.069         2.68
ax      0.058         2.84
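The weights are simply negative natural logarithms of the leaf's relative frequencies, as a quick check confirms (a sketch; only the 119/308 count for [ao] is given in the text, so the remaining entries are taken from the probability column):

```python
import math

probs = {"ao": 0.385, "aa": 0.289, "q+aa": 0.103,
         "q+ao": 0.096, "ah": 0.069, "ax": 0.058}
weights = {ph: -math.log(p) for ph, p in probs.items()}

print(round(weights["ao"], 2))          # 0.95, matching the table
print(round(-math.log(119 / 308), 2))   # 0.95 again, from the raw counts
```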
An important point to bear in mind is that a decision tree in general is a complete description, in the sense that for any new data point there will be some leaf node that corresponds to it. So for the tree in Figure 1, each novel instance of /aa/ will be handled by exactly one leaf node in the tree, depending upon the environment in which the /aa/ finds itself.

Another important point is that each decision tree considered here has the property that its predictions specify how to rewrite a symbol (in context) in an input string. In particular, they specify a two-level mapping from a set of input symbols (phonemes) to a set of output symbols (allophones).
3 Quick Review of Rule Compilation
Work on finite-state phonology (Johnson, 1972; Koskenniemi, 1983; Kaplan and Kay, 1994) has shown that systems of rewrite rules of the familiar form φ → ψ / λ __ ρ, where φ, ψ, λ and ρ are regular expressions, can be represented computationally as finite-state transducers (FSTs): here φ represents the rule's input, ψ the output, and λ and ρ, respectively, the left and right contexts.

Kaplan and Kay (1994) have presented a concrete algorithm for compiling systems of such rules into FSTs. These methods can be extended slightly to include the compilation of probabilistic or weighted rules into weighted finite-state transducers (WFSTs -- see (Pereira et al., 1994)): Mohri and Sproat (1996) describe a rule-compilation algorithm which is more efficient than the Kaplan-Kay algorithm, and which has been extended to handle weighted rules. For present purposes it is sufficient to observe that given this extended algorithm, we can allow ψ in the expression φ → ψ / λ __ ρ to be a weighted regular expression. The compiled transducer corresponding to that rule will replace φ with ψ, with the appropriate weights, in the context λ __ ρ.
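The Mohri and Sproat (1996) compiler itself is not reproduced here, but a descendant of it is available in the OpenGrm Pynini library, whose cdrewrite operation compiles context-dependent rewrite rules. The following sketch, made under that assumption and with a toy alphabet, shows a weighted ψ of the kind just described: 'a' may rewrite as 'b' at cost 0.5 or remain 'a' at cost 1.2, between 'c' contexts.

```python
import pynini
from pynini.lib import rewrite

# Toy alphabet; a real system would use the full pair alphabet.
sigma_star = pynini.closure(pynini.union("a", "b", "c"))

# Weighted output expression psi: 'a' becomes 'b' (weight 0.5)
# or stays 'a' (weight 1.2); weights are -log probabilities.
tau = pynini.union(
    pynini.cross("a", pynini.accep("b", weight=0.5)),
    pynini.cross("a", pynini.accep("a", weight=1.2)),
)

# phi -> psi / lambda __ rho, here applying only between two 'c's.
rule = pynini.cdrewrite(tau, "c", "c", sigma_star)

print(rewrite.top_rewrite("cac", rule))  # 'cbc': the cheaper rewrite wins
```

This mirrors the extension described above: the rule is compiled once, and the weights ride along on the output expression.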
4 The Tree Compilation Algorithm 
The key requirements on the kind of decision trees that we can compile into WFSTs are (1) that the predictions at the leaf nodes specify how to rewrite a particular symbol in an input string, and (2) that the decisions at each node are stateable as regular expressions over the input string. Each leaf node represents a single rule. The regular expressions for each branch describe one aspect of the left context λ, the right context ρ, or both. The left and right contexts for the rule consist of the intersections of the partial descriptions of these contexts defined for each branch traversed between the root and the leaf node. The input φ is predefined for the entire tree, whereas the output ψ is defined as the union of the set of outputs, along with their weights, that are associated with the leaf node. The weighted rule belonging to the leaf node can then be compiled into a transducer using the weighted-rule-compilation algorithm referenced in the preceding section. The transducer for the entire tree can be derived by the intersection of the entire set of transducers associated with the leaf nodes. Note that while regular relations are not generally closed under intersection, the subset of same-length (or, more strictly speaking, length-preserving) relations is closed; see below.
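In outline, the construction at each leaf can be rendered as follows (a schematic sketch, not the authors' implementation: the regular expressions are kept as opaque strings and the final Compile step is left abstract):

```python
from dataclasses import dataclass

@dataclass
class Branch:
    lam: str = "SIGMA*"   # this branch's partial left-context description
    rho: str = "SIGMA*"   # this branch's partial right-context description

def leaf_rule(phi, outputs, path):
    """Build the rule for one leaf: phi -> psi / lambda __ rho, where
    lambda and rho are intersections over the root-to-leaf branches and
    psi is the weighted union of the leaf's outputs."""
    lam = " & ".join(b.lam for b in path)             # intersect partial lambdas
    rho = " & ".join(b.rho for b in path)             # intersect partial rhos
    psi = " | ".join(f"{o}_{w}" for o, w in outputs)  # weighted union
    return f"{phi} -> ({psi}) / {lam} __ {rho}"

# Leaf 4 of Figure 1: the lseg branch constrains lambda, the alv branch rho.
print(leaf_rule("aa",
                [("ao", 0.95), ("aa", 1.24), ("q+aa", 2.27),
                 ("q+ao", 2.34), ("ah", 2.68), ("ax", 2.84)],
                [Branch(lam="#Opt(')"), Branch(rho="Opt(')(alv)")]))
```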
To see how this works, let us return to the example in Figure 1. To start with, we know that this tree models the phonetic realization of /aa/, so we can immediately set φ to be aa for the whole tree. Next, consider again the traversal of the tree from the root node to leaf node 4. The first decision concerns the number of segments to the left of the /aa/ in the word:
[Figure 1: tree diagram not reproduced.]

Figure 1: Tree modeling the phonetic realization of /aa/. All phones are given in ARPABET. Table 2 gives ARPABET-IPA conversions for symbols relevant to this example. See Table 1 for an explanation of other symbols.
cpn    place of articulation of consonant n segments to the right
cp-n   place of articulation of consonant n segments to the left
       values: alveolar; bilabial; labiodental; dental; palatal; velar; pharyngeal;
       n/a if the segment is a vowel, or there is no such segment

vpn    place of articulation of vowel n segments to the right
vp-n   place of articulation of vowel n segments to the left
       values: central-mid-high; back-low; back-mid-low; back-high; front-low;
       front-mid-low; front-mid-high; front-high; central-mid-low; back-mid-high;
       n/a if the segment is a consonant, or there is no such segment

lseg   number of preceding segments including the segment of interest within the word
rseg   number of following segments including the segment of interest within the word
       values: 1, 2, 3, many

str    stress assigned to this vowel
       values: primary, secondary, no (zero) stress;
       n/a if there is no stress mark

Table 1: Explanation of symbols in Figure 1.
aa     ɑ
ao     ɔ
ax     ə
ah     ʌ
q+aa   ʔɑ
q+ao   ʔɔ

Table 2: ARPABET-IPA conversion for symbols relevant for Figure 1.
either none for the left branch, or one or more for the right branch. Assuming that we have a symbol σ representing a single segment, the symbol # representing a word boundary, and allowing for the possibility of intervening optional stress marks -- Opt(') -- which do not count as segments, these two possibilities can be represented by the regular expressions for λ in (a) of Table 3.[2] At this node there is no decision based on the righthand context, so the righthand context is free. We can represent this by setting ρ at this node to be Σ*, where Σ (conventionally) represents the entire alphabet: note that the alphabet is defined to be an alphabet of all φ:ψ correspondence pairs that were determined empirically to be possible.
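Concretely, the alphabet here might be collected from training data along these lines (an illustrative sketch; the non-/aa/ pairs are hypothetical):

```python
# Sigma: phoneme:allophone pairs attested in training, not single
# symbols; the full alphabet used later comprises 215 such pairs.
sigma = {
    ("aa", "ao"), ("aa", "aa"), ("aa", "q+aa"),
    ("aa", "q+ao"), ("aa", "ah"), ("aa", "ax"),
    ("t", "t"), ("t", "dx"),      # hypothetical entries for /t/
}
```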
The decision at the left daughter of the root node concerns whether or not the segment to the right is an alveolar. Assuming we have defined classes of segments alv, blab, and so forth (represented as unions of segments), we can represent the regular expression for ρ as in (b) of Table 3. In this case it is λ which is unrestricted, so we can set it to Σ*.
We can derive the λ and ρ expressions for the rule at leaf node 4 by intersecting the expressions for these contexts defined for each branch traversed on the way to the leaf. For leaf node 4, λ = #Opt(') ∩ Σ* = #Opt('), and ρ = Σ* ∩ Opt(')(alv) = Opt(')(alv).[3] The rule input φ has already been given as aa. The output ψ is defined as the union of all of the possible expressions -- at the leaf node in question -- that aa could become, with their associated weights (negative log probabilities), which we represent here as subscripted floating-point numbers:

ψ = ao_0.95 ∪ aa_1.24 ∪ q+aa_2.27 ∪ q+ao_2.34 ∪ ah_2.68 ∪ ax_2.84
Thus the entire weighted rule can be written as follows:

aa → (ao_0.95 ∪ aa_1.24 ∪ q+aa_2.27 ∪ q+ao_2.34 ∪ ah_2.68 ∪ ax_2.84) / #Opt(') __ Opt(')(alv)

[2] As far as possible, we use the notation of Kaplan and Kay (1994).

[3] Strictly speaking, since the λs and ρs at each branch may define expressions of different lengths, it is necessary to left-pad each λ with Σ*, and right-pad each ρ with Σ*. We gloss over this point here in order to make the regular expressions somewhat simpler to understand.
By a similar construction, the rule at node 6, for example, would be represented as:

aa → (aa_0.40 ∪ ao_1.11) / __ (Σ*((cmh) ∪ (bl) ∪ (bml) ∪ (bh)))
Each leaf node thus represents a rule which states that a mapping occurs between the input symbol φ and the weighted expression ψ in the condition described by λ __ ρ. Now, in cases where φ finds itself in a context that is not subsumed by λ __ ρ, the rule behaves exactly as a two-level surface coercion rule (Koskenniemi, 1983): it freely allows φ to correspond to any ψ as specified by the alphabet of pairs. These φ:ψ correspondences are, however, constrained by other rules derived from the tree, as we shall see directly.
The interpretation of the full tree is that it represents the conjunction of all such mappings: for rules 1, 2, ..., n, φ corresponds to ψ_1 given condition λ_1 __ ρ_1, and φ corresponds to ψ_2 given condition λ_2 __ ρ_2, ..., and φ corresponds to ψ_n given condition λ_n __ ρ_n. But this conjunction is simply the intersection of the entire set of transducers defined for the leaves of the tree. Observe now that the φ:ψ correspondences that were left free by the rule of one leaf node are constrained by intersection with the other leaf nodes: since, as noted above, the tree is a complete description, it follows that for any leaf node i, and for any context λ __ ρ not subsumed by λ_i __ ρ_i, there is some leaf node j such that λ_j __ ρ_j subsumes λ __ ρ.
Thus, the transducers compiled for the rules at nodes 4 and 6 are intersected together, along with the rules for all the other leaf nodes. Now, as noted above, and as discussed by Kaplan and Kay (1994), regular relations -- the algebraic counterpart of FSTs -- are not in general closed under intersection; however, the subset of same-length regular relations is closed under intersection, since they can be thought of as finite-state acceptors
(a)  left branch:   λ = #Opt(')
                    ρ = Σ*
     right branch:  λ = (#Opt(')σOpt(')) ∪ (#Opt(')σOpt(')σOpt(')) ∪ (#Opt(')σOpt(')σOpt(')(σOpt('))+)
                      = (Opt(')σ)+Opt(')
                    ρ = Σ*

(b)  left branch:   λ = Σ*
                    ρ = Opt(')(alv)
     right branch:  λ = Σ*
                    ρ = (Opt(')(blab)) ∪ (Opt(')(labd)) ∪ (Opt(')(den)) ∪ (Opt(')(pal)) ∪
                        (Opt(')(vel)) ∪ (Opt(')(pha)) ∪ (Opt(')(n/a))

Table 3: Regular-expression interpretation of the decisions involved in going from the root node to leaf node 4 in the tree in Figure 1. Note that, as per convention, superscript '+' denotes one or more instances of an expression.
expressed over pairs of symbols.[4] This point can be extended somewhat to include relations that involve bounded deletions or insertions: this is precisely the interpretation necessary for systems of two-level rules (Koskenniemi, 1983), where a single transducer expressing the entire system may be constructed via intersection of the transducers expressing the individual rules (Kaplan and Kay, 1994, pages 367-376). Indeed, our decision tree represents neither more nor less than a set of weighted two-level rules. Each of the symbols in the expressions for λ and ρ actually represents (sets of) pairs of symbols: thus alv, for example, represents all lexical alveolars paired with all of their possible surface realizations. And just as each tree represents a system of weighted two-level rules, so a set of trees -- e.g., where each tree deals with the realization of a particular phone -- represents a system of weighted two-level rules, where each two-level rule is compiled from each of the individual trees.
We can summarize this discussion more formally as follows. We presume a function Compile which, given a rule, returns the WFST computing that rule. The WFST for a single leaf L is thus defined as follows, where φ_T is the input symbol for the entire tree, ψ_L is the output expression defined at L, P_L represents the path traversed from the root node to L, p is an individual branch on that path, and λ_p and ρ_p are the expressions for λ and ρ defined at p:

Rule_L = Compile(φ_T → ψ_L / ⋂_{p∈P_L} λ_p __ ⋂_{p∈P_L} ρ_p)

The transducer for an entire tree T is defined as:

Rule_T = ⋂_{L∈T} Rule_L

Finally, the transducer for a forest F of trees is just:

Rule_F = ⋂_{T∈F} Rule_T

[4] One can thus define intersection for transducers analogously with intersection for acceptors. Given two machines G_1 and G_2, with transition functions δ_1 and δ_2, one can define the transition function of G, δ, as follows: for an input-output pair (i, o), δ((q_1, q_2), (i, o)) = (q_1', q_2') if and only if δ_1(q_1, (i, o)) = q_1' and δ_2(q_2, (i, o)) = q_2'.
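Footnote 4's construction is easy to state procedurally. The sketch below intersects two unweighted transducers represented as pair-labeled transition tables (weights, which in practice would be combined along matching arcs, are omitted for brevity; this illustrates the definition and is not the authors' code):

```python
def intersect(delta1, start1, delta2, start2):
    """Product construction over input:output pair labels: an arc exists
    in the intersection iff both machines accept the same (i, o) pair."""
    delta, stack, seen = {}, [(start1, start2)], {(start1, start2)}
    while stack:
        q1, q2 = stack.pop()
        for (state, pair), r1 in delta1.items():
            if state != q1:
                continue
            r2 = delta2.get((q2, pair))
            if r2 is None:
                continue                      # pair rejected by machine 2
            delta[((q1, q2), pair)] = (r1, r2)
            if (r1, r2) not in seen:
                seen.add((r1, r2))
                stack.append((r1, r2))
    return delta

# Two toy one-arc machines agreeing only on the pair ('aa', 'ao'):
d1 = {(0, ("aa", "ao")): 1, (0, ("aa", "ax")): 1}
d2 = {(0, ("aa", "ao")): 1}
print(intersect(d1, 0, d2, 0))   # {((0, 0), ('aa', 'ao')): (1, 1)}
```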
5 Empirical Verification of the Method
The algorithm just described has been empirically verified on the Resource Management (RM) continuous speech recognition task (Price et al., 1988). Following somewhat the discussion in (Pereira et al., 1994; Pereira and Riley, 1996), we can represent the speech recognition task as the problem of finding the best path in the composition of a grammar (language model) G, the transitive closure of a dictionary D mapping between words and their phonemic representations, a model of phone realization Φ, and a weighted lattice representing the acoustic observations A. Thus:

BestPath(G ∘ D* ∘ Φ ∘ A)    (1)
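Assuming again the Pynini library and miniature stand-ins for the four machines (a hypothetical single-word example; the real components would be far larger), expression (1) becomes one line of composition:

```python
import pynini

G   = pynini.accep("w")                                    # grammar: one word
D   = pynini.cross("w", "aa")                              # dictionary: w -> /aa/
phi = pynini.cross("aa", pynini.accep("ao", weight=0.95))  # phone realization
A   = pynini.accep("ao", weight=0.1)                       # acoustic lattice

# BestPath(G o D* o Phi o A):
best = pynini.shortestpath(G @ pynini.closure(D) @ phi @ A)
```

Here best holds the single cheapest path; its weight combines the realization weight (0.95) with the acoustic weight (0.1).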
The transducer Φ = ⋂_{T∈F} Rule_T can be constructed out of the forest F of 40 trees, one for each phoneme, trained on the TIMIT database. The sizes of the trees range from 1 to 23 leaf nodes, with a total of 291 leaves for the entire forest.
The model was tested on 300 sentences from the RM task containing 2560 word tokens and approximately 10,500 phonemes. A version of the model of recognition given in expression (1), where Φ is a transducer computed from the trees, was compared with a version where the trees were used directly, following a method described in (Ljolje and Riley, 1992). The phonetic realizations and their weights were identical for both methods, thus verifying the correctness of the compilation algorithm described here.
The sizes of the compiled transducers can be quite large; in fact they were sufficiently large that instead of constructing Φ beforehand, we intersected the 40 individual transducers with the lattice D* at runtime. Table 4 gives sizes for the entire set of phone trees: tree sizes are listed in terms of number of rules (terminal nodes) and raw size in bytes; transducer sizes are listed in terms of number of states and arcs. Note that the entire alphabet comprises 215 symbol pairs. Also given in Table 4 are the compilation times for the individual trees on a Silicon Graphics R4400 machine running at 150 MHz with 1024 Mbytes of memory. The times are somewhat slow for the larger trees, but still acceptable for off-line compilation.
While the sizes of the resulting transducers seem at first glance to be unfavorable, it is important to bear in mind that size is not the only consideration in deciding upon a particular representation. WFSTs possess several nice properties that are not shared by trees, or by handwritten rulesets for that matter. In particular, once compiled into a WFST, a tree can be used in the same way as a WFST derived from any other source, such as a lexicon or a language model; a compiled WFST can be used directly in a speech recognition model such as that of (Pereira and Riley, 1996), or in a speech synthesis text-analysis model such as that of (Sproat, 1996). Use of a tree directly requires a special-purpose interpreter, which is much less flexible.
It should also be borne in mind that the size explosion evident in Table 4 also characterizes transducers that are compiled from hand-built rewrite rules (Kaplan and Kay, 1994; Mohri and Sproat, 1996). For example, the text-analysis ruleset for the Bell Labs German text-to-speech (TTS) system (see (Sproat, 1996; Mohri and Sproat, 1996)) contains sets of rules for the pronunciation of various orthographic symbols. The ruleset for <a>, for example, contains 25 ordered rewrite rules. Over an alphabet of 194 symbols, this compiles, using the algorithm of (Mohri and Sproat, 1996), into a transducer containing 213,408 arcs and 1,927 states. This is 72% as many arcs and 48% as many states as the transducer for /ah/ in Table 4. The size explosion is not quite as great here, but the resulting transducer is still large compared to the original rule file, which requires only 1428 bytes of storage. Again, the advantages of representing the rules as a transducer outweigh the problems of size.[5]
6 Future Applications 
We have presented a practical algorithm for con- 
verting decision trees inferred from data into 
weighted finite-state transducers that directly im- 
plement the models implicit in the trees, and we 
have empirically verified that the algorithm is cor- 
rect. 
Several interesting areas of application come 
to mind. In addition to speech recognition, where 
we hope to apply the phonetic realization models 
described above to the much larger North Amer- 
ican Business task (Paul and Baker, 1992), there 
are also applications to TTS where, for example, 
the decision trees for prosodic phrase-boundary 
prediction discussed in (Wang and Hirschberg, 
1992) can be compiled into transducers and used 
directly in the WFST-based model of text analysis 
used in the multi-lingual version of the Bell Lab- 
oratories TTS system, described in (Sproat, 1995; 
Sproat, 1996). 
7 Acknowledgments 
The authors wish to thank Fernando Pereira, 
Mehryar Mohri and two anonymous referees for 
useful comments. 

References 
Leo Breiman, Jerome H. Friedman, Richard A. Olshen, and Charles J. Stone. 1984. Classification and Regression Trees. Wadsworth & Brooks, Pacific Grove, CA.

William Fisher, Victor Zue, D. Bernstein, and David Pallett. 1987. An acoustic-phonetic data base. Journal of the Acoustical Society of America, 91, December. Supplement 1.

Daniel Gildea and Daniel Jurafsky. 1995. Automatic induction of finite state transducers for simple phonological rules. In 33rd Annual Meeting of the Association for Computational Linguistics, pages 9-15, Morristown, NJ. Association for Computational Linguistics.

C. Douglas Johnson. 1972. Formal Aspects of Phonological Description. Mouton, The Hague.

Ronald Kaplan and Martin Kay. 1994. Regular models of phonological rule systems. Computational Linguistics, 20:331-378.

Kimmo Koskenniemi. 1983. Two-Level Morphology: a General Computational Model for Word-Form Recognition and Production. Ph.D. thesis, University of Helsinki, Helsinki.

Andrej Ljolje and Michael D. Riley. 1992. Optimal speech recognition using phone recognition and lexical access. In Proceedings of ICSLP, pages 313-316, Banff, Canada, October.

David Magerman. 1995. Statistical decision-tree models for parsing. In 33rd Annual Meeting of the Association for Computational Linguistics, pages 276-283, Morristown, NJ. Association for Computational Linguistics.

Mehryar Mohri and Richard Sproat. 1996. An efficient compiler for weighted rewrite rules. In 34th Annual Meeting of the Association for Computational Linguistics, Morristown, NJ. Association for Computational Linguistics.

José Oncina, Pedro García, and Enrique Vidal. 1993. Learning subsequential transducers for pattern recognition tasks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15:448-458.
Douglas Paul and Janet Baker. 1992. The design for the Wall Street Journal-based CSR corpus. In Proceedings of the International Conference on Spoken Language Processing, Banff, Alberta. ICSLP.

Fernando Pereira and Michael Riley. 1996. Speech recognition by composition of weighted finite automata. CMP-LG archive paper 9603001.

Fernando Pereira, Michael Riley, and Richard Sproat. 1994. Weighted rational transductions and their application to human language processing. In ARPA Workshop on Human Language Technology, pages 249-254. Advanced Research Projects Agency, March 8-11.

Patty Price, William Fisher, Jared Bernstein, and David Pallett. 1988. The DARPA 1000-word Resource Management database for continuous speech recognition. In Proceedings of ICASSP88, volume 1, pages 651-654, New York. ICASSP.

Michael Riley. 1989. Some applications of tree-based modelling to speech and language. In Proceedings of the Speech and Natural Language Workshop, pages 339-352, Cape Cod, MA, October. DARPA, Morgan Kaufmann.

Michael Riley. 1991. A statistical model for generating pronunciation networks. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, pages S11.1-S11.4. ICASSP91, October.

Richard Sproat. 1995. A finite-state architecture for tokenization and grapheme-to-phoneme conversion for multilingual text analysis. In Susan Armstrong and Evelyne Tzoukermann, editors, Proceedings of the EACL SIGDAT Workshop, pages 65-72, Dublin, Ireland. Association for Computational Linguistics.

Richard Sproat. 1996. Multilingual text analysis for text-to-speech synthesis. In Proceedings of the ECAI-96 Workshop on Extended Finite State Models of Language, Budapest, Hungary. European Conference on Artificial Intelligence.

Michelle Wang and Julia Hirschberg. 1992. Automatic classification of intonational phrase boundaries. Computer Speech and Language, 6:175-196.
