Finite State Transducers 
Approximating Hidden Markov Models 
André Kempe
Rank Xerox Research Centre - Grenoble Laboratory 
6, chemin de Maupertuis - 38240 Meylan - France 
andre.kempe@grenoble.rxrc.xerox.com
http://www.rxrc.xerox.com/research/mltt
Abstract 
This paper describes the conversion of a 
Hidden Markov Model into a sequential 
transducer that closely approximates the 
behavior of the stochastic model. This 
transformation is especially advantageous 
for part-of-speech tagging because the re- 
sulting transducer can be composed with 
other transducers that encode correction 
rules for the most frequent tagging errors. 
The speed of tagging is also improved. The 
described methods have been implemented 
and successfully tested on six languages. 
1 Introduction 
Finite-state automata have been successfully applied 
in many areas of computational linguistics. 
This paper describes two algorithms 1 which ap- 
proximate a Hidden Markov Model (HMM) used for 
part-of-speech tagging by a finite-state transducer 
(FST). These algorithms may be useful beyond the 
current description on any kind of analysis of written 
or spoken language based on both finite-state tech- 
nology and HMMs, such as corpus analysis, speech 
recognition, etc. Both algorithms have been fully 
implemented. 
An HMM used for tagging encodes, like a trans- 
ducer, a relation between two languages. One lan- 
guage contains sequences of ambiguity classes ob- 
tained by looking up in a lexicon all words of a sen- 
tence. The other language contains sequences of tags 
obtained by statistically disambiguating the class se- 
quences. From the outside, an HMM tagger behaves 
like a sequential transducer that deterministically 
maps every class sequence to a tag sequence, e.g.:

[DET,PRO]  [ADJ,NOUN]  [ADJ,NOUN]  ...  [END]    (1)
DET        ADJ         NOUN        ...  END

1 There is a different (unpublished) algorithm by Julian M. Kupiec and John T. Maxwell (p.c.).
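For illustration, this lookup step can be simulated in a few lines (a minimal Python sketch; the toy lexicon entries and all names are invented for this example):

    # Toy lexicon mapping surface forms to ambiguity classes (invented data).
    LEXICON = {
        "that":  ("DET", "PRO"),
        "fair":  ("ADJ", "NOUN"),
        "light": ("ADJ", "NOUN"),
    }

    def classes_of(sentence):
        """One ambiguity class per word; these sequences form one of
        the two languages of the relation."""
        return [LEXICON[w.lower()] for w in sentence.split()]

    print(classes_of("That fair light"))
    # [('DET', 'PRO'), ('ADJ', 'NOUN'), ('ADJ', 'NOUN')]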
The aim of the conversion is not to generate FSTs that behave in the same way as HMMs, or in as similar a way as possible, but rather FSTs that perform tagging in as accurate a way as possible. The
motivation to derive these FSTs from HMMs is that 
HMMs can be trained and converted with little man- 
ual effort. 
The tagging speed when using transducers is up 
to five times higher than when using the underly- 
ing HMMs. The main advantage of transforming an 
HMM is that the resulting transducer can be han- 
dled by finite state calculus. Among others, it can 
be composed with transducers that encode: 
• correction rules for the most frequent tagging 
errors which are automatically generated (Brill, 
1992; Roche and Schabes, 1995) or manually 
written (Chanod and Tapanainen, 1995), in or- 
der to significantly improve tagging accuracy 2. 
These rules may include long-distance depen- 
dencies not handled by HMM taggers, and can 
conveniently be expressed by the replace oper- 
ator (Kaplan and Kay, 1994; Karttunen, 1995; 
Kempe and Karttunen, 1996). 
• further steps of text analysis, e.g. light parsing 
or extraction of noun phrases or other phrases 
(Ait-Mokhtar and Chanod, 1997). 
These compositions enable complex text analysis 
to be performed by a single transducer. 
2 Automatically derived rules require less work than manually written ones, but are unlikely to yield better results because they consider only relatively limited context and simple relations.

An HMM transducer builds on the data (probability matrices) of the underlying HMM. The accuracy of this data has an impact on the tagging accuracy of both the HMM itself and the derived transducer.
The training of the HMM can be done on either a 
tagged or untagged corpus, and is not a topic of this 
paper since it is exhaustively described in the liter- 
ature (Bahl and Mercer, 1976; Church, 1988). 
An HMM can be identically represented by a 
weighted FST in a straightforward way. We are, 
however, interested in non-weighted transducers. 
2 n-Type Approximation 
This section presents a method that approximates a (1st order) HMM by a transducer, called n-type approximation 3.
As in an HMM, we take into account initial probabilities π, transition probabilities a and class (i.e. observation symbol) probabilities b. We do not, however, estimate probabilities over paths. The tag of the first word is selected based on its initial and class probability. The next tag is selected based on its transition probability given the first tag, and on its class probability, etc. Unlike in an HMM, once a decision on a tag has been made, it influences the following decisions but is itself irreversible.
A transducer encoding this behaviour can be generated as sketched in figure 1. In this example we have a set of three classes: c1 with the two tags t11 and t12, c2 with the three tags t21, t22 and t23, and c3 with one tag t31. Different classes may contain the same tag, e.g. t12 and t23 may refer to the same tag.
For every possible pair of a class and a tag (e.g. c1:t12 or [ADJ,NOUN]:NOUN) a state is created and labelled with this same pair (fig. 1). An initial state, which does not correspond to any pair, is also created. All states are final, marked by double circles.
For every state, as many outgoing arcs are created 
as there are classes (three in fig. 1). Each such arc 
for a particular class points to the most probable 
pair of this same class. If the arc comes from the 
initial state, the most probable pair of a class and a 
tag (destination state) is estimated by: 
\arg\max_{k} p_1(c_i, t_{ik}) = \pi(t_{ik}) \, b(c_i | t_{ik})    (2)
If the arc comes from a state other than the initial 
state, the most probable pair is estimated by: 
\arg\max_{k} p_2(c_i, t_{ik}) = a(t_{ik} | t_{previous}) \, b(c_i | t_{ik})    (3)
In the example (fig. 1), c1:t12 is the most likely pair of class c1, c2:t23 the most likely pair of class c2 when coming from the initial state, and c2:t21 the most likely pair of class c2 when coming from the state of c3:t31.

3 Name given by the author.
Every arc is labelled with the same symbol pair as its destination state, with the class symbol in the upper language and the tag symbol in the lower language. E.g. every arc leading to the state of c1:t12 is labelled with c1:t12.
Finally, all state labels can be deleted since the 
behaviour described above is encoded in the arc la- 
bels and the network structure. The network can be 
minimized and determinized. 
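The construction and the resulting behaviour can be simulated as follows (a minimal Python sketch, not the Xerox implementation; a class is represented as a tuple of tags, and pi, a, b are the HMM parameters as dictionaries):

    def build_n1_transducer(classes, pi, a, b):
        """Precompute arcs[(state, c)] = (c, t), the deterministic
        transition table of an n1-type transducer.  States are
        (class, tag) pairs; None stands for the initial state."""
        states = [None] + [(c, t) for c in classes for t in c]
        arcs = {}
        for s in states:
            for c in classes:
                if s is None:      # eq. (2): arc leaving the initial state
                    t = max(c, key=lambda t: pi[t] * b[(c, t)])
                else:              # eq. (3): the previous tag is s[1]
                    t = max(c, key=lambda t: a[(s[1], t)] * b[(c, t)])
                arcs[(s, c)] = (c, t)
        return arcs

    def n1_tag(class_seq, arcs):
        """Follow the arcs; every decision is final (irreversible)."""
        state, tags = None, []
        for c in class_seq:
            state = arcs[(state, c)]
            tags.append(state[1])
        return tags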
We call the model an n1-type model, the resulting FST an n1-type transducer, and the algorithm leading from the HMM to this transducer an n1-type approximation of a 1st order HMM.
Adapted to a 2nd order HMM, this algorithm would give an n2-type approximation. Adapted to a zero order HMM, which means using only class probabilities b, the algorithm would give an n0-type approximation.
n-Type transducers have deterministic states only. 
3 s-Type Approximation 
This section presents a method that approxi- 
mates an HMM by a transducer, called s-type 
approximation 4. 
Tagging a sentence based on a 1st order HMM 
includes finding the most probable tag sequence T 
given the class sequence C of the sentence. The joint 
probability of C and T can be estimated by: 
p(C, T) = p(c_1 ... c_n, t_1 ... t_n) = \pi(t_1) \, b(c_1|t_1) \prod_{i=2}^{n} a(t_i|t_{i-1}) \, b(c_i|t_i)    (4)
The decision on a tag of a particular word cannot 
be made separately from the other tags. Tags can 
influence each other over a long distance via transi- 
tion probabilities. Often, however, it is unnecessary 
to decide on the tags of the whole sentence at once. 
In the case of a 1st order HMM, unambiguous classes (containing one tag only), plus the sentence beginning and end positions, constitute barriers to the propagation of HMM probabilities. Two tags with one or more barriers in between do not influence each other's probability.
4 Name given by the author.
[Figure 1: Generation of an n1-type transducer]
3.1 s-Type Sentence Model 
To tag a sentence, one can split its class sequence at 
the barriers into subsequences, then tag them sep- 
arately and concatenate them again. The result is 
equivalent to the one obtained by tagging the sen- 
tence as a whole. 
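A minimal Python sketch of this splitting, using the representation of the earlier sketches (a class is a tuple of tags, so an unambiguous class is a 1-tuple):

    def split_at_barriers(class_seq):
        """Split a sentence's class sequence after every unambiguous
        class (barrier); returns the initial subsequence followed by
        the middle subsequences.  Assumes the sentence ends with an
        unambiguous class (see below)."""
        subseqs, current = [], []
        for c in class_seq:
            current.append(c)
            if len(c) == 1:          # unambiguous class = barrier
                subseqs.append(current)
                current = []
        return subseqs

The extended middle subsequences introduced below (eq. 8) are obtained from this output by prepending the last class of the preceding subsequence.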
We distinguish between initial and middle subsequences. The final subsequence of a sentence is equivalent to a middle one, if we assume that the sentence end symbol (. or ! or ?) always corresponds to an unambiguous class c_u. This allows us to ignore the sentence end position as an HMM barrier, because this role is taken over by the unambiguous class c_u at the sentence end.
An initial subsequence C_i starts at the sentence-initial position, has any number (incl. zero) of ambiguous classes c_a, and ends with the first unambiguous class c_u of the sentence. It can be described by the regular expression 5:

C_i = c_a* c_u    (5)
The joint probability of an initial class subsequence C_i of length r, together with an initial tag subsequence T_i, can be estimated by:

p(C_i, T_i) = \pi(t_1) \, b(c_1|t_1) \prod_{j=2}^{r} a(t_j|t_{j-1}) \, b(c_j|t_j)    (6)
A middle subsequence C_m starts immediately after an unambiguous class c_u, has any number (incl. zero) of ambiguous classes c_a, and ends with the following unambiguous class c_u:

C_m = c_a* c_u    (7)

5 Regular expression operators used in this section are explained in the annex.
For correct probability estimation we have to include the immediately preceding unambiguous class c_u, which actually belongs to the preceding subsequence C_i or C_m. We thereby obtain an extended middle subsequence 5:

C_m^e = c_u c_a* c_u    (8)
The joint probability of an extended middle class subsequence C_m^e of length s, together with a tag subsequence T_m^e, can be estimated by:

p(C_m^e, T_m^e) = b(c_1|t_1) \prod_{j=2}^{s} a(t_j|t_{j-1}) \, b(c_j|t_j)    (9)
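The two estimates differ only in the first factor, so a single function covers eqs. (6) and (9) (a Python sketch with the same pi, a, b dictionaries as before):

    def subseq_prob(class_seq, tag_seq, pi, a, b, initial):
        """Joint probability of a class and a tag subsequence:
        eq. (6) for initial subsequences (initial=True),
        eq. (9) for extended middle subsequences (initial=False)."""
        p = b[(class_seq[0], tag_seq[0])]
        if initial:
            p *= pi[tag_seq[0]]      # eq. (6) has the extra pi(t1) factor
        for j in range(1, len(class_seq)):
            p *= a[(tag_seq[j-1], tag_seq[j])] * b[(class_seq[j], tag_seq[j])]
        return p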
3.2 Construction of an s-Type Transducer 
To build an s-type transducer, a large number of initial class subsequences C_i and extended middle class subsequences C_m^e are generated in one of the following two ways:
(a) Extraction from a corpus 
Based on a lexicon and a guesser, we annotate an untagged training corpus with class labels. From every sentence, we extract the initial class subsequence C_i that ends with the first unambiguous class c_u (eq. 5), and all extended middle subsequences C_m^e ranging from any unambiguous class c_u in the sentence to the following unambiguous class (eq. 8).
A frequency constraint (threshold) may be im- 
posed on the subsequence selection, so that the only 
subsequences retained are those that occur at least 
a certain number of times in the training corpus 6. 
(b) Generation of possible subsequences 
Based on the set of classes, we generate all possible initial and extended middle class subsequences, C_i and C_m^e (eq. 5, 8), up to a defined length.
Every class subsequence C_i or C_m^e is first disambiguated based on a 1st order HMM, using the Viterbi algorithm (Viterbi, 1967; Rabiner, 1990) for efficiency, and then linked to its most probable tag subsequence T_i or T_m^e by means of the cross product operation 5:

S_i = C_i .x. T_i = c_1:t_1 c_2:t_2 ... c_n:t_n    (10)

S_m^e = C_m^e .x. T_m^e = c_1:t_1 c_2:t_2 ... c_n:t_n    (11)
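For reference, the disambiguation step can be sketched as follows (a compact Python Viterbi; pi, a, b as before; for extended middle subsequences the first class is unambiguous, so the pi factor cannot change the result):

    def viterbi(class_seq, pi, a, b):
        """Most probable tag sequence for a class sequence (1st order HMM)."""
        # best[t] = (probability, path) of the best path ending in tag t
        best = {t: (pi[t] * b[(class_seq[0], t)], [t]) for t in class_seq[0]}
        for c in class_seq[1:]:
            best = {t: max(((p * a[(prev, t)] * b[(c, t)], path + [t])
                            for prev, (p, path) in best.items()),
                           key=lambda x: x[0])
                    for t in c}
        return max(best.values(), key=lambda x: x[0])[1]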
In all extended middle subsequences S_m^e, e.g.:

S_m^e = C_m^e / T_m^e =

[DET]  [ADJ,NOUN]  [ADJ,NOUN]  [NOUN]    (12)
DET    ADJ         ADJ         NOUN

the first class symbol on the upper side and the first tag symbol on the lower side are marked as an extension that does not really belong to the middle sequence but is necessary to disambiguate it correctly. Example (12) becomes:

S_m^0 = C_m^0 / T_m^0 =

0.[DET]  [ADJ,NOUN]  [ADJ,NOUN]  [NOUN]    (13)
0.DET    ADJ         ADJ         NOUN
We then build the union uS_i of all initial subsequences S_i and the union uS_m^0 of all marked extended middle subsequences S_m^0, and formulate a preliminary sentence model:

uS^0 = uS_i uS_m^0*    (14)

in which all middle subsequences S_m^0 are still marked and extended, in the sense that every occurrence of every unambiguous class is mentioned twice: once unmarked, as c_u at the end of every sequence C_i or C_m^0, and a second time marked, as c_u^0 at the beginning of every following sequence C_m^0. The upper side of the sentence model uS^0 describes the complete (but extended) class sequences of possible sentences, and the lower side of uS^0 describes the corresponding (extended) tag sequences.

6 The frequency constraint may prevent the encoding of rare subsequences which would increase the size of the transducer without contributing much to the tagging accuracy.
To ensure a correct concatenation of initial and 
middle subsequences, we formulate a concatenation 
constraint for the classes: 
R_c = \bigcap_j ~[ ?* c_u^j [? - c_u^{0j}] ?* ]    (15)
stating that every middle subsequence must begin with the same marked unambiguous class c_u^0 (e.g. 0.[DET]) that occurs unmarked, as c_u (e.g. [DET]), at the end of the preceding subsequence, since both symbols refer to the same occurrence of this unambiguous class.
Having ensured correct concatenation, we delete all marked classes on the upper side of the relation by means of a transducer D_c that maps every marked class to the empty string and leaves all other symbols unchanged (16), and all marked tags on the lower side by means of an analogous transducer D_t (17). By composing these relations with the preliminary sentence model, we obtain the final sentence model 5:

S = D_c .o. R_c .o. uS^0 .o. D_t    (18)
We call the model an s-type model, the corresponding FST an s-type transducer, and the whole algorithm leading from the HMM to the transducer an s-type approximation of an HMM.
The s-type transducer tags any corpus that contains only known subsequences exactly as the corresponding HMM tagger does, i.e. with the same errors. However, since an s-type transducer is incomplete, it cannot tag sentences containing one or more class subsequences that are not in the union of the initial or middle subsequences.
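The net effect of eqs. (14)-(18) can be simulated without finite-state calculus by a table keyed by class subsequences, where keying every middle subsequence together with its extension class plays the role of the marked classes and of the constraint R_c (a hedged Python sketch, reusing split_at_barriers and viterbi from the sketches above):

    def build_s_tables(init_seqs, mid_seqs, pi, a, b):
        """Map every class subsequence to its HMM tag subsequence.
        Middle subsequences are keyed with their extension (eq. 8);
        the extension's tag is dropped, like the marked tags in (13)."""
        s_init = {tuple(cs): viterbi(cs, pi, a, b) for cs in init_seqs}
        s_mid = {tuple(cs): viterbi(cs, pi, a, b)[1:] for cs in mid_seqs}
        return s_init, s_mid

    def s_tag(class_seq, s_init, s_mid):
        """Tag one sentence by concatenating table lookups (cf. eq. 14).
        A KeyError signals an unknown subsequence (incompleteness)."""
        subseqs = split_at_barriers(class_seq)
        tags = list(s_init[tuple(subseqs[0])])
        for prev, sub in zip(subseqs, subseqs[1:]):
            ext = tuple([prev[-1]] + sub)    # extended middle subsequence
            tags += s_mid[ext]
        return tags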
3.3 Completion of an s-Type Transducer 
An incomplete s-type transducer S can be completed 
with subsequences from an auxiliary, complete n- 
type transducer N as follows: 
First, we extract the union of initial and the union of extended middle subsequences, u_sS_i and u_sS_m^e, from the primary s-type transducer S, and the unions u_nS_i and u_nS_m^e from the auxiliary n-type transducer N. To extract the union u_nS_i of initial subsequences we use the following filter:

F_Si = [ \<c_u,t> ]* <c_u,t> [?:[]]*    (19)
where <c_u,t> is the 1-level format 7 of the symbol pair c_u:t. The extraction takes place by:

u_nS_i = [ N.1L .o. F_Si ].l.2L    (20)

where the transducer N is first converted into 1-level format 7 and then composed with the filter F_Si (eq. 19). We extract the lower side of this composition, where every sequence of N.1L remains unchanged from the beginning up to the first occurrence of an unambiguous class c_u, and every following symbol is mapped to the empty string by means of [?:[]]* (eq. 19). Finally, the extracted lower side is converted back into 2-level format 7.
The extraction of the union u_nS_m^e of extended middle subsequences is performed in a similar way.
We then make the joint unions of initial and extended middle subsequences 5:

uS_i = u_sS_i | [ ~[u_sS_i.u] .o. u_nS_i ]    (21)

uS_m^e = u_sS_m^e | [ ~[u_sS_m^e.u] .o. u_nS_m^e ]    (22)

In both cases (eq. 21 and 22) we union all subsequences from the principal model S with all those subsequences from the auxiliary model N that are not in S.
Finally, we generate the completed s+n-type transducer from the joint unions of subsequences uS_i and uS_m^e, as described above (eq. 14-18).
A transducer completed in this way disambiguates all subsequences known to the principal incomplete s-type model exactly as the underlying HMM does, and all other subsequences as the auxiliary n-type model does.
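In the table simulation sketched above, this completion amounts to a priority union of dictionaries (a sketch; it assumes the n-type taggings of the auxiliary subsequences have been tabulated in the same format):

    def complete_tables(s_tbl, n_tbl):
        """Priority union (cf. eq. 21, 22): subsequences known to the
        s-type model keep their exact HMM tagging; all others fall
        back to the n-type approximation."""
        completed = dict(n_tbl)    # auxiliary n-type entries first ...
        completed.update(s_tbl)    # ... then overridden by s-type entries
        return completed

Equivalently, the tagger can simply catch the failed lookup in s_tag() and fall back to n1_tag() for the offending subsequence.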
4 An Implemented Finite-State 
Tagger 
The implemented tagger requires three transducers which represent a lexicon, a guesser and one of the above approximations of an HMM.
All three transducers are sequential, i.e. deterministic on the input side.
Both the lexicon and the guesser unambiguously map a surface form of any word that they accept to the corresponding class of tags (fig. 2, col. 1 and 2):
7 1-level and 2-level format are explained in the annex.
First, the word is looked up in the lexicon. If this fails, it is looked up in the guesser. If this also fails, the word gets the label [UNKNOWN], which associates it with the tag class of unknown words. Tag probabilities in this class are approximated by the tags of words that appear only once in the training corpus.
As soon as an input token gets labelled with the 
tag class of sentence end symbols (fig. 2: \[SENT\]), 
the tagger stops reading words from the input. At 
this point, the tagger has read and stored the words 
of a whole sentence (fig. 2, col. 1) and generated the 
corresponding sequence of classes (fig. 2, col. 2). 
The class sequence is now deterministically 
mapped to a tag sequence (fig. 2, col. 3) by means of 
the HMM transducer. The tagger outputs the stored 
word and tag sequence of the sentence, and contin- 
ues in the same way with the remaining sentences of 
the corpus. 
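A sketch of this control loop (Python; lexicon, guesser and tag_classes are stand-ins for the three transducers, and the class names are illustrative):

    UNKNOWN = ('UNKNOWN',)    # hypothetical tag class of unknown words
    SENT = ('SENT',)          # tag class of sentence end symbols

    def lookup(word, lexicon, guesser):
        """Lexicon first, then guesser, then the unknown-word class."""
        return lexicon.get(word) or guesser(word) or UNKNOWN

    def tag_corpus(tokens, lexicon, guesser, tag_classes):
        """Buffer one sentence, then map its classes to tags at once."""
        words, classes = [], []
        for w in tokens:
            words.append(w)
            classes.append(lookup(w, lexicon, guesser))
            if classes[-1] == SENT:              # sentence end reached
                tags = tag_classes(classes)      # e.g. n1_tag or s_tag
                yield from zip(words, classes, tags)
                words, classes = [], []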
The       [AT]            AT
share     [NN,VB]         NN
of        [IN]            IN
tripled   [VBD,VBN]       VBD
within    [IN,RB]         IN
that      [CS,DT,WPS]     DT
span      [NN,VB,VBD]     VBD
of        [IN]            IN
time      [NN,VB]         NN
.         [SENT]          SENT

Figure 2: Tagging a sentence
5 Experiments and Results 
This section compares different n-type and s-type 
transducers with each other and with the underlying 
HMM. 
The FSTs perform tagging faster than the HMMs. 
Since all transducers are approximations of 
HMMs, they give a lower tagging accuracy than the 
corresponding HMMs. However, improvement in ac- 
curacy can be expected since these transducers can 
be composed with transducers encoding correction 
rules for frequent errors (sec. 1). 
Table 1 compares different transducers on an En- 
glish test case. 
The s+n1-type transducer containing all possible subsequences up to a length of three classes is the most accurate (table 1, last line, s+n1-FST (≤ 3): 95.95 %) but also the largest one. A similar accuracy at a much smaller size can be achieved with the s+n1-type, either with all subsequences up to a length of two classes (s+n1-FST (≤ 2): 95.06 %) or with subsequences occurring at least once in a training corpus of 100 000 words (s+n1-FST (100K, F1): 95.05 %).

                       accuracy   tagging speed   transducer size         creation
                       in %       in words/sec    # states     # arcs     time
  HMM                  96.77       4 590
  n0-FST               83.53      20 582                1          297    16 sec
  n1-FST               94.19      17 244               71       21 087    17 sec
  s+n1-FST (20K, F1)   94.74      13 575              927      203 853     3 min
  s+n1-FST (50K, F1)   94.92      12 760            2 675      564 887    10 min
  s+n1-FST (100K, F1)  95.05      12 038            4 709      976 785    23 min
  s+n1-FST (100K, F2)  94.76      14 178              476      107 728     2 min
  s+n1-FST (100K, F4)  94.60      14 178              211       52 624    76 sec
  s+n1-FST (100K, F8)  94.49      13 870              154       41 598    62 sec
  s+n1-FST (1M, F2)    95.67      11 393            2 049      418 536     7 min
  s+n1-FST (1M, F4)    95.36      11 193              799      167 952     4 min
  s+n1-FST (1M, F8)    95.09      13 575              432       96 712     3 min
  s+n1-FST (≤ 2)       95.06       8 180            9 796    1 311 962    39 min
  s+n1-FST (≤ 3)       95.95       4 870           92 463   13 681 113    47 h

  Language: English
  Corpora: 19 944 words for HMM training, 19 934 words for test
  Tag set: 74 tags, 297 classes
  Types of FST (Finite-State Transducers):
    n0, n1            n0-type (with only lexical probabilities) or n1-type (sec. 2)
    s+n1 (100K, F2)   s-type (sec. 3), with subsequences of frequency ≥ 2, from a
                      training corpus of 100 000 words (sec. 3.2 a), completed with
                      n1-type (sec. 3.3)
    s+n1 (≤ 2)        s-type (sec. 3), with all possible subsequences of length ≤ 2
                      classes (sec. 3.2 b), completed with n1-type (sec. 3.3)
  Computer: ultra2, 1 CPU, 512 MBytes physical RAM, 1.4 GBytes virtual RAM

Table 1: Accuracy, speed, size and creation time of some HMM transducers
Increasing the size of the training corpus and the frequency limit, i.e. the number of times that a subsequence must at least occur in the training corpus in order to be selected (sec. 3.2 a), improves the relation between tagging accuracy and the size of the transducer. E.g. the s+n1-type transducer that encodes subsequences from a training corpus of 20 000 words (table 1, s+n1-FST (20K, F1): 94.74 %, 927 states, 203 853 arcs) performs less accurate tagging and is bigger than the transducer that encodes subsequences occurring at least eight times in a corpus of 1 000 000 words (table 1, s+n1-FST (1M, F8): 95.09 %, 432 states, 96 712 arcs).
Most transducers in table 1 are faster than the underlying HMM; the n0-type transducer is about five times faster 8. There is a large variation in speed between the different transducers due to their structure and size.

8 Since n0-type and n1-type transducers have deterministic states only, a particularly fast matching algorithm can be used for them.
Table 2 compares the tagging accuracy of different 
transducers and the underlying HMM for different 
languages. In these tests the highest accuracy was 
always obtained by s-type transducers, either with 
all subsequences up to a length of two classes 9 or 
with subsequences occurring at least once in a corpus 
of 100 000 words. 
6 Conclusion and Future Research 
The two methods described in this paper allow the approximation of an HMM used for part-of-speech tagging by a finite-state transducer. Both methods have been fully implemented.
The tagging speed of the transducers is up to five 
times higher than that of the underlying HMM. 
The main advantage of transforming an HMM 
is that the resulting FST can be handled by finite 
9 A maximal length of three classes is not considered here because of the high increase in size and the low increase in accuracy.
                       accuracy in %
                       English   Dutch   French   German   Portug.   Spanish
  HMM                   96.77    94.76    98.65    97.62    97.12     97.60
  n0-FST                83.53    81.99    91.13    82.97    91.03     93.65
  n1-FST                94.19    91.58    98.18    94.49    96.19     96.46
  s+n1-FST (20K, F1)    94.74    92.17    98.35    95.23    96.33     96.71
  s+n1-FST (50K, F1)    94.92    92.24    98.37    95.57    96.49     96.76
  s+n1-FST (100K, F1)   95.05    92.36    98.37    95.81    96.56     96.87
  s+n1-FST (100K, F2)   94.76    92.17    98.34    95.51    96.42     96.74
  s+n1-FST (100K, F4)   94.60    92.02    98.30    95.29    96.27     96.64
  s+n1-FST (100K, F8)   94.49    91.84    98.32    95.02    96.23     96.54
  s+n1-FST (≤ 2)        95.06    92.25    98.37    95.92    96.50     96.90

  HMM train. crp. (# words)  19 944   26 386   22 622   91 060   20 956   16 221
  test corpus (# words)      19 934   10 468    6 368   39 560   15 536   15 443
  # tags                         74       47       45       66       67       55
  # classes                     297      230      287      389      303      254

  Types of FST (Finite-State Transducers): cf. table 1

Table 2: Accuracy of some HMM transducers for different languages
state calculus 10 and thus be directly composed with
other transducers which encode tag correction rules 
and/or perform further steps of text analysis. 
Future research will mainly focus on this pos- 
sibility and will include composition with, among 
others: 
• Transducers that encode correction rules (possibly including long-distance dependencies) for the most frequent tagging errors, in order to significantly improve tagging accuracy. These rules can be either extracted automatically from a corpus (Brill, 1992) or written manually (Chanod and Tapanainen, 1995).
• Transducers for light parsing, phrase extraction and other analysis (Aït-Mokhtar and Chanod, 1997).
An HMM transducer can be composed with one or 
more of these transducers in order to perform com- 
plex text analysis using only a single transducer. 
We also hope to improve the n-type model by us- 
ing look-ahead to the following tags 11. 
Acknowledgements 
I wish to thank the anonymous reviewers of my pa- 
per for their valuable comments and suggestions. 
I am grateful to Lauri Karttunen and Gregory 
Grefenstette (both RXRC Grenoble) for extensive 
and frequent discussion during the period of my 
work, as well as to Julian Kupiec (Xerox PARC) 
and Mehryar Mohri (AT&T Research) for sending
me some interesting ideas before I started. 
Many thanks to all my colleagues at RXRC 
Grenoble who helped me in whatever respect, partic- 
ularly to Anne Schiller, Marc Dymetman and Jean- 
Pierre Chanod for discussing parts of the work, and 
to Irene Maxwell for correcting various versions of 
the paper. 
10 A large library of finite-state functions is available at Xerox.
11 Ongoing work has shown that looking ahead to just one tag is worthless because it makes tagging results highly ambiguous.

References 
Aït-Mokhtar, Salah and Chanod, Jean-Pierre (1997). Incremental Finite-State Parsing. In Proceedings of the 5th Conference on Applied Natural Language Processing. ACL, pp. 72-79. Washington, DC, USA.
Bahl, Lalit R. and Mercer, Robert L. (1976). Part of Speech Assignment by a Statistical Decision Algorithm. In IEEE International Symposium on Information Theory, pp. 88-89. Ronneby.
Brill, Eric (1992). A Simple Rule-Based Part-of-Speech Tagger. In Proceedings of the 3rd Conference on Applied Natural Language Processing, pp. 152-155. Trento, Italy.
Chanod, Jean-Pierre and Tapanainen, Pasi (1995). Tagging French - Comparing a Statistical and a Constraint Based Method. In Proceedings of the 7th Conference of the EACL, pp. 149-156. ACL. Dublin, Ireland.
Church, Kenneth W. (1988). A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text. In Proceedings of the 2nd Conference on Applied Natural Language Processing. ACL, pp. 136-143.
Kaplan, Ronald M. and Kay, Martin (1994). Reg- 
ular Models of Phonological Rule Systems. In 
Computational Linguistics. 20:3, pp. 331-378. 
Karttunen, Lauri (1995). The Replace Operator. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics. Cambridge, MA, USA. cmp-lg/9504032
Kempe, André and Karttunen, Lauri (1996). Parallel Replacement in Finite State Calculus. In Proceedings of the 16th International Conference on Computational Linguistics, pp. 622-627. Copenhagen, Denmark. cmp-lg/9607007
Rabiner, Lawrence R. (1990). A Tutorial on Hid- 
den Markov Models and Selected Applications in 
Speech Recognition. In Readings in Speech Recog- 
nition (eds. A. Waibel, K.F. Lee). Morgan Kauf- 
mann Publishers, Inc. San Mateo, CA., USA. 
Roche, Emmanuel and Schabes, Yves (1995). Deterministic Part-of-Speech Tagging with Finite-State Transducers. In Computational Linguistics. Vol. 21, No. 2, pp. 227-253.
Viterbi, A.J. (1967). Error Bounds for Convolutional Codes and an Asymptotically Optimal Decoding Algorithm. In Proceedings of IEEE, vol. 61, pp. 268-278.
