A POLYNOMIAL-ORDER ALGORITHM
FOR
OPTIMAL PHRASE SEQUENCE SELECTION FROM A PHRASE LATTICE
AND ITS PARALLEL LAYERED IMPLEMENTATION

Kazuhiko OZEKI
The University of Electro-Communications
Chofu, Tokyo, 182, Japan
Abstract 
This paper deals with the problem of selecting an optimal phrase sequence from a phrase lattice, a problem often encountered in language processing tasks such as word processing and post-processing for speech recognition. The problem is formulated as one of combinatorial optimization, and a polynomial-order algorithm is derived. This algorithm finds an optimal phrase sequence and its dependency structure simultaneously, and is therefore particularly suited for an interface between speech recognition and various kinds of language processing. What the algorithm performs is numerical optimization rather than symbolic operation, unlike conventional parsers. A parallel and layered structure to implement the algorithm is also presented. Although the language taken up here is Japanese, the algorithm can be extended to cover a wider family of languages.
1. Introduction 
In Japanese language processing related to speech recognition and word processing, we often encounter the problem of selecting the phrase sequence which constitutes the most acceptable sentence from a phrase lattice, that is, a set of phrases with various starting and ending positions. By solving this problem, linguistic ambiguities and/or uncertainties coming from the inaccuracy of speech recognition are expected to be resolved.
This problem can be solved, in principle, 
by enumerating all the possible combinations 
of the phrases and measuring the syntactic 
and semantic acceptability of each phrase 
sequence as a sentence. Obviously, however, 
the amount of computation in this enumera- 
tive method grows exponentially with respect 
to the length of the sequence and becomes in- 
tractable even for a moderate problem size. 
In this paper we formulate this task as a 
combinatorial optimization problem and 
derive a set of recurrence equations, which 
leads to an algorithm of polynomial order in 
time and space. We utilize the idea of 
dependency grammar [Hays 64] for defining
the acceptability of a phrase sequence as a 
Japanese sentence. 
With a review of recent theoretical devel- 
opment on this topic, a parallel and layered 
implementation of the algorithm is present- 
ed. 
2. Dependency Structure of Japanese 
In Japanese, words and morphemes are concatenated to form a linguistic unit called 'bunsetsu', which is referred to simply as 'phrase' here. A typical phrase consists of a content word followed by some functional morphemes. A Japanese sentence is a sequence of phrases with a structure which can be described by a diagram as in Fig.1 [Hashimoto 46]. For a sequence of phrases x_1 x_2 ... x_n to be a well-formed Japanese sentence, it must have a structure satisfying the following constraints [Yoshida 72]:
(c1) For any i (1 ≤ i ≤ n-1), there exists a unique j (i < j ≤ n) such that x_i modifies x_j in a wide sense.
(c2) For any i, j, k, l (1 ≤ i < j < k < l ≤ n), it never occurs that x_i modifies x_k and x_j modifies x_l.
A structure satisfying these constraints is called a dependency structure here. More formally, we define a dependency structure as follows [Ozeki 86a].
Definition 1
(1) If x_0 is a phrase, then <x_0> is a dependency structure.
(2) If X_1, ..., X_n are dependency structures and x_0 is a phrase, then <X_1 ... X_n x_0> is a dependency structure.
A dependency structure <X_1 ... X_n x_0> (X_i = <... x_i>) implies that each x_i, which is the last phrase in X_i, modifies x_0. It is easily verified that a structure satisfying the constraints (c1) and (c2) is a dependency structure in the sense of Definition 1 and vice versa [Ozeki 86a].
When a dependency structure X is composed of phrases x_1, x_2, ..., x_n, we say that X is a dependency structure on x_1 x_2 ... x_n. The set of all the dependency structures on x_1 x_2 ... x_n is denoted as K(x_1 x_2 ... x_n), and for a sequence of phrase sets A_1, A_2, ..., A_n, we define
KB(A_1, A_2, ..., A_n)
= {X | X ∈ K(x_1 x_2 ... x_n), x_i ∈ A_i (1 ≤ i ≤ n)}.
Fig.1 Example of dependency structure in Japanese. A, B, ... are phrases.
3. Acceptability of a Dependency Structure 
For a pair of phrases x_1 and x_0, we can think of a penalty imposed on a modifier-modificant relation between x_1 and x_0. This non-negative value is denoted as pen(x_1; x_0). A smaller value of pen(x_1; x_0) represents a more natural linguistic relation. Although it is very important to establish a way of computing pen(x_1; x_0), we will not go into that problem in this paper. Based on this 'local' penalty, a 'global' penalty P(X) of a dependency structure X is defined recursively as follows [Ozeki 86a].
Definition 2
(1) For X = <x>, P(X) = 0.
(2) For X = <X_1 ... X_n x_0>, where X_i = <... x_i> (1 ≤ i ≤ n) is a dependency structure,
P(X) = P(X_1) + ... + P(X_n)
     + pen(x_1; x_0) + ... + pen(x_n; x_0).
Note that P(X) is the sum of the penalties of all the phrase pairs which are supposed to be in a modifier-modificant relation in the dependency structure X. This function is invariant under permutation of X_1, ..., X_n, in accordance with a characteristic of Japanese.
4. Formulation of the Problem 
For simplicity, let us begin with a special type of phrase lattice composed of a sequence of phrase sets B_1, B_2, ..., B_N as shown in Fig.2, which we call a phrase matrix. Suppose we are given a phrase matrix and a reliability function
s: B_1 ∪ B_2 ∪ ... ∪ B_N → R+,
where R+ denotes the set of non-negative real numbers. A smaller value of s(x) represents a higher reliability of x. We encounter this special type of phrase lattice in isolated phrase speech recognition. In that case B_i is the set of output candidates for the ith utterance, and s(x) is the recognition score for a candidate phrase x.
For a dependency structure X on a phrase sequence x_1 x_2 ... x_N, the total reliability of X is defined as
S(X) = s(x_1) + ... + s(x_N).
Combining the acceptability and the reliability, we define an objective function F(X) as
F(X) = P(X) + S(X).
Fig.2 Phrase matrix. B_1, ..., B_N are phrase sets; each B_i contains candidate phrases x_i1, x_i2, x_i3, ...
Then the central problem here is formulated as the following combinatorial optimization problem [Matsunaga 86, Ozeki 86a].
Problem  Find a dependency structure
X ∈ KB(B_1, B_2, ..., B_N)
which minimizes the objective function F(X).
By solving this problem, we can obtain the optimal phrase sequence and the optimal dependency structure on the sequence simultaneously.
When |B_1| = |B_2| = ... = |B_N| = M, we have
|KB(B_1, B_2, ..., B_N)| = (C(2(N-1), N-1)/N) M^N,
where C(n, k) denotes the number of combinations of k items out of n. This becomes a huge number even for a moderate problem size, rendering an enumerative method practically impossible.
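As a sanity check on this count: C(2(N-1), N-1)/N is the (N-1)th Catalan number, which counts the dependency structures on a fixed sequence of N phrases, and M^N counts the ways of choosing one phrase per position. A small Python sketch of ours evaluates the formula:

```python
from math import comb

def kb_size(N, M):
    """|KB(B_1, ..., B_N)| when every |B_i| = M:
    Catalan(N-1) dependency structures times M**N phrase choices."""
    catalan = comb(2 * (N - 1), N - 1) // N
    return catalan * M ** N
```

Even for a small phrase matrix with N = 7 and M = 3, an enumerative method would have to examine kb_size(7, 3) = 288,684 structures.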
5. Recurrence Equations and a Resulting Algorithm
Combining two dependency structures X and Y = <Y_1 ... Y_m y>, a new dependency structure <X Y_1 ... Y_m y> is obtained, which is denoted as X ⊕ Y. Conversely, any dependency structure Z with length greater than 1 can be decomposed as Z = X ⊕ Y, where X is the top dependency structure in Z. Moreover, it is easily verified from the definition of the objective function that
F(Z) = F(X) + F(Y) + pen(x; y),
where x and y are the last phrases in X and Y, respectively. The following argument is based on this fact.
We denote the elements in B_i as x_i1, x_i2, .... For 1 ≤ i ≤ j ≤ N and 1 ≤ p ≤ |B_j|, where |B_j| denotes the number of elements in B_j, we define
opt(i,j;p) = min{F(X) | X ∈ KB(B_i, ..., B_{j-1}, {x_jp})}
and
opts(i,j;p) = argmin{F(X) | X ∈ KB(B_i, ..., B_{j-1}, {x_jp})}.
Then the following recurrence equations hold for opt(i,j;p) and opts(i,j;p), respectively [Ozeki 86a].
Proposition 1  For 1 ≤ i ≤ j ≤ N and 1 ≤ p ≤ |B_j|,
(1) if i = j, then opt(i,j;p) = s(x_jp),
(2) and if i < j, then
opt(i,j;p) = min{f(k,q) | i ≤ k ≤ j-1, 1 ≤ q ≤ |B_k|},
where
f(k,q) = opt(i,k;q) + opt(k+1,j;p) + pen(x_kq; x_jp).
Proposition 1'  For 1 ≤ i ≤ j ≤ N and 1 ≤ p ≤ |B_j|,
(1) if i = j, then opts(i,j;p) = <x_jp>,
(2) and if i < j, then
opts(i,j;p) = opts(i,*k;*q) ⊕ opts(*k+1,j;p),
where *k is the best segmentation point and *q is the best phrase number in B_*k:
(*k,*q) = argmin{f(k,q) | i ≤ k ≤ j-1, 1 ≤ q ≤ |B_k|}.
According to Proposition 1, if the values of opt(i,k;q) and opt(k+1,j;p) are known for i ≤ k ≤ j-1 and 1 ≤ q ≤ |B_k|, it is possible to calculate the value of opt(i,j;p) by searching for the best segmentation point and the best phrase number at the segmentation point. This fact enables us to calculate the value of opt(1,N;p) recursively, starting with opt(i,i;q) (1 ≤ i ≤ N, 1 ≤ q ≤ |B_i|). This is the principle of dynamic programming [Bellman 57].
Let *p = argmin{opt(1,N;p) | 1 ≤ p ≤ |B_N|};
then we have the final solution
opt(1,N;*p) = min{F(X) | X ∈ KB(B_1, ..., B_N)}
and
opts(1,N;*p) = argmin{F(X) | X ∈ KB(B_1, ..., B_N)}.
The opts(1,N;*p) can be calculated recursively using Proposition 1'. Fig.3 illustrates an algorithm translated from these recurrence equations [Ozeki 86a]. This
algorithm uses two tables, table1 and table2, of upper triangular matrix form as shown in Fig.4. The (i,j) element of the matrix has |B_j| 'pigeon-holes'. The value of opt(i,j;p) is stored in table1, and the pair of the best segmentation point and the best phrase number is stored in table2. It should be noted that there is much freedom in the order of scanning i, j and p, which will be utilized when we discuss a parallel implementation of the algorithm.
Optimal_Dependency_Structure;
begin
  /* Analysis Phase */
  for j := 1 to N do
    for i := j downto 1 do
      for p := 1 to |B_j| do
        if i = j then
          table1(i,j;p) := s(x_jp)
        else
          begin
            table1(i,j;p)
              := min{table1(i,k;q) + table1(k+1,j;p)
                     + pen(x_kq; x_jp)
                     | i ≤ k ≤ j-1, 1 ≤ q ≤ |B_k|};
            table2(i,j;p)
              := argmin{table1(i,k;q)
                        + table1(k+1,j;p)
                        + pen(x_kq; x_jp)
                        | i ≤ k ≤ j-1, 1 ≤ q ≤ |B_k|};
          end;
  /* Composition Phase */
  *p := argmin{table1(1,N;p) | 1 ≤ p ≤ |B_N|};
  result := opts(1,N;*p);
end.

function opts(i,j;p): char string;
begin
  if i = j then
    opts := '<x_jp>'
  else
    begin
      (*k,*q) := table2(i,j;p);
      opts := opts(i,*k;*q) ⊕ opts(*k+1,j;p);
    end;
end.

Fig.3 Algorithm to select an optimal dependency structure from a phrase matrix.
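The analysis and composition phases of Fig.3 can be sketched in executable form. The following Python reimplementation is ours, not the paper's code: `B` is a list of candidate lists (a phrase matrix), `s` a reliability function, `pen` a hypothetical penalty function, and indices are 0-based; the returned structure is a nested tuple whose last element is the head phrase.

```python
def optimal_dependency_structure(B, s, pen):
    N = len(B)
    table1, table2 = {}, {}
    # Analysis phase: fill the tables as in Fig.3.
    for j in range(N):
        for i in range(j, -1, -1):
            for p in range(len(B[j])):
                if i == j:
                    table1[i, j, p] = s(B[j][p])
                else:
                    # f(k, q) = opt(i,k;q) + opt(k+1,j;p) + pen(x_kq; x_jp)
                    val, arg = min(
                        (table1[i, k, q] + table1[k + 1, j, p]
                         + pen(B[k][q], B[j][p]), (k, q))
                        for k in range(i, j) for q in range(len(B[k])))
                    table1[i, j, p], table2[i, j, p] = val, arg

    # Composition phase: opts(i,j;p) = opts(i,*k;*q) (+) opts(*k+1,j;p)
    def opts(i, j, p):
        if i == j:
            return (B[j][p],)
        k, q = table2[i, j, p]
        return (opts(i, k, q),) + opts(k + 1, j, p)

    best_p = min(range(len(B[-1])), key=lambda p: table1[0, N - 1, p])
    return table1[0, N - 1, best_p], opts(0, N - 1, best_p)
```

For example, with two candidates for the first phrase and one for the second, the algorithm trades a worse reliability score against a better penalty and picks the pairing with the smallest total F(X).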
Fig.4 Triangular matrix table for table1 and table2. In this example, N = 7 and |B_1| = ... = |B_7| = 3.
Fig.5 Example of phrase lattice over character positions 1-15, containing phrase sets such as B(1,2), B(3,5), B(5,8), B(6,8), B(9,13) and B(11,15).
When |B_1| = ... = |B_N| = M, the number of operations (additions and comparisons) necessary to fill table1 is O(M^2 N^3).
These recurrence equations and the algorithm can be easily extended so that they can handle a general phrase lattice. A phrase lattice is a set of phrase sets, which looks like Fig.5. B(i,j) denotes the set of phrases beginning at character position i and ending at j. A phrase lattice is obtained, for example, as the output of a continuous speech recognition system, and also as the result of a morphological analysis of non-segmented Japanese text spelled in kana characters only. We denote the elements of B(i,j) as x_ij1, x_ij2, ..., and in parallel with the definition of opt and opts, we define opt' and opts' as follows.
For 1 ≤ i ≤ m ≤ j ≤ N and x_mjp ∈ B(m,j),
opt'(i,j,m;p) = the minimum value of P(X) + S(X) as X runs over all the dependency structures on all the possible phrase sequences beginning at i and ending at j with the last phrase being fixed as x_mjp,
and
opts'(i,j,m;p) = the dependency structure which gives the above minimum.
Then recurrence equations similar to Proposition 1 and Proposition 1' hold for opt' and opts' [Ozeki 86b]:
Proposition 2  For 1 ≤ i ≤ m ≤ j ≤ N and 1 ≤ p ≤ |B(m,j)|,
(1) if i = m, then opt'(i,j,m;p) = s(x_mjp),
(2) and if i < m, then
opt'(i,j,m;p) = min{f'(k,n,q) | i ≤ n ≤ k ≤ m-1, 1 ≤ q ≤ |B(n,k)|},
where
f'(k,n,q) = opt'(i,k,n;q) + opt'(k+1,j,m;p) + pen(x_nkq; x_mjp).
Proposition 2'  For 1 ≤ i ≤ m ≤ j ≤ N and 1 ≤ p ≤ |B(m,j)|,
(1) if i = m, then opts'(i,j,m;p) = <x_mjp>,
(2) and if i < m, then
opts'(i,j,m;p) = opts'(i,*k,*n;*q) ⊕ opts'(*k+1,j,m;p),
where *k is the best segmentation point, *n is the top position of the best phrase at the segmentation point and *q is the best phrase number in B(*n,*k):
(*k,*n,*q) = argmin{f'(k,n,q) | i ≤ n ≤ k ≤ m-1, 1 ≤ q ≤ |B(n,k)|}.
The minimum is searched over 3 variables in this case. It is a straightforward matter to translate these recurrence equations into an algorithm similar to Fig.3 [Ozeki 86b, Kohda 86]. In this case, the order of the amount of computation is O(M^2 N^5), where M = |B(i,j)| and N is the number of starting and ending positions of phrases in the lattice.

Fig.6 2-dimensional array of computing elements.
Also, we can modify the algorithm in such a way that up to the kth optimal solutions are obtained.
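To make the three-variable search of Proposition 2 concrete, here is an illustrative Python sketch of ours (not the paper's implementation; all names are assumptions). The lattice is a dict mapping a position pair (m, j) to the candidate list B(m,j); a missing table entry marks a span that no phrase sequence can cover.

```python
INF = float("inf")

def lattice_optimum(B, s, pen, N):
    """opt'(i,j,m;p) of Proposition 2 over a phrase lattice B (1-based)."""
    opt = {}
    for length in range(1, N + 1):          # spans in increasing length
        for i in range(1, N - length + 2):
            j = i + length - 1
            for (m, j2), cands in B.items():
                if j2 != j or m < i:        # last phrase must end at j
                    continue
                for p, x in enumerate(cands):
                    if i == m:              # span covered by x alone
                        opt[i, j, m, p] = s(x)
                        continue
                    best = INF
                    # f'(k,n,q) over i <= n <= k <= m-1, q in B(n,k)
                    for (n, k), lower in B.items():
                        if not (i <= n <= k <= m - 1):
                            continue
                        for q, y in enumerate(lower):
                            a = opt.get((i, k, n, q), INF)
                            b = opt.get((k + 1, j, m, p), INF)
                            best = min(best, a + b + pen(y, x))
                    if best < INF:
                        opt[i, j, m, p] = best
    # final answer: best value over all last phrases ending at position N
    return min(opt.get((1, N, m, p), INF)
               for (m, j2), cands in B.items() if j2 == N
               for p in range(len(cands)))
```

Because both sub-spans of every decomposition are strictly shorter than the span itself, processing spans in order of increasing length guarantees that every table entry needed on the right-hand side is already available.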
6. Parallel and Layered Implementation 
When only one processor is available, the 
amount of computation dominates the proc- 
essing time. On the other hand, when there 
is no limit as to the number of processors, 
the processing time depends on how much of 
the computation can be executed in parallel. 
There exists a tidy parallel and layered 
structure to implement the above algorithm. 
For simplicity, let us confine ourselves to the phrase matrix case here. Furthermore, let us first consider the case where there is only one element x_i in each phrase set B_i. If we define
opt''(i,j) = min{P(X) | X ∈ K(x_i ... x_j)},
then Proposition 1 is reduced to the following simpler form.
Proposition 3  For 1 ≤ i ≤ j ≤ N,
(1) if i = j, then opt''(i,j) = 0,
(2) and if i < j, then
opt''(i,j) = min{opt''(i,k) + opt''(k+1,j) + pen(x_k; x_j) | i ≤ k ≤ j-1}.
It is easy to see that opt''(i,j) and opt''(i+m,j+m) (m ≠ 0) can be calculated independently of each other. This motivates us to devise a parallel and layered computation structure in which processing elements are arranged in a 2-dimensional array as shown in Fig.6. There are N(N+1)/2 processing elements in total. The node(i,j) has an internal structure as shown in Fig.7, and is connected with node(i,k) and node(k+1,j) (i ≤ k ≤ j-1) as in Fig.8. The bottom elements, node(i,i)'s (1 ≤ i ≤ N), hold the value 0 and do nothing else. The node(i,j) calculates the value of opt''(i,j) and holds the result in memory 1, together with the optimal segmentation point in memory 2. Within a layer all the nodes work independently in parallel, and the computation proceeds from the lower to the upper layers. An upper node receives information about a longer sub-sequence than a lower node: an upper node processes more global information than a lower node. When
Fig.7 Internal structure of node(i,j). 
Fig.8 Nodes connected to node(i,j). 
Fig.9 3-dimensional array of computing elements.
the top element, node(1,N), finishes its job, each node holds information which is necessary to compose the optimal dependency structure on x_1 x_2 ... x_N. This computation structure, having many simple inter-related computing elements, might be reminiscent of a connectionist model or a neural network.
This result can be easily extended, based on Proposition 1, to the case in which each phrase set has more than one element. In this case processing elements are arranged in a 3-dimensional array as shown in Fig.9. The bottom elements, node(i,i;p)'s, hold the value of s(x_ip). The node(i,j;p) calculates the value of opt(i,j;p). The computation proceeds from the lower to the upper layers just as in the previous simpler case. Further extension of this structure is also possible so that it can handle a general phrase lattice.
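The layer-by-layer schedule of the single-candidate case can be sketched sequentially in Python (an illustration of ours, not the paper's code): one layer per span length, where all nodes (i, i+span) within a layer depend only on lower layers, so the inner loop over i is exactly the part a parallel array would execute simultaneously.

```python
def layered_opt(x, pen):
    """opt''(i,j) of Proposition 3, computed layer by layer (0-based)."""
    N = len(x)
    opt = {(i, i): 0.0 for i in range(N)}   # bottom layer holds value 0
    for span in range(1, N):                # one layer per span length
        # the nodes (i, i+span) of this layer are mutually independent
        for i in range(N - span):
            j = i + span
            opt[i, j] = min(opt[i, k] + opt[k + 1, j] + pen(x[k], x[j])
                            for k in range(i, j))
    return opt[0, N - 1]
```

Each node(i,j) here performs exactly the minimization stored in memory 1 of Fig.7; replacing `min` by a tracked argmin would fill memory 2 as well.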
7. Related Work
The problem of selecting an appropriate phrase sequence from a phrase lattice has been treated in the field of Japanese word processing, where a non-segmented Japanese text spelled in kana characters must be converted into an orthographic style spelled in kana and kanji. Several practical methods have been devised so far. Among them, the approach in [Oshima 86] is close in idea to the present one in that it utilizes the Japanese case grammar in order to disambiguate a phrase lattice. However, their method is enumeration-oriented, and some kind of heuristic process is necessary to reduce the size of the phrase lattice before syntactic analysis is performed.
In order to disambiguate the result of speech recognition, an application of dependency analysis was attempted [Matsunaga 86, Matsunaga 87]. The algorithm used is a bottom-up, depth-first search, and it is reported that it takes considerable processing time. By introducing a beam search technique, the computing time can be much reduced [Nakagawa 87], but with loss of global optimality.
Perhaps the most closely related algorithm is the (extended) CYK algorithm with probabilistic rewriting rules [Levinson 85, Ney 87, Nakagawa 87]. In spite of the difference in the initial ideas and the formulations, both approaches lead to similar bottom-up, breadth-first algorithms based on the principle of dynamic programming.
In Fig.2, if each phrase set has only one phrase, and the value of the between-phrase penalty is 0 or 1, then the algorithm reduces to the conventional Japanese dependency analyzer [Hitaka 80]. Thus, the algorithm presented here is a twofold extension of the conventional Japanese dependency analyzer: the value of the between-phrase penalty can take an arbitrary real number, and it can analyze not only a phrase sequence but also a phrase matrix and a phrase lattice in polynomial time.
We have considered a special type of dependency structure in this paper, in which a modificant never precedes the modifier, as is normally the case in Japanese. It has been shown that the algorithm can be extended to cover a more general dependency structure [Katoh 89].
The fundamental algorithm presented here has been modified and extended, and utilized for speech recognition [Matsunaga 88].
8. Concluding Remarks 
In the method presented here, the linguistic data and the algorithm are completely separated. The linguistic data are condensed in the penalty function, which measures the naturalness of the modifier-modificant relation between two phrases. No heuristics have slipped into the algorithm. This makes the whole procedure very transparent.
The essential part of the algorithm is the execution of numerical optimization rather than symbolic matching, unlike conventional parsers. Therefore it can easily be implemented on an arithmetic processor such as a DSP (Digital Signal Processor). The parallel and layered structure will also fit LSI implementation.
An obvious limitation of this method is that it takes account of only pair-wise relations between phrases. Because of this, the class of sentences which have a low penalty under the present criterion tends to be broader than the class of sentences which we normally consider acceptable. Nevertheless, this method is useful in reducing the number of candidates so that a more sophisticated linguistic analysis becomes possible within realistic computing time at a later stage.
A reasonable way of computing the penalty for a phrase pair is yet to be established. There seem to be two approaches to this problem: a deterministic approach taking the syntactic and semantic relation between two phrases into consideration, and a statistical one based on the frequency of co-occurrence of two phrases.
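As a toy illustration of the statistical approach (our sketch only; the co-occurrence data and the smoothing choice are invented for the example), a penalty can be set to the negative log relative frequency of a head y given a modifier x, so that frequently co-occurring pairs receive small penalties:

```python
import math
from collections import Counter

# hypothetical modifier-head co-occurrence data
pairs = [("kare-ga", "tabeta"), ("kare-ga", "tabeta"),
         ("pan-o", "tabeta"), ("kare-ga", "nonda")]
pair_count = Counter(pairs)
mod_count = Counter(x for x, _ in pairs)
heads = len(set(y for _, y in pairs))

def pen(x, y):
    # -log P(y | x) with add-one smoothing over the head vocabulary,
    # so unseen pairs get a large but finite penalty
    return -math.log((pair_count[x, y] + 1) / (mod_count[x] + heads + 1))
```

A pair observed more often gets a smaller penalty, e.g. pen('kare-ga', 'tabeta') < pen('kare-ga', 'nonda') on this data.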
Acknowledgement 
The author is grateful for the support of the Hoso Bunka Foundation for this work.

References 

[Bellman 57] Bellman, R.: 'Dynamic Programming', Princeton Univ. Press, 1957.

[Hashimoto 46] Hashimoto, S.: 'Kokugo-gaku Gairon', Iwanami, 1946.

[Hays 64] Hays, D.G.: 'Dependency Theory: A Formalism and Some Observations', Language, Vol.40, No.4, pp.511-525, 1964.

[Hitaka 80] Hitaka, T. and Yoshida, S.: 'A Syntax Parser Based on the Case Dependency and Its Efficiency', Proc. COLING'80, pp.15-20, 1980.

[Katoh 89] Katoh, N. and Ehara, T.: 'A fast algorithm for dependency structure analysis', Proc. 39th Annual Convention IPS Japan, 1989.

[Kohda 86] Kohda, M.: 'An algorithm for optimum selection of phrase sequence from phrase lattice', Paper Tech. Group, IECE Japan, SP86-72, pp.9-16, 1986.

[Levinson 85] Levinson, S.E.: 'Structural Methods in Automatic Speech Recognition', Proc. of IEEE, Vol.73, No.11, pp.1625-1649, 1985.

[Matsunaga 86] Matsunaga, S. and Kohda, M.: 'Post-processing using dependency structure of inter-phrases for speech recognition', Proc. Acoust. Soc. Jpn. Spring Meeting, pp.45-46, 1986.

[Matsunaga 87] Matsunaga, S. and Kohda, M.: 'Speech Recognition of Minimal Phrase Sequence Taking Account of Dependency Relationships between Minimal Phrases', Trans. IEICE, Vol. J70-D, No.11, pp.2102-2107, 1987.

[Matsunaga 88] Matsunaga, S. and Kohda, M.: 'Linguistic processing using a dependency structure grammar for speech recognition and understanding', Proc. COLING'88, pp.402-407, 1988.

[Nakagawa 87] Nakagawa, S. and Ito, T.: 'Recognition of Spoken Japanese Sentences Using Mono-Syllable Units and Backward Kakari-Uke Parsing Algorithm', Trans. IEICE, Vol. J70-D, No.12, pp.2469-2478, 1987.

[Nakagawa 87] Nakagawa, S.: 'Unification of Kakari-Uke Analysis and Context-Free Parsing by CYK Algorithm for Continuous Speech Recognition', Proc. Acoust. Soc. Jpn. Spring Meeting, pp.131-132, 1987.

[Ney 87] Ney, H.: 'Dynamic Programming Speech Recognition Using a Context-Free Grammar', Proc. ICASSP'87, pp.69-72, 1987.

[Oshima 86] Oshima, Y., Abe, M., Yuura, K. and Takeichi, N.: 'A Disambiguation Method in Kana-Kanji Conversion Using Case Frame Grammar', Trans. IPSJ, Vol.27, No.7, pp.679-687, 1986.

[Ozeki 86a] Ozeki, K.: 'A multi-stage decision algorithm for optimum bunsetsu sequence selection', Paper Tech. Group, IECE Japan, SP86-32, pp.41-48, 1986.

[Ozeki 86b] Ozeki, K.: 'A multi-stage decision algorithm for optimum bunsetsu sequence selection from bunsetsu lattice', Paper Tech. Group, IECE Japan, COMP86-47, pp.47-57, 1986.

[Yoshida 72] Yoshida, S.: 'Syntax analysis of Japanese sentence based on kakariuke relation between two bunsetsu', Trans. IECE Japan, Vol. J55-D, No.4, 1972.
