<?xml version="1.0" standalone="yes"?>
<Paper uid="C80-1071">
<Title>SPEECH RECOGNITION SYSTEM FOR SPOKEN JAPANESE SENTENCES</Title>
<Section position="7" start_page="1033" end_page="1033" type="concl">
<SectionTitle> 5. Conclusion </SectionTitle>
<Paragraph position="0"> We have only just started to construct a speech recognition system that can deal with semantic information and with the inflexion of words, and many problems remain to be solved. However, from this experiment the following can be said: (i) The acoustic analyser gives reasonably clean phoneme strings, provided that a learning process based on Bayes decision theory is carried out for each speaker for the group of vowels, nasals and buzz.</Paragraph>
<Paragraph position="1"> (ii) The use of global acoustic features is effective in reducing the number of predicted candidate words, although it is not as effective as in our isolated spoken-word recognition system [12].</Paragraph>
<Paragraph position="2"> (iii) In Japanese, the inflexion of inflexional words is complicated, and the number of Roman letters in the stem and inflexional ending of each verb or auxiliary verb is usually very small; the number of letters in the most important particles is smaller still. These properties are very unfavourable for speech recognition, in which ideal acoustic processing cannot be expected. Nevertheless, the syntactic and matching processors can, to some extent, process input phoneme strings containing erroneous phonemes satisfactorily.</Paragraph>
<Paragraph position="3"> (iv) Extending the vocabulary is very easy.</Paragraph>
<Paragraph position="4"> Of course, we must still improve the capability of the syntactic and semantic analysers and also extend the vocabulary.</Paragraph>
</Section>
</Paper>
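<!--
A minimal illustrative sketch, not the authors' implementation: conclusion item (i) mentions a speaker-specific learning process based on Bayes decision theory for the group of vowels, nasals and buzz. The Python sketch below shows one conventional realisation of such a Bayes decision rule, assuming Gaussian class-conditional densities over acoustic feature vectors; the feature representation, class inventory and Gaussian assumption are illustrative choices and are not taken from the paper.

import numpy as np

class GaussianBayesClassifier:
    """Bayes decision rule with Gaussian class-conditional densities."""

    def fit(self, frames, labels):
        # frames: (N, D) array of acoustic feature vectors taken from one
        # speaker's learning utterances; labels: length-N class names
        # (e.g. individual vowels, nasals, buzz).
        frames = np.asarray(frames, dtype=float)
        labels = np.asarray(labels)
        self.classes_ = sorted(set(labels.tolist()))
        self.params_ = {}
        for c in self.classes_:
            x = frames[labels == c]
            mean = x.mean(axis=0)
            # a small ridge keeps the covariance invertible when few
            # learning frames are available for a class
            cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(x.shape[1])
            self.params_[c] = (mean,
                               np.linalg.inv(cov),
                               np.log(np.linalg.det(cov)),
                               np.log(len(x) / len(frames)))
        return self

    def predict(self, frame):
        # Bayes decision: choose the class with the largest log posterior,
        # i.e. log prior plus Gaussian log likelihood (constants dropped).
        frame = np.asarray(frame, dtype=float)

        def score(c):
            mean, inv_cov, log_det, log_prior = self.params_[c]
            d = frame - mean
            return log_prior - 0.5 * (d @ inv_cov @ d) - 0.5 * log_det

        return max(self.classes_, key=score)

In use, such a classifier would be fitted on labelled frames from a speaker's learning utterances and then applied frame by frame, yielding the phoneme strings passed on to the prediction, matching and syntactic processors.
-->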