<?xml version="1.0" standalone="yes"?> <Paper uid="H92-1079"> <Title>Large Vocabulary Recognition of Wall Street Journal Sentences at Dragon Systems</Title> <Section position="7" start_page="390" end_page="390" type="concl"> <SectionTitle> 6. CONCLUSIONS </SectionTitle> <Paragraph position="0"> The training paradigm outlined above in the description of our tied mixture modeling has only recently been fully implemented at Dragon. Many aspects of the training strategy await full exploration, but the early results we have described are very encouraging. Already we have improved our performance relative to our old modeling and training paradigms.</Paragraph> <Paragraph position="1"> In the coming months we plan to focus on a number of different aspects of training. First, we will be con- null development test set word error rate (%) using verbalized punctuation.</Paragraph> <Paragraph position="2"> structing basis distributions for streams with more than one parameter and studying the effect of this modeling on performance. We anticipate that we should obtain improved performance as we will then be modeling the dependence among parameters in an individual frame.</Paragraph> <Paragraph position="3"> We will also be studying a variety of backoff strategies, which involve substituting fully contextual PICs instead of generic PICs, when a PIC model has not been built.</Paragraph> <Paragraph position="4"> Another issue of importance will be the nature of our Bayesian smoothing, which we hope to implement in a more &quot;data driven&quot; way. Furthermore, we expect that the use of tied mixture modeling will allow us to develop a high-performance speaker-independent recognizer, an important goal for the coming year.</Paragraph> </Section> class="xml-element"></Paper>