<?xml version="1.0" standalone="yes"?>
<Paper uid="H92-1036">
  <Title>MAP Estimation of Continuous Density HMM : Theory and Applications</Title>
  <Section position="9" start_page="189" end_page="189" type="concl">
    <SectionTitle>
SUMMARY
</SectionTitle>
    <Paragraph position="0"> The theoretical framework for MAP estimation of multivariate Gaussian mixture densities and of HMMs with mixture Gaussian state observation densities was presented. Two MAP training algorithms, forward-backward MAP estimation and segmental MAP estimation, were formulated. Bayesian learning serves as a unified approach for speaker adaptation, speaker group modeling, parameter smoothing and corrective training.</Paragraph>
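The core of the MAP framework can be illustrated with its simplest case: the posterior-mode estimate of a Gaussian mean under a conjugate Gaussian prior, which interpolates between the prior mean (e.g., an SI seed model) and the sample mean of the adaptation data. The sketch below is illustrative only; the function name `map_mean` and the prior-weight parameter `tau` are not from the paper.

```python
def map_mean(prior_mean, data, tau):
    """MAP (posterior-mode) estimate of a Gaussian mean with a
    conjugate Gaussian prior: a weighted interpolation between the
    prior mean and the sample sum, with weight tau on the prior.

    tau is an illustrative hyperparameter controlling how strongly
    the prior (seed model) is trusted relative to the data.
    """
    n = len(data)
    sample_sum = sum(data)
    # With no data the estimate stays at the prior mean; as n grows,
    # it converges to the sample mean (the ML estimate).
    return (tau * prior_mean + sample_sum) / (tau + n)
```

With little adaptation data the estimate stays close to the seed model, which is why MAP adaptation degrades gracefully compared to pure ML retraining.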
    <Paragraph position="1"> Tested on the RM task, encouraging results were obtained for all four applications. For speaker adaptation, a 37% word error reduction over the SI results was obtained on the FEB91-SD test with 2 minutes of speaker-specific training data. It was also found that speaker adaptation is more effective when based on sex-dependent models than on an SI seed. Compared to speaker-dependent training, speaker adaptation achieved better performance with the same amount of training/adaptation data. Corrective training applied to CI models reduced word errors by 15-20%. The best SI results on RM tests were obtained with p.d.f. smoothing and sex-dependent modeling, an average word accuracy of about 95.8% on four test sets.</Paragraph>
    <Paragraph position="2"> Only corrective training and p.d.f. smoothing were applied to the TI/NIST connected digit task. It was found that corrective training is effective for improving CI models, reducing the number of string errors by up to 27%. Corrective training was found to be more effective for models with smaller numbers of parameters. This implies that we can reduce computational requirements by applying corrective training to a smaller model and achieve performance comparable to that of a larger model. Using 213 CD models, p.d.f. smoothing provided a robust model that gave 99.1% string accuracy on the test data, the best performance reported on this corpus.</Paragraph>
  </Section>
</Paper>