
<?xml version="1.0" standalone="yes"?>
<Paper uid="H94-1013">
  <Title>Weide, R., Huang, X., and Alleva, F., &quot;Improving Speech-Recognition Performance Via Phone-Dependent VQ Codebooks, Multiple Speaker Clusters And Adaptive Language Models&quot;, ARPA Spoken Language Systems Workshop, March</Title>
  <Section position="2" start_page="0" end_page="0" type="intro">
    <SectionTitle>
ABSTRACT
</SectionTitle>
    <Paragraph position="0"> We describe our latest attempt at adaptive language modeling. At the heart of our approach is a Maximum Entropy (ME) model which incorporates many knowledge sources in a consistent manner. The other components are a selective unigram cache, a conditional bigram cache, and a conventional static trigram. We describe the knowledge sources used to build such a model with ARPA's official WSJ corpus, and report on perplexity and word error rate results obtained with it. Then, three different adaptation paradigms are discussed, and an additional experiment, based on AP wire data, is used to compare them.</Paragraph>
  </Section>
</Paper>