<?xml version="1.0" standalone="yes"?>
<Paper uid="H91-1021">
  <Title>Augmented Role Filling Capabilities for Semantic Interpretation of Spoken Language</Title>
  <Section position="2" start_page="0" end_page="0" type="intro">
    <SectionTitle>
INTRODUCTION
</SectionTitle>
    <Paragraph position="0"> Improving the performance of spoken language systems requires addressing issues along several fronts, including basic improvements in natural language processing and speech recognition as well as issues of integration of these components in spoken language systems. In this paper we report the results of our recent work in each of these areas. * One major area of work has been in the semantic and pragmatic components of the Unisys natural language processing system. The work in semantics enhances the robustness of semantic processing by allowing parses which do not directly express the argument structure expected by semantics to nevertheless be processed in a rule-governed way. In the area of pragmatics we have extended our techniques for bringing material displayed to the user into the dialog context to handle several additional classes of references to material in the display.</Paragraph>
    <Paragraph position="1"> * This work was supported by DARPA contract N00014-89-C-0171, administered by the Office of Naval Research. We are grateful to Victor Zue of MIT, Doug Paul of MIT Lincoln Laboratories and John Makhoul of BBN for making output from their speech recognition systems available to us. We also wish to thank Tim Finin, Rich Fritzson, Don McKay, Jim Meidinger, and Jan Pastor of Unisys and Lynette Hirschman of MIT for their contributions to this work.</Paragraph>
    <Paragraph position="2"> In the area of integration of speech and natural language, we report on an experiment with three spoken language systems, coupling the same Unisys natural language system to three different speech recognizers as shown in Figure 1.</Paragraph>
    <Paragraph position="3"> ~&amp;quot; :.,~:::::::.~::~:::::::~::::::::~,,-::::~,.:: l~.'.-'.;,-',.~i.~~.,.~,.,,...,...,,~,~,.,.i-.,..-,,~:~il</Paragraph>
    <Paragraph position="5"> We believe this is a very promising technique for evaluating the components of spoken language systems. Using this technique we can make a very straightforward comparison of the performance of the recognizers in a spoken language context. Furthermore, this technique also allows us to make a fine-grained comparison of the interaction between speech and natural language in the three systems by looking at such questions as the relative proportion of speech recognizer outputs that fail to parse, fail to receive a semantic analysis, and so on. Finally, we report on speech recognition results obtained by filtering the N-best (N=16) from MIT-Summit through the Unisys natural language system. We note that there was a higher error rate for context-dependent speech as compared to context-independent speech (54.6% compared to 45.8%) and suggest two hypotheses which may account for this difference.</Paragraph>
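The N-best filtering idea described above can be sketched in a few lines: each recognizer hypothesis, in rank order, is passed through the natural language stages, and the first hypothesis to receive a full analysis is accepted. This is a minimal illustrative sketch, not the paper's implementation; the function names, the stand-in parse/semantics checks, and the sample hypotheses are all hypothetical.

```python
# Hypothetical sketch of N-best filtering through an NL system.
# All names and the toy parse/semantics checks are illustrative.

def parses(hypothesis):
    # Stand-in for the syntactic stage: reject any hypothesis
    # containing the marker word "noise".
    return "noise" not in hypothesis

def has_semantics(hypothesis):
    # Stand-in for the semantic stage: accept hypotheses that
    # mention a known domain word.
    return "flights" in hypothesis

def filter_nbest(nbest):
    """Return the highest-ranked hypothesis that passes both the
    syntactic and semantic stages, or None if all N fail."""
    for hyp in nbest:
        if parses(hyp) and has_semantics(hyp):
            return hyp
    return None

nbest = [
    "show me flights noise boston",  # rejected by the parse stage
    "show me lights to boston",      # parses, no semantic analysis
    "show me flights to boston",     # passes both stages
]
print(filter_nbest(nbest))  # -> show me flights to boston
```

The point of the sketch is the control flow: the recognizer's ranking is preserved, and the natural language components act purely as a filter, which is what makes per-stage failure rates (fail to parse, fail to receive a semantic analysis) directly observable.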
  </Section>
</Paper>