<?xml version="1.0" standalone="yes"?> <Paper uid="H91-1021"> <Title>Augmented Role Filling Capabilities for Semantic Interpretation of Spoken Language</Title> <Section position="8" start_page="130" end_page="131" type="concl"> <SectionTitle> CONCLUSIONS </SectionTitle> <Paragraph position="0"> In this paper we presented benchmark test results on natural language understanding, spoken language understanding and speech recognition. Our weighted score for the Class A natural language test was 48.3%; for the D1 pairs, 63.2%; for the Class AO test, 9.1%; and for the DO test, -50%. We presented five benchmark tests of spoken language systems: Unisys-MIT on Class A, which received a weighted score of 9.7%; Unisys-MIT on Class AO, which received a weighted score of 18.2%; Unisys-LL on Class A, which received a weighted score of 18.6%; Unisys-BBN on Class A, which received a weighted score of 39.6%; and Unisys-BBN on Class AO, which received a weighted score of 18.2%. Finally, we presented speech recognition results using the Unisys natural language system as a filter on the N-best output of the MIT SUMMIT system.</Paragraph> <Paragraph position="1"> The semantics enhancements to the natural language system are motivating us to revisit the tightly integrated architecture of semantics/pragmatics processing in our system, because with these enhancements, semantic information regarding a discourse entity can become available to the processing at a much later point than previously. Thus, pragmatic processing must be invoked at a later point to ensure that all relevant semantic information has been exploited.</Paragraph> <Paragraph position="2"> The spoken language results are especially interesting, because we are now beginning to be able to look at the interactions of the natural language system with different speech recognisers, and to see how to tune the natural language system to make the best use of the information available from the various recognisers.
We believe that it is important to make these kinds of comparisons, and we are planning to work with at least one other speech recognition system using the N-best interface. We also plan to begin exploring more tightly coupled systems using the stack decoder architecture ([9]).</Paragraph> </Section> </Paper>