<?xml version="1.0" standalone="yes"?>
<Paper uid="P02-1048">
  <Title>MATCH: An Architecture for Multimodal Dialogue Systems</Title>
  <Section position="6" start_page="0" end_page="0" type="concl">
    <SectionTitle>4 Conclusion</SectionTitle>
    <Paragraph position="0">The MATCH architecture enables rapid development of mobile multimodal applications. Combining finite-state multimodal integration with a speech-act-based dialogue manager allows users to interact flexibly using speech, pen, or synchronized combinations of the two, depending on their preferences, task, and physical and social environment. The system responds by generating coordinated multimodal presentations adapted to the multimodal dialogue context and user preferences. Features of the system, such as the browser-based UI and the general-purpose finite-state architecture for multimodal integration, facilitate rapid prototyping and reuse of the technology for different applications. The lattice-based finite-state approach to multimodal understanding enables both multimodal integration and dialogue context to compensate for recognition errors. The multimodal logging infrastructure has enabled an iterative process of proactive evaluation and data collection throughout system development. Because we can replay multimodal interactions without video, we have been able to log and annotate subjects both in the lab and in NYC throughout the development process and use their input to drive system development.</Paragraph>
  </Section>
</Paper>