<?xml version="1.0" standalone="yes"?>
<Paper uid="E06-1046">
  <Title>Edit Machines for Robust Multimodal Language Processing</Title>
  <Section position="3" start_page="361" end_page="361" type="intro">
    <SectionTitle>
2 MATCH: A Multimodal Application
</SectionTitle>
    <Paragraph position="0"> MATCH (Multimodal Access To City Help) is a working city guide and navigation system that enables mobile users to access restaurant and subway information for New York City and Washington, D.C. (Johnston et al., 2002). The user interacts with an interface displaying restaurant listings and a dynamic map showing locations and street information. The inputs can be speech, drawing/pointing on the display with a stylus, or synchronous multimodal combinations of the two modes. The user can ask for the review, cuisine, phone number, address, or other information about restaurants, and for subway directions to locations. The system responds with graphical labels on the display, synchronized with synthetic speech output. For example, if the user says phone numbers for these two restaurants and circles two restaurants as in Figure 2 [A], the system will draw a callout with the restaurant name and number and say, for example, Time Cafe can be reached at 212-533-7000, for each restaurant in turn (Figure 2 [B]).</Paragraph>
  </Section>
</Paper>