<?xml version="1.0" standalone="yes"?>
<Paper uid="W04-2320">
  <Title>Dialogue Systems that can Handle Face-to-Face Joint Reference to Actions in Space</Title>
  <Section position="1" start_page="0" end_page="0" type="abstr">
    <SectionTitle>Abstract</SectionTitle>
    <Paragraph position="0">This talk introduces new research that works toward an overarching model of natural face-to-face conversation about spatially located actions in the world, and then uses that model to implement a trustworthy embodied conversational agent that guides users' ongoing, real-world activities away from the desktop.</Paragraph>
    <Paragraph position="1">Past research has demonstrated that the relationship between verbal and nonverbal behavior operates at the level of intonational phrases, conversational turns, discourse units, and the negotiation of reference to objects and actions; that mental representations of shared space are structured in such a way that participants in a dialogue can draw on them; that dialogue systems must be based on models of coordination and collaboration; and that users are willing to engage in persistent, natural, trusting conversation with embodied conversational systems. In this talk, these diverse strands of research are brought together in the service of a single underlying modality-independent model of action and language, nonverbal behaviors and words, and production and comprehension, one that can lead to a physically located, spatially aware, collaborative embodied conversational agent.</Paragraph>
  </Section>
</Paper>