File Information

File: 05-lr/acl_arc_1_sum/cleansed_text/xml_by_section/intro/06/e06-1022_intro.xml

Size: 3,966 bytes

Last Modified: 2025-10-06 14:03:19

<?xml version="1.0" standalone="yes"?>
<Paper uid="E06-1022">
  <Title>Addressee Identification in Face-to-Face Meetings</Title>
  <Section position="2" start_page="0" end_page="169" type="intro">
    <SectionTitle>
1 Introduction
</SectionTitle>
    <Paragraph position="0"> Addressing is an aspect of every form of communication. It represents a form of orientation and directionality of the act the current actor performs toward the particular other(s) who are involved in an interaction. In conversational communication involving twoparticipants, the hearer isalways the addressee of the speech act that the speaker performs. Addressing, however, becomes a real issue in multi-party conversation.</Paragraph>
    <Paragraph position="1"> The concept of addressee as well as a variety of mechanisms that people use in addressing their speech have been extensively investigated by conversational analysts and social psychologists (Goffman, 1981a; Goodwin, 1981; Clark and Carlson, 1982).</Paragraph>
    <Paragraph position="2"> Recently, addressing has received considerable attention in modeling multi-party interaction in various domains. Research on automatic addressee identification has been conducted in the context of mixed human-human and human-computer interaction (Bakx et al., 2003; van Turnhout et al., 2005), human-humanrobot interaction (Katzenmaier et al., 2004), and mixed human-agents and multi-agents interaction (Traum, 2004). In the context of automatic analysis of multi-party face-to-face conversation, Otsuka et al. (2005) proposed a framework for automating inference of conversational structure that is defined in terms of conversational roles: speaker, addressee and unaddressed participants.</Paragraph>
    <Paragraph position="3"> In this paper, we focus on addressee identification in a special type of communication, namely, face-to-face meetings. Moreover, we restrict our analysis to small group meetings with four participants. Automatic analysis of recorded meetings has become an emerging domain for a range of research focusing on different aspects of interactions among meeting participants. The outcomes of this research should be combined in a targeted application that would provide users with useful information about meetings. For answering questions such as &amp;quot;Who was asked to prepare a presentation for the next meeting?&amp;quot; or &amp;quot;Were there any arguments between participants A and B?&amp;quot;, some sort of understanding of dialogue structure is required. In addition to identification of dialogue acts that participants perform in multi-party dialogues, identification of addressees of those acts is also important for inferring dialogue structure.</Paragraph>
    <Paragraph position="4"> There are many applications related to meeting research that could benefit from studying addressing in human-human interactions. The results can be used by those who develop communicative agents in interactive intelligent environments and remote meeting assistants. These agents need to recognize when they are being addressed and how they should address people in the environment.</Paragraph>
    <Paragraph position="5"> This paper presents results on addressee identi- null fication in four-participants face-to-face meetings using Bayesian Network and Naive Bayes classifiers. The goals in the current paper are (1) to find relevant features for addressee classification in meeting conversations using information obtained from multi-modal resources - gaze, speech and conversational context, (2) to explore to what extent the performances of classifiers can be improved by combining different types of features obtained from these resources, (3) to investigate whether the information about meeting context can aid the performances of classifiers, and (4) to compare performances of the Bayesian Network and Naive Bayes classifiers for the task of addressee prediction over various feature sets.</Paragraph>
  </Section>
class="xml-element"></Paper>