<?xml version="1.0" standalone="yes"?>
<Paper uid="P04-3028">
  <Title>Co-training for Predicting Emotions with Spoken Dialogue Data</Title>
  <Section position="3" start_page="0" end_page="0" type="intro">
    <SectionTitle>2 Data</SectionTitle>
    <Paragraph position="0">Our data consists of the student turns in a set of 10 spoken dialogues randomly selected from a corpus of 128 qualitative physics tutoring dialogues between a human tutor and University of Pittsburgh undergraduates. Prior to our study, the 453 student turns in these 10 dialogues were manually labeled by two annotators as either &quot;Emotional&quot; or &quot;Non-Emotional&quot; (Litman and Forbes-Riley, 2004). Perceived student emotions (e.g. confidence, confusion, boredom, irritation, etc.) were coded based on both what the student said and how he or she said it. For this study, we use only the 350 turns where both annotators agreed on the emotion label. 51.71% of these turns were labeled as Non-Emotional and the rest as Emotional.</Paragraph>
    <Paragraph position="1">Also prior to our study, each annotated turn was represented as a vector of 449 features hypothesized to be relevant for emotion prediction (Forbes-Riley and Litman, 2004). The features represent acoustic-prosodic (pitch, amplitude, temporal), lexical, and other linguistic characteristics of both the turn and its local and global dialogue context.</Paragraph>
  </Section>
</Paper>
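
To make the data representation concrete, the following is a minimal sketch, not taken from the paper, of how the agreed-upon turns might be stored as 449-dimensional feature vectors with binary emotion labels, and how the majority-class baseline implied by the reported 51.71% Non-Emotional split could be computed. The names EmotionTurn and majority_baseline, and the use of Python dataclasses, are illustrative assumptions.

    from dataclasses import dataclass
    from typing import Dict, List

    # Hypothetical container for one annotated student turn: a feature vector
    # (acoustic-prosodic, lexical, and contextual features; 449 values in the
    # setup described above) plus the agreed-upon binary emotion label.
    @dataclass
    class EmotionTurn:
        features: List[float]   # length 449 in this dataset
        label: str              # "Emotional" or "Non-Emotional"

    def majority_baseline(turns: List[EmotionTurn]) -> float:
        """Accuracy obtained by always predicting the most frequent label."""
        counts: Dict[str, int] = {}
        for turn in turns:
            counts[turn.label] = counts.get(turn.label, 0) + 1
        return max(counts.values()) / len(turns)

    # With the 350 agreed-upon turns (181 Non-Emotional, 169 Emotional),
    # the baseline is 181 / 350 = 0.5171..., i.e. the 51.71% reported above.

This is only a sketch under the stated assumptions; the paper itself does not specify a storage format for the feature vectors.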