<?xml version="1.0" standalone="yes"?>
<Paper uid="P04-1045">
  <Title>Predicting Student Emotions in Computer-Human Tutoring Dialogues</Title>
  <Section position="8" start_page="0" end_page="0" type="concl">
    <SectionTitle>7 Conclusions and Current Directions</SectionTitle>
    <Paragraph position="0"> Our results show that acoustic-prosodic and lexical features can be used to automatically predict student emotion in computer-human tutoring dialogues.</Paragraph>
    <Paragraph position="1"> We examined emotion prediction using a classication scheme developed for our prior human-human tutoring studies (negative/positive/neutral), as well as using two simpler schemes proposed by other dialogue researchers (negative/non-negative, emotional/non-emotional). We used machine learning to examine the impact of different feature sets on prediction accuracy. Across schemes, our feature sets outperform a majority baseline, and lexical features outperform acoustic-prosodic features.</Paragraph>
    <Paragraph position="2"> While adding identi er features typically also improves performance, combining lexical and speech features does not. Our analyses also suggest that prediction in consensus-labeled turns is harder than in agreed turns, and that prediction in our computer-human corpus is harder and based on somewhat different features than in our human-human corpus.</Paragraph>
    <Paragraph position="3"> Our continuing work extends this methodology with the goal of enhancing ITSPOKE to predict and adapt to student emotions. We continue to manually annotate ITSPOKE data, and are exploring partial automation via semi-supervised machine learning (Maeireizo-Tokeshi et al., 2004). Further manual annotation might also improve reliability, as understanding systematic disagreements can lead to coding manual revisions. We are also expanding our feature set to include features suggested in prior dialogue research, tutoring-dependent features (e.g., pedagogical goal), and other features available in our logs (e.g., semantic analysis). Finally, we will explore how the recognized emotions can be used to improve system performance. First, we will label human tutor adaptations to emotional student turns in our human tutoring corpus; this labeling will be used to formulate adaptive strategies for ITSPOKE, and to determine which of our three prediction tasks best triggers adaptation.</Paragraph>
  </Section>
class="xml-element"></Paper>