<?xml version="1.0" standalone="yes"?>
<Paper uid="E06-1045">
<Title>Data-driven Generation of Emphatic Facial Displays</Title>
<Section position="1" start_page="0" end_page="0" type="abstr">
<SectionTitle>Abstract</SectionTitle>
<Paragraph position="0">We describe an implementation of data-driven selection of emphatic facial displays for an embodied conversational agent in a dialogue system. A corpus of sentences in the domain of the target dialogue system was recorded, and the facial displays used by the speaker were annotated. The data from those recordings was used in a range of models for generating facial displays, each model making use of a different amount of context or choosing displays differently within a context. The models were evaluated in two ways: by cross-validation against the corpus, and by asking users to rate the output. The predictions of the cross-validation study differed from the actual user ratings. While the cross-validation gave the highest scores to models making a majority choice within a context, the user study showed a significant preference for models that produced more variation. This preference was especially strong among the female subjects.</Paragraph>
</Section>
</Paper>