<?xml version="1.0" standalone="yes"?> <Paper uid="W97-1409"> <Title>Planning Referential Acts for Animated Presentation Agents</Title> <Section position="5" start_page="0" end_page="0" type="concl"> <SectionTitle> 5 Conclusion </SectionTitle> <Paragraph position="0"> In this paper, we have argued that the use of life-like characters in the interface can substantially increase the effectiveness of referring expressions. We have presented an approach to the automated planning of referring expressions which may involve different media and dedicated body movements of the character. While content selection and media choice are performed in a proactive planning phase, the transformation of referential acts into fine-grained animation sequences is done reactively, taking into account the current situation of the character at presentation runtime.</Paragraph> <Paragraph position="1"> The approach presented here provides a good starting point for further extensions. Possible directions include: * Extending the repertoire of pointing gestures Currently, the Persona only supports punctual pointing with a hand or a stick. In the future, we will investigate additional pointing gestures, such as encircling and underlining, by exploiting the results from the XTRA project (cf.</Paragraph> <Paragraph position="2"> (Rei92)).</Paragraph> <Paragraph position="3"> * Spatial deixis The applicability of spatial prepositions, such as &quot;on the left&quot;, depends on the orientation of the space, which is given either by the intrinsic organization of the reference object or by the location of the observer (see e.g. (Wun85)). While we assumed in our previous work on the semantics of spatial prepositions that the user's location coincides with the presenter's location (cf. 
(Waz92)), we now have to distinguish whether an object is localized from the user's point of view or from the point of view of the Persona as the situated presenter.</Paragraph> <Paragraph position="4"> * Referring to moving target objects A still unsolved problem arises from the dynamic nature of online presentations. Since image attributes may change at any time, the visual focus has to be updated continuously, which may be very time-consuming. For instance, the Persona is currently unable to point to moving objects in an animation sequence, since there is simply not enough time to determine an object's coordinates at presentation time.</Paragraph> <Paragraph position="5"> * Empirical evaluation of the Persona's pointing gestures We have argued that the use of a life-like character enables the realization of more effective referring expressions. To empirically validate this hypothesis, we are currently embarking on a study of the user's reference resolution processes with and without the Persona.</Paragraph> </Section></Paper>