<?xml version="1.0" standalone="yes"?> <Paper uid="A92-1010"> <Title>Integrating Natural Language Components into Graphical Discourse</Title> <Section position="3" start_page="72" end_page="72" type="relat"> <SectionTitle> 2 Related Work </SectionTitle> <Paragraph position="0"> One body of related work has primarily intended to coordinate different modes of expression within a framework of natural communication (cf. Hayes, 1987; Neal and Shapiro, 1988; Cohen et al., 1989, Feiner and McKeown, 1990; Bandyopadhyay, 1990; S tock, 1991; Wahlster et al., 1991). The principle effort here is to ascertain factors that can motivate the distribution of information across different modes (e.g. Arens and Hovy, 1990). A further body of related work moves towards problems of interaction by exploring the potential of the combination of natural language and deictic gestures (cf. Allgayer et al., 1989; Moore and Swartout, 1990). In a similar vein, approaches to flexible graphical interaction based on the conversationalmetaphor (cf. Reichman, 1986, 1989; Thiel, 1990) treat user inputs such as mouse clicks, menu selections, etc. not as invocations of methods that can be executed without regarding the dialog context, but instead as dialog acts expressing a discourse goal of the user. The direct manipulation of an object then becomes itself a part of the dialog of the user with the system, meaning that the system can respond in a more flexible way by taking into account the illocutionary and semantic aspects of the user's input. Related work from generation includes the correction of misconceptions in the work of McCoy (1986) and the explicit representation of information about a system's own knowledge and planning activities that is found in the Explainable Expert Systems system of Swartout and Smoliar (1987). None of this work, however, addresses the problems of meta-dialog concerning graphically supported interaction. null In our approach we bring the kind of natural language capabilities required by the first body of related work (i.e., graphical and natural language information functioning together) to bear on the kinds of problems that arise in the second body of related work when the direct manipulation of objects by a user creates goals that the system cannot fulfil. Here we must not only respond to the user's attempt to manipulate an object or the user's deictic gesture as a dialog act, but also be able to engage in a meta-interaction to debug that act if it creates a problematic situation. We show that natural language possesses properties that make it preferable over the graphical mode of expression for such meta-interaction and hence natural language generation needs to be supported even in graphics-oriented interfaces.</Paragraph> </Section> class="xml-element"></Paper>