<?xml version="1.0" standalone="yes"?>
<Paper uid="J97-1006">
  <Title>Smith and Gordon Human-Computer Dialogue</Title>
  <Section position="10" start_page="165" end_page="165" type="concl">
    <SectionTitle>
8. Conclusions
</SectionTitle>
    <Paragraph position="0"> While there is ample analysis of dialogue structure based on human-human and simulated human-computer dialogue, there is very little information on the structure of actual human-computer dialogue. In this paper we have reviewed an integrated approach to dialogue processing that allows a system to support variable initiative behavior in its interactions with human users. Furthermore, we have reported the results from the analysis of 141 dialogues collected during experimental use with a system based on the overall dialogue-processing model. These results indicate differences in both dialogue structure and user behavior as a function of the computer's level of initiative. An important open question is the degree to which the parameters of the transition model for task-oriented dialogues for repair assistance are domain dependent. For example, the relative amount of time spent in each subdialogue phase is likely to be highly dependent on the domain. Furthermore, the model does not fully take into account different types of miscommunication and their repair. These issues require further study.</Paragraph>
    <Paragraph position="1"> The next step in extending the dialogue-processing model is to incorporate the knowledge gained from this study in addressing two of the most significant unresolved problems in human-computer dialogue: (1) automatic switching of initiative during dialogue; and (2) automatic detection and repair of miscommunication. The current dialogue-processing model considers subdialogues at the lower level of basic domain actions. Extending the model to describe dialogue structure at the more abstract level of task phases would allow the system to track the excessive and unusual subdialogue transitions observed in this study. Such tracking can be used for recognizing evolving user expertise as well as detecting a lack of mutual understanding about the current situation. Normally, implemented dialogue systems tend to be based on processing models that are rich in domain information, but are deficient in one or more areas of knowledge about dialogue. Incorporating more metaknowledge about dialogue structure into the model should lead to more human-like performance in handling initiative changes and miscommunication problems.</Paragraph>
    <Paragraph position="2"> The observations reported in this paper are an initial step on the long road to a comprehensive model of actual human-computer dialogue structure. It is hoped that these results will encourage other researchers to construct experimental NL dialogue systems, test these systems, and then analyze and report the results so that a more comprehensive view of human-computer dialogue structure can be obtained. In general, we believe that the natural life cycle of experimental NL dialogue systems should be one of analyzing, modeling, building, and testing so that the analysis of actual human-computer dialogues can lead to the development of more effective systems. Such a methodology allows us to gain clearer insight into the evolving nature of human-computer dialogues.</Paragraph>
  </Section>
class="xml-element"></Paper>