<?xml version="1.0" standalone="yes"?>
<Paper uid="W93-0217">
  <Title>The Need for Intentionally-Based Approaches to Language*</Title>
  <Section position="2" start_page="65" end_page="66" type="abstr">
    <SectionTitle>
</SectionTitle>
    <Paragraph position="0"> 1 User: Show me the generic concept called &quot;employee&quot;.
2 System: OK. (system displays network)
3 User: I can't fit a new IC below it. Can you move it up?
4 System: Yes. (system displays network)
5 User: OK, now make...</Paragraph>
    <Paragraph position="1">  The problem with Litman and Allen's approach, like RST-based approaches in general, is that it essentially provides only an utterance-to-utterance based analysis of discourse. In addition to not recognizing discourse segments as separate units with an overall purpose, the model also fails to recognize a subdialogue's relationship to the discourse in which it is embedded. That is, it cannot account for why agents engage in subdialogues. More recent models \[LC91, LC92, Ram91\] that augment Litman and Allen's two types of plans with other types also suffer from the same shortcomings 3.</Paragraph>
    <Paragraph position="2"> Evidence from Generation Work in generation has recognized a similar problem with respect to RST-based approaches. In particular, Moore gz Paris \[MP91\] (see also \[MP92, Hov93\]) have argued for the need to augment RST-based text plans or schemas \[Hov88, McK85\] with an intentional structure in order to respond to follow-up questions. The problem is that although solely RST-based approaches associate a communicative goal with each schema, they do not represent the intended effect of each component of the schema, nor the role that each component plays in satisfying the overall communicative goal associated with the schema. Without such information, a system cannot respond effectively if the hearer does not understand or accept its utterances. In response to this problem, Moore and Paris have devised a planner that constructs text plans containing both intentional and rhetorical information. By recording these text plans as part of the dialogue history, their system is able to reason about its previous utterances in interpreting and responding to users' follow-up questions.</Paragraph>
    <Paragraph position="3"> Conclusions Both the interpretation process and the generation process need intentionally-based approaches to language. In the former, a solely intentional approach provides a more general  model for understanding subdialogues and their relationships. In the latter, intentional information augments RST information to allow more effective participation in explanation dialogues. Although rhetorical relations have proved useful in machine-baaed natural language generation (see Hovy's recent survey \[Hov93\]), their cognitive role rem~ns unclear. Does a speaker actually have them &amp;quot;in mind&amp;quot; when he produces utterances? Or axe they only &amp;quot;compilations&amp;quot; of intentional information that axe computationally efficient for generation systems \[MP91\]? And if a speaker does have rhetorical relations in mind, does a hearer actually infer them? On that matter, I'd argue, baaed on the above discussion (and following Grosz and Sidner \[GS86\]), that a discourse can be understood even if the hearer (be it machine or person) cannot infer, construct, or name any such relations used by the speaker.</Paragraph>
  </Section>
</Paper>