<?xml version="1.0" standalone="yes"?> <Paper uid="P81-1018"> <Title>A Rule-Based Conversation Participant</Title> <Section position="1" start_page="0" end_page="83" type="abstr"> <SectionTitle> Abstract </SectionTitle> <Paragraph position="0"> The problem of modeling human understanding and generation of a coherent dialog is investigated by simulating a conversation participant. The rule-based system currently under development attempts to capture the intuitive concept of &quot;topic&quot; using data structures consisting of declarative representations of the subjects under discussion, linked to the utterances and rules that generated them. Scripts, goal trees, and a semantic network are brought to bear by general, domain-independent conversational rules to understand and generate coherent topic transitions and specific output utterances.</Paragraph> <Paragraph position="1">
1. Rules, topics, and utterances
Numerous systems have been proposed to model human use of language in conversation (speech acts [1], MICS [3], Grosz [5]). They have attacked the problem from several different directions. Often an attempt has been made to develop some intersentential analog of syntax, despite the severe problems that grammar-oriented parsers have experienced. The program described in this paper avoids the use of such a grammar, using instead a model of the conversation's topics to provide the necessary connections between utterances. It is similar to the ELI parsing system, developed by Riesbeck and Schank [7], in that it uses relatively small, independent segments of code (or &quot;rules&quot;) to decide how to respond to each utterance, given the context of the utterances that have already occurred. The program currently operates in the role of a graduate student discussing qualifier exams, although the rules and control structures are independent of the domain and do not assume any a priori topic of discussion.</Paragraph>
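To make this concrete, one of the system's rules is shown below in the Lisp of the original program. Its TEST scans the current topics for those whose conversational purpose is a request for information (REQINFO) and which have not yet been closed; its ACTION maps over those topics, building question forms (QUESTIONIZE applied to GET-HYPO of the topic's value) and uttering each one (UTTER). The code is re-indented here, and the closing parentheses together with the final TOPICS argument of the outer MAPCAN are supplied as assumptions; they may not match the program's exact text.

    TEST:   (FOR-EACH TOPICS (AND (EQUAL 'REQINFO (GET X 'CPURPOSE))
                                  (NULL (GET X 'CLOSEDBY))))
    ACTION: (MAPCAN '(LAMBDA (X)
                       (PROG (TMP)
                         (RETURN
                           (COND ((SETQ TMP (QUESTIONIZE (GET-HYPO (EVAL X))))
                                  (MAPCAN '(LAMBDA (Y)
                                             (COND (Y (LIST (UTTER Y (LIST X))))))
                                          TMP))))))
                    TOPICS)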
<Paragraph position="2"> The main goals of this project are: * To develop a small number of general rules that manipulate internal models of topics in order to produce a coherent conversation.</Paragraph> <Paragraph position="3"> * To develop a representation for these models of topics which will enable the rules to generate responses, control the flow of conversation, and maintain a history of the system's actions during the current conversation.</Paragraph> <Paragraph position="4"> * To integrate information from a semantic network, scripts, dynamic goal trees, and the current conversation in order to allow intelligent action by the rules.</Paragraph> <Paragraph position="5"> This research was sponsored in part by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory under Contract F33615-78-C-1551. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the US Government.</Paragraph> <Paragraph position="6"> The rule-based approach was chosen because it appears to work in a better and more natural way than syntactic pattern matching in the domain of single utterances, even though a grammatical structure can be clearly demonstrated there. If it is awkward to use a grammar for single-sentence analysis, why expect it to work in the larger domain of human discourse, where there is no obviously demonstrable &quot;syntactic&quot; structure? In place of grammar productions, rules are used which can initiate and close topics, and form utterances based on the input, current topics, and long-term knowledge. This set of rules does not include any domain-specific inferences; instead, these are placed into the semantic network when the situations in which they apply are discussed.</Paragraph> <Paragraph position="7"> It is important to realize that a &quot;topic&quot; in the sense used in this paper is not the same thing as the concept of &quot;focus&quot; used in the anaphora and coreference disambiguation literature. There, the idea is to decide which part of a sentence is being focused on (the &quot;topic&quot; of the sentence), so that the system can determine which phrase will be referred to by any future anaphoric references (such as pronouns). In this paper, a topic is a concept, possibly encompassing more than the sentence itself, which is &quot;brought to mind&quot; when a person hears an utterance (the &quot;topic&quot; of a conversation). It is used to decide which utterances can be generated in response to the input utterance, something that the focus of a sentence (by itself) cannot in general do. The topics need to be stored (as opposed to being generated only when needed) simply because a topic raised by an input utterance might not be addressed until a more interesting topic has been discussed.</Paragraph> <Paragraph position="8"> The data structure used to represent a topic is simply an object whose value is a Conceptual Dependency (or CD) [8] description of the topic, with pointers to rules, utterances, and other topics which are causally or temporally related to it, plus an indication of what conversational goal of the program this topic is intended to fulfill. The types of relations represented include: the rule (and any utterances involved) that resulted in the generation of the topic, any utterances generated from the topic, the topics generated before and after this one (if any), and the rule (and utterances) that resulted in the closing of this topic (if it has been closed). Utterances have a similar representation: a CD expression with pointers to the rules, topics, and other utterances to which they are related. This interconnected set of CD expressions is referred to as the topic-utterance graph, a small example of which (without CDs) is illustrated in Figure 1.1. The various pointers allow the program to remember what it has or has not done, and why. Some are used by rules that have already been implemented, while others are provided for rules not yet built (the current rules are described in sections 2.2 and 3).</Paragraph>
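Since the TEST shown earlier inspects topics with property accesses such as (GET X 'CPURPOSE) and (GET X 'CLOSEDBY), a topic node can be pictured as a Lisp atom whose value holds the CD description and whose property list holds the graph pointers. The constructor below is only a minimal Common Lisp sketch of such a node; apart from CPURPOSE and CLOSEDBY, the property names and the constructor itself are illustrative assumptions, not the program's actual code.

    (DEFUN MAKE-TOPIC (CD-FORM GOAL RULE SOURCE-UTTERANCES PREV-TOPIC)
      ;; Build one node of the topic-utterance graph as an atom with properties.
      (LET ((TOPIC (GENSYM "TOPIC")))
        (SETF (SYMBOL-VALUE TOPIC) CD-FORM)               ; the CD description is the topic's value
        (SETF (GET TOPIC 'CPURPOSE) GOAL)                 ; conversational goal this topic serves
        (SETF (GET TOPIC 'RAISEDBY) RULE)                 ; rule that generated the topic
        (SETF (GET TOPIC 'SOURCE-UTTS) SOURCE-UTTERANCES) ; utterances involved in raising it
        (SETF (GET TOPIC 'UTTERANCES) NIL)                ; utterances later generated from it
        (SETF (GET TOPIC 'PREV) PREV-TOPIC)               ; temporally adjacent topics
        (SETF (GET TOPIC 'NEXT) NIL)
        (SETF (GET TOPIC 'CLOSEDBY) NIL)                  ; filled in by the rule that closes the topic
        TOPIC))

An utterance node would carry the symmetric pointers back to its rules and topics, and closing a topic amounts to filling its CLOSEDBY property, which is exactly what the TEST above checks with (NULL (GET X 'CLOSEDBY)).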
</Section> </Paper>