<?xml version="1.0" standalone="yes"?> <Paper uid="J80-2003"> <Title>Responding Intelligently to Unparsable Inputs</Title> <Section position="2" start_page="0" end_page="0" type="abstr"> <SectionTitle> 1. Introduction </SectionTitle> <Paragraph position="0"> A truly natural language processing system need not have a perfect model of human language use, but it should have knowledge of whatever limitations its model has. Then, when a user exceeds these limitations, the system can interactively help the user rephrase the input in an acceptable way.</Paragraph> <Paragraph position="1"> This is a prerequisite to any practical application, whether it be natural language communication with a data base, a medical consultation system, or an office automation system. Users will not find such a system practical unless it gives helpful feedback when it fails to understand an input.</Paragraph> <Paragraph position="2"> As an example of how a user's input can exceed the system's model, we repeat an anecdote of Woods (1973b) about his system for answering natural language queries about lunar rock samples. One question asked was, &quot;What is the average weight of all your samples?&quot; This overstepped the system's model in at least three ways.</Paragraph> <Paragraph position="3"> 1 This work was supported in part by the University of Delaware Research Foundation, Inc.</Paragraph> <Paragraph position="4"> 2 Current address: W.L. Gore &amp; Associates, Inc., Newark, Delaware 19711.</Paragraph> <Paragraph position="5"> It surpassed the syntactic model, which did not provide for a predeterminer such as &quot;all&quot; preceding another determiner, such as &quot;your&quot; or &quot;the&quot;. 
Therefore, the sentence could not be parsed, even though &quot;What is the average weight of all samples&quot; or &quot;What is the average weight of your samples&quot; could have been.</Paragraph> <Paragraph position="6"> The semantic capabilities were also surpassed, because semantic rules for translating &quot;weight of&quot; to a functional representation had not been incorporated. Indeed, no data had been included for the weights of the samples.</Paragraph> <Paragraph position="7"> The third problem was that no semantic translation rules for possession were present: the input violated the system's model of pragmatics, for the designers had not attributed possession of the samples to the machine.</Paragraph> <Paragraph position="8"> This paper presents three ideas for giving useful feedback when a user exceeds the system's model.</Paragraph> <Paragraph position="9"> The ideas help to identify and explain the system's problem in processing an input in many cases, but do not perform the next step, which is suggesting how the user might rephrase the input.</Paragraph> <Paragraph position="10"> Each of these ideas has been tested in one of two systems: (1) an intelligent tutor for instruction in a foreign language and (2) a system which computes the presuppositions and entailments of a sentence. [Copyright 1980 by the Association for Computational Linguistics. Permission to copy without fee all or part of this material is granted provided that the copies are not made for direct commercial advantage and the Journal reference and this copyright notice are included on the first page. To copy otherwise, or to republish, requires a fee and/or specific permission. 0362-613X/80/020097-13$01.00. American Journal of Computational Linguistics, Volume 6, Number 2, April-June 1980, p. 97. Ralph M. Weischedel and John E. Black, Responding Intelligently to Unparsable Inputs.] 
For each idea presented in the paper, we will indicate whether it pertains to systems in general or specifically to the foreign language tutor, which is in the unique position of knowing more of the language than the user.</Paragraph> <Paragraph position="11"> In Section 2 of this paper we offer a way to recognize that an input exceeds the semantic model. In general, the presuppositions, or given information (defined later), of a user's input must be true in the system's model of context, for they represent facts that must be shared among the participants in a dialogue. For each presupposition not true in the machine's model, the system should print the false presupposition to identify an assumption that the user cannot make.</Paragraph> <Paragraph position="12"> Section 3 presents a technique for relaxing constraints to accept sentences that would not otherwise parse. One may well wonder how much of the inability of previous systems to understand input partially, to isolate the parts not understood, and to interpret ill-formed input is due to the syntactic component. A top-down, left-to-right parser essentially cannot proceed to material to the right of a construction the grammar is not prepared for. Yet such a parser should have much predictive ability about what was expected when the block occurred. Section 4 describes a collection of heuristics that capitalize on the predictive abilities of a top-down, left-to-right parser to produce helpful messages when input is not understood. Finally, Section 5 discusses related work, and Section 6 presents our conclusions.</Paragraph> </Section> </Paper>
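The Section 2 idea previewed above, that every presupposition of an input must hold in the system's model of context and that each false one should be printed back to the user, might be sketched as follows. This is a minimal toy, not the authors' system: the tuple encoding of facts, the predicate names, and the `false_presuppositions` helper are all illustrative assumptions.

```python
# Sketch of the Section 2 idea: the presuppositions of a user's input
# must be true in the system's model of context; each false one is
# printed to identify an assumption the user cannot make.
# The fact encoding below is a hypothetical toy, not the 1980 system's.

# The system's model of context: facts it takes as shared with the user.
context_model = {
    ("sample", "S10046"),
    ("sample", "S10047"),
    ("has-weight", "S10046"),
}

def false_presuppositions(presuppositions):
    """Return those presuppositions not true in the context model."""
    return [p for p in presuppositions if p not in context_model]

# Presuppositions some analyzer might extract from
# "What is the average weight of all your samples?"
presups = [
    ("sample", "S10046"),                 # there exist samples: holds
    ("has-weight", "S10047"),             # every sample has a weight: fails
    ("possesses", "machine", "samples"),  # the machine owns the samples: fails
]

for p in false_presuppositions(presups):
    print("False presupposition:", p)
```

Printing each failing fact tells the user which shared-knowledge assumption the system cannot honor, which is the feedback step the paper argues is a prerequisite to practical use.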
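The claim above, that a top-down, left-to-right parser blocks at the first construction its grammar does not provide for but knows at that point what it expected, can be made concrete with a toy recursive-descent parser. Like the grammar in Woods's anecdote, the noun-phrase rule below allows only a single determiner, so a predeterminer such as &quot;all&quot; before &quot;your&quot; blocks the parse. The grammar, word lists, and message wording are illustrative assumptions, not the authors' implementation.

```python
# Toy top-down, left-to-right (recursive-descent) parser illustrating
# Sections 3-4: it cannot proceed past a construction the grammar does
# not provide for, but at the block point it knows what it expected,
# and that expectation can be turned into a helpful message.

DETS = {"the", "your", "all", "a"}
ADJS = {"average"}
NOUNS = {"weight", "samples"}

class Block(Exception):
    """Raised where the parse blocks, carrying the parser's expectation."""
    def __init__(self, pos, expected, found):
        self.pos, self.expected, self.found = pos, expected, found

def parse_np(tokens, i):
    # NP -> DET ADJ* NOUN   (no predeterminer before another determiner,
    # so "all your samples" blocks at "your")
    if i < len(tokens) and tokens[i] in DETS:
        i += 1
    while i < len(tokens) and tokens[i] in ADJS:
        i += 1
    if i < len(tokens) and tokens[i] in NOUNS:
        return i + 1
    found = tokens[i] if i < len(tokens) else "end of input"
    raise Block(i, "a noun (possibly preceded by adjectives)", found)

def parse_query(tokens):
    # QUERY -> "what" "is" NP "of" NP
    i = 0
    for word in ("what", "is"):
        if i >= len(tokens) or tokens[i] != word:
            found = tokens[i] if i < len(tokens) else "end of input"
            raise Block(i, repr(word), found)
        i += 1
    i = parse_np(tokens, i)
    if i < len(tokens) and tokens[i] == "of":
        i = parse_np(tokens, i + 1)
    return i

tokens = "what is the average weight of all your samples".split()
try:
    parse_query(tokens)
except Block as b:
    # The expectation at the block point becomes the feedback message.
    print(f"I blocked at word {b.pos + 1} ({b.found!r}); "
          f"I expected {b.expected} there.")
```

Dropping the predeterminer, e.g. `"what is the average weight of your samples"`, parses through; and relaxing the single-determiner constraint, in the spirit of Section 3, would amount to letting `parse_np` consume more than one determiner before the noun.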