<?xml version="1.0" standalone="yes"?>
<Paper uid="H92-1017">
  <Title>Recent Improvements and Benchmark Results for Paramax ATIS System</Title>
  <Section position="3" start_page="0" end_page="91" type="metho">
    <SectionTitle>
2. SYSTEM IMPROVEMENTS
2.1. Non-Monotonic Reasoning
</SectionTitle>
    <Paragraph position="0"> *This paper was supported by DARPA contract N000014-89-C0171, administered by the Office of Naval Research, and by internal funding from Paramax Systems Corporation (formerly Unisys Defense Systems). We wish to thank Suzanne Taylor for helpful discussions on applying natural language constraints to OCR enhancement.</Paragraph>
    <Paragraph position="1"> We previously described [1] a feature of the PUNDIT natural language processing system whereby the system makes inferences involving more than one decomposition. For example, the instantiated decompositions produced for "flights leaving Boston" are:
flight_C(flight1, source(_), ...)
leaveP(leave1, flight(flight1), source(boston), ...)
Application of a rule relating the leaveP and flight_C decompositions results in the source slot of the flight_C decomposition being instantiated to "boston".</Paragraph>
    <Paragraph position="2"> We have extended this feature to make it possible to retract such inferences. This extension allows PUNDIT to do non-monotonic reasoning [2]. That is, the system can make and reason from tentative inferences, which can be removed from the context if they are not supported by the developing dialog. The facility has been implemented in a fully general way, so that any test that can be coded in PROLOG can be the trigger for retraction.</Paragraph>
    <Paragraph position="3"> Currently we use this capability to retract certain inferences which result in a database call with no answers. This facilitates better dialog processing. If the query "Do any flights from Boston to Denver serve dinner?" is answered by a list of one or more such dinner flights, the preferred antecedent of a subsequent reference to "those flights" is the set of those dinner flights. In contrast, if the answer to the query is "no", a subsequent reference to "those flights" refers to all flights from Boston to Denver. As explained in detail below, the ability to conditionally retract the inference enables our system to correctly identify the preferred antecedent in both cases.</Paragraph>
    <Paragraph position="4"> In addition, this capability simplifies our system's processing of yes/no questions. The same inference rule applies to both "flights serving dinner" and "Do any flights serve dinner"; i.e., the rule makes no provision for distinguishing between these two contexts. Yet when they are embedded in a dialog, there are some differences. If a query such as "Show me flights from Boston to Denver serving dinner" revealed that there were no such flights, a subsequent query about "those flights" would seem rather odd. In contrast, as shown in the preceding paragraph, such a subsequent query can follow the yes/no question quite naturally.</Paragraph>
    <Paragraph position="5"> The detailed processing of the example query "Do any flights from Boston to Denver serve dinner?" proceeds as follows. First a rule relating the decompositions for "flights" and "serve" causes the meals slot of the flight_C decomposition to be instantiated:</Paragraph>
    <Paragraph position="7"> Then a database request is made for all dinner flights from Boston to Denver. If there are any, the flight_C decomposition as modified by the inference is retained; i.e., the context is left with a concept of the dinner flights from Boston to Denver. If there are no such flights, the inference is retracted, leaving in the context a concept of (all) the flights from Boston to Denver. In both cases, the correct concept is made available for subsequent reference resolution.</Paragraph>
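    <Paragraph position="8"> The retract-on-empty-answer behavior just described can be sketched in a few lines. The following is a minimal illustration in Python, not PUNDIT's actual Prolog implementation; the function name, the flat dictionary representation of a decomposition, and the toy database are all assumptions made for the example.

```python
def answer_query(flight, inference, db):
    """Tentatively apply `inference` to the flight decomposition, issue the
    database request, and retract the inference if no rows come back."""
    saved = dict(flight)           # snapshot so the inference can be undone
    flight.update(inference)       # tentative inference, e.g. meals = "dinner"
    rows = [r for r in db
            if all(r.get(k) == v for k, v in flight.items())]
    if not rows:
        flight.clear()
        flight.update(saved)       # retract: context keeps all the flights
    return rows

# Toy database: two flights from Boston to Denver, one serving dinner.
db = [
    {"source": "boston", "dest": "denver", "meals": None},
    {"source": "boston", "dest": "denver", "meals": "dinner"},
]
flight = {"source": "boston", "dest": "denver"}
rows = answer_query(flight, {"meals": "dinner"}, db)
# A dinner flight exists, so the inference is retained and `flight`
# now denotes the dinner flights from Boston to Denver.
```

If the database held no dinner flights, the same call would return no rows and leave `flight` describing all flights from Boston to Denver, mirroring the two cases in the text.</Paragraph>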
    <Section position="1" start_page="89" end_page="90" type="sub_section">
      <SectionTitle>
2.2. Implicit Reference Resolution
</SectionTitle>
      <Paragraph position="0"> The pragmatics component in PUNDIT takes care of explicit reference resolution [3], as in What is the cost of those flights? But there are many cases where the reference to be resolved is implicit. A second extension to our system handles implicit reference resolution as in the following pair of queries: Show me morning flights from Boston to Washington.</Paragraph>
      <Paragraph position="1"> Show me afternoon flights.</Paragraph>
      <Paragraph position="2"> We have implemented an ATIS-specific heuristic which addresses this need. It is invoked when our system is attempting to produce a database request for flights or fares but cannot find either the origin or destination (or both) in the current utterance. The heuristic allows the system to broaden its search for this information to earlier inputs. We have circumscribed this search in order to limit incorrect inferences; currently the heuristic works only in the following restricted manner: The system finds the most recent flight entity in the discourse context other than the one explicitly involved in the current request, and checks that this entity satisfies two conditions: a. If any origin or destination information is known about the current flights, the candidate entity must have no conflicting origin or destination information. So, for example, if a dialog proceeds Show me flights from Boston to Philadelphia.</Paragraph>
      <Paragraph position="3"> Show me the earliest flight from Boston.</Paragraph>
      <Paragraph position="4"> the condition will be satisfied and the heuristic will apply, whereas for Show me flights from Boston to Philadelphia.</Paragraph>
      <Paragraph position="5"> Show me the earliest flight leaving Philadelphia.</Paragraph>
      <Paragraph position="6"> it will not apply. Unfortunately, the heuristic currently will not apply in the following case, either: Show me flights from Boston to Philadelphia.</Paragraph>
      <Paragraph position="7"> Show me flights to Pittsburgh.</Paragraph>
      <Paragraph position="8"> It would not be hard to refine the heuristic to apply to the above sequence, but we have not done so.</Paragraph>
      <Paragraph position="9"> b. The query giving rise to the candidate entity must have been successfully processed, and must have received a non-null response. By successfully processed, we mean that a database request was made for the query. The system cannot tell, of course, if it was the correct request. But if no request was made, that is evidence that the earlier query either was not properly understood or that it was flawed in some way, and that it would be dangerous to use the candidate entity as a referent. Given the fact that our system currently fails to create database requests for over one-third of its inputs, taking this conservative approach turns out to be well-justified.</Paragraph>
      <Paragraph position="10"> The requirement that the database request produce a non-null response is needed for cases such as: Show me afternoon flights from Boston to San Francisco.</Paragraph>
      <Paragraph position="11"> \[there aren't any\] Show me flights on wide-body aircraft.</Paragraph>
      <Paragraph position="12"> If the heuristic applied, it would create a request for afternoon flights from Boston to San Francisco on wide-body aircraft, and obviously none would be found.</Paragraph>
      <Paragraph position="13"> If the candidate entity satisfies both of the above conditions, the non-conflicting properties of the current (origin or destination-deficient) entity and the candidate entity are merged. Thus for the pair of queries Show me morning flights from Boston to San Francisco.</Paragraph>
      <Paragraph position="14"> Show me flights on wide-body aircraft.</Paragraph>
      <Paragraph position="15">  our system generates a request for morning flights from Boston to San Francisco on wide-body aircraft. However, for Show me flights from Boston to San Francisco that serve lunch.</Paragraph>
      <Paragraph position="16"> Show me dinner flights.</Paragraph>
      <Paragraph position="17"> it asks for flights from Boston to San Francisco that serve dinner, not flights that serve both lunch and dinner.</Paragraph>
      <Paragraph position="18"> If the candidate entity fails to satisfy both conditions, the heuristic simply fails; no other candidate entities are considered.</Paragraph>
      <Paragraph position="19"> On the basis of training data, we predicted that the implicit reference resolution heuristic would apply to about 15 percent of discourse-dependent utterances. The recent benchmark test showed that our heuristic was more relevant than we expected, although it also turned out to be somewhat more error-prone. On the February 1992 natural language test, it resulted in the production of a database request for 48 class D utterances that otherwise would have been unanswered. Of these requests, 38 obtained the correct answer and 10 did not, so the heuristic produced a net improvement in our overall score in spite of its unfortunately high level of errors. (It also was invoked on 4 class A queries, presumably inappropriately, although one of the four ended up with the correct answer!) There were 285 class D utterances in the test, so the heuristic was invoked for 16.8 percent of them. In addition, there was an undetermined number of other utterances for which it could have been invoked if our system had successfully processed the appropriate antecedent utterances.</Paragraph>
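      <Paragraph position="20"> The two conditions on the candidate entity, and the merge of non-conflicting properties, can be summarized in a short sketch. This is an illustrative Python rendering, not the system's code; the slot names, the candidate record layout, and the function name are invented for the example.

```python
def resolve_implicit(current, candidate):
    """Merge origin/destination information from the most recent prior
    flight entity into the current request, per conditions (a) and (b)."""
    # Condition (a): no conflicting origin or destination information.
    for slot in ("origin", "destination"):
        if slot in current and slot in candidate["entity"] \
                and current[slot] != candidate["entity"][slot]:
            return None                      # heuristic simply fails
    # Condition (b): the earlier query must have led to a database
    # request, and that request must have had a non-null response.
    if not candidate["db_request_made"] or not candidate["rows"]:
        return None
    # Merge non-conflicting properties; the current request's own slots
    # take precedence (so dinner replaces lunch rather than joining it).
    merged = dict(candidate["entity"])
    merged.update(current)
    return merged

# "Show me morning flights from Boston to San Francisco." (answered)
candidate = {"entity": {"origin": "boston", "destination": "san francisco",
                        "period": "morning"},
             "db_request_made": True, "rows": [{"flight": 101}]}
# "Show me flights on wide-body aircraft." lacks origin and destination.
merged = resolve_implicit({"aircraft": "wide-body"}, candidate)
```

With these inputs, `merged` describes morning wide-body flights from Boston to San Francisco, matching the merged request in the text; a conflicting destination or an empty earlier response would make the heuristic fail instead.</Paragraph>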
    </Section>
    <Section position="2" start_page="90" end_page="91" type="sub_section">
      <SectionTitle>
2.3. Database Query Paraphrases
</SectionTitle>
      <Paragraph position="0"> We have added a database query paraphrase capability to our ATIS system, which is used as follows.</Paragraph>
      <Paragraph position="1"> When the system creates a database query and receives the response from the database, both the query and the response are passed to an output formatting routine. At first this routine merely formatted the tabular response, but it turned out to be difficult for users to notice whether the table displayed to them contained the desired information. Some of the time the system would misinterpret the user's query, but the misinterpretation would go undetected. For example, a request for flights from Boston to Pittsburgh on Monday may have resulted in a table of all flights from Boston to Pittsburgh, of which perhaps only one did not operate on Mondays. To address this shortcoming, we implemented a query paraphraser. The database query, after all, encodes what is actually retrieved from the database, so if we label the output table with a description of what it contains, any discrepancy between what the user requested and what the system provided can be spotted more easily. And in the majority of cases, when the system provided the desired output, the paraphrase served as a useful header to the table. For example, for the input sentence Show me round-trip fares for flights from Boston to Denver leaving on Sunday.</Paragraph>
      <Paragraph position="2"> the following paraphrase is produced.</Paragraph>
      <Paragraph position="3"> Displaying:
  Fare(s):
    - round-trip
    - on Sunday
    - for Flights:
      - from Boston
      - to Denver
As can be seen from the above example, the paraphrase is not in sentence form, but in a stylized form that is easy to read and understand. Sometimes it gives useful feedback concerning the system's interpretation of imprecise queries by the user, as in: I need a flight from Boston to Pittsburgh that leaves early in the morning.</Paragraph>
      <Paragraph position="4"> And when an error is made, the user can notice it easily, particularly if told to check the easy-to-read paraphrases for missing conditions, as in this example from the October 1991 dry run test: I want to travel from Atlanta to Baltimore early in the  As in the above example, even though the system misinterprets the user's query, sometimes the desired answer can be obtained from the response produced, particularly with the guidance provided by the paraphrase of the response. In this way the system as a whole becomes more capable of assisting the user in reaching a successful conclusion to the travel planning task.</Paragraph>
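      <Paragraph position="5"> A paraphrase header of this kind is straightforward to generate once the database query's table and restrictions are known. The sketch below is an illustrative Python rendering of the stylized format shown above; the function name and argument layout are assumptions, not the Paramax implementation.

```python
def paraphrase(table, modifiers, nested=None, indent=1):
    """Render one 'Table: - modifier ...' block with two-space indents."""
    pad = "  " * indent
    lines = [f"{pad}{table}:"]
    lines += [f"{pad}  - {m}" for m in modifiers]
    if nested:
        # A restriction that is itself a table with its own modifiers,
        # e.g. fares restricted to a set of flights.
        sub_table, sub_mods = nested
        lines.append(f"{pad}  - for {sub_table}:")
        lines += [f"{pad}    - {m}" for m in sub_mods]
    return "\n".join(lines)

# Header for "Show me round-trip fares for flights from Boston to
# Denver leaving on Sunday."
header = "Displaying:\n" + paraphrase(
    "Fare(s)", ["round-trip", "on Sunday"],
    nested=("Flights", ["from Boston", "to Denver"]))
```

Printing `header` reproduces the indented "Displaying:" layout, with each restriction on its own line so a missing condition is easy to spot.</Paragraph>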
    </Section>
  </Section>
</Paper>