<?xml version="1.0" standalone="yes"?> <Paper uid="P04-1073"> <Title>Question Answering using Constraint Satisfaction: QA-by-Dossier-with-Constraints</Title> <Section position="3" start_page="0" end_page="0" type="relat"> <SectionTitle> 2 Related Work </SectionTitle> <Paragraph position="0"> Logic and inferencing have been a part of Question-Answering since its earliest days. The first such systems employed natural-language interfaces to expert systems, e.g. SHRDLU (Winograd, 1972), or to databases, e.g. LUNAR (Woods, 1973) and LIFER/LADDER (Hendrix et al., 1977). CHAT-80 (Warren &amp; Pereira, 1982) was a DCG-based NL-query system about world geography, written entirely in Prolog. In these systems, the NL question is transformed into a semantic form, which is then processed further; the overall architecture and system operation are very different from today's systems, however, primarily in that there is no text corpus to process.</Paragraph> <Paragraph position="1"> Inferencing is used in at least two of the more visible systems of the present day. The LCC system (Moldovan &amp; Rus, 2001) uses a Logic Prover to establish the connection between a candidate answer passage and the question. Text terms are converted to logical forms, and the question is treated as a goal which is &quot;proven&quot;, with real-world knowledge being provided by Extended WordNet. The IBM system PIQUANT (Chu-Carroll et al., 2003) uses Cyc (Lenat, 1995) in answer verification. Cyc can in some cases confirm or reject candidate answers based on its own store of instance information; in other cases, primarily of a numerical nature, Cyc can confirm whether candidates are within a reasonable range established for their subtype.</Paragraph> <Paragraph position="2"> At a more abstract level, the use of constraints discussed in this paper can be viewed as simply an example of finding support (or lack of it) for candidate answers.
</Paragraph> <Paragraph position="3"> Many current systems (see, e.g., (Clarke et al., 2001), (Prager et al., 2004)) employ redundancy as a significant feature of operation: if the same answer appears multiple times in an internal top-n list, whether from multiple sources or multiple algorithms/agents, it is given a confidence boost, which affects whether and how it is returned to the end-user.</Paragraph> <Paragraph position="4"> Finally, our approach is somewhat reminiscent of the scripts introduced by Schank (Schank et al., 1975; see also Lehnert, 1978). In order to generate meaningful auxiliary questions and constraints, we need a model (&quot;script&quot;) of the situation the question is about. Among others, we have identified one such script, modeling the human life cycle, that seems common to different question types regarding people.</Paragraph> </Section> </Paper>