<?xml version="1.0" standalone="yes"?> <Paper uid="P04-1073"> <Title>Question Answering using Constraint Satisfaction: QA-by-Dossier-with-Constraints</Title> <Section position="2" start_page="0" end_page="0" type="intro"> <SectionTitle> 1 Introduction </SectionTitle> <Paragraph position="0"> Traditionally, Question Answering (QA) has drawn on the fields of Information Retrieval, Natural Language Processing (NLP), Ontologies, Databases and Logical Inference, although it is at heart a problem of NLP. These fields have supplied the technology with which QA components have been built. We present here a new methodology that attempts to use QA holistically, along with constraint satisfaction, to better answer questions, without requiring any advances in the underlying fields.</Paragraph> <Paragraph position="1"> Because NLP is still very much an error-prone process, QA systems make many mistakes; accordingly, a variety of methods have been developed to boost the accuracy of their answers. Such methods include redundancy (getting the same answer from multiple documents, sources, or algorithms), deep parsing of questions and texts (hence improving the accuracy of confidence measures), inferencing (proving the answer from information in texts plus background knowledge) and sanity-checking (verifying that answers are consistent with known facts). To our knowledge, however, no QA system deliberately asks additional questions in order to derive constraints on the answers to the original questions. We have found empirically that when the top answer from our own QA system (Prager et al., 2000; Chu-Carroll et al., 2003) is wrong, the correct answer is often present later in the ranked answer list. In other words, the correct answer is in the passages retrieved by the search engine, but the system was unable to sufficiently promote the correct answer and/or deprecate the incorrect ones.
Our new approach of QA-by-Dossier-with-Constraints (QDC) uses the answers to auxiliary questions as additional evidence for ranking candidate answers to the original question. These auxiliary questions are selected such that natural constraints exist among the set of correct answers.</Paragraph> <Paragraph position="2"> After issuing both the original question and auxiliary questions, the system evaluates all possible combinations of the candidate answers and scores them by a simple function of both the answers' intrinsic confidences and how well the combination satisfies the aforementioned constraints. Thus we hope to improve the accuracy of an essentially NLP task by making an end-run around some of the more difficult problems in the field.</Paragraph> <Paragraph position="3"> We describe QDC and experiments to evaluate its effectiveness. Our results show that, on our test set and under standard evaluation metrics, using constraints yields a substantial improvement over our baseline system.</Paragraph> </Section> </Paper>
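The combination-scoring idea above can be sketched in a few lines of Python. This is an illustrative toy, not the paper's implementation: the candidate answers, confidence values, the product-plus-bonus scoring function, and the birth/death constraint are all assumptions made for the example.

```python
from itertools import product

def score_combination(combo, confidences, constraints, bonus=0.5):
    """Assumed scoring function: product of the answers' intrinsic
    confidences, boosted by a fixed bonus for each satisfied
    cross-answer constraint. (Illustrative; not the paper's formula.)"""
    score = 1.0
    for ans in combo:
        score *= confidences[ans]
    for constraint in constraints:
        if constraint(combo):
            score += bonus
    return score

def best_combination(candidates_per_question, confidences, constraints):
    """Evaluate every combination of candidate answers (one per
    question) and return the highest-scoring one."""
    return max(
        product(*candidates_per_question),
        key=lambda combo: score_combination(combo, confidences, constraints),
    )

# Toy example: original question "When was X born?" with auxiliary
# question "When did X die?", constrained so that birth precedes death.
births = [1951, 1809]   # hypothetical candidate birth years
deaths = [1865, 1820]   # hypothetical candidate death years
conf = {1951: 0.6, 1809: 0.4, 1865: 0.7, 1820: 0.3}
constraints = [lambda combo: combo[0] < combo[1]]

best = best_combination([births, deaths], conf, constraints)
print(best)  # → (1809, 1865)
```

Note how the constraint promotes 1809 over the individually higher-confidence 1951: (1951, 1865) scores only 0.6 × 0.7 = 0.42, while the consistent pair (1809, 1865) scores 0.4 × 0.7 + 0.5 = 0.78. This is exactly the promotion/deprecation effect the intro describes.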