<?xml version="1.0" standalone="yes"?> <Paper uid="P06-4007"> <Title>FERRET: Interactive Question-Answering for Real-World Environments</Title> <Section position="4" start_page="25" end_page="25" type="metho"> <SectionTitle> 2 The FERRET Interactive </SectionTitle> <Paragraph position="0"/> <Section position="1" start_page="25" end_page="25" type="sub_section"> <SectionTitle> Question-Answering System </SectionTitle> <Paragraph position="0"> This section provides a basic overview of the functionality provided by the FERRET interactive Q/A system. 1 FERRET returns three types of information in response to a user's query. First, FERRET utilizes an automatic Q/A system to find answers to users' questions in a document collection. In order to provide users with the timely results that they expect from information-gathering applications (such as Internet search engines), every effort was made to reduce the time FERRET takes to extract answers from text. (In the current version of the system, answers are returned on average in 12.78 seconds. 2) In addition to answers, FERRET also provides information in the form of two different types of predictive question-answer pairs (or QUABs).</Paragraph> <Paragraph position="1"> 1For more on FERRET's capabilities, the reader is invited to consult (Harabagiu et al., 2005a); for more information on FERRET's predictive question generation component, please see (Harabagiu et al., 2005b).</Paragraph> <Paragraph position="2"> 2This test was run on a machine with a Pentium 4 3.0 GHz processor with 2 GB of RAM.</Paragraph> <Paragraph position="3"> With FERRET, users can select from QUABs that were either generated automatically from the set of documents returned by the Q/A system or selected from a large database of more than 10,000 question-answer pairs created offline by human annotators. 
In the current version of FERRET, the top 10 automatically generated and hand-crafted QUABs judged most relevant to the user's original question are returned to the user as potential continuations of the dialogue. Each set of QUABs is presented in a separate pane found to the right of the answers returned by the Q/A system; QUABs are ranked in order of relevance to the user's original query.</Paragraph> <Paragraph position="4"> Figure 1 provides a screenshot of FERRET's interface. Q/A answers are presented in the center pane of the FERRET browser, while QUAB question-answer pairs are presented in two separate tabs found in the rightmost pane of the browser. FERRET's leftmost pane includes a drag-and-drop clipboard which facilitates note-taking and annotation over the course of an interactive Q/A dialogue.</Paragraph> </Section> </Section> <Section position="5" start_page="25" end_page="26" type="metho"> <SectionTitle> 3 Predictive Question-Answering </SectionTitle> <Paragraph position="0"> First introduced in (Harabagiu et al., 2005b), a predictive questioning approach to automatic question-answering assumes that Q/A systems can use the set of documents relevant to a user's query in order to generate sets of questions, known as predictive questions, that anticipate a user's information needs. Under this approach, topic representations like those introduced in (Lin and Hovy, 2000) and (Harabagiu, 2004) are used to identify a set of text passages that are relevant to a user's domain of interest. Topic-relevant passages are then semantically parsed (using a PropBank-style semantic parser) and submitted to a question generation module, which uses a set of syntactic rewrite rules in order to create natural language questions from the original passage.</Paragraph> <Paragraph position="1"> Generated questions are then assembled into question-answer pairs known as QUABs, with the original passage serving as the question's answer, and are returned to the user. 
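A minimal sketch of this generation step might look like the following; the frame representation, the single rewrite rule, and the function names are illustrative assumptions, not FERRET's actual implementation:

```python
# Illustrative sketch of QUAB generation: one syntactic rewrite rule
# applied to a PropBank-style frame extracted from a passage. The frame
# format and the rule are toy assumptions for illustration only.

def generate_quab(passage, frame):
    """Build a question-answer pair; the source passage is the answer."""
    wh = "Who" if frame.get("agent_is_person") else "What"
    question = "{} {} {}?".format(wh, frame["predicate"], frame["arg1"])
    return {"question": question, "answer": passage}

passage = ("A number of countries are now beginning to position "
           "themselves as outsourcing centers.")
frame = {"predicate": "position themselves as",
         "arg1": "outsourcing centers",
         "agent_is_person": False}
quab = generate_quab(passage, frame)
# quab["question"] == "What position themselves as outsourcing centers?"
```

A real system would apply many such rules, conditioned on the parse of the passage, and filter out ungrammatical candidates.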
For example, three of the predictive question-answer pairs generated from the documents returned for question Q0, "What has been the impact of job outsourcing programs on India's relationship with the U.S.?", are presented in Table 1.</Paragraph> <Paragraph position="2"> Q0: What has been the impact of job outsourcing programs on India's relationship with the U.S.? PQ1: How could India respond to U.S. efforts to limit job outsourcing? A1: U.S. officials have countered that the best way for India to counter U.S. efforts to limit job outsourcing is to further liberalize its markets.</Paragraph> <Paragraph position="3"> PQ2: What benefits does outsourcing provide to India? A2: "India's prowess in outsourcing is no longer the only reason why outsourcing to India is an attractive option. The difference lies in the scalability of major Indian vendors, their strong focus on quality and their experience delivering a wide range of services," says John Blanco, senior vice president at Cablevision Systems Corp. in Bethpage, N.Y.</Paragraph> <Paragraph position="4"> PQ3: Besides India, what other countries are popular destinations for outsourcing? A3: A number of countries are now beginning to position themselves as outsourcing centers, including China, Russia, Malaysia, the Philippines, South Africa, and several countries in Eastern Europe. While none of these predictive questions provides an exact answer to the original question Q0, each QUAB supplies information complementary to the topic of job outsourcing: PQ1 provides details on how India could respond to anti-outsourcing legislation, PQ2 describes the benefits outsourcing offers India, and PQ3 identifies other countries that are likely targets for outsourcing. We believe that QUABs can play an important role in fostering extended dialogue-like interactions with users. 
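The selection behavior described in Section 2 (scoring each QUAB against the user's query and returning the top 10) can be sketched as follows; the word-overlap score is a placeholder, since the paper does not specify the actual relevance metric:

```python
# Toy relevance ranking for QUAB pairs: score each candidate by word
# overlap with the user's query and keep the top k. The overlap score
# stands in for whatever relevance metric FERRET actually uses.

def rank_quabs(query, quabs, k=10):
    query_words = set(query.lower().split())
    def overlap(quab):
        return len(query_words.intersection(quab["question"].lower().split()))
    return sorted(quabs, key=overlap, reverse=True)[:k]

quabs = [
    {"question": "What is the capital of France?"},
    {"question": "How could India respond to efforts to limit job outsourcing?"},
    {"question": "What benefits does outsourcing provide to India?"},
]
top = rank_quabs("impact of job outsourcing on India", quabs, k=2)
# top[0] is the India/outsourcing question with the largest overlap
```

Because Python's sort is stable, candidates with equal scores keep their original order, which gives deterministic output for ties.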
We have observed that the incorporation of predictive question-answer pairs into an interactive question-answering system like FERRET can promote dialogue-like interactions between users and the system. When presented with a set of QUAB questions, users typically selected a coherent set of follow-on questions that served to elaborate or clarify their initial question. The dialogue fragment in Table 2 provides an example of the kinds of dialogues that users can generate by interacting with the predictive questions that FERRET generates.</Paragraph> <Paragraph position="5"> In experiments with human users of FERRET, we have found that QUAB pairs enhanced the quality of the information that users were able to retrieve during a dialogue with the system. 3 In 100 user dialogues with FERRET, users clicked hyperlinks associated with QUAB pairs 56.7% of the time, despite the fact that the system returned (on average) approximately 20 times more answers than QUAB pairs. Users also derived value from the information contained in QUAB pairs: reports written by users who had access to QUABs while gathering information were judged to be significantly (p < 0.05) better than reports written by users who had access to FERRET's Q/A system alone.</Paragraph> </Section> <Section position="6" start_page="26" end_page="27" type="metho"> <SectionTitle> 4 Answering Complex Questions </SectionTitle> <Paragraph position="0"> As was mentioned in Section 2, FERRET uses a special dialogue-optimized version of an automatic question-answering system in order to find high-precision answers to users' questions in a document collection.</Paragraph> <Paragraph position="1"> During a Q/A dialogue, users of interactive Q/A systems frequently ask complex questions that must be decomposed syntactically and semantically before they can be answered using traditional Q/A techniques. 
Complex questions submitted to FERRET are first subject to a set of syntactic decomposition heuristics that seek to extract each overtly mentioned subquestion from the original question. Under this approach, questions featuring coordinated question stems, entities, verb phrases, or clauses are split into their separate conjuncts; answers to each syntactically decomposed question are presented separately to the user. Table 3 provides an example of the syntactic decomposition performed in FERRET.</Paragraph> <Paragraph position="2"> FERRET also performs semantic decomposition of complex questions using techniques first outlined in (Harabagiu et al., 2006). Under this approach, three types of semantic and pragmatic information are identified in complex questions: (1) information associated with a complex question's expected answer type, (2) semantic dependencies derived from predicate-argument structures discovered in the question, and (3) topic information derived from documents retrieved using the keywords contained in the question. Examples of the types of automatic semantic decomposition performed in FERRET are presented in Table 4.</Paragraph> <Paragraph position="3"> Complex questions are decomposed by a procedure that operates as a Markov chain, following a random walk on a bipartite graph of question decompositions and relations relevant to the topic of the question. Unlike with syntactic decomposition, FERRET automatically combines answers from semantically decomposed questions and presents users with a single set of answers that represents the contributions of each subquestion. 
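As an illustration of the simplest syntactic case above (coordinated question stems), a toy decomposition heuristic might look like the following; FERRET's actual heuristics operate over parses rather than raw strings:

```python
# Toy syntactic decomposition: split a question whose wh-stems are
# coordinated ("When and where ...?") into one subquestion per conjunct.
# This handles only the stem-coordination case and is illustrative only.

def decompose_stems(question):
    words = question.split()
    if len(words) > 3 and words[1].lower() == "and":
        stem_a, stem_b, body = words[0], words[2], " ".join(words[3:])
        return [stem_a + " " + body, stem_b.capitalize() + " " + body]
    return [question]

subquestions = decompose_stems("When and where did the outsourcing summit take place?")
# ["When did the outsourcing summit take place?",
#  "Where did the outsourcing summit take place?"]
```

Coordinated entities, verb phrases, and clauses would need analogous rules keyed to the relevant parse constituents, with each subquestion answered separately as described above.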
Users are notified, however, that semantic decomposition has occurred; decomposed questions are displayed to the user upon request.</Paragraph> <Paragraph position="4"> In addition to these techniques for answering complex questions, FERRET's Q/A system improves performance across a variety of question types by employing separate question processing strategies for four different types of questions: factoid questions, list questions, relationship questions, and definition questions.</Paragraph> </Section> </Paper>