<?xml version="1.0" standalone="yes"?> <Paper uid="P06-2068"> <Title>The Role of Information Retrieval in Answering Complex Questions</Title> <Section position="5" start_page="523" end_page="524" type="metho"> <SectionTitle> * Do the military personnel exchanges between </SectionTitle> <Paragraph position="0"> Israel and India show an increase in cooperation? If so, what are the driving factors behind this increase? Evidence for a relationship includes both the means to influence some entity and the motivation for doing so. Eight types of relationships (&quot;spheres of influence&quot;) were noted, including financial, movement of goods, family ties, co-location, common interest, and temporal connection.</Paragraph> <Paragraph position="1"> Relationship questions are significantly different from definition questions, which can be paraphrased as &quot;Tell me interesting things about x.&quot; Definition questions have received significant amounts of attention recently, e.g., (Hildebrandt et al., 2004; Prager et al., 2004; Xu et al., 2004; Cui et al., 2005). Research has shown that certain cue phrases serve as strong indicators for nuggets, and thus an approach based on matching surface patterns (e.g., appositives, parenthetical expressions) works quite well. Unfortunately, such techniques do not generalize to relationship questions because their answers are not usually captured by patterns or marked by surface cues.</Paragraph> <Paragraph position="2"> Unlike answers to factoid questions, answers to relationship questions consist of an unsorted set of passages. For assessing system output, NIST employs the nugget-based evaluation methodology originally developed for definition questions; see (Voorhees, 2005) for a detailed description.</Paragraph> <Paragraph position="3"> Answers consist of units of information called &quot;nuggets&quot;, which assessors manually create from system submissions and their own research (see example in Figure 1). Nuggets are divided into two types (&quot;vital&quot; and &quot;okay&quot;), and this distinction plays an important role in scoring. The official metric is an F3-score, where nugget recall is computed on vital nuggets, and precision is based on a length allowance derived from the number of both vital and okay nuggets retrieved.</Paragraph>
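The scoring formula itself is not spelled out above. The following is a minimal Python sketch of the nugget-based F(beta=3) computation as standardly defined for the TREC definition and relationship tasks, assuming the usual allowance of 100 non-whitespace characters per matched nugget; the function and parameter names are ours, not taken from this paper.

```python
# Hedged sketch of the nugget-based F(beta=3) score. Recall is computed over
# vital nuggets only; precision is a length penalty based on an allowance
# proportional to all matched nuggets (vital + okay). The 100-character
# allowance follows the standard TREC formulation and is an assumption here.

def nugget_f_score(num_vital_total, num_vital_matched, num_okay_matched,
                   answer_length, beta=3, allowance_per_nugget=100):
    """Compute the nugget F(beta) score for a single question.

    answer_length is measured in non-whitespace characters, following the
    TREC QA track convention described in the paper.
    """
    if num_vital_total == 0:
        return 0.0

    # Recall over vital nuggets only.
    recall = num_vital_matched / num_vital_total

    # Precision derived from the length allowance.
    allowance = allowance_per_nugget * (num_vital_matched + num_okay_matched)
    if allowance >= answer_length:
        precision = 1.0
    else:
        precision = 1.0 - (answer_length - allowance) / answer_length

    if precision + recall == 0:
        return 0.0
    return ((beta ** 2 + 1) * precision * recall) / (beta ** 2 * precision + recall)


if __name__ == "__main__":
    # Example: 3 of 5 vital nuggets and 2 okay nuggets matched by a
    # 1,200-character answer.
    print(round(nugget_f_score(5, 3, 2, 1200), 3))
```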
<Paragraph position="4"> In the original NIST setup, human assessors were required to manually determine whether a particular system's response contained a nugget.</Paragraph> <Paragraph position="5"> This posed a problem for researchers who wished to conduct formative evaluations outside the annual TREC cycle--the necessity of human involvement meant that system responses could not be rapidly, consistently, and automatically assessed. However, the recent introduction of POURPRE, an automatic evaluation metric for the nugget-based evaluation methodology (Lin and Demner-Fushman, 2005), fills this evaluation gap and makes possible the work reported here; cf. Nuggeteer (Marton and Radul, 2006).</Paragraph> <Paragraph position="7"> This paper describes experiments with the 25 relationship questions used in the secondary task of the TREC 2005 QA track (Voorhees, 2005), which attracted a total of eleven submissions. Systems used the AQUAINT corpus, a three-gigabyte collection of approximately one million news articles from the Associated Press, the New York Times, and the Xinhua News Agency.</Paragraph> </Section> <Section position="6" start_page="524" end_page="525" type="metho"> <SectionTitle> 3 Document Retrieval </SectionTitle> <Paragraph position="0"> Since information retrieval systems supply the initial set of documents on which a question answering system operates, it makes sense to optimize document retrieval performance in isolation. The issue of end-to-end system performance will be taken up in Section 4.</Paragraph> <Paragraph position="1"> Retrieval performance can be evaluated based on the assumption that documents which contain one or more relevant nuggets (either vital or okay) are themselves relevant. From system submissions to TREC 2005, we created a set of relevance judgments, which averaged 8.96 relevant documents per question (median 7, min 1, max 21).</Paragraph> <Paragraph position="2"> Our first goal was to examine the effect of different retrieval systems on performance.</Paragraph> <Paragraph position="3"> Two freely-available IR engines were compared: Lucene and Indri. The former is an open-source implementation of what amounts to a modified tf.idf weighting scheme, while the latter employs a language modeling approach. In addition, we experimented with blind relevance feedback, a retrieval technique commonly employed to improve performance (Salton and Buckley, 1990). Following settings in typical IR experiments, the top twenty terms (by tf.idf value) from the top twenty documents were added to the original query in the feedback iteration.</Paragraph>
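The feedback configuration just described (top twenty terms by tf.idf drawn from the top twenty documents) can be sketched as follows. This is a minimal illustration under our own assumptions about tokenization and term scoring, not the actual Lucene or Indri feedback machinery used in the experiments.

```python
import math
import re
from collections import Counter

# Hedged sketch of blind relevance feedback: score the terms of the
# top-ranked documents from an initial retrieval run by tf.idf and append
# the best ones to the original query. The tokenizer, the absence of
# stopword handling, and the data layout are simplifications for this sketch.

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def expand_query(query, initial_ranking, collection, num_docs=20, num_terms=20):
    """Return the query expanded with the top `num_terms` tf.idf terms
    taken from the top `num_docs` documents of `initial_ranking`."""
    n = len(collection)

    # Document frequencies over the whole collection.
    df = Counter()
    for doc in collection:
        df.update(set(tokenize(doc)))

    # Term frequencies over the feedback documents.
    tf = Counter()
    for doc_id in initial_ranking[:num_docs]:
        tf.update(tokenize(collection[doc_id]))

    def tfidf(term):
        return tf[term] * math.log(n / (1 + df[term]))

    query_terms = set(tokenize(query))
    candidates = [t for t in tf if t not in query_terms]
    expansion = sorted(candidates, key=tfidf, reverse=True)[:num_terms]
    return query + " " + " ".join(expansion)
```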
<Paragraph position="5"> For each question, fifty documents from the AQUAINT collection were retrieved, representing the number of documents that a typical QA system might consider. The question itself was used verbatim as the IR query (see Section 6 for discussion). Performance is shown in Table 1.</Paragraph> <Paragraph position="6"> We measured Mean Average Precision (MAP), the most informative single-point metric for ranked retrieval, and recall, since it places an upper bound on the number of relevant documents available for subsequent downstream processing.</Paragraph> <Paragraph position="7"> For all experiments reported in this paper, we applied the Wilcoxon signed-rank test to determine the statistical significance of the results. This test is commonly used in information retrieval research because it makes minimal assumptions about the underlying distribution of differences. Significance at the 0.90, 0.95, and 0.99 levels is denoted in the tables with paired symbols indicating the direction of change (open triangles at the 0.95 level, filled triangles at the 0.99 level); differences that are not statistically significant are marked with *. Although the differences between Lucene and Indri are not significant, blind relevance feedback was found to hurt performance, significantly so in the case of Indri. These results are consistent with the findings of Monz (2003), who made the same observation in the factoid QA task.</Paragraph> <Paragraph position="8"> There are a few caveats to consider when interpreting these results. First, the test set of 25 questions is rather small. Second, the number of relevant documents per question is also relatively small, and the relevance judgments are hence likely to be incomplete. Buckley and Voorhees (2004) have shown that evaluation metrics are not stable with respect to incomplete relevance judgments. Third, the distribution of relevant documents may be biased due to the small number of submissions, many of which used Lucene. Due to these factors, one should interpret the results reported here as suggestive, not definitive. Follow-up experiments with larger data sets are required to produce more conclusive results.</Paragraph> </Section> <Section position="7" start_page="525" end_page="527" type="metho"> <SectionTitle> 4 Selecting Relevant Sentences </SectionTitle> <Paragraph position="0"> We adopted an extractive approach to answering relationship questions that views the task as sentence retrieval, a conception in line with the thinking of many researchers today (but see discussion in Section 6). Although oversimplified, there are several reasons why this formulation is productive: since answers consist of unordered text segments, the task is similar to passage retrieval, a well-studied problem (Callan, 1994; Tellex et al., 2003) where sentences form a natural unit of retrieval. In addition, the TREC novelty tracks have specifically tackled the questions of relevance and redundancy at the sentence level (Harman, 2002).</Paragraph> <Paragraph position="1"> Empirically, a sentence retrieval approach performs quite well: when definition questions were first introduced in TREC 2003, a simple sentence-ranking algorithm outperformed all but the highest-scoring system (Voorhees, 2003). In addition, viewing the task of answering relationship questions as sentence retrieval allows one to leverage work in multi-document summarization, where extractive approaches have been extensively studied. This section examines the task of independently selecting the best sentences for inclusion in an answer; attempts to reduce redundancy will be discussed in the next section.</Paragraph> <Paragraph position="2"> There are a number of term-based features associated with a candidate sentence that may contribute to its relevance. In general, such features can be divided into two types: properties of the document containing the sentence and properties of the sentence itself. Regarding the former type, two features come into play: the relevance score of the document (from the IR engine) and its rank in the result set. For sentence-based features, we experimented with the following: * Passage match score, which sums the idf values of unique terms that appear in both the candidate sentence (S) and the question (Q): match(S, Q) = Σ_{t ∈ S ∩ Q} idf(t).</Paragraph> <Paragraph position="3"> * Term match precision and recall between the candidate sentence and the question.</Paragraph> <Paragraph position="4"> * Length of the sentence (in non-whitespace characters).</Paragraph> <Paragraph position="5"> Note that precision and recall values are bounded between zero and one, while the passage match score and the length of the sentence are both unbounded features.</Paragraph> <Paragraph position="6"> Our baseline sentence retriever employed the passage match score to rank all sentences in the top n retrieved documents. By default, we used documents retrieved by Lucene, using the question verbatim as the query. To generate answers, the system selected sentences based on their scores until a hard length quota had been filled (trimming the final sentence if necessary). After experimenting with different values, we discovered that a document cutoff of ten yielded the highest performance in terms of POURPRE scores, i.e., all but the ten top-ranking documents were discarded.
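As an illustration of the baseline just described, here is a minimal Python sketch that ranks sentences by the passage match score and fills a character quota. The idf table, the naive sentence splitter, and the default quota value are stand-ins introduced for this example, not components of the actual system.

```python
import re

# Hedged sketch of the baseline sentence retriever: score each sentence in
# the top-ranked documents by the summed idf of unique terms shared with the
# question, then emit sentences in score order until a quota of
# non-whitespace characters is filled.

def tokenize(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def passage_match_score(sentence, question, idf):
    shared = tokenize(sentence) & tokenize(question)
    return sum(idf.get(term, 0.0) for term in shared)

def split_sentences(document):
    # Extremely naive splitter, sufficient for a sketch.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]

def baseline_answer(question, top_documents, idf, quota=2000):
    """Select sentences from the top documents until `quota` non-whitespace
    characters have been emitted, trimming the final sentence if needed."""
    candidates = [s for doc in top_documents for s in split_sentences(doc)]
    ranked = sorted(candidates,
                    key=lambda s: passage_match_score(s, question, idf),
                    reverse=True)
    answer, used = [], 0
    for sentence in ranked:
        length = len(re.sub(r"\s", "", sentence))
        if used + length > quota:
            # Approximate trim of the final sentence to respect the quota.
            remaining = quota - used
            if remaining > 0:
                answer.append(sentence[:remaining])
            break
        answer.append(sentence)
        used += length
    return answer
```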
In addition, we built a linear regression model that employed the above features to predict the nugget score of a sentence (the dependent variable). For the training samples, the nugget matching component within POURPRE was employed to compute the nugget score--this value quantified the &quot;goodness&quot; of a particular sentence in terms of nugget content.1 Due to known issues with the vital/okay distinction (Hildebrandt et al., 2004), it was ignored for this computation; however, see (Lin and Demner-Fushman, 2006b) for recent attempts to address this issue.</Paragraph> <Paragraph position="7"> When presented with a test question, the system ranked all sentences from the top ten retrieved documents using the regression model. Answers were generated by filling a quota of characters, just as in the baseline. Once again, no attempt was made to reduce redundancy.</Paragraph> <Paragraph position="8"> We conducted a five-fold cross-validation experiment using all sentences from the top 100 Lucene documents as training samples. After experimenting with different features, we discovered that a regression model with the following performed best: passage match score, document score, and sentence length. Surprisingly, adding the term match precision and recall features to the regression model decreased overall performance slightly. We believe that precision and recall encode information already captured by the other features.</Paragraph> <Paragraph position="9"> Results of our experiments are shown in Table 2 for different answer lengths. Following the TREC QA track convention, all lengths are measured in non-whitespace characters. Both the baseline and regression conditions employed the top ten documents supplied by Lucene. In addition to the F3-score, we report the recall component only (on vital nuggets). For this and all subsequent experiments, we used the (count, macro) variant of POURPRE, which was validated as producing the highest correlation with official rankings. The regression model yields higher scores at shorter lengths, although none of these differences were significant. In general, performance decreases with longer answers because both variants tend to rank relevant sentences before non-relevant ones.</Paragraph> <Paragraph position="10"> Our results compare favorably to runs submitted to the TREC 2005 relationship task. In that evaluation, the best performing automatic run obtained a POURPRE score of 0.243, with an average answer length of 4051 characters per question.</Paragraph> <Paragraph position="11"> Since the vital/okay nugget distinction was ignored when training our regression model, we also evaluated system output under the assumption that all nuggets were vital. These scores are also shown in Table 2. Once again, results show higher POURPRE scores for shorter answers, but these differences are not statistically significant. Why might this be so? It appears that features based on term statistics alone are insufficient to capture nugget relevance. We verified this hypothesis by building a regression model for all 25 questions: the model exhibited an R2 value of only 0.207.</Paragraph>
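To make the regression setup concrete, below is a minimal sketch of fitting such a model with the three retained features (passage match score, document score, sentence length). The data layout, the cross-validation over individual sentences, and the use of scikit-learn are assumptions for illustration; the paper does not specify its implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

# Hedged sketch of the regression experiment: predict a sentence's nugget
# score from three features. The feature values and target scores would come
# from the retrieval run and the POURPRE nugget matcher; here they are
# placeholders illustrating the data layout only.

def fit_and_evaluate(features, nugget_scores, n_splits=5):
    """features: (n_sentences, 3) array of
       [passage match score, document score, sentence length].
       nugget_scores: (n_sentences,) array of POURPRE-derived targets."""
    X = np.asarray(features, dtype=float)
    y = np.asarray(nugget_scores, dtype=float)

    r2_scores = []
    folds = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in folds.split(X):
        model = LinearRegression().fit(X[train_idx], y[train_idx])
        r2_scores.append(model.score(X[test_idx], y[test_idx]))
    return float(np.mean(r2_scores))
```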
<Paragraph position="12"> How does IR performance affect the final system output? To find out, we applied the baseline sentence retrieval algorithm (which uses the passage match score only) to the output of different document retrieval variants. These results are shown in Table 3 for the four conditions discussed in the previous section: Lucene and Indri, with and without blind relevance feedback.</Paragraph> <Paragraph position="13"> Just as with the document retrieval results, Lucene alone (without blind relevance feedback) yielded the highest POURPRE scores. However, none of the differences observed were statistically significant. These numbers point to an interesting interaction between document retrieval and question answering. The decreases in performance attributed to blind relevance feedback in end-to-end QA were in general less than the drops observed in the document retrieval runs. It appears possible that the sentence retrieval algorithm was able to recover from a lower-quality result set, i.e., one with relevant documents ranked lower. Nevertheless, just as with factoid QA, the coupling between IR and answer extraction merits further study.</Paragraph> </Section> <Section position="8" start_page="527" end_page="528" type="metho"> <SectionTitle> 5 Reducing Redundancy </SectionTitle> <Paragraph position="0"> The methods described in the previous section for choosing relevant sentences do not take into account information that may be conveyed more than once. Drawing inspiration from research in sentence-level redundancy within the context of the TREC novelty track (Allan et al., 2003) and work in multi-document summarization, we experimented with term-based approaches to reducing redundancy.</Paragraph> <Paragraph position="1"> Instead of selecting sentences for inclusion in the answer based on relevance alone, we implemented a simple utility model, which takes into account sentences that have already been added to the answer A. For each candidate c, utility is defined as follows: utility(c) = rel(c) - λ · max_{s ∈ A} sim(c, s).</Paragraph> <Paragraph position="3"> This model is the baseline variant of the Maximal Marginal Relevance method for summarization (Goldstein et al., 2000). Each candidate is compared to all sentences that have already been selected for inclusion in the answer. The maximum of these pairwise similarity comparisons, weighted by λ (a parameter that we tune), is deducted from the relevance score of the sentence. For our experiments, we used cosine distance as the similarity function. All relevance scores were normalized to a range between zero and one.</Paragraph> <Paragraph position="4"> At each step in the answer generation process, utility values are computed for all candidate sentences. The one with the highest score is selected for inclusion in the final answer. Utility values are then recomputed, and the process iterates until the length quota has been filled.</Paragraph> <Paragraph position="5"> We experimented with two different sources for the relevance scores: the baseline sentence retriever (passage match score only) and the regression model. In addition to taking the max of all pairwise similarity values, as in the formula above, we also experimented with the average.</Paragraph>
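Below is a minimal sketch of this greedy, utility-based selection, assuming relevance scores from either source above and a simple term-count cosine similarity; the vectorization, the default λ of 0.4 (the tuned value reported below), and the quota handling are simplifications introduced for illustration.

```python
import math
import re
from collections import Counter

# Hedged sketch of the utility model described above: greedily pick the
# candidate with the highest utility(c) = rel(c) - lambda * (max similarity
# to any sentence already in the answer), until the length quota is reached.
# Cosine similarity over raw term counts is a simplification.

def cosine(a, b):
    va = Counter(re.findall(r"\w+", a.lower()))
    vb = Counter(re.findall(r"\w+", b.lower()))
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = math.sqrt(sum(v * v for v in va.values())) * \
           math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def select_answer(candidates, relevance, lam=0.4, quota=2000):
    """candidates: list of sentences; relevance: dict sentence -> score in [0, 1]."""
    answer, used, pool = [], 0, list(candidates)
    while pool and quota > used:
        def utility(c):
            penalty = max((cosine(c, s) for s in answer), default=0.0)
            return relevance[c] - lam * penalty
        best = max(pool, key=utility)   # recompute utilities at each step
        pool.remove(best)
        answer.append(best)
        used += len(re.sub(r"\s", "", best))
    return answer
```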
<Paragraph position="6"> Results of our runs are shown in Table 4. We report values for the baseline relevance score with the max and avg aggregation functions, as well as the regression relevance scores with max. These experimental conditions were compared against the baseline run that used the relevance score only (no redundancy penalty). To compute the optimal λ, we swept across the parameter space from zero to one in increments of a tenth. We determined the optimal value of λ by averaging POURPRE scores across all length intervals. For all three conditions, we discovered 0.4 to be the optimal value.</Paragraph> <Paragraph position="7"> These experiments suggest that a simple term-based approach to reducing redundancy yields statistically significant gains in performance. This result is not surprising since similar techniques have proven effective in multi-document summarization. Empirically, we found that the max operator outperforms the avg operator in quantifying the degree of redundancy. The observation that performance improvements are more noticeable at shorter answer lengths confirms our intuitions. Redundancy is better tolerated in longer answers because a redundant nugget is less likely to &quot;squeeze out&quot; a relevant, novel nugget.</Paragraph> <Paragraph position="8"> While it is productive to model the relationship task as sentence retrieval where independent decisions are made about sentence-level relevance, this simplification fails to capture overlap in information content, and leads to redundant answers.</Paragraph> <Paragraph position="9"> We found that a simple term-based approach was effective in tackling this issue.</Paragraph> </Section> <Section position="9" start_page="528" end_page="529" type="metho"> <SectionTitle> 6 Discussion </SectionTitle> <Paragraph position="0"> Although this work represents the first formal study of relationship questions that we are aware of, by no means are we claiming a solution--we see this as merely the first step in addressing a complex problem. Nevertheless, information retrieval techniques lay the groundwork for systems aimed at answering complex questions. The methods described here will hopefully serve as a starting point for future work.</Paragraph> <Paragraph position="1"> Relationship questions represent an important problem because they exemplify complex information needs, generally acknowledged as the future of QA research. Other types of complex needs include analytical questions such as &quot;How close is Iran to acquiring nuclear weapons?&quot;, which are the focus of the AQUAINT program in the U.S., and opinion questions such as &quot;How does the Chilean government view attempts at having Pinochet tried in Spanish Court?&quot;, which were explored in a 2005 pilot study also funded by AQUAINT. In 2006, there will be a dedicated task within the TREC QA track exploring complex questions within an interactive setting. Furthermore, we note the convergence of the QA and summarization communities, as demonstrated by the shift from generic to query-focused summaries starting with DUC 2005 (Dang, 2005). This development is also compatible with the conception of &quot;distillation&quot; in the current DARPA GALE program. All these trends point to the same problem: how do we build advanced information systems to address complex information needs? The value of this work lies in the generality of IR-based approaches. Sophisticated linguistic processing algorithms are typically unable to cope with the enormous quantities of text available. To render analysis more computationally tractable, researchers commonly employ IR techniques to reduce the amount of text under consideration. We believe that the techniques introduced in this paper are applicable to the different types of information needs discussed above.</Paragraph> <Paragraph position="2"> While information retrieval techniques form a strong baseline for answering relationship questions, there are clear limitations of term-based approaches.
Although we certainly did not experiment with every possible method, this work examined several common IR techniques (e.g., relevance feedback, different term-based features, etc.). In our regression experiments, we discovered that our feature set was unable to adequately capture sentence relevance. On the other hand, simple IR-based techniques appeared to work well at reducing redundancy, suggesting that determining content overlap is a simpler problem.</Paragraph> <Paragraph position="3"> To answer relationship questions well, NLP technology must take over where IR techniques leave off. Yet, there are a number of challenges, the biggest of which is that question classification and named-entity recognition, which have worked well for factoid questions, are not applicable to relationship questions, since answer types are difficult to anticipate. For factoids, there exists a significant amount of work on question analysis--the results of which include important query terms and the expected answer type (e.g., person, organization, etc.). Relationship questions are more difficult to process: for one, they are often not phrased as direct wh-questions, but rather as indirect requests for information, statements of doubt, etc.</Paragraph> <Paragraph position="4"> Furthermore, since these complex questions cannot be answered by short noun phrases, existing answer type ontologies are not very useful. For our experiments, we decided to simply use the question verbatim as the query to the IR systems, but undoubtedly performance can be gained by better query formulation strategies. These are difficult challenges, but recent work on applying semantic models to QA (Narayanan and Harabagiu, 2004; Lin and Demner-Fushman, 2006a) provides a promising direction.</Paragraph> <Paragraph position="5"> While our formulation of answering relationship questions as sentence retrieval is productive, it clearly has limitations. The assumption that information nuggets do not span sentence boundaries is false and neglects important work in anaphora resolution and discourse modeling. The current setup of the task, where answers consist of unordered strings, does not place any value on coherence and readability of the responses, which will be important if the answers are intended for human consumption. Clearly, there are ample opportunities here for NLP techniques to shine.</Paragraph> <Paragraph position="6"> The other value of this work lies in its use of an automatic evaluation metric (POURPRE) for system development--the first instance in complex QA that we are aware of. Prior to the introduction of this automatic scoring technique, studies such as this were difficult to conduct due to the necessity of involving humans in the evaluation process. POURPRE was developed to enable rapid exploration of the solution space, and experiments reported here demonstrate its usefulness in doing just that. Although automatic evaluation metrics are no stranger to other fields such as machine translation (e.g., BLEU) and document summarization (e.g., ROUGE, BE, etc.), this represents a new development in question answering research.</Paragraph> </Section> </Paper>