<?xml version="1.0" standalone="yes"?>
<Paper uid="P06-1039">
  <Title>Bayesian Query-Focused Summarization</Title>
  <Section position="4" start_page="0" end_page="306" type="metho">
    <SectionTitle>
2 Bayesian Query-Focused
Summarization
</SectionTitle>
    <Paragraph position="0"> In this section, we describe our Bayesian query-focused summarization model (BAYESUM). This task is very similar to the standard ad-hoc IR task, with the important distinction that we are comparing query models against sentence models, rather than against document models. The shortness of sentences means that one must do a good job of creating the query models.</Paragraph>
    <Paragraph position="1"> To maintain generality, so that our model is applicable to any problem for which multiple relevant documents are known for a query, we formulate our model in terms of relevance judgments.</Paragraph>
    <Paragraph position="2"> For a collection of D documents and Q queries, we assume we have a D x Q binary matrix r, where rdq = 1 if an only if document d is relevant to query q. In multidocument summarization, rdq will be 1 exactly when d is in the document set corresponding to query q; in search-engine sum- null marization, it will be 1 exactly when d is returned by the search engine for query q.</Paragraph>
    <Section position="1" start_page="305" end_page="305" type="sub_section">
      <SectionTitle>
2.1 Language Modeling for IR
</SectionTitle>
      <Paragraph position="0"> BAYESUM is built on the concept of language models for information retrieval. The idea behind the language modeling techniques used in IR is to represent either queries or documents (or both) as probability distributions, and then use standard probabilistic techniques for comparing them.</Paragraph>
      <Paragraph position="1"> These probability distributions are almost always &amp;quot;bag of words&amp;quot; distributions that assign a probability to words from a fixed vocabulary V.</Paragraph>
      <Paragraph position="2"> One approach is to build a probability distribution for a given document, pd(*), and to look at the probability of a query under that distribution: pd(q). Documents are ranked according to how likely they make the query (Ponte and Croft, 1998). Other researchers have built probability distributions over queries pq(*) and ranked documents according to how likely they look under the query model: pq(d) (Lafferty and Zhai, 2001).</Paragraph>
      <Paragraph position="3"> A third approach builds a probability distribution pq(*) for the query, a probability distribution pd(*) for the document and then measures the similarity between these two distributions using KL divergence (Lavrenko et al., 2002):</Paragraph>
      <Paragraph position="5"> The KL divergence between two probability distributions is zero when they are identical and otherwise strictly positive. It implicitly assumes that both distributions pq and pd have the same support: they assign non-zero probability to exactly the same subset of V; in order to account for this, the distributions pq and pd are smoothed against a background general English model. This final mode--the KL model--is the one on which BAYESUM is based.</Paragraph>
    </Section>
    <Section position="2" start_page="305" end_page="306" type="sub_section">
      <SectionTitle>
2.2 Bayesian Statistical Model
</SectionTitle>
      <Paragraph position="0"> In the language of information retrieval, the query-focused sentence extraction task boils down to estimating a good query model, pq(*). Once we have such a model, we could estimate sentence models for each sentence in a relevant document, and rank the sentences according to Eq (1).</Paragraph>
      <Paragraph position="1"> The BAYESUM system is based on the following model: we hypothesize that a sentence appears in a document because it is relevant to some query, because it provides background information about the document (but is not relevant to a known query) or simply because it contains useless, general English filler. Similarly, we model each word as appearing for one of those purposes.</Paragraph>
      <Paragraph position="2"> More specifically, our model assumes that each word can be assigned a discrete, exact source, such as &amp;quot;this word is relevant to query q1&amp;quot; or &amp;quot;this word is general English.&amp;quot; At the sentence level, however, sentences are assigned degrees: &amp;quot;this sentence is 60% about query q1, 30% background document information, and 10% general English.&amp;quot; To model this, we define a general English language model, pG(*) to capture the English filler. Furthermore, for each document dk, we define a background document language model, pdk(*); similarly, for each query qj, we define a query-specific language model pqj(*). Every word in a document dk is modeled as being generated from a mixture of pG, pdk and {pqj : query qj is relevant to document dk}. Supposing there are J total queries and K total documents, we say that the nth word from the sth sentence in document d, wdsn, has a corresponding hidden variable, zdsn that specifies exactly which of these distributions is used to generate that one word. In particular, zdsn is a vector of length 1 + J + K, where exactly one element is 1 and the rest are 0.</Paragraph>
      <Paragraph position="3"> At the sentence level, we introduce a second layer of hidden variables. For the sth sentence in document d, we let pids be a vector also of length 1 + J + K that represents our degree of belief that this sentence came from any of the models.</Paragraph>
      <Paragraph position="4"> The pidss lie in the J + K-dimensional simplex</Paragraph>
      <Paragraph position="6"> ables is that if the &amp;quot;general English&amp;quot; component of pi is 0.9, then 90% of the words in this sentence will be general English. The pi and z variables are constrained so that a sentence cannot be generated by a document language model other than its own document and cannot be generated by a query language model for a query to which it is not relevant.</Paragraph>
      <Paragraph position="7"> Since the pis are unknown, and it is unlikely that there is a &amp;quot;true&amp;quot; correct value, we place a corpus-level prior on them. Since pi is a multinomial distribution over its corresponding zs, it is natural to use a Dirichlet distribution as a prior over pi. A Dirichlet distribution is parameterized by a vector a of equal length to the corresponding multinomial parameter, again with the positivity restric- null tion, but no longer required to sum to one. It has continuous density over a variable th1, . . ., thI given by: Dir(th  |a) = G(</Paragraph>
      <Paragraph position="9"/>
    </Section>
    <Section position="3" start_page="306" end_page="306" type="sub_section">
      <SectionTitle>
2.3 Generative Story
</SectionTitle>
      <Paragraph position="0"> The generative story for our model defines a distribution over a corpus of queries, {qj}1:J, and documents, {dk}1:K, as follows:  1. For each query j = 1 . . . J: Generate each word qjn in qj by pqj(qjn) 2. For each document k = 1 . . .K and each sentence s in document k: (a) Select the current sentence degree piks by Dir(piks  |a)rk(piks) (b) For each word wksn in sentence s: * Select the word source zksn according to Mult(z  |piks) * Generate the word wksn by</Paragraph>
      <Paragraph position="2"> We used r to denote relevance judgments: rk(pi) = 0 if any document component of pi except the one corresponding to k is non-zero, or if any query component of pi except those queries to which document k is deemed relevant is non-zero (this prevents a document using the &amp;quot;wrong&amp;quot; document or query components). We have further assumed that the z vector is laid out so that z0 corresponds to general English, zk+1 corresponds to document dk for 0 [?] j &lt; J and that zj+K+1 corresponds to query qj for 0 [?] k &lt; K.</Paragraph>
    </Section>
    <Section position="4" start_page="306" end_page="306" type="sub_section">
      <SectionTitle>
2.4 Graphical Model
</SectionTitle>
      <Paragraph position="0"> The graphical model corresponding to this generative story is in Figure 1. This model depicts the four known parameters in square boxes (a, pQ, pD and pG) with the three observed random variables in shaded circles (the queries q, the relevance judgments r and the words w) and two unobserved random variables in empty circles (the word-level indicator variables z and the sentence level degrees pi). The rounded plates denote replication: there are J queries and K documents, containing S sentences in a given document and N words in a given sentence. The joint probability over the observed random variables is given in Eq (2):</Paragraph>
      <Paragraph position="2"/>
      <Paragraph position="4"> This expression computes the probability of the data by integrating out the unknown variables. In the case of the pi variables, this is accomplished by integrating over [?], the multinomial simplex, according to the prior distribution given by a. In the case of the z variables, this is accomplished by summing over all possible (discrete) values. The final word probability is conditioned on the z value by selecting the appropriate distribution from pG, pD and pQ. Computing this expression and finding optimal model parameters is intractable due to the coupling of the variables under the integral.</Paragraph>
    </Section>
  </Section>
  <Section position="5" start_page="306" end_page="307" type="metho">
    <SectionTitle>
3 Statistical Inference in BAYESUM
</SectionTitle>
    <Paragraph position="0"> Bayesian inference problems often give rise to intractable integrals, and a large variety of techniques have been proposed to deal with this. The most popular are Markov Chain Monte Carlo (MCMC), the Laplace (or saddle-point) approximation and the variational approximation. A third, less common, but very effective technique, especially for dealing with mixture models, is expectation propagation (Minka, 2001). In this paper, we will focus on expectation propagation; experiments not reported here have shown variational  EM to perform comparably but take roughly 50% longer to converge.</Paragraph>
    <Paragraph position="1"> Expectation propagation (EP) is an inference technique introduced by Minka (2001) as a generalization of both belief propagation and assumed density filtering. In his thesis, Minka showed that EP is very effective in mixture modeling problems, and later demonstrated its superiority to variational techniques in the Generative Aspect Model (Minka and Lafferty, 2003). The key idea is to compute an integral of a product of terms by iteratively applying a sequence of &amp;quot;deletion/inclusion&amp;quot; steps. Given an integral of the form: integraltext[?] dpi p(pi)producttextn tn(pi), EP approximates each term tn by a simpler term ~tn, giving Eq (3).</Paragraph>
    <Paragraph position="3"> In each deletion/inclusion step, one of the approximate terms is deleted from q(*), leaving q[?]n(*) = q(*)/~tn(*). A new approximation for tn(*) is computed so that tn(*)q[?]n(*) has the same integral, mean and variance as ~tn(*)q[?]n(*). This new approximation, ~tn(*) is then included back into the full expression for q(*) and the process repeats. This algorithm always has a fixed point and there are methods for ensuring that the approximation remains in a location where the integral is well-defined. Unlike variational EM, the approximation given by EP is global, and often leads to much more reliable estimates of the true integral.</Paragraph>
    <Paragraph position="4"> In the case of our model, we follow Minka and Lafferty (2003), who adapts latent Dirichlet allocation of Blei et al. (2003) to EP. Due to space constraints, we omit the inference algorithms and instead direct the interested reader to the description given by Minka and Lafferty (2003).</Paragraph>
  </Section>
  <Section position="6" start_page="307" end_page="310" type="metho">
    <SectionTitle>
4 Search-Engine Experiments
</SectionTitle>
    <Paragraph position="0"> The first experiments we run are for query-focused single document summarization, where relevant documents are returned from a search engine, and a short summary is desired of each document.</Paragraph>
    <Section position="1" start_page="307" end_page="307" type="sub_section">
      <SectionTitle>
4.1 Data
</SectionTitle>
      <Paragraph position="0"> The data we use to train and test BAYESUM is drawn from the Text REtrieval Conference (TREC) competitions. This data set consists of queries, documents and relevance judgments, exactly as required by our model. The queries are typically broken down into four fields of increasing length: the title (3-4 words), the summary (1 sentence), the narrative (2-4 sentences) and the concepts (a list of keywords). Obviously, one would expect that the longer the query, the better a model would be able to do, and this is borne out experimentally (Section 4.5).</Paragraph>
      <Paragraph position="1"> Of the TREC data, we have trained our model on 350 queries (queries numbered 51-350 and 401-450) and all corresponding relevant documents. This amounts to roughly 43k documents, 2.1m sentences and 65.8m words. The mean number of relevant documents per query is 137 and the median is 81 (the most prolific query has 968 relevant documents). On the other hand, each document is relevant to, on average, 1.11 queries (the median is 5.5 and the most generally relevant document is relevant to 20 different queries). In all cases, we apply stemming using the Porter stemmer; for all other models, we remove stop words. In order to evaluate our model, we had seven human judges manually perform the query-focused sentence extraction task. The judges were supplied with the full TREC query and a single document relevant to that query, and were asked to select up to four sentences from the document that best met the needs given by the query. Each judge annotated 25 queries with some overlap to allow for an evaluation of inter-annotator agreement, yielding annotations for a total of 166 unique query/document pairs. On the doubly annotated data, we computed the inter-annotator agreement using the kappa measure. The kappa value found was 0.58, which is low, but not abysmal (also, keep in mind that this is computed over only 25 of the 166 examples).</Paragraph>
    </Section>
    <Section position="2" start_page="307" end_page="308" type="sub_section">
      <SectionTitle>
4.2 Evaluation Criteria
</SectionTitle>
      <Paragraph position="0"> Since there are differing numbers of sentences selected per document by the human judges, one cannot compute precision and recall; instead, we opt for other standard IR performance measures.</Paragraph>
      <Paragraph position="1"> We consider three related criteria: mean average precision (MAP), mean reciprocal rank (MRR) and precision at 2 (P@2). MAP is computed by calculating precision at every sentence as ordered by the system up until all relevant sentences are selected and averaged. MRR is the reciprocal of the rank of the first relevant sentence. P@2 is the precision computed at the first point that two relevant sentences have been selected (in the rare case that  humans selected only one sentence, we use P@1).</Paragraph>
    </Section>
    <Section position="3" start_page="308" end_page="308" type="sub_section">
      <SectionTitle>
4.3 Baseline Models
</SectionTitle>
      <Paragraph position="0"> As baselines, we consider four strawman models and two state-of-the-art information retrieval models. The first strawman, RANDOM ranks sentences randomly. The second strawman, POSITION, ranks sentences according to their absolute position (in the context of non-query-focused summarization, this is an incredibly powerful baseline). The third and fourth models are based on the vector space interpretation of IR. The third model, JACCARD, uses standard Jaccard distance score (intersection over union) between each sentence and the query to rank sentences. The fourth, CO-SINE, uses TF-IDF weighted cosine similarity.</Paragraph>
      <Paragraph position="1"> The two state-of-the-art IR models used as comparative systems are based on the language modeling framework described in Section 2.1. These systems compute a language model for each query and for each sentence in a document. Sentences are then ranked according to the KL divergence between the query model and the sentence model, smoothed against a general model estimated from the entire collection, as described in the case of document retrieval by Lavrenko et al. (2002). This is the first system we compare against, called KL.</Paragraph>
      <Paragraph position="2"> The second true system, KL+REL is based on augmenting the KL system with blind relevance feedback (query expansion). Specifically, we first run each query against the document set returned by the relevance judgments and retrieve the top n sentences. We then expand the query by interpolating the original query model with a query model estimated on these sentences. This serves as a method of query expansion. We ran experiments ranging n in {5, 10, 25, 50, 100} and the interpolation parameter l in {0.2, 0.4, 0.6, 0.8} and used oracle selection (on MRR) to choose the values that performed best (the results are thus overly optimistic). These values were n = 25 and l = 0.4.</Paragraph>
      <Paragraph position="3"> Of all the systems compared, only BAYESUM and the KL+REL model use the relevance judgments; however, they both have access to exactly the same information. The other models only run on the subset of the data used for evaluation (the corpus language model for the KL system and the IDF values for the COSINE model are computed on the full data set). EP ran for 2.5 hours.</Paragraph>
      <Paragraph position="4">  as well as BAYESUM, when all query fields are used.</Paragraph>
    </Section>
    <Section position="4" start_page="308" end_page="308" type="sub_section">
      <SectionTitle>
4.4 Performance on all Query Fields
</SectionTitle>
      <Paragraph position="0"> Our first evaluation compares results when all query fields are used (title, summary, description and concepts1). These results are shown in Table 1. As we can see from these results, the JACCARD system alone is not sufficient to beat the position-based baseline. The COSINE does beat the position baseline by a bit of a margin (5 points better in MAP, 9 points in MRR and 4 points in P@2), and is in turn beaten by the KL system (which is 7 points, 14 points and 4 points better in MAP, MRR and P@2, respectively). Blind relevance feedback (parameters of which were chosen by an oracle to maximize the P@2 metric) actually hurts MAP and MRR performance by 0.3 and 1.2, respectively, and increases P@2 by 1.5.</Paragraph>
      <Paragraph position="1"> Over the best performing baseline system (either KL or KL+REL), BAYESUM wins by a margin of</Paragraph>
    </Section>
    <Section position="5" start_page="308" end_page="309" type="sub_section">
      <SectionTitle>
7.5 points in MAP, 6.7 for MRR and 4.4 for P@2.
4.5 Varying Query Fields
</SectionTitle>
      <Paragraph position="0"> Our next experimental comparison has to do with reducing the amount of information given in the query. In Table 2, we show the performance of the KL, KL-REL and BAYESUM systems, as we use different query fields. There are several things to notice in these results. First, the standard KL model without blind relevance feedback performs worse than the position-based model when only the 3-4 word title is available. Second, BAYESUM using only the title outperform the KL model with relevance feedback using all fields. In fact, one can apply BAYESUM without using any of the query fields; in this case, only the relevance judgments are available to make sense 1A reviewer pointed out that concepts were later removed from TREC because they were &amp;quot;too good.&amp;quot; Section 4.5 considers the case without the concepts field.</Paragraph>
      <Paragraph position="1">  model, the KL-based models and BAYESUM, with different inputs.</Paragraph>
      <Paragraph position="2"> of what the query might be. Even in this circumstance, BAYESUM achieves a MAP of 39.4, an MRR of 64.7 and a P@2 of 30.4, still better across the board than KL-REL with all query fields. While initially this seems counterintuitive, it is actually not so unreasonable: there is significantly more information available in several hundred positive relevance judgments than in a few sentences. However, the simple blind relevance feedback mechanism so popular in IR is unable to adequately model this.</Paragraph>
      <Paragraph position="3"> With the exception of the KL model without relevance feedback, adding the description on top of the title does not seem to make any difference for any of the models (and, in fact, occasionally hurts according to some metrics). Adding the summary improves performance in most cases, but not significantly. Adding concepts tends to improve results slightly more substantially than any other.</Paragraph>
    </Section>
    <Section position="6" start_page="309" end_page="310" type="sub_section">
      <SectionTitle>
4.6 Noisy Relevance Judgments
</SectionTitle>
      <Paragraph position="0"> Our model hinges on the assumption that, for a given query, we have access to a collection of known relevant documents. In most real-world cases, this assumption is violated. Even in multi-document summarization as run in the DUC competitions, the assumption of access to a collection of documents all relevant to a user need is unrealistic. In the real world, we will have to deal with document collections that &amp;quot;accidentally&amp;quot; contain irrelevant documents. The experiments in this section show that BAYESUM is comparatively robust.</Paragraph>
      <Paragraph position="1"> For this experiment, we use the IR engine that performed best in the TREC 1 evaluation: Inquery (Callan et al., 1992). We used the offi0.4 0.5 0.6 0.7 0.8 0.9 128  ments. The X-axis is the R-precision of the IR engine and the Y-axis is the summarization performance in MAP. Solid lines are BAYESUM, dotted lines are KL-Rel. Blue/stars indicate title only, red/circles indicated title+description+summary and black/pluses indicate all fields.</Paragraph>
      <Paragraph position="2"> cial TREC results of Inquery on the subset of the TREC corpus we consider. The Inquery R-precision on this task is 0.39 using title only, and 0.51 using all fields. In order to obtain curves as the IR engine improves, we have linearly interpolated the Inquery rankings with the true relevance judgments. By tweaking the interpolation parameter, we obtain an IR engine with improving performance, but with a reasonable bias. We have run both BAYESUM and KL-Rel on the relevance judgments obtained by this method for six values of the interpolation parameter. The results are shown in Figure 2.</Paragraph>
      <Paragraph position="3"> As we can observe from the figure, the solid lines (BAYESUM) are always above the dotted lines (KL-Rel). Considering the KL-Rel results alone, we can see that for a non-perfect IR engine, it makes little difference what query fields we use for the summarization task: they all obtain roughly equal scores. This is because the performance in KL-Rel is dominated by the performance of the IR engine. Looking only at the BAYESUM results, we can see a much stronger, and perhaps surprising difference. For an imperfect IR system, it is better to use only the title than to use the title, description and summary for the summarization component.</Paragraph>
      <Paragraph position="4"> We believe this is because the title is more on topic than the other fields, which contain terms like &amp;quot;A relevant document should describe . . . .&amp;quot; Never- null theless, BAYESUM has a more upward trend than KL-Rel, which indicates that improved IR will result in improved summarization for BAYESUM but not for KL-Rel.</Paragraph>
    </Section>
  </Section>
  <Section position="7" start_page="310" end_page="310" type="metho">
    <SectionTitle>
5 Multidocument Experiments
</SectionTitle>
    <Paragraph position="0"> We present two results using BAYESUM in the multidocument summarization settings, based on the official results from the Multilingual Summarization Evaluation (MSE) and Document Understanding Conference (DUC) competitions in 2005.</Paragraph>
    <Section position="1" start_page="310" end_page="310" type="sub_section">
      <SectionTitle>
5.1 Performance at MSE 2005
</SectionTitle>
      <Paragraph position="0"> We participated in the Multilingual Summarization Evaluation (MSE) workshop with a system based on BAYESUM. The task for this competition was generic (no query) multidocument summarization. Fortunately, not having a query is not a hindrance to our model. To account for the redundancy present in document collections, we applied a greedy selection technique that selects sentences central to the document cluster but far from previously selected sentences (Daum'e III and Marcu, 2005a). In MSE, our system performed very well. According to the human &amp;quot;pyramid&amp;quot; evaluation, our system came first with a score of 0.529; the next best score was 0.489. In the automatic &amp;quot;Basic Element&amp;quot; evaluation, our system scored 0.0704 (with a 95% confidence interval of [0.0429, 0.1057]), which was the third best score on a site basis (out of 10 sites), and was not statistically significantly different from the best system, which scored 0.0981.</Paragraph>
    </Section>
    <Section position="2" start_page="310" end_page="310" type="sub_section">
      <SectionTitle>
5.2 Performance at DUC 2005
</SectionTitle>
      <Paragraph position="0"> We also participated in the Document Understanding Conference (DUC) competition. The chosen task for DUC was query-focused multidocument summarization. We entered a nearly identical system to DUC as to MSE, with an additional rule-based sentence compression component (Daum'e III and Marcu, 2005b). Human evaluators considered both responsiveness (how well did the summary answer the query) and linguistic quality. Our system achieved the highest responsiveness score in the competition. We scored more poorly on the linguistic quality evaluation, which (only 5 out of about 30 systems performed worse); this is likely due to the sentence compression we performed on top of BAYESUM. On the automatic Rouge-based evaluations, our system performed between third and sixth (depending on the Rouge parameters), but was never statistically significantly worse than the best performing systems.</Paragraph>
    </Section>
  </Section>
class="xml-element"></Paper>