<?xml version="1.0" standalone="yes"?>
<Paper uid="P06-1048">
  <Title>Models for Sentence Compression: A Comparison across Domains, Training Requirements and Evaluation Measures</Title>
  <Section position="4" start_page="377" end_page="378" type="metho">
    <SectionTitle>
2 Algorithms for Sentence Compression
</SectionTitle>
    <Paragraph position="0"> In this section we give a brief overview of the algorithms we employed in our comparative study. We focus on two representative methods, Knight and Marcu's (2002) decision-based model and Hori and Furui's (2004) word-based model.</Paragraph>
    <Paragraph position="1"> The decision-tree model operates over parallel corpora and offers an intuitive formulation of sentence compression in terms of tree rewriting. It has inspired many discriminative approaches to the compression task (Riezler et al. 2003; Nguyen et al. 2004b; McDonald 2006) and has been extended to languages other than English (see Nguyen et al. 2004a). We opted for the decision-tree model instead of the also well-known noisy-channel model (Knight and Marcu 2002; Turner and Charniak 2005). Although both models yield comparable performance, Turner and Charniak (2005) show that the latter is not an appropriate compression model since it favours uncompressed sentences over compressed ones.1 Hori and Furui's (2004) model was originally developed for Japanese with spoken text in mind, 1The noisy-channel model uses a source model trained on uncompressed sentences. This means that the most likely compressed sentence will be identical to the original sentence as the likelihood of a constituent deletion is typically far lower than that of leaving it in.</Paragraph>
    <Paragraph position="2"> SHIFT transfers the rst word from the input list onto the stack.</Paragraph>
    <Paragraph position="3"> REDUCE pops the syntactic trees located at the top of the stack, combines them into a new tree and then pushes the new tree onto the top of the stack.</Paragraph>
    <Paragraph position="4"> DROP deletes from the input list subsequences of words that correspond to a syntactic constituent.</Paragraph>
    <Paragraph position="5"> ASSIGNTYPE changes the label of the trees at the top of the stack (i.e., the POS tag of words).</Paragraph>
    <Paragraph position="6">  it requires minimal supervision, and little linguistic knowledge. It therefor holds promise for languages and domains for which text processing tools (e.g., taggers, parsers) are not readily available. Furthermore, to our knowledge, its performance on written text has not been assessed.</Paragraph>
    <Section position="1" start_page="377" end_page="377" type="sub_section">
      <SectionTitle>
2.1 Decision-based Sentence Compression
</SectionTitle>
      <Paragraph position="0"> In the decision-based model, sentence compression is treated as a deterministic rewriting process of converting a long parse tree, l, into a shorter parse tree s. The rewriting process is decomposed into a sequence of shift-reduce-drop actions that follow an extended shift-reduce parsing paradigm.</Paragraph>
      <Paragraph position="1"> The compression process starts with an empty stack and an input list that is built from the original sentence's parse tree. Words in the input list are labelled with the name of all the syntactic constituents in the original sentence that start with it. Each stage of the rewriting process is an operation that aims to reconstruct the compressed tree. There are four types of operations that can be performed on the stack, they are illustrated in Table 1.</Paragraph>
      <Paragraph position="2"> Learning cases are automatically generated from a parallel corpus. Each learning case is expressed by a set of features and represents one of the four possible operations for a given stack and input list. Using the C4.5 program (Quinlan 1993) a decision-tree model is automatically learnt. The model is applied to a parsed original sentence in a deterministic fashion. Features for the current state of the input list and stack are extracted and the classi er is queried for the next operation to perform. This is repeated until the input list is empty and the stack contains only one item (this corresponds to the parse for the compressed tree).</Paragraph>
      <Paragraph position="3"> The compressed sentence is recovered by traversing the leaves of the tree in order.</Paragraph>
    </Section>
    <Section position="2" start_page="377" end_page="378" type="sub_section">
      <SectionTitle>
2.2 Word-based Sentence Compression
</SectionTitle>
      <Paragraph position="0"> The decision-based method relies exclusively on parallel corpora; the caveat here is that appropriate training data may be scarce when porting this model to different text domains (where abstracts  are not available for automatic corpus creation) or languages. To alleviate the problems inherent with using a parallel corpus, we have modi ed a weakly supervised algorithm originally proposed by Hori and Furui (2004). Their method is based on word deletion; given a prespeci ed compression length, a compression is formed by preserving the words which maximise a scoring function.</Paragraph>
      <Paragraph position="1"> To make Hori and Furui's (2004) algorithm more comparable to the decision-based model, we have eliminated the compression length parameter. Instead, we search over all lengths to nd the compression that gives the maximum score. This process yields more natural compressions with varying lengths. The original score measures the signi cance of each word (I) in the compression and the linguistic likelihood (L) of the resulting word combinations.2 We add some linguistic knowledge to this formulation through a function (SOV ) that captures information about subjects, objects and verbs. The compression score is given in Equation (1). The lambdas (lI, lSOV, lL) weight the contribution of the individual scores:</Paragraph>
      <Paragraph position="3"> The sentence V = v1,v2,...,vm (of M words) that maximises the score S(V) is the best compression for an original sentence consisting of N words (M &lt; N). The best compression can be found using dynamic programming. The l's in Equation (1) can be either optimised using a small amount of training data or set manually (e.g., if short compressions are preferred to longer ones, then the language model should be given a higher weight). Alternatively, weighting could be dispensed with by including a normalising factor in the language model. Here, we follow Hori and Furui's (2004) original formulation and leave the normalisation to future work. We next introduce each measure individually.</Paragraph>
      <Paragraph position="4"> Word significance score The word signi cance score I measures the relative importance of a word in a document. It is similar to tf-idf, a term weighting score commonly used in information retrieval: null</Paragraph>
      <Paragraph position="6"> upon how reliable the output of an automatic speech recognition system is. However, we need not consider this score when working with written text and manual transcripts.</Paragraph>
      <Paragraph position="7"> Where wi is the topic word of interest (topic words are either nouns or verbs), fi is the frequency of wi in the document, Fi is the corpus frequency of wi and FA is the sum of all topic word occurrences in the corpus ([?]i Fi).</Paragraph>
      <Paragraph position="8"> Linguistic score The linguistic score's L(vi|vi!1,vi!2) responsibility is to select some function words, thus ensuring that compressions remain grammatical. It also controls which topic words can be placed together. The score measures the n-gram probability of the compressed sentence.</Paragraph>
      <Paragraph position="9"> SOV Score The SOV score is based on the intuition that subjects, objects and verbs should not be dropped while words in other syntactic roles can be considered for removal. This score is based solely on the contents of the sentence considered for compression without taking into account the distribution of subjects, objects or verbs, across documents. It is de ned in (3) where fi is the document frequency of a verb, or word bearing the subject/object role and ldefault is a constant weight assigned to all other words.</Paragraph>
      <Paragraph position="11"> The SOV score is only applied to the head word of subjects and objects.</Paragraph>
    </Section>
  </Section>
  <Section position="5" start_page="378" end_page="380" type="metho">
    <SectionTitle>
3 Corpora
</SectionTitle>
    <Paragraph position="0"> Our intent was to assess the performance of the two models just described on written and spoken text. The appeal of written text is understandable since most summarisation work today focuses on this domain. Speech data not only provides a natural test-bed for compression applications (e.g., subtitle generation) but also poses additional challenges. Spoken utterances can be ungrammatical, incomplete, and often contain artefacts such as false starts, interjections, hesitations, and dis uencies. Rather than focusing on spontaneous speech which is abundant in these artefacts, we conduct our study on the less ambitious domain of broadcast news transcripts. This lies in-between the extremes of written text and spontaneous speech as it has been scripted beforehand and is usually read off an autocue.</Paragraph>
    <Paragraph position="1"> One stumbling block to performing a comparative study between written data and speech data is that there are no naturally occurring parallel  speech corpora for studying compression. Automatic corpus creation is not a viable option either, speakers do not normally create summaries of their own utterances. We thus gathered our own corpus by asking humans to generate compressions for speech transcripts.</Paragraph>
    <Paragraph position="2"> In what follows we describe how the manual compressions were performed. We also brie y present the written corpus we used for our experiments. The latter was automatically constructed and offers an interesting point of comparison with our manually created corpus.</Paragraph>
    <Paragraph position="3"> Broadcast News Corpus Three annotators were asked to compress 50 broadcast news stories (1,370 sentences) taken from the HUB-4 1996 English Broadcast News corpus provided by the LDC. The HUB-4 corpus contains broadcast news from a variety of networks (CNN, ABC, CSPAN and NPR) which have been manually transcribed and split at the story and sentence level. Each document contains 27 sentences on average and the whole corpus consists of 26,151 tokens.3 The Robust Accurate Statistical Parsing (RASP) toolkit (Briscoe and Carroll 2002) was used to automatically tokenise the corpus.</Paragraph>
    <Paragraph position="4"> Each annotator was asked to perform sentence compression by removing tokens from the original transcript. Annotators were asked to remove words while: (a) preserving the most important information in the original sentence, and (b) ensuring the compressed sentence remained grammatical. If they wished they could leave a sentence uncompressed by marking it as inappropriate for compression. They were not allowed to delete whole sentences even if they believed they contained no information content with respect to the story as this would blur the task with abstracting.</Paragraph>
    <Paragraph position="5"> Ziff-Davis Corpus Most previous work (Jing 2000; Knight and Marcu 2002; Riezler et al. 2003; Nguyen et al. 2004a; Turner and Charniak 2005; McDonald 2006) has relied on automatically constructed parallel corpora for training and evaluation purposes. The most popular compression corpus originates from the Ziff-Davis corpus a collection of news articles on computer products. The corpus was created by matching sentences that occur in an article with sentences that occur in an abstract (Knight and Marcu 2002). The abstract sentences had to contain a subset of the original sentence's words and the word order had to remain the same.</Paragraph>
    <Paragraph position="6">  Comparisons Following the classi cation scheme adopted in the British National Corpus (Burnard 2000), we assume throughout this paper that Broadcast News and Ziff-Davis belong to different domains (spoken vs. written text) whereas they represent the same genre (i.e., news). Table 2 shows the percentage of sentences which were compressed (Comp%) and the mean compression rate (CompR) for the two corpora. The annotators compress the Broadcast News corpus to a similar degree. In contrast, the Ziff-Davis corpus is compressed much more aggressively with a compression rate of 47%, compared to 73% for Broadcast News. This suggests that the Ziff-Davis corpus may not be a true re ection of human compression performance and that humans tend to compress sentences more conservatively than the compressions found in abstracts.</Paragraph>
    <Paragraph position="7"> We also examined whether the two corpora differ with regard to the length of word spans being removed. Figure 1 shows how frequently word spans of varying lengths are being dropped. As can be seen, a higher percentage of long spans ( ve or more words) are dropped in the Ziff-Davis corpus. This suggests that the annotators are removing words rather than syntactic constituents, which provides support for a model that can act on the word level. There is no statistically signi cant difference between the length of spans dropped between the annotators, whereas there is a significant difference (p &lt; 0.01) between the annotators' spans and the Ziff-Davis' spans (using the  Wilcoxon Test).</Paragraph>
    <Paragraph position="8"> The compressions produced for the Broadcast News corpus may differ slightly to the Ziff-Davis corpus. Our annotators were asked to perform sentence compression explicitly as an isolated task rather than indirectly (and possibly subconsciously) as part of the broader task of abstracting, which we can assume is the case with the Ziff-Davis corpus.</Paragraph>
  </Section>
  <Section position="6" start_page="380" end_page="380" type="metho">
    <SectionTitle>
4 Automatic Evaluation Measures
</SectionTitle>
    <Paragraph position="0"> Previous studies relied almost exclusively on human judgements for assessing the well-formedness of automatically derived compressions. Although human evaluations of compression systems are not as large-scale as in other elds (e.g., machine translation), they are typically performed once, at the end of the development cycle. Automatic evaluation measures would allow more extensive parameter tuning and crucially experimentation with larger data sets. Most human studies to date are conducted on a small compression sample, the test portion of the Ziff-Davis corpus (32 sentences). Larger sample sizes would expectedly render human evaluations time consuming and generally more dif cult to conduct frequently. Here, we review two automatic evaluation measures that hold promise for the compression task.</Paragraph>
    <Paragraph position="1"> Simple String Accuracy (SSA, Bangalore et al.</Paragraph>
    <Paragraph position="2"> 2000) has been proposed as a baseline evaluation metric for natural language generation. It is based on the string edit distance between the generated output and a gold standard. It is a measure of the number of insertion (I), deletion (D) and substitution (S) errors between two strings. It is de ned in (4) where R is the length of the gold standard string.</Paragraph>
    <Paragraph position="3"> Simple String Accuracy = (1[?] I + D + SR ) (4) The SSA score will assess whether appropriate words have been included in the compression.</Paragraph>
    <Paragraph position="4"> Another stricter automatic evaluation method is to compare the grammatical relations found in the system compressions against those found in a gold standard. This allows us to measure the semantic aspects of summarisation quality in terms of grammatical-functional information (Riezler et al. 2003). The standard metrics of precision, recall and F-score can then be used to measure the quality of a system against a gold standard.</Paragraph>
    <Paragraph position="5"> Our implementation of the F-score measure used the grammatical relations annotations provided by RASP (Briscoe and Carroll 2002). This parser is particularly appropriate for the compression task since it provides parses for both full sentences and sentence fragments and is generally robust enough to analyse semi-grammatical compressions. We calculated F-score over all the relations provided by RASP (e.g., subject, direct/indirect object, modi er; 15 in total).</Paragraph>
    <Paragraph position="6"> Correlation with human judgements is an important prerequisite for the wider use of automatic evaluation measures. In the following section we describe an evaluation study examining whether the measures just presented indeed correlate with human ratings of compression quality.</Paragraph>
  </Section>
  <Section position="7" start_page="380" end_page="381" type="metho">
    <SectionTitle>
5 Experimental Set-up
</SectionTitle>
    <Paragraph position="0"> In this section we present our experimental set-up for assessing the performance of the two algorithms discussed above. We explain how different model parameters were estimated. We also describe a judgement elicitation study on automatic and human-authored compressions.</Paragraph>
    <Paragraph position="1"> Parameter Estimation We created two variants of the decision-tree model, one trained on the Ziff-Davis corpus and one on the Broadcast News corpus. We used 1,035 sentences from the Ziff-Davis corpus for training; the same sentences were previously used in related work (Knight and Marcu 2002). The second variant was trained on 1,237 sentences from the Broadcast News corpus.</Paragraph>
    <Paragraph position="2"> The training data for both models was parsed using Charniak's (2000) parser. Learning cases were automatically generated using a set of 90 features similar to Knight and Marcu (2002).</Paragraph>
    <Paragraph position="3"> For the word-based method, we randomly selected 50 sentences from each training set to optimise the lambda weighting parameters4. Optimisation was performed using Powell's method (Press et al. 1992). Recall from Section 2.2 that the compression score has three main parameters: the signi cance, linguistic, and SOV scores. The signi cance score was calculated using 25 million tokens from the Broadcast News corpus (spoken variant) and 25 million tokens from the North American News Text Corpus (written variant). The linguistic score was estimated using a trigram language model. The language model was trained on the North Ameri4To treat both models on an equal footing, we attempted to train the decision-tree model solely on 50 sentences. However, it was unable to produce any reasonable compressions, presumably due to insuf cient learning instances.</Paragraph>
    <Paragraph position="4">  can corpus (25 million tokens) using the CMU-Cambridge Language Modeling Toolkit (Clarkson and Rosenfeld 1997) with a vocabulary size of 50,000 tokens and Good-Turing discounting. Subjects, objects, and verbs for the SOV score were obtained from RASP (Briscoe and Carroll 2002).</Paragraph>
    <Paragraph position="5"> All our experiments were conducted on sentences for which we obtained syntactic analyses.</Paragraph>
    <Paragraph position="6"> RASP failed on 17 sentences from the Broadcast news corpus and 33 from the Ziff-Davis corpus; Charniak's (2000) parser successfully parsed the Broadcast News corpus but failed on three sentences from the Ziff-Davis corpus.</Paragraph>
    <Paragraph position="7"> Evaluation Data We randomly selected 40 sentences for evaluation purposes, 20 from the testing portion of the Ziff-Davis corpus (32 sentences) and 20 sentences from the Broadcast News corpus (133 sentences were set aside for testing). This is comparable to previous studies which have used the 32 test sentences from the Ziff-Davis corpus. None of the 20 Broadcast News sentences were used for optimisation. We ran the decision-tree system and the word-based system on these 40 sentences. One annotator was randomly selected to act as the gold standard for the Broadcast News corpus; the gold standard for the Ziff-Davis corpus was the sentence that occurred in the abstract. For each original sentence we had three compressions; two generated automatically by our systems and a human authored gold standard. Thus, the total number of compressions was 120 (3x40).</Paragraph>
    <Paragraph position="8"> Human Evaluation The 120 compressions were rated by human subjects. Their judgements were also used to examine whether the automatic evaluation measures discussed in Section 4 correlate reliably with behavioural data. Sixty unpaid volunteers participated in our elicitation study, all were self reported native English speakers. The study was conducted remotely over the Internet.</Paragraph>
    <Paragraph position="9"> Participants were presented with a set of instructions that explained the task and de ned sentence compression with the aid of examples. They rst read the original sentence with the compression hidden. Then the compression was revealed by pressing a button. Each participant saw 40 compressions. A Latin square design prevented subjects from seeing two different compressions of the same sentence. The order of the sentences was randomised. Participants were asked to rate each compression they saw on a ve point scale taking into account the information retained by the compression and its grammaticality. They were told all o: Apparently Fergie very much wants to have a career in television.</Paragraph>
    <Paragraph position="10"> d: A career in television.</Paragraph>
    <Paragraph position="11"> w: Fergie wants to have a career in television.</Paragraph>
    <Paragraph position="12"> g: Fergie wants a career in television.</Paragraph>
    <Paragraph position="13"> o: Many debugging features, including user-de ned break points and variable-watching and message-watching windows, have been added.</Paragraph>
    <Paragraph position="14"> d: Many debugging features.</Paragraph>
    <Paragraph position="15"> w: Debugging features, and windows, have been added. g: Many debugging features have been added.</Paragraph>
    <Paragraph position="16"> o: As you said, the president has just left for a busy three days of speeches and fundraising in Nevada, California and New Mexico.</Paragraph>
    <Paragraph position="17"> d: As you said, the president has just left for a busy three days.</Paragraph>
    <Paragraph position="18"> w: You said, the president has left for three days of speeches and fundraising in Nevada, California and New Mexico.</Paragraph>
    <Paragraph position="19"> g: The president left for three days of speeches and fundraising in Nevada, California and New Mexico.</Paragraph>
    <Paragraph position="20">  tence, d: decision-tree compression, w: word-based compression, g: gold standard) compressions were automatically generated. Examples of the compressions our participants saw are given in Table 3.</Paragraph>
  </Section>
class="xml-element"></Paper>