<?xml version="1.0" standalone="yes"?> <Paper uid="E06-3003"> <Title>An Approach to Summarizing Short Stories</Title> <Section position="3" start_page="55" end_page="56" type="metho"> <SectionTitle> 2 Data Pre-Processing </SectionTitle> <Paragraph position="0"> Before working on selecting salient descriptive sentences, the stories of the training set were analyzed for the presence of surface markers denoting characters, locations and temporal anchors. To this end, the GATE Gazetteer (Cunningham et al., 2002) was used, and only entities recognized by it automatically were considered.</Paragraph> <Paragraph position="1"> The findings were as follows. Each story contained multiple mentions of characters (an average of 64 mentions per story). Yet only 22 location markers were found, most of these being street names. The 22 markers were found in 10 out of 14 stories, leaving 4 stories without any identifiable location markers. Only 4 temporal anchors were identified in all 14 stories: 2 absolute (such as years) and 2 relative (names of holidays). These findings support the intuitive idea that short stories revolve around their characters, even if the ultimate goal is to show a larger social phenomenon.</Paragraph> <Paragraph position="2"> Due to this fact, the data was pre-processed in such a way as to resolve pronominal and nominal anaphoric references to animate entities. The term anaphora can be informally explained as a way of mentioning a previously encountered entity without naming it explicitly. Consider examples 1a and 1b from The Gift of the Magi by O.</Paragraph> <Paragraph position="3"> Henry. 1a is an example of pronominal anaphora, where the noun phrase (further NP) Della is referred to as an antecedent and both occurrences of the pronoun her as anaphoric expressions or referents. Example 1b illustrates the concept of nominal anaphora. Here the NP Dell is the antecedent and my girl is the anaphoric expression (in the context of this story Della and the girl are the same person).</Paragraph> <Paragraph position="4"> (1a) Della finished her cry and attended to her cheeks with the powder rag.</Paragraph> <Paragraph position="5"> (1b) &quot;Don't make any mistake, Dell,&quot; he said, &quot;about me. I don't think there's anything [...] that could make me like my girl any less.</Paragraph> <Paragraph position="6"> The author created a system that resolved 1st and 3rd person singular pronouns (I, me, my, he, his etc.) and singular nominal anaphoric expressions (e.g. the man, but not men). The system was implemented in Java, within the GATE framework, using the Connexor Machinese Syntax parser (Tapanainen and Jarvinen, 1997).</Paragraph> <Paragraph position="7"> A generalized overview of the system is provided below. During the first step, the documents were parsed using the Connexor Machinese Syntax parser. The parsed data was then forwarded to the Gazetteer in GATE, which recognized nouns denoting persons. The original version of the Gazetteer recognized only named entities and professions, but the Gazetteer was extended to include common animate nouns such as man, woman, etc. As the next step, an implementation based on a classical pronoun resolution algorithm (Lappin and Leass, 1994) was applied to the texts. Subsequently, anaphoric noun phrases were identified using the rules outlined</Paragraph> <Section position="1" start_page="55" end_page="56" type="sub_section"> <SectionTitle> The Cost of Kindness </SectionTitle> <Paragraph position="0"> Jerome K.
Jerome (1859-1927) Augustus Cracklethorpe would be quitting Wychwood-on-the-Heath the following Monday, never to set foot--so the Rev. Augustus Cracklethorpe himself and every single member of his congregation hoped sincerely--in the neighbourhood again. [...] The Rev. Augustus Cracklethorpe, M.A., might possibly have been of service to his Church in, say, some East-end parish of unsavoury reputation, some mission station far advanced amid the hordes of heathendom. There his inborn instinct of antagonism to everybody and everything surrounding him, his unconquerable disregard for other people's views and feelings, his inspired conviction that everybody but himself was bound to be always wrong about everything, combined with determination to act and speak fearlessly in such belief, might have found their uses. In picturesque little Wychwood-on-the-Heath [...] these qualities made only for scandal and disunion.</Paragraph> <Paragraph position="1"> in (Poesio and Vieira, 2000). Finally, these anaphoric noun phrases were resolved using a modified version of (Lappin and Leass, 1994), adjusted to finding antecedents of nouns.</Paragraph> <Paragraph position="2"> A small-scale evaluation based on 2 short stories revealed results shown in Table 1. After resolving anaphoric expressions, characters that are central to the story were selected based on normalized frequency counts.</Paragraph> </Section> </Section> <Section position="4" start_page="56" end_page="58" type="metho"> <SectionTitle> 3 Selecting Descriptive Sentences Using </SectionTitle> <Paragraph position="0"/> <Section position="1" start_page="56" end_page="56" type="sub_section"> <SectionTitle> Aspectual Information 3.1 Linguistic definition of aspect </SectionTitle> <Paragraph position="0"> In order to select salient sentences that set out the background of a story, this project relied on the notion of aspect. For the purposes of this paper the author uses the term aspect to denote the same concept as what (Huddleston and Pullum, 2002) call the situation type. Informally, it can be explained as a characteristic of a clause that gives an idea about the temporal flow of an event or state being described.</Paragraph> <Paragraph position="1"> A general hierarchy of aspectual classification based on (Huddleston and Pullum, 2002) is shown in Figure 2 with examples for each type. In addition, aspectual type of a clause may be altered by multiplicity, e.g. repetitions. Consider examples 2a and 2b.</Paragraph> <Paragraph position="2"> (2a) She read a book.</Paragraph> <Paragraph position="3"> (2b) She usually read a book a day. (e.g. She used to read a book a day).</Paragraph> <Paragraph position="4"> Example 2b is referred to as serial situation (Huddleston and Pullum, 2002). It is considered to be a state, even though a single act of reading a book would constitute an event.</Paragraph> <Paragraph position="5"> Intuitively, stative situations (especially serial ones) are more likely to be associated with descriptions; that is with things that are, or things that were happening for an extended period of time (consider He was a tall man. vs. 
He opened the window.). The rest of Section 3 describes the approach used for identifying single and serial stative clauses and for using them to construct summaries.</Paragraph> </Section> <Section position="2" start_page="56" end_page="56" type="sub_section"> <SectionTitle> 3.2 Overall system design </SectionTitle> <Paragraph position="0"> Selection of the salient background sentences was conducted in the following manner. Firstly, the pre-processed data (as outlined in Section 2) was parsed using the Connexor Machinese Syntax parser. Then, sentences were recursively split into clauses. For the purposes of this project a clause is defined as a main verb with all its complements, including subject, modifiers and their sub-trees.</Paragraph> <Paragraph position="1"> Subsequently, two different representations were constructed for each clause: one fine-grained and one coarse-grained. The main difference between these two representations was in the number of attributes and in the cardinality of the set of possible values, and not in how much and what kind of information they carried. For instance, the fine-grained dataset had 3 different features with 7 possible values to carry tense-related information: tense, is_progressive and is_perfect, while the coarse-grained dataset carried only one binary feature, is_simple_past_or_present.</Paragraph> <Paragraph position="2"> Two different approaches for selecting descriptive sentences were tested on each of the representations. The first approach used machine learning techniques, namely the C5.0 (Quinlan, 1992) implementation of decision trees. The second approach consisted of applying a set of manually created rules that guided the classification process. Motivation for the features used in each dataset is given in Section 3.3. Both approaches and preliminary results are discussed in Sections 4.1 - 4.4.</Paragraph> </Section> <Section position="3" start_page="56" end_page="58" type="sub_section"> <SectionTitle> 3.3 Features </SectionTitle> <Paragraph position="0"> The part of the system responsible for selecting descriptive sentences was implemented in [...]. Features for both representations were selected based on one of the following criteria: (Criterion 1) a clause should 'talk' about important things, such as characters or locations; (Criterion 2) a clause should contain background descriptions rather than events. The number of features providing information towards each criterion, as well as the number of possible values, is shown in Table 2 for both representations.</Paragraph> <Paragraph position="1"> The attributes contributing towards Criterion 1 can be divided into character-related and location-related. Character-related features were designed so as to help identify sentences that focused on characters, not just mentioned them in passing. These attributes described whether a clause contained a character mention and what its grammatical function was (subject, object, etc.), whether such a mention was modified, and what the position of the parent sentence was relative to the sentence where this character was first mentioned (intuitively, earlier mentions of characters are more likely to be descriptive).</Paragraph> <Paragraph position="2"> Location-related features in both datasets described whether a clause contained a location mention and whether it was embedded in a prepositional phrase (further PP).
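To make the Criterion 1 attributes more concrete, the following minimal Python sketch shows how the coarse-grained character- and location-related features of a clause might be computed. It is an illustration only, not the author's GATE/Java implementation; the Mention and Clause structures and all field names are hypothetical stand-ins for the parser and gazetteer output described above.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Mention:
    text: str
    kind: str            # "character" or "location" (hypothetical labels)
    gram_function: str   # "subject", "object", ...
    is_modified: bool
    inside_pp: bool      # whether the mention is embedded in a PP

@dataclass
class Clause:
    sentence_index: int
    mentions: List[Mention] = field(default_factory=list)

def criterion1_features(clause, first_mention_sentence):
    """Coarse-grained Criterion 1 features for one clause (illustrative only)."""
    chars = [m for m in clause.mentions if m.kind == "character"]
    locs = [m for m in clause.mentions if m.kind == "location"]
    return {
        # does the clause mention a character, and in what grammatical role?
        "has_character": bool(chars),
        "character_is_subj_or_obj": any(
            m.gram_function in ("subject", "object") for m in chars),
        "character_is_modified": any(m.is_modified for m in chars),
        # distance from the sentence where this character was first mentioned
        # (earlier mentions are more likely to be descriptive)
        "sentences_since_first_mention": (
            clause.sentence_index - first_mention_sentence
            if first_mention_sentence is not None else -1),
        # location mentions tend to occur inside prepositional phrases
        "has_location": bool(locs),
        "location_in_pp": any(m.inside_pp for m in locs),
    }

if __name__ == "__main__":
    clause = Clause(sentence_index=12, mentions=[
        Mention("Della", "character", "subject", is_modified=False, inside_pp=False)])
    print(criterion1_features(clause, first_mention_sentence=10))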
The rationale behind these attributes is that location mentions are more likely to occur in PPs, such as from the Arc de Triomphe, to the Place de la Concorde.</Paragraph> <Paragraph position="3"> In order to meet Criterion 2 (that is, to select descriptive sentences) a number of aspect-related features were calculated. These features were selected so as to model characteristics of a clause that help determine its aspectual class. The characteristics used were default aspect of the main verb of a clause, tense, temporal expressions, semantic category of a verb, voice and some properties of the direct object. Each of these characteristics is listed below, along with motivation for it, and information about how it was calculated.</Paragraph> <Paragraph position="4"> It must be mentioned that several researchers looked into determining automatically various semantic properties of verbs, such as (Siegel, 1998; Merlo et al., 2002). Yet these approaches dealt with properties of verbs in general and not with particular usages in the context of concrete sentences.</Paragraph> <Paragraph position="5"> Default verbal aspect. A set of verbs, referred to as stative verbs, tends to produce mostly stative clauses. Examples of such verbs include be, like, feel, love, hate and many others. A common property of such verbs is that they do not readily yield a progressive form (Vendler, 1967; Dowty, 1979). Consider examples 3a and 3b.</Paragraph> <Paragraph position="6"> (3a) She is talking. (a dynamic verb talk) (3b) *She is liking the book. (a stative verb like) The default aspectual category of a verb was approximated using Longman Dictionary of Contemporary English (LDOCE). Verbs marked in LDOCE as not having a progressive form were considered stative and all others - dynamic. This information was expressed in both datasets as 1 binary feature.</Paragraph> <Paragraph position="7"> Grammatical tense. Usually, simple tenses are more likely to be used in stative or habitual situations than progressive or perfect tenses. In fact, it is considered to be a property of stative clauses that they normally do not occur in progressive (Vendler, 1967; Huddleston and Pullum, 2002). Perfect tenses are feasible with stative clauses, yet less frequent. Simple present is only feasible with states and not with events (Huddleston and Pullum, 2002) (see examples 4a and 4b).</Paragraph> <Paragraph position="8"> (4a) She likes writing.</Paragraph> <Paragraph position="9"> (4b) *She writes a book. (e.g. now) In the fine-grained dataset this information was expressed using 3 features with 7 possible values (whether a clause is in present, past or future tense, whether it is progressive and whether it is perfective). In the coarse-grained dataset, this information was expressed using 1 binary feature: whether a clause is in simple past or present tense.</Paragraph> <Paragraph position="10"> Temporal expressions. Temporal markers (often referred to as temporal adverbials), such as usually, never, suddenly, at that moment and many others are widely employed to mark the aspectual type of a sentence (Dowty, 1982; Harkness, 1987; By, 2002). Such markers provide a wealth of information and often unambiguously signal aspectual type. For example: (5a) She read a lot tonight.</Paragraph> <Paragraph position="11"> (5b) She always read a lot. (Or She used to read a lot.) Yet, such expressions are not easy to capture automatically. 
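Before turning to how temporal expressions were used, a short Python sketch may help illustrate the default-aspect and tense features just described. The small stative-verb set stands in for the LDOCE lookup and the morphological tags are assumed parser output, so this is an illustrative sketch rather than the actual system.

# Illustrative stand-in for the LDOCE lookup of verbs that normally have
# no progressive form (i.e. verbs that are stative by default).
STATIVE_VERBS = {"be", "like", "love", "hate", "feel", "know", "own", "seem"}

def default_aspect(lemma):
    """Binary feature: is the main verb stative by default?"""
    return lemma.lower() in STATIVE_VERBS

def tense_features(morph_tags):
    """Tense features from (assumed) morphology tags such as {"PAST", "PROG"}.

    Fine-grained: tense, is_progressive, is_perfect.
    Coarse-grained: a single flag, is_simple_past_or_present.
    """
    tags = set(morph_tags)
    tense = "past" if "PAST" in tags else ("present" if "PRES" in tags else "other")
    is_progressive = "PROG" in tags
    is_perfect = "PERF" in tags
    return {
        "tense": tense,
        "is_progressive": is_progressive,
        "is_perfect": is_perfect,
        # simple (non-progressive, non-perfect) past or present favours stative readings
        "is_simple_past_or_present": tense in ("past", "present")
                                     and not is_progressive and not is_perfect,
    }

if __name__ == "__main__":
    print(default_aspect("like"), tense_features({"PAST"}))          # True, simple past
    print(default_aspect("open"), tense_features({"PAST", "PROG"}))  # False, past progressive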
In order to use the information expressed in temporal adverbials, the author analyzed the training data for the presence of such expressions and found 295 occurrences in 10 stories. It appears that this set could be reduced to 95 templates in the following manner: for example, the expressions this year, next year, that long year could all be reduced to a template <some_expression> year. Each template is characterized by 3 features: type of the temporal expression (location, duration, frequency, enactment) (Harkness, 1987); magnitude (year, day, etc.); and plurality (year vs. years). The fine-grained dataset contained 3 such features with 14 possible values (type of expression, its magnitude and plurality). The coarse-grained dataset contained 1 binary feature (whether there was an expression of a long period of time).</Paragraph> <Paragraph position="12"> Verbal semantics. The inherent meaning of a verb also influences the aspectual type of a given clause.</Paragraph> <Paragraph position="13"> (6a) She memorized that book by heart. (an event) (6b) She enjoyed that book. (a state) Not surprisingly, this information is very difficult to capture automatically. Hoping to leverage it, the author used the semantic categorization of the 3,000 most common English verbs as described in (Levin, 1993). The fine-grained dataset contained a feature with 49 possible values that corresponded to the top-level categories described in (Levin, 1993). The coarse-grained dataset contained 1 binary feature that carried this information. Verbs that belong to more than one category were manually assigned to a single category that best captured their literal meaning.</Paragraph> <Paragraph position="14"> Voice. Usually, clauses in passive voice only occur with events (Siegel, 1998). Both datasets contained 1 binary feature to describe this information. Properties of direct object. For some verbs, properties of the direct object help determine whether a given clause is stative or dynamic. (7a) She wrote a book. (event) (7b) She wrote books. (state) The fine-grained dataset contained 2 binary features to describe whether the direct object is definite or indefinite and whether it is plural. The coarse-grained dataset contained no such information because it appeared that this information was not crucial.</Paragraph> <Paragraph position="15"> Several additional features were present in both datasets that described overall characteristics of a clause and its parent sentence, such as whether these were affirmative, their index in the text, etc. The fine-grained dataset contained 4 such features with 9 possible values and the coarse-grained dataset contained 3 features with 7 values.</Paragraph> </Section> </Section> <Section position="5" start_page="58" end_page="59" type="metho"> <SectionTitle> 4 Experiments </SectionTitle> <Paragraph position="0"/> <Section position="1" start_page="58" end_page="59" type="sub_section"> <SectionTitle> 4.1 Experimental setting </SectionTitle> <Paragraph position="0"> The data used in the experiments consisted of 23 stories split into a training set (14 stories) and a testing set (9 stories). Each clause of every story was annotated by the author of this paper as summary-worthy or not. Therefore, the classification process occurred at the clause-level. Yet, summary construction occurred at the sentence-level, that is, if one clause in a sentence was considered summary-worthy, the whole sentence was also considered summary-worthy. Because of this, results are reported at two levels: clause and sentence.
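The projection from clause-level decisions to sentence-level summaries can be illustrated with a small Python sketch; the data layout (one list of clause labels per sentence) is an assumption made purely for illustration and is not the paper's actual data format.

from typing import Dict, List

def project_to_sentences(clause_labels: List[List[bool]]) -> List[bool]:
    """A sentence is summary-worthy if any of its clauses is (as described above)."""
    return [any(labels) for labels in clause_labels]

def level_counts(clause_labels: List[List[bool]]) -> Dict[str, int]:
    """Counts used when reporting results at both levels (clause and sentence)."""
    sentence_labels = project_to_sentences(clause_labels)
    return {
        "clauses_selected": sum(sum(labels) for labels in clause_labels),
        "sentences_selected": sum(sentence_labels),
    }

if __name__ == "__main__":
    # three sentences with 2, 1 and 3 clauses respectively
    labels = [[False, True], [False], [True, True, False]]
    print(project_to_sentences(labels))  # [True, False, True]
    print(level_counts(labels))          # {'clauses_selected': 3, 'sentences_selected': 2}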
The results at the clause-level are more appropriate to judge the accuracy of the classification process. The results at the sentence level are better suited for giving an idea about how close the produced summaries are to their annotated counterparts.</Paragraph> <Paragraph position="1"> The training set contained 5,514 clauses and the testing set contained 4,196 clauses. The target compression rate was set at 6%, expressed in terms of sentences. This rate was selected because it approximately corresponds to the average compression rate achieved by the annotator (5.62%). The training set consisted of 310 positive examples and 5,204 negative examples, and the testing set included 218 positive and 3,978 negative examples.</Paragraph> <Paragraph position="2"> Before describing the experiments and discussing results, it is useful to define baselines. The author of this paper is not familiar with any comparable summarization experiments and for this reason was unable to use existing work for comparison. Therefore, a baseline needed to be defined in different terms. To this end, two naive baselines were computed.</Paragraph> <Paragraph position="3"> Intuitively, when a person wishes to decide whether to read a certain book or not, he opens it and flips through several pages at the beginning. Imitating this process, a simple lead baseline consisting of the first 6% of the sentences in a story was computed. It is denoted LEAD in Tables 3 and 4. The second baseline is a slightly modified version of the lead baseline and it consists of the first 6% of the sentences that contain at least one mention of one of the important characters. It is denoted LEAD CHAR in Tables 3 and 4.</Paragraph> </Section> </Section> <Section position="6" start_page="59" end_page="60" type="metho"> <Paragraph position="0"/> <Section position="1" start_page="59" end_page="60" type="sub_section"> <SectionTitle> 4.2 Experiments with the rules </SectionTitle> <Paragraph position="0"> The first classification procedure consisted of applying a set of manually designed rules to produce descriptive summaries. The rules were designed using the same features that were used for machine learning and that are described in Section 3.3.</Paragraph> <Paragraph position="1"> Two sets of rules were created: one for the fine-grained dataset and another for the coarse-grained dataset. Due to space restrictions it is not possible to reproduce the rules in this paper. Yet, several examples are given in Figure 4. (If a rule returns True, then a clause is considered to be summary-worthy.) The results obtained using these rules are presented in Table 3. They are discussed along with the results obtained using machine learning in Section 4.4.</Paragraph> </Section> <Section position="2" start_page="59" end_page="60" type="sub_section"> <SectionTitle> 4.3 Experiments with machine learning </SectionTitle> <Paragraph position="0"> As an alternative to rule construction, the author used the C5.0 (Quinlan, 1992) implementation of decision trees to select descriptive sentences. The algorithm was chosen mainly because of the readability of its output. Both training and testing datasets exhibited a 1:18 class imbalance, which, given the small size of the datasets, needed to be compensated for. Undersampling (randomly removing instances of the majority class) was applied to both datasets in order to correct the class imbalance. This yielded altogether 4 different datasets (see Table 4).
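The undersampling step can be sketched as follows in Python; this is an illustration of randomly discarding majority-class clauses under an assumed target ratio, not the exact procedure used to prepare the C5.0 datasets.

import random
from typing import List, Sequence, Tuple

def undersample(examples: Sequence[Tuple[dict, bool]],
                ratio: float = 1.0, seed: int = 0) -> List[Tuple[dict, bool]]:
    """Randomly remove negative (majority-class) examples.

    ratio = desired number of negatives per positive, e.g. 1.0 for a
    balanced set instead of the original 1:18 imbalance (assumed target).
    """
    rng = random.Random(seed)
    positives = [ex for ex in examples if ex[1]]
    negatives = [ex for ex in examples if not ex[1]]
    keep = min(len(negatives), int(len(positives) * ratio))
    data = positives + rng.sample(negatives, keep)
    rng.shuffle(data)
    return data

if __name__ == "__main__":
    # toy data with roughly a 1:18 imbalance, similar to the clause-level datasets
    toy = [({"id": i}, i % 19 == 0) for i in range(380)]
    balanced = undersample(toy)
    print(sum(1 for _, y in balanced if y), sum(1 for _, y in balanced if not y))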
For each dataset, the best model was selected using 10-fold cross-validation on the training set. Figure 4 shows examples of the manually created rules: Rule 1: if a clause contains a character mention as subject or object and a temporal expression of type enactment (ever, never, always), return True. Rule 2: if a clause contains a character mention as subject or object and a stative verb, return True. Rule 3: if a clause is in progressive tense, return False.</Paragraph> </Section> <Section position="3" start_page="60" end_page="60" type="sub_section"> <SectionTitle> 4.4 Results </SectionTitle> <Paragraph position="0"> The results displayed in Tables 3 and 4 show how many clauses (and sentences) selected by the system corresponded to those chosen by the annotator. The columns Precision, Recall and F-score show measures for the minority class (summary-worthy). The columns Overall error rate and Kappa show measures for both classes.</Paragraph> <Paragraph position="1"> Although modest, the results suggest an improvement over both baselines. Statistical significance of the improvements over the baselines was tested at p = 0.001 for each dataset-approach pair.</Paragraph> <Paragraph position="2"> The improvements are significant in all cases.</Paragraph> <Paragraph position="3"> The F-score columns in Tables 3 and 4 show the F-score for the minority class (summary-worthy sentences), which is a measure combining precision and recall for this class. Yet, this measure does not take into account the success rate on the negative class. For this reason, Cohen's kappa statistic (Cohen, 1960) was also computed. It measures the overall agreement between the system and the annotator. This measure is shown in the column named Kappa.</Paragraph> <Paragraph position="4"> In order to see which features were the most informative in each dataset, a small experiment was conducted. The author removed one feature at a time from the training set and used the decrease in F-score as a measure of informativeness. The experiment revealed that in the coarse-grained dataset the following features were the most informative: 1) the position of a sentence relative to the first mention of a character; 2) whether a clause contained character mentions; 3) voice; and 4) tense. In the fine-grained dataset the findings were similar: 1) presence of a character mention; 2) position of a sentence in the text; 3) voice; and 4) tense were more important than the other features.</Paragraph> <Paragraph position="5"> It is not easy to interpret these results in any conclusive way at this stage. The main weakness, of course, is that the results are based solely on the annotations of one person, while it is generally known that human annotators are likely to exhibit some disagreement. The second issue lies in the fact that, given the compression rate of 6% and the objective that the summary be indicative and not informative, more than one 'good' summary is possible. It would therefore be desirable that the results be evaluated not based on overlap with an annotator (or annotators, for that matter), but on how well they achieve the stated objective.</Paragraph> </Section> </Section> </Paper>