<?xml version="1.0" standalone="yes"?> <Paper uid="J01-1002"> <Title>Integrating Prosodic and Lexical Cues for Automatic Topic Segmentation</Title> <Section position="4" start_page="33" end_page="35" type="metho"> <SectionTitle> 3. The Approach </SectionTitle> <Paragraph position="0"> Topic segmentation in the paradigm used in this study and others (Allan et al. 1998) proceeds in two phases. In the first phase, the input is divided into contiguous strings of words assumed to belong to the same topic. We refer to this step as chopping. For example, in textual input, the natural units for chopping are sentences (as can be inferred from punctuation and capitalization), since we can assume that topics do not change in mid-sentence. For continuous speech input, the choice of chopping criteria is less obvious; we compare several possibilities in our experimental evaluation. Here, for simplicity, we will use "sentence" to refer to units of chopping, regardless of the criterion used. In the second phase, the sentences are further grouped into contiguous stretches belonging to one topic, i.e., the sentence boundaries are classified into topic boundaries and nontopic boundaries. Topic segmentation is thus reduced to a boundary classification problem. We will use B to denote the string of binary boundary classifications. Furthermore, our two knowledge sources are the (chopped) word sequence W and the stream of prosodic features F. Our approach aims to find the segmentation B with highest probability given the information in W and F,</Paragraph> <Paragraph position="1"> \hat{B} = \arg\max_B P(B \mid W, F), </Paragraph> <Paragraph position="2"> using statistical modeling techniques.</Paragraph> <Paragraph position="3"> In the following subsections, we first describe the prosodic model of the dependency between prosody F and topic segmentation B; then, the language model relating words W and B; and finally, two approaches for combining the models.</Paragraph> <Section position="1" start_page="34" end_page="35" type="sub_section"> <SectionTitle> 3.1 Prosodic Modeling </SectionTitle> <Paragraph position="0"> The job of the prosodic model is to estimate the posterior probability (or, alternatively, likelihood) of a topic change at a given word boundary, based on prosodic features extracted from the data. For the prosodic model to be effective, one must devise suitable, automatically extractable features. Feature values extracted from a corpus can then be used to train probability estimators and to select a parsimonious subset of features for modeling purposes. We discuss each of these steps in turn in the following sections.</Paragraph> <Paragraph position="1"> 3.1.1 Features. We started with a large collection of features capturing two major aspects of speech prosody, similar to our previous work (Shriberg, Bates, and Stolcke 1997). Duration features: duration of pauses, duration of final vowels and final rhymes, and versions of these features normalized both for phone durations and speaker statistics. Pitch features: fundamental frequency (F0) patterns preceding and following the boundary, F0 patterns across the boundary, and pitch range relative to the speaker's baseline. We processed the raw F0 estimates (obtained with ESPS signal processing software from Entropic Research Laboratory [1993]) with robustness-enhancing techniques developed by Sönmez et al. (1998).</Paragraph>
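To make these feature definitions concrete, the sketch below computes a few representative boundary features from a word-aligned transcript: the pause duration at the boundary, a log-scaled pitch-range feature relative to the speaker's baseline, and a log F0 difference across the boundary. It is a minimal illustration, not the extraction pipeline of Shriberg et al. (2000); the Word record, its field names, and the log-scale normalization shown here are simplifying assumptions (the feature names echo those reported in Section 4.6, but the exact computations there differ).

```python
from dataclasses import dataclass
import math

@dataclass
class Word:
    # Hypothetical alignment record; real values come from recognizer output.
    speaker: str
    start: float     # seconds
    end: float       # seconds
    mean_f0: float   # mean F0 over the word, in Hz (0 if unvoiced)

def boundary_features(prev: Word, next_: Word, speaker_baseline_f0: dict):
    """Compute a few illustrative prosodic features at the interword
    boundary between prev and next_ (cf. Section 3.1.1)."""
    feats = {}
    # Pause duration: the nonspeech interval between the two words.
    feats["PAU_DUR"] = max(0.0, next_.start - prev.end)
    # Pre-boundary F0 relative to the speaker's baseline, on a log scale
    # (a pitch-range feature; compared within-speaker only).
    base = speaker_baseline_f0.get(prev.speaker)
    if base and prev.mean_f0 > 0:
        feats["F0K_LR_MEAN_KBASELN"] = math.log(prev.mean_f0 / base)
    # Log F0 difference across the boundary (pitch reset); left undefined
    # at speaker changes, where the comparison is not meaningful.
    if prev.speaker == next_.speaker and prev.mean_f0 > 0 and next_.mean_f0 > 0:
        feats["F0K_WRD_DIFF_MNMN_N"] = math.log(next_.mean_f0 / prev.mean_f0)
    # Turn feature: did the speaker change at this boundary?
    feats["TURN_F"] = prev.speaker != next_.speaker
    return feats
```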
<Paragraph position="2"> We did not use amplitude- or energy-based features, since exploratory work showed these to be much less reliable than duration and pitch, and largely redundant given the above features. One reason for omitting energy features is that, unlike duration and pitch, energy-related measurements vary with channel characteristics. Since channel properties vary widely in broadcast news, features based on energy measures can correlate with shows, speakers, and so forth, rather than with the structural locations in which we were interested.</Paragraph> <Paragraph position="3"> We included features that, based on the descriptive literature, should reflect breaks in the temporal and intonational contour. We developed versions of such features that could be defined at each interword boundary and that could be extracted by completely automatic means (no human labeling). Furthermore, the features were designed to be as independent of word identities as possible, for robustness to imperfect recognizer output. A brief characterization of the informative features for the segmentation task is given with our results in Section 4.6. Since the focus here is on computational modeling, we refer the reader to a companion paper (Shriberg et al. 2000) for a detailed description of the acoustic processing and prosodic feature extraction.</Paragraph> <Paragraph position="4"> 3.1.2 Decision Trees. As probability estimators we used decision trees, trained with the IND package (Buntine and Caruana 1992), because of their ability to model feature interactions, to deal with missing features, and to handle large amounts of training data. The foremost reason for our preference for decision trees, however, is that the learned models can be inspected and diagnosed by human investigators. This ability is crucial for understanding what features are used and how, and for debugging the feature extraction process itself (see footnote 4).</Paragraph> <Paragraph position="5"> Let F_i be the features extracted from a window around the ith potential topic boundary (chopping boundary), and let B_i be the boundary type (boundary/no-boundary) at that position. We trained decision trees to predict the ith boundary type, i.e., to estimate P(B_i | F_i, W). The decision is only weakly conditioned on the word sequence W, insofar as some of the prosodic features depend on the phonetic alignment of the word models (which we will denote with W_t). We can thus expect the prosodic model estimates to be robust to recognition errors. The decision tree paradigm also allows us to add, and automatically select, other (nonprosodic) features that might be relevant to the task.</Paragraph> <Paragraph position="6"> 3.1.3 Feature Selection. The greedy nature of the decision tree learning algorithm implies that larger initial feature sets can give worse results than smaller subsets. Furthermore, it is desirable to remove redundant features, both for computational efficiency and to simplify the interpretation of results. For this purpose we developed an iterative feature selection "wrapper" algorithm (John, Kohavi, and Pfleger 1994) that finds useful, task-specific feature subsets. The algorithm combines elements of a brute-force search with previously determined heuristics about good groupings of features, and proceeds in two phases (a sketch follows below). In the first phase, the number of features is reduced by leaving out one feature at a time during tree construction; a feature whose removal increases performance is marked as to be avoided. The second phase then starts with the reduced feature set and performs a beam search over all possible subsets to maximize tree performance.</Paragraph> <Paragraph position="7"> We used entropy reduction in the overall tree (after cross-validation pruning) as a metric for comparing alternative feature subsets. Entropy reduction is the difference in entropy between the prior class distribution and the posterior distribution estimated by the tree, as measured on a held-out set; it is a more fine-grained metric than classification accuracy, and is also more relevant to the model combination approach described later.</Paragraph>
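A sketch of the two-phase wrapper search just described, with held-out entropy reduction as the comparison metric. The train_tree and entropy_reduction callables stand in for the IND tree trainer and its held-out evaluation, and the beam width is an arbitrary illustrative choice, not a value given in the paper.

```python
def select_features(features, train_tree, entropy_reduction, beam_width=5):
    """Two-phase wrapper feature selection (cf. Section 3.1.3).
    train_tree(subset) builds a decision tree from a list of feature names;
    entropy_reduction(tree) scores it on held-out data (higher is better)."""
    # Phase 1: leave-one-out filtering. Keep a feature only if removing it
    # does not improve held-out performance.
    baseline = entropy_reduction(train_tree(features))
    kept = [f for f in features
            if entropy_reduction(train_tree([g for g in features if g != f]))
            <= baseline]
    # Phase 2: beam search over subsets of the reduced set, growing each
    # candidate subset one feature at a time.
    beam = [((), 0.0)]
    best_subset, best_score = (), 0.0
    for _ in range(len(kept)):
        candidates = {}
        for subset, _score in beam:
            for f in kept:
                if f in subset:
                    continue
                new = tuple(sorted(subset + (f,)))
                if new not in candidates:
                    candidates[new] = entropy_reduction(train_tree(list(new)))
        if not candidates:
            break
        ranked = sorted(candidates.items(), key=lambda kv: -kv[1])
        beam = ranked[:beam_width]
        if ranked[0][1] > best_score:
            best_subset, best_score = ranked[0]
    return list(best_subset), best_score
```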
<Paragraph position="8"> 3.1.4 Training Data. To train the prosodic model, we automatically aligned and extracted features from 70 hours (about 700,000 words) of the Linguistic Data Consortium (LDC) 1997 Broadcast News (BN) corpus. Topic boundary information determined by human labelers was extracted from the SGML markup that accompanies the word transcripts of this corpus. The word transcripts were aligned automatically with the acoustic waveforms to obtain pause and duration information, using the SRI Broadcast News recognizer (Sankar et al. 1998).</Paragraph> <Paragraph position="9"> Footnote 4: Interpreting large trees can be a daunting task. However, the decision questions near the tree root are usually interpretable, or, when nonsensical, usually indicate problems with the data. Furthermore, as explained in Section 4.6, we have developed simple statistics that give an overview of feature usage throughout the tree.</Paragraph> </Section> <Section position="2" start_page="35" end_page="35" type="sub_section"> <SectionTitle> 3.2 Lexical Modeling </SectionTitle> <Paragraph position="0"> Lexical information in our topic segmenter is captured by statistical language models (LMs) embedded in an HMM. The approach is an extension of the topic segmenter developed by Dragon Systems for the TDT2 effort (Yamron et al. 1998), which was based purely on topical word distributions. We extend it to also capture lexical and (as described in Section 3.3) prosodic discourse cues.</Paragraph> <Paragraph position="1"> Figure 2: Structure of the basic HMM developed by Dragon for the TDT Pilot Project. The labels on the arrows indicate the transition probabilities. TSP represents the topic switch penalty.</Paragraph> <Paragraph position="2"> 3.2.1 Model Structure. The overall structure of the model is that of an HMM (Rabiner and Juang 1986) in which the states correspond to topic clusters T_j, and the observations are sentences (or chopped units) W_1, ..., W_N. The resulting HMM, depicted in Figure 2, forms a complete graph, allowing transitions between any two topic clusters. Note that it is not necessary that the topic clusters correspond exactly to the actual topics to be located; for segmentation purposes, it is sufficient that two adjacent actual topics are unlikely to be mapped to the same induced cluster. The observation likelihoods for the HMM states, P(W_i | T_j), represent the probability of generating a given sentence W_i in a particular topic cluster T_j.</Paragraph> <Paragraph position="3"> We automatically constructed 100 topic cluster LMs, using the multipass k-means algorithm described in Yamron et al. (1998). Since the HMM emissions are meant to model the topical usage of words, but not topic-specific syntactic structures, the LMs consist of unigram distributions that exclude stopwords (high-frequency function and closed-class words). To account for unobserved words, we interpolate the topic-cluster-specific LMs with the global unigram LM obtained from the entire training data. The observation likelihoods of the HMM states are then computed from these smoothed unigram LMs.</Paragraph>
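As a concrete reading of the emission model, this sketch scores one chopped unit under a topic cluster: a unigram log likelihood over non-stopwords, with the topic-specific distribution interpolated with the global unigram distribution. The interpolation weight lam is a hypothetical smoothing parameter; the paper does not state the value used.

```python
import math

def topic_log_likelihood(sentence, topic_unigram, global_unigram,
                         stopwords, lam=0.5):
    """log P(W_i | T_j) under a smoothed unigram topic LM (Section 3.2.1).
    topic_unigram and global_unigram map word -> probability;
    lam is an assumed interpolation weight."""
    logp = 0.0
    for w in sentence:
        if w in stopwords:   # stopwords are excluded from the topic LMs
            continue
        p = lam * topic_unigram.get(w, 0.0) + (1 - lam) * global_unigram.get(w, 0.0)
        if p > 0.0:
            logp += math.log(p)
    return logp
```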
<Paragraph position="4"> All HMM transitions within the same topic cluster are given probability one, whereas all transitions between topics are set to a global topic switch penalty (TSP) that is optimized on held-out training data. The TSP parameter allows trading off between false alarms and misses. Once the HMM is trained, we use the Viterbi algorithm (Viterbi 1967; Rabiner and Juang 1986) to search for the best state sequence and corresponding segmentation. Note that the transition probabilities in the model are not normalized to sum to one; this is convenient and permissible since the output of the Viterbi algorithm depends only on the relative weights of the transitions. A sketch of this decoding step is given at the end of this subsection.</Paragraph> <Paragraph position="5"> We augmented the Dragon segmenter with additional states and transitions to also capture lexical discourse cues. In particular, we wanted to model the initial and final sentences in each topic segment, as these often contain formulaic phrases and keywords used by broadcast speakers (From Washington, this is ...; And now ...). We added two additional states, BEGIN and END, to the HMM (Figure 3) to model these sentences. Likelihoods for the BEGIN and END states are obtained as the unigram language model probabilities of the initial and final sentences, respectively, of the topic segments in the training data. Note that a single BEGIN state and a single END state are shared across all topics. Best results were obtained by making traversal of these states optional in the HMM topology, presumably because some initial and final sentences are better modeled by the topic-specific LMs.</Paragraph> <Paragraph position="6"> The resulting model thus effectively combines the Dragon and UMass HMM topic segmentation approaches described in Allan et al. (1998). In preliminary experiments, we observed a 5% relative reduction in segmentation error with initial and final states over the baseline HMM topology of Figure 2. Therefore, all results reported later use an HMM topology with initial and final states. Note that, since the topic-initial and topic-final states are optional, our training of the model is suboptimal. Instead of labeling all topic-initial and topic-final training sentences as data for the corresponding state, we would expect further improvements by training the HMM in unsupervised fashion using the Baum-Welch algorithm (Baum et al. 1970; Rabiner and Juang 1986).</Paragraph> <Paragraph position="7"> 3.2.2 Training Data. The topic cluster LMs were trained on the combined TDT Pilot and TDT2 training data (Cieri et al. 1999), covering transcriptions of broadcast news from January 1992 through June 1994 and from January 1998 through February 1998, respectively. These corpora are similar in style, but do not overlap with the 1997 LDC BN corpus from which we selected our prosodic training data and the evaluation test set. For training the language models, we removed stories with fewer than 300 or more than 3,000 words, leaving 19,916 stories with an average length of 538 words (including stopwords).</Paragraph>
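A minimal sketch of the decoding step described in Section 3.2.1: Viterbi search over topic-cluster states in which within-topic transitions have weight one and cross-topic transitions pay the topic switch penalty. Working in log space is a presentation choice; as noted above, only the relative transition weights matter to the Viterbi output.

```python
def viterbi_segment(num_sentences, emission_logp, num_topics, log_tsp):
    """Viterbi decoding with a topic switch penalty (Section 3.2.1).
    emission_logp(i, j) returns log P(W_i | T_j); log_tsp is the log of
    the (unnormalized) topic switch penalty, typically negative."""
    N = num_sentences
    delta = [[0.0] * num_topics for _ in range(N)]
    psi = [[0] * num_topics for _ in range(N)]
    for j in range(num_topics):
        delta[0][j] = emission_logp(0, j)
    for i in range(1, N):
        for j in range(num_topics):
            # Staying in topic j costs nothing; switching pays log_tsp.
            best_k, best = j, delta[i - 1][j]
            for k in range(num_topics):
                if k != j and delta[i - 1][k] + log_tsp > best:
                    best_k, best = k, delta[i - 1][k] + log_tsp
            delta[i][j] = best + emission_logp(i, j)
            psi[i][j] = best_k
    # Backtrace; a topic boundary is hypothesized wherever the state changes.
    state = max(range(num_topics), key=lambda j: delta[N - 1][j])
    path = [state]
    for i in range(N - 1, 0, -1):
        state = psi[i][state]
        path.append(state)
    path.reverse()
    boundaries = [i for i in range(1, N) if path[i] != path[i - 1]]
    return path, boundaries
```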
</Section> <Section position="1" start_page="37" end_page="41" type="sub_section"> <SectionTitle> 3.3 Model Combination </SectionTitle> <Paragraph position="0"> We are now in a position to describe how lexical and prosodic information can be combined for topic segmentation. As discussed before, the LMs in the HMM capture topical word usage as well as lexical discourse cues at topic transitions, whereas a decision tree models prosodic discourse cues. We expect that these knowledge sources are largely independent, so their combination should yield significantly improved performance.</Paragraph> <Paragraph position="1"> Figure 3: Structure of an HMM with topic BEGIN and END states. TSP represents the topic switch penalty.</Paragraph> <Paragraph position="2"> Below we present two approaches for building a combined statistical model that performs topic segmentation using all available knowledge sources. For both approaches it is convenient to associate a "boundary" pseudotoken with each potential topic boundary (i.e., with each sentence boundary). Correspondingly, we introduce into the HMM new states that emit these boundary tokens. No other states emit boundary tokens; therefore each sentence boundary must align with one of the boundary states in the HMM. As shown in Figure 4, there are two boundary states for each topic cluster, one representing a topic transition and the other representing a topic-internal transition between sentences. Unless otherwise noted, the observation likelihoods for the boundary states are set to unity.</Paragraph> <Paragraph position="3"> Figure 4: Structure of the HMM combined with the prosodic model. In the figure, states B1, B2, ..., B100 represent the presence of a topic boundary, whereas states N1, N2, ..., N100 represent topic-internal sentence boundaries. TSP is the topic switch penalty.</Paragraph> <Paragraph position="4"> The addition of boundary states allows us to compute the model's prediction of topic changes as follows: Let B_1, ..., B_C denote the topic boundary states and, similarly, let N_1, ..., N_C denote the nontopic boundary states, where C is the number of topic clusters. Using the forward-backward algorithm for HMMs (Rabiner and Juang 1986), we can compute P(q_i = B_j | W) and P(q_i = N_j | W), the posterior probabilities that one of these states is occupied at boundary i. The model's prediction of a topic boundary is simply the sum over the corresponding state posteriors:</Paragraph> <Paragraph position="5"> P_{HMM}(B_i = \text{yes} \mid W) = \sum_{j=1}^{C} P(q_i = B_j \mid W) </Paragraph> <Paragraph position="6"> 3.3.1 Model Combination in the Decision Tree. Decision trees allow the training of a single classifier that takes both lexical and prosodic features as input, provided we can compactly encode the lexical information for the decision tree. We compute the posterior probability P_{HMM}(B_i = yes | W) as shown above, to summarize the HMM's belief in a topic boundary based on all available lexical information W. The posterior value is then used as an additional input feature to the prosodic decision tree, which is trained in the usual manner. During testing, we declare a topic boundary whenever the tree's overall posterior estimate P_{DT}(B_i | F_i, W) exceeds some threshold. The threshold may be varied to trade off false alarms for miss errors, or to optimize an overall cost function.</Paragraph>
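A sketch of this first combination strategy, using scikit-learn's DecisionTreeClassifier as a stand-in for the IND/CART trees actually used: the HMM posterior is appended to the prosodic feature vector before training, and the resulting posterior estimate is thresholded at test time. The min_samples_leaf setting is an assumed proxy for the paper's cross-validation pruning.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier  # stand-in for IND/CART trees

def train_combined_tree(prosodic_feats, hmm_posteriors, labels):
    """Model combination in the decision tree (Section 3.3.1): the HMM
    posterior P_HMM(B_i = yes | W) becomes one more input feature.
    labels are 0 (no boundary) / 1 (topic boundary)."""
    X = np.column_stack([prosodic_feats, hmm_posteriors])
    tree = DecisionTreeClassifier(min_samples_leaf=50)  # assumed pruning proxy
    tree.fit(X, labels)
    return tree

def predict_boundaries(tree, prosodic_feats, hmm_posteriors, threshold=0.5):
    X = np.column_stack([prosodic_feats, hmm_posteriors])
    post = tree.predict_proba(X)[:, 1]   # P_DT(B_i = yes | F_i, W)
    return post > threshold              # threshold trades false alarms for misses
```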
<Paragraph position="7"> Using HMM posteriors as decision tree features is similar in spirit to the knowledge source combination approaches used by Beeferman, Berger, and Lafferty (1999) and Reynar (1999), who also used the output of a topical word usage model as input to an overall classifier. In previous work (Stolcke et al. 1998) we used the present approach as one of the knowledge source combination strategies for sentence and disfluency detection in spontaneous speech.</Paragraph> <Paragraph position="8"> 3.3.2 Model Combination in the HMM. The second approach to model combination uses the HMM as the top-level model. In this approach, the prosodic decision tree is used to estimate likelihoods for the boundary states of the HMM, thus integrating the prosodic evidence into the HMM's segmentation decisions.</Paragraph> <Paragraph position="9"> More formally, let Q = (r_1, q_1, ..., r_i, q_i, ..., r_N, q_N) be a state sequence through the HMM. The model is constructed such that the states r_i representing topic (or BEGIN/END) clusters alternate with the states q_i representing boundary decisions. As in the baseline model, the likelihoods of the topic cluster states T_j account for the lexical observations, P(W_i \mid r_i), as estimated by the unigram LMs. Now, in addition, we let the likelihood of the boundary state at position i reflect the prosodic observation F_i, as P(F_i \mid q_i, W_t). Recall that, like W_i, F_i refers to complete sentence units; specifically, F_i denotes the prosodic features of the ith boundary between such units.</Paragraph> <Paragraph position="10"> Using this construction, the product of all state likelihoods gives the overall likelihood, accounting for both lexical and prosodic observations:</Paragraph> <Paragraph position="11"> P(W, F \mid Q) = \prod_{i=1}^{N} P(W_i \mid r_i) \, P(F_i \mid q_i, W_t) </Paragraph> <Paragraph position="12"> Applying the Viterbi algorithm to the HMM will thus return the most likely segmentation conditioned on both words and prosody, which is our goal.</Paragraph> <Paragraph position="13"> Although decomposing the likelihoods as shown allows prosodic observations to be conditioned on the words W, we use only the phonetic alignment information W_t from the word sequence W in our prosodic models, ignoring the word identities, so as to make them more robust to recognition errors.</Paragraph> <Paragraph position="14"> The likelihoods P(F_i | B_i, W_t) for the boundary states can now be obtained from the prosodic decision tree. Note that the decision tree estimates posteriors P_{DT}(B_i | F_i, W_t). These can be converted to likelihoods using Bayes' rule, as in</Paragraph> <Paragraph position="15"> P(F_i \mid B_i, W_t) = \frac{P_{DT}(B_i \mid F_i, W_t) \, P(F_i \mid W_t)}{P(B_i \mid W_t)} </Paragraph> <Paragraph position="16"> The term P(F_i | W_t) is a constant for all decisions B_i and can thus be ignored when applying the Viterbi algorithm. Next, we approximate P(B_i | W_t) ≈ P(B_i), justified by the fact that W_t contains information about start and end times of phones and words, but not directly about word identities.
Instead of explicitly dividing the posteriors by the prior P(B_i), we prefer to downsample the training set so as to make P(B_i = yes) = P(B_i = no) = 1/2.</Paragraph> <Paragraph position="17"> A beneficial side effect of this approach is that the decision tree models the lower-frequency events (topic boundaries) in greater detail than if presented with the raw, highly skewed class distribution.</Paragraph> <Paragraph position="18"> As is often the case when combining probabilistic models of different types, it is advantageous to weight the contributions of the language models and the prosodic trees relative to each other. We do so by introducing a tunable model combination weight (MCW), and by using P_{DT}(F_i \mid B_i, W_t)^{MCW} as the effective prosodic likelihoods. The value of MCW is optimized on held-out data.</Paragraph>
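Pulling these pieces together, the following sketch turns decision-tree posteriors into effective boundary-state log likelihoods for the HMM: with a tree trained on the downsampled (50/50) set, the division by P(B_i) reduces to a constant and can be dropped, and MCW enters as an exponent, i.e., a multiplier in the log domain.

```python
import math

def boundary_state_log_likelihoods(tree_posterior_yes, mcw):
    """Convert P_DT(B_i = yes | F_i, W_t) from a tree trained on a
    downsampled (50/50) set into weighted log likelihoods for the
    topic-boundary and nontopic-boundary states (Section 3.3.2).
    Terms constant across B_i are dropped, since the Viterbi output
    depends only on relative weights."""
    eps = 1e-12  # guard against log(0) at extreme posteriors
    p_yes = min(max(tree_posterior_yes, eps), 1.0 - eps)
    log_like_boundary = mcw * math.log(p_yes)           # states B_j
    log_like_nonboundary = mcw * math.log(1.0 - p_yes)  # states N_j
    return log_like_boundary, log_like_nonboundary
```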
</Section> </Section> <Section position="6" start_page="41" end_page="50" type="metho"> <SectionTitle> 4. Experiments and Results </SectionTitle> <Paragraph position="0"> To evaluate our topic segmentation models, we carried out experiments in the TDT paradigm. We first describe our test data and the evaluation metrics used to compare model performance, then give the results we obtained with individual knowledge sources, followed by the results of the combined models.</Paragraph> <Section position="1" start_page="41" end_page="42" type="sub_section"> <SectionTitle> 4.1 Test Data </SectionTitle> <Paragraph position="0"> We evaluated our system on three hours (6 shows, about 53,000 words) of the 1997 LDC BN corpus. The threshold for the model combination in the decision tree and the topic switch penalty were optimized on the larger development training set of 104 shows, which includes the prosodic model training data. The MCW for the model combination in the HMM was optimized using a smaller held-out set of 10 shows of about 85,000 words total, separate from the prosodic model training data.</Paragraph> <Paragraph position="1"> We used two test conditions: forced alignments using the true words, and recognized words as obtained by a simplified version of the SRI Broadcast News recognizer (Sankar et al. 1998), with a word error rate of 30.5%.</Paragraph> <Paragraph position="2"> Our aim in these experiments was to use fully automatic recognition and processing wherever possible. For practical reasons, we departed from this strategy in two areas. First, for word recognition, we used the acoustic waveform segmentations provided with the corpus (which also included the location of non-news material, such as commercials and music). Since current BN recognition systems perform this segmentation automatically with very good accuracy and with only a few percentage points penalty in word error rate (Sankar et al. 1998), we felt the added complication in experimental setup and evaluation was not justified.</Paragraph> <Paragraph position="3"> Second, for prosodic modeling, we used information from the corpus markup concerning speaker changes and the identity of frequent speakers (e.g., news anchors). Automatic speaker segmentation and labeling is possible, although not without errors (Przybocki and Martin 1999). Our use of speaker labels was motivated by the fact that meaningful prosodic features may require careful normalization by speaker, and unreliable speaker information would have made the analysis of prosodic feature usage much less meaningful.</Paragraph> </Section> <Section position="2" start_page="42" end_page="43" type="sub_section"> <SectionTitle> 4.2 Evaluation Metrics </SectionTitle> <Paragraph position="0"> We have adopted the evaluation paradigm used by the TDT2 (Topic Detection and Tracking Phase 2) program (Doddington 1998), allowing fair comparisons of various approaches both within this study and with respect to other recent work. Segmentation accuracy was measured using TDT evaluation software from NIST, which implements a variant of an evaluation metric suggested by Beeferman, Berger, and Lafferty (1999).</Paragraph> <Paragraph position="1"> The TDT segmentation metric is different from those used in most previous topic segmentation work, and therefore merits some discussion. It is designed to work on data streams without any potential topic boundaries, such as paragraph or sentence boundaries, being given a priori. It also gives proper partial credit to segmentation decisions that are close to actual boundaries; for example, placing a boundary one word from an actual boundary is considered a lesser error than if the hypothesized boundary is off by, say, 100 words.</Paragraph> <Paragraph position="2"> The evaluation metric reflects the probability that two positions in the corpus, probed at random and separated by a distance of k words, are correctly classified as belonging to the same story or not. If the two words belong to the same topic segment, but are erroneously claimed to be in different topic segments by the segmenter, then this will increase the system's false alarm probability. Conversely, if the two words are in different topic segments, but are erroneously marked to be in the same segment, this will contribute to the miss probability. The false alarm and miss rates are defined as averages over all possible probe positions with distance k.</Paragraph> <Paragraph position="3"> Formally, miss and false alarm rates are computed as</Paragraph> <Paragraph position="4"> P_{Miss} = \frac{\sum_s \sum_{i=1}^{N_s-k} \delta_{hyp}^s(i, i+k) \, (1 - \delta_{ref}^s(i, i+k))}{\sum_s \sum_{i=1}^{N_s-k} (1 - \delta_{ref}^s(i, i+k))}, \qquad P_{FalseAlarm} = \frac{\sum_s \sum_{i=1}^{N_s-k} (1 - \delta_{hyp}^s(i, i+k)) \, \delta_{ref}^s(i, i+k)}{\sum_s \sum_{i=1}^{N_s-k} \delta_{ref}^s(i, i+k)} </Paragraph> <Paragraph position="5"> where the summation is over all broadcast shows s and word positions i in the test corpus (N_s being the number of words in show s), and where \delta_{sys}^s(i, j) = 1 if words i and j in show s are deemed by sys to be within the same story, and 0 otherwise. Here sys can be ref, to denote the reference (correct) segmentation, or hyp, to denote the segmenter's decision.</Paragraph> <Paragraph position="6"> An analogous metric is defined for audio sources, where segmentation decisions (same or different topic) are probed at a time-based distance Δ:</Paragraph> <Paragraph position="7"> P_{Miss} = \frac{\sum_s \int \delta_{hyp}^s(t, t+\Delta) \, (1 - \delta_{ref}^s(t, t+\Delta)) \, dt}{\sum_s \int (1 - \delta_{ref}^s(t, t+\Delta)) \, dt}, \qquad P_{FalseAlarm} = \frac{\sum_s \int (1 - \delta_{hyp}^s(t, t+\Delta)) \, \delta_{ref}^s(t, t+\Delta) \, dt}{\sum_s \int \delta_{ref}^s(t, t+\Delta) \, dt} </Paragraph> <Paragraph position="8"> where the integration is over the entire duration of all stories of the shows in the test corpus, and where \delta_{sys}^s(t_1, t_2) = 1 if times t_1 and t_2 in show s are deemed by sys to be within the same story, and 0 otherwise. We used the same parameters as the official TDT2 evaluation: k = 50 words and Δ = 15 seconds.</Paragraph> <Paragraph position="9"> Furthermore, again following NIST's evaluation procedure, we combine miss and false alarm rates into a single segmentation cost metric, C_{seg} = C_{Miss} \times P_{Miss} \times P_{seg} + C_{FalseAlarm} \times P_{FalseAlarm} \times (1 - P_{seg}) (12), where C_{Miss} = 1 is the cost of a miss, C_{FalseAlarm} = 1 is the cost of a false alarm, and P_{seg} = 0.3 is the a priori probability of a segment boundary occurring within an interval of k words or Δ seconds on the TDT2 training corpus (see footnote 6).</Paragraph> <Paragraph position="10"> Footnote 6: Another parameter in the NIST evaluation is the deferral period, i.e., the amount of look-ahead before a segmentation decision is made. In all our experiments, we allowed unlimited deferral, effectively until the end of the news show being processed.</Paragraph>
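For concreteness, a single-show sketch of the word-based version of this metric, assuming each segmentation is given as the sorted word indices at which new stories start; the corpus-level rates sum these counts over all shows before dividing.

```python
def tdt_word_metric(ref_bounds, hyp_bounds, num_words, k=50, p_seg=0.3):
    """Word-based TDT segmentation metric for one show (Section 4.2).
    ref_bounds / hyp_bounds are sorted word indices where new stories start."""
    def story_id(bounds, i):
        return sum(1 for b in bounds if b <= i)  # story containing word i
    miss = fa = ref_same = ref_diff = 0
    for i in range(num_words - k):
        same_ref = story_id(ref_bounds, i) == story_id(ref_bounds, i + k)
        same_hyp = story_id(hyp_bounds, i) == story_id(hyp_bounds, i + k)
        if same_ref and not same_hyp:
            fa += 1      # pair split by a spurious boundary: false alarm
        if not same_ref and same_hyp:
            miss += 1    # pair merged across a missed boundary: miss
        ref_same += same_ref
        ref_diff += not same_ref
    p_miss = miss / ref_diff if ref_diff else 0.0
    p_fa = fa / ref_same if ref_same else 0.0
    c_seg = 1.0 * p_miss * p_seg + 1.0 * p_fa * (1 - p_seg)  # eq. (12)
    return p_miss, p_fa, c_seg
```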
</Section> <Section position="3" start_page="43" end_page="44" type="sub_section"> <SectionTitle> 4.3 Chopping </SectionTitle> <Paragraph position="0"> Unlike written text, the output of the automatic speech recognizer contains no sentence boundaries. Therefore, chopping text into (pseudo)sentences is a nontrivial problem when processing speech. Some presegmentation into roughly sentence-length units is necessary, since otherwise the observations associated with HMM states would comprise too few words to give robust likelihoods of topic choice, causing poor performance.</Paragraph> <Paragraph position="1"> We investigated chopping criteria based on a fixed number of words (FIXED), at speaker changes (TURN), at pauses (PAUSE), and, for reference, at actual sentence boundaries (SENTENCE) obtained from the transcripts. Table 1 gives the error rates for the four conditions, using the true word transcripts of the larger development data set. For the PAUSE condition, we empirically determined an optimal minimum pause duration threshold; specifically, we considered pauses exceeding 0.575 seconds as potential topic boundaries in this (and all later) experiments. For the FIXED condition, a block length of 10 words was found to work best.</Paragraph> <Paragraph position="2"> We conclude that a simple prosodic feature, pause duration, is an excellent criterion for the chopping step, giving comparable or better performance than standard sentence boundaries. Therefore, we used pause duration as the chopping criterion in all further experiments; a sketch of this procedure follows.</Paragraph>
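A sketch of the pause-based chopping used from here on, assuming word objects with start and end times as in the earlier feature-extraction sketch: any interword gap longer than the empirically tuned 0.575-second threshold closes the current pseudosentence and becomes a candidate topic boundary.

```python
def chop_by_pause(words, threshold=0.575):
    """Split an aligned word stream into pseudosentences at pauses longer
    than `threshold` seconds (the PAUSE condition, Section 4.3).
    `words` is a list of objects with .start and .end times in seconds."""
    if not words:
        return []
    units, current = [], []
    for prev, nxt in zip(words, words[1:]):
        current.append(prev)
        if nxt.start - prev.end > threshold:
            units.append(current)  # candidate topic boundary after this unit
            current = []
    current.append(words[-1])
    units.append(current)
    return units
```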
</Section> <Paragraph position="3"> Table 2: Summary of error rates with the language model only (LM), the prosody model only (PM), the combined decision tree (CM-DT), and the combined HMM (CM-HMM). (a) shows word-based error metrics, (b) shows time-based error metrics. In both cases a "chance" classifier that labels all potential boundaries as nontopic would achieve 0.3 weighted segmentation cost.</Paragraph> <Section position="4" start_page="44" end_page="44" type="sub_section"> <SectionTitle> 4.4 Source-Specific Model Tuning </SectionTitle> <Paragraph position="0"> As mentioned earlier, the segmentation models contain global parameters (the topic transition penalty of the HMM and the posterior threshold for the combined decision tree) to trade false alarms for miss errors. Optimal settings for these parameters depend on characteristics of the source, in particular on the relative frequency of topic changes. Since broadcast news programs come from identified sources, it is useful and legitimate to optimize these parameters for each show type. We therefore optimized the global parameter for each model to minimize the segmentation cost on the training corpus (after training all other model parameters in a source-independent fashion). Compared to a baseline using a source-independent global TSP and threshold, the source-dependent models showed between 5% and 10% relative error reduction. All results reported below use the source-dependent approach.</Paragraph> </Section> <Section position="5" start_page="44" end_page="45" type="sub_section"> <SectionTitle> 4.5 Segmentation Results </SectionTitle> <Paragraph position="0"> Table 2 shows the results for both individual knowledge sources (words and prosody), as well as for the combined models (decision tree and HMM). It is worth noting that the prosody-only results were obtained by running the combined HMM without language model likelihoods; this approach gave better performance than using the prosodic decision trees directly as classifiers.</Paragraph> <Paragraph position="1"> Both word- and time-based metrics are given; they exhibit generally very similar results. Another dimension of the evaluation is the use of correct word transcripts (forced alignments) versus automatically recognized words. Again, results along this dimension are very similar, with some exceptions noted below.</Paragraph> <Paragraph position="2"> Comparing the individual knowledge sources, we observe that prosody alone does somewhat better than the word-based HMM alone. The types of errors made differ consistently: the prosodic model has a higher false alarm rate, while the word-LMs have more miss errors. The prosodic model shows more false alarms because regular sentence boundaries often show characteristics similar to those of topic boundaries. This also suggests that both models could be combined by letting the prosodic model select candidate topic boundaries that would then be filtered using lexical information.</Paragraph> <Paragraph position="3"> The combined models generally improve on the individual knowledge sources (see footnote 8). In the word-based evaluation, the combined decision tree (DT) reduced overall segmentation cost by 19% over the language model on true words (17% on recognized words). The combined HMM gave even better results: 27% and 24% improvement in the error rate over the language model for true and recognized words, respectively. Looking again at the breakdown of errors, we can see that the two model combination approaches work quite differently: the combined DT has about the same miss rate as the LM, but a lower false alarm rate. The combined HMM, by contrast, combines a miss rate as low as (or lower than) that of the prosodic model with the lower false alarm rate of the LM, suggesting that the functions of the two knowledge sources are complementary, as discussed above. Furthermore, the different error patterns of the two combination approaches suggest that further error reductions could be achieved by combining the two hybrid models (see footnote 9).</Paragraph> <Paragraph position="4"> Footnote 8: The exception is the time-based evaluation of the combined decision tree. We found that the posterior probability threshold optimized on the training set works poorly on the test set for this model architecture and the time-based evaluation. The threshold that is optimal on the test set achieves C_seg = 0.1651. Section 4.8 gives a possible explanation for this result.</Paragraph> <Paragraph position="5"> Footnote 9: Such a combination of combined models was suggested by one of the reviewers; we hope to pursue it in future research.</Paragraph> <Paragraph position="6"> The trade-off between false alarms and miss probabilities is shown in more detail in Figure 5, which plots the two error metrics against each other.
Note that the false alarm rate does not reach one because the segmenter is constrained by the chopping algorithm: the pause criterion prevents the segmenter from hypothesizing topic boundaries everywhere.</Paragraph> </Section> <Section position="6" start_page="45" end_page="48" type="sub_section"> <SectionTitle> 4.6 Decision Tree for the Prosody-Only Model </SectionTitle> <Paragraph position="0"> Feature subset selection was run with an initial set of 73 potential features, which the algorithm reduced to a set of 7 nonredundant features helpful for the topic segmentation task. The full decision tree learned is shown in Figure 6. We can identify four different kinds of features used in the tree, listed below. For each feature type, we give the feature names found in the tree and the relative feature usage, an approximate measure of feature importance (Shriberg, Bates, and Stolcke 1997). Relative feature usage is computed as the relative frequency with which features of a given type are queried in the tree, over a held-out test set.</Paragraph> <Paragraph position="1"> Pause duration (PAU_DUR). This is the duration of the nonspeech interval occurring at the boundary. The importance of pause duration is underestimated here because, as explained earlier, pause durations are already used during the chopping process, so that the decision tree is applied only to boundaries exceeding a certain duration. Separate experiments using boundaries below our chopping threshold show that the tree also distinguishes shorter pause durations for segmentation decisions.</Paragraph> <Paragraph position="2"> F0 differences across the boundary (F0K_LR_MEAN_KBASELN and F0K_WRD_DIFF_MNMN_N, 35.9% usage). These features compare the mean F0 of the word preceding the boundary (or of a window within that word) to either the speaker's estimated baseline F0 (F0K_LR_MEAN_KBASELN) or to the mean F0 of the word following the boundary (F0K_WRD_DIFF_MNMN_N). Both features were computed based on a log-normal scaling of F0. Other measures (such as minimum or maximum F0 in the word or preceding window) as well as other normalizations (based on F0 toplines, or non-log-based scalings) were included in the initial feature set, but were not selected in the best-performing tree. The baseline feature captures a pitch range effect, and is useful at boundaries where the speaker changes (since range here is compared only within-speaker). The second feature captures the relative size of the pitch change at the boundary, but of course is not meaningful at speaker boundaries.</Paragraph> <Paragraph position="3"> Turn features (TURN_F and TURN_TIME, 14.6% usage). These features reflect the change of speakers. TURN_F indicates whether a speaker change occurred at the boundary, while TURN_TIME measures the time passed since the start of the current turn.</Paragraph> <Paragraph position="4"> Gender (GEN, 6.8% usage).
This feature indicates the speaker gender right before a potential boundary.</Paragraph> <Paragraph position="5"> Inspection of the tree reveals that the purely prosodic features (pause duration and F0 differences) are used as the prosody literature suggests. The longer the observed pause, the more likely a boundary corresponds to a topic change. Also, the closer a speaker comes to his or her F0 baseline, or the larger the difference to the F0 following a boundary, the more likely a topic change occurs. These features thus correspond to the well-known phenomena of boundary tones and pitch reset that are generally associated with sentence boundaries (Vaissière 1983). We found these indicators of sentence boundaries to be particularly pronounced at topic boundaries.</Paragraph> <Paragraph position="6"> While turn and gender features are not prosodic features per se, they do interact closely with them, since prosodic measurements must be informed by and carefully normalized for speaker identity and gender, and it is therefore natural to include them in a prosodic classifier (see footnote 10). Not surprisingly, we find that turn boundaries are positively correlated with topic boundaries, and that topic changes become more likely the longer a turn has been going on.</Paragraph> <Paragraph position="7"> Interestingly, speaker gender is used by the decision tree for several reasons. One reason is stylistic differences between males and females in the use of F0 at topic boundaries. This is true even after proper normalization, e.g., equating the gender-specific nontopic boundary distributions. In addition, we found that nontopic pauses (i.e., chopping boundaries) are more likely to occur in male speech. It could be that male speakers in BN are assigned longer topic segments on average, or that male speakers are more prone to pausing in general, or that male speakers dominate the spontaneous speech portions, where pausing is naturally more frequent. The details of this gender effect await further study.</Paragraph> </Section> <Section position="7" start_page="48" end_page="49" type="sub_section"> <SectionTitle> 4.7 Decision Tree for the Combined Model </SectionTitle> <Paragraph position="0"> Figure 7 shows the decision tree obtained when combining HMM posterior decisions with prosodic features (see Section 3.3.1). Again, we list the features used with their relative feature usages.</Paragraph> <Paragraph position="1"> 1. Language model posterior (POST_TOPIC, 49.3% usage). This is the posterior probability P(B_i = yes | W) computed from the HMM. 2. Pause duration (PAU_DUR, 49.3% usage). This feature is the same as described for the prosody-only model. 3. F0 differences across the boundary (F0K_WRD_DIFF_HILO_N and F0K_LR_MEAN_KBASELN, 1.4% usage). These features are similar to those found for the prosody-only tree. The only difference is that for the first feature, the comparison of F0 values across the boundary is done by taking the maximum F0 of the previous word and the minimum F0 of the following word, rather than the mean in both cases.</Paragraph> <Paragraph position="2"> Footnote 10: For example, the features that measure F0 differences across boundaries do not make sense if the speaker changes at the boundary.
Accordingly, we made such features undefined for the decision tree at turn boundaries.</Paragraph> <Paragraph position="3"> Figure 7: The decision tree of the combination model.</Paragraph> <Paragraph position="4"> The decision tree found for the combined task is smaller and uses fewer features than the one trained with prosodic features only, for two reasons. First, the LM posterior feature is found to be highly informative, superseding the selection of many of the low-frequency features previously found. Furthermore, as explained in Section 3.3.2, the prosody-only tree was trained on a downsampled dataset that equalizes the priors for topic and nontopic boundaries, as required for integration into the HMM. A welcome side effect of this procedure is that it forces the tree to model the less frequent class (topic boundaries) in much greater detail than if the tree were trained on the raw class distribution, as is the case here.</Paragraph> <Paragraph position="5"> Because of its small size, the tree in Figure 7 is particularly easy to interpret. The top-level split is based on the LM posterior. The right branch handles cases where words are highly indicative of a topic boundary. However, for short pauses, the tree queries further prosodic features to prevent false alarms. Specifically, short pauses must be accompanied both by an F0 close to the speaker's baseline and by a large F0 reset to be deemed topic boundaries. Conversely, if the LM posteriors are low (left top-level branch), but the pause is very long, the tree still outputs a topic boundary.</Paragraph> </Section> <Section position="8" start_page="49" end_page="50" type="sub_section"> <SectionTitle> 4.8 Comparison of Model Combination Approaches </SectionTitle> <Paragraph position="0"> Results indicate that the model combination approach using an HMM as the top-level model works better than the combined decision tree. While this result deserves more investigation, we can offer some preliminary insights.</Paragraph> <Paragraph position="1"> We found it difficult to set the posterior probability thresholds for the combined decision tree in a robust way. As shown by the CM-DT curve in Figure 5, there is a large jump in the false alarm/miss trade-off for the combined tree, in contrast to the combined HMM approach, which controls the trade-off by changing the topic switch penalty. This occurs because posterior probabilities from the decision tree do not vary smoothly; rather, they vary in steps corresponding to the leaves of the tree. The discontinuous character of the thresholded variable makes it hard to estimate a threshold on the training data that performs robustly on the test data. This could account for the poor result on the time-based metrics for the combined tree (where the threshold optimized on the training data was far from optimal on the test set; see footnote 8).
The same phenomenon is reflected in the fact that the prosody-only tree gave better results when embedded in an HMM without LM likelihoods than when used by itself with a posterior threshold.</Paragraph> </Section> <Paragraph position="2"> Table 3: Segmentation error rates with the language model only (LM), the combined HMM using all prosodic features (CM-HMM-all), the combined HMM using only pause duration and turn features (CM-HMM-pause-turn), and the combined HMM using only pause duration, turn, and gender features (CM-HMM-pause-turn-gender).</Paragraph> <Section position="9" start_page="50" end_page="50" type="sub_section"> <SectionTitle> 4.9 Contributions of Different Feature Types </SectionTitle> <Paragraph position="0"> We saw in Section 4.6 that pause duration is by far the single most important feature in the prosodic decision tree. Furthermore, speaker changes are queried almost as often as the F0-related features. Pause durations can be obtained using standard speech recognizers, and are in fact used by many current TDT systems (see Section 4.10). Speaker changes are not prosodic features per se, and would be detected independently from the prosodic features proper. To determine whether prosodic measurements beyond pause and speaker information improve topic segmentation accuracy, we tested systems that consisted of the HMM with the usual topic LMs, plus a decision tree that had access only to various subsets of pause- and speaker-related features, without using any of the F0-based features. Decision tree and HMM were combined as described in Section 3.3.2.</Paragraph> <Paragraph position="1"> Table 3 shows the results of the system using only topic language models (LM) as well as combined systems using all prosodic features (CM-HMM-all), only pause duration and turn features (CM-HMM-pause-turn), and only pause duration, turn, and gender features (CM-HMM-pause-turn-gender). These results show that by using only pause duration, turn, and gender features, it is indeed possible to obtain better results (20% reduced segmentation cost) than with the lexical model alone, with gender making only a minor contribution. However, we also see that a substantial further improvement (9% relative) is obtained by adding F0 features to the prosodic model.</Paragraph> </Section> </Section> <Section position="7" start_page="50" end_page="52" type="metho"> <SectionTitle> 4.10 Results Compared to Other Approaches </SectionTitle> <Paragraph position="0"> Because our work focused on the use of prosodic information and required detailed linguistic annotations (such as sentence punctuation, turn boundaries, and speaker labels), we used data from the LDC 1997 BN corpus to form the training set for the prosodic models and the (separate) test set used for evaluation. This choice was crucial for the research, but unfortunately complicates a quantitative comparison of our results to other TDT segmentation systems. The recent TDT2 evaluation used a different set of broadcast news data that postdated the material we used, and was generated by a different speech recognizer (although with a similar word error rate) (Cieri et al. 1999). Nevertheless, we have attempted to calibrate our results with respect to these TDT2 results (see footnote 11). We have not tried to compare our results to research outside the TDT evaluation framework. In fact, other evaluation methodologies differ too much to allow meaningful quantitative comparisons across publications.</Paragraph> <Paragraph position="1"> We wanted to ensure that the TDT2 evaluation test set was comparable in segmentation difficulty to our test set drawn from the 1997 BN corpus, and that the TDT2 metrics behaved similarly on both sets. To this end, we ran an early version of our words-only segmenter on both test sets.
As shown in Table 4, not only are the results on recognized words quite close, but the optimal false alarm/miss trade-off is similar as well, indicating that the two corpora have roughly similar topic granularities.</Paragraph> <Paragraph position="2"> While the full prosodic component of our topic segmenter was not applied to the TDT2 test corpus, we can compare the performance of a simplified version of SRI's segmenter to the other evaluation systems (Fiscus et al. 1999). The two best-performing systems in the evaluation were those of CMU (Beeferman, Berger, and Lafferty 1999) with C_seg = 0.1463, and Dragon (Yamron et al. 1998; van Mulbregt et al. 1999) with C_seg = 0.1579. The SRI system achieved C_seg = 0.1895. All systems in the evaluation, including ours, used only information from words and pause durations determined by a speech recognizer.</Paragraph> <Paragraph position="3"> A good reference to calibrate our performance is the Dragon system, from which we borrowed the lexical HMM segmentation framework. Dragon made adjustments in its lexical modeling that account for the improvements relative to the basic HMM structure on which our system is based. As described by van Mulbregt et al. (1999), a significant segmentation error reduction was obtained from optimizing the number of topic clusters (kept fixed at 100 in our system). Second, Dragon introduced more supervision into the model training by building separate LMs for segments that had been hand-labeled as not related to news (such as sports and commercials) in the TDT2 training corpus, which also resulted in substantial improvements. Finally, Dragon used some of the TDT2 training data for tuning the model to the specifics of the TDT2 corpus.</Paragraph> <Paragraph position="4"> In summary, the performance of our combined lexical-prosodic system, with C_seg = 0.1438, is competitive with the best word-based systems reported to date. More importantly, since we found the prosodic and lexical knowledge sources to complement each other, and since Dragon's improvements for TDT2 were confined to better modeling of the lexical information, we would expect that adding these improvements to our combined segmenter would lead to a significant improvement in the state of the art.</Paragraph> <Paragraph position="5"> Footnote 11: Since our study was conducted, a third round of TDT benchmarks (TDT3) has taken place (NIST 1999). However, for TDT3 the topic segmentation evaluation metric was modified, and the most recent results are thus not directly comparable with those from TDT2 or the present study.</Paragraph> </Section> <Section position="8" start_page="52" end_page="53" type="metho"> <SectionTitle> 5. Discussion </SectionTitle> <Paragraph position="0"> Results so far indicate that prosodic information provides an excellent source of information for automatic topic segmentation, both by itself and in conjunction with lexical information. Pause duration, a simple prosodic feature that is readily available as a by-product of speech recognition, proved highly effective in the initial chopping phase, and was the most important feature used by the prosodic decision trees. Additional, pitch-based prosodic features are also effective as features in the decision tree.</Paragraph> <Paragraph position="1"> The results obtained with recognized words (at 30% word error rate) did not differ greatly from those obtained with correct word transcripts.
No significant degradation was found with the words-only segmentation model, while the best combined model exhibited about a 5% error increase with recognized words. The lack of degradation on the words-only model may be partly due to the fact that the recognizer generally outputs fewer words than are contained in the correct transcripts, biasing the segmenter toward a lower false alarm rate. Still, part of the appeal of prosodic segmentation is that it is inherently robust to recognition errors. This characteristic makes it even more attractive for use in domains with higher error rates due to poor acoustic conditions or more conversational speaking styles. It is especially encouraging that the prosody-only segmenter achieved competitive performance.</Paragraph> <Paragraph position="2"> It was fairly straightforward to modify the original Dragon HMM segmenter (Yamron et al. 1998), which is based purely on topical word usage, to incorporate discourse cues, both lexical and prosodic. The addition of these discourse cues proved highly effective, especially in the case of prosody. The alternative knowledge source combination approach, using HMM posterior probabilities as decision tree inputs, was also effective, although less so than the HMM-based approach. Note that the HMM-based integration, as implemented here, makes more stringent assumptions about the independence of lexical and prosodic cues. The combined decision tree, on the other hand, has some ability to model dependencies between lexical and prosodic cues. The fact that the HMM-based combination approach gave the best results is thus indirect evidence that lexical and prosodic knowledge sources are indeed largely independent.</Paragraph> <Paragraph position="3"> Apart from the question of probabilistic independence, it seems that lexical and prosodic models are also complementary in the errors they make. This is manifested in the different distributions of miss and false alarm errors discussed in Section 4.5. It is also easy to find examples where the two models make complementary errors. Figure 8 shows two topic boundaries that are each missed by one model but not the other.</Paragraph> <Paragraph position="4"> Several aspects of our model are preliminary or suboptimal in nature and can be improved. Even when testing on recognized words, we used parameters optimized on forced alignments. This is suboptimal but convenient, since it avoids the need to run word recognition on the relatively large training set. Since results on recognized words are very similar to those on true words, we can conclude that not much was lost with this expedient. Also, we have not yet optimized the chopping stage relative to the combined model (only relative to the words-only segmenter). The use of prosodic features other than pause duration for chopping should further improve the overall performance.</Paragraph> <Paragraph position="5"> The improvement obtained with source-dependent topic switch penalties and posterior thresholds suggests that more comprehensive source-dependent modeling would be beneficial. In particular, both prosodic and lexical discourse cues are likely to be somewhat source-specific (e.g., because of different show formats and different speakers). Given enough training data, it is straightforward to train source-dependent models.</Paragraph>
<Paragraph position="6"> (a) ... we have a severe thunderstorm watch two severe thunderstorm watches and a tornado watch in effect the tornado watch in effect back here in eastern colorado the two severe thunderstorm watches here indiana over into ohio those obviously associated with this line which is already been producing some hail i'll be back in a moment we'll take a look at our forecast weather map see if we can cool it off in the east will be very cold tonight minus seven karen just walked in was in the computer and found out for me that national airport in washington d. c. did hit one hundred degrees today it's a record high for them it's going to be uh hot again tomorrow but it will begin to cool off the que question is what time of day is this cold front going to move by your house if you want to know how warm it's going to be tomorrow comes through early in the day won't be that hot at all midday it'll still be into the nineties but not as hot as it was today comes through late in the day you'll still be in the upper nineties but some relief is on the way ...</Paragraph> <Paragraph position="7"> (b) ... you know the if if the president has been unfaithful to his wife and at this point you know i simply don't know any of the facts other than the bits and pieces that we hear and they're simply allegations at this point but being unfaithful to your wife isn't necessarily a crime lying in an affidavit is a crime inducing someone to lie in an affidavit is a crime but that occurred after this apparent taping so i'll tell you there are going to be extremely thorny legal issues that will have to be sorted out white house spokesman mike mccurry says the administration will cooperate in starr's investigation <TOPIC_CHANGE> LM probability: 1.000000 PM probability: 0.134409 cubans have been waiting for this day for a long time after months of planning and preparation pope john paul the second will make his first visit to the island nation this afternoon it is the first pilgrimage ever by a pope to cuba judy fortin joins us now from havana with more ...</Paragraph> <Paragraph position="8"> Figure 8: Examples of true topic boundaries where lexical and prosodic models make opposite decisions. (a) The prosodic model correctly predicts a topic change, the LM does not. (b) The LM predicts a topic change, the prosodic model does not.</Paragraph> </Section> </Paper>