<?xml version="1.0" standalone="yes"?> <Paper uid="P97-1056"> <Title>Memory-Based Learning: Using Similarity for Smoothing</Title> <Section position="4" start_page="0" end_page="436" type="metho"> <SectionTitle> 2 Memory-Based Language Processing </SectionTitle> <Paragraph position="0"/> <Section position="1" start_page="0" end_page="0" type="sub_section"> <Paragraph position="0"> The basic idea in Memory-Based language processing is that processing and learning are fundamentally interwoven. Each language experience leaves a memory trace which can be used to guide later processing. When a new instance of a task is processed, a set of relevant instances is selected from memory, and the output is produced by analogy to that set.</Paragraph> <Paragraph position="1"> The techniques that are used are variants and extensions of the classic k-nearest neighbor (k-NN) classifier algorithm. The instances of a task are stored in a table as patterns of feature-value pairs, together with the associated &quot;correct&quot; output. When a new pattern is processed, the k nearest neighbors of the pattern are retrieved from memory using some similarity metric. The output is then determined by extrapolation from the k nearest neighbors, i.e. the output chosen is the one with the highest relative frequency among the nearest neighbors.</Paragraph> <Paragraph position="2"> Note that no abstractions, such as grammatical rules, stochastic automata, or decision trees, are extracted from the examples. Rule-like behavior results from the linguistic regularities that are present in the patterns of usage in memory, in combination with the use of an appropriate similarity metric.</Paragraph> <Paragraph position="3"> It is our experience that even limited forms of abstraction can harm performance on linguistic tasks, which often contain many subregularities and exceptions (Daelemans, 1996).</Paragraph> </Section> <Section position="2" start_page="0" end_page="436" type="sub_section"> <SectionTitle> 2.1 Similarity metrics </SectionTitle> <Paragraph position="0"> The most basic metric for patterns with symbolic features is the Overlap metric given in equations 1 and 2, where $\Delta(X, Y)$ is the distance between patterns $X$ and $Y$, represented by $n$ features, $w_i$ is a weight for feature $i$, and $\delta$ is the distance per feature. The k-NN algorithm with this metric, and equal weighting for all features, is called IB1 (Aha et al., 1991). Usually k is set to 1.</Paragraph> <Paragraph position="1"> $\Delta(X, Y) = \sum_{i=1}^{n} w_i \, \delta(x_i, y_i) \quad (1)$ where $\delta(x_i, y_i) = 0$ if $x_i = y_i$, and $1$ otherwise $\quad (2)$</Paragraph> <Paragraph position="3"> This metric simply counts the number of (mis)matching feature values in both patterns. If we do not have information about the importance of features, this is a reasonable choice. But if we do have some information about feature relevance, one possibility would be to add linguistic bias to weight or select different features (Cardie, 1996). An alternative, more empiricist, approach is to look at the behavior of features in the set of examples used for training. We can compute statistics about the relevance of features by looking at which features are good predictors of the class labels. Information Theory gives us a useful tool for measuring feature relevance in this way (Quinlan, 1986; Quinlan, 1993).</Paragraph> <Paragraph position="4"> Information Gain (IG) weighting looks at each feature in isolation, and measures how much information it contributes to our knowledge of the correct class label. The Information Gain of feature $f$ is measured by computing the difference in uncertainty (i.e. entropy) between the situations without and with knowledge of the value of that feature (Equation 3).</Paragraph> <Paragraph position="5"> $w_f = \frac{H(C) - \sum_{v \in V_f} P(v) \, H(C \mid v)}{si(f)} \quad (3)$ $\qquad si(f) = -\sum_{v \in V_f} P(v) \log_2 P(v) \quad (4)$</Paragraph> <Paragraph position="6"> where $C$ is the set of class labels, $V_f$ is the set of values for feature $f$, and $H(C) = -\sum_{c \in C} P(c) \log_2 P(c)$ is the entropy of the class labels. The probabilities are estimated from relative frequencies in the training set. The normalizing factor $si(f)$ (split info) is included to avoid a bias in favor of features with more values. It represents the amount of information needed to represent all values of the feature (Equation 4). The resulting IG values can then be used as weights in equation 1.</Paragraph> <Paragraph position="7"> The k-NN algorithm with this metric is called IB1-IG (Daelemans & Van den Bosch, 1992).</Paragraph>
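As an illustration of the metrics above, the following Python sketch computes IG weights (equations 3 and 4) and classifies a query pattern by weighted overlap distance (equations 1 and 2). It is a minimal sketch, not the authors' implementation; all function names are ours, and neighbors are grouped into distance buckets as discussed in section 4.

```python
from collections import Counter, defaultdict
from math import log2

def entropy(labels):
    """H(C): entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def ig_weights(patterns, labels):
    """Information Gain per feature, normalized by split info (equations 3 and 4)."""
    h_c, n, weights = entropy(labels), len(labels), []
    for f in range(len(patterns[0])):
        by_value = defaultdict(list)
        for x, y in zip(patterns, labels):
            by_value[x[f]].append(y)
        cond = sum(len(ys) / n * entropy(ys) for ys in by_value.values())
        split = -sum(len(ys) / n * log2(len(ys) / n) for ys in by_value.values())
        weights.append((h_c - cond) / split if split else 0.0)
    return weights

def overlap(x, y, w):
    """Weighted overlap distance (equations 1 and 2): sum the weights of mismatches."""
    return sum(wi for wi, a, b in zip(w, x, y) if a != b)

def classify(query, patterns, labels, w, k=1):
    """IB1(-IG): vote over all patterns in the k nearest distance buckets."""
    dists = [overlap(query, p, w) for p in patterns]
    nearest = sorted(set(dists))[:k]
    votes = Counter(y for d, y in zip(dists, labels) if d in nearest)
    return votes.most_common(1)[0][0]
```

With `w = [1.0] * len(query)` this behaves as plain IB1; with `w = ig_weights(patterns, labels)` it behaves as IB1-IG.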
<Paragraph position="8"> The possibility of automatically determining the relevance of features implies that many different, and possibly irrelevant, features can be added to the feature set. This is a very convenient methodology if theory does not constrain the choice enough beforehand, or if we wish to measure the importance of various information sources experimentally.</Paragraph> <Paragraph position="9"> Finally, it should be mentioned that MB classifiers, despite their description as table-lookup algorithms here, can be implemented to work fast, using e.g. tree-based indexing into the case-base (Daelemans et al., 1997).</Paragraph> </Section> </Section> <Section position="5" start_page="436" end_page="437" type="metho"> <SectionTitle> 3 Smoothing of Estimates </SectionTitle> <Paragraph position="0"> The commonly used method for probabilistic classification (the Bayesian classifier) chooses a class for a pattern $X$ by picking the class that has the maximum conditional probability $P(\mathrm{class} \mid X)$. This probability is estimated from the data set by looking at the relative joint frequency of occurrence of the classes and pattern $X$. If pattern $X$ is described by a number of feature values $x_1, \ldots, x_n$, we can write the conditional probability as $P(\mathrm{class} \mid x_1, \ldots, x_n)$. If a particular pattern $x_1, \ldots, x_n$ is not literally present among the examples, all classes have zero maximum-likelihood (ML) probability estimates. Smoothing methods are needed to avoid zeroes on events that could occur in the test material.</Paragraph> <Paragraph position="1"> There are two main approaches to smoothing: count re-estimation smoothing, such as the Add-One or Good-Turing methods (Church & Gale, 1991), and Back-off type methods (Bahl et al., 1983; Katz, 1987; Chen & Goodman, 1996; Samuelsson, 1996).</Paragraph> <Paragraph position="2"> We will focus here on a comparison with Back-off type methods, because an experimental comparison in Chen & Goodman (1996) shows the superiority of Back-off based methods over count re-estimation smoothing methods. With the Back-off method the probabilities of complex conditioning events are approximated by (a linear interpolation of) the probabilities of more general events:</Paragraph> <Paragraph position="3"> $\hat{P}(\mathrm{class} \mid X) = \lambda_{X} \, \bar{P}(\mathrm{class} \mid X) + \lambda_{X^{1}} \, \bar{P}(\mathrm{class} \mid X^{1}) + \cdots + \lambda_{X^{n}} \, \bar{P}(\mathrm{class} \mid X^{n}) \quad (5)$</Paragraph> <Paragraph position="4"> where $\hat{P}$ stands for the smoothed estimate, $\bar{P}$ for the relative frequency estimate, the $\lambda$ are interpolation weights that sum to one, and $X \prec X^{i}$ for all $i$, where $\prec$ is a (partial) ordering from most specific to most general feature-sets (e.g. the probabilities of trigrams ($X$) can be approximated by bigrams ($X'$) and unigrams ($X''$)). The weights of the linear interpolation are estimated by maximizing the probability of held-out data (deleted interpolation) with the forward-backward algorithm. An alternative method to determine the interpolation weights without iterative training on held-out data is given in Samuelsson (1996).</Paragraph>
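As a concrete instance of equation 5, the sketch below interpolates trigram, bigram and unigram relative-frequency estimates. It is a simplified illustration rather than the method of the papers cited above: the lambdas are fixed by hand instead of being fitted by deleted interpolation, and all names are ours.

```python
from collections import Counter

class InterpolatedTrigram:
    """Smooth P(w | u, v) by linearly interpolating trigram, bigram and
    unigram relative-frequency estimates (equation 5 with fixed lambdas)."""

    def __init__(self, tokens, lambdas=(0.6, 0.3, 0.1)):
        self.l3, self.l2, self.l1 = lambdas              # should sum to 1
        self.uni = Counter(tokens)
        self.bi = Counter(zip(tokens, tokens[1:]))
        self.tri = Counter(zip(tokens, tokens[1:], tokens[2:]))
        self.n = len(tokens)

    @staticmethod
    def _rel(count, total):
        return count / total if total else 0.0           # relative frequency

    def prob(self, w, u, v):
        p3 = self._rel(self.tri[(u, v, w)], self.bi[(u, v)])   # most specific term
        p2 = self._rel(self.bi[(v, w)], self.uni[v])           # more general
        p1 = self._rel(self.uni[w], self.n)                    # most general
        return self.l3 * p3 + self.l2 * p2 + self.l1 * p1

# "the mat sat" never occurs in this toy corpus, but the unigram term keeps
# the smoothed estimate away from zero.
lm = InterpolatedTrigram("the cat sat on the mat . the dog sat .".split())
print(lm.prob("sat", "the", "mat"))
```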
<Paragraph position="5"> We can assume for simplicity's sake that the $\lambda_{X^i}$ do not depend on the value of $X^i$, but only on $i$. In this case, if $F$ is the number of features, there are $2^F - 1$ more general terms, and we need to estimate $\lambda_i$'s for all of these. In most applications the interpolation method is used for tasks with clear orderings of feature-sets (e.g. n-gram language modeling), so that many of the $2^F - 1$ terms can be omitted beforehand. More recently, the integration of information sources and the modeling of more complex language processing tasks in the statistical framework have increased the interest in smoothing methods (Collins & Brooks, 1995; Ratnaparkhi, 1996; Magerman, 1994; Ng & Lee, 1996; Collins, 1996).</Paragraph> <Paragraph position="6"> For such applications with a diverse set of features it is not necessarily the case that terms can be excluded beforehand.</Paragraph> <Paragraph position="7"> If we let the $\lambda_{X^i}$ depend on the value of $X^i$, the number of parameters explodes even faster. A practical solution for this is to make a smaller number of buckets for the $X^i$, e.g. by clustering (see e.g. Magerman (1994)).</Paragraph> <Paragraph position="8"> Note that linear interpolation (equation 5) actually performs two functions. In the first place, if the most specific terms have non-zero frequency, it still interpolates them with the more general terms. Because the more general terms should never overrule the more specific ones, the $\lambda_{X^i}$ for the more general terms should be quite small. Therefore the interpolation effect is usually small or negligible. The second function is the pure back-off function: if the more specific terms have zero frequency, the probabilities of the more general terms are used instead. Only if terms are of a similar specificity do the $\lambda$'s truly serve to weight the relevance of the interpolation terms.</Paragraph> <Paragraph position="9"> If we isolate the pure back-off function of the interpolation equation we get an algorithm similar to the one used in Collins & Brooks (1995). It is given in schematic form in Table 1. Each step consists of a back-off to a lower level of specificity. There are as many steps as features, and there are a total of $2^F$ terms, divided over all the steps. Because all features are considered of equal importance, we call this the Naive Back-off algorithm.</Paragraph> <Paragraph position="10"> Usually, not all features are equally important, so that not all back-off terms are equally relevant for the re-estimation. Hence, the problem of fitting the $\lambda_{X^i}$ parameters is replaced by a term selection task. To optimize the term selection, an evaluation of the up to $2^F$ terms on held-out data is still necessary. In summary, the Back-off method does not provide a principled and practical domain-independent method to adapt to the structure of a particular domain by determining a suitable ordering $\prec$ between events. In the next section, we will argue that a formal operationalization of similarity between events, as provided by MBL, can be used for this purpose.</Paragraph> <Paragraph position="11"> In MBL the similarity metric and feature weighting scheme automatically determine the implicit back-off ordering using a domain-independent heuristic, with only a few parameters, in which there is no need for held-out data.</Paragraph> <Paragraph position="13"> [Table 1: the Naive Back-off smoothing algorithm. $f(X)$ stands for the frequency of pattern $X$ in the training set. An asterisk (*) stands for a wildcard in a pattern. The terms at a higher level in the back-off sequence are more specific ($\prec$) than the lower levels.]</Paragraph>
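The Naive Back-off scheme of Table 1 can be sketched as follows (an illustrative reading of the schematic, not the authors' code; names are ours): at each step, all schemata with the same number of wildcards pool their counts, and the algorithm backs off to the next step only when the current one selects no training patterns.

```python
from collections import Counter
from itertools import combinations

def naive_backoff(query, patterns, labels):
    """Estimate P(class | query) from the most specific non-empty back-off step.
    Step m uses all schemata with m wildcards, treated as equally important."""
    n = len(query)
    for m in range(n + 1):                               # 0, 1, ..., n mismatches
        counts = Counter()
        for wild in combinations(range(n), m):           # every schema at this step
            for x, y in zip(patterns, labels):
                if all(x[i] == query[i] for i in range(n) if i not in wild):
                    counts[y] += 1
        if counts:                                       # stop at the first non-empty step
            total = sum(counts.values())
            return {c: f / total for c, f in counts.items()}
    return {}
```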
</Section> <Section position="6" start_page="437" end_page="439" type="metho"> <SectionTitle> 4 A Comparison </SectionTitle> <Paragraph position="0"> If we classify pattern $X$ by looking at its nearest neighbors, we are in fact estimating the probability $P(\mathrm{class} \mid X)$ by looking at the relative frequency of the class in the set defined by $sim_k(X)$, where $sim_k(X)$ is a function from $X$ to the set of most similar patterns present in the training data. (Note that MBL is not limited to choosing the best class. It can also return the conditional distribution over all the classes.) Although the name &quot;k-nearest neighbor&quot; might mislead us by suggesting that classification is based on exactly k training patterns, the $sim_k(X)$ function given by the Overlap metric groups varying numbers of patterns into buckets of equal similarity. A bucket is defined by a particular number of mismatches with respect to pattern $X$. Each bucket can further be decomposed into a number of schemata, characterized by the position of a wildcard (i.e. a mismatch). Thus $sim_k(X)$ specifies a $\prec$ ordering in a Collins & Brooks style back-off sequence, where each bucket is a step in the sequence, and each schema is a term in the estimation formula at that step. In fact, the unweighted Overlap metric specifies exactly the same ordering as the Naive Back-off algorithm (Table 1).</Paragraph> <Paragraph position="1"> In Figure 1 this is shown for a four-featured pattern. The most specific schema is the schema with zero mismatches, which corresponds to the retrieval of an identical pattern from memory; the most general schema (not shown in the Figure) has a mismatch on every feature, which corresponds to the entire memory being the nearest neighbor.</Paragraph> <Paragraph position="3"> [Figure 1: the schemata for a four-featured pattern, grouped into buckets by the unweighted Overlap metric and re-ordered by the IG-weighted metric for the PP-attachment data (see section 5.1).]</Paragraph> <Paragraph position="4"> If Information Gain weights are used in combination with the Overlap metric, individual schemata instead of buckets become the steps of the back-off sequence (unless two schemata are exactly tied in their IG values). The $\prec$ ordering becomes slightly more complicated now, as it depends on the number of wildcards and on the magnitude of the weights attached to those wildcards. Let $S$ be the most specific (zero mismatches) schema. We can then define the ordering between schemata in the following equation, where $\Delta(X, Y)$ is the distance as defined in equation 1.</Paragraph> <Paragraph position="5"> $S' \prec S'' \iff \Delta(S, S') < \Delta(S, S'') \quad (6)$ Note that this approach represents a type of implicit parallelism. The importance of the $2^F$ back-off terms is specified using only $F$ parameters (the IG weights), where $F$ is the number of features. This advantage is not restricted to the use of IG weights; many other weighting schemes exist in the machine learning literature (see Wettschereck et al. (1997) for an overview).</Paragraph>
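To make the implicit ordering concrete, the following sketch (illustrative only; the function name is ours) enumerates the $2^F$ schemata of a query pattern and sorts them by the weighted distance of equation 6, so that $F$ IG weights fix the rank of all $2^F$ back-off terms.

```python
from itertools import product

def backoff_ordering(query, weights):
    """List all 2^F schemata of `query` (None marks a wildcard), ordered by
    their weighted distance to the fully specified schema (equation 6)."""
    F = len(query)
    ranked = []
    for mask in product([False, True], repeat=F):        # True = wildcard on that feature
        schema = tuple(None if wild else v for wild, v in zip(mask, query))
        dist = sum(w for w, wild in zip(weights, mask) if wild)
        ranked.append((dist, schema))
    return [schema for _, schema in sorted(ranked, key=lambda t: t[0])]

# With the PP-attachment weights of section 5.1 (0.03, 0.03, 0.10, 0.03), every
# schema that keeps the preposition precedes every schema that replaces it with
# a wildcard, because 0.10 outweighs the sum of the other three weights.
print(backoff_ordering(("ate", "pizza", "with", "fork"), (0.03, 0.03, 0.10, 0.03))[:4])
```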
<Paragraph position="6"> Using the IG weights causes the algorithm to rely on the most specific schema only. Although in most applications this leads to a higher accuracy, because it rejects schemata which do not match the most important features, sometimes this constraint needs to be weakened. This is desirable when: (i) there are a number of schemata which are almost equally relevant, (ii) the top-ranked schema selects too few cases to make a reliable estimate, or (iii) the chance that the few items instantiating the schema are mislabeled in the training material is high. In such cases we wish to include some of the lower-ranked schemata. For case (i) this can be done by discretizing the IG weights into bins, so that minor differences lose their significance, in effect merging some schemata back into buckets. For (ii) and (iii), and for continuous metrics (Stanfill & Waltz, 1986; Cost & Salzberg, 1993), which extrapolate from exactly k neighbors, it might be necessary to choose a k parameter larger than 1. This introduces one additional parameter, which has to be tuned on held-out data. We can then use the distance between a pattern and a schema to weight its vote in the nearest neighbor extrapolation. This results in a back-off sequence in which the terms at each step in the sequence are weighted with respect to each other, but without the introduction of any additional weighting parameters. A weighted voting function that was found to work well is due to Dudani (1976): the nearest neighbor schema receives a weight of 1.0, the furthest schema a weight of 0.0, and the other neighbors are scaled linearly to the line between these two points.</Paragraph> </Section> <Section position="7" start_page="439" end_page="440" type="metho"> <SectionTitle> 5 Applications </SectionTitle> <Paragraph position="0"/> <Section position="1" start_page="439" end_page="439" type="sub_section"> <SectionTitle> 5.1 PP-attachment </SectionTitle> <Paragraph position="0"> In this section we describe experiments with MBL on a data set of Prepositional Phrase (PP) attachment disambiguation cases. The problem in this data set is to disambiguate whether a PP attaches to the verb (as in I ate pizza with a fork) or to the noun (as in I ate pizza with cheese). This is a difficult and important problem, because the semantic knowledge needed to solve it is very difficult to model, and the ambiguity can lead to a very large number of interpretations for a sentence.</Paragraph> <Paragraph position="1"> We used a data set extracted from the Penn Treebank WSJ corpus by Ratnaparkhi et al. (1994).</Paragraph> <Paragraph position="2"> It consists of sentences containing the possibly ambiguous sequence verb noun-phrase PP. Cases were constructed from these sentences by recording the features: verb, head noun of the first noun phrase, preposition, and head noun of the noun phrase contained in the PP. The cases were labeled with the attachment decision as made by the parse annotator of the corpus. So, for the two example sentences given above we would get the feature vectors ate, pizza, with, fork, V and ate, pizza, with, cheese, N. The data set contains 20801 training cases and 3097 separate test cases, and was also used in Collins & Brooks (1995).</Paragraph> <Paragraph position="3"> The IG weights for the four features (V, N, P, N) were respectively 0.03, 0.03, 0.10, 0.03. This identifies the preposition as the most important feature: its weight is higher than the sum of the other three weights. The composition of the back-off sequence following from this can be seen in the lower part of Figure 1. The grey-colored schemata were effectively left out, because they include a mismatch on the preposition.</Paragraph>
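The distance-weighted voting function of Dudani (1976), described at the end of section 4 and also used for the Lexical Space result discussed below, might be sketched as follows (names are ours, not the authors' code):

```python
def dudani_weights(distances):
    """Nearest neighbor gets weight 1.0, furthest gets 0.0, the rest are
    scaled linearly between these two points (Dudani, 1976)."""
    d_min, d_max = min(distances), max(distances)
    if d_max == d_min:                       # all neighbors equally far: equal votes
        return [1.0] * len(distances)
    return [(d_max - d) / (d_max - d_min) for d in distances]

def weighted_vote(neighbors):
    """neighbors: list of (distance, class) pairs for the k nearest neighbors."""
    weights = dudani_weights([d for d, _ in neighbors])
    votes = {}
    for w, (_, c) in zip(weights, neighbors):
        votes[c] = votes.get(c, 0.0) + w
    return max(votes, key=votes.get)

# Three neighbors at distances 0.1, 0.4 and 0.9 get votes 1.0, 0.625 and 0.0,
# so the verb attachment wins here despite being outnumbered.
print(weighted_vote([(0.1, "V"), (0.4, "N"), (0.9, "N")]))
```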
<Paragraph position="4"> Table 2 shows a comparison of accuracy on the test set of 3097 cases. We can see that IB1, which implicitly uses the same specificity ordering as the Naive Back-off algorithm, already performs quite well in relation to other methods used in the literature.</Paragraph> <Paragraph position="5"> Collins & Brooks' (1995) Back-off model is more sophisticated than the naive model, because they performed a number of validation experiments on held-out data to determine which terms to include and, more importantly, which to exclude from the back-off sequence. They excluded all terms which did not match in the preposition! Not surprisingly, the 84.1% accuracy they achieve is matched by the performance of IB1-IG. The two methods exactly mimic each other's behavior, in spite of their huge difference in design. It should however be noted that the computation of IG weights is many orders of magnitude faster than the laborious evaluation of terms on held-out data.</Paragraph> <Paragraph position="6"> We also experimented with rich lexical representations obtained in an unsupervised way from word co-occurrences in raw WSJ text (Zavrel & Veenstra, 1995; Schütze, 1994). We call these representations Lexical Space vectors. Each word has a numeric 25-dimensional vector representation. Using these vectors, in combination with the IG weights mentioned above and a cosine metric, we obtained even slightly better results. Because the cosine metric fails to group the patterns into discrete schemata, it is necessary to use a larger number of neighbors (k = 50). The result in Table 2 is obtained using Dudani's weighted voting method.</Paragraph> <Paragraph position="7"> Note that to devise a back-off scheme on the basis of these high-dimensional representations (each pattern has $4 \times 25$ features) one would need to consider up to $2^{100}$ smoothing terms. The MBL framework is a convenient way to further experiment with even more complex conditioning events, e.g. with semantic labels added as features.</Paragraph> </Section> <Section position="2" start_page="439" end_page="440" type="sub_section"> <SectionTitle> 5.2 POS-tagging </SectionTitle> <Paragraph position="0"> Another NLP problem where the combination of different sources of statistical information is an important issue is POS-tagging, especially the guessing of the POS-tag of words not present in the lexicon. Relevant information for guessing the tag of an unknown word includes contextual information (the words and tags in the context of the word), and word form information (prefixes and suffixes, first and last letters of the word as an approximation of affix information, presence or absence of capitalization, numbers, special characters, etc.). There is a large number of potentially informative features that could play a role in correctly predicting the tag of an unknown word (Ratnaparkhi, 1996; Weischedel et al., 1993; Daelemans et al., 1996). A priori, it is not clear what the relative importance of these features is.</Paragraph>
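A sketch of how such word form and context features might be extracted for an unknown word, anticipating the PDASS pattern defined below (the function and the example inputs are ours, not taken from the experiments):

```python
def unknown_word_features(word, left_tag, right_ambiguity_tag):
    """Build a symbolic feature pattern for an unknown word: first letter
    (capitalization/prefix cue), the disambiguated tag of the word to the left,
    the ambiguity class of the word to the right, and the last two letters
    as a rough suffix. Assumes len(word) >= 2."""
    return (
        word[0],               # p: first letter
        left_tag,              # d: tag of the preceding word
        right_ambiguity_tag,   # a: possible categories of the following word
        word[-2],              # s: second-to-last letter
        word[-1],              # s: last letter
    )

# A made-up example: a capitalized -ing form preceded by a determiner and
# followed by a word that can be a noun or a verb.
print(unknown_word_features("Unbundling", "DT", "NN-VB"))
```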
<Paragraph position="1"> We compared Naive Back-off estimation and MBL with two sets of features: * PDASS: the first letter of the unknown word (p), the tag of the word to the left of the unknown word (d), a tag representing the set of possible lexical categories of the word to the right of the unknown word (a), and the two last letters (s).</Paragraph> <Paragraph position="2"> The first letter provides information about capitalization and the prefix, the two last letters about suffixes.</Paragraph> <Paragraph position="3"> * PDDDAAASSS: more left and right context features, and more suffix information.</Paragraph> <Paragraph position="4"> The data set consisted of 100,000 feature-value patterns taken from the Wall Street Journal corpus. Only open-class words were used during construction of the training set. For both IB1-IG and Naive Back-off, a 10-fold cross-validation experiment was run using both PDASS and PDDDAAASSS patterns.</Paragraph> <Paragraph position="5"> The results are in Table 3. The IG values for the features are given in Figure 2.</Paragraph> <Paragraph position="6"> The results show that for Naive Back-off (and IB1) the addition of more, possibly irrelevant, features quickly becomes detrimental (a decrease from 88.5 to 85.9), even if these added features do make an increase in generalization performance possible (witness the increase with IB1-IG from 88.3 to 89.8). Notice that we did not actually compute the $2^{10}$ terms of Naive Back-off in the PDDDAAASSS condition, as IB1 is guaranteed to provide statistically the same results. Contrary to Naive Back-off and IB1, memory-based learning with feature weighting (IB1-IG) manages to integrate diverse information sources by differentially assigning relevance to the different features. Since noisy features will receive low IG weights, this also implies that it is much more noise-tolerant.</Paragraph> <Paragraph position="7"> [Table 3: Naive Back-off and Memory-Based Learning on prediction of the category of unknown words. All differences are statistically significant (two-tailed paired t-test, p < 0.05). Standard deviations over the 10 experiments are given between brackets.]</Paragraph> </Section> </Section> </Paper>