Measures of Distributional Similarity 
Lillian Lee 
Department of Computer Science 
Cornell University 
Ithaca, NY 14853-7501 
llee@cs.cornell.edu
Abstract 
We study distributional similarity measures for 
the purpose of improving probability estima- 
tion for unseen cooccurrences. Our contribu- 
tions are three-fold: an empirical comparison 
of a broad range of measures; a classification 
of similarity functions based on the information 
that they incorporate; and the introduction of 
a novel function that is superior at evaluating 
potential proxy distributions. 
1 Introduction 
An inherent problem for statistical methods in 
natural language processing is that of sparse 
data -- the inaccurate representation in any 
training corpus of the probability of low fre- 
quency events. In particular, reasonable events 
that happen to not occur in the training set may 
mistakenly be assigned a probability of zero. 
These unseen events generally make up a sub- 
stantial portion of novel data; for example, Es- 
sen and Steinbiss (1992) report that 12% of the 
test-set bigrams in a 75%-25% split of one mil- 
lion words did not occur in the training parti- 
tion. 
We consider here the question of how to es- 
timate the conditional cooccurrence probability 
P(v|n) of an unseen word pair (n, v) drawn from
some finite set N x V. Two state-of-the-art 
technologies are Katz's (1987) backoff method 
and Jelinek and Mercer's (1980) interpolation 
method. Both use P(v) to estimate P(v|n)
when (n, v) is unseen, essentially ignoring the 
identity of n. 
An alternative approach is distance-weighted 
averaging, which arrives at an estimate for un- 
seen cooccurrences by combining estimates for 
cooccurrences involving similar words:¹

$$\hat{P}(v|n) = \frac{\sum_{m \in S(n)} \mathrm{sim}(n, m)\, P(v|m)}{\sum_{m \in S(n)} \mathrm{sim}(n, m)}, \qquad (1)$$
where S(n) is a set of candidate similar words 
and sim(n, m) is a function of the similarity 
between n and m. We focus on distributional 
rather than semantic similarity (e.g., Resnik 
(1995)) because the goal of distance-weighted 
averaging is to smooth probability distributions 
-- although the words "chance" and "probabil- 
ity" are synonyms, the former may not be a 
good model for predicting what cooccurrences 
the latter is likely to participate in. 
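To make equation (1) concrete, the following is a minimal Python sketch of distance-weighted averaging; the names (dwa_estimate, candidates, cond, sim) are illustrative, not the code used in the experiments reported here.

    def dwa_estimate(v, n, candidates, cond, sim):
        """Sketch of equation (1): estimate P(v|n) by combining the
        distributions of words similar to n.  `candidates` plays the
        role of S(n); cond[m] is P(.|m) as a dict over verbs; `sim`
        is any similarity function over word pairs."""
        num = den = 0.0
        for m in candidates:
            w = sim(n, m)
            num += w * cond[m].get(v, 0.0)
            den += w
        return num / den if den > 0 else 0.0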
There are many plausible measures of distri- 
butional similarity. In previous work (Dagan 
et al., 1999), we compared the performance of 
three different functions: the Jensen-Shannon 
divergence (total divergence to the average), the 
L1 norm, and the confusion probability. Our 
experiments on a frequency-controlled pseu- 
doword disambiguation task showed that using 
any of the three in a distance-weighted aver- 
aging scheme yielded large improvements over 
Katz's backoff smoothing method in predicting 
unseen cooccurrences. Furthermore, by using a
restricted version of model (1) that stripped in- 
comparable parameters, we were able to empir- 
ically demonstrate that the confusion probabil- 
ity is fundamentally worse at selecting useful 
similar words. D. Lin also found that the choice 
of similarity function can affect the quality of 
automatically-constructed thesauri to a statis- 
tically significant degree (1998a) and the ability 
to determine common morphological roots by as 
much as 49% in precision (1998b). 
¹The term "similarity-based", which we have used
previously, has been applied to describe other models 
as well (L. Lee, 1997; Karov and Edelman, 1998). 
These empirical results indicate that investi- 
gating different similarity measures can lead to 
improved natural language processing. On the 
other hand, while there have been many sim- 
ilarity measures proposed and analyzed in the 
information retrieval literature (Jones and Fur- 
nas, 1987), there has been some doubt expressed 
in that community that the choice of similarity 
metric has any practical impact: 
Several authors have pointed out that 
the difference in retrieval performance 
achieved by different measures of asso- 
ciation is insignificant, providing that 
these are appropriately normalised. 
(van Rijsbergen, 1979, pg. 38) 
But no contradiction arises because, as van Rijs- 
bergen continues, "one would expect this since 
most measures incorporate the same informa- 
tion". In the language-modeling domain, there 
is currently no agreed-upon best similarity met- 
ric because there is no agreement on what the 
"same information"- the key data that a sim- 
ilarity function should incorporate -- is. 
The overall goal of the work described here 
was to discover these key characteristics. To 
this end, we first compared a number of com- 
mon similarity measures, evaluating them in a 
parameter-free way on a decision task. When 
grouped by average performance, they fell into 
several coherent classes, which corresponded to 
the extent to which the functions focused on 
the intersection of the supports (regions of posi- 
tive probability) of the distributions. Using this 
insight, we developed an information-theoretic 
metric, the skew divergence, which incorporates 
the support-intersection data in an asymmetric 
fashion. This function yielded the best perfor- 
mance overall: an average error rate reduction 
of 4% (significant at the .01 level) with respect 
to the Jensen-Shannon divergence, the best pre- 
dictor of unseen events in our earlier experi- 
ments (Dagan et al., 1999). 
Our contributions are thus three-fold: an em- 
pirical comparison of a broad range of similarity 
metrics using an evaluation methodology that 
factors out inessential degrees of freedom; a pro- 
posal, building on this comparison, of a charac- 
teristic for classifying similarity functions; and 
the introduction of a new similarity metric in- 
corporating this characteristic that is superior 
at evaluating potential proxy distributions. 
2 Distributional Similarity Functions 
In this section, we describe the seven distributional similarity functions we initially evaluated.² For concreteness, we choose N and V
to be the set of nouns and the set of transitive 
verbs, respectively; a cooccurrence pair (n, v) 
results when n appears as the head noun of the 
direct object of v. We use P to denote probabil- 
ities assigned by a base language model (in our 
experiments, we simply used unsmoothed rel- 
ative frequencies derived from training corpus 
counts). 
Let n and m be two nouns whose distribu- 
tional similarity is to be determined; for nota- 
tional simplicity, we write q(v) for P(v|n) and r(v) for P(v|m), their respective conditional
verb cooccurrence probabilities. 
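For instance, the base probabilities can be computed as follows (a sketch; the cooccurrence pairs below are invented for illustration and are not drawn from the actual corpus):

    from collections import Counter, defaultdict

    # Unsmoothed relative frequencies as the base language model.
    pairs = [("apple", "eat"), ("apple", "eat"), ("apple", "buy"),
             ("chance", "take"), ("chance", "miss")]

    counts = defaultdict(Counter)
    for noun, verb in pairs:
        counts[noun][verb] += 1

    def cond_dist(noun):
        """q(v) = P(v|noun) as a dict over the verbs seen with noun."""
        total = sum(counts[noun].values())
        return {v: c / total for v, c in counts[noun].items()}

    q = cond_dist("apple")   # approximately {'eat': 2/3, 'buy': 1/3}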
Figure 1 lists several familiar functions. The 
cosine metric and Jaccard's coefficient are com- 
monly used in information retrieval as measures 
of association (Salton and McGill, 1983). Note 
that Jaccard's coefficient differs from all the 
other measures we consider in that it is essen- 
tially combinatorial, being based only on the 
sizes of the supports of q, r, and q • r rather 
than the actual values of the distributions. 
Previously, we found the Jensen-Shannon di- 
vergence (Rao, 1982; J. Lin, 1991) to be a useful 
measure of the distance between distributions: 
$$JS(q,r) = \frac{1}{2}\left[\, D\big(q \,\big\|\, \mathrm{avg}_{q,r}\big) + D\big(r \,\big\|\, \mathrm{avg}_{q,r}\big) \,\right]$$

The function D is the KL divergence, which measures the (always nonnegative) average inefficiency in using one distribution to code for another (Cover and Thomas, 1991):

$$D\big(p_1(V) \,\big\|\, p_2(V)\big) = \sum_{v} p_1(v) \log \frac{p_1(v)}{p_2(v)}.$$

The function avg_{q,r} denotes the average distribution avg_{q,r}(v) = (q(v) + r(v))/2; observe that its use ensures that the Jensen-Shannon divergence is always defined. In contrast, D(q||r) is undefined if q is not absolutely continuous with respect to r (i.e., the support of q is not a subset of the support of r).
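In code, the two divergences can be sketched as follows (distributions are dicts over their supports; a minimal illustration, not the experimental implementation):

    import math

    def kl(p1, p2):
        """D(p1||p2); raises an error here unless the support of p1
        is a subset of the support of p2."""
        return sum(x * math.log(x / p2[v]) for v, x in p1.items() if x > 0)

    def js(q, r):
        """Jensen-Shannon divergence: always defined, since the
        average distribution covers the supports of both q and r."""
        avg = {v: (q.get(v, 0.0) + r.get(v, 0.0)) / 2
               for v in set(q) | set(r)}
        return 0.5 * (kl(q, avg) + kl(r, avg))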
²Strictly speaking, some of these functions are dissim-
ilarity measures, but each such function f can be recast 
as a similarity function via the simple transformation 
C - f, where C is an appropriate constant. Whether we 
mean f or C - f should be clear from context. 
Euclidean distance:   $L_2(q,r) = \sqrt{\sum_v \big(q(v) - r(v)\big)^2}$

L1 norm:   $L_1(q,r) = \sum_v |q(v) - r(v)|$

cosine:   $\cos(q,r) = \dfrac{\sum_v q(v)\,r(v)}{\sqrt{\sum_v q(v)^2}\,\sqrt{\sum_v r(v)^2}}$

Jaccard's coefficient:   $\mathrm{Jac}(q,r) = \dfrac{|\{v : q(v) > 0 \text{ and } r(v) > 0\}|}{|\{v : q(v) > 0 \text{ or } r(v) > 0\}|}$

Figure 1: Well-known functions
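A sketch of these four functions over sparse distributions (dicts mapping verbs with positive probability to their values; illustrative code only):

    import math

    def l2(q, r):
        return math.sqrt(sum((q.get(v, 0.0) - r.get(v, 0.0)) ** 2
                             for v in set(q) | set(r)))

    def l1(q, r):
        return sum(abs(q.get(v, 0.0) - r.get(v, 0.0))
                   for v in set(q) | set(r))

    def cosine(q, r):
        dot = sum(q[v] * r[v] for v in set(q) & set(r))
        return dot / (math.sqrt(sum(x * x for x in q.values())) *
                      math.sqrt(sum(x * x for x in r.values())))

    def jaccard(q, r):
        # Purely combinatorial: only the support sizes matter.
        return len(set(q) & set(r)) / len(set(q) | set(r))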
The confusion probability has been used by 
several authors to smooth word cooccurrence 
probabilities (Sugawara et al., 1985; Essen and 
Steinbiss, 1992; Grishman and Sterling, 1993); 
it measures the degree to which word m can 
be substituted into the contexts in which n ap- 
pears. If the base language model probabili- 
ties obey certain Bayesian consistency condi- 
tions (Dagan et al., 1999), as is the case for 
relative frequencies, then we may write the con- 
fusion probability as follows: 
$$\mathrm{conf}(q, r, P(m)) = \sum_{v} q(v)\, r(v)\, \frac{P(m)}{P(v)}.$$
Note that it incorporates unigram probabilities 
as well as the two distributions q and r. 
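A corresponding sketch (the unigram probabilities are assumed given by the base model; P_m stands for P(m) and P_v maps each verb to P(v)):

    def conf(q, r, P_m, P_v):
        """Confusion probability: sum_v q(v) r(v) P(m)/P(v)."""
        return P_m * sum(q[v] * r[v] / P_v[v] for v in set(q) & set(r))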
Finally, Kendall's τ, which appears in work
on clustering similar adjectives (Hatzivassilo- 
glou and McKeown, 1993; Hatzivassiloglou, 
1996), is a nonparametric measure of the as- 
sociation between random variables (Gibbons, 
1993). In our context, it looks for correlation 
between the behavior of q and r on pairs of 
verbs. Three versions exist; we use the simplest, 
τ_a, here:

$$\tau(q,r) = \frac{\sum_{v_1, v_2} \mathrm{sign}\big[(q(v_1) - q(v_2))\,(r(v_1) - r(v_2))\big]}{2 \binom{|V|}{2}},$$
where sign(x) is 1 for positive arguments, -1 
for negative arguments, and 0 at 0. The intu- 
ition behind Kendall's τ is as follows. Assume
all verbs have distinct conditional probabilities. 
If sorting the verbs by the likelihoods assigned 
by q yields exactly the same ordering as that 
which results from ranking them according to 
which results from ranking them according to r, then τ(q,r) = 1; if it yields exactly the opposite ordering, then τ(q,r) = -1. We treat a value of -1 as indicating extreme dissimilarity.³
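The statistic can be sketched as below; summing over unordered pairs and dividing by (|V| choose 2) is equivalent to the ordered-pair form above, since the sign term is unchanged when v1 and v2 are swapped.

    from itertools import combinations

    def sign(x):
        return (x > 0) - (x < 0)

    def tau_a(q, r, V):
        """Kendall's tau_a over the full verb set V: concordant
        pairs count +1, discordant pairs -1, ties 0."""
        pairs = list(combinations(V, 2))
        s = sum(sign((q.get(v1, 0.0) - q.get(v2, 0.0)) *
                     (r.get(v1, 0.0) - r.get(v2, 0.0)))
                for v1, v2 in pairs)
        return s / len(pairs)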
It is worth noting at this point that there 
are several well-known measures from the NLP 
literature that we have omitted from our ex- 
periments. Arguably the most widely used is 
the mutual information (Hindle, 1990; Church 
and Hanks, 1990; Dagan et al., 1995; Luk, 
1995; D. Lin, 1998a). It does not apply in 
the present setting because it does not mea- 
sure the similarity between two arbitrary prob- 
ability distributions (in our case, P(V|n) and P(V|m)), but rather the similarity between
a joint distribution P(X1,X2) and the cor- 
responding product distribution P(X1)P(X2). 
Hamming-type metrics (Cardie, 1993; Zavrel 
and Daelemans, 1997) are intended for data 
with symbolic features, since they count feature 
label mismatches, whereas we are dealing with feature values that are probabilities. Variations of
the value difference metric (Stanfill and Waltz, 
1986) have been employed for supervised disam- 
biguation (Ng and H.B. Lee, 1996; Ng, 1997); 
but it is not reasonable in language modeling to 
expect training data tagged with correct prob- 
abilities. The Dice coefficient (Smadja et al.,
1996; D. Lin, 1998a, 1998b) is monotonic in Jac- 
card's coefficient (van Rijsbergen, 1979), so its 
inclusion in our experiments would be redun- 
dant. Finally, we did not use the KL divergence 
because it requires a smoothed base language 
model. 
³Zero would also be a reasonable choice, since it in-
dicates zero correlation between q and r. However, it 
would then not be clear how to average in the estimates 
of negatively correlated words in equation (1). 
3 Empirical Comparison 
We evaluated the similarity functions intro- 
duced in the previous section on a binary decision task, using the same experimental frame-
work as in our previous preliminary compari- 
son (Dagan et al., 1999). That is, the data 
consisted of the verb-object cooccurrence pairs 
in the 1988 Associated Press newswire involv- 
ing the 1000 most frequent nouns, extracted 
via Church's (1988) and Yarowsky's process- 
ing tools. 587,833 (80%) of the pairs served 
as a training set from which to calculate base 
probabilities. From the other 20%, we pre- 
pared test sets as follows: after discarding pairs 
occurring in the training data (after all, the 
point of similarity-based estimation is to deal 
with unseen pairs), we split the remaining pairs 
into five partitions, and replaced each noun- 
verb pair (n, v1) with a noun-verb-verb triple (n, v1, v2) such that P(v2) ≈ P(v1). The task
for the language model under evaluation was 
to reconstruct which of (n, vl) and (n, v2) was 
the original cooccurrence. Note that by con- 
struction, (n, v1) was always the correct answer,
and furthermore, methods relying solely on uni- 
gram frequencies would perform no better than 
chance. Test-set performance was measured by 
the error rate, defined as

$$\frac{1}{T}\left( \#\ \text{of incorrect choices} + \frac{\#\ \text{of ties}}{2} \right),$$
where T is the number of test triple tokens in 
the set, and a tie results when both alternatives 
are deemed equally likely by the language model 
in question. 
To perform the evaluation, we incorporated 
each similarity function into a decision rule as 
follows. For a given similarity measure f and 
neighborhood size k, let S_{f,k}(n) denote the k most similar words to n according to f. We define the evidence according to f for the cooccurrence (n, v_i) as

$$E_{f,k}(n, v_i) = \big|\{\, m \in S_{f,k}(n) : P(v_i \mid m) > P(v_{3-i} \mid m) \,\}\big|.$$

Then, the decision rule was to choose the alternative with the greater evidence.
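A sketch of this decision rule (illustrative names; a tie is returned as None and scored as half an error, per the error-rate definition above):

    def evidence(neighbors, cond, v_this, v_other):
        """E_{f,k}(n, v_i): how many of the k nearest neighbors
        assign v_this a higher probability than v_other."""
        return sum(1 for m in neighbors
                   if cond[m].get(v_this, 0.0) > cond[m].get(v_other, 0.0))

    def decide(neighbors, cond, v1, v2):
        e1 = evidence(neighbors, cond, v1, v2)
        e2 = evidence(neighbors, cond, v2, v1)
        if e1 == e2:
            return None          # a tie
        return v1 if e1 > e2 else v2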
The reason we used a restricted version of the 
distance-weighted averaging model was that we 
sought to discover fundamental differences in 
behavior. Because we have a binary decision 
task, E_{f,k}(n, v_1) simply counts the number of k
nearest neighbors to n that make the right de- 
cision. If we have two functions f and g such 
that E_{f,k}(n, v_1) > E_{g,k}(n, v_1), then the k most
similar words according to f are on the whole 
better predictors than the k most similar words 
according to g; hence, f induces an inherently 
better similarity ranking for distance-weighted 
averaging. The difficulty with using the full 
model (Equation (1)) for comparison purposes 
is that fundamental differences can be obscured 
by issues of weighting. For example, suppose 
the probability estimate $\sum_m (2 - L_1(q,r)) \cdot r(v)$
(suitably normalized) performed poorly. We 
would not be able to tell whether the cause 
was an inherent deficiency in the L1 norm or 
just a poor choice of weight function -- per- 
haps $(2 - L_1(q,r))^2$ would have yielded better
estimates. 
Figure 2 shows how the average error rate 
varies with k for the seven similarity metrics 
introduced above. As previously mentioned, a 
steeper slope indicates a better similarity rank- 
ing. 
All the curves have a generally upward trend 
but always lie far below backoff (51% error 
rate). They meet at k = 1000 because S_{f,1000}(n)
is always the set of all nouns. We see that the 
functions fall into four groups: (1) the L2 norm; 
(2) Kendall's T; (3) the confusion probability 
and the cosine metric; and (4) the L1 norm, 
Jensen-Shannon divergence, and Jaccard's co- 
efficient. 
We can account for the similar performance 
of various metrics by analyzing how they incor- 
porate information from the intersection of the 
supports of q and r. (Recall that we are using 
q and r for the conditional verb cooccurrence
probabilities of two nouns n and m.) Consider 
the following supports (illustrated in Figure 3): 
$$V_q = \{v \in V : q(v) > 0\}, \qquad V_r = \{v \in V : r(v) > 0\}, \qquad V_{qr} = \{v \in V : q(v)\,r(v) > 0\} = V_q \cap V_r$$
We can rewrite the similarity functions from 
Section 2 in terms of these sets, making use of 
the identities $\sum_{v \in V_q \setminus V_{qr}} q(v) + \sum_{v \in V_{qr}} q(v) = \sum_{v \in V_r \setminus V_{qr}} r(v) + \sum_{v \in V_{qr}} r(v) = 1$. Table 1 lists
these alternative forms in order of performance. 
[Figure 2: Similarity metric performance ("Error rates (averages and ranges)"): average error rate as a function of k, for k from 100 to 1000. Errorbars denote the range of error rates over the five test sets. Backoff's average error rate was 51%.]
$L_2(q,r) \;=\; \sqrt{\sum_{v \in V_q} q(v)^2 \;-\; 2 \sum_{v \in V_{qr}} q(v)\,r(v) \;+\; \sum_{v \in V_r} r(v)^2}$

$\tau(q,r) \;=\; \dfrac{1}{2\binom{|V|}{2}} \bigg( 2\,|V_{qr}|\;|V \setminus (V_q \cup V_r)| \;-\; 2\,|V_q \setminus V_{qr}|\;|V_r \setminus V_{qr}| \;+\; \sum_{v_1 \in V_q \triangle V_r} \sum_{v_2 \in V_{qr}} \mathrm{sign}\big[(q(v_1) - q(v_2))(r(v_1) - r(v_2))\big] \;+\; \sum_{v_1 \in V_{qr}} \sum_{v_2 \in V_q \cup V_r} \mathrm{sign}\big[(q(v_1) - q(v_2))(r(v_1) - r(v_2))\big] \bigg)$

$\mathrm{conf}(q,r,P(m)) \;=\; P(m) \sum_{v \in V_{qr}} q(v)\,r(v)/P(v)$

$\cos(q,r) \;=\; \sum_{v \in V_{qr}} q(v)\,r(v) \Big( \sum_{v \in V_q} q(v)^2 \sum_{v \in V_r} r(v)^2 \Big)^{-1/2}$

$L_1(q,r) \;=\; 2 + \sum_{v \in V_{qr}} \big( |q(v) - r(v)| - q(v) - r(v) \big)$

$JS(q,r) \;=\; \log 2 + \frac{1}{2} \sum_{v \in V_{qr}} \big( h(q(v) + r(v)) - h(q(v)) - h(r(v)) \big), \qquad h(x) = -x \log x$

$\mathrm{Jac}(q,r) \;=\; |V_{qr}| \,/\, |V_q \cup V_r|$

Table 1: Similarity functions, written in terms of sums over supports and grouped by average performance. \ denotes set difference; △ denotes symmetric set difference.
We see that for the non-combinatorial functions, 
the groups correspond to the degree to which 
the measures rely on the verbs in Vqr. The
Jensen-Shannon divergence and the L1 norm 
can be computed simply by knowing the val- 
ues of q and r on Vqr. For the cosine and the 
confusion probability, the distribution values on 
Vqr are key, but other information is also incor- 
porated. The statistic τ_a takes into account all
verbs, including those that occur neither with 
[Figure 3: Supports on V]
n nor m. Finally, the Euclidean distance is 
quadratic in verbs outside Vqr; indeed, Kaufman
and Rousseeuw (1990) note that it is "extremely 
sensitive to the effect of one or more outliers" 
(pg. 117). 
The superior performance of Jac(q, r) seems 
to underscore the importance of the set Vqr. 
Jaccard's coefficient ignores the values of q and 
r on Vqr; but we see that simply knowing the 
size of Vqr relative to the supports of q and r 
leads to good rankings. 
4 The Skew Divergence 
Based on the results just described, it appears 
that it is desirable to have a similarity func- 
tion that focuses on the verbs that cooccur with 
both of the nouns being compared. However, 
we can make a further observation: with the 
exception of the confusion probability, all the 
functions we compared are symmetric, that is, 
f(q, r) = f(r, q). But the substitutability of one word for another need not be symmetric. For
instance, "fruit" may be the best possible ap- 
proximation to "apple", but the distribution of 
"apple" may not be a suitable proxy for the dis- 
tribution of "fruit".⁴
In accordance with this insight, we developed 
a novel asymmetric generalization of the KL di- 
vergence, the α-skew divergence:
$$s_\alpha(q, r) = D\big(r \,\big\|\, \alpha q + (1 - \alpha) r\big)$$

for 0 ≤ α ≤ 1. It can easily be shown that s_α depends only on the verbs in Vqr. Note that at α = 1, the skew divergence is exactly the KL divergence, and s_{1/2} is twice one of the summands of JS (note that it is still asymmetric).
⁴On a related note, an anonymous reviewer cited the following example from the psychology literature: we can say Smith's lecture is like a sleeping pill, but "not the other way round".
We can think of α as a degree of confidence in the empirical distribution q; or, equivalently, (1 - α) can be thought of as controlling the amount by which one smooths q by r. Thus,
we can view the skew divergence as an approx- 
imation to the KL divergence to be used when 
sparse data problems would cause the latter 
measure to be undefined. 
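A sketch of the skew divergence, under the same dict-based conventions as the earlier snippets:

    import math

    def skew(q, r, alpha=0.99):
        """s_alpha(q, r) = D(r || alpha*q + (1-alpha)*r).  For
        alpha < 1 the mixture is positive wherever r is, so the
        sum is always defined."""
        total = 0.0
        for v, rv in r.items():
            if rv > 0:
                mix = alpha * q.get(v, 0.0) + (1 - alpha) * rv
                total += rv * math.log(rv / mix)
        return total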
Figure 4 shows the performance of s_α for α = .99. It performs better than all the other
functions; the difference with respect to Jac- 
card's coefficient is statistically significant, ac- 
cording to the paired t-test, at all k (except 
k = 1000), with significance level .01 at all k 
except 100, 400, and 1000. 
5 Discussion 
In this paper, we empirically evaluated a num- 
ber of distributional similarity measures, includ- 
ing the skew divergence, and analyzed their in- 
formation sources. We observed that the ability 
of a similarity function f(q, r) to select useful 
nearest neighbors appears to be correlated with 
its focus on the intersection Vqr of the supports 
of q and r. This is of interest from a computa- 
tional point of view because Vqr tends to be a 
relatively small subset of V, the set of all verbs. 
Furthermore, it suggests downplaying the role of 
negative information, which is encoded by verbs 
appearing with exactly one noun, although the 
Jaccard coefficient does take this type of infor- 
mation into account. 
Our explicit division of V-space into vari- 
ous support regions has been implicitly con- 
sidered in other work. Smadja et al. (1996) 
observe that for two potential mutual transla- 
tions X and Y, the fact that X occurs with 
translation Y indicates association; X's occur- 
ring with a translation other than Y decreases 
one's belief in their association; but the absence 
of both X and Y yields no information. In 
essence, Smadja et al. argue that information 
from the union of supports, rather than just
the intersection, is important. D. Lin (1997; 
1998a) takes an axiomatic approach to deter- 
mining the characteristics of a good similarity 
measure. Starting with a formalization (based 
on certain assumptions) of the intuition that the 
similarity between two events depends on both 
their commonality and their differences, he de- 
rives a unique similarity function schema. The 
[Figure 4: Performance of the skew divergence with respect to the best functions from Figure 2 ("Error rates (averages and ranges)"): average error rate as a function of k, for k from 100 to 1000; the visible curve labels include L1 and JS.]
definition of commonality is left to the user (sev- 
eral different definitions are proposed for differ- 
ent tasks). 
We view the empirical approach taken in this 
paper as complementary to Lin's. That is, we 
are working in the context of a particular appli- 
cation, and, while we have no mathematical cer- 
tainty of the importance of the "common sup- 
port" information, we did not assume it a priori; 
rather, we let the performance data guide our 
thinking. 
Finally, we observe that the skew metric 
seems quite promising. We conjecture that ap- 
propriate values for α may inversely correspond
to the degree of sparseness in the data, and 
intend in the future to test this conjecture on 
larger-scale prediction tasks. We also plan to 
evaluate skewed versions of the Jensen-Shannon 
divergence proposed by Rao (1982) and J. Lin 
(1991). 
6 Acknowledgements 
Thanks to Claire Cardie, Jon Kleinberg, Fer- 
nando Pereira, and Stuart Shieber for helpful 
discussions, the anonymous reviewers for their 
insightful comments, Fernando Pereira for ac- 
cess to computational resources at AT&T, and 
Stuart Shieber for the opportunity to pursue 
this work at Harvard University under NSF 
Grant No. IRI9712068. 

References 
Claire Cardie. 1993. A case-based approach 
to knowledge acquisition for domain-specific 
sentence analysis. In 11th National Confer- 
ence on Artificial Intelligence, pages 798-803.
Kenneth Ward Church and Patrick Hanks. 
1990. Word association norms, mutual in- 
formation, and lexicography. Computational 
Linguistics, 16(1):22-29. 
Kenneth W. Church. 1988. A stochastic parts 
program and noun phrase parser for un- 
restricted text. In Second Conference on 
Applied Natural Language Processing, pages 
136-143. 
Thomas M. Cover and Joy A. Thomas. 1991. 
Elements of Information Theory. John Wiley. 
Ido Dagan, Shaul Marcus, and Shaul Marko-
vitch. 1995. Contextual word similarity 
and estimation from sparse data. Computer 
Speech and Language, 9:123-152. 
Ido Dagan, Lillian Lee, and Fernando Pereira. 
1999. Similarity-based models of cooccur- 
rence probabilities. Machine Learning, 34(1- 
3) :43-69. 
Ute Essen and Volker Steinbiss. 1992. Co- 
occurrence smoothing for stochastic language 
modeling. In ICASSP 92, volume 1, pages 
161-164. 
Jean Dickinson Gibbons. 1993. Nonparametric 
Measures of Association. Sage University Pa- 
per series on Quantitative Applications in the 
Social Sciences, 07-091. Sage Publications. 
Ralph Grishman and John Sterling. 1993. 
Smoothing of automatically generated selec- 
tional constraints. In Human Language Tech- 
nology: Proceedings of the ARPA Workshop, 
pages 254-259. 
Vasileios Hatzivassiloglou and Kathleen McKe- 
own. 1993. Towards the automatic identifica- 
tion of adjectival scales: Clustering of adjec- 
tives according to meaning. In 31st Annual 
Meeting of the ACL, pages 172-182. 
Vasileios Hatzivassiloglou. 1996. Do we need 
linguistics when we have statistics? A com- 
parative analysis of the contributions of lin- 
guistic cues to a statistical word grouping 
system. In Judith L. Klavans and Philip 
Resnik, editors, The Balancing Act, pages 67- 
94. MIT Press. 
Don Hindle. 1990. Noun classification from 
predicate-argument structures. In 28th An- 
nual Meeting of the ACL, pages 268-275.
Frederick Jelinek and Robert L. Mercer. 1980. 
Interpolated estimation of Markov source pa- 
rameters from sparse data. In Proceedings 
of the Workshop on Pattern Recognition in 
Practice. 
William P. Jones and George W. Furnas. 
1987. Pictures of relevance. Journal of the 
American Society for Information Science, 
38(6):420-442. 
Yael Karov and Shimon Edelman. 1998. 
Similarity-based word sense disambiguation. 
Computational Linguistics, 24(1):41-59. 
Slava M. Katz. 1987. Estimation of probabili- 
ties from sparse data for the language model 
component of a speech recognizer. IEEE 
Transactions on Acoustics, Speech and Signal 
Processing, ASSP-35(3):400-401, March.
Leonard Kaufman and Peter J. Rousseeuw.
1990. Finding Groups in Data: An Intro- 
duction to Cluster Analysis. John Wiley and 
Sons. 
Lillian Lee. 1997. Similarity-Based Approaches 
to Natural Language Processing. Ph.D. the- 
sis, Harvard University. 
Dekang Lin. 1997. Using syntactic dependency 
as local context to resolve word sense ambi- 
guity. In 35th Annual Meeting of the ACL, 
pages 64-71. 
Dekang Lin. 1998a. Automatic retrieval and 
clustering of similar words. In COLING-ACL
'98, pages 768-773. 
Dekang Lin. 1998b. An information theoretic 
definition of similarity. In Machine Learn- 
ing: Proceedings of the Fifteenth Interna-
tional Conference (ICML '98). 
Jianhua Lin. 1991. Divergence measures based 
on the Shannon entropy. IEEE Transactions 
on Information Theory, 37(1):145-151. 
Alpha K. Luk. 1995. Statistical sense disam- 
biguation with relatively small corpora using 
dictionary definitions. In 33rd Annual Meet- 
ing of the ACL, pages 181-188. 
Hwee Tou Ng and Hian Beng Lee. 1996. Inte- 
grating multiple knowledge sources to disam- 
biguate word sense: An exemplar-based ap- 
proach. In 34th Annual Meeting of the ACL,
pages 40-47.
Hwee Tou Ng. 1997. Exemplar-based word 
sense disambiguation: Some recent improve- 
ments. In Second Conference on Empiri- 
cal Methods in Natural Language Processing 
(EMNLP-2), pages 208-213. 
C. Radhakrishna Rao. 1982. Diversity: Its 
measurement, decomposition, apportionment 
and analysis. Sankhyā: The Indian Journal
of Statistics, 44(A):1-22. 
Philip Resnik. 1995. Using information content 
to evaluate semantic similarity in a taxonomy. 
In Proceedings of IJCAI-95, pages 448-453. 
Gerard Salton and Michael J. McGill. 1983. In- 
troduction to Modern Information Retrieval. 
McGraw-Hill. 
Frank Smadja, Kathleen R. McKeown, and 
Vasileios Hatzivassiloglou. 1996. Translat- 
ing collocations for bilingual lexicons: A sta- 
tistical approach. Computational Linguistics, 
22(1):1-38. 
Craig Stanfill and David Waltz. 1986. To- 
ward memory-based reasoning. Communica- 
tions of the ACM, 29(12):1213-1228. 
K. Sugawara, M. Nishimura, K. Toshioka, 
M. Okochi, and T. Kaneko. 1985. Isolated 
word recognition using hidden Markov mod- 
els. In ICASSP 85, pages 1-4. 
C. J. van Rijsbergen. 1979. Information Re- 
trieval. Butterworths, second edition. 
Jakub Zavrel and Walter Daelemans. 1997. 
Memory-based learning: Using similarity for 
smoothing. In 35th Annual Meeting of the 
ACL, pages 436-443.
