<?xml version="1.0" standalone="yes"?>
<Paper uid="W97-1102">
  <Title>Measuring Dialect Distance Phonetically</Title>
  <Section position="4" start_page="0" end_page="0" type="metho">
    <SectionTitle>
2 Background
</SectionTitle>
    <Paragraph position="0"> In the interest of space we omit an introduction to Levenshtein distance, referring to (Kruskal, 1983).</Paragraph>
    <Paragraph position="1"> It may be understood as the cost of (the least costly set of) operations mapping from one string to another. The basic costs are those of (single-phone) insertions and deletions, each of which costs half that of substitutions. Nerbonne et al. (1996) explain its use in the present application at some length. The various modifications below all tinker with the cost of substituting one phone for another.</Paragraph>
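As a concrete illustration of the cost scheme just described, a minimal Python sketch of the weighted Levenshtein distance follows; the function name and the particular cost values (substitution 2, indel 1, so that an indel costs half a substitution) are our choices for illustration, not taken from the paper's implementation.

```python
def levenshtein(a, b, sub_cost=2.0, indel_cost=1.0):
    """Weighted Levenshtein distance between two phone strings.

    A substitution is treated as an insertion plus a deletion,
    so it costs twice as much as a single indel.
    """
    m, n = len(a), len(b)
    # d[i][j] holds the cheapest cost of mapping a[:i] onto b[:j].
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * indel_cost
    for j in range(1, n + 1):
        d[0][j] = j * indel_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0.0 if a[i - 1] == b[j - 1] else sub_cost
            d[i][j] = min(d[i - 1][j] + indel_cost,   # deletion
                          d[i][j - 1] + indel_cost,   # insertion
                          d[i - 1][j - 1] + sub)      # match/substitution
    return d[m][n]
```

For example, 'pater' and 'vader' align with two substitutions and three matches, giving a distance of 4.0 under these costs.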
    <Paragraph position="2"> Kessler (1995) experimented with making the measure more sensitive, but found little progress in using features, for example. The present paper experiments systematically with several variations on the basic Levenshtein theme.</Paragraph>
    <Paragraph position="3"> The overall scheme is as follows: a definition of phonetic difference is applied to 101 pairs of words from forty different Dutch dialect areas. All of the pronunciations are taken from the standard dialect atlas (Blancquaert et al., 1925-1982; hence: RND, Reeks Nederlandse Dialectatlassen). After some normalization, this results in an AVERAGE PHONETIC difference for those dialects: a 40 x 40 matrix of differences in total (of which one half is redundant due to the symmetry of distance: dist(a, b) = dist(b, a)). This distance matrix is compared to existing accounts of the dialects in question, especially the most recent systematic account, (Daan and Blok, 1969).</Paragraph>
    <Paragraph position="4"> A visualization tool normally identifies very deviant results; see Fig. 1. Finally, the distance matrix is subjected to a heuristic clustering algorithm as a further indication of quality. 2</Paragraph>
  </Section>
  <Section position="5" start_page="0" end_page="0" type="metho">
    <SectionTitle>
3 Refinements for Dialectology
</SectionTitle>
    <Paragraph position="0"> The dialects are compared on the basis of the words for 101 items, so the total distance between two dialects is the sum of 101 Levenshtein distances.</Paragraph>
    <Paragraph position="1"> If we simply used the Levenshtein distance, it would tend to bias measurements: changes in longer words would contribute more toward the average phonetic distance, since longer words tend to involve more changes. This may be legitimate, but since words are a crucial linguistic unit we chose to stick to average word distance. This involves computing a 'relative distance', which we obtain by dividing the absolute distance by the length of the larger word. We also considered using the average length of the two words being compared, which makes little difference where both words are present.</Paragraph>
    <Paragraph position="2"> Missing words pose a problem as does lexical replacement. We wished to handle these consistently (to obtain a consistent measure of distance), even recognizing the danger of conflating phonetic and lexical effects. Throughout this paper we do conflate the two, reasoning that this is the lesser of two evils--the other of which is deciding when massive phonetic modification amounts to lexical difference.</Paragraph>
    <Paragraph position="3"> Naturally no difference is recorded where a word is missing in both dialects. If only one dialect is missing the word, the difference at that point is just length x insertion-cost, but normalization divides this by the length again, yielding just the cost of insertion. This is a point at which the decision 2The choice of clustering technique is important, but is not the focus of the present paper. The methods here were compared using Ward's method, a variant of hierarchical agglomerative clustering which minimizes squared error. See (Jain and Dubes, 1988) for clustering techniques.</Paragraph>
    <Paragraph position="4"> noted above, to obtain relative distance via Levenshtein distance divided by the longer length, is important. Recall the alternative mentioned there, that of relativizing to the average length. This would double the distance measured in cases where words are missing, biasing the overall distance toward dialects with less lexical overlap. This seemed excessive.</Paragraph>
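The normalization decision can be made concrete in a short sketch (the function name is ours; costs follow the symbol-based scheme, substitution 2 and indel 1). Note how a word missing in one dialect, modeled as the empty string, yields exactly the indel cost after division by the longer length:

```python
from functools import lru_cache

def relative_distance(a, b, sub_cost=2.0, indel_cost=1.0):
    """Levenshtein distance divided by the length of the longer word."""
    @lru_cache(maxsize=None)
    def d(i, j):
        # Cost of mapping a[:i] onto b[:j].
        if i == 0:
            return j * indel_cost
        if j == 0:
            return i * indel_cost
        sub = 0.0 if a[i - 1] == b[j - 1] else sub_cost
        return min(d(i - 1, j) + indel_cost,
                   d(i, j - 1) + indel_cost,
                   d(i - 1, j - 1) + sub)
    longer = max(len(a), len(b))
    return d(len(a), len(b)) / longer if longer else 0.0
```

Here relative_distance('pater', '') is 1.0, the bare indel cost, rather than the doubled value that relativizing to the average length would give.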
    <Paragraph position="5"> Similarly, for some items two words are possible. If dialect 1 has word1a and word1b, and dialect 2 has word2, we calculate the distance by averaging distance(word1a, word2) and distance(word1b, word2). If both dialect 1 and dialect 2 have multiple variants, we average over all pairs of distances. Although we experimented with variable costs for substitutions, depending on whether their base segments or diacritics differ, we could not settle on a natural weighting, and further reasoned that a feature-based cost differential should systematize what the transcription-based differential intended.</Paragraph>
    <Paragraph position="6"> This is resumed below.</Paragraph>
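The averaging over multiple variants described above can be sketched as follows; `word_dist` stands in for whatever word-level distance is in use, and the function name is ours:

```python
from itertools import product
from statistics import mean

def item_distance(variants1, variants2, word_dist):
    """Average the word distance over all pairs of variants
    recorded for one item in two dialects."""
    return mean(word_dist(a, b) for a, b in product(variants1, variants2))
```

With a single variant on each side this reduces to the plain word distance.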
    <Paragraph position="7"> Dutch has a rich system of diphthongs which, moreover, have been argued to be phonologically bisegmental (Moulton, 1962). We therefore experimented both with single-phone and two-phone diphthongal representations. It turned out that the representations with two phones were superior (for the purposes of showing dialectal relatedness). 3</Paragraph>
    <Section position="1" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
3.1 Feature Vectors
</SectionTitle>
      <Paragraph position="0"> If we compare dialects on the basis of phonetic symbols, it is not possible to take into account the affinity between sounds that are not identical but are still related. Methods based on phonetic symbols do not show that 'pater' and 'vader' are more kindred than 'pater' and 'maler'. This problem can be solved by replacing each phonetic symbol with a vector of features. Each feature can be regarded as a phonetic property which can be used for classifying sounds.</Paragraph>
      <Paragraph position="1"> A feature vector contains for each feature a value which indicates to what extent the property is instantiated. Since diacritics influence feature values, they likewise figure in the mapping from transcriptions to feature vectors, and thus automatically figure in calculations of phonetic distance.</Paragraph>
      <Paragraph position="2"> In our experiment, we used the feature vectors developed by Vieregge, Rietveld and Jansen (1984); we earlier used the SPE features as these were modified for dialectology use by Hoppenbrouwers and Hoppenbrouwers (1988), but obtained distinctly poorer results in spite of the larger number of features. Vieregge et al. make use of 14 features \[longer discussion of Vieregge's system as well as the translation of transcriptions in the RND in full version of paper\].</Paragraph>
      <Paragraph position="3"> 3 It would be rash to argue from this to any phonological conclusion about the diphthongs. The two-phone representation makes it easier to measure related pronunciations, and this is probably why it suits present purposes better.</Paragraph>
      <Paragraph position="4"> We compare three methods for measuring phonetic distance. The first is MANHATTAN DISTANCE (also called &amp;quot;taxicab distance&amp;quot; or &amp;quot;city block&amp;quot; distance). This is simply the sum of the absolute differences in feature values across the 14 features in the vector: delta(X,Y) = sum_i |x_i - y_i|.</Paragraph>
      <Paragraph position="6"> Second, we tried EUCLIDEAN DISTANCE. As usual, this is the square root of the sum of squared differences in feature values: delta(X,Y) = sqrt(sum_i (x_i - y_i)^2). Third, we examined the Pearson correlation coefficient, r. To interpret this as a distance we used 1 - r, where r is the usual (1/n) sum_i ((x_i - xbar)/s_x)((y_i - ybar)/s_y).</Paragraph>
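The three vector distances can be sketched directly in Python (function names ours; the correlation version divides by population standard deviations and is undefined for constant vectors):

```python
import math
from statistics import mean, pstdev

def manhattan(x, y):
    # Sum of absolute feature-value differences.
    return sum(abs(a - b) for a, b in zip(x, y))

def euclidean(x, y):
    # Square root of the sum of squared differences.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def correlation_distance(x, y):
    # 1 - r, with r the Pearson correlation of the two vectors.
    mx, my = mean(x), mean(y)
    r = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) * pstdev(x) * pstdev(y))
    return 1 - r
```

Identical vectors give a correlation distance of 0; perfectly anti-correlated vectors give 2.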
      <Paragraph position="7"> In the Levenshtein algorithm based on symbols, three operations were used: 'substitution', 'insertion' and 'deletion'. A substitution was regarded as a combination of an insertion and a deletion, so substitutions counted two, and &amp;quot;indels&amp;quot; one. When we compare vectors instead of phonetic symbols, the value for a substitution is no longer fixed, but varies between two extremes. For indels, however, we have to choose a fixed value as well. This value was estimated by calculating the average of the values of all substitutions which take place in the comparison process, and dividing this average by 2.</Paragraph>
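The fixed indel value for the vector-based scheme is then just half the mean observed substitution cost; a one-line sketch (function name ours):

```python
def estimated_indel_cost(substitution_costs):
    """Half the average of all substitution costs observed while
    comparing two dialects."""
    return sum(substitution_costs) / len(substitution_costs) / 2.0
```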
    </Section>
    <Section position="2" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
3.2 Information-Gain Weighting
</SectionTitle>
      <Paragraph position="0"> Not all features are equally important in classifying the sounds used in the dialects. For example, it turned out that no positive value for the feature \[flap\] occurred in any of the words in the dialects examined. We therefore experimented with weighting each feature by its information gain, a number expressing the average entropy reduction a feature represents when its value is known (Quinlan, 1993; Daelemans et al., 1996).</Paragraph>
      <Paragraph position="1"> To calculate this we need a base figure for database entropy: H(D) = - sum_i p_i log2(p_i).</Paragraph>
      <Paragraph position="3"> If we have n different vectors over all the dialects, then 1 &lt;= i &lt;= n. p_i is the probability of vector i, estimated by its frequency divided by |D|, the total number of vectors in all dialects.</Paragraph>
      <Paragraph position="4"> Second we calculate the average entropy for each feature: H(D\[f\]) = sum over v_i in V of (|D\[f=v_i\]| / |D|) * H(D\[f=v_i\]). |D\[f=v_i\]| is the number of vectors that have value v_i for feature f. V is the set of possible values for feature f. H(D\[f=v_i\]) is the remaining entropy of all vectors in the database that have value v_i for feature f. It is calculated using the first formula, where the i's now range only over the vectors that have value v_i for feature f.</Paragraph>
      <Paragraph position="5"> Finally we can calculate the information gain associated with a feature: IG(f) = H(D) - H(D\[f\]).</Paragraph>
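Under the definitions above, entropy and information gain can be sketched for a database of feature vectors, represented here as tuples (function names ours):

```python
import math
from collections import Counter

def entropy(vectors):
    """H(D) = -sum_i p_i log2(p_i), with p_i estimated by the relative
    frequency of each distinct vector."""
    counts = Counter(vectors)
    total = len(vectors)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(vectors, f):
    """IG(f) = H(D) - H(D[f]): database entropy minus the
    frequency-weighted entropy of the subsets sharing each value
    of feature f."""
    total = len(vectors)
    avg = 0.0
    for v in {vec[f] for vec in vectors}:
        subset = [vec for vec in vectors if vec[f] == v]
        avg += len(subset) / total * entropy(subset)
    return entropy(vectors) - avg
```

A feature that never varies, like the \[flap\] example above, gets an information gain of zero and so contributes nothing under weighting.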
      <Paragraph position="7"> If we then compare two vectors using Manhattan distance, the weighted difference between two vectors X and Y becomes delta(X,Y) = sum_i IG(f_i) * |x_i - y_i|.</Paragraph>
      <Paragraph position="9"> And similarly for Euclidean distance and &amp;quot;inverse correlation&amp;quot;.</Paragraph>
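A sketch of the weighted comparison (function name ours), scaling each feature difference by that feature's information gain:

```python
def weighted_manhattan(x, y, gains):
    """Manhattan distance with each feature's difference weighted
    by its information gain."""
    return sum(g * abs(a - b) for g, a, b in zip(gains, x, y))
```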
      <Paragraph position="10"> We have recently become aware of the work of Broe (1996), which criticizes the simple application of entropy measures to feature systems in which some features are only partially defined. Such phonological features clearly exist: e.g., \[lateral\] and \[strident\] apply only to consonants and not to vowels. Broe furthermore develops a generalization of entropy sensitive to these cases. This is an area of current interest.</Paragraph>
    </Section>
  </Section>
  <Section position="6" start_page="0" end_page="0" type="metho">
    <SectionTitle>
4 Experiments
</SectionTitle>
    <Paragraph position="0"> The dialect varieties were chosen to contain &amp;quot;easy&amp;quot; cases as well as difficult ones. Frisian is accepted as rather more distinct from the other areas, and eight Frisian varieties are represented in the wish to see quickly whether the distance metrics could distinguish these. The full list of variants may be seen in Fig. 1.</Paragraph>
  </Section>
  <Section position="7" start_page="0" end_page="15" type="metho">
    <SectionTitle>
5 Results
</SectionTitle>
    <Paragraph position="0"> We compared a total of 14 methods, shown in Table 1. While none of these performed very poorly, several tendencies emerge.</Paragraph>
    <Paragraph position="1"> Two-phone representations of diphthongs outperform single-phone representations. Unweighted representations outperform representations to which weightings were added; this is surprising.</Paragraph>
    <Paragraph position="2"> Manhattan distance narrowly outperforms &amp;quot;correlation&amp;quot;, which narrowly outperforms Euclidean distance.</Paragraph>
    <Paragraph position="3"> The top performer (3) used features in place of discrete segments, no information-gain weighting, Manhattan (city-block) distance, and a two-segment representation of diphthongs. Thus, method (3) was best.</Paragraph>
    <Paragraph position="4"> The superiority is seen in the degree to which the distance matrices and resulting dendrograms match those of expert dialectologists, in particular (Daan and Blok, 1969). 4 We did not apply a measure of the degree of coincidence between the experts' division into dialect groups and the grouping induced by the Levenshtein distance metric. Instead, we compared the dendrogram to the dialect map and checked for congruence. Some of the results accord better with expert opinion than others. For example, dialectologists generally locate Delft as closer to Haarlem and Schagen than to Oosterhout, Dussen and Gemert. The better distance measures do this as well, but several of the weighted measures do not. The weighted measures and the unweighted correlation-based measures similarly failed to recognize the coastal (western) Flemish sub-group (Westvlaams or Zeeuwsvlaams), represented in our data set by Alveringem, Damme, Lamswaarde, and Renesse.</Paragraph>
    <Paragraph position="5"> Daan's work is accompanied by a map that also appears in the Atlas of the Netherlands, as Plate 4It should be noted that Daan and Blok (1969) incorporate native speakers' subjective judgements of dialect distance in their assessment (their &amp;quot;arrow method&amp;quot;). But their final partition of dialects into different groups is well-accepted.</Paragraph>
    <Paragraph position="6"> X-2. 5 It divides the Dutch area into 28 areas of roughly comparable dialect regions. Furthermore, it uses colortones to denote relative distance from standard Dutch. This information can be used to further calibrate the methods here. First, the relative distance from standard Dutch (given in colortones) can be translated to predictions about relative phonetic distance. For example, Twents is shaded dark green (and is represented in our data set by the dialect spoken in Almelo), while Veluws is shaded light green (and is represented by Soest and Putten). There is an intermediate dialect, Gelders-Overijssels, shaded an intermediate green and represented by Ommen, Wijhe and Spankeren. These relative distances (to ABN, represented in our data set by Haarlem and Delft) should be reflected in Levenshtein distance, and we can test the prediction by how accurate the reflection is. This method of testing has the large advantage that it tests only Levenshtein distance without involving the added level of clustering.</Paragraph>
    <Paragraph position="7"> A second method of using the dialect map to calibrate the Levenshtein metric is to use the 28 various dialect regions as predictions of &amp;quot;minimal distance&amp;quot;. Here we can compare the map most simply to the dendrogram. In the present work, it may be noted that the Frisian dialects and the dialect of Groningen-North Drenthe are indeed identified as groups (by the Levenshtein method combined with minimal-error clustering). It is more difficult to use the dialect map in this way without using the dendrogram as well. In particular, it is not clear how the borders on the dialect map are to be interpreted.</Paragraph>
    <Paragraph position="8"> Keeping in mind the &amp;quot;continuum&amp;quot; metaphor noted in Sec. 1, the borders cannot be interpreted as marking partitions of minimal distance. That is, it will not be the case that each pair of elements in a given cluster is closer to each other than to any elements outside.</Paragraph>
    <Paragraph position="9"> An interesting fact is that while no very close correlation is expected between dialectal distance and geographical distance, still the better techniques generally correlated higher with geographic distance than did the poorer techniques (at approximately r = 0.72).</Paragraph>
    <Paragraph position="10"> We conclude that the present methods perform well, and we discuss opportunities for more definitive testing and further development in the following section.</Paragraph>
  </Section>
  <Section position="8" start_page="15" end_page="15" type="metho">
    <SectionTitle>
6 Future Directions
</SectionTitle>
    <Paragraph position="0"> We should like to extend this work in several directions. * We should like to find a way to measure the success of a given distance metric. This should reflect the degree to which it coincides with expert opinion (which is necessarily rougher). See Sec. 5.</Paragraph>
    <Paragraph position="1"> * An examination of grouping methods is desirable. * The present method averages 101 word distances to arrive at a notion of dialect difference. It would be interesting to experiment directly with the 101-dimensional vector, standardized to reflect the distance to standard Dutch (algemeen beschaafd Nederlands, ABN), and using, e.g., the cosine cos(x, y) as a distance measure (on vectors whose individual cells represent Levenshtein distances from ABN pronunciations). * For more definitive results, the method should be tested on material for which it has NOT been calibrated, ideally a large database of dialectal material.</Paragraph>
    <Paragraph position="2"> * Finally, it would be interesting to apply the technique to problems involving the influence of external factors on language variation, such as migration, change in political boundaries, or cultural innovation.</Paragraph>
  </Section>
</Paper>