<?xml version="1.0" standalone="yes"?>
<Paper uid="P93-1034">
  <Title>PART-OF-SPEECH INDUCTION FROM SCRATCH</Title>
  <Section position="4" start_page="0" end_page="254" type="metho">
    <SectionTitle>
CATEGORY SPACE
</SectionTitle>
    <Paragraph position="0"> The goal of the first step of the induction is to compute a multidimensional real-valued space, called category space, in which the syntactic category of each word is represented by a vector. Proximity in the space is related to similarity of syntactic category. The vectors in this space will then be used as input and target vectors for the connectionist net.</Paragraph>
    <Paragraph position="1"> The vector space is bootstrapped by collecting relevant distributional information about words.</Paragraph>
    <Paragraph position="2"> The 5,000 most frequent words in five months of the New York Times News Service (June through  October 1990) were selected for the experiments. For each pair of these words &lt; wi, w i &gt;, the number of occurrences of wi immediately to the left of wj (hi,j), the number of occurrences of wi immediately to the right ofwj (cij), the number of occurrences of wl at a distance of one word to the left of wj (ai,j), and the number of occurrences ofwi at a distance of one word to the right of wj (d/j) were counted. The four sets of 25,000,000 counts were collected in the 5,000-by-5,000 matrices B, C, A, and D, respectively. Finally these four matrices were combined into one large 5,000-by-20,000 matrix as shown in Figure 1. The figure also shows for two words where their four cooccurrence counts are located in the 5,000-by-20,000 matrix. In the experiments, w3000 was resistance and ~/24250 was theaters. The four marks in the figure, the positions of the counts 1:13000,4250, b3000,4250, e3000,4250, and d3000,4~50, indicate how often resistance occurred at positions -2, -1, 1, and 2 with respect to theaters.</Paragraph>
    <Paragraph position="3"> These 20,000-element rows of the matrix could be used directly to compute the syntactic similarity between individual words: The cosine of the angle between the vectors of a pair of words is a measure of their similarity. I However, computations with such large vectors are time-consuming. Therefore a singular value decomposition was performed on the matrix. Fifteen singular values were computed using a sparse matrix algorithm from SVDPACK (Berry 1992). As a result, each of the 5,000 words is represented by a vector of real numbers. Since the original 20,000-component vectors of two words (corresponding to rows in the matrix in Figure 1) are similar if their collocations are similar, the same holds for the reduced vectors because the singular value decomposition finds the best least square approximation for the 5,000 original vectors in a 15-dimensional space that preserves similarity between vectors. See (Deerwester et al. 1990) for a definition of SVD and an application to a similar problem.</Paragraph>
    <Paragraph position="4"> Close neighbors in the 15-dimensional space generally have the same syntactic category as can be seen in Table 1. However, the problem with this method is that it will not scale up to a very large number of words. The singular value decomposition has a time complexity quadratic in the rank of the matrix, so that one can only treat a small part of the total vocabulary of a large corpus.</Paragraph>
    <Paragraph position="5"> Therefore, an alternative set of features was considered: classes of words in the 15-dimensional space. Instead of counting the number of occurrences of individual words, we would now count 1The cosine between two vectors corresponds to the normalized correlation coefficient: cos(c~(~,ff)) = the number of occurrences of members of word classes. 2 The space was clustered with Buckshot, a linear-time clustering algorithm described in (Cutting et al. 1992). Buckshort applies a high-quality quadratic clustering algorithm to a random sample of size v/k-n, where k is the number of desired cluster centers and n is the number of vectors to be clustered. Each of the remaining n - ~ vectors is assigned to the nearest cluster center. The high-quality quadratic clustering algorithm used was truncated group average agglomeration (Cutting et al. 1992).</Paragraph>
    <Paragraph position="6"> Clustering algorithms generally do not construct groups with just one member. But there are many closed-class words such as auxiliaries and prepositions that shouldn't be thrown together with the open classes (verbs, nouns etc.). Therefore, a list of 278 closed-class words, essentially the words with the highest frequency, was set aside.</Paragraph>
    <Paragraph position="7"> The remaining 4722 words were classified into 222 classes using Buckshot.</Paragraph>
    <Paragraph position="8"> The resulting 500 classes (278 high-frequency words, 222 clusters) were used as features in the matrix shown in Figure 2. Since the number of features has been greatly reduced, a larger number of words can be considered. For the second matrix all 22,771 words that occurred at least 100 times in 18 months of the New York Times News Service (May 1989 - October 1990) were selected.</Paragraph>
    <Paragraph position="9"> Again, there are four submatrices, corresponding to four relative positions. For example, the entries aij in the A part of the matrix count how often a member of class i occurs at a distance of one word to the left of word j. Again, a singular value decomposition was performed on the matrix, this time 10 singular values were computed. (Note that in the first figure the 20,000-element rows of the matrix are reduced to 15 dimensions whereas in the second matrix the 2,000-element columns are reduced to 10 dimensions.) Table 2 shows 20 randomly selected words and their nearest neighbors in category space (in order of proximity to the head word). As can be seen from the table, proximity in the space is a good predictor of similar syntactic category. The nearest neighbors of athlete, clerk, declaration, and dome are singular nouns, the nearest neighbors of bowers and gibbs are family names, the nearest neighbors of desirable and sole are adjectives, and the nearest neighbors of financings are plural nouns, in each case without exception. The neighborhoods of armaments, cliches and luxuries (nouns), and b'nai and northwestern (NP-initial modifiers) fail to respect finer grained syntactic 2Cf. (Brown et al. 1992) where the same idea of improving generalization and accuracy by looking at word classes instead of individual words is used.</Paragraph>
    <Paragraph position="10">  submitted banned financed developed authorized headed canceled awarded barred virtually merely formally fully quite officially just nearly only less reflecting forcing providing creating producing becoming carrying particularly elections courses payments losses computers performances violations levels pictures professionals investigations materials competitors agreements papers transactions mood roof eye image tool song pool scene gap voice chinese iraqi american western arab foreign european federal soviet indian reveal attend deliver reflect choose contain impose manage establish retain believe wish know realize wonder assume feel say mean bet angeles francisco sox rouge kong diego zone vegas inning layer Oil must through in at over into with from for by across we you i he she nobody who it everybody there they might would could cannot will should can may does helps  turmoil weaponry landmarks coordination prejudices secrecy brutality unrest harassment \[ virus scenario \[ event audience disorder organism candidate procedure epidemic I suffolk sri allegheny cosmopolitan berkshire cuny broward multimedia bovine nytimes jacobs levine cart hahn schwartz adams bucldey dershowitz fitzpatrick peterson \[ salesman \] psychologist photographer preacher mechanic dancer lawyer trooper trainer pests wrinkles outbursts streams icons endorsements I friction unease appraisals lifestyles antonio I' clara pont saud monica paulo rosa mae attorney palma sequence mood profession marketplace concept facade populace downturn moratorium I re'cognizable I frightening loyal devastating excit!ng troublesome awkward palpable blackout furnace temblor quartet citation chain countdown thermometer shaft I I somewhat progressively acutely enormously excessively unnecessarily largely scattered \[ endeavors monopolies raids patrols stalls offerings occupations philosophies religions adler reid webb jenkins stevens carr lanrent dempsey hayes farrell \[ volatility insight hostility dissatisfaction stereotypes competence unease animosity residues \]  credits promises \[ forecasts shifts searches trades practices processes supplements controls through from in \[ at by 'within with under against for will might would cannot could can should won't \[ doesn't may we \[ i you who nobody he it she everybody there distinctions, but are reasonable representations of syntactic category. The neighbors of cruz (second components of names), and equally and vividly (adverbs) include words of the wrong category, but are correct for the most part.</Paragraph>
    <Paragraph position="11"> In order to give a rough idea of the density of the space in different locations, the symbol &amp;quot;1&amp;quot; is placed before the first neighbor in Table 2 that has a correlation of 0.978 or less with the head word. As can be seen from the table, the regions occupied by nouns and proper names are dense, whereas adverbs and adjectives have more distant nearest neighbors. One could attempt to find a fixed threshold that would separate neighbors of the same category from syntactically different ones. For instance, the neighbors of oh with a correlation higher than 0.978 are all interjections and the neighbors of cliches within the threshold region are all plural nouns. However, since the density in the space is different for different regions, it is unlikely that a general threshold for all syntactic categories can be found.</Paragraph>
    <Paragraph position="12"> The neighborhoods of transports and walks are not very homogeneous. These two words are ambiguous between third person singular present tense and plural noun. Ambiguity is a problem for the vector representation scheme used here, because the two components of an ambiguous vector can add up in a way that makes it by chance similar to an unambiguous word of a different syntactic category. If we call the distributional vector fi'C/ of words of category c the profile of category c, and if a word wl is used with frequency c~ in category cl and with frequency ~ in category c2, then the weighted sum of the profiles (which corresponds to a column for word Wl in Figure 2) may turn out to be the same as the profile of an unrelated third category c3: This is probably what happened in the cases of transports and walks. The neighbors of claims demonstrate that there are homogeneous &amp;quot;ambiguous&amp;quot; regions in the space if there are enough words with the same ambiguity and the same frequency ratio of the categories, lransports and walks (together with floats, jumps, sticks, stares, and runs) seem to have frequency ratios a/fl different from claims, so that they ended up in different regions.</Paragraph>
    <Paragraph position="13"> The last three lines of Table 2 indicate that function words such as prepositions, auxiliaries, and nominative pronouns and quantifiers occupy their own regions, and are well separated from each other and from open classes.</Paragraph>
  </Section>
  <Section position="5" start_page="254" end_page="256" type="metho">
    <SectionTitle>
A BIRECURRENT NETWORK
FOR PART-OF-SPEECH
PREDICTION
</SectionTitle>
    <Paragraph position="0"> A straightforward way to take advantage of the vector representations for part of speech categorization is to cluster the space and to assign part-of-speech labels to the clusters. This was done with Buckshot. The resulting 200 clusters yielded good results for unambiguous words. However, for the reasons discussed above (linear combination of profiles of different categories) the clustering was not very successful for ambiguous words. Therefore, a different strategy was chosen for assigning category labels. In order to tease apart the different uses of ambiguous words, one has to go back to the individual contexts of use. The connectionist network in Figure 3 was used to analyze individual contexts.</Paragraph>
    <Paragraph position="1"> The idea of the network is similar to Elman's recurrent networks (Elman 1990, Elman 1991): The network learns about the syntactic structure of the language bY trying to predict the next word from its own context units in the previous step and the current word. The network in Figure 3 has two novel features: It uses the vectors from the second singular vMue decomposition as input and target.</Paragraph>
    <Paragraph position="2"> Note that distributed vector representations are ideal for connectionist nets, so that a connectionist model seems most appropriate for the prediction task. The second innovation is that the net is birecurrent. It has recurrency to the left as well as to the right.</Paragraph>
    <Paragraph position="3"> In more detail, the network's input consists of the word to the left tn-1, its own left context in the previous time step c-l,,-1, the word to the right tn+l and its own right context C-rn+l in the next time step. The second layer has the context units of the current time step. These feed into thirty hidden units h,~ which in turn produce the output vector o,,. The target is the current word tn. The output units are linear, hidden units are sigmoidM.</Paragraph>
    <Paragraph position="4"> The network was trained stochastically with truncated backpropagation through time (BPTT, Rumelhart et al. 1986, Williams and Peng 1990).</Paragraph>
    <Paragraph position="5"> For this purpose, the left context units were unfolded four time steps to the left and the right context units four time steps to the right as shown in Figure 4. The four blocks of weights on the connections to c-in-3, c-ln-~., c-in-l, and c-In are linked to ensure identical mapping from one &amp;quot;time step&amp;quot; to the next. The connections on the right side are linked in the same way. The training set consisted of 8,000 words in the New York Times newswire (from June 1990). For each training step, four words to the left of the target word (tn_3, tn_2,tn_l, and in) and four words to the right of the target word (tn, tn+l, tn+2, and in+3)  were the input to the unfolded network. The target was the word tn. A modification of bp from the pdp package was used with a learning rate of 0.01 for recurrent units, 0.001 for other units and no momentum.</Paragraph>
    <Paragraph position="6"> After training, the network was applied to the category prediction tasks described below by choosing a part of the text without unknown words, computing all left contexts from left to right, computing all right contexts from right to left, and finally predicting the desired category of a word t, by using the precomputed contexts c-l,, and c-rn.</Paragraph>
    <Paragraph position="7"> In order to tag the occurrence of a word, one could retrieve the word in category space whose vector is closest to the output vector computed by the network. However, this would give rise to too much variety in category labels. To illustrate, consider the prediction of the category NOUN. If the network categorizes occurrences of nouns correctly as being in the region around declaration, then the slightest variation in the output will change the nearest neighbor of the output vector from declaration to its nearest neighbors sequence or mood (see Table 2). This would be confusing to the human user of the categorization program.</Paragraph>
    <Paragraph position="8"> Therefore, the first 5,000 output vectors of the network (from the first day of June 1990), were clustered into 200 output clusters with Buckshot.</Paragraph>
    <Paragraph position="9"> Each output cluster was labeled by the two words closest to its centroid. Table 3 lists labels of some of the output clusters that occurred in the experiment described below. They are easily interpretable for someone with minimal linguistic knowledge as the examples show. For some categories such as HIS_THI~. one needs to look at a couple of instances to get a &amp;quot;feel&amp;quot; for their mean-</Paragraph>
    <Paragraph position="11"> ing.</Paragraph>
    <Paragraph position="12"> The syntactic distribution of an individual word can now be more accurately determined by the following algorithm: * compute an output vector for each position in the text at which the target word occurs. * for each output vector j do the following: - determine the centroid of the cluster i which is closest - compute the correlation coefficient of the output vector j and the centroid of the output cluster i. This is the score si,i for cluster i and vector j. Assign zero to the scores of the other clusters for this vector: s~,j :- 0, k ~ i * for each cluster i, compute the final score fi as the sum of the scores sij : fi := ~j si,j * normalize the vector of 200 final scores to unit length This algorithm was applied to June 1990. If for a given word, the sum of the unnormalized final scores was less than 30 (corresponding to roughly 100 occurrences in June), then this word was discarded. Table 4 lists the highest scoring categories for 10 random words and 11 selected ambiguous words. (Only categories with a score of at least 0.2 are listed.) The network failed to learn the distinctions between adjectives, intransitive present participles and past participles in the frame &amp;quot;to-be + \[\] + non-NP'. For this reason, the adjective close, the present participle beginning, and the past participle shot are all classified as belonging to the category STRUGGLING_TRAVELING. (Present Participles are successfully discriminated in the frame &amp;quot;to-be + \[\] + NP&amp;quot;: see winning in the table, which is classified as the progressive form of a transitive verb: HOLDING_PROMISING.) This is the place where linguistic knowledge has to be injected in form of the following two rules:  * If a word in STRUGGLING_TRAVELING is a morphological present participle or past participle assign it to that category, otherwise to the category ADJECTIVE_PREDICATIVE.</Paragraph>
    <Paragraph position="13"> * If a word in a noun category is a morpho null logical plural assign it to NOUN_PLURAL, to NOUN_SINGULAR otherwise.</Paragraph>
    <Paragraph position="14"> With these two rules, all major categories are among the first found by the algorithm; in particular the major categories of the ambiguous words better (adjective/adverb), close (verb/adjective), work (noun/base form of verb), hopes (noun/third person singular), beginning (noun/present-participle), shot (noun/past participle) and's ('s/is). There are two clear errors: GIVEN_TAKING for contain, and RICAN_ADVISORY for 's, both of rank three in the table.</Paragraph>
    <Paragraph position="16"> These results seem promising given the fact that the context vectors consist of only 15 units. It seems naive to believe that all syntactic information of the sequence of words to the left (or to the right) can be expressed in such a small number of units. A larger experiment with more hidden units for each context vector will hopefully yield better results.</Paragraph>
  </Section>
class="xml-element"></Paper>
Download Original XML