<?xml version="1.0" standalone="yes"?>
<Paper uid="W93-0307">
  <Title>Structural Ambiguity and Conceptual Relations</Title>
  <Section position="1" start_page="0" end_page="0" type="metho">
    <SectionTitle>
resnik @ linc.cis.upenn.edu
ABSTRACT
</SectionTitle>
    <Paragraph position="0"> Lexical co-occurrence statistics are becoming widely used in the syntactic analysis of unconstrained text. However, analyses based solely on lexical relationships suffer from sparseness of data: it is sometimes necessary to use a less informed model in order to reliably estimate statistical parameters. For example, the &amp;quot;lexical association&amp;quot; strategy for resolving ambiguous prepositional phrase attachments \[Hindle and Rooth,</Paragraph>
    <Paragraph position="1"> 1991\] takes into account only the attachment site (a verb or its direct object) and the preposition, ignoring the object of the preposition.</Paragraph>
    <Paragraph position="2"> We investigated an extension of the lexical association strategy to make use of noun class information, thus permitting a disambiguation strategy to take more information into account.</Paragraph>
    <Paragraph position="3"> Although in preliminary experiments the extended strategy did not yield improved performance over lexical association alone,</Paragraph>
    <Paragraph position="4"> a qualitative analysis of the results suggests that the problem lies not in the noun class information, but rather in the multiplicity of classes available for each noun in the absence of sense disambiguation. This suggests several possible revisions of our proposal.</Paragraph>
  </Section>
  <Section position="2" start_page="0" end_page="58" type="metho">
    <SectionTitle>
1. Preference Strategies
</SectionTitle>
    <Paragraph position="0"> Prepositional phrase attachment is a paradigmatic case of the structural ambiguity problems faced by natural language parsing systems. Most models of grammar will not constrain the analysis of such attachments in examples like (1): the grammar simply specifies that a prepositional phrase such as on computer theft can be attached in several ways, and leaves the problem of selecting the  correct choice to some other process.</Paragraph>
    <Paragraph position="1"> (1) a. Eventually, Mr. Stoll was invited to both the CIA and NSA to brief high-ranking officers on computer theft.</Paragraph>
    <Paragraph position="2"> b. Eventually, Mr. Stoll was invited to both the CIA and NSA \[to brief \[high-ranking officers on computer theft\]\].</Paragraph>
    <Paragraph position="3"> c. Eventually, Mr. Stoll was invited to both the CIA and NSA \[to brief \[high-ranking officers\] \[on computer theft\]\].</Paragraph>
    <Paragraph position="4"> As \[Church and Patil, 1982\] point out, the number of analyses given combinations of such &amp;quot;all ways ambiguous&amp;quot; constructions grows rapidly even for sentences of quite reasonable length, so this other process has an important role to play.</Paragraph>
    <Paragraph position="5"> Discussions of sentence processing have focused primarily on structurally-based preference strategies such as right association and minimal attachment \[Kimball, 1973; Frazier, 1979; Ford et al., 1982\]; \[Hobbs and Bear, 1990\], while acknowledging the importance of semantics and pragmatics in attachment decisions, propose two syntactically-based attachment rules that are meant to be generalizations of those structural strategies.</Paragraph>
    <Paragraph position="6"> Others, however, have argued that syntactic considerations alone are insufficient for determining prepositional phrase attachments, suggesting instead that preference relationships among lexical items are the crucial factor. For example: \[Wilks et al., 1985\] argue that the right attachment rules posited by \[Frazier, 1979\] are incorrect for phrases in general, and supply counterexamples.</Paragraph>
    <Paragraph position="7"> They further argue that lexical preferences alone as suggested by \[Ford et al., 1982\] are too simplistic, and suggest instead the use of preference semantics. In the preference semantics framework, attachment relations of phrases are determined by comparing the preferences emanating from all the entities involved in the attachment, until the best mutual fit is found. Their CASSEX system represents the various meanings of the preposition in terms of (a) the preferred semantic class of the noun or verb that precedes the preposition (e.g., move, be, strike), (b) the case of the preposition (e.g., instrument, time, loc.static), and (c) the preferred semantic class of the head noun of the prepositional phrase (e.g., physob, event). The difficult part of this method is the identification of preference relationships, and particularly determining the strengths of the preferences and how they should interact. (See also discussion in \[Schubert, 1984\].) \[Dahlgren and McDowell, 1986\] also suggest using preferences based on hand-built knowledge about the prepositions and their objects, specifying a simpler set of rules than those of \[Wilks et al., 1985\]. She argues that the knowledge is needed for understanding the text as well as for parsing it. Like CASSEX, this system requires considerable effort to provide hand-encoded preference information.</Paragraph>
    <Paragraph position="8"> * \[Jensen and Binot, 1987\] resolve prepositional phrase attachments by using preferences obtained by applying a set of heuristic rules to dictionary definitions. The rules match against lexico-syntactic patterns in the definitions -- for example, if confronted with the sentence She ate a fish with a fork, the system evaluates separately the plausibility of the two proposed constructs eat with fork and fish with fork based on how well the dictionary supports each. By making use of on-line dictionaries, the authors hope to create a system that will scale up; however, they report no overall evaluation.</Paragraph>
    <Paragraph position="9"> An empirical study by \[Whittemore et al., 1990\] supports the main premise of these proposals: they observe that in naturally-occurring data, lexical preferences (e.g., arrive at, flight to) provide more reliable attachment predictions than structural strategies. Unfortunately, it seems clear that, outside of restricted domains, hand-encoding of preference rules will not suffice for unconstrained text. Information gleaned from dictionaries may provide a solution, but the problem of how to weight and combine preferences remains.</Paragraph>
  </Section>
  <Section position="3" start_page="58" end_page="58" type="metho">
    <SectionTitle>
2. Lexical Association
</SectionTitle>
    <Paragraph position="0"> \[Hindle and Rooth, 1991\] offer an alternative method for discovering and using lexical attachment preferences, based on corpus-based lexical co-occurrence statistics.</Paragraph>
    <Paragraph position="1"> In this section, we briefly summarize their proposal and experimental results.</Paragraph>
    <Paragraph position="2"> An &amp;quot;instance&amp;quot; of ambiguous prepositional phrase attachment is taken to consist of a verb, its direct object, a preposition, and the object of the preposition. Furthermore, only the heads of the respective phrases are considered; so, for example, the ambiguous attachment in (1) would be construed as the 4-tuple (brief, officer,on,theft). 1 We will refer to its elements as v, n 1, p, and n2, respectively. null The attachment strategy is based on an assessment of bow likely the preposition is, given each potential attachment site; that is, a comparison of the values Pr(/,inl) and Pr(p\[v). For (1), one would expect Pr(on\[bri~f) to be greater than Pr(on\[o~fficer), reflecting the intuition that briefX on Y is more plausible as a verb phrase than o.~icer on Z is as a noun phrase.</Paragraph>
    <Paragraph position="3"> Hindle and Rooth extracted their training data from a corpus of Associated Press news stories. A robust parser 1 Verbs and nouns are reduced to their root form, hence officer rather than officers.</Paragraph>
    <Paragraph position="4"> \[Hindle, 1983\] was used to construct a table in which each row contains the head noun of each noun phrase, the preceding verb (if the noun phrase was the verb's direct object), and the following preposition, if any occurred.</Paragraph>
    <Paragraph position="5"> Attachment decisions for the training data in the table were then made using a heuristic procedure -- for example, given spare it from, the procedure will count this row as an instance of spare from rather than it from, since a prepositional phrase cannot be attached to a pronoun.</Paragraph>
    <Paragraph position="6"> Not all the data can be assigned with such certainty: ambiguous cases in the training data were handled either by using statistics collected from the unambiguous cases, by splitting the attachment between the noun and the verb, or by defaulting to attachment to the noun.</Paragraph>
    <Paragraph position="7"> Given an instance of ambiguous prepositional phrase attachment from the test set, the lexical association procedure for guessing attachments used the t-score \[Church et al., 1991\] to assess the direction and significance of the difference between Pr(p|n1) and Pr(p|v) -- t will be positive, zero, or negative according to whether Pr(p|n1) is greater than, equal to, or less than Pr(p|v), respectively, and its magnitude indicates the level of confidence in the significance of this difference.</Paragraph>
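The comparison just described can be sketched in Python (no code appears in the original paper). The binomial variance approximation p(1-p)/n and the toy counts below are assumptions of this sketch, following the general shape of the t-score rather than Hindle and Rooth's exact estimator:

```python
import math

def t_score(count_n1_p, count_n1, count_v_p, count_v):
    """Approximate t-score comparing Pr(p|n1) with Pr(p|v).

    Counts are hypothetical corpus counts: how often the head noun n1
    (resp. the verb v) was observed with preposition p, out of all its
    occurrences.  Variances use the binomial approximation p(1-p)/n.
    """
    p1 = count_n1_p / count_n1  # estimate of Pr(p|n1)
    p2 = count_v_p / count_v    # estimate of Pr(p|v)
    var1 = p1 * (1 - p1) / count_n1
    var2 = p2 * (1 - p2) / count_v
    return (p1 - p2) / math.sqrt(var1 + var2)

# Toy counts (not from the paper): the noun rarely takes the
# preposition while the verb often does, so t comes out negative
# and the procedure would attach the prepositional phrase to the verb.
t = t_score(2, 100, 30, 200)
print(t < 0)
```

A positive t favours attachment to the noun, a negative t the verb, and |t| &gt; 2.1 was the threshold for a confident choice.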
    <Paragraph position="8"> On a set of test sentences held out from the training data, the lexical association procedure made the correct attachment 78.3% of the time. For choices with a high level of confidence (magnitude of t greater than 2.1, about 70% of the time), correct attachments were made 84.5% of the time.</Paragraph>
  </Section>
  <Section position="4" start_page="58" end_page="59" type="metho">
    <SectionTitle>
3. Prepositional Objects
</SectionTitle>
    <Paragraph position="0"> The lexical association strategy performs quite well, despite the fact that the object of the preposition is ignored. However, Hindle and Rooth note that neglecting this information can hurt in some cases. For instance, the lexical association strategy is presented with exactly the same information in (2a) and (2b), and is therefore unable to distinguish them.</Paragraph>
    <Paragraph position="1"> (2) a. Britain reopened its embassy in December.</Paragraph>
    <Paragraph position="2"> b. Britain reopened its embassy in Teheran.</Paragraph>
    <Paragraph position="3"> In addition, \[Hearst and Church, in preparation\] have conducted a pilot study in which human subjects are asked to guess prepositional phrase attachments despite the omission of the direct object, the object of the preposition, or both. The results of this study, though preliminary, suggest that the object of the preposition contributes an amount of information comparable to that contributed by the direct object; more important, for some prepositions, the object of the preposition appears to be more informative. Thus, there appears to be good reason to incorporate the object of the preposition in lexical association calculations. The difficulty, of course, is that the data are far too sparse to permit the most obvious extension. Attempts to simply compare Pr(p, n2|n1) against Pr(p, n2|v) using the t-score fail dismally.2</Paragraph>
  </Section>
  <Section position="5" start_page="59" end_page="59" type="metho">
    <SectionTitle>
4. Word Classes
</SectionTitle>
    <Paragraph position="0"> We are faced with a well-known tradeoff: increasing the number of words attended to by a statistical language model will in general tend to increase its accuracy, but doing so increases the number of probabilities to be estimated, leading to the need for larger (and often impractically larger) sets of training data in order to obtain accurate estimates. One option is simply to pay attention to fewer words, as do Hindle and Rooth. Another possibility, however, is to reduce the number of parameters by grouping words into equivalence classes, as discussed, for example, by \[Brown et al., 1990\].</Paragraph>
    <Paragraph position="1"> \[Resnik, 1992\] discusses the use of word classes in discovering lexical relationships, demonstrating that WordNet \[Beckwith et al., 1991; Miller, 1990\], a broadcoverage, hand-constructed lexical database, provides a reasonable foundation upon which to build class-based statistical algorithms. Here we briefly describe WordNet, and in the following section describe its use in resolving prepositional phrase attachment ambiguity.</Paragraph>
    <Paragraph position="2"> WordNet is a large lexical database organized as a set of word taxonomies, one for each of four parts of speech (noun, verb, adjective, and adverb). In the noun taxonomy, the only one used here, each word is mapped to a set of word classes, corresponding roughly to word senses, which Miller et al. term synonym sets. For example, the word paper is a member of synonym sets \[newspaper, paper\]</Paragraph>
    <Paragraph position="3"> and \[composition, paper, report, theme\], among others. For notational convenience, we will refer to each synonym set by its first word, sometimes together with a unique identifier -- for example (paper, 2241323) and (newspaper, 2202048).3 The classes in the taxonomy form the nodes of a semantic network, with links to superordinates, subordinates, antonyms, members, parts, etc. In this work only the superordinate/subordinate (i.e., IS-A) links are used -- for example, (newspaper, 2202048) is a subclass of (press, 2200204), which is a subclass of (print_media, 2200360), and so forth.</Paragraph>
    <Paragraph position="4"> Denoting the set of words subsumed by a class c (that is, the set of all words that are a member of c or any subordinate class) as words(c), the frequency of a class can be estimated as follows:</Paragraph>
    <Paragraph position="5"> freq(c) = sum over n in words(c) of freq(n)    (1)</Paragraph>
    <Paragraph position="6"> used; the work described in this paper was done using version 1.2.</Paragraph>
    <Paragraph position="7"> Owing to multiple inheritance and word sense ambiguity, equation (1) represents only a coarse estimate -- for example, each occurrence of a word contributes equally to the count of all classes of which it is a member. However, \[Resnik, 1992\] estimated class frequencies in a similar fashion with acceptable results.</Paragraph>
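The class-frequency estimate of equation (1) can be sketched with a toy taxonomy (the class names, membership table, and counts below are illustrative assumptions, not WordNet's actual synonym sets or identifiers):

```python
# Toy IS-A taxonomy: each class maps to its immediate subclasses,
# and to the words that are direct members of it.
subclasses = {
    "print_media": ["press"],
    "press": ["newspaper"],
    "newspaper": [],
}
members = {
    "print_media": set(),
    "press": {"press"},
    "newspaper": {"newspaper", "paper"},
}

word_freq = {"newspaper": 10, "paper": 25, "press": 5}

def words(c):
    """All words that are members of c or of any subordinate class."""
    ws = set(members[c])
    for sub in subclasses[c]:
        ws |= words(sub)
    return ws

def class_freq(c):
    """Equation (1): freq(c) = sum of freq(n) for n in words(c).
    An ambiguous word contributes to every class it belongs to,
    which is why this is only a coarse estimate."""
    return sum(word_freq.get(w, 0) for w in words(c))

print(class_freq("newspaper"))   # counts for newspaper and paper
print(class_freq("print_media")) # counts propagate up the IS-A chain
```

Note how the count of a subordinate class is always included in its superordinates, mirroring the subsumption described in the text.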
  </Section>
  <Section position="6" start_page="59" end_page="59" type="metho">
    <SectionTitle>
5. Conceptual Association
</SectionTitle>
    <Paragraph position="0"> In what follows, we propose to extend Hindle and Rooth's lexical association method to take advantage of knowledge about word-class memberships, following a strategy one might call conceptual association. From a practical point of view, the use of word classes reduces the sparseness of the training data, permitting us to make use of the object of the preposition, and also decreases the sensitivity of the attachment strategy to the specifics of the training corpus. From a more philosophical point of view, using a strategy based on conceptual rather than purely lexical relationships accords with our intuition that, at least in many cases, much of the work done by lexical statistics is a result of the semantic relationships they indirectly encode.</Paragraph>
    <Paragraph position="1"> Our proposal for conceptual association is to calculate a measure of association using the classes to which the direct object and object of the preposition belong, and to select the attachment site for which the evidence of association is strongest. The use of classes introduces two sources of ambiguity. The first, shared by lexical association, is word sense ambiguity: just as lexically-based methods conflate multiple senses of a word into the count of a single token, here each word may be mapped to many different classes in the WordNet taxonomy. Second, even for a single sense, a word may be classified at many levels of abstraction -- for example, even interpreted solely as a physical object (rather than a monetary unit), penny may be categorized as a (coin, 3566679), (cash, 3566144), (money, 3565439), and so forth on up to (possession, 11572).</Paragraph>
    <Paragraph position="2"> In our experiments, we adopted the simplest possible approach: we consider each classification of the nouns as a source of evidence about association, and combine these sources of evidence to reach a single attachment decision.</Paragraph>
    <Paragraph position="3"> Algorithm 1. Given (v, n1, p, n2), 1. Let C1 = {c | n1 ∈ words(c)} Let C2 = {c | n2 ∈ words(c)} = {c2,1, ..., c2,N} 2. For i from 1 to N,</Paragraph>
    <Paragraph position="5"> 3. For i from 1 to N, S_i^n = freq(c1,i, p, c2,i) * I_i^n and S_i^v = freq(v, p, c2,i) * I_i^v. 4. Compute a paired-samples t-test for a difference of the means of S^n and S^v. Let &amp;quot;confidence&amp;quot; be the significance of the test with N - 1 degrees of freedom.</Paragraph>
    <Paragraph position="6"> 5. Select attachment to nl or v according to whether t is positive or negative, respectively.</Paragraph>
    <Paragraph position="7"> Step 1 of the algorithm establishes the range of possible classifications for nl and n2. For example, if the algorithm is trying to disambiguate (3) But they foresee little substantial progress in  exports...</Paragraph>
    <Paragraph position="8"> the word export can be classified alternatively as (export, 248913), (commerce, 244370), (group_action, 241055), and (act, 10812).</Paragraph>
    <Paragraph position="9"> In step 2, each candidate classification for n2 is held fixed, and a classification for n1 is chosen that maximizes the association (as measured by mutual information) between the noun-attachment site and the prepositional phrase. In effect, this answers the question, &amp;quot;If we were to categorize n2 in this way, what would be the best class to use for n1?&amp;quot; This is done for each classification of n2, yielding N different class-based interpretations for (n1, p, n2). I_i^n is the noun-attachment association score for the ith interpretation. Correspondingly, there are N interpretations I_i^v for (v, p, n2).</Paragraph>
    <Paragraph position="10"> At this point, each of the N classifications for n1 (progress) and n2 (export) provides one possible interpretation of (foresee, progress, in, export), and each of these interpretations provides associational evidence in favor of one attachment choice or the other. How are these sources of evidence to be combined? As a first effort, we have proceeded as follows. The values for I_i^n and I_i^v are not all equally reliable: values calculated using classes low in the taxonomy involve lower frequencies than those using higher-level classes. In an attempt to assign more credit to scores calculated using higher counts, we weight each of the mutual information scores by the corresponding trigram frequency; thus in step 3 the association score for noun-attachment is calculated as the product f(c1,i, p, c2,i) * I_i^n. The corresponding verb-attachment score is f(v, p, c2,i) * I_i^v. This leaves us with a table like the following: In step 4 the N different sources of evidence are combined: a t-test for the difference of the means is performed, treating S^n and S^v as paired samples (see, e.g., \[Woods et al., 1986\]). In step 5 the resulting value of t determines the choice of attachment site, as well as an estimate of how significant the difference is between the two alternatives. (For this example, t(3) = 3.57, p &lt; 0.05, yielding the correct choice of attachment.)</Paragraph>
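Step 4's evidence combination can be sketched as a paired-samples t statistic over the N interpretations (the score values below are invented for illustration, not taken from the paper's table):

```python
import math

def paired_t(sn, sv):
    """Paired-samples t statistic with N - 1 degrees of freedom.

    sn[i] and sv[i] are the weighted noun- and verb-attachment
    scores (trigram frequency times mutual information) for the
    i-th classification of the object of the preposition.
    """
    n = len(sn)
    diffs = [a - b for a, b in zip(sn, sv)]
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)

# Toy scores for four hypothetical classifications of n2: noun
# attachment dominates every interpretation, so t is positive and
# step 5 selects attachment to n1.
t = paired_t([3.2, 2.8, 3.5, 2.9], [1.1, 0.9, 1.4, 1.0])
print(t > 0)
```

The sign of t chooses the attachment site and its magnitude estimates how significant the difference between the two alternatives is.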
  </Section>
  <Section position="7" start_page="59" end_page="62" type="metho">
    <SectionTitle>
6. Combining Strategies
</SectionTitle>
    <Paragraph position="0"> In addition to evaluating the performance of the conceptual association strategy in isolation, it is natural to combine the predictions of the lexical and conceptual association strategies to make a single prediction. Although well-founded strategies for combining the predictions of multiple models do exist in the speech recognition literature \[Jelinek and Mercer, 1980; Katz, 1987\], we have chosen a simpler &amp;quot;backing off&amp;quot; style procedure: Algorithm 2. Given (v, n1, p, n2), 1. Calculate an attachment decision using Algorithm 1.</Paragraph>
    <Paragraph position="1"> 2. If significance &lt; 0.1, use this decision; 3. Otherwise, use lexical association.</Paragraph>
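Algorithm 2 amounts to a two-line backing-off rule. A minimal sketch, in which `conceptual` and `lexical` are hypothetical stand-ins for the two procedures (the first returning a decision plus its significance from Algorithm 1, the second a decision from lexical association):

```python
def combined_attachment(instance, conceptual, lexical):
    """Back off from conceptual to lexical association (Algorithm 2)."""
    decision, significance = conceptual(instance)
    if significance < 0.1:    # conceptual association is confident
        return decision
    return lexical(instance)  # otherwise fall back to lexical association

# Stand-in strategies applied to the 4-tuple from example (1):
instance = ("brief", "officer", "on", "theft")
confident = combined_attachment(
    instance, lambda t: ("verb", 0.03), lambda t: "noun")
backed_off = combined_attachment(
    instance, lambda t: ("verb", 0.5), lambda t: "noun")
print(confident, backed_off)  # verb noun
```

The significance computed in step 4 of Algorithm 1 thus doubles as the gate deciding which strategy's answer is used.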
    <Paragraph position="2"> 7. Experimental Results 7.1. Experiment 1  An experiment was conducted to evaluate the performance of the lexical association, conceptual association, and combined strategies. The corpus used was a collection of parses from articles in the 1988-89 Wall Street Journal, found as part of the Penn Treebank. This corpus is an order of magnitude smaller than the one used by Hindle and Rooth in their experiments, but it provides considerably less noisy data, since attachment decisions have been performed automatically by the Fidditch parser \[Hindle, 1983\] and then corrected by hand.</Paragraph>
    <Paragraph position="3"> A test set of 201 ambiguous prepositional phrase attachment instances was set aside. After acquiring attachment choices on these instances from a separate judge (who used the full sentence context in each case), the test set was reduced by eliminating sentences for which the separate judge disagreed with the Treebank, leaving a test set of 174 instances. 4 4 Of the 348 nouns appearing as part of the test set, 12 were not covered by WordNet; these were classified by default as members of the WordNet class (entity).</Paragraph>
    <Paragraph position="4">  Lexical counts for relevant prepositional phrase attachments (v, p, n2 and n1, p, n2) were extracted from the parse trees in the corpus; in addition, by analogy with Hindle and Rooth's training procedure, instances of verbs and nouns that did not have a prepositional phrase attached were counted as occurring with the &amp;quot;null prepositional phrase.&amp;quot; A set of clean-up steps included reducing verbs and nouns to their root forms, mapping to lowercase, substituting the word someone for nouns not in WordNet that were part-of-speech-tagged as proper names, substituting the word amount for the token % (this appeared as a head noun in phrases such as rose 10 %), and expanding month abbreviations such as Jan. to the full month name.</Paragraph>
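The clean-up steps above can be sketched as a token normalizer. This is an illustrative reconstruction: the "NNP" tag value and the truncated month table are assumptions of the sketch, and root-form reduction is omitted since it would need a morphological analyser:

```python
MONTHS = {"Jan.": "january", "Feb.": "february", "Mar.": "march"}  # etc.

def clean_up(token, tag, in_wordnet):
    """Normalize a head noun before counting, per the steps above."""
    if token == "%":
        return "amount"       # e.g. the head noun in "rose 10 %"
    if token in MONTHS:
        return MONTHS[token]  # expand month abbreviations
    if tag == "NNP" and not in_wordnet:
        return "someone"      # proper name not covered by WordNet
    return token.lower()      # everything else: lowercase

print(clean_up("%", "NN", False))      # amount
print(clean_up("Bush", "NNP", False))  # someone
print(clean_up("Jan.", "NNP", True))   # january
```

Normalizing tokens this way pools counts that would otherwise be fragmented across surface variants, which matters given the small corpus.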
    <Paragraph position="5"> The results of the experiment are as follows:</Paragraph>
    <Paragraph position="7"> When the individual strategies were constrained to answer only when confident (|t| &gt; 2.1 for lexical association, p &lt; .1 for conceptual association), they performed as follows:</Paragraph>
    <Paragraph position="9"> The performance of lexical association in this experiment is striking: despite the reduced size of the training corpus in comparison to \[Hindle and Rooth, 1991\], performance exceeds previous results, and although fewer test cases produce confident predictions (as might be expected given generally lower counts), when the algorithm is confident it performs very well indeed.</Paragraph>
    <Paragraph position="10"> The performance of the conceptual association strategy seems reasonable, though it is clearly overshadowed by the performance of the lexical association strategy.</Paragraph>
    <Paragraph position="11"> The tiny improvement on lexical association by the combined strategy suggests that including the conceptual association strategy may improve performance overall, but further investigation is needed to determine whether such a conclusion is warranted; the experiments described in the following two sections bear on this issue.</Paragraph>
    <Paragraph position="12"> 7.2. Experiment 2 Although the particular class-based strategy implemented here might not provide great leaps in performance -- at least as judged on the basis of Experiment 1 -- one might expect that a strategy based upon a domain-independent semantic taxonomy would provide a greater degree of robustness, reducing dependence of the attachment strategy on the training corpus.</Paragraph>
    <Paragraph position="13"> We set out to test this supposition by considering the performance of the various associational attachment strategies when tested on data from a corpus other than the one on which they were trained. First, we tested performance on a test set drawn from the same genre. Of the test cases drawn by Hindle and Rooth from the Associated Press corpus, we took the first 200; eliminating those sentences for which Hindle and Rooth's two human judges could not agree on an attachment reduced the set to 173. Several minor clean-up steps were taken to make this test set consistent with our training data: if the object of the preposition was a complementizer or other word introducing a sentence (e.g. begin debate on whether), it was replaced with the word something; proper names (e.g. Bush) were replaced with someone; some numbers were replaced with year or amount (e.g. 1911 and 0, respectively); and &amp;quot;compound&amp;quot; prepositions were replaced by a &amp;quot;normal&amp;quot; preposition consistent with what appeared in the full sentence (e.g. by_about was replaced with by for the phrase outnumbered losers by about 6 to 5).</Paragraph>
    <Paragraph position="14"> The results of the experiment are as follows:  The conceptual association strategy, not being as dependent on the specific lexical items in the training and test sets, sustains a somewhat higher level of overall performance, although once again the lexical association strategy performs well when restricted to the relatively small set of predictions that it can make with confidence.</Paragraph>
    <Paragraph position="15"> 7.3. Experiment 3 Wishing to pursue the paradigm of cross-corpus testing further, we conducted a third experiment in which the training set was extracted from the Penn Treebank's parsed version of the Brown corpus \[Francis and Kucera, 1982\], testing on the Wall Street Journal test set of Experiment 1.</Paragraph>
    <Paragraph position="16"> The results of the experiment are as follows:  In this experiment it is surprising that all the strategies perform as well as they do. However, the pattern of results leads us to conjecture that the conceptual association strategy, taken in combination with the lexical association strategy, may permit us to make more effective use of general, corpus-independent semantic relationships than does the lexical association strategy alone.</Paragraph>
    <Paragraph position="17"> 8. Qualitative Evaluation The overall performance of the conceptual association strategy tends to be worse than that of lexical association, and the combined strategy yields at best a marginal improvement. However, several comments are in order.</Paragraph>
    <Paragraph position="18"> First, the results in the previous section demonstrate that conceptual association is doing some work: when the strategies are constrained to answer only when confident, conceptual association achieves a 50-60% increase in coverage over lexical association, at the cost of a 3-9% decrease in accuracy.</Paragraph>
    <Paragraph position="19"> Second, it is clear that class information is providing some measure of resistance to sparseness of data. As mentioned earlier, adding the object of the preposition without using noun classes leads to hopelessly sparse data -- yet the performance of the conceptual association strategy is far from hopeless. In addition, examination of what the conceptual association strategy actually did on specific examples shows that in many cases it is successfully compensating for sparse data.</Paragraph>
    <Paragraph position="20"> (4) To keep his schedule on track, he flies two personal secretaries in from Little Rock to augment his staff in Dallas. For example, the verb augment and preposition in never co-occur in the WSJ training corpus, and neither do the noun staff and preposition in; as a result, the lexical association strategy makes an incorrect choice for the ambiguous verb phrase in (4). However, the conceptual association strategy makes the correct decision on the basis of the following classifications: Third, mutual information appears to be a successful way to select appropriate classifications for the direct object, given a classification of the object of the preposition (see step 2 in Algorithm 1). For example, despite the fact that staff belongs to 25 classes in WordNet -- including (musical_notation, 2332528) and (rod, 1613297), for instance -- the classes to which it is assigned in the above table seem appropriate given the context of (4).</Paragraph>
    <Paragraph position="21"> Finally, it is clear that our method for combining sources of evidence -- the paired t-test in step 4 of Algorithm 1 -- is hurting performance in many instances  because (a) it gives equal weight to likely and unlikely classifications of the object of the preposition, and (b) the significance of the test is overestimated when the object of the preposition belongs to many different classes.</Paragraph>
    <Paragraph position="22"> (5) Goodrich's vinyl-products segment reported operating profit for the quarter of $30.1 million.</Paragraph>
    <Paragraph position="23"> For example, given the ambiguous attachment highlighted in (5), the contribution of the time-related classifications of quarter ((time_period, 4014263), (time, 9819), etc.) is swamped by numerous other classifications in which quarter is interpreted as a physical object (coin, animal part), a number (fraction, rational number), a unit of weight (for measuring grain), and so forth. As a result, the conceptual association strategy comes up with the wrong attachment and identifies its decision as a confident one.</Paragraph>
  </Section>
</Paper>