<?xml version="1.0" standalone="yes"?>
<Paper uid="C90-3037">
  <Title>Coordination in an Axiomatic Grammar*</Title>
  <Section position="2" start_page="0" end_page="0" type="metho">
    <SectionTitle>
I Introduction
</SectionTitle>
    <Paragraph position="0"> Coordination is a particularly troublesome phenomenon to account for in theories of syntax based upon phrase structure rules. Acceptable examples of 'non-constituent' coordination such as: (1) John gave Mary a book and Peter a paper (2) Ben likes and Fred admires Mary have led some to abandon a single level of grammatical description, and others to abandon phrase structure rules.</Paragraph>
    <Paragraph position="1"> An example of the former approach is Modifier Structure Grammar (Dahl and McCord, 1983), which was justified as follows: ... it appears that a proper and general treatment must recognise coordination as a 'metagrammatical' construction, in the sense that metarules, general system operations, or 'second-pass' operations such as transformations, are needed for its formulation. Modifier Structure Grammar embeds its rules for coordination into the parsing algorithm (there are close *This research was supported by an SERC research studentship.</Paragraph>
    <Paragraph position="2"> parallels with the SYSCONJ system (Woods, 1973)). In order to parse sentence (2), the state of the parser at the point immediately before 'Fred' is matched to the state immediately before 'Ben'. 'Fred admires' is then parsed, and the resulting state is merged with the state after parsing 'Ben likes'.</Paragraph>
    <Paragraph position="3"> The alternative approach to dealing with coordination uses a single level of grammatical description, but uses a weaker notion of constituency than phrase structure grammar. It is presently exemplified by proposals to extend Categorial Grammar with Forward Composition, the Product operator, Subject Type-Raising etc.</Paragraph>
    <Paragraph position="4"> Categorial Grammar, just like phrase structure grammar, is based upon the combination of one or more constituents to form a further constituent. In order to deal with coordination, the category (X\X)/X¹ is assigned to the conjunction, or, more usually, a phrase structure rule is invoked of the form:</Paragraph>
  </Section>
  <Section position="3" start_page="0" end_page="0" type="metho">
    <SectionTitle>
X → X conj X
</SectionTitle>
    <Paragraph position="0"> In either case, each conjunct has to be assigned a category. Extensions to Categorial Grammar provide a greater coverage of coordination phenomena by allowing a greater number of strings to form categories. For example, to accept both (2) and (3), (3) Ben likes Mary and admires Jane an extended grammar must allow 'Ben likes' to form a category which can combine with 'Mary' to form a sentence, and 'likes Mary' to form a category which can combine with 'Ben' to form a sentence. The consequence of this is that the simple sentence 'Ben likes Mary' can be assigned at least two different syntactic structures: (Ben (likes Mary)) or ((Ben likes) Mary), which both correspond to the same reading (the sentence is spuriously ambiguous according to the grammar).</Paragraph>
    <Paragraph position="1"> Axiomatic Grammar avoids the problem of spurious ambiguity by avoiding the need to assign categories to conjuncts. Although the formalism was developed during research into extended Categorial Grammar, the separation of grammatical information into axioms and rules makes its treatment of coordination look similar to that in a metalevel approach such as Modifier Structure Grammar. (1 Capital letters will be used to denote variables throughout this paper.) This</Paragraph>
    <Paragraph position="2"> similarity is easiest to show if we introduce the central notion of 'category transition' through the idea of state transition.</Paragraph>
    <Paragraph position="3"> Consider a left-to-right parse of the sentence 'a man sits' based upon a phrase structure grammar including the rules: s → np vp; np → det n. Initially we start in a state expecting a sentence (which we can encode as a list ⟨s⟩). After absorbing the determiner, 'a', we can move to a new state which expects a noun followed by a verb-phrase (encoded as a list ⟨n,vp⟩). Following the absorption of 'man', we can move to a state expecting just a verb-phrase, and following the absorption of 'sits' we have a successful parse since there is no more input and nothing more expected. The transitions between the encodings of the states are as follows:</Paragraph>
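As a rough illustration, the parse just described can be run as a sequence of state transitions. This is a hypothetical sketch, not code from the paper; the lexicon and function names are invented:

```python
# Minimal sketch of the 'a man sits' parse as transitions between
# encoded states (hypothetical code illustrating the paper's example).
LEXICON = {"a": "det", "man": "n", "sits": "vp"}

def step(state, word):
    """Absorb one word, returning the new list of expected categories."""
    cat = LEXICON[word]
    if state == ["s"] and cat == "det":
        return ["n", "vp"]           # ⟨s⟩ + "a"    → ⟨n,vp⟩
    if state == ["n", "vp"] and cat == "n":
        return ["vp"]                # ⟨n,vp⟩ + "man" → ⟨vp⟩
    if state == ["vp"] and cat == "vp":
        return []                    # ⟨vp⟩ + "sits"  → ⟨⟩
    raise ValueError(f"no transition for {state} + {word!r}")

def parse(words):
    state = ["s"]                    # initially we expect a sentence
    for w in words:
        state = step(state, w)
    return state == []               # success: nothing more expected

print(parse("a man sits".split()))   # True
```

A failed parse simply gets stuck: there is no transition to take, mirroring the absence of a matching axiom.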
    <Paragraph position="4"> ⟨s⟩ + "a" → ⟨n,vp⟩ (a:det); ⟨n,vp⟩ + "man" → ⟨vp⟩ (man:n); ⟨vp⟩ + "sits" → ⟨⟩ (sits:vp)</Paragraph>
    <Paragraph position="5"> ('a:det' means that the word 'a' is a determiner) Instead of deriving these transitions from the phrase structure rules, consider directly supplying axioms of the form: ⟨s⟩ + "W" → ⟨n,vp⟩ where W:det; ⟨n,vp⟩ + "W" → ⟨vp⟩ where W:n; ⟨vp⟩ + "W" → ⟨⟩ where W:vp. If constituent names are replaced by category specifications, generalisations become possible. The three axioms: ⟨s⟩ + "W" → ⟨n,s\np⟩ where W:np/n; ⟨n,s\np⟩ + "W" → ⟨s\np⟩ where W:n; ⟨s\np⟩ + "W" → ⟨⟩ where W:s\np ('np/n' is an np requiring a noun on its right, and 's\np' is a sentence requiring an np on its left) are instantiations of the axioms: ⟨X⟩ • R + "W" → ⟨Z,X\Y⟩ • R where W:Y/Z; ⟨X⟩ • R + "W" → R where W:X ('X' is the head and 'R' the tail of the list encoding the state; '•' denotes concatenation, so '⟨n,s\np⟩' is equivalent to '⟨n⟩ • ⟨s\np⟩'). The 'encoded states' will roughly correspond to 'principal' categories in Axiomatic Grammar, and the axioms above to the axioms of Prediction and Composition.</Paragraph>
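The two generalised axioms can be sketched directly over encoded states. This is hypothetical code (the category encoding is invented): a functor category Y/Z is written ("/", Y, Z) and X\Y is written ("\\", X, Y):

```python
# Sketch of the two generalised axioms over list-encoded states.
def prediction(state, word_cat):
    # ⟨X⟩ • R + "W" → ⟨Z, X\Y⟩ • R   where W : Y/Z
    slash, y, z = word_cat           # the word must have a functor type Y/Z
    assert slash == "/"
    x, rest = state[0], state[1:]
    return [z, ("\\", x, y)] + rest

def composition(state, word_cat):
    # ⟨X⟩ • R + "W" → R              where W : X
    assert state[0] == word_cat
    return state[1:]

# 'a man sits', with np/n for the determiner and s\np for the verb:
state = ["s"]
state = prediction(state, ("/", "np", "n"))    # 'a' : np/n   → ⟨n, s\np⟩
state = composition(state, "n")                # 'man' : n    → ⟨s\np⟩
state = composition(state, ("\\", "s", "np"))  # 'sits': s\np → ⟨⟩
print(state)  # []
```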
    <Paragraph position="6"> The rule for coordination in Axiomatic Grammar is stated in terms of principal category transition. For example, the acceptability of sentence (2) is dependent upon a proof that the two strings "Ben likes" and "Fred admires" both take us from the initial category (corresponding to a parsing state expecting a sentence) to a second category (corresponding to a state expecting a noun-phrase). The rule will be stated formally after a general description of the formalism.</Paragraph>
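The coordination idea can be illustrated with a toy transition table (hypothetical code; the category labels and lexicon are invented). Sentence (2) is acceptable because both conjuncts perform the same transition, so neither needs a category of its own:

```python
# Sketch: coordination licensed by identical category transitions.
def step_cat(state, cat):
    if state == ["s"] and cat == "np":      # subject absorbed
        return ["np->s"]                     # now expect a verb-phrase
    if state == ["np->s"] and cat == "tv":  # transitive verb absorbed
        return ["np"]                        # now expect its object
    raise ValueError("stuck")

def transition(words, lexicon):
    """Category transition performed by a word string."""
    state = ["s"]
    for w in words:
        state = step_cat(state, lexicon[w])
    return state

LEX = {"Ben": "np", "Fred": "np", "likes": "tv", "admires": "tv"}
a = transition("Ben likes".split(), LEX)
b = transition("Fred admires".split(), LEX)
print(a == b == ["np"])  # True: both reach 'expecting a noun-phrase'
```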
  </Section>
  <Section position="4" start_page="0" end_page="0" type="metho">
    <SectionTitle>
2 The Basics
</SectionTitle>
    <Paragraph position="0"> Axiomatic Grammar is mainly lexically based, with lexical entries containing both subcategorisation and order information. An association of a word with a 'lexical' category is given by an expression of the form: word: LEX-CAT. Each lexical category is a feature valued structure. The features of interest are 'cat', which gives the base type of the category ('s', 'np', or 'n'), and 'left' and 'right' which contain lists of 'arguments'. Each argument is itself a lexical category. Categories are complete if the argument lists are empty. As an example, consider the lexical entries for the determiner 'the' and the transitive verb 'likes': the: [cat = np, right = ⟨[cat = n]⟩]; likes: [cat = s, left = ⟨[cat = np]⟩, right = ⟨[cat = np]⟩]</Paragraph>
    <Paragraph position="2"> We can read the category for 'likes' as follows: given a complete noun-phrase on the left and a complete noun-phrase on the right, we can form a complete sentence. It is worth comparing this category with the category generally assigned to 'likes' by a Categorial Grammar: (S\NP)/NP. The categories differ in two respects. Firstly, the Categorial Grammar category not only provides information as to what is on the left, and what is on the right, but also determines the order in which each argument is to be absorbed (in the above, the argument on the right must be absorbed first, followed by the argument on the left). Secondly, whereas the Categorial Grammar category would be regarded as having the syntactic type 'np→(np→s)', the Axiomatic Grammar category is regarded as having the base type 's'. This difference has a bearing on the treatment of modifiers (discussed later).</Paragraph>
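The feature-structure categories above can be sketched as plain dictionaries (hypothetical encoding; the helper names are invented):

```python
# Sketch: lexical categories as feature structures with 'cat',
# 'left' and 'right' features.
def lex(cat, left=(), right=()):
    return {"cat": cat, "left": list(left), "right": list(right)}

NP = lex("np")
# 'likes': a complete np on the left and one on the right yield a
# complete category of base type 's'.
LIKES = lex("s", left=[NP], right=[NP])
# 'the': given a complete noun on the right, a complete noun-phrase.
THE = lex("np", right=[lex("n")])

def complete(c):
    # a category is complete if both argument lists are empty
    return len(c["left"]) == 0 and len(c["right"]) == 0

print(LIKES["cat"], complete(LIKES), complete(NP))  # s False True
```

Note that, unlike (S\NP)/NP, nothing here fixes the order in which the two arguments of 'likes' are absorbed.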
    <Paragraph position="3"> When a string of words is absorbed it causes a transition between principal categories. A principal category is again a feature structure, the feature of interest being the 'right' feature, i.e. the list of arguments² required on the right. A parse of a sentence consists of a proof that, starting with a principal category which requires a sentence, we can end with one which requires nothing.</Paragraph>
    <Paragraph position="5"> Henceforth, the convention is adopted that left or right argument lists which are not specified are empty. This allows us to rewrite the statement above rather more compactly as: [r = ⟨[c = s]⟩] String []. A proof of a parse is performed using rules and axioms. An axiom declares that a string of words performs a transition between two principal categories. Axioms are either simple statements, or restricted statements of the form: C0 String C1 where ....</Paragraph>
    <Paragraph position="6"> Three axioms will be discussed here. The first, Identity, merely declares that an empty string performs the identity transition, i.e. C "" C.</Paragraph>
    <Paragraph position="8"> The other axioms, Prediction and Composition, work on strings consisting of a single word. They have the format: C0 "W" C1 where W:LEX-CAT. The full definitions, given in Figure 1, should become clearer as we work through an example.</Paragraph>
    <Paragraph position="9"> A deduction rule in Axiomatic Grammar declares that a string of words performs a transition between two principal categories provided that certain substrings perform certain transitions, i.e. rules have the form: (subscripted strings are substrings of 'String') The consequent of a rule (the statement under the line) can be proved by proving all the antecedents (the statements above the line).</Paragraph>
    <Paragraph position="10"> In order to prove that 'Ben sits' is a sentence, we need to use all the axioms, and two rules, Sequencing and Optional Reduction. The relevant proof tree is given in Figure 2.</Paragraph>
    <Paragraph position="11"> The Prediction Axiom is restricted in English to the case where a category requires a sentence on the right, and the word encountered has a lexical category of base type noun-phrase. Thus starting with the principal category [r = ⟨[c = s]⟩], we can absorb the proper-name 'Ben', which has the lexical category [c = np], to form a principal category, 'c0', which requires first an optional noun-phrase modifier (e.g. a non-restrictive relative clause), and then a sentence which requires a noun-phrase (a verb-phrase), i.e.</Paragraph>
    <Paragraph position="12"> c0 = [r = ⟨[c = (np)], [c = s, l = ⟨[c = np]⟩]⟩] (the use of parentheses around the base type of the noun-phrase modifier denotes optionality). Writing this as a statement in the logic, we have a proof that: [r = ⟨[c = s]⟩] "Ben" c0. The Sequencing Rule is used to combine the effects of the absorption of two strings. The rule declares that if one string defines a transition from Category0 to Category1, and another defines a transition from Category1 to Category2, then the combined string defines a transition from Category0 to Category2 (here '•' denotes concatenation of word strings, e.g. "Ben" • "sits" is equivalent to "Ben sits"). For this example, we can instantiate the Sequencing Rule with the antecedents [r = ⟨[c = s]⟩] "Ben" c0 and c0 "sits" [], giving the consequent [r = ⟨[c = s]⟩] "Ben sits" []. (Footnote 7: At this stage no restrictions have been imposed upon the ordering of the rules, and more than one proof tree is possible. However, it is relatively trivial to prove the existence of a normal proof strategy which supplies a single proof tree for a given sentence and a possible semantics (Milward, 1990).)</Paragraph>
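The Sequencing Rule is just transitive composition of proved transitions. A minimal sketch (hypothetical code; a 'proof' is represented as a triple of start category, string, and end category):

```python
# Sketch: the Sequencing Rule chains two proved transitions whose
# middle categories match.
def sequencing(proof1, proof2):
    c0, s1, c1 = proof1
    c1b, s2, c2 = proof2
    assert c1 == c1b, "middle categories must match"
    return (c0, s1 + " " + s2, c2)   # "Ben" • "sits" = "Ben sits"

# Abstract category names from the running example:
p1 = ("expect-s", "Ben", "c0")       # [r = ⟨[c = s]⟩] "Ben" c0
p2 = ("c0", "sits", "done")          # c0 "sits" []
print(sequencing(p1, p2))            # ('expect-s', 'Ben sits', 'done')
```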
    <Paragraph position="14"> We can thus obtain a proof of the whole sentence by proving the antecedents to the rule. The first has already been proved, so we are left to prove: c0 "sits" []. The head of the argument list of c0 is an optional noun-phrase modifier. Optional categories at the head of the argument list of a principal category can be deleted by the use of the Optional Reduction Rule, which is as follows:</Paragraph>
    <Paragraph position="16"> in which 'c1' is c0 without the optional modifier, i.e.</Paragraph>
    <Paragraph position="18"> The proof now consists of proving the antecedent of the Optional Reduction Rule, i.e.</Paragraph>
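Optional Reduction itself is a one-step list operation. A minimal sketch (hypothetical encoding: an optional category of base type b is written ('opt', b)):

```python
# Sketch: Optional Reduction deletes an optional category from the
# head of the argument list of a principal category.
def optional_reduction(args):
    head, rest = args[0], args[1:]
    assert isinstance(head, tuple) and head[0] == "opt"
    return rest

c0 = [("opt", "np"), "vp"]   # optional np modifier, then a verb-phrase
c1 = optional_reduction(c0)
print(c1)  # ['vp']
```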
    <Paragraph position="20"> This can be proved using first the Composition Axiom, then the Sequencing Rule followed by Optional Reduction, and finally the Identity Axiom.</Paragraph>
    <Paragraph position="21"> The Composition Axiom absorbs a word which has the same base category as the head of the argument list of a principal category. Since the word 'sits' has the following category:</Paragraph>
    <Paragraph position="23"> the Composition Axiom can be used to absorb 'sits' and get us to the category 'c2'. (Footnote 10: The name 'Composition' is due to the similarity with the rule of generalised Forward Composition in a Categorial Grammar.)</Paragraph>
    <Paragraph position="24"> Using the Sequencing Rule once more, we can prove the whole given a proof of: [r = ⟨[c = (s)]⟩] "" []</Paragraph>
    <Paragraph position="25"> which can be proved by first invoking the Optional Reduction Rule. The optional sentential modifier is then deleted, leaving us with a proof of [] "" [], which is true by the Identity Axiom.</Paragraph>
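The whole 'Ben sits' proof can be chained end to end. This is a hypothetical sketch under invented encodings (principal categories as lists of required arguments, optional modifiers as ('opt', base)); it is not the paper's formal system, only the sequence of steps it describes:

```python
# Sketch: Prediction, Optional Reduction, Composition, Optional
# Reduction and Identity, applied in order to prove 'Ben sits'.
LEX = {"Ben":  {"c": "np", "l": [], "r": []},
       "sits": {"c": "s", "l": ["np"], "r": []}}

def prediction(state, entry):           # restricted: state expects 's'
    assert state[0] == "s" and entry["c"] == "np"
    # predict an optional np modifier, then a sentence missing an np:
    return [("opt", "np"), "s\\np"] + state[1:]

def composition(state, entry):
    assert state[0] == "s\\np" and entry["c"] == "s"
    # absorb the verb; its subcat list plus an optional s modifier:
    return entry["r"] + [("opt", "s")] + state[1:]

def opt_reduce(state):
    assert state[0][0] == "opt"
    return state[1:]

state = ["s"]
state = prediction(state, LEX["Ben"])   # c0 = ⟨(np), s\np⟩
state = opt_reduce(state)               # c1 = ⟨s\np⟩
state = composition(state, LEX["sits"]) # c2 = ⟨(s)⟩
state = opt_reduce(state)               # ⟨⟩ : Identity axiom applies
print(state)  # []
```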
  </Section>
  <Section position="5" start_page="0" end_page="0" type="metho">
    <SectionTitle>
4 Rules and Lexical Items
</SectionTitle>
    <Paragraph position="0"> So far we have introduced three axioms which are used by the grammar, and two rules. Before considering further rules it is worth discussing the grammar as it stands.</Paragraph>
    <Paragraph position="1"> The effect of the axioms, Prediction and Composition, is to absorb a word and to predict an optional modifier for the base type. For example, in parsing 'the girl' a noun-phrase modifier is predicted after parsing 'the' and a noun-modifier is predicted after parsing 'girl'. Thus, given a treatment of non-restrictive relatives, we could parse something like: (4) The girl outside, who has been waiting a long time, looks frozen. Moreover, after parsing a noun modifier, another noun modifier is predicted (the base type of a noun modifier is, after all, a noun). Thus we could also parse (5) The girl outside in the red dress with the large man ....</Paragraph>
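The iterating-modifier behaviour can be sketched in a few lines (hypothetical code; the encoding and function names are invented, and multi-word modifiers are collapsed to single steps):

```python
# Sketch: absorbing material of base type b always predicts another
# optional b-modifier, so modifiers can iterate as in example (5).
def absorb(state, base):
    # satisfy the head expectation, then predict an optional b-modifier
    assert state and state[0] in (base, ("opt", base))
    return [("opt", base)] + state[1:]

state = ["np"]                       # expecting a noun-phrase
state = absorb(state, "np")          # 'the girl' absorbed (simplified)
state = absorb(state, "np")          # 'outside'
state = absorb(state, "np")          # 'in the red dress'
print(state)  # [('opt', 'np')] : yet another modifier may follow
```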
    <Paragraph position="2"> Although the treatment of noun and noun-phrase modification looks reasonably traditional, the treatment of verbal modification is less so. Since the base type of a verb is a sentence, a modifier for the verb has the same type as a sentential modifier. For example, in: (6) John hit the ball with a racket the action of the Composition Axiom is to add an optional sentential modifier onto the end of the subcategorisation list of the verb 'hit', and then to add this list onto the list of expected arguments, i.e. after absorbing "hit" the principal category becomes: [r = ⟨[c = np], [c = (s)]⟩]. A successful proof of the sentence is achieved by giving 'with' a lexical entry: Sentences such as 'John decided to sack Mary in secret' are correctly treated as being structurally ambiguous, since 'in secret' may modify the 's' introduced by 'decided' or the 's' introduced by 'sack'.</Paragraph>
    <Paragraph position="3"> The grammar which has been described so far imposes a strict notion of word order. This seems particularly inappropriate for relative clauses, which can be extraposed from a position following the subject noun-phrase to after the verb-phrase. Consider the sentence: (7) Children arrived who only spoke English. The present grammar treats this case by allowing heavy noun and noun-phrase modifiers to swap places with categories having a base type 's'. Thus the principal category created after absorbing "Children" can be transformed into:</Paragraph>
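The extraposition swap can be sketched as a permutation of the argument list (hypothetical code; the encoding is invented, with an optional np modifier written ('opt', 'np') and the verb-phrase expectation as a category of base type 's'):

```python
# Sketch: a heavy (np) modifier may swap with a following category of
# base type 's', licensing the extraposed relative in example (7).
def extrapose(args):
    head, nxt, rest = args[0], args[1], args[2:]
    assert head == ("opt", "np") and nxt[0] == "s"
    return [nxt, head] + rest        # the relative clause now follows

after_children = [("opt", "np"), ("s", "l=np")]
print(extrapose(after_children))     # [('s', 'l=np'), ('opt', 'np')]
```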
    <Paragraph position="5"> The possibility is being considered of replacing lists of arguments by sets of arguments associated with linear precedence constraints (along the lines of work done on bounded discontinuous constituency (Reape, 1989)).</Paragraph>
    <Paragraph position="6"> Finally, let us consider the particular restriction which was made to the Prediction Rule for English. The effect of the restriction is that the only acceptable lexical entries with left arguments are either of the form</Paragraph>
    <Paragraph position="8"> i.e. verbs (which require a noun-phrase subject on their left), or modifiers of the base types.</Paragraph>
  </Section>
</Paper>