<?xml version="1.0" standalone="yes"?>
<Paper uid="C69-1401">
  <Title>A ^ A V A V U'/ V A</Title>
  <Section position="2" start_page="16" end_page="22" type="abstr">
    <SectionTitle>
$MAIN TRIN GEN TRAN .
</SectionTitle>
    <Paragraph position="0"> specifies a run in which a skeletal tree is read, a full tree is generated including lexical items, and the transformations are applied.</Paragraph>
    <Paragraph position="1"> The specification $MAIN TRIN 5 LEX TRAN .</Paragraph>
    <Paragraph position="2"> might be used in testing a lexicon and transformations against a fixed base tree. The tree will be read and five cases of lexical insertion plus transformation will be carried out. $MAIN TRIN 4 LEX .</Paragraph>
    <Paragraph position="3"> would do four examples of lexical insertion for each input. After the process is completed for one input, another input is read and the cycle repeats. A run terminates when there are no more inputs.</Paragraph>
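    <Paragraph> The control-card conventions above can be sketched in modern terms. The following Python sketch is purely illustrative: the names ($MAIN, TRIN, GEN, LEX, TRAN) come from the paper, but the parsing rule (a digit token repeats the remaining phases that many times) and the run loop are assumptions about how such a card might be interpreted, not the original implementation.

```python
# Illustrative sketch of the $MAIN control-card loop (not the original code).
# Assumption: a digit token means "repeat the remaining phases that many times".

def parse_main_card(card):
    """Split e.g. '$MAIN TRIN 5 LEX TRAN .' into (once, count, repeated)."""
    tokens = card.rstrip(" .").split()
    assert tokens[0] == "$MAIN"
    once, rest = [], tokens[1:]
    for i, tok in enumerate(rest):
        if tok.isdigit():                       # e.g. the 5 in 'TRIN 5 LEX TRAN'
            return once, int(tok), rest[i + 1:]
        once.append(tok)
    return once, 1, []

def run(card, inputs, phases):
    """Apply the planned phases to each input; stop when inputs run out."""
    once, count, repeated = parse_main_card(card)
    for tree in inputs:                         # a run ends with the last input
        for name in once:                       # e.g. TRIN: read the tree in
            tree = phases[name](tree)
        for _ in range(count):                  # e.g. five cases of LEX + TRAN
            t = tree
            for name in repeated:
                t = phases[name](t)
```

On this reading, $MAIN TRIN 4 LEX . reads each input tree and performs four independent lexical insertions on it before reading the next input, matching the cycle described in the text.</Paragraph>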
    <Paragraph position="4"> Computer experiments in transformational grammar The system has been in use since February 1968, although not fully complete at that time. The first experiments were carried out by the designers of the system, using grammars based on material in the linguistic literature. This was done to provide test material for the programs, but, more importantly, to help ensure that the notational conventions would be adequate. A fragment of grammar from Chomsky's Aspects was used to test ideas and programs for lexical insertion. The IBM Core Grammar of Rosenbaum and Lochak \[6\] was used in developing and testing the transformational component. Both of these projects led to valuable teaching materials, as we shall discuss later.</Paragraph>
    <Paragraph position="5"> Aspects and Core provided us with separate examples of lexicon and transformations. There was at first no single source which contained both. A relatively formal grammar was needed, even though a final translation into the notation of the system would still of course be necessary. Elizabeth Closs Traugott's Deep and surface structure in Alfredian Prose \[7\] appeared at about that time and was the first grammar which was formalized in the notation after the fact. Considerable effort had gone into designing the notation; we were anxious to see if it would now seem natural for a grammar which was new to us. Alfred was thus the first real test for the system. As it turned out there were a few difficulties which arose because the notation had not been explained clearly enough, but the results of the run were also revealing about the grammar.</Paragraph>
    <Paragraph position="6"> One general effect which was noticed in these first few cases has continued to be striking: the need for complete precision in the statement of a grammar forces the linguist to consider problems which are important, but of which he would otherwise be unaware. Also during the spring of 1969 Barbara Hall Partee made two sets of runs with preliminary versions of a grammar of English being developed by the U.C.L.A. Air Force English Syntax Project. This grammar presented another kind of challenge to the system, because it was not based directly on the Aspects model, but incorporated some recent ideas of Fillmore. As before, these runs assisted in cleaning up the programs but were also of interest to the linguist. The major advantages from the linguistic point of view seem to have been, first, that the notational system of the computer model provided a framework in which grammars could be stated, and second, that the computer runs made it easier to detect certain errors in the grammars. In the main, these errors were not particularly subtle, and could have been caught by working over the grammar carefully.</Paragraph>
    <Paragraph position="7"> The program was also used by L. Klevansky, who wrote a grammar of Swahili for the dual purposes of testing the programs and learning the language.</Paragraph>
    <Paragraph position="8"> These early experiments are described in a report \[5\] which gives the grammars as well as a detailed discussion of the results of the computer runs.</Paragraph>
    <Paragraph position="9"> The form of the French grammar used in the extended example above is based on the form of the Core grammar; it was therefore easily translated into the notation of the system. Shortly after the grammar was received, a large part of it was running on the computer. Minor errors in the grammar have been found and corrected; it will now be available to students as another example of a transformational grammar.</Paragraph>
    <Paragraph position="10"> The next experiment planned using the system is a project proposed by Susumu Nagara and Donald Smith at the University of Michigan, who plan to use the system to aid in writing a grammar of Japanese.</Paragraph>
    <Paragraph position="11"> Modifications to grammars based on computer runs In almost all cases the grammars used with the system have been sufficiently complete for at least informal distribution. The programs were really designed to make it easier to write grammars, not to test completed grammars. Nonetheless, on the basis of computer runs, certain types of changes have been found to be needed in the grammars. The comments which follow are based on all the grammars; they do not all apply to any one of them.</Paragraph>
    <Paragraph position="12"> Trivial corrections The most common errors are typographical errors in transcription of the grammar. These are not errors in the grammar itself; having to deal with them is one of the prices of using the computer. In general, these can be caught with relative ease.</Paragraph>
    <Paragraph position="13"> More than one grammar has had simple errors with respect to repetition of a transformation. Number agreement transformations are written so that they produce CAT S S S ... where CAT S is wanted. (The grammar as written calls for an infinite sequence of S's to be added. The program, more cautious, adds ten S's, then complains and goes on to the next transformation.) Transformations are often stated so that they fail to apply in all cases where it is intended they apply. For example, the structural description of PASSIVE as SD # (PRE) 3NP AUX 5V (PREP) 7NP % PREP 10P % # , WHERE 3 EQ 7.</Paragraph>
    <Paragraph position="14"> fails to take into account some additional parts of the VP. The correction to SD # (PRE) 3NP AUX (HAVE EN) (BE ING) 5V (PREP) 7NP PREP 10P % #, WHERE 3 EQ 7 will allow PASSIVE to work in the additional cases. Similarly, a NOMINAL-AGREEMENT transformation which marks subjects as +NOMIN must apply not only to pronouns which precede verbs but also to those which precede copulas. Thus the structural description Interrelatedness of transformations A slightly more interesting set of problems found in the computer runs are those which arise through the interrelatedness of two or more transformations. For example, in one of the grammars there were both WH-questions and TAG-questions. It was found that the TAG transformation was (optionally) applicable to any question, so that for example TOM HAS PREFER EN WHAT GIRL HAS TOM NOT was produced. This error was easily repaired once it was detected. On the other hand, a similar problem which was not easily fixed arose with another transformation which was marked optional. Testing showed that for certain base trees the result was bad if the transformation did not apply; however, when the transformation was temporarily changed to obligatory, the grammar then failed to produce some intended sentences. The proper correction to the grammar would have required specification of the contexts in which the transformation was obligatory.</Paragraph>
    <Paragraph position="15"> Incompleteness of grammars Formal grammars so far have each attempted to describe some subset of a language. In computer testing many problems outside the scope of the grammar are evident. If, for example, a grammar does not treat prepositions seriously, then once this becomes apparent, the computer runs need to be designed to avoid prepositions. Deep structure problems Two of the grammars which have been studied suffer problems with the WH-morpheme when it occurs in non-sentences and not as a relative marker. Thus, for example, sentences such as</Paragraph>
  </Section>
</Paper>