<?xml version="1.0" standalone="yes"?>
<Paper uid="J79-1012">
  <Title>American Journal of Computational Linguistics</Title>
  <Section position="4" start_page="11" end_page="11" type="metho">
    <SectionTitle>
3 and 5).
</SectionTitle>
    <Paragraph position="0"> Given a basic list of correlators and their linguistic expression, the classification of lexical items can be carried out by listing for each item the correlators by means of which it can be potentially (3) One area where the distinction has to be maintained is the -semantic analysis of natural languages, because correlator expressions such as prepositions rarely have a one-to-one correspondence to relational concepts; instead, they merely mark the preqence of one of a set of relational concepts.</Paragraph>
    <Paragraph position="1"> linked to other items. To give an example, there is a relational, concept (No. U ) paraphrased as active ingestion of solids involving solid food stuff'; on the linguistic level, this correlator is expressed by the juXtaposition of two lexical items in a certain order. If we now have a lexigra EAT, that designates active ingestion of solids' and another lexigram RAISIN, that designates a subcategory of 'solid food stuff '., we can form a compound or correlation with the two lexigrms which can be represented as the structure: (a) EAT RAI S IN L,~~~-~,~~~~~-- 11 -- ------------A Because the order of succession of the two items in the linear linguistic expression is obligatory and cannot be reversed, it is not enough for the grammar merely to supply the information that the lexigrams EAT and RAISIN can be linked by correlator No. 11, but the grammar must aLss specify that, in this correlation, EAT has to be the left-hand piece (LH) and RAISIN the right-hand + piece (W).</Paragraph>
    <Paragraph position="2"> This information is part of the permanent lexicoq of the system.</Paragraph>
    <Paragraph position="3"> If It is recorded there by means of correlation indices&amp;quot; (IC1s), which consist of the number of the potential correlator plus the indication whether the items to which this I, is assigned can function as LH-piece or as w-piece. In many cases there are, of course, several lexical items that can function in the same place (e.g. NUT, MGM candy, RAISIN, etc., as RH-piece of correlator No. 11). Therefore , 1,'s are assigned to lexigram classes, not to single lexical items. Thus, while the lexigram EAT, in the present Yerkish lexicon, is the only member of the class VE ('active ingestion of solids'), the lexigrarr RAISIN is one of several in the class EU ('solid food stuff1). On the one hand, this indexing of classes, rather than individuals, is obviously more economical with regard t~ storage space, on the otKer, it makes it possible to add new lexigrams to the existing classes without in any way disturbing the operative part of the lexicon or the parsing algorithms.</Paragraph>
    <Paragraph position="4"> To expand the above example, let us add another correlation. The  relational concept that can be paraphrased as autonomous animate actorf perf ordng 'stationary activity' is correlator No. 01, The paraphrase ! autonomous animate actor' comprises three lexigram classes in the present lexicon, nsmely kP ('familiar primates', i.e., the regular technicians TIN, SHELLEY, BEVERLY, and the experimental animal LANA) ; AV ('visiting primates', i.e., unnamed human or non-human visitors); and A0 ('non-primates', i.e., at present ROACH only). The paraphrase  stationary activity1, i.e, acti.vities that do not involve a change of place on the part of the actor, nor a change of hands on the part of a patient, comprises three lexigram classes, nanely VE (with the single member EAT), VD (with the single member DRINK), and LrA (with several members such as: GROOII, TICKLE ,. HOLD, etc. ) .</Paragraph>
    <Paragraph position="5"> Given the lexigran sequence LANA EAT, the interpretive grammar finds that LANA, belonging to class AF, bears the. I,: 01, LH, while EAT, belonging to class VE, bears the I,: 01, RH; and on the strength of this the grammar will allow the correlation:</Paragraph>
  </Section>
  <Section position="5" start_page="11" end_page="11" type="metho">
    <SectionTitle>
(b) LANA EAT
</SectionTitle>
    <Paragraph position="0"> I I For the parser, allowing a correlation&amp;quot; means to record it as a possible part-interpretation of the input striqg.. As such it is recorded as a &amp;quot;product&amp;quot; in order to be tested for its potential correlability with other parts of the input.</Paragraph>
    <Paragraph position="1"> The information, on the basis of which such first-level correlations (connecting single lexigrams as in - a and b) are formed, is contained in  the permanent lexic~n and the form in which it is stored can be visualised as a kind of matrix (see Fig. 2 and Table 5).</Paragraph>
    <Paragraph position="2"> The correlational data required to form examples (a) and (bj is represented by markers (x) indicating the 1,'s (at head of column) assigned to the lexigxam classes (at beginning of row), In the present implementation of the Yerkish grammar h = 34, m = 35.</Paragraph>
    <Paragraph position="3"> Though this information contained in the lexicon covers all correlations involving two single lexigrams, it does not provide 'or cogrelations linking phrases or phrases and lexigrams. The syscem a correlational grammar uses to cfiscouer higher-level structures in a given input string is again rather different from that of traditional glammar. It order to be able to handle phrases, i.e., already correlated lexigrams,  product must be assigned a string of I c s that represents its particular potential for functioning as component (LH-piece or CH-piece) of a new and larger correlation that links it with other lexical items or phrases. The procedure that assigns these IC'S to a given product is what might be called the dynamic part of the grammar, because it is poverned by .an set of operational rul-es tha.t cannot be stated in- a siaple formalized way. (4) The reason for this is that the. correlruliility of a given phrase often depends on more than one constituent of the phrase.</Paragraph>
    <Paragraph position="4"> An example may help to make this clear. Wit11 regard to. rorrelator 1'0, 30 that links the two single lexigrarris involved, the phrases and TIIIS CALI, L------30------1 are identical. Ps potential YH-pieces of a correlation, forlllad by correlator KO, 11, however, they are not equivalent.</Paragraph>
  </Section>
  <Section position="6" start_page="11" end_page="11" type="metho">
    <SectionTitle>
EAT THIS RAISIN
</SectionTitle>
    <Paragraph position="0"> would not Ee acceptable because BALL does not belong to the lexipr,am class El' clef ined as !.solid food stuff.' and, therefore. is not a potential RH-piecc (4) The operational rules are, of course, always conFinations of indivdually simple rules taken from a relatively small set. This is, indeed the way in which the parsing program compi1.e~ them; although. this - can be called 'f,ormalisatj-on&amp;quot; it. certainly is not a simple one.</Paragraph>
    <Paragraph position="1"> of correlaticln No.. 11. In fact, if the string EAT THIS BALL occurs as input to the interpretive grammar, it must he rejected as incorrect.</Paragraph>
    <Paragraph position="2"> To implement this dl~criminat ion, the phrase TIIIS MISIP! must be assigned the I,: 11, EH, while the phrase THIS DAI.IA must not. And in order to do this, the assignation must be based not only on the particular correlator that links THIS with another item, but also on the condition that t'his other item is one that belongs tu the lexigram class solid food stuff'. In other vords, there has to be an operational assignation rule that makes sure that a first-level correlation produced by correlator KO. 30 is assigned tbe Ic:ll, PiT, so that it can I-e linked in a second-level correlation with the precedirp lexigram FAT, v~hich bears the I 11, 1.1;; but this a'ssignation must. be made contingent upon the c condition that the product 30 (P.: 30) does, in fac.t, contaic a lexical' itern of class El' as RH-piece; because only if P:30 contains a member of tl~r: t class solid food stuff' can it function as patient of the activity designated by the LH lexigram FAT.</Paragraph>
    <Paragraph position="3"> The operational assignation rules, therefore, are of diverse Lypes, I ? sore assigning I, s unconditionally, others assigning I, s only on condition that the same Ic is present, as the case ray Fe, arrong those charaeterising the 1A1' or the 911 of the produet thgt is tieipg classified. (.See  In the implementation of the parser., tile assignation of I, s to products is primarily dete'mined by the particular correlator that is involved in the product to be classified, Tk.e assignation rales a particular correlator calls into action, tliouph functionally of three types only, are specific. to that correlator and cannot be written in a generalised form; This indeed, is the fundamental reason why a correlational grammar cannot be represented by means of a small number- of relat-ively &amp;quot;powerful&amp;quot; rules.</Paragraph>
    <Paragraph position="4"> In a correlational grammar there must be as many sets of specific assignation rules as there are correlators ; and since the. number of correlators in such an interpretive gram.ar is very much laqger than the number of &amp;quot;syntactic functions&amp;quot; in conventional descriptive grammars, correlational grammars connot be wrj tten in concise and powerful formulas. As a justification for this lack of Ebrmal elegance, however, it can be said that correlational grammar has no need of the otherwise indispensable and somewhat unwieldy adjunct of &amp;quot;selection rules&amp;quot;, because it incorporates that very information in its one basic interpretive algorithm, Peculiarilies -- of the Yerkisk Gr.mar The grammar of Yerkish had to be kept as simple as possible for the reasons mentioned above, First, given the small size of the computer, it was mandatory to avoid complex constructians and rules of grammar that might require special space- and time-consuming subroutines in the parsing pfocedure. Second, the rules of the language to which the lingpistic behavior of our subject would have ta confo;rm, were to be few and consistent from the learner's point of view; nevertheless they were co be such that Yerkish structures could be translated easily and without major structural transformations into comprehensible English. As a result of these objectives, Yerkish grammar may seem somewhat unusual.</Paragraph>
    <Paragraph position="5"> In the following paragraphs several deviations from English grammar will be discvssed. null Yerkish, at present, bas only one voice. the active, and three moods, i. e. indicative, interrogative, ,and imperative. Both :he interrogative and the imperative are formed) not ,by specific v,erb-forms or word-order (as in ~lilny natural languages), but by sentential prefixes or markers, i.e. specific lexigrams that are placed at the beginning of the message. The prefix of the interrogative is the conventional question mark 'I?&amp;quot;', that  for imperatives (in Yerkish requests&amp;quot;) is an arrow, translated into English as PLEASE. The keys representing these two lexigrams nust be pressed at the beginning of a string and they can appear only in the first feedback projector on the left. The lexigram string following them has the same form as an .indicative statement. In fact, if the string is to he interpreted as an indicative-statement, i.e.. if it is - not preceded by !I 711 either . or PLEASE, the first feedl;acl;projector on the left remains blank. Thus:</Paragraph>
  </Section>
  <Section position="7" start_page="11" end_page="11" type="metho">
    <SectionTitle>
TIM MOVE INTO ROOM = indicative statement;
? TIM MOVE INTO ROOM = interrogative;
</SectionTitle>
    <Paragraph position="0"> PLEASE TIE1 MQVE INTO ROOM = request.</Paragraph>
    <Paragraph position="1"> A third lexigram that fu.nc-tions as a sentential marker is NO, which corresponds to an over-all negation of the statement. NO TIM MOVE INTO ROOM, therefore, corresponds L-o the English &amp;quot;it is not the case that Tim moves into the room&amp;quot;. However, since Lana has quite spontaneously come to use the lexigram NO to designate what, given the situational context, could be interpreted onLy as &amp;quot;don't&amp;quot;, we may adapt the grammar to her usage and turn this NO into a marker for negative imperatives (see Appendix) .</Paragraph>
    <Paragraph position="2"> Yerkish, as yet, has no tenses but the present. A simple past and future, however, are foreseen, and they will be designated by particles preceding the activity lexigram.</Paragraph>
    <Paragraph position="3"> There are no auxiliaries in Yerkish and the function of the English copula &amp;quot;to be&amp;quot; is taken over by corr,elator No. 10, which is expressed by juxtaposition of a lexigram belonging to one of the classes of items that</Paragraph>
  </Section>
  <Section position="8" start_page="11" end_page="11" type="metho">
    <SectionTitle>
</SectionTitle>
    <Paragraph position="0"> are modifiable&amp;quot; and a lexigram designating a specific state.</Paragraph>
    <Paragraph position="1"> e.g. BALL RED.</Paragraph>
    <Paragraph position="2">  The absence of an explicit copula is noticeable also in conjunction with the &amp;quot;naming functionf1, an important instrument in ~ana's acquisition of new lexical items. It is used for the ostensive definition of new lexigrams which areplaced at the beginning of a string of the form:  XX is the name of this1'.</Paragraph>
    <Paragraph position="3"> (where LY is the new Icx~gram) Two English coQstructions that have a specificatory restrictive function., i. e. for instance, &amp;quot;the red ball'! and &amp;quot;the ball is red&amp;quot; are one and the same in Yerkish, and the specificatgry relation is designated by a lexigram which can be translated into English as the comp~und 'WHICH-IS (correlator )lo. 31) .</Paragraph>
    <Paragraph position="4"> e. BALL ITHICH-IS RED. &amp;quot;the red ball&amp;quot; or &amp;quot;the bal which -is red1'.</Paragraph>
    <Paragraph position="5"> For the sake of greater univocality, Yer,kish spatial prepositions were strictly divided into locatignal and directional ones. The first -e.g. IN, ON,OUTSIDE, etc. -- could designate ~nly the locatign of items or activities, the second -- e.g. INTO, OUT-OF, FROM, etc. -- could designate only the direction of activities involving a change of place. Ho~lever, since Lana has spontaneously used a locational prepositYon to indicate the target of a directional activity, and since this is allowable in many if not all natural lartguages, we are considering the removal of this restriction with regard to spatial prepositions.</Paragraph>
    <Paragraph position="6"> So far, there are no conjunctions in Yerkish, but a somewhat restricted form of &amp;quot;and&amp;quot; and &amp;quot;or&amp;quot; has been worked out and will shortly be introduced into the system, There are also some minor pecularities that an English-speaker must keep in mind.. A Yerkish structure involving correlacor No. IT, for instance, implies that the speaker is the receiver of the item that changes hands, unless another teceiver is explicitly indicated by a prepositional complement. Thus, if Lana sends the message :</Paragraph>
  </Section>
  <Section position="9" start_page="11" end_page="11" type="metho">
    <SectionTitle>
PLEASE TIM GIVE MILK.
</SectionTitle>
    <Paragraph position="0"> it must be understood that the milk is to be given to Lana. The receiver, however, can be made explicit by adding a prepositional phrase, which yields the correlational structure:  English resultative verbs, e.g. to open&amp;quot;, &amp;quot;to clean&amp;quot;, ete., are, broken up in Yerkish. The causative element is rendered by EIAKE, the effect by a lexigram designating the resulting state. Also, in Yerkish the agenr must be specified. Thus, l lease (Timj open the window1'</Paragraph>
  </Section>
  <Section position="10" start_page="11" end_page="11" type="metho">
    <SectionTitle>
becomes:
PLEASE TIM MAKE WINDOW OPEN.
</SectionTitle>
    <Paragraph position="0"> Translated literally into English, this should be lease, Tim, make window - be open&amp;quot;, since the correlator that links WlNDOW and OPEN is No. 10, i.e. the gredicative copula equivalent to &amp;quot;to betf. But in this case, as.</Paragraph>
    <Paragraph position="1"> indeed in most occurrences of correlator No. 10, the Yerkish string is easily understood without the explicit copula.</Paragraph>
    <Paragraph position="2"> The Yerkish MA~E is not limit-ed to causation of a change of state of specific items, but can be used also to indicate a number of perceptual conditions or events in the environment. Specific sensory events or changes, such a NOVIE, NUSIC, SLIDE, HEAT, COLD, LIGHT, and DARKNESS, are considered the- result* of activities subsumed by .RAKE. In Lana1 s wholly technoPogica1 environment. this is not at all unreasonable. It obviously makes sense for her to request:</Paragraph>
    <Paragraph position="4"> It is, indeed, the machine that causes the projector to start running.</Paragraph>
    <Paragraph position="5"> S2milarly, however, in Yerkish one cauld correctly say:</Paragraph>
  </Section>
  <Section position="11" start_page="11" end_page="11" type="metho">
    <SectionTitle>
PLEASE TIM MAKE LESS HEAT.
</SectionTitle>
    <Paragraph position="0"> It Though in Lana's expe~ience Tim can indeed cause less heat' by turning down the thermostat, this would hardly he a reasonable request in the &amp;quot;realIt world outside the Yerkes Lab. MAKE also opens the way to embedded constructions. since it can govern a clause. Though ,Lana has not yet come to this, the grammar foresees strings such as : ? TIM NAKE LANA SEE VISITOR or even a double embedding: ? TIM SEE LANA MAKE ROACH MOVE.</Paragraph>
    <Paragraph position="1"> and similar structures are, of course, possible with WANT. Lest these correlational diagrams create the impression that Yerkish structures are invariably right-branching, here are two examples that contain lef t-branchings : ? NO PIECE OF APPLE HERE.</Paragraph>
    <Paragraph position="2"> which,in English, would read': &amp;quot;IS there no piece of apple here?&amp;quot; And</Paragraph>
  </Section>
  <Section position="12" start_page="11" end_page="11" type="metho">
    <SectionTitle>
STICK WHICH-IS BLUE DIRTY
</SectionTitle>
    <Paragraph position="0"/>
    <Section position="1" start_page="11" end_page="11" type="sub_section">
      <SectionTitle>
</SectionTitle>
      <Paragraph position="0"> which in English, would be: &amp;quot;?he stick which is blue is dirty&amp;quot;, and, as such, roughly equivalent to &amp;quot;The blue stick is dirty.</Paragraph>
      <Paragraph position="1">  Irr one particular the grammar of Yerkish deviates from correlational practice.</Paragraph>
      <Paragraph position="2"> Prepositions and conjunct ions being &amp;quot;explicitH 'correlator expressions in that they designate relational c.oncepts only, are (in the' correlational approach) n~t items-to be linked, but itenla that do the linking. Thus, in the original Ffultis tore parser they functioned as cor~elators and not as ordinary lexical items. In the structure diagrams, therefore, they appeared in a node, not at a terminal. Given the very smLl computer used for the Yerkish parser; as well as the fact that the lexicon was t'o remain extremely limited (in comparison to English), it was more economical to correlate prepositional phrases in two steps rather than introduce the special routine that had been developed for prepositions and other &amp;quot;explicit&amp;quot; correlator expressions in the English pagser. Thus a string. such as &amp;quot;move into room&amp;quot; is not constructed as it would be in a proper correlational systan, i.e.: but rather in twa steps:</Paragraph>
    </Section>
  </Section>
class="xml-element"></Paper>