<?xml version="1.0" standalone="yes"?> <Paper uid="J79-1012"> <Title>American Journal of Computational Linguistics</Title> <Section position="3" start_page="1" end_page="11" type="intro"> <SectionTitle> PC </SectionTitle> <Paragraph position="0"> initiator and Principal Investigator of the project, Timothy V. Gill and Josephine Brown (Dept. of Psychology, Georgia State University) are responsible for behavioral design and experimentation. Harold Warner and Charles Bell (Yerkes Regional Primate Research Center) are responsible for design and engineering of the interface with the computer and the electromechanical devices. Ernst von Glasersfeld and Pier Paolo Pisani (Dept. of Psychology and Computer Center, University of Georgia) are responsible for the design and computerization of the Yerkish language, its grammar, and the Multistore parser. Linguists will, I am sure, agree that natural languages would have different grammars and different interpretive rules if, from the very beginning of their evolution, they had had to be intelligible to a computer. And that is what Yerkish had to be. For reasons that will become clear when we discuss the research background of the project, the introduction of a computer as monitor of the communication system was one of the salient features of this research effort (Rumbaugh et al., 1973a).</Paragraph> <Paragraph position="1"> Other constraints in the development of the Yerkish language will be discussed at those points in the exposition where their explication seemed most appropriate. I have tried to concentrate specific aspects under indicative subtitles. I have no illusions that this has been wholly successful. My main goal, however, was to give the reader as complete as possible a picture, not only of what was done, but also of why it was done.
The instrumental aspect of Yerkish as the linguistic vehicle in an experimental communication study must be kept in mind at all times; much of what follows in these pages can make sense only if it is put into that perspective.</Paragraph> <Paragraph position="2"> Another point that I should like to stress is that the artificial language on which this paper reports is only one of several major efforts that made this communication study possible. Such success as we have had is the result of team work in the fullest sense of that term.</Paragraph> <Paragraph position="3"> The project would never have got off the ground if it had not been for the continuous patient collaboration of seven rather heterogeneous specialists from three different institutions (see footnote 1) and, last but not least, for the perseverance of our female chimpanzee Lana (Fig. 1). [Figure 1: The chimpanzee Lana working at her keyboard] Background Yerkish is a visual language of graphic word-symbols, designed for research in communication with non-human primates and, possibly, as a substitute vehicle for humans who, for physiological reasons, could not acquire a spoken language. Lieberman (1968), on the basis of anatomical investigations, came to the conclusion that the vocal apparatus of the great apes precludes the production and modulation of many of the phonemes that make up the repertoire of human languages. Forty years earlier, Robert Yerkes (1925), the founder of the first primatological research institute, in whose honor we have named our language, had already observed this vocal handicap of the non-human primates. In the intervening years, the failure of several long-term efforts to teach a chimpanzee English, Japanese, or Russian, empirically confirmed his observation (for a review, cf.
Ploog and Melnechuk, 1971).</Paragraph> <Paragraph position="4"> The fact, however, that the great apes are barred from speaking a language does not necessarily mean that they could not understand one, nor that they could not learn to use a linguistic communication system that functions in another sensory modality. There still are, of course, scholars who, defining &quot;language&quot; from a rather anthropocentric point of view, refuse to allow the term for any communication system that does not use the vocal-auditory mode of transmission. Among other things, this would mean that programming languages and other silent communication systems could never be called &quot;language&quot;, no matter how adequately they might be described in terms of lexicon, syntax, and semantics. Today, there seems to be a growing consensus that this restriction of &quot;language&quot; to acoustic systems is not a scientifically necessary or useful one (Ploog and Melnechuk, 1971: 640; Lyons, 1972: 64). Interest, thus, has shifted from the question whether or not other organisms can learn to speak a language, to the question whether or not they can learn to handle a communication system that is linguistic in its structure.</Paragraph> <Paragraph position="5"> Given that there seems to be no compelling evidence that any non-human species on this earth has, in fact, developed a communication system that could legitimately be called &quot;language&quot;, one might be inclined to think that attempts to teach a language to a non-human organism are necessarily doomed to fail. This conclusion, however, would be quite unwarranted. Animal trainers in circuses and in the laboratory have shown beyond all doubt that many species have a potential for the acquisition of skills which no one, who observed the species in the wild, would suspect.
The fact is that the behaviors an organism manifests in a given environment constitute under all circumstances only a subset of the behaviors which the organism could acquire in different environments (Lorenz, 1974). In the area of cognitive skills, for instance, Köhler's investigations (1925) already indicated that the great apes had been drastically underrated. Since then, and up to the demonstration of &quot;higher mental functions&quot; by Viki (Hayes and Nissen, 1956/1971), especially the chimpanzee's intellectual reputation has continuously grown (Rumbaugh, 1970). Thus it has, indeed, become more and more pertinent to ask just how far a chimpanzee (or other great ape) could be brought in the acquisition of linguistic skills which do not require vocal speech.</Paragraph> <Paragraph position="6"> The success of the Gardners (1969, 1971) with their chimpanzee Washoe is so well known that there is no need to reiterate the description of their pioneering work. Using American Sign Language (ASL) as a vehicle, they established irrefutably that an infant chimpanzee can be taught to communicate very effectively; and there would seem to be no reason why, given a conducive environment, Washoe's communicatory skill should not continue to grow as she develops towards intellectual maturity.</Paragraph> <Paragraph position="7"> It has been repeatedly objected, however, that Washoe's successful communications are as yet no proof that she has acquired &quot;language&quot; (e.g. Lenneberg, 1971; Brown, 1971).
Most of the skepticism about Washoe's linguistic accomplishments is based on the argument that the strings of signs she produced do not manifest syntactic competence. When Washoe was introduced to ASL, no rigid rules of sign-order were observed, and the relational semantics (which, for instance in English, is taken care of to a large extent by word-order) was left implicit in the communicatory event and had to be intuitively gleaned from the situational context by the observer (Gardner and Gardner, 1971). Since a language user's compliance with the syntactic rules of the language is an important criterion in the evaluation of his linguistic performance, the apparent lack of such rules in ASL made it a priori questionable whether Washoe's or, indeed, any other ASL-user's stringing together of signs could be considered syntactic and thus evidence of &quot;language&quot; on the theoretical level. In addition to that, a lack of syntactic rules is the very reason why Washoe's communications could not contain many relational indications.
For instance, since the sign system taught to Washoe had no consistent means for designating actor and patient in activity situations (comparable to, say, the subject-verb-object sequence in many natural languages), the assignation of these roles was left to the intuition or the common sense of the receiver.</Paragraph> <Paragraph position="8"> In retrospect it is easy to see that this relative lack of syntactic rigidity would supply critics with arguments that tend to diminish claims with regard to Washoe's linguistic competence.</Paragraph> <Paragraph position="9"> On the other hand, it is equally clear that the Gardners, when they started on their splendid enterprise, were concerned above all with the formidable task of establishing a viable form of communication with a chimpanzee, and they could not possibly have foreseen all the theoretical reasons why linguists and philosophers of language might doubt that the communication system they chose, and the way Washoe was going to use it, should be called &quot;linguistic&quot;. Hence I should like to emphasize that my attempt to clarify the syntactic problem is in no way intended as a criticism of the work accomplished with Washoe, but solely to throw some light on the several ways in which our project, started a few years later, was able to benefit from the Gardners' effort.</Paragraph> <Section position="1" start_page="11" end_page="11" type="sub_section"> <SectionTitle> The Communication Facility </SectionTitle> <Paragraph position="0"> The basic idea of our project at the Yerkes Primate Research Center was the introduction of a computer as a thoroughly objective monitor of all linguistic transactions. This solved several problems at once.
In the first place, it eliminated the problem of subjective or intuitive evaluation of the grammatical correctness of the experimental animal's linguistic products.</Paragraph> <Paragraph position="1"> Incorporating a reduced and suitably adapted Multistore Parser (von Glasersfeld and Pisani, 1970), the computer can &quot;objectively&quot; judge grammaticality.</Paragraph> <Paragraph position="2"> An input string either conforms to legitimate syntactic structure, or it does not. There cannot be any doubt either way. Second, the computer has no difficulty in recording every input and transaction that takes place, be it grammatical or not. Third, thanks to the computer, the communication facility can be kept in operation twenty-four hours a day, without the forbidding cost of several shifts of technicians and observers.</Paragraph> <Paragraph position="3"> In order to turn the communication facility into a learning environment that could at least to some extent operate without the presence of a human, a system of automatic responses was implemented. By activating one of a set of machine-commanded dispensers, the computer can satisfy a number of requests, provided these requests are correctly formulated by the experimental animal. So far, the automatic responses are limited to the dispensing of various foods and drinks, to opening and shutting a window, and to activating a movie and a slide projector as well as a tape player. In the future we hope to add something of a question-answering system in order to enable the computer to respond verbally to some questions and, perhaps, also to give some feedback with regard to errors made in the subject's linguistic input.</Paragraph> <Paragraph position="4"> A full description of the communication facility, as it is in operation at present, has been published elsewhere (Rumbaugh et al., 1973a). Here we shall be mainly concerned with the Yerkish language.
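The record-everything-and-dispense-on-valid-requests cycle described above can be sketched as follows. This is a hypothetical illustration, not the actual Multistore implementation: the dispenser names, the toy request pattern, and the function names are all assumptions made for the example.

```python
# Hypothetical sketch of the facility's automatic-response cycle: every
# input string is recorded, judged for grammaticality, and, if it is a
# well-formed request for something automatable, the matching device fires.
# The dispenser table and the toy request grammar are illustrative only.

AUTOMATED = {"banana": "food dispenser", "juice": "drink dispenser",
             "movie": "movie projector", "window": "window motor"}

def is_grammatical(lexigrams):
    # Toy stand-in for the parser's verdict: a request must have the
    # form PLEASE MACHINE GIVE/MAKE ... <object>.
    return (len(lexigrams) >= 4 and lexigrams[0] == "please"
            and lexigrams[1] == "machine"
            and lexigrams[2] in ("give", "make"))

def process_message(lexigrams):
    """Record the input, judge it, and satisfy automatable requests."""
    record = {"string": lexigrams, "grammatical": is_grammatical(lexigrams)}
    if record["grammatical"]:
        requested = lexigrams[-1]
        record["action"] = AUTOMATED.get(requested)  # None if not automatable
    else:
        record["action"] = None
    return record
```

Note that, as in the real facility, an ungrammatical string is still recorded; it simply triggers no device.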
A quick survey of the main components of the installation will have to suffice. Input to the system is effected by means of a keyboard of maximally 125 keys, arranged in vertical panels of 25 each. Four such panels are in use at present, corresponding to a total of 100 keys. Each key represents one lexigram, i.e., a geometrical design which constitutes a word-symbol (lexical item) of the Yerkish language. Depression of a key activates the corresponding item in the computerized lexicon which is permanently incorporated in the Multistore parser. The spatial arrangement of the lexigrams in the keyboard can be easily reshuffled (to prevent the experimental animal from acquiring a fixed motor pattern). To switch on the system, a horizontal bar, mounted well above the keyboard, has to be pulled down. The bar has to be held down continuously throughout the input of a message. Lana, the female infant chimpanzee with whom we have been working, does this by hanging on to the bar with one hand while using the other to press keys. If the system is switched on and several keys are then pressed in succession, ending with the &quot;period&quot; key (the &quot;end-of-message&quot; signal for the computer), the parser takes this string as a &quot;sentence&quot; and analyzes it in order to establish whether or not it is grammatically correct. If the input string is a grammatically correct request, the machine also determines the object of the request and, if it is within the range of automated responses, satisfies the request by activating the relevant dispenser or mechanism. Regardless of the outcome of the grammatical analysis, the machine prints out the English word corresponding to each lexigram that has been activated and records, at the end of the string, whether or not it was found to be correct.</Paragraph> <Paragraph position="5"> Directly above the keyboard in the experimental chamber, there is a row of seven
small projectors in which the geometric designs of the lexigrams appear, one by one from left to right, as their keys are being pressed on the keyboard.</Paragraph> <Paragraph position="6"> This provides Lana with feedback as to the part of the message that has already been typed in, and also with a linear representation of the string she is composing. A signal light, on the right of the projectors, lights up when the &quot;period&quot; key has been pressed and terminates the message.</Paragraph> <Paragraph position="7"> Above this first row of projectors there is a second similar one which serves to display messages that are sent in to Lana from a second keyboard in the technician's station outside Lana's chamber. Messages originating from the technician's keyboard are also recorded by the computer, but they are marked by a code symbol as &quot;operator's messages&quot; and cannot be confounded with Lana's linguistic production.</Paragraph> <Paragraph position="8"> The Yerkish Lexigrams The original constraints under which the Yerkish language was to be designed were essentially three.</Paragraph> <Paragraph position="9"> 1) Drawing on the experience of the Gardners (1969, 1971) and Premack (1971), Yerkish had to be a visual language with a lexicon of unitary word-symbols that could be represented singly on the keys of a keyboard.</Paragraph> <Paragraph position="10"> 2) Both lexical items and sentence structure were to be as univocal as possible, because this, on the one hand, would facilitate the automatic parsing of input and, on the other, it was expected to make acquisition of the language easier for our subjects.</Paragraph> <Paragraph position="11"> 3) The structure of Yerkish was to be close enough to English to allow word-by-word translation, in order to make participation in communication events, as well as their evaluation, maximally accessible to technicians and observers.</Paragraph> <Paragraph position="12"> For a few weeks at the very outset of the
enterprise, the author revelled in dreams of an ideal language in which each word was to be composed of semantically significant pleremes (Hockett, 1961). There were to be individual design elements designating the more important recurrent semantic categories, and each concept available in the Yerkish universe of discourse was to be represented by a lexigram (i.e. the visual/graphic counterpart to words in spoken languages) composed of design elements which, in their own right, would designate the major semantic categories to which the concept belonged. Thus, for instance, as the American Indian language Yuchi (Crawford, 1973) has a morpheme that recurs in any word that designates a part of the human body, every Yerkish lexigram designating a part of the primate body would have contained a specific design element. Given that the Yerkish lexicon was, in any case, to contain no more than two or three hundred lexigrams, it seemed feasible to cover at least the major semantic categories with a hundred or so design elements. The reason for doing this was, of course, that such a language would have been an invaluable instrument for testing our subject's classificatory skill and processes of concept formation. The dream was soon shattered by technical restrictions. The feedback projectors above the keyboard had to be such that each one of them could display every lexigram of the language. Within our budget, this could be achieved only if all lexigrams were designed in such a way that they could be generated by combining design elements of a common set limited to twelve.</Paragraph> <Paragraph position="13"> Under these circumstances it was obviously impossible to maintain the individual design elements semantically constant and a drastic compromise had to be made. By choosing nine graphic elements that could be readily superimposed, one over the other, and three basic colours, a little additional flexibility was gained (see Table 1).
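The combinatorics behind this twelve-primitive compromise can be made explicit. A small sketch (the colour names are illustrative assumptions; the paper only says there are three basic colours and nine superimposable elements) shows how few primitives suffice for a lexicon of a few hundred lexigrams:

```python
# Counting sketch: nine superimposable graphic elements plus three basic
# colours. Every non-empty subset of the colours yields one discriminable
# hue; with black (absence of colour) that gives the backgrounds, and every
# non-empty subset of the nine elements gives a distinct figure pattern.
from itertools import combinations

BASIC_COLOURS = ("red", "green", "blue")   # illustrative names only
DESIGN_ELEMENTS = range(9)                 # the nine graphic elements

hues = [c for n in (1, 2, 3) for c in combinations(BASIC_COLOURS, n)]
backgrounds = len(hues) + 1                # the colour mixtures plus black

element_patterns = sum(1 for n in range(1, 10)
                       for _ in combinations(DESIGN_ELEMENTS, n))

distinct_lexigrams = element_patterns * backgrounds
```

Here `backgrounds` comes to 8 and `element_patterns` to 511, so the twelve primitives generate far more distinct designs than the two or three hundred lexigrams the lexicon was ever meant to contain.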
By &quot;mixing&quot; the three basic colours we could generate seven discriminable hues. Together with black (absence of colour), this gave us eight background features, and these could be used to colour-code at least some important conceptual categories (see Table 2).</Paragraph> <Paragraph position="14"> Interpretive versus Descriptive Grammar The grammar of Yerkish is a direct derivative of the &quot;correlational&quot; grammar that was implemented some years ago in the Multistore parser for English sentences (von Glasersfeld, 1964, 1965, 1970; von Glasersfeld and Pisani, 1968, 1970). It is, therefore, strictly an interpretive</Paragraph> <Paragraph position="16"> grammar and lays no claim to &quot;generative&quot; properties, nor is it &quot;transformational&quot; in the Chomskyan sense of that term.</Paragraph> <Paragraph position="17"> In the hope that it might dispel some misunderstandings that have haunted the development of correlational grammar since its initial conception by Silvio Ceccato (Ceccato et al., 1960, 1963), I should like to dwell for a moment on a purely theoretical point. While the term &quot;grammar&quot; is predominantly used to indicate the formalized description of a language (e.g. Chomsky, 1965: 4 and 140), &quot;correlational grammar&quot; is, instead, the description of an interpretive system. The main difference between the two, though basically simple, has perhaps not been made sufficiently explicit. An ordinary grammar is expected to account for all grammatical sentences of the language in a more or less axiomatic way, i.e. by demonstrating that every possible grammatical sentence is a case under a formally stated rule or set of rules. An interpretive grammar, on the other hand, is not concerned with demonstrating the grammaticality of any sentence, but with transforming the content of a given piece of language into a canonical form composed of pre-established semantic elements or modules. It is a &quot;grammar&quot;
in the sense that it consists of rules that govern this transformation, but these rules describe the language only indirectly, since what they actually describe is a model of the language user in the receiving role. (Note that by &quot;model&quot;, in this context, we intend a processor which, given the same input, will yield the same output as the processor to be modelled, regardless of the means it employs to do so.) An interpretive system of this kind, thus, presupposes the grammaticality of its input. But since it is designed to interpret all grammatical pieces of language, it can be used to define operationally as &quot;grammatical&quot; any input that it can interpret, while input that it cannot interpret can be considered &quot;ungrammatical&quot;. When designing a correlational grammar for a natural language, it is a truly enormous problem to bring the grammar's interpretive capability anywhere near the interpretive capability of the native user of the language. In the case of an artificial language, however, this problem is altogether eliminated, because the lexicon, the rules of concatenation, and the interpretive grammar can be designed all at the same time. Since there is no native user, who has a universe of experiential content and well-established semantic connections (by means of which this content is linked to linguistic expressions), the designer is free to tailor the lexicon, as well as the syntax of his language, to the universe of discourse he envisages.</Paragraph> <Paragraph position="18"> That is, to a large extent, how Yerkish was designed, especially with regard to the rules of grammar. The result of it is that the user of Yerkish can communicate in grammatically correct lexigram strings no more than the correlational grammar of Yerkish can interpret.</Paragraph> <Paragraph position="19"> A Restricted Universe of Discourse Yerkish, as it operates at present, is in fact a compromise in more than one respect.
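The operational definition of grammaticality described above can be sketched in a few lines. The two toy interpretation rules below are assumptions for illustration only, not the actual correlational rules of Yerkish; the point is solely the shape of the definition: interpretable equals grammatical.

```python
# Minimal sketch of "operational" grammaticality: the interpretive system
# does not prove sentences grammatical; it interprets what it can, and
# whatever it can interpret thereby counts as grammatical. The rules and
# the canonical forms here are toy assumptions.

def interpret(lexigrams):
    """Return a canonical (relation, args...) form, or None if uninterpretable."""
    if len(lexigrams) == 3 and lexigrams[1] in ("eat", "drink"):
        actor, activity, patient = lexigrams
        return ("actor-activity-patient", actor, activity, patient)
    if len(lexigrams) == 2 and lexigrams[0] == "this":
        return ("demonstrative", lexigrams[1])
    return None

def is_grammatical(lexigrams):
    # Grammaticality is defined by interpretability, not by a separate proof.
    return interpret(lexigrams) is not None
```

In such a system there is no gap between syntax checking and semantic analysis: a single pass either yields a canonical representation or fails.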
An effort was made to create a potential universe of discourse that would allow a non-human primate to formulate as many communications as possible which, given the particular environment, could be used instrumentally for the attainment of goals (von Glasersfeld, 1974a). Such an attempt is necessarily based on more or less anthropocentric conjecture. There is, however, a certain amount of evidence that non-human primates organize their perceptual world in a way that does not seem incompatible with ours. In actual fact, Lana has already demonstrated that all the items which we assumed would take on the function of goals for her and would, therefore, act as incentives to communicatory activity, were indeed appropriate. Where food and drink were concerned, this could almost be taken for granted. With visual displays such as a movie and slides, with the sounds of music and voices, and with the view through an open window, our anthropocentric hope of analogy was well rewarded. Above all it is gratifying to note that there was never a need to resort to any form of negative reinforcement or punishment. Though there were, especially at the beginning, not very many things that Lana could &quot;say&quot; in Yerkish, she has never tired of saying them.</Paragraph> <Paragraph position="20"> On the practical side, since the interpretive grammar was to be implemented in a functioning parser, the universe of discourse was strictly limited by the size of the computer that could be obtained within the budget of the project. Because the project is wholly experimental and explorative, it was and is an absolute requirement to leave within the computerized system a certain amount of room for ad hoc modifications and additions that might suddenly prove necessary in view of our subject's actual performance.</Paragraph> <Paragraph position="21"> Thus it was essential that the implemented grammar should never occupy all of the available space within the computer.
This is still the case and we hope to be able to maintain this flexibility for some time to come.</Paragraph> </Section> <Section position="2" start_page="11" end_page="11" type="sub_section"> <SectionTitle> Technical Constraints </SectionTitle> <Paragraph position="0"> There are four ways in which the Yerkish universe of discourse is restricted. First, there is the number of lexical items the system can handle. The present version of the Multistore parser can deal with a maximum of 250 lexigrams. The interface that links the computer to the keyboard in the experimental chamber is designed for half that number, i.e. for 125. The keyboard, however, is divided into five panels of 25 keys each and these panels are readily exchangeable. This means that the subject's vocabulary can, in fact, be extended to 250 items, but only a subset of these, namely 125, will be operative during any one session. (Since Lana at present uses a total of 100 lexigrams, there is still much room for vocabulary expansion.) The second restriction also concerns the vocabulary of lexigrams, but it springs from the grammar of Yerkish and does not limit the number of individual lexigrams but rather the number of conceptual classes to which lexigrams have to be assigned. Because of its interpretive function, correlational grammar requires a classification of lexical items that differs considerably from the word-classification used by traditional descriptive grammars. Lexigrams, in fact, are classified according to certain functional characteristics of the concepts they designate, i.e., according to cognitive characteristics.</Paragraph> <Paragraph position="1"> The lexicon with which a correlational grammar operates, therefore, is divided, not into a few generic and largely morphologically defined classes such as nouns, verbs, adjectives, etc.
, but into a much larger number of classes defined in terms of what the designated items can do, i.e., by the role or roles they play in the cognitive representation of experiential situations. In the case of &quot;things&quot; this is, for instance, the kinds of activity which they can perform as actors and the kinds of activity in which they can play the part of patient; and in the case of &quot;activities&quot; it is, for instance, the kinds of change they bring about.</Paragraph> <Paragraph position="2"> In the implementation of the interpretive system, i.e. the parser, it is the characterization of the lexical classes that occupies considerable space, not the individual lexical items. The total number of classes, therefore, has to be decided a priori. In the present Yerkish parser, the maximum number of lexigram classes is 46. At the time of writing, 35 of these classes have been filled (see Table 3). The remaining 11 are still empty, but they can be made operative at any moment by the simple insertion of new lexigrams and the definition of the functional properties of the items they designate. The third restriction concerns the number of lexigrams that can be strung together to form one message. The amount of data the parser has to take into account during the processing of a given message, obviously, depends to some extent on the number of lexical items of which the message is composed. This dimension corresponds to sentence length in natural languages. As it was impossible to foresee with any precision just how much work space the parser might require for the analysis of all types of grammatical input strings, we preferred to be on the safe side and limited sentence length to seven lexigrams.
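The asymmetry described above, in which classes are expensive and fixed a priori while individual lexigrams are cheap to add, can be sketched as follows. The class names are illustrative assumptions; only the numeric limits (250 lexigrams, 46 classes, seven-lexigram strings) come from the text.

```python
# Sketch of the parser's capacity constraints: the class inventory is fixed
# a priori, but inserting a new lexigram into an existing class is trivial.
# Class names below are illustrative, not the actual Yerkish classes.

MAX_LEXIGRAMS = 250
MAX_CLASSES = 46
MAX_STRING_LENGTH = 7      # later found extensible to ten

classes = {"autonomous actor", "edible", "drinkable"}  # a small a-priori subset
lexicon = {}                                           # lexigram -> class name

def add_lexigram(lexigram, cls):
    """Adding vocabulary only requires assigning an already-defined class."""
    if cls not in classes:
        raise ValueError("class must be defined a priori")
    if len(lexicon) >= MAX_LEXIGRAMS:
        raise ValueError("vocabulary full")
    lexicon[lexigram] = cls

add_lexigram("LANA", "autonomous actor")
add_lexigram("BANANA", "edible")
add_lexigram("JUICE", "drinkable")
```

A lexigram whose intended class does not yet exist cannot simply be appended: its class, with all its functional properties, must first be one of the 46 slots.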
On the basis of the experience gathered since then, we can now say that the computer system could, in fact, handle input strings of up to ten lexigrams and, hence, we plan to extend the capacity of the interface hardware in the near future from seven to ten lexigrams.</Paragraph> <Paragraph position="3"> The fourth restriction involves the number of connectives (see Table 4) by means of which phrases and sentences can be put together.</Paragraph> <Paragraph position="4"> These connectives or correlators are far more numerous in a correlational grammar than are the traditional syntactic functions. This proliferation is again the result of the interpretive purpose of the system. A parser that is intended to extract the conceptual content from pieces of language must be able to identify not only the conceptual items involved, but also the relational concepts by means of which they are connected with one another. Hence, the traditional distinction between syntax and semantics is no longer operative in a correlational grammar, and the few basic &quot;grammatical relations&quot; (e.g. subject-verb, verb-object, etc.) which connect grammatically characterized items, are replaced by a great many &quot;correlators&quot; which are considered the linguistic expression of the relational concepts that link items on the conceptual level.</Paragraph> <Paragraph position="5"> While our English grammar operated with some five hundred correlators, (2) the grammar of Yerkish in its present implementation is limited to 46.
Of these, 34 have so far been specified and are functioning (see Table 4).</Paragraph> <Paragraph position="6"> The remaining 12 will be filled as additions to the grammar become desirable from an experimental point of view.</Paragraph> <Paragraph position="7"> The Grammar of Yerkish</Paragraph> <Paragraph position="8"> The interpretive purpose of correlational grammars leads to a shifting of focus from characteristics of words and sentences, qua linguistic items, to the characteristics of concepts and conceptual structures, qua cognitive items. Ideally, a correlational grammar should be a complete mapping of the semantic connections between the elements and structures of a given language, on the one hand, and the elements and structures of conceptual representation, on the other. The bulk of work required to produce such a mapping for a given natural language is so vast as to be almost forbidding. Nevertheless, work in that direction continues under various headings and significant advances have been made (e.g. Schank, 1972, 1973). It will take a good deal more time to map the semantics of an average language user's universe of discourse, but that is hardly a reason for not going on with it, especially since much of what has been done encourages the hope that the task can, indeed, be completed.</Paragraph> <Paragraph position="9"> In designing an artificial language with a drastically curtailed universe of discourse, the problem is far more manageable. The semantic connections can be made as univocal as desired and, consequently, the process of interpretation can be thoroughly systematic. In the case of Yerkish, univocality was desirable not only with a view to the size of the automatic parser but also from the point of view of the teaching strategies to be employed with a non-human subject. Hence, Yerkish was made as univocal as possible. ((2) cf. Final Scientific Report, Automatic English Sentence Analysis, (December 1969) Grant AFOSR 1319-67, Georgia Institute for Research, Athens, Georgia. Obtainable through D.O.D.)</Paragraph> <Paragraph position="10"> Since both on the linguistic and on the conceptual level we are dealing with elements and their concatenation in structures, the interpretive grammar has to specify the connections (a) between linguistic and conceptual elements and (b) between linguistic and conceptual structures. With regard to the elements that are concatenated on the linguistic level, their semantic specification can be given in the lexicon because, here, we are dealing with a fixed set of items, i.e., precisely, lexical items. With regard to structures (phrases and sentences on the linguistic level, and situational representations on the conceptual level) they have to be specified by rules of composition or concatenation, i.e. by a grammar, because language is &quot;open&quot; in that direction and allows of a practically infinite number of individually different word concatenations.</Paragraph> <Paragraph position="11"> Because Yerkish is based on English and the output of subjects in the experimental environment will be evaluated by English speakers, the lexical semantics of Yerkish lexigrams could be left implicit to a certain extent. Thus, for instance, the Yerkish parser does not have to contain an exhaustive semantic analysis of lexigrams such as BALL or RAISIN, because it can be taken for granted that the reader of the parser's output will be quite familiar with the concepts designated by &quot;ball&quot; or by &quot;raisin&quot; qua experiential items.
What the parser must contain, however, is a mapping of those specific characteristics of the concepts which determine these items' potential for entering into structural relations with other items.</Paragraph> <Paragraph position="12"> In Yerkish, then, the relational characteristics of conceptual items determine the classification of lexigrams. Thus, having decided, for instance, that there should be items that can be eaten and items that can be drunk, the lexigrams designating these items will be divided into edibles (i.e. suitable patient/objects for the activity designated by EAT) and drinkables (i.e. suitable patient/objects for the activity designated by DRINK). Together they constitute the class of ingestibles which, as it happens, is marked by the red hue of the corresponding lexigrams (see Tables 2 and 3).</Paragraph> <Paragraph position="13"> In short, Yerkish grammar does not require, nor lead to, a complete semantic analysis of lexical items. What it does require is a lexicon in which classes of lexical items are exhaustively characterized as to the specific relations into which their members can enter with members of other classes. This exhaustive characterization is supplied, not by listing all the other classes with whose members connections can be potentially formed, but by a string of indices, each of which specifies a connective relation and the place in it (cf. below) a member of the class thus characterized can occupy. Finally, we come to the relational concepts or correlators which are instrumental in the building up of complex structures, both on the conceptual and on the linguistic level. Strictly speaking, a correlator is a connective function that links conceptual items on the cognitive-representational level.
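The index-string characterization just described can be sketched concretely: each class carries a list of (relation, place) indices, and two items are linkable only when each item's class is indexed for its side of the relation. The correlator and class names below are illustrative assumptions based on the EAT/DRINK example in the text, not the actual Yerkish inventory.

```python
# Sketch of classification by relational potential: a class is characterized
# not by listing partner classes, but by indices, each naming a connective
# relation and the place (left or right) its members may occupy in it.
# Relation and class names are illustrative only.

CLASS_INDICES = {
    "edible":    [("patient-of-EAT", "right")],
    "drinkable": [("patient-of-DRINK", "right")],
    "actor":     [("patient-of-EAT", "left"), ("patient-of-DRINK", "left")],
}

def can_correlate(left_cls, correlator, right_cls):
    """Two items are linkable iff each item's class is indexed for its place."""
    return ((correlator, "left") in CLASS_INDICES.get(left_cls, [])
            and (correlator, "right") in CLASS_INDICES.get(right_cls, []))
```

On this scheme an actor can take an edible as the patient of EAT, but not a drinkable, exactly because the drinkable class carries no index for that relation.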
Languages indicate these connective functions by a variety of means: prepositions; verbs, nouns, and other types of words that incorporate a preposition; conjunctions and other particles; syntactic &quot;markers&quot; and, very frequently, merely word-order. Since these linguistic elements indicate correlators, we should call them &quot;correlator expressions&quot;.</Paragraph> <Paragraph position="14"> However, once it has been made clear that correlators function on the conceptual level and connect concepts with other concepts or combinations thereof, we can in most cases use the term &quot;correlator&quot; for both the relational concepts and the linguistic devices that express them.</Paragraph> <Paragraph position="16"> In designing an artificial language, the classification of lexical items and the definition or explication of relational concepts must go hand in hand, since the first is done in terms of the second. The relational concepts have to be explicitly listed and explicated by some form of paraphrase. In principle, that is what a &quot;case grammar&quot; does. Its cases, basically, are relational concepts (e.g. Fillmore, 1968). However, because correlational grammar attempts to cover as much relational semantics as possible, its list of correlators will be both much longer and more specific than the lists of &quot;cases&quot; which, to my knowledge, have been suggested.</Paragraph> <Paragraph position="17"> Yerkish, in its present form, operates with some thirty correlators and the Yerkish lexicon is classified with reference to these (see Tables</Paragraph> </Section> </Section> </Paper>