<?xml version="1.0" standalone="yes"?> <Paper uid="P82-1032"> <Title>A Model of Early Syntactic Development</Title> <Section position="3" start_page="145" end_page="145" type="intro"> <SectionTitle> 2. An Overview of AMBER </SectionTitle> <Paragraph position="0"> Although Reeker's PST and Selfridge's CHILD address the transition from one-word to multi-word utterances, we have seen that problems exist with both accounts. Neither of these programs focuses on the acquisition of function words, their explanations of content-word omissions leave something to be desired, and though they learn more slowly than other systems, they still learn more rapidly than children. In response to these limitations, the goals of the current research are: * Account for the omission of content words, and the eventual recovery from such omissions.</Paragraph> <Paragraph position="1"> * Account for the omission of function words, and the order in which these morphemes are mastered.</Paragraph> <Paragraph position="2"> * Account for the gradual nature of both these linguistic developments.</Paragraph> <Paragraph position="3"> In this section I provide an overview of AMBER, a model that provides one set of answers to these questions. Since more is known about children's utterances than about their ability to understand the utterances of others, AMBER models the learning of generation strategies, rather than strategies for understanding language.</Paragraph> <Paragraph position="4"> Selfridge's and Reeker's models differ from other language learning systems in their concern with the problem of recovering from errors. The current research extends this idea even further, since all of AMBER's learning strategies operate through a process of error recovery.1 The model is presented with three pieces of information: a legal sentence, an event to be described, and a main goal or topic of the sentence. 
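The training cycle built on these three inputs can be sketched roughly as follows. This is a minimal, hypothetical Python illustration, not the model's actual implementation: AMBER's rules are condition-action productions, whereas here learning is reduced to a simple word store, and all names (event_to_tree, generate, learn) are invented for the sketch.

```python
# Hypothetical sketch of an error-recovery learning cycle in the spirit of
# the text: each case pairs a legal sentence with an event (a semantic
# network of relation triples) and a main topic node. The learner generates
# an utterance from its current knowledge and revises only on disagreement.

def event_to_tree(event, topic):
    """Restate the semantic network as a tree rooted at the main topic."""
    children = [(rel, val) for (node, rel, val) in event if node == topic]
    return (topic, children)

def generate(tree, known_words):
    """Produce an utterance; only concepts already covered are spoken."""
    topic, children = tree
    return [val for (rel, val) in children if val in known_words]

def learn(cases, known_words):
    """One pass over the cases, expanding coverage on each mismatch."""
    for sentence, event, topic in cases:
        produced = generate(event_to_tree(event, topic), known_words)
        if produced != sentence:        # a disagreement was found:
            for word in sentence:       # recover from the error by covering
                known_words.add(word)   # the words that were omitted
    return known_words
```

After one corrective pass, regenerating from the same event reproduces the sample sentence, so no further learning is triggered on that case.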
An event is represented as a semantic network, using relations like agent, action, object, size, color, and type. The specification of one of the nodes as the main topic allows the system to restate the network as a tree structure, and it is from this tree that AMBER generates a sentence. If this sentence is identical to the sample sentence, no learning is required. If a disagreement between the two sentences is found, AMBER modifies its set of rules in an attempt to avoid similar errors in the future, and the system moves on to the next example.</Paragraph> <Paragraph position="5"> AMBER's performance system is stated as a set of condition-action rules or productions that operate upon the goal tree to produce utterances. Although the model starts with the potential for producing (unordered) telegraphic sentences, it can initially generate only one word at a time. To see why this occurs, we must consider the three productions that make up AMBER's initial performance system. The first rule (the start rule) is responsible for establishing subgoals; it may be paraphrased as:</Paragraph> </Section></Paper>