<?xml version="1.0" standalone="yes"?> <Paper uid="E87-1013"> <Title>TEXT UNDERSTANDING WITH MULTIPLE KNOWLEDGE SOURCES: AN EXPERIMENT IN DISTRIBUTED PARSING</Title> <Section position="3" start_page="0" end_page="0" type="intro"> <SectionTitle> 1. INTRODUCTION </SectionTitle> <Paragraph position="0"> The processes underlying text understanding involve a variety of complex, multifaceted activities which have not yet been completely understood from the cognitive point of view, and which still lack adequate computational models. Recent research trends in cognitive science and artificial intelligence, however, have put forward some ideas concerning human cognition and automatic problem solving that offer promising tools for the design of text understanding systems.</Paragraph> <Paragraph position="1"> One of the key ideas that has emerged from the cognitive study of natural language comprehension is that text understanding in humans constitutes an interactive process, in which bottom-up, data-driven activities combine with top-down, expectation-driven ones to cooperatively determine the most likely interpretation of the input (Lesgold and Perfetti, 1981). Roughly speaking, humans begin with a set of expectations about what information is likely to be found in the text. These expectations are based both on linguistic knowledge (about words, phrases, sentences, and larger pieces of discourse) and on non-linguistic world knowledge. As information from the text becomes available, the reader strengthens those hypotheses that are consistent with the input and weakens those that are inconsistent. 
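The evidence-weighing scheme just described can be sketched in a few lines of code. This is purely an illustration of the idea, not a mechanism from the paper: the hypothesis names, the supports table, and the multiplicative update factors are all invented for the example.

```python
# Illustrative sketch of interactive hypothesis refinement: hypotheses
# consistent with an incoming observation are strengthened, the rest
# are weakened. All names and weights here are hypothetical.

def update_hypotheses(scores, observation, supports):
    """Return new strengths after seeing one observation.

    scores   -- dict mapping hypothesis name to current strength
    supports -- dict mapping observation to the set of hypotheses
                consistent with it
    """
    consistent = supports.get(observation, set())
    return {
        h: s * 1.5 if h in consistent else s * 0.5
        for h, s in scores.items()
    }

# A reader entertains two story interpretations; the word "menu"
# supports the restaurant reading.
scores = {"restaurant": 1.0, "shopping": 1.0}
supports = {"menu": {"restaurant"}}
scores = update_hypotheses(scores, "menu", supports)
print(max(scores, key=scores.get))  # -> restaurant
```

Repeated over successive observations, the same update rule yields the progressive correction and refinement of expectations described in the text.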
The stronger hypotheses, in turn, make even more specific predictions about the information represented in the text, so that the initial expectations are successively corrected and refined until they eventually yield an adequate approximation of the meaning of the text.</Paragraph> <Paragraph position="2"> In one of the first and most detailed descriptions of interactive processes in text understanding, Rumelhart (1977) proposed a model comprising several knowledge sources, each one operating independently and in parallel with the others. These knowledge sources are processors operating at different levels of linguistic representation. The outputs of each of these knowledge sources are hypotheses, or best guesses, based on the data available at that level. The hypotheses are transferred to a central device, called the message center, where they can be observed by all the other knowledge sources, thus becoming available as evidence for or against hypotheses at other levels. In a more dynamic view of interaction, Levy (1981) suggests that the message center could modify the activity of each individual processor. That is, when a particular hypothesis has strong outside support, the analyzers of a particular knowledge source may change their own processing either to seek confirming evidence for it or to accept that view and therefore stop analyzing information that would otherwise have been tested.</Paragraph> <Paragraph position="3"> The idea of decomposing a difficult problem into a large number of functionally distinct subproblems, each one tackled by a specialized problem solver, has also been pursued with great interest in recent years in the field of artificial intelligence, where the area of distributed problem solving has developed into a highly active research topic. 
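Rumelhart's message-center arrangement described above can be sketched as a minimal shared store: knowledge sources post hypotheses, and every other source can observe them. The class and method names below are our own illustration, not taken from the paper.

```python
# Minimal blackboard-style sketch of a "message center": a shared
# hypothesis store visible to all knowledge sources. Names are
# hypothetical illustrations only.

class MessageCenter:
    def __init__(self):
        self.hypotheses = []          # (source, level, content) triples

    def post(self, source, level, content):
        """A knowledge source publishes a hypothesis at its own level."""
        self.hypotheses.append((source, level, content))

    def visible_to(self, source):
        """Every source observes the hypotheses posted by the others."""
        return [h for h in self.hypotheses if h[0] != source]

center = MessageCenter()
center.post("lexical", "word", "bank = financial institution")
center.post("syntactic", "phrase", "NP -> Det N")
print(len(center.visible_to("semantic")))  # -> 2
```

Levy's more dynamic variant would additionally let the store feed control information back to each processor, rather than serving only as a passive repository.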
Several computational paradigms have been proposed, such as blackboard systems (for a review, see: Nii, 1986a; 1986b), the contract net (Davis and Smith, 1983), the scientific community metaphor (Kornfeld and Hewitt, 1981), and FA/C systems (Lesser and Corkill, 1981), which have proved appropriate for several tasks and application domains. As far as the field of text understanding is concerned, we mention here the work of Cullingford (1981) on DSAM, the distributed script applier, in which an arbitrary number of distinct, potentially distributed, processors are used to read and summarize newspaper stories.</Paragraph> <Paragraph position="4"> In this paper we present a novel approach to the problem of text understanding based on a distributed processing paradigm, in which different knowledge sources come into play and cooperate in the course of comprehension. In section two we discuss the rationale for advocating such an approach and the advantages (and disadvantages) of following it. Section three illustrates the general architecture of a prototype distributed parser, and describes the mechanisms of coordination and communication among the various knowledge sources. In section four we present an example of the parser's operation by tracing the analysis of a sample sentence. Finally, section five deals with the current state of the implementation and highlights the novelty and originality of the approach.</Paragraph> </Section> </Paper>