<?xml version="1.0" standalone="yes"?>
<Paper uid="C94-2127">
<Title>Bottom-Up Earley Deduction</Title>
<Section position="2" start_page="0" end_page="796" type="intro">
<SectionTitle>
1 Introduction
</SectionTitle>
<Paragraph position="0"> Recently, there has been a lot of interest in Earley deduction [10], with applications to parsing and generation [13, 6, 7, 3].</Paragraph>
<Paragraph position="1"> Earley deduction is a very attractive framework for natural language processing because it has the following properties and applications.</Paragraph>
<Paragraph position="2"> This work was supported by the Deutsche Forschungsgemeinschaft through the project N3 &quot;Bidirektionale Linguistische Deduktion (BiLD)&quot; in the Sonderforschungsbereich 314 Künstliche Intelligenz -- Wissensbasierte Systeme and by the Commission of the European Communities through the project LRE-61-061 &quot;Reusable Grammatical Resources.&quot; I would like to thank Günter Neumann, Christer Samuelsson and Mats Wirén for comments on this paper.</Paragraph>
<Paragraph position="3"> Like Earley's algorithm, all of these approaches operate top-down (backward chaining). Interest has naturally focussed on top-down methods because they are, at least to a certain degree, goal-directed.</Paragraph>
<Paragraph position="4"> In this paper, we present a bottom-up variant of Earley deduction, which we find advantageous for the following reasons:
Incrementality: Portions of an input string can be analysed as soon as they are produced (or generated as soon as the what-to-say component has decided to verbalize them), even for grammars where one cannot assume that the left corner has been predicted before it is scanned.</Paragraph>
<Paragraph position="5"> Data-Driven Processing: Top-down algorithms are not well suited for processing grammatical theories like Categorial Grammar or HPSG, which make use of general schemata instead of construction-specific rules and would therefore only allow very general predictions. For these grammars, data-driven bottom-up processing is more appropriate. The same is true for large-coverage rule-based grammars, which lead to the creation of very many predictions.</Paragraph>
<Paragraph position="6"> Subsumption Checking: Since the bottom-up algorithm does not have a prediction step, there is no need for the costly operation of subsumption checking.
Search Strategy: In the case where lexical entries have been associated with preference information, this information can be exploited to guide the heuristic search.</Paragraph>
</Section>
</Paper>
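The paper's algorithm operates on definite clauses with feature terms; as a rough illustration of the control regime described above (scanning and completion with no prediction step), the following minimal sketch runs bottom-up Earley-style deduction over a toy context-free grammar. The grammar, lexicon, and all identifiers are hypothetical illustrations, not taken from the paper.

```python
# A minimal sketch of bottom-up Earley-style deduction over a toy CFG.
# Illustrative only: the paper's actual algorithm works on definite
# clauses with feature terms, not atomic category symbols.

from collections import namedtuple

# A dotted rule lhs -> rhs with the dot at position `dot`, spanning
# the input interval [start, end).
Item = namedtuple("Item", "lhs rhs dot start end")

GRAMMAR = [                      # hypothetical toy grammar
    ("S", ("NP", "VP")),
    ("NP", ("Det", "N")),
    ("VP", ("V", "NP")),
]
LEXICON = {"the": "Det", "dog": "N", "cat": "N", "sees": "V"}

def parse(words):
    chart, agenda = set(), []
    # Scanning: every input word directly yields a passive item.
    # There is no prediction step, hence nothing to subsumption-check.
    for i, w in enumerate(words):
        agenda.append(Item(LEXICON[w], (w,), 1, i, i + 1))
    while agenda:
        item = agenda.pop()
        if item in chart:
            continue
        chart.add(item)
        if item.dot == len(item.rhs):          # passive (complete) item
            # Bottom-up rule invocation: start any rule whose first
            # right-hand-side symbol matches the completed category.
            for lhs, rhs in GRAMMAR:
                if rhs[0] == item.lhs:
                    agenda.append(Item(lhs, rhs, 1, item.start, item.end))
            # Completion: extend active items that end where this starts.
            for a in list(chart):
                if (a.dot < len(a.rhs) and a.rhs[a.dot] == item.lhs
                        and a.end == item.start):
                    agenda.append(Item(a.lhs, a.rhs, a.dot + 1,
                                       a.start, item.end))
        else:                                  # active item: seek passives
            for p in list(chart):
                if (p.dot == len(p.rhs) and item.rhs[item.dot] == p.lhs
                        and p.start == item.end):
                    agenda.append(Item(item.lhs, item.rhs, item.dot + 1,
                                       item.start, p.end))
    return any(i.lhs == "S" and i.dot == len(i.rhs)
               and i.start == 0 and i.end == len(words) for i in chart)

print(parse("the dog sees the cat".split()))  # True
```

Because every item is built from material already in the chart, the loop never creates predicted items, which is what makes the costly subsumption check of top-down Earley deduction unnecessary; it also shows why processing is incremental, since each scanned word can be combined with the chart as soon as it arrives.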