<?xml version="1.0" standalone="yes"?> <Paper uid="P91-1031"> <Title>STRATEGIES FOR ADDING CONTROL INFORMATION TO DECLARATIVE GRAMMARS</Title> <Section position="6" start_page="242" end_page="244" type="concl"> <SectionTitle> 4 Conclusion and future research </SectionTitle> <Paragraph position="0"> Strategies are proposed for combining declarative linguistic knowledge bases with an additional layer of control information. The unification grammar itself remains declarative. The grammar also retains completeness. It is the processing model that uses the control information for ordering and pruning the search graph. However, if the control information is neglected, or if all solutions are demanded and sought by backtracking, the same processing model can be used to obtain exactly those results derived without control information.</Paragraph> <Paragraph position="1"> Yet, if control is used to prune the search tree in such a way that the number of solutions is reduced, many observations about human linguistic performance, some of which are mentioned in Section 1, can be simulated.</Paragraph> <Paragraph position="2"> 6 The selected simple model is sufficient for illustrating the basic idea. Certainly more sophisticated connectionist models will have to be employed for cognitively plausible simulation. One reason for the simple design of the net is the lack of a learning method. At this time, no learning model has been worked out yet for the proposed type of spreading-activation nets. For the time being it is assumed that the weights are set by hand using linguistic knowledge, corpora, and association dictionaries.</Paragraph> <Paragraph position="3"> Criteria for selection among alternatives can be encoded. 
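The hand-weighted spreading-activation net mentioned in the footnote can be illustrated with a minimal sketch. All names, the decay parameter, and the update rule below are assumptions made for illustration, not the authors' design:

```python
# Illustrative sketch of a spreading-activation net with hand-set link
# weights: activation is primed at some nodes, spread along weighted links,
# and the resulting activation level serves as a preference score.

class SpreadingActivationNet:
    def __init__(self):
        self.links = {}        # node -> list of (neighbour, weight) pairs
        self.activation = {}   # node -> current activation level

    def add_link(self, a, b, weight):
        # Weights are set by hand, e.g. from corpora or association dictionaries.
        self.links.setdefault(a, []).append((b, weight))
        self.links.setdefault(b, []).append((a, weight))

    def prime(self, node, amount=1.0):
        self.activation[node] = self.activation.get(node, 0.0) + amount

    def spread(self, decay=0.5, steps=2):
        # Each step propagates a decayed share of activation along every link.
        for _ in range(steps):
            incoming = {}
            for node, act in self.activation.items():
                for neighbour, w in self.links.get(node, []):
                    incoming[neighbour] = incoming.get(neighbour, 0.0) + act * w * decay
            for node, extra in incoming.items():
                self.activation[node] = self.activation.get(node, 0.0) + extra

    def preference(self, node):
        return self.activation.get(node, 0.0)

net = SpreadingActivationNet()
net.add_link("bank", "money", 0.8)
net.add_link("bank", "river", 0.3)
net.prime("money")
net.spread()
# "bank" now carries activation received via the strong "money" link,
# so a money-related reading of "bank" would be preferred over "river".
```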
The smaller set of actively used constructions and lexemes is simply explained by the fact that for all the items in the knowledge base that are not actively used, there are alternatives that have a higher preference.</Paragraph> <Paragraph position="4"> The controlled linguistic deduction approach offers a new view of the competence-performance distinction, which plays an important rôle in theoretical linguistics.</Paragraph> <Paragraph position="5"> Uncontrolled deduction cannot serve as a plausible performance model. On the other hand, the performance model extends beyond the processing model; it also includes the structuring of the knowledge base and the control information that influence processing.</Paragraph> <Paragraph position="6"> Since this paper reports on the first results from a new line of research, many questions remain open and demand further research.</Paragraph> <Paragraph position="7"> Other types of control need to be investigated in relation to the strategies proposed in this paper. Uszkoreit \[1990\], e.g., argues that functional uncertainty needs to be controlled in order to reduce the search space and at the same time simulate syntactic preferences in human processing.</Paragraph> <Paragraph position="8"> Unification grammar formalisms may be viewed as constraint languages in the spirit of constraint logic programming (CLP). Efficiency can be gained through appropriate strategies for delaying the evaluation of different constraint types. Such schemes for delayed evaluation of constraints have been implemented for LFG. They play an even greater role in the processing of Constraint Logic Grammars (CLG) \[Balari et al. 1990\].</Paragraph> <Paragraph position="9"> The delaying scheme is a more sophisticated method for the ordering of conjuncts. 
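The fail-fast idea behind ordering conjuncts by failure potential can be sketched as follows. This is a toy illustration under assumed names, not the paper's implementation:

```python
# Illustrative sketch: conjuncts that fail most often on training inputs are
# evaluated first, so unsuccessful derivations are pruned as early as possible.

def failure_potential(conjunct, training_inputs):
    # Fraction of training inputs on which this conjunct fails.
    failures = sum(1 for x in training_inputs if not conjunct(x))
    return failures / len(training_inputs)

def order_conjuncts(conjuncts, training_inputs):
    # Highest failure potential first: the fail-fast ordering.
    return sorted(conjuncts,
                  key=lambda c: failure_potential(c, training_inputs),
                  reverse=True)

def evaluate(conjuncts, x):
    # A conjunction succeeds only if every conjunct succeeds; an early
    # failure skips the remaining (possibly expensive) conjuncts.
    return all(c(x) for c in conjuncts)

# Toy example: an agreement test that fails often is tried before a
# category test that rarely fails on this corpus.
is_noun   = lambda x: x.get("cat") == "n"
agrees_3s = lambda x: x.get("agr") == "3s"
corpus = [{"cat": "n", "agr": "3s"}, {"cat": "n", "agr": "pl"},
          {"cat": "n", "agr": "1s"}]
ordered = order_conjuncts([is_noun, agrees_3s], corpus)
# agrees_3s fails on 2/3 inputs versus 0/3 for is_noun, so it is tried first.
```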
More research is needed in this area before the techniques of CLP/CLG can be integrated into a general model of controlled (linguistic) deduction.</Paragraph> <Paragraph position="10"> So far, the weights of the links for preference assignment can only be assigned on the basis of association dictionaries as they have been compiled by psychologists. For nonlexical links the grammar writer has to rely on a trial-and-error method.</Paragraph> <Paragraph position="11"> A training method for inducing the best conjunct order on the basis of failure potential was described in Section 2.1. The training problem, i.e., the problem of automatically inducing the best control information, is much harder for disjunctions. Parallel to the method for conjunctions, during the training phase the success potential of a disjunct needs to be determined, i.e., the average number of contributions to successful derivations for a given number of inputs. The problem is much harder for assigning weights to links in the spreading-activation net employed for dynamic preference assignment.</Paragraph> <Paragraph position="12"> Hirst \[1988\] uses the structure of a semantic net for dynamic lexical disambiguation. Corresponding to his marker-passing method, a strategy should be developed that activates all supertypes of an activated type in decreasing quantity. Wherever activations meet, a mutual reinforcement of the paths, that is, of the hypotheses, occurs.</Paragraph> <Paragraph position="13"> Another topic for future research is the relationship between control information and feature logic. What happens if, for instance, a disjunction is transformed into a conjunction using De Morgan's law? The immediate reply is that control structures are only valid for a certain formulation of the grammar and not for its logically equivalent syntactic variants. 
However, assume that a statically or dynamically calculated fraction involving the success potential sp and the failure potential fp is attached to every subterm. For disjuncts, sp is divided by fp; for conjuncts, fp is divided by sp.</Paragraph> <Paragraph position="14"> De Morgan's law yields an intuitive result if we assume that negation of a term causes the attached fraction to be inverted. More research needs to be carried out before one can even start to argue for or against a preservation of control information under logical equivalences.</Paragraph> <Paragraph position="15"> Head-driven or functor-driven deduction has proven very useful. In this approach the order of processing conjuncts has been fixed in order to avoid the logically perfect but much less efficient orderings in which the complement conjuncts in the phrase structure (e.g., in the value of the daughter feature) are processed before the head conjunct. This strategy could not be induced or learned using the simple ordering criteria that are merely based on failure and success. In order to induce the strategy from experience, the relative computational effort needs to be measured and compared for the logically equivalent orderings. Ongoing work is dedicated to the task of formulating well-known processing algorithms such as the Earley algorithm for parsing or the functor-driven approach for generation purely in terms of preferences among conjuncts and disjuncts.</Paragraph> </Section> </Paper>
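The sp/fp weighting and its behaviour under negation can be sketched under the simplest reading: disjuncts are ranked by sp/fp, conjuncts by fp/sp, and negating a term inverts its fraction. The class and function names below are illustrative, not from the paper:

```python
# A minimal sketch of the sp/fp weighting: every subterm carries a success
# potential sp and a failure potential fp, and the ranking fraction depends
# on whether the subterm is used as a disjunct or a conjunct.

from dataclasses import dataclass

@dataclass
class Subterm:
    name: str
    sp: float   # success potential (contributions to successful derivations)
    fp: float   # failure potential (observed failures per input)

def disjunct_score(t: Subterm) -> float:
    return t.sp / t.fp          # try high-success disjuncts first

def conjunct_score(t: Subterm) -> float:
    return t.fp / t.sp          # try high-failure conjuncts first (fail fast)

def negate(t: Subterm) -> Subterm:
    # Under De Morgan's law a negated conjunct turns into a disjunct (and
    # vice versa); inverting the fraction keeps its score consistent.
    return Subterm("not " + t.name, sp=t.fp, fp=t.sp)

a = Subterm("a", sp=4.0, fp=2.0)
# As a disjunct, a scores sp/fp = 2.0; its negation, used as a conjunct,
# scores fp/sp of the inverted term, which is again 2.0.
```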