<?xml version="1.0" standalone="yes"?>
<Paper uid="W04-0311">
  <Title>Dynamic Dependency Parsing</Title>
  <Section position="2" start_page="0" end_page="0" type="intro">
    <SectionTitle>
1 Introduction
</SectionTitle>
    <Paragraph position="0"> In an incremental mode of operation, a parser works on a prefix of a growing utterance, trying to compute prefix-analyses while coping with an increasing computational effort. This situation gives rise to at least the following questions: (1) Which provisions can be made to transiently accept prefix-analyses, given a model of language that describes complete sentences? (2) What should prefix-analyses look like? (3) How can the complexity of incremental parsing be bounded? We will introduce underspecified dependency edges, called nonspec dependency edges, into the framework of weighted constraint dependency grammar (WCDG) (Schröder, 2002). These are used to encode an expected function of a word that has already been seen but not yet integrated into the rest of the parse tree during incremental parsing. In WCDG, parse trees are annotated with constraint violations that pinpoint deviations from grammatical requirements or preferences. Weighted constraints are thus a means to describe graded grammaticality by expressing the inherent 'costs' of accepting an imperfect parse tree. Parsing therefore follows principles of economy, repairing constraint violations only as long as any further reduction of costs justifies its effort.</Paragraph>
    <Paragraph position="1"> The following sections review the basic ideas of applying constraint optimization to natural language parsing and extend them to dynamic dependency parsing.</Paragraph>
    <Paragraph position="2"> 2 From static to dynamic constraint satisfaction We begin by describing the standard constraint satisfaction problem (CSP), then extend it in two different directions commonly found in the literature: (a) to constraint optimization problems (COP) and (b) to dynamic constraint satisfaction problems (DynCSP), and aggregate both into dynamic constraint optimization problems (DynCOP), which is motivated by our current application to incremental parsing.1</Paragraph>
    <Section position="1" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
2.1 Constraint Satisfaction
</SectionTitle>
      <Paragraph position="0"> Constraint satisfaction is defined as the problem of finding consistent values for a fixed set of variables given all constraints between those values. Formally, a constraint satisfaction problem (CSP) can be viewed as a triple (X, D, C), where X = {x1, ..., xn} is a finite set of variables with respective domains</Paragraph>
      <Paragraph position="2"> a relation defined on a subset of the variables, called 1Note that we do not use the common abbreviation DCSP for dynamic constraint satisfaction problems but DynCSP instead, in order to distinguish it from distributed constraint satisfaction problems, which are also called DCSPs. Likewise, we use DynCOP instead of DCOP, the latter of which commonly denotes distributed constraint optimization problems.</Paragraph>
      <Paragraph position="3"> the scope, restricting their simultaneous assignment. Constraints defined on one variable are called unary; constraints on two variables are binary. We call unary and binary constraints local constraints, as their scope is very restricted.</Paragraph>
      <Paragraph position="4"> Constraints of wider scope are classified as nonlocal. In particular, those whose scope covers all variables are called context constraints.</Paragraph>
      <Paragraph position="5"> The 'local knowledge' of a CSP is encoded in a constraint network (CN), consisting of nodes that bundle all values of a variable consistent with all unary constraints. The edges of a CN depict binary constraints between the connected variables. A CN is thus a compact representation (of a superset) of all possible instantiations. A solution of a CSP is a complete instantiation of the variables ⟨x1, ..., xn⟩ with values ⟨di1, ..., din⟩, dik ∈ Dk, found in the CN that is consistent with all constraints.</Paragraph>
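The definitions above can be sketched as a minimal, illustrative CSP solver (not from the paper): variables with finite domains, constraints given as (scope, predicate) pairs, and exhaustive search for a consistent complete instantiation. The variable names and the toy problem are our own.

```python
# Minimal CSP sketch: exhaustive search over all complete instantiations.
from itertools import product

def solve_csp(domains, constraints):
    """domains: dict variable -> list of domain values.
    constraints: list of (scope_tuple, predicate) pairs."""
    variables = list(domains)
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        # A solution is a complete instantiation consistent with all constraints.
        if all(pred(*(assignment[v] for v in scope))
               for scope, pred in constraints):
            return assignment
    return None  # over-constrained: no consistent complete instantiation

# Toy problem: one unary and two binary (i.e. local) constraints.
domains = {"x1": [1, 2, 3], "x2": [1, 2, 3], "x3": [1, 2, 3]}
constraints = [
    (("x1",), lambda a: a != 3),          # unary
    (("x1", "x2"), lambda a, b: a < b),   # binary
    (("x2", "x3"), lambda a, b: a != b),  # binary
]
print(solve_csp(domains, constraints))  # → {'x1': 1, 'x2': 2, 'x3': 1}
```

A real solver would of course prune via the constraint network rather than enumerate; the sketch only fixes the terminology.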
      <Paragraph position="6"> Principles of processing CSPs have been developed in (Montanari, 1974), (Waltz, 1975) and (Mackworth, 1977).</Paragraph>
    </Section>
    <Section position="2" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
2.2 Constraint Optimization
</SectionTitle>
      <Paragraph position="0"> In many problem cases no complete instantiation exists that satisfies all constraints: either we get stuck solving only a part of the problem, or constraints must be considered defeasible at a certain penalty. Finding a solution thus becomes a constraint optimization problem (COP). A COP is denoted as a quadruple (X, D, C, f), where (X, D, C) is a CSP and f is a cost function on (partial) variable instantiations. f might be computed by multiplying the penalties of all violated constraints. A solution of a COP is a complete instantiation for which f(⟨di1, ..., din⟩) is optimal. This term becomes zero if the penalty of at least one violated constraint is zero. Such constraints are called hard; those with a penalty greater than zero are called soft.</Paragraph>
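A small sketch of this multiplicative scoring, under the assumptions stated in the text: each constraint carries a penalty, f is the product of penalties of all violated constraints, a hard constraint (penalty 0) zeroes the score, and the best complete instantiation maximizes f. The concrete penalties are invented for illustration.

```python
# COP sketch: score = product of penalties of violated constraints.
from itertools import product

def score(assignment, constraints):
    s = 1.0
    for scope, pred, penalty in constraints:
        if not pred(*(assignment[v] for v in scope)):
            s *= penalty  # penalty 0.0 makes the whole score zero ("hard")
    return s

def solve_cop(domains, constraints):
    variables = list(domains)
    best, best_score = None, -1.0
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        s = score(assignment, constraints)
        if s > best_score:
            best, best_score = assignment, s
    return best, best_score

domains = {"x1": [1, 2], "x2": [1, 2]}
constraints = [
    (("x1", "x2"), lambda a, b: a != b, 0.0),  # hard constraint
    (("x1",), lambda a: a == 2, 0.5),          # soft: prefer x1 == 2
]
print(solve_cop(domains, constraints))  # → ({'x1': 2, 'x2': 1}, 1.0)
```

Violating only the soft constraint still yields a (suboptimal) solution with score 0.5, which is exactly the graded grammaticality the paper exploits.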
      <Paragraph position="1"> A more precise formulation of COPs (also called partial constraint satisfaction problems) can be found in (Freuder and Wallace, 1989).</Paragraph>
    </Section>
    <Section position="3" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
2.3 Dynamic Constraint Satisfaction
</SectionTitle>
      <Paragraph position="0"> The traditional CSP and COP frameworks are only applicable to static problems, where the number of variables, the values in their domains, and the constraints are all known in advance. In a dynamically changing environment these assumptions no longer hold, as new variables, new values, or new constraints become available over time. A dynamic constraint satisfaction problem (DynCSP) is construed as a series of CSPs P0, P1, ... that change periodically over time by the loss or gain of values, variables, or constraints (Pi+1 = Pi + ΔPi+1). For each problem change ΔPi+1 we try to find a solution change ΔSi+1 such that Si+1 = Si + ΔSi+1 is a solution to Pi+1. The legitimate hope is that this is more efficient than naively solving Pi+1 from scratch whenever things change.</Paragraph>
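The reuse of Si when computing Si+1 can be illustrated with a naive repair sketch (our own, not the paper's algorithm): first check whether the old solution survives the problem change ΔPi+1; if not, search for a new one, preferring the old values so that as little as possible is re-assigned.

```python
# DynCSP sketch: repair the previous solution instead of re-solving from scratch.
from itertools import product

def consistent(assignment, constraints):
    return all(pred(*(assignment[v] for v in scope))
               for scope, pred in constraints)

def repair(old_solution, domains, constraints):
    """Adapt the solution of P_i to the changed problem P_{i+1}."""
    if set(old_solution) == set(domains) and consistent(old_solution, constraints):
        return old_solution  # the change did not invalidate S_i
    variables = list(domains)
    # Try each variable's old value first, so the solution change stays small.
    ordered = [sorted(domains[v], key=lambda d: d != old_solution.get(v))
               for v in variables]
    for values in product(*ordered):
        assignment = dict(zip(variables, values))
        if consistent(assignment, constraints):
            return assignment
    return None

# P0 and its solution S0.
s0 = {"x1": 1, "x2": 2}
# P1 = P0 + ΔP1: a new variable x3 and a new constraint arrive.
domains1 = {"x1": [1, 2], "x2": [1, 2], "x3": [1, 2]}
constraints1 = [(("x1", "x2"), lambda a, b: a != b),
                (("x2", "x3"), lambda a, b: a != b)]
print(repair(s0, domains1, constraints1))  # → {'x1': 1, 'x2': 2, 'x3': 1}
```

The old assignments for x1 and x2 are kept; only the new variable had to be instantiated.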
      <Paragraph position="1"> This notation is consistent with previous ones found in (Dechter and Dechter, 1988) and (Wirén, 1993).</Paragraph>
    </Section>
    <Section position="4" start_page="0" end_page="0" type="sub_section">
      <SectionTitle>
2.4 Dynamic Constraint Optimization
</SectionTitle>
      <Paragraph position="0"> Most notions of DynCSPs in the literature are extensions of the classical CSP that use hard constraints exclusively. To model our intended application of incremental parsing, however, we would still like to use weighted constraints. Therefore we define dynamic constraint optimization problems (DynCOP) the same way DynCSPs were defined on the basis of CSPs: as a series of COPs P0, P1, ... that change over time. In addition to changing variables, values, and constraints, we are also concerned with changes of the cost function. In particular, variable instantiations evaluated formerly might now be judged differently. As this could entail serious computational problems, we try to keep changes in the cost function monotonic: re-evaluation shall only give lower penalties than before, i.e., an instantiation that has once become inconsistent does not become consistent again later.</Paragraph>
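With multiplicative scores as above, the monotonicity requirement reduces to a simple invariant on penalty revision, sketched here under our assumed formulation (lower penalty means a worse score, and penalty 0 means inconsistent):

```python
# Monotonic cost-function change: a constraint's penalty may only decrease
# between P_i and P_{i+1}, so a score that has reached 0 (inconsistent) can
# never rise again.
def revise_penalty(old, proposed):
    """Clamp a proposed re-evaluation so penalties never increase."""
    return min(old, proposed)
```

For example, tightening a soft constraint from 0.5 to 0.2 is admissible, whereas relaxing a hard constraint (penalty 0.0) is clamped and has no effect.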
    </Section>
  </Section>
</Paper>