File Information
File: 05-lr/acl_arc_1_sum/cleansed_text/xml_by_section/metho/94/c94-2144_metho.xml
Size: 29,339 bytes
Last Modified: 2025-10-06 14:13:40
<?xml version="1.0" standalone="yes"?> <Paper uid="C94-2144"> <Title>TDL---A Type Description Language for Constraint-Based Grammars</Title> <Section position="5" start_page="0" end_page="0" type="metho"> <SectionTitle> * TYPE CHECKING </SectionTitle> <Paragraph position="0"> Type definitions allow a grammarian to declare which attributes are appropriate for a given type and which types are appropriate for a given attribute, therefore disallowing one to write inconsistent feature structures. Again, type expansion is necessary to determine the global consistency of a given description.</Paragraph> </Section> <Section position="6" start_page="0" end_page="0" type="metho"> <SectionTitle> * RECURSIVE TYPES </SectionTitle> <Paragraph position="0"> Recursive types give a grammar writer the opportunity to formulate certain functions or relations as recursive type specifications. Working in the type deduction paradigm forces a grammar writer to replace the context-free backbone through recursive types. Here, parameterized delayed type expansion is the ticket to the world of controlled linguistic deduction [13]; see Section 3.4.</Paragraph> </Section> <Section position="7" start_page="0" end_page="896" type="metho"> <SectionTitle> 3 TDL </SectionTitle> <Paragraph position="0"> TDL is a unification-based grammar development environment and run time system supporting HPSG-like grammars. Work on TDL started within the DISCO project of the DFKI [14] (this volume). The DISCO grammar currently consists of approx. 900 type specifications written in TDL and is the largest HPSG grammar for German [9]. The core engine of DISCO consists of TDL and the feature constraint solver UDiNe [3]. UDiNe itself is a powerful untyped unification machinery which allows the use of distributed disjunctions, general negation, and functional dependencies. The modules communicate through an interface, and this connection mirrors exactly the way an abstract typed unification algorithm works: two typed feature structures can only be unified if the attached types are definitely compatible. This is accomplished by the unifier in that it hands over two typed feature structures to TDL, which gives back a simplified form (plus additional information; see Fig. 1). The motivation for separating type and feature constraints and processing them in specialized modules (which again might consist of specialized components, as is the case in TDL) is twofold: (i) this strategy reduces the complexity of the whole system, thus making the architecture clear, and (ii) it leads to a higher performance of the whole system because every module is designed to cover only a specialized task.</Paragraph> <Section position="1" start_page="0" end_page="0" type="sub_section"> <SectionTitle> 3.1 TDL Language </SectionTitle> <Paragraph position="0"> TDL supports type definitions consisting of type constraints and feature constraints over the operators ∧, ∨, ¬, and ⊕ (xor). The operators are generalized in that they can connect feature descriptions, coreference tags (logical variables) as well as types.</Paragraph> <Paragraph position="1"> TDL distinguishes between avm types (open-world semantics), sort types (closed-world semantics), built-in types (made available by the underlying COMMON LISP system), and atoms. Recursive types are explicitly allowed and handled by a sophisticated lazy type expansion mechanism.</Paragraph>
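Where the text above describes generalized boolean connectives over type expressions, the following minimal Python sketch shows one way such expressions could be represented, with ⊕ desugared into ∧/∨/¬ in CNF, mirroring the a := b ⊕ c example in Section 3.2. The class names and representation are illustrative assumptions, not TDL's actual implementation.

```python
# Illustrative sketch (not TDL's code): type expressions over the
# connectives AND, OR, NOT, XOR, with XOR desugared into the other
# three connectives in conjunctive normal form.
from dataclasses import dataclass


class Expr:
    pass


@dataclass(frozen=True)
class Type(Expr):
    name: str


@dataclass(frozen=True)
class Not(Expr):
    arg: Expr


@dataclass(frozen=True)
class And(Expr):
    left: Expr
    right: Expr


@dataclass(frozen=True)
class Or(Expr):
    left: Expr
    right: Expr


def xor(a: Expr, b: Expr) -> Expr:
    """a xor b  ==  (a or b) and (not a or not b) -- the CNF decomposition."""
    return And(Or(a, b), Or(Not(a), Not(b)))


# a := b xor c becomes (b or c) and (not b or not c):
a = xor(Type("b"), Type("c"))
```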
<Paragraph position="2"> In asking for the greatest lower bound of two avm types a and b which share no common subtype, TDL always returns a ∧ b (open-world reasoning), and not ⊥. The reasons for this choice are manifold: (i) the partiality of our linguistic knowledge; (ii) the approach is in harmony with terminological (KL-ONE-like) languages, which share a similar semantics; (iii) it is important during incremental grammar/lexicon construction (which has been shown useful in our project); and (iv) one must not write superfluous type definitions just to guarantee successful type unifications during processing.</Paragraph> <Paragraph position="3"> The opposite case holds for the GLB of sort types (closed-world approach). Furthermore, sort types differ in another point from avm types in that they are not further structured, as is also the case for atoms. Moreover, TDL offers the possibility to declare partitions, a feature heavily used in HPSG. In addition, one can declare sets of types as incompatible, meaning that their conjunction yields ⊥, so that specific avm types can be closed.</Paragraph> <Paragraph position="4"> TDL allows a grammarian to define and use parameterized templates (macros). There exists a special instance definition facility to ease the writing of lexicon entries, which differ from normal types in that they are not entered into the type hierarchy. Input given to TDL is parsed by a Zebu-generated LALR(1) parser [8] to allow for an intuitive, high-level input syntax and to abstract from uninteresting details imposed by the unifier and the underlying Lisp system.</Paragraph> <Paragraph position="5"> The kernel of TDL (and of most other monotonic systems) can be given a set-theoretical semantics along the lines of [12]. It is easy to translate TDL statements into denotation-preserving expressions of Smolka's feature logic, thus viewing TDL only as syntactic sugar for a restricted (decidable) subset of first-order logic. Take for instance a feature description φ of type np whose AGR value, an agreement structure with NUM sg and PERS 3rd, is coreferent with its SUBJ value (the attribute-value matrix itself did not survive extraction). It is not hard to rewrite this two-dimensional description to a flat first-order formula, where attributes/features (e.g., AGR) are interpreted as binary relations and types (e.g., np) as unary predicates: ∃x . np(φ) ∧ AGR(φ, x) ∧ agreement(x) ∧ NUM(x, sg) ∧ PERS(x, 3rd) ∧ SUBJ(φ, x). The corresponding TDL type definition of φ is written with & on the keyboard instead of ∧, | instead of ∨, and ~ instead of ¬ (the definition itself is likewise lost in extraction).</Paragraph> </Section> <Section position="2" start_page="0" end_page="895" type="sub_section"> <SectionTitle> 3.2 Type Hierarchy </SectionTitle> <Paragraph position="0"> The type hierarchy is either called directly by the control machinery of TDL during the definition of a type (type classification) or indirectly via the simplifier, both at definition and at run time (type unification).</Paragraph> <Paragraph position="1"> The implementation of the type hierarchy is based on Ait-Kaci's encoding technique for partial orders [1]. Every type t is assigned a code γ(t) (represented via a bit vector) such that γ(t) reflects the reflexive-transitive closure of the subsumption relation with respect to t. Decoding a code c is realized either by a look-up (if ∃t . γ⁻¹(c) = t) or by computing the "maximal restriction" of the set of types whose codes are less than c. Depending on the encoding method, the hierarchy occupies O(n log n) (compact encoding) resp. O(n²) (transitive closure encoding) bits. Here, GLB/LUB operations directly correspond to bitwise and/or instructions, and GLB, LUB, and ≅ computations have the nice property that they can be carried out in this framework in O(n), where n is the number of types.¹</Paragraph> <Paragraph position="2"> ¹ Actually, one can choose in TDL between the two encoding techniques and between bit vectors and bignums in COMMON LISP for the representation of the codes. In our LISP implementation, operations on bignums are a magnitude faster than on bit vectors.</Paragraph>
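The following toy sketch illustrates the flavor of the transitive-closure variant of this encoding, assuming that a code marks every type the given type subsumes, so that GLB candidates fall out of a bitwise AND of two codes. Function names and data layout are assumptions for illustration, not TDL's actual code.

```python
# Sketch of Ait-Kaci-style transitive-closure encoding: gamma(t) has
# one bit per type, set for every type that t subsumes (incl. itself).

def compute_codes(direct_subtypes):
    """direct_subtypes: dict mapping each type to its direct subtypes."""
    types = sorted(direct_subtypes)          # fix one bit position per type
    bit = {t: 1 << i for i, t in enumerate(types)}
    code = {}

    def gamma(t):
        if t not in code:
            c = bit[t]
            for s in direct_subtypes[t]:
                c |= gamma(s)                # union over all subtypes
            code[t] = c
        return code[t]

    for t in types:
        gamma(t)
    return code

def glb(a, b, code):
    """Decode the AND of two codes: a unique type whose code equals the
    intersection is the GLB; otherwise no single GLB type exists."""
    c = code[a] & code[b]
    exact = [t for t, g in code.items() if g == c]
    return exact[0] if exact else None

# Example hierarchy: top subsumes x and y; z lies below both.
hierarchy = {"top": ["x", "y"], "x": ["z"], "y": ["z"], "z": []}
codes = compute_codes(hierarchy)
assert glb("x", "y", codes) == "z"
```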
<Paragraph position="3"> Figure 1 (caption): TDL either returns c (c is definitely the GLB of a and b), or a ∧ b (open-world reasoning) resp. ⊥ (closed-world reasoning) if there does not exist a single type which is equal to the GLB of a and b. In addition, TDL determines whether UDiNe must carry out feature term unification (yes) or not (no); i.e., the return type contains all the information one needs to work on properly (fail signals a global unification failure).</Paragraph> <Paragraph position="4"> Ait-Kaci's method has been extended in TDL to cover the open-world nature of avm types in that potential GLB/LUB candidates (calculated from their codes) must be verified. Why so? Consider a type x defined as y ∧ z, next to a type x' defined as y' ∧ z' plus the feature constraint [A 1] (the original example did not survive extraction). During processing, one can definitely substitute y ∧ z by x, but rewriting y' ∧ z' to x' is not correct, because x' differs from y' ∧ z': x' is more specific as a consequence of the feature constraint [A 1]. So we make a distinction between the "internal" greatest lower bound GLB≼, concerning only the type subsumption relation and computed by Ait-Kaci's method alone (which is, however, sufficient for sort types), and the "external" one, GLB⊑, which takes the subsumption relation over feature structures into account.</Paragraph> <Paragraph position="5"> With GLB≼ and GLB⊑ in mind, we can define a generalized GLB operation informally by the following case table, which is actually used during type unification: 1. avm₁ ∧ avm₂ ⟹ avm₃, if GLB⊑(avm₁, avm₂) = avm₃; ⊥, if GLB⊑(avm₁, avm₂) = ⊥ via an explicit incompatibility declaration; avm₁ ∧ avm₂, otherwise (open world). 2. avm₁ ∧ sort₂ ⟹ avm₁ ∧ sort₂, if expand(avm₁) ∧ sort₂ ≠ ⊥; ⊥, otherwise. 3. sort₁ ∧ sort₂ ⟹ sort₃, if GLB≼(sort₁, sort₂) = sort₃; sort₁, if sort₁ = sort₂; ⊥, otherwise (closed world). 4. atom₁ ∧ sort₂ ⟹ atom₁, if type-of(atom₁) ≼ sort₂, where sort₂ is a built-in; ⊥, otherwise. 5. atom₁ ∧ atom₂ ⟹ atom₁, if atom₁ = atom₂; ⊥, otherwise.</Paragraph>
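A minimal sketch of how the dispatch in this case table could look, assuming helpers glb_int (the purely hierarchy-based GLB≼), glb_ext (the feature-verifying GLB⊑), and type_of are given; the return conventions mirror Fig. 1. This is an illustration, not TDL's implementation.

```python
BOTTOM = "bottom"

def generalized_glb(t1, t2, kind, glb_int, glb_ext, incompatible, type_of):
    """Dispatch following the case table above.  kind(t) is one of
    'avm', 'sort', 'atom'.  Returns a type, a residual conjunction
    ('and', t1, t2), or BOTTOM."""
    k1, k2 = kind(t1), kind(t2)
    if k1 == k2 == "avm":                      # case 1: open world
        g = glb_ext(t1, t2)
        if g is not None:
            return g                           # verified common subtype
        if frozenset((t1, t2)) in incompatible:
            return BOTTOM                      # explicitly closed
        return ("and", t1, t2)                 # keep the conjunction
    if k1 == k2 == "sort":                     # case 3: closed world
        if t1 == t2:
            return t1
        g = glb_int(t1, t2)
        return g if g is not None else BOTTOM
    if {k1, k2} == {"atom", "sort"}:           # case 4: built-in sorts
        atom, sort = (t1, t2) if k1 == "atom" else (t2, t1)
        return atom if type_of(atom) == sort else BOTTOM
    if k1 == k2 == "atom":                     # case 5
        return t1 if t1 == t2 else BOTTOM
    # case 2 (avm vs. sort) would first expand the avm type; omitted here
    return ("and", t1, t2)
```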
<Paragraph position="6"> The encoding algorithm is also extended towards the redefinition of types and the use of undefined types, an essential part of an incremental grammar/lexicon development system. Redefining a type means not only making changes local to this type. Instead, one has to redefine all dependents of this type: all subtypes in case of a conjunctive type definition and all disjunction alternatives for a disjunctive type specification, plus, in both cases, all types which use these types in their definition. The dependent types of a type t can be characterized graph-theoretically via the strongly connected component of t with respect to the dependency relation.</Paragraph> <Paragraph position="7"> Conjunctive type specifications, e.g., x := y ∧ z, and disjunctive ones, e.g., x' := y' ∨ z', are entered differently into the hierarchy: x inherits from its supertypes y and z, whereas x' defines itself through its alternatives y' and z'. This distinction is represented through the use of different kinds of edges in the type graph, which are introduced by TDL during the type definitions (bold edges denote disjunction elements; see Fig. 3). But it is worth noting that both of them express subsumption (x ≼ y and x' ≽ y') and that the GLB/LUB operations must work properly over "conjunctive" as well as "disjunctive" subsumption links.</Paragraph> <Paragraph position="8"> TDL decomposes complex definitions consisting of ∧, ∨, and ¬ by introducing intermediate types, so that the resulting expression is either a pure conjunction or a disjunction of type symbols. Intermediate type names are enclosed in vertical bars (cf. the intermediate types |u ∧ v| and |u ∧ v ∧ w| in Fig. 2). The same technique is applied when using ⊕ (see Fig. 3): ⊕ is decomposed into ∧, ∨, and ¬, plus additional intermediates. For each negated type ¬t, TDL introduces a new intermediate type symbol |¬t| having the definition ¬t and declares it incompatible with t (see below). In addition, if t is not already present, TDL will add t as a new type to the hierarchy (see types |¬b| and |¬c| in Fig. 3).</Paragraph> <Paragraph position="9"> Let's consider the example a := b ⊕ c. The decomposition can be stated informally by the following rewrite steps (assuming that the user has chosen CNF): a := b ⊕ c ⟹ a := (b ∨ c) ∧ (¬b ∨ ¬c), so that a inherits from the intermediates |b ∨ c| and |¬b ∨ ¬c|.</Paragraph> <Paragraph position="10"> Incompatible types lead to the introduction of specialized bottom symbols (see Fig. 3 and 4), which however are identified in the underlying logic in that they all denote the empty set. These bottom symbols must be propagated downwards by a mechanism called bottom propagation, which takes place at definition time (see Fig. 4). Note that it is important to take not only subtypes of incompatible types into account but also disjunction elements, as the following example shows: given the incompatibility ⊥ := a ∧ b and the disjunctive definition b := b₁ ∨ b₂, bottom propagation must also derive a ∧ b₁ = ⊥ and a ∧ b₂ = ⊥.</Paragraph> <Paragraph position="11"> One might expect that incompatibility statements together with feature term unification no longer lead to a monotonic, set-theoretical semantics. But this is not the case. To preserve monotonicity, one must assume a 2-level interpretation of typed feature structures, where feature constraints and type constraints might denote different sets of objects and the global interpretation is determined by the intersection of the two sets. Take for instance two type definitions A and B which are declared incompatible but carry compatible feature constraints (the definitions themselves are lost in extraction): the corresponding feature structures of A and B successfully unify to [a 1, b 1], yet the global interpretation is ⊥.</Paragraph>
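A small sketch of the bottom propagation step described above, pushing an incompatibility down to subtypes and to disjunction alternatives. The representation (dicts of subtypes and alternatives, a set of clashing pairs) is assumed for illustration.

```python
# Sketch of bottom propagation: when a type clashes with t, the clash
# is pushed down to all subtypes and disjunction alternatives of t,
# as in the b := b1 | b2 example above.

def propagate_bottom(clashing_type, t, subtypes, alternatives, out):
    """Record that t (and everything below it) is incompatible with
    clashing_type.  subtypes/alternatives map a type to its conjunctive
    subtypes resp. its disjunction elements."""
    out.add(frozenset([clashing_type, t]))
    for s in subtypes.get(t, []) + alternatives.get(t, []):
        propagate_bottom(clashing_type, s, subtypes, alternatives, out)

clashes = set()
subtypes = {"b": ["d", "e"]}       # d and e are subtypes of b (cf. Fig. 4)
alternatives = {}                  # no disjunctive definitions here
propagate_bottom("a", "b", subtypes, alternatives, clashes)
assert frozenset(["a", "d"]) in clashes and frozenset(["a", "e"]) in clashes
```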
</Section> <Section position="3" start_page="895" end_page="896" type="sub_section"> <SectionTitle> 3.3 Symbolic Simplifier </SectionTitle> <Paragraph position="0"> The simplifier operates on arbitrary TDL expressions. Simplification is done at definition time and at run time when typed unification takes place (cf. Fig. 1). The main issue of symbolic simplification is to avoid (i) unnecessary feature constraint unification and (ii) queries to the type hierarchy, by simply applying "syntactic" reduction rules. Consider an expression like x₁ ∧ ... ∧ xᵢ ∧ ... ∧ ¬xᵢ ∧ ... ∧ xₙ. The simplifier will detect ⊥ here by reduction rules alone.</Paragraph> <Paragraph position="1"> The simplification schemata are well known from the propositional calculus. They are hard-wired in the implementation to speed up computation. Formally, type simplification in TDL can be characterized as a term rewriting system: a set of reduction rules is applied until a normal form is reached. Confluence and termination are guaranteed by imposing a total generalized lexicographic order on terms (see below). In addition, this order has the nice effect of making commutativity rules unnecessary (these are expensive and might lead to termination problems): there is only one representative for a given formula. Therefore, memoization is cheap and is employed in TDL to reuse precomputed results of simplified expressions (one need not cover all permutations of a formula).</Paragraph> <Paragraph position="2"> Additional reduction rules are applied at run time using "semantic" information from the type hierarchy (GLB, LUB, and ≅).</Paragraph> <Paragraph position="3"> Figure 4 (caption): Bottom propagation triggered through the subtypes d and e of b, so that a ∧ d ∧ c as well as a ∧ e ∧ c will simplify to ⊥ during processing.</Paragraph> <Paragraph position="4"> 3.3.1 Normal Form. In order to reduce an arbitrary type expression to a simpler one, simplification rules must be applied. So we have to define what it means for an expression to be "simple": one can choose either the conjunctive or the disjunctive normal form. The advantages of CNF/DNF are uniqueness, linearity, and, as a consequence of these two properties, fast comparability: unique and linear expressions make it easy to find or to compare (sub)expressions. This is important for the memoization technique described in Section 3.3.4.</Paragraph> <Paragraph position="5"> In order to reach a normal form, it would suffice to apply only the schemata for double negation, distributivity, and De Morgan's laws. However, in the worst case, these three rules would blow up the length of the normal form to exponential size (compared with the number of literals in the original expression). To avoid this, other rules are used intermediately: idempotence, identity, absorption, etc. If they can be applied, they always reduce the length of the expressions. Especially at run time, but also at definition time, it is useful to exploit information from the type hierarchy: further simplifications are possible by asking for the GLB, the LUB, and ≅.</Paragraph>
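As a minimal illustration of the "syntactic" reduction step, the following sketch simplifies a conjunction of literals by idempotence and by detecting complementary pairs, which is how x₁ ∧ ... ∧ xᵢ ∧ ... ∧ ¬xᵢ ∧ ... ∧ xₙ collapses to ⊥. The literal representation is assumed, not TDL's.

```python
# Sketch of one reduction step over a conjunction of literals.

def simplify_conjunction(literals):
    """literals: iterable of (name, negated) pairs."""
    seen = set()
    for name, negated in literals:
        if (name, not negated) in seen:
            return "bottom"              # x and not-x clash
        seen.add((name, negated))        # duplicates vanish (idempotence)
    return sorted(seen)                  # sorted: one canonical order

assert simplify_conjunction([("x", False), ("y", False), ("x", True)]) == "bottom"
assert simplify_conjunction([("x", False), ("x", False)]) == [("x", False)]
```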
<Paragraph position="6"> To avoid the application of the commutativity rule, we introduce a total lexicographic order on type expressions. Together with DNF/CNF, we obtain a unique sorted normal form for an arbitrary type expression. This guarantees fast comparability.</Paragraph> <Paragraph position="7"> We define the order <NF on n-ary normal forms: type <NF negated type <NF conjunction <NF disjunction <NF symbol <NF string <NF number. For the comparison of atoms, strings, and type names, we use the lexicographical order on strings, and for numbers the ordering < on natural numbers. Example: a <NF b <NF bb <NF ¬a <NF a ∧ b <NF a ∧ ¬a <NF a ∨ b <NF a ∨ b ∨ c <NF a ∨ 1.</Paragraph> <Paragraph position="8"> 3.3.4 Memoization. The memoization technique described in [10] has been adapted in order to reuse precomputed results of type simplification. The lexicographically sorted normal form guarantees fast access to precomputed type simplifications. Memoization results are also used by the recursive simplification algorithm to exploit precomputed results for subexpressions.</Paragraph> <Paragraph position="9"> Some empirical results show the usefulness of memoization. The current DISCO grammar for German consists of 885 types and 27 templates. After a full type expansion of a toy lexicon of 244 instances/entries, the memoization table contains approx. 3000 entries (literals are not memoized). 18000 results have been reused at least once (some up to 600 times), of which 90% are proper simplifications (i.e., the simplified formulae are really shorter than the unsimplified ones).</Paragraph> </Section>
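The memoization idea can be sketched as follows: because the sorted normal form is a unique representative of a formula, a single table entry covers all permutations of the same conjunction or disjunction. The data structures are assumed for illustration.

```python
# Sketch: memoized simplification keyed on the sorted normal form.

memo_table = {}

def simplify_memoized(op, args, simplify):
    """op is 'and' or 'or'; args a list of hashable literals; simplify
    is the real reduction routine (assumed given)."""
    key = (op, tuple(sorted(args)))      # canonical, permutation-free key
    if key not in memo_table:
        memo_table[key] = simplify(op, sorted(args))
    return memo_table[key]

# Both calls hit the same table entry:
trivial = lambda op, args: args
simplify_memoized("and", ["b", "a"], trivial)
simplify_memoized("and", ["a", "b"], trivial)
assert len(memo_table) == 1
```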
<Section position="4" start_page="896" end_page="896" type="sub_section"> <SectionTitle> 3.4 Type Expansion and Control </SectionTitle> <Paragraph position="0"> We noted earlier that types allow us to refer to complex constraints through the use of symbol names. Reconstructing the constraints which determine a type (represented as a feature structure) requires a complex operation called type expansion. This is comparable to Carpenter's totally well-typedness [5].</Paragraph> </Section> <Section position="5" start_page="896" end_page="896" type="sub_section"> <SectionTitle> 3.4.1 Motivation </SectionTitle> <Paragraph position="0"> In TDL, the motivation for type expansion is manifold:</Paragraph> </Section> </Section> <Section position="8" start_page="896" end_page="897" type="metho"> <SectionTitle> * CONSISTENCY </SectionTitle> <Paragraph position="0"> At definition time, type expansion determines whether the set of type definitions (grammar and lexicon) is consistent. At run time, type expansion is involved in checking the satisfiability of the unification of two partially expanded typed feature structures, e.g., during parsing.</Paragraph> </Section> <Section position="9" start_page="897" end_page="897" type="metho"> <SectionTitle> * ECONOMY </SectionTitle> <Paragraph position="0"> From the standpoint of efficiency, it does make sense to work only with small, partially expanded structures (if possible) to speed up feature term unification and to reduce the amount of copying. At the end of processing, however, one has to make the resulting constraints explicit.</Paragraph> </Section> <Section position="10" start_page="897" end_page="898" type="metho"> <SectionTitle> * RECURSION </SectionTitle> <Paragraph position="0"> Recursive types are inherently present in modern constraint-based grammar theories like HPSG which are not provided with a context-free backbone. Moreover, if the formalism does not allow functional or relational constraints, one must specify certain functions/relations like append through recursive types. Take for instance Ait-Kaci's version of the append type, which can be stated in TDL as a disjunction of a base case append₀ and a recursive case append₁ (the concrete definition did not survive extraction; its recursion through the PATCH attribute is discussed in Section 3.4.3).</Paragraph> <Paragraph position="1"> Parsing and generation can be seen in the light of type deduction as a uniform process, where ideally only the phonology (for parsing) or the semantics (for generation) must be given. Type expansion together with a sufficiently specified grammar is then responsible in both cases for constructing a fully specified feature structure which is maximally informative and compatible with the input. However, [13] has shown that type expansion without sophisticated control strategies is in many cases inefficient and moreover does not guarantee termination.</Paragraph> <Section position="1" start_page="897" end_page="898" type="sub_section"> <SectionTitle> 3.4.2 Controlled Type Expansion </SectionTitle> <Paragraph position="0"> Uszkoreit [13] introduced a new strategy for linguistic processing called controlled linguistic deduction. His approach permits the specification of linguistic performance models without giving up the declarative basis of linguistic competence, especially monotonicity and completeness. The evaluation of both conjunctive and disjunctive constraints can be controlled in this framework. For conjunctive constraints, the one with the highest failure probability should be evaluated first. For disjunctive ones, a success measure is used instead: the alternative with the highest success probability is used until a unification fails, in which case one has to backtrack to the next best alternative.</Paragraph> <Paragraph position="1"> TDL and UDiNe support this strategy in that every feature structure can be associated with its success/failure potential, such that type expansion can be sensitive to these settings. Moreover, one can make other decisions as well during type expansion (see the policy sketch below): * only regard structures which are subsumed by a given type, or the opposite case (e.g., always expand the type subcat-list, or never expand the type daughters) * take into account only structures under certain paths, or again assume the opposite case (e.g., always expand the value under path SYNSEM|LOC|CAT; in addition, it is possible to employ path patterns in the sense of pattern matching) * set the depth of type expansion for a given type. Note that we are not restricted to applying only one of these settings: they can be used in combination and can be changed dynamically during processing.</Paragraph> <Paragraph position="2"> It does make sense, for instance, to expand the (partial) information obtained so far at certain well-defined points during parsing. If this does not result in a failure, one can throw away (resp. store) the fully expanded feature structure and work on with the older (and smaller) one. However, if the information is inconsistent, we must backtrack to older stages of the computation. Going this way, which of course assumes heuristic knowledge (language-specific as well as grammar-specific), results in faster processing and copying. Moreover, the inference engine must be able to handle possibly inconsistent knowledge, e.g., in case of a chart parser, by allowing for a third kind of edge (besides active and passive ones).</Paragraph>
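The control settings listed above can be pictured as a per-call expansion policy, as in this sketch. Attribute names such as always_expand and max_depth are hypothetical, not TDL's actual interface.

```python
# Sketch of a policy deciding whether to expand a node during type
# expansion, combining type-based, path-based, and depth-based control.
from dataclasses import dataclass, field

@dataclass
class ExpansionPolicy:
    always_expand: set = field(default_factory=set)    # e.g. {"subcat-list"}
    never_expand: set = field(default_factory=set)     # e.g. {"daughters"}
    path_prefixes: list = field(default_factory=list)  # e.g. [("SYNSEM","LOC","CAT")]
    max_depth: int = 10

    def should_expand(self, type_name, path):
        if type_name in self.never_expand or len(path) > self.max_depth:
            return False
        if type_name in self.always_expand:
            return True
        return any(tuple(path[:len(p)]) == p for p in self.path_prefixes)

policy = ExpansionPolicy(always_expand={"subcat-list"},
                         never_expand={"daughters"},
                         path_prefixes=[("SYNSEM", "LOC", "CAT")])
assert policy.should_expand("np", ("SYNSEM", "LOC", "CAT", "HEAD"))
assert not policy.should_expand("daughters", ())
```

In combination with the success/failure potentials mentioned above, such settings can be changed dynamically, e.g., between parsing phases.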
<Paragraph position="3"> 3.4.3 Recursion, Efficiency Issues, and Undecidability. The set of all recursive types of a given grammar/lexicon can be precompiled by employing the dependency graph of this type system. This graph is updated every time a new type definition is added to the system. Thus detecting whether a given type is recursive or not reduces to a simple table look-up. However, the expansion of a recursive type itself is a little bit harder. In TDL, we are using a lazy expansion technique which only makes those constraints explicit which are really new. To put it another way: if no (global or local) control information is specified to guide a specific expansion, a recursive type will be expanded under all its paths (local plus inherited paths) until one reaches a point where the information is already given in a prefix path. We call such an expanded structure a resolved typed feature structure. Of course, there are infinitely many resolved feature structures, but this structure is the most general resolved one.</Paragraph> <Paragraph position="4"> Take for instance the append example from the previous section. append is of course a recursive type because one of its alternatives, viz., append₁, uses append again under the PATCH attribute. Expanding append with no additional information supplied (especially no path leading inside append₁, e.g., PATCH|PATCH|PATCH) yields a disjunctive feature structure where both append₀ and append₁ are substituted by their definitions. The expansion then stops if no other information enforces a further expansion.</Paragraph> <Paragraph position="5"> In practice, one has to keep track of the visited paths and the visited typed feature structures to avoid unnecessary expansion. To make expansion more efficient, we mark structures according to whether they are fully expanded or not. A feature structure is then fully expanded iff all of its substructures are fully expanded. This simple idea leads to a massive reduction of the search space when dealing with incremental expansion (e.g., during parsing).</Paragraph> <Paragraph position="6"> It is worth noting that the satisfiability of feature descriptions admitting recursive type equations/definitions is in general undecidable. Rounds and Manaster-Ramer [11] were the first to show that a Kasper-Rounds logic enriched with recursive types allows one to encode a Turing machine. Because our logic is much richer, we immediately get the same result for TDL. However, one can choose in TDL between a complete expansion algorithm which may not terminate and a non-complete one which guarantees termination (see [2] and [5, Ch. 15] for similar proposals). The latter heavily depends on the notion of resolvedness (see above). In both cases, the depth of the search space can be restricted by specifying a maximal path length.</Paragraph> </Section> </Section>
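A toy sketch of recursive-type detection on the dependency graph: a type is recursive iff it lies on a cycle of the uses-in-its-definition relation, so the run-time check becomes a table look-up, as described above. Illustrative code, not TDL's.

```python
# Sketch: precompute the set of recursive types from a dependency graph.

def recursive_types(uses):
    """uses: dict mapping each type to the types occurring in its body."""
    def reaches(start, goal, seen):
        for nxt in uses.get(start, []):
            if nxt == goal:
                return True
            if nxt not in seen:
                seen.add(nxt)
                if reaches(nxt, goal, seen):
                    return True
        return False
    return {t for t in uses if reaches(t, t, set())}

# append is defined as append0 | append1, and append1 uses append
# again under its PATCH attribute:
deps = {"append": ["append0", "append1"],
        "append0": [],
        "append1": ["append"]}
assert recursive_types(deps) == {"append", "append1"}
```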
<Section position="11" start_page="898" end_page="898" type="metho"> <SectionTitle> 4 Comparison with other Systems </SectionTitle> <Paragraph position="0"> TDL is unique in that it implements many novel features not found in other systems like ALE [4], LIFE [2], or TFS [15]. Of course, these systems provide other features which are not present in our formalism. What makes TDL unique in comparison to them is the distinction between open and closed world, the availability of the full boolean connectives and distributed disjunctions (via UDiNe), as well as an implemented lazy type expansion mechanism for recursive types (as compared with LIFE). ALE, for instance, allows neither disjunctive nor recursive types and enforces the type hierarchy to be a BCPO. However, it makes recursion available through definite relations and incorporates special mechanisms for empty categories and lexical rules. TFS comes with a closed world, the unavailability of negative information (only implicitly present), and only a poor form of disjunctive information, but performs parsing and generation entirely through type deduction (in fact, it was the first such system). LIFE comes closest to us but provides a semantics for types that is similar to TFS. Moreover, the lack of negative information and distributed disjunctions makes it again comparable with TFS. LIFE as a whole can be seen as an extension of PROLOG (as was the case for its predecessor LOGIN), where first-order terms are replaced by ψ-terms. In this sense, LIFE is richer than our formalism in that it offers a full relational calculus.</Paragraph> </Section> </Paper>