<?xml version="1.0" standalone="yes"?> <Paper uid="E91-1049"> <Title>A PREFERENCE MECHANISM BASED ON MULTIPLE CRITERIA RESOLUTION</Title> <Section position="7" start_page="0" end_page="0" type="evalu"> <SectionTitle> SCORING </SectionTitle> <Paragraph position="0"> Scoring is an important novelty proposed in our Stem to replace the rule ordering strategy in use in previous Eurotra preference tool. Whereas arbitrary decisions were made in the earlier tool in cases of contradictory preference criteria and mul.tiple matches between a rule and two (sub)trees, scoring permits us to control the interaction of preference rules in a declarative way. However, there is a Iradeoff between the declarativeness permitted by a scoring system and the difficulty of finding the right sco~es for a p-rule set of nontrivial coverage.</Paragraph> <Paragraph position="1"> In this section we show how optimum p-rule scores can be derived automatically. Starting from a set of p-rules and an initial set of objects ordered by the user, the system tries to compute optimum values for the p-rules in the set, on the assumption that they will hold for different sets of objects.</Paragraph> <Paragraph position="2"> If Pi (i=l,...n) stands for the score of the i-th p-rule, then the j-th object is assigned a score Sj given by the following expression:</Paragraph> <Paragraph position="4"> where n is the number of existing p-rules, aij is a constant equal to the number of times p-rule i has applied to object j and N is the number of exmting objects. In other words, Sj stands for the final score totalled by a given object after all possible p-rules have applied to it as many times as possible.</Paragraph> <Paragraph position="5"> To compute optimum scores, an. arbi .W.ary high score is assigned to the best object(s) m me initial training corpus and a much lower one m the rest. The set of equations (1) is transtormea then into an overdetermined system of N equations with n unknowns - the p-rule scores - where N can be greater than n. The set of equations (1) can be further decomposed and reformulatea as follows: Find x~ (i--1,...n/l) such that, (2) xtav + x2a2, + ... + XnSnj &quot; Xn+lSj = 0 , By comparing the set of equations (2) against the set (1), the following relation between me values of x i and p-rule scores is deduced: (3) Pi = x/xt~l) Therefore, we claim that problems (1) and (2) are equivalent. Now, problem (.2) hasno exact solution whenever N is greater man n. rtowever, it can be solved by converting it into a constraint optimization problem whereby optimum scores for p-rules will emerge. Thus the set of equations (2) is rearranged by introducing, the erro~ ej (i=l,...N) and by imposing mat me sum ol ml these errors is minimum. More precisely problem (2) takes now the following form: subject to the constraints (4) e i = xtali + x~%j + ... + x.~ - x~/iS i (j=I,...N) xt2 + x, +... x(,, m = 1 In the literature (cf. \[Key & Marple 1981\] and \[Kunmr~an & Tufts 1982\]), one of the most efficient techniques offered to the solution of the constraint optimization problem (4) is called Singular Value Decomposition (SVD). SVD provides an optimum set of x~ (i=l,...n+l) which guarantees minimum accumulated squared error. Thus the values of the scores p~ (i=l,...n) are computed in a straight-forward way from the x~ (i=l,...n+l) using equation (3).</Paragraph> <Paragraph position="6"> Note that SVD is a non-linear optimization technique which provides the best set of parameters for a given training corpus. 
<Paragraph position="6"> Note that SVD is a non-linear optimization technique which provides the best set of parameters for a given training corpus. Therefore, it is important to apply it to a linguistically balanced corpus. Moreover, for the produced result to be reliable, the number of equations N should be at least five to ten times larger than the number of p-rules n.</Paragraph>
<Paragraph position="7"> Although SVD provides an optimum set of p-rule scores, there is no guarantee that these scores are all positive. However, since p-rules express positive selection criteria, p-rule scores must always be positive: the following paragraph proposes an iterative algorithm which computes p-rule scores while guaranteeing at the same time that they are positive. The idea is that the set of SVD parameters x_i (i = 1,...,n+1) and the N sets of parameters in the training corpus are uncorrelated sets, i.e. they do not belong to the same section of the space. If the SVD solution set x_i (i = 1,...,n+1) is also included in the training set, the new SVD solution y_i (i = 1,...,n+1) of the augmented training corpus will be uncorrelated to all the sets in the corpus. Consequently, y_i (i = 1,...,n+1) will also be uncorrelated to x_i (i = 1,...,n+1). This means that not all the signs of y_i (i = 1,...,n+1) will be identical to the signs of x_i (i = 1,...,n+1). If the y components are all positive or all negative, the algorithm ends successfully and positive p-rule scores are computed via equation (3). In all other cases, the set of y_i (i = 1,...,n+1) is also incorporated in the training corpus and a new SVD solution z_i (i = 1,...,n+1) is computed which is uncorrelated to both x_i and y_i (i = 1,...,n+1). The algorithm continues in a similar way by checking whether the signs of z_i (i = 1,...,n+1) are all the same or not: in the first case the algorithm ends successfully; in the second case the set of z_i (i = 1,...,n+1) is included in the corpus and a new SVD solution is computed.</Paragraph>
<Paragraph position="8"> The algorithm will eventually come up with the desired set of parameters when all alternatives have been exhausted throughout the preceding iterations. The time of convergence varies with the number of parameters or, equivalently, with the number of p-rules, as well as with the size of the training corpus. More precisely, the larger the number of p-rules, the longer it takes for the algorithm to converge; on the other hand, the larger the training corpus, the faster the convergence. The obtained solution is optimum given the imposed constraint that all p-rule scores are positive.</Paragraph>
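To make the iterative scheme above concrete, here is a minimal sketch, again assuming Python with NumPy; the function names and the toy data are ours, not the paper's. The toy target scores are generated from known positive p-rule scores, so a positive solution exists and the loop stops at the first SVD; with real training data the augmentation steps described above would normally be exercised.

import numpy as np

def min_error_direction(M: np.ndarray) -> np.ndarray:
    """Unit-norm x minimising the accumulated squared error ||M x||^2,
    i.e. the right singular vector of M for its smallest singular value."""
    _, _, Vt = np.linalg.svd(M)
    return Vt[-1]

def positive_p_rule_scores(M: np.ndarray, max_iter: int = 50) -> np.ndarray:
    """Iterative scheme sketched in the text: while the components of the
    current SVD solution do not all share one sign, append the solution to
    the training matrix and solve again; then apply equation (3)."""
    rows = M
    for _ in range(max_iter):
        x = min_error_direction(rows)
        if np.all(x > 0) or np.all(x < 0):
            return x[:-1] / x[-1]            # equation (3); ratios are positive
        rows = np.vstack([rows, x])          # augment the corpus, solve again
    raise RuntimeError("no same-sign SVD solution found within max_iter steps")

# Toy demonstration: the target scores are generated from known positive
# p-rule scores (5, 3, 1), so the loop terminates at the first SVD.
a = np.array([[3, 1, 0, 2, 1, 0],
              [2, 0, 1, 1, 2, 0],
              [1, 2, 1, 0, 1, 3]], dtype=float)   # a[i, j]: counts, as before
S = a.T @ np.array([5.0, 3.0, 1.0])               # consistent object scores
M = np.hstack([a.T, -S[:, None]])
print(positive_p_rule_scores(M))                  # ~ [5. 3. 1.]

</Section> </Paper>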