<?xml version="1.0" standalone="yes"?>
<Paper uid="P00-1020">
<Title>An Empirical Study of the Influence of Argument Conciseness on Argument Effectiveness</Title>
<Section position="3" start_page="0" end_page="0" type="intro">
<SectionTitle> 1 Introduction </SectionTitle>
<Paragraph position="0"> Empirical methods are critical for gauging the scalability and robustness of proposed approaches, for assessing progress, and for stimulating new research questions. In the field of natural language generation, empirical evaluation has only recently become a top research priority (Dale, Eugenio et al. 1998). Some empirical work has been done to evaluate models for generating descriptions of objects and processes from a knowledge base (Lester and Porter 1997), text summaries of quantitative data (Robin and McKeown 1996), descriptions of plans (Young to appear), and concise causal arguments (McConachy, Korb et al. 1998).</Paragraph>
<Paragraph position="1"> However, little attention has been paid to the evaluation of systems that generate evaluative arguments: communicative acts that attempt to affect the addressee's attitudes (i.e., evaluative tendencies, typically phrased in terms of like and dislike, or favor and disfavor).</Paragraph>
<Paragraph position="2"> The ability to generate evaluative arguments is critical in an increasing number of online systems that serve as personal assistants, advisors, or shopping assistants. For instance, a shopping assistant may need to compare two similar products and argue why its current user should like one more than the other.</Paragraph>
<Paragraph position="3"> In the remainder of the paper, we first describe a computational framework for generating evaluative arguments at different levels of conciseness. Then, we present an evaluation framework in which the effectiveness of evaluative arguments can be measured with real users.
Next, we describe the design of an experiment we ran within the framework to verify the influence of argument conciseness on argument effectiveness. We conclude with a discussion of the experiment's results.</Paragraph>
<Paragraph position="4">
2 Generating concise evaluative arguments
Often an argument cannot mention all the available evidence, usually for the sake of brevity. According to argumentation theory, the selection of which evidence to mention in an argument should be based on a measure of the strength of the evidence's support for (or opposition to) the main claim of the argument (Mayberry and Golden 1996). Furthermore, argumentation theory suggests that for evaluative arguments the measure of evidence strength should be based on a model of the intended reader's values and preferences.</Paragraph>
<Paragraph position="5"> Following argumentation theory, we have designed an argumentative strategy for generating evaluative arguments that are properly arranged and concise (Carenini and Moore 2000). In our strategy, we assume that the reader's values and preferences are represented as an additive multiattribute value function (AMVF), a conceptualization based on multiattribute utility theory (MAUT) (Clemen 1996). This allows us to adopt and extend a measure of evidence strength proposed in previous work on explaining decision-theoretic advice based on an AMVF (Klein 1994).</Paragraph>
<Paragraph position="6">
Figure 1 Sample additive multiattribute value function (AMVF)
The argumentation strategy has been implemented as part of a complete argument generator.
Other modules of the generator include a microplanner, which performs aggregation and pronominalization and makes decisions about cue phrases and scalar adjectives, and a sentence realizer, which extends previous work on realizing evaluative statements (Elhadad 1995).</Paragraph>
<Section position="1" start_page="0" end_page="0" type="sub_section">
<SectionTitle> 2.1 Background on AMVF </SectionTitle>
<Paragraph position="0"> An AMVF is a model of a person's values and preferences with respect to entities in a certain class. It comprises a value tree and a set of component value functions, one for each primitive attribute of the entity. A value tree is a decomposition of the value of an entity into a hierarchy of aspects of the entity2, in which the leaves correspond to the entity's primitive attributes (see Figure 1 for a simple value tree in the real-estate domain). The arcs of the tree are weighted to represent the importance of an objective's value in contributing to the value of its parent in the tree (e.g., in Figure 1, location is more than twice as important as size in determining the value of a house). Note that the weights at each level sum to 1. A component value function for an attribute expresses the preferability of each attribute value as a number in the [0,1] interval. For instance, in Figure 1, neighborhood n2 has preferability 0.3, and a distance-from-park of 1 mile has preferability 1 - (1/5 × 1) = 0.8.</Paragraph>
<Paragraph position="1"> 2 In decision theory these aspects are called objectives.
For consistency with previous work, we will follow this terminology in the remainder of the paper.</Paragraph>
<Paragraph position="2"> Formally, an AMVF predicts the value v(e) of an entity e as follows:</Paragraph>
<Paragraph position="3"> v(e) = Σi wi × vi(xi), where xi is the value of entity e on attribute i</Paragraph>
<Paragraph position="4">
- ∀ attribute i, vi is the component value function, which maps the least preferable xi to 0, the most preferable xi to 1, and the other xi to values in [0,1]
- wi is the weight for attribute i, with 0 ≤ wi ≤ 1 and Σ wi = 1
- wi is equal to the product of all the weights from the root of the value tree to attribute i
A function vo(e) can also be defined for each objective o. When applied to an entity, this function returns the value of the entity with respect to that objective. For instance, assuming the value tree shown in Figure 1, we have:</Paragraph>
<Paragraph position="5"> v_location(e) = (w_neighborhood × v_neighborhood(e)) + (w_distance-from-park × v_distance-from-park(e))</Paragraph>
<Paragraph position="6"> Thus, given someone's AMVF, it is possible to compute how valuable an entity is to that individual. Furthermore, it is possible to compute how valuable any objective (i.e., any aspect of that entity) is for that person. All of these values are expressed as numbers in the interval [0,1].</Paragraph>
</Section>
<Section position="2" start_page="0" end_page="0" type="sub_section">
<SectionTitle> 2.2 A measure of evidence strength </SectionTitle>
<Paragraph position="0"> Given an AMVF for a user, applied to an entity (e.g., a house), it is possible to define a precise measure of an objective's strength in determining the evaluation of its parent objective for that entity. This measure is proportional to two factors: (A) the weight of the objective (which is by itself a measure of importance), and (B) a factor that increases equally for high and low values of the objective, because an objective can be important either because it is liked a lot or because it is disliked a lot.</Paragraph>
Figure 2 Objectives represented by dots and ordered by their compellingness
<Paragraph position="2">
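To make the AMVF machinery of Section 2.1 concrete, the following sketch evaluates a small value tree bottom-up. Everything here is a hypothetical stand-in loosely modeled on the Figure 1 discussion: the tree shape, the 0.7/0.3 and 0.6/0.4 weights, and the size function are invented for illustration; only the 0.3 preferability of neighborhood n2 and the 1 - d/5 distance-from-park function come from the text.

```python
# Hypothetical AMVF sketch; weights and the size function are invented,
# not taken from the paper's actual Figure 1.

def leaf(name, fn):
    """Primitive attribute with its component value function."""
    return {"attr": name, "fn": fn}

def node(*weighted_children):
    """Objective whose children's local weights must sum to 1."""
    assert abs(sum(w for w, _ in weighted_children) - 1.0) < 1e-9
    return {"children": list(weighted_children)}

def value(tree, entity):
    """v_o(e): component value at leaves, weighted sum of children otherwise."""
    if "attr" in tree:
        return tree["fn"](entity[tree["attr"]])
    return sum(w * value(child, entity) for w, child in tree["children"])

# Location outweighs size (0.7 vs 0.3), echoing "more than twice as important".
house_tree = node(
    (0.7, node(
        (0.6, leaf("neighborhood", lambda n: {"n1": 0.9, "n2": 0.3}[n])),
        (0.4, leaf("distance_from_park", lambda d: max(0.0, 1 - d / 5))),
    )),
    (0.3, leaf("size", lambda sqft: min(sqft / 3000, 1.0))),
)

house = {"neighborhood": "n2", "distance_from_park": 1, "size": 2400}
print(value(house_tree, house))  # 0.7*(0.6*0.3 + 0.4*0.8) + 0.3*0.8 = 0.59
```

Because value() recursively evaluates subtrees, the same function also yields vo(e) for any internal objective, in line with the recursive definition used in Section 2.2.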
We call this measure s-compellingness and provide the following definition:
s-compellingness(o, e, refo) = (A) × (B) = w(o, refo) × max[vo(e), 1 - vo(e)], where:
- o is an objective, e is an entity, and refo is an ancestor of o in the value tree
- w(o, refo) is the product of the weights of all the links from o to refo
- vo is the component value function for leaf objectives (i.e., attributes), and the recursive evaluation over children(o) for nonleaf objectives
Given a measure of an objective's strength, a predicate indicating whether an objective should be included in an argument (i.e., is worth mentioning) can be defined as follows:
s-notably-compelling?(o, opop, e, refo) ≡ s-compellingness(o, e, refo) > μX + kσX, where:
- o, e, and refo are defined as in the previous definition; opop is an objective population (e.g., siblings(o)), with |opop| > 2
- X = {s-compellingness(p, e, refo) : p ∈ opop}
- μX is the mean of X, σX is the standard deviation of X, and k is a user-defined constant
Similar measures for the comparison of two entities are defined and extensively discussed in (Klein 1994).</Paragraph>
</Section>
<Section position="3" start_page="0" end_page="0" type="sub_section">
<SectionTitle> 2.3 The constant k </SectionTitle>
<Paragraph position="0"> In the definition of s-notably-compelling?, the constant k determines the lower bound of s-compellingness for an objective to be included in an argument. As shown in Figure 2, for k=0 only objectives with s-compellingness greater than the average s-compellingness in a population are included in the argument (4 in the sample population).
Figure 3 Arguments about the same house, tailored to the same subject, with k ranging from 1 to -1
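The two definitions above can be sketched in a few lines. This is an illustrative reading, not the paper's implementation: the sibling score population below is hypothetical, w(o, refo) and vo(e) are passed in as precomputed numbers rather than derived from a value tree, and the population standard deviation is assumed.

```python
from statistics import mean, pstdev

def s_compellingness(w_o_ref, v_o):
    """w(o, refo) * max(vo(e), 1 - vo(e)): a strong like or a strong
    dislike both make an objective compelling."""
    return w_o_ref * max(v_o, 1 - v_o)

def s_notably_compelling(score, population, k):
    """Include objective o iff its score exceeds mu_X + k*sigma_X over
    the population of sibling scores (population std dev assumed)."""
    return score > mean(population) + k * pstdev(population)

# Hypothetical population of sibling s-compellingness scores.
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]

for k in (1, 0, -1):
    kept = [s for s in scores if s_notably_compelling(s, scores, k)]
    print(k, len(kept))
```

With this hypothetical population, k=1 admits 2 objectives and k=0 admits the 4 above-average ones; lowering k below zero admits progressively more evidence, which is exactly the knob Section 2.3 exploits to control conciseness.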
For higher positive values of k, fewer objectives are included (only 2 when k=1), and the opposite happens for negative values (8 objectives are included when k=-1).</Paragraph>
<Paragraph position="1"> Therefore, by setting the constant k to different values, it is possible to control in a principled way how many objectives (i.e., pieces of evidence) are included in an argument, thus controlling the degree of conciseness of the generated arguments.</Paragraph>
<Paragraph position="2"> Figure 3 clearly illustrates this point by showing seven arguments generated by our argument generator in the real-estate domain. These arguments are about the same house, tailored to the same subject, for k ranging from 1 to -1.</Paragraph>
</Section>
</Section>
</Paper>