<?xml version="1.0" standalone="yes"?> <Paper uid="P90-1013"> <Title>THE COMPUTATIONAL COMPLEXITY OF AVOIDING CONVERSATIONAL IMPLICATURES</Title> <Section position="8" start_page="102" end_page="102" type="evalu"> <SectionTitle> 7. ISSUES </SectionTitle> <Paragraph position="0"> 7.1. The Impact of NP-Hard Preference Rules It is difficult to determine precisely the computational expense of generating referring expressions that are maximal under the Full Brevity or No Unnecessary Words preference rules. The most straightforward algorithm that obeys Full Brevity (a similar analysis can be done for No Unnecessary Words) simply does an exhaustive search: it first checks whether any one-component referring expression is successful, then checks whether any two-component referring expression is successful, and so forth. Let L be the number of components in the shortest referring expression, and let N be the number of components that are potentially useful in a description, i.e., the number of members of Target-Components that rule out at least one member of Excluded. The straightforward full-brevity algorithm will then need to examine (N choose 1) + (N choose 2) + ... + (N choose L) descriptions before it finds a successful referring expression. For the problem of generating a referring expression that identifies object B in the example context presented in Section 2, N is 3 and L is 2, so the straightforward brevity algorithm will take only 6 steps to find the shortest description. This problem is artificially simple, however, because N, the number of potential description components, is so small. In a more realistic problem, one would expect Target-Components to include size, shape, orientation, position, and probably many other attribute-value pairs as well, which would mean that N would probably be at least 10 or 20. 
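The exhaustive search described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the component representation, the toy context, and the `rules_out` predicate are all assumptions introduced for the example.

```python
from itertools import combinations

def full_brevity(target_components, excluded, rules_out):
    """Exhaustive search for a shortest successful referring expression.

    Tries every 1-component description, then every 2-component one, and
    so on, returning the first description that rules out every member of
    `excluded`.  `rules_out(component, distractor)` is an assumed predicate.
    """
    # Only components that rule out at least one distractor are useful (N).
    useful = [c for c in target_components
              if any(rules_out(c, d) for d in excluded)]
    for size in range(1, len(useful) + 1):
        for description in combinations(useful, size):
            if all(any(rules_out(c, d) for c in description)
                   for d in excluded):
                return description
    return None  # no distinguishing description exists

# Hypothetical context: attribute-value pairs for the target, plus two
# distractor objects represented as sets of attribute-value pairs.
components = [("colour", "red"), ("shape", "cube"), ("size", "small")]
distractors = [{("colour", "blue"), ("shape", "cube"), ("size", "small")},
               {("colour", "red"), ("shape", "ball"), ("size", "small")}]
rules_out = lambda c, d: c not in d  # a component rules out any object lacking it
print(full_brevity(components, distractors, rules_out))
```

Because the inner loop enumerates all size-i subsets of the N useful components, the number of candidates examined grows as the sum of binomial coefficients, which is the source of the worst-case exponential cost discussed below.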
L, the number of attributes in the shortest possible referring expression, is probably fairly small in most realistic situations, but there are cases where it might be at least 3 or 4 (e.g., consider &quot;the upside-down blue cup on the second shelf&quot;). For some example values of L and N in this range, the straightforward brevity algorithm will need to examine the following number of descriptions:</Paragraph> <Paragraph position="2"> The straightforward full-brevity algorithm, then, seems prohibitively expensive in at least some circumstances. Because finding the shortest description is NP-Hard, it seems likely (existing complexity-theoretic techniques are too weak to prove such statements) that all algorithms for finding the shortest description will have similarly bad performance in the worst case. It is possible, however, that there exist algorithms that have acceptable performance in almost all 'realistic' cases. Any such proposed algorithm, however, should be carefully analyzed to determine in what circumstances it will fail to find the shortest description or will take exponential time to run.</Paragraph> <Section position="1" start_page="102" end_page="102" type="sub_section"> <SectionTitle> 7.2. Conflicts Between Preference Rules </SectionTitle> <Paragraph position="0"> The assumption has been made in this paper that the preference rules do not conflict, i.e., that it is never the case that description A is preferred over description B by one preference rule while description B is preferred over description A by another preference rule. This means, in particular, that if lexical class LC1 is preferred over lexical class LC2, then LC1's realization must not contain more open-class words than LC2's realization; otherwise, the Lexical Preference and Local Brevity preference rules may conflict. 
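The table of example counts referred to above was lost in extraction, but the counts themselves are easy to recompute: the exhaustive algorithm examines every component subset of size 1 through L, i.e., the sum of binomial coefficients C(N, i) for i from 1 to L. The sketch below recomputes these values; the specific (N, L) pairs shown are illustrative choices, not necessarily those in the paper's original table.

```python
from math import comb

def descriptions_examined(n, l):
    """Number of candidate descriptions the exhaustive full-brevity
    algorithm checks: all 1-component sets, then all 2-component sets,
    and so on up to size l."""
    return sum(comb(n, i) for i in range(1, l + 1))

print(descriptions_examined(3, 2))   # the Section 2 example: 6 steps
for n in (10, 20):                   # realistic numbers of useful components
    for l in (3, 4):                 # realistic shortest-description lengths
        print(n, l, descriptions_examined(n, l))
```

Even for these modest parameter values the counts run into the thousands, which is what makes the straightforward algorithm look prohibitively expensive.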
10 This can be supported by psychological and linguistic findings that basic-level classes are almost always realized with single words (Rosch 1978; Berlin, Breedlove, and Raven 1973).</Paragraph> <Paragraph position="1"> However, there are a few exceptions to this rule, i.e., there do exist a small number of basic-level categories whose realizations require more than one open-class word. For example, Washing-Machine is a basic-level class for some people, and it has a realization that uses two open-class words.</Paragraph> <Paragraph position="2"> This leads to a conflict of the type mentioned above: basic-level Washing-Machine is preferred over non-basic-level Appliance, but Washing-Machine's realization contains more open-class words than Appliance's.</Paragraph> <Paragraph position="3"> 10 This assumes that the Local Brevity preference rule uses number of open-class words as its measure of description length. If number of components or number of lexical units is used as the measure of description length, then Local Brevity will never conflict with Lexical Preference.</Paragraph> <Paragraph position="4"> No other conflicts can occur between the No Unnecessary Components, Local Brevity, and Lexical Preference preference rules.</Paragraph> <Paragraph position="5"> The presence of a basic-level class with a multi-word realization can also cause a conflict to occur between the two lexical-preference principles given in Section 6 (such conflicts are otherwise impossible). For example, Washing-Machine's realization contains a superset of the open-class words used in the realization of Machine, so the basic-level preference of Section 6 indicates that Washing-Machine should be lexically preferred over Machine, while the realization-subset preference indicates that Machine should be lexically preferred over Washing-Machine. 
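The Washing-Machine versus Machine conflict can be made concrete with a small check. This is a sketch under stated assumptions: the realization sets, the `basic_level` set, and the formulation of the two preferences as simple predicates are all hypothetical simplifications, not the paper's formal definitions.

```python
def subset_preferred(a_words, b_words):
    """Realization-subset preference: a is preferred over b when a's
    open-class words form a proper subset of b's."""
    return set(a_words) < set(b_words)

# Hypothetical realizations, as sets of open-class words.
realizations = {
    "Washing-Machine": {"washing", "machine"},
    "Machine": {"machine"},
    "Appliance": {"appliance"},
}
basic_level = {"Washing-Machine"}  # assumed basic-level class

a, b = "Washing-Machine", "Machine"
# Basic-level preference favours Washing-Machine over Machine...
basic_says_a = a in basic_level and b not in basic_level
# ...while the realization-subset preference favours Machine, since
# {"machine"} is a proper subset of {"washing", "machine"}.
subset_says_b = subset_preferred(realizations[b], realizations[a])
print(basic_says_a and subset_says_b)  # True: the two principles conflict
```

As the section goes on to say, such conflicts are resolved by giving the basic-level preference priority.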
The basic-level preference should take priority in such cases, so Washing-Machine is the true lexically-preferred class in this example.</Paragraph> <Paragraph position="6"> 7.3. Generalizability of Results For the task of generating attributive descriptions as formalized in Reiter (1990a, 1990b), the Local Brevity, No Unnecessary Components, and Lexical Preference rules are effective at prohibiting utterances that carry unwanted conversational implicatures, and they can also be incorporated into a polynomial-time generation algorithm, provided that some restrictions are imposed on the underlying knowledge base. The effectiveness and tractability of these preference rules for other generation tasks is an open problem that requires further investigation.</Paragraph> <Paragraph position="7"> The Full Brevity and No Unnecessary Words preference rules are computationally intractable for the attributive description generation task (Reiter 1990b), and it seems likely that they will be intractable for most other generation tasks as well. Because global maxima are usually expensive to locate, finding the shortest acceptable utterance will probably be computationally expensive for most generation tasks. Because the 'new parse' problem arises whenever the preference function is stated solely in terms of the surface form, detecting unnecessary words will also probably be quite expensive in most situations.</Paragraph> </Section> </Section> </Paper>