<?xml version="1.0" standalone="yes"?>
<Paper uid="J81-1002">
  <Title>Computer Generation of Multiparagraph English Text</Title>
  <Section position="6" start_page="0" end_page="0" type="concl">
    <SectionTitle>
TEST AND BRANCH
</SectionTitle>
    <Paragraph position="0"> When P then determine if X. / If X then Q.</Paragraph>
    <Paragraph position="1"> If not X then R.</Paragraph>
    <Paragraph position="2"> Whenever C then X and Y. Whenever X then Y and then Z.</Paragraph>
    <Paragraph position="3"> Whenever X then Z.</Paragraph>
    <Paragraph position="4"> &lt;mention of Y&gt; If P then Q otherwise R. When P then determine X and decide Q or R.</Paragraph>
    <Paragraph position="5"> These are only a few of the Aggregation rules that have been used in KDS; others have been developed in the course of working on this and other examples. Coverage of English is still very sparse. In other examples, an aggregation rule has been used to produce a multiple-sentence structure with intersentential dependencies.</Paragraph>
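To make the Fragment-and-Compose style of such rules concrete, here is a minimal sketch of a TEST-AND-BRANCH-like Aggregation rule in Python. The tuple representation of protosentences and the matching logic are our assumptions for illustration, not KDS's actual data structures.

```python
# A minimal sketch of one Aggregation rule in the Fragment-and-Compose
# style. Protosentences are assumed (not from the paper) to be nested
# tuples of an operator and its arguments.

def test_and_branch(protosentences):
    """Hypothetical TEST-AND-BRANCH rule: combine
    ('when', P, ('determine', X)), ('if', X, Q), ('if-not', X, R)
    into a single composite protosentence."""
    for trigger in protosentences:
        if trigger[0] == 'when' and trigger[2][0] == 'determine':
            x = trigger[2][1]
            then = next((p for p in protosentences
                         if p[:2] == ('if', x)), None)
            other = next((p for p in protosentences
                          if p[:2] == ('if-not', x)), None)
            if then and other:
                rest = [p for p in protosentences
                        if p not in (trigger, then, other)]
                composite = ('when', trigger[1],
                             ('decide', x, then[2], other[2]))
                return rest + [composite]
    return protosentences  # rule did not apply

# Three protosentences collapse into one composite:
ps = [('when', 'P', ('determine', 'X')),
      ('if', 'X', 'Q'),
      ('if-not', 'X', 'R')]
print(test_and_branch(ps))
# [('when', 'P', ('decide', 'X', 'Q', 'R'))]
```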
    <Paragraph position="6"> Figure 12 shows the Preference rules. They were derived empirically, to correspond to those used by the author of some comparable human-produced text.</Paragraph>
    <Paragraph position="7">
1. Every protosentence gets an initial value of -1000.
2. Every primitive protosentence embedded in a composite protosentence decreases value by 10.
3. If there is advice that a term is good, each occurrence of that term increases value by 100.
4. Each time-sequentially linked protosentence after the first increases value by 100.
5. Certain constructions get bonuses of 200: the if-then-else construct and the when-X-determine-Y.
6. Any protosentence produced by multiple applications of the same aggregation rule gets a large negative value.</Paragraph>
    <Paragraph position="8"> Figure 12. Preference rules.</Paragraph>
    <Paragraph position="9"> Rule 6 is somewhat of a puzzle. Empirically, a sentence produced by reapplication of an Aggregation rule was always definitely unacceptable, primarily because it was awkward or confusing. We do not understand technically why this should be the case, and some say it should not be. We do know that this rule contributes significantly to overall quality.</Paragraph>
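As a concrete illustration of how the Figure 12 rules could combine into a single score, here is a minimal sketch in Python. The protosentence representation (a dict with keys such as embedded_primitives and time_linked) is our assumption; only the weights come from Figure 12.

```python
# A minimal sketch of the Figure 12 Preference rules as a scoring
# function. The dict keys are invented for illustration; the weights
# are those listed in Figure 12.

def preference_score(ps, good_terms=frozenset()):
    """Score one protosentence by the Figure 12 rules."""
    score = -1000                                    # Rule 1: initial value
    score -= 10 * ps.get("embedded_primitives", 0)   # Rule 2
    score += 100 * sum(ps.get("terms", []).count(t)  # Rule 3: advice that
                       for t in good_terms)          #   a term is good
    score += 100 * max(ps.get("time_linked", 0) - 1, 0)  # Rule 4
    if ps.get("construct") in ("if-then-else", "when-X-determine-Y"):
        score += 200                                 # Rule 5: bonus
    if ps.get("reapplied_rule", False):
        score -= 10_000                              # Rule 6: large penalty
    return score

# Aggregating two protosentences (each -1000) into one composite with
# two embedded primitives (-1020) raises the total score by 980.
pair = [{"embedded_primitives": 0}, {"embedded_primitives": 0}]
composite = {"embedded_primitives": 2}
print(sum(map(preference_score, pair)), preference_score(composite))
# -2000 -1020
```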
    <Paragraph position="12"> One of the surprising discoveries of this work, seen in all of the cases investigated, is that the task of text generation is dominated by the need for brevity: how to avoid saying things is at least as important as how to say things. Preference Rule 1 introduces a tendency toward brevity, because most of the Aggregation rules consume two or three protosentences but produce only one, yielding a large gain in score. Sentences produced from aggregated protosentences are generally briefer than the corresponding sentences for the protosentences consumed. For example, applying Rule 1 to the pair "When you permit the alarm system, call the Fire Department if possible. When you permit the alarm system then evacuate." yields "When you permit the alarm system, call the Fire Department if possible, then evacuate." Rule 3 introduces the sensitivity to advice. We expect that this sort of advice taking does not need to be elaborate -- that being able to advise that a term is good or a term is bad is adequate. (Footnote: This way of using "permit" is unfamiliar to many people, but it is exactly the usage that we found in a manual of instruction for computer operators on what they should do in case of fire. In the course of attempting to produce comparable text we accepted the usage.)</Paragraph>
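To make the gain concrete (an illustrative calculation of ours using the Figure 12 weights, not a figure from the paper): the two protosentences of the pair above score -1000 each under Rule 1, a total of -2000; after aggregation, the single composite scores -1000, less a Rule 2 penalty of 10 for each of its two embedded primitives, about -1020. The net gain of roughly +980 is what pushes the Hill Climber toward the briefer form.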
    <Paragraph position="13"> Sentence Generator Module The Sentence Generator (Figure 13) takes the final ordered set of protosentences produced by the Hill Climber and produces the final text, one sentence at a time. Each sentence is produced independently, using a simple context-free grammar and semantic testing rules. Because sentence generation has not been the focus of our work, this module does not represent much innovation, but merely establishes that the text formation work has been completed and does not depend on further complex processing.</Paragraph>
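A minimal sketch of this one-sentence-at-a-time design follows, in Python. Simple templates stand in for the paper's context-free grammar and semantic testing rules, and the protosentence forms are our assumption.

```python
# Each protosentence is rendered in isolation, with no dependence on
# prior sentences, matching the module's design described above.
# Templates here are a stand-in for the paper's context-free grammar.

TEMPLATES = {
    "when":   "When {condition}, {action}.",
    "if":     "If {condition}, {action}.",
    "if-not": "If not {condition}, {action}.",
}

def realize(protosentence):
    """Render one protosentence as one sentence."""
    operator, condition, action = protosentence
    return TEMPLATES[operator].format(condition=condition, action=action)

for ps in [("when", "you hear the alarm bell", "stop whatever you are doing"),
           ("if", "there is a fire", "permit the alarm system")]:
    print(realize(ps))
# When you hear the alarm bell, stop whatever you are doing.
# If there is a fire, permit the alarm system.
```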
    <Paragraph position="14"> The single significant innovation in the Sentence Generator is the Referring Phrase Generator, the only part in which prior sentences affect the current sentence. The Referring Phrase Generator keeps track of what objects have been referred to, and how. It presumes that objects previously referred to are in the reader's attention and that, after they have been identified by the first reference, subsequent references need only distinguish the object from others in attention. This process is equivalent to the one described by [6], developed for this research. It knows how to introduce terms, refer to objects by incomplete descriptions, and introduce pronouns. However, none of our examples has exercised all of the features of Levin and Goldman's algorithm.</Paragraph>
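A minimal sketch of the behavior just described: a first mention gets the full description, and later mentions get a shorter phrase only while it still distinguishes the object from others in attention. The representation and heuristic below are our assumptions, not Levin and Goldman's algorithm.

```python
# Track which objects are already in the reader's attention and choose
# between a full introducing description and a shorter referring phrase.

class ReferringPhraseGenerator:
    def __init__(self):
        self.in_attention = {}  # object id -> short description used

    def refer(self, obj_id, full_description, short_description):
        if obj_id not in self.in_attention:
            # First reference: introduce the object with a full description.
            self.in_attention[obj_id] = short_description
            return full_description
        # Subsequent references need only distinguish the object from
        # the other objects currently in attention.
        others = {d for o, d in self.in_attention.items() if o != obj_id}
        return (short_description if short_description not in others
                else full_description)

g = ReferringPhraseGenerator()
print(g.refer("timer", "a ninety-second timer", "the timer"))  # first: full
print(g.refer("timer", "a ninety-second timer", "the timer"))  # later: short
```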
    <Paragraph position="15"> Output Text Applying all of this machinery in our example, we get the result shown in Figure 14. Note the paragraph break, a product of a factoring rule (the first rule in Instructional-narrate) in the Problem Solver module.
Whenever there is a fire, the alarm system is started, which sounds a bell and starts a timer. Ninety seconds after the timer starts, unless the alarm system is cancelled, the system calls Wells Fargo. When Wells Fargo is called, they, in turn, call the Fire Department.</Paragraph>
    <Paragraph position="16"> When you hear the alarm bell or smell smoke, stop whatever you are doing, determine whether or not there is a fire, and decide whether to permit the alarm system or to cancel it. When you determine whether there is a fire, if there is, permit the alarm system, otherwise cancel it.</Paragraph>
    <Paragraph position="17"> When you permit the alarm system, call the Fire Department if possible, then evacuate. When you cancel the alarm system, if it is more than 90 seconds since the timer started, the system will have called Wells Fargo already, otherwise continue what you were doing.</Paragraph>
    <Paragraph position="18"> Figure 14. Final fire-alarm text from KDS.</Paragraph>
    <Paragraph position="19">  The development of KDS highlights several aspects of the task of writing that strongly influence text quality. The overwhelming importance of brevity, seen in both the Knowledge Filter and the Preference rules, is striking. Writing is seen here as a constructive activity rather than simply as interpretive. That is, it is not so much a mapping between knowledge representations as it is the creation of new symbolic objects, not equivalent to older ones, but suitable for achieving particular effects. The image of writing as a kind of goal pursuit activity helps us to factor the task into parts. The task (and the program) is occupied with finding a good way to say things, not with establishing feasibility of saying them.</Paragraph>
    <Paragraph position="20"> The KDS development has also identified important features of the problem of designing a knowledge-delivery program. The defects of the Partitioning paradigm are newly appreciated; the Fragment-and-Compose paradigm is much more manageable. It is easy to understand, and the creation of Aggregation rules is not difficult. The separation of Aggregation and Preference actions seems essential to the task, or at least to making the task manageable. As a kind of competence/performance separation it is also of theoretical interest. Knowledge filtering, as one kind of responsiveness of the writer to the reader, is essential to producing good text.</Paragraph>
    <Paragraph position="21"> The importance of fragmenting is clear, and the kinds of demands placed on the Fragmenter have been clarified, but effective methods of fragmenting arbitrary knowledge sources are still not well understood. In the future, we expect to see the Fragment-and-Compose paradigm reapplied extensively. We expect to see goal-pursuing processes applied to text organization and style selection. We expect distinct processes for aggregating fragments and selecting combinations on a preference basis. We also expect a well-developed model of the reader, including inference capabilities and methods for keeping the model up to date as the text progresses. Finally, we expect a great deal of elaboration of the kinds of aggregation performed and of the kinds of considerations to which preference selection responds.</Paragraph>
  </Section>
</Paper>