Meaning Representation and Text Planning.

Christine DEFRISE, IRIDIA, Université Libre de Bruxelles
Sergei NIRENBURG, Center for Machine Translation, Carnegie Mellon University
The data flow in natural language generation (NLG) starts with a 'world' state, represented by structures of an application program (e.g., an expert system) that has text generation needs and an impetus to produce a natural language text. The output of generation is a natural language text. The generation process involves the tasks of a) delimiting the content of the eventual text, b) planning its structure, c) selecting lexical, syntactic and word order means of realizing this structure and d) actually realizing the text using the latter. In advanced generation systems these processes are treated not in a monolithic way, but rather as components of a large, modular generator. NLG researchers experiment with various ways of delimiting the modules of the generation process and control architectures to drive these modules (see, for instance, McKeown, 1985, Hovy, 1987 or Meteer, 1989). But regardless of the decisions about general (intermodular) or local (intramodular) control flow, knowledge structures have to be defined to support processing and facilitate communication among the modules.
The natural language generator DIOGENES (e.g., Nirenburg et al., 1989) was originally designed for use in machine translation. This means that the content delimitation stage is unnecessary, as the set of meanings to be realized by the generator is obtained in machine translation as a result of source text analysis. The first processing component in DIOGENES is, therefore, its text planner, which takes as input a text meaning representation (TMR) and a set of static pragmatic factors (similar to Hovy's (1987) rhetorical goals) and produces a text plan (TP), a structure containing information about the order and boundaries of target language sentences, and the decisions about reference realization and lexical selection.¹ At the next stage, a set of semantics-to-syntax mapping rules is used to produce a set of target-language syntactic structures (we are using the f-structures of LFG; see, e.g., Nirenburg and Levin, 1989). Finally, a syntactic realizer produces a target language text from the set of f-structures.
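Schematically, this data flow can be pictured as a simple pipeline. The following minimal Python sketch is ours and only illustrates the module boundaries and the intermediate data structures (TMR, TP, f-structures); the function names and stub bodies are hypothetical, not DIOGENES code.

# Illustrative sketch of the DIOGENES data flow; all names and stubs are hypothetical.

def plan_text(tmr, pragmatic_factors):
    """Text planner: TMR + static pragmatic factors -> text plan (TP)."""
    return {"sentences": [{"clauses": tmr.get("clauses", [])}], "style": pragmatic_factors}

def map_to_syntax(text_plan):
    """Semantics-to-syntax mapping rules: TP -> LFG-style f-structures."""
    return [{"pred": "stub", "source": s} for s in text_plan["sentences"]]

def realize(f_structures):
    """Syntactic realizer: f-structures -> target-language text."""
    return " ".join("<sentence>" for _ in f_structures)

def generate(tmr, pragmatic_factors):
    return realize(map_to_syntax(plan_text(tmr, pragmatic_factors)))

print(generate({"clauses": ["clause1"]}, {"formality": "high"}))   # -> "<sentence>"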
To produce texts of adequate quality, natural language 
generation needs a sufficiently expressive input language. 
In this paper we discuss several important aspects of the 
knowledge and the processing at the text planning stage of 
a generation system. First, we describe a comprehensive 
language processing paradigm which underlies work on 
both generation and analysis of natural language in our en- 
vironment. Next, we illustrate the features of our meaning 
representation languages, the text meaning representation 
language TAMERLAN and the text plan representation lan- 
guage TPL. Finally, we describe the mechanism of text 
planning in DIOGENES and illustrate the formalism and the 
strategy for acquiring text planning rules. 
¹Text planning in DIOGENES is described in detail in Defrise and Nirenburg, 1989; the language for writing planning rules, in Nirenburg et al., in preparation; lexical selection in DIOGENES is described, e.g., in Nirenburg and Nirenburg, 1988.
1 Text Meaning in Analysis & Generation. 
Understanding a text involves determining its proposi- 
tional content as well as detecting pragmatic meaning el- 
ements, those pertaining to the speech situation and to the 
nature of the discourse producer's goals.² A high-quality
natural language generator must be capable of expressing 
the entire set of meaning types in a natural language. The 
input to a generator must, therefore, express all of the
kinds of meanings to be rendered in the output text. This 
requires a convenient knowledge representation scheme 
for the input, as well as actual knowledge (in the form of 
heuristic selection rules) about interactions between the 
input and elements of the output. 
The acquisition of this knowledge must be informed 
by systematic field work on such linguistic phenomena as 
focus, topic and comment, speech acts, speaker attitudes,
or reference-related phenomena (anaphora, deixis, ellip- 
sis, definite description, etc.). When these phenomena 
are studied in computational linguistics, they are usually 
approached from the standpoint of language understand- 
ing. The types of activities in generation often differ from 
those in analysis (see, e.g., Nirenburg and Raskin, 1987). 
To highlight the requirements of natural language gener-
ation, we will describe a generation-oriented approach to 
speech acts and attitudinal knowledge. 
1.1 Speech Acts. 
We model a text producer/consumer as having, in addition 
to knowledge about the grammar and lexis of a natural 
language and knowledge about the world, an inventory of 
world-related and communication-related goals and plans 
that are known to lead to the achievement of these goals. 
Producing text, then, is understood as a planning process 
to achieve a goal. Being able to achieve goals through 
communication can be seen as broadening the range of 
tools an agent has for attaining its goals. It is important to 
distinguish clearly between the producer's goal and plan 
inventory and a set of active goal and plan instances which
constitute the text producer's agenda. When the decision 
is made to try to achieve a goal by rhetorical means, one
of the available rhetorical plans corresponding to this goal 
is selected for processing. Based on knowledge recorded 
in this plan structure, the agent produces a new, language- 
dependent, structure, a text plan, as an intermediate step 
in producing a text. 
The process of natural language understanding in our 
approach will be modeled as a plan recognition process in 
a similar conceptual architecture. In understanding, it is 
necessary to determine not only the propositional content 
but also the illocutionary force of each utterance. This 
involves reconstructing (on the basis of the propositional 
content and the knowledge of the speech situation) the text 
producer's goals, in order to decide if an utterance constitutes a direct or an indirect speech act, calculating any added non-propositional, implicit elements of meaning.

²We decided to use the term 'producer' for the author of a text and the speaker in a dialog, and the term 'consumer' to indicate the reader of a text or a hearer in a dialog.
In generation, as we have seen, the task is to decide, on 
the basis of the communication situation and knowledge 
of the producer's intentions, whether to achieve a given 
goal via rhetorical means. If so, it is also necessary to 
decide whether it will be realized in the target text directly 
or indirectly. If a direct speech act is chosen, one must 
further determine whether to lexicalize the speech act 
through the use of a performative verb (e.g., promise, 
order, etc.). 
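The cascade of decisions just described can be summarized procedurally. The sketch below is our own gloss; the three boolean arguments stand in for the heuristic decisions discussed above, and the example strings anticipate utterances (1)-(3) below.

# Hypothetical sketch of the generation-side decision cascade for one goal.

def choose_realization(rhetorical, direct, performative):
    if not rhetorical:
        return "physical action (e.g., go close the window oneself)"
    if direct:
        if performative:
            return "direct speech act, lexicalized: 'I order you to close the window.'"
        return "direct speech act: 'Close the window, please.'"
    return "indirect speech act: 'It's cold in here.'"

print(choose_realization(rhetorical=True, direct=False, performative=False))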
The treatment of speech acts is thus always accom- 
plished using links between the set of producer's current 
goals and the propositional content. The order of oper- 
ations, however, differs depending on whether this treat- 
ment is a part of analysis or generation. To illustrate, 
suppose an agent is cold and has a goal to be warm. To 
achieve this goal, we construct a plan tree, with the goal 
be-warm as the root, and various plans of action as sub- 
trees (see Figure 1). 
   be-warm
   |-- self-action (physical action): go close window; put on sweater; ...
   |-- delegation [request-action]:
       |-- point at open window
       |-- point to a sweater
       |-- direct speech act
       |     |-- lexical (performative)
       |     |-- syntactic (mood, tense, ...)
       |-- indirect speech act

Figure 1: The Plan Tree for the Goal 'Be-Warm.'
In analysis, the tree is traversed bottom-up: the con- 
sumer has direct access to the input utterance and uses it 
as a clue to reconstruct the producer's goals. If complete 
extraction of meaning is to be achieved, it is crucial to 
know whether an utterance realizes a direct or an indi- 
rect speech act. In generation, the tree will be traversed
top-down. If the producer's choice is to achieve his goal 
through rhetorical means, he can generate any of the fol- 
lowing three utterances: 
(1) I order you to close the window. 
(2) Close the window, please. 
(3) It's cold in here. 
Now, even though (3) differs in propositional meaning 
from (1) and (2), the distinction between direct speech act 
(1) or (2) and indirect speech act (3) does not matter from 
the point of view of the producer's goals. In TAMERLAN 
the representation of both Close the window and It is cold 
in here will contain a pointer (in TAMERLAN the filler of 
the producer-intention slot of every clause frame) to the
same plan node, in this case, 'request-action.' The fact 
that in the above utterances, the request-action is realized 
as a command or a statement pertains to text, not domain 
planning. 
1.2 Producer Attitude. 
Our reasons for introducing attitudes as an explicit part of 
the representation of the meaning of a natural language 
clause are manifold. In what follows we will review 
three (partially interconnected) reasons. Representing at- 
titudes helps to a) support reasoning about producer goals; 
b) highlight the argumentative structure of a discourse; 
c) provide a convenient vehicle for representing modal
meanings. 
Almost all spoken and written discourse involves the 
participants' opinions, so much so that producing a per- 
fectly objective text is an almost impossible task. Within 
the set of possible goals relating to generating text, the in- 
troduction (explicit or implicit, lexicalized or not) of the 
producer's opinions and points of view serves two goals: 
• modifying the consumer's model of the producer 
by stating facts (including opinions) about the latter 
which are not in principle observable by the con- 
sumer 
• modifying the consumer's opinions by stating pro- 
ducer's opinions about facts of the world (the latter 
can in principle be observed by the consumer) 
The above distinctions only become visible if one de- 
cides to represent attitudes overtly. Once this decision 
is made, it becomes clear that it brings about better de- 
scription possibilities for additional linguistic phenom- 
ena, such as the argumentative structure of discourse. It 
has been observed (e.g., Anscombre and Ducrot, 1983) 
that texts have a well-defined argumentative structure 
which a) reflects the producer's current goals and b) influ- 
ences such processes as the ordering of text components 
and lexical selection in generation. The argumentative 
structure of a text is realized (or, in text understand- 
ing, detected) through linguistic means such as the use 
of scalar adverbs ('only', 'even', 'almost', etc.), connec- 
tives ('but', 'since'), adjectives ('unbearable', 'fascinat- 
ing', etc.). Sets of such lexical items may have to be con- 
sidered equivalent from a purely semantic point of view, 
but different in a facet of their pragmatic effect known 
as argumentative orientation. For example, to illustrate 
the interplay between semantic content and argumenta- 
tive orientation (i.e. the producer's attitude towards an 
event), compare (4) and (5), which have opposite truth 
conditions, but the same pragmatic value -- from both 
(4) and (5) the consumer will infer that the producer re- 
gards Burma as an inefficient sleuth. In this example it is 
sufficient to retain pragmatic information concerning the 
producer's judgment of Burma while the semantic differ- 
ences (induced by the use of 'few' versus 'none at all') 
can be disregarded. However, in other contexts the se- 
mantics will matter much more -- consider, for instance, 
(6) for which there can be no paraphrase with 'no clues at
all.' 
(4) Nestor Burma found few clues. Nobody 
was surprised. 
(5) Nestor Burma found no clues at all. Nobody 
was surprised. 
(6) Nestor Burma found few clues. But it was 
still better than having none at all. 
The difference between (7) and (8), whose truth condi- 
tions are similar, is purely argumentative (or attitudinal) 
-- (7) expresses a positive (optimistic!) attitude, (8) the 
opposite point of view. This example shows how crucial 
the extraction of the argumentative structure is, since it is 
the only clue for the (in)acceptability of (9). 
(7) Nestor has a little money. 
(8) Nestor has little money. 
(9) *Nestor has little money. He wouldn't mind 
spending some on chocolate. 
Finally, we use attitude markers as a means of expressing modality. Traditionally, formal semanticists have extended first order logic to modal logic in order to account for modals. This places the modals at a purely semantic level, as a result of which it is difficult to distinguish between what is observable for both producer and consumer and what is not (opinions, beliefs, etc.). We consider that expressions like 'perhaps,' 'possibly,' 'it is almost certain that' are clues as to what the producer's beliefs and attitudes are towards facts of the world and help the consumer modify or update his model of the producer. It is for the above reasons that we decided to include a detailed specification of producer attitudes in the input specification for generation.
2 The Structure of TAMERLAN.
TAMERLAN is a frame-based representation language that has the following basic entity types: text, clause, relation, proposition, attitude and pointer (to the producer's current plan). A text frame indexes a set of clauses and a set of relations comprising an input text. TAMERLAN clauses delimit the propositional and pragmatic content of target language utterances. Relations represent links among events, objects, or textual objects. In what follows, we will illustrate the structure of TAMERLAN, concentrating on the representation of attitudes and agenda pointers (corresponding to speech acts). An exhaustive definition and description of TAMERLAN is given in Defrise and Nirenburg (in preparation).
2.1 Representation of Attitudes. 
Each TAMERLAN clause contains one or several attitude slots. Each is composed of four facets: the type facet, the value facet, the scope facet, and the 'attributed-to' facet. The possible types of attitudes are:

• epistemic (with values taken from the [0,1] interval to account for expressions like perhaps; the endpoints of the interval intuitively correspond to the values of impossible and necessary);

• evaluative (with values taken from a similar scale, with the endpoints interpreted as, roughly, 'the best' and 'the worst,' the midpoint as 'neutral,' and other intermediate points used to account for expressions like 'fairly interesting');

• deontic (ranging from 'unfair' to 'fair');

• expectation (ranging from 'expected' to 'surprise').
The organization of the above types is similar: their value ranges are all one type of scale. The differences among them are semantic. The above classification is an enhancement of Reichman's treatment of "context spaces" (1985: 56). We use the terminology (if not exactly the spirit) of her distinction among the epistemic, evaluative and deontic issue-type context spaces. Context space is Reichman's term for a discourse segment. The issue context space roughly corresponds to our attitude component, while the non-issue context space provides a shallow taxonomy for discourse segment types (Reichman lists comment, narrative support, and nonnarrative support as the non-issue type values).
Every attitude type has a scale of values associated with it. The value component specifies a region on the appropriate scale. Though the semantics of each scale is different, the set of values is the same: we represent all attitude scales as [0,1] intervals. Thus, the value (> 0.8) on the scale of evaluative-saliency will be lexically realizable through the modifier important(ly), while the same value on the deontic scale will end up being realized as should or ought to.
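To make the scale mechanism concrete, the following fragment sketches, under our own simplifying assumptions, how a region on an attitude scale might be mapped to a closed-class realization; the thresholds and lexical choices echo the examples above and the planning rules in Figure 2 below, but the table itself is illustrative, not DIOGENES data.

# Illustrative mapping from (attitude type, scale value in [0,1]) to a realization.
# Thresholds and lexical choices are our assumptions, not DIOGENES knowledge.

def realize_attitude(att_type, value):
    if att_type == "epistemic":
        if value == 1.0: return ("mood", "declarative")
        if value == 0.0: return ("mood", "negative")
        if value == 0.5: return ("modifier", "perhaps")
    if att_type == "evaluative":          # e.g., the evaluative-saliency scale
        if value > 0.8:  return ("modifier", "important(ly)")
        if value < 0.2:  return ("modifier", "unfortunately")
    if att_type == "deontic":
        if value > 0.8:  return ("modal", "should / ought to")
    return ("modifier", None)

print(realize_attitude("evaluative", 0.9))   # -> ('modifier', 'important(ly)')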
The attributed-to component of the attitude simply binds the attitude to a particular cognitive agent (which may be the producer of the utterance or some other known or unknown agent), who is responsible for the content of the utterance. This is important for understanding reported speech and, more generally, polyphony phenomena in the sense of Ducrot (1984). Ducrot's theory of polyphony, an approach to extended reported speech treatment, provides a framework for dealing with the interpretation of a number of semantic and pragmatic phenomena, e.g., the difference in meaning and use between 'since' and 'because,' certain particularities of negative sentences, etc.
The scope of the attitude representation pinpoints the entity toward which this attitude is expressed. The values of the scope can be the whole proposition, a part of it, or another attitude value with its scope. In understanding the text, the consumer notes the attitudes of the producer to the content. The attitudes can be expressed toward events, objects, properties or other attitudes (see (10) through (13), respectively).
(10) The train, unfortunately, left at 5 p.m. 
(11) This book is interesting. 
(12) The meeting was reprehensibly short. 
(13) Unfortunately, I ought to leave. 
McKeown and Elhadad (1989) also treat argumentative scales and attitudinals in a generation environment. They, however, consider these phenomena as part of syntax, thus avoiding the need to add a special pragmatic component to their system. This decision is appropriate from the point of view of minimizing the changes in an existing generator due to the inclusion of attitude information. However, if compatibility is not an overriding concern, introducing a separate component is a more appropriate choice.
2.2 Representation of Producer's Goals. 
We overtly refer to the producer's goals in each TAMER- 
LAN clause, using the 'producer-intention' slot. The filler 
of this slot is a pointer to an action or a plan in the pro- 
ducer agenda. The producer agenda, which is a necessary 
background component of text planning, contains repre- 
sentations of active goal and plan instances. Since we are 
interested in discourse situations, we take into account 
only the goals and plans that a) presuppose the situation with at least two cognitive agents and b) relate to rhetorical (and not physical-action) realizations of goals.
To illustrate our use of the 'producer-intention' slot in 
text generation, consider the task of generating from a 
TMR which we will gloss as 'The speaker promises to 
return to his current location at 10 o'clock.' Depending 
on the context and other parameters, the producer may 
decide to generate I will return at 10 or I promise to return
at 10. In the latter case the decision is made to realize the 
meaning of the speech act lexically. The mechanism for 
this is as follows: traversing the producer-intention slot 
the producer gets to the relevant point in the agenda, which 
is the (primitive) plan PROMISE.³ Since the realization rules for speech acts prescribe their realization as first-person-singular clauses with the lexical realizations of the names of appropriate speech plans (acts), the natural language clause I promise X is produced, and, eventually, X is expanded into the subordinate natural language clause to return at 10. The central point is that the former natural language clause is the realization of a pointer in the input, not an entire text representation clause.⁴

³Or THREAT, etc., as the case may be.
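A minimal sketch of this mechanism follows, assuming a simple table from (primitive) speech plans to their performative realizations; the data structures are our own simplification of TAMERLAN and TPL, not the actual DIOGENES realization rules.

# Hypothetical sketch: realizing a producer-intention pointer as a performative clause.

SPEECH_ACT_REALIZATION = {
    "*promise": "I promise",    # other primitive speech plans (e.g., THREAT) would be listed here
    "*statement": None,         # plain statements are not lexicalized as performatives
}

def realize_clause(producer_intention, proposition_text):
    verb = SPEECH_ACT_REALIZATION.get(producer_intention)
    if verb:                                 # e.g., "I promise to return at 10"
        return "%s %s" % (verb, proposition_text)
    return proposition_text                  # e.g., "I will return at 10"

print(realize_clause("*promise", "to return at 10"))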
3 The Text Plan Language (TPL). 
The text plan is an intermediate data structure which is obtained through the application of the text planning rules to TMRs. TPs serve as input to the semantics-to-syntax mapping rules that produce f-structures, which in turn serve as input to the syntactic realization module. While the TMR does not specify the boundaries and order of sentences in the output text, the ways of treating co-reference, or the selection of the appropriate lexical units (both open-class items and such closed-class ones as realizations of producer attitudes and discourse cohesion markers, connectives) for the target text, the TP contains this information. Additionally, the text planning stage also produces values for such features as definiteness, tense and mood.
A text plan is a hierarchically organized set of frames.
The root of this hierarchy is the text structure frame which, 
notably, contains the slot "has-as-part" whose value is an 
ordered set of plan sentence frames, S_i. A plan sen- 
tence frame contains a slot "subtype" whose values in- 
clude "simple," "complex," and "and." Another slot con- 
tains an ordered list of plan clause frames comprising the 
sentence. A plan clause frame contains the realization, 
through lexical choice and feature values, of both propo- 
sitional and pragmatic meanings in the input. It lists the 
realizations of the head of the proposition and pointers 
to plan-role frames that contain realizations of the case 
roles of this proposition. (Sometimes the filler of a case 
role slot in a plan clause will be a pointer to another plan 
clause.) Realizations of producer attitudes pertaining to 
the clause and of cohesion markers which realize some of 
TAMERLAN relations are also included, as is an indication 
of whether a given clause is a subordinate clause or a ma- 
trix clause in a complex sentence. The plan role frames 
are composed in a similar fashion. 
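As an illustration of this hierarchy only (the actual TPL frames appear in examples (19)-(21) below), the nesting can be pictured with a dictionary encoding of our own:

# Schematic of the TP hierarchy in our own dictionary encoding; see (19)-(21)
# below for the actual TPL frames.

text_plan = {
    "has-as-part": [                          # ordered plan sentence frames S_i
        {"subtype": "simple",
         "clauses": [                         # ordered plan clause frames
             {"head": "be back",              # realization of the proposition head
              "features": {"tense": "future", "mood": "declarative"},
              "roles": {"agent": {"head": "PRO"},            # plan role frames
                        "destination": {"head": "*ellided*"}}}]}]}

print(text_plan["has-as-part"][0]["clauses"][0]["head"])      # -> "be back"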
4 The Mechanism of Text Planning. 
Text planning can be understood as a mapping problem 
between sets of expressions in TAMERLAN and TPL. The 
mapping is achieved through the application of heuristic 
rules of the situation-action kind. The rules take as input elements of the input representation and map them into elements of a text plan, based on the various meaning components in the input and their combinations. In addition,
the heuristic rules take into account the state of affairs in the communication situation, modelled in our system as a static set of pragmatic factors that determine the stylistic slant of a text. Our pragmatic factors are a subset of the rhetorical goals suggested by Hovy (1987) and include such factors as simplicity, formality, colorfulness, etc. Finally, the heuristic rules can take into account elements of the text plan under construction, namely those elements that were produced by previously triggered rules.

⁴In natural language understanding, one would expect to obtain as input some full-fledged natural language clauses whose real meaning is purely pragmatic. Consider, for instance, the sentence "I hereby inform you that X." The meaning of "I hereby inform you" will be represented as a filler of the "producer-intention" slot or as an attitude.
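A skeletal rendering of this situation-action regime follows; the rule encoding and the single illustrative rule (keyed to a 'formality' factor) are our own simplifications, not the DIOGENES rule language, and the real rules operate on TAMERLAN and TPL structures rather than Python dictionaries.

# Minimal sketch of situation-action rule application in text planning.
# Each rule tests the TMR, the static pragmatic factors and the partial TP,
# and, when it matches, contributes material to the TP under construction.

def plan(rules, tmr, pragmatic_factors):
    text_plan = {}
    for matches, act in rules:
        if matches(tmr, pragmatic_factors, text_plan):
            act(text_plan)
    return text_plan

# Illustrative rule: under a high-formality factor, realize the promise performatively.
formal_promise = (
    lambda tmr, prag, tp: tmr.get("producer-intention") == "#promise1"
                          and prag.get("formality") == "high",
    lambda tp: tp.update({"C1": {"head": "promise1"}}),
)

print(plan([formal_promise], {"producer-intention": "#promise1"}, {"formality": "high"}))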
4.1 Control. 
In our generator, we adopt a blackboard model of control. This means that the work in the system is performed by a set of semi-autonomous knowledge sources which apply various heuristic rules or run specialized algorithms in a distributed fashion. There is no centralized sequential model of control. The activated knowledge source instances are collected in the processing agenda(s). After a knowledge source instantiation "fires," its results are listed on one of several public data structures, blackboards, supported by our system. The blackboards are public in the sense that subsequent knowledge source instantiations can draw knowledge needed for their application from the blackboard spaces in which the outputs of other knowledge source instances are recorded. Elements of the TMR and TP are recorded on one of the blackboards, as are various intermediate processing results, such as those produced in the process of lexical selection.
Efficiency of a computational architecture can be 
controlled and improved by introducing special control 
knowledge sources which perform manipulations of the 
contents of the processor agendas, including such opera- 
tions as obviation and retraction. See Nirenburg, Nyberg 
and Defrise, 1989 for a sketch of the text plan controller 
in our generation system. 
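The following sketch captures the flavor of this control regime with hypothetical names: knowledge source instances are taken off an agenda, fire one at a time, and record their results on a public blackboard that later instantiations can read. It is an illustration only, not the controller described in Nirenburg, Nyberg and Defrise (1989).

# Minimal blackboard-style control loop (illustrative sketch).

from collections import deque

def run(agenda, blackboard):
    agenda = deque(agenda)
    while agenda:
        knowledge_source = agenda.popleft()
        new_entries, new_ks_instances = knowledge_source(blackboard)
        blackboard.update(new_entries)        # results become publicly visible
        agenda.extend(new_ks_instances)       # firing may activate further KS instances
    return blackboard

# A toy knowledge source that reads the TMR space and posts a TP fragment:
def plan_sentence_boundaries(blackboard):
    clauses = blackboard.get("TMR", {}).get("clauses", [])
    return {"TP": {"has-as-part": [{"clauses": clauses}]}}, []

print(run([plan_sentence_boundaries], {"TMR": {"clauses": ["clause1"]}}))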
5 From Meaning Representations to Plans. 
In this section we show how decisions concerning the treatment of agenda pointers are reflected in the text plan and how decisions about realization of attitudinals are made. We will illustrate the treatment of agenda pointers by showing the TMRs and their corresponding TPs for a set of examples. The treatment of attitudinals will be illustrated by listing the attitude-related text planning rules (a much more comprehensive list of text planning rules is included in Defrise and Nirenburg, 1989). In this fashion we will be able to illustrate both knowledge representation languages and the planning rule language.
5.1 Planning the Realization of Agenda Pointers. 
Consider (14) -- (16) as the desired target for generation. 
(14) I will definitely be back by 10. 
(15) I will be back by 10. 
(16) I promise to be back by 10. 
The corresponding set of (somewhat simplified) 
TAMERLAN structures is in (17) and (18), since the struc- 
tures for (15) and (16) will be identical -- indeed, the sen- 
tences are synonymous and differ only in what is known 
as register characteristics, in this case, the level of formal- 
ity. Taking the above sets as input, the text planner will 
produce the text plans in (19) -- (21). The differences 
between the realizations of the input (18) as either (15) or 
(16) will be reflected in the text plans, see (20) and (21). 
(17)
(make-frame text
   (clauses (value clause1))
   (relations
      (value relation1)))
(make-frame clause1
   (proposition (value #return1))
   (attitude (value attitude1))
   (producer-intention (value #statement1)))
(make-frame #return1
   (is-token-of (value *return))
   (phase (value begin))
   (iteration (value 1))
   (duration (value 1))
   (time (value (< 10)))
   (agent (value *producer*))
   (destination (value #statement1.space)))
(make-frame attitude1
   (epistemic (value 1)
      (scope #return1)
      (attributed-to *producer*)))
(make-frame #statement1
   (is-token-of (value *statement))
   (time (value #statement1.time))
   (space (value #statement1.space)))
(make-frame relation1
   (type (value intention-before))
   (arguments
      (first (value #statement1.time))
      (second (value #return1.time))))

(18)
(make-frame text
   (clauses (value clause1))
   (relations
      (value relation1)))
(make-frame clause1
   (proposition (value #return1))
   (producer-intention (value #promise1)))
(make-frame #return1
   (is-token-of (value *return))
   (phase (value begin))
   (iteration (value 1))
   (duration (value 1))
   (time (value (< 10)))
   (agent (value *producer*))
   (destination (value #statement1.space)))
(make-frame #promise1
   (is-token-of (value *promise))
   (time (value #promise1.time))
   (space (value #promise1.space)))
(make-frame relation1
   (type (value intention-before))
   (arguments
      (first (value #promise1.time))
      (second (value #return1.time))))
(19)
(TSF
   (has-as-part S1))
(S1
   (type simple)
   (clauses C1))
(C1
   (head be back)
   (time
      (head 10)
      (features
         (time-relation before)))
   (modifier
      (head definitely))
   (features
      (tense future)
      (mood declarative))
   (agent r1)
   (destination r2))
(r1 (head PRO)
    (features
       (number singular)
       (person first)))
(r2 (head *ellided*))

(20)
(TSF
   (has-as-part S1))
(S1
   (type simple)
   (clauses C1))
(C1
   (head be back)
   (features
      (tense future)
      (mood declarative))
   (time
      (head 10)
      (features
         (time-relation before)))
   (agent r1)
   (destination r2))
(r1 (head PRO)
    (features
       (number singular)
       (person first)))
(r2 (head *ellided*))

(21)
(TSF
   (has-as-part S1))
(S1
   (type complex)
   (clauses (C1 C2)))
(C1
   (head promise1)
   (features
      (tense present)
      (mood declarative))
   (agent r1)
   (theme C2))
(r1 (head PRO)
    (features
       (number singular)
       (person first)))
(C2
   (head be back)
   (time
      (head 10)
      (features
         (time-relation before)))
   (features
      (tense future)
      (mood declarative))
   (agent r2)
   (destination r3))
(r2 (head PRO)
    (features
       (number singular)
       (person first)))
(r3 (head *ellided*))
5.2 Text Planning Rules for Attitudes. 
Text planning rules in DIOGENES deal with a variety of
phenomena. Some are devoted to text structure proper-- 
the number and order of sentences and clauses to express 
the meanings of input; clause dependency structures, etc. 
Others deal with treatment of reference --- pronominal- 
ization, ellipsis, etc. Still others take care of lexical selec- 
tion, determine tense and mood features of the target text, 
etc. A set of text planning rules devoted to realization of 
producer attitudes is presented in Figure 2. 
A1. IF   (and (= clause_i.attitude.type evaluative)
              (= clause_i.attitude.value low)
              (= clause_i.attitude.scope clause_i.proposition))
    THEN (add-unit-filler C_i 'attitude 'unfortunately)

A2. IF   (and (= clause_i.attitude.type epistemic)
              (= clause_i.attitude.value 1)
              (= clause_i.attitude.scope clause_i.proposition))
    THEN (add-unit-facet-filler C_i 'features 'mood 'declarative)

A3. IF   (and (= clause_i.attitude.type epistemic)
              (= clause_i.attitude.value 0)
              (= clause_i.attitude.scope clause_i.proposition))
    THEN (add-unit-facet-filler C_i 'features 'mood 'negative)

A4. IF   (and (= clause_i.attitude.type epistemic)
              (= clause_i.attitude.value 0.5)
              (= clause_i.attitude.scope clause_i.proposition))
    THEN (add-unit-filler C_i 'attitude 'perhaps)

Figure 2: Text planning rules for 'producer attitudes'.
Rule A1 deals with an attitude of the evaluative type; 
rules A2 through A4 with attitudes of the epistemic type. 
In the rules, the if clauses check the values in TMR and, 
depending on the match, either add features to TP or add a
lexical realization for the attitudinal meaning (as in Rule 
A4). 
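The effect of these rules can also be rendered as a short Python transcription (ours; it assumes that add-unit-filler adds a slot filler and add-unit-facet-filler adds a feature value to the plan clause frame C_i, and it collapses the four rules into one function):

# Our own transcription of rules A1-A4; the clause argument is a (simplified) TMR clause
# and plan_clause the TP clause frame being built.

def apply_attitude_rules(clause, plan_clause):
    att = clause["attitude"]                                   # type, value, scope facets
    if att["scope"] != clause["proposition"]:
        return plan_clause
    t, v = att["type"], att["value"]
    if t == "evaluative" and v == "low":                       # A1
        plan_clause["attitude"] = "unfortunately"
    elif t == "epistemic" and v == 1:                          # A2
        plan_clause.setdefault("features", {})["mood"] = "declarative"
    elif t == "epistemic" and v == 0:                          # A3
        plan_clause.setdefault("features", {})["mood"] = "negative"
    elif t == "epistemic" and v == 0.5:                        # A4
        plan_clause["attitude"] = "perhaps"
    return plan_clause

print(apply_attitude_rules(
    {"proposition": "#return1",
     "attitude": {"type": "epistemic", "value": 1, "scope": "#return1"}},
    {}))

For the TMR in (17), for instance, the epistemic attitude with value 1 licenses the declarative mood of text plan (19).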
6 Status and Future Work.
In the DIOGENES project we adopt the methodological attitude of developing the generator functionalities in a breadth-first fashion. In other words, unlike many other projects, DIOGENES does not aim to describe exhaustively a specific linguistic phenomenon (e.g., negation, anaphora, aspect, scope of quantifiers) or type of processing (e.g., text planning, lexical selection, syntactic realization) before proceeding to the next one. We prefer instead to go for a complete functioning system which contains all (or, in practice, most) of the above components and covers
all (or most) of the above phenomena. It is clear that, 
at the beginning, each (or many) of these components 
is somewhat incomplete, and not every phenomenon is 
described in sufficient detail. However, this methodol- 
ogy allows us to benefit from a complete experimentation 
environment and an open-ended architecture that facili- 
tates the addition of knowledge to the system as well as 
testing and debugging. At present we have a working pro- 
totype text planning and generation system with narrow 
coverage. We are working on expanding the knowledge 
needed for achieving a deeper level of analysis of each of 
the linguistic phenomena covered in the system. 
Acknowledgements. 
Many thanks to the members of the DIOGENES project, 
especially Eric Nyberg, and to Ken Goodman for useful 
discussions about the material and its presentation. The 
first author was supported in conducting the research re- 
ported in this document by the Belgian National Incentive 
Program for Fundamental Research in Artificial Intelli- 
gence initiated by the Belgian state -- Prime Minister's 
Office -- Science Policy Programming. The scientific 
responsibility is assumed by the authors. 

Bibliography

Anscombre, J.-C. and O. Ducrot. 1983. L'argumentation dans la langue. Brussels: Mardaga.

Defrise, C. and S. Nirenburg. 1989. Aspects of Text Planning. 
CMU-CMT Memorandum. August. 

Defrise, C. and S. Nirenburg (in preparation). Aspects of Text 
Meaning. Center for Machine Translation, Carnegie Mellon 
University. 

Ducrot, O. 1984. Polyphonie. In Lalies, 4.

Hovy, E. 1987. Generating Natural Language under Pragmatic 
Constraints. Yale University Ph.D. Dissertation. 

McKeown, K. 1985. Text Generation. Cambridge: Cambridge
University Press. 

McKeown, K. and M. Elhadad. 1989. A Comparison of Surface 
Language Generators: A Case Study in Choice of Connectives. 
MS. Columbia University. 

Meteer, M. 1989. The Spokesman Natural Language Generation System. Technical Report 7090. Bolt Beranek and Newman Inc. July.

Nirenburg, S. and V. Raskin. 1987. The Subworld Concept 
Lexicon and the Lexicon Management System. Computational 
Linguistics, Volume 13, Issue 3-4. 

Nirenburg, S., E. Nyberg, R. McCardell, S. Huffman, E. Ken- 
schaft and I. Nirenburg. 1988. Diogenes-88. Technical Report 
CMU-CMT-88-107. Carnegie Mellon University. June. 

Nirenburg, S. and L. Levin. 1989. Knowledge Representation 
Support. Machine Translation, 4, pp. 25 - 52. 

Nirenburg, S. and I. Nirenburg. 1988. A Framework for Lexical Selection in Natural Language Generation. Proceedings of COLING-88.

Nirenburg, S., E. Nyberg and C. Defrise. 1989. Text Planning 
with Opportunistic Control. Technical Report CMU-CMT-88- 
113. Carnegie-Mellon University. June. 

Nirenburg, S., E. Nyberg and C. Defrise. (In preparation) The DIOGENES Natural Language Generation System.

Reichman, R. 1985. Getting Computers to Talk Like You and 
Me. Cambridge, MA: MIT Press. 
