File Information
File: 05-lr/acl_arc_1_sum/cleansed_text/xml_by_section/metho/00/w00-1010_metho.xml
Size: 31,814 bytes
Last Modified: 2025-10-06 14:07:25
<?xml version="1.0" standalone="yes"?> <Paper uid="W00-1010"> <Title>Social Goals in Conversational Cooperation</Title> <Section position="3" start_page="0" end_page="84" type="metho"> <SectionTitle> 1 (Merritt, 1976) </SectionTitle> <Paragraph position="0"> 2 (Green and Carberry, 1999) shared goals), intention-based approaches leave unexplained why a participant should bother to be cooperative, both at the conversational and at the behavioral level. In order to overcome these difficulties, (Traum and Allen, 1994) claim that speech acts pose obligations on the hearer: obligations are pro-attitudes which provide the hearer with a motivation to act, even if he is not - strictly speaking - cooperating with the speaker. Elaborating this proposal, (Poesio and Traum, 1998) propose to add obligations to the illocutionary effect of speech acts: for instance, a (successful) question would pose on the addressee the obligation to answer; and, in general, a speech act poses the obligation to ground it.</Paragraph> <Paragraph position="1"> While we agree with (Traum and Allen, 1994) that cooperation between agents who are not part of a group has to be explained by some mechanism which obliges an agent to answer - at least to refuse explicitly - we want to go deeper into the notion of obligation and try to show that it is strictly related to that of intention.</Paragraph> <Paragraph position="2"> In order to explain obligations, we resort to the notion of social goals, starting from (Goffman, 1981)'s sociolinguistic analysis of interactions. We argue that, in non-cooperative situations, social goals provide agents with the motivation for committing to other agents' communicated goals. As shown by (Brown and Levinson, 1987), an agent has the social goal of taking into account the face of other people (and his own as well); this concern generates complementary needs for the requester and for the requestee. From the requester's point of view, it results in the production of complex linguistic forms aimed at reducing the potential offence intrinsic to a demand to act (conversationally or behaviorally); from the requestee's point of view, while acceptance normally addresses the requester's potential offence by a display of good-tempered feelings, any refusal at the conversational or behavioral level constitutes in turn a potential offence to the requestee's face, and sets up the social need for the refusing agent to act in order to nullify this potential offence (Goffman, 1981).</Paragraph> <Paragraph position="3"> Unlike obligations, social goals influence actions in an indirect way: in order to evaluate the effects of an action on his interlocutor, an agent has to make a tentative prediction of his reaction (anticipatory coordination) (Castelfranchi, 1998). This prediction allows the agent to take the partner's possible reaction into account when planning his next (domain or linguistic) action.
Social goals intervene as preferences during the action selection phase, by leading the planning agent to choose the actions which minimize the offence to the partner and address the potential offence conveyed by a refusal.</Paragraph> </Section> <Section position="4" start_page="84" end_page="85" type="metho"> <SectionTitle> 2 The Interactional Framework </SectionTitle> <Paragraph position="0"/> <Section position="1" start_page="84" end_page="84" type="sub_section"> <SectionTitle> 2.1 Goals and Preferences </SectionTitle> <Paragraph position="0"> We assume that every agent A has a set of goals G and a set of preferences P towards states of affairs. Besides, an agent has at his disposal a set of action operators (recipes for achieving domain and linguistic goals, corresponding to behavioral and conversational cooperation) organized in a hierarchical way.</Paragraph> <Paragraph position="1"> The preferences of an agent are expressed as functions which map states, represented as sets of attribute-value pairs, to real numbers; an overall utility function, consisting of the weighted sum of the individual functions, expresses the utility of reaching the state depicted by a certain configuration of attributes, following multi-attribute utility theory (Haddawy and Hanks, 1998).</Paragraph> <Paragraph position="2"> Goals provide the input to the planning process; in addition, they can appear in the preferences of the agent, i.e., they can be related to a utility function which evaluates the expected utility of achieving them.3 On the basis of his goals and of the recipes he knows, an agent builds a set of plans, by selecting the recipes which have among their effects one (or more) of the goals in the set.4 The planner we use is a modification of the DRIPS decision-theoretic hierarchical planner (Haddawy and Hanks, 1998). The planning process starts by applying all selected recipes to the current state and recursively expands the partial plans until the appropriate level of detail is reached. When the planning algorithm concludes the refinement of the input recipes, it returns the preferred plan, i.e., the one with the highest expected utility: the agent becomes committed to that plan, which constitutes his current intention. The use of preferences allows a plan to be evaluated not only with respect to the fact that it achieves the goal it has been built for, but also with respect to its side effects (for instance, consuming fewer resources).</Paragraph> </Section>
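To make the weighted-sum formulation concrete, the following is a minimal sketch of how a state and a multi-attribute utility function of this kind could be represented. The attribute names, the individual preference functions and the weights are illustrative assumptions, not the paper's actual model.

```python
# Sketch of a multi-attribute utility function as a weighted sum of
# individual preference functions (after Haddawy and Hanks, 1998).
# States are sets of attribute-value pairs; every name and weight below
# is a hypothetical example.

# A state is a plain mapping from attribute names to values.
State = dict

def f_time(state: State) -> float:
    """Preference for not wasting time: utility decreases with elapsed time."""
    return -float(state.get("time", 0))

def f_resources(state: State) -> float:
    """Preference for not consuming resources."""
    return -float(state.get("res", 0))

def f_goal(state: State) -> float:
    """Preference for having achieved the agent's own goal (0 or 1)."""
    return float(state.get("goal_achieved", 0))

# Individual preference functions paired with their weights.
PREFERENCES = [
    (1.0, f_time),
    (2.0, f_resources),
    (10.0, f_goal),
]

def utility(state: State) -> float:
    """Overall utility: weighted sum of the individual preference functions."""
    return sum(w * f(state) for w, f in PREFERENCES)

if __name__ == "__main__":
    s1 = {"time": 3, "res": 1, "goal_achieved": 1}
    s2 = {"time": 1, "res": 0, "goal_achieved": 0}
    print(utility(s1), utility(s2))  # 10 - 3 - 2 = 5.0 vs 0 - 1 - 0 = -1.0
```

Because side effects such as elapsed time and consumed resources enter the sum with their own weights, two plans that achieve the same goal can still receive different utilities, which is exactly what lets the planner prefer the cheaper one.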
<Section position="2" start_page="84" end_page="85" type="sub_section"> <SectionTitle> 2.2 Anticipatory Coordination and Adoption </SectionTitle> <Paragraph position="0"> The planning situation depicted above becomes more complex when two or more agents interact. In particular, a goal of agent A may become known to agent B; a special occurrence of this situation arises when A has explicitly asked B for help. If this is the case, it is possible that agent B comes to choose a plan to satisfy this goal, even if it does not yield any direct utility to him.</Paragraph> <Paragraph position="1"> Notice that if an agent evaluated the utility of a plan for achieving a goal that has been requested by another agent only on the basis of its immediate outcome, he would never choose that plan in a non-cooperative setting: performing an action for achieving another agent's goal often results only in a negative utility, since the side effects of the action cannot but affect resources and time. The reason why B adopts a partner's goal is the fact that the satisfaction of an adopted goal can have an indirect utility for B in the light of A's reaction. Here, the ability of an agent to predict the potential reactions of another agent is exploited to decide whether it is worthwhile for him to commit to the satisfaction of the other agent's goal.</Paragraph> <Paragraph position="2"> 3 Not all goals are among the preferred states of affairs, since there are instrumental goals which arise as a consequence of the intention to achieve some higher-level goal.</Paragraph> <Paragraph position="3"> 4 In the planning process, we distinguish between primary effects, which are the goals that led to the selection of a given recipe, and side effects, i.e., all other effects of the recipe.</Paragraph> <Paragraph position="4"> In order to evaluate how the partner's reaction affects his own preferences, like not offending the partner and other social goals, an agent evaluates the utility of a plan by considering the world states resulting from the partner's reaction (one-level lookahead), both in the case where he has committed to the partner's goals and in the case where he has decided that they are not worth pursuing.</Paragraph> <Paragraph position="5"> The DRIPS planner has been modified to implement the following process of intention formation in interactions with other agents (see figure 1): 1. adoption: if A communicates to B a goal gA which he wants B to achieve, then the current set of B's goals, GB, becomes G'B, the union of {gA} and GB.</Paragraph> <Paragraph position="6"> 2. planning: B builds the set of plans PB which aim at achieving (all or some of) the goals in G'B (in this way the plans which also achieve gA are compared with those which do not).</Paragraph> <Paragraph position="7"> 3. anticipatory coordination: from the state resulting from each plan pi in PB, B considers the possible reaction of A: the world state resulting from the reaction becomes the new outcome of pi.</Paragraph> <Paragraph position="8"> 4. preference-driven choice: B chooses the pi in PB whose outcome maximizes his utility.</Paragraph>
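The four steps above can be read as a simple decision procedure. The sketch below shows one possible rendering of that loop; the plan representation, the reaction model and all names are hypothetical stand-ins for the modified DRIPS planner, which is not reproduced here.

```python
# Illustrative sketch of intention formation with one-level lookahead:
# adoption, planning, anticipatory coordination, preference-driven choice.
# Plans are reduced to functions from states to states; 'react' stands in
# for the model used to anticipate A's reaction.

from typing import Callable, Dict, List, Set

State = Dict[str, float]
Plan = Callable[[State], State]

def form_intention(goals_b: Set[str],
                   goal_a: str,
                   build_plans: Callable[[Set[str], State], List[Plan]],
                   react: Callable[[State], State],
                   utility: Callable[[State], float],
                   state: State) -> Plan:
    # 1. adoption: add A's communicated goal to B's current goals.
    goals = goals_b | {goal_a}
    # 2. planning: build plans for (all or some of) the goals, so that
    #    plans which also achieve A's goal compete with plans which do not.
    candidates = build_plans(goals, state)
    # 3. anticipatory coordination: from each plan's outcome, simulate
    #    A's reaction; the reacted state becomes the plan's new outcome.
    outcomes = [(plan, react(plan(state))) for plan in candidates]
    # 4. preference-driven choice: keep the plan whose (reacted) outcome
    #    maximizes B's utility.
    best_plan, _ = max(outcomes, key=lambda po: utility(po[1]))
    return best_plan
```

Step 3 is what distinguishes this loop from plain utility maximization: utility is computed on the state after the partner's predicted reaction, so social effects such as offence can tip the balance even when adopting the goal has no direct payoff for B.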
<Paragraph position="9"> For a detailed description of the planning algorithm with anticipatory coordination, see (Boella, 2000).</Paragraph> <Paragraph position="10"> In the following Section, we will show how social obligations arise spontaneously in a model of conversational interaction which exploits the planning framework described above.</Paragraph> </Section> </Section> <Section position="5" start_page="85" end_page="88" type="metho"> <SectionTitle> 3 Social Goals and Conversational Cooperation </SectionTitle> <Paragraph position="0"/> <Section position="1" start_page="85" end_page="87" type="sub_section"> <SectionTitle> 3.1 Social Goals </SectionTitle> <Paragraph position="0"> In this section, we exploit the framework described above to model the complex dynamics of goals and social preferences that underlie examples like [1]. In particular, we consider the possibility that the partner is offended by the agent's response to a request. The offence is not modeled as a direct effect of an action of the agent. Instead, during the planning phase, the agent makes a tentative prediction of the partner's attitude in the state where he is faced with a refusal, in order to evaluate how this state complies with his preference for not offending: the partner is offended as a result of his reaction to the agent's refusal.</Paragraph> <Paragraph position="1"> In our model, the preference for not offending the partner corresponds to a social goal5 of an agent: this preference doesn't constitute an input to planning, but, by being embodied in the utility function of the agent, it contributes to plan selection, by promoting the plans which do not have offending as a consequence. This is in line with (Goffman, 1967)'s claim that &quot;Ordinarily, maintenance of face is a condition of interaction, not its objective&quot; (p. 12).</Paragraph> <Paragraph position="2"> Some authors ((Schegloff and Sacks, 1973), (Coulthard, 1977)) have characterized the organization of conversation in terms of prototypical pairs, adjacency pairs. In our model, the existence of adjacency pairs is not motivated by the action of specific grounding norms, or obligations.6 Rather, these exchanges are explained by the interplay of the communicative intentions of the participants, and by their ability to recognize the intentions of the interlocutors (Ardissono et al., 2000).</Paragraph> <Paragraph position="3"> 5 In (Clark, 1996)'s terminology, goals like being polite are called interpersonal goals.</Paragraph> <Paragraph position="4"> 6 &quot;Given a speaker's need to know whether his message has been received, and if so, whether or not it has been passably understood, and given a recipient's need to show that he has received the message and correctly - given these very fundamental requirements of talk as a communication system - we have the essential rationale for the existence of adjacency pairs, that is, for the organization of talk into two-part exchanges&quot; ((Goffman, 1981), p. 12).</Paragraph> <Paragraph position="5"> [Figure 1: the process of intention formation: 1. adoption; 2. planning; 3. anticipatory coordination (simulation of A's reaction); 4. preference-driven choice.]</Paragraph>
<Paragraph position="6"> In general, the preference for not offending, which encompasses conversational phenomena like request-response pairs, is motivated by the requestee's goal of displaying a good-tempered acceptance of the request itself: in (Goffman, 1981)'s terms, communicative exchanges are subject to a set of &quot;constraints regarding how each individual ought to handle himself with respect to each of the others, so that he not discredit his own tacit claim to good character or the tacit claim of the others that they are persons of social worth (...)&quot; (p. 16). Within an interaction, agents are aware of the fact that their actions have social effects, like conveying some information about their character and about their attitude towards the partner: &quot;An act is taken to carry implications regarding the character of the actor and his evaluation of his listeners, as well as reflecting on the relationship between him and them&quot; ((Goffman, 1981), p. 21). As a consequence, agents are very cautious in the use of the expressive means they have at their disposal, namely verbal actions: besides monitoring the partner's reactions, they try to anticipate them with the aim of not offending the partner.</Paragraph> <Paragraph position="7"> The preference for not offending holds as well in the circumstances where an agent is forced to refuse his cooperation by the impossibility of executing the appropriate action to achieve the partner's goal. However, if this is the case, the requestee has to cope with the additional fact that a simple, negative answer can be mistakenly taken to count as a refusal to cooperate at all: [4] A: Have you got a cigarette? B: No. For this reason, the refusing agent is likely to provide the requester with an acceptable reason, i.e. a remedy or account (Levinson, 1983), when the request is to be turned down.</Paragraph> <Paragraph position="8"> What remains to be explained is why requests at the behavioral level seem to pose fewer constraints on the addressee, if compared to requests at the conversational level: provided that the interactants don't have shared goals, it is a matter of fact that it is easier to refuse a request for money (see example [3])7 than a request to tell the time (see example [2]).</Paragraph> <Paragraph position="9"> In particular, conversational goals often force the hearer to satisfy them: it is aggressive not to answer at all or to ignore the speaker.</Paragraph> <Paragraph position="10"> The reason why paying attention to people, listening and understanding are not easily refused is that they are low-cost actions, or &quot;free goods&quot; in (Goffman, 1967)'s terminology, so no one can refuse them without threatening the speaker's face and offending him. A refusal at the conversational level - ignoring a potential partner and not even responding to his verbal request - constitutes a menace to the face of the requester, so it can hardly be justified.</Paragraph> <Paragraph position="11"> 7 We thank the anonymous reviewers for the observation that this example lends itself to a deeper analysis, involving further social and psychological parameters.
However, we will not discuss the example here for space reasons.</Paragraph> <Paragraph position="12"> Up to this point no explicit obligation is created: the &quot;obligation to act&quot; depends on the utility of the action needed to establish cooperation; if the cost of the action is low (e.g., a conversational action), the refusal to execute it can be motivated in the requester's eyes only by a requestee's negative attitude towards him. So, the requester, as a result of his ability to infer the requestee's reasoning, will be offended by a refusal; the preference for not threatening the face of the partners and preserving one's own social face normally makes the utility of offences negative, thus leading requestees to avoid refusals. At the same time, this analysis, by making explicit the underlying motivations for the preference for a certain type of response, accounts for the existence of preferred and dispreferred second turns in adjacency pairs.</Paragraph> </Section> <Section position="2" start_page="87" end_page="88" type="sub_section"> <SectionTitle> 3.2 Conversational Cooperation </SectionTitle> <Paragraph position="0"> The effect on the requester is evaluated by the planning agent by means of the anticipation of his reaction. In general, the situation a requestee is faced with is constituted by the choice between the alternative of satisfying the requester's conversational or behavioral goals and the alternative of going on with his own activity.</Paragraph> <Paragraph position="1"> Consider the situation depicted in example [1] from B's point of view, where B is faced with A's indirect request: [1] A: Do you have Marlboros? B: Uh, no. We ran out. B can attribute to A two main goals (see figure 2):8 1. the behavioral goal that B sell to A a packet of cigarettes (sell): however, since A cannot take B's cooperation for granted, this goal is related in turn to the goal of knowing whether B has committed to the perlocutionary intention of selling to A a packet of cigarettes (knowif-satisfy), by committing to A's goal (satisfy), and, if this is the case, to the goal of knowing whether he has completed the corresponding plan to sell the cigarettes (hand over the cigarettes, cash, etc.) to A (knowif-completed); 2. the conversational goal of knowing if the request has been understood by B and is now part of the common ground (grounded); this goal directly relates to the management of dialog: if A does not believe that the illocutionary effect of his question holds, he should repeat or reformulate the question.</Paragraph> <Paragraph position="2"> 8 The recognition of domain goals depends on the recognition of the linguistic goals, i.e., on the success of the linguistic actions.</Paragraph> <Paragraph position="3"> Note that, at both levels, subsidiary goals arise as part of the intentional behavior of an agent: for example, after performing an action for achieving some goal, it is rational to check whether this action has succeeded.9</Paragraph>
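The two goals B attributes to A, together with their subordinate goals, can be pictured as a small structure along the following lines. This is a hypothetical rendering of figure 2, which is not reproduced here: only the goal labels come from the text, while the data structure itself is an illustrative choice.

```python
# Hypothetical rendering of the goals B can attribute to A after the
# indirect request in example [1] (cf. figure 2).

recognized_goals = {
    "behavioral": {
        "sell": {                      # B sells A a packet of cigarettes
            "knowif-satisfy": {},      # has B committed to A's goal?
            "knowif-completed": {},    # has B completed the selling plan?
        },
    },
    "conversational": {
        "grounded": {},                # is the request part of the common ground?
    },
}

def goals_in(tree):
    """Enumerate a goal together with all its subordinate goals."""
    for name, subtree in tree.items():
        yield name
        yield from goals_in(subtree)

print(sorted(goals_in(recognized_goals["behavioral"])))
# ['knowif-completed', 'knowif-satisfy', 'sell']
```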
<Paragraph position="4"> B considers whether it is possible for him to commit to the higher-level goal (sell), to which the remaining recognized goals are subordinated: although B is inclined to satisfy this goal, B knows that one of the preconditions for executing the selling action (has(B, Marlboros)) is not true.</Paragraph> <Paragraph position="5"> At this point, besides the choice of not responding at all, the alternative courses of action available to B consist in committing to A's goal to know if B has committed to his (unachievable) sell goal (knowif-satisfy), and to the subordinate goal to know if B has completed the plan to achieve it (knowif-completed), or in committing to A's goal - at the conversational level - to have his illocutionary act grounded (grounded).10 The choice between the alternative of not responding at all and any of the other alternatives is accomplished by considering the reaction of the partner to a refusal at the conversational level; this choice is enforced by the consideration that communicative actions are &quot;free goods&quot;, so they cannot be refused without ending up in a state where the partner is offended.</Paragraph> <Paragraph position="6"> Being committed to the satisfaction of the knowif-completed goal, B has to choose between different ways to communicate the impossibility of executing the plan. In this case, two plans can apply: the simple plan for refusing, or the elaborated plan for refusing, which includes a justification for the refusal.</Paragraph> <Paragraph position="7"> 9 We will not describe here how these goals are identified and kept together in a unified structure: works like (Ardissono et al., 2000) show how the recognition of the intentions stemming from the problem solving activity can constitute the required glue.</Paragraph> <Paragraph position="8"> 10 Note that, when producing an illocutionary act to satisfy the knowif-satisfy or knowif-completed goal, B satisfies the grounded goal as well: by displaying the reaction to the perlocutionary effect, the uptake of the illocutionary effect is granted.</Paragraph> <Paragraph position="9"> The first plan is less expensive, being shorter and not requiring a mental effort; however, it is not fully explicit about the motivations of the refusal, and so it is potentially offensive in the partner's evaluation (A could think that B didn't want to sell the cigarettes to him). On the contrary, the second plan, though more expensive, obeys the preference for not offending, since it protects the refusing requestee from the accusation of non-cooperativeness. The existence of complex refusal acts has been remarked on by (Green and Carberry, 1999).</Paragraph> <Paragraph position="10"> In their mechanism for initiative in answer generation, the ambiguity of a negative answer to a pre-request between a literal answer and a refusal triggers the &quot;Excuse-Indicated&quot; rule, which generates the appropriate explanation.</Paragraph> </Section> </Section> <Section position="6" start_page="88" end_page="89" type="metho"> <SectionTitle> 4 Related Work </SectionTitle> <Paragraph position="0"> (Traum and Allen, 1994) defined a model of linguistic interaction based on the notion of obligation. Obligations are pro-attitudes that impose less commitment than intentions (so that they can be violated), while their social character explains why humans are solicited to act, in both cooperative and non-cooperative contexts.
The notion of obligation has also been exploited in applied dialog systems, like (Jameson et al., 1996), where obligations are associated with move types.</Paragraph> <Paragraph position="1"> While in (Traum and Allen, 1994) discourse obligations are social norms that speakers have to learn, in our model the speakers have to learn under what conditions humans happen to be offended; this same knowledge explains the use of indirect speech acts (as in (Ardissono et al., 1999)).</Paragraph> <Paragraph position="2"> Moreover, obligations seem somehow redundant in cooperative contexts, where intentions are sufficient to explain grounding and other conversational phenomena.</Paragraph> <Paragraph position="3"> Unlike (Traum and Allen, 1994), (Allwood, 1994) introduces, besides the obligations associated with the communicative acts, two additional sources of obligation which are related, respectively, to ethical and rational motivations intrinsic to social relations and to the management of communication itself.</Paragraph> <Paragraph position="4"> Communication management obligations give rise to the mechanisms of turn-taking, interaction sequencing, and so on, while the ethical obligations are socially desirable qualities of the interactional behavior: there exists a strong social expectation towards them, but an agent can decide to disobey them.</Paragraph> <Paragraph position="5"> (Kreutel and Matheson, 2000) claim that the intentional structure in uncooperative dialogues can be determined by resorting to discourse obligations. In order to do so, they define a set of inference rules which allow them to reconstruct the participants' intentions separately from obligations, and then show how obligations account for the existence of conversational preferences by addressing pending intentions. However, the semantic rules they propose seem to constitute a shortcut to the recognition of the communicative intentions of the speaker, which has been proven to be necessary to reconstruct dialog coherence (Levinson, 1981); the resulting representation, since it lacks a model of the private intentions of the participants, inadequately accounts for the presence of individual intentions which have to be traded off against obligations in situations where cooperation is not granted.</Paragraph> </Section> <Section position="7" start_page="89" end_page="91" type="metho"> <SectionTitle> 5 An Example Situation </SectionTitle> <Paragraph position="0"> In order to verify the feasibility of exploiting social goals for motivating cooperation, we have implemented a prototype using a decision-theoretic planner inspired by the approach of (Haddawy and Hanks, 1998). The planner exploits hierarchical plans to find the optimal sequence of actions under uncertainty, based on a multi-attribute utility function. Goals can be traded off against cost (waste of resources) and against each other.</Paragraph>
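A rough sketch of the kind of hierarchical, decision-theoretic refinement such a planner relies on is given below. It is a toy reconstruction of the DRIPS-style scheme summarized in Section 2.1, not the actual implementation; the decomposition table, the recipe names and the example utility are all invented for illustration.

```python
# Toy sketch of DRIPS-style hierarchical refinement: abstract recipes are
# recursively expanded into primitive action sequences, and the fully
# refined plan with the highest utility is returned.

from itertools import product
from typing import Dict, List

# Each abstract step maps to the alternative decompositions that realize it.
DECOMPOSITIONS: Dict[str, List[List[str]]] = {
    "respond": [["notify-motivation"], ["notify-simple"], ["refuse"]],
}

def refine(plan: List[str]) -> List[List[str]]:
    """Recursively expand every abstract step into primitive alternatives."""
    alternatives_per_step = [
        [alt for d in DECOMPOSITIONS[step] for alt in refine(d)]
        if step in DECOMPOSITIONS else [[step]]
        for step in plan
    ]
    return [sum(combo, []) for combo in product(*alternatives_per_step)]

def best_plan(plans: List[List[str]], utility) -> List[str]:
    """Return the refined plan with the highest (expected) utility."""
    return max(plans, key=utility)

if __name__ == "__main__":
    refined = refine(["respond", "act"])
    # [['notify-motivation', 'act'], ['notify-simple', 'act'], ['refuse', 'act']]
    print(best_plan(refined, utility=lambda p: -len(" ".join(p))))
```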
<Paragraph position="1"> Five different attributes11 have been introduced to depict the situation in example [2], where B is interrupted by A while he is executing a generic action Act; this action is aimed at reaching one of B's private goals.</Paragraph> <Paragraph position="2"> [2] A: Can you tell me the time? B: No. My watch is broken. The following attributes model the states involved in the example situation, and appear in the effects of the participants' actions.</Paragraph> <Paragraph position="3"> * time: it models time as a bounded resource; the utility decreases as a function of time; * grounded: it models A's goal of knowing that B has successfully interpreted the request; * res: it models the consumption of (generic) resources; * refused: it is true if A believes that B has refused, without any justification, to commit to A's communicated goal; * offended: it models A's degree of offence.</Paragraph> <Paragraph position="4"> 11 Note that the values 0 and 1 of the attributes grounded and satisfied-request represent the truth-values of the corresponding propositions.</Paragraph> <Paragraph position="5"> Other goals, like knowing whether B has committed to the achievement of the goal or whether the achievement has been successful, or higher-level domain goals, are not included in this example for space reasons.</Paragraph> <Paragraph position="6"> In order to model the alternatives available to B, we have introduced the following actions (see figure 3). Effects are represented as changes in the value of attributes: for example, (time=time+2) means that after the execution of the Notify-motivation action, the value of the time attribute will increase by 2. * Action Tell-time: it represents B's cooperation with A at the behavioral level (B executes the requested action); * Action Ground: it has the effect that A knows that the illocutionary effect of his request has been properly recognized by the partner (the grounded attribute is set to &quot;true&quot;).</Paragraph> <Paragraph position="7"> * Action Notify-impossible: it models B's notification that A's goal is impossible to achieve; it specializes into two subactions, Notify-motivation and Notify-simple: both actions have a cost in terms of resources and time and set the grounded attribute to true, but the second one negatively affects the refused attribute, meaning that A considers it a (possibly) unjustified refusal.</Paragraph> <Paragraph position="8"> * Action Act: it constitutes B's current plan when he is interrupted by A's request. It affects both the grounded and the refused attribute, by setting the latter to &quot;false&quot;. * Action Refuse: it represents B's act of communicating to A that he will not do what A requested, without any justification. Among its effects, there is the fact that A comes to know B's choice (the refused and grounded attributes are set to &quot;true&quot;). Before B replies to A's request, the grounded attribute is set to false and the refused attribute is set to true. Note that - with the exception of Act - all actions affect the value of the grounded attribute, meaning that, after performing any of them, A's request is grounded anyway, since all these actions are coherent replies.</Paragraph> <Paragraph position="10"> [Figure 3: the actions available to B; each action name is followed by the list of its effects.]</Paragraph> <Paragraph position="11"> On A's side, we have introduced the action React12 (see figure 5), which models the change of the offended attribute depending on B's choice. The key parameter affecting the level of offence is the cost13 of the requested actions: the lower the cost of the requested action, the greater the offence; this follows the principle that low-cost actions cannot be refused (Goffman, 1967), and, if they are, requesters get offended. The lack of grounding is interpreted by A as a sign that B is not cooperating at the conversational level: since cooperating at the conversational level (interpreting the sentence, grounding it) has a low cost, it is offensive not to do it.</Paragraph>
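To show how these pieces fit together, here is a self-contained sketch of the example situation. The attribute names, the initial values of grounded and refused, the time=time+2 effect of Notify-motivation, the cost formula of footnote 13, React's weights from footnote 12 and the W1-W4 values quoted below all come from the text; the remaining effect numbers, the exact form of React, and the assignment of W1-W4 to particular attributes are assumptions filled in because figures 3-6 are not reproduced here. Tell-time and Ground are omitted for brevity.

```python
# Sketch of B's decision in example [2], assembled from the attributes and
# actions described in Section 5. Numbers not stated in the text are guesses.

from typing import Dict, List

State = Dict[str, float]

# Before B replies, the request is ungrounded and counts as refused.
INITIAL: State = {"time": 0, "res": 0, "grounded": 0, "refused": 1, "offended": 0}

# B's actions, as changes to attribute values (cf. figure 3; costs assumed).
def notify_motivation(s: State) -> State:   # justified "impossible" reply
    return {**s, "time": s["time"] + 2, "res": s["res"] + 2, "grounded": 1, "refused": 0}

def notify_simple(s: State) -> State:       # bare "impossible" reply
    return {**s, "time": s["time"] + 1, "res": s["res"] + 1, "grounded": 1, "refused": 1}

def act(s: State) -> State:                 # B goes on with his own activity
    return {**s, "time": s["time"] + 2, "res": s["res"] + 1, "private_goal": 1}

# A's React action (assumed form): unjustified refusals and ungrounded
# requests offend, and the lower the cost of what was asked, the greater
# the offence. Wi and Wj are the values given in footnote 12.
W_REFUSED, W_UNGROUNDED = 20, 10
def react(s: State, requested_cost: float = 1) -> State:
    offence = (W_REFUSED * s["refused"] + W_UNGROUNDED * (1 - s["grounded"])) / max(requested_cost, 1)
    return {**s, "offended": s["offended"] + offence}

# B's utility: weighted sum of individual preferences (cf. figure 6).
# W1..W4 are the values quoted in Section 5; their mapping to attributes is assumed.
W1, W2, W3, W4 = 10, 5, 8, 100
def utility(s: State) -> float:
    return (-W1 * s["time"] - W2 * s["res"]
            + W3 * s.get("private_goal", 0) - W4 * s["offended"])

def run(plan: List) -> State:
    s = INITIAL
    for step in plan:
        s = step(s)
    return react(s)  # one-level lookahead on A's reaction

plans = {
    "notify-motivation + act": [notify_motivation, act],
    "notify-simple + act": [notify_simple, act],
    "ignore the request (act only)": [act],
}
for name, plan in plans.items():
    print(f"{name}: utility {utility(run(plan)):.1f}")
# With these (assumed) numbers, justifying the refusal and then resuming Act
# wins, matching the outcome the text reports for the weights 10, 5, 8, 100.
```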
<Paragraph position="12"> Now, let's consider in detail the current situation, i.e., the one where A has just asked B to do something while B has just performed the first step of Act. In order to explore the different alternatives, the planner builds and evaluates some plans. These plans differ in that the actions for pursuing the partner's recognized goal can be included or omitted.</Paragraph> <Paragraph position="13"> From the result state of each alternative, the planner then tries to predict the reaction of A by simulating the execution of the React action by A (see figure 5), and commits to the plan whose resulting state after the predicted reaction yields the greatest utility according to B's preferences (see figure 6).</Paragraph> <Paragraph position="14"> As explained in Section 2.1, an agent's utility function is a weighted sum of individual utility functions, which represent the preferences of the agent. The weights associated with the individual functions reflect the strength of each preference, by allowing for different trade-offs among preferences during the process of decision making.14 In figure 4, two alternative plans are represented, where the utility of B is calculated by using the utility function in figure 6. Assuming that the weights W1, W2, W3, and W4 are set to 10, 5, 8, and 100, respectively, B will choose the plan which includes Notify-impossible as the first step, and Act - the continuation of B's previous activity - as the second step. This solution yields in fact a higher utility than the alternative of ignoring A's request altogether and continuing one's own activity. A change in the weights of the utility function of B affects his behavior, by determining a variation in the degree of cooperation: the stronger the preference for not offending, the more cooperative the agent. For example, if the utility function of B associates a greater utility with the achievement of B's private goal (by executing Act) than with the social preference for not offending, B will decide to disregard A's request, both at the conversational and at the behavioral level.15 On the contrary, if the utility function of B models a more balanced trade-off between the achievement of B's private goals and social preferences, B will decide to ground A's request, at least, or to be fully cooperative by satisfying A's request.</Paragraph> <Paragraph position="15"> 12 We assume that the weights Wi and Wj are set, respectively, to 20 and 10.</Paragraph> <Paragraph position="16"> 13 Where cost(action) = (res * 2) + time.</Paragraph> <Paragraph position="17"> 14 As (Traum, 1999) notices with reference to social rules, &quot;when they directly conflict with the agent's personal goals, the agent may choose to violate them (and perhaps suffer the consequences of not meeting its obligations).&quot; In our model, this roughly amounts to associating a greater utility with the achievement of the agent's own goals than with the preference for not offending.</Paragraph> <Paragraph position="18"> 15 Typically, this is the case in specific contexts when private goals of the addressee are very relevant and contrast with the satisfaction of the requester's goal;</Paragraph> </Section> </Paper>