<?xml version="1.0" standalone="yes"?>
<Paper uid="E93-1033">
  <Title>Abductive Explanation of Dialogue Misunderstandings</Title>
  <Section position="4" start_page="277" end_page="278" type="metho">
    <SectionTitle>
3 The formal language
</SectionTitle>
    <Paragraph position="0"> The model is based on a sorted first-order language, L, comprising a denumerable set of predicates, variables, constants, and functions, along with the boolean connectives ∨, ∧, ¬, ⊃, and ≡, and the predicate =. The terms of L come in six sorts: agents, turns, sequences of turns, actions, descriptions, and suppositions.¹ L includes an infinite number of variables and function symbols of every sort and arity. We also define a number of special ones: do, mistake, intend, knowif, knowref, knowsBetterRef, not, and and. Each of these functions takes an agent as its first argument and an action, supposition, or description for each of its other arguments; each of them returns a supposition. The function symbols that return speech acts each take two agents as their first two arguments and an action, supposition, or description for each of their other arguments. For the abductive model, we define a corresponding language LTh in the Prioritized Theorist framework. LTh includes all the sorts, terms, functions, and predicates of L; however, LTh lacks explicit quantification, distinguishes facts from defaults, and associates with each default a priority value. Variable names are understood to be universally quantified in facts and defaults (but existentially quantified in an explanation). Facts are given by "FACT w.", where w is a wff. A default can be given either by "DEFAULT (p, d)." or "DEFAULT (p, d) : w.", (Footnote 1: Suppositions represent the propositions that speakers express in a conversation, independent of the truth values that those propositions might have.)</Paragraph>
    <Paragraph position="1"> where p is a priority value, d is an atomic symbol with only free variables as arguments, and w is a wff. For example, we can express the default that birds normally fly as: DEFAULT (2, birdsFly(b)) : bird(b) ⊃ fly(b).</Paragraph>
    <Paragraph position="2"> If F is the set of facts and Δp is the set of defaults with priority p, then an expression DEFAULT (p, d) : w asserts that d ∈ Δp and (d ⊃ w) ∈ F.</Paragraph>
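    <Paragraph> The FACT/DEFAULT notation can be mirrored in a few lines of Python; this is a minimal sketch under an assumed tuple encoding of wffs, and the helper names FACT and DEFAULT are illustrative, not part of Theorist itself.

```python
# A minimal sketch of the Prioritized Theorist syntax described above, assuming
# a tuple encoding of wffs: DEFAULT (p, d) : w abbreviates "d is a member of
# Delta_p and (d -> w) is a fact".
facts = set()      # the fact set F; implications stored as (antecedent, "->", consequent)
defaults = {}      # priority p -> the set Delta_p of default names

def FACT(w):
    facts.add(w)

def DEFAULT(p, d, w=None):
    defaults.setdefault(p, set()).add(d)
    if w is not None:
        facts.add((d, "->", w))   # the implication (d -> w) joins the facts

# The birds-normally-fly default from the text, at priority 2:
DEFAULT(2, "birdsFly(b)", ("bird(b)", "->", "fly(b)"))
```

Under this encoding, explaining an observation amounts to assuming default names consistent with the facts, with conflicts resolved by priority.</Paragraph>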
  </Section>
  <Section position="5" start_page="278" end_page="280" type="metho">
    <SectionTitle>
4 The architecture of the model
</SectionTitle>
    <Paragraph position="0"> In the architecture that we have formulated, producing an utterance is a default, deductive process of choosing both a speech act that meets an agent's communicative and interactional goals and an utterance that will be interpretable as this act in the current context. Utterance interpretation is the complementary (abductive) process of attributing to the speaker communicative and interactional goals by attributing to him or her a discourse-level form that provides a reasonable explanation for an observed utterance in the current context. Social norms delimit the range of responses that a participant may produce without becoming accountable for additional explanation.² The attitudes that speakers express provide additional constraints, because speakers are expected not to contradict themselves. We therefore attribute to each agent: * A theory T describing his or her linguistic knowledge, including principles of interaction and facts relating linguistic acts.</Paragraph>
    <Paragraph position="1"> * A set B of prior assumptions about the beliefs and goals expressed by the speakers (including assumptions about misunderstanding).</Paragraph>
    <Paragraph position="2"> * A set M of potential assumptions about misunderstandings and meta-planning³ decisions that agents can make to select among coherent alternatives. To interpret an utterance u by speaker s, the hearer h will attempt to solve: T ∪ B ∪ M ⊢ utter(s, h, u, ts) for some set M ⊆ M, where ts refers to the current context.</Paragraph>
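    <Paragraph> The interpretation task above can be illustrated with a toy propositional search; the rules and assumption names here are hypothetical stand-ins, and derivability is approximated by forward chaining over Horn rules.

```python
from itertools import chain, combinations

# A toy propositional sketch of abductive interpretation: find M, a subset of
# the potential assumptions, such that T together with B and M derives the
# observed utterance.  The atom and rule names are illustrative only.
T = [({"try_pretell"}, "utter"),        # a planned act would explain the utterance
     ({"misunderstood"}, "utter")]      # ... and so would a misunderstanding
B = set()                               # prior assumptions about expressed attitudes
M_pool = ["try_pretell", "misunderstood"]   # the set of potential assumptions

def derives(rules, base, goal):
    """Forward chaining over Horn rules given as (body set, head atom)."""
    known, changed = set(base), True
    while changed:
        changed = False
        for body, head in rules:
            if body <= known and head not in known:
                known.add(head)
                changed = True
    return goal in known

def explanations(goal):
    """All subsets M of M_pool such that T with B and M derives the goal."""
    subsets = chain.from_iterable(combinations(M_pool, r)
                                  for r in range(len(M_pool) + 1))
    return [set(m) for m in subsets if derives(T, B | set(m), goal)]
```

Here both a planned act and a misunderstanding explain the utterance, mirroring the competing explanations examined in Section 5.</Paragraph>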
    <Paragraph position="3"> In addition, acts of interpretation and generation update the set of beliefs and goals assumed to be expressed during the discourse. Our current formalization focuses on the problems of identifying how an utterance relates to a context and whether it has been understood. The update of expressed beliefs (Footnote 3: This is similar to Litman's \[1985\] use of meta-plans, but we prefer to treat meta-planning as a pattern of inference that is part of the task specification rather than as an action.)</Paragraph>
    <Paragraph position="4"> is handled in the implementation, but outside the formal language.⁴</Paragraph>
    <Section position="1" start_page="278" end_page="278" type="sub_section">
      <SectionTitle>
4.1 Speech acts
</SectionTitle>
      <Paragraph position="0"> For simplicity, we represent utterances as surface-level speech acts in the manner first used by Perrault and Allen \[1980\]. For example, if speaker m asks speaker r the question "Do you know who's going to that meeting?" we would represent this as: s-request(m, r, informif(r, m, knowref(r, w))).</Paragraph>
      <Paragraph position="1"> Following Cohen and Levesque \[1985\], we limit the surface language to the acts s-request, s-inform, s-informref, and s-informif. Discourse-level acts include inform, informif, informref, askref, askif, request, pretell⁵, testref, testif, and warn, and are represented using a similar notation.</Paragraph>
    </Section>
    <Section position="2" start_page="278" end_page="279" type="sub_section">
      <SectionTitle>
4.2 Expressed attitudes
</SectionTitle>
      <Paragraph position="0"> We distinguish the beliefs that speakers act as if they have during the course of a conversation from those they might actually have. Most models of discourse incorporate notions of belief and mutual belief to describe what happens when a speaker talks about a proposition, without distinguishing the expressing of belief from believing (see Cohen et al. 1990). However, real belief involves notions of evidence, trustworthiness, and expertise, not accounted for in these models; it is not automatic. Moreover, the beliefs that speakers act as if they have need not match their real ones. For example, a speaker might simplify or ignore certain facts that could interfere with the accomplishment of a primary goal \[Gutwin and McCalla, 1992\]. Speakers need to keep track of what others say, in addition to whether they believe them, because even insincere attitudes can affect the interpretation and production of utterances. Although speakers normally choose to be consistent in the attitudes they express, they can recant if it appears that maintaining an attitude will lead (or has led) to conversational breakdown.</Paragraph>
      <Paragraph position="1"> Following Thomason \[1990\], we call the contents of the attitudes that speakers express during a dialogue suppositions, and the attitude itself simply active.⁶ Thus, when a speaker performs a particular speech act, she activates the linguistic intentions associated with the act, along with a belief that the act has been done. (Footnote 5: "I'm going to tell you something that will surprise you. You might think you know, but you don't.") (Footnote 6: Supposition differs from belief in that speakers need not distinguish their own suppositions from those of another \[Stalnaker, 1972; Thomason, 1990\].) These attitudes do not depend on the</Paragraph>
      <Paragraph position="2"> speakers' real beliefs.⁷ The following expressions are used to denote suppositions: * do(s, a) expresses that agent s has performed the action a; * mistake(s, a1, a2) expresses that agent s has mistaken an act a1 for act a2; * intend(s, p) expresses that agent s intends to achieve a situation described by supposition p; * knowif(s, p) expresses that the agent s knows whether the proposition named by supposition p is true; * knowref(s, d) expresses that the agent s knows the referent of description d; * knowsBetterRef(s1, s2, d) expresses that agent s1 has "expert" knowledge about the referent of description d, so that if s2 has a different belief about the referent, then s2 is likely to be wrong;⁸ * and(p1, p2) expresses the conjunction of suppositions p1 and p2; * not(p) expresses the negation of supposition p.⁹</Paragraph>
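      <Paragraph> The supposition constructors can be mirrored directly as term builders; the nested-tuple representation is an assumption of this sketch, and and_ / not_ avoid Python's reserved words.

```python
# A sketch of the supposition language as nested tuples; constructor names
# follow the text, with trailing underscores where Python reserves the word.
def do(s, a):                  return ("do", s, a)
def mistake(s, a1, a2):        return ("mistake", s, a1, a2)
def intend(s, p):              return ("intend", s, p)
def knowif(s, p):              return ("knowif", s, p)
def knowref(s, d):             return ("knowref", s, d)
def knowsBetterRef(s1, s2, d): return ("knowsBetterRef", s1, s2, d)
def and_(p1, p2):              return ("and", p1, p2)
def not_(p):                   return ("not", p)

# The Gricean intention from Section 4.5, rebuilt with these constructors:
gricean = and_(knowref("nan", "theTime"),
               intend("nan", knowref("bob", "theTime")))
```
</Paragraph>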
    </Section>
    <Section position="3" start_page="279" end_page="279" type="sub_section">
      <SectionTitle>
4.3 Linguistic knowledge relations
</SectionTitle>
      <Paragraph position="0"> We represent agents' linguistic knowledge with three relations: decomp, a binary relation on utterance forms and speech acts; lintention, a binary relation on speech acts and suppositions; and lexpectation, a three-place relation on speech acts, suppositions, and speech acts. The decomp relation specifies the speech acts that each utterance form might accomplish. The lintention relation specifies the beliefs and intentions that each speech act conventionally expresses. The lexpectation relation specifies, for each speech act, which speech acts an agent believing the given condition can expect to follow.</Paragraph>
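      <Paragraph> Since the three relations are finite tables, they can be sketched as sets of tuples; the sample entries below are modeled loosely on the meeting example and are assumptions of this sketch, not the paper's actual knowledge base.

```python
# A sketch of the three linguistic-knowledge relations as sets of tuples.
decomp = {("s-request-informif", "pretell"),     # one surface form ...
          ("s-request-informif", "askref")}      # ... two candidate acts
lintention = {("pretell", "knowsBetterRef")}     # act -> conventionally expressed attitude
lexpectation = {("pretell", "knowsBetterRef", "askref")}  # act, condition, expected reply

def possible_acts(surface_form):
    """The speech acts a surface form might accomplish (the decomp relation)."""
    return {act for form, act in decomp if form == surface_form}
```

It is exactly this one-to-many decomp mapping that makes misunderstanding possible: a hearer may pick the wrong act for a shared surface form.</Paragraph>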
    </Section>
    <Section position="4" start_page="279" end_page="279" type="sub_section">
      <SectionTitle>
4.4 Beliefs and goals
</SectionTitle>
      <Paragraph position="0"> We assume that an agent's beliefs and goals are given explicitly by statements of the form believe(S, P) and hasGoal(S, P, TS), respectively, where S is an agent, P is a supposition and TS is a turn sequence.</Paragraph>
    </Section>
    <Section position="5" start_page="279" end_page="280" type="sub_section">
      <SectionTitle>
4.5 Activation
</SectionTitle>
      <Paragraph position="0"> To represent the dialogue as a whole, including repairs, we introduce the notion of a turn sequence and (Footnote 7: It is essential that these suppositions name propositions independent of their truth values, so that we may represent agents talking about knowing and intending without fully analyzing these concepts.)</Paragraph>
      <Paragraph position="1"> the activation of a supposition with respect to a sequence. A turn sequence represents the interpretations of the discourse that a speaker has considered. Turn sequences are characterized by the following three relations: * turnOf(ts, t) holds if and only if t is a turn in the sequence ts; * succ(tj, ti, ts) holds if and only if turnOf(ts, ti), turnOf(ts, tj), tj follows ti in ts, and there is no tk such that turnOf(ts, tk), succ(tk, ti, ts), and succ(tj, tk, ts); * focus(ts, t) holds if t is a distinguished turn upon which the sequence is focused; normally this is the last turn of ts. (Footnote 9: ...an agent who says something negative, e.g., "I do not want to go.")</Paragraph>
      <Paragraph position="2"> We also define a successor relation on turn sequences. A turn sequence TS2 is a successor to turn sequence TS1 if TS2 is identical to TS1 except that TS2 has an additional turn t that is not a turn of TS1 and that is the successor to the focused turn of TS1. The set of prior assumptions about the beliefs and goals expressed by the participants in a dialogue is represented as the activation of suppositions. For example, an agent nan performing an informref(nan, bob, theTime) expresses the supposition do(nan, informref(nan, bob, theTime)) and the Gricean intention, and(knowref(nan, theTime), intend(nan, knowref(bob, theTime))) given by the lintention relation. We assume that an agent will maintain a record of both participants' suppositions, indexed by the turns in which they were expressed. It is represented as a set of statements of the form expressed(P, T) or expressedNot(P, T) where P is a simple supposition and T is a turn.</Paragraph>
      <Paragraph position="3"> Beliefs and intentions that participants express during a turn of a sequence ts1 become and remain active in all sequences that are successors to ts1, unless they are explicitly refuted.</Paragraph>
      <Paragraph position="4"> DEFINITION 1: If, according to the interpretation of the conversation represented by turn sequence TS with focused turn T, the supposition P was expressed during turn T, we say that P becomes active with respect to that interpretation and the predicate active(P, TS) is derivable: FACT expressed(p, t) ∧ focus(ts, t) ⊃ active(p, ts).</Paragraph>
      <Paragraph position="5"> FACT expressedNot(p, t) ∧ focus(ts, t) ⊃ active(not(p), ts).</Paragraph>
      <Paragraph position="6"> FACT ¬(active(p, ts) ∧ active(not(p), ts)).</Paragraph>
      <Paragraph position="7"> If formula P is active within a sequence TS, it will remain active until not(P) is expressed: FACT expressed(p, t) ∧ focus(ts, t) ⊃ ¬activationPersists(not(p), t).</Paragraph>
      <Paragraph position="8"> FACT expressedNot(p, t) ∧ focus(ts, t) ⊃ ¬activationPersists(p, t).</Paragraph>
      <Paragraph position="9"> DEFAULT (1, activationPersists(p, t)) : active(p, ts_i) ∧ successorTS(ts_now, ts_i) ∧ focus(ts_now, t) ⊃ active(p, ts_now).</Paragraph>
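      <Paragraph> Definition 1 and the persistence default can be paraphrased procedurally: scanning a turn sequence in order, a supposition is active if it was expressed and its negation has not been expressed since. The list-of-turns encoding is an assumption of this sketch.

```python
# A procedural sketch of activation with default persistence: later turns
# override earlier ones, so an expressed supposition stays active in successor
# sequences until its negation is expressed.
def active(p, ts, expressed, expressed_not):
    """ts: turns in order (focus is the last); expressed / expressed_not map a
    turn to the suppositions (not) expressed during it."""
    status = None
    for t in ts:
        if p in expressed.get(t, ()):
            status = True
        if p in expressed_not.get(t, ()):
            status = False          # refutation cancels the persistence default
    return status is True
```

In the meeting example, knowref(m, w) is expressed in turn 1 and would remain active until some turn expresses its negation.</Paragraph>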
    </Section>
    <Section position="6" start_page="280" end_page="280" type="sub_section">
      <SectionTitle>
4.6 Expectation
</SectionTitle>
      <Paragraph position="0"> The following definition captures the notion of "expectation".</Paragraph>
    </Section>
  </Section>
  <Section position="6" start_page="280" end_page="280" type="metho">
    <SectionTitle>
DEFINITION 2: A discourse-level action R is ex-
</SectionTitle>
    <Paragraph position="0"> DEFAULT (2, expectedReply(pdo, p, do(s1, a2), ts)) : active(pdo, ts) ∧ lexpectation(pdo, p, do(s1, a2)) ∧ believe(s1, p) ∧ lintentionsOk(s1, a2, ts) ⊃ expected(s1, a2, ts).</Paragraph>
    <Paragraph position="1"> FACT active(preply, ts) ⊃ ¬expectedReply(pdo, p, preply, ts). The predicate expectedReply is a default. Although activation might depend on default persistence, activation always takes precedence over expectation because it has a higher priority (on the assumption that memory for suppositions is stronger than expectation). The predicate lintentionsOk(S, A, TS) is true if speaker S expresses the linguistic intentions of the act A in turn sequence TS, and these intentions are consistent with TS.</Paragraph>
    <Paragraph position="2"> We also introduce a subjunctive form of expectation, which depends only on a speaker's real beliefs: FACT lexpectation(do(s1, a1), p, do(s2, a2)) ∧ believe(s1, p) ⊃ wouldEx(s1, a1, a2).</Paragraph>
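    <Paragraph> The expectedReply default and its cancelling fact can be paraphrased as a single guarded check; the flat-set encoding of activation and the function name are assumptions of this sketch.

```python
# A sketch of Definition 2's machinery: a reply is expected when its triggering
# act is active and the lexpectation condition is believed, but never once the
# reply itself is already active (activation outranks expectation, as the
# priorities 1 vs. 2 dictate).
def expected(reply, active_set, lexpectation, beliefs):
    for p_do, cond, p_reply in lexpectation:
        if p_reply == reply and p_do in active_set and cond in beliefs:
            if p_reply in active_set:   # the cancelling FACT
                return False
            return True                  # the DEFAULT fires
    return False
```
</Paragraph>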
    <Section position="1" start_page="280" end_page="280" type="sub_section">
      <SectionTitle>
4.7 Recognizing misunderstandings
</SectionTitle>
      <Paragraph position="0"> When a dialogue proceeds normally, a speaker's utterance can be explained by abducing that a discourse action has been planned using one of a known range of discourse strategies: plan adoption, acceptance, challenge, repair, or closing. (Figure 1 includes some examples in Theorist.) In cases of apparent misunderstanding, the same explanation process suggests a misunderstanding, rather than a planned act, as the reason for the utterance. To handle these cases, the model needs a theory of the symptoms of a failure to understand \[Poole, 1989\]. For example, a speaker S2 might explain an otherwise unexpected response by a speaker S1 by hypothesizing that S2 has mistaken some speech act by S1 for another with a similar decomposition, or S2 might hypothesize that S1 has misunderstood (see Figure 2). We shall now consider some applications.</Paragraph>
    </Section>
  </Section>
  <Section position="7" start_page="280" end_page="282" type="metho">
    <SectionTitle>
5 Some applications
</SectionTitle>
    <Paragraph position="0"> probably Mrs. Cadry and some of the teachers.</Paragraph>
    <Paragraph position="1"> The surface-level representation of this conversation is given as the following:</Paragraph>
    <Paragraph position="3"/>
    <Section position="1" start_page="280" end_page="281" type="sub_section">
      <SectionTitle>
5.1 Russ's interpretation of T1 in the meeting example
</SectionTitle>
      <Paragraph position="0"> From Russ's perspective, T1 can be explained as a pretelling, an attempt by Mother to get him to ask her who is going. Russ's rules about the relationship between surface forms and speech acts (decomp) include that: FACT decomp(s-request(s1, s2, informif(s2, s1, knowref(s2, p))), pretell(s1, s2, p)).</Paragraph>
      <Paragraph position="1"> FACT decomp(s-request(s1, s2, informif(s2, s1, knowref(s2, p))), askref(s1, s2, p)).</Paragraph>
      <Paragraph position="2"> FACT decomp(s-request(s1, s2, informif(s2, s1, knowref(s2, p))), askif(s1, s2, knowref(s2, p))).</Paragraph>
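      <Paragraph> These decomp facts give pretell, askref, and askif the same surface form; the resulting ambiguity can be sketched as a shared-surface-form test (the string encodings here are hypothetical).

```python
# A sketch of surface-form ambiguity: two discourse acts could be mistaken for
# one another when some surface form decomposes to both.
decomp = {
    ("s-request-informif-knowref", "pretell"),
    ("s-request-informif-knowref", "askref"),
    ("s-request-informif-knowref", "askif"),
}

def ambiguous(act1, act2):
    """True if distinct acts share at least one surface form."""
    forms = lambda act: {f for f, a in decomp if a == act}
    return act1 != act2 and bool(forms(act1) & forms(act2))
```

This is the relation appealed to in Sections 5.2 and 5.3, e.g., ambiguous(pretell, askref).</Paragraph>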
      <Paragraph position="3"> Russ has linguistic expectation rules for the adjacency pairs pretell-askref, askref-informref, and askif-informif (as well as for pairs of other types). Russ also believes that he knows who's going to the meeting, that he knows he knows this, and that Mother's knowledge about the meeting is likely to be</Paragraph>
    </Section>
    <Section position="2" start_page="281" end_page="282" type="sub_section">
      <SectionTitle>
Utterance Explanation
</SectionTitle>
      <Paragraph position="0"/>
      <Paragraph position="2"> "If agent S1 intends that agent S2 perform the action A2, and A2 is the expected reply to the action A1, and it would be coherent for S1 to perform A1, then S1 should do so." "If agent S1 believes that act A is the expected next action, then S1 should perform A."</Paragraph>
      <Paragraph position="4"> "Speaker S might be attempting action A in discourse TS if: S was thought to have performed action AM; but, the linguistic intentions of AM are inconsistent with those of A; acts A and AM have a similar surface form (and hence could be mistaken); and, H may have made this mistake." "Speaker S might be attempting action A in discourse TS if: speaker H was thought to have performed action A1; but, acts A1 and AM have a similar surface form; if H had performed AM, A would be expected; S may express the linguistic intentions of A; and, S may have made the mistake." FACT believe(r, knowref(r, w)).</Paragraph>
      <Paragraph position="5"> FACT believe(r, knowif(r,knowref(r,w))).</Paragraph>
      <Paragraph position="6"> FACT believe(r, knowsBetterRef(m,r,w)).</Paragraph>
      <Paragraph position="7"> DEFAULT (1, credulousB(p)) : believe(m, p).</Paragraph>
      <Paragraph position="8"> DEFAULT (1, credulousH(p, ts)) : hasGoal(m, p, ts). Russ's interpretation of T1 as a pretelling is possible using the meta-plan for plan adoption and the rule for planned action.</Paragraph>
      <Paragraph position="9"> 1. The proposition hasGoal(m, do(r, askref(r, m, w)), ts(0)) may be explained by abducing credulousH(do(r, askref(r, m, w)), ts(0)).</Paragraph>
      <Paragraph position="10"> 2. An askref by Russ would be the expected reply to a pretell by Mother: wouldEx(m, do(m, pretell(m, r, w)), do(r, askref(r, m, w))) It would be expected by Mother because: * The lexpectation relation suggests that she might try to pretell in order to get him to produce an askref:</Paragraph>
      <Paragraph position="12"> * Russ may abduce credulousB(knowsBetterRef(m, r, w)) to explain believe(m, knowsBetterRef(m, r, w)). 3. The discourse context is empty at this point, so the linguistic intentions of pretelling satisfy lintentionsOk.</Paragraph>
    </Section>
  </Section>
  <Section position="8" start_page="282" end_page="283" type="metho">
    <SectionTitle>
4. Lastly, Russ may assume¹⁰
</SectionTitle>
    <Paragraph position="0"> adopt(m, r, pretell(m, r, w), askref(r, m, w), ts(0)) Thus, the conditions of the plan-adoption meta-rule are satisfied, and Russ can explain shouldTry(m, r, pretell(m, r, w), ts(0)). This enables him to explain try(m, r, pretell(m, r, w), ts(0)) as a planned action. Once Russ explains the pretelling, his decomp relation and utterance explanation rule allow him to explain the utterance.</Paragraph>
    <Section position="1" start_page="282" end_page="282" type="sub_section">
      <SectionTitle>
5.2 Russ's detection of his own misunderstanding in the meeting example
</SectionTitle>
      <Paragraph position="0"> From Russ's perspective, the inform-not-knowref that Mother performs in T3 signals a misunderstanding. Assuming T1 is a pretelling, just prior to T3, Russ's model of the discourse corresponds to the following: expressed(do(m, pretell(m, r, w)), 1) expressed(knowref(m, w), 1) expressed(knowsBetterRef(m, r, w), 1) expressed(intend(m, do(m, informref(m, r, w))), 1) expressed(intend(m, knowref(r, w)), 1) expressed(do(r, askref(r, m, w)), 2) expressedNot(knowref(r, w), 2) expressed(intend(r, knowref(r, w)), 2) expressed(intend(r, do(m, informref(m, r, w))), 2) T3 does not demonstrate acceptance because inform(m, r, not(knowref(m, w))) is not coherent with this interpretation of the discourse. This act is incoherent because not(knowref(m, w)) is among the linguistic intentions of this inform, while according to the model active(knowref(m, w), ts(2)). Thus, it is not the case that: lintentionsOk(m, inform(m, r, not(knowref(m, w))), ts(2)) As a result, Russ cannot attribute to Mother any expected act, and must attribute a misunderstanding to himself or to her.</Paragraph>
      <Paragraph position="1"> Russ may attribute T3 to a self-misunderstanding using the rule for detecting failure to understand. We sketch the proof below.</Paragraph>
      <Paragraph position="2"> 1. According to the context, expressed(do(m, pretell(m, r, w)), 0).</Paragraph>
      <Paragraph position="3"> And, Russ may assume that the activation of this supposition persists: (Footnote 10: ...result not yet be achieved: FACT active(do(a, a2), ts) ⊃ ¬adopt(s1, s2, a1, a2, ts).)</Paragraph>
      <Paragraph position="5"> Thus, active(do(m, pretell(m, r, w)), ts(2)). 2. The acts pretell and askref have a surface form that is similar, s-request(m, r, informif(r, m, knowref(r, w))). So, ambiguous(pretell(m, r, w), askref(m, r, w)).</Paragraph>
      <Paragraph position="6"> 3. The linguistic intentions of the pretelling are: and(knowref(m, w), and(knowsBetterRef(m, r, w), and(intend(m, do(m, informref(m, r, w))), intend(m, knowref(r, w))))) The linguistic intentions of inform-not-knowref are and(not(knowref(m, w)), intend(m, knowif(r, not(knowref(m, w))))). But these intentions are inconsistent. 4. Russ may assume selfMis(m, r, mistake(r, askref(m, r, w), pretell(m, r, w)), inform(m, r, not(knowref(m, w))), ts(2)). Once Russ explains the inform-not-knowref, his decomp relation and utterance explanation rule allow him to explain the utterance.</Paragraph>
    </Section>
    <Section position="2" start_page="282" end_page="283" type="sub_section">
      <SectionTitle>
5.3 A case of other-misunderstanding: Speaker A finds that speaker B has misunderstood
</SectionTitle>
      <Paragraph position="0"> We now consider a new example (from McLaughlin \[1984\]), in which a participant A recognizes that another participant, B, has mistaken a request in T1 for a test:</Paragraph>
      <Paragraph position="2"> The surface-level representation of this conversation is given as the following: T1 a: s-request(a, b, informref(b, a, d)) T2 b: s-request(b, a, informif(a, b, p)) T3 a: s-inform(a, b, intend(a, do(a, askref(a, b, d)))) T4 b: s-inform(b, a, not(knowref(b, d))) A has linguistic expectation rules for the adjacency pairs pretell-askref, askref-informref, askif-informif, and testref-askif. A also believes that she does not know the time of the dinner, and that B does know the time of the dinner.¹¹ We assume that A can make default assumptions about what B believes and wants: FACT believe(a, not(knowref(a, d))).</Paragraph>
      <Paragraph position="3"> FACT believe(a, knowref(b,d)).</Paragraph>
      <Paragraph position="4"> FACT hasGoal(a, do(b, informref(b, a, d)), ts(0)). DEFAULT (1, credulousB(p)) : believe(b, p). DEFAULT (1, credulousH(p, ts)) : hasGoal(b, p, ts). From A's perspective, after generating T1, her model of the discourse is the following: expressed(do(a, askref(a, b, d)), 1) expressedNot(knowref(a, d), 1) expressed(intend(a, knowref(a, d)), 1) expressed(intend(a, do(b, informref(b, a, d))), 1) According to the decomp relation, T2 might be interpretable as askif(b, a, p). However, T2 does not demonstrate acceptance, because there is no askref-askif adjacency pair from which to derive an expectation. T2 is not a plan adoption because A does not believe that B believes that A knows whether the dinner is at seven-thirty. However, there is evidence for misunderstanding, because both information-seeking questions and tests can be formulated as surface requests. Also, T2 is interpretable as a guess and request for confirmation (represented as askif), which would be expected after a test. We sketch the proof below.</Paragraph>
      <Paragraph position="5"> 1. According to the context: expressed(do(a, askref(a, b, d)), 0).</Paragraph>
      <Paragraph position="6"> A may assume that the activation of this supposition persists: activationPersists(do(a, askref(a, b, d)), 0).</Paragraph>
      <Paragraph position="7"> Thus, active(do(a, askref(a, b, d)), ts(1)). 2. The acts askref and testref have a surface form that is similar, namely s-request(a, b, informref(b, a, knowref(b, d))). So, ambiguous(askref(a, b, d), testref(a, b, d)).</Paragraph>
      <Paragraph position="8"> 3. An askif by B would be the expected reply to a testref by A: wouldEx(b, do(a, testref(a, b, d)), do(b, askif(b, a, p))) From A's perspective, it would be expected by B because: * The lexpectation relation suggests that A might try to produce a testref in order to get him to produce an askif: (Footnote 11: A must believe that B knows when the dinner is for her to have adopted a plan in T1 to produce an askref to get B to perform the desired informref.)</Paragraph>
      <Paragraph position="9"> lexpectation(do(a, testref(a, b, d)), and(knowref(b, d), and(knowif(b, p), and(pred(p, X), pred(d, X)))), do(b, askif(b, a, p))) The condition of this rule requires that B believe he knows the referent of description d and that p asserts that the described property holds of the referent that he knows. For example, if we represent "B knows when the dinner is" as the description knowref(b, the(X, time(dinner, X))), then the condition requires that knowif(b, time(dinner, q)) for some q.</Paragraph>
      <Paragraph position="10"> This is a gross simplification, but the best that the notation allows.</Paragraph>
      <Paragraph position="11"> A may assume that B believes the condition of this lexpectation by default.</Paragraph>
    </Section>
  </Section>
</Paper>