<?xml version="1.0" standalone="yes"?>
<Paper uid="T87-1044">
  <Title>&amp;quot;No Better, but no Worse, than People&amp;quot;</Title>
  <Section position="2" start_page="222" end_page="224" type="abstr">
    <SectionTitle>
3. Two non-problems
</SectionTitle>
    <Paragraph position="0"> The possibility of a program somehow generating things that no human could understand is a red herring.3 People say things all the time that other people don't understand, yet we don't think anything unusual is happening. Usually the audience fails to make an expected inference rather than misunderstanding some literal part of the utterance, a problem that arises quite easily when the speaker misjudges what the audience already knows, or thinks that they share some judgement or context when they do not. Another source of the problem is the speaker thinking that a certain turn of phrase should signal a certain inference when that signal is opaque to the audience.</Paragraph>
    <Paragraph position="1">  3 Since programs wouldn't talk to us if they didn't need to communicate, saying things to us that we don't understand would just be failing to achieve their own goals. Perhaps they might choose to talk this way to each other (though why should they, since given any commonality in their internal designs, telepathy would be much simpler and more satisfying), but if we give them any sensitivity to their audience's reactions (and how could communication be effective without it) they will quickly realize that we're missing the point of most of what they're saying to us and change their techniques.</Paragraph>
    <Paragraph position="2">  The very same mistake could be made by a program--we cannot program them to be superhumanly aware of their audience. The only protection is incorporating into language interfaces the same kind of sensitivity to later audience reactions that we have ourselves. We know what the effect of following our inferences should be on our audiences, and we can sense when they have missed our intent. We especially know how to feed back a communications failure onto our own generation strategies so that we will make different choices the next time we need to get across a similar idea. We should make our machines able to do the same.</Paragraph>
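The feedback loop described here, sensing that an audience has missed our intent and making different choices the next time, can be sketched minimally as follows. All class, strategy, and function names are hypothetical illustrations, not anything from the paper:

```python
# A minimal sketch of feeding communication failures back into
# generation-strategy choice. Purely illustrative design.

class AdaptiveGenerator:
    def __init__(self, strategies):
        # strategies: mapping from strategy name to a phrasing function
        self.strategies = strategies
        self.failures = {name: 0 for name in strategies}

    def choose_strategy(self):
        # Prefer the strategy with the fewest recorded failures.
        return min(self.failures, key=self.failures.get)

    def say(self, idea):
        name = self.choose_strategy()
        return name, self.strategies[name](idea)

    def record_failure(self, name):
        # The audience missed the point: bias future choices away.
        self.failures[name] += 1


gen = AdaptiveGenerator({
    "idiomatic": lambda idea: f"You might say {idea} is a red herring.",
    "literal": lambda idea: f"The claim '{idea}' is misleading.",
})
name, utterance = gen.say("program talk")
gen.record_failure(name)      # communication failure fed back
name2, _ = gen.say("program talk")
assert name2 != name          # a different choice next time
```

The essential point is only the feedback path: a failure signal observed after the utterance alters the next selection, which is what the paragraph asks of machine generators.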
    <Paragraph position="3"> The problem of how best to match a system's input and output language abilities is likely to turn out to be a red herring as well, one that will go away naturally as soon as our understanding systems become as syntactically and lexically competent as our generators. 4 The problem is that, at present, if the generator produces a more sophisticated construction than the understander can parse, or uses a word that it does not know, then the human user, mimicking what the generator has done, will be frustrated when he turns out not to be understood.</Paragraph>
    <Paragraph position="4"> If this were the only difficulty, then it could be solved by straightforward software engineering: consistency tools would force one to drop items from the generator's repertoire that the understander did not know. Unfortunately the problem goes deeper than that. The mismatch is not the issue, since people's abilities do not match either: we all can understand markedly more than we would ever say. The real problem for a non-research interface is--direct queries for literal information aside--that machine understanding abilities are so far below the human level that any facile, inference-motivating output from the generator is going to suggest to the user that the system will understand things that it cannot.</Paragraph>
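The "consistency tool" idea mentioned here can be sketched as a simple filter that restricts the generator's repertoire to the understander's coverage. This is a toy illustration under assumed names, not the paper's implementation:

```python
def consistent_repertoire(generator_items, understander_items):
    """Keep only the words/constructions the understander also knows.

    A toy version of the consistency-tool idea: the generator's
    repertoire is restricted to the understander's coverage, so the
    user never hears an item the system could not itself parse.
    """
    return {item for item in generator_items if item in understander_items}


gen_lexicon = {"herring", "notwithstanding", "facile", "dog"}
und_lexicon = {"herring", "dog", "cat"}
safe = consistent_repertoire(gen_lexicon, und_lexicon)
# safe == {"herring", "dog"}
```

As the paragraph goes on to argue, this kind of check addresses only the literal mismatch, not the deeper problem of output that implies more understanding than the system has.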
    <Paragraph position="5"> Because of this, I personally would never include language input in a non-research interface today. Interactive graphics and menu facilities do not suffer from the ambiguity and scope of inferencing problems faced by language, and give a realistic picture of what a system is actually able to comprehend. Interfaces based on a &amp;quot;graphics in, graphics and speech out&amp;quot; paradigm have not been given enough study by the language and communications research community, and are likely to be a much better match to the deliberative and intentional abilities of the programs we can experiment with today.</Paragraph>
    <Paragraph position="6">  4 It is trivial to specify a linguistically complex phrase and have a generator utter it by rote. Such canned or template-based text is often the best route to take in a practical interface. If the programmer is sure that the situation warrants the phrase then it can safely be used, even though there may be no explicit model within the system from which the phrase could have been deliberately composed.</Paragraph>
  </Section>
</Paper>