File Information
File: 05-lr/acl_arc_1_sum/cleansed_text/xml_by_section/abstr/82/c82-2030_abstr.xml
Size: 5,644 bytes
Last Modified: 2025-10-06 13:46:02
<?xml version="1.0" standalone="yes"?> <Paper uid="C82-2030"> <Title>WHY THERE MUST BE A SEMANTIC REPRESENTATION (OVER AND ABOVE ANY COGNITIVE NETWORK)</Title> <Section position="1" start_page="0" end_page="0" type="abstr"> <SectionTitle> WHY THERE MUST BE A SEMANTIC REPRESENTATION (OVER AND ABOVE ANY COGNITIVE NETWORK) </SectionTitle>
<Paragraph position="0"> A semantic representation (or a semantic network; Mel'čuk, Seuren, Hofmann, Sgall, ...) cannot be dispensed with in a model of language comprehension which incorporates a representation of knowledge, commonly called a cognitive network (e.g. Quillian, Lamb, Schank, Hays, Jackendoff, ...).</Paragraph>
<Paragraph position="1"> We shall demonstrate this in several ways, claiming to resolve the contention between those who claim they are necessary (the 1st camp above) and those who would claim that they can be dispensed with, the 2nd camp, including also Montague, who uses them for &quot;convenience only&quot;.</Paragraph>
<Paragraph position="2"> First, and most intuitively, is the observation that one can and commonly does understand a description of something (e.g. a scientific theory, or a political tract) which one knows or believes to be at variance with the facts. This suggests that in-taken information is kept apart from general knowledge, until such time as one decides to accept it as true. Cognitive networks can, however, be augmented to account for this by annotating arcs by their (epistemic) source, though this is a bit unrealistic for most human use of language - we seldom know the source of some believed fact.</Paragraph>
<Paragraph position="3"> A 2nd demonstration is to note that the normal person &quot;knows&quot; of many different worlds, where facts are at variance with those of another world. As a scientific example, Riemannian and Euclidean geometries are such contrary worlds, as are any pair of competing theories. For more ordinary examples, most English speakers know of at least 4 different worlds, containing different objects, different possibilities, and so on: the world of Greek mythology (containing unicorns, gods, etc.), the world of Sherlock Holmes (with a certain Dr Watson, a Baker St, ...), the world of James Bond, and the &quot;real&quot; world. As for the real world, one's interlocutor often has a different version of it, which must be known to understand his speech.</Paragraph>
<Paragraph position="4"> Now, if one must incorporate in himself knowledge of 4 or so distinct worlds (and I would suggest it is closer to 40 than 4), then we may say he has as many cognitive networks, of various degrees of detail and completeness. These networks may be represented as conflated into one super cognitive network, with sufficient special marking of the type suggested above (extended to nodes, also), but such a conflated network can always be decomposed into separate ones for the various worlds, if it is adequately marked to model human behaviour.</Paragraph>
<Paragraph position="5"> If then there are a number of distinct cognitive networks needed for understanding ordinary human speech, there is no reason not to add 1 more for the conversation currently in progress. This is no more nor less than a semantic representation, except that, as we shall observe below, it differs in structure and function.</Paragraph>
<Paragraph position="6"> A tighter, but longer and more difficult, demonstration is to show that &quot;inferencing&quot;, which is roughly equivalent to moving through a cognitive network, is NOT undertaken until the semantic operation of integration is attempted. This operation, which combines the semantic contributions of sentences together, depends heavily on a principle whereby each successive sentence is interpreted in the most redundant way possible, so that it &quot;overlaps&quot; as much as possible with the content of the previous sentences.</Paragraph>
<Paragraph position="8"> With this principle, we can show that inferencing appears to be undertaken only when integration is blocked for lack of overlap of the expressed meanings of the component sentences. (Inferencing is of course required for argument, for determining the truth of an assertion, or for otherwise applying the comprehension of a language act against the world.) These &quot;expressed meanings of the discourse&quot; are nothing more than their semantic representation, which we have thus shown to be necessarily held distinct from background knowledge used in inferencing, represented in a cognitive network.</Paragraph>
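The following minimal sketch (in Python, not the paper's own formalism) illustrates the distinction argued for above, under stated assumptions: a cognitive network whose arcs are marked with their epistemic world is kept apart from a small, malleable semantic representation of the current discourse, and inferencing over the network is attempted only when integration by overlap is blocked. All names here (Arc, CognitiveNetwork, SemanticRepresentation, integrate) are illustrative inventions, not the author's notation.

```python
# A hedged sketch of the paper's distinction, not its actual formalism:
# world-marked background knowledge vs. a separate discourse representation.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Arc:
    """One labelled relation, annotated with the world it is believed in."""
    source: str
    relation: str
    target: str
    world: str = "real"   # epistemic marking: "real", "Holmes", "Bond", ...


@dataclass
class CognitiveNetwork:
    """Background knowledge: many worlds conflated into one marked network."""
    arcs: set[Arc] = field(default_factory=set)

    def decompose(self, world: str) -> set[Arc]:
        """Recover the separate network for one world from the conflation."""
        return {a for a in self.arcs if a.world == world}

    def infer(self, source: str, relation: str, world: str) -> set[str]:
        """'Moving through' the network; a one-step lookup stands in for inference."""
        return {a.target for a in self.decompose(world)
                if a.source == source and a.relation == relation}


@dataclass
class SemanticRepresentation:
    """Expressed meanings of the discourse so far: undetailed and malleable."""
    content: set[tuple[str, str, str]] = field(default_factory=set)


def integrate(sentence: set[tuple[str, str, str]],
              discourse: SemanticRepresentation,
              knowledge: CognitiveNetwork,
              world: str = "real") -> None:
    """Combine a sentence's semantic contribution with the discourse so far.

    The sentence is taken in the most redundant reading available: if it
    overlaps with what was already expressed, it is simply deposited and no
    inferencing is undertaken. Only when integration is blocked for lack of
    overlap do we fall back on the cognitive network for a bridging fact.
    """
    known_terms = {term for triple in discourse.content for term in triple}
    overlap = {triple for triple in sentence
               if any(term in known_terms for term in triple)}
    if overlap or not discourse.content:
        discourse.content |= sentence          # integration by overlap: deposit only
        return
    # Integration blocked: inferencing over background knowledge is required.
    for (s, r, _) in sentence:
        for bridged in knowledge.infer(s, r, world):
            discourse.content.add((s, r, bridged))
    discourse.content |= sentence


if __name__ == "__main__":
    kb = CognitiveNetwork({Arc("Watson", "lives_at", "Baker St", world="Holmes"),
                           Arc("Holmes", "colleague_of", "Watson", world="Holmes")})
    discourse = SemanticRepresentation()
    integrate({("Holmes", "colleague_of", "Watson")}, discourse, kb, world="Holmes")
    # The next sentence overlaps on "Watson", so it is merged without inferencing.
    integrate({("Watson", "lives_at", "Baker St")}, discourse, kb, world="Holmes")
    print(discourse.content)
```

The point of the toy design is only that the two structures are used differently: the discourse representation receives deposits at speech time, while the cognitive network is consulted only when overlap fails.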
<Paragraph position="9"> We can conclude, then, that in some form or other, a representation of the semantic effects of the sentences is needed to account for how discourses are understood. Although this semantic representation may be conflated (in a computer, e.g.) with a cognitive network, it is logically distinct, and may be profitably so treated.</Paragraph>
<Paragraph position="10"> This semantic representation, we may observe, is distinct from the ordinary cognitive network in being extremely undetailed, containing mostly syntagmatic rather than paradigmatic relationships, and very malleable, and it has a privileged position in the interpretation of language acts.</Paragraph>
<Paragraph position="11"> It is also used primarily, at speech time, as a depository, with perhaps no inferencing, while a cognitive network as generally understood receives perhaps nothing at speech time, but is used primarily for inferencing. Thus they appear to be quite distinct in contents, function and usage.</Paragraph>
</Section> </Paper>