<?xml version="1.0" standalone="yes"?> <Paper uid="W03-0906"> <Title>Entailment, Intensionality and Text Understanding</Title> <Section position="6" start_page="0" end_page="0" type="concl"> <SectionTitle> 5 Conclusion </SectionTitle> <Paragraph position="0"> We have argued that entailment and contradiction detection (ECD) should be included as one of a number of metrics for evaluating text understanding. Intensional constructions -- predications with proposition- or property-denoting arguments -- are a challenge for ECD. They occur commonly, but simple predicate-argument representations do not do justice to the variety of inferences they support. More sophisticated first-order accounts (Hirst, 1991; Hobbs, 1985) may be extendable to bear this load.</Paragraph> <Paragraph position="1"> But there is also a direct path building on results from possible-worlds semantics. We are developing contexted clausal representations aiming at a useful trade-off between tractability and expressivity. Other researchers are also building on insights from model-theoretic semantics in interesting ways, e.g. (Schubert and Hwang, 2000).</Paragraph> <Paragraph position="2"> Intensional ECD seems to presuppose deep and detailed syntactic and semantic analysis (though we have no arguments to rule out the possibility of shallower analysis).</Paragraph> <Paragraph position="3"> The current state of deep language processing technology suggests that ECD is a viable though challenging metric for open text in restricted domains.</Paragraph> <Paragraph position="4"> One issue that we have not addressed is the best form for annotated evaluation material for ECD. Ideally, this should be raw texts, annotated only to link the sentences or clauses that have entailment or contradiction relations between them. This has the benefit of being an almost entirely theory-neutral annotation scheme. A mark-up based around some form of semantic representation for texts (e.g. 
contexted clauses) would very likely impose an unfair penalty on alternative approaches. A limited precursor to raw-text mark-up for semantic evaluation was undertaken as part of the FraCaS project (Cooper and Colleagues, 1996). This was a semantic test suite of about 350 syllogisms, specifying entailment and contradiction relations, or the lack of them, e.g.</Paragraph> <Paragraph position="5"> (19) The PC-6082 is faster than the ITEL-XZ.</Paragraph> <Paragraph position="6"> The ITEL-XZ is fast.</Paragraph> <Paragraph position="7"> Is the PC-6082 fast? [Yes] Even for trivial, artificial examples like these two problems arose. (i) The premises or conclusions can be ambiguous, where entailments of contradictions follow under one set of interpretations but not under another. There is no obvious way of marking the intended interpretations.</Paragraph> <Paragraph position="8"> (ii) It is extraordinarily hard to construct examples where inference relations do not in part depend on world knowledge. By taking texts rather than sentences as the units of annotation, the intended interpretation is generally much clearer (to human annotators). With regard to domain dependence, one just has to accept that ECD quality will decline without world knowledge.</Paragraph> </Section> class="xml-element"></Paper>