<?xml version="1.0" standalone="yes"?>
<Paper uid="W97-0707">
  <Title>I I I I I</Title>
  <Section position="6" start_page="45" end_page="45" type="concl">
    <SectionTitle>
5 Conclusion
</SectionTitle>
    <Paragraph position="0"> In tlus study, we have tried to evaluate automattc summanzaUon methods proposed earlier If a good testbed for evaluaUng summaries were available, the evalualaon methodology adopted m this study could be improved, but we believe it is the best we can currently do Under.</Paragraph>
    <Paragraph position="1"> our evaluation scheme, the four extraclaon algorithms exanuned perform comparably, but they produced sigmflcantly better extracts than a random selection of paragraphs The absolute performance figures are not/ugh, but given the low overlap between two human-generated extracts, they are enunenfly satisfactory However, this wide vanatton between users brings us to the question of whether summanzauon by automauc extracuon is feasible If humans are unable to agree on wluch paragraphs best represent an amcle, it is unreasonable to expect an automauc procedure to identify the best extract, whatever that might be We also find that presenting the user with the lmUal part of an arucle is as good as emploYing any &amp;quot;mtelhgent&amp;quot; text extraction scheme In summary, automauc summanzauon by extractuon is admtttedly an imperfect method However, at the moment, it does appear to be the only domain-independent technique which performs reasonably</Paragraph>
  </Section>
</Paper>