<?xml version="1.0" standalone="yes"?>
<Paper uid="N03-1020">
  <Title>Automatic Evaluation of Summaries Using N-gram Co-Occurrence Statistics</Title>
  <Section position="7" start_page="400" end_page="400" type="concl">
    <SectionTitle>
Conclusions
</SectionTitle>
    <Paragraph position="0"> According to Over (2003), NIST spent about 3,000 man hours each in DUC 2001 and 2002 for topic and document selection, summary creation, and manual evaluation. Therefore, it would be wise to use these valuable resources, i.e. manual summaries and evaluation results, not only in the formal evaluation every year but also in developing systems and designing automatic evaluation metrics. We would like to propose an annual automatic evaluation track in DUC that encourages participants to invent new automated evaluation metrics. Each year the human evaluation results can be used to evaluate the effectiveness of the various automatic evaluation metrics. The best automatic metric will be posted at the DUC website and used as an alternative in-house and repeatable evaluation mechanism during the next year.</Paragraph>
    <Paragraph position="1"> In this way the evaluation technologies can advance at the same pace as the summarization technologies improve. null</Paragraph>
  </Section>
</Paper>