
<?xml version="1.0" standalone="yes"?>
<Paper uid="N06-1048">
  <Title>Nuggeteer: Automatic Nugget-Based Evaluation using Descriptions and Judgements</Title>
  <Section position="1" start_page="0" end_page="0" type="abstr">
    <SectionTitle>
Abstract
</SectionTitle>
    <Paragraph position="0"> The TREC Definition and Relationship questions are evaluated on the basis of information nuggets that may be contained in system responses. Human evaluators provide informal descriptions of each nugget, and judgements (assignments of nuggets to responses) for each response submitted by participants. While human evaluation is the most accurate way to compare systems, approximate automatic evaluation becomes critical during system development.</Paragraph>
    <Paragraph position="1"> We present Nuggeteer, a new automatic evaluation tool for nugget-based tasks.</Paragraph>
    <Paragraph position="2"> Like the first such tool, Pourpre, Nuggeteer uses words in common between candidate answer and answer key to approximate human judgements. Unlike Pourpre, but like human assessors, Nuggeteer creates a judgement for each candidatenugget pair, and can use existing judgements instead of guessing. This creates a more readily interpretable aggregate score, and allows developers to track individual nuggets through the variants of their system. Nuggeteer is quantitatively comparable in performance to Pourpre, and provides qualitatively better feedback to developers.</Paragraph>
  </Section>
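  <!--
    A minimal sketch of the nugget-based scoring idea described in the abstract,
    assuming simple unigram overlap between each candidate response and each
    nugget description, a fixed matching threshold, and an optional lookup of
    existing human judgements. The function names, the threshold value, and the
    known_judgements parameter are illustrative assumptions, not Nuggeteer's
    actual interface.

    def tokens(text):
        # Lowercased bag of words for a piece of text.
        return set(text.lower().split())

    def overlap(candidate, nugget_description):
        # Fraction of the nugget description's words found in the candidate.
        cand, nugget = tokens(candidate), tokens(nugget_description)
        if not nugget:
            return 0.0
        return len(cand & nugget) / len(nugget)

    def judge(candidates, nuggets, known_judgements=None, threshold=0.5):
        # Produce a judgement (matched or not) for every candidate/nugget pair,
        # reusing an existing human judgement when one is available instead of
        # guessing from word overlap.
        judgements = {}
        for cand in candidates:
            for nugget_id, description in nuggets.items():
                key = (cand, nugget_id)
                if known_judgements and key in known_judgements:
                    judgements[key] = known_judgements[key]
                else:
                    judgements[key] = overlap(cand, description) >= threshold
        return judgements
  -->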
class="xml-element"></Paper>