<?xml version="1.0" standalone="yes"?>
<Paper uid="N06-1057">
  <Title>ParaEval: Using Paraphrases to Evaluate Summaries Automatically</Title>
  <Section position="1" start_page="0" end_page="0" type="abstr">
    <SectionTitle>Abstract</SectionTitle>
    <Paragraph position="0">ParaEval is an automated evaluation method for comparing reference and peer summaries. It facilitates a tiered-comparison strategy where recall-oriented global optimal and local greedy searches for paraphrase matching are enabled in the top tiers. We utilize a domain-independent paraphrase table extracted from a large bilingual parallel corpus using methods from Machine Translation (MT). We show that the quality of ParaEval's evaluations, measured by correlating with human judgments, closely resembles that of ROUGE's.</Paragraph>
  </Section>
</Paper>
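<!--
  A minimal sketch of the local greedy paraphrase-matching idea mentioned in the
  abstract, assuming a toy paraphrase table and pre-segmented phrases; the table
  contents, function name, and recall-style scoring are illustrative assumptions,
  not ParaEval's actual implementation.

  # Hypothetical paraphrase table: phrase mapped to its known paraphrases.
  PARAPHRASE_TABLE = {
      "passed away": {"died", "perished"},
      "in charge of": {"responsible for"},
  }

  def greedy_paraphrase_match(peer_phrases, ref_phrases, table):
      """Greedily pair each peer phrase with an unused reference phrase that is
      identical or listed as a paraphrase; return recall-style coverage."""
      unused = set(ref_phrases)
      matched = 0
      for phrase in peer_phrases:
          candidates = {phrase} | table.get(phrase, set())
          hit = next((r for r in unused if r in candidates), None)
          if hit is not None:
              unused.discard(hit)
              matched += 1
      return matched / len(ref_phrases) if ref_phrases else 0.0

  # Example: two of three reference phrases are covered, one via a paraphrase.
  print(greedy_paraphrase_match(
      ["passed away", "in charge of", "yesterday"],
      ["died", "responsible for", "last week"],
      PARAPHRASE_TABLE))
-->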