<?xml version="1.0" standalone="yes"?>
<Paper uid="N06-1057">
<Title>ParaEval: Using Paraphrases to Evaluate Summaries Automatically</Title>
<Section position="8" start_page="453" end_page="453" type="concl">
<SectionTitle>7 Conclusion and Future Work</SectionTitle>
<Paragraph position="0"> In this paper, we have described an automatic summarization evaluation method, ParaEval, that facilitates paraphrase matching using a large, domain-independent paraphrase table extracted from a bilingual parallel corpus. The three-layer matching strategy guarantees a ROUGE-like baseline comparison if paraphrase matching fails.</Paragraph>
<Paragraph position="1"> The paraphrase extraction module in the current implementation of ParaEval does not discriminate among the phrases that are found to be paraphrases of one another. We wish to incorporate the probabilistic paraphrase extraction model of Bannard and Callison-Burch (2005) to better approximate the relations between paraphrases. This adaptation will also lead to a stochastic model for the low-level lexical matching and scoring.</Paragraph>
<Paragraph position="2"> We chose English-Chinese MT parallel data because they are news-oriented, which coincides with the genre of the DUC task. However, it is unknown how large a parallel corpus must be to yield a paraphrase collection good enough to support the evaluation process. The quality of the paraphrase table is also affected by changes in the domain and language pair of the MT parallel data.</Paragraph>
<Paragraph position="3"> We plan to use ParaEval to investigate the impact of these changes on paraphrase quality, under the assumption that better paraphrase collections lead to better summary evaluation results.</Paragraph>
<Paragraph position="4"> An immediate continuation of the described work is to incorporate paraphrase matching and extraction into the summary creation process itself. With ParaEval, it also becomes possible to evaluate systems that do incorporate some level of abstraction, especially paraphrasing.</Paragraph>
</Section>
</Paper>
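The three-layer matching strategy is only summarized in prose above. Below is a minimal, hypothetical sketch of how such a tiered fallback could be organized, assuming a precomputed paraphrase table that maps a phrase to alternative phrasings; it collapses the paraphrase layers into a single table lookup, and the greedy policy, table layout, and all names are illustrative assumptions, not the authors' implementation.

from collections import Counter

def tiered_recall(peer, reference, para_table, max_len=4):
    """Score a peer summary against one reference with a paraphrase tier
    and a literal unigram fallback (a simplified stand-in for the
    three-layer strategy; the policy here is an illustrative assumption)."""
    peer_toks, ref_toks = peer.split(), reference.split()
    covered = set()    # reference token positions already credited
    used_peer = set()  # peer token positions consumed by the paraphrase tier
    credited = 0       # number of reference tokens matched so far

    # Tier 1: greedy multi-word matching through the paraphrase table.
    i = 0
    while i < len(peer_toks):
        step = 1
        for n in range(max_len, 1, -1):
            phrase = " ".join(peer_toks[i:i + n])
            alternatives = para_table.get(phrase, set()) | {phrase}
            hit = _find_span(ref_toks, alternatives, covered)
            if hit is not None:
                start, length = hit
                covered.update(range(start, start + length))
                used_peer.update(range(i, i + n))
                credited += length
                step = n
                break
        i += step

    # Tier 2: literal unigram matching (ROUGE-1-style fallback).
    leftover = Counter(t for k, t in enumerate(ref_toks) if k not in covered)
    for k, tok in enumerate(peer_toks):
        if k not in used_peer and leftover[tok] > 0:
            leftover[tok] -= 1
            credited += 1

    return credited / max(len(ref_toks), 1)   # recall against the reference

def _find_span(ref_toks, alternatives, covered):
    """Find an uncovered reference span whose surface form is one of the
    alternative phrasings; return (start, length) or None."""
    for alt in alternatives:
        alt_toks = alt.split()
        length = len(alt_toks)
        for start in range(len(ref_toks) - length + 1):
            if any(k in covered for k in range(start, start + length)):
                continue
            if ref_toks[start:start + length] == alt_toks:
                return start, length
    return None

# Toy usage with a one-entry paraphrase table (entirely illustrative):
table = {"passed away": {"died"}}
print(tiered_recall("the senator passed away on monday",
                    "the senator died on monday", table))   # 1.0

Whatever the paraphrase tier leaves uncovered can still be credited literally in the second tier, which mirrors the ROUGE-like baseline behaviour described in the conclusion.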
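For the probabilistic extension mentioned in the future-work discussion, Bannard and Callison-Burch (2005) estimate the probability that a phrase e2 paraphrases e1 by pivoting through shared foreign phrases f in a bilingual phrase table: p(e2 | e1) = sum over f of p(f | e1) * p(e2 | f). The sketch below illustrates that computation; the dictionary layout and names are assumptions made for illustration and do not reflect ParaEval's code.

from collections import defaultdict

def pivot_paraphrase_probs(p_f_given_e, p_e_given_f):
    """Pivot-based paraphrase probabilities in the style of Bannard and
    Callison-Burch (2005): p(e2 | e1) = sum over f of p(f | e1) * p(e2 | f).

    p_f_given_e: {english phrase e: {foreign phrase f: p(f | e)}}
    p_e_given_f: {foreign phrase f: {english phrase e: p(e | f)}}
    Both tables would come from bilingual phrase extraction; the dictionary
    layout here is an assumption made for the sketch."""
    paraphrases = {}
    for e1, foreign in p_f_given_e.items():
        scores = defaultdict(float)
        for f, p_fe in foreign.items():
            for e2, p_ef in p_e_given_f.get(f, {}).items():
                if e2 != e1:                   # a phrase is not its own paraphrase
                    scores[e2] += p_fe * p_ef  # marginalize over the pivot f
        paraphrases[e1] = dict(scores)
    return paraphrases

# Toy example: two foreign pivots link "passed away" to "died".
p_f_given_e = {"passed away": {"f1": 0.7, "f2": 0.3}}
p_e_given_f = {"f1": {"died": 0.6, "passed away": 0.4},
               "f2": {"died": 0.5, "passed away": 0.5}}
print(pivot_paraphrase_probs(p_f_given_e, p_e_given_f))
# {'passed away': {'died': 0.57}}   (0.7 * 0.6 + 0.3 * 0.5)

Scores of this kind could weight paraphrase matches during scoring, which is the stochastic low-level matching model the conclusion anticipates.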