<?xml version="1.0" standalone="yes"?> <Paper uid="E06-1039"> <Title>Multi-Document Summarization of Evaluative Text</Title> <Section position="7" start_page="311" end_page="311" type="concl"> <SectionTitle> 6 Conclusions </SectionTitle> <Paragraph position="0"> We have presented and compared a sentence extraction-based and a language generation-based approach to summarizing evaluative text. A formative user study of our MEAD* and SEA summarizers found that, quantitatively, they performed equally well relative to each other, while both significantly outperformed a standard baseline approach to multi-document summarization. Trends we identified in the results, as well as qualitative comments from participants in the user study, indicate that the two summarizers have different strengths and weaknesses. On the one hand, although MEAD* summaries provide varied language and detail about customer opinions, they lack accuracy and precision, failing to give an overview of the opinions expressed in the evaluative text. On the other hand, SEA summaries provide a general overview of the source text, while sounding 'robotic', repetitive, and rather incoherent.</Paragraph> <Paragraph position="1"> Fortunately, some of these differences are quite complementary. In future work, we plan to investigate how SEA and MEAD* can be integrated and improved.</Paragraph> </Section> </Paper>