<?xml version="1.0" standalone="yes"?>
<Paper uid="W97-0705">
  <Title>I I I I I I I I I I I I I I I I I</Title>
  <Section position="6" start_page="27" end_page="29" type="evalu">
    <SectionTitle>
4 Conclusion
</SectionTitle>
    <Paragraph position="0"> A text-by-text comparison of the results Of the legxbdRy criterion (Very Bad, Medtocre, Gooc~ Very Good)m the FAN protocol, and the results of the quahty of the abstract (Incomprehensible, Not Very Clear. Fawly Clear, Clear) m the MLUCE protocol, shows very httle convergence Apart from two excepUons, MLUCE ts always more demanding than FAN Here we hlghhght the differences m assessment between a user who reads an abstract m order to find answers to speczfic questmns, and a reader who =s not trying to assess the mformat~on content of the same abstract The quahty of an abstract depends on what the user expects from tt, and only an in-sltu assessment wall allow one to really assess the performance of a &amp;quot;summansmg&amp;quot; system Followmg this expenmant with these two protocols, the mstallaUon of any such procedure would appear to be extremely expensive - not to mention the fact that =t would reqmre &amp;quot;user expectatmns&amp;quot; to be defined, and the related assessment cnterm to be formahsed</Paragraph>
  </Section>
</Paper>