
<?xml version="1.0" standalone="yes"?>
<Paper uid="J05-3001">
  <Title>Squibs and Discussions: Evaluating Discourse and Dialogue Coding Schemes</Title>
  <Section position="4" start_page="293" end_page="293" type="concl">
    <SectionTitle>
4. Conclusion
</SectionTitle>
    <Paragraph position="0"> The application of agreement statistics has done much to improve the scientific rigor of discourse and dialogue research. However, unless we understand what we are attempting to prove and which tests are appropriate, the results of evaluation can be unsatisfactory or, worse still, misleading. In this article we have encouraged researchers to clarify their reasons for assessing agreement and have suggested that in many cases the most suitable test for this purpose is one that corrects for expected agreement, based on an assumed equal distribution between coders.</Paragraph>
  </Section>
</Paper>
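The test the conclusion points to corrects raw agreement for the agreement expected by chance, with chance computed from a single category distribution assumed to be shared equally by all coders (pooled over their judgments, as in Fleiss-style kappa), rather than from a separate distribution per coder. A minimal sketch of such a statistic follows; the function name, input layout, and example labels are illustrative assumptions, not taken from the article.

```python
from collections import Counter

def kappa_pooled(items):
    """Chance-corrected agreement with expected agreement computed
    from one category distribution pooled over all coders, i.e. every
    coder is assumed to assign labels at the same (equal) rates.

    `items` is a list with one inner list per coded unit, containing
    the labels that each coder assigned to that unit.
    """
    n_items = len(items)
    n_coders = len(items[0])
    pairs = n_coders * (n_coders - 1)

    # Observed agreement: mean proportion of agreeing coder pairs per unit.
    p_obs = sum(
        sum(c * (c - 1) for c in Counter(labels).values()) / pairs
        for labels in items
    ) / n_items

    # Expected agreement: probability that two assignments drawn from the
    # single pooled label distribution happen to match.
    pooled = Counter(label for labels in items for label in labels)
    total = n_items * n_coders
    p_exp = sum((c / total) ** 2 for c in pooled.values())

    return (p_obs - p_exp) / (1 - p_exp)

# Illustrative data: two coders tagging four utterances as
# question ("q") or statement ("s"); they disagree on one unit.
print(kappa_pooled([["q", "q"], ["q", "s"], ["s", "s"], ["q", "q"]]))
# -> about 0.47: observed agreement 0.75, expected agreement 0.53
```

Because the expected-agreement term uses one distribution for all coders, a coder cannot inflate the coefficient through idiosyncratic labeling preferences, which is the property the assumed equal distribution buys.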