<?xml version="1.0" standalone="yes"?>
<Paper uid="W02-1002">
  <Title>Conditional Structure versus Conditional Estimation in NLP Models</Title>
  <Section position="5" start_page="0" end_page="0" type="concl">
    <SectionTitle>
5 Conclusions
</SectionTitle>
    <Paragraph position="0"> We have argued that optimizing an objective that is as close to the task &amp;quot;accuracy&amp;quot; as possible is advantageous in NLP domains, even in data-poor cases where machine-learning results suggest discriminative approaches may not be reliable. We have also argued that the model structure is a far more important issue.</Paragraph>
    <Paragraph position="1"> For simple POS tagging, the observation bias effect of the model's independence assumptions is more evident than label bias as a source of error, but both are examples of explaining-away effects which can arise in conditionally structured models. Our results, combined with others in the literature, suggest that conditional model structure is, in and of itself, undesirable, unless that structure enables methods of incorporating better features, explaining why maximum-entropy taggers and parsers have had such success despite the inferior performance of their basic skeletal models.</Paragraph>
  </Section>
class="xml-element"></Paper>