<?xml version="1.0" standalone="yes"?>
<Paper uid="P04-1013">
  <Title>Discriminative Training of a Neural Network Statistical Parser</Title>
  <Section position="8" start_page="0" end_page="0" type="concl">
    <SectionTitle>
8 Conclusions
</SectionTitle>
    <Paragraph position="0"> This article has investigated the application of discriminative methods to broad coverage natural language parsing. We distinguish between two di erent ways to apply discriminative methods, one where the probability model is changed to a discriminative one, and one where the probability model remains generative but the training method optimizes a discriminative criteria. We nd that the discriminative probability model is much worse than the generative one, but that training to optimize the discriminative criteria results in improved performance. Performance of the latter model on the standard test set achieves 90.1% F-measure on constituents, which is the second best current accuracy level, and only 0.6% below the current best (Bod, 2003).</Paragraph>
    <Paragraph position="1"> This paper has also proposed a neural network training method which optimizes a discriminative criteria even when the parameters being estimated are those of a generative probability model. This training method successfully satis es the con icting constraints that it be computationally tractable and that it be a good approximation to the theoretically optimal method. This approach contrasts with previous approaches to scaling up discriminative methods to broad coverage natural language parsing, which have parameterizations which depart substantially from the successful previous generative models of parsing.</Paragraph>
  </Section>
class="xml-element"></Paper>