<?xml version="1.0" standalone="yes"?>
<Paper uid="W99-0901">
  <Title>Hiding a Semantic Hierarchy in a Markov Model</Title>
  <Section position="9" start_page="7" end_page="7" type="concl">
    <SectionTitle>
6 Conclusion
</SectionTitle>
    <Paragraph position="0"> In the last section , we showed why the straight-forward application of an EM algorithm, namely the forward-backward algorithm, would not disambiguate the sensese of input words as desired. Thus, we introduced a type of smoothing which produced the desired bias in the example at hand. Then we showed how this smoothing, when used on certain graphs, produced unwanted biases which then necessitated further modifications in the E and M steps of the algorithm. In the end, even with smoothing, sense, length, and width balancing, the performance of the EM-like estimation was disappointing.</Paragraph>
    <Paragraph position="1"> One possible lesson is that EM itself is inappropriate for this problem. Despite the fact that it has become the default method for uncovering hidden structure in NLP problems, it essentially averages together many possible solutions. Possibly, a less linear method that eventually commits to one or another hypothesis about hidden structure may be more appropriate in this case.</Paragraph>
    <Paragraph position="2"> In conclusion, this paper has made the following contributions: it has shown how a stochastic generation model can make use of a semantic class hierarchy, it has provided a negative result with respect to parameter estimation for this model, and in doing so has provided an interesting illustration of the inner workings of the forward-backward algorithm.</Paragraph>
  </Section>
</Paper>