<?xml version="1.0" standalone="yes"?>
<Paper uid="H94-1051">
  <Title>AUTOMATIC GRAMMAR ACQUISITION</Title>
  <Section position="8" start_page="270" end_page="270" type="concl">
    <SectionTitle>6. CONCLUSIONS</SectionTitle>
    <Paragraph position="0">These experiments provide a quantitative measure of the relative effectiveness of the three different types of grammars. Using the standard context-free grammar as a baseline, we see substantial improvement both with the addition of context information and with the incorporation of a probabilistic model. We also see evidence that using context to disambiguate among rules is not as effective as using probabilities.</Paragraph>
    <Paragraph position="1">There are still many problems to overcome. Direct conversion of Treebank parse trees into rules yields productions whose right-hand sides can vary in size between 1 and approximately 10.</Paragraph>
    <Paragraph position="2">This is suspected to have a significant impact on the performance of the context-dependent system.</Paragraph>
    <Paragraph position="3">Further improvements will be necessary before a trainable parser can produce parses of high enough quality to be useful in an understanding system. This increase in accuracy should be achievable by combining the strengths of the context-dependent model with those of the probabilistic context-free model, and by exploring ways to make use of other types of information, such as semantic information. It would also be worthwhile to further experiment with varying the amount of training data, contrasting domain-dependent and domain-independent training, and varying the amount and type of context information used by the context-dependent model.</Paragraph>
  </Section>
</Paper>
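
Editor's note: as an illustrative sketch of the rule-extraction step mentioned in the conclusions (not the authors' implementation), the Python fragment below reads context-free productions directly off a Treebank-style bracketed parse. Each internal node contributes one rule whose right-hand side is the sequence of its children's labels, which is why flat Treebank constituents yield right-hand sides of widely varying length. The bracketed example string and all function names are hypothetical.

# Sketch (hypothetical, not the paper's code): extract CFG productions from a
# Treebank-style bracketed parse. Each internal node yields one production
# whose right-hand side lists its children's labels, so flat constituents
# produce long right-hand sides; lexical (preterminal -> word) rules included.

def read_tree(tokens, pos=0):
    """Parse '(LABEL child child ...)' starting at tokens[pos]; return (node, next_pos)."""
    assert tokens[pos] == "("
    label = tokens[pos + 1]
    pos += 2
    children = []
    while tokens[pos] != ")":
        if tokens[pos] == "(":
            child, pos = read_tree(tokens, pos)
        else:
            child, pos = tokens[pos], pos + 1  # leaf token (a word)
        children.append(child)
    return (label, children), pos + 1

def productions(node, rules):
    """Collect one production per internal node: parent -> sequence of child labels."""
    label, children = node
    rhs = [c[0] if isinstance(c, tuple) else c for c in children]
    rules.append((label, rhs))
    for c in children:
        if isinstance(c, tuple):
            productions(c, rules)
    return rules

if __name__ == "__main__":
    bracketed = "(S (NP (DT the) (NN parser)) (VP (VBZ yields) (NP (JJ long) (NNS rules))))"
    tokens = bracketed.replace("(", " ( ").replace(")", " ) ").split()
    tree, _ = read_tree(tokens)
    for lhs, rhs in productions(tree, []):
        print(f"{lhs} -> {' '.join(rhs)}  (RHS length {len(rhs)})")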