<?xml version="1.0" standalone="yes"?>
<Paper uid="J93-2006">
  <Title>BBN Systems and Technologies</Title>
  <Section position="4" start_page="377" end_page="378" type="relat">
    <SectionTitle>4.4 Related Work</SectionTitle>
    <Paragraph position="0">In addition to the work discussed earlier on tools to increase the portability of natural language systems, another recent paper (Hindle and Rooth 1990) is directly related to our goal of inferring case frame information from examples.</Paragraph>
    <Paragraph position="1">Hindle and Rooth focused only on prepositional phrase attachment using a probabilistic model, whereas our work applies to all case relations. Their work used an unsupervised training corpus of 13 million words to judge the strength of prepositional affinity to verbs, e.g., how likely it is for to to attach to the word go, for from to attach to the word leave, or for to to attach to the word flight. This lexical affinity is measured independently of the object of the preposition. By contrast, we are exploring induction of semantic relations from supervised training, where very little training may be available. Furthermore, we are looking at triples of headword (or semantic class), syntactic case, and headword (or semantic class).</Paragraph>
    <Paragraph position="2">Hindle and Rooth evaluated their probability model only in the limited verb-noun phrase-prepositional phrase case, in which the prepositional phrase has just two possible attachment sites (the verb or the noun); therefore, even using no model at all would be at least 50% accurate. In our test, many of the test cases involved three or more possible attachment points for the prepositional phrase, which provided a more realistic test.</Paragraph>
    <Paragraph position="3">An interesting next step would be to combine these two probabilistic models (perhaps via linear weights) in order to get the benefit of domain-specific knowledge, as we have explored, and the benefit of domain-independent knowledge, as Hindle and Rooth have explored.</Paragraph>
    <Section position="1" start_page="378" end_page="378" type="sub_section">
      <SectionTitle>4.5 Future Work: Finding Relations/Combining Fragments</SectionTitle>
      <Paragraph position="0">The experiments on the effectiveness of finding core NPs using only local information were run by midsummer 1990. In fall 1990, another alternative, the Fast Partial Parser (FPP), which is a derivative of earlier work (de Marcken 1990), became available to us.</Paragraph>
      <Paragraph position="1">It finds fragments using a stochastic part-of-speech algorithm and a nearly deterministic parser. It produces fragments averaging three to four words in length. Figure 9 shows an example of its output for the sentence:</Paragraph>
      <Paragraph position="2">A BOMB EXPLODED TODAY AT DAWN IN THE PERUVIAN TOWN OF YUNGUYO, NEAR THE LAKE, VERY NEAR WHERE THE PRESI-</Paragraph>
    </Section>
  </Section>
</Paper>
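
The lexical-affinity idea summarized in the related-work paragraph above (a preposition's tendency to attach to a particular head, estimated from unsupervised text and measured independently of the preposition's object) can be illustrated with a minimal sketch. This is an assumption-laden Python illustration, not Hindle and Rooth's implementation; the function names, data structures, and toy observations are hypothetical.

from collections import defaultdict

# Minimal sketch of a lexical-affinity score in the spirit of Hindle and
# Rooth's prepositional-phrase attachment work: count how often each
# preposition is observed with each candidate head in unsupervised text,
# then score attachment by relative frequency. Illustrative only.

prep_head_counts = defaultdict(int)   # counts of (head, preposition) pairs
head_counts = defaultdict(int)        # counts of each candidate head

def observe(head, preposition):
    """Record one unsupervised observation of a preposition near a head."""
    prep_head_counts[(head, preposition)] += 1
    head_counts[head] += 1

def affinity(head, preposition):
    """Relative frequency of `preposition` with `head` (a crude stand-in
    for a lexical association score); 0.0 for unseen heads."""
    total = head_counts[head]
    return prep_head_counts[(head, preposition)] / total if total else 0.0

# Toy observations echoing the examples in the text: "to" with "go",
# "from" with "leave", "to" with "flight".
for h, p in [("go", "to"), ("go", "to"), ("go", "with"),
             ("leave", "from"), ("flight", "to")]:
    observe(h, p)

print(affinity("go", "to"))    # about 0.67: "to" attaches readily to "go"
print(affinity("go", "from"))  # 0.0: "from" was never observed with "go"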
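
The same paragraph contrasts this with induction from supervised training over triples of headword (or semantic class), syntactic case, and headword (or semantic class). A minimal sketch of tabulating such triples, with back-off from headwords to semantic classes, follows; the semantic-class table, the example triples, and the back-off scheme are assumptions for illustration, not the system described in the paper.

from collections import Counter

# Sketch: count supervised (governing head, syntactic case, dependent head)
# triples, also counting a class-level generalization of each triple so that
# unseen headword combinations can back off to their semantic classes.

semantic_class = {"flight": "TRANSPORT-EVENT", "boston": "CITY", "denver": "CITY"}

def generalize(word):
    """Map a headword to its semantic class when one is known."""
    return semantic_class.get(word, word)

triple_counts = Counter()

def train(examples):
    """examples: iterable of (governing_head, case_relation, dependent_head)."""
    for gov, case, dep in examples:
        triple_counts[(gov, case, dep)] += 1
        triple_counts[(generalize(gov), case, generalize(dep))] += 1

def score(gov, case, dep):
    """Lexical triple count, backing off to the class-level count when zero."""
    return triple_counts[(gov, case, dep)] or \
           triple_counts[(generalize(gov), case, generalize(dep))]

train([("flight", "to", "denver"), ("flight", "from", "boston")])
print(score("flight", "to", "boston"))  # 1: backs off to (TRANSPORT-EVENT, to, CITY)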
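
Finally, the suggestion to combine the two probabilistic models "perhaps via linear weights" corresponds to simple linear interpolation. The sketch below only illustrates the arithmetic; the interpolation weight and the example probabilities are placeholders, not values from the paper, and in practice the weight would be tuned on held-out data.

def combined_score(domain_specific, domain_independent, lam=0.7):
    """Linear interpolation of a domain-specific (supervised) estimate and a
    domain-independent (unsupervised) estimate; lam is a placeholder weight."""
    return lam * domain_specific + (1.0 - lam) * domain_independent

# Example: blending a supervised triple-based probability of 0.05 with an
# unsupervised lexical-affinity probability of 0.30.
print(combined_score(0.05, 0.30))  # 0.125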