<?xml version="1.0" standalone="yes"?>
<Paper uid="P05-2009">
<Title>Learning Meronyms from Biomedical Text</Title>
<Section position="4" start_page="49" end_page="49" type="relat">
<SectionTitle> 2 Related Work </SectionTitle>
<Paragraph position="0"> Early work on knowledge extraction from electronic dictionaries used lexico-syntactic patterns to build relational records from definitions. This included some work on the partOf relation (Evens, 1988). Lexical relation extraction has, however, concentrated on hyponym extraction. A widely cited method is that of Hearst (1992), who argues that specific lexical relations are expressed in well-known intra-sentential lexico-syntactic patterns. Hearst successfully extracted hyponym relations, but had little success with meronymy, finding that meronymic contexts are ambiguous (for example, cat's paw and cat's dinner). Morin (1999) reported a semi-automatic implementation of Hearst's algorithm.</Paragraph>
<Paragraph position="1"> Recent work has applied lexical relation extraction to ontology learning (Maedche and Staab, 2004).</Paragraph>
<Paragraph position="2"> Berland and Charniak (1999) report what they believed to be the first work finding part-whole relations from unlabelled corpora. Their method is similar to Hearst's, but adds metrics for ranking proposed part-whole relations. They report 55% accuracy for the top 50 ranked relations, using only the two best extraction patterns.</Paragraph>
<Paragraph position="3"> Girju (2003) reports a relation discovery algorithm based on Hearst's. Girju contends that the ambiguity of part-whole patterns means that more information is needed to distinguish meronymic from non-meronymic contexts. She developed an algorithm to learn semantic constraints for this differentiation, achieving 83% precision and 98% recall with a small set of manually selected patterns. Others have looked specifically at meronymy in anaphora resolution (e.g. Poesio et al. (2002)).</Paragraph>
<Paragraph position="4"> The algorithm presented here learns relations directly between semantically typed multiword terms, and itself contributes to term recognition. It proceeds as follows (see the sketch after this section):
Input:
1. Using input resources
   (a) Label terms
   (b) Label relations
2. For a fixed number of iterations, or until no new relations are learned
   (a) Identify contexts that contain both participants in a relation
   (b) Create patterns describing contexts
   (c) Generalise the patterns
   (d) Use generalised patterns to identify new relation instances
   (e) Label new terms
   (f) Label new relations
Learning is automatic, with neither manual selection of best patterns nor expert validation of patterns. In these respects, it differs from earlier work. Hearst and others learn relations between either noun phrases or single words, while Morin (1999) discusses how hypernyms learnt between single words can be projected onto multi-word terms. Earlier algorithms include manual selection of initial or &quot;best&quot; patterns. The experiments differ from others in that they are restricted to a well-defined domain, anatomy, and use existing domain knowledge resources.</Paragraph>
</Section>
</Paper>
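
The iterative loop listed in the final paragraph can be illustrated with a short sketch. This is a minimal sketch only, not the authors' implementation: sentences are plain strings, terms are matched by simple substring search, the generalisation step is a placeholder that merely deduplicates patterns, and the seed relation and example corpus are invented for illustration.

import re
from dataclasses import dataclass


@dataclass(frozen=True)
class Relation:
    """A part-whole relation between two (possibly multiword) terms."""
    part: str
    whole: str


def find_contexts(sentences, relations):
    """Step 2(a): find sentences that mention both participants of a known relation."""
    contexts = []
    for sent in sentences:
        for rel in relations:
            if rel.part in sent and rel.whole in sent:
                contexts.append((sent, rel))
    return contexts


def make_pattern(sent, rel):
    """Step 2(b): turn a context into a pattern by replacing the two terms with slots.

    Naive: assumes neither term is a substring of the other.
    """
    pattern = re.escape(sent)
    pattern = pattern.replace(re.escape(rel.part), r"(?P<part>[\w ]+)")
    pattern = pattern.replace(re.escape(rel.whole), r"(?P<whole>[\w ]+)")
    return pattern


def generalise(patterns):
    """Step 2(c): placeholder for pattern generalisation; here we only deduplicate."""
    return set(patterns)


def learn_relations(sentences, seed_relations, max_iterations=5):
    """Steps 1-2: bootstrap new relations from seed relations and raw text."""
    relations = set(seed_relations)
    for _ in range(max_iterations):
        contexts = find_contexts(sentences, relations)
        patterns = generalise(make_pattern(s, r) for s, r in contexts)
        # Keep only patterns in which both slots were successfully introduced.
        patterns = [p for p in patterns if "(?P<part>" in p and "(?P<whole>" in p]
        new_relations = set()
        for pattern in patterns:                      # step 2(d)
            for sent in sentences:
                match = re.search(pattern, sent)
                if match:
                    new_relations.add(Relation(match.group("part").strip(),
                                               match.group("whole").strip()))
        new_relations -= relations
        if not new_relations:
            break                                     # no new relations learned
        relations |= new_relations                    # steps 2(e)-(f)
    return relations


if __name__ == "__main__":
    seeds = [Relation("nucleus", "cell")]
    corpus = [
        "the nucleus is part of the cell",
        "the mitochondrion is part of the cell",
    ]
    print(learn_relations(corpus, seeds))

In a full system, the labelling steps would draw on the domain knowledge resources mentioned in the paper, and generalisation would relax the literal context into patterns that match a wider range of sentences rather than near-duplicates.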