<?xml version="1.0" standalone="yes"?>
<Paper uid="C90-2067">
<Title>Word Sense Disambiguation with Very Large Neural Networks Extracted from Machine Readable Dictionaries</Title>
<Section position="5" start_page="393" end_page="393" type="concl">
<SectionTitle> 4. Conclusion </SectionTitle>
<Paragraph position="0"> The use of word relations implicitly encoded in machine-readable dictionaries, coupled with the neural network strategy, seems to offer a promising approach to WSD. This approach succeeds where the Lesk strategy fails, and it does not require determining and encoding microfeatures or other semantic information.</Paragraph>
<Paragraph position="1"> The model is also more robust than the Lesk strategy, since it does not rely on the presence or absence of particular words and can filter out some degree of &quot;noise&quot; (such as the inclusion of some wrong lemmas due to missing part-of-speech information, or the occasional activation of misleading homographs). However, several improvements can clearly be made: for instance, the part-of-speech of input words and of words in definitions could be used to extract only the correct lemmas from the dictionary; the frequency of use of particular senses of each word could help choose among competing senses; and additional knowledge could be extracted from other dictionaries and thesauri. It is also conceivable that the network could &quot;learn&quot; by giving more weight to links which have been heavily activated over numerous runs on large samples of text. The model we describe here is only a first step toward a fuller understanding and refinement of the use of VLNNs for language processing, and it opens several interesting avenues for further application and research.</Paragraph>
</Section>
</Paper>
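The link-reinforcement idea in the conclusion (giving more weight to links heavily activated over many runs) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the toy network, the `SenseNetwork` class, and all parameter values (`decay`, `rate`, `threshold`) are assumptions for exposition. Nodes stand for word senses; weighted links connect senses whose words co-occur in dictionary definitions, and after a spreading-activation run the links between strongly activated senses are strengthened.

```python
from collections import defaultdict

class SenseNetwork:
    """Toy sense network with spreading activation and Hebbian-style
    link reinforcement (illustrative sketch, not the paper's VLNN)."""

    def __init__(self):
        # (sense_a, sense_b) -> link weight; links stored in both directions
        self.weights = defaultdict(float)

    def add_link(self, a, b, w=1.0):
        self.weights[(a, b)] = w
        self.weights[(b, a)] = w

    def spread(self, seeds, steps=3, decay=0.5):
        """Run a few cycles of spreading activation from the seed senses."""
        act = {s: 1.0 for s in seeds}
        for _ in range(steps):
            incoming = defaultdict(float)
            for (a, b), w in self.weights.items():
                if a in act:
                    incoming[b] += decay * act[a] * w
            for node, v in incoming.items():
                act[node] = act.get(node, 0.0) + v
        return act

    def reinforce(self, act, rate=0.1, threshold=0.5):
        """Strengthen links whose two endpoints were both heavily activated,
        so frequently co-activated senses support each other more in later runs."""
        for (a, b) in list(self.weights):
            if act.get(a, 0.0) > threshold and act.get(b, 0.0) > threshold:
                self.weights[(a, b)] += rate

# Hypothetical example: the classic two senses of "pen".
net = SenseNetwork()
net.add_link("pen/writing", "ink")
net.add_link("pen/enclosure", "sheep")
activation = net.spread({"pen/writing", "ink"})
net.reinforce(activation)
```

After this run, the "pen/writing"–"ink" link has grown while the "pen/enclosure"–"sheep" link is unchanged, so the writing sense would win more easily next time; repeated over large text samples, this is the kind of gradual learning the authors suggest.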