<?xml version="1.0" standalone="yes"?>
<Paper uid="M98-1016">
  <Title>LEARNING PROCESS: INFORMATION DISTILLATION OF TRAINING CORPUS Learning Process in General</Title>
  <Section position="6" start_page="0" end_page="0" type="concl">
    <SectionTitle>
FUTURE RESEARCH DIRECTION
</SectionTitle>
    <Paragraph position="0"> Our brief experimentation in Chinese and English Named Entity recognition shows that the system has great potential that deserves further investigation.</Paragraph>
    <Paragraph position="1"> 1. Modeling of the problem: currently information and knowledge is represented in the form of word#2Ftag. This may pose too much restriction. A better way of representing information and knowledge, in other words, a better modeling of the problem, should be studied.</Paragraph>
    <Paragraph position="2"> 2. Quantitive justi#0Ccation of the learning process #28knowledge distillation#29 should also be studied. The system should be able to compare di#0Berent set of back-o#0B features and thus the best one can be chosen.</Paragraph>
    <Paragraph position="3"> 3. The system provides great #0Dexibilityashow to optimize it. The optimization should be done systematicly, rather than trial by trial as is the case for the time being.</Paragraph>
  </Section>
class="xml-element"></Paper>