<?xml version="1.0" standalone="yes"?>
<Paper uid="W06-2707">
  <Title>Annotating text using the Linguistic Description Scheme of MPEG-7: The DIRECT-INFO Scenario</Title>
  <Section position="7" start_page="55" end_page="55" type="concl">
    <SectionTitle>
5 Conclusions and Future Work
</SectionTitle>
    <Paragraph position="0"> In the DIRECT-INFO project we managed to include results of text analysis in an automated fashion into a MPEG-7 description, which was dealing with the XML representation of the analysis of various modalities. Using corresponding metadata, it was possible to ensure the encoding/annotation of the related results in one file and to facilitate the access to the separated annotation using XPath. As such the DIRECT-INFO MPEG-7 annotation schema is offering a practicable multi-dimensional annotation scheme, if we consider a &amp;quot;dimensions&amp;quot; as being the output of the analysis of various modalities.</Paragraph>
    <Paragraph position="1"> MPEG-7 proved to be generic and flexible enough for combining, saving and accessing various types of annotation.</Paragraph>
    <Paragraph position="2"> Limitations of MPEG-7 were encountered when the task was about fusion or merging of information encoded in the various descriptors (or features), and this task was addressed in a posterior step, whereas the encoding scheme of MPEG-7 was not longer helpful, in defining for example relations between the annotation resulting from the different modules or for defining constraints between those annotation. There seems to be a need for a higher level of representation for annotation resulting from the analysis of distinct media, being low-level features for images or high-level semantic features for texts.</Paragraph>
    <Paragraph position="3"> The need of an &amp;quot;ontologization&amp;quot; of multimedia features has been already recognized and projects are already dealing with this, like AceMedia. Initial work in relating multimodal annotation in DIRECT-INFO will be further developed in K-Space, a new Network of Excellence, which goal is to provide for support in semantic inference for both automatic and semi-automatic annotation and retrieval of multimedia content. K-Space aims at closing the &amp;quot;semantic gap&amp;quot; between the low-level content descriptions and the richness and subjectivity of semantics in high-level human interpretations of audiovisual media.</Paragraph>
  </Section>
class="xml-element"></Paper>