<?xml version="1.0" standalone="yes"?>
<Paper uid="E06-1015">
<Title>Making Tree Kernels practical for Natural Language Learning</Title>
<Section position="7" start_page="119" end_page="119" type="concl">
<SectionTitle> 6 Conclusions </SectionTitle>
<Paragraph position="0"> In this paper, we have shown that tree kernels can be effectively adopted in practical natural language applications. The main arguments against their use have been their inefficiency and their lower accuracy compared with traditional feature-based approaches. We have shown that a fast algorithm (FTK) can evaluate tree kernels in linear average running time, and that the overall convergence time required by SVMs is compatible with very large data sets. Regarding accuracy, the experiments with Support Vector Machines on the PropBank and FrameNet predicate argument structures show that: (a) the richer the kernel is in terms of substructures (e.g. SST), the higher the accuracy; (b) tree kernels are also effective on automatically generated parse trees; and (c) since kernel combinations always improve traditional feature models, the best approach is to combine scalar-based and structure-based kernels.</Paragraph>
</Section>
</Paper>
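
The following Python code is a minimal sketch, not the authors' implementation: it illustrates the idea behind the conclusions above, i.e. an SST-style tree kernel whose Delta computation is restricted, FTK-style, to node pairs sharing the same production rule (which keeps the average running time close to linear on parse trees), plus a simple combination of a scalar feature kernel with the tree kernel. The names (Node, tree_kernel, combined_kernel) and the decay factor lam=0.4 are illustrative assumptions, not values taken from the paper.

    # Sketch under assumptions stated above; not the paper's FTK code.
    from collections import defaultdict

    class Node:
        """A parse-tree node; leaves are word nodes with no children."""
        def __init__(self, label, children=None):
            self.label = label
            self.children = children or []

        def production(self):
            # Production rule rooted at this node, e.g. ('NP', ('DT', 'NN')).
            return (self.label, tuple(c.label for c in self.children))

    def internal_nodes(tree):
        """All non-leaf nodes of the tree (iterative DFS)."""
        stack, out = [tree], []
        while stack:
            n = stack.pop()
            if n.children:
                out.append(n)
                stack.extend(n.children)
        return out

    def tree_kernel(t1, t2, lam=0.4):
        """SST kernel: sum of Delta(n1, n2) over node pairs with equal productions."""
        by_prod = defaultdict(list)
        for n in internal_nodes(t2):
            by_prod[n.production()].append(n)
        # FTK-style pair selection: only node pairs whose productions match.
        pairs = [(n1, n2) for n1 in internal_nodes(t1)
                 for n2 in by_prod.get(n1.production(), [])]

        memo = {}

        def delta(n1, n2):
            if n1.production() != n2.production():
                return 0.0
            key = (id(n1), id(n2))
            if key not in memo:
                if not n1.children[0].children:        # pre-terminal over a word
                    memo[key] = lam
                else:                                  # recurse on aligned children
                    val = lam
                    for c1, c2 in zip(n1.children, n2.children):
                        val *= 1.0 + delta(c1, c2)
                    memo[key] = val
            return memo[key]

        return sum(delta(n1, n2) for n1, n2 in pairs)

    def combined_kernel(x1, x2, t1, t2, gamma=1.0):
        """Scalar (linear feature) kernel plus a weighted tree kernel."""
        scalar = sum(a * b for a, b in zip(x1, x2))
        return scalar + gamma * tree_kernel(t1, t2)

    # Toy usage: two noun-phrase fragments sharing the 'NP -> DT NN'
    # production and the word 'the'.
    np1 = Node("NP", [Node("DT", [Node("the")]), Node("NN", [Node("cat")])])
    np2 = Node("NP", [Node("DT", [Node("the")]), Node("NN", [Node("dog")])])
    print(tree_kernel(np1, np2))                       # 0.4*(1+0.4) + 0.4 = 0.96

The grouping of nodes by production plays the role of FTK's sorted node-pair lists: Delta is never evaluated where it would be zero, and memoization avoids recomputing shared subproblems; the combined kernel simply adds a scalar feature kernel and a tree kernel, one plausible form of the kernel combination the conclusions recommend.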