<?xml version="1.0" standalone="yes"?>
<Paper uid="N04-1017">
<Title>Lattice-Based Search for Spoken Utterance Retrieval</Title>
<Section position="6" start_page="0" end_page="0" type="concl">
<SectionTitle>
5 Conclusion
</SectionTitle>
<Paragraph position="0"> We proposed an indexing procedure for spoken utterance retrieval that works on ASR lattices rather than just single-best text. We demonstrated that this procedure can improve maximum F-measure by over five points compared to single-best retrieval on tasks with poor WER and low redundancy. The representation is flexible, allowing us to represent both word lattices and phone lattices, the latter being important for improving performance when searching for phrases containing OOV words. [Figure: results on different tasks, Broadcast News (+), Switchboard (x), and Teleconferences (o), comparing best word hypotheses (single points), word lattices (solid lines), and word and phone lattices (dashed lines).]</Paragraph>
<Paragraph position="1"> It is important to note that spoken utterance retrieval for conversational speech has different properties than spoken document retrieval for broadcast news. Although consistent improvements were observed on a variety of tasks including Broadcast News, the procedure proposed here is most beneficial for more difficult conversational speech tasks like Switchboard and Teleconferences.</Paragraph>
</Section>
</Paper>
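Editor's note: the conclusion above summarizes the lattice-indexing idea only at a high level. The sketch below is an illustrative approximation, not the paper's implementation: it assumes each ASR lattice is supplied as a per-utterance list of (word, posterior) arcs, and the function names, data layout, and posterior threshold are all hypothetical.

```python
# Illustrative sketch of indexing ASR lattice arcs instead of single-best text.
# Assumed input format: {utterance_id: [(word, posterior), ...]}.
from collections import defaultdict

def index_lattices(lattices):
    """Build an inverted index: word -> {utterance_id: summed arc posterior}."""
    index = defaultdict(lambda: defaultdict(float))
    for utt_id, arcs in lattices.items():
        for word, posterior in arcs:
            # Accumulate the expected count of the word in this utterance.
            index[word][utt_id] += posterior
    return index

def search(index, query_words, threshold=0.1):
    """Return utterances whose accumulated posterior exceeds the threshold for every query word."""
    results = None
    for word in query_words:
        hits = {utt for utt, score in index.get(word, {}).items() if score >= threshold}
        results = hits if results is None else results & hits
    return sorted(results or [])

# Toy usage: two utterances, one matching both query words.
lattices = {
    "utt1": [("spoken", 0.9), ("utterance", 0.7), ("retrieval", 0.8)],
    "utt2": [("broadcast", 0.95), ("news", 0.9)],
}
idx = index_lattices(lattices)
print(search(idx, ["spoken", "retrieval"]))  # -> ['utt1']
```

Because low-posterior alternatives from the lattice are retained in the index, a query word that was missing from the single-best hypothesis can still retrieve the utterance, which is the behavior the conclusion credits for the gains on high-WER conversational speech; phone-lattice indexing for OOV words would follow the same pattern with phone-sequence keys.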