<?xml version="1.0" standalone="yes"?>
<Paper uid="W04-1709">
<Title>Sentence Completion Tests for Training and Assessment in a Computational Linguistics Curriculum</Title>
<Section position="6" start_page="0" end_page="0" type="concl">
<SectionTitle>6 Conclusions</SectionTitle>
<Paragraph position="0"> In training or assessment situations where correct answers do not consist of one (or a few) isolated items (words, numbers, symbols) but require a complete description in natural language, and where human tutors are not available, a SET is the right tool to use. It makes it possible to simulate, to some extent, the detailed comments on individual aspects of an answer that make human tutors so valuable.</Paragraph>
<Paragraph position="1"> While SETs are very useful once they have been written, the process of authoring them is still painful, demanding, error-prone, and thus extremely time-consuming. We will need authoring tools that allow a top-down kind of design for SETs, with stepwise refinement of the code and on-the-fly testing of selected parts of the FSA, instead of the low-level design process used now. It would also be very useful to have programs that work bottom-up, from possible answers to FSAs, by automatically identifying common phrases in answers and collecting them in boxes. We developed such a system and found it very useful, but its grammatical coverage is too small to make it viable in practice. The automatic creation of terminological variations in potential answers, by accessing on-line lexical resources, would be another feature that might make life easier for test developers. We are continuing work on all of these lines of research.</Paragraph>
</Section>
</Paper>
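<!--
Editor's illustration: the paragraph above mentions programs that work bottom-up,
from possible answers to FSAs, by identifying common phrases in answers. The
Python sketch below shows one minimal way such common-phrase identification could
start, as a hedged illustration only; the n-gram approach, the parameter choices
(min_n, max_n), and the sample answers are assumptions for demonstration and are
not the system described in the paper.

from itertools import chain

def ngrams(tokens, n):
    # All contiguous word n-grams of length n.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def common_phrases(answers, min_n=2, max_n=5):
    """Return phrases (as strings) that occur in every sample answer.

    Such shared phrases could seed the 'boxes' from which an answer FSA
    is assembled.
    """
    shared = None
    for answer in answers:
        tokens = answer.lower().split()
        grams = set(chain.from_iterable(ngrams(tokens, n)
                                        for n in range(min_n, max_n + 1)))
        shared = grams if shared is None else shared & grams
    return sorted(" ".join(g) for g in (shared or set()))

if __name__ == "__main__":
    # Hypothetical student answers to a question about finite-state automata.
    sample_answers = [
        "a finite state automaton accepts a regular language",
        "every finite state automaton accepts exactly one regular language",
    ]
    for phrase in common_phrases(sample_answers):
        print(phrase)

A real authoring aid would of course need linguistic knowledge (the paper notes
that the grammatical coverage of its prototype was too small), but even this
string-level intersection illustrates the bottom-up direction of work.
-->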