<?xml version="1.0" standalone="yes"?> <Paper uid="W04-1709"> <Title>Sentence Completion Tests for Training and Assessment in a Computational Linguistics Curriculum</Title> <Section position="2" start_page="0" end_page="0" type="intro"> <SectionTitle> 1 Introduction </SectionTitle> <Paragraph position="0"> Students of Computational Linguistics (CL) at the University of Zurich come from two different faculties, viz. the Faculty of Arts and the Faculty of Economics, Business Management and Information Technology. Their previous knowledge of linguistics and programming is therefore very uneven. The introductory lectures touch upon most aspects of CL but cannot compensate for these differences in a satisfactory way. We are trying to ease the problem by supplying students with extensive additional on-line reading material for individual study. However, until recently students had no way of testing the knowledge they acquired through self-study against the requirements of the courses. For this reason we developed web-based tools for individual training and self-assessment.</Paragraph> <Paragraph position="1"> Most assessments in web-based learning courses use Multiple Choice (MC) tests. These tests are easy for authors to create and easy for students to use. Unfortunately, the MC concept imposes a very restrictive format on the tests: they can basically probe only the presence or absence of small "knowledge bites". More general and abstract types of knowledge are hard to test by means of MC.</Paragraph> <Paragraph position="2"> Free-form text tests, i.e. tests allowing replies in the form of mini-essays, are, of course, far less restrictive, but the cost of assessing them by hand is, in many institutional contexts, prohibitively high. Systems for reliable and consistent automatic assessment of free-form text are not yet available.
Those that exist either test writing style, or test the presence or absence in an essay of (explicit) terms or of (implicit) concepts (example: IEA; (Landauer et al., 1998, pp. 259-284)), or use a combination of surface lexical, syntactic, discourse, and content features (example: e-rater; (Burstein, 2003)). The system developers themselves showed that the most advanced of these systems, e-rater, can rather easily be tricked into awarding marks that are far too good, by exploiting some knowledge of the techniques the system uses (Powers et al., 2001). Since knowledge of the techniques used by rating systems can hardly be kept secret for any length of time, all such feature-based systems are open to this kind of trick.</Paragraph> <Paragraph position="3"> This is why we developed a new type of test, called "Satzergänzungstests" (SETs)1, positioned halfway between multiple-choice tests and free-form text tests. We use this type of test for training as well as for assessment, and it is part of our web-based curriculum.</Paragraph> <Paragraph position="4"> The development was funded by the University in view of the implementation of the Bachelor/Master/PhD-based "Bologna scheme" in most European universities (see (European Ministers of Education, 1999)).</Paragraph> <Paragraph position="5"> With SETs we are able to create far more demanding tasks for training and assessment than we could otherwise. The philosophy behind SETs will be presented in Section 2.</Paragraph> <Paragraph position="6"> 1 "Sentence Completion Tests".</Paragraph> <Paragraph position="7"> In Section 3 we will show how an individual student can use a test. In Section 4 we will show how tests are created. In Section 5, finally, we will give an overview of the courses in which we use these tests for teaching Computational Linguistics (CL), and discuss in which other contexts they could be used.</Paragraph> </Section> </Paper>