<?xml version="1.0" standalone="yes"?>
<Paper uid="W03-0202">
<Title>Learning to Identify Student Preconceptions from Text</Title>
<Section position="2" start_page="0" end_page="0" type="intro">
<SectionTitle> 1 Introduction </SectionTitle>
<Paragraph position="0"> We are building INFACT, a software system to support teachers in performing diagnostic assessment of their students' learning. Our work is guided by the principle that assessment should be a ubiquitous and unobtrusive part of the learning process. Since many learning experiences involve writing, we focus on the analysis of free natural language text and certain other representations of student expression and behavior. We also believe that rich assessment, which informs teachers about the belief states of their students, is a valuable addition to tests that yield a single numeric grade.</Paragraph>
<Paragraph position="1"> Research supported in part by the National Science Foundation under Grant ITR/PE 0121345.</Paragraph>
<Paragraph position="2"> There are several parts to our system, including an on-line textual forum for class discussions, an annotation interface for teachers, and tools for displaying assessment data in various formats. The philosophy behind the system is described in (Tanimoto et al., 2000). The system facilitates small-group discussions, which the teacher can monitor, intervening if there is an obvious impasse. An astute teacher with enough time can follow the discussions closely and observe as students make conceptual transitions.</Paragraph>
<Paragraph position="3"> A major motivation for the work described in this paper is to find a way to reduce the burden on teachers who want such diagnostic information but cannot afford the time needed to follow each discussion closely. Our system analyzes small selections of student writing, on the order of one or two sentences, and learns rules that can be used to identify common student preconceptions.</Paragraph>
<Paragraph position="4"> Our approach to partially automated analysis uses text markup rules consisting of patterns in a &quot;rule language&quot; and classifications that range from the general &quot;may be of interest&quot; to the specific &quot;suggests preconception P17.&quot; In addition to learning text markup rules for identifying preconceptions in online discussions, we are also learning rules for assessing short textual answers in an online diagnostic testing environment. This system poses questions to the student and uses the results to report student preconceptions to teachers and to recommend resources to the student. The system asks multiple-choice or numeric content questions and then, based on the response, asks a short-answer follow-up question that allows the student to explain their reasoning. In this paper, we describe the results of applying our rule-learning system to classifying the responses to these follow-up questions.</Paragraph>
<Paragraph position="5"> In the following sections, we discuss other work on automated essay grading; we then describe the language in which rules are represented in our system, followed by a description of the version space learning technique and our specific adaptations that allow it to learn text classification rules. Finally, we describe the empirical results of our experiments with this technique.</Paragraph>
</Section>
</Paper>