<?xml version="1.0" standalone="yes"?>
<Paper uid="N03-3003">
<Title>Language choice models for microplanning and readability</Title>
<Section position="3" start_page="0" end_page="0" type="intro">
<SectionTitle> SPELLING </SectionTitle>
<Paragraph position="0"> You finished the SPELLING test, well done.</Paragraph>
<Paragraph position="1"> You got eleven out of fifteen, so you need to practise. Sometimes you could not spell longer words. For example, you did not click on: necessary.</Paragraph>
<Paragraph position="2"> Many people find learning to spell hard, but you can do it. If you practise reading, then your skills will improve. Our research is focused on decisions GIRL makes at the discourse level. A previous project, PSET (Devlin and Tait 1998, Devlin et al. 2000), has already made some progress towards lexical-level and syntax-level simplifications for poor readers. In GIRL, it is at the discourse level that choices are made that affect sentence length and the selection of discourse cue phrases (phrases that render discourse relations explicit to the reader, e.g. 'for example', 'so' and 'if' in Figure 1). These choices are made in a module called the microplanner (see Reiter and Dale 2000).</Paragraph>
<Paragraph position="3"> The inputs to the microplanner are a model of a user's reading ability and a tree-structured document plan (Reiter and Dale 2000) that includes discourse relations. In GIRL, discourse relations are schemas arranged in a discourse tree structure. Each schema has slots for semantic roles filled by daughter text spans or daughter relations. For instance, the condition relation has two semantic roles: a condition and a consequent.</Paragraph>
<Paragraph position="4"> Figure 2 shows a discourse relation tree structure with its corresponding schema. The root relation, R1, is a concession (type: concession), with one daughter relation, R2, filling the 'concession' slot and a text span daughter, S1, filling the 'statement' slot.
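This schema-and-slot representation can be sketched as a minimal data structure. The sketch below is illustrative only: the class names, the second span's identifier, and the span texts are our assumptions, not GIRL's actual implementation.

```python
# Illustrative sketch of a discourse relation tree like Figure 2.
# Class names and the span id "S2" are assumed for this example.
from dataclasses import dataclass, field


@dataclass
class TextSpan:
    uid: str
    text: str


@dataclass
class Relation:
    uid: str
    rel_type: str  # e.g. "concession" or "condition"
    # maps a semantic role name to a daughter (a TextSpan or a Relation)
    slots: dict = field(default_factory=dict)


# R2: a condition relation with two text span daughters.
r2 = Relation("R2", "condition", {
    "condition": TextSpan("S3", "you practise"),
    "consequent": TextSpan("S2", "you can learn to fill in forms"),
})

# R1: the root concession relation; its 'concession' slot is itself
# filled by the daughter relation R2, and 'statement' by the span S1.
r1 = Relation("R1", "concession", {
    "statement": TextSpan("S1", "You made four mistakes"),
    "concession": r2,
})
```

The point of the sketch is that slots are uniform: a role may be filled by either a leaf text span or a whole daughter relation, which is what makes the tree recursive.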
R2 is a condition relation with two text span daughters: S3 filling the 'condition' slot and a second text span filling the 'consequent' slot. The task of GIRL's microplanner is to decide on the ordering of the daughters, how they should be packed into sentences (aggregation), whether there should be punctuation between the daughters, and whether discourse cue phrases should be present and, if so, which ones and where they should be placed. The microplanner will ultimately adapt these choices to the reading level of individual users (readers), using user models built from their answers to up to ninety questions from a literacy test. Our current implementation only considers two generic types of user: &quot;good readers&quot; and &quot;bad readers&quot;.</Paragraph>
<Paragraph position="5"> Suppose the input to the microplanner is a discourse plan containing the discourse relation tree in Figure 2. The microplanner should be able to calculate that this could be generated in a number of different ways. Just a few of them are: You made four mistakes. But you can learn to fill in forms if you practise.</Paragraph>
<Paragraph position="6"> Although you made four mistakes, you can learn to fill in forms ... just as soon as you practise.</Paragraph>
<Paragraph position="7"> You made four mistakes. But if you practise, you can learn to fill in forms.</Paragraph>
<Paragraph position="8"> If you practise, you can learn to fill in forms. You made four mistakes, though.</Paragraph>
<Paragraph position="9"> and it should be able to choose which of these is the most appropriate for poor readers.</Paragraph>
<Paragraph position="10"> The remainder of this paper describes what we believe is a novel approach to building language choice models for microplanning. We explain how these models evolved (section 2) and the implications of this design (section 3). Section 4 draws conclusions from the current work and outlines our plans for future work.</Paragraph>
</Section>
</Paper>