<?xml version="1.0" standalone="yes"?>
<Paper uid="J88-3002">
<Title>MODELING THE USER IN NATURAL LANGUAGE SYSTEMS</Title>
<Section position="33" start_page="0" end_page="0" type="ackno">
<SectionTitle> ACKNOWLEDGEMENTS </SectionTitle>
<Paragraph position="0"> This work was partially supported by grant ARMY/DAAG-29-84-K0061 from the Army Research Office, grant DARPA/ONR-N0001485-K-0807 from DARPA, and a grant from the Digital Equipment Corporation. student modeling for intelligent tutoring systems. Brown and Burton (1978), Sleeman (1982), and Johnson and Soloway (1984) describe just a few of the intelligent tutoring systems that profitably employ this idea.</Paragraph>
<Paragraph position="1"> 2. There is an unfortunate conflict in terminology here. Sparck Jones uses the term &quot;agent&quot; in the sense of an individual who performs a task for another. Thus for Sparck Jones the agent is the actual individual interacting with the system. Hence in our terminology the system may have agent models for both Sparck Jones's &quot;agent&quot; and &quot;patient,&quot; with the model for the individual Sparck Jones calls the &quot;agent&quot; actually being a user model. 3. Both the overlay and perturbation models were developed in work on intelligent tutoring systems. The overlay model was first defined by Carr and Goldstein (1977) and used in their Wumpus Advisor (WUSOR) user model, although Carbonell (1970) used an overlay technique in the SCHOLAR program, considered to be the first of the intelligent tutoring systems. A perturbation model was used by Brown and Burton (1978) to represent the bugs students exhibited in learning multicolumn subtraction, and has since been used by many others. See Sleeman and Brown (1982) for a collection of seminal papers on intelligent tutoring systems, or Kass (1987b) for a look at user modeling for intelligent tutoring systems.</Paragraph>
<Paragraph position="2"> 4.
This is how acceptance attitudes were implemented in VIE-DPM.</Paragraph>
<Paragraph position="3"> A wider range of values for the acceptance attitudes, such as a four-valued logic or numeric weights, could easily be used instead. 5. It is conceivable, though, that each interaction with an individual user might refine the generic model of all users in some way; such a user model would then converge on the &quot;average user&quot; after many sessions.</Paragraph>
<Paragraph position="4"> 6. The terms used in a user's statements also provide information about the beliefs of the user, but not as much as one might hope. At first glance, it seems that if the user makes use of a word, he has knowledge about the concept to which that word refers. Most of the time this is true. However, people will sometimes use a term that they really don't understand, simply because others have used it. Inferences based simply on the use of terms should be made with care (or with a low level of trust).</Paragraph>
<Paragraph position="5"> 7. A very clever system might even be able to incorporate questions from the user modeling module into questions from the application in an attempt to meet two needs simultaneously.</Paragraph>
<Paragraph position="6"> 8. The first three issues are suggested by Sridharan in Sleeman et al. (1985).</Paragraph>
</Section>
</Paper>