<?xml version="1.0" standalone="yes"?> <Paper uid="W98-1229"> <Title>Towards Language Acquisition by an Attention-Sharing Robot</Title> <Section position="4" start_page="0" end_page="0" type="metho"> <SectionTitle> 3 Implications of Autism </SectionTitle> <Paragraph position="0"> Attention-sharing is commonly seen in infants at the pre-verbal stage. Its development starts before 6 months old, and is completed at around 18 months old (Butterworth, 1991). Also in some non-human primates show attention-sharing (Itakura, 1996).</Paragraph> <Paragraph position="1"> Most of infants and children with autism do not show attention-sharing; being instructed by an experimenter, however, they can do it (Baron-Cohen, 1995). This means they axe unaware that one's gaze-direction implies his or her attentional target.</Paragraph> <Paragraph position="2"> Being unaware of others' attention, children with autism show typical disorders in verbal and non-verbal communication (Frith, 1989). Most of children with autism can not acquire language or use language properly. This is because (1) they failed in observing verbal (and also pragmatic) behavior of others, and (2) they failed in observing positive/ negative feedback for elaborating their hypothetic language models.</Paragraph> </Section> <Section position="5" start_page="0" end_page="0" type="metho"> <SectionTitle> 4 The Attention-Sharing Robot </SectionTitle> <Paragraph position="0"> We are developing a robot, Infanoid, as a test-bed for our model of attention-sharing. The robot is intended to create shared attention with humans in terms of monitoring their gaze-direction.</Paragraph> <Paragraph position="1"> The robot has a head, as shown in Figure 4, with four CCD cameras (left/right x zoom/wide) and servo motors to drive the &quot;eyes&quot; at the speed of human saccade. The images taken by the cameras are sent to a workstation for gaze-monitoring.</Paragraph> <Paragraph position="2"> The gaze-monitoring process consists of the following tasks, as also shown in Figure 5: (1) detect a face in a scene, (2) saccade to the face and switch to the zoom cameras, (3) detect eyes and determine the gaze-direction in terms of the position of the pupils, and (4) search for an object in the direction. If something relevant is found, the robot identifies it as the target.</Paragraph> <Paragraph position="3"> We have developed a prototype of the robot and the real-time face/eye detectors. W'e are now work- null (1) detect a face (3) capture gaze Figure 5.</Paragraph> <Paragraph position="4"> (2) saccade and zoom N (4) identify the target</Paragraph> <Section position="1" start_page="0" end_page="0" type="sub_section"> <SectionTitle> Gaze-monitoring process </SectionTitle> <Paragraph position="0"> ing on gaze capturing and target selection. Our preliminary study found that these tasks require some top-down information like the object's &quot;relevance&quot; (Sperber and Wilson, 1986) to the current context.</Paragraph> </Section> </Section> class="xml-element"></Paper>