<?xml version="1.0" standalone="yes"?>
<Paper uid="P98-2241">
<Title>A Preliminary Model of Centering in Dialog</Title>
<Section position="5" start_page="1475" end_page="1476" type="evalu">
<SectionTitle> 5 Results and analysis </SectionTitle>
<Paragraph position="0"> Table 2 summarizes our findings. Only 10 of 664 utterances violate Centering Rule 1, so centering theory's assumptions linking local focus to pronouns appear to hold in dialog. It is interesting to note that Model 1, which includes dialog participants as discourse entities, consistently performed best.</Paragraph>
<Paragraph position="1"> [Footnote: ... the dialog is used to clarify unclear annotation instructions. In this case, the annotators examined it to agree on which syntactic constituents would contribute Cf elements and on the criteria for breaking turns into utterances.]</Paragraph>
<Paragraph position="2"> [Footnote: More standard reliability measures could not be used, since there are no &quot;tags&quot; in this annotation scheme, and within some categories there may be an ordered list of items.]</Paragraph>
<Section position="1" start_page="1476" end_page="1476" type="sub_section">
<SectionTitle> 5.1 Empty Cb's </SectionTitle>
<Paragraph position="0"> Each of our models leaves at least 52% of non-empty utterances with no prediction of the Cb, because Cf(Un-1) and Cf(Un) are disjoint. [Footnote: 57% of Cb's in Model 1 are entities referred to via 1/2PPs.] Some empty Cb's result from abrupt topic shifts, while others occur when the speakers make topically related, but Cf-disjoint, contributions, such as the last line in:
Example 3 [dialog 4861]
A: I just want to figure out what I'm going to do with my life. I feel like I'm never going to figure it out.
B: Lizzy, you might not.
B: I haven't figured out mine yet.</Paragraph>
<Paragraph position="1"> In many cases, a Cb would exist if we modified the models to include associated and ellipsed entities in Cf. For instance, in Example 4, the ellipsed location in A's utterance should be the Cb:
Example 4 [dialog 4248]
B: ... I've been there Walt, yes, three times I think
A: Well this is our second time</Paragraph>
</Section>
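<Paragraph position="2"> To make the empty-Cb condition concrete, the following minimal sketch computes the Cb of an utterance as the highest-ranked element of Cf(Un-1) that is realized in Un, returning nothing when the two Cf lists are disjoint. This is an illustration, not the implementation used in this study; the list representation of ranked Cf's and the entity labels are assumptions:

def compute_cb(cf_prev, cf_curr):
    """cf_prev, cf_curr: Cf lists ordered by rank, Cp first."""
    realized = set(cf_curr)
    for entity in cf_prev:          # walk Cf(Un-1) in rank order
        if entity in realized:
            return entity           # highest-ranked shared entity is the Cb
    return None                     # Cf lists disjoint: empty Cb

# Toy rendering of Example 3 (entity labels invented): B's second
# utterance shares no Cf entities with B's first, so its Cb is empty.
cf_b1 = ["speaker-A"]                   # "Lizzy, you might not."
cf_b2 = ["speaker-B", "B's life plan"]  # "I haven't figured out mine yet."
print(compute_cb(cf_b1, cf_b2))         # prints None

</Paragraph>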
<Section position="2" start_page="1476" end_page="1476" type="sub_section">
<SectionTitle> 5.2 Cb Matches the 'real' topic </SectionTitle>
<Paragraph position="0"> For utterances where a Cb can be selected, it matches the 'real' topic only 21% to 35% of the time. By this measure, our models are poor predictors of local focus. For instance, in Example 5, the 'real' topic of the first utterance is Jackson, but according to Model 1 the set of entities referred to by &quot;we&quot; is the Cb of both utterances.
Example 5 [dialog 4248]
A: And like we went into Jackson, the town and / we were like - AAAHHH! let me out of here</Paragraph>
<Paragraph position="1"> The annotators' intuitions regarding the 'real' topic often conflicted. It would be interesting to annotate actor and discourse focus separately, then see which one the Cb most closely matches.</Paragraph>
</Section>
<Section position="3" start_page="1476" end_page="1476" type="sub_section">
<SectionTitle> 5.3 Cheap versus expensive transitions </SectionTitle>
<Paragraph position="0"> Strube and Hahn (1996) propose a method of evaluating a model against Centering Rule 2 by measuring the 'cost' of the listener's inference load. A transition is cheap if Cb(Un) = Cp(Un-1); otherwise it is expensive. Models with a large percentage of cheap transitions better reflect human notions of coherence. All three of our models produced a very low percentage of cheap transitions in this experiment, especially when compared to Strube and Hahn's result of 80%.</Paragraph>
</Section>
</Section>
</Paper>
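<Paragraph position="1"> To make the cost measure concrete, here is a minimal sketch of the cheap/expensive classification, again under the assumed list representation of ranked Cf's rather than the implementation used in this study:

def transition_cost(cb_curr, cf_prev):
    """Cheap iff Cb(Un) equals Cp(Un-1), per Strube and Hahn (1996)."""
    cp_prev = cf_prev[0] if cf_prev else None  # Cp is the top-ranked Cf element
    if cb_curr is not None and cb_curr == cp_prev:
        return "cheap"
    return "expensive"

def percent_cheap(history):
    """history: one (Cb, ranked Cf) pair per utterance, in order."""
    costs = [transition_cost(cb, prev_cf)
             for (_, prev_cf), (cb, _) in zip(history, history[1:])]
    return 100.0 * costs.count("cheap") / max(len(costs), 1)

# Loosely mirrors Example 5 under Model 1, with an invented ranking
# that puts "we" above "Jackson" in the first utterance's Cf.
history = [(None, ["we", "Jackson"]),   # U1: (Cb, ranked Cf)
           ("we", ["we"])]              # U2: Cb equals Cp of U1
print(percent_cheap(history))           # prints 100.0

</Paragraph>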