<?xml version="1.0" standalone="yes"?> <Paper uid="C90-3024"> <Title>A Computational Model for Arguments Understanding</Title> <Section position="2" start_page="134" end_page="136" type="metho"> <SectionTitle> 4. A Computational Model </SectionTitle> <Paragraph position="0"> Our model consists of several modules. Each module contributes to the general understanding process by providing a specific set of constraints resulting from the analysis of the input:
- the Conceptual Base contains all the domain conceptual relations. It is divided into several spaces: one for general common knowledge shared by all actors unless otherwise specified, and one space for each speaker to record his/her particular beliefs.</Paragraph> <Paragraph position="1"> - the Relation Finder derives appropriate relations from the conceptual knowledge represented in canonical form.</Paragraph> <Paragraph position="2"> - the Base of Linguistic Constraints describes each argumentative operator.</Paragraph> <Paragraph position="3"> - the Context Analyzer keeps track of the local and global topic of the conversation and the argumentative orientation of the current or previous segment, and incrementally builds the discourse structure.</Paragraph> <Paragraph position="4"> - the Argumentative Analyzer actually computes the argumentative orientation of an utterance or a complete turn, taking into account the contextual constraints as well as the linguistic constraints.
- the Learning Module is activated when a gap in the available conceptual knowledge makes it impossible to account for the coherence of the current turn in the dialogue. This module makes hypotheses about new relations and checks their plausibility and consistency with what is already known. The Learning Module is able to update the belief space of the current speaker.</Paragraph> <Section position="1" start_page="134" end_page="136" type="sub_section"> <SectionTitle> 4.1. Representation of Conceptual Knowledge </SectionTitle> <Paragraph position="0"> Conceptual knowledge consists essentially of facts and rules. Facts concern independent propositions, while rules describe argumentative relations between propositions. For instance:</Paragraph> <Paragraph position="2"> We use the operator &quot;opposite&quot; to denote the opposite of a proposition. This operator is not the logical negation, but we have the following rules of equivalence:
argument(against, A, B) is equivalent to argument(for, A, opposite(B))
argument(against, A, B) is equivalent to argument(for, opposite(A), B)
opposite(opposite(X)) is equivalent to X
Knowledge is distributed into different belief spaces. By default, general knowledge is shared by the actors. Knowledge about semantic hierarchies is independent from belief spaces.</Paragraph> <Paragraph position="3"> It is very important to insist that argumentative relations cannot be assimilated to logical operators and manipulated as such. The argumentative relation &quot;in favor of&quot; is not processed as a logical implication. Truth values do not matter very much for interpreting arguments, since we are mostly interested in the relations between propositions. In fact, the truth value of individual propositions matters all the less since, in general, nothing can be logically deduced from the combination of facts and argumentative rules.</Paragraph> <Paragraph position="4"> An argumentative rule is not a description of the set of conditions that must be met for a certain conclusion to be true. A rule only defines one argument for a conclusion: it is usually a partial argument. If this argument holds, there may at the same time be other arguments which hold and go against the same conclusion. This is the very source of any serious argumentative debate: opponents will raise arguments which are believed, by both, to hold, but which go in opposite directions concerning the point of the debate.</Paragraph>
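To make this representation concrete, the short Python sketch below is an editorial illustration, not part of the original paper: it encodes argumentative rules together with the &quot;opposite&quot; operator and applies the equivalence rules above, rewriting every rule into a canonical &quot;for&quot; form. The names Opposite, Argument, simplify and normalize are assumptions introduced for this example only.

# Illustrative sketch only: one possible encoding of argumentative rules and the
# "opposite" operator. Class and function names are assumptions, not the paper's.
from dataclasses import dataclass
from typing import Union

Prop = Union[str, "Opposite"]

@dataclass(frozen=True)
class Opposite:
    prop: "Prop"

@dataclass(frozen=True)
class Argument:
    polarity: str      # "for" or "against"
    premise: "Prop"
    conclusion: "Prop"

def simplify(p: "Prop") -> "Prop":
    # opposite(opposite(X)) is equivalent to X
    while isinstance(p, Opposite) and isinstance(p.prop, Opposite):
        p = p.prop.prop
    return p

def normalize(arg: Argument) -> Argument:
    # argument(against, A, B) is equivalent to argument(for, A, opposite(B))
    premise = simplify(arg.premise)
    conclusion = simplify(arg.conclusion)
    if arg.polarity == "against":
        return Argument("for", premise, simplify(Opposite(conclusion)))
    return Argument("for", premise, conclusion)

# A rule stating that a lot of work is an argument against going for a walk:
rule = Argument("against", "lot-of-work", "go-for-a-walk")
print(normalize(rule))
# Argument(polarity='for', premise='lot-of-work', conclusion=Opposite(prop='go-for-a-walk'))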
<Paragraph position="5"> For instance, nice weather is surely a good argument for going for a walk, though it is not a sufficient condition for taking such a decision. A lot of work to do is a very good argument against the suggestion of a walk. Both &quot;nice weather&quot; and &quot;a lot of work&quot; can hold together, and no valid reasoning can conclude whether or not to go for a walk. A speaker could express both facts in a discourse: what we need in order to understand his/her point is information about which fact is held to be the stronger argument.</Paragraph> <Paragraph position="6"> The need for ways to compare the relative strength of arguments illustrates once again the inappropriateness of a logical model for handling the process of understanding arguments. It is the relative strength of propositions towards a certain conclusion which determines the outcome of the discourse. The predicate stronger asserts the relative strength of arguments towards the same conclusion. It takes three arguments: the two propositions to be compared and the conclusion intended by these two propositions. The predicate stronger-opp asserts the relative strength of arguments towards opposite conclusions. It takes three arguments: the two propositions to be compared and the conclusion intended by the first one (while the second proposition intends the opposite of the given conclusion). For instance:
stronger(need-exercise, nice-weather, go-for-a-walk)
stronger-opp(lot-of-work, nice-weather, opposite(go-for-a-walk))
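As a hedged illustration (again an editorial sketch, not taken from the paper), the Python fragment below shows one way such stronger and stronger-opp assertions could be stored and consulted to decide which of two facts prevails with respect to a conclusion. The set names STRONGER and STRONGER_OPP and the function prevails are assumptions made for this sketch.

# Illustrative sketch only (not the paper's mechanism): consulting stronger and
# stronger-opp assertions to decide which of two facts prevails for a conclusion.

# stronger(p, q, c): towards the same conclusion c, p is a stronger argument than q.
STRONGER = {("need-exercise", "nice-weather", "go-for-a-walk")}

# stronger-opp(p, q, c): p argues for c, q argues for opposite(c), and p is stronger.
STRONGER_OPP = {("lot-of-work", "nice-weather", ("opposite", "go-for-a-walk"))}

def prevails(p, q, conclusion_of_p):
    """Return True if proposition p is asserted to outweigh proposition q,
    where conclusion_of_p is the conclusion that p argues for."""
    return ((p, q, conclusion_of_p) in STRONGER
            or (p, q, conclusion_of_p) in STRONGER_OPP)

# "A lot of work" argues for opposite(go-for-a-walk) and outweighs "nice weather",
# which argues for go-for-a-walk, so the discourse is oriented against the walk.
print(prevails("lot-of-work", "nice-weather", ("opposite", "go-for-a-walk")))  # True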
4.2. Representation of Linguistic Knowledge
Our model uses first order logic to describe relations and constraints. We represent the knowledge attached to argumentative operators as a list of local constraints which are satisfied when the operator is used. For but and almost, we have for instance:
is used to assert the final orientation of an expression containing an operator or a connector. The orientation is given as a propositional content. The constraints which are not explicitly present when the expression is uttered are assumed to be asserted at the time of the utterance.</Paragraph> </Section> <Section position="2" start_page="136" end_page="136" type="sub_section"> <SectionTitle> 4.3. Representation of Discourse Structure </SectionTitle> <Paragraph position="0"> The input and output use the same basic data structure, which is a complete description of the dialogue. The structure is augmented as constraints are taken into account and conclusions are found. Descriptions use a feature list format.</Paragraph> <Paragraph position="1"> The dialogue is described as a hierarchy, according to its segmentation into turns (the complete intervention of one speaker) and individual utterances. Initially, the structure only contains input information about the first utterance.</Paragraph> <Paragraph position="2"> The hierarchical structure is then built incrementally. Information is added as soon as it is available, as a result of the analyses performed on the input. Within the discourse structure, at each level, the topic and the argumentative orientation are recorded.</Paragraph> <Paragraph position="3"> 4.4. Algorithm for the Analysis of Arguments
The analysis of a dialogue is performed as an incremental process. The basic algorithm consists of the following steps:
- listing the contextual constraints
- listing the linguistic constraints resulting from the use of clue words
- searching for argumentative relations coherent with the previous constraints
- computing the argumentative orientation
It is extended to include the computation of contextual constraints and the derivation and learning of new conceptual relations. We keep track of a global topic as well as a local topic, often identified with the argumentative orientation of the current segment. An analysis is first attempted using the available concepts; if it fails, the hypothesis mechanism is activated. Hypotheses added to a belief space can later be retracted to satisfy global coherence. Hypotheses may be made about missing conceptual knowledge, even when these new relations are incompatible with default common knowledge, as long as the process results in a global interpretation which accounts for the coherence of the current utterances. The plausibility and consistency of new hypotheses are checked by looking for possible contradictions with existing knowledge, interpreting argumentative relations as logical implications for this task.</Paragraph> </Section> </Section> </Paper>