<?xml version="1.0" standalone="yes"?> <Paper uid="P04-2006"> <Title>iSTART: Paraphrase Recognition</Title> <Section position="4" start_page="0" end_page="0" type="metho"> <SectionTitle> 3 Recognition Model </SectionTitle> <Paragraph position="0"> To recognize paraphrasing, we convert natural language sentences into Conceptual Graphs (CG; Sowa, 1983, 1992) and then compare two CGs for matches according to paraphrasing patterns.</Paragraph> <Paragraph position="1"> The matching process finds as many &quot;concept-relation-concept triplet&quot; matches as possible. A triplet match means that a triplet from the student's input matches a triplet from the given sentence. In particular, the left-concept, right-concept, and relation of both sub-graphs have to be exactly the same, the same under a transformation based on synonymy (or another relation defined in WordNet), or the same because of idiomatic usage. It is also possible that several triplets of one sentence together match a single triplet of the other. At the end of this pattern matching, a summary result is provided: total paraphrasing matches, unparaphrased information, and additional information (not appearing in the given sentence).</Paragraph> <Section position="1" start_page="0" end_page="0" type="sub_section"> <SectionTitle> 3.1 Conceptual Graph Generation </SectionTitle> <Paragraph position="0"> A natural language sentence is converted into a conceptual graph using the Link Grammar parser.</Paragraph> <Paragraph position="1"> This process mainly requires mapping one or more Link connector types into a relation of the conceptual graph.</Paragraph> <Paragraph position="2"> A parse from the Link Grammar consists of triplets: a starting word, an ending word, and a connector type between the two words. For example, [1 2 (Sp)] means that word-1 connects to word-2 with a subject connector, i.e., word-1 is the subject of word-2. The sentence &quot;A walnut is eaten by a monkey&quot; is parsed as follows:</Paragraph> <Paragraph position="4"> We then convert each Link triplet into a corresponding CG triplet. The two words of the Link triplet become two concepts of the CG.</Paragraph> <Paragraph position="5"> To decide whether to put a word on the left or the right side of the CG triplet, we define a mapping rule for each Link connector type. For example, a Link triplet [1 2 (S*)] will be mapped to the 'Agent' relation, with word-2 as the left-concept and word-1 as the right-concept: [Word-2] -> (Agent) -> [Word-1]. Sometimes it is necessary to consider several Link triplets in generating a single CG triplet. The CG of the previous example is shown below; each line (numbered 0-7) shows a Link triplet and its corresponding CG triplet. These will be used in the recognition process. The '#S#' and '#M#' markers indicate single and multiple mapping rules, respectively.</Paragraph>
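To make the mapping step concrete, the following C++ sketch applies a single ('#S#') mapping rule to one Link triplet. It is not the actual iSTART generator: the connector names, relation names, and rule table are illustrative assumptions.

#include <iostream>
#include <map>
#include <string>

// A Link Grammar triplet: two linked words and the connector type between them.
struct LinkTriplet { std::string leftWord, rightWord, connector; };

// A Conceptual Graph triplet: [leftConcept] -> (relation) -> [rightConcept].
struct CGTriplet { std::string leftConcept, relation, rightConcept; };

// A single ('#S#') mapping rule: the CG relation a connector becomes, and
// whether the two Link words are swapped when filling the CG concepts.
struct MappingRule { std::string relation; bool swapWords; };

int main() {
    // Hypothetical rule table; the real system defines a rule per connector type.
    std::map<std::string, MappingRule> rules = {
        { "S", { "Agent",   true  } },   // subject connector: the verb becomes the left concept
        { "O", { "Patient", false } },   // object connector
    };

    LinkTriplet link{ "monkey", "eats", "S" };   // from "a monkey eats a walnut"

    auto it = rules.find(link.connector);
    if (it != rules.end()) {
        const MappingRule& rule = it->second;
        CGTriplet cg{ rule.swapWords ? link.rightWord : link.leftWord,
                      rule.relation,
                      rule.swapWords ? link.leftWord  : link.rightWord };
        std::cout << '[' << cg.leftConcept << "] -> (" << cg.relation
                  << ") -> [" << cg.rightConcept << "]\n";   // [eats] -> (Agent) -> [monkey]
    }
    return 0;
}

A multiple ('#M#') rule would instead inspect several Link triplets at once before emitting one CG triplet.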
</Section> <Section position="2" start_page="0" end_page="0" type="sub_section"> <SectionTitle> 3.2 Paraphrase Recognition </SectionTitle> <Paragraph position="0"> We illustrate our approach to paraphrase pattern recognition on single sentences: using synonyms (single-word or compound-word synonyms and idiomatic expressions), changing the voice, using a different word form, breaking a long sentence into smaller sentences, substituting a definition for a word, and changing the sentence structure.</Paragraph> <Paragraph position="1"> Preliminaries: Before we start the recognition process, we assume that we have all the necessary information about the text: each sentence has its content words identified (excluding such 'stop words' as a, an, the, etc.), and each content word has a definition together with a list of synonyms, antonyms, and other relations provided by WordNet (Fellbaum, 1998). To prepare a given text and sentence, we plan to have an automated process that generates the necessary information, together with manual intervention to verify and rectify the automated result if necessary.</Paragraph> <Paragraph position="2"> Single-Word Synonyms: First we discover that both CGs have the same pattern, and then we check whether the words in the same position are synonyms. Example: &quot;Jenny helps Kay&quot; [Help] -> (Agent) -> [Person: Jenny] + -> (Patient) -> [Person: Kay] vs.</Paragraph> <Paragraph position="3"> &quot;Jenny assists Kay&quot; [Assist] -> (Agent) -> [Person: Jenny] + -> (Patient) -> [Person: Kay] Compound-Word Synonyms: In this case, we need to be able to match a word and its compound-word synonym. For example, 'install' has 'set up' and 'put in' as its compound-word synonyms. The compound words are declared by the parser program, and their CGs are pre-generated during the preliminary processing.</Paragraph> <Paragraph position="4"> Idiomatic Expressions: The CG of an idiomatic expression is likewise pre-generated during the preliminary process. For example, the phrase 'give someone a hand' means 'help'. The preliminary process will generate the corresponding conceptual graph. In this example, one might say that a 'hand' might be an actual (physical) hand rather than part of a synonym phrase for 'help'. To reduce this particular ambiguity, analysis of the context may be necessary.</Paragraph> <Paragraph position="5"> Voice: Even if the voice of a sentence is changed, it will have the same CG. For example, both &quot;Jenny helps Kay&quot; and &quot;Kay is helped by Jenny&quot; have the same graph: [Help] -> (Agent) -> [Person: Jenny] + -> (Patient) -> [Person: Kay] At this time we assume that if two CGs are exactly the same, the student has paraphrased by changing the voice. However, we plan to introduce a modified conceptual graph that retains the original sentence structure so that we can verify that the student paraphrased by changing the voice rather than simply copying.</Paragraph> <Paragraph position="6"> Part-of-speech: A paraphrase can be generated by changing the part of speech of some keywords. In the following example, the student uses &quot;a historical life story&quot; instead of &quot;life history&quot;, and 'similarity' instead of 'similar'. Original sentence: &quot;All thunderstorms have a similar life history.&quot; Student's Explanation: &quot;All thunderstorms have similarity in their historical life story.&quot; To find this paraphrasing pattern, we look for the same word, or a word that has the same base form. In this example, the sentences share the same base form for 'similar' and 'similarity' as well as for 'history' and 'historical'.</Paragraph>
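A minimal sketch of the word-level test that the synonym and part-of-speech patterns rely on: two concepts are taken to match if they are identical, listed as synonyms, or share a base form. The small synonym table and the crude suffix-stripping routine are stand-ins for WordNet lookups and real morphological analysis.

#include <iostream>
#include <map>
#include <set>
#include <string>

// Stand-in for a WordNet synonym lookup (illustrative entries only).
static const std::map<std::string, std::set<std::string>> kSynonyms = {
    { "help",    { "assist", "aid" } },
    { "history", { "account" } },
};

// Very crude base-form reduction standing in for real morphology:
// repeatedly strips a few derivational/inflectional suffixes.
std::string baseForm(std::string w) {
    bool changed = true;
    while (changed) {
        changed = false;
        for (const std::string& suf : { "ical", "ity", "ar", "y" }) {
            if (w.size() > suf.size() &&
                w.compare(w.size() - suf.size(), suf.size(), suf) == 0) {
                w.erase(w.size() - suf.size());
                changed = true;
                break;
            }
        }
    }
    return w;
}

bool listedSynonyms(const std::string& a, const std::string& b) {
    auto it = kSynonyms.find(a);
    return it != kSynonyms.end() && it->second.count(b) > 0;
}

// Concepts match if identical, synonymous in either direction, or sharing a base form.
bool conceptsMatch(const std::string& a, const std::string& b) {
    return a == b || listedSynonyms(a, b) || listedSynonyms(b, a) || baseForm(a) == baseForm(b);
}

int main() {
    std::cout << conceptsMatch("help", "assist") << '\n';        // 1: synonym
    std::cout << conceptsMatch("history", "historical") << '\n'; // 1: same base form
    std::cout << conceptsMatch("similar", "similarity") << '\n'; // 1: same base form
    std::cout << conceptsMatch("help", "hand") << '\n';          // 0: no match here
    return 0;
}

In the full system the same test would also consult antonyms, hyponyms, and hypernyms, as described in Section 5.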
<Paragraph position="7"> Breaking a long sentence: A sentence can be explained by several smaller sentences coupled together, each covering a part of the original sentence. We integrate the CGs of all sentences in the student's input before comparing the result with the original sentence (a sketch of this merging is given at the end of this subsection).</Paragraph> <Paragraph position="8"> Original sentence: &quot;All thunderstorms have a similar life history.&quot; We will provisionally assume that the student uses only words that appear in the sentence in this breaking-down process. One solution is to combine the graphs from all sentences together. This can be done by merging graph nodes that carry the same concept. This process involves pronoun resolution. In this example, 'it' could refer to 'life' or 'history'. Our plan is to exercise all possible pronoun references and select the one that gives the best paraphrasing recognition result.</Paragraph> <Paragraph position="9"> Definition/Meaning: A CG is pre-generated for the definition of each word and its associations (synonyms, idiomatic expressions, etc.). To find the paraphrasing pattern of using a definition, we build a CG for the definition; for example, 'history' means &quot;the continuum of events occurring in succession leading from the past to the present and even into the future&quot;, and its CG is shown below:</Paragraph> <Paragraph position="11"> We refine this CG by incorporating the CGs of the definition into a single integrated CG, if possible.</Paragraph> <Paragraph position="13"> From WordNet 2.0, the synonyms of 'past', 'present', and 'future' are found to be &quot;begin, start, beginning process&quot;, &quot;middle, go through, middle process&quot;, and &quot;end, last, ending process&quot;, respectively. The following example shows how they can be used in recognizing paraphrases.</Paragraph> <Paragraph position="14"> Original sentence: &quot;All thunderstorms have a similar life history.&quot; Student's Explanation: &quot;... the same things, and end the same way.&quot;</Paragraph> <Paragraph position="16"> From this CG, we found the use of 'begin', 'go through', and 'end', which are parts of the CG of history's definition. These, together with the correspondence of words in the sentences, show that the student has paraphrased by using a definition of 'history' in the self-explanation.</Paragraph> <Paragraph position="17"> Sentence Structure: The same thing can be said in a number of different ways. For example, to say &quot;There is someone happy&quot;, we can say &quot;Someone is happy&quot;, &quot;A person is happy&quot;, or &quot;There is a person who is happy&quot;, etc. As can easily be seen, all of these sentences have a similar CG triplet, &quot;[Person: $] -> (Char) -> [Happy]&quot;, in their CGs. However, we cannot simply say that they are paraphrases of each other; possible solutions therefore need further study.</Paragraph>
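The following C++ sketch illustrates the graph merging used by the breaking-a-long-sentence pattern, under simplifying assumptions: a CG is stored as a list of triplets, nodes with identical concept labels denote the same node, and pronouns have already been resolved. The example CGs are invented for illustration and are not taken from the iSTART system.

#include <iostream>
#include <string>
#include <vector>

// A CG triplet: [left] -> (relation) -> [right].
struct Triplet { std::string left, relation, right; };
using Graph = std::vector<Triplet>;

bool contains(const Graph& g, const Triplet& t) {
    for (const Triplet& u : g)
        if (u.left == t.left && u.relation == t.relation && u.right == t.right)
            return true;
    return false;
}

// Merge the CGs of several student sentences into one graph. Because nodes with
// the same concept label are treated as the same node, triplets repeated across
// sentences collapse into a single triplet in the merged graph.
Graph mergeGraphs(const std::vector<Graph>& graphs) {
    Graph merged;
    for (const Graph& g : graphs)
        for (const Triplet& t : g)
            if (!contains(merged, t))
                merged.push_back(t);
    return merged;
}

int main() {
    // Invented CGs for two short student sentences, after pronoun resolution.
    Graph s1 = { { "Have", "Agent", "Thunderstorm" }, { "Have", "Patient", "Life" } };
    Graph s2 = { { "Have", "Agent", "Life" },         { "Have", "Patient", "History" } };

    for (const Triplet& t : mergeGraphs({ s1, s2 }))
        std::cout << '[' << t.left << "] -> (" << t.relation << ") -> [" << t.right << "]\n";
    return 0;
}

The merged graph can then be compared against the CG of the original sentence with the same triplet matching used for single sentences.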
</Section> <Section position="3" start_page="0" end_page="0" type="sub_section"> <SectionTitle> 3.3 Similarity Measure </SectionTitle> <Paragraph position="0"> The similarity between the student's input and the given sentence can be categorized into one of four cases: 1. Complete paraphrase without extra info. 2. Complete paraphrase with extra info. 3. Partial paraphrase without extra info. 4. Partial paraphrase with extra info. To distinguish between 'complete' and 'partial' paraphrasing, we will use the triplet matching result. What counts as complete depends on the context in which the paraphrasing occurs. If we consider paraphrasing as a writing technique, 'complete' paraphrasing means that all triplets of the given sentence are matched by triplets in the student's input; conversely, if any triplet of the given sentence has no match, the student is at best 'partially' paraphrasing. On the other hand, if we consider paraphrasing as a reading behavior or strategy, 'complete' paraphrasing may not require all triplets of the given sentence to be matched. Hence, recognizing which part of the student's input paraphrases which part of the given sentence is significant. How can we tell that an explanation is an adequate paraphrase? Can we use the information provided in the given sentence as a measure? If so, how? These questions still need to be answered.</Paragraph>
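A minimal sketch of how the triplet matching result could be turned into the four cases above (hypothetical names, not the actual iSTART code). The matching predicate here is plain equality, whereas the full system also accepts synonym, base-form, idiom, and definition-based matches.

#include <iostream>
#include <string>
#include <vector>

struct Triplet { std::string left, relation, right; };

// Placeholder triplet match: exact equality of both concepts and the relation.
bool tripletsMatch(const Triplet& a, const Triplet& b) {
    return a.left == b.left && a.relation == b.relation && a.right == b.right;
}

std::string classify(const std::vector<Triplet>& given, const std::vector<Triplet>& student) {
    // How many triplets of the given sentence are covered by the student's input?
    int matchedGiven = 0;
    for (const Triplet& g : given)
        for (const Triplet& s : student)
            if (tripletsMatch(g, s)) { ++matchedGiven; break; }

    // How many of the student's triplets are grounded in the given sentence?
    int matchedStudent = 0;
    for (const Triplet& s : student)
        for (const Triplet& g : given)
            if (tripletsMatch(s, g)) { ++matchedStudent; break; }

    bool complete = matchedGiven == static_cast<int>(given.size());     // no unparaphrased info
    bool extra    = matchedStudent < static_cast<int>(student.size());  // additional info present

    if (complete) return extra ? "complete paraphrase with extra info"
                               : "complete paraphrase without extra info";
    return extra ? "partial paraphrase with extra info"
                 : "partial paraphrase without extra info";
}

int main() {
    std::vector<Triplet> given   = { { "Help", "Agent", "Jenny" }, { "Help", "Patient", "Kay" } };
    std::vector<Triplet> student = { { "Help", "Agent", "Jenny" } };
    std::cout << classify(given, student) << '\n';   // partial paraphrase without extra info
    return 0;
}

Counting matches in both directions separates unparaphrased information in the given sentence from additional information in the student's input, mirroring the summary described in Section 3.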
</Section> </Section> <Section position="5" start_page="0" end_page="0" type="metho"> <SectionTitle> 4 Related Work </SectionTitle> <Paragraph position="0"> A number of researchers have worked on paraphrasing, such as multilingual translation recognition by Smith (2003), multilingual sentence generation by Stede (1996), paraphrasing in a universal model using transformations by Murata and Isahara (2001), and DIRT, which uses inference rules in question answering and information retrieval, by Lin and Pantel (2001). Due to space limitations we mention only a few related works.</Paragraph> <Paragraph position="1"> ExtrAns (Extracting answers from technical texts) by Molla et al. (2003) and Rinaldi et al. (2003) uses minimal logical forms (MLF) to represent both texts and questions. They identify terminological paraphrases by using a term-based hierarchy with synonyms and variations, and syntactic paraphrases by constructing a common representation for different types of syntactic variation via meaning postulates. When no paraphrase is found, they loosen the criteria by using hyponyms, finding the highest overlap of predicates, and falling back to simple keyword matching.</Paragraph> <Paragraph position="2"> Barzilay and Lee (2003) also identify paraphrases in their paraphrased sentence generation system. They first find different paraphrasing rules by clustering sentences in comparable corpora using n-gram word overlap. Then, for each cluster, they use multiple-sequence alignment to find intra-cluster paraphrasing rules, either morpho-syntactic or lexical patterns. To identify inter-cluster paraphrasing, they compare the slot values without considering word ordering.</Paragraph> <Paragraph position="3"> In our system, sentences are represented by conceptual graphs. Paraphrases are recognized through idiomatic expressions, definitions, and sentence break-up. Morpho-syntactic variations are also used, but in a more general way than the term-hierarchy-based approach of ExtrAns.</Paragraph> </Section> <Section position="6" start_page="0" end_page="0" type="metho"> <SectionTitle> 5 Preliminary Implementation </SectionTitle> <Paragraph position="0"> We have implemented two components that recognize paraphrasing with the CG for a single simple sentence: the Automated Conceptual Graph Generator and the Automated Paraphrasing Recognizer.</Paragraph> <Paragraph position="1"> Automated Conceptual Graph Generator: a C++ program that calls the Link Grammar API to obtain the parse of the input sentence and generates a CG from it. We can generate a CG for a simple sentence using the first linkage result. Future versions will deal with complex sentence structures as well as multiple linkages, so that we can cover most paraphrases.</Paragraph> <Paragraph position="2"> Automated Paraphrasing Recognizer: The input to the Recognizer is a pair of CGs: one from the original sentence and another from the student's explanation. Our goal is to recognize whether any paraphrasing was used and, if so, which paraphrasing pattern. Our first implementation is able to recognize paraphrasing of a single sentence by exact match, direct synonym match, first-level antonym match, and hyponym and hypernym match. We plan to cover more of the relationships available in WordNet as well as definitions, idioms, and logically equivalent expressions. Currently, a voice difference is treated as an exact match because the active and passive versions have the same CG and we have not yet modified the conceptual graph as indicated above.</Paragraph> </Section> </Paper>