<?xml version="1.0" standalone="yes"?>
<Paper uid="J96-4004">
  <Title>A Statistically Emergent Approach for Language Processing: Application to Modeling Context Effects in Ambiguous Chinese Word Boundary Perception</Title>
  <Section position="3" start_page="0" end_page="534" type="metho">
    <SectionTitle>
2. Ambiguous Chinese Word Boundary Perception
</SectionTitle>
    <Paragraph position="0"> A written Chinese sentence consists of a series of evenly spaced Chinese characters.</Paragraph>
    <Paragraph position="1"> Each character corresponds to one syllable. A word in Chinese can be made up of a single character, such as OK f?m 'rice', or it can be a combination of two or more  Gan, Palmer, and Lua A Statistically Emergent Approach characters, such as ~ shu~gu6 'fruit'. It is possible that the component characters of a word are fre@, such as ;~ shut and ~ gu6 of the word ;q&lt;~ shu~gu6 'fruit', which mean 'water' and 'fruit' respectively. For any two Chinese characters in a sentence, denoted as x and y, if xy cannot be combined together to function as a word, a single word boundary exists between these two characters. If x and y can be constituents of the same word, yet at the same time may also be free, then word boundary ambiguity exists in these two characters. If there is a unique word boundary before x and after y, we refer to the ambiguity existing in xy as a combination ambiguity. On the other hand, if there is a word boundary ambiguity between the characters xy and the character that precedes or follows them, say z, and these three characters can be grouped into either xy z or x yz, then we say that an overlap ambiguity exists. A sentence that allows an ambiguous fragment to have multiple word boundaries will end up with more than one interpretation. This type of ambiguity is called global ambiguity with respect to the sentence. On the other hand, if only one way of segmenting the word boundary of an ambiguous fragment is allowed in a sentence, we call this local ambiguity with respect to the sentence. Global ambiguity can only be resolved with discourse knowledge. An example for each category is shown in (1) to (4). 2 Throughout this paper, we follow the guidelines on Chinese word segmentation adopted in China. 3</Paragraph>
    <Paragraph position="3"> zh~ w~i zhiyudn gOngzu6 de yali hOn da this CL 4 worker work STRUC 5 pressure very great 'This worker faces great pressure in his work.' The underlined fragment ~12~ yudn gOngzuD in (1) has overlap, local ambiguity. The middle character I gDng can combine with the previous character ~ yudn to form the word HI yudngong 'worker', leaving the third character functioning as a monosyllabic word ~ zuD 'do'. The middle character can also combine with the next character to form the word X2~ gongzuD 'work', leaving the first character alone. The sentence containing this fragment allows only one way of segmenting the word boundary, which is shown in (1). The character ~ yudn combines with the character preceding it, lI~ zhi, to form the bisyllabic word ~ zhiyudn 'worker', and the two characters 32 gong and ~ zuD form a word.</Paragraph>
    <Paragraph position="4"> Overlap, Global Ambiguity (2)a.</Paragraph>
    <Paragraph position="5"> w?Smen y~o xuesh~ng hu6 d6 y6u yiyi we want student live CSC 6 have meaning 'We want our students to have a meaningful life.'  Computational Linguistics Volume 22, Number 4 b.</Paragraph>
    <Paragraph position="6"> w{~men y?~o xuf sh~nghu6 d6 y6u yiyi we want learn life CSC have meaning 'We want to learn how to lead a meaningful life.' The fragment _~e_~ xud shdng hu6 also has overlap ambiguity, where the middle character can either combine with the first character to form a word, or combine with the last character to form a word. The sentence containing this fragment has two plausible interpretations as shown in (2a) and (2b). Both alternations: ~ ~ xudshdng hu6 'student live' (2a) and -~ ~41.~ xu~ sh~nghu6 'learn life' are acceptable. Combination, Local Ambiguity</Paragraph>
    <Paragraph position="8"> nr de bidoqing shff~n hudj~ you STRUC look very funny 'You look very funny.' In (3), the two characters in the fragment nt'~ shif~n can either function as two autonomous words q~ shi 'ten' and ~ f~n 'mark', or they can combine together to function as a bisyllabic word ff'~ shif~n 'very'. Given the sentential context of (3), however, only the second alternation is correct.</Paragraph>
    <Paragraph position="9"> Combination, Global Ambiguity (4)a. ~ ~ ~l~ ~__ \]~ w6men dou h~n n~n gub we all very hard live 'We all have a hard life.' b.</Paragraph>
    <Paragraph position="10"> wfimen dou h~n ngmgub we all very sad 'We all feel very sad.' The fragment \]~\]~ ngmgub also has combination ambiguity. It differs from (3) in that the sentence in which it appears has two plausible interpretations. Hence, this fragment can either be segmented as ~l~ ndm 'hard' and i~ gub 'live' in (4a), or as ~i~ n~ngub 'sad' in (4b).</Paragraph>
    <Paragraph position="11"> Word boundary ambiguity is a very common phenomenon in written Chinese, due to the fact that a large number of words in modem Chinese are formed from free characters (Chao 1957). The problem also exists in continuous speech recognition research, where correct interpretation of word boundaries in an utterance requires linguistic and nonlinguistic information. However, people have a fascinating ability to fluidly perceive groups of characters as words in one context but break these groups apart in a different context. This human capability highlights the fact that there is a continual interaction between word identification and sentence interpretation. We are therefore motivated to study how our statistically emergent model can be used to simulate the interactions between word identification and sentence analysis. In particular, we want to study how the model (i) handles fragments with local ambiguities, such as those in sentences (1) and (3), when they appear in different sentential contexts and (ii) handles fragments with global ambiguities, such as those in sentences (2) and (4), when there is no discourse information.</Paragraph>
    <Paragraph position="12">  Gan, Palmer, and Lua A Statistically Emergent Approach</Paragraph>
  </Section>
  <Section position="4" start_page="534" end_page="536" type="metho">
    <SectionTitle>
3. Existing Approaches
</SectionTitle>
    <Paragraph position="0"> Traditionally, word identification has been treated as a preprocessing issue, distinct from sentence analysis. We will therefore only discuss current practices in word identification, leaving sentence analysis aside. Several techniques have been used in word identification, ranging from simple pattern matching, to statistical approaches, to rule-based methods. The most popular pattern-matching method is based on the Maximum Matching heuristics, commonly known as the MM method (Liang 1983; Wang, Wang, and Bai 1991). This method scans a sentence from left to right. In each step, the longest matched substring is selected as a word by dictionary look-up. For example, in sentence (5), (5) fisu~nj~ de f~ming y~y~ zh6ngda computer STRUC invention implication profound 'The invention of the computer has profound implications.' the first three characters are identified as the word ~t~J~ fisu~nfi'computer' because it is the longest matched substring found in a word dictionary. With the same reasoning, the words ~ de 'STRUC', ~ faming 'invention', ~ y~y~ 'implication', and ~ zhbngd~ 'profound' are identified.</Paragraph>
    <Paragraph position="1"> Statistical techniques include the relaxation approach (Fan and Tsai 1988; Chang, Chen, and Chen 1991; Chiang et al. 1992), the mutual information approach (Sproat and Shih 1990; Wu and Su 1993; Lua and Gan 1994), and the Markov model (Lai et al. 1992). These approaches make use of co-occurrence frequencies of characters in a large corpus of written texts to achieve word segmentation without getting into deep syntactic and semantic analysis. For example, the relaxation approach uses the usage frequencies of words and the adjacency constraints among words to iteratively derive the most plausible assignment of characters into word classes. First, all possible words in a sentence are identified and assigned initial probabilities based on their usage frequency. These probabilities are updated iteratively by employing the consistency constraints among neighboring words. Impossible combinations are gradually filtered out, leading to the identification of the most likely combination. The mutual information approach is similar to the relaxation approach in principle. Here, mutual information is used to measure how strongly two characters are associated. The mutual information score is derived from the ratio of the co-occurrence frequency of two characters to the frequency of each character. In a sentence, the mutual information score for each pair of adjacent characters is determined. The pair having the highest score is grouped together. The sentence is split into two parts by the two characters just grouped. The same procedure is applied to each part recursively. Eventually, all word boundaries will be identified.</Paragraph>
    <Paragraph position="2"> Both the pattern-matching and the statistical approaches are simple and easy to implement. It is well known, however, that they perform poorly when presented with ambiguous fragments that have alternate word boundaries in different sentential contexts. For instance, the fragment -~ shif~n, which is a bisyllabic word in sentence (3a), functions as two separate word, s in sentence (6).</Paragraph>
    <Paragraph position="4"> he only score ASP ten mark 'He scores only ten marks.'  Computational Linguistics Volume 22, Number 4 The MM method will regard this fragment as a bisyllabic word nutj &amp;quot; shff~n 'very' regardless of the sentential context in (3a) and (6), since this word is longer than the lengths of the two monosyllabic words n u shf 'ten' and ~ f~n 'mark'. As a result, this method fails to correctly identify the word boundaries in sentence (6). Within statistical approaches, considering, for example, the mutual information method (Lua and Gan 1994), the same fragment is identified as a bisyllabic word in both sentences (3a) and (6) 7.</Paragraph>
    <Paragraph position="5"> By checking the structural relationships among words in a sentence, rule-based approaches aim to overcome limitations faced by pattern-matching and statistical approaches. However, many of the rules in existing rule-based systems (Huang 1989; Yao, Zheng, and Wu 1990; Yeh and Lee 1991; He, Xu, and Sun 1991; Chen and Liu 1992) are either arbitrary and word-specific, or overly general. For example,</Paragraph>
    <Section position="1" start_page="535" end_page="536" type="sub_section">
      <SectionTitle>
Rule
</SectionTitle>
      <Paragraph position="0"> Given an ambiguous fragment xyz where x, z, xy, and yz are all possible words, if x can be analyzed as a so-called direction word, segment the fragment as x yz, else segment it as xy z (Liang 1990).</Paragraph>
      <Paragraph position="1"> This syntactic rule works in sentence (7).</Paragraph>
      <Paragraph position="3"> he bend down body 'He bends down his body.' The fragment T:~ xi?~ shen zi in sentence (7) is ambiguous. As -F xi?~ 'down' is a direction word, the fragment is segmented as -~ ~:j~ xi?l sh@nzi 'down body', which is as desired.</Paragraph>
      <Paragraph position="4"> Similarly, this rule will segment the fragment ~\]~lJ~ w?li gu6 r~n as ~ \[~l),, w?~i gu6rdn 'out citizen', since ~ w?zi 'out' is also a direction word. Therefore, when this fragment appears in sentence (8a),</Paragraph>
      <Paragraph position="6"> 'He is a foreigner.' the word boundaries identified will be: t4 sh~ w?d gu6r~n he COPULA out citizen which is incorrect.</Paragraph>
      <Paragraph position="7"> Examples (7) and (8) illustrate that although syntactic information has been incorporated in word segmentation, there are still errors. In contrast, people are extremely flexible in their perception of word boundaries of ambiguous fragments appearing in different sentential contexts. We believe that the separation of word identification from the task of analysis accounts for the difference in performance. This has motivated us to study how word identification and sentence analysis can be integrated. 7 This result is reported in Gan (1994).</Paragraph>
      <Paragraph position="8">  Gan, Palmer, and Lua A Statistically Emergent Approach</Paragraph>
    </Section>
  </Section>
  <Section position="5" start_page="536" end_page="540" type="metho">
    <SectionTitle>
4. The Statistically Emergent Model
</SectionTitle>
    <Paragraph position="0"> This model is inspired by the work done in the Fluid Analogies Research Group (Hofstadter 1983; Meredith 1986; Mitchell 1990; French 1992). There are four main components in this model. Namely, (i) the conceptual network, which is a network of nodes and links representing some permanent linguistic concepts; (ii) the workspace, which is the working area in which high-level linguistic structures representing the system's current understanding of a sentence are built and modified; (iii) the coderack, which is a pool of structure-building agents (codelets) waiting to run; and (iv) the computational temperature, which is an approximate measure of the amount of disorganization in the system's understanding of a sentence.</Paragraph>
    <Section position="1" start_page="536" end_page="536" type="sub_section">
      <SectionTitle>
4.1 The Conceptual Network
</SectionTitle>
      <Paragraph position="0"> This is a network of nodes and links representing some permanent linguistic concepts (Figure 1).</Paragraph>
      <Paragraph position="1"> In the network, a node represents a concept. For example, the node labeled character represents the concept of character; the node word represents the concept of word; the node chunk represents the concept of chunk; the nodes character-l, character-2, up to character-n represent the actual characters in a sentence; the affix and affinity nodes represent the concepts of relations between characters; the nodes classifier, reflexive adjective, structure, etc., represent the concepts of relations between words; the nodes agent, patient, theme, etc., represent the concepts of relations between chunks.</Paragraph>
      <Paragraph position="2"> A link represents an association between two nodes. There are four types of links: (i) category-of links, or is-a links, which connect instances to types, for example, the connections from character-I, character-2, up to character-n to the character node; (ii) has-instance links, the converse of category-of links; (iii) has-relation links, which associate a node with the relations it contributes, for example, the connection from the character node to the affix node represents that the character node contributes to the character-based relation named as affix; (iv) part-of links, which represent part-of relations between two nodes. The direction of a part-of link, for instance, the link from the character node to the word node, is interpreted as 'the character is part of the word'.</Paragraph>
      <Paragraph position="3"> During a run of the program, nodes become activated when perceived to be relevant, and decay when no longer perceived to be relevant. Nodes also spread activation to their neighbors, and thus concepts closely associated with relevant concepts also become relevant. The activation levels of nodes can be affected by processes that take place in the workspace. Several nodes in the network (e.g., agent, patient, word, chunk, etc.), when activated, are able to exert top-down influences on the types of activities that may occur in the workspace in subsequent processing. The context-dependent activation of nodes enables the system to dynamically decide what is relevant at a given point in time, and influences what types of actions the system engages in.</Paragraph>
    </Section>
    <Section position="2" start_page="536" end_page="539" type="sub_section">
      <SectionTitle>
4.2 The Workspace
</SectionTitle>
      <Paragraph position="0"> The workspace is meant to be the region where the system does the parsing and construction required to understand a sentence. This area can be thought of as corresponding to the locus of the creation and modification of mental representations that occurs in the mind as one tries to form a coherent understanding of a sentence. The construction process is done by a large number of processing agents.</Paragraph>
      <Paragraph position="1"> Figure 2 shows an example of a possible state of the workspace when the system is processing sentence (9).</Paragraph>
      <Paragraph position="2">  Computational Linguistics Volume 22, Number 4 * //q patient I predlicate f/---q theme \] , ~,'- i J X\~-~ classifier i I i~ ,,~ / re .~ \ I~nel...a~j~c~w : \ / //, structur~ \[charact~r_l I ' \ .'.. ~__ : \ &amp;quot;f/__C _, coo~.tion I re , ~ //___ i character-2 l\ : /lexical marker ~,~- -... -... complex \[ I~\ , \4 ~ ,.\ ~ stative I \[character-3 I\\ , \4 ~ \ \ construction rt \\ : ~'~' X&amp;quot; &amp;quot;-4 judgment I C/hara~ter-n l'-&amp;quot;t c.ara~t~r r', \ \\ ,, ,,~1 quantity I a.ff~ty \\\\ \ &amp;quot;~ i \\\\ d manner \[ \\ \\ ~ degree I L &lt; nds: &lt; r- has-instance &amp; category-of link \ X. direction \] ----&gt; has-relation link \ demonstrative J .... * part-of link question\[ Figure 1 The conceptual network.</Paragraph>
      <Paragraph position="3"> (9) ~ ~.X. ~ T' -- ~ ~ t~ b6nr~n sh~ng le s~n g~ hfiizi  she self give birth ASP three CL child 'She herself has given birth to three children.' There are three types of objects that may exist in the workspace: character objects, word objects, and chunk objects. The Chinese characters in Figure 2 not enclosed by rectangles, namely, the characters _~ s4n and ~I g~, are character objects. When a few Chinese characters are enclosed by a rectangle, for example ~,/k b6nr~n, it indicates that these characters make up a word object. The constituent characters of the word still exist in the workspace but they become less explicit in the figure. If a group of characters is enclosed by two rectangles, for example, the character ~1~ sh~ng, it indicates that a chunk object exists, made up of word objects. In short, the immediate constituents of a word object are character objects, and those of a chunk object are  word objects. It is possible to have unitary constituency whereby one object is the only part of another object. The chunk object ~l~ sh~ng 'give birth' is an example. Each object in the workspace has a list of descriptions not shown in Figure 2. For example, descriptions of character objects include their morphological category (stem/affix) and whether they are bound or unbound. 8 Descriptions of word objects include their categorial information and sense. Descriptions of chunk objects may also include these two descriptions, except that here, these two descriptions are derived from the category and the sense of the word that is the governor. The directed arc connecting two objects in Figure 2 denotes a linguistic relation between the objects connected. We adopt the dependency grammar notation (Tesni6re 1959; Mel'~uk 1988) in which the object pointed to by an arrow is the dependent while the object where the arrow originates is the governor. The undirected arc connecting the characters ~ hdi and ~ zi in Figure 2 represents a statistical relation, and statistical relations are undirected in our representation.</Paragraph>
      <Paragraph position="4"> An overview of our classification of relations is shown in Figure 3. A list of all types of relations is summarized in Table 1; a detailed exposition can be found in Gan (1994).</Paragraph>
      <Paragraph position="5"> In Figure 2, the connection between the word objects ~ ta 'she' and ~.h. b~nr~n  'self' is a reflexive adjective relation, the connection between the word objects ~ sh~ng 'give birth' and -j&amp;quot; le 'ASP' is an aspectual relation, and the two arcs connecting the character objects ~ hdi and --~ zi are affix and affinity relations. 8 A bound character cannot occur independently as a word.</Paragraph>
      <Paragraph position="6">  Computational Linguistics Volume 22, Number 4 Table 1 A list of all types of relations.</Paragraph>
      <Paragraph position="7"> Object Type Relation Type Example Object 1 Object 2 character affinity relation -~character affix relation ~ word classifier relation (~ 'CL' ~f~ 'snake' word reflexive adjective relation ~lJ~\] 'they' 2~\]&amp;quot; 'self' word structure relation i~1 'STRUC' .~J~ 'father' word coordination relation ~11 'and' ~\]~1~I 'Lisi' word adjective relation ~- 'blue' ~ 'sky' word complex stative relation ~ 'STRUC' ~ 'good' word attitude relation ~J~ 'really' ~ 'go' word disposal relation ~\] 'BA' \]~ 'door' word quantity relation ~J~\] 'we' ~ 'all' word manner relation ~ 'able' II~I 'sing' word degree relation ~\[~ 'very' ~\[~ 'nervous' word aspectual relation \]~ 'sleep' ~ 'ASP' word direction relation ~:-~ 'table' _\]~ 'on' word demonstrative relation ~ 'this' ,,~, 'fish' word interrogative relation ~-~ 'what' I~ 'time' chunk agent relation ~\[~ 'he' ~\]'(6~ T 'broke' chunk patient relation \[~ 'door' ~* 'broke' chunk theme relation ~ 'chant' .~ 'scripture' chunk source relation ~ \[\] 'from China' \[~ 'return' chunk goal relation ~1\]\]~ ~\]~ 'to room' ~ 'get' chunk time relation @~ 'today' ~i~J\]~ 'not well'</Paragraph>
    </Section>
    <Section position="3" start_page="539" end_page="540" type="sub_section">
      <SectionTitle>
4.3 The Coderack
</SectionTitle>
      <Paragraph position="0"> The building of linguistic structures (e.g., word and chunk objects, descriptions of objects, relations between objects) is carried out by a large number of agents known as codelets. These codelets reside in a data structure called the coderack. A codelet is a piece of code that carries out some small, local task that is part of the process of building a linguistic structure. For example, one codelet may check for the possibility of building an aspectual relation between the words ~4~ sh~ng 'give birth' and -Tle 'ASP' of sentence (9). There are several codelet types. Each type is responsible for building one of the relations shown in Table 1. In addition, there are word and chunk codelet types, which are responsible for the construction of words and chunks. Two special codelet types, namely, breaker and answer, will be explained in Section 5. Here, we make a distinction between codelets and codelet type. The latter is a prewritten piece of code while the former are instances of the latter.</Paragraph>
      <Paragraph position="1"> In the initial stage when the program is presented with a sentence, the default codelets initialized in the coderack are affix and affinity codelets. They will construct relations between character objects. Some default bottom-up word codelets are also posted to determine whether monosyllabic words could be constructed from character objects. When the word node in the conceptual network becomes activated by activation spreading from the character node, more top-down word codelets will be posted. When word objects are constructed, nodes denoting relevant relations between words will be activated. These nodes in turn cause the posting of codelets that will build relations between word objects. Again, by activation spreading to the chunk node, codelets  Gan, Palmer, and Lua A Statistically Emergent Approach building chunk objects will be posted, which will further lead to the posting of codelets that determine how chunk objects can be related.</Paragraph>
      <Paragraph position="2"> Note that there is no top-level executive deciding the order in which codelets are executed. At any given time, one of the existing codelets is selected to execute.</Paragraph>
      <Paragraph position="3"> The selection is a stochastic one, and it is a function of the relative urgencies of all existing codelets. The urgency of a codelet is a number assigned at the time of its creation to represent the importance of the task that it is supposed to carry out (this is an integer between 1 to 7, with 1 as the least urgent and 7 as the most urgent).</Paragraph>
      <Paragraph position="4"> Many codelets are independent and they run in parallel. Therefore, efforts towards building different structures are interleaved, sometimes co-operating and sometimes competing. The rate at which a structure is built is a function of the urgencies of its dedicated codelets. More promising structures are explored at high speeds and others at lower speeds. Almost all codelets make one or more stochastic decisions, and the high-level behavior of the program arises from the combination of thousands of these very small choices. In other words, the system's high-level behavior arises from its low-level stochastic substrate. To summarize, the macroscopic behavior of the system is not preprogrammed; the details of how it emerges from the low-level stochastic architecture of the system are given in Sections 5.2 and 5.3.</Paragraph>
    </Section>
    <Section position="4" start_page="540" end_page="540" type="sub_section">
      <SectionTitle>
4.4 The Computational Temperature
</SectionTitle>
      <Paragraph position="0"> The computational temperature is an approximate measure of the amount of coherency in the system's interpretation of a sentence: the value at a given time is a function of the amount and quality of linguistic structures that have been built in the workspace.</Paragraph>
      <Paragraph position="1"> The computational temperature is in turn used to control the amount c~f randomness in the local action of codelets. If many good linguistic structures have been built, the temperature will be low, and the system will make decisions less randomly. When few good linguistic structures have been found, the temperature will be high, leading to many more random decisions and hence to more diverse paths being explored by codelets. 9 The notion of temperature used here is similar to that in simulated annealing (Kirkpatrick, Gelatt, and Vecchi 1983). Both start with a high temperature, allowing all sorts of random steps to be taken, and slowly cool the system down by lowering the temperature. However, the decrease in temperature in our system is not necessarily monotonic. It varies according to the amount of coherency in the system's interpretation of a sentence. Thus, our system has an extra degree of flexibility, which allows uphill steps in temperature; in effect, this means that the system is annealing at the metalevel as well.</Paragraph>
    </Section>
  </Section>
  <Section position="6" start_page="540" end_page="546" type="metho">
    <SectionTitle>
5. An Example
</SectionTitle>
    <Paragraph position="0"> We will use a sample run of the program on sentence (9) to illustrate many central features of the model, including the selection of a codelet; the selection of competing alternatives; the interaction between the workspace and the conceptual network; etc.</Paragraph>
    <Paragraph position="1"> Note that this section would be overwhelmed with details if a step-by-step explanation were given. A detailed trace of the system's execution on this sentence can be found in Gan (1994), and a short description of the program's behavior can be found in Gan (1993). Here, only selected snapshots are highlighted.</Paragraph>
    <Paragraph position="2"> Sentence (9) is an example with local, overlap, and combination ambiguities in the 9 &amp;quot;Diverse paths&amp;quot; refers to different ways of analyzing the structure of a sentence.  fragment :~:,K~ b~n r~n sh~ng. Without considering the sentential context, these three characters have three possible word boundaries: :~ d~ ~L b~n rfn sh~ng 'CL human give birth', ~,~ ~ b~nr~n sh~ng 'self give birth' or ~ ,~e~_ b~n r~nsh~ng 'CL life'. Given the sentential context of (9), however, only the second alternative is correct.</Paragraph>
    <Section position="1" start_page="541" end_page="541" type="sub_section">
      <SectionTitle>
5.1 Initial Setup
</SectionTitle>
      <Paragraph position="0"> When the parsing process starts, the program is presented with the sentence. The temperature is clamped at 100 for the first 80 cycles to ensure that diverse paths are explored initially (the range of the temperature varies between 0 and 100). A cycle is the execution of one codelet. The number 80 is decided based on intuition and trial-and-error; it is not necessarily optimal. The workspace is initialized with nine character objects, each corresponding to a character of the sentence. Since the workspace contains only character objects, the only relevant concepts are: character, affinity, affix, and each character of the sentence. The corresponding nodes in the conceptual network, namely: character, affinity, affix, ~ ta, ~ b~n, up to ~ zi, are set to full activation. Fourteen instances of word codelet are posted to the coderack. They are responsible for identifying and constructing monosyllabic words. Twenty instances of affinity codelet are also posted to identify and construct affinity relations between characters. Eight instances of affix codelet are posted to identify and construct affix relations between characters. In general, the number of codelets posted is a function of the length of a sentence.</Paragraph>
    </Section>
    <Section position="2" start_page="541" end_page="542" type="sub_section">
      <SectionTitle>
5.2 Selection of a Codelet
</SectionTitle>
      <Paragraph position="0"> Among all codelet instances that exist in the coderack, only one of them is stochastically selected to execute each time. The choice of which codelet instance to execute depends on three factors: (i) its urgency, (ii) the number of codelet instances in the coderack that are of the same type as the individual instance, and (iii) the current temperature. At cycle 0, the coderack contains the statistics as shown in Table 2.</Paragraph>
      <Paragraph position="1"> The temperature-regulated urgency U_{i,t} of a codelet type is derived from its base urgency U_i as a function of the temperature (equation (1)),</Paragraph>
      <Paragraph position="3"> where t denotes the temperature, which ranges over [0, 100]. This equation magnifies differences in urgency values when the temperature is low; conversely, at high temperatures, it minimizes differences in urgency values. The idea is to let the system explore diverse paths when the temperature is high, while sticking to a single search path when the temperature is low.</Paragraph>
      <Paragraph position="4"> At cycle 0 where the temperature is 100, the temperature-regulated urgencies of the three codelet types are the same. The probability of selecting an instance of a word codelet, an affinity codelet, and an affix codelet is 33.3%, 47.6%, and 19.1% respectively.  State of the workspace at cycle 17.</Paragraph>
      <Paragraph position="5"> These probabilities are derived as follows:</Paragraph>
      <Paragraph position="7"> where Qi and Qj are the quantities of codelet types Ci and Cj respectively, Ui, t and Uj, t are the urgencies of codelet types Ci and Cj at temperature t respectively, and n is the total number of codelet types.</Paragraph>
      <Paragraph position="8"> Supposing that the coderack contains the same types of codelets with the same quantities, but the temperature is 0, the probability of selecting an instance of a word codelet, an affinity codelet, and an affix codelet becomes 8.99%, 65.01%, and 26.00% respectively. Therefore, at low temperatures, codelets with high urgency are preferred.</Paragraph>
    </Section>
    <Section position="3" start_page="542" end_page="543" type="sub_section">
      <SectionTitle>
5.3 Construction of Linguistic Structures
</SectionTitle>
      <Paragraph position="0"> Linguistic structures include high-level objects (words and chunks) and relations between two objects (see Table 1). In this run, for example, an affinity relation between the character objects ~: b~n and ),, rdn is constructed by an instance of an affinity codelet at cycle 17 (Figure 4).</Paragraph>
      <Paragraph position="1"> An affinity codelet works on any two adjacent character objects to evaluate whether an affinity relation should be built between these two characters. The affinity relation is a quantitative measure that reflects how strongly two characters co-occur statistically. It is derived from mutual information (Fano 1961), which is the probability that two characters occur together versus the probability that they are independent.</Paragraph>
      <Paragraph position="2"> Mathematically, it is:</Paragraph>
      <Paragraph position="4"> where A(a, b) is the affinity relation between the character objects a and b, P(a, b) is the probability that the two character objects co-occur consecutively, P(a) and P(b) are the probabilities that a and b occur independently. To derive affinity relations between characters, we have the usage frequencies of 6,768 Chinese characters specified in the GB2312-80 standard, and the usage frequencies of 46,520 words derived from a corpus.</Paragraph>
      <Paragraph position="5"> The total usage frequency of these words is 13,019,814. (The data was obtained from Liang Nanyuan, Beijing University of Aeronautics and Astronautics.) Note that efforts towards building different structures are interleaved, as many codelets are independent and they run in parallel. Apart from the initial set of codelets present at the onset of processing, new codelets are sometimes created by old codelets to continue working on a task in progress, and these codelets may in turn create other  Computational Linguistics Volume 22, Number 4 codelets, and so on. The cycle in which a structure is built is not preprogrammed.</Paragraph>
      <Paragraph position="6"> Rather, it emerges from the statistics of the interaction of all codelets in the coderack.</Paragraph>
    </Section>
    <Section position="4" start_page="543" end_page="543" type="sub_section">
      <SectionTitle>
5.4 Selection of Competing Structures
</SectionTitle>
      <Paragraph position="0"> It may happen that a structure being constructed is in conflict with an existing structure. In this run, for example, an affinity relation between the characters .PS. r~n and shgng is being considered at cycle 79. This structure is in conflict with the previously constructed affinity relation between the characters dg b~n and .),. r~n. The decision about which competing structure should win is decided stochastically as a function of two factors: (i) the strengths of the competing structures, and (ii) the temperature.</Paragraph>
      <Paragraph position="1"> The strength of a structure is an approximate measure of how promising the structure is. It is an integer ranging between 0 and 100, inclusive. The strengths of different structures are derived according to either linguistic knowledge encoded in the lexicon or certain statistical measures. Equation (3) is a key factor in deriving the strength of an affinity relation. In this run, the strength of the proposed affinity relation between the characters .PS. r~n and ~ sh~ng is 55, while that of the existing affinity relation between the characters ~ b~n and ),. r~n is 56. These two values are adjusted by the temperature according to equation (4).</Paragraph>
      <Paragraph position="3"> where St is the temperature-regulated strength, S is the original strength, and t is the temperature. The effect of equation (4) is similar to equation (1): to maximize differences in strength values at low temperatures, and to minimize differences at high temperatures. At cycle 79, the temperature is still clamped at 100, and hence the temperature-regulated strengths of these two competing structures are both 7 (rounded up to the nearest integer). The decision about which structure should win is therefore a random one, as both have an equal probability of success. According to equation (4), at low temperatures, it is increasingly difficult for a new structure of lesser strength to win in competition against existing structures of greater strength. Since the system's behavior is more random at high temperatures, it is able to explore diverse paths in the initial stage when little structure has been built. When a large number of structures deemed to be good have been found, which entails a low temperature, the system will proceed in a more deterministic fashion, always preferring good paths to bad ones.</Paragraph>
      <Paragraph position="4"> Indeed, in this case, the new affinity relation between the characters .PS. r~n and shgng has won. Instead of destroying the affinity relation between the characters b~n and ),. r~n, this structure is retained, but it becomes dormant in the workspace.</Paragraph>
    </Section>
    <Section position="5" start_page="543" end_page="544" type="sub_section">
      <SectionTitle>
5.5 The Interaction between the Workspace and the Conceptual Network
</SectionTitle>
      <Paragraph position="0"> Activated nodes in the conceptual network spread activation to their neighbors, and thus concepts closely related to relevant concepts also become relevant. In this run, for example, the nodes word and chunk become activated at cycle 80 due to activation spreading from the character node. Activated nodes influence what tasks the system will focus on subsequently through the posting of top-down codelets. For example, at cycle 80, the activated word node causes the proportion of word codelets to increase to 93%. This is an important feature of the system: the context-dependent activation of nodes, which enables the system to dynamically decide what is relevant at a given point in time, and influences what actions to take through the posting of top-down codelets.</Paragraph>
      <Paragraph position="1">  State of the workspace at cycle 180.</Paragraph>
      <Paragraph position="2"> 5.6 Detection and Resolution of Erroneous Structures By the end of cycle 180, the following structures have been built (Figure 5): active sh~ng, ~- zi; active active  relations: an affinity relation between the characters ,K, r~n and ~dd hdi and -~ zi, an affix relation between the characters ~ hdi and word objects: ~zj~ hdizi 'child', ),.~4~ r~nsh~ng 'life', and :~ b~n 'CL'; chunk objects: ,K.~_ r~nsh~ng &amp;quot;life', and ~:j~ hdizi 'child'; dormant relations: an affinity relation between the characters ~: b~n and r~n.</Paragraph>
      <Paragraph position="3"> Among them, the word ~ b~n 'CL' is a classifier. This word has activated the classifier node in the conceptual network, which in turn causes the posting of classifier codelets to the coderack. The responsibility of this type of codelet is to explore the possibility of establishing a classifier relation between a classifier and an object name. 1deg The use of a classifier is in general idiosyncratic. This type of idiosyncrasy is encoded in the lexicon. Since ~ b~n cannot be the classifier of the object name ,K.~ r~nsh~ng 'life', a special type of codelet known as a breaker codelet is posted to the coderack. The role of a breaker is to identify erroneous linguistic structures, and set them to dormant, restoring any dormant competing structure when necessary. At cycle 187, a breaker codelet is executed that examines structures that are &amp;quot;introuble&amp;quot;, namely, the words :~ b~n and ),,~4~ r~nsh~ng 'life'. Since the component characters of the second word can be free, the breaker codelet concludes that this is an erroneous grouping. The word yk.~4~ r~nsh~ng 'life' is made dormant. The other structures that support the word ,K.~ r~nsh~ng 'life', namely the affinity relation between the characters ,K. r~n and ~ sh~ng and the chunk ,~.~-~ r~nsh~ng 'life', are also made dormant. The competing alternative, the affinity relation between the characters b~n and ),, rdn, is reactivated. This snapshot also illustrates an important feature of the system: syntactic analysis can be performed without waiting for the system to complete the task of word identification.</Paragraph>
      <Paragraph position="4">  State of the workspace at cycle 373.</Paragraph>
    </Section>
    <Section position="6" start_page="544" end_page="546" type="sub_section">
      <SectionTitle>
5.7 The Final State
</SectionTitle>
      <Paragraph position="0"> Figure 6 shows the state of the workspace at the end of cycle 373.</Paragraph>
      <Paragraph position="1"> For easy reference, sentence (9) is repeated here: (9) ~ :~.K. ~ T G ~I ~x-~ ta b~nr~n sh~ng le san g~ h~izi she self give birth ASP three CL child 'She herself has given birth to three children.' The list of structures built are: * active relations: an affinity relation between the characters ~ b~n and .~ r~n, ~ h~i and ~ zi, an affix relation between the characters ~ h~i and zi, a reflexive adjective relation between the words ~ ta 'she' and ;4;.~ b~nr~n 'self', a classifier relation between the words ~ g~ 'CU and ~ h~izi 'child', a quantity relation between the words ~ san 'three' and ~:~ h~izi 'child', an aspectual relation between the words ~ sh~ng 'give birth' and ~ le 'ASP'; * active words: ~ ta 'she', :~:.J~ b~nr~n 'self', ~ sh~ng 'give birth', Tle 'ASP', -~ san 'three', ~l g~ 'CU, and ~ h~izi 'child'; * active chunks: ~:4:.~ ta b~nr~n 'she herself', P4~ sh~ng 'give birth', and ~--.{~l~x~ san g~ h~izi 'three CL children';  They were not identified because the system has come to a stop at cycle 381, after an instance of answer codelet was executed. This type of codelet reports on the word  Gan, Palmer, and Lua A Statistically Emergent Approach agent theme reflexive adjective aspect  A graph of structures constructed against cycle number. boundaries of a sentence. The system currently adopts a greedy approach and starts posting large numbers of this type of codelet as soon as it has identified a plausible interpretation of the word boundaries of a sentence. Hence, although instances of agent and theme codelets were present in the coderack, they were being overwhelmed by the ubiquitous answer codelets.</Paragraph>
      <Paragraph position="2"> Figure 8 summarizes the cycle number in which various types of structures were constructed during this run. In this figure we see that affinity relations are built earlier than words, reflecting the system's preference for words of greater lengths. The system makes use of statistical information (the mutual information scores) to make quick and reliable guesses of the locations of these words. It can also be observed that overall, there is a gradual shift in the types of operations executed, from being charactercentered initially, to word-centered, and then to chunk-centered. From time to time, however, the construction of different types of structures is interleaved.</Paragraph>
    </Section>
  </Section>
  <Section position="7" start_page="546" end_page="548" type="metho">
    <SectionTitle>
6. System Performance and Discussions
</SectionTitle>
    <Paragraph position="0"> Thirty ambiguous fragments that have alternating word boundaries in different sentential contexts were presented to the system and the system was able to resolve all the ambiguities. The test set covers the four types of word boundary ambiguities de- null Computational Linguistics Volume 22, Number 4 scribed in Section 2. When the sentential contexts of locally ambiguous fragments (both the overlap and combination type) were varied, our system was able to identify the correct word boundaries. When the system was presented with sentences with global ambiguities, it produced all the plausible alternative word boundaries. However, at any run of such a sentence, only one alternative is generated. The system's behavior is similar to human performance in the goblet/faces recognition problem in perception (Hoffman and Richards 1984). We cannot see both the goblet and the faces at the same time, but we are able to switch back and forth between these two interpretations.</Paragraph>
    <Paragraph position="1"> The frequencies of generating all the alternatives vary from one sentence to another. It is important to note that such frequencies are not meant to indicate some kind of &amp;quot;goodness&amp;quot; measure of alternative word boundary interpretations. Neither are they meant to reflect the preferences of a human. They are merely a reflection of the usage frequencies of Chinese characters and words in our dictionary.</Paragraph>
    <Paragraph position="2"> The system's ability to generate different word boundaries for a globally ambiguous sentence arises from its stochastic search mechanism, which does not rule out a priori certain possibilities. This feature enables the system to occasionally discover less-obvious interpretations of word boundaries. For example, in addition to the two apparent ways of aligning the fragment ~,~ yTjrnggu6 as either ~,~ i~ yfjrnggu~ 'already over' or B ,~ yfjrnggu6 'already go through' in sentences (10a) and (10b), a less-obvious possibility that the system has identified is: ~, ,~ i~ y~jTnggu~ 'already experience over', where i~ gu6 'over' is the complement of .~ jTng 'experience'.</Paragraph>
    <Paragraph position="3"> (10)a.</Paragraph>
    <Paragraph position="4"> w~ y~j~ng gu~ le xu~sh~ng shfd?li I already over ASP student period 'My student days are over.' b.</Paragraph>
    <Paragraph position="5"> w~ yr jTnggu6 le xu~sh~ng shid?zi I already go through ASP student period 'I have already gone through the period as a student.'</Paragraph>
    <Paragraph position="7"> c. 我已经过了学生时代 wǒ yǐ jīng guò le xuéshēng shídài I already experience over ASP student period 'I have already experienced student life.' The system rarely produces the less-obvious interpretations. This demonstrates that its mechanisms are able to strike an effective balance between random search and deterministic search, imbuing it with both flexibility and robustness. An issue that arises from the nondeterministic nature of the system is: will the word boundaries of a locally ambiguous sentence vary across different runs? To address this, we ran the program on each sentence 20 times. We found that for sentences covered by our current set of linguistic descriptions, the system arrived at the same word boundaries despite different paths being taken in each run. For linguistic phenomena not yet covered, suboptimal solutions may sometimes be generated. For example, when the program worked on sentence (11), it produced the segmentation in (12) once as the answer.</Paragraph>
    <Paragraph position="10"> China already exploit and yet not kaifa de z~yu~n dOu h~n duo exploit STRUC resource all very many 'China has many resources which have either been exploited or not yet been exploited.' (12)* zhdnggu6 yr kaifa hd sh?mg w~i China already exploit and yet not kai f~ de ziyu~n dOu h6n3 open distribute STRUC resource all very duo many In this run, the bisyllabic word ~_~ /a//f~ 'develop' has been wrongly identified as two monosyllabic words ~'\] /a/i 'open' and ~ ft/'distribute'. To determine the proper use of two juxtaposed predicates, such as ~J kai 'open' and ~ fa 'distribute' in this case, requires a careful study of serial verb constructions. It is inevitable that the system would make such a mistake as our linguistic descriptions have not yet covered this phenomenon.</Paragraph>
    <Paragraph position="11"> In comparison, consider the performance of a strictly statistical approach based on mutual information (Lua and Gan 1994): the latter wrongly identified the word boundaries in 11 out of the 30 ambiguous fragments. For the 6 fragments that appear in globally ambiguous sentences, the mutual information approach gave only one interpretation of the word boundaries. In terms of processing speed, the mutual information approach took an average of 110.4 ms to process one character; our approach took 1.7 s. 11 The extra time in our approach is spent in parsing sentences.</Paragraph>
  </Section>
class="xml-element"></Paper>