<?xml version="1.0" standalone="yes"?>
<Paper uid="J99-2005">
  <Title>Interactive System for Phonological</Title>
  <Section position="5" start_page="272" end_page="272" type="concl">
    <SectionTitle>
4. Conclusions
</SectionTitle>
    <Paragraph position="0"/>
    <Section position="1" start_page="272" end_page="272" type="sub_section">
      <SectionTitle>
4.1 Connolly's New Algorithm
</SectionTitle>
      <Paragraph position="0"> Since the appearance of Covington's article (and even since the first draft of this reply), a highly relevant article has appeared, which--coincidentally--addresses the issues raised here (Connolly 1997). In this two-part article, Connolly first suggests ways of quantifying the difference between two individual phones, on the basis of perceptual and articulatory differences, using either a Euclidean distance metric or, like CAT, a feature-based metric. Connolly's proposals are more elaborate, however, in that they permit specific differences to be weighted so as to reflect the relative importance of each opposition. In the second part of the article, Connolly introduces a distance measure for comparing sequences of phones, based on the Levenshtein distance well known in the speech-processing and corpus-alignment literature (inter alia). Again, this metric can be weighted, to allow substitutions to be valued differentially (presumably on the basis of the individual phone distance measure described in the first part), and to deal with merging and metathesis. Connolly also briefly considers the effects of nonlinear prosodic structure on the distance measure. Although his methods are clearly computational in nature, Connolly reported (personal communication, 1997) that he had not yet implemented them. Taken together, these measures are certainly more sophisticated than either CAT's or Covington's, so this contribution could well prove extremely significant for the development of articulation-testing software.</Paragraph>
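To make the combination of the two measures concrete, the following is a minimal sketch of a Levenshtein distance whose substitution cost is derived from a feature-based phone distance, in the general spirit of Connolly's proposal. The feature vectors and the uniform insertion/deletion cost are hypothetical illustrations, not Connolly's published weights, and the sketch omits his merging and metathesis operations.

```python
def phone_distance(a, b, features):
    """Proportion of binary features on which two phones disagree
    (a stand-in for a weighted perceptual/articulatory metric)."""
    fa, fb = features[a], features[b]
    return sum(x != y for x, y in zip(fa, fb)) / len(fa)

def weighted_levenshtein(s, t, features, indel_cost=1.0):
    """Levenshtein distance over phone sequences, with substitution
    cost given by phone_distance rather than a flat cost of 1."""
    m, n = len(s), len(t)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * indel_cost
    for j in range(1, n + 1):
        d[0][j] = j * indel_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = phone_distance(s[i - 1], t[j - 1], features)
            d[i][j] = min(d[i - 1][j] + indel_cost,      # deletion
                          d[i][j - 1] + indel_cost,      # insertion
                          d[i - 1][j - 1] + sub)         # substitution
    return d[m][n]

# Hypothetical feature vectors: (voiced, nasal, continuant)
FEATURES = {"p": (0, 0, 0), "b": (1, 0, 0),
            "m": (1, 1, 0), "f": (0, 0, 1)}

print(weighted_levenshtein("pb", "pm", FEATURES))
```

Substituting [m] for [b] thus costs less than substituting [m] for [f], so near misses in a child's production are penalized less than gross errors, which is precisely the behaviour a flat Levenshtein metric cannot provide.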
      <Paragraph position="1"> In Somers (1998), I report an implementation and comparison of Connolly's measures with my own earlier work.</Paragraph>
    </Section>
    <Section position="2" start_page="272" end_page="272" type="sub_section">
      <SectionTitle>
4.2 What Would a New Version of CAT Be Like?
</SectionTitle>
      <Paragraph position="0"> In the light of the above remarks, it is interesting to consider how we might specify a reimplementation of CAT. One area with room for considerable improvement is data input. CAT uses a very crude phonetic transcription based on a minimal character set, not even including lower-case letters. Clearly this restriction would no longer be necessary. The software system PDAC (Phonological Deviation Analysis by Computer) uses a software package called LIPP (Logical International Phonetic Programs) for the input of transcriptions (Perry 1995). Alternatively, it seems quite feasible to allow transcriptions to be input using a standard word processor and a phonetic font, and to interpret the symbols accordingly. For a commercial implementation it would be better to follow the standard proposed by the IPA (Esling and Gaylord 1993), which has been approved by the ISO and included in the Unicode definitions.</Paragraph>
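As a minimal sketch of what the Unicode route would permit, the function below segments an IPA transcription entered as ordinary Unicode text into phone-sized units, attaching combining diacritics to the preceding base symbol. The segmentation rule is a simplifying assumption for illustration only (real transcriptions would also need handling of digraphs, suprasegmentals, and so on).

```python
import unicodedata

def segment_ipa(transcription):
    """Split a Unicode IPA string into base symbols, keeping
    combining diacritics (tie bars, etc.) with the preceding phone."""
    segments = []
    for ch in unicodedata.normalize("NFD", transcription):
        if unicodedata.combining(ch) and segments:
            segments[-1] += ch  # diacritic: attach to previous symbol
        else:
            segments.append(ch)
    return segments

# "t" + combining tie bar + esh + schwa, i.e. an affricate plus vowel
print(segment_ipa("t\u0361\u0283\u0259"))
```

Such Unicode-based input would let the transcriptions produced by clinicians with any standard editor be parsed directly, with no custom character set of the kind old CAT required.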
      <Paragraph position="1"> Despite the reservations of all the speech-language pathology experts, it seems to me that the work on alignment discussed here (Somers 1978b; Covington 1996; Connolly 1997) suggests that this aspect of computerized articulation test analysis is a research aim well worth pursuing, especially if collaborators from the speech-language pathology field can be found. It would be rewarding if this article were to awaken interest in the problem.</Paragraph>
    </Section>
  </Section>
</Paper>