Phonetically Driven Phonology: The Role of Optimality Theory and Inductive Grounding1
Bruce P. Hayes
UCLA
Final version: August 1997

Written for the proceedings volume of the 1996 Milwaukee Conference on Formalism and Functionalism in Linguistics
Abstract
Functionalist phonetic literature has shown how the phonologies of human languages are arranged to facilitate ease of articulation and perception. The explanatory force of phonological theory is greatly increased if it can directly access these research results. There are two formal mechanisms that together can facilitate the link-up of formal to functional work. As others have noted, Optimality Theory, with its emphasis on directly incorporating principles of markedness, can serve as part of the bridge. Another mechanism is proposed here: an algorithm for inductive grounding permits the language learner to access the knowledge gained from experience in articulation and perception, and form from it the appropriate set of formal phonological constraints.

1. Phonological Functionalism

The difference between formalist and functionalist approaches in linguistics has taken different forms in different areas. For phonology, and particularly for the study of fully productive sound patterns, the functionalist approach has traditionally been phonetic in character. For some time, work in the phonetic literature, such as Ohala (1974, 1978, 1981, 1983), Ohala and Ohala (1993), Liljencrants and Lindblom (1972), Lindblom (1983, 1990), and Westbury and Keating (1986), has argued that the sound patterns of languages are effectively arranged to facilitate ease of articulation and distinctness of contrasting forms in perception. In this view, much of the patterning of phonology reflects principles of good design.2


In contemporary phonological theorizing, such a view has not been widely adopted. Phonology has been modeled as a formal system, set up to mirror the characteristic phonological behavior of languages. Occasionally, scholars have made a nod towards the phonetic sensibleness of a particular proposal. But on the whole, the divide between formal and functionalist approaches in phonology has been as deep as anywhere else in the study of language.
It would be pointless (albeit fun) to discuss reasons for this based on the sociology of the fields of phonetics and phonology. More pertinently, I will claim that part of the problem has been that phonological theory has not until recently advanced to the point where a serious coming to grips with phonetic functionalism would be workable.

2. Optimality Theory

The novel approach to linguistic theorizing known as Optimality Theory (Prince and Smolensky 1993) appears to offer the prospect of a major change in this situation. Here are some of the basic premises of the theory as I construe it.


First, phonological grammar is not arranged in the manner of Chomsky and Halle (1968), in essence as an assembly line converting underlying to surface representations in a series of steps. Instead, the phonology selects an output form from the set of logical possibilities. It makes its selection using a large set of constraints, which specify what is “good” about an output, in the following two ways:

a. Phonotactics: “The output should have phonological property X.”


b. Faithfulness: “The output should resemble the input in possessing property Y.”
Phonotactic constraints express properties of phonological markedness, which are typically uncontroversial. For example, they require that syllables be open, or that front vowels be unrounded, and so on. The Faithfulness constraints embody a detailed factorization of what it means for the output to resemble the input; they are fully satisfied when the output is identical to the input.
Constraints can conflict with each other. Often, it is impossible for the output to have the desired phonotactic properties and also be faithful to the input; or for two different phonotactic constraints to be satisfied simultaneously. Therefore, all constraints are prioritized; that is, ranked. Prioritization drives a specific winnowing process (not described here) that ultimately selects the output of the grammar from the set of logical possibilities by ruling out all but a single winner.3
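
For concreteness, here is a minimal sketch of the winnowing idea in Python. The constraints NoCoda (a phonotactic constraint against closed syllables) and Max (a Faithfulness constraint against deletion), along with the tiny candidate set, are toy assumptions chosen for illustration, not an analysis from the literature.

```python
# A minimal sketch of selection by ranked constraints: each constraint,
# taken in ranking order, winnows the candidate set down to the
# candidates with the fewest violations. Toy constraints and candidates.

VOWELS = set("aeiou")

def no_coda(inp, out):
    """Phonotactic: penalize an output ending in a consonant
    (a crude stand-in for 'syllables must be open')."""
    return 0 if not out or out[-1] in VOWELS else 1

def max_io(inp, out):
    """Faithfulness: one violation per input segment deleted
    (here, crudely, the difference in length)."""
    return max(len(inp) - len(out), 0)

def evaluate(inp, candidates, ranked_constraints):
    """Select the output: winnow by each constraint in turn."""
    survivors = list(candidates)
    for constraint in ranked_constraints:
        fewest = min(constraint(inp, c) for c in survivors)
        survivors = [c for c in survivors if constraint(inp, c) == fewest]
        if len(survivors) == 1:
            break
    return survivors[0]

# NoCoda >> Max: deleting the final consonant is optimal.
print(evaluate("pat", ["pat", "pa"], [no_coda, max_io]))  # -> pa
# Max >> NoCoda: faithfulness prevails; the coda survives.
print(evaluate("pat", ["pat", "pa"], [max_io, no_coda]))  # -> pat
```

The comparison is lexicographic: a single violation of a higher-ranked constraint cannot be redeemed by any degree of success on lower-ranked ones, which is why reranking the same constraint set suffices to generate cross-linguistic variation.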
I will take the general line that Optimality Theory is a good thing. First, it shares the virtues of other formal theories: when well implemented, such theories provide falsifiability, so that the errors in an analysis can lead to improvement or replacement. Further, formal theories characteristically increase the pattern recognition capacity of the analyst. For example, it was only when the formal theory of moras was introduced (Hyman 1985) that it became clear that compensatory lengthening processes conserve mora count (see Hyman, and for elaboration Hayes 1989).4
Second, Optimality Theory has permitted solutions to problems that simply were not treatable in earlier theories. Examples are the metrical phonology of Guugu Yimidhirr (Kager, to appear), or the long-standing ordering paradoxes involving phonology and reduplication (McCarthy and Prince 1995).
Most crucially, Optimality Theory has the advantage of allowing us to incorporate general principles of markedness into language-specific analyses. Previously, a formal phonology consisted of a set of somewhat arbitrary-looking rules. The analyst could only look at the rules “from the outside” and determine how they reflect general principles of markedness (or at best, supplement the rules with additional markedness principles, as in Chomsky and Halle (1968, Ch. 9), Schachter (1969), or Chen (1973)). Under Optimality Theory, the principles of markedness (stated explicitly and ranked) form the sole ingredients of the language-specific analysis. The mechanism of selection by ranked constraints turns out to be such an amazingly powerful device that it can do all the rest. Since rankings are the only arbitrary element in the system, the principled character of language-specific analyses is greatly increased. This is necessarily an argument by assertion, but I believe a fair comparison of the many phonological analyses of the same material in both frameworks would support it.5

3. What is a Principled Constraint?

The question of what makes a constraint “principled” is one that may be debated. The currently most popular answer, I think, relies on typological evidence: a principled constraint is one that “does work” in many languages, and does it in different ways.


But there is another answer to the question of what makes a constraint principled: a constraint can be justified on functional grounds. In the case of phonetic functionalism, a well-motivated phonological constraint would be one that either renders speech easier to articulate or renders contrasting forms easier to distinguish perceptually. From the functionalist point of view, such constraints are a priori plausible, under the reasonable hypothesis that language is a biological system that is designed to perform its job well and efficiently.
Optimality Theory thus presents a new and important opportunity to phonological theorists. Given that the theory thrives on principled constraints, and given that functionally motivated phonetic constraints are inherently principled, the clear route to take is to explore how much of phonology can be constructed on this basis. One might call such an approach “phonetically-driven Optimality-theoretic phonology.” A theory of this kind would help close the long-standing and regrettable gap between phonology and phonetics.

4. Research in Phonetically-Driven Optimality-Theoretic Phonology

The position just taken regarding phonetics and Optimality Theory is not original with me; it grows out of ongoing research, much of it inspired by Donca Steriade, which attempts to make use of OT to produce phonetically-driven formal accounts of various phonological phenomena.


For instance, Steriade (1993, 1997) considers the very basic question of segmental phonotactics in phonology: what segments are allowed to occur where? Her perspective is a novel one, taking the line that perception is the dominant factor. Roughly speaking, Steriade suggests that segments preferentially occur where they can best be heard. The crucial part is that many segments (for example, voiceless stops) are rendered audible largely or entirely by the contextual acoustic cues that they engender on neighboring segments through coarticulation. In such a situation, it is clearly to the advantage of particular languages to place strong restrictions on the phonological locations of such segments.
Following this approach, and incorporating a number of results from research in speech perception, Steriade is able to reconstruct the traditional typology of “segment licensing,” including what was previously imagined to be an across-the-board preference for consonants to occur in syllable onset position. She goes on to show that there in fact are areas where this putative preference fails as an account of segmental phonotactics: one example is the preference for retroflexes to occur postvocalically (in either onset or coda); preglottalized sonorants work similarly. As Steriade shows, these otherwise-baffling cases have specific explanations, based on the peculiar acoustics of the segments involved. She then makes use of Optimality Theory to develop explicit formal analyses of the relevant cases.
Phonetically-driven approaches similar to Steriade’s have led to progress in the understanding of various other areas of phonology: place assimilation (Jun 1995a,b; Myers, this volume), vowel harmony (Archangeli and Pulleyblank 1994, Kaun 1995a,b), vowel-consonant interactions (Flemming 1995), syllable weight (Gordon 1997), laryngeal features for vowels (Silverman 1995), non-local assimilation (Gafos 1996), and lenition (Kirchner, in progress).6

5. The Hardest Part

What is crucial here (and recognized in earlier work) is that a research result in phonetics is not the same thing as a phonological constraint. To go from one to the other is to bridge a large gap. Indeed, the situation facing phonetically-driven Optimality-theoretic phonology is a rather odd one. In many cases, the phonetic research that explains the phonological pattern has been done very well and is quite convincing; it is only the question of how to incorporate it into a formal phonology that is difficult. An appropriate motto for the research program described here is: we seek to go beyond mere explanation to achieve actual description.


In what follows, I will propose a particular way to attain phonetically-driven phonological description.7 Since I presuppose Optimality Theory, what is crucially needed is a means to obtain phonetically-motivated constraints.
In any functionalist approach to linguistics, an important question to consider is: who is in charge? That is, short of divine intervention, languages cannot become functionally well designed by themselves; there has to be some mechanism responsible. In the view I will adopt, phonology is claimed to be phonetically natural because the constraints it includes are (at least partially) the product of grammar design, carried out intelligently (that is, unconsciously, but with an intelligent algorithm) by language learners.
Before turning to this design process, I will first emphasize its most important aspect: there is a considerable gap between the raw patterns of phonetics and phonological constraints. Once the character of this divergence is clear, then the proposed nature of the design process will make more sense.
6. Why Constraints Do Not “Emerge” From The Phonetics

There are a number of reasons to think that phonetic patterns cannot serve as a direct, unmediated basis for phonology. (For more discussion of this issue, see Anderson 1981 and Keating 1985.)



6.1 Variation and Gradience
First, phonetics involves gradient and variable phenomena, whereas phonology is characteristically categorial and far less variable. Here is an example: Hayes and Stivers (in progress) set out to explain phonetically a widespread pattern whereby languages require postnasal obstruents to be voiced. The particular mechanism we propose is reviewed below; for now, it suffices to say that it appears to be supported by quantitative aerodynamic modeling and should be applicable in any language in which obstruents may follow nasals.
Since the mechanism posited is automatic, we might expect to find it operating even in languages like English that do not have postnasal voicing as a phonological process. Testing this prediction, Hayes and Stivers examined the amount of closure voicing (in milliseconds) of English /p/ in the environments / m ___ versus / r ___. Sure enough, for all five subjects in the experiment, there was significantly more /p/ voicing after /m/ than after /r/, as our mechanism predicted. But the effect was purely quantitative: except in the most rapid and casual speech styles, our speakers fully succeeded in maintaining the phonemic contrast of /p/ with /b/ (which we also examined) in postnasal position. The phonetic mechanism simply produces a quantitative distribution of voicing that is skewed toward voicedness after nasals. Moreover, the distribution of values we observed varied greatly: the amount of voicing we found in /mp/ ranged from 13% up to (in a few cases) over 60% of the closure duration of the /p/.
In contrast, there are other languages in which the postnasal voicing effect is truly phonological. For example, in Ecuadorian Quechua (Orr 1962), at suffix boundaries, it is phonologically illegal for a voiceless stop to follow a nasal, and voiced stops are substituted for voiceless; thus sača-pi ‘jungle-loc.’ but atam-bi ‘frog-loc.’ For suffixes, there is no contrast of voiced versus voiceless in postnasal position. Clearly, English differs from Quechua in having “merely phonetic” postnasal voicing, as opposed to true phonological postnasal voicing.8 We might say that Ecuadorian Quechua follows a categorial strategy: in the suffix context it simply doesn’t even try to produce the (phonetically difficult) sequence nasal + voiceless obstruent. English follows a “bend but don’t break” strategy, allowing a highly variable increase in degree of voicing after nasals, but nevertheless maintaining a contrast.
I would claim, then, that in English we see postnasal voicing “in the raw,” as a true phonetic effect, whereas in Ecuadorian Quechua the phonology treats it as a categorial phenomenon. The Quechua case is what needs additional treatment: it involves a kind of leap from simply allowing a phonetic effect to influence the quantitative outcomes to arranging the phonology so that, in the relevant context, an entire contrast is wiped out.9
6.2 Symmetry
Let us consider a second argument. I claim that phonetics is asymmetrical, whereas phonology is usually symmetrical. Since the phonetic difficulty of articulation and perception follows from the interaction of complex physical and perceptual systems, we cannot in the general case expect the regions of phonetic space characterized by a particular difficulty level to correspond to phonological categories.
To make this clear, consider a particular case, involving the difficulty of producing voiced and voiceless stops. The basic phonetics (here, aerodynamics) has been studied by Ohala (1983) and by Westbury and Keating (1986). Roughly, voicing is possible only so long as a sufficient drop in air pressure is maintained across the glottis. In a stop, this is a delicate matter for the speaker to arrange, since free escape of the oral air is impeded. Stop voicing is influenced by quite a few different factors, of which just a few are reviewed here; a schematic sketch of the pressure-drop condition follows the list.
(a) Place of articulation. In a “fronter” place like labial, a large, soft vocal tract wall surface surrounds the trapped air in the mouth. During closure, this surface retracts under increasing air pressure, so that more incoming air is accommodated. This helps maintain the transglottal pressure drop. Since there is more yielding wall surface in labials (and more generally, at fronter places of articulation), we predict that the voiced state should be relatively easier for fronter places. Further, since the yielding-wall effect actually makes it harder to turn off voicing, we predict that voicelessness should be harder for fronter places.
(b) Closure duration. The longer a stop is held, the harder it will be to accommodate the continuing transglottal flow, and thus maintain voicing. Thus, voicelessness should be favored for geminates and for stops in post-obstruent position. (The latter case assumes that, as is usual, the articulation of the stop and the preceding obstruent are temporally overlapped, so no air escape can occur between them.)
(c) Postnasal position. As just noted, there are phonetic reasons why voicing of stops should be considerably favored when a nasal consonant immediately precedes the stop.
(d) Phrasal position. Characteristically, voicing is harder to maintain in utterance-initial and utterance-final position, since the subglottal pressure that drives voicing tends to be lower in these positions.
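
For concreteness, the following schematic simulation shows how factors (a)–(c) fall out of the pressure-drop condition. It is a toy sketch in arbitrary units, with made-up parameter values; it is not the quantitative aerodynamic model of Westbury and Keating (1986) or of Hayes and Stivers (in progress).

```python
# Schematic sketch: voicing during a stop closure persists only while the
# transglottal pressure drop (subglottal minus oral pressure) exceeds a
# threshold. All quantities are in arbitrary units; the parameter values
# are illustrative assumptions, not measured data.

def voicing_duration(closure_ms=100, subglottal_p=8.0, threshold=2.0,
                     wall_compliance=0.5, nasal_leak=0.0):
    """Return how many ms of the closure sustain voicing.
    wall_compliance: yielding of the vocal tract walls (greater at
        fronter places such as labial, which enclose more soft tissue).
    nasal_leak: residual velar venting immediately after a nasal."""
    oral_p = 0.0
    for ms in range(closure_ms):
        if subglottal_p - oral_p < threshold:
            return ms                            # voicing dies here
        inflow = 0.1 * (subglottal_p - oral_p)   # transglottal airflow
        oral_p += inflow * (1.0 - wall_compliance) - nasal_leak
        oral_p = max(oral_p, 0.0)
    return closure_ms                            # voiced throughout

# (a) Fronter place (more yielding walls) sustains voicing longer:
print(voicing_duration(wall_compliance=0.7))     # labial-like
print(voicing_duration(wall_compliance=0.2))     # velar-like
# (b) A short closure can be voiced throughout; a long one cannot:
print(voicing_duration(closure_ms=40, wall_compliance=0.7))
print(voicing_duration(closure_ms=200, wall_compliance=0.7))
# (c) Postnasal venting keeps the pressure drop above threshold:
print(voicing_duration(wall_compliance=0.2, nasal_leak=0.3))
```

Even at this crude level, ease and difficulty are distributed by the physics, not by any tidy featural symmetry, which is precisely the point at issue in this section.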
As Ohala (1983) and others have made clear, these phonetic factors are abundantly reflected in phonological patterning.

(a) Gaps in stop inventories that have both voiced and voiceless series typically occur at locations where the size of the oral chamber makes voicing or voicelessness difficult; thus at *[p] or *[g], as documented by Ferguson (1975), Locke (1983), and several sources cited by Ohala (p. 195).

(b) Clusters in which a voiced obstruent follows another obstruent are also avoided, for instance in Latin stems (Devine and Stephens 1977), or in German colloquial speech (Mangold 1962: 45). Geminate obstruents are a similar case: they likewise are often required to be voiceless, as in Japanese (Vance 1987: 42), West Greenlandic (Rischel 1974), or !Xõõ (Traill 1981: 165).

(c) Languages very frequently ban voiceless stops after nasals, with varying overt phonological effects depending on how the constraints are ranked (Pater 1995, 1996; Hayes and Stivers, in progress).

(d) Voicing is favored in medial position, and disfavored in initial and final position, following the subglottal pressure contour (Westbury and Keating 1986).10
Plainly, the phonetics can serve here as a rich source of phonological explanation, since the typology matches the phonetic mechanisms so well. However, if we try to do this in a naive, direct way, difficulties immediately set in.
Suppose that we concoct a landscape of stop voicing difficulty (2), which encodes values for difficulty (zero = maximal ease) on an arbitrary scale for a set of phonological configurations. For simplicity, we will consider only a subset of the effects mentioned above.
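
Such a landscape can be represented schematically as a mapping from phonological configurations to difficulty values. In the sketch below, both the choice of configurations and the numbers are arbitrary placeholders for illustration; they are not the contents of (2).

```python
# A difficulty landscape as a mapping from phonological configurations
# to difficulty values (zero = maximal ease, arbitrary scale). The
# configurations and numbers are placeholders, not the values of (2).

difficulty = {
    ("voiced",    "postnasal"):       0,  # nasal venting aids voicing
    ("voiceless", "postnasal"):       4,  # aerodynamics favors voicing here
    ("voiced",    "post-obstruent"):  3,  # long effective closure
    ("voiceless", "post-obstruent"):  1,
    ("voiced",    "intervocalic"):    1,
    ("voiceless", "intervocalic"):    2,
}

# Note the asymmetry: difficulty tracks the physical mechanisms, not a
# symmetrical partition of the feature space.
for config, score in sorted(difficulty.items(), key=lambda kv: kv[1]):
    print(config, score)
```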


