post hoc, identifying the explanatorily crucial parts of past theories with aspects that have been retained in our current best theories. (For discussions, see Chang 2003, Stanford 2003a, 2003b, Elsamahi 2005, McLeish 2005, 2006, Saatsi 2005a, Lyons 2006, and Harker 2010.)

Another version of realism that adopts the strategy of selectivity is entity realism. On this view, realist commitment is based on the putative ability to causally manipulate unobservable entities (like electrons or gene sequences) to a high degree—for example, to such a degree that one is able to intervene in other phenomena so as to bring about certain effects. The greater the ability to exploit one's apparent causal knowledge of something so as to bring about (often extraordinarily precise) outcomes, the greater the warrant for belief (Hacking 1982, 1983, Cartwright 1983, ch. 5, Giere 1989, ch. 5). Belief in scientific unobservables thus described is here partnered with a degree of scepticism about scientific theories more generally, and this raises questions about whether believing in entities while withholding belief with respect to the theories that describe them is a coherent or practicable combination (Morrison 1990, Elsamahi 1994, Resnik 1994, Chakravartty 1998, Clarke 2001, Massimi 2004). Entity realism is especially compatible with and nicely facilitated by the causal theory of reference associated with Kripke (1980) and Putnam (1985/1975, ch. 12), according to which one can successfully refer to an entity despite significant or even radical changes in theoretical descriptions of its properties; this allows for stability of epistemic commitment when theories change over time. Whether the causal theory of reference can be applied successfully in this context, however, is a matter of dispute (see Hardin & Rosenberg 1983, Laudan 1984, Psillos 1999, ch. 12, and Chakravartty 2007a, pp. 52–56).

Structural realism is another view promoting selectivity, but in this case it is the natures of unobservable entities that are viewed sceptically, with realism reserved for the structure of the unobservable realm, as represented by certain relations described by our best theories. Most versions of this position fall into one of two camps: the first advances an epistemic thesis, the second an ontological one. The epistemic view holds that our best theories likely do not correctly describe the natures of unobservable entities, but do successfully describe certain relations between them. The ontic view suggests that the reason realists should aspire only to knowledge of structure is that the very concept of entities that stand in relations is metaphysically problematic—there are, in fact, no such things, or if there are such things, they are in some sense emergent from or dependent on their relations. One challenge facing the epistemic version is that of articulating a concept of structure that makes knowledge of it effectively distinct from knowledge of the natures of entities. The ontological version faces the challenge of clarifying the relevant notions of emergence and/or dependence. (On epistemic structural realism, see Worrall 1989, Psillos 1995, 2006, Votsis 2003, and Morganti 2004; regarding ontic structural realism, see French 1998, 2006, Ladyman 1998, Psillos 2001, 2006, Ladyman & Ross 2007, and Chakravartty 2007a, ch. 3).

3. Considerations Against Scientific Realism (and Responses)

3.1 The Underdetermination of Theory by Data

Lined up in opposition to the various motivations for realism presented in section 2 are a number of important antirealist arguments, all of which have pressed realists either to attempt their refutation, or to modify their realism accordingly. One of these challenges, the underdetermination of theory by data, has a storied history in twentieth-century philosophy more generally, and is often traced to the work of Duhem (1954/1906, ch. 6). In remarks concerning the confirmation of scientific hypotheses (in physics, which he contrasted with chemistry and physiology), Duhem noted that a hypothesis cannot be used to derive testable predictions in isolation; to derive predictions one also requires “auxiliary” assumptions, such as background theories, hypotheses about instruments and measurements, etc. If subsequent observation and experiment produce data that conflict with those predicted, one might think that this reflects badly on the hypothesis under test, but Duhem pointed out that given all of the assumptions required to derive predictions, it is no simple matter to identify where the error lies. Different amendments to one's overall set of beliefs regarding hypotheses and theories will be consistent with the data. A similar result is commonly associated with the later “confirmational holism” of Quine (1953), according to which experience (including, of course, that associated with scientific testing) does not confirm or disconfirm individual beliefs per se, but rather the set of one's beliefs taken as a whole. The thesis of underdetermination is thus sometimes referred to as the ‘Duhem-Quine thesis’ (see Ben-Menahem 2006 for a historical introduction).

How does this amount to a concern about realism? The argument from underdetermination proceeds as follows: let us call the relevant, overall sets of scientific belief ‘theories’; different, conflicting theories are consistent with the data; the data exhaust the evidence for belief; therefore, there is no evidential reason to believe one of these theories as opposed to another. Given that the theories differ precisely in what they say about the unobservable (their observable consequences—the data—are all shared), a challenge to realism emerges: the choice of which theory to believe is underdetermined by the data. In contemporary debate, the challenge is usually presented using slightly different terminology. Every theory, it is said, has empirically equivalent rivals—that is, rivals that agree with respect to the observable, but differ with respect to the unobservable. This then serves as the basis of a sceptical argument regarding the truth of any particular theory the realist may wish to endorse. Various forms of antirealism then suggest that hypotheses and theories involving unobservables are endorsed, not merely on the basis of evidence that may be relevant to their truth, but also on the basis of other factors that are not indicative of truth as such (see sections 3.2, and 4.2–4.4). (For modern explications, see van Fraassen 1980, ch. 3, Earman 1993, Kukla 1998, chs. 5–6, and Stanford 2001.)

The argument from underdetermination is contested in a number of ways. One might, for example, distinguish between underdetermination in practice (or at a time) and underdetermination in principle. In the former case, there is underdetermination only because the data that would support one theory or hypothesis at the expense of another are unavailable, pending foreseeable developments in experimental technique or instrumentation. Here, realism is arguably consistent with a “wait and see” attitude, though if the prospect of future discriminating evidence is poor, the promise of an eventual realist commitment may seem correspondingly doubtful. In any case, most proponents of underdetermination insist on the idea of underdetermination in principle: the idea that there are always (plausible) empirically equivalent rivals no matter what evidence may come to light. In response, some argue that the principled worry cannot be established, since what counts as data is apt to change over time with the development of new techniques and instruments, and with changes in scientific background knowledge, which alter the auxiliary assumptions required to derive observable predictions (Laudan & Leplin 1991). Such arguments rest, however, on a different conception of data than that assumed by many antirealists (defined above, in terms of human sensory capacities). (For other responses, see Okasha 2002, van Dyck 2007, and Busch 2009. Stanford 2006 proposes a historicized version of the argument from underdetermination, discussed in Chakravartty 2008 and Godfrey-Smith 2008).

3.2 Scepticism about Inference to the Best Explanation

One especially important reaction to concerns about the alleged underdetermination of theory by data gives rise to another leading antirealist argument. This reaction is to reject one of the key premises of the argument from underdetermination, viz. that evidence for belief in a theory is exhausted by the empirical data. Many realists contend that other considerations—most prominently, explanatory considerations—play an evidential role in scientific inference. If this is so, then even if one were to grant the idea that all theories have empirically equivalent rivals, this would not entail underdetermination, for the explanatory superiority of one in particular may determine a choice (Laudan 1990, Day & Botterill 2008). This is an instance of a more general form of reasoning by which ‘we infer what would, if true, provide the best explanation of [the] evidence’ (Lipton 2004/1991, p. 1). To put a realist-sounding spin on it: ‘one infers, from the premise that a given hypothesis would provide a “better” explanation for the evidence than would any other hypothesis, to the conclusion that the given hypothesis is true’ (Harman 1965, p. 89). Inference to the best explanation (as per Lipton's formulation) seems ubiquitous in scientific practice. The question of whether it can be expected to yield knowledge of the sort suggested by realism (as per Harman's formulation) is, however, a matter of dispute.

Two difficulties are immediately apparent regarding the realist aspiration to infer the truth (approximate truth, existence of entities, etc.) of hypotheses or theories judged best on explanatory grounds. The first concerns the grounds themselves. In order to judge that one theory furnishes a better explanation of some phenomenon than another, one must employ some criterion or criteria on the basis of which the judgement is made. Many have been proposed: simplicity (whether of mathematical description or in terms of the number or nature of entities, properties, or relations involved); consistency and coherence (both internally, and externally with respect to other theories and background knowledge); scope and unity (pertaining to the domain of phenomena explained); and so on. One challenge here concerns whether virtues such as these can be defined precisely enough to permit relative rankings of explanatory goodness. Another challenge concerns the multiple meanings associated with some virtues (consider, for example, mathematical versus ontological simplicity). A further concern is the possibility that such virtues may not all favour any one theory in particular. Finally, there is the question of whether these virtues should be considered evidential or epistemic, as opposed to merely pragmatic. What reason is there to think, for instance, that simplicity is an indicator of truth? Thus, the ability to rank theories with respect to their likelihood of being true may be questioned.

A second difficulty facing inference to the best explanation concerns the pools of theories regarding which judgments about relative explanatory efficacy are made. Even if scientists are, in all likelihood, reliable rankers of theories with respect to truth, this will not produce belief in a true theory (in some domain) unless that theory in particular happens to be among those considered. Otherwise, as van Fraassen (1989, p. 143) notes, one may simply end up with ‘the best of a bad lot’. Given the widespread view, even amongst realists, that many and perhaps most of our best theories are false, strictly speaking, this concern may seem especially pressing. However, in just the way that the realist strategy of selectivity (see section 2.3) may offer responses to the question of what it could mean for a theory to be close to the truth without being true simpliciter, this same strategy may offer the beginnings of a response here. That is to say, the best theory of a bad lot may nonetheless describe unobservable aspects of the world in such a way as to meet the standards of variants of realism including explanationism, entity realism, and structural realism. (For a book-length treatment of inference to the best explanation, see Lipton 2004/1991; for defences, see Lipton 1993, Day & Kincaid 1994, and Psillos 1996, 2009, part III; for critiques, see van Fraassen 1989, chs. 6–7, Ladyman, Douven, Horsten & van Fraassen 1997, and Wray 2008.)

3.3 The Pessimistic Induction

Worries about underdetermination and inference to the best explanation are generally conceptual in nature, but the so-called pessimistic induction (also called the ‘pessimistic meta-induction’, because it is an induction concerning the “ground level” inductive inferences that produce scientific theories and laws) is intended as an argument from empirical premises. If one considers the history of scientific theories in any given discipline, what one typically finds is a regular turnover of older theories in favour of newer ones, as scientific knowledge develops. From the point of view of the present, most past theories must be considered false; indeed, this will be true from the point of view of most times. Therefore, by enumerative induction (that is, generalizing from these cases), theories at any given time will themselves ultimately be replaced and regarded as false from some future perspective. Thus, current theories are also false. The general idea of the pessimistic induction has a rich pedigree. Though neither endorses the argument, Poincaré (1952/1905, p. 160), for instance, describes the seeming ‘bankruptcy of science’ given the apparently ‘ephemeral nature’ of scientific theories, which one finds ‘abandoned one after another’, and Putnam (1978, pp. 22–25) describes the challenge in terms of the failure of reference of terms for unobservables, with the consequence that theories employing them cannot be said to be true.

Contemporary discussion commonly focuses on Laudan's (1981) argument to the effect that the history of science furnishes vast evidence of empirically successful theories that were later rejected; from subsequent perspectives, their central terms for unobservables were judged not to refer, and thus the theories themselves cannot be regarded as true or even approximately true. (If one prefers to define realism in terms of scientific ontology rather than reference and truth, the worry can be rephrased in terms of the mistaken ontologies of past theories from later perspectives.) Responses to this argument generally take one of two forms, the first stemming from the qualifications to realism outlined in section 1.3, and the second from the forms of realist selectivity outlined in section 2.3—both can be understood as attempts to restrict the inductive basis of the argument in such a way as to foil the pessimistic conclusion. For example, one might contend that if only sufficiently mature and non-ad hoc theories are considered, the number whose central terms did not refer and/or which cannot be regarded as approximately true is dramatically reduced (see references, section 1.3). Or, the realist might grant that the history of science presents a record of significant referential discontinuity, but contend that, nevertheless, it also presents a record of impressive continuity regarding what is properly endorsed by realism, as recommended by explanationists, entity realists, or structural realists (see references, section 2.3). (For other responses, see Leplin 1981, McAllister 1993, Chakravartty 2007a, ch. 2, Doppelt 2007, and Nola 2008; Hardin & Rosenberg 1982, Cruse & Papineau 2002, and Papineau 2010 explore the idea that reference is irrelevant to approximate truth).

In just the way that some authors suggest that the miracle argument is an instance of fallacious reasoning—the base rate fallacy (see section 2.1)—some suggest that the pessimistic induction is likewise flawed (Lewis 2001, Lange 2002, Magnus & Callender 2004). The argument is analogous: the putative failure of reference on the part of past successful theories, or their putative lack of approximate truth, cannot be used to derive a conclusion regarding the chances that our current best theories do not refer to unobservables, or that they are not approximately true, unless one knows the base rate of non-referring or non-approximately true theories in the relevant pools. And since one cannot know this independently, the pessimistic induction is fallacious. Again, analogously, one might argue that to formalize the argument in terms of probabilities, as is required in order to invoke the base rate fallacy, is to miss the more fundamental point underlying the pessimistic induction (Saatsi 2005b). One might read the argument simply as cutting a supposed link between the empirical success of scientific theory and successful reference or approximate truth, as opposed to an enumerative induction per se. If even a few examples from the history of science demonstrate that theories can be empirically successful and yet fail to refer to the central unobservables they invoke, or fail to be what realists would regard as approximately true, this constitutes a prima facie challenge to the notion that only realism can explain the success of science.
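
The role that base rates play in this critique can be put schematically in Bayesian terms (the notation below is mine, offered purely as an illustration and not as a formalization drawn from the authors cited). Let S stand for a theory's being empirically successful, and R for its being approximately true (or for its central terms referring). Bayes' theorem gives

\[
P(\neg R \mid S) \;=\; \frac{P(S \mid \neg R)\,P(\neg R)}{P(S \mid \neg R)\,P(\neg R) + P(S \mid R)\,P(R)}.
\]

Historical cases of successful but non-referring (or non-approximately true) theories bear, at best, on how likely success is given such failure; without some independent grip on the base rates, they do not by themselves fix the probability that a given successful theory fails in this way, which is the quantity the pessimistic conclusion concerns.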

3.4 Scepticism about Approximate Truth

The regular appeal to the notion of approximate truth by realists has several motivations. The widespread use of abstraction (that is, incorporating some but not all of the relevant parameters into scientific descriptions) and idealization (distorting the nature of certain parameters) suggests that even many of our best theories and models are not strictly correct. The common realist contention that theories can be viewed as gradually converging on the truth as scientific inquiry progresses suggests that such progress is amenable to assessment or measurement in some way, if only in principle. And even for realists who are not convergentists as such, the importance of cashing out the metaphor of theories being close to the truth is pressing in the face of antirealist assertions to the effect that the metaphor is empty. The challenge of making good on the metaphor and explicating, in precise terms, what approximate truth could be is one source of scepticism about realism. Two broad strategies have emerged in response to this challenge: attempts to quantify approximate truth by formally defining the concept and the related notion of relative approximate truth; and attempts to explicate the concept informally.

The formal route was inaugurated by Popper (1972, pp. 231–236), who defined relative orderings of ‘verisimilitude’ (literally, ‘likeness to truth’) between theories in a given domain over time by means of a comparison of their true and false consequences (rendered schematically below). Miller (1974) and Tichý (1974) exposed a technical problem with this account, however: on Popper's definition, in order for theory A to have greater verisimilitude than theory B, A must be true simpliciter, which leaves unsatisfied the realist desideratum of explaining how strictly false theories can differ with respect to approximate truth (see also Oddie 1986a). Another formal account is the possible worlds approach (also called the ‘similarity’ approach), according to which the truth conditions of a theory are identified with the set of possible worlds in which it is true, and ‘truth-likeness’ is calculated by means of a function that measures the average or some other mathematical “distance” between the actual world and the worlds in that set, thereby facilitating orderings of theories with respect to truth-likeness (Tichý 1976, 1978, Oddie 1986b, Niiniluoto 1987, 1998; for critiques, see Miller 1976 and Aronson 1990). One last attempt to formalize approximate truth is the type hierarchy approach, which analyzes truth-likeness in terms of similarity relationships between nodes in tree-structured graphs of types and subtypes representing scientific concepts on the one hand, and the entities, properties, and relations in the world they putatively represent on the other (Aronson 1990, Aronson, Harré, & Way 1994, pp. 15–49; for a critique, see Psillos 1999, pp. 270–273).
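
Popper's consequence-based proposal can be rendered as follows (the notation is mine, and this is a rough reconstruction rather than Popper's own formulation). Where Ct_T(A) is the set of true consequences of a theory A and Ct_F(A) the set of its false consequences,

\[
A \text{ has greater verisimilitude than } B \quad \text{iff} \quad \mathrm{Ct}_T(B) \subseteq \mathrm{Ct}_T(A) \ \text{and} \ \mathrm{Ct}_F(A) \subseteq \mathrm{Ct}_F(B),
\]

with at least one of the two inclusions being proper. The Miller–Tichý results show that this condition cannot be satisfied when A is false: roughly, any true consequence a false theory gains over a rival brings new false consequences along with it, so the required containments cannot both hold, and the definition therefore fails to discriminate among strictly false theories.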

Less formally and perhaps more typically, realists have attempted to explicate approximate truth in qualitative terms. One common suggestion is that a theory may be considered more approximately true than one that preceded it if the earlier theory can be described as a “limiting case” of the later one. The idea of limiting cases and inter-theory relations more generally is elaborated by Post (1971; see also French & Kamminga 1993), who argues that certain heuristic principles in science yield theories that ‘conserve’ the successful parts of their predecessors. His ‘General Correspondence Principle’ states that later theories commonly account for the successes of their predecessors by ‘degenerating’ into earlier theories in domains in which the earlier ones are well confirmed. Hence, for example, the often-cited claim that certain equations in relativistic physics degenerate into the corresponding equations in classical physics in the limit of velocities that are small in comparison with the speed of light (illustrated below). The realist may then contend that later theories offer more approximately true descriptions of the relevant subject matter, and that the ways in which they do so can be illuminated in part by studying the ways in which they build on the limiting cases represented by their predecessors. (For further takes on approximate truth, see Leplin 1981, Boyd 1990, Weston 1992, Smith 1998, and Chakravartty 2010.)
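
A standard textbook illustration of the limiting-case claim just mentioned (offered here for concreteness; it is not specific to Post's presentation) is the reduction of relativistic kinetic energy to its classical counterpart. For a body of mass m moving at velocity v,

\[
E_k \;=\; (\gamma - 1)\,m c^{2}, \qquad \gamma = \left(1 - \frac{v^{2}}{c^{2}}\right)^{-1/2},
\]

and expanding in powers of v/c gives

\[
E_k \;=\; \tfrac{1}{2} m v^{2} \;+\; \tfrac{3}{8}\,\frac{m v^{4}}{c^{2}} \;+\; \cdots,
\]

so the relativistic corrections become negligible when v is small in comparison with c, and the classical formula is recovered in precisely the low-velocity domain in which classical mechanics is well confirmed.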

4. Antirealism: Foils for Scientific Realism

4.1 Empiricism

The term ‘antirealism’ (or ‘anti-realism’) encompasses any position that is opposed to realism along one or more of the dimensions canvassed in section 1.2: the metaphysical commitment to the existence of a mind-independent reality; the semantic commitment to interpret theories literally or at face value; and the epistemological commitment to regard theories as constituting knowledge of both observables and unobservables. As a result, and as one might expect, there are many different ways of being an antirealist, and many different positions qualify as antirealism. In the historical development of realism, arguably the most important strains of antirealism have been varieties of empiricism which, given their emphasis on experience as a source and subject matter of knowledge, are naturally set against the idea of knowledge of unobservables. It is possible to be an empiricist more broadly speaking in a way that is consistent with realism—for example, one might endorse the idea that knowledge of the world stems from empirical investigation, but contend that on this basis, one can justifiably infer certain things about unobservables. In the first half of the twentieth century, however, empiricism came predominantly in the form of varieties of “instrumentalism”: the view that theories are merely instruments for predicting observable phenomena or systematizing observation reports.

Traditionally, instrumentalists maintain that terms for unobservables, by themselves, have no meaning; construed literally, statements involving them are not even candidates for truth or falsity. The most influential advocates of instrumentalism were the logical empiricists (or logical positivists), including Carnap and Hempel, famously associated with the Vienna Circle group of philosophers and scientists, as well as with important contributors elsewhere. In order to rationalize the ubiquitous use of terms which might otherwise be taken to refer to unobservables in scientific discourse, they adopted a non-literal semantics according to which these terms acquire meaning by being associated with terms for observables (for example, ‘electron’ might mean ‘white streak in a cloud chamber’), or with demonstrable laboratory procedures (a view called ‘operationalism’). Insuperable difficulties with this semantics led ultimately (in large measure) to the demise of logical empiricism and the growth of realism. The contrast here is not merely in semantics and epistemology: a number of logical empiricists also held the neo-Kantian view that ontological questions “external” to the frameworks for knowledge represented by theories are likewise meaningless (the choice of a framework is made solely on pragmatic grounds), thereby rejecting the metaphysical dimension of realism (as in Carnap 1950). (Duhem 1954/1906 was influential with respect to instrumentalism; for a critique of logical empiricist semantics, see Brown 1977, ch. 3; on logical empiricism more generally, see Giere & Richardson 1997 and Richardson & Uebel 2007; on the neo-Kantian reading, see Richardson 1998 and Friedman 1999.)

Van Fraassen (1980) reinvented empiricism in the scientific context, evading many of the challenges faced by logical empiricism, by adopting a realist semantics. His position, constructive empiricism, holds that the aim of science is empirical adequacy, where ‘a theory is empirically adequate exactly if what it says about the observable things and events in the world, is true’ (p. 12; p. 64 gives a more technical definition in terms of the embedding of observable structures in scientific models). Crucially, unlike traditional instrumentalism and logical empiricism, constructive empiricism interprets theories in precisely the same manner as realism. The antirealism of the position is due entirely to its epistemology—it recommends belief in our best theories only insofar as they describe observable phenomena, and an agnostic attitude with respect to anything unobservable. The constructive empiricist thus recognizes claims about unobservables as true or false, but does not go so far as to believe or disbelieve them. In advocating a restriction of belief to the domain of the observable, the position is similar to traditional instrumentalism, and is for this reason sometimes described as a form of instrumentalism. (For elaborations of the view, see van Fraassen 1985, 2001, and the helpful study, Rosen 1994.) There are also affinities here with the idea of fictionalism, according to which things in the world are and behave
