
The Pessimistic Meta-Induction and the Exponential Growth of Science

Ludwig Fahrbach

PLEASE DO NOT QUOTE

Sections and paragraphs that are not important for the seminar are marked with two asterisks **.

I aim to defend scientific realism against the argument of pessimistic meta-induction. I start from a preliminary definition of scientific realism according to which empirical success of a scientific theory licenses an inference to its approximate truth. Pessimistic meta-induction, then, is the argument that this inference is undermined by numerous counterexamples, i.e., by theories from the history of science that were successful, but false. To counter pessimistic meta-induction I argue that the realist should refine his position by using a graded notion of empirical success. I then examine the history of science and formulate some claims about the pattern of how the degrees of success of the best theories have developed over the history of science.

I proceed as follows. First, I define scientific realism, and present the No-miracles argument. Second, I formulate a simple version of pessimistic meta-induction (simple PI, for short), and examine how it undermines the realist's position. Third, I sketch a counterstrategy for realists against this version of PI. This counterstrategy relies on the notion of degrees of success of scientific theories. It states that in the history of science degrees of success have been continuously growing, and that our current best theories have higher degrees of success than any of their predecessors. In response, anti-realists may present what I call the sophisticated PI. The sophisticated PI states that the growth of degrees of success of theories has been continually accompanied by theory changes; therefore we should extrapolate the existence of refuted theories to current degrees of success, undermining the inference to truth. In the second half of the paper, I argue that the case for the sophisticated PI has not been made. At the centre of my counterargument will be the observation that the increase in degrees of success over the history of science has by no means been uniform; quite the opposite, most of the increase occurred in very recent times.

1 Scientific Realism

1 Definition of scientific realism and definition of success

In this paper, I start from the definition of scientific realism as the position that the following inductive principle, which I call the success-to-truth principle, is correct: Empirically successful theories are probably approximately true. Another formulation of this principle that I will use is that the empirical success of a scientific theory is a good indicator of the approximate truth of the theory. Later this definition will be refined. Examples of successful theories that realists have in mind here are theories such as the atomic theory of matter, the theory of evolution, or claims about the role of viruses and bacteria in infectious diseases.

The success-to-truth principle is not meant to be a full-blown account of confirmation or induction; it is only meant to capture the common core, relevant for the realism debate as discussed in this paper, of all those inductive principles or accounts of theory confirmation, such as inference to the best explanation, hypothetico-deductivism, etc., that realists typically offer to formulate their respective variants of scientific realism.


I adhere to the following conventions. Because it plays only a minor role in this paper, I generally omit the term "approximately" in "approximately true" and simply use "true".1 Furthermore, I use the term "theory" in a rather generous sense so that it also denotes laws of nature, theoretical statements, sets of theoretical statements and even classification systems such as the Periodic Table of Elements, the reason being that realists usually want to endorse the truth of the statements involved in these things as well. For example, the term "Periodic Table of Elements" is meant to refer to the statements involved in that classification system.

We need to define the notion of empirical success. I will start here with a simple definition of success, which will be refined in some ways over the course of the paper. The definition employs entirely standard ideas of how theories are tested by observation. Thus, a theory is empirically successful (or simply successful) at some point in time, just in case its known observational consequences fit with all the data gathered until that time, i.e., the theory describes correctly, as far as scientists know at that time, all observations and experimental results gathered by scientists until that time, and there are sufficiently many such cases of fit. In other words, a theory is successful at some point in time, just in case scientists have been able to compare sufficiently many consequences of the theory with observations until that time, and all of them have turned out to be correct.

2 Comments on the definitions

I defined scientific realism as a position which endorses a certain inductive principle, so my definition of realism is a purely epistemic one. Many definitions of realism offered in the literature involve, in addition, semantic, pragmatic and other conditions, but in this paper I focus on epistemic topics.2 Put in general terms, the question at issue is how far inductive inference can take us beyond all observations that scientists have gathered so far, or, in other words, which forms of inductive inference are reliable and which are not. Realists are typically optimistic about the reach of inductive inference, and therefore endorse the success-to-truth principle or similar principles of induction. In contrast, anti-realists typically doubt in different ways and to different degrees that inductive inference can take us very far beyond observation, and therefore normally reject the success-to-truth principle and all the realist principles of inference, of which it is the common core. They maintain, for example, that it is nothing but the fallacy of affirming the consequent (e.g., Laudan 1981, Alan Musgrave 1988, 230).

There are many different forms of anti-realism, of course, but in this paper I will understand the position of anti-realism entirely negatively as the rejection of the success-to-truth principle, because I only want to discuss realism, and arguments for and against realism, and none of the different forms of anti-realism. So, anti-realists only occur as people who oppose scientific realism and offer counterarguments and other challenges to realism.3

3 The No-miracles Argument

The most important argument offered by realists to support the success-to-truth principle and similar principles is the no-miracles argument (NMA for short) (Putnam 1978, Smart 1960). The simplest version consists in pointing out the inherent plausibility of the success-to-truth principle: "Given that a theory enjoys empirical success, wouldn't it be a miracle if it nevertheless were false? Wouldn't it be a miracle if infectious diseases behaved all the time as if they were caused by viruses and bacteria, but there were no viruses and bacteria?" This argument appeals directly to our confirmational intuitions. For what follows, we need not engage in a detailed examination of the different versions of the NMA; all we need to note is that, in the end, they are all based on confirmational intuitions, which are by and large shared by most realists. Let us call those intuitions the "shared realist intuitions".4

1 Realists usually admit that a general explication of the notion of approximate truth has not yet been devised and may even be impossible to devise, but they think that our intuitive grasp of that notion and scientists' successful application of it in many specific situations suffice to justify its use for defining realism. An obstacle for devising a general explication of the notion of approximate truth is identified in Alexander Bird (2007).
2 For recent book-length treatments of the scientific realism debate see Jarrett Leplin (1997), André Kukla (1998), Stathis Psillos (1999), Ilkka Niiniluoto (1999), and Kyle Stanford (2006).
3 For a comparison of different forms of anti-realism with regard to the pessimistic meta-induction see Fahrbach (2009a, 2009b).
4 The nature of the intuitions, whether they are a priori or somehow rooted in experience, need not concern us here.


As anti-realists reject the success-to-truth principle and all the confirmational principles of which it is the common core, they also reject whatever arguments realists offer in support of these principles; that is, they reject all versions of the NMA. For example, they reject inference to the best explanation. Thus, realists and anti-realists are engaged in an ongoing dispute over the success-to-truth principle, the principles of which it is the common core, and the possibility of their justification, in which no side can convince the other side, and which goes on without any signs of a resolution. Their disagreement can be traced back to a clash of intuitions, where the "shared realist intuitions" on which the different versions of the NMA are ultimately based are rejected by anti-realists, who simply do not share them.

2 The Pessimistic Meta-Induction

1 The simple PI

But now, the anti-realist thinks she can offer an independent argument – independent of her rejection of the NMA, inference to the best explanation, the realist's confirmational intuitions, etc. – which undermines the success-to-truth principle. This argument is the simple PI. The simple PI starts from the premise that the history of science is full of theories that were once successful and accepted by scientists as true, but later refuted and abandoned. Let's assume for the time being that this premise is correct. Then these successful, but false theories constitute counterinstances to the inference from success to truth. In other words, the success-to-truth principle has had a really bad track record, which counts strongly against its being valid. This is the simple PI.5

The premise of the simple PI about the widespread occurrence of successful but false theories in the history of science requires evidence. Thus, anti-realists present long lists of examples of such theories. Larry Laudan (1981) famously presents the following list of theories, all of which were once successful, and all of which are now considered to have been refuted:

• The crystalline spheres of ancient and medieval astronomy
• The humoral theory of medicine
• The effluvial theory of static electricity
• 'Catastrophist' geology (including Noah's deluge)
• The phlogiston theory of chemistry
• The caloric theory of heat
• The vibratory theory of heat
• The vital force theories of physiology
• Electromagnetic ether
• Optical ether
• The theory of circular inertia
• Theories of spontaneous generation.

The anti-realist then argues that, even if judged from the perspective of the realist, i.e., starting from the confirmational views and intuitions of the realist (his optimism about the reach of inductive inference, the NMA, the shared realist intuitions) and disregarding the confirmational views and intuitions of the anti-realist, the success-to-truth principle has to be given up. From this perspective two arguments concerning the success-to-truth principle have to be considered, the NMA and the simple PI. The NMA supports the success-to-truth principle, the simple PI undermines it. The two arguments have to be balanced against each other. The anti-realist maintains that the result of the balancing is that the simple PI is much stronger than the NMA. Whereas the NMA is a priori and theoretical, and ultimately based on intuitions, the premise of the simple PI is based on empirical evidence from the history of science, and provides many concrete counterexamples against the inference from success to truth.6

5 Compare, among others, Psillos (1999, ch. 5), Psillos (2000), Peter Lewis (2001), Michael Devitt (2005), Kitcher (2001), ??? Lyons and Juha Saatsi (2004).


What better case against an inference than counterexamples can you provide? Hence, the success-to-truth principle's support by the NMA is trumped by its negative track record, the widespread incidence of successful, but false theories in the past. The anti-realist concludes that even if one endorses the realist's confirmational views and intuitions, one has to change one's view about the success-to-truth principle, and admit that it is undermined by the past of science.

What follows for our current successful theories, which are so dear to the realist's heart? The anti-realist can offer what he might call the No-indicators argument: Empirical success is the only promising indicator of truth in empirical science. It is neutralized by the history of science. No other indicator of truth is available to justify our belief in our current successful theories. Therefore, belief in the truth of those theories is not justified.

2 Three Counterstrategies

**The counterstrategies realists have devised to defend their position against the simple PI fall roughly into three kinds. Consider the success-to-truth principle: If a theory is successful, then it is true. Counterstrategies of the first kind restrict the consequent, i.e., restrict what can be inferred from success; counterstrategies of the second kind restrict the antecedent, i.e., restrict from what truth can be inferred; counterstrategies of the third kind are combinations of the first and second counterstrategy.

**Counterstrategies of the first kind restrict the consequent of the success-to-truth principle: They weaken it to an inference from success to some diminished form of truth, such as reference, truth restricted to some sub-domain of the original domain of the theory, or partial truth (truth about structure, about classification, or about other parts that contributed to the success of the theory). To defend the inference to diminished truth, proponents of this counterstrategy aim to show that the theory changes of the past were mostly not as deep as might have seemed at first sight, that the successful refuted theories of the past, although strictly speaking false, were – judged from today – not entirely false, but had terms that referred, were approximately true in restricted domains, had parts that were true, etc. In pursuit of such strategies, realists have developed elaborate accounts of reference, partial truth, and so on.7 If any of these accounts work, the position of the realist (the success-to-truth principle) is weakened somewhat, but we can still infer some diminished form of truth for our current successful theories. I will later (mis-)use this kind of strategy in my defence of the "shared realist intuitions".

**The counterstrategies of the second kind restrict what truth can be inferred from by working on the notion of empirical success. The simplest version consists in noting that several theories on Laudan's list enjoyed very little success. For example, Kitcher writes "Laudan's list includes such things as the humoral theory of medicine, catastrophist geology, and theories of spontaneous generation. In none of these examples does it seem right to hail the theory as successful…".8 This reduces the inductive base of the simple PI somewhat, and therefore weakens it somewhat, but it leaves many counterexamples untouched, so this reply cannot help much. Other versions of the second counterstrategy, also meant to decrease the number of counterexamples, consist in raising the standards for counting a theory as successful. The most prominent version of this strategy relies on the demand that a theory only count as successful if it has made novel predictions. Unfortunately, there remain important cases of refuted theories which produced true novel predictions before being refuted, such as theories of light and the caloric theory of heat (see Lyons). What is more, the value of novel predictions is quite contested (Hempel, Hull). Therefore, this attempt also succeeds only partially, at best.

**Finally, counterstrategies of the third kind combine the first and the second counterstrategy. For example, they refine the success-to-truth principle so that it asserts that an increase in success leads to an increase in truth-likeness. This is the popular idea that increase in success leads to convergence to truth.

6 See also Ladyman/Ross 200??, p. 83.
7 There is a lot of literature on these approaches. See, for example, McMullin (1984, p. 18), Kitcher (1993), Leplin (1997), Psillos (1999), John Worrall (19??), Martin Carrier (2004), Gerhard Schurz (2004, 2009), Nola, Theo Kuipers, Ladyman/Ross (2007??), and so on.
8 Kitcher (2001, footnote 31); likewise McMullin (1984??), McAllister (1993), Kukla (1998), Psillos (1999, Ch. 5), Devitt (2005, 772).


In this paper, I want to develop a version of the second counterstrategy. It relies on grading the notion of success. Let us begin with the following simple, standard account of theory testing. In order to test a theory, scientists derive observable consequences, i.e., predictions, from the theory and make observations. A particular test of a theory consists in comparing some prediction of the theory with some observation. I use the notion of a test in a rather broad sense here to cover all cases in which scientists are aware of the theory coming into contact with experience so that it is possible for the theory to fail.9 Also, I use the term "prediction of a theory" in a broad sense to denote any observable consequence of the theory scientists are aware of.10 The reason to do so will become apparent later. If prediction and observation agree, the theory enjoys some measure of success. As a theory passes more and more tests, its degree of success increases. Hence, different theories can differ with respect to the degree of success they enjoy at some point in time, and the same theory can enjoy different levels of success at different points in time. As we will see later, these differences can be quite large.

If prediction and observation don't agree, the theory suffers from an anomaly. As long as the anomaly is not significant or the anomalies do not accumulate, they can be tolerated. If the anomaly is significant or the anomalies do accumulate, the theory counts as refuted. In that case scientists have to look for alternative theories, and a theory change may take place. Of course, this account of theory testing and empirical success is rather minimal in several respects and could be made more precise in many ways, but it is all we will need at the moment. Later on, I will refine it further in several respects.
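As a toy formalization of this minimal account, the following sketch may help fix ideas. It is illustrative only: the counting of passed tests and the anomaly threshold are choices I make for the sketch, not part of the account itself.

```python
# Minimal bookkeeping model of graded success: a theory's degree of success
# is the number of tests it has passed; a significant anomaly, or too many
# accumulated anomalies, refutes it.

class Theory:
    ANOMALY_LIMIT = 5  # illustrative threshold for "anomalies pile up"

    def __init__(self, name):
        self.name = name
        self.degree_of_success = 0  # number of passed tests so far
        self.anomalies = 0          # tolerated, non-significant anomalies
        self.refuted = False

    def test(self, prediction_agrees, significant=False):
        """Compare one prediction of the theory with one observation."""
        if self.refuted:
            return
        if prediction_agrees:
            self.degree_of_success += 1
        elif significant or self.anomalies + 1 >= self.ANOMALY_LIMIT:
            self.refuted = True
        else:
            self.anomalies += 1  # tolerated for now
```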

3 The modified success-to-truth inference

Given the grading of success, it is very plausible to assert that the degrees of success of the theories accepted by scientists in the history of science have by and large grown steadily over time. They have grown both during theory changes and between theory changes. During theory changes, successor theories have usually incorporated the successes of their predecessors, and were moreover successful where the predecessors failed, for the simple reason that they have typically been designed to take over the successes of their predecessors and to avoid the failures of their predecessors. As for the times in between theory changes, theories have usually become more successful while they were accepted (before possibly significant anomalies showed up or anomalies began to pile up), because the amount and diversity of data, the precision of measurements, the precision of the predictions, etc., have been growing all the time in the history of science.

Realists can then offer the following idea of a counterargument to the simple PI: Because degrees of success have increased during the history of science, our current best, i.e., most successful, theories enjoy higher degrees of success than past theories; therefore, we are warranted to infer their probable truth; in contrast, theories of the past were not sufficiently successful to warrant an inference to their probable truth (as the theory changes of the past show). Hence, according to this idea realists modify the success-to-truth principle so that it allows the inference to truth for current levels of success only. This idea to counter the simple PI belongs to the counterstrategies of the second kind in that it does something with the notion of success. To support the modified success-to-truth principle realists invoke, as before, the NMA (in whatever version), although modified so that it only supports the revised success-to-truth principle. Remember that we are only concerned with defending and restoring the realist position; hence we assume that realists are allowed to call upon their confirmational views and intuitions, and disregard the intuitions of anti-realists. The "shared realist intuitions", then, feed into the modified version of the NMA so that it supports the modified position that only current levels of success indicate truth. If this counterargument works, the realist position is not incompatible with the history of science, and is saved from the attack by the simple PI.11 As we will see, this idea is on the right track. But it is seriously underdeveloped and certainly looks rather ad hoc so far. It is open to objections from the anti-realist. Let us look at the objections.

9 Kitcher's (2001) notion of success includes practical success in our dealings with the world to reach our practical goals. I don't include practical success in my definition, because to every case of practical success which somehow depends on a theory corresponds a true consequence of the theory, so that my notion of success captures all practical successes of theories as well. The practical success of science and technology, especially over the last 100-200 years, is nevertheless a good indicator of success in my sense, and could be used alongside the other indicators of success I discuss below.
10 One kind of observational consequence has to be excluded, though, namely irrelevant consequences of a theory which, if true, offer no support for the theory at all. For example, T implies T∨O, where O is any true observation sentence, but T is usually not confirmed by T∨O. For further discussion of this problem and an account of irrelevant consequences see Schurz (1991).



4 Three objections

**The anti-realist can raise three objections. The first objection reiterates the argument from earlier that the realist applied the NMA to past successful theories, some of which were refuted, showing that the "shared realist intuitions" on which it is based are not trustworthy. Hence, the realist can no longer use them to support anything at all, specifically not the inference to truth for current levels of success. The realist may respond that a realist in 1800 or 1900, for example, should not have made the inference to truth and should not have supported it by the NMA, because, if he had really possessed shared realist intuitions and had carefully attended to them, he would have recognized that those levels of success were actually not high enough to indicate truth; if we listen carefully to shared realist intuitions, we recognize that they tell us that only current levels of success suffice to indicate truth.

This response of my realist is certainly not satisfactory. We may wonder whether, instead of past realists not carefully attending to shared realist intuitions, it is my realist who is not carefully attending to them and only hears from them what he wants to hear, namely that they differentially support the success-to-truth inference for present, but not for past levels of success. In any case, the realist intended the shared realist intuitions to accord with the intuitions of the working scientists. But the scientists themselves accepted the theories of the past that were later refuted. Therefore, the shared realist intuitions are such that they did back the inference to truth for past levels of success. The realist cannot simply change his intuitions as it suits him. Thus, he is still faced with a conflict between the conclusion of the NMA, applied to the theories of the past, and the historical track record, which still throws the trustworthiness of the shared realist intuitions into doubt.

For the second objection the anti-realist grants that the first objection can be met somehow, that the shared realist intuitions are not discredited by their application via the NMA to past refuted theories. The second objection starts with the observation that in his defence the realist uses a simple distinction between past and present science. This distinction is based on a distorted view of the relative weights of past, present and future times. Science has been a human endeavor for at least some centuries now, and will probably go on for some more centuries (or so we can assume). The present is only a small part of this whole time period. The objection then is that the realist's preference for the present looks entirely arbitrary and ad hoc. Why should we believe that it is precisely now, instead of at some future time, some decades or centuries from now, that a level of success sufficient for truth is reached? The realist invokes the NMA, but that only pushes the problem back one step: If it is granted that the NMA does not apply to past levels of success, why does it support the inference from success to truth for current theories, and not just for theories of some future time? What is special about the present levels of success? The anti-realist submits that my realist's whole defence looks as if he adjusted his position a bit too flexibly to the history of science.

5 The sophisticated PI

The third objection of the anti-realist is as follows. The anti-realist objects that a realist in 1900, for example, could have reasoned in exactly the same way as my realist just did: "Success of scientific theories has generally been growing until now. All our current best theories enjoy higher degrees of success today than any of the refuted theories of the past. Consequently, we can infer the truth of our current best theories from their success (where that inference is supported by the NMA), whereas their predecessors' lower degrees of success in the past didn't suffice for an inference to their truth." This reasoning of a realist of 1900 would have been futile, says the anti-realist, because many of the best theories of 1900 were refuted after 1900. In general, at any point in the history of science, whether 1700 or 1800 or 1900, realists could have reasoned in exactly the same way as the realist just did (namely that the levels of success of their respective successful theories had risen and now sufficed to indicate truth), but all these realists would have been refuted by later theory changes. The anti-realist asks: What is different about current levels of success that they – all at once – suffice to indicate truth, whereas past levels of success did not suffice to indicate truth?

11 Leplin (1997, p. 141), Kyle Stanford (BJPS??, book??), Psillos and Gerald Doppelt (2007), among others, mention or discuss variants of this argument.


This rhetorical question means, of course, that nothing is different today, and that we should expect that our current best theories will suffer the same fate that befell the theories of the past.

The realist will immediately reply that there does exist a relevant difference between past and present, namely precisely the degrees of success of the respective best theories; they have increased between the past and the present. So, the epistemic situation of a realist in 1900 and the epistemic situation of a contemporary realist are actually not similar. To this the anti-realist replies that the very same dissimilarity, namely a difference in degrees of success, also existed for a realist in 1900, namely with respect to a realist in 1800, say, but after 1900 refutations did occur, showing that the difference did not save the realist of 1900 from being refuted. So, what is going on here? I take it that the third objection is meant to be the following piece of reasoning: In the history of science, degrees of success have been growing continuously for several centuries, and all the time, right up to the present, while they have been growing, theories enjoying those degrees of success kept being refuted. We should extrapolate the incidence of false theories from past levels of success to current levels of success.12 Such an extrapolation along degrees of success does justice to what is similar and what is not similar between the epistemic situations of the different times. It supports that many of our current best theories will also be refuted at some future time. It is another version of the PI. I will call it "the sophisticated PI". The sophisticated PI is an argument in support of the claim that there exist many counterexamples to the inference from success to truth for current levels of success, and thereby undermines that inference. If it is correct, the defence of the realist presented above does not work.13

**Like the second objection, the third objection is independent of the first objection; hence the third objection may work even if the first does not. (I will later provide an independent reply to the first objection.) Thus, for the third objection it can be granted that the NMA still has force. This means that we have to balance two arguments once again, in this case the sophisticated PI with the NMA. The result of the balancing is less clear than in the first case of balancing, the simple PI with the NMA, but I will simply assume that the sophisticated PI trumps the NMA in this case as well. An additional reason to do so is that the sophisticated PI can be strengthened somewhat. It is not plausible that the increase in levels of success has been uniform for all successful theories across all scientific fields. Instead, it is quite clear that among our current best theories those of some fields have reached higher levels of success earlier than those of other fields, and have differing levels of success today. In general, the distribution of levels of success across scientific fields at all points in time is far from uniform, even if we only consider the best theories of some time. Therefore, the anti-realist could claim that it is plausible that some of the refuted theories already enjoyed degrees of success that are typical of our current best theories; these cases can be used to undermine the success-to-truth inference for current levels of success directly, without invoking an extrapolation. This strengthens the third objection. Anyway, I will ignore these complications (balancing and non-uniformity) from now on.

It is clear that the arguments and objections presented so far assume quite some degree of simplification of the history of science. However, this makes the arguments and objections clearer, and it can, as we will now see, be tolerated. I will now tackle the third objection. This will be the main part of the paper. It will also provide a response to the second objection. Afterwards I will tackle the first objection.

12 See also Stanford (2006, Ch. 1.2), Gerald Doppelt (2007), Bird (2007, Section 4.1).
13 A similar extrapolation threatens to refute a counterargument by Devitt against the PI. Devitt wants to argue against the PI by invoking the improvement of scientific methods (see Devitt 1991, p. 163, and Psillos 1999, p. 104). His suggestion can be challenged by an argument similar to the sophisticated PI: In the last few centuries, scientific methods have improved all the time, and while they have improved, the theories accepted by scientists kept being refuted; therefore, we should extrapolate this pattern of failures to current levels of quality of scientific methods.


3 The Growth of Success

1 Problems for the sophisticated PI

How good an argument against the success-to-truth inference is the sophisticated PI? It has the premise that "all the time right to the present theories have been refuted". So far this premise is just a claim. What is its support from the history of science? The anti-realist will probably assert that it is supported by the same data from the history of science that supported the simple PI, namely the refuted theories on Laudan's list and other such examples offered in the philosophical literature. But is that really so? Let us have another look at those examples. If we do so, we notice that all the theories on Laudan's list are actually rather old, namely more than 100 years old. The same holds for practically all examples of theory changes offered in the philosophical literature.14 Kyle Stanford (2006) extensively discusses three further examples, but they are older than 100 years as well. So far anti-realists have not shown that theory changes occurred "right to the present", i.e., they have not offered sufficient support for the sophisticated PI so far.

Anti-realists could react by trying to show that despite the difference in time between the refuted theories and our current best theories, the difference in degrees of success is not that big, so it is still true that we should extrapolate the occurrence of false theories to current levels of success. At this point I could stop. The burden of proof is clearly on the side of anti-realists to show that this reply can be made to work. As long as they have not shown this, there is no attack on realism.15 But I won't stop here. Instead, I want to show that the prospects for anti-realists to succeed with this task are not good at all. To do so I will now aim to compare the degrees of success of the refuted theories with those of our current best theories.

In preparation for the comparison, let me present some further examples of what I consider to be our current best theories. Here is a list meant to be representative of our current best theories (remember that the realist endorses the approximate truth of those theories):

the Periodic Table of Elements16
the theory of evolution17
"Stars are like our sun."
the conservation of mass-energy
the germ theory of disease
the kinetic gas theory
"All organisms on Earth consist of cells."
E = mc²
"The oceans of the Earth have large-scale systems of rotating ocean currents."
And so on18

The list does not include any theories which are simple generalizations from just one kind of observation, such as "all pieces of copper expand when heated". The abandonment of such low-level empirical generalizations is sometimes called "Kuhn-loss". There seems to be wide agreement that Kuhn-losses are rare in the history of science, i.e., such low-level empirical generalizations have rarely been abandoned. Therefore, we need not take them into account. Consequently, all theories on my list are more than just low-level empirical generalizations: They have a certain measure of unifying power (generalize over several distinct domains), or are about unobservables, or are about facts that are otherwise not easily accessible, etc.

14 As is often the case in philosophy, the same examples are discussed again and again. Here the most discussed examples by far are the cases of phlogiston, caloric, and ether, all of which are older than 100 years. Examples that are also discussed a lot are the cases of Newtonian mechanics, QM and GTR, but I want to exclude them from my discussion, because theories in fundamental physics seem to me to represent a special case with respect to the success-to-truth inference, and should therefore be examined separately.
15 The anti-realist may also try to search for examples of refuted theories which enjoyed degrees of success typical of our current best theories from the recent history of science. I will later examine this second possible reply by the anti-realist.
16 As remarked at the beginning of the paper, I use the term "theory" in a rather broad sense.
17 The theory of evolution may be taken to consist of two claims: that all organisms on earth are related by common ancestors, and that natural selection is an important force for change.
18 The list does not contain any theories from fundamental physics such as Quantum Theory and the Special Theory of Relativity, because, as just remarked, I think that theories from fundamental physics represent a special case.



In order to compare the degrees of success of current best theories with the successful, but refuted theories of the past, I want to employ five "indicators of success". I call them "indicators of success" because they are positively correlated with the degrees of success of theories. The first indicator is the amount of scientific work done by scientists until some time. The second, third, and fourth indicators are the amount, diversity and precision of scientific data and observations gathered by scientists until some time. The fifth indicator is the amount of computing power available to scientists at some time. I will examine the growth of these indicators over the history of science. From their growth I will draw an inference to the degrees of success of both the refuted theories of the past and our current best theories. This inference will constitute the main argument of this paper. Before turning to discuss the indicators, I have to warn the reader: he will now be showered with a great many figures.

2 First indicator: the exponential growth of scientific work

The first indicator of success I want to examine is the amount of scientific work done by scientists in some period of time. Here, "scientific work" means such things as making observations, performing experiments, constructing and testing theories, etc. The amount of scientific work done by scientists in some period of time can be measured with the help of various quantities. Two such quantities are especially pertinent: the number of journal articles published in the respective period of time and the number of scientists working in the respective period of time. Because we are only interested in very rough estimates of the amount of scientific work done in different periods of time, both quantities are plausible ways to measure overall scientific work during those times.19 Consider the number of journal articles published by scientists every year. Over the last few centuries, this number has grown in an exponential manner. The doubling rate of the number of journal articles published every year has been 15-20 years over the last 300 years. Before 1700, there were, of course, even fewer publications per year than at any time after 1700. Roughly the same growth rates hold for the number of scientists over the last 300 years (although we can assert this with less confidence for times before the 20th century, because for those times not much good data is available).20

Figure 1. The time-line weighted in such a way that the length of any interval is proportional to the amount of scientific work done in that interval.

Growing with a doubling rate of 15-20 years is a very strong sort of growth. It means that half of all scientific work ever done was done in the last 15-20 years, while the other half was done in all the time before; and three quarters of all scientific work ever done was done in the last 30-40 years, while one quarter was done in all the time before. Figure 1 provides an idea of this sort of growth.
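To make the arithmetic behind these figures explicit, here is the standard calculation (a sketch, assuming a constant doubling time T, as cited above):

```latex
% Cumulative scientific work W(t) under exponential growth with doubling time T.
\[
  W(t) = W_0 \, 2^{t/T}
  \quad\Longrightarrow\quad
  \frac{W(t) - W(t-\Delta)}{W(t)} \;=\; 1 - 2^{-\Delta/T}.
\]
% For T = 20 years: \Delta = 20 gives 1 - 1/2 = 1/2 of all work ever done,
% and \Delta = 40 gives 1 - 1/4 = 3/4, matching the halves and quarters above.
```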

Scientific work is connected with the success of theories in the following way. Our definition of degrees of success implies that a theory gains in degree of success if it passes more tests, where a test of a theory occurs whenever scientists compare a consequence of the theory with some piece of data. Hence, in the testing of theories two kinds of scientific activities are involved: gathering data and deriving consequences from theories. Scientists need to gather data, i.e., make observations and perform experiments. This is certainly an important kind of scientific work, for which scientists often have to invest considerable effort and time. But in addition, scientists need to make calculations and derivations to arrive at consequences of the theories they want to test. The latter is often not easy to do either.

19 There are also other ways of measuring scientific work, such as government and industry expenditures on research, the number of scientific journals (rather than scientific journal articles), the number of universities and the number of doctorates. Where data is available, it shows that all these ways of measuring the amount of scientific work yield essentially the same results.
20 For more details and references see Fahrbach (2009a).



In most physical sciences, and increasingly in many other natural sciences, theories are formulated using mathematical equations, for example differential equations; therefore scientists can only arrive at predictions from the theories by solving those equations. But developing and applying such methods is often hard work and may require a great effort on the part of the scientists.

The increase in the amount of scientific work done over some time is then linked with the increase in degrees of success of theories over that time via three steps: (1) the increase in the amount of scientific work leads to an increase in both the amount and the quality of observations and calculations. (2) An increase in the amount and quality of observations and calculations leads to an increase in both the number and quality of tests of theories. (3) If the number and quality of tests of theories increase, and the tests are passed, then the degrees of success of the tested theories increase. See Figure 2.

More scientific work → more and better observations/calculations → more and better tests → more success

Figure 2.

3 First part of main argument

Using the link between scientific work and degrees of success, we can now formulate the first part of the main argument (where the main argument is the inference from the indicators of success to the degrees of success of the two classes of theories we want to compare). It's only the first part, because it relies only on the first indicator of success. The rest of the main argument will then be developed over the next few sections.

The first part of the main argument proceeds as follows. As we just saw, by far most of the growth of scientific work has occurred in the recent past, i.e., in the last 50 to 80 years. Our current best theories have profited from this increase via the three steps: The increase in scientific work has meant a huge increase in relevant observations and computations; this has resulted in a huge increase in tests of those theories; that all these theories have been approximately stable in the last 50 years (and often much longer) shows that practically all these tests were passed; finally, the high number of passed tests has resulted in a huge increase in degrees of success for these theories. So, our current best theories have received a big boost in degrees of success in the recent past, an increase that is far greater than any increase in success for any theories of earlier times. In contrast, the refuted theories discussed by philosophers were all held prior to the big boost of success, and therefore could not partake in it. Therefore, their degrees of success were quite modest. Figure 1 can be used to represent the big boost of success by reinterpreting the x-axis as depicting the degrees of success of the respective best theories at different points in time. The conclusion of the main argument is the main thesis of this paper: Our current best theories enjoy far higher degrees of success than any of the successful, but refuted theories of the past, which enjoyed only quite modest degrees of success. The main thesis provides us with the sought comparison between current best and refuted past theories.
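A toy calculation may help to see how strongly the recent past dominates on these assumptions. The sketch below is illustrative only: the 20-year doubling time, the 1850 acceptance date, and the proportionality of tests to scientific work are assumptions made for the sketch, not historical data.

```python
# Toy model: the number of (passed) tests of a stable theory per year is
# assumed proportional to the amount of scientific work done in that year,
# which grows exponentially with an assumed doubling time of 20 years.

DOUBLING_TIME = 20.0  # years (assumption, within the 15-20 year range above)

def work(year, ref_year=2010):
    """Relative amount of scientific work done in the given year."""
    return 2.0 ** ((year - ref_year) / DOUBLING_TIME)

# A hypothetical theory accepted in 1850 and still stable in 2010.
years = range(1850, 2011)
total_tests = sum(work(y) for y in years)

for start in (1960, 1990):
    recent_tests = sum(work(y) for y in range(start, 2011))
    print(f"share of all passed tests since {start}: {recent_tests / total_tests:.0%}")

# Approximate output:
#   share of all passed tests since 1960: 83%
#   share of all passed tests since 1990: 52%
```

On these assumptions, most of such a theory's degree of success accrues after 1960; this is the "big boost" in numerical form.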

An objection to the first part of the main argument may be that in many scientific fields more scientific work has obviously not led to more successful theories. There are many domains of reality, such as the economy and human society, where a lot of effort by scientists investigating those domains has led to quite moderately successful theories at best. The same holds for many theories in the natural sciences, such as theories about the origin of life on earth, the future weather or the fundamental constitution of everything. Generally speaking, there are, quite obviously, still a lot of large gaps in our knowledge of the world, despite the exponential growth of scientific work. Indeed, most hypotheses considered at the "research frontier" of science are further examples. So, the two quantities, scientific work and degree of success, seem to be fairly independent of each other.

In reply it has to be conceded that the relationship between scientific work and success is not as straightforward as the three steps may suggest. Instead, it is a complex and contingent one and varies with the part of reality scientists are dealing with. Certainly, an increase in the amount of scientific activity in a scientific field does not automatically lead to an increase in the empirical success of the theories in that field.


However, it is quite implausible that the enormous increase in scientific work had no or little effect on the degrees of success of all theories of all scientific fields. It would be very surprising if no theory profited substantially from all the efforts of scientists in the last few decades and centuries. And it is, of course, the theories that did profit that realists want to focus on. And these theories are precisely our current best theories.

Still, the first part of the main argument relies solely on the connection between the amount of scientific work and degrees of success, and this connection is of a rather indirect sort, as just conceded. We need to develop the main argument further. I will do so by first looking at what I call core theories. This will serve to further develop the first part of the main argument. Afterwards I look at the growth of the other four indicators of success. This will serve to further develop the rest of the main argument.

4 Core theories, weak confirmation, and confirmation en passant

In order to further develop the first part of the main argument we need a couple of further notions. For one kind of theory it is especially plausible that they profited from the huge increase in scientific work, namely those of our current best theories that are highly unifying in the sense that they are involved in much of what is going on in their respective scientific disciplines. For example, the Periodic Table of Elements plays a role in practically everything chemists do and think. Also, the theory of evolution is involved in much of what is going on in biology.21 A third example: the conservation of mass-energy plays a role in very many situations in physics, chemistry, engineering, etc. Finally, plate tectonics is connected with or explains much of the topography of the Earth (location of mountains, shape of seafloor, etc.), locations of earthquakes and volcanoes, magnetic stripes on the seafloor, etc. I call theories that are highly unifying in this sense "core theories" of their respective scientific discipline, and every situation in which a core theory plays a role an "application" of the core theory.

Even though many of the applications of a core theory don't constitute tests of the core theory, a large number of them do. However, almost all of the latter applications are not "severe" tests of the core theory in any way, but only tests of a weak or moderate kind. Based on the core theory, scientists have expectations about the application (about observations and outcomes of experiments), but these expectations are typically not of a strong sort. For example, on the basis of the Periodic Table of Elements chemists expect that every new substance has a chemical structure in terms of the 92 elements, and that every chemical reaction is describable in terms of the 92 elements. Likewise, based on the theory of evolution, biologists form expectations about the features of organisms, fossils, genomes, species, etc. Let's call such tests of a theory "gentle tests" of the theory. If the theory fulfils the respective expectations, i.e., passes the gentle test, it only receives a small or moderate increase in its success. If it fails a gentle test, it only suffers from a non-significant anomaly, i.e., an anomaly which only means some kind of trouble for the theory, which, in Bayesian terms, may lower its probability somewhat, but which usually does not lead to its refutation, or only leads to its refutation if the theory already suffers from a lot of other anomalies. Thus, as long as theory and observation come into contact in the application somehow, so that it is possible for the application to become an anomaly for the theory that scientists can notice22, the application counts as a gentle test of the theory.
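The cumulative force of many gentle tests can be made explicit in the Bayesian terms just used. This is a sketch: the tests are treated as independent, and the likelihood ratio of 1.01 per test is purely illustrative.

```latex
% Odds form of Bayes' theorem for n passed gentle tests E_1, ..., E_n,
% treated as independent.
\[
  O(T \mid E_1, \dots, E_n)
  \;=\;
  O(T) \cdot \prod_{i=1}^{n} \frac{P(E_i \mid T)}{P(E_i \mid \neg T)} .
\]
% Even individually weak tests accumulate: if each likelihood ratio is a
% modest 1.01, then n = 1000 passed gentle tests raise the odds on T by a
% factor of 1.01^{1000} \approx 2.1 \times 10^{4}.
```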

The term "prediction of a theory" will also be used in a rather broad sense. It will denote any observable statement which scientists arrive at by deriving it from the theory. This includes any expectations scientists form on the basis of the theory. In particular, this includes two things. First, predictions need not be "novel" in the sense of Worrall (1989), i.e., need not be about new kinds of phenomena, but rather include all consequences derived from the theory, even if they are about the same kind of observations as the observable statements used in the theory's construction. Second, if the derivation from the theory is only approximate, then the prediction is the result of this approximate derivation. In that case the prediction is not strictly a logical consequence of the theory.23

21 As Theodosius Dobzhansky famously said (exaggerating somewhat), "Nothing in biology makes sense except in the light of evolution".
22 Kuhn thought that scientists engaged in normal science don't test their paradigm, and when anomalies arise usually don't blame the paradigm. Hence, he depicts scientists engaged in normal science as uncritical and close-minded. I think this is a misrepresentation. Anyway, according to Hoyningen-Huene, Kuhn also thought that scientists "trained in normal science … are … extraordinarily suited to diagnosing" anomalies of theories (1993, p. 227).



When scientists consider a core theory to be well-confirmed and accept it, they usually don't devote their time and energy to further testing of the theory. Therefore, when an application of a core theory is a test of it, testing it is usually not the scientist's main aim in the scientific project of which the application is a part, and mostly not even one of her aims for that project at all. Instead, she may pursue other aims in the project, such as explaining some phenomenon with the help of the theory, filling in details in the theory, or testing some other, entirely different theories. For example, when scientists study malaria bacteria they don't intend to test once again that malaria is caused by bacteria; nevertheless, a lot of studies about the (rather complicated) behaviour of the malaria bacterium are also (usually rather weak) tests of the germ theory of disease for the case of malaria. Likewise, when palaeontologists look for fossils nowadays, they generally don't have the aim of testing the theory of evolution once more; nevertheless, any fossil they find is also a (usually very weak) test of the theory of evolution, because some of its features may turn out to contradict or undermine the theory of evolution, e.g., may suggest really big jumps in the fossil record, which would not accord well with the by-and-large gradual nature of evolution. When an application of a theory is a test of the theory, but testing the theory is not among the main purposes of the scientists in applying the theory, and the test is passed, I will call the resulting kind of confirmation of the theory "confirmation en passant".

5 The main argument again

We can now develop the first part of the main argument further, as follows. Consider those theories among our current best theories that are core theories in their respective scientific disciplines. Most of their applications have only been gentle tests, so in most of them they have received only a small or moderate increase in degree of success. However, as the very strong growth of scientific work shows, the number of applications has increased enormously. In almost all of them the expectations were met, and the gentle tests were passed. Some applications may not have been successful, but in those cases there was almost never a reason to blame the core theory; otherwise it would not have been as stable as we observe it to have been. Hence, the small or moderate successes have accumulated into a very large overall increase in the degree of success of the core theory. In this way, weaker kinds of confirmation of core theories have contributed strongly to the increase in their degrees of success. I will now present an example of this reasoning, and then strengthen the main argument by examining the other four indicators of success.

The core theory of chemistry is the Periodic Table of Elements. As in the rest of science, manpower and publications in chemistry have risen exponentially. Joachim Schummer observes that “[o]nly during the past 15 years [i.e., until 1999] we saw more chemistry publications than had been written ever before… [T]his year chemists will publish a hundred times as many papers as in 1901, when van’t Hoff received the first chemistry Nobel prize.” (1999) Schummer shows that during the past 200 years this growth of scientific work in chemistry has meant that the number of newly discovered or produced chemical substances has risen exponentially with a doubling time of around 13 years: “During the whole period the total curve corresponds quite well to a stable exponential growth … with an annual rate of 5.5% and doubling time of 12.9 years.” (1997, p. 111)[24]
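As a consistency check on Schummer’s two figures (the arithmetic here is mine, not his): an annual growth rate of 5.5% corresponds to a doubling time of

\[
T_2 \;=\; \frac{\ln 2}{\ln 1.055} \;\approx\; \frac{0.693}{0.0535} \;\approx\; 12.9 \text{ years},
\]

which is exactly the value he reports.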

As remarked above, the Periodic Table of Elements implies constraints on, for example, the features of any chemical substance and on what can happen in any chemical reaction. Hence, every new substance provides an occasion for a gentle test of the Periodic Table of Elements. That the Periodic Table of Elements has been entirely stable for many decades shows that the gentle tests have always been passed. Mostly they have provided only a weak increase in degree of success. But because the number of such gentle tests has been huge – as witnessed by the number of new substances that have been found or produced over the last 150 years, all of them different from each other – the overall increase in the degree of success of the Periodic Table of Elements has been huge. Note that nowadays, and in the more recent past, testing the Periodic Table of Elements has clearly not been a purpose of chemists in any of these applications, as chemists have accepted it for a long time now; these applications are thus all cases of confirmation en passant.

23 Of course, a theory rarely implies an observational statement all by itself, but needs boundary conditions, auxiliary theories, etc., to do so. I ignore such complications here.
24 A doubling time of 12.9 years corresponds to a higher growth rate than that of scientific manpower or journal articles. Schummer provides a plausible explanation for this discrepancy. He observes that the number of reported new substances per article, and also the number of reported new substances per chemist, has been growing over time, i.e., the productivity of chemists has increased over time. (Schummer 1997b, p. 118)



4 The other four indicators of success

1 Amount of data

The second indicator of success is the amount of data gathered by scientists up to a given time. Here we observe that in many scientific disciplines the amount of data has grown at a very strong rate. First, in some disciplines, such as palaeontology or chemistry, it is often still the scientists themselves who gather or produce the data, e.g., by searching for fossils or synthesizing chemical substances. For such kinds of data it is often plausible that the amount of data gathered or produced has grown very roughly in proportion to the number of scientists in the field. As we saw earlier, manpower has increased with a doubling time of 15-20 years over the last few centuries. A doubling time of 20 years means that today there are around 30 times as many scientists as in 1900, around 1,000 times as many as in 1800, and around 30,000 times as many as in 1700.[25] Therefore, in such disciplines the amount of data has often risen in a similar fashion. For example, Figure 3 depicts the growth of the number of a certain type of fossil over the last two centuries, and this kind of growth is entirely typical of the growth of the fossil record in general.[26]
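The general relationship behind these factors, spelled out (the notation is introduced here only for illustration): if manpower grows with doubling time \(T_2\), then after an interval \(t\) it has grown by

\[
\frac{N(t)}{N_0} \;=\; 2^{\,t/T_2}, \qquad\text{e.g.}\quad 2^{100/20} = 2^{5} = 32 \approx 30, \quad 2^{200/20} = 2^{10} = 1024 \approx 1{,}000, \quad 2^{300/20} = 2^{15} \approx 30{,}000.
\]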

Secondly, and more importantly, in many scientific disciplines the growth of the amount of data has been far stronger than the growth of scientific manpower, due to better instruments and computer technology.[27] In many disciplines data are nowadays gathered automatically.[28] During the last six years the Sloan Digital Sky Survey, the “most ambitious astronomical survey project ever undertaken … measured precise brightnesses and positions for hundreds of millions of galaxies, stars and quasars…. it mapped in detail one-quarter of the entire sky. … The total quantity of information produced, about 15 terabytes (trillion bytes), rivals the information content of the Library of Congress.”[29] By comparison, the most ambitious such project at the beginning of the 20th century, a survey of the sky conducted at Harvard and completed in 1908, measured and cataloged the brightnesses and positions of 45,000 stars.[30] The future holds even more in store: the “Large Synoptic Survey Telescope, scheduled for completion atop Chile’s Cerro Pachon in 2015, will gather that much data [as did the Sloan Digital Sky Survey over the last six years] in one night”.[31]

25 A doubling time of 15 years means that today there are around 100 times as many scientists as in 1900, around 10,000 times as many as in 1800, and around 1,000,000 times as many as in 1700. Here are the calculations: for a doubling time of 20 years we get 2^5 ≈ 30, 2^10 ≈ 1,000, and 2^15 ≈ 30,000; for a doubling time of 15 years we get 13 doublings in 195 years, hence a factor of 2^13 ≈ 8,000, or around 10,000 in 200 years.
26 Timothy Rowe (2005), book review of Mammals From the Age of Dinosaurs: Origins, Evolution, and Structure by Zofia Kielan-Jaworowska, Richard L. Cifelli and Zhe-Xi Luo, Nature 438, 426 (24 November 2005). For similar numbers on the growth of dinosaur fossils, see New Scientist (21 May 2005, pp. 36, 38).
27 See also the fourth indicator of success, which concerns computing power.
28 See also Humphreys (2004, pp. 6-8).
29 http://www.sdss.org/. See also Robert Kennicutt (2007).
30 Jones and Boyd (1971, p. 202). Cited from Johnson (2005).
31 Nature (2009), vol. 460, no. 7255, p. 551. This represents an increase in time efficiency by a factor of around 2000, namely 5 times 365. The Large Synoptic Survey Telescope is just one of several large astronomical projects planned for the next 10 years.


Figure 3: The growth of the fossil record of mammals from the age of dinosaurs (245–66 million years ago). From Kielan-Jaworowska, Cifelli, and Luo (2005), p. 7.

Another example is provided by the sequencing of DNA. Here, over the last 20 years the overall number of decoded DNA sequences has grown with a fairly stable doubling time of 18 months.[32] This means growth by a factor of 100 every 10 years, from around 4,000 sequences in 1984 to around 40,000,000 in 2004. Again, this is not possible without automation. To illustrate, “geneticists spent more than a decade getting their first complete reading of the 3 billion base pairs of the human genome, which they finally published in 2003. But today’s rapid sequencing machines can run through that much DNA in a week.”[33] The cost of sequencing the first human genome was probably at least $500 million, whereas the cost of sequencing the eighth human genome in 2009 was around $50,000. In the next few years, the costs are expected to decrease by a factor of two each year (Nicholas Wade 2009).
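The factor of 100 per decade follows directly from the 18-month doubling time (again my arithmetic): ten years contain \(120/18 \approx 6.7\) doubling periods, and

\[
2^{120/18} \;=\; 2^{6.67} \;\approx\; 100 .
\]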

As a final example, consider the Argo system, which was installed in the years 2005 to 2009 and consists of a network of 3000 robotic probes floating in the Earth’s oceans. The probes continuously measure and record the salinity and temperature of the upper 2000 m of the ocean, surfacing once every 10 days to transmit the collected data via satellite to stations on land. In these and many other fields, the automatic gathering of data has led to truly gigantic amounts of data.[34] Relating my examples of the increase of data from palaeontology, astronomy, genetics, and oceanography to the respective entries on my list of our current best theories supports the main thesis that those theories enjoy far higher degrees of success than any theories of some decades ago.

2 The worry of diminishing returns and the diversity of data

At this point the following worry might arise. Even if in the examples above the amount of data has increased enormously, every new piece of data may still be rather similar to data gathered previously, and, as is well known, if new data is of the same kind as old data, its confirmational value converges to zero very quickly. For an extreme example, the results of the same kind of experiment repeated again and again, varying only times or locations or other irrelevant characteristics, quickly lose any confirmational value. This is the worry of diminishing returns.[35]

32 See http://www.ncbi.nlm.nih.gov/Genbank/genbankstats.html.
33 Nature (2009), vol. 460, no. 7255, p. 551.
34 Szalay and Gray (2006, pp. 413-4), in a report about a Microsoft workshop on the future of science, claim that the amount of scientific data is doubling every year.


If the worry were correct, the increase in the amount of data would not mean that there was a big boost in degrees of success in the recent past. In reply, note the following. It is surely the case for the above sets of data (fossils, stars, etc.) that for every piece of data gathered nowadays there are almost always many pieces of data gathered earlier that look rather similar to it. But this way of viewing the confirmational situation is misleading. We need to adopt a global perspective and determine for each whole set of data whether it exhibits high diversity or not.

So, how much diversity do the data sets presented above (data from fossils, chemical elements, oceans, astronomical objects) possess? Surely they are not such that all the pieces of data are of the same narrow kind. It is not the case that palaeontologists examine the same features of the same fossil again and again, or that all fossils they find are of the same species and the same age. That would correspond to repeating the same sort of experiment again and again, varying only insignificant properties such as time or location. Instead, palaeontologists look for fossils from different locations and strata, and what they find are mostly different species. Thus, in Figure 3 the y-axis depicts the number of genera of mammalian fossils. (Genera are a more general classificatory grouping than species.) Although some important parts of the fossil record are missing, especially from earlier times, e.g., from the beginning of life on earth, we possess fossils for many important parts of the tree of life, enough to have a rough outline of it.[36] Nor is it the case that chemists examine the same features of the same substance again and again; instead they create, as we saw, new substances incessantly. The millions of chemical substances that have been synthesized so far clearly represent an extremely high variety of evidence. Likewise, the probes of Argo were not all released at the very same location in the same ocean, but were, of course, distributed widely over many different locations in the oceans of the earth so as to maximize the worth of the data produced. Finally, astronomical projects such as the Sloan Digital Sky Survey don’t record the same star again and again, but record features of hundreds of millions of different stars and galaxies.

The rise in the diversity of data also manifests itself in the very strong rise in the number of kinds of instruments and kinds of measurement techniques in many scientific disciplines, especially in the last few decades. A hundred years ago we had just light microscopes; since World War II many new kinds of microscopes have been developed: many types of light microscope (polarization, fluorescence, phase contrast, etc.), many types of electron microscope, scanning probe microscopes and acoustic microscopes.[37] Similarly in astronomy: until some decades ago we had only light telescopes, while today astronomers have many different kinds of instruments which not only cover most of the electromagnetic spectrum, from radio telescopes to gamma-ray telescopes, but also detect neutrinos, muons, Oh-My-God particles, and so on. To get an impression of the diversity of microscopes, telescopes and other measurement techniques, have a look at the respective entries in Wikipedia, e.g. the entries on “measuring instruments” and “non-destructive testing”.

Finally, we can point to a further feature of science that supports the claim that the evidence gathered by scientists shows a huge and ever-growing variety and breadth. This feature is the ever-increasing specialization of science. In practically all scientific disciplines we see an ever-increasing number of approaches, techniques, instruments, problems, etc. All the time we see fields splitting into sub-fields. It is very plausible that, where there is a core theory in the respective scientific discipline, specialization often results in more diversity of evidence for that core theory.

A particularly salient indication of this specialization is the problem of communication between scientists. Scientists of different scientific disciplines, and even scientists of the same discipline but different sub-disciplines or sub-sub-disciplines, have an ever harder time communicating the particulars of their work to each other. This can easily be witnessed at scientific conferences. (Just go to any talk at any conference and ask any member of the audience how much he really understood of the talk.) In general, communication is possible about the basics of the respective field, its general aims, theories, and problems. In contrast, communication about the specifics of the daily work in the field, the particular techniques, data, instruments, particular problems and special theories, is mostly not possible without a lot of special background information.

35 This worry was first suggested to me by Rachel Cooper. See Hempel XXX.
36 The fossil record as evidence for the theory of evolution has only a limited diversity. However, fossils are just one kind of evidence among very many very different kinds of evidence from very different sources such as biogeography, genetics, embryology, molecular biology, etc. To get an impression of the diversity of evidence for the theory of evolution, compare sites such as www.talkorigins.org, or any textbook on evolution such as Douglas Futuyma (2009).
37 See Ian Hacking’s “Do We See Through a Microscope?”, in particular the section titled “A Plethora of Microscopes”. Microscopy is nowadays being developed in many different, sometimes surprising directions; see Alison Abbott (2009).


Compare this with the situation 100 or even 200 years ago, when it was far easier for scientists to keep up with developments in their own discipline and even in other scientific disciplines.

The omnipresent difficulties of communication between scientists of different fields show just how specialized science has become. And the ever higher specialization is a clear indication of the ever growing diversity of approaches, techniques, instruments, etc., and hence of the ever growing diversity of evidence for the core theories. As with the growth of scientific work in general, growing specialization does not automatically lead to more diversity of evidence; it is only an indicator of diversity. Surely there are many scientific fields in which the large increase in specialization has not led to much more diversity of evidence for the theories of that field, or to a very high degree of success for any of them. But, as with the increase of scientific work in general, it is implausible that no scientific theories profited in this way. Especially for the core theories of the respective disciplines, such as the theory of evolution or the Periodic Table of Elements, it is highly plausible that they did.

3 The final definition of success

Before turning to the fourth and fifth indicators of success, I want to improve the notion of success one last time. To do so I will once more use confirmational ideas that are entirely standard (compare, e.g., Hempel 1966). So, here is the final characterization of success. It is still a partial characterization in that it only specifies some factors on which the degree of success of a scientific theory depends. First, as was noted earlier, the degree of success of a theory at a given time depends on the total number and diversity of all the tests that the theory has passed until that time. In every test a prediction[38] of the theory and a piece of data are compared; hence the degree of success of a theory at a given time depends on both the total number and variety of the predictions and the total number and variety of the data involved in all the passed tests.

Second, individual tests differ with respect to their quality. I will here consider only one aspect of the quality of tests, namely the precision (or specificity) of the data and predictions involved in them. The notion of precision applies to data of both a qualitative and a quantitative sort. If the data is of a quantitative sort, then obviously its level of precision may vary, but the same is true for data of a qualitative sort: it can vary in its precision from rather unspecific to very specific. For example, the precise description of all features of a fossil can constitute a quite precise piece of data. Likewise for the predictions derived by scientists from a given theory: they may be of a qualitative or a quantitative sort, and in both cases their precision may vary. For example, when scientists have to solve equations to derive predictions from a theory, they often have to use approximations and simplifications, which usually reduce the precision of the predictions.

In general, if the data is of higher precision than the prediction, then the quality of the test is determined by the precision of the prediction, and if the data is of lower precision than the prediction, then the quality of the test is determined by the precision of the data. If in a test either the data or the prediction is of low or modest precision, we have a gentle test of the theory. Such a test, if passed, can only result in a weak or modest increase in degree of success for the theory. Confirmation en passant is typically of this sort.
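One crude way to formalize this (the symbols are introduced here only for illustration, assuming precision is measured on some common scale): the quality \(q\) of a test is bounded by the less precise of its two ingredients,

\[
q \;=\; \min\big(p_{\text{data}},\, p_{\text{prediction}}\big),
\]

so a gentle test is simply one with low \(q\), however high the precision on the other side may be.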

My partial characterization of the notion of success is of a rather basic sort. It could surely be refined in many ways, e.g., with probabilistic means. Some ways of refining it are not necessary for our purposes here; quite the opposite, some measure of generality is welcome, having the advantage of making the arguments of this paper compatible with a large range of realist theories of confirmation. Other ways of refining the notion, however, would be important to consider. It should be taken into account that confirmation is holistic (that theories are not tested singly, but in bundles). The same holds for the distinction between data and phenomena (Bogen/Woodward 198XX). Also, relationships such as the mutual support of theories at different levels of generality and between neighbouring scientific areas would have to be examined. And so on. Taking all these complications into account has to be left for other occasions.

38 As noted earlier, the notion of a prediction of a theory is used in a rather broad sense: any observation statement which scientists have derived from the theory counts as a prediction.


We can discern two extreme ways in which a theory can acquire very high degrees of success; they form the two end points of a continuum. At one end we find theories that have passed only gentle tests – all the data or predictions involved have had low or moderate precision – but a very large number and variety of such tests. At the other end are theories that have not passed very many tests, but have passed tests of a very precise sort. In between the two ends of the continuum lie many possible mixtures of the two extreme ways. The support for our current best theories is typically such a mixture. A plausible conjecture is that for the best theories of the physical sciences we find both high precision and strong diversity of evidence. By contrast, the current best theories from other sciences, such as the theory of evolution or the Periodic Table of Elements, have profited less from tests of very high precision and more from a very high diversity of evidence.

4 Precision

The fourth indicator of success is the precision of data. Data become more precise when scientists improve already existing kinds of instruments and measurement techniques, or develop new ones. This happens all the time, of course, and has led to constant improvement in the precision of data over the last few centuries, and especially the last few decades. Often the improvements came in great leaps. Examples abound; let me just mention three especially interesting ones.

The first example concerns the measurement of distances between places on the surface of the earth for the purpose of determining details of the movement of tectonic plates. The kinds of measurement available until the 1980s required years to produce meaningful data. In the 1980s this changed dramatically with the advent of GPS: the precision increased a hundredfold. In consequence, determining the movements of tectonic plates became rather easy and very reliable.[39] In this case we have a substantial increase in the precision of the data, which at the same time exhibits a large variety, coming from thousands of GPS stations at many different places on the earth. This example is interesting because it concerns the only interesting theory change of recent times discussed in the philosophical literature: in the 1960s geologists changed their view from the belief that the earth’s crust does not move to the theory of moving tectonic plates. The example also illustrates the notion of confirmation en passant. Clearly, the purpose of the measurements is not to confirm once again that plate tectonics is true, but to determine details of the tectonic plates such as their precise borders and the direction and velocity of their movements; nevertheless, plate tectonics is reconfirmed all the time by these measurements.

The second example is the increase in the precision of time measurement. Figure 4 gives a rough idea of the immense increase in the precision of time measurement over the last 400 years. Note that the y-axis has a logarithmic scale, so that the increase is actually hyper-exponential. Furthermore, since the 1950s the precision of the best clocks has increased by at least one digit per decade (Sullivan, 2001, p. 6). Today the best clocks, so-called optical clocks, reach a precision of 1 in 10^17. They are, of course, expected to become still better soon (T. Rosenband et al. 2008). Needless to say, precise measurements of time are vital in very many different scientific fields (for example for GPS), and have led to a strong increase in success for many of our current best theories over the last few decades.
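A quick consistency check on these figures (my arithmetic): one digit per decade sustained over the five to six decades since the 1950s amounts to an improvement by a factor of

\[
10^{5} \;\text{to}\; 10^{6},
\]

i.e., five to six orders of magnitude in half a century.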

The third example is a very specific one, namely a test of Einstein’s equation E = mc^2 in the year 2005. Using “improved crystal Bragg spectrometers” and a “Penning trap” (whatever those are), E = mc^2 was reconfirmed with an accuracy of at least 0.00004%.[40] This accuracy is 55 times higher than that of the previous best test, from 1991, which, by the way, used an entirely different method.

The last example shows that even for some of our best theories scientists don’t stop intentionally testing them. As remarked earlier, reconfirmation of our best theories mainly occurs as a side-effect of scientific projects, as “confirmation en passant”. But the last example shows that scientific projects in which scientists make a deliberate effort to recheck a well-established theory, so that rechecking the theory is the main purpose or one of the main purposes of the project, do exist.

39 GPS became available much earlier for scientists than for the general public. Incidentally, GPS only works because several relativistic effects are taken into account (Neil Ashby 2003). So every time you use GPS successfully, you reconfirm en passant the approximate truth of general relativity, although only a tiny little bit.
40 Nature, Vol. 438 (22/29 December 2005, pp. 1096-7). See New Scientist (4 March 2006, pp. 42-43) for the story behind this recent validation of E = mc^2, in which two teams, one in the US and the other in France, made their respective measurements entirely independently of each other until they deemed them stable, before simultaneously faxing the results to each other.


While such intentional tests are not considered to provide great advances, and are rarely in the limelight of the scientific community (precisely because the theories are considered to be already sufficiently confirmed), those tests are performed when the occasion to do so arises (which it does time and again, because of new instruments and techniques), and their results are noticed and acknowledged.[41] It is simply interesting (and also satisfying) to see that a theory you already accept passes further, more stringent tests. And it is always at least possible that the theory does not pass a further test, in which case, if you can really establish that this is so, you become a candidate for the Nobel prize. Nevertheless, this does not happen, and every such passed test constitutes a further boost for the success of the respective theory.

Figure 4: Improvement in the quality of artificial clocks and comparison with the clock provided by terrestrial rotation (from XX)

5 Computing power

The fifth indicator of success is computing power. As I use the term here, the computing power of scientists at a given time has two components: the software known to the scientists at that time and the hardware available to them at that time. The software comprises the methods and algorithms known to scientists for solving equations (and, more generally, methods for deriving predictions from theories). The hardware comprises the devices on which software is implemented, such as abacuses, logarithmic tables, computers, and the human brain. The computing power – software and hardware – available to scientists at a given time determines three things: the kinds of equations scientists of that time can solve, the precision of the solutions, and the amount of time needed to obtain the solutions (efficiency).

So, let us look at how computing power has developed over the last few centuries. First, software. Mathematics and the other disciplines responsible for finding methods and algorithms for solving equations have, like the rest of science, gone through exponential growth during the history of science.

41 Another example is the recent test of Newton’s law of gravitation for gravitational forces between masses separated by 55 μm (“Gravity passes a little test”, Nature Vol. 446, 1 March 2007, pp. 31-32). This test also served to rule out some versions of string theory, so testing Newton’s law of gravitation was not the only purpose of the project.


Hence, the number and efficiency of methods and algorithms for solving mathematical equations have certainly been growing very strongly. However, it is difficult to quantify this growth in any meaningful way at a global level. Describing the growth of hardware over the history of science is far easier. Until 50 years ago computations were done by humans; therefore overall human computing power rose at least at the rate of growth of the number of scientists, and actually considerably faster, due to instruments like abacuses, logarithmic tables, slide rules, etc. Furthermore, for the last 50 years the hardware power of computers has certainly risen exponentially: as is well known, it has doubled roughly every two years. This growth is, of course, much stronger than the growth of the number of scientists and journal articles.
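To put the comparison in numbers, using the doubling times cited above (my arithmetic, not a figure from the sources): over 50 years,

\[
\underbrace{2^{50/2} = 2^{25} \approx 3.4 \times 10^{7}}_{\text{hardware}}
\qquad\text{versus}\qquad
\underbrace{2^{50/20} \approx 6 \;\;\text{to}\;\; 2^{50/15} \approx 10}_{\text{manpower}},
\]

so hardware power outgrew manpower by roughly six orders of magnitude over that period.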

6 The benefits of the increase in computing power

The increase in computing power is connected with increases in the success of theories in straightforward ways. New and better software, i.e., new and better methods for solving equations, can provide solutions for kinds of equations that were not solvable before, and it can yield solutions of already solvable equations with higher efficiency (i.e., consuming less time and other resources); both kinds of improvement can result in a higher number of predictions, which can thereby exhibit more diversity. Similarly, new and better software can lead to more precise solutions of already solvable equations, hence to more precise predictions. The same holds for hardware: more and better hardware leads to a higher number and diversity of predictions, and to more precise predictions.

Note an important difference between computing power and data. Every piece of data is taken to have a specific content, i.e., is taken to carry information about a specific object or phenomenon such as some star, some stretch of DNA, or some fossil. Therefore every piece of data is confirmationally relevant for only a very narrow set of theories, mostly in just one scientific area. By contrast, neither software nor hardware is taken to have a specific content.[42] A method for solving some kind of equation can, of course, be used in every scientific field in which equations of that kind arise, and very many methods for solving equations are in fact used in many different scientific areas.[43] Even more so for hardware: the same piece of hardware can usually implement a large number of different kinds of software. This is certainly true for the human brain and for electronic computers.

All of this makes it highly plausible that the growth of software and hardware has contributed strongly to the increase in degrees of success of our current best theories. Paul Humphreys remarks that “much of the success of the modern physical sciences is due to calculation” (2004, especially Ch. 3). He observes that our knowledge of the numerical methods needed to solve equations numerically has made big advances.[44] Thus, he writes:

… [B]ehind a great deal of physical science lies this principle: It is the invention and deployment of tractable mathematics that drives much progress in the physical sciences. Whenever you have a sudden increase in usable mathematics, there will be a sudden, concomitant increase in scientific progress in the areas affected. And what has always been true of the physical sciences is increasingly true of the other sciences. Biology, sociology, anthropology, economics, and other areas now have available, or are developing for themselves, computational mathematics that provide a previously unavailable dimension to their work. (p. 55, emphasis in original)

42 Here are the correspondences between the elements on the prediction side of testing and the elements on the observation side of testing: software corresponds to observational and measurement techniques; hardware (including the human brain) corresponds to measuring instruments (including the human perceptual apparatus). On both sides we can draw the distinction between process and product. Making a calculation means running some piece of software on some piece of hardware; this is a process whose product is some computational result. Likewise, making an observation or measurement means employing a measuring device; this is a process whose product is some piece of data. Finally, comparing the results of calculations with the data constitutes a test of the respective theory. – Needless to say, I chose to examine those elements on the two sides of testing for which some kind of quantitative statement is meaningful and for which I could find numerical information.
43 See Paul Humphreys (2004, Ch ??).
44 The vast majority of equations are nowadays solved numerically. For example, the equations of Quantum Mechanics are almost exclusively solved numerically today (Humphreys 2004, p. 60).


Humphreys provides ample evidence for the huge difference that the increase in computing power has made to the empirical success of theories.

An important use of computing power in both the physical and the non-physical sciences is the construction of computer models and computer simulations as intermediaries between theory and observation. When constructing computer models scientists often have to make strong approximations and idealizations. If so, their confidence that the predictions of the models are actually the predictions of the theory cannot be very high. In case data are available for comparison with the predictions of the model, i.e., in case we have a test of the respective theory, this test is a gentle test at best. Therefore, if the predictions of the model accord with the respective observations or experimental results, the amount of success the respective theory enjoys is limited. Still, if the theory is a core theory that is profitably used in very many such models, and if the cases where the predictions are wrong are rare or the fault can be traced to other culprits, such models can strongly contribute to the rise in the theory’s degree of success. What is more, because computing power has increased so strongly, especially over the last 50 years, the approximations have become ever better and the necessary idealizations ever weaker, leading in many scientific fields to ever more precise models, which therefore constitute increasingly stringent tests of the respective theories and have contributed ever more strongly to the increase in their degrees of success.

5 Saving Realism

1 Completing the main argument

Let us complete the main argument. The main argument is an inference from statements about the five indicators of success to statements about the degrees of success of the two classes of theories we want to compare. As we saw, the indicators of success have enjoyed an enormous increase over the last few decades: before that almost all of them were quite low; today they are very high. From this we can infer the degrees of success of both our current best theories and the refuted theories of the past. On the one hand, we can infer that our current best theories profited from the enormous increase in the indicators in the recent past and therefore received a big boost in their degrees of success, an increase far greater than any increase in the success of any theories of earlier times. So they enjoy very high degrees of success today. This inference proceeds at a general level, but we could also show directly, for a number of specific theories, e.g., some core theories, that their degrees of success profited from the increase in the indicators. On the other hand, the refuted theories of the past discussed by philosophers were all refuted before the big boost of success took place. At those times practically all indicators were quite low. Therefore, the degrees of success of the refuted theories were quite modest. This completes the main argument. Its conclusion is the main thesis of the paper: all our current best theories enjoy far higher degrees of success than any of the successful but refuted theories of the past, which enjoyed only quite modest degrees of success.

2 Reply to sophisticated PI

We can now reply to the sophisticated PI. The sophisticated PI is the argument that theories kept being refuted right up to the present, so we should extrapolate the incidence of false theories from past degrees of success to current degrees of success. The target of the sophisticated PI is the modified success-to-truth principle, which restricts the inference from success to truth to current degrees of success, and which threatens to be undermined by the extrapolation of false theories. But the sophisticated PI is challenged by the main thesis. The main thesis states that there is a large difference in degrees of success between the refuted theories of the past and our current best theories. It implies that the claim of the sophisticated PI that theories kept being refuted as success kept growing is simply not true. Quite the opposite is true: theory change stopped at rather low levels of success. Therefore, the extrapolation of the existence of false theories from past degrees of success to current degrees of success is not plausible. This rebuts the sophisticated PI, and saves the modified success-to-truth principle.


3 How Realism is saved

Because the modified success-to-truth principle is not threatened by the existence of counterexamples, the realist can modify his position so that it consists in the endorsement of the modified success-to-truth principle. This modified form of realism is safe from the sophisticated PI. It is a version of realism which is compatible with the history of science.

The realist wants to support his position, i.e., the modified success-to-truth principle, with the NMA, but the NMA and the intuitions behind it were attacked in the first objection. In a moment I will offer a reply to the first objection. I want to show that the NMA and the intuitions behind it emerge slightly scathed, yet basically intact, from the confrontation with the history of science. Assuming I can show this, the realist can, after all, use the NMA to support the modified success-to-truth principle. Finally, he can apply the modified success-to-truth principle to our current best theories, and infer that they are approximately true.

6 Restoring the Intuitions**

1 The weak success-to-truth principle**

Let us now deal with the first objection. The first objection aims to undermine what I called the shared realist intuitions by pointing out that, via the NMA, they support an inference to truth for the levels of success enjoyed by the refuted theories of the past, so that the shared realist intuitions are at odds with the historical track record. In this conflict the intuitions lose, rendering them untrustworthy and no longer able to support anything at all, in particular not the inference from success to truth for current levels of success.

To restore the integrity of the shared realist intuitions, I want to consider what I will call the weak success-to-truth principle. This principle states what we can infer for theories with moderate degrees of success, as typically possessed by the successful but refuted theories of the past. The principle is a conjunction of two statements. First, we can infer that such a theory is partly true[45] in the sense that important components of the theory are probably correct. The correct components may be part of its ontology (some of its central terms refer), or part of its classification system, or they may be structural claims such as its equations. Scientists accepting such a theory may not be able to determine which of its components are the correct ones, but may know this only in hindsight.[46] Also, the theory may be true or partially true not in the whole domain of application as initially intended, but only in a sizeable part of it, where this part, too, may be known only in hindsight. Second, the weak success-to-truth principle states that there is a substantial probability (to fix ideas arbitrarily: between 20% and 80%, say) that the theory is not just partially true, but actually fully true. A person is warranted to have some, though not that high, level of confidence in its full truth; e.g., an attitude of tentative belief may not be altogether unreasonable.

To show that realists can hold on to the weak success-to-truth principle I have to show two things: that it is by and large in line with shared realist intuitions, and that it is not undermined by the historical track record. First, we need to show that it is by and large in line with shared realist intuitions. When Smart and Putnam presented the NMA, the levels of success they had in mind were, of course, the present ones. I showed above that by far most of the growth in success of the respective best theories occurred in the recent past, and that the levels of success of the refuted theories of the more distant past were much lower than those of our current best theories. Also, it is clearly the case that the lower the degree of success of some theory, the weaker the shared realist intuitions’ support for its truth via the NMA. For these two reasons, it is plausible that the shared realist intuitions do not offer that much support for an NMA in favour of the success-to-truth inference for the refuted theories of the past. A realist 100 or 200 years ago, for example, if he had inferred the truth of some successful theory of those times, should have been aware that he could have done so only

45 As before, “true” always means “approximately true”.
46 Kitcher compares this situation of not knowing which components are the correct ones with the preface paradox (2001, 170-71).


rather tentatively, and that anti-realism was a real option for many of the best theories at that time. When some theories with so much lower amounts of success turned out to be wrong, this should not have been such a big surprise for him, far less a miracle. Thus, for past refuted theories the common realist intuitions seem to support at most a watered-down NMA (which may therefore not really deserve its name in this case).

Having said this, it must be admitted that it is difficult to determine the shared realist intuitions with any precision. It must also be admitted that current realists often seem to identify with past realists and realist-minded scientists, and may think that, at least in some cases, the success of past refuted theories already sufficed for greater confidence in their truth than the weak success-to-truth principle recommends. Therefore, this principle may not be entirely in line with common realist intuitions. Still, I think we can say that it is by and large in line with them.

2 The weak success-to-truth principle and the history of science**

We further have to show that the weak success-to-truth principle is not undermined by the history of science, but on the whole accords with it. So, let us examine the history of science and therein the theories with moderate levels of success. What we can do here is use the first counterstrategy against the PI mentioned earlier. That counterstrategy consists in showing that, judged from today, the successful refuted theories of the past, although strictly speaking false, had parts that were true, e.g., had terms that referred, got the structure right, etc., or were approximately true in restricted domains. Many philosophers have developed versions of this counterstrategy and have provided convincing historical evidence for its viability, so I can be brief here. Thus, for example, important claims of the phlogiston theory of combustion are still accepted today. Likewise, many central claims of the caloric theory of heat are true (in many situations heat behaves like a substance).[47] Another example is the sequence of theories of light since Maxwell: all its members agree on the structure of light, i.e., on Maxwell’s equations.[48] Furthermore, the ontologies of many of the successful abandoned theories of the past overlap to a considerable extent with the ontologies of the theories we accept today, e.g., theories about the electron (Nola 19XX, Norton 19XX in Nola eds). Finally, some theories can only be considered approximately true if their domain of application is significantly restricted; Newtonian mechanics is approximately true in a very large domain, including the domains of most engineering sciences. Hence, the first part of the weak success-to-truth principle, according to which theories with moderate success usually have important components that are correct, is in accord with the history of science.

In order to assess the second part of the weak success-to-truth principle, we not only have to look at the incidence of false theories among theories with moderate degrees of success, but also at the theories that were not refuted; we have to determine the ratio of the number of non-refuted theories to the number of refuted theories. The weak principle is only compatible with the history of science if that ratio has not been much lower than one, i.e., if among theories with moderate success the number of refuted theories has at most the same order of magnitude as the number of non-refuted theories. Of course, to show this is once again an ambitious undertaking, with numerous problems, such as how theories are individuated, what levels of success precisely count as moderate, how degrees of success are to be determined in the first place, etc. Hence, the following (all too brief) observations from the history of science can only provide a crude estimate of that ratio. Still, I think they are sufficient to show that the realist’s case is promising. As my aim is only to defend realism, the burden of proof that the ratio is considerably lower than one lies once again with the antirealist, as he wants to attack the intuitions; it therefore suffices for the realist to throw sufficient doubt on the antirealist’s attack on the intuitions.

So, let us examine the moderately successful theories of the history of science. Here are four observations. First, inspecting the set of abandoned theories, for instance those on Laudan’s list or the other cases offered in the philosophical literature, we observe that most of the successor theories of the abandoned theories are still held today. Judged from today, many of the abandoned theories, especially the most interesting and most frequently discussed ones, were already “finalists” or, at the very least, “semi-finalists” in the “theory contests” of their respective scientific fields.

47 Martin Carrier (2004) defends the claim that phlogiston theory and the caloric theory of heat already got significant statements about classification, i.e., natural kinds, right. This idea is further developed into a formal proof showing that under certain conditions the successor theory in a theory change has to retain some elements of the abandoned theory; see Schurz (2004, 2009). See also, among others, Ladyman (2009, §5).
48 Worrall (1989), French and Ladyman (19XX). Many more examples are provided by Ladyman 1998XX.


For example, the theory of phlogiston, the caloric theory of heat, and the geocentric system were finalists (or semi-finalists, depending inter alia on how theories are individuated and which theories count as versions of each other) in their respective fields.

Second observation. As a more recent set of examples, consider the field of clinical studies. In a recent meta-study, 49 highly cited (i.e., cited more than 1000 times) original clinical studies claiming that a drug or other treatment worked were examined. We can classify the results of these studies as cases of moderate success, because they were usually supported by just one study and did not enjoy strong support from independent evidence. It then turned out that subsequent studies of comparable or larger sample size and with similarly or better controlled designs contradicted the results of 16% of the earlier studies and reported weaker results for another 16%. (For example, the refuted studies had seemingly shown that hormone pills protect menopausal women from heart disease and that vitamin A supplements reduce the risk of breast cancer.) This means that nearly two-thirds of the original results held up.[49]
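The “nearly two-thirds” figure is just the complement of the two failure rates (my arithmetic):

\[
100\% - 16\% - 16\% \;=\; 68\% \;\approx\; \tfrac{2}{3}.
\]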

Third, there are only very few scientific fields which have experienced more than one or two theory changes among successful theories. The three to five changes among theories of light are quite an exception in this regard; very few scientific fields exhibit such a large number of theory changes. Because of the low number of such fields, their contribution to the ratio of non-refuted to refuted theories is rather small, and barely lowers it.

Fourth, there are some fields in which the first really successful theories scientists hit upon were already the theories we still accept today. Examples of such theories are provided by discoveries of entities and phenomena to which scientists had no access before, e.g., the discovery of X-rays, the double helix, or double stars. Where such discoveries were based on at least moderate evidence, they have mostly survived until today; hence in these cases there were sometimes no theory changes at all. These cases therefore either increase or at least do not lower the ratio of non-refuted to refuted theories.

Putting these four observations together (which, of course, would have to be checked more thoroughly than I can do here), we arrive at the admittedly preliminary estimate that the ratio of non-refuted to refuted theories among moderately successful theories is very roughly one. It follows that scientists could be reasonably confident that the moderate success of those theories sufficed to eliminate all but two or three theories, and that the two or three known and seriously entertained theories of their times probably included the true theory; in other words, that the respective small disjunction of those theories was probably true. For example, Priestley and Lavoisier could both be reasonably confident that one of them was right. So, although Priestley was not right to have a strong conviction in phlogiston theory, he was not only justified in believing that important parts of his theory were true; a tentative acceptance of his theory as a whole would also not have been entirely unreasonable for him. Hence, the second part of the weak success-to-truth principle, which allows an attitude of tentative belief towards theories with a moderate amount of success, is compatible with the history of science.

From this and my conclusion above about the first part of the weak success-to-truth principle, it follows that, although the common realist intuitions may not escape entirely unscathed from the confrontation with the past of science, they are not invalidated by it either, and can still be used by the realist as reasons supporting the full (as well as the weak) success-to-truth principle.

7 Five objections

(…)

49 Paraphrased from John Ioannidis (2005) and Lindsey Tanner (2005). Ironically, whereas I use the results of this meta-study to support realism, its real import is that it shows that clinical studies are not as reliable as commonly thought.


8 Conclusion

(…)


9 References

Abbott, Alison (2009). “Magnifying Power”, Nature, Vol. 459, No. 7247 (4 June 2009), pp. 629-39.
Ashby, Neil (2003). “Relativity in the Global Positioning System”, Living Reviews in Relativity 6, 1.
Bird, A. (2007). “What is Scientific Progress?”, Noûs.
Bolton, A., Scott Burles, Leon V. E. Koopmans, Tomaso Treu, and Leonidas A. Moustakas (2006). “The Sloan Lens ACS Survey. I. A Large Spectroscopically Selected Sample of Massive Early-Type Lens Galaxies”, The Astrophysical Journal, Vol. 638, Issue 2, pp. 703-724.
Bovens, L. and Hartmann, S. (2004). Bayesian Epistemology. Oxford University Press.
Boyd, R. (1983). “On the Current Status of the Issue of Scientific Realism”, Erkenntnis 19: 45-90.
Carrier, M. (2004). “Experimental Success and the Revelation of Reality”, in M. Carrier, J. Roggenhofer, G. Küppers, and P. Blanchard (eds.), Challenges beyond the Science Wars. Berlin: Springer-Verlag.
de Solla Price, D. J. (1963). Little Science, Big Science. New York: Columbia University Press.
Devitt, M. (1991). Realism and Truth, 2nd edn. Oxford: Basil Blackwell.
Devitt, M. (2005). “Scientific Realism”, in Frank Jackson and Michael Smith (eds.), The Oxford Handbook of Contemporary Analytic Philosophy. Oxford: Oxford University Press, pp. 767-91.
Dobzhansky, Theodosius (1973). “Nothing in biology makes sense except in the light of evolution”, American Biology Teacher 35, pp. 125-129.
Dunbar, R. (2004). The Human Story. Faber & Faber.
Egghe, L. and I. K. Ravichandra Rao (1992). “Classification of Growth Models Based on Growth Rates and its Applications”, Scientometrics, Vol. 25, No. 1, 5-46.
Forster, M. R. (2000). “Hard Problems in the Philosophy of Science: Idealization and Commensurability”, in R. Nola and H. Sankey (eds.), After Popper, Kuhn & Feyerabend: Issues in Theories of Scientific Method, Australasian Studies in History and Philosophy of Science. Kluwer.
Frigg, R. and Stephan Hartmann (2005). “Scientific Models”, in S. Sarkar et al. (eds.), The Philosophy of Science: An Encyclopedia, Vol. 2. New York: Routledge, 740-749.
Furner, J. (2003). “Little Book, Big Book: Before and After Little Science, Big Science: A Review Article, Part II”, Journal of Librarianship and Information Science, Vol. 35, No. 3, 189-201.
Futuyma, D. J. (2009). Evolution, 2nd edition. Sinauer Associates, Sunderland, Massachusetts.
Godfrey-Smith, P. (2003). Theory and Reality: An Introduction to the Philosophy of Science. The University of Chicago Press, Chicago.
Hardin, C. and Rosenberg, A. (1982). “In Defense of Convergent Realism”, Philosophy of Science 49: 604-615.
Hempel, Carl Gustav (1966). Philosophy of Natural Science. Prentice Hall.
Hoyningen-Huene, P. (1993). Reconstructing Scientific Revolutions: Thomas S. Kuhn's Philosophy of Science. Chicago: University of Chicago Press.
Humphreys, P. (2004). Extending Ourselves: Computational Science, Empiricism, and Scientific Method. Oxford University Press.
Ioannidis, John P. A. (2005). “Contradicted and Initially Stronger Effects in Highly Cited Clinical Research”, Journal of the American Medical Association 294 (July 13, 2005): 218-228.
Kennicutt, Robert C., Jr (2007). “Sloan at five”, Nature, Vol. 450 (22 November 2007), pp. 488-9.
Kitcher, P. (1993). The Advancement of Science. New York: Oxford University Press.
Kitcher, P. (2001). “Real Realism: The Galilean Strategy”, The Philosophical Review, 110(2): 151-197.
Klee, R. (1997). Introduction to the Philosophy of Science: Cutting Nature at Its Seams. Oxford University Press.
Kukla, A. (1998). Studies in Scientific Realism. Oxford: Oxford University Press.
Ladyman, J. (2002). Understanding Philosophy of Science. London: Routledge.
Ladyman, James (2009). “Structural realism versus standard scientific realism: the case of phlogiston and dephlogisticated air”, Synthese, XXX.
Lange, M. (2002). “Baseball, Pessimistic Inductions, and the Turnover Fallacy”, Analysis 62, 281-5.
Laudan, L. (1981). “A Confutation of Convergent Realism”, Philosophy of Science, 48 (March), 19-49.
Laudan, L. and J. Leplin (1991). “Empirical Equivalence and Underdetermination”, Journal of Philosophy, Vol. 88, No. 9, pp. 449-473.
Leplin, J. (1997). A Novel Defence of Scientific Realism. Oxford: Oxford University Press.
Lewis, P. (2001). “Why the Pessimistic Induction is a Fallacy”, Synthese 129: 371-380.
Mabe, M. and M. Amin (2001). “Growth Dynamics of Scholarly and Scientific Journals”, Scientometrics, Vol. 51, No. 1, 147-162.
McAllister, J. W. (1993). “Scientific Realism and the Criteria for Theory Choice”, Erkenntnis 38, 2, pp. 203-222.
McClellan III, James E. and Harold Dorn (1998). Science and Technology in World History. The Johns Hopkins University Press, Baltimore and London.
McMullin, E. (1984). “A Case for Scientific Realism”, in J. Leplin (ed.), Scientific Realism. Berkeley: University of California Press, pp. 8-40.
McMullin, E. (1993). “Rationality and paradigm change in science”, in P. Horwich (ed.), World Changes: Thomas Kuhn and the Nature of Science, 55-78.
Meadows, J. (1974). Communication in Science. Butterworths, London.


Meadows, J. (2000). "The Growth of Journal Literature: A Historical Perspective", in: Blaise Cronin and Helen Barsky Atkins (eds.), The Web of Knowledge: A Festschrift in Honor of Eugene Garfield, ASIS&T Monograph Series. Medford, New Jersey: Information Today, Inc.

Musgrave, A. (1988). "The Ultimate Argument for Scientific Realism", in Robert Nola (ed.), Relativism and Realism in Science. Kluwer Academic Publishers.

Musgrave, A. (1999). Essays on Realism and Rationalism. Amsterdam & Atlanta: Rodopi.

Newton-Smith, W. H. (1987). "Realism and Inference to the Best Explanation", Fundamenta Scientiae 7 (3/4), 305-316.

Niiniluoto, Ilkka (1999). Critical Scientific Realism. Oxford University Press.

Norton, J. (2003). "Must Evidence Underdetermine Theory?", preprint at http://philsci-archive.pitt.edu.

Okasha, S. (2002). "Underdetermination, Holism and the Theory/Data Distinction", Philosophical Quarterly 52 (208), 303-319.

Papineau, D. (1993). Philosophical Naturalism. Oxford: Blackwell.

Psillos, P. (1999). Scientific Realism: How Science Tracks Truth. New York and London: Routledge.

Psillos, P. (2000). "The Present State of the Scientific Realism Debate", British Journal for the Philosophy of Science 51 (Special Supplement), pp. 705-728.

Rosenband, T., D. B. Hume, P. O. Schmidt, C. W. Chou, A. Brusch, L. Lorini, W. H. Oskay, R. E. Drullinger, T. M. Fortier, J. E. Stalnaker, S. A. Diddams, W. C. Swann, N. R. Newbury, W. M. Itano, D. J. Wineland, and J. C. Bergquist (2008). "Frequency Ratio of Al+ and Hg+ Single-Ion Optical Clocks; Metrology at the 17th Decimal Place", Science 319 (5871), 28 March 2008, pp. 1808-1812.

Rowe, T. (2005). Review of Mammals From the Age of Dinosaurs: Origins, Evolution, and Structure by Zofia Kielan-Jaworowska, Richard L. Cifelli, and Zhe-Xi Luo, Nature 438, 426 (24 November 2005).

Saatsi, J. (2004). "On the Pessimistic Induction and Two Fallacies", in Proceedings of the Philosophy of Science Association 19th Biennial Meeting (PSA2004): Contributed Papers.

Sanderson, Allen R., Bernard L. Dugoni, Thomas B. Hoffer, and Sharon L. Myers (1999). Doctorate Recipients from United States Universities: Summary Report 1999. http://www.norc.uchicago.edu/studies/sed/sed1999.htm.

Schummer, Jochen (1999). "Coping with the Growth of Chemical Knowledge".

Schurz, Gerhard (1991). "Relevant Deduction", Erkenntnis 35, 391-437.

Schurz, G. (2004). "Theoretical Commensurability By Correspondence Relations: When Empirical Success Implies Theoretical Reference", in: D. Kolak and J. Symons (eds.), Quantifiers, Questions and Quantum Physics: Essays on the Philosophy of Jaakko Hintikka. Kluwer Academic Publishers.

Schurz, Gerhard (2008). "Patterns of Abduction", Synthese 164: 201-234.

Schurz, Gerhard (2009). "When Empirical Success Implies Theoretical Reference: A Structural Correspondence Theorem", British Journal for the Philosophy of Science 60 (1), 101-133.

Stanford, P. Kyle (2001). "Refusing the Devil's Bargain: What Kind of Underdetermination Should We Take Seriously?", Philosophy of Science 68 (Proceedings): S1-S12.

Stanford, P. Kyle (2003). "Pyrrhic Victories for Scientific Realism", Journal of Philosophy 100 (11): 553-572.

Stanford, P. Kyle (2006). Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives. Oxford University Press.

Sullivan, D. B. (2001). "Time and Frequency Measurement at NIST: The First 100 Years".

Szalay, A. and J. Gray (2006). Nature 440, 23 March 2006.

Tannehill, J., D. Anderson, and R. Pletcher (1997). Computational Fluid Mechanics and Heat Transfer. Taylor & Francis.

Tanner, Lindsey (2005). "Review of Medical Research Turns Up Contradictory Results", The Associated Press, July 13, 2005.

Tenopir, C. and D. W. King (2004). Communication Patterns of Engineers. IEEE Press / Wiley-Interscience.

Trout, J. D. (2002). "Scientific Explanation and the Sense of Understanding", Philosophy of Science 69: 212-233.

van Fraassen, B. (1980). The Scientific Image. Oxford: Oxford University Press.

van Fraassen, B. (1998). "The agnostic subtly probabilified", Analysis 58, 212-220.

van Fraassen, B. (2003). "On McMullin's Appreciation of Realism", Philosophy of Science 70: 479-492.

Vickery, B. C. (1990). "The Growth of Scientific Literature, 1660-1970", in: The Information Environment: A World View, Studies in Honour of Professor A. I. Mikhailov. Elsevier Science Publishers B.V., North-Holland.

Vickery, B. C. (2000). Scientific Communication in History. Lanham, Maryland: The Scarecrow Press.

Wade, Nicholas (2009). "Cost of Decoding a Genome Is Lowered", New York Times, August 11.

Williamson, T. (2006). "Must Do Better", in P. Greenough and M. Lynch (eds.), Truth and Realism. Oxford University Press.

Worrall, J. (1989). "Structural realism: The best of two worlds?", Dialectica 43, 99-124.