Tuesday, August 14, 2012

Why Does Science Succeed?


In today's post I would like to reflect briefly on one philosophical implication of the previous post, Reality As a Verbal Game.

The thesis of that post was, briefly, that human knowledge (say, of an individual) can be expressed as a game comprising a web of verbal assertions (sentences), intrinsic rules of grammar and inference, and a set of change costs associated with the sentences.

To speak of this game as knowledge is not accurate. Knowledge requires true belief and nothing of the sort is suggested here. There is no semantics to the game and there is no real concept of truth. There is no certainty associated with individual assertions in the web/database - there are change costs but these have no significance outside the rules of the game.

The fact that this picture makes no claims regarding underlying metaphysical reality does not mean that there is a denial that there is such an underlying stratum. Similarly, the fact that syntactic sufficiency is aimed for does not mean that the grand history of attempts at pinning down meaning, from Frege (1892) through Russell (e.g., 1905) and down to moderns such as Putnam (e.g., 1981) and Harnad (e.g., 2002), is wrong. The claim is rather that it is possible to maintain a thoroughgoing agnosticism with regard to such issues and nevertheless succeed at answering some key philosophical and scientific questions.

One of the key questions that has troubled the philosophy of science is why science succeeds. The success of science referred to here may be limited to those areas of the hard sciences and technology where there is no dispute regarding the phenomenal success at predicting and manipulating mechanical and electronic objects, down to the nanometer scale. No prejudice is intended toward other fields; it is simply useful to sharpen the question by focusing on the indisputable successes. However, this success does extend to the common-sense everyday realm where, for example, we can make extremely reliable predictions regarding the ability of healthy food to sustain us or the dangers posed by an oncoming vehicle. How do these knowledge structures come about? How do they work? Most importantly, why do they work?

This question has been used by some to prove metaphysical points. For example, Putnam (1975), in his more Realist days, formulated the No Miracle Argument (NMA). The argument is part of the debate between Scientific Realists and Instrumentalists. If Instrumentalism in its naïve form were true, concludes the argument, then the success of science would be an inexplicable miracle. Only if the underlying story of science is true in its complete metaphysical sense could we explain why technology succeeds.

The view presented here proposes an answer to the question of how science succeeds which does not invoke metaphysics, realism, meaning or semantics. It is similar in many respects to arguments that have been labeled Instrumentalist, Positivist or Relativist in the past, but it does not try to affirm any commitment to the metaphysical views, positive or negative, that have usually accompanied those arguments.

The web of assertions is called a web because the intricate interconnections between the assertions are the defining feature of the system. These connections can be seen as created by the fact that words in one assertion match words in other assertions. However, it is more correct, and far more general, to say that the interconnections between the sentences in the pool are created by the inference rules of the web itself. It is these that connect the sentence "water is wet" with "my shirt is wet", and both of these with other sentences describing the implications of "wet". It is both the intrinsic rules and the rules created within the language of the web itself that produce the interconnections. This web could be identified with Quine's web of belief (for example, 1951, 1954, 1955, 1991) were it not for the severe restrictions placed here on the concept relative to its features as developed through the course of Quine's writings.
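
As an illustration only, here is a minimal Python sketch (the rewrite rule shown is invented for this example and is not part of the original post) of how an inference rule, treated purely as a string-rewrite rule, links sentences that merely share a word, without attaching any meaning to that word:

```python
import re

def apply_rules(sentences, rules):
    """Apply each (pattern, template) rewrite rule to every sentence, purely as strings."""
    derived = set()
    for sentence in sentences:
        for pattern, template in rules:
            match = re.fullmatch(pattern, sentence)
            if match:
                derived.add(template.format(*match.groups()))
    return derived

# One rule "about" wetness: it links any sentence of the form "X is wet"
# to a consequence, with no semantics attached to "wet" or "dry".
rules = [(r"(.+) is wet", "{0} is not dry")]
sentences = {"water is wet", "my shirt is wet"}

print(apply_rules(sentences, rules))
# {'water is not dry', 'my shirt is not dry'}  (set order may vary)
```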

The assertions in the web are divided (with gray areas in the demarcation) into sentences labeled as given and sentences labeled, for the purposes of this discussion only, as optional. Sense-data sentences such as "I seem to see a red ball before me" or "the dial before me seems to point to the digit 3" are examples of the given. Sentences such as "Force is proportional to both mass and acceleration" or "electrons are flowing in the wire" are examples of the optional. Controversially, sentences that describe, say, Modus Ponens also belong to the optional. The given corresponds to a class of sentences with very high change costs, but the optional may also have very high change costs. As implied by its name, the given is a class that is assumed to be outside the control of the game. At any given time (ignoring, for now, the circularity introduced by employing the concept of time here), the sentences of the given are just there, changing and being changed by players/forces/reality outside the game whose ultimate source is of no interest to the current discussion.
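
A hypothetical rendering of this division in Python might look as follows; the particular sentences, labels and cost values are assumptions for illustration, not part of the game's definition:

```python
from dataclasses import dataclass

@dataclass
class Assertion:
    sentence: str       # a bare string; no semantics is attached to it
    label: str          # "given" or "optional" (with gray areas in practice)
    change_cost: float  # meaningful only inside the rules of the game

web = [
    Assertion("I seem to see a red ball before me", "given", 100.0),
    Assertion("the dial before me seems to point to the digit 3", "given", 100.0),
    Assertion("Force is proportional to both mass and acceleration", "optional", 90.0),
    Assertion("electrons are flowing in the wire", "optional", 40.0),
]

# The given sentences are treated as outside the game's control: they appear,
# change and disappear for reasons the game does not model.
givens = {a.sentence for a in web if a.label == "given"}
```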

One sub-game within the Über-game of verbal reasoning has as its goal to create processes, composed of optional assertions and inference rules, that produce a set of output sentences matching the given sentences in the web. What is proposed is that human predictive and manipulative success, ranging from everyday interaction with the environment to the wonders of nano-scale technology, is the result of achieving the goals of this sub-game. Why does science succeed? Because we have found a way to create just the syntactic processes that produce the same outputs as the givens. If this answer is sufficient, there is no more to science than this.
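
Under these assumptions, the win condition of the sub-game amounts to nothing more than a string comparison; a minimal sketch:

```python
def sub_game_succeeds(produced: set, givens: set) -> bool:
    """Purely syntactic success test: every given sentence is reproduced
    verbatim by the processes built from optional assertions and rules."""
    return givens <= produced

# e.g. sub_game_succeeds({"water is wet", "my shirt is wet"}, {"water is wet"}) -> True
```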

Imagine a simple computer set up with nothing more than the ability to run Wittgen. There are two players. The first player creates assertions in the Wittgen database such as "a red square appears at time 0", "a red square disappears at time 1", "a red square appears at time 5", "a red square disappears at time 6", "a red square appears at time 10" etc. (The meanings of red, square, appears, disappears, time and any digits are irrelevant for this thought experiment.) The job of the second player is to create Wittgen assertions such that the database produces sentences like "predicting that a red square appears at time 0", "predicting that a red square disappears at time 1", "predicting that a red square appears at time 5" etc. In other words, the continuation of "predicting that..." must match the first player's sentences. The concept of matching is purely syntactic. The first player is producing the given assertions. The second player, who knows nothing about the first player, simply sees the given sentences appearing in the Wittgen database.

The second player could be a computer program. Such a program may not be simple, because quite a few subsystems, such as integer arithmetic (a purely formal system here, with no model or quantities applicable), have to be invented. However, such a program is plausible. The history of human reasoning is the creation of just such syntactic, symbolic and verbal systems, which make it quite easy for our "software" to produce the required sentences. If science can do that, then the question of how technology is possible has been solved. The question of who or what the first player is does not bear on the core question. The only relevant question is whether it is possible and plausible that the second player should succeed at creating inference rules that produce sentences (strings, collections of well-formed strings from the symbol alphabet) that match the given sentences.
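
To make this plausibility concrete, here is a hedged sketch of such a second player written in Python rather than Wittgen (whose syntax is not reproduced here); the period-guessing heuristic is an assumption of this illustration, not a claim about how the player must work. The player reads the given sentences as strings, induces a repetition rule, and emits "predicting that..." sentences that match them purely syntactically:

```python
import re

givens = [
    "a red square appears at time 0",
    "a red square disappears at time 1",
    "a red square appears at time 5",
    "a red square disappears at time 6",
    "a red square appears at time 10",
]

def second_player(observed, horizon=20):
    """Induce a fixed period for each event word and emit prediction sentences."""
    times = {}  # event word ("appears"/"disappears") -> observed time values
    for sentence in observed:
        match = re.fullmatch(r"a red square (\w+) at time (\d+)", sentence)
        if match:
            times.setdefault(match.group(1), []).append(int(match.group(2)))
    predictions = []
    for event, ts in times.items():
        ts.sort()
        period = ts[1] - ts[0] if len(ts) > 1 else 1  # crude induction step
        t = ts[0]
        while t <= horizon:
            predictions.append(f"predicting that a red square {event} at time {t}")
            t += period
    return predictions

predictions = second_player(givens)

# Success is judged by string matching alone: prefix each given and compare.
print(all(f"predicting that {g}" in predictions for g in givens))  # True
```

Nothing in this sketch knows what a red square or a time is; the second player succeeds only because the given strings happen to follow a pattern that a simple procedure can reproduce.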

To a mind unspoiled by too much contact with philosophy, this point seems so elementary that there is little point in devoting a blog post to it. However, for the last 235 years philosophers have been piling on questions that make it seem totally inexplicable why the tools we use should succeed in achieving any such knowledge. There now follows the briefest of summaries of some of the more famous of these problems.

Arguably, it was Hume (1777) who got the ball rolling with two critical questions. The first is the problem of induction. Assume that a logical deduction from true premises produces true conclusions. However, empirical science requires another mechanism: induction. If one ball collides with another ball and transfers some of its momentum to the second ball, and if we always observe this to be the case, we can induce from this that there is a law of nature that requires the momentum to be transferred appropriately. Is there any justification for believing that there is such a law and that it can require nature to behave in a certain way? The only justification one can imagine is that induction has always succeeded in the past. In other words, we would need to use the principle of induction to provide any validity for the principle itself; a clearly circular and invalid argument. We cannot provide any reason why the foremost principle behind empirical science should be true or should yield any truths!

Hume (1777) also challenges one of the most ubiquitous concepts employed in science: causality. I see the two balls colliding and I believe that the first ball, or perhaps the collision of the balls, causes the second to move. Yet I do not observe the cause itself. There is only the first moving and then the second moving, but there is nothing visible "twixt them". Where is this mysterious cause hiding? The problem has continued to engage philosophers to this day (see, for example, Salmon 1998) with little resolution. It seems we don't understand and cannot justify the concept that may be seen as the very scaffolding of the scientific enterprise.

If we can't infer laws by observing objects, then at least we can know the objects themselves. However, Kant (1781) started down the slippery slope of challenging the observations themselves by arguing that there is no access to the object as it is in itself. We only have access to our image of the object. Yet that objection seems minor compared to his argument that the concepts of space and time themselves are not observations of an external reality. Rather, space and time are creations of the human mind. If we freely create these foundations of all scientific description, how could the descriptions themselves be truthful in any sense?

The assault on the truth-value of observations continued mercilessly. If we cannot know the objects of observation, at least we can know that we observe something, or that we are undergoing an experience as-if of observation. I may be dreaming that I observe facts, but at least I know that I seem to be observing the facts. The phenomenalist fact of observation should be unassailable. However, Sellars (1956) presents a devastating argument to the effect that there are two levels at which phenomenalist evidence must be judged. One is sub-verbal, incorrigible but absolutely useless in any verbal reasoning or scientific process. The other is the ability to articulate verbally a report about the sense-data experience. Unfortunately, the latter, while prime material for empirical investigation, is nevertheless a problematic cultural-linguistic product. Putnam (for example, 1981) focuses the attack on such verbal reports by pointing out that the observer would retract almost any word in the sentence if an expert were to pronounce the word misused. The individual, it would seem, cannot provide for the truth of any sentence - not even an observation statement. Finally, Wittgenstein (1953) debunks the defense that there could be some private language in which the observation could be considered true.

It would seem that the last bastion of scientific method is the deductive process itself. However, Quine (1951) challenged the idea that analytic statements are radically different in their truth values from synthetic statements. In fact, it seems that nobody can get a clear, uncontroversial grasp on "truth" at all. Moreover, even such cornerstones of logic as the belief that Modus Ponens is a process that transmits truth from its premises to its conclusion may easily be challenged. What justification is there, after all, for believing that Modus Ponens is a valid form of argument, and if there is some such principle that would justify it, what would be the justification of this latter principle?

How, then, if there is no truth to the rules inferred from observations, to concepts such as causality, space or time, to observation statements, or to the rules of deduction, could there be any truth to science? And if, following Putnam's NMA, science does not discover truth, how could it possibly be as effective as it is?

The answer presented by the Verbal Reasoning Game is that the whole system can work without truth. This goes beyond Instrumentalist philosophy of science, which denies the need for a metaphysical commitment but normally assumes the truth of many of the other concepts that have been doubted here. Without truth, the only question that needs to be answered is whether it is possible to create processes within the verbal game that produce optional sentences matching the given sentences. The answer is that it is clearly plausible.

In fact, it is even plausible that an artificial player could be written in the Wittgen language itself, one that creates the necessary Wittgen processes to match the given sentences in a Wittgen database.

There is no miracle involved in our technological prowess because it so happens, we don't know why, that the givens are not random but predictable, and moreover predictable by relatively easy procedures. Most importantly, technology works because this predictability has not changed in recorded human history. Apples may start falling upwards tomorrow for all we know, but as far as we know they have never done so till now. If this changes, of course, technology will no longer work and there won't be a question to ask.

The only question we have not answered is about causality - but that is for another post.

References:

Frege, G., 1892, 'On Sense and Reference', in P. Geach and M. Black (eds.), Translations from the Philosophical Writings of Gottlob Frege, Oxford: Blackwell.

Harnad, S., 2002, 'Symbol Grounding and the Origin of Language', in M. Scheutz (ed.), Computationalism: New Directions, Cambridge, MA: MIT Press, pp. 143-158.

Hume, D., 1777/1975, Enquiries concerning Human Understanding and concerning the Principles of Morals, reprinted from the posthumous edition of 1777 and edited with introduction, comparative table of contents, and analytical index by L. A. Selby-Bigge; third edition with text revised and notes by P. H. Nidditch, Oxford: Clarendon Press.

Kant, I., 1781/1963, The Critique of Pure Reason, translated by Norman Kemp Smith (Macmillan, 1963)

Putnam, H., 1981, Reason, Truth and History, Cambridge: Cambridge University Press.

Quine, W. V., 1951, ‘Two Dogmas of Empiricism’, Philosophical Review, 60: 20–43.

Quine, W. V., 1954, 'Carnap and Logical Truth', Synthese, 12: 350-374

Quine, W.V., 1955, ‘Posits and Reality’, repr. in The Ways of Paradox and Other Essays, Cambridge, MA: Harvard University Press, pp. 246–54.

Quine, W. V., 1991, “Two Dogmas in Retrospect”, Canadian Journal of Philosophy, 21: 265–74


Russell, B., 1905, “On Denoting”, in R. Marsh, Logic and Knowledge: Essays 1901–1950, London: George Allen and Unwin, 1956, 39–56.

Salmon, W., 1998, Causality and Explanation. Oxford: Oxford University Press.

Sellars, W., 1956, ‘Empiricism and the Philosophy of Mind’, Herbert Feigl and Michael Scriven, eds., Minnesota Studies in the Philosophy of Science, Volume I: The Foundations of Science and the Concepts of Psychology and Psychoanalysis. University of Minnesota Press, pp. 253-329

Wittgenstein, L., 1953, Philosophical Investigations, G.E.M. Anscombe and R. Rhees (eds.), G.E.M. Anscombe (trans.), Oxford: Blackwell.








