Wednesday, November 7, 2012

Computationalism for Logic and Epistemology

Computationalism is normally seen as relating to the philosophy of mind. It could be stated simply as the view that "the human mind is really just a kind of computer". There are two somewhat devastating critiques of this statement that I've addressed elsewhere. The first is that the human mind seems to include consciousness (or, for some, intentionality) of a kind that no one has uncontroversially shown how a computer might produce. The second is that there is nothing about a computer, as it is in itself, without human interpretation, that makes it radically different from any dynamically changing system. As such, a rock with its ever-changing internal electronic configuration might be just as good a model for the mind; an idea that would make computationalism a trivial thesis.

However, the paradigms that computers provide need not be restricted to the philosophy of mind. Computationalism should not retreat because of the difficulties it encountered in the philosophy of mind. Rather, the computationalist thesis should be extended to cover an even wider range of philosophical disciplines, including those that are lower down on the justificational hierarchy.

Of course, there is the Curry-Howard correspondence. But perhaps the isomorphism has not been taken far enough. The point is not merely that computers can implement proof systems. Rather, it is that the epistemology, or grounding, of logic, truth and perhaps epistemology itself should be replaced by a computational paradigm.

How does static, classical logical thinking differ from computational thinking? Here is a list of the ways computer scientists think about procedures as opposed to logical truths:

W1. Procedures are sequential rather than static.
W2. Procedures can enter infinite loops, abort or otherwise fail.
W3. Procedures are syntactic rather than semantic.
W4. Procedures operate on data as opposed to sentences existing stand-alone.
W5. There is a hierarchy of implementation rather than a hierarchy of justification.
W6. Computers always have a "what's executing now" as opposed to the open choice menu of logic.

The idea of W1 is this. Logic, mathematics, science and common-sense reasoning should be viewed the way a computer might execute them rather than as expressing abstract, static truths.

I want to argue that logic, the underpinning of our reasoning and of science, should be seen as a sequential procedure and not as a static "state of affairs" that asserts some permanent two-way relationship between elements.

Consider the statement that 3 x 4 = 12. This is normally understood to express a relationship between 3, 4 and 12 that holds in any direction. However, think how a computer might be programmed to express this. There would be a multiply function, two inputs (3 and 4) and an output. The function takes the inputs and proceeds through a sequence of steps that results in the output. (Peruzzi, 2004, makes this point.) The steps take finite time and the input chronologically precedes the output. The relationship is asymmetric. You cannot take the same procedure and give it a 12, hoping to get 3 x 4, or 3 and 4, as output(s). Moreover, there is no trivial relationship between the multiply and the divide function, as might be said to follow immediately from a definition of multiply in the static sense. Something happens that takes inputs and produces outputs. The computer process is procedural, sequential and also, as will be discussed, syntactic.
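
Here, for instance, is a minimal sketch in Python of what such a multiply procedure might look like. The repeated-addition strategy and the names are my own illustration, not anything essential to the point:

    def multiply(a, b):
        # Multiply by repeated addition: a sequence of steps, not a static relation.
        total = 0
        for _ in range(b):    # each pass through the loop is one step in time
            total = total + a
        return total          # the output appears only after all the steps finish

    print(multiply(3, 4))     # prints 12; the procedure only runs forward
    # Handing this same procedure a 12 cannot recover 3 and 4; inverting it
    # would require a different procedure altogether (division or factoring).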

This holds true for any piece of reasoning that you would want a computer to implement.

Consider the liar paradox. Typically it might be introduced using the following statement.

    (1) This sentence is false.

If (1) is true then the sentence is false, which means that (1) is false, which means that (1) is true, and so on. There are ways of restating the example that circumvent attempts to say that (1) is both true and false or that it is neither. However, these responses and the respective counter-responses are not the focus here.

How would a computer be programmed to deal with this problem? Imagine a computer function that takes statements as inputs, analyses them and outputs whether the statement is true or false. Assume that the function includes all the knowledge necessary to avoid a "don't know" answer. Alternatively, imagine it does not have such knowledge, and a third possible output value is indeed "don't know". What would it do with the liar paradox example (1)? The answer is that it would get into an infinite loop. It would never return an answer. It would not say that it is true, false, both or neither. It would simply get stuck. If somehow it had a parallel mechanism to abort the infinite loop, there would still be no answer - just an aborted function. This is the point made in W2.
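
To make this concrete, here is a minimal sketch of such an evaluator in Python. The self-referential representation and the function names are illustrative assumptions of mine, not a real logic library:

    def make_liar():
        # Build the liar sentence: a claim that its own evaluation is False.
        liar = {"kind": "is_false"}
        liar["target"] = liar        # the sentence refers to itself
        return liar

    def evaluate(sentence):
        # Return True or False by first evaluating whatever the sentence talks about.
        if sentence["kind"] == "is_false":
            return not evaluate(sentence["target"])
        raise ValueError("unknown sentence kind")

    # evaluate(make_liar()) never returns an answer: each call waits on another
    # call exactly like it, until the interpreter aborts the run (RecursionError).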

Gupta and Belnap (1993) have proposed what is known as the revision theory of truth for dealing with the liar paradox. This theory of truth includes, for the first time, a dynamically changing truth value, one that flips from one state to another. This idea needs to be extended to all truth valuation. There is always a dynamic process that either ends rather quickly in an answer one way or another, or gets into an infinite loop.
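
As a toy illustration of the flipping behaviour (a sketch of the idea, not Gupta and Belnap's formal apparatus), one revision step for the liar simply negates the current value, and iterating it never settles:

    def revise(value):
        # One revision step for the liar: "this sentence is false" is
        # true exactly when its current value is false.
        return not value

    history = [True]
    for _ in range(6):
        history.append(revise(history[-1]))
    print(history)    # [True, False, True, False, ...] - the value never settles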

Famously, computer processes cannot solve problems of the undecidable class, and one important subclass of these consists of problems that reduce to the halting problem. However, there are trivial ways of modifying computer programs so that they can settle halting questions for specific families of cases. A simple example is introducing a counter into a loop whose termination is otherwise undecidable, and breaking out of the loop when the counter reaches a specific number. This alteration transforms an undecidable program into one whose halting is trivially decidable.
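
A hedged sketch of the counter trick, with the bound and the names chosen purely for illustration:

    MAX_STEPS = 10000    # illustrative bound

    def bounded_run(step, done, state):
        # Run `step` until `done` says we have an answer, but give up after
        # MAX_STEPS iterations. Whether the unbounded loop would ever halt may
        # be undecidable; the bounded version trivially halts, at the cost of
        # a possible "gave up" verdict.
        for _ in range(MAX_STEPS):
            if done(state):
                return ("halted", state)
            state = step(state)
        return ("gave up", state)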

Consider the following case. For any looping program, standard logic execution (substitution, searching for premises etc.) is modified such that, for every step in a loop:
(a) a history of the state of the theorems and variable values is stored,
(b) the current state is compared to all previous states, searching for identity,
(c) the loop is broken if a match is found, since no progress can be made.
In the case of the liar paradox, such a modification to standard logic execution can detect that the evaluation has returned to a previous state and will (decidably) break, as sketched below.
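
A minimal sketch of steps (a)-(c), using the liar's flip-flop as the example (the function names and the state representation are mine):

    def run_with_cycle_check(step, state):
        # Apply `step` repeatedly, but stop if a state ever recurs: a repeated
        # state means the evaluation has entered a loop and cannot progress.
        seen = []                      # (a) history of every state so far
        while True:
            if state in seen:          # (b) compare the current state to all previous ones
                return ("no progress: state repeats", state)   # (c) break on a match
            seen.append(state)
            state = step(state)

    # For the liar, one step just flips the truth value, so the history becomes
    # [True, False] and the next state (True) repeats:
    print(run_with_cycle_check(lambda value: not value, True))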

Other variations review the history of states but, instead of searching for an identical state in step (b), search for some other indication that a solution will not be found.

The advantage of a procedural perspective in this case is that the logic sequence can be modified so that specific, yet common, classes of otherwise undecidable problems become decidable. This well-known fact about computing may perhaps undermine claims one sometimes sees that the human mind is capable of super-Turing computation - an impossibility if the human brain is just another Turing machine.

Consider a more typical sentence:

    (2) If A and B are true then A is true.

What makes (2) true? The answer usually given (call it A2.1) is that the meaning of the connecting words (all but A and B) makes it true. Consider another answer (call it A2.2): what makes (2) true is a procedure (or, perhaps, that you apply a procedure) that takes (2) as input, a procedure with conditional branching driven by symbols such as "and" and "then", which results in executing specific steps that come up with the answer "yes, (2) is true".
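
Here is a minimal sketch of answer A2.2 in Python. The nested-tuple representation and the names are illustrative choices of mine, not a claim about how such a procedure must be written:

    from itertools import product

    def holds(formula, assignment):
        # Evaluate a formula by branching on its leading symbol.
        op = formula[0]
        if op == "var":
            return assignment[formula[1]]
        if op == "and":       # branch selected by the word "and"
            return holds(formula[1], assignment) and holds(formula[2], assignment)
        if op == "if":        # branch selected by "if ... then"
            return (not holds(formula[1], assignment)) or holds(formula[2], assignment)
        raise ValueError(op)

    # (2): if A and B then A
    sentence2 = ("if", ("and", ("var", "A"), ("var", "B")), ("var", "A"))

    # The procedure answers "yes, (2) is true": every path through the branches
    # yields True, whatever values A and B take.
    print(all(holds(sentence2, {"A": a, "B": b})
              for a, b in product([True, False], repeat=2)))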

It's difficult to deny that A2.1 and A2.2 are actually answering subtly different interpretations of the question "What makes (2) true?". Please let this stand; these subtle differences lie at the heart of the difference between the static-semantic and the procedural-syntactic approach.

W3 has been the subject of more than one post in this blog. The first is probably this one.

W4 expresses the idea that sentences in any language, formal or natural, should be considered as the data of a procedural system. This undermines some of the strategies that form standard theories of meaning. The post "Stop Ignoring the Procedure!" deals with this.

The last two, I leave for later posts.

References

Gupta, A., and Belnap, N., (1993), The Revision Theory of Truth, Cambridge, MA: MIT Press.
Peruzzi, A., (2004), Causality in the Texture of Mind, in A. Peruzzi (ed.), Mind and Causality.
