Wednesday, November 14, 2012

What Do You Mean, There Is No Meaning?

Many of the posts on this blog have advocated looking at rationality from the perspective that there is no meaning or semantics, only procedures and syntax. On reflection, it seems a very strange thing to suggest. There are some obvious intuitions that would totally reject such an idea. How on earth could there be no meaning?

Here are some of the obvious objections:
O1. Do you mean that "pear" does not necessarily mean pear and could equally mean apple? All verbal activity would be nonsense if such chaos reigned.
O2. "But I can *feel* the meaning of a word!"
O3. Can you do any logic without truth attribution? Truth attribution requires truth value. Where do truth attributions come from if not semantics?
O4. It is obvious that you have to go beyond words as strings of symbols in order to do science. The sentences themselves are true by virtue of the meaning that connects them to non-verbal knowledge, as in, for example, seeing something is true using vision.

Some qualifications will take care of the main thrust of O1 and O3. Semantics is understood to be a relationship between a word (or words) and the real world (or possible worlds). As opposed to that, there is syntax. Syntax is a formal system of manipulation, combination or substitution of symbols that operates without reference to the meaning of the symbols. 

Semantics involves reference. The word "cats" refers to objects in the real world, namely cats. Oceans of ink have been spilled on the question of how this reference relationship is supposed to work; how it is justified, determined etc. The syntactic alternative is to say that, given a database of sentences, the symbol "cat" in one sentence accesses only the other sentences. (A simple example of what "accesses" means: pull out all the sentences in the database that contain the symbol "cat".) As opposed to the semantic notion of reference, the syntactic notion is very specific and can be made entirely explicit as long as an intrinsic rule for syntactic processing is accepted (see "Is Wittgen Justified?" for some discussion of intrinsic and explicit rules).
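The syntactic notion of "accesses" just described can be made fully explicit in a few lines. This is a minimal sketch (the function name and sample database are illustrative assumptions, not anything from the post): the symbol "cat" reaches only other sentences in the same database, never cats in the world.

```python
def accesses(database, symbol):
    """Return every sentence in the database that contains the symbol.
    A purely formal operation: no reference to anything outside the database."""
    return [s for s in database if symbol in s.split()]

database = [
    "the cat sat on the mat",
    "all roofs are red",
    "a cat is an animal",
]

print(accesses(database, "cat"))
# Only the two sentences containing the symbol "cat" are returned.
```

The entire "reference" relationship here is an effective procedure over strings, which is exactly what distinguishes it from the semantic notion.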

Some might argue that this kind of syntactic reference is just another kind of semantics. Using the term "semantics" in this sense is unobjectionable. However, there are two conditions that must be kept to. Firstly, that reference is only to other sentences, strings of symbols or wff's participating in the same game or database. Secondly, that the reference relationship is purely formal, that there is an "effective procedure", or explicit rule, that evaluates that relationship. It is because semantic reference is usually intended to refer from the database of symbols to some reality "out there", that the words "meaning", "semantics" etc. have been rejected in favor of a wider use of the word "syntax".

O1 is not a problem because there are purely syntactic ways of preventing chaos. If I were to claim "John's roof is green", there is no need to resort to the semantic notion that a "roof" is a roof and not an apple. Assume that my verbal system contains sentences with a high change cost, commitment level or the like. These will be referred to here as "anchor sentences" instead of beliefs, in order to avoid talk of propositions and propositional states. Assume there is a set of anchor sentences such as "all roofs are red", "an object that is red is not green" etc. together with a set of explicit rules regarding words or word parts such as "all", "'s" etc. as well as rules about matching and contradictions. In that case, a purely syntactic formal process will conclude that either the anchor sentences get changed or the new sentence about John's roof gets changed/rejected. Thus chaos is averted.
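The chaos-prevention mechanism can be sketched as a toy formal process. Everything here (the anchor set, the rules, the word-splitting) is an illustrative assumption, far cruder than any real verbal system, but it shows that flagging "John's roof is green" requires only string manipulation, never a semantic "roof":

```python
ANCHORS = {"all roofs are red", "an object that is red is not green"}

def derive_color(subject_phrase):
    """Explicit rule: from the anchor 'all roofs are red', any X's roof is red."""
    if "all roofs are red" in ANCHORS and subject_phrase.endswith("roof"):
        return "red"
    return None

def conflicts(new_sentence):
    """Purely formal contradiction check against the anchors."""
    words = new_sentence.split()              # e.g. ["John's", "roof", "is", "green"]
    subject, color = " ".join(words[:2]), words[-1]
    derived = derive_color(subject)
    # anchor rule: what is red is not green
    return derived == "red" and color == "green"

print(conflicts("John's roof is green"))
# True: either the anchors change or the new sentence is changed/rejected.
```

Nothing in this process consults the meaning of "roof"; matching and contradiction rules over symbols do all the work.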

Semantic theories are usually assumed to do the job of averting the chaos. However, Quine (1951) argued that the totality of sentences is required to justify a rejection, and Putnam (1981) argues that even this totality is not sufficient for determining that "roof" does not actually refer to apples. Therefore it is syntax that averts the chaos and not semantics.

Anchor sentences are not necessarily true or even unchangeable. For a given verbal game, they are taken as sentences to be modified only as a last resort. Observation sentences tend to be anchor sentences. Core logical inference procedures tend to be anchor sentences; firstly because it is psychologically very difficult to imagine an alternative state of affairs, and secondly because they play an important part in all cognitive successes - from common sense to science and technology.

O3 refers to the fact that truth labeling is normally associated with any discussion of logic. Take for example a small axiomatic system that includes the axiom "A->(B->A)". This would be part of a syntactic formal system. One might, however, find reference to the fact that truth tables can provide a semantic justification for the axiom. (Given the truth table for X->Y, the truth table for the axiom has all entries as true.) Yet the truth-table effective procedure that gives this result is not semantic in the sense objected to here, but is a purely formal system which therefore qualifies as a syntactic system. It is called semantic in this case firstly because it assigns truth values to both component and result and secondly because, relative to the closed axiomatic system, it is a model or external domain.
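The truth-table justification is itself an effective procedure, and writing it out makes the point vivid. A minimal sketch (function names are illustrative): check the axiom A->(B->A) over every assignment, a purely formal computation over symbols.

```python
from itertools import product

def implies(x, y):
    """The truth table for X->Y: false only when X is true and Y is false."""
    return (not x) or y

def is_tautology(formula):
    """Effective procedure: evaluate the formula on all truth assignments."""
    return all(formula(a, b) for a, b in product([True, False], repeat=2))

# The axiom A -> (B -> A)
axiom = lambda a, b: implies(a, implies(b, a))
print(is_tautology(axiom))  # True: every row of the truth table is true
```

The "semantic" justification thus reduces to running a terminating formal procedure, which is precisely why it qualifies as syntactic in the sense used here.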

A purely axiomatized logic does not actually care for any justification of its axioms, that is an external concern. However, one could say that the truth table formal system creates anchor formulae, here the axioms themselves for the logic. 

O2 objects that meaning is an inseparable component of how we think about a word or sentence, assuming we understand it. This is not denied. However, this is just "what it is like" (to use Nagel's 1974 phrase) to understand a sentence. It is a fact about consciousness, that understanding-qualia have this nature. Consciousness is a complex subject in which, it seems, no progress has been made. However, the syntactic system of the brain does not require postulating any causal input from the conscious aspects of mind. Therefore, while there might be a feeling associated with a word (perhaps it is a kind of observation of the process of the syntactic links between words and other words, or words and images), it need not play any part in the analysis of the workings of rationality.

If meaning is to be understood only in this consciousness or qualia sense, then no objection is made to suggesting that it exists. However, defined this way, it can be set aside from the current enterprise.

O4 objects that the verbal reasoning system cannot be the whole story. The verbal reasoning system refers either to the verbal component of an individual thinker or to an abstract reasoning system that is shared among multiple individuals. For the purposes of the present discussion, only an individual is considered. The following discussion is therefore explicitly in the domain of philosophy of mind. It should not be confused with the general discussion of a syntactic verbal system, which is understood as general philosophy or epistemology.

Verbal reasoning is just one syntactic game. Verbal reasoning includes scientific, mathematical and logical symbols as well as the words of natural language. However it does not include pictures, images, audio etc. Verbal reasoning is a closed syntactic system except for the fact that it relies on external systems that anchor some of its statements. Thus "there is a table before me" or "the dial points to 3" is asserted to be an anchor by a visual, nonverbal subsystem. Two points can be made about the visual subsystem as opposed to the verbal subsystem. Firstly, it, too, is syntactic, but within a different game. Secondly, some parts of the visual system are accessible to conscious experiencing of its working.

The visual system is also a syntactic system. There are formal rules that translate, perhaps chaotically or non-deterministically, the images into anchor-sentences. They might use background knowledge and concepts, memory, Canny edges, stereoscopic registration and who knows what, but there is no reason to suspect anything but a formal process that works on databases of images, and association rules. Thus the visual system is syntactic too. Its symbols however include pixels, electrical potential, the states of color-responsive neurons, and, finally, words. One of the outputs of the visual subsystem is the assertion of a sentence. Thus, for the purposes of the discussion, (without intending to cut nature at its joints,) the visual subsystem includes an innate linguistic component that blurts out, as it were, "I see a table".

The second point to be made about the visual subsystem, which applies to other subsystems of the mind, is that consciousness observes or experiences its workings too. There was never any intention to identify the verbal reasoning system with the conscious mind; rather, the anchoring process for some verbal assertions is non-conscious or innate. Thus we both experience certainty about sentences such as "there is a table before me" as well as experiencing the source of that certainty, namely, the visual experience. The claim is that reasoning is limited to the verbal subsystem, but not that it is all that we are.

Nevertheless, only the verbal game is reasoning. It is not the only contributor to knowledge, but the verbal subsystem is what reasoning is.


References:

Nagel, T., 1974, "What is it like to be a bat?", Philosophical Review, 83: 435–450.

Putnam, H., 1981, Reason, Truth and History, Cambridge: Cambridge University Press.

Quine, W. V., 1951, "Two Dogmas of Empiricism", Philosophical Review, 60: 20–43.

Wednesday, November 7, 2012

Computationalism for Logic and Epistemology

Computationalism is normally seen as relating to the philosophy of mind. It could be stated simply as the view that "the human mind is really just a kind of computer". There are two somewhat devastating critiques of this statement that I've addressed elsewhere. The first is that the human mind seems to include consciousness (or intentionality, for some) of a kind that no-one has succeeded in showing uncontroversially how a computer might produce. The second is that there is nothing about a computer, as it is in itself, without human interpretation, that makes it radically different from any dynamically changing system. As such, a rock with its ever-changing internal electronic configuration might be just as good a model for the mind; an idea that would make computationalism a trivial thesis.

However, the paradigms that computers provide need not be restricted to the philosophy of mind. Computationalism should not retreat because of the difficulties it encountered in the philosophy of mind. Rather, the computationalist thesis should be extended to cover an even wider range of philosophical disciplines, including those that are lower down on the justificational hierarchy.

Of course, there is the Curry-Howard correspondence. But perhaps the isomorphism has not been taken far enough. It is not just that computers can implement proof systems. Rather, it is that the epistemology or grounding of logic, truth and perhaps epistemology itself, should be replaced by a computational paradigm.

How does static, classic logical thinking differ from computational thinking? Here is a list of the ways computer scientists think about procedures as opposed to logical truths:

W1. Procedures are sequential rather than static.
W2. Procedures can enter infinite loops, abort or otherwise fail.
W3. Procedures are syntactic rather than semantic.
W4. Procedures operate on data as opposed to sentences existing stand-alone.
W5. There is a hierarchy of implementation rather than a hierarchy of justification.
W6. Computers always have a "what's executing now" as opposed to the open choice menu of logic.

The idea of W1 is this: logic, mathematics, science and common sense reasoning should be viewed the way a computer might execute them, rather than as expressing abstract, static truths.

I want to argue that logic, the underpinning of our reasoning and of science, should be seen as a sequential, procedural process and not as a static "state of affairs" that asserts some permanent two-way relationship between elements.

Consider the statement that 3 x 4 = 12. This is normally understood to express a relationship between 3, 4 and 12 that holds in any direction. However, think how a computer might be programmed to express this. There would be a multiply function, two inputs (3 and 4) and an output. The function takes the inputs and it proceeds through a sequence of steps which results in the output.  (Peruzzi, 2004, makes this point.) The steps take finite time and the input chronologically precedes the output. The relationship is asymmetric. You cannot take the same procedure and give it a 12, hoping to get 3 x 4, or 3 and 4 as output(s). Moreover, there is no trivial relationship between the multiply and the divide function as might be said to follow immediately from a definition of multiply in the static sense. Something happens that takes inputs and produces outputs. The computer process is procedural, sequential and also, as will be discussed, syntactic.
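The asymmetry described above can be sketched in code. This is an illustrative procedure (multiplication as repeated addition, one of many possible implementations, not anything the post prescribes): the inputs chronologically precede the output, and the procedure cannot be run "backwards" from 12 to recover 3 and 4.

```python
def multiply(m, n):
    """Multiplication as a sequence of steps: n successive additions of m."""
    total = 0
    for _ in range(n):      # each step takes finite time; the output comes last
        total += m
    return total

print(multiply(3, 4))  # 12 — but there is no multiply(12) giving back 3 and 4
```

Note that division would be an entirely separate procedure with its own sequence of steps; nothing about this function yields it "immediately from the definition" in the static sense.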

This holds true for any piece of reasoning that you would want a computer to implement.

Consider the liar paradox. Typically it might be introduced using the following statement.

    (1) This sentence is false.

If (1) is true then the sentence is false which means that (1) is false which means that (1) is true etc. There are ways of restating the example that circumvent attempts to say that (1) is both true and false or that it is neither. However, these responses and the respective counter responses are not the focus here.

How would a computer be programmed to deal with this problem? Imagine that there is a computer function that can take statements as inputs, performs an analysis and outputs whether the statement is true or false. Assume that the function includes all the knowledge necessary to avoid a "don't know" answer. Alternatively, imagine it does not have such knowledge, and a third possible output value is indeed "don't know". What would it do with the liar paradox example (1)? The answer is that it would get into an infinite loop. It would never return an answer. It would not say that it is true, false, both or neither. It would simply get stuck. If somehow it had a parallel mechanism to abort the infinite loop, there would still be no answer - just an aborted function. This is the point made in W2.

Gupta and Belnap (1993) have proposed what is known as the revision theory of truth for dealing with the liar paradox. This theory of truth includes, for the first time, a dynamically changing value for truth that flips from one state to another. This idea needs to be extended to all truth valuation. There is always a dynamic process that either ends rather quickly in an answer one way or another, or gets into an infinite loop.

Famously, computer processes cannot solve problems of the undecidable class, and one important subclass of these are those problems that reduce to the halting problem. However, there are trivial ways of modifying computer programs such that they can solve halting decisions for specific families of cases. A simple example of this is introducing a loop counter into an otherwise undecidable infinite loop that breaks when the counter reaches a specific number. This alteration transforms a program whose halting was undecidable into one whose halting is trivially decidable.
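The loop-counter modification can be sketched as follows (the search function and its predicate are illustrative stand-ins, not anything from the post): a search that might otherwise run forever is forced to halt after a fixed number of iterations.

```python
def search(predicate, max_steps=1000):
    """Search for the least n satisfying the predicate, with an added counter."""
    n = 0
    while not predicate(n):
        n += 1
        if n >= max_steps:      # the added counter makes halting trivial
            return None         # aborted: no answer, but no infinite loop either
    return n

print(search(lambda n: n * n == 49))   # 7
print(search(lambda n: False))         # None: the counter broke the loop
```

Without the counter, the second call would never return; with it, the program always halts, at the cost of sometimes returning no answer.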

Consider the following case. For any looping program, standard logic execution (substitution, searching for premises etc.) is modified such that for every step in a loop:
(a) a history of the state of the theorems and variable values is stored,
(b) the current state is compared to all previous states, searching for identity,
(c) break if a match is found, since no progress can be made.
In the case of the liar paradox, such a modification on standard logic execution can determine that the state has returned to a previous state and will (decidably) break.
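Steps (a)-(c) can be sketched concretely. In this illustrative version (the liar's revision rule, flipping the truth value, stands in for the full logic-execution machinery), a recurring state is detected and the loop decidably breaks:

```python
def evaluate_with_history(initial_state, step):
    """Run the revision step, keeping a history of states as in (a)-(c)."""
    history = [initial_state]                 # (a) store every state reached
    while True:
        state = step(history[-1])
        if state == history[-1]:
            return state                      # a fixed point: a genuine answer
        if state in history:                  # (b) compare with all previous states
            return None                       # (c) break: the state has recurred
        history.append(state)

liar_step = lambda value: not value           # "this sentence is false": flip
print(evaluate_with_history(True, liar_step)) # None: the True/False cycle is caught
```

A grounded sentence whose revision converges (e.g. a step that settles on a fixed value) returns that value, while the liar is decidably diagnosed as making no progress.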

Other examples include a review of the history of state that, instead of searching for identity in step (b), searches for some other indication that a solution will not be found.

The advantage that a procedural perspective has in this case is that the logic sequence can be modified to handle specific, yet common, classes of otherwise undecidable problems. This well known fact about computing may perhaps undermine claims one sometimes sees that the human mind is capable of super-Turing computation - an impossibility if the human brain is just another Turing machine.

Consider a more typical sentence:

    (2) If A and B are true then A is true.

What makes (2) true? The answer usually given (call it A2.1) is that the meaning of the connecting words (all but A and B) make it true. Consider another answer (call it A2.2). What makes (2) true is a procedure (or, perhaps, that you apply a procedure) that takes (2) as input; this procedure has some conditional branching, driven by symbols such as "and", "then", etc., which results in executing specific steps that come up with the answer "yes, (2) is true".

It's difficult to deny that A2.1 and A2.2 are actually answering subtly different interpretations of the question "What makes (2) true?". Let this stand: these subtle differences lie at the heart of the difference between the static-semantic and the procedural-syntactic approach.
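Answer A2.2 can be given a small sketch. The representation and names below are assumptions for illustration only: a procedure whose conditional branches are driven by the connective symbols processes (2) step by step and comes up with "yes".

```python
from itertools import product

def holds(sentence_form, assignment):
    """Evaluate a sentence form by branching on its leading connective symbol."""
    kind = sentence_form[0]
    if kind == "atom":
        return assignment[sentence_form[1]]
    if kind == "and":                         # branch driven by the symbol "and"
        return holds(sentence_form[1], assignment) and holds(sentence_form[2], assignment)
    if kind == "if":                          # branch driven by "if ... then ..."
        return (not holds(sentence_form[1], assignment)) or holds(sentence_form[2], assignment)

# (2): if (A and B) then A
sentence = ("if", ("and", ("atom", "A"), ("atom", "B")), ("atom", "A"))

# execute the steps for every assignment to A and B
result = all(holds(sentence, {"A": a, "B": b})
             for a, b in product([True, False], repeat=2))
print("yes, (2) is true" if result else "no")
```

On the A2.2 reading, what "makes" (2) true is nothing in the sentence itself but the execution of these branching steps on its symbols.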

W3 has been the subject of more than one post in this blog. The first is probably this one.

W4 expresses the idea that sentences in any language, formal or natural, should be considered as the data of a procedural system. This undermines some of the strategies that form standard theories of meaning. The post "Stop Ignoring the Procedure!" deals with this.

The last two, I leave for later posts.

References

Gupta, A., and Belnap, N., 1993, The Revision Theory of Truth, Cambridge, MA: MIT Press.
Peruzzi, A., 2004, "Causality in the texture of mind", in A. Peruzzi (ed.), Mind and Causality.

Tuesday, November 6, 2012

Are There Bad Arguments?


If there are no absolute standards of truth, there can never be such a thing as a "bad" argument. The normative nature of epistemological investigation is utterly undermined. What justification could you provide for arguing that any method of reasoning or coming to conclusions is poor or invalid, if there is no absolute, grounded basis for any truth or reason?

The post "Is Wittgen Justified?" asked whether a justification can exist for the intrinsic rule expressed by Wittgen, which was called the Association Intrinsic Rule. No answer was given and implications of both the possibility that it can be justified and the possibility that it cannot, were briefly explored. However, for the purposes of this post, only the position that there is no way to justify an intrinsic rule will be discussed.

A relativist epistemology faces the objection that there can be no basis for anything normative. By what standards can any argument be called "bad"? If there are no absolute facts or valid methods of inference, how could any argument fail to live up to some absolute standards? If epistemology is just a branch of psychology, a descriptive enterprise discovering how human subjects respond to visual stimuli by creating belief systems, then there is no room for judging belief-forming practices.

A relativist might respond by replacing categorical imperatives with hypothetical imperatives. Instead of "you should deduce B from ...", this should be replaced by "if you wish to conform to theory T, you should deduce B from ...". All justification is relative to some theory T, says the relativist. The absolutist might object that such statements are still absolute in form, and that the relativist is self-defeating. The relativist can counter that objection by showing that theory T is not just a premise but also an intrinsic rule together with explicit procedures expressed in terms of that intrinsic rule.

The intrinsic rule may not be justifiable, but within its system, it is possible to say "B is generated by these explicit procedures and not C". Thus within theory T, a set of statements specified using an intrinsic rule (say, Wittgen), you can say that an argument producing C is bad and another producing B is good. Hypothetical normativity is retained, even though it is not really of the form "if you want to follow T then you should ...", but rather "within T you should ...".

The conclusion is that there is a way of specifying some relative normativity. However, it would seem that the conclusion that normally gives relativism its name still holds. If there is no justification for any theory T, then there are lots of equally valid points of view. Everybody has their own narrative. One can reach two possible conclusions. Equal validity states that there are any number of equally valid points of view. Universal equal validity claims that all points of view are equally valid.

One can imagine someone attempting to reject universal equal validity by saying that while there are any number of theories T1, T2, ... Tn,  that does not mean that any argument is valid. A claim that "within T1, C is generated", when within T1, B is generated, is not valid. This argument claims that, without justifying theory T, accepting T means that a specific intrinsic rule is in place, specific generative procedures execute and specific outputs (B) are the result. 

However, a different attempt can be made that reduces accepting T to simply accepting an Association Intrinsic Rule (AIR) such as Wittgen. The point here is that accepting AIR is simply asserting that to accept that the label "A" is assigned to, say, the string "table", is just to assert that "A" is assigned to "table" and not "chair". T becomes so basic that one cannot imagine any meaning to reasoning where T is not accepted. Another way of saying it is that everybody, or almost everybody, accepts AIR.

What does normative mean in this context? It is a form of persuasion. It appeals to the person who accepts AIR and is incapable of imagining any form of reasoning without AIR. It does not say "you should accept that AIR, together with the premises L, should generate B and not C" but rather "you cannot help but accept that with premises L you generate B and not C". That the interlocutor may not accept L but rather L', is beside the point. What is pertinent is that a re-imagined form of normativity exists which is based on a descriptive premise (namely, that almost everybody cannot help but accept AIR).

The fact that the meaning of normative has changed does not make a difference to the continuation of the epistemological project. Once the most basic intrinsic rule is accepted, the task is to go forward from that point, showing the advantages of L1 rather than L2. Also, it paves the way for an epistemology that can accept the equal validity framework but that can still build tools for rejecting the universal equal validity framework.


Monday, November 5, 2012

Is Communication Possible?

If you create a system where "truth" is just a symbol within the system, if there is no semantics in the system and no meaning, if the system consists of nothing but syntactic procedural manipulation of formal rules, can there be communication between two such systems? If the reasoning process of a human being is expressible completely (except consciousness) by a syntactic process, what is occurring when they "communicate" with each other? Is there any way to determine that what one says, the other understands? If reason is a game where each player sets their own change values to different sentences, is this game inherently just a "private language"?

One motivation for the whole framework of semantics, including meaning and reference, is that if there are absolute facts about these things, then one can account for true communication between two rational individuals. Therefore, the question arises: if semantics is abandoned for a syntax-only system, does this mean that true communication is impossible?

There are two strategies for responding to these questions. The first strategy will focus on the fact that a human reasoning system is a syntactic system, which is, in principle, exposable to any other human syntactic systems. The second strategy will focus on the possibility of the creation by two syntactic systems, of a third syntactic system that they jointly manipulate.

We tend to think that an individual human thinker is automatically subject to problems of subjectivity. However, if you think in terms of a syntactic system the problem does not arise, at least not in principle. In theory, the human brain is exposable to external neurological observation. The changing states of the brain can be input as data to a second human (mind). Of course, we don't know how to use this data today, but there is no in-principle reason that a human brain's symbolic syntax - the sequence of changing brain states - could not be completely visible.

This assertion only holds true in a syntactic framework. From a semantic framework where meaning plays a critical role, even a full exposure of one brain will not yield any understanding of what is going on for the human being whose brain it is. There are two classes of reasons for this failure. The first involves mentality or consciousness. However, even for a hard-nosed materialist who denies the existence of consciousness, there is a second problem caused by the under-determination of meaning.

Consider the issue of consciousness. Either consciousness can influence the operations of the brain (interactionism) or it can't (epiphenomenalism). In the latter case, there is no problem for the syntactic approach. Syntax is no more than the dynamic development of symbols in time. These symbols can be implemented using concrete physical configurations of material, field strengths or both. Therefore a complete exposure of all the neurons at every moment in time along a sequence is a full description of the syntactic development of what is going on in the brain. According to this epiphenomenalist view, the fact that the conscious mind experiences these symbols in a rich subjective way, is of no relevance to the syntax of the system.

For the interactionist, the situation need be no worse. Assume that the conscious mind is able to make choices of one verbal path rather than another. However, assume that these decisions are restricted in the following ways. Firstly, the decisions thus made do not violate the restrictions on well formed formulae (wffs). Secondly, the conscious mind may only pick between generative options that would have been available anyway. In this case, the mind does not disturb the syntactic correctness of the symbolic system. If the brain is a syntactic reasoning system, it stays so even in the case of this limited interactionist scenario.

On the other hand, if consciousness need not be considered, then there still remains the question of the under-determination of meaning. How could we know which brain state refers to what? Of course, one could go through the same process as one might to learn any foreign language; comparing stimuli with resulting brain states to try and decipher the meanings. However, this may fail in practice because of the complexity of the task and in theory as pointed out by Quine (1951).

However, this problem need not concern the syntactic proceduralism presented in these posts. Neither meaning nor reference is required. There are no propositions that are the content of sentences, in any form - vocal or electronic. Once one discovers the intrinsic rule of the brain (not that this is presumed easy), the procedures, written in the system, can be interpreted using this intrinsic rule. There is therefore nothing, in principle, private about verbal reasoning. The practice may take millennia, but human beings need not be considered in their essence communication-incapable islands.

However, the remoteness of the technology required for this solution, leads to a more practical way of understanding inter-human communication. This option is a direct consequence of the syntactic nature of reasoning presented here. Both a human mind and a computer implement syntactic processing systems. However, there are other means of implementing a syntactic system. A group of people can together implement a syntactic system.

Imagine two people manipulating a set of magnetized plastic letters on a board. Assume the two people have agreed, internalized or trained themselves to follow some intrinsic rule regarding the manipulation of these plastic symbols or configurations. The dynamically changing configuration of plastic pieces is a syntactic procedural system. It exists separately from each of its two implementors yet is experienced by each and shared by the two together.

Extending the thought to a large number of people and widening the symbolic material to include vocalizations or written symbols, the same can be said of the culture shared by a society. The culture as a dynamically evolving configuration of symbols is a syntactic system too. It is experienced by each of the members of the community and perhaps that experiencing is subjective, but the syntactic system is itself not subjective. Thus the individuals do not so much communicate with one another as each observe/experience a joint evolving symbolic system.

References:


Quine, W.V., 1951, “Two Dogmas of Empiricism”, Philosophical Review, 60: 20–43.




Sunday, November 4, 2012

Stop Ignoring the Procedure!

Much of the philosophy of logic, language and meaning takes a static view of the well formed formula (wff), statement or proposition. As I've argued in previous posts, there is an alternative picture of the reasoning process. According to this dynamic alternative there is always some procedure, some rule following, that is assumed but rarely made explicit, that is required in order to transform axioms into theorems or to activate the effects of a sentence.

In this post, I would like to develop the idea further by looking at the concept of meaning in sentences. In order to illustrate the procedural approach, it will be contrasted with a classic static or metaphysical approach. Rather than burden the text here by considering the many and varied versions of the static approach, a simplified, rather sketchy, unified representative of the static approach will be described. The purpose is not to create a "straw man" out of the static alternative but simply to enable presentation of the procedural approach.

The static approach (as described here) takes a sentence (wff or proposition) and asks what makes that sentence true. The assumption is that there is something in the structure of the symbols themselves that provides the answer. That something can be called the meaning of the sentence. This meaning is achieved through reference and sense; sometimes taken compositionally and sometimes holistically. The procedural alternative, taken simplistically, denies that there is meaning in the sentence itself. Consideration of a sentence consists of procedures that are brought into execution. These procedures do something with the symbols of the sentence analogously to the way computer programs operate on data. 

It will be argued that looking at language using the procedural alternative provides an approach that is ultimately simpler and does not run into the kind of trouble that truth-oriented or other inherent-meaning static approaches ultimately run into.

The classic approach to language is often associated with some theory of truth. The meaning of a sentence is explicated in terms of what makes the sentence true. This raises the question of what "truth" means from a procedural syntactic perspective. The concept of truth is normally associated with semantics, and a syntactic approach which does not involve semantics will normally be expected to have no use for concepts that are constitutive of semantic theory, such as propositions, reference, meaning and truth. However, while the elements of this list are rejected in the form they are usually used, they may be transformed and pressed into service in a syntax-only framework. "Truth" can be a label that is manipulated within the verbal game of reason, just like any other label.

There are various ways of expressing the role that the symbol "true" plays in this game. One could try to translate the role of procedural truth into that of the classic approach. For example, observation statements would be labeled true in a deflationary sense: to say that they are true is just to say that they have a high change cost, that they should be included in the database, or simply to assert them. However, other statements, those that play some part in the axioms or procedures used to generate statements intended to match the observation statements, would be labeled "true" in a different sense. Perhaps these latter statements could be said to be true in a correspondence sense; they do not correspond to reality, whatever that is, but to other sentences, previously marked as true. Perhaps this should rather be called a coherence kind of truth, because they really do cohere with the previously marked sentences. However, this is not coherence in the circular and symmetric sense. In fact, the prefix "co-" in both correspondence and coherence is inappropriate, because the generative flow is mostly one-way. Perhaps a better name would be generative truth, for statements that earn their pay by participating in generative mechanisms that output statements whose truth is more like deflationary truth.

Alternatively, one could say that the symbol "truth" is the currency of the verbal game of reasoning. Initially, some sentences are given truth values, with both degree and certainty components to this truth value. As generative procedures are created, they are initially assigned low-certainty truth values, but as they succeed in the matching game, the truth currency flows towards them. Ultimately, even statements with very high truth certainty might lose truth currency in the service of avoiding inconsistency.
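This bookkeeping can be sketched in a few lines. Everything here, the class, the field names, and the update rule itself, is an illustrative assumption rather than anything specified above, and only the certainty component of the truth value is modeled:

```python
# Illustrative bookkeeping for "truth as currency"; the update rule and all
# names are assumptions made for this sketch, not part of the proposal.

class Statement:
    def __init__(self, text, certainty):
        self.text = text
        self.certainty = certainty  # roughly: the change cost of revising it

def reward_match(generator, observation, transfer=0.2):
    """When a generated statement matches an observation statement,
    truth currency flows towards the generative statement."""
    generator.certainty = min(1.0, generator.certainty
                                   + transfer * observation.certainty)

obs = Statement("the needle points to 7", certainty=0.9)  # high change cost
law = Statement("V = I * R", certainty=0.1)               # new generative statement
reward_match(law, obs)                                    # law's certainty rises
```

The flow is one-way by construction: matching an observation raises the generator's certainty, while the observation statement is left untouched.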

Thus, even in the discussion of the syntactic procedural alternative, the notion of truth will be used in describing the processing, but this usage is subtly different from the classic usage.

Consider an analytic sentence such as:


    (1) All bachelors are unmarried.

The classic approach asks, what makes (1) true? The usual answer (A1.1) is that there is a definition for "bachelor" that makes (1) analytically true. The procedural alternative (A1.2) says that there is a procedure that accesses a database that produces an intermediary version of (1):

    (1a) All not married men are not married.

The procedure then follows an algorithm that processes words such as "All", "not" and "are" and outputs the result that (1a) is true, which then leads it to output that (1) is true. The concept of meaning is not required for A1.2 unless the execution of memory lookups involved in A1.2 is considered to be meaning. A1.2 involves no semantics; only syntax.
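A1.2 can be sketched as a toy program. The definition table and the pattern-matching rule below are illustrative assumptions; the point is only that every step is string manipulation:

```python
# Sketch of A1.2: definition lookup followed by purely formal processing.
# The definition table and the pattern rule are illustrative assumptions.

definitions = {"bachelors": "not married men", "unmarried": "not married"}

def expand(sentence):
    """Database access: substitute defined symbols by their definitions."""
    for word, replacement in definitions.items():
        sentence = sentence.replace(word, replacement)
    return sentence

def all_p_are_p(sentence):
    """Toy formal rule: output true for 'All <P> ... are <P>' by string
    matching alone, without consulting any meaning."""
    body = sentence.removeprefix("All ").rstrip(".")
    if " are " not in body:
        return False
    subject, predicate = body.split(" are ", 1)
    return subject.startswith(predicate)

step_1a = expand("All bachelors are unmarried.")  # the intermediary (1a)
all_p_are_p(step_1a)                              # (1a), hence (1), labeled true
```

Nothing in either function refers beyond the symbols themselves; the "analyticity" of (1) is exhausted by the lookup plus the formal rule.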

Consider an example that might be used to introduce metaphysical discussion.

    (2) New York is on the eastern seaboard of the United States.

What makes (2) true? The metaphysical realist might answer (A2.1) that "New York" picks out an object in the real world and it is the state of affairs in the real world that makes (2) true. A2.1 differs quite significantly in its truth-making mechanism from A1.1. The procedural answer (A2.2), however, is far more similar to A1.2. It is that there is a database of assertions, including geographic-type assertions, such that predefined conditional branching causes execution of the procedure to result in the answer "(2) is true". No external world (or semantics) underwrites the execution of A2.2.
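A2.2 can likewise be sketched. The database contents and the lookup rule are toy assumptions:

```python
# Sketch of A2.2: the "truth" of (2) is a lookup against stored assertions,
# with no external world consulted. Database contents are toy assumptions.

geography_db = {
    "New York is on the eastern seaboard of the United States",
    "Los Angeles is on the western seaboard of the United States",
}

def evaluate(sentence):
    """Conditional branching over stored symbols only."""
    return "true" if sentence.rstrip(".") in geography_db else "not settled"

evaluate("New York is on the eastern seaboard of the United States.")  # "true"
```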

Consider the following three examples, usually introduced to challenge the metaphysical realist:

    (3) Harry Potter was accepted to Hogwarts.
    (4) 37 is prime.
    (5) Unicorns don't exist.

Harry Potter does not pick out an object in the real world, yet (3) is true. In (4), "37" does not pick out a physical object at all. And in order to assert (5), you might need to say that "unicorns" picks out a real class of objects, which would get you into trouble with the predicate.

The problems are caused by a need to see (3)-(5) as conforming in structure to (2). If (2) works by picking out an object and then determining whether predicates apply to it, so should the others. To common sense, this requirement seems surprising. Why should sentences such as (3) be required to derive their truth in the same way, using the same concept of reference, as (2)? Because the standard linguistic assumption is that meaning inheres in the words. The procedural response, however, is not so bound. The procedural response is that as soon as (3) is considered, a new set of branches is executed that effectively realizes that the rules here are different. The database that asserts the geographic facts of the American continent is not continuous with the database of assertions about wizards and muggles. The processing of (3) consists of realizing that a world of literary narrative is now under discussion, and the appropriate assertions are accessed. Similarly, (4) switches into an arithmetic mode, and (5) involves sentence parsing altogether quite different from that in (2). Why is this strategy of response not usually found in metaphysical or linguistic philosophy discussions? Perhaps because the existence of a separate processing layer is not explicitly acknowledged.
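The branching layer itself can be sketched. The classifier and databases below are crude illustrative assumptions; the point is only that different sentence types trigger different databases and sub-procedures:

```python
# Sketch of the branching layer: the procedure first classifies a sentence
# and only then selects a database and sub-procedure. The classifier and
# database contents are illustrative assumptions.

fiction_db = {"Harry Potter was accepted to Hogwarts"}
geography_db = {"New York is on the eastern seaboard of the United States"}

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def classify_and_evaluate(sentence):
    sentence = sentence.rstrip(".")
    words = sentence.split()
    if "Hogwarts" in sentence:                      # literary-narrative mode
        return sentence in fiction_db
    if words[0].isdigit() and "prime" in sentence:  # arithmetic mode
        return is_prime(int(words[0]))
    if sentence.endswith("don't exist"):            # existence claims: other parsing
        name = words[0]
        return all(name not in s for s in fiction_db | geography_db)
    return sentence in geography_db                 # default: geographic assertions

classify_and_evaluate("37 is prime")  # True, via the arithmetic branch
```

Each branch is a different procedure with its own database; no single reference relation is asked to cover all four sentence types.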

It is not that the static approach is wrong. Perhaps one day, the meaning guys will figure out a great story and solve all these problems. It is that there are two alternative ways of looking at such questions: one that involves static truths and relationships, and one that is procedural and dynamic. There is a procedure that must be executed whenever a sentence is presented. This procedure's existence is acknowledged, but usually only implicitly. The title of this blog is a call to stop ignoring this procedure. It is not possible to determine the truth of sentences in general without explicit study of this procedure and all its conditional sub-branching.

What then are sentences, if they have no intrinsic meaning? Sentences are the data of the linguistic procedures.

It is one thing to assert that there are procedures that appear out of the woodwork when a sentence is presented and then start executing on the data of the sentence. It is quite another to specify these procedures and study them in detail. What kind of entity are these procedures? The procedures themselves are explicit rules, specifiable in terms of other rules and the data they operate on. This answer makes the procedures no different, fundamentally, from sentences. They are both just the data of procedures. What ends the regress? The answer must be an intrinsic, i.e. non-explicit, rule. Wittgen, or the Association Intrinsic Rule, is proposed as what might play that role in language as well as reasoning. The validity of such an intrinsic rule is discussed in the post "Is Wittgen Justified?"

Nothing here is intended to suggest that all approaches to language try to find meaning as something inherent in the words of a sentence or its structure. Two obvious exceptions come to mind. Firstly, there is Strawson (1950), who sees the meaning of a sentence as defined by its use. For example, there are, in his opinion, "overtly fictional" uses of a sentence. There is a sociological way of understanding this distinction which might defy analysis. However, it is also possible to argue that it implies one set of rules that apply once a sentence is categorized one way, and another set of rules if it is categorized another way; in which case, this leads to the views presented here.

Another example that comes to mind is Chomsky's (see, for example, his 2000) internalist semantics. It is possible to read the focus on understanding language as intrinsically tied to a human mind constructing the language as part of a program that seeks to understand language only in the context of an engine that operates on that language. Meaning is generated by the way the human machine operates on the constituents of the sentence. Fodor's (1975) development of Chomsky's ideas in terms of a Language of Thought internal to the human mind also suggests this direction. However, it is important to note, firstly, that the specific procedures executing on the data are not made explicit (a grammar is not a procedure as such) and, secondly, that meaning and propositions still play an important role in specifying these theories.

Finally, it is important to stress that the view presented here is not intended as a contribution to a theory of mind or a theory of computation. The intention is to present a theory of language and of verbal reasoning. Even if procedures are to be the focus of the study, this does not mean that what is proposed is a study of the engine operating on the sentences. Rather, the thesis of these posts is that a study of the sentences themselves should be conducted by making explicit the procedures that are required to operate on them.

References

Chomsky, N., (2000), New Horizons in the Study of Language and Mind, Cambridge: Cambridge University Press.

Fodor, Jerry A. (1975). The Language of Thought, Cambridge, Massachusetts: Harvard University Press.

Strawson, P. (1950), On Referring, Mind, 59: 320–44.



Saturday, November 3, 2012

Is Wittgen Justified?


Previous posts have argued that all reasoning can be recreated by means of explicit rules that themselves are expressed in terms of a simple intrinsic rule. One possible form for the intrinsic rule is expressed by the Wittgen language. This will be referred to henceforth as the Association Intrinsic Rule (AIR).

The AIR can be expressed as the simple assertion that you can assign a label or definition to some string of symbols and then that label is assigned to those symbols. For example, you can define "A" as the word "table". Thereafter "A" is assigned to "table". You can ask "what is "A"?" and the answer will be "table". Of course you can try and reject AIR and ask: "How do I know that "A" is assigned to "table"? Perhaps it is "chair" instead." The only response is that nothing at all is coherent in any sense if you don't accept this basic operation of assigning and retrieving the assignment.
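As a sketch, the two operations of the AIR map onto a key-value store; a Python dictionary is one (assumed) concrete realization of the rule:

```python
# The AIR reduced to its two bare operations: assign a label to a string of
# symbols, and retrieve the assignment. A Python dict is one (assumed)
# concrete realization of the rule.

store = {}

def assign(label, symbols):
    store[label] = symbols

def retrieve(label):
    return store[label]

assign("A", "table")
retrieve("A")  # "table" -- provided, of course, the machinery does not break
```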

One could define a computer, in the abstract or empirically, as specified by the Wittgen language, which is an expression of the AIR. If you program assignments into that computer, it will generate new assignments by running AIR. You can always object and ask: how do I know that the computer will implement AIR? Perhaps it will break. By what criteria can I compare it to an ideal of AIR in order to determine whether it has malfunctioned? No answer is provided to that objection. The only response is that, if I assign "table" to the label "A" and I don't know whether, with the label "A", I will retrieve "table" or "chair", then nothing, absolutely nothing, is coherent. Nevertheless, the point stands: the objection has not been answered.

You can express all other rules, logic in all its forms, mathematics, empirical reasoning etc. explicitly in terms of the AIR but the AIR itself must be assumed. It is used not only as the base for reasoning but also as the means for generating outputs using the rules as well as determining that the rules are indeed being followed.

Some would say that AIR is true, self-evidently, analytically or by definition. Others would argue that AIR remains unjustified. There seems no way to resolve this conflict. Therefore, both options are considered. The implications of each option will be discussed. Additionally, even if no justification can be brought for AIR, some reasons for recommending its use nevertheless, are suggested.

If AIR is true, then one can say that it is true that some theorems follow from specific premises, where "follows from" is defined by explicit rules expressed using AIR. It is true that specific procedures expressed using AIR do indeed produce specific new assignments. It is true that, given a matching procedure expressed using AIR, the assignments thus generated match other assignments that we choose to classify as "observation statements". This would not mean that the explicit rules, such as, say, matching, variable substitution procedures, Modus Ponens, other basic axioms of logic, or any theory of science capable of generating matching procedures, could be labelled true. This list is just what it is: a set of explicit rules that pay for their board by generating statements that match other statements we want matched. Thus, following Quine's (1951) suggestion, all assertions (and procedures) get judged together as a whole, regardless of analytic/synthetic divisions.

If AIR is true, why not extend it? Why not accept that string matching is true, that the premises of logic are true, that there are analytic truths, etc.? After all, whatever reasoning might lead to saying that AIR is true might hold equally for this list. A number of responses may be offered as reasons for a minimization strategy that restricts the realm of truth to AIR alone. To give just one argument as an example: it may not be necessary to expand the field. If a reasonably workable account of our rational framework can be given by accepting only the truth of AIR, why stick one's neck out and claim reflection-free truth status for more?

For now, no more will be said of the option that AIR is true. The question is what can be said, if there is no justification for AIR.

First of all, if it is undecidable whether AIR is true, it is not possible to make an argument such as "if AIR is true, then..." and develop the "then" branch while always knowing that there is an "otherwise" branch. The reason is that an "if... then..." argument assumes at least the truth of AIR.

One can read Wittgenstein's (1953) famous §201 as arguing exactly this undecidability of AIR when he says: "This was our paradox: no course of action could be determined by a rule, because every course of action can be made out to accord with the rule."

AIR may be unjustifiable, but what choice is there but to use it? What sense can be made of the questions themselves, or of anything written here, without the ability to assign one symbol to another? The only alternative to accepting AIR and moving forward under its aegis is to fall silent. Nevertheless, even if there is no choice but to move forward, this is not a justification for AIR. It is an acceptance of the use of AIR, while realizing that no validation has been provided. This inability to live with the alternative is not an ontological argument for absolute facts. There is no argument for AIR; there is just no alternative.

The continuation of Wittgenstein's §201 can be read as endorsing this view: "It can be seen that there is a misunderstanding here.... What this shews is that there is a way of grasping a rule which is not an interpretation, but which is exhibited in what we call 'obeying the rule' and 'going against it' in actual cases." We can use AIR; we cannot philosophize about it.

It is inaccurate to call the foregoing an argument for AIR, because it does not provide the justification that is implied by the strict sense of an argument. It would be better to call it a recommendation.

If AIR cannot be justified, does it have to be so minimal? If we follow AIR for lack of choice, why not extend the argument to the basics of logic or maths? One can answer that once one moves beyond AIR, it is no longer possible to argue that there is no choice. One can imagine affirming the consequent or even some novel axiom of logic that produces seemingly absurd results. There is a difference between having no alternative and having a strong intuition that only one alternative is true.

Consider, for example, Kripke's (1982) argument that if you have never added a number greater than 57 before, then when presented with the problem "68 + 57", there is no way to decide against answering "5", on the basis of an interpretation of addition as meaning an operation whose output is 5 whenever either of the arguments is greater than 57, based on past experience with addition. However, within a syntactic system that assumes AIR, the argument does not get off the ground. Addition is specified as a procedure operating on strings of digits. The skepticism applies to the intrinsic rule and only by extension to examples such as these; which is probably Kripke's point anyway.
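As a sketch of what "addition as a procedure operating on strings of digits" might look like, here is schoolbook column addition carried out purely by table lookup on symbols (the table and the procedure are illustrative assumptions):

```python
# Kripke's "68 + 57", recast as an explicit procedure on strings of digits.
# Every step below is a symbol-table lookup; given AIR (stable assignment
# and retrieval), there is no room left for a quus-style reinterpretation.

DIGITS = "0123456789"

def add_strings(a, b):
    """Schoolbook column addition, right to left, on digit strings."""
    width = max(len(a), len(b))
    a, b = a.rjust(width, "0"), b.rjust(width, "0")
    result, carry = "", 0
    for da, db in zip(reversed(a), reversed(b)):
        carry, digit = divmod(DIGITS.index(da) + DIGITS.index(db) + carry, 10)
        result = DIGITS[digit] + result
    return ("1" + result) if carry else result

add_strings("68", "57")  # "125", not "5"
```

The procedure never consults "past experience"; it consults only the digit table and the carry, so the skeptical reinterpretation has no foothold unless it attacks the intrinsic rule itself.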

There are other angles from which one can create a recommendation for AIR. One could argue that a system that assumes AIR is just what we call reasoning. Justified or not, the process of reasoning itself is the procedure of symbol manipulation under some intrinsic rule. The basic form of reasoning that we are familiar with is the one that assumes AIR. Thus a system that implements AIR is what we choose to call a "reasoning system" - not meaning to imply anything about its truth or justification.

Again, one can ask why not extend the intrinsic base to include more of what we automatically consider reasoning, such as, again, logic or mathematics. The answer here is less clear than in the "no choice" argument. One could argue that false reasoning is still reasoning, whereas there is still some minimum below which there is no reasoning. Alternately, one can accept any non-minimal intrinsic rule, but see the current project as an attempt to systematize what we call reasoning, justified or not, in terms of some core that itself is required for the systematization project.

A third recommendation for AIR is that, like it or not, everyone, or almost everyone, works within its framework. The problem with this recommendation is that it assumes a far wider ontological base. Nevertheless, this strategy has the advantage that it might be usable for justifying the normative aspects expected from epistemology. Arguing for minimization is easier using this method of recommendation because clearly, if the goal is to include as wide a demographic of thinkers as possible, it is best to aim for as minimal a core as possible.

There are other intrinsic rules that might be suggested as alternatives to AIR.

For example, the basic register operations of a standard computer system might serve as an intrinsic rule in terms of which all other reasoning is explicitly implemented (programmed). AIR can be implemented in terms of these register operations on the one hand, and on the other, the same basic register operations can be explicitly implemented in terms of AIR. Nothing said here suggests that AIR is the only alternative for an intrinsic rule system. However, one could argue that even the first step of defining a register and its contents is already a symbolic assignment. In other words, AIR will be assumed, however the register system is developed. AIR is still the core.
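The two-way implementation claim can be sketched by defining toy register operations explicitly in terms of AIR-style assignment and retrieval; all names below are illustrative assumptions:

```python
# Sketch of the claim that a register machine already presupposes the AIR:
# toy register operations defined explicitly in terms of AIR-style
# assignment and retrieval. All names are illustrative assumptions.

store = {}  # the AIR substrate: labels assigned to strings of symbols

def assign(label, symbols):
    store[label] = symbols

def retrieve(label):
    return store[label]

# Register operations built on top: a register is just a reserved label.
def load(register, value):
    assign("reg:" + register, value)

def move(dst, src):
    assign("reg:" + dst, retrieve("reg:" + src))

load("R1", "table")
move("R2", "R1")
retrieve("reg:R2")  # "table"
```

Even the first step, naming a register and giving it contents, is already a symbolic assignment; the register system is AIR wearing different clothes.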

Another alternative intrinsic rule might be the basic substitution and matching system that is taught to any student of logic. This teaching process, viewed from the perspective of the procedures implementing logic (a perspective usually not made explicit in classic presentations of logic), is simply teaching the student to become an intuitive interpreter (in the computer-science sense of the word) for a logic language. However, again, substitution and matching, when presented carefully, will include AIR as its core.

In summary, AIR may be seen as justified and true, or may be taken as inherently unjustified but, for extra-philosophical reasons, to be assumed anyway. The point is that it is not necessary to resolve this question. Either way, one moves forward with AIR. For the believer, its justification carries forward into further stages of the analysis. For the skeptic, justification, abandoned at this starting point, cannot be revived magically at later stages. Either way, it is valuable to focus on the existence of a core intrinsic rule and to build the analysis of the practice of rationality on the basis of a bounded kernel one can keep in sight.

References

Kripke, S., 1982, Wittgenstein on Rules and Private Language, Oxford: Blackwell.

Quine, W.V., 1951, “Two Dogmas of Empiricism”, Philosophical Review, 60: 20–43.

Wittgenstein, L., 1953, Philosophical Investigations, translated by G. E. M. Anscombe, Oxford: Blackwell, 3rd edition, 1967.


Wednesday, October 31, 2012

The Ontological Argument for Facts and Truth

You know the Ontological Argument, the old proof for the existence of God. The modern mind finds it intuitively unsatisfying. Perhaps it is the ex nihilo nature of the argument that is so disturbing. Well, it seems that there is an even older argument, quite popular these days (not in the base and common sense), that tries to refute the skeptic (read: relativist, constructivist) and seems no less ex nihilo.

The Ontological Argument, very very roughly, says something like, there must be something that is the most perfect thing there is (or you can imagine something absolutely perfect), perfection must include the property of existence, therefore the most perfect must exist, therefore God exists!

There you have it. You get a very real, in fact potentially empirically efficacious, entity like God, as the output from an argument that inputs only logical premises. Seems like there is a category mistake somewhere. It seems ex nihilo in the sense that an argument using pure logic has no substance; certainly not when you consider the magnitude of the output that is expected to ride on it.

Now here is a way of solving that most difficult of problems, absolute relativism. Absolute relativism expresses skepticism about the existence of any absolute facts, that there is anything at all that we can confidently say is true, that there is anything that is true - whether we can know it or not, or that there is any argument that can be absolutely valid. To take one example: any assertion requires some reason or method for justifying that it is true. The method used must itself have a justification. However, the justification regress must either continue indefinitely, which means that there is no basis, or hit circularity (including a one-statement circularity) which is, by most lights, an invalid form of argument. 

How do you deal with the fact that by appealing to the standards required by rationality itself, it is impossible to meet those standards? How can you justify that there exist at least some absolute facts? 

That's where the Ontological Argument for Facts comes in. All you have to do is postulate a skeptic. The skeptic (once postulated) asserts that there are no absolute facts. Everything is relative. Something can only be true relative to some theory T, but T can never be absolutely grounded. (Theory T presumably includes both axiomatic assertions as well as mechanisms for deriving conclusions from premises). However, when the skeptic is busy asserting that "nothing is absolute" or "everything is true only relative to T", she is making a statement of absolute fact. If instead, the skeptic is saying "perhaps nothing is absolute", that too is saying that the "perhaps" is absolute. Suggesting "perhaps perhaps ... nothing is absolute" would require an infinite regress of the "perhaps". 

In conclusion, the skeptic must be incoherent. If the skeptic is incoherent, the skepticism is invalid. Therefore, absolute facts exist!

The argument is as old as Plato (Protagoras). However, it is being used to this day. The argument needs a name. I suggest: "The Ontological Argument for Facts and Truth".