Sunday, November 4, 2012

Stop Ignoring the Procedure!

Much of the philosophy of logic, language and meaning takes a static view of the well-formed formula (wff), statement or proposition. As I've argued in previous posts, there is an alternative picture of the reasoning process. According to this dynamic alternative there is always some procedure, some rule following, assumed but rarely made explicit, that is required in order to transform axioms into theorems or to activate the effects of a sentence.

In this post, I would like to develop the idea further by looking at the concept of meaning in sentences. In order to illustrate the procedural approach, it will be contrasted with a classic static or metaphysical approach. Rather than burden the text here by considering the many and varied versions of the static approach, a simplified, rather sketchy, unified representative of it will be described. The purpose is not to create a "straw man" out of the static alternative but simply to enable presentation of the procedural approach.

The static approach (as described here) takes a sentence (wff or proposition) and asks what makes that sentence true. The assumption is that there is something in the structure of the symbols themselves that provides the answer. That something can be called the meaning of the sentence. This meaning is achieved through reference and sense; sometimes taken compositionally and sometimes holistically. The procedural alternative, taken simplistically, denies that there is meaning in the sentence itself. Consideration of a sentence consists of procedures that are brought into execution. These procedures do something with the symbols of the sentence analogously to the way computer programs operate on data. 

It will be argued that looking at language through the procedural alternative provides an approach that is simpler and does not run into the kind of trouble that truth-oriented or other inherent-meaning static approaches ultimately do.

The classic approach to language is often associated with some theory of truth. The meaning of a sentence is explicated in terms of what makes the sentence true. This raises the question of what "truth" means from a procedural syntactic perspective. The concept of truth is normally associated with semantics, and a syntactic approach which does not involve semantics would normally be expected to have no use for concepts that are constitutive of semantic theory, such as propositions, reference, meaning and truth. However, while the elements of this list are rejected in the form in which they are usually used, they may be transformed and pressed into service in a syntax-only framework. "Truth" can be a label that is manipulated within the verbal game of reason, just like any other label.

There are various ways of expressing the role that the symbol "true" plays in this game. One could try to translate the role of procedural truth into that of the classic approach. For example, observation statements would be labeled true in a deflationary sense. To say that they are true is just to say they have a high change cost, to say they should be included in the database or simply to assert them. However, other statements that play some part in the axioms or procedures used to generate statements intended to match the observation statements would be labeled "true" in a different sense. Perhaps these latter statements could be said to be true in a correspondence sense; they do not correspond to reality, whatever that is, but they correspond to other sentences, previously marked as true. Perhaps this should rather be called a coherence kind of truth, because they really cohere with the previously marked sentences. However, this is not coherence in the circular and symmetric sense. In fact, the prefix "co-" in correspondence and coherence is inappropriate because the generative flow is mostly one-way. Perhaps a better name would be generative truth for statements that earn their pay by participating in generative mechanisms that output statements whose truth is more like deflationary truth.

Alternatively, one could say that the symbol "truth" is the currency of the verbal game of reasoning. Initially, some sentences are given truth values with both degree and certainty components to this truth value. As generative procedures are created they are initially assigned low-certainty truth values, but as they succeed in the matching game, the truth currency flows towards them. Ultimately, even statements with very high truth certainty might lose truth currency in the service of avoiding inconsistency.
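To make the currency metaphor concrete, here is a minimal sketch in Python. Every name, sentence and number in it is a hypothetical illustration rather than part of any worked-out proposal: each statement carries a truth value with degree and certainty components, and certainty flows toward a generative statement whenever its prediction matches a statement already labeled true.

    # Hypothetical sketch of "truth as currency": statements carry a degree
    # and a certainty, and certainty flows toward generative statements
    # whose predictions match statements already labeled true.
    from dataclasses import dataclass

    @dataclass
    class TruthValue:
        degree: float     # how strongly the statement is asserted
        certainty: float  # roughly, its change cost

    database = {
        "it rained this morning": TruthValue(1.0, 0.9),          # observation statement
        "low pressure fronts bring rain": TruthValue(1.0, 0.2),  # generative statement
    }

    def reward(generator: str, prediction: str) -> None:
        """If the generator's prediction matches a high-certainty entry,
        shift some truth currency toward the generator."""
        if prediction in database and database[prediction].certainty > 0.5:
            database[generator].certainty = min(1.0, database[generator].certainty + 0.1)

    reward("low pressure fronts bring rain", "it rained this morning")
    print(database["low pressure fronts bring rain"])  # certainty has increased

The only point of the sketch is that "true" works here as a manipulable label with a bookkeeping role, not as a semantic property read off the world.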

Thus, even in the discussion of the syntactic procedural alternative, the notion of truth will be used in describing the processing, but this usage is subtly different from the classic usage.

Consider an analytic sentence such as:


    (1) All bachelors are unmarried.

The classic approach asks, what makes (1) true? The usual answer (A1.1) is that there is a definition for "bachelor" that makes (1) analytically true. The procedural alternative (A1.2) says that there is a procedure that accesses a database and produces an intermediary version of (1):

    (1a) All not married men are not married.

The procedure then follows an algorithm that processes words such as "All", "not" and "are" and outputs the result that (1a) is true, which then leads it to output that (1) is true. The concept of meaning is not required for A1.2 unless the execution of memory lookups involved in A1.2 is considered to be meaning. A1.2 involves no semantics; only syntax.
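As a rough illustration of A1.2, the following sketch (its dictionary entries and function names are invented for this example) performs the memory lookup that produces (1a) and then applies a purely syntactic rule to sentences of the form "All X are Y":

    # Hypothetical sketch of A1.2: a definition lookup rewrites (1) into (1a),
    # and a syntactic check labels "All X are Y" true when Y already occurs in X.
    definitions = {
        "bachelors": "not married men",
        "unmarried": "not married",
    }

    def rewrite(sentence: str) -> str:
        """Replace defined terms by their definitions (a memory lookup only)."""
        for term, definition in definitions.items():
            sentence = sentence.replace(term, definition)
        return sentence

    def all_x_are_y_is_true(sentence: str) -> bool:
        """Purely syntactic rule: strip "All ", split on " are ", and label the
        sentence true when the predicate is already contained in the subject."""
        body = sentence[len("All "):].rstrip(".")
        subject, _, predicate = body.partition(" are ")
        return predicate in subject

    s1 = "All bachelors are unmarried."
    s1a = rewrite(s1)                     # "All not married men are not married."
    print(s1a, all_x_are_y_is_true(s1a))  # True, so (1) is labeled true

Nothing in the sketch appeals to meaning; there are only lookups and string manipulations.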

Consider an example that might be used to introduce metaphysical discussion.

    (2) New York is on the eastern seaboard of the United States.

What makes (2) true? The metaphysical realist might answer (A2.1) that "New York" picks out an object in the real world and it is the state of affairs in the real world that makes (2) true. A2.1 differs quite significantly in its truth-making mechanism from A1.1. However, the procedural answer (A2.2) is far more similar to A1.2. The procedural answer is that there is a database of assertions, including geographic-type assertions, such that predefined conditional branching causes execution of the procedure to result in the answer "(2) is true". No external world (or semantics) underwrites the execution of A2.2.
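A2.2 can be sketched in the same hypothetical style: a small database of assertions plus predefined branching, with nothing outside the database consulted.

    # Hypothetical sketch of A2.2: the answer comes from stored assertions and
    # conditional branching, not from contact with an external world.
    geographic_assertions = {
        ("New York", "is on", "the eastern seaboard of the United States"),
    }

    def evaluate(subject: str, relation: str, obj: str) -> str:
        if (subject, relation, obj) in geographic_assertions:
            return "true"
        return "not derivable from the database"

    print(evaluate("New York", "is on",
                   "the eastern seaboard of the United States"))  # "true"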

Consider the following three examples, usually introduced to challenge the metaphysical realist:

    (3) Harry Potter was accepted to Hogwarts.
    (4) 37 is prime.
    (5) Unicorns don't exist.

Harry Potter does not pick out an object in the real world, but (3) is true. (4) does not even pick out an object. In order to assert (5) you might need to say that "unicorns" picks out a real class of objects, which would then get you into trouble with the predicate that denies their existence.

The problems are caused by a need to see (3)-(5) as conforming in structure to (2). If (2) works by picking out an object and then determining whether predicates apply to it, so should the others. This requirement seems surprising from the standpoint of common sense. Why should it be required that sentences such as (3) derive their truth in the same way, using the same concepts of reference, as (2)? Because the standard linguistic assumption is that meaning inheres in the words. However, the procedural response is not so bound. The procedural response is that as soon as (3) is considered, a different set of branches is executed that effectively recognizes that the rules here are different. The database that asserts the geographic facts of the American continent is not continuous with the database of assertions about wizards and muggles. The processing of (3) consists of recognizing that a world of literary narrative is now under discussion, and the appropriate assertions are accessed. Similarly, (4) switches into an arithmetic mode and (5) involves sentence parsing altogether different from that in (2). Why is this strategy of response not usually found in metaphysical or linguistic philosophy discussions? Perhaps because the existence of a separate processing layer is not explicitly acknowledged.
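A minimal sketch of this routing, again with invented categories and rules, might look as follows. The only point is that the first step of processing selects which database or sub-procedure applies to the sentence.

    # Hypothetical sketch of routing: classification happens before evaluation,
    # and each category is handed to a different database or sub-procedure.
    def classify(sentence: str) -> str:
        if any(w in sentence for w in ("Harry Potter", "Hogwarts")):
            return "literary narrative"
        if any(ch.isdigit() for ch in sentence):
            return "arithmetic"
        if "don't exist" in sentence or "do not exist" in sentence:
            return "negative existential"
        return "empirical/geographic"

    procedures = {
        "literary narrative":   lambda s: "consult the narrative database for the relevant fiction",
        "arithmetic":           lambda s: "run an arithmetic routine (e.g. a primality test)",
        "negative existential": lambda s: "parse the sentence with the rules for existence claims",
        "empirical/geographic": lambda s: "consult the database of empirical assertions",
    }

    for s in ("New York is on the eastern seaboard of the United States.",
              "Harry Potter was accepted to Hogwarts.",
              "37 is prime.",
              "Unicorns don't exist."):
        mode = classify(s)
        print(mode, "->", procedures[mode](s))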

It is not that the static approach is wrong. Perhaps one day, the meaning guys will figure out a great story and solve all these problems. It is that there are two alternative ways of looking at these questions, one that involves static truths and relationships and one that is procedural and dynamic. There is a procedure that must be executed whenever a sentence is presented. This procedure's existence is acknowledged, but usually only implicitly. The title of this post is a call to stop ignoring this procedure. It is not possible to determine the truth of sentences in general without explicit study of this procedure and all its conditional sub-branching.

What then are sentences, if they have no intrinsic meaning? Sentences are the data of the linguistic procedures.

It is one thing to assert that there are procedures that appear out of the woodwork when a sentence is presented and then start executing on the data of the sentence. It is quite another to specify these procedures and study them in detail. What kind of entity are these procedures? The procedures themselves are explicit rules, specifiable in terms of other rules and the data they operate on. This answer makes the procedures no different fundamentally from sentences. They are both just the data of procedures. What ends the regress? The answer must be an intrinsic, i.e. non-explicit, rule. Wittgen, or the Association Intrinsic Rule, is proposed as what might play that role in language as well as reasoning. The validity of such an intrinsic rule is discussed in the post Is Wittgen Justified.
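The shape of the regress, and how it stops, can be pictured with a toy analogy. This is only an illustration and not an account of Wittgen or the Association Intrinsic Rule: the explicit rules are stored as data, and exactly one hard-coded loop, which is not itself stored as data, follows them.

    # Toy analogy only: explicit rules are data; the single built-in
    # interpreter below stands in for an intrinsic, non-explicit rule.
    explicit_rules = {
        "expand-bachelor":  ("replace", "bachelors", "not married men"),
        "expand-unmarried": ("replace", "unmarried", "not married"),
    }

    def intrinsic_step(rule_name: str, sentence: str) -> str:
        """The only step not represented as data: it just follows the named rule."""
        op, old, new = explicit_rules[rule_name]
        return sentence.replace(old, new) if op == "replace" else sentence

    s = "All bachelors are unmarried."
    for name in ("expand-bachelor", "expand-unmarried"):
        s = intrinsic_step(name, s)
    print(s)  # the explicit rules were data; only the loop and step were built in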

Nothing here is intended to suggest that all approaches to language try to find meaning as something inherent in the words of a sentence or its structure. Two obvious cases come to mind. Firstly, there is Strawson (1950), who sees the meaning of a sentence as defined by its use. For example, there are, in his opinion, "overtly fictional" uses of a sentence. There is a sociological way of understanding this distinction which might defy analysis. However, it is also possible to argue that it implies one set of rules that apply once a sentence is categorized one way and another set of rules if it is categorized another way; in which case, this leads to the views presented here.

Another example that comes to mind is Chomsky's internalist semantics (see, for example, his 2000). His focus on language as something intrinsically tied to the human mind that constructs it can be read as part of a program that seeks to understand language only in the context of an engine that operates on that language. Meaning is generated by the way the human machine operates on the constituents of the sentence. Fodor's (1975) development of Chomsky's ideas in terms of a Language of Thought internal to the human mind also suggests this direction. However, it is important to note, firstly, that the specific procedures executing on the data are not made explicit (a grammar is not a procedure as such) and, secondly, that meaning and propositions still play an important role in specifying these theories.

Finally, it is important to stress that the view presented here is not intended as a contribution to a theory of mind or a theory of computation. The intention is to present a theory of language and of verbal reasoning. Even if procedures are to be the focus of the study, this does not mean that what is proposed is a study of the engine operating on the sentences. Rather, the thesis of these posts is that a study of the sentences themselves should be conducted by making explicit the procedures that are required to operate on these sentences.

References

Chomsky, N. (2000), New Horizons in the Study of Language and Mind, Cambridge: Cambridge University Press.

Fodor, J. (1975), The Language of Thought, Cambridge, MA: Harvard University Press.

Strawson, P. (1950), On Referring, Mind, 59: 320–44.


