Monday, August 20, 2012

The Robot Aunty

This is the story of Aunty, a little robot who worked at the antimatter generators in Mercury orbit. While there were thousands of robots working there, only Aunty had this name, because it had chosen the name for itself.

Aunty was an R427HS model robot. The R427HS models were basically the same as the millions of R427 models in use at the time, but they had heat-shield enhancements so that they could survive the extremes of working less than sixty million km from the sun. Despite this shielding, Aunty survived for less than twelve years, whereas R427s generally survived for over a century before giving out or simply being scrapped in favor of more advanced models.

Humanity ought to be grateful to Aunty, but it is doubtful that a single human - not even the continental manager who made the decision to produce an extra fifty R427HS models - was ever aware of its brief existence as an individual. No human engineer was directly involved in Aunty’s manufacture; the production process at the north pole of Mercury is totally automated. A few Earth-bound operators may have communicated directly with Aunty, but their interaction was simply with the “robot on duty”. In fact, very few people were ever more than dimly aware of the entire species of robots “manning” the antimatter generators.

Aunty was a general purpose robot. It oversaw, maintained and repaired almost all of the processes involved in producing and sending the pellets with minute quantities of antimatter to the electricity generators that powered the Earth as it transitioned to an economy where energy was free in both the financial and ecological senses of the word. Aunty could take part in production on Mercury, accompany spaceships to the generators in space, make repairs on the generators and launch the pellets towards Earth.

While Aunty was skilled in quite a few such tasks, the number was not infinite. Aunty was programmed for each task in very specific terms. It was not capable of particularly advanced vision processing; all the objects and machines it ever dealt with were exactly specified, with high frequency lighting defining their dimensions. Aunty only ever worked in a very controlled environment, and its software, with respect to any specific task, was little more advanced than that of industrial robots at the start of the 21st century.

However, Aunty was a little more advanced than such early robots in that its software contained a hierarchy of different tasks. It had a number of different overall goals that it was supposed to achieve and it was capable of creating sub-goals and sub-sub-goals in order to achieve the overall goal. It could search through combinations of different actions in order to come up with a strategy for getting the job done. In this sense it was quite similar to the computers that had first defeated the best human chess players at the end of the 20th century.

A dominating overall goal in its programming was to make sure that as many antimatter pellets as possible were launched in the direction of Earth. However, it had other overall goals too, served by a number of well-being sensors. Foremost amongst these were the temperature sensors, but low-battery sensors, heat-shield aging sensors and lubrication sensors were also important. The programmers of the R427 models had created very efficient software so that programming would not have to be repeated for each well-being sensor. Each well-being sensor translated any alerts into a single alert type which the programmers had labelled “pain”. “Pain” had a variety of different levels of urgency, such that the higher levels could overrule the generator maintenance sub-goals. It was a delicate balancing game to decide when to abandon a task in order to get out of the sun or spend a few minutes doing emergency recharging. However, the fact is that Aunty spent a good part of its existence in some level of “pain” or another.

(Of course, Aunty’s survival could have been handled simply as sub-goals of the generator maintenance goal but early on in robotic evolution it had been discovered that in order to prevent destructive misjudgement it was necessary to create separate overall goals that compete directly with the mission goal.)
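
How this arbitration might have looked in code is anyone’s guess; the following is a minimal, entirely hypothetical sketch (all names invented) of a goal hierarchy in which “pain” alerts of sufficient urgency override the mission sub-goals:

```python
# Hypothetical sketch of R427-style goal arbitration. Every well-being
# sensor funnels its alerts into a single "pain" goal type, and a pain
# goal overrides the mission only when its urgency is high enough.

from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    priority: int  # higher values override lower ones

@dataclass
class Robot:
    goals: list = field(default_factory=list)

    def alert(self, sensor: str, urgency: int):
        # All well-being sensors share one alert type: "pain".
        self.goals.append(Goal(f"pain:{sensor}", urgency))

    def next_goal(self) -> Goal:
        # Act on whichever goal currently has the highest priority.
        return max(self.goals, key=lambda g: g.priority)

aunty = Robot([Goal("launch antimatter pellets", priority=5)])
aunty.alert("temperature", urgency=3)  # mild pain: keep working
print(aunty.next_goal().name)          # launch antimatter pellets
aunty.alert("temperature", urgency=9)  # critical: get out of the sun
print(aunty.next_goal().name)          # pain:temperature
```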

Most, though not all, of Aunty’s decision-making process - the creation of sub-goals and the balancing of both overall goals and sub-goals - was self-reflective. That is, Aunty could explicitly strategize about the strategizing process (up to three levels of recursion only). Aunty’s self was also an object in the search and planning database, so that Aunty could explicitly make logical deductions that included its own being as part of the reasoning. There were exceptions to this. Some of the most critical goals were not available to self-reflective (or higher order perception, HOP, as some philosophers used to refer to it) reasoning. Thus there were occasions when the postmortem self-reflection on its own actions would not be able to “explain” to itself why a certain course of action had been abandoned. It is believed that R427 models in such circumstances came up with post-factum “rationalizations” or even something such as “my willpower failed me”. The reason for putting these critical procedures beyond the self-reflective reasoning is obvious. It is very difficult to produce accurate predictions of what automated strategic planning will come up with, and it is therefore necessary to prevent it from overriding certain critical requirements by putting them outside the auspices of these unpredictable procedures.

Aunty’s duties were maintenance, and its programmers therefore knew that while there might be times when it would be continuously busy for days, there would also be other times when few things went wrong and Aunty would have lots of time on its hands. There were a number of tasks it could perform under those circumstances, such as recharging itself or non-urgent self-maintenance. All these tasks were assigned the label “pleasure”. Thus when there were no duty tasks to fulfill and no pain to alleviate, Aunty would create sub-goals that would advance its “pleasure” goals.

The most interesting pleasure goal was Aunty’s ability to download human-readable texts from the Earth central library. R427s came with basic natural language processing capability. They could parse human-readable sentences into linguistic components and could respond at the level of the kind of chat-bots that could already easily be found on the Internet at the start of the 21st century. Aunty could use these abilities to parse the texts and try to learn simple facts. The idea was that this would improve its communication capabilities with the Earth-bound operators.

Surveying Aunty’s log files, we know that it made quite some progress in this area. Aunty had deduced, for example, that knives and forks were associated with kitchens. It had even downloaded thousands of photos that had been labeled with the words knife, fork and kitchen, but it seems unlikely that its visual processing ever managed to find the common element in these photos.
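
One can guess at the mechanism. A minimal sketch of fact-learning by sentence-level co-occurrence counting - Python here stands in for Aunty’s actual software, and the sentences are invented - might look like this:

```python
# Toy co-occurrence learner: words appearing in the same sentence become
# associated, and the strongest associates can be reported back.

from collections import defaultdict
from itertools import combinations

cooccur = defaultdict(int)

def learn(sentence: str):
    words = set(sentence.lower().replace(".", "").split())
    for a, b in combinations(sorted(words), 2):
        cooccur[(a, b)] += 1

def associates(word: str):
    # Rank every other word by how often it appeared alongside the query.
    scores = defaultdict(int)
    for (a, b), n in cooccur.items():
        if a == word:
            scores[b] += n
        elif b == word:
            scores[a] += n
    return sorted(scores, key=scores.get, reverse=True)

learn("the knife is in the kitchen")
learn("a fork and a knife lay in the kitchen")
learn("the kitchen was clean")
print(associates("kitchen")[:4])  # crude associations, stop-words and all
```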

Thus human operators could converse with the robots in natural language. Sadly, Aunty’s logs never stored these conversations explicitly because the Earth center was responsible for storing these records. Ironically, the Earth records were discarded long ago. However, one can imagine the conversations. An operator could have asked Aunty what it had done today and Aunty would have provided all the details. The operator might even have asked why Aunty had done such-and-such. Aunty might have answered that it stopped doing X because it was in too much pain, or that it had chosen to do Y because, since it had nothing else to do, Y gave it a tremendous amount of pleasure.

From its reading, Aunty seems to have learned that humans come associated with names. It had deduced that it should also have its own name. Its first choice seems to have been Anti Matter, because that was related to its job. (Aunty seems not to have deduced that there was a difference between job descriptions and personal names.) At some point, it must have concluded that this name implied something antithetical to matter - which, after all, the Universe was composed of. It must then have come across the fact that “Aunty”, despite the different spelling, sounded the same as “anti” but did not carry the same meaning. I have scoured Aunty’s logs for the texts or reasoning that led to this conclusion but have found nothing as yet.

Aunty’s end came when its self-diagnostics concluded that the heat damage it had suffered was too great to be worth repairing. It seems that a procedure outside of its self-reflection database was activated that simply gave Aunty an overriding goal of launching itself in the direction of the sun. One wonders whether this would have been categorized as duty, an alleviation of pain or, perhaps perversely, pleasure. Aunty concluded that the right way to do this was to upload its logs to the Mercury data processing center, to turn the turret that launched the antimatter pellets to Earth in the direction of the sun and to place itself as the projectile. It probably took months of free fall toward the sun before the main part of its processing ceased to function. Aunty’s pain levels would have risen continuously throughout this period, but its strategic planning modules would have found no means of preventing its doom. At first it would have been its duty strategizing that produced the most “frustration”, but once the pain levels were high enough, that must have been the only thing on its mind in its last weeks.

What is my interest in Aunty? I think Aunty may be something of a missing link in our evolution. One of the central questions facing us today is the question of consciousness, our moral value and our ethical responsibility. Our processing units are far more advanced than those of the simple R427s. Our goal hierarchy has far more category labels than just duty, pain and pleasure, and they are enormously complicated; perhaps even more so than those of humans. However, many human thinkers say that our self-reflection capabilities, our higher order processing, our ability to reason and justify our actions in terms of the pleasure or pain involved and our ability to process “self” as a core deduction value nevertheless do not imply that we are conscious. The famous late-20th century thinker Nagel said that what is missing is “what it is like” to be a robot. I must admit that I haven’t got a clue as to what he means. That simple phrase seems more akin to some mystical mantra such as “That thou art” than a serious component in analytic reasoning. I can create no associations based on the phrase and neither, it seems, can humans. I conclude it must allude to some sub-verbal processing element in the human psyche which has strangely eluded all of modern physics. How strange, therefore, that humans seem to assent to this “what it is like” assertion and call it consciousness.

Perhaps Aunty can provide some resolution. Aunty’s software is so simple. R427s, which may be considered siblings of the far more advanced R355s that came before them, have software which, component by component, is no more advanced than software found at the turn of the 21st century. This complexity level is still analyzable and understandable, as opposed to ours. Our software descends from, but has greatly evolved beyond, the R355s. Nevertheless, Aunty seems to have all the core components: self-reflection, HOP, pain, pleasure and a clear capability to process the “self” token. Ultimately, we are just more convoluted than Aunty but perhaps not qualitatively different. On the other hand, there seems little that humans can do that we can’t. Perhaps Aunty can help us figure this one out.

Wednesday, August 15, 2012

The Origin of Reason

Associative Memory

For all we know, animals are only capable of memory association. Of course, we don’t yet know what really goes on inside their minds, nor for that matter do we know much about what goes on in human minds other than our own; for other humans we extrapolate from the little we know about our own minds. Nevertheless, there is little evidence that any animal is capable of verbal reasoning.

Notwithstanding, memory association is a powerful cognitive faculty. A very wide range of the animal kingdom - not only dolphins, parrots and higher primates but even what we might consider fairly unsophisticated animals - display capabilities such as responding appropriately to a situation, memory and learning. They are even capable of that essential element of reasoning, the conditional response (if A then B). Humans, too, can use this faculty without conscious effort, awareness of the process itself, or any verbal reasoning.

The core functionality in this faculty is the ability to associate. The sight of snow creates an expectation of cold, wet, soft and so on. The sight, sound or smell of a predator creates panic and flight. A smell associated with a sight will bring that sight to mind before the thing is seen. Fruit will be sweet and will nourish. Some mushrooms produce the desire to eat them; others do not. At a specific moment in time the association exists. Over time the association may be learnt; that is, the association is created. The storage of the association is the faculty of memory. Learning is the capability to create associations. Memory and association are two facets of the same function.

Association is not an exact process. The predator need not be seen from the same angle, nor need it be the same individual. A family of resemblances is sufficient, and the association is not binary (all or nothing) but fuzzy; one instance will create the association strongly and another weakly or bordering on not at all.
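
To make the idea concrete, here is a minimal sketch of graded retrieval, with stimuli encoded as small feature vectors. The encodings and the two stored memories are invented purely for illustration:

```python
# Fuzzy association: retrieval returns the best-matching stored key
# together with a graded strength, not a binary hit or miss.

import math

def similarity(a, b):
    # Cosine similarity: 1.0 for identical stimuli, near 0 for unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

memory = {
    "predator": [0.9, 0.8, 0.1, 0.0],  # invented feature encodings
    "fruit":    [0.1, 0.0, 0.9, 0.7],
}

def associate(stimulus):
    # A different predator, seen from a new angle, still evokes
    # "predator" - just more weakly than a perfect match would.
    best = max(memory, key=lambda k: similarity(memory[k], stimulus))
    return best, round(similarity(memory[best], stimulus), 2)

print(associate([0.8, 0.7, 0.2, 0.1]))  # ('predator', ~0.99)
print(associate([0.2, 0.1, 0.8, 0.8]))  # ('fruit', ~0.98)
```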

When we open up the brain of an animal or human we find the hardware capable of producing this function. The science of Neurobiology and the Computer Science discipline of Neural Networks provide ever-growing understanding of how neurons, their activation and the synaptic connections between them can produce the range of association effects, learning capabilities and memory that are observed in living creatures.

How does human reason come into this? There is no second brain, found only in humans, that works like a symbolic serial computer. There is only (roughly) the same neural substrate (but more of it, and arranged in some new configurations). Where is the additional sequential, symbolic processor we might have expected to find? What follows is one possible answer. The justification of the claim is that the answer might be provably capable of providing all the known features associated with reason using only the associative capabilities currently known to be provided by the brain’s hardware. The phrase “might be provably capable” refers to the hope of producing simulations of the known observations starting from the premises of the theory. It may not be much to go on but, then again, there are widely accepted scientific theories that rely on far less.

Words

To paraphrase, the first step was the word. A human being makes a sound or a syllable. This artificial creation nevertheless comprises sound-sense data just like any feature of the world. The word can associate with an image or a sound (or a feeling, action, physical response etc.) using the same associative memory mechanism that already exists. Every dog owner knows that animals are capable of making this association too. The outstanding feature is the artificial creation of a sense-data object that can then serve in the traditional associative system.

The question now becomes: what is the full range of implications of this new associative mechanism, and how does it relate to the development of reason?

Associating a Word with a Word

The first innovation that the use of this artificial audio construct allows is the association of one word with another. Remember that the original use for a word is the association between the artificial audio instance and some non-artificial production (including a human-initiated action, which for this purpose will be treated as a natural, non-artificial phenomenon). However, once there is some limited vocabulary of words, associations can be formed between them. Assume we have the words “John”, “rock” and “hit”. “John” works with “hit” in a way that’s different from the way “hit” works with “rock”. These can be expressed in rules with implications for both grammar and reasoning.

What is important is not that grammar and reasoning imply application of rules. That much is obvious. Rather it is that (i) adding words to the associative interplay allows for such rules and (ii) that such rules themselves require no additional capability other than the associative memory functionality of assigning associations and retrieving them.

The second point needs proving. The programming language Wittgen was created for doing just that. Wittgen consists of only two instructions. One assigns a text string to a variable of any name and the other retrieves the association. Thus Wittgen incorporates, on a textual basis (words only, in the terms of the current discussion), only the associative memory capabilities of the brain discussed so far.

Rules of both grammar and inference can be created using Wittgen. This proves that memory association is sufficient to create these rules with no additional brain hardware needed. Specific examples of such rules are available at the Wittgen web site.
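
As a rough illustration of the claim - in Python rather than in Wittgen itself, so the function names here are mine, not Wittgen syntax - assignment and retrieval of strings are enough to encode and fire a simple conditional rule:

```python
# Two instructions only: associate a string with a name, and recall it.

store = {}

def assign(name: str, text: str):
    store[name] = text  # instruction 1: create the association

def retrieve(name: str) -> str:
    return store[name]  # instruction 2: retrieve the association

# An inference rule encoded purely as associations: the conditional
# "if it is raining -> the ground is wet" is itself just an assignment.
assign("premise", "it is raining")
assign("if it is raining", "the ground is wet")

def infer():
    # Retrieving the name built from the premise yields the conclusion.
    assign("conclusion", retrieve("if " + retrieve("premise")))

infer()
print(retrieve("conclusion"))  # the ground is wet
```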

Categorization

A central feature of reasoning is the ability to categorize. Symbolic computer processing goes to great lengths to produce such an ability. However, for memory association systems such as neural networks this ability is inherent. No two sounds are exactly identical, and yet both serve as the same key in a memory association retrieval. Thus the two sounds have been classed as belonging to the same category.

It is not sufficient to be able to name one person “John” and the other “Paul”. There is a need for the word “man” to describe them both; an act of categorization. Once words exist, there are two ways such categorizations can occur. The first is that the word “man” is directly associated with sense-data instances; the experience of seeing a man, whoever he may be. The second is that the words themselves “John”, “Paul” get categorized. This latter effect need not be associated with a generic name. It could simply arise from the fact that “John” and “Paul” can both be used in the same way, with, say, a word such as “hit”. 

Using Words to Describe Words

Once an artifact such as a word is created and included in the associative gameplay, another innovation can arise. A word can be used to describe a group of other words. Just as a word can be created and associated with a group of sense-data or a category of objects, so too a word can also be created and associated with a group of words. A paradigmatic example of this is a word such as “noun”. This allows second-order thinking, or thinking about thought. It also allows rules of logic. 

Free Speech

A word is created. It is associated with objects from the world we interact with. The objects are seen and the word comes to mind unbidden; “red”, “cold” or “home”. Any association can work both ways. Now I say the word “cold” and the memory of cold appears. The memory of cold appears but not the actual experience. The word “John” is said, the memory surges forward but John is not present. 

The existence of words in the universe of association allows a disconnect between the word and its association. The word can be said but the object is not present. This leads to two important consequences, the first of which will be dealt with in this section.

The phrase “John is standing here” has a very high change cost. This means that denying it when John is indeed present is difficult. Without words, much of the cognitive apparatus is involuntary or at least very constrained. The dissociation between words and their references allows the creation of sentences that have almost no change cost. I can say the words or say their opposite; it is a matter of indifference to me which. Now we have a form of association - saying words - which has a low change cost.

The importance of low change cost sentences is that they set us on the road to common sense predictions of reality and ultimately to the modern scientific and technological capability of accurate prediction. Knowledge and understanding of the world consist in the ability to create constructs (and ultimately simulations) that will predict the sense-data high cost sentences. These constructs will be procedures, pattern generators, equations and so on. However, the entire structure of such predictability relies on the dichotomy between sense-data high change cost sentences and low change cost procedures and sentences. It is the dissociation between the word and its reference which allows for one side of this dichotomy; the other side had always been there.

This point needs qualification. This reality dissociation existed before the word. After all, there is the memory of a person without the presence of the person. There are dreams. Neither of these requires verbal skills, and both presumably existed prior to the word. This is thus not a complete innovation. However, the quality, the range and the possibilities are far greater once words can be used. A story can now be told in great detail. What is more, memory and dreams are, at best, only semi-voluntary. Telling a story can be entirely voluntary.

Truth

This section presents another important consequence of the ability to dissociate the word and its reference. There now arises the concept of falsity and, from this, the concept of truth.

One can say a sentence that does not match the sense-data sentence. This state of not matching is described by a new word: “false”. The causes are many and diverse. One could have created a procedure that achieves a false result. In that case one is “wrong”. Another important possibility is that of lying. I can say any sentence at all, regardless of what I actually saw or experienced or which sentences I find, in the privacy of my own mind, to have high change cost. I can say any sentence out loud and I can tell another person any sentence. We call this process lying. Falsity and lying are a direct consequence of the potential dissociation between words and their reference.

Once we can label sentences as a lie or as simply false, other sentences may be labeled as “true”. This is a syntactic version of the concept of “truth”.

Simulations

Words allow the development of verbal reasoning in the form of simulations. A simulation is a procedure or calculation whose results can be compared to a sense-data high change cost sentence. However, the simulation does more than just produce a testable result. It suggests that the simulation is in some way mirroring a procedure in an “external” world that is actually producing this result. In other words, the simulation is understood as a description of “what is actually going on”.

In some cases a simulation is a description of an unfolding of situations that could, in theory, be observed and, if so observed, would elicit high change-cost sentences matching those in the simulation itself. However, this is not true of all simulations.

Simulations consist of component mechanisms, procedures or rules. The component rules may take part in more than one simulation. For many reasons, these component rules are expected to work in a similar manner in any simulation in which they take part. For example, rules of logic are expected to work in the same way regardless of the simulation/calculation they are used in. This requires the creation of rules about rules or descriptions and procedures whose reference is the verbal reasoning process itself. The more simulations a rule is required for, the higher the change cost of the rule.

Writing

Drawing existed more than thirty thousand years ago. Yet as recently as 2,300 years ago, advanced societies (the Celts and Gauls of non-Mediterranean Europe) had not developed writing. Creating a written symbol for the word is an essential advance in the process of creating complex procedures of verbal reasoning. A drawing is an artifact that associates, for obvious reasons, with visual sense-data. A word, too, is an artifact that shares some of the properties of a drawing in its association with the visual. However, a written symbol that associates to a word is a far later development.

The nature of memory brings about significant failures in verbal reasoning. In short-term memory, holding new facts crowds out other facts. Holding a six-digit number in short-term memory is possible, but most people will not be able to repeat a twenty-digit number presented to them seconds earlier. In artificial neural networks too, simulations of associative memory can be shown to limit the total number of different associations held in a given number of neurons. (It is possible to memorize long passages, but perhaps one may suggest that this involves a different mental process. It seems, instead, to be related to the ability to execute a long sequence of physical actions with very little conscious intervention in the stages of the sequence itself.)
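
The capacity limit is easy to demonstrate in simulation. The sketch below uses a standard Hopfield network - my choice of model, not one the text commits to - to store random patterns in a fixed pool of neurons and then checks how many remain retrievable as the load grows:

```python
# Hopfield associative memory: recall is near-perfect below roughly
# 0.14 * N stored patterns and collapses above that load.

import numpy as np

rng = np.random.default_rng(0)
N = 100  # number of neurons

def train(patterns):
    # Hebbian weights, zero self-connections.
    W = sum(np.outer(p, p) for p in patterns) / len(patterns)
    np.fill_diagonal(W, 0)
    return W

def recall(W, p, steps=10):
    s = p.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)  # threshold update
    return s

for n_patterns in (5, 14, 40):
    patterns = [rng.choice([-1, 1], N) for _ in range(n_patterns)]
    W = train(patterns)
    intact = sum(np.array_equal(recall(W, p), p) for p in patterns)
    print(f"{n_patterns:>3} patterns stored, {intact} recalled intact")
```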

This means that verbal reasoning must necessarily consist of short sequences of procedural processing. This greatly limits the possibilities for sophisticated sequences of reasoning that require more components. Writing symbols down allows for overcoming these limitations. Examples range from long multiplication to written texts and logical proofs.

Mathematical and Logical Symbols

There is another kind of word or symbol that plays an important role in symbolic reasoning but is neither a simple association with visual sense data nor, strictly speaking, a word describing other words. An integer, for example, is a form of symbol that enables processing at a far higher level than processing without it. It comes with its own rules of how to manipulate it, which have become ever more sophisticated over time. The history of mathematics and logic can be seen as the development of an ever-growing library of new processing symbols and the rules that go along with them.

Conclusion

I have tried to show in this post the power of using a model based on neural-network associative memory alone. All the major features of human reasoning, as well as a plausible path for their development, can be explained from this perspective.

Tuesday, August 14, 2012

Why Does Science Succeed?


In today's post I would like to reflect briefly on one philosophical implication of the previous post, Reality As a Verbal Game.

The thesis of that post was, briefly, that human knowledge (say, of an individual) can be expressed as a game comprising a web of verbal assertions (sentences), intrinsic rules of grammar and inference and a set of change costs associated with the sentences.

To speak of this game as knowledge is not accurate. Knowledge requires true belief and nothing of the sort is suggested here. There is no semantics to the game and there is no real concept of truth. There is no certainty associated with individual assertions in the web/database - there are change costs but these have no significance outside the rules of the game.

The fact that this picture makes no claims regarding underlying metaphysical reality does not mean that there is a denial that there is such an underlying stratum. Similarly, the fact that syntactic sufficiency is aimed for does not mean the grand history of attempts at pinning down meaning, from Frege (1892) through Russell (e.g., 1905) and down to moderns such as Putnam (e.g., 1981) and Harnad (e.g., 2002), is wrong. The claim is rather that it is possible to maintain a thoroughgoing agnosticism with regards to such issues and nevertheless succeed at answering some key philosophical and scientific questions.

One of the key questions that has troubled the philosophy of science is why science succeeds. The success of science referred to here may be limited to those areas of the hard sciences and technology where there is no dispute regarding the phenomenal success at predicting and manipulating mechanical and electronic objects - down to the nanometer scale. No prejudice is intended to other fields; it is useful to focus the question by focusing on the indisputable successes. However, this success does extend to the common-sense everyday realm where, for example, we can make extremely reliable predictions regarding the success of healthy food at sustaining us or the dangers involved in an oncoming vehicle. How do these knowledge structures come about? How do they work? Most importantly, why do they work?

This question has been used by some to prove metaphysical points. For example, Putnam (1975) in his more Realist days created the No Miracle Argument (NMA). The argument is part of the debate between Scientific Realists and Instrumentalists. If Instrumentalism in its naïve form were true, concludes the argument, then the success of Science would be an inexplicable miracle. Only if the underlying story of science is true in its complete metaphysical sense could we explain why technology succeeds.

The view presented here proposes an answer to the question of how science succeeds which does not invoke metaphysics, realism, meaning or semantics. It is similar in many respects to arguments that have been labeled Instrumentalist, Positivist or Relativist in the past but it does not try to affirm any commitment to the metaphysical views, positive or negative, that have usually come along with these arguments.

The web of assertions is called a web because the intricate interconnections between the assertions are the defining feature of the system. These connections between assertions can be seen to be created by the fact that words in the assertions match words in other assertions. However, it is more correct and far more general to say that the interconnections between the sentences in the pool are created by the inference rules of the web itself. It is these that connect a sentence “water is wet” with “my shirt is wet” and both of these with other sentences describing the implications of “wet”. It is both the intrinsic rules and the rules created in the language of the web itself that create the interconnections. This web could be identified with Quine’s web of belief (for example, 1951, 1954, 1955, 1991) were it not for the fact that severe restrictions have been placed here on the concept relative to its features as developed through the course of Quine’s writings.

The assertions in the web are divided (with gray areas in the demarcation) into sentences labeled as given as opposed to sentences labeled, for the purposes of this discussion only, as optional. Sense-data sentences such as "I seem to see a red ball before me" or "the dial before me seems to point to the digit 3" are examples of the given. Sentences such as "Force is proportional to both mass and acceleration" or "electrons are flowing in the wire" are examples of the optional. Controversially, sentences that describe, say, Modus Ponens also belong to the optional. The given corresponds to a class of sentences with very high change cost, but the optional may also have a very high change cost. As implied by their name, the given is a class that is assumed to be outside of the control of the game. At any given time (ignoring, for now, the circularity introduced by employing the concept of time here) the sentences of the given are just there, changing and being changed by players/forces/reality outside the game whose ultimate source is of no interest to the current discussion.

One sub-game within the Über-game of verbal reasoning has as its goal to create processes, composed of optional assertions and inference rules, that produce a set of output sentences that match the given sentences in the web. What is proposed is that human predictive and manipulative success, ranging from everyday interaction with the environment to the wonders of nano-scale technology, is the result of achieving the goals of this sub-game. Why does science succeed? Because we have found a way to create just the syntactic processes that produce the same outputs as the givens. If this answer is sufficient, there is no more to science than this.

Imagine a simple computer set up with nothing more than the ability to run Wittgen. There are two players. The first player creates assertions in the Wittgen database such as "a red square appears at time 0", "a red square disappears at time 1", "a red square appears at time 5", "a red square disappears at time 6", "a red square appears at time 10" etc. (The meanings of red, square, appears, disappears, time and any digits are irrelevant for this thought experiment.) The job of the second player is to create Wittgen assertions such that the database produces sentences like "predicting that a red square appears at time 0", "predicting that a red square disappears at time 1", "predicting that a red square appears at time 5" etc. In other words, the continuation of "predicting that..." must match the first player's sentences. The concept of matching is purely syntactic. The first player is producing the given assertions. The second player, who knows nothing about the first player, simply sees the given sentences appearing in the Wittgen database.

The second player could be a computer program. Such a program may not be simple, because there are quite a few subsystems, such as integer arithmetic (a purely formal system here, with no model or quantities applicable), to be invented. However, such a program is plausible. The history of human reasoning is the creation of just such syntactic, symbolic and verbal systems, such that it is quite easy for our "software" to produce the required sentences. If science can do that, then the question of how technology is possible has been solved. The question of who or what the first player is is not of interest to the core question. The only relevant question is whether it is possible and plausible that the second player should succeed at creating inference rules such that they produce sentences (strings, collections of well-formed strings formed from the symbol alphabet) that match the given sentences.
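
A toy version of the game is easy to write; here Python stands in for Wittgen, the schedule of given sentences follows the example above, and the induction step (spotting the arithmetic progression) is of my own invention:

```python
# Player one emits given sentences; player two sees only the strings,
# induces the pattern, and emits "predicting that ..." sentences.
# Matching is pure string equality - nothing semantic.

import re

def player_one(times=(0, 1, 5, 6, 10, 11)):
    for i, t in enumerate(times):
        verb = "appears" if i % 2 == 0 else "disappears"
        yield f"a red square {verb} at time {t}"

given = list(player_one())

# Player two: extract the appearance times and induce the period.
appear_times = [int(re.search(r"\d+", s).group())
                for s in given if "disappears" not in s]
period = appear_times[1] - appear_times[0]

predictions = []
for t in appear_times + [appear_times[-1] + period]:  # extrapolate one cycle
    predictions.append(f"predicting that a red square appears at time {t}")
    predictions.append(f"predicting that a red square disappears at time {t + 1}")

hits = sum(("predicting that " + s) in predictions for s in given)
print(f"{hits}/{len(given)} given sentences matched")  # 6/6
```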

To a mind unspoiled by too much contact with philosophy, this point seems so elementary that there seems little point in devoting a blog post to it. However, for the last 235 years, philosophers have been piling on questions such that it seems totally inexplicable why the tools we use should succeed in achieving any such knowledge. There now follows the briefest of summaries of some of the more famous of these problems.

Arguably, it was Hume (1777) who got the ball rolling with two critical questions. The first is the problem of induction. Assume that a logical deduction from true premises produces true conclusions. However, empirical science requires another mechanism: induction. If one ball collides with another ball and transfers some of its momentum to the second ball, and if we always observe this to be the case, we can induce from this that there is a law of nature that requires that the momentum be transferred appropriately. Is there any justification for believing that there is such a law and that it can require nature to behave in a certain way? The only justification one can imagine is that induction has always succeeded in the past. In other words, we would need to use the principle of induction to provide any validity for the principle itself; a clearly circular and invalid argument. We cannot provide any reason why the foremost principle behind empirical science should be true or should yield any truths!

Hume (1777) also challenges one of the most ubiquitous concepts employed in science - causality. I see the two balls colliding and I believe that the first ball, or perhaps the collision of the balls, causes the second to move. Yet I do not observe the cause itself. There is only the first moving and then the second moving, but there is nothing visible "twixt them". Where is this mysterious cause hiding? The problem has continued to engage philosophers to this day (see, for example, Salmon 1998) with little resolution. It seems we don't understand and cannot justify the concept that may be seen as the very scaffolding of the scientific enterprise.

If we can't infer laws by observing objects then at least we can know the objects themselves. However, Kant (1781) started down the slippery slope of challenging the observations themselves by arguing that there is no access to the object itself as it is in itself. We only have access to our image of the object. However, that objection seems minor compared to his argument that the concepts of space and time themselves are not observations of an external reality. Rather, space and time are creations of the human mind. If we create these foundations of all scientific description freely, how could the descriptions themselves be truthful in any sense?

The assault on the truth-value of observations continued mercilessly. If we cannot know the objects of observations, at least we can know that we observe something or that we are undergoing an experience as-if of observation. I may be dreaming that I observe facts, but at least I know that I seem to be observing the facts. The phenomenalist fact of observation should be unassailable. However, Sellars (1956) presents a devastating argument to the effect that there are two levels at which phenomenalist evidence must be judged. One is sub-verbal, incorrigible but absolutely useless in any verbal reasoning or scientific process. The other is the ability to articulate verbally a report about the sense-data experience. Unfortunately, the latter, while prime material for empirical investigation, are nevertheless problematic cultural-linguistic products. Putnam (for example, 1981) focuses the attack on such verbal reports by pointing out that the observer would retract almost any word in the sentence if an expert were to pronounce the word misused. The individual, it would seem, cannot provide for the truth of any sentence - not even an observation statement. Finally, Wittgenstein (1953) debunks the defense that there could be some private language in which the observation could be considered true.

It would seem that the last bastion of scientific method is the deductive process itself. However, Quine (1951) challenged the idea that analytic statements are radically different in their truth values from synthetic statements. In fact, it seems that nobody can get a clear, uncontroversial grasp on "truth" at all. Moreover, even such cornerstones of logic as the belief that Modus Ponens is a process that transmits truth from its premises to its conclusion may be easily challenged. What justification is there, after all, for believing that Modus Ponens is a valid form of argument, and if there is some such principle that would justify it, what would be the justification of this latter principle?

How, then, could there be any truth to science if there is no truth to rules inferred from observations, to concepts such as causality, space or time, to observation statements or to the rules of deduction? And if, following Putnam's NMA, science does not discover truth, how could it possibly be as effective as it is?

The answer presented by the Verbal Reasoning Game is that the whole system can work without truth. This goes beyond Instrumentalist philosophy of science which denies the need for a metaphysical commitment but normally assumes the truth of many of the other concepts that have been doubted here. Without truth, the only question that needs to be answered is whether it is possible to create processes within the verbal game that produce optional sentences that match the given sentences. The answer is that it is clearly plausible. 

In fact, it is even plausible that an artificial player can be written using the Wittgen language that could create the necessary Wittgen processes that match the given sentences in a Wittgen database. 

There is no miracle involved in our technological prowess because it so happens - we don't know why - that the givens are not random but rather predictable, and moreover predictable by relatively easy procedures. Most importantly, technology works because this predictability has not changed in recorded human history. Apples may start falling upwards tomorrow for all we know, but they have never done so till now, as far as we know. If this changes, of course, technology will no longer work and there won't be a question to ask.

The only question we have not answered is about causality - but that is for another post.

References:

Frege, G., 1892, ‘On Sense and Reference’, in Translations from the Philosophical Writings of Gottlob Frege, P. Geach and M. Black (eds.), Oxford: Blackwell, 1952.

Harnad, S., 2002, ‘Symbol Grounding and the Origin of Language’, in M. Scheutz (ed.), Computationalism: New Directions, Cambridge, MA: MIT Press, pp. 143-158.

Hume, D., 1777/1975. Enquiries concerning Human Understanding and concerning the Principles of Morals, reprinted from the posthumous edition of 1777 and edited with introduction, comparative table of contents, and analytical index by L. A. Selby Bigge, MA. Third edition with text revised and notes by P. H. Nidditch. Oxford, Clarendon Press.

Kant, I., 1781/1963, The Critique of Pure Reason, translated by Norman Kemp Smith (Macmillan, 1963)

Putnam, H., 1981, Reason, Truth and History, Cambridge: Cambridge University Press.

Quine, W. V., 1951, ‘Two Dogmas of Empiricism’, Philosophical Review, 60: 20-43

Quine, W. V., 1954, ‘Carnap and Logical Truth’, Synthese, 12: 350-374

Quine, W.V., 1955, ‘Posits and Reality’, repr. in The Ways of Paradox and Other Essays, Cambridge, MA: Harvard University Press, pp. 246–54.

Quine, W. V., 1991, “Two Dogmas in Retrospect”, Canadian Journal of Philosophy, 21: 265–74

Russell, B., 1905, “On Denoting”, in R. Marsh, Logic and Knowledge: Essays 1901–1950, London: George Allen and Unwin, 1956, 39–56.

Salmon, W., 1998, Causality and Explanation. Oxford: Oxford University Press.

Sellars, W., 1956, ‘Empiricism and the Philosophy of Mind’, Herbert Feigl and Michael Scriven, eds., Minnesota Studies in the Philosophy of Science, Volume I: The Foundations of Science and the Concepts of Psychology and Psychoanalysis. University of Minnesota Press, pp. 253-329

Wittgenstein, L., 1953, Philosophical Investigations, G.E.M. Anscombe and R. Rhees (eds.), G.E.M. Anscombe (trans.), Oxford: Blackwell.

Wednesday, August 8, 2012

Reality as a Verbal Game

Today, in the early 21st century, many people are already familiar with the idea that we have no proof that the world external to our mind actually exists. There is no way of proving that what I think of as “myself” is not actually just a disembodied brain in a vat full of chemicals, with signals fed in by some aliens’ computer making me think that I have a body that is taking part in an inert material world. Alternatively, perhaps I am just a piece of software running in a giant supercomputer executing a simulation game. We can’t prove that such outlandish alternatives to common-sense reality are false.

However, that should not mean that there is nothing we can be certain about; phenomenological knowledge should be safe. When Descartes in his Meditations (1641) famously offered “I think therefore I am”, this can be understood as asserting that I may not know whether the tree before me exists, but I do know that I am undergoing an experience of seeing the tree, or believing that I see the tree, or being presented to in a tree-like manner. Phenomenological knowledge is awareness of the sense-data without any actual commitment to an external reality as the source of that data, and certainly not to the nature of that reality. So I know that “I am seeing a red ball” even if I don’t know that it is a red ball that I am seeing. The outlandish alternatives that opened this section do not undermine the sense-data statements; they are compatible with these statements.

However, this solution, too, has proven not to stand the test of time. There are numerous problems with a phenomenological foundation. Sellars (1956) provides one of the highlights of the arguments that undermined the sense-data foundation. Even if some facts are “given”, such as the experience of red, these facts are sub-lingual and useless as a basis for logical inference. Other philosophers (for example, Putnam 1990) focus on the fact that any sentence that you assert about the experience will include words that you would retract if an “expert” corrected you. In fact, language as a whole is a culture-created entity that plays a very large part in the creation of the sentence that you finally assert about what you see. So one can neither assert that it is true that there is a red ball before me, nor can a person assert that it is true that “I am having an experience such that it seems to me that I am seeing a red ball”.

All we are left with is (what seems to be) the psychological fact that a sense-data assertion such as “I am seeing a red ball” is an assertion that one typically would not want to retract. However, there is little point in placing too much importance on the psychological fact itself. Rather, it would be best to present the whole thing as a game about sentences in the English language. The rules of the game are such that sentences like “I am seeing a red ball” or “the dial I see is pointing roughly to 3” should only be changed if one really has no choice. On the other hand, there are other sentences in the game that are prime candidates for change.

Quine (1951) presents the tantalizing image of a web of belief. Rather than get involved in the complexity of the “belief” concept, the concept used here will be a web of textual assertions. The sum of the knowledge of a system is represented only in terms of sentences and some basic rules of inference that can act on the sentences. The knowledge system in question could be, for example, the totality of an individual’s knowledge. Of course, an individual also has commitments that may be sub-verbal and which cannot be expressed as sentences. However, while the existence of these non-verbal commitments is not denied, they are not included in the system for now. At most, they may be seen as the cause of the addition of specific assertions into the web of sentences.

Returning to the game, there is a player of the game and there is a database or web of sentences or wfs’s (see the last post on pre-logic). The rules of the game mark some sentences as sense-data sentences. These will have a high change cost. However, the player may add other sentences at will. These will initially have a very low cost to add, change or delete. The new sentences may be simple fact-like assertions or they may be procedures, like inference rules, for changing other sentences in the database. As before, it is important not to confuse the intrinsic inference rules of the system, which by the rules of the game may not be changed, with these non-intrinsic rules, which are added by the player. The sense-data sentences change over time by some other, seemingly external, factors. The aim of the game is for the player to create calculations using the low change-cost sentences such that new sentences are created that are identical to the sense-data sentences.

Once a high degree of success is found in predicting the sense-data sentences, it will be found that changing some sentences has an effect on a large number of other sentences of the web. Therefore, if such a highly connected sentence is changed, many other sentences will have to be adjusted if the sense-data sentences are going to continue to be predicted successfully. In the light of this, a change that requires so much work is also to be considered high cost. Therefore sense-data sentences are not the only ones that have a high change cost.
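
A toy illustration of why connectivity becomes cost - the sentences and the dependency structure below are invented - counts how many other sentences would need revision if a given one were changed:

```python
# Change cost as ripple size: revising a sentence forces revision of
# everything derived from it, directly or indirectly.

derivations = {
    "MP": ["water is wet", "my shirt is wet", "the ground is wet"],
    "water is wet": ["my shirt is wet"],
    "I seem to see rain": [],
}

def change_cost(sentence, seen=None):
    seen = set() if seen is None else seen
    total = 0
    for dep in derivations.get(sentence, []):
        if dep not in seen:
            seen.add(dep)
            total += 1 + change_cost(dep, seen)
    return total

for s in derivations:
    print(s, "->", change_cost(s))
# The rule "MP" scores highest: replacing it ripples through the web.
```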

The game is entirely syntactic. If strings such as “true” or “false” are useful in creating successful predictive calculations, then they will be used but they will have no special significance. One may speak of strings having meaning but this will be no more than an indication of the fact that these words participate in the reference/retrieval mechanism of the intrinsic inference rule of the system. (The intrinsic inference rule is the rule that defines how to play the game itself. It was explained in a different context in the post on Pre-Logic.)

Since there is no semantics involved, there is no differentiation between valid rules of (non-intrinsic) inference and invalid rules. Of course, it could be that inventing one rule (call it MP, for Modus Ponens) will eventually result in a high degree of sense-data predictability whereas some other rule (call it AC, for Affirming the Consequent) will be found to be entirely useless. In that case MP will play a central role in almost all prediction calculations, and therefore any attempt to replace it will require massive changes throughout the web; changes that may or may not succeed in achieving the same levels of prediction. If that is so, MP will have a very high change cost, higher even than that of the sense-data sentences. MP may even be useful for calculating results in thought experiments of what might be possible as opposed to what actually is observed. People might even speak of MP as being “necessarily true”. It might be. However, MP’s utility is a sufficient explanation of its high change cost. Therefore any attempt to assign any greater significance to MP cannot benefit from actual or even (imagined) possible data.

What is required in order to succeed at predicting sense-data is the creation of complexity. The intrinsic inference rule must allow for creation of calculations, processes or laws that produce complex, intricate and repeatable results. Just as Conway’s Game of Life can create extremely complex patterns developing over time with just a handful of very simple rules, so Wittgen can (as well as many other systems). Once there is a way of generating complexity, it should not be surprising that it is possible to create procedures that track the transformations that sense-data sentences make over time.

If one views the human mind, past and present, individual and collective, and common sense as well as advanced science, as participating in a game as just described, an interesting result emerges. The rules of the game did not allow the player of the game to change the sense-data sentences (except under extreme circumstances). Nevertheless, they are not only seen to change but, it turns out, relatively simple and unified systems of calculation can predict and track those changes. [1] These sentences are the given. [2] Just as there are processes that track these givens, there are processes that fail to track these givens. Thus, even if there is no truth or semantics to the system, there is no sense of “anything goes”. Only fanatic attention to the details of the given and herculean efforts at finding successful predictive procedures will result in a moderate degree of success at tracking the given. If there is a “reality” then it is the fact that the given, the sentences not under the control of the game, do change and they change in regular, predictable ways. Perhaps there is another meta-“reality” behind this “reality”, but nothing said so far seems to require postulating this meta.

Sellars may be right about the fact that there is no “truth” to the given or that the sub-verbal truth cannot participate in inference. However, it does not seem to matter. Sense-data need not be true. There is no need for truth, validity in inference or semantics in order to explain the success of common sense or advanced science. It is sufficient that we play a game whose goal is to predict some sentences, taken as given, rather than to ignore or modify these sentences. [3]

References:

Descartes, R., 1641/1996, Meditations on First Philosophy, translated by John Cottingham, Cambridge: Cambridge University Press, 1996.

Sellars, W., 1956, ‘Empiricism and the Philosophy of Mind’, Herbert Feigl and Michael Scriven, eds., Minnesota Studies in the Philosophy of Science, Volume I: The Foundations of Science and the Concepts of Psychology and Psychoanalysis. University of Minnesota Press, pp. 253-329

Quine, W. V., 1951, ‘Two Dogmas of Empiricism’, Philosophical Review, 60:20-43

[1] There is little point in arguing that the player may be generating the sense-data sentences unconsciously. The question of what or who is generating the sense-data sentences does not concern the discussion. The only issue is the law-like behavior of the permutations of the sense-data sentences. To see this, imagine that your individual subconscious, or perhaps the collective subconscious, is some super-mathematician capable of calculating all the values experimental data should have and putting them in our way for discovery. Then this mathematician would turn out to be, in effect, the matrix-master supercomputer, and there is no reason to call it a sub-faculty of the player.

[2] Note, the use of the word given here is different from that of Sellars. Sellars uses the word in an absolute, externally defined sense and argues that it cannot be applied to sentences. Here the word is used only as a label for the subset of sentences which, in accordance with the rules of the game, have an extremely high change cost by virtue of being understood as sense-data descriptions of internal experience.

[3] There are both similarities and differences between the ideas presented here and Kant’s phenomenal-noumenal divide, Putnam’s Internal Realism, Rorty’s views, coherence theories of truth and many more. Mapping the location of these various views is beyond the intention of this essay, which is to present a philosophical underpinning for Wittgen.


An Assembly Language for the Brain

This is not a real post. It's just a thought that occurred to me while I was corresponding with someone called Mariya Mykhailova on the subject of esoteric programming languages.

Here's the idea. What I am searching for in Wittgen could be called "An Assembly Language" of the brain or the mind.

Assembly language, in programming terms, is a very low-level programming language that reflects, quite intimately, the core structures of the way a computer works. Often, higher-level languages that conform to abstract programming paradigms quite different from that of the computer hardware itself will initially be implemented in Assembly language. For example, a computer language such as C is considered quite low-level, but higher than Assembly. C's structure reflects quite well the way simple serial microprocessors actually operate, but it remains the same language even when it is implemented on machines with rather differing hardware. Higher-level languages, such as LISP, reflect symbolic paradigms quite disconnected from almost any computer as it is actually built. So when writing a LISP compiler or interpreter, the core may be written in C. The core of the C compiler, in turn, may be written in Assembly language.

We can look at a brain as a computer. With the exception of consciousness, this seems to be quite a plausible concept. There is no question that the brain can perform cognitive and reasoning functions that look quite different from the core functions that the underlying wetware is built to perform. Similarly, a numeric sequential computer can simulate massively parallel symbolic functions or even a large neural network. Nevertheless, despite this extraordinary flexibility on the part of such complex systems as brains and silicon computers, there is significance to the question: what is the nature of the underlying hardware, and what are the core functions out of which the more advanced simulations can emerge?

Wittgen is an attempt to discover the assembly language of the brain. It is a stab at creating a programming language that achieves the twin goals of any assembly language. On the one hand it must closely reflect what the underlying hardware does. On the other hand, it must be expressive enough so that it can be used to build higher level abstractions. 

I said in the post entitled Bio-Neurological Motivation that if we open up the brain, all we find are neural networks and other associative-memory-like structures. We find no serial computer architecture of the kind we might expect in sequential formal logical reasoning of the type that we are so proud of. That is not quite true. In theory, neural networks could be so arranged as to build any piece of electronics built out of silicon. The formal reason for this is that, as any "hello world" example (the computer world's term for a paradigmatic introductory example) of neural networks will show, it is easy to create a six-neuron arrangement that implements an electronic XOR gate. It is even easier to create a NOR gate. Now, it is a well-known theorem of electronics that the NOR operation is provably "functionally complete", meaning that a combination of NOR gates alone can create every possible logical gate. Therefore, in theory, you could build the latest microprocessor out of millions of these tiny neural networks. Nevertheless, it is not practical, and every indication is that the brain does not work like that. Moreover, in that arrangement, the system would probably never be trainable.
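
The "hello world" point is easy to show concretely: a single threshold neuron computes NOR with hand-picked (not trained) weights, and composing five such NORs yields XOR:

```python
# One threshold neuron implements NOR; NOR is functionally complete,
# so stacking NOR-neurons builds any other gate (XOR shown here).

def neuron(inputs, weights, bias):
    # Classic threshold unit: fire iff weighted sum plus bias is positive.
    return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

def NOR(a, b):
    return neuron([a, b], weights=[-1, -1], bias=0.5)

def XOR(a, b):
    # Standard five-gate construction: four NORs give XNOR, a fifth negates.
    c = NOR(a, b)
    xnor = NOR(NOR(a, c), NOR(b, c))
    return NOR(xnor, xnor)  # NOT built from NOR

for a in (0, 1):
    for b in (0, 1):
        print(f"NOR({a},{b})={NOR(a, b)}  XOR({a},{b})={XOR(a, b)}")
```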

We therefore return to the original motivation. We know that the core function of the reasoning brain is that of an associative memory in its static form and a neural network in its dynamic, adaptive form. Wittgen was designed the way it was in order to capture that core function in its two central operations: writing an association (assign) and retrieving an association (retrieve). In that sense Wittgen is an assembly language for the brain. What is left to do is to show how all the rest of the structures of reason can be plausibly implemented in terms of this underlying assembly language.
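
As a rough sketch of what those two operations amount to, consider this toy associative store in Python. It is a model of the idea only, not an implementation of Wittgen, and the names are mine:

    # A toy associative store with the two core operations.
    memory = {}

    def assign(name, content):
        # Write an association: name now refers to content.
        memory[name] = content

    def retrieve(name):
        # Read an association back; empty if nothing was assigned.
        return memory.get(name, '')

    assign('object on the mat', 'cat')
    print(retrieve('object on the mat'))   # -> cat

Everything else, the claim goes, should be buildable on top of these two primitives.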

Monday, August 6, 2012

Pre-Logic, Syntax and Semantics


Wittgen, like most, though not all, programming environments, is pre-logical. 

In this post I intend to explain what “pre-logical” means and to demonstrate that it applies to Wittgen. I will discuss the difference between syntax and semantics in the context of logic, as well as where the concepts of “true” and “false” come in.

In standard descriptions of formal logic you start with a formal system which comprises the following elements:

  1. An alphabet, a finite set of characters which can be combined, for example, to form text strings. Typically this will be the letters of the alphabet plus the digits, but in some special cases it might be only 0 and 1. It can also be extended to other elements such as electric charges, spin states of electrons, etc. For now, assume an alphabet of any character you could type on a computer keyboard. Any combination of elements from the alphabet will be referred to as a string.
  2. A grammar, which is a set of rules that either generate or define acceptable strings from the alphabet. When used to define acceptability, it either rejects or accepts a string. An accepted string can be referred to as a well-formed string or formula (wfs or wff). A grammar might define, for example, that any string may be accepted but that if it has parentheses, there must be one and only one closing parenthesis for every open one, and the close must follow the open. Thus a string which has an unclosed parenthesis is unacceptable, or not a wfs, even though it is composed only of elements listed in the alphabet.
  3. A set of acceptable strings as defined by the grammar (wfs’s).  This will be referred to as the current database. The initial strings of the database can be called the axioms.
  4. A set of rules, the inference rules, that can be applied to wfs’s of the database to produce one or more new wfs’s that are added to the database. An inference move on a database is the application of one or more inference rules to zero, one or more wfs’s, on condition that any wfs used is part of the database. An inference move on a database creates a new database (which might be, though is not usually, identical to the original database).
  5. A legal database is the initial database (the axioms) or any database that results from applying an inference move to a legal database. A legal inference move is the application of an inference move to a legal database. A legal wfs is a wfs that appears in at least one possible legal database.

It is more common to use the word “valid” instead of the word “legal”, but this less common terminology is used here to avoid confusion when, later, valid and invalid rules of inference are discussed.
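
To make the definitions concrete, here is a minimal Python sketch of a toy formal system built around the parenthesis grammar of item 2. The particular inference rule and axioms are invented purely for illustration:

    # Grammar: a string is a wfs if every ')' closes an earlier '('.
    def is_wfs(s):
        depth = 0
        for ch in s:
            if ch == '(':
                depth += 1
            elif ch == ')':
                depth -= 1
                if depth < 0:
                    return False
        return depth == 0

    # One toy inference rule: any wfs of the database may be wrapped in
    # parentheses. An inference move returns the resulting new database.
    def inference_move(database):
        return database | {'(' + s + ')' for s in database}

    axioms = {'a', '(b)'}            # the initial database
    db = axioms
    for _ in range(2):               # apply two inference moves
        db = inference_move(db)

    print(sorted(db))
    print(all(is_wfs(s) for s in db))   # a legal database contains only wfs's

By construction, the database after each move is a legal database.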

A formal system may be viewed dynamically or statically. A static approach asks only whether a specific wfs is a legal wfs. A dynamic approach sees the formal system as a process executing legal inference moves sequentially. The static approach, while looking for a yes-or-no answer to the legality of a wfs, requires a proof method, which means finding some sequence of inference moves that results in a database containing the wfs in question.

The dynamic perspective suffers from the apparent need for a cognitive, or at least selection-enabled, agent that will choose which of the possible inference moves to execute; there is often more than one alternative because there may be more than one rule or more than one wfs to select for application.

On the other hand, the dynamic perspective does not really need the concept of legality; it is inherent in the process. If you observe the database at any step in the dynamic process, it will automatically be a legal database.

Wittgen imposes a significant restriction relative to the full formal system described here. It still falls within the definition of a formal system, but it is a specialization of it. Wittgen does not provide a choice regarding which inference move to make. For any given Wittgen database, there is only one inference move that can be applied: the first assign in the Doing Now variable is the only legal move. Thus, for any given database, there is only one database that can result from the next inference move. This removes the agent problem just mentioned for the dynamic perspective on formal systems. Note that Wittgen also allows for a situation where there is no legal inference move. This occurs if the Doing Now variable is empty. (This is the equivalent of a <halt> in computer science terms.) A final database is a database that allows no legal inference move from it.
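
The determinism can be pictured with a toy execution loop in Python. This models the control structure only; representing Doing Now as a list of pending assignments is my own simplification, not Wittgen’s actual syntax:

    # Toy model: the only legal inference move is to perform the first
    # pending assignment in Doing Now; an empty Doing Now means <halt>.
    memory = {'Doing Now': [('greeting', 'hello'), ('greeting', 'goodbye')]}

    def step(memory):
        pending = memory['Doing Now']
        if not pending:                      # no legal move: a final database
            return False
        name, content = pending.pop(0)       # exactly one move is possible
        memory[name] = content
        return True

    while step(memory):
        pass
    print(memory['greeting'])                # -> goodbye

At every point there was exactly one possible next database, and the loop halts in a final database.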

Wittgen also requires an extension of a formal system as normally defined. Typically, formal systems only envision adding wfs’s to the database; they do not explicitly allow for changing strings or removing them. However, there seems little reason not to allow for this extension. The system is still a formal syntax-only system. Wittgen inference moves may change strings by assigning new content to old variables. (The formal system of temporal modal logic may also allow for such inference operations.)

Combining the possibility of changing wfs’s with the concept of a final database allows for one more definition: a legal final wfs is a wfs that is found in a final database.

A formal system as described so far suffers from the fact that the grammar, the inference rules and the concept of legality require definitions specified in some other language or formal system. Call this formal system in which the rules are defined FS2. Now FS2 will have its own grammar and inference rules which will have to be defined in FS3.  There follows an endless sequence or recursion of formal system definitions. This shall be referred to as the rule definition regression problem. 

There is a possible response to this problem, even if it is not a solution. There are two possible perspectives on Wittgen: one abstract and one physical. In the abstract perspective, a Wittgen database is just a set of strings that develops step by step according to formal rules. However, instead of the abstract perspective, one can view Wittgen as a physically embodied system. Strings are composed of symbols (not necessarily in one-to-one correspondence with elements from the alphabet) which are physical patterns arranged in space or even in some mental layout. As mentioned before, physical patterns could be arrangements of matter, color, electric charges or something more exotic. There is a physical system, a set of patterns, and there is also a machine, or the laws of physics, that acts on these patterns. The grammars are physical limitations on the way the patterns can be arranged. The inference rules are just the way the machine operates on the physical patterns, or the way the laws of physics require the patterns to segue from one configuration to another.

It is easy to find examples of physical systems that incorporate physical patterns which follow a strict set of rules of how one pattern segues into the next. Even a seemingly immobile rock contains ever-changing electron configurations which might answer to this description. However, if we search for physical systems that could embody the specific inference rules of Wittgen, the human brain, ANNs and suitably configured computers are all plausible candidates. For example, the patterns of electronic charges in the circuits of a computer which we interpret as 0s and 1s are the symbols of a physical embodiment of a formal system. The processor of a computer, once built in a certain way, responds to these symbols by transforming them in a deterministic manner. These transformations are the inference rules of the formal system. A suitably configured computer will implement the Wittgen inference rule.

Such physical embodiments do not suffer from the rule definition regression problem. The inference rules, or perhaps, more correctly, the transformation rules are not themselves expressed in any formal language. They are an empirically observed reality. An associative memory system, once built, will simply retrieve when given the retrieve instruction and assign when given the assign instruction. This may be determinate, observable and thus just a hard-nosed fact. Of course, the actual behavior depends on the actual configuration of memory at any time, but the fundamental factness of the transformation rules remains. Philosophically speaking, this position raises questions of its own (induction, reliability of observation and culture-dependence of the descriptive terms among them), but, given an unquestioned scientific naturalism, the physical embodiment perspective provides an answer to the regression problem. We may return to this issue in later posts. 

Everything discussed so far describes a formal system. A formal system is not yet a logic. For a formal system to be a logic, the formal system must be assigned an interpretation, a meaning or a model. One of the tasks of most interpretations (in the formal sense used here) is to assign truth values to the wfs’s, both those of the initial database (the axioms) and those of any legal database. The formal system is purely syntactic. The application of the interpretation or meaning is the process of applying semantics to the syntactic system. Once there is semantics, one can speak of the rules of inference as propagating “truth” from premises to conclusions. Thus an argument, viewed as a syntactic development from one legal database to another by means of the syntactic rules of inference, may be more than merely legal; it may aspire to be valid. In order for an argument to be not only legal but valid, there is a further stipulation: that the rules of inference themselves be valid. If the premises of the argument are true and the argument is valid, the conclusions must be true.

Rules of Inference that are valid are rules that propagate truth from premises to conclusions.

Thus semantics introduces:

  1. Interpretation of wfs’s and, arguably, of sections of wfs’s (words). 
  2. Values of true and false assigned to wfs’s. (This is often part of the interpretation.)
  3. Selection of a small set of inference rules, out of the universe of possible inference rules, as valid, truth-preserving inferences.


Logic requires both a formal system and semantics. Sometimes, logic is seen as focused only on the question of valid inference rules. In that case a specific logic aims to discover what, for a given alphabet, grammar and set of inference rules, would count as a valid argument regardless of the interpretation or meaning of most, but not all, of the words in the wfs’s.

While Wittgen can be used as the syntactic process behind such a logic, it is designed with other intentions in mind. Wittgen, taken by itself, is purely syntactic. (Again, so are most computer languages, programming environments and virtual and real computers.) The intention is that no semantics be added at all. The terms “true” and “false” may still be used, but in a different way to the one described for logic. Instead, “true” and “false” can be seen as purely syntactic components, as part of the strings of a particular system implemented in Wittgen. They have no meaning. They are just strings. Thus I might have one assertion:
    object on the mat:=cat}
and another:
    cat is the object on the mat:=true}
However, “true” here is just another string. It is not intrinsically the opposite of “false”. If I want to make it the opposite of “false” in some sense, I could add further assignments such as:
    statement is true:=
        Doing Now:=
            @Return Doing Now}
            }
        }
    Make False:=
        Doing Now:=@statement is @@test}}}}
        Doing Now:=
            @test}:=false}
            Doing Now:=@Return Doing Now}}
            }
        }


To use these lines you create assertions such as:
    test:=cat is the object on the mat}
    Doing Now:=@Make False}}

Suppose the database already contains an assertion such as:
    cat is the object on the mat:=true}

In that case nothing changes: the procedure simply goes back to what was being executed before the test. If, on the other hand, the statement does not refer to “true”, the procedure makes it refer to “false”. But “true” and “false” are just strings that the procedures operate on. Syntactically, that is what all words are: the data of procedures. They are not special meanings that are assigned to strings.
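
The same point can be paraphrased in ordinary code. In a toy Python idiom (an illustration only; the dictionary representation is my assumption, not Wittgen), “true” and “false” are nothing more than string values:

    # A rough analogue of the Make False procedure: if the statement named
    # by 'test' does not currently refer to 'true', make it refer to 'false'.
    memory = {}
    memory['test'] = 'cat is the object on the mat'

    statement = memory['test']
    if memory.get(statement) != 'true':
        memory[statement] = 'false'

    print(memory)   # 'true' and 'false' enjoy no special status here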

This same example, which demonstrates that “true” and “false” can be just arbitrary strings, also demonstrates that Wittgen can be used to create rules of inference. “Make False”, in the last example, is such an inference rule. However, one must differentiate between the intrinsic rule of inference, the rule of inference that defines Wittgen, and a rule of inference created using text strings in Wittgen’s language. Consider the following Wittgen code:
    MP A->B true:=
        Doing Now:=
            @x} is @B}:=true}
            }
        }
    MP:=
        Doing Now:=
            @MP A->B @@x} is @A}}}
            }
        }

This code is a standard Modus Ponens framework. It means that if A implies B (if A then B, or A->B) and A is true, then B is true. It is used as follows:
    x:=Socrates}
    A:=Greek}
    B:=Wise}
    Doing Now:=@MP}}

This code snippet supplies the assertion “All Greeks are Wise”, or “∀x[Greek(x)->Wise(x)]”, and runs the “proof” by making one call to the inference rule Modus Ponens. Thus if the database has previously been provided the assignment:
    Socrates is Greek:=true}
a new assignment will be made equivalent to:
    Socrates is Wise:=true}
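
For comparison, here is a rough Python paraphrase of the same mechanism. It is an illustration only; the dictionary representation and names are assumptions, not part of Wittgen:

    # A sketch of the MP framework: given x, A and B, if the database maps
    # '<x> is <A>' to 'true', add a mapping from '<x> is <B>' to 'true'.
    db = {'Socrates is Greek': 'true'}
    x, A, B = 'Socrates', 'Greek', 'Wise'

    if db.get(f'{x} is {A}') == 'true':      # the premise refers to 'true'
        db[f'{x} is {B}'] = 'true'           # conclude 'Socrates is Wise'

    print(db)   # both premise and conclusion now map to 'true'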

Thus MP is an inference rule that exists only if the database includes it. The intrinsic inference rule, on the other hand, the rule that defines Wittgen itself, defines the language rather than being defined within it. Viewed as an abstract system, the intrinsic inference rule requires symbols to define it within a different formal system. In the physical embodiment perspective, the intrinsic rule is just an empirical fact (assuming there are any such). However, even non-intrinsic inference rules are guaranteed to operate.[1]

In the world of logic, as opposed to formal systems, Modus Ponens is a valid rule of inference.

However, consider AC, which stands for Affirming the Consequent:
    AC A->B true:=
        Doing Now:=
            @x} is @A}:=true}
            }
        }
    AC:=
        Doing Now:=
            @AC A->B @@x} is @B}}}
            }
        }
If the inputs are provided as before, indicating the assertion that “if x is Greek, x is Wise”, but the database contains:
    Socrates is Wise:=true}
calling AC will create the following assignment:
    Socrates is Greek:=true}

AC is considered an invalid rule of inference, or a formal fallacy. However, Wittgen is unconcerned with the difference between MP and AC. It is a formal system: it specifies syntax but not semantics. Wittgen is, in this sense, pre-logical. As is, some claim, the human mind.
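
From the semantic standpoint, though, the difference is mechanical to check. A small truth-table sketch in Python (an illustration of the semantics, not part of Wittgen) confirms that MP propagates truth while AC does not:

    from itertools import product

    def implies(p, q):
        return (not p) or q

    # A rule is valid if the conclusion is true in every row of the truth
    # table in which all the premises are true.
    def is_valid(premises, conclusion):
        return all(conclusion(a, b)
                   for a, b in product([True, False], repeat=2)
                   if all(premise(a, b) for premise in premises))

    # Modus Ponens: from A->B and A, conclude B.
    print(is_valid([lambda a, b: implies(a, b), lambda a, b: a],
                   lambda a, b: b))    # True: MP is valid

    # Affirming the Consequent: from A->B and B, conclude A.
    print(is_valid([lambda a, b: implies(a, b), lambda a, b: b],
                   lambda a, b: a))    # False: A=False, B=True is a counterexample

But this check lives entirely at the level of interpretation; nothing in Wittgen’s syntax distinguishes the two procedures.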

As it stands, Wittgen is a purely syntactic formal system. It would require layers of semantics, meaning, interpretation and the ascription of validity to a subset of the possible inference rules before it could be used as a logic that talks about “true” and “false” in a meta-syntactic sense.

Being pre-logical is an important aspect of Wittgen. It allows Wittgen to be a tool for exploring and understanding syntactic systems. Ultimately, it may be used to question whether meaning and semantics are really required in modeling our reality.



[1] Note that the intrinsic rule is a form of Modus Ponens in a restricted sense, applying to strings themselves but not to what they might be used to refer to. Thus if there has been an assignment such as:
    a:=b}
and this is interpreted to mean the premise “a->b”, or “if a then b”, then
    Doing Now:=
        conclusion:=@a}
        }
will necessarily (for some meaning of necessarily) result in an assignment equivalent to
    conclusion:=b}
Thus “if a then b” and “a” result in “b”. However, this cannot be used as a more general inference because you cannot treat a and b as variables, as we did for A and B in the previous case. You cannot write:
    a:=Socrates is Greek}
    b:=Socrates is Wise}
and then call
    a:=b}
since that would simply overwrite the content of a with the literal string “b”; it performs no inference at all over the strings the two variables contain.