I have been going back over some of my PhD research and have discovered an interesting example I didn’t consider. It shows up some odd consequences of the way we understand truth predicates. Consider the following:

B:             Sentence A in this post is true.

There is no sentence A in this post, so intuitively we can rule out B being true. That means that, whatever else we might say about it, B is not true. However, this leads to an odd argument:

1. B is not true.

2. B iff B is true.                   (T-Schema)

3. A is true iff B is true.         (Spelling out B)

4. A is not true iff B is not true.  (Consequence of 3)

5. A is not true.                     (From 1 and 4)
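
For clarity, here is the same argument set out symbolically, writing $T(\ulcorner\varphi\urcorner)$ for ‘$\varphi$ is true’ (this is my notation, and just one natural way of formalising the steps above):

\[
\begin{aligned}
&1.\ \neg T(\ulcorner B\urcorner) && \text{(B is not true)}\\
&2.\ B \leftrightarrow T(\ulcorner B\urcorner) && \text{(T-Schema instance for $B$)}\\
&3.\ T(\ulcorner A\urcorner) \leftrightarrow T(\ulcorner B\urcorner) && \text{(from 2, since $B$ just is the claim $T(\ulcorner A\urcorner)$)}\\
&4.\ \neg T(\ulcorner A\urcorner) \leftrightarrow \neg T(\ulcorner B\urcorner) && \text{(contraposing both directions of 3)}\\
&5.\ \neg T(\ulcorner A\urcorner) && \text{(from 1 and 4)}
\end{aligned}
\]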

However, it is somewhat ridiculous to conclude that A is not true since there is no A.

So where does this argument go wrong? It can only be in the initial argument that B is not true, or in the assumption of the T-Schema.

The only way I can see to argue that it is not the case that B is not true is to say that, since there is no A, we can’t say anything about the truth of B. But that just means that it is not the case that “B is true”, which is equivalent to saying that B is not true.

So something odd is going on here with the assumption of the T-Schema.

The word ‘model’ gets used in a number of different ways in ordinary and technical language. There are scientific models, economic models, supermodels, model cars, and semantic models.

There are at least three distinct ways that the term ‘model’ is used which are important to distinguish. One way to understand the differences is to compare a ‘model citizen’ with a ‘model train’ with a ‘make and model’. A model citizen is someone who is close to perfect (as a citizen) and exemplifies the core characteristics of a citizen. A model train, on the other hand, is a replica of the real thing that is smaller or more manageable in some way. When we talk about the make and model of something, we are referring to a particular version or type of a generic category. On the first meaning, a ‘model’ is better than we would expect from real life. On the second meaning, a ‘model’ is a limited version of real life. On the third meaning, a ‘model’ is one of various alternatives that all have different characteristics but belong to some larger category.

Supermodels, for example, clearly draw on the first meaning of ‘model’ – models are human beings whose looks are considered to be close to perfect.

Scientific and economic models, on the other hand, draw on the second meaning. They are limited forms of reality which rely on simplifying assumptions to make them understandable or computable. Thus Niels Bohr’s model of the atom was a useful way of understanding it, but it has since been shown to be incomplete.

In the way they are normally understood, semantic models draw on the first meaning, not the second. A semantic model is normally the mathematical structure that defines the semantic values of a formal logic and establishes the link between truth and formal provability. Semantic models are clearly not intended to be limited forms of reality; rather, they exemplify the core characteristics of formal logic.

Of these different concepts of ‘model’, then, supermodels and semantic models belong in the same company.

It is a desirable feature in developing formal logics that the rules governing the logic fully determine it in a fairly precise way. Loosely speaking, this means that everything you might want to do with any symbol or connective in the system can be done. So, for example, in a natural deduction system, you want introduction and elimination rules for every connective and quantifier.

More precisely, the aim of defining a logic is normally that the logic be complete in a truth-functional sense. That is, given a valuation that assigns an appropriate value to every atomic component of the language, every sentence/well-formed formula (up to certain constraints, such as those arising from Goedel’s Theorem) has a truth value.
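
As a toy illustration of this truth-functional sense of completeness (my own sketch, not part of the original point): once a valuation fixes the atomic sentences, a simple evaluator settles the truth value of every formula built from them.

```python
# A toy illustration of truth-functional completeness: once the valuation
# fixes the atomic sentences, every well-formed formula built from them
# receives a truth value.  (The formula encoding and names are my own.)

valuation = {'p': True, 'q': False}

def value(formula):
    """Evaluate an atom name, ('not', f), ('and', f, g) or ('or', f, g)."""
    if isinstance(formula, str):
        return valuation[formula]
    op, *args = formula
    if op == 'not':
        return not value(args[0])
    if op == 'and':
        return value(args[0]) and value(args[1])
    if op == 'or':
        return value(args[0]) or value(args[1])
    raise ValueError(f"unknown connective: {op}")

# (p AND (NOT q)) OR q -- its value is fully determined by the valuation above.
print(value(('or', ('and', 'p', ('not', 'q')), 'q')))   # True
```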

A partially determined logic is simply a logic that doesn’t satisfy this requirement on either of the above characterisations. A very simple example would be a logic which includes an elimination rule for AND but no introduction rule. Thus, if we have proven A&B, we can derive A or derive B; but if we know C and we know D, we cannot conclude C&D. While this is a somewhat trivial example, it illustrates the idea.
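
Here is a minimal sketch of such a system in Python (my own illustration; the formula encoding and function name are invented for the example): it applies AND-elimination exhaustively but has no rule that builds a conjunction.

```python
# A sketch of a "partially determined" fragment: AND-elimination is present,
# AND-introduction is not.  Formulas are atom strings like 'A' or tuples
# ('and', left, right).

def close_under_and_elim(premises):
    """Return everything derivable from the premises using AND-elimination alone."""
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for formula in list(derived):
            if isinstance(formula, tuple) and formula[0] == 'and':
                for conjunct in formula[1:]:
                    if conjunct not in derived:
                        derived.add(conjunct)
                        changed = True
    return derived

# From A&B we can derive both A and B ...
print(close_under_and_elim({('and', 'A', 'B')}))
# {('and', 'A', 'B'), 'A', 'B'}

# ... but from C and D separately we can never derive C&D, because the
# system simply has no rule that builds a conjunction.
print(('and', 'C', 'D') in close_under_and_elim({'C', 'D'}))   # False
```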

I have been playing with the idea on this blog that natural languages may in fact determine partial logics of this sort, rather than the fully determined logics we are more familiar with. One motivation for this is that we get natural agreement on a core set of logical rules in natural languages (e.g. Modus Ponens, the AND rules), but there is less agreement on other rules. Instead of looking for the ‘real’ rule that governs natural languages, it may be that natural languages do not in fact fully determine the rules of a logic.

Another motivation is that human reasoning is often driven by pragmatic, as well as logical, considerations. For example, it is plausible to argue that human reasoning relies on a principle that each step should be more informative than the previous one. If someone argues in a way that appears to reduce the amount of information known, or does not increase it, we do not consider it to be a useful argument.

While this is a plausible pragmatic consideration, some valid logical rules do not adhere to this principle. The most obvious example is OR-introduction in most natural deduction systems. The following valid argument does not add information, but rather reduces it:

1. It rained this morning.

2. It rained this morning or it snowed yesterday.

OR-introduction is therefore not ‘pragmatically valid’, in this case at least, even though it is logically valid. Indeed, any application of OR-introduction is ‘pragmatically invalid’ in this sense, and therefore a pragmatically valid logic would need to be partial in the sense discussed here.
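
One rough way to make the ‘reduces information’ point concrete (my own sketch, treating information as the number of truth-value assignments a sentence rules out): the premise leaves fewer possibilities open than the disjunctive conclusion, so moving to the disjunction loses information.

```python
# Count the truth-value assignments (possible situations) each sentence
# leaves open.  The fewer it leaves open, the more informative it is.
# (The atom names and functions here are my own, for illustration.)

from itertools import product

ATOMS = ('rained_this_morning', 'snowed_yesterday')

def open_situations(sentence):
    """Return the assignments to ATOMS under which `sentence` is true."""
    return [dict(zip(ATOMS, values))
            for values in product([True, False], repeat=len(ATOMS))
            if sentence(dict(zip(ATOMS, values)))]

premise    = lambda v: v['rained_this_morning']                           # 1.
conclusion = lambda v: v['rained_this_morning'] or v['snowed_yesterday']  # 2.

print(len(open_situations(premise)))      # 2 situations left open
print(len(open_situations(conclusion)))   # 3 situations left open
```

On this count, the conclusion of an OR-introduction always leaves at least as many possibilities open as its premise, so it never adds information in this sense.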

Partial logics can have very interesting properties, and I hope to post a draft paper on some of these shortly. Philosophically, though, they present an interesting way of resolving some of the issues at the interface of natural language and logic.

In my previous post I outlined my conjecture that natural languages partially determine a system of logic, based on the observation that there is debate about some aspects of the correct logic, but not others. For example, no-one disputes the correct logic for the AND connective, and there are no serious arguments against Modus Ponens.

There is a stronger reason to suppose that natural languages do not fully determine a system of logic than simply that no-one seems to be able to agree what it is. If a natural language completely determines its relevant logic, then that natural language determines a formal system that is expressive enough for the arguments Goedel used in his Incompleteness Theorems to go through in that system. This means, firstly, that the natural language is incomplete (“there is some statement that is true but cannot be proven to be so in that language”) and, secondly, that it cannot be proven in that language that the language is consistent.

These conclusions are difficult to accept.

While it is plausible that natural languages are incomplete, it runs counter to ordinary use that the exact sentences identified in Goedel’s argument are true but not provably so. We can run Goedel’s argument in a natural language, for that language, understand what the resulting sentence means or represents, and see that the sentence must be true. It is difficult to see how we can accept that the sentence must be true when Goedel’s argument shows that this cannot be proven in the natural language.

To be more precise, Goedel showed that if we can prove the relevant sentence, then the relevant system (in this case, the language) is inconsistent. So we seemingly have an argument that if natural languages fully determine a system of logic then, following Goedel, they must be inconsistent.

One thing we can never accept is that natural languages are inconsistent and that therefore everything is provable in them. If that were the case, then the whole of human knowledge would be a lie, since everything would in fact be true.

In my previous post, I outlined the argument that the meanings of certain words, especially connectives, provide natural languages like English with an inbuilt logic, or system of reasoning. The point is that the meanings of many words fix the reasoning involving sentences that include those words.

To take a different example from the last post: if I know that the sentence “The cake is a chocolate mud cake and it has white icing on it” is true, then I am completely justified in concluding that “The cake has white icing on it”, by virtue of the meaning of the word ‘and’. This seems unremarkable, but it is an example of one of the AND rules in classical logic.

However, if natural languages do have an inbuilt logic, why is there debate and a lack of consensus about what the correct logic is?

I am not aware of any literature that has explored this question in this form, and so what follows are purely my ideas.

My basic thesis is that the meanings of words in natural languages, like English, partially determine a system of logic rather than completely determine one – at least in the sense in which ‘logic’ is commonly used by philosophers and logicians. The main evidence for this, which there is not space to justify here, is that all viable logics agree on many of the rules and principles, and all the disagreement is over a few points. I take it, then, that logic is determined by languages for the rules and principles that are agreed on, and only partially determined for those where there is disagreement.

To date, I have identified four areas of disagreement, which I plan to explore in later posts:

1) rules of inference  – e.g. over introducing the conditional/implication

2) meanings of connectives – e.g. one inference rule can have various semantic interpretations

3) basic principles of reasoning – e.g. rejecting or accepting the principle of non-contradiction

4) the grammatical (non-logical) rules of a logic  – e.g. what counts as a sentence
