Thoughts on natural languages


The word ‘model’ gets used in a number of different ways in ordinary and technical language. There are scientific models, economic models, supermodels, model cars, and semantic models.

There are at least three distinct ways the term ‘model’ is used which are important to distinguish. One way to understand the differences is to compare a ‘model citizen’ with a ‘model train’ with a ‘make and model’. A model citizen is someone who is close to perfect (as a citizen) and exemplifies the core characteristics of a citizen. A model train, on the other hand, is a replica of the real thing that is smaller or more manageable in some way. When we talk about the make and model of something, we are referring to a particular version or type of a generic category. On the first meaning, a ‘model’ is better than we would expect from real life. On the second meaning, a ‘model’ is a limited version of real life. On the third meaning, a ‘model’ is one of various alternatives that all have different characteristics but belong to some larger category.

Supermodels, for example, clearly draw on the first meaning of ‘model’ – models here are human beings whose looks are considered to be close to perfect.

Scientific and economic models, on the other hand, draw on the second meaning. They are limited versions of reality which rely on simplifying assumptions to make them understandable or computable. Thus Niels Bohr’s model of the atom was a useful way of understanding the atom, but it has since been shown to be incomplete.

In the way they are normally understood, semantic models draw on the first meaning, not the second. A semantic model is normally the mathematical structure that defines the semantic values of a formal logic and establishes the link between truth and formal provability. Semantic models are clearly not intended to be limited versions of reality; rather, they exemplify the core characteristics of a formal logic.

Of the first two concepts of ‘model’, then, supermodels and semantic models belong in the same company.

When developing a formal logic, it is desirable that the rules governing the logic fully determine it in a fairly precise way. Loosely speaking, this means that everything you might want to do with any symbol or connective in the system can be done. So, for example, in a natural deduction system, you want introduction and elimination rules for every connective and quantifier.

More precisely, the aim in defining a logic is normally that the logic be complete in a truth-functional sense. That is, given a valuation that assigns an appropriate value to every atomic component of the language, every sentence (well-formed formula) receives a truth value (up to certain constraints, like Gödel’s Theorem).
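To make the idea concrete, here is a minimal sketch in Python (the tuple encoding and function names are my own, purely for illustration): once a valuation fixes the atoms, the evaluator assigns every well-formed formula a truth value.

```python
# A hypothetical encoding of propositional formulas as nested tuples:
# ("atom", "A"), ("not", f), ("and", f, g), ("or", f, g).

def eval_formula(f, valuation):
    """Given a valuation of the atoms, every well-formed formula gets a value."""
    op = f[0]
    if op == "atom":
        return valuation[f[1]]
    if op == "not":
        return not eval_formula(f[1], valuation)
    if op == "and":
        return eval_formula(f[1], valuation) and eval_formula(f[2], valuation)
    if op == "or":
        return eval_formula(f[1], valuation) or eval_formula(f[2], valuation)
    raise ValueError(f"unknown connective: {op}")

# Under the valuation {A: True, B: False}, the wff A & ~B comes out True.
print(eval_formula(("and", ("atom", "A"), ("not", ("atom", "B"))),
                   {"A": True, "B": False}))
```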

A partially determined logic is simply a logic that doesn’t satisfy this requirement on either of the above characterisations. A very simple example would be a logic which includes an elimination rule for AND but no introduction rule. Thus, if we have proven A&B, we can derive A or derive B; but if we know C and we know D, we cannot conclude C&D. While this is a somewhat trivial example, it illustrates the idea.
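Here is a toy sketch of that example (same illustrative encoding as above; nothing here is from the original post): a derivation engine with AND-elimination but no AND-introduction.

```python
# A toy derivation engine for the partial logic just described: it applies
# AND-elimination exhaustively but has no AND-introduction rule at all.

def close_under_and_elim(premises):
    """Everything derivable from the premises using only AND-elimination."""
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for f in list(derived):
            if f[0] == "and":  # from A&B we may derive A, and derive B ...
                for part in (f[1], f[2]):
                    if part not in derived:
                        derived.add(part)
                        changed = True
    return derived

A, B, C, D = (("atom", x) for x in "ABCD")

# From A&B we can reach A (and B) ...
print(A in close_under_and_elim({("and", A, B)}))      # True
# ... but from C and D separately we can never reach C&D.
print(("and", C, D) in close_under_and_elim({C, D}))   # False
```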

I have been playing with the idea on this blog that natural languages may in fact determine partial logics of this sort, rather than the fully determined logics we are more familiar with. One motivation for this is that we get natural agreement on a set of logical rules in natural languages (e.g. Modus Ponens, the AND rules), but there is less agreement on other rules. Instead of looking for the ‘real’ rule that governs natural languages, it may be that natural languages simply do not fully determine the rules of a logic.

Another motivation is that human reasoning is often driven by pragmatic, as well as logical, considerations. For example, it is plausible to argue that human reasoning relies on a principle that each step should be more informative than the previous one. If someone argues in a way that appears to reduce the amount of information known, or fails to increase it, we do not consider it a useful argument.

While this is a plausible pragmatic consideration, some valid logical rules do not adhere to this principle. The most obvious example is OR-introduction in most natural deduction systems. The following valid argument does not add information, but rather reduces it:

1. It rained this morning.

2. Therefore, it rained this morning or it snowed yesterday.

OR-introduction is therefore not ‘pragmatically valid’ in this case, even though it is logically valid. Indeed, any application of OR-introduction is ‘pragmatically invalid’ in this sense, so a pragmatically constrained logic would need to be partial in the sense discussed here.
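One way to make the pragmatic principle precise (my gloss, not the post’s) is to measure a sentence’s information by how many valuations it rules out. A self-contained sketch, using the same illustrative encoding:

```python
# Informativeness as 'valuations ruled out': the fewer valuations a sentence
# leaves open, the more it tells us. The encoding is illustrative only.
from itertools import product

def eval_formula(f, valuation):
    if f[0] == "atom":
        return valuation[f[1]]
    if f[0] == "or":
        return eval_formula(f[1], valuation) or eval_formula(f[2], valuation)
    raise ValueError(f"unknown connective: {f[0]}")

def count_models(f, atoms):
    """Number of valuations of the atoms under which f is true."""
    return sum(
        eval_formula(f, dict(zip(atoms, vals)))
        for vals in product([True, False], repeat=len(atoms))
    )

RAIN, SNOW = ("atom", "rain"), ("atom", "snow")

# 'It rained' is true in 2 of the 4 valuations; the OR-introduced conclusion
# is true in 3 of 4. The conclusion rules out less, so it is less informative.
print(count_models(RAIN, ["rain", "snow"]))                 # 2
print(count_models(("or", RAIN, SNOW), ["rain", "snow"]))   # 3
```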

Partial logics can have very interesting properties, and I hope to post a draft paper on some of these shortly. Philosophically, though, they present an interesting way of resolving some of the issues at the interface of natural language and logic.

In my posts to date on Tarski, I have noted that he argued that natural languages were inconsistent. This claim needs some explanation, as it does not simply mean that inconsistent statements can be stated in natural languages, or that people can hold inconsistent beliefs.

Consistency in this context is a defined property of logical or mathematical systems. Such a system is consistent if there is no statement in the system that is both demonstrably true and demonstrably not true. In claiming that natural languages are not consistent, Tarski assumes that natural languages have (at least partially) a logical structure that allows statements to be demonstrated to be true or not true.

This assumption is highly plausible. For example, if you understand the meaning of the connectives “If … then …” in English, then you almost certainly have to accept the Modus Ponens rule of inference. You can go through similar exercises with connectives like “and” and “or” and pretty soon you get to a set of inference rules that would determine a full system of logic.
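For concreteness, here is a minimal sketch of the rule in question, in the same illustrative tuple encoding (the helper function is hypothetical, not anything from Tarski):

```python
# A hypothetical sketch of Modus Ponens over the tuple encoding used above:
# from 'if P then Q' and P, conclude Q.

def modus_ponens(conditional, antecedent):
    """Apply Modus Ponens, or return None if the rule does not apply."""
    if conditional[0] == "if" and conditional[1] == antecedent:
        return conditional[2]  # the consequent
    return None

P = ("atom", "it is raining")
Q = ("atom", "the street is wet")
print(modus_ponens(("if", P, Q), P))  # ('atom', 'the street is wet')
```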

If English has an inbuilt logic of this kind, then it makes sense that a natural language like English must be either (logically) consistent or inconsistent.
