# Logic without Sanity

To discuss the notions at the heart of Gödel's proof, one must work with as general a notion of a logical system as possible, without the usual filter of looking only at the useful systems that fit the bill. Yet a proof must still present itself in terms which convince the reader that the logical system can diagnose (at least one of) the deficiencies of which the theorem speaks. The system isn't obliged to care about this diagnosis, but the notion of adequacy – which I characterize as supports Peano, or can count – must suffice for us to ask the system enough questions to assess its consistency and completeness.

The logical system might plausibly have no means of discussing consistency and completeness itself: we have to come up with our own characterization of these notions and apply them to the system. Ideally, we want to express these notions in terms intelligible to the system and see what happens when we ask it about its own consistency and completeness. To discuss these properties of the system, we need some handle on what it means for the system to disagree with itself (and: about what?) or to fail to make up its mind (likewise). This we can do by suitably characterizing what we mean by a logical system: the central notions are assertion, proof and inference; at least some notion of negation (logical not) is a prerequisite for asserting either consistency or completeness – and suffices to describe both notions.

Since we are walking in the unknown terrain of an arbitrary logical system, we must needs be wary of taking for granted any familiar property of the logic we are used to using, let alone any properties it lacks (even if we're used to thinking it has them). In particular, we may not safely suppose that double negation brings us back to the same thing as it was applied to – not(not(A)) might be a different assertion from A; one might infer, from consistency and completeness, that they are equivalent; but we are concerned with such discussion as may hope to discover whether the system has those properties; so we cannot afford to presume their consequences.

### Aside: ordering

The ordering I was taught for Gödel's proof was obtained by considering arbitrary valid texts in some tokenised form, with a finite vocabulary of tokens. On this finite vocabulary, choose some ordering which describes some tokens as earlier than others, or the latter as later than the former; treat longer texts as later than shorter ones; amongst texts of equal length, identify the first token in which they differ, compare this using your ordering on the vocabulary, and order the texts according to the outcome (this is lexicographic ordering among texts of equal length). For any given text, there are only finitely many texts no longer than it, and all texts earlier than it are no longer than it, so we can count these earlier texts or any subset of them.

Among the texts, the logical system is supposed to be equipped with some means of identifying which texts are assertions, the rest being gibberish in one sense or another. So, for any given text which is an assertion, we can ask how many assertions appear earlier than it in the ordering; the ordering is sufficiently well-behaved that no two assertion-texts get the same number (the many different ways of writing any given assertion count as distinct texts, so get distinct numbers) and every number is used. This gives us an index (induced from our tokenisation and our ordering on its vocabulary) to assertions: a one-to-one correspondence between natural counting numbers and texts recognized as assertions by the system.

That suffices to prove that one can index the available assertions: provided the logical system supports counting, we can even describe this indexing to the system. This enables us to describe the logical system's axioms, including any rules of inference it may have, as a set of numbers; likewise, to discuss which assertions (disguised as integers) the system is able to prove or disprove. The need for Peano comes down to the need to cope with as large a domain of possible logical systems as one can express in finite terms: this is a bit like presuming that one has finitely many axioms – I'm never going to be in a position to discuss any other kind, so I'll willingly ignore them for the course of a discussion in which I pursue understanding.

While the details of the ordering are pertinent to the proof, they are the point at which this page diverges from it: I now wish to examine how one may describe the system to itself, using a numbering of its assertions but presuming nothing beyond this: every assertion has a number and, for each number, there is an assertion. [I'll use {0, 1, 2, …} as my natural numbers, with each member being the set of all earlier members, so 0 = {} is empty, 1 = {0}, 2 = {1, 0}, 3 = {2, 1, 0} and so on – but what matters is that there's a first one, 0 in this case, that each has a successor, that the first one isn't anyone's successor and that distinct numbers have distinct successors.]
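The bracketed construction can be played out concretely. Here is a minimal Python sketch, assuming nothing beyond the text above; the names (successor, zero and so on) are mine, purely for illustration:

```python
# Each natural number is modelled as the set of all earlier numbers.
def successor(n: frozenset) -> frozenset:
    """n's successor is n with n itself adjoined as a member."""
    return frozenset(n | {n})

zero = frozenset()            # 0 = {}
one = successor(zero)         # 1 = {0}
two = successor(one)          # 2 = {1, 0}
three = successor(two)        # 3 = {2, 1, 0}

# The properties the text says matter:
assert all(zero != successor(n) for n in (zero, one, two))  # 0 is no one's successor
assert successor(one) != successor(two)                     # distinct successors
assert three == frozenset({zero, one, two})                 # 3 = {2, 1, 0}
```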

For concreteness, I'll describe the numbering by: index(A) is the number associated with assertion A, and assert(n) is the assertion associated with natural number n. That is about as much of the ordering as the following cares about.
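The ordering and numbering just described can be sketched in a few lines of Python. The two-token vocabulary and the stand-in is_assertion test below are mine, for illustration only – a real system supplies its own vocabulary and its own recognizer of assertions:

```python
from itertools import count, islice, product

def shortlex(vocabulary):
    """All finite texts over the vocabulary: shorter before longer,
    texts of equal length in lexicographic order."""
    for length in count(0):
        for text in product(vocabulary, repeat=length):
            yield text

def assertions(vocabulary, is_assertion):
    """Just those texts the system recognizes as assertions, in order."""
    return (text for text in shortlex(vocabulary) if is_assertion(text))

def assert_(n, vocabulary, is_assertion):
    """assert(n): the n-th assertion in the ordering (counting from 0)."""
    return next(islice(assertions(vocabulary, is_assertion), n, None))

def index(A, vocabulary, is_assertion):
    """index(A): how many assertions appear strictly earlier than A."""
    for n, text in enumerate(assertions(vocabulary, is_assertion)):
        if text == A:
            return n

# A toy stand-in: call a text an assertion if it has even length.
vocab = ("a", "b")
even = lambda text: len(text) % 2 == 0
A = assert_(3, vocab, even)
assert index(A, vocab, even) == 3   # the two maps invert one another
```

Note that index(A) only terminates because every text earlier than A is no longer than A, so only finitely many assertions precede it – exactly the counting argument of the previous section.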

# Logic modelling itself

I expect a logical system to recognize its axioms as assertions; we may thus identify the set of indices of its axioms. (Indeed, I expect to be able to encode the semantics of the system entirely in terms of sets of integers.) To model inference, we need to discuss the machinery of inference: while this must be a general discussion, it must yet be sufficiently specific to allow us to address consistency and completeness.

## Disproof in place of negation

Since all my learned intuition about logical negation (not) is tangled up with presumptions of completeness and consistency, I choose to dodge that description of negation and use another: I shall presume that the system entertains proofs and disproofs of assertions. A domain with proofs and some understood notion of negation may readily read disproof as proof of the negation; I have a suspicion I will find the separation constructive.

### Meaning

Either in what we mean by a logical system or under the heading of adequacy, we need the system to address questions about whether it proves or disproves assertions. We effectively demand that a logical system (to deserve that name) be able to describe itself: how else is it to be the final arbiter of what it does and doesn't prove or disprove ? Likewise, we demand that its discussion of itself be right at least in so far as (for a given assertion A):

• if it proves "I prove A" then it must prove A
• if it proves "I disprove A" then it must disprove A

(this is modus ponens) which rather raises the questions

• if it disproves "I prove A", can it also prove A ? Can it disprove A ?
• if it disproves "I disprove A", can it also disprove A ? Can it prove A ?

In short, what are the semantics of disproof ?

It may help, in addressing this question, to consider how completeness and consistency appear: the former asserts that each A is either proved or disproved; the latter that no A is both proved and disproved. Consistency answers no to the first question asked in each case; adding completeness answers yes to the second in each case.
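As a crude picture of these two properties, one can model a system's verdicts as two sets of assertion-indices: what it proves and what it disproves. The following Python sketch (names mine) captures only the definitions, not any machinery of inference:

```python
def consistent(proves: set, disproves: set) -> bool:
    """Consistency: no assertion is both proved and disproved."""
    return proves.isdisjoint(disproves)

def complete(proves: set, disproves: set, assertions: set) -> bool:
    """Completeness: every assertion is either proved or disproved."""
    return assertions <= (proves | disproves)

every = {0, 1, 2, 3}
assert consistent({0, 1}, {2}) and not consistent({0, 1}, {1})
assert complete({0, 1}, {2, 3}, every)
assert not complete({0, 1}, {2}, every)   # 3 is left undecided
```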

## (Dis)proofs of (dis)provability

Pause to notice that (within the system being presumed throughout), for a given assertion A,

• the system proves the assertion "the system proves A" and
• the system disproves the assertion "the system disproves A"

need not be equivalent to A, no matter how bewildering we may find it that a system should set out to be logical and yet not achieve this desideratum. We should also try to make sense of the mixed cases, which disprove "the system proves A" or prove "the system disproves A".

Let's see what the second of these kin of A means. First examine the inner clause, "the system disproves A": the system recognizes A as an assertion, and it may or may not disprove A; but here we ask it to recognize, as an assertion, a statement about whether it disproves A. Do we have any grounds for expecting the system to recognize "the system disproves A" as an assertion ? If not, there will be no sense in asking the outer question (involving disproof of a non-assertion), quite apart from the fact that we, again, have no reason to expect this nonsense to be an assertion. The notion of disproof is in the hands of the logical system: however it expresses its laws of inference, it must have ways of expressing, to itself, information about which things it proves and disproves; which I expect to oblige the system to recognize "the system disproves A" as an assertion. Implicitly, adequacy must oblige the system to be willing to discuss itself at least to the extent of recognizing this kind of assertion. Having done so, the system is equally obliged to address both of the above kin of A as assertions, though it might phrase "there is a (dis)proof of A" as "I (dis)prove A" or in some other manner.

### Inference

Where a system with proof and negation discusses the interaction between the two, a proof/disproof system discusses a similar and clearly related interaction. The notion of implication – were the system given a proof of A, it would be able to construct a proof of B; phrased A implies B – is preserved intact. Where proof and negation were mixed, we must now look at where the system is able to construct, out of a proof or disproof of one assertion, a proof or a disproof of some other. The available transformations need names: I chose the following:

imply
turn a proof of A into a proof of B

this is unchanged.

else
turn a disproof of A into a proof of B

given a disproof of A, the system would be able to construct a proof of B; this need not be symmetric between A and B (else I'd call it or), but it feels like it means or.

dnamed
turn a disproof of A into a disproof of B

The phrase is a precondition for would do equally well as the name here, were it but a single word. Examine the relationship between A and B and you will see that B demands A – if there is no disproof of B, there can be no disproof of A; and completeness would have taught us to read there is no disproof as there is a proof; but our system isn't guaranteed to be complete, so I don't phrase demand as its cousin imply. Since the chain of inference proceeds from A to B, I need to write B demands A backwards, as A is a precondition for B, calling for a short word which means the same; so I have reversed demand to get A dnamed B.

forbid
turn a proof of A into a disproof of B

so A forbids B.

Note, of course, that the connotations for which I chose the words here need not carry over into the logical system being examined: by my choice of names I hope to ease the task of remembering which style of inference is which – but their meaning here rests on the definitions just given, not on their connotations.

We are, of course, given no guarantees that our logical system actually possesses any such tools of inference: what matters is that the Peano precondition we slipped into the adequacy requirements for entry to this mad-house gives us enough leverage over the system to discover what we want about it, regardless of whether our questions make any sense to it. My approach to Kurt's proof needs to show that the system either proves and disproves some assertion, or neither proves nor disproves some assertion, provided only that it is adequate. The grace of the proof lies in constructing a way of describing the system within itself, regardless of whether its own semantics (bother to) read the construction as such.

## Chains of inference

Among the four kinds of inference-step, only some composites are possible: one can only combine a step which yields a proof or a disproof with one which consumes one of the same kind. We are not given enough information about how that which is proved relates to that which is disproved to be in a position, for instance, to presume either that what is not proved is disproved or that what is disproved cannot also be proved – for else how might we ask whether these things are true ?

The proof/disproof rules of inference now become:

• if A implies B and B implies C then A implies C
• if A implies B and B forbids C then A forbids C
• if A forbids B and B else C then A implies C
• if A forbids B and B dnamed C then A forbids C
• if A else B and B implies C then A else C
• if A else B and B forbids C then A dnamed C
• if A dnamed B and B dnamed C then A dnamed C
• if A dnamed B and B else C then A else C

My Humpty-ism has ensured that everything reads simply from left to right: what matters, in effect, is which pairs of inference-kind are composable in which order. It may help to think of a picture with two points marked, arrows from each to the other and from each to itself; one self-arrow is labelled imply, the other dnamed, with forbid pointing from the imply point to the dnamed point and else coming back the other way. The points are, of course, labelled proof and disproof.
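The eight rules can be checked mechanically from that picture. In this Python sketch (the encoding and names are mine), each inference-kind is the pair (what it consumes about A, what it yields about B); two steps compose exactly when the middle sorts match:

```python
P, D = "proof", "disproof"

# Each kind of step: (what it consumes about A, what it yields about B).
KINDS = {
    "imply":  (P, P),   # proof of A    -> proof of B
    "else":   (D, P),   # disproof of A -> proof of B
    "dnamed": (D, D),   # disproof of A -> disproof of B
    "forbid": (P, D),   # proof of A    -> disproof of B
}

def compose(first, second):
    """A `first` B with B `second` C: composable only when what the first
    step yields about B is what the second step consumes about B."""
    eats1, gives1 = KINDS[first]
    eats2, gives2 = KINDS[second]
    if gives1 != eats2:
        return None  # mismatched certificates about B: no composite
    # The composite consumes what the first eats, yields what the second gives.
    (name,) = (k for k, v in KINDS.items() if v == (eats1, gives2))
    return name

# The eight rules listed above:
assert compose("imply", "imply") == "imply"
assert compose("imply", "forbid") == "forbid"
assert compose("forbid", "else") == "imply"
assert compose("forbid", "dnamed") == "forbid"
assert compose("else", "imply") == "else"
assert compose("else", "forbid") == "dnamed"
assert compose("dnamed", "dnamed") == "dnamed"
assert compose("dnamed", "else") == "else"
assert compose("imply", "else") is None  # imply yields a proof; else needs a disproof
```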

## Describe thyself

Written by Eddy.