**Theorem**: there is no proof of this assertion.

Proof: suppose the contrary - that there were a proof of the assertion, which would constitute a counter-example to the assertion. This being absurd, *reductio* proves the assertion.

The law of the excluded middle says that every statement is either true or false: there is nothing in between. It's also known as completeness - a complete logical system is one in which this law holds. When combined with consistency - the assumption that truth doesn't contradict itself (which would be *absurd*) - this gives us a very powerful tool, *reductio ad absurdum*. This is a method of proof: to prove a statement A, take a contrary hypothesis, reason from this to an absurdity, conclude (by consistency) that the contrary hypothesis is false, whence (by completeness) that A is true. I refer to this mechanism, in brief, as *reductio*.
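In a proof assistant, this mechanism shows up as an explicitly classical principle, separate from the constructive core. A minimal sketch in Lean 4 (assuming the standard library's `Classical.byContradiction`, which packages exactly the step from "the contrary hypothesis leads to absurdity" to "A is true"):

```lean
-- reductio: from a proof that the contrary hypothesis yields absurdity,
-- conclude A.  This needs the classical axiom:
-- Classical.byContradiction : (¬A → False) → A
example (A : Prop) (h : ¬A → False) : A :=
  Classical.byContradiction h

-- the other direction is constructive: A → ¬¬A needs no axiom at all.
example (A : Prop) (a : A) : ¬¬A :=
  fun na => na a
```

The asymmetry between the two examples is the point: passing from A to not-not-A is free, while coming back is precisely *reductio*.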

Now, Gödel's theorem tells us (given that we do know how to count indefinitely) that assuming completeness, or assuming we can prove our own consistency, makes our logical system inconsistent: so we have to stop using *reductio*, and even completeness, in proofs. Now, *reductio* is deeply entrenched in the corpus of mathematical knowledge: with the standard definition of continuity, *reductio* is required in most proofs that a function is continuous - to give but one example. So it will take us a while to work out how to say the things we understand without recourse to it.

Most urgently, we must teach our students, from the moment we introduce logic, that the function not, whose restriction to {True, False} swaps these two values, is not, in general, self-inverse - mayhap not&on;not&on;not = not, none the less not&on;not isn't the identity on logical statements (it's a projection), though it is the identity on not's outputs. [I'm using &on; as the composition binary operator on (relations generally, but here) the mapping not; f&on;g maps u to f(g(u)); for more on denotations see here.] Then a new generation will grow up used to this basic constructive discipline: if they yet retain the taint of our immersion in *reductio*, as I do though educated half a century after Gödel, then at least they should be better placed to work out how to educate *their* heirs. Meanwhile, we can all try our best to avoid *reductio* in our proofs, and hunt for formulations of familiar truths which don't depend on it.

[So what is not&on;not? I described it, above, as a projection: in general, this means a function composable after itself and equal to the ensuing composite. Given not&on;not&on;not = not (which I'm not going to prove just now), clearly (not&on;not)&on;(not&on;not) = not&on;not, so not&on;not is indeed a projection. This means that it acts as the identity on the set of values it yields: indeed, we have it as the identity on the set of values not yields. Thus not is self-inverse on its range, but this is a proper subset of its domain.]
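A toy model makes this concrete. Suppose (purely for illustration) a third truth value, Unknown, for statements neither proved nor refuted, and take not(Unknown) = False - a modelling choice of mine, not a claim about any particular logic. Then not&on;not&on;not = not holds, yet not&on;not is a projection rather than the identity:

```python
# Three truth values: True, False, and None standing for "Unknown".
# The choice not(None) = False is purely illustrative.
DOMAIN = (True, False, None)

def not_(u):
    return {True: False, False: True, None: False}[u]

def compose(f, g):
    """f&on;g maps u to f(g(u)), as in the text."""
    return lambda u: f(g(u))

nn = compose(not_, not_)    # not&on;not
nnn = compose(not_, nn)     # not&on;not&on;not

# not&on;not&on;not = not, yet not&on;not is not the identity:
assert all(nnn(u) == not_(u) for u in DOMAIN)
assert any(nn(u) != u for u in DOMAIN)            # nn(None) is True, not None
# not&on;not is a projection: composing it with itself changes nothing,
assert all(nn(nn(u)) == nn(u) for u in DOMAIN)
# and it is the identity on not's outputs (which are just True and False):
assert all(nn(not_(u)) == not_(u) for u in DOMAIN)
```

On the two-valued restriction all these maps collapse to the familiar self-inverse not; only the extra value exposes the projection.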

Fortunately, many uses of the law of *reductio* can be avoided -
typically by replacing some axioms and definitions with ones which
*reductio* would render equivalent, but which actually imply the things we
want without needing negation; and frequently the result proved is similarly one
which *reductio* would render equivalent to the result originally proved.
Sometimes this yields contorted results - basically because we cannot use familiar linguistic tools that subsume a negation to simplify our description of what's going on - but the important truth is still proven.

For example, take *the square root of 2 is irrational*. Now *irrational* is a negatively-defined term - what that really says is *2 is not the square of any rational*, a rational being the product of an integer and the multiplicative inverse of a positive integer. Now, in fact, this is just the special case of a more general theorem: a square root of a positive integer is either an integer or an irrational. We prove that (for example) 2 is not the square of any integer by observing that any integer less than -1 or greater than 1 has square at least 4, so greater than 2, and the three remaining integers, 0 and ±1, have squares 0 and 1, neither of which is 2.
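That observation is easy to check mechanically; a quick sketch (using Python's `math.isqrt`, and only as a numeric illustration of the argument, not a proof):

```python
import math

def is_integer_square(n):
    """True iff the non-negative integer n is the square of an integer."""
    r = math.isqrt(n)   # floor of the square root
    return r * r == n

# 2 is not the square of any integer: integers beyond ±1 square to at
# least 4, while 0 and ±1 square to 0 or 1.
assert not is_integer_square(2)
assert [n for n in range(10) if is_integer_square(n)] == [0, 1, 4, 9]
```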

Unwrapping the negation, we now have *if a rational's square is an integer, then the rational is also an integer*. Formally, with square = (rational| r->r.r :rational), we are asserting that (|square:integers) is the integers. So, suppose we have a rational, p/q, whose square is an integer, N. Then p.p=q.q.N is an integer: so any prime factor of q is a repeated prime factor of p.p and thus a factor of p. Consequently, either q=1 or there is a rational equal to p/q, with smaller denominator, whose square is N. We could now get the desired result by using some proof that every rational can be written in coprime form (in which p,q have no common prime factor; but any prime factor of q is one of p, so q must have no prime factors, so it is 1). However, just to reassure you that this doesn't depend on a hidden use of *reductio*, I'll complete the proof directly.

Use induction on *every rational p/q with q<n and p.p/q.q an integer is an integer*, starting with n=2 (which implies q=1, and makes the statement trivial): consider any p/q with q<2n and p.p/q.q an integer; so either q=1, whence p/q is an integer, or q has some prime factor, r, which p shares with it; so q/r and p/r are both integers; and (p/r)/(q/r) = p/q has an integer as its square and denominator, q/r, less than or equal to q/2 (since r is prime, so at least 2) and so less than n. Therefore (by our inductive hypothesis) (p/r)/(q/r) is an integer. Thus, for each positive integer i (replacing n with power(i, 2), aka 2^{i}), we deduce that whenever a rational whose denominator is less than 2^{i} has an integer as its square, the given rational is an integer: but every positive integer is less than 2^{i} for some i, so we obtain *every rational whose square is an integer is an integer*.
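The induction is effectively an algorithm: keep dividing out a factor shared between numerator and denominator until the denominator reaches 1. A sketch (the function name is mine; `math.gcd` stands in for finding a shared prime factor, which the argument guarantees exists whenever q > 1):

```python
from math import gcd

def as_integer(p, q):
    """Given positive integers p, q with (p/q)**2 an integer, return the
    integer that p/q equals, by the denominator-shrinking induction."""
    assert (p * p) % (q * q) == 0, "square of p/q must be an integer"
    while q > 1:
        # any prime factor of q is a repeated prime factor of p.p, hence
        # a factor of p; so p and q share a factor greater than 1.
        g = gcd(p, q)
        assert g > 1
        p, q = p // g, q // g   # same rational, strictly smaller denominator
    return p

assert as_integer(6, 3) == 2    # (6/3)**2 == 4
assert as_integer(50, 10) == 5  # (50/10)**2 == 25
```

The loop terminates because the denominator strictly shrinks, exactly as the induction on n shows; no step appeals to a contrary hypothesis.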

So, with a little care in what we assume, and a little discipline, we can find constructively provable results which amount to familiar results orthodoxly proved by *reductio* (Euclid's proof of the irrationality of 2's square root: as above, the square root isn't an integer; suppose it rational; so it has a coprime representation, p/q with p, q sharing no common prime factor, and q>1; so q has some prime factor r (which might be equal to q); so p.p=q.q.2 also has r as a prime factor (and repeats it); so p has r as a prime factor; which is absurd as p,q have no common prime factor; so our supposition of rationality fails). The proofs may work out longer, but they often have a greater clarity - free of the confusion that multiple negation lends to things stated with *reductio*.

*This next bit needs some other home ... it has to do with addressing what
not&on;not&on;not is ... in a particular formalism.*

One can describe logic by an arrow world containing an initial identity, False, whose self-homset, hom(False, False) = {arrows, a: (False,a,False) is composable}, is just {False} (because the initial property implies this or empty, and we already have False in hom(False, False)). We call the identity on this homset True = (hom(False, False)| i->i :hom(False, False)), and note that it is isomorphic to hom(False, arrow) for any arrow, by the initial property of False. Furthermore, we obtain a mapping from every homset to hom(False, False), so composable before True, namely (hom(a,b)| e-> False :hom(False, False)). Moreover, as hom(False, False) = {False}, any mapping from any hom(a,b) to hom(False, False) is actually equal to this. Thus True is terminal.

The properties of logic now emerge as:

- completeness
- says that every arrow, a, has either hom(True,a) or hom(a,False) non-empty.
- consistency
- hom(True,False) is empty, so no a has both hom(True,a) and hom(a,False) non-empty - for t in hom(True,a), f in hom(a,False), we have f&on;a&on;t in hom(True,False).

- truth
- is defined in terms of arrows composable after True: an arrow, a, is deemed true precisely if hom(True,a) is non-empty.
- a implies b
- is the identity on hom(a,b). There is exactly one arrow from {False} = hom(False, False) = (True|) to hom(a,b) for each member of the latter: so hom(True, a implies b) is non-empty precisely if hom(a,b) is non-empty. Thus *a implies b* is true iff hom(a,b) is non-empty. Viewing *implies* as a relation on arrows [formally, the relation is a~b ⇔ *a implies b* is true], we can look for the familiar properties of relations:
- reflexive
- will hold iff, for any arrow a, a&implies;a is true. This is immediately true for any identity arrow, but falls down in general. What's required is an arrow antiparallel to a: but the unique arrow in hom(False, True) has nothing antiparallel to it if our system is consistent.

- symmetric
- means that whenever a implies b, the converse holds also. This is not generally true.
- transitive
- means that whenever *a implies b* and *b implies c* are true, we also have *a implies c* true. This is easy: *a implies b* true ⇔ (by the above) there is some arrow m in hom(a,b); likewise we have some n in hom(b,c). So we can compose to get n&on;b&on;m in hom(a,c), so *a implies c* is also true. Indeed, in any composable list with more than 2 entries, the first always implies the last.

Notice, in particular, that every implication is an identity arrow: so not all arrows are implications.

- not
- is a-> *a implies False*, the identity on hom(a, False). In particular, since every output of not is an identity arrow, not&on;not can only have any chance of acting as the identity on arrows which are themselves identities.

- and
- is the Cartesian product
- or
- is the direct sum

In these terms, not&on;not is a-> hom(hom(a,False), False). This is the assertion that there is some arrow, d, from hom(a,False) to (|False). But False is an initial identity: so False&on;d = d; and, for any arrow, f, hom(d,f) contains exactly one arrow (which is initial).
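The transitivity argument above can be traced in a toy model. Here (all names mine, and only as an illustration) I record homsets merely by whether they are non-empty, i.e. as edges in a directed graph, so that composition of arrows becomes chaining of edges; *x implies y* is then true iff some chain of at least one arrow leads from x to y:

```python
# A toy "arrow world": homsets recorded only as non-empty (an edge) or
# empty (no edge).  False is initial, so it has an arrow to everything.
ARROWS = {"False", "True", "a", "b", "c"}
EDGES = {("False", x) for x in ARROWS}   # False implies everything
EDGES |= {("a", "b"), ("b", "c")}        # hom(a,b), hom(b,c) non-empty

def implies(x, y, edges=EDGES):
    """True iff some composable chain of one or more arrows runs x to y."""
    seen, todo = set(), [t for (s, t) in edges if s == x]
    while todo:
        u = todo.pop()
        if u == y:
            return True
        if u not in seen:
            seen.add(u)
            todo += [t for (s, t) in edges if s == u]
    return False

assert implies("a", "b") and implies("b", "c")
assert implies("a", "c")                 # transitive: chains compose
assert not implies("True", "False")      # consistency: hom(True,False) empty
assert not implies("a", "a")             # reflexivity fails: no loop at a
assert all(implies("False", x) for x in ARROWS)
```

Requiring at least one arrow in the chain is what lets the model exhibit the failure of reflexivity noted above, while path-concatenation delivers transitivity exactly as the n&on;b&on;m composite does.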

Written by Eddy.