Inductive definitions provide a way of describing the yawning expanse of infinity without having to instantiate an infinite thing. I like that: I don't trust infinity.
The axiom of induction essentially says the following two things. First, starting from some given collection and finitely many constructors, it asserts that it is meaningful to discuss the collection of everything you can build from members of the given collection using the given constructors.
If the given collection's sole member is the empty set, and our sole constructor is successor (which maps a set, S, to the union of S and {S}, the singleton-set whose sole member is S), then we construct from these the set known as the natural numbers, whose members are 0 = {}, 1 = successor(0) = {0}, 2 = successor(1) = {0,1}, 3 = {0,1,2}, ...
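The construction just described can be sketched in Python; this is an illustration of the von Neumann encoding, not anything the text itself prescribes, using frozenset so that sets may contain sets.

```python
# Sketch of the von Neumann naturals: successor maps a set S to
# the union of S and {S}; frozenset lets sets be members of sets.

def successor(s):
    """Map a set S to S union {S}."""
    return frozenset(s | {s})

empty = frozenset()          # 0 = {}
one = successor(empty)       # 1 = {0}
two = successor(one)         # 2 = {0, 1}
three = successor(two)       # 3 = {0, 1, 2}

# Each natural, as a set, has as its members exactly the smaller naturals:
assert empty in three and one in three and two in three
assert len(three) == 3       # the set representing n has n members
```

Note that, on this encoding, "i is a member of n" and "i is less than n" coincide, which is what makes the strong induction discussed below read so cleanly.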
Second, given a statement about arbitrary members of such a collection, C, a sufficient proof of that statement consists of a base case, which proves the statement for members of the given collection, and an inductive step, which consists of showing how to propagate the given truth to the whole of C.
The easiest variety of inductive step is a proof, for each constructor, that if the inputs it uses satisfy the statement then the output(s) it produces will also satisfy the statement. However, a slightly more general inductive step in the same spirit is available.
For any sub-collection, S, of C, we have another sub-collection, T, of C obtained by adding to S every value which can be constructed, from members of S, by a single application of any of the constructors used to build C. The inductive step requires us to prove that, if our statement is true for arbitrary members of S, then it is true for arbitrary members of T.
We don't actually have to do this for arbitrary S of whose members our statement is true: we can restrict to those S possessing some helpful property, provided our starting collection has that property and we can prove that T will also have it.
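The passage from S to T can be made concrete; the following is an illustrative sketch (the constructors and the statement checked are hypothetical examples, not from the text), representing each constructor by its arity and a building function.

```python
# Sketch: given a sub-collection S and some constructors, build T by
# adding every value obtainable from members of S by a single
# application of a constructor.

from itertools import product

def one_step(S, constructors):
    """T = S plus everything one constructor application away from S."""
    T = set(S)
    for arity, build in constructors:
        for args in product(S, repeat=arity):
            T.add(build(*args))
    return T

# Hypothetical example: start from {0} with successor as sole constructor.
constructors = [(1, lambda n: n + 1)]
S = {0}
T = one_step(S, constructors)
assert T == {0, 1}
# A statement ("n >= 0") that holds on S propagates to T:
assert all(n >= 0 for n in T)
```

The inductive step of a proof amounts to showing that the truth of the statement survives this one_step operation.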
To prove that some statement is true for every natural number, the easy
form requires us to prove that it is true for empty (the sole member of the
initial collection) and, whenever it is true for some natural n, it is also true
for what our one constructor makes of n, namely successor(n). The subtler
approach allows us various other forms of induction, most notably the one called
strong induction
which, for a given predicate P, requires only that you prove
( i in n implies P(i) ) implies P(n)
and lets you infer P(j) for every natural j. Note that this doesn't require a separate proof of P(empty), since empty has no members, so P(empty) follows trivially from what you were obliged to prove.
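Strong induction corresponds to a recursion scheme in which the value at n may draw on the values at every i in n; the following sketch (my illustration, not the text's) shows why no separate base case arises: at n = 0 the list of prior values is simply empty.

```python
# Sketch: strong induction as a recursion scheme. step(prior) receives
# the values already established for 0..n-1 and must produce the value
# for n; when n = 0 it receives the empty list, so the base case is
# subsumed, just as P(empty) is subsumed in the proof obligation.

def by_strong_induction(step, n):
    prior = []
    for _ in range(n + 1):
        prior.append(step(prior))
    return prior[n]

# Example: Fibonacci numbers, where each value uses the two before it.
fib = lambda prior: 1 if len(prior) < 2 else prior[-1] + prior[-2]
assert [by_strong_induction(fib, n) for n in range(7)] == [1, 1, 2, 3, 5, 8, 13]
```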
Equally, mathematics applies the same machinery when building free groups
and similar constructions: we take a collection of generators and deploy the
constructors of group theory - namely: there is an identity (that's a
constructor taking no inputs!); for any value we have an inverse; for any two
values, we have a product. To prove a result about a free group, it suffices to
prove the result for its collection of generators and prove that the result is
preserved by the constructors: so the result must be proved for the identity,
for any value satisfying the result we must prove its inverse does too, and for
any pair of values each satisfying the result, we must prove the result for
their product.
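The three constructors of group theory can be sketched directly; this is an illustrative encoding of my own (words as tuples of (generator, exponent) letters, kept reduced by cancelling adjacent inverse pairs), not a construction the text fixes.

```python
# Sketch of the free group's three constructors on reduced words.

identity = ()                                   # constructor with no inputs

def inverse(word):
    """Reverse the word and negate each letter's exponent."""
    return tuple((g, -e) for g, e in reversed(word))

def product(u, v):
    """Concatenate reduced words, cancelling g . g^-1 at the join."""
    word = list(u)
    for letter in v:
        if word and word[-1][0] == letter[0] and word[-1][1] == -letter[1]:
            word.pop()
        else:
            word.append(letter)
    return tuple(word)

a, b = (('a', 1),), (('b', 1),)                 # two generators
assert product(a, inverse(a)) == identity
w = product(a, product(b, inverse(a)))          # a.b.a^-1
assert product(w, inverse(w)) == identity
```

An inductive proof over this structure would establish its result for a and b, for identity, and show it preserved by inverse and product, exactly as described above.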
One can get away with defining induction only in the context of the natural numbers: with these, one can build anything defined in the inductive terms above. Here it suffices to build a mapping from the naturals to collections - in other words, a sequence of collections. The first of these is, of course, your initial collection; and after a given C in the sequence comes C's union with the collection of values constructible, in one application of one constructor, from members of C. Uniting the collections in the sequence (each of which subsumes all its predecessors), we get the collection discussed above. Thus we can build the inductive tool above from one of its special cases.
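This sequence-of-collections construction admits a direct sketch; again the particular constructors are hypothetical illustrations, and a finite stage can only approximate the full (possibly infinite) union.

```python
# Sketch: C[0] is the initial collection; C[n+1] is C[n] united with
# everything constructible in one application of one constructor from
# members of C[n]. stage(n, ...) computes the n-th collection in the
# sequence; each stage subsumes all its predecessors.

from itertools import product as cartesian

def stage(n, initial, constructors):
    C = set(initial)
    for _ in range(n):
        new = set(C)
        for arity, build in constructors:
            for args in cartesian(C, repeat=arity):
                new.add(build(*args))
        C = new
    return C

# With initial {0} and successor as sole constructor, stage n is {0,...,n}:
assert stage(4, {0}, [(1, lambda k: k + 1)]) == {0, 1, 2, 3, 4}
```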
Equally, I am interested in generalising the inductive tool further. One can define relations inductively: given a definition of the relation's restriction to our generators, it suffices to specify the relation's action on a constructed value, x, in terms of its action on the values used to construct x. To prove that an inductively defined relation is a mapping may then involve proving that, no matter how you construct x, you get the same answer: that would typically require an inductive proof. My first definition of induction (above) is a special case of such construction of relations.
That is: one can define a relation by specifying a sub-relation and a
collection of generators,
each of which takes some information about values of
the relation and tells us about some values that are consequently related. The
natural numbers are then given by: Natural has empty as a fixed point; if n is a
fixed point of Natural, then so is successor(n).
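As an illustration of such an inductively defined relation being a mapping, here is a sketch of my own (with naturals as plain ints standing in for the set-theoretic construction): addition, given by its value on the generator and its action on successor-constructed values.

```python
# Sketch: a mapping defined inductively on the naturals. Its value at
# the generator (zero, standing for empty) is given outright; its value
# at successor(n) is specified in terms of its value at n.

def add(n, m):
    if n == 0:                   # action on the generator
        return m
    return 1 + add(n - 1, m)     # action on successor(n) via its value at n

assert add(3, 4) == 7
assert add(0, 5) == 5
```

That each natural is constructed in only one way (as successor of a unique predecessor, or as zero) is what spares us the well-definedness proof mentioned above.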
One standard inductive tool is transitive closure. This extends a relation, r, to a minimal transitive relation subsuming r: it is the intersection of all transitive relations subsuming r. [Note that All is transitive and subsumes any relation, so the collection of relations intersected is non-empty.] We can use induction to construct the transitive closure of r: wrap up the induction in the mapping repeat and look at unite(natural: n-> repeat(n,r) |). If this relates a to w, then there is some natural n for which repeat(n,r) relates a to w. If it relates w to z, using repeat(m,r) for some natural m, we have repeat(n+m,r) relating a to z, so our union relates a to z: hence it is transitive. Since that calls for the natural numbers, which I prefer to define inductively, I'll settle for defining transitive closure in terms of induction, not the other way round.
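For a finite relation the union of the repeat(n,r) stabilises after finitely many stages, so the construction above can be sketched concretely; representing a relation as a set of pairs is my choice of encoding, not the text's.

```python
# Sketch: transitive closure as the union of iterated compositions of r.
# compose(r, s) relates a to z when r relates a to some w that s relates
# to z; on a finite relation, repeatedly adding one more step of r
# eventually yields nothing new, and the accumulated union is transitive.

def compose(r, s):
    return {(a, z) for (a, w) in r for (x, z) in s if w == x}

def transitive_closure(r):
    closure, layer = set(r), set(r)
    while layer:
        layer = compose(r, closure) - closure   # pairs one more step adds
        closure |= layer
    return closure

r = {(1, 2), (2, 3), (3, 4)}
assert transitive_closure(r) == {(1, 2), (2, 3), (3, 4),
                                 (1, 3), (2, 4), (1, 4)}
```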
Written by Eddy.