The time between times

We live in a world addicted to certainties. My addiction to water is healthy and natural, your addiction to coffee may not be a problem but you need to watch it, their addiction to the media rots their minds (more irregularities of the English language), and we have this driving desire to be certain. Yet we live in a world of doubt: denying the reality of our uncertainty would be dishonest. Furthermore, both logic and physics now oblige us to live with uncertainty – the uncertainty is not entirely the product of our failings, some of it is Real®. We cannot ignore the grey areas in hopes of them going away as we come closer to perfection: so let us strive to understand them, as best we can.

My chosen metaphor for all grey areas is twilight. By day, we can see the world about us (without needing artificial illumination), at night we cannot; at night we can see the stars (if clouds give us leave), by day we cannot; in between, we see both the world about us (though dimly) and (some of) the stars. During the twilight, we see the sun cross the horizon: its movement is clear and intelligible; its responsibility becomes manifest, both for showing us our world and for blotting out the stars. Through the day weather, plants and beasts proceed after one manner; through the night after another; though there is variation in each, this is dwarfed by the difference between the two modes. In twilight we see the transitions happening.


Kurt Gödel taught some High Priests (of the self-critical aspects) of the addiction that certainties aren't always an option – but their conversion hasn't put an end to the old superstitions that have seeped into our culture from the millennia of this addiction. Indeed, other High Priests of other facets of this addiction seem entirely oblivious to the force of what Kurt has proved: for which one may sensibly use the slogan if you can (count and) always make up your mind on every subject, you are mad.

[Consider a logical system capable of sustaining Peano's axioms – a classic formulation of the process of counting. As a logical system, it recognises certain texts as meaningful in ways that allow us to ask it whether (the thing meant by) such a text is true. Kurt proved that (because it can count) there is a statement of the form I cannot prove this statement true that it recognises as meaningful (with I understood to mean the logical system itself in that statement). I'll call this statement Gödel's fork for the logical system in question.

Your logical system is described as complete if you can make up your mind on every subject (in particular, decide whether the fork is true). Using the fork, it is not hard to show that any complete logical system is inconsistent: certainty about the fork implies inconsistency. If you can make up your mind about your fork it will be both true and false: so true will imply it and it will imply false; so true implies false. But false implies everything meaningful (logicians call this ab falso quodlibet, whether you like falsehoods or not). So truth implies everything meaningful – your logical system believes every claim it can understand, which I've caricatured as madness.

One can, however, read Gödel's theorem in reverse as: in a complete consistent system, you cannot formulate Peano's axioms; indeed, your ability to count must necessarily be circumscribed – for instance by big numbers becoming fuzzy or inability to discuss all the counting numbers as a meaningful concept.]
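The bracketed argument can be set out schematically. Writing Prov for the system's provability predicate and G for its fork (the standard textbook notation, not the author's), the completeness assumption splits into two cases:

```latex
% The fork: G asserts its own unprovability.
G \;\leftrightarrow\; \neg\,\mathrm{Prov}(G)

% Completeness: the system decides G, i.e. proves G or proves \neg G.

% Case 1: it proves G. Then \mathrm{Prov}(G) holds; but G says
% \neg\mathrm{Prov}(G), so the system also proves \neg G: inconsistent.
\mathrm{Prov}(G) \;\Rightarrow\; \mathrm{Prov}(\neg G)

% Case 2: it proves \neg G, i.e. proves \mathrm{Prov}(G); a system honest
% about its own proofs then proves G as well: inconsistent again.
\mathrm{Prov}(\neg G) \;\Rightarrow\; \mathrm{Prov}(G)

% Either way the system proves both G and \neg G; ab falso quodlibet
% then lets it prove everything it can state.
```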

There is a conflict between certainty and sanity. (Not that all certainty is prejudicial to sanity – indeed, without a proper understanding of why we are certain about what, roads to insanity need not involve logical inconsistency. It's just that there is such a thing as too much certainty.) The knife I have learned to use is called honesty and it is deeply entwined with a separation between knowledge and speculation/hope/fantasy.

In many ways Gödel liberates mathematics: it no longer has a duty to strive to answer everything. The best one can get with inconsistency is a two-state logic – a given statement is either undecidable (we can't make up our minds about it) or decided (in which case we have decided that it is true, but we have also decided that it is false) – so we must live with incompleteness. The mathematician is thereby freed from the stick which demands completeness – yet not deprived of the carrot which rewards discovering just how much a given logical system can prove. Before that liberation, the need to be able to prove more encouraged mathematicians to tolerate yet more axioms, possibly packaged as working conjectures, to make it possible to decide more things. There has always been a counter-force which tries to find how little one needs to get a great deal (a few well-chosen tools may suffice for a wide variety of jobs). Kurt has cleared the air to allow folk to focus more on the latter: and we have discovered that there is much one can do, from remarkably little. Such is the price of liberty.


Our minds are well-adapted (whether this is by God's wise design or by the forces of natural selection will not affect what follows) to understanding a context (our world) with a distinctive day-night cycle: this affects our world most thoroughly through the variation in temperature and ambient light levels throughout the domains in which our kind live most willingly. Other factors are involved in how brightly my context is lit: meteorology – most obviously clouds, of whatever thickness and altitude – and the shape of my immediate surroundings (indoors, or in a wood, for instance) spring readily to mind; so the cycle doesn't repeat itself perfectly each time round. None the less, days begin (whether bright and clear or dimly through a murk), persist for a while, end and are separated from one another by nights.

To measure the day-night component of how the world is, we look at

these give us numbers we can position between +1, say, for noon (maximally day) and −1 for midnight.
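Such a number can be sketched crudely in code. This is a minimal illustration, assuming (hypothetically) that the day-night coordinate is modelled as a pure cosine of the hour of the day; real twilight depends on latitude, season and weather, as the text goes on to note:

```python
from math import cos, pi

def day_night(hour):
    """Crude day-night coordinate: +1 at noon (hour 12), -1 at midnight.

    Assumes a 24-hour cycle modelled as a pure cosine -- a hypothetical
    simplification, not the essay's actual measurement procedure.
    """
    return cos(2 * pi * (hour - 12) / 24)

print(day_night(12))  # noon: 1.0 (maximally day)
print(day_night(0))   # midnight: -1.0
print(day_night(6))   # dawn: close to 0, the twilight between the two modes
```

Dawn and dusk sit near zero: the twilight values, neither fully day nor fully night.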

We've found good theoretical models which describe the second of these – in terms of the sun moving (annually) along a great circle on the celestial sphere while the planet (and hence the horizon) spins (daily) about an axis through Polaris. Those are clear – and straightforward – enough in what they predict and how they predict it that comparison between the two approaches given (and, indeed, their relatives) leads us to

The surprising thing about spin is that it's like that day-night component of a moment in time (or anything else naturally expressible as a (predominantly periodically) varying position on a sphere)


The magnitude of an electron's angular momentum has been found to be constant (the constant involved is half Dirac's constant: this is Planck's constant divided by four pi). Now, we expect to think of angular momentum as a vector quantity: it has a magnitude and a direction. Since its magnitude seems to be constant, the spin is naturally represented as a position on a sphere – the free variable is the direction of the spin and space-time doesn't generally show any direction-sensitivity, so (at first sight) it would seem that all directions should be feasible.

Measure the x component of the angular momentum of an electron. Funny thing: it may be half Dirac's constant, or it may be minus that – but it's never anything in between. That means its x component is equal to its magnitude: and there are only two places on the sphere where that happens, the two x-poles – at which y and z are zero. You can likewise measure the y component, or the z component: like the x component, they're never zero. We directly observe an x which seems to imply that y=0=z, yet we never observe y or z being zero.
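The two-outcomes-only behaviour described above is exactly what the standard quantum description of a spin-half particle predicts. A minimal sketch, assuming the textbook representation of the spin operators as Pauli matrices scaled by half Dirac's constant (here set to 1 for convenience):

```python
import numpy as np

HBAR = 1.0  # work in units of Dirac's constant

# Spin operators for a spin-half particle: S_i = (HBAR/2) * sigma_i,
# with sigma_i the Pauli matrices.
Sx = (HBAR / 2) * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = (HBAR / 2) * np.array([[0, -1j], [1j, 0]])
Sz = (HBAR / 2) * np.array([[1, 0], [0, -1]], dtype=complex)

# A measurement of any one component can only yield an eigenvalue of
# its operator: always plus or minus HBAR/2, never anything in between.
for name, S in (("x", Sx), ("y", Sy), ("z", Sz)):
    print(name, np.linalg.eigvalsh(S))  # eigenvalues: -HBAR/2 and +HBAR/2
```

Each component's spectrum is the two-point set {−ħ/2, +ħ/2}: the sphere of directions collapses, on measurement, to a pair of poles.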

So what's spin ? It seems to be angular momentum, which is obtained out of an antisymmetric product of a displacement, x, and a momentum, p. Each of these is a vector: it can be converted to a covector by application of the metric, g – indeed, the covector g·p may credibly be thought of as the real underlying quantity of momentum, with the vector p being merely implied by it (note that g depends on the evolution of space-time, which is influenced by the energy inherent in p, so it may be worth noticing which form of p is for real).

We actually build the angular momentum by working in 3 spatial dimensions at some fixed time: we combine a position with a momentum antisymmetrically to get an antisymmetric form in a space which happens to be isomorphic to 3-vectors. In 4 dimensions, the antisymmetric product of three covector quantities yields a quantity in a space isomorphic to the original 4-dimensional space: and we have the arrow of time available to serve as the third vector. The isomorphism in question, which I'll call epsilon, has the property epsilon×epsilon = det(g): and it's a linear isomorphism between antisymmetric({covector}, i) and antisymmetric({vector}, 4−i) for i from 0 to 4; epsilon is the measure of space-time, sometimes written dV or d^4x.
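The 3-dimensional case of this construction is easy to exhibit numerically. A sketch, assuming a flat metric so that epsilon reduces to the familiar Levi-Civita symbol, with hypothetical position and momentum values chosen purely for illustration:

```python
import numpy as np

# Levi-Civita symbol in 3 dimensions: the isomorphism between
# antisymmetric 2-forms and vectors (the 3-D analogue of the text's epsilon).
eps = np.zeros((3, 3, 3))
for i, j, k in ((0, 1, 2), (1, 2, 0), (2, 0, 1)):
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

x = np.array([1.0, 0.0, 0.0])  # a position (hypothetical values)
p = np.array([0.0, 2.0, 0.0])  # a momentum (hypothetical values)

# Antisymmetric product of x and p, then map back to a vector via epsilon:
L_form = np.outer(x, p) - np.outer(p, x)           # antisymmetric 2-form
L_vec = 0.5 * np.einsum('ijk,jk->i', eps, L_form)  # equals np.cross(x, p)

print(L_vec)           # the angular momentum vector
print(np.cross(x, p))  # same thing, via the built-in cross product
```

The factor of a half compensates for each pair (j, k) being counted twice in the contraction; the result agrees with the ordinary cross product, which is this isomorphism in disguise.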

This will give us epsilon^(−1)(dt^(g·p)^(g·x)) as a 4-vector, for some suitably covectorial g·x representing position as a covector. [This is equally epsilon(p^x^(g^(−1)·dt)), give or take a sign.] It remains to come up with a suitable vector to use as x, describing position (which isn't a vector quantity – though rate of variation of position is). Our use of antisymmetry ensures that the t component of x and its component parallel to p are ignorable: and we want g·x to look like the sum of x_i·dx_i, at least for suitably well-behaved charts (in which g is ±unit-diagonal, for instance).

Written by Eddy.