Where two lines meet, there is an angle. We can add angles together, we can multiply them by numbers (yielding angles), we can divide one angle by another to get a number; in all this, angles are just like lengths. We can measure the sizes of angles relative to one another but, until we choose an angle to use as unit, angles aren't numbers. A quarter turn isn't π/2 or, indeed, 1/4; it's a quarter turn; and it is a right angle; but it isn't a number. Here turn serves as a unit of angle entrenched in the English language: it's the angle through which one has to turn to get back to where one started. A quarter of that angle is the right angle one gets between perpendicular lines.

Angles also have direction: so there are two quarter turns – one clockwise, the other anticlockwise. I'll distinguish these as the right angle and the left angle. For compatibility with the notions of turning right and turning left, as used in orthodox descriptions, I'll presume that a right angle is a quarter turn clockwise while a left angle is a quarter turn anticlockwise; and describe the left angle as positive, the right angle as negative; however, all that matters is that left and right be in opposite senses and that one is negative, the other positive.

The SI unit of angle is the radian; 2.π radians = one turn. I'll come back to why later. The right (or left) angle, already mentioned, is another candidate for use as a unit of angle; as is the about-face or half turn. I can't say I'm much interested in the degree, though – it's a hang-over from the Babylonian approximation to the year as 360 days, albeit 360 is a highly factorisable number (it has 5 as a factor once, 3 twice and 2 thrice). There's also a unit called the grad, equal to turn / 400, which I believe originated in military usage (among gunners); this subdivides the right angle into 100 equal parts, making it marginally tidier than the degree; but I do not find it a compelling choice for unit of angle – at best, it is the centi-unit when the chosen unit is the quarter turn.
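To make the relationships among these units concrete, here is a small Python sketch (the names TURN, RADIAN, DEGREE, GRAD and convert are mine, purely for illustration), taking the turn as base unit:

```python
import math

TURN = 1.0                       # the whole turn, chosen here as base unit
RADIAN = TURN / (2 * math.pi)    # SI's radian: turn/(2.pi)
DEGREE = TURN / 360              # the Babylonian hang-over
GRAD = TURN / 400                # the gunners' grad

def convert(angle, unit):
    """Express an angle (held as a multiple of TURN) in the named unit."""
    return angle / unit

quarter = TURN / 4               # the right (or left) angle
print(convert(quarter, RADIAN))  # pi/2, about 1.5708
print(convert(quarter, DEGREE))  # 90 degrees (up to rounding)
print(convert(quarter, GRAD))    # 100 grads (up to rounding)
```

Whichever unit is chosen as base, the ratios among the units are fixed; only the numbers representing a given angle change.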

Rotation through a whole turn gets you back to where you were when the turn started. In two dimensions, a half turn negates both co-ordinates; repeating it to make a whole turn illustrates power(2, −1) = 1; the quarter turn will in due course illustrate what a square-root of −1 looks like. It is worth noting that one needs at least two dimensions to have angles; with more dimensions, more complex options enter the picture, but our discussion of any given angle can, at least, be reduced by projection onto the (at most) two dimensions spanned by the lines forming the angle. In one dimension, we still – sort of – have angles; but only half and whole turns: as before, the half turn takes the rôle of −1, we see its square is 1 and might pause to dream of the quarter turn as its square root.

One way or another, there are good reasons in geometry and linear algebra for wanting to measure angles in turns or nice easy fractions of the turn. So why have the learned bods of the international institutes chosen the radian, an irrational fraction of the turn, as the SI unit ?

To answer that, it will be necessary to discuss some trigonometry; this shall reinforce the case for the turn, while also equipping us to state the case for the radian.

A little trigonometry

Figure: two lines come out of a right angle, one going forward the other upwards; the front of the former is joined to the top of the latter by a line, called the hypotenuse; this is longer than either of the original two lines. At the front of the triangle, the hypotenuse meets the forward line; the angle between them is labelled a. The sides are also labelled with: h on the hypotenuse, h.Sin(a) on the upright and h.Cos(a) on the forward edge. The labels are the size of the angle and lengths of the sides.

By such a diagram or otherwise, the functions Sin (short for sine and pronounced like sign) and Cos (for cosine) are defined; each takes an angle as input and produces a ratio (a pure number) as output; a is an angle, Sin(a) is the length ratio of the upright to the hypotenuse, as Cos(a) is for forward to hypotenuse. [The ratio of upright to forward is called the tangent, written Tan(a) and equal to Sin(a) / Cos(a). But it and the rest of that family are peripheral to the present discussion.] We can use Sin and Cos to express Pythagoras' theorem as:

Sin(a).Sin(a) +Cos(a).Cos(a) = 1

The diagram only really gives us Sin(a) and Cos(a) for a between zero and a quarter turn; and, at that, interprets a as positive, so we'd better call that quarter turn the left angle (positive by the conventions chosen above). However, various structural truths about Sin and Cos emerge; in particular, it is possible to compute the Sin and Cos of a+b and a−b from the Sin and Cos of a and b, at least wherever a+b or a−b is an angle for which Sin and Cos are defined; by accepting the answers this gives where Sin and Cos aren't defined, we can extend the two functions to arbitrary angles. Here are the relevant formulae:

Sin(a+b) = Sin(a).Cos(b) +Cos(a).Sin(b)
Cos(a+b) = Cos(a).Cos(b) −Sin(a).Sin(b)
Sin(a−b) = Sin(a).Cos(b) −Cos(a).Sin(b)
Cos(a−b) = Cos(a).Cos(b) +Sin(a).Sin(b)

These addition and subtraction formulae can (for instance) be obtained by using the given definition to infer the co-ordinates of the linear map which implements a rotation through an angle a; (: [x.Cos(a) −y.Sin(a), x.Sin(a) +y.Cos(a)] ← [x,y] :); applying this for angles a and b we can compose the linear maps, by matrix multiplication, to obtain an implementation of rotation through angle a+b, entirely in terms of the cosine and sine of a and b; we can equally apply the same reasoning for a+b as we did for a and b to obtain rotation through a+b in terms of the cosine and sine of a+b itself; comparing this with the composite yields the first two formulae given above. Either by similar reasoning or by substituting c=a+b, d=a, c−d=b and subsequent re-naming, one can obtain the last two. (Alternatively, consider a rectangle, tilted relative to one's preferred co-ordinates, and its bounding box with respect to those co-ordinates; the angles between a diagonal of the rectangle, an edge of it and an edge of the bounding box can then give us two formulae for the edges of the bounding box, in terms of the length of the rectangle's diagonal, that give one the same equations.) Note that the corresponding equations for tangent can be cast purely in terms of Tan:

Tan(a+b) = (Tan(a) +Tan(b)) / (1 −Tan(a).Tan(b))
Tan(a−b) = (Tan(a) −Tan(b)) / (1 +Tan(a).Tan(b))

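The composition argument can be checked numerically; the following Python sketch composes the rotation maps for sample angles a and b, by matrix multiplication, and compares the result with the map for a+b directly (angles in radians, to suit Python's math library):

```python
import math

def rotation(a):
    """Matrix of (: [x.Cos(a) −y.Sin(a), x.Sin(a) +y.Cos(a)] ← [x, y] :)."""
    return [[math.cos(a), -math.sin(a)],
            [math.sin(a),  math.cos(a)]]

def compose(m, n):
    """2x2 matrix product m.n: the map m applied after the map n."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b = 0.7, 1.1   # arbitrary sample angles
composite = compose(rotation(a), rotation(b))
direct = rotation(a + b)
# composite[1][0] is Sin(a).Cos(b) +Cos(a).Sin(b); direct[1][0] is Sin(a+b); etc.
print(all(abs(composite[i][j] - direct[i][j]) < 1e-12
          for i in range(2) for j in range(2)))   # True
```

This is a numerical check, not a proof; but it pins down which matrix entry corresponds to which formula.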
For the subtraction formulae with a = b, we promptly obtain Sin(zero) = 0 and, via Pythagoras' theorem, Cos(zero) = 1. Applying the same formulae with a = zero instead, we obtain

Sin(−b) = −Sin(b)
Cos(−b) = Cos(b)

i.e. Sin is an odd function (negating its input negates its output) and Cos is an even function (negating its input doesn't change its output). These, in turn, reduce the formulae for a−b to special cases of the formulae for a+b, obtained by simply replacing b with −b. (They also imply that Tan = Sin/Cos is odd.)

Just as the original triangle only established Sin and Cos for angles between zero and a quarter turn, it only strictly lets us assert Pythagoras' theorem for those angles. However, by establishing that the above sum and difference formulae yield results which satisfy Pythagoras' theorem for a+b and a−b when it holds for a and b, we can induce the theorem's validity for arbitrary angles, just as we can infer the Sin and Cos of arbitrary angle using the above formulae. From Sin being odd and Cos even, we can immediately infer that Pythagoras' theorem holds for −b whenever it holds for b, saving us the need to examine the a−b case. So, given Pythagoras' theorem for angles a and b:

Sin(a+b).Sin(a+b) +Cos(a+b).Cos(a+b)
= (Sin(a).Cos(b) +Cos(a).Sin(b)).(Sin(a).Cos(b) +Cos(a).Sin(b)) +(Cos(a).Cos(b) −Sin(a).Sin(b)).(Cos(a).Cos(b) −Sin(a).Sin(b))
= Sin(a).Sin(a).Cos(b).Cos(b) +2.Sin(a).Cos(a).Sin(b).Cos(b) +Cos(a).Cos(a).Sin(b).Sin(b) +Cos(a).Cos(a).Cos(b).Cos(b) −2.Sin(a).Cos(a).Sin(b).Cos(b) +Sin(a).Sin(a).Sin(b).Sin(b)
= (Sin(a).Sin(a) +Cos(a).Cos(a)).Cos(b).Cos(b) +(Cos(a).Cos(a) +Sin(a).Sin(a)).Sin(b).Sin(b)
= Sin(b).Sin(b) +Cos(b).Cos(b) = 1

by applying Pythagoras first to a and then to b.

Furthermore, we can (by adding and subtracting suitable formulae above) infer the product formulae:

2.Sin(a).Cos(b) = Sin(a+b) +Sin(a−b)
2.Cos(a).Cos(b) = Cos(a+b) +Cos(a−b)
2.Sin(a).Sin(b) = Cos(a−b) −Cos(a+b)

I'll use left as a positive quarter turn and right a negative one. In our original triangle, the angle at the top of the hypotenuse is just left−a (because the sum of angles in the triangle is a half turn, 2.left, and the right angle accounts for left of that, leaving left for the other two). This is equal to a precisely when a is left/2. Consequently the two sides other than the hypotenuse are equal; so Sin(a).Sin(a) = Cos(a).Cos(a) and their sum is 1, so each is 1/2, whence Sin(left/2) = 1/√2 = Cos(left/2). Thus

Sin(left/2) = 1/√2 = Cos(left/2)
Sin(left) = 1, Cos(left) = 0
Sin(2.left) = 0, Cos(2.left) = −1
Sin(3.left) = −1, Cos(3.left) = 0
Sin(4.left) = 0, Cos(4.left) = 1

and we can use the fact that Sin is odd and Cos is even to infer the corresponding results for multiples of the right angle (i.e. negative multiples of left).
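Since Sin(left) = 1 and Cos(left) = 0, the addition formulae generate the values at every whole multiple of the quarter turn by pure integer arithmetic; a Python sketch (add_angle is my name for the pairwise addition formula):

```python
def add_angle(sc_a, sc_b):
    """(Sin, Cos) of a+b from (Sin, Cos) of a and of b, via the addition formulae."""
    (sa, ca), (sb, cb) = sc_a, sc_b
    return (sa * cb + ca * sb, ca * cb - sa * sb)

left = (1, 0)   # (Sin, Cos) of the positive quarter turn
sc = (0, 1)     # (Sin, Cos) of the zero angle
for n in range(1, 5):
    sc = add_angle(sc, left)
    print(n, sc)
# 1 (1, 0); 2 (0, -1); 3 (-1, 0); 4 (0, 1): a whole turn restores zero's values
```

No π anywhere: measured against the quarter turn, these values are plain integers.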

It follows immediately that

Sin(a +turn) = Sin(a), Cos(a +turn) = Cos(a)
Sin(a +2.left) = −Sin(a), Cos(a +2.left) = −Cos(a)
Sin(a +left) = Cos(a), Cos(a +left) = −Sin(a)
Sin(a −left) = −Cos(a), Cos(a −left) = Sin(a)

These tell us that the two sinusoids are periodic (they repeat themselves after a turn), that advancing either by a half turn negates it, and that each may be obtained by advancing or backing up the other by a quarter turn. We should also note one further important structural fact (for symmetry, expressed here in two ways):

Sin(left −a) = Cos(a)
Cos(left −a) = Sin(a)

which you can actually read off from the original triangle which I used to define Sin and Cos, by looking at the angle at the top of the hypotenuse, which is left−a. We can use the addition formulae to compute Sin and Cos of any multiple of an angle, as polynomials in the Sin and Cos of the original angle. In particular, this lets us express the (known) Sin and Cos of the turn as polynomials in the Sin and Cos of the result of dividing the turn by any positive integer; by solving the resulting polynomial equations, we can (with suitable care) infer the Sin and Cos of such fractions of the turn; by scaling those fractions up, we are thus able to compute the Sin and Cos of any rational (i.e. integer divided by positive integer) multiple of the turn. Since the relevant polynomials all have integer coefficients and the Sin and Cos of the (quarter, half and whole) turn are integers, we also infer that the Sin and Cos of any rational multiple of the turn is an algebraic number (i.e. one that's a solution to some polynomial equation whose coefficients are all integers). This could sanely be construed as a powerful argument in favour of using the turn (or a rational multiple of it) as unit of angle.
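As an example of that last point, expanding Cos(5.a) via the addition formulae gives 16.c.c.c.c.c −20.c.c.c +5.c with c = Cos(a); at a = turn/5 this must equal Cos(turn) = 1, so Cos(turn/5) is a root of an integer-coefficient polynomial. A quick numerical confirmation:

```python
import math

c = math.cos(2 * math.pi / 5)              # Cos(turn/5), with math in radians
residue = 16 * c**5 - 20 * c**3 + 5 * c - 1
print(abs(residue) < 1e-12)                # True: c solves the integer polynomial
```

Solving 16.c^5 −20.c^3 +5.c −1 = 0 with suitable care picks out the algebraic number Cos(turn/5) exactly.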

That pretty much re-iterates the virtues of the turn and its fractions as units of angle; but it also sets the scene for the perspective that leads SI to measure angles in radians. The attentive will notice that, when a is small (by comparison to a quarter turn or other angle of roughly its size), Sin(a) is approximately proportional to a; the very careful will see that, in fact, Sin(a) is approximately 2.π.a/turn = a/radian for such small a. However, rather than approaching this approximately, I'll derive the constant of proportionality as a side-effect of …

Differentiating Sin and Cos

Draw a circle of radius R: use its centre as the origin for cartesian co-ordinates, [x, y]. Each position on the circle may be characterized by the radial line from the origin to it; which, in turn, may be uniquely identified by the angle between it and a chosen co-ordinate direction. Indeed, if this angle is a, measured from the positive x-axis in the sense which has the positive y-axis's angle equal the positive quarter turn (i.e. left), the position on the circle may readily be shown to be [x, y] = R.[Cos(a), Sin(a)]. Draw a tangent to the circle at our position at angle a. This tangent is at right angles to the radius, so is parallel to the unit vector [−Sin(a), Cos(a)].

Now, for a given angular velocity w (whose units are angle/time), we can consider an object moving around the circle so that its angle at time t is just w.t: its position is then [x, y] = R.[Cos(w.t), Sin(w.t)] and its velocity is V.[−Sin(w.t), Cos(w.t)] for some speed V (whose units are length/time). It is easy enough to verify that the absolute value of V is the speed of our moving object, which is just the circumference divided by the period. The circumference is 2.π.R, the period is turn/w, so we obtain abs(V) = 2.π.R.w/turn = R.w/radian. As to the sign of V, observe that (when w is positive) y is increasing exactly when x is positive while x is increasing exactly when y is negative: whence we find V = R.w/radian.

Thus (cancelling out the common factor of R), [Cos(w.t), Sin(w.t)] ←t has derivative [−Sin(w.t), Cos(w.t)].w/radian ←t, from which we may infer that the derivative of Sin is Sin' = Cos/radian and that of Cos is Cos' = −Sin/radian. These may be re-written as:

Sin' = (: Sin(a +left)/radian ←a :)
Cos' = (: Cos(a +left)/radian ←a :)

Thus differentiation of the sinusoids is: add a quarter turn to the input, then divide the output by radian. SI uses the radian as its unit of angle because one can read this as saying that differentiation wants to measure angles in radians; yet even differentiation also uses the quarter turn.
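One can watch the /radian appear in a finite-difference check; here Sin and Cos take their inputs in turns, and the constant 1/(2.π) – the radian, measured in turns – shows up as the divisor (a numerical illustration, not a proof):

```python
import math

def Sin(a):
    """Sine of an angle a measured in turns."""
    return math.sin(2 * math.pi * a)

def Cos(a):
    """Cosine of an angle a measured in turns."""
    return math.cos(2 * math.pi * a)

radian = 1 / (2 * math.pi)     # the radian, measured in turns
a, h = 0.1, 1e-6               # a tenth of a turn; a small step
slope = (Sin(a + h) - Sin(a - h)) / (2 * h)
print(abs(slope - Cos(a) / radian) < 1e-4)   # True: Sin' = Cos/radian
```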

Rotating vectors

For the object going round a circular path, above, let its position be p; so p = R.[Cos(w.t), Sin(w.t)] for some angular velocity, w, and the object's velocity is R.[−Sin(w.t), Cos(w.t)].w/radian. Its acceleration is then −R.[Cos(w.t), Sin(w.t)].w.w/radian/radian, so ddp/dt/dt = −p.(w/radian).(w/radian). Thus, again, the radian shows up as the convenient unit of angle for use when looking at the (very common) case of the simple harmonic oscillator, a system whose dynamics are described by the second derivative of a quantity being equal to a negative multiple of the quantity itself. This special case likely contributed heavily to SI's choice in the matter.
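A quick numerical check of this relation, with the angular velocity w given in radians per unit time so that w/radian is the bare number w:

```python
import math

R, w = 2.0, 3.0            # radius; angular velocity, in radians per unit time

def p(t):
    """Position on the circle at time t."""
    return (R * math.cos(w * t), R * math.sin(w * t))

t, h = 0.4, 1e-4           # sample time; finite-difference step
# second central difference approximates the acceleration ddp/dt/dt
acc = tuple((p(t + h)[i] - 2 * p(t)[i] + p(t - h)[i]) / (h * h) for i in range(2))
expect = tuple(-w * w * p(t)[i] for i in range(2))
print(all(abs(acc[i] - expect[i]) < 1e-3 for i in range(2)))  # True
```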

Of course, in these terms, radian wants to be an imaginary unit (as time is, in relation to spatial distance; the speed of light is more compellingly a square root of −1 than the radian is a dimensionless unit); then the negation involved in the above gets swallowed by radian. This also fits nicely with the fact that 2.π is the period along the imaginary axis of the exponential function on the complex plane.

This leads to the functions (for which, to match orthodoxy, I use lower-case names, in contrast to Capitalised Sin and Cos) sin = (: Sin(t.radian) ←t :) and cos = (: Cos(t.radian) ←t :), which map dimensionless inputs to dimensionless outputs. The factor of radian in their input ensures that sin' = cos and cos' = −sin. With these functions, we get exp(i.t) = cos(t) +i.sin(t) and can express cos and sin in terms of exp; this also leads to sin and cos having nice elegant Taylor series expansions. In this form, with radian as implicit unit of angle, cos and sin are thus nicely related to exp, easy to compute and have each other (give or take a sign but no scaling) as derivative. This is the other big argument in favour of the radian as unit of angle.
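Both facts are easy to verify numerically: Euler's formula relating exp to cos and sin, and the rapid convergence of sin's Taylor series:

```python
import cmath
import math

t = 0.8
# Euler: exp(i.t) = cos(t) + i.sin(t)
assert abs(cmath.exp(1j * t) - complex(math.cos(t), math.sin(t))) < 1e-12

# sin(t) = t - t^3/3! + t^5/5! - ...: six terms already suffice at this t
partial = sum((-1) ** k * t ** (2 * k + 1) / math.factorial(2 * k + 1)
              for k in range(6))
print(abs(partial - math.sin(t)) < 1e-9)  # True
```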

The flip side of these nice properties of sin and cos is that we often need to represent rational multiples of the turn as angles; when we do, we have to represent them as rational multiples of the irrational number π as input to sin and cos, or as offset to an existing input. (For contrast, few situations naturally call for rational multiples of the radian, although our use of radian as unit may sometimes lead to considering them.) In particular, if you want to re-write their derivatives in the offset form I gave above, you get sin'(t) = sin(t+π/2) and cos'(t) = cos(t+π/2). Having π sneak in to the input all over the place is less unwelcome than having factors of π show up (as they would using turn as unit) in the derivatives and in the Taylor series, but it's still far from welcome.

Fourier's complication

Now it's possible to take (pretty much) any function (U: f |V), with U and V both real vector spaces, and apply a transformation, F, to it, defined (using i as a square root of −1) by:

for any choice we like of scalings P, Q, R, S and T. (The Fourier transform uses R = 2.π, Q = −1, P = S = T = 1; its inverse uses P = Q = R = S = T = 1.) It is easy enough to show that replacing S with 1 and Q with Q/S doesn't change F; nor does replacing R with 1, P with P/R and Q with Q/R; nor does replacing R with 1 and T with T/power(dim, R) where dim is the dimension of V. If we apply this transformation twice we get

for some scale computable from P, Q, R, S and T. Aside from the scaling, F∘F is just (: f∘negate ←f :), with negate = (: −x ←x :). Clearly repeating this yields the identity as F∘F∘F∘F. Thus, give or take a scaling, F is a fourth root of the identity; if we choose P, Q, R, S and T suitably, so as to get scale = ±1, it'll deserve to be regarded as having unit size. The choice P = Q = S = 1 with R = √(2.π) yields scale = 1; but so does the choice Q = 2.π, P = R = S = T = 1. While the former is most useful in justifying the Fourier transform, it is hard to motivate other than as the way to achieve unit size; the latter, on the other hand, is easily enough motivated.
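The discrete analogue makes the fourth-root property concrete: with the unitary normalisation (1/√N, the discrete counterpart of the R = √(2.π) choice), applying the transform twice reverses the input and applying it four times restores it. A pure-Python sketch (F here is a naive discrete Fourier transform, my own illustration):

```python
import cmath
import math

def F(f):
    """Naive unitary discrete Fourier transform of a finite sequence."""
    N = len(f)
    return [sum(f[x] * cmath.exp(-2j * math.pi * w * x / N) for x in range(N))
            / math.sqrt(N) for w in range(N)]

f = [1.0, 2.0, 0.5, -1.0, 3.0]
ff = F(F(f))            # transform applied twice: reverses the input
ffff = F(F(ff))         # four times: the identity
print(all(abs(ff[k] - f[-k % len(f)]) < 1e-9 for k in range(len(f))))  # True
print(all(abs(ffff[k] - f[k]) < 1e-9 for k in range(len(f))))          # True
```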

Choosing Q = 2.π effectively chooses exp(2.π.i.w·x) as the canonical sinusoid being used by Fourier's analysis. The inner product w·x of the inputs (to the original function, f, and its transform, F(f)) is the natural scalar to obtain from a member of V and a member of dual(V); while the function exp(2.π.i.t) ←t is a sinusoid with period 1. Thus, in effect, the Fourier transform begs us to use period = 1 sinusoids rather than period = 2.π.i ones; in effect, it's asking us to use the turn as our unit of angle, in preference to the radian.

This preference becomes particularly stark in the case of the real transform, where no complex numbers are involved at all. For this we define

and obtain f = C(C(f)) +S(S(f)) cleanly without any factors of 2.π showing up at all; whereas the radian-based equivalents (using cos(k·x), i.e. Cos(k·x.radian), rather than Cos(k·x.turn); and likewise for Sin) are doomed to a scattering of factors of 2.π.

None the less, even Fourier isn't entirely unambiguous on this: when it comes to considering the transform applied to derivatives of a function, the radian-based form of the transform gives a power of the input times the transform of the original function; the turn-based version, however, complicates this answer with some factors of 2.π; likewise, when transforming a function scaled by a power of its input, the radian-based transform gives a derivative of the transform of the original function, while the turn-based one again drags in some factors of 2.π. As ever, when derivatives are involved, the radian wins; but the turn wins in most other cases.

The two-dimensional solid angle

While we're at it, note that SI also defines the steradian, a unit of solid angle. Just as the whole of a circle subtends an angle of one turn about its centre, so equally a sphere's surface subtends a solid angle of one whole shell (for want of a better word) about its centre. The idea behind solid angle is that, for example, if a light source is radiating energy out in all directions equally, any object receives a share of that energy in proportion to the fraction of a whole shell that's covered by the radial projection of the given object onto the chosen shell. Thus, if a light-bulb hangs in the mouth of a cave, half the light from the bulb goes into the cave and half of it goes to the outside world; when projected onto a sphere about the bulb, the mouth of the cave appears as a great circle of the sphere, dividing it into equal parts, one of which faces into the cave, the other outwards.

Since we live in a Minkowskian universe, it might also make sense to consider what analogue of fraction of a whole spherical shell can be made intelligible for the metric of space-time, whose spheres are hyperboloids (of one sheet if of space-like radius, of two sheets if of time-like radius) and hence, in particular, infinite.

Where two lines meet in an angle, if we draw a circle about the point where they meet, small enough that both lines cut it, we can measure the length of the arc of the circle between where the two lines cut the circle. (Measure the arc's length along the circumference, not the length of the chord that takes a short-cut via the interior of the circle.) Dividing the length of the arc by the radius of the circle gives you the size of your angle, measured in radians. Equally, if you start with an arc of a circle and connect its end-points to the centre by straight lines, the angle these make at the centre is, in radians, the result of dividing the length of the arc by the radius of the circle. Analogously, the steradian is defined by: if we connect the boundary of a region on the surface of a sphere to the centre, by straight lines forming a (not necessarily circular) conical surface, the solid angle at the apex of the cone, measured in steradians, is just the area of the region divided by the square of the radius.
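In Python terms, both definitions are one-liners; the angle is arc/radius, the solid angle is area/radius²:

```python
import math

R = 5.0                                # any radius will do
arc = 2 * math.pi * R / 4              # a quarter of the circumference
print(arc / R)                         # pi/2: the quarter turn, in radians

hemisphere = 4 * math.pi * R * R / 2   # half the sphere's surface area
print(hemisphere / (R * R))            # 2.pi: half a shell, in steradians
```

Both answers are independent of R, which is the point of the definitions.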

If we look at a distant object, the lines from our eyes to the apparent boundary of the object form just such a (not necessarily circular) cone; we can intersect this with any sphere between us and the object, centred on our eyes, and measure the solid angle by looking at the portion of the sphere contained within the cone. That portion is what's covered by projecting the object radially, towards the sphere's centre, onto the sphere. The object is said to subtend the resulting solid angle at our eyes. Likewise, in two dimensions, if we project a figure radially onto a circle, to cover a portion of its circumference, we obtain the angle that the figure subtends at the centre of the circle.

Astronomers use a measure of solid angle called the square degree; this is the solid angle subtended by a square on the sphere whose sides subtend one degree. For any tiny enough angle k, a similar square with sides subtending k will have area equal to square(radius.k/radian); dividing this by the square of radius we get the number of steradians it corresponds to, which is just the square of k/radian. Thus the steradian is, in a meaningful sense, just the square of the radian. One full shell, 4.π sr, is thus 2.turn.radian or turn.turn/π; in square degrees, that comes to 129600/π or just under 41253.
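The closing arithmetic is easy to replicate:

```python
import math

sq_deg = (math.pi / 180) ** 2   # one square degree, in steradians
shell = 4 * math.pi             # the whole shell, in steradians
print(shell / sq_deg)           # 129600/pi, just under 41253
print(129600 / math.pi)
```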

Strictly speaking, the appropriate notion of a square on the surface of a sphere requires us to use, as its straight edges, arcs of great circles (these are the circles in which the sphere meets planes through its centre; the sense in which they are straight is that the shortest path, in the surface of the sphere, between two points on it, is always an arc of a great circle); and the figure is square if the four angles in which the edges meet are equal – in which case they are (at least a little) bigger than quarter turns, thanks to the sphere's curvature. For small enough angles, this makes no practical difference, so we can properly define the square of an angle a by: find some number k large enough that for squares whose sides subtend a/k at a sphere's centre, there is negligible difference in area between a Euclidean square in a plane tangent to the sphere at the plane's centre and a square in the surface of the sphere; now measure the solid angle such a square subtends at the centre and multiply it by k.k; this is the square of a. The important thing is that all sufficiently large k give the same answer (give or take negligible differences); or, to put it formally, the solid angle this gives you as a×a tends to a definite limit as 1/k tends to zero. If the sphere's radius is R, the tiny squares have side R.a/k/radian and area R.R.(a/radian).(a/radian)/k/k, hence subtend solid angle steradian.(a/radian).(a/radian)/k/k, whence the square of a is a×a = steradian.(a/radian).(a/radian). In particular, substituting a = radian, we get radian×radian = steradian. Technically, this × isn't scalar multiplication, but we can now see that it behaves about as much like scalar multiplication as the multiplication of lengths of sides of a square, so it makes sense to denote it the same way and just write a.a for the square of the angle a. Which is just a long-winded way of formalizing what I said previously.

Angle as scaled fraction of the whole

Now, the total area of a sphere's surface is 4.π times the square of the sphere's radius; this is the product of the sphere's circumference and diameter. Crucially, it grows in proportion to the square of the sphere's radius.

In the two-dimensional world, the one-dimensional circle's circumference grows proportional to the radius of the circle; the fraction of the circle covered by the projection, radially onto it, of any given object is just the angle (measured in turns) subtended by that object as seen from the circle's centre. The length of the relevant piece of the circumference is then just the angle multiplied by the radius times 2.π/turn. The radian = turn/(2.π) equips us to restate that as: the length of the piece of circumference is just the angle, measured in radians, times the radius.

Likewise, in three dimensions, a two-dimensional portion of a sphere's surface, covered by the projection, radially onto it, of some given object, has area equal to the solid angle it subtends, times the square of the sphere's radius, times 4.π/shell. Just as measuring an angle in radians gave us what to multiply the radius by to get the length of a piece of circumference, we can introduce a solid angle steradian = shell/(4.π) and, by measuring solid angles in this unit, get the scalar by which to multiply the square of radius to obtain the area of a portion of our sphere that subtends the given solid angle.

From the above, we have solid angle as square of plane angle, with shell = 2.turn.radian. SI, in choosing to define the steradian, declares solid angle to be an independent kind of dimension, separate from the angle. This would oblige it to introduce an independent unit of n-dimensional angle for each positive integer n, whereas I am inclined to build such units up from the radian, with the whole n-sphere in n+1 dimensions (analogous to turn for n = 1 and shell for n = 2) then being the surface area (boundary measure) of the sphere times the n-th power of radian/radius.

Varying dimension

An immediate thought from that is to follow down in dimension; at dimension 1, the 1-sphere's perimeter comprises two points, independent of radius, and a zero-angle counts how many of those two points it embraces; it's clearly just a number. At dimension zero, the 0-sphere is a point with no boundary (or, if you will, with a boundary made of 0 pieces, each of which is the −1 simplex and varies in size proportional to 1/radius), so begs no notion of angle. Since I have no geometric intuition about – and, in particular, no notion of angle for – negative-dimensional analogues, it makes no sense to chase below dimension zero. It is, however, noteworthy that this strange domain gives zero area and volume to negative even dimension and steadily growing (albeit alternating in sign) volume and area for negative odd dimension: even dimension is special.

Going up in dimension, we'd get the 4-sphere, of total area 2.π.π times the cube of radius, requiring a unit of 4-angle equal to 1/(2.π.π) of a 4-shell; the 5-sphere with area 8.π.π/3 times the fourth power of radius, requiring a unit of 5-angle equal to 3/(8.π.π) of a 5-shell; and so on. Taking A as mapping from dimensions to the surface measure of the unit sphere at each, we obtain A(n+2) = 2.π.A(n)/n for each positive dimension, n. Note that A(dim) < 1 for dim > 18, so the required units of angle for dimension greater than 18 would exceed the whole shell at such dimensions. The two-step nature of the iteration might point to the right unit of angle at each dimension being a suitable power of solid angle times, for even dimensions, plain old angle; if solid angle was honestly dimensionless (albeit this makes little sense), we'd thereby escape from needing endlessly many new forms of angle unit. In any case, if a(n) is the n-dimensional angle of a full shell, we get a(n+2) = turn.radian.a(n)/n, with a(1) = 2, a(2) = turn (so a(0) = 0 / radian, a(−1) = −2/turn/radian and stranger things below). Going up, from 1, while avoiding factors of π, we get 2, turn, 2.turn.radian, turn.turn.radian/2, 2.turn.turn.radian.radian/3, turn.turn.turn.radian.radian/8 and so on; aside from integer factors, we alternate units of turn and radian as we go up in dimension. This encourages me to embrace both units and accept their differences.
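The recurrence and the claim about dimension 18 can be checked directly; a Python sketch, seeding A(1) = 2 and A(2) = 2.π:

```python
import math

A = {1: 2.0, 2: 2 * math.pi}           # surface measure of the unit sphere
for n in range(1, 30):
    A[n + 2] = 2 * math.pi * A[n] / n  # A(n+2) = 2.pi.A(n)/n

print(A[3])                            # 4.pi, the whole shell
print(max(n for n in A if A[n] >= 1))  # 18: beyond this, A(dim) < 1
```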

A minor curiosity

The ancient HAKMEM document contains the following fascinating information:

PROBLEM 45 (Gosper):
  • Take a unit step at some heading (angle).
  • Double the angle, step again.
  • Redouble, step, etc.
  • For what initial heading angles is your locus bounded?
PARTIAL ANSWER (Schroeppel, Gosper):

When the initial angle is a rational multiple of [a half turn], it seems that your locus is bounded (in fact, eventually periodic) iff the denominator contains as a factor the square of an odd prime other than 1093 and 3511, which must occur at least cubed. (This is related to the fact that 1093 and 3511 are the only known primes satisfying

  • 2^P = 2 mod P.P)

But a denominator of 171 = 9 * 19 never loops, probably because 9 divides phi(19). Similarly for 9009 and 2525. Can someone construct an irrational multiple of [a half turn] with a bounded locus? Do such angles form a set of measure zero in the reals, even though the measure in the rationals is about .155? About .155 = the fraction of rationals with denominators containing odd primes squared = 1 − product(: 1 − 1/P/(P +1) ←P :{odd primes}). This product = .84533064 ± a smidgen, and is not, alas, sqrt(pi/2) ARCERF(1/4) = .84534756. This errs by 16 times the correction factor one expects for 1093 and 3511, and is not even salvaged by the hypothesis that all primes > a million satisfy the congruence. It might, however, be salvaged by quantities like 171.

If, in fact, all solution angles are rational multiples of a half turn, this could be construed as geometry favouring the turn, again …
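The walk itself is easy to simulate; the sketch below (my own illustration of the problem, not Schroeppel and Gosper's analysis) keeps the heading as an exact integer count of (1/q)-ths of a half turn, so that the repeated doubling never suffers rounding. With denominator 9, the square of an odd prime, the heading sequence is eventually periodic with zero net drift per cycle, so the locus stays bounded:

```python
import math

def walk(p, q, steps):
    """Unit steps whose heading starts at p/q half turns and doubles each step.

    The heading is tracked as an integer numerator modulo 2.q, so the
    doubling stays exact (doubling a float would eventually lose the angle)."""
    x = y = 0.0
    num = p % (2 * q)                    # heading = num/q half turns
    pts = []
    for _ in range(steps):
        x += math.cos(math.pi * num / q)
        y += math.sin(math.pi * num / q)
        pts.append((x, y))
        num = (2 * num) % (2 * q)        # double, modulo a whole turn
    return pts

bounded = walk(1, 9, 2000)               # denominator 9 = 3.3
print(max(math.hypot(x, y) for x, y in bounded))  # stays small: bounded
```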

Dimensionless ?

Although SI does specify the radian as unit of angle, it has also opted to treat angles (and solid angles) as dimensionless, hence to treat the radian (and thus the steradian) as a synonym for the multiplicative identity, the number 1. From the references cited in that decision, I infer that this document would tell me more about the reasons behind that decision, were I willing to pay twenty dollars (before knowing whether it actually does answer my questions) to read it. For now I shall make do with the abbreviated reasoning given in the decision:

The latter is somewhat undermined by the decision itself mentioning counter-examples; I would also argue that the stray factors of 2.π needed when converting between frequencies and angular rates are a symptom of the need to keep track of units of angle (this, indeed, is the source of the same factor between Dirac's and Planck's constants). In any case, the lack of a formalism that fully explores the ramifications of systematically taking units of angles seriously is not a reason to dismiss the possibility of trying to build such a formalism. Had such a formalism been attempted and found problematic, that would be a good argument for treating angles as scalars; but the fact that everyone so far has cut corners is not always a good reason to cut the same corners.

As it happens, this very subject is presently (in 2015) under review; the BIPM's consultative committee for units (CCU) has a working group on angles and dimensionless quantities who shall meet in late February 2015, with an eye to making proposals for a 2016 SI brochure revision.

As to the former reason, I find it unpersuasive. The length of an arc of a circle is indeed proportional to the angle the arc subtends; and the constant of proportionality is just the circle's radius per radian. This is, furthermore, a universal truth (at least for small enough circles, relative to the ambient curvature of the space in which the circles are studied), that is reflected in the differentiation of the trigonometric functions. All the same, when I compare this with the proportionality of mass of a piece of uniform wire and the length thereof, I find enough similarity – even though the constant of proportionality for angles is universal where that for the wire's mass is a mere physical property of the particular spool of wire – to be suspicious of conflating the angle with the ratio of lengths.

As explained above, there are valid grounds for debate as to which unit of angle to take as natural; and the cleanest resolution of this is to admit that both the turn and the radian are relevant and useful units; for which it is necessary to accept angles as distinct from pure scalars. The existence of a convenient isomorphism between a one-dimensional space and scalars can serve as a rationale for identifying the two, but when there's more than one such isomorphism – each convenient in its own ways, in which the other is inconvenient – it is better to acknowledge that the one-dimensional space is merely one-dimensional, rather than trying to conflate it with {scalars}. Each such isomorphism corresponds to a choice of unit in the one-dimensional space: so two units each with virtues, but each deficient where the other has virtues, make a case for distinguishing what those units measure from scalars. Angles form a one-dimensional space over scalars (that in many contexts warrants interpretation via equivalence modulo differences by whole numbers of turns), in which two values stand out as important enough to beg to be used as unit; but, as they are distinct, each undermines the case for using the other as unit.

Furthermore, when we multiply angles we do not get an angle – as argued above, we get a solid angle – whereas scaling an angle by a true number does indeed give an angle. There is no natural interpretation, in plane geometry, of a product of angles as an angle; or, indeed, of a mechanism for multiplying angles other than, in a higher dimension, to get a higher-dimensional angle.

Written by Eddy.