Trigonometric functions of multiples of angles

Basic trigonometry provides the means to compute the values of Sin and Cos of the sum of (or difference between) two angles from their values at the two angles. We can use this to compute the Sin and Cos of any multiple of an angle from the Sin and Cos of that angle. This inevitably gives us polynomials in two free variables which, when fed the base angle's Sin and Cos as values for these variables, yield the Sin and Cos of each multiple. Since the squares of Sin and Cos, at a given angle, sum to 1, we can replace all even powers of either Sin or Cos with polynomials in the square of the other; so our polynomials in two free variables can limit one of the variables to power(0) and power(1), while using arbitrary powers of the other. (In particular, any polynomial in the square of either Sin or Cos of a given angle can be re-written as a polynomial in the square of the other.) Let's start by looking at the first few, to get an idea of what to try out:

Sin(2.t) = Sin(t +t) = Sin(t).Cos(t) +Cos(t).Sin(t)
= 2.Sin(t).Cos(t)
Cos(2.t) = Cos(t).Cos(t) −Sin(t).Sin(t)
= 2.Cos(t).Cos(t) −1 = 1 −2.Sin(t).Sin(t)
Sin(3.t) = Sin(t).Cos(2.t) +Cos(t).Sin(2.t)
= Sin(t).(2.Cos(t).Cos(t) −1) +2.Sin(t).Cos(t).Cos(t)
= Sin(t).(4.Cos(t).Cos(t) −1) = Sin(t).(3 −4.Sin(t).Sin(t))
Cos(3.t) = Cos(t).Cos(2.t) −Sin(t).Sin(2.t)
= Cos(t).(2.Cos(t).Cos(t) −1) −2.Sin(t).Sin(t).Cos(t)
= Cos(t).(2.Cos(t).Cos(t) −1 −2.(1 −Cos(t).Cos(t)))
= Cos(t).(4.Cos(t).Cos(t) −3) = Cos(t).(1 −4.Sin(t).Sin(t))
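
As a quick numerical sanity check of these identities (a throwaway Python sketch, nothing the derivations depend on), one can compare both sides at an arbitrary angle:

    from math import sin, cos

    t = 0.731   # any angle will do
    s, c = sin(t), cos(t)
    assert abs(sin(2*t) - 2*s*c) < 1e-12
    assert abs(cos(2*t) - (2*c*c - 1)) < 1e-12
    assert abs(sin(3*t) - s*(4*c*c - 1)) < 1e-12
    assert abs(sin(3*t) - s*(3 - 4*s*s)) < 1e-12
    assert abs(cos(3*t) - c*(4*c*c - 3)) < 1e-12
    assert abs(cos(3*t) - c*(1 - 4*s*s)) < 1e-12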

Unsurprisingly, each multiple's Sin is a multiple of the base angle's Sin; aside from this factor of Sin, everything above can be expressed as a polynomial in Cos of the base angle; furthermore, each of these polynomials is either a polynomial in the square of the base angle's Cos or simply that Cos times such a polynomial; and (for all that we have at most two terms, so this isn't yet saying much) the signs of terms alternate positive and negative, with the leading order term positive.

Preliminary Analysis

We have two sequences of these polynomials, one for Cos of a multiple, the other for Sin of the multiple, with a factor of Sin to put aside. We can define two functions, ({polynomials}: C |{naturals}) and ({polynomials}: S |{naturals}), by, for each natural n and angle t:

Cos(n.t) = C(n, Cos(t))
Sin((1+n).t) = Sin(t).S(n, Cos(t))

(Using 1+n in the definition of S ensures, as we'll see shortly, that the degree of each S(i) is i.) For the case n = 0, we get Cos(0.t) = 1 = C(0, Cos(t)) while Sin(t) = Sin(t).S(0, Cos(t)) makes S(0, Cos(t)) = 1 as well; and 1 is a polynomial of degree 0 in any variate you care to pick. (We can also entertain an S(−1) = 0 obtained from 0 = Sin(0.t) = Sin(t).0; since it has no non-zero coefficient for any power, its degree is less than any natural, hence negative; it is indeed conventional to consider the zero polynomial to have degree −1, as this fits with differentiating a non-zero polynomial always decreasing its degree by one.) From the other multiples of angles explored above, we get C(1, x) = x, S(1, x) = 2.x, C(2, x) = 2.x.x −1, S(2, x) = 4.x.x −1 and C(3, x) = x.(4.x.x −3).

Each S(i) and C(i) has degree i (that is, the highest power with non-zero coefficient is i) and its coefficient of power(i) is positive. The other powers present (for all that, thus far, there's at most one of these) are those less than i by even numbers: every power(i−2.j) with j natural and 2.j ≤ i does appear, with negative coefficient for odd j, positive for even j. In S(i), the coefficient of power(i−2.j) has (at least) i−2.j factors of 2; in C(i) it may have one fewer than this. Furthermore, the coefficient of power(i) in S(i) or 2.C(i) is exactly the power(i, 2) this last requires, with no further factors, except for 2.C(0), which is 2 (so twice the power(0, 2) just specified).

The (|C:) are known as the Chebyshev polynomials, with T=C and U=S, so nothing here is novel; see also de Moivre's formula.

Adding multipliers

We can apply the addition formulae for angles to our sequences of polynomials:

C(n+m, Cos(t)) = Cos((n+m).t)
= Cos(n.t +m.t) = Cos(n.t).Cos(m.t) −Sin(n.t).Sin(m.t)
= C(n, Cos(t)).C(m, Cos(t)) −Sin(t).S(n−1, Cos(t)).Sin(t).S(m−1, Cos(t))
= C(n, Cos(t)).C(m, Cos(t)) +(Cos(t).Cos(t) −1).S(n−1, Cos(t)).S(m−1, Cos(t))
S(n+m−1, Cos(t)).Sin(t) = Sin((n +m).t)
= Sin(n.t +m.t) = Sin(n.t).Cos(m.t) +Cos(n.t).Sin(m.t)
= Sin(t).S(n−1, Cos(t)).C(m, Cos(t)) +C(n, Cos(t)).Sin(t).S(m−1, Cos(t))

whence

C(n+m) = C(n).C(m) +(power(2) −power(0)).S(n−1).S(m−1)
S(n+m−1) = S(n−1).C(m) +C(n).S(m−1)

In particular, with m = 1, we get C(n+1) = C(n).power(1) +(power(2) −power(0)).S(n−1), i.e. C(n+1, x) = C(n, x).x +(x.x −1).S(n−1, x). The S formula with m = 1 gives S(n) = S(n−1).power(1) +C(n), i.e. S(n, x) = S(n−1,x).x +C(n, x); with m = 2, we get

S(n+1) = S(n−1).C(2) +C(n).S(1)
= S(n−1).(2.power(2) −power(0)) +2.C(n).power(1)
= 2.power(1).(S(n−1).power(1) +C(n)) −S(n−1)

in which S(n−1).power(1) +C(n) is just the expression m=1 gave us for S(n), above

= 2.power(1).S(n) −S(n−1)

i.e. S(n+1, x) +S(n−1, x) = 2.x.S(n, x). In particular, we gain a factor of two in the leading-order term at each step up S, suggesting that we might want to work with 2.Cos(t) in place of Cos(t) as parameter. We can now, furthermore, generate the S series without further knowledge of C. Starting with n = 0, successive S(n, x/2) are: 1, x, x.x −1, x.(x.x −2), x.x.(x.x −3) +1, x.(x.x.(x.x −4) +3), x.x.(x.x.(x.x −5) +6) −1, x.(x.x.(x.x.(x.x −6) +10) −4), and so on. Supplying these to the formula for C(n+1) given C(n) we can then obtain all of C, as well. The two iteration steps are then:

S(n+1, x) = 2.x.S(n, x) −S(n−1, x)
C(n+1, x) = x.C(n, x) +(x.x −1).S(n−1, x)

These two equations, together with S(−1) = 0 and S(0) = 1 = C(0), suffice to determine all of S and C. Summarising the cases given above and working out a few more, we get:

n  | C(n, x)                              | S(n, x)
−1 |                                      | 0
0  | 1                                    | 1
1  | x                                    | 2.x
2  | 2.x.x −1                             | 4.x.x −1
3  | x.(4.x.x −3)                         | 2.x.(4.x.x −2)
4  | 2.x.x.(4.x.x −4) +1                  | 4.x.x.(4.x.x −3) +1
5  | x.(4.x.x.(4.x.x −5) +5)              | 2.x.(4.x.x.(4.x.x −4) +3)
6  | 2.x.x.(4.x.x.(4.x.x −6) +9) −1       | 4.x.x.(4.x.x.(4.x.x −5) +6) −1
7  | x.(4.x.x.(4.x.x.(4.x.x −7) +14) −7)  | 2.x.(4.x.x.(4.x.x.(4.x.x −6) +10) −4)
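
The two iteration steps, together with S(−1) = 0 and S(0) = 1 = C(0), are easy to turn into a short computation; here is a minimal Python sketch (the helper names are mine, not those of the study.maths.multiangle module mentioned towards the end), representing each polynomial as its list of coefficients in ascending powers of x; printing the results reproduces the table above, in expanded rather than factored form:

    def poly_add(p, q):
        "Coefficient-wise sum of two coefficient lists."
        n = max(len(p), len(q))
        return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
                for i in range(n)]

    def poly_mul(p, q):
        "Product of two coefficient lists."
        out = [0] * (len(p) + len(q) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(q):
                out[i + j] += a * b
        return out

    def CS(count):
        "Yield (C(n), S(n)) as coefficient lists, for n in range(count)."
        Sprev, S, C = [0], [1], [1]                     # S(-1), S(0), C(0)
        for _ in range(count):
            yield C, S
            C = poly_add(poly_mul([0, 1], C),           # C(n+1) = x.C(n)
                         poly_mul([-1, 0, 1], Sprev))   #   +(x.x -1).S(n-1)
            Sprev, S = S, poly_add(poly_mul([0, 2], S), # S(n+1) = 2.x.S(n)
                                   [-a for a in Sprev]) #   -S(n-1)

    for n, (Cn, Sn) in enumerate(CS(8)):
        print(n, Cn, Sn)
    # e.g. n = 3 prints [0, -3, 0, 4] and [0, -4, 0, 8], i.e.
    # C(3, x) = 4.x.x.x -3.x and S(3, x) = 8.x.x.x -4.x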

Signs and sizes

Returning to the properties enumerated just before the last subsection (which the attentive reader shall see still hold true for the cases just illustrated), define Pattern(n), for natural n, to be the assertion that S(n) and C(n): have degree n; contain only terms in power(n−2.i) for natural i with 2.i ≤ n; have positive coefficient for each such term with even i, negative coefficient for odd i (so, in particular, every such power has non-zero coefficient); that the coefficients of power(n) in S(n) and, if n > 0, in 2.C(n) are both power(n, 2); and that the coefficient of each power(j) in S(n) has power(j, 2) as a factor. (Note that I leave out the last's matching statement for 2.C(n): this is because it's not so easy to prove!) I shall now show that Pattern(n) holds for each natural n.

Suppose we have some natural m for which Pattern(n) holds for all n in m. If m is 0, we know Pattern(m) by the starting values S(0) = 1 = C(0). If m is 1, the first iteration gives us S(1, x) = 2.x, C(1, x) = x, which do indeed satisfy Pattern(1). Otherwise, m is n+1 for some n in m and n−1 is also in m, so we can apply the iteration formulae to obtain S(m) and C(m) from S(n), C(n) and S(n−1), which have the properties that Pattern(n) and Pattern(n−1) assert.

For S(m, x), 2.x.S(n, x) gives us power(m, 2.x) as leading term, unaffected by −S(n−1) as it's of lower degree. Each term in S(n, x) has as many factors of 2 as of x, hence so does each term in 2.x.S(n, x) since it just adds one of each; and each term of −S(n−1) has as many factors of 2 as of x, hence so does S(m). Each power in 2.x.S(n, x) is power(1+n−2.j) for some natural j with 2.j ≤ n and, for each such j, we do get a term, with positive coefficient if j is even, negative if j is odd; as 1+n is m, each of these is power(m−2.j) with the right sign for Pattern(m) and, with the exception of 2.j = m when m is even, we do get each such term. Each power in S(n−1) is less than n−1, hence also m = n+1, by an even offset; and its sign is exactly the opposite of what Pattern(m) requires for terms in S(m), so negating it makes them just right. When m is even, we do get such a term for power(0), since n−1 is even, hence equal to some 2.j with j natural that yields a term here. So 2.x.S(n, x) contributes terms of the right sign to each power required for S(m) except possibly power(0); and −S(n−1) contributes terms of the right sign to each power required for S(m) except power(m) itself; where both contribute, they do so with the same sign so the combination still has this sign. Thus S(m) satisfies all that's required of it for Pattern(m).

For C(m, x) = x.C(n, x) +(x.x −1).S(n−1, x) we get power(n−1, 2).power(n+1, x) from x.C(n, x) and the same again from x.x.S(n−1, x), for a total of power(n, 2).power(n+1, x), which is exactly the leading term required for power(m) in C(m); its coefficient has n = m−1 powers of 2. Every term in x.C(n, x) has the right sign for its power in C(m) exactly as for 2.x.S(n, x) above and likewise for the terms from −S(n−1). The terms from x.x.S(n−1, x) are all power(2+n−1−2.j), for some natural j with 2.j ≤ n−1, with positive coefficient for even j and negative for odd j; the power here is just n+1−2.j, i.e. m−2.j, and the coefficient has, again, the right sign for C(m). Thus each contribution to each power has the right sign, hence so does the combination of all such contributions. From x.C(n, x) we get every power required for C(m) except possibly power(0); from −S(n−1) we get every required power except power(m) and, in particular, do get power(0) if m is even, as before for S(m). Thus C(m) also satisfies what Pattern(m) asks of it.

Thus Pattern(n) for all n in m, for natural m, implies Pattern(m) and, by (strong) induction, Pattern(n) holds for every natural n.

Thus each S(i, x) is a polynomial in 2.x; for odd i, it's 2.x.P(−4.x.x), otherwise, for even i, it's simply P(−4.x.x), for (in each case) some polynomial P; when P is of even order, its leading coefficient is 1 and it has positive coefficients of all lower powers; otherwise, P is of odd order and −P has leading coefficient 1, with positive coefficients at all lower orders. Although I've not proved the equivalent for 2.C(m), it does hold true for each example we've seen so far. This lets us write our polynomials in terms of some simpler ones (those in u, in this table), albeit the presentation gets a bit messy:

n  | 2.C(n, x)                                   | S(n, x)
−1 |                                             | 0
0  | 2                                           | 1
1  | 2.x                                         | 2.x
2  | −(: u +2 ←u :)(−4.x.x)                      | −(: u +1 ←u :)(−4.x.x)
3  | −2.x.(: u +3 ←u :)(−4.x.x)                  | −2.x.(: u +2 ←u :)(−4.x.x)
4  | (: u.u +4.u +2 ←u :)(−4.x.x)                | (: u.u +3.u +1 ←u :)(−4.x.x)
5  | 2.x.(: u.u +5.u +5 ←u :)(−4.x.x)            | 2.x.(: u.u +4.u +3 ←u :)(−4.x.x)
6  | −(: u.u.u +6.u.u +9.u +2 ←u :)(−4.x.x)      | −(: u.u.u +5.u.u +6.u +1 ←u :)(−4.x.x)
7  | −2.x.(: u.u.u +7.u.u +14.u +7 ←u :)(−4.x.x) | −2.x.(: u.u.u +6.u.u +10.u +4 ←u :)(−4.x.x)

We could now analyse the polynomials in u here; but we'd get four families with correspondingly more equations relating them. Instead, let's look at some more structure that's present in S and C.

Multiplying multipliers

Now consider the case of Cos(n.m.t) for naturals n and m; clearly, this is C(n.m, Cos(t)) but equally, by considering n.m.t as a multiple of m.t, it's C(n, Cos(m.t)) = C(n, C(m, Cos(t))), so C(n)∘C(m) = C(n.m), which makes C an embedding (or homomorphism) of the multiplicative structure of the naturals in the composition structure of polynomials. Given that polynomials are defined in terms of the addition and multiplication of their coefficients and parameters, with addition and multiplication of polynomials defined pointwise (i.e. evaluate both polynomials at a given input, combine the answers; the result is what the combined polynomial gives for that input), passing one polynomial as parameter to another has exactly the same effect as composing the passed polynomial before the other, P∘Q = P(Q) when P and Q are polynomials. So we have C(n.m) = C(n, C(m)). Similarly, though less straightforwardly,

Sin(t).S(n +m +n.m, Cos(t)) = Sin((1 +n +m +n.m).t)
= Sin((1+n).(1+m).t) = Sin((1+m).t).S(n, Cos((1+m).t))
= Sin(t).S(m, Cos(t)).S(n, C(1+m, Cos(t)))

whence S(n +m +n.m) = S(m).S(n, C(1+m)) and, by symmetry under swapping n and m, this must also be equal to S(n).S(m, C(1+n)).
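
These multiplicative identities are easy to spot-check numerically; the sketch below (helper names mine) evaluates C(n, x) and S(n, x) at a real x by the iteration steps found earlier, then compares both sides of each identity:

    def CS_at(n, x):
        "Return (C(n, x), S(n, x)), evaluated via the iteration steps."
        sp, s, c = 0.0, 1.0, 1.0         # S(-1), S(0), C(0)
        for _ in range(n):
            c = x*c + (x*x - 1.0)*sp     # C(k+1) = x.C(k) +(x.x -1).S(k-1)
            sp, s = s, 2.0*x*s - sp      # S(k+1) = 2.x.S(k) -S(k-1)
        return c, s

    def C(n, x): return CS_at(n, x)[0]
    def S(n, x): return CS_at(n, x)[1]

    x, n, m = 0.37, 3, 4
    assert abs(C(n*m, x) - C(n, C(m, x))) < 1e-9
    assert abs(S(n + m + n*m, x) - S(m, x)*S(n, C(1 + m, x))) < 1e-9
    assert abs(S(n + m + n*m, x) - S(n, x)*S(m, C(1 + n, x))) < 1e-9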

From n = 2, we get C(2.m) = C(2, C(m)) = 2.C(m).C(m) −1; from m = 2, we get C(2.n) = C(n, 2.power(2) −power(0)). From m = 1, we get S(2.n +1) = S(1).S(n, C(2)) = 2.power(1).S(n, 2.power(2) − power(0)) and also S(2.n +1) = S(n).S(1, C(1+n)) = 2.S(n).C(1+n). (This last is just what the addition formula for S above gives when its two multipliers are 1+n.) We can also use the addition formula for C, above, to get C(2.n) = C(n).C(n) +(power(2) −power(0)).S(n−1).S(n−1); comparing this with C(2.n) = 2.C(n).C(n) −1, we see C(n).C(n) = 1 +(power(2) −1).S(n−1).S(n−1), which is just a restatement of Pythagoras's theorem, Cos(n.t).Cos(n.t) +Sin(n.t).Sin(n.t) = 1, expressed via S and C. The simple equations here are:

C(2.n, x) = 2.C(n, x).C(n, x) −1
S(2.n +1, x) = 2.S(n, x).C(n +1, x)

These tell us how C behaves on even and S on odd, i.e. the effect of halving the base angle; so we can get a complete description by working out how C behaves on odd and S on even. The C(odd, x) all have a factor of x; when expressed as in the last section via polynomials in u = −4.x.x, with all coefficients positive, they use the polynomials whose values at u are: 1, u +3, u.u +5.u +5, u.u.u +7.u.u +14.u +7. The S(even, u) correspondingly use polynomials 1, u +1, u.u +3.u +1, u.u.u +5.u.u +6.u +1. Taking the average of the first-degree members of these families, we get u +2; let's try writing each in terms of it. The C ones are 1, (u+2) +1, (u+2).(u+2) +(u+2) −1, (u+2).(u+2).(u+2) +(u+2).(u+2) −2.(u+2) −1; the S ones are 1, (u+2) −1, (u+2).(u+2) −(u+2) −1, (u+2).(u+2).(u+2) −(u+2).(u+2) −2.(u+2) +1. These have the same coefficients except for negating the ones at odd offset below highest order. These shared polynomials have parameter u+2 = 2 −4.x.x = −2.(2.x.x −1) which, when x is Cos(t), is just −2.Cos(2.t).

Refactoring

We have, at least for a few small n, C(2.n +1, x) = x.power(n, −1).P(2 −4.x.x) and S(2.n, x) = power(n, −1).Q(2 −4.x.x) for some polynomials P and Q, both of order n, whose highest order coefficient is +1 (unaffected by shifting the parameter by 2, although lower-order coefficients shall be). We've just found that flipping the sign of the coefficients of each power(n−odd) has the effect of turning P into Q and Q into P. Now, P(−w) ←w is just P with the sign of every odd-order coefficient flipped; so power(n, −1).(: P(−w) ←w :) is what we've found to match Q. Unwrapping all of that we get: C(2.n +1, x) = x.power(n, −1).P(2 −4.x.x) = x.Q(4.x.x −2) while S(2.n, x) = power(n, −1).Q(2 −4.x.x). So now, motivated by all of this, let's define two new families of polynomials, K and H (which we can hope we'll be able to prove equal, later) by: for each natural n and real x (presumed to be Cos(t) for some angle t),

C(2.n +1, x) = x.K(n, 4.x.x −2)
S(2.n, x) = power(n, −1).H(n, 2 −4.x.x)

We already know C(2.n +1, x) has a factor of x and is, otherwise, a polynomial in 4.x.x; likewise, S(2.n, x) is a polynomial in 4.x.x; nothing keeps us from shifting the origin or scaling by an overall sign, so both K and H are surely well-defined. Since C(2.n +1)/x and S(2.n) have degree 2.n, so do K(n, 4.x.x −2) and H(n, 2 −4.x.x) as polynomials in x; as they're being passed a quadratic in x as parameter, this means K(n, u) and H(n, u) have half the degree, as polynomials in u; so K(n) and H(n) have degree n. It should also be clear, from the previous analysis, that each K(n) and H(n) has leading-order coefficient +1.

Early examples

Inspecting the early polynomials above, we have C(1, x) = x so K(0) = 1; C(3, x) = x.(4.x.x −3) so K(1, u) = u −1; S(0) = 1, so H(0) = 1; and S(2, x) = 4.x.x −1 = power(1, −1).(2 −4.x.x −1) so H(1, u) = u −1. We'll shortly work out how to infer the rest from these, but let me just pause to work through the few more for which we already have the details, if only as a sanity check: feel free to skip to the next subsection.

S(4, x)
= 4.x.x.(4.x.x −3) +1
= 4.x.x.(4.x.x −2 −1) +1
= (4.x.x −2).(4.x.x −2) +2.(4.x.x −2) −4.x.x +1
= (2 −4.x.x).(2 −4.x.x) −(2 −4.x.x) −1
= power(2, −1).H(2, 2 −4.x.x)

so H(2, u) = u.u −u −1

C(5, x) / x
= 4.x.x.(4.x.x −5) +5
= 4.x.x.(4.x.x −2 −3) +5
= (4.x.x −2).(4.x.x −2) +2.(4.x.x −2) −3.(4.x.x) +5
= (4.x.x −2).(4.x.x −2) −4.x.x +1
= (4.x.x −2).(4.x.x −2) −(4.x.x −2) −1
= K(2, 4.x.x −2)

so K(2, u) = u.u −u −1 = H(2, u)

S(6, x)
= 4.x.x.(4.x.x.(4.x.x −5) +6) −1
= 4.x.x.(4.x.x.(4.x.x −2 −3) +6) −1
= 4.x.x.((4.x.x −2).(4.x.x −2) −4.x.x +2) −1
= 4.x.x.(2 −4.x.x).(2 −4.x.x) +4.x.x.(2 −4.x.x) −1
= (4.x.x −2).(2 −4.x.x).(2 −4.x.x) +(4 −4.x.x).(2 −4.x.x) −1
= −(2 −4.x.x).(2 −4.x.x).(2 −4.x.x) +(2 −4.x.x).(2 −4.x.x) +2.(2 −4.x.x) −1
= power(3, −1).H(3, 2 −4.x.x)

so H(3, u) = u.u.u −u.u −2.u +1

C(7, x)/x
= 4.x.x.(4.x.x.(4.x.x −7) +14) −7
= 4.x.x.(4.x.x.(4.x.x −2 −5) +14) −7
= 4.x.x.((4.x.x −2).(4.x.x −2) −3.(4.x.x) +10) −7
= 4.x.x.(4.x.x −2).(4.x.x −2) +4.x.x.(−3.(4.x.x −2) +4) −7
= (4.x.x −2).(4.x.x −2).(4.x.x −2) +2.(4.x.x −2).(4.x.x −2) −3.(4.x.x −2).(4.x.x −2) −6.(4.x.x −2) +4.(4.x.x −2) +8 −7
= (4.x.x −2).(4.x.x −2).(4.x.x −2) −(4.x.x −2).(4.x.x −2) −2.(4.x.x −2) +1
= K(3, 4.x.x −2)

so K(3, u) = u.u.u −u.u −2.u +1 = H(3, u)

So (:K:4) = (:H:4), at least. Our next task is to work out the recurrence relations for K and H.

Iterating H

Since H and K track every second member of S and C, we're going to want to take double steps in S and C. For the C series, we can simply use the addition formula with m = 2. However, we were able to separate the S series out, so let's see if we can preserve that while double-stepping. We got S(n+1, x) +S(n−1, x) = 2.x.S(n, x), so

S(n+2, x)
= 2.x.(2.x.S(n, x) −S(n−1, x)) −S(n, x)
= (4.x.x −1).S(n, x) −2.x.S(n−1, x)
= (4.x.x −1).S(n, x) −(S(n, x) +S(n−2, x))
= (4.x.x −2).S(n, x) −S(n−2, x)

This uses only terms at even offsets from one another; better yet, the only other factor involved is 4.x.x −2, exactly (give or take a sign) the parameter to our new functions. Applying this, we get

H(n+1, 2 −4.x.x).power(n+1, −1)
= S(2.(n +1), x)
= (4.x.x −2).S(2.n, x) −S(2.n −2, x)
= (4.x.x −2).power(n, −1).H(n, 2 −4.x.x) −power(n−1, −1).H(n−1, 2 −4.x.x)

from which we can read off H(n+1, u) = u.H(n, u) −H(n−1, u). We can now derive all of H from the known H(0) = 1, H(1, u) = u −1.
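
This two-step iteration is all we need to generate the H series mechanically; a minimal sketch (the function name is mine), holding each H(n) as its list of coefficients in ascending powers of u:

    def H_polys(count):
        "Yield coefficient lists of H(0), ..., H(count -1), ascending powers of u."
        prev, cur = [1], [-1, 1]        # H(0), H(1, u) = u -1
        for _ in range(count):
            yield prev
            # next is u.H(k+1, u) -H(k, u): shift cur up one power, pad prev
            prev, cur = cur, [a - b for a, b in zip([0] + cur, prev + [0, 0])]

    for n, h in enumerate(H_polys(6)):
        print(n, h)
    # e.g. n = 3 prints [1, -2, -1, 1], i.e. H(3, u) = u.u.u -u.u -2.u +1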

Notice that H(n+1, u) +H(n−1, u) = u.H(n, u) lets us iterate in either direction. If we decrease n, we get H(−1, u) = u.H(0, u) −H(1, u) = 1, so H(−1) = H(0). Nothing stops us repeating the process: H(−2, u) = u.H(−1, u) −H(0, u) = u −1, so H(−2) = H(1); as long as H(−(i+1)) = H(i) for each i in some positive natural n, we can cheerfully exploit the case i = n−1 to infer H(−(n+1), u) = H(−n, u).u −H(−(n−1), u) = H(n−1, u).u −H(n−2, u) = H(n, u) whence H(−(n+1)) = H(n); whence, by induction (given that we've already seen it for n = 0, 1), this holds for all natural n. Of course, this is just stating that Sin((2.n+1).t) = Sin(t).S(2.n, Cos(t)) = power(n, −1).H(n, 2 −4.Cos(t).Cos(t)) = power(n, −1).H(−(1+n), 2 −4.Cos(t).Cos(t)) = −Sin((1−2.(n+1)).t) = −Sin(−(2.n+1).t), which follows trivially from Sin(−a) = −Sin(a). The matching statement for K, once we show K = H, shall simply be the statement that Cos((2.n+1).t) = Cos(−(2.n +1).t), which follows equally trivially from Cos(−a) = Cos(a).

More sensibly, iterating forwards from the given H(0) = 1, H(1, u) = u −1, we get

H(2, u) = u.u −u −1
H(3, u) = u.u.u −u.u −2.u +1
H(4, u) = u.u.u.u −u.u.u −3.u.u +2.u +1
H(5, u) = u.u.u.u.u −u.u.u.u −4.u.u.u +3.u.u +3.u −1
H(6, u) = u.u.u.u.u.u −u.u.u.u.u −5.u.u.u.u +4.u.u.u +6.u.u −3.u −1

and so on.

Notice that H(n−1) has degree two lower than power(1).H(n), from which it's subtracted, so H(n+1)'s two leading orders come from H(n).power(1), without any interference from H(n−1). Thus the leading order coefficient is always +1 and the next is always −1; the two highest-order terms in H(n, u) are always (u −1).power(n−1, u).

At the far end of each polynomial, power(1).H(n) makes no contribution to the constant term (power(0)'s coefficient), so H(n+1)'s constant term is simply the negation of H(n−1)'s; as it happens, H(0) and H(1) had opposite values for it (rather than values with different magnitudes) so H(2)'s negated use of H(0) coincides with H(1)'s constant term and we get blocks of two of each sign, alternating with blocks of two of the other. These flow left to give us pairs of positive coefficients alternating with pairs of negative coefficients; in H(n), for each j in 4 = {0, 1, 2, 3} and natural i with n ≥ 4.i +j, the coefficient of power(n−4.i−j) is positive if j is 0 or 3, negative if it's 1 or 2. The contributions from u.H(n, u) to H(1+n, u) all thus have the sign this anticipates for H(n+1)'s coefficients; and H(n−1)'s term of each power has the opposite sign to that anticipated for H(n+1)'s coefficient of that power, so subtracting H(n−1) contributes with the matching sign. As both contributions have the anticipated sign, their sum does too and the coefficients of H(n+1) duly do have the anticipated signs. In particular, the constant term of H(n) is +1 when n or n+1 is a multiple of 4, else −1.

H(n)'s coefficients of power(n−2), for n > 1, and of power(n−3), for n > 2, are 1−n and n−2, respectively; when this holds true (as, for example, in H(3)), these two terms contribute 1−n and n−2 to H(n+1)'s terms in power(n−1) and power(n−2), respectively; and −H(n−1, u) contributes (1 −u).power(n−2, u) from its highest-order terms, so that H(1+n)'s coefficients of power(n−1) and power(n−2) are duly 1−(n+1) and (n+1)−2, respectively. Thus the highest-order four terms of H(n, u), for n > 3, are (u.u.(u −1) +(1 −n).u +n −2).power(n −3, u).
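
These coefficient claims, and the period-four sign pattern of the previous paragraph, are easy to check mechanically for as many H(n) as one cares to generate; a small sketch (repeating the generator from the earlier sketch so that this stands alone):

    def H_polys(count):
        prev, cur = [1], [-1, 1]        # H(0), H(1, u) = u -1
        for _ in range(count):
            yield prev
            prev, cur = cur, [a - b for a, b in zip([0] + cur, prev + [0, 0])]

    for n, h in enumerate(H_polys(12)):
        assert h[n] == 1                               # leading coefficient +1
        if n > 0: assert h[n - 1] == -1                # next coefficient is -1
        if n > 1: assert h[n - 2] == 1 - n
        if n > 2: assert h[n - 3] == n - 2
        assert h[0] == (1 if n % 4 in (0, 3) else -1)  # constant term
        for k, a in enumerate(h):                      # sign pattern, period 4
            assert a != 0 and (a > 0) == ((n - k) % 4 in (0, 3))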

Iterating K and matching with H

Using the addition formula for C, above, with m = 2 and 2.n +1 in place of n, we obtain

K(n +1, 4.x.x −2).x
= C(2.(n +1) +1, x)
= C(2.n +1, x).C(2, x) +(x.x −1).S(2.n, x).S(1)
= C(2.n +1, x).(2.x.x −1) +2.x.(x.x −1).S(2.n, x)
= x.(2.x.x −1).K(n, 4.x.x −2) +2.x.(x.x −1).power(n, −1).H(n, 2 −4.x.x)

which gives us

2.K(n +1, u) = u.K(n, u) +power(n, −1).(u −2).H(n, −u)

Now, H(n) has degree n with leading-order term power(n), so (: power(n, −1).H(n, −u) ← u :) also has leading-order term power(n); as long as this is also the leading-order term in K(n), we'll have 2.power(n+1) as the leading order term in 2.K(n+1), hence power(n+1) as K(n+1)'s leading-order term. Since we do indeed have K(0) and K(1) of degrees 0 and 1, each with leading order coefficient +1, we can thus induce that each K(n) does indeed have degree n and leading-order coefficient 1. The iteration formula's sole contribution to power(0) in 2.K(n+1) comes from the −2.power(n, −1).H(n, −u) term, so K(n+1)'s constant term is simply −power(n, −1) times H(n)'s constant term, which we established above to be +1 when n or n+1 is a multiple of 4, else −1. Thus, when n is odd, H(n+1) has the same constant term as H(n); when n is even, the constant terms in H(n+1) and H(n) have opposite sign; which is exactly the relation just given between the constant terms of K(n+1) and H(n), hence K(n+1) always has the same constant term as H(n+1). So each K(i) has, like the matching H(i), degree i with leading coefficient 1 and constant term equal to that of H(i).

We already know (:K:4) = (:H:4), so let's suppose we have some natural m for which (:K:m) = (:H:m). In particular, this means that 2.H(i, u) = u.H(i−1, u) +power(i−1, −1).(u−2).H(i−1, −u) for each positive natural i in m. So now consider:

2.H(m, u)
= 2.u.H(m−1, u) −2.H(m−2, u)

by the iteration formula for H

= u.(u.H(m−2, u) +power(m−2, −1).(u−2).H(m−2, −u)) −(u.H(m−3, u) +power(m−3, −1).(u−2).H(m−3, −u))

by applying the formula just given for 2.H(i, u), with i = m−1 in the first term and i = m−2 in the second; reordering the terms gives

= u.u.H(m−2, u) −u.H(m−3, u) +power(m−2, −1).(u−2).u.H(m−2, −u) −power(m−1, −1).(u−2).H(m−3, −u)
= u.(u.H(m−2, u) −H(m−3, u)) −power(m−1, −1).(u−2).(u.H(m−2, −u) +H(m−3, −u))

for the last term of which we can pass −u in place of u to H's iterative formula to get H(i+1, −u) = −u.H(i, −u) −H(i−1, −u) with i = m−2, while using H's iterative formula directly in the first term, to get:

= u.H(m−1, u) +power(m−1, −1).(u−2).H(m−1, −u)

and our inductive hypothesis tells us K(m−1) = H(m−1), so

= u.K(m−1, u) +power(m−1, −1).(u−2).H(m−1, −u)
= 2.K(m, u)

so that H(m) = K(m). Thus (:H:m) = (:K:m) implies H(m) = K(m) and, by induction, we can infer that H = K on all of {naturals}. In particular, H satisfies 2.H(n+1, u) = u.H(n, u) +power(n, −1).(u −2).H(n, −u). Taken together with H's two-step iteration, H(n+1, u) +H(n−1, u) = u.H(n, u), this gives

power(n, −1).(u −2).H(n, −u)
= 2.H(n+1, u) −u.H(n, u)
= H(n+1, u) −H(n−1, u)
= u.H(n, u) −2.H(n−1, u)

In particular, the last can be rearranged to express H(n−1) in terms of H(n).
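
Before moving on, the single-step iteration can be cross-checked against the two-step one as an identity between coefficient lists; a minimal sketch (helper names mine):

    def H_polys(count):
        prev, cur = [1], [-1, 1]        # H(0), H(1, u) = u -1
        for _ in range(count):
            yield prev
            prev, cur = cur, [a - b for a, b in zip([0] + cur, prev + [0, 0])]

    def add(p, q):
        n = max(len(p), len(q))
        p, q = p + [0]*(n - len(p)), q + [0]*(n - len(q))
        return [a + b for a, b in zip(p, q)]

    hs = list(H_polys(10))
    for n in range(9):
        neg = [a if i % 2 == 0 else -a for i, a in enumerate(hs[n])]  # H(n, -u)
        sign = -1 if n % 2 else 1                                     # power(n, -1)
        rhs = add([0] + hs[n],                  # u.H(n, u)
                  [sign*a for a in add([0] + neg, [-2*a for a in neg])])
        lhs = [2*a for a in hs[n + 1]]          # 2.H(n+1, u)
        assert all(a == 0 for a in add(lhs, [-a for a in rhs]))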

Summary

So we now have, for natural n and any angle t:

Cos(n.t) = C(n, Cos(t))
Sin((1+n).t) = Sin(t).S(n, Cos(t))

with each C(n) or S(n) being a polynomial of degree n. From the formulae for doubling angles, we have, for every natural n:

C(2.n, x) = 2.C(n, x).C(n, x) −1
S(2.n +1, x) = 2.S(n, x).C(n +1, x)

while the remaining values of S and C can be obtained as

C(2.n +1, x) = x.H(n, 4.x.x −2)
S(2.n, x) = power(n, −1).H(n, 2 −4.x.x)

from a function ({polynomials}: H |{naturals}) with

H(0) = 1, H(1, u) = u −1 and H(n+1, u) = u.H(n, u) −H(n−1, u)

which, furthermore, also satisfies, for each u and natural n,

2.H(n+1, u) = u.H(n, u) +power(n, −1).(u −2).H(n, −u)

which, though more complicated than the preceding two-step iteration, is a single-step iteration, so lets us determine each H(n+1) from H(n) without needing to know H(n−1); in particular, with this, it suffices to know H(0) = 1 and we can infer H(1), along with all later H(n). Each H(n) is a polynomial of degree n.
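
Here, as a worked illustration, is a minimal Python sketch of the whole scheme (it is not the study.maths.multiangle module mentioned below; the names are mine): it recovers Cos(n.t) and Sin((1+n).t) from H alone, via the halving formulae and the two relations just given:

    from math import sin, cos

    def H_at(n, u):
        "Evaluate H(n) at u via H(k+1, u) = u.H(k, u) -H(k-1, u)."
        prev, cur = 1.0, u - 1.0        # H(0), H(1)
        for _ in range(n):
            prev, cur = cur, u*cur - prev
        return prev

    def C_at(n, x):
        if n % 2:                       # C(2.k +1, x) = x.H(k, 4.x.x -2)
            return x * H_at(n // 2, 4*x*x - 2)
        if n == 0:
            return 1.0
        c = C_at(n // 2, x)             # C(2.k, x) = 2.C(k, x).C(k, x) -1
        return 2*c*c - 1

    def S_at(n, x):
        if n % 2 == 0:                  # S(2.k, x) = power(k, -1).H(k, 2 -4.x.x)
            return (-1)**(n // 2) * H_at(n // 2, 2 - 4*x*x)
        k = n // 2                      # S(2.k +1, x) = 2.S(k, x).C(k +1, x)
        return 2 * S_at(k, x) * C_at(k + 1, x)

    t = 0.829
    for n in range(12):
        assert abs(C_at(n, cos(t)) - cos(n*t)) < 1e-9
        assert abs(sin(t)*S_at(n, cos(t)) - sin((1 + n)*t)) < 1e-9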

These sequences of polynomials are exported (with the names C, S and H, as lazily-evaluated immutable sequences) by the study.maths.multiangle module of my pythonic study package.

Structural properties of H

We've used the addition formulae for small steps to get our iterations for H (and K): now let's look at what those formulae say for general steps. Since H relates to the odd multiples of angles, the interesting cases have odd sum for the two multipliers of the base angle; this means one multiplier is odd (so gives terms based on H) while the other is even (so not); but, if it's some specific power of two times some odd multiplier, it can still give a result that we can relate to H; for now, let's just consider twice an odd multiplier.

x.H(n +2.m +1, 4.x.x −2)
= C(2.(n +2.m +1) +1, x) = C(2.n +1 +2.(2.m +1), x)
= C(2.n +1, x).C(2.(2.m +1), x) +(x.x −1).S(2.n, x).S(2.(2.m) +1, x)
= C(2.n +1, x).(2.C(2.m +1, x).C(2.m +1, x) −1)
+2.(x.x −1).S(2.n, x).S(2.m, x).C(2.m +1, x)
= x.H(n, 4.x.x −2).(2.x.H(m, 4.x.x −2).x.H(m, 4.x.x −2) −1)
+2.(x.x −1).power(n, −1).H(n, 2 −4.x.x).power(m, −1).H(m, 2 −4.x.x).x.H(m, 4.x.x −2)
= 2.x.H(m, 4.x.x −2).(
x.x.H(n, 4.x.x −2).H(m, 4.x.x −2)
+(x.x −1).power(n +m, −1).H(n, 2 −4.x.x).H(m, 2 −4.x.x))
−x.H(n, 4.x.x −2)
power(n +2.m +1, −1).H(n +2.m +1, 2 −4.x.x)
= S(2.n +4.m +2, x) = S(2.n +1 +2.(2.m +1) −1, x)
= S(2.n, x).C(4.m +2, x) +C(2.n +1, x).S(4.m +1, x)
= S(2.n, x).(2.C(2.m +1, x).C(2.m +1, x) −1)
+2.C(2.n +1, x).S(2.m, x).C(2.m +1, x)
= power(n, −1).H(n, 2 −4.x.x).(2.x.H(m, 4.x.x −2).x.H(m, 4.x.x −2) −1)
+2.x.H(n, 4.x.x −2).power(m, −1).H(m, 2 −4.x.x).x.H(m, 4.x.x −2)
= 2.x.x.H(m, 4.x.x −2).power(n, −1).H(n, 2 −4.x.x).H(m, 4.x.x −2)
+2.x.x.H(m, 4.x.x −2).power(m, −1).H(n, 4.x.x −2).H(m, 2 −4.x.x)
−power(n, −1).H(n, 2 −4.x.x)
= 2.x.x.H(m, 4.x.x −2).(
power(n, −1).H(n, 2 −4.x.x).H(m, 4.x.x −2)
+power(m, −1).H(n, 4.x.x −2).H(m, 2 −4.x.x))
−power(n, −1).H(n, 2 −4.x.x)

whence, with u = 4.x.x −2 in the first and u = 2 −4.x.x in the second,

2.H(n +2.m +1, u)
= H(m, u).(
(u +2).H(n, u).H(m, u)
+(u −2).power(n +m, −1).H(n, −u).H(m, −u))
−2.H(n, u)
2.H(n +2.m +1, u)
= 2.H(n, u)
+(u −2).H(m, −u).(
H(n, u).H(m, −u)
+power(n +m, −1).H(n, −u).H(m, u))

The right-hand side, in either form, only involves H(n) and H(m). For any positive natural k, there is some natural r < 3 and natural m for which k = 3.m +r +1; setting n = m + r, we get k = n +2.m +1, with both n and m necessarily less than k. We can thus infer H(k) from (:H:k). Thus this one formula is sufficient to generate all of H from H(0). Furthermore, when inferring any given H(k), it doesn't need all of (:H:k); since both n and m are at most 1 +k/3, those H(i) with 3.i > k +3 are not needed, at least. For example, on the way to computing H(137), only (: H :{0, 1, 2, 3, 4, 5, 6, 14, 15, 16, 45, 46}) need be computed.
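
The index bookkeeping in that last example can be reproduced with a few lines of Python (the helper name is mine): repeatedly split k as 3.m +r +1 with r < 3, recurse on n = m +r and on m, and collect the indices visited:

    def visited(k, seen=None):
        "Set of indices i for which H(i) gets computed on the way to H(k)."
        if seen is None:
            seen = set()
        if k not in seen:
            seen.add(k)
            if k > 0:
                m, r = divmod(k - 1, 3)    # k = 3.m +r +1 with r < 3
                visited(m + r, seen)       # the n of the formula
                visited(m, seen)
        return seen

    print(sorted(visited(137) - {137}))
    # [0, 1, 2, 3, 4, 5, 6, 14, 15, 16, 45, 46]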

Comparing the two forms, we obtain

H(m, u).(
(u +2).H(n, u).H(m, u)
+(u −2).power(n +m, −1).H(n, −u).H(m, −u))
−2.H(n, u)
= 2.H(n, u)
+(u −2).H(m, −u).(
H(n, u).H(m, −u)
+power(n +m, −1).H(n, −u).H(m, u))

whence

(u +2).H(n, u).H(m, u).H(m, u)
+power(n +m, −1).(u −2).H(n, −u).H(m, u).H(m, −u)
= 4.H(n, u)
+(u −2).H(n, u).H(m, −u).H(m, −u)
+power(n +m, −1).(u −2).H(n, −u).H(m, u).H(m, −u)

in which the power(n +m, −1) terms cancel; rearranging what's left, we obtain

H(n, u).((u +2).H(m, u).H(m, u) −(u −2).H(m, −u).H(m, −u) −4) = 0

Since this is true as an equation of polynomials and H(n) isn't constant 0 for natural n, the term it's multiplied by (which doesn't involve n at all, so we get the above for each natural n, notably including n = 0, H(0) = 1) must in fact be zero, whence:

(u +2).H(m, u).H(m, u) −(u −2).H(m, −u).H(m, −u) = 4

for each natural m. The studious reader is welcome to verify that this holds true for each H(i) listed above (and doesn't hold true for most other polynomials). This implies that (: (2 +u).H(m, u).H(m, u) ←u :) has constant term 2 and all other terms are of odd order; so it's 2 +u.P(u.u) for some polynomial P (with P(4) = 1).
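
That verification is easy to mechanise; the following sketch (helper names mine) checks the identity, as an equation between coefficient lists, for the first several H(m):

    def H_polys(count):
        prev, cur = [1], [-1, 1]        # H(0), H(1, u) = u -1
        for _ in range(count):
            yield prev
            prev, cur = cur, [a - b for a, b in zip([0] + cur, prev + [0, 0])]

    def mul(p, q):
        out = [0]*(len(p) + len(q) - 1)
        for i, a in enumerate(p):
            for j, b in enumerate(q):
                out[i + j] += a*b
        return out

    for h in H_polys(10):
        neg = [a if i % 2 == 0 else -a for i, a in enumerate(h)]  # H(m, -u)
        left = mul([2, 1], mul(h, h))         # (u +2).H(m, u).H(m, u)
        right = mul([-2, 1], mul(neg, neg))   # (u -2).H(m, -u).H(m, -u)
        diff = [a - b for a, b in zip(left, right)]
        assert diff[0] == 4 and all(a == 0 for a in diff[1:])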

From multiplying multipliers we have

C(n.m) = C(n, C(m))
S(n +m +n.m) = S(m).S(n, C(1+m)) = S(n).S(m, C(1+n))

Applying these to the cases where we derive C and S from H, we have

x.H(2.n.m +m +n, 4.x.x −2)
= C(4.n.m +2.m +2.n +1, x) = C((2.n +1).(2.m +1), x)
= C(2.n +1, C(2.m +1, x))
= C(2.m +1, x).H(n, 4.C(2.m +1, x).C(2.m +1, x) −2)
= x.H(m, 4.x.x −2).H(n, 4.x.x.H(m, 4.x.x −2).H(m, 4.x.x −2) −2)
H(2.n.m +m +n, 2 −4.x.x)
= power(2.n.m +n +m, −1).S(2.n +2.m +4.n.m, x)
= power(n +m, −1).S(2.m, x).S(2.n, C(1 +2.m, x))
= power(n +m, −1).power(m, −1).H(m, 2 −4.x.x).power(n, −1).H(n, 2 −4.C(1+2.m, x).C(1+2.m, x))
= H(m, 2 −4.x.x).H(n, 2 −4.x.x.H(m, 4.x.x −2).H(m, 4.x.x −2))

whence, with u = 4.x.x −2 in the first and u = 2 −4.x.x in the second,

H(2.n.m +m +n, u) = H(m, u).H(n, (u +2).H(m, u).H(m, u) −2)
H(2.n.m +m +n, u) = H(m, u).H(n, 2 −(2 −u).H(m, −u).H(m, −u))
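
As a final numerical spot-check of these two formulae (with H evaluated by its two-step iteration; the helper name is mine):

    def H_at(n, u):
        "Evaluate H(n) at u via H(k+1, u) = u.H(k, u) -H(k-1, u)."
        prev, cur = 1.0, u - 1.0
        for _ in range(n):
            prev, cur = cur, u*cur - prev
        return prev

    u, n, m = 0.61, 2, 3
    k = 2*n*m + m + n                     # here 17
    hm, hm_neg = H_at(m, u), H_at(m, -u)
    assert abs(H_at(k, u) - hm*H_at(n, (u + 2)*hm*hm - 2)) < 1e-9
    assert abs(H_at(k, u) - hm*H_at(n, 2 - (2 - u)*hm_neg*hm_neg)) < 1e-9
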
Written by Eddy.