There is a broad class of situations in which differential equations arise in
which ∇^{2} applied to a field is equal to that field multiplied by some real
function, E, of position. By analogy with the one-dimensional harmonic
oscillator – whose second derivative is a negative multiple of its value – the
angular factors in the solutions to such an equation are described as spherical
harmonics. I'll thus describe the differential equation as the
three-dimensional harmonic equation.

When E, as a function of position, possesses spherical symmetry about some point, we can choose polar co-ordinates [r,m,n] about that point: a radial co-ordinate r measuring distance from the point, an azimuthal co-ordinate or latitude, m, defined relative to some line through the point, and an axial co-ordinate or longitude, n, defined relative to some half-plane whose boundary is the same line.

Our equation then falls into the form

- E(r).u(r,m,n) = ∇^{2}u(r,m,n) = (∂(r.r.∂u/∂r)/∂r −S^{2}(u))/r/r

where

- S^{2}(u) = −radian.radian.( ∂(Cos(m).∂u/∂m)/∂m +∂∂u/∂n/∂n/Cos(m) )/Cos(m)

and S^{2} only involves n and m; it is the square of
the spin operator, S.

Since our differential equation is linear in u, any linear combination of solutions to it will also be a solution. We can thus seek out solutions of simple form, which are easier to find, and leave those of complex form to be built up as linear combinations of these. Let us, then, consider solutions of form u(r,m,n) = R(r).M(m).N(n); such solutions should be possible, given the spherical symmetry of our system. Dividing our equation through by u/r/r then reduces it to

- E(r).r.r = d(r.r.R'(r))/dr/R(r) −S^{2}(M(m).N(n))/M(m)/N(n)

which we can re-arrange as

- d(r.r.R'(r))/dr/R(r) −E(r).r.r = S^{2}(M(m).N(n))/M(m)/N(n)

in which the left side depends only on r and the right side only on m and n; since r, m and n are independent co-ordinates, this implies both sides are equal to the same constant. Let this constant be L and obtain:

- d(r.r.R'(r))/dr/R(r) −E(r).r.r = L
- L = S^{2}(M(m).N(n))/M(m)/N(n)
- = −radian.radian.(d(Cos(m).M'(m))/dm/M(m) +N''(n)/N(n)/Cos(m))/Cos(m)

In the present case, we know nothing about how E(r) depends on r, so the first equation is outside the scope of the present discussion; but the second equation arises in every problem of this kind, regardless of the particular radial function E. It, in turn, can be re-arranged as

- L.Cos(m).Cos(m)/radian/radian +Cos(m).d(Cos(m).M'(m))/dm/M(m) = −N''(n)/N(n)

which, again, separates: the left side depends only on m and the right side only on n, so they must be constant. Now, N must be a periodic function, with period one turn (or some whole fraction of it) – its values when n is ± a half turn must be equal for u to be continuous, as a function of position. If N''(n)/N(n) is constant, we get solutions of form (: exp(k.n) ←n :) with k.k equal to the constant. We require that exp(k.turn/2) = exp(−k.turn/2), i.e. exp(k.turn) = 1. Since exp is periodic with period 2.π.i and exp(0) = 1, this implies k.turn is an integer multiple of 2.π.i so k is an integer multiple of 2.π.i/turn = i/radian. So take k = h.i/radian for some integer h, forget about k, and note that our constant −N''(n)/N(n) = h.h/radian/radian, with N = (: exp(h.n.i/radian) ←n :) = (: Cos(h.n) +i.Sin(h.n) ←n :).
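As a quick check on this axial factor – a sketch assuming Python with sympy is available, working in units where radian = 1, with the concrete value h = 3 chosen purely for illustration – N is indeed periodic with period one turn and has −N''/N = h.h:

```python
import sympy as sp

n = sp.symbols('n', real=True)
h = 3                                      # any concrete integer works here
N = sp.exp(sp.I * h * n)                   # N(n) = exp(h.n.i/radian), radian = 1
ratio = sp.simplify(-sp.diff(N, n, 2) / N)             # should be h.h
period_gap = sp.expand(N.subs(n, n + 2*sp.pi) - N)     # should vanish
print(ratio, period_gap)
```

A non-integer h would leave period_gap non-zero, which is exactly why continuity of u forces h to be an integer.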

We are thus left with the equation

- radian.radian.Cos(m).d(Cos(m).M'(m))/dm/M(m) = h.h −L.Cos(m).Cos(m)

which we can attack by the substitution s = Sin(m), yielding Cos(m).Cos(m) = 1−s.s, ds/dm = Sin'(m) = Cos(m)/radian; and considering the case where M(m) is some power of Cos(m) multiplied by some function of Sin(m); introduce P = (: M(m) / power(q, Cos(m)) ←Sin(m) :) so that M(m) = P(Sin(m)).power(q, Cos(m)) and

- Cos(m).M'(m)
- = (P'(s).Cos(m).Cos(m) −q.s.P(s)).power(q,Cos(m))/radian
- = (P'(s).(1−s.s) −q.s.P(s)).power(q,Cos(m))/radian
whence

- (h.h +(s.s−1).L).P(s)
- = radian.radian.Cos(m).d(Cos(m).M'(m))/dm/power(q, Cos(m))
- = radian.d((P'(s).(1−s.s) −q.s.P(s)).power(q,Cos(m)))/dm/power(q−1, Cos(m))
- = d(P'(s).(1−s.s) −q.s.P(s))/ds.Cos(m).Cos(m) −q.s.(P'(s).(1−s.s) −q.s.P(s))
- = (P''(s).(1−s.s) −(2+q).s.P'(s) −q.P(s)).(1−s.s) −q.s.(P'(s).(1−s.s) −q.s.P(s))
- = P''(s).(1−s.s).(1−s.s) −2.(1+q).s.P'(s).(1−s.s) +q.P(s).(s.s.(q+1) −1)
whence

- (P''(s).(1−s.s) −2.(1+q).P'(s).s).(1−s.s)
- = P(s).(h.h +(s.s−1).L +(1 −(q+1).s.s).q)
- = P(s).(h.h +(s.s−1).L −q.q +q.(q+1).(1−s.s))
- = P(s).(h.h −q.q +(1−s.s).(q.(q+1) −L))

so choose q = abs(h), the non-negative member of {±h}, to obtain

- (P''(s).(1−s.s) −2.(1+q).P'(s).s).(1−s.s) = P(s).(1−s.s).(q.(q+1) −L)

or, equivalently,

- P''(s).(1−s.s) = 2.s.(1+q).P'(s) +P(s).(q.(q+1) −L)
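To make this reduced equation concrete – a sympy sketch, with the illustrative choices q = 1, L = 12 and candidate P(s) = 5.s.s −1, all picked purely for this example – we can verify both the equation in s and the matching equation in m (in units with radian = 1, using h.h = q.q):

```python
import sympy as sp

s, m = sp.symbols('s m')
q, L = 1, 12                    # illustrative values only
P = 5*s**2 - 1                  # a candidate polynomial solution for this q, L
# the reduced equation in s:
residue = sp.diff(P, s, 2)*(1 - s**2) - 2*s*(1 + q)*sp.diff(P, s) - P*(q*(q + 1) - L)
# and the matching M(m) = P(Sin(m)).power(q, Cos(m)) in the equation in m:
M = P.subs(s, sp.sin(m)) * sp.cos(m)**q
check = sp.cos(m)*sp.diff(sp.cos(m)*sp.diff(M, m), m) - (q*q - L*sp.cos(m)**2)*M
print(sp.expand(residue), sp.simplify(check))   # both vanish
```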

If we now expand P as a power series, P = sum(: k(j).power(j) :) for some ({scalars}:k:{integers}), the requirement that M be well-behaved near m = 0 translates to a requirement that P be likewise well-behaved at s = 0; and, while we could allow P to diverge at its bounding values, s = ±1, we must yet require that P at least converge for s arbitrarily close to these. The first of these constraints requires k(j) = 0 for j negative, in effect restricting (:k:{naturals}) and making P a polynomial; we shall return to the second. We now substitute this polynomial into our equation, to obtain:

- 0
- = P(s).(q.(q+1) −L) +2.s.(1+q).P'(s) −P''(s).(1−s.s)
- = sum(: (q.(q+1) −L).k(j).power(j,s) ←j :) +sum(: 2.s.(1+q).k(j).j.power(j−1,s) ←j :) +sum(: (s.s−1).k(j).j.(j−1).power(j−2,s) ←j :)
- = sum(: (q.(q+1) −L).k(j).power(j,s) +2.(1+q).k(j).j.power(j,s) +k(j).j.(j−1).power(j,s) −k(j+2).(j+2).(j+1).power(j,s) ←j :)
- = sum(: ( k(j).(q.(q+1) +2.(1+q).j +j.(j−1) −L) −k(j+2).(j+2).(j+1) ).power(j,s) ←j :)

in which each power of s must have co-efficient zero; and (q+1).q +2.(1+q).j +j.(j−1) = q.q +q +2.q.j +j +j.j = (q+j).(q+j+1); so we get a recurrence equation for k:

- k(j+2).(j+2).(j+1) = k(j).((q+j).(q+j+1) −L)

Our requirement that k(j) be zero for negative j is consistent with j = −1 and j = −2 making the left side zero, so k(−1) and k(−2) may be zero; whence similarly so may all other −ve coefficients. Since the recurrence only relates even j to even j+2, and odd j to odd j+2, the coefficients of odd and even order can be chosen independently; we can start with either k(0) or k(1) zero and the other non-zero. For large j, unless k(j) is zero, we get k(j+2)/k(j) tending to 1; which would produce a power series that diverges as s tends to ±1, yielding an ill-defined M. Thus we must have k(j) = 0 for all sufficiently large j; which requires (q+j).(q+j+1) = L for the largest j with k(j) non-zero. We have q and j in {naturals} so this implies L = b.(1+b) for some natural b ≥ q = abs(h), i.e. with −b ≤ h ≤ b. Our recurrence relation now becomes

- k(j+2).(j+2).(j+1) = k(j).((q+j).(q+j+1) −b.(1+b))

with k(b−q−2.a) non-zero for each natural a with q+2.a ≤ b; all other k(j) are zero. If b−q is odd, we get k(0) = 0 and k(1) non-zero; if b−q is even, k(1) = 0 and k(0) is non-zero. Aside from an over-all scaling, this completely determines our solution for M(m) = P(Sin(m)).power(abs(h),Cos(m)).
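The recurrence can be run mechanically. Here is a plain-Python sketch (exact arithmetic via the standard fractions module; the seeding of k(0) or k(1) by the parity of b−q is as described above) showing, for the illustrative case b = 3, q = 1, that the series terminates at j = b−q:

```python
from fractions import Fraction

def series_coefficients(b, q, terms=10):
    """k(j) from k(j+2).(j+2).(j+1) = k(j).((q+j).(q+j+1) - b.(1+b)),
    seeded with a 1 in the parity class of b - q (all others start at 0)."""
    k = [Fraction(0)] * terms
    k[(b - q) % 2] = Fraction(1)
    for j in range(terms - 2):
        k[j + 2] = k[j] * ((q + j)*(q + j + 1) - b*(1 + b)) / ((j + 2)*(j + 1))
    return k

coeffs = series_coefficients(3, 1)
print(coeffs[:5])    # non-zero only up to j = b - q = 2
```

Up to overall scaling this gives P(s) = 1 −5.s.s for that case; multiplying by the seed value rescales the whole solution, as expected.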

Suppose we have a solution for P, as above, characterized by

- P''(s).(1−s.s) = 2.s.(1+q).P'(s) +P(s).(q.(q+1) −b.(1+b))

for some natural q ≤ b or, equivalently, with P = sum(: k(j).power(j) :) for some ({scalars}:k:1+b−q), by the requirement that

- k(j+2).(j+2).(j+1) = k(j).((q+j).(q+j+1) −b.(1+b))

and consider P' = sum(: j.k(j).power(j−1) ←j :); writing g(j) = (j+1).k(j+1) so that P' = sum(: g(j).power(j) ←j :), we find

- g(j+1).(j+1).j
- = k(j+2).(j+2).(j+1).j
- = k(j).((q+j).(q+j+1) −b.(1+b)).j
- = g(j−1).((q+j).(q+j+1) −b.(1+b))
- = g(j−1).((q+1+j−1).(q+1+j−1+1) −b.(1+b))

so that g satisfies the same recurrence relation as k but with q increased by 1. Indeed, if we differentiate the differential equation for P we obtain:

- P'''(s).(1−s.s) −2.s.P''(s) = 2.s.(1+q).P''(s) +(2.(1+q) +q.(q+1) −b.(1+b)).P'(s)

which we can rearrange as

- P'''(s).(1−s.s) = 2.s.(2+q).P''(s) +((2+q).(q+1) −b.(1+b)).P'(s)

revealing that P' does indeed solve the same differential equation for q+1 as P solves for q. Thus, aside from normalization factors, all the solutions to our azimuthal equation may be obtained by differentiating the q = 0 solution. This is the case with M(m) = P(Sin(m)) and no power of Cos(m), and it is entirely determined by b; it is a solution of

- P''(s).(1−s.s) = 2.s.P'(s) −b.(1+b).P(s)

- and known as a Legendre polynomial. Its coefficients satisfy

- k(j+2).(j+2).(j+1) = k(j).(j.(j+1) −b.(1+b))
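The claim that differentiating the q = 0 solution yields the solutions for higher q can be checked symbolically; a sketch assuming sympy (whose legendre function supplies the q = 0 polynomial), with the illustrative b = 4:

```python
import sympy as sp

s = sp.symbols('s')
b = 4
P = sp.legendre(b, s)                      # the q = 0 (Legendre polynomial) solution
residues = []
for q in range(b + 1):
    Q = sp.diff(P, s, q)                   # its q-th derivative
    residues.append(sp.expand(
        sp.diff(Q, s, 2)*(1 - s**2)
        - 2*s*(1 + q)*sp.diff(Q, s)
        - (q*(q + 1) - b*(b + 1))*Q))
print(residues)    # all zero: each derivative solves the equation for its own q
```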

The first few Legendre polynomials and their non-trivial derivatives are given (up to over-all scaling) by:

- b = 0
- 1 ←s
- b = 1
- s ←s
- b = 2
- 3.s.s −1 ←s
- b = 3
- (5.s.s −3).s ←s
- 3.(5.s.s −1) ←s
- b = 4
- 35.s.s.s.s −30.s.s +3 ←s
- 20.s.(7.s.s −3) ←s
- 60.(7.s.s −1) ←s
- b = 5
- (63.s.s.s.s −70.s.s +15).s ←s
- 15.(21.s.s.s.s −14.s.s +1) ←s
- 420.s.(3.s.s −1) ←s
- 420.(9.s.s −1) ←s
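The leading entry for each b above can be checked against a computer algebra system; the following sketch (assuming sympy, whose legendre uses the conventional scaling with value 1 at s = 1) confirms each listed polynomial agrees up to overall scaling and satisfies the b-indexed equation above:

```python
import sympy as sp

s = sp.symbols('s')
table = {2: 3*s**2 - 1,
         3: (5*s**2 - 3)*s,
         4: 35*s**4 - 30*s**2 + 3,
         5: (63*s**4 - 70*s**2 + 15)*s}
# ratio to sympy's legendre(b, s): constant, so same polynomial up to scaling
scales = {b: sp.cancel(p / sp.legendre(b, s)) for b, p in table.items()}
# residue of P''.(1-s.s) = 2.s.P' - b.(1+b).P for each listed P
residues = {b: sp.expand(sp.diff(p, s, 2)*(1 - s**2) - 2*s*sp.diff(p, s)
                         + b*(b + 1)*p)
            for b, p in table.items()}
print(scales, residues)
```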

If, then, we write Legendre(b, q) for some suitably-scaled q-th derivative of the Legendre polynomial of order b, we obtain M(m) = Legendre(b, q, Sin(m)).power(q, Cos(m)) with L = b.(1+b). Combining this with our axial solution N(n) = exp(h.n.i/radian) = Cos(h.n) +i.Sin(h.n) with abs(h) = q, we obtain a solution M(m).N(n) to the angular part of our general harmonic equation. We thus define the functions

- Y(b,h) = (: (: K(b,abs(h)).Legendre(b, abs(h), Sin(m)).power(abs(h), Cos(m)).(Cos(h.n) +i.Sin(h.n)) ←n :) ←m :)

with (:Y|{naturals}) and each (:Y(b):{integer h: −b ≤ h ≤ b}), and each
K(b,abs(h)) so chosen as to make the integral of Y(b,h) times its complex
conjugate over the unit sphere be 1. These Y(b,h) are known as spherical
harmonics and reduce our general

- ∇^{2}u(r,m,n) = E(r).u(r,m,n)

to a superposition of solutions of form u = R(r).Y(b,h,m,n) for some b, h, with each solution's R satisfying

- (E(r).r.r +(1+b).b).R(r) = d(r.r.R'(r))/dr

for the b of its Y(b,h). Note that, when looking for real (that is, not complex) solutions to our harmonic equation, we can use Y(b,h)+Y(b,−h) and i.Y(b,−h)−i.Y(b,h), each divided by √2, in place of Y(b,h) and Y(b,−h); these simply replace Cos(h.n) ±i.Sin(h.n) with Cos(h.n) and Sin(h.n), respectively, and limit h to its non-negative values, h in 1+b. The thus-adjusted spherical harmonics will be orthonormal provided (as we'll soon see) the standard Y(b,h) are.

Since the area element on the unit sphere is dm^dn.Cos(m)/radian/radian, integration over the unit sphere requires us to scale the integrand by Cos(m)/radian/radian and integrate with n and m varying over the ranges with bounds ±turn/2 and ±turn/4, respectively. We require Y(b,h).*Y(b,h) to have integral 1 over the unit sphere. Since N(n) has constant modulus, the n integration simply contributes a factor of a turn to the over-all integral of Y(b,h) times its conjugate. We thus require (with q = abs(h)):

- 1/K(b,q)/K(b,q)
- = integral(: power(2, Legendre(b, q, Sin(m))). power(1+2.q, Cos(m)) ←m; −turn/4 ≤ m ≤ turn/4 :).turn/radian/radian
- = 2.π.integral(: power(2, Legendre(b, q, s)).power(q, 1−s.s) ←s; −1≤s≤1 :)
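For a concrete instance of this normalization integral – a sympy sketch with the illustrative values b = 3, q = 2, using the q-th derivative of sympy's legendre as Legendre(b, q), up to scaling:

```python
import sympy as sp

s = sp.symbols('s')
b, q = 3, 2                                  # illustrative values only
Pq = sp.diff(sp.legendre(b, s), s, q)        # Legendre(b, q, s), up to scaling
integral = sp.integrate(Pq**2 * (1 - s**2)**q, (s, -1, 1))
K2 = 1 / (2 * sp.pi * integral)              # the resulting K(b,q).K(b,q)
print(integral, K2)
```

Any rescaling of the chosen Legendre(b, q) is simply absorbed into K(b,q), so the product K(b,q).Legendre(b, q, s) is unaffected.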

Note that each Y(b,h) is the M(m).N(n) which, when we decoupled m and n from r, satisfied
S^{2}(M(m).N(n))/M(m)/N(n) = L = b.(1+b), so we have
S^{2}(Y(b,h)) = b.(1+b).Y(b,h), making Y(b,h) an eigenvector of
S^{2} with eigenvalue b.(1+b). Since the z-component of the spin operator S is just Sz =
−i.radian.∂/∂n, we also have Sz(Y(b,h)) = h.Y(b,h) so Y(b,h) is
an eigenvector of Sz with eigenvalue h.

In quantum mechanics, the angular momentum operator is just
**J** = ℏ.S so that J_{z}'s eigenvalue is h.ℏ and
J^{2} = ℏ^{2}.S^{2} has eigenvalue
b.(1+b).ℏ^{2} for the eigenvector Y(b,h).

Now, multiplying one function by the conjugate of another and integrating over the sphere provides a conjugate-symmetric positive definite inner product on the space of complex scalar functions on the sphere and, in particular, on the sub-space of smooth complex scalar fields on the sphere. Call the sphere B and the space of such functions V = {smooth ({complex scalars}:|B)} and define

- (dual(V): g |V) = (: (: integral(: v.*u |B) ←v :V) ←u :V)

to encode our inner product; g is antilinear and each of its outputs is linear. Since g is Hermitian-symmetric, it makes sense to describe a linear map (V:f|V) as g-symmetric precisely if g(f(u),v) = g(u,f(v)) for every u, v in V. I shall now show that the spin operator,

- S = [Sx, Sy, Sz] = [x,y,z]^(−i.∇)

is g-symmetric. Since each of the scalar functions of position on the unit sphere that we use as our co-ordinates is a real function of position, it is readily enough seen that (V: (: u(p).x(p) ←p |B) ←u |V) – and similar for y and z (and, indeed, r, m, n) – must be g-symmetric. Next, consider i.∂/∂x = (V: i.∂u/∂x ←u |V); for given u, v in V we obtain

- g(i.∂u/∂x, v)
- = integral(: −i.v.*(∂u/∂x) :B)
- = integral(: −i.∂(v.*u)/∂x +i.∂v/∂x.*u :B)
- = g(u, i.∂v/∂x) −i.integral(: ∂(v.*u)/∂x :B)

but the final term is the integral of a derivative over the whole
sphere; by Stokes' theorem this is zero. This just leaves us, for arbitrary u,
v in V, with g(i.∂u/∂x, v) = g(u, i.∂v/∂x) from which we may
infer that i.∂/∂x is g-symmetric; as, likewise, must be the other
components of i.∇. It is easy enough to verify that a composite of
g-symmetric linear maps on V must also be g-symmetric and that any linear
combination of them is likewise g-symmetric; from which we may infer that Sz and
S^{2} are indeed g-symmetric. This then implies that their eigenvalues must be
real and their eigenvectors with distinct eigenvalues must be g-orthogonal,
i.e. have g(u,v) = 0.
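The same integration-by-parts argument can be exercised in one dimension, where periodicity in n plays the role of Stokes' theorem; a sympy sketch (radian = 1 units), with two sample smooth periodic functions chosen arbitrarily for illustration:

```python
import sympy as sp

n = sp.symbols('n', real=True)
u = sp.exp(2*sp.I*n)                       # sample smooth periodic functions
v = sp.exp(sp.I*n) * (1 + sp.cos(n))
Sz = lambda f: -sp.I * sp.diff(f, n)       # Sz = -i.d/dn

# g(a)(c) = integral of c times the conjugate of a, over one period
g = lambda a, c: sp.integrate(c * sp.conjugate(a), (n, -sp.pi, sp.pi))
gap = sp.simplify(g(Sz(u), v) - g(u, Sz(v)))
print(gap)    # zero: the boundary term vanishes by periodicity
```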

We may thus assert that each g(Y(b,h),Y(c,e)) is zero unless b = c and h = e. We can easily enough confirm this for the case of distinct h and e; the n-dependence of the integrand is exp(i.(e−h).n/radian) so that integrating over n for each given m yields zero unless e = h.
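The complementary case – distinct b at equal h – can be confirmed for small instances, where the orthogonality rests on the integral over s; a sympy sketch, with the q-th derivative of legendre standing in for Legendre(b, q), up to scaling:

```python
import sympy as sp

s = sp.symbols('s')

def leg(b, q):
    # the q-th derivative of the b-th Legendre polynomial, up to scaling
    return sp.diff(sp.legendre(b, s), s, q)

# for distinct b, c at equal h (with q = abs(h)), orthogonality rests on
# the integral over s that the m-integration reduces to
overlaps = [sp.integrate(leg(b, q)*leg(c, q)*(1 - s**2)**q, (s, -1, 1))
            for b, c, q in [(1, 2, 1), (1, 3, 0), (2, 4, 2)]]
print(overlaps)    # all zero
```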

The space of smooth scalar functions on a sphere – like the space of smooth scalar functions on a linear space – forms a linear space; just as the Fourier transform decomposes any smooth scalar function on a linear space as a superposition of sinusoids (the solutions to the harmonic equation on a linear space) so equally the spherical harmonics can be used to decompose smooth scalar functions on a sphere. Formally we need to establish that the spherical harmonics are linearly independent and span the smooth scalar fields on the sphere: but it will suffice to exhibit linear maps from such scalar fields to scalars, each yielding the field's co-ordinate for a matching harmonic, and show that each smooth scalar field is indeed regenerated by multiplying each co-ordinate by the matching harmonic and summing the result.

We already have an orthogonality property among the spherical harmonics; and
we normalize them so that each is a unit vector for our metric g. Thus the
spherical harmonics form an orthonormal basis for their span, at least; in
particular, we get their linear independence for free; and their span trivially
has infinite dimension. To argue for their span being the whole space of smooth
complex scalar functions on the sphere requires more. The sphere is
two-dimensional and we have two g-symmetric linear maps, Sz and S^{2}; I
have not shown that they commute with one another, or that we cannot expect to
be able to find a third which commutes with both of them, let alone that this
actually suffices to ensure that their joint eigenvectors really do span the
whole space. None the less, these are my grounds for accepting that the
spherical harmonics do form a basis of V.

We can use the orthogonality property to decompose any solution of our original harmonic equation as a superposition of radial functions, each scaled by a spherical harmonic. Multiply the given solution to the harmonic equation by the complex conjugate of a spherical harmonic and integrate over the sphere of each radius; this yields the function r.r.R(r) ←r, from which we can infer the R(r) associated with that spherical harmonic.

Consider now, for given b, sum(: Y(b,h).*Y(b,h) ←h :) as a function over the unit sphere. This trivially has no variation in n, so it remains to consider its m-dependence. Multiplying it by a factor of 2.π (to match the n-integration), for any given n, e.g. zero, it depends on m as

- 2.π.sum(: power(2, K(b,abs(h)).Legendre(b,abs(h),Sin(m)).power(abs(h),Cos(m))) ←h; −b ≤ h ≤ b :)
- = 2.π.power(2, K(b,0).Legendre(b,0,Sin(m))) + 4.π.sum(: power(2, K(b,1+h).Legendre(b,1+h,Sin(m)).power(1+h,Cos(m))) ←h :b)
- = power(2, Legendre(b,0,Sin(m)))/integral(: power(2, Legendre(b,0,s)) ←s; −1≤s≤1 :) + 2.sum(: power(2, Legendre(b,1+h,Sin(m))).power(2.(1+h),Cos(m))/integral(: power(2, Legendre(b, 1+h, s)).power(1+h, 1−s.s) ←s; −1≤s≤1 :) ←h :b)

which we can sensibly express as a function of Sin(m); so define B(b,Sin(m)) = sum(: Y(b,h,m).*Y(b,h,m) ←h :) and obtain

- 2.π.B(b,s)
- = power(2, Legendre(b,0,s))/integral(: power(2, Legendre(b,0,w)) ←w; −1≤w≤1 :) + 2.sum(: power(2, Legendre(b,1+h,s)).power(1+h,1−s.s)/integral(: power(2, Legendre(b,1+h,w)).power(1+h,1−w.w) ←w; −1≤w≤1 :) ←h :b)

TODO: for a given b, look at the sum over h of the squared moduli of Y(b,h); is it uniform on the sphere ?
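A symbolic probe of that question for small b – a sympy sketch, with the q-th derivative of legendre standing in for Legendre(b, q), up to the scaling the integrals absorb – suggests the sum is indeed uniform, coming out as the constant (2.b +1)/2 in each case checked:

```python
import sympy as sp

s, w, t = sp.symbols('s w t')

def leg(b, q, x):
    # q-th derivative of the b-th Legendre polynomial, evaluated at x
    return sp.diff(sp.legendre(b, t), t, q).subs(t, x)

def two_pi_B(b):
    # 2.pi.B(b, s), term by term in q = abs(h), as in the sum above
    total = leg(b, 0, s)**2 / sp.integrate(leg(b, 0, w)**2, (w, -1, 1))
    for q in range(1, b + 1):
        total += 2*leg(b, q, s)**2 * (1 - s**2)**q / sp.integrate(
            leg(b, q, w)**2 * (1 - w**2)**q, (w, -1, 1))
    return sp.expand(total)

for b in range(3):
    print(b, two_pi_B(b))    # constant in s for each b checked
```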

Written by Eddy.