The simple harmonic oscillator is an ideal system in which the measured position varies sinusoidally with time (i.e. proportional to Sin(a.t+b) for some angle b and angular velocity a). It arises when a non-dissipative system has an equilibrium point and, when disturbed from that equilibrium, exerts a force proportional to the amount of disturbance in the direction that would cause movement back towards the equilibrium.
Any system with a stable equilibrium exerts a restoring (i.e. towards equilibrium) force when off-equilibrium. When released from an off-equilibrium position, it thus accelerates towards equilibrium, picking up speed as it approaches the equilibrium position; it is therefore moving when it gets there, so it over-shoots and is duly decelerated by the restoring force on the other side of equilibrium, until it comes to rest off-equilibrium and begins once again to accelerate back towards it. It thus oscillates back and forth; as long as the system is non-dissipative, it never settles down at the equilibrium. In the case of a simple linear restoring force, however, the variation of displacement with time takes a particularly simple form, in which the period of the variation (the time it takes to get back to where it was, repeating the same movement) doesn't depend on its scale.
The simple linear restoring force happens to be relevant because it is in fact a good approximation in a very great many cases. A stable equilibrium is a state of locally minimal stored energy: so any disturbance away from this state causes an increase in stored energy. As long as the stored energy E varies smoothly with the amount x of disturbance, Taylor's theorem tells us to expect the variation to be well approximated by E(x) = E(0) +E'(0).x +E''(0).x.x/2 plus terms in higher powers of x, which we can usually ignore, at least for small x. However, for x = 0 to be an equilibrium, E'(0) must be zero (else E'(0).x would be negative for x on one side or another of 0, making E(x) less than E(0) there). We thus have E(x) = E(0) +E''(0).x.x/2; furthermore, for the equilibrium to be stable, E must be at a minimum at x = 0, so E''(0) must be positive. The force the system exerts, when so disturbed, is −dE/dx = −E''(0).x, so varies linearly with displacement. Thus the simple linear restoring force is a half-way decent approximation to many systems near a stable equilibrium, so reasonable approximations to the simple harmonic oscillator occur frequently in reality. The displacement of a weight hanging from a spring (or the angular displacement of a watch's balance wheel, suspended on a spring) is, thanks to Hooke's law, a good illustration of an actual simple harmonic oscillator.
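To see that quadratic approximation concretely, here is a small Python (sympy) sketch; the pendulum-style potential E(x) = E0 +c.(1 −cos(x)), with its stable equilibrium at x = 0, is just an assumed example, not anything drawn from the discussion above.

    import sympy as sp

    x = sp.symbols('x', real=True)
    E0, c = sp.symbols('E0 c', positive=True)
    E = E0 + c * (1 - sp.cos(x))    # an assumed stored-energy function, minimal at x = 0

    # Taylor expansion about the equilibrium: no linear term, positive quadratic term.
    print(sp.series(E, x, 0, 4).removeO())              # E0 + c*x**2/2
    # The restoring force -dE/dx is linear in x to leading order.
    print(sp.series(-sp.diff(E, x), x, 0, 3).removeO()) # -c*x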
We have a system in which we measure displacement from equilibrium by some parameter x and the restoring force grows proportional to x; dividing this by the inertia of the system, we obtain its acceleration back towards equilibrium, −ddx/dt/dt, as some positive constant times x; since that constant is positive (and for convenience below), we can express it as the square of some constant, k (and we can presume k positive; it has the dimensions of an inverse time). We thus obtain ddx/dt/dt +k.k.x = 0 as the archetypical dynamical equation for the simple linear oscillator; k.k is E''(0) divided by the system's inertia; if that inertia is m, the potential energy is E(0) +m.k.k.x.x/2.
Following a standard analytical technique, consider solutions of form x(t) = X.exp(q.t) for some constants X and q, yielding 0 = k.k.X.exp(q.t) +q.q.X.exp(q.t) = (k.k +q.q).X.exp(q.t). This is solved by q = ±i.k, where i is an imaginary unit, i.e. the square root of −1. The equation is linear, so X is unconstrained. We thus obtain solutions exp(±i.k.t) and any sum of such solutions, however scaled, will be a solution. Since exp(±i.z) = cos(z) ±i.sin(z), two such sums are sin(k.t) and cos(k.t), from which we can readily recover exp(±i.k.t), so they serve as an alternate basis, with the convenient property that they are purely real (i.e. we avoid complex numbers). Any real combination of sin(k.t) and cos(k.t) can be written as X.sin(k.t +a) for some constants X and a, so our real solutions are all sinusoids. (Here sin(s) = Sin(s.radian); sin is a function from reals to reals, while ({reals}: Sin :{angles}) is the essential trigonometric function.)
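As a quick symbolic cross-check (a sketch using sympy, assuming nothing beyond the equation itself), the general solution of ddx/dt/dt +k.k.x = 0 does come out as a combination of sin(k.t) and cos(k.t), and the complex-exponential ansatz satisfies the equation:

    import sympy as sp

    t = sp.symbols('t', real=True)
    k = sp.symbols('k', positive=True)
    x = sp.Function('x')

    ode = sp.Eq(sp.diff(x(t), t, 2) + k**2 * x(t), 0)
    print(sp.dsolve(ode, x(t)))    # x(t) = C1*sin(k*t) + C2*cos(k*t), in whichever form sympy picks

    trial = sp.exp(sp.I * k * t)   # the exp(q.t) ansatz with q = i.k
    print(sp.simplify(sp.diff(trial, t, 2) + k**2 * trial))    # 0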
The amplitude, X, determines the total energy of the solution; the lowest possible energy is E(0), arising when X = 0; the surplus above that is simply (k.X).(k.X).m/2 when the system's inertia is m. Independent of X, the period of the oscillation is 2.π/k.
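A short numerical sketch of the same facts (the values of k, m and the amplitudes below are made up for illustration): integrating ddx/dt/dt +k.k.x = 0 gives an observed period of 2.π/k whatever the amplitude, and the conserved energy surplus above E(0) matches (k.X).(k.X).m/2.

    import math

    def simulate(k, m, X, dt=1e-4):
        """Integrate x'' = -k*k*x from x = X, v = 0 (semi-implicit Euler);
        return (observed period, energy above the minimum)."""
        x, v, t = X, 0.0, 0.0
        crossings = []
        while len(crossings) < 2:
            v += -k * k * x * dt
            x_new = x + v * dt
            t += dt
            if x < 0.0 <= x_new and v > 0.0:    # upward zero-crossing of x
                crossings.append(t)
            x = x_new
        energy = 0.5 * m * v * v + 0.5 * m * k * k * x * x
        return crossings[1] - crossings[0], energy

    k, m = 3.0, 2.0
    print("expected period:", 2 * math.pi / k)
    for X in (0.5, 1.0, 2.0):
        period, energy = simulate(k, m, X)
        print(X, period, energy, 0.5 * m * (k * X) ** 2)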
In Schrödinger's equation for the wave function ψ of a particle of mass m in a potential V,
i.ℏ.∂ψ/∂t = V.ψ −ℏ.ℏ.∇.∇ψ/2/m,
we must substitute V = E(x) = E(0) +E''(0).x.x/2 = E(0) +k.k.m.x.x/2, and replace ∇ with ∂/∂x – for now, this is a simple one-dimensional system. Following a standard technique, consider solutions of form ψ(t,x) = T(t).X(x) for some functions T, X; substituting this in the above and dividing through by ψ/2/m, we obtain
2.i.ℏ.m.(dT/dt)/T = 2.m.E(0) +k.k.m.m.x.x −ℏ.ℏ.X''/X
in which the left side depends on t but not on x and the right side depends on x but not on t; hence both sides must in fact be constant. Let that constant (dimensionally a squared momentum) be 2.m.Q for some energy Q. The left-hand side immediately gives T(t) as exp(−i.Q.t/ℏ) times some constant, while the right-hand side yields
ℏ.ℏ.X''(x) = (2.m.(E(0) −Q) +k.k.m.m.x.x).X(x)
X'' is a quadratic times X; this is exactly what would arise from X including a factor of an exponential of a quadratic; and we expect our solution to be either symmetric or anti-symmetric about x = 0, so we need the quadratic to be symmetric about zero, too, meaning it's simply some scaling times the square of its input. We'll also need X to be normalisable, so it must die away to zero at large input, making the scaling of the square negative. So consider solutions of form X(x) = P(x).exp(−x.x/2/s) for some constant s; these yield X'(x) = −x.X(x)/s +P'(x).exp(−x.x/2/s) and
X''(x) = (x.x/s −1).X(x)/s +(P''(x) −2.x.P'(x)/s).exp(−x.x/2/s)
which matches if
ℏ.ℏ.((x.x/s −1).P(x)/s −2.x.P'(x)/s +P''(x)) = (2.m.(E(0) −Q) +k.k.m.m.x.x).P(x)
which has the right form for polynomials to be solutions, so let's look for solutions in which P is a sum of scaled powers. Re-arranging to group the highest-order terms together, we get:
(ℏ.ℏ/s/s −k.k.m.m).x.x.P(x) +ℏ.ℏ.P''(x) = (2.m.(E(0) −Q) +ℏ.ℏ/s).P(x) +2.ℏ.ℏ.x.P'(x)/s
Since the left-hand side has terms of higher order than the right, its coefficient of leading order must be zero, whence ℏ/s = ±k.m or s = ±ℏ/k/m; the requirement that X(x) be normalisable (that is, we can choose to so scale it as to make integral(: ψ.*ψ :) over the full range of x come out to 1, for all t) forces s to be positive, so we get s = ℏ/k/m and ℏ.ℏ/s = ℏ.k.m. This leaves us with
ℏ.ℏ.P''(x) = (2.m.(E(0) −Q) +ℏ.k.m).P(x) +2.ℏ.k.m.x.P'(x)
Now, s has the dimensions of an area, so let's use u = x/√s = x.√(k.m/ℏ) in preference to x, with P(x) = sum(: power(n, u).p(n) ←n :) for some list p of coefficients, which we scale by exp(−u.u/2) to get X(x). Then P'(x) = sum(: n.p(n).power(n−1, u) ←n :)/√s, P''(x) = sum(: n.(n−1).p(n).power(n−2, u) ←n :)/s, and we obtain:
ℏ.k.m.sum(: n.(n−1).p(n).power(n−2, u) ←n :) = (2.m.(E(0) −Q) +ℏ.k.m).sum(: p(n).power(n, u) ←n :) +2.ℏ.k.m.sum(: n.p(n).power(n, u) ←n :)
Divide through by ℏ.k and take e = (Q −E(0))/ℏ/k −1/2; since each power must have the same coefficient on each side, we obtain
(n+2).(n+1).p(n+2)/2 = (n −e).p(n)
for each natural n. Were there a non-negative n, either greater than e or not differing from e by an even integer, with p(n) non-zero, we'd get p(n+2.i) non-zero for every natural i, diminishing for large i as 1/i!, for a power series whose sum grows like P(x) ≈ exp(x.x/s) for large x, so that X(x) = P(x).exp(−x.x/s/2) grows without bound, as exp(x.x/s/2), for large x, making it impossible to normalise. (Indeed, this is equivalent to taking the minus sign when we had s = ±ℏ/k/m above.)
If any negative n, either less than e+2 or differing from e by anything but an even integer, were to have p(n) non-zero, this would likewise imply p(n−2.i) non-zero for every natural i and growing by a factor of order n/2−i for each i, so ultimately like i!, for a power series that won't converge for u smaller than 1, i.e. for x smaller than √(ℏ/k/m). If 0 > e ≥ −2, this precludes any solutions. If e < −2 then n in {−1, −2} implies 0 = (n+2).(n+1).p(n+2)/2 = (n−e).p(n) with n−e non-zero, hence p(n) = 0; so p(−1) = 0 = p(−2) and every negative integer n>e has p(n) = 0; at the same time, if any non-integral n≥e+2 has p(n) non-zero, then some positive n > e also has p(n) non-zero and we can't normalise. Thus no solutions are possible for e<0 and we have p(n) = 0 for all n<0, as this implies n<e; whence (n+2).(n+1).p(n+2) = 0 for all n<0, so the only i<2 for which p(i) can be non-zero are i in {1, 0}. This, taken with all n with p(n) non-zero having to be e−2.i for some natural i, implies that ({non-zero}:p|) can only be non-empty if e is natural and ({non-zero}: p |{e−2.i: i in naturals, 2.i≤e}), making P a polynomial (that is, our sum of scaled powers includes no negative powers and only finitely many positive powers). This also tells us that our system's energy levels are given by Q = E(0) +(e +1/2).ℏ.k for some natural e.
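Here is a short Python sketch of that recurrence at work (the choices of e and of the starting power are illustrative): when e is a natural of the matching parity the coefficients terminate, so P is a polynomial; otherwise the non-zero coefficients go on for ever.

    from fractions import Fraction

    def coefficients(e, start, terms=8):
        """Iterate (n+2).(n+1).p(n+2)/2 = (n-e).p(n) upwards from p(start) = 1."""
        p = {start: Fraction(1)}
        n = start
        for _ in range(terms):
            p[n + 2] = 2 * (n - e) * p[n] / ((n + 2) * (n + 1))
            n += 2
        return p

    print(coefficients(4, 0))               # terminates: p(6) and beyond are all zero
    print(coefficients(3, 1))               # terminates: p(5) and beyond are all zero
    print(coefficients(Fraction(7, 2), 0))  # non-natural e: never terminates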
The first few polynomials satisfying the constraints are, when all common factors of coefficients are eliminated:
1, u, 2.u.u −1, 2.u.u.u −3.u, 4.u.u.u.u −12.u.u +3, … (for e = 0, 1, 2, 3, 4 respectively).
We can scale those any way we like, but let's use these scalings to begin with; we can revise them later. Let h(n) be the polynomial, in this family, of rank n; later, we'll apply a scaling to each. Since each has rank one higher than the previous, let's subtract each from a multiple of power(1) times the previous, choosing a multiplier that eliminates the term of highest rank.
So, with H(i) being a suitably rescaled version of h(i), we can arrange that H(i+1) = 2.H(i).power(1) −H(i)', which suggests there may be some function G for which −H(i+1).G is the derivative of H(i).G, i.e. G'.H(i) +G.H(i)'; that requires G' = −2.power(1).G which has an obvious solution: G(u) = exp(−u.u), the square of the exp(−x.x/2/s) by which we're scaling P. So we find H(i+1, u) = −exp(u.u).d(H(i).exp(−u.u))/du and, inductively, with G = (: exp(−u.u) ←u :),
H(i) = repeat(i, (: −f' ←f :), G)/G.
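These two descriptions of H can be checked against each other with a few lines of sympy; the following is just a sketch of that comparison, generating each H(i) both from the recurrence H(i+1) = 2.u.H(i) −H(i)' and by applying (: −f' ←f :) to G i times, then dividing by G.

    import sympy as sp

    u = sp.symbols('u')
    G = sp.exp(-u * u)

    def H_recurrence(i):
        """H(i) from H(j+1) = 2.u.H(j) - H(j)', starting at H(0) = 1."""
        h = sp.Integer(1)
        for _ in range(i):
            h = sp.expand(2 * u * h - sp.diff(h, u))
        return h

    def H_derivative(i):
        """H(i) as the i-fold application of f -> -f' to G, divided by G."""
        f = G
        for _ in range(i):
            f = -sp.diff(f, u)
        return sp.expand(sp.simplify(f / G))

    for i in range(6):
        assert sp.expand(H_recurrence(i) - H_derivative(i)) == 0
        print(i, H_recurrence(i))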
We now have X(u.√(ℏ/k/m)) = H(n, u).√G(u) as the spatial variation of our wave-function for a state with energy E(0) +(n +1/2).ℏ.k. When we come to normalise ψ, the integral(: ψ.*ψ :) is now (thanks to T being a pure phase times some constant) just some constant times integral(: X(x).*X(x) ←x :) = integral(: H(n, u).G(u).H(n, u) ←u :).√(ℏ/k/m). Since all powers in H(n) are power(n −2.i) for various i, either all are even or all are odd; either way, every power in H(n).H(n) is even. So determining our normalisation just involves integrating powers of u.u scaled by G(u) = exp(−u.u).
Each term in H(e).G.H(e) is of form (: power(n, u.u).exp(−u.u) ←u :) for some natural n, with some over-all scaling. When we come to integrate this, we can substitute v = u.u, du = dv/2/√v to get:
integral(: power(n, u.u).exp(−u.u) ←u :) = (2.n)!.(√π)/n!/power(n, 4)
so it suffices to compute H(e).H(e), multiply its coefficient of power(2.n, u) by this, for each natural n up to e, and sum over n.
Better yet, since iterating H(i +1, x) = 2.x.H(i, x) −H'(i, x) from H(0) = 1 adds a factor of 2 wherever it adds a factor of x, each H(i, x) is actually a polynomial in 2.x, so its square is a polynomial in 4.x.x, not just in x.x. This, in turn, means each coefficient of power(2.n) in H(e).H(e) is in fact a multiple of power(n, 4), so each term in our sum is a whole number multiple of (√π)/2. So moving the (√π)/2 factor to multiply the √(ℏ/k/m) factor from our change of coordinates, this leaves purely integer arithmetic in the normalisation computation.
Naturally, I have implemented all of this in Python, which enables me to experiment. With H(i) = repeat(i, (: −f' ←f :), G)/G, I find that every coefficient of H(2.i −1) or of H(2.i) has (at least) i factors of 2; and, even after dividing out those factors of 2, that H(2.i, x) is in fact a polynomial in 2.x.x with integer coefficients and H(2.i +1, x) is x times such a polynomial. Furthermore, computing the above normalisation, with each power(2.n) term in H(e).H(e) mapped to (2.n)!/n!/power(n, 4), I find that the integral of H(i).G.H(i) is just i!.power(i, 2) (in units of √π). So the normalisation of H(i) just divides it by the square root of this. Thus X = (H(i).√G)&on;(: x.√(k.m/ℏ) ←x :)/√(i!.power(i, 2).√(π.ℏ/k/m)) is our typical spatial solution.
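For anyone wanting to see the shape of that computation, here is a minimal stand-alone sketch (not the implementation referred to above, just an illustration of the same idea): build H(i) from the recurrence, expand H(i).H(i), replace each power(2.n, u) by (2.n)!/n!/power(n, 4), which is the integral of power(n, u.u).exp(−u.u) in units of √π, and confirm the total is i!.power(i, 2).

    import math
    from fractions import Fraction

    def hermite(i):
        """Coefficient list of H(i), lowest power first, from H(j+1) = 2.u.H(j) - H(j)'."""
        h = [1]
        for _ in range(i):
            shifted = [0] + [2 * c for c in h]              # 2.u.H(j)
            deriv = [n * c for n, c in enumerate(h)][1:]    # H(j)'
            deriv += [0] * (len(shifted) - len(deriv))
            h = [a - b for a, b in zip(shifted, deriv)]
        return h

    def integral_over_sqrt_pi(i):
        """integral(: H(i, u).exp(-u.u).H(i, u) <-u :), in units of sqrt(pi)."""
        h = hermite(i)
        square = [0] * (2 * len(h) - 1)
        for a, ca in enumerate(h):
            for b, cb in enumerate(h):
                square[a + b] += ca * cb
        return sum(square[2 * n] * Fraction(math.factorial(2 * n),
                                            math.factorial(n) * 4 ** n)
                   for n in range(len(square) // 2 + 1))

    for i in range(8):
        assert integral_over_sqrt_pi(i) == math.factorial(i) * 2 ** i
        print(i, integral_over_sqrt_pi(i))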
The solutions have energies E(0) +(e +1/2).ℏ.k with e natural, where 2.π/k is the period of the classical solution, so ℏ.k is h.f where f is the frequency of the classical solution. The time-dependency of our solution to the field equation is given by our earlier T(t) = exp(−i.Q.t/ℏ) with Q/ℏ = E(0)/ℏ +(e +1/2).k; as this is just a phase, it makes no difference to physical observables or normalisation.
Note that, although E(0) is the nominal zero-point of energy, the solution with lowest energy has ℏ.k/2 more energy than this; and all other solutions have more energy yet by even multiples of this. The oscillator is thus incapable of coming to rest; after all, if it did, it'd have definite position and momentum, violating Heisenberg's uncertainty principle.
Next, consider the case of a multi-dimensional harmonic oscillator. In the simplest case, the restoring force points straight back towards the central position and is proportional to the displacement from centre, with the same constant of proportionality regardless of direction. The potential is thus just E(0) +k.k.g(x, x).m/2, with g being the (positive-definite) metric of our spatial co-ordinates and x now being a position vector. The position co-ordinate in any given direction then behaves as a one-dimensional simple harmonic oscillator, as above, whether classical or quantum.
More generally, allowing that the system may resist disturbance from equilibrium more in some directions than others, we can replace k.k.g(x, x) with a general (positive-definite – otherwise the equilibrium point won't be stable) quadratic form. By judicious choice of co-ordinates, this can be reduced to diagonal form (with respect to a g-orthonormal basis), in effect giving each of the chosen co-ordinates its own value of k. In the simple case, any orthonormal system of co-ordinates will do and all have the same value of k; in this more general case, one must let the quadratic form pick the co-ordinates, each with its own value for k. (There is some freedom to choose co-ordinates whenever two or more co-ordinates share a common value of k.)
In any case, there is some orthonormal basis (:b|dim) of our position-vector space, of dimension dim, with dual g&on;b as basis of co-ordinates (thanks to b being orthonormal), and some list (:k|dim) of scalars such that, with q = k.b = (: k(i).b(i) ←i |dim), our quadratic form is simply sum(q×q). Each of the co-ordinates involved in diagonalising the quadratic form is g(b(i), x) for some i in dim; scaling this by k(i), squaring and summing over i in dim gives us the value of our quadratic form at position x.
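As a concrete sketch of that diagonalisation (the matrix A below is an assumed example, chosen symmetric and positive-definite, not anything from the text): numpy's eigendecomposition of the quadratic form's matrix, expressed in ordinary orthonormal co-ordinates, yields the basis b as its eigenvectors and each k(i) as the square root of the matching eigenvalue.

    import numpy as np

    A = np.array([[5.0, 1.0, 0.0],     # an illustrative positive-definite form
                  [1.0, 4.0, 1.0],     # (the potential would be E(0) + m.x.A.x/2)
                  [0.0, 1.0, 3.0]])

    eigenvalues, eigenvectors = np.linalg.eigh(A)   # A is symmetric, so eigh applies
    k = np.sqrt(eigenvalues)                        # one k(i) per principal direction
    b = eigenvectors.T                              # rows are the orthonormal basis b(i)
    print("k:", k)

    # Check: the form is recovered as sum(: k(i).k(i).g(b(i), x).g(b(i), x) <-i :)
    x = np.array([0.3, -1.2, 0.7])
    assert np.isclose(x @ A @ x, sum(ki**2 * (bi @ x)**2 for ki, bi in zip(k, b)))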
In the classical case, each co-ordinate is then X(i).sin(k(i).t +a(i)) for the given scaling k(i) and arbitrary amplitude X(i) and phase a(i), so we obtain x = sum(: X(i).b(i).sin(k(i).t +a(i)) ←i |dim) as general solution, for arbitrary ({reals}:X|dim) and ({reals}:a|dim).
For the quantum solution, Schrödinger's equation has solutions of form ψ(t, x) = T(t).product(: X(i, g(b(i), x)) ←i |dim), with the g(b(i), x) being orthonormal co-ordinates, so independent variables. Substituting this and scaling by 2.m/ψ, we obtain:
2.i.ℏ.m.(dT/dt)/T = 2.m.E(0) +sum(: m.m.k(i).k(i).g(b(i), x).g(b(i), x) −ℏ.ℏ.X''(i, g(b(i), x))/X(i, g(b(i), x)) ←i |dim)
(writing X''(i, w) for the second derivative of X(i, w) with respect to w)
in which the left side depends only on t, 2.m.E(0) depends on nothing and each term in the sum over i depends only on g(b(i), x), not any of the other co-ordinates of x. Consequently T has the same solution as before and each term in the sum has the same form as the single X we had in the one-dimensional case. We thus obtain solutions with energies E(0) +ℏ.sum(: (e(i) +1/2).k(i) ←i |dim) for arbitrary ({naturals}: e |dim) and the ground state has energy E(0) +ℏ.sum(k)/2.
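As a final sketch (with ℏ set to 1, E(0) to 0 and an assumed list k whose first two entries coincide), the spectrum and its degeneracies can be enumerated directly from E(0) +ℏ.sum(: (e(i) +1/2).k(i) ←i |dim) by running over lists e of naturals:

    from collections import Counter
    from itertools import product

    hbar = 1.0               # work in units where hbar = 1 and E(0) = 0
    k = (1.0, 1.0, 2.0)      # an assumed list (:k|3); two directions share a k

    levels = Counter()
    for e in product(range(6), repeat=len(k)):      # each e(i) runs over naturals (truncated at 6)
        energy = hbar * sum((ei + 0.5) * ki for ei, ki in zip(e, k))
        levels[round(energy, 9)] += 1

    for energy in sorted(levels)[:6]:               # the lowest few levels and their degeneracies
        print(energy, levels[energy])               # ground state at hbar.sum(k)/2 = 2.0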
Written by Eddy.