The simple harmonic oscillator is an ideal system in which the measured position varies sinusoidally with time (i.e. proportional to Sin(a.t+b) for some angle b and angular velocity a). It arises when a non-dissipative system has an equilibrium point and, when disturbed from that equilibrium, exerts a force proportional to the amount of disturbance in the direction that would cause movement back towards the equilibrium.
Any system with a stable equilibrium exerts a restoring (i.e. towards equilibrium) force when off-equilibrium. When released from an off-equilibrium position, it thus accelerates towards equilibrium, picking up speed as it approaches the equilibrium position; it is therefore moving when it gets there, so it over-shoots and is duly decelerated by the restoring force on the other side of equilibrium, until it comes to rest off-equilibrium and begins once again to accelerate back towards it. It thus oscillates back and forth; as long as the system is non-dissipative, it never settles down at the equilibrium. In the case of a simple linear restoring force, however, the variation of displacement with time follows a particularly simple form, in which the period of the variation (the time it takes to get back to where it was, repeating the same movement) doesn't depend on its scale.
The simple linear restoring force happens to be relevant because it is in fact a good approximation in a very great many cases. A stable equilibrium is a state of locally minimal stored energy: so any disturbance away from this state causes an increase in stored energy. As long as the stored energy E varies smoothly with the amount x of disturbance, Taylor's theorem tells us to expect the variation to be well approximated by E(x) = E(0) +E'(0).x +E''(0).x.x/2 plus terms in higher powers of x, which we can usually ignore, at least for small x. However, for x = 0 to be an equilibrium, E'(0) must be zero (else E'(0).x would be negative for x on one side or the other of 0, making E(x) less than E(0) there). We thus have E(x) = E(0) +E''(0).x.x/2; furthermore, for the equilibrium to be stable, E must be at a minimum at x = 0, so E''(0) must be positive. The force the system exerts, when so disturbed, is −dE/dx = −E''(0).x, so varies linearly with displacement. Thus the simple linear restoring force is a half-way decent approximation to many systems near a stable equilibrium, so reasonable approximations to the simple harmonic oscillator occur frequently in reality. The displacement of a weight hanging from a spring (or the angular displacement of a watch's balance wheel, suspended on a spring) is, thanks to Hooke's law, a good illustration of an actual simple harmonic oscillator.
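To see the quadratic approximation at work, here's a small Python sketch (my own illustration, not from the text: it uses sympy, with a pendulum's stored energy E(x) = m.g.L.(1 −cos(x/L)) as a made-up example):

    import sympy as sp

    x, m, g, L = sp.symbols('x m g L', positive=True)
    E = m * g * L * (1 - sp.cos(x / L))  # stored energy of a pendulum; E(0) = 0

    # Taylor's theorem about the equilibrium x = 0: E'(0) vanishes and the
    # leading variation is E''(0).x.x/2, here m.g/L times x squared over 2.
    print(sp.diff(E, x).subs(x, 0))         # 0: so x = 0 is an equilibrium
    print(sp.series(E, x, 0, 4).removeO())  # g*m*x**2/(2*L): the quadratic term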
We have a system in which we measure displacement from equilibrium by some parameter x and the restoring force grows in proportion to x; dividing this by the inertia of the system, we obtain its acceleration back towards equilibrium, −ddx/dt/dt, as some positive constant times x; since that constant is positive (and for convenience below), we can express it as the square of some constant, k (and we can presume k positive; it has the dimensions of an inverse time). We thus obtain ddx/dt/dt +k.k.x = 0 as the archetypical dynamical equation for the simple linear oscillator; k.k is E''(0) divided by the system's inertia; if that inertia is m, the potential energy is E(0) +m.k.k.x.x/2.
Following a standard analytical technique, consider solutions of form x(t) = X.exp(q.t) for some constants X and q, yielding 0 = k.k.X.exp(q.t) +q.q.X.exp(q.t) = (k.k +q.q).X.exp(q.t). This is solved by q = ±i.k, where i is an imaginary unit, i.e. the square root of −1. The equation is linear, so X is unconstrained. We thus obtain solutions exp(±i.k.t) and any sum of such solutions, however scaled, will be a solution. Since exp(±i.z) = cos(z) ±i.sin(z), two such sums are sin(k.t) and cos(k.t), from which we can readily recover exp(±i.k.t), so they serve as an alternate basis, with the convenient property that they are purely real (i.e. we avoid complex numbers). Any real combination of sin(k.t) and cos(k.t) can be written as X.sin(k.t +a) for some constants X and a, so our real solutions are all sinusoids. (Here sin(s) = Sin(s.radian); sin is a function from reals to reals, while ({reals}: Sin :{angles}) is the essential trigonometric function.)
The amplitude, X, determines the total energy of the solution; the lowest possible energy is E(0), arising when X = 0; the surplus above that is simply (k.X).(k.X).m/2 when the system's inertia is m. Independent of X, the period of the oscillation is 2.π/k.
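As a quick sanity check that the period really is independent of amplitude, here's a small numerical sketch (mine, not the text's; it assumes NumPy and SciPy are available, and the value k = 2.0 is an arbitrary choice):

    import numpy as np
    from scipy.integrate import solve_ivp

    k = 2.0  # angular frequency; the period should be 2*pi/k for every amplitude

    def rhs(t, y):
        # y = (x, dx/dt); the oscillator equation ddx/dt/dt + k.k.x = 0
        return [y[1], -k * k * y[0]]

    for X in (0.1, 1.0, 10.0):  # widely differing amplitudes
        sol = solve_ivp(rhs, (0.0, 10.0), [X, 0.0], dense_output=True,
                        rtol=1e-10, atol=1e-12)
        t = np.linspace(0.0, 10.0, 100001)
        x = sol.sol(t)[0]
        # estimate the period from successive downward zero-crossings of x
        crossings = t[1:][(x[:-1] > 0) & (x[1:] <= 0)]
        print(X, np.mean(np.diff(crossings)), 2 * np.pi / k)  # all agree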
In Schrödinger's equation for the wave function ψ of a particle of mass m in a potential V,

i.ℏ.∂ψ/∂t = V.ψ −ℏ.ℏ.∇.∇(ψ)/2/m

we must substitute V = E(x) = E(0) +E''(0).x.x/2 = E(0) +k.k.m.x.x/2, and replace ∇ with ∂/∂x (for now, this is a simple one-dimensional system). Following a standard technique, consider solutions of form ψ(t,x) = T(t).X(x) for some functions T, X; substituting this in the above and dividing through by ψ/2/m, we obtain

2.m.i.ℏ.T'(t)/T(t) = 2.m.E(0) +k.k.m.m.x.x −ℏ.ℏ.X''(x)/X(x)

in which the left side depends on t but not on x and the right side depends on x but not on t; hence both sides must in fact be constant. Let that constant (dimensionally a squared momentum) be 2.m.Q for some energy Q. The left-hand side immediately gives T(t) as exp(−i.Q.t/ℏ) times some constant, while the right-hand side yields

ℏ.ℏ.X''(x) = (2.m.(E(0) −Q) +k.k.m.m.x.x).X(x)
X'' is a quadratic times X; this is exactly what would arise from X including a factor of an exponential of a quadratic; and we expect our solution to be either symmetric or anti-symmetric about x = 0. So consider solutions of form X(x) = P(x).exp(−x.x/2/s) for some constant s; these yield X'(x) = −x.X(x)/s +P'(x).exp(−x.x/2/s) and

X''(x) = (P''(x) −2.x.P'(x)/s −P(x)/s +x.x.P(x)/s/s).exp(−x.x/2/s)

which matches if

ℏ.ℏ.(P''(x) −2.x.P'(x)/s −P(x)/s +x.x.P(x)/s/s) = (2.m.(E(0) −Q) +k.k.m.m.x.x).P(x)

which has the right form for polynomials to be solutions for P. Re-arranging to group the highest-order terms together, we get:

(ℏ.ℏ/s/s −k.k.m.m).x.x.P(x) +ℏ.ℏ.(P''(x) −2.x.P'(x)/s) = (2.m.(E(0) −Q) +ℏ.ℏ/s).P(x)

Since the left-hand side has terms of higher order than the right, its coefficient of leading order must be zero, whence ℏ/s = ±k.m or s = ±ℏ/k/m; the requirement that X(x) be normalisable (that is, we can choose to so scale it as to make integral(: ψ.*ψ :) over the full range of x come out to 1, for all t) forces s to be positive, so we get s = ℏ/k/m. This leaves us with

ℏ.ℏ.(P''(x) −2.x.P'(x)/s) = (2.m.(E(0) −Q) +ℏ.k.m).P(x)

Now, s has the dimensions of an area, so let's use u = x/√s = x.√(k.m/ℏ) in preference to x, with P(x) = sum(: power(n, u).p(n) ←n :) for some list p of co-efficients. Then P'(x) = sum(: n.p(n).power(n−1, u) ←n :)/√s, P''(x) = sum(: n.(n−1).p(n).power(n−2, u) ←n :)/s, and we obtain:

sum(: n.(n−1).p(n).power(n−2, u) ←n :) −2.sum(: n.p(n).power(n, u) ←n :) = (2.(E(0) −Q)/ℏ/k +1).sum(: p(n).power(n, u) ←n :)

Whence, equating powers and taking e = (Q −E(0))/ℏ/k −1/2, we obtain

(n+2).(n+1).p(n+2)/2 = (n −e).p(n)
for each natural n. Were there a non-negative n, either greater than e or not differing from e by an even integer, with p(n) non-zero, we'd get p(n+2.i) non-zero for every natural i, diminishing for large i as 1/i!, for a power series whose sum grows like P(x) ≈ exp(x.x/s) for large x, so that X(x) = P(x).exp(−x.x/s/2) grows without bound, as exp(x.x/s/2), for large x, making it impossible to normalise. If any negative n, either less than e+2 or differing from e by anything but an even integer, were to have p(n) non-zero, this would likewise imply p(n−2.i) non-zero for every natural i and growing by a factor of order n/2−i for each i, so ultimately like i!, for a power series that won't converge for u smaller than 1, i.e. for x smaller than √(ℏ/k/m). If 0 > e ≥ −2, this precludes any solutions. If e < −2 then n in {−1, −2} implies 0 = (n+2).(n+1).p(n+2)/2 = (n−e).p(n) with n−e non-zero, hence p(n) = 0; so p(−1) = 0 = p(−2) and every negative integer n>e has p(n) = 0; at the same time, if any non-integral n≥e+2 has p(n) non-zero, then some positive n > e also has p(n) non-zero and we can't normalise. Thus no solutions are possible for e<0 and we have p(n) = 0 for all n<0, as this implies n<e; whence (n+2).(n+1).p(n+2) = 0 for all n<0, so the only i<2 for which p(i) can be non-zero are i in {1, 0}. This, taken with all n with p(n) non-zero having to be e−2.i for some natural i, implies that ({non-zero}:p|) can only be non-empty if e is natural and ({non-zero}: p |{e−2.i: i in naturals, 2.i≤e}).
The first few polynomials satisfying the constraints are, up to an over-all constant scaling each:

e = 0: 1
e = 1: u
e = 2: 2.u.u −1
e = 3: 2.u.u.u −3.u
e = 4: 4.power(4, u) −12.u.u +3
e = 5: 4.power(5, u) −20.power(3, u) +15.u
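These fall straight out of the recurrence; here's a small Python sketch of my own that generates them (the function name is mine, and I seed the scaling so the leading coefficient is a power of 2, anticipating the normalisation chosen below):

    from fractions import Fraction

    def hermite_coefficients(e):
        """Coefficients p(n), n = 0..e, of the order-e polynomial solving
        (n+2).(n+1).p(n+2)/2 = (n-e).p(n), scaled so the leading
        coefficient of H(2.n) and H(2.n+1) is 2**n."""
        p = [Fraction(0)] * (e + 1)
        p[e % 2] = Fraction(1)            # seed p(0) for even e, p(1) for odd e
        for n in range(e % 2, e - 1, 2):  # march the recurrence up to order e
            p[n + 2] = Fraction(2 * (n - e), (n + 2) * (n + 1)) * p[n]
        scale = Fraction(2) ** (e // 2) / p[e]
        return [c * scale for c in p]

    for e in range(6):
        print(e, hermite_coefficients(e))  # matches the list above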
Let H(n) be the polynomial, in this family, of order n, subject to an over-all constant scaling to be specified below. Then our solution has energy Q = E(0) +(e +1/2).ℏ.k for some natural e, X(x) = H(e, x.√(k.m/ℏ)).exp(−x.x.k.m/ℏ/2). This shall contribute a factor integral(: X.*X :{reals}) to the integral(: ψ.*ψ); we can let T(t) carry a scaling, but should choose a scaling for X that at least makes it easy for T's scaling to normalise ψ. Writing (again) u = x.√(k.m/ℏ), we have X(x) = H(e, u).exp(−u.u/2); if we arrange for the square of this, integrated with respect to u, to have value 1, then integral(: X.*X :) shall be √(ℏ/k/m), which we can leave T's scaling to deal with. For e odd, all non-zero terms in H(e) have odd order, so every term in H(e).H(e) is a product of odd-order terms, hence has even order; for e even, every non-zero term in H(e) has even order, hence the same is true in H(e).H(e). Thus H(e).H(e) has only even-order terms, for each natural e. We scale this by exp(−u.u) and integrate over {reals} to get an integral that we want to be 1, so must adjust H(e)'s scaling to make it so.
Each term in H(e).H(e).(: exp(−u.u) ←u :) is of form (: power(n, u.u).exp(−u.u) ←u :) for some natural n, with some over-all scaling. When we come to integrate this, we can substitute v = u.u to get:

integral(: power(n, u.u).exp(−u.u) ←u :{reals}) = integral(: power(n −1/2, v).exp(−v) ←v :{positives}) = Γ(n +1/2) = √π.(2.n)!/n!/power(n, 4)
so it suffices to compute H(e).H(e), multiply its coefficient of power(2.n, u) by this, for each natural n up to e, and sum over n. Interestingly, I find (upon doing so, computationally) that the answers come to √π.e! for even e and √π.e!/2 for odd e, when the polynomials are initially normalised so that H(2.n) and H(2.n+1) have leading order coefficient power(n, 2) (this makes all coefficients integral, with no common factor).
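That computation is easy to reproduce exactly; here's one way to do it in Python (again a sketch of mine, repeating the coefficient generator from the earlier sketch so this stands alone, and working in units of √π):

    from fractions import Fraction
    from math import factorial

    def hermite_coefficients(e):
        # as in the earlier sketch: the recurrence, with leading coefficient 2**(e//2)
        p = [Fraction(0)] * (e + 1)
        p[e % 2] = Fraction(1)
        for n in range(e % 2, e - 1, 2):
            p[n + 2] = Fraction(2 * (n - e), (n + 2) * (n + 1)) * p[n]
        scale = Fraction(2) ** (e // 2) / p[e]
        return [c * scale for c in p]

    for e in range(8):
        p = hermite_coefficients(e)
        square = [Fraction(0)] * (2 * e + 1)     # coefficients of H(e).H(e)
        for i, a in enumerate(p):
            for j, b in enumerate(p):
                square[i + j] += a * b
        # integral(: power(2.n, u).exp(-u.u) <-u :{reals}) = sqrt(pi).(2.n)!/n!/4**n;
        # sum the even-order coefficients against these, in units of sqrt(pi)
        total = sum(square[2 * n] * Fraction(factorial(2 * n), factorial(n) * 4 ** n)
                    for n in range(e + 1))
        print(e, total)   # e! for even e, e!/2 for odd e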
The solutions have energies E(0) +(e +1/2).ℏ.k with e natural, where 2.π/k is the period of the classical solution, so ℏ.k is h.f where f is the frequency of the classical solution. The time-dependence of our solution to the field equation is given by our earlier T(t) = exp(−i.Q.t/ℏ) with Q/ℏ = E(0)/ℏ +(e +1/2).k; but this phase is not physically observable.
Note that, although E(0) is the nominal zero-point of energy, the solution with lowest energy has ℏ.k/2 more energy than this; and all other solutions have more energy yet by even multiples of this. The oscillator is thus incapable of coming to rest; after all, if it did, it'd have definite position and momentum, violating Heisenberg's uncertainty principle.
Next, consider the case of a multi-dimensional harmonic oscillator. In the simplest case, the restoring force is always directed straight back towards the central position; for a linear force, this requires its magnitude to be proportional to the displacement from centre, with the same constant of proportionality in every direction. The potential is thus just E(0) +k.k.g(x, x).m/2, with g being the (positive-definite) metric of our spatial co-ordinates and x now being a position vector. The position co-ordinate in any given direction then behaves as a one-dimensional simple harmonic oscillator, as above, whether classical or quantum.
More generally, allowing that the system may resist disturbance from equilibrium more in some directions than others, we can replace k.k.g(x, x) with a general (positive-definite – otherwise the equilibrium point won't be stable) quadratic form. By judicious choice of co-ordinates, this can be reduced to diagonal form (with respect to a g-orthonormal basis), in effect giving each of the chosen co-ordinates its own value of k. In the simple case, any orthonormal system of co-ordinates will do and all have the same value of k; in this more general case, one must let the quadratic form pick the co-ordinates, each with its own value for k. (There is some freedom to choose co-ordinates whenever two or more co-ordinates share a common value of k.)
In any case, there is some orthonormal basis (:b|dim) of our position-vector space, of dimension dim, with dual g∘b as basis of co-ordinates (thanks to b being orthonormal), and some list (:k|dim) of scalars such that, with q = k.b = (: k(i).b(i) ←i |dim), our quadratic form is simply sum(q×q). Each of the co-ordinates involved in diagonalising the quadratic form is g(b(i), x) for some i in dim; scaling this by k(i), squaring and summing over i in dim gives us the value of our quadratic form at position x.
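In matrix terms (once g-orthonormal co-ordinates are chosen), that diagonalisation is just the eigendecomposition of a symmetric matrix; here's a minimal NumPy sketch, with a made-up positive-definite quadratic form standing in for the general case:

    import numpy as np

    # a hypothetical positive-definite quadratic form: potential = E(0) + m.x.A.x/2
    A = np.array([[5.0, 2.0],
                  [2.0, 3.0]])

    w, b = np.linalg.eigh(A)  # w holds the k(i).k(i); columns of b are orthonormal
    k = np.sqrt(w)            # each chosen co-ordinate gets its own k(i)
    print(k)                  # the normal-mode angular frequencies
    print(b.T @ A @ b)        # diagonal, with the k(i).k(i) on the diagonal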
In the classical case, each co-ordinate is then X(i).sin(k(i).t +a(i)) for the given scaling k(i) and arbitrary amplitude X(i) and phase a(i), so we obtain x = sum(: X(i).b(i).sin(k(i).t +a(i)) ←i |dim) as general solution, for arbitrary ({reals}:X|dim) and ({reals}:a|dim).
For the quantum solution, Schrödinger's equation has solutions of form ψ(t, x) = T(t).product(: X(i, g(b(i), x)) ←i |dim), with the g(b(i), x) being orthonormal co-ordinates, so independent variables. Substituting this and scaling by 2.m/ψ, we obtain:

2.m.i.ℏ.T'(t)/T(t) = 2.m.E(0) +sum(: k(i).k(i).m.m.g(b(i), x).g(b(i), x) −ℏ.ℏ.X''(i, g(b(i), x))/X(i, g(b(i), x)) ←i |dim)
in which the left side depends only on t, 2.m.E(0) depends on nothing and each term in the sum over i depends only on g(b(i), x), not any of the other co-ordinates of x. Consequently T has the same solution as before and each term in the sum has the same form as the single X we had in the one-dimensional case. We thus obtain solutions with energies E(0) +ℏ.sum(: (e(i) +1/2).k(i) ←i |dim) for arbitrary ({naturals}: e |dim) and the ground state has energy E(0) +ℏ.sum(k)/2.
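For instance (a made-up two-dimensional case of mine, in units where ℏ = 1 and E(0) = 0, with k = (1.0, 1.5) chosen arbitrarily), the lowest few energy levels follow directly:

    from itertools import product

    k = (1.0, 1.5)       # each co-ordinate's own k(i), in units of 1/time
    # energies E(0) + hbar.sum((e(i) + 1/2).k(i)), here with hbar = 1, E(0) = 0
    energies = sorted(sum((e + 0.5) * ki for e, ki in zip(es, k))
                      for es in product(range(6), repeat=len(k)))
    print(energies[:8])  # ground state sum(k)/2 = 1.25, then 2.25, 2.75, 3.25, ...

Note the near-degeneracies that appear whenever the k(i) are commensurate; with k(1)/k(0) = 3/2, the levels for e = (3, 0) and e = (0, 2) coincide exactly.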
Written by Eddy.