From de Broglie's work, we can infer that matter has a periodic space-time structure along its world-line. In particular, a material object with 4-momentum p has associated wave co-vector K = g(p)/h, where g is the space-time metric and h is Planck's constant.
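Reading K·s as a count of cycles (as the next paragraph does), a displacement of one wavelength along a moving object's direction of motion advances the phase by exactly one cycle, which is just de Broglie's familiar relation re-expressed: with λ and |p| (labels introduced here for illustration) the wavelength and the magnitude of the spatial momentum,

\[ \frac{|p|\,\lambda}{h} \;=\; 1 \text{ cycle} , \qquad\text{i.e.}\qquad \lambda \;=\; \frac{h}{|p|} . \]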
Just as K·s, a.k.a. K(s), tells us the change in phase – i.e. the number of cycles of the periodic structure – between the end-points of a straight displacement, s, we can integrate K·dx along a trajectory, x, to obtain the change in phase between its ends. In a flat universe with K constant, this is just the same as integrating dx to obtain the trajectory's total displacement, end−start, and applying K to that displacement as before. More usefully, however, we can still use the same approach when our domain isn't flat – and the constraint on K changes to something subtler than constancy: the requirement is that the integral of K·dx along a curve isn't changed by continuous deformation of the curve, so long as its end-points remain fixed (and the integral remains well-defined throughout the deformation). This implies d^K = zero, which may be achieved by K being the gradient of a scalar field (albeit one with units of phase), at least within any simply connected domain – one in which any loop can be continuously deformed down to a point, which has zero as its integral of K·dx. That scalar field is known as the phase of the periodic process, as a function of position.
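In more conventional notation – writing φ for this phase field and a, b for a trajectory's end-points (labels introduced here purely for illustration) – the preceding paragraph amounts to

\[ \phi(b) - \phi(a) \;=\; \int_x K\cdot\mathrm{d}x , \qquad K = \mathrm{d}\phi , \]

with the integral taking the same value for any two trajectories from a to b that can be continuously deformed into one another within the domain on which K is defined.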
Now, a loop (a trajectory starting and ending in the same place) round a
region in which K is undefined may be irremovable – the interior of a
ring donut contains a trajectory which cannot, within the donut, be deformed
away to nothing. Consequently, in watching our wave process pass over the obstacle encircled by such a loop, we can follow two trajectories from a common start-point (such as the source of illumination in a standard two-slit experiment), one passing to one side of the obstacle (through one slit), the other to the other side; thereafter, the two trajectories can come back together. Had the two trajectories passed on the same side of the obstacle, they'd be amenable to being continuously deformed, one onto the other, so they'd have the same integral of K·dx; but they didn't, so they might not. From this comes the phenomenon of interference, which makes the wave process quiescent except where the difference between the phase changes accumulated along the two trajectories is (at least roughly) a whole number of cycles.
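To make the quiescence condition concrete: if the two routes deliver equal-magnitude contributions A.exp(i.θ1) and A.exp(i.θ2) where they rejoin – A, θ1 and θ2 being illustrative labels, the θs the phases (in radians) accumulated along the two routes – then the combined magnitude is

\[ \bigl|\,A\,e^{i\theta_1} + A\,e^{i\theta_2}\,\bigr| \;=\; 2\,A\,\Bigl|\cos\Bigl(\tfrac{\theta_1 - \theta_2}{2}\Bigr)\Bigr| , \]

which is maximal when θ1 − θ2 is a whole number of cycles and vanishes when it is an odd number of half-cycles.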
Thus we transform our analysis of wavelengths into an analysis of phases, with the phase construed as a variable permeating space. A wave also has a magnitude, describing the scale of its variation. It is common, in systems with wave-like behaviour, to be able to describe the system using a function of the form ψ = magnitude.exp(i.phase/radian), where i is an imaginary square root of −1. When the magnitude is constant, dψ is just the gradient of our phase (i.e. K) times i.ψ/radian; we thus have dψ = i.g(p).ψ/h/radian. If the magnitude also varies, this complicates the form of dψ, but it is natural to conjecture that this last equation still holds true. We thus identify the operator −i.h.radian.d with g(p). In a properly relativistic treatment, one would naturally read d as the space-time gradient operator.
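Spelling out the constant-magnitude case – with M for the magnitude and φ for the phase, labels for the quantities named above –

\[ \psi = M\,e^{i\phi/\mathrm{radian}} \;\Longrightarrow\; \mathrm{d}\psi = \frac{i\,\psi}{\mathrm{radian}}\,\mathrm{d}\phi = \frac{i\,\psi\,K}{\mathrm{radian}} = \frac{i\,g(p)\,\psi}{h\,\mathrm{radian}} , \qquad\text{so}\qquad -i\,h\,\mathrm{radian}\;\mathrm{d}\psi = g(p)\,\psi . \]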
However, the simple terms in which we think we know the dynamics of the system are expressed in terms of time and space separately. To put it another way, we know them in terms of the space-like momentum and the energy, which is the time-like component of the four-momentum. So (aside from a suitable scaling) we identify our space-like momentum with the spatial differential operator ∇ and the energy of our system with ∂/∂t, the time-like part of the gradient operator. We thus transform our classical dynamical equation, E = V + p.p/2/m, with V being the potential energy (typically a function of position), into Schrödinger's equation, by interpreting E and p as differential operators (scaled by −i.h.radian = −i.ℏ, with E further negated by the action of g) acting on ψ and V as a simple scaling of ψ.
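That is, reading p as the operator −i.ℏ.∇ and E as i.ℏ.∂/∂t (the extra sign on E coming from the action of g, as noted), and writing ∇² = ∇·∇ for the spatial Laplacian, ψ satisfies

\[ i\hbar\,\frac{\partial\psi}{\partial t} \;=\; V\,\psi \;-\; \frac{\hbar^2}{2m}\,\nabla^2\psi . \]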
This describes the variation in space and time of a complex scalar field ψ which encodes an object's state. The integral over any volume of space of ψ.*ψ (the squared modulus of ψ, obtained by multiplying it by its complex conjugate *ψ) gives the probability that the object is (or rather, that an experiment might observe it to be) in that volume of space, provided ψ is normalised so that its integral over all of space is 1.
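In symbols, with that normalisation (R here being just an illustrative label for a region of space):

\[ \int_{\text{all space}} \psi^*\,\psi \;\mathrm{d}V \;=\; 1 , \qquad P\bigl(\text{object observed in } R\bigr) \;=\; \int_R \psi^*\,\psi \;\mathrm{d}V . \]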
The equation is a (second-order) linear differential equation, so it suffices to find suitably general solutions to it of simple forms; a full general solution may then be expressed as a linear superposition of simple solutions.
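Explicitly, if ψ1 and ψ2 each satisfy the equation then so does a.ψ1 + b.ψ2 for any complex constants a and b (labels introduced here for illustration), since each term of the equation acts linearly:

\[ i\hbar\,\frac{\partial}{\partial t}\bigl(a\psi_1 + b\psi_2\bigr) - V\,\bigl(a\psi_1 + b\psi_2\bigr) + \frac{\hbar^2}{2m}\,\nabla^2\bigl(a\psi_1 + b\psi_2\bigr) \;=\; a\Bigl(i\hbar\,\frac{\partial\psi_1}{\partial t} - V\,\psi_1 + \frac{\hbar^2}{2m}\,\nabla^2\psi_1\Bigr) + b\Bigl(i\hbar\,\frac{\partial\psi_2}{\partial t} - V\,\psi_2 + \frac{\hbar^2}{2m}\,\nabla^2\psi_2\Bigr) \;=\; 0 . \]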
The first way we can simplify this is to consider solutions whose variations in time and space are independent; we consider the case ψ = u(p).f(t) where p (for position, no longer momentum) encodes spatial co-ordinates. The equation then becomes:
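\[ i\hbar\,u(p)\,f'(t) \;=\; V\,u(p)\,f(t) \;-\; \frac{\hbar^2}{2m}\,f(t)\,\nabla^2 u(p) , \]

with ∇² acting only on u, since f depends only on t.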
Dividing through by ψ then yields
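\[ i\hbar\,\frac{f'(t)}{f(t)} \;=\; V \;-\; \frac{\hbar^2}{2m}\,\frac{\nabla^2 u(p)}{u(p)} . \]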
Thus, if V may be expressed as a sum of a term depending only on p and a term depending only on t, we can subtract the latter from both sides to obtain a left side which depends only on t and a right side depending only on p. Since p and t are independent, this implies the two expressions thus equated must in fact be constant, enabling us to separate out ψ's dependence on time and position. In particular, in the case where V depends only on p, this constant is V additively combined with a term with the form of a kinetic energy, so we identify it as the total energy of the system. Thus we obtain, for some constant E:
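\[ i\hbar\,\frac{f'(t)}{f(t)} \;=\; E , \qquad\qquad V\,u(p) \;-\; \frac{\hbar^2}{2m}\,\nabla^2 u(p) \;=\; E\,u(p) . \]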
The first of these is easily solved: f'(t) = −i.E.f(t)/ℏ has solution f(t) = Q.exp(−i.E.t/ℏ) for arbitrary constant Q. Since dividing f by Q and multiplying u by Q yields the same ψ, we can require Q to be 1 without any loss of generality in our eventual solution. That leaves us with ψ = u(p).exp(−i.E.t/ℏ) with u depending only on position, as described by the second equation above. Rearranging this as
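\[ \nabla^2 u \;=\; \frac{2m}{\hbar^2}\,\bigl(V - E\bigr)\,u \]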
we get a standard three-dimensional harmonic equation; when V depends only on distance r from some central point, its solutions are of the form u(p) = R(r).Yᵇₐ(p/r) where: Yᵇₐ(p/r) is a spherical harmonic function of direction, i.e. of p but independent of r; b is a natural number; and a is an integer with −b ≤ a ≤ b. Then R satisfies
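\[ \frac{1}{r^2}\,\frac{\mathrm{d}}{\mathrm{d}r}\Bigl(r^2\,\frac{\mathrm{d}R}{\mathrm{d}r}\Bigr) \;-\; \frac{b\,(b+1)}{r^2}\,R \;=\; \frac{2m}{\hbar^2}\,\bigl(V(r) - E\bigr)\,R , \]

the b.(b+1).R/r/r term being the contribution of the angular part of ∇² acting on Yᵇₐ.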
The further solution then depends on V's radial dependence; the case of an inverse-square force law's potential well (V varying as −1/r) is analytically tractable.
Written by Eddy.