
Complex Numbers

Old page: see newer page for a (hopefully) clearer exposition.

What are the complex numbers ?

They're a model of the field generated by rotations and enlargements in two (real) dimensions. They're the algebraic closure of the reals – everything you need to be able to solve all polynomial equations with real coefficients. They're the algebra you get out of the two-dimensional (real) plane by presuming commutativity, associativity and distributivity, then choosing to identify one member of a basis with the real number 1 and constrain the other's square to be −1. However, each of these requires some motivation and explanation.

Rotations and scalings

Before I delve into deeper theory, let me first give an overview of some salient properties by describing the most intuitively convenient form of the complex numbers, as an algebraic abstraction from the linear transformations, of the two-dimensional real plane, that preserve (the origin, since they are linear, and) both the magnitude and orientation of angles.

Each such transformation has the form (: [a.x −b.y, a.y +b.x ] ← [x, y] :) for some real a and b; and can be expressed as a scaling, by c = √(a.a +b.b), composed with a rotation (through an angle whose Sin and Cos are b/c and a/c, respectively), each centred on the origin. If we compose two such transformations, it doesn't matter what order we compose them in; the end result scales by the product of their two scalings and rotates through the sum of their respective rotations' angles. The pure scalings (with b = 0) serve as a model of the real numbers, with composition faithfully modelling multiplication. The half-turn rotation is exactly the same transformation as scaling by −1 and any other negative scaling can be expressed as a positive scaling composed with the half-turn. Aside from the zero mapping, each transformation has an inverse – scale by the inverse of its scale factor, rotate through the reverse of its angle.
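These claims are easy to check concretely. The sketch below (the helper names rot_scale, compose and scale_and_angle are mine, not the text's) builds two such transformations as 2×2 matrices, composes them in both orders, and confirms that the scalings multiply while the angles add:

```python
import math

def rot_scale(a, b):
    """Matrix of (: [a.x - b.y, a.y + b.x] <- [x, y] :)."""
    return [[a, -b], [b, a]]

def compose(m, n):
    """Matrix product m @ n: apply n first, then m."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def scale_and_angle(m):
    """Recover the scale factor c and the rotation's angle from the matrix."""
    a, b = m[0][0], m[1][0]
    return math.hypot(a, b), math.atan2(b, a)

p = rot_scale(3.0, 4.0)   # scale 5, angle atan2(4, 3)
q = rot_scale(1.0, 1.0)   # scale sqrt(2), angle pi/4
for m, n in ((p, q), (q, p)):          # order of composition doesn't matter
    c, t = scale_and_angle(compose(m, n))
    assert abs(c - 5.0 * math.sqrt(2.0)) < 1e-12      # scalings multiply
    assert abs(t - (math.atan2(4.0, 3.0) + math.pi / 4)) < 1e-12  # angles add
```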

As may be seen from their matrix representations, adding such transformations pointwise yields another such; and composing such a sum with another such transformation yields the same as would be obtained by composing this other with each of the summed transformations separately, then summing the results. Thus the natural pointwise addition of mappings of a linear space, taken with the composition of such mappings, lets us combine our selected transformations just as if they were numbers, like the reals, with composition understood as multiplication. Formally, these transformations form a field, known as the complex numbers.
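A minimal check of this closure and distributivity, under the same matrix representation (add and compose are my own helper names):

```python
def compose(m, n):
    """Matrix product: apply n, then m."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(m, n):
    """Pointwise sum of two transformations."""
    return [[m[i][j] + n[i][j] for j in range(2)] for i in range(2)]

p, q, r = [[3, -4], [4, 3]], [[1, -1], [1, 1]], [[2, -5], [5, 2]]
s = add(p, q)
# the sum is again of the form [[a, -b], [b, a]]
assert s[0][0] == s[1][1] and s[0][1] == -s[1][0]
# composing the sum with r matches composing each summand, then summing
assert compose(r, s) == add(compose(r, p), compose(r, q))
assert compose(s, r) == add(compose(p, r), compose(q, r))
```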

When we construe rotations as numbers, in this sense, we find that the square of any rotation is simply rotation through twice its angle; the equivalence of the −1 scaling and the half-turn rotation thus casts the rotations through a quarter turn, in either sense, as square roots of −1. Indeed, not only does the polynomial equation x.x +1 = 0 have these two solutions for x, but every polynomial equation in this field has solutions and every polynomial can be written as a product of simple degree 1 polynomials. This proves to be a highly useful property.
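Concretely, the quarter-turn matrix squares to the half-turn, which is the −1 scaling; and, using Python's complex literals, both quarter turns solve the quadratic:

```python
def compose(m, n):
    """Matrix product: apply n, then m."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

quarter = [[0, -1], [1, 0]]          # the quarter turn (: [-y, x] <- [x, y] :)
half = compose(quarter, quarter)
assert half == [[-1, 0], [0, -1]]    # the half turn, i.e. the -1 scaling
# as complex numbers, both quarter turns solve x.x + 1 = 0
for x in (1j, -1j):
    assert x * x + 1 == 0
```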

If we give a name, commonly i, to one of the quarter turns, say (: [−y, x] ←[x, y] :), then we can obtain any other complex number as the sum of a scaling (which we can construe as a real number) and some real multiple of i; indeed, the typical transformation, given above as (: [a.x −b.y, a.y +b.x ] ← [x, y] :), with a and b real, is simply a +b.i, when we interpret each real number as synonymous with scaling by that real number. However, notice that we have two quarter-turn rotations to choose from; we can choose either as i, without changing anything about the structure. Switching which one we chose merely replaces i with −i everywhere it appears in whatever we work out. We thus find that the mapping ({complex}: a −i.b ← a +i.b; a, b in {reals} :{complex}) preserves the algebraic structure of the complex numbers; this mapping is known as (complex) conjugation and commonly denoted *, so *(a +i.b) = a −i.b for every real a and b.

For real a, b we can multiply a+i.b by its conjugate to get (a+i.b).(a−i.b) = a.a +i.b.a −a.i.b −i.i.b.b = a.a +b.b, the (positive real) square of the scale factor needed to express a+i.b as the composite of a scaling and a rotation. For any complex z, the positive square root of this product, √(z.*z), is known as the modulus of z. When z = a+i.b, we describe a = (*z+z)/2 as the real part of z and b = i.(*z−z)/2 as the imaginary part of z. These are the components, respectively parallel and perpendicular to the input, of the output of the transformation z represents, when its input is a unit vector.
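Python's built-in complex type lets us verify these formulas for the modulus and the real and imaginary parts directly:

```python
z = 3 + 4j
assert z * z.conjugate() == 25 + 0j      # a.a + b.b = 9 + 16
assert abs(z) == 5.0                     # the modulus, sqrt(z.*z)
a = (z.conjugate() + z) / 2              # the real part, (*z + z)/2
b = 1j * (z.conjugate() - z) / 2         # the imaginary part, i.(*z - z)/2
assert a == 3 and b == 4
```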

Just as the real number continuum supports differentiation and the solution of differential equations, so too does the two-dimensional complex number continuum, with one twist. Consider a function ({complex}: Z :{complex}); we can necessarily write it, in terms of a pair of real-valued functions f, g from the two-dimensional real plane to the reals, as Z(x+i.y) = f([x,y]) +i.g([x,y]), for real x, y. At any [x,y] at which f and g are differentiable, we can differentiate this as a function from the two-dimensional real plane; thus, if Z is to be differentiable, its derivative at [x,y] must satisfy:
Z′(x+i.y) = (: X.(∂f/∂x +i.∂g/∂x) +Y.(∂f/∂y +i.∂g/∂y) ← X +i.Y :), with each partial derivative evaluated at [x,y].

However, Z is not a mapping from the real two-plane to itself; it is a mapping from the complex numbers to itself; as such, its derivative is constrained to be a complex-linear mapping from the complex numbers to itself, rather than just a real-linear mapping from the real two-plane to itself. This means that if we scale its input by some complex number, its output must likewise be simply scaled by that complex number. Giving it 1 as input, we get the result for X=1, Y=0; multiplying this by i must give, simply, the result of giving it i as input, which is its value for X=0, Y=1. Thus we obtain
∂f/∂y +i.∂g/∂y = i.(∂f/∂x +i.∂g/∂x), i.e. ∂g/∂y = ∂f/∂x and ∂f/∂y = −∂g/∂x;

i.e. a ({complex}::{complex}) function is only complex-differentiable if: its real part's variation with the real part of its input matches its imaginary part's variation with the imaginary part of its input; and its real part's variation with the imaginary part of its input is exactly opposite to its imaginary part's variation with the real part of its input. This is trivially satisfied for the identity (with derivative 1), by any constant mapping (with derivative 0) and by any sum of mappings each of which, separately, satisfies it. It is equally trivially not satisfied by conjugation. The details of complex multiplication ensure (although the details are mildly laborious) that the product of two complex-differentiable functions is also complex-differentiable, from which (given the foregoing) we can infer that all polynomials are complex-differentiable.
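A finite-difference sketch of the two conditions just stated (cauchy_riemann_residues is my name for the checker): z ↦ z.z passes, while conjugation fails the first condition.

```python
def cauchy_riemann_residues(Z, x, y, h=1e-6):
    """Central-difference estimates of df/dx - dg/dy and df/dy + dg/dx,
    where Z(x + i.y) = f([x, y]) + i.g([x, y]); both vanish iff the
    stated conditions hold at [x, y]."""
    f = lambda x, y: Z(complex(x, y)).real
    g = lambda x, y: Z(complex(x, y)).imag
    dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    dgdx = (g(x + h, y) - g(x - h, y)) / (2 * h)
    dgdy = (g(x, y + h) - g(x, y - h)) / (2 * h)
    return dfdx - dgdy, dfdy + dgdx

# z -> z.z is complex-differentiable: both residues vanish
r1, r2 = cauchy_riemann_residues(lambda z: z * z, 1.2, -0.7)
assert abs(r1) < 1e-6 and abs(r2) < 1e-6
# conjugation is not: df/dx = 1 while dg/dy = -1, so the first residue is 2
r1, r2 = cauchy_riemann_residues(lambda z: z.conjugate(), 1.2, -0.7)
assert abs(r1 - 2) < 1e-6
```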

As for the reals, we can pose differential equations and find solutions in the complex numbers; in particular, the differential equation f' = f with f(0) = 1, whose unique solution on the reals is the natural exponential function f = exp, turns out to have a solution in {complex}. Naturally, this agrees with exp on {reals}, represented by the scalings, so we can consider it the extension of exp to {complex}, wherein it turns out to be periodic, with period 2.π.i; for real x, y, exp(x +i.y) = exp(x).(cos(y) +i.sin(y)).
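The standard library's cmath implements this extension, so both the displayed formula and the 2.π.i period can be checked:

```python
import cmath
import math

x, y = 0.5, 1.3
z = complex(x, y)
# exp(x + i.y) = exp(x).(cos(y) + i.sin(y))
lhs = cmath.exp(z)
rhs = math.exp(x) * complex(math.cos(y), math.sin(y))
assert abs(lhs - rhs) < 1e-12
# periodic, with period 2.pi.i
assert abs(cmath.exp(z + 2j * math.pi) - cmath.exp(z)) < 1e-12
```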


Anywhere we can do addition, we can ask which relations on the things being added respect addition; I'll call these real-linear; given additions on collections U and V (so a+b is in U for all a, b in U and x+y is in V for all x, y in V), a mapping (U: f |V) is real-linear iff f(x+y) = f(x) +f(y) for all x, y in V (here f(x) takes the rôle of a, with f(y) as b; more generally, a relation (U: r :V) is real-linear iff: r relates x to a and y to b implies r relates x+y to a+b; the reverse of any real-linear relation is a real-linear relation). A mapping (V: s |V) is a real scaling iff it commutes with every real-linear relation (V:f:V), i.e. f&on;s = s&on;f. Composing two scalings necessarily yields a scaling; and – since each is a real scaling and the other is, in particular, real linear – they commute; so I'll write a simple . in place of &on; when the relation on either side of it is a scaling.

There's a natural embedding of the naturals in {real scalings (V:|V)} as (: (V: sum({v}:|n) ←v :V) ←n :{naturals}). This gives us a skeleton for the real scalings (V:|V). We can compose these: the composite agrees with the multiplication natural to the naturals, i.e. sum({sum({v}:|n)}:|m) = sum({v}:|n×m) with × (somewhat extravagantly) denoting natural multiplication (i.e. repeat(m)&on;repeat(n) = repeat(m×n)). Since I'll be treating a natural, n, as synonymous with (V: sum({v}:|n) ←v |V), I can use . rather than × for multiplication hereafter; I want to reserve × for the tensor calculus.
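A sketch of this embedding (repeat is my name for (: sum({v}: |n) ←v :)), using exact rationals for V and checking that composition matches natural multiplication:

```python
from fractions import Fraction

def repeat(n):
    """The scaling (V: sum({v}: |n) <- v :V): add together n copies of v."""
    return lambda v: sum(v for _ in range(n))

v = Fraction(2, 3)
m, n = 4, 5
# repeat(m) &on; repeat(n) = repeat(m x n)
assert repeat(m)(repeat(n)(v)) == repeat(m * n)(v) == 20 * v
```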

The addition on V induces one on {mappings (V:f|A)}, for any collection A (e.g., for our scalings, A is V), defined by (V:f|A)+(V:g|A) = (V: f(a)+g(a) ←a |A), which (in particular) makes f+g real linear when we have an addition on A for which f and g are real linear. [More generally, on relations, I'll have (V:f:A)+(V:g:A) relate: u+v to a whenever f relates u to a and g relates v to a; u to a whenever f relates u to a and a isn't a right value of g; v to a whenever g relates v to a and a isn't a right value of f; the implicit padding with zeros makes it possible to add lists of things without having to insist they have the same length, for example; and to use an atomic relation ({x}| |{i}) = x←i, with i natural and x addable, as if it were a list of any length greater than i, having x in position i and zero in all other positions.] Provided every positive natural scaling is (V||V), i.e. {n.x: x in V} = V for every positive natural n, the reverse of any positive natural scaling is a real scaling; any sum or composite of real scalings is a real scaling; we thus obtain the positive rationals embedded in the real scalings.
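The zero-padded addition of lists can be sketched with the standard library (add_lists is my name for it):

```python
from itertools import zip_longest

def add_lists(f, g):
    """Pointwise sum of two lists, implicitly padding the shorter with zeros."""
    return [x + y for x, y in zip_longest(f, g, fillvalue=0)]

assert add_lists([1, 2, 3], [10, 20]) == [11, 22, 3]
# an atomic x <- i behaves as a list of any greater length, with x in
# position i and zero in all other positions
assert add_lists([0, 0, 5], [1, 1, 1, 1]) == [1, 1, 6, 1]
```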

I need to establish that the positive rationals are dense in the positive real scalings, i.e. that the real scalings have an order induced from the one on the naturals, via the positive rationals; and that, for every non-zero real scaling s, s.s > the zero scaling, (: sum({v}:|0) ←v :) with 0 being the natural {}, so any list (:|0), including ({v}:|0) for any v, is just the empty list [], whose sum (using V's addition) is V's additive identity (a third entity entitled to the name zero or 0). If the real scalings don't include negatives, we can induce them by additive completion (provided we can show the addition is cancellable, for which it should suffice that V's addition is cancellable).

Once I have real scalings, I can form lists of real scalings. Since a list is just a mapping (from a natural), we can use the addition induced above (with A as a natural) to induce an addition on such lists, establishing a framework which incidentally suffices to describe simplices of arbitrary finite dimension. Any scaling, s, of V induces a scaling (: (: s.f(i) ←i :n) ←(:f|n) |{lists ({scalings of V}::)}), which I shall treat as synonymous with s, i.e. I'll write s.f for (: s.f(i) ←i :n). Likewise, any scaling on {lists of scalings} serves as a scaling on {scalings of lists of scalings} and so on ad nauseam; this will eventually surface as the underlying truth of the tensor algebra.

Now, on {lists ({scalings}:|n)}, any permutation (n|p|n) induces (: f&on;p ← ({scalings}:f|n) :{lists}), which is manifestly a real linear monic mapping. For a real linear map, s from {lists ({scalings}:|n)} to itself, to be a scaling (of lists of scalings), it must commute with such a permutation-induced real linear isomorphism. It must also commute with (: (:f:u) ←f :) for any subset u of n whence s(:f:u) = (:s(f):u), so s(f)'s entry at position i depends on f(i), for each i in n, but on no other output of f. Thus s is diagonal: we can write it as s = (: (: S(i).f(i) ←i |n) ←f :) for some list ({scalars}: S |n).

Thus s&on;(: f&on;p ←f :) maps f to s(f&on;p) = (: S(i).f(p(i)) ←i |n) while (: f&on;p ←f :)&on;s maps f, via s(f), to s(f)&on;p = (: S(p(i)).f(p(i)) ←i |n). For s to commute with (: f&on;p ←f :) we must thus have S(i) = S(p(i)) for all i in n; for s to be a real scaling (of lists of real scalings) this must hold true for every permutation (n|p|n), so we may infer that S is constant and s is simply the real scaling on lists of real scalings induced by multiplying each real scaling in the list by a fixed scaling. Thus, somewhat laboriously, we find a natural isomorphism between S = {real scalings (V:|V)} and {real scalings ({lists (S::)}:|{lists (S::)})}.
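The commuting argument can be seen numerically (diagonal and permute are my names): a constant diagonal commutes with a transposition, while a non-constant one does not, so only constant diagonals qualify as scalings of lists:

```python
def diagonal(S):
    """s = (: (: S(i).f(i) <- i |n) <- f :), acting on lists of scalars f."""
    return lambda f: [S[i] * f[i] for i in range(len(f))]

def permute(p):
    """(: f &on; p <- f :), the permutation-induced real linear map."""
    return lambda f: [f[p[i]] for i in range(len(f))]

f = [2.0, 3.0, 5.0]
swap = permute([1, 0, 2])
# a constant S commutes with the permutation ...
const = diagonal([7.0, 7.0, 7.0])
assert const(swap(f)) == swap(const(f))
# ... a non-constant S does not, so it is no scaling of lists of scalings
uneven = diagonal([1.0, 2.0, 3.0])
assert uneven(swap(f)) != swap(uneven(f))
```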

Now look at C = {lists ({real scalings (V:|V)}: |2)} and notice that the only permutations of 2 are the identity, [0,1], and the minimal transposition, [1,0]. These induce the linear maps (C| u←u |C) and (C| [t,s]←[s,t] |C), the identity (a.k.a. C itself) and reflection in the principal diagonal, i.e. in the line {[s,s]: (V:s|V) is a scaling}. Provided we have additive completion (hence negative scalings) in V, we obtain:
1 = (C| [s,t] ←[s,t] |C), −1 = (C| [−s,−t] ←[s,t] |C), i = (C| [−t,s] ←[s,t] |C), −i = (C| [t,−s] ←[s,t] |C) and conj = (C| [s,−t] ←[s,t] |C),

each of which is real linear (C||C), with 1 and −1 scalings and i&on;conj = (: [t,s] ←[s,t] :) = conj&on;(−i) as the other permutation-induced real linear isomorphism. Using i and −i we can establish a pair of real linear mappings com = ({linear (C:|C)}: s.1 +t.i ←[s,t] |C) and anti = (: s.1 +t.(−i) ←[s,t] |C), so that anti = com&on;conj and (as conj is self-inverse) com = anti&on;conj. Then

com([s,t])&on;anti([a,b])
= (s.1+t.i)&on;(a.1+b.(−i))
= s.(a.1+b.(−i)) +t.(a.i+b.1)
= (s.a+t.b).1 +s.b.(−i) +t.a.i
= (s.a+t.b).1 +(t.a −s.b).i

and, in particular,


com([s,t])&on;com([a,b])
= (s.1 +t.i)&on;(a.1 +b.i)
= (s.a −t.b).1 +(s.b +t.a).i, and

com([s,t])([a,b])
= (s.1 +t.i)([a,b])
= s.[a,b] +t.[−b,a]
= [s.a −t.b, s.b +t.a], so

com(com([s,t], [a,b]))
= (s.a −t.b).1 +(s.b +t.a).i
= com([s,t])&on;com([a,b])

i.e. com(com(u,v)) = com(u)&on;com(v) for u, v in C. This is exactly the associativity condition needed to ensure that we can pull back a multiplication on C, with com(u,v) serving as u×v. Meanwhile,

anti([s,t])&on;conj
= (s.1 −t.i)&on;conj
= s.conj −t.i&on;conj
= s.conj +t.conj&on;i
= conj&on;com([s,t])

i.e. anti(u)&on;conj = conj&on;com(u) so com(u)&on;conj = conj&on;anti(u), since conj is self-inverse; likewise, com(conj(u)) = anti(u) = conj&on;com(u)&on;conj, and

anti(u)&on;anti(v)
= com(conj(u))&on;com(conj(v))
= com(com(conj(u),conj(v)))
= com(anti(u,conj(v)))
= com((anti(u)&on;conj)(v))
= com(conj(com(u,v)))
= anti(com(u,v)),

com(u)&on;anti(v)
= com(u)&on;com(conj(v))
= com(com(u,conj(v)))
= com((com(u)&on;conj)(v))
= com(conj(anti(u,v)))
= anti(anti(u,v)) and

anti(u)&on;com(v)
= com(conj(u))&on;com(v)
= com(com(conj(u),v))
= com(anti(u,v))

i.e., for u, v in C, anti(com(u,v)) = anti(u)&on;anti(v), anti(anti(u,v)) = com(u)&on;anti(v) and com(anti(u,v)) = anti(u)&on;com(v); in particular, since com is monic, conj(com(u,v)) = com(conj(u),conj(v)) – conjugation respects the multiplication we pull back through com.
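These identities are easy to exercise concretely, treating a member [s,t] of C as a Python list and coding com([s,t]) by hand as the action of s.1 + t.i:

```python
def com(u):
    """com([s, t]) = s.1 + t.i acting on C: [a, b] -> [s.a - t.b, s.b + t.a]."""
    s, t = u
    return lambda v: [s * v[0] - t * v[1], s * v[1] + t * v[0]]

def conj(u):
    """conj = (C| [s, -t] <- [s, t] |C)."""
    return [u[0], -u[1]]

u, v, w = [2.0, 3.0], [5.0, -1.0], [0.5, 4.0]
uv = com(u)(v)                     # com(u, v), serving as u x v
# com(com(u, v)) = com(u) &on; com(v), checked on an arbitrary w
assert com(uv)(w) == com(u)(com(v)(w))
# conjugation respects the pulled-back multiplication
assert conj(uv) == com(conj(u))(conj(v))
```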

They're a variety of scalar (i.e. number) and they aren't all real. The reals I can get at, for any additive context V, as the (V|:V) which respect addition and commute with every (V|:V) that respects addition. If the complex numbers are there as unreal scalings, multiplying by one respects addition but doesn't commute with all (V|:V) which do respect addition; those addition-respecting (V|:V) with which they do commute are known as linear with respect to the complex numbers.

There is an isomorphism, called conjugation, on {(V:|V) respecting addition} which isn't the identity but does respect addition; those multiplying by complex scalars have, as their images under conjugation, the antilinear maps.

In the complex plane, the map that multiplies by some given complex number, then conjugates, respects addition; compose it with itself and you always get a real scaling – furthermore, never a negative one. Compose two such and you get multiplication by a complex number (one's given times the conjugate of the other's), which also respects addition.
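With Python's complex type (mul_conj is my name for the multiply-by-a-given-then-conjugate map), both observations check out:

```python
def mul_conj(w):
    """z -> *(w.z): multiply by the given w, then conjugate."""
    return lambda z: (w * z).conjugate()

w1, w2 = 2 + 3j, 1 - 4j
f, g = mul_conj(w1), mul_conj(w2)
z = 0.5 - 2j
# f respects addition
assert f(z + w2) == f(z) + f(w2)
# composed with itself: a real scaling, by w.*w = |w|^2, never negative
assert f(f(z)) == (w1.conjugate() * w1) * z
# two such compose to multiplication by a complex number
assert f(g(z)) == (w1.conjugate() * w2) * z
```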

When we come to look at a space, V, of several complex dimensions, the (V|:V) which respect addition include some which compose with themselves to give a scaling yet look like a half-turn about some axis (for example; one could apply a scaling with it, if one wished), without any trace of conjugation involved (they're linear, not antilinear). Otherwise I'd just introduce, as anti-scalings, those (V|:V) which respect addition and compose with themselves to give positive real scalings.

I need to look into the positivity thing.

I can get the positive scalings by saying: take all your real scalings, throw away any that happen to be additive identities, compose each with itself and gather up the results. This presumes continuity of the square root function (except at 0).

I get my real scalars, for an additive context V, as those (V|:V) which respect addition and commute with all (V|:V) which respect addition. I need to establish that these are suitably one-dimensional, so that I can use 0, if present (without it, we only have positives), to divide the positives (identified by V being among them) from the rest.

I can echo the orthodox way to obtain real scalars as follows. The positive integer multiples of the identity provide a natural skeleton for the scalings; if these are all epic – i.e. {n.v: v in V} is V for each positive integer n – they generate the positive rationals, which form the skeleton of a continuum; if there are scalings to fill in the whole continuum, we get the positive orthodox reals.

What does one-dimensional mean ? It's about the strict ordering available: we can say that squaring maps each non-zero real scaling to a positive one,

({positives}| x&on;x ← x |{non-zero real scalings}).

We obtain an ordering, <, as: ({real scalings}: x+y ←x, positive y |{real scalings}), i.e. x < z iff there is some positive y for which x+y is z. The reverse of this ordering is called >. It remains to see what real scalings aren't related to one another, in one order or the other, by < and ask: must they be equal ? We can use the construction of rationals above to show that they differ by less than any positive rational but may be left with an infinitesimal neighbourhood. Is that suitably one-dimensional ?

But with the positives as the squares of all scalings (not just the rational ones we were able to construct) we can hope to pull back an ordering even on any infinitesimal displacements at our disposal. To attempt this: I'll say that a ({real scaling}: f :{real scaling}) respects order precisely if a<b and a in (|f:) implies b in (|f:) with f(a)<f(b) – i.e. if there's some positive c with a+c=b, then there's some positive d with f(a)+d=f(b). The positivity can be hidden by rephrasing positive c with a+c=b as real scaling h with a+h.h=b and likewise for d.

Now consider two functions f, g on {real scalings}, each of which respects <. Given real scalings a, b, h with a+h.h=b, f's respect gives some k with f(a)+k.k=f(b), g's then gives some d with g(f(a))+d.d=g(f(b)) so g&on;f respects <. For f+g, we get f(a)+k.k=f(b) and g(a)+e.e=g(b) so (f+g)(a) + k.k+e.e = (f+g)(b) and we just need to show that a sum of positives is positive – a sum of two squares must have a square root !

I intuit that ({positives}: x.x ←x |{positives}) respects order. Does it ? a+c=b gives a.a +2.a.c +c.c = b.b with all parties given to be positive; but, officially, I need (2.a+c).c to be k.k for some scaling k, which is a Pythagorean game. Still, I can think of it as (b+a).c and reduce the problem to asking whether a<b, with a positive, implies b+a is positive, but this just asks whether a sum of positives is positive. Which shouldn't be so hard to prove, should it ?
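The arithmetic in this paragraph can at least be exercised numerically; here the witness (2.a + c).c = (b + a).c is computed directly:

```python
a, b = 1.5, 2.75            # positive scalings with a < b
c = b - a                   # the positive witness: a + c = b
assert c > 0
k2 = (2 * a + c) * c        # so that b.b = a.a + (2.a + c).c
assert k2 == (b + a) * c    # the same witness, rewritten as (b + a).c
assert a * a + k2 == b * b and k2 > 0
```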

Written by Eddy.