Two-dimensional complex

I guess I need to write more about the complex numbers: for now, let it suffice that they are the algebraic and topological completion of the rationals. Over them, any polynomial factorises as a product of first rank polynomials (of form a.x +b ←x with a, b complex) times an optional zeroth rank polynomial (of form c ←x for some complex constant, c). The complex numbers also entertain an auto-anti-isomorphism called (complex) conjugation.

Conjugation, *, maps each complex value, c, to some other, c*, and c** is c: for any complex z and w, (z.w)* is w*.z* and (z+w)* is z*+w*; the additive and multiplicative identities are self-conjugate (0* = 0, 1* = 1); whence we may infer that the naturals are, and that (z/w)* is z*/w*; whence the rationals are also self-conjugate. Conjugation is also continuous, hence so are all reals, being limits of rationals. It is an essential truth of the complex numbers that nothing but the reals is self-conjugate; and there are plenty more complex numbers apart from the reals. In particular, using i for the square root of −1, we obtain i* = −i; and all other values for * follow from this and the self-conjugacy of the reals. As i.i = −1 and i* = −i, we have i.i* = i.(−i) = 1. A complex number z for which z* = −z is known as an imaginary number: i is the imaginary unit. Every other imaginary number is some real multiple of i since, for imaginary z, z* = −z, so (z/i)* = z*/i* = −z*/i = z/i, which is thus real.

Any complex number, z, can be written as z = (z +z*)/2 +(z −z*)/2. The first term is trivially self-conjugate: so it is real; call it Re(z) = (z +z*)/2. For the second term, ((z−z*)/2)* = (z*−z)/2 = −(z−z*)/2; so it is imaginary, hence some real multiple of i; call the real in question Im(z) = (z −z*) / (2.i); so z = Re(z) +i.Im(z). Thus every complex number is of form x +i.y for some reals x and y; the complex numbers are a real vector space of dimension two.
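This decomposition can be checked directly with Python's built-in complex type, whose `.conjugate()` method plays the role of * (a minimal sketch; the helper names `re_part` and `im_part` are mine, not the text's):

```python
# Split a complex number into real and imaginary parts using only
# conjugation: Re(z) = (z + z*)/2 and Im(z) = (z - z*)/(2.i).
def re_part(z: complex) -> complex:
    return (z + z.conjugate()) / 2

def im_part(z: complex) -> complex:
    return (z - z.conjugate()) / 2j

z = 3 + 4j
assert re_part(z) == 3           # the self-conjugate part
assert im_part(z) == 4           # the real multiple of i in the rest
assert z == re_part(z) + 1j * im_part(z)
```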

Addition and multiplication are commutative and associative on the complex numbers, with multiplication distributing over addition, addition supporting an inverse (negation) and multiplication, aside from the additive identity, supporting its own inversion, 1/z ←z. These were the properties of the reals that we needed in order to use them as scalars, relative to which to define linear spaces and linear maps thereon; so we can define complex-linear spaces. The complex numbers lack the strict ordering of the reals. (Should i be considered greater than or less than 0, 1, −1 ? How about −i ? There is no natural answer, although one can impose answers artificially.) That imposes some differences on the structures of complex-linear spaces; as, likewise, does the presence of conjugation.

On any complex-linear space, conjugation induces an interesting twist. Since conjugation isn't complex-linear, although it is real-linear, its interaction with complex-linear maps is non-simple: but it is an (algebraic and topological) isomorphism between the complex number space and itself, which does provide for some simplifications. A mapping between linear spaces, (U:f|V), is called linear precisely if f(k.v+w) = k.f(v)+f(w) for any v, w in V and scalar k. Likewise, a mapping between complex spaces, (U:f|V), is called anti-linear precisely if f(k.v+w) = k*.f(v)+f(w) for v, w in V and k complex. Note that (since * is self-inverse) composing two anti-linear mappings will yield a linear one; while composing linear with antilinear yields antilinear.
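To make the composition claim concrete: representing an antilinear map on C² by a matrix F acting as s ↦ F.conj(s) (the matrices here are random test data, not anything from the text), composing two such maps gives a plainly linear one:

```python
import numpy as np

# An antilinear map f on C^2 as a matrix F acting via s -> F @ conj(s);
# the conjugation supplies the k* in f(k.v + w) = k*.f(v) + f(w).
def apply_antilinear(F, s):
    return F @ np.conj(s)

rng = np.random.default_rng(0)
F = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
G = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
v = rng.normal(size=2) + 1j * rng.normal(size=2)
k = 2 - 3j

# Antilinearity: f(k.v) = k*.f(v).
assert np.allclose(apply_antilinear(F, k * v),
                   np.conj(k) * apply_antilinear(F, v))

# Two antilinear maps compose to the linear map with matrix F @ conj(G).
composed = apply_antilinear(F, apply_antilinear(G, v))
assert np.allclose(composed, (F @ np.conj(G)) @ v)
```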

Two-dimensional complex

Spinors oblige me to consider two-dimensional complex linear spaces. One of the neat features of these is that a metric is of the same tensor kind as the measure, though symmetric rather than antisymmetric.

One of the things about four dimensions is that the measure (square root of the determinant of the metric) is of the same kind as the square of the metric.

Square roots of linear maps

Two ways present themselves to obtain a square root of a linear map (V:|V). We can compose two linear maps or two antilinear maps: in either case, each factor is (V:|V).

Focus, for the moment, on linear maps which are diagonalisable (as the thing whose square root we want). This means that there is some basis, [b,d], of our complex two-dimensional space S, with dual basis [p,q] for dual(S) – defined by p.b = 1 = q.d, p.d = 0 = q.b – which can describe the given linear map as k.b×p +h.d×q for some scalars k and h.
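A small numerical sketch of such a dual basis and a map diagonalised by it (the particular b, d, k, h are arbitrary test values, not taken from the text):

```python
import numpy as np

# A basis [b, d] of C^2; stacking b, d as columns, the dual basis
# [p, q] - defined by p.b = 1 = q.d, p.d = 0 = q.b - appears as the
# rows of the inverse matrix.
b = np.array([1 + 1j, 0.5 + 0j])
d = np.array([2j, 1 - 1j])
B = np.column_stack([b, d])
p, q = np.linalg.inv(B)

assert np.isclose(p @ b, 1) and np.isclose(q @ d, 1)
assert np.isclose(p @ d, 0) and np.isclose(q @ b, 0)

# The diagonalisable map k.b×p + h.d×q has b, d as eigenvectors.
k, h = 3 + 0j, -2 + 1j
f = k * np.outer(b, p) + h * np.outer(d, q)
assert np.allclose(f @ b, k * b)
assert np.allclose(f @ d, h * d)
```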

Now, * is an antilinear map from (complex) scalars to scalars; p and q are in dual(S) so each is a linear map from S to scalars; so we can compose either before * to obtain an antilinear map, e.g. (S| *&on;p :{complex}), which I'll write as *p; likewise *q maps s in S to the scalar (q(s))*. Tensor algebra lets us re-write an arbitrary linear (S:f|S) as f = (p·f·b).b×p +(p·f·d).b×q +(q·f·b).d×p +(q·f·d).d×q, wherein each (y·f·h) is y(f(h)) with ({complex}: y |S) linear and h in S. It equally lets us re-write an antilinear (S:t|S) as t = p(t(b)).b×(*p) +p(t(d)).b×(*q) +q(t(b)).d×(*p) +q(t(d)).d×(*q). The art of diagonalisation is about choosing [b,d], hence [p,q], so as to make p·f·d = 0 = q·f·b or p(t(d)) = 0 = q(t(b)). So let us look at t&on;t which maps a given s in S to

t(t(s)) = (p·t·b)*.t(b).(p·s) +(p·t·d)*.t(b).(q·s) +(q·t·b)*.t(d).(p·s) +(q·t·d)*.t(d).(q·s), which makes

t&on;t = (p·t·b)*.t(b)×p +(p·t·d)*.t(b)×q +(q·t·b)*.t(d)×p +(q·t·d)*.t(d)×q

With a little work (using b×p + d×q as the identity) this becomes

= (b.(p·t·b) +d.(q·t·b))×((p·t·b)*.p +(p·t·d)*.q) +(b.(p·t·d) +d.(q·t·d))×((q·t·b)*.p +(q·t·d)*.q)
= (b.(p·t·b) +d.(q·t·b))×(p·t·b)*.p
+(b.(p·t·b) +d.(q·t·b))×(p·t·d)*.q
+(b.(p·t·d) +d.(q·t·d))×(q·t·b)*.p
+(b.(p·t·d) +d.(q·t·d))×(q·t·d)*.q
= b×p.((p·t·b).(p·t·b)* +(p·t·d).(q·t·b)*)
+b×q.((p·t·b).(p·t·d)* +(p·t·d).(q·t·d)*)
+d×p.((q·t·b).(p·t·b)* +(q·t·d).(q·t·b)*)
+d×q.((q·t·b).(p·t·d)* +(q·t·d).(q·t·d)*)
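These coefficients can be verified numerically: taking [b,d] to be the standard basis of C², the numbers p·t·b, p·t·d, q·t·b, q·t·d are just the matrix entries of t, and t&on;t is the linear map with matrix T.conj(T) (a sketch with a random T as test data):

```python
import numpy as np

rng = np.random.default_rng(1)
# An antilinear t on C^2, acting as s -> T @ conj(s); in the standard
# basis the entries p·t·b etc. are just the entries of T.
T = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
ptb, ptd = T[0, 0], T[0, 1]
qtb, qtd = T[1, 0], T[1, 1]

# t∘t(s) = T @ conj(T @ conj(s)) = (T @ conj(T)) @ s: a linear map.
tt = T @ np.conj(T)

# Compare with the four collected coefficients from the text.
assert np.isclose(tt[0, 0], ptb * np.conj(ptb) + ptd * np.conj(qtb))
assert np.isclose(tt[0, 1], ptb * np.conj(ptd) + ptd * np.conj(qtd))
assert np.isclose(tt[1, 0], qtb * np.conj(ptb) + qtd * np.conj(qtb))
assert np.isclose(tt[1, 1], qtb * np.conj(ptd) + qtd * np.conj(qtd))
```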

If t&on;t is to be diagonal with respect to [b,d], we need the b×q and d×p terms to be zero, so

0 = (p·t·b).(p·t·d)* +(p·t·d).(q·t·d)*
0 = (q·t·b).(p·t·b)* +(q·t·d).(q·t·b)*

which implies

p·t·b = −(p·t·d).(q·t·d)*/(p·t·d)* = −(q·t·b).(q·t·d)*/(q·t·b)*
from which we may infer p·t·d / q·t·b = (p·t·d / q·t·b)* (times a positive real, but the ratio of a complex number to its conjugate is necessarily a phase and the only positive real phase is +1), so the ratio is real: p·t·d and q·t·b have equal phase. This also tells us that p·t·b/(q·t·d)* is −(p·t·d)/(p·t·d)*, which is a pure phase, making the sizes of p·t·b and q·t·d equal: they differ only in phase. So p·t·b/(q·t·d)* is the product of their phases and is minus the square of the phase which p·t·d and q·t·b share. Thus a quarter turn round from this shared phase is half way between the phases of p·t·b and q·t·d. Consequently, we can write (for some complex Z and real H, s and α)

p·t·b = Z.exp(i.α), p·t·d = Z.(1−H)/(i.s), q·t·b = Z.i.s, q·t·d = Z.exp(−i.α)

wherein (1−H)/s and s are two arbitrary reals: the given parameterisation is chosen to make what follows easier. It is readily verified that the off-diagonal elements of t&on;t are now zero. We thus obtain t as

t = Z.(exp(i.α).b×(*&on;p) +((1−H)/(i.s)).b×(*&on;q) +i.s.d×(*&on;p) +exp(−i.α).d×(*&on;q))

wherein *&on;p and *&on;q are the antilinears ({scalars}: |S) that one obtains by composing conjugation, ({complex}| * |{complex}), after the linear maps p and q respectively.

So, now for the diagonal elements. These are

p·(t&on;t)·b
= (p·t·b).(p·t·b)* +(p·t·d).(q·t·b)*
= Z.exp(i.α).Z*.exp(−i.α) +Z.((1−H)/(i.s)).Z*.(−i.s)
= Z*.Z.(1 −(1−H))
= H.Z*.Z,

which is real and has the sign of H.

q·(t&on;t)·d
= (q·t·b).(p·t·d)* +(q·t·d).(q·t·d)*
= Z.i.s.Z*.(1−H)/(−i.s) +Z.Z*
= H.Z*.Z


Note, as a result, that any diagonalisable linear map which has an anti-linear square root is a multiple of the identity: and the multiplier involved tells us the sign of the H parameter we must use. Note that s and α are totally unconstrained, as is the phase of Z: none of these affects t&on;t.
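A numeric spot-check of this conclusion: building t from Z, H, s and α as in the diagonal-element computation above (the particular values are arbitrary test data), t&on;t comes out as H.|Z|² times the identity, unaffected by s, α and the phase of Z:

```python
import numpy as np

# Entries of the parameterised antilinear square root, read off from
# the diagonal-element computation: p·t·b = Z.exp(i.α),
# p·t·d = Z.(1-H)/(i.s), q·t·b = Z.i.s, q·t·d = Z.exp(-i.α).
Z, H, s, alpha = 1.5 - 0.5j, 0.7, 2.0, 0.3
T = np.array([
    [Z * np.exp(1j * alpha), Z * (1 - H) / (1j * s)],
    [Z * 1j * s,             Z * np.exp(-1j * alpha)],
])

# t acts as s -> T @ conj(s), so t∘t has matrix T @ conj(T).
tt = T @ np.conj(T)
assert np.allclose(tt, H * abs(Z) ** 2 * np.eye(2))

# Changing s and alpha leaves t∘t unchanged.
T2 = np.array([
    [Z * np.exp(2j), Z * (1 - H) / 0.4j],
    [Z * 0.4j,       Z * np.exp(-2j)],
])
assert np.allclose(T2 @ np.conj(T2), tt)
```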

Now, when H is 0 we have t(b.exp(i.α)) = Z.(b +d.i.s.exp(−i.α)) = i.s.(b.Z.(1−H)/(i.s) +d.Z.exp(−i.α)) = i.s.t(d) = t(−i.s.d), so we find that t(b.exp(i.α) +i.s.d) = 0, making t degenerate: indeed, t&on;t is then the zero linear map on our 2-D complex space, so this should be no surprise. …[unfinished]
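The H = 0 degeneracy can be checked with the same matrix form (Z, s, α arbitrary test values):

```python
import numpy as np

# The parameterised antilinear t with H = 0, acting as s -> T @ conj(s).
Z, H, s, alpha = 1.0 + 2j, 0.0, 1.5, 0.4
T = np.array([
    [Z * np.exp(1j * alpha), Z * (1 - H) / (1j * s)],
    [Z * 1j * s,             Z * np.exp(-1j * alpha)],
])
b = np.array([1, 0], dtype=complex)
d = np.array([0, 1], dtype=complex)

# t kills b.exp(i.α) + i.s.d, so it is degenerate ...
v = b * np.exp(1j * alpha) + 1j * s * d
assert np.allclose(T @ np.conj(v), 0)

# ... and t∘t is the zero map on C^2.
assert np.allclose(T @ np.conj(T), 0)
```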

Space-time metric

So why do I want all this ? I'm looking for a square root (the measure) for the determinant of the metric, which is negative; and some sort of square root of the metric could be useful in building one. One way to approach this is to have, in place of my conventional linear (G: g |T) with G = {linear ({scalars}: |T)}, some linear (V:f|T) with V some algebra, equipped with a suitable multiplication, *, among the products of which one finds a natural identity and its accompanying scalar multiples; from this we may hope to be able to obtain, at least among the outputs of f, a natural scalar product between members of V. For vectors s, t in T, g gave us the inner product g(s,t), which we now express as f(s)*f(t). It actually suffices to be able to do this only for those cases where s and t are equal – that is, to give g(t,t) for each t in T as f(t)*f(t) – since we can infer g(s,t) from g's symmetry, the values of g(s,s) and g(t,t) and the value of g(s+t,s+t) = f(s+t)*f(s+t).
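The closing inference — recovering g(s,t) from diagonal values alone — is just polarisation; a sketch for a real symmetric g (the matrix G here is random test data):

```python
import numpy as np

# A symmetric bilinear g on R^4, with g(s,t) inferred from the
# diagonal values alone: 2.g(s,t) = g(s+t,s+t) - g(s,s) - g(t,t).
rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))
G = A + A.T                    # symmetric, so g(s,t) = g(t,s)

def g(u, v):
    return u @ G @ v

def quad(u):                   # only diagonal values g(u,u)
    return u @ G @ u

s = rng.normal(size=4)
t = rng.normal(size=4)
assert np.isclose(g(s, t), (quad(s + t) - quad(s) - quad(t)) / 2)
```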

The above gives an example of something of the right form: if f's outputs were all anti-linear isomorphisms on some 2-dimensional complex space, each f(t)*f(t) would be a scalar multiple of the identity; and as long as f is linear, and any V-algebraic twists implicit in * respect (real) linearity, the result will be bilinear in t. I need to consider whether/when sums of square roots are also square roots; clearly, scalar multiples of square roots always are.

Hermitian maps

Consider a complex space U and the anti-linear maps from U to its dual. The outputs of these, in U's dual, are linear maps from U to {complex}, the scalars involved. We thus get a product of any two of U's members, antilinear in one and linear in the other. Consider such a product, anti-linear ({linear ({complex}:|U)}: g |U): for arbitrary u, v in U, g(u,v) depends anti-linearly on u but linearly on v; it's a complex scalar, so we can conjugate it; (g(u,v))* then depends linearly on u and anti-linearly on v; as a mapping, we can take the transpose of (: (: (g(u,v))* ←v :) ←u :), call this G, defined by G(v,u) = (g(u,v))*; observe that G is now antilinear, then linear, like g; it is called the Hermitian conjugate of g. It takes little time indeed to see that Re(g) = (g+G)/2 and Im(g) = i.(G−g)/2 are self-conjugate, with g = Re(g) +i.Im(g). Such self-conjugate maps are described as Hermitian symmetric; if g is Hermitian symmetric, so g = G, then i.g's conjugate will be −i.G = −i.g; so i.g is described as Hermitian anti-symmetric. In practice, given the indicated reduction into real and imaginary parts, Re and Im, I prefer to think of Hermitian symmetric maps as real and Hermitian antisymmetric maps as imaginary; multiplication by i swaps between the two kinds, which is just what happens for real and imaginary complex scalars.
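In matrix terms this reads as follows (a sketch: g(u,v) = conj(u).M.v for some matrix M, so the Hermitian conjugate G has matrix conj(M) transposed; the factors of a half are included so that the two parts sum back to g; M is random test data):

```python
import numpy as np

# A sesquilinear g on C^3, antilinear in its first slot:
# g(u, v) = conj(u) @ M @ v.  Its Hermitian conjugate,
# G(v, u) = (g(u, v))*, has matrix conj(M).T.
rng = np.random.default_rng(3)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Mh = np.conj(M).T

Re_M = (M + Mh) / 2            # Hermitian-symmetric ("real") part
Im_M = 1j * (Mh - M) / 2       # also Hermitian-symmetric

assert np.allclose(np.conj(Re_M).T, Re_M)
assert np.allclose(np.conj(Im_M).T, Im_M)
assert np.allclose(M, Re_M + 1j * Im_M)

u = rng.normal(size=3) + 1j * rng.normal(size=3)
v = rng.normal(size=3) + 1j * rng.normal(size=3)
# G(v, u) = (g(u, v))*:
assert np.isclose(np.conj(v) @ Mh @ u, np.conj(np.conj(u) @ M @ v))
```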

For contrast, note that we can have anti-linear maps from U to U, the transposes of which are linear from U's dual, {linear ({complex}: |U)}, to U's anti-dual, {anti-linear ({complex}:|U)}. The two latter spaces yield complex outputs, so we can compose conjugation after either kind of mapping; from ({complex}: a |U) we obtain ({complex}: *&on;a |U) with (*&on;a)(u) = *(a(u)); if either a or *&on;a is linear then the other is anti-linear, and conversely; this establishes a natural self-inverse anti-linear isomorphism between these two complex-linear spaces of mappings. For given anti-linear (U: f |U), we have linear transpose(f) = (antidual(U): x&on;f ←x |dual(U)) and anti-linear iso star = (dual(U): *&on;a ←a |antidual(U)), which we can compose after transpose(f) to obtain antilinear star&on;(transpose(f)) = (dual(U): ({complex}: *&on;x&on;f |U) ←x |dual(U)); since * and f are anti-linear, while each input x is linear, the outputs of this are indeed linear, hence in dual(U) as asserted; and depend antilinearly on x. The mapping *&on;x&on;f ←x is also called a Hermitian transpose, or adjoint, of f; like f it is antilinear and produces outputs of the same kind as its inputs; but unlike f, those inputs and outputs are linear maps in U's dual, rather than members of U. Consequently, we cannot naturally compare them, to ask whether they are equal for instance, let alone whether one is minus the other; so questions of symmetry and anti-symmetry are not intrinsically present – though they may be induced by any isomorphism between U and its dual (e.g. a metric), so long as it is either linear or anti-linear. Use of such an isomorphism turns the discussion back into one of either Hermitian or bilinear metrics on U.

One thing we do have with anti-linear (U:|U) is the potential to play with the trace operator, which should yield at least a natural inner product. Given d, f each anti-linear (U:|U), we have linear (U: d&on;f |U); on {linear (U:|U)} we have trace inferred from trace(u×w) = w(u) where, for u in U and w in dual(U), u×w = (U: u.w(v) ←v |U) is linear. This gives ({scalars}: trace |{linear (U:|U)}) and trace is linear; so trace(d&on;f) depends linearly on d, antilinearly on f, yielding a complex output. In short, trace and composition provide us with a natural inner product on {anti-linear (U:|U)}: call this T, for now.
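A sketch of T in the same matrix representation as before (antilinear maps acting via s ↦ D.conj(s); D, F are random test data, and the name `T_inner` is mine):

```python
import numpy as np

# Antilinear d, f on C^2 as matrices D, F acting via s -> D @ conj(s);
# d∘f is linear with matrix D @ conj(F), so the natural inner product
# is T(d, f) = trace(D @ conj(F)).
def T_inner(D, F):
    return np.trace(D @ np.conj(F))

rng = np.random.default_rng(4)
D = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
F = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
k = 1 - 2j

assert np.isclose(T_inner(k * D, F), k * T_inner(D, F))           # linear in d
assert np.isclose(T_inner(D, k * F), np.conj(k) * T_inner(D, F))  # antilinear in f
```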

Decomposing anti-linear f, d, each (U:|U), into real and imaginary parts, we obtain …[unfinished] Furthermore, if d and f are real we can consider (d+if)&on;(d+if) = d&on;d + f&on;f +i.(f&on;d −d&on;f) to see how the …[unfinished]

Written by Eddy.