The three Ts

I here introduce two closely related concepts of linear algebra: transposition and trace.

Both are about linear maps and intimately involve tensor products. [All three were taught to me more confusingly than necessary before anyone let on how straightforward they are.] Important definition: for an R-linear space U, define dual(U) to be {linear (U|:R)}. There is a natural embedding of U in dual(dual(U)), namely: (U| u-> (dual(U)| t-> tu :R) :dual(dual(U))). When U is finite-dimensional, this is an isomorphism. I suspect the equivalent embedding of dual(U) in dual(dual(dual(U))) is an isomorphism in any case, but I don't know. The following tools are important if we want to deal with linear maps as matrices.
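(As a concrete sketch of my own, not part of the definition: in Python, taking R as float and U = R^2, the embedding of U in dual(dual(U)) is just evaluation. All names here are illustrative.)

    # A member of dual(U) is any linear function from pairs of floats to float.
    def t(u): return 3.0 * u[0] - u[1]       # an example member of dual(U)

    def embed(u):
        # The natural embedding of U in dual(dual(U)): u -> (t -> tu).
        return lambda t: t(u)

    u = (2.0, 5.0)
    assert embed(u)(t) == t(u)               # the embedding just evaluates t at u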

Transposition

The base definition of transposition is as an operation on relations whose final values are relations: when r relates x to a relation which relates y to z, transpose(r) relates y to a relation which relates x to z. When all the relations involved are mappings, this makes transpose(f,y,x) = f(x,y): transpose(f) is a mapping which `accepts f's arguments in reverse order', yielding f's answer to the suitably re-ordered arguments.
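In Python terms (a minimal sketch of mine, for the mapping case only), transpose is just currying with the arguments swapped:

    def transpose(f):
        # For a curried mapping f, transpose(f)(y)(x) = f(x)(y).
        return lambda y: lambda x: f(x)(y)

    f = lambda x: lambda y: x - 2 * y        # an example mapping of two arguments
    assert transpose(f)(7)(3) == f(3)(7)     # both give 3 - 14 = -11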

Thus, when (U|f:{linear (V|:W)}) is linear, for some S-linear spaces U, V and W, we have (V| transpose(f) :{(U|:W)}) defined by transpose(f,v,u) = f(u,v); but f and its outputs are linear, so this is linear in u - making (U| transpose(f,v) :W) linear - and linear in v, making transpose(f) itself linear (V| :{linear (U|:W)}). Furthermore (by the nature of the linear structure induced on {linear (X|:Y)} by linear structures on X and Y), for f and g both (U| :{linear (V|:W)}) and k a scalar:

transpose(f+g)
  = (V| v->
        (U| u-> ((f+g)(u))(v)
              = (f(u)+g(u))(v)
              = f(u,v)+g(u,v)
              = transpose(f,v,u)+transpose(g,v,u)
         :W)
      = transpose(f,v)+transpose(g,v)
      = (transpose(f)+transpose(g))(v)
     :{linear (U|:W)})
  = transpose(f) + transpose(g)

transpose(k.f)
  = (V| v->
        (U| u-> k.f(u,v)
              = k.transpose(f,v,u)
         :W)
      = k.transpose(f,v)
     :{linear (U|:W)})
  = k.transpose(f)

making transpose, itself, a linear map. Now, {linear (U| :{linear (V|:W)})} has W⊗dual(V)⊗dual(U) as a sub-space: on which transpose acts as

w×x×y ->
(V| v-> (U| u-> w.x(v).y(u) :W) = w×y.x(v) :{linear (U|:W)})
= w×y×x

Consequently, transpose maps (W⊗dual(V)⊗dual(U): :W⊗dual(U)⊗dual(V)) and, likewise, the reverse. Since it is one-to-one, this makes it an isomorphism between the given tensor subspaces of our linear mapping spaces. We also have a natural isomorphism between dual(U)⊗dual(V) and dual(V⊗U): this turns transpose into an isomorphism between W⊗dual(U⊗V) and W⊗dual(V⊗U). For given U, V, it does this for every W: from which I suppose we can infer a natural isomorphism between dual(U⊗V) and dual(V⊗U), leading to one between U⊗V and V⊗U. All such mappings, whether they are restrictions of transpose or mappings inferred as just given, are known as transposition, at least to linear algebra.
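To see this action concretely (my sketch, with W taken to be the scalars so that w is just a number; x, y and the dimensions are made-up examples):

    w = 2.0                                  # in W = R
    x = lambda v: v[0] + 4 * v[1]            # in dual(V), V = R^2
    y = lambda u: 3 * u[0] - u[1]            # in dual(U), U = R^2

    f  = lambda u: lambda v: w * x(v) * y(u) # w×x×y as a linear (U| :{linear (V|:W)})
    ft = lambda v: lambda u: w * y(u) * x(v) # w×y×x, its transpose

    u, v = (1.0, 2.0), (5.0, 1.0)
    assert f(u)(v) == ft(v)(u)               # transpose merely re-orders the arguments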

So long as we can always discuss {linear (V|:W)} in terms of dual(V)⊗W, it suffices to replace W with scalars. In this spirit, it suffices to discuss one instance of the isomorphism, ({linear (U|:dual(V))}: :{linear (V|:dual(U))}), and infer all others from this one using the tensor algebra. This is the transposition on dual(V)⊗dual(U) or dual(U⊗V): it is often more convenient to discuss the transposition induced on U⊗V, which turns u×v into v×u - and (in practice) everything else may be inferred from this.
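In finite dimension this one instance is the familiar matrix transpose: encoding f in {linear (U|:dual(V))} by a matrix M, transpose(f) is encoded by M's matrix transpose. (A sketch of mine, using numpy; the particular M, u and v are arbitrary.)

    import numpy as np

    M = np.array([[1., 2., 0.],
                  [3., 1., 4.]])             # encodes f in {linear (U|:dual(V))}, U = R^2, V = R^3
    f  = lambda u: lambda v: u @ M @ v       # f(u) is in dual(V)
    ft = lambda v: lambda u: v @ M.T @ u     # transpose(f), encoded by M.T

    u, v = np.array([1., 2.]), np.array([0., 1., 1.])
    assert f(u)(v) == ft(v)(u)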

As ever, we can transpose {linear (U|:X)}, for X not the dual of any linear space, via its embedding in {linear (U| :dual(dual(X)))}, which transposes to {linear (dual(X)| :dual(U))}. Notice, however, that we do have to go via this potentially non-invertible dual-dual map, (X| x-> (dual(X)| w-> wx :scalar) :dual(dual(X))), unless X is the dual of some linear space.

Trace

Now, consider any S-linear space V: on V⊗dual(V), a subspace of {linear (V|:V)}, we have a linear map induced from v×w -> w(v): this is known as trace. We may be unable to extend this to all of {linear (V|:V)}: for instance, the trace of the identity is the dimension of V, so we can expect problems if V is infinite-dimensional. However, it's definitely well-defined on V⊗dual(V).
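Concretely (my sketch, finite-dimensional): a rank-one member v×w of V⊗dual(V) acts as the outer-product matrix, and w(v) is exactly the sum of its diagonal entries.

    import numpy as np

    v = np.array([1., 2., 3.])               # in V = R^3
    w = np.array([4., 0., 1.])               # in dual(V), acting via the dot product

    A = np.outer(v, w)                       # v×w as the map (V| z-> v.w(z) :V)
    assert np.trace(A) == w @ v              # trace(v×w) = w(v) = 7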

We can take trace a little further. Any [u,t] in U×dual(V) implies a linear map ({linear (U|:V)}| a-> t(a(u)) :scalar), and if u×t = x×r we have some scalar k with u=k.x, r=k.t (or with x=k.u, t=k.r, in which case swap x with u, r with t in what follows) yielding r(a(x)) = k.t(a(x)) = t(a(k.x)) = t(a(u)): consequently, the resulting mapping from U×dual(V) to dual({linear (U|:V)}) factorises via U⊗dual(V) and (being manifestly linear in each factor) induces a linear embedding of U⊗dual(V) in dual({linear (U|:V)}). This is ({linear (V|:U)}: embed :dual({linear (U|:V)})) but (|embed) will only be all of {linear (V|:U)} when at least one of V and U is finite-dimensional. The best we can rely on is a dense sub-space of {linear (V|:U)}, such as U⊗dual(V), on which the embedding is initially defined.
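In matrix terms (my sketch): the functional a-> t(a(u)) induced by [u,t] is exactly a-> trace of the composite of a with the rank-one map u×t.

    import numpy as np

    u = np.array([1., 2.])                   # in U = R^2
    t = np.array([3., 1., 0.])               # in dual(V), V = R^3
    A = np.array([[1., 0.],
                  [2., 1.],
                  [0., 4.]])                 # a linear (U|a:V)

    assert t @ (A @ u) == np.trace(A @ np.outer(u, t))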

We consequently have a partial bilinear multiplication, ({linear (U|:V)}⊗{linear (V|:U)}| · :scalars) with (V⊗dual(U))⊗{linear (V|:U)} and {linear (U|:V)}⊗(U⊗dual(V)) as linear subspaces of (|·:). We can suppose this to be extended to as large a subspace of {linear (U|:V)}⊗{linear (V|:U)} as can be consistently induced from these given subspaces.

When either U or V is the scalars, this multiplication is just the trivial (dual(U)×U| [w,u]-> w(u) :scalar), and (|·:) is all of dual(U)×U, or of V×dual(V) in the [v,x]-> x(v) case. If we have a linear (V|a:V), we can compose it before any linear (V|p:U) and after any (U|q:V) and examine (poa).q and p.(aoq). Now, if p is u×t, with t in dual(V) and u in U, we get poa = (V| v-> p(a(v)) = u.t(a(v)) :U) = u×(toa), so (poa).q = (u×(toa)).q = (toaoq)(u) = (to(aoq))(u) = (u×t).(aoq) = p.(aoq). Thus p.q depends on p and q only via their composite poq, which is in {linear (U|:U)}: indeed, when p is u×t, poq is u×(toq) and p.q = t(q(u)) is exactly the trace of poq.
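At the matrix level (my sketch), reading p.q as trace(poq), the computation above is just associativity of composition under the trace:

    import numpy as np

    rng = np.random.default_rng(0)
    P = rng.standard_normal((2, 3))          # a linear (V|p:U), V = R^3, U = R^2
    Q = rng.standard_normal((3, 2))          # a linear (U|q:V)
    A = rng.standard_normal((3, 3))          # a linear (V|a:V)

    dot = lambda X, Y: np.trace(X @ Y)       # the multiplication p.q = trace(poq)
    assert np.isclose(dot(P @ A, Q), dot(P, A @ Q))   # (poa).q = p.(aoq)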

It suffices that we have a natural embedding of a dense sub-space, U⊗dual(V), of {linear (V|:U)} in the dual of {linear (U|:V)}. Equally, this gives an embedding of {linear (U|:V)} in the dual of U⊗dual(V): or a partial binary operator, which can sensibly be thought of as multiplication. How much more of {linear (U|:V)}⊗{linear (V|:U)} can be given a value for this product becomes interesting: it's the sort of thing a Banach space can be clear about, I think.

In particular, with U = V, we get a partial multiplication on {linear (V|:V)} yielding scalars: this embeds {linear (V|:V)} in the dual of one of its dense sub-spaces, so it is an inner product on {linear (V|:V)}. Furthermore, {linear (V|:V)} contains a natural unit element, (V| v->v :V) or 1. We define Trace = ({linear (V|:V)}: a-> 1.a :scalar), in dual({linear (V|:V)}), to be the image of this identity under our natural embedding: but we have to restrict it to the subspace of {linear (V|:V)} generated by maps on which it is finite. Notice that its value for 1 will be infinite if V is infinite-dimensional; but we do get V⊗dual(V) as a subset of (|Trace).
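For finite-dimensional V this Trace is the usual sum of diagonal entries, and its value on 1 is indeed the dimension (a quick check, my illustration):

    import numpy as np

    n = 4
    assert np.trace(np.eye(n)) == n          # Trace(1) = dim(V)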

If we take linear (U|a:V) and (V|c:U), we obtain the linear composite (V| a o c :V) and its trace is just a(c), or c(a), here identifying a with its embedding in dual({linear (V|:U)}). Indeed, for any composable loop of linear maps we can take the trace of the composite at any point around the loop (used as both start and end-point), and it is equal to the dual-action of any one map in the loop acting on the composite of all the others. Why is this true?
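The cyclic invariance, at least, is easy to check numerically (my sketch, with a loop of three maps through spaces of different dimensions):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((2, 3))          # R^3 -> R^2
    B = rng.standard_normal((3, 4))          # R^4 -> R^3
    C = rng.standard_normal((4, 2))          # R^2 -> R^4

    # Cut the loop AoBoC at each of its three points; the trace is unchanged:
    traces = [np.trace(A @ B @ C), np.trace(B @ C @ A), np.trace(C @ A @ B)]
    assert np.allclose(traces, traces[0])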

Trace is here fundamentally defined in terms of the representation of linear maps by tensor products: I would very much like to see how the multiplication here constructed, between {linear (V|:U)} and {linear (U|:V)} with scalar answers, could be defined by some more direct construction. The problem is that the definition can only work when the answer is finite ...

Written by Eddy.