Given a scalar domain, S a.k.a. {scalars}, a one-dimensional S-linear space, E, is some additive domain (so addition of members of E yields a member of E) on which S has a multiplicative action (so s.e is in E whenever s is in S and e is in E), cancellable except at zero (so s.e = t.e implies s=t or e is zero and s.e = s.f implies e = f or s = 0), for which there is some non-zero member, e, of E for which E = {s.e: s in S}.

It follows immediately that any one-dimensional linear domain is isomorphic to {scalars}; equally, that e may be replaced with s.e for any non-zero s: all members of E are parallel, so any non-zero one of them suffices to form a basis. Furthermore, thanks to cancellability, for given e in E, E = {s.e: s in S} induces a linear map (S: s ← s.e |E) in dual(E); this linear map serves entirely naturally the rôle of 1/e.
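As a concrete sketch (the class and function names here are my own invention, not anything from the text), one can model a one-dimensional space over the rationals as the multiples of a chosen basis member, with the reciprocal map 1/e induced by cancellability:

```python
from fractions import Fraction

class OneDim:
    """Member s.e of a one-dimensional space, recorded as the scalar s
    against a purely symbolic basis label e."""
    def __init__(self, s, basis):
        self.s, self.basis = Fraction(s), basis

    def __add__(self, other):
        assert self.basis == other.basis    # members of the same space
        return OneDim(self.s + other.s, self.basis)

    def __rmul__(self, scalar):             # scalar action: t.(s.e) = (t.s).e
        return OneDim(Fraction(scalar) * self.s, self.basis)

def reciprocal(e):
    """The linear map (S: s <- s.e |E) induced by non-zero e, i.e. 1/e."""
    assert e.s != 0                         # a basis member must be non-zero
    return lambda member: member.s / e.s    # well-defined by cancellability

e = OneDim(1, 'e')        # pick a basis member of E
f = 3 * e                 # any non-zero member serves equally well as basis
inv_f = reciprocal(f)     # recovers each member's coefficient relative to f
```

Since 6.e = 2.f, `inv_f(6 * e)` yields 2, illustrating that replacing e by s.e merely rescales the induced dual map.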

All the same, the tensor algebra cares about differences between linear
spaces: so, in principle, U&tensor;V and V&tensor;U are distinct spaces even
when one-dimensional. It may arise that each is &tensor;[A,…,A] for
some 1-dimensional linear space A, with U and V using lists of potentially
different lengths, in which case there *is* a natural
(order-preserving) isomorphism induced by the naturality of re-partitioning a
list of length n+m as a list of length m+n. [Kindred constructions may arise
with more-than-one-dimensional spaces in certain contexts where (constant)
tracing and permutation operators are being kept out of the way to avoid
confusing a situation which they will ultimately reduce to one
dimension.]

Furthermore, the tensor algebra introduces U&tensor;V as the span of
{u×v: u in U, v in V}. While the linear structure of U&tensor;V is
entirely described by its typical members, of form u×v = (U:
u.x(v) ←x |dual(V)), the general member of U&tensor;V is a sum of typical
members. When U and V aren't one-dimensional, most members aren't
typical. But when bulk(&tensor;, [A,B,…,G]) has at most one entry not
one-dimensional, it turns out that all members are typical.

Judicious use of permutations should let me get away with proving this for the case where A might not be one-dimensional, but B,…,G are. Take bases of B,…,G, i.e. non-zero members b,…,g of each respectively; any member of bulk(&tensor;, [B,…,G]) is then a scalar multiple of b×…×g; call this last h for brevity. Any typical member of bulk(&tensor;, [A,…,G]) is a×k.h for some scalar k; this is equal to a.k×h. A general member is a sum of such terms, but each is now of form a.k×h, so we can take the sum of the a.k members of A to obtain a member of A, call it z, for which my general member is z×h, hence typical.
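A minimal numerical check of this collapsing step, using numpy's kron to stand in for × (my choice of representation, not from the text):

```python
import numpy as np

h = np.array([2.0])                # basis b×…×g of the 1-dim bulk(⊗, [B,…,G])
a1 = np.array([1.0, 0.0, 3.0])     # members of a (here 3-dimensional) space A
a2 = np.array([0.0, 2.0, 1.0])
k1, k2 = 5.0, -1.0                 # the scalar attached to each term

# Each term a_i×(k_i.h) equals (k_i.a_i)×h, so the sum of such terms is
# (k1.a1 + k2.a2)×h — a typical member, with z = k1.a1 + k2.a2 in A.
general = np.kron(a1, k1 * h) + np.kron(a2, k2 * h)
z = k1 * a1 + k2 * a2
assert np.allclose(general, np.kron(z, h))
```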

Now, whenever bulk(&tensor;, [A,…,Z]) is one-dimensional, so is
each of A,…,Z. So when we *do* have U&tensor;V = V&tensor;U
one-dimensional, we at least know that U and V are separately
one-dimensional. When each is expressible as &tensor;[A,…,A] for some
fixed linear space A, possibly using distinct lengths of list, we can pick a
in A and obtain u = a×…×a in U and v similar in V, for
suitably many repeats of a in each case. These are non-zero, as is their
product, which is of the same form. Examining their product we find u×v
= v×u simply by re-arranging brackets in a long list of
a×…×a. Any member of U is k.u for some scalar k, any
member of V is h.v for some scalar h; and (k.u)×(h.v) = (k.h).u×v
= (h.k).v×u = (h.v)×(k.u) so any product of a member of U with one
of V is independent of order (provided scalar multiplication is). So, at
least in the one case I can think of where U&tensor;V = V&tensor;U can be
one-dimensional, it turns out that the tensor product is abelian.
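Representing each one-dimensional space as scalar multiples of a single coordinate (an assumption of this sketch, with kron again standing in for ×), the bracket-shuffling argument reduces to commutativity of scalar multiplication:

```python
import numpy as np

a = np.array([2.0])                  # non-zero a in the 1-dim space A
u = np.kron(np.kron(a, a), a)        # u = a×a×a in U = ⊗[A, A, A]
v = np.kron(a, a)                    # v = a×a   in V = ⊗[A, A]

ku, hv = 3.0 * u, -0.5 * v           # general members k.u and h.v
# (k.u)×(h.v) = (k.h).u×v = (h.k).v×u = (h.v)×(k.u):
assert np.allclose(np.kron(ku, hv), np.kron(hv, ku))
```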

However, there are other multiplicative ways of combining things aside from ×, for instance if V is dual(U), we get a multiplication, which can be characterized as u·v = u(v) ← u×v using the reading of u in U as a member of dual(V), hence ({scalars}: u |V) yielding u(v) a scalar. This will work equally as a tracing operator when (x×u)·(v×y) = u(v).x×y is used to infer, from typical elements, a multiplicative action of X&tensor;U on V&tensor;Y, yielding answers in X&tensor;Y; in this guise, it is called contraction, so I'll use that name for · also when it's the action of a linear function on its argument, like u(v). Note that composition of linear functions coincides with ·, as is seen by viewing the earlier x×u as a linear (X:|V) and v×y as a linear (V:|dual(Y)) with exactly the composite given in X&tensor;Y = {linear (X:|dual(Y))}.
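The claim that composition coincides with contraction is easy to check numerically; here outer products stand in for × and matrix multiplication for composition (a sketch over real coordinates, not anything the text depends on):

```python
import numpy as np

x, y = np.array([1.0, 2.0]), np.array([7.0, 8.0])   # x in X, y in dual(Y)-coords
u = np.array([3.0, 4.0, 5.0])                        # u in dual(V)
v = np.array([1.0, 0.0, 2.0])                        # v in V

# x×u as a linear (X: |V) is the matrix outer(x, u); v×y as a linear
# (V: |dual(Y)) is outer(v, y); their composite realizes (x×u)·(v×y).
composite = np.outer(x, u) @ np.outer(v, y)
# Contraction says this equals u(v).x×y:
assert np.allclose(composite, (u @ v) * np.outer(x, y))
```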

When U&tensor;V and V&tensor;U are equal we can presume that they are A&tensor;B&tensor;C, with A&tensor;B = B&tensor;C as one of U, V while the other is A = C: without loss of generality, we can take A = U = C, so that B&tensor;U = V = U&tensor;B. This is no simpler, but what that tells us one layer down now informs our reasoning one layer up. Taking non-zero u in U, b in B and c in C (= U), we have u×(b×c) equal to (u×b)×c via an (order-preserving) natural isomorphism; and A = C lets us write u = k.c, c = u.h for some scalars k, h with k.h = 1, leading to u×(b×c) = (u.h×b)×k.c = (c×b)×u.

Describe a collection C of 1-dimensional linear spaces (over a common
field) as &tensor;-abelian precisely if: U, V in C with U&tensor;V =
V&tensor;U implies × is abelian between U and V. The above
lets us infer that any collection of finite &tensor; products of members of C
is also &tensor;-abelian.

Define a space U as 1-&tensor; irreducible iff U&tensor;V = V&tensor;U implies V = bulk(&tensor;, ({U}:|n)) for some natural n. Then any finite &tensor; product of 1-&tensor; irreducibles is either of this last form, for some irreducible U, or itself irreducible. Thus we can induce that, whenever U and V are finite &tensor; products of 1-&tensor; irreducible linear spaces, U&tensor;V = V&tensor;U implies that, for some 1-&tensor; irreducible W and naturals u and v, U = bulk(&tensor;, ({W}:|u)) and V = bulk(&tensor;, ({W}:|v)), whence × is abelian between U and V. In particular, any collection of 1-&tensor; irreducibles is trivially &tensor;-abelian.

Given that I can think of no way to construct spaces U and V with U&tensor;V = V&tensor;U other than as bulk(&tensor;, ({A}:|n)) for various n and fixed A, I am inclined to conjecture that every 1-dimensional linear space is either 1-&tensor; irreducible or of form bulk(&tensor;, ({A}:|n)) for some 1-&tensor; irreducible A.

Now suppose {scalars} to be an ordered field (e.g. the rationals, reals or
surreals) and V a one-dimensional linear space over {scalars}. Thus we can partition
{scalars} as {negatives}, {0} and {positives}. For any given non-zero u in V,
we can likewise partition V into those with the same sign as u (i.e. u.x for
positive x), those with opposite sign (u.x for negative x) and
zero. Furthermore, for any v = u.t with t positive, {u.x: x > 0} = {v.x: x
> 0}, so the same-sign and opposite-sign relations just
introduced are genuinely symmetric (as the chosen wording tacitly
implied). Any product of positives is positive and 1 > 0, so same-sign
is also reflexive and transitive, making it an equivalence; it has
three equivalence classes. One of these is {0}; the other two are
interchanged by negation.

In general, there is no reason to regard one of these non-zero
sign-equivalence classes as positive and the other negative. However,
when we look at V&tensor;V, which is again
one-dimensional, we *do* obtain a natural choice of which class to deem
positive: namely, {k.v×v: non-zero v in V, scalar k > 0}.

Every member of V&tensor;V is v×u for some v, u in V (since every member is typical) and, since V is 1-dimensional, if either v or u is non-zero, the other is a scalar multiple of it, so in fact every member of V&tensor;V is of form k.v×v for some v in V. Thus, for any non-zero v in V, the sign-equivalence classes in V&tensor;V are indeed {zero}, {k.v×v: k>0} and {k.v×v: k<0}; it remains to show that changing choice of v doesn't interchange the last two. If u = h.v is also non-zero then h must be non-zero: {k.u×u: k>0}'s typical member is then k.h.h.v×v with k>0 and h.h > 0, so k.h.h > 0 and each u-positive member of V&tensor;V is also v-positive; this being true for arbitrary non-zero u and v we can infer that the converse is true and we don't interchange the positive and negative classes.
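A quick sanity check of the sign's independence from the choice of v, in real coordinates (my representation, with kron as ×):

```python
import numpy as np

v = np.array([1.5])                      # non-zero member of 1-dim V
k = 2.0
for h in (0.3, -2.0, 7.0):               # u = h.v ranges over other members
    u = h * v
    # k.u×u = (k.h.h).v×v and h.h > 0, so the sign class is unchanged:
    assert np.sign(np.kron(k * u, u).item()) == np.sign(np.kron(k * v, v).item())
```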

Now, any metric on our one-dimensional space V is simply a mapping from it to its dual, which is just a member of the one-dimensional space dual(V)&tensor;dual(V), so has a well-defined sign.
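In coordinates (again my own representation), a metric on one-dimensional V is a single number g, and its sign class is witnessed by the sign of g applied to v twice, for any non-zero v:

```python
import numpy as np

g = np.array([[-3.0]])             # a metric on 1-dim V: a map V -> dual(V)
for s in (1.0, -4.0, 0.25):        # any non-zero member v, in coordinates
    v = np.array([s])
    # g(v)(v) = -3.s.s < 0 regardless of v: this metric's sign is negative.
    assert np.sign(v @ g @ v) < 0
```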

Written by Eddy.