To define the tensor bundle, with respect to some scalar domain {scalars}, of a smooth manifold, it suffices to know which morphisms between the scalar domain and the manifold are smooth. Those from the scalar domain to the manifold are called smooth trajectories, while those from the manifold to the scalar domain are called smooth scalar fields on the manifold. One may know these by inference from a given smooth atlas of the manifold, or they may be given ab initio. The crucial properties are:
For any location, m, on our smooth manifold, M, consider:
Vary(m) = {scalar fields ({scalars}:x:M): x is smooth on some neighbourhood of m}, and
Paths(m) = {lists [(M:f|E), t]: E is open in {scalars}, f is a smooth trajectory, t is in E and f(t) = m}.
Now, ({scalars}:x:M) in Vary(m) and [(M:f|E),t] in Paths(m) yield
({scalars}:x&on;f:E). This is defined at t, since f(t)=m and x is defined at m:
since f is continuous and x is smooth in some open U with m in U, (U:f|) is open
in {scalars}, so its intersection with (open) E is also open. Thus (:x&on;f|)
subsumes a neighbourhood of t: and x&on;f is (at least on this neighbourhood) a
composite of a smooth trajectory and a smooth scalar field, so smooth. Taking
its derivative at t, we can obtain (x&on;f)'(t), the rate of change of x along
the trajectory f. Intuitively, we expect this to depend on x and f only via
their behaviour near
m. To explore the intuition's notion
of near
, consider two relations (on Vary(m) and Paths(m),
variously):
[f, t] in Paths(m)implies
(x &on; f)'(t) = (y &on; f)'(t)
x in Vary(m)implies
(x &on; f)'(t) = (x &on; g)'(r).
It follows immediately from these definitions that both relations (each written ~m; context makes clear which is meant) are equivalence relations. In this light, the first, ~m, distinguishes two scalar fields precisely if there is some trajectory along which they vary at different rates as the trajectory passes through m. Likewise, the second, ~m, distinguishes two trajectories through m precisely if there is some scalar field which varies at different rates, along the trajectories, as they pass through m.
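A minimal sympy sketch may help make these relations concrete. It assumes, purely for illustration, that M is the real plane with co-ordinates a and b, that {scalars} is the reals and that m is the origin; the helper rate() and the particular fields and trajectories are ad hoc choices, not anything forced by the construction above:

    import sympy as sp

    t, a, b = sp.symbols('t a b')

    # Two trajectories through m = (0, 0), each with f(0) = m:
    f = (t, t**2)
    g = (2*t, sp.sin(t))

    # Two scalar fields, smooth near m, differing only by a constant:
    x = a + 2*b
    y = a + 2*b + 5

    def rate(field, path, at):
        # (field &on; path)'(at): rate of change of the field along the path.
        return sp.diff(field.subs({a: path[0], b: path[1]}), t).subs(t, at)

    # x ~m y: neither sample trajectory distinguishes fields differing by a constant.
    assert rate(x, f, 0) == rate(y, f, 0)
    assert rate(x, g, 0) == rate(y, g, 0)

    # But [f, 0] and [g, 0] are distinguished: x varies at different rates along them.
    print(rate(x, f, 0), rate(x, g, 0))  # 1 and 4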
Thus, intuitively, two equivalent (i.e. indistinguishable at m) trajectories have the same tangent as they pass through m; two equivalent scalar fields have the same gradient at m. We formalise this by defining a tangent to the manifold at m to be, simply, an equivalence class of trajectories and a gradient on the manifold at m to be, likewise, an equivalence class of scalar fields. We can denote each as a derivative of an example member of its equivalence class, using apostrophe ' (a.k.a. dash) for the tangent and a d prefix for the gradient:
f'(t) = {[g, r] in Paths(m): (x&on;g)'(r) = (x&on;f)'(t) for every x in Vary(m)}; this is the tangent-equivalence class of [f, t] at m.
dx(m) = {y in Vary(m): (y&on;f)'(t) = (x&on;f)'(t) for every [f, t] in Paths(m)}; this is the gradient-equivalence class, at m, of x.
A trajectory, (M:f|E), passes through many places in M: for each t in E, we obtain f'(t), a tangent at f(t). Likewise a scalar field, ({scalars}:x|U), yields a gradient at each point of U. Discussion of how f' and dx vary will have to wait until we have some tools to let us relate tangents (or gradients) at different points of M. First, let's just see what we now have at each point of M.
I'll use the following names for the collections of tangents and gradients (respectively) at a point, m, of a manifold M:
T(M, m) = {f'(t): [f, t] in Paths(m)} and G(M, m) = {dx(m): x in Vary(m)},
but I'll simply refer to them as T and G when it's clear which manifold and point on it we're dealing with.
It remains to show that the spaces, T and G, to which the equivalence relations thereby reduce Paths(m) and Vary(m), are, indeed, linear spaces. This is most readily seen by looking first at the gradients: we can add and scale scalar functions (i.e., Vary(m) itself is a linear space) and our equivalence relation respects this. It is easy to show that if each of two scalar fields is indistinguishable at m from a corresponding one of another pair, then the sum of the first pair is indistinguishable from the sum of the second. Thus we can define df+dg to be d(f+g). Likewise, applying a common scaling to two indistinguishable scalar fields produces indistinguishable results; so we can, for constant scalar k and scalar field f, define k.df = d(k.f); as a special case of this, d(k−f) is known as −df; and dk (which is an additive identity, dk+df = d(k+f) = df, since adding a constant to all outputs of f has no impact on (f&on;h)'(t) for any [h,t] in Paths(m)) is known as zero or 0.
The reason each member of Vary is only required to be smooth on some neighbourhood of m is to make Vary an additive group; I've defined addition of functions by implicitly extending each scalar function to be zero where only the other scalar function is defined. Even when each function is smooth everywhere it's defined, this can yield a function which isn't smooth (it may well be discontinuous) at the boundary of each function's domain. Thus, if Vary's members were required to be smooth everywhere they're defined, Vary wouldn't be closed (so wouldn't form a group) under this (inclusive) addition. It suffices to require each of Vary(m)'s members to be smooth on a neighbourhood of m; a (finite) sum of its members is then smooth on the intersection of their respective neighbourhoods; and that (finite) intersection is itself a neighbourhood of m.
One could get out of that issue by defining addition to yield a function on the intersection of the two domains, to avoid the edge-effects; however, if I add two functions and one of them isn't defined everywhere the other was, subtracting this one from the sum shan't yield the other (though it shall yield one equivalent to it); so the addition fails to form a group, hence Vary wouldn't be a linear space. An alternative approach is to demonstrate that each member of Vary is equivalent to some smooth ({scalars}:|M), but this gets quite fiddly. One could also define Vary to be the collection of scalar fields on the whole of M; however, the gradient / tangent construction is essentially local, so it's nicer to only require locally-defined scalar functions.
Thus it is shown that we can add and scale gradients, the results being likewise gradients; and our addition on gradients respects (and hence inherits) the additive group structure of Vary(m) and, hence, forms a group itself. Thus G is a linear space.
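Here is a small numeric check of that structure, in the same toy setting as before (M the real plane, m the origin, both mere illustrative assumptions): the pairing with trajectories respects d(x+y) = dx+dy, d(k.x) = k.dx and dk = 0:

    import sympy as sp

    t, a, b = sp.symbols('t a b')

    def rate(field, path):
        # (field &on; path)'(0), each path passing through m = (0, 0) at t = 0.
        return sp.diff(field.subs({a: path[0], b: path[1]}), t).subs(t, 0)

    x = sp.exp(a) * sp.cos(b)
    y = a*b + 3*b
    k = sp.Integer(7)

    for f in [(t, t**2), (sp.sin(t), -t), (t**3 - t, 2*t)]:
        assert rate(x + y, f) == rate(x, f) + rate(y, f)  # dx+dy := d(x+y) is consistent
        assert rate(k * x, f) == k * rate(x, f)           # k.dx := d(k.x)
        assert rate(k + x, f) == rate(x, f)               # dk + dx = dx, i.e. dk = 0
    print("linear structure respected along all sample trajectories")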
A chart of some neighbourhood of m in M is an isomorphism between that neighbourhood of m and an open set in some linear space, V (with respect to which our atlas is defined); by choice of a basis in V, one can establish a linear isomorphism between V and {mappings ({scalars}::n)} for some index set n; composing this isomorphism with our chart we obtain an isomorphism between our neighbourhood in M and one in {mappings ({scalars}::n)}. Since our original chart and its inverse are smooth, in our atlas's terms, and the linear isomorphism we compose it with and the inverse of this are trivially also smooth (in the usual terms of functions between linear spaces; any linear map is, at each input, its own first derivative, which is thus constant, so all higher derivatives are zero), so are the composites. Let x be some such composite; then, for each m in our neighbourhood, x(m) is a mapping ({scalars}: x(m) :n) and, for each i in n, x(m,i) is a scalar. Let y be the transpose of x, y = (: (: x(m,i) ←m :M) ←i :n); each ({scalars}: y(i) :M) is just the result of composing a co-ordinate projection f(i) ← ({scalars}:f:n) after x (this projection is a linear map, hence smooth), hence each y(i) is smooth and thus a scalar field on our neighbourhood of m. We thus obtain dy(i) as a gradient-valued field at each point in some neighbourhood of m, for each i in n. The scalar fields y(i) are known as (local) co-ordinates in M and we'll shortly see that the gradients of the co-ordinates arising from a chart necessarily provide a basis of the gradients at each point in our chart.
Now: x is a smooth isomorphism, identifying some neighbourhood U of m with some open set O in {mappings ({scalars}::n)}, a linear space. It consequently induces (by composing it or its inverse before) an isomorphism between the sets of scalar functions on U and on O; and (by composing after) an isomorphism between trajectories in U and in O. The same operation as we used to obtain G(m) and T(m) can be applied in O to obtain gradients and tangents in O, but this is open in a linear space, in which we already have the ability to do differentiation, which yields one natural isomorphism between the tangents at x(m) and the linear space and a second between the gradients at x(m) and the dual of the linear space. Let X = reverse(x) be the smooth isomorphism back from O to U; from [f,t] in Paths(m) and h in Vary(m) we obtain [x&on;f,t] in Paths(x(m)) and h&on;X in Vary(x(m)); composing these we get h&on;X&on;x&on;f = h&on;f, so the isomorphisms x and X induce between Vary(m) and Vary(x(m)) and between Paths(m) and Paths(x(m)) respect the compose and differentiate operation central to the equivalences that induced gradients and tangents; consequently, these isomorphisms induce isomorphisms between the two gradient spaces and between the two tangent spaces. The dual of the linear space has the natural basis (: (: f(i) if i in (:f|), else 0 ←f :) ←i |n) and our isomorphism identifies this with dy, which must thus be a basis of G(m).
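For a toy illustration of dy being a basis, take a patch of the real plane as M (an assumption of the sketch only) and an arbitrary smooth chart; against the background co-ordinates, the gradients dy(i) at m are represented by the rows of the chart's Jacobian, and their forming a basis of G(m) is exactly the invertibility of that matrix:

    import sympy as sp

    a, b = sp.symbols('a b')   # background co-ordinates on the patch
    chart = sp.Matrix([sp.exp(a) - sp.cos(b), a + b])   # a smooth chart near m

    m = {a: 0, b: 0}
    J = chart.jacobian(sp.Matrix([a, b])).subs(m)   # row i represents dy(i) at m

    print(J)             # Matrix([[1, 0], [1, 1]])
    assert J.det() != 0  # dy(0), dy(1) are independent and span: a basis of G(m)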
A direct proof that the tangents form a linear space is somewhat harder: while it is easy to perform scaling (simply scale the parameter of a trajectory) it is not immediately clear how one performs addition (one cannot do this on trajectories except within infinitesimal neighbourhoods which, while cute, I don't want to involve). However, linear algebra comes to the rescue. The space of linear maps from a linear space, e.g. G, to its field (our scalar domain) is a linear space (the general proof of this is fairly straightforward), known as the dual of the original linear space.
Now, for any trajectory through m and any scalar field at m, we can obtain the rate of change of the scalar field along the trajectory. Fixing on a single trajectory and looking at the value of this for various scalar fields, we see that this rate of change respects the addition and scaling of gradients introduced above: ((x+y)&on;f)'(t) = (x&on;f)'(t) + (y&on;f)'(t) and ((k.x)&on;f)'(t) = k.(x&on;f)'(t),
so that, in fact, each trajectory defines a linear map from G to our scalar domain. It is readily seen that equivalent trajectories produce exactly the same linear map and, thus, that the rate of change operation provides us with an embedding of T in the dual of G: furthermore, scaling of the parameter of a trajectory scales the member of G's dual thus obtained. The action thus defined is f'(t)·dx(m) = (x&on;f)'(t) for [f,t] in Paths(m) and x in Vary(m).
Now, by the nature of the way we defined G and T, distinct members of G imply a member of T which distinguishes them. Likewise distinct members of T imply a member of G which distinguishes them, so distinct members of T have distinguishable images in the dual of G. It follows that the rate-of-change embedding of T in G's dual is monic. It remains to show that every member of G's dual is the image of some tangent; we'll then have a natural isomorphism between T and dual(G) and it'll be natural to simply interpret T as dual(G) and, conversely, G as dual(T).
So, once again, introduce our chart x of some neighbourhood of m with induced co-ordinates y yielding basis dy of G(m). Any member w of dual(G) is entirely determined by its values on a basis of G, hence by w&on;dy; so let f = (: x(m) +t.(: w(dy(i)) ←i :) ←t :). This is a trajectory in our linear space and f'(0) is manifestly w&on;dy; compose X = reverse(x) after it to obtain a trajectory X&on;f in M with X(f(0)) = m; then (X&on;f)'(0) is necessarily a tangent at m whose image under the action above is exactly w, since it contracts with each dy(i) to yield (y(i)&on;X&on;f)'(0) = ((: q(i) ←q :)&on;x&on;X&on;f)'(0) = ((: q(i) ←q :)&on;f)'(0) = f'(0,i) = w(dy(i)). Thus each w in dual(G) is the image of some tangent at m under the embedding of tangents in dual(G) induced by our action.
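The same surjectivity argument can be walked through numerically; the chart below (chosen so its inverse is explicit) and the values taken for w&on;dy are arbitrary illustrative assumptions:

    import sympy as sp

    t, a, b, u, v = sp.symbols('t a b u v')

    # Chart x on a neighbourhood of m = (0, 0), with explicit inverse X:
    #   x(a, b) = (a + b, b);  X(u, v) = (u - v, v)
    x_map = (a + b, b)
    X_map = (u - v, v)

    w_on_dy = (3, -2)   # the values w(dy(0)), w(dy(1)); any pair would do

    # f = (: x(m) + t.(: w(dy(i)) <-i :) <-t :), a trajectory in the chart's linear space:
    x_of_m = (0, 0)
    f = tuple(x_of_m[i] + t * w_on_dy[i] for i in range(2))

    # X&on;f is a trajectory in M with X(f(0)) = m; contract its tangent with each dy(i):
    Xf = tuple(c.subs({u: f[0], v: f[1]}) for c in X_map)
    for i in range(2):
        y_i_along = x_map[i].subs({a: Xf[0], b: Xf[1]})   # y(i)&on;X&on;f
        assert sp.diff(y_i_along, t).subs(t, 0) == w_on_dy[i]
    print("(X&on;f)'(0) contracts with dy to give w&on;dy, as claimed")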
Thus we have a pair of mutually dual linear spaces, G and T, at each point. From these, standard tensor algebra produces a full menagerie of tensor spaces. These can all be expressed, from our scalar domain and either G or T, as spaces of linear maps. Referring to {scalars} as R, for the present, T = dual(G) = {linear (R::G)} and G = dual(T) = {linear (R::T)};
and, in general (once you have linear spaces V, W obtained by any of these constructions), W⊗V = {linear (W::dual(V))}.
Standard linear algebra yields natural isomorphisms between (W⊗V)⊗U and W⊗(V⊗U) and it is standard to identify these spaces via such isomorphisms; this makes ⊗ associative and allows us to use it to combine an arbitrary list of linear spaces using bulk(⊗) in the usual manner.
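In components, the identification that makes ⊗ associative is invisible, which is why it is harmless; a quick numpy check:

    import numpy as np

    rng = np.random.default_rng(0)
    w, v, u = rng.standard_normal(2), rng.standard_normal(3), rng.standard_normal(4)

    wv_u = np.tensordot(np.tensordot(w, v, axes=0), u, axes=0)  # (w(x)v)(x)u
    w_vu = np.tensordot(w, np.tensordot(v, u, axes=0), axes=0)  # w(x)(v(x)u)
    assert np.allclose(wv_u, w_vu)  # identical (i, j, k)-indexed components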
Given any list of linear spaces, any permutation of the list induces isomorphisms between the spaces obtained by applying bulk(⊗) to the list and the result of permuting it. These isomorphisms are natural to the permutation but not necessarily natural to the spaces of tensors: in particular, when all entries in the list are the same space, the induced isomorphism is not the identity (unless the permutation is an identity) so it is not natural to identify tensor spaces via such isomorphisms. (In any case where the list has two entries the same, the permutation is not deducible from the spaces themselves, hence naturality for the permutation fails to induce naturality for the spaces.)
When the list is of length two, the only non-identity permutation, [1,0], swaps the two entries; in the theory of permutations, it is normal to refer to a permutation which swaps two entries as a transposition, which gives rise to the usage of transposition also as the name for the permutation-induced isomorphism between V⊗W and W⊗V. I prefer to refer to the permutations in question as swaps, to avoid conflation with transposition as applied in general to relations – which, when applied to mappings, turns a mapping f whose outputs are mappings into a mapping which accepts the first two inputs in the reverse order, transpose(f) = (: (: f(a,b) ←a :) ←b :) where f = (: (: f(a,b) ←b :) ←a :). None the less, it is possible to recover the orthodox description of the induced isomorphism of tensor spaces as transposition.
Any member of a linear space can always be interpreted as a linear map from the space's dual to scalars; consequently, any member of W⊗V can be interpreted as a mapping which accepts a member of dual(V) and produces a member of W, which can in turn be understood as a mapping which accepts a member of dual(W) and produces a scalar. Thus our member of W⊗V is understood as a function accepting two inputs and producing a scalar output; as such, we can apply the standard meaning of transposition to it, swapping the order in which it receives its two inputs to produce the same output. In this reading, the transpose is indeed exactly the member of V⊗W that one obtains from the isomorphism induced by the permutation [1,0]. Consequently, I have no reservations about using transpose to refer to this operation on tensor spaces.
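A short numpy sketch of the swap: reading a member of V⊗V as a function of two inputs, its transpose swaps the inputs while producing the same scalar, yet it is not the identity map on V⊗V, which is why identifying the two spaces via it would be wrong:

    import numpy as np

    rng = np.random.default_rng(1)
    # A generic member of V(x)V, V three-dimensional: a sum of two outer products.
    T = (np.tensordot(rng.standard_normal(3), rng.standard_normal(3), axes=0)
         + np.tensordot(rng.standard_normal(3), rng.standard_normal(3), axes=0))

    swap = T.T   # the isomorphism induced by the permutation [1, 0]

    p, q = rng.standard_normal(3), rng.standard_normal(3)  # stand-ins for dual inputs
    assert np.isclose(p @ T @ q, q @ swap @ p)  # transpose: same output, inputs swapped
    assert not np.allclose(T, swap)             # but not the identity on V(x)V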
The lack of natural identification among the different permutations of a tensor space leads to one minor wrinkle: we have to make a choice about how dual acts on a tensor product. When considering dual(V⊗W), both dual(V)⊗dual(W) and its transpose are candidates; and there is no natural reason to choose either over the other – but, for definiteness in the notation, we must make some such choice. (For a longer list, e.g. U⊗V⊗W, only same order or its exact reverse are candidates; these are clearly more natural than all other permutations.) Given u in U, w in W, n in dual(U) and m in dual(W), we have u×w in U⊗W; when we contract (u×w)·(m×n) it is natural to obtain u×n.(w·m); the contraction operator · acts between mutually dual factors of the tensors on either side of it, taking the rightmost factor of its left operand and the leftmost factor of its right operand. Reading dual(W)⊗dual(U) as dual(U⊗W) would give us the alternative of reading each parenthetical term of (u×w)·(m×n) as a single tensor factor dual to the other, hence contracting fully to obtain (u·n).(w·m).
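Both readings of the contraction can be spelt out in components (here via numpy's einsum; the text's covector m is renamed mm in the code, to avoid clashing with the point m):

    import numpy as np

    rng = np.random.default_rng(2)
    u, w = rng.standard_normal(4), rng.standard_normal(3)    # u in U, w in W
    mm, n = rng.standard_normal(3), rng.standard_normal(4)   # mm in dual(W), n in dual(U)

    # Adopted reading: contract the rightmost factor of u(x)w with the leftmost
    # of mm(x)n, leaving u(x)n scaled by w.mm:
    adopted = np.einsum('i,j,j,k->ik', u, w, mm, n)
    assert np.allclose(adopted, np.tensordot(u, n, axes=0) * (w @ mm))

    # Alternative reading (dual(W)(x)dual(U) as dual(U(x)W)): contract fully:
    alternative = np.einsum('i,j,j,i->', u, w, mm, n)
    assert np.isclose(alternative, (u @ n) * (w @ mm))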
Every space we can obtain from G or T using ⊗ and dual can be expressed in a unique manner as bulk(⊗) applied to a list, each entry of which is either G or T.
Now, this gives us a full family of tensor spaces induced from G (or from T) at each point, m. For a physical theory, however, we need to have tensor-valued functions of position. The value of such a function, at m, must necessarily be in the appropriate tensor space at m. This follows from the necessity, if the tensor-valued function is to have meaning in a physical theory, of being able to examine how the tensor-valued function interacts with the gradients of scalar fields and the tangents to physical trajectories. Consequently, such functions must take values, at neighbouring points, in conceptually separate tensor spaces. To address this, we shall need to show that the base linear space, G, from which the tensor spaces are derived, has the same dimension at each point: we shall then need to identify what should be understood by continuity and smoothness for such functions.
We shall refer to tensor-valued functions whose value, at each point, is in a tensor space at that point as tensor fields. Likewise, gradient (or covector) fields and vector (or tangent) fields take, at each point, values in the gradient space and tangent space (respectively) at that point. Thus, for instance, if h is some scalar field, so that dh(m) is a gradient at each point m, we have dh defined as a gradient field.
For each tensor space derived from G, we have that space at each point of our smooth manifold. This collection of (as we shall see, isomorphic) spaces forms what's known as a fibre bundle, or just bundle, over the smooth manifold. The tensor fields we are discussing are also known as sections of the relevant bundle.
What we can do for a single scalar field, we can equally do for a family thereof: given ({scalar fields}: x |I) for some index set I, we thereby obtain ({gradient fields}: dx |I). The values this takes at any given point, m, are all gradients at m, so we can ask whether they are linearly independent and whether they span the space of gradients at m. If they do both, according to the usual definitions, we say that they form a basis and that the space of gradients at m has dimension |I|. In such a case, we say that the scalar fields given by x are chart-forming variates at m.
Likewise, for any basis of G at a point m (bearing in mind that each gradient is, by definition, a set of scalar fields equivalent to one another at m) we can obtain, from it, an ({scalar fields}: x |I) for which the basis is dx (namely: let I be the set of members of our basis of G at our point m and, for each i in I, let x(i) be any scalar field in the equivalence class i). Consequently, we obtain an equivalence between chart-forming families of variates at m and bases of G at m.
Now, a family of variates is defined at more than one point (whereas a basis of G at a point m is only defined at m). Consequently, we can ask whether it is chart-forming at more than one point. Furthermore, it seems (intuitively) obvious that a family of variates which is chart-forming at some point will also be chart-forming throughout some neighbourhood of that point. So if we have a family of variates which are chart-forming in some neighbourhood, we say this family forms a chart of that neighbourhood: and, indeed, this agrees with the usual definition of a chart.
Within a chart, it is immediately clear that the dimension of G must be constant (because it is |I| throughout). Thus if we can cover the entirety of our smooth manifold with overlapping charts (as, for instance, if we define it in terms of a smooth atlas) we can assert that the dimension of G is constant. Alternatively, we may take it as axiomatic that the dimension is constant: it will then follow (trivially) that all chart-forming families of variates have (Set-)isomorphic index sets and that the manifold is covered by such charts. Consequently, definition in terms of an atlas is natural in the sense that we can always recover (the completion of) the atlas using scalar fields forming charts when we start from any other definition which produces all of the above conditions.
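A sketch of testing where a family of variates is chart-forming, again on a patch of the real plane (an illustrative assumption): the family (a²+b, b) is chart-forming wherever a is nonzero, so wherever it is chart-forming at a point it is indeed chart-forming on a neighbourhood of that point:

    import sympy as sp

    a, b = sp.symbols('a b')
    variates = sp.Matrix([a**2 + b, b])          # a family of two variates
    d = variates.jacobian(sp.Matrix([a, b])).det()

    print(d)                            # 2*a: dx is a basis exactly where a != 0
    print(d.subs({a: 1, b: 0}))         # 2: chart-forming at, and so near, (1, 0)
    print(d.subs({a: 0, b: 5}))         # 0: not chart-forming anywhere on a = 0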
Provided that the dimension of our gradient space is constant, the dimension of any tensor space derived from it will also be constant. More importantly, the gradient space at any point is isomorphic to the gradient space at any other and any derived tensor space is likewise isomorphic from one point to another. It should be noted that composition of an autoisomorphism (of either space) with any isomorphism between isomorphic spaces produces another isomorphism between those spaces: consequently, we have plenty of freedom to choose among the isomorphisms between isomorphic spaces. In the absence of any natural (i.e. independent of how we describe the space) isomorphism, we should be wary of giving any special significance to any particular isomorphism between corresponding spaces at distinct points. Indeed, it is this freedom of choice which leads to the variety of differential operators which we may sensibly use on a smooth manifold.
Now, continuity is usually understood in terms of a function from one topological space to another. In this context, it is defined as meaning that the inverse image of any open set is open. However, our tensor fields don't even have the same space for their destination as we move around the domain: we cannot ask whether a set is "open" when it consists of exactly one member from each of a family of isomorphic but disjoint spaces.
However, we do have scalar fields at our disposal; and these are smooth. It makes perfect sense to treat the gradients of these as continuous (nay, smooth) functions of position. We can then take the product of any smooth scalar field with the gradient of any smooth scalar field, or any finite sum of such products, as being smooth (and, implicitly, continuous). This suffices to define smoothness of gradient fields and of any tensor field expressible simply as a tensor product of copies of G.
For any vector (i.e. tangent) field, we can look at its contractions with arbitrary smooth gradient fields. If these are all smooth, we can sensibly define the tangent field to be smooth. Consequently, we are able to construct a meaning for smoothness of general tensor fields by asserting the smoothness of any sum or (tensor) product of smooth fields and applying this to the given notion of smoothness for scalar, gradient and vector fields.
We can, likewise, define continuity by asserting the continuity of all smooth fields, plus possibly some collection of continuous scalar fields, and the continuity of any sum or (tensor) product of continuous fields. This provides us with (just) enough of a notion of "sameness" of tensors from one member of a bundle to another that we can think of each bundle as though it formed a continuous space (whose dimension is the product of that of the bundled tensor space and that of the manifold – i.e. of the gradient space). However, this is not enough of an idea of "sameness" to enable us to consider any tensor function to be "constant". This will come back to haunt us when we come to develop differential operators on our manifold (i.e. to decide what the derivatives are for all these tensor fields which we have just declared to be "smooth").
We can (and orthodox treatments of smooth manifolds generally do) arrive at equivalent notions of smoothness and continuity by using a chart to represent a patch of the manifold as part of a linear space; this induces natural representations of the tensor spaces associated with the manifold, at each point covered by the chart, as corresponding tensor spaces of the chart's target linear space. Since this is a fixed linear space, its induced tensor spaces are likewise fixed, enabling us to define smoothness, continuity and even constancy in the usual way. One can then demonstrate that any tensor-valued function on our manifold which yields a smooth or continuous representation via one chart naturally does the same via any other chart, on the intersection of the two charts' domains (but the same does not hold for constancy). It is then possible to define smoothness and continuity (but not constancy) by requiring the tensor function's representations via all charts (whose domain meets that of the tensor function) to be smooth or continuous, as appropriate. I prefer, however, to address as much as possible of the manifold's properties in terms of the manifold alone, only resorting to charts where I must.
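The chart-independence of smoothness can be checked by hand in a toy case: represent dh via two (arbitrarily chosen) overlapping charts of a patch of the plane and confirm the representations are related by the smooth Jacobian of the transition, so smoothness in one chart gives smoothness in the other:

    import sympy as sp

    a, b, u, v = sp.symbols('a b u v')   # chart-1 and chart-2 co-ordinates
    h = sp.exp(a) * sp.sin(b)            # a smooth scalar field, written in chart 1

    dh1 = sp.Matrix([[sp.diff(h, a), sp.diff(h, b)]])   # dh represented via chart 1

    # Chart 2: u = a + b**3, v = b; the transition is smooth both ways.
    h2 = h.subs({a: u - v**3, b: v})
    dh2 = sp.Matrix([[sp.diff(h2, u), sp.diff(h2, v)]])  # dh represented via chart 2

    # Chain rule: dh1 = dh2 . J, with J the (smooth) Jacobian of (u, v) over (a, b).
    J = sp.Matrix([a + b**3, b]).jacobian(sp.Matrix([a, b]))
    assert sp.simplify(dh2.subs({u: a + b**3, v: b}) * J - dh1) == sp.zeros(1, 2)
    print(dh2)   # still smooth: the change of chart preserves smoothness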
The construction of gradients and tangents is built around differentiating composites of trajectories, mappings (manifold::{scalars}), and scalar fields, mappings ({scalars}::manifold) – a composite ({scalars}::{scalars}) is susceptible to the definition of differentiation. Equally, any two linear spaces U, V over some common scalar domain allow differentiation on mappings (V::U) – so now let us look at composites of mappings (manifold::U) and (V::manifold).
As above, for any location, m, on our smooth manifold, M, consider:
Vary(V, m) = {mappings (V:x:M): x is smooth on some neighbourhood of m}, and
Paths(U, m) = {lists [(M:f:U), t]: (:f|) is open in U, f is smooth, t is in (:f|) and f(t) = m}.
As before, (V:x:M) in Vary(V,m) and [(M:f:U), t] in Paths(U,m) yield (V: x&on;f :U), with (:x&on;f|) an open neighbourhood of t, in U; and x&on;f is smooth. Its derivative, (x&on;f)'(t), is linear (V:|U): it describes how small movements, in U, about t yield proportional small changes in x&on;f's value. Again, this should depend on x and f only via their behaviours near m.
The corresponding equivalences (on Vary(V,m) and Paths(U,m), variously) are:
x ~U,m y precisely if: [f,t] in Paths(U,m) implies (x&on;f)'(t) = (y&on;f)'(t);
[f,t] ~V,m [g,r] precisely if: x in Vary(V,m) implies (x&on;f)'(t) = (x&on;g)'(r).
The former distinguishes V-fields precisely if there is some U-trajectory whose composites before the V-fields have distinct derivatives. The latter distinguishes U-trajectories precisely if there is some V-field whose composites after the U-trajectories have distinct derivatives.
Now, ~U,m distinguishes (V:x:M) from (V:y:M) precisely if there is some [f,t] in Paths(U,m) for which (x&on;f)'(t) and (y&on;f)'(t) are distinct. These two derivatives are linear (V::U) and distinguishable precisely if there is some u in U which is mapped to distinct members of V by the two derivatives. Each such u in U implies a scalar-trajectory, (M: f(t+a.u) ←a :{scalars}), whose composites with x and y have distinct derivatives (at a = 0) in {linear (V::{scalars})}. Thus, whenever ~U,m distinguishes x and y, so does ~scalar,m, previously introduced as ~m. In fact, whenever W is a subspace of U, or linear-isomorphic to some subspace of U, "~U,m distinguishes x and y" implies "~W,m distinguishes x and y". To put this another way, the distinctions made by ~W,m subsume those made by ~U,m when U subsumes W.
Correspondingly, ~V,m distinguishes [f,t] and [g,r] in Paths(U,m) precisely when there is some (V:x:M) in Vary(V,m) for which (x&on;f)'(t) and (x&on;g)'(r) are distinct linear mappings (V::U). This, in turn, gives us some w in dual(V) whose composites with the two derivatives are distinct linear maps ({scalars}::U). From this we obtain a scalar field, ({scalars}: w(x(k)) ←k :M) = w&on;x, whose composites with f and g are distinct – so ~scalar,m also distinguishes [f,t] from [g,r]. In general, if W is a subspace of V, or linear-isomorphic to some subspace of V, then the distinctions made by ~W,m subsume those made by ~V,m.
Conversely, any linear map ({scalars}:|U) can be composed with each trajectory at a point, so that ~U,m can distinguish anything which ~m distinguishes. Thus, in fact, ~U,m = ~m. Likewise, any member of V can be multiplied by any scalar field on M to give us a V-field on M – so that ~V,m = ~m. Thus we get nothing new in the equivalences that arise from this generalisation.
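A small sympy check of this reduction: for a V-valued field (V the real plane, an illustrative assumption) and a trajectory through m, the derivative of the composite is assembled, component by component, from rates of change of scalar fields obtained by composing covectors with x:

    import sympy as sp

    t, a, b = sp.symbols('t a b')

    x = sp.Matrix([a*b, a + sp.sin(b)])   # (V: x :M), with V and M both the plane
    f = {a: t**2, b: 3*t}                 # a trajectory through m = (0, 0) at t = 0

    whole = sp.diff(x.subs(f), t).subs(t, 0)   # (x&on;f)'(0), a member of V
    parts = [sp.diff(x[i].subs(f), t).subs(t, 0) for i in range(2)]
    assert list(whole) == parts   # determined by the component scalar fields
    print(whole.T)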
As previously, we can define:
f'(t) = {[g,r] in Paths(U,m): (x&on;g)'(r) = (x&on;f)'(t) for every x in Vary(V,m)}; this is the U-tangent equivalence class of [f,t] at m = f(t).
dx(m) = {y in Vary(V,m): (y&on;f)'(t) = (x&on;f)'(t) for every [f,t] in Paths(U,m)}; this is the V-gradient equivalence class, at m, of x.
At each point, m, of M we thus obtain a collection, T, of U-tangents, {f'(t): [f,t] in Paths(U,m)}, and a collection, G, of V-gradients, {dx(m): x in Vary(V,m)}.
As before, G is trivially a linear space over our scalars, because ~m respects the linear structure that Vary(V,m) inherits from {(V::M)}. I'll come back to T. Between G and T, as before, we have a natural multiplicative action induced from dx(m)·f'(t) = (x&on;f)'(t) when f(t)=m.
If we have a basis of V, (V:b|dim) with dual (dual(V): p |dim) so that sum({linear (V:|V)}: b(i)×p(i) ←i |dim) = (V: v ←v :V), we can write any linear (V:w:U) as w = sum(: b(i)×p(i)·w ←i :), and we don't have to choose a fresh basis for each position on M, so each b(i) and p(i) is constant as far as the manifold is concerned. Given x in Vary(V,m) and [f,t] in Paths(U,m), p(i)·(x&on;f)'(t) is then just (p(i)·x&on;f)'(t). Since p(i)·x is simply a mapping ({scalars}::M) – which inherits, from x, smoothness on some neighbourhood of m – it's in Vary({scalars},m), so we effectively reduce dx to sum(: b(i)×d(p(i)·x) ←i :), which is just a typical member of V⊗G(m); likewise, f'(t) reduces to a member of dual(U)⊗T(m). For convenience of the contraction dx(m)·f'(t), which should be linear (V:|U), we should transpose the last to get T⊗dual(U), i.e. {linear (T:|U)}. In any case, dx(m) is then linear (V:|T) and f'(t) is linear (T:|U), with dx(m)·f'(t) their composite; i.e. we can just factorise our linear (V:|U) via T.
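A closing numeric sketch of this factorisation, with the patch of M identified with the real plane by a chart and U of dimension three (both assumptions of the sketch): the Jacobian of x&on;f is the matrix product of the Jacobians of x and f, i.e. the linear (V:|U) factors through T:

    import sympy as sp

    a, b = sp.symbols('a b')                  # chart co-ordinates on a patch of M
    u1, u2, u3 = sp.symbols('u1 u2 u3')       # co-ordinates on U

    x = sp.Matrix([a**2 - b, a*b])            # (V: x :M)
    f = sp.Matrix([u1 + u2, u2*u3])           # (M: f :U)

    Jx = x.jacobian(sp.Matrix([a, b]))        # represents dx, linear (V:|T)
    Jf = f.jacobian(sp.Matrix([u1, u2, u3]))  # represents f', linear (T:|U)

    composite = x.subs({a: f[0], b: f[1]}).jacobian(sp.Matrix([u1, u2, u3]))
    assert sp.simplify(composite - Jx.subs({a: f[0], b: f[1]}) * Jf) == sp.zeros(2, 3)
    print("dx(m)·f'(t) = (x&on;f)'(t): linear (V:|U) factorised via T")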
Written by Eddy.