I'll begin with a deliberately hazy outline of what integration is, which should be recognizable to anyone who has met one or another formalization of it. From there I'll meander in search of the natural way of describing measures on a smooth manifold.
Integration over a domain A is a linear operator which takes any mapping f from A to any linear space, V, and yields integral(:f|A), a member of some linear space, typically resembling V but possibly obtained from V by some application of tensor algebra. Even when the answer is in V, its units acquire some contributions from the integration: if A is a three-dimensional space, integral(:f|A)'s units are the units of f's outputs times volume. When V is one-dimensional, the integral of f falls in a one-dimensional space. In general, a linear operator of this kind is described as a measure: we generally call on integration to have some further properties which relate it to differentiation and to notions of length, area, volume, etc.
Various theorems concerning integration (lumped together, in my jumbled memory under the name Stokes' theorem) assert that integrating the derivative of some field, v, over the interior of a region gives the same answer as integrating v itself over the boundary of the region. These generalize the one-dimensional case, where integral(: x' :from a to b) = x(b) −x(a) when x' is the derivative of x. On a smooth manifold, we have some freedom in our choice of differential operator, so one must address issues of how (if at all) integration depends on choice of differential operator.
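To see the one-dimensional case concretely, here is a small Python check (the choice of x and of the interval is arbitrary, and the integral is approximated by a crude mid-point sum): it compares integral(: x' :from a to b) with x(b) − x(a).

    # Numerical check of the one-dimensional case: integral(: x' :from a to b) = x(b) - x(a).
    import numpy as np

    def x(t):
        return t**3 - 2*t            # an arbitrary smooth function

    def x_prime(t):
        return 3*t**2 - 2            # its derivative

    a, b = 0.5, 2.0
    n = 100000
    dt = (b - a) / n
    mid = a + (np.arange(n) + 0.5) * dt          # mid-points of n equal sub-intervals
    integral = np.sum(x_prime(mid)) * dt         # mid-point approximation to the integral
    print(integral, x(b) - x(a))                 # the two agree to within the discretization error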
Integrating the constant scalar function 1 over a region gives its volume; when the region is suitably cuboid, this volume is the result of multiplying the lengths of the cuboid's sides. Lengths are obtained from the metric, so this links integration to the metric. In general relativity, the differential operator we use is the one which annihilates the metric; i.e. the metric (which, in orthodox notation, raises and lowers indices) is constant in the eyes of the chosen differential operator. Integration will thus depend on the metric, possibly via the differential operator which considers the metric constant.
Now, integral(V:f|A) involves adding f's outputs (or some suitable tensor rearrangement thereof, characterizing our integration). When A is a smooth manifold, (at least) some of the things we want to integrate are tensor fields, whose outputs, given different inputs in A, are all of one rank but in different vector spaces, namely the given rank's expression at the given input point. We thus have no proper means of adding such values; this would appear to restrict us to integrating scalar fields, yet Stokes' theorem seems to tell us that we can at least integrate, over an interior, the derivative of anything we can integrate over a boundary. We shall get round this, fear not.
Now, if A is a linear space of finite dimension and V is {scalars}, for simplicity, suppose we have an isomorphism (not necessarily linear) (B|j|A). Then (it is an orthodox truth that) integral(V:f|A) can be re-written as integral(V:g|B) where (g∘j).J = f, i.e. for each a in A, g(j(a)).J(a) = f(a), in which J(a) is the determinant of j's derivative at a; J is known as the Jacobian (give or take spelling). This will unravel to reveal what we need to see on a smooth manifold.
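A concrete (and entirely arbitrary) one-dimensional illustration, as a Python sketch: take A = B = (0, 1) and j(a) = a.a, so J(a) = 2.a; with f(a) = a.cos(a), the g for which g(j(a)).J(a) = f(a) is g(b) = cos(√b)/2, and the two integrals agree.

    # Change of variables in one dimension, checked by mid-point sums.
    # A = B = (0, 1); j(a) = a*a, so J(a) = 2*a; f(a) = a*cos(a); g(b) = cos(sqrt(b))/2,
    # chosen so that g(j(a)).J(a) = f(a).  All of these choices are arbitrary.
    import numpy as np

    n = 200000
    step = 1.0 / n
    mid = (np.arange(n) + 0.5) * step            # mid-points of (0, 1), serving for A and B alike

    f = mid * np.cos(mid)                        # f sampled over A
    g = np.cos(np.sqrt(mid)) / 2                 # g sampled over B

    print(np.sum(f) * step, np.sum(g) * step)    # integral over A of f = integral over B of g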
See alternating forms for a somewhat more formal treatment of approximately the same material as here, excluding the determinant.
Given any linear space L and natural n, tensor algebra builds for us the space of ({L}:|n)-rank tensors as span(| product :{lists (L:|n)}) where product is the bulk action of the tensor multiplication – when v = [a,…,z], product(v) is a×…×z – and span is the linear completion operator (applied to collections above, but generally defined on relations by):
Having f∘v subsume u implies that, for each i in n, f relates u(i) to v(i). When f is a mapping, its span is the minimal linear relation that subsumes it; it is the linear extension of f. When f is an identity (i.e. a sub-collection of V, simplifying the above because u and v must then be equal, so that an atomic relation (: sum(h.u)←sum(h.v) :) is just the collection {sum(h.v)}) we get f = (f: x←x :) and this simplifies to
A (natural) permutation is a one-to-one mapping from a natural to itself, i.e. a list (n|s|n), in which each i in n is s(k) for exactly one k in n. For any given natural n, the permutations (n|s|n) form a group – the identity is n itself, construed as the mapping (n| i←i |n), and a permutation's inverse is simply its reverse (as a relation). Any permutation other than the identity must have at least two inputs which aren't fixed points, i.e. which it maps to something other than themselves; a permutation with only two non-fixed values must swap those values; so I'll call it a swap (avoiding orthodoxy's use of transposition, which is overloaded with other meanings). Any permutation may be factorized as the composite of a sequence of swaps; whether the sequence's length is odd or even depends only on the permutation thus factorized; permutations are thus described as odd or even according as their swap-factorizations' sequences' lengths are odd or even. This implies the existence of a natural group homomorphism (i.e. mapping which preserves group structure) from permutations to the group specified to have members −1 and +1, with (−1).(−1) = +1 = (+1).(+1) and (+1).(−1) = −1 = (−1).(+1); odd permutations are mapped to −1, even ones to +1.
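The parity of a permutation is easy enough to compute; here is a short Python sketch (illustration only) which factorizes a permutation, given as a list, into swaps and reads off its sign.

    # Sign of a permutation: s is a list of length n in which each i in n appears exactly
    # once.  We factorize s into swaps and return +1 or -1 according as their count is
    # even or odd.
    def sign(s):
        s = list(s)                   # work on a copy
        swaps = 0
        for i in range(len(s)):
            while s[i] != i:          # put the right value at position i, one swap at a time
                j = s[i]
                s[i], s[j] = s[j], s[i]
                swaps += 1
        return +1 if swaps % 2 == 0 else -1

    assert sign([0, 1, 2]) == +1      # the identity is even
    assert sign([1, 0, 2]) == -1      # a single swap is odd
    assert sign([1, 2, 0]) == +1      # a 3-cycle is the composite of two swaps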
For any list (L:v|n) and any permutation (n|s|), the composite v∘s is a list (L:|n). This enables us to define the (universal) antisymmetrization operator:
i.e. this gives the action on the typical members of each span(| product :{lists (L:|n)}) and its action on sums of such is obtained by adding its outputs given each of the summands as input. Here average(: g :{permutations (n||n)}) = sum(g)/n! because there are n! permutations (n||n), with n! = product(:successor|n), so 0! = 1 and (1+n)! is n!.(1+n) for each natural n. The choice of scaling (i.e. average) is carefully made so as to ensure that wedge∘wedge = wedge is a projection; it acts as the identity on its outputs (and the name wedge comes from the symbol orthodoxly used, as a binary operator, in denotations for wedge's outputs; the symbol is the bottom half of a letter X, but I often use ^ for it).
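To make the averaging concrete, here is a Python sketch (using numpy arrays to stand in for ({L}:|n)-rank tensors; the particular a and b are arbitrary): it averages the sign-weighted images of a tensor under all permutations of its n index positions, and applying it twice gives the same answer as applying it once.

    # wedge: average, over all permutations s of the n index positions, of
    # sign(s) times the tensor with its indices permuted by s.
    from itertools import permutations
    from math import factorial
    import numpy as np

    def sign(s):                      # parity via the count of inversions
        return (-1) ** sum(s[i] > s[j] for i in range(len(s)) for j in range(i + 1, len(s)))

    def wedge(tensor):
        n = tensor.ndim
        total = sum(sign(s) * np.transpose(tensor, s) for s in permutations(range(n)))
        return total / factorial(n)

    a, b = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
    t = np.tensordot(a, b, axes=0)    # the tensor product a×b, of ({L}:|2) rank
    w = wedge(t)                      # (a×b − b×a)/2
    assert np.allclose(w, -w.T)       # wholly antisymmetric
    assert np.allclose(wedge(w), w)   # wedge∘wedge = wedge: it acts as identity on its outputs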
For each natural n and linear space L, we thus obtain the collection of wholly-antisymmetric ({L}:|n)-tensors,
which I'll slackly refer to as Wedge(n,L). If n exceeds L's dimension, this only has zero (of appropriate rank) as a member. Otherwise, if dim is L's dimension, the dimension of Wedge(n,L) is dim!/n!/(dim−n)!, the number of ways of choosing n distinct members from a collection of dim distinct objects. Note, in particular, the symmetry under dim−n ←n. For any linear map (V:f|L) and any natural n, we can induce a linear map
i.e. by applying f to each of the tensor factors in each product; this is a mapping from {({L}:|n)-rank tensors} = span(| product :{lists (L:|n)}), of which Wedge(n,L) is the antisymmetric sub-space, to the corresponding tensor space with V in place of L. Restricting this mapping to Wedge(n,L) on the right delivers a mapping (Wedge(n,V): |Wedge(n,L)), which I'll write as W(n,f).
Now, when n is the dimension of L, Wedge(n,L) is one-dimensional; any non-zero member of Wedge(n,L) serves as a basis and enables us to describe Wedge(n,L) as if it were {scalars}, with 1 representing the chosen basis member. W(n,f)'s image of this unit then determines W(n,f) entirely, since its image of anything else is implied by linearity and the representation of that anything else as a scalar times the unit. This makes W(dimension of (:f|), f) express f as a linear map of rank (at most) one, known as the determinant of f, written det(f).
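In terms of co-ordinates this recovers the familiar determinant; the following Python sketch (with an arbitrarily chosen f in three dimensions) applies f to each tensor factor of a basis member of Wedge(n, L), with n = dimension of L, and reads off the scalar by which that basis member got scaled, comparing it with numpy's determinant.

    # det(f) as the scaling W(n, f) applies to the one-dimensional space Wedge(n, L).
    import numpy as np
    from itertools import permutations

    def sign(s):                      # parity via the count of inversions
        return (-1) ** sum(s[i] > s[j] for i in range(len(s)) for j in range(i + 1, len(s)))

    n = 3
    e = np.zeros((n,) * n)
    for s in permutations(range(n)):
        e[s] = sign(s)                # e: a basis member of Wedge(n, L), fully antisymmetric

    f = np.array([[2.0, 1.0, 0.0],
                  [0.0, 3.0, 1.0],
                  [1.0, 0.0, 1.0]])   # an arbitrary linear map (L: f |L)

    Wf_e = np.einsum('ia,jb,kc,abc->ijk', f, f, f, e)   # W(n, f) applied to e

    scale = Wf_e[0, 1, 2] / e[0, 1, 2]   # Wedge(n, L) is one-dimensional: Wf_e = scale.e
    print(scale, np.linalg.det(f))       # the two agree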
If V's dimension is less than L's, Wedge(n,V)'s only member is zero so det(f) is itself zero (because its output always is). Otherwise, Wedge(n,V) has some degrees of freedom but det(f)'s outputs still form (at most) a one-dimensional sub-space of Wedge(n,V).
When L is V, and n is their dimension, Wedge(n,V) is Wedge(n,L), which is one-dimensional. Linear maps from a one-dimensional space to itself are all scalings, i.e. effectively scalars, making det(L:f|L) a scalar. When V has the same dimension as L, Wedge(n,V) will again be one-dimensional, so the space of linear maps from Wedge(n,L) to Wedge(n,V) is one-dimensional; but it generally has no natural unit (as which the identity linear map served when V was L).
When V is dual(L) = {linear map ({scalars}:|L)}, it has the same dimension as L; hence Wedge(n,V) is also one-dimensional; furthermore, it is dual to Wedge(n, L), so the space {linear maps (Wedge(n, V): |Wedge(n, L))} is just Wedge(n, V)⊗Wedge(n, V) and det(V: f |L) lies in this space. For any non-zero b in Wedge(n, L) there is a unique q in Wedge(n, V) for which q·b = 1; we can use [b] as a basis of Wedge(n, L) and [q] as a basis of Wedge(n, V), hence [q×q] provides a basis of Wedge(n, V)⊗Wedge(n, V) and we get det(f) = F.q×q for some scalar F. If we replace b with b/h for some real h, we get h.q in place of q and det(f) = (F/h/h).(h.q)×(h.q), so F/h/h replaces F; but h.h is positive, so F hasn't changed sign. Indeed, as long as scalars are real, any non-zero member of Wedge(n, V), multiplied by itself, gives a positive multiple of q×q, so there is a natural sign in Wedge(n, V)⊗Wedge(n, V), with q×q positive for all non-zero q in Wedge(n, V). Then det(f) is definitely either positive or negative; by suitably scaling q (and b), we can arrange for det(f) to be ±q×q; I'll be using this q in Wedge(n, V) as square root of ±det(f); it shall play a central rôle in integration.
The other important case is when f is the derivative of some isomorphism (j in the last section) and its determinant is the Jacobian, J. Because it's the derivative of an isomorphism, it should be expressible as a linear map between spaces of equal dimension, so f's determinant will fall in a one-dimensional space; this space will typically have no natural unit, but any choice of unit in it lets us discuss it as if it were {scalars}.
In particular, on a smooth manifold M of dimension dim, with gradient bundle G and tangent bundle T, the same tensor algebra (applied at each point of M) gives us Wedge(dim,G) and Wedge(dim,T) as mutually dual one-dimensional ranks of tensor on M. Among linear maps from one to the other there is no natural unit, at least until the metric gets involved, but we can distinguish positive from negative (and, as it happens, space-time's metric's determinant is a negative linear map from Wedge(dim,T) to Wedge(dim,G), but let us ignore that for now).
Suppose we have some tensor field (:f:M) with (:f|) some neighbourhood in M on which we have two charts ({lists ({scalars}:|dim)}: :M), isomorphisms between (:f|) and neighbourhoods in the canonical linear space of dimension dim – the space of lists, of scalars, with length dim. Let x and u be the associated lists of scalar fields ({smooth ({scalars}::M)}: |dim), with x's chart then being (: (: x(i,m) ←i |dim) ←m :M) and likewise for u (in the language of relations, each chart is the transpose of its list of scalar fields). Let X = (|x:(:f|)) and U = (|u:(:f|)) be the neighbourhoods in {lists ({scalars}:|dim)} over which our co-ordinates vary and note that (X| x∘reverse(u) |U) is one-to-one and smooth.
Now, each i in dim = {0,…,dim−1} gives us a scalar field x(i), to which we can apply the canonical structural map from scalar fields to gradient fields, local differentiation: each dx(i) is a gradient field. Because we obtained x and u from charts (by transposition as relations), the list of gradients (G: dx |dim) is a basis of G; as is (G: du |dim). From each basis of G we instantly obtain a basis of Wedge(dim,G), whose single member is wedge(product(dx)) or wedge(product(du)) as appropriate. We can also construct the dual bases of T, namely (T:p|dim) and (T:q|dim) specified by: dx(i)·p(k) = du(i)·q(k) is 0 unless i = k, in which case it's 1. These in turn give us wedge(product(p)) and wedge(product(q)) as the single members of corresponding bases of Wedge(dim,T).
These bases allow us to express the identity on G as sum(: du×q |dim) and as sum(: dx×p |dim), with likewise sum(:q×du:) = T = sum(: p(i)×dx(i) ←i |dim). Composing the last two, we obtain this identity as T = sum(: sum(: q(k)×dx(i).(du(k)·p(i)) ←i |dim) ←k |dim). Applying this to a member of T, each dx(i) extracts the tangent's i-component in x's chart; multiplying by du(k)·p(i) and summing over i gives the co-efficient of q(k) in the above T's image of the tangent, which is the tangent itself; thus du(k)·p(i) provides the matrix with which to convert x's chart's co-ordinates to those of u's chart, for tangents.
From each chart we also obtain differential operators, d/dx and d/du, induced (via the product rule and the requirement that each agree with d on scalar fields) from their actions on G, specified by:
and likewise for d/du, with q in place of p. For v in G, v·p(i) is a scalar field, to which we apply d and then tensor with dx(i), the basis member dual to p(i). When consequences of the product rule and agreement with d on scalar fields are followed through, d/dx and d/du emerge as coordinate differentiation in its orthodox sense. For a scalar field h, the list commonly written dh/dx = ({scalar fields}: dh/dx(i) ←i |dim) gives dh = sum(: dx(i).dh/dx(i) ←i |dim) while, since G = sum(: dx×p :), dh = sum(: dx(i).(p(i)·dh) ←i |dim), so what might orthodoxly have been written as dh/dx(i) becomes p(i)·dh = p(i)·(d/dx)(h). This gives us du(k)/dx(i) as p(i)·du(k), coinciding with the co-ordinate transformation discussed above.
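For a concrete (and arbitrary) pair of charts on part of the plane – Cartesian x and polar u – the following sympy sketch computes the matrix du(k)/dx(i) as plain partial derivatives and checks that it is inverse to dx(i)/du(k).

    # du(k)/dx(i) for u = [r, t] (polar) in terms of x = [x0, x1] (Cartesian), and its
    # inverse dx(i)/du(k); composing the two gives the identity.
    import sympy as sp

    x0, x1 = sp.symbols('x0 x1', positive=True)
    u = [sp.sqrt(x0**2 + x1**2), sp.atan2(x1, x0)]        # u expressed in terms of x
    du_dx = sp.Matrix([[sp.diff(u[k], xi) for xi in (x0, x1)] for k in range(2)])

    R, T = sp.symbols('R T', positive=True)
    x_of_u = [R * sp.cos(T), R * sp.sin(T)]               # the reverse chart: x in terms of u
    dx_du = sp.Matrix([[sp.diff(xi, v) for v in (R, T)] for xi in x_of_u])

    product = (du_dx.subs({x0: x_of_u[0], x1: x_of_u[1]}) * dx_du).applyfunc(sp.simplify)
    print(product)                                        # the identity matrix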
Now, whatever rank (|f:) may be, either chart gives us co-ordinates for it and allows us to integrate those coordinates over X or U as appropriate. We should get the same answer regardless of which chart we use; we can use x∘reverse(u) as an isomorphism (X|j|U); its derivative is the matrix of coordinates dx(i)/du(k), expressed as above by q(k)·dx(i), with determinant J = wedge(product(q))·wedge(product(dx)) when we go from integrating over (|x:) to over (|u:).
So, with F as some (scalar) co-ordinate of f in x's chart, integral(:F:X) becomes integral(:F.J:U) = integral(: (F.wedge(product(dx)))·wedge(product(q)) :U) and doesn't depend on choice of chart; q and U are characteristic of u but dx and F are mere artifacts of which other chart, x, we involved; whence we may infer that F.wedge(product(dx)) is the tensor quantity the co-ordinate F encoded, i.e. f = F.wedge(product(dx)). Thus integration over U of f·wedge(product(q)) gives the same as integration over X of F = f·wedge(product(p)), with q and p being the bases of tangents dual to the bases du and dx (respectively) induced by the charts.
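As an entirely arbitrary check of this chart-independence, the following Python sketch integrates the same quantity over the unit disc twice, once in Cartesian co-ordinates and once in polar co-ordinates, where J = r, using crude mid-point sums.

    # The same integral computed in two charts; both approximate pi.(1 - 1/e).
    import numpy as np

    def F(x0, x1):                        # the co-ordinate of f in the Cartesian chart
        return np.exp(-(x0**2 + x1**2))

    # Cartesian chart: mid-point sum over a grid, keeping only points inside the disc.
    n = 1000
    h = 2.0 / n
    grid = -1.0 + (np.arange(n) + 0.5) * h
    x0, x1 = np.meshgrid(grid, grid)
    inside = x0**2 + x1**2 <= 1.0
    cartesian = np.sum(F(x0, x1)[inside]) * h * h

    # Polar chart: the same integrand, times the Jacobian J = r.
    dr, dt = 1.0 / n, 2 * np.pi / n
    r = (np.arange(n) + 0.5) * dr
    t = (np.arange(n) + 0.5) * dt
    rr, tt = np.meshgrid(r, t)
    polar = np.sum(F(rr * np.cos(tt), rr * np.sin(tt)) * rr) * dr * dt

    print(cartesian, polar)               # the two agree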
Thus the rank of tensor field we can integrate over a neighbourhood in a smooth manifold is Wedge(dim,G). The volume of a suitably cuboid region is just the product of the lengths of its sides; if we shear it to get a rhomboid, effectively adding, to one of the edges, some linear combination of the other edges, we don't change the volume. The antisymmetric product of the tangents pointing along edges also doesn't change when one does such a shear (because the antisymmetrization causes the change to cancel itself out); so the volume of a suitably rhomboid region is a wholly antisymmetric multilinear function of its edge tangents; i.e. volume is obtained by integrating a linear map from Wedge(dim,T) to some one-dimensional linear domain, V, independent of position on the manifold (effectively scalars, tagged in some way with suitable units of measurement). The tensor field being integrated is thus linear (V:|Wedge(dim,T)) or, equivalently, V⊗Wedge(dim,G) since G and T are dual to one another. This lets us ignore V hereafter; we can replace it with an arbitrary linear space external to the manifold, adding no complications (unless it leads us into extending our scalars to include imaginary values).
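The shear-invariance just mentioned is the familiar row-operation invariance of the determinant; a short numpy check (with arbitrary edges):

    # Adding a combination of the other edges to one edge leaves the volume unchanged.
    import numpy as np

    edges = np.array([[1.0, 0.2, 0.0],
                      [0.3, 1.0, 0.1],
                      [0.0, 0.4, 1.0]])                 # three edge tangents of a rhomboid

    sheared = edges.copy()
    sheared[0] += 2.5 * edges[1] - 0.7 * edges[2]       # shear the first edge

    print(np.linalg.det(edges), np.linalg.det(sheared)) # equal volumes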
It may help, in understanding this, to think of integration in its classical sum-over-many-little-boxes form. When integrating f over some region, we partition the region into rhomboid volumes, each small enough that we're happy to ignore f's variation therewithin; this typically means each rhomboid is small. Classically, each box contributes to the integral to an extent proportional to f's (functionally constant) value within the box and proportional to the size of the box. Our f in Wedge(dim,G) formalizes this by being a multi-linear antisymmetric map taking dim tangents, the displacement (tangent) vectors along the edges of the box, and returning a scalar: integration then sums the scalars from all the rhomboids to give a scalar integral of f over the union of the boxes.
The sides of a small enough box are functionally tangent vectors. To be integrable, a function has to be able to give us a scalar for each such box; the scalar must vary linearly with each edge tangent (so as to be proportional to the size of the box) and combine the edges antisymmetrically (so as to be unchanged by shearing). Hence the integrable function must be a function from dim tangents to scalars, linear in each tangent and antisymmetric; which is exactly the specification of the members of Wedge(dim,G).
Thus the only rank of tensor to be considered for integration over a manifold with dimension dim and gradient bundle G is Wedge(dim,G). Of course, on a sub-manifold of lower dimension, we'll replace G with the gradient bundle of the sub-manifold (which looks like a sub-space of G, at each point on the sub-manifold: strictly, the sub-manifold's tangent bundle is a sub-space of T at each point, but its gradient bundle is the quotient of G by the members of G which map all the sub-manifold's tangents to zero) and dim with its dimension; which will allow us to integrate Wedge(n,G) fields over a sub-manifold of dimension n, albeit only seeing the sub-manifold's quotient of G in the process.
Consider the action of f in Wedge(dim,G) on {({T}:|n)-rank tensors} with n in {0,…,dim}. Wedge(dim,G) is a sub-space of {({G}:|dim)-rank tensors}, which the tensor algebra will readily interpret as the space of linear maps from {({T}:|n)-rank tensors} to {({G}:|dim−n)-rank tensors}; read as such, f's outputs are all in Wedge(dim−n,G) and it consumes its input antisymmetrically, i.e. f = f∘wedge. Now, Wedge(dim−n,G) and Wedge(n,T) have equal dimension; so long as f is non-zero it has an inverse, namely the single member of the basis of Wedge(dim,T) which is dual to [f] as basis of Wedge(dim,G). Thus f, when non-zero, provides an isomorphism (Wedge(n,T)| |Wedge(dim−n,G)) for each n in {0,…,dim}. Any other member of Wedge(dim,G) is just f times (at each point) some scalar field: so choice of f only affects the isomorphism's scale, not its understanding of which direction is which.
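In three dimensions this isomorphism is an old friend: the following numpy sketch (with arbitrary tangents) contracts a non-zero member of Wedge(dim,G), represented by the fully antisymmetric epsilon array, with one tangent to get a Wedge(dim−1,G) tensor and with two tangents to recover, in effect, the cross product.

    # A non-zero f in Wedge(dim,G), dim = 3, read as a map from n tangents to Wedge(dim-n,G).
    import numpy as np
    from itertools import permutations

    def sign(s):                      # parity via the count of inversions
        return (-1) ** sum(s[i] > s[j] for i in range(len(s)) for j in range(i + 1, len(s)))

    dim = 3
    eps = np.zeros((dim,) * dim)
    for s in permutations(range(dim)):
        eps[s] = sign(s)              # the fully antisymmetric epsilon, a basis of Wedge(dim,G)

    v = np.array([1.0, 2.0, 3.0])                  # one tangent: n = 1
    two_form = np.einsum('ijk,k->ij', eps, v)      # its image in Wedge(dim-1, G)
    assert np.allclose(two_form, -two_form.T)      # wholly antisymmetric

    w = np.array([0.5, -1.0, 2.0])                 # a second tangent: n = 2
    one_form = np.einsum('ijk,j,k->i', eps, v, w)  # lands in Wedge(1, G) = G
    print(one_form, np.cross(v, w))                # matches the familiar cross product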
So a tensor field we can integrate over a manifold provides an isomorphism between Wedge(n,G), whose fields are integrable over sub-manifolds of dimension n, and Wedge(dim−n,T); indeed, it will equip us with the means to turn any ({T}:|n)-rank tensor field on M, not just the Wedge(n,T) ones, into something we can integrate over sub-manifolds of dimension dim−n, otherwise known as sub-manifolds of co-dimension n.
Now, the metric is linear from T to G, making its rank span({u×v: u, v in G}) or G⊗G. Its determinant is linear (Wedge(dim,G): det(g) |Wedge(dim,T)) with rank Wedge(dim,G)⊗Wedge(dim,G). The metric is symmetric; g(x,y) = g(y,x) for all tangents x, y; g(x) is in G, which is {linear ({scalars}:|T)} = dual(T), so g(x,y) is a scalar, as is g(y,x). We saw above that det(g) = ±m×m for some m in Wedge(dim,G); as such it is inevitably symmetric.
It would appear we are interested in a universe with an odd number of time-like dimensions and an odd number of space-like ones; so, whichever you consider imaginary, det(g) is negative. In particular, I'll assume det(g)'s sign doesn't change; and that we can have a Wedge(dim,G)-rank tensor field m with det(g) = ±m×m and m nowhere zero: this formally expresses the idea that the manifold is orientable, but that's a matter for some other time. [And it turns out that det(g) being negative isn't a problem – phew !]
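Concretely, for the flat space-time metric in an orthonormal chart, either sign convention gives a negative determinant, and √(−det(g)) = 1 is then the density of the measure in such a chart; a trivial numpy check:

    # det(g) for the flat space-time metric, under either sign convention.
    import numpy as np

    g_minus_plus = np.diag([-1.0, 1.0, 1.0, 1.0])    # time-like direction taken negative
    g_plus_minus = np.diag([1.0, -1.0, -1.0, -1.0])  # or the space-like directions taken negative

    for g in (g_minus_plus, g_plus_minus):
        d = np.linalg.det(g)
        print(d, np.sqrt(-d))                        # -1.0 and 1.0 in each case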
Now, m is in Wedge(dim,G); it's an integrable tensor field. The differential operator we use considers g to be constant, hence equally considers det(g), and so m×m, to be constant. Since Wedge(dim,G) is one-dimensional, m must thus also be constant (in the eyes of our differential operator). Thus our constant metric begets a constant tensor field to describe integration, albeit with an arbitrary global ± scaling ambiguity (since −m will do in place of m, as only their square is determined; none the less, continuity ensures that once the choice of sign is made anywhere, it's implied everywhere else).
We thus obtain a constant tensor field, m, of rank Wedge(dim,G), begetting a constant tensor field, 1/m, in Wedge(dim,T). These provide us, as above, with an isomorphism between Wedge(n,T) and Wedge(dim−n,G) for each n from 0 to dim; these isomorphisms are constant. We can express any other tensor field of rank Wedge(dim,G), Wedge(dim,T) (or other rank derived solely from these two) in terms of a scalar field and m. In particular, det(g) is a constant (possibly negative) multiple of m×m. I'll describe the tensor field m as our measure on the smooth manifold, M.
Stokes' theorem and its kin integrate the derivative of a tensor field over a region and the field itself over the boundary. For instance, if one integrates the trace of the derivative of a tangent field – the derivative is a G⊗T tensor field, which we can read as a linear map from tangents to tangents, so it's amenable to trace – over the interior of a region, one gets the same answer as integrating the original tangent field over the boundary of the region. The trace of the tangent field's derivative is a scalar field, so scale m by it to get something we can integrate over the region; likewise, we can use m to turn the tangent field itself into a Wedge(dim−1,G) field, which we can integrate over the sub-manifold of co-dimension 1 which constitutes the boundary. [The other familiar case integrates a gradient field along a curve, and its derivative over a submanifold of dimension 2 spanning that curve; this is just the prior case applied to a 2-manifold of which the given spanning surface is a region.]
Let D be our differential operator, t our tangent field; Dt is of such a rank that, at least at each point, it may be written as a sum of terms of form w×u with w in G, u in T; trace(Dt) is the sum of the scalars w(u) one obtains by tracing each such term; although one may decompose Dt in many ways as a sum of such terms, all will give the same sum of scalars. We're integrating Dt·m and m is constant, so this is equal to D(t·m); and t·m is what we're integrating over the boundary.
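A flat two-dimensional check of this (with an arbitrarily chosen tangent field t(x, y) = (x.y, x + y.y) on the unit square) compares the integral of trace(Dt) over the interior with the outward flux of t through the boundary:

    # Divergence form of Stokes' theorem on the unit square; both sums approximate 3/2.
    import numpy as np

    n = 1000
    h = 1.0 / n
    s = (np.arange(n) + 0.5) * h              # mid-points of (0, 1)
    X, Y = np.meshgrid(s, s)

    div_t = 0 * X + 3 * Y                     # trace(Dt) = d(x.y)/dx + d(x + y.y)/dy = y + 2y
    interior = np.sum(div_t) * h * h          # integral over the square

    flux = (np.sum(s) * h                     # right edge, x = 1, normal (1, 0): t0(1, y) = y
            - 0.0                             # left edge, x = 0, normal (-1, 0): t0(0, y) = 0
            + np.sum(s + 1.0) * h             # top edge, y = 1, normal (0, 1): t1(x, 1) = x + 1
            - np.sum(s) * h)                  # bottom edge, y = 0, normal (0, -1): t1(x, 0) = x
    print(interior, flux)                     # the two agree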
Now, everything we're going to integrate will be totally antisymmetrized, so what happens to derivatives when one does that ? We only need to look at Wedge(n,G) for n from 0 to dim; and antisymmetrization has no effect on G itself, which is the rank of gradients of scalar fields; so the antisymmetrized derivative will agree with d on {scalar fields} = Wedge(0,G), no matter what differential operator we chose. Furthermore, if we differentiate a scalar field twice and antisymmetrize, we get zero (provided our differential operator is torsion-free).
Any tensor in {({G}:|n)-rank tensors} may be written as a sum of terms of form x(0).product(: dx(1+i) ←i |n) for some list ({scalar fields}: x |1+n) of scalar fields. If we take the derivative of such a term, we get dx(0)×product(:dx(1+i)←i|n), which is just product(dx), and a sum of terms each involving a symmetric Ddx(1+i) for some i in n; when we antisymmetrize, we're left with wedge(product(dx)). Consequently, antisymmetric differentiation doesn't depend on our choice of differential operator and every tensor field it produces is a sum of terms of form wedge(product(dx)); furthermore, if x(0) had been constant the output would have been zero, so whenever the input is a sum of products of gradients of scalar fields, the output is zero; in particular, antisymmetric differentiation annihilates antisymmetric derivatives. We can thus define
as the antisymmetric differential operator. The case n = 0 gives d^ acting on scalar fields as d; for n = 1 we have d^(a.du) = wedge(da×du) for any scalar fields u and a; n=2 gives d^(a.dx×dy) = wedge(da×dx×dy) and so on. We know d^∘d^ is zero, albeit a different zero on each rank; (Wedge(2+n,G): d^∘d^ |span(|product:{lists (G:|n)})) is the constant mapping with output the zero of rank Wedge(2+n,G).
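A quick sympy check of d^∘d^ = 0 on scalar fields (with an arbitrary h): mixed partial derivatives commute, so the antisymmetrized second derivative vanishes.

    # The antisymmetrized second derivative of a scalar field is zero.
    import sympy as sp

    x, y, z = sp.symbols('x y z')
    h = sp.exp(x * y) * sp.sin(z) + x**3 * y       # an arbitrary scalar field
    coords = (x, y, z)

    Ddh = sp.Matrix(3, 3, lambda i, j: sp.diff(h, coords[i], coords[j]))  # rank G×G
    print((Ddh - Ddh.T) / 2)                       # the wedge of it: the zero matrix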
Now d^ maps any Wedge(dim−1,G) tensor field to a Wedge(dim,G) tensor field, which is just some scalar field times our constant measure m, since Wedge(dim,G) is one-dimensional (at each point). Furthermore, like m, each of these outputs of d^ is annihilated by d^; so it would be nice to express m as one of these outputs of d^, i.e. m = d^w for some tensor field w of rank Wedge(dim−1,G), whose dimension at each point is dim. There may, of course, be many such w; but d^ maps the differences between them to 0.m; indeed, adding the d^ of any Wedge(dim−2,G) tensor field to w will turn one solution into another.
We thought we wanted to integrate a divergence, trace(Dt) for some tangent field t, which turns out to be really m.trace(Dt); so how does this relate to d^(m·t) ? As before, express m as x(0).wedge(product(: dx(1+i) ←i |dim)) for some list ({scalar fields}: x |1+dim). This yields
To apply d^ to a term in this average we only need to differentiate the scalar field x(0).dx(1+s(dim−1))·t (the results of differentiating the rest will have symmetries). Furthermore, when we express this derivative in terms of the basis (G: dx(1+i) ←i |dim), we can ignore all terms except the multiple of dx(1+s(dim−1)), since each of the others is parallel to dx(1+s(j)) for some j other than dim−1, and this dx(1+s(j)) appears in product(: dx(1+s(i)) ←i :dim−1), so the wedge applied, after D, by d^ will annihilate it. Let (T: p |dim) be the basis dual to (G: dx(1+i) ←i |dim), so p(j)·dx(1+i) is 0 unless j = i, in which case it's 1. Then t = sum(T: p(i).dx(1+i)·t ←i |dim) and we know we're only interested, with j = s(dim−1), in
so, at the very least, if we can arrange for x(0) to be constant (in which case we can arrange for it to be 1) we get p(j)·D(dx(1+j)·t), which is the j-component of the D-derivative of the j-component of t: i.e. the j-indexed term in the sum making up trace(Dt).
In the presence of a metric, we can convert any tensor to a pure-G rank, which is amenable to d^. If we then use the metric's inverse to convert back to T those rank factors we had to apply the metric to before, we get a tensor whose rank is just G tensored with the rank we started with; thus we can use d^ and a metric to define something very like a differential operator (i.e. an operator of rank G whose action on scalar fields coincides with d); but this operator isn't a Leibniz operator since d^(u×v) isn't (d^u)×v plus a suitably permuted u×(d^v); instead, it's the result of antisymmetrizing this sum.
Written by Eddy.