
Leibniz Operators on a Smooth Manifold

When considering differential operators – i.e. operators that quantify variation with respect to position – on a smooth manifold, it turns out to be both necessary and important to consider a derived operator, the torsion, obtained from a differential operator. Since, in the interesting case, this has many properties in common with a differential operator, I chose to begin my examination of differential operators with a generalization of them that suffices also to derive useful properties of the torsion. We can then examine the respective specializations to a differential operator (at length) and to its torsion (very briefly) separately, without repeating the common ground.

Note: algebraic logicians use the term Leibniz operator to mean something else altogether. The present discussion is about smooth manifolds, not algebraic logic. Gottfried Leibniz is commonly credited with establishing the product rule, which is the central property of the operators discussed here.


I'll begin with some preliminaries, to set the scene:

We'll then be ready to combine these ingredients to specify what I mean by Leibniz operators and so begin exploring what properties are implied by their definition:

This last prepares the ground for consideration of two ways we can permute order in such composites and, by looking at the differences that result, obtain new Leibniz operators:

These then equip us to analyze, in the generality of Leibniz operators, why specifying the geometry of a manifold provides a basis for selecting one differential operator that plays nicely with that geometry.


When dealing with a function between linear spaces, the derivative at any point is a linear map from the input space to the output space; given linear spaces W and Z, an open neighbourhood U of x in W and a function (Z:g|U) differentiable at x, g(x) is in Z, as is g(x+h) for any h in W small enough that x+h is in U; and the difference between these is, for small enough h, well approximated by the result of contracting h with g'(x). Hence g'(x) serves as a linear map from (small) input displacements (in W) to output displacements (in Z). For notational convenience, I chose to have h·g'(x) be the expression that gives the output of this linear map (so that h·D(g) shall be the form used when a general differential operator, D, is implicated; the displacement in input, h, is thus visibly contracted with the differential operator), rather than g'(x)·h, which would be the normal way to write the action of a linear map g'(x) from W to Z on an input h in W. This, in turn, implies that g'(x) is in dual(W)⊗Z, rather than the transpose of this space, which would more normally serve to encode the space of linear maps from W to Z; and g' is (dual(W)⊗Z: :W).

The product rule

In the standard analysis of differentiation, the derivative of a product of functions is obtained as a sum of terms, each resembling the original product but with one function replaced by its derivative; this is known as the product rule and was first derived by Leibniz. Let product = bulk(×). Proof:

Given an open neighbourhood U in {reals} and a list ({differentiable functions ({reals}:|U)}: f |n) for some positive natural n, we obtain ({reals}: product(f) |U) = (: product(: f(i, x) ←i |n) ←x |U) in the usual way. Define a list F of functions yielding lists of reals by
F(j, x) = (: f'(i, x) when i = j, else f(i, x) ←i |n) – that is, F(j, x) is the list (: f(i, x) ←i |n) with its j-th entry replaced by the corresponding derivative's value.

The claimed rule is that, at each x in U, product(f) is differentiable with sum(: product(F(j, x)) ←j |n) as derivative. If product(f) has a derivative at some given x in U (which it shall if the following limit exists) it must be the limiting value, as real h tends to zero, of:

(product(f,h+x) −product(f,x))/h
= (product(: f(i, h+x) ←i |n) −product(: f(i, x) ←i |n))/h
= (product(: f(i, x) +h.f'(i, x) ←i |n) −product(: f(i, x) ←i |n))/h

ignoring, here and hereafter, assorted terms inside the outer parentheses of order h.h, since these all robustly vanish, even after division by h, in the limit as h tends to zero,

= (product(: f(i, x) ←i |n) + h.sum(: product(: F(j, x, i) ←i |n)←j |n) −product(: f(i, x) ←i |n))/h
= sum(: product(F(j, x)) ←j |n)

which has no dependence on h, hence does have a limiting value as h tends to zero; hence product(f) is indeed differentiable at every x in U; and the given derivative at x is just exactly the sum of terms claimed, each replacing one function in the original list with its derivative.
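The computation above is easy to check numerically; the following sketch (in Python, with an illustrative list of three functions, not taken from the text) compares a central-difference derivative of product(f) against the claimed sum of one-derivative-swapped products:

```python
import numpy as np

fs  = [np.sin, np.cos, np.exp]                 # the list f
dfs = [np.cos, lambda t: -np.sin(t), np.exp]   # their derivatives f'

def product(funcs, x):
    out = 1.0
    for g in funcs:
        out *= g(x)
    return out

def leibniz_sum(x):
    # sum(: product(F(j, x)) <- j |n): each term swaps one f(j) for f'(j)
    total = 0.0
    for j in range(len(fs)):
        term = 1.0
        for i in range(len(fs)):
            term *= dfs[i](x) if i == j else fs[i](x)
        total += term
    return total

x, h = 0.7, 1e-6
numeric = (product(fs, x + h) - product(fs, x - h)) / (2 * h)
assert abs(numeric - leibniz_sum(x)) < 1e-6
```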

The same proof can be carried over transparently (at some burden in notation) to the case where we have a list (:V|n) of linear spaces and a list of functions (: (V(i): f(i) |U) ←i |n), with product(f) and each product(F(j, x)) being then a mapping (bulk(⊗, V): :U). When we come to try to extend this result to the case where the space of inputs is a linear space, the tensor algebra introduces a minor wrinkle we need to take care to resolve. For U open in linear W, and linear Z, a differentiable function (Z:g|U) has, as its derivative at each x in U, a linear map from W to Z, which maps any small change h in the input (hence h in W) to the first order estimate h·g'(x) of the value of g(h+x) −g(x) in Z. As discussed above, I chose to represent this linear (Z: g'(x) |W) in the tensor algebra as a member of dual(W)⊗Z. (Had I chosen contrarily, the factors of dual(W), and hence the wrinkle, would appear at the right, rather than the left; but the wrinkle would be essentially the same as here.) When we now look at a list (: (V(i): f(i) |U) ←i |n) whose point-wise product is (bulk(⊗, V): |U) and replace some entry in the list, f(j), with its derivative, the product of the result is not a tensor in bulk(⊗, V) but, instead, has a factor of dual(W) inserted just before V(j) in this tensor product. For each j we thus obtain a product(F(j, x)) at each x in U that has a different tensor rank, depending on j; we cannot sum such values over j. Even in the case where every entry in V is in fact dual(W), so that the tensor product spaces are all the same, so allow us to add up the products, when we come to contract a small displacement h in W on the left of any such term with j ≠ 0, we contract it with the factor f(0, x) rather than the differentiation-induced aspect of f(j)'(x) that we actually need it to contract with, if we are to carry over the duly-rearranged analogue of the proof above.

This wrinkle is easily enough resolved: it suffices to say that, for arbitrary h in W, h·product(f)'(x) is equal to a sum of terms, obtained from product(f), by replacing some f(j) by h·f(j)'. This is formally equivalent to saying that product(f)' can be expressed (at each input) as a sum of terms, each of which is obtained from product(f) by, first, replacing one f(j) by its derivative and then, to fix up the wrinkle, applying a permutation operation to the tensor rank of the output, that shuffles the factor of dual(W) due to f(j)' out to the left. However, the statement quantified over h in W leads to a less notationally cumbrous discussion than the formally equivalent statement that describes product(f)' directly by use of tensor-permutation operators.

When we come to look at tensor fields on a smooth manifold, differentiation with respect to position shall take a tensor of rank R to one of rank G⊗R, where G is the gradient bundle of the manifold, dual to the tangent bundle T. The product rule shall then, analogously to the foregoing, say that the product of a list ({tensor fields}: f |n), all members of which are differentiable at some point p of the manifold, has product(f) also differentiable at p, with – for any tangent h at p – h·product(f)'(p) equal to a sum of terms, each obtained from product(f) by replacing one f(j) in the list with h·f(j)'(p).
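The quantified-over-h form of this rule can be tested numerically; here is a sketch with illustrative choices (not from the text) of W = R², a vector-valued u and a matrix-valued v, using numpy outer products for × and central differences for the derivatives:

```python
import numpy as np

def u(x):                               # a vector-valued function on W = R^2
    return np.array([np.sin(x[0]), x[0] * x[1], np.exp(x[1])])

def v(x):                               # a matrix-valued function on W
    return np.array([[x[0] ** 2, x[1]], [1.0, x[0] * x[1] ** 2]])

def directional(f, x, h, eps=1e-6):     # h·f'(x), by central difference along h
    return (f(x + eps * h) - f(x - eps * h)) / (2 * eps)

x = np.array([0.3, -0.8])
h = np.array([1.0, 2.0])

# h·(u×v)'(x) versus (h·u'(x))×v(x) + u(x)×(h·v'(x))
lhs = directional(lambda y: np.tensordot(u(y), v(y), axes=0), x, h)
rhs = (np.tensordot(directional(u, x, h), v(x), axes=0)
       + np.tensordot(u(x), directional(v, x, h), axes=0))
assert np.allclose(lhs, rhs, atol=1e-5)
```

Note that contracting with h first is exactly what sidesteps the rank-shuffling wrinkle: both sides are plain arrays of the same shape.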

Tensor fields on smooth manifolds

I here presume basic familiarity with smooth manifolds and their tensor bundles; see the linked pages for details. While the rigid structure of linear spaces provides the canonical context in which to define differentiation, and hence provides each linear space with a unique differential operator, the flexibility of smooth manifolds leaves ambiguity. While the formalism of the smooth atlas makes differentiation possible – in the simplest case with respect to co-ordinates – the openness to smooth re-parameterisation makes available a diversity of choices of how to quantify variation with position. The derivation of the gradient bundle provides a natural differentiation for scalar functions of position, with which it is thus natural to require all candidate differential operators to agree; however, there is no equivalent naturality to differentiation on tensor fields, generally. We can require the (appropriately stated) product rule as a natural property to expect of a differential operator; as we shall see, once we know the action of a differential operator on all gradient fields, or on all tangent fields, we can use the product rule to infer its action on tensors of all ranks; but the choices open to us remain extensive.

The different kinds of tensor on a smooth manifold are distinguished by their rank. A function which maps each point in some neighbourhood in the manifold to a gradient at that point is a gradient-valued tensor field, with rank G; a function whose value at each point is a tangent at that point is a tangent-valued tensor field with rank T. For any list ({G,T}:f|n) we can construct a corresponding rank bulk(⊗, f); any tensor of this rank maps each point, p, of the manifold to a member of the space of tensors at that point obtained, from f, by replacing each G with the space of gradients at that point, each T with the space of tangents at that point and then applying bulk(⊗) to the resulting list. The rank of a tensor field, then, characterizes what kind of value the field produces, in terms that are equivalent among positions at which the tensor field takes a value. (A function on the smooth manifold, that returns values of different ranks from one position to another, is not a tensor field; it does not play nicely with tensor algebra.) When a tensor field u of rank U and a tensor field v of rank V are multiplied point-wise, using the tensor product of their values at each point to obtain a value for the product at that point, the result is a tensor field u×v of rank U⊗V.
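At a single point, the tensor product of values and the concatenation of ranks can be modelled with numpy arrays, whose shapes play the role of ranks; a minimal illustration (the array values are arbitrary):

```python
import numpy as np

u_at_p = np.arange(6.0).reshape(2, 3)      # value of u at p; its shape plays rank U
v_at_p = np.arange(4.0).reshape(4)         # value of v at p; its shape plays rank V
uv = np.tensordot(u_at_p, v_at_p, axes=0)  # (u×v)(p), of rank U⊗V
assert uv.shape == (2, 3, 4)               # ranks concatenate
```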

The preceding paragraph and assorted things below address ranks of form bulk(⊗, ({G, T}: :)), derived from the intrinsic gradient and tensor ranks; it is entirely possible to also consider ranks bulk(⊗, ({G, T, V, W, …}: :)) for diverse linear spaces V, W, … unrelated to position on the smooth manifold. I generally overlook these because one can always reduce the analysis to one in which their contribution is expressed in terms of (some permutation operator applied to a suitable sum of) constant tensor factors (involving only the V, W, … spaces) times plain {G,T}-tensor fields; however, bothering to do so adds complexity to the discussion without illuminating anything of significance, so I skip over this. I trust the reader shall not find it too hard to interpolate the necessary details if ever they appear relevant.

In manipulating quantities below, I shall make repeated reference to a (local) basis, b, of some rank, R. This means a mapping ({tensor fields of rank R}:b:) with: (:b|) the dimension, at each point, of R; it shall implicitly be defined in some open neighbourhood, U, in the manifold for which: at each point m in U, R(m) = span(: b(i,m) ←i :), and; any ({scalar fields}: h :(:b|)) for which sum(: h(i).b(i) ←i :) = 0 at any point in U has each h(i)=0 at each such point. When using such a basis I shall usually also introduce its dual, p, which is a local basis of rank dual(R) with (:p|) = (:b|) satisfying p(i)·b(i) = 1 and p(i)·b(j) = 0 whenever i and j differ. The crucial property of these two dual bases is that sum(: b(i)×p(i) ←i :) is the identity on R, which supposes that (:b|) is finite and discrete. (However, this restriction may (in principle) be dropped if sum can be understood as the integration associated with a suitable measure on (:b|).) I shall aim to reserve the pair of names b,p for a basis of Q's rank and its dual (when discussing an operator Q), using the names e,c when the rank is less intimately related to the ranks of operators under discussion.
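The crucial property sum(: b(i)×p(i) ←i :) = identity is easy to exhibit pointwise; in the following numpy sketch the columns of an (invertible, otherwise arbitrary) matrix stand for the b(i) at one point, and the rows of its inverse for the dual p(i):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(3, 3))       # columns are the basis vectors b(i)
P = np.linalg.inv(B)              # rows are the dual basis covectors p(i)

# p(i)·b(i) = 1 and p(i)·b(j) = 0 when i and j differ
assert np.allclose(P @ B, np.eye(3))

# sum(: b(i)×p(i) <- i :) is the identity on the rank
ident = sum(np.outer(B[:, i], P[i, :]) for i in range(3))
assert np.allclose(ident, np.eye(3))
```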

Tensor operators

A tensor operator is a function which maps tensor fields to tensor fields; for example,
square = (: u×u ←u :), which multiplies each tensor field by itself, and, given some fixed tensor field t, the 'tensor t with it' operator (: t×u ←u :).

These are applicable to tensor fields of all ranks; some tensor operators are only applicable to tensor fields of certain specific ranks – e.g. antisymmetrization can only act on tensor fields whose ranks are of form bulk(⊗, ({X}:|n)) for some natural n and base rank X; indeed, for each such base rank, one obtains a separate antisymmetrization operator. Where a tensor operator only acts on tensor fields of some specified ranks, I describe it as rank-limited; when it acts on tensor fields of all ranks, I describe it as a universal tensor operator.

In passing (I do not use this in what follows), I say that a tensor operator, Q, respects restriction precisely if, for any tensor field f of a rank on which Q does act and any open neighbourhood U, in the manifold, on all of which f is defined, (: Q(f) :U) = Q(:f:U) – i.e. Q(f)'s value at any point only depends on f's behaviour arbitrarily close to that point. This is one of the characteristic properties of differential operators.

I say that a tensor operator respects rank precisely if the rank of its output depends only on the rank of its input; furthermore, that it has rank R – and thus is a fixed rank tensor operator – precisely if its output's rank is just R tensored with its input's rank – if the input has rank X then the output has rank R⊗X. This is true, for instance, of our 'tensor t with it' operator, given above – whereas square respects rank without having fixed rank. (What I have called fixed rank could be termed left-fixed and paired with a corresponding notion of right-fixed rank; however, for reasons arising out of my choice to have differential operators contract with small changes in input on the left, it suffices – for my needs – to attend to left-fixed and not burden the language with reminders of its chirality.) In support of this, I define the mapping ({ranks}: rank |{fixed rank tensor operators}) for which rank(Q) is the rank of Q (i.e. R above).

I say that a fixed-rank tensor operator respects trace precisely if, when it accepts a tensor whose rank includes two mutually dual terms, it also accepts the result of contracting these mutually-dual terms out of the tensor and the tensor operator commutes with such contraction. That is (using the trace-permutation operator, τ), for operator Q of rank R, if W and M are mutually dual and f has rank A⊗W⊗K⊗M⊗Z, the tensor fields τ([0,1,*,2,*,3], [R,A,W,K,M,Z], Q(f)) and Q(τ([0,*,1,*,2], [A,W,K,M,Z], f)) are equal; in each case, the W and M factors are contracted out with one another and the order of other tensor ranks is unchanged. I say that a fixed-rank tensor operator respects permutation precisely if, for every rank to which it applies and every permutation of a factorization of that rank that yields another rank to which it applies, the result of applying that permutation to a tensor of the given rank, then applying the operator, is the same as the result of first applying the operator and then applying the corresponding permutation (passing over the added factor, in the rank, supplied by the operator) to the result. (If the original permutation was (n|s|n), this corresponding adjusted permutation is the union of 0←0 with (: 1+s(i) ←1+i :), permuting the ranks after Q's in the same way as s permuted them before we added Q's rank to the start of the list.)

When a tensor operator respects rank, we can let it act on two tensor fields of some common rank and on their sum (which is necessarily of that same rank); its outputs from these are all of the same rank, so we can add those from the separate tensor fields and compare the result with that from their sum. If tensor operator Q respects rank and, for all tensors u, v of any one rank on which Q does act (remember, it might be rank-limited), Q(u+v) = Q(u) +Q(v), then I say Q respects addition. In such a case, I further describe it as:

globally linear

precisely if, for any (constant) scalar k and any tensor field u of a rank it accepts, Q(k.u) = k.Q(u); and as

locally linear

precisely if, for any scalar field h and any tensor field u of a rank it accepts, Q(h.u) = h.Q(u).

(I hope the choice of naming shall make more sense as we progress; global linearity is, as specified, linearity with respect to global constant scalars; local linearity shall turn out to implicate the operator behaving, locally, on each rank, as a linear map – thereby ensuring it respects restriction.)

Differential operators

Differential operators have fixed rank G, respect restriction, trace, permutation and addition, are globally (but not locally) linear, obey a suitable form of the product rule and agree with the smooth manifold's intrinsic gradient operation (induced as part of constructing the gradient and tangent bundles) on scalar fields.

A tensor field annihilated (i.e. mapped to a zero tensor field of suitable rank) by a differential operator is described as constant with respect to that differential operator; but do not confuse this with the notion of constancy we apply to scalar fields. For scalar fields, it is actually meaningful to ask whether the values at distinct points are equal; for general tensors, it is not. None the less, as we'll see below, there are some ranks (with equal numbers of G and T factors) in which there do exist some tensor fields which are indeed naturally constant, for all differential operators.

Any chart compatible with the smooth atlas of the manifold can be encoded (via a fixed basis of the linear space used as reference space of the chart) as a list of co-ordinate functions ({scalar fields}: x |n) where n is the dimension of the manifold; from each x(i) the intrinsic gradient operator d yields dx(i) as a gradient field and, because x encodes a chart, the (: dx(i) ←i |n) form a local basis of G, with a ({tangent fields}: |n) dual basis. For any tensor field, u, we can use dx and its dual to construct a local basis e of the rank of u and obtain its dual, c. Each component u·c(j), for j in (:e|), of u with respect to this basis is then a scalar field and we can apply d to it; we may thus define
du/dx = sum(: d(u·c(j))×e(j) ←j :)

and refer to (: du/dx ←u :) as d/dx, a universal tensor operator; its action on scalar fields fatuously agrees with that of d; consideration of the image of each rank – under our chart encoded by x, in the reference vector space and its derived tensor space of corresponding rank – implies that d/dx is indeed a differential operator. I call such a differential operator a co-ordinate differentiation operator.
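A pointwise sketch of such a co-ordinate differentiation operator, with the chart taken (for illustration) to be a patch of R² itself and d approximated by central differences; each component of the field is differentiated as a scalar and the results reassembled, with the gradient index leading:

```python
import numpy as np

def u(x):                          # an illustrative tangent field, in chart components
    return np.array([x[0] * x[1], np.sin(x[1])])

def d_dx(f, x, eps=1e-6):
    # differentiate each component of f as a scalar field, then reassemble:
    # the output's rank is G tensor (rank of f), index order [k, ...]
    fx = np.asarray(f(x), dtype=float)
    out = np.empty((len(x),) + fx.shape)
    for k in range(len(x)):
        step = np.zeros(len(x)); step[k] = eps
        out[k] = (np.asarray(f(x + step)) - np.asarray(f(x - step))) / (2 * eps)
    return out

x = np.array([0.5, 1.2])
D = d_dx(u, x)                     # D[k, j] = partial of component j along x(k)
assert np.allclose(D[0], [x[1], 0.0], atol=1e-6)
assert np.allclose(D[1], [x[0], np.cos(x[1])], atol=1e-6)
```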

Leibniz Operators

I say that a fixed-rank tensor operator Q, of rank R, obeys Leibniz precisely if, for any tensor fields u, v of ranks Q accepts, if Q also accepts tensor fields of the same rank as u×v, then: for every tensor field h of rank dual(R),
h·Q(u×v) = (h·Q(u))×v + u×(h·Q(v)).

A Leibniz operator is a fixed-rank universal tensor operator, Q, that respects trace and permutation, is globally linear and obeys Leibniz.

Now observe that, for s a scalar field and Q a fixed rank universal tensor operator which obeys Leibniz: for any tensor field, u; Q(s.u) = s.Q(u) + Q(s)×u. Thus, if s and Q satisfy Q(s.u) = s.Q(u) for every tensor field u, then Q(s) must be zero; Q annihilates s. The converse is trivially also true. Thus a fixed rank universal tensor operator which respects trace, permutation and addition, and obeys Leibniz is: globally linear (hence a Leibniz operator) precisely if it annihilates scalar global constants and; locally linear precisely if it annihilates all scalar fields.
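The identity Q(s.u) = s.Q(u) + Q(s)×u can be checked numerically for Q a co-ordinate differentiation operator (described above); the scalar and tangent fields here are illustrative choices, and finite differences stand in for d:

```python
import numpy as np

def s(x): return np.exp(x[0] - x[1])           # an illustrative scalar field
def u(x): return np.array([x[1], x[0] * x[0]]) # an illustrative tangent field

def d_dx(f, x, eps=1e-6):
    # component-wise central-difference differentiation; gradient index leads
    fx = np.asarray(f(x), dtype=float)
    out = np.empty((len(x),) + fx.shape)
    for k in range(len(x)):
        step = np.zeros(len(x)); step[k] = eps
        out[k] = (np.asarray(f(x + step)) - np.asarray(f(x - step))) / (2 * eps)
    return out

x = np.array([0.4, 0.9])
lhs = d_dx(lambda y: s(y) * u(y), x)           # Q(s.u)
rhs = s(x) * d_dx(u, x) + np.tensordot(d_dx(s, x), u(x), axes=0)  # s.Q(u) + Q(s)×u
assert np.allclose(lhs, rhs, atol=1e-5)
```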

Since the action of differential operators on scalar fields is determined by the intrinsic gradient operation, which annihilates globally constant scalar fields, obeying Leibniz and being fixed-rank universal tensor operators is thus sufficient to imply the global linearity I asserted for them above and, hence (provided they also respect trace and permutation), make them Leibniz operators. Furthermore, if we look at the difference between two differential operators, given that they are defined to agree on scalar fields, the difference must annihilate all scalar fields; it is trivially a fixed-rank universal tensor operator and it can readily be shown to obey Leibniz: hence, it is necessarily a locally linear Leibniz operator.

Composing any simply linear action after a Leibniz operator – such as contraction of the output with a tensor field – will, as long as it's applicable to all ranks (which precludes the action from meddling with the ranks of the input to the operator, so restricts it to meddling with the rank added by the operator and adding further ranks), always yield a Leibniz operator as composite. Indeed, in specifying the product rule, I've exploited this by using (for suitably general h) h·Q, a scalar-rank operator, to bypass the previously discussed issues with shuffling rank.

Basic analysis

Given a local basis b of the rank of a Leibniz operator, Q, we can obtain a dual basis p and decompose the identity on Q's rank as sum(: b(i)×p(i) ←i :) to obtain (exploiting the fact that Q obeys Leibniz and each p(i) is a tensor of rank dual to that of Q), for arbitrary tensor fields u, v:

Q(u×v)
= sum(: b(i)×p(i) ←i :)·Q(u×v)
= sum(: b(i)×p(i)·Q(u×v) ←i :)
= sum(: b(i)×((p(i)·Q(u))×v + u×(p(i)·Q(v))) ←i :)
= Q(u)×v + sum(: b(i)×u×p(i)·Q(v) ←i :)

which commonly serves as a more tractable form in which to state the product rule; the interleaving of b× and p· with u×Q(v) effectively implements the tensor-rank permutation needed to obtain from u×Q(v) its contribution to Q(u×v), by swapping u with the Q-ness of Q(v). When we have a basis e of the rank of v, with dual basis c, we can also write this second term as:
sum(: (Q(v)·c(i))×u×e(i) ←i :)

effectively using ·c and ×e to implement an alternate permutation of tensor rank, this time applied to Q(v)×u, swapping u with the v-ness of Q(v). I shall use the short-forms sum(b×u×p)·Q(v) and Q(v)·sum(c×u×e) to abbreviate these permutations.
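That the interleaved sum(b×u×p)·Q(v) really is just a permutation of tensor rank can be seen concretely with numpy arrays: the interleaving reproduces np.moveaxis applied to u×Q(v), shuffling Q's leading axis out in front of u. The dimensions below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.normal(size=(3, 3))        # columns are b(i), a basis of Q's rank
P = np.linalg.inv(B)               # rows are the dual basis p(i)
u  = rng.normal(size=(2,))         # a tensor value of some other rank
Qv = rng.normal(size=(3, 4))       # Q(v): the leading axis is Q's rank

# sum(: b(i)×u×(p(i)·Q(v)) <- i :) ...
interleaved = sum(
    np.tensordot(np.tensordot(B[:, i], u, axes=0), P[i] @ Qv, axes=0)
    for i in range(3))
# ... equals u×Q(v) with Q's axis moved to the front
direct = np.moveaxis(np.tensordot(u, Qv, axes=0), 1, 0)
assert np.allclose(interleaved, direct)
```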

Trace and permutation

The action of linear maps on their arguments can always be expressed as taking the tensor product of the linear map (as a tensor) with its argument and then applying a trace operation; likewise, contraction, u·v, can be expressed in terms of applying a trace operation to u×v. The combination of respecting trace and obeying Leibniz's product rule then implies that the product rule also applies to such linear actions as well as tensor multiplication; when u and v are of ranks for which u·v or u(v) is meaningful, the relevant equations below apply for every tensor field h of rank dual to that of Q:
h·Q(u·v) = (h·Q(u))·v + u·(h·Q(v))
h·Q(u(v)) = (h·Q(u))(v) + u(h·Q(v))

Applying the product rule while respecting trace and permutation has the effect that any Leibniz operator, Q, annihilates every trace-permutation operator, T (which is a linear map, albeit polymorphic). Proof:

Let u be some tensor of a rank to which T is applicable and let S be the adjusted equivalent of T that acts on Q(u), passing over Q's addition to the rank and acting on the ranks due to u as T did; then, for every tensor field h of rank dual to that of Q:

T(h·Q(u))
= h·S(Q(u))
= h·Q(T(u))
= (h·Q(T))(u) +T(h·Q(u))

in which the last term is what we started with, whence h·Q(T) maps every u on which T can act to zero; this being true for all h, Q(T) must in fact be zero, i.e. Q annihilates T.

This means that all trace-permutation operators are constant tensor fields of every differential operator. In particular, the identity linear map on each tensor rank is a fatuous trace-permutation operator (it does no tracing and applies the identity permutation), hence annihilated by all Leibniz operators and a constant of every differential operator.


Now let e be a local basis of some rank, with c its dual; then sum(: c(i)×e(i) ←i :) is an identity, so annihilated by any Leibniz operator, Q; thus, for every tensor field h of rank dual to Q, 0 = sum(: h·Q(c(j))×e(j) +c(j)×(h·Q(e(j))) ←j :) whence, for each i in (:e|) and every h of rank dual to Q's,

−h·Q(c(i))
= −sum(: h·Q(c(j))×e(j) ←j :)·c(i)
= sum(: c(j)×(h·Q(e(j))) ←j :)·c(i)
= sum(: c(j).(h·Q(e(j))·c(i)) ←j :)

in which h·Q(e(j))·c(i) is a scalar, since h's rank is dual to Q's and c's is dual to e's; and scalar multiplication is commutative, so this is just

= sum(: (h·Q(e(j))·c(i)).c(j) ←j :)
= h·sum(: (Q(e(j))·c(i))×c(j) ←j :)

and, this being true for all h of rank dual to Q's, we can infer
Q(c(i)) = −sum(: (Q(e(j))·c(i))×c(j) ←j :)

whence Q's action on any rank, combined with its action on scalars, fully determines its action on the dual rank. When we have a local basis b of Q's rank, with dual p, we can re-write this last as
Q(c(i)) = −sum(: (p(k)·Q(e(j))·c(i)).b(k)×c(j) ←[k,j] :)

in which each p·Q(e)·c is a scalar field and b×c is the basis b and c imply for the rank of Q(c(i)). In the special case where e is b, whence c is p, Q∘p is expressed in terms of Q∘b via scalar fields of form p·Q(b)·p as co-ordinates with respect to the b×p basis of {linear (R:|R)} = R⊗dual(R). We'll see more of these p·Q(b)·p scalars, in due course.
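A pointwise numeric sketch of this dual action, taking Q (for simplicity, an assumption of the sketch) to have scalar rank, so that its action on a tangent basis e is just a matrix A: the formula then makes Q act on the dual basis as minus the transpose of A, and Q duly annihilates the identity sum(: c(i)×e(i) ←i :):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.normal(size=(n, n))        # Q(e(j)) = sum(: A[k, j].e(k) <- k :)
Qc = -A                            # row i: Q(c(i)) = -sum(: A[i, j].c(j) <- j :)

# the identity sum(: c(i)×e(i) <- i :), as an array indexed [dual, tangent],
# must be annihilated: Q of it is sum(: Q(c(i))×e(i) + c(i)×Q(e(i)) <- i :)
e = np.eye(n)
QI = np.zeros((n, n))
for i in range(n):
    QI += np.outer(Qc[i], e[:, i]) + np.outer(e[i], A[:, i])
assert np.allclose(QI, 0.0)
```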

We can apply this dual-action to mutually dual bases of gradient and tangent fields; suitable products of these yield bases of all other ranks; hence we can obtain a Leibniz operator's action on a basis of each rank from its action on a basis of gradients; combining this with the operator's action on scalar fields implies its action on all tensor fields. Thus the action of a Leibniz operator (on the neighbourhood in which the chosen local basis is defined) is entirely determined, on all ranks, by its action on scalar fields and its action on any one local basis of gradients. Choosing a local basis on which the operator's action is suitably straightforward is (as a result) often very helpful.

Local Linearity

In particular, consider a locally linear Leibniz operator, Q; this annihilates all scalar fields, so its action is entirely determined by its action on a local basis of gradients. Indeed, given a tensor field u and a basis e of its rank (typically generated from one of gradients, but the following holds anyway) with dual basis c, we can write u as sum(: e(i).(c(i)·u) ←i :), in which each c(i)·u is a scalar field and thus annihilated by Q, whence:

Q(u)
= sum(: Q(e(i).(c(i)·u)) ←i :)
= sum(: Q(e(i)).(c(i)·u) ←i :)

since each Q(c(i)·u)×e(i) term from the other half of the product rule is zero

= sum(: Q(e(i))×c(i) ←i :)·u

so Q simply acts, on u's rank, as the linear map sum(: Q(e(i))×c(i) ←i :) – the action of a locally linear Leibniz operator is, on each rank, simply the action of a linear map (albeit one which, considered as a tensor field, likely varies over the manifold); and the linear map as which it acts on any given rank can be inferred from that as which it acts on gradients. (The Riemann tensor shall, in due course, emerge as encoding this linear map for the torsion of a differential operator, acting on gradients.) In particular, although I here express the action on u's rank in terms of my chosen local bases e and c, the value of this linear map must be independent of choice of local bases.

The difference between two Leibniz operators which agree on scalars must, necessarily, annihilate scalars; that it is Leibniz is trivial. Consequently, it must be locally linear and, thus, wholly determined by the linear map as which it acts on gradients. In particular, since all differential operators agree with the intrinsic gradient operator d (and thus with each other) on scalar fields, the difference between a pair of differential operators is always locally linear and determined by the linear map as which it acts on gradient fields. Thus, given an arbitrary differential operator (for use as the origin of the space of possible differential operators, e.g. the co-ordinate differentiation operator associated with some given chart), there is a direct correspondence between differential operators and locally linear Leibniz operators of rank G – equally, the tensor fields (of rank G⊗G⊗T) as which such differences act on gradient fields.

In particular, the co-ordinate differentiation operators of two charts necessarily differ by a locally linear gradient-ranked Leibniz operator: d/dx − d/dy is locally linear for any charts x, y. Likewise, any differential operator can be wholly specified, relative to any given co-ordinate differentiation operator, by the linear map as which the difference between the two acts on gradients. (In the case of the covariant differential operator D and some co-ordinate chart x, the difference D−d/dx is encoded, in relativistic orthodoxy, by the Christoffel symbols for the chart x.)
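The claim that d/dx − d/dy is locally linear can be tested numerically; the sketch below uses Cartesian and polar charts on (part of) the plane, with all derivatives taken by central differences, and the gradient field and scalar field chosen purely for illustration:

```python
import numpy as np

def y_of_x(x):                      # polar chart on the right half-plane
    return np.array([np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])])

def x_of_y(y):
    return np.array([y[0] * np.cos(y[1]), y[0] * np.sin(y[1])])

def jac(f, z, eps=1e-6):            # J[a, k] = d f_a / d z_k
    cols = []
    for k in range(len(z)):
        step = np.zeros(len(z)); step[k] = eps
        cols.append((f(z + step) - f(z - step)) / (2 * eps))
    return np.stack(cols, axis=1)

def d_chart(w_cart, p, to_chart, from_chart):
    # d/d(chart) applied to a gradient field w (given by its Cartesian
    # components), at Cartesian p, re-expressed in the Cartesian frame
    J = jac(to_chart, p)                          # dy/dx at p
    def w_chart(y):                               # chart components of w
        return jac(from_chart, y).T @ w_cart(from_chart(y))
    M = jac(w_chart, to_chart(p), eps=1e-5)       # M[a, b] = d w~_a / d y_b
    return J.T @ M.T @ J                          # out[k, i], both indices Cartesian

def w(x): return np.array([x[1], np.sin(x[0])])   # an illustrative gradient field
def s(x): return 1.0 + x[0] * x[1]                # an illustrative scalar field

p = np.array([1.2, 0.7])
ident = lambda z: np.asarray(z, dtype=float)

def delta(field):                   # (d/dx − d/dy) acting on a gradient field
    return (d_chart(field, p, ident, ident)
            - d_chart(field, p, y_of_x, x_of_y))

# local linearity: delta(s.w) = s.delta(w), with no d(s)×w term
assert np.allclose(delta(lambda x: s(x) * w(x)), s(p) * delta(w), atol=1e-4)
assert not np.allclose(delta(w), 0.0, atol=1e-4)  # the two charts genuinely differ
```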


Finally, among basics, let us look briefly at what happens when we compose two Leibniz operators, principally for the sake of a negative result, but also to prepare the ground for the next topic.

A composite of Leibniz operators (trivially) is universal, has fixed rank, does respect trace and permutation and is globally linear; so, aside from the product rule, it meets all the criteria for a Leibniz operator. Suppose we have two Leibniz operators Q and S; let's see what happens when we apply their composite to a tensor product. Let b be a basis of Q's rank and p its dual; let e be a basis of S's rank with dual c; then

Q(S(u×v))
= Q(S(u)×v +sum(e×u×c)·S(v))
= Q(S(u))×v +sum(b×S(u)×p)·Q(v) +Q(sum(e×u×c))·S(v) +sum(b×sum(e×u×c)·(p·Q(S(v))))

The first and last of these terms are the ones the product rule demands; those in the middle are not generally zero, nor do they necessarily cancel one another, so the composite of two Leibniz operators does not generally obey Leibniz. (Double-differentiation, after all, does not simply obey the product rule.) Although we'll shortly see cases in which a Leibniz operator can be extracted from two others, this shows that simple composition isn't enough to achieve it. Before going further, pause to consider the third term's factor of Q(sum(e×u×c)), contracting it with an arbitrary tensor field h of rank dual to Q's (e.g. h could be p(j) for some j) for clarity:

h·Q(sum(e×u×c))
= sum(: h·Q(e(i)×u×c(i)) ←i :)
= sum(: h·Q(e(i))×u×c(i) +e(i)×h·Q(u)×c(i) +e(i)×u×h·Q(c(i)) ←i :)

by the product rule. Apply our earlier formula relating the action of Q on the dual bases c and e to replace Q(c(i)):

= sum(: h·Q(e(i))×u×c(i) +e(i)×h·Q(u)×c(i) −e(i)×u×h·sum(: (Q(e(j))·c(i))×c(j) ←j :) ←i :)

Shuffle the order of terms and exploit the fact that each h·Q(e)·c is a scalar:

= sum(: e(i)×h·Q(u)×c(i) +h·Q(e(i))×u×c(i) −sum(: (h·Q(e(j))·c(i)).e(i)×u×c(j) ←j :) ←i :)

Summing over i in the last term now collapses out its (…·c(i)).e(i):

= sum(e×h·Q(u)×c) +sum(: h·Q(e(i))×u×c(i) ←i :) −sum(: h·Q(e(j))×u×c(j) ←j :)

in which the last two, aside from the name of their remaining summation variable, differ only in sign, so cancel, leaving

= sum(e×h·Q(u)×c)

Antisymmetric Parts

Now, with two operators as above, consider Q(S(u)) for some arbitrary tensor field u. Its rank comprises that of Q, followed by that of S, then that of u. If we apply a tensor permutation operator to shuffle its tensor rank, we can swap these first two; the result then has the same rank as S(Q(u)) although it shall in general not be the same tensor field. When S and Q have the same rank, we can take the difference between Q(S(u)) and the result of performing such a permutation, to swap the order of the rank contributions from the two operators; from this we may obtain an antisymmetric part of Q∘S. We can do the same for S∘Q; in general, it'll yield a different result to that obtained from Q∘S. Such antisymmetrized composites are necessarily universal tensor operators, of fixed rank, that respect trace and permutation and are globally linear; it remains only to address the product rule.


As discussed above, let us now consider the result of applying two Leibniz operators – Q and S, of equal rank R with local basis b and dual p – then antisymmetrizing over the two rank factors added by the operators. Applying this to a product u×v we obtain:

antiSym([R,R], Q(S(u×v)))
= sum(: (b(i)∧b(j))×p(j)·(p(i)·( Q(S(u))×v +sum(b×S(u)×p)·Q(v) +Q(sum(b×u×p))·S(v) +sum(b×sum(b×u×p)·(p·Q(S(v)))) )) ←[i,j] :)
= sum(: (b(i)∧b(j))×( p(j)·(p(i)·Q(S(u)))×v +p(j)·S(u)×p(i)·Q(v) +p(j)·(p(i)·Q(sum(b×u×p)))·S(v) +u×p(j)·(p(i)·Q(S(v))) ) ←[i,j] :)

in which we can now substitute our simplification of h·Q(sum(e×u×c)), with p(i) as h:

= sum(: (b(i)∧b(j))×( p(j)·(p(i)·Q(S(u)))×v +p(j)·S(u)×p(i)·Q(v) +p(j)·sum(b×p(i)·Q(u)×p)·S(v) +u×p(j)·(p(i)·Q(S(v))) ) ←[i,j] :)
= sum(: (b(i)∧b(j))×( p(j)·(p(i)·Q(S(u)))×v +p(j)·S(u)×p(i)·Q(v) +p(i)·Q(u)×p(j)·S(v) +u×p(j)·(p(i)·Q(S(v))) ) ←[i,j] :)

The first and last terms here are simply the terms we would expect from the product rule. Interchanging i and j in any term of the sum, thanks to the leading (b(i)∧b(j))×(…), simply reverses the sign of the term; doing this to one of the two middle terms and swapping Q and S gives us a term that cancels the other middle term. We can thus infer that antiSym([R,R])&on;(Q&on;S +S&on;Q) will be left with only the terms resulting from the first and last here – and, hence, shall obey Leibniz. We thus find a way to obtain a new Leibniz operator from a pair of equal-rank Leibniz operators: compose them in both orders and sum (or take the average), then antisymmetrize in the two factors of their rank added thereby. The resulting operator's rank is the tensor product of the two originals' ranks.

In particular, when Q and S are the same operator, the two middle terms above actually cancel, so the antisymmetric part of the square of any Leibniz operator is a Leibniz operator; specifically, we can define

This is a Leibniz operator, which I shall refer to as the torsion of Q. In the case of a differential operator, D, whose rank is G, we can use the index-notation to write torsion(D) as (D0&on;D1 −D1&on;D0)/2 and even as D[0&on;D1]. Notice that torsion(D)'s rank is G&tensor;G, the same as that of the metric of space-time. Also, notice that the torsion of a co-ordinate differentiation operator annihilates all scalar fields; the second derivative of a scalar field is always symmetric. (In fact, the torsion of a co-ordinate differentiation operator is simply zero; but other interesting differential operators shall have non-zero torsion that still annihilates all scalar fields.)
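The symmetry of a scalar field's second derivative is easy to check mechanically. Here is a minimal sketch (the particular scalar field is an arbitrary assumption) confirming that the antisymmetric part of a scalar field's second co-ordinate derivative – i.e. the action on it of the co-ordinate operator's torsion – vanishes:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(x * y) * sp.sin(x + y**2)   # an arbitrary smooth scalar field

# second co-ordinate derivative (Hessian) of f:
hess = sp.Matrix(2, 2, lambda i, j: sp.diff(f, [x, y][i], [x, y][j]))

# its antisymmetric part -- the torsion's action on the scalar field -- vanishes:
assert sp.simplify(hess - hess.T) == sp.zeros(2, 2)
```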

¿ Is there a corresponding result for several Leibniz operators (of equal rank); is their symmetrized product's antisymmetric part necessarily also Leibniz ?

Consider a list ({Leibniz operators of rank R}: Q |n) and the induced average of the composites of permutations of this list, average(: bulk(&on;, Q&on;s) ←(n|s|n) |{monic map}). We can antisymmetrize any output of this in the n factors of R that it gained from applying the n operators. Call the result Z and consider its action on a tensor product u×v.

When torsion(Q) annihilates all scalar fields, I describe Q as torsion-free (for all that torsion(Q) need not annihilate other ranks of tensor field). In this case, torsion(Q) is locally linear and, as discussed above, acts as a linear map (at each point) on each tensor rank. Its action on all ranks can be inferred from its action on gradients; when Q's rank is R, this is a linear map (at each point) from G to R&tensor;R&tensor;G, which is naturally encoded as a tensor field of rank R&tensor;R&tensor;G&tensor;T. The action of torsion(Q) on Q's own rank, R, is likewise encoded as a tensor field of rank R&tensor;R&tensor;R&tensor;dual(R); I shall refer to this tensor field as Riemann(Q), the (generalized) Riemann tensor; tracing out the final dual(R) with one of the first two ranks yields Ricci(Q), the Ricci tensor. Various symmetries of these tensors and the results of applying Q to them are known as the Bianchi identities.
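In the classical case, the objects named here have standard co-ordinate formulae. As an illustration (assuming the usual Christoffel-symbol and curvature formulae rather than deriving them from Q's action), the following sketch computes the Riemann and Ricci tensors of the unit sphere from its metric and confirms that, for the unit sphere, Ricci equals the metric:

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
q = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])   # metric of the unit sphere
ginv = g.inv()

# Christoffel symbols, from the derivatives of g's co-ordinates:
Gamma = [[[sum(ginv[a, d] * (sp.diff(g[d, c], q[b]) + sp.diff(g[d, b], q[c])
                             - sp.diff(g[b, c], q[d])) for d in range(2)) / 2
           for c in range(2)] for b in range(2)] for a in range(2)]

def riemann(a, b, c, d):
    # R^a_{bcd}, in the standard co-ordinate formula
    expr = sp.diff(Gamma[a][d][b], q[c]) - sp.diff(Gamma[a][c][b], q[d])
    expr += sum(Gamma[a][c][e] * Gamma[e][d][b] - Gamma[a][d][e] * Gamma[e][c][b]
                for e in range(2))
    return sp.simplify(expr)

# Ricci: trace the upper rank against the third lower one
ricci = sp.Matrix(2, 2, lambda b, d: sum(riemann(a, b, a, d) for a in range(2)))
assert sp.simplify(ricci - g) == sp.zeros(2, 2)   # unit sphere: Ricci equals metric
```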

Lie Bracket

In the foregoing, we took two Leibniz operators Q and S of equal rank R and applied antisymmetrization to the tensor ranks added by them, antiSym(R,R), while symmetrizing over the order in which we applied them, Q&on;S +S&on;Q. I now turn to a case where it's interesting to antisymmetrize over the order of application, rather than over the tensor ranks introduced; we'll now look at a special case of Q&on;S −S&on;Q.

Given a Leibniz operator Q, each tensor field h of rank dual to Q yields a scalar-rank Leibniz operator h·Q (which, thanks to the simplicity of scalars, is the operator we actually discussed in the specification of the product rule). Given a second such tensor field, k, we can compose the induced operators in both orders and take the difference, (h·Q)&on;(k·Q) −(k·Q)&on;(h·Q). This is necessarily also a globally linear scalar-rank universal tensor operator that respects trace and permutation; if it obeys Leibniz, it'll be a scalar-rank Leibniz operator. Let's look at its action on an arbitrary tensor field u:

((h·Q)&on;(k·Q) −(k·Q)&on;(h·Q))(u)
= h·Q(k·Q(u)) −k·Q(h·Q(u))

to which we can apply the product rule (as extended to contraction, thanks to Q respecting trace):

= h·Q(k)·Q(u) +k·(h·Q(Q(u))) −k·Q(h)·Q(u) −h·(k·Q(Q(u)))
= (h·Q(k) −k·Q(h))·Q(u) +2.k·(h·torsion(Q, u))

Each term in this has the form of a tensor field contracted with a Leibniz operator, applied to u; each thus acts on u as a Leibniz operator, hence so does the sum and we can infer that (h·Q)&on;(k·Q) −(k·Q)&on;(h·Q) is indeed a Leibniz operator. When Q is torsion-free and u is a scalar field, furthermore, it acts simply as the result of contracting a tensor field, of the same rank as h and k, with Q. Define

(and allow that the subscript Q may be dropped in contexts where the Leibniz operator is implicit). This then gives us

in which the second term will, when Q is torsion-free, simply act linearly on each rank. The tensor field [h ; k]Q is known as the Q-commutator of h and k (with the Q-prefix dropped when Q is implicit from context) and [;]Q is a generalization of the Lie bracket (which is [;] as obtained from differential operators).
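In the familiar special case where Q is the (torsion-free) co-ordinate derivative on the plane, this is the classical Lie bracket of vector fields. The following sketch (the fields h, k and the scalar f are arbitrary choices for illustration) confirms that (h·Q)&on;(k·Q) −(k·Q)&on;(h·Q), applied to a scalar field, acts as [h ; k]·Q:

```python
import sympy as sp

x, y = sp.symbols('x y')
v = [x, y]
f = sp.sin(x) * sp.exp(y)        # an arbitrary scalar field
h = [x * y, sp.cos(x)]           # components of h
k = [y**2, x + y]                # components of k

def act(w, s):
    # w.Q applied to the scalar s, with Q the co-ordinate derivative
    return sum(w[i] * sp.diff(s, v[i]) for i in range(2))

# [h ; k] = h.Q(k) - k.Q(h), componentwise:
br = [act(h, k[i]) - act(k, h[i]) for i in range(2)]

# the second-derivative terms cancel, leaving the bracket's first-order action:
lhs = act(h, act(k, f)) - act(k, act(h, f))
assert sp.simplify(lhs - act(br, f)) == 0
```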

The commutator is antisymmetric under interchange of its two inputs. If we hold one input fixed and consider how the commutator varies with the other, it is a rank-limited (accepting only the rank dual to Q's) globally linear tensor operator. Given tensor fields h, k as before and a scalar field f,

[f.h ; k]
= f.h·Q(k) −k·Q(f.h)
= f.h·Q(k) −k·(Q(f)×h +f.Q(h))
= f.[h ; k] −(k·Q(f)).h
= [h ; f.k] −(k·Q(f)).h −(h·Q(f)).k


so the commutator is not locally linear in its inputs unless Q is locally linear. Given a basis b of Q's rank, with dual p as usual, we have

[h ; k]
= [sum((b·h).p) ; sum((b·k).p)]

but let's take that one step at a time:

= sum(: [(b(r)·h).p(r) ; k] ←r :)
= sum(: (b(r)·h).[p(r) ; k] −(k·Q(b(r)·h)).p(r) ←r :)
= sum(: (b(r)·h).[p(r) ; (b(t)·k).p(t)] −(b(t)·k).(p(t)·Q(b(r)·h)).p(r) ←[r, t] :)
= sum(: (b(r)·h).(b(t)·k).[p(r) ; p(t)] +(b(r)·h).(p(r)·Q(b(t)·k)).p(t) −(b(t)·k).(p(t)·Q(b(r)·h)).p(r) ←[r, t] :)
= sum(: (b(r)·h).(b(t)·k).[p(r) ; p(t)] ←[r, t] :) +sum(: (h·Q(b(s)·k) −k·Q(b(s)·h)).p(s) ←s :)

in which the first term is just the co-ordinates of h and k multiplied by the commutators of basis members while the remainder encodes the action of each field on the other's co-ordinates.

Action on bases

Next, let's examine the commutator's action on the members of a basis:

[p(r) ; p(t)]
= p(r)·Q(p(t)) −p(t)·Q(p(r))

Apply our earlier expression of Q&on;p in terms of Q&on;b:

= −sum(: (p(r)·Q(b(m))·p(t)).p(m) ←m :) +sum(: (p(t)·Q(b(m))·p(r)).p(m) ←m :)
= sum(: (p(t)·Q(b(m))·p(r) −p(r)·Q(b(m))·p(t)).p(m) ←m :)

which is zero if every output of Q&on;b is symmetric. From this, we can obtain the scalars

i.e. the s-component of [p(r) ; p(t)] is the [t,r]-component of Q(b(s))'s antisymmetric part – whence the antisymmetric parts of outputs of Q&on;b are given by:

giving us a clear connection between commutators of p's outputs and antisymmetric parts of Q&on;b's outputs. If we multiply this ×p(s) and sum over s, we get the tensor equation

which serves as summary of the action on bases.
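As a concrete illustration (again in the classical setting of the co-ordinate derivative on the plane; the orthonormal polar frame used here is an assumed example, not from the text): a co-ordinate basis has symmetric Q&on;b, so its commutators vanish, while the polar frame's fields do not commute – their bracket is −(1/r) times the tangential field:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
v = [x, y]
r = sp.sqrt(x**2 + y**2)

def bracket(h, k):
    # [h ; k] for Q the co-ordinate derivative on the plane, componentwise
    act = lambda w, s: sum(w[i] * sp.diff(s, v[i]) for i in range(2))
    return [sp.simplify(act(h, k[i]) - act(k, h[i])) for i in range(2)]

# co-ordinate basis fields commute:
assert bracket([1, 0], [0, 1]) == [0, 0]

# the orthonormal polar frame does not: [e_r ; e_t] = -(1/r).e_t
e_r = [x / r, y / r]             # radial unit field
e_t = [-y / r, x / r]            # tangential unit field
br = bracket(e_r, e_t)
assert [sp.simplify(br[i] + e_t[i] / r) for i in range(2)] == [0, 0]
```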

Jacobi's sum

Consider, next, the case of three tensor fields x, y and z with rank dual to that of Q. The commutator of any pair of these is another tensor field of the same rank, so we can take its commutator with the third. Let's see what happens (taking various Q-subscripts as implicit) when we add up the three possibilities:

[x ; [y ; z]] +[z ; [x ; y]] +[y ; [z ; x]]
= x·Q([y ; z]) −[y ; z]·Q(x) +z·Q([x ; y]) −[x ; y]·Q(z) +y·Q([z ; x]) −[z ; x]·Q(y)
= x·Q(y·Q(z) −z·Q(y)) +z·Q(x·Q(y) −y·Q(x)) +y·Q(z·Q(x) −x·Q(z))
−(y·Q(z) −z·Q(y))·Q(x) −(x·Q(y) −y·Q(x))·Q(z) −(z·Q(x) −x·Q(z))·Q(y)

Expand by applying the product rule:

= x·Q(y)·Q(z) +y·(x·Q(Q(z))) −x·Q(z)·Q(y) −z·(x·Q(Q(y)))
+z·Q(x)·Q(y) +x·(z·Q(Q(y))) −z·Q(y)·Q(x) −y·(z·Q(Q(x)))
+y·Q(z)·Q(x) +z·(y·Q(Q(x))) −y·Q(x)·Q(z) −x·(y·Q(Q(z)))
−y·Q(z)·Q(x) +z·Q(y)·Q(x) −x·Q(y)·Q(z) +y·Q(x)·Q(z) −z·Q(x)·Q(y) +x·Q(z)·Q(y)

Regroup the terms obtained by that expansion:

= y·(x·Q(Q(z))) −x·(y·Q(Q(z))) +x·(z·Q(Q(y))) −z·(x·Q(Q(y))) +z·(y·Q(Q(x))) −y·(z·Q(Q(x)))
+y·Q(z)·Q(x) −z·Q(y)·Q(x) +x·Q(y)·Q(z) −y·Q(x)·Q(z) +z·Q(x)·Q(y) −x·Q(z)·Q(y)
−y·Q(z)·Q(x) +z·Q(y)·Q(x) −x·Q(y)·Q(z) +y·Q(x)·Q(z) −z·Q(x)·Q(y) +x·Q(z)·Q(y)

Observe that the second and third sets of terms match exactly, with opposite sign, so cancel; and express the Q&on;Q results in terms of torsion(Q)

= 2.y·(x·torsion(Q, z)) +2.x·(z·torsion(Q, y)) +2.z·(y·torsion(Q, x))

and remember that, when Q is torsion-free, torsion(Q) acts on each rank as a linear map, i.e. by contraction with a tensor field specific to that rank. When Q has rank R, our fields x, y and z are of rank dual(R) and the tensor as which torsion(Q) acts on this rank has rank [R,R,dual(R),R]. The way the above contracts x, y and z with this tensor effectively symmetrizes it over the three factors of R (as is only to be expected, given that what we started with was symmetric between x, y and z). I'll need to explore the Riemann tensor some more before I can reach the conclusion that the result is zero (known as the Jacobi identity).
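For the flat, torsion-free case – Q the co-ordinate derivative on the plane – the Jacobi sum can be checked directly; the three fields below are arbitrary assumptions for illustration:

```python
import sympy as sp

x, y = sp.symbols('x y')
v = [x, y]

def act(w, s):
    return sum(w[i] * sp.diff(s, v[i]) for i in range(2))

def bracket(h, k):
    # [h ; k] = h.Q(k) - k.Q(h) for Q the co-ordinate derivative
    return [act(h, k[i]) - act(k, h[i]) for i in range(2)]

X = [y, x * y]
Y = [x**2, 1 + y]
Z = [sp.sin(x), x - y]

# [X ; [Y ; Z]] + [Z ; [X ; Y]] + [Y ; [Z ; X]] vanishes:
jac = [bracket(X, bracket(Y, Z))[i] + bracket(Z, bracket(X, Y))[i]
       + bracket(Y, bracket(Z, X))[i] for i in range(2)]
assert [sp.simplify(t) for t in jac] == [0, 0]
```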

Constant metric

Suppose, for some Leibniz operator Q, that we have a linear map, g, from the dual of Q's rank to Q's rank, which is annihilated by Q. The reason for considering this special case is that, when we come to deal with differential operators, we shall be interested in having one which considers the metric of space-time (a tensor field which characterizes space-time's geometry; the tensor's value at each point is symmetric and invertible) to be constant. If Q's rank is R, then g's is R&tensor;R, the same as that of torsion(Q).

What can we deduce ? First (since Q respects the permutation involved), Q must also annihilate g's transpose: whence also both its symmetric and antisymmetric parts. Consequently, I'll mainly consider cases where g is either symmetric or antisymmetric. (I shall, none the less, begin by considering the general case, without this simplification.) Now, g may be expressed in terms of its components with respect to a local basis, b – with dual p, as usual – of Q's rank; when Q is applied to it in this form, its annihilation implies a relationship between Q's action on our basis, Q&on;b, and on the scalar fields obtained as g's components with respect to the basis. For arbitrary tensor field h of rank dual to that of Q,

0 = h·Q(g)
= h·Q(sum(: (p(i)·g·p(j)).b(i)×b(j) ←[i,j] :))
= sum(: h·Q(p(i)·g·p(j)).b(i)×b(j) +(p(i)·g·p(j)).h·Q(b(i))×b(j) +(p(i)·g·p(j)).b(i)×h·Q(b(j)) ←[i,j] :)

Contracting on the right with a specific p(j) and then with a specific p(i), we can re-arrange this to obtain Q's action on co-ordinates (from the first term) in terms of its action on outputs of b (the other two):

= (sum(: (p(r)·g·p(s)).h·Q(b(r))×b(s) +(p(r)·g·p(s)).b(r)×h·Q(b(s)) ←[r,s] :)·p(j))·p(i)
= sum(: (p(r)·g·p(j)).h·Q(b(r))·p(i) ←r :) +sum(: (p(i)·g·p(s)).h·Q(b(s))·p(j) ←s :)

each term in which is a product of two scalars, a co-ordinate of g times an h·Q(b)·p; rearranging to put h at the left and noting that this is true for all h, I infer:

= −sum(: Q(b(k))·p(i).(p(k)·g·p(j)) +Q(b(k))·p(j).(p(i)·g·p(k)) ←k :)

which gives us Q's action on g's co-ordinates in terms of Q&on;b. As a first step towards expressing the latter in terms of the former, go back to our initial 0 = Q(g), removing the h· above in favour of applying τ[1,0,2] to the third term. If we now apply τ[1,0,2] to the whole (equivalent to summing p(i)×(the preceding)×p(j) over i and j), we get

sum(: b(i)×Q(p(i)·g·p(j))×b(j) ←[i,j] :)
= −sum(: (p(i)·g·p(j)).( τ[1,0](Q(b(i)))×b(j) +b(i)×Q(b(j)) ) ←[i,j] :)
= −sum(: τ[1,0](Q(b(k)))×p(k)·g +g·p(k)×Q(b(k)) ←k :)

The first term in this sum involves a τ[1,0]&on;Q&on;b, which we can substitute from our earlier formula for Q&on;b's antisymmetric parts in terms of commutators; we'll then have terms g·p×Q(b) and Q(b)×p·g (and some [p ; p] terms). When g is symmetric or antisymmetric, we can convert the p·g into ±g·p. Applying τ[1,0,2] to g·p×Q(b) or τ[0,2,1] to Q(b)×p·g will then yield the same term, give or take a sign; applying each of the given permutations to the other term simply applies τ[1,0] to its Q(b) factor. (In the following, τ[0,1,2] is just another way of writing the identity permutation, used to match the forms of the other permutations.) Thus, if we add these two permuted terms and subtract the result from what we started with, aside from the duplicated term (that'll cancel in the antisymmetric case), our terms in Q(b) all become terms in its antisymmetric part, for which we can substitute using the last section's formulae. We don't actually need to take the first step (eliminating the τ[1,0](Q(b)) we already have) to do this; and delaying substitution until we've combined the various permuted terms allows us to do the substitution all at one point. Of course, the result isn't quite as simple in the general case (when g is neither symmetric nor antisymmetric), but the same treatment shall still reduce the number of Q(b) terms. Before we begin that, let's pause to see what happens when we contract g with our earlier summary of the relation of [p ; p] commutators to the antisymmetric part of Q&on;b, and with a permuted form of this:

We can re-arrange the first of these (exploiting antisymmetry of [p ; p] to unflip the sign of its term while moving from one side to the other) as:

We are now ready to look at the permuted combination described earlier:

(τ[0,1,2] −τ[1,0,2] −τ[0,2,1])(sum(: b(i)×Q(p(i)·g·p(j))×b(j) ←[i,j] :))
= −(τ[0,1,2] −τ[1,0,2] −τ[0,2,1])(sum(:
τ[1,0](Q(b(k)))×p(k)·g +g·p(k)×Q(b(k)) ←k :))

Apply the permutations (mostly):

= sum(:
−τ[1,0](Q(b(k)))×p(k)·g +Q(b(k))×p(k)·g +τ[0,2,1](τ[1,0](Q(b(k)))×p(k)·g)
−g·p(k)×Q(b(k)) +τ[1,0,2](g·p(k)×Q(b(k))) +g·p(k)×τ[1,0](Q(b(k))) ←k :)


= sum(:
(Q(b(k)) −τ[1,0](Q(b(k))))×p(k)·g −g·p(k)×(Q(b(k)) −τ[1,0](Q(b(k))))
+τ[0,2,1](τ[1,0](Q(b(k)))×p(k)·g) +τ[1,0,2](g·p(k)×Q(b(k))) ←k :)

We now have two terms overtly in Q&on;b's antisymmetric part, that we can substitute as above; of the remaining terms, I've retained the form of the first, ready for substitution using the formula derived before we embarked on this derivation:

= sum(:
b(j)×b(i)×[p(i) ; p(j)]·g −g·[p(i) ; p(j)]×b(j)×b(i)
+τ[0,2,1](b(i)×b(j)×[p(i) ; p(j)]·g)
←[i, j] :) +sum(:
τ[0,2,1](Q(b(k))×p(k)·g) +τ[1,0,2](g·p(k)×Q(b(k))) ←k :)

Apply the permutation in the first case and re-arrange it to match its partner in the second:

= sum(:
b(j)×b(i)×[p(i) ; p(j)]·g −g·[p(i) ; p(j)]×b(j)×b(i) +b(i)×[p(i) ; p(j)]·g×b(j)
←[i, j] :) +sum(: τ[1,0,2]((p(k)·g +g·p(k))×Q(b(k))) ←k :)

What the final term feeds to τ[1,0,2] can be written as (τ[1,0](g) +g)·p(k)×Q(b(k)); rearranging our equation and applying (self-inverse) τ[1,0,2] throughout (so as to move it from the last term to all others) we obtain:

(g +τ[1,0](g))·sum(: p(k)×Q(b(k)) ←k :)
= τ[1,0,2](
(τ[0,1,2] −τ[1,0,2] −τ[0,2,1])(sum(: b(i)×Q(p(i)·g·p(j))×b(j) ←[i,j] :))
−sum(: b(j)×b(i)×[p(i) ; p(j)]·g −g·[p(i) ; p(j)]×b(j)×b(i) +b(i)×[p(i) ; p(j)]·g×b(j) ←[i, j] :) )

Apply the permutations:

= sum(:
Q(p(i)·g·p(j))×b(i)×b(j) −b(i)×Q(p(i)·g·p(j))×b(j) −b(j)×b(i)×Q(p(i)·g·p(j))
−b(i)×b(j)×[p(i) ; p(j)]·g +b(j)×g·[p(i) ; p(j)]×b(i) −[p(i) ; p(j)]·g×b(i)×b(j)
←[i, j] :)

Re-order terms to make the patterns of factors match, exploiting the i↔j antisymmetry of the [p ; p] terms:

= sum(:
Q(p(i)·g·p(j))×b(i)×b(j) −b(i)×Q(p(i)·g·p(j))×b(j) −b(j)×b(i)×Q(p(i)·g·p(j))
−[p(i) ; p(j)]·g×b(i)×b(j) −b(i)×g·[p(i) ; p(j)]×b(j) +b(j)×b(i)×[p(i) ; p(j)]·g
←[i, j] :)

Since this starts with g +τ[1,0](g), which is 2.g for symmetric g or zero for antisymmetric, I shall shortly consider these two cases separately. First, however, notice that, as long as g +τ[1,0](g) is invertible, the foregoing can be contracted with its inverse to supply a formula for sum(p×Q&on;b) and we can contract b(k)· this to obtain Q(b(k)) for any k. This gives us Q&on;b in terms of Q's action on co-ordinates of g and the contractions of [p ; p] with g; the following treatment of the symmetric case merely refines the form of that.

Now, this isn't immediately a lot of use, given that the [p(i) ; p(j)] can only be known given the values of Q&on;p or, equivalently, Q&on;b, which is what this formula thinks it's telling us – but, using an initial basis for which we know Q&on;b to be symmetric, we can boot-strap knowledge of Q's action on this basis from knowledge of its action on co-ordinates of g (since this basis's [p ; p] are all zero); we can then use this basis to determine Q's action on the basis we'd sooner be using (for whatever reason) in the form above.

In particular, for the case of a differential operator Q = D, with g the metric tensor that encodes space-time's geometry, g's co-ordinates are scalar fields, on which D acts as the intrinsic gradient operator d, which we know independently of D. Initially choosing a co-ordinate basis, b = dx for some list of co-ordinates x, we do indeed have D(dx(k)) symmetric for each k (because D is torsion-free), so we can infer these D(dx(k)) from the co-ordinates of g with respect to x. That then equips us to determine the values of [p ; p] and thus D&on;b for any other local basis pair b, p.
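In classical terms, this boot-strap is the computation of the Christoffel symbols from the co-ordinates of g. A sketch (for the plane's metric in polar co-ordinates, an assumed example) using the standard formula:

```python
import sympy as sp

r, t = sp.symbols('r theta', positive=True)
q = [r, t]
g = sp.Matrix([[1, 0], [0, r**2]])   # the plane's metric in polar co-ordinates
ginv = g.inv()

# D(dx)'s components -- classically the Christoffel symbols -- from d of g's
# co-ordinates, via the standard formula:
Gamma = [[[sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, c], q[b]) + sp.diff(g[d, b], q[c])
                                         - sp.diff(g[b, c], q[d]))
                           for d in range(2)) / 2)
           for c in range(2)] for b in range(2)] for a in range(2)]

assert Gamma[0][1][1] == -r               # Gamma^r_{theta theta}
assert Gamma[1][0][1] == 1 / r            # Gamma^theta_{r theta}
assert Gamma[1][0][1] == Gamma[1][1][0]   # symmetric in its lower indices, as for a torsion-free D
```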


When g is symmetric, we have

−2.g·sum(: p(k)×Q(b(k)) ←k :)
= sum(:
[p(i) ; p(j)]·g×b(i)×b(j) +b(i)×[p(i) ; p(j)]·g×b(j) −b(j)×b(i)×[p(i) ; p(j)]·g
−Q(p(i)·g·p(j))×b(i)×b(j) +b(i)×Q(p(i)·g·p(j))×b(j) +b(j)×b(i)×Q(p(i)·g·p(j))
←[i, j] :)

and the [i,j]-symmetry of the Q(p·g·p) lets us swap their b(i) and b(j) factors, so that the corresponding swaps in the [p ; p]·g terms can induce sign changes and make the two types of term match in sign as well as order of factors:

= sum(:
−[p(i) ; p(j)]·g×b(j)×b(i) +b(i)×[p(i) ; p(j)]·g×b(j) +b(i)×b(j)×[p(i) ; p(j)]·g
−Q(p(i)·g·p(j))×b(j)×b(i) +b(i)×Q(p(i)·g·p(j))×b(j) +b(i)×b(j)×Q(p(i)·g·p(j))
←[i, j] :)
= (τ[2,1,0] +τ[1,2,0] −τ[0,1,2])(sum(: (Q(p(i)·g·p(j)) +[p(i) ; p(j)]·g)×b(j)×b(i) ←[i, j] :))

When g is, furthermore, invertible we can contract on the left with minus half its inverse and obtain:

sum(: p(k)×Q(b(k)) ←k :)
= (τ[0,1,2] −τ[2,1,0] −τ[1,2,0])(sum(: (Q(p(i)·g·p(j)) +[p(i) ; p(j)]·g)×b(j)×b(i) ←[i, j] :))/g/2

Contracting b(k)· this for any given k will yield Q(b(k)), so this tensor (whose value depends on choice of basis) serves to map basis members to their images under Q – but this only works for the b(k), not for general tensors !


When g is antisymmetric, we learn nothing about Q&on;b (although we might be able to learn something by using our earlier sum(g·p×Q(b) +τ[1,0](Q(b))×p·g) equation differently), since it drops out of the formula, leaving us with a constraint relating Q's action on co-ordinates of g to the [p ; p]·g:

= sum(:
[p(i) ; p(j)]·g×b(i)×b(j) −b(i)×[p(i) ; p(j)]·g×b(j) −b(j)×b(i)×[p(i) ; p(j)]·g
−Q(p(i)·g·p(j))×b(i)×b(j) +b(i)×Q(p(i)·g·p(j))×b(j) +b(j)×b(i)×Q(p(i)·g·p(j))
←[i, j] :)

in which (thanks to the sign-change on converting our one g·[p ; p] to [p ; p]·g) terms with corresponding factor order have opposite sign; so we can write this as

= (τ[0,1,2] −τ[1,0,2] −τ[2,1,0])(sum(:
([p(i) ; p(j)]·g −Q(p(i)·g·p(j)))×b(i)×b(j)
←[i, j] :))

the sum in which is manifestly antisymmetric under [i,j]-interchange, hence antisymmetric in its last two tensor factors; so swapping these last two factors in the τ[1,0,2] and τ[2,1,0] uses of the sum simply flips their signs, yielding:

= (τ[0,1,2] +τ[1,2,0] +τ[2,0,1])(sum(:
([p(i) ; p(j)]·g −Q(p(i)·g·p(j)))×b(i)∧b(j)
←[i, j] :))

which (given b(i)∧b(j)'s antisymmetry) is simply thrice the (3-way) antisymmetric part of the given tensor sum.

Now, this is merely a necessary condition arising from Q(g) being zero; next, let us consider whether it is a sufficient condition to ensure that Q(g) shall be zero. To this end, suppose we have some antisymmetric f of the relevant rank for which, for all applicable i, j and k,

p(k)·Q(p(i)·f·p(j)) +p(i)·Q(p(j)·f·p(k)) +p(j)·Q(p(k)·f·p(i))
= [p(i) ; p(j)]·f·p(k) +[p(k) ; p(i)]·f·p(j) +[p(j) ; p(k)]·f·p(i)

which is just the single-component truth equivalent to:

using the (adapted) subscript notation to indicate symmetrization. Simply rewinding the derivations above, we can get back to the point where we subtracted two permuted versions of an equation from its original – which we can re-state as:

= (τ[0,1,2] −τ[1,0,2] −τ[0,2,1])(
sum(: b(i)×Q(p(i)·f·p(j))×b(j) ←[i, j] :)
+sum(: τ[1,0](Q(b(k)))×p(k)·f +f·p(k)×Q(b(k)) ←k :) )

Written by Eddy.