In a linear space, we know how to do integration with respect to some given
co-ordinates; and we can integrate a wide range of functions of position; at
least any exhibitable function bounded within some finite convex hull in any
linear output space, integrating over any exhibitable sub-set of our linear
space. We can use that to induce integration of functions on a smooth manifold
via our charts' representation of the manifold in some linear space. As with
everything on a smooth manifold, though, the result only really means something
in so far as it doesn't depend on our choice of chart. So, among the diverse
things we're *able* to integrate, …

Our machinery for integration on linear spaces ultimately adds up results on lots of little boxes, within each of which the integrand doesn't change much. So let's look at one such little box; and I'll stick to three dimensions for now (it's hard enough representing that in a two-dimensional medium, as it is), but try to imagine we have more of them.

The integral's contribution from our little box is then the integrand's
*typical value* in the box times something like the *volume* of the box; and
this last shall be, in some sense, a *product* of the edges of the box. We're
going to add up contributions of this kind from many such little boxes to get
our overall integral. The result then has to be a value that's not tied to any
point of the manifold (it's associated with the whole region over which we
integrated, not any single point of it), so it more or less has to be a
scalar. (It could be a value in some linear space external to the manifold;
but then, by taking co-ordinates in that space, we can reduce our integration
to one integration per co-ordinate, each giving a scalar answer, that we then
combine to get the original result.) So the multiplicative way of combining
our little box with our integrand, implicit in the *times* alluded to above,
needs to produce a scalar answer that doesn't depend on our co-ordinates. So
let's see how it depends on our box, in so far as the integrand's value
remains near-enough constant throughout the box.

If we shrink some edges of our box by some factor, we're going to shrink the
integral's contribution from the box by that factor, raised to the power that
is the number of edges we shrank; so, if we halve three edges, we reduce the
*volume* by a factor of eight, for example. So whatever our multiplication
does, between integrand and box, the box gets represented by some sort of a
*product* of its edges (as we would expect, for a volume); which means we can
read the integrand as some kind of linear map from the tensor product of the
edge-vectors.

If we shear the box, its volume doesn't change. (Picture a cube of jelly
placed on a freezing cold surface, such as a block of ice; the jelly sticks
along its surface of contact. If you tilt the surface, the jelly leans over in
the direction of tilting, but its top surface stays the same distance from the
bottom and retains its shape, while the side faces distort; the edges parallel
to the freezing surface stay unchanged, but the other edges change by various
displacements parallel to the surface.) A shear changes some edge vectors by
linear combinations of the others; so whatever our integration does with the
tensor product of the edge-vectors of our box, it isn't changed by adding
multiples of one edge-vector to each of the others; which implies that, in
fact, we're using an alternating n-form on our manifold's tangent bundle, with
n being the number of edges our box has, i.e. the dimension of our manifold.
This can, indeed, be represented as a linear map from the tensor product of
the edges of our box; and it's linear in each edge and antisymmetric under
swapping any two, so it's actually a linear map from the antisymmetric part of
the tensor product of our edge-vectors. Since the number of edges is equal to
the dimension of our manifold, the space of such antisymmetric products is
one-dimensional, as is its dual, which encodes each n-form as a simple linear
map to scalars.
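In co-ordinates, this alternating behaviour is exactly what the determinant of
the matrix of edge-vectors computes, so the claims above can be checked
numerically; a minimal sketch (the helper name `box_volume` is mine, purely
illustrative):

```python
import numpy as np

def box_volume(edges):
    # The alternating n-form applied to the edge-vectors: the (signed)
    # volume of the box they span, i.e. the determinant of the matrix
    # whose rows are the edges.
    return np.linalg.det(np.array(edges, dtype=float))

# A co-ordinate-aligned box with edges 2, 3 and 4:
print(box_volume([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))    # 24.0

# Halving all three edges shrinks the volume by a factor of eight:
print(box_volume([[1, 0, 0], [0, 1.5, 0], [0, 0, 2]]))  # 3.0

# A shear -- adding a multiple of one edge to another -- changes nothing:
print(box_volume([[2, 0, 0], [1, 3, 0], [0, 0, 4]]))    # 24.0
```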

So our integration's action on a given little box needs to combine the
integrand with the antisymmetric product of the box's edge-vectors; to make
that work, it needs the integrand to be an n-form with n = dimension. As the
space of such n-forms is one-dimensional, the integrand *feels like* a scalar
field (i.e. it'll be represented by one scalar function of position, in any
given system of co-ordinates), matching what we intuitively expect for a
quantity that integrates up to an actual scalar. In three dimensions,
integrating with respect to three co-ordinates x, y and z, our integration
shall use the covector fields dx, dy and dz as gradient basis; this gives us
dx^dy^dz as the single 3-form that serves as basis for our one-dimensional
space of 3-forms. Our integrand shall then be some (actual) scalar function,
f, of position (the one co-ordinate of the integrand, as 3-form, with respect
to this basis) times dx^dy^dz, making the integral look like ∫f.dx^dy^dz,
distinctly reminiscent of the ∫f.dx.dy.dz that we might initially have thought
to write. This is no coincidence.
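The little-box picture translates directly into computation: each box
contributes its mid-box value of f times its co-ordinate volume. A
midpoint-rule sketch over the unit cube (the function name `integrate_boxes`
and the choice of integrand are mine):

```python
import numpy as np

def integrate_boxes(f, n=20):
    # Sum, over n**3 little boxes tiling the unit cube, of the
    # integrand's value at the box's centre times the box's volume h**3.
    h = 1.0 / n
    mids = (np.arange(n) + 0.5) * h
    x, y, z = np.meshgrid(mids, mids, mids, indexing="ij")
    return np.sum(f(x, y, z)) * h**3

# f(x, y, z) = x + y + z integrates to 3/2 over the unit cube; the
# midpoint rule gets this (all but) exactly, since f is linear:
print(integrate_boxes(lambda x, y, z: x + y + z))  # ~1.5
```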

One quantity that we normally expect to be able to associate with a region of 3-space is its volume; we can naturally expect to generalise that to other dimensions. Indeed, in a 2-surface we'll expect to get area, and along a line, length; I'll return to that, but let's first consider the general case. When we have a box with perpendicular sides, we expect its volume to be the product of the lengths of those edges; which clearly depends on our metric (which determines lengths), so we must expect it to be implicated here.

We usually (when we can) pick co-ordinates in which our metric is nice and
tidy – e.g., in three dimensions, we'll make it dx×dx + dy×dy + dz×dz, and we
might throw in −dt×dt if we add a time-like fourth dimension – and this makes
the volume of a co-ordinate-aligned box equal to the product of the lengths of
its sides, with each of those lengths being (give or take a space-like vs
time-like distinction) just the change, along each edge, in the one
co-ordinate that varies along it, at least (for the smooth manifold case) when
each such change is small enough to avoid any complications curvature may be
throwing into the mix. This is just the product of the scalars resulting from
each co-ordinate's gradient (dx, dy, dz or dt) applied to the edge along which
that co-ordinate varies; as we're here working in nice orthonormal
co-ordinates, each co-ordinate gradient measures length along the edge to
which it's applied, and would give zero if contracted with any of the other
sides. As a result, the antisymmetric product μ of the orthonormal co-ordinate
gradients (dx^dy^dz, possibly with a ^dt thrown in for 4-D) is the n-form that
*eats* the edges of a box to give us its volume; and this remains true for a
more general box (edges not aligned with the co-ordinates). Since our metric,
g, is unit-diagonal in these co-ordinates, its determinant is just ±1 times
the basis member these co-ordinates give to quantities of its
(one-dimensional) kind, which is the tensor product μ×μ in these co-ordinates;
so we can think of μ as √(±det(g)).

When we use other co-ordinates, ±det(g)'s co-ordinate with respect to the
basis they induce shall be different, but that co-ordinate's square root times
the basis vector the co-ordinates give shall still be the same n-form we got
from our orthonormal co-ordinates. The transformation of co-ordinates from
orthonormal is represented by the matrix of partial derivatives of the
orthonormal co-ordinates with respect to the new; using the gradients of the
new as basis of covectors, the rows of this matrix are its representation of
the orthonormal co-ordinates' gradient covectors; the determinant of this
matrix, the Jacobian used to convert integration, is just the antisymmetric
product of the orthonormal covector basis, expressed in the new co-ordinates;
which is, indeed, the square root of ±det(g). I refer to this dimension-form
as the *measure* on the manifold, induced from the metric; it formally encodes
the machinery that performs integration in the usual sense, integrating a
scalar field to get a scalar answer. When we integrate the measure over a
region, tacitly integrating the constant scalar function 1, we get the
*amount* of our manifold in the region; on a one-manifold, this is a length,
on a two-manifold an area, on a three-manifold a volume; and so on. Since μ is
what our orthonormal co-ordinates express as dx^dy^dz (or similar), it now
makes sense to write the integral of a scalar field f as ∫f.μ, with no d in
sight.
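As a concrete check on √(±det(g)) being the Jacobian: in spherical
co-ordinates, the flat metric dx×dx + dy×dy + dz×dz pulls back to dr×dr +
r².dθ×dθ + r².sin²θ.dφ×dφ, whose determinant's square root is the familiar
r².sinθ. A numerical sketch (the function name is mine):

```python
import numpy as np

def jacobian_spherical(r, theta, phi):
    # Matrix of partial derivatives of the orthonormal co-ordinates
    # x, y, z with respect to the new co-ordinates r, theta, phi.
    st, ct = np.sin(theta), np.cos(theta)
    sp, cp = np.sin(phi), np.cos(phi)
    return np.array([
        [st * cp, r * ct * cp, -r * st * sp],
        [st * sp, r * ct * sp,  r * st * cp],
        [ct,      -r * st,      0.0],
    ])

r, theta, phi = 2.0, 0.7, 1.3
J = jacobian_spherical(r, theta, phi)

# Pulling the metric back through J gives its matrix in the new
# co-ordinates; det(g) = det(J)**2, so sqrt(det(g)) is the Jacobian:
g = J.T @ J
print(np.sqrt(np.linalg.det(g)), r**2 * np.sin(theta))  # the two agree
```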

The antisymmetrising differential operator, d^, annihilates all dimension-forms, so d^μ = 0. When using the differential operator D for which the metric g is constant, Dg = 0, we also get Dμ = 0. So we definitely don't want any d in our expression of the integral.

If we have a sub-manifold (i.e. an embedding of a lower-dimensional manifold
in the primary manifold) S of a manifold M, there's a natural mapping from S's
tangents, at any point of S, to M's tangents at that point; each trajectory
through the point in S is a trajectory in M also, so its tangent considered as
a trajectory in S corresponds to its tangent considered as a trajectory in M.
This, in turn, induces a mapping from (for each natural n) n-forms on the
gradients (i.e. antisymmetric products of tangents) of S to those of M; while
letting a metric on M act on tangents of S, thus inducing a metric on S, from
which we can induce a measure on S. Let s be the dimension of S and m be the
dimension of M; with s < m, the measure on S is an s-form on S's tangents,
which has a natural inverse in the s-forms on S's gradients, which we can
naturally embed in the s-forms on M's gradients and contract with M's metric
to get an (m−s)-form on M's tangents that we can think of as M's measure
divided by S's. This is only defined in M and on S (i.e. it's an M-tensor
defined only at the points of M that happen to be in S); and contracting it
with any tangent to S shall give zero, so we can think of it as the natural
generalisation of the *normal vector* to a two-dimensional surface in three
dimensions (where it's actually a covector; the vector you thought of as
normal to the surface is the metric's inverse's image of it).
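In the familiar 3-D case this can be made concrete: contracting the measure μ
= dx^dy^dz with two tangents u, v to the surface leaves a covector n = μ(u, v,
·), which in orthonormal co-ordinates has the same components as the cross
product; a sketch (helper name and example vectors mine):

```python
import numpy as np

def normal_covector(u, v):
    # Contract the 3-D measure (the alternating 3-form, i.e. det) with
    # two tangents to the surface: n(w) = det([u, v, w]).
    e = np.eye(3)
    return np.array([np.linalg.det(np.array([u, v, e[i]]))
                     for i in range(3)])

u = np.array([1.0, 0.0, 2.0])  # two tangents spanning the surface
v = np.array([0.0, 1.0, 3.0])
n = normal_covector(u, v)

print(n)             # same components as np.cross(u, v)
print(n @ u, n @ v)  # both (numerically) zero: n annihilates tangents
```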

In the case of the one-dimensional line, as sub-manifold, the metric on the
line *is* its own determinant; and that metric is fully specified by its
action on one non-zero tangent to the line (as all others are just scalings of
the chosen one). Let that vector be t; then (as it's non-zero) there is a
covector u for which u·t = 1; and our line is one-dimensional, so this u
is unique. Our embedding of the line in the manifold gives us a natural
embedding of t as a tangent to the manifold, as well as to the line, from which
we can get g(t, t), the squared length of t; then the metric on the line is just
g(t, t).u×u. Our measure on the line is then just u scaled by a square root of
±g(t, t). When our line is represented as a function (: f :{reals}), t is just
f'(s) at f(s) and u is ds, giving √(±g(f'(s), f'(s))).ds as our measure, which
should be instantly recognisable as what you
integrate along a line to get its length. (So we've now formalised a
justification for this natural intuition.)
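As a numerical sanity check (function name and example mine): midpoint-rule
integration of √(±g(f'(s), f'(s))).ds along a circle of radius 2 in the
Euclidean plane recovers the circumference 4π:

```python
import numpy as np

def line_length(f_prime, a, b, n=1000):
    # Midpoint-rule integral of sqrt(g(f'(s), f'(s))) ds, with g the
    # ordinary Euclidean metric (the plain dot product).
    h = (b - a) / n
    s = a + (np.arange(n) + 0.5) * h
    return sum(np.sqrt(np.dot(f_prime(t), f_prime(t))) for t in s) * h

# f(s) = (2 cos s, 2 sin s) traces a circle of radius 2, so
# f'(s) = (-2 sin s, 2 cos s) and the length should be 4*pi:
length = line_length(lambda s: np.array([-2 * np.sin(s), 2 * np.cos(s)]),
                     0.0, 2 * np.pi)
print(length, 4 * np.pi)  # the two agree
```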

When we picture a region of space-time in terms of some spatial co-ordinates
and a time co-ordinate, we can regard each instant of time as a slice through
space-time; this leads to the familiar perception of the universe, in which we
experience each moment successively, as that slice moves forward in time. The
restriction of space-time's metric to such slices gives us a spatial metric in
each slice; this implies a measure on the slice. We can *divide* the measure
of space-time by this to get a gradient field that's just some scaling times
the gradient of our time co-ordinate; that scaling shall be near 1 in so far
as our co-ordinates keep the measures close to the products of the gradients
of the co-ordinates, both in space-time and in the spatial slice.

We can likewise, from a system of co-ordinates, select some co-ordinates to
fix on a sub-manifold while others vary; and, by considering the diverse
sub-manifolds resulting from different choices of the fixed co-ordinates, obtain
a family of sub-manifolds parameterised by the co-ordinates fixed on each. Each
sub-manifold in this family gives us a measure and thus a *normal* (m−s)-form
as before; but now this form is defined on each sub-manifold in the family, so
we obtain it at each point within the range of our co-ordinates.

Any tensor field that it's interesting to integrate over space-time, or certain (particularly: families of) sub-manifolds of it, might be better discussed in terms of the combination of it with the measure that's what we actually integrate (any time the integral's expression involves a factor of √−det(g), that's a hint that this may be happening).

In the case of integrating over a family of sub-manifolds (such that the sub-manifold measure can be expressed as a tensor field on M), it may be interesting to use the s-sub-manifold's measure to turn a rank with n G-factors into one with s−n T-factors in their place, so as to use the natural embedding of S's tangents in M's tangent space to turn the S-tensor into an M-tensor, after which we can use M's measure to turn those s−n T-factors into n+m−s G-factors. Aside from adding m−s G-factors to the rank we had on the sub-manifold, to get the tensor's rank on the manifold, this works like using the metric to map G-factors through T-factors in extracting a tensor field on the manifold from one on the family of sub-manifolds. So anti-symmetric tensor fields that appear to us (in our 4-D view of the world) to have rank n might actually have rank n+m−4 in the m-dimensional physics a full theory would describe. If we do this to something that looks like a scalar field on the sub-manifold, we get an antisymmetric tensor with m−s factors of G (an (m−s)-form) in it, Hodge dual to an s-form; if the sub-manifold is our 4-space view of space-time, this gives us an (m−4)-form.

A possibly interesting project: survey the quantities we think we understand and ask: which have the same rank as in 4-D, even if space-time has higher dimension, and which in fact have rank dim−n for some n that doesn't change with dimension; i.e.,

- which are rank-2 and which are rank-(dim−2) ? These happen to coincide for dim = 4, so may be hard to tell apart; of the electromagnetic field and its Hodge dual, one is rank 2, the other is rank dim−2.
- are any of our scalar fields actually of rank dim−4 ? If so, they may be worth rank-shifting with the measure to rank 4 of the dual kind (and taking care not to think of this as rank dim); the converse is in principle possible, but we tend to select a scalar field to represent dim-forms anyway, taking the measure as implicit.
- there are, likewise, covector or vector fields that we should instead represent by a (dim−1)-form; indeed, the charge-current density field on space-time is really one of these, represented by a vector field that should be implicitly scaled by the measure to get the field that d^ maps the E-M field's Hodge dual to, so that d^ then maps it to the zero tensor of the measure's rank. Again the converse is possible, a rank-3 tensor we should be representing as a vector or covector; as for scalar fields, though, we're apt to represent these as vector fields, so the more usual misrepresentation is of a covector field, such as a gradient, by a vector.

What other quantities that we think of as falling on one side of such a distinction actually fall on the other ?

Written by Eddy.