The following is at least largely a transcript (into my notation and forms of description) of a matching discussion in From Calculus to Cohomology, subtitled De Rham cohomology and characteristic classes, by Ib Madsen and Jørgen Tornehave, of the University of Aarhus (published by CUP in 1997, ISBN 0-521-58956-8 / 0-521-58059-5). This is comfortably orthodox, and orthodoxly engages in the notational horrors that caused me to be glad of the excuse to invent my own notation; in reading it, I routinely had to flip back and forth to discover how to interpret particular typographic conventional arrangements in relation to the mathematical entities whose names appeared liberally splattered about formulaic expressions. In one or two places, I tripped over portions I suspect of being wrong, through the authors' own confusion at their own ad-hoc notations, though I may instead merely be confused by the same into mistaking an error where none is: but a notation which makes such mistakes easy is not well chosen.

De Rham Cohomology

The topological structure of a space can be characterized in terms of an algebraic analysis of a triangulation (sub-division into simplices) of the space; this characterization is called homology and, roughly speaking, identifies the loops, spherical shells and their analogues in higher dimension which it is not possible to continuously deform down to a point. It turns out that there is a matching, dual, description of the topology which looks at which smooth functions satisfy analogues of the 3-dimensional conditions: ∇^v = 0 yet v is not ∇f for any scalar field f; or ∇·v = 0 yet v is not ∇^u for any vector field u. This description is called cohomology and depends on alternating (a.k.a. antisymmetric) forms and a differentiation on them. Conveniently, the description doesn't care about the domain of the functions being a vector space; it works fine for the most general context that is enough like a vector space to allow us to do differentiation as if in a vector space.

Antisymmetric or Alternating Forms

I'll take as given a real-linear space V, i.e. we have a cancellable addition defined on V and: the natural embedding of {positive naturals} as scalings of V via repeated addition,

extends naturally to a multiplicative action of {reals} on V,

(: (V: r.v ←v |V) ←r :{reals}),
with (V: r.v ←v |V) = scale(r) whenever r is natural,

which distributes over the additions on V and {reals}, i.e.

(r+s).v = r.v +s.v and r.(u+v) = r.u +r.v

From this the tensor algebra will let us build the assorted spaces dual(V) = {linear maps ({reals}:|V)}, bulk(⊗, ({V}:|n)) and bulk(⊗, ({dual(V)}:|n)) for natural n. A map, f, between two real-linear spaces is described as linear precisely if it respects addition and scaling; i.e. f(u+r.v) = f(u)+r.f(v).

[Here ({x}:|n) is the unique list of length n in which every entry is x; and bulk(*, [a,b,…,z]) = a*b*…*z extends any associative binary operator * to its bulk action on lists; ⊗ is the tensor combiner on linear spaces, bulk(⊗, [A,…,Z]) = {bulk(×, [a,…,z]): a in A, …, z in Z} with × being the tensor combiner on members of linear spaces.]

Antisymmetric Forms

We can also define the permutation operator τ which, officially, takes a permutation, (n|s|n), and a list, (:H|n), whose entries are linear spaces, and yields a linear isomorphism (bulk(⊗, H∘s): τ(s,H) |bulk(⊗,H)) obtained by linearly extending (: bulk(×, h∘s) ← bulk(×,h) :), with each h implicitly constrained to satisfy h(i) in H(i) for each i in n. [Elsewhere, I define an extended form of τ which encodes contraction (i.e. trace) as well as permutation; and often use an abbreviated form for τ in which τ[a,…,z] stands for τ([a,…,z], [A,…,Z]) for arbitrary [A,…,Z] of the same length as [a,…,z].] For the sake of brevity, let Σ(n) = {permutations of n} = {monic mappings (n||n)}. Using τ I can define, for each natural n and linear space V,

(bulk(⊗, ({V}:|n)): antisymm({V}:|n) |bulk(⊗, ({V}:|n)))
= average(: sign(s).τ(s,({V}:|n)) ←s |Σ(n))

which can be obtained by linear extension from

(: average(: sign(s).bulk(×, h∘s) ←(n|s|n) |Σ(n)) ← bulk(×, (V:h|n)) :)

in which average(f) is just sum(f) = bulk(+,f) divided by the number of members of (:f|). The number of members of Σ(n) is just n!, defined for natural n by 0! = 1, (1+i)! = i!.(1+i). The function ({±1}: sign |Σ(n)) is defined to be the signature group homomorphism [from permutations, under their natural composition, to the two-member multiplicative group {±1}] which maps to −1 every permutation which simply swaps two indices and acts as the identity on all others; it consequently maps to +1 every composite of an even number of such swaps and to −1 every composite of an odd number of such swaps.
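
Those inclined to poke these definitions with a computer can realise sign and antisymm directly; here is a minimal numerical sketch (Python with numpy – my choice of tooling, and the helper names merely echo the text's):

    from itertools import permutations
    import numpy as np

    def sign(s):
        # Parity of the number of inversions: each swap of two entries
        # negates the signature.
        inversions = sum(1 for i in range(len(s))
                         for j in range(i + 1, len(s)) if s[i] > s[j])
        return -1 if inversions % 2 else +1

    def antisymm(T):
        # T is an n-way numpy array over V = R^dim; tau(s, ({V}:|n))
        # acts on it by permuting its axes.
        perms = list(permutations(range(T.ndim)))
        return sum(sign(s) * np.transpose(T, s) for s in perms) / len(perms)

    T = np.random.rand(4, 4, 4)          # a random member of bulk(⊗, ({V}:|3))
    A = antisymm(T)
    assert np.allclose(antisymm(A), A)   # antisymm acts as identity on its outputs
    assert np.allclose(np.transpose(A, (1, 0, 2)), -A)  # swapping two slots negates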

It takes only modest effort to show that antisymm({V}:|n) is linear and acts as the identity on its outputs, which form a linear sub-space of bulk(⊗,({V}:|n)). This sub-space is known as the space of wholly antisymmetric n-way tensor products of V; for reasons which will emerge below, the corresponding (|antisymm({dual(V)}:|n):) is known as the space of antisymmetric n-forms on V.

Alternating forms

Inductively define the space of alternating n-forms on V, Alt(n,V), for natural n, by:

Alt(0,V) = {reals}; and
Alt(1+n,V) = {linear (Alt(n,V): g |V): g(x,…,z) is zero unless (V: [x,…,z] |1+n) is monic}

Note that, inductively, whenever [x,…,z] is of length n and g is in Alt(n,V), g(x,…,z) is real; and 0 is the real zero (in general, zero is an additive identity in each linear space). Standard isomorphisms of the tensor algebra duly imply that each Alt(n,V) is subsumed within bulk(⊗, ({dual(V)}:|n)), which also subsumes (|antisymm({dual(V)}:|n):). At the same time, any member of the latter is trivially an alternating n-form, thanks to its antisymmetry.

The need to discuss lists and multi-input functions makes it highly convenient to deploy the mapping (:supply:{lists}) defined inductively by:

supply([], f) = f; and
supply(h, f) = supply((: h(1+i) ←i :), f(h(0))) for any non-empty list h

which is faithfully caricatured as supply([a,…,z], f) = f(a,…,z).

Consider any g in Alt(2,V) and u, v in V; since neither [u,u] nor [v,v] is monic, we have g(u,u) = 0 = g(v,v) and g is linear in both arguments, so 0 = g(u+v,u+v) = g(u,v) +g(v,u) so we can infer that g(u,v) = −g(v,u) for any u, v in V. Thus every alternating 2-form is antisymmetric. (As antisymmetric bilinear forms, they have some special features all their own.) For 0-forms and 1-forms, the only possible permutation of the inputs is the identity; so permuting the no inputs or the single input makes no difference, making them fatuously symmetric; and there are no swaps among the permutations of the inputs, making them fatuously also antisymmetric.

Now, consider any given (V:f|n) and distinct i, j in n; for any u, v in V we can define F(f,u,v) to map u←i, v←j and otherwise agree with f. For any g in Alt(n,V) we can thus define G = (: (: supply(F(f,u,v), g) ←u :V) ←v :V). Now, if u = v then F(f,u,v) has two equal entries, hence isn't monic, so supply(F(f,u,v),g) is zero, so G(u,v) = 0. Thus G is in Alt(2,V), so G(u,v) = −G(v,u). By considering the case u = f(i), v = f(j) we can infer that supply(f, g) = −supply(F(f,f(j),f(i)), g) i.e. swapping any two entries of [a,…,z] negates g(a,…,z). We can build up any permutation of f as a composite of such swaps, so we can infer that supply(f∘s, g) = sign(s).supply(f, g) for every permutation s. This, in turn, suffices to establish g as a fixed-point of antisymm({dual(V)}:|n), hence a member of (|antisymm({dual(V)}:|n):).

We can thus infer that Alt(n,V) = (|antisymm({dual(V)}:|n):).

Before we move on to the antisymmetric product, let's notice a less stringent condition that implies alternating; and a stronger property that alternating implies.

The definition above requires g(x,…,z) to be zero if any two members of [x,…,z] are equal; in fact, it suffices to require g(x,…,z) to be zero if any two adjacent members of [x,…,z] are equal. Indeed, if g only satisfies this last condition, we can apply the same F/G argument as above to infer that swapping any two adjacent inputs to g will negate the result; but, in fact, any permutation can be written as a composite of swaps of adjacent entries, so this suffices to imply the same antisymmetry property as the F/G argument; hence g is antisymmetric, whence alternating.

The definition above also only tells us that g(x,…,z) will be zero if two members of [x,…,z] are equal. However, suppose merely that [x,…,z] is linearly dependent; without loss of generality (since permuting the list at worst changes g(x,…,z)'s sign), that is to say we have [x,…,y,z] with z = a.x+…+b.y for some scalars a,…,b; but g is linear in each input, so g(x,…,y,z) is then a.g(x,…,y,x)+…+b.g(x,…,y,y) in which each term has, in z's position, one of the earlier inputs; g thus gets two equal inputs and yields zero output in each term; so the sum, g(x,…,y,z), is zero. Thus g(x,…,z) is zero whenever there is a linear dependency among [x,…,z]. This, in particular, implies that Alt(n, V) = {zero} whenever n > dim(V).
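
Continuing the numerical sketch (the helpers sign and antisymm from above are assumed), both halves of this are easy to watch happening:

    def supply_list(A, vectors):
        # Contract the tensor A with a list of vectors, one per slot,
        # echoing the text's supply([x,…,z], g).
        for v in vectors:
            A = np.tensordot(A, v, axes=([0], [0]))
        return A

    A3 = antisymm(np.random.rand(3, 3, 3))    # a 3-form's worth of tensor on R^3
    u, v = np.random.rand(3), np.random.rand(3)
    w = 2.0 * u - 5.0 * v                     # a linear dependency among [u, v, w]
    assert abs(supply_list(A3, [u, v, w])) < 1e-10

    # With n = 3 slots on a dim = 2 space, antisymmetrising kills everything:
    assert np.allclose(antisymm(np.random.rand(2, 2, 2)), 0)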

The antisymmetric product

Suppose, now, we have two alternating forms on V, w in Alt(n,V) and u in Alt(m,V). We can construct u^w in Alt(m+n,V) as

u^w = antisymm(({dual(V)}:|m+n), u×w)
= average(: sign(s).τ(s, ({dual(V)}:|m+n), u×w) ←s |Σ(m+n))

Now, w×u is just τ(r, u×w) with r being the cyclic permutation which unites (: i+n ←i |m) with (n| i ←i+m :), so we have w^u = sign(r).u^w. For this we need to know the signatures of cyclic permutations. It is noteworthy that a cyclic permutation through one step on n+m, repeated n (or m) times will yield r; so we can determine sign(r) from the signatures of such one-step cycles.

When n+m is odd, one of n, m is odd and the other is even; r is the result of repeating a forward one-step cycle m times or a backward one n times; one of these is an even repetition, so necessarily yields sign(r) = +1, the even power of the one-step's sign; from this we can infer that a one-step cycle of odd length also has sign +1 (since repeating it an odd number of times yields r, with sign +1). For an even-length cycle, either n and m are both even (in which case r is an even repeat of the one-step cycle) yielding sign(r) = +1, or they are both odd. For the latter case, for n+m > 2, note that [1,2,…,n+m−2,0] is a one-step odd-length cycle, so has sign +1; swap positions n+m−2 (where the 0 of that cycle is) and n+m−1 (which is unchanged by that cycle) to obtain [1,2,…,n+m−2,n+m−1,0], which is our even-length one-step cycle; this must have sign −1, since it composes a swap with a sign +1 cycle; the length 2 one-step cycle is [1,0] and has sign −1, hence every one-step even-length cycle has sign −1. To summarize: every odd-length cycle has signature +1; a cycle of even length has signature +1 if it is through an even number of steps, else −1.
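
This summary is easy to spot-check mechanically (assuming the sign helper of the earlier sketch):

    def cycle(length, steps):
        # The cyclic permutation of {0, …, length−1} through the given
        # number of steps, as the list (: (i+steps) mod length ←i :).
        return tuple((i + steps) % length for i in range(length))

    for length in range(2, 9):
        for steps in range(1, length):
            expected = +1 if (length % 2 or steps % 2 == 0) else -1
            assert sign(cycle(length, steps)) == expected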

Thus if either n or m is even, sign(r) is +1; otherwise, both are odd and r is an even-length cycle through an odd number of steps, so sign(r) is −1. Thus sign(r) = power(n.m, −1) is −1 precisely if both n and m are odd; otherwise (if either n or m is even) it is +1.

Thus u^w = power(n.m, −1).w^u when one of u, w is in Alt(n,V) and the other is in Alt(m,V). Such symmetry except for sign is described as graded; ^ is said to be graded commutative (sometimes: anti-commutative, but that potentially conflicts with another meaning). Combined with associativity (below) this will make (: Alt(i,V) ←i |{naturals}) a graded algebra (but you needn't care about this unless you're familiar with orthodox jargon).

Associativity

Next, let us see whether ^ is associative; are u^(w^z) and (u^w)^z necessarily equal ? For u in Alt(m,V), w in Alt(n,V) and z in Alt(p,V): in the former we are obliged to consider composing permutations of m+n+p with m-shifted permutations of n+p; in the latter, we must compose permutations of m+n+p with permutations of m+n. In each case, the shorter permutation is simply a restricted permutation of m+n+p and we can, for any (L:f|Σ(m+n+p)) with L a linear space, safely assert that, for each (restricted) permutation (m+n+p|r|m+n+p),

average(: sign(s).f(s) ←s :Σ(m+n+p))
= average(: sign(s∘r).f(s∘r) ←s :Σ(m+n+p))
= sign(r).average(: sign(s).f(s∘r) ←s :Σ(m+n+p))

since (: s∘r ←s :) is merely a permutation of Σ(m+n+p), so doesn't affect summation over it (it just shuffles the order of the terms). The same naturally applies for r∘s in place of s∘r. Thus, with S = ({dual(V)}:|m+n+p), the list of length m+n+p whose entries are all dual(V),

u^(w^z)
= average(: sign(s).τ(s, S, u×(w^z)) ←s |Σ(m+n+p))
= average(: sign(s).τ(s, S, u×average(: sign(r).τ(r, ({dual(V)}:|n+p), w×z) ←r |Σ(n+p))) ←s |Σ(m+n+p))

and we can pull u inside the inner τ by shunting r – i.e. replacing it with the union of (m:i←i:) and (: m+r(i)←m+i :m+n+p) – which I'll refer to as shunt(m,r) for now; this has the same signature as r; composing the τ of s with the τ of our shunted r yields the τ of their composite; so we have

= average(: average(: sign(s).sign(shunt(m,r)).τ(shunt(m,r)∘s, S, u×w×z) ←r |Σ(n+p)) ←s |Σ(m+n+p))

and reversing the order of the averagings has no effect, so this is

= average(: sign(shunt(m,r)).average(: sign(s).τ(shunt(m,r)∘s, S, u×w×z) ←s |Σ(m+n+p)) ←r |Σ(n+p))

which is of the required form for my preliminary result (in its r∘s variant), with shunt(m,r) in place of the earlier r, yielding

= average(: average(: sign(s).τ(s, S, u×w×z) ←s |Σ(m+n+p)) ←r |Σ(n+p))

in which what we're averaging over r in Σ(n+p) doesn't depend on r; the average of a constant (over a non-empty collection) is just that constant, so we obtain

= average(: sign(s).τ(s, S, u×w×z) ←s |Σ(m+n+p))
= antisymm(S, u×w×z)

Likewise (as should now be no surprise),

(u^w)^z
= average(: sign(s).τ(s, S, (u^w)×z) ←s |Σ(m+n+p))
= average(: sign(s).τ(s, S, average(: sign(r).τ(r, ({dual(V)}:|m+n), u×w) ←r |Σ(m+n))×z) ←s |Σ(m+n+p))

and again we can pull z inside the inner τ by replacing r with extend(p,r), the union of r and (: m+n+i ← m+n+i :m+n+p), which just acts as the identity in the final p items, yielding (once we've swapped orders of averaging)

= average(: sign(extend(p,r)).average(: sign(s).τ(extend(p,r)∘s, S, u×w×z) ←s |Σ(m+n+p)) ←r |Σ(m+n))
= average(: average(: sign(s).τ(s, S, u×w×z) ←s |Σ(m+n+p)) ←r |Σ(m+n))

and, again, the outer average is of a constant, so yields the inner one,

= average(: sign(s).τ(s, S, u×w×z) ←s |Σ(m+n+p))
= antisymm(S, u×w×z)

as before, so

= u^(w^z)

Thus ^ is associative and we can write the above three-way product as u^w^z without needing to bracket it; and we can apply bulk(^) to lists of alternating tensors on V (or, indeed, lists of tensors of form bulk(×, ({dual(V)}:h|)) with each h a list; ^ will antisymmetrize each in the course of combining them). Note that bulk(^) = antisymm∘bulk(×).
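
Here bulk(^) = antisymm∘bulk(×) is concrete enough to test numerically; a sketch with the earlier numpy helpers (wedge and tensor_product are my names for ^ and the bulk of ×):

    def tensor_product(*factors):
        out = np.array(1.0)
        for f in factors:
            out = np.tensordot(out, f, axes=0)    # the × of the text
        return out

    def wedge(u, w):
        # u^w = antisymm(u×w), with the average normalisation, so that
        # bulk(^) = antisymm∘bulk(×) holds on the nose.
        return antisymm(tensor_product(u, w))

    a, b, c = (np.random.rand(4) for _ in range(3))
    assert np.allclose(wedge(wedge(a, b), c), wedge(a, wedge(b, c)))
    assert np.allclose(wedge(wedge(a, b), c), antisymm(tensor_product(a, b, c)))
    assert np.allclose(wedge(a, b), -wedge(b, a))        # n = m = 1: sign −1
    two_form = wedge(a, b)
    assert np.allclose(wedge(two_form, c), wedge(c, two_form))  # 2.1 is even: +1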

Alt's symmetric chain

Suppose V is finite-dimensional and we're given a basis (dual(V):q|dim) of dual(V) = Alt(1,V), with dim natural. This trivially gives us, for each natural n, a basis of bulk(⊗,({dual(V)}:|n)) whose elements are of form bulk(×,q∘a) for some list (dim:a|n). We may thus infer that the sub-space Alt(n,V) is spanned by members of form bulk(^,q∘a) for lists (dim:a|n). Now, if a is not monic, nor is q∘a, hence bulk(^,q∘a) will be zero; thus, if n>dim, we necessarily have Alt(n,V) spanned by {zero}, hence

Alt(n,V) = {zero} whenever n > dim

Furthermore, when a is monic, composing it after a permutation of n will merely scale bulk(^,q∘a) by ±1, so the permuted version won't contribute anything to the span that the original didn't. Every monic (dim:a|n) is a permutation of one that's strictly increasing, i.e. i<j in n implies a(i)<a(j) in dim, so we have Alt(n,V) spanned by {bulk(^,q∘a): (dim:a|n) is strictly increasing}, which is linearly independent (because q is), so the dimension of Alt(n,V) is just the number of distinct strictly increasing (dim:a|n), which is the number of ways one may choose a subset with n members from a set with dim members; which is just chose(dim,n) = dim!/(dim−n)!/n! where i! denotes the factorial of i, bulk(×,[1,…,i]). In any case, the number of ways one can choose n members to include in our subset, from a set with dim members, is identically the number of ways one can choose the dim−n members to exclude from the sub-set. This implies that Alt(dim−n,V) and Alt(n,V) have the same dimension, hence are isomorphic:

dim(Alt(n,V)) = chose(dim,n) = chose(dim,dim−n) = dim(Alt(dim−n,V))
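
The counting is the familiar one; in Python, itertools.combinations enumerates exactly the strictly increasing lists and math.comb is the orthodox chose (a small sketch, nothing more):

    from itertools import combinations
    from math import comb

    dim = 5
    for n in range(dim + 1):
        increasing = list(combinations(range(dim), n))   # the strictly increasing (dim: a |n)
        assert len(increasing) == comb(dim, n) == comb(dim, dim - n)
    # and there are none at all once n > dim:
    assert list(combinations(range(dim), dim + 1)) == []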

If we now apply this to n = i+dim, we get Alt(−i,V) isomorphic to Alt(i+dim,V), which is {zero} for i>0, i.e. Alt(n,V) = {zero} for n<0. This is clearly some kind of nonsense, but it is intermittently useful to be able to have, at least formally, Alt(−1,V) = {zero} at our disposal as a nominal zero-dimensional anchor to various inductive definitions and proofs (and to not have this depend on V being finite-dimensional). Thus I shall formally define Alt(−1) to map any linear space (or linear map – see next section) to {zero}.

If we try to wind our inductive definition of Alt(1+n,V) backwards with 1+n = 0, we get

Alt(0,V) = {linear (Alt(−1,V): g |V): g(x,…,z) is zero unless (V: [x,…,z] :0) is monic}

in which it makes sense to read g(x,…,z) as supply([x,…,z],g); and (V:[x,…,z]:0) can only be [], for which supply([],g) = g; so the condition on g reads: g is zero unless [] is monic. Now, [] is monic (fatuously), so the condition imposes no restraint on g and we're left with

Alt(0,V) = {linear (Alt(−1,V): g |V)} = {linear ({zero}: g |V)}

There is, in fact, only one linear map ({zero}:|V), the zero linear map, so the formal identification of Alt(−1,V) with {zero} isn't compatible with the inductive definition of Alt(n,V) for natural n. This is no big issue, given that −1 isn't natural.

Extending each Alt(n) to a functor

Suppose we have linear (W:f|V) and look at how it interacts with Alt(n,W) and Alt(n,V). When we look at any Alt(1+n,W), we see that its members are linear maps from W, so can naturally be composed after f to yield linear maps from V. Doing this to g in Alt(1,W), we obtain linear ({reals}: g∘f |V) which is trivially in Alt(1,V) since any list of length at most one is trivially monic. So tentatively introduce Alt(1,f) = (Alt(1,V): g∘f ←g |Alt(1,W)). Given (Alt(n,V): Alt(n,f) |Alt(n,W)), we can define

Alt(1+n,f) = (Alt(1+n,V): Alt(n,f)∘g∘f ←g |Alt(1+n,W))

since the outputs of g∘f, for g in Alt(1+n,W), are outputs of g, i.e. members of Alt(n,W), which Alt(n,f) will duly accept as inputs. Backtracking, with n = 0, we need Alt(0,f) to be the identity on reals, so that Alt(n,f)∘g∘f = g∘f will map any given x in V to g(f(x)). Now, for given n, suppose we already know that, for any g in Alt(n,W) and (V:[x,…,z]|n),

Alt(n,f,g)(x,…,z) = g(f(x),…,f(z))

then, inductively, we find that given g in Alt(1+n,W) and (V:[x,y,…,z]|1+n),

Alt(1+n,f,g)(x,y,…,z) = Alt(n,f,g(f(x)))(y,…,z) = g(f(x))(f(y),…,f(z)) = g(f(x),f(y),…,f(z))

and thus induce that this does indeed hold for all n. Note that we can write this as supply(w)∘Alt(n,f) = supply(f∘w) for any (V:w|n). Now consider any g in Alt(1+n,W); we have g(x,…,z) zero unless (W:[x,…,z]:1+n) is monic; when we come to look at Alt(1+n,f,g) we'll need 1+n members of V, i.e. some (V:[x,…,z]|1+n), with

Alt(1+n,f,g)(x,…,z) = g(f(x),…,f(z))

which will be zero unless [f(x),…,f(z)] = f∘[x,…,z] is monic. Now, if [x,…,z] isn't monic, composing f after it won't fix that, so we'll get zero out of Alt(1+n,f,g); thus Alt(1+n,f,g)(x,…,z) is indeed zero unless (V:[x,…,z]|1+n) is monic. This being true for lists of the full length, 1+n, it must also be true for any shorter length of list. Thus we can infer that Alt(1+n,f,g) is in Alt(1+n,V) and, finally, that Alt(1+n,f) is linear (Alt(1+n,V): |Alt(1+n,W)).

In the technical jargon of category theory, Alt(n) is thus a functor; and, since (Alt(n,V):Alt(n,f)|Alt(n,W)) reverses V and W relative to (W:f|V), Alt(n) is described as a contravariant functor. If we have two linear maps (W:f|V) and (V:e|U) we readily enough see that Alt(n,f∘e) = Alt(n,e)∘Alt(n,f), e.g. by considering (U:[x,…,z]|n) and g in Alt(n,U), for which

Alt(n,f∘e,g)(x,…,z)
= g(f(e(x)),…,f(e(z)))
= Alt(n,f,g)(e(x),…,e(z))
= Alt(n,e,Alt(n,f,g))(x,…,z)

So how does Alt(*,f) act on a product ? Suppose we have u in Alt(n,W), v in Alt(m,W), so u^v is in Alt(n+m,W); then, for any (V:w|n+m),

supply(w, Alt(n+m, f, u^v))
= supply(f∘w, u^v)
= average(: sign(s).supply((:f∘w∘s:n), u).supply((:f(w(s(n+i)))←i:m), v) ←s :Σ(m+n))
= average(: sign(s).supply((:w∘s:n), Alt(n,f,u)).supply((:w(s(n+i))←i:m), Alt(m,f,v)) ←s :Σ(m+n))
= supply(w, Alt(n,f,u)^Alt(m,f,v))

whence Alt(n+m, f, u^v) = Alt(n,f,u)^Alt(m,f,v).
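
For linear f, Alt(n,f) is just contraction of every slot with f's matrix, and this product result can be watched happening with the earlier numpy helpers (pullback is my name for Alt(n,f)):

    def pullback(g, F):
        # Alt(n,f): compose every slot of g with the linear map whose
        # matrix is F, i.e. contract each axis in turn against F.
        for _ in range(g.ndim):
            g = np.tensordot(g, F, axes=([0], [0]))
        return g

    F = np.random.rand(4, 3)                      # linear (W:f|V): V = R^3, W = R^4
    u, w = np.random.rand(4), np.random.rand(4)   # 1-forms on W
    assert np.allclose(pullback(wedge(u, w), F),
                       wedge(pullback(u, F), pullback(w, F)))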

Manifold

Now we have our alternating forms, it's time to consider functions which yield them and whose derivatives are amenable to antisymmetrizing. A derivative is a linear map from small variations in the input to small variations in the output; in the tensor algebra, this is equivalent to the tensor product of the output with a gradient on the input – if the input space is linear, a gradient on it is just a member of its dual. For such a derivative to be amenable to antisymmetrization, the output must be in bulk(⊗, G), where G is the space of gradients on the input space; if the output is in Alt(n,V) for some linear space V, this means G must be dual(V). If the input space is linear, this just says that it must be a sub-set of V. However, it's possible to do differentiation of functions from a non-linear space, provided that space looks enough like a linear space. The formalism that characterizes looking enough like a linear space is the smooth manifold, a particular flavour of topological space.

Smooth Manifold

Formally, a topological space is a set equipped with a topology; a topology on S is a collection, T, of sub-sets of S, known as open sub-sets of S, for which any finite intersection of T's members is in T, any union of T's members is in T and both S and empty are in T. Any sub-set, N, of S is described as a neighbourhood of s in N precisely if N subsumes some open sub-set, U, of S with s in U. A topological space is described as Hausdorff (after the chap who proposed this property) precisely if, for any pair of distinct members of the space, there are disjoint open sets in the topology, one containing each; I shall only concern myself with Hausdorff topologies. An open cover of a topological space is a collection of its open sets whose union is the whole space. A function, (S:f|R), between two topological spaces is said to be continuous precisely if, whenever U is open in S, {x in R: f(x) in U} is open in R. A continuous function f whose reverse is also a continuous function is described as a homeomorphism.

There is a natural topology on {reals} induced by their ordering; it comprises every union of members of {{real x: a<x<b}: a, b are real}. This induces a natural topology on every real linear space, generated by declaring each member of its dual to be continuous; the complex numbers are a real linear space, and the thus-induced topology on the complex numbers duly induces a matching topology on all complex linear spaces. An atlas of a topological space, S, is a collection, A, of homeomorphisms from each member of an open cover of S to some linear space. Thus: {(:f|): f in A} is our open cover of S and there is some linear space V for which each f in A is a homeomorphism (V:f:S). The members of A are known as charts of S. A topological manifold is a topological space equipped with an atlas (and satisfying some rather hairy conditions I shan't go into here; loosely speaking, they forbid self-intersection and self-touching).

Now, the reverse of each chart is given to be a continuous map from the linear space to the topological manifold; if we compose one chart after the reverse of another – f∘reverse(g) for f, g in A – we thus obtain a homeomorphism (V::V), albeit this will be empty if (:f|) and (:g|) have empty intersection. When such a composite is non-empty, it's a mapping between linear spaces, and these support differentiation, so we can ask whether the mapping is differentiable, if so whether its derivative is differentiable, and so on; it is said to be smooth if it is differentiable and its derivative is smooth (yes, that's a self-referential definition). The empty mapping is fatuously smooth. An atlas, A, of a topological manifold is described as smooth if f∘reverse(g) is smooth for every f, g in A. A smooth manifold is a topological space equipped with a smooth atlas. Since the atlas covers the whole manifold, and establishes local isomorphisms between open sets of it and a linear space, a smooth manifold looks a lot like a linear space – in fact, it looks enough like a linear space to let us do differentiation.

Every linear space is trivially a smooth manifold – an atlas consisting of just one chart, the identity on the linear space, suffices. Given smooth manifolds N and M, a mapping (N:f:M) is described as smooth precisely if, for every chart s of M and r of N, r∘f∘reverse(s) is smooth as a mapping between linear spaces (albeit many r and s may yield empty composites). When N and M are linear spaces, this definition fatuously coincides with the definition of smoothness of mappings between linear spaces. As for linear spaces, it is trivial to show that any composite of smooth maps among manifolds is smooth.

Gradients and Tangents

A smooth mapping (M:f:{reals}) with (:f|) open is described as a trajectory in M; a smooth mapping ({reals}:f:M) is described as a scalar field on M. For every point, m, in a smooth manifold, M, we can identify the collections

R(m) = {[f,t]: f is a trajectory in M with f(t) = m} and
S(m) = {scalar fields (:s:M) with m in (:s|)}

For any [f,t] in R(m) and s in S(m), we can compose ({reals}: s∘f :{reals}) and note that f(t) = m in (:s|) implies (:s∘f|) is non-empty. Since s and f are smooth, so is s∘f; since it's ({reals}::{reals}) we already know how to differentiate it, so we can ask for (s∘f)'(t), its derivative at t – i.e. at m. This enables us to define equivalences on R(m) and S(m) as follows

[f,t] ~ [g,u] in R(m) precisely if (s∘f)'(t) = (s∘g)'(u) for every s in S(m); and
s ~ r in S(m) precisely if (s∘f)'(t) = (r∘f)'(t) for every [f,t] in R(m)

We can then define T(m) = R(m)/~ to be the collection of equivalence classes in R(m); and G(m) = S(m)/~ to be the collection of equivalence classes in S(m). Now, S(m) is a real linear space and our equivalence on it respects its linear structure, so G(m) is also a real linear space. We have a natural contraction between a member of T(m) and a member of G(m) – it combines a [f,t] in the former with an s in the latter to yield (s∘f)'(t), which (by construction) doesn't depend on which [f,t] or s we chose as members of the members of T(m) and G(m). This embeds T(m) in dual(G(m)); it's possible to show (e.g. by pushing everything through some chart of a neighbourhood of m) that this embedding yields all of dual(G(m)), enabling us to identify T(m) with dual(G(m)) and thereby pull back real linear space structure onto T(m). Indeed, T(m) is isomorphic to the linear space with which our charts identify open subsets of M.

The members of T(m) are called tangents at m – the equivalence class containing [f,t] is the tangent at m = f(t) to the trajectory f and is written f'(t) in T(f(t)). The members of G(m) are called gradients at m – the equivalence class containing s is written ds(m) in G(m) and describes the rate of variation of the scalar field s. Note that the definitive contraction between S and R yields (s∘f)'(t) = ds(f(t))·f'(t), matching the chain rule for derivatives. Orthodoxy also refers to tangents as vectors and to gradients as covectors.

Tensors, Sections, Fields and Forms

We can apply all the usual operations of the tensor algebra to G(m) and T(m) to obtain real linear spaces of tensors of various ranks at m; and we can do the entire construction at every m in M, so that G and T are mappings ({real linear spaces}:|M), justifying denoting them G(m) and T(m). In particular, at each m in M and natural n, we have the spaces Alt(n,T(m)) and Alt(n,G(m)), with Alt(1,T(m)) = G(m), Alt(1,G(m)) = T(m) and Alt(0,G(m)) = {reals} = Alt(0,T(m)); we can then deal with these as tensor ranks in their own right, Alt(n,T) and Alt(n,G) for each natural n.

When M is a linear space, with its identity mapping as the sole chart in its atlas, one can obtain natural isomorphisms, for each m in M, identifying T(m) with M and G(m) with dual(M). More generally: a chart (V:f:M), with (:f|) an open neighbourhood of m, identifies T(m) with V and G(m) with dual(V). This, in turn, identifies each linear space of tensors, obtained from G and T, at m with a matching linear space derived from V and its dual.

For any tensor rank H – i.e. H(m) at each m in M – a mapping (:h:M) is described as a section of H precisely if, for every m in M, h(m) is in H(m). If (K:e:H(m)) is the isomorphism, derived from a chart, identifying H(m) with a fixed linear space, K, for each m in some open sub-set of M, we can compose e with a section h of H to obtain e∘h as a mapping from M to K; if this is smooth at m, h is described as smooth at m; if h is smooth at every m in (:h|) it is described as a smooth section of H. A smooth section of H is also called a tensor field of rank H; as special cases, a section of G is called a gradient field (or covector field), a section of T is called a tangent field (or vector field) and a scalar field is a tensor field of rank {reals}. A tensor field of rank Alt(n,T), for natural n, is known as a (smooth, or differential) n-form; we'll be discussing them a lot hereafter.

Co-ordinates

Each chart in our atlas is a smooth isomorphism between an open sub-set of our smooth manifold, M, and an open sub-set of a vector space. By choosing a basis of that vector space's dual, we can obtain a smooth isomorphism between the vector space and {({reals}:|k)} for some index set k; composing this with our chart gives us a smooth isomorphism between an open sub-set of {({reals}:|k)} and an open sub-set of our manifold. Given such a smooth iso ({({reals}:|k)}: f :M), we can take its transpose F = (: ({reals}: f(m,i) ←m :M) ←i |k) whose outputs are necessarily smooth, hence scalar fields on M; so we can apply d to them and obtain gradient fields (G: dF |k). Because f was iso, dF must be a basis of G. Since our atlas provides an open cover of M, every m in M has a neighbourhood U in which there is some ({smooth ({reals}:|U)}: x |k) for which dx is a basis of G. From this we can infer a dual basis of T and bases for all tensor ranks derived from G and T, throughout our neighbourhood of m.

In particular, this implies that, for any n-form w and any m in M, there is a neighbourhood of m in which we can write w as a sum of terms, each of form h.bulk(^,df) for some scalar field h and list, f, of scalar fields.

The Problem of Differentiation

Now, the gradient construction gives us, for any scalar field (:s:M), a gradient field (: ds(m) ←m :M) = ds which it makes sense to regard as a derivative of s. Our ability to differentiate s depends on the fact that we know what constant means for scalar fields; we can ask whether s(m) and s(p) are equal, for any m and p in (:s|), since both s(m) and s(p) are reals. However, for distinct m and p in M, G(m) and G(p) – though isomorphic – are quite different collections; there is no natural way to address the question of whether a member of G(m) is equal to a member of G(p). The same applies for T and any other rank aside from {reals}, hence we have no natural way to decide whether a tensor field of non-scalar rank is constant; without that, we have no way to define a differentiation.

On the other hand, we can characterize differentiation as a tensor operator satisfying the product rule, coinciding with d on scalar fields, promoting the rank of a general tensor field by one factor of G and mapping every output of d to a symmetric G⊗G field. That does leave some ambiguity – there are many possible differential operators on a smooth manifold – but it does suffice for what we need; in particular, the ambiguity will get washed away by anti-symmetrization. Since any differential operator turns a rank H tensor into a rank G⊗H tensor, and we want to be able to do anti-symmetrization, we need to concern ourselves with ranks H of form bulk(⊗, ({G}:|n)) for natural n; the anti-symmetric sub-spaces of these are the Alt(n,T).

The Alternating Functor

For natural n and a smooth manifold M with tangent bundle T and gradient bundle G, define Ω(n,M) to be the space of n-forms on all of M, i.e. Ω(n,M) = {smooth (:|M) sections of Alt(n,T)}. The antisymmetric product between members of Alt(n,T(p)) and Alt(m,T(p)) – for p in M and natural n, m – induces a product between members of Ω(n,M) and Ω(m,M), namely u^w = (: u(p)^w(p) ←p :M). If we have a smooth mapping from one smooth manifold to another, (N:f|M), we can expect it to induce a mapping (Ω(n,M): Ω(n,f) |Ω(n,N)) for each natural n; so let's see what the details of that must be.

Given smooth (N:f|M), consider any p in M. Each trajectory, h, through p = h(t) yields a trajectory f∘h through f(p) = f(h(t)); thus f induces a linear map Q from M's tangents at p to N's tangents at f(p). Each scalar field, s, in a neighbourhood of f(p) induces a scalar field (: s(f(x)) ←x :) in a neighbourhood of p. Thus f induces a linear map R from N's gradients at f(p) to M's gradients at p. Now, ds(f(p))·Q·h'(t) = ds(f(p))·(f∘h)'(t) = (s∘f∘h)'(t) = d(s∘f)(p)·h'(t) = (R·ds(f(p)))·h'(t), so we can infer that R is just the transpose of Q. Hereafter, I'll refer to Q as ∂f(p), defined by:

∂f(p)·h'(t) = (f∘h)'(t) for every trajectory h in M with h(t) = p

Now, construed as a linear map from M's tangents at p to N's tangents at f(p), ∂f(p) yields Alt(n,∂f(p)) as a linear map from N's n-forms at f(p) to M's n-forms at p, for each natural n; so if we have w in Ω(n,N), we have w(f(p)) as a valid input to Alt(n,∂f(p)), so (: Alt(n,∂f(p))·w(f(p)) ←p |M) is an n-form on M. Thus we can define

Ω(n,f) = (Ω(n,M): (: Alt(n,∂f(p))·w(f(p)) ←p |M) ←w |Ω(n,N))

making Ω, like Alt, a contravariant functor. Indeed, suppose we also have smooth (M:g|L) for some smooth manifold L; then, at each p in L, we obtain

Ω(n, f∘g, w, p)
= Alt(n, ∂(f∘g)(p))·w(f(g(p)))
= Alt(n, ∂g(p))·Alt(n, ∂f(g(p)))·w(f(g(p)))
= Alt(n, ∂g(p))·Ω(n, f, w, g(p))
= Ω(n, g, Ω(n, f, w), p)
= (Ω(n, g)∘Ω(n, f))(w, p), i.e.

Ω(n, f∘g) = Ω(n, g)∘Ω(n, f)

In particular, note that when f is an identity (or a sub-manifold inclusion) ∂f is also an identity, hence so is Alt(n,∂f(p)) for each p, so Ω(n, f) is just (: w∘f ←w :) which is, again, just the relevant identity (or restriction mapping).

Also, for u in Ω(n,N), v in Ω(m,N) and smooth (N:f|M), we have

Ω(n+m,f,u^v)
= (: Alt(n+m, ∂f(p), (u^v)(f(p))) ←p |M)
= (: Alt(n, ∂f(p), u(f(p)))^Alt(m, ∂f(p), v(f(p))) ←p |M)
= Ω(n, f, u)^Ω(m, f, v)

so, like Alt, Ω distributes suitably over ^. Note that Ω(0,f) is just (Ω(0,M): w∘f ←w |Ω(0,N)) and Ω(1,f) is (Ω(1,M): (w∘f)·∂f ←w |Ω(1,N)). In particular, when s is a scalar field on N, w = ds in the latter gives Ω(1,f,ds,p) = ds(f(p))·∂f(p) = d(s∘f)(p), i.e. Ω(1,f,ds) = d(s∘f) = d(Ω(0,f,s)).

The Exterior Derivative

Suppose we have h in Ω(n,M) and a differential operator D on M, with tangent bundle T and gradient bundle G. We can thus obtain D(h) of rank G⊗Alt(n,T), which we can anti-symmetrize to obtain a tensor of rank Alt(1+n,T). As noted above, we can write h as a sum of terms of form f(n).bulk(^,(:df:n)) for some ({scalar fields}:f|1+n). If we differentiate this term, we get D(f(n))×bulk(^,(:df:n)) plus n terms in each of which there appears a D(df(i)) factor for some i in n. From the specification of our differential operator, we know D agrees with d on scalar fields, so D(f(n)) = df(n), and D maps each output of d to a symmetric G⊗G field, so each D(df(i)) is symmetric. When we now anti-symmetrize D(h), the terms with a D(df(i)) in them are annihilated and each f(n).bulk(^,(:df:n)) term in h contributes only ±bulk(^,df) to antisymm(({G}:|1+n), D(h)). [The ± is because I've got df(n)^bulk(^,(:df:n)) = power(n,−1).bulk(^,(:df:n))^df(n) = power(n,−1).bulk(^,df).] This gives us two results.

First, bulk(^,df) doesn't depend on D, so any differential operator will yield the same answer for antisymm(D(h)). We can thus define the exterior derivative on alternating tensor fields, d^, to be

d^h = antisymm(({G}:|1+n), D(h)) for each h in Ω(n,M)

without any ambiguity arising from choice of differential operator D. The second result we obtain from d^h being a sum of terms of form ±bulk(^,df) is that d^(d^h) = zero, since when we express d^h in Ω(1+n,M) as a sum of terms of form f(1+n).bulk(^,(:df:1+n)), we know we can choose f(1+n) = ±1, so constant, in each term; but then df(1+n) = zero, so bulk(^,df) = zero.
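
The engine of d^∘d^ = zero is just the symmetry of second derivatives: anti-symmetrising annihilates every D(df). A sympy spot-check (sympy being my choice of tool, not part of the text's apparatus):

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    f = sp.exp(x) * sp.sin(y) + x * y * z**2              # a scalar field
    coords = (x, y, z)
    hessian = sp.Matrix(3, 3, lambda i, j: sp.diff(f, coords[i], coords[j]))
    # D(df) is a symmetric G⊗G field: its antisymmetric part vanishes,
    # which is d^(d^f) = zero; in 3-D vector language, curl(grad f) = 0.
    assert (hessian - hessian.T).applyfunc(sp.simplify) == sp.zeros(3, 3)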

Next we must look at d^'s action on products. Suppose we have u in Ω(n,M) and v in Ω(m,M); in a neighbourhood of any point p in M, we can express u as a sum of terms of form f(n).bulk(^,(:df:n)) and v as a sum of terms of form g(m).bulk(^,(:dg:m)), so that u^v is a sum of terms of form f(n).g(m).bulk(^,(:df:n))^bulk(^,(:dg:m)) which d^ will map to

d(f(n).g(m))^bulk(^,(:df:n))^bulk(^,(:dg:m))
= d(f(n))^bulk(^,(:df:n))^g(m).bulk(^,(:dg:m)) + f(n).d(g(m))^bulk(^,(:df:n))^bulk(^,(:dg:m))
= d(f(n))^bulk(^,(:df:n))^g(m).bulk(^,(:dg:m)) + power(n,−1).f(n).bulk(^,(:df:n))^d(g(m))^bulk(^,(:dg:m))

in which the first term is a term of (d^u)^v and the second is a term of power(n,−1).u^(d^v); we thus infer the suitable form for the product rule: for u in Ω(n,M) and v in Ω(m,M),

d^(u^v) = (d^u)^v + power(n,−1).u^(d^v)

Now, consider a general linear tensor operator Q which: is (Ω(1+n,M):|Ω(n,M)) for each natural n; agrees with d on Ω(0,M) = {scalar fields}; satisfies Q∘Q = zero; and gives Q(u^v) = Q(u)^v + power(n,−1).u^Q(v) for each u in Ω(n,M) and v in Ω(m,M). We clearly have one such, namely Q = d^; and Q must necessarily agree with d^ on scalar fields, since both agree with d there. Applying Q to f(n).bulk(^,(:df:n)) for some ({scalar fields}:f|1+n), we see each df(i) = Q(f(i)) so Q(df(i)) = 0 and our product rule then gives Q(f(n).bulk(^,(:df:n))) as Q(f(n))^bulk(^,(:df:n)) plus n terms, with alternating signs, each containing a factor Q(df(i)) for some i, which is zero; we are thus left with only the first term, which is equal to df(n)^bulk(^,(:df:n)) which is d^(f(n).bulk(^,(:df:n))), so Q and d^ agree on this term. Since every member of each Ω(n,M) is a sum of such terms, on which Q and d^ agree, and both are linear operators, we can infer that Q = d^. Thus d^ is the unique operator satisfying the given conditions.
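
In three dimensions the n = 0, m = 1 instance of this product rule wears its orthodox clothing as curl(f.A) = grad(f)×A + f.curl(A); a sympy check of that shadow (my example, not the text's):

    import sympy as sp
    from sympy.vector import CoordSys3D, curl, gradient

    N = CoordSys3D('N')
    f = N.x**2 * sp.sin(N.y)                               # a 0-form
    A = N.y * N.i + N.z * N.x * N.j + sp.exp(N.x) * N.k    # a 1-form, as a vector field
    residue = curl(f * A) - (gradient(f).cross(A) + f * curl(A))
    assert all(sp.simplify(residue.dot(e)) == 0 for e in (N.i, N.j, N.k))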

Alternating Functor: reprise

Given smooth (N:f|M) between smooth manifolds, and w in Ω(n,N), we can apply N's d^ to w, then pass the result through Ω(1+n,f), or apply M's d^ to Ω(n,f,w); how are the results related ? Both are in Ω(1+n,M). As usual, we can express w as a sum of terms, each of which is of form h(n).bulk(^,(:dh:n)) for some ({scalar fields}:h|1+n), contributing power(n,−1).bulk(^,dh) to d^w. The term contributes Ω(0,f,h(n)).bulk(^, (: Ω(1,f)∘dh :n)) to Ω(n,f,w), and each Ω(1,f,dh(i)) is just d(h(i)∘f), with Ω(0,f,h(n)) = h(n)∘f; thus, in d^Ω(n,f,w), the term's only contribution is d(h(n)∘f)^bulk(^, (: d(h(i)∘f) ←i :n)), since all others have a d^d(…) factor hence are zero. This contribution is just power(n,−1).bulk(^, (: d(h(i)∘f) ←i :1+n)) = power(n,−1).bulk(^,Ω(1,f)∘dh), which is what Ω(1+n,f) will map d^w's power(n,−1).bulk(^, dh) term to.

Note that Alt(−1)'s outputs are all {zero}, so we also obtain Ω(−1,M) as a formal space whose sole member is ({zero}:|M), a constant (hence fatuously smooth) zero mapping. Since this is constant, applying d^ to it must clearly yield zero; for compatibility with the above, this must be a 0-form, hence it must be the ({0}:|M) constant scalar field zero. For any smooth (N:f|M), Ω(−1,f) simply maps the unique ({zero}:|N) to the unique ({zero}:|M); applying d^ to its output yields M's constant zero function, which is indeed Ω(0,f)'s output when given N's constant zero function as input, so we also have (d^)∘Ω(−1,f) = Ω(0,f)∘(d^).

Thus, for each natural n,

(d^)∘Ω(n,f) = Ω(1+n,f)∘(d^)

and we already have:

Ω(0,f) = (Ω(0,M): s∘f ←s |Ω(0,N)) and
Ω(n+m, f, u^v) = Ω(n, f, u)^Ω(m, f, v) for u in Ω(n,N), v in Ω(m,N)

Madsen and Tornehave state, but neglect to prove, that any family of transformations satisfying these three conditions, in place of Ω(*,f), is necessarily equal to Ω(*,f).

The Cohomology Groups

We have d^ annihilating all its outputs; it remains to consider what (if anything) else it annihilates. For any smooth manifold M, on each Ω(1+n, M), we can define an equivalence relation by u ~ v iff u−v = d^w for some w in Ω(n, M). We can then look at ({zero}: d^ |) reduced modulo this equivalence. The result is just

H(1+n, M) = {u in Ω(1+n, M): d^u = zero} / ~

This is called the (1+n)th de Rham cohomology group. As before, by allowing Ω(−1,M) = {({zero}:|M)} we can extend this to include H(0,M), exploiting (|d^:Ω(−1,M)) = {({0}:|M)}, the trivial subspace {zero} in Ω(0,M), so that H(0,M) is just {w in Ω(0,M): d^w = zero}; and d^ acts on Ω(0,M) as d, so this is just the collection of locally constant scalar fields. The dimension of H(0,M) is thus the number of connected components of M; I shall generally only be interested in connected manifolds, for which H(0) is thus one-dimensional.

Forms which are annihilated by d^ are described as closed; those which are outputs of d^ are described as exact (we can formally evade the need for Ω(−1) by allowing the scalar constant field zero to be exact without demanding something for d^ to map to it). Every exact form is closed, since d^∘d^ is zero; and we can write the above as:

H(n, M) = {closed n-forms on M} / {exact n-forms on M}

Note that both {closed n-forms on M} and {exact n-forms on M} are linear spaces, and each is usually infinite-dimensional; but H(n,M) is typically finite-dimensional; the H(n,M) encode the topology of M.

Now consider a closed n-form u and a closed m-form v on M. For any exact n-form d^s and exact m-form d^r,

(u + d^s) ^ (v + d^r)
= u^v + u^(d^r) + (d^s)^v +(d^s)^(d^r)
= u^v + d^(power(n,−1).u^r + s^v + s^(d^r))

since the extra terms in the final d^(…) are, up to sign, (d^u)^r, s^(d^v) and s^(d^d^r), each of which has d^ applied to a closed form, so yields zero. Thus changing each closed form in a ^ product by an exact form changes the product by an exact form; consequently we can induce a product on the H groups, defined by: for x in H(n,M) and y in H(m,M), select any u in x, v in y and set x^y = {u^v + e: e is an exact (n+m)-form} in H(n+m,M).

Suppose we have a smooth (N:f|M) between smooth manifolds; for natural n define

H(n,f) = (H(n,M): {Ω(n,f,w) + s: s is an exact n-form on M} ← {w + r: r is an exact n-form on N} |H(n,N))

in which d^w is implicitly zero, hence so is d^Ω(n,f,w). Note that any exact n-form, r, on N has d^Ω(n,f,r) = Ω(n+1,f,d^r) = Ω(n+1,f,zero) = zero so that Ω(n,f,r) is exact, so changing w by an exact n-form of N doesn't change the {Ω(n,f,w) + s: s is an exact n-form on M} in the above. Thus, indeed, H(n,f) is a mapping; and H(n) is yet another contravariant functor.

Poincaré's lemma: the topologically trivial case

Consider now a star-shaped domain, U. This is a sub-set of a vector space within which there is some u in U for which: v in U implies that, for every positive s, t satisfying s+t = 1, u.s+v.t is in U; U is then said to be star-shaped about u. [Compare this to the special case of a convex subset of a linear space, which is star-shaped about each of its members: u, v in U and positive s, t with s+t = 1 imply u.s+v.t is in U.]

We can construct a smooth function ({reals}: step |{reals}) for which: t ≥ 1 implies step(t) = 1; t ≤ 0 implies step(t) = 0; otherwise, 0 ≤ step(t) ≤ 1. [For example: first note that (: exp(−1/t) ←t :{positives}) is smooth; it tends to zero as its input tends to zero, as does each of its derivatives; so consider the function f given by exp(−1/t).exp(−1/(1−t)) ←t for 0<t<1 and 0←t otherwise; then the integral of f from 0 to t varies smoothly with t and is constant for t≤0 and t≥1; dividing this by its value at t≥1 yields a candidate for step.] Using this, we can define a smooth function hat = (U: step(t).(v−u) + u ←[v,t] |W), with W = {[v,t]: v in U, t is real}, for which hat([v,t]) = u for t ≤0 while hat([v,t]) = v for t≥1. This induces, for each natural n, (Ω(n,W): Ω(n,hat) |Ω(n,U)). Let t in Ω(0,W) be defined by t = (: x ←[v,x] |W). For any f in Ω(0,U) we have Ω(0,hat,f) = f∘hat and d^Ω(0,hat,f) = Ω(1,hat,d^f); since d^ is just d on Ω(0), we obtain
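
The bracketed recipe is concrete enough to compute; a plain-Python realisation (the crude midpoint quadrature, and the names bump and step, are mine):

    import math

    def bump(t):
        # Smooth, positive on (0, 1), identically zero elsewhere.
        if t <= 0.0 or t >= 1.0:
            return 0.0
        return math.exp(-1.0 / t) * math.exp(-1.0 / (1.0 - t))

    K = 4000                                  # quadrature cells on [0, 1]
    cells = [bump((k + 0.5) / K) / K for k in range(K)]
    total = sum(cells)                        # integral of bump over [0, 1]

    def step(t):
        # Integral of bump from 0 to t, scaled so that step(1) = 1.
        if t <= 0.0:
            return 0.0
        if t >= 1.0:
            return 1.0
        return sum(cells[:max(1, round(t * K))]) / total

    assert step(-2.0) == 0.0 and step(3.0) == 1.0
    assert 0.0 < step(0.3) < step(0.7) < 1.0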

Ω(1,hat,df)
= d(f∘hat)
= d(: f(step(x).(v−u)+u) ←[v,x] :)
= (: (df(hat([v,x]))·(v−u)).step'(x).dt +step(x).df(hat([v,x])) ←[v,x] |W)
= ((df∘hat)·q).(step'∘t).dt + (step∘t).(df∘hat), writing q = (: v−u ←[v,x] |W)

from which we may infer Ω(1,hat)'s action on all higher-rank forms on U, by applying Ω(n+m,hat,w^z) = Ω(n,hat,w)^Ω(m,hat,z) as appropriate.

For each v in U, we have a trajectory P(v) = (: [v,t]←t; 0<t<1 :{reals}) in W along which we can integrate any (1+n)-form in W; viewed as a function of v, this will necessarily yield an n-form in U since it contracts our (1+n)-form in W with displacements along the trajectory; we can use this to construct a mapping (Ω(n,U): R(1+n) |Ω(1+n,W)). Formally, take t = ({reals}: x←[v,x] |W) and construct dt in Ω(1,W); take any (Ω(0,U): x |dim) for which dx is a basis of Ω(1,U) at each point of U; taken together with dt, these supply us with a basis of Ω(1,W) at each point of W. From dx infer bases of Ω(n,U) for all natural n, in which each basis member is of form bulk(^,dx∘a) for some (dim:a|n); then any w in Ω(1+n,W) can be written in the form

w = sum(: f(i).r(i) ←i :I) + dt^sum(: g(j).s(j) ←j :J)

for some finite I and J, with each f(i) and g(j) being ({reals}: |W), each r(i) being a basis member in Ω(1+n,U) and each s(j) a basis member in Ω(n,U). The value of sum(: g(j).s(j) ←j :) doesn't depend on our choice of basis, essentially because it's P'(v,t)·w; we can then integrate each (: g(j,[v,t]) ←t; 0≤t≤1 :) to obtain

R(1+n, w, v) = sum(: s(j).(integral from 0 to 1 of (: g(j,[v,t]) ←t :)) ←j :J) for each v in U

yielding (Alt(n,U): R(1+n,w) |U) in Ω(n,U) since each g was necessarily smooth, hence (Ω(n,U): R(1+n) |Ω(1+n,W)). Given our formal Ω(−1,U) = {zero} we can extend R to R(0) by noting that w in Ω(0,W) can be used as f(0), with r(0) = (:1←v|U) as the natural basis member of Ω(0,U) resulting from bulk(^,dx∘empty); outputs of s must be basis members in Ω(−1,U), but {zero}'s basis is empty, hence s has no outputs, hence no inputs, so J is empty and there is no dt^ term; thus R necessarily maps any 0-form on W to Ω(−1,U)'s unique member, zero. Thus we obtain

(Ω(n,U): R(1+n) |Ω(1+n,W)) for each natural n, along with (Ω(−1,U): R(0) |Ω(0,W))

and, hereafter, I'll continue the above discussion but with w now in Ω(n,W), each r(i) a basis member in Ω(n,U) and each s(j) a basis member in Ω(n−1,U); when n is 0, J is necessarily empty. It is now time to examine (d^)∘R(n) + R(1+n)∘(d^).

Now, each s(j) and r(i) is of form bulk(^,dx∘a) for some (dim:a:), hence each ds(j) and dr(i) is zero; thus

d^w = sum(: df(i)^r(i) ←i :I) − dt^sum(: (d^g(j))^s(j) ←j :J)

in which the negation arises from the product rule with dt being a 1-form, so yielding a power(1,−1). Since d^g(j) appears as dt^(d^g(j)), we can ignore its dt.∂g(j,[v,t])/∂t contribution, leaving only its Alt(1,U) component. If we apply U's d^ to R(n,w), it annihilates each s(j), so d^ only acts on each g(j) within the integral; as it's U's d^, it again only yields the Alt(1,U) component thereof. When we apply R(1+n) to d^w, the second, dt^sum(::J), term in d^w thus yields exactly −d^R(n,w); meanwhile, only the df(i)/dt component of df(i)^r(i) can yield a dt contribution for R(1+n) to notice; and integrating that (which is multiplied by constant r(i)) from 0 to 1 will simply yield the scalar field (: f(i,[v,1])−f(i,[v,0]) ←v |U). Thus,

d^R(n,w) + R(1+n,d^w)
= (: sum(: (f(i,[v,1])−f(i,[v,0])).r(i) ←i :I) ←v |U)

which is the f-term part of (: w([v,1])−w([v,0]) ←v |U). We now look at

S(n) = R(n)∘Ω(n,hat), a mapping (Ω(n−1,U): S(n) |Ω(n,U)), for each natural n

for which

(Ω(n,U): (d^)∘S(n) + S(1+n)∘(d^) |Ω(n,U))
= (d^)∘R(n)∘Ω(n,hat) + R(1+n)∘Ω(1+n,hat)∘(d^)

but Ω and d^ commute, albeit swapping rank,

= (d^)∘R(n)∘Ω(n,hat) + R(1+n)∘(d^)∘Ω(n,hat)
= ((d^)∘R(n) + R(1+n)∘(d^))∘Ω(n,hat)

Since each S(n) is manifestly linear, as is d^, we can see how this acts on Ω(n,U) by considering a single term of form (Alt(n,U): h(n).bulk(^,(:dh:n)) |U) for some (Ω(0,U):h|1+n); any member of Ω(n,U) is a sum of terms of this form. Applying Ω(n,hat) to this term, we get

Ω(n, hat, h(n).bulk(^,(:dh:n)))
= Ω(0,hat,h(n))^bulk(^,(: Ω(1,hat)∘dh :n))
= (h(n)∘hat).bulk(^,(Ω(1,W): ((dh(i)∘hat)·q).(step'∘t).dt +(step∘t).(dh(i)∘hat) ←i :n))

When we now break this into sum(:f.r:I) +dt^sum(:g.s:J) ready for R(n)'s action, for n>0, it has only one f.r term, namely

f(0).r(0)

with (:f|) = {0} = (:r|), f(0) = (h(n)∘hat).(power(n)∘step∘t) and r(0) = bulk(^,(: dh(i)∘hat ←i :n)); applying d^∘R(n)+R(1+n)∘d^ to this will just yield (given that hat([v,1]) = v while hat([v,0]) = u, for every v in U)

(: (f(0,[v,1])−f(0,[v,0])).r(0) ←v |U)
= (: (h(n,v).power(n,1) −h(n,u).power(n,0)) ←v |U).bulk(^,(:dh:n))
= h(n).bulk(^,(:dh:n))

which is just the n-form on U we started with; thus, for n>0, ((d^)∘R(n)+R(1+n)∘d^)∘Ω(n,hat) = (d^)∘S(n)+S(1+n)∘(d^) is the identity on Ω(n,U).

For n = 0, any h in Ω(0,U) is just a scalar field, h, on U; Ω(0,hat) maps it to h∘hat; when we put this into sum(:f.r:)+dt^sum(g.s) form, we have (:f|) = {0} = (:r|) again, with f(0) = h∘hat and r(0) the constant unit scalar field, so we can ignore it. Thus applying d^∘R(0)+R(1)∘d^ to h∘hat will give us

(: h(v)−h(u) ←v |U)

so that

((d^)∘S(0) + S(1)∘(d^))(h) = (: h(v)−h(u) ←v |U) for every h in Ω(0,U)

So, now consider any closed n-form f on U, so d^f = 0. If n > 0, we have f = d^S(n,f) +S(n+1,d^f) = d^S(n,f), an output of d^, hence exact; thus every closed n-form is exact and H(n,U) = {zero}. For n = 0, S(0)'s only output is the zero in Ω(−1,U), which d^ maps to the zero scalar field; and S(1,d^f) = 0, so we have 0 = f(v)−f(u) for all v in U, i.e. f(v) = f(u) for all v in U, whence f is constant. Thus H(0,U) is the one-dimensional linear space of constant scalar fields on U.

Thus, for any star-shaped domain U, H(0,U) is one-dimensional and H(n,U) is {zero} for n>0.
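
Taking u = 0 and noting that the integral from 0 to 1 is indifferent to the reparametrisation by step, the n = 1 case of this machinery is the familiar recipe for a potential of a closed 1-form; a sympy sketch (my example, with w = P.dx + Q.dy on the plane):

    import sympy as sp

    x, y, t = sp.symbols('x y t')
    P, Q = 2*x*y + y**3, x**2 + 3*x*y**2        # w = P.dx + Q.dy
    assert sp.simplify(sp.diff(P, y) - sp.diff(Q, x)) == 0    # d^w = 0: w is closed
    # S(1, w) at [x, y] integrates w's contraction with the ray from 0:
    F = sp.integrate(
        x * P.subs({x: t*x, y: t*y}, simultaneous=True)
        + y * Q.subs({x: t*x, y: t*y}, simultaneous=True),
        (t, 0, 1))
    assert sp.simplify(sp.diff(F, x) - P) == 0   # d^F = w: every closed
    assert sp.simplify(sp.diff(F, y) - Q) == 0   # 1-form on U is exact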

A smooth manifold, M, is described as topologically trivial precisely if H(0,M) is one-dimensional and each H(n,M) is {zero} for n>0; this is a stronger condition than being simply connected, which only constrains loops. Using the contravariant functor property of each H(n): given smooth (U:f|M) and (M:g|U), with U a star-shaped domain and g∘f the identity on M, H(n,f)∘H(n,g) = H(n,g∘f) is the identity on H(n,M), which thus factors through H(n,U); so such a pair of mappings establishes M's topological triviality.


Written by Eddy.