Quadratic forms

The theory of quadratic forms concerns itself with choosing bases which reduce quadratic forms, on a fixed space, to particularly simple expressions. The classical theory goes on to prove further results about doing this while keeping a chosen positive-definite form similarly simple. For the purposes of general relativity, two complications must be taken into account: the metric is Minkowskian rather than positive-definite; and we are dealing with a smooth manifold, so that each quadratic form is actually a tensor field and our bases are, likewise, vector fields.

Thus, where the classical theory allows one to take an initial basis, the manifold theory allows one to, for any given point of interest, take a family of smooth vector fields which, at each point in some neighbourhood of the given point, constitute a basis. Where the classical theory obtains a new basis which reduces a quadratic form or two to simple expressions, we can obtain a basis which, at the chosen point, reduces the quadratic form to a similarly simple expression; and, where feasible, we would also like to achieve the same simplification throughout some neighbourhood of the point.

Fortunately, much of what makes the classical results work for positive-definite forms depends primarily on those forms' invertibility, rather than on their definiteness, so it can be carried over to the Minkowskian case.


For the rest of this page, I take as given:

a smooth manifold M, of dimension dim, with a chosen point m in it;
an open neighbourhood U of m in M;
a list b of dim smooth vector fields on U whose values, at each point of U, constitute a basis of the tangents there; and
the list q of dim smooth gradient fields on U dual to b, i.e. with q(i)·b(j) equal to 1 when i = j and to 0 otherwise.

Analysis shall generally proceed in terms of revising U to a sub-neighbourhood of m and adjusting b and q, subject to keeping the above constraints true, in such a way as to express a quadratic form as a simple expression in q×q. Note that any tensor field on U, whose value at each point is a linear map from tangents at that point to gradients at that point, can be written in the form sum(: sum(: k(i,j).q(i)×q(j) ←i :) ←j :) for some ({({scalar fields on U}::dim)}: k :dim) – k gives the matrix of the linear map – and our aim is to make so many of k's outputs zero that we can simplify the summation. The proofs effectively take the form of describing an algorithm for revising U and b, with q adapting to b's changes, which leads to the desired result; at each step of the way, it is necessary to prove that the algorithm can proceed and does preserve the invariants above and any presumed by the proof itself.
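At a single point, realising the basis and its dual as matrices turns the decomposition just described into a change of basis; the following numpy sketch (all names and conventions here are mine, standing in for the text's notation) extracts the matrix k of a linear map from tangents to gradients and rebuilds the map as sum(: sum(: k(i,j).q(i)×q(j) ←i :) ←j :):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 3

# Columns of B are the basis vectors b(i); rows of Q are the dual q(i),
# so that q(i)·b(j) is 1 when i = j and 0 otherwise.
B = rng.normal(size=(dim, dim))
Q = np.linalg.inv(B)

F = rng.normal(size=(dim, dim))   # a linear map, tangents -> gradients

# k(i,j) = b(i)·F·b(j): the matrix of F relative to the basis b.
k = B.T @ F @ B

# Rebuild F as the double sum of k(i,j) times q(i)×q(j).
F_rebuilt = sum(k[i, j] * np.outer(Q[i], Q[j])
                for i in range(dim) for j in range(dim))
assert np.allclose(F, F_rebuilt)
```

Making many entries of k zero then lets the double sum collapse, which is exactly the simplification the proofs below pursue.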

Smooth indicator functions

One tool I shall need is: a smooth indicator function for an open set N in M is a smooth scalar field on M which is positive on N and zero elsewhere. Given finitely many open sets with indicator functions, we can sum those indicator functions to obtain an indicator function on their union. Given arbitrarily many disjoint open sets, on each of which we have a smooth indicator function, we obtain a smooth indicator function on their union by using their several indicator functions, each on its separate support, and taking value zero outside their union.

Consider an open set S contained in the M side of a single chart of M's atlas; we can identify S with its inverse image, under the chart, in a vector space; we can choose a basis in that vector space and define distance via the usual sum of squares of co-ordinates, with respect to that basis; distance from the nearest point not in the inverse image of S is then a scalar function on the vector space and smooth everywhere except the boundary of S's inverse image; we can pull back this function to a smooth scalar function on S, which we can extend to a continuous scalar function on M by letting it be zero outside S. The function (: exp(−1/x/x) ←x :) is smooth, maps positive inputs to positive outputs and zero to zero, but all of its derivatives at zero are zero. Composing such a function with our continuous scalar function on M, we obtain an indicator function for S. Thus every open set contained in the M side of a chart of M's atlas has a smooth indicator function.
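The construction can be sketched concretely, taking an open ball in the chart's vector space to stand in for S's inverse image (the function and variable names here are mine):

```python
import math

def smooth_step(t):
    # exp(−1/t/t) for positive t, zero otherwise: smooth even at t = 0,
    # where all of its derivatives vanish.
    return math.exp(-1.0 / (t * t)) if t > 0 else 0.0

def ball_indicator(centre, radius):
    # Distance from the nearest point not in the open ball, composed
    # with smooth_step: positive inside the ball, zero elsewhere.
    def indicator(point):
        return smooth_step(radius - math.dist(point, centre))
    return indicator

f = ball_indicator((0.0, 0.0), 1.0)
assert f((0.0, 0.0)) > 0      # positive inside
assert f((0.9, 0.0)) > 0
assert f((1.0, 0.0)) == 0.0   # zero on the boundary...
assert f((2.0, 0.0)) == 0.0   # ...and beyond it
```

The flatness of smooth_step at zero is what lets the composite join smoothly onto the constant zero outside the ball.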

I presume that each connected component of any open N in M can be expressed as a finite union of open sets, each of which is contained in the M side of a single chart in M's atlas. Consequently, I take it that every open N in M has a smooth indicator function.

Basic diagonalisation

If we have a tensor field whose value at each point is a symmetric quadratic form, can we revise b and q so as to express the quadratic form as sum(: k(i).q(i)×q(i) ←i |dim) for some ({scalar fields on M}: k |dim), at least in some neighbourhood of m ?

Our quadratic form is a smooth (:f|N) with N some neighbourhood of m and each ({gradients at n}: f(n) |{tangents at n}) satisfies f(n,u,v) = f(n,v,u) for all tangents v, u at n. From continuity, implied by smoothness, we can infer that, when u, v are smooth vector fields in a neighbourhood of m with v·f·u non-zero at m, there is some neighbourhood of m in which this holds true. Inductively (on natural i) assume we know f = sum(: k(j).q(j)×q(j) ←j |i) +h with h·b(j) = 0 (necessarily implying 0 = b(j)·h by symmetry of h) for all j in i; we can certainly assert this for i = 0 to begin with. If it is true for i = dim, our result is proven. In any case, h must be symmetric, since the given sum and f are.

If there is any j in dim but not in i for which, throughout some neighbourhood of m, for each e in dim but not in i, either e = j or b(e)·h·b(j) = 0 (hence, by symmetry of h, b(j)·h·b(e) = 0), then we may set k(i) = b(j)·h·b(j) and permute b and q to ensure that this j is i, thereby extending our inductive hypothesis. Otherwise: every neighbourhood of m contains, for each j in dim but not in i, an n at which b(e)·h·b(j) is non-zero for some e in dim but neither equal to j nor in i. (Inconveniently, this doesn't preclude h(m) being zero, which I'll be dancing round hereafter.) In particular, i is in dim but not in i, so is a candidate for j.

If there is some j in dim but not in i for which b(j)·h·b(j) is non-zero at m, then there is some neighbourhood of m in which it is non-zero. Intersect U with this neighbourhood and permute b and q so that j is i. Let k(i) = b(i)·h·b(i); for each j in dim but not in 1+i, replace b(j) with b(j) −b(j)·h·b(i).b(i)/k(i), whose contraction with h·b(i) is zero; revise q to be b's dual. Since we only changed the b(j) with j in dim but not in 1+i, and each changed by a multiple of b(i), this does not change (:q:i), so our inductive relationship between f, b and h remains true. Now each b(j) with j not equal to i is in the kernel of h·b(i), so this must be some scalar multiple of q(i); and contracting it with b(i) reveals that it is k(i).q(i). Thus h −k(i).q(i)×q(i) annihilates b(i), as well as all the b(j) with j in i, so we may replace h with it and i with i+1, extending our induction.

Otherwise, b(j)·h·b(j) = 0 at m for each j in dim. If there are any e, j in dim but not in i for which b(e)·h·b(j) is non-zero at m, replace b(j) with b(j) +b(e) to obtain the situation of the previous paragraph, since, at m, (b(j)+b(e))·h·(b(j)+b(e)) = 2.b(e)·h·b(j) by symmetry and b(j)·h·b(j) = 0 = b(e)·h·b(e).
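At a single point, the cases above (a usable diagonal entry; a cross term promoted to the diagonal by adding one basis vector to another; nothing left) give an algorithm for congruence-diagonalising a symmetric matrix. A minimal numpy sketch, with conventions of my own choosing: columns of B play the role of the b(j), and B.T @ A @ B plays the role of the matrix of values b(e)·h·b(j).

```python
import numpy as np

def diagonalise(A, tol=1e-12):
    """Find B and k with B.T @ A @ B == diag(k), for symmetric A.

    Columns of B are the revised basis vectors b(i); k collects the
    diagonal coefficients, as in sum(: k(i).q(i)×q(i) ←i |dim)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    B = np.eye(n)
    k = np.zeros(n)
    for i in range(n):
        h = B.T @ A @ B                 # current matrix of h
        # First case: some j >= i has b(j)·h·b(j) non-zero.
        j = next((j for j in range(i, n) if abs(h[j, j]) > tol), None)
        if j is None:
            # Second case: a non-zero cross term b(e)·h·b(j); replace
            # b(j) with b(j) + b(e), whose h-square is 2.b(e)·h·b(j).
            pair = next(((e, j) for j in range(i, n) for e in range(i, n)
                         if e != j and abs(h[e, j]) > tol), None)
            if pair is None:
                break                   # h vanishes on what remains
            e, j = pair
            B[:, j] += B[:, e]
            h = B.T @ A @ B
        B[:, [i, j]] = B[:, [j, i]]     # permute so that this j is i
        h = B.T @ A @ B
        k[i] = h[i, i]
        for j2 in range(i + 1, n):      # kill h's couplings to b(i)
            B[:, j2] -= (h[j2, i] / k[i]) * B[:, i]
    return B, k

A = np.array([[0.0, 1.0], [1.0, 0.0]])  # no usable diagonal entry
B, k = diagonalise(A)
assert np.allclose(B.T @ A @ B, np.diag(k))
```

For an invertible form the break is never reached, matching the remark below that invertibility suffices for full diagonalisation; the break is exactly the degenerate case the text goes on to discuss.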

It remains to consider the case where h = 0 at m but, in every neighbourhood of m, there is some n and some e in dim but not in 1+i for which b(e)·h·b(i) is non-zero. I pause to note that, if f is known to be invertible everywhere, we never run into this case, so the methods above suffice to fully diagonalize it, i.e. to inductively extend i to dim.

However, when a quadratic form's variation does allow it degeneracies, we can't take the above any further: there may be no family of smooth vector fields, forming at each point a basis, which diagonalizes the form everywhere. As a specific example, in the two-dimensional Euclidean plane with orthodox cartesian co-ordinates x and y, consider f = x.x.dx×dx +(x+y).(dx×dy +dy×dx) +y.y.dy×dy at (and near) the origin. It is singular at the origin (and at every other location where (x+y).(x+y) = x.x.y.y, i.e. 1/x +1/y = ±1, so on each of the curves y = −x/(1 ±x), which meet at the origin), but not throughout any open neighbourhood of the origin. We can, indeed, diagonalize this form, as

(x +y +x.y).(u.dx +v.dy)×(u.dx +v.dy) +(x +y −x.y).(w.dx +z.dy)×(w.dx +z.dy)

for suitable scalar fields u, v, w and z; but the basis described by this is degenerate when 1/x +1/y = ±1: either it has a zero member (and at least one of the required functions k(0) and k(1) goes to infinity) or it has an infinite member (and a k function goes to zero). The underlying problem here is that the coefficients of dx×dx and dy×dy are independent of one another and go to zero quadratically as the coefficient of dx×dy +dy×dx goes linearly to zero.

Expanding this form,

(x +y +x.y).(u.dx +v.dy)×(u.dx +v.dy) +(x +y −x.y).(w.dx +z.dy)×(w.dx +z.dy)
= dx×dx.(u.u.(x +y +x.y) +(x +y −x.y).w.w) +(dx×dy +dy×dx).(u.v.(x +y +x.y) +(x +y −x.y).w.z) +dy×dy.(v.v.(x +y +x.y) +(x +y −x.y).z.z)

so matching coefficients against f requires

u.u.(x +y +x.y) +w.w.(x +y −x.y) = x.x,
u.v.(x +y +x.y) +w.z.(x +y −x.y) = x +y and
v.v.(x +y +x.y) +z.z.(x +y −x.y) = y.y.
In the course of solving for u, v, w and z, the following simplifications arise:

x.(x/(x+y) +1/y)/4 +u.u.w.w.y.(y/(x+y) −1/x)
= x.(x.y +x +y)/4/y/(x+y) +u.u.w.w.y.(y.x −x −y)/x/(x+y)

and

x.(x/(x+y) −1/y)/4 +u.u.w.w.y.(y/(x+y) +1/x)
= x.(x.y −x −y)/4/y/(x+y) +u.u.w.w.y.(y.x +x +y)/x/(x+y)

together with

x.(x +y +x.y).(x +y +x.y)/4/y −u.u.(x +y).(x +y +x.y)
= u.u.w.w.(x +y −y.x).(x +y +x.y)
= x.(x +y −x.y).(x +y −x.y)/4/y +w.w.(x +y).(x +y −x.y)

whence

w.w.(x +y).(x +y −x.y)
= x.( (x +y +x.y).(x +y +x.y) −(x +y −x.y).(x +y −x.y) )/4/y −u.u.(x +y).(x +y +x.y)
= x.x.(x +y) −u.u.(x +y).(x +y +x.y)

Eliminating w.w via the dx×dx coefficient of f, namely u.u.(x +y +x.y) +w.w.(x +y −x.y) = x.x, then makes u.u a root of a quadratic:

(2.u.u −(x +y +x.x)/(x +y +x.y))**2
= (x +y +x.x).(x +y +x.x)/(x +y +x.y)/(x +y +x.y) −x/y
= (y.(x +y +x.x).(x +y +x.x) −x.(x +y +x.y).(x +y +x.y))/y/(x +y +x.y)/(x +y +x.y)
= (x.x.x.y.(x −y) +y.y.y +x.y.y −x.x.y −x.x.x)/y/(x +y +x.y)/(x +y +x.y)
= (x.x.x −(y+x).(y+x)/y).(x −y)/(x +y +x.y)/(x +y +x.y)

whence

u.u = (x +y +x.x ±√((x.x.x −(y+x).(y+x)/y).(x −y)))/(x +y +x.y)/2

and

w.w = (x.x −(x +y +x.y).u.u)/(x +y −x.y)
= −(x +y −x.x ±√((x.x.x −(y+x).(y+x)/y).(x −y)))/(x +y −x.y)/2
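The pivotal expansion in that working, y.(x +y +x.x).(x +y +x.x) −x.(x +y +x.y).(x +y +x.y) = (x.x.x.y −(x+y).(x+y)).(x −y), is a polynomial identity, so it can be spot-checked exactly at rational sample points; a small sketch (function names mine), using exact rational arithmetic so equality is not blurred by rounding:

```python
from fractions import Fraction
from itertools import product

def lhs(x, y):
    # y.(x +y +x.x)^2 −x.(x +y +x.y)^2
    S, P = x + y, x + y + x*y
    return y * (S + x*x)**2 - x * P*P

def rhs(x, y):
    # (x.x.x.y −(x+y).(x+y)).(x −y)
    S = x + y
    return (x**3 * y - S*S) * (x - y)

points = [(Fraction(a, 2), Fraction(b, 3))
          for a, b in product(range(-4, 5), repeat=2)]
assert all(lhs(x, y) == rhs(x, y) for x, y in points)
```

Agreement at more sample points than the polynomials' degree guarantees the identity itself, not merely the samples.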

Consequently, none of the other possible diagonalizations will save us from degeneracies such as are exhibited by this case.

Written by Eddy.