A quadratic form, or bilinear map, is a linear map whose outputs are linear maps. If the bilinear map draws its inputs from the same linear space as that from which its outputs draw their inputs, we can swap its inputs: the second input is a valid first input and vice versa. If the output is unchanged by such a swap, the quadratic form is described as symmetric; if the output is negated by a swap, the quadratic form is described as anti-symmetric.

Given bilinear ({linear maps ({scalars}:|V)}: q |V) for some linear space V, we can define g = (: (: q(u,v)+q(v,u) ←v |V)/2 ←u |V) and s = (: (: q(u,v)-q(v,u) ←v |V)/2 ←u |V) which are manifestly symmetric and antisymmetric, respectively, with q = g+s.
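In finite dimension, with q represented by a square matrix, this is just the familiar split of a matrix into its symmetric and antisymmetric parts; a minimal numerical sketch (the matrix below is an arbitrary made-up example, not anything from the text):

```python
import numpy as np

# A hypothetical matrix representing a bilinear map q on a
# 3-dimensional space: q(u, v) = u @ q_mat @ v.
q_mat = np.array([[1.0, 2.0, 0.0],
                  [4.0, -1.0, 3.0],
                  [5.0, 0.0, 2.0]])

g = (q_mat + q_mat.T) / 2   # symmetric part: g(u,v) = (q(u,v)+q(v,u))/2
s = (q_mat - q_mat.T) / 2   # antisymmetric part: s(u,v) = (q(u,v)-q(v,u))/2
```

The transposed matrix plays the role of swapping the two inputs, so symmetry of g and antisymmetry of s are immediate, as is q = g+s.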

Classical theorems of algebra say that every symmetric quadratic form can
be unit diagonalised by a suitable choice of basis: i.e. there are
mappings (dual(V):b|N) and ({+1,-1}:a:N) for which g = sum(:
a(i).b(i)×b(i) ←i :N) = sum(: a.b×b :N); in b's matrix
representation of g, all entries are zero except the diagonal ones, given by
a. Furthermore, there's sufficient freedom of choice remaining in b that: if
we take a second symmetric quadratic form, r – at least if r is positive
definite (i.e. r(x,x) is positive for each x in V except x = zero, for which
r(x,x) cannot avoid being 0) – we can choose a basis simultaneously unit
diagonalising g, as before, and diagonalising r as sum(: c.b×b :N) for
some ({positive scalars}: c :N).
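As a concrete check, in finite dimension the unit diagonalisation of a non-degenerate symmetric form can be computed from an eigendecomposition: rescale each eigenvector by 1/√|eigenvalue| and the form's matrix in the resulting basis is diagonal with entries ±1. A sketch, assuming g non-degenerate (the matrix is a made-up example):

```python
import numpy as np

g_mat = np.array([[2.0, 1.0],
                  [1.0, -3.0]])        # a symmetric, non-degenerate example

vals, vecs = np.linalg.eigh(g_mat)     # g_mat = vecs @ diag(vals) @ vecs.T
b = vecs / np.sqrt(np.abs(vals))       # columns: the unit-diagonalising basis
a = b.T @ g_mat @ b                    # diagonal, with entries in {+1, -1}
```

The diagonal entries recovered in a are the signs of the eigenvalues, matching the ({+1,-1}:a:N) of the classical statement.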

Further quotes from the classics tell me that every anti-symmetric quadratic form takes the form s = sum(: b(n(2.i))×b(n(1+2.i)) − b(n(1+2.i))×b(n(2.i)) ←i |m) for some monic list (N: n |2.m) and basis (dual(V):b|N). In particular, if V is odd-dimensional, (|n:) isn't all of N, so s maps some non-zero members of V to zero – i.e. every anti-symmetric bilinear map on an odd-dimensional linear space is singular.
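The singularity claim in odd dimension is easy to check numerically: any antisymmetric matrix S of size n satisfies det(S) = det(−S^T) = (−1)^n.det(S), which forces det(S) = 0 when n is odd. A sketch (random example, seed chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(42)
m = rng.standard_normal((5, 5))
s = m - m.T       # an antisymmetric form on a 5-dimensional space
# (-1)**5 . det(s) = det(-s) = det(s.T) = det(s), so det(s) must vanish
```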

My gut instinct is that this should leave enough freedom to allow for
simultaneous diagonalisation of at least some symmetric quadratic forms;
ideally, thereby, expressing q = g+s as a band-diagonal matrix, with
non-zero entries only on the diagonal except for units in the off-diagonal
positions of two-by-two squares on the diagonal (with zeros to either side of
each off-diagonal unit in the direction of the diagonal).

I suspect that anti-symmetric (real) quadratic forms can be used to bring
forth the complex numbers in a manner which leaves me with some chance of
understanding *why* the complex numbers arise; my suspicion is based on
the thoughts with which I'll now begin; seeing where they lead, I'll, by and
by, have a clearer opinion than suspicion.

Define

- S = (: sum(: p(2.i)×p(1+2.i) −p(1+2.i)×p(2.i) ←i |m) ← ({linear (W:|V)}: p |2.m) ; W, V are linear spaces, m is natural :)

This maps even-length lists of linear (W:|V) to antisymmetric
quadratic forms; my earlier quote from classical algebra may then be expressed
as (|S:) = {antisymmetric quadratic forms on V}; indeed, to deliver all
anti-symmetric quadratic forms on V, S only needs to use those of its inputs
which are linearly independent. Now, S is a mapping, so its right-equivalence,
(:S|), relates p to q iff each is an even-length list of members of dual(V) and
S(p) = S(q). When we express an anti-symmetric quadratic form, s, in its
canonical form, as above – with units either side of the diagonal – we are
choosing a linearly independent list (dual(V): b |2.m) for which S(b) = s; and
our choice of b is thus unique up to (:S|)-equivalence. Thus the study
of which changes of basis don't change the matrix of an anti-symmetric
quadratic form reduces to an analysis of (:S|).
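For concreteness, here is a hypothetical rendering of S with covectors represented as rows of an array (the name S and its list encoding are mine, chosen to mirror the definition above): interleaved pairs p(2.i), p(1+2.i) contribute b×b′ − b′×b terms, and the output is antisymmetric by construction.

```python
import numpy as np

def S(p):
    """Sketch of S: p is a sequence of 2*m covectors (rows); returns
    sum over i in m of  p[2i] (x) p[2i+1]  -  p[2i+1] (x) p[2i]."""
    m = len(p) // 2
    dim = len(p[0])
    out = np.zeros((dim, dim))
    for i in range(m):
        out += np.outer(p[2*i], p[2*i + 1]) - np.outer(p[2*i + 1], p[2*i])
    return out

p = np.eye(4)     # four linearly independent covectors on a 4-dim space
s = S(p)          # the canonical-form antisymmetric matrix: 2x2 blocks
```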

For given (dual(V):p|2.m) we can decompose S(p) as sum(: a^c |m) with a = p&on;(: 2.i ←i |m), c = p&on;(: 1+2.i ←i |m) and a^c = (: a(i)×c(i) -c(i)×a(i) ←i |m). This describes S(p) as the value of an antisymmetric quadratic form on lists of dual(V)'s members, given a and c (which interleave to give p) as its two inputs. So (notice that V only appears via its dual, which is a linear space just as V is, so we may as well skip mention of V itself and take its dual as the linear space worth naming; and) introduce

- Q = (: W is a linear space; (: sum(: x(i)×y(i) -y(i)×x(i) ←i :) ← (W:y:) :{lists}) ← (W:x:) :{lists})

noting that the sum is implicitly over the intersection of (:x|) and (:y|), which (as x and y are lists) are naturals, so their intersection is whichever is smaller of the two list lengths. Then Q(a,c) = sum(:a^c:) and S(p) = Q(p&on;(:2.i←i:), p&on;(:1+2.i←i:)), making every output of S an output of an output of Q; and every such Q-product is an antisymmetric quadratic form on some linear space, dual(W), hence is an output of S; so the collections of Q-products and S-outputs are equal. Now, Q is an anti-symmetric quadratic form on {lists (W::)} for any linear space W, and {lists (W::)} in such a case forms a linear space; so Q is itself an anti-symmetric quadratic form on a linear space, and we can express (at least) its restriction to lists in any given linear space as a Q-product.
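A sketch of Q in the same hypothetical list-of-rows encoding, checking both that the sum runs over the shorter of the two list lengths and that interleaving recovers an S-output as a Q-product (the slicing p[0::2], p[1::2] plays the role of p&on;(:2.i←i:) and p&on;(:1+2.i←i:)):

```python
import numpy as np

def Q(x, y):
    """Sketch of Q: x, y are lists of covectors (rows); the sum runs
    over the intersection of their index sets, i.e. the shorter length."""
    n = min(len(x), len(y))
    dim = len(x[0])
    out = np.zeros((dim, dim))
    for i in range(n):
        out += np.outer(x[i], y[i]) - np.outer(y[i], x[i])
    return out

p = np.eye(4)                    # an interleaved list of covectors
s_as_product = Q(p[0::2], p[1::2])   # S(p) recovered as a Q-product
```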

Indeed, for any given linear space V, (: Q :{lists (dual(V)::)}) should be sum(: u^v :) for some lists (V:u:) and (V:v:) interpreted as members of dual({lists (dual(V)::)}) via

- (: sum(: w(i,v(i)) ←i :) ← (dual(V):w:) :{lists})

for v and similar, with u in v's place, for u. Manifestly, u and v should be non-overlapping sub-lists of some basis of V.

Written by Eddy.