Polynomials: sums of scaled powers

Polynomials

Just as repeated addition begets a form of multiplication by the naturals, we can use repeated multiplication to define power, for which my power(n, x) corresponds to orthodoxy's x^n; we thus obtain power(n, x).power(m, x) = power(n +m, x) and power(n)&on;power(m) = power(n.m). We can define powers whenever we have a binary operator that we think of as multiplication. Below, I'll only need (: power :{naturals}).
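
For concreteness, here is a minimal sketch of this in Python (an illustration of mine, not part of the formal development, with plain integers standing in for whatever values our multiplication acts on):

    def power(n, x, one=1):
        """Multiply together n copies of x; power(0, x) is the multiplicative identity."""
        result = one
        for _ in range(n):
            result = result * x
        return result

    assert power(3, 2) == 8
    assert power(5, 2) * power(2, 2) == power(5 + 2, 2)  # power(n, x).power(m, x) = power(n +m, x)
    assert power(3, power(2, 2)) == power(3 * 2, 2)      # power(n)&on;power(m) = power(n.m)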

The naturals support a multiplication and an addition; the former lets us take powers of naturals and scale them by other naturals; the latter lets us add up the results, to give us polynomials. Furthermore, when multiplication and addition are applicable to some values, the usual case leads to there being values, among the ones we can multiply and add, that represent the naturals; so consideration of the case where all we have is the naturals can teach us a great deal about arithmetic in general. There are also, conversely, results I can prove generally in the abstract; while we only need naturals to describe them, I prefer to prove them for the general case, rather than only for the naturals (and then again for other cases elsewhere). So I'll here presume that the values whose powers we're taking are drawn from some ringlet, within which the naturals are naturally embedded in the usual way (by repeated addition of the ringlet's multiplicative identity).

When we combine our multiplication with an addition, over which the multiplication distributes, we can build polynomial functions. These are functions of form (: sum(: s(i).power(i, x) ←i :) ←x :) with s being a mapping from within some natural to values of the kind our addition and multiplication act on. (If our addition has an identity, 0, then we can extend (:s:n) to a list (:s|n) by filling in any gaps with 0; but we needn't do this, even when we can.) The outputs of s are known as coefficients of the polynomial. We can interpret our polynomial as simply sum(: s(i).power(i) ←i :), in which s(i) is a constant by which we're multiplying the function power(i) = (: power(i, x) ←x :); and the usual definition of pointwise multiplication then turns this into sum(s.power). When we interpret our polynomial in this way, it becomes natural to describe it entirely in terms of its (possibly partial) list of coefficients, taking the powers as implicit.
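
A sketch of such evaluation in Python, with a dict standing in for a partial list of coefficients (missing indices are simply absent, not zero; though note that Python's sum quietly supplies 0 as its starting value, so this sketch does assume an additive identity):

    def evaluate(s, x):
        """Evaluate the polynomial with partial list s of coefficients at input x."""
        # Each defined s(i) scales power(i, x); absent indices contribute no term.
        return sum(c * x**i for i, c in s.items())

    s = {2: 1}                   # the partial list [,,1]: only s(2) = 1 is defined
    assert evaluate(s, 5) == 25  # it represents power(2)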

Generality and formalism

Since I like to allow cases where addition is incomplete, so there may be no zero coefficient to fill in the gaps in a sparse sum of powers, I define a partial list to be a mapping (: :n) from within some natural; when it is (: |n) it is a (full) list, but I allow a partial list such as s = [,,1], with s(2) = 1 and no other s(i) defined, to represent power(2), with no coefficients for 0, 1 or any other power. For the same reason, when adding two partial lists, I define pointwise addition to make (: r +s |) the union of (: r |) and (: s |) with (r +s)(i) = r(i) +s(i) when i is a right value of both r and s; (r +s)(i) = r(i) when i is a right value of r but not s; and (r +s)(i) = s(i) when i is a right value of s but not r. This has the same effect, when there is an additive identity, zero, as treating undefined entries in either list as being implicitly zero entries; but does so without the need for there actually to be an additive identity to use implicitly as an entry.
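
That pointwise addition might be sketched as follows (dicts as partial lists again; no zero is ever invented for a missing entry):

    def add(r, s):
        """Pointwise sum of partial lists r and s of coefficients."""
        out = dict(r)  # entries of r, used as-is where s has no entry
        for i, c in s.items():
            # add where both are defined; otherwise just use s's entry
            out[i] = out[i] + c if i in out else c
        return out

    assert add({2: 1}, {0: 3, 2: 4}) == {0: 3, 2: 5}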

When a polynomial is read as a function (: sum(: s(i).power(i, x) ←i :) ←x :), the multiplication of such functions is pointwise; but when representing the polynomial by its partial list of coefficients, the induced multiplication isn't the pointwise multiplication that we could use on partial lists. When discussing polynomials, we use this induced multiplication instead.

So, formally, I define polynomial algebra purely in terms of partial lists of coefficients, without reference to the powers of the variable x. The addition is just pointwise addition of mappings on whose outputs an addition is defined. The multiplication is trickier: given partial lists (: r :m) and (: s :n) and a multiplication between outputs of r and of s, producing values we know how to add, their product, interpreted as polynomials, is the partial list

r.s = (: sum(: r(j).s(k) ←(j, k); j +k = i :) ←i :)

whose entry at each index i, defined whenever at least one pair j, k with j +k = i has both r(j) and s(k) defined, sums the r(j).s(k) over all such pairs.

When considered as a polynomial, each r(j).s(k) product of coefficients came from multiplying r(j).power(j) by s(k).power(k) to get r(j).s(k).power(j +k), which is why r(j).s(k) is one of the terms summed to get the product's entry at index j +k = i, which serves as coefficient of power(i).
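
In the same spirit, a sketch of the induced multiplication, gathering each r(j).s(k) at index j +k:

    def mul(r, s):
        """Product of partial lists: r(j).s(k) contributes at index j + k."""
        out = {}
        for j, rj in r.items():
            for k, sk in s.items():
                i = j + k
                # sum contributions sharing an index; no additive identity needed
                out[i] = out[i] + rj * sk if i in out else rj * sk
        return out

    # (1 + x).(1 + x) = 1 + 2.x + x.x
    assert mul({0: 1, 1: 1}, {0: 1, 1: 1}) == {0: 1, 1: 2, 2: 1}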

Note that the addition of polynomials can be applied anywhere that corresponding entries in the lists can be added; usually we'll have some single addition defined on a collection V of values and we'll use lists (V: :) as our coefficients, but nothing stops us having some sequence U of collections, with an addition defined on each U(i), and using pointwise addition on lists in Rene(U) = {lists (:u|n): u(i) in U(i) for each i in n}. Likewise, the multiplication only requires that we know how to multiply outputs of r by outputs of s and sum the results; usually, we'll have some multiplication and addition defined on a collection V of values, with (V: r :) and (V: s :), but we can perfectly well have other cases; as long as r(j).s(k) terms with the same j +k can be added to one another, the multiplication works; for example, each s(i) might be a rank-i tensor. In both cases, we can define and distinguish polynomials or their partial lists of coefficients without having to give any thought to what values we shall take powers of, prior to scaling by the coefficients and then summing.

When the coefficients are values of a ringlet, the resulting addition and multiplication combine to form a ringlet of polynomials over the original ringlet; we can then use these polynomials as coefficients in their own right and repeat this step as many times as we like. At each step in doing this, when we come to interpret our partial lists as functions, we may choose a fresh variable to pass as input to each power that scales this layer's coefficients; consequently, the polynomials in this case take many parameters, rather than just one; as a result, such a multi-layer polynomial is sometimes referred to as a multinomial. A multinomial, understood as such a function of several parameters, can be subjected to permutation of the order in which those parameters are supplied (as, for example, by transposition): doing so transforms its representation as a polynomial in a way that doesn't depend on the values the formal parameters might be supposed to vary over.

Of course, nothing prevents us, equally, from passing the same value to each layer; but, in this case, the multinomial just collapses down, multiplying together all the powers from each layer, to a simple polynomial in that variable. We shall revisit multinomials below for the special case where the sum of orders of the variables is the same in all terms.

This arithmetic on lists of coefficients can even be extended to sequences of coefficients; when we do so, such a sequence (or, as for lists, partial sequence) is known as a power series; whether such a sequence can be interpreted as a function depends on the module from which the coefficients are drawn, the ring over which the function's parameter ranges and, typically, the values of the coefficients – the details of which lie beyond the intended scope of this page.

Evaluation and Equation

When we do have some values that we can multiply by themselves to get powers, and can scale those powers by our coefficients and sum the results, we can indeed read our partial list of coefficients as a polynomial function and give it one of these values as an input. When we pick a particular value to use as input to the polynomial, we evaluate the polynomial at that input to get the polynomial's output, by taking the input's various powers, scaling each by a suitable coefficient and summing the results. The peculiarities of arithmetic on these values and on coefficients might, in some contexts, lead to distinct partial lists giving rise to equal polynomial functions; in such a case, the mapping from partial lists to polynomial functions won't be monic. For example, in the ringlet of whole-number arithmetic modulo some prime p, power(p) is equal to power(1) at every input that's a value of the ringlet.
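
A quick numeric check of that closing example (my own, using Python's built-in modular power and the small prime 7):

    p = 7
    # power(p) agrees with power(1) at every value of the ringlet of
    # integers mod p, though {p: 1} and {1: 1} are distinct partial lists.
    assert all(pow(x, p, p) == x % p for x in range(p))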

Similar functions can also be defined in cases where the coefficients aren't amenable to a multiplication, but do support an addition; if the inputs to evaluation are values by which we can multiply these coefficients, we still get a polynomial function (in the sense that it uses its input the same way a polynomial does). We can add such functions but not multiply them; even so, they can be quite useful, and the formalism of polynomials (with coefficients we can multiply) provides ways to study them. Formally, they are represented by taking a tensor product of the module (or linear space) in which the coefficients lie with the ring of polynomials over the ringlet (or ring, which may indeed be a field) of inputs to evaluation.

If we pass the same input to two distinct polynomials, we won't normally get the same answer; given a pair of polynomials, it turns out to be interesting to study the collection of values where they do give the same answer. The pair of polynomials is then known as a polynomial equation and each input at which they agree is known as a root of this equation. Adding the same polynomial to both sides in such an equation won't make any difference to what roots you get, so the roots only depend on the difference between the two polynomials. When our addition is complete, so we can subtract freely, it is thus usual to reduce any polynomial equation to the form in which one side is the zero polynomial and the other is the difference between the original polynomials; the roots of the equation are then also referred to as roots of this difference polynomial. With incomplete additions, one can still reduce both sides to simplest form (cancelling as much as possible from both sides; typically so that at most one side has a term in any given power).

Modular reduction

When the arithmetic in use forms a ringlet, the polynomials over that ringlet (i.e. using its values as coefficients) also form a ringlet; it can be interesting to study a reduced form of this polynomial ringlet that takes some given polynomial equation to be true – i.e. reduce the polynomial ringlet modulo the equivalence induced by treating the equation as if it were true; see the next section for an example. For a given equation A = B, the equivalence in question relates polynomials p to q precisely if there exist polynomials U, V, for which p +U.A +V.B = q +U.B +V.A; for a putative input x at which A(x) = B(x), this would give rise to p(x) +U(x).B(x) +V(x).A(x) = p(x) +U(x).A(x) +V(x).B(x) = (p +U.A +V.B)(x) = (q +U.B +V.A)(x) = q(x) +U(x).B(x) +V(x).A(x) whence, cancelling U(x).B(x) +V(x).A(x) from first and last, p(x) = q(x). So this equivalence identifies polynomials that would agree on any inputs at which A and B agree; yet we can define and discuss the equivalence regardless of whether there are any such inputs. (When the ringlet of polynomials, over a given original ringlet S, is reduced modulo this equivalence, the reduced ringlet, R, still has values representing those of S, hence there are polynomials over R that represent A and B, using R's representations of their coefficients in S; and R also has a member x representing S's power(1); evaluating R's A and B at this x, we get S's A and B, which our equivalence identifies; hence indeed, in R, A(x) = B(x). Thus, whether or not S has any members satisfying the equation, R does.) One may equally do the same for an arbitrary family of polynomial equations, using an equivalence derived from ignoring substitution of an arbitrary multiple of either side of an equation for the matching multiple of that equation's other side. (This is just the transitive closure of the union of the equivalences derived from the several equations.)

When we can reduce our polynomial equation to have an additive identity on one side, the other side is a single polynomial from which we can generate a principal ideal; the reduced ringlet is then just the quotient ringlet of the original by this ideal. Orthodoxy, taking additive completion for granted, thus always expresses such equivalences in terms of an ideal. Conversely, even when we can't reduce our polynomial equation A = B to this form, we can still obtain an ideal from it, namely {polynomial p: p +q +h.A +k.B = q +r.A +s.B for some polynomials q, h, k, r and s}, which is just the set of polynomials that our equivalence collapses down to the additive identity in the derived ringlet. When we do the reduction for several equations at once, we effectively construct an ideal of sums of values from the ideals of the respective individual equations.

I define polynomials in terms of an arithmetic on partial lists of coefficients; as noted above, when considered as functions taking inputs in some ringlet R, it's possible for distinct partial lists to represent functions that agree on every value of R, as input. In such a case, when considering polynomials over R, it is natural to use these polynomial equations – which do hold true at all inputs – to generate an equivalence modulo which to reduce the ringlet of polynomials; it is thus usual to refer to the thus-reduced ringlet of polynomials as the ringlet of polynomials over R; but it is worth noting that the primitive ringlet of polynomials over R, without this reduction, can also be meaningfully studied; in it, there are polynomials that differ, as partial lists of coefficients, yet agree when interpreted as functions from the ringlet. Thus interpreting polynomials as functions on the ringlet serves as a natural case in which to reduce a ringlet of polynomials modulo some equations.

Complex

As an example of reducing a polynomial ring modulo an equation, writing j = power(1), consider polynomials over some ringlet modulo the polynomial equation j.j +2 = 1; this deems any two polynomials equivalent if adding multiples of power(2) +power(0) = j.j +1 to either or both gives equal results. This equation can be expressed in any ringlet (since the only values it uses are the multiplicative identity, which we're guaranteed to have, and the result of adding that to itself, 2), regardless of whether it has additive inverses or even an additive identity. The polynomials over our ringlet then, modulo this equation, form a ringlet in which j.j +1 serves as an additive identity and j.j serves as an additive inverse for 1, whence r.j.j serves as additive inverse for r, for every r in our original ringlet. We can thus write 0 for j.j +1 even if we had no additive identity in our original ringlet; and write −r for r.j.j for each r in our original ringlet; our ringlet of polynomials modulo our equation thus forms a ring, even if the original ringlet (by lacking some additive completions) did not.

With these substitutions, every power(4.n) = power(4.n, j) = power(2.n, j.j) = power(2.n, −1) = power(n, power(2, −1)) = power(n, 1) = 1 modulo our equation; whence power(4.n +1) = power(4.n).j = j, power(4.n +2) = j.j = −1, power(4.n +3) = j.j.j = −j, whence every polynomial reduces to a sum of multiples of (at most) ±1 and ±j; whence the most general polynomial over our ringlet modulo j.j +1 is r +s.j with r and s being any values of the ring obtained from our ringlet by including j.j +1 as 0 and t.j.j, for each t in the ringlet, as an additive inverse for t. The multiplication of such values follows (r +s.j).(x +y.j) = (r.x −s.y) +(r.y +s.x).j and we obtain the complexified ring of our original ringlet. Composing its members after the additive inversion function, on inputs, replaces j with −j, without affecting power(0), so implements conjugation.
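
A sketch of the resulting arithmetic, representing r +s.j as the pair (r, s); Python's built-in negation here plays the part of the additive inverses that the reduction itself supplies:

    def cmul(a, b):
        """Multiply r + s.j by x + y.j modulo j.j + 1."""
        (r, s), (x, y) = a, b
        return (r*x - s*y, r*y + s*x)

    def conjugate(a):
        """Compose after additive inversion of the input: j becomes -j."""
        r, s = a
        return (r, -s)

    assert cmul((0, 1), (0, 1)) == (-1, 0)   # j.j = -1
    assert cmul((1, 2), (3, 4)) == (-5, 10)  # (1 + 2.j).(3 + 4.j)
    assert conjugate((1, 2)) == (1, -2)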

Homogeneous form

While it is usual to interpret a polynomial as a function, reading the list p of coefficients as the function P = sum(p.power) with P(x) = sum(: p(i).power(i, x) ←i :), there is another form that is often useful, the homogeneous polynomial form, in which each term in the polynomial is a product of powers of two variables, such that all terms have the same total power when a single variable is substituted for both. Thus, for coefficients (:p:1+n), we can define P(u, v) = sum(: p(i).power(i, u).power(n −i, v) ←i :); this is a homogeneous polynomial, in u and v, of order n.

We can construe a simple power of one variable, optionally times a scaling, as a homogeneous polynomial in one variable. Considered as a multinomial, a homogeneous polynomial in two variables is a second-level polynomial (in one variable), whose coefficients are first-level polynomials (in the other); and the (second-level) coefficient of each power is a (first-level) homogeneous polynomial such that adding the order of each (first-level, homogeneous) coefficient to its index (the power it tacitly scales) gives a constant, the order of the second-level homogeneous polynomial. Inevitably, we can do this again; if a polynomial's coefficients are homogeneous polynomials and the order of each coefficient plus its index is the same for all, then the result is again a homogeneous polynomial, with that common sum as its order. When construed as a multinomial, each term's sum of the orders of the various variables in it will be the order of the homogeneous multinomial.

Powers of a sum

One case where homogeneous polynomials come up is when we consider powers of a sum: power(n, x +y) can be written as a homogeneous polynomial, in x and y, of order n. If we write out n copies of (x +y) and multiply them, each term in the product selects either x or y from each copy of the sum, so the powers of x and y always add up to n. As long as our multiplication is abelian (x.y = y.x for all x, y), each way of selecting i of the copies in which to pick x, leaving us to pick y in the other n −i, gives us a single power(i, x).power(n −i, y). The number of ways to chose i items from among n is chose(n, i) = n!/i!/(n −i)!, where j! is product(: j −i ←i |j), the result of multiplying together the numbers 1 through j. If we write F(i, j) = chose(i +j, i) = (i +j)!/i!/j!, which is always a whole number for any natural i, j (i.e. (i +j)! is always a multiple of i!.j!), we thus obtain

power(n, x +y) = sum(: chose(n, i).power(i, x).power(n −i, y) ←i |1 +n :) = sum(: F(i, j).power(i, x).power(j, y) ←(i, j); i +j = n :)

This is called the binomial theorem; the F(i, j) and chose(n, i) are known as binomial coefficients. We'll meet another homogeneous form, below, when we look at sums of powers.
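
For illustration, the expansion and a check of it in Python (chose spelt as in the text):

    from math import factorial

    def chose(n, i):
        """chose(n, i) = n!/i!/(n - i)!, the number of ways to pick i items from n."""
        return factorial(n) // factorial(i) // factorial(n - i)

    def power_of_sum(n, x, y):
        """Expand power(n, x + y) as a homogeneous polynomial in x and y."""
        return sum(chose(n, i) * x**i * y**(n - i) for i in range(n + 1))

    assert power_of_sum(5, 3, 4) == (3 + 4)**5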

In particular, the powers of a sum can be used to examine how a polynomial (in its single-variable form) behaves locally, near a specific input; if we have a polynomial P = sum(p.power) for some partial list p of coefficients, its behaviour near a specific input k can be understood in terms of the function (: P(k +x) ←x :), which we can re-write as a polynomial by using the formula above to express each p(n).power(n, k +x) as a sum of terms of form p(n).F(i, j).power(i, k).power(j, x) with i +j = n; grouping together all the resulting terms (from distinct n with p(n) given) with power(j, x) as a factor, we get (: P(k +x) ←x :) = sum(: sum(: p(n).F(n −j, j).power(n −j, k) ←n; n ≥ j :).power(j) ←j :), in which the outer sum is taken over all j for which the inner sum is non-empty, i.e. there is some p(n) with n ≥ j. This, in practice, means we must sum over j from 0 up to the greatest n for which p(n) is non-zero; and, when j is that greatest n, the inner sum, to get the coefficient of highest power, is just p(j).F(0, j).power(0, k) = p(j), so the derived polynomial, representing (: P(k +x) ←x :), has the same order as P and the same coefficient of highest power.
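
A sketch of that re-centring, computed on partial lists of coefficients (dicts again, with a numeric k assumed):

    from math import factorial

    def shift(p, k):
        """Partial list of coefficients representing (: P(k + x) <-x :), given that of P."""
        out = {}
        for n, pn in p.items():
            for j in range(n + 1):
                # F(n - j, j) = n!/(n - j)!/j!
                f = factorial(n) // factorial(n - j) // factorial(j)
                out[j] = out.get(j, 0) + pn * f * k**(n - j)
        return out

    p = {0: 5, 2: 1}                           # P(x) = x.x + 5
    assert shift(p, 3) == {0: 14, 1: 6, 2: 1}  # P(3 + x) = x.x + 6.x + 14
    # same order, same coefficient of highest power, as claimed above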

Note, however, that judicious choice of k may clear some coefficients of lower powers, as in the completing the square technique that solves quadratic equations: a.x.x +b.x +c = a.power(2, x +b/a/2) +c −b.b/a/4, which can only be zero if 2.a.x +b = ±√(b.b −4.a.c), implying the usual formula for the solution of a quadratic.
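
For illustration, that technique as a solver (real coefficients and a non-negative discriminant assumed):

    from math import sqrt

    def quadratic_roots(a, b, c):
        """Solve a.x.x + b.x + c = 0 by completing the square."""
        d = b*b - 4*a*c  # (2.a.x + b) squared must equal this
        return ((-b + sqrt(d)) / (2*a), (-b - sqrt(d)) / (2*a))

    assert quadratic_roots(1, -3, 2) == (2.0, 1.0)  # x.x - 3.x + 2 = (x - 1).(x - 2)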

Simple sum of powers

For any natural n we can form S(n) = sum(: power(i) ←i |n). If we multiply this by power(1), we find S(n).power(1) +power(0) = S(1+n) = power(n) +S(n). When our addition is complete, we can re-write this as S(n).(power(1) −power(0)) = power(n) −power(0). The practical result of this is that, for any x of which we can take and sum powers with 1 as power(0, x), sum(: power(i, x) ←i |n) = (power(n, x) −1)/(x −1) or, in slightly more orthodox notation:

1 +x +x^2 +… +x^(n −1) = (x^n −1)/(x −1)

a result that proves remarkably useful remarkably often. In particular, this lets us sum geometric progressions (sequences in which each entry is the previous entry times some fixed factor); and when applied to naturals (or anywhere else that multiplication isn't guaranteed complete) it tells us that one less than a power of any given x is a multiple of one less than x. When applied to small x (archetypically a real or rational with −1 < x < 1), for which successive powers get steadily smaller and ultimately negligible (i.e. effectively equal to 0), it leads to the limiting case sum(: power(i, x) ←i |{naturals}) = 1/(1 −x); this likewise proves useful in many places.
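
A small numeric check of the finite form (integers, so the division is dodged by multiplying through by x −1):

    def sum_of_powers(n, x):
        """S(n) at x: 1 + x + ... + x**(n - 1)."""
        return sum(x**i for i in range(n))

    x, n = 3, 5
    assert sum_of_powers(n, x) * (x - 1) == x**n - 1
    # one less than any power of x is a multiple of one less than x:
    assert (x**n - 1) % (x - 1) == 0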

Rewriting this with v = x −1, we get sum(: power(i, v +1) ←i |n).v +1 = power(n, v +1), which is more apt to use when subtraction isn't given to work. When writing numerals to base v +1, this tells us that each numeral which just repeats the digit for v some number k of times is just one less than power(k, 1 +v), the power of our base that's the smallest numeral longer than our all-v one.

Gradients of chords

I can use the sum of powers, above, to do a form of algebraic differentiation, obtaining the gradients of polynomials by directly considering the gradients of chords.

First let's restate that sum of powers in a homogeneous form; we have sum(:power|n).(power(1) −power(0)) = power(n) −power(0). If we give it u/v as input and multiply the result by power(n, v), we get the homogeneous polynomial equation

power(n, u) −power(n, v) = (u −v).sum(: power(i, u).power(n −1 −i, v) ←i |n)

Each term in the sum is power(i, u).power(j, v) for some i, j with i +j +1 = n; when u = v, this sum is simply n.power(n −1, u). Now consider any polynomial P = sum(p.power) for some partial list p of coefficients. We can apply the above to write

P(u) −P(v) = (u −v).sum(: p(n).sum(: power(i, u).power(n −1 −i, v) ←i |n) ←n :)

When our coefficients are real, interpreting P as a function from {reals} to itself, we can draw a graph of P in the usual way, with input varying horizontally and output increasing upwards; so the curve representing P has height P(x) at horizontal co-ordinate x for each x; and the straight line from its point at [u, P(u)] to its point at [v, P(v)] has u −v and P(u) −P(v) as its horizontal and vertical displacements, respectively; so its slope is just their ratio, which is the sum of terms scaling (u −v) on the right above.
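
A sketch of that chord-gradient, computed from the homogeneous sum so that it remains defined even when the two end-points coincide:

    def chord_gradient(p, u, v):
        """Gradient of the chord of P = sum(p.power) from input u to input v."""
        return sum(pn * sum(u**i * v**(n - 1 - i) for i in range(n))
                   for n, pn in p.items())

    p = {0: 1, 2: 1, 3: 2}  # P(x) = 2.x.x.x + x.x + 1
    P = lambda x: sum(c * x**n for n, c in p.items())
    u, v = 5, 2
    assert chord_gradient(p, u, v) * (u - v) == P(u) - P(v)
    assert chord_gradient(p, 3, 3) == 2*3 + 3*2*3**2  # sum of n.p(n).power(n - 1, 3)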

Derivative

Now, as noted above, we can always re-write a polynomial in a form that expresses its local variation around one input; from polynomial P, we can infer Q(k) = (: P(k +x) ←x :) as a polynomial in x. When P = sum(p.power), this gives us Q(k) = sum(q(k).power) with each q(k, j) = sum(: p(n).F(n −j, j).power(n −j, k) ←n; n ≥ j :) being a polynomial, derived from P, evaluated at k. Let b = transpose(q) so b(j, k) = q(k, j) and each b(j) is a polynomial. For b(0, k), we get sum(: p(n).F(n, 0).power(n, k) ←n :) = P(k). For b(1), we get sum(: p(n).F(n −1, 1).power(n −1, k) ←n; n ≥ 1 :) and F(n −1, 1) = chose(n, 1) = n!/(n −1)!/1! = n, so this is just b(1, k) = sum(: n.p(n).power(n −1, k) ←n; n ≥ 1 :) which is, indeed, sum(: p(n).sum(: power(i, u).power(n −1 −i, v) ←i |n) ←n :) evaluated at u = k = v.

We must still address the question of whether this formal derivative is in fact a derivative in the sense in which I define differentiation. Since the gradient of a chord is a polynomial in its two end-point inputs, it does give a value when those end-points coincide, even though the formula that justifies its use as a gradient depends on the end-points being distinct; and the continuity of polynomials ensures that, where the polynomial's parameters do vary in a continuum, there are chords with gradients arbitrarily close to this value. In particular, any closed interval (1-simplex) that contains all gradients of chords of P within some interval about a given input must contain (albeit possibly on its boundary) P's chord-gradient polynomial's value when the given input is used for both chord end-points. Consequently, this value is in the intersection of all such closed intervals. Conversely, again due to the continuity of polynomials, when the inputs vary within a continuum, the range of chord-gradient values can be made arbitrarily narrow by selecting a suitably narrow range, about the given input, within which to look at chords; consequently, there are arbitrarily short closed intervals in the intersection, ensuring that the intersection has zero width and so can only contain this one point. Thus the formal derivative that we can infer from the algebra of polynomials is indeed, at least when the input to the polynomial varies within a continuum, the derivative in the analytic sense that can be applied to functions on a continuum more generally.

This enables us to define a formal differentiation on polynomials – applicable even within the formal algebra that regards them as lists of coefficients with a pointwise addition and a slightly eccentric multiplication, without regard to any interpretation as functions of a continuum variable – for which the derivative of a partial list p of coefficients is q = (: (1+n).p(1+n) ←n :); when P = sum(p.power) this does indeed give P' = sum(q.power) when P is understood as a function from a continuum.
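
As a sketch, that formal differentiation on partial lists:

    def derivative(p):
        """Formal derivative of a partial list: entry (1 + n).p(1 + n) at index n."""
        return {n - 1: n * c for n, c in p.items() if n >= 1}

    p = {0: 5, 1: 3, 4: 2}                # P(x) = 2.x**4 + 3.x + 5
    assert derivative(p) == {0: 3, 3: 8}  # P'(x) = 8.x**3 + 3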

Note that this can be applied to a multinomial; combining this with the rearrangements of the multinomial that correspond to permuting the order of inputs to it, when understood as a function, we can apply such differentiation at any layer (i.e. to any of the inputs).

Notice that the rule for differentiation, applied to coefficient lists, has D(f) = (: (1 +n).f(1 +n) ←n :), whence repeat(m, D, f) is (: f(m +n).(m +n)!/n! ←n :), in which (m +n)!/n! = product(: 1 +n +i ←i |m) is the product of the successive indices at which a given f(m +n) sits as each of the m applications of D acts on it. Keeping track of all those factorial factors can be tamed by adopting a different description of polynomials.

Generalising beyond arithmetic

In P = sum(p.power) = (: sum(: p(n).power(n, x) ←n :) ←x :), we canonically understand the input, x, as some value of a ring and each coefficient, p(n), as some value of a module over that ring, allowing us to scale each coefficient by a power of the input to obtain a member of the module; we can then sum the resulting members of the module to get P's output. Since what we do with the input is scale the coefficient, we could replace the input scalar by its associated scaling and use repeat(n) on it instead of power(n), then apply the repeated scaling to the coefficient. Once we do that, there is no actual need for the input to be a scaling, or for the coefficients to be values in a module over a ring; it suffices that we can add the coefficients and that the input maps between such addable values, so that repeating it and applying it to a coefficient gives us something we know how to add. As before, we can then add the results.

Having gone this far, we can look to eliminate even the need for coefficients to be values of an addition: summing the scaled coefficients uses each only as the translation that adds the given scaled coefficient to its input; nominally, we feed one scaled coefficient in as the input to the translation of the next and so on; but, if we're satisfied to get the translation resulting from the sum in place of the sum, we can simply compose the translations arising from the individual terms. At this point, there is no longer any arithmetic going on, only the possibly repeated action of functions and the composition of the results of several such. This slightly adjusts what we want from the input and the results of repeating it; now, rather than the scaling of members of the module, matching the usual form needs the scalings to apply to the translations of the module; but this is easy enough.

So now we get a partial list ({relations}: p :n) as our coefficients and induce from it a mapping

P = (: composite(: repeat(i, x, p(i)) ←i :) ←x :)

in which we think of x as a scaling of translations and each p(i) as a translation, with the output of P then being a translation; however, we can use any relations at all as the entries in p and any relation at all as the input to P – albeit we are likely to get an empty composite if these coefficients and the input don't behave nicely together.
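
To make this concrete, a sketch (with helper names of my own) in which translations are closures over numbers, a scaling turns "add a" into "add x.a", and composing the repeated scalings of the coefficients recovers ordinary polynomial evaluation; recovering a as t(0) assumes numeric translations:

    def translation(a):
        """The translation that adds a to its input."""
        return lambda v: v + a

    def scale_by(x):
        """x as a scaling of translations: 'add a' becomes 'add x.a'."""
        return lambda t: translation(x * t(0))  # t(0) recovers a from 'add a'

    def repeat(n, f, arg):
        """Apply f to arg, n times over."""
        for _ in range(n):
            arg = f(arg)
        return arg

    def P(p, x):
        """Compose the translations repeat(i, x, p(i)) from the partial list p."""
        out = lambda v: v
        for i, t in p.items():
            term = repeat(i, scale_by(x), t)
            out = (lambda f, g: lambda v: f(g(v)))(term, out)
        return out

    p = {0: translation(1), 2: translation(3)}  # coefficients 1 and 3, as translations
    assert P(p, 5)(0) == 1 + 3 * 5**2           # the composite translates 0 by P(5)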

In this formalism, we can even express the derivative, discussed above: when repeat(i, x, p(i)) has a scaling as x and a translation as p(i), the derivative of this (with respect to variation in the scaling) is repeat(i, repeat(i −1, x, p(i))), so the derivative of P is

(: composite(: repeat(1 +i, repeat(i, x, p(1 +i))) ←i :) ←x :)

In so far as we can still think of x as a scaling of each p(i), we can expect repeat(i +1) to commute with repeat(i, x), which puts the derivative in the same form as P but for the partial list (: repeat(i +1, p(i +1)) ←i :n) in place of (:p:n).

Now, our addition of polynomials just adds corresponding coefficients, which now corresponds to composing the translations, so the induced addition on our lists of coefficients is just pointwise composition (in the usual manner, using the entry from one partial list wherever the other has no entry, only actually composing where the two lists have matching entries). We lose the ability to multiply such lists, though, just as we lost it where the coefficients were members of a module, rather than of a ring.

When we now look at the mapping induced from a sum of two partial lists of coefficients, a typical term in its evaluation at x will be repeat(i, x, p(i)&on;q(i)) which, when p(i) and q(i) commute with one another, is the same as repeat(i, x, p(i))&on;repeat(i, x, q(i)); if, furthermore, all coefficients in both partial lists commute with one another, the function induced from the sum is indeed just the composite of the functions induced from each of the partial lists that we added to make it. That won't (reliably) be true unless all coefficients commute – but it is in the canonical case, where they're all translations.


Written by Eddy.