
Inducing Linearity from Addition

I'll describe a collection V as an additive domain if context provides it with a symmetric (i.e. abelian) binary operator (: (V: v+u ←u |V) ←v |V) construed as addition (so + denotes the binary operator).

If V is an additive domain, its addition induces one on {(V:|A)} for any collection A, defined by (V:f|A)+(V:g|A) = (V: f(a)+g(a) ←a |A), and the symmetry of addition in V guarantees that this is also symmetric, making {(V:|A)} an additive domain. Note that A need not be an additive domain. Given (A:h|B), we can compose h before f, g or f+g, obtaining composites in {(V:|B)}, which is likewise an additive domain; (f+g)&on;h maps b to f(h(b)) +g(h(b)) = (f&on;h)(b) +(g&on;h)(b), so (f+g)&on;h = f&on;h +g&on;h.
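The pointwise sum and its behaviour under pre-composition can be sketched in Python, with integer addition standing in for V's addition; all names here are illustrative, not part of the formalism.

```python
# Illustrative sketch: Python functions stand in for mappings, integer
# addition for the addition on V; fn_add and compose are my names.

def fn_add(f, g):
    """Pointwise sum: (f+g)(a) = f(a)+g(a)."""
    return lambda a: f(a) + g(a)

def compose(f, h):
    """The composite f&on;h, mapping b to f(h(b))."""
    return lambda b: f(h(b))

f = lambda a: 2 * a      # (V: f |A)
g = lambda a: a + 1      # (V: g |A)
h = lambda b: 3 * b      # (A: h |B)

lhs = compose(fn_add(f, g), h)                 # (f+g)&on;h
rhs = fn_add(compose(f, h), compose(g, h))     # f&on;h + g&on;h
assert all(lhs(b) == rhs(b) for b in range(10))
```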

Given additive domains U, V, I'll say that (V: f |U) respects addition iff, for all t, u in U, f(t+u) = f(t)+f(u). Note that t+u adds two members of U, so uses the addition on U, while f(t)+f(u) adds two members of V, so uses the addition in V. If (V:f|U) and (W:g|V) both respect addition (U, V and W being additive domains) then (g&on;f)(t+u) is g(f(t+u)) = g(f(t)+f(u)) = g(f(t))+g(f(u)) = (g&on;f)(t) +(g&on;f)(u), so g&on;f respects addition.

Given (V:f|U) which respects addition and (U:g|A), (U:h|A), we can compose f after g, h or (U:g+h|A), producing composites in {(V:|A)}: f&on;(g+h) maps a to f(g(a)+h(a)), from the definition of + on {(U:|A)}, and f respects addition so this is f(g(a)) +f(h(a)), which is just (f&on;g +f&on;h)(a); so f&on;(g+h) = f&on;g +f&on;h.

Given additive domains, U and V, we have an addition defined on {(U:|V)}, some of whose members respect addition in U and V; consider two such, f and g, and their sum, f+g. Since each of f, g respects addition, and addition in U is symmetric, for any t, u in V we have

(f+g)(t+u)
= f(t+u)+g(t+u)
= f(t)+f(u)+g(t)+g(u)
= f(t)+g(t)+f(u)+g(u)
= (f+g)(t) +(f+g)(u)

so f+g also respects addition. Consequently, {(U:f|V): f respects addition} is an additive domain.
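A quick Python check of this closure, with integer scaling maps (which respect integer addition) standing in for f and g; a sketch over sample inputs, not a proof.

```python
# Sample-based check (not a proof) that a sum of addition-respecting
# mappings respects addition; scaling maps on the integers as examples.

def respects_addition(f, samples):
    return all(f(t + u) == f(t) + f(u) for t in samples for u in samples)

f = lambda v: 2 * v           # respects integer addition
g = lambda v: 5 * v           # likewise
s = lambda v: f(v) + g(v)     # the pointwise sum f+g

samples = range(-4, 5)
assert respects_addition(f, samples)
assert respects_addition(g, samples)
assert respects_addition(s, samples)
```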

Natural Scaling

In an additive domain V, we can repeatedly add a vector to itself as a way of implementing scaling by a positive integer. Formally, we define: for each v in V, 1.v = v; for each positive integer n, (1+n).v = v+(n.v). Applying this to a sum, we find 1.(u+v) = u+v = (1.u)+(1.v) and, whenever n.(u+v) = (n.u)+(n.v), (1+n).(u+v) = u+v+(n.u)+(n.v) = u+(n.u)+v+(n.v) = (1+n).u +(1+n).v whence, by induction, n.(u+v) = n.u +n.v for every positive natural n, i.e.
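This repeated-addition scaling, and the induction's conclusion n.(u+v) = n.u +n.v, can be sketched in Python with integer pairs (added componentwise) as a stand-in additive domain:

```python
# Integer pairs, added componentwise, as a stand-in additive domain V.

def vadd(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(n, v):
    """n.v for positive integer n: 1.v = v, (1+n).v = v +(n.v)."""
    out = v
    for _ in range(n - 1):
        out = vadd(out, v)
    return out

u, v = (2, 5), (1, -3)
# The induction's conclusion, n.(u+v) = n.u +n.v, on a sample:
assert scale(4, vadd(u, v)) == vadd(scale(4, u), scale(4, v))
```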

Given (V: f |A) with V an additive domain, so f is in the additive domain {(V:|A)}, we can apply the same construction to obtain scalings of f. Clearly 1.f = f maps a to 1.(f(a)), so consider any positive integer n for which n.f = (V: n.(f(a)) ←a |A) and observe

(1+n).f
= f +n.f
= (V:f(a) +n.(f(a)) ←a |A)
= (V: (1+n).(f(a)) ←a |A)

so scaling f has the same effect as scaling each output of f (as one might expect). [Since a collection is simply a relation which relates x to x precisely if x is a member, a collection is synonymous with the identity mapping on its members; so V is also the name of the identity on V.] In particular, n.V = (V: n.v ←v |V), which respects addition, for each positive integer n; and (n.V)&on;f = (V: n.f(a) ←a |A) = n.f. If A is also an additive domain and f respects addition, n.f = (n.V)&on;f is a composite, each term of which respects addition, hence so does the composite, n.f.

Given (U:f|V) which respects addition, f&on;(1.V) = f&on;V = f = 1.f and, for any positive integer n for which f&on;(n.V) = n.f,

f&on;((1+n).V)
= f&on;(V+(n.V))
= f&on;V +f&on;(n.V)
= f +n.f
= (1+n).f

whence f&on;(n.V) = n.f = (n.U)&on;f: any (U:f|V) which respects addition also respects multiplication by positive integers. In particular, when U = V, f commutes with n.V for each positive integer n. For positive integers n, m, we know that m.V respects addition so we can infer that (n.V)&on;(m.V) = (m.V)&on;(n.V); and a little attention to the definition of multiplication on natural numbers tells us this is (m.n).V.
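The commutation (n.V)&on;(m.V) = (m.V)&on;(n.V) = (m.n).V can be checked on a toy example, again with scaling implemented purely by repeated addition; a sample computation, not a proof.

```python
# Sample check of (n.V)&on;(m.V) = (m.V)&on;(n.V) = (m.n).V, with integer
# scaling done purely by repeated addition.

def scale(n, v):
    """n.v by repeated addition."""
    out = v
    for _ in range(n - 1):
        out = out + v
    return out

n, m, v = 3, 4, 7
assert scale(n, scale(m, v)) == scale(m, scale(n, v)) == scale(m * n, v)
```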

I can now define scalars for V to be those (V:|V) which commute with every (V:|V) that respects addition: scalars clearly include a model of the additive and multiplicative structure of the positive integers.

Rational Scaling

I'll describe an additive domain V as a rational domain if, for every positive integer n, {n.u: u in V} = V – that is, every v in V is n.u for some u in V. In such a case, for any positive integer n, we have reverse(n.V) = (V: v ←n.v |V) as a mapping and can verify that it respects addition. This mapping clearly corresponds to division by n so I'll write it as V/n. Given another positive integer, m, we can compose m.V with the reverse of n.V to obtain m.V/n = (V: m.v ←n.v |V).
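Python's Fraction type is a rational domain in this sense, so it can illustrate V/n and m.V/n; the helper names here are mine, not part of the formalism.

```python
from fractions import Fraction

# Fraction as a rational domain: every v is n.u for some u, so reversing
# the scaling n.V gives a mapping V/n. Helper names are illustrative.

def scale(n, v):
    """n.v by repeated addition."""
    total = Fraction(0)
    for _ in range(n):
        total += v
    return total

def divide(v, n):
    """V/n: sends n.u back to u; Fraction supplies the witness u = v/n."""
    u = v / n
    assert scale(n, u) == v      # u really is the member n scales to v
    return u

v = Fraction(3, 7)
m, n = 5, 2
# m.V/n = (V: m.v ←n.v |V) sends n.v to m.v, i.e. maps w to (m/n).w:
assert divide(scale(m, v), n) == Fraction(m, n) * v
# V/n respects addition:
a, b = Fraction(1, 3), Fraction(2, 5)
assert divide(a + b, n) == divide(a, n) + divide(b, n)
```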

Scaling by positive integers allows us to construct scaling by a broad variety of scalars, among which the naturals have a natural representation, whose behaviour is all ultimately dictated by the addition on scalars and the multiplication induced therefrom – scaling by scalars, i.e. one another. We can discuss division on scalars, inferring the rationals, and obtain polynomials, inferring the algebraic numbers (which are dense in the complex plane, just as rationals are dense in the real line).

From the positive scalars, we infer an ordering: s<s+n for any scalar s and positive scalar n. This is a total order on the rationals (but not on the algebraics). I can define an interval to be any collection, S, of scalars for which, given u, s in S and t a scalar, u<t<s implies t in S, and describe S as a neighbourhood of t in such a case. This enables us to induce a topology on {scalars}, for which polynomials are smooth. It's incidentally enough for a definition of differentiation: and the derivatives of polynomials are polynomials, Derivative(polynomial(f)) = polynomial(: (i+1).f(i+1) ←i :). Note that, since 0 isn't 1+i for any natural i, we thus ignore f(0), the constant coefficient, in the derivative.
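The coefficient-shuffling rule Derivative(polynomial(f)) = polynomial(: (i+1).f(i+1) ←i :) can be sketched with a list of coefficients f(0), f(1), …:

```python
# Coefficient-list sketch: a polynomial is its list of coefficients
# f(0), f(1), ...; derive implements i |-> (i+1).f(i+1).

def derive(f):
    return [(i + 1) * f[i + 1] for i in range(len(f) - 1)]

def evaluate(f, t):
    """polynomial(f) applied to t: sum of f(i).power(i, t)."""
    return sum(c * t**i for i, c in enumerate(f))

# f = 3 + 2.x + 5.x.x has derivative 2 + 10.x; f(0) is ignored, as noted:
assert derive([3, 2, 5]) == [2, 10]
assert evaluate(derive([3, 2, 5]), 2) == 22
```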

Given differentiation on mappings ({addable}:|{scalars}), we can play with differential equations and pose, for example, Df = f, whose solutions are the scalar multiples of exp. Examining DDf +positive.f = polynomial([]), which gives the sinusoids, we can express sinusoids in terms of exp(i.t) ←t, with i a root of the polynomial equation x.x+2 = 1. This, in its turn, introduces π as the minimal positive for which 1+exp(i.π) is an additive identity. We thus have the transcendentals derived from π as well as those from exp(1).


In so far as we know how to add things we equally know how to add lists of (addable) things or indeed general mappings whose outputs are addable. Thus one can add ({addables}: f |I) to ({addables}: g |I) provided that, for each i in I, f(i) and g(i) are addable, to give f+g = (: f(i)+g(i) ←i |I). For any domain D within which a, b in D implies a, b addable, with a+b in D, the constraint becomes fatuous for (D: f |I) +(D: g |I); in such a case I'll describe f as uniform of kind D or of the same kind as g. I'll describe D as additively closed. When I is a natural number, f and g are lists; their entries are members of D.

One can generalise the addition a little: (: f |I) +(: g |J) = (: f(i)+g(i) ←i :) = h, say, with (:h|) being the union of I and J, and the f(i)+g(i) being taken as f(i) when only f accepts i and as g(i) when only g accepts i. This makes it possible to deal with polynomials using only positive coefficients, so that I don't need to mess with equivalence between lists (: F |N) and (: f |n) with (:F:n) = f and i in N implies i in n or F(i) is an additive identity. To deal with negative coefficients, I'll be using polynomial equations (a disguise for a completion operation …).
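This union-domain addition can be sketched with Python dicts standing in for the mappings (: f |I); note that no additive identity is needed, exactly as described:

```python
# Dicts stand in for mappings (: f |I); keys are the inputs accepted.

def union_add(f, g):
    """(: f |I) + (: g |J): on the union of I and J, taking f(i)+g(i)
    where both accept i, else whichever one is defined."""
    out = dict(f)
    for i, v in g.items():
        out[i] = out[i] + v if i in out else v
    return out

# 1 + 2.x plus 3.x + 4.x.x, all with positive coefficients:
assert union_add({0: 1, 1: 2}, {1: 3, 2: 4}) == {0: 1, 1: 5, 2: 4}
```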

So I'll be discussing mappings (D: f :n) for some natural n and additively closed D. I can sum this, giving sum(f) in D. Equally, for any k with (:k|) subsuming (:f|), if we know how to scale members of D by the members of (|k:), we can form (D: f(i).k(i) ←i :n) = f.k, with (:f.k|) the intersection of (:f|) and (:k|). [Contrast with (:f+g|), which was the union of (:f|) and (:g|).] Just as we could sum f or k, so also we can sum f.k.

I'll refer to things by which we know how to scale members of D as scalars: all positive integers are scalars, but we may have other candidates, and we needed {scalars} to subsume (:k|). On {scalars} we have addition and multiplication inferred, for all scalars, by the same construction that let us scale members of D by these scalars. Just as multiply(n,m) is repeat(n, add(m), 0), for natural n, m, we can define power(i,n) to be repeat(i, multiply(n), 1), and this is well-defined for scalar n and natural i.
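The definitions of multiply and power via repeat translate directly; repeat here is an illustrative helper applying a step function i times:

```python
# repeat(i, step, start) applies step to start, i times over.

def repeat(i, step, start):
    x = start
    for _ in range(i):
        x = step(x)
    return x

def multiply(n, m):
    """multiply(n, m) = repeat(n, add(m), 0)."""
    return repeat(n, lambda x: x + m, 0)

def power(i, n):
    """power(i, n) = repeat(i, multiply(n), 1)."""
    return repeat(i, lambda x: multiply(n, x), 1)

assert multiply(3, 4) == 12
assert power(3, 2) == 8      # 2 cubed
assert power(0, 7) == 1      # empty product
```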

The discussion of polynomials relates to examination of the case where k is, for some scalar x, of form (: power(i, x) ←i :{naturals}). Name some scalar polynomials as follows: x(0) maps 1←0, x(1) maps 1←1, x(2) maps 1←2, x(3) maps 1←3, …, with each rejecting all other inputs. We can introduce a multiplication on scalar polynomials induced from x(n+m) = x(n).x(m). Any polynomial can be written as f = sum(: f(i).x(i) ←i :). This is formally extensible to provide an alternative multiplication between ({scalars}: k :) and (D: f :), namely k.f = sum(: sum(: ({k(j).f(i)}: |{i+j}) ←j :(:k|)) ←i :(:f|)), wherein ({k(j).f(i)}: |{i+j}) is the mapping with exactly one input, i+j, whose sole available output is k(j).f(i), so it maps i+j to k(j).f(i). Each such atomic mapping represents a simple polynomial, k(j).f(i).x(i+j), and we sum these over all i, j to get the product f*k of f and k understood as polynomials.
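The sum-of-atomic-mappings product can be sketched with coefficient mappings as Python dicts from index to coefficient (a representation chosen here for illustration):

```python
# Each pair (i, j) contributes an atomic term k(j).f(i) at index i+j,
# and we sum the lot; dicts represent the coefficient mappings.

def star(f, k):
    """The polynomial product f*k of coefficient mappings."""
    out = {}
    for i, fi in f.items():
        for j, kj in k.items():
            out[i + j] = out.get(i + j, 0) + kj * fi
    return out

# (1 + 2.x) * (3 + x) = 3 + 7.x + 2.x.x:
assert star({0: 1, 1: 2}, {0: 3, 1: 1}) == {0: 3, 1: 7, 2: 2}
```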

This enables a multiplication between scalar lists and addable lists, known as polynomial multiplication: the lists involved are referred to as polynomials. Notably, of course, we can multiply pairs of scalar polynomials (since scalars are addable, which is all we asked of D). From the polynomial (D: f :) we can induce a mapping (D: sum(D: power(i,t).f(i) ←i :(:f|)) ←t :{scalars}): I'll call this polynomial(f). You can then show that polynomial(k).polynomial(f) = polynomial(k*f), where, for ({scalars}:K:) = polynomial(k) and (D:F:) = polynomial(f), K.F means (: K(t).F(t) ←t :), with (:K.F|) the intersection of (:K|) and (:F|). Since we can decompose any polynomial function as a sum of constant.power(natural) polynomials, polynomial turns * into a faithful representation, for lists of coefficients, of the usual pointwise multiplication of polynomials.
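A sample check that polynomial sends the * of coefficient mappings to pointwise multiplication of the induced functions; the convolution is restated so the sketch stands alone:

```python
# Check, on samples, that polynomial(k*f)(t) = polynomial(k)(t).polynomial(f)(t);
# coefficients are dicts from index to coefficient.

def star(f, k):
    """The polynomial product f*k, as a convolution of coefficients."""
    out = {}
    for i, fi in f.items():
        for j, kj in k.items():
            out[i + j] = out.get(i + j, 0) + kj * fi
    return out

def polynomial(f):
    """The mapping t |-> sum of f(i).power(i, t)."""
    return lambda t: sum(c * t**i for i, c in f.items())

f = {0: 1, 2: 4}          # 1 + 4.x.x
k = {1: 3}                # 3.x
assert all(polynomial(star(f, k))(t) == polynomial(f)(t) * polynomial(k)(t)
           for t in range(-5, 6))
```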

Once we have polynomials, we can form equations in them, with f = g serving as an expression of the question: for which scalars x are polynomial(f, x) and polynomial(g, x) equal? One of the ways we can extend {scalars} is by constructing solutions to such equations. Solving a.x = b, given a and b, may give no solutions – a and b aren't parallel – or it will give (provided our multiplication is left-cancellable) only one. Compare with the following section, where x.n = a gets solved, with x varying in D, a given in D and n some given scalar: the same technique used there may be applied to a.x = b and, when D is {scalars}, to a +1.x = b, giving additive completion.


Given even just addition, we can address questions like: for which addables, e.g. a, and which positive integers, e.g. n, is there some addable t with n.t = a ? I can phrase that as: when can I divide an addable by a positive integer ? So take some domain D on which we have addition defined, with {c+d: c, d in D} subsumed by D (so it's closed under addition, hence under positive integer scaling). Consider the space D×{positive integers} of pairs, [d,n], with left member d in D and right member, n, a positive integer; within this, we can form, for each d in D and positive integer n, div(n,d) = {[c,m]: c.n = d.m}. Notice that if there is a d/n member of D, then [d/n,1] is in this collection; the defining equation can be re-arranged as d/n = c/m. We can now define an addition on E = {div(n,d): d in D, n a positive integer}, induced from div(n,d) +div(m,c) = div(n.m, m.d+n.c), and observe that div(1,d) +div(1,c) = div(1,d+c), so that D is faithfully embedded in E by div(1). The multiplicative action of the naturals on D is then faithfully represented and satisfies m.div(m.n,d) = div(n,d) = div(m.n, m.d): so if D does provide a d/n, div(n,d) will be div(1,d/n). In E we definitely can do division by an arbitrary positive integer, using div(n,d) / m = div(n.m, d), in particular giving us div(1,d) / m = div(m,d).
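The div construction can be sketched over the positive integers as D, with pairs [c, m] as Python tuples; equality of classes is tested via the defining equation rather than by building the classes themselves:

```python
# D = positive integers; div(n, d) is represented by the pair (n, d),
# and class equality is tested via the defining equation d/n = c/m.

def div_eq(nd, mc):
    (n, d), (m, c) = nd, mc
    return d * m == c * n

def div_add(nd, mc):
    """div(n,d) +div(m,c) = div(n.m, m.d+n.c)."""
    (n, d), (m, c) = nd, mc
    return (n * m, m * d + n * c)

# div(1) embeds D faithfully: div(1,2) + div(1,3) = div(1,5)
assert div_eq(div_add((1, 2), (1, 3)), (1, 5))
# and 1/2 + 1/3 = 5/6, in lowest or any other terms:
assert div_eq(div_add((2, 1), (3, 1)), (12, 10))
```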

Now, what I've just done to construct division out of addition is an example of a general mechanism of completion. I can equally apply a similar construction to obtain additive completion, C = {sub(d,c): c, d in D} with sub(d,c) = {[b,a]: b+c=a+d} and sub(d,c)+sub(b,a) = sub(d+b,c+a), embedding D using sub(d,d+d) ←d; sub(a,a), for any a in D, then serves as a natural additive identity, and sub(c,b) ←sub(b,c) as additive inversion. An essentially identical construction can be used to obtain an additively complete version of E (which is positive-integer-scaling complete, as it were).
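The sub construction admits the same treatment, with pairs over the naturals; under the reading consistent with the embedding above, in which sub(d,c) stands for c−d, the identity and inversion check out on samples:

```python
# Pairs (d, c) over the naturals represent sub(d, c), read as c-d; the
# class equality test comes straight from b+c = a+d.

def sub_eq(dc, ba):
    (d, c), (b, a) = dc, ba
    return b + c == a + d

def sub_add(dc, ba):
    """sub(d,c) +sub(b,a) = sub(d+b, c+a)."""
    (d, c), (b, a) = dc, ba
    return (d + b, c + a)

def sub_neg(bc):
    """Additive inversion: sub(c,b) <- sub(b,c)."""
    b, c = bc
    return (c, b)

x = (2, 7)                                       # represents 5
assert sub_eq(sub_add(x, sub_neg(x)), (3, 3))    # x + (-x) is sub(a,a)
assert sub_eq((2, 4), (5, 7))                    # both represent 2
```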

Both of the completions above are variants on polynomial completion.

Written by Eddy.