In any context that defines multiplication, its bulk action is called product and I define the mapping so that power(n, v) is what orthodoxy writes v^n (where I'm reserving superscripts for their use in the tensor notation). The power(n, v) for diverse naturals n are known as powers of v; and I'll describe the mapping power(n) as a power. When power(n, x) is described as an expression, n is known as the exponent and x as the base.

(Note that the exponent is the first parameter to power, in contrast to various programming languages whose standard libraries include a pow(v, n) function to compute power(n, v); my transpose(power) is their pow.)
Below, I extend power to accept various values other than naturals as exponents; but we start with its restriction to natural exponents. Functions that vary with their input the way power(n, x) varies with its exponent, n, for fixed base, x, get described as exponential, while those that vary with their input as power(n, x) varies with its base, for fixed exponent, are called powers.
We'll shortly come to the decisive properties that characterise these. I'll refer to the power(n) for n natural as natural powers to distinguish these definitive powers; in general, adjectives applied to power (used to mean an output of power) tacitly transfer to the input from which power produced it; so (for example) an even power is power(n) for some even n.
Each power(n), when applied to the values of an abelian multiplication, is an automorphism of that multiplication, since power(n, x.y) = power(n, x).power(n, y). Thus power maps the naturals to the space of automorphisms of any given abelian multiplication. Even for non-abelian multiplications, power maps each natural to a mapping from and to the multiplication's collection of values. Mappings that have values of the multiplication as outputs are amenable to pointwise multiplication, for which (f.g)(x) = f(x).g(x); so now let us look at power(n).power(m).
I define natural arithmetic in terms of repetition and bulk action in terms of lists; as a result, a product of outputs of product is the product of the concatenation of the input lists, with length equal to the sum of the lengths of the inputs. Thus product({v}: |n).product({v}: |m) = product({v}: |n+m) so power(n).power(m) = power(n +m); and product({power(n, v)}: |m) = product({v}: |n.m) so power(n)&on;power(m) = power(n.m). The first of these results says that power is a homomorphism from the addition on naturals to the pointwise multiplication on functions (into any given collection of values we know how to multiply). When the multiplication each output of power acts on is abelian, the second makes power a homomorphism from the arithmetic on the naturals to the ringlet of automorphisms of the abelian multiplication.
As power gives us a homomorphism from naturals to functions that output values of any multiplication, so also is the result of evaluating each power at a single value of the multiplication: for any given x, power(n +m, x) = power(n, x).power(m, x) makes transpose(power, x) = (: power(n, x) ←n :) a homomorphism from the addition on the naturals to whatever multiplication we're using for x and its powers. We can apply this with x itself natural, to obtain a homomorphism from natural addition to natural multiplication. We can equally use any real x to embed natural addition in real multiplication.
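As a concrete illustration of these identities, here is a minimal Python sketch (not part of the formal development; the helper power here simply realises the bulk action as the product of a list of repeated copies):

    from math import prod

    def power(n, v):
        # Bulk action of the multiplication: the product of a list of n copies of v;
        # the empty product is 1, so power(0, v) = 1.
        return prod([v] * n)

    x, y, n, m = 3, 5, 4, 6
    # For an abelian multiplication, each power(n) respects the multiplication:
    assert power(n, x * y) == power(n, x) * power(n, y)
    # Pointwise product of powers: power(n).power(m) = power(n +m)
    assert power(n, x) * power(m, x) == power(n + m, x)
    # Composition of powers: power(n)&on;power(m) = power(n.m)
    assert power(n, power(m, x)) == power(n * m, x)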
So the specific properties that characterise exponential functions and powers are: an exponential function is a homomorphism from an addition to a multiplication, e(x +y) = e(x).e(y), as (: power(n, x) ←n :) is for any fixed x; a power is a multiplicative mapping, p(x.y) = p(x).p(y), as each power(n) is on the values of an abelian multiplication.
We can apply any positive natural power to any multiplication. Let V be the collection of values of the multiplication and n be our positive natural: we get (V: power(n) |V). Given a function (V: f |V): for any c in V, we can scale f to get c.f = (: c.f(x) ←x :); or, given another (V: g |V), we can define a pointwise product f.g = (V: f(x).g(x) ←x |V); this seeds the space of mappings (V: |V) with powers that we can combine pointwise (which works out rather trivial, as (c.power(n)).(k.power(m)) = (c.k).power(n +m), but gets more interesting once we get some other way of combining values).
When we also have an addition, we can apply it pointwise to functions (V: |V), enabling us to sum scaled powers. Such a sum is known as a polynomial when finite; otherwise, as a power series.
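For instance, here is a small Python sketch of these pointwise combinations (the names scale, ptwise_mul and ptwise_add are illustrative only):

    from functools import reduce

    def power(n):
        return lambda x: x ** n            # power(n) as a mapping

    def scale(c, f):
        return lambda x: c * f(x)          # c.f = (: c.f(x) ←x :)

    def ptwise_mul(f, g):
        return lambda x: f(x) * g(x)       # (f.g)(x) = f(x).g(x)

    def ptwise_add(f, g):
        return lambda x: f(x) + g(x)       # pointwise addition

    # (c.power(n)).(k.power(m)) = (c.k).power(n +m):
    f = ptwise_mul(scale(2, power(3)), scale(5, power(4)))
    assert f(7) == 10 * 7 ** 7

    # A polynomial is a finite sum of scaled powers, here 1 +2.x +3.power(2, x):
    poly = reduce(ptwise_add, (scale(c, power(n)) for n, c in enumerate((1, 2, 3))))
    assert poly(10) == 321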
Observe that power(n +m, x) = power(n, x).power(m, x), taken together with power(1) being the identity on the values of any multiplication with which we use it, suffices to determine power entirely on naturals. Let's now see what these rules imply, if we apply them with integers for n, m. When x has a multiplicative inverse, 1/x, we can inductively show power(n, 1/x) = 1/power(n, x) for every positive natural n. For integers n, m, we have m = (n +m) −n, so power(n +m, x)/power(n, x) = power(m, x) = power(n +m, x).power(−n, x) tells us that power(−n, x) = 1/power(n, x) = power(n, 1/x). Indeed, in particular, power(−1, x) = 1/x and our other arithmetic rule for powers, power(m)&on;power(n) = power(m.n), then gives us power(−n) = power(n)&on;power(−1) = (: power(n, 1/x) ←x :). We can use this to extend power from the naturals to the result of completing natural addition with an identity and inverses, i.e. the integers. In particular, we obtain power(0, x) = 1, at least for all invertible x; and no inconsistency arises from extending this definition to apply to all values of any multiplication that does have an identity, even if those values are not all invertible.
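These rules are easy to check numerically; here is a Python sketch using exact rational arithmetic (ipower is just an illustrative name):

    from fractions import Fraction

    def ipower(n, x):
        # power extended to integer exponents: power(-n, x) = power(n, 1/x).
        if n < 0:
            return ipower(-n, 1 / x)
        out = Fraction(1)                  # power(0, x) = 1, the multiplicative identity
        for _ in range(n):
            out *= x
        return out

    x = Fraction(3, 7)
    assert ipower(-4, x) == 1 / ipower(4, x) == ipower(4, 1 / x)
    assert ipower(5, x) * ipower(-5, x) == ipower(5 + (-5), x) == 1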
So much for additive completion. Now let's exercise power(n.m) = power(n)&on;power(m) to perform multiplicative completion. The way I define ratios of positive naturals is to construe the composite reverse(repeat(n))&on;repeat(m) as repeat(m/n); and multiply(power(n, x)) is just repeat(n, multiply(x)), so let us look at reverse(power(n))&on;power(m) as the implied meaning for power(m/n).
On naturals, and indeed on rationals, reverse(power(n)) is only patchily defined; not every natural (or rational) is a right value of it (i.e. an output of power(n)). Likewise, as not all values are outputs of power(n), the outputs of power(m) aren't all outputs of power(n), so composing reverse(power(n))&on;power(m) isn't clean; some outputs of power(m) aren't inputs to reverse(power(n)). None the less, power(n) has infinitely many outputs, each of which, as an input to power(m), produces an output that reverse(power(n)) does accept, so that even (: power(m, x) ←power(n, x) :{naturals}) has infinitely many atoms for any two positive naturals n, m; and is subsumed by reverse(power(n))&on;power(m), which thus depends (for exactly the same reasons that I used when deriving ratios) on n, m via their ratio m/n. We can thus think of it as power(m/n) = reverse(power(n))&on;power(m).
This extension to rationals is generally applicable to any multiplication; in so far as any outputs of power(n) are also outputs of power(m), the relation reverse(power(n))&on;power(m) is defined; at least when it is a mapping, it makes sense to think of it as power(m/n). Even with the reals, even powers aren't monic, giving rise to a ± of ambiguity when we reverse them; general multiplications typically take this further. For example, in the complex plane, each power(n) maps n distinct values to any given non-zero output, making rational powers always ambiguous; this is the typical case for general multiplications. However, {positives} is helpfully well-behaved and every ringlet contains an image of at least part of it.
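A small Python sketch of both points: on the positive reals the composite reverse(power(n))&on;power(m) is single-valued and agrees with the usual x ** (m/n), while in the complex plane reversing power(n) offers n candidate values:

    import cmath, math

    m, n, x = 3, 4, 2.5
    # On the positive reals, power(m/n, x) = reverse(power(n))&on;power(m) applied to x:
    assert math.isclose((x ** m) ** (1 / n), x ** (m / n))

    # In the complex plane, power(n) maps n distinct values to any non-zero output,
    # so reversing it offers n candidates: the n-th roots of w.
    w = 5 + 2j
    roots = [abs(w) ** (1 / n) * cmath.exp(1j * (cmath.phase(w) + 2 * math.pi * k) / n)
             for k in range(n)]
    assert len(set(round(r.real, 9) + round(r.imag, 9) * 1j for r in roots)) == n
    assert all(cmath.isclose(r ** n, w) for r in roots)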
When we consider the same functions on the positive reals, each power(n) for positive n is monotonically increasing, continuous and iso ({positives}| |{positives}), making each invertible, so reverse(power(n)) works entirely sensibly as power(1/n), making each positive rational power an iso likewise. Furthermore, since {positives} has a greatest lower bound for any subset of it with any lower bound in it, we can even extend power to accept any positive real exponent, as follows. For positive real p and real x ≥ 1, define power(p, x) to be the greatest lower bound of {power(r, x): rational r ≥ p}; since power(r, x) is a lower bound for the given set whenever rational r ≤ p (rational powers of x ≥ 1 increasing with their exponent), this does define power(p, x); for positive x < 1, take power(p, x) = 1/power(p, 1/x). We are thus able to extend the definition of power to any positive real exponent, at least when applied to positive real bases.
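Here is a numerical sketch of that construction (Python; power_real is an illustrative name): for x ≥ 1 it approaches p from above by dyadic rationals, using only natural powers and square roots (i.e. power(1/2) = reverse(power(2))).

    import math

    def power_real(p, x, bits=50):
        # Approximate power(p, x) for real p > 0 and x >= 1: p is approached from
        # above by dyadic rationals r >= p, and power(r, x) shrinks onto the answer.
        assert x >= 1 and p > 0
        whole, frac = int(p), p - int(p)
        result = x ** whole                # natural-exponent part
        root = x                           # will hold x ** (1 / 2 ** k)
        for _ in range(bits):
            root = math.sqrt(root)
            frac *= 2
            if frac >= 1:                  # this dyadic digit of p is 1
                result *= root
                frac -= 1
        return result * root               # one extra factor rounds the exponent up, so r >= p

    assert math.isclose(power_real(math.sqrt(2), 3.0), 3.0 ** math.sqrt(2), rel_tol=1e-9)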
The reverse of an exponential function, when it is a mapping, is a logarithmic function; and every logarithmic function is a scaled version of the natural logarithm, ({reals}| ln |{positives}). This is strictly monotonic and increasing, with ln(1) = 0, so its reverse is indeed a function, hence a strictly monotonically increasing exponential function ({positives}| |{reals}), (fairly) conventionally named exp, with exp(0) = 1.
With exp = reverse(ln) and ln both strictly monotonically increasing, exp&on;ln is an identity, with constant derivative 1. So the chain rule tells us 1 = exp'(ln(x)).ln'(x) = exp'(ln(x))/x, so exp'(ln(x)) = x = exp(ln(x)), for all positive x, so exp'&on;ln = exp&on;ln with ln iso ({reals}| |{positives}) and the composition clean, whence exp' = exp is its own derivative. Thus exp(a +b) = exp(a).exp(b) and exp(n.a) = power(n, exp(a)) when n is a right value of power; which we can rearrange to power(n, x) = exp(n.ln(x)) as a general means to extend the definition of power to accept arbitrary real exponent n, with power(n) then defined on at least {positives}. Since both exp and ln are continuous, this specification of power(n, x) is necessarily continuous in each of n, x; whence (as it agrees for natural exponents) it necessarily agrees with the approach sketched above.
We thus extend power to be a ringlet-isomorphism from the field of real arithmetic to automorphisms of the multiplication on {positives}.
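A quick Python check of these relations (math.exp and math.log standing in for exp and ln):

    import math

    x, n = 2.7, 3.4                        # positive base, arbitrary real exponent
    assert math.isclose(math.exp(n * math.log(x)), x ** n)

    a, b = 1.25, -0.5
    # exp is a homomorphism from addition to multiplication:
    assert math.isclose(math.exp(a + b), math.exp(a) * math.exp(b))
    # and is (numerically) its own derivative:
    h = 1e-8
    assert math.isclose((math.exp(a + h) - math.exp(a)) / h, math.exp(a), rel_tol=1e-6)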
Factorial grows faster than any exponential; when an exponential grows, it does so faster than any polynomial. That is, if we divide an exponential, pointwise, by factorial, or if we divide a polynomial pointwise by a growing exponential, in each case making a sequence of this ratio evaluated at natural inputs, the sequence converges to zero.
Before establishing that, let's first set up a few comparisons. Observe that, among naturals, n > m implies n ≥ m+1; thus for any positive natural n, we have n ≥ 1; which in turn leads to 2.n = n+n ≥ n+1. Also, power(0, 2) = 1 > 0 and, whenever power(n, 2) > n, power(n+1, 2) = 2.power(n, 2) ≥ 2.(n +1) ≥ n+1 +1 > n+1 so, inductively, power(n, 2) > n for every natural n. This, in turn, gives power(i.n, 2) > power(i, n) for every positive natural i (since power(i) is then increasing).
I've characterised an exponential as a homomorphism from addition to multiplication, a mapping e for which e(x +y) = e(x).e(y); when it comes to evaluating this at naturals, we get e(n) = power(n, e(1)). That grows, with increasing n, precisely if e(1) > 1. In that case, e(1) −1 is positive and so has a positive multiplicative inverse 1/(e(1) −1) and, as {naturals} is unbounded in {positives}, there is some natural k > 1/(e(1) −1) whence, by rearrangement, e(1) > 1 +1/k; from which, applying the binomial formula for some natural i > 0 (using F(r, s) for (r+s)!/r!/s!), power(i, e(1)) > power(i, 1 +1/k) = sum(: F(i −s, s).power(s, 1/k) ←s |1+i),
each term in whose sum is some natural, i!/s!/(i −s)!, times a power of 1/k; the result is necessarily positive. From s = 0, we get F(i −0, 0).power(0, 1/k) = 1; from s = 1 we get F(i −1, 1).power(1, 1/k) = i!/1!/(i−1)!/k = i/k; all other terms are positive; so, for i > k > 1/(e(1) −1), we find power(i, e(1)) > 1 +i/k and thus, for each positive natural j, power(j.k, e(1)) > 1 +j and so, for every natural, there is some power of e(1) greater than the given natural; as the powers of e(1) are the outputs of e at naturals and the naturals are unbounded in positives, no increasing exponential's outputs have an upper bound in {positives}.
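A numerical check of this bound, for one concrete choice (Python sketch; math.comb(i, s) plays the part of F(i −s, s) = i!/s!/(i −s)!):

    from math import comb
    from fractions import Fraction

    e1 = Fraction(7, 5)                    # some e(1) > 1
    k = 3                                  # a natural k > 1/(e(1) - 1) = 5/2
    assert e1 > 1 + Fraction(1, k)
    for i in range(k + 1, 20):
        # the binomial formula: power(i, 1 + 1/k) = sum of F(i - s, s).power(s, 1/k)
        expansion = sum(comb(i, s) * Fraction(1, k) ** s for s in range(i + 1))
        assert expansion == (1 + Fraction(1, k)) ** i
        # so powers of e(1) > 1 + 1/k eventually exceed any 1 + i/k:
        assert e1 ** i > expansion > 1 + Fraction(i, k)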
I'm describing a function as logarithmic precisely if it's a homomorphism from a multiplication to an addition; composing it after the multiplication's inversion necessarily also gives a homomorphism, indeed the pointwise additive inverse of the original (cf. log(x) = −log(1/x), above). In so far as the multiplication is complete (i.e. has inverses), the homomorphism's outputs are closed under additive inversion. Given two positive real inputs that such a homomorphism, h, maps to different values, we can write the smaller input as x and the larger as k.x for some k > 1; then h(k.x) = h(k) +h(x) and, as the outputs are distinct, h(k) isn't an additive identity.
Consider {reals} as a linear space over {rationals}; select any linearly-independent ({reals}: f :); then span(f) is a subspace of {reals}, hence closed under addition, so addition's restriction to span(f) is a group and homomorphisms from it to a multiplication are exponential. For any ({positives}: g |f), we can extend g to an exponential ({positives}: h |span(f)) by mapping sum(x.f), for each ({rationals}: x :(:f|)), to product(: power(x(i), g(f(i))) ←i :). For many pathological choices of g, the result shall be everywhere discontinuous and we'll be unable to say anything helpful about its general behaviour; none the less it is (by construction) a homomorphism, hence exponential in the sense given. We can do similar in reverse to get an everywhere discontinuous logarithmic ({reals}: :{positives}). So we shall be interested only in continuous logarithmic and exponential functions!
In particular, for any increasing exponential e, there's some t for which e(t) > 2 and thus, for each positive natural n, e(n.t) > power(n, 2) ≥ n. (Note that power(0, 2) = 1 > 0 and 2.p = p +p > p for every positive natural p; so a positive natural n with power(n, 2) > n gives us power(n +1, 2) > 2.n ≥ n +1.)
Now compare such an exponential to power(m), for any natural m, by division. We get the sequence (: power(m, n)/e(n) ←n :{naturals}), whose restriction to all large enough naturals is all that matters for its convergence. Now, for each n there is necessarily some i for which e(i) > n, so for n > i.m, we get power(m, n)/e(n) = power(m, n/e(i))/e(n −i.m) with n/e(i) < 1, so that power(m, n/e(i)) ≤ 1 and our ratio is at most 1/e(n −i.m), which (as e increases without bound) we can make as small as we please by taking n large enough; so the sequence does indeed converge to zero.
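Finally, a numerical illustration of the growth claims this section set out to establish (Python sketch; e(n) = power(n, 1.5) serves as the increasing exponential, power(3) as the polynomial):

    from math import factorial

    e1, m = 1.5, 3                         # e(n) = power(n, 1.5) is an increasing exponential
    poly_over_exp = [n ** m / e1 ** n for n in range(1, 101)]
    exp_over_factorial = [e1 ** n / factorial(n) for n in range(1, 101)]

    # Both ratio sequences dwindle towards zero:
    assert all(poly_over_exp[j] < poly_over_exp[j - 1] for j in range(60, 100))
    assert poly_over_exp[-1] < 1e-10 and exp_over_factorial[-1] < 1e-100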
Written by Eddy.