(^: Well, the site's name *is* chaos, right ? So I
should at least say something about it ;^)

I've found quite an interesting article on the subject elsewhere: Chaos and Fractals in Financial Markets, by J. Orlin Grabbe, which'll have to do for now, until I've done some messing about of my own.

Grabbe mentions the non-linear system x(1+n) = 4.(1−x(n)).x(n), known as the logistic equation. To find its stable points, where x(1+n) = x(n), let s be the stable value and observe that we have s = 4.s.(1−s) or 0 = 4.s.s −3.s = s.(4.s −3), from which we can infer that either s = 0 or 4.s = 3. Thus the system has two stable points, 0 and 3/4. However, the equation has the curious property that very small differences in input value, x(0), produce very different subsequent trajectories – i.e. sequences of values x(1), x(2), …
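Here's a quick numerical sketch of that sensitivity (in Python; my own experiment, not anything from Grabbe's article – the starting value 0.3 is arbitrary): two trajectories whose starting values differ by one part in ten thousand million soon bear no resemblance to one another.

```python
# Iterate the logistic equation x(1+n) = 4.(1-x(n)).x(n) from two
# minutely different starting values and watch them part company.
def logistic(x):
    return 4 * (1 - x) * x

x, y = 0.3, 0.3 + 1e-10   # an arbitrary start, and a tiny perturbation of it
gap = []
for n in range(60):
    x, y = logistic(x), logistic(y)
    gap.append(abs(y - x))

print(max(gap))   # the two trajectories end up macroscopically far apart
```

Most starting values between 0 and 1 behave likewise.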

None the less, each x(1+n) is a smooth function – indeed, a mere quadratic – of its predecessor, x(n). This makes it possible to examine how small perturbations grow. If two sequences, y and x, follow the same rule but start with minutely different values, i.e. y(0)−x(0) is small, we can examine how the difference between them grows. Indeed, let e = y−x and observe that:

- y(1+n)
- = 4.(1−y(n)).y(n) = 4.(1−e(n)−x(n)).(x(n)+e(n))
- = 4.(1−x(n)).x(n) −4.e(n).(x(n)+e(n)) +4.e(n).(1−x(n))
- = x(1+n) +4.e(n) −8.e(n).x(n) −4.e(n).e(n), whence
- e(1+n)
- = y(1+n) − x(1+n)
- = 4.e(n).(1 −2.x(n) −e(n))
- = 4.e(n).(1 −x(n) −y(n))
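That last form is exact, not merely approximate, so it's easy to check against direct iteration (a small sketch of my own, with arbitrarily chosen starting values):

```python
# Verify e(1+n) = 4.e(n).(1 - x(n) - y(n)) against direct iteration
# of y(1+n) = 4.(1-y(n)).y(n), and likewise for x.
def logistic(x):
    return 4 * (1 - x) * x

x, y = 0.3, 0.31
e = y - x
for n in range(20):
    predicted = 4 * e * (1 - x - y)   # the formula just derived
    x, y = logistic(x), logistic(y)
    assert abs((y - x) - predicted) < 1e-12
    e = y - x
print("recurrence confirmed")
```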

Thus if x(0) is 3/4 (in which case x(n) is 3/4 for all n) and y(0) is close to it, so that e(0) is small, we find that each e(n), among the first few at least, is approximately twice as big as, but opposite in sign to, its predecessor; 4.(1−2.x(n)) = 4.(1 −3/2) = −2. Thus the y(n) values bounce to-and-fro around 3/4, getting further away at an exponential rate. Likewise, if x(0) is 0 (in which case all x(n) are 0), e(n) starts out growing by a factor of 4 at each iteration, so y(n) grows exponentially away from zero, on whichever side it started out.
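The factor of −2 near 3/4 is easy to see in practice (a minimal sketch; the size of the initial perturbation is an arbitrary choice):

```python
# Start just off the fixed point 3/4; the error e(n) roughly doubles
# at each step, flipping sign as it goes. By the recurrence above,
# with x(n) = 3/4, the exact ratio is 4.(1 - 3/4 - 3/4 - e) = -2 - 4.e.
def f(s):
    return 4 * (1 - s) * s

y, e = 0.75 + 1e-6, 1e-6
ratios = []
for n in range(5):
    y = f(y)
    ratios.append((y - 0.75) / e)
    e = y - 0.75

print(ratios)   # each entry hovers close to -2 while e stays small
```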

Indeed, whenever an iterator x(1+n) = f(x(n)) with f differentiable has a
fixed point k = f(k), then whenever x(n) is close to k we have x(1+n) −k =
f(x(n)) −f(k), which is approximately f'(k).(x(n)−k); so the distance
of successive terms from k grows, or shrinks, exponentially by the factor
f'(k), at least while those
distances are small. For our iterator above, f = (: 4.(1−s).s ←s :)
has k=3/4 and k=0 as fixed points with f' = (: 4.(1−2.s) ←s :) and
f'(3/4) = −2, f'(0) = 4. Thus small deviations from *either*
fixed point will *grow* (since both 4 and −2 are bigger, in
magnitude, than 1): both fixed points are unstable.
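In Python, that stability test reads (a sketch; the derivative is simply computed from the formula for f'):

```python
# A fixed point k = f(k) attracts nearby values if |f'(k)| < 1 and
# repels them if |f'(k)| > 1.
def f(s):
    return 4 * (1 - s) * s

def f_prime(s):          # derivative of 4.(1-s).s
    return 4 * (1 - 2 * s)

for k in (0.0, 0.75):
    assert f(k) == k     # each really is a fixed point
    verdict = "unstable" if abs(f_prime(k)) > 1 else "stable"
    print(k, f_prime(k), verdict)
```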

We might, in such a case, hope to find some stable *trajectories*,
i.e. periodic solutions, where x(m) = x(0) for some m, whence x(m+n) = x(n)
for all n. To study such a solution, we simply replace the iteration function, f,
with repeat(m,f) to get a new iterator of which x(0) is a fixed point; as
before, we would then examine whether x(0) is a stable fixed point.
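In code, repeat is just m-fold composition; a hypothetical rendering (the name repeat follows the text's notation):

```python
def f(s):
    return 4 * (1 - s) * s

def repeat(m, f, x):
    # Apply f to x, m times over: repeat(3, f, x) is f(f(f(x))).
    for _ in range(m):
        x = f(x)
    return x

# Any fixed point of f is trivially a fixed point of repeat(m, f); a
# genuine period-m orbit gives repeat(m, f) fixed points that f lacks.
print(repeat(2, f, 0.75))   # prints 0.75: fixed points stay put
```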

One might expect that this would imply that every solution diverges – as n tends to infinity, so does x(n). Certainly for x(n) < 0, 4.(1−x(n)) > 4 so each successive x(1+n) is at least 4 times as big as the preceding x(n), and still negative; so a sequence, once negative, will diverge. An x(n) > 1 will imply x(1+n) <0, so any solution which ever strays outside the range from 0 to 1 will diverge. However, between 0 and 1, f = (: 4.(1−s).s ←s :) falls between 0 and 1; as s increases from 0 to 1/2, f(s) increases from f(0) = 0 to its maximum, f(1/2) = 1; as s increases from 1/2 to 1, f(s) decreases back down to f(1) = 0. Along the way, it takes every value strictly between 0 and 1 exactly twice, once on the way up and once on the way down. We can thus conclude that any trajectory starting between 0 and 1 (inclusive) will always remain in this interval.
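A sketch of both behaviours (starting values chosen arbitrarily, one inside the interval and one just outside):

```python
# Inside [0, 1] the iteration is trapped; outside, it runs away to
# minus infinity, faster and faster.
def f(s):
    return 4 * (1 - s) * s

x = 0.3
for _ in range(1000):
    x = f(x)
    assert 0 <= x <= 1   # never escapes the interval

x = 1.01                 # start just outside
for _ in range(10):
    x = f(x)
print(x)                 # already huge and negative
```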

For any given value 0 ≤ v < 1, we have two solutions to f(s) = v, i.e. 0 = v −4.s +4.s.s = (2.s−1).(2.s−1) +v−1, namely s = (1±sqrt(1−v))/2. These are symmetrically placed on either side of 1/2. If v > 3/4, sqrt(1−v) < 1/2 so the two solutions lie between 1/4 and 3/4; the closer v is to 1, the closer the two solutions are to 1/2. If v < 3/4, one solution lies above 3/4, the other below 1/4; the closer v is to 3/4, the closer the two solutions are to 1/4 and 3/4 respectively.
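The two preimages are easily computed (a minimal sketch of the formula just derived; v = 0.5 is an arbitrary example):

```python
from math import sqrt

# The two solutions of f(s) = v are s = (1 ± sqrt(1 - v))/2,
# symmetrically placed about 1/2.
def f(s):
    return 4 * (1 - s) * s

def preimages(v):
    root = sqrt(1 - v)
    return (1 - root) / 2, (1 + root) / 2

lo, hi = preimages(0.5)
print(lo, hi)            # they straddle 1/2 and sum to 1
assert abs(f(lo) - 0.5) < 1e-12 and abs(f(hi) - 0.5) < 1e-12
```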

If ever x(n) is 1/2, we'll get x(1+n) = 1, x(2+n) = 0 and all subsequent
x(m+n) = 0 for ever. There are two candidate values of x(n−1) that'll
lead to this; for each of which there are two candidate values of
x(n−2); and so on, doubling the number of candidates at each step
backwards, to get 2^{i} candidate values for x(n−i) and
2^{n} candidates for x(0). This holds for each natural n, giving us
infinitely many values of x(0) which will, ultimately, lead to the sequence
reaching zero and being fixed thereafter.
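Winding backwards is mechanical enough to automate (my sketch; it makes no attempt to be efficient):

```python
from math import sqrt

# Each value v has two preimages under f, namely (1 ± sqrt(1 - v))/2,
# so winding i steps back from 1/2 yields 2**i candidate values.
def preimages(v):
    root = sqrt(1 - v)
    return [(1 - root) / 2, (1 + root) / 2]

level = [0.5]
for i in range(4):
    level = [s for v in level for s in preimages(v)]

print(len(level))   # 2**4 = 16 values of x(0) that reach 1/2 in 4 steps
```

Each of these 16 values, iterated forward four times, lands on 1/2, whence 1, whence 0 for ever.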

One of the values for n=1 is < 1/4, the other is > 3/4; for n=2, we'll have one value < 1/4, one between 1/4 and 1/2, one between 1/2 and 3/4 and one > 3/4. As we increase n, the smallest candidate gets closer to 0 by a factor of approximately 4 (as noted above for trajectories growing away from 0, but now seen in reverse); it implies a partner candidate that's as close below 1 as the smallest is above 0. A candidate close below 1 at step n implies candidates close to 1/2 at step 1+n; the closer the former is to 1, the closer the latter are to 1/2. Each value close to 1/2 at step n implies a perturbed version, in steps after n, of the entire tree from step 0's exact 1/2. Each candidate above 3/4 implies, at the next step, candidates between 1/4 and 3/4. Thus the values for x(0) that will, ultimately, lead to the fixed point zero are spread out throughout the interval from 0 to 1.

Just as we can wind backwards from 0, via 1/2, we can equally wind
backwards from our other fixed point, 3/4. If x(n) is 1/4, x(1+n) will be
3/4; there are then two candidates for x(n−1), 2^{i} for
x(n−i) and 2^{n} for x(0). There are thus, again, infinitely
many values for x(0) between 0 and 1 that will ultimately lead to the sequence
reaching 3/4 and being fixed thereafter. Since n=0 gives us 1/4, n=1 will
give us two values, one roughly a sixteenth (actually just over 1/15), the
other roughly that close below 1. As before, these imply values steadily
closer and closer to 0, and to 1, at later steps; whence values steadily
closer to 1/2; whence perturbed versions of the pattern of values leading
towards 0, instead of 3/4; so that any value in either pattern is arbitrarily
close to some values in the other pattern. This, and the two patterns being
spread out throughout the interval from 0 to 1, suffices to ensure
that *any* trajectory within the interval from 0 to 1 must be
unstable.

Still, how about orbits, for all that they must be unstable ? Just
as stable points were solutions of f(s) = s, so an orbit of period 2 must be a
solution of f(f(s)) = s. Clearly our prior solutions to f(s) = s will satisfy
this also; but are there others ? I'll not go into the details, but if
you start with 5, add or subtract 5's square root and divide the result by 8,
you get a solution to x(2) = x(0), with x(1) as the other solution; so yes,
there is a period 2 orbit, obtained by solving:

- x = f(f(x)) = 16.x.(1−x).(1−4.(1−x).x) = 16.x.(1−x).(1−2.x).(1−2.x), i.e.
- 0 = x.(1 − 16.(1−x).(1−2.x).(1−2.x)) = x.(4.x−3).(16.x.x −20.x +5) = x.(4.x−3).((8.x−5).(8.x−5)−5)/4
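Numerically (a quick check of the roots just found; the instability remark at the end is my own observation, not part of the derivation):

```python
from math import sqrt

# The two members of the period-2 orbit, x = (5 ± sqrt(5))/8, should
# each map to the other under f.
def f(s):
    return 4 * (1 - s) * s

a = (5 - sqrt(5)) / 8
b = (5 + sqrt(5)) / 8
print(a, b)
assert abs(f(a) - b) < 1e-12 and abs(f(b) - a) < 1e-12

# The orbit is unstable, as it must be: the product of derivatives
# around the cycle, f'(a).f'(b) = 16.(1-2.a).(1-2.b), is exactly -4.
assert abs(16 * (1 - 2 * a) * (1 - 2 * b) + 4) < 1e-9
```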

However, for a three-cycle we'd need a solution of repeat(3, f, x) = x that
isn't already a fixed point. Rather than wrestle directly with the resulting
polynomial – repeat(3, f) has degree 8, leaving degree 6 once we've
factored out x and 4.x −3 for the fixed points – we can exploit the
substitution x = sin(t)^{2}, under which f just doubles the angle:
f(sin(t)^{2}) = 4.sin(t)^{2}.cos(t)^{2} = sin(2.t)^{2}. A three-cycle thus
requires sin(8.t)^{2} = sin(t)^{2}, i.e. 8.t = ±t plus some whole number
of half turns, making t a multiple of one seventh or one ninth of a half
turn. Discarding the multiples that just give back the fixed points, we're
left with exactly two three-cycles: with t at one ninth of a half turn (20
degrees), sin(t)^{2} ≈ 0.117 maps to ≈ 0.413, thence to ≈ 0.970
and back again; and likewise with a seventh in place of a ninth. Indeed, the
same substitution shows that repeat(n, f, x) = x has 2^{n} solutions for
every natural n, all real and between 0 and 1; so our iteration has
(necessarily unstable, by the argument above) orbits of every period. An
orbit of period 4, for instance, would be a root of repeat(4, f, x) = x;
once we've eliminated the fixed points and the period-2 orbit, that leaves a
polynomial of degree 12 to factorize. I'm not about to try !

Of course, we can also look for solutions of repeat(n, f, x) = x in the
complex plane, with non-zero imaginary part; every polynomial equation has
its full complement of solutions there. Since f's coefficients are all real,
so are those of repeat(n, f), and any non-real solutions would have to arise
in conjugate pairs; as it happens, the angle-doubling substitution accounts
for all 2^{n} solutions, so for this particular f none is left over to be
non-real.
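A three-cycle can be exhibited directly (a Python sketch of mine, using the fact that x = sin(t)^{2} turns f into doubling of the angle t):

```python
from math import sin, pi

# Starting t at one ninth of a half turn (pi/9 radians, i.e. 20
# degrees) gives a genuine three-cycle of f under angle-doubling.
def f(s):
    return 4 * (1 - s) * s

x0 = sin(pi / 9) ** 2
x = x0
for _ in range(3):
    x = f(x)

print(x0, x)                   # the same value, three steps apart
assert abs(x - x0) < 1e-10
assert abs(f(x0) - x0) > 0.1   # but it isn't a fixed point
```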

Since I've now mentioned the complex
numbers, I may as well look at how the logistic equation behaves off the
real line. Near 0 and 3/4, small perturbations with non-zero imaginary part are
going to grow exponentially, as before; this shall apply to the imaginary part
as much as to the real. While the real line from 0 to 1 remains trapped in that
interval and the real line outside that interval all escapes to (minus)
infinity, life gets significantly more interesting off the real line. (In fact,
all those pretty pictures folk make, of the Mandelbrot set and of Julia sets,
are depictions of just *how* interesting it gets.)

Our iteration function L = (: 4.z.(1−z) ←z :) can be re-written
as (: 1 −(2.z −1)^{2} ←z :), so let's write its input
as z = 1/2 +r.exp(i.ψ) to make its output
1−4.r.r.exp(2.i.ψ). This differs from 1/2 by 1/2
−4.r.r.exp(2.i.ψ), whose magnitude (next iteration's r) is at least as
big as 4.r.r −1/2. For r > 1/2, 4.r.r > 1 so this magnitude is
indeed > 1/2, so all subsequent iterations shall also have r > 1/2; and,
whenever ψ.radian isn't close to a multiple of a half turn, r shall get
bigger. Once r gets even moderately big, it shall subsequently grow rapidly,
squaring at each iteration, to ensure our sequence diverges towards
infinity. So, outside the disk {1/2 +r.exp(i.ψ): ψ and r are real, 0
≤ r ≤ 1/2}, the behaviour of our iteration is easily enough
characterized. Inside, however, matters are more interesting.
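A sketch of the escape, using Python's complex numbers (the starting point is an arbitrary choice just outside the disk):

```python
# Outside the disk of radius 1/2 about 1/2, iterates of
# L(z) = 4.z.(1-z) run away to infinity.
def L(z):
    return 4 * z * (1 - z)

z = 0.1 + 0.6j            # distance sqrt(0.52) > 1/2 from 1/2
for _ in range(8):
    z = L(z)
print(abs(z - 0.5))       # astronomically large already
```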

If we look at 1/2 +i.x for some real x, we get L(1/2 +i.x) = (1 +2.i.x).(1 −2.i.x) = 1 +4.x.x, which is on the real line and > 1, hence outside the disk, so shall diverge; so even the tiniest imaginary deviation from the real line, at 1/2, sends the iteration out to infinity. Next, let's look, for real r and φ, at

- L(1/4 +r.exp(i.φ)/2)
- = (1/2 +r.exp(i.φ)).(3/2 −r.exp(i.φ))
- = 3/4 +r.exp(i.φ) −r.r.exp(2.i.φ)
- = 3/4 +r.exp(i.φ).(1 −r.exp(i.φ))
- L(3/4 +r.exp(i.φ)/2)
- = (3/2 +r.exp(i.φ)).(1/2 −r.exp(i.φ))
- = 3/4 −r.exp(i.φ) −r.r.exp(2.i.φ)
- = 3/4 −r.exp(i.φ).(1 +r.exp(i.φ))

so, for small r, we see that values near 3/4 or 1/4 end up near 3/4, albeit roughly twice as far away.
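A numeric check of that doubling (my sketch; the values of r and φ are arbitrary, with r small):

```python
import cmath

# A point at distance r/2 from 1/4 lands at distance close to r from
# 3/4, whatever the direction phi: twice as far away, as claimed.
def L(z):
    return 4 * z * (1 - z)

r, phi = 1e-4, 0.7
z = 0.25 + (r / 2) * cmath.exp(1j * phi)
w = L(z)
print(abs(w - 0.75) / r)   # close to 1, confirming the factor of two
```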

Written by Eddy.