Two and three – the first two primes

power(0) is (: 1 ←x :), power(1+n) is (: x.power(n, x) ←x :), for each natural n (x varies over whatever values we know how to multiply by themselves and 1). One may equally define: power(1) is the identity, power(n+m) = (: power(n,x).power(m,x) ←x :) and leave power(0) out of the discussion when no multiplicative identity is available. One of the strengths of trits is their treatment of negatives on an equal footing with positives: instead of digits for 0, 1, 2 we can use digits for −1, 0 and 1, so we don't need to reserve a bit for sign. In n trits we can represent the power(n, 3) numbers between ±(power(n, 3) −1)/2, inclusive; the lowest of these is represented by n digits each meaning −1, the highest by n digits each meaning 1, and negation of a value simply negates each digit.
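As a concrete sketch of balanced ternary (assuming a least-significant-digit-first convention; the helper names are mine, not anything standard):

```python
def to_trits(value, n):
    """Balanced-ternary digits of value, least significant first.

    n trits cover the 3**n integers from -(3**n - 1)//2 to +(3**n - 1)//2.
    """
    assert abs(value) <= (3 ** n - 1) // 2
    trits = []
    for _ in range(n):
        r = value % 3          # 0, 1 or 2
        if r == 2:
            r = -1             # use digit -1 instead of 2, carrying one up
        value = (value - r) // 3
        trits.append(r)
    return trits

def from_trits(trits):
    return sum(t * 3 ** i for i, t in enumerate(trits))
```

Negation really is digit-wise: `to_trits(-v, n)` equals `[-t for t in to_trits(v, n)]`, and five trits span exactly ±121.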

Notice, for natural i, that power(power(i, 3), 2) +1 is always a multiple of power(i+1, 3). Proof:

For i = 0, power(power(0,3),2) is power(1,2) is 2 and 2+1 is 3, which is power(0+1,3). Once the result is known for i−1, with i > 0, we have power(power(i−1,3),2) equal to q.power(i,3)−1 for some natural q; cubing each side (noting that the cube of power(power(i−1,3),2) is power(3.power(i−1,3),2) is power(power(i,3),2)) makes power(power(i,3),2) equal to the cube of q.power(i,3)−1, which is power(3,q).power(3.i,3) −3.power(2,q).power(2.i,3) +3.q.power(i,3) −1. As i > 0, each of 3.i, 2.i+1 and i+1 is at least i+1, so every term but the final −1 is a multiple of power(i+1,3); hence power(power(i,3),2) +1 is one too, completing the induction.
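The claim is easy to spot-check numerically for small i:

```python
# Spot-check: 2**(3**i) + 1 is a multiple of 3**(i + 1) for small natural i.
for i in range(6):
    q, r = divmod(2 ** 3 ** i + 1, 3 ** (i + 1))
    assert r == 0, i
    print(i, q)    # q is the quotient the inductive step feeds forward
```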

Now I'm more interested in a power of 3 that is one short of a power of 2. The obvious example is 3 itself, one short of 4, but is there a bigger one? Lest you ask why: powers of 3 provide an interesting number system, which I'd ideally encode within the conventional binary context leaving as little slack as possible – but just enough to (for example) distinguish keep reading from that was the last digit of the number. For these purposes, 243 = power(5,3) and 256 = power(8,2) differ by only 13, which is pretty good: a byte can encode a bundle of 5 trits or an out-of-band message.
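A quick brute-force scan supports the suspicion that 3 has no bigger successor (the scan itself proves nothing beyond its bounds; that no larger example exists for any exponents follows from Mihăilescu's theorem on consecutive perfect powers, formerly Catalan's conjecture):

```python
# Brute-force scan for powers of 3 exactly one short of a power of 2.
hits = [(p, q) for p in range(1, 100) for q in range(1, 160)
        if 2 ** q - 3 ** p == 1]
print(hits)              # finds only (1, 2): 3 is one short of 4
print(2 ** 8 - 3 ** 5)   # the near miss: 256 - 243 = 13
```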

So let me look at polynomials in the powers of 2 and 3; and I'll abbreviate power as just p.

I'm also interested in which powers of 2 are one more or less than a prime. Conjecture: every prime is the sum or difference of a power of 2 and a power of 3.
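A bounded probe of that conjecture (a sketch only: `rep` and its exponent cap are my own choices, and a miss within the bounds is not a disproof, merely a prime the bounded search fails to reach; powers here include power(0) = 1, as above):

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def rep(p, max_exp=40):
    """p as 2**a + 3**b, 2**a - 3**b or 3**b - 2**a, exponents <= max_exp."""
    threes = {3 ** b for b in range(max_exp + 1)}
    for a in range(max_exp + 1):
        x = 2 ** a
        for y in (p - x, p + x, x - p):
            if y in threes:
                return a, y             # y is the power of 3 involved
    return None                         # nothing found within the bounds

# Primes below 100 the bounded search cannot decompose (not a disproof:
# a representation might still exist with larger exponents).
stubborn = [p for p in range(100) if is_prime(p) and rep(p) is None]
print(stubborn)
```

For most small primes a representation turns up quickly (23 = 27 − 4, 47 = 128 − 81); the printed list is the interesting residue.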

Perhaps a more fruitful approach is to use continued fractions and observe that:

log(2) / log(3) = 1 −1/(3 −1/(3 +1/(2 +1/(4 −1/(6 +1/(2 +1/(23 + …)))))))

and the 23, being large (so contributing little), looks like a good place to cut, giving

log(2) / log(3)
≈ 1 −1/(3 −1/(3 +1/(2 +1/(4 −1/(6 +1/2)))))
= 1 −1/(3 −1/(3 +1/(2 +1/(4 −2/13))))
= 1 −1/(3 −1/(3 +1/(2 +13/50)))
= 1 −1/(3 −113/389)
= 1 −389/1054 = 665/1054

with various earlier truncations producing 306/485, 53/84, 12/19, 5/8 and 2/3, in decreasing order of precision. However, of these, only 306/485 and 5/8 fall below log(2) / log(3) (so the bits leave slack, a few spare bit-patterns for out-of-band data, rather than having too few patterns to fully represent the trits), with power(485, 2) / power(306, 3) ≈ 1.001 and power(8, 2) / power(5, 3) ≈ 1.05. So 306 trits packed into 485 bits would be more efficient than five trits packed in eight bits; but 485-bit numbers are a bit awkward on early 21st century architectures.
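The truncations just listed, and the 23 at which the expansion was cut, can be recovered mechanically from the standard continued-fraction recurrence; a sketch (the function name is mine; the signed form above is a variant presentation with the same convergents):

```python
from fractions import Fraction
from math import log

def cf_convergents(x, count):
    """Yield (partial quotient, convergent) pairs for x's continued fraction."""
    p0, q0, p1, q1 = 0, 1, 1, 0          # standard convergent recurrence seeds
    for _ in range(count):
        a = int(x)
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        yield a, Fraction(p1, q1)
        x = 1 / (x - a)

x = log(2) / log(3)
for a, c in cf_convergents(x, 11):
    print(a, c, "low" if c < x else "high")   # the 11th partial quotient is 23
```

The convergents alternate below and above log(2) / log(3), which is exactly what decides whether a given packing leaves slack or falls short.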

So let's approach this differently and look at real-world word sizes, which are generally powers of two in bits (the 8-bit byte, a.k.a. octet; the '90s saw the transition from 16-bit integers to 32-bit ones, with a few 64-bit architectures emerging then and becoming more common in the next decade; in the 2020s, 128-bit systems are coming). We can take each number of bits, scale by log(2) / log(3) and round down to get a number of trits; dividing how many distinct bit-patterns we have by that power of three gives us the ratio corresponding to my power(8, 2) / power(5, 3) ≈ 1.05 above. These aren't very promising, though: power(16, 2) / power(10, 3) ≈ 1.11 is just the square of the preceding ratio, and the same squaring happens again at 32, 64 and 128 bits. The last of these gives us power(128, 2) / power(80, 3) ≈ 2.3, which at least suggests a possible alternative approach: we can rewrite this as power(127, 2) / power(80, 3) ≈ 1.15 and have a spare bit left over to do something else with. That, inevitably, leads to thinking about what we could do with a smaller number of trits than the available bits can accommodate, plus some creative use of the freed bits. One obvious use would be to represent a floating point number by a mantissa and exponent, each in trits.
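The word-size arithmetic above tabulates directly (a small sketch):

```python
from math import floor, log

TRITS_PER_BIT = log(2) / log(3)           # ~0.63093

for bits in (8, 16, 32, 64, 128):
    trits = floor(bits * TRITS_PER_BIT)   # trits certain to fit in this word
    slack = 2 ** bits / 3 ** trits        # spare bit-patterns, as a ratio
    print(f"{bits:3d} bits -> {trits:2d} trits, slack {slack:.3f}")
# Reserving one of the 128 bits: 2**127 / 3**80 is about 1.15
```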

All things considered, though, the easiest thing is probably to read each octet as either a number in the range ±121 (representing 5 trits) or one of 13 out-of-band tokens. For example, a stream of successive values in the 5-trit range can be read as successive digits of a number until an out-of-band token says to end reading, or to switch from reading a mantissa to reading an exponent. Since that only uses up two of our 13 out-of-band tokens, we can have variants on each: end reading and treat what we've read as an integer can be distinct from end reading but treat as a mantissa; and both this last and switch to reading exponent can have variants indicating where in the mantissa to interpolate the fractional part separator (a.k.a. ternary point). Indeed, for greater flexibility, one could interrupt a mantissa part way through to say where the fractional part separator goes in the previous or next octet's five trits. Diverse encodings are potentially viable for such a scheme.
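One concrete shape such an octet code could take (a sketch only; the bias, token numbering and helper names are my own choices, not a settled spec):

```python
BIAS = 121        # (3**5 - 1) // 2: five trits span -121 .. +121
OOB_BASE = 243    # byte values 243 .. 255 carry the 13 out-of-band tokens

def encode_bundle(trits5):
    """Pack five balanced-ternary digits (each -1, 0 or 1) into one octet."""
    value = sum(t * 3 ** i for i, t in enumerate(trits5))
    return value + BIAS                  # lands in 0 .. 242

def encode_token(k):
    """Octet for out-of-band token k, with 0 <= k < 13."""
    assert 0 <= k < 13
    return OOB_BASE + k

def decode(octet):
    if octet >= OOB_BASE:
        return ("token", octet - OOB_BASE)
    value = octet - BIAS
    trits = []
    for _ in range(5):                   # unpack least significant trit first
        r = value % 3
        if r == 2:
            r = -1
        value = (value - r) // 3
        trits.append(r)
    return ("trits", trits)
```

Here an octet in 0…242 is a biased bundle of five trits and 243…255 carry the 13 tokens; token 0 could, say, mean end of number, token 1 switch to exponent, with the rest free for the separator-placement variants described above.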

Written by Eddy.