Physics commonly uses, for its arithmetic, the real number continuum or some related constructs (e.g. relations between tensor fields on real-smooth manifolds), so we lavish attention on the continuum properties of functions on the reals. It remains that the reals themselves arise, via limiting processes, out of the rationals; and the rationals themselves arise, even in a system only using whole-number arithmetic, as a model of certain constraints between whole number values, such as the number of days in successive years or (lunar or calendar) months, with some internal structure of how those values vary; in effect as a rounding-down into whole-number arithmetic of what appears to be a real continuum of time (quintessentially ordered, modelled by a field) and a chosen definition of a discrete sub-set of those times at which to ask discrete questions (i.e. all our experimental results are encoded in whole numbers, for all that this might not be the encoding we'd want to use all the time).
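As a toy illustration of that rounding-down (a sketch in Python; the ~365.2422-day year length is just the usual figure, assumed here purely for illustration), the whole number of days in each successive year can be read off as differences of rounded-down multiples of a real period:

```python
import math

# Whole days in each successive (tropical) year, read as a rounding-down of a
# continuous flow of time; the year length is the usual ~365.2422-day figure,
# assumed here purely for illustration.
year = 365.2422
days_per_year = [math.floor((n + 1) * year) - math.floor(n * year) for n in range(8)]
print(days_per_year)   # mostly 365, with an occasional 366 where the fraction accumulates
```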
It turns out that, in rounded-down arithmetic, outcomes can depend on the order in which arithmetic operations are rounded down; the same is true if we round up or to nearest; or, indeed, if the decision of whether to round up or down were some stochastic process such as might encode the (at least apparent – and very convincingly so) randomness of quantum mechanics. The effects of the variation are constrained by continuity of variation of (what looks a lot like) real-valued functions of a real input, albeit only sampled at a rational-valued sub-set of inputs; it's entirely possible that an equivalent formulation of our model could express an equivalent constraint which encodes the outputs of each as a rational answer. The world of classical physics can then be thought of as the real-continuum approximation that one uses to simplify description of our observed universe; the quantum world reveals a need for encoding some element of non-determinism whose correlations encode physics; this needs a point at which to insert such sophisticated machinery as the usual Hilbert space formalism, without needing anything but whole numbers to describe our data. So, when modelling a discrete universe, the choice of how to handle rounding in a rational model can be a way to encode, in a real-continuum model, any rounding-choice infrastructure one cares to invent, by which to express our understanding.
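As a minimal sketch of that order-dependence (Python; the rounding rules and sample values are my own choices for illustration, not anything drawn from physics), here is the same rational computation rounded after each step or only at the end, under down, up, nearest and stochastic rounding:

```python
from fractions import Fraction
import math, random

def down(x): return Fraction(math.floor(x))
def up(x): return Fraction(math.ceil(x))
def nearest(x): return Fraction(math.floor(x + Fraction(1, 2)))   # ties round up

def stochastic(x):
    # Pick the whole number below or above, weighted by nearness to each.
    lo = math.floor(x)
    return Fraction(lo + (1 if random.random() < float(x - lo) else 0))

def stepwise(a, b, c, choose):
    """Round after each arithmetic step: choose(choose(a/c) * b)."""
    return choose(choose(a / c) * b)

def once(a, b, c, choose):
    """Do exact rational arithmetic throughout, rounding only the final answer."""
    return choose(a * b / c)

a, b, c = Fraction(7), Fraction(5), Fraction(4)
for rule in (down, up, nearest, stochastic):
    print(rule.__name__, stepwise(a, b, c, rule), once(a, b, c, rule))
```

Under down, for instance, this prints 5 against 8: the early rounding discards more than the late one; the stochastic row varies from run to run.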
Even in a purely discrete world, or a model of it purely in terms of whole numbers, a rational description can use correlations that express a continuity of variation, behaving rationally as if it were just a sampling, via some form of rational rounding, from a model on a real continuum. Direct physical experiences and intuitions inform the real continuum model (and, indeed, whichever different continuum models we come up with). We can embed quite a lot of mathematical sophistication into making the real-continuum model that this rationally samples; and we can combine that with mathematically sophisticated tools to encode the rounding-choices that lead to the discretisation that breaks the continuum behaviour and keeps our universe fuzzy. When our model strictly speaks only in terms of whole numbers, it'll be encoding all of that in some manner that suffices to describe rational evaluation within a real continuum.
Each function is specified in terms of some suitable computation; where the
real-continuum's model uses real-valued inputs, the rational-model's function
shall have a rational input; where the former does real arithmetic, it has no
rounding to resolve; while the latter shall have rounded rational arithmetic,
subject to the control of some rounding-chooser. When the inputs to the former
are the values that, in the reals, represent the rationals given to the latter,
continuity of the function should ensure that the latter's output's
representation as a real is (in some sense) near
the former's output. In
particular, the two may agree, although they need not, even when the
real-continuum function's output was the real-model's representation of some
rational. On the other hand, it is in the nature of rounding that there must
be some values for inputs for which no rounding happens; at these, the
real and rational paths must in fact agree as to the output of the given
function. Inputs for which that happens can be expected (again by the nature of
rational rounding) to form a lattice in a relevant space of possible inputs;
everywhere else, our expectation that the possible values are at least
tolerably near
together shall typically be met, save where some step of
the computation tripped over a discontinuity (such as where the computed
denominator in a division has both positive and negative values the rounding
might select for it; or if the computation has special handling of some value
(such as 0 being excluded as a denominator) that departs from the computation it
would otherwise perform).
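A hedged sketch of that comparison (the function, its rounding and the sample grid are all inventions of mine for illustration): evaluating the same function with exact rational arithmetic standing in for the continuum path, and with a rounded-down intermediate at each step, agrees exactly on a lattice of inputs (here x = odd/3) and stays within a bounded distance elsewhere:

```python
from fractions import Fraction
import math

def f_exact(x):
    """The continuum path: exact arithmetic, no rounding anywhere."""
    return (3 * Fraction(x) + 1) / 2

def f_rounded(x, choose=lambda v: Fraction(math.floor(v))):
    """The rational path: the same steps, each intermediate rounded down."""
    return choose((choose(3 * Fraction(x)) + 1) / 2)

for k in range(8):
    x = Fraction(k, 3)
    exact, rounded = f_exact(x), f_rounded(x)
    tag = "agree (no rounding needed)" if exact == rounded else "near"
    print(f"x = {x}: exact = {exact}, rounded = {rounded}  -> {tag}")
```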
Each step in the computation of a function offers a moment at which to insert rounding and, thus, the mechanism for resolving rounding. The expression of that may make use of where in the continuum the function is being evaluated; although we might hope to eliminate that, at least in some cases, by inventing (in the model used by the rounding-chooser) various functions of position that obey some constraints, to provide (at least) a locally-constant (in some sense) mechanism for resolving the given rounding step, preferably sharing its constraining functions with other such steps (possibly reaching different resolutions, but using much of the same information). When we perform all of the computation steps, we get the rounded answer, for the given function, at the given location. The usual rounding algorithms – up, down, nearest – all depend only on the value of some real variable in the continuum model. When that variable does in fact take a rational value, the rounding-chooser might be constrained to simply take that rational value; but computations using several steps of rounding might lead to evaluation landing elsewhere, even in this case.
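One toy shape for such a rounding-chooser (entirely my own construction, for illustration only): a single shared quantity at each location (here just a "phase" between 0 and 1) decides, for every rounding step in the evaluation, whether to resolve it down or up, and leaves already-whole intermediates untouched:

```python
from fractions import Fraction
import math

def make_chooser(phase):
    """A toy rounding-chooser: one shared, locally-defined quantity (a 'phase'
    between 0 and 1) resolves every rounding step at this location."""
    def choose(value):
        lo = math.floor(value)
        if value == lo:                       # already whole: nothing to resolve
            return Fraction(lo)
        return Fraction(lo + (1 if value - lo > phase else 0))
    return choose

def evaluate(x, choose):
    """The same two-step function as above, each step rounded by the chooser."""
    step1 = choose(3 * Fraction(x))
    return choose((step1 + 1) / 2)

x = Fraction(5, 4)
for phase in (Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)):
    print(f"phase {phase}: f({x}) = {evaluate(x, make_chooser(phase))}")
```

Different phases resolve the two rounding steps differently, so even this one shared piece of information can land the evaluation at 2 or at 3 for the same input.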
Many functions in physics get integrated over ranges of inputs; when that's
all of how the function contributes to predictions and observations in the world
we observe, we can be insensitive to just exactly how the function is
implemented, so long as we have a suitable mechanism for integrating it over
ranges. The measure in use to do integration may adapt itself to the input
throughout the range over which it integrates; and the rounding-machinery and
measure may share common machinery that enables them to work suitably together.
The result might look very like a smooth integral of a smooth tensor field on a
real continuum, yet be implemented by some entirely rational computation, within
which the effects of rounding at different locations may average out
to look the same.
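A sketch of that averaging-out (Python; the integrand, the grid and the use of stochastic rounding are all choices of mine for illustration): a Riemann sum whose samples are each rounded to a coarse rational grid still averages, over many runs, to something close to the smooth integral:

```python
from fractions import Fraction
import math, random

def stochastic_round(value, grid):
    """Round value to a multiple of 1/grid, picking the neighbour below or
    above with probabilities weighted so the expectation is the value itself."""
    scaled = value * grid
    lo = math.floor(scaled)
    pick = lo + (1 if random.random() < float(scaled - lo) else 0)
    return Fraction(pick, grid)

def rounded_integral(f, steps, grid):
    """Midpoint Riemann sum of f over [0, 1], each sample rounded to the grid."""
    total = Fraction(0)
    for k in range(steps):
        x = Fraction(2 * k + 1, 2 * steps)
        total += stochastic_round(f(x), grid) / steps
    return total

f = lambda x: x * x                        # its smooth integral over [0, 1] is 1/3
runs = [rounded_integral(f, steps=50, grid=4) for _ in range(200)]
print(float(sum(runs) / len(runs)), "compared with", 1 / 3)
```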
Indeed, one way of mediating rounding is to deal with pairs of whole numbers, rather than rationals. Although most of what we'll do with them is insensitive to the usual equivalence by which the rationals are obtained from pairs, retaining the computed pair (and using the rational that represents it where that's what's needed) leaves open the possibility that this representation plays a part in the decision as to when to round. Thus 4/2 might not be the same as 2, for the purposes of rounding: if every second advance in an input produces an advance of four in the output, that's not the same as each advance in the input producing an advance of two in the output.
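A sketch of that distinction (the Pair type and its reading as a rate are inventions of mine for illustration): keeping the pair as computed lets 4/2 and 2/1 name the same rational yet behave differently when read as rates of whole-number advance:

```python
from fractions import Fraction
from dataclasses import dataclass

@dataclass(frozen=True)
class Pair:
    """A numerator/denominator pair, kept as computed rather than reduced."""
    num: int
    den: int

    def value(self):
        """The rational this pair represents, where that's what's needed."""
        return Fraction(self.num, self.den)

def advance(rate, x):
    """Read the pair as a rate: the output advances by rate.num once per
    rate.den whole steps of the input, rounding down in between."""
    return rate.num * (x // rate.den)

print(Pair(4, 2).value() == Pair(2, 1).value())    # True: the same rational ...
for x in range(6):                                 # ... but not the same behaviour
    print(x, advance(Pair(4, 2), x), advance(Pair(2, 1), x))
```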
The game Dungeons and Dragons
(D&D) deliberately uses
rounded-down whole number arithmetic, which leads to results that depend on
where in your arithmetic you do your rounding down. Its rules for the
spell wall of stone
tell me
A wall of stone is 1 inch thick per four caster levels and composed of up to one 5-foot square per level.
So, if my caster level, L, is 9, I get a two-inch thick wall, as two is 9/4 rounded down; and my wall is made of 9 squares. However, the rules continue with:
You can double the wall's area by halving its thickness.
That can, at least, be read as allowing only that I can have an inch-thick wall (half of what I would have by default) with area 18 (twice 9) squares. We could, instead, read it as allowing an arbitrary trade-off between thickness and area (number of tiles), albeit subject to rounding-down as usual. If I ask for a wall L/4 inches thick, I get one made of L squares, at level L; but, if I choose to have a wall one inch thick, do I get L/4 squares of wall per level? That would get me L×L/4 squares of wall – but we may need to read that as L×(L/4) to indicate where to do the rounding. This is the point where associativity breaks down, if you use rounding-down in your arithmetic. So let's compare evaluating (L×L)/4 with L×(L/4) for successive L, reporting the two candidate values for rounding and the relation of the lower to the upper – either < or =, with the latter meaning no rounding:
| L | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
|---|---|---|---|---|---|---|---|---|---|----|----|
| (L×L)/4 | 0 < 1 | 1 = 1 | 2 < 3 | 4 = 4 | 6 < 7 | 9 = 9 | 12 < 13 | 16 = 16 | 20 < 21 | 25 = 25 | 30 < 31 |
| L×(L/4) | 0 < 1 | 0 < 2 | 0 < 3 | 4 = 4 | 5 < 10 | 6 < 12 | 7 < 14 | 16 = 16 | 18 < 27 | 20 < 30 | 22 < 33 |
Notice that the two agree at 4 and 8, where no rounding happens either way; but the first's two candidates are at most one apart, while the second's get steadily further apart, and the first's interval between candidates always lies entirely within the second's. So choice of evaluation order can interact with choice of when to round. Weighting the two candidate values by probabilities proportional to how near each is to the unrounded answer (that is, giving each candidate the other's distance from that answer, divided by the gap between them, as its probability) gives an expected or average value (of our discrete quantity) equal to the real value between the discrete candidates, albeit with a variance about this mean; so we can get results that look like real values on average while modelling them solely with whole values or rounded rational values.
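A sketch of both points (Python; the trial count and the use of explicit randomness are choices of mine): the two evaluation orders from the table, alongside a nearness-weighted random rounding of L×L/4 whose average recovers the unrounded value:

```python
from fractions import Fraction
import random

def late_rounding(level):
    """(L×L)/4: multiply first, round down only once at the end."""
    return (level * level) // 4

def early_rounding(level):
    """L×(L/4): round the per-level share down first, then multiply."""
    return level * (level // 4)

def weighted_rounding(level, trials=10000):
    """Round L×L/4 down or up at random, each candidate weighted by its
    nearness to the exact value, so the average recovers that value."""
    exact = Fraction(level * level, 4)
    lo = (level * level) // 4
    p_up = exact - lo                          # 0, 1/4, 1/2 or 3/4
    picks = [lo + (1 if random.random() < p_up else 0) for _ in range(trials)]
    return sum(picks) / trials

for L in range(1, 12):
    print(L, late_rounding(L), early_rounding(L),
          round(weighted_rounding(L), 2), float(Fraction(L * L, 4)))
```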
Such roundings look very quantum: can "rational arithmetic with stochastic rounding" package coordinated randomness within a seemingly-deterministic arithmetic, in particular via all manner of possible representations of the coordinated stochastics? If the actual values in a model are all rationals (or even if, whenever certain of them are, so are all the others), then the arithmetic isn't quite going to match that of the reals that our other models might be using to describe what's happening.
One can read Planck's and de Broglie's laws as saying that matter's space-time structure is periodic along world-lines (with period Planck's constant divided by rest-mass, give or take factors of the speed of light). Whatever structure matter has, within space-time, it is reasonable to suppose that relative phase, between different pieces of matter, is apt to affect how they interact, so that particular interactions only happen if the interacting parts meet at the right relative phases. This might tend to happen naturally, if only via the electromagnetic field and curvature associated with the structures; but it still restricts the configurations that can work.
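For a sense of scale (standard constants; the c² is one of the "factors of the speed of light" alluded to above), the period for an electron comes out around 8×10⁻²¹ seconds:

```python
h = 6.62607015e-34         # Planck's constant, J·s
c = 2.99792458e8           # speed of light, m/s
m_e = 9.1093837015e-31     # electron rest mass, kg

period = h / (m_e * c**2)  # Planck's constant over rest-mass-energy
print(f"electron period ~ {period:.2e} s")   # roughly 8.1e-21 s
```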
This would mean that physical processes involve structural phase match-ups, which imposes discreteness on their ways of operating. Given a possible structure that works, there may be some (at least short-range) continuum of variations on that structure (e.g. changing only its position or orientation relative to some other structures) that also work; but the need to line up periodic parts of the structure to get the right relative phase between them shall mean that, for variations that would affect that relative phase, there's at best a short (compared to the periods of the periodic structures in question) continuum about each viable structure; yet, possibly with some variation in other parameters, there are similar viable structures that differ in the relative phase of the two structures by (roughly) a whole cycle.
This is apt to involve rounding in at least some of the arithmetic involved in computing how interactions play out.
Written by Eddy.