A rational discourse's use lies in its ability to describe the relationships
among entities of interest. Mathematics can formalise all relationships in
terms of relations, characterised by: context gives meaning to r as a relation
in so far as it gives meaning, for any values offered in place of left and
right, to "r relates left to right" as a statement, a.k.a. an assertion
– i.e. admissible as the subject matter of a proof or disproof, albeit
potentially undecidable.
In so far as a relation, r, relates y to x, I'll describe x as
a right
value of r and y as a left
value of r; I'll describe both x and y as, simply, values
of r.
In my writings about relations, I describe relationships among relations;
consequently, I allow that relations may be values
, so discuss relations
whose values are relations. Other contexts, using what I build out of
relations, likely opt to limit which relations to
entertain as values, thereby restricting the relations discussed. As different
contexts may come to different choices as to which relations to accept as
values, I prefer to keep my writings indifferent to the question, as if most
questions about whether relations are values were undecidable. None the
less:
I shall allow a relation as a value in so far as context allows it as a value, and treat that as no real limitation.
When discussion involves one relation as a (left or right) value of some other, the first is treated as a value, so the above is relevant to it; but the second is only discussed as a relation. If it is not a value, it still serves in the text as a short-hand that relates how it was specified to the things it is shown to relate (or not) to one another.
I introduce denotations for specifying relations, usually in terms of other relations, thereby equipping myself to build tools with which to describe relationships among relations and to construct relations out of other relations.
I'll say that r subsumes s precisely if "s relates x to y" implies
"r relates x to y"; and, as a synonym, introduce a relation named subsume,
specified by: subsume relates r to s precisely if r subsumes s, i.e. (using
notation introduced below) "s relates x to y" implies "r relates x to y".
In all contexts which deal with relations, I deem two relations to
be equal
if each subsumes the other. Formally this is (as we'll see
below) an equivalence. One may reasonably consider two relations to be
conceptually distinct entities if they are specified distinctly, even if the two
agree entirely on all questions of the form "do you relate this to that?"
All the same, for my purposes, the proper notion of equality
when dealing
with relations is this mutually subsumes
equivalence, regardless of how
the relations are specified. Note that each relation fatuously subsumes itself,
hence is equivalent to itself in this sense; and thus equal to itself.
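As an illustrative sketch (my assumption: restrict to finite relations, modelled in Python as sets of (left, right) pairs; the text's relations need not be sets), subsumption becomes set inclusion and equality becomes mutual subsumption:

```python
# A finite relation modelled as a set of (left, right) pairs:
# "r relates x to y" means (x, y) is in r.
def subsumes(r, s):
    """r subsumes s precisely if s relating x to y implies r relates x to y."""
    return s <= r  # in this finite model, subsumption is just set inclusion

r = {(1, 2), (1, 3), (4, 4)}
s = {(1, 2), (4, 4)}
assert subsumes(r, s) and not subsumes(s, r)
# Equality of relations is mutual subsumption:
assert (subsumes(r, s) and subsumes(s, r)) == (r == s)
# Every relation fatuously subsumes itself, hence equals itself:
assert subsumes(r, r)
```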
Hopefully it is clear that subsume is not constructible: in my context, it relates itself to itself ! A context which does not entertain it as a value can, none the less, expand every statement I make about subsume into a statement about which relations subsume which others.
Given arbitrary relations r and s, there's a smallest relation which subsumes
both and a largest relation which each subsumes, called the union and
intersection, respectively, of r with s:
the union, r&unite;s, relates x to y precisely if r relates x to y or s relates x to y;
the intersection, r&intersect;s, relates x to y precisely if r relates x to y and s relates x to y.
The union necessarily subsumes each of the relations combined, each of which necessarily subsumes the intersection. Any relation which subsumes both r and s necessarily also subsumes their union; any relation subsumed both by r and by s is necessarily also subsumed by their intersection. In so far as two relations are constructible, so also are their union and intersection.
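In the finite-set model of a relation (a set of (left, right) pairs, an assumption restricting the text's generality), these are plain set union and intersection, and the subsumption claims above can be checked directly:

```python
def subsumes(r, s):
    """r subsumes s precisely if s relating x to y implies r relates x to y."""
    return s <= r

r = {(1, 2), (3, 3)}
s = {(1, 2), (5, 6)}
u, i = r | s, r & s  # union and intersection of r with s
assert subsumes(u, r) and subsumes(u, s)  # union subsumes each combined relation
assert subsumes(r, i) and subsumes(s, i)  # each subsumes the intersection
```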
My formalisation encodes relationships
among many entities in terms of chained two-entity relationships, using a
technique known as currying (because Haskell B. Curry persuaded some
mathematicians and computer scientists that the way to formalise a
many-parametered function is in terms of a single-parameter function which takes
the first parameter and returns a single-parameter function which takes the
second parameter and so on, with the penultimate step producing a
single-parameter function which takes the last parameter and yields the over-all
result of the many-parameter function). Currying encodes relationships among at
least two entities; for fewer than two, we may quibble at calling
a predicate "a relationship among one entity" or calling a statement
"a relationship among no entities", but they have the right form, so do warrant
encoding as relations; to wit, as collections or identities.
Thus the statements true and false encode as the all-relation (which relates x to y, regardless of their values) and the empty relation (which doesn't relate any value to any value).
A statement can equally be used as a fatuous predicate – true, for any
value, precisely if the statement is true, ignoring the value just as the
statement's encoding as a relation ignored both values it was asked about
– and the predicates corresponding thereby to true and false give us the
universal identity (which relates x to x for every value x, but does not relate
any x to any y unless y=x, so this all-collection
is not the same thing
as the all-relation
encoding true directly as a statement) and the empty
collection – which is the same relation as the empty relation, since the
constraint x and y are the same value
is irrelevant when we're going to
say no to that value even if this constraint is met.
We thus have the relations (which we'll shortly see to be equivalences: for now, you can ignore this detail, though it affects one name):
empty = {}, defined by: "empty relates x to y" is false, regardless of the values given for x and y; i.e. empty doesn't relate anything to anything;
identity = (:x←x:), which relates any value to itself, but to nothing else; it could equally be denoted {values}; and
All = (:x←y:), the all-relation or all-equivalence, which relates every value to every value (note that All is capitalised, unlike empty; though I don't use it much in newer texts).
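Over a finite stand-in universe of values (an assumption: the genuine identity and All range over all values and need not be constructible), the three relations can be realised directly as sets of pairs:

```python
universe = {0, 1, 2}  # a hypothetical finite universe of values
empty = set()                                        # relates nothing to anything
identity = {(x, x) for x in universe}                # universal identity, restricted here
All = {(x, y) for x in universe for y in universe}   # the all-relation, restricted here

# Every relation subsumes empty; All subsumes every relation over this universe:
assert empty <= identity <= All
assert all((x, x) in identity for x in universe)
assert (0, 1) in All and (0, 1) not in identity
```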
Since empty has no values, I can give it meaning in any context at all
– it doesn't need to presume any values provided by context; indeed, every
relation subsumes empty and empty is vacuously constructible as well as
(decidably) computable
. The other two may very well be in-expressible to
some contexts so, to avoid making my tools useless to those contexts, I shall
avoid discussion depending on them. None the less, they equip me with
short-forms for various statements, which a context disliking them can safely
translate into appropriate long forms; e.g. a relation subsumed by the
universal identity
is simply one which never relates x to y unless x and y
denote the same value, albeit it may be more selective than the universal
identity in which values it does relate – indeed, there is a direct
correspondence between predicates and such relations.
I'll describe a relation subsumed by the universal identity as
a collection
or identity
:
equivalently, a collection is the encoding of a predicate, in the scheme
above.
Every relation is subsumed by the universal all-relation. I'll describe a
relation r as an all-relation
precisely if it relates each
of its left values to each of its right values: whenever r relates x to u and v
to y, it also relates x to y. Two relations are disjoint
precisely if their intersection is empty; each can then be described as the
other's complement within
their union; this is most
usually of interest when that union is an all-relation or a collection. If
relations r and s are each complements, within a particular union, of a given
relation, we may reasonably expect r and s to be equal; however, there may be
some decidability issues here. One might reasonably hope that, within a
constructible union, complements of a given constructible relation would be
constructible and necessarily equal; but I am wary of assuming this. None the
less, I shall refer to the
complement of one relation within another;
this may be an ambiguous specification.
Every relation subsumes empty and itself. I'll describe a
non-empty relation as atomic
precisely if it
subsumes only empty and itself; and I'll call an atomic relation
an atom
or a singleton
. (I
occasionally mistype relatino
; if that were a real word,
it would deserve to be another
synonym for atom.) An atom is by specification non-empty, so relates at least
some x to some y; it then subsumes the relation x←y that relates (only) x
to (only) y; this isn't empty and the atom subsumes it, so the atom is in fact
equal to it. For any values x, y the relation x←y is non-empty and subsumes
nothing but empty and itself. Thus every atom is x←y for some
values x, y; and every such relation is indeed an atom. (I'll
introduce, below, some other ways to denote an atom.) I'll
refer to each atom subsumed by a relation r as an atom of
r; every relation is equal to the union of its atoms. In so far as values x and
y are constructible, so also is the atom x←y.
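In the finite-set model (a relation as a set of (left, right) pairs, my simplifying assumption), each atom is a one-pair relation, and the claim that every relation is the union of its atoms is immediate:

```python
def atoms(r):
    """Each atom of r is a single-pair relation x←y subsumed by r."""
    return [{pair} for pair in r]

r = {(1, 'a'), (2, 'b')}
assert all(atom <= r for atom in atoms(r))  # every atom of r is subsumed by r
assert set.union(*atoms(r)) == r            # r is the union of its atoms
```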
I'll describe a value, x, as a fixed point
of a relation, r, precisely if r relates x to x; in such a case I'll also say
that x is in
r (because, when r is a collection, its
members are its fixed points and, indeed, a collection consists entirely of
fixed points
). The collection of fixed points of r is simply the
intersection of r with the universal identity.
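A quick finite-model check of this last claim (assuming, as before, relations as sets of pairs and an identity restricted to r's values):

```python
r = {(1, 1), (1, 2), (3, 3)}
values = {v for pair in r for v in pair}
identity = {(x, x) for x in values}          # universal identity, restricted to r's values
fixed_points = {x for (x, y) in r if x == y}  # values x with r relating x to x

# The collection of fixed points is the intersection of r with the identity:
assert r & identity == {(x, x) for x in fixed_points}
```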
Given two relations r and s, I'll define the composite, r&on;s, of r
{[to the] left of; after} s or s {[to the] right of; before} r by:
r&on;s relates x to z precisely if r relates x to some y which s relates to z.
Composition is an unambiguous binary operator: I shall, in due course,
establish it as the archetypical binary
operator. The temporal
connotations
of before and after
will make more sense when we consider mappings (which regard their right values
as inputs
and left values as outputs
); for the moment, it suffices
to note that the definition quite
deliberately runs the other way to the order in which English text is
normally read.
For a composite, the right values of r&on;s are right values of s (though some right values of s may be missed out) and the left values of r&on;s are left values of r (with a matching proviso, arising because r's right values and s's left values are not guaranteed to coincide – indeed, if there is no overlap, r&on;s will be empty). In so far as r and s are constructible, r&on;s is necessarily also constructible.
In many contexts (particularly when working with mappings –
see below) it is desirable to work with composites where all right values
(inputs) of the right-most component shall be right values of the composite; to
make this reliably true, one works with a restriction of composition where
r&on;s is only allowed when every left value of s is a right value of r; I refer
to this as clean
composition. A dual notion (that would
ensure a composite of monics has all of its left values) requires that every
right value of r be a left value of s; I call this co-clean. Should I ever care
about a composition being both clean and co-clean, I'll describe it as
iso-clean. Note that, as a binary operator (raw)
composition is flat, while these restricted versions of
it are categoric.
I can define a relation, reverse, by: reverse relates r to s precisely if:
r relates x to y precisely if s relates y to x.
Fixed points of reverse are often described as symmetric, though
, though
I'll prefer to call them fixed points of reverse
or symmetric under
reversal
to avoid confusion with symmetry under other operations. Note that
reverse(r&on;s) = reverse(s)&on;reverse(r), making
reverse a coautomorphism of &on;; that reverse
is likewise an automorphism of &unite; and
&intersect;, i.e. reverse(r&unite;s) = reverse(r)&unite;reverse(s) and
reverse(r&intersect;s) = reverse(r)&intersect;reverse(s); that reverse(reverse)
= reverse, i.e. reverse is a fixed point of itself (hence, in particular, not constructible);
and that every relation is a fixed point of reverse&on;reverse (i.e. reverse
is self-inverse
on relations). When reverse relates one relation to
another, I'll describe each as the reverse of
the other. In so far as a
relation r is constructible, so also is reverse(r).
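The algebraic facts just listed can be exercised in the finite-set model (an assumption; reverse itself, acting on all relations, is not such a set):

```python
def reverse(r):
    """The reverse of r relates y to x precisely if r relates x to y."""
    return {(y, x) for (x, y) in r}

def compose(r, s):
    return {(x, z) for (x, y) in r for (w, z) in s if y == w}

r = {(1, 2), (2, 3)}
s = {(2, 5), (3, 1)}
assert reverse(compose(r, s)) == compose(reverse(s), reverse(r))  # coautomorphism of &on;
assert reverse(r | s) == reverse(r) | reverse(s)                  # automorphism of &unite;
assert reverse(r & s) == reverse(r) & reverse(s)                  # automorphism of &intersect;
assert reverse(reverse(r)) == r                                   # reverse is self-inverse
```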
I'll describe a relation as left-reflexive precisely if every
left value is a fixed point; and as right-reflexive precisely if every right
value is a fixed point. The reverse of a right-reflexive relation is trivially
left-reflexive; and vice-versa. I'll describe a
relation as reflexive
precisely if it is both
left-reflexive and right-reflexive – i.e. all its values are fixed
points; r is reflexive
precisely if r relates x to y implies x in r
and y in r
. The reverse of a reflexive relation is trivially reflexive; the
reverse of a right-reflexive relation is left-reflexive, and vice
versa.
I'll describe a relation, r,
as transitive
precisely if r subsumes r&on;r. This means
that, if r relates x to y and y to z, then r also relates x to z. The reverse
of any transitive relation is necessarily transitive. The intersection of two
transitive relations is necessarily transitive. The intersection of a
transitive relation with its reverse is trivially transitive and symmetric under
reversal.
I'll describe a relation r as cotransitive precisely if r&on;r subsumes
r. Every left-reflexive or right-reflexive relation is cotransitive. A
relation r which is both transitive and cotransitive is equal to its
self-composite, r = r&on;r. For example, subsume is both transitive and
reflexive, hence cotransitive, so subsume&on;subsume = subsume. Every
transitive fixed point of reverse is reflexive (since, if it's r and relates x
to y, r (as its own reverse) also relates y to x, hence r&on;r relates x via y
to x and y via x to y; and r, being transitive, subsumes r&on;r, so every value
of r is a fixed point of r). Note, however, that r&on;r subsumes r
, or
even r&on;r = r
, need not imply that r is reflexive (e.g. when r is <
on the rationals or reals – indeed, I've even toyed with using r&on;r =
r but r has no fixed points
as a specification of a continuum
ordering
.)
These properties are the ingredients of equivalence (see below); and they interact well with repetition.
Given relations r, f and g I write (f:r:g) for the relation which relates x to y precisely if x is a right value of f, r relates x to y and y is a left value of g. Omitting either f or g skips the condition involving it; in particular, (:r:) is simply r, on account of which I'll commonly use (: … :), rather than plain parentheses, to enclose any relation I deem it worth-while to set off from the surrounding text.
For any relation r, I
introduce end-relations
(|r:) and (:r|) on r's left and right values,
respectively, defined by:
In so far as r is constructible, so also are (|r:) and (:r|). When r is a collection (or, see below, any other equivalence), it is equal to both (|r:) and (:r|). I refer to (|r:) as r's left-relation and (:r|) as its right-relation.
Since subsume is transitive and reflexive, so are both (|r:) and (:r|), for any relation r. In particular, every left value of r is a fixed point of (|r:) and every right value of r is a fixed point of (:r|), so we can write r's collection of left values as {x in (|r:)} and collection of right values as {x in (:r|)}; each is constructible whenever r is. If reverse relates r to s then (|r:) = (:s|); in particular, r = reverse(r) implies (|r:) = (:r|).
Thus any transitive reflexive relation – notably including any end-relation – is necessarily equal to its left-relation and to the reverse of its right-relation. In particular, (|subsume:) = subsume = reverse(:subsume|) and, as this applies to end-relations, the mappings (: (|r:) ←r :) and (: reverse(:r|) ←r :) are reflexive (a.k.a., for functions, idempotent: every left value is a fixed point).
For any relation, r, the composites (|r:)&on;r and r&on;reverse(:r|) are both equal to r:
If (|r:)&on;r relates x to y, it must do so via some z which r relates to y, while (|r:) must relate x to z; but then {w: r relates x to w} subsumes {w: r relates z to w} and y is in the latter, hence the former, so r relates x to y; thus r subsumes (|r:)&on;r. Likewise, if r&on;reverse(:r|) relates x to y it must do so via some z to which r relates x and (:r|) relates y; but then {w: r relates w to y} subsumes {w: r relates w to z}, of which x is a member, so r relates x to y and r subsumes r&on;reverse(:r|). As (|r:) subsumes the identity on r's left values and (:r|) subsumes the identity on its right values, the composites both also subsume r, hence are equal to it. QED.
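The identity just proved can be checked in the finite-set model; here I take (|r:) to relate x to z precisely if z is a left value of r and r relates x to everything it relates z to, consistent with its use in the proof above (a reconstruction on my part):

```python
def compose(r, s):
    return {(x, z) for (x, y) in r for (w, z) in s if y == w}

def after(r, z):
    """{y: r relates z to y}"""
    return {y for (x, y) in r if x == z}

def left_relation(r):
    """(|r:) relates x to z precisely if z is a left value of r and
    {y: r relates x to y} subsumes {y: r relates z to y}."""
    lefts = {x for (x, _) in r}
    return {(x, z) for x in lefts for z in lefts if after(r, z) <= after(r, x)}

r = {(1, 'a'), (1, 'b'), (2, 'a')}
assert compose(left_relation(r), r) == r  # (|r:)&on;r = r
```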
For any relations f and g, (|f&on;g:) subsumes (|f:g) and (:f&on;g|) subsumes (f:g|).
Proof: suppose
(|f:g) relates x to z; so {y in (|g:): f relates x to y} subsumes {y in (|g:): f
relates z to y} and neither is empty; in particular, f does relate z to some y
in (|g:) and f&on;g relates z to every w for which g relates y to w; and there
are some such w. Now suppose f&on;g relates z to some w; implicitly, f relates
z to some y which g relates to w; but then f must also relate x to y, hence
f&on;g relates x to w as well. Thus {y: f&on;g relates x to y} subsumes {y:
f&on;g relates z to y}, so (|f&on;g:) relates x to z. Thus (|f:g) relates x
to z
implies (|f&on;g:) relates x to z
, i.e. (|f&on;g:) subsumes
(|f:g).
Analogously, suppose (f:g|) relates x to z; so {y in (:f|): g relates y to x} subsumes non-empty {y in (:f|): g relates y to z}; composing f on the left of either of these yields {w: f&on;g relates w to x} subsumes {w: f&on;g relates w to z}; thus (:f&on;g|) subsumes (f:g|).
At the same time, note that (: x←x :f&on;g) = {x in (|f&on;g:)} = {x in (|f:g)}, which (|f:g) subsumes; and (f&on;g: x←x :) = {x in (:f&on;g|)} = {x in (f:g|)}, which (f:g|) subsumes.
I'll describe a relation as an equivalence
precisely
if it's a transitive fixed point
of reverse; as noted above, this necessarily implies that
it is reflexive. Because it is transitive and
reflexive, any equivalence is necessarily equal to its left-relation and the
reverse of its right-relation; because it is a fixed point of reverse, this
means it is also equal to its right relation.
Orthodoxy specifies an equivalence as reflexive, symmetric and
transitive, without inferring the first from the other two; it needs to do so
only in order to insist that the relation is an equivalence on all of
some specified set. Thus, where the definition specifies a set X and a relation
R on it, it fails to say the equivalent of (X|R:) or (:R|X); so, in requiring
reflexivity, R relates a to a for all a in X
, the part it actually needs
is all a in X
, not relates a to a
.
Notably, every collection, including empty, is an equivalence, as is the universal all-relation. An all-relation is an equivalence precisely if its collections of left and right values are equal. The intersection or union of any relation with its reverse is necessarily a fixed point of reverse; if the original relation was transitive, so is the intersection, which is thus necessarily an equivalence. Thus, for any transitive relation r, I'll describe two values as r-equivalent precisely if r relates each to the other (hence r&intersect;reverse(r) does the same). The intersection of any reflexive relation with itself has all the same values (they're all fixed points, both ways round). So the intersection of any reflexive transitive relation r with its reverse is an equivalence on all of r's values.
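For a concrete finite-model instance (relations as sets of pairs, my assumption): intersecting the transitive reflexive relation ≤ on a finite range with its reverse yields an equivalence, here simply the identity on that range:

```python
def reverse(r):
    return {(y, x) for (x, y) in r}

def compose(r, s):
    return {(x, z) for (x, y) in r for (w, z) in s if y == w}

le = {(x, y) for x in range(4) for y in range(4) if x <= y}  # transitive and reflexive
e = le & reverse(le)  # intersect a transitive relation with its reverse

assert compose(e, e) <= e   # transitive
assert reverse(e) == e      # a fixed point of reverse (symmetric)
assert e == {(x, x) for x in range(4)}  # here, the equivalence is just the identity
```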
In particular, every end-relation is reflexive and transitive. For any
given relation r, intersecting either (:r|) or (|r:) with its reverse will yield
an equivalence; I call these the end-equivalences
of the relations (and
used to, before 2004/June, use (:r|) and (|r:) to denote them; this does not
change which values are their fixed points). I'll describe the end-equivalence
thus obtained from (|r:) as r's left-equivalence and that from (:r|) as r's
right-equivalence. Thus two values are (|r:)-equivalent precisely if r relates
them to exactly the same values (of which there are some); while they are
(:r|)-equivalent precisely if r relates exactly the same values to them (of
which there are some). In particular, every left value of r is (|r:) equivalent
to itself and every right value of r is (:r|)-equivalent to itself.
For any relation f, f&on;reverse(f) is a relation among f's left values; it is symmetric and reflexive; and it subsumes (|f:). Likewise, reverse(f)&on;f is a symmetric reflexive relation among f's right values and subsumes (:f|). [Forward reference: mapping and monic.] When f is either a mapping or a monic, f&on;reverse(f) = (|f:) and reverse(f)&on;f = (:f|) are both equivalences; for a mapping, the former is a collection (of outputs); for a monic, the latter is (of inputs).
I'll say that one relation, e, conforms to another, c, precisely if c relates at least one
value of e to each value of c and, whenever c relates one value of e to another,
so does e. I'm mainly concerned to apply this to end-relations, which are
transitive and reflexive; in this case, e conforms to c
can be encoded
as {x in (:e&on;c|)} = {x in (:c|)} and e subsumes (e:c:e)
. Composing c
on the right of the former, with e and c reflexive and transitive, we can infer
that: c relates u to v
implies {x in (:e|): c relates x to v} subsumes
{x in (:e|): c&on;c relates x to v} subsumes {x in (:e|): c relates x to u}
implies (e:c|) relates v to u
; thus the first condition implies that
reverse(e:c|) subsumes c; which, in turn, implies the given first condition, so
the two are equivalent. Thus, for transitive reflexive e, e conforms to
c
precisely if reverse(e:c|) subsumes c and e subsumes (e:c:e)
.
I'll describe a relation, r, as being from [{within; all of}] right precisely
if: every right value of r is a left value of right; and either "within" is
specified or (:r|) conforms to (|right:). In the latter case, I'll write
(:r|right), which makes the given assertions. For the sake of practical
match-up with orthodox language further down the line, from's default is
"from all of", making the covering assertion; but "from within" just asserts
that (|right:) subsumes {x in (:r|)}, i.e. every right value of r is a left
value of right.
I'll describe a relation, r, as being [{in; on}] to left precisely if: every
left value of r is a right value of left and, if "on" is used, (|r:) conforms
to (:left|). In the latter case, I'll write (left|r:), which makes the given
assertions. In contrast to from, but again for match-up with orthodox
language, the default form of "to" is "in to" (or "into") which only asserts
that (:left|) subsumes {x in (|r:)}; use of "on to" (or "onto") makes the
added assertion that r's left-relation conforms to left's right-relation.
As specified, these use a general relation as the constraint, left or right. These may, of course, be collections or equivalences, which are equal to their own end-relations. I find these cases usually make for clearer illustrations, as well as more closely matching the orthodox usages I allude to above.
I'll combine these two in the obvious way – given collections A and Z,
a relation from A to Z
is (Z:r|A) or from all of A in to Z
, which
will match up with orthodox language for functions (as contrasted with partial
functions, which are from within
); r will be epimorphic
if (it's a
function and) it's (Z|r:A) or from within A on to Z
. The corresponding
statements for more general relations (rather than collections and functions)
take account of pertinent equivalences, factoring out the need to operate on
values via equivalence classes of which they are members.
With this notation, when x and y are unambiguous, we can (and I usually shall) write any atom x←y as ({x}: |{y}), ({x}| :{y}) or ({x}| |{y}); each stipulates that the relation does have a left or right value and only allows one left value and one right value. The form ({x}: :{y}) is ambiguous, denoting either x←y or empty. If either x or y is ambiguous, x←y isn't an atom; it also isn't ambiguous, as it quantifies over the ambiguities in x and y, i.e. it relates u to v in so far as x can take the value u while y takes the value v. Each of the forms ({x}: |{y}) and similar (with : and | changes) then ambiguously denotes any sub-relation of x←y subject to each constraint any | in it imposes. For example, ({x}: |{y}) does have, as right values, all the values y can take; and only relates u to v if x can take the value u while y takes the value v; but need not relate every such u to any given v, as long as it does relate at least one such u to each such v.
To clarify the definitions, pause to consider relations f, g with (:f|g) and see what the definitions say. We are given that (:f|) conforms to (|g:), i.e. that (|g:) relates at least one value of (:f|) to each value of (|g:) and, whenever (|g:) relates one value of (:f|) to another, so does (:f|). The first half of this says that, for every left value y of g, there's a right value x of f that g relates to everything g relates y to; at least for the purposes of f&on;g, x makes y redundant, so we can construe this as saying f's right values include enough of g's left values that any omissions don't matter. Consequently, every right value of g is a right value of f&on;g.
Now, given (:f|g), suppose (:g|) relates u to v; so whenever g relates x to v it also relates x to u. As v is a right value of g, it is also a right value of f&on;g; so suppose f&on;g relates y to v; thus f relates y to some x that g relates to v; g thus also relates this x to u and f&on;g relates y to u also; so (:f&on;g|) subsumes (:g|).
The second half of what (:f|g) tells us is that, whenever (|g:) relates one
right value of f to another, so does (:f|). This matters when g is a mapping
modulo c
, for some equivalence c on its right values; that is, g might
relate several left values to some right value z but all of these left values
are c-equivalent, so c considers g a mapping; if f is a mapping, then f&on;g
shall then be a mapping precisely if (:f|g).
Conversely, of course, (f|g:) implies every left value of f is a left value of f&on;g and (|f&on;g:) subsumes (|f:).
Theorem: if g is from [within] A onto f and f is from g [on] to Z then f&on;g is (correspondingly) from [within] A [on] to Z.
Proof: The nature of composition ensures
that every left value of f&on;g is a left value of f, hence a right value of Z;
while every right value of f&on;g is a right value of g, hence a left value of
A; this duly takes care of the case using within
but not on
(so
implicitly merely into Z); that f&on;g is from within A into Z follows from g
being from within A and f being into Z. For the other cases, observe that f is
from (all of) g and g is onto f, i.e. (:f|g) and (f|g:). As noted above, these
imply that (:f&on;g|) subsumes (:g|) and (|f&on;g:) subsumes (|f:). In
particular, given the earlier consequence of the nature of composition, we have
{left values of f} = {left values of f&on;g} and {right values of g} = {right
values of f&on;g}.
For the on
case, we have (Z|f|g) and (f|g:A) and must show
(Z|f&on;g:). Now (Z|f:) says (|f:) conforms to (:Z|); as (|f&on;g:) subsumes
(|f:) and (thanks to {left values of f} = {left values of f&on;g}) has exactly
the same values, whatever (|f:) conforms to, (|f&on;g:) also conforms to.
Consequently, (|f&on;g:) conforms to (:Z|), i.e. (Z|f&on;g:), which is what we needed to prove.
For the case without within
(so implicitly from all of A), we have
(Z:f|g) and (f|g|A) and must show (:f&on;g|A). Now, (:g|A) says (:g|) conforms
to (|A:); as (:f&on;g|) subsumes (:g|) and (since {right values of g} = {right
values of f&on;g}) has exactly the same values, (:f&on;g|) also conforms to
(|A:). Thus (:f&on;g|A) follows from (f|g|A), and I have shown what was required.
I define three types
– monic
, mapping
and iso
– that refine
the from and to type
specifiers given above. These stipulate uniqueness properties, expanded enough
to allow unique modulo
a suitable equivalence.
In these denotations, left and right must, when present, denote relations and their respective end-relation constraints (as above) are implied; mapping, monic or iso – depending which was used – further constrain the type, in a manner which depends on these end-constraints, if present. If name is absent, the denotation identifies a type; otherwise, name must be an expression (it can indeed be a simple name), in which case the whole denotation asserts that this expression denotes a value of that type. Let r be a relation, for which we must answer whether it is of the specified type. If left is omitted, take Z = (:x←x:r), otherwise take Z = (:left|); if right is omitted, take A = (r:x←x:), otherwise take A = (|right:); then, variously, use of:
mapping asserts that "(Z:r:) relates h to a, i to b and A relates a to b" implies "Z relates h to i";
monic asserts that "(:r:A) relates h to a, i to b and Z relates h to i" implies "A relates a to b";
iso asserts both of these conditions.
Grammatically, iso
and monic
are adjectives; when a noun is
needed a text may treat either as if it were a noun, tacitly applied
to relation
as noun; either can equally be used to qualify some other
noun. When mapping
is qualified by some adjective – or adjectival
clause – it is common to contract it to map
– for
example, a linear map
is just a mapping which
is linear.
An iso – a.k.a. monic mapping – is also known as
a correspondence
: if qualified as from A onto Z
, it
identifies the equivalence classes of (|A:) with those of (:Z|), one-to-one; if
unqualified – i.e. implicitly qualified on its end-collections – or
only qualified on collections, it identifies its left and right values
one-to-one – each equivalence class of a collection is a singleton.
An iso whose left and right equivalences coincide is known
as a permutation
of this equivalence; any composite of
permutations of a given equivalence is a permutation of that equivalence; any
union of permutations of disjoint equivalences is a permutation (of the union of
the equivalences); and every equivalence (including every identity,
i.e. collection) is a permutation (of itself). A permutation of one collection,
C, of values can thus be extended to a permutation of any collection, B, that
subsumes C, by uniting the initial permutation with (the identity on) the
collection of values in B but not in C. It is thus common to ignore any fixed
points of a permutation; it acts on them merely as it would if we were extending
it to a collection of which they are members. When considered as a permutation,
an iso may thus be tacitly interpreted as a representative of the whole family
of permutations obtained from it by uniting with such disjoint identities; in
particular, a swap
is a permutation that interchanges two
values, relating a to b and b to a for some distinct a and b, while acting only
as identity on any other values; this is written swap(a, b) = a↔b and is
necessarily equal to swap(b, a) = b↔a; although only a and b get mentioned,
swap(a, b) tacitly acts as the identity on some collection of values other than
a and b.
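A swap can be sketched in the finite-set model; since the text treats a swap as tacitly extended by the identity on some carrier of other values, the carrier here is an explicit parameter (my assumption for the sake of a finite model):

```python
def swap(a, b, carrier):
    """a↔b: interchange a and b, acting as the identity on the rest of carrier."""
    assert a != b and {a, b} <= carrier
    return {(a, b), (b, a)} | {(x, x) for x in carrier - {a, b}}

s = swap(1, 2, {1, 2, 3})
assert s == {(1, 2), (2, 1), (3, 3)}
assert s == swap(2, 1, {1, 2, 3})  # swap(a, b) = swap(b, a)
```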
The defaults for left
and right are collections, which leads to mapping,
monic and iso being uniqueness requirements when unadorned (these match their
orthodox meanings). When C is a collection, C relates e to g
just says e
= g is in C; the constraints above are thus simplified to: mapping (:r:A)
relates h to a and i to a
precisely if h = i
; and monic (Z:r:)
relates h to a and h to b
precisely if a = b
. A mapping to a
collection has a unique output (left value) for each input (right value);
correspondingly, a monic from a collection has a unique right value for each
left value (though it may use the same right value for several left values, just
as a mapping may produce the same output in response to several inputs). More
generally, a mapping pulls back an equivalence on its left values to imply one
on its right values; while a monic pushes one forward the other way.
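With collections as the defaults, the unadorned mapping and monic conditions are plain uniqueness checks, easy to state in the finite-set model (my assumption, as in earlier sketches):

```python
def is_mapping(r):
    """Unique left value (output) per right value (input):
    h←a and i←a imply h = i."""
    return all(h == i for (h, a) in r for (i, b) in r if a == b)

def is_monic(r):
    """Unique right value per left value: h←a and h←b imply a = b."""
    return all(a == b for (h, a) in r for (i, b) in r if h == i)

f = {('x', 1), ('y', 2), ('x', 3)}  # x←1, y←2, x←3
assert is_mapping(f)      # each input (right value) has one output
assert not is_monic(f)    # 'x' is the output for two different inputs
```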
The form of the specifications is a special case of our relation (that is
iso, monic or a mapping) respecting some other structure on its collections of
end-values; in this case, the end-relations Z and A, with the
mapping propagating
A's structure on r's right values to Z's structure on
r's left values; the monic propagates structure in the reverse direction; and
the iso identifies corresponding structure at both ends. This general pattern,
of a relation reflecting some structure on one end-collection as a structure on
the other, makes the study of such relations a natural companion to the study of
any kind of structure one may have on a collection. For a given kind of
structure, a relation that respects it is known as
a homomorphism of the given structures on the
relation's collections of left and right values.
For f = (Z:f:A) to be a mapping from A to Z, the specification requires (:Z|) to subsume f&on;(|A:)&on;reverse(f), which certainly subsumes f&on;reverse(f) and hence (|f:A); while monic requires (:A|) to subsume reverse(f)&on;(:Z|)&on;f, which certainly subsumes reverse(f)&on;f and hence (Z:f|). For each z in (|f:), f&on;reverse(f) relates z, via each a in ({z}:f|), to each x in (|f:) which f relates to a member of ({z}:f|); when Z is a collection, subsuming f&on;reverse(f), z must be the only such x, though there may be several a in ({z}:f|); so each right value, a, of f has exactly one left value related to it, making f(a) an unambiguous denotation.
The reverse of a monic relation is a mapping; the reverse of a mapping is monic. Consequently, though I shall concentrate my attention on the world of mappings, everything I prove about mappings delivers a natural dual truth about monics; but I shalln't always go into its details, given that they're simply obtained by judicious application of reversal.
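This duality is easy to exercise in the same set-of-pairs model (an illustrative Python sketch, not part of the text's formalism):

```python
def reverse(r):
    # reverse(r) relates x to y precisely if r relates y to x.
    return {(x, y) for (y, x) in r}

def is_mapping(r):
    # Each right value has exactly one left value.
    out = {}
    return all(out.setdefault(right, left) == left for left, right in r)

g = {(1, 'p'), (2, 'p'), (3, 'q')}   # monic: each left value has one right value
assert is_mapping(reverse(g))        # the reverse of a monic is a mapping
assert not is_mapping(g)             # (g itself is not: input 'p' has two outputs)
```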
Any right value of a right-reflexive relation, f, is a fixed point of f; if, furthermore, f is a mapping to a collection, its inputs are its right values, hence fixed points of f, so f is an identity; thus any right-reflexive mapping to a collection is a collection. Likewise, a left-reflexive monic from within a collection is a collection.
A left-reflexive mapping is also, at least in some contexts, described as
a projection. Given a mapping f, saying that it is left-reflexive
amounts to saying that it acts as the identity on its outputs (left values); it
maps each output only to itself. We can thus infer that f = f&on;f and, in
particular, that every projection is transitive.
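The inference f = f&on;f can be checked in the set-of-pairs model (an illustrative Python sketch; the projection p below is my example):

```python
def compose(f, g):
    # f&on;g relates z to x precisely if f relates z to some y that g relates to x.
    return {(z, x) for (z, y) in f for (w, x) in g if w == y}

# A left-reflexive mapping (projection): it fixes each of its outputs.
p = {('a', 'a'), ('a', 'x'), ('b', 'b'), ('b', 'y')}
assert compose(p, p) == p   # p = p&on;p, so p is transitive
```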
Theorem: any composite of monics between collections is monic; any composite
of mappings between collections is a mapping. Proof: suppose f
and g are monic. If f&on;g relates some x to some y and to some z, infer
that f relates x to some u which g relates to y and f relates x to some v which
g relates to z; but f is monic, so u is v and g relates it to both y and z; but
g is monic, so y is z. Thus f&on;g is monic. One can reason likewise for
mappings, or simply exploit the fact that their reverses are monic, so have
monic composite (in the reversed order), whose reverse is thus a mapping; yet
equal to the original composite of mappings.
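A small instance of the theorem, again with relations as sets of (left, right) pairs (an illustrative Python sketch):

```python
def compose(f, g):
    # f&on;g relates z to x precisely if f relates z to some y that g relates to x.
    return {(z, x) for (z, y) in f for (w, x) in g if w == y}

def is_monic(r):
    # Each left value has exactly one right value.
    rights = {}
    return all(rights.setdefault(left, right) == right for left, right in r)

f = {('u', 1), ('v', 2)}     # monic
g = {(1, 'a'), (2, 'a')}     # monic (shared right value 'a' is allowed)
assert is_monic(f) and is_monic(g) and is_monic(compose(f, g))
```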
Generalisation: if g is monic from within A onto f and f is monic from g to Z then f&on;g is monic from within A to Z; if g is a mapping from within A onto f and f is a mapping from g to Z then f&on;g is a mapping from within A to Z. We are given (:f|g) and (f|g:) so, as we saw above, (|f&on;g:) subsumes (|f:) and the two have the same values while (:f&on;g|) subsumes (:g|) and these two likewise have the same values; furthermore, when relevant, (:g|A) implies (:f&on;g|A) and (Z|f:) implies (Z|f&on;g:). So suppose (f|g:A) and (Z:f|g) are mappings and f&on;g relates z to b and x to a; in each case, it necessarily does so via an intermediate, so f relates z to some j that g relates to b and f relates x to some i that g relates to a; if A relates a to b then, as (f|g:A) is a mapping, (:f|) relates i to j; as f relates z to j, this means f also relates z to i; as (Z:f|g) is a mapping and relates both x and z to i, we infer that Z relates z to x; this being true whenever A relates a to b while f&on;g relates x to a and z to b, we have shown (Z:f&on;g:A) is a mapping. Likewise, for monic (f|g:A) and (Z:f|g), if f&on;g relates z via j to b and x via i to a, while Z relates x to z, monic (Z:f|g) tells us that (|g:) relates i to j; as g relates j to b, it thus also relates i to b; as (f|g:A) is monic and relates i to both a and b, we infer that A relates a to b; thus (Z:f&on;g:A) is likewise monic.
Reverse just corresponds to swapping the names used in the
two-name relationship underlying a relation. For a
many-name relationship, one may wish to perform an arbitrary permutation on the
names: to describe such permutations in terms of the relations by which
currie-ing encodes the relationship is, in general, somewhat fiddly – it
is usually better to use some other encoding of the many-name relationship, if
such permutation will often be needed. For example, one can encode the
many-name relationship as a relation whose right values are lists of the
successive right values to be supplied to: the relation encoding it, the left
values thereof, the left values of these, and so on; one can then compose a
permutation with this list – and, optionally, re-currie the result to undo
the enlisting
operation.
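A minimal sketch of that encoding (Python; the three-name relationship z = x - y and the helper name `permute_names` are my illustrative choices):

```python
# A three-name relationship, here "z = x - y", encoded as a relation whose
# right values are tuples (lists) of the successive right values to be supplied.
triples = {(x - y, (x, y)) for x in range(3) for y in range(3)}

def permute_names(r, perm):
    # Compose a permutation with the tuple-valued right values of r.
    return {(z, tuple(args[i] for i in perm)) for (z, args) in r}

swapped = permute_names(triples, (1, 0))   # interchange the two right-hand names
# swapped now encodes "z = y - x":
assert swapped == {(y - x, (x, y)) for x in range(3) for y in range(3)}
```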
However, the next most common case after reversal is when we swap the two
values combined by a binary operator – known as transposing the
binary operator. If I construe a binary operator, *,
in its aspect as the relation (: c← (: c=b*a; b←a :) :), I can express
transposition by composing this relation to the left of reverse. However, it is
more usual (and useful) to encode the binary
operator as (: (: b*a ← b :) ← a :) which obliges us to actually do
some work to express transposition.
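The work involved can be sketched in Python, with the curried encoding's left values represented as (hashable) frozensets of pairs; `curried` and `transpose` are my illustrative names, not the text's:

```python
def curried(op, lefts, rights):
    # Encode * as (: (: b*a ← b :) ← a :): each right value a is related to
    # the mapping (: op(b, a) ← b :), represented as a frozenset of pairs.
    return {(frozenset((op(b, a), b) for b in lefts), a) for a in rights}

def transpose(r):
    # t(a) relates c to b precisely if r relates, to b, some s relating c to a.
    gathered = {}
    for s, b in r:
        for c, a in s:
            gathered.setdefault(a, set()).add((c, b))
    return {(frozenset(t), a) for a, t in gathered.items()}

sub = curried(lambda b, a: b - a, range(3), range(3))
# Transposing the encoding of b - a yields the encoding of a - b:
assert transpose(sub) == curried(lambda b, a: a - b, range(3), range(3))
```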
I'll describe one relation, r, as transpose to
another, s, precisely
if: whenever s relates u to a and u relates c to b, r relates some v to b for
which v relates c to a; I'll say that two relations, r and s,
are mutually transpose
precisely if each is transpose to
the other. That this is symmetric is immediate from the definition. (It is
neither transitive nor reflexive, although it does have some fixed points. It
entirely ignores any left values of r or s that are not relations or that are
empty. This may cause it to ignore some right values, to which either r or s
relates only such ignored left values.)
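As a sketch of the definition in the set-of-pairs model (Python; left values that are themselves relations appear as frozensets of pairs, and non-relation left values are ignored, as the text notes):

```python
def is_transpose_to(r, s):
    # r is transpose to s: whenever s relates u to a and u relates c to b,
    # r relates some v to b for which v relates c to a.
    for u, a in s:
        if not isinstance(u, frozenset):
            continue   # left values that are not relations are ignored
        for c, b in u:
            if not any(isinstance(v, frozenset) and (c, a) in v
                       for v, b2 in r if b2 == b):
                return False
    return True

def mutually_transpose(r, s):
    return is_transpose_to(r, s) and is_transpose_to(s, r)

r = {(frozenset({(1, 'x')}), 'y')}
s = {(frozenset({(1, 'y')}), 'x')}
assert mutually_transpose(r, s)
```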
Using the extended bulk action of &unite;, we can define a mapping U by (: ({non-empty}: unite(: t :{x}) ←x :) ←t :{relations}); its left values (outputs) are mappings, by specification. Whenever t relates w to i and w relates k to j, U(t, i) also relates k to j because w is one of the left values of (: t :{i}) united to make U(t, i); because U(t, i) is this union, whenever it relates k to j there is some w that also relates k to j for which t does relate w to i. If we now look at mutually transpose r, s we find that r and U(s) are mutually transpose; as are U(r) and s; as are U(r) and U(s); indeed, each of these pairs is mutually transpose precisely if any of the others is. Consequently the equivalence (:U|), which relates p to r precisely if U(p) = U(r), identifies p with r precisely if, whenever either is mutually transpose to any s, so is the other. Furthermore, U&on;U = U, so U maps each right value to a mapping that's equivalent to it for the purposes of mutual transpose. (When we consider deriving a binary operator from a ({relations}: :) relation, U also mediates reduction of that relation to a canonical mapping that generates the same binary operator.) It thus makes sense to define the transpose of each relation r to be the U(s) of all s for which r, s are mutually transpose. Thus I define…
The transpose, t, of a relation, r, is a mapping defined by:
t(a) relates c to b precisely if r relates, to b, some s which
relates c to a:
Thus transpose(r) may be re-cast as ({non-empty}: (: c ←b; r(b) relates c to a :) ←a :), or, indeed, (: r(b,a) ←b :) ←a; for contrast, r = (: (: r(a,b) ←b :) ←a :) whenever r is a relation whose left values are all relations. Note that the {non-empty} constraints (here and, above, for U) are needed to prevent transpose(r) from relating empty to every b that is not a right value of any left value of r. It should be clear that any relation r is mutually transpose with transpose(r); and that transpose's restriction to its own outputs (its left values) is self-inverse; indeed, transpose&on;transpose = U and each end-equivalence of transpose is equal to that of U (so (: transpose |) = (: U |) and they have the same collections of left values).
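The definition can be sketched directly (Python, relations as sets of (left, right) pairs, relation-valued left values as frozensets; a minimal illustration, not the text's formal apparatus):

```python
def transpose(r):
    # transpose(r)(a) relates c to b precisely if r relates, to b,
    # some s which relates c to a; only non-empty outputs arise,
    # matching the {non-empty} constraint.
    gathered = {}
    for s, b in r:
        if isinstance(s, frozenset):   # ignore non-relation left values
            for c, a in s:
                gathered.setdefault(a, set()).add((c, b))
    return {(frozenset(t), a) for a, t in gathered.items()}

r = {(frozenset({(10, 'u'), (20, 'v')}), 'p'),
     (frozenset({(30, 'u')}), 'q')}
t = transpose(r)
assert t == {(frozenset({(10, 'p'), (30, 'q')}), 'u'),
             (frozenset({(20, 'p')}), 'v')}
# r here is already a mapping of U's canonical form, so transpose is
# self-inverse on it: transpose&on;transpose acts as U.
assert transpose(t) == r
```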
Transpose gets to play an important rôle in conjunction with the cartesian product, as well as its more obvious rôle for binary operators.
Written by Eddy.