When discussing any type of mathematical structure, it is common to also
take an interest in how such structures relate to one another; this usually
takes the form of a mapping, from the values acted on by one structure to those
acted on by another, which (in some sense) respects the structure. Since we
care about the given structures on the values at either end, not just the
outputs and inputs of the mapping, it is necessary to adorn the mapping, or
more generally a relation, with knowledge of structures to be considered in
connection with the values on either end of it. I can then say that the
resulting entity morphs one structure to another while mapping the former's
values to the latter's (or relating the latter's values to the former's).

Most of the truths general to homomorphisms are addressed by category theory, which essentially grew out of modeling exactly what all types of homomorphism have in common, so I'll deal with them there (when I get round to it); but some of them are worth stating here for the sake of letting my introductions to specific types refer to this generic discourse to explain what homomorphisms are (and to give some clue to why they're worth discussing at all).

A relation is an entity r for which context gives meaning to statements of
form r relates x to y when suitable denotations for values are supplied in
place of x and y. In the following, for any type T of mathematical structure,
I use the term T-morphism for a relation, r, for which context also gives
meaning to statements of form r morphs A to B when suitable denotations for
structures of type T are used in place of A and B. As for relations, I leave
it to the implementation on any given foundation to decide how to encode this.

We can regard a morphism as a relation with some meta-data attached, about
structures that it morphs between. Given
that morphing has the same form as relating, one can of course associate a
relation R with a morphism r, for which R relates A to B precisely if r morphs A
to B. We can equally extract, from r, the pure relation s (which knows nothing
of any T-structures) that relates x to y precisely if r does. If there is some
one-to-one relation (: M |{T-structures}) whose left values cannot possibly be
values of the kind acted on by a T-structure, then the union of Q =
M&on;R&on;reverse(M) with s would provide us with a pure relation that can be
used to encode r; its restriction to left values of M gives you Q, for which
reverse(M)&on;Q&on;M = R; and it naturally partitions into a union of Q with s,
which we can thereby recover to learn what r relates to one another. This
should (as long as there is some suitable M for each type T) provide a mechanism
by which formalisms capable of encoding relations can likewise encode morphisms;
however, I do not presume that this is the actual mechanism used. It suffices,
for my purposes, that an entity can know what it relates to what and know what
it morphs to what.

As for relations, we can reverse a morphism; when doing so, we not only reverse it as a relation but also reverse its morphing. Thus r morphs A to B precisely if reverse(r) morphs B to A. The typical T-morphism of interest, below, shall morph some T-structure A on some collection U of values to some T-structure B on some collection V of values while, at the same time, being a relation (V: :U). Its reverse shall, in this case, morph B to A while being a relation (U: :V); thus if r meets this minimal requirement to be of interest, so does its reverse.
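This minimal picture (a relation bundled with knowledge of what it morphs from and to, whose reversal reverses both the relating and the morphing) can be sketched in Python. This is my own hypothetical encoding, using plain labels for structures; the text deliberately leaves the real encoding to the chosen foundation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Morphism:
    src: str          # hypothetical label for the T-structure morphed from
    dst: str          # hypothetical label for the T-structure morphed to
    pairs: frozenset  # pairs (v, u): the relation relates v (left) to u (right)

    def reverse(self):
        # Reversal swaps the relation's pairs and the direction of morphing.
        return Morphism(self.dst, self.src,
                        frozenset((u, v) for (v, u) in self.pairs))

r = Morphism("A", "B", frozenset({(10, 1), (20, 2)}))
back = r.reverse()
assert back.src == "B" and back.dst == "A"  # reverse(r) morphs B to A
assert back.reverse() == r                  # reversing twice recovers r
```

So if r is a relation (V: :U) morphing A (on U) to B (on V), its reverse is (U: :V) morphing B to A, exactly as the text requires.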

For any given structure A on a collection U of values, we can
extend the identity mapping on U to a morphism that also morphs A to A. This
duly satisfies our minimal requirement to be of interest and I'll describe it as
the identity morphism on A. When composed after any morphism to A, or
before any morphism from A, the composite is the given morphism (albeit possibly
stripped of some knowledge of other structures it may morph from or to).

For any given type T of structure, we may have a concept of sub-structure:
when a restriction B of some T-structure A is itself a T-structure, we describe
B as a sub-T of A. Every T-structure is fatuously a sub-T of itself. When B
acts on a collection V of values subsumed by the collection U of values on which
A acts, we can construe the identity on V as a relation (U: :V), so we use this
relation to morph from B to A while satisfying my minimal requirement to be of
interest. Thus we can extend the identity morphism on B to also morph from B to
any T-structure of which B is a sub-T; I refer to the result as the identity
embedding of B in each such T-structure.

We can define a composition on morphisms
exactly as for relations, extended to apply also to
T-morphing by: r&on;s morphs A to C precisely if there is some T-structure B for
which s morphs A to B and r morphs B to C. We can then restrict this
composition to only apply where the resulting composite *does* morph some
structures; if s doesn't morph to some T-structure that r morphs from, then the
morphic composition of T-morphisms doesn't combine r with s at all, even though
the raw morphic composition (like that on relations) does. We thus limit
composition to the case where the right operand morphs to the same T-structure
that the left operand morphs from.
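The restriction on composition can be sketched as follows; this is a Python toy of my own, with structures again reduced to hypothetical labels, refusing the composite unless the right operand morphs to what the left operand morphs from:

```python
def compose(r, s):
    # r∘s is only formed when s morphs to the structure that r morphs from.
    if r["src"] != s["dst"]:
        raise ValueError("s must morph to the structure that r morphs from")
    # Relational composition: r∘s relates z to x when, for some y,
    # s relates y to x and r relates z to y.
    pairs = {(z, x) for (z, y) in r["pairs"]
             for (w, x) in s["pairs"] if w == y}
    return {"src": s["src"], "dst": r["dst"], "pairs": pairs}

s = {"src": "A", "dst": "B", "pairs": {(10, 1), (20, 2)}}
r = {"src": "B", "dst": "C", "pairs": {(100, 10), (200, 20)}}
rs = compose(r, s)
assert rs["src"] == "A" and rs["dst"] == "C"
assert rs["pairs"] == {(100, 1), (200, 2)}
```

The raw relational composition always exists; it is only the morphic bookkeeping that refuses mismatched pairs.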

All of which, thus far, is vastly general but mostly useless; it just gives us a way to associate a relation with some structures, while making sure the association plays nicely with reversal and composition. To get anything more interesting, we need to impose some constraints on the relation to make it play nicely with those structures.

Note that category theory also (orthodoxly) uses the term morphism for the
entities in a category that link the objects together; although related to the
way I'm using the term here (a morphism knows what objects it's from and to),
category theory's usage is more general. I intend the present usage purely
for the purpose of setting up the definitions below, not for general usage
outside this page as context. It remains that my present use of morphism does
indeed give you (for each structure type T) a category, in which we'll now
explore a sub-category that's relatively interesting. Note that category is
here a type of algebraic structure; and, indeed, the study of its
homomorphisms (functors) gives us a category with some interesting structure
of its own.

A type T of mathematical structure is characterised by a set of statements
about the structure that must hold true for that structure to be of the given
type. Typically, for a T-structure A, there are some statements about values
implicated in the structure A, that have to be true for all choices (possibly
subject to some specific constraints) of which values to use in those
statements. If we have a T-morphism (V: f :U) from A to a T-structure B on V,
we can take each such statement, characterising T, as stated for A, and replace
A with B throughout, while also replacing each referenced value in U with some
value in V that f relates to it; we can then ask whether the resulting
statement, about B and values on which it acts, is true. If every relevant
statement in A implies the resulting statement in B, then I'll say the relation
f respects the T-structure on A.

This construction actually requires that f has, as right values, every value that appears in any of the statements that characterise T as applied to A, so that we can perform the necessary transformation into a statement within B. Since these are all the values implicated in A, at least when considered as a T, i.e. all the members of U, the foregoing actually requires (V: f |U), i.e. every member of U is a right value of f (and, if U is an equivalence, f respects it, i.e. (: f |) subsumes U).

It commonly arises that, when we apply the above to just one or a few of the statements about A that make it a T-structure, if a morphism from A to B works for just these statements, the fact that A and B are T-structures suffices to imply that the morphism does respect the T-structure on A. In other cases, there may be some simple condition on the morphism, that isn't exactly what any of the characteristic statements of T transforms to in the above, which serves to imply that the morphism does respect T-structure. In such cases, the few statements or other simple condition suffice as a specification equivalent to requiring that a relation respect T-structure. There are also cases where it is desirable to specify some tighter condition, e.g. that the morphism be a mapping, which makes it possible to simplify the statement of what is required to make the morphism respect T-structure.

Thus a context discussing a type T of mathematical structure may specify
some conditions that a T-morphism must satisfy in order to respect T-structure;
when a T-morphism satisfies those conditions, I shall describe it as a
T-homomorphism. At the same time, for a relation (V: f |U) that morphs
T-structure A on values in U to T-structure B on values in V, if f respects
T-structure, I shall describe f as T-homomorphic. When a mapping is
homomorphic, I shall describe it as a homomorphic map; many contexts define a
homomorphism to be a homomorphic map, excluding more general relations. For
any T-homomorphic map from A to B, I'll describe A as homomorphic to its image
under f, which is the collection of actual left values of f, under the
structure induced within B. (This is typically a sub-T of B.)

Certain contexts have other names for their homomorphisms; those between vector spaces, for example, are described as linear rather than homomorphic. Orthodoxy typically specifies homomorphisms to be functions; these are even more tightly constrained than mappings (they must be between sets, not arbitrary collections); for example, the collection of objects of a category need not form a set (indeed, for the category Set, {sets} is not a set), much less the collection of (in category theory's terms, not those of this page) morphisms of a category; consequently, functors aren't orthodoxly called homomorphisms of categories, for all that it's what the above lets me call them.

When a T-homomorphism morphs a given T-structure into itself, it is known as
an automorphism of that structure (auto- being a prefix meaning self, derived
from ancient Greek, which also supplied homo- meaning same and -morph meaning
shape). In particular, the identity morphism of any T-structure is an
automorphism; and the identity embedding of any sub-T of a T-structure in this
T-structure is a homomorphism.

As discussed for morphisms above, composing
homomorphisms requires that each homomorphism morphs *to* the structure
that the next morphs *from*. For (U: f |V)&on;(W: g |X) to be a valid
composition of T-homomorphisms, we require that W and V be the same and that the
T-structure on W = V to which g is a T-homomorphism must be the T-structure on V
= W from which f is a T-homomorphism; although g's image of X may well be a
sub-T of V = W, we have to distinguish (W: g |X) from the (G| h |X) whose
relation is exactly that of g but for which G is the actual T-structure image of
X within W = V; if G isn't (all of) V, we can't form f&on;h even though we can
form f&on;g and f&on;(V: x←x |G)&on;h. The identity embedding of G in V
has to be made explicit, as in this last, for the composition to be valid.

One might hope (as I initially did) to get away with allowing composition whenever the right operand morphs to a sub-T of what the left operand morphs from; however, if T-structures E, F have a common sub-T G in their intersection, this would let us compose (: e |E) and (: f |F) after some (G: g :), despite there being diverse (E: h :) that we can compose e after but can't compose f after. To make our composition categoric, we need validity of e&on;g and f&on;g, for any g, to imply validity of f&on;h whenever e&on;h is valid. (Orthodox category theory insists explicitly on each morphism having a specific identity (or object, with associated identity morphism) at each end and composition only being valid if one morphism ends on the identity at which the other starts, exactly as my rule is here obliging me to require of homomorphisms.) So we are obliged to make inclusion morphisms such as (E: x←x |G) and (F: x←x |G) explicit, so that e&on;(E: x←x |G)&on;g and f&on;(F: x←x |G)&on;g are valid but neither e&on;g nor f&on;g is (even though the piece omitted in between is, as a relation, simply an identity); then e&on;(E: h :) being valid doesn't oblige us to try to deal with f&on;h. Of course, (E: x←x |G)&on;g is a perfectly good homomorphism (E: :), even though most of E is missing from its actual left values, and it's implemented by the same relation as g; unlike g, we can compose it to the right of e, though not to the right of f.

Although I shall describe relations and mappings as homomorphisms (just as
orthodoxy does), it is important to remember that they are only so in so far as
they are morphisms in the sense above, i.e. they know what they morph from and
to. As relations, they only know what their values are; they know nothing of
any structure being considered on those values. Such structure
often *is* specified by context, so many contexts shall use a collection
of values synonymously with some structure on it; but it is entirely possible
for two structures to act on the same collection of values, in which case it is
important to remember that a homomorphism morphs between structures and not
conflate the structures with their collections of values. A mapping (V: |V) may
be an automorphism of one structure on V or a homomorphism from one structure on
V to another; in the latter case, even though it maps from V to itself, it is
not an automorphism.

When there is a homomorphism from one structure on a given collection of
values to another on the same, composition with this can of course be used to
bridge the gap between homomorphisms to the former and from the latter;
however, such bridging should always be made explicit, both to satisfy the
rules for composition and to avoid leaving readers with a garbled
understanding of what's happening. In any case, such homomorphisms typically
aren't identities (e.g. those from addition to multiplication on {reals} and
its sub-ringlets are exponential), so omitting them won't give a homomorphism
as composite.

When the rules for composition are satisfied, I'll describe a composition of
homomorphisms as proper and denote it by its composition as applied to the
relations. Because each homomorphism in the chain respects the structure,
it is to be expected that any proper composite of homomorphisms is then itself a
homomorphism. However, since I here only sketch the nature of homomorphisms,
each context defining homomorphisms needs to prove that its specification does
indeed lead to proper composites being homomorphisms. Since each homomorphism
has, as right values, all of the values implicated in the structure from which
it morphs, a composite of homomorphisms necessarily morphs from the whole of the
structure that the first (right-most) component morphs from; if (: g |U) morphs
from some structure A on U, then any proper composite f&on;…&on;g is also
(: |U) and morphs from A.

When, for some given structure-type T, any proper composite of T-homomorphisms is indeed a T-homomorphism, we get proper composition as a closed binary operator. As long as the composite morphs from the same T that its right operand morphs from and morphs to the same T that its left operand morphs to, the morphing rules will let us form (f&on;g)&on;h whenever they let us form f&on;(g&on;h), and vice versa; the underlying composition of relations is associative, so we should get the same composite either way round, making our proper composition a combiner. Proper composition of homomorphisms is constrained to use clean composition of relations; both this and our morphing rules preclude its being flat (we refuse to compose certain pairs of homomorphisms) unless there is in fact only one T-structure (hence all T-homomorphisms are to and from that one T-structure, making them all mutually composable).

However, as we always have an identity automorphism on each T-structure, every T-homomorphism can be composed to the left of the identity on what it morphs from and to the right of the identity on what it morphs to, giving the given T-homomorphism as composite; proper composition thus satisfies my first two constraints for being categoric. For the last, we have to consider any a, b for which there is a c for which a&on;c and b&on;c are proper; whatever T-structure c morphs to, both a and b must morph from; then if a&on;d is proper, for any d, this must in fact morph to the same T-structure, ensuring that b&on;d is also proper. Likewise, when there's some c for which c&on;a and c&on;b are proper, both a and b morph to the same T-structure that c morphs from, so any d with d&on;a proper also morphs from this T-structure, ensuring d&on;b is also proper. Thus proper composition, when homomorphisms are so defined as to ensure it's a combiner, gets to be categoric (and we'll be able to use category theory to study it).

A binary operator * combines values a, b to produce a value a*b; there is then some collection V for which, in so far as * knows how to combine a with b to produce a*b, the values of a, b and a*b are in V. A homomorphism from * to another binary operator, @, whose associated collection of values is U, is then a relation (U: f |V) for which, whenever * knows how to combine a with b to make some c as a*b: for each i that f relates to a, j that f relates to b and k that f relates to c, @ does know how to combine i with j to make k as i@j. When f is a mapping and the binary operators are unambiguous, this reads as f(a*b) = f(a)@f(b). In general, f need not be a mapping, nor need a*b and i@j be unambiguous expressions; in such a case, @ may also know how to combine i with j to make some values other than k; indeed, if f relates more than one left value to values (such as c) that * can produce as a*b, @ needs to make these values as i@j as well.
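When f is a mapping and both operators are unambiguous, the condition f(a*b) = f(a)@f(b) can be checked by brute force on a finite sample. The following is my own illustrative sketch, not a proof (a finite check can only refute, or fail to refute, the homomorphic property on the sample tried):

```python
def respects(f, star, at, sample):
    # Finite brute-force check that the mapping f takes each a*b to
    # f(a)@f(b), for unambiguous binary operators star and at.
    return all(f(star(a, b)) == at(f(a), f(b)) for a in sample for b in sample)

# n ↦ n mod 3 respects addition on naturals versus addition mod 3:
assert respects(lambda n: n % 3,
                lambda a, b: a + b,
                lambda a, b: (a + b) % 3, range(10))
# but n ↦ n + 1 does not respect addition versus addition:
assert not respects(lambda n: n + 1,
                    lambda a, b: a + b,
                    lambda a, b: a + b, range(4))
```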

A group is then a binary operator with some extra properties; but it turns out that any binary operator homomorphism between groups is in fact a group homomorphism, so there is no need to include the extra properties when specifying what a group homomorphism is. In contrast, a ring (or ringlet) is a pair of flat binary operators (with some constraints relating them) on one collection of values, one of which is specified to have an identity; a homomorphism of the two binary operators suffices to ensure the other properties but not to ensure that the image of this identity is the destination's specified identity; so, for ring homomorphism, it is necessary to specify this requirement explicitly.
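The group case can be illustrated concretely; in this example of mine (not the text's), a map between two small groups that preserves the binary operation turns out, without being told to, to preserve the identity and inverses as well:

```python
# n ↦ n mod 3, from addition mod 6 to addition mod 3, preserves the operation:
f = lambda n: n % 3
assert all(f((a + b) % 6) == (f(a) + f(b)) % 3
           for a in range(6) for b in range(6))
# ... and thereby also preserves the identity and inverses:
assert f(0) == 0
assert all(f((6 - a) % 6) == (3 - f(a)) % 3 for a in range(6))
```

For a ring, by contrast, a check of this kind on the two operations alone would not force the multiplicative identity to land on the destination's identity; that has to be demanded separately.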

The addition and multiplication on the natural numbers are both binary
operators; they act on the same values, yet they are quite different as
structures of the type binary operator. A homomorphism to or from one of
them typically won't be a homomorphism to or from the other. For any natural k,
(: power(n, k) ←n :{naturals}) defines
a homomorphism from the additive structure to the multiplicative structure.
Contrast with, for any natural k, the automorphism power(k) of the
multiplicative structure or the automorphism scale(k) = (: k.n ←n
:{naturals}) of the additive structure, neither of which is homomorphic for the
other structure, at either end.
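These claims can be checked on a finite sample of naturals; a sketch, with the maps written as plain Python lambdas rather than the text's power and scale notation:

```python
sample = range(8)
k = 3
# n ↦ power(n, k) = k**n morphs addition to multiplication:
assert all(k**(n + m) == k**n * k**m for n in sample for m in sample)
# power(k) = n ↦ n**k is homomorphic for multiplication:
assert all((n * m)**k == n**k * m**k for n in sample for m in sample)
# scale(k) = n ↦ k*n is homomorphic for addition:
assert all(k*(n + m) == k*n + k*m for n in sample for m in sample)
# ... but scale(k) fails for multiplication, and power(k) for addition:
assert any(k*(n*m) != (k*n)*(k*m) for n in sample for m in sample)
assert any((n + m)**k != n**k + m**k for n in sample for m in sample)
```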

For any given mathematical structure of a particular type, the automorphisms
from that structure to itself are closed under their associative composition,
which thus constitutes a flat combiner with an
identity. When the mathematical structure type in question is
itself a combiner, we can extend that combiner
(on the inputs to and outputs from our automorphisms) pointwise to a combiner of
automorphisms; if we interpret this pointwise combination of automorphisms as
addition and the composition of automorphisms as multiplication, the
resulting structure has many of the properties of familiar arithmetic.
If the addition is cancellable and
commutative, indeed, we get a ringlet.
The study of homomorphisms between such ringlets can thus shed light on the
relationships between homomorphisms among different mathematical structure
types.
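The arithmetic just described can be seen in miniature with the scale maps on the naturals' addition; a finite-sample check of my own, taking pointwise combination as the addition and composition as the multiplication:

```python
def scale(k):
    # the endomorphism scale(k) = (: k.n ← n :{naturals}) of addition
    return lambda n: k * n

sample = range(10)
a, b = 4, 7
# pointwise combination ("addition") of scale(a), scale(b) acts as scale(a + b):
assert all(scale(a)(n) + scale(b)(n) == scale(a + b)(n) for n in sample)
# composition ("multiplication") of scale(a) with scale(b) acts as scale(a * b):
assert all(scale(a)(scale(b)(n)) == scale(a * b)(n) for n in sample)
```

So the scale maps mirror the naturals' own addition and multiplication, exactly the kind of familiar arithmetic the paragraph above describes.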

The specified conditions seldom impose on a homomorphism f any need to be a more general relation than a mapping; so the homomorphisms usually discussed are mappings – indeed, orthodoxy specifies them to be mappings and the case I can think of with a more general relation (with left values in a group understood modulo some sub-group) does reduce to a mapping subject to a suitable equivalence (turning the relation into a mapping to the quotient group). When several values are implicated in some of the statements characterising T-homomorphism, for each choice of the values in U that it implicates for A, we can independently vary which left value in V to use in place of each chosen value in U, subject to f relating one to the other; yet the resulting statement is obliged to remain true in B. This is apt to make (|f:) be an equivalence natural enough to B that, in practice, B should be understood modulo it, thereby making f a mapping. None the less, I allow homomorphisms, in principle, to be relations, if only so that we can define such an f as a relation in the first instance, prior to showing the naturality of B's reduction mod (| f :).

For a structure type which implicates separate collections in distinct rôles in specifying the structure, a homomorphism would need to comprise several relations, one for each such collection, unless some structural property of the type provides a means for a single relation to encode all of these. As an example, consider a module M over a ring R and a second module, N, over a ring S; rather than a simple R-module homomorphism (requiring S = R) consider a module-over-ring homomorphism: suppose we have a ring homomorphism f from R to S coupled to a homomorphism F of the additions on M and N for which F(r.m) = f(r).F(m); while the example of this that I can think of does have R = S, we can have a non-trivial f, namely any ring autoisomorphism of R, for example a conjugation. This gives us a conjugate-linear map from M to N, both as R-modules; which mostly serves to illustrate what I said about a structural property of the type (conjugation) letting a single relation (in this case, F) encode all of the homomorphism. If we embed the quaternions in some addition's automorphisms (thereby giving a multiplicative action of the quaternions on the addable values; but quaternion multiplication isn't commutative, so this doesn't make a module), there are more ring automorphisms one could potentially use (beside the obvious conjugation, negating the imaginary part, an automorphism might cycle the three imaginary units, for example), taking us a little closer to a better example. I'm unable to think of an interesting example with R and S not isomorphic, though; and suspect examples might be more complicated than illuminating.

For a structure type T that supports a notion of restriction to suitable collections of values, to make a sub-T, for which the images of homomorphisms are suitable, and any intersection of sub-Ts is a T, composing the relations that implement homomorphisms (: f |U) and (V: g |W) with U and V sub-Ts of some common T, we compose f&on;g via the intersection of U and V, which is another sub-T of the common T. The resulting composite is apt to be (the relation that implements) a T-homomorphism, albeit likely from a proper sub-T of W rather than all of W. To represent this properly for homomorphisms, we need to identify the restriction (U: g :W) as a homomorphism distinct from g, that we can indeed then compose with f; this restriction and the resulting composite won't morph from W, but from a sub-T of W. Fortunately, such compositions aren't particularly often relevant or interesting.

A homomorphism knows the structures it morphs from and
to, hence the collections of values they implicate; consequently, it is natural
to allude to these collections when mentioning (the relation that mediates) the
homomorphism, specifying it as (V: f |U) when it's from a structure on U's
members to one on V's. It's thus relevant to enquire whether every member of V
is a left value; when a homomorphism is (V| f |U), for V and U the collections
of values implicated in the structures it morphs between, it's common for all of
the structure on V to be captured by the structure on U, as represented in V by
f. Such a homomorphism is described as surjective or epimorphic and as a
surjection or epimorphism. (The latter term is taken from category theory;
the former has wider currency.)

Consider an epimorphism from a binary operator * on U to an
ambiguous binary operator @ on V, encoded by a mapping (V|f|U), and suppose
there is some z in U which is not x*y for any x, y in U (e.g. for the addition on
positive naturals, 1 is not n+m). For each t, u in U, f(t*u) is required to
be *one of* the values f(t)@f(u) can take; but @ is ambiguous, so it is
possible that f(z) is one of the *other* values f(t)@f(u) can take, for
some t, u in U. In such a case, f fails to capture the fact that f(z) is a
value of f(t)@f(u). So an epimorphism need not *always* capture all
details of the output structure; when it does, that fact has to be proved
rather than assumed.

Just as homomorphisms need a refined composition, they also need various other relational notions refined: for example, reverse acts on a homomorphism by reversing the relation in the usual way, as a relation, and also reverses the morphing. Whether or not it is a homomorphism is then to be determined in terms of this. (Even if it is not, it still knows which T-structures it morphs from and to, as a morphism, so that we can compose it with other morphisms, potentially building new homomorphisms out of morphisms that aren't (all) homomorphisms.)

I describe monic mappings as iso; an iso is a one-to-one correspondence between its left and right values; this is true even if it is specified (U: f :V), in which case U and V aren't necessarily its collections of left and right values. A relation knows its own right and left values, so (U: f :V) as a relation doesn't remember U and V, only the sub-collections of each that are f's relevant values. The specification of monic is equivalent to saying that f's reverse is a mapping; so f is iso precisely if both it and its reverse are mappings. Let's now look at what happens when we specify that a homomorphism's reverse is also a homomorphism, bearing in mind that a homomorphism needn't be a mapping – it can be any relation – but is obliged to have, as right values, all values implicated in the structure it morphs from.

With U and V derived as usual from the structures it morphs between, a
homomorphism (V: f |U) has (U| reverse(f) :V); this cannot possibly be a
homomorphism unless it is (U: |V), which makes (V| f |U) surjective. Thus f and
its reverse relate all of the values implicated in each structure to those
implicated in the other, establishing a correspondence between the structures.
When a homomorphism's reverse *is* also a homomorphism, I'll refer to it (and
hence also to its reverse) as a mutual homomorphism between the two structures;
if, furthermore, it is iso (as a relation, i.e. it and its reverse are both
mappings), it is known as an isomorphism. The two structures on either end of
an isomorphism are described as isomorphic to one another.

The important feature of isomorphism is that for all practical purposes the
two isomorphic structures are the same structure. Anything you can show true
of one can (using the isomorphism to
replace particular entities in one with those in the other) be shown true in the
other. None the less, the two structures remain distinct; this can be
particularly relevant when there are many distinct isomorphisms between them and
there is no particularly compelling reason to prefer one of these over any
other.

For example, any two vector spaces with the same finite
dimension are isomorphic; in particular, any finite-dimensional vector space is
isomorphic to its own dual; yet there are many such isomorphisms, for any given
pair of spaces, even for a given space and its dual. Even when they are
one-dimensional, nothing intrinsic to the vector spaces themselves singles out
one isomorphism as better than the rest. For a vector space and its dual, one
may reasonably prefer the positive-definite isomorphisms over all others, but
there are still plenty of these.

None the less, in some cases there is one isomorphism that arises naturally
and that the two structures do single out, in their own terms, as more apt
than any other; in such a case, it is not uncommon for a discourse to conflate
the two structures, via this natural isomorphism.

An isomorphism from a structure to itself is known as an autoisomorphism; the
composition of autoisomorphisms of a given structure necessarily always forms
a group. When we have two mutually isomorphic spaces with many isomorphisms
between them, one of the things we can do is compose one isomorphism with the
reverse of another to get an autoisomorphism of one of the ends. Conversely,
given one isomorphism (whether natural or otherwise) between two structures,
composing this one with an autoisomorphism of either end provides a monic
mapping from the autoisomorphisms of that end to the isomorphisms between the
two; doing the same thing at the other end, we end up with a monic mapping from
(all of) the autoisomorphisms of one end onto (all of) those of the other end.
Thus each T-isomorphism between any two T-structures induces a group-isomorphism
between their respective groups of autoisomorphisms. It is for this reason
that group theory is so very important – much
light can be shed on the nature of a structure by studying its group of
autoisomorphisms; indeed, for many structure types, there is a natural
representation of any structure of the given type as a homomorphic image in a
structure built out of its own autoisomorphisms.
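A small concrete case (my own example, not the text's): the autoisomorphisms of addition mod 5 are the maps n ↦ k*n (mod 5) for non-zero k, and their composition mirrors multiplication mod 5, so they form a group:

```python
mod = 5
autos = {k: (lambda n, k=k: k * n % mod) for k in (1, 2, 3, 4)}
# Each respects addition mod 5:
assert all(f((a + b) % mod) == (f(a) + f(b)) % mod
           for f in autos.values() for a in range(mod) for b in range(mod))
# Each is iso: a permutation of {0, ..., 4}:
assert all(sorted(f(n) for n in range(mod)) == list(range(mod))
           for f in autos.values())
# Composition mirrors multiplication mod 5, closing them into a group:
assert all(autos[j](autos[k](n)) == autos[j * k % mod](n)
           for j in autos for k in autos for n in range(mod))
```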