Like unitary transformations, rotations show up as handy models of various Lie Groups. In {map ({reals}:|2)}, a rotation is just a linear map of form [x.Cos(a)-y.Sin(a), x.Sin(a)+y.Cos(a)] ←[x,y], for some angle a. It generalises to other two-dimensional spaces via isomorphism with {map ({reals}:|2)}; for any basis [u,v] of the two-dimensional space, the general member of it is u.x+v.y for some scalars x and y, enabling us to interpret (x.Cos(a)-y.Sin(a)).u +(x.Sin(a)+y.Cos(a)).v ← x.u+y.v as a rotation; and

- (x.Cos(a)-y.Sin(a)).u +(x.Sin(a)+y.Cos(a)).v
- = x.(u.Cos(a) +v.Sin(a)) +y.(v.Cos(a)-u.Sin(a))
- = Cos(a).(x.u+y.v) +Sin(a).(x.v-y.u)
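
As a concrete check (a minimal Python sketch; the function names are mine, not part of the text), the map and its rearranged form agree, and composing two such maps rotates through the sum of the angles:

```python
from math import cos, sin

def rotate(x, y, a):
    # The linear map [x.Cos(a)-y.Sin(a), x.Sin(a)+y.Cos(a)] <- [x,y].
    return (x * cos(a) - y * sin(a), x * sin(a) + y * cos(a))

def rotate_rearranged(x, y, a):
    # Coefficients of u and v read off from Cos(a).(x.u+y.v) +Sin(a).(x.v-y.u).
    return (cos(a) * x - sin(a) * y, cos(a) * y + sin(a) * x)
```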

In Euclidean geometry, rotations preserve the length of vectors. [The length of a vector isn't an intrinsic property, in general; it's a consequence of a choice of metric. For a real linear space V, a metric on V is just a linear map (dual(V): g |V), where dual(V) is just {linear maps ({reals}:|V)}. It's usual to work with invertible symmetric metrics, for which g(u,v) = g(v,u) for every u, v in V and for which every w in dual(V) is g(v) for some v in V; for Euclidean (in contrast to Minkowskian) geometry, it's usual to work with a positive definite metric, for which g(v,v) > 0 for every non-zero v in V.]

Given a metric, g, the length of a vector v is just the square root of g(v,v). For the rotation above to preserve lengths, we require equality of

- g(x.u+y.v,x.u+y.v)
- = x.x.g(u,u) +x.y.(g(u,v)+g(v,u)) +y.y.g(v,v)
- g( x.(u.Cos(a) +v.Sin(a)) +y.(v.Cos(a)-u.Sin(a)), x.(u.Cos(a) +v.Sin(a)) +y.(v.Cos(a)-u.Sin(a)) )
- = x.x.g(u.Cos(a) +v.Sin(a), u.Cos(a) +v.Sin(a)) +x.y.( g(u.Cos(a) +v.Sin(a), v.Cos(a)-u.Sin(a)) + g(v.Cos(a)-u.Sin(a), u.Cos(a) +v.Sin(a)) ) +y.y.g(v.Cos(a)-u.Sin(a), v.Cos(a)-u.Sin(a))

for every choice of x and y; which necessarily implies

- g(u,u)
- = g(u.Cos(a) +v.Sin(a), u.Cos(a) +v.Sin(a))
- = Cos(a).Cos(a).g(u,u) +Cos(a).Sin(a).(g(u,v)+g(v,u)) +Sin(a).Sin(a).g(v,v)

- g(u,v)+g(v,u)
- = g(u.Cos(a) +v.Sin(a), v.Cos(a)-u.Sin(a)) + g(v.Cos(a)-u.Sin(a), u.Cos(a) +v.Sin(a))
- = -2.Cos(a).Sin(a).g(u,u) +2.Sin(a).Cos(a).g(v,v) + (Cos(a).Cos(a) -Sin(a).Sin(a)).(g(u,v) +g(v,u))

- g(v,v)
- = g(v.Cos(a)-u.Sin(a), v.Cos(a)-u.Sin(a))
- = Cos(a).Cos(a).g(v,v) -Sin(a).Cos(a).(g(u,v)+g(v,u)) +Sin(a).Sin(a).g(u,u)

which we can treat as equations in three variables - g(v,v), g(u,u) and g(u,v)+g(v,u) - to be solved in terms of functions of a. Given that

- Cos(a).Cos(a) +Sin(a).Sin(a) = 1,

averaging the first and last yields a fatuous equality; subtracting the last from the first yields (on rearranging):

- g(u,u) -g(v,v) = (g(u,u) -g(v,v)).(Cos(a).Cos(a)-Sin(a).Sin(a)) +2.(g(u,v)+g(v,u)).Sin(a).Cos(a)

in which we can deem the left-hand side to be scaled by an over-all factor 1 = Cos(a).Cos(a) +Sin(a).Sin(a). Rearranging this and the middle equation above yields:

- (g(u,v)+g(v,u)).Sin(a).Cos(a) = (g(u,u) -g(v,v)).Sin(a).Sin(a)
- (g(u,v)+g(v,u)).Sin(a).Sin(a) = -(g(u,u) -g(v,v)).Sin(a).Cos(a)

Scale the latter by Cos(a) and subtract it from the former times Sin(a) to infer 0 = (g(u,u)-g(v,v)).Sin(a); or scale the first by Cos(a), the second by Sin(a) and add to obtain (g(u,v)+g(v,u)).Sin(a) = 0. If Sin(a) is zero, our original rotation is simply a scaling by Cos(a), which is then ±1, and does indeed respect our metric for any basis [u,v]. Otherwise, we can now infer that g(u,v)+g(v,u) = 0 - so if g is symmetric we have g(u,v) = 0 - and that g(u,u) = g(v,v), i.e. u and v have equal lengths. Thus, for Sin(a) non-zero, our rotation only respects lengths if g deems u and v to be of equal length and perpendicular. In that case g(x.u+y.v, x.u+y.v) = (x.x+y.y).g(u,u), so a triangle with one side parallel to u and the other parallel to v obeys Pythagoras: the square of the length of its remaining side is the sum of the squares on its u-wards and v-wards sides.
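
This conclusion is easy to probe numerically (a Python sketch; the metric is given by its components in the [u, v] basis, and the names are mine): the pulled-back metric g(R(w), R(w')) coincides with g precisely when g(u,v) = 0 and g(u,u) = g(v,v).

```python
from math import cos, sin

def pulled_back(guu, guv, gvv, a):
    # Components of w, w' -> g(R(w), R(w')) in the [u, v] basis, where R is
    # the rotation (x, y) -> (x.Cos(a)-y.Sin(a), x.Sin(a)+y.Cos(a)).
    G = ((guu, guv), (guv, gvv))
    c, s = cos(a), sin(a)
    R = ((c, -s), (s, c))  # columns are the images of u and v
    return tuple(tuple(sum(R[k][i] * G[k][l] * R[l][j]
                           for k in range(2) for l in range(2))
                       for j in range(2))
                 for i in range(2))
```

With equal diagonal entries and zero cross term the metric is unchanged; distort either condition and the rotation fails to respect it.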

In three dimensions, we can choose any basis [u,v,w] and define rotations for it of form (x.Cos(a)-y.Sin(a)).u +(x.Sin(a)+y.Cos(a)).v +z.w ← x.u+y.v+z.w, mixing u and v by rotation as in two dimensions and leaving the w component unchanged.

A handy chart of rotations in three dimensions: take a sphere's interior; interpret radial distance as angle, so scaled that the sphere's surface is at turn/2; for any unit vector v, a.v denotes the rotation about the axis {h.v: h is real} through angle a; opposite points on the sphere's surface represent the same half turn. Thus the vector a.v denotes the rotation

- R(v,a) = (: g(p,v).v +Cos(a).(v^p)^v +Sin(a).v^p ← p :)

in which ^ denotes the antisymmetric vector product associated with g, for which u^(v^w) = g(u,w).v -g(u,v).w for any vectors u, v, w.
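
For the standard Euclidean metric on three dimensions, where ^ is the familiar cross product, R(v,a) can be transcribed directly (a Python sketch; the names are mine):

```python
from math import cos, sin

def g(p, q):
    # the standard Euclidean metric
    return sum(x * y for x, y in zip(p, q))

def wedge(p, q):
    # the antisymmetric product associated with this g: the usual cross
    # product, satisfying u^(v^w) = g(u,w).v - g(u,v).w
    return (p[1]*q[2] - p[2]*q[1],
            p[2]*q[0] - p[0]*q[2],
            p[0]*q[1] - p[1]*q[0])

def R(v, a, p):
    # R(v,a) = (: g(p,v).v +Cos(a).(v^p)^v +Sin(a).v^p <- p :), for unit v
    vp = wedge(v, p)
    vpv = wedge(vp, v)
    return tuple(g(p, v) * x + cos(a) * y + sin(a) * z
                 for x, y, z in zip(v, vpv, vp))
```

For v along the third axis this reduces to the plane rotation of the first two coordinates; it fixes its axis and preserves g-lengths.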

For a very small angle a, R(v,a) will be very close to (: p +Sin(a).v^p ←p :) so that, for c also very small, R(u,c)&on;R(v,a) will map p to p +(Sin(a).v +Sin(c).u)^p plus a term in Sin(a).Sin(c), which is small enough to warrant ignoring. Thus R(u,c)&on;R(v,a) = R(w,e) for some e and w with Sin(e).w = Sin(a).v +Sin(c).u; dividing this by Sin'(0) = 1/radian and exploiting the fact that Sin'' is small for small a and c, we can infer that, to a good approximation, w.e = v.a +u.c, for small enough a and c.
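
We can test this numerically (a Python sketch; axis_angle and its trace-based extraction are my additions, not part of the text): compose two small rotations exactly, recover w and e from the composite, and compare e.w with a.v +c.u.

```python
from math import cos, sin, acos

def g(p, q): return sum(x * y for x, y in zip(p, q))

def wedge(p, q):
    return (p[1]*q[2] - p[2]*q[1], p[2]*q[0] - p[0]*q[2], p[0]*q[1] - p[1]*q[0])

def R(v, a, p):
    # R(v,a) = (: g(p,v).v +Cos(a).(v^p)^v +Sin(a).v^p <- p :), for unit v
    vp = wedge(v, p)
    vpv = wedge(vp, v)
    return tuple(g(p, v) * x + cos(a) * y + sin(a) * z
                 for x, y, z in zip(v, vpv, vp))

def axis_angle(rot):
    # Sample rot on a basis to get its matrix; the angle comes from the
    # trace, the axis from the antisymmetric part.
    cols = [rot((1.0, 0.0, 0.0)), rot((0.0, 1.0, 0.0)), rot((0.0, 0.0, 1.0))]
    M = [[cols[j][i] for j in range(3)] for i in range(3)]
    e = acos(max(-1.0, min(1.0, (M[0][0] + M[1][1] + M[2][2] - 1.0) / 2.0)))
    w = ((M[2][1] - M[1][2]) / (2.0 * sin(e)),
         (M[0][2] - M[2][0]) / (2.0 * sin(e)),
         (M[1][0] - M[0][1]) / (2.0 * sin(e)))
    return w, e

v, a = (0.0, 0.0, 1.0), 0.01
u, c = (1.0, 0.0, 0.0), 0.02
w, e = axis_angle(lambda p: R(u, c, R(v, a, p)))
```

The recovered e.w agrees with a.v +c.u to within the quadratic terms the text warrants ignoring.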

Writing V = Sin(a).v and U = Sin(c).u and taking into account the Sin(a).Sin(c) term above, we can write R(v,a)&on;R(u,c) - composing in the opposite order to the above, which makes the cross term V^(U^p) rather than U^(V^p) - as

- (: p +(V+U)^p +V^(U^p) ←p :)

which we can compose with

- R(v,-a)&on;R(u,-c) = (: p -(V+U)^p +V^(U^p) ←p :)

to get the *commutator* of the two rotations. This maps a general point p to

- p +(V+U)^p +V^(U^p) -(V+U)^(p +(V+U)^p +V^(U^p)) +V^(U^(p +(V+U)^p +V^(U^p)))
- = p +2.V^(U^p) -(V+U)^(V^(U^p) +(V+U)^p) +V^(U^(V^(U^p) +(V+U)^p))
- = p +2.(g(V,p).U -g(V,U).p) -(V+U)^(V^(U^p)) -(V+U)^((V+U)^p) +V^(U^(V^(U^p))) +V^(U^((V+U)^p))

If we now retain terms with up to two factors of Sin(a) or Sin(c), but discard those with more than two such factors, we are left with

- = p -2.p.g(V,U) +2.g(V,p).U -g(V+U,p).(V+U) +g(V+U,V+U).p
- = p.(1 +g(V,V) +g(U,U)) +g(V,p).(U-V) -g(U,p).(U+V)
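
To see that the truncation is sound, one can compare the exact composite of the two quadratic maps above with this final form, for small V and U (a Python sketch; the helper names are mine):

```python
def dot(p, q): return sum(x * y for x, y in zip(p, q))

def wedge(p, q):
    return (p[1]*q[2] - p[2]*q[1], p[2]*q[0] - p[0]*q[2], p[0]*q[1] - p[1]*q[0])

def add(*ps): return tuple(sum(xs) for xs in zip(*ps))

def scale(s, p): return tuple(s * x for x in p)

def forward(V, U, p):
    # (: p +(V+U)^p +V^(U^p) <- p :)
    return add(p, wedge(add(V, U), p), wedge(V, wedge(U, p)))

def backward(V, U, p):
    # (: p -(V+U)^p +V^(U^p) <- p :)
    return add(p, scale(-1.0, wedge(add(V, U), p)), wedge(V, wedge(U, p)))

def commutator(V, U, p):
    return backward(V, U, forward(V, U, p))

def truncated(V, U, p):
    # p.(1 +g(V,V) +g(U,U)) +g(V,p).(U-V) -g(U,p).(U+V)
    return add(scale(1.0 + dot(V, V) + dot(U, U), p),
               scale(dot(V, p), add(U, scale(-1.0, V))),
               scale(-dot(U, p), add(U, V)))
```

For V and U of order a hundredth, the discarded terms perturb the answer by far less than the retained second-order terms.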

If we take into account the Sin(a).Sin(c) term, still with a and c small, R(u,c)&on;R(v,a) maps p via p +Sin(a).v^p to p +Sin(a).v^p +Sin(c).u^p +Sin(c).Sin(a).u^(v^p); if we now pass this through R(u,-c)&on;R(v,-a) it goes via

- p +Sin(a).v^p +Sin(c).u^p +Sin(c).Sin(a).u^(v^p) -Sin(a).v^(p +Sin(a).v^p +Sin(c).u^p +Sin(c).Sin(a).u^(v^p))
- = p +Sin(c).u^p +Sin(c).Sin(a).(u^(v^p) -v^(u^p)) -Sin(a).Sin(a).v^(v^p) +Sin(a).Sin(c).Sin(a).v^p.g(u,v)

to

- p +Sin(c).u^p +Sin(c).Sin(a).(u^(v^p) -v^(u^p)) -Sin(a).Sin(a).v^(v^p) +Sin(a).Sin(c).Sin(a).v^p.g(u,v) -Sin(c).u^(p +Sin(c).u^p +Sin(c).Sin(a).(u^(v^p) -v^(u^p)) -Sin(a).Sin(a).v^(v^p) +Sin(a).Sin(c).Sin(a).v^p.g(u,v))
- = p +Sin(c).Sin(a).(u^(v^p) -v^(u^p)) -Sin(a).Sin(a).v^(v^p) +Sin(a).Sin(c).Sin(a).v^p.g(u,v) -Sin(c).u^(Sin(c).u^p) -Sin(c).u^(Sin(c).Sin(a).(u^(v^p) -v^(u^p))) +Sin(c).u^(Sin(a).Sin(a).v^(v^p)) -Sin(c).u^(Sin(a).Sin(c).Sin(a).v^p.g(u,v))
- = p +Sin(c).Sin(a).(v.g(u,p) -u.g(v,p)) -Sin(a).Sin(a).v^(v^p) -Sin(c).Sin(c).u^(u^p)

on discarding, as before, terms with more than two factors of Sin(a) or Sin(c). Swapping the roles of (v, a) and (u, c) - that is, of V and U - this agrees with the earlier form p.(1 +g(V,V) +g(U,U)) +g(V,p).(U-V) -g(U,p).(U+V).

A rotation is a linear isometry of a linear space which mixes up two orthogonal directions while leaving vectors perpendicular to these unchanged. The way it mixes up the two given directions also preserves the handedness of figures in any plane spanned by these two directions (i.e. the rotation isn't a reflection). Formally, to define them we thus need a metric (whose invariance under the rotation makes it an isometry) on a linear space: so our context is some linear space V with a given metric, (dual(V):g|V). This last is presumed symmetric (that is, g(u,v) = g(v,u) for all u, v in V) and the usual meaning of a rotation requires g to be positive-definite (that is, g(v,v) > 0 unless v = 0). In this context it's useful to define a mapping
- ⊥ = ({subspaces of V}: {v in V: g(v,q) = 0 for each q in Q} ←Q |{subsets of V})

which has the handy property that ⊥&on;⊥ = span, which maps any subset Q of V to the collection of linear combinations of members of Q. Two vectors p and q are said to be orthogonal precisely if p in ⊥({q}); as g is symmetric, this implies q in ⊥({p}). A unit vector is a member of ({1}: g(v,v) ←v |).
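
In the finite-dimensional Euclidean case these notions transcribe directly (a Python sketch; the tolerance and function names are mine):

```python
def g(p, q):
    # a positive-definite symmetric metric: the standard Euclidean one
    return sum(x * y for x, y in zip(p, q))

def in_perp(v, Q, tol=1e-12):
    # v in ⊥(Q) precisely if g(v, q) = 0 for each q in Q
    return all(abs(g(v, q)) < tol for q in Q)

def is_unit(v, tol=1e-12):
    # a unit vector is a member of ({1}: g(v,v) <- v |)
    return abs(g(v, v) - 1.0) < tol
```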

We can now specify that: a linear map (V:R|V) is a rotation (of V, respecting g) precisely if there are some orthogonal unit vectors p and q for which:

- u, v in V implies g(R(u),R(v)) = g(u,v),
- v in ⊥({p,q}) implies R(v) = v,
- R(p) and R(q) both lie in span({p,q}) and
- in antisymm(span({p,q}):|2), R(p)^R(q) = p^q.

Given v in V, let a = v -p.g(v,p) -q.g(v,q) and observe that g(a,p) = 0 = g(a,q). Thus we can write any member of V as a +b.p +d.q for some scalars b, d and a in A = ⊥({p,q}). Then R(a +b.p +d.q) = a +b.R(p) +d.R(q) and the action of R is entirely determined by its action on p and q. The third rule assures us that this is R(p) = e.p +r.q and R(q) = s.p +c.q for some scalars e, r, s and c. The first rule then ensures that

- 1 = g(p,p) = g(R(p),R(p)) = g(e.p +r.q, e.p +r.q) = e.e +r.r
- 0 = g(p,q) = g(R(p),R(q)) = g(e.p +r.q, s.p +c.q) = e.s +c.r
- 1 = g(q,q) = g(R(q),R(q)) = g(s.p +c.q, s.p +c.q) = s.s +c.c

The second of these yields r = -e.s/c (assuming c non-zero; if c = 0 the equations instead give e = 0 and s = ±1, with the last rule below forcing r = -s, which still fits the same conclusion); feeding that into the first yields 1 = e.e.(1 +s.s/c/c) = e.e.(c.c +s.s)/c/c = e.e/c/c, whence e.e = c.c and s.s = r.r; thus e = ±c and R(p) = ±(c.p -s.q). The ± here is the reason why we need the last rule above (which amounts to saying det(R) = 1, given the second rule): it now gives us p^q = R(p)^R(q) = (e.p +r.q)^(s.p +c.q) = (e.c -r.s).p^q, whence 1 = e.c -r.s = e.c +e.s.s/c = e.(c.c +s.s)/c = e/c, so e = c and R(p) = c.p -s.q, while R(q) = s.p +c.q and c.c +s.s = 1. We can equally re-state our rules as simply

- v in ⊥({p,q}) implies R(v) = v,
- for some scalars c, s with c.c +s.s = 1: R(p) = c.p −s.q and R(q) = s.p +c.q.
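
The restated rules pin R down completely; in the Euclidean case we can build R from c and s and confirm the original rules hold (a Python sketch; make_rotation is my name for the construction, with c, s parametrised as a cosine and sine so that c.c +s.s = 1):

```python
from math import cos, sin

def g(x, y): return sum(a * b for a, b in zip(x, y))

def make_rotation(p, q, c, s):
    # R(v) = v on ⊥({p,q}); R(p) = c.p - s.q; R(q) = s.p + c.q; extended
    # linearly via v = a + g(v,p).p + g(v,q).q with a in ⊥({p,q}).
    Rp = tuple(c * pi - s * qi for pi, qi in zip(p, q))
    Rq = tuple(s * pi + c * qi for pi, qi in zip(p, q))
    def R(v):
        b, d = g(v, p), g(v, q)
        return tuple(vi - b * pi - d * qi + b * rpi + d * rqi
                     for vi, pi, qi, rpi, rqi in zip(v, p, q, Rp, Rq))
    return R
```

On orthonormal p and q this reproduces the isometry rule, fixes ⊥({p,q}) pointwise and sends p to c.p -s.q, as derived above.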