A transistor is an electrical component with three contacts: one of these controls how freely a current can flow between the other two, which makes it possible to use transistors as switches. This is how they're used in the design of electronic computers: below, I'll give a glimpse into how such simple switches can be used to build up circuits that can perform arithmetic and all the other strange and wonderful things that modern computers do.
There's a fairly simple circuit one can build that implements the so-called NAND or Negated AND operation between two single-bit inputs, whose output is on unless both of its inputs are on; and all manner of other components may be built up out of this. There's even a fun site that will let you play at chip design, challenging you to build up steadily more complex components out of these basic NAND gates (and the other things you've built, once you've done so). Some of the other basic gates (such as NOT, AND, OR, NOR and XOR) can be built up directly from transistors, using fewer than it takes to implement them in terms of NAND gates; which, I presume, chip designers exploit to build more efficient processors than the NAND-based theory promises. That theory does, however, demonstrate that it's possible to build up the more complex things; that one can do so more efficiently is (at least from the point of view of that basic theory) an engineering practicality: I'll here give a glimpse of that practicality – although I'll still gloss over some engineering details (most obviously, the placement and size of resistors one needs to include in circuits).
When I looked for how these gates are implemented in terms of transistors and other components, I immediately noticed that some of them can be generalised to combine many inputs rather than just two – combining N inputs with NAND, AND, OR or NOR can be done using N transistors and 1+N resistors (or 2.N transistors and no resistors), where building these up out of the two-input versions would take rather more. So I'm going to play around here with what's possible and think about how to make basic components somewhat efficiently: I've no doubt Real Chip Designers do cleverer things that'll make these doodlings look naïve and simplistic, but I like exploring an idea as far as my limited understanding does reach.
See the illustration here, naming various things that shall appear in the diagrams I'll use to depict electric circuits. My hand-crafted images are based on those I learned in school, which match one notation I find in use on the web (there are several). I'll say more about the particular components further down. Where lines representing wiring cross one another, a round dot covering the crossing-point means they are joined there; otherwise, lines that cross in the two-dimensional picture merely indicate wires passing each other without connecting in our real three-dimensional world.
I'll also put names of inputs and outputs inside circles (larger than the small one with a + inside it that indicates a power line connection) as labels on points in the circuit (typically inputs and outputs), to make it easier to explain how a particular piece of circuitry behaves, typically by describing the behaviour in terms of relationships between the inputs and outputs, expressing the outputs as functions of the inputs, in terms of the logical operations mentioned above and described below.
I'm mildly surprised to not find Unicode characters for these symbols (which would have been handy to use here, in place of rather fiddly hand-crafted SVG diagrams), aside from earth, ⏚, and the standard arithmetic operator ⊕, which would do for the power source. Since I need to make my own SVG symbols for some, I'll do the same also for these.
The standard operations mentioned above, that combine values (described here, aside from the first, in their many-input form, although the two-input form is usual), are:
NOT: negation; its output is on precisely when its single input is off.
AND: its output is on precisely when all of its inputs are on.
OR: inclusive OR, to distinguish it from XOR, below; its output is on when at least one of its inputs is on.
NAND: Negated AND.
NOR: Negated OR.
XOR: exclusive OR; for just two inputs, it is true if either input is on, provided the other isn't; it is, thus, the inequality test.
Various of these can be constructed from one another (and there are diverse others one can construct from them); for example, applying NOT to the output of AND gives you NAND; and, as NOT is self-inverse, applying it to the output of NAND gives you AND. Likewise, applying NOT to the output interchanges OR and NOR. All but the first and last, NOT and XOR, can be implemented for many inputs by a trivial adaptation of the circuit for combining two of them; and NOT is indeed the result of scaling back either NOR or NAND to only one input.
The combining operators – AND, OR, NAND, NOR and XOR – are all binary operators that extend naturally to bulk actions, albeit NAND and NOR don't do this by the usual mechanism of applying associativity, since they're not associative, although the rest are – if you partition the inputs into any number of (non-empty) sub-sets, use the operation to combine the inputs in each sub-set and then use the operation again to combine the various outputs from that, you get the same answer as if you combined all of them in one go. For example, with three inputs, a, b and c, we can combine a with b then combine the result with c; using AND this gives (a AND b) AND c, which gets the same result as combining b with c and then combining a with the result, a AND (b AND c) = (a AND b) AND c. In contrast, a NAND (b NAND c) = (NOT a) OR (b AND c), while (a NAND b) NAND c = (a AND b) OR NOT c; these are only equal when a = c; likewise a NOR (b NOR c) = (NOT a) AND (b OR c) isn't equal to (a NOR b) NOR c = (a OR b) AND NOT c unless a = c.
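These algebraic claims are easy to confirm by brute force; here's a small Python sketch of my own (not part of any circuit) that runs through all eight input triples:

```python
from itertools import product

def NAND(x, y): return not (x and y)
def NOR(x, y): return not (x or y)

for a, b, c in product((False, True), repeat=3):
    # AND, OR and XOR are associative:
    assert ((a and b) and c) == (a and (b and c))
    assert ((a or b) or c) == (a or (b or c))
    assert ((a ^ b) ^ c) == (a ^ (b ^ c))
    # NAND and NOR are not: the two bracketings agree only when a == c.
    assert (NAND(a, NAND(b, c)) == NAND(NAND(a, b), c)) == (a == c)
    assert (NOR(a, NOR(b, c)) == NOR(NOR(a, b), c)) == (a == c)
```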
All five are commutative (if you change the order of the operands, the result is the same) and closed on the set {off, on} of simple boolean values. The three that are associative also have identity values, off for XOR and OR, on for AND; i.e. for any v, off OR v = v XOR off = v = v AND on. Only for XOR does every value have an inverse – a value which, when combined with it, gives the operator's identity – every value, indeed, is its own inverse, so a XOR a is always off and a XOR b XOR a is b. XOR thus forms a group. For AND and OR, the non-identity value turns everything into itself (just as multiplying by zero always gives zero): anything AND off is off, anything OR on is on.
As ever in boolean logic, AND and OR distribute over each other: a AND (b OR
c) = (a AND b) OR (a AND c) while, equally, a OR (b AND c) = (a OR b) AND (a OR
c). The first of those leads to a common representation of these operations in
which AND is multiplication, OR is addition, on is 1 (or, rather, all positive
numbers; when we add
two of them, we get another, but at least it's still
positive) and off is 0; I cannot resist pointing out, though, that the exact
reverse interpretation (OR as multiplication, AND as addition, on = 0, off =
positive) works just as well: the situation is perfectly symmetrical between AND
with on as identity and OR with off as identity. Both AND and
OR have characteristic 1,
i.e. when we combine a value with itself we get that value, a AND a = a = a OR
a. The characteristic of XOR is 2 (as each value is self-inverse).
The action of NOT on an expression in terms of AND and OR is to swap AND with OR and apply NOT to each operand. I can illustrate this by describing the other combiners in terms of AND, OR and NOT:
a NAND b = NOT(a AND b) = (NOT a) OR (NOT b);
a NOR b = NOT(a OR b) = (NOT a) AND (NOT b);
a XOR b = (a OR b) AND NOT(a AND b) = (a AND NOT b) OR (b AND NOT a).
These formulae shall aid us in designing circuits to implement these operations.
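They can be checked exhaustively in a few lines of Python, if you doubt them:

```python
from itertools import product

def NOT(x): return not x

for a, b in product((False, True), repeat=2):
    assert (not (a and b)) == (NOT(a) or NOT(b))           # NAND: swap AND for OR, negate operands
    assert (not (a or b)) == (NOT(a) and NOT(b))           # NOR: swap OR for AND, negate operands
    assert (a != b) == ((a and NOT(b)) or (b and NOT(a)))  # XOR in terms of AND, OR and NOT
```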
I'm presuming that readers are familiar with the basics of electricity: for
example, if a resistor R sits between a power line and earth, with voltage V
between them, then a current V/R will flow through the resistance, dissipating
power at rate V.V/R, which shall typically warm the resistor up. If you reverse
the voltage, the same current runs in the opposite direction: the basic resistor
is symmetric. In contrast, the diode lets current flow through in one direction
but not the other; in the diagram above, it allows current to flow from left to
right, entering the triangle through the edge of it that's perpendicular to (and
bisected by) the wire, emerging through the opposite vertex of the triangle and
continuing onwards. In effect it's a resistor with a very low resistance to
current going the right way
through it but a very high resistance to
current going the other way. (Note that the nominal direction of current in
electrical circuits is from power to earth, although this flow is usually
mediated by electrons moving in the opposite direction.) A transistor, however,
is not quite so straightforward – it has three connectors, after
all.
One of those connectors controls whether the other two are connected. Let's
look again at that picture of the transistor, but with different parts in
different colours: the part shown here in blue controls whether the orange parts
form a current path. When they do, they function as a diode, so even then they
only let current flow in one direction: the direction of the little arrow on one
of the two orange lines away from the central bar in the diagram. The result is
that the transistor functions as a switch. The type I've labelled active, with a simple (blue) line coming in to the central bar, has the (orange) conduction path active when the blue line is on, inactive when off. The type I've labelled passive, with a ring between the end of the blue line and the central bar, does the exact reverse; the orange current path is active when the blue connection is off and inactive when the connection is on.
With the diagrams here, each transistor wants the power supply on the side I've labelled + and the earth on the side I've labelled −.
If you wire it up the other way round, even when the current path is activated,
no current will flow because the current path functions as a diode. (If you
apply a big enough voltage the wrong way round, you may burn out the
transistor.) This constraint on the orientation of the current path leads to
most diagrams involving transistors having their contacts with power lines
towards the top and earth towards the bottom; but watch out for cases where
that's reversed and the little arrow on the current line inside a transistor
points upwards.
Strictly, there's a range of voltages for the blue part that control the resistance of the orange current path: at one end of the range, the resistance is very small, at the other very large; in between, the resistance varies. The transistors used in amplifiers (e.g. in an audio system) typically have some wide sub-range of that voltage range within which the orange resistance varies (close to) linearly with the blue voltage, enabling a weak signal on the blue connection to modulate a strong current along the orange path, so that the variations in the current through that orange path are an amplified version of the weak signal on the blue path. However, for purposes of the present discussion, we're only going to care about the extremes of the range: the blue voltage shall be at one extreme or the other of its range, causing the orange path to have either a resistance that we can ignore (it's tiny compared to various other resistors in the circuits) or such a huge resistance (compared to the other resistances in the circuit) that it's effectively an insulator, letting (almost) no current through. As a result, the transistors used in these circuits don't have to be particularly well-behaved in the interval in between, which means we can afford to make them cheap and small, so that we can have lots of them, which turns out to matter because we'll be building complex networks of them.
I won't be labelling the resistors in my diagrams with their actual resistances: the right resistances to use will depend on the particular details of the transistors in use – the resistances of the orange current path when the blue voltage is at its two extremes and … there's surely a large impedance between the blue input and the orange current path (to limit current leakage between the two), at least some of which shall be simple resistance; there may also be some effective capacitance, that yields an impedance when divided by the time we allow the components from a change of state to when we expect them to have settled down to their response to it – and I don't want to digress into those details. They're crucial to making actual working circuits, but incidental to the general form of those circuits, which is what I want to discuss here. It remains that the details of getting the resistances right may limit how many inputs one can put into a single many-input gate, in practice, where the usual literature discusses (and will tell you the resistances for) only the two-input versions.
The control wires of the two types of transistor have exactly opposite effects on the current path. In this analysis, each transistor's (orange) current path shall thus be in one of two states:
on, with a resistance small enough to ignore (the transistor is saturated); or
off, with a resistance so large it is effectively an insulator (the transistor is nowhere near saturated).
Now, a switch is just a mechanism for inserting a removable air gap
into a circuit, so (for present purposes) a transistor functions as a switchable
diode: a passive transistor's conduction path is enabled by default
but
can be disabled by applying power; an active transistor's conduction path is
disabled by default but can be enabled by applying power.
The three connections of a transistor are named variously, depending on the type
of transistor: the one with the arrow may be called emitter or drain, the one
I've coloured blue above is base or gate and the third is collector or source.
The choice between the terminologies relates to the particular implementation
mechanics of the transistor and I find it (especially the fact that there are
two names for each part) more confusing than illuminating. So I shall describe
the parts in terms of their functions, rather than their rôles in the
implementation: the base (gate) is what controls the path from collector to
emitter (source to drain), so I shall refer to the (blue) former as the control and the (orange) other two as the path or conduction path.
That practical implementations need to be wired with the path's two ends the right way round is mostly irrelevant to the theory, although it does, of course, matter in practice; and, even in the theory, it precludes circuits in which a transistor's path is part of a current line that varies its direction. When I do need to distinguish the path's ends, I shall refer to the collector (source) as in-going and the emitter (drain) as out-going (because that's which way the conventionally described current flows, from positive to negative; note, as ever, that electrons are actually flowing the other way).
I'll say that an input or output is on
when it is at (more or less)
the power line's voltage and that it is off
when it is at (more or less)
earth's voltage. When it is used as the control line for a transistor, it thus
corresponds with the above account (of the current path) of the transistor being,
likewise, on or off if active, or the other way round if passive.
So now let's see what happens when we combine some resistors and transistors. For simplicity, I'll start with active transistors; until we combine active and passive, the case for passive can be seen by inverting the condition of the active transistor's control line.
First consider the simplest circuit – a current passing through a resistor – and I'll attach two outputs to it, while marking two nearby points. As it stands, the circuit has a simple current running through the resistor from power line to earth, with all of the voltage drop happening across the resistor, so u is at the same voltage as the power line and v is at the voltage of earth – i.e., u is on and v is off.
Now, if I cut the wire from the power line to earth at any point, the current shall stop and the parts of the circuit on the power line's side of the cut shall soon all be at the power line's voltage while the parts on earth's side of the cut shall likewise be at its voltage. So cutting at R shall leave both u and v off; while cutting at S shall leave both on. In the latter case, if significant current were to flow out at v, that current would produce a voltage drop across the resistor, lowering the voltage at v (without affecting the voltage at u). In contrast, with a cut at S, if significant current flows out at u it makes no difference, as there is no resistance between the power line and u. (This is in theory. In practice, the power supply has an internal resistance that would lead to the voltage of the power line itself dropping; but u and v would still be at the power line's voltage, so on.) We thus seek to avoid having substantial currents flowing through outputs of a circuit: this can be achieved by connecting them (usually via resistances) to the control lines of transistors, which do not draw much current.
Of course, where I speak of cutting
the wire, I can achieve an equivalent
effect by inserting the current-path of a transistor into the wire: when this
current path is off, it shall block the wire I inserted it into. So now turning
r off implements cutting at R, while turning s off implements cutting at S. If
we turn both r and s off, the central resistor isn't connected to
either the power line or earth, so it is, along with u and v, at an
indeterminate voltage. So we won't do that. When r is off but s is on, both u
and v are off; when s is off but r is on, both u and v are on; and when both r
and s are on, we have the prior situation, with u on and v off. Thus s controls
v = NOT(s) and r controls u = r, as long as the other of r and s is on. If we
used passive transistors here, we'd get v = s and u = NOT(r) as long as the
other of r and s is off.
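As a sanity check, here's that two-transistor circuit as a toy truth table in Python (my own idealisation, treating each current path as a perfect switch and using None for an indeterminate voltage):

```python
def series_circuit(r, s):
    """The resistor flanked by two active transistors: r power-side, s earth-side."""
    if not r and not s:
        return None, None      # floating: connected to neither power nor earth
    if not r:
        return False, False    # cut at R: everything sits at earth's voltage
    if not s:
        return True, True      # cut at S: everything sits at the power line's voltage
    return True, False         # current flows through the resistor: u on, v off

for r in (False, True):
    for s in (False, True):
        print("r:", r, "s:", s, "-> (u, v):", series_circuit(r, s))
```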
Notice that the circuit includes a resistance between each input and the control line of its transistor. Since we want that control line to be on when the input is on and off when the input is off, this resistance presumably needs to be small compared to the internal impedance between the transistor's control line and its current-path; yet it's presumably there to limit current along that path; which I suspect means that the control line's internal impedance actually includes an impedance due to a capacitative effect, as alluded to above. However, as noted there, I'm less concerned with the details of the resistors than with the fact that they're there.
Since we want to avoid any indeterminate voltages, we won't be putting any outputs between transistors along their current-paths, as in this first circuit, now that it has served to illustrate what we can do. There shall be one exception to that: when we can guarantee that at least one side shall be connected and, if both can be connected, that the two shall agree when that happens; but circuits that can guarantee this don't need a resistor (on the path from power to earth – see the next section). Since voltage drops happen across resistors, which transistors can disconnect from either the power supply or earth, the position at which to place an output is between the transistor (or all of the transistors, once we have several) and a resistor.
Now, the circuit just studied put transistors on either side of a resistor:
pause, before we move on, to consider what happens if we have resistors on
either side of a transistor, with the power line at one end and earth at the
other. An output from either side of the transistor, between it and a resistor,
would get a half-on
signal when the transistor's current path is on;
that's not something we ever want, so we won't take an output from here. An
output on the other side of either resistor won't be affected by the transistor,
since it's directly connected to either power or earth. Thus this configuration
is not useful.
Such intermediate values might, however, have uses for dealing with continuum signals, rather than digitised ones – as in an amplifier, in fact. The theory of such continuum circuits is, however, significantly more complex than the present simple digital study.
So, at least in circuits with only one type of transistor, we'll have all our transistors on the same side of a single resistor, with our output between the transistors and the resistor. We don't want to have significant current running through any output or input of a circuit, so the circuits that let current flow through a resistor shall not have inputs at their ends; they can thus only have power on one side and earth on the other (the way round that the transistors want them). It thus remains to see how we can combine transistors and see how the results depend on which side of the resistor they're placed.
In the preceding subsection's main current path, from power to earth, the current (when it does flow) flows through one transistor, then through the resistor and finally through the other transistor; when components are arranged like this, so that any current that passes through any of them must pass through all of them, they are described as being in series. When a current path splits into several paths that later rejoin, current will flow if it can do so along any one of these paths; the components on such paths are said to be combined in parallel. Our resistance shall always be in series with our transistors, but the transistors may be arranged in parallel with one another or in series with one another, as we'll see below.
Now, the circuit above has a resistance in it, across which voltage drops when a current flows. This uses power, which generates heat, which must be dissipated somehow; and we can actually avoid this, by judicious mixing of the two types of transistor.
Here we have a passive transistor on the power side of our output and an active transistor on the earth side; both are controlled by a single input. Since one is active and the other passive, the state that input is in selects which of them has its current path enabled; the other has it disabled. So the output is connected either to power or to earth and there is no current path from power to earth. When r is on, the active transistor connects the output u to earth, so u is off; when r is off, the passive transistor connects u to power, so u is on. Thus u = NOT(r).
If we do this with the two transistors swapped, we get u = r. That may seem
useless, but remember that, in real circuits, the on and off states of our
transistors' current paths have merely very low and very high resistances,
rather than zero and infinity, so we may have an input that, due to current
drain on the way to it, is apt to be only approximately on or off. Using this
circuit with the transistors swapped will then give a firmly on output when r is
roughly on and a firmly off output when r is roughly off. So we can think of
this circuit as a cleaning
component.
Because circuits of this type do not let any current flow from power to earth
(or, at least, always have at least one very large resistance, i.e. a transistor
whose current path is off, on each route between them) we can use inputs at an
end, in place of power or earth, and we can likewise use the output of a
sub-circuit at an end of a transistor's current-path. This gives us some
interesting flexibility compared to resistance-paired circuits. Note, however,
that a transistor can only have an input (or output of a sub-circuit) as
the in
end of its current path: otherwise, it would ignore an on
signal from that input. If we replace NOT's power and earth with two inputs, flipping the transistor that led to earth before (so that the input is on its path's in end), our original input, s, selects one of these two new inputs to use as output: this circuit lets us choose between two values. (It can be implemented using four NAND gates: but this does the job with a quarter as many transistors.)
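For comparison, the four-NAND selector that parenthesis mentions can be written out as follows (a sketch, with names of my own choosing); the transistor version above does the same job with just two transistors:

```python
def NAND(x, y): return not (x and y)

def select(s, p, q):
    """Two-way selector from four NAND gates: p when s is off, q when s is on."""
    not_s = NAND(s, s)                       # NAND doubles as NOT
    return NAND(NAND(p, not_s), NAND(q, s))  # (p AND NOT s) OR (q AND s)

for s in (False, True):
    for p in (False, True):
        for q in (False, True):
            assert select(s, p, q) == (q if s else p)
```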
If, instead, we put a second input, a, between NOT's two transistors and flip
the formerly power-side transistor, we can take outputs where we used to have
power and earth: s then selects which of them a gets delivered to. The other is
left indeterminate, so this is only viable to use when either the indeterminate
output shall be ignored (e.g. because other parts of our circuit are controlled
by s) or we have two more transistors controlled by s to connect these two
outputs to default
values when they're not getting a. (This can be done,
with the other output defaulting off, using five NAND gates, each costing (at
least) two transistors: this does it with two transistors or, if we care about
defaults, four.) So we can use inputs at the path in
ends of transistors
to implement switching, either between which input to use for an output or which
output to deliver an input to.
Now, in the circuits I've shown above, I've put resistances on the control wire of each transistor; although these are (if I understand correctly) needed in practical circuit design, their presence is a distraction from the structure of the circuits. As I'll now be drawing circuits with conduction paths of transistors in parallel with one another, these resistances on control wires get fiddly to fit into the diagrams, while contributing no material information: so I shall leave out resistances leading to control wires. Any practical implementation of the circuits below should bear this in mind and add suitable resistances on control wires where needed.
We have two ways to implement basic gates: we can have no resistor between power and earth as long as we have transistors on either side of the output, arranged so as to leave it connected to exactly one of power and earth; or we can have a resistance between our output and either power or earth, with some combination of transistors between our output and the other. Let's look at the implementations of AND, OR, NAND and NOR, in the various ways this permits.
We thus have two options for how our transistors are arranged in relation to one
another – in parallel or in series – and two options for which side
of the resistor they go – on the power side or on the earth side. Note
that, although I've only drawn the circuits with two inputs (a and z),
I've included a ⋮ (vertical ellipsis) between them, and … (horizontal ellipsis) between their transistors in the parallel circuits,
to indicate that one could actually have arbitrarily many inputs, each connected
into the circuit in the same way.
When the transistors are in series, they collectively form a connection
end-to-end only if all their control lines are on. When they are in parallel,
any one of them suffices to form the connection from their paths'
joined in
lines to the similarly joined out
lines. When they are
on the power side of the output, which the resistor then connects to earth, they
turn on the output precisely when they form that connection. When they are on
the earth side, they turn off the output when they form a connection.
Consequently, in series on the power side they form an AND gate; in parallel on
the power side they form an OR gate; in series on the earth side they form a
NAND gate; and in parallel on the earth side they form a NOR gate.
We can thus form an N-way gate of any of these four kinds using N transistors and 1+N resistors (one of which is shown here, the rest implicit on the control lines of the transistors). In particular, notice that the N=1 case gives us two realisations of NOT, either as NAND or as NOR. (There is no difference between series and parallel when you have only one component !) The N=1 case for AND and OR is simply the identity: the output is the input.
One can even examine the N=0 cases. For series, adding a transistor opened up the wire and inserted the transistor; removal takes out the transistor and joins the wires on either side of it; so the N=0 case is simply a wire and the output is whichever of power and earth the zero transistors were nominally connecting the output to. For parallel, we have two points connected by as many transistors as we care to put between them; when that is none, the two points are simply not connected. Thus the empty AND or NOR is always on and the empty OR or NAND is always off.
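All four gates, and these degenerate cases, reduce to one compact model in code; a sketch of my own, idealising each current path as a perfect switch. Pleasingly, Python's all() and any() of an empty sequence reproduce the N=0 analysis directly.

```python
from itertools import product

def gate(controls, in_series, power_side):
    """Idealised N-input gate: active transistors all on one side of the resistor.

    controls   -- the input states, one per transistor's control line
    in_series  -- True for a series chain, False for parallel limbs
    power_side -- True if the transistors sit between power and the output
                  (resistor from output to earth), False for the reverse."""
    conducting = all(controls) if in_series else any(controls)
    # Conduction on the power side pulls the output on; on the earth side, off.
    return conducting if power_side else not conducting

for a, z in product((False, True), repeat=2):
    assert gate((a, z), True, True) == (a and z)         # series, power side: AND
    assert gate((a, z), False, True) == (a or z)         # parallel, power side: OR
    assert gate((a, z), True, False) == (not (a and z))  # series, earth side: NAND
    assert gate((a, z), False, False) == (not (a or z))  # parallel, earth side: NOR

# The N = 0 cases fall out of all(()) being True and any(()) being False:
assert gate((), True, True) and not gate((), False, True)
```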
One can, furthermore, combine several parallel blocks of transistors in series with one another; or put several transistors in series on each limb of a parallel block. These can implement complex expressions such as ((a AND b AND c) OR (d AND e) OR f) AND (a OR (b AND c) OR (d AND e AND f)); this would take a dozen transistors. (I'm not sure whether we need, in addition to the resistor on the other side of the output, one resistor per input or one resistor per transistor; when we use inputs more than once, this makes a difference. I shall assume we need one per transistor, below, but we may be able to get away with fewer.)
You can nest parallel within series within parallel within series to arbitrary depth, of course. I think there's a theorem of formal logic that says any such structure can be reduced to one only two layers deep – several simple series in parallel with one another, or several simple parallel blocks in series – but it's possible that a deeper structure might save some transistors, at least in some cases.
Note that this is all using AND and OR, without any negation except at the end, if we put the transistors on the earth side of the output rather than the power side.
Now let's look at exactly the same arrangements but with passive transistors in place of active ones. On working out how the output depends on the inputs, I find that the two choices – parallel vs series, earth-side vs power-side – have exactly swapped their effects. This is not so surprising, given that the effect of each input on its transistor's current path is inverted.
As before, these circuits work for arbitrarily many inputs, give or take practicalities that may limit how many inputs one can use. Again, one can also combine transistors in any combination of series and parallel, on the side of the output opposite to the resistance. Each input is effectively inverted but series still combines the results – both of this inversion and of any sub-circuit that combines negated inputs – as AND while parallel combines as OR, as before. The overall expression is used directly as output if the transistors are on the power-side of the output, or used negated as output if on the earth-side, also as before.
The four basic operations above find no use for mixing active and passive transistors, as they are symmetric in their inputs; but their (antisymmetric) close relatives AND NOT and OR NOT do benefit from such mixing, since they combine one input with the negation of the other.
When we combine the two types of transistor and mix series and parallel, we can include negated inputs in the OR and AND combinations that parallel and series implement for us. This does not give us any ability to negate the results of such combinations, other than by choice of which side of the resistor to place the transistors, but it still makes some interesting optimisations possible. In particular, we can combine any number of inputs with AND or with OR, negating as many of them as we wish.
When we then combine such arrangements with some of the tricks we can do without resistors, within the network of transistors across the output from a resistor, things get even more interesting – but let's first see what we can do entirely without resistors (except for the implicit ones on each transistor's control wire).
Since we now require the output to be connected to exactly one of earth and power, the combination of transistors between the output and either of these two must produce a result exactly opposite to the one on the other side. This typically means the two arrangements of transistors are mutually dual under interchanging active and passive while also interchanging series and parallel. As noted above, we can also use an input, or the output of a sub-assembly of transistors, in place of power in another assembly; this can save us some transistors, countering the duality requirement's tendency to double the number of transistors we need.
This makes both AND and OR easy. However, since NAND and NOR need both inputs negated, we have to use four transistors for each. (When both inputs are off, the output is on; when both are on, the output is off; in each case, neither input can supply the output value, so the circuit must have a power line to supply the on value, and an earth connection to supply the off value, when neither input can supply the output. These have to be the ends of the current path, hence the inputs must be brought in via control lines of transistors.) The NAND and NOR solutions can be generalised to arbitrarily many in the same way as above (albeit now using 2.N transistors to combine N inputs); as before, their N=1 case is simply the NOT gate (as shown above). For AND and OR, the multi-input versions would gain an extra pair of transistors for each extra input; so using an input in place of either power or earth saves us just two transistors relative to the two per input needed for many-input variants.
If we swap the transistors in the AND circuit, we get v = a AND NOT b; in the OR circuit, the same swap gives us n = a OR NOT b. So these two antisymmetric operations are likewise simple; and, in the multi-way AND and OR, our two transistors per input after the first let us invert as many of the inputs, after the first, as we like – at no extra cost.
We can express a XOR b as (a NAND b) AND (a OR b) or as (a AND b) NOR (a NOR b). Each of these involves a negation in a nested expression (NAND or NOR), so we can't do the job in a single resistor-based circuit using active transistors; we must combine two. None the less, the positive inner operation can be combined with the outer operation, so we do only need two. Thus XOR can be implemented (in two apparently distinct ways) using just five transistors and seven resistors (or five resistors, if they bind to inputs rather than transistors).
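Both decompositions are quick to verify by brute force:

```python
for a in (False, True):
    for b in (False, True):
        xor = a != b
        assert xor == ((not (a and b)) and (a or b))     # (a NAND b) AND (a OR b)
        assert xor == (not ((a and b) or not (a or b)))  # (a AND b) NOR (a NOR b)
```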
If we use transistors of both types, however, we can reduce the count even
further: a XOR b = (b AND NOT a) OR (a AND NOT b) can be implemented quite
directly. The circuit (at left, here) using a resistor needs four transistors;
it has a natural version without the resistor, obtained by replacing the
resistor with an arrangement of transistors that's almost the same as the one on
the other side of the output, but with one input's two transistors swapped
– it encodes (a AND b) OR (a NOR b). However, we can actually get away
with just four transistors by using one input to switch between using the other
and its negation. This connects two transistors' out-going current paths,
relying on the fact that exactly one of those current paths is connected, so it
doesn't matter if the other has a voltage across it the wrong way round, as happens whenever the output x is on.
Now, in that last circuit (on the right), notice that the NOT a
sub-circuit in the top right is only relevant when b is on; so we could use b in
place of power in this sub-circuit. If we do that, then when b is off this
sub-circuit's output is always off; we could then replace the transistor by
which it connects to x (and the wire from b that controls it) with a simple
diode, to prevent current flow back along it when a is on. I'll use that,
below, when I come to the three-input XOR, using two copies of it connected
together.
Another way to read XOR is as if not a then b; if not b then a; otherwise off, the first two terms in which take one passive transistor each, using one of a and b as the control line and the other as input to the current path.
Sadly it takes another two transistors just to deliver an off signal when both a
and b are on, implementing the final clause. Note that any two-input XOR
circuit must have an earth in it somewhere, to provide an off signal
when both inputs are on. When both inputs are off, the earth-path is
disconnected and our output is connected to both inputs – which is fine,
as they're both off. When exactly one input is on, it (and only it) is
connected to the output. We could replace the two active transistors here (and
save the wiring from inputs to them) with a resistor, if we're happy with
drawing a current from (one of) our inputs.
Now, the reason why we needed the second half of that last diagram was that we needed to supply an off value when both inputs were on. The problem here is that the required output isn't one of the inputs. When we come to attempt addition, below, we'll need a three-way XOR, which is true precisely if all of its inputs are on, or exactly one of them is. In this case, the output is the value of (at least) one of the inputs; so we can hope to be able to avoid needing to kludge the missing state into the circuit. The need to deliver the output from an input that's equal to it is, however, a tight constraint: except in the two cases where all three inputs are equal, there's exactly one input that's equal to the output. Thus far, I've found no way to exploit this that does better than just combining two XOR gates and exploiting the fact that XOR is associative. The XOR gate with three transistors and a diode forms a nice simple unit that can clearly be repeated as many times as we like, to obtain a multi-way XOR gate.
One can make XOR out of four NAND gates by reusing c = a NAND b twice in: a XOR b = (a NAND c) NAND (c NAND b).
While this is the fewest NAND gates that suffice, it's far from minimal when counting transistors, and needs several resistors. Rewriting (a NAND c) NAND (c NAND b) as (a AND c) OR (c AND b) can at least save a few resistors. I suspect there are many other ways to put together an XOR gate, but let's move on to other things.
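In code, that four-NAND construction looks like this (a sketch, with my own naming):

```python
def NAND(x, y): return not (x and y)

def xor(a, b):
    c = NAND(a, b)                       # computed once, used twice: four gates in all
    return NAND(NAND(a, c), NAND(c, b))  # equivalently (a AND c) OR (c AND b)

for a in (False, True):
    for b in (False, True):
        assert xor(a, b) == (a != b)
```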
Before I move on to addition, let's pause to consider how we can represent
numbers. Thus far we've dealt with inputs and outputs that are on or off:
as noted above, we can interpret either of those as 1 and
the other as 0: by convention, on is interpreted as 1, off as 0. However, when
we care about arithmetic, we normally want to do it with numbers other than just
0 and 1. We can achieve that by using many bits
, each of which may be
either 0 or 1, and reading them as digits of a number written out in binary.
Given enough bits, we can represent any number. In practice, we're normally
content to be able to represent all counting numbers less than some very large
power of two, or all whole numbers within some large power of two each side of
zero; as long as none of the inputs to, intermediate values in or outputs of our
calculations are too big to represent, this will do fine.
It is usual to group bits into sets of eight, a so-called byte or octet, capable of representing the numbers from 0 through 255 or, if we prefer (by one representation or another) from −128 through 127. These, in turn, are grouped to make larger units, capable of representing wider ranges of numbers. In any given computer architecture there is typically a grouping known as a word; modern computers have 64-bit words (8 bytes grouped together), slightly older (or less powerful) ones use 32-bit words; far enough back in history systems with 16 bits were (and may still be, at the very cheap end of the range available today) common. Any of these groupings lets us represent some definite range of whole numbers (or, indeed, some selection of the dyadic rationals as a useable approximation to enough real numbers (with fractional parts) to let programs do a wide range of tasks).
So I'll introduce a notation in my diagrams to represent a group of bits that are being interpreted collectively as a number; to avoid having to settle on a specific size of grouping, I'll include an ellipsis, ⋮ or … depending on the group's orientation, part way along, separating boxes indicating bits at each end; and, likewise, circuits shall include ellipsis in the space between components that are repeated, to indicate the repetition without actually cluttering the diagram with further copies of the components. I'll use upper-case (capital) letters to name such groups; and, when the group represents a whole number, I'll put the name outside the group, beside the box that represents the least significant bit of the group (i.e. the bit that indicates whether the number is odd (on) or even (off)). Each bit further away from it still represents 0 when off, but when on it represents a power of two, twice as big at each step away from the name's bit.
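To make that representation concrete, here's how such a least-significant-bit-first group maps to and from a whole number; the helper names are my own and will be reused below:

```python
def bits_to_number(bits):
    """Read a sequence of bits, least significant first, as a whole number."""
    return sum(1 << i for i, bit in enumerate(bits) if bit)

def number_to_bits(n, width):
    """The low `width` binary digits of n, least significant first."""
    return [bool((n >> i) & 1) for i in range(width)]

assert bits_to_number([True, False, True]) == 5       # 1 + 4
assert number_to_bits(5, 3) == [True, False, True]
```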
The next circuits the nand game asked me to build were the half adder and the full adder. Suppose we want to add two numbers, represented as above
by many bits each. To get the least significant bit of the sum, we only need to
look at the least significant bits of the inputs: if exactly one of them is 1,
so is the output's least significant bit. So the output's least significant bit
is the XOR of the least significant bits of the inputs. (That remains true even
when we have many inputs to sum.) If the inputs' least significant bits are
equal, the output's least significant bit is 0: but, if both input bits were 1,
they added up to two and we need to add one to the sum of the next pair of bits;
so the summation for that pair of bits needs to handle three inputs:
one bit from each of the two numbers, plus a carry
bit from the sum of
less significant digits. Thankfully, the most that sum can come to is three,
which can still be represented by two bits – if all three inputs are 1,
the sum's corresponding bit is 1 and we carry a 1 up to the next most
significant bit. So the more significant bits never need more than three
inputs, one from each number and one carry from less significant bits.
The two-bit adder that suffices to handle the least significant bits
delivers their XOR to be used as the output's least significant bit and their
AND to be carried over when adding the second-least significant bits. The piece
of circuitry to do this is called a half adder
(presumably the more
obvious name two-bit adder
is not used due to the idiomatic meaning
of two-bit
in USAish). We've already seen how to build AND and XOR
gates, so I'll leave depicting the half adder until it shows up in
the whole-number adder, below. It may be possible to
be cunning and devise an XOR gate that has AND as an intermediate value so that
we can simply pull the AND out as our left output while also using it on the way
to building XOR; but this does it fairly compactly, so I think I'll worry about
such optimisations after I've built up a little more structure.
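Behaviourally, the half adder is tiny; a sketch in Python (ignoring, of course, all the transistor-level detail):

```python
def half_adder(a, b):
    """Add two bits: XOR gives the sum's digit, AND gives the carry."""
    return a != b, a and b

assert half_adder(True, True) == (False, True)   # 1 + 1 = binary 10
```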
The piece of circuitry that adds three bits, to produce one output bit and a carry bit, is known as a full adder (rather than a three-bit adder, as might seem more natural, had the two-bit adder been named
naturally). We'll need one of these for each bit of our output number aside
from the least significant. It can be built out of two half-adders and an OR
gate, but can be done more efficiently. When playing the nand game, I built a
solution using three NAND gates, two OR NOT gates and one each of AND and OR.
The AND and OR NOT gates are each built of two NAND gates, while OR takes three,
so these add up to 14 NAND gates, each of which would cost two transistors and a
resistor; however, each of the gates I mentioned can in fact be made out of two
transistors (and, for NAND, a resistor), so building directly from transistors
costs half as much as building using NAND gates.
That uses 14 transistors, but we can do better. We can obtain v as the
three-way XOR of the inputs, needing six transistors and two diodes; and u is (a
AND b) OR (b AND c) OR (c AND a) = ((a OR b) AND c) OR (a AND b), needing five
transistors and a resistor; so we can trade three transistors and two resistors
for two diodes. The three-way XOR part (six transistors and two diodes) could
be replaced with four transistors and two resistors if we're OK with drawing
current from our inputs. (We could also implement the carry
bit as (a OR
b) if c else (a AND b), with no resistor, but this would take six transistors,
one of which I prefer to trade for a resistor.)
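As a check on the logic (though not on the transistor counts), here's the full adder's behaviour in Python, using the decomposition just given:

```python
from itertools import product

def full_adder(a, b, c):
    """Add three bits, returning (digit, carry)."""
    v = (a != b) != c                  # three-way XOR: the sum's digit
    u = ((a or b) and c) or (a and b)  # (a AND b) OR (b AND c) OR (c AND a): the carry
    return v, u

for a, b, c in product((False, True), repeat=3):
    v, u = full_adder(a, b, c)
    assert int(a) + int(b) + int(c) == int(v) + 2 * int(u)
```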
So we can add up to three bits, each of which we interpret as 0 when off or
1 when on. Using that, we can now make ourselves a number-adder: we only need a
half-adder to add the least significant bits (one from each number); when this
gives 0 or 1, it just sets the corresponding least significant bit of the output
as such, but when both are on we get 2 which leaves the output's least
significant bit off while supplying a third bit –
a carry
bit – to be added, along with the next most
significant bits, to produce the corresponding bit of the output plus, as
before, a carry bit for use in the next most significant position; and so on.
Finally, if the addition of the most significant bits, plus the carry from the
second most significant, yields a carry, the sum of the two
numbers overflows
the whole number. Thus the number adder
has, as output, a number representing the sum, plus a one-bit overflow
indicator.
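Chaining those units gives the number adder; a behavioural sketch, reusing full_adder and the bit-group helpers from above (the first stage could, as described, be a mere half adder, since its carry-in is always off):

```python
def add_numbers(A, B):
    """Ripple-carry addition of two equal-width bit groups (least significant first).

    Returns (sum bits, overflow bit)."""
    total, carry = [], False
    for a, b in zip(A, B):
        digit, carry = full_adder(a, b, carry)
        total.append(digit)
    return total, carry

S, overflow = add_numbers(number_to_bits(11, 4), number_to_bits(6, 4))
assert bits_to_number(S) == 1 and overflow   # 11 + 6 = 17: overflows four bits, leaving 1
```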
For the sake of variety, I'll use the cheap
XOR I mentioned above,
that might draw power from an input. In order to follow the usual arrangements
of digits in numbers, with significance decreasing from left to right, I'm going
to flip the circuits left-right; the carry we need from a less significant bit
needs to come out of the left side of each full adder if it's to be used as an
input to the adder for the next most significant bit. Since, even then, the
wiring gets a bit fiddly, I've used a different colour for the wires from one
input number into the circuits, to make it easier to keep them distinct from
those from the other.
Notice that the half-adder shown here is simple enough that two of it and an OR gate, which suffice to make a full adder, take only one more transistor (and one fewer resistor) than the full adder units I've got repeated across this (it would also trade a connection to power for a connection to earth). That takes nine transistors per bit, minus five saved on the least significant. Maybe some time I'll get round to messing with multiplication, but that seems likely to be rather complex !
Enough of arithmetic: how can we store values ? First let's look at what the nand game refers to as latch: when the control bit s (for store) is
on, the output matches the input; but when s is off, the output stays unchanged,
regardless of any changes to input. This is actually quite simple; it can be
done with four NAND gates. Once I'd noticed OR NOT as a gate, I was able to
simplify my solution to just two of it and an AND gate: six transistors. (It
can equally be done with two NAND gates and an OR NOT gate: but OR NOT and AND
can be done in two transistors without any resistors, or steady current flow,
where NAND costs more.) When s is on, z tracks a; when s is off, z stays at
what it was when last s was on. Simplifying the logic, this is z = (a AND s) OR (z AND NOT s).
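That logic is easy to play with in code, treating each settling of the circuit as one application of the function (a behavioural sketch, not a claim about the electronics):

```python
def latch(z, s, a):
    """One settling of the latch: z = (a AND s) OR (z AND NOT s)."""
    return (a and s) or (z and not s)

z = False
z = latch(z, True, True)    # s on: z tracks a, storing on
z = latch(z, False, False)  # s off: a is ignored
assert z                    # z still remembers the stored value
```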
One might perhaps hope to achieve the same with the
conditional-selection diagram I used above (u = a if s else
c, wiring u and c together as z); however, I have my doubts about the practical
viability of its feeding the output of a transistor back in as one of the inputs
to that same transistor. The circuit given here, when s is off, delivers z's
value either from s (as off) or from an active power supply (as on); while the
choice between them depends on z's own value, a little decay towards
the half-on
should always be corrected by the transistors' non-linear
clipping to the source selected.
The nand game's data flip-flop
combines two of these (per bit of data to
be stored) to let a clock
signal, c, when on, deliver an internally
stored value, while suppressing changes to that value; when the clock is off, as
before, the store
signal, s, controls whether an input, a, changes the
internal stored value; but the output, z, doesn't change until the clock ticks
on again. We can apply a single-bit flip-flop to each bit of an entire number,
sharing the store and clock lines, hence also the s AND NOT c that actually
controls each input latch, to manage a whole number. That gives the component
that the nand game refers to as a register: a whole number that can be updated
in a controlled way. It takes twelve transistors per bit plus two for the clock
and store (shared by all bits). Remembering values in a controlled way costs
more than adding them.
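Behaviourally (again ignoring the transistor-level detail), the flip-flop amounts to two latches in sequence; a sketch:

```python
class FlipFlop:
    """The data flip-flop: an input latch gated by s AND NOT c feeding
    an output latch gated by the clock, c."""
    def __init__(self):
        self.stored = self.output = False

    def step(self, c, s, a):
        if s and not c:  # clock off: s controls whether a changes the stored value
            self.stored = a
        if c:            # clock on: deliver the stored value; input changes are suppressed
            self.output = self.stored
        return self.output

ff = FlipFlop()
ff.step(c=False, s=True, a=True)          # store an on value while the clock is off
assert ff.step(c=True, s=False, a=False)  # it reaches the output when the clock ticks on
```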
With each byte of memory costing of order a hundred transistors, and with modern processors having many megabytes of local cache on the chip, plus gigabytes of RAM from which to fill that cache (and typically several layers of intermediate cache in between), a modern computer has to pack in vast numbers of transistors.
Written by Eddy.