Coa - Unit 4
ARITHMETIC CIRCUITS
Circuits that can perform binary addition and subtraction are constructed by
combining logic gates. These circuits are used in the design of the arithmetic logic
unit (ALU). The electronic circuits are capable of very fast switching action, and thus
an ALU can operate at high clock rates.
For instance, the addition of two numbers can be accomplished in a matter of
nanoseconds! This chapter begins with binary addition and subtraction, then presents
two different methods for representing negative numbers. You will see how an
exclusive OR gate is used to construct a half-adder and a full-adder. You will see how
to construct an 8-bit adder-subtractor using a popular IC.
A technique to design a fast adder is discussed in detail followed by discussion on a
multifunctional device called Arithmetic Logic Unit or ALU. Finally, an outline to
perform binary multiplication and division is also presented.
ARITHMETIC BUILDING BLOCKS
We are on the verge of seeing a logic circuit that performs 8-bit arithmetic on positive
and negative numbers. But first we need to cover three basic circuits that will be used
as building blocks.
These building blocks are the half-adder, the full-adder, and the controller inverter.
Once you understand how these work, it is only a short step to see how it all comes
together, that is, how a computer is able to add and subtract binary numbers of any
length.
Half-Adder
When we add two binary numbers, we start with the least-significant column. This
means that we have to add two bits with the possibility of a carry. The circuit used for
this is called a half-adder.
Figure 6.3 shows how to build a half-adder. The output of the exclusive-OR gate is
called the SUM, while the output of the AND gate is the CARRY. The AND gate
produces a high output only when both inputs are high. The exclusive-OR gate
produces a high output if either input, but not both, is high. Table 6.2 shows the truth
table of a half-adder.
When you examine each entry in Table 6.2, you are struck by the fact that a half-
adder performs binary addition.
As you see, the half-adder mimics our brain processes in adding bits. The only
difference is the half-adder is about a million times faster than we are.
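The half-adder's behavior can be sketched directly from Table 6.2. The following Python snippet is a behavioral model used here for illustration, not the gate-level circuit itself:

```python
def half_adder(a, b):
    """Model of the half-adder: XOR gives SUM, AND gives CARRY."""
    return a ^ b, a & b  # (SUM, CARRY)

# Reproduce the truth table of Table 6.2
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"A={a} B={b}  SUM={s} CARRY={c}")
```

Note the last row: 1 + 1 gives SUM = 0 with CARRY = 1, which is binary 10.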
Full-Adder
For the higher-order columns, we have to use a full-adder, a logic circuit that can add
3 bits at a time. The third bit is the carry from a lower column. This implies that we
need a logic circuit with three inputs and two outputs, similar to the full-adder shown
in Fig. 6.4a. (Other designs are possible. This one is the simplest.)
Table 6.3 shows the truth table of a full-adder. You can easily check this truth table
for its validity. For instance, CARRY is high in Fig. 6.4a when two or more of the
ABC inputs are high; this agrees with the CARRY column in Table 6.3. Also, when
an odd number of high ABC inputs drives the exclusive-OR gate, it produces a high
output; this verifies the SUM column of the truth table.
From this truth table we get the Karnaugh map shown in Fig. 6.4b, which gives the
following logic equations:

SUM = A ⊕ B ⊕ C
CARRY = AB + BC + CA

A general representation of a full-adder, which adds the i-th bits Ai and Bi of two
numbers A and B and takes the carry from the (i-1)-th bit, could be

Si = Ai ⊕ Bi ⊕ Ci-1
Ci = AiBi + BiCi-1 + Ci-1Ai = AiBi + (Ai + Bi)Ci-1
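These equations translate line for line into a short behavioral sketch (Python used here purely for illustration):

```python
def full_adder(a, b, c_in):
    """Add two bits plus a carry-in; returns (SUM, CARRY OUT)."""
    s = a ^ b ^ c_in                           # SUM = A xor B xor C
    c_out = (a & b) | (b & c_in) | (c_in & a)  # CARRY = AB + BC + CA
    return s, c_out

# Sanity check against Table 6.3: 1 + 1 + 1 = binary 11 (SUM 1, CARRY 1)
print(full_adder(1, 1, 1))  # (1, 1)
```

The carry expression is high whenever two or more of the three inputs are high, matching the CARRY column of the truth table.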
Controlled Inverter
Figure 6.5 shows a controlled inverter. When INVERT is low, it transmits the 8-bit input to
the output unchanged; when INVERT is high, it transmits the 1's complement, that is, every
input bit appears complemented at the output.
The controlled inverter is important because it is a step in the right direction. During a
subtraction, we first need to take the 2's complement of the subtrahend. Then we can
add the complemented subtrahend to obtain the answer.
With a controlled inverter, we can produce the 1's complement. There is an easy way
to get the 2's complement, discussed in the next section. So, we now have all the
building blocks: half-adder, full-adder, and controlled inverter.
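The controlled inverter is just a row of XOR gates sharing one INVERT line; a minimal behavioral sketch (the bit-list representation is an illustration choice, not part of the original circuit description):

```python
def controlled_inverter(bits, invert):
    """XOR each input bit with INVERT: pass-through when INVERT = 0,
    1's complement when INVERT = 1."""
    return [b ^ invert for b in bits]

word = [0, 1, 1, 0, 1, 1, 1, 0]
print(controlled_inverter(word, 0))  # transmitted unchanged
print(controlled_inverter(word, 1))  # 1's complement
```

Because X ⊕ 0 = X and X ⊕ 1 = X', a single control line flips the whole word at once.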
THE ADDER-SUBTRACTOR
We can connect full-adders as shown in Fig. 6.6 to add or subtract binary numbers.
The circuit is laid out from right to left, similar to the way we add binary numbers.
Therefore, the least-significant column is on the right, and the most-significant
column is on the left.
The boxes labeled FA are full-adders. (Some adding circuits use a half-adder instead
of a full-adder in the least-significant column.)
The CARRY OUT from each full-adder is the CARRY IN to the next-higher full-
adder. The numbers being processed are A7...A0 and B7...B0, and the answer is
S7...S0.
With 8-bit arithmetic, the final carry is ignored for reasons given earlier. With 16-bit
arithmetic, the final carry is the carry into the addition of the upper bytes.
Addition
Here is how an addition is performed.
During an addition, the SUB signal is deliberately kept in the low state. Therefore,
the binary number B7... Bo passes through the controlled inverter with no change.
The full-adders then produce the correct output sum.
They do this by adding the bits in each column, passing carries to the next higher
column, and so on. For instance, starting at the LSB, the full-adder adds A0, B0,
and SUB. This produces a SUM of S0 and a CARRY OUT to the next-higher full-
adder. The next-higher full-adder then adds A1, B1, and the CARRY IN to
produce S1 and a CARRY OUT.
A similar addition occurs for each of the remaining full-adders, and the correct
sum appears at the output lines.
For instance, suppose that the numbers being added are +125 and -67. Then A7...A0 =
0111 1101 and B7...B0 = 1011 1101. This is the problem:

  0111 1101
+ 1011 1101

The CARRY OUT of the first full-adder is the CARRY IN to the second full-adder.
In a similar way, the remaining full-adders add their 3 input bits until we arrive at the last
full-adder, giving 1 0011 1010. Ignoring the final carry, the answer S7...S0 = 0011 1010
is equivalent to decimal +58, which is the algebraic sum of the numbers we
started with: +125 and -67.
Subtraction
Here is how a subtraction is performed.
During a subtraction, the SUB signal is deliberately put into the high state. Therefore, the
controlled inverter produces the 1's complement of B7...B0. Furthermore, because SUB is the
CARRY IN to the first full-adder, the circuit processes the data like this: when A7...A0 = 0,
the circuit produces the 2's complement of B7...B0, because 1 is being added to the 1's
complement of B7...B0. When A7...A0 does not equal zero, the effect is equivalent to adding
A7...A0 and the 2's complement of B7...B0.

Here is an example. Suppose that the numbers are +82 and +17. Then A7...A0 =
0101 0010 and B7...B0 = 0001 0001. The controlled inverter produces the 1's complement
of B, which is 1110 1110. Since SUB = 1 during a subtraction, the circuit performs the
following addition:

  0101 0010
+ 1110 1110
+         1
-----------
1 0100 0001

For 8-bit arithmetic, the final carry is ignored as previously discussed; therefore, the
answer is S7...S0 = 0100 0001. This answer is equivalent to decimal +65, which is the
algebraic difference between the numbers we started with: +82 and +17.
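The whole adder-subtractor can be modeled in a few lines. This sketch treats the 8-bit words as Python integers and mirrors the dual role of the SUB line (it drives both the controlled inverter and the first carry-in):

```python
def add_sub_8bit(a, b, sub):
    """8-bit adder-subtractor: SUB = 0 adds, SUB = 1 subtracts.
    SUB feeds the controlled inverter AND the first carry-in, so the
    circuit computes A + (1's complement of B) + 1 = A - B."""
    if sub:
        b ^= 0xFF                 # controlled inverter: 1's complement of B
    return (a + b + sub) & 0xFF   # final carry ignored (8-bit arithmetic)

print(add_sub_8bit(0b0111_1101, 0b1011_1101, 0))  # +125 + (-67) = 58
print(add_sub_8bit(0b0101_0010, 0b0001_0001, 1))  # +82 - (+17) = 65
```

Both worked examples from the text check out: the masking step is exactly the "ignore the final carry" rule for 8-bit arithmetic.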
FAST ADDER
A fast adder is also called a parallel adder or carry look-ahead adder, because that is how
it attains high speed in the addition operation. Before we go into that circuit, let's see what
limits the speed of an adder.
Consider the worst-case scenario, when two four-bit numbers A: 1111 and B: 0001
are added. This generates a carry in the first stage that propagates through every stage up
to the last. Addition such as this (Fig. 6.6) is called serial addition or ripple-carry addition.
The adder equation (given in Section 6.8) also reveals that the result of every stage
depends on the availability of the carry from the previous stage.
The minimum delay required for carry generation in each stage is two gate delays,
one coming from the AND gates (1st level) and the second from the OR gate (2nd level).
For 32-bit serial addition there will be 32 stages working in series.
In the worst case, it will require 32 × 2 = 64 gate delays to generate the final carry. Though
each gate delay is of nanosecond order, serial addition definitely limits the speed of high-
speed computing. A parallel adder increases the speed by generating the carry in advance
(look ahead), so there is no need to wait for the result from the previous stage. This is achieved
by the following method.
Let us use the second equation for carry generation from the previous section, i.e.

Ci = Gi + PiCi-1, where Gi = AiBi and Pi = Ai + Bi

Gi stands for generation of carry and Pi stands for propagation of carry in a particular
stage, depending on the input to that stage. As explained in the previous section, if AiBi = 1,
then the i-th stage will generate a carry, no matter whether the previous stage generates one
or not. And if Ai + Bi = 1, then this stage will propagate a carry, if available from the
previous stage, to the next stage.

Note that all Gi and Pi are available after one gate delay once the numbers A and
B are placed.

Starting from the LSB, designated by suffix 0, if we proceed iteratively we get

C0 = G0 + P0Cin
C1 = G1 + P1C0 = G1 + P1G0 + P1P0Cin
C2 = G2 + P2C1 = G2 + P2G1 + P2P1G0 + P2P1P0Cin
C3 = G3 + P3C2 = G3 + P3G2 + P3P2G1 + P3P2P1G0 + P3P2P1P0Cin
The equations look pretty complicated. But do we gain in any way? Note that these
equations can be realized in hardware using multi-input AND and OR gates, in two
levels. Now, for each carry, whether C0 or C3, we require only two gate delays once the Gi
and Pi are available.
We have already seen they are available after 1 gate delay. Thus a parallel adder (circuit
diagram for 2 bits is shown in Fig. 6.8a) generates the carry within
1 + 2 = 3 gate delays. Note that, after the carry is available at any stage, there are two more
gate delays through the Ex-OR gates to generate the sum bit, as we can write Si = Gi ⊕ Pi ⊕
Ci-1.
Thus serial adder in worst case requires at least (2n + 2) gate delays for n-bit addition
and parallel adder requires only 3 + 2 = 5 gate delays for that. One can imagine the gain
for higher values of n.
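The iterative equation Ci = Gi + Pi·Ci-1 can be checked with a short simulation. The hardware flattens the recursion into two gate levels, but the resulting carries are identical; the LSB-first bit lists below are an illustration choice:

```python
def lookahead_carries(a_bits, b_bits, c_in=0):
    """Carries of an n-bit addition from the Gi, Pi terms (LSB first)."""
    carries = []
    c = c_in
    for a, b in zip(a_bits, b_bits):
        g = a & b        # Gi = AiBi: this stage generates a carry
        p = a | b        # Pi = Ai + Bi: this stage propagates a carry
        c = g | (p & c)  # Ci = Gi + Pi*C(i-1); hardware flattens this
        carries.append(c)
    return carries

# Worst case from the text: A = 1111, B = 0001 (written LSB first)
print(lookahead_carries([1, 1, 1, 1], [1, 0, 0, 0]))  # [1, 1, 1, 1]
```

The first stage generates a carry and every later stage propagates it, which is exactly the ripple pattern the look-ahead circuit computes in a fixed two-level delay.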
However, there is a caution. We cannot increase n indiscriminately for a parallel adder,
as every logic gate has a capacity to accept at most a certain number of inputs, termed fan-
in. This is a characteristic of the logic family to which the gate belongs. More about this is
discussed in Chapter 14.
The other disadvantage of the parallel adder is increased hardware complexity for large
n. In Fig. 6.8b we present the functional diagram and pin connections of a popular fast adder,
IC 74283.
Now, how do we add two 8-bit numbers using IC 74283? Obviously, we need two
such devices, and the Cout of the LSB adder will be fed as the Cin of the MSB unit. This way
each individual 4-bit addition is done in parallel, but between the two ICs the carry
propagates by rippling.
To avoid carry ripple between two ICs and get truly parallel addition, the following
approach can be useful. Let each individual 4-bit adder unit generate two additional outputs,
Group Carry Generate (G3-0) and Group Carry Propagate (P3-0). They are defined as follows:

G3-0 = G3 + P3G2 + P3P2G1 + P3P2P1G0
P3-0 = P3P2P1P0

What do we get from the above equations? The group carry generation and propagation
terms are available from the respective adder blocks (G3-0, P3-0 from the LSB unit and G7-4,
P7-4 from the MSB unit) after 3 and 2 gate delays respectively.
This comes from the logic equations that define them, with Gi and Pi available after 1
gate delay.
Once these group-carry terms are available, we can generate C7 from them by designing
a small Look-Ahead Carry (LAC) generator circuit implementing

C7 = G7-4 + P7-4G3-0 + P7-4P3-0Cin

This requires a bank of AND gates (here, one 2-input and one 3-input) followed by a
multi-input OR gate (here, three-input), totaling 2 gate delays.
Thus the final carry is available in 3 + 2 = 5 gate delays, and this indeed is what we
were looking for in parallel addition. In the next section we discuss a versatile IC,
the 74181, that while performing 4-bit addition generates these group carry generation
and propagation terms.
LAC generator circuits are also commercially available; IC 74182 can take up to
four pairs of group carry terms from four adder units and generate the final carry for 16-
bit addition.
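The group terms and the LAC generator equation can be sketched the same way (function names here are illustrative, not from the 74182 datasheet):

```python
def group_terms(g, p):
    """Group Generate and Propagate for one 4-bit block (lists LSB first):
    G3-0 = G3 + P3G2 + P3P2G1 + P3P2P1G0,  P3-0 = P3P2P1P0."""
    G = g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1]) | (p[3] & p[2] & p[1] & g[0])
    P = p[3] & p[2] & p[1] & p[0]
    return G, P

def lac_carry(G_hi, P_hi, G_lo, P_lo, c_in):
    """C7 = G7-4 + P7-4*G3-0 + P7-4*P3-0*Cin: one 2-input AND,
    one 3-input AND, and a 3-input OR, i.e. two gate levels."""
    return G_hi | (P_hi & G_lo) | (P_hi & P_lo & c_in)
```

A carry generated in the low block reaches C7 only if the high block propagates it, which is exactly what the middle product term expresses.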
Synchronous Operation
Nearly all of the circuits in a digital system (computer) change states in synchronism
with the system clock. A change of state will either occur as the clock transitions from
low to high or as it transitions from high to low.
The low-to-high transition is frequently called the positive transition (PT), as shown in
Fig. 7.2.
The PT is given emphasis by drawing a small arrow on the rising edge of the clock
waveform. A circuit that changes state at this time is said to be positive-edge-triggered.
The high-to-low transition is called the negative transition (NT), as shown in Fig. 7.2.
The NT is emphasized by drawing a small arrow on the falling edge of the clock
waveform. A circuit that changes state at this time is said to be negative-edge-
triggered.
Virtually all circuits in a digital system are either positive-edge- triggered or negative-
edge-triggered, and thus are synchronized with the system clock. There are a few
exceptions.
For instance, the operation of a push button (RESET) by a human operator might result
in a change of state that is not in synchronism with the system clock.
Characteristics
The clock waveform drawn above the time line in Fig. 7.3a is a perfect, ideal clock.
What exactly are the characteristics that make up an ideal clock? First, the clock levels
must be absolutely stable.
When the clock is high, the level must hold a steady value of +5 V, as shown between
points a and b on the time line.
When the clock is low, the level must be an unchanging 0 V, as it is between points b
and c. In actual practice, the stability of the clock is much more important than the
absolute value of the voltage level.
For instance, it might be perfectly acceptable to have a high level of +4.8 V instead of
+5.0 V, provided it is a steady, unchanging +4.8 V.
The second characteristic deals with the time required for the clock levels to change from
high to low or vice versa. The transition of the clock from low to high at point a in Fig. 7.3a
is shown by a vertical line segment.
This implies a time of zero; that is, the transition occurs instantaneously, requiring zero
time. The same is true of the transition time from high to low at point b in Fig. 7.3a. Thus
an ideal clock has zero transition time.
A nearly perfect clock waveform might appear on an oscilloscope trace as shown in Fig.
7.3b. At first glance this would seem to be two horizontal traces composed of line
segments.
On closer examination, however, it can be seen that the waveform is exactly like the ideal
waveform in Fig. 7.3a if the vertical segments are removed. The vertical segments might
not appear on the oscilloscope trace because the transition times are so small (nearly zero)
and the oscilloscope is not capable of responding quickly enough.
The vertical segments can usually be made visible by either increasing the oscilloscope
"intensity," or by reducing the "sweep time."
Figure 7.3c shows a portion of the waveform in Fig. 7.3b expanded by reducing the
"sweep time" such that the transition times are visible. Clearly it requires some time for
the waveform to transition from low to high; this is defined as the rise time tr. The time
required for the transition from high to low is defined as the fall time tf. It is
customary to measure the rise and fall times from points on the waveform referred to as
the 10 and 90 percent points.
In this case, a 100 percent level change is 5.0 V, so 10 percent of this is 0.5 V and 90
percent is 4.5 V. Thus the rise time is the time required for the waveform to travel from
0.5 up to 4.5 V. Similarly, the fall time is the time required for the waveform to transition
from 4.5 down to 0.5 V.
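The 10/90 percent measurement is easy to express as a small calculation. This sketch scans a list of (time, voltage) samples for the two threshold crossings; it assumes a single monotonic rising edge, a simplification of what an oscilloscope actually does:

```python
def rise_time(samples, low=0.0, high=5.0):
    """Rise time between the 10% and 90% points of a rising edge.
    `samples` is a list of (t, v) pairs in time order."""
    v10 = low + 0.1 * (high - low)  # 0.5 V for a 0-to-5 V swing
    v90 = low + 0.9 * (high - low)  # 4.5 V
    t10 = next(t for t, v in samples if v >= v10)  # first 10% crossing
    t90 = next(t for t, v in samples if v >= v90)  # first 90% crossing
    return t90 - t10

# A linear 0-to-5 V ramp over 5 ns: the 10%-90% span covers 4 ns of it
ramp = [(i / 10, i / 10) for i in range(51)]  # (t in ns, v in volts)
print(rise_time(ramp))
```

Note that the measured rise time is shorter than the full 0-to-100 percent transition, which is precisely why the 10 and 90 percent convention exists: the extreme ends of a real edge are poorly defined.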
Finally, the third requirement that defines an ideal clock is its frequency stability. The
frequency of the clock should be steady and unchanging over a specified period of time.
Short-term stability can be specified by requiring that the clock frequency (or its period)
not be allowed to vary by more than a given percentage over a short period of time, say, a
few hours.
Clock signals with short-term stability can be derived from straightforward electronic
circuits as shown in the following sections.
Long-term stability deals with longer periods of time, perhaps days, months, or years.
Clock signals that have long-term stability are generally derived from rather special circuits
placed in a heated enclosure (usually called an "oven") in order to guarantee close control
of temperature and hence frequency.
Such circuits can provide clock frequencies having stabilities better than a few parts in 10^9
per day.
FLIP-FLOPS
The outputs of the digital circuits considered previously depend entirely on their
inputs. That is, if an input changes state, the output may also change state.
However, there are requirements for a digital device or circuit whose output will remain
unchanged, once set, even if there is a change in input level(s). Such a device could be used
to store a binary number.
A flip-flop is one such circuit, and the characteristics of the most common types of flip-
flops are considered here. A flip-flop is often called a latch, since it will hold, or latch, in
either stable state.
Basic Idea
One of the easiest ways to construct a flip-flop is to connect two inverters in series as
shown in Fig. 8.2a.
The line connecting the output of inverter B (INV B) back to the input of inverter A (INV
A) is referred to as the feedback line.
For the moment, remove the feedback line and consider V1 as the input and V3 as the
output as shown in Fig. 8.2b.
There are only two possible signals in a digital system, and in this case we will define L =
0 = 0 V dc and H = 1 = +5 V dc.
If V1 is set to 0 V dc, then V3 will also be 0 V dc. Now, if the feedback line shown in Fig.
8.2b is reconnected, the ground can be removed from V1, and V3 will remain at 0 V dc.
This is true since once the input of INV A is grounded, the output of INV B will go low
and can then be used to hold the input of INV A low by using the feedback line. This is
one stable state: V3 = 0 V dc.
Conversely, if V1 is +5 V dc, V3 will also be +5 V dc as seen in Fig. 8.2c. The feedback line
can again be used to hold V1 at +5 V dc since V3 is also at +5 V dc. This is then the second
stable state: V3 = +5 V dc.
NOR-Gate Latch
The basic flip-flop shown in Fig. 8.2a can be improved by replacing the inverters with
either NAND or NOR gates. The additional inputs on these gates provide a convenient
means for application of input signals to
switch the flip-flop from one stable state to the other. Two 2-input NOR gates are connected
in Fig. 8.3a to form a flip-flop.
Notice that if the two inputs labeled R and S are ignored, this circuit will function exactly as
the one shown in Fig. 8.2a.
This circuit is redrawn in a more conventional form in Fig. 8.3b. The flip-flop actually
has two outputs, defined in more general terms as Q and Q'.
It should be clear that regardless of the value of Q, its complement is Q'. There are two
inputs to the flip-flop, defined as R and S.
The input/output possibilities for this RS flip-flop are summarized in the truth table in
Fig. 8.4.
To aid in understanding the operation of this circuit, recall that an H = 1 at any input of
a NOR gate forces its output to an L = 0.
1. The first input condition in the truth table is R = 0 and S = 0. Since a 0 at the input of a
NOR gate has no effect on its output, the flip-flop simply remains in its present state; that
is, Q remains unchanged.
2. The second input condition, R = 0 and S = 1, forces the output of NOR gate B low. Both
inputs to NOR gate A are now low, and the NOR-gate output must be high. Thus a 1 at
the S input is said to SET the flip-flop, and it switches to the stable state where Q = 1.
3. The third input condition is R = 1 and S = 0. This condition forces the output of NOR gate
A low, and since both inputs to NOR gate B are now low, the output must be high. Thus a
1 at the R input is said to RESET the flip-flop, and it switches to the stable state where
Q = 0.
4. The last input condition in the table, R = 1 and S = 1, is forbidden, as it forces the outputs
of both NOR gates to the low state.
In other words, both Q = 0 and Q' = 0 at the same time! But this violates the basic
definition of a flip-flop that requires Q' to be the complement of Q, and so it is
generally agreed never to impose this input condition. Incidentally, if this condition is
for some reason imposed, and the next input is R = 0, S = 0, then the resulting state of Q
depends on the propagation delays of the two NOR gates.
If the delay of gate A is less, i.e. it acts faster, then Q = 1; else it is 0. Such dependence
makes the job of a design engineer difficult, as any replacement of a NOR gate will
make Q unpredictable.
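The cross-coupled feedback can be simulated by iterating the two NOR equations until the outputs stop changing; this minimal sketch uses the iteration as a stand-in for real gate delays:

```python
def nor(a, b):
    """2-input NOR gate."""
    return 0 if (a or b) else 1

def nor_latch(q, s, r):
    """Settle the cross-coupled NOR latch from present state q
    with inputs S and R; returns (Q, Q')."""
    qb = nor(s, q)
    for _ in range(4):                            # iterate until feedback settles
        q2, qb2 = nor(r, qb), nor(s, nor(r, qb))  # re-evaluate both gates
        if (q2, qb2) == (q, qb):
            break
        q, qb = q2, qb2
    return q, qb

print(nor_latch(0, s=1, r=0))  # SET:   (1, 0)
print(nor_latch(1, s=0, r=1))  # RESET: (0, 1)
print(nor_latch(1, s=0, r=0))  # hold:  (1, 0)
```

With S = R = 0 the latch simply reproduces its present state, which is the storage behavior the truth table's first row describes.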
NAND-Gate Latch
A slightly different latch can be constructed by using NAND gates as shown in Fig. 8.7.
The truth table for this NAND-gate latch is different from that for the NOR-gate latch.
We will call this latch an RS flip-flop. To understand how this circuit functions, recall that
a low on any input to a NAND gate will force its output high. Thus a low on the S input
will set the latch (Q = 1 and Q' = 0).
A low on the R input will reset it (Q = 0). If both R and S are high, the flip-flop will remain
in its previous state. Setting both R and S low simultaneously is forbidden, since this forces
both Q and Q' high.
Figure 8.21 shows a positive pulse-forming circuit at the input of a D latch. The narrow
positive pulse (PT) enables the AND gates for an instant.
The effect is to activate the AND gates during the PT of C, which is equivalent to
sampling the value of D for an instant. At this unique point in time, D and its
complement hit the flip-flop inputs, forcing Q to set or reset (unless Q already equals D).
Again, this operation is called edge triggering because the flip-flop responds only when
the clock is in transition between its two voltage states. The triggering in Fig. 8.21 occurs
on the positive-going edge of the clock; this is why it's referred to as positive-edge
triggering.
The truth table in Fig. 8.21b summarizes the action of a positive-edge-triggered D flip-
flop. When the clock is low, D is a don't care and Q is latched in its last state.
On the leading edge of the clock (PT), designated by the up arrow, the data bit is loaded
into the flip-flop and Q takes on the value of D.
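Edge triggering, as opposed to level sensitivity, is easy to see in a small behavioral model: Q changes only on the low-to-high clock transition (class and method names here are illustrative):

```python
class DFlipFlop:
    """Positive-edge-triggered D flip-flop model."""
    def __init__(self):
        self.q = 0
        self._clk = 0  # last seen clock level

    def tick(self, d, clk):
        if clk == 1 and self._clk == 0:  # PT: sample D for an instant
            self.q = d
        self._clk = clk                  # otherwise Q stays latched
        return self.q

ff = DFlipFlop()
print(ff.tick(d=1, clk=0))  # clock low: D is a don't care -> 0
print(ff.tick(d=1, clk=1))  # PT: Q loads D -> 1
print(ff.tick(d=0, clk=1))  # clock still high, no edge: Q holds -> 1
```

Holding the clock high and changing D has no effect, which is exactly the difference between an edge-triggered flip-flop and a transparent latch.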
When power is first applied, flip-flops come up in random states. To get some computers
started, an operator has to push a RESET button. This sends a CLEAR or RESET signal
to all flip-flops.
Also, it's necessary in some digital systems to preset (synonymous with set) certain flip-flops.
Figure 8.22 shows how to include both functions in a D flip-flop. The edge triggering is
the same as previously described. Depressing the RESET button will set Q to 1 with the
first PT of the clock. Q will remain high as long as the button is held closed.
The first PT of the clock after releasing the button will set Q according to the D input.
Furthermore, the OR gates allow us to slip in a high PRESET or a high CLEAR when
desired. A high PRESET forces Q to equal 1; a high CLEAR resets Q to 0.
The PRESET and CLEAR are called asynchronous inputs because they activate the
flip-flop independently of the clock.
On the other hand, the D input is a synchronous input because it has an effect only with PTs
of the clock.
Figure 8.23a is the IEEE symbol for a positive-edge-triggered D flip-flop. The clock
input has a small triangle to serve as a reminder of edge triggering.
When you see this symbol, remember what it means; the D input is sampled and stored
on PTs of the clock.
Sometimes, triggering on NTs of the clock is better suited to the application. In this
case, an internal inverter can complement the clock pulse before it reaches the AND
gates.
Figure 8.23b is the symbol for a negative-edge-triggered D flip-flop. The bubble and
triangle symbolize the negative-edge triggering.
Figure 8.23c is another commercially available D flip-flop (the 54/74175 or 54/74LS175).
Besides having positive-edge triggering, it has an inverted CLEAR input. This
means that a low CLR resets it.
The 54/74175 has four of these D flip-flops in a single 16-pin dual in-line package
(DIP), and it's referred to as a quad D-type flip-flop with clear.
EDGE-TRIGGERED JK FLIP-FLOP
Setting R = S = 1 with an edge-triggered RS flip-flop forces both Q and Q' to the same logic
level. This is an illegal condition, and it is not possible to predict the final state of Q.
The JK flip-flop accounts for this illegal input, and is therefore a more versatile circuit.
Among other things, flip-flops can be used to build counters. Counters can be used to count
the number of PTs or NTs of a clock. For purposes of counting, the JK flip-flop is the ideal
element to use.
There are many commercially available edge-triggered JK flip-flops. Let's see how they
function.
Positive-Edge-Triggered JK Flip-Flops
In Fig. 8.24, the pulse-forming box changes the clock into a series of positive pulses, and thus
this circuit will be sensitive to PTs of the clock. The basic circuit is identical to the previous
positive-edge-triggered RS flip-flop, with two important additions:
1. The Q output is connected back to the input of the lower AND gate.
2. The Q' output is connected back to the input of the upper AND gate.
This cross-coupling from outputs to inputs changes the RS flip-flop into a JK flip-flop. The
previous S input is now labeled J, and the previous R input is labeled K. Here's how it
works:
1. When J and K are both low, both AND gates are disabled. Therefore, clock pulses have no
effect. This first possibility is the initial entry in the truth table. As shown, when J and K are
both 0s, Q retains its last value.
2. When J is low and K is high, the upper gate is disabled, so there's no way to set the flip-flop.
The only possibility is reset. When Q is high, the lower gate passes a RESET pulse as soon as
the next positive clock edge arrives. This forces Q to become low (the second entry in the
truth table). Therefore, J = 0 and K = 1 means that the next PT of the clock resets the flip-flop
(unless Q is already reset).
3. When J is high and K is low, the lower gate is disabled, so it's impossible to reset the flip-
flop. But you can set the flip-flop as follows. When Q is low, Q' is high; therefore, the
upper gate passes a SET pulse on the next positive clock edge. This drives Q into the high
state (the third entry in the truth table). As you can see, J = 1 and K = 0 means that the next
PT of the clock sets the flip-flop (unless Q is already high).
4. When J and K are both high (notice that this is the forbidden state with an RS flip-flop), it's
possible to set or reset the flip-flop. If Q is high, the lower gate passes a RESET pulse on
the next PT. On the other hand, when Q is low, the upper gate passes a SET pulse on the
next PT. Either way, Q changes to the complement of the last state (see the truth table).
Therefore, J = 1 and K = 1 means the flip-flop will toggle (switch to the opposite state) on
the next positive clock edge.
Propagation delay prevents the JK flip-flop from racing (toggling more than once during a
positive clock edge). Here's why. In Fig. 8.24, the outputs change after the PT of the clock. By
then, the new Q and Q' values are too late to coincide with the PTs driving the AND gates.
For instance, if tp = 20 ns, the outputs change approximately 20 ns after the leading edge of
the clock.
If the PTs are narrower than 20 ns, the returning Q and Q' arrive too late to cause
false triggering.
Figure 8.25a shows a symbol for a JK flip-flop of any design. When you see this on
a schematic diagram, remember that on the next PT of the clock:
1. J and K low: no change of Q.
2. J low and K high: Q is reset low.
3. J high and K low: Q is set high.
4. J and K both high: Q toggles to opposite state.
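That four-line summary is itself a behavioral model of the device. A sketch of what Q does on each PT of the clock (names are illustrative):

```python
class JKFlipFlop:
    """What Q does on each positive clock edge, per the JK truth table."""
    def __init__(self, q=0):
        self.q = q

    def clock_pt(self, j, k):
        if j and k:
            self.q ^= 1   # J = K = 1: toggle to the opposite state
        elif j:
            self.q = 1    # J = 1, K = 0: set
        elif k:
            self.q = 0    # J = 0, K = 1: reset
        return self.q     # J = K = 0: no change

ff = JKFlipFlop()
print(ff.clock_pt(1, 0))  # set -> 1
print(ff.clock_pt(1, 1))  # toggle -> 0
print(ff.clock_pt(1, 1))  # toggle -> 1
print(ff.clock_pt(0, 0))  # hold -> 1
```

Tying J and K high turns the device into a toggle stage, which is why the JK flip-flop is the natural building block for the counters mentioned above.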
You can include OR gates in the design to accommodate PRESET and CLEAR as was
done earlier. Figure 8.25b gives the symbol for a JK flip-flop with PR and CLR. Notice that it
is negative-edge-triggered and requires a low PR to set it or a low CLR to reset it.
Second, the symbol appearing next to the Q and Q' outputs is the IEEE
designation for a postponed output. In this case, it means Q does not change state until
the clock makes an NT. In other words, the contents of the master are shifting into the
slave on the clock NT, and at this time Q changes state.
To summarize: The master is set according to J and K while the clock is high; the
contents of the master are then shifted into the slave (Q changes state) when the clock
goes low. This particular flip-flop might be referred to as pulse-triggered, to
distinguish it from the edge-triggered flip-flops previously discussed.
Some of the more popular pulse-triggered flip-flops you might encounter include the
7473, 7476, and 7478. Their more modern, edge-triggered counterparts include the
74LS73A, the 74LS76A, and the 74LS78A.