Using the positive logic system, the logic values 0 and 1 are referred to
simply as “low” and “high.” To implement the threshold-voltage concept, a
range of low and high voltage levels is defined, as shown in Figure 3.1.
The figure gives the minimum voltage, called VSS , and the maximum
voltage, called VDD, that can exist in the circuit. We will assume that VSS is
0 volts, corresponding to electrical ground, denoted Gnd. The voltage VDD
represents the power supply voltage. The most common levels for VDD are
between 1 volt and 5 volts. In this chapter we will mostly use the value
VDD = 5 V. Figure 3.1 indicates that voltages in the range Gnd to V0,max
represent logic value 0. The name V0,max means the maximum voltage level
that a logic circuit must recognize as low. Similarly, the range from V1,min
to VDD corresponds to logic value 1, and V1,min is the minimum voltage
level that a logic circuit must interpret as high. The exact levels of V0,max
and V1,min depend on the particular technology used; a typical example
might set V0,max to 40 percent of VDD and V1,min to 60 percent of VDD. The
range of voltages between V0,max and V1,min is undefined. Logic signals do
not normally assume voltages in this range except in transition from one
logic value to the other.
Metal Oxide Semiconductor Field-Effect Transistor (MOSFET)
Logic circuits are built with transistors. A full treatment of transistor behavior is beyond
the scope of this text; it can be found in electronics textbooks, such as [1] and [2]. For
the purpose of understanding how logic circuits are built, we can assume that a
transistor operates as a simple switch. Figure 3.2a shows a switch controlled by a logic
signal, x. When x is low, the switch is open, and when x is high, the switch is closed.
The most popular type of transistor for implementing a simple switch is the metal oxide
semiconductor field-effect transistor (MOSFET). There are two different types of
MOSFETs, known as n-channel, abbreviated NMOS, and p-channel, denoted PMOS.
Important Note: MOSFET technology has been replaced by FinFET technology in
today's chips. The two are very similar, but there is not yet enough teaching material on
digital circuit design using FinFETs, so we will study the MOSFET for now.
To see how a transistor functions as a switch, and what the difference between a MOSFET
and a FinFET is, watch this video: https://www.youtube.com/watch?v=TXxw1kdF5_Q
NMOS
Figure 3.2b gives a graphical symbol for an NMOS transistor. It has
four electrical terminals, called the source, drain, gate, and substrate.
In logic circuits the substrate (also called body) terminal is connected
to Gnd. We will use the simplified graphical symbol in Figure 3.2c,
which omits the substrate node. There is no physical difference
between the source and drain terminals. They are distinguished in
practice by the voltage levels applied to the transistor; by convention,
the terminal with the lower voltage level is deemed to be the source.
The transistor is controlled by the voltage VG at the gate terminal. If VG is
low, then there is no connection between the source and drain, and we
say that the transistor is turned off. If VG is high, then the transistor is
turned on and acts as a closed switch that connects the source and
drain terminals. In section 3.8.2 we show how to calculate the
resistance between the source and drain terminals when the transistor
is turned on, but for now assume that the resistance is 0.
PMOS
PMOS transistors have the opposite behavior of NMOS
transistors. The former are used to realize the type of switch
illustrated in Figure 3.3a, where the switch is open when the
control input x is high and closed when x is low. A symbol is
shown in Figure 3.3b. In logic circuits the substrate of the
PMOS transistor is always connected to VDD, leading to the
simplified symbol in Figure 3.3c. If VG is high, then the PMOS transistor is
turned off and acts like an open switch. When VG is low, the
transistor is turned on and acts as a closed switch that
connects the source and drain. In the PMOS transistor the
source is the terminal with the higher voltage.
Summary
Figure 3.4 summarizes the typical use of NMOS and PMOS
transistors in logic circuits. An NMOS transistor is turned
on when its gate terminal is high, while a PMOS transistor
is turned on when its gate is low. When the NMOS
transistor is turned on, its drain is pulled down to Gnd,
and when the PMOS transistor is turned on, its drain is
pulled up to VDD. Because of the way the transistors
operate, an NMOS transistor cannot be used to pull its
drain terminal completely up to VDD. Similarly, a PMOS
transistor cannot be used to pull its drain terminal
completely down to Gnd.
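The behavior summarized above can be captured in a minimal switch model. This is a sketch that ignores voltage levels, on-resistance, and the imperfect pull-up/pull-down just described; the function names are illustrative.

```python
# Simplified switch model of MOS transistors (digital behavior only).
def nmos_conducts(gate: int) -> bool:
    """An NMOS transistor conducts (acts as a closed switch) when its gate is high."""
    return gate == 1

def pmos_conducts(gate: int) -> bool:
    """A PMOS transistor conducts (acts as a closed switch) when its gate is low."""
    return gate == 0
```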
The combination of NMOS and PMOS transistors in a
circuit is known as complementary MOS technology
(CMOS). The concept of CMOS circuits is based on
replacing the pull-up device with a pull-up network
(PUN) built using PMOS transistors, such that the
functions realized by the pull-down network (PDN, built
with NMOS transistors) and the PUN are complements of
each other. Then a logic circuit, such as a typical logic
gate, is implemented as indicated in Figure 3.11.
NOT Gate
The simplest example of a CMOS circuit, a NOT
gate, is shown in Figure 3.12. When Vx = 0 V,
transistor T2 is off and transistor T1 is on. This
makes Vf = 5 V, and since T2 is off, no current
flows through the transistors. When Vx = 5 V, T2
is on and T1 is off. Thus Vf = 0 V, and no current
flows because T1 is off.
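The inverter's switch-level behavior can be sketched in code; this is a simplified model, and the function name and structure are illustrative rather than taken from the text.

```python
def cmos_not(x: int) -> int:
    """CMOS inverter: PMOS transistor T1 pulls f up to VDD when x is low;
    NMOS transistor T2 pulls f down to Gnd when x is high."""
    t1_on = (x == 0)  # PMOS conducts on a low gate voltage
    t2_on = (x == 1)  # NMOS conducts on a high gate voltage
    assert t1_on != t2_on  # exactly one transistor conducts: no static current
    return 1 if t1_on else 0
```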
CMOS NAND Gate
Figure 3.13 provides a circuit diagram of a
CMOS NAND gate. It is similar to the NMOS
circuit presented in Figure 3.6
except that the pull-up device has been
replaced by the PUN with two PMOS
transistors connected in parallel. The truth
table in the figure specifies the state of
each of the four transistors for each logic
valuation of inputs x1 and x2. The reader
can verify that the circuit properly
implements the NAND function. Under
static conditions no path exists for current
flow from VDD to Gnd.
The circuit in Figure 3.13 can be derived
from the logic expression that defines the
NAND operation, f = (x1x2)'. This expression
specifies the conditions for which f = 1, and
hence determines the structure of the PUN.
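The complementary PUN/PDN structure of the NAND gate can be sketched as follows; the model is illustrative, not the textbook's notation.

```python
def cmos_nand(x1: int, x2: int) -> int:
    """CMOS NAND: the PDN is two NMOS transistors in series (pulls f low
    only when both inputs are high); the PUN is two PMOS transistors in
    parallel (pulls f high when either input is low)."""
    pdn_on = (x1 == 1) and (x2 == 1)  # series NMOS path to Gnd
    pun_on = (x1 == 0) or (x2 == 0)   # parallel PMOS path to VDD
    assert pdn_on != pun_on           # the two networks are complementary
    return 0 if pdn_on else 1
```

Under static conditions exactly one network conducts, which is why no current flows from VDD to Gnd.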
CMOS NOR gate
The circuit for a CMOS NOR gate is
derived from the logic expression that
defines the NOR operation, f = (x1 + x2)'.
Since f = 1 only if both x1 and x2 have
the value 0, the PUN consists of
two PMOS transistors connected in
series. The PDN, which realizes
f' = x1 + x2, has two NMOS transistors
in parallel, leading to the circuit shown
in Figure 3.14.
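The NOR gate's networks are the duals of the NAND gate's, which can be sketched in the same illustrative style:

```python
def cmos_nor(x1: int, x2: int) -> int:
    """CMOS NOR: the PUN is two PMOS transistors in series (f is pulled
    high only when both inputs are low); the PDN is two NMOS transistors
    in parallel (f is pulled low when either input is high)."""
    pun_on = (x1 == 0) and (x2 == 0)  # series PMOS path to VDD
    pdn_on = (x1 == 1) or (x2 == 1)   # parallel NMOS path to Gnd
    assert pun_on != pdn_on           # complementary networks
    return 1 if pun_on else 0
```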
A CMOS AND gate
is built by connecting a NAND gate to an inverter, as illustrated in Figure 3.15. Similarly, an OR gate is
constructed with a NOR gate followed by a NOT gate.
Do Examples 3.1 and 3.2.
Chips
An approach used widely in the past was to connect multiple fixed-function chips, each containing
only a few logic gates. A wide assortment of chips, with different types of logic gates, is available for
this purpose.
As an example of how a logic circuit can
be implemented using 7400-series chips,
consider the function f = x1x2' + x2x3, which
is shown in the form of a logic diagram in
Figure 2.30. A NOT gate is required to
produce x2', as well as 2 two-input AND
gates and a two-input OR gate. Figure 3.22
shows three 7400-series chips that can be
used to implement the function.
The function provided by each standard series is fixed and
cannot be tailored to suit a particular design situation. This
fact, coupled with the limitation that each chip contains only a
few logic gates, makes these chips inefficient for building large
logic circuits. It is possible to manufacture chips that contain
relatively large amounts of logic circuitry with a structure that is
not fixed; such chips are called programmable logic devices (PLDs).
One commonly used type of PLD is the programmable logic array (PLA).
The general structure of a PLA is depicted in Figure 3.25.
Based on the idea that logic functions can be realized in
sum-of-products form, a PLA comprises a collection of
AND gates that feeds a set of OR gates. As shown in the
figure, the PLA’s inputs x1, . . . , xn pass through a set of
buffers (which provide both the true value and complement
of each input) into a circuit block called an AND plane, or
AND array. The AND plane produces a set of product
terms P1, . . . , Pk . Each of these terms can be configured
to implement any AND function of x1, . . . , xn. The product
terms serve as the inputs to an OR plane, which produces
the outputs f1, . . . , fm. Each output can be configured to
realize any sum of P1, . . . , Pk and hence any sum-of-
products function of the PLA inputs.
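The AND-plane/OR-plane idea can be sketched as a small evaluator. The configuration encoding below (one required-literal map per product term) is an illustrative assumption, not an actual device format.

```python
def pla_eval(inputs, and_plane, or_plane):
    """Evaluate a PLA configuration.
    and_plane: one dict per product term, mapping input index -> required
    value (1 = true literal, 0 = complemented literal; omitted = unused).
    or_plane: one list per output, naming the product terms it sums."""
    products = [all(inputs[k] == v for k, v in term.items())
                for term in and_plane]
    return [int(any(products[j] for j in fed)) for fed in or_plane]

# Hypothetical configuration realizing f1 = x1·x2' + x2·x3 (inputs 0-indexed):
and_plane = [{0: 1, 1: 0}, {1: 1, 2: 1}]  # P1 = x1 x2',  P2 = x2 x3
or_plane = [[0, 1]]                        # f1 = P1 + P2
```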
For implementation of circuits that require more inputs
and outputs, either multiple PLAs or PALs can be
employed or else a more sophisticated type of chip,
called a complex programmable logic device (CPLD),
can be used. A CPLD comprises multiple circuit blocks
on a single chip, with internal wiring resources to
connect the circuit blocks. Each circuit block is similar
to a PLA or a PAL; we will refer to the circuit blocks as
PAL-like blocks. An example of a CPLD is given in
Figure 3.32. It includes four PAL-like blocks that are
connected to a set of interconnection wires. Each PAL-
like block is also connected to a subcircuit labeled I/O
block, which is attached to a number of the chip’s input
and output pins. Figure 3.33 shows an example of the
wiring structure and the connections to a PAL-like block
in a CPLD. The PAL-like block includes 3 macrocells
(real CPLDs typically have about 16 macrocells in a
PAL-like block), each consisting of a four-input OR gate
(real CPLDs usually provide between 5 and 20 inputs
to each OR gate).
To implement larger circuits, it is convenient to use a
different type of chip that has a larger logic capacity.
A field-programmable gate array (FPGA) is a
programmable logic device that supports
implementation of relatively large logic circuits;
FPGAs can be used to implement circuits of more
than a million equivalent gates in size. FPGAs are
quite different from SPLDs and CPLDs because
FPGAs do not contain AND or OR planes. Instead,
FPGAs provide logic blocks for implementation of
the required functions. The general structure of an
FPGA is illustrated in Figure 3.35a.
Each logic block in an FPGA typically has a small
number of inputs and outputs. A variety of FPGA
products are on the market, featuring different types
of logic blocks. The most commonly used logic block
is a lookup table (LUT), which contains storage cells
that are used to implement a small logic function.
Each cell is capable of holding a single logic value,
either 0 or 1. The stored value is produced as the
output of the storage cell. LUTs of various sizes may
be created, where the size is defined by the number
of inputs. Figure 3.36a shows the structure of a small
LUT. It has two inputs, x1 and x2, and one output, f. It
is capable of implementing any logic function of two
variables. Because a two-variable truth table has four
rows, this LUT has four storage cells. One cell
corresponds to the output value in each row of the
truth table. The input variables x1 and x2 are used as
the select inputs of three multiplexers, which,
depending on the valuation of x1 and x2, select the
content of one of the four storage cells as the output
of the LUT.
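The three-multiplexer tree of the two-input LUT can be sketched in code. The select-signal ordering here, x2 on the first level and x1 on the final level, is an assumption about the figure.

```python
def lut2(cells, x1, x2):
    """Two-input LUT: cells[i] holds the truth-table output for row i,
    where i is the valuation (x1 x2) read as a binary number.
    Three 2-to-1 multiplexers select one storage cell as the output f."""
    # First-level multiplexers, both selected by x2:
    upper = cells[1] if x2 else cells[0]   # rows with x1 = 0
    lower = cells[3] if x2 else cells[2]   # rows with x1 = 1
    # Final multiplexer, selected by x1:
    return lower if x1 else upper

# Programming the cells with XOR's truth table makes the LUT compute XOR:
xor_cells = [0, 1, 1, 0]
```

Any two-variable function can be realized by loading its four truth-table values into the cells.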
To see how a logic function can be realized in the two-input LUT, consider the truth table in Figure 3.36b.
The function f1 from this table can be stored in the LUT as illustrated in Figure 3.36c. The arrangement of
multiplexers in the LUT correctly realizes the function f1. When x1 = x2 = 0, the output of the LUT is driven
by the top storage cell, which represents the entry in the truth table for x1x2 = 00. Similarly, for all
valuations of x1 and x2, the logic value stored in the storage cell corresponding to the entry in the truth
table chosen by the particular valuation appears on the LUT output. Providing access to the contents of
storage cells is only one way in which multiplexers can be used to implement logic functions.
Fan-out
Figure 3.48 illustrated timing delays for one
NOT gate driving another. In real circuits each
logic gate may be required to drive several
others. The number of other gates that a
specific gate drives is called its fan-out. An
example of fan-out is depicted in Figure
3.55a, which shows an inverter N1 that drives
the inputs of n other inverters. Each of the
other inverters contributes to the total
capacitive loading on node f. In part (b) of the
figure, the n inverters are represented by one
large capacitor Cn. For simplicity, assume that
each inverter contributes a capacitance C and
that Cn = n × C. The propagation delay
increases in direct proportion to n. Figure
3.55c illustrates how n affects the propagation
delay. It assumes that a change from logic
value 1 to 0 on signal x occurs at time 0. One
curve represents the case where n = 1, and
the other curve corresponds to n = 4.
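The proportionality can be stated as a first-order estimate; the RC model and the unit constants below are illustrative assumptions.

```python
def propagation_delay(n, r_driver=1.0, c_per_gate=1.0):
    """First-order delay estimate for an inverter driving n gates:
    the total load is Cn = n * C, and the delay grows as R * Cn,
    i.e., in direct proportion to the fan-out n."""
    return r_driver * (n * c_per_gate)
```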
Buffer
In circuits in which a logic gate has to drive a large
capacitive load, buffers are often used to improve
performance. A buffer is a logic gate with one input,
x, and one output, f , which produces f = x. The
simplest implementation of a buffer uses two
inverters, as shown in Figure 3.56a. Buffers can be
created with different amounts of drive capability,
depending on the sizes of the transistors (see
Figure 3.49). In general, because they are used for
driving higher-than-normal capacitive loads, buffers
have transistors that are larger than those in typical
logic gates. The graphical symbol for a noninverting
buffer is given in Figure 3.56b.
Tri-state Buffers
In section 3.6.2 we mentioned that a type of buffer called a tri-state buffer is included in some standard
chips and in PLDs. A tri-state buffer has one input, x, one output, f , and a control input, called enable, e.
The graphical symbol for a tri-state buffer is given in Figure 3.57a. The enable input is used to determine
whether or not the tri-state buffer produces an output signal, as illustrated in Figure 3.57b. When e = 0, the
buffer is completely disconnected from the output f. When e = 1, the buffer drives the value of x onto f,
causing f = x. This behavior is described in truth-table form in part (c) of the figure. For the two rows of the
table where e = 0, the output is denoted by the logic value Z, which is called the high-impedance state. The
name tri-state derives from the fact that there are two normal states for a logic signal, 0 and 1, and Z
represents a third state that produces no output signal.
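The truth table in part (c) can be sketched as code, with the string 'Z' standing in for the high-impedance state:

```python
def tristate(x, e):
    """Tri-state buffer: f = x when the enable e is 1; otherwise the
    output is in the high-impedance state, modeled here as 'Z'."""
    return x if e == 1 else 'Z'
```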
Figure 3.58 shows several types of tri-state buffers. The buffer in part (b) has the same behavior as the
buffer in part (a), except that when e = 1, it produces f = x'. Part (c) of the figure gives a tri-state buffer for
which the enable signal has the opposite behavior; that is, when e = 0, f = x, and when e = 1, f = Z. The
term often used to describe this type of behavior is to say that the enable is active low. The buffer in
Figure 3.58d also features an active-low enable, and it produces f = x' when e = 0.
So far we have encountered AND, OR, NOT, NAND,
and NOR gates as the basic elements from which
logic circuits can be constructed. There is another
basic element that is very useful in practice,
particularly for building circuits that perform arithmetic
operations, as we will see in Chapter 5. This element
realizes the Exclusive-OR function defined in Figure
3.61a. The truth table for this function is similar to the
OR function except that f = 0 when both inputs are 1.
Because of this similarity, the function is called
Exclusive-OR, which is commonly abbreviated as
XOR. The graphical symbol for a gate that
implements XOR is given in part (b) of the figure. The
XOR operation is usually denoted with the ⊕ symbol.
It can be realized in the sum-of-products form as

f = x1 ⊕ x2 = x1'x2 + x1x2'

which leads to the circuit in Figure 3.61c.
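The sum-of-products realization of XOR, f = x1'x2 + x1x2', can be checked directly in code:

```python
def xor_sop(x1, x2):
    """XOR in sum-of-products form: f = x1'x2 + x1x2'."""
    return ((1 - x1) & x2) | (x1 & (1 - x2))

# The SOP form agrees with the built-in exclusive-or on all valuations:
for a in (0, 1):
    for b in (0, 1):
        assert xor_sop(a, b) == (a ^ b)
```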
Solve Selected Problems in:
https://www.ee.ryerson.ca/~courses/coe328/trans.htm
Two-Variable K-Map
In Chapter 2 we showed that algebraic
manipulation can be used to find the lowest-cost
implementations of logic functions. The purpose
of that chapter was to introduce the basic
concepts in the synthesis process. The reader is
probably convinced that it is easy to derive a
straightforward realization of a logic function in a
canonical form, but it is not at all obvious how to
choose and apply the theorems and properties of
section 2.5 to find a minimum-cost circuit.
Indeed, the algebraic manipulation is rather
tedious and quite impractical for functions of
many variables. The Karnaugh map is not just
useful for combining pairs of minterms; as we
will see in several larger examples, the
Karnaugh map can be used directly to derive a
minimum-cost circuit for a logic function.
A Karnaugh map for a two-variable function is given in
Figure 4.3. It corresponds to the function f of Figure 2.15.
The value of f for each valuation of the variables x1 and x2
is indicated in the corresponding cell of the map. Because
a 1 appears in both cells of the bottom row and these cells
are adjacent, there exists a single product term that can
cause f to be equal to 1 when the input variables have the
values that correspond to either of these cells. To indicate
this fact, we have circled the cell entries in the map.
Rather than using the combining property formally, we can
derive the product term intuitively. Both of the cells are
identified by x2 = 1, but x1 = 0 for the left cell and x1 = 1 for
the right cell. Thus if x2 = 1, then f = 1 regardless of
whether x1 is equal to 0 or 1. The product term
representing the two cells is simply x2. Similarly, f = 1 for
both cells in the first column. These cells are identified by
x1 = 0. Therefore, they lead to the product term x1'.
Since this takes care of all instances where f = 1, it follows
that the minimum-cost realization of the function is

f = x2 + x1'
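The result f = x2 + x1' can be verified against the map's cell values. The truth table below is inferred from the description: 1s in both bottom-row cells and both left-column cells.

```python
# Cells of the two-variable map in Figure 4.3, keyed by (x1, x2):
truth = {(0, 0): 1, (0, 1): 1, (1, 0): 0, (1, 1): 1}

# Check that the two circled product terms, x2 and x1', together cover
# exactly the cells where f = 1:
for (x1, x2), f in truth.items():
    assert (x2 | (1 - x1)) == f
```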
Evidently, to find a minimum-cost implementation of a given function, it is necessary to find the smallest
number of product terms that produce a value of 1 for all cases where f = 1. Moreover, the cost of these
product terms should be as low as possible. Note that a product term that covers two adjacent cells is
cheaper to implement than a term that covers only a single cell. For our example, once the two cells in the
bottom row have been covered by the product term x2, only one cell (top left) remains. Although it could
be covered by the term x1'x2', it is better to combine the two cells in the left column to produce the product
term x1' because this term is cheaper to implement.
Three-Variable Map
A three-variable Karnaugh map is constructed by
placing 2 two-variable maps side by side. Figure
4.4 shows the map and indicates the locations of
minterms in it. In this case each valuation of x1
and x2 identifies a column in the map, while the
value of x3 distinguishes the two rows. To ensure
that minterms in the adjacent cells in the map can
always be combined into a single product term,
the adjacent cells must differ in the value of only
one variable. Thus the columns are identified by
the sequence of (x1, x2) values of 00, 01, 11, and
10, rather than the more obvious 00, 01, 10, and
11. This makes the second and third columns
different only in variable x1. Also, the first and the
fourth columns differ only in variable x1, which
means that these columns can be considered as
being adjacent. The reader may find it useful to
visualize the map as a rectangle folded into a
cylinder where the left and the right edges in
Figure 4.4b are made to touch.
Note:
It is also possible to have a group of eight 1s in a three-variable map. This is the trivial case
where f = 1 for all valuations of input variables; in other words, f is equal to the constant 1.
The Karnaugh map provides a simple mechanism for generating the product terms that
should be used to implement a given function. A product term must include only those
variables that have the same value for all cells in the group represented by this term. If the
variable is equal to 1 in the group, it appears uncomplemented in the product term; if it is
equal to 0, it appears complemented. Each variable that is sometimes 1 and sometimes 0 in
the group does not appear in the product term.
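This rule can be sketched as code: keep only the variables whose value is constant over the group. The function name and the string notation for complemented literals are illustrative.

```python
def product_term(group, names=("x1", "x2", "x3")):
    """Return the product term covering a group of cell valuations.
    A variable appears only if it holds the same value in every cell:
    uncomplemented if that value is 1, complemented (') if it is 0;
    a variable that is sometimes 1 and sometimes 0 is dropped."""
    literals = []
    for i, name in enumerate(names):
        values = {cell[i] for cell in group}
        if values == {1}:
            literals.append(name)
        elif values == {0}:
            literals.append(name + "'")
    return "".join(literals) or "1"  # a group of all eight cells gives the constant 1
```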
Four-Variable Map
A four-variable map is constructed by placing 2
three-variable maps together to create four rows
in the same fashion as we used 2 two-variable
maps to form the four columns in a three-
variable map. Figure 4.6 shows the structure of
the four-variable map and the location of
minterms. We have included in this figure
another frequently used way of designating the
rows and columns. As shown in blue, it is
sufficient to indicate the rows and columns for
which a given variable is equal to 1. Thus x1 = 1
for the two right-most columns, x2 = 1 for the two
middle columns, x3 = 1 for the bottom two rows,
and x4 = 1 for the
two middle rows.
Practice the examples in Figure 4.7 for the 4-variable K-map.
The 5-variable K-map is very similar to the 4-variable K-map.
Please read the 5-variable K-map section. You will need it in the labs.
Important:
Read Section 4.2, Strategy for Minimization.
Figure 4.13 depicts the same function as Figure
4.9. There are three maxterms that must
be covered: M4, M5, and M6. They can be covered
by the two sum terms shown in the figure, leading to
the following implementation:

f = (x1' + x2)(x1' + x3)

A circuit corresponding to this expression has
two OR gates and one AND gate, with two inputs
for each gate. Its cost is greater than the cost of
the equivalent SOP implementation derived in
Figure 4.9, which requires only one OR gate and
one AND gate.
The function from Figure 4.10 is reproduced in Figure 4.14. The maxterms for which f = 0 can be covered
as shown, leading to the expression
In digital systems it often happens that certain input conditions can never occur. For example, suppose that
x1 and x2 control two interlocked switches such that both switches cannot be closed at the same time. Thus
the only three possible states of the switches are that both switches are open or that one switch is open and
the other switch is closed. Namely, the input valuations (x1, x2) = 00, 01, and 10 are possible, but 11 is
guaranteed not to occur. Then we say that (x1, x2) = 11 is a don’t-care condition, meaning that a circuit with x1
and x2 as inputs can be designed by ignoring this condition. A function that has don’t-care condition(s) is said
to be incompletely specified.
Important:
Don’t-care conditions, or don’t-cares for short, can be used to advantage in the design of logic
circuits. Since these input valuations will never occur, the designer may assume that the
function value for these valuations is either 1 or 0, whichever is more useful in trying to find a
minimum-cost implementation.
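A hypothetical check of this idea, using the interlocked-switch scenario above; the specific function values chosen here are an illustrative assumption.

```python
# f is specified only for the valuations that can occur; (1, 1) is a
# don't-care because the interlocked switches can never both be closed.
required = {(0, 0): 0, (0, 1): 1, (1, 0): 1}

# Choosing the don't-care value to be 1 lets f collapse to a single OR
# gate, f = x1 + x2, which still matches every required valuation:
for (x1, x2), f in required.items():
    assert (x1 | x2) == f
# Choosing it to be 0 instead would force the costlier XOR form,
# f = x1'x2 + x1x2'.
```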
Sections 4.5 to 4.11 are excluded; there is no need to review them for the course.
Read Section 4.12. It is very useful for the labs.
Do Example 4.25.
Solve Selected Problems in:
https://www.ee.ryerson.ca/~courses/coe328/materials.htm