EENG 354
The largest value reached in a half cycle is called the peak value or the maximum value or the
amplitude of the waveform. Such values are represented by Vm, Im, Em, etc.
A peak-to-peak value is the difference between the maximum and minimum values in a cycle. The
time taken to complete one cycle is called the period or the periodic time, T (seconds), of the
waveform.
The number of cycles completed in one second is called the frequency, f (Hertz, Hz).
Logic Levels
Before examining digital signals, we must define logic levels.
A logic level is a voltage level that represents a defined digital state.
Logic HIGH: The higher of two voltages, typically 5 volts
Logic LOW: The lower of two voltages, typically 0 volts
A digital signal is a signal that represents data as a sequence of discrete values; at any given time it
can only take on one of a finite number of values.
Digital signals are not continuous.
They use specific values to represent information.
Logic Gates
A logic gate is an electronic circuit which makes logic decisions.
It has one output and one or more inputs.
The output signal appears only for certain combinations of input signals.
Logic gates are the basic building blocks from which most digital systems are built.
They implement the hardware logic function based on the logical algebra developed by George
Boole which is called Boolean algebra in his honour.
A unique characteristic of the Boolean algebra is that variables used in it can assume only one of
the two values i.e. either 0 or 1. Hence, every variable is either a 0 or a 1.
These gates are available today in the form of various IC families. The most popular families are:
transistor-transistor logic (TTL), emitter-coupled logic (ECL), metal-oxide-semiconductor (MOS)
and complementary metal-oxide-semiconductor (CMOS).
Here, we will consider the OR, AND, NOT, NOR, NAND, exclusive OR (XOR) and exclusive
NOR (XNOR) gates along with their truth tables.
Positive and Negative Logic
In computing systems, the number symbols 0 and 1 represent two possible states of a circuit or
device.
It makes no difference if these two states are referred to as ON and OFF, CLOSED and OPEN,
HIGH and LOW, PLUS and MINUS or TRUE and FALSE, depending on the circumstances. The main
point is that they must be symbolized by two opposite conditions.
In positive logic, a 1 represents
1. an ON circuit 2. a CLOSED switch 3. a HIGH voltage 4. a PLUS sign 5. a TRUE statement
Consequently, a 0 represents
1. an OFF circuit 2. an OPEN switch 3. a LOW voltage 4. a MINUS sign 5. a FALSE
statement.
Truth Table
A truth table lists all possible combinations of input binary variables and the corresponding outputs
of a logic system.
The logic system output can be found from the logic expression, often referred to as the Boolean
expression, that relates the output to the inputs of that logic system.
When the number of input binary variables is only one, then there are only two possible
inputs, i.e. ‘0’ and ‘1’. If the number of inputs is two, there can be four possible input
combinations, i.e. 00, 01, 10 and 11.
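The list of input combinations (2ⁿ rows for n variables) can be generated mechanically. A minimal Python sketch (the function name is ours, purely illustrative):

```python
from itertools import product

def input_combinations(n):
    """Return all 2**n input combinations for n binary variables,
    in the order they appear down a truth table."""
    return list(product([0, 1], repeat=n))

# One variable gives two rows; two variables give four rows: 00, 01, 10, 11.
print(input_combinations(1))
print(input_combinations(2))
```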
OR Gate
An OR gate performs an ORing operation on two or more logic variables.
The OR operation on two independent logic variables A and B is written as Y = A+B and reads as
Y equals A OR B and not as A plus B.
The output of an OR gate is LOW only when all of its inputs are LOW.
For all other possible input combinations, the output is HIGH. This statement when interpreted for
a positive logic system means the following.
The output of an OR gate is a logic ‘0’ only when all of its inputs are at logic ‘0’. For all other
possible input combinations, the output is a logic ‘1’. Figure 1 shows the circuit symbol and the
truth table of a two-input OR gate. The operation of a two-input OR gate is explained by the logic
expression
Y=A+B
Figure 1: Two-input OR gate.
AND Gate
The output of an AND gate is HIGH only when all of its inputs are in the HIGH state. In
all other cases, the output is LOW.
When interpreted for a positive logic system, this means that the output of the AND gate is a logic
‘1’ only when all of its inputs are in logic ‘1’ state. In all other cases, the output is logic ‘0’. The
logic symbol and truth table of a two-input AND gate are shown in Figs 2 (a) and (b) respectively.
The AND operation on two independent logic variables A and B is written as Y = A.B and reads as
Y equals A AND B and not as A multiplied by B. Here, A and B are input logic variables and Y is
the output. An AND gate performs an ANDing operation on two or more logic variables.
NOT Gate
A NOT gate is a one-input, one-output logic circuit whose output is always the complement of the
input.
That is, a LOW input produces a HIGH output, and vice versa.
When interpreted for a positive logic system, a logic ‘0’ at the input produces a logic ‘1’ at the
output, and vice versa.
It is also known as a ‘complementing circuit’ or an ‘inverting circuit’. Figure 3 shows the circuit
symbol and the truth table.
The NOT operation on a logic variable X is denoted as X′. That is, if X is the input to a
NOT circuit, then its output Y is given by Y = X′ and reads as Y equals NOT X.
Thus, if X = 0, Y = 1 and if X = 1, Y = 0.
Figure 3 (a) Circuit symbol of a NOT circuit and (b) the truth table of a NOT circuit.
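In a positive logic system, the OR, AND and NOT operations above can be modelled with one-bit values and Python's bitwise operators. A small illustrative sketch (the function names are ours, not from any standard library):

```python
def OR(a, b):
    """Logic '0' only when all inputs are '0' (positive logic)."""
    return a | b

def AND(a, b):
    """Logic '1' only when all inputs are '1'."""
    return a & b

def NOT(a):
    """Complement: a '0' input gives a '1' output, and vice versa."""
    return 1 - a

# Combined truth table for the two-input gates.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "| A+B =", OR(a, b), "| A.B =", AND(a, b))
```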
EXCLUSIVE-OR Gate
The EXCLUSIVE-OR gate is commonly written as the EX-OR gate.
Figures 4(a) and (b) respectively show the logic symbol and truth table of a two-input EX-OR gate.
As can be seen from the truth table, the output of an EX-OR gate is a logic ‘1’ when the inputs are
unlike and a logic ‘0’ when the inputs are like.
Although EX-OR gates are available in integrated circuit form only as two-input gates, unlike other
gates which are available with multiple inputs as well, multiple-input EX-OR logic functions can be
implemented using more than one two-input gate.
The truth table of a multiple-input EX-OR function can be expressed as follows. The output of a
multiple-input EX-OR logic function is a logic ‘1’ when the number of 1s in the input sequence is
odd and a logic ‘0’ when the number of 1s in the input sequence is even, including zero. That is, an
all 0s input sequence also produces a logic ‘0’ at the output. The output of a two-input EX-OR gate
is expressed by
Y = A ⊕ B = A′B + AB′
Figure 4 (a) Circuit symbol of a two-input EXCLUSIVE-OR gate, (b) the truth table of a
two-input EXCLUSIVE-OR gate
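The odd-parity behaviour of a multiple-input EX-OR function is easy to check in code. A Python sketch, cascading two-input gates exactly as described above (names are illustrative):

```python
from functools import reduce

def xor2(a, b):
    """Two-input EX-OR: logic '1' when the inputs are unlike."""
    return a ^ b

def multi_xor(bits):
    """Multiple-input EX-OR built by cascading two-input gates.
    The output is 1 when the number of 1s in the input is odd."""
    return reduce(xor2, bits)

print(multi_xor([0, 0, 0]))  # all-0s input -> 0 (zero 1s: even)
print(multi_xor([1, 1, 0]))  # two 1s      -> 0
print(multi_xor([1, 1, 1]))  # three 1s    -> 1
```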
NAND Gate
NAND stands for NOT AND.
An AND gate followed by a NOT circuit makes a NAND gate
[Fig.5 (a)]. Figure 5 (b) shows the circuit symbol of a two-input NAND gate. The
truth table of a NAND gate is obtained from the truth table of an AND gate by
complementing the output entries [Fig.5 (c)].
The output of a NAND gate is a logic ‘0’ when all its inputs are a logic ‘1’. For all other input
combinations, the output is a logic ‘1’. NAND gate operation is logically expressed as
Y = (A.B)′
Figure 5 (a) Two-input NAND implementation using an AND gate and a NOT circuit, (b)
the circuit symbol of a two-input NAND gate and (c) the truth table of a two-input
NAND gate.
NOR Gate
NOR stands for NOT OR.
An OR gate followed by a NOT circuit makes a NOR gate [Fig. 6(a)].
The truth table of a NOR gate is obtained from the truth table of an OR gate by complementing the
output entries.
The output of a NOR gate is a logic ‘1’ when all its inputs are logic ‘0’. For all other input
combinations, the output is a logic ‘0’. The output of a two-input NOR gate is logically expressed
as
Y = (A + B)′
Figure 6 (a) Two-input NOR implementation using an OR gate and a NOT circuit, (b) the
circuit symbol of a two-input NOR gate and (c) the truth table of a two-input NOR gate.
EXCLUSIVE-NOR Gate
EXCLUSIVE-NOR (commonly written as EX-NOR or XNOR) means NOT of EX-OR, i.e. the
logic gate that we get by complementing the output of an EX-OR gate.
Figure below shows its circuit symbol along with its truth table.
The truth table of an EX-NOR gate is obtained from the truth table of an EX-OR gate by
complementing the output entries.
The output of a two-input EX-NOR gate is a logic ‘1’ when the inputs are like and a logic ‘0’ when
they are unlike.
In general, the output of a multiple-input EX-NOR logic function is a logic ‘0’ when the number of
1s in the input sequence is odd and a logic ‘1’ when the number of 1s in the input sequence is even
including zero.
That is, an all 0s input sequence also produces a logic ‘1’ at the output.
Universal Gates
OR, AND and NOT gates are the three basic logic gates as they together can be used to construct
the logic circuit for any given Boolean expression.
NOR and NAND gates have the property that each can individually be used to implement in
hardware a logic circuit corresponding to any given Boolean expression.
That is, it is possible to use either only NAND gates or only NOR gates to implement any Boolean
expression.
This is so because a combination of NAND gates or a combination of NOR gates can be used to
perform functions of any of the basic logic gates. It is for this reason that NAND and NOR gates
are universal gates.
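The universality of the NAND gate can be demonstrated directly. The illustrative Python sketch below builds NOT, AND and OR from two-input NANDs and checks them against the basic operations:

```python
def NAND(a, b):
    return 1 - (a & b)

def NOT_n(a):
    """NOT from a single NAND with both inputs tied together."""
    return NAND(a, a)

def AND_n(a, b):
    """AND = NAND followed by a NAND-based inverter."""
    return NOT_n(NAND(a, b))

def OR_n(a, b):
    """OR via De Morgan: A + B = NOT(NOT A . NOT B)."""
    return NAND(NOT_n(a), NOT_n(b))

for a in (0, 1):
    for b in (0, 1):
        assert AND_n(a, b) == (a & b)
        assert OR_n(a, b) == (a | b)
print("NOT, AND and OR all realised with NAND gates only")
```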
As an illustration, Fig. 7 a, b, c shows how two-input NAND gates can be used to construct
a NOT circuit
[Fig. 7(a)], a two-input AND gate [Fig. 7(b)] and a two-input OR gate [Fig. 7(c)]. Fig. 8 shows
the same using NOR gates. Understanding the conversion of NAND to OR and NOR
to AND requires the use of De Morgan’s theorems, which are discussed under Boolean
algebra.
Boolean Algebra
Variables used in Boolean algebra have a unique property, i.e. they can assume only one of the two
possible values, 0 and 1.
Each of the variables used in a logical or Boolean equation can assume only the value 0 or 1. For
example, in the logical equation A + B = C, each of the three variables A, B and C can have only
the value 0 or 1. The reader should take careful note of this point for an easy understanding of the
laws of Boolean algebra.
1. OR Laws
Law 1. A + 0 =A
Law 2. A +1=1
Law 3. A+A=A
Law 4. A + A′ = 1
2. AND Laws
Law 5. A .0=0
Law 6. A .1=A
Law 7. A .A =A
Law 8. A . A′ = 0
3. Laws of Complementation
Law 9. 0′ = 1
Law 10. 1′ = 0
Law 11. if A = 0, then A′ = 1
Law 12. if A = 1, then A′ = 0
Law 13. (A′)′ = A
4. Commutative Laws
These laws allow change in the position of variables in OR and AND expressions.
Law 14. A +B =B +A
Law 15. A.B = B.A
These two laws express the fact that the order in which a combination of terms is performed
does not affect the final result of the combination.
5. Associative Laws
These laws allow removal of brackets from logical expressions and regrouping of variables.
Law 16. A + (B + C) = (A + B) + C
Law 17. (A + B) + (C + D) = A + B + C + D
Law 18. A.(B.C) = (A.B).C
6. Distributive Laws
These laws permit factoring or multiplying out of an expression.
Law 19. A (B + C) = A B + AC
Law 20. A + BC = (A + B) (A + C)
7. Absorptive Laws
These enable us to reduce a complicated logic expression to a simpler form by absorbing
some of the terms into existing terms.
Law 22. A+AB=A
Law 23. A.(A + B) = A
The above laws can be used to prove any given Boolean identity and also for simplifying
complicated expressions.
Example: Determine the logic expression for the output Y, from the truth table shown.
Simplify and sketch the logic circuit for the simplified expression.
Y = (A + B)(A + C)
= AA + AC + AB + BC
= A + AC + AB + BC
= A + AB + AC + BC
= A(1 + B) + AC + BC
= A + AC + BC
= A(1 + C) + BC
= A + BC
Hence, (A + B)(A + C) = A + BC.
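Because every variable can only be 0 or 1, an identity like this can also be verified exhaustively. A short Python check of law 20 over all eight input combinations:

```python
from itertools import product

# Check the identity (A + B)(A + C) = A + BC for every input combination.
for A, B, C in product((0, 1), repeat=3):
    lhs = (A | B) & (A | C)
    rhs = A | (B & C)
    assert lhs == rhs, (A, B, C)
print("(A + B)(A + C) = A + BC holds for all inputs")
```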
De Morgan’s Theorems
These two theorems (or rules) are a great help in simplifying complicated logical
expressions. The theorems can be stated as under:
Law 1. (A + B)′ = A′ . B′
Law 2. (A . B)′ = A′ + B′
The first statement says that the complement of a sum equals the product of complements.
The second statement says that the complement of a product equals the sum of the complements.
In fact, they allow transformation from a sum-of-products form to a product-of-sums form, and vice versa.
As seen from the above two laws, the procedure required for taking out an expression from
under a NOT sign is as follows:
1. complement the given expression i.e., remove the overall NOT sign
2. change all the ANDs to ORs and all the ORs to ANDs.
3. complement or negate all individual variables.
(A + BC)′ = A′ . (BC)′
= A′ (B′ + C′)
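Both laws can be confirmed by exhausting the four input combinations. An illustrative Python check:

```python
from itertools import product

NOT = lambda x: 1 - x

for A, B in product((0, 1), repeat=2):
    # Law 1: the complement of a sum equals the product of the complements.
    assert NOT(A | B) == NOT(A) & NOT(B)
    # Law 2: the complement of a product equals the sum of the complements.
    assert NOT(A & B) == NOT(A) | NOT(B)
print("both De Morgan laws verified")
```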
All Boolean expressions, regardless of their form, can be converted into either of the two following
standard forms
1. Sum-of-products (SOP) form, and
2. Product-of-sums (POS) form.
The standardization of Boolean expressions makes their evaluation, simplification and
implementation much more systematic and easier.
Now we shall discuss these two standard forms in more detail.
Sometimes, it is convenient to define the set of variables contained in the expression (in either
complemented or uncomplemented form) as a domain.
For example, the domain of the expression AB′ + AB is the set of variables A and B. Similarly, the
domain of the expression A′BC + ABC + AB′CD is the set of variables A, B, C and D.
Any logic expression can be changed into SOP form by applying Boolean algebra laws and
theorems. For example, the expression A (BC + D) can be converted to SOP form by applying the
distributive law:
A (BC + D) = ABC + AD
A standard SOP expression is defined as an expression in which all the variables in the domain
appear in each product term.
As stated below, a nonstandard SOP expression is converted into standard form using
Boolean algebra law 4: A + A′ = 1, i.e. a variable added to its complement equals 1.
Step 1. Multiply each nonstandard product term by a term made up of the sum of a
missing variable and its complement. This results in two product terms. It is possible
because we know that we can multiply anything by 1 without changing its value.
Step 2. Repeat step 1 until all resulting product terms contain all variables in the domain
in either complemented or uncomplemented form. Note that in converting a product
term to a standard form, the number of product terms is doubled for each missing
variable.
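The two steps above can be sketched in code. In the illustrative Python below, a product term is a dict mapping each variable to 1 (true form) or 0 (complemented form), and every missing variable doubles the number of terms, as Step 2 notes:

```python
def expand_term(term, domain):
    """Expand a product term (dict var -> 1 for true, 0 for complemented)
    into standard terms that cover every variable in the domain.
    Each missing variable doubles the number of terms (v + v')."""
    terms = [dict(term)]
    for v in domain:
        if v not in term:
            terms = [dict(t, **{v: bit}) for t in terms for bit in (0, 1)]
    return terms

# A.B in the domain {A, B, C} expands to A.B.C' and A.B.C.
print(expand_term({"A": 1, "B": 1}, ["A", "B", "C"]))
```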
It may be carefully noted that a POS expression can contain a single-variable term, as in A′ (A + B′
+ C) (B + C′ + D′).
In POS expression, a single overbar cannot extend over more than one variable, although more than
one variable in a term can have an overbar.
A standard POS expression is defined as an expression in which all the variables in the domain
appear in each sum term.
Each sum term in a POS expression that does not contain all the variables in the domain can be
expanded to standard form to include all variables in the domain and their complements.
As stated below:
a nonstandard POS expression is converted into a standard form using Boolean algebra Law 8:
A . A′ = 0, i.e. a variable multiplied by its complement equals 0.
Step 1. Add to each nonstandard sum term a term made up of the product of a
missing variable and its complement. This results in two sum terms. This is
possible because we know that we can add 0 to anything without changing its
value.
Step 2. Apply law 20, i.e. A + BC = (A + B) (A + C)
Step 3. Repeat Step 1 until all resulting sum terms contain all variables in the
domain in either complemented or uncomplemented form.
For example, suppose we want to convert the Boolean expression (A′ + B + C)(A′ + C′ + D)(A + B′ + C + D)
into a standard POS form. Then, following the above procedure, we proceed as below:
An expanded form of Boolean expression, where each term contains all Boolean variables in their
true or complemented form, is also known as the canonical form of the expression.
The Karnaugh Map
The Karnaugh map (or simply a K-map) is similar to a truth table because it presents all the
possible values of input variables and the resulting output for each value.
However, instead of being organised into columns and rows like a truth table, the Karnaugh map
is an array of squares (or cells) in which each square represents a binary value of the input
variables.
The squares are arranged in a way so that simplification of given expression is simply a matter of
grouping the squares.
Karnaugh maps can be used for expressions with two, three or four variables.
The number of squares in a Karnaugh map is equal to the total number of possible input variable
combinations (as is the number of rows in a truth table).
For two variables, the number of squares is 2² = 4; for three variables, the number of squares is 2³ =
8; and for four variables, the number of squares is 2⁴ = 16.
For example, a square in the upper left corner has a value of A′B′ and a square in the lower right
corner has a value of AB. Fig. 1 (b) shows the standard product terms represented by each square
in the Karnaugh map.
Fig. 1
The value of a given square is the values of A and B at the left in the same row combined with the
value of C at the top in the same column.
For example, a square in the upper left corner has a value of A′B′C′ and a square in the bottom right
corner has a value of AB′C. Fig. 2 (b) shows the product terms that are represented by each square
in the Karnaugh map.
Fig. 2
Fig. 3
The value of a given square is the values of A and B at the left in the same row combined with the
values of C and D at the top in the same column.
For example, a square in the upper left corner has a value A′B′C′D′ and a square in the lower left
corner has a value AB′C′D′. Fig. 3 (b) shows the standard product terms that are represented by
each square in the four-variable Karnaugh map.
It may also be noted that squares in the top row are adjacent to the corresponding squares in the
bottom row and squares in the outer left column are adjacent to the corresponding squares in the
outer right column.
This is called “wrap around” adjacency because we can think of the map as wrapping around from
top to bottom to form a cylinder or from left to right to form a cylinder.
Fig. 4 (a) and (b) show the square adjacencies for three-variable and four-variable Karnaugh
maps respectively. Notice the square adjacencies in a four-variable Karnaugh map:
Fig. 4
Similarly, for the second product term place a 1 in the second row and first column. Repeat this
process for the other two product terms. The squares that do not have 1 are the squares for which
the expression is 0. Usually when working with sum-of-products expressions, the 0s are left off the
map.
Solution. Sketch a four-variable Karnaugh map. Select the first product term A′BCD′ from
the given SOP expression and enter 1 in the corresponding square as shown in Fig. 7.
Similarly enter 1 for the other product terms in the given SOP expression. Check that
number of 1s in the Karnaugh Map is equal to the number of product terms in the given
SOP expression.
Fig .7
A′ + AB′ + ABC′
As seen, this expression is obviously not in standard form because each product term does not have
three variables. The first term is missing two variables, the second term is missing one variable and
the third term is standard. In order to convert the given nonstandard SOP expression to a standard
form, we multiply the first product term by (B + B′)(C + C′), and the second term by (C + C′).
Expanding the resulting expression, we get,
A′B′C′ + A′B′C + A′BC′ + A′BC + ABC′ + AB′C′ + AB′C
This expression can be mapped on a three-variable Karnaugh map as shown in Fig. 8.
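The expansion can also be confirmed by brute force: the nonstandard expression A′ + AB′ + ABC′ and its seven standard product terms agree on all eight input combinations. A Python check (assuming that reconstruction of the complemented terms):

```python
from itertools import product

def f_nonstandard(A, B, C):
    # A' + A.B' + A.B.C'
    return (1 - A) | (A & (1 - B)) | (A & B & (1 - C))

def f_standard(A, B, C):
    # The seven expanded standard product terms, ORed together.
    n = lambda x: 1 - x
    terms = [n(A) & n(B) & n(C), n(A) & n(B) & C,
             n(A) & B & n(C),    n(A) & B & C,
             A & B & n(C),       A & n(B) & n(C), A & n(B) & C]
    return max(terms)

for A, B, C in product((0, 1), repeat=3):
    assert f_nonstandard(A, B, C) == f_standard(A, B, C)
print("nonstandard and standard SOP forms are equivalent")
```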
(a) Grouping the 1s : We can group the 1s on the Karnaugh map according to the following
rules by enclosing those adjacent squares containing 1s.
The objective is to maximize the size of the groups and to minimize the number of groups.
(b ) Determining the Product Term for each group: Following are the rules that are applied
to find the minimum product terms and the minimum sum-of-products expression :
1. Group the squares that have 1s : Each group of squares containing 1s creates one
product term composed of all variables that occur in only one form (either
uncomplemented or complemented) within the group. Variables that occur both
uncomplemented and complemented within the group are eliminated. These are
known as contradictory variables.
2. In order to determine the minimum product term for each group, we need to look at
the standard methodology for three-variable and four-variable Karnaugh map
respectively.
(i) For a three-variable K-map: (1) for a 1-square group, we get a three-variable
product term; (2) for a 2-square group, we get a two-variable product term; (3) for
a 4-square group, we get a one-variable product term.
(ii) For a four-variable K-map: (1) for a 1-square group, we get a four-variable
product term; (2) for a 2-square group, we get a three-variable product term; (3)
for a 4-square group, we get a two-variable product term; and (4) for an 8-square
group, we get a one-variable product term.
(c ) Summing the resulting product terms : When all the minimum product terms are
derived from the Karnaugh map, these are summed to form the minimum sum-of-
products expression.
Note : In some cases, there may be more than one way to group the 1s to form the product
terms. Whichever way is chosen, the minimal expressions will have the same number of
product terms, and each product term will have the same number of Boolean variables.
The example given below will help you to understand and apply the simplification of the
SOP expression using Karnaugh map.
Example. Simplify the following Boolean expression using the Karnaugh mapping
technique :
X = A′B + A′B′C′ + AB′C′ + ABC′
Solution. The first step is to map the given Boolean expression on the Karnaugh map.
Notice that there are three variables A, B and C in the Boolean expression, therefore
we need a three-variable Karnaugh map.
The Boolean expression to be mapped is,
Note that the given Boolean expression is a nonstandard SOP expression because the first product
term A′B has the variable C missing in it. This can be converted into a standard SOP form by
modifying the expression as below.
Fig. 9
X = A′B + C′
Fig 10 Fig 11
X=𝐵𝐷+𝐵𝐶+𝐴𝐶D
Fig. 1
Adders
The logical gates discussed so far can be used for performing arithmetical functions like addition,
subtraction, multiplication and division in electronic calculators and digital instruments. In the
central processing unit (CPU) of a computer, these arithmetic functions are carried out by the
arithmetic and logic unit (ALU). The logic functions used generally are XOR, OR and AND. We
will consider the following:
1. Half Adder—It carries out binary addition with the help of XOR and AND gates. It
has two inputs and two outputs. It can add two binary digits at a time and produces a
2-bit result, i.e. a sum and a carry, according to the binary addition rules.
2. Full Adder—It has three inputs and can add three bits at a time. It is made up of two
half adders and one OR gate.
Half-Adder
A half-adder is an arithmetic circuit block that can be used to add two bits.
Such a circuit thus has two inputs that represent the two bits to be added and two outputs,
with one producing the SUM output and the other producing the CARRY.
Figure below shows the truth table of a half-adder, showing all possible input combinations
and the corresponding outputs.
The Boolean expressions for the SUM and CARRY outputs are given by the equations
S = A ⊕ B = A′B + AB′ and C = A . B
An examination of the two expressions shows that there is no scope for further simplification.
While the first one, representing the SUM output, is that of an EX-OR gate, the second one,
representing the CARRY output, is that of an AND gate.
However, these two expressions can certainly be represented in different forms using various laws
and theorems of Boolean algebra to illustrate the flexibility that the designer has in hardware
implementing as simple a combinational function as that of a half-adder.
The simplest way to hardware-implement a half-adder is to use a two-input EX-OR gate
for the SUM output and a two-input AND gate for the CARRY output, as shown below.
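As a behavioural sketch (Python, illustrative), the half-adder is one XOR and one AND:

```python
def half_adder(a, b):
    """SUM = A XOR B, CARRY = A AND B."""
    return a ^ b, a & b

# Full truth table; note 1 + 1 gives SUM 0 with CARRY 1 (binary 10).
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(a, b, "-> SUM", s, "CARRY", c)
```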
Full Adder
A full adder circuit is an arithmetic circuit block that can be used to add three bits to produce a
SUM and a CARRY output.
Such a building block becomes a necessity when it comes to adding binary numbers with a large
number of bits.
The full adder circuit overcomes the limitation of the half-adder, which can be used to add two bits
only.
Figure above shows the truth table of a full adder circuit showing all possible input combinations
and corresponding outputs.
In order to arrive at the logic circuit for hardware implementation of a full adder, we will firstly
write the Boolean expressions for the two output variables, that is, the SUM and CARRY outputs,
in terms of input variables.
These expressions are then simplified by using any of the simplification techniques described
previously.
The Boolean expressions for the two output variables are given below for the SUM output (S)
and for the CARRY output (Cout):
S = A′B′Cin + A′B Cin′ + AB′Cin′ + AB Cin
Cout = A′B Cin + AB′Cin + AB Cin′ + AB Cin
The next step is to simplify the two expressions. We will do so with the help of the Karnaugh
mapping technique.
As is clear from the two maps, the expression for the SUM (S) output cannot be simplified any
further, whereas the simplified Boolean expression for Cout is given by the equation
Cout = A.B + B.Cin + A.Cin
Figure above shows the logic circuit diagram of the full adder.
A full adder can also be seen to comprise two half-adders and an OR gate. The expressions for
SUM and CARRY outputs can be rewritten as follows:
S = (A ⊕ B) ⊕ Cin
Cout = A.B + Cin.(A ⊕ B)
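That two-half-adder construction can be sketched directly (Python, illustrative):

```python
def half_adder(a, b):
    return a ^ b, a & b

def full_adder(a, b, cin):
    """Two half-adders and an OR gate:
    S = (A XOR B) XOR Cin, Cout = A.B + Cin.(A XOR B)."""
    s1, c1 = half_adder(a, b)     # first half-adder on A and B
    s, c2 = half_adder(s1, cin)   # second half-adder adds the carry-in
    return s, c1 | c2             # OR the two intermediate carries

# 1 + 1 + 1 = 11 in binary: SUM = 1, CARRY = 1.
print(full_adder(1, 1, 1))
```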
Half-Subtractor
A half-subtractor is a combinational circuit that can be used to subtract one binary digit from
another to produce a DIFFERENCE output and a BORROW output. The truth table of a half-
subtractor is shown in Figure below.
The Boolean expressions for the two outputs are given by the equations
D = A ⊕ B = A′B + AB′ and Bo = A′ . B
It is obvious that there is no further scope for any simplification of these Boolean expressions.
While the expression for the DIFFERENCE (D) output is that of an EX-OR gate, the expression
for the BORROW output (Bo) is that of an AND gate with input A complemented before it is fed
to the gate.
Figure below shows the logic implementation of a half-subtractor.
Full Subtractor
A full subtractor performs subtraction operation on two bits, a minuend and a subtrahend, and also
takes into consideration whether a ‘1’ has already been borrowed by the previous adjacent lower
minuend bit or not.
As a result, there are three bits to be handled at the input of a full-subtractor, namely the two bits to
be subtracted and a borrow bit designated as Bin.
There are two outputs, namely the DIFFERENCE output D and the BORROW output Bo. The
BORROW output bit tells whether the minuend bit needs to borrow a ‘1’ from the next possible
higher minuend bit.
Figure below shows the truth table of a full subtractor.
The Boolean expressions for the two output variables are given by the equations
D = A ⊕ B ⊕ Bin
Bo = A′B′Bin + A′B Bin′ + A′B Bin + AB Bin
The Karnaugh maps for the two expressions are given in Figure below for DIFFERENCE output D
and for BORROW output Bo.
As is clear from the two Karnaugh maps, no simplification is possible for the DIFFERENCE output D.
The simplified expression for Bo is given by the equation
Bo = A′B + A′Bin + B.Bin
It can be shown that a full subtractor can be implemented with half-subtractors in the same way as
a full adder was constructed using half-adders.
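That construction can be sketched behaviourally (Python, illustrative), mirroring the full-adder arrangement of two half-blocks plus an OR gate:

```python
def half_subtractor(a, b):
    """D = A XOR B, Bo = A'.B"""
    return a ^ b, (1 - a) & b

def full_subtractor(a, b, bin_):
    """Two half-subtractors and an OR gate."""
    d1, b1 = half_subtractor(a, b)      # first stage: A - B
    d, b2 = half_subtractor(d1, bin_)   # second stage subtracts the borrow-in
    return d, b1 | b2                   # OR the two intermediate borrows

# 0 - 1 - 1: DIFFERENCE 0 with BORROW 1 (i.e. -2 = -1*2 + 0).
print(full_subtractor(0, 1, 1))
```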
A Multiplexer (MUX)
A MUX is a digital switch that has multiple inputs (sources) and a single output
(destination). The select lines determine which input is connected to the output.
MUX Types
2-to-1 (1 select line)
4-to-1 (2 select lines)
8-to-1 (3 select lines)
16-to-1 (4 select lines)
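The select-line behaviour can be sketched as follows (Python, illustrative): the select lines form a binary index that picks one of the data inputs, which is exactly what the hardware gating achieves.

```python
def mux4(inputs, s1, s0):
    """1-of-4 data selector: two select lines pick one of four inputs.
    Equivalent to Z = S1'.S0'.D0 + S1'.S0.D1 + S1.S0'.D2 + S1.S0.D3."""
    return inputs[2 * s1 + s0]

data = [1, 0, 1, 1]   # D0..D3
for s1 in (0, 1):
    for s0 in (0, 1):
        print("select", s1, s0, "->", mux4(data, s1, s0))
```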
Four-input multiplexer
A 4-to-1 multiplexer can be represented by the Boolean function
Z = S1′.S0′.D0 + S1′.S0.D1 + S1.S0′.D2 + S1.S0.D3
Logic symbol for a 1-of-4 data selector/multiplexer
As shown in Fig. below, the input lines corresponding to the three minterms present in the given
Boolean function are tied to logic ‘1’. The remaining five possible minterms absent in the Boolean
function are tied to logic ‘0’.
It is a three-variable Boolean function. Conventionally, we will need to use an 8-to-1 multiplexer
to implement this function.
The truth table of the given Boolean function is shown in Table below
d) Demultiplexer:
The demultiplexer is the inverse of the multiplexer, in that it takes a single data input and n address
inputs. It has 2ⁿ outputs. The address inputs determine which data output is going to have the same
value as the data input.
A demultiplexer has
n control inputs
1 data input
2ⁿ outputs
A demultiplexer routes (or connects) the data input to the selected output.
The value of the control inputs determines the output that is selected.
A demultiplexer performs the opposite function of a multiplexer.
DEMUX Types
1-to-2 (1 select line)
1-to-4 (2 select lines)
1-to-8 (3 select lines)
1-to-16 (4 select lines)
Note that input X is held HIGH (1)
Decoder
The basic function of a decoder is to detect the presence of a specified combination of bits on its
inputs and to indicate that presence by a specified output level.
It consists of:
n input lines and
up to 2ⁿ output lines.
2-to-4 Decoder
The truth table of a 3-to-8 Decoder
A2 A1 A0 | D0 D1 D2 D3 D4 D5 D6 D7
 0  0  0 |  1  0  0  0  0  0  0  0
 0  0  1 |  0  1  0  0  0  0  0  0
 0  1  0 |  0  0  1  0  0  0  0  0
 0  1  1 |  0  0  0  1  0  0  0  0
 1  0  0 |  0  0  0  0  1  0  0  0
 1  0  1 |  0  0  0  0  0  1  0  0
 1  1  0 |  0  0  0  0  0  0  1  0
 1  1  1 |  0  0  0  0  0  0  0  1
3-to-8 Decoder with Enable
If there are some unused or ‘don’t care’ combinations in the n-bit code, then there will be fewer
than 2ⁿ output lines. As an illustration, if there are three input lines, the decoder can have a maximum
of eight unique output lines. If, in the three-bit input code, the only used three-bit combinations are
000, 001, 010, 100, 110 and 111 (011 and 101 being either unused or don’t care combinations), then
this decoder will have only six output lines. In general, if n and m are respectively the numbers of
input and output lines, then m ≤ 2ⁿ.
A decoder can generate a maximum of 2ⁿ possible minterms with an n-bit binary code.
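The one-hot behaviour, including the enable input, can be sketched as follows (Python, illustrative):

```python
def decoder3to8(a2, a1, a0, enable=1):
    """3-to-8 decoder: exactly one of the 2**3 outputs goes HIGH,
    the one whose index equals the binary input (when enabled)."""
    outputs = [0] * 8
    if enable:
        outputs[4 * a2 + 2 * a1 + a0] = 1
    return outputs

print(decoder3to8(1, 0, 1))            # D5 goes HIGH
print(decoder3to8(1, 0, 1, enable=0))  # all outputs LOW when disabled
```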
4 to 2 Encoder
A 4-to-2 encoder has four inputs Y3, Y2, Y1 and Y0 and two outputs A1 and A0. The block diagram
of the 4-to-2 encoder is shown in the following figure.
At any time, only one of these four inputs can be ‘1’, in order to get the respective binary code at the
output. The truth table of the 4-to-2 encoder is shown below.
Inputs Outputs
Y3 Y2 Y1 Y0 A1 A0
0 0 0 1 0 0
0 0 1 0 0 1
0 1 0 0 1 0
1 0 0 0 1 1
From Truth table, we can write the Boolean functions for each output as
A1=Y3+Y2
A0=Y3+Y1
We can implement the above two Boolean functions by using two input OR gates. The circuit
diagram of 4 to 2 encoder is shown in the following figure.
The above circuit diagram contains two OR gates. These OR gates encode the four inputs with two
bits
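The two derived equations can be sketched directly (Python, illustrative); the one-input-at-a-time assumption from the truth table is what makes two OR gates sufficient:

```python
def encoder4to2(y3, y2, y1, y0):
    """4-to-2 encoder using the derived equations:
    A1 = Y3 + Y2, A0 = Y3 + Y1 (exactly one input may be '1')."""
    return y3 | y2, y3 | y1

print(encoder4to2(0, 0, 0, 1))  # Y0 active -> code 00
print(encoder4to2(0, 1, 0, 0))  # Y2 active -> code 10
print(encoder4to2(1, 0, 0, 0))  # Y3 active -> code 11
```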
SEQUENTIAL LOGIC CIRCUITS
Introduction
So far, we have dealt with combinational logic circuits, whose outputs are functions of the current
inputs only.
Here we will deal with sequential logic circuits.
A sequential logic circuit is a circuit whose present outputs are functions of the present inputs, as
well as previous inputs.
It has in it a unit called the memory which stores the effect of the previous sequence of inputs.
The stored effects of the previous inputs are called the STATE of the circuit. A sequential circuit
can be represented by the block diagram on Figure 1.
Flip-Flops
A flip-flop is a logic circuit that is capable of storing one bit of information (0 or 1).
It stores the one bit of information as long as power is supplied to the circuit. It is the simplest
memory element.
It is also referred to as a bistable multivibrator.
The block diagram of a flip-flop is shown on Figure 2.
A flip-flop can have one or more inputs, but it has only two outputs: the normal output Q and the
inverted (or complemented) output Q′.
Under normal operating conditions, Q and Q′ are always complements of each other, i.e. either Q = 0
and Q′ = 1, or Q = 1 and Q′ = 0.
When we refer to the STATE of a flip-flop, we are referring to the state of its normal (Q) output
i.e. if we say that the state of a flip-flop is 1, we mean Q = 1.
When we have SET = 0 and RESET = 1, the flip-flop is RESET (Q = 0), and when SET = 1 and
RESET = 0, the flip-flop is SET (Q = 1). If we have SET = 1 and RESET = 1, it is similar to
setting and resetting the flip-flop at the same time; this mode is disallowed as it causes Q
= Q′ = 1.
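That SET/RESET behaviour can be modelled at the level described here (Python, illustrative; a gate-level latch would be built from cross-coupled NOR gates):

```python
class SRFlipFlop:
    """Behavioural SET/RESET model; SET = RESET = 1 is the
    disallowed input combination described in the text."""
    def __init__(self):
        self.q = 0   # the STATE of the flip-flop is its Q output

    def apply(self, set_, reset):
        if set_ and reset:
            raise ValueError("SET = RESET = 1 is disallowed")
        if set_:
            self.q = 1
        elif reset:
            self.q = 0
        return self.q, 1 - self.q   # Q and its complement

ff = SRFlipFlop()
print(ff.apply(1, 0))  # SET   -> Q = 1
print(ff.apply(0, 0))  # hold  -> Q unchanged
print(ff.apply(0, 1))  # RESET -> Q = 0
```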
The D Flip-Flop
The D-type flip-flop is illustrated in Figure below.
Figure : D flip-flop
From the truth-table, we can see that Q transparently follows the input D, and for this reason, the D
flip-flop is sometimes referred to as a transparent latch.
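The "Q follows D" behaviour can be sketched behaviourally as follows (assuming, as in the truth table, that the output is updated on each active clock edge):

```python
# Behavioural sketch: on each active clock edge, Q takes the value on D.
def d_flip_flop(d, q_prev):
    return d   # the previous state q_prev is simply replaced

q = 0
for d in [1, 1, 0, 1]:   # clock in a stream of D values
    q = d_flip_flop(d, q)
assert q == 1            # Q holds the last value clocked in
```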
The T flip-flop
A T-flip-flop (also known as a toggle flip-flop) is illustrated on Figure below.
Figure: T flip-flop
The corresponding truth-table is shown below:
The JK flip-flop
The JK flip-flop is shown on Figure below.
Figure: JK flip-flop
Its truth-table is shown below:
From the truth-table above, we can see that the JK flip-flop does not have invalid inputs and it can
complement. These properties make the JK flip-flop very versatile, and it is used in many
sequential logic circuits.
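The four rows of the JK truth table, including the toggle (complement) mode that makes the JK flip-flop so versatile, can be sketched behaviourally as:

```python
# Behavioural sketch of a JK flip-flop, evaluated on each active clock edge.
def jk_flip_flop(j, k, q):
    if j == 0 and k == 0:
        return q          # hold the present state
    if j == 0 and k == 1:
        return 0          # reset
    if j == 1 and k == 0:
        return 1          # set
    return 1 - q          # J = K = 1: toggle (complement)

q = 0
q = jk_flip_flop(1, 0, q)   # set    -> 1
q = jk_flip_flop(1, 1, q)   # toggle -> 0
q = jk_flip_flop(1, 1, q)   # toggle -> 1
q = jk_flip_flop(0, 1, q)   # reset  -> 0
assert q == 0
```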
Exercise
Figure below shows a clock waveform and inputs J and K applied to a positive-edge-triggered JK
flip-flop. Using the truth table for a positive-edge-triggered JK flip-flop, explain what happens
at t0, t1, t2, t3, t4 and t5 to justify the output waveform shown for Q. (It is assumed that the
initial state of Q is HIGH.)
Flip-flop excitation tables
Counters
A counter is a circuit made up of a series of flip-flops connected in a suitable manner to record
sequences of pulses presented to it in digital form. Counters are used in digital systems to:
1. Count events - convert a given number of event cycles into a prescribed binary code.
2. To time events - The duration of an event can be determined from the count and the
clock frequency.
3. To generate special code sequences - In this application, the counter is used as the
heart of a logic sequencer. Such a circuit generates the next-state information as well
as the control information.
4. Frequency division
Classification of counters
1. Mode of operation
Single mode counter - Counts without external control inputs.
Multi-mode counter - Counters with external control inputs that can be used to
alter the counting sequence, the beginning of a sequence or the end of a sequence.
2. Modulus of the counter
The modulus (MOD number) of a counter is the number of different logic states it goes through
before it comes back to the initial state to repeat the count sequence.
An n-bit counter that counts through all its natural states and does not skip any of the states has a
modulus of 2^n. We can see that such counters have a modulus that is an integral power of 2, that is,
2, 4, 8, 16 and so on.
These can be modified with the help of additional combinational logic to get a modulus of less than
2^n.
To determine the number of flip-flops required to build a counter having a given modulus, identify
the smallest integer m that is equal to or greater than the desired modulus and is also an integral
power of 2. For instance, if the desired modulus is 10, which is the case in a decade counter, the
smallest integer greater than or equal to 10 that is also an integral power of 2 is 16. The number
of flip-flops in this case would be 4, as 16 = 2^4. On the same lines, the number of flip-flops
required to construct counters with MOD numbers of 3, 6, 14, 28 and 63 would be 2, 3, 4, 5 and 6
respectively. In general, the minimum number n of flip-flops needed to construct a counter of a
given modulus is given by
2^(n−1) + 1 ≤ modulus ≤ 2^n
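The inequality above amounts to choosing the smallest n with 2^n ≥ modulus. This can be checked against the worked figures with a small Python sketch:

```python
import math

# Smallest n with 2**n >= modulus, i.e. n = ceil(log2(modulus)),
# which is exactly the n satisfying 2**(n-1) + 1 <= modulus <= 2**n.
def flip_flops_needed(modulus):
    return math.ceil(math.log2(modulus))

# Check against the MOD numbers worked out in the text
for mod, expected in [(10, 4), (3, 2), (6, 3), (14, 4), (28, 5), (63, 6)]:
    assert flip_flops_needed(mod) == expected
```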
i. Asynchronous/ripple/serial counter - Counter in which the flip-flops are not clocked
simultaneously; the output of one flip-flop drives the clock input of the next.
ii. Synchronous/parallel counter - Counter in which all the flip-flops are clocked
simultaneously. This is illustrated on Figure below.
Example
Consider the circuit shown on Figure below.
Assuming that we start with Q3 = Q2 = Q1 = 0, we can draw the waveforms shown on Figure
below.
We then get the truth-table for Q1, Q2 and Q3, which is shown below:
From the table above, we can see that the counter has eight distinct states and it is a down counter.
The counter is therefore a modulo-8 down asynchronous (ripple) counter or a divide by 8 down
asynchronous counter.
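The eight-state down-count sequence of this counter can be sketched as follows (this models only the state sequence, not the ripple clocking itself):

```python
# Sketch of the counting sequence of a modulo-8 down counter,
# starting from Q3 Q2 Q1 = 000 as in the example.
count = 0
sequence = []
for _ in range(8):
    count = (count - 1) % 8          # each clock pulse decrements the count
    sequence.append(format(count, '03b'))

# eight distinct states before the sequence repeats
assert sequence == ['111', '110', '101', '100', '011', '010', '001', '000']
```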
Asynchronous Counters with MOD numbers < 2^N
The general design procedure is as follows:
1. For a modulo-k counter, find the smallest number of flip-flops N such that 2^(N−1) < k <
2^N, and connect them as a ripple counter.
2. Connect a NAND gate output to the asynchronous CLEAR inputs of all the flip-flops.
3. Determine which flip-flops will be in the HIGH state at a count of k, and then connect
the normal outputs of these flip-flops to the NAND gate inputs.
Example
Design a JK flip-flop-based modulo-6 asynchronous up counter.
Solution
The MOD number k = 6, and this is not a power of 2. Using the equation above,
2^(N−1) < 6 < 2^N,
we find that the value of N which satisfies this is N = 3, hence we use three flip-flops which we
connect as a ripple counter. k = 6 = 110 in binary, hence the two most-significant bits of the
counter are the ones that should be connected to the inputs of the NAND gate, yielding the circuit
shown on Figure below.
From Figure of Waveforms for the counter above, we can see that when Q3 and Q2 go HIGH, the
output of the NAND gate (X) goes LOW, and this clears all the flip-flops (Q3, Q2 and Q1 are taken
to the LOW state). When Q3 and Q2 go LOW, the NAND gate output goes HIGH and the counter
resumes normal operation.
The duration of the resetting pulse is very short, typically < 100 ns, so the counter state 110 is a
temporary state and is not considered a valid state of the counter. Assuming that we start with
Q3 = Q2 = Q1 = 0, we can draw the truth table for this counter as shown below:
The counter has 6 stable states (000, 001, 010, 011, 100 and 101) hence it is a modulo-6 up
asynchronous counter.
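The clear-on-110 behaviour can be sketched at the state level as follows (the very short temporary state 110 is modelled as an immediate reset back to 000):

```python
# Sketch of the modulo-6 behaviour: the counter counts up, and the moment the
# (temporary) state 110 appears, the NAND gate clears all flip-flops to 000.
count = 0
states = []
for _ in range(12):
    count += 1
    if count == 0b110:   # Q3 = Q2 = 1 drives the NAND output LOW -> clear
        count = 0
    states.append(format(count, '03b'))

# the six stable states repeat every six clock pulses
assert states[:6] == ['001', '010', '011', '100', '101', '000']
assert states[6:12] == states[:6]
```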
Synchronous Counters
These are counters in which all the flip-flops are clocked simultaneously, and as a result, all
flip-flop output changes occur simultaneously.
Propagation delays do not add together hence synchronous counters can work at much higher
frequencies than ripple counters.
The design procedure is as follows:
(i) Write the state-transition-table for the count.
(ii) Derive a minimal logic expression for each flip-flop input.
(iii) Implement the logic circuit using flip-flops and a suitable set of logic gates.
Example
Design a JK flip-flop based circuit whose state diagram is shown on Figure below.
Solution
Since we are to use JK flip-flops to implement this function, we shall use the JK flip-flop
excitation table. We then construct the table shown below:
The K-maps corresponding to the four flip-flop inputs are shown on Figure below.
Example
Design a JK flip-flop based circuit whose state diagram is shown on Figure below.
Using the JK flip-flop excitation table, we construct the table shown below:
The K-maps corresponding to the four flip-flop inputs are shown on Figure below. From the
figure, we can see that:
The circuit is shown on Figure below. In this case, QA is the MSB while QC is the LSB.
Apply the next bit to Din, so Din = 1. As soon as the next negative edge of the clock arrives,
FF-2 will be set and the stored word changes to Q3 Q2 Q1 Q0 = 1100.
Apply the next bit to be stored, i.e. 1, to Din and apply the clock pulse. As soon as the third
negative clock edge arrives, FF-1 will be set and the output is modified to Q3 Q2 Q1 Q0 =
1110.
Similarly, with Din = 1 and the fourth negative clock edge arriving, the word stored in
the register is Q3 Q2 Q1 Q0 = 1111.
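The bit-by-bit loading described above can be sketched as follows (assuming, as in the example, that data enters at Q3 and the contents shift one place to the right on each negative clock edge):

```python
# Sketch of serial entry into a 4-bit SISO register: Din enters at Q3 and the
# previous contents shift one place to the right on each clock edge.
def shift_in(register, din):
    return [din] + register[:-1]     # register is [Q3, Q2, Q1, Q0]

reg = [0, 0, 0, 0]
words = []
for din in [1, 1, 1, 1]:             # store the word 1111 bit by bit
    reg = shift_in(reg, din)
    words.append(''.join(map(str, reg)))

assert words == ['1000', '1100', '1110', '1111']
```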
Truth Table
Waveforms
Serial Input Parallel Output
In this type of operation, the data is entered serially and taken out in parallel fashion.
Data is loaded bit by bit, and the outputs are disabled while the data is loading.
As soon as the data loading is complete and all the flip-flops contain their required
data, the outputs are enabled so that all the loaded data is made available on all
the output lines at the same time.
Four clock cycles are required to load a four-bit word, hence the speed of loading in
SIPO mode is the same as in SISO mode.
Block Diagram
Parallel loading
Left Shifting
Right shifting
The mode control input is connected to logic 1 for the parallel loading operation, whereas it
is connected to 0 for serial shifting. With the mode control pin connected to ground, the
universal shift register acts as a bi-directional register. For the shift-left operation, the serial
input is applied to the input that goes to AND gate 1 shown in the figure, whereas for the
shift-right operation, the serial input is applied to the D input.
Block Diagram
FPGAs also have several input/output (I/O) blocks that create the interface between external devices
and the FPGA's internal logic circuit. They also contain a memory element that stores the
configuration specifying the operational behaviour of the logic cells and programmable interconnects.
To program FPGAs, various hardware description languages (HDLs) are available, such as
Verilog or VHDL. These languages are used to define the desired functionality and behaviour of
the digital system.
The configurable logic block (CLB) consists of multiplexers, flip-flops, and an array of
combinational logic circuits. The I/O blocks provide pins to connect external devices to the FPGA.
The programmable interconnect is basically a switching-matrix structure that provides
interconnection between the CLBs and I/O blocks inside the FPGA.
The classification of FPGAs into low-end, mid-range, and high-end categories is based
on their performance, complexity, gate density, and power consumption.
Types of FPGAs
Depending on the applications, FPGAs can be classified into the following main types −
Low-End FPGAs
Mid-Range FPGAs
High-End FPGAs
Low-End FPGAs
Low-end FPGAs are primarily designed to consume less power than mid-range and high-end
FPGAs. Thus, they are well suited to battery-powered devices and other applications where
energy efficiency is critical.
Low-end FPGAs contain a smaller number of logic gates, so they provide fewer resources for
implementing complex logic systems, and their architecture is less complex. Some common
applications of low-end FPGAs include simple control systems, basic signal-processing
systems, and low-cost consumer electronics.
Mid-Range FPGAs
Mid-range FPGAs consume more power than low-end FPGAs but less than high-end FPGAs.
This is mainly because mid-range FPGAs contain a larger number of logic gates than low-end
FPGAs, which in turn increases the overall complexity of the circuit. Nevertheless, these
FPGAs offer a balance between performance and efficiency.
Since mid-range FPGAs provide a larger number of resources, they allow more complex digital
circuits to be implemented.
These FPGAs are used in a wide range of applications such as digital signal processing,
communication systems, embedded systems, industrial automation systems, telecommunication
devices, medical equipment, etc.
High-End FPGAs
High-end FPGAs consume more power than both low-end and mid-range FPGAs, because
they use a larger number of logic gates and also have higher operating frequencies. In return,
these FPGAs deliver exceptional performance and processing efficiency.
Due to the availability of a large number of resources, high-end FPGAs can be used to implement
highly complex logic circuits and systems. They also provide the highest level of flexibility and
performance.
Some common applications where the high-end FPGAs are used include high-speed processing
systems, real-time data analysis systems, data centers, high-performance computing systems,
aerospace and defense systems, etc.
Advantages of FPGAs
FPGAs offer numerous advantages over other types of programmable logic devices. Some of the
major advantages of FPGAs are listed below −
FPGAs provide great flexibility and re-configurability, as they can be programmed or
reprogrammed to implement different logic functions to meet the requirements of specific
applications without any hardware alteration or redesign.
FPGAs allow digital systems to be developed in less time.
FPGAs have high performance and processing capabilities, hence they can execute
complex algorithms and tasks efficiently.
FPGAs can be customized and optimized to meet the requirements of specific
applications.
FPGAs also support parallelism and pipelining, techniques that enhance overall
system performance and throughput.
Disadvantages of FPGAs
FPGAs offer several advantages as listed above, but they also have certain disadvantages. Some
of the main disadvantages of FPGAs are highlighted here −
FPGAs are more expensive than other types of programmable logic devices.
FPGAs are complex to design and implement and require more time and expertise in
hardware description languages (HDLs) and system design tools.
FPGAs are more susceptible to security threats than other types of programmable logic
devices.
Applications of FPGAs
FPGAs are extensively used in several applications across a wide range of industries. The
following are some common applications of FPGAs −
FPGAs are used in the field of digital signal processing to accomplish tasks such as
audio-video signal processing, speech recognition, image processing, etc.
FPGAs are used in the implementation of complex algorithms and real-time signal
processing functions.
FPGAs are used in various communication and networking devices such as routers,
switches, network processing units, etc.
In communication systems, FPGAs are employed for implementing protocol processing
algorithms, packet handling algorithms, encryption-decryption techniques, error detection
and correction mechanisms, etc.
FPGAs are used in a variety of electronic systems such as embedded systems, industrial
automation systems, automotive electronics, consumer electronic devices, etc.
FPGAs are used to perform high-end processing tasks such as scientific computation,
data analysis, machine learning and artificial intelligence tasks.
FPGAs are also integral parts in various medical equipment such as MRI (Magnetic
Resonance Imaging), CT (Computed Tomography) scan, ultrasound systems, X-ray
machines, etc.
Memory
Definitions
bit ─ a single binary digit
byte ─ a collection of eight bits accessed together
word ─ a collection of binary bits whose size is a typical unit of access for the
memory. It is typically a power of two multiple of bytes (e.g., 1 byte, 2 bytes, 4 bytes,
8 bytes, etc.)
nibble ─ a four-bit aggregation, or half an octet; also known as a half-byte
Memory Address ─ A vector of bits that identifies a particular memory element (or
collection of elements).
A memory unit is a device to which binary information is transferred for storage and from which
information is retrieved when needed for processing.
When data processing takes place, information from memory is transferred to selected registers in
the processing unit.
A register is a very small amount of very fast memory that is built into the CPU in order to speed
up its operations by providing quick access to commonly used values.
Intermediate and final results obtained in the processing unit are transferred back to be stored in
memory. Binary information received from an input device is stored in memory, and
information transferred to an output device is taken from memory. A memory unit is a
collection of cells capable of storing a large quantity of binary information.
There are two types of memories that are used in digital systems:
1. random‐access memory (RAM) and
2. read‐only memory (ROM).
The process of storing new information into memory is referred to as a memory write
operation.
The process of transferring the stored information out of memory is referred to as a memory read
operation.
RAM can perform both write and read operations.
ROM can perform only the read operation. This means that suitable binary information is already
stored inside memory and can be retrieved or read at any time. However, that information
cannot be altered by writing.
ROM is a programmable logic device (PLD).
The function to be performed by a programmable logic device is undefined at the time of its
manufacture.
These devices are programmed by the user to perform a range of functions.
The binary information that is stored within such a device is specified in some fashion and then
embedded within the hardware, in a process referred to as programming the device.
The word “programming” here refers to a hardware procedure which specifies the bits that are
inserted into the hardware configuration of the device.
A PLD is an integrated circuit with internal logic gates connected through electronic paths that
behave similarly to fuses. In the original state of the device, all the fuses are intact.
Programming the device involves blowing those fuses along the paths that must be removed in
order to obtain the particular configuration of the desired logic function.
RANDOM-ACCESS MEMORY
A memory unit is a collection of storage cells, together with associated circuits needed to transfer
information into and out of a device. The architecture of memory is such that information can be selectively
retrieved from any of its internal locations.
The time it takes to transfer information to or from any desired random location is
always the same—hence the name random‐access memory, abbreviated RAM. In
contrast, the time required to retrieve information that is stored on magnetic tape
depends on the location of the data.
Most computer memories use words that are multiples of 8 bits in length.
Thus, a 16‐bit word contains two bytes, and a 32‐bit word is made up of four bytes. The capacity of a
memory unit is usually stated as the total number of bytes that the unit can store.
Communication between memory and its environment is achieved through data input
and output lines, address selection lines, and control lines that specify the direction
of transfer.
A block diagram of a memory unit is shown in Fig. above.
The n data input lines provide the information to be stored in memory, and the n data output lines
supply the information coming out of memory.
The k address lines specify the particular word chosen among the many available.
The two control inputs specify the direction of transfer desired: The Write input causes binary data to
be transferred into the memory, and the Read input causes binary data to be transferred out of
memory.
The memory unit is specified by the number of words it contains and the number of bits in each
word. The address lines select one particular word. Each word in memory is assigned an
identification number, called an address, ranging from 0 up to 2^k − 1, where k is the number of
address lines.
The selection of a specific word inside memory is done by applying the k ‐bit address to the address
lines. An internal decoder accepts this address and opens the paths needed to select the word
specified.
Memories vary greatly in size and may range from 1,024 words, requiring an address of 10 bits, to
2^32 words, requiring 32 address bits.
It is customary to refer to the number of words (or bytes) in memory with one of the letters K (kilo),
M (mega), and G (giga). K is equal to 2^10, M is equal to 2^20, and G is equal to 2^30. Thus, 64K =
2^16, 2M = 2^21, and 4G = 2^32.
The 1K * 16 memory has 10 bits in the address and 16 bits in each word.
As another example, a 64K * 10 memory will have 16 bits in the address (since 64K = 2^16) and
each word will consist of 10 bits.
The number of address bits needed in a memory is dependent on the total number of words that can
be stored in the memory and is independent of the number of bits in each word.
The number of bits in the address is determined from the relationship 2^k ≥ m, where m is the total
number of words and k is the smallest number of address bits satisfying the relationship.
The steps that must be taken for the purpose of transferring a stored word out of memory are
as follows:
1. Apply the binary address of the desired word to the address lines.
2. Activate the read input.
3. The memory unit will then take the bits of the word that has been selected by the
address and apply them to the output data lines. The contents of the selected word do
not change after the read operation, i.e., the read operation is nondestructive.
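The read and write steps above can be sketched with a simple behavioural memory model (the class name and interface here are illustrative, not a standard API):

```python
# Behavioural sketch of a RAM: a memory with k address lines holds 2**k words,
# and reading a word leaves its contents unchanged.
class RAM:
    def __init__(self, address_bits, word_bits):
        self.words = [0] * (2 ** address_bits)   # 2**k addressable words
        self.word_bits = word_bits

    def write(self, address, data):              # apply address, activate Write
        self.words[address] = data & ((1 << self.word_bits) - 1)

    def read(self, address):                     # apply address, activate Read
        return self.words[address]               # contents are unchanged

ram = RAM(address_bits=10, word_bits=16)         # a 1K * 16 memory
ram.write(3, 0b1011001011001010)
assert ram.read(3) == 0b1011001011001010
assert ram.read(3) == ram.read(3)                # reading is nondestructive
```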
Commercial memory components available in integrated‐circuit chips sometimes provide the two
control inputs for reading and writing in a somewhat different configuration.
Instead of having separate read and write inputs to control the two operations, most integrated
circuits provide two other control inputs:
1. One input selects the unit and
2. the other determines the operation.
The memory operations that result from these control inputs are specified in Table below.
The memory enable (sometimes called the chip select) is used to enable the particular memory chip
in a multichip implementation of a large memory. When the memory enable is inactive, the
memory chip is not selected and no operation is performed.
Static RAM (SRAM) consists essentially of internal latches that store the binary information. The
stored information remains valid as long as power is applied to the unit.
Dynamic RAM (DRAM) stores the binary information in the form of electric charges on capacitors
provided inside the chip by MOS transistors
The stored charge on the capacitors tends to discharge with time, and the capacitors must be
periodically recharged by refreshing the dynamic memory.
Refreshing is done by cycling through the words every few milliseconds to restore the decaying
charge.
DRAM offers reduced power consumption and larger storage capacity in a single memory chip.
SRAM is easier to use and has shorter read and write cycles.
Memory units that lose stored information when power is turned off are said to be volatile. CMOS
integrated circuit RAMs, both static and dynamic, are of this category, since
the binary cells need external power to maintain the stored information.
In contrast, a nonvolatile memory, such as magnetic disk, retains its stored information after the
removal of power. This type of memory is able to retain information because the data stored on
magnetic components are represented by the direction of magnetization, which is retained
after power is turned off. ROM is another nonvolatile memory.
A nonvolatile memory enables digital computers to store programs that will be needed again after
the computer is turned on. Programs and data that cannot be altered are stored in ROM, while
other large programs are maintained on magnetic disks.
The latter programs are transferred into the computer RAM as needed. Before the power is turned
off, the binary information from the computer RAM is transferred to the disk so that the
information will be retained.
READ‐ONLY MEMORY
A read‐only memory (ROM) is essentially a memory device in which permanent binary information
is stored.
The binary information must be specified by the designer and is then embedded in the unit to form
the required interconnection pattern. Once the pattern is established, it stays within the unit
even when power is turned off and on again.
A block diagram of a ROM consisting of k inputs and n outputs is shown in Fig. below.
The inputs provide the address for memory, and the outputs give the data bits of the stored word
that is selected by the address.
The number of words in a ROM is determined from the fact that k address input lines are needed to
specify 2^k words.
Note that ROM does not have data inputs, because it does not have a write operation. Integrated
circuit ROM chips have one or more enable inputs and sometimes come with three‐state
outputs to facilitate the construction of large arrays of ROM.
Consider, for example, a 32 * 8 ROM. The unit consists of 32 words of 8 bits each.
There are five input lines that form the binary numbers from 0 through 31 for the address.
Figure below shows the internal logic construction of this ROM. The five inputs are decoded into
32 distinct outputs by means of a 5 * 32 decoder.
Each output of the decoder represents a memory address. The 32 outputs of the decoder are
connected to each of the eight OR gates.
The diagram shows the array logic convention used in complex circuits.
Each OR gate must be considered as having 32 inputs. Each output of the decoder is connected to
one of the inputs of each OR gate.
Since each OR gate has 32 input connections and there are 8 OR gates, the ROM contains 32 * 8
= 256 internal connections.
In general, a 2^k * n ROM will have an internal k * 2^k decoder and n OR gates. Each OR gate
has 2^k inputs, which are connected to the outputs of the decoder.
Every 0 listed in the truth table specifies the absence of a connection, and every 1 listed specifies
a path that is obtained by a connection.
For example, the table specifies the eight‐bit word 10110010 for permanent storage at address 3.
The four 0’s in the word are programmed by blowing the fuse links between output 3 of the
decoder and the inputs of the OR gates associated with outputs A6, A3, A2, and A0.
The four 1’s in the word are marked with a * to denote a temporary connection, in place of a dot
used for a permanent connection in logic diagrams.
When the input of the ROM is 00011, the signal equivalent to logic 1 at decoder output 3 propagates
through the connections to the OR gate outputs of A7, A5, A4, and A1.
The other four outputs remain at 0. The result is that the stored word 10110010 is applied to
the eight data outputs.
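The decoder-plus-OR-gate structure, with the word 10110010 programmed at address 3, can be sketched as follows (a behavioural model; the fuse array stands for the intact connections):

```python
def decoder(address, lines=32):
    """5-to-32 decoder: exactly one output line is 1."""
    return [1 if i == address else 0 for i in range(lines)]

# fuses[i][bit] = 1 means decoder line i is connected to the OR gate for
# output bit `bit` (A7 down to A0); all other fuses are blown (0).
fuses = [[0] * 8 for _ in range(32)]
for bit, value in enumerate('10110010'):   # program address 3 with 10110010
    fuses[3][bit] = int(value)

def rom_read(address):
    lines = decoder(address)
    # each OR gate combines its 32 possible connections to the decoder lines
    return ''.join(str(max(lines[i] & fuses[i][bit] for i in range(32)))
                   for bit in range(8))

assert rom_read(0b00011) == '10110010'   # address 3 yields the stored word
assert rom_read(0) == '00000000'         # unprogrammed address yields all 0s
```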
Example
Design a combinational circuit using a ROM. The circuit accepts a three‐bit number and outputs a
binary number equal to the square of the input number.
The first step is to derive the truth table of the combinational circuit
Three inputs and six outputs are needed to accommodate all possible binary numbers. We note that
output B0 is always equal to input A0, so there is no need to generate B0 with a ROM, since it is
equal to an input variable. Moreover, output B1 is always 0, so this output is a known constant.
We actually need to generate only four outputs with the ROM; the other two are readily obtained.
The minimum size of ROM needed must have three inputs and four outputs. Three inputs
specify eight words, so the ROM must be of size 8 * 4. The ROM implementation is shown.
The three inputs specify eight words of four bits each. The truth table in (b) specifies the
information needed for programming the ROM. The block diagram of Fig. (a) shows the
required connections of the combinational circuit.
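The squarer above can be sketched in Python, storing only the four ROM-generated bits and wiring B0 directly to A0 with B1 held at the constant 0:

```python
# 8 x 4 ROM holding output bits B5..B2 of the square of a 3-bit number.
rom = [(a * a) >> 2 for a in range(8)]

def square(a):
    b0 = a & 1                  # B0 is wired directly to A0
    return (rom[a] << 2) | b0   # B1 is the constant 0

for a in range(8):
    assert square(a) == a * a   # the ROM-based circuit reproduces the square
```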
Types of ROMs
The required paths in a ROM may be programmed in four different ways.
The first is called mask programming and is done by the semiconductor company during the last
fabrication process of the unit. The procedure for fabricating a ROM requires that the customer fill out the
truth table he or she wishes the ROM to satisfy. The truth table may be submitted in a special form
provided by the manufacturer or in a specified format on a computer output medium. The manufacturer
makes the corresponding mask for the paths to produce the 1’s and 0’s according to the customer’s truth
table. This procedure is costly because the vendor charges the customer a special fee for custom masking
the particular ROM. For this reason, mask programming is economical only if a large quantity of the
same ROM configuration is to be ordered.
For small quantities, it is more economical to use a second type of ROM called programmable read‐only
memory, or PROM. When ordered, PROM units contain all the fuses intact, giving all 1’s in the bits of the
stored words. The fuses in the PROM are blown by the application of a high‐voltage pulse to the device
through a special pin.
A blown fuse defines a binary 0 state and an intact fuse gives a binary 1 state. This procedure allows the
user to program the PROM in the laboratory to achieve the desired relationship between input addresses
and stored words. Special instruments called PROM programmers are available commercially to facilitate
the procedure. In any case, all procedures for programming ROMs are hardware procedures, even though
the word programming is used.
The hardware procedure for programming ROMs or PROMs is irreversible, and once programmed, the
fixed pattern is permanent and cannot be altered. Once a bit pattern has been established, the unit must be
discarded if the bit pattern is to be changed.
A third type of ROM is the erasable PROM, or EPROM, which can be restructured to the initial state
even though it has been programmed previously. When the EPROM is placed under a special ultraviolet
light for a given length of time, the shortwave radiation discharges the internal floating gates that serve as
the programmed connections. After erasure, the EPROM returns to its initial state and can be
reprogrammed to a new set of values.
The fourth type of ROM is the electrically erasable PROM (EEPROM or E2PROM ). This device is
like the EPROM, except that the previously programmed connections can be erased with an electrical
signal instead of ultraviolet light. The advantage is that the device can be erased without removing it from
its socket.
Flash memory devices are similar to EEPROMs, but have additional built‐in circuitry to selectively
program and erase the device in‐circuit, without the need for a special programmer. They have
widespread application in modern technology in cell phones, digital cameras, set‐top boxes, digital TV,
telecommunications, nonvolatile data storage, and microcontrollers. Their low consumption of power
makes them an attractive storage medium for laptop and notebook computers. Flash memories
incorporate additional circuitry, too, allowing simultaneous erasing of blocks of memory, for example, of
size 16 to 64 Kbytes. Like EEPROMs, flash memories are subject to fatigue, typically having about
10^5 block erase cycles.
Sequential Circuit Synthesis
State Machine
'State' and 'state transition' are two important concepts that you should learn before studying
FSMs, because an FSM models the behaviour of a system by representing the different states it can
be in and the transitions between those states.
A state corresponds to a specific action or condition, for example waiting for button 1 to be
pressed, or rotating a DC motor in a given direction.
An FSM (finite state machine) is a model of computation that, at any point in time, can be in one
of a predetermined number of states.
Based on the present inputs and the present state, the Mealy state machine produces its outputs.
The outputs are therefore valid only at the positive (or negative) transition of the clock signal.
As shown in the figure, there are two parts present in the Moore state machine: the combinational logic and the memory.
Mealy Machine | Moore Machine
1. The output of the circuit may be affected by changes in the input. | 1. The output of the circuit is unaffected by changes in the input.
2. It requires fewer states to implement the same function. | 2. Implementing the same function requires more states.
6. Mealy machines react to input changes more quickly, generally in the same clock cycle. | 6. Decoding the output necessitates more logic, resulting in longer circuit delays; Moore machines usually react after one clock cycle.
9. It is easier to design and needs less hardware. | 9. More hardware is necessary for the design.
In general, the number of states required in Moore state machine is more than or equal to the
number of states required in Mealy state machine.
There is an equivalent Mealy state machine for each Moore state machine.
State Reduction
Any design process must consider the problem of minimising the cost of the final circuit. The two
most obvious cost reductions are:
Example: Let us consider the state table of a sequential circuit shown in Table 1.
Present State | Next State (x=0) | Next State (x=1) | Output (x=0) | Output (x=1)
A | B | C | 1 | 0
B | F | D | 0 | 0
C | D | E | 1 | 1
D | F | E | 0 | 1
E | A | D | 0 | 0
F | B | C | 1 | 0
It can be seen from the table that the present state A and F both have the same next states, B (when
x=0) and C (when x=1). They also produce the same output 1 (when x=0) and 0 (when x=1).
Therefore states A and F are equivalent. Thus one of the states, A or F can be removed from the
state table. For example, if we remove row F from the table and replace all F's by A's in the
columns, the state table is modified as shown in Table 2.
Present State | Next State (x=0) | Next State (x=1) | Output (x=0) | Output (x=1)
A | B | C | 1 | 0
B | A | D | 0 | 0
C | D | E | 1 | 1
D | A | E | 0 | 1
E | A | D | 0 | 0
It is apparent that states B and E are equivalent. Removing E and replacing E's by B's results in the
reduced table shown in Table 3.
Table 3
Present State    Next State        Output
                 x=0      x=1      x=0      x=1
A                B        C        1        0
B                A        D        0        0
C                D        B        1        1
D                A        B        0        1
The removal of equivalent states has reduced the number of states in the circuit from six to four.
Two states are considered to be equivalent if and only if, for every input sequence, the circuit produces the same output sequence regardless of which of the two states it is started in.
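The reduction above can be automated with a simple partition-refinement pass over Table 1. The states and table entries below are taken from the example; the code itself is only an illustrative sketch. States are first grouped by their output signature, then regrouped by the blocks their next states fall into, until the grouping stops changing.

```python
# State table from Table 1: next state and output per (state, input x)
nxt = {("A", 0): "B", ("A", 1): "C", ("B", 0): "F", ("B", 1): "D",
       ("C", 0): "D", ("C", 1): "E", ("D", 0): "F", ("D", 1): "E",
       ("E", 0): "A", ("E", 1): "D", ("F", 0): "B", ("F", 1): "C"}
out = {("A", 0): 1, ("A", 1): 0, ("B", 0): 0, ("B", 1): 0,
       ("C", 0): 1, ("C", 1): 1, ("D", 0): 0, ("D", 1): 1,
       ("E", 0): 0, ("E", 1): 0, ("F", 0): 1, ("F", 1): 0}
states = "ABCDEF"

# Initial partition: group states by their output signature.
block = {s: (out[s, 0], out[s, 1]) for s in states}
while True:
    # Refine: a state's signature now includes the blocks of its next states.
    refined = {s: (block[s], block[nxt[s, 0]], block[nxt[s, 1]]) for s in states}
    if len(set(refined.values())) == len(set(block.values())):
        break                      # partition is stable -- done
    block = refined

groups = {}
for s in states:
    groups.setdefault(block[s], []).append(s)
print(sorted(groups.values()))     # [['A', 'F'], ['B', 'E'], ['C'], ['D']]
```

The result confirms the hand reduction: A≡F and B≡E, leaving four states.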
Design of Sequential Circuit
A design problem normally starts with a word description of the input-output relation and ends with a circuit diagram containing sequential and combinational logic elements.
Sequential logic circuit design here refers to synchronous, clock-triggered circuits because of their design and implementation advantages.
The first step in a sequential logic synthesis problem is to convert this word description to State
transition diagram.
MOORE MODEL
1) Input-output relation
In the Moore model, the circuit outputs, also called primary outputs, are generated solely from the secondary outputs, i.e. the memory values.
Figure above shows the state transition diagram following Moore model.
• Initialized with state a (if input x=0, remain in the same state a; if input x=1, go to next state b: the first bit (1) is detected and output y=0).
• State b (if input x=0, go back to initial state a; if input x=1, go to next state c: the second bit (1) is detected and output y=0).
• State c (if input x=0, go to final state d: the third bit (0) is detected; if input x=1, remain in state c; output y=0).
• State d (if input x=0, go to initial state a; if input x=1, go to state b and repeat detection of the sequence, i.e. '011011'; output y=1).
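The Moore transitions listed above can be checked with a short simulation. This is a sketch: the state names and moves follow the description, and the detected pattern is assumed to be the bit sequence 1, 1, 0.

```python
# Moore model of the detector: the output y is attached to the state,
# and y = 1 only in state d (pattern 1,1,0 just seen).
nxt = {("a", 0): "a", ("a", 1): "b",
       ("b", 0): "a", ("b", 1): "c",
       ("c", 0): "d", ("c", 1): "c",
       ("d", 0): "a", ("d", 1): "b"}
y = {"a": 0, "b": 0, "c": 0, "d": 1}

def detect(bits, state="a"):
    outs = []
    for x in bits:
        state = nxt[state, x]    # clocked transition
        outs.append(y[state])    # Moore output read from the new state
    return outs

print(detect([1, 1, 0, 1, 1, 0]))   # [0, 0, 1, 0, 0, 1]
```

The output goes high each time the pattern completes, one clock period after its final bit arrives.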
Mealy Model
Figure above shows state transition diagram of the given problem following Mealy model.
• Initialized with state a (if input x=0, remain in the same state a; if input x=1, go to next state b: the first bit (1) is detected and output y=0).
• State b (if input x=0, go back to initial state a; if input x=1, go to next state c: the second bit (1) is detected and output y=0).
• State c (if input x=0, go to state a: the third bit (0) is detected and output y=1; if input x=1, remain in state c and output y=0).
Note: the difference from the Moore model is that there the output is generated one clock cycle later, in state d, which also requires one additional state. The Mealy model does not use state d. State assignment can be done in any order, e.g. we can make a: B = 0, A = 1 and b: B = 0, A = 0.
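The Mealy version can be sketched the same way. Here the output is attached to the transition rather than the state, so the 1 appears in the same clock period as the final bit of the pattern, and state d is not needed. States and moves follow the description above; the code is illustrative.

```python
# Mealy model: each (state, input) maps to (next state, output).
step = {("a", 0): ("a", 0), ("a", 1): ("b", 0),
        ("b", 0): ("a", 0), ("b", 1): ("c", 0),
        ("c", 0): ("a", 1), ("c", 1): ("c", 0)}

def detect(bits, state="a"):
    outs = []
    for x in bits:
        state, out = step[state, x]  # output produced during the transition
        outs.append(out)
    return outs

print(detect([1, 1, 0, 1, 1, 0]))   # [0, 0, 1, 0, 0, 1]
```

Three states suffice, and the output sequence matches the Moore version of this example.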
NUMBER OF MEMORY ELEMENTS
Two memory elements (flip-flops) are required.
DECIDE ABOUT FLIP-FLOPS
JK flip-flop
Moore Model
State synthesis table obtained from state transition diagram of Moore model (Moore model Figure)
and excitation table of JK flip-flop is shown in Table below
• It has eight rows, as for each of the four possible states there can be two different input values.
• The table is prepared as follows. When the circuit is at state 00, i.e. a, and receives X = 0, it remains at state 00 and the output in this state is Y = 0. Since both the B and A flip-flops make a 0→0 transition, both flip-flops should have the input J = 0, K = X from the excitation table.
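The JK excitation entries used above can be tabulated directly. The table below is the standard JK excitation table ("X" = don't care); the small lookup helper is only illustrative.

```python
# Standard JK excitation table: (present Q, next Q) -> (J, K).
jk = {(0, 0): ("0", "X"),   # 0 -> 0: J=0, K=don't care
      (0, 1): ("1", "X"),   # 0 -> 1: J=1, K=don't care
      (1, 0): ("X", "1"),   # 1 -> 0: J=don't care, K=1
      (1, 1): ("X", "0")}   # 1 -> 1: J=don't care, K=0

# Example row of the state synthesis table: state 00 (a) with X = 0
# stays at 00, so both flip-flops make a 0 -> 0 transition.
print(jk[0, 0])   # ('0', 'X')
```

This is why each flip-flop in that row receives J = 0, K = X.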
• The figure below presents the Karnaugh maps developed from the state synthesis table of the Moore model, together with the corresponding design equations.
• The sequence detector circuit diagram developed from these equations is also shown.
(a) Design equations for Moore model (b) Circuit diagram following a Moore model
Mealy Model
Using the state synthesis table corresponding to the Mealy model, we can fill six positions in each Karnaugh map (Fig. below (a)). The locations BAX = 110 and BAX = 111 are filled with don't-care (X) conditions, as such combinations never occur in the detector circuit if it is properly initialized. The design equations are obtained from these Karnaugh maps, from which the circuit diagram is drawn as shown in Fig. (b). In this circuit, the output directly uses the input information.
(a) Design equations for Mealy model (b) Circuit diagram following a Mealy model
Assignment
INTRODUCTION
Design restrictions
The design normally requires that inputs do not change simultaneously (i.e., within the settling time of the circuit after an input change).
Hence, inputs change only in stable states.
The design of asynchronous circuits is very complex and has several constraints to be taken care of that are not required for synchronous circuits.
Problems are: -
Race conditions
Hazards
Race Conditions
A race condition is caused when two or more binary state variables change value in response to a change in an input variable.
Unequal delays may cause the state variables to change in an unpredictable manner.
Race condition may be
i. non- critical, or
ii. critical
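A critical race can be demonstrated with a toy next-state table (entirely hypothetical, not the example circuit below): the intended transition 00→11 requires both state variables to change, and the final stable state depends on which variable changes first.

```python
# Toy next-state function of the state variables y1y2, input held fixed.
# From 00 the circuit is supposed to reach 11, but 01 and 10 are stable.
nxt = {"00": "11", "01": "01", "10": "10", "11": "11"}

def settle(y, order):
    """Flip one state variable at a time, in the given priority order
    (the faster variable flips first), until a stable state is reached."""
    while nxt[y] != y:
        target = nxt[y]
        for i in order:
            if y[i] != target[i]:
                y = y[:i] + target[i] + y[i + 1:]
                break
    return y

print(settle("00", order=(0, 1)))   # '10' if y1 is faster
print(settle("00", order=(1, 0)))   # '01' if y2 is faster
```

The two delay orderings end in different stable states, so this race is critical; if both orderings converged to the same state it would be non-critical.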
Hazards
Unwanted switching transients ("glitches") may cause the circuit to malfunction.
This makes the design of such circuits complex and cumbersome; if benefits like speed are not of critical importance, synchronous design is preferred to asynchronous design.
Example:-
Let us start with state transition diagram
Fig: State transition diagram of the problem
Let the initial state be considered as a when AB = 00 with output 0.
As long as AB remains 00, the circuit remains at a.
As soon as one of A or B changes the circuit may immediately move to different states.
A and B cannot change together, a constraint in asynchronous design.
At state a, if input AB = 01, the circuit goes to state b and if input AB = 10 it goes to c
Both b and c generate output 0 as they have not yet fulfilled the condition stated in the
problem for assertion of output.
The circuit remains at b for AB = 01. If AB changes to 11, the circuit moves to state d with output X = 0; if AB becomes 00, the circuit goes back to a.
Similarly, the circuit stays at c if input stays at 10 but goes to d receiving 11 and to a
receiving 00.
At state d the input is AB = 11; now if B changes from 1→0, the condition for output X = 1 is fulfilled, and the next state e (for AB = 10) shows output 1.
The circuit remains at state e as long as AB = 10. It goes back to state d if AB becomes 11, because if B again goes to 0 the output should again be high.
At e, if AB changes to 00, the circuit goes to the initial state a, since AB becoming 10 after 00 should not assert the output.
The corresponding flow table (stable states in parentheses; X1–X5 are don't-care entries for input changes that cannot occur, since only one input may change at a time):

Present State   AB=00   AB=01   AB=10   AB=11   Output
a               (a)     b       c       X1      0
b               a       (b)     X2      d       0
c               a       X3      (c)     d       0
d               X4      a       e       (d)     0
e               a       X5      (e)     d       1
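The behaviour walked through above can be checked with a short simulation. The flow-table entries below are reconstructed from the narrative description (an assumption where the fragments are ambiguous); don't-care entries are simply omitted.

```python
# Reconstructed flow table: next state per (present state, input AB).
nxt = {("a", "00"): "a", ("a", "01"): "b", ("a", "10"): "c",
       ("b", "00"): "a", ("b", "01"): "b", ("b", "11"): "d",
       ("c", "00"): "a", ("c", "10"): "c", ("c", "11"): "d",
       ("d", "01"): "a", ("d", "10"): "e", ("d", "11"): "d",
       ("e", "00"): "a", ("e", "10"): "e", ("e", "11"): "d"}
out = {"a": 0, "b": 0, "c": 0, "d": 0, "e": 1}

def run(inputs, state="a"):
    outs = []
    for ab in inputs:
        state = nxt[state, ab]   # settle in the next state for this input
        outs.append(out[state])
    return state, outs

# B falls while A stays high after AB = 11: output asserted in state e.
print(run(["01", "11", "10"]))   # ('e', [0, 0, 1])
```

The run reproduces the a → b → d → e walk described above, with the output going high only at e.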
State assignment
This step in asynchronous sequential circuit design has to be done very carefully, so that a valid state transition does not require two or more state variables to change simultaneously, which may lead to a racing problem.
In this problem there are three states in the reduced state diagram, which need two variables to represent them. The figure above shows that we cannot avoid two variables changing together on one or more occasions in the reduced state transition diagram.
If {a, d, e} is represented by (00, 01, 10), this occurs twice, for the d→e and e→d transitions in the state transition diagram.
A representation of (00, 01, 11) requires two variables to change only once, when the e→a transition occurs.
A solution may be found if the unused fourth combination of the two-variable representation is used as a dummy state, included between e and a.
If one dummy state is not enough, a third variable is needed to represent the states; that makes 2³ − 3 = 5 dummy states available for this purpose.
Represent the states in this problem by two variables PQ in the following way: a: 00, d: 01, e: 11, and dummy state: 10. The modified state diagram and state table with the dummy state (10) included are shown in Fig. a below.
The dummy state is an unstable state; before the input can change, the circuit moves on to the next stable state a.
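The assignment choice can be checked mechanically by counting how many state variables change on each transition of the reduced diagram. The sketch below uses the assignment from the notes and routes e → a through the dummy state; the a → d edge is assumed for illustration.

```python
# State assignment from the notes: a=00, d=01, e=11, dummy=10.
code = {"a": "00", "d": "01", "e": "11", "dummy": "10"}

# Transitions of the reduced diagram, with e -> a routed via the dummy
# state.  (The a -> d edge here is an assumption for illustration.)
edges = [("a", "d"), ("d", "e"), ("e", "d"), ("e", "dummy"), ("dummy", "a")]

def hamming(u, v):
    """Number of state variables that change on the transition u -> v."""
    return sum(c1 != c2 for c1, c2 in zip(code[u], code[v]))

for u, v in edges:
    print(u, "->", v, ":", hamming(u, v), "bit change")
```

Every listed transition changes exactly one state variable, so with this assignment no transition can race.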
Representing the state variables by P and Q, the corresponding feedback equations are obtained. The final circuit is developed from these equations and is shown in Fig. below.