
EENG 354: DIGITAL LOGIC DESIGN

Combinational logic design and building blocks


Signals

Analog and Digital Signals
An analog signal is continuous, meaning that there are no breaks or interruptions.
Example of Analog Signals
• An analog signal can be any time-varying signal.
• Minimum and maximum values can be either positive or negative.
• They can be periodic (repeating) or non-periodic.
• Sine waves and square waves are two common analog signals.
• Note that this square wave is not a digital signal because its minimum value is
negative.

Parts of an Analog Signal

The largest value reached in a half cycle is called the peak value or the maximum value or the
amplitude of the waveform. Such values are represented by Vm, Im, Em, etc.
A peak-to-peak value is the difference between the maximum and minimum values in a cycle. The
time taken to complete one cycle is called the period or the periodic time, T(seconds), of the
waveform.
The number of cycles completed in one second is called the frequency, f (Hertz, Hz).

Logic Levels
Before examining digital signals, we must define logic levels.
A logic level is a voltage level that represents a defined digital state.
Logic HIGH: The higher of two voltages, typically 5 volts
Logic LOW: The lower of two voltages, typically 0 volts

Logic Level | Voltage | True/False | On/Off | 0/1
HIGH        | 5 volts | True       | On     | 1
LOW         | 0 volts | False      | Off    | 0

A digital signal is a signal that represents data as a sequence of discrete values; at any given time it can only take on one of a finite number of values.
Digital signals are not continuous.
They use specific values to represent information.

Example of Digital Signals


• Digital signals are commonly referred to as clock signals.
• Their minimum value must be 0 volts, and their maximum value must be 5 volts.
• They can be periodic (repeating) or non-periodic.
• The time the signal is high (tH) can vary anywhere from 1% of the period to 99% of
the period.

Parts of a Digital Signal


Amplitude:
For digital signals, this will ALWAYS be 5 volts.
Time High (tH):
The time the signal is at 5 v.
Time Low (tL):
The time the signal is at 0 v.
Duty Cycle:
The ratio of tH to the total period (T).
Rising Edge:
A 0-to-1 transition of the signal.
Falling Edge:
A 1-to-0 transition of the signal.
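The quantities above are related by T = tH + tL, f = 1/T and duty cycle = tH/T. A minimal sketch in Python (the function name `clock_params` is illustrative, not from the text):

```python
def clock_params(t_high, t_low):
    """Return (period, frequency, duty_cycle) for a rectangular clock signal.

    t_high -- time the signal spends HIGH in one cycle (tH)
    t_low  -- time the signal spends LOW in one cycle (tL)
    """
    period = t_high + t_low        # T = tH + tL
    frequency = 1 / period         # f = 1 / T (Hz when T is in seconds)
    duty_cycle = t_high / period   # fraction of the period spent HIGH
    return period, frequency, duty_cycle

# A signal HIGH for 1 time unit and LOW for 3: T = 4, f = 0.25, duty cycle = 25 %
print(clock_params(1, 3))
```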

Logic Gates
A logic gate is an electronic circuit which makes logic decisions.
It has one output and one or more inputs.
The output signal appears only for certain combinations of input signals.
Logic gates are the basic building blocks from which most of the digital systems are built up.
They implement the hardware logic function based on the logical algebra developed by George
Boole which is called Boolean algebra in his honour.
A unique characteristic of Boolean algebra is that its variables can assume only one of two values, i.e. either 0 or 1. Hence, every variable is either a 0 or a 1.

These gates are available today in the form of various IC families. The most popular families are:
transistor-transistor logic (TTL), emitter-coupled logic (ECL), metal-oxide-semiconductor (MOS)
and complementary metal-oxide-semiconductor (CMOS).

Here, we will consider the OR, AND, NOT, NOR, NAND, exclusive OR (XOR) and exclusive
NOR (XNOR) gates along with their truth tables.
Positive and Negative Logic
In computing systems, the number symbols 0 and 1 represent two possible states of a circuit or
device.
It makes no difference whether these two states are referred to as ON and OFF, CLOSED and OPEN, HIGH and LOW, PLUS and MINUS or TRUE and FALSE, depending on the circumstances. The main point is that they must be symbolized by two opposite conditions.
In positive logic, a 1 represents:
1. an ON circuit
2. a CLOSED switch
3. a HIGH voltage
4. a PLUS sign
5. a TRUE statement

Consequently, a 0 represents:
1. an OFF circuit
2. an OPEN switch
3. a LOW voltage
4. a MINUS sign
5. a FALSE statement.

In negative logic, just opposite conditions prevail.


Suppose a digital system has two voltage levels of 0 V and 5 V. If we say that the symbol 1 stands for 5 V and the symbol 0 for 0 V, then we have a positive logic system. If, on the other hand, we decide that a 1 should represent 0 V and a 0 should represent 5 V, then we get a negative logic system.

Truth Table
A truth table lists all possible combinations of input binary variables and the corresponding outputs
of a logic system.
The logic system output can be found from the logic expression, often referred to as the Boolean
expression that relates the output with the inputs of that very logic system.

When the number of input binary variables is only one, then there are only two possible
inputs, i.e. ‘0’ and ‘1’. If the number of inputs is two, there can be four possible input
combinations, i.e. 00, 01, 10 and 11.
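The 2ⁿ input combinations described above can be generated mechanically; a small Python sketch using `itertools.product` (the helper name is illustrative):

```python
from itertools import product

def input_combinations(n):
    """List all 2**n combinations of n binary input variables, in counting order."""
    return list(product((0, 1), repeat=n))

print(input_combinations(1))  # [(0,), (1,)]
print(input_combinations(2))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```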

OR Gate

An OR gate performs an ORing operation on two or more logic variables.
The OR operation on two independent logic variables A and B is written as Y = A + B and read as Y equals A OR B (not as A plus B).

The output of an OR gate is LOW only when all of its inputs are LOW.
For all other possible input combinations, the output is HIGH. This statement when interpreted for
a positive logic system means the following.

The output of an OR gate is a logic ‘0’ only when all of its inputs are at logic ‘0’. For all other
possible input combinations, the output is a logic ‘1’. Figure 1 shows the circuit symbol and the
truth table of a two-input OR gate. The operation of a two-input OR gate is explained by the logic
expression

Y=A+B
Figure 1: Two-input OR gate.

AND Gate
The output of an AND gate is HIGH only when all of its inputs are in the HIGH state. In
all other cases, the output is LOW.
When interpreted for a positive logic system, this means that the output of the AND gate is a logic
‘1’ only when all of its inputs are in logic ‘1’ state. In all other cases, the output is logic ‘0’. The
logic symbol and truth table of a two-input AND gate are shown in Figs 2 (a) and (b) respectively.

Figure 2: Two-input AND gate.

The AND operation on two independent logic variables A and B is written as Y = A.B and read as Y equals A AND B (not as A multiplied by B). Here, A and B are input logic variables and Y is the output. An AND gate performs an ANDing operation.

NOT Gate
A NOT gate is a one-input, one-output logic circuit whose output is always the complement of the
input.
That is, a LOW input produces a HIGH output, and vice versa.
When interpreted for a positive logic system, a logic ‘0’ at the input produces a logic ‘1’ at the
output, and vice versa.
It is also known as a ‘complementing circuit’ or an ‘inverting circuit’. Figure 3 shows the circuit
symbol and the truth table.

The NOT operation on a logic variable X is denoted as X̄ or X′. That is, if X is the input to a NOT circuit, then its output Y is given by Y = X̄ (or X′) and read as Y equals NOT X.
Thus, if X = 0, Y = 1 and if X = 1, Y = 0.

Figure 3 (a) Circuit symbol of a NOT circuit and (b) the truth table of a NOT circuit.

EXCLUSIVE-OR Gate
The EXCLUSIVE-OR gate is commonly written as the EX-OR gate.
Figures 4(a) and (b) respectively show the logic symbol and truth table of a two-input EX-OR gate.
As can be seen from the truth table, the output of an EX-OR gate is a logic ‘1’ when the inputs are
unlike and a logic ‘0’ when the inputs are like.
Although EX-OR gates are available in integrated circuit form only as two-input gates, unlike other gates which are also available with more than two inputs, multiple-input EX-OR logic functions can be implemented using more than one two-input gate.
The truth table of a multiple-input EX-OR function can be expressed as follows. The output of a multiple-input EX-OR logic function is a logic ‘1’ when the number of 1s in the input sequence is odd and a logic ‘0’ when the number of 1s in the input sequence is even, including zero. That is, an all 0s input sequence also produces a logic ‘0’ at the output. The output of a two-input EX-OR gate is expressed by

Y = A ⊕ B = A′B + AB′
Figure 4 (a) Circuit symbol of a two-input EXCLUSIVE-OR gate, (b) the truth table of a
two-input EXCLUSIVE-OR gate

NAND Gate
NAND stands for NOT AND.
An AND gate followed by a NOT circuit makes a NAND gate
[Fig.5 (a)]. Figure 5 (b) shows the circuit symbol of a two-input NAND gate. The
truth table of a NAND gate is obtained from the truth table of an AND gate by
complementing the output entries [Fig.5 (c)].
The output of a NAND gate is a logic ‘0’ when all its inputs are a logic ‘1’. For all other input
combinations, the output is a logic ‘1’. NAND gate operation is logically expressed as

Y = (A . B)′
Figure 5 (a) Two-input NAND implementation using an AND gate and a NOT circuit, (b)
the circuit symbol of a two-input NAND gate and (c) the truth table of a two-input
NAND gate.

NOR Gate
NOR stands for NOT OR.
An OR gate followed by a NOT circuit makes it a NOR gate [Fig. 6(a)].
The truth table of a NOR gate is obtained from the truth table of an OR gate by complementing the
output entries.
The output of a NOR gate is a logic ‘1’ when all its inputs are logic ‘0’. For all other input
combinations, the output is a logic ‘0’. The output of a two-input NOR gate is logically expressed
as

Y = (A + B)′
Figure 6 (a) Two-input NOR implementation using an OR gate and a NOT circuit, (b) the
circuit symbol of a two-input NOR gate and (c) the truth table of a two-input NOR gate.

EXCLUSIVE-NOR Gate
EXCLUSIVE-NOR (commonly written as EX-NOR or XNOR) means NOT of EX-OR, i.e. the logic gate that we get by complementing the output of an EX-OR gate.
Figure below shows its circuit symbol along with its truth table.
The truth table of an EX-NOR gate is obtained from the truth table of an EX-OR gate by
complementing the output entries.

(a) Circuit symbol of a two-input EXCLUSIVE-NOR gate and (b) the truth table of a two-input EXCLUSIVE-NOR gate.

The output of a two-input EX-NOR gate is a logic ‘1’ when the inputs are like and a logic ‘0’ when
they are unlike.
In general, the output of a multiple-input EX-NOR logic function is a logic ‘0’ when the number of
1s in the input sequence is odd and a logic ‘1’ when the number of 1s in the input sequence is even
including zero.
That is, an all 0s input sequence also produces a logic ‘1’ at the output.
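The seven gates described above and their truth tables can be summarised in code. The sketch below models each two-input gate as a function on 0/1 integers and tabulates all four input combinations; the one-input NOT gate is simply `lambda a: 1 - a`. Names such as `GATES` and `truth_table` are illustrative, not from the text:

```python
from itertools import product

# Two-input gates of a positive logic system, modelled on 0/1 integers.
GATES = {
    "OR":   lambda a, b: a | b,
    "AND":  lambda a, b: a & b,
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
    "XOR":  lambda a, b: a ^ b,
    "XNOR": lambda a, b: 1 - (a ^ b),
}
NOT = lambda a: 1 - a  # the one-input gate

def truth_table(gate):
    """Return rows (A, B, Y) for all four input combinations of a gate."""
    f = GATES[gate]
    return [(a, b, f(a, b)) for a, b in product((0, 1), repeat=2)]

for name in GATES:
    print(name, truth_table(name))
```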

Universal Gates
OR, AND and NOT gates are the three basic logic gates as they together can be used to construct
the logic circuit for any given Boolean expression.
NOR and NAND gates have the property that each can individually be used to implement in hardware a logic circuit corresponding to any given Boolean expression.
That is, it is possible to use either only NAND gates or only NOR gates to implement any Boolean
expression.
This is so because a combination of NAND gates or a combination of NOR gates can be used to
perform functions of any of the basic logic gates. It is for this reason that NAND and NOR gates
are universal gates.

As an illustration, Fig. 7 shows how two-input NAND gates can be used to construct a NOT circuit [Fig. 7(a)], a two-input AND gate [Fig. 7(b)] and a two-input OR gate [Fig. 7(c)]. Fig. 8 shows the same using NOR gates. Understanding the conversion of NAND to OR and NOR to AND requires the use of De Morgan’s theorem, which is discussed under Boolean algebra.

Fig 7: Implementation of basic logic gates using only NAND gates.


Fig 8: Implementation of basic logic gates using only NOR gates.
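The constructions of Figs 7 and 8 can be checked exhaustively. The sketch below builds NOT, AND and OR first from NAND alone and then from NOR alone, following the standard two- and three-gate arrangements, and verifies them over all inputs (function names are illustrative):

```python
def nand(a, b):
    return 1 - (a & b)

def nor(a, b):
    return 1 - (a | b)

# Fig. 7: NOT, AND and OR built from NAND gates only.
def not_from_nand(a):    return nand(a, a)
def and_from_nand(a, b): return nand(nand(a, b), nand(a, b))
def or_from_nand(a, b):  return nand(nand(a, a), nand(b, b))

# Fig. 8: NOT, OR and AND built from NOR gates only.
def not_from_nor(a):     return nor(a, a)
def or_from_nor(a, b):   return nor(nor(a, b), nor(a, b))
def and_from_nor(a, b):  return nor(nor(a, a), nor(b, b))

for a in (0, 1):
    assert not_from_nand(a) == not_from_nor(a) == 1 - a
    for b in (0, 1):
        assert and_from_nand(a, b) == and_from_nor(a, b) == (a & b)
        assert or_from_nand(a, b) == or_from_nor(a, b) == (a | b)
print("NAND alone and NOR alone each reproduce NOT, AND and OR")
```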

Unique Feature of Boolean Algebra

Variables in Boolean algebra have a unique property: they can assume only one of two possible values, 0 and 1.
Each variable used in a logical or Boolean equation can assume only the value 0 or 1. For example, in the logical equation A + B = C, each of the three variables A, B and C can have only the value 0 or 1. This point must be clearly noted by the reader for easy understanding of the laws of Boolean algebra.

Laws of Boolean Algebra


Boolean algebra is a system of mathematics based on logic. It has its own set of fundamental laws which are necessary for manipulating different Boolean expressions.

1. OR Laws

Law 1. A + 0 = A
Law 2. A + 1 = 1
Law 3. A + A = A
Law 4. A + A′ = 1

2. AND Laws
Law 5. A . 0 = 0
Law 6. A . 1 = A
Law 7. A . A = A
Law 8. A . A′ = 0

3. Laws of Complementation
Law 9. 0′ = 1
Law 10. 1′ = 0
Law 11. if A = 0, then A′ = 1
Law 12. if A = 1, then A′ = 0
Law 13. (A′)′ = A

4. Commutative Laws
These laws allow change in the position of variables in OR and AND expressions.
Law 14. A +B =B +A
Law 15. A.B = B.A

These two laws express the fact that the order in which a combination of terms is performed
does not affect the final result of the combination.

5. Associative Laws
These laws allow removal of brackets from logical expressions and regrouping of variables.
Law 16. A + (B + C) = (A + B) + C
Law 17. (A + B) + (C + D) = A + B + C + D
Law 18. A.(B.C) = (A.B).C

6. Distributive Laws
These laws permit factoring or multiplying out of an expression.
Law 19. A(B + C) = AB + AC
Law 20. A + BC = (A + B)(A + C)
Law 21. A + A′B = A + B

7. Absorptive Laws
These enable us to reduce a complicated logic expression to a simpler form by absorbing
some of the terms into existing terms.
Law 22. A + AB = A
Law 23. A.(A + B) = A
Law 24. A.(A′ + B) = A.B

The above laws can be used to prove any given Boolean identity and also for simplifying
complicated expressions.
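Because every variable takes only the values 0 and 1, any of the laws above (and the identities proved below) can be verified by exhaustive enumeration. A Python sketch; the selection of laws and the names are illustrative:

```python
from itertools import product

def complement(x):
    return 1 - x

# A selection of the laws above, each checked for every value of A, B and C.
LAWS = [
    ("Law 4:  A + A' = 1",          lambda a, b, c: (a | complement(a)) == 1),
    ("Law 8:  A . A' = 0",          lambda a, b, c: (a & complement(a)) == 0),
    ("Law 20: A + BC = (A+B)(A+C)", lambda a, b, c: (a | (b & c)) == ((a | b) & (a | c))),
    ("Law 21: A + A'B = A + B",     lambda a, b, c: (a | (complement(a) & b)) == (a | b)),
    ("Law 24: A(A' + B) = AB",      lambda a, b, c: (a & (complement(a) | b)) == (a & b)),
    ("Identity: AC + ABC = AC",     lambda a, b, c: ((a & c) | (a & b & c)) == (a & c)),
]

for name, holds in LAWS:
    assert all(holds(a, b, c) for a, b, c in product((0, 1), repeat=3)), name
print("all listed laws verified")
```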

Example: Prove the following Boolean identity: AC + ABC = AC

Solution: AC + ABC = AC(1 + B) = AC . 1 = AC

Example: Determine the logic expression for the output Y, from the truth table shown.
Simplify and sketch the logic circuit for the simplified expression.

Example: Prove the following Boolean identity: (A + B) (A + C) = A + BC


Solution: Putting the left-hand side expression equal to Y, we get

Y = (A + B)(A + C)
= AA + AC + AB + BC
= A + AC + AB + BC
= A + AB + AC + BC
= A(1 + B) + AC + BC
= A + AC + BC
= A(1 + C) + BC
= A + BC

Hence (A + B)(A + C) = A + BC

De Morgan’s Theorems
These two theorems (or rules) are a great help in simplifying complicated logical expressions. The theorems can be stated as under:

Law 1. (A + B)′ = A′ . B′

Law 2. (A . B)′ = A′ + B′

The first statement says that the complement of a sum equals the product of complements.
The second statement says that the complement of a product equals the sum of the complements.
In fact, it allows transformation from a sum-of-products form to a product-of-sum form.

As seen from the above two laws, the procedure required for taking out an expression from
under a NOT sign is as follows:
1. complement the given expression i.e., remove the overall NOT sign
2. change all the ANDs to ORs and all the ORs to ANDs.
3. complement or negate all individual variables.

As an illustration, take the following example:

(A + BC)′ = A′ . (BC)′
= A′ . (B′ + C′)
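The three-step procedure can be confirmed by exhaustive enumeration for the example (A + BC)′ = A′(B′ + C′):

```python
from itertools import product

def lhs(a, b, c):
    # (A + BC)' -- complement of the whole expression
    return 1 - (a | (b & c))

def rhs(a, b, c):
    # A' . (B' + C') -- after swapping OR/AND and negating each variable
    return (1 - a) & ((1 - b) | (1 - c))

assert all(lhs(a, b, c) == rhs(a, b, c) for a, b, c in product((0, 1), repeat=3))
print("(A + BC)' = A'(B' + C') holds for all inputs")
```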

Standard Forms of Boolean Expressions

All Boolean expressions, regardless of their form, can be converted into either of the two following standard forms:
1. the sum-of-products (SOP) form and
2. the product-of-sums (POS) form.
The standardization of Boolean expressions makes their evaluation, simplification and implementation much more systematic and easier.
Now we shall discuss these two standard forms in more detail.

The Sum-of-Products (SOP) Form


A product term is a term consisting of the product (or Boolean multiplication) of literals (i.e. variables or their complements).
When we add two or more product terms, the resulting expression is called a sum-of-products (SOP) expression. Some examples of sum-of-products expressions are

AB′ + A′BC,   A′BC + ACD + AB′CD,   A′B + AC′ + AB′C′

Sometimes it is convenient to define the set of variables contained in the expression (in either complemented or uncomplemented form) as its domain.
For example, the domain of the expression AB′ + AB is the set of variables A and B. Similarly, the domain of the expression A′BC + ABC + AB′CD is the set of variables A, B, C and D.
Any logic expression can be changed into SOP form by applying Boolean algebra laws and
theorems. For example, the expression A (BC + D) can be converted to SOP form by applying the
distributive law:
A (BC + D) = ABC + AD

The Standard SOP Form


So far, we have seen SOP (sum-of-products) expressions in which some of the product terms do
not contain all the variables in the domain of the expression.

For example, the expression A′BC + AC′D + AB′CD has a domain made up of the variables A, B, C and D. But notice that the first two terms contain only three variables, i.e. D or D′ is missing from the first term and B or B′ is missing from the second term.

A standard SOP expression is defined as an expression in which all the variables in the domain
appear in each product term.

For example, A′BCD + ABC′D + AB′CD is a standard SOP expression.


Standard SOP expressions are important in constructing truth tables and in the Karnaugh map simplification method.

It is very straightforward to convert a nonstandard product term to standard form using Boolean algebra. Each product term in the SOP expression that does not contain all the variables in the domain has to be expanded to standard form to include all the variables in the domain and their complements.

As stated below, a nonstandard SOP expression is converted into standard form using Boolean algebra Law 4: A + A′ = 1, i.e. a variable added to its complement equals 1.

Step 1. Multiply each nonstandard product term by a term made up of the sum of a
missing variable and its complement. This results in two product terms. It is possible
because we know that we can multiply anything by 1 without changing its value.
Step 2. Repeat step 1 until all resulting product terms contain all variables in the domain
in either complemented or uncomplemented form. Note that in converting a product
term to a standard form, the number of product terms is doubled for each missing
variable.

For example, suppose we want to convert the Boolean expression A′BC + AC′D + AB′CD to standard SOP form. Following the above procedure, we proceed as below:

A′BC + AC′D + AB′CD
= A′BC(D + D′) + AC′D(B + B′) + AB′CD
= A′BCD + A′BCD′ + ABC′D + AB′C′D + AB′CD

The expression obtained above is a standard SOP expression.
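The two-step expansion can be mechanised. The sketch below represents a product term as a mapping from variable name to 0 (complemented) or 1 (uncomplemented), and doubles the term count for each missing variable as Step 2 notes; applied to the example expression it yields five standard product terms (2 + 2 + 1). The function name `expand_sop` is illustrative:

```python
def expand_sop(terms, domain):
    """Expand SOP product terms to standard form over the given domain.

    A product term is a dict mapping a variable name to 1 (uncomplemented)
    or 0 (complemented); e.g. {"A": 0, "B": 1, "C": 1} stands for A'BC.
    """
    expanded = [dict(t) for t in terms]
    for var in domain:
        step = []
        for t in expanded:
            if var in t:
                step.append(t)
            else:
                # multiply by (var + var'): the term count doubles
                step.append({**t, var: 1})
                step.append({**t, var: 0})
        expanded = step
    return expanded

# A'BC + AC'D + AB'CD over the domain A, B, C, D
terms = [{"A": 0, "B": 1, "C": 1},
         {"A": 1, "C": 0, "D": 1},
         {"A": 1, "B": 0, "C": 1, "D": 1}]
print(len(expand_sop(terms, "ABCD")))  # 2 + 2 + 1 = 5 standard product terms
```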

The Product-of-sums (POS) Form


A sum term is a term consisting of the sum (or Boolean addition) of literals (i.e. variables or their complements).
When we multiply two or more sum terms, the resulting expression is called a product-of-sums (POS) expression. Some examples of POS form are

(A + B′)(A′ + B + C),   (A′ + B + C′)(A′ + C′ + D′)(A + B′ + C + D)   and   (A′ + B′)(A + C)(A′ + B′ + C).

It may be carefully noted that a POS expression can contain a single-variable term, as in A′(A + B′ + C)(B + C′ + D′).
In a POS expression, a complement cannot apply to more than one variable at a time, although more than one variable in a term can be complemented.

The Standard POS Form


So far, we have seen POS (product-of-sums) expressions in which some of the sum terms do not contain all the variables in the domain of the expression.

For example, the expression (A′ + B + C)(A′ + C′ + D)(A + B′ + C + D) has a domain made up of the variables A, B, C and D. Notice that the complete set of variables in the domain is not represented in the first two terms of the expression, i.e. D or D′ is missing in the first term and B or B′ is missing in the second term.

A standard POS expression is defined as an expression in which all the variables in the domain
appear in each sum term.

For example, (A′ + B + C + D)(A′ + B + C′ + D)(A + B′ + C + D) is a standard POS form.


Any nonstandard POS expression can be converted to standard form using Boolean algebra.

Each sum term in a POS expression that does not contain all the variables in the domain can be expanded to standard form to include all variables in the domain and their complements.
As stated below, a nonstandard POS expression is converted into standard form using Boolean algebra Law 8: A . A′ = 0, i.e. a variable multiplied by its complement equals 0.

Step 1. Add to each nonstandard sum term a term made up of the product of a missing variable and its complement. This results in two sum terms. This is possible because we know that we can add 0 to anything without changing its value.
Step 2. Apply Law 20, i.e. A + BC = (A + B)(A + C).
Step 3. Repeat Step 1 until all resulting sum terms contain all variables in the domain in either complemented or uncomplemented form.

For example, suppose we want to convert the Boolean expression (A′ + B + C)(A′ + C′ + D)(A + B′ + C + D) into standard POS form. Following the above procedure, we proceed as below:

A′ + B + C = A′ + B + C + DD′ = (A′ + B + C + D)(A′ + B + C + D′)
A′ + C′ + D = A′ + C′ + D + BB′ = (A′ + B + C′ + D)(A′ + B′ + C′ + D)

so the standard POS expression is

(A′ + B + C + D)(A′ + B + C + D′)(A′ + B + C′ + D)(A′ + B′ + C′ + D)(A + B′ + C + D)

The expression obtained above is a standard POS expression.

An expanded form of Boolean expression, where each term contains all Boolean variables in their
true or complemented form, is also known as the canonical form of the expression.
The Karnaugh Map
The Karnaugh map (or simply a K-map) is similar to a truth table because it presents all the
possible values of input variables and the resulting output for each value.
However, instead of being organised into columns and rows like a truth table, the Karnaugh map
is an array of squares (or cells) in which each square represents a binary value of the input
variables.
The squares are arranged in such a way that simplification of a given expression is simply a matter of grouping the squares.
Karnaugh maps can be used for expressions with two, three or four variables.

The number of squares in a Karnaugh map is equal to the total number of possible input variable
combinations (as is the number of rows in a truth table).
For two variables, the number of squares is 2² = 4; for three variables, the number of squares is 2³ = 8; and for four variables, the number of squares is 2⁴ = 16.

The Two-variable Karnaugh Map


Fig. 1(a) shows a two-variable Karnaugh map. As seen, it is an array of four squares.
In this case, A and B are used for the two variables, although any other two letters could be used. The binary values of A (i.e. 0 and 1) are indicated along the left side as A′ and A (notice the sequence) and the binary values of B are indicated across the top as B′ and B.
The value of a given square is the value of A at the left in the same row combined with the value of B at the top in the same column.

For example, a square in the upper left corner has a value of A′B′ and a square in the lower right corner has a value of AB. Fig. 1(b) shows the standard product terms represented by each square in the Karnaugh map.

Fig. 1

The Three-variable Karnaugh Map


Fig. 2(a) shows a three-variable Karnaugh map. As seen, it is an array of eight squares.
In this case, A, B and C are used for the variables, although any other three letters could be used. The values of A and B are along the left side (notice the sequence carefully) and the values of C are across the top.

The value of a given square is the values of A and B at the left in the same row combined with the value of C at the top in the same column.

For example, a square in the upper left corner has a value of A′B′C′ and a square in the bottom right corner has a value of AB′C. Fig. 2(b) shows the product terms that are represented by each square in the Karnaugh map.
Fig. 2

The Four-variable Karnaugh Map


Fig. 3 (a) shows a four-variable Karnaugh map. As seen, it is an array of sixteen squares. In
this case A, B, C and D are used for the variables. The values of A and B are along the left
side, and the values of C and D are across the top. The sequence of the variable values may be
noted carefully.

Fig. 3

The value of a given square is the values of A and B at the left in the same row combined with the values of C and D at the top in the same column.
For example, a square in the upper right corner has a value of A′B′CD′ and a square in the lower right corner has a value of AB′CD′. Fig. 3(b) shows the standard product terms that are represented by each square in the four-variable Karnaugh map.

Square Adjacency in Karnaugh Map


It will be interesting to know that the squares in a Karnaugh map are arranged in such a way that
there is only a single-variable change between adjacent squares.
Adjacency is defined as a single-variable change. It means the squares that differ by only one
variable are adjacent.
For example, in the three-variable Karnaugh map shown in Fig. 2(b), the A′B′C′ square is adjacent to the A′B′C square, the A′BC′ square and the AB′C′ square.
It may be carefully noted that squares with values that differ by more than one variable are not adjacent.
For example, the A′B′C′ square is not adjacent to the A′BC square, the ABC square, the ABC′ square or the AB′C square. In other words, each square is adjacent to the squares that are immediately next to it on any of its four sides.
However, a square is not adjacent to the squares that diagonally touch any of its corners.

It may also be noted that squares in the top row are adjacent to the corresponding squares in the
bottom row and squares in the outer left column are adjacent to the corresponding squares in the
outer right column.
This is called “wrap around” adjacency because we can think of the map as wrapping around from
top to bottom to form a cylinder or from left to right to form a cylinder.
Fig. 4 (a) and (b) shows the square adjacencies with a three-variable and a four-variable Karnaugh
maps respectively. Notice the square adjacencies in a four variable Karnaugh map:

Fig. 4

Here, for example, the A′B′C′D′ square is adjacent to the A′B′C′D square, the A′BC′D′ square, the AB′C′D′ square and the A′B′CD′ square. Similarly, the A′BC′D square is adjacent to the A′B′C′D square, the A′BC′D′ square, the A′BCD square and the ABC′D square.
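Adjacency, including wrap-around, has a compact arithmetic characterisation: two squares are adjacent exactly when their minterm numbers differ in a single binary digit. A sketch (the helper name is illustrative):

```python
def adjacent(m1, m2):
    """True when two K-map squares, given as minterm numbers, are adjacent,
    i.e. their binary codes differ in exactly one bit (wrap-around included)."""
    diff = m1 ^ m2
    return diff != 0 and diff & (diff - 1) == 0  # exactly one bit set

# Three-variable map: A'B'C' (0) is adjacent to A'B'C (1), A'BC' (2) and AB'C' (4)
print([m for m in range(8) if adjacent(0, m)])  # [1, 2, 4]
# Four-variable map: A'B'C'D' (0) wraps around to AB'C'D' (8)
print(adjacent(0, 8))  # True
```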

Mapping a Standard SOP Expression on the Karnaugh Map

Consider the SOP (sum-of-products) expression A′B′C′ + A′BC′ + AB′C′ + AB′C.


In order to map this expression on the Karnaugh map, we need a three-variable Karnaugh map because the given expression has the three variables A, B and C. Select the first product term and enter a 1 in the corresponding square (i.e. the first row and the first column) as shown in Fig. 5.

Fig. 5

Similarly, for the second product term place a 1 in the second row and first column. Repeat this
process for the other two product terms. The squares that do not have 1 are the squares for which
the expression is 0. Usually when working with sum-of-products expressions, the 0s are left off the
map.
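Mapping can be sketched in code: given the set of minterm numbers, render a three-variable map with rows AB in the Gray-code order 00, 01, 11, 10 and columns C = 0, 1. The minterms {0, 2, 4, 5} used below correspond to the four-term example above under the reconstructed complement placement (an assumption, since the overbars were lost; names are illustrative):

```python
def kmap3(minterms):
    """Render a three-variable K-map as text rows.

    Rows are AB in Gray-code order 00, 01, 11, 10; columns are C = 0, 1.
    """
    lines = ["AB\\C  0  1"]
    for ab in ("00", "01", "11", "10"):
        # each cell's minterm number is the binary value ABC
        cells = ["1" if int(ab + c, 2) in minterms else "0" for c in "01"]
        lines.append(" " + ab + "   " + "  ".join(cells))
    return lines

# Minterms 0, 2, 4 and 5 from the example above
for line in kmap3({0, 2, 4, 5}):
    print(line)
```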

Example. Map the following standard sum-of-products (SOP) expression on a Karnaugh map:

𝐴 B C 𝐷 + 𝐴𝐵𝐶𝐷 + AB 𝐶 𝐷+ 𝐴 𝐵 𝐶 𝐷+ AB 𝐶 D+ 𝐴 𝐵 𝐶 D+𝐴𝐵 C 𝐷+ 𝐴𝐵𝐶𝐷

Solution. Sketch a four-variable Karnaugh map. Select the first product term A′BCD′ from the given SOP expression and enter a 1 in the corresponding square, as shown in Fig. 7. Similarly, enter a 1 for the other product terms in the given SOP expression. Check that the number of 1s in the Karnaugh map equals the number of product terms in the given SOP expression.

Fig. 7

Mapping a Nonstandard SOP Expression on the Karnaugh Map


A nonstandard sum-of-products (SOP) expression is one that has product terms with one or more missing variables.
In such a case, the Boolean expression must first be converted to standard form. Let us consider an example to illustrate the procedure for mapping a nonstandard SOP expression on the Karnaugh map. Suppose we have the SOP expression

A′ + AB′ + ABC′

As seen, this expression is obviously not in standard form because each product term does not have three variables. The first term is missing two variables, the second term is missing one variable and the third term is standard. In order to convert the given nonstandard SOP expression to standard form, we multiply the first product term by (B + B′) and (C + C′), and the second term by (C + C′).
Expanding the resulting expression, we get

A′(B + B′)(C + C′) + AB′(C + C′) + ABC′
= A′B′C′ + A′B′C + A′BC′ + A′BC + AB′C′ + AB′C + ABC′

Rearranging the expression for our convenience, we get

A′B′C′ + A′B′C + A′BC′ + A′BC + ABC′ + AB′C′ + AB′C

This expression can be mapped on a three-variable Karnaugh map as shown in Fig. 8.

Simplification of Boolean Expression Using Karnaugh Map


The process that results in an expression containing the minimum number of possible terms with
the minimum number of variables is called simplification or minimization.
After the SOP (or the Boolean) expression has been mapped on the Karnaugh map, there are three
steps in the process of obtaining a minimum SOP expression.
The three steps are:
a. grouping the 1s,
b. determining the product term for each group and
c. summing the resulting product terms.

(a) Grouping the 1s : We can group the 1s on the Karnaugh map according to the following
rules by enclosing those adjacent squares containing 1s.
The objective is to maximize the size of the groups and to minimize the number of groups.

1. A group must contain either 1, 2, 4, 8 or 16 squares.


In the case of a two-variable Karnaugh map, 4 squares is the maximum group; for a three-variable map, 8 squares is the maximum group, and so on.
2. Each square in the group must be adjacent to one or more squares in that same
group but all squares in the same group do not have to be adjacent to each
other.
3. Always include the largest possible number of 1s in a group in accordance
with rule 1.
4. Each 1 on the Karnaugh map must be included in at least one group. The 1s
already in a group can be included in another group as long as the overlapping
groups include non-common 1s.

(b) Determining the Product Term for each group: Following are the rules that are applied to find the minimum product terms and the minimum sum-of-products expression:

1. Group the squares that have 1s : Each group of squares containing 1s creates one
product term composed of all variables that occur in only one form (either
uncomplemented or complemented) within the group. Variables that occur both
uncomplemented and complemented within the group are eliminated. These are
known as contradictory variables.
2. In order to determine the minimum product term for each group, we need to look at
the standard methodology for three-variable and four-variable Karnaugh map
respectively.
(i) For a three-variable K-map: (1) for a 1-square group, we get a three-variable product term; (2) for a 2-square group, we get a two-variable product term; (3) for a 4-square group, we get a one-variable product term.
(ii) For a four-variable K-map: (1) for a 1-square group, we get a four-variable product term; (2) for a 2-square group, we get a three-variable product term; (3) for a 4-square group, we get a two-variable product term; and (4) for an 8-square group, we get a one-variable product term.

(c) Summing the resulting product terms: When all the minimum product terms are derived from the Karnaugh map, they are summed to form the minimum sum-of-products expression.

Note: In some cases, there may be more than one way to group the 1s to form the product terms. Whichever grouping is chosen, the minimal expression will have the same number of product terms, and each product term the same number of Boolean variables.
The example given below will help you to understand and apply the simplification of the
SOP expression using Karnaugh map.

Example. Simplify the following Boolean expression using the Karnaugh mapping
technique :

X = A̅B + A̅B̅C + AB̅C + ABC
Solution. The first step is to map the given Boolean expression on the Karnaugh map.
Notice that there are three variables A, B and C in the Boolean expression, therefore
we need a three-variable Karnaugh map.
Note that the given Boolean expression is a nonstandard SOP expression, because the first product term A̅B has the variable C missing. It can be converted into standard SOP form by expanding that term against the missing variable:

A̅B = A̅B(C + C̅) = A̅BC + A̅BC̅

so that X = A̅BC + A̅BC̅ + A̅B̅C + AB̅C + ABC.

Equation above can be mapped on the Karnaugh map as shown in Fig. 9.

Fig. 9

X = A̅B + C
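The simplification can be checked mechanically: two Boolean expressions are equal exactly when they agree on every row of the truth table. A minimal Python sketch, using the reconstructed expression above (with `not` standing for complement):

```python
from itertools import product

# X = A'B + A'B'C + AB'C + ABC  (original SOP, as reconstructed above)
def original(a, b, c):
    return (not a and b) or (not a and not b and c) \
        or (a and not b and c) or (a and b and c)

# X = A'B + C  (minimal SOP read off the Karnaugh map)
def simplified(a, b, c):
    return (not a and b) or c

# The two expressions agree on all 8 rows of the truth table.
for a, b, c in product((0, 1), repeat=3):
    assert bool(original(a, b, c)) == bool(simplified(a, b, c))
```

The same brute-force check works for any K-map result, since an n-variable map has only 2ⁿ rows to compare.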

Example. Fig. 10 shows a Karnaugh map of a sum-of-products (SOP) function.


Determine the simplified SOP function.

Fig 10 Fig 11
X=𝐵𝐷+𝐵𝐶+𝐴𝐶D

Combinational Logic Circuit


It is a circuit built from various combinations of logic gates.
The circuit has a set of inputs, a memoryless logic network that operates on the inputs, and a set of outputs, as shown in Fig. 1 (a).
The output of a combinational logic circuit depends solely on the present input values and not on previous ones. Combinational networks are used to make logical decisions and to control the operation of different circuits in digital electronic systems. For a given set of input conditions, the output of such a circuit is always the same; consequently, a truth table can fully describe its operation. Examples of such circuits are decoders, adders, multiplexers and demultiplexers.

Fig. 1
Adders
The logical gates discussed so far can be used for performing arithmetical functions like addition,
subtraction, multiplication and division in electronic calculators and digital instruments. In the
central processing unit (CPU) of a computer, these arithmetic functions are carried out by the
arithmetic and logic unit (ALU). The logic functions used generally are XOR, OR and AND. We
will consider the following:
1. Half Adder—It carries out binary addition with the help of XOR and AND gates. It has two inputs and two outputs. It can add two binary digits at a time and produces a 2-bit result, i.e. a sum and a carry, according to the binary addition rules.
2. Full Adder—It has three inputs and can add three bits at a time. It is made up of two half adders and one OR gate.

Half-Adder
A half-adder is an arithmetic circuit block that can be used to add two bits.
Such a circuit thus has two inputs that represent the two bits to be added and two outputs,
with one producing the SUM output and the other producing the CARRY.
Figure below shows the truth table of a half-adder, showing all possible input combinations
and the corresponding outputs.

The Boolean expressions for the SUM and CARRY outputs are given by the equations

S = A ⊕ B = A̅B + AB̅  and  C = A · B

An examination of the two expressions shows that there is no scope for further simplification: the first, representing the SUM output, is that of an EX-OR gate, while the second, representing the CARRY output, is that of an AND gate.
However, these two expressions can certainly be represented in different forms using the various laws and theorems of Boolean algebra, which illustrates the flexibility the designer has in hardware-implementing even as simple a combinational function as a half-adder.
The simplest way to hardware-implement a half-adder is to use a two-input EX-OR gate for the SUM output and a two-input AND gate for the CARRY output, as shown below.
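The gate-level behaviour above can be sketched in a few lines of Python (`^` is XOR, `&` is AND):

```python
def half_adder(a, b):
    """Add two bits: SUM = A XOR B, CARRY = A AND B."""
    return a ^ b, a & b  # (sum, carry)

# All four rows of the half-adder truth table.
assert half_adder(0, 0) == (0, 0)
assert half_adder(0, 1) == (1, 0)
assert half_adder(1, 0) == (1, 0)
assert half_adder(1, 1) == (0, 1)  # 1 + 1 = 10 in binary
```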

Full Adder
A full adder circuit is an arithmetic circuit block that can be used to add three bits to produce a
SUM and a CARRY output.
Such a building block becomes a necessity when it comes to adding binary numbers with a large
number of bits.
The full adder circuit overcomes the limitation of the half-adder, which can be used to add two bits
only.

Figure above shows the truth table of a full adder circuit showing all possible input combinations
and corresponding outputs.
In order to arrive at the logic circuit for hardware implementation of a full adder, we will firstly
write the Boolean expressions for the two output variables, that is, the SUM and CARRY outputs,
in terms of input variables.
These expressions are then simplified by using any of the simplification techniques described
previously.
The Boolean expressions for the two output variables are given below for the SUM output (S) and for the CARRY output (Cout):

S = A̅·B̅·Cin + A̅·B·C̅in + A·B̅·C̅in + A·B·Cin
Cout = A̅·B·Cin + A·B̅·Cin + A·B·C̅in + A·B·Cin

The next step is to simplify the two expressions. We will do so with the help of the Karnaugh
mapping technique.
As is clear from the two maps, the expression for the SUM (S) output cannot be simplified any further, whereas the simplified Boolean expression for Cout is given by the equation

Cout = A·B + A·Cin + B·Cin
Figure above shows the logic circuit diagram of the full adder.
A full adder can also be seen to comprise two half-adders and an OR gate. The expression for the SUM output can be rewritten as follows:

S = (A ⊕ B) ⊕ Cin

Similarly, the expression for the CARRY output can be rewritten as follows:

Cout = A·B + Cin·(A ⊕ B)



Logic implementation of a full adder with half-adders.
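The half-adder decomposition above translates directly into Python: two half-adders in series, with an OR gate merging the two intermediate carries.

```python
def half_adder(a, b):
    return a ^ b, a & b  # (sum, carry)

def full_adder(a, b, cin):
    """Full adder built from two half-adders and an OR gate."""
    s1, c1 = half_adder(a, b)     # first half-adder adds A and B
    s, c2 = half_adder(s1, cin)   # second adds the incoming carry
    return s, c1 | c2             # the OR gate forms Cout

# Check against ordinary integer addition for every input combination.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert a + b + cin == 2 * cout + s
```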

Half-Subtractor
A half-subtractor is a combinational circuit that can be used to subtract one binary digit from
another to produce a DIFFERENCE output and a BORROW output. The truth table of a half-subtractor is shown in Figure below.

The Boolean expressions for the two outputs are given by the equations

D = A ⊕ B = A̅B + AB̅  and  Bo = A̅·B

It is obvious that there is no further scope for simplification of these expressions.
While the expression for the DIFFERENCE (D) output is that of an EX-OR gate, the expression
for the BORROW output (Bo) is that of an AND gate with input A complemented before it is fed
to the gate.
Figure below shows the logic implementation of a half-subtractor.

Full Subtractor
A full subtractor performs subtraction operation on two bits, a minuend and a subtrahend, and also
takes into consideration whether a ‘1’ has already been borrowed by the previous adjacent lower
minuend bit or not.
As a result, there are three bits to be handled at the input of a full-subtractor, namely the two bits to
be subtracted and a borrow bit designated as Bin.
There are two outputs, namely the DIFFERENCE output D and the BORROW output Bo. The
BORROW output bit tells whether the minuend bit needs to borrow a ‘1’ from the next possible
higher minuend bit.
Figure below shows the truth table of a full subtractor.

The Boolean expressions for the two output variables are given by the equations

D = A̅·B̅·Bin + A̅·B·B̅in + A·B̅·B̅in + A·B·Bin
Bo = A̅·B̅·Bin + A̅·B·B̅in + A̅·B·Bin + A·B·Bin
The Karnaugh maps for the two expressions are given in Figure below for DIFFERENCE output D
and for BORROW output Bo.
As is clear from the two Karnaugh maps, no simplification is possible for the difference output D.
The simplified expression for Bo is given by the equation

Bo = A̅·B + A̅·Bin + B·Bin

It can be shown that a full subtractor can be implemented with half-subtractors in the same way as
a full adder was constructed using half-adders.
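That construction (two half-subtractors plus an OR gate, the standard arrangement) can be modelled as a short Python sketch:

```python
def half_subtractor(a, b):
    """A minus B: DIFFERENCE = A XOR B, BORROW = A'B."""
    return a ^ b, (a ^ 1) & b

def full_subtractor(a, b, bin_):
    """Subtract b and an incoming borrow bin_ from minuend a."""
    d1, b1 = half_subtractor(a, b)
    d, b2 = half_subtractor(d1, bin_)
    return d, b1 | b2  # (DIFFERENCE, BORROW out)

# Check: a - b - bin equals D - 2*Bo for every input combination.
for a in (0, 1):
    for b in (0, 1):
        for bi in (0, 1):
            d, bo = full_subtractor(a, b, bi)
            assert a - b - bi == d - 2 * bo
```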
A Multiplexer (MUX)
A MUX is a digital switch that has multiple inputs (sources) and a single output
(destination). The select lines determine which input is connected to the output.
MUX Types
2-to-1 (1 select line)
4-to-1 (2 select lines)
8-to-1 (3 select lines)
16-to-1 (4 select lines)

Consider an integer m constrained by the relation m = 2ⁿ, where m and n are both integers.

An m-to-1 multiplexer has

 m inputs: I0, I1, I2, …, I(m−1)

 one output: Z

 n control inputs: S0, S1, S2, …, S(n−1)

 one (or more) enable input(s)
such that Z may be equal to one of the inputs, depending upon the control inputs.
Two-input multiplexer

Four-input multiplexer
A 4-to-1 multiplexer can be represented by the Boolean function

Z = S̅1·S̅0·I0 + S̅1·S0·I1 + S1·S̅0·I2 + S1·S0·I3

Logic symbol for a 1-of-4 data selector/multiplexer

Implementing Boolean Functions with Multiplexers


One of the most common applications of a multiplexer is the implementation of combinational logic Boolean functions. The simplest technique is to employ a 2ⁿ-to-1 MUX to implement an n-variable Boolean function. The input lines corresponding to each of the minterms present in the Boolean function are tied to the logic '1' state, while the input lines for the minterms absent from the Boolean function are tied to logic '0'.

Example. Implement the Boolean function given by the equation

In terms of variables A, B and C, equation above can be written as follows:

As shown in Fig. below, the input lines corresponding to the three minterms present in the given
Boolean function are tied to logic ‘1’. The remaining five possible minterms absent in the Boolean
function are tied to logic ‘0’.
It is a three-variable Boolean function. Conventionally, we will need to use an 8-to-1 multiplexer
to implement this function.
The truth table of the given Boolean function is shown in Table below
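The technique can be sketched in Python; since the original equation is not reproduced here, the minterm set below (minterms 1, 3 and 5) is purely illustrative. An 8-to-1 MUX is simply a table lookup indexed by the select lines:

```python
def mux8(data, a, b, c):
    """8-to-1 MUX: select lines (A, B, C) pick one of eight data inputs."""
    return data[4 * a + 2 * b + c]

# Hypothetical function with minterms 1, 3 and 5: tie those data
# lines to logic 1 and the remaining five lines to logic 0.
data = [0, 1, 0, 1, 0, 1, 0, 0]

# The MUX output then reproduces the function's truth table.
assert mux8(data, 0, 0, 1) == 1   # minterm 1 present
assert mux8(data, 1, 0, 1) == 1   # minterm 5 present
assert mux8(data, 1, 1, 1) == 0   # minterm 7 absent
```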

d) Demultiplexer:
The demultiplexer is the inverse of the multiplexer: it takes one data input and n address inputs, and it has 2ⁿ outputs. The address inputs determine which output takes the value of the data input; the other outputs have the value 0.

A demultiplexer has
 n control inputs
 1 data input
 2ⁿ outputs
A demultiplexer routes (or connects) the data input to the selected output; the value of the control inputs determines which output is selected. A demultiplexer performs the opposite function of a multiplexer.
DEMUX Types
 1-to-2 (1 select line)
 1-to-4 (2 select lines)
 1-to-8 (3 select lines)
 1-to-16 (4 select lines)
Note that the data input X is held HIGH (1).
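With the data input held HIGH, a 1-to-4 DEMUX behaves like a decoder; a minimal Python sketch:

```python
def demux4(data, s1, s0):
    """1-to-4 DEMUX: route the data input to the selected output;
    all other outputs stay at 0."""
    outputs = [0, 0, 0, 0]
    outputs[2 * s1 + s0] = data
    return outputs

assert demux4(1, 0, 0) == [1, 0, 0, 0]
assert demux4(1, 1, 0) == [0, 0, 1, 0]
assert demux4(0, 1, 1) == [0, 0, 0, 0]  # data input LOW: all outputs 0
```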
Decoder

The basic function of a decoder is to detect the presence of a specified combination of bits on its
inputs and to indicate that presence by a specified output level.

 It consists of:

 Inputs (n)

 Outputs (2ⁿ, numbered from 0 to 2ⁿ − 1)

 Selectors / Enable (active high or active low)


The truth table of 2-to-4 Decoder

2-to-4 Decoder
The truth table of 3-to-8 Decoder

A2 A1 A0 | D0 D1 D2 D3 D4 D5 D6 D7
 0  0  0 |  1  0  0  0  0  0  0  0
 0  0  1 |  0  1  0  0  0  0  0  0
 0  1  0 |  0  0  1  0  0  0  0  0
 0  1  1 |  0  0  0  1  0  0  0  0
 1  0  0 |  0  0  0  0  1  0  0  0
 1  0  1 |  0  0  0  0  0  1  0  0
 1  1  0 |  0  0  0  0  0  0  1  0
 1  1  1 |  0  0  0  0  0  0  0  1
3-to-8 Decoder with Enable

If there are some unused or 'don't care' combinations in the n-bit code, then there will be fewer than 2ⁿ output lines. As an illustration, a decoder with three input lines can have a maximum of eight unique output lines. If, in the three-bit input code, the only used combinations are 000, 001, 010, 100, 110 and 111 (011 and 101 being either unused or don't care combinations), then this decoder will have only six output lines. In general, if n and m are respectively the numbers of input and output lines, then m ≤ 2ⁿ.
A decoder can generate a maximum of 2ⁿ possible minterms with an n-bit binary code.
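The behaviour of an active-high 3-to-8 decoder with enable can be sketched as:

```python
def decoder3to8(a2, a1, a0, enable=1):
    """Active-high 3-to-8 decoder: when enabled, exactly one of the
    eight outputs (the one addressed by A2 A1 A0) goes HIGH."""
    outputs = [0] * 8
    if enable:
        outputs[4 * a2 + 2 * a1 + a0] = 1
    return outputs

assert decoder3to8(1, 0, 1) == [0, 0, 0, 0, 0, 1, 0, 0]  # input 101 -> D5
assert decoder3to8(1, 0, 1, enable=0) == [0] * 8          # disabled
```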

Implementing Boolean Functions with Decoders

Since a decoder generates every minterm of its inputs, a given Boolean function can be implemented by ORing together the decoder outputs that correspond to the minterms present in the function.
Encoder
An encoder is a combinational circuit that performs the reverse function of a decoder.
Encoder inputs are decimal digits and/or alphabetic characters, and its outputs are the coded representations of these inputs.
 Performs the inverse operation of a decoder

 Inputs (2ⁿ or fewer)

 Outputs (n output lines)

4 to 2 Encoder

Let 4 to 2 Encoder has four inputs Y3, Y2, Y1 & Y0 and two outputs A1 & A0. The block diagram
of 4 to 2 Encoder is shown in the following figure.

At any time, only one of these 4 inputs can be ‘1’ in order to get the respective binary code at the
output. The Truth table of 4 to 2 encoder is shown below.
Inputs Outputs

Y3 Y2 Y1 Y0 A1 A0

0 0 0 1 0 0

0 0 1 0 0 1

0 1 0 0 1 0

1 0 0 0 1 1

From Truth table, we can write the Boolean functions for each output as
A1=Y3+Y2
A0=Y3+Y1
We can implement the above two Boolean functions by using two input OR gates. The circuit
diagram of 4 to 2 encoder is shown in the following figure.

The above circuit diagram contains two OR gates. These OR gates encode the four inputs with two
bits
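The two OR-gate equations translate directly into a Python sketch:

```python
def encoder4to2(y3, y2, y1, y0):
    """4-to-2 encoder (exactly one input is assumed HIGH):
    A1 = Y3 + Y2,  A0 = Y3 + Y1."""
    return y3 | y2, y3 | y1  # (A1, A0)

# The four rows of the truth table.
assert encoder4to2(0, 0, 0, 1) == (0, 0)
assert encoder4to2(0, 0, 1, 0) == (0, 1)
assert encoder4to2(0, 1, 0, 0) == (1, 0)
assert encoder4to2(1, 0, 0, 0) == (1, 1)
```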
SEQUENTIAL LOGIC CIRCUITS
Introduction
So far, we have dealt with combinational logic circuits, whose outputs are functions of the current
inputs only.
Here we will deal with sequential logic circuits.
A sequential logic circuit is a circuit whose present outputs are functions of the present inputs, as
well as previous inputs.
It has in it a unit called the memory which stores the effect of the previous sequence of inputs.
The stored effects of the previous inputs are called the STATE of the circuit. A sequential circuit
can be represented by the block diagram on Figure 1.

Figure 1: Block diagram of a sequential logic circuit


Flip-flops are the basic building blocks for sequential logic circuits.

Flip-Flops
A flip-flop is a logic circuit that is capable of storing one bit of information (0 or 1).
It stores the one bit of information as long as power is supplied to the circuit. It is the simplest
memory element.
It is also referred to as a bistable multivibrator.
The block diagram of a flip-flop is shown on Figure 2.

A flip-flop can have one or more inputs, but it has only two outputs: the normal output Q and the inverted (or complemented) output Q̅.

Under normal operating conditions, Q and Q̅ are always complements of each other, i.e. either Q = 0 and Q̅ = 1, or Q = 1 and Q̅ = 0.
When we refer to the STATE of a flip-flop, we are referring to the state of its normal (Q) output, i.e. if we say that the state of a flip-flop is 1, we mean Q = 1.

The SET-RESET (SR) flip-flop
This is illustrated on Figure below.

Figure: SET-RESET (SR) flip-flop


The truth-table for this flip-flop is shown below:

When we have SET = 0 and RESET = 1, the flip-flop is RESET (Q = 0), and when SET = 1 and RESET = 0, the flip-flop is SET (Q = 1). Having SET = 1 and RESET = 1 is equivalent to setting and resetting the flip-flop at the same time; this mode is disallowed because it causes Q = Q̅ = 1.

The D Flip-Flop
The D-type flip-flop is illustrated in Figure below.
Figure : D flip-flop

It is a modified version of the SET-RESET flip-flop in which the RESET input is always the complement of the SET input.


It has only one input, D.
The truth-table for this flip-flop is shown below:

From the truth-table, we can see that Q transparently follows the input D, and for this reason, the D
flip-flop is sometimes referred to as a transparent latch.

The T flip-flop
A T-flip-flop (also known as a toggle flip-flop) is illustrated on Figure below.

Figure: T flip-flop
The corresponding truth-table is shown below:

The JK flip-flop
The JK flip-flop is shown on Figure below.
Figure: JK flip-flop
Its truth-table is shown below:

This truth-table can be summarized as shown below:

From the truth-table above, we can see that the JK flip-flop does not have invalid inputs and it can
complement. These properties make the JK flip-flop very versatile, and it is used in many
sequential logic circuits.
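The four JK modes (hold, reset, set, toggle) can be summarized in a short Python sketch of the next-state function:

```python
def jk_next(q, j, k):
    """State of a JK flip-flop after a clock edge, given present state q."""
    if (j, k) == (0, 0):
        return q        # hold the present state
    if (j, k) == (0, 1):
        return 0        # reset
    if (j, k) == (1, 0):
        return 1        # set
    return 1 - q        # J = K = 1: toggle (no invalid input combination)

q = 0
q = jk_next(q, 1, 0)   # set    -> Q = 1
q = jk_next(q, 0, 0)   # hold   -> Q = 1
q = jk_next(q, 1, 1)   # toggle -> Q = 0
assert q == 0
```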

Types of clocked flip-flops


There are three types of clocked flip-flops:
• Level Driven flip-flops
• Master-Slave flip-flops
• Edge-triggered flip-flops
The first two types are no longer used in modern digital systems, but are mentioned here for
completeness.

Edge triggered flip-flops


Edge triggered flip-flops are those that only change state during the clock transitions (HIGH to
LOW or LOW to HIGH).
The flip-flops that change state during the HIGH to LOW transitions of the clock (negative going
transitions) are known as negative edge triggered flip-flops, while those that change state at the
LOW to HIGH transitions of the clock are known as positive edge triggered flip-flops.
Note that there are no flip-flops that trigger on both the positive and the negative going transitions
of the clock.
Figure below shows the logic symbol for a positive edge triggered JK flip-flop.
The corresponding truth-table is shown below:

Figure: Symbol of a positive-edge-triggered JK flip-flop

The interpretation of the truth-table is as follows:


If the inputs of the flip-flop are J = 0, K = 0 and a Positive Going Transition (PGT) of the clock
occurs, the flip-flop will remain in its present state (Qn), while if the inputs are J = 0, K = 1,
then a PGT of the clock will make the output to be LOW (0) e.t.c.
Figure below shows the logic symbol for a negative edge triggered JK flip-flop.

Figure: Symbol of a negative-edge-triggered JK flip-flop

The corresponding truth-table is shown below:


The interpretation of the truth-table is similar to that of the positive edge triggered flip-flop, the only
difference is that changes in the state occur only on the negative going transitions of the clock.

Exercise
Figure below shows a clock waveform and inputs J and K applied to a positive-edge triggered JK
flip-flop. Using the truth-table for a positive edge triggered JK flip-flop, explain what happens
at
t0, t1, t2, t3, t4 and t5 to justify the output waveform shown for Q. (It is assumed that the initial state of Q is HIGH.)
Flip-flop excitation tables

Counters
A counter is a circuit made up of a series of flip-flops connected in a suitable manner to record
sequences of pulses presented to it in digital form. Counters are used in digital systems to:
1. Count events - convert a given number of event cycles into a prescribed binary code.
2. To time events - The duration of an event can be determined from the count and the
clock frequency.
3. To generate special code sequences - In this application, the counter is used as the
heart of a logic sequencer. Such a circuit generates the next-state information as well
as the control information.
4. Frequency division

Classification of counters
1. Mode of operation
 Single mode counter - Counts without external control inputs.
 Multi-mode counter - Counters with external control inputs that can be used to
alter the counting sequence, the beginning of a sequence or the end of a sequence.
2. Modulus of the counter
The modulus (MOD number) of a counter is the number of different logic states it goes through
before it comes back to the initial state to repeat the count sequence.
An n-bit counter that counts through all its natural states without skipping any has a modulus of 2ⁿ. Such counters have a modulus that is an integral power of 2, that is, 2, 4, 8, 16 and so on.
These can be modified with the help of additional combinational logic to get a modulus of less than 2ⁿ.
To determine the number of flip-flops required to build a counter of a given modulus, identify the smallest integer m that is equal to or greater than the desired modulus and is also an integral power of 2. For instance, if the desired modulus is 10, as in a decade counter, the smallest integer greater than or equal to 10 that is also an integral power of 2 is 16, so the number of flip-flops would be 4, as 16 = 2⁴. On the same lines, the number of flip-flops required to construct counters with MOD numbers of 3, 6, 14, 28 and 63 would be 2, 3, 4, 5 and 6 respectively. In general, the minimum number n of flip-flops needed to construct a counter of a given modulus satisfies

2ⁿ⁻¹ + 1 ≤ modulus ≤ 2ⁿ
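Equivalently, n is the smallest integer with 2ⁿ ≥ modulus, i.e. n = ⌈log₂(modulus)⌉, which is easy to check in Python:

```python
from math import ceil, log2

def flip_flops_needed(modulus):
    """Smallest n satisfying 2**(n-1) < modulus <= 2**n."""
    return ceil(log2(modulus))

# The worked examples from the text: MOD 3, 6, 10, 14, 28 and 63.
for m, n in [(3, 2), (6, 3), (10, 4), (14, 4), (28, 5), (63, 6)]:
    assert flip_flops_needed(m) == n
```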

A modulus N counter is also referred to as a MOD N counter or a DIVIDE by N counter.


Generally speaking, a counter that can sequence through N states is known as a MODULO N
counter or a DIVIDE by N counter.
If N is a power of 2 (2, 4, 8, 16 e.t.c.), the counter is called a binary counter.
If N is a power of 10 (10, 100, 1000 e.t.c.), the counter is known as a decimal or a decade counter.
3. Clocking method - There are two types of clocking for counters:
i. Asynchronous/ripple/serial counter - A counter in which each flip-flop's output serves as the clock input signal for the next flip-flop in the chain. Examples are shown on Figure below.

Asynchronous/ripple/serial counters
ii. Synchronous/parallel counter, - Counter in which all the flip-flops are clocked
simultaneously. This is illustrated on Figure below.

Figure: Synchronous counter


4. Code sequence generated - A counter may be a straight forward up or down counter
(a counter generating codes 00 → 01 →10 → 11 is an up counter, while one
generating codes 11 →10 → 01 → 00 is a down counter). Other counters include Gray
Code counters and the shift register counters (ring counter and Johnson counters).

Counters designed using a combination of flip-flops and gates Asynchronous


Counters
An asynchronous counter comprises a chain of flip-flops in which one flip-flop is under the
command of an external clock input.
The other flip-flops in the chain are indirectly controlled by the external clock.
The flip-flops are connected such that each flip-flop generates the clock input to the next flip-flop
in the chain.
An asynchronous counter is also referred to as a ripple counter. This is because the effect of
the external clock ripples through the chain of flip-flops. Example
Consider the circuit shown on Figure below.

Figure: A ripple counter

Assuming that we start with Q3 = Q2 = Q1 = 0, we can draw the waveforms shown


on Figure below.

The truth-table corresponding to these waveforms is shown below:


From the table above, we can see that the counter has eight distinct states and it is an up counter.
The counter is therefore a modulo-8 up asynchronous (ripple) counter.
Note that the frequency of the waveform for Q3 is 1/8 of the clock frequency hence this can also
be referred to as a divide by 8 counter.
Generally, for N negative-edge triggered JK flip-flops connected in the form shown on Figure above, the counter formed is a modulo-2ᴺ ripple up counter, also known as a divide-by-2ᴺ ripple up counter.
For any ripple counter, the output from the last flip-flop in the chain divides the input clock
frequency by the MOD number of the counter.
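The ripple action can be simulated: each stage toggles on the falling edge of the previous stage's output, and the ripple stops at the first stage whose output rises. A Python sketch:

```python
def ripple_counter(n_stages, n_clocks):
    """Simulate an n-stage ripple up counter of negative-edge-triggered
    toggle stages; return the count after each external clock pulse."""
    q = [0] * n_stages           # q[0] is the LSB (the externally clocked stage)
    counts = []
    for _ in range(n_clocks):
        for i in range(n_stages):
            q[i] ^= 1            # this stage toggles
            if q[i] == 1:        # its output rose: no falling edge to
                break            # pass on, so the ripple stops here
        counts.append(sum(bit << i for i, bit in enumerate(q)))
    return counts

# A 3-stage counter is a modulo-8 (divide-by-8) up counter.
assert ripple_counter(3, 8) == [1, 2, 3, 4, 5, 6, 7, 0]
```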

Example
Consider the circuit shown on Figure below.

Assuming that we start with Q3 = Q2 = Q1 = 0, we can draw the waveforms shown on Figure
below.
We then get the truth-table for Q1, Q2 and Q3, which is shown below:

From the table above, we can see that the counter has eight distinct states and it is a down counter.
The counter is therefore a modulo-8 down asynchronous (ripple) counter or a divide by 8 down
asynchronous counter.
Asynchronous Counters with MOD numbers < 2ᴺ
The general design procedure is as follows:
1. For a modulo-k counter, find the smallest number of flip-flops N such that 2ᴺ⁻¹ < k < 2ᴺ, and connect them as a ripple counter.
2. Connect a NAND gate output to the asynchronous CLEAR inputs of all the flip flops.
3. Determine which flip-flops will be in the HIGH state at a count k and then connect
the normal outputs of these flip-flops to the NAND gate inputs. Example
Design a JK flip-flop-based modulo-6 asynchronous up counter.
Solution
The MOD number k = 6, and this is not a power of 2. Using the inequality above:
2ᴺ⁻¹ < 6 < 2ᴺ

we find that the value of N which satisfies this is N = 3, hence we use three flip-flops connected as a ripple counter. Since k = 6 = 110₂, the two most significant bits of the counter are the ones that should be connected to the inputs of the NAND gate, yielding the circuit shown on Figure below.

Figure : A modulo 6 up ripple counter


Assuming that we start with Q3 = Q2 = Q1 = 0, we can draw the waveforms shown on Figure
below.
Figure : Waveforms for the counter in Figure above

From Figure of Waveforms for the counter above, we can see that when Q3 and Q2 go HIGH, the
output of the NAND gate (X) goes LOW, and this clears all the flip-flops (Q3, Q2 and Q1 are taken
to the LOW state). When Q3 and Q2 go LOW, the NAND gate output goes HIGH and the counter
resumes normal operation.
The duration of the resetting pulse is very short, typically < 100 ns, so the counter state 110 is a temporary state and is not considered a valid state of the counter. Assuming that we start with Q3 = Q2 = Q1 = 0, we can draw the truth-table for this counter as shown below:
The counter has 6 stable states (000, 001, 010, 011, 100 and 101) hence it is a modulo-6 up
asynchronous counter.
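The truncated sequence can be simulated by treating the NAND-gate clear as immediate, so the transient state 110 never appears as a stable count. A Python sketch:

```python
def mod6_counter(n_clocks):
    """Modulo-6 ripple up counter: a 3-bit ripple count whose flip-flops
    are cleared the moment the (transient) state 110 = 6 is reached."""
    count = 0
    counts = []
    for _ in range(n_clocks):
        count = (count + 1) % 8   # underlying 3-bit ripple counter
        if count == 6:            # Q3 = Q2 = 1 drives the NAND output
            count = 0             # LOW, clearing all flip-flops
        counts.append(count)
    return counts

# Six stable states: 000, 001, 010, 011, 100, 101.
assert mod6_counter(7) == [1, 2, 3, 4, 5, 0, 1]
```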

Synchronous Counters
These are counters in which all the flip-flops are clocked simultaneously, and as a result, all flip-flop output changes occur simultaneously.
Propagation delays do not add together hence synchronous counters can work at much higher
frequencies than ripple counters.
The design procedure is as follows:
(i) Write the state-transition-table for the count.
(ii) Derive a minimal logic expression for each flip-flop input.
(iii) Implement the logic circuit using flip-flops and a suitable set of logic gates.

Example
Design a JK flip-flop based circuit whose state diagram is shown on Figure below.

Solution
Since we are to use JK flip-flops to implement this function, we shall use the JK flip flop
excitation table. We then construct the table shown below:
The K- maps corresponding to the four flip-flop inputs are shown on Figure below.

From the figure, we can see that:

The circuit is shown on Figure below.


Figure: Circuit for a MOD 4 synchronous counter

Note that the flip-flops may be either positive-edge-triggered or negative-edge-triggered. For


observation of the states, QA and QB may be connected to LEDs (QA is the MSB while QB is the
LSB).

Example
Design a JK flip-flop based circuit whose state diagram is shown on Figure below.

Using the JK flip-flop excitation table, we construct the table shown below:
The K- maps corresponding to the four flip-flop inputs are shown on Figure below. From Figure
below, we can see that:
The circuit is shown on Figure below. In this case, QA is the MSB while QC is the LSB.

Figure: Circuit for a MOD 8 synchronous counter


Memory
Register
A flip-flop is a 1-bit memory cell which can be used for storing digital data. To increase the storage capacity in terms of number of bits, we have to use a group of flip-flops. Such a group of flip-flops is known as a register. An n-bit register consists of n flip-flops and is capable of storing an n-bit word.
The binary data in a register can be moved within the register from one flip-flop to another. Registers that allow such data transfers are called shift registers. There are four modes of operation of a shift register.

 Serial Input Serial Output


 Serial Input Parallel Output
 Parallel Input Serial Output
 Parallel Input Parallel Output

Serial Input Serial Output


Let all the flip-flops be initially in the reset condition, i.e. Q3 = Q2 = Q1 = Q0 = 0. To enter a four-bit binary number 1 1 1 1 into the register, the number is applied to the Din input with the LSB first. The D input of FF-3, i.e. D3, is connected to the serial data input Din. The output of FF-3, i.e. Q3, is connected to the input of the next flip-flop, i.e. D2, and so on.
Block Diagram
Operation
Before application of clock signal, let Q3 Q2 Q1 Q0 = 0000 and apply LSB bit of the number
to be entered to Din. So Din = D3 = 1. Apply the clock. On the first falling edge of clock, the
FF-3 is set, and stored word in the register is Q3 Q2 Q1 Q0 = 1000.

Apply the next bit to Din. So Din = 1. As soon as the next negative edge of the clock hits, FF-2 will set and the stored word changes to Q3 Q2 Q1 Q0 = 1100.

Apply the next bit to be stored i.e. 1 to Din. Apply the clock pulse. As soon as the third
negative clock edge hits, FF-1 will be set and output will be modified to Q3 Q2 Q1 Q0 =
1110.

Similarly with Din = 1 and with the fourth negative clock edge arriving, the stored word in
the register is Q3 Q2 Q1 Q0 = 1111.
Truth Table

Waveforms
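The shifting sequence described above can be sketched in Python (Q3 is the stage connected to Din):

```python
def siso_shift(serial_bits):
    """Shift bits into a 4-bit SISO register, one bit per falling
    clock edge; return (Q3, Q2, Q1, Q0) after each edge."""
    q = [0, 0, 0, 0]                 # Q3 Q2 Q1 Q0, all initially reset
    history = []
    for din in serial_bits:
        q = [din] + q[:-1]           # every stage copies its neighbour
        history.append(tuple(q))
    return history

# Entering 1 1 1 1 reproduces the sequence 1000, 1100, 1110, 1111.
assert siso_shift([1, 1, 1, 1]) == [
    (1, 0, 0, 0), (1, 1, 0, 0), (1, 1, 1, 0), (1, 1, 1, 1)]
```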
Serial Input Parallel Output
 In such types of operations, the data is entered serially and taken out in parallel
fashion.
 Data is loaded bit by bit. The outputs are disabled as long as the data is loading.
 As soon as the data loading gets completed, all the flip-flops contain their required
data, the outputs are enabled so that all the loaded data is made available over
all the output lines at the same time.
 4 clock cycles are required to load a four-bit word. Hence the speed of operation of SIPO mode is the same as that of SISO mode.
Block Diagram

Parallel Input Serial Output (PISO)


 Data bits are entered in parallel fashion.
 The circuit shown below is a four bit parallel input serial output register.
 Output of previous Flip Flop is connected to the input of the next one via a
combinational circuit.
 The binary input word B0, B1, B2, B3 is applied through the same combinational circuit.
 There are two modes in which this circuit can work namely - shift mode or load
mode.
Load mode
When the shift/load bar line is low (0), AND gates 2, 4 and 6 become active and pass the B1, B2, B3 bits to the corresponding flip-flops. On the low-going edge of the clock, the binary input B0, B1, B2, B3 gets loaded into the corresponding flip-flops. Thus parallel loading takes place.
Shift mode
When the shift/load bar line is high (1), AND gates 2, 4 and 6 become inactive, so parallel loading of the data becomes impossible; but AND gates 1, 3 and 5 become active, and data is shifted from left to right bit by bit on application of clock pulses. Thus the parallel-in serial-out operation takes place.
Block Diagram

Parallel Input Parallel Output (PIPO)


In this mode, the 4-bit binary input B0, B1, B2, B3 is applied to the data inputs D0, D1, D2, D3 respectively of the four flip-flops. As soon as a negative clock edge is applied, the input binary bits are loaded into the flip-flops simultaneously, and the loaded bits appear simultaneously on the output side. Only a single clock pulse is needed to load all the bits.
Block Diagram

Bidirectional Shift Register


 If a binary number is shifted left by one position then it is equivalent to multiplying
the original number by 2. Similarly if a binary number is shifted right by one
position then it is equivalent to dividing the original number by 2.
 Hence if we want to use the shift register to multiply and divide the given binary
number, then we should be able to move the data in either left or right direction.
 Such a register is called bi-directional register. A four bit bi-directional shift register
is shown in fig.
 There are two serial inputs namely the serial right shift data input DR, and the
serial left shift data input DL along with a mode select input (M).
Block Diagram
Operation
S.N. Condition Operation

1 With M = 1 − Shift right operation


If M = 1, then the AND gates 1, 3, 5 and 7 are
enabled whereas the remaining AND gates 2, 4,
6 and 8 will be disabled.
The data at DR is shifted to right bit by bit from
FF-3 to FF-0 on the application of clock pulses.
Thus with M = 1 we get the serial right shift
operation.

2 With M = 0 − Shift left operation


When the mode control M is connected to 0 then
the AND gates 2, 4, 6 and 8 are enabled while
1, 3, 5 and 7 are disabled.
The data at DL is shifted left bit by bit from FF-0
to FF-3 on the application of clock pulses. Thus
with M = 0 we get the serial left shift operation.

Universal Shift Register


A shift register which can shift the data in only one direction is called a uni-directional shift register. A shift register which can shift the data in both directions is called a bi-directional shift register. Applying the same logic, a shift register which can shift the data in both directions as well as load it in parallel is known as a universal shift register. The shift register is capable of performing the following operations −

 Parallel loading
 Left Shifting
 Right shifting
The mode control input is connected to logic 1 for parallel loading operation whereas it
is connected to 0 for serial shifting. With mode control pin connected to ground, the
universal shift register acts as a bi-directional register. For serial left operation, the input
is applied to the serial input which goes to AND gate-1 shown in figure. Whereas for the
shift right operation, the serial input is applied to D input.
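The three operations of a universal shift register can likewise be sketched in Python (the mode names are our own; a behavioural illustration only):

```python
def universal_shift(register, mode, serial_in=0, parallel_in=None):
    """One clock pulse of a 4-bit universal shift register.
    mode: 'load' (parallel loading), 'right' or 'left' (serial shifting)."""
    if mode == 'load':
        return list(parallel_in)           # parallel loading
    if mode == 'right':
        return [serial_in] + register[:-1] # shift right
    return register[1:] + [serial_in]      # shift left

r = universal_shift([0, 0, 0, 0], 'load', parallel_in=[1, 0, 1, 1])
r = universal_shift(r, 'right', serial_in=0)
print(r)   # [0, 1, 0, 1]
```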
Block Diagram

Field Programmable Logic Gates


Field-Programmable Gate Array
What is a Field-Programmable Gate Array
A Field-Programmable Gate Array (FPGA) is a type of programmable logic device (PLD) that
provides a high degree of flexibility and can be used for implementing a complete digital system
on a single chip. It contains an array of identical logic cells that can be programmed. By
programming these logic cells or blocks, FPGAs can be made to perform various logic functions.
We can also interconnect them to implement complex digital systems.

FPGAs also have several input/output (I/O) blocks to create an interface between external devices
and the FPGA’s internal logic circuit. They also contain a memory element to store the program
that specifies the operational behavior of the logic cells and programmable interconnects.

In order to program FPGAs, there are various hardware description languages (HDLs) available
like Verilog or VHDL. These programming languages are used to define the desired functionality
and behavior of the digital system.

The general block diagram of an FPGA is depicted in the following figure.


Components of FPGA
It consists of the following main components −
 Configurable Logic Blocks (CLBs)
 I/O Blocks
 Programmable Interconnects

The configurable logic block consists of multiplexers, flip-flops, and arrays of combinational logic
circuits. The I/O blocks provide pins to connect external devices with the FPGA. The programmable
interconnect is basically a switching-matrix structure that provides interconnection between the
CLBs and I/O blocks inside the FPGA.

Types of FPGAs
Based on their performance, complexity, gate density, and power consumption, FPGAs can be
classified into the following main types −
 Low-End FPGAs
 Mid-Range FPGAs
 High-End FPGAs

Low-End FPGAs
Low-end FPGAs are primarily designed to consume less power than mid-range and high-end
FPGAs. Thus, they are well suited to battery-powered devices and other applications where
energy efficiency is critical.
Low-end FPGAs contain fewer logic gates, so they provide fewer resources for implementing
complex logic systems, and their architecture is less complex. Some common applications of
low-end FPGAs include simple control systems, basic signal processing systems, and low-cost
consumer electronics.

Mid-Range FPGAs
Mid-range FPGAs consume more power than low-end FPGAs but less than high-end FPGAs,
mainly because they contain a larger number of logic gates than low-end FPGAs, which in turn
increases the overall complexity of the circuit. Nevertheless, these FPGAs offer a balance
between performance and efficiency.

Since mid-range FPGAs provide a larger number of resources, they allow more complex digital
circuits to be implemented.

These FPGAs are used in a wide range of applications such as digital signal processing,
communication systems, embedded systems, industrial automation systems, telecommunication
devices, medical equipment, etc.

High-End FPGAs
High-end FPGAs consume more power than both low-end and mid-range FPGAs, because they
use a larger number of logic gates and operate at higher frequencies. In return, they offer
exceptional performance and processing efficiency.

Due to the availability of a large number of resources, high-end FPGAs can be used for
implementing highly complex logic circuits and systems. They also provide the highest level of
flexibility and performance.

Some common applications where the high-end FPGAs are used include high-speed processing
systems, real-time data analysis systems, data centers, high-performance computing systems,
aerospace and defense systems, etc.

Advantages of FPGAs
FPGAs offer numerous advantages over other types of programmable logic devices. Some of the
major advantages of FPGAs are listed below −
 FPGAs provide great flexibility and re-configurability, as they can be programmed or
reprogrammed to implement different logic functions to meet the requirements of specific
applications without the need for hardware alteration or redesign.
 FPGAs allow digital systems to be developed in less time.
 FPGAs have high performance and processing capabilities. Hence, they can execute
complex algorithms and tasks efficiently.
 FPGAs can be customized and optimized to meet the requirements of specific
applications.
 FPGAs also support parallelism and pipelining, which enhance the overall system
performance and throughput.

Disadvantages of FPGAs
FPGAs offer several advantages as listed above, but they also have certain disadvantages. Some
of the main disadvantages of FPGAs are highlighted here −
 FPGAs are more expensive than other types of programmable logic devices.
 FPGAs are complex to design and implement and require more time and expertise in
hardware description languages (HDLs) and system design tools.
 FPGAs are more susceptible to security threats than other types of programmable logic
devices.

Applications of FPGAs
FPGAs are extensively used in several applications across a wide range of industries. The
following are some common applications of FPGAs −
 FPGAs are used in the field of digital signal processing to accomplish tasks such as
audio-video signal processing, speech recognition, image processing, etc.
 FPGAs are used in the implementation of complex algorithms and real-time signal
processing functions.
 FPGAs are used in various communication and networking devices such as routers,
switches, network processing units, etc.
 In communication systems, FPGAs are employed for implementing protocol processing
algorithms, packet handling algorithms, encryption-decryption techniques, error detection
and correction mechanisms, etc.
 FPGAs are used in a variety of electronic systems such as embedded systems, industrial
automation systems, automotive electronics, consumer electronic devices, etc.
 FPGAs are used to perform high-end processing tasks such as scientific computation,
data analysis, machine learning and artificial intelligence tasks.
 FPGAs are also integral parts in various medical equipment such as MRI (Magnetic
Resonance Imaging), CT (Computed Tomography) scan, ultrasound systems, X-ray
machines, etc.

Memory
Definitions
 bit ─ a single binary digit
 byte ─ a collection of eight bits accessed together
 word ─ a collection of binary bits whose size is a typical unit of access for the
memory. It is typically a power of two multiple of bytes (e.g., 1 byte, 2 bytes, 4 bytes,
8 bytes, etc.)
 Nibble ─ a four-bit aggregation, or half an octet; also known as a half-byte
 Memory Address ─ A vector of bits that identifies a particular memory element (or
collection of elements).
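The byte/word/nibble definitions can be illustrated with a few bit operations (Python, with illustrative values):

```python
word = 0b1011_0110_1100_0101      # a 16-bit word = 2 bytes = 4 nibbles

low_byte   = word & 0xFF          # least significant byte
high_byte  = (word >> 8) & 0xFF   # most significant byte
low_nibble = word & 0xF           # least significant nibble (half-byte)

print(hex(high_byte), hex(low_byte), hex(low_nibble))   # 0xb6 0xc5 0x5
```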

A memory unit is a device to which binary information is transferred for storage and from which
information is retrieved when needed for processing.
When data processing takes place, information from memory is transferred to selected registers in
the processing unit.
A register is a very small amount of very fast memory that is built into the CPU in order to speed
up its operations by providing quick access to commonly used values
Intermediate and final results obtained in the processing unit are transferred back to be stored in
memory. Binary information received from an input device is stored in memory, and
information transferred to an output device is taken from memory. A memory unit is a
collection of cells capable of storing a large quantity of binary information.

There are two types of memories that are used in digital systems:
1. random‐access memory (RAM) and
2. read‐only memory (ROM).
The process of storing new information into memory is referred to as a memory write
operation.
The process of transferring the stored information out of memory is referred to as a memory read
operation.
RAM can perform both write and read operations.
ROM can perform only the read operation. This means that suitable binary information is already
stored inside memory and can be retrieved or read at any time. However, that information
cannot be altered by writing.
ROM is a programmable logic device (PLD).
The function to be performed by a programmable logic device is undefined at the time of its
manufacture.
These devices are programmed by the user to perform a range of functions.
The binary information that is stored within such a device is specified in some fashion and then
embedded within the hardware in a process referred to as programming the device.
The word “programming” here refers to a hardware procedure which specifies the bits that are
inserted into the hardware configuration of the device.
A PLD is an integrated circuit with internal logic gates connected through electronic paths that
behave similarly to fuses. In the original state of the device, all the fuses are intact.
Programming the device involves blowing those fuses along the paths that must be removed in
order to obtain the particular configuration of the desired logic function.

RANDOM-ACCESS MEMORY
A memory unit is a collection of storage cells, together with associated circuits needed to transfer
information into and out of a device. The architecture of memory is such that information can be selectively
retrieved from any of its internal locations.
The time it takes to transfer information to or from any desired random location is
always the same—hence the name random‐access memory, abbreviated RAM. In
contrast, the time required to retrieve information that is stored on magnetic tape
depends on the location of the data.

Most computer memories use words that are multiples of 8 bits in length.
Thus, a 16‐bit word contains two bytes, and a 32‐bit word is made up of four bytes. The capacity of a
memory unit is usually stated as the total number of bytes that the unit can store.
Communication between memory and its environment is achieved through data input
and output lines, address selection lines, and control lines that specify the direction
of transfer.
A block diagram of a memory unit is shown in Fig. above.
The n data input lines provide the information to be stored in memory, and the n data output lines
supply the information coming out of memory.
The k address lines specify the particular word chosen among the many available.
The two control inputs specify the direction of transfer desired: The Write input causes binary data to
be transferred into the memory, and the Read input causes binary data to be transferred out of
memory.

The memory unit is specified by the number of words it contains and the number of bits in each
word. The address lines select one particular word. Each word in memory is assigned an
identification number, called an address, starting from 0 up to 2^k − 1, where k is the number of
address lines.
The selection of a specific word inside memory is done by applying the k ‐bit address to the address
lines. An internal decoder accepts this address and opens the paths needed to select the word
specified.
Memories vary greatly in size and may range from 1,024 words, requiring an address of 10 bits, to
2^32 words, requiring 32 address bits.
It is customary to refer to the number of words (or bytes) in memory with one of the letters K (kilo),
M (mega), and G (giga). K is equal to 2^10, M is equal to 2^20, and G is equal to 2^30. Thus, 64K =
2^16, 2M = 2^21, and 4G = 2^32.

The 1K * 16 memory has 10 bits in the address and 16 bits in each word.
As another example, a 64K * 10 memory will have 16 bits in the address (since 64K = 2^16) and
each word will consist of 10 bits.
The number of address bits needed in a memory depends on the total number of words that can
be stored in the memory and is independent of the number of bits in each word.
The number of bits in the address is determined from the relationship 2^k ≥ m, where m is the total
number of words and k is the number of address bits needed to satisfy the relationship.
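The relationship 2^k ≥ m gives the minimum address width directly (a small Python check):

```python
import math

def address_bits(m):
    """Smallest k satisfying 2**k >= m, for a memory of m words."""
    return math.ceil(math.log2(m))

print(address_bits(1024))       # 10 address bits for 1K words
print(address_bits(64 * 1024))  # 16 address bits for 64K words
```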

Write and Read Operations


The two operations that RAM can perform are the write and read operations.
As alluded to earlier, the write signal specifies a transfer‐in operation and the read signal specifies
a transfer‐out operation. On accepting one of these control signals, the internal circuits
inside the memory provide the desired operation.
The steps that must be taken for the purpose of transferring a new word to be stored into
memory are as follows:
1. Apply the binary address of the desired word to the address lines.
2. Apply the data bits that must be stored in memory to the data input lines.
3. Activate the write input.
4. The memory unit will then take the bits from the input data lines and store them in the
word specified by the address lines.

The steps that must be taken for the purpose of transferring a stored word out of memory are
as follows:
1. Apply the binary address of the desired word to the address lines.
2. Activate the read input.
3. The memory unit will then take the bits from the word that has been selected by the
address and apply them to the output data lines. The contents of the selected word do
not change after the read operation, i.e., the word operation is nondestructive.
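The write and read sequences above can be captured in a small behavioural model (Python; a sketch of the functional behaviour, not of the timing or control signals):

```python
class RAM:
    """A 2^k x n random-access memory (behavioural model)."""
    def __init__(self, k, n):
        self.words = [0] * (2 ** k)   # 2^k words, initially zero
        self.mask = (1 << n) - 1      # keeps each word to n bits

    def write(self, address, data):
        # steps 1-4: apply address, apply data, activate write
        self.words[address] = data & self.mask

    def read(self, address):
        # the read is nondestructive: the stored word is unchanged
        return self.words[address]

mem = RAM(k=10, n=16)    # a 1K x 16 memory
mem.write(3, 0xABCD)
print(hex(mem.read(3)))  # 0xabcd
```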

Commercial memory components available in integrated‐circuit chips sometimes provide the two
control inputs for reading and writing in a somewhat different configuration.
Instead of having separate read and write inputs to control the two operations, most integrated
circuits provide two other control inputs:
1. One input selects the unit and
2. the other determines the operation.
The memory operations that result from these control inputs are specified in Table below.

The memory enable (sometimes called the chip select) is used to enable the particular memory chip
in a multichip implementation of a large memory. When the memory enable is inactive, the
memory chip is not selected and no operation is performed.

Types of RAM Memories


Integrated circuit RAM units are available in two operating modes:
1. static and
2. dynamic.

Static RAM (SRAM) consists essentially of internal latches that store the binary information. The
stored information remains valid as long as power is applied to the unit.
Dynamic RAM (DRAM) stores the binary information in the form of electric charges on capacitors
provided inside the chip by MOS transistors
The stored charge on the capacitors tends to discharge with time, and the capacitors must be
periodically recharged by refreshing the dynamic memory.
Refreshing is done by cycling through the words every few milliseconds to restore the decaying
charge.
DRAM offers reduced power consumption and larger storage capacity in a single memory chip.
SRAM is easier to use and has shorter read and write cycles.

Memory units that lose stored information when power is turned off are said to be volatile. CMOS
integrated circuit RAMs, both static and dynamic, are of this category, since
the binary cells need external power to maintain the stored information.
In contrast, a nonvolatile memory, such as magnetic disk, retains its stored information after the
removal of power. This type of memory is able to retain information because the data stored on
magnetic components are represented by the direction of magnetization, which is retained
after power is turned off. ROM is another nonvolatile memory.
A nonvolatile memory enables digital computers to store programs that will be needed again after
the computer is turned on. Programs and data that cannot be altered are stored in ROM, while
other large programs are maintained on magnetic disks.
The latter programs are transferred into the computer RAM as needed. Before the power is turned
off, the binary information from the computer RAM is transferred to the disk so that the
information will be retained.

READ‐ONLY MEMORY

A read‐only memory (ROM) is essentially a memory device in which permanent binary information
is stored.
The binary information must be specified by the designer and is then embedded in the unit to form
the required interconnection pattern. Once the pattern is established, it stays within the unit
even when power is turned off and on again.
A block diagram of a ROM consisting of k inputs and n outputs is shown in Fig. below.

The inputs provide the address for memory, and the outputs give the data bits of the stored word
that is selected by the address.
The number of words in a ROM is determined from the fact that k address input lines are needed to
specify 2^k words.
Note that ROM does not have data inputs, because it does not have a write operation. Integrated
circuit ROM chips have one or more enable inputs and sometimes come with three‐state
outputs to facilitate the construction of large arrays of ROM.

Consider, for example, a 32 * 8 ROM. The unit consists of 32 words of 8 bits each.
There are five input lines that form the binary numbers from 0 through 31 for the address.
Figure below shows the internal logic construction of this ROM. The five inputs are decoded into
32 distinct outputs by means of a 5 * 32 decoder.
Each output of the decoder represents a memory address. The 32 outputs of the decoder are
connected to each of the eight OR gates.
The diagram shows the array logic convention used in complex circuits.
Each OR gate must be considered as having 32 inputs. Each output of the decoder is connected to
one of the inputs of each OR gate.
Since each OR gate has 32 input connections and there are 8 OR gates, the ROM contains 32 * 8
= 256 internal connections.
In general, a 2^k * n ROM will have an internal k * 2^k decoder and n OR gates. Each OR gate
has 2^k inputs, which are connected to each of the outputs of the decoder.
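The decoder-plus-OR-gate structure can be mimicked directly in Python (an illustrative model; the fuse pattern is represented by the stored words):

```python
def rom_read(contents, address, k, n):
    """Read one word from a 2^k x n ROM modelled as a k-to-2^k decoder
    feeding n OR gates; contents[a] is the n-bit word stored at address a."""
    decoder = [1 if i == address else 0 for i in range(2 ** k)]  # one-hot
    word = 0
    for bit in range(n):
        # OR together the decoder lines whose crosspoint fuse is intact
        lines = [decoder[a] for a in range(2 ** k) if (contents[a] >> bit) & 1]
        word |= (1 if any(lines) else 0) << bit
    return word

contents = [0] * 32                 # a 32 x 8 ROM, initially all zeros
contents[3] = 0b10110010            # the word stored at address 3
print(bin(rom_read(contents, 3, k=5, n=8)))   # 0b10110010
```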

The 256 intersections in Fig. above are programmable.


A programmable connection between two lines is logically equivalent to a switch that can be altered
to be either closed (meaning that the two lines are connected) or open (meaning that the two
lines are disconnected).
The programmable intersection between two lines is sometimes called a crosspoint. Various
physical devices are used to implement crosspoint switches.
One of the simplest technologies employs a fuse that normally connects the two points, but is
opened or “blown” by the application of a high‐voltage pulse into the fuse.
The internal binary storage of a ROM is specified by a truth table that shows the word content in
each address. For example, the content of a 32 * 8 ROM may be specified with a truth table
similar to the one shown in Table below.
The truth table shows the five inputs under which are listed all 32 addresses. Each address stores a
word of 8 bits, which is listed in the outputs columns. The table shows only the first four and
the last four words in the ROM. The complete table must include the list of all 32 words.
The hardware procedure that programs the ROM blows fuse links in accordance with a given truth
table. For example, programming the ROM according to the truth table given by Table above
results in the configuration shown in Fig. below.

Every 0 listed in the truth table specifies the absence of a connection, and every 1 listed specifies
a path that is obtained by a connection.
For example, the table specifies the eight‐bit word 10110010 for permanent storage at address 3.
The four 0’s in the word are programmed by blowing the fuse links between output 3 of the
decoder and the inputs of the OR gates associated with outputs A6, A3, A2, and A0.
The four 1’s in the word are marked with a * to denote a temporary connection, in place of a dot
used for a permanent connection in logic diagrams.
When the input of the ROM is 00011, the signal equivalent to logic 1 at decoder output 3 propagates
through the connections to the OR gate outputs of A7, A5, A4, and A1.
The other four outputs remain at 0. The result is that the stored word 10110010 is applied to
the eight data outputs.

Example
Design a combinational circuit using a ROM. The circuit accepts a three‐bit number and outputs a
binary number equal to the square of the input number.
The first step is to derive the truth table of the combinational circuit
Three inputs and six outputs are needed to accommodate all possible binary numbers. We note that
output B0 is always equal to input A0, so there is no need to generate B0 with a ROM, since it is
equal to an input variable. Moreover, output B1 is always 0, so this output is a known constant.

We actually need to generate only four outputs with the ROM; the other two are readily obtained.
The minimum size of ROM needed must have three inputs and four outputs. Three inputs
specify eight words, so the ROM must be of size 8 * 4. The ROM implementation is shown.
The three inputs specify eight words of four bits each. The truth table in (b) specifies the
information needed for programming the ROM. The block diagram of Fig. (a) shows the
required connections of the combinational circuit.
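The two observations (B0 = A0 and B1 = 0) can be checked by enumerating the truth table (Python):

```python
# Truth table of the 3-bit squarer: inputs A2 A1 A0, outputs B5..B0 = A**2
for a in range(8):
    b = a * a
    print(format(a, '03b'), format(b, '06b'))
    assert b & 1 == a & 1    # B0 always equals A0
    assert b & 2 == 0        # B1 is always 0
```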
Types of ROMs
The required paths in a ROM may be programmed in four different ways.
The first is called mask programming and is done by the semiconductor company during the last
fabrication process of the unit. The procedure for fabricating a ROM requires that the customer fill out the
truth table he or she wishes the ROM to satisfy. The truth table may be submitted in a special form
provided by the manufacturer or in a specified format on a computer output medium. The manufacturer
makes the corresponding mask for the paths to produce the 1’s and 0’s according to the customer’s truth
table. This procedure is costly because the vendor charges the customer a special fee for custom masking
the particular ROM. For this reason, mask programming is economical only if a large quantity of the
same ROM configuration is to be ordered.
For small quantities, it is more economical to use a second type of ROM called programmable read‐only
memory, or PROM. When ordered, PROM units contain all the fuses intact, giving all 1’s in the bits of the
stored words. The fuses in the PROM are blown by the application of a high‐voltage pulse to the device
through a special pin.
A blown fuse defines a binary 0 state and an intact fuse gives a binary 1 state. This procedure allows the
user to program the PROM in the laboratory to achieve the desired relationship between input addresses
and stored words. Special instruments called PROM programmers are available commercially to facilitate
the procedure. In any case, all procedures for programming ROMs are hardware procedures, even though
the word programming is used.
The hardware procedure for programming ROMs or PROMs is irreversible, and once programmed, the
fixed pattern is permanent and cannot be altered. Once a bit pattern has been established, the unit must be
discarded if the bit pattern is to be changed.
A third type of ROM is the erasable PROM, or EPROM, which can be restructured to the initial state
even though it has been programmed previously. When the EPROM is placed under a special ultraviolet
light for a given length of time, the shortwave radiation discharges the internal floating gates that serve as
the programmed connections. After erasure, the EPROM returns to its initial state and can be
reprogrammed to a new set of values.
The fourth type of ROM is the electrically erasable PROM (EEPROM or E2PROM ). This device is
like the EPROM, except that the previously programmed connections can be erased with an electrical
signal instead of ultraviolet light. The advantage is that the device can be erased without removing it from
its socket.
Flash memory devices are similar to EEPROMs, but have additional built‐in circuitry to selectively
program and erase the device in‐circuit, without the need for a special programmer. They have
widespread application in modern technology in cell phones, digital cameras, set‐top boxes, digital TV,
telecommunications, nonvolatile data storage, and microcontrollers. Their low consumption of power
makes them an attractive storage medium for laptop and notebook computers. Flash memories
incorporate additional circuitry, too, allowing simultaneous erasing of blocks of memory, for example, of
size 16 to 64 K bytes. Like EEPROMs, flash memories are subject to fatigue, typically having about 105
block erase cycles.
Sequential Circuit Synthesis
State Machine
‘State’ and ‘state transition’ are two important concepts to understand before learning about
FSMs, because an FSM models the behaviour of a system by representing the different states it
can be in and the transitions between those states.
A state corresponds to a specific condition or action, for example waiting for a button to be
pressed, or rotating a DC motor in a given direction.

An FSM is a program or system that, at any point in time, can be in one of a predetermined number of
states.

An FSM can be created in two ways, by using −


 Mealy machine and
 Moore machine.
In the Mealy machine, the output of the system depends on the present state and the present input also. But
in the case of Moore machine, the output of the system depends on the present state only.
We know that synchronous sequential circuits change (affect) their states for every positive (or
negative) transition of the clock signal based on the input. So, this behavior of synchronous sequential
circuits can be represented in the graphical form and it is known as state diagram.
A synchronous sequential circuit is also called as Finite State Machine (FSM), if it has finite
number of states.

Mealy State Machine


A Finite State Machine is said to be Mealy state machine, if outputs depend on both present inputs
& present states. The block diagram of Mealy state machine is shown in the following figure.
As shown in figure, there are two parts present in Mealy state
machine. Those are
 combinational logic and
 Memory.
Memory is useful to provide some or part of previous outputs (present states) as inputs of
combinational logic.

So, based on the present inputs and present states, the Mealy state machine produces outputs.
Therefore, the outputs will be valid only at positive (or negative) transition of the clock signal.

Moore State Machine


A Finite State Machine is said to be Moore state machine, if outputs depend only on present
states. The block diagram of Moore state machine is shown in the following figure.

As shown in figure, there are two parts present in Moore state machine. Those are

 combinational logic and


 memory.
In this case, the present state and the input determine the next state, while the output is produced
from the present state alone. Therefore, the outputs will be valid only after a transition of the state.
Difference between Moore and Mealy models

1. Mealy: The output of the circuit may be affected by changes in the input.
   Moore: The output of the circuit is unaffected by changes in the input.
2. Mealy: It requires fewer states to implement the same function.
   Moore: It requires more states to implement the same function.
3. Mealy: The output is a function of the present state as well as the present input.
   Moore: The output is a function of the present state only.
4. Mealy: A counter cannot be regarded as a Mealy machine.
   Moore: A counter is regarded as a Moore machine.
5. Mealy: The design of the Mealy model is complex.
   Moore: The design of the Moore model is easy.
6. Mealy: Mealy machines react to input changes more quickly.
   Moore: Decoding the output requires more logic, resulting in longer circuit delays; the
   output usually appears after one clock cycle.
7. Mealy: If the external inputs vary, the output can change between clock edges.
   Moore: The output changes only on the active clock edge.
8. Mealy: The output is set on the transition.
   Moore: The output is set by the state.
9. Mealy: It is easier to design with less hardware.
   Moore: More hardware is necessary for the design.
In general, the number of states required in Moore state machine is more than or equal to the
number of states required in Mealy state machine.

There is an equivalent Mealy state machine for each Moore state machine.

So, based on the requirement we can use one of them.

State Reduction

Any design process must consider the problem of minimising the cost of the final circuit. The two
most obvious cost reductions are:

reductions in the number of flip-flops and


the number of gates.
The process of eliminating the equivalent or redundant states from a state table/diagram is known
as state reduction.
STATE REDUCTION TECHNIQUES
Row elimination method

Example: Let us consider the state table of a sequential circuit shown in Table 1.
Present State    Next State        Output
                 x=0     x=1       x=0     x=1
A                B       C         1       0
B                F       D         0       0
C                D       E         1       1
D                F       E         0       1
E                A       D         0       0
F                B       C         1       0

Table 1. State table

It can be seen from the table that the present state A and F both have the same next states, B (when
x=0) and C (when x=1). They also produce the same output 1 (when x=0) and 0 (when x=1).
Therefore states A and F are equivalent. Thus one of the states, A or F can be removed from the
state table. For example, if we remove row F from the table and replace all F's by A's in the
columns, the state table is modified as shown in Table 2.
Present State    Next State        Output
                 x=0     x=1       x=0     x=1
A                B       C         1       0
B                A       D         0       0
C                D       E         1       1
D                A       E         0       1
E                A       D         0       0

Table 2. State F removed

It is apparent that states B and E are equivalent. Removing E and replacing E's by B's results in the
reduce table shown in Table 3.
Present State    Next State        Output
                 x=0     x=1       x=0     x=1
A                B       C         1       0
B                A       D         0       0
C                D       B         1       1
D                A       B         0       1

Table 3. Reduced state table

The removal of equivalent states has reduced the number of states in the circuit from six to four.
Two states are considered to be equivalent if and only if for every input sequence the circuit
produces the same output sequence.
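The row-elimination procedure can be automated (Python; a sketch that merges states with identical rows, exactly as done by hand above):

```python
def reduce_states(table):
    """Merge states whose rows (next states and outputs) are identical,
    repeating until no equivalent pair remains."""
    changed = True
    while changed:
        changed = False
        states = list(table)
        for i, s in enumerate(states):
            for t in states[i + 1:]:
                if table[s] == table[t]:           # equivalent rows
                    del table[t]                   # remove state t ...
                    for u in table:                # ... and replace t by s
                        ns, out = table[u]
                        table[u] = ([s if x == t else x for x in ns], out)
                    changed = True
                    break
            if changed:
                break
    return table

# rows from Table 1: state -> (next states for x=0,1; outputs for x=0,1)
table = {'A': (['B', 'C'], [1, 0]), 'B': (['F', 'D'], [0, 0]),
         'C': (['D', 'E'], [1, 1]), 'D': (['F', 'E'], [0, 1]),
         'E': (['A', 'D'], [0, 0]), 'F': (['B', 'C'], [1, 0])}
print(sorted(reduce_states(table)))   # ['A', 'B', 'C', 'D']
```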
Design of Sequential Circuit
Design problem normally starts with a word description of input output relation and ends with a
circuit diagram having sequential and combinatorial logic elements.
Sequential logic circuit design here refers to synchronous, clock-triggered circuits because of their
design and implementation advantages.

Steps involved in design of sequential circuit


1. Input output relation
2. State transition diagram or Algorithmic State Machine (ASM) chart
3. State synthesis table
a. Flip Flop based implementation- excitation tables are used to generate design equations
through Karnaugh Map
b. Read Only Memory (ROM) based implementation- excitation tables are not required but flip-
flops are used as delay elements
4. Circuit diagram is developed from these design equations.

State machine design


There are two different approaches of state machine design called Moore model and Mealy
model.

DESIGN OF SYNCHRONOUS SEQUENTIAL CIRCUIT


EXAMPLE:-
SEQUENCE DETECTOR PROBLEM
The Problem
Design a sequence detector that receives binary data stream at its input, X and signals when a
combination ‘011’ arrives at the input by making its output, Y high which otherwise remains low.
Consider, data is coming from right i.e. the first bit to be identified is 1, second 1 and third 0 from
the input sequence.

The first step in a sequential logic synthesis problem is to convert this word description to State
transition diagram.

MOORE MODEL
1) Input output relation
In the Moore model, circuit outputs (also called primary outputs) are generated solely from
secondary outputs, i.e., the memory values.

State transition diagram


• Since the output is generated only from the state variables, let the detector circuit be
at state a when initialized, where none of the bits in the input sequence has yet been
detected (the starting point of detection).
• Then, if the 1st bit is detected properly, the circuit should move to a different state, say b.
• Similarly, we need two more states, say c and d, to represent detection of the 2nd and 3rd
bits in proper order. When the detector circuit is at state d, output Y is asserted and kept
high as long as the circuit remains in state d, signaling sequence detection. For the other
states the detector output is Y = 0.
STATE TRANSITION DIAGRAM

Figure above shows the state transition diagram following Moore model.

Each state and its output is defined within a circle of the state transition diagram in the format s/V
• where s represents a symbol (or memory value) identifying a state and V represents the output of the circuit in that state.
• An arrow marks a state transition following an input value 0 or 1, which is written along the path.
• X represents the binary data input from which the sequence ‘011’ is to be detected.

• Initialized with a (if input X=0, remain in the same state a; if input X=1, go to next state b: the first bit (1) is detected and output Y=0)
• State b (if input X=0, go back to initial state a; if input X=1, go to next state c: the second bit (1) is detected and output Y=0)
• State c (if input X=0, go to final state d: the third bit (0) is detected; if input X=1, remain in the same state c; output Y=0)
• State d (if input X=0, go to initial state a; if input X=1, go to state b and repeat the sequence, i.e. ‘011011’; output Y=1)
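The Moore transitions above can be checked with a small simulation. This is an illustrative sketch, not part of the original design; the state names a–d and the transition/output rules are taken directly from the bullets:

```python
# Moore-model '011' sequence detector (bits are detected as 1, then 1, then 0).
# The output Y depends only on the current state; Y = 1 in state d.
MOORE_NEXT = {
    ('a', 0): 'a', ('a', 1): 'b',
    ('b', 0): 'a', ('b', 1): 'c',
    ('c', 0): 'd', ('c', 1): 'c',
    ('d', 0): 'a', ('d', 1): 'b',
}
MOORE_OUT = {'a': 0, 'b': 0, 'c': 0, 'd': 1}

def moore_detect(bits):
    state, outputs = 'a', []
    for x in bits:
        state = MOORE_NEXT[(state, x)]
        outputs.append(MOORE_OUT[state])  # output observed after the clock edge
    return outputs

print(moore_detect([1, 1, 0, 1, 1, 0]))  # → [0, 0, 1, 0, 0, 1]
```

The repeated input 1,1,0,1,1,0 asserts Y twice, matching the ‘011011’ repetition noted for state d.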

MEALY MODEL

1) Input output relation


 Since the output can be derived using the state as well as the input, three different states suffice for the 3-bit sequence detector circuit following the Mealy model.
 The three states, say a, b and c, represent none, 1st bit and 2nd bit detection respectively.
 When the circuit is at state c, if the input matches the pattern the output is generated in state c itself through a proper logic combination of state and input.
State Transition Diagram: Mealy Model

Figure above shows state transition diagram of the given problem following Mealy model.

• Initialized with a (if input X=0, remain in the same state a; if input X=1, go to next state b: the first bit (1) is detected and output Y=0)
• State b (if input X=0, go back to initial state a; if input X=1, go to next state c: the second bit (1) is detected and output Y=0)
• State c (if input X=0, go to state a: the third bit (0) is detected and output Y=1; if input X=1, remain in the same state c with output Y=0)

Note: This differs from the Moore model, where the output is generated one clock cycle later (in state d) and one additional state is required.
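The Mealy version can be sketched the same way. This illustrative simulation (not part of the original notes) encodes the three-state machine above; the output is computed from the present state and the present input, so Y = 1 appears while the third bit (X = 0) is still on the input, rather than one clock later as in the Moore circuit:

```python
# Mealy-model '011' detector: three states; output depends on state AND input.
# Y = 1 when the circuit is in state c and receives X = 0 (the third bit).
MEALY_NEXT = {
    ('a', 0): 'a', ('a', 1): 'b',
    ('b', 0): 'a', ('b', 1): 'c',
    ('c', 0): 'a', ('c', 1): 'c',
}

def mealy_detect(bits):
    state, outputs = 'a', []
    for x in bits:
        outputs.append(1 if (state == 'c' and x == 0) else 0)  # combinational output
        state = MEALY_NEXT[(state, x)]
    return outputs

print(mealy_detect([1, 1, 0, 1, 1, 0]))  # → [0, 0, 1, 0, 0, 1]
```

Each output entry here is valid during the same input period as the bit that produced it, which is exactly the one-cycle timing difference the note describes.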

STATE SYNTHESIS TABLE


The next step in the design process is to develop the state synthesis table, also called the circuit excitation table or simply the state table, from the state transition diagram.
With m memory elements, a circuit can have up to 2^m different states. At this stage we must:
• Perform state assignment
• Decide the number of memory elements
• Decide which flip-flops to use (JK, D); the JK flip-flop is normally preferred as it has the maximum number of don’t care entries in its excitation table, which leads to simpler design equations
STATE ASSIGNMENT FOR SEQUENCE DETECTOR
 Each state is a binary combination of memory values.
 Both the Moore and Mealy models require a minimum of two flip-flops (say A and B) to define their states (4 states for Moore and 3 for Mealy). The state assignment is as follows.

Note that the Mealy model does not use state d. The assignment can be done in any order; e.g. we could instead make a: B = 0, A = 1 and b: B = 0, A = 0.
NO OF MEMORY ELEMENTS
Two memory elements (flip-flops) are required.
DECIDE ABOUT FLIP FLOPS
JK flip-flops are chosen.
Moore Model
State synthesis table obtained from state transition diagram of Moore model (Moore model Figure)
and excitation table of JK flip-flop is shown in Table below

JK flip flop excitation table

• It has eight rows, as for each of the four possible states there can be two different input values.
• The table is prepared as follows. When the circuit is at state 00, i.e. a, and receives X = 0, it remains at state 00 and the output in this state is Y = 0. Since both the B and A flip-flops make a 0→0 transition, both flip-flops should have inputs 0X (J = 0, K = X) from the excitation table.

State table for Moore model

Present State    Next State (Bn+1 An+1)    Output Y
  Bn  An           X=0         X=1          X=0  X=1

   0   0           0 0         0 1           0    0
   0   1           0 0         1 0           0    0
   1   0           1 1         1 0           0    0
   1   1           0 0         0 1           1    1

State synthesis table for Moore model


Mealy Model
• Since the Mealy model requires three states for this problem, we have six rows in the state synthesis table.
• In each state there can be two different input values, X = 0 or X = 1. The table below represents the state synthesis table for the Mealy model. The method remains the same as for the Moore model, but we use the state transitions corresponding to the Mealy model.

State table for Mealy model

Present State    Next State (Bn+1 An+1)    Output Y
  Bn  An           X=0         X=1          X=0  X=1

   0   0           0 0         0 1           0    0
   0   1           0 0         1 0           0    0
   1   0           0 0         1 0           1    0

State synthesis table for Mealy model

DESIGN EQUATIONS AND CIRCUIT DIAGRAM


• In design equation we express flip-flop inputs as a function of present state, i.e. memory
values (here, B and A) and present input (here, X). This ensures proper transfer of the
circuit to next state.
• The design equations also give output (here, Y) equation in terms of state variables or
memory elements in Moore model and state variables together with input in Mealy
model.
• Use Karnaugh map technique to get a simplified form of these relations.
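Deriving the flip-flop inputs from the excitation table (step 3a) can be mechanized before the Karnaugh-map step. The sketch below is illustrative: it assumes the state coding implied by the Moore state table (a=00, b=01, c=10, d=11, as Bn An) and prints the required J and K values for each combination, with 'X' marking don't cares ready for map grouping:

```python
# JK excitation table: for a present -> next transition of one flip-flop,
# the required (J, K) inputs; 'X' denotes a don't care.
JK = {(0, 0): ('0', 'X'), (0, 1): ('1', 'X'),
      (1, 0): ('X', '1'), (1, 1): ('X', '0')}

# Moore-model next-state function; keys are (Bn, An, X), values (Bn+1, An+1).
NEXT = {(0, 0, 0): (0, 0), (0, 0, 1): (0, 1),
        (0, 1, 0): (0, 0), (0, 1, 1): (1, 0),
        (1, 0, 0): (1, 1), (1, 0, 1): (1, 0),
        (1, 1, 0): (0, 0), (1, 1, 1): (0, 1)}

print("Bn An X | JB KB JA KA")
for (b, a, x), (bn, an) in sorted(NEXT.items()):
    jb, kb = JK[(b, bn)]   # inputs needed by flip-flop B
    ja, ka = JK[(a, an)]   # inputs needed by flip-flop A
    print(f" {b}  {a} {x} |  {jb}  {kb}  {ja}  {ka}")
```

The first row reproduces the worked example in the text: state 00 with X = 0 stays at 00, so both flip-flops get 0X.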

Moore Model
• The figure below presents the Karnaugh maps developed from the state synthesis table of the Moore model and also shows the corresponding design equations.
• Also the Figure of sequence detector circuit diagram developed from these equations is
shown.
(a) Design equations for Moore model (b) Circuit diagram following a Moore model

Mealy Model
Using the state synthesis table corresponding to the Mealy model, we can fill six positions in each Karnaugh map (Fig. below (a)). Locations BnAnX = 110 and BnAnX = 111 are filled with don’t care (X) conditions, as such a combination never occurs in the detector circuit if it is properly initialized.
The design equations are obtained from these Karnaugh maps, from which the circuit diagram is drawn as shown in Fig. (b). In this circuit, the output directly uses the input information.
(a) Design equations for Mealy model (b) Circuit diagram following a Mealy model

Assignment

Implication Table Method


Conversion from Mealy to Moore Model and Conversion from Moore to Mealy Model
Implementation/design of sequential circuit using read only memory-
3 people
ASYNCHRONOUS SEQUENTIAL CIRCUITS

INTRODUCTION

 Do not use clock pulses.


 The change of internal state occurs when there is a change in the input variable.
 Their memory elements are either un-clocked flip-flops or time-delay elements.
 They often resemble combinational circuits with feedback.
 They are used when speed of operation is important.
 The communication of two units, with each unit having its own independent clock, must
be done with asynchronous circuits

Let yi denote the present (feedback) value of secondary variable i, and Yi its excitation (next) value. The asynchronous circuit is in a:

Stable state if Yi = yi for all i

Transition (unstable) state if Yi ≠ yi for some i

Excitation and Transition Table


 A circle indicates a stable state
 Every row has at least one stable state
 When y2y1 ≠ Y2Y1 (due to an input change) a transition to a new row will occur

Design restrictions
 Normally inputs are required not to change simultaneously (i.e., within the settling time of the circuit after an input change)
 Hence, the input changes only in stable states
The design of asynchronous circuits is more complex and has several constraints that must be taken care of, which are not required for synchronous circuits.
Problems are: -
 Race conditions
 Hazards

Race Conditions
 A race condition is caused when two or more binary state variables change value due to
change in an input variable
 Unequal delays may cause state variables to change in unpredictable manner
 Race condition may be
i. non- critical, or
ii. critical

Hazards
 Unwanted switching transients, “glitches”, that may cause circuit to malfunction

Steps involved in the design of an asynchronous sequential circuit are: -

1. Find relationship between input and output


2. Construct state transition diagram
3. Construct primitive table /primitive flow table/simple flow table
4. State reduction –either using row elimination method/implication table method
5. State assignment
6. Design input equation and circuit diagram

This makes the design of such circuits complex and cumbersome; if benefits like speed are not of critical importance, synchronous design is preferred to asynchronous design.

Example:
Let us start with the state transition diagram.
Fig: State transition diagram of the problem
 Let the initial state be considered as a when AB = 00 with output 0.
 As long as AB remains 00, the circuit remains at a.
 As soon as one of A or B changes the circuit may immediately move to different states.
 A and B cannot change together, a constraint in asynchronous design.
 At state a, if input AB = 01, the circuit goes to state b and if input AB = 10 it goes to c
 Both b and c generate output 0 as they have not yet fulfilled the condition stated in the
problem for assertion of output.
 The circuit remains at b for AB = 01. If AB changes to 11, the circuit moves to state d
with output X = 0
 if AB becomes 00 the circuit goes back to a.
 Similarly, the circuit stays at c if input stays at 10 but goes to d receiving 11 and to a
receiving 00.
 At state d, the input is AB = 11; now if B changes from 1→0 the condition for output X = 1 is fulfilled, and the next state e (for AB = 10) shows output 1.
 The circuit remains at state e as long as AB = 10. It goes back to state d if AB becomes 11, because if B again goes to 0 the output should be high.
 At e, if AB changes to 00 the circuit goes to initial state a, as AB becoming 10 following 00 will not assert the output.
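The transitions just described can be exercised with a level-based simulation. This is an illustrative sketch (state names and input pairs AB are taken from the bullets; the fundamental-mode constraint that only one input changes at a time is assumed):

```python
# Fundamental-mode flow for the asynchronous circuit described above.
# Key: (state, (A, B)) -> next stable state; output X is 1 only in state e.
FLOW = {
    ('a', (0, 0)): 'a', ('a', (0, 1)): 'b', ('a', (1, 0)): 'c',
    ('b', (0, 1)): 'b', ('b', (1, 1)): 'd', ('b', (0, 0)): 'a',
    ('c', (1, 0)): 'c', ('c', (1, 1)): 'd', ('c', (0, 0)): 'a',
    ('d', (1, 1)): 'd', ('d', (1, 0)): 'e', ('d', (0, 1)): 'b',
    ('e', (1, 0)): 'e', ('e', (1, 1)): 'd', ('e', (0, 0)): 'a',
}

def run(inputs, state='a'):
    trace = []
    for ab in inputs:  # inputs must change one variable at a time
        state = FLOW[(state, ab)]
        trace.append((state, 1 if state == 'e' else 0))
    return trace

# A goes high, then B pulses high and back low while A stays 1:
print(run([(1, 0), (1, 1), (1, 0)]))  # → [('c', 0), ('d', 0), ('e', 1)]
```

The output asserts exactly when B falls while A is held high, as the word description requires.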

Construct primitive table /primitive flow table/simple flow table


Primitive Table
The next step is to form the state table from the state transition diagram.
If every row representing a state has only one stable state over all possible input combinations, the table is termed a primitive table, primitive flow table or simply flow table.
The primitive table prepared from the state transition diagram is shown in the figure below.

Fig.: primitive table for the problem


Each row in this table has a don’t care entry. The don’t care entry in each row appears where moving from the stable state would require both input variables to change simultaneously, a condition not allowed in asynchronous sequential logic.

a and b are similar: eliminate b

         AB=00  AB=01  AB=11  AB=10  Output
  a       (a)    a      d      c       0
  c        a     X3     d     (c)      0
  d       X4     a     (d)     e       0
  e        a     X5     d     (e)      1

a and c are similar: eliminate c

         AB=00  AB=01  AB=11  AB=10  Output
  a       (a)    a      d      a       0
  d       X4     a     (d)     e       0
  e        a     X5     d     (e)      1

(Stable states are shown in parentheses; X3, X4, X5 are don’t care entries.)
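The row-elimination criterion can be checked mechanically: two rows merge when their outputs match and their next-state entries agree wherever neither is a don't care. This sketch encodes the primitive flow table of the example (None stands for a don't care); the encoding is an assumption reconstructed from the primitive table above:

```python
# Primitive flow-table rows: next states per input column AB = 00, 01, 11, 10
# (None = don't care), plus each row's output.
ROWS = {
    'a': (['a', 'b', None, 'c'], 0),
    'b': (['a', 'b', 'd', None], 0),
    'c': (['a', None, 'd', 'c'], 0),
    'd': ([None, 'b', 'd', 'e'], 0),
    'e': (['a', None, 'd', 'e'], 1),
}

def mergeable(r1, r2):
    """Rows merge if outputs match and entries never disagree (don't cares match anything)."""
    (n1, o1), (n2, o2) = ROWS[r1], ROWS[r2]
    return o1 == o2 and all(x is None or y is None or x == y
                            for x, y in zip(n1, n2))

print(mergeable('a', 'b'))  # True  -> eliminate b
print(mergeable('a', 'e'))  # False -> outputs differ
```

After each merge, occurrences of the eliminated row's name are renamed to the surviving row before the next comparison, which is how a and c become identical in the second table.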

State assignment
 This step in asynchronous sequential circuit design has to be done very carefully, so that a valid state transition does not require two or more state variables to change simultaneously, which may lead to a racing problem.
 In this problem there are three states in the reduced state diagram, which need two variables to represent them. The figure above shows that we cannot avoid two variables changing together on one or more occasions in the reduced state transition diagram.
 If {a, d, e} is represented by (00, 01, 10), a double change occurs twice, for the d→e and e→d transitions in the state transition diagram.
 A representation of (00, 01, 11) requires two variables to change only once, when the e→a transition occurs.
 The solution may be found if the unused fourth combination of the two-variable representation is used as a dummy state, inserted between e and a.
 If one dummy state is not enough, a third variable is needed to represent the states, which makes 2^3 − 3 = 5 dummy states available for this purpose.
 Represent the states in this problem by two variables PQ in the following way: a: 00, d: 01, e: 11 and dummy state: 10. The modified state diagram and state table with the dummy state 10 included are shown in Fig. (a) below.
 The dummy state 10 is an unstable state; before the input can change, the circuit moves on to the next stable state a.
 Representing the state variables by P and Q, the corresponding feedback variables are represented by p and q respectively.
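That this assignment is race-free can be verified by checking that every transition edge of the modified diagram changes exactly one state variable (Hamming distance 1). A small check, using the codes a: 00, d: 01, e: 11, dummy: 10 from the text (the edge list is reconstructed from the reduced flow table):

```python
# State codes (PQ) from the text: a=00, d=01, e=11, dummy=10.
CODE = {'a': '00', 'd': '01', 'e': '11', 'dummy': '10'}

# Transition edges of the modified diagram, with e -> a routed via the dummy state.
EDGES = [('a', 'd'), ('d', 'a'), ('d', 'e'), ('e', 'd'),
         ('e', 'dummy'), ('dummy', 'a')]

def hamming(u, v):
    """Number of state variables that change on the u -> v transition."""
    return sum(c1 != c2 for c1, c2 in zip(CODE[u], CODE[v]))

assert all(hamming(u, v) == 1 for u, v in EDGES)
print("race-free: every transition changes exactly one state variable")
```

Without the dummy state, the direct e → a edge would be 11 → 00, a two-variable change, which is precisely the race the dummy state removes.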

Design Equations and Circuit Diagram


Use Karnaugh maps to obtain expressions for the state variables P and Q as functions of the inputs A, B and the feedback variables p and q. The equations derived from the Karnaugh maps are shown in the figure below.
The equation for the output X is generated from P and Q, as the Moore model is used.
(a) Reduced state transition diagram from the Fig: State transition diagram of the problem
(b-d) Karnaugh maps and design equations

The final circuit is developed from these equations and is shown in Fig. below.

Fig: Circuit diagram for asynchronous sequential logic
