Dpco CS3351

UNIT-I

COMBINATIONAL LOGIC
Combinational Circuits - K-Map - Analysis and Design Procedures - Binary Adder - Subtractor - Decimal Adder - Magnitude Comparator - Decoder - Encoder - Multiplexers - Demultiplexers

INTRODUCTION:
The digital system consists of two types of circuits, namely
(i) Combinational circuits
(ii) Sequential circuits

A combinational circuit consists of logic gates whose outputs at any time are determined from the present combination of inputs. The logic gate is the most basic building block of combinational logic. The logical function performed by a combinational circuit is fully defined by a set of Boolean expressions.
A sequential logic circuit comprises both logic gates and storage elements such as flip-flops. As a consequence, the output of a sequential circuit depends not only on the present values of the inputs but also on the past state of the inputs.
In the previous chapter, we discussed binary numbers, codes, Boolean algebra, simplification of Boolean functions, and logic gates. In this chapter, the formulation and analysis of various systematic designs of combinational circuits will be discussed.
A combinational circuit consists of input variables, logic gates, and output variables. The logic gates accept signals from the inputs and generate output signals according to the logic circuit employed. In this process, binary information from the given input data is transformed to the desired output data. Both input and output are obviously binary signals, i.e., both the input and output signals can assume two possible states, logic 1 and logic 0.

Block diagram of a combinational logic circuit

For n input variables to a combinational circuit, 2^n combinations of binary input states are possible. For each possible input combination, there is one and only one possible output combination. A combinational logic circuit can be described by m Boolean functions, and each output can be expressed in terms of the n input variables.

DESIGN PROCEDURES:
Any combinational circuit can be designed by the following steps of the design procedure:
1. State the problem.
2. Identify the input and output variables.
3. Assign letter symbols to the input and output variables.
4. Construct a truth table that meets the input-output requirements.
5. Write the Boolean expressions for the various output variables in terms of the input variables.
6. Obtain the simplified Boolean expression for each output by any method of minimization (algebraic method, Karnaugh map method, or tabulation method).
7. Realize a logic diagram from the simplified Boolean expressions using logic gates.

The following guidelines should be followed while choosing the preferred


form for hardware implementation:
1. The implementation should have the minimum number of gates, with the gates used having the minimum number of inputs.
2. There should be a minimum number of interconnections.
3. Limitation on the driving capability of the gates should not be ignored.

Problems:
1. Design a combinational circuit with three inputs and one output. The output is 1 when the binary value of the inputs is less than 3. The output is 0 otherwise.
Solution:
Truth Table:

x y z F
0 0 0 1
0 0 1 1
0 1 0 1
0 1 1 0
1 0 0 0
1 0 1 0
1 1 0 0
1 1 1 0
K-map

Logic Diagram:
The combinational circuit can be drawn
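The K-map figure is not reproduced here; grouping the three 1-cells of the map gives F = x'y' + x'z'. As an illustrative sketch (the function name is ours, not from the text), this expression can be checked exhaustively against the truth table:

```python
# Verify that F = x'y' + x'z' equals 1 exactly when the 3-bit input value is < 3.
def f_simplified(x, y, z):
    return int((not x and not y) or (not x and not z))

for v in range(8):
    x, y, z = (v >> 2) & 1, (v >> 1) & 1, v & 1
    assert f_simplified(x, y, z) == int(v < 3)
print("F = x'y' + x'z' matches the truth table")
```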

2. Design a combinational circuit with three inputs, x, y and z, and three outputs, A, B, and C. When the binary input is 0, 1, 2, or 3, the binary output is one greater than the input. When the binary input is 4, 5, 6, or 7, the binary output is one less than the input.
Solution:
Truth Table:
Derive the truth table that defines the required relationship between inputs and
outputs.

x y z A B C
0 0 0 0 0 1
0 0 1 0 1 0
0 1 0 0 1 1
0 1 1 1 0 0
1 0 0 0 1 1
1 0 1 1 0 0
1 1 0 1 0 1
1 1 1 1 1 0

Obtain the simplified Boolean functions for each output as a function of the
inputvariables.
K-map for output A:

The simplified expression from the map is: A = xy + xz + yz

K-map for output B:

The simplified expression from the map is: B= x’y’z+ x’yz’+ xy’z’+ xyz
Logic Diagram:

K-map for output C:

The simplified expression from the map is: C = z’
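The three simplified outputs, A = xy + xz + yz, B = x ⊕ y ⊕ z and C = z’, can be cross-checked against the word specification (add 1 for inputs 0-3, subtract 1 for inputs 4-7). A short sketch, with a function name of our own choosing:

```python
# A = xy + xz + yz, B = x'y'z + x'yz' + xy'z' + xyz (i.e. x XOR y XOR z), C = z'.
def circuit(x, y, z):
    A = (x & y) | (x & z) | (y & z)
    B = x ^ y ^ z
    C = 1 - z
    return A, B, C

for v in range(8):
    x, y, z = (v >> 2) & 1, (v >> 1) & 1, v & 1
    A, B, C = circuit(x, y, z)
    expected = v + 1 if v <= 3 else v - 1
    assert A * 4 + B * 2 + C == expected   # ABC read as a 3-bit number
print("outputs are input+1 for 0-3 and input-1 for 4-7")
```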

3. A majority circuit is a combinational circuit whose output is equal to 1 if the input variables have more 1’s than 0’s. The output is 0 otherwise. Design a 3-input majority circuit.

Solution:
Truth Table:

x y z F
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 1

1 0 0 0
1 0 1 1
1 1 0 1
1 1 1 1
K-map Simplification:

The simplified expression from the map is: F = xy + xz + yz
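As a quick sketch, the simplified majority expression can be checked against a direct count of the 1’s in the input:

```python
# Majority of three bits: F = xy + xz + yz.
def majority(x, y, z):
    return (x & y) | (x & z) | (y & z)

for v in range(8):
    x, y, z = (v >> 2) & 1, (v >> 1) & 1, v & 1
    assert majority(x, y, z) == int(x + y + z >= 2)
print("F = xy + xz + yz is 1 iff the inputs contain more 1's than 0's")
```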

4. Design a combinational circuit that generates the 9's complement of a BCD digit.
Solution:

Inputs Outputs
A B C D w x y z
0 0 0 0 1 0 0 1
0 0 0 1 1 0 0 0
0 0 1 0 0 1 1 1
0 0 1 1 0 1 1 0
0 1 0 0 0 1 0 1
0 1 0 1 0 1 0 0
0 1 1 0 0 0 1 1
0 1 1 1 0 0 1 0
1 0 0 0 0 0 0 1
1 0 0 1 0 0 0 0
K-map Simplification:

Logic Diagram:
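The K-map figures are not reproduced here. Treating the unused input codes 1010-1111 as don't-cares, the maps reduce to w = A’B’C’, x = B ⊕ C, y = C, z = D’; these derived expressions (ours, not stated in the text) can be checked for every valid BCD digit:

```python
# 9's complement of a BCD digit d (0-9) is 9 - d.
# Derived simplifications (with don't-cares 10-15): w = A'B'C', x = B XOR C, y = C, z = D'
# where A is the MSB and D the LSB of the input digit.
def nines_complement(A, B, C, D):
    w = (1 - A) & (1 - B) & (1 - C)
    x = B ^ C
    y = C
    z = 1 - D
    return w, x, y, z

for d in range(10):
    A, B, C, D = (d >> 3) & 1, (d >> 2) & 1, (d >> 1) & 1, d & 1
    w, x, y, z = nines_complement(A, B, C, D)
    assert w * 8 + x * 4 + y * 2 + z == 9 - d
print("w x y z encodes 9 - d for every BCD digit d")
```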

ARITHMETIC CIRCUITS:
In this section, we will discuss those combinational logic building blocks
that can be used to perform addition and subtraction operations on binary
numbers. Addition and subtraction are the two most commonly used arithmetic
operations, as the other two, namely multiplication and division, are respectively
the processes of repeated addition and repeated subtraction.
The basic building blocks that form the basis of all hardware used to
perform the arithmetic operations on binary numbers are half-adder, full adder,
half- subtractor, full-subtractor.

Half-Adder:
A half-adder is a combinational circuit that can be used to add two binary
bits. It has two inputs that represent the two bits to be added and two outputs,
with one producing the SUM output and the other producing the CARRY.

Block schematic of half-adder

The truth table of a half-adder, showing all possible input combinations and the corresponding outputs, is shown below.
Truth Table:

Inputs Outputs
A B Sum (S) Carry (C)
0 0 0 0
0 1 1 0
1 0 1 0
1 1 0 1
K-map simplification:

The Boolean expressions for the SUM and CARRY outputs are given by the equations,
Sum, S = A’B + AB’ = A ⊕ B
Carry, C = A . B
The first expression, representing the SUM output, is that of an EX-OR gate; the second, representing the CARRY output, is that of an AND gate.
The logic diagram of the half-adder is,

Logic Implementation of Half-adder
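The two equations above translate directly into a one-line model. This sketch (function name ours) confirms that the two outputs together encode the arithmetic sum A + B:

```python
# Half-adder: Sum = A XOR B, Carry = A AND B.
def half_adder(a, b):
    return a ^ b, a & b          # (sum, carry)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        assert 2 * c + s == a + b   # the two outputs encode the arithmetic sum
print("half-adder matches the truth table")
```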

Full-Adder:
A full adder is a combinational circuit that forms the arithmetic sum of three input bits. It consists of three inputs and two outputs. Two of the input variables represent the significant bits to be added; the third input represents the carry from the previous lower significant position. The block diagram of the full adder is given by,

Block schematic of full-adder

The full adder circuit overcomes the limitation of the half-adder, which can
be used to add two bits only. As there are three input variables, eight different
input combinations are possible.

Truth Table:
Inputs Outputs
A B Cin Sum (S) Carry (Cout)
0 0 0 0 0
0 0 1 1 0
0 1 0 1 0
0 1 1 0 1
1 0 0 1 0
1 0 1 0 1
1 1 0 0 1
1 1 1 1 1

K-map simplification:

The Boolean expressions for the SUM and CARRY outputs are given by the equations,
Sum, S = A’B’Cin + A’BC’in + AB’C’in + ABCin = A ⊕ B ⊕ Cin
Carry, Cout = AB + ACin + BCin.

Logic Diagram:

The logic diagram of the full adder can also be implemented with two half-adders and one OR gate. The S output of the second half-adder is the exclusive-OR of Cin and the output of the first half-adder:
Sum, S = Cin ⊕ (A ⊕ B) [x ⊕ y = x’y + xy’]
= Cin ⊕ (A’B + AB’)
= C’in (A’B + AB’) + Cin (A’B + AB’)’ [(x’y + xy’)’ = (xy + x’y’)]
= C’in (A’B + AB’) + Cin (AB + A’B’)
= A’BC’in + AB’C’in + ABCin + A’B’Cin

and the carry output is,
Carry, Cout = AB + Cin (A’B + AB’)
= AB + A’BCin + AB’Cin
= B (A + A’Cin) + AB’Cin
= B (A + Cin) + AB’Cin
= AB + BCin + AB’Cin
= AB + Cin (B + AB’)
= AB + Cin (A + B)
= AB + ACin + BCin.

Implementation of full adder with two half-adders and an OR gate
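The two-half-adder construction above can be modeled directly: the first half-adder takes A and B, the second takes (A ⊕ B) and Cin, and the OR gate merges the two carries. A sketch with our own function names:

```python
# Full adder built from two half-adders and an OR gate, as in the figure above.
def half_adder(a, b):
    return a ^ b, a & b

def full_adder(a, b, cin):
    s1, c1 = half_adder(a, b)       # first half-adder: A, B
    s, c2 = half_adder(s1, cin)     # second half-adder: (A XOR B), Cin
    return s, c1 | c2               # Cout = AB + Cin(A XOR B)

for v in range(8):
    a, b, cin = (v >> 2) & 1, (v >> 1) & 1, v & 1
    s, cout = full_adder(a, b, cin)
    assert 2 * cout + s == a + b + cin
print("full adder from two half-adders matches the truth table")
```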

Half-Subtractor:
A half-subtractor is a combinational circuit that can be used to subtract one binary digit from another to produce a DIFFERENCE output and a BORROW output. The BORROW output here specifies whether a ‘1’ has been borrowed to perform the subtraction.

Block schematic of half-subtractor

The truth table of the half-subtractor, showing all possible input combinations and the corresponding outputs, is shown below.

Inputs Outputs
A B Difference (D) Borrow (Bout)
0 0 0 0
0 1 1 1
1 0 1 0
1 1 0 0

K-map simplification:

The Boolean expressions for the DIFFERENCE and BORROW outputs are
given by the equations,
Difference, D = A’B + AB’ = A ⊕ B
Borrow, Bout = A’ . B
The first expression, representing the DIFFERENCE (D) output, is that of an exclusive-OR gate; the expression for the BORROW output (Bout) is that of an AND gate with input A complemented before it is fed to the gate.
The logic diagram of the half-subtractor is,

Logic Implementation of Half-Subtractor

Comparing a half-subtractor with a half-adder, we find that the expressions for the SUM and DIFFERENCE outputs are just the same. The expression for BORROW in the case of the half-subtractor is also similar to what we have for CARRY in the case of the half-adder: if the input A, i.e., the minuend, is complemented, an AND gate can be used to implement the BORROW output.

Full Subtractor:
A full subtractor performs subtraction operation on two bits, a minuend
and a subtrahend, and also takes into consideration whether a ‘1’ has already been
borrowed by the previous adjacent lower minuend bit or not.
As a result, there are three bits to be handled at the input of a full
subtractor, namely the two bits to be subtracted and a borrow bit designated as
Bin. There are two outputs, namely the DIFFERENCE output D and the BORROW
output Bo. The BORROW output bit tells whether the minuend bit needs to borrow
a ‘1’ from the next possible higher minuend bit.

Block schematic of full- subtractor


The truth table for full-subtractor is,
Inputs Outputs
A B Bin Difference(D) Borrow(Bout)
0 0 0 0 0
0 0 1 1 1
0 1 0 1 1
0 1 1 0 1
1 0 0 1 0
1 0 1 0 0
1 1 0 0 0
1 1 1 1 1

K-map simplification:

The Boolean expressions for the DIFFERENCE and BORROW outputs are given by the equations,
Difference, D = A’B’Bin + A’BB’in + AB’B’in + ABBin = A ⊕ B ⊕ Bin
Borrow, Bout = A’B + A’Bin + BBin.

The logic diagram for the above functions is shown as,

Implementation of full- subtractor

The logic diagram of the full-subtractor can also be implemented with two half-subtractors and one OR gate. The DIFFERENCE output of the second half-subtractor is the exclusive-OR of Bin and the output of the first half-subtractor:
Difference, D = Bin ⊕ (A ⊕ B) [x ⊕ y = x’y + xy’]
= Bin ⊕ (A’B + AB’)
= B’in (A’B + AB’) + Bin (A’B + AB’)’ [(x’y + xy’)’ = (xy + x’y’)]
= B’in (A’B + AB’) + Bin (AB + A’B’)
= A’BB’in + AB’B’in + ABBin + A’B’Bin

and the borrow output is,
Borrow, Bout = A’B + Bin (A’B + AB’)’ [(x’y + xy’)’ = (xy + x’y’)]
= A’B + Bin (AB + A’B’)
= A’B + ABBin + A’B’Bin
= B (A’ + ABin) + A’B’Bin
= B (A’ + Bin) + A’B’Bin
= A’B + BBin + A’B’Bin
= A’B + Bin (B + A’B’)
= A’B + Bin (B + A’)
= A’B + BBin + A’Bin.

Therefore,
We can implement full-subtractor using two half-subtractors and OR gate as,

Implementation of full-subtractor with two half-subtractors and an OR gate
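The construction mirrors the full adder: the first half-subtractor computes A − B, the second subtracts Bin from that difference, and the OR gate merges the two borrows. A sketch with our own function names, checked against signed arithmetic (A − B − Bin = D − 2·Bout):

```python
# Full subtractor from two half-subtractors and an OR gate, as in the figure above.
def half_subtractor(a, b):
    return a ^ b, (1 - a) & b       # (difference, borrow), Bout = A'B

def full_subtractor(a, b, bin_):
    d1, b1 = half_subtractor(a, b)      # first half-subtractor: A, B
    d, b2 = half_subtractor(d1, bin_)   # second: (A XOR B), Bin
    return d, b1 | b2                   # Bout = A'B + Bin(A XOR B)'

for v in range(8):
    a, b, bn = (v >> 2) & 1, (v >> 1) & 1, v & 1
    d, bout = full_subtractor(a, b, bn)
    assert a - b - bn == d - 2 * bout
print("full subtractor matches the truth table")
```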

Binary Adder (Parallel Adder)


The 4-bit binary adder using full adder circuits is capable of adding two 4-bit
numbers resulting in a 4-bit sum and a carry output as shown in figure

4-bit binary parallel Adder


Since all the bits of augend and addend are fed into the adder circuits
simultaneously and the additions in each position are taking place at the same
time, this circuit is known as parallel adder.

Let the 4-bit words to be added be represented by,


A3 A2 A1 A0= 1 1 1 1 and B3 B2 B1 B0= 1 0 1 1.

The bits are added with full adders, starting from the least significant position, to form the sum bit and carry bit. The input carry C0 in the least significant position must be 0. The carry output of the lower-order stage is connected to the carry input of the next higher-order stage. Hence this type of adder is called a ripple-carry adder.

In the least significant stage, A0, B0 and C0 (which is 0) are added, resulting in sum S0 and carry C1. This carry C1 becomes the carry input to the second stage. Similarly, in the second stage, A1, B1 and C1 are added, resulting in sum S1 and carry C2; in the third stage, A2, B2 and C2 are added, resulting in sum S2 and carry C3; in the fourth stage, A3, B3 and C3 are added, resulting in sum S3 and C4, which is the output carry. Thus the circuit results in a sum (S3 S2 S1 S0) and a carry output (Cout).
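The worked example A = 1111 and B = 1011 can be traced stage by stage in this sketch (function names ours; bit lists are given LSB first so the carry ripples left to right through the loop):

```python
# 4-bit ripple-carry adder: each stage's carry-out feeds the next stage's carry-in.
def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_adder4(A, B, c0=0):
    S, c = [], c0
    for a, b in zip(A, B):          # bit lists LSB first: [A0, A1, A2, A3]
        s, c = full_adder(a, b, c)
        S.append(s)
    return S, c                     # (sum bits LSB first, final carry C4)

A = [1, 1, 1, 1]                    # A3 A2 A1 A0 = 1111
B = [1, 1, 0, 1]                    # B3 B2 B1 B0 = 1011
S, cout = ripple_adder4(A, B)
assert (S, cout) == ([0, 1, 0, 1], 1)   # 1111 + 1011 = 1 1010
print("sum =", S[::-1], "carry out =", cout)
```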

Though the parallel binary adder is said to generate its output immediately
after the inputs are applied, its speed of operation is limited by the carry
propagation delay through all stages. However, there are several methods to
reducethis delay.

One of the methods of speeding up this process is look-ahead carry addition.

Carry Look Ahead Adder:


In Parallel adder, all the bits of the augend and the addend are available
for computation at the same time. The carry output of each full-adder stage is
connected to the carry input of the next high-order stage. Since each bit of the
sum output depends on the value of the input carry, time delay occurs in the
addition process.

For example, addition of the two numbers (1111 + 1011) gives the result 11010. Addition of the LSB position produces a carry into the second position. This carry, when added to the bits of the second position, produces a carry into the third position, and that carry in turn produces a carry into the last position. The sum bit generated in the last position (MSB) depends on the carry that was generated by the addition in the previous position, i.e., the adder will not produce a correct result until the LSB carry has propagated through the intermediate full adders. This represents a time delay that depends on the propagation delay produced in each full adder. For example, if each full adder is considered to have a propagation delay of 8 ns, then S3 will not reach its correct value until 24 ns after the LSB carry is generated. Therefore the total time required to perform the addition is 24 + 8 = 32 ns.

The method of speeding up this process by eliminating the inter-stage carry delay is called look-ahead carry addition. This method utilizes logic gates to look at the lower-order bits of the augend and addend to see if a higher-order carry is to be generated. It uses two functions: carry generate (Gi) and carry propagate (Pi).

Full-Adder circuit
Consider the circuit of the full adder shown above. Here we define two new signals:
Carry propagate, Pi = Ai ⊕ Bi
Carry generate, Gi = Ai . Bi
The output sum and carry can then be expressed as,
Si = Pi ⊕ Ci
Ci+1 = Gi + PiCi
Gi (carry generate) produces a carry of 1 when both Ai and Bi are 1, regardless of the input carry Ci. Pi (carry propagate) is the term associated with the propagation of the carry from Ci to Ci+1.
Write the Boolean functions for the carry outputs of each stage and substitute for each Ci its value from the previous equation:
C0= input carry
C1= G0 + P0C0
C2= G1 + P1C1 = G1 + P1 (G0 + P0C0)
= G1 + P1G0 + P1P0C0
C3= G2 + P2C2 = G2 + P2 (G1 + P1G0 + P1P0C0)
= G2 + P2G1 + P2P1G0 + P2P1P0C0

Since the Boolean function for each output carry is expressed in sum of
products, each function can be implemented with one level of AND gates
followed by an OR gate. The three Boolean functions for C1, C2 and C3 are
implemented in the carry look-ahead generator as shown below.

Logic diagram of Carry Look-ahead Generator


Note that C3 does not have to wait for C2 and C1 to propagate; in fact C3 is
propagated at the same time as C1 and C2.
Using a Look-ahead Generator we can easily construct a 4-bit parallel
adder with a Look-ahead carry scheme. Each sum output requires two exclusive-
OR gates. The output of the first exclusive-OR gate generates the Pi variable, and
the AND gate generates the Gi variable. The carries are propagated through the
carry look-ahead generator and applied as inputs to the second exclusive-OR

gate. All output carries are generated after a delay through two levels of gates.

Thus, outputs S1 through S3 have equal propagation delay times.

4-Bit Adder with Carry Look-ahead
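The generate/propagate equations above can be sketched as follows (function name ours). In hardware each Ci+1 is expanded into the two-level sum-of-products form derived earlier; the loop below computes the same values:

```python
# Carry look-ahead: Gi = Ai.Bi, Pi = Ai XOR Bi, Ci+1 = Gi + Pi.Ci, Si = Pi XOR Ci.
def cla_adder4(A, B, c0=0):                  # bit lists LSB first
    G = [a & b for a, b in zip(A, B)]        # carry generate
    P = [a ^ b for a, b in zip(A, B)]        # carry propagate
    C = [c0]
    for i in range(4):
        C.append(G[i] | (P[i] & C[i]))       # expands to the SOP forms above
    S = [P[i] ^ C[i] for i in range(4)]
    return S, C[4]

S, c4 = cla_adder4([1, 1, 1, 1], [1, 1, 0, 1])
assert (S, c4) == ([0, 1, 0, 1], 1)          # 1111 + 1011 = 1 1010, as before
print("look-ahead carries agree with the ripple-carry result")
```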

Binary Subtractor (Parallel Subtractor)


The subtraction of unsigned binary numbers can be done most
conveniently by means of complements. The subtraction (A – B) can be done by
taking the 2’s complement of B and adding it to A. The 2’s complement can be
obtained by taking the 1’s complement and adding 1 to the least significant pair
of bits. The 1’s complement can be implemented with inverters and a 1 can be
added to the sum through the input carry.

The circuit for subtracting (A – B) consists of an adder with inverters


placed between each data input B and the corresponding input of the full adder.
The input carry C0 must be equal to 1 when performing subtraction. The operation thus performed becomes A, plus the 1’s complement of B, plus 1. This is equal to A plus the 2’s complement of B.

Let the 4-bit words to be subtracted be represented by, A3 A2 A1 A0= 1 1 1 1 and


B3 B2 B1 B0= 1 0 1 1.

4-bit Parallel Subtractor


Parallel Adder/ Subtractor

The addition and subtraction operations can be combined into one circuit with one common binary adder. This is done by including an exclusive-OR gate with each full adder. A 4-bit adder-subtractor circuit is shown below.

4-Bit Adder Subtractor


The mode input M controls the operation. When M = 0, the circuit is an adder, and when M = 1, the circuit becomes a subtractor. Each exclusive-OR gate receives input M and one of the inputs of B. When M = 0, we have B ⊕ 0 = B. The full adders receive the value of B, the input carry is 0, and the circuit performs A plus B. When M = 1, we have B ⊕ 1 = B’ and C0 = 1. The B inputs are all complemented and a 1 is added through the input carry. The circuit performs the operation A plus the 2’s complement of B. The exclusive-OR with output V is for detecting an overflow.
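The mode behaviour can be sketched directly (function names ours): every B bit is XORed with M and M also feeds the input carry, so M = 1 adds the 2’s complement of B:

```python
# 4-bit adder/subtractor: mode M XORs each B bit and supplies the input carry C0 = M.
def full_adder(a, b, cin):
    return a ^ b ^ cin, (a & b) | (a & cin) | (b & cin)

def add_sub4(A, B, M):              # bit lists LSB first
    S, c = [], M                    # input carry C0 = M
    for a, b in zip(A, B):
        s, c = full_adder(a, b ^ M, c)   # B XOR 0 = B, B XOR 1 = B'
        S.append(s)
    return S, c

assert add_sub4([1, 1, 0, 0], [1, 0, 1, 0], 0) == ([0, 0, 0, 1], 0)  # 3 + 5 = 8
assert add_sub4([1, 0, 1, 0], [1, 1, 0, 0], 1) == ([0, 1, 0, 0], 1)  # 5 - 3 = 2
print("M=0 adds, M=1 subtracts")
```

The final carry of 1 in the subtraction case indicates that no borrow occurred (the result is positive).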

Decimal Adder (BCD Adder)


The digital system handles decimal numbers in the form of binary coded decimal (BCD). A BCD adder is a circuit that adds two BCD digits and produces a sum digit, also in BCD.
Consider the arithmetic addition of two decimal digits in BCD, together with an input carry from a previous stage. Since each input digit does not exceed 9, the output sum cannot be greater than 9 + 9 + 1 = 19, the 1 in the sum being an input carry. The adder will form the sum in binary and produce a result that ranges from 0 through 19.
These binary numbers are labeled by the symbols C, S3, S2, S1, S0, where C is the carry. The columns under the binary sum list the binary values that appear in the outputs of the 4-bit binary adder. The output sum of the two decimal digits must be represented in BCD.
To implement BCD adder:
 For the initial addition, a 4-bit binary adder is required,
 A combinational circuit to detect whether the sum is greater than 9, and
 One more 4-bit adder to add 6 (0110)2 to the sum of the first 4-bit adder, if the sum is greater than 9 or the carry is 1.
The logic circuit to detect sum greater than 9 can be determined by
simplifying the Boolean expression of the given truth table.

The two decimal digits, together with the input carry, are first added in the top 4-bit binary adder to provide the binary sum. When the output of the detection logic is equal to zero, nothing is added to the binary sum. When it is equal to one, binary (0110)2 is added to the binary sum through the bottom 4-bit adder.

The output carry generated from the bottom adder can be ignored, since it
supplies information already available at the output carry terminal. The output carry
from one stage must be connected to the input carry of the next higher-order stage.
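The whole scheme can be sketched in a few lines (function name ours). The detection expression used below, carry = C + S3S2 + S3S1, is the standard simplification of the "sum greater than 9" truth table referred to above:

```python
# BCD adder: binary sum first; if the sum exceeds 9 or a carry occurs, add 6 (0110).
def bcd_adder(a, b, cin=0):         # a, b are BCD digits 0-9
    binary = a + b + cin            # first 4-bit binary adder (values 0-19)
    c = binary >> 4                 # output carry of the first adder
    s = binary & 0b1111             # S3 S2 S1 S0
    s3, s2, s1 = (s >> 3) & 1, (s >> 2) & 1, (s >> 1) & 1
    carry = c | (s3 & s2) | (s3 & s1)   # detection logic: sum > 9 or carry
    if carry:
        s = (s + 0b0110) & 0b1111   # second adder adds 6; its own carry is ignored
    return s, carry                 # (BCD sum digit, carry to next decimal stage)

assert bcd_adder(9, 9, 1) == (9, 1)     # 9 + 9 + 1 = 19 -> digit 9, carry 1
assert bcd_adder(4, 5) == (9, 0)
print("BCD sums stay in 0-9 with a decimal carry")
```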

MULTIPLEXER: (Data Selector)


A multiplexer, or MUX, is a combinational circuit with more than one input line, one output line and more than one selection line. A multiplexer selects binary information present on one of many input lines, depending upon the logic status of the selection inputs, and routes it to the output line. Normally, there are 2^n input lines and n selection lines whose bit combinations determine which input is selected. The multiplexer is often labeled as MUX in block diagrams.
A multiplexer is also called a data selector, since it selects one of many
inputsand steers the binary information to the output line.

Block diagram of Multiplexer


2-to-1- line Multiplexer:
The circuit has two data input lines, one output line and one selection line, S.
When S= 0, the upper AND gate is enabled and I0 has a path to the output.
When S=1, the lower AND gate is enabled and I1 has a path to the output.

Logic diagram
The multiplexer acts like an electronic switch that selects one of the two
sources.

S Y
0 I0
1 I1

4-to-1-line Multiplexer
A 4-to-1-line multiplexer has four (2n) input lines, two (n) select lines and
one output line. It is the multiplexer consisting of four input channels and
information ofone of the channels can be selected and transmitted to an output
line according to the select inputs combinations. Selection of one of the four
input channel is possibleby two selection inputs.
Each of the four inputs I0 through I3, is applied to one input of AND
gate.

the AND gate are applied to a single OR gate that provides the 1-line output.

4-to-1-Line Multiplexer
Function table:

S1 S0 Y
0 0 I0
0 1 I1
1 0 I2
1 1 I3
To demonstrate the circuit operation, consider the case when S1S0 = 10. The AND gate associated with input I2 has two of its inputs equal to 1 and the third input connected to I2. The other three AND gates have at least one input equal to 0, which makes their outputs equal to 0. The OR output is now equal to the value of I2, providing a path from the selected input to the output.
The data output is equal to I0 only if S1 = 0 and S0 = 0; Y = I0S1’S0’.
The data output is equal to I1 only if S1 = 0 and S0 = 1; Y = I1S1’S0.
The data output is equal to I2 only if S1 = 1 and S0 = 0; Y = I2S1S0’.
The data output is equal to I3 only if S1 = 1 and S0 = 1; Y = I3S1S0.
When these terms are ORed, the total expression for the data output is,
Y = I0S1’S0’ + I1S1’S0 + I2S1S0’ + I3S1S0.
As in a decoder, multiplexers may have an enable input to control the operation of the unit. When the enable input is in the inactive state, the outputs are disabled, and when it is in the active state, the circuit functions as a normal multiplexer.
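The total output expression can be sketched directly as the OR of the four AND terms (function name ours); the assertion confirms it is equivalent to simply indexing the inputs with the select code:

```python
# 4-to-1 MUX: Y = I0.S1'S0' + I1.S1'S0 + I2.S1.S0' + I3.S1.S0
def mux4(I, s1, s0):
    return (I[0] & (1 - s1) & (1 - s0) |
            I[1] & (1 - s1) & s0 |
            I[2] & s1 & (1 - s0) |
            I[3] & s1 & s0)

I = [0, 1, 1, 0]
for s1 in (0, 1):
    for s0 in (0, 1):
        assert mux4(I, s1, s0) == I[s1 * 2 + s0]   # the select code indexes the inputs
print("the OR of the four AND terms routes the selected input to Y")
```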

Quadruple 2-to-1 Line Multiplexer


This circuit has four multiplexers, each capable of selecting one of two input lines. Output Y0 can be selected to come from either A0 or B0. Similarly, output Y1 may have the value of A1 or B1, and so on. A single selection input S selects one of the two lines in all four multiplexers, and the enable input E must be active for normal operation.
Although the circuit contains four 2-to-1-line multiplexers, it is viewed as a circuit that selects one of two 4-bit sets of data lines. The unit is enabled when E = 0. Then, if S = 0, the four A inputs have a path to the four outputs. On the other hand, if S = 1, the four B inputs are applied to the outputs. The outputs have all 0’s when E = 1, regardless of the value of S.

Application:

1. They are used as a data selector to select out of many data inputs.
2. They can be used to implement combinational logic circuit.
3. They are used in time multiplexing systems.
4. They are used in frequency multiplexing systems.
5. They are used in A/D and D/A converter.
6. They are used in data acquisition systems.

Implementation of Boolean Function using MUX:


1. Implement the following Boolean function using a 4:1 multiplexer,
F (A, B, C) = ∑m (1, 3, 5, 6).
Solution:

Variables, n = 3 (A, B, C)
Select lines = n-1 = 2 (S1, S0)
2^(n-1)-to-1 MUX, i.e., 2^2-to-1 = 4-to-1 MUX
Input lines = 2^(n-1) = 2^2 = 4 (D0, D1, D2, D3)
Implementation table:
Apply variables A and B to the select lines. The procedure for implementing the function is:
i. List the inputs of the multiplexer.
ii. List under them all the minterms in two rows, as shown below. The first row contains the minterms with C = 0 (associated with C’) and the second row those with C = 1 (associated with C).
iii. Circle the minterms of the given function and determine each multiplexer input:
1. If neither minterm in the column is circled, apply 0 to the corresponding input.
2. If both minterms in the column are circled, apply 1 to the corresponding input.
3. If the bottom minterm is circled and the top is not circled, apply C to the input.
4. If the top minterm is circled and the bottom is not circled, apply C’ to the input.

Multiplexer Implementation:
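The implementation-table procedure can be mimicked programmatically. This sketch (function name ours) derives the four data inputs for F(A, B, C) = ∑m(1, 3, 5, 6) with A and B on the select lines:

```python
# For each select combination (A, B), inspect the column's two minterms
# (top: C = 0, bottom: C = 1) and pick 0, 1, C, or C' per the rules above.
minterms = {1, 3, 5, 6}

def data_input(a, b):
    top = (4 * a + 2 * b + 0) in minterms      # minterm with C = 0
    bottom = (4 * a + 2 * b + 1) in minterms   # minterm with C = 1
    if top and bottom:
        return "1"
    if not top and not bottom:
        return "0"
    return "C" if bottom else "C'"

inputs = [data_input(a, b) for a in (0, 1) for b in (0, 1)]
assert inputs == ["C", "C", "C", "C'"]
print("D0..D3 =", inputs)
```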

2. F (x, y, z) = ∑m (1, 2, 6, 7)
Implementation table:

Multiplexer Implementation:

3. F ( A, B, C) = ∑m (1, 2, 4, 5)
Solution:
Variables, n = 3 (A, B, C)
Select lines = n-1 = 2 (S1, S0)
2^(n-1)-to-1 MUX, i.e., 2^2-to-1 = 4-to-1 MUX
Input lines = 2^(n-1) = 2^2 = 4 (D0, D1, D2, D3)

Multiplexer Implementation

4. F( A, B, C, D)= ∑m (0, 1, 3, 4, 8, 9, 15)



Solution:
Variables, n = 4 (A, B, C, D)
Select lines = n-1 = 3 (S2, S1, S0)
2^(n-1)-to-1 MUX, i.e., 2^3-to-1 = 8-to-1 MUX
Input lines = 2^(n-1) = 2^3 = 8 (D0, D1, D2, D3, D4, D5, D6, D7)
Implementation table:

Multiplexer Implementation:

5. Implement the Boolean function using 8: 1 and also using 4:1 multiplexer
F (A, B, C, D) = ∑m (0, 1, 2, 4, 6, 9, 12, 14)
Solution:
Variables, n = 4 (A, B, C, D)
Select lines = n-1 = 3 (S2, S1, S0)
2^(n-1)-to-1 MUX, i.e., 2^3-to-1 = 8-to-1 MUX
Input lines = 2^(n-1) = 2^3 = 8 (D0, D1, D2, D3, D4, D5, D6, D7)
Implementation table:

Multiplexer Implementation (Using 8: 1 MUX)

Using 4: 1 MUX:

6. F (A, B, C, D) = ∑m (1, 3, 4, 11, 12, 13, 14, 15)


Solution:
Variables, n = 4 (A, B, C, D)
Select lines = n-1 = 3 (S2, S1, S0)
2^(n-1)-to-1 MUX, i.e., 2^3-to-1 = 8-to-1 MUX
Input lines = 2^(n-1) = 2^3 = 8 (D0, D1, D2, D3, D4, D5, D6, D7)
Implementation table:

Multiplexer Implementation:

7. Implement the Boolean function using 8: 1 multiplexer.


F (A, B, C, D) = A’BD’ + ACD + B’CD + A’C’D.
Solution:
Convert into standard SOP form,
F = A’BD’ (C’ + C) + ACD (B’ + B) + B’CD (A’ + A) + A’C’D (B’ + B)
= A’BC’D’ + A’BCD’ + AB’CD + ABCD + A’B’CD + AB’CD + A’B’C’D + A’BC’D
= A’BC’D’ + A’BCD’ + AB’CD + ABCD + A’B’CD + A’B’C’D + A’BC’D
= m4 + m6 + m11 + m15 + m3 + m1 + m5
= ∑m (1, 3, 4, 5, 6, 11, 15)
Implementation table:

Multiplexer Implementation:

8. Implement the Boolean function using 8: 1 multiplexer.


F (A, B, C, D) = AB’D + A’C’D + B’CD’ + AC’D.
Solution:
F = AB’D (C’ + C) + A’C’D (B’ + B) + B’CD’ (A’ + A) + AC’D (B’ + B)
= AB’C’D + AB’CD + A’B’C’D + A’BC’D + A’B’CD’ + AB’CD’ + AB’C’D + ABC’D
= AB’C’D + AB’CD + A’B’C’D + A’BC’D + A’B’CD’ + AB’CD’ + ABC’D
= m9 + m11 + m1 + m5 + m2 + m10 + m13
= ∑m (1, 2, 5, 9, 10, 11, 13).
Implementation Table:
Multiplexer Implementation:

9. Implement the Boolean function using 8: 1 and also using 4:1 multiplexer
F (w, x, y, z) = ∑m (1, 2, 3, 6, 7, 8, 11, 12, 14)
Solution:
Variables, n = 4 (w, x, y, z)
Select lines = n-1 = 3 (S2, S1, S0)
2^(n-1)-to-1 MUX, i.e., 2^3-to-1 = 8-to-1 MUX
Input lines = 2^(n-1) = 2^3 = 8 (D0, D1, D2, D3, D4, D5, D6, D7)
Implementation table:

Multiplexer Implementation (Using 8:1 MUX):


(Using 4:1 MUX)

10. Implement the Boolean function using 8: 1 multiplexer


F (A, B, C, D) = ∏m (0, 3, 5, 8, 9, 10, 12, 14)
Solution:
Variables, n = 4 (A, B, C, D)
Select lines = n-1 = 3 (S2, S1, S0)
2^(n-1)-to-1 MUX, i.e., 2^3-to-1 = 8-to-1 MUX
Input lines = 2^(n-1) = 2^3 = 8 (D0, D1, D2, D3, D4, D5, D6, D7)
Implementation table:

Multiplexer Implementation:

11. Implement the Boolean function using 8: 1 multiplexer


F (A, B, C, D) = ∑m (0, 2, 6, 10, 11, 12, 13) + d (3, 8, 14)
Solution:
Variables, n = 4 (A, B, C, D)
Select lines = n-1 = 3 (S2, S1, S0)
2^(n-1)-to-1 MUX, i.e., 2^3-to-1 = 8-to-1 MUX
Input lines = 2^(n-1) = 2^3 = 8 (D0, D1, D2, D3, D4, D5, D6, D7)
Implementation Table:
Multiplexer Implementation:

DEMULTIPLEXER

Demultiplex means one into many. Demultiplexing is the process of taking information from one input and transmitting the same over one of several outputs.
A demultiplexer is a combinational logic circuit that receives information on a single input and transmits the same information over one of several (2^n) output lines.

Block diagram of demultiplexer

The block diagram of a demultiplexer, which is opposite to a multiplexer in its operation, is shown above. The circuit has one input signal, ‘n’ select signals and 2^n output signals. The select inputs determine to which output the data input will be connected. As the serial data is changed to parallel data, i.e., the input is caused to appear on one of the 2^n output lines, the demultiplexer is also called a “data distributor” or a “serial-to-parallel converter”.

1-to-4 line Demultiplexer


A 1-to-4 demultiplexer has a single input, Din, four outputs (Y0 to Y3) and two select inputs (S1 and S0).

Logic Symbol

The input variable Din has a path to all four outputs, but the input information is directed to only one of the output lines. The truth table of the 1-to-4 demultiplexer is shown below.

Enable S1 S0 Din Y0 Y1 Y2 Y3
0 x x x 0 0 0 0
1 0 0 0 0 0 0 0
1 0 0 1 1 0 0 0
1 0 1 0 0 0 0 0
1 0 1 1 0 1 0 0
1 1 0 0 0 0 0 0
1 1 0 1 0 0 1 0
1 1 1 0 0 0 0 0
1 1 1 1 0 0 0 1
Truth table of 1-to-4 demultiplexer
From the truth table, it is clear that the data input, Din is connected to the
output Y0, when S1= 0 and S0= 0 and the data input is connected to output Y1
when S1= 0 and S0= 1. Similarly, the data input is connected to output Y2 and Y3
when S1= 1and S0= 0 and when S1= 1 and S0= 1, respectively. Also, from the truth
table, the expression for outputs can be written as follows,
Y0= S1’S0’Din
Y1= S1’S0Din
Y2= S1S0’Din
Y3= S1S0Din

Logic diagram of 1-to-4 demultiplexer

Now, using the above expressions, a 1-to-4 demultiplexer can be implemented using four 3-input AND gates and two NOT gates. Here, the single data input line, Din, is connected to all the AND gates. The two select lines, S1 and S0, enable only one gate at a time, and the data that appears on the input line passes through the selected gate to the associated output line.
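The four output expressions can be sketched as one function (name ours), returning the list [Y0, Y1, Y2, Y3]:

```python
# 1-to-4 demultiplexer: Y0 = S1'S0'Din, Y1 = S1'S0.Din, Y2 = S1.S0'Din, Y3 = S1.S0.Din
def demux4(din, s1, s0):
    return [(1 - s1) & (1 - s0) & din,
            (1 - s1) & s0 & din,
            s1 & (1 - s0) & din,
            s1 & s0 & din]

assert demux4(1, 1, 0) == [0, 0, 1, 0]   # Din routed to Y2 when S1S0 = 10
assert demux4(0, 1, 0) == [0, 0, 0, 0]   # a 0 input leaves every output at 0
print("the select code steers Din to exactly one output line")
```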

1-to-8 Demultiplexer
A 1-to-8 demultiplexer has a single input, Din, eight outputs (Y0 to Y7) and
three select inputs (S2, S1 and S0). It distributes one input line to eight output lines
based on the select inputs. The truth table of 1-to-8 demultiplexer is shown below.
Din S2 S1 S0 Y7 Y6 Y5 Y4 Y3 Y2 Y1 Y0
0 x x x 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0 0 0 1
1 0 0 1 0 0 0 0 0 0 1 0
1 0 1 0 0 0 0 0 0 1 0 0
1 0 1 1 0 0 0 0 1 0 0 0
1 1 0 0 0 0 0 1 0 0 0 0
1 1 0 1 0 0 1 0 0 0 0 0
1 1 1 0 0 1 0 0 0 0 0 0
1 1 1 1 1 0 0 0 0 0 0 0
Truth table of 1-to-8 demultiplexer

From the above truth table, it is clear that the data input is connected with one of the eight outputs based on the select inputs. Now from this truth table, the expressions for the eight outputs can be written as follows:
Y0 = S2’S1’S0’Din    Y4 = S2S1’S0’Din
Y1 = S2’S1’S0Din    Y5 = S2S1’S0Din
Y2 = S2’S1S0’Din    Y6 = S2S1S0’Din
Y3 = S2’S1S0Din    Y7 = S2S1S0Din

Now, using the above expressions, the logic diagram of a 1-to-8 demultiplexer can be drawn as shown below. Here, the single data line, Din, is connected to all the AND gates, and the select lines enable only one AND gate at a time. For example, if S2S1S0 = 000, then only AND gate-0 will be enabled and thereby the data input, Din, will appear at Y0. Similarly, for the other combinations of the select inputs, the input Din will appear at the respective output.

Logic diagram of 1-to-8 demultiplexer

1. Design 1:8 demultiplexer using two 1:4 DEMUX.

2. Implement full subtractor using demultiplexer.
Inputs Outputs
A B Bin Difference(D) Borrow(Bout)
0 0 0 0 0
0 0 1 1 1
0 1 0 1 1
0 1 1 0 1
1 0 0 1 0
1 0 1 0 0
1 1 0 0 0
1 1 1 1 1

Applications:
1. It can be used as a decoder
2. It can be used as a data distributer
3. It is used in time division multiplexing at the receiving end as a data separator.
4. It can be used to implement Boolean expressions.

DECODERS
A decoder is a combinational circuit that converts binary information from ‘n’ input lines to a maximum of 2^n unique output lines. The general structure of the decoder circuit is

General structure of decoder

The encoded information is presented as ‘n’ inputs producing 2^n possible outputs. The 2^n output values are from 0 through 2^n - 1. A decoder is provided with enable inputs to activate the decoded output based on the data inputs. When the enable input is active, exactly one output is asserted according to the input combination.
2-to-4 Line decoder

Binary Decoder (2 to 4 decoder)


A binary decoder has an n-bit binary input and one activated output out of 2^n outputs. A binary decoder is used when it is necessary to activate exactly one of its 2^n outputs based on the n-bit input value.
Here the 2 inputs are decoded into 4 outputs, each output representing one of the minterms of the two input variables.

Inputs Outputs
Enable A B Y3 Y2 Y1 Y0
0 x x 0 0 0 0
1 0 0 0 0 0 1
1 0 1 0 0 1 0
1 1 0 0 1 0 0
1 1 1 1 0 0 0
As shown in the truth table, if the enable input is 1 (EN= 1), only one of the outputs (Y0–Y3) is active for a given input.
The output Y0 is active, i.e., Y0= 1, when inputs A= B= 0;
Y1 is active when inputs A= 0 and B= 1;
Y2 is active when inputs A= 1 and B= 0;
Y3 is active when inputs A= B= 1.
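A behavioural Python sketch of this decoder (illustrative only; the output list is ordered [Y3, Y2, Y1, Y0] to match the truth table):

```python
def decoder_2_to_4(enable, a, b):
    """2-to-4 line decoder with enable input; returns [Y3, Y2, Y1, Y0]."""
    y = [0, 0, 0, 0]                 # indexed as Y0..Y3
    if enable:
        y[(a << 1) | b] = 1          # exactly one minterm goes high
    return list(reversed(y))         # present in truth-table order Y3..Y0

print(decoder_2_to_4(1, 1, 0))       # [0, 1, 0, 0]: Y2 active for A=1, B=0
```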
3-to-8 Line Decoder
A 3-to-8 line decoder has three inputs (A, B, C) and eight outputs (Y0–Y7). Based on the 3 inputs, one of the eight outputs is selected.
The three inputs are decoded into eight outputs, each output representing one of the minterms of the 3-input variables. This decoder is used for binary-to-octal
conversion. The input variables may represent a binary number and the outputs will
represent the eight digits in the octal number system. The output variables are
mutually exclusive because only one output can be equal to 1 at any one time. The
output line whose value is equal to 1 represents the minterm equivalent of the binary
number presently available in the input lines.
Inputs Outputs
A B C Y0 Y1 Y2 Y3 Y4 Y5 Y6 Y7
0 0 0 1 0 0 0 0 0 0 0
0 0 1 0 1 0 0 0 0 0 0
0 1 0 0 0 1 0 0 0 0 0
0 1 1 0 0 0 1 0 0 0 0
1 0 0 0 0 0 0 1 0 0 0
1 0 1 0 0 0 0 0 1 0 0
1 1 0 0 0 0 0 0 0 1 0
1 1 1 0 0 0 0 0 0 0 1

3-to-8 line decoder

BCD to 7-Segment Display Decoder


A seven-segment display is normally used for displaying any one of the decimal digits, 0 through 9. A BCD-to-seven-segment decoder accepts a decimal digit in BCD and generates the corresponding seven-segment code.

Each segment is made up of a material that emits light when current is passed through it. The segments activated during each digit display are tabulated as follows:
Digit Display Segments Activated

0 a, b, c, d, e, f

1 b, c

2 a, b, d, e, g

3 a, b, c, d, g

4 b, c, f, g

5 a, c, d, f, g

6 a, c, d, e, f, g

7 a, b, c

8 a, b, c, d, e, f, g

9 a, b, c, d, f, g
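The table above maps directly to a lookup structure. A small Python sketch (the dictionary encoding is an assumption for illustration, with segments ordered a through g):

```python
# Segments activated for each decimal digit, taken from the table above
SEGMENTS = {
    0: "abcdef",  1: "bc",     2: "abdeg", 3: "abcdg",   4: "bcfg",
    5: "acdfg",   6: "acdefg", 7: "abc",   8: "abcdefg", 9: "abcdfg",
}

def segment_code(digit):
    """Return the 7-bit (a..g) pattern for a BCD digit, first bit = segment a."""
    on = SEGMENTS[digit]
    return [1 if seg in on else 0 for seg in "abcdefg"]

print(segment_code(2))    # [1, 1, 0, 1, 1, 0, 1]: segments a, b, d, e, g
```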

Truth table:
BCD code 7-Segment code
Digit
A B C D a b c d e f g
0 0 0 0 0 1 1 1 1 1 1 0
1 0 0 0 1 0 1 1 0 0 0 0
2 0 0 1 0 1 1 0 1 1 0 1
3 0 0 1 1 1 1 1 1 0 0 1
4 0 1 0 0 0 1 1 0 0 1 1
5 0 1 0 1 1 0 1 1 0 1 1
6 0 1 1 0 1 0 1 1 1 1 1
7 0 1 1 1 1 1 1 0 0 0 0
8 1 0 0 0 1 1 1 1 1 1 1
9 1 0 0 1 1 1 1 1 0 1 1

K-map Simplification:

Logic Diagram

BCD to 7-segment display decoder

Applications of decoders:
1. Decoders are used in counter system.
2. They are used in analog to digital converter.
3. Decoder outputs can be used to drive a display system.
4. Address decoding
5. Implementation of combinational circuits.
6. Code converters.

ENCODERS
An encoder is a digital circuit that performs the inverse operation of a decoder; hence, the opposite of the decoding process is called encoding. An encoder is a combinational circuit that converts binary information from 2^n input lines to a maximum of ‘n’ unique output lines.
The general structure of encoder circuit is

General structure of Encoder

It has 2^n input lines, only one of which is active at any time, and ‘n’ output lines. It encodes the active input to a coded binary output with ‘n’ bits. In an encoder, the number of outputs is less than the number of inputs.

Octal-to-Binary Encoder

It has eight inputs (one for each of the octal digits) and three outputs that generate the corresponding binary number. It is assumed that only one input has a value of 1 at any given time.
The encoder can be implemented with OR gates whose inputs are determined directly from the truth table. Output z is equal to 1 when the input octal digit is 1, 3, 5 or 7. Output y is 1 for octal digits 2, 3, 6 or 7, and output x is 1 for octal digits 4, 5, 6 or 7.
Inputs Outputs
D0 D1 D2 D3 D4 D5 D6 D7 A B C
1 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0 1
0 0 1 0 0 0 0 0 0 1 0
0 0 0 1 0 0 0 0 0 1 1
0 0 0 0 1 0 0 0 1 0 0
0 0 0 0 0 1 0 0 1 0 1
0 0 0 0 0 0 1 0 1 1 0
0 0 0 0 0 0 0 1 1 1 1
These conditions can be expressed by the following output Boolean
functions:
z= D1+ D3+ D5+ D7
y= D2+ D3+ D6+ D7
x= D4+ D5+ D6+ D7
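These OR functions can be checked with a short Python sketch (illustrative only; the input is assumed to be a one-hot list [D0..D7]):

```python
def octal_encoder(d):
    """Octal-to-binary encoder: d = [D0..D7] with exactly one 1.
    Implements the OR functions above; x is the most significant bit."""
    x = d[4] | d[5] | d[6] | d[7]
    y = d[2] | d[3] | d[6] | d[7]
    z = d[1] | d[3] | d[5] | d[7]
    return x, y, z

print(octal_encoder([0, 0, 0, 0, 0, 1, 0, 0]))   # (1, 0, 1): digit 5 in binary
```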
The encoder can be implemented with three OR gates. The encoder
defined in the below table, has the limitation that only one input can be active at
any given time. If two inputs are active simultaneously, output produces an
undefined combination.
For example, if D3 and D6 are 1 simultaneously, the output of the encoder will be 111. This does not represent either D3 or D6. To resolve this problem, encoder circuits must establish an input priority to ensure that only one input is encoded. If we establish a higher priority for inputs with higher subscript numbers, and if D3 and D6 are 1 at the same time, the output will be 110, because D6 has higher priority than D3.

Octal-to-Binary Encoder
Another problem in the octal-to-binary encoder is that an output with all 0’s is generated when all the inputs are 0; this output is the same as when D0 is equal to 1. The discrepancy can be resolved by providing one more output to indicate that at least one input is equal to 1.

Priority Encoder
A priority encoder is an encoder circuit that includes the priority function.
In priority encoder, if two or more inputs are equal to 1 at the same time, the
input having the highest priority will take precedence.
In addition to the two outputs x and y, the circuit has a third output, V (valid bit indicator). It is set to 1 when one or more inputs are equal to 1. If all inputs are 0, there is no valid input and V is equal to 0.

The higher the subscript number, the higher the priority of the input. Input D3 has the highest priority; so, regardless of the values of the other inputs, when D3 is 1, the output for xy is 11.
D2 has the next priority level. The output is 10, if D2= 1 provided D3= 0. The
output for D1 is generated only if higher priority inputs are 0, and so on down the
priority levels.
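The priority behaviour just described can be sketched in Python (an illustrative model of the behaviour, not the gate-level realization):

```python
def priority_encoder(d0, d1, d2, d3):
    """4-input priority encoder; D3 has the highest priority.
    Returns (x, y, V): the encoded output and the valid bit."""
    if d3:
        return 1, 1, 1
    if d2:
        return 1, 0, 1
    if d1:
        return 0, 1, 1
    if d0:
        return 0, 0, 1
    return 0, 0, 0     # no valid input: V = 0 (x and y are don't-cares)

print(priority_encoder(1, 0, 1, 0))    # (1, 0, 1): D2 wins over D0
```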

Truth table:
Inputs Outputs
D0 D1 D2 D3 x y V
0 0 0 0 x x 0
1 0 0 0 0 0 1
x 1 0 0 0 1 1
x x 1 0 1 0 1
x x x 1 1 1 1

Although the above table has only five rows, when each don’t care
condition is replaced first by 0 and then by 1, we obtain all 16 possible input
combinations. For example, the third row in the table with X100 represents
minterms 0100 and 1100. The don’t care condition is replaced by 0 and 1 as
shown in the table below.

Modified Truth table:

Inputs Outputs
D0 D1 D2 D3 x y V
0 0 0 0 x x 0
1 0 0 0 0 0 1
0 1 0 0 0 1 1
1 1 0 0 0 1 1
0 0 1 0 1 0 1
0 1 1 0 1 0 1
1 0 1 0 1 0 1
1 1 1 0 1 0 1
0 0 0 1 1 1 1
0 0 1 1 1 1 1
0 1 0 1 1 1 1
0 1 1 1 1 1 1
1 0 0 1 1 1 1
1 0 1 1 1 1 1
1 1 0 1 1 1 1
1 1 1 1 1 1 1

K-map Simplification:

The priority encoder is implemented according to the above Boolean functions.

4-Input Priority Encoder

MAGNITUDE COMPARATOR
A magnitude comparator is a combinational circuit that compares two given numbers (A and B) and determines whether one is equal to, less than, or greater than the other. The output is in the form of three binary variables representing the conditions A = B, A>B and A<B, where A and B are the two numbers being compared.

Block diagram of magnitude comparator

For comparison of two n-bit numbers, the classical method of obtaining the Boolean expressions requires a truth table of 2^2n entries and becomes too lengthy and cumbersome.

2.8.1 2-bit Magnitude Comparator


The truth table of 2-bit comparator is given in table
below

Inputs Outputs
A1 A0 B1 B0 A>B A=B A<B
0 0 0 0 0 1 0
0 0 0 1 0 0 1
0 0 1 0 0 0 1
0 0 1 1 0 0 1
0 1 0 0 1 0 0
0 1 0 1 0 1 0
0 1 1 0 0 0 1
0 1 1 1 0 0 1
1 0 0 0 1 0 0
1 0 0 1 1 0 0
1 0 1 0 0 1 0
1 0 1 1 0 0 1
1 1 0 0 1 0 0
1 1 0 1 1 0 0
1 1 1 0 1 0 0
1 1 1 1 0 1 0
K-map Simplification:

Logic Diagram:

2-bit Magnitude Comparator


the coefficient of the numbers in descending order as,
4-bit Magnitude Comparator:
Let us consider the two binary numbers A and B with four digits each.
A = A 3A 2A 1 A 0
B = B 3 B 2 B 1 B 0,
Each subscripted letter represents one of the digits in the number. It is observed from the bit contents of the two numbers that A = B when A3 = B3, A2 = B2, A1 = B1 and A0 = B0. Since binary digits possess the value of either 1 or 0, the equality relation of each pair of bits can be expressed logically by the equivalence function as,

Xi = AiBi + Ai′Bi′ for i = 0, 1, 2, 3
or, Xi = (Ai ⊕ Bi)′, i.e., Xi′ = Ai ⊕ Bi
or, Xi = (AiBi′ + Ai′Bi)′
where Xi = 1 only if the pair of bits in position i are equal (i.e., if both are 1 or both are 0).
To satisfy the equality condition of two numbers A and B, it is necessary
that all Xi must be equal to logic 1. This indicates the AND operation of all Xi
variables. In other words, we can write the Boolean expression for two equal 4-bit
numbers.
(A = B) = X3X2X1 X0.
The binary variable (A=B) is equal to 1 only if all pairs of digits of the two numbers are equal.

The gate implementation of the three output variables derived here is simpler than it seems because it involves a certain amount of repetition: the unequal outputs can use the same gates that are needed to generate the equal output.
To determine whether A is greater than or less than B, we inspect the relative magnitudes of pairs of significant bits, starting from the most significant bit. If the two digits of the most significant position are equal, the next significant pair of digits is compared. The comparison process is continued until a pair of unequal digits is found. It may be concluded that A>B if the corresponding digit of A is 1 and that of B is 0. If the corresponding digit of A is 0 and that of B is 1, we conclude that A<B. Therefore, we can derive the logical expression of such a sequential comparison by the following two Boolean functions,

(A>B) = A3B3′ +X3A2B2′ +X3X2A1B1′ +X3X2X1A0B0′


(A<B) = A3′B3 +X3A2′B2 +X3X2A1′B1 +X3X2X1A0′B0
The symbols (A>B) and (A<B) are binary output variables that are equal to 1
when A>B or A<B, respectively.
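Assuming bit lists ordered most-significant-bit first, these three Boolean functions can be simulated directly (an illustrative sketch, not the original gate network):

```python
def compare_4bit(a, b):
    """4-bit magnitude comparator. a and b are [A3, A2, A1, A0] bit lists.
    Implements Xi = XNOR(Ai, Bi) and the (A>B), (A=B), (A<B) functions."""
    x = [1 if ai == bi else 0 for ai, bi in zip(a, b)]   # X3..X0
    a_eq_b = x[0] & x[1] & x[2] & x[3]
    a_gt_b = (a[0] & (1 - b[0])) \
           | (x[0] & a[1] & (1 - b[1])) \
           | (x[0] & x[1] & a[2] & (1 - b[2])) \
           | (x[0] & x[1] & x[2] & a[3] & (1 - b[3]))
    a_lt_b = ((1 - a[0]) & b[0]) \
           | (x[0] & (1 - a[1]) & b[1]) \
           | (x[0] & x[1] & (1 - a[2]) & b[2]) \
           | (x[0] & x[1] & x[2] & (1 - a[3]) & b[3])
    return a_gt_b, a_eq_b, a_lt_b

print(compare_4bit([1, 0, 0, 1], [0, 1, 1, 1]))   # (1, 0, 0): 9 > 7
```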

The logic diagram of the 4-bit magnitude comparator is shown below.

4-bit Magnitude Comparator

The four x outputs are generated with exclusive-NOR circuits and applied to
an AND gate to give the binary output variable (A=B). The other two outputs use the
x variables to generate the Boolean functions listed above. This is a multilevel
implementation and has a regular pattern.
UNIT-II

SYNCHRONOUS SEQUENTIAL LOGIC

Introduction to sequential circuit- Flipflops -Operation and Excitation


Tables.Triggering of FF . Analysis and design of clocked sequentialcircuits-
Design-Moor
/Mealy models, state minimization ,state assignment , circuit
implementation-Registers-Counters

INTRODUCTION
In combinational logic circuits, the outputs at any instant of time depend only on the input signals present at that time. For any change in input, the output responds immediately.

Combinational Circuit- Block Diagram

Sequential logic circuits consist of combinational circuits to which storage elements are connected to form a feedback path. The storage elements are devices capable of storing binary information, either 1 or 0.
The information stored in the memory elements at any given time defines the present state of the sequential circuit. The present state and the external inputs determine the output and the next state of the sequential circuit.

Sequential Circuit- Block Diagram

Thus in sequential circuits, the output variables depend not only on the present input variables but also on the past history of input variables.

The rotary channel selector knob on an old-fashioned TV is like a combinational circuit: its output selects a channel based only on its current input, the position of the knob. The channel-up and channel-down push buttons on a TV are like a sequential circuit: the channel selection depends on the past sequence of up/down pushes.
The comparison between combinational and sequential circuits is given in the table below.

S.No | Combinational logic | Sequential logic
1 | The output variable, at all times, depends on the combination of input variables. | The output variable depends not only on the present input but also on the past history of inputs.
2 | Memory unit is not required. | Memory unit is required to store the past history of input variables.
3 | Faster in speed. | Slower than combinational circuits.
4 | Easy to design. | Comparatively harder to design.
5 | E.g. Adder, subtractor, decoder, encoder, magnitude comparator. | E.g. Shift registers, counters.

CLASSIFICATION OF LOGIC CIRCUITS

The sequential circuits can be classified depending on the timing of their


signals:
 Synchronous sequential circuits
 Asynchronous sequential circuits.
S.No | Synchronous sequential circuits | Asynchronous sequential circuits
1 | Memory elements are clocked Flip-Flops. | Memory elements are either unclocked flip-flops (latches) or time-delay elements.
2 | The change in input signals can affect the memory elements only upon activation of the clock signal. | The change in input signals can affect the memory elements at any instant of time.
3 | The maximum operating speed of the clock depends on the time delays involved. | Because of the absence of a clock, they can operate faster than synchronous circuits.
4 | Easier to design. | More difficult to design.

TRIGGERING OF FLIP-FLOPS

The state of a Flip-Flop is switched by a momentary change in the input signal. This momentary change is called a trigger, and the transition it causes is said to trigger the Flip-Flop. Clocked Flip-Flops are triggered by pulses. A clock pulse starts from an initial value of 0, goes momentarily to 1 and, after a short time, returns to its initial 0 value.

Latches are controlled by an enable signal and are level triggered, either positive level triggered or negative level triggered. The output is free to change according to the S and R input values when the active level is maintained at the enable input.

EDGE TRIGGERED FLIP-FLOPS

Flip-Flops are synchronous bistable devices (having two outputs, Q and Q’). In this case, the term synchronous means that the output changes state only at a specified point on the triggering input called the clock (CLK), i.e., changes in the output occur in synchronization with the clock.
An edge-triggered Flip-Flop changes state either at the positive edge
(rising edge) or at the negative edge (falling edge) of the clock pulse and is

sensitive to its inputs only at this transition of the clock. The different types of edge-triggered Flip-Flops are:
S-R Flip-Flop (Set-Reset)
J-K Flip-Flop (Jack Kilby)
D Flip-Flop (Delay)
T Flip-Flop (Toggle)
Although the S-R Flip-Flop is not available in IC form, it is the basis for the
D and J-K Flip-Flops. Each type can be either positive edge-triggered (no bubble
at C input) or negative edge-triggered (bubble at C input).
The key to identifying an edge-triggered Flip-Flop by its logic symbol is the small triangle inside the block at the clock (C) input. This triangle is called the dynamic input indicator.

S-R Flip-Flop
The S and R inputs of the S-R Flip-Flop are called synchronous inputs because data on these inputs are transferred to the Flip-Flop's output only on the triggering edge of the clock pulse. The circuit is similar to the SR latch except that the enable signal is replaced by a clock pulse (CLK). On the positive edge of the clock pulse, the circuit responds to the S and R inputs.

SR Flip-Flop

When S is HIGH and R is LOW, the Q output goes HIGH on the triggering edge of the clock pulse, and the Flip-Flop is SET. When S is LOW and R is HIGH, the Q output goes LOW on the triggering edge of the clock pulse, and the Flip-Flop is RESET. When both S and R are LOW, the output does not change from its prior state.

CLK S R Qn Qn+1 State


1 0 0 0 0
No Change
1 0 0 1 1 (NC)
1 0 1 0 0
Reset
1 0 1 1 0
1 1 0 0 1
Set
1 1 0 1 1
1 1 1 0 x Indeterminate *
1 1 1 1 x
Truth table for SR Flip-Flop

The timing diagram of the positive edge triggered SR flip-flop is shown below.

Input and output waveforms of SR Flip-Flop


Characteristic table and Characteristic equation:
The characteristic table for the SR Flip-Flop is shown in the table below. From the table, the K-map for the next state transition (Qn+1) can be drawn, and the simplified logic expression which represents the characteristic equation of the SR Flip-Flop can be found.

S R Qn Qn+1
0 0 0 0
0 0 1 1
0 1 0 0
0 1 1 0
1 0 0 1
1 0 1 1
1 1 0 x
1 1 1 x
Characteristic table
K-map Simplification:

Characteristic equation:
Qn+1= S+ R’Qn

J-K Flip-Flop:
JK means Jack Kilby, the Texas Instruments (TI) engineer who invented the IC in 1958. The JK Flip-Flop has two inputs, J (set) and K (reset). A JK Flip-Flop can be obtained from the clocked SR Flip-Flop by augmenting two AND gates, as shown below.
JK Flip Flop
The data input J and the output Q’ are applied to the first AND gate, and its output (JQ’) is applied to the S input of the SR Flip-Flop. Similarly, the data input K and the output Q are applied to the second AND gate, and its output (KQ) is applied to the R input of the SR Flip-Flop.

Truth table:
J= K= 0
When J= K= 0, both AND gates are disabled. Therefore, clock pulses have no effect; hence the Flip-Flop output is the same as the previous output.
J= 0, K= 1
When J= 0 and K= 1, AND gate 1 is disabled, i.e., S= 0 and R= 1. This condition will reset the Flip-Flop to 0.
J= 1, K= 0
When J= 1 and K= 0, AND gate 2 is disabled, i.e., S= 1 and R= 0. Therefore, the Flip-Flop will be set on the application of a clock pulse.
J= K= 1
When J= K= 1, it is possible to set or reset the Flip-Flop. If Q is HIGH, AND gate 2 passes a reset pulse on the next clock; when Q is LOW, AND gate 1 passes a set pulse on the next clock. Either way, Q changes to the complement of its previous state, i.e., the output toggles.
Inputs Output
CLK State
J K Qn+1
1 0 0 Qn No
Change
1 0 1 0 Reset
1 1 0 1 Set
1 1 1 Qn ’ Toggle

The timing diagram of negative edge triggered JK flip-flop is shown below.

Input and output waveforms of JK Flip-Flop

Characteristic table and Characteristic equation:


The characteristic table for the JK Flip-Flop is shown in the table below. From the table, the K-map for the next state transition (Qn+1) can be drawn, and the simplified logic expression which represents the characteristic equation of the JK Flip-Flop can be found.
Qn J K Qn+1

0 0 0 0
0 0 1 0
0 1 0 1
0 1 1 1
1 0 0 1
1 0 1 0
1 1 0 1
1 1 1 0
Characteristic table

K-map Simplification:

Characteristic equation:
Qn+1= JQn’+ K’Qn
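The characteristic equation can be exercised directly. A small Python sketch (illustrative only) checks that it reproduces the hold, reset, set and toggle rows of the truth table:

```python
def jk_next_state(j, k, q):
    """Characteristic equation of the JK flip-flop: Q(n+1) = JQn' + K'Qn."""
    return (j & (1 - q)) | ((1 - k) & q)

print(jk_next_state(1, 1, 0))    # 1: with J = K = 1 the output toggles
```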

D Flip-Flop:
As in the D latch, in the D Flip-Flop the basic SR Flip-Flop is used with complemented inputs. The D Flip-Flop is similar to the D latch except that a clock pulse is used instead of an enable input.

D Flip-Flop

To eliminate the undesirable condition of the indeterminate state in the SR Flip-Flop, inputs S and R must never be equal to 1 at the same time. This is done by the D Flip-Flop. The D (delay) Flip-Flop has one input called the delay input, and a clock pulse input.
The D Flip-Flop using SR Flip-Flop is shown below.

Truth Table:
The truth table of the D Flip-Flop is given below.

Clock D Qn+1
1 0 0
1 1 1
0 x Qn

Truth table for D Flip-Flop

The timing diagram of positive edge triggered D flip-flop is shown below.

Input and output waveforms of clocked D Flip-Flop

Looking at the truth table for the D Flip-Flop, we can see that the output Qn+1 follows the D input at the positive-going edges of the clock pulses.

Characteristic table and Characteristic equation:


The characteristic table for D Flip-Flop shows that the next state of the Flip-
Flop is independent of the present state since Qn+1 is equal to D. This means that an
input pulse will transfer the value of input D into the output of the Flip-Flop
independent of the value of the output before the pulse was applied.
The characteristic equation is derived from K-map.
Qn D Qn+1

0 0 0
0 1 1
1 0 0
1 1 1
Characteristic table
K-map Simplification:

Characteristic equation:
Qn+1= D.

T Flip-Flop
The T (Toggle) Flip-Flop is a modification of the JK Flip-Flop. It is obtained
from JK Flip-Flop by connecting both inputs J and K together, i.e., single input.
Regardless of the present state, the Flip-Flop complements its output when the
clockpulse occurs while input T= 1.

T Flip-Flop

When T= 0, Qn+1= Qn, i.e., the next state is the same as the present state and no change occurs.
When T= 1, Qn+1= Qn’, i.e., the next state is the complement of the present state.

Truth Table:
The truth table of the T Flip-Flop is given below.

T Qn+1
0 Qn
1 Qn’

Truth table for T Flip-Flop

Characteristic table and Characteristic equation:


The characteristic table for T Flip-Flop is shown below and characteristic
equation is derived using K-map.

Qn T Qn+1

0 0 0
0 1 1
1 0 1
1 1 0
K-map Simplification:

Characteristic equation:
Qn+1= TQn’+ T’Qn
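This equation is just an exclusive-OR of T and Qn; a short Python sketch (illustrative only) shows the divide-by-two behaviour when T is held at 1:

```python
def t_next_state(t, q):
    """Characteristic equation of the T flip-flop:
    Q(n+1) = TQn' + T'Qn, i.e. T XOR Qn."""
    return (t & (1 - q)) | ((1 - t) & q)

# With T held at 1 the output toggles on every clock: 1, 0, 1, 0, ...
q = 0
states = []
for _ in range(4):
    q = t_next_state(1, q)
    states.append(q)
print(states)    # [1, 0, 1, 0]
```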

Master-Slave JK Flip-Flop
A master-slave Flip-Flop consists of clocked JK flip-flop as a master and
clocked SR flip-flop as a slave. The output of the master flip-flop is fed as an
input to the slave flip-flop. Clock signal is connected directly to the master flip-flop,
but is connected through inverter to the slave flip-flop. Therefore, the information
present at the J and K inputs is transmitted to the output of master flip-flop on the
positive clock pulse and it is held there until the negative clock pulse occurs, after
which it is allowed to pass through to the output of slave flip-flop. The output of
the slave flip-flop is connected as a third input of the master JK flip-flop.

Logic diagram

When J= 1 and K= 0, the master sets on the positive clock. The high Y
output of the master drives the S input of the slave, so at negative clock, slave
sets, copying the action of the master.

When J= 0 and K= 1, the master resets on the positive clock. The high Y’
output of the master goes to the R input of the slave. Therefore, at the negative clock
slave resets, again copying the action of the master.
When J= 1 and K= 1, the master toggles on the positive clock and the output of the master is copied by the slave on the negative clock. At this instant, the feedback inputs to the master flip-flop are complemented, but as it is the negative half of the clock pulse, the master flip-flop is inactive. This prevents the race-around condition.
The clocked master-slave J-K Flip-Flop using NAND gate is shown below.

Master-Slave JK Flip-Flop
The input and output waveforms of the master-slave JK flip-flop are shown below.

Input and output waveforms of master-slave JK flip-flop

APPLICATION TABLE (OR) EXCITATION TABLE:
The characteristic table is useful for analysis and for defining the
operation of the Flip-Flop. It specifies the next state (Qn+1) when the inputs and
present state are known.
The excitation or application table is useful for the design process. It is used to find the Flip-Flop input conditions that will cause the required transition when the present state (Qn) and the next state (Qn+1) are known.

SR Flip-Flop:

Characteristic Table
Qn S R Qn+1
0 0 0 0
0 0 1 0
0 1 0 1
0 1 1 x
1 0 0 1
1 0 1 0
1 1 0 1
1 1 1 x

Modified Table
Qn Qn+1 S R
0 0 0 x
0 1 1 0
1 0 0 1
1 1 x 0

Present Next
Inputs
State State
Qn Qn+1 S R
0 0 0 x
0 1 1 0
1 0 0 1
1 1 x 0
Excitation Table

The above table presents the excitation table for SR Flip-Flop. It consists of
present state (Qn), next state (Qn+1) and a column for each input to show how the
required transition is achieved.
There are 4 possible transitions from the present state to the next state. The required input conditions for each of the four transitions are derived from the information available in the characteristic table. The symbol ‘x’ denotes the don’t care condition; it does not matter whether the input is 0 or 1.

JK Flip-Flop:

Characteristic Table
Qn J K Qn+1
0 0 0 0
0 0 1 0
0 1 0 1
0 1 1 1
1 0 0 1
1 0 1 0
1 1 0 1
1 1 1 0

Modified Table
Qn Qn+1 J K
0 0 0 x
0 1 1 x
1 0 x 1
1 1 x 0

Present Next
Inputs
State State
Qn Qn+1 J K
0 0 0 x
0 1 1 x
1 0 x 1
1 1 x 0

Excitation Table

D Flip-Flop:

Characteristic Table
Qn D Qn+1
0 0 0
0 1 1
1 0 0
1 1 1

Excitation Table
Qn Qn+1 D
0 0 0
0 1 1
1 0 0
1 1 1

T Flip-Flop:

Characteristic Table
Qn T Qn+1
0 0 0
0 1 1
1 0 1
1 1 0

Excitation Table
Qn Qn+1 T
0 0 0
0 1 1
1 0 1
1 1 0

REALIZATION OF ONE FLIP-FLOP USING OTHER FLIP-FLOPS


It is possible to convert one Flip-Flop into another Flip-Flop with some additional gates or simply by making some extra connections. The realization of one Flip-Flop using other Flip-Flops is implemented by the use of characteristic tables and excitation tables. Let us see a few conversions among Flip-Flops.

SR Flip-Flop to D Flip-Flop
SR Flip-Flop to JK Flip-Flop
SR Flip-Flop to T Flip-Flop
JK Flip-Flop to T Flip-Flop
JK Flip-Flop to D Flip-Flop
D Flip-Flop to T Flip-Flop
T Flip-Flop to D Flip-Flop
SR Flip-Flop to D Flip-Flop:
 Write the characteristic table for required Flip-Flop (D Flip-Flop).
 Write the excitation table for given Flip-Flop (SR Flip-Flop).
 Determine the expressions for the given Flip-Flop inputs (S & R) by using K-maps.
 Draw the Flip-Flop conversion logic diagram to obtain the required Flip-Flop (D Flip-Flop) by using the above obtained expressions.

The excitation table for the above conversion is

Required Flip-Flop Given Flip-Flop


(D) (SR)
Input Present state Next state Flip-Flop Inputs
D Qn Qn+1 S R
0 0 0 0 x
0 1 0 0 1
1 0 1 1 0
1 1 1 x 0

SR to D Flip-Flop
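The K-map simplification of this excitation table reduces to S = D and R = D′. A Python sketch (illustrative only) verifies that the converted flip-flop stores D regardless of the present state:

```python
def sr_inputs_for_d(d, q):
    """SR inputs that make an SR flip-flop act as a D flip-flop.
    From the excitation table above this reduces to S = D, R = D'
    (the present state q turns out not to be needed)."""
    return d, 1 - d

def sr_next_state(s, r, q):
    """SR characteristic equation: Q(n+1) = S + R'Qn (S = R = 1 forbidden)."""
    return s | ((1 - r) & q)

# Whatever the present state, the next state equals D
for d in (0, 1):
    for q in (0, 1):
        s, r = sr_inputs_for_d(d, q)
        print(d, q, "->", sr_next_state(s, r, q))
```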

SR Flip-Flop to JK Flip-Flop
The excitation table for the above conversion is,

Inputs Present state Next state Flip-Flop Inputs


J K Qn Qn+1 S R
0 0 0 0 0 x
0 0 1 1 x 0
0 1 0 0 0 x
0 1 1 0 0 1
1 0 0 1 1 0
1 0 1 1 x 0
1 1 0 1 1 0
1 1 1 0 0 1
SR to JK Flip-Flop

SR Flip-Flop to T Flip-Flop
The excitation table for the above conversion is

Input Present state Next state Flip-Flop Inputs
T Qn Qn+1 S R
0 0 0 0 x
0 1 1 x 0
1 0 1 1 0
1 1 0 0 1

SR to T Flip-Flop
JK Flip-Flop to T Flip-Flop
The excitation table for the above conversion is

Input Present state Next state Flip-Flop Inputs


T Qn Qn+1 J K
0 0 0 0 x
0 1 1 x 0
1 0 1 1 x
1 1 0 x 1

JK to T Flip-Flop

JK Flip-Flop to D Flip-Flop
The excitation table for the above conversion is

Input Present state Next state Flip-Flop Inputs


D Qn Qn+1 J K
0 0 0 0 x
0 1 0 x 1
1 0 1 1 x
1 1 1 x 0

JK to D Flip-Flop
D Flip-Flop to T Flip-Flop
The excitation table for the above conversion is

Input Present state Next state Flip-Flop Input


T Qn Qn+1 D
0 0 0 0
0 1 1 1
1 0 1 1
1 1 0 0

D to T Flip-Flop

T Flip-Flop to D Flip-Flop
The excitation table for the above conversion is

Input Present state Next state Flip-Flop Input
D Qn Qn+1 T
0 0 0 0
0 1 0 1
1 0 1 1
1 1 1 0

T to D Flip-Flop
CLASSIFICATION OF SYNCHRONOUS SEQUENTIAL CIRCUIT:
In synchronous or clocked sequential circuits, clocked Flip-Flops are used as memory elements, which change their individual states in synchronism with the periodic clock signal. Therefore, the change in states of the Flip-Flops and the change in state of the entire circuit occur at the transitions of the clock signal.
The synchronous or clocked sequential networks are represented by two models.
Moore model: The output depends only on the present state of the Flip-Flops.
Mealy model: The output depends on both the present state of the Flip-Flops and on the inputs.

Moore model:
In the Moore model, the outputs are a function of the present state of the Flip-Flops only. Since the output depends only on the present state of the Flip-Flops, it changes in synchronism with the clock.

Moore model

Mealy model:
In the Mealy model, the outputs are functions of both the present state of the Flip-Flops and the inputs.

Mealy model
Difference between Moore and Mealy model
S.No | Moore model | Mealy model
1 | Its output is a function of the present state only. | Its output is a function of the present state as well as the present input.
2 | An input change does not affect the output. | Input changes may affect the output of the circuit.
3 | It requires more states for implementing the same function. | It requires fewer states for implementing the same function.

ANALYSIS OF SYNCHRONOUS SEQUENTIAL CIRCUITS

ANALYSIS PROCEDURE:
The synchronous sequential circuit analysis is summarized as given below:
1. Assign a state variable to each Flip-Flop in the synchronous sequential
circuit.
2. Write the excitation input functions for each Flip-Flop and also write the
Moore/ Mealy output equations.
3. Substitute the excitation input functions into the bistable equations for the Flip-Flops to obtain the next state output equations.
4. Obtain the state table and reduced form of the state table.
5. Draw the state diagram by using the second form of the state table.

ANALYSIS OF MEALY MODEL:
1. A sequential circuit has two JK Flip-Flops A and B, one input (x) and one output (y). The Flip-Flop input functions are:
JA= B+ x        JB= A’+ x’
KA= 1           KB= 1
and the circuit output function, Y= xA’B.
a) Draw the logic diagram of the Mealy circuit,
b) Tabulate the state table,
c) Draw the state diagram.
Logic Diagram:

State table:

Present state Input Flip-Flop Inputs Next state Output

A B x JA= B+ x K A= 1 JB= A’+ x’ K B= 1 A(t+1) B(t+1) Y= xA’B


0 0 0 0 1 1 1 0 1 0
0 0 1 1 1 1 1 1 1 0
0 1 0 1 1 1 1 1 0 0
0 1 1 1 1 1 1 1 0 1
1 0 0 0 1 1 1 0 1 0
1 0 1 1 1 0 1 0 0 0
1 1 0 1 1 1 1 0 0 0
1 1 1 1 1 0 1 0 0 0

Reduced State Table:

Next state Output


Present state
x= 0 x= 1 x= 0 x= 1

A B A B A B y y
0 0 0 1 1 1 0 0
0 1 1 0 1 0 0 1
1 0 0 1 0 0 0 0
1 1 0 0 0 0 0 0
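As a cross-check on the analysis, the excitation equations can be simulated; this Python sketch (illustrative, not part of the original solution) reproduces the rows of the reduced state table:

```python
def jk_next(j, k, q):
    """JK characteristic equation: Q(n+1) = JQ' + K'Q."""
    return (j & (1 - q)) | ((1 - k) & q)

def mealy_step(a, b, x):
    """One clock of the circuit above: JA = B + x, KA = 1,
    JB = A' + x', KB = 1, output Y = xA'B (computed before the clock)."""
    y = x & (1 - a) & b
    a_next = jk_next(b | x, 1, a)
    b_next = jk_next((1 - a) | (1 - x), 1, b)
    return a_next, b_next, y

# Present state AB = 01 with x = 1 gives next state 10 and output 1,
# matching the second row of the reduced state table
print(mealy_step(0, 1, 1))    # (1, 0, 1)
```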
State Diagram:

State Diagram
2. A sequential circuit has two ‘D’ Flip-Flops A and B, one input (x) and one output (y). The Flip-Flop input functions are:
DA= Ax+ Bx
DB= A’x
and the circuit output function is, Y= (A+ B) x’.
(a) Draw the logic diagram of the circuit,
(b) Tabulate the state table,
(c) Draw the state diagram.

Soln:
State Table:

Present state Input Flip-Flop Inputs Next state Output


D A=
A B x DB= A’x A(t+1) B(t+1) Y= (A+B)x’
Ax+Bx
0 0 0 0 0 0 0 0
0 0 1 0 1 0 1 0
0 1 0 0 0 0 0 1
0 1 1 1 1 1 1 0
1 0 0 0 0 0 0 1
1 0 1 1 0 1 0 0
1 1 0 0 0 0 0 1
1 1 1 1 0 1 0 0

Next state Output


Present state
x= 0 x= 1 x= 0 x= 1

A B A B A B Y Y
0 0 0 0 0 1 0 0
0 1 0 0 1 1 1 0
1 0 0 0 1 0 1 0
1 1 0 0 1 0 1 0
Second form of state table

State Diagram:

3. A sequential circuit has two JK Flip-Flops A and B. The Flip-Flop input functions are:
JA= B        JB= x’
KA= Bx’      KB= A ⊕ x
(a) Draw the logic diagram of the circuit,
(b) Tabulate the state table,
(c) Draw the state diagram.
Soln:
Logic diagram:

The output function is not given in the problem; the outputs of the Flip-Flops themselves may be taken as the outputs of the circuit.
State table:
Present state Input Flip-Flop Inputs Next state
A B x JA= B KA= Bx’ JB= x’ KB= A⊕x A(t+1) B(t+1)
0 0 0 0 0 1 0 0 1
0 0 1 0 0 0 1 0 0
0 1 0 1 1 1 0 1 1
0 1 1 1 0 0 1 1 0
1 0 0 0 0 1 1 1 1
1 0 1 0 0 0 0 1 0
1 1 0 1 1 1 1 0 0
1 1 1 1 0 0 0 1 1

Next state
Present state
X= 0 X= 1

A B A B A B
0 0 0 1 0 0
0 1 1 1 1 0
1 0 1 1 1 0
1 1 0 0 1 1
State Diagram:

4. A sequential circuit has two JK Flip-Flops A and B, two inputs x and y, and one output z. The Flip-Flop input equations and the circuit output equation are
JA = Bx + B'y'        KA = B'xy'
JB = A'x              KB = A + xy'
z = Ax'y' + Bx'y'
(a) Draw the logic diagram of the
circuit
(b) Tabulate the state table.
(c) Derive the state equation.
Soln:
State table:
To obtain the next-state values of a sequential circuit with JK Flip-Flops, use the JK Flip-Flop characteristic table.

Present state Input Flip-Flop Inputs Next state Output


J A= K A= JB= K B=
A B x y A(t+1) B(t+1) z
Bx+B’y’ B’xy’ A’x A+xy’
0 0 0 0 1 0 0 0 1 0 0
0 0 0 1 0 0 0 0 0 0 0
0 0 1 0 1 1 1 1 1 1 0
0 0 1 1 0 0 1 0 0 1 0
0 1 0 0 0 0 0 0 0 0 1
0 1 0 1 0 0 0 0 0 0 0
0 1 1 0 1 0 1 1 1 1 0
0 1 1 1 1 0 1 0 1 1 0
1 0 0 0 1 0 0 1 1 0 1
1 0 0 1 0 0 0 1 1 0 0
1 0 1 0 1 1 0 1 0 0 0
1 0 1 1 0 0 0 1 1 0 0
1 1 0 0 0 0 0 1 1 0 1
1 1 0 1 0 0 0 1 1 0 0
1 1 1 0 1 0 0 1 1 0 0
1 1 1 1 1 0 0 1 1 0 0
State Equation:
5. Analyze the synchronous Mealy machine and obtain its state diagram.

Soln:
The given synchronous Mealy machine consists of two D Flip-Flops, one input and one output. The Flip-Flop input functions are,
DA= Y1’Y2X’
DB= X+ Y1’Y2
The circuit output function is, Z= Y1Y2X.

State Table:

Present state Input Flip-Flop Inputs Next state Output

Y1 Y2 X DA= Y1’Y2X’ DB= X+ Y1’Y2 Y1 (t+1) Y2 (t+1) Z= Y1Y2X


0 0 0 0 0 0 0 0
0 0 1 0 1 0 1 0
0 1 0 1 1 1 1 0
0 1 1 0 1 0 1 0
1 0 0 0 0 0 0 0
1 0 1 0 1 0 1 0
1 1 0 0 0 0 0 0
1 1 1 0 1 0 1 1
Reduced State Table:

Next state Output


Present state
X= 0 X= 1 X= 0 X= 1

Y1 Y2 Y1 Y2 Y1 Y2 Z Z
0 0 0 0 0 1 0 0
0 1 1 1 0 1 0 0
1 0 0 0 0 1 0 0
1 1 0 0 0 1 0 1
Second form of state table

State Diagram:

ANALYSIS OF MOORE MODEL:


6. Analyze the synchronous Moore circuit and obtain its state diagram.
Soln:
Using the assigned variable Y1 and Y2 for the two JK Flip-Flops, we can write
the four excitation input equations and the Moore output equation as follows:
J A= Y 2X ; K A= Y 2’
J B= X ; KB= X’ and output function, Z= Y1Y2’
State table:

Present state Input Flip-Flop Inputs Next state Output


Y1 Y2 X J A= Y 2X K A= Y 2’ J B= X KB= X’ Y1 (t+1) Y2 (t+1) Z= Y1Y2’
0 0 0 0 1 0 1 0 0 0
0 0 1 0 1 1 0 0 1 0
0 1 0 0 0 0 1 0 0 0
0 1 1 1 0 1 0 1 1 0
1 0 0 0 1 0 1 0 0 1
1 0 1 0 1 1 0 0 1 1
1 1 0 0 0 0 1 1 0 0
1 1 1 1 0 1 0 1 1 0

Next state Output


Present state
X= 0    X= 1    Output
Y1 Y2   Y1 Y2   Y1 Y2   Y
0 0 0 0 0 1 0
0 1 0 0 1 1 0
1 0 0 0 0 1 1
1 1 1 0 1 1 0
State Diagram:
Here the output depends on the present state only and is independent of
the input. The two values inside each circle, separated by a slash, are the
present state and output.
7. A sequential circuit has two T Flip-Flops A and B. The Flip-Flop input
functions are:
TA= Bx    TB= x
y= AB
(a) Draw the logic diagram of the circuit,
(b) Tabulate the state table,
(c) Draw the state diagram.
Soln:
Logic diagram:

State table:

Present state Input Flip-Flop Inputs Next state Output

A B x TA= Bx T B= x A (t+1) B (t+1) y= AB


0 0 0 0 0 0 0 0
0 0 1 0 1 0 1 0
0 1 0 0 0 0 1 0
0 1 1 1 1 1 0 0
1 0 0 0 0 1 0 0
1 0 1 0 1 1 1 0
1 1 0 0 0 1 1 1
1 1 1 1 1 0 0 1
Reduced State Table:

Next state Output


Present state
x= 0 x= 1 x= 0 x= 1

A B A B A B y y
0 0 0 0 0 1 0 0
0 1 0 1 1 0 0 0
1 0 1 0 1 1 0 0
1 1 1 1 0 0 1 1
Second form of state table
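T Flip-Flop behavior is simply Q(t+1) = Q ⊕ T, so the table above can be reproduced with a few lines of Python (illustrative names):

```python
# Example 7: T Flip-Flops obey Q(t+1) = Q XOR T, with TA = Bx, TB = x,
# and the output is y = AB.
def step(A, B, x):
    TA, TB = B & x, x
    return A ^ TA, B ^ TB, A & B     # (next A, next B, output y)

for A in (0, 1):
    for B in (0, 1):
        for x in (0, 1):
            print(A, B, x, '->', step(A, B, x))
```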

State Diagram:

STATE REDUCTION/ MINIMIZATION


The state reduction is used to avoid the redundant states in the sequential
circuits. The reduction in redundant states reduces the number of required Flip-
Flops and logic gates, reducing the cost of the final circuit.
The two states are said to be redundant or equivalent, if every possible set of
inputs generate exactly same output and same next state. When two states are
equivalent, one of them can be removed without altering the input-output
relationship.
Since 'n' Flip-Flops produce 2^n states, a reduction in the number of states
may result in a reduction in the number of Flip-Flops.
The need for state reduction or state minimization is explained with one
example.
Examples:
1. Reduce the number of states in the following state diagram and draw the
reduced state diagram.

State diagram
Step 1: Determine the state table for given state diagram
Next state Output
Present state
X= 0 X= 1 X= 0 X= 1
a b c 0 0
b d e 1 0
c c d 0 1
d a d 0 0
e c d 0 1

State table
Step 2: Find equivalent states
From the above state table, c and e generate exactly the same next state and
the same output for every possible set of inputs. Both c and e go to next states
c and d and produce outputs 0 and 1 for X=0 and X=1 respectively; therefore e
can be removed and replaced by c. The final reduced state table is shown below.

Next state Output


Present state
X= 0 X= 1 X= 0 X= 1
a b c 0 0
b d c 1 0
c c d 0 1
d a d 0 0
Reduced state table
Step 3: Draw state diagram

Reduced state diagram
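The row-matching method used above can be mechanized: repeatedly find two states whose next-state and output rows are identical, delete one, and redirect references to it. A Python sketch (the representation and names are illustrative):

```python
def reduce_states(table):
    """table: state -> ((next at X=0, next at X=1), (out at X=0, out at X=1)).
    Repeatedly merge states whose whole rows are identical."""
    table = dict(table)
    changed = True
    while changed:
        changed = False
        states = list(table)
        for i, s in enumerate(states):
            for t in states[i + 1:]:
                if table[s] == table[t]:
                    # t is equivalent to s: delete t and redirect references
                    del table[t]
                    for k, (ns, out) in table.items():
                        table[k] = (tuple(s if n == t else n for n in ns), out)
                    changed = True
                    break
            if changed:
                break
    return table

# The state table of Example 1 above.
table = {'a': (('b', 'c'), (0, 0)), 'b': (('d', 'e'), (1, 0)),
         'c': (('c', 'd'), (0, 1)), 'd': (('a', 'd'), (0, 0)),
         'e': (('c', 'd'), (0, 1))}
print(sorted(reduce_states(table)))      # states remaining after reduction
```

Running it merges e into c, leaving the four states of the reduced table.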

2. Reduce the number of states in the following state table and tabulate the
reduced

Next state Output


Present state
X= 0 X= 1 X= 0 X= 1
a a b 0 0
b c d 0 0
c a d 0 0
d e f 0 1
e a f 0 1
f g f 0 1
g a f 0 1
Soln:
From the above state table, e and g generate exactly the same next state and
the same output for every possible set of inputs. Both e and g go to next states
a and f, so g can be removed and replaced by e.
The reduced state table-1 is shown below.

Next state Output


Present state
X= 0 X= 1 X= 0 X= 1
a a b 0 0
b c d 0 0
c a d 0 0
d e f 0 1
e a f 0 1
f e f 0 1
Now states d and f are equivalent: both go to the same next states (e, f)
and have the same outputs (0, 1). Therefore one state can be removed; f is
replaced by d. The final reduced state table-2 is shown below.
The final reduced state table-2 is shown below.

Next state Output


Present state
X= 0 X= 1 X= 0 X= 1
a a b 0 0
b c d 0 0
c a d 0 0
d e d 0 1
e a d 0 1
Reduced state table-2

Thus 7 states are reduced into 5 states.

3. Determine a minimal state table equivalent to the state table furnished below.


Next state
Present state
X= 0 X= 1
1 1, 0 1, 0
2 1, 1 6, 1
3 4, 0 5, 0
4 1, 1 7, 0
5 2, 0 3, 0
6 4, 0 5, 0
7 2, 0 3, 0

Soln:
Next state Output
Present state
X= 0 X= 1 X= 0 X= 1
1 1 1 0 0
2 1 6 1 1
3 4 5 0 0
4 1 7 1 0
5 2 3 0 0
6 4 5 0 0
7 2 3 0 0
From the above state table, 5 and 7 generate exactly the same next state and
the same output for every possible set of inputs. The states 5 and 7 go to next
states 2 and 3 and have outputs 0 and 0 for x=0 and x=1 respectively. Therefore
state 7 can be removed and replaced by 5.
Similarly, 3 and 6 generate exactly the same next state and the same output for
every possible set of inputs. The states 3 and 6 go to next states 4 and 5 and have
outputs 0 and 0 for x=0 and x=1 respectively. Therefore state 6 can be removed
and replaced by 3. The final reduced state table is shown below.

Next state Output


Present state
X= 0 X= 1 X= 0 X= 1
1 1 1 0 0
2 1 3 1 1
3 4 5 0 0
4 1 5 1 0
5 2 3 0 0
Reduced state table

Thus 7 states are reduced into 5 states.

4. Minimize the following state table.


Next state
Present state
X= 0 X= 1
A D, 0 C, 1
B E, 1 A, 1
C H, 1 D, 1
D D, 0 C, 1
E B, 0 G, 1
F H, 1 D, 1
G A, 0 F, 1
H C, 0 A, 1
I G, 1 H,1

Soln:
Next state Output
Present state
X= 0 X= 1 X= 0 X= 1
A D C 0 1
B E A 1 1
C H D 1 1
D D C 0 1
E B G 0 1
F H D 1 1
G A F 0 1
H C A 0 1
I G H 1 1
From the above state table, A and D generate exactly same next state and
same output for every possible set of inputs. The state A and D go to next states
D and C and have outputs 0 and 1 for x=0 and x=1 respectively. Therefore state D
can be removed and replaced by A. Similarly, C and F generate exactly same next
state and same output for every possible set of inputs. The state C and F go to
next states H and D and have outputs 1 and 1 for x=0 and x=1 respectively.
Therefore state F can be removed and replaced by C.
Next state Output
Present state
X= 0 X= 1 X= 0 X= 1
A A C 0 1
B E A 1 1
C H A 1 1
E B G 0 1
G A C 0 1
H C A 0 1
I G H 1 1
Reduced state table-1
From the above reduced state table-1, A and G generate exactly same next
state and same output for every possible set of inputs. The state A and G go to
next states A and C and have outputs 0 and 1 for x=0 and x=1 respectively.
Therefore state G can be removed and replaced by A.
The final reduced state table-2 is shown below.

Next state Output


Present state
X= 0 X= 1 X= 0 X= 1
A A C 0 1
B E A 1 1
C H A 1 1
E B A 0 1
H C A 0 1
I A H 1 1
Reduced state table-2

Thus 9 states are reduced into 6 states.
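The first pass of this reduction can be checked by listing every pair of states whose rows are identical in the original 9-state table (a short Python sketch; (A, G) only becomes a matching pair after D and F have been merged, which is why a second pass is needed):

```python
# Problem 4's original 9-state table: state -> ((next X=0, next X=1), (out X=0, out X=1)).
table = {
    'A': (('D', 'C'), (0, 1)), 'B': (('E', 'A'), (1, 1)),
    'C': (('H', 'D'), (1, 1)), 'D': (('D', 'C'), (0, 1)),
    'E': (('B', 'G'), (0, 1)), 'F': (('H', 'D'), (1, 1)),
    'G': (('A', 'F'), (0, 1)), 'H': (('C', 'A'), (0, 1)),
    'I': (('G', 'H'), (1, 1)),
}
names = sorted(table)
# Pairs with identical next states and outputs on the first pass.
pairs = [(s, t) for i, s in enumerate(names) for t in names[i + 1:]
         if table[s] == table[t]]
print(pairs)
```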


5. Reduce the following state diagram.

Soln:
Next state Output
Present state
X= 0 X= 1 X= 0 X= 1
a a b 0 0
b c d 0 0
c a d 0 0
d e f 0 1
e a f 0 1
f g f 0 1
g a f 0 1
State table

From the above state table, e and g generate exactly the same next state and
the same output for every possible set of inputs. The states e and g go to next
states a and f and have outputs 0 and 1 for x=0 and x=1 respectively. Therefore
state g can be removed and replaced by e.

Present state Next state Output


X= 0 X= 1 X= 0 X= 1
a a b 0 0
b c d 0 0
c a d 0 0
d e f 0 1
e a f 0 1
f e f 0 1
Reduced state table-1

Now states d and f are equivalent. Both states go to the same next state (e,
f) and have same output (0, 1). Therefore one state can be removed; f is replaced
by d.
The final reduced state table-2 is shown below.

Present state Next state Output


X= 0 X= 1 X= 0 X= 1
a a b 0 0
b c d 0 0
c a d 0 0
d e d 0 1
e a d 0 1
Reduced state table-2

Thus 7 states are reduced into 5 states.


The state diagram for the reduced state table-2 is,

Reduced state diagram

DESIGN OF SYNCHRONOUS SEQUENTIAL CIRCUITS:

A synchronous sequential circuit is made up of a number of Flip-Flops and

combinational gates. The design of the circuit consists of choosing the Flip-Flops
and then finding a combinational gate structure that works together with the
Flip-Flops. The number of Flip-Flops is determined from the number of states
needed in the circuit. The combinational circuit is derived from the state table.

Design procedure:
1. The given problem is determined with a state diagram.
2. From the state diagram, obtain the state table.
3. The number of states may be reduced by state reduction methods (if applicable).
4. Assign binary values to each state (Binary Assignment) if the state
table contains letter symbols.
5. Determine the number of Flip-Flops and assign a letter symbol (A, B, C, …)
to each.
6. Choose the type of Flip-Flop (SR, JK, D, T) to be used.
7. From the state table, derive the circuit excitation and output tables.
8. Using K-map or any other simplification method, derive the circuit
output functions and the Flip-Flop input functions.
9. Draw the logic diagram.

The type of Flip-Flop to be used may be included in the design

specifications or may depend on what is available to the designer. Many digital
systems are constructed with JK Flip-Flops because they are the most versatile
available. The selection is given as follows.

Flip-Flop   Application
JK          General applications
D           Applications requiring transfer of data (Ex: Shift Registers)
T           Applications involving complementation (Ex: Binary Counters)
3.10.2 Excitation Tables:
Before going to the design examples for the clocked synchronous
sequential circuits, let us revise the Flip-Flop excitation tables.
Present Next
Inputs
State State
Qn Qn+1 S R
0 0 0 x
0 1 1 0
1 0 0 1
1 1 x 0
Excitation table for SR Flip-Flop

Present Next
Inputs
State State
Qn Qn+1 J K
0 0 0 x
0 1 1 x
1 0 x 1
1 1 x 0
Excitation table for JK Flip-Flop

Present Next
Input
State State
Qn Qn+1 T
0 0 0
0 1 1
1 0 1
1 1 0
Excitation table for T Flip-Flop

Present Next
Input
State State
Qn Qn+1 D
0 0 0
0 1 1
1 0 0
1 1 1
Excitation table for D Flip-Flop
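The four excitation tables can be captured compactly as lookup tables keyed by the (present state, next state) pair; 'x' marks a don't-care input (a Python sketch with illustrative names):

```python
# Excitation entries for each Flip-Flop type, keyed by (Qn, Qn+1).
SR = {(0, 0): (0, 'x'), (0, 1): (1, 0), (1, 0): (0, 1), (1, 1): ('x', 0)}
JK = {(0, 0): (0, 'x'), (0, 1): (1, 'x'), (1, 0): ('x', 1), (1, 1): ('x', 0)}
T  = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}   # T = Qn XOR Qn+1
D  = {(0, 0): 0, (0, 1): 1, (1, 0): 0, (1, 1): 1}   # D = Qn+1

print(JK[(0, 1)], T[(1, 0)], D[(1, 1)])
```

Note the shortcuts visible in the tables: the required T input is always Qn ⊕ Qn+1, and the required D input is always Qn+1.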
Problems
1. Design a clocked sequential machine using JK Flip-Flops for the state
diagram shown in the figure. Use state reduction if possible. Make proper
state assignment.

Soln:
State Table:

Next state Output


Present state
X= 0 X= 1 X= 0 X= 1
a a b 0 0
b c b 0 0
c a b 0 1
d a b 0 0
Reduced State Table:

Next state Output


Present state
X= 0 X= 1 X= 0 X= 1
a a b 0 0
b c b 0 0
c a b 0 1

Binary Assignment:
Now each state is assigned binary values. Since there are three states, the
number of Flip-Flops required is two, and 2-bit binary numbers are assigned to
the states: a= 00; b= 01; and c= 10.

Reduced State Diagram

Excitation Table:

Present State Next State Inputs


Qn Qn+1 J K
0 0 0 x
0 1 1 x
1 0 x 1
1 1 x 0
Excitation table for JK Flip-Flop

Input   Present state   Next state   Flip-Flop Inputs   Output
X   A B   A B   JA  KA  JB  KB   Y
0 0 0 0 0 0 x 0 x 0
1 0 0 0 1 0 x 1 x 0
0 0 1 1 0 1 x x 1 0
1 0 1 0 1 0 x x 0 0
0 1 0 0 0 x 1 0 x 0
1 1 0 0 1 x 1 1 x 1
0 1 1 x x x x x x x
1 1 1 x x x x x x x

K-map Simplification:

With these Flip-Flop input functions and circuit output function we can
draw the logic diagram as follows.
2. Design a clocked sequential machine using T Flip-Flops for the following
state diagram. Use state reduction if possible. Also use straight binary state
assignment.

Soln:
State Table:
Next state Output
Present state
X= 0 X= 1 X= 0 X= 1
a a b 0 0
b d c 0 0
c a b 1 0
d b a 1 1

Even though a and c have the same next states for inputs X=0 and X=1,
the outputs are not the same, so state reduction is not possible.

State Assignment:
Using straight binary assignments a= 00, b= 01, c= 10 and d= 11, the
transition table is,
Flip-Flop
Input Present state Next state Output
Inputs
X A B A B TA TB Y
0 0 0 0 0 0 0 0
0 0 1 1 1 1 0 0
0 1 0 0 0 1 0 1
0 1 1 0 1 1 0 1
1 0 0 0 1 0 1 0
1 0 1 1 0 1 1 0
1 1 0 0 1 1 1 0
1 1 1 0 0 1 1 1

K-map simplification:

With these Flip-Flop input functions and circuit output function we can
draw the logic diagram.

Logic Diagram:

SHIFT REGISTERS:
A register is simply a group of Flip-Flops that can be used to store a binary
number. There must be one Flip-Flop for each bit in the binary number. For
instance,a register used to store an 8-bit binary number must have 8 Flip-Flops.
The Flip-Flops must be connected such that the binary number can be
entered (shifted) into the register and possibly shifted out. A group of Flip-Flops
connected to provide either or both of these functions is called a shift register.
The bits in a binary number (data) can be moved from one place to another
in either of two ways. The first method involves shifting the data one bit at a time
in a serial fashion, beginning with either the most significant bit (MSB) or the least
significant bit (LSB). This technique is referred to as serial shifting. The second
method involves shifting all the data bits simultaneously and is referred to as
parallel shifting.
There are two ways to shift into a register (serial or parallel) and similarly
two ways to shift the data out of the register. This leads to the construction of four
basic register types—
i. Serial in- serial out,
ii. Serial in- parallel out,
iii. Parallel in- serial out,
iv. Parallel in- parallel out.

(i) Serial in- serial out    (iii) Parallel in- serial out

(ii) Serial in- parallel out    (iv) Parallel in- parallel out

Serial-In Serial-Out Shift Register:
The serial-in/serial-out shift register accepts data serially, i.e., one bit at a
time on a single line. It produces the stored information on its output also in
serial form.

Serial-In Serial-Out Shift Register

The entry of the four bits 1010 into the register is illustrated below, beginning
with the right-most bit. The register is initially clear. The 0 is put onto the data input
line, making D=0 for FF0. When the first clock pulse is applied, FF0 is reset, thus
storing the 0.
Next the second bit, which is a 1, is applied to the data input, making D=1 for
FF0 and D=0 for FF1 because the D input of FF1 is connected to the Q0 output.
When the second clock pulse occurs, the 1 on the data input is shifted into FF0,
causing FF0to set; and the 0 that was in FF0 is shifted into FFl.
The third bit, a 0, is now put onto the data-input line, and a clock pulse is
applied. The 0 is entered into FF0, the 1 stored in FF0 is shifted into FFl, and the 0
stored in FF1 is shifted into FF2.
The last bit, a 1, is now applied to the data input, and a clock pulse is applied.
This time the 1 is entered into FF0, the 0 stored in FF0 is shifted into FFl, the 1
stored in FF1 is shifted into FF2, and the 0 stored in FF2 is shifted into FF3. This
completes the serial entry of the four bits into the shift register, where they can
be stored forany length of time as long as the Flip-Flops have dc power.
To get the data out of the register, the bits must be shifted out serially and
taken off the Q3 output. After CLK4, the right-most bit, 0, appears on the Q3
output.
When clock pulse CLK5 is applied, the second bit appears on the Q3
output. Clock pulse CLK6 shifts the third bit to the output, and CLK7 shifts the
fourth bit to the output. While the original four bits are being shifted out, more
bits can be shifted in. All zeros are shown being shifted out, more bits can be
shifted in.
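The bit-entry sequence described above can be sketched as a simple list shift (Python; the names are illustrative):

```python
# 4-bit serial-in/serial-out register: on each clock edge every stage
# copies the stage before it, and stage 0 takes the serial input.
def clock(reg, data_in):
    return [data_in] + reg[:-1]          # reg = [Q0, Q1, Q2, Q3]

reg = [0, 0, 0, 0]                       # register initially clear
for bit in (0, 1, 0, 1):                 # enter 1010, right-most bit (0) first
    reg = clock(reg, bit)
print(reg)                               # Q0..Q3 after the four clock pulses
```

After the four pulses the right-most bit entered (0) sits at Q3, ready to be shifted out first, just as in the description above.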

Serial-In Parallel-Out Shift Register:
In this shift register, data bits are entered into the register in the same
manner as in a serial-in serial-out shift register, but the output is taken in parallel.
Once the data are stored, each bit appears on its respective output line and all bits
are available simultaneously, instead of on a bit-by-bit basis.

Serial-In parallel-Out Shift Register

Parallel-In Serial-Out Shift Register:


In this type, the bits are entered in parallel i.e., simultaneously into their
respective stages on parallel lines.
A 4-bit parallel-in serial-out shift register is illustrated below. There are four
data input lines, X0, X1, X2 and X3 for entering data in parallel into the register.
SHIFT/ LOAD input is the control input, which allows four bits of data to load in
parallel into the register.

Parallel-In Serial-Out Shift Register

When SHIFT/LOAD is LOW, gates G1, G2, G3 and G4 are enabled, allowing
each data bit to be applied to the D input of its respective Flip-Flop. When a clock
pulse is applied, the Flip-Flops with D = 1 will set and those with D = 0 will reset,
thereby storing all four bits simultaneously.
When SHIFT/LOAD is HIGH, gates G1, G2, G3 and G4 are disabled and
gates G5, G6 and G7 are enabled, allowing the data bits to shift right from one
stage to the next. The OR gates allow either the normal shifting operation or the
parallel data- entry operation, depending on which AND gates are enabled by the
level on the SHIFT/LOAD input.

Parallel-In Parallel-Out Shift Register:


In this type, there is simultaneous entry of all data bits and the bits appear
onparallel outputs simultaneously.

Parallel-In Parallel-Out Shift Register

UNIVERSAL SHIFT REGISTERS:


If the register has shift and parallel load capabilities, then it is called a shift
register with parallel load or universal shift register. Shift register can be used
for converting serial data to parallel data, and vice-versa. If a parallel load
capability is added to a shift register, the data entered in parallel can be taken out
in serial fashion by shifting the data stored in the register.

The functions of universal shift register are:
1. A clear control to clear the register to 0.
2. A clock input to synchronize the operations.
3. A shift-right control to enable the shift right operation and the serial
inputand output lines associated with the shift right.
4. A shift-left control to enable the shift left operation and the serial input
andoutput lines associated with the shift left.
5. A parallel-load control to enable a parallel transfer and the n input
linesassociated with the parallel transfer.
6. ‘n’ parallel output lines.
7. A control line that leaves the information in the register unchanged
even though clock pulses are continuously applied.

4-Bit Universal Shift Register


It consists of four D-Flip-Flops and four 4 input multiplexers (MUX). S0 and
S1 are the two selection inputs connected to all the four multiplexers. These two
selection inputs are used to select one of the four inputs of each multiplexer.

The input 0 in each MUX is selected when S1S0= 00 and input 1 is selected
when S1S0= 01. Similarly inputs 2 and 3 are selected when S1S0= 10 and S1S0= 11
respectively. The inputs S1 and S0 control the mode of the operation of the
register.
When S1S0= 00, the present value of the register is applied to the D-inputs
of the Flip-Flops. This is done by connecting the output of each Flip-Flop to the 0
input of the respective multiplexer. The next clock pulse transfers into each
Flip-Flop the binary value it held previously, and hence no change of state occurs.
When S1S0= 01, terminal 1 of the multiplexer inputs has a path to the D
inputs of the Flip-Flops. This causes a shift-right operation, with the left serial
input transferred into Flip-Flop FF3.


When S1S0= 10, a shift-left operation results, with the right serial input
going into Flip-Flop FF1.
Finally when S1S0= 11, the binary information on the parallel input lines (I1,
I2,I3 and I4) are transferred into the register simultaneously during the next clock
pulse.
The function table of the bi-directional shift register with parallel inputs and
parallel outputs is given below.

Mode Control
S1  S0    Operation
0   0     No change
0   1     Shift-right
1   0     Shift-left
1   1     Parallel load
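The four modes can be sketched in a few lines of Python (a minimal model of the multiplexer selection; names and argument defaults are illustrative):

```python
# The four operating modes of the universal register, selected by (S1, S0).
def universal(reg, s1, s0, sr_in=0, sl_in=0, parallel=None):
    if (s1, s0) == (0, 0):
        return reg[:]                    # no change
    if (s1, s0) == (0, 1):
        return [sr_in] + reg[:-1]        # shift right
    if (s1, s0) == (1, 0):
        return reg[1:] + [sl_in]         # shift left
    return list(parallel)                # parallel load

print(universal([1, 0, 1, 1], 0, 1, sr_in=0))
```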
BI-DIRECTIONAL SHIFT REGISTERS:
A bidirectional shift register is one in which the data can be shifted either
left or right. It can be implemented by using gating logic that enables the transfer
of a data bit from one stage to the next stage to the right or to the left, depending
on the level of a control line.
A 4-bit bidirectional shift register is shown below. A HIGH on the
RIGHT/LEFT control input allows data bits inside the register to be shifted to the
right, and a LOW enables data bits inside the register to be shifted to the left.
When the RIGHT/LEFT control input is HIGH, gates G1, G2, G3 and G4 are
enabled, and the state of the Q output of each Flip-Flop is passed through to the
D input of the following Flip-Flop. When a clock pulse occurs, the data bits are
shifted one place to the right.
When the RIGHT/LEFT control input is LOW, gates G5, G6, G7 and G8 are
enabled, and the Q output of each Flip-Flop is passed through to the D input of
the preceding Flip-Flop. When a clock pulse occurs, the data bits are then shifted
one place to the left.

4-bit bi-directional shift register

SYNCHRONOUS COUNTERS:

Flip-Flops can be connected together to perform counting operations.
Such a group of Flip-Flops is a counter. The number of Flip-Flops used and the
way in which they are connected determine the number of states (called the
modulus) and also the specific sequence of states that the counter goes through
during each complete cycle.

Counters are classified into two broad categories according to the way
they are clocked:
(i) Asynchronous counters,
(ii) Synchronous counters.
In asynchronous (ripple) counters, the first Flip-Flop is clocked by the
external clock pulse and then each successive Flip-Flop is clocked by the output
of the preceding Flip-Flop.
In synchronous counters, the clock input is connected to all of the Flip-
Flops so that they are clocked simultaneously. Within each of these two
categories, counters are classified primarily by the type of sequence, the number
of states, or the number of Flip-Flops in the counter.
The term ‘synchronous’ refers to events that have a fixed time
relationship with each other. In synchronous counter, the clock pulses are
applied to all Flip- Flops simultaneously. Hence there is minimum
propagation delay.

S.No   Asynchronous (ripple) counter            Synchronous counter

1      All the Flip-Flops are not clocked       All the Flip-Flops are clocked
       simultaneously.                          simultaneously.
2      The delay times of all Flip-Flops        There is minimum propagation delay.
       are added; therefore there is
       considerable propagation delay.
3      Speed of operation is low.               Speed of operation is high.
4      Logic circuit is very simple even        Design involves complex logic
       for more number of states.               circuit as the number of states
                                                increases.
5      Minimum number of logic devices          The number of logic devices is
       is needed.                               more than in ripple counters.
6      Cheaper than synchronous counters.       Costlier than ripple counters.

2-Bit Synchronous Binary Counter


In this counter the clock signal is connected in parallel to clock inputs of
both the Flip-Flops (FF0 and FF1). The output of FF0 is connected to J1 and K1 inputs
of the second Flip-Flop (FF1).
2-Bit Synchronous Binary Counter

Assume that the counter is initially in the binary 0 state: i.e., both Flip-Flops
are RESET. When the positive edge of the first clock pulse is applied, FF0 will
toggle because J0= k0= 1, whereas FF1 output will remain 0 because J1= k1= 0.
After the first clock pulse Q0=1 and Q1=0.
When the leading edge of CLK2 occurs, FF0 will toggle and Q0 will go LOW.
Since FF1 has a HIGH (Q0 = 1) on its J1 and K1 inputs at the triggering edge of this
clock pulse, the Flip-Flop toggles and Q1 goes HIGH. Thus, after CLK2,
Q0 = 0 and Q1 = 1.
When the leading edge of CLK3 occurs, FF0 again toggles to the SET state (Q0
= 1), and FF1 remains SET (Q1 = 1) because its J1 and K1 inputs are both LOW (Q0 =
0). After this triggering edge, Q0 = 1 and Q1 = 1.
Finally, at the leading edge of CLK4, Q0 and Q1 go LOW because they both
have a toggle condition on their J and K inputs. The counter has now recycled
to its original state, Q0 = Q1 = 0.

Timing diagram

3-Bit Synchronous Binary Counter
A 3-bit synchronous binary counter is constructed with three JK Flip-Flops
and an AND gate. The output of FF0 (Q0) changes on each clock pulse as the
counter progresses from its original state to its final state and then back to its
original state. To produce this operation, FF0 must be held in the toggle mode by
a constant HIGH on its J0 and K0 inputs.

3- Bit Synchronous Binary Counter

The output of FF1 (Q1) goes to the opposite state on each clock pulse for
which Q0= 1. This change occurs at CLK2, CLK4, CLK6, and CLK8. The CLK8 pulse causes
the counter to recycle. To produce this operation, Q0 is connected to the J1 and
K1 inputs of FF1. When Q0= 1 and a clock pulse occurs, FF1 is in the toggle mode
and therefore changes state. When Q0= 0, FF1 is in the no-change mode and
remains in its present state.
The output of FF2 (Q2) changes state each time it is preceded by the
unique condition in which both Q0 and Q1 are HIGH. This condition is detected by
the AND gate and applied to the J2 and K2 inputs of FF2. Whenever both outputs
Q0= Q1= 1, the output of the AND gate makes J2= K2= 1 and FF2 toggles on
the following clock pulse. Otherwise, the J2 and K2 inputs of FF2 are held LOW
by the AND gate.
CLOCK Pulse Q2 Q1 Q0
Initially 0 0 0
1 0 0 1
2 0 1 0
3 0 1 1
4 1 0 0
5 1 0 1
6 1 1 0
7 1 1 1
8 (recycles) 0 0 0
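The connections described above (J0 = K0 = 1, J1 = K1 = Q0, J2 = K2 = Q0·Q1) make each stage toggle exactly when all lower outputs are 1, which is plain binary counting. A Python sketch (with J = K, a JK Flip-Flop toggles when its input is 1, i.e. Q(t+1) = Q ⊕ J):

```python
# 3-bit synchronous binary counter: all stages update from the OLD state
# on the same clock edge (that is what makes it synchronous).
def step(q):                       # q = [Q0, Q1, Q2]
    j = [1, q[0], q[0] & q[1]]     # toggle-enable for each stage
    return [qi ^ ji for qi, ji in zip(q, j)]

q, seq = [0, 0, 0], []
for _ in range(8):
    q = step(q)
    seq.append(q[2] * 4 + q[1] * 2 + q[0])
print(seq)    # the counting sequence, then recycle to 0
```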

Timing diagram

4-Bit Synchronous Binary Counter


This particular counter is implemented with negative edge-triggered Flip-
Flops. The reasoning behind the J and K input control for the first three Flip-
Flops is the same as previously discussed for the 3-bit counter. For the fourth
stage, the Flip-Flop has to change state when Q0= Q1= Q2= 1. This condition
is decoded by an AND gate.

4- Bit Synchronous Binary Counter

Therefore, when Q0= Q1= Q2= 1, Flip-Flop FF3 toggles and for all other times
it is in a no-change condition. Points where the AND gate outputs are HIGH are
indicated by the shaded areas.

4-Bit Synchronous Decade Counter: (BCD Counter):


A BCD decade counter has a sequence from 0000 to 1001 (9). After the 1001
state it must recycle back to the 0000 state. This counter requires four Flip-Flops
and AND/OR logic as shown below.

4-Bit Synchronous Decade Counter

 First, notice that FF0 (Q0) toggles on each clock pulse, so the logic equation
forits J0 and K0 inputs is
J 0 = K 0= 1
This equation is implemented by connecting J0 and K0 to a constant HIGH level.
 Next, notice from table, that FF1 (Q1) changes on the next clock pulse each
time Q0 = 1 and Q3 = 0, so the logic equation for the J1 and K1 inputs is
J 1 = K 1= Q 0Q 3 ’
 Flip-Flop 2 (Q2) changes on the next clock pulse each time both Q0 = Q1 = 1.
This requires an input logic equation as follows:
J 2 = K 2= Q 0Q 1
This equation is implemented by ANDing Q0 and Q1 and connecting the gate
output to the J2 and K2 inputs of FF2.
 Finally, FF3 (Q3) changes to the opposite state on the next clock pulse each
time Q0 = 1, Q1 = 1, and Q2 = 1 (state 7), or when Q0 = 1 and Q3 = 1 (state 9).
The equation for this is as follows:
J 3 = K 3= Q 0Q 1 Q 2+ Q 0Q 3

This function is implemented with the AND/OR logic connected to the J3 and K3
inputs of FF3.
CLOCK Pulse Q3 Q2 Q1 Q0
Initially 0 0 0 0
1 0 0 0 1
2 0 0 1 0
3 0 0 1 1
4 0 1 0 0
5 0 1 0 1
6 0 1 1 0
7 0 1 1 1
8 1 0 0 0
9 1 0 0 1
10(recycles) 0 0 0 0
The timing diagram for the decade counter is shown below.

Timing diagram
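The four J/K equations derived above can be verified by stepping a small model through a full decade (Python sketch; with J = K, each stage toggles when its input is 1):

```python
# Decade (BCD) counter using J0=K0=1, J1=K1=Q0·Q3', J2=K2=Q0·Q1,
# J3=K3=Q0·Q1·Q2 + Q0·Q3; Q(t+1) = Q XOR J when J = K.
def step(q):                              # q = [Q0, Q1, Q2, Q3]
    t = [1,                               # J0 = K0 = 1
         q[0] & (1 - q[3]),               # J1 = K1 = Q0·Q3'
         q[0] & q[1],                     # J2 = K2 = Q0·Q1
         (q[0] & q[1] & q[2]) | (q[0] & q[3])]   # J3 = K3
    return [qi ^ ti for qi, ti in zip(q, t)]

q, seq = [0, 0, 0, 0], []
for _ in range(10):
    q = step(q)
    seq.append(q[3] * 8 + q[2] * 4 + q[1] * 2 + q[0])
print(seq)    # 0 -> 1 -> ... -> 9 -> recycle to 0
```

The Q0·Q3' term on stage 1 is what stops the count at 9: in state 1001 it keeps Q1 at 0, while the Q0·Q3 term toggles Q3 back to 0.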

Synchronous UP/DOWN Counter


An up/down counter is a bidirectional counter, capable of progressing in
either direction through a certain sequence. A 3-bit binary counter that advances
through the sequence (0, 1, 2, 3, 4, 5, 6, 7) and then goes through the sequence
in the opposite direction (7, 6, 5, 4, 3, 2, 1, 0) is an illustration of up/down
sequential operation.
The complete up/down sequence for a 3-bit binary counter is shown in
table below. The arrows indicate the state-to-state movement of the counter for
both its UP and its DOWN modes of operation. An examination of Q0 for both the
up and down sequences shows that FF0 toggles on each clock pulse. Thus, the J0
and K0 inputs of FF0 are,
J 0 = K 0= 1

To form a synchronous UP/DOWN counter, the control input (UP/DOWN)
is used to allow either the normal output or the inverted output of one Flip-Flop
to pass to the J and K inputs of the next Flip-Flop. When UP/DOWN= 1, the
MOD 8 counter will count from 000 to 111, and when UP/DOWN= 0, it will count
from 111 to 000.

When UP/DOWN= 1, it will enable AND gates 1 and 3 and disable AND
gates 2 and 4. This allows the Q0 and Q1 outputs through the AND gates to
the J and K inputs of the following Flip-Flops, so the counter counts up as
pulses are applied.
When UP/DOWN= 0, the reverse action takes place.
J1= K1= (Q0.UP)+ (Q0’.DOWN)

J2= K2= (Q0. Q1.UP)+ (Q0’.Q1’.DOWN)

3-bit UP/DOWN Synchronous Counter
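The two gated equations above can be exercised in both directions with a small Python model (illustrative names; DOWN is simply the complement of UP):

```python
# 3-bit up/down counter: T0 = 1, T1 = Q0·UP + Q0'·DOWN,
# T2 = Q0·Q1·UP + Q0'·Q1'·DOWN; each stage toggles when its T term is 1.
def step(q, up):
    dn = 1 - up
    t = [1,
         (q[0] & up) | ((1 - q[0]) & dn),
         (q[0] & q[1] & up) | ((1 - q[0]) & (1 - q[1]) & dn)]
    return [qi ^ ti for qi, ti in zip(q, t)]

def run(up, n=8):
    q, seq = [0, 0, 0], []
    for _ in range(n):
        q = step(q, up)
        seq.append(q[2] * 4 + q[1] * 2 + q[0])
    return seq

print(run(1))   # counting up
print(run(0))   # counting down
```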


MODULUS-N-COUNTERS:
A counter with 'n' Flip-Flops has a maximum MOD number of 2^n. Find the
number of Flip-Flops (n) required for the desired MOD number (N) using the
equation,
2^n ≥ N
(i) For example, a 3-bit binary counter is a MOD 8 counter. The basic counter
can be modified to produce MOD numbers less than 2^n by allowing the
counter to skip states that are normally part of the counting sequence.
n= 3
N= 8
2^n = 2^3 = 8 = N

(ii) MOD 5 Counter:

2^n ≥ N= 5
2^2 = 4, less than N.
2^3 = 8 > N(5)
Therefore, 3 Flip-Flops are required.

(iii) MOD 10 Counter:

2^n ≥ N= 10
2^3 = 8, less than N.
2^4 = 16 > N(10).
Therefore, 4 Flip-Flops are required.
To construct any MOD-N counter, the following method can be used.
1. Find the number of Flip-Flops (n) required for the desired MOD number (N)
using the equation,
2^n ≥ N.
2. Connect all the Flip-Flops as a required counter.
3. Find the binary number for N.
4. Connect all Flip-Flop outputs for which Q= 1 when the count is N, as inputs
to a NAND gate.
5. Connect the NAND gate output to the CLR input of each Flip-Flop.

When the counter reaches the Nth state, the output of the NAND gate goes
LOW, resetting all Flip-Flops to 0. Therefore the counter counts from 0 through
N-1.
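The Flip-Flop count from step 1 is just the smallest n with 2^n ≥ N, which a trivial loop can compute (Python sketch, illustrative name):

```python
# Smallest n satisfying 2**n >= N: the Flip-Flop count for a MOD-N counter.
def flip_flops_needed(N):
    n = 1
    while 2 ** n < N:
        n += 1
    return n

print(flip_flops_needed(5), flip_flops_needed(8), flip_flops_needed(10))
```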

For example, a MOD-10 counter reaches state 10 (1010), i.e., Q3Q2Q1Q0= 1010.


The outputs Q3 and Q1 are connected to the NAND gate; when the count reaches
1010, the output of the NAND gate goes LOW, resetting all Flip-Flops to zero.
Therefore the MOD-10 counter counts from 0000 to 1001 and then recycles to zero.
The MOD-10 counter circuit is shown below.

MOD-10 (Decade) Counter

2.1 SHIFT REGISTER COUNTERS:

A shift register counter is basically a shift register with the serial output
connected back to the serial input to produce special sequences. Two of the most
common types of shift register counters are:
(i) Johnson counter (Shift Counter),
(ii) Ring counter.

3.16.1 Johnson counter (Shift Counter):


In a Johnson counter the complement of the output of the last Flip-Flop is
connected back to the D input of the first Flip-Flop. This feedback arrangement
produces a characteristic sequence of states as shown in table below. The 4-bit
sequence has a total of eight states, and the 5-bit sequence has a total of ten
states. In general, a Johnson counter will produce a modulus of 2n (twice the
number of stages 'n' in the counter).
4-Bit Johnson Counter

The Q output of each stage is connected to the D input of the next stage
(assuming that D Flip-Flops are used). The complement output of the last stage is
connected back to the D input of the first stage.

Time sequence for a 4-bit Johnson counter
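The feedback arrangement (last stage's complement into the first stage's D input) is easy to model in Python, and confirms the 2n-state modulus for n = 4:

```python
# 4-bit Johnson counter: each clock shifts the register one place and
# injects the COMPLEMENT of the last stage into the first.
def step(q):                      # q = [Q0, Q1, Q2, Q3]
    return [1 - q[-1]] + q[:-1]

q, states = [0, 0, 0, 0], []
for _ in range(8):
    q = step(q)
    states.append(tuple(q))
print(states)                     # 2n = 8 distinct states, then it repeats
```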

3.16.2 Ring Counters:


The ring counter utilizes one Flip-Flop for each state in its sequence. It has
the advantage that decoding gates are not required. In the case of a 10-bit ring
counter, there is a unique output for each decimal digit.

Ring counter

The output Q0 sets the D1 input, Q1 sets D2, Q2 sets D3, and Q3 is fed back
to D0. Because of these conditions, bits are shifted left one position per positive
clock edge
and fed back to the input. All the Flip-Flops are clocked together. When CLR
goes low and then back to high, the output is 0001.

Time sequence for a Ring counter

The first positive clock edge shifts MSB to LSB position and other bits to one
position left so that the output becomes Q= 0010. This process continues on
second and third clock edge so that successive outputs are 0100 and 1000. The
fourth positive clock edge starts the cycle all over again and the output is 0001.
Thus the stored 1 bit follows a circular path (i.e., the stored 1 bits move left through
all Flip- Flops and the final Flip-Flop sends it back to the first Flip-Flop). This
action has given the name of ring counter.
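The circulating 1 bit can be modeled directly (Python sketch; the single preset 1 returns to its starting stage after n clocks):

```python
# 4-bit ring counter: Q0 drives D1, Q1 drives D2, Q2 drives D3, and Q3
# is fed back to D0, so a single 1 circulates through the stages.
def step(q):                      # q = [Q0, Q1, Q2, Q3]
    return [q[-1]] + q[:-1]

q, states = [1, 0, 0, 0], []      # preset with a single 1
for _ in range(4):
    q = step(q)
    states.append(tuple(q))
print(states)                     # the 1 returns home after four clocks
```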

2.2 DESIGN OF COUNTERS:


The procedure for the design of counters is as follows:
1. Specify the counter sequence and draw a state diagram.
2. Derive a next-state table from the state diagram.
3. Make the state assignment and develop a transition table showing the
flip-flop inputs required.
4. Draw the K-maps for each input of each Flip-Flop.
5. Derive the logic expression for each Flip-Flop input from the K-maps.
6. Implement the expressions with combinational logic and combine with
the Flip-Flops to form the counter.
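Step 3 of the procedure, finding the J and K inputs required for each state transition, follows mechanically from the JK excitation table. A small helper (illustrative only) encodes that rule, with 'x' standing for a don't-care:

```python
def jk_excitation(q, q_next):
    """Return the (J, K) inputs that move a JK flip-flop from q to q_next."""
    if q == 0:
        return (q_next, 'x')   # 0->0 needs J=0, 0->1 needs J=1; K is don't-care
    return ('x', 1 - q_next)   # 1->0 needs K=1, 1->1 needs K=0; J is don't-care

# Reproduces the four rows of the JK excitation table.
assert jk_excitation(0, 0) == (0, 'x')
assert jk_excitation(0, 1) == (1, 'x')
assert jk_excitation(1, 0) == ('x', 1)
assert jk_excitation(1, 1) == ('x', 0)
```

Applying this rule to every flip-flop of every present-state/next-state pair reproduces the counter excitation tables used in the worked examples.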

Examples:
1. Using JK Flip-Flops, design a synchronous counter which counts in
the sequence 000, 001, 010, 011, 100, 101, 110, 111, 000.
Step 1: State Diagram

Step 2: Excitation Table :


Excitation Table for JK Flip-Flop:
Present State Next State Inputs
Qn Qn+1 J K
0 0 0 x
0 1 1 x
1 0 x 1
1 1 x 0
Excitation Table for Counter:

Present State Next State Flip-Flop Inputs


Q2 Q1 Q0 q2 q1 q0 J2 K2 J1 K1 J0 K0
0 0 0 0 0 1 0 x 0 x 1 x
0 0 1 0 1 0 0 x 1 x x 1
0 1 0 0 1 1 0 x x 0 1 x
0 1 1 1 0 0 1 x x 1 x 1
1 0 0 1 0 1 x 0 0 x 1 x
1 0 1 1 1 0 x 0 1 x x 1
1 1 0 1 1 1 x 0 x 0 1 x
1 1 1 0 0 0 x 1 x 1 x 1

Step 3: K-map Simplification

Step 4: Logic Diagram

State Diagram:

2. Design and explain the working of a synchronous MOD-3 counter.
Soln:
2^n ≥ N = 3
2^2 = 4 > 3. Therefore, 2 Flip-Flops are required.

Excitation Table:
Excitation Table for JK Flip-Flop:
Present State Next State Inputs
Qn Qn+1 J K
0 0 0 x
0 1 1 x
1 0 x 1
1 1 x 0
Excitation Table for Counter:

Present State Next State Flip-Flop Inputs


QB QA QB+1 QA+1 JB KB JA KA
0 0 0 1 0 x 1 x
0 1 1 0 1 x x 1
1 0 0 0 x 1 0 x

K-map Simplification:

Logic Diagram:
3. Design a synchronous counter with states 0, 1, 2, 3, 0, 1, … using JK
Flip-Flops.
Soln:
State Diagram:

Excitation Table for Counter:


Present State Next State Flip-Flop Inputs
QB QA QB+1 QA+1 JB KB JA KA
0 0 0 1 0 x 1 x
0 1 1 0 1 x x 1
1 0 1 1 x 0 1 x
1 1 0 0 x 1 x 1

K-map Simplification:

Logic Diagram:

4. Design a MOD-7 synchronous counter using JK Flip-Flops. Write excitation
table and state table.
Soln:
2^n ≥ N = 7
2^3 = 8 > 7. Therefore, 3 Flip-Flops are required.
State Diagram:

Excitation Table:
Excitation Table for JK Flip-Flop:
Present State Next State Inputs
Qn Qn+1 J K
0 0 0 x
0 1 1 x
1 0 x 1
1 1 x 0
Excitation Table for Counter:

Present State Next State Flip-Flop Inputs


QC QB QA QC+1 QB+1 QA+1 JC KC JB KB JA KA
0 0 0 0 0 1 0 x 0 x 1 x
0 0 1 0 1 0 0 x 1 x x 1
0 1 0 0 1 1 0 x x 0 1 x
0 1 1 1 0 0 1 x x 1 x 1
1 0 0 1 0 1 x 0 0 x 1 x
1 0 1 1 1 0 x 0 1 x x 1
1 1 0 0 0 0 x 1 x 1 0 x

K-map Simplification:

Logic Diagram:

5. Design a MOD-10 synchronous counter using JK Flip-Flops. Write excitation
table and state table.
Soln:
2^n ≥ N = 10
2^4 = 16 > 10. Therefore, 4 Flip-Flops are required.
State Table:

Excitation Table:
Excitation Table for JK Flip-Flop:
Present State Next State Inputs
Qn Qn+1 J K
0 0 0 x
0 1 1 x
1 0 x 1
1 1 x 0

Excitation Table for Counter:

Present State Next State Flip-Flop Inputs


QD QC QB QA QD+1 QC+1 QB+1 QA+1 JD KD JC KC JB KB JA KA
0 0 0 0 0 0 0 1 0 x 0 x 0 x 1 x
0 0 0 1 0 0 1 0 0 x 0 x 1 x x 1
0 0 1 0 0 0 1 1 0 x 0 x x 0 1 x
0 0 1 1 0 1 0 0 0 x 1 x x 1 x 1
0 1 0 0 0 1 0 1 0 x x 0 0 x 1 x
0 1 0 1 0 1 1 0 0 x x 0 1 x x 1
0 1 1 0 0 1 1 1 0 x x 0 x 0 1 x
0 1 1 1 1 0 0 0 1 x x 1 x 1 x 1
1 0 0 0 1 0 0 1 x 0 0 x 0 x 1 x
1 0 0 1 0 0 0 0 x 1 0 x 0 x x 1

K-map Simplification:

Logic Diagram:

6. Design a synchronous 3-bit gray code up counter with the help of excitation
table.
Soln:
Gray code sequence: 000, 001, 011, 010, 110, 111, 101, 100.
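The Gray code sequence above can also be generated arithmetically: the n-th Gray code value is n XOR (n >> 1). A quick check of the sequence (illustrative sketch):

```python
def gray(n):
    """Convert a binary count value to its Gray code equivalent."""
    return n ^ (n >> 1)

seq = [format(gray(i), '03b') for i in range(8)]
# Matches the sequence used in this example.
assert seq == ['000', '001', '011', '010', '110', '111', '101', '100']
```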

Excitation Table:

Present State Next State Flip-Flop Inputs


QC QB QA QC+1 QB+1 QA+1 JC KC JB KB JA KA
0 0 0 0 0 1 0 x 0 x 1 x
0 0 1 0 1 1 0 x 1 x x 0
0 1 1 0 1 0 0 x x 0 x 1
0 1 0 1 1 0 1 x x 0 0 x
1 1 0 1 1 1 x 0 x 0 1 x
1 1 1 1 0 1 x 0 x 1 x 0
1 0 1 1 0 0 x 0 0 x x 1
1 0 0 0 0 0 x 1 0 x 0 x
K-map Simplification:

Logic Diagram:

7. Design a 3-bit (MOD-8) synchronous UP/DOWN counter.
Soln:
When UP/DOWN = 1, the counter operates in UP mode;
when UP/DOWN = 0, it operates in DOWN mode.
State Diagram:

Excitation Table:
Input Present State Next State Flip-Flop Inputs
Up/Down QA QB QC QA+1 QB+1 QC+1 JA KA JB KB JC KC
0 0 0 0 1 1 1 1 x 1 x 1 x
0 1 1 1 1 1 0 x 0 x 0 x 1
0 1 1 0 1 0 1 x 0 x 1 1 x
0 1 0 1 1 0 0 x 0 0 x x 1
0 1 0 0 0 1 1 x 1 1 x 1 x
0 0 1 1 0 1 0 0 x x 0 x 1
0 0 1 0 0 0 1 0 x x 1 1 x
0 0 0 1 0 0 0 0 x 0 x x 1
1 0 0 0 0 0 1 0 x 0 x 1 x
1 0 0 1 0 1 0 0 x 1 x x 1
1 0 1 0 0 1 1 0 x x 0 1 x
1 0 1 1 1 0 0 1 x x 1 x 1
1 1 0 0 1 0 1 x 0 0 x 1 x
1 1 0 1 1 1 0 x 0 1 x x 1
1 1 1 0 1 1 1 x 0 x 0 1 x
1 1 1 1 0 0 0 x 1 x 1 x 1
K-map Simplification:

Logic Diagram:

3.18 LOCKOUT CONDITION:

In a counter, if the next state of some unused state is again an unused state,
and if by chance the counter happens to find itself in the unused states and never
arrives at a used state, then the counter is said to be in the lockout condition.

Desired Sequence

A counter that can go into the lockout condition is said to be not self-starting. To make
sure that the counter will come to the initial state from any unused state,
additional logic circuitry is necessary. To ensure that lockout does not occur, the
counter should be designed by forcing the next state of each unused state to be a
used state, as shown below.

State diagram for removing lockout

Example: Design the counter so as to avoid the lockout condition. Use JK type design.

Soln:
State diagram:

Here, states 5, 2 and 0 are forced to go to states 6, 3 and 1,
respectively, to avoid the lockout condition.
Excitation table:
Excitation Table for JK Flip-Flop:
Present State Next State Inputs
Qn Qn+1 J K
0 0 0 x
0 1 1 x
1 0 x 1
1 1 x 0

Excitation Table for counter:

Present State Next State Flip-Flop Inputs


QA QB QC QA+1 QB+1 QC+1 JA KA JB KB JC KC
0 0 0 0 0 1 0 x 0 x 1 x
0 0 1 1 0 0 1 x 0 x x 1
0 1 0 0 1 1 0 x x 0 1 x
0 1 1 0 0 1 0 x x 1 x 0
1 0 0 1 1 0 x 0 1 x 0 x
1 0 1 1 1 0 x 0 1 x x 1
1 1 0 1 1 1 x 0 x 0 1 x
1 1 1 0 1 1 x 1 x 0 x 0

K-map Simplification:
Logic Diagram:

UNIT III COMPUTER FUNDAMENTALS

Functional Units of a Digital Computer: Von Neumann Architecture
– Operation and Operands of Computer Hardware Instruction –
Instruction Set Architecture (ISA): Memory Location, Address and
Operation – Instruction and Instruction Sequencing – Addressing
Modes, Encoding of Machine Instruction – Interaction between
Assembly and High-Level Language.

3.1 Functional Units of Digital Computer


o A computer organization describes the functions and design of the various units of a
digital system.
o A general-purpose computer system is the best-known example of a digital system.
Other examples include telephone switching exchanges, digital voltmeters, digital
counters, electronic calculators and digital displays.
o Computer architecture deals with the specification of the instruction set and the
hardware units that implement the instructions.
o Computer hardware consists of electronic circuits, displays, magnetic and optic
storage media and also the communication facilities.
o Functional units are the parts of a computer system that perform the operations and
calculations called for by the computer program.
o A computer consists of five main functional units, namely: the input unit, memory
unit, arithmetic and logic unit, control unit and output unit. The arithmetic and
logic unit and the control unit together form the CPU (Central Processing Unit).
Input unit
o Input units are used by the computer to read the data. The most commonly used
input devices are keyboards, mouse, joysticks, trackballs, microphones, etc.
o However, the most well-known input device is a keyboard. Whenever a key is
pressed, the corresponding letter or digit is automatically translated into its
corresponding binary code and transmitted over a cable to either the memory or
the processor.

Central processing unit


o Central processing unit, commonly known as the CPU, can be referred to as the
electronic circuitry within a computer that carries out the instructions given by a
computer program by performing the basic arithmetic, logical, control and
input/output (I/O) operations specified by the instructions.

Memory unit
o The Memory unit can be referred to as the storage area in which programs are kept
while they are running, and that contains the data needed by the running programs.
o The Memory unit can be categorized in two ways, namely primary memory and
secondary memory.
o It enables the processor to access running applications and services that are
temporarily stored in a specific memory location.

o Primary storage is the fastest memory that operates at electronic speeds. Primary
memory contains a large number of semiconductor storage cells, each capable of
storing a bit of information. The word length of a computer is typically between
16 and 64 bits.
o It is also known as the volatile form of memory, means when the computer is shut
down, anything contained in RAM is lost.
o Cache memory is also a kind of memory which is used to speed up data access. It is
tightly coupled with the processor.
o The most common examples of primary memory are RAM and ROM.
o Secondary memory is used when a large amount of data and programs have to be
stored on a long-term basis.

Control unit
o The control unit is also known as the nerve center of a computer system.
o Let us consider an example of the addition of two operands by the instruction given as
Add LOCA, RO. This instruction adds the memory location LOCA to the operand in
the register RO and places the sum in the register RO. This instruction internally
performs several steps.

Output Unit
o The primary function of the output unit is to send the processed results to the user.
Output devices display information in a way that the user can understand.
o Output devices are pieces of equipment that are used to generate information or any
other response processed by the computer. These devices display information that
has been held or generated within a computer.

o The most common example of an output device is a monitor.

3.2 Von-Neumann Model


Von Neumann proposed his computer architecture design in 1945, which was later known
as Von Neumann Architecture. It consisted of a Control Unit, an Arithmetic and Logic
Unit (ALU), a Memory Unit, Registers and Inputs/Outputs.

Von Neumann architecture is based on the stored-program computer concept, where


instruction data and program data are stored in the same memory. This design is still used
in most computers produced today.

A Von Neumann-based computer:


o Uses a single processor
o Uses one memory for both instructions and data.
o Executes programs following the fetch-decode-execute cycle

Components of Von-Neumann Model:


o Central Processing Unit
o Buses
o Memory Unit

Central Processing Unit

The part of the Computer that performs the bulk of data processing operations is called the
Central Processing Unit and is referred to as the CPU.

The Central Processing Unit can also be defined as an electric circuit responsible for
executing the instructions of a computer program.

The CPU performs a variety of functions dictated by the type of instructions that are
incorporated in the computer.

The major components of CPU are Arithmetic and Logic Unit (ALU), Control Unit (CU) and a
variety of registers.

Arithmetic and Logic Unit (ALU)


The Arithmetic and Logic Unit (ALU) performs the required micro-operations for executing
the instructions. In simple words, ALU allows arithmetic (add, subtract, etc.) and logic
(AND, OR, NOT, etc.) operations to be carried out.

Control Unit

The Control Unit of a computer system controls the operations of components like ALU,
memory and input/output devices.

The Control Unit consists of a program counter that contains the address of the
instructions to be fetched and an instruction register into which instructions are fetched
from memory for execution.

Registers

Registers refer to high-speed storage areas in the CPU. The data processed by the CPU
are fetched from the registers.

Following is the list of registers that plays a crucial role in data processing.

Registers Description

MAR (Memory Address Register) This register holds the memory location of the data that needs to be
accessed.

MDR (Memory Data Register) This register holds the data that is being transferred to or from memory.

AC (Accumulator) This register holds the intermediate arithmetic and logic results.

PC (Program Counter) This register contains the address of the next instruction to be executed.

CIR (Current Instruction This register contains the current instruction during processing.
Register)
Buses

Buses are the means by which information is shared between the registers in a multiple-
register configuration system.

A bus structure consists of a set of common lines, one for each bit of a register, through
which binary information is transferred one at a time. Control signals determine which
register is selected by the bus during each particular register transfer.

Von-Neumann Architecture comprised of three major bus systems for data transfer.

Bus Description

Address Bus Address Bus carries the address of data (but not the data) between the processor and the
memory.
Data Bus Data Bus carries data between the processor, the memory unit and the input/output
devices.
Control Bus Control Bus carries signals/commands from the CPU.

Memory Unit
A memory unit is a collection of storage cells together with associated circuits needed to
transfer information in and out of the storage. The memory stores binary information in
groups of bits called words. The internal structure of a memory unit is specified by the
number of words it contains and the number of bits in each word.

Two major types of memories are used in computer systems:

1. RAM (Random Access Memory)


2. ROM (Read-Only Memory)

3.3 Operation and Operands of Computer Hardware Instruction
Computer instruction is a binary code that determines the micro-operations in a sequence for a
computer. Instructions are saved in the memory along with the data. Each computer has its
specific group of instructions. An instruction can be divided into two elements: the operation
code (opcode) and the address. The opcode specifies the operation for a specific instruction,
and the address determines the registers or memory areas used for that operation.

Operands are definite elements of a computer instruction that show what information is to
be operated on. The most important general categories of data are

1. Addresses
2. Numbers
3. Characters
4. Logical data

In many cases, some calculation must be performed on the operand reference to
determine the main or virtual memory address.

In this context, addresses can be considered to be unsigned integers. Other common data types
are numbers, characters, and logical data, and each of these is briefly described below. Some
machines define specialized data types or data structures. For example, machine operations may
operate directly on a list or a string of characters.

Addresses

Addresses are nothing but a form of data. Here, some calculation must be performed on
the operand reference in an instruction to determine the physical address of the
operand.

Numbers

All machine languages include numeric data types. Even in non-numeric data processing,
numbers are needed to act as counters, field widths, etc. An important difference
between numbers used in ordinary mathematics and numbers stored in a computer is
that the latter are limited. Thus, the programmer is faced with understanding the
consequences of rounding, overflow and underflow.

Here are the three types of numerical data in computers, such as:

1. Integer or fixed point: Fixed point representation is used to store integers, the positive
and negative whole numbers (…, -3, -2, -1, 0, 1, 2, 3, …). However, the programmer assigns a
radix point location to each number and tracks the radix point through every operation.
High-level languages, such as C and BASIC, usually allocate 16 bits to store each integer. Each
fixed point binary number has three important parameters that describe it:

 Whether the number is signed or unsigned,


 The position of the radix point to the right side of the sign bit (for signed numbers), or the
position of the radix point to the most significant bit (for unsigned numbers).
 And the number of fractional bits stored.

2. Floating point: A floating point number usually has a decimal point, which means 0, 3.14, 6.5,
and -125.5 are floating point numbers.

The term floating point is derived from the fact that there is no fixed number of digits
before and after the decimal point, which means the decimal point can float. There are
also representations in which the number of digits before and after the decimal point is
set, called fixed-point representations. In general, floating-point representations are
slower and less accurate than fixed-point representations, but they can handle a larger
range of numbers.

3. Decimal number: The decimals are an extension of our number system. We also know
that decimals can be considered fractions with denominators of 10, 100, 1000, etc. The numbers
expressed in the decimal form are called decimal numbers or decimals, for example: 1, 4.09,
13.83, etc. A decimal number has two parts, and a dot, called the decimal point, separates
these parts.

 Whole number part: The digits lying to the left of the decimal point form the whole
number part. The places begin with ones, tens, hundreds, thousands and so on.
 Decimal part: The decimal point and the digits lying on the right of the decimal point
form the decimal part. The places begin with tenths, hundredths, thousandths and so on.

Characters

A common form of data is text or character strings. While textual data are most convenient for
human beings, they cannot be processed directly by a computer and are therefore stored as
binary numbers. All of the characters that a computer can use are called character sets.
The two common standards are:

1. American Standard Code for Information Interchange (ASCII)


2. Unicode

ASCII uses seven bits, giving a character set of 128 characters. The characters are
represented in a table called the ASCII table. The 128 characters include:

 32 control codes (mainly to do with printing)


 32 punctuation codes, symbols, and space
 26 upper-case letters
 26 lower-case letters
 numeric digits 0-9

We can say that the letter 'A' is the first letter of the alphabet, 'B' is the second, and so on,
all the way up to 'Z', which is the 26th letter. In ASCII, each character has its own assigned
number. Denary, binary and hexadecimal representations of ASCII characters are shown in
the table below.
Character Denary Binary Hexadecimal

A 65 1000001 41

Z 90 1011010 5A

a 97 1100001 61

z 122 1111010 7A

0 48 0110000 30

9 57 0111001 39

Space 32 0100000 20

! 33 0100001 21

A is represented by the denary number 65 (binary 1000001, hexadecimal 41), B by 66
(binary 1000010, hexadecimal 42) and so on up to Z, which is represented by the denary
number 90 (binary 1011010, hexadecimal 5A).
Similarly, lower-case letters start at denary 97 (binary 1100001, hexadecimal 61) and end
at denary 122 (binary 1111010, hexadecimal 7A). When data is stored or transmitted, its
ASCII or Unicode number is used, not the character itself.

For example, the word "Computer" would be represented as:

1000011 1101111 1101101 1110000 1110101 1110100 1100101 1110010
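These 7-bit codes can be reproduced with Python's built-in ord function, which returns a character's code point (for these characters, its ASCII/denary value):

```python
word = "Computer"
codes = [format(ord(c), '07b') for c in word]   # 7-bit binary per character

assert ord('A') == 65 and format(ord('A'), 'X') == '41'   # denary 65, hex 41
assert codes[0] == '1000011'                              # 'C' = denary 67
assert ' '.join(codes) == ('1000011 1101111 1101101 1110000 '
                           '1110101 1110100 1100101 1110010')
```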

On the other hand, the International Reference Alphabet (IRA) is also widely used outside the
United States. A unique 7-bit pattern represents each character in this code; thus, 128
different characters can be represented. Some control characters control the printing of
characters on a page, and others are concerned with communications procedures.

IRA-encoded characters are always stored and transmitted using 8 bits per character. The
eighth bit may be set to 0 or used as a parity bit for error detection. In the latter case, the bit
is set such that the total number of binary 1s in each octet is always odd (odd parity) or
always even (even parity).

Logical data

Normally, each word or other addressable unit (byte, half-word, and so on) is treated as a
single unit of data. Sometimes, it is useful to consider an n-bit unit as consisting of n 1-bit
items of data, each item having the value 0 or 1. When data are viewed this way, they are
considered to be logical data.

The Boolean data type can only represent two values: true or false. Although only two values
are possible, they are rarely implemented as a single binary digit, for efficiency reasons.
Many programming languages do not have an explicit Boolean type, instead interpreting
0 as false and other values as true. Boolean data refers to the logical structure of how the
language is interpreted to the machine language. In this case, a Boolean 0 refers to the
logic false, and true is always nonzero, especially one, known as Boolean 1.

There are two advantages to the bit-oriented view:

 We may want to store an array of Boolean or binary data items, in which each item can
take on only the values 0 and 1. With logical data, memory can be used most efficiently for
this storage.
this storage.
 There are occasions when we want to manipulate the bits of a data item.
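Both advantages of the bit-oriented view show up directly in code: one word can pack many Boolean items, and individual bits can be set, tested and cleared. A minimal sketch:

```python
flags = 0b0000            # one word storing four Boolean items

flags |= 1 << 2           # set bit 2 (store "true" in item 2)
assert flags == 0b0100
assert (flags >> 2) & 1 == 1    # test bit 2

flags &= ~(1 << 2)        # clear bit 2 (store "false" again)
assert flags == 0
```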

3.4 Instruction Set Architecture (ISA):


An Instruction Set Architecture (ISA) is part of the abstract model of a computer that defines
how the CPU is controlled by the software. The ISA acts as an interface between the hardware
and the software, specifying both what the processor is capable of doing as well as how it
gets done.

The two main categories of instruction set architectures are CISC (such as Intel's x86 series)
and RISC (such as ARM and MIPS).
The ISA of a processor can be described using five categories:

Operand storage in the CPU
Number of explicitly named operands
Operand location
Operations
Type and size of operands

The 3 most common types of ISAs are:

1. Stack - The operands are implicitly on top of the stack.


2. Accumulator - One operand is implicitly the accumulator.
3. General Purpose Register (GPR) - All operands are explicitly mentioned; they are
either registers or memory locations.

Stack Accumulator GPR

PUSH A LOAD A LOAD R1,A

PUSH B ADD B ADD R1,B

ADD STORE C STORE R1,C

POP C - -
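The three columns of the table can be mimicked in Python to confirm that all three styles compute C = A + B. The memory contents here are hypothetical values chosen only for illustration:

```python
mem = {'A': 2, 'B': 3}            # hypothetical operand values

# Stack machine: PUSH A, PUSH B, ADD, POP C
stack = []
stack.append(mem['A'])
stack.append(mem['B'])
stack.append(stack.pop() + stack.pop())
mem['C_stack'] = stack.pop()

# Accumulator machine: LOAD A, ADD B, STORE C
acc = mem['A']                    # LOAD A
acc += mem['B']                   # ADD B
mem['C_acc'] = acc                # STORE C

# GPR machine: LOAD R1,A / ADD R1,B / STORE R1,C
regs = {'R1': 0}
regs['R1'] = mem['A']
regs['R1'] += mem['B']
mem['C_gpr'] = regs['R1']

assert mem['C_stack'] == mem['C_acc'] == mem['C_gpr'] == 5
```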
Reduced Instruction Set Computer (RISC):

RISC stands for Reduced Instruction Set Computer. The ISA is composed of instructions
that all have exactly the same size, usually 32 bits. Thus they can be pre-fetched and
pipelined successfully. All ALU instructions have 3 operands which are only registers. The
only memory access is through explicit LOAD/STORE instructions.
Thus C = A + B will be assembled as:

LOAD R1,A
LOAD R2,B
ADD R3,R1,R2
STORE C,R3

Although it takes 4 instructions, we can reuse the values in the registers.
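The four-instruction RISC sequence can be traced with a toy model in which the ALU touches only registers and memory is reached solely through Load/Store (the operand values are hypothetical):

```python
mem = {'A': 4, 'B': 9}                 # hypothetical memory contents
regs = {}

regs['R1'] = mem['A']                  # LOAD  R1,A
regs['R2'] = mem['B']                  # LOAD  R2,B
regs['R3'] = regs['R1'] + regs['R2']   # ADD   R3,R1,R2  (registers only)
mem['C'] = regs['R3']                  # STORE C,R3

assert mem['C'] == 13
```

Because R1 and R2 still hold A and B after the sequence, a later computation on the same operands needs no further memory accesses.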

Complex Instruction Set Architecture (CISC) :


The main idea is that a single instruction performs all loading, evaluating and
storing operations; for example, a multiplication instruction may itself load its
data, evaluate the product and store the result. Hence the name "complex".

Memory Locations and Addresses

The memory consists of many millions of storage cells, each of which can store a bit of information
having the value 0 or 1. Because a single bit represents a very small amount of information, bits are
seldom handled individually.
The usual approach is to deal with them in groups of fixed size. For this purpose, the memory is
organized so that a group of n bits can be stored or retrieved in a single, basic operation. Each group of
n bits is referred to as a word of information, and n is called the word length. The memory of a
computer can be schematically represented as a collection of words.
Modern computers have word lengths that typically range from 16 to 64 bits. If the word length of a
computer is 32 bits, a single word can store a 32-bit signed number or four ASCII-encoded
characters, each occupying 8 bits, as shown in Figure

A unit of 8 bits is called a byte. Machine instructions may require one or more words for their
representation.

After we have described instructions at the assembly-language level, accessing the memory to
store or retrieve a single item of information, either a word or a byte, requires distinct names or
addresses for each location. It is customary to use numbers from 0 to 2^k − 1, for some suitable
value of k, as the addresses of successive locations in the memory. Thus, the memory can have up
to 2^k addressable locations. The 2^k addresses constitute the address space of the computer. For
example, a 24-bit address generates an address space of 2^24 (16,777,216) locations. This number
is usually written as 16M (16 mega), where 1M is the number 2^20 (1,048,576). A 32-bit address
creates an address space of 2^32 or 4G (4 giga) locations, where 1G is 2^30. Other notational
conventions that are commonly used are K (kilo) for the number 2^10 (1,024), and T (tera) for the
number 2^40.
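The address-space arithmetic above is easy to verify (k address bits give 2^k locations):

```python
assert 2**24 == 16_777_216        # 24-bit address space: 16M locations
assert 2**20 == 1_048_576         # 1M (mega)
assert 2**32 == 4 * 2**30         # 32-bit address space: 4G locations
assert 2**10 == 1_024             # 1K (kilo)
```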

Byte Addressability :

A byte is always 8 bits, but the word length typically ranges from 16 to 64 bits. It is impractical
to assign distinct addresses to individual bit locations in the memory. The most practical
assignment is to have successive addresses refer to successive byte locations in the memory.
This is the assignment used in most modern computers. The term byte-addressable memory is
used for this assignment. Byte locations have addresses 0, 1, 2, …. Thus, if the word length
of the machine is 32 bits, successive words are located at addresses 0, 4, 8, …, with each word
consisting of four bytes.

There are two ways that byte addresses can be assigned across words: big-endian and little-endian.
The name big-endian is used when lower byte addresses are used for the more significant bytes
(the leftmost bytes) of the word.
The name little-endian is used for the opposite ordering, where the lower byte addresses are used
for the less significant bytes (the rightmost bytes) of the word. The words "more significant" and
"less significant" are used in relation to the weights (powers of 2) assigned to bits when the word
represents a number. Both little-endian and big-endian assignments are used in commercial
machines. In both cases, byte addresses 0, 4, 8, … are taken as the addresses of successive words
in the memory of a computer with a 32-bit word length. These are the addresses used when
accessing the memory to store or retrieve a word.
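Python's int.to_bytes makes the two byte orderings concrete; the 32-bit value below is a hypothetical example:

```python
value = 0x12345678                     # a 32-bit word

big = value.to_bytes(4, 'big')         # lower addresses get the more significant bytes
little = value.to_bytes(4, 'little')   # lower addresses get the less significant bytes

assert list(big) == [0x12, 0x34, 0x56, 0x78]
assert list(little) == [0x78, 0x56, 0x34, 0x12]
```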

Memory Operations

Both program instructions and data operands are stored in the memory. To execute an instruction,
the processor control circuits must cause the word (or words) containing the instruction to be
transferred from the memory to the processor. Operands and results must also be moved between
the memory and the processor.

Thus, two basic operations involving the memory are needed, namely Read and Write.

Read Operation:

The Read operation transfers a copy of the contents of a specific memory location to the
processor. The memory contents remain unchanged. To start a Read operation, the processor
sends the address of the desired location to the memory and requests that its contents be read.
The memory reads the data stored at that address and sends them to the processor.

Write Operation:

The Write operation transfers an item of information from the processor to a specific memory
location, overwriting the former contents of that location. To initiate a Write operation, the
processor sends the address of the desired location to the memory, together with the data to be
written into that location. The memory then uses the address and data to perform the write.
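The Read/Write behaviour described above can be modelled with a tiny class (an illustrative sketch, not an actual memory design):

```python
class Memory:
    """Toy word store supporting the two basic operations, Read and Write."""
    def __init__(self):
        self.cells = {}

    def read(self, address):
        # Read: deliver a copy of the contents; the location is unchanged.
        return self.cells.get(address, 0)

    def write(self, address, data):
        # Write: overwrite the former contents of the location.
        self.cells[address] = data

mem = Memory()
mem.write(0x100, 42)
assert mem.read(0x100) == 42    # contents copied to the "processor"
assert mem.read(0x100) == 42    # ...and unchanged by the Read
mem.write(0x100, 7)             # a Write overwrites the old contents
assert mem.read(0x100) == 7
```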
3.5 Instructions and Instruction Sequencing
The tasks carried out by a computer program consist of a sequence of small steps, such as adding
two numbers, testing for a particular condition, reading a character from the keyboard, or sending a
character to be displayed on a display screen.

A computer must have instructions capable of performing four types of operations:

• Data transfers between the memory and the processor registers

• Arithmetic and logic operations on data

• Program sequencing and control

• I/O transfers

We begin by discussing instructions for the first two types of operations. To facilitate the
discussion, we first need some notation.

Assembly-Language Notation:

To represent machine instructions and programs, we use assembly-language notation.

Example:

Load R2, LOC

This is a generic instruction that causes the transfer described above, from memory location LOC to
processor register R2. The contents of LOC are unchanged by the execution of this instruction, but
the old contents of register R2 are overwritten. The name Load is appropriate for this instruction,
because the contents read from a memory location are loaded into a processor register.
Register Transfer Notation

We need to describe the transfer of information from one location in a computer to another.
Possible locations that may be involved in such transfers are memory locations, processor
registers, or registers in the I/O subsystem.

Example 1:

R2 ← [LOC]

This expression means that the contents of memory location LOC are transferred into processor
register R2.

Example 2:

As another example, consider the operation that adds the contents of registers R2 and R3, and
places their sum into register R4. This action is indicated as

R4 ← [R2] + [R3]

This type of notation is known as Register Transfer Notation (RTN). Note that the right-hand
side of an RTN expression always denotes a value, and the left-hand side is the name of a
location where the value is to be placed, overwriting the old contents of that location.

Adding two numbers contained in processor registers R2 and R3 and placing their sum in R4 can
also be specified by an assembly-language statement in which registers R2 and R3 hold the source
operands, while R4 is the destination.
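Register Transfer Notation maps naturally onto assignments if registers and memory are modelled as dictionaries (the stored values below are hypothetical):

```python
regs = {'R2': 0, 'R3': 32, 'R4': 0}
mem = {'LOC': 99}

regs['R2'] = mem['LOC']                  # R2 <- [LOC]
regs['R4'] = regs['R2'] + regs['R3']     # R4 <- [R2] + [R3]

assert regs['R2'] == 99
assert regs['R4'] == 131
```

As in RTN, the right-hand side of each assignment denotes a value, and the left-hand side names the location whose old contents are overwritten.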
RISC and CISC Instruction Sets

One of the most important characteristics that distinguish different computers is the nature of their
instructions. There are two fundamentally different approaches in the design of instruction sets for
modern computers. One popular approach is based on the premise that higher performance can be
achieved if each instruction occupies exactly one word in memory, and all operands needed to
execute a given arithmetic or logic operation specified by an instruction are already in processor
registers. This approach is conducive to an implementation of the processing unit in which the
various operations needed to process a sequence of instructions are performed in "pipelined"
fashion to overlap activity and reduce total execution time of a program. The restriction that each
instruction must fit into a single word reduces the complexity and the number of different types of
instructions that may be included in the instruction set of a computer. Such computers are called
Reduced Instruction Set Computers (RISC).

An alternative to the RISC approach is to make use of more complex instructions which may span
more than one word of memory, and which may specify more complicated operations. This
approach was prevalent prior to the introduction of the RISC approach in the 1970s. Although the
use of complex instructions was not originally identified by any particular label, computers based on
this idea have been subsequently called Complex Instruction Set Computers (CISC).

Introduction to RISC Instruction Sets:

Two key characteristics of RISC instruction sets are:
• Each instruction fits in a single word.
• A load/store architecture is used, in which:
– Memory operands are accessed only using Load and Store instructions.
– All operands involved in an arithmetic or logic operation must either be in processor
registers, or one of the operands may be given explicitly within the instruction word.

At the start of execution of a program, all instructions and data used in the program are stored in
the memory of a computer. Processor registers do not contain valid operands at that time. If
operands are expected to be in processor registers before they can be used by an instruction, then
it is necessary to first bring these operands into the registers. This task is done by Load
instructions, which copy the contents of a memory location into a processor register. Load
instructions are of the form

Load destination, source

Or more specifically

Load processor_register, memory_location

Example:

The operation of adding two numbers is a fundamental capability in any computer. Consider the statement

C = A + B

The required action can be accomplished by a sequence of simple machine instructions. We
choose to use registers R2, R3, and R4 to perform the task with four instructions:

Load R2, A

Load R3, B

Add R4, R2, R3

Store R4, C
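The effect of this load/store sequence can be sketched in Python, with registers and memory modeled as dictionaries. The contents of locations A and B (5 and 7) are made-up values for illustration; only the register-transfer steps come from the text.

```python
# Model processor registers and memory locations as dictionaries.
memory = {"A": 5, "B": 7, "C": 0}   # hypothetical contents of locations A, B, C
regs = {}

# Load R2, A  -- copy the contents of memory location A into register R2
regs["R2"] = memory["A"]
# Load R3, B  -- copy the contents of memory location B into register R3
regs["R3"] = memory["B"]
# Add R4, R2, R3 -- all ALU operands are in registers (load/store architecture)
regs["R4"] = regs["R2"] + regs["R3"]
# Store R4, C -- copy the result back to memory location C
memory["C"] = regs["R4"]

print(memory["C"])  # 12
```

Note that the Add instruction never touches memory; that restriction is exactly what the load/store architecture requires.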
Instruction Execution and Straight-Line Sequencing:

We used the task C = A + B, implemented as C ← [A] + [B].

We assume that the word length is 32 bits and the memory is byte-addressable. The four
instructions of the program are in successive word locations, starting at location i. Since each
instruction is 4 bytes long, the second, third, and fourth instructions are at addresses i + 4, i + 8,
and i + 12. For simplicity, we assume that a desired memory address can be directly specified in
Load and Store instructions, although this is not possible if a full 32-bit address is involved.

Let us consider how this program is executed. The processor contains a register called the
program counter (PC), which holds the address of the next instruction to be executed. To begin
executing a program, the address of its first instruction (i in our example) must be placed into the
PC. Then, the processor control circuits use the information in the PC to fetch and execute
instructions, one at a time, in the order of increasing addresses. This is called straight-line
sequencing. During the execution of each instruction, the PC is incremented by 4 to point to the
next instruction. Thus, after the Store instruction at location i + 12 is executed, the PC contains the
value i + 16, which is the address of the first instruction of the next program segment. Executing a
given instruction is a two-phase procedure. In the first phase, called instruction fetch, the
instruction is fetched from the memory location whose address is in the PC. This instruction is
placed in the instruction register (IR) in the processor. At the start of the second phase, called
instruction execute, the instruction in IR is examined to determine which operation is to be
performed. The specified operation is then performed by the processor. This involves a small
number of steps such as fetching operands from the memory or from processor registers,
performing an arithmetic or logic operation, and storing the result in the destination location. Also,
the contents of the PC are advanced to point to the next instruction. When the execute phase of an
instruction is completed, the PC contains the address of the next instruction, and a new instruction
fetch phase can begin.
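The fetch/execute cycle with straight-line sequencing can be sketched as a small loop. The start address i = 1000 and the instruction strings are illustrative placeholders; the word size of 4 bytes and the PC behavior come from the text.

```python
# Sketch of straight-line sequencing: the PC holds the address of the next
# instruction; fetch places the instruction in IR, then the PC advances by 4.
WORD = 4
i = 1000                       # assumed start address of the program
instructions = ["Load R2, A", "Load R3, B", "Add R4, R2, R3", "Store R4, C"]
program = {i + n * WORD: instr for n, instr in enumerate(instructions)}

pc = i
trace = []
while pc in program:
    ir = program[pc]           # instruction fetch: IR <- memory[PC]
    pc += WORD                 # PC incremented to point at the next instruction
    trace.append(ir)           # instruction execute phase (modeled as a trace)

print(pc)  # 1016, i.e. i + 16
```

After the Store instruction at i + 12 executes, the PC holds i + 16, matching the description above.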

Branching:

Consider the task of adding a list of n numbers.

The addresses of the memory locations containing the n numbers are symbolically given as NUM1,
NUM2, ..., NUMn, and separate Load and Add instructions are used to add each number to the
running sum in register R3.
After all the numbers have been added, the result is placed in memory location SUM.

Instead of using a long list of Load and Add instructions,

it is possible to implement a program loop in which the instructions read the next number in the list
and add it to the current sum. To add all the numbers, the loop has to be executed as many times as
there are numbers in the list. The body of the loop is a straight-line sequence of instructions
executed repeatedly. It starts at location LOOP and ends at the instruction Branch_if_[R2]>0. During
each pass through this loop, the address of the next list entry is determined, and that entry is
loaded into R5 and added to R3. The address of an operand can be specified in various ways, as will
be described in Section 2.4. For now, we concentrate on how to create and control a program loop.
Assume that the number of entries in the list, n, is stored in memory location N, as shown. Register
R2 is used as a counter to determine the number of times the loop is executed. Hence, the contents
of location N are loaded into register R2 at the beginning of the program. Then, within the body of
the loop, the instruction
Branch_if_[R2]>0 LOOP

is a conditional branch instruction that causes a branch to location LOOP if the contents of
register R2 are greater than zero. This means that the loop is repeated as long as there are entries
in the list that are yet to be added to R3. At the end of the nth pass through the loop, the Subtract
instruction produces a value of zero in R2, and, hence, branching does not occur. Instead, the Store
instruction is fetched and executed. It moves the final result from R3 into memory location SUM.
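The loop can be mirrored in Python. R2 (the counter), R3 (the running sum), and R5 (the current entry) model the registers described above; the list contents are made up for illustration.

```python
# Sketch of the summation loop: R2 counts down from n, R3 accumulates the sum,
# R5 holds each list entry; the branch repeats the body while R2 > 0.
NUM = [10, 20, 30, 40]      # hypothetical list of n numbers in memory
R2 = len(NUM)               # R2 <- n (loaded from location N)
R3 = 0                      # running sum
idx = 0
while True:                 # LOOP:
    R5 = NUM[idx]           #   Load R5, next list entry
    R3 = R3 + R5            #   Add R3, R3, R5
    idx += 1
    R2 = R2 - 1             #   Subtract R2, R2, #1
    if not R2 > 0:          # Branch_if_[R2]>0 LOOP -- falls through when R2 = 0
        break
SUM = R3                    # Store R3, SUM
print(SUM)  # 100
```

When R2 reaches zero the branch is not taken, and the final Store moves R3 to SUM, exactly as described in the text.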

Encoding of Machine Instructions:


we have introduced a variety of useful instructions and addressing modes. We have used a
generic form of assembly language to emphasize basic concepts without relying on processor-
specific acronyms or mnemonics. Assembly-language instructions symbolically express the
actions that must be performed by the processor circuitry.
To be executed in a processor, assembly-language instructions must be converted by the
assembler program into machine instructions that are encoded in a compact binary pattern.

Let us now examine how machine instructions may be formed. Consider the Add instruction

Add Rdst, Rsrc1, Rsrc2

The above instruction is representative of a class of three-operand instructions that use operands in
processor registers. Registers Rdst, Rsrc1, and Rsrc2 hold the destination and two source
operands. If a processor has 32 registers, then it is necessary to use five bits to specify each of the
three registers in such instructions. If each instruction is implemented in a 32-bit word, the
remaining 17 bits can be used to specify the OP code that indicates the operation to be performed.

Of the 32 bits available, ten bits are needed to specify the two registers. The remaining 22 bits must
give the OP code and the value of the immediate operand. The most useful sizes of immediate
operands are 32, 16, and 8 bits. Since 32 bits are not available, a good choice is to allocate 16 bits
for the immediate operand. This leaves six bits for specifying the OP code.

This format can also be used for Load and Store instructions, where the Index addressing mode
uses the 16-bit field to specify the offset that is added to the contents of the index register.

The format in Figure b can also be used to encode the Branch instructions. Consider the Branch-
greater-than instruction at memory address 128:

BGT R2, R0, LOOP

This instruction causes a branch to location LOOP if the contents of register R2 are greater than the
contents of register R0, which contains zero. The registers R2 and R0 can be specified in the two register
fields in Figure b. The six-bit OP code has to identify the BGT operation. The 16-bit immediate field
can be used to provide the information needed to determine the branch target address, which is the
location of the instruction with the label LOOP. The target address generally comprises 32 bits.
Since there is no space for 32 bits, the BGT instruction makes use of the immediate field to give an
offset from the location of this instruction in the
program to the required branch target. At the time the BGT instruction is being executed, the
program counter, PC, has been incremented to point to the next instruction, which is the Store
instruction at address
132. Therefore, the distance from the updated PC back to the branch target is 132 − 112 = 20. Since
the processor computes the target address by adding the current contents of the PC and the branch
offset, the required offset in this example is negative, namely −20. Finally, we should consider the
Call instruction, which is used to call a subroutine. It only needs to specify the OP code and an
immediate value that is used to determine the address of the first instruction in the subroutine. If six
bits are used for the OP code, then the remaining 26 bits can be used to denote the immediate
value. This gives the format shown in c.
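The field sizes above (a 6-bit OP code and three 5-bit register fields) can be sketched as a bit-packing function. The exact field positions are an assumption made for illustration; only the field widths, and the −20 branch offset, come from the text.

```python
# Sketch of packing a three-register instruction into a 32-bit word:
# 6-bit OP code in the top bits, then three 5-bit register fields.
# The field ordering here is an illustrative assumption.
def encode_rrr(opcode, rdst, rsrc1, rsrc2):
    assert opcode < 2**6 and max(rdst, rsrc1, rsrc2) < 2**5
    return (opcode << 26) | (rdst << 21) | (rsrc1 << 16) | (rsrc2 << 11)

word = encode_rrr(opcode=0b000001, rdst=4, rsrc1=2, rsrc2=3)  # Add R4, R2, R3
print(hex(word))

# Branch offset for the BGT example: the PC already points at the next
# instruction (address 132) while the target LOOP sits at address 112,
# so the value stored in the 16-bit immediate field is negative.
pc_after_increment = 132
loop_address = 112
offset = loop_address - pc_after_increment
print(offset)  # -20
```

A 16-bit immediate can hold offsets in the range −32768 to +32767, which is why nearby branch targets fit even though a full 32-bit address does not.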

Addressing Modes:

The operation field of an instruction specifies the operation to be performed, and this
operation must be performed on some data. So each instruction needs to specify the data on
which the operation is to be performed. But the operand (data) may be in the accumulator, a
general-purpose register, or at some specified memory location. So, the appropriate
location (address) of the data needs to be specified, and in a computer there are various ways
of specifying the address of data. These various ways of specifying the address of data
are known as “Addressing Modes”. So Addressing Modes can be defined as “the
technique for specifying the address of the operands”, and the address where an operand
is actually found is known as the “Effective Address”.
In addition to this, the two most prominent reasons why addressing modes are so
important are:
First, the way the operand data are chosen during program execution is dependent on
the addressing mode of the instruction.

Second, the address field (or fields) in a typical instruction format are relatively small, and
sometimes we would like to be able to reference a large range of locations. To fit this large
range of locations into a small address field, a variety of addressing techniques has been
employed, as they reduce the number of bits needed in the addressing field of the
instruction.

Thus, Addressing Modes are very vital in Instruction Set Architecture (ISA). Some notations are:

A = Contents of an address field in the instruction

R = Contents of an address field in the instruction that refers to a register

EA = Effective Address (actual address) of the location containing the referenced operand

(X) = Contents of memory location X or register X


Types Of Addressing Modes
Various types of addressing modes are:

1. Implied and Immediate Addressing Modes


2. Direct or Indirect Addressing Modes
3. Register Addressing Modes
4. Register Indirect Addressing Mode
5. Auto-Increment and Auto-Decrement Addressing Modes
6. Displacement Based Addressing Modes

1. Implied and Immediate Addressing Modes:


Implied Addressing Mode:

Implied Addressing Mode, also known as "Implicit" or "Inherent" addressing mode, is the
addressing mode in which no operand (register or memory location or data) is specified
in the instruction; the operands are specified implicitly in the definition of the
instruction.

“Complement Accumulator” is an Implied Mode instruction because the operand in the
accumulator register is implied in the definition of the instruction. In assembly language it
is written as:

CMA: Take complement of content of AC

Similarly, the instruction RLC: Rotate the content of Accumulator is an implied
mode instruction.

All Register-Reference instructions that use an accumulator, and Zero-Address instructions
in a Stack-Organised Computer, are implied mode instructions, because in Register-Reference
instructions the operand is implied in the accumulator, and in Zero-Address instructions the
operand is implied on the Top of the Stack.
Immediate Addressing Mode:
In Immediate Addressing Mode, the operand is specified in the instruction itself. In other
words, an immediate-mode instruction has an operand field rather than an address
field, and the operand field contains the actual operand to be used by the
instruction. That is, in this mode, the format of the instruction is:

As an example, consider the instructions:

MVI 06    Move 06 to the accumulator

ADD 05    Add 05 to the content of the accumulator

 One of the operands is mentioned directly.
 Data is available as a part of the instruction.
 Data is 8 or 16 bits long.
 No memory reference is needed to fetch the data.

Immediate Mode

Example 1:

MOV CL, 03H

03 – 8-bit immediate source operand
CL – 8-bit register destination operand

Example 2:

ADD AX, 0525H

0525 – 16-bit immediate source operand
AX – 16-bit register destination operand
2. Direct and Indirect Addressing
Modes
The instruction format for direct and indirect addressing mode is shown below:

It consists of a 3-bit opcode, a 12-bit address, and a mode bit designated as (I). The mode
bit (I) is zero for Direct Address and 1 for Indirect Address. Now we will discuss
each in detail one by one.

Direct Addressing Mode


Direct Addressing Mode is also known as “Absolute Addressing Mode”. In this mode
the address of the data (operand) is specified in the instruction itself. That is, in this type of
mode, the operand resides in memory and its address is given directly by the address
field of the instruction. In other words, in this mode, the address field contains
the Effective Address of the operand, i.e., EA = A

As an example, consider the instruction:

ADD A    Add the contents of cell A to the accumulator.

It would look as shown below:


Indirect Addressing Mode:
In this mode, the address field of the instruction gives the memory address where the
effective address of the operand is stored. That is, the address field of the instruction
gives the address where the “Effective Address” is stored in memory, i.e., EA = (A)

Means, here, Control fetches the instruction from memory and then uses its address
part to access memory again to read Effective Address.
As an example, consider the instruction:

ADD (A)    Add the content of the cell pointed to by the contents of A to the accumulator.

It looks as shown in the figure below:

Thus, AC ← M[M[A]]    [M = Memory]

i.e., (A) = 1350 = EA
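The direct and indirect computations can be sketched side by side. The effective address 1350 comes from the example above; the address field value 500 and the operand 7 are made up for illustration.

```python
# Direct vs. indirect addressing: location A holds 1350, which is the operand
# under direct addressing but the effective address under indirect addressing.
memory = {500: 1350, 1350: 7}    # hypothetical memory contents
A = 500                          # address field of the instruction

# Direct: EA = A, the operand is memory[A]
ea_direct = A
operand_direct = memory[ea_direct]       # 1350

# Indirect: EA = (A), i.e. memory[A] holds the effective address
ea_indirect = memory[A]                  # 1350
operand_indirect = memory[ea_indirect]   # AC <- M[M[A]], i.e. 7

print(ea_indirect, operand_indirect)  # 1350 7
```

The indirect case costs one extra memory access: the control reads location A first, then reads the effective address it finds there.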

3.Register Addressing Mode:


In Register Addressing Mode, the operands are in registers that reside within the CPU.
That is, in this mode, the instruction specifies a register in the CPU which contains the
operand. It is like Direct Addressing Mode; the only difference is that the address field
refers to a register instead of a memory location,
i.e., EA = R

It look like as:

Example of such instructions are:


MOV AX, BX Move contents of Register BX to AX

ADD AX, BX Add the contents of register BX to AX

Here, AX, BX are used as register names which is of 16-bit register.

Thus, for Register Addressing Mode, there is no need to compute the actual address, as the
operand is already held in a register.

4. Register Indirect Addressing Mode:


In Register Indirect Addressing Mode, the instruction specifies a register in the CPU whose
contents give the address of the operand in memory. In other words, the selected register
contains the address of the operand rather than the operand itself. That is,

i.e., EA = (R)

This means control fetches the instruction from memory and then uses its address part to
select the register, and looks in register (R) for the effective address of the operand in memory.

It look like as:


Here, the parentheses are to be interpreted as meaning “contents of”.

Example of such instructions are:

MOV AL, [BX]


Code example in Register:

MOV BX, 1000H

MOV 1000H,

From the above example, it is clear that the instruction MOV AL, [BX] specifies a
register [BX], and from the code we see that register BX contains the
address of the operand (1000H) rather than the operand itself.
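Register and register-indirect addressing can be sketched together. The address 1000H comes from the example above; the operand value 42H is made up for illustration.

```python
# Register vs. register-indirect addressing: in the first, the register holds
# the operand itself; in the second, it holds the operand's memory address.
memory = {0x1000: 0x42}              # hypothetical: location 1000H holds the operand
regs = {"BX": 0x1000, "AX": 0, "AL": 0}

# Register mode (MOV AX, BX): EA = R, the operand comes straight from BX
regs["AX"] = regs["BX"]

# Register-indirect mode (MOV AL, [BX]): EA = (BX) = 1000H, one memory access
ea = regs["BX"]
regs["AL"] = memory[ea]

print(hex(regs["AX"]), hex(regs["AL"]))  # 0x1000 0x42
```

The contrast is the same as direct vs. indirect addressing, only with a register in place of the instruction's address field.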

5. Auto-increment and Auto-decrement Addressing Modes

These are similar to Register Indirect Addressing Mode except that the register is
incremented or decremented after (or before) its value is used to access memory.
These modes are required because when the address stored in the register refers to a table
of data in memory, it is necessary to increment or decrement the register after
every access to the table so that the next value is accessed.

Thus, these addressing modes are common requirements in computers.

Auto-increment Addressing Mode:

Auto-increment Addressing Mode is similar to Register Indirect Addressing Mode
except that the register is incremented after its value is used to access the operand,
which is loaded at another location like the accumulator (AC).

That is, in this case also, the Effective Address is equal to

EA = (R)

But, after accessing the operand, the register is incremented by 1.


As an example:

It look like as shown below:

Here, we see that the effective address is (R) = 400 and the operand loaded into AC is 7.
After loading, R1 is incremented by 1; it becomes 401.

That is, in the Auto-increment mode, the R1 register is incremented to 401 after
execution of the instruction.

Auto-decrement Addressing Mode:

Auto-decrement Addressing Mode is the reverse of auto-increment: the register is
decremented before the execution of the instruction. That is, in this case, the effective
address is equal to

EA = (R) - 1

As an example:
It looks as shown below:
Here, we see that, in the Auto-decrement mode, the register R1 is decremented to 399
prior to execution of the instruction, so the operand loaded into the accumulator comes
from address 399 in memory instead of 400. Thus, in this case, the effective address is
399.
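Both modes can be sketched with the register value 400 used in the examples above; the memory contents are made up for illustration.

```python
# Auto-increment and auto-decrement sketches around the value R1 = 400.
memory = {399: 3, 400: 7}      # hypothetical memory contents

# Auto-increment: EA = (R1); the operand is fetched, then R1 is incremented.
R1 = 400
AC = memory[R1]                # AC <- 7 (effective address 400)
R1 = R1 + 1                    # R1 becomes 401 after the access
ac_after_inc, r1_after_inc = AC, R1

# Auto-decrement: R1 is decremented first, then EA = (R1).
R1 = 400
R1 = R1 - 1                    # R1 becomes 399 prior to the access
AC = memory[R1]                # operand comes from address 399

print(ac_after_inc, r1_after_inc, AC, R1)  # 7 401 3 399
```

Stepping the register on every access is what makes these modes convenient for walking through a table of operands without extra instructions.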

6. Displacement Based Addressing Modes

Displacement Based Addressing Mode is a powerful addressing mode as it is a
combination of direct addressing and register indirect addressing.

This means Displacement Addressing Modes require that the instruction have two address
fields, at least one of which is explicit: one address field indicates a direct
address and the other indicates an indirect address.

That is, the value contained in one address field is A, which is used directly, and the value
in the other address field refers to a register R whose contents are added to A to
produce the effective address.
There are three areas where Displacement Addressing Modes are used. In
other words, Displacement Based Addressing Modes are of three types. These
are:

1. Relative Addressing Mode


2. Base Register Addressing Mode
3. Indexing Addressing Mode

Now we will explore each one by one.

1. Relative Addressing Mode

In Relative Addressing Mode, the contents of the program counter are added to the
address part of the instruction to obtain the Effective Address.

That is, in Relative Addressing Mode, the address field of the instruction is added to the
implicitly referenced Program Counter register to obtain the effective address,

i.e., EA = A + PC

It becomes clear with an example:


Assume that the PC contains the number 825 and the address part of the instruction
contains the number 24. The instruction at location 825 is read from memory during the
fetch phase, and the Program Counter is then incremented by one to 826.

The effective address computation for relative addressing mode is 826 + 24 = 850.

Thus, the Effective Address is a displacement relative to the address of the instruction.
Relative Addressing is often used with branch-type instructions.

2. Index Register Addressing Mode

In this mode, the register indirect address field of the instruction points to the Index
Register, which is a special CPU register that contains an index value, and the direct
addressing field contains the base address.

In indexed addressing mode, the content of the Index Register is added to the direct
address part of the instruction to obtain the effective address.

Indexed-type instructions make sense when a data array is in memory and each
operand in the array is stored in memory relative to the base address. The distance
between the beginning address and the address of an operand is the index value
stored in the Index Register.
Any operand in the array can be accessed with the same instruction, provided
that the Index Register contains the correct index value; the Index Register can
be incremented to facilitate access to consecutive operands.

Thus, in index addressing mode

EA = A + Index

3. Base Register Addressing Mode

In this mode, the register indirect address field points to the Base Register and, to obtain
the EA, the contents of the Base Register are added to the direct address part of the
instruction.

This is similar to indexed addressing mode except that the register is now called the
Base Register instead of the Index Register.

That is, the content of the Base Register is added to the direct address part of the
instruction:

EA = A + Base

Thus, the difference between Base and Index modes is in the way they are used
rather than the way they are computed. An Index Register is assumed to hold an
index number that is relative to the address part of the instruction, while a Base
Register is assumed to hold a base address, and the direct address field of the
instruction gives a displacement relative to this base address.

Thus, Base Register addressing mode is used in computers to facilitate the
relocation of programs in memory. When programs and data are moved from one
segment of memory to another, the base address changes but the displacement
values of the instructions do not.
So, only the value of the Base Register requires updating to reflect the beginning
of the new memory segment.
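All three displacement modes share one computation, EA = A + (register); they differ only in which register supplies the second term. The relative-mode numbers below come from the example above, while the indexed and base-register values are hypothetical.

```python
# One helper covers relative, indexed, and base-register addressing.
def effective_address(A, reg_value):
    return A + reg_value

# Relative: the implicit register is the (already incremented) PC.
# From the example above: PC = 826, address field = 24 -> EA = 850.
ea_relative = effective_address(24, 826)

# Indexed: base address in the instruction, index in the Index Register.
ea_indexed = effective_address(2000, 3)    # hypothetical base 2000, index 3

# Base-register: displacement in the instruction, base in the Base Register.
ea_base = effective_address(12, 5000)      # hypothetical displacement 12, base 5000

print(ea_relative, ea_indexed, ea_base)  # 850 2003 5012
```

Relocating a program under base-register addressing only changes the Base Register value passed in; the displacements baked into the instructions stay the same.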
UNIT IV PROCESSOR

Instruction Execution
The execution of an instruction in a processor can be split up into a number of
stages. How many stages there are, and the purpose of each stage is different for
each processor design. Examples include 2 stages (Instruction Fetch / Instruction
Execute) and 3 stages (Instruction Fetch, Instruction Decode, Instruction Execute).

The MIPS processor has 5 stages:


IF – The Instruction Fetch stage fetches the next instruction from memory using
the address in the PC (Program Counter) register and stores this instruction in
the IR (Instruction Register).

ID – The Instruction Decode stage decodes the instruction in the IR, calculates the
next PC, and reads any operands required from the register file.

EX – The Execute stage "executes" the instruction. In fact, all ALU operations are
done in this stage. (The ALU is the Arithmetic and Logic Unit and performs
operations such as addition, subtraction, shifts left and right, etc.)

MA – The Memory Access stage performs any memory access required by the
current instruction. So, for loads, it would load an operand from memory. For
stores, it would store an operand into memory. For all other instructions, it would
do nothing.

WB – For instructions that have a result (a destination register), the Write Back
stage writes this result back to the register file. Note that this includes nearly all
instructions, except nops (a nop, no-op or no-operation instruction simply does
nothing) and stores.
BASIC MIPS IMPLEMENTATION
MIPS implementation includes a subset of the core MIPS instruction set:
■ The memory-reference instructions load word (lw) and store word (sw)
■ The arithmetic-logical instructions add, sub, AND, OR, and slt
■ The instructions branch equal (beq) and jump (j), which we add last
This subset does not include all the integer instructions (for example,
shift, multiply, and divide are missing), nor does it include any floating-point
instructions.

An Overview of the Implementation


The core MIPS instructions includes
 The integer arithmetic-logical instructions,
 The memory-reference instructions, and
 The branch instructions.
For every instruction, the first two steps are identical:
1. Send the program counter (PC) to the memory that contains the code and fetch
the instruction from that memory.
2. Read one or two registers, using fields of the instruction to select the registers to
read. For the load word instruction, we need to read only one register, but most other
instructions require reading two registers.

 For example, all instruction classes, except jump, use the arithmetic-logical
unit (ALU) after reading the registers.
 The memory-reference instructions use the ALU for an address calculation,
the arithmetic-logical instructions for the operation execution, and branches
for comparison.
 A memory-reference instruction will need to access the memory either to read
data for a load or write data for a store.
 An arithmetic-logical or load instruction must write the data from the ALU or
memory back into a register.
 Lastly, for a branch instruction, we may need to change the next instruction
address based on the comparison; otherwise, the PC should be incremented
by 4 to get the address of the next instruction.
Figure 3.1 shows the high-level view of a MIPS implementation, focusing on the
various functional units and their interconnection.
 A logic element needs to be added that chooses from among the multiple
sources and steers one of those sources to its destination.
 This selection is commonly done with a device called a multiplexor, although
this device might better be called a data selector.
 The multiplexor selects from among several inputs based on the setting of its
control lines. The control lines are set based primarily on information taken
from the instruction being executed.
Fig 3.1: An abstract view of the implementation of the MIPS subset showing the
major functional units and their connections.

Figure 3.2 shows the datapath of Figure 3.1 with the three required multiplexors
added, as well as control lines for the major functional units. A control unit, which
has the instruction as an input, is used to determine how to set the control lines for
the functional units and two of the multiplexors.
Fig 3.2: The basic implementation of the MIPS subset, including the
necessary multiplexors and control lines.
 The top multiplexor controls the value of the PC (PC + 4 or the branch
destination address).
 The middle multiplexor is used to steer the output of the ALU for writing into
the register file.
 The bottommost multiplexor is used to determine whether the second ALU
input is from the registers or from the offset field of the instruction.

BUILDING DATAPATH
 Single-cycle Datapath:
Each instruction executes in a single cycle
 Multi-cycle Datapath:
Each instruction is broken up into a series of shorter steps
 Pipelined Datapath:
Each instruction is broken up into a series of steps; multiple
instructions execute at once
Differences between single cycle and multi cycle datapath
 Single cycle Data Path:
o Each instruction is processed in one (long) clock cycle
o Two separate memory units for instructions and data.

 Multi-cycle Data Path:


o Divide the processing of each instruction into 5 stages and allocate one clock
cycle
per stage
o Single memory unit for both instructions and data
o Single ALU for all arithmetic operations
o Extra registers needed to hold values between each steps
• Instruction Register (IR) holds the instruction
• Memory Data Register (MDR) holds the data coming from memory
• A, B hold operand data coming from the registers
• ALUOut holds output coming out of the ALU

Creating a single cycle datapath


 This simplest datapath will attempt to execute all instructions in one clock
cycle. This means that no datapath resource can be used more than once per
instruction, so any element needed more than once must be duplicated. We
therefore need a memory for instructions separate from one for data.
Although some of the functional units will need to be duplicated, many
elements can be shared by different instruction flows.
 To share a datapath element between two different instruction classes, we
may need to allow multiple connections to the input of an element, using a
multiplexor and control signal to select among the multiple inputs.
 A reasonable way to start a datapath design is to examine the major
components required to execute each class of MIPS instructions. Let’s start
at the top by looking at which datapath elements each instruction needs, and
then work our way down through the levels of abstraction. When we show the
datapath elements, we will also show their control signals. We use abstraction
in this explanation, starting from the bottom up.

Datapath Element
A unit used to operate on or hold data within a processor. In the MIPS
implementation, the datapath elements include the instruction and data memories,
the register file, the ALU, and adders.
Program Counter (PC)
 Figure 3.3a shows the first element we need: a memory unit to store the
instructions of a program and supply instructions given an address. Figure 3.3
b also shows the program counter (PC), the register containing the address of
the instruction in the program being executed.
 Lastly, we will need an adder to increment the PC to the address of the next
instruction. This adder, which is combinational, can be built from the ALU
simply by wiring the control lines so that the control always specifies an add
operation.
 We will draw such an ALU with the label Add, as in Figure 3.3c, to indicate that
it has been permanently made an adder and cannot perform the other ALU
functions.
 To execute any instruction, we must start by fetching the instruction from
memory. To prepare for executing the next instruction, we must also
increment the program counter so that it points at the next instruction, 4 bytes
later.

Fig 3.3: Two state elements are needed to store and access instructions, and an adder is
needed to compute the next instruction address.
Figure 3.4 shows how to combine the three elements to form a datapath that
fetches instructions and increments the PC to obtain the address of the next
sequential instruction.
Fig 3.4: A portion of the datapath used for fetching instructions and incrementing the
program counter. The fetched instruction is used by other parts of the datapath
R-FORMAT INSTRUCTIONS
 To perform any operation we required two registers, perform an ALU operation
on the contents of the registers, and write the result to a register. We call these
instructions either R-type instructions or arithmetic-logical instructions (since
they perform arithmetic or logical operations). This instruction class includes
add, sub, AND, OR, and slt,
 The processor’s 32 general-purpose registers are stored in a structure called a
register file. A register file is a collection of registers in which any register can
be read or written by specifying the number of the register in the file.
 R-format instructions have three register operands, so we will need to read two
data words from the register file and write one data word into the register file
for each instruction. For each data word to be read from the registers, we need
an input to the register file that specifies the register number to be read and an
output from the register file that will carry the value that has been read from
the registers.
 To write a data word, we will need two inputs: one to specify the register
number to be written and one to supply the data to be written into the register.
The register file always outputs the contents of whatever register numbers
are on the Read register inputs. Writes, however, are controlled by the write
control signal, which must be asserted for a write to occur at the clock edge.
 Figure 3.5a shows the result; we need a total of four inputs (three for register
numbers and one for data) and two outputs (both for data). The register
number inputs are 5 bits wide to specify one of 32 registers (32 = 2^5),
whereas the data input and two data output buses are each 32 bits wide.
 Figure 3.5b shows the ALU, which takes two 32-bit inputs and produces a 32-
bit result, as well as a 1-bit signal if the result is 0.

Fig 3.5: The two elements needed to implement R-format ALU operations are
the register file and the ALU.

DATAPATH SEGMENT FOR Load Word and Store Word INSTRUCTION


 Now, consider the MIPS load word and store word instructions, which
have the general form lw $t1,offset_value($t2) or sw $t1,offset_value($t2).
 In these instructions $t1 is a data register and $t2 is a base register. The
memory address is computed by adding the contents of the base register $t2
to the 16-bit signed offset value specified in the instruction.


 If the instruction is a store, the value to be stored must also be read from the
register file where it resides in $t1. If the instruction is a load, the value read
from memory must be written into the register file in the specified register,
which is $t1. Thus, we will need both the register file and the ALU from Figure
3.5.
 In addition, we will need a unit to sign-extend the 16-bit offset field in the
instruction to a 32-bit signed value, and a data memory unit to read from or
write to. The data memory must be written on store instructions; Figure 3.6
shows these two elements.
Fig 3.6: The two units needed to implement loads and stores, in addition to the
register file and ALU of Figure 3.5

DATAPATH SEGMENT FOR Branch INSTRUCTION


For computing the branch target address, we must also determine whether
the next instruction is the instruction that follows sequentially or the instruction at
the branch target address.
When the condition is true (i.e., the operands are equal), the branch target address
becomes the new PC, and we say that the branch is taken. If the operands are not
equal, the incremented PC should replace the current PC (just as for any other normal
instruction); in this case, we say that the branch is not taken.
Figure 3.7 shows the structure of the datapath segment that handles branches. To
compute the branch target address, the branch datapath includes a sign extension
unit, and an adder.

FIG 3.7: Computation of branch target address


Since the ALU provides an output signal that indicates whether the result was 0, we
can send the two register operands to the ALU with the control set to do a subtract. If
the Zero signal out of the ALU unit is asserted, we know that the two values are equal.
Although the Zero output always signals if the result is 0, we will be using it only to
implement the equal test of branches. Later, we will show exactly how to connect the
control signals of the ALU for use in the datapath.
The jump instruction operates by replacing the lower 28 bits of the PC with the lower
26 bits of the instruction shifted left by 2 bits.
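The jump-target computation described above can be sketched as follows (a minimal illustration; strictly, the upper 4 bits are taken from PC + 4):

```python
def jump_target(pc, instruction):
    """New PC for MIPS j: upper 4 bits of PC + 4, then the 26-bit
    target field of the instruction shifted left by 2 bits,
    filling the lower 28 bits."""
    target26 = instruction & 0x03FFFFFF       # low 26 bits of the instruction
    upper4   = (pc + 4) & 0xF0000000          # upper 4 bits of the next PC
    return upper4 | (target26 << 2)

# A target field of 0x100 yields address 0x400 in the current 256 MB region
print(hex(jump_target(0x00400000, 0x100)))    # -> 0x400
```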
CONTROL IMPLEMENTATION SCHEME
This simple implementation covers load word (lw), store word (sw), branch equal (beq),
the arithmetic-logical instructions add, sub, AND, OR, and set on less than.
The ALU Control

Table 3.1: ALU control signals

Depending on the instruction class, the ALU will need to perform one of these operations. We can generate the 4-bit ALU control input using a small control unit that has as inputs the function field of the instruction and a 2-bit control field, which we call ALUOp.
ALUOp indicates whether the operation to be performed should be add (00) for
loads and stores, subtract (01) for beq, or determined by the operation encoded in
the funct field (10). The output of the ALU control unit is a 4-bit signal that directly controls the ALU by generating one of the 4-bit combinations shown previously. In Table 3.2, we show how to set the ALU control inputs based on the 2-bit ALUOp control and the 6-bit function code. Later in this chapter we will see how the ALUOp bits are generated from the main control unit.
Table 3.2: The ALUOp control bits and the different function codes for the R-
type instruction.

Using multiple levels of control can reduce the size of the main control unit. Using
several smaller control units may also potentially increase the speed of the control
unit. There are several different ways to implement the mapping from the 2-bit
ALUOp field and the 6-bit funct field to the four ALU operation control bits.
Table 3.3 shows the truth table for how the 4-bit ALU control is set depending on these two input fields. Since the full truth table is very large (2^8 = 256 entries), we show only the truth table entries for which the ALU control must have a specific value.

Table 3.3: The truth table for the 4 ALU control bits (called Operation)
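The two-level decode described above can be sketched as a small lookup. The 4-bit encodings below follow the common MIPS convention (0010 add, 0110 subtract, 0000 AND, 0001 OR, 0111 set-on-less-than); treat them as an assumption if your ALU uses different codes:

```python
# funct-field values for the R-type instructions covered here
FUNCT_TO_OP = {
    0b100000: 0b0010,  # add
    0b100010: 0b0110,  # sub
    0b100100: 0b0000,  # AND
    0b100101: 0b0001,  # OR
    0b101010: 0b0111,  # slt
}

def alu_control(alu_op, funct):
    """Map the 2-bit ALUOp and 6-bit funct field to the 4-bit ALU control."""
    if alu_op == 0b00:          # lw/sw: always add (address computation)
        return 0b0010
    if alu_op == 0b01:          # beq: always subtract (equality test)
        return 0b0110
    return FUNCT_TO_OP[funct]   # R-type: decode the funct field

print(bin(alu_control(0b10, 0b101010)))  # slt -> 0b111
```

Note that for loads, stores, and branches the funct field is ignored, exactly as the don't-care entries in Table 3.3 indicate.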
Designing the Main Control Unit
To understand how to connect the fields of an instruction to the datapath, it is
useful to review the formats of the three instruction classes: the R-type, branch, and
load-store instructions. Figure 3.8 shows these formats.
FIGURE 3.8 The three instruction classes (R-type, load and store, and branch) use two
different instruction formats.

There are several major observations about this instruction format that we will rely on:
■ The op field, called the opcode, is always contained in bits 31:26. Refer to this field as Op[5:0].
■ The two registers to be read are always specified by the rs and rt fields, at positions 25:21 and 20:16. This is true for the R-type instructions, branch equal, and store.
■ The base register for load and store instructions is always in bit positions 25:21 (rs).
■ The 16-bit offset for branch equal, load, and store is always in positions 15:0.
■ The destination register is in one of two places. For a load it is in bit positions 20:16 (rt), while for an R-type instruction it is in bit positions 15:11 (rd).
Thus, we will need to add a multiplexor to select which field of the instruction is used to indicate the register number to be written. The first design principle, simplicity favors regularity, pays off here in specifying control.
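The field positions listed above can be sketched as a small decoder (a toy helper for illustration only):

```python
def decode_fields(instr):
    """Split a 32-bit MIPS instruction word into its named fields."""
    return {
        "op":     (instr >> 26) & 0x3F,   # bits 31:26
        "rs":     (instr >> 21) & 0x1F,   # bits 25:21
        "rt":     (instr >> 16) & 0x1F,   # bits 20:16
        "rd":     (instr >> 11) & 0x1F,   # bits 15:11 (R-type destination)
        "funct":  instr & 0x3F,           # bits 5:0  (R-type)
        "offset": instr & 0xFFFF,         # bits 15:0 (I-type immediate)
    }

# add $t1,$t2,$t3 encodes as 0x014B4820: op=0, rs=10, rt=11, rd=9, funct=0x20
f = decode_fields(0x014B4820)
print(f["rs"], f["rt"], f["rd"], hex(f["funct"]))  # -> 10 11 9 0x20
```

Because the rs and rt positions are shared by all formats, the decoder (like the hardware) can extract them before the instruction type is known.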
FIGURE 3.9 The datapath with all necessary multiplexors and all control
lines

 Figure 3.9 shows seven single-bit control lines plus the 2-bit ALUOp control
signal. We have already defined how the ALUOp control signal works, and it is
useful to define what the seven other control signals do informally before we
determine how to set these control signals during instruction execution.
 Table 3.4 describes the function of these seven control lines. That control
line should be set if the instruction is branch on equal (a decision that the control unit can make) and the Zero output of the ALU, which is used for equality comparison, is asserted. To generate the PCSrc signal, we will need to AND together a signal from the control unit, which we call Branch, with the Zero signal out of the ALU.
Table 3.4: The effect of each of the seven control signals.
 These nine control signals (seven from Figure 3.9 and two for ALUOp) can now be set on the basis of six input signals to the control unit, which are the opcode bits 31 to 26. Figure 3.10 shows the datapath with the control unit and the control signals.
Finalizing Control
Table 3.6 shows the logic in the control unit as one large truth table that combines all the outputs and that uses the opcode bits as inputs. It completely specifies the control function.
Table 3.6: The control function for the simple single-cycle implementation is completely specified by this truth table.

OPERATION OF THE DATAPATH


The flow of three different instruction classes through the datapath is shown in Table 3.5. The asserted control signals and active datapath elements are highlighted in each of these. Note that a multiplexor whose control is 0 has a definite action, even if its control line is not highlighted. Multiple-bit control signals are highlighted if any constituent signal is asserted.

Table 3.5: The setting of the control lines is completely determined by the opcode fields of the instruction.
Fig 3.10 shows the datapath with the control unit and the control signals. Since the setting of the control lines depends only on the opcode, we define whether each control signal should be 0, 1, or don't care (X) for each of the opcode values.

FIGURE 3.10: The simple datapath with the control unit.


DATAPATH FOR THE OPERATION OF A R-TYPE INSTRUCTION
Figure 3.11 shows the operation of the datapath for an R-type instruction, such as
add
$t1,$t2,$t3. Although everything occurs in one clock cycle, we can think of four steps to execute the instruction; these steps are ordered by the flow of information:
1. The instruction is fetched, and the PC is incremented.
2. Two registers, $t2 and $t3, are read from the register file; also, the main control unit computes the setting of the control lines during this step.
3. The ALU operates on the data read from the register file, using the function code (bits 5:0, which is the funct field of the instruction) to generate the ALU function.
4. The result from the ALU is written into the register file using bits 15:11 of the instruction to select the destination register ($t1).

FIGURE 3.11: The datapath in operation for an R-type instruction
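The four steps above can be sketched as a toy single-cycle executor for an R-type add (the register-file model is a plain dictionary, invented for illustration):

```python
def execute_r_add(regs, pc, rs, rt, rd):
    """One clock cycle of an R-type add, following the four steps above."""
    pc += 4                          # 1. fetch: PC is incremented
    a, b = regs[rs], regs[rt]        # 2. read the two source registers
    result = (a + b) & 0xFFFFFFFF    # 3. ALU performs the add
    regs[rd] = result                # 4. write back to the destination
    return pc

regs = {9: 0, 10: 7, 11: 5}          # $t1=9, $t2=10, $t3=11
pc = execute_r_add(regs, 0x00400000, rs=10, rt=11, rd=9)
print(hex(pc), regs[9])              # -> 0x400004 12
```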

DATAPATH FOR THE OPERATION OF load word INSTRUCTION


The given Figure 3.12 shows the active functional units and asserted control lines for a load. We can think of a load instruction as operating in five steps (similar to how the R-type instruction executed in four):
1. An instruction is fetched from the instruction memory, and the PC is incremented.
2. A register ($t2) value is read from the register file.
3. The ALU computes the sum of the value read from the register file and the sign-
extended,lower 16 bits of the instruction (offset).
4. The sum from the ALU is used as the address for the data memory.
5. The data from the memory unit is written into the register file; the register destination is given by bits 20:16 of the instruction ($t1).
Finally, we can show the operation of the branch-on-equal instruction, such as beq $t1, $t2, offset, in the same fashion. It operates much like an R-format instruction, but the ALU output is used to determine whether the PC is written with PC + 4 or the branch target address.

FIGURE 3.12: The datapath in operation for a load instruction.


DATAPATH FOR THE OPERATION OF BRANCH-ON-EQUAL INSTRUCTION
The given figure 3.13 shows the four steps in execution:
1. An instruction is fetched from the instruction memory, and the PC is incremented.
2. Two registers, $t1 and $t2, are read from the register file.
3. The ALU performs a subtract on the data values read from the register file. The
value of PC + 4 is added to the sign-extended, lower 16 bits of the instruction (offset)
shifted left by two; the result is the branch target address.
4. The Zero result from the ALU is used to decide which adder result to store into the PC.

FIGURE 3.13: The datapath in operation for a branch-on-equal instruction.

PIPELINING
Pipelining is an implementation technique in which multiple instructions are overlapped in execution. The computer pipeline is divided into stages. Each stage completes a part of an instruction in parallel. The stages are connected one to the next to form a pipe: instructions enter at one end, progress through the stages, and exit at the other end.
Consider the familiar laundry analogy. The non-pipelined approach to laundry would be as follows:
1. Place one dirty load of clothes in the washer.
2. When the washer is finished, place the wet load in the dryer.
3. When the dryer is finished, place the dry load on a table and fold.
4. When folding is finished, ask your roommate to put the clothes away.
When your roommate is done, start over with the next dirty load.

FIGURE 3.14: The laundry analogy for pipelining.

 Pipelining is a technique used in advanced microprocessors where the microprocessor begins executing a second instruction before the first has been completed.
 A pipeline is a series of stages, where some work is done at each stage. The work is not finished until it has passed through all stages.
 With pipelining, the computer architecture allows the next instructions to be fetched while the processor is performing arithmetic operations, holding them in a buffer close to the processor until each instruction operation can be performed.
How Pipelines Work
 The pipeline is divided into segments, and each segment can execute its operation concurrently with the other segments. Once a segment completes an operation, it passes the result to the next segment in the pipeline and fetches the next operation from the preceding segment.

 The pipelined approach takes much less time, as Figure 3.14 shows. As soon as the washer is finished with the first load and it is placed in the dryer, you load the washer with the second dirty load. When the first load is dry, you place it on the table to start folding, move the wet load to the dryer, and put the next dirty load into the washer. Next you have your roommate put the first load away, you start folding the second load, the dryer has the third load, and you put the fourth load into the washer. At this point all steps, called stages in pipelining, are operating concurrently.

 If all the stages take about the same amount of time and there is enough work to do, then the speed-up due to pipelining is equal to the number of stages in the pipeline, in this case four: washing, drying, folding, and putting away. Therefore, pipelined laundry is potentially four times faster than non-pipelined: 20 loads would take about 5 times as long as 1 load, while 20 loads of sequential laundry take 20 times as long as 1 load.
The same principles apply to processors, where we pipeline instruction execution. MIPS instructions classically take five steps:
1. Fetch instruction from memory.
2. Read registers while decoding the instruction. The regular format of MIPS
instructions allows reading and decoding to occur simultaneously.
3. Execute the operation or calculate an address.
4. Access an operand in data memory.
5. Write the result into a register.
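The speed-up claim above can be checked with the standard timing formulas (a sketch; stage_time is whatever one stage takes, assumed equal for all stages):

```python
def sequential_time(n_tasks, n_stages, stage_time):
    """Each task runs all stages to completion before the next starts."""
    return n_tasks * n_stages * stage_time

def pipelined_time(n_tasks, n_stages, stage_time):
    """The first task fills the pipe; each later task finishes one stage later."""
    return (n_stages + n_tasks - 1) * stage_time

# 20 laundry loads through 4 equal stages
print(sequential_time(20, 4, 1))  # -> 80  (20x one load's time of 4)
print(pipelined_time(20, 4, 1))   # -> 23  (about 5x one load, as stated)
```

As n_tasks grows, the ratio approaches the number of stages, which is the ideal speed-up.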

DESIGNING INSTRUCTION SETS FOR PIPELINING


 First, all MIPS instructions are the same length. This restriction makes it
much easier to fetch instructions in the first pipeline stage and to decode
them in the second stage.
 Second, MIPS has only a few instruction formats, with the source register fields located in the same place in each instruction. This symmetry means that the second stage can begin reading the register file at the same time that the hardware is determining what type of instruction was fetched.
 Third, memory operands only appear in loads or stores in MIPS. This
restriction means we can use the execute stage to calculate the memory
address and then access memory in the following stage.
 Fourth, operands must be aligned in memory. Hence, we need not worry about a single data transfer instruction requiring two data memory accesses; the requested data can be transferred between processor and memory in a single pipeline stage.
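The alignment restriction in the fourth point can be sketched as a simple check (illustrative only):

```python
def is_word_aligned(addr, word_size=4):
    """A word access stays within one memory word only if the address
    is a multiple of the word size."""
    return addr % word_size == 0

print(is_word_aligned(0x1000))  # -> True  (one memory access suffices)
print(is_word_aligned(0x1002))  # -> False (would straddle two words)
```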
PIPELINE HAZARDS
There are situations in pipelining when the next instruction cannot execute in the following clock cycle. These events are called hazards, and there are three different types:
 Structural Hazards
 Data Hazards
 Control Hazards
STRUCTURAL HAZARD
 Structural Hazard occurs when a planned instruction cannot execute in the proper
clock cycle because the hardware does not support the combination of
instructions that are set to execute.
 A structural hazard in the laundry room would occur if we used a washer-dryer combination instead of a separate washer and dryer, or if our roommate was busy doing something else and wouldn't put clothes away. Our carefully scheduled pipeline plans would then be foiled.
FIGURE 3.15 Single-cycle, non-pipelined execution in top versus pipelined execution
in bottom.
As we said above, the MIPS instruction set was designed to be pipelined, making it
fairly easy for designers to avoid structural hazards when designing a pipeline.
Suppose, however, that we had a single memory instead of two memories. If the pipeline in Figure 3.15 had a fourth instruction, we would see that in the same clock cycle the first instruction is accessing data from memory while the fourth instruction is fetching an instruction from that same memory; without two memories, our pipeline could have a structural hazard.

DATA HAZARDS
 It is also called a pipeline data hazard. It occurs when a planned instruction cannot execute in the proper clock cycle because data that is needed to execute the instruction is not yet available.
 Data hazards occur when the pipeline must be stalled because one step must
wait for another to complete.
 In a computer pipeline, data hazards arise from the dependence of one
instruction on an earlier one that is still in the pipeline. For example, suppose
we have an add instruction followed immediately by a subtract instruction that
uses the sum ($s0):
add $s0, $t0, $t1
sub $t2, $s0, $t3
 Without intervention, a data hazard could severely stall the pipeline. The add
instruction doesn’t write its result until the fifth stage, meaning that we would
have to waste three clock cycles in the pipeline.
 To resolve the data hazard for the code sequence above, as soon as the ALU creates the sum for the add operation, we can supply it as an input for the subtract. Adding extra hardware to retrieve the missing item early from the internal resources is called forwarding or bypassing.
 Figure below shows the connection to forward the value in $s0 after the
execution stage of the add instruction as input to the execution stage of
the sub instruction.

Fig 3.16: Graphical representation of forwarding
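The dependence that forwarding resolves can be sketched as a simple detector over the two instructions above (a toy representation; real hardware compares register numbers held in the pipeline registers):

```python
def needs_forwarding(producer, consumer):
    """True if the consumer reads a register the producer has not yet
    written back, i.e. the producer's destination is a consumer source."""
    return producer["dest"] in consumer["sources"]

add_i = {"op": "add", "dest": "$s0", "sources": ["$t0", "$t1"]}
sub_i = {"op": "sub", "dest": "$t2", "sources": ["$s0", "$t3"]}

print(needs_forwarding(add_i, sub_i))  # -> True: $s0 must be forwarded
```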


 Forwarding paths are valid only if the destination stage is later in time than the
source stage. For example, there cannot be a valid forwarding path from the
output of the memory access stage in the first instruction to the input of the
execution stage of the following, since that would mean going backward in time.
 Forwarding cannot prevent all pipeline stalls. Suppose the first instruction were a load of $s0 instead of an add; the desired data would be available only after the fourth stage of the first instruction in the dependence, which is too late for the input of the third stage of the sub instruction.

lw $s0, 20($t1)

sub $t2, $s0, $t3

 Even with forwarding, we would have to stall one stage for a load-use data hazard. The figure below shows an important pipeline concept, officially called a pipeline stall, but often given the nickname bubble.

Fig 3.17: A stall even with forwarding when an R-format instruction following a load tries to use the data.
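Extending the earlier sketch, a load-use hazard (one stall even with forwarding) arises exactly when the producer instruction is a load; a toy model:

```python
def stalls_needed(producer, consumer):
    """Cycles to stall between adjacent dependent instructions, assuming
    full forwarding: only a load feeding its immediate successor forces
    a one-cycle bubble, because the data arrives only after MEM."""
    depends = producer["dest"] in consumer["sources"]
    if depends and producer["op"] == "lw":
        return 1        # load-use hazard: insert one bubble
    return 0            # forwarding covers everything else

lw_i  = {"op": "lw",  "dest": "$s0", "sources": ["$t1"]}
sub_i = {"op": "sub", "dest": "$t2", "sources": ["$s0", "$t3"]}
print(stalls_needed(lw_i, sub_i))   # -> 1 (the bubble in Fig 3.17)
```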

CONTROL HAZARDS
 It is also called a branch hazard. It occurs when the proper instruction cannot execute in the proper pipeline clock cycle because the instruction that was fetched is not the one that is needed; that is, the flow of instruction addresses is not what the pipeline expected.
 A control hazard arises from the need to make a decision based on the results of one instruction while others are executing.
 Even with this extra hardware, the pipeline involving conditional branches would look like Figure 3.18. The lw instruction, executed if the branch fails, is stalled one extra 200 ps clock cycle before starting.
 The equivalent decision task in a computer is the branch instruction. Notice that we must begin fetching the instruction following the branch on the very next clock cycle. Nevertheless, the pipeline cannot possibly know what the next instruction should be, since it only just received the branch instruction from memory.
 One possible solution is to stall immediately after we fetch a branch, waiting
until the pipeline determines the outcome of the branch and knows what
instruction address to fetch from. Let’s assume that we put in enough extra
hardware so that we can test registers, calculate the branch address, and
update the PC during the second stage of the pipeline.
FIGURE 3.18: Pipeline showing stalling on every conditional branch as solution to
control hazards.

Dynamic hardware predictors, in stark contrast, make their guesses depending on the behavior of each branch and may change predictions for a branch over the life of a program. Following our analogy, in dynamic prediction a person would look at how dirty the uniform was and adjust the next guess depending on the success of recent guesses. One popular approach to dynamic prediction of branches is keeping a history for each branch as taken or untaken, and then using the recent past behavior to predict the future.
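The history-based scheme just described can be sketched as a minimal one-bit predictor (a simplification; real predictors typically use two-bit saturating counters and fixed-size per-branch tables):

```python
class OneBitPredictor:
    """Predict that each branch will do what it did last time."""
    def __init__(self):
        self.history = {}           # branch PC -> last outcome (True = taken)

    def predict(self, pc):
        return self.history.get(pc, False)   # default: predict not taken

    def update(self, pc, taken):
        self.history[pc] = taken

p = OneBitPredictor()
outcomes = [True, True, True, False, True]   # a mostly-taken loop branch
hits = 0
for taken in outcomes:
    if p.predict(0x400) == taken:
        hits += 1
    p.update(0x400, taken)
print(hits, "of", len(outcomes))   # -> 2 of 5
```

Note how the single not-taken outcome costs two mispredictions (one on the change, one changing back), which is exactly the weakness two-bit predictors address.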

PIPELINED DATAPATH
Figure 3.19 shows the single-cycle datapath with the pipeline stages identified. The division of an instruction into five stages means a five-stage pipeline, which in turn means that up to five instructions will be in execution during any single clock cycle. Thus, we must separate the datapath into five pieces, with each piece named corresponding to a stage of instruction execution:
1. IF: Instruction fetch
2. ID: Instruction decode and register file read
3. EX: Execution or address calculation
4. MEM: Data memory access
5. WB: Write back
 In Figure 3.19, these five components correspond roughly to the way the datapath
is drawn; instructions and data move generally from left to right through the five
stages as they complete execution.

Fig 3.19: The single-cycle datapath


 Returning to our laundry analogy, clothes get cleaner, drier, and more organized as they move through the line, and they never move backward. There are, however, two exceptions to this left-to-right flow of instructions:
■ The write-back stage, which places the result back into the register file in the middle of the datapath
■ The selection of the next value of the PC, choosing between the incremented PC and the branch address from the MEM stage
 Data flowing from right to left does not affect the current instruction; these reverse data movements influence only later instructions in the pipeline. Note that the first right-to-left flow of data can lead to data hazards and the second leads to control hazards.
 One way to show what happens in pipelined execution is to pretend that each
instruction has its own datapath, and then to place these datapaths on a timeline
to show their relationship. Figure 3.20 shows the execution of the instructions by displaying their private datapaths on a common timeline. Instead, we add registers to hold data so that portions of a single datapath can be shared during instruction execution.

 For example, as Figure 3.20 shows, the instruction memory is used during only one of the five stages of an instruction, allowing it to be shared by following instructions during the other four stages.
 To retain the value of an individual instruction for its other four stages, the value
read from instruction memory must be saved in a register. Returning to our
laundry analogy, we might have a basket between each pair of stages to hold the
clothes for the next step.
FIGURE 3.21: The pipelined version of the datapath

 Figure 3.21 shows the pipelined datapath with the pipeline registers highlighted. All instructions advance during each clock cycle from one pipeline register to the next. The registers are named for the two stages separated by that register. For example, the pipeline register between the IF and ID stages is called IF/ID. Notice that there is no pipeline register at the end of the write-back stage.
 All instructions must update some state in the processor (the register file, memory, or the PC), so a separate pipeline register is redundant to the state that is updated.

EXECUTION OF load INSTRUCTION IN A PIPELINED DATAPATH


Figures 3.22 through 3.24 show the active portions of the datapath highlighted as a load instruction goes through the five stages of pipelined execution. The five stages are the following:
1. Instruction fetch: The top portion of Figure 3.22 shows the instruction being read
from memory using the address in the PC and then being placed in the IF/ID pipeline
register. The PC address is incremented by 4 and then written back into the PC to be
ready for the next clock cycle.
2. Instruction decode and register file read: The bottom portion of Figure 3.22 shows the instruction portion of the IF/ID pipeline register supplying the 16-bit immediate field, which is sign-extended to 32 bits, and the register numbers to read the two registers. All three values are stored in the ID/EX pipeline register, along with the incremented PC address.

FIGURE 3.22: IF and ID: First and second pipe stages of an instruction, with the
active portions of the datapath in Figure 3.21 highlighted.
3. Execute or address calculation: Figure 3.23 shows that the load instruction reads
the contents of register 1 and the sign-extended immediate from the ID/EX pipeline
register and adds them using the ALU. That sum is placed in the EX/MEM pipeline
register.
FIGURE 3.23 EX: The third pipe stage of a load instruction, highlighting the portions of the datapath in Figure 3.21 used in this pipe stage.

4. Memory access: The top portion of Figure 3.24 shows the load instruction reading the data memory using the address from the EX/MEM pipeline register and loading the data into the MEM/WB pipeline register.
5. Write-back: The bottom portion of Figure 3.24 shows the final step: reading the data from the MEM/WB pipeline register and writing it into the register file in the middle of the figure. This walk-through of the load instruction shows that any information needed in a later pipe stage must be passed to that stage via a pipeline register.
FIGURE 3.24 MEM and WB: The fourth and fifth pipe stages of a load instruction, highlighting the portions of the datapath in Figure 3.21 used in this pipe stage.

EXECUTION OF Store INSTRUCTION IN A PIPELINED DATAPATH

Walking through a store instruction shows the similarity of instruction execution, as well as the passing of information for later stages. Here are the five pipe stages of the store instruction:
1. Instruction fetch: The instruction is read from memory using the address in the PC and then is placed in the IF/ID pipeline register. This stage occurs before the instruction is identified, so it works for store as well as load.
FIGURE 3.25 EX: The third pipe stage of a store instruction


2. Instruction decode and register file read: The instruction in the IF/ID pipeline register supplies the register numbers for reading two registers and extends the sign of the 16-bit immediate. These three 32-bit values are all stored in the ID/EX pipeline register. The bottom portion of Figure 3.25 for load instructions also shows the operations of the second stage for stores. These first two stages are executed by all instructions, since it is too early to know the type of the instruction.
3. Execute and address calculation: Figure 3.26 shows the third step; the effective address is placed in the EX/MEM pipeline register.
4. Memory access: The top portion of Figure 3.26 shows the data being written to memory. Note that the register containing the data to be stored was read in an earlier stage and stored in ID/EX. The only way to make the data available during the MEM stage is to place the data into the EX/MEM pipeline register in the EX stage, just as we stored the effective address into EX/MEM.
FIGURE 3.26: MEM and WB: The fourth and fifth pipe stages of a store
instruction.
5. Write-back: The bottom portion of Figure 3.26 shows the final step of the store.
For this instruction, nothing happens in the write-back stage.
 For the store instruction we needed to pass one of the registers read in the ID stage to the MEM stage, where it is stored in memory. The data was first placed in the ID/EX pipeline register and then passed to the EX/MEM pipeline register.
PIPELINED CONTROL
Adding control to the pipelined datapath is referred to as pipelined control. We start with a simple design that views the problem through pipeline bars in between the stages. The first step is to label the control lines on the existing datapath. Figure 3.27 shows those lines.

FIGURE 3.27: The pipelined datapath with the control signals identified.
 To specify control for the pipeline, we need only set the control values during
each pipeline stage. Because each control line is associated with a component
active in only a single pipeline stage, we can divide the control lines into five
groups according to the pipeline stage.
1. Instruction fetch: The control signals to read instruction memory and to write the
PC are always asserted, so there is nothing special to control in this pipeline stage.
2. Instruction decode/register file read: As in the previous stage, the same thing
happens at every clock cycle, so there are no optional control lines to set.
3. Execution/address calculation: The signals to be set are RegDst, ALUOp, and ALUSrc (see Figure 3.27). The signals select the Result register, the ALU operation, and either Read data 2 or a sign-extended immediate for the ALU.

4. Memory access: The control lines set in this stage are Branch, MemRead, and MemWrite. The branch equal, load, and store instructions set these signals, respectively. Recall that PCSrc in Figure 3.27 selects the next sequential address unless control asserts Branch and the ALU result was 0.
5. Write-back: The two control lines are MemtoReg, which decides between sending the ALU result or the memory value to the register file, and RegWrite, which writes the chosen value. Since pipelining the datapath leaves the meaning of the control lines unchanged, we can use the same control values.

FIGURE 3.28: The control lines for the final three stages.
 Implementing control means setting the nine control lines to these values in each stage for each instruction. The simplest way to do this is to extend the pipeline registers to include control information. Since the control lines start with the EX stage, we can create the control information during instruction decode. Figure 3.27 above shows that these control signals are then used in the appropriate pipeline stage as the instruction moves down the pipeline.
UNIT V MEMORY & I/O SYSTEMS
Memory - Introduction
Computer memory is the storage space in the computer where data to be processed and the instructions required for processing are stored. The memory is divided into a large number of small parts called cells. Each location or cell has a unique address, which varies from zero to memory size minus one.
Computer memory is of two basic types: primary memory (volatile) and secondary memory (non-volatile). Random Access Memory (RAM) is volatile memory and Read Only Memory (ROM) is non-volatile memory.

MEMORY HIERARCHY
The memory hierarchy separates computer storage into a hierarchy based
on response time. Since response time, complexity, and capacity are
related, the levels may also be distinguished by their performance and
controlling technologies. Memory hierarchy affects performance in
computer architectural design, algorithm predictions, and lower level
programming constructs involving locality of reference.
The Principle of Locality: Program likely to access a relatively small portion
of the address space at any instant of time.
- Temporal Locality: Locality in Time
- Spatial Locality: Locality in Space
PIT 1 UNIT 5
Temporal Locality (Locality in Time): If an item is referenced, it will tend to be referenced again soon.
Spatial Locality (Locality in Space): If an item is referenced, items whose
addresses are close by tend to be referenced soon.
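The two kinds of locality can be sketched with a toy fully-associative cache of recently used block addresses (illustrative only; real caches track fixed-size lines organized into sets):

```python
from collections import deque

def hit_rate(addresses, block_size=4, capacity=8):
    """Count hits in a small LRU cache of recently used blocks."""
    cache, hits = deque(maxlen=capacity), 0
    for addr in addresses:
        block = addr // block_size        # spatial locality: neighbors share a block
        if block in cache:
            hits += 1                     # temporal locality: reuse hits
            cache.remove(block)
        cache.append(block)               # most recently used at the back
    return hits / len(addresses)

sequential = list(range(32))              # walk an array: addresses 0,1,2,...,31
print(hit_rate(sequential))               # high: 3 of every 4 accesses hit
```

Each 4-address block misses once and then hits three times, so a sequential walk achieves a 75% hit rate even in this tiny cache.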

SRAM and DRAM
SRAM and DRAM are the two modes of integrated-circuit RAM: SRAM uses transistors and latches in its construction, while DRAM uses capacitors and transistors. They differ in many ways; for example, SRAM is comparatively faster than DRAM, so SRAM is used for cache memory while DRAM is used for main memory.
RAM (Random Access Memory) is a kind of memory which needs constant power to retain the data in it; once the power supply is disrupted, the data will be lost, which is why it is known as volatile memory. Reading and writing in RAM is easy and rapid and accomplished through electrical signals.
SRAM TECHNOLOGY
SRAM (Static Random Access Memory) is built with CMOS technology and uses six transistors per cell. Its construction comprises two cross-coupled inverters that store a binary value, similar to a flip-flop, plus two extra transistors for access control. It is relatively faster than other RAM types such as DRAM and consumes less power. SRAM can hold its data as long as power is supplied to it.
Working of SRAM for an individual cell:
 To generate a stable logic state, four transistors (T1, T2, T3, T4) are organized in a cross-connected way. For generating logic state 1, node C1 is high and C2 is low; in this state, T1 and T4 are off, and T2 and T3 are on.
 For logic state 0, junction C1 is low and C2 is high; in this state, T1 and T4 are on, and T2 and T3 are off. Both states are stable as long as the direct current (dc) voltage is applied.

 The SRAM address line operates the switches that control the T5 and T6 transistors, permitting reads and writes. For a read operation, a signal is applied to the address line, T5 and T6 turn on, and the bit value is read from line B. For a write operation, the signal is applied to bit line B, and its complement is applied to B'.
DRAM TECHNOLOGY
DRAM (Dynamic Random Access Memory) is also a type of RAM, constructed using capacitors and very few transistors. The capacitor stores the data: bit value 1 signifies that the capacitor is charged and bit value 0 means that the capacitor is discharged. Capacitors tend to discharge, which results in leaking of charge.
 The term dynamic indicates that the charge is continuously leaking even in the presence of a continuous power supply, which is why DRAM consumes more power. To retain data for a long time, it needs to be repeatedly refreshed, which requires additional refresh circuitry.
 Due to leaking charge, DRAM loses data even if power is switched on. DRAM is available in higher capacities and is less expensive. It requires only a single transistor for a single block of memory.
Working of typical DRAM cell:
 At the time of reading or writing the bit value of the cell, the address line is activated. The transistor in the circuitry behaves as a switch that is closed (allowing current to flow) if a voltage is applied to the address line and open (no current flows) if no voltage is applied. For the write operation, a voltage signal is applied to the bit line, where high voltage represents 1 and low voltage represents 0; a signal is then applied to the address line, which transfers the charge to the capacitor.
 When the address line is selected for a read operation, the transistor turns on and the charge stored on the capacitor is supplied out onto a bit line and to a sense amplifier.
 The sense amplifier determines whether the cell contains a logic 1 or a logic 0 by comparing the capacitor voltage to a reference value. Reading the cell discharges the capacitor, so the charge must be restored to complete the operation. Thus, even though a DRAM is basically an analog device, it is used to store a single bit (0 or 1).
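The refresh requirement described above can be illustrated with a toy model; the charge and leak values below are arbitrary assumptions for illustration, not device parameters:

```python
# Toy model of one DRAM cell: a capacitor whose charge leaks over time.
# REFERENCE is the (assumed) sense-amplifier comparison level.
REFERENCE = 0.5

class DramCell:
    def __init__(self):
        self.charge = 0.0              # capacitor charge, 0.0 .. 1.0

    def write(self, bit):
        self.charge = 1.0 if bit else 0.0

    def leak(self, amount=0.2):
        # charge leaks continuously, even with power applied
        self.charge = max(0.0, self.charge - amount)

    def read(self):
        # the sense amplifier compares the capacitor voltage to a
        # reference; reading is destructive, so the value is restored
        bit = 1 if self.charge > REFERENCE else 0
        self.write(bit)                # restore charge (refresh)
        return bit

cell = DramCell()
cell.write(1)
cell.leak(); cell.leak()               # 1.0 -> 0.6, still above REFERENCE
assert cell.read() == 1                # read succeeds and refreshes
for _ in range(3):                     # 1.0 -> 0.4 without any refresh
    cell.leak()
assert cell.read() == 0                # the stored 1 has been lost
```

The last assertion shows why periodic refresh circuitry is needed: without it, the leaking charge eventually drops below the reference level and the stored bit is lost.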
Key Differences Between SRAM and DRAM
1. SRAM is an on-chip memory whose access time is small while DRAM is
an off-chip memory which has a large access time. Therefore SRAM is
faster than DRAM.
2. DRAM is available in larger storage capacity while SRAM is of smaller size.
3. SRAM is expensive whereas DRAM is cheap.
4. The cache memory is an application of SRAM. In contrast, DRAM is used
in
main memory.
5. DRAM is highly dense. As against, SRAM is less dense.
6. The construction of SRAM is complex due to the usage of a large number of transistors. On the contrary, DRAM is simple to design and implement.
7. In SRAM a single block of memory requires six transistors, whereas DRAM needs just one transistor for a single block of memory.
8. Power consumption is higher in DRAM than in SRAM. SRAM operates on the principle of changing the direction of current through switches, whereas DRAM works on holding charge in capacitors.
Basis for comparison              SRAM                          DRAM
Speed                             Faster                        Slower
Size                              Small                         Large
Cost                              Expensive                     Cheap
Used in                           Cache memory                  Main memory
Density                           Less dense                    Highly dense
Construction                      Complex; uses transistors     Simple; uses capacitors and
                                  and latches                   very few transistors
Single block of memory requires   6 transistors                 Only one transistor
Charge leakage property           Not present                   Present; hence requires
                                                                refresh circuitry
Power consumption                 Low                           High

CACHE MEMORY:
 A Cache is a small and very fast temporary storage memory. It is designed
to
speed up the transfer of data and instructions. It is located inside or
close to the CPU chip. It is faster than RAM and the data/instructions
that are most recently or most frequently used by CPU are stored in
cache.
 The data and instructions are retrieved from RAM when the CPU uses them for the first time. A copy of that data or instructions is stored in the cache. The next time the CPU needs that data or instructions, it first looks in the cache. If the required data is found there, it is retrieved from cache memory instead of main memory. This speeds up the working of the CPU.
 The cache has a significantly shorter access time than the main memory due to the applied faster but more expensive implementation technology. If information fetched to the cache memory is used again, its access time will be much shorter than an access to the main memory, and the program will execute faster.

 Time efficiency of using cache memories results from the locality of access to data that is observed during program execution. We observe here time and space locality:
The Principle of Locality
The Principle of Locality states that programs access a relatively small portion of the address space at any instant of time.
• Example: 90% of time in 10% of the code

Two Different Types of Locality


Temporal Locality / Time locality (Locality in Time): a tendency to use the same instructions and data many times in programs during neighbouring time intervals; i.e., if an item is referenced, it will tend to be referenced again soon.
Spatial Locality / Space locality (Locality in Space): a tendency for instructions and data used within short intervals of time to be stored under neighbouring addresses in the main memory; i.e., if an item is referenced, items whose addresses are close by tend to be referenced soon.
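A small sketch of spatial locality: with row-major storage (assumed here, as in C), row-by-row traversal of a 2-D array touches consecutive addresses, while column-by-column traversal jumps a whole row per access:

```python
# Compare access strides for the two traversal orders of a 2-D array.
ROWS, COLS = 4, 8

def flat_index(r, c):
    return r * COLS + c        # address of element (r, c), row-major

row_order = [flat_index(r, c) for r in range(ROWS) for c in range(COLS)]
col_order = [flat_index(r, c) for c in range(COLS) for r in range(ROWS)]

# stride = distance between successive accesses
row_strides = {b - a for a, b in zip(row_order, row_order[1:])}
col_strides = {b - a for a, b in zip(col_order, col_order[1:])}

assert row_strides == {1}      # neighbouring addresses: good locality
assert COLS in col_strides     # jumps of a full row: poor locality
```

A cache that fetches a whole block on a miss serves the stride-1 pattern from the already-loaded block, which is exactly why the row-by-row order runs faster in practice.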
 Due to these localities, the information loaded to the cache memory is
used several times and the execution time of programs is much
reduced. Cache can be implemented as a multi-level memory. A cache
memory is maintained by a special processor subsystem called cache
controller.
 If there is a cache memory in a computer system, then at each access to
a main memory address in order to fetch data or instructions, processor
hardware sends the address first to the cache memory. The cache
control unit checks if the requested information resides in the cache.
If so, we have a "hit" and the


requested information is fetched from the cache. The actions concerned with a read with a hit are shown in the figure below.

Read implementation in cache memory on hit


 If the requested information does not reside in the cache, we have a "miss"
and the necessary information is fetched from the main memory to the cache
and to the requesting processor unit. The information is not copied in the
cache as single words but as a larger block of a fixed volume. The actions
executed in a cache memory on "miss" are shown below.

Read implementation in cache memory on miss


If there are two cache levels, then on "miss" at the first level, the address is
transferred in a hardwired way to the cache at the second level. If at this
level a "hit" happens, the block that contains the requested word is
fetched from the second-level cache to the first-level cache. If a "miss"
occurs also at the second


cache level, the blocks containing the requested word are fetched to the
cache memories at both levels. The size of the cache block at the first level
is from 8 to several tens of bytes (the number must be a power of 2). The
size of the block in the second level cache is many times larger than the
size of the block at the first level.

The cache memory can be connected in different ways to the processor


and the main memory:
 As an additional subsystem connected to the system bus that connects the processor with the main memory,
 As a subsystem that intermediates between the processor and the main memory,
 As a separate subsystem connected with the processor, in parallel with the main memory.
Categories of Cache Misses
We can subdivide cache misses into one of three categories:
 A compulsory miss (or cold miss): It is also known as a cold start miss or first-reference miss. These misses occur on the first access to a block; the block must be brought into the cache.
 A conflict miss: It is also known as collision misses or interference
misses. These misses occur when several blocks are mapped to the
same set or block frame. These misses occur in the set associative or
direct mapped block placement strategies.
 A capacity miss: These misses occur when the program working set is much larger than the cache capacity. Since the cache cannot contain all blocks needed for program execution, the cache discards these blocks.


MEASURING AND IMPROVING CACHE PERFORMANCE

We begin by examining ways to measure and analyze cache performance. CPU time can be divided into the clock cycles that the CPU spends executing the program and the clock cycles that the CPU spends waiting for the memory system.
Measuring the Cache Performance
We assume that the costs of cache accesses that are hits are part of the normal CPU execution cycles. Thus,
CPU Time = (CPU execution clock cycles + Memory-stall clock cycles) × Clock cycle time
The memory-stall clock cycles come primarily from cache misses,
and we make that assumption here. In real processors, the stalls generated
by reads and writes can be quite complex, and accurate performance
prediction usually requires very detailed simulations of the processor and
memory system.
Memory-stall clock cycles can be defined as the sum of the stall cycles coming from reads plus those coming from writes:
Memory-stall clock cycles = Read-stall cycles + Write-stall cycles
The read-stall cycles can be defined in terms of the number of read accesses per program, the miss penalty in clock cycles for a read, and the read miss rate:
Read-stall cycles = (Reads / Program) × Read miss rate × Read miss penalty
Writes are more complicated. For a write-through scheme, we have two sources of stalls: write misses, which usually require that we fetch the block before continuing the write, and write buffer stalls, which occur when the write buffer is full when a write occurs. Thus, the cycles stalled for writes equal the sum of these two:
Write-stall cycles = (Writes / Program) × Write miss rate × Write miss penalty + Write buffer stalls


Since the write buffer stalls depend on the proximity of writes, and
not just the frequency, it is not possible to give a simple equation to
compute such stalls.

Write-back schemes also have potential additional stalls arising from the need to write a cache block back to memory when the block is replaced.
In most write-through cache organizations, the read and write miss penalties are the same. If we assume that the write buffer stalls are negligible, we can combine the reads and writes by using a single miss rate and the miss penalty:
Memory-stall clock cycles = (Memory accesses / Program) × Miss rate × Miss penalty
We can also factor this as:
Memory-stall clock cycles = (Instructions / Program) × (Misses / Instruction) × Miss penalty
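As a sketch, the formulas above can be exercised with assumed numbers (the 2% instruction miss rate, 4% data miss rate, 36% data-access fraction, 100-cycle penalty, and base CPI of 2 below are illustrative assumptions, not from the text):

```python
# Applying: Memory-stall cycles = accesses x miss rate x miss penalty,
# then CPI with stalls = base CPI + stall cycles per instruction.
instr_count   = 1_000_000   # instructions executed (assumed)
base_cpi      = 2.0         # CPI with a perfect cache (assumed)
i_miss_rate   = 0.02        # instruction-cache miss rate (assumed)
d_miss_rate   = 0.04        # data-cache miss rate (assumed)
mem_per_instr = 0.36        # loads/stores per instruction (assumed)
miss_penalty  = 100         # clock cycles per miss (assumed)

# instruction fetches miss, and data accesses miss:
i_stalls = instr_count * i_miss_rate * miss_penalty                  # ~2.00 M cycles
d_stalls = instr_count * mem_per_instr * d_miss_rate * miss_penalty  # ~1.44 M cycles

cpi_with_stalls = base_cpi + (i_stalls + d_stalls) / instr_count
assert abs(cpi_with_stalls - 5.44) < 1e-9   # 2 (base) + 2.0 + 1.44
```

The processor therefore spends more cycles stalled for memory (3.44 per instruction) than executing instructions (2 per instruction), which is why reducing the miss rate or the miss penalty matters so much.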

CACHE MAPPING TECHNIQUES
Memory mapping is the process that associates a main memory address with a location in the cache. During program execution, data can move from one location to another, and possibly be duplicated.
Mapping Function
The correspondence between the main memory blocks and those in the cache is specified by a mapping function.
The different Cache mapping techniques are as follows:-
1) Direct Mapping
2) Associative Mapping
3) Set-Associative Mapping
Consider a cache consisting of 128 blocks of 16 words each, for a total of 2048 (2K) words, and assume that the main memory is addressable by a 16-bit address. Main memory is 64K words, which will be viewed as 4K blocks of 16 words each.
(1) Direct Mapping:-
 The simplest way to determine the cache location in which to store a memory block is the direct mapping technique.
 In this, block j of the main memory maps onto block j modulo 128 of the cache. Thus main memory blocks 0, 128, 256, … are loaded into the cache at block 0; blocks 1, 129, 257, … are stored at block 1; and so on.
 Placement of a block in the cache is determined from the memory address. The memory address is divided into 3 fields; the lower 4 bits select one of the 16 words in a block.
 When a new block enters the cache, the 7-bit cache block field determines the cache position in which this block must be stored.
 The higher-order 5 bits of the memory address of the block are stored in 5 tag bits associated with its location. They identify which of the 32 blocks that map into this cache position is currently resident in the cache.
Advantages: It is easy to implement.
Drawbacks: Since more than one memory block is mapped onto a
given cache block position, contention may arise for that position even
when the cache is not full. Contention is resolved by allowing the new block
to overwrite the currently resident block. This method is not very flexible.
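For the 16-bit address format above (5 tag bits, 7 block bits, 4 word bits), the field extraction can be sketched as:

```python
# Split a 16-bit memory address into tag | block | word fields for the
# direct-mapped cache example (128 blocks of 16 words each).
def direct_map(addr):
    word  = addr & 0xF           # lower 4 bits: word within the block
    block = (addr >> 4) & 0x7F   # next 7 bits: cache block position
    tag   = addr >> 11           # upper 5 bits: tag
    return tag, block, word

# Main-memory block j maps to cache block j mod 128, so memory blocks
# 0, 128, 256, ... (addresses j * 16) all contend for cache block 0:
for j in (0, 128, 256):
    tag, block, word = direct_map(j * 16)
    assert block == 0 and word == 0

assert direct_map(0) == (0, 0, 0)
assert direct_map(128 * 16) == (1, 0, 0)   # same cache block, different tag
```

The last two assertions show the contention described above: addresses 0 and 2048 land in the same cache block and are told apart only by their tags.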
(2) Fully Associative Mapping:-

 This is a more flexible mapping method, in which a main memory block can be placed into any cache block position.
 In this, 12 tag bits are required to identify a memory block when it is resident in the cache.
 The tag bits of an address received from the processor are compared to the tag bits of each block of the cache to see if the desired block is present. This is known as the associative mapping technique.
 The cost of an associative-mapped cache is higher than that of a direct-mapped cache because of the need to search all 128 tag patterns to determine whether a block is in the cache. This is known as an associative search.
(3) Set-Associated Mapping:-
 It is the combination of direct and associative mapping technique.
 Cache blocks are grouped into sets, and the mapping allows a block of main memory to reside in any block of a specific set. Hence the contention problem of direct mapping is eased; at the same time, the hardware cost is reduced by decreasing the size of the associative search.
 Consider a cache with two blocks per set. In this case, memory blocks 0, 64, 128, …, 4032 map into cache set 0, and they can occupy either of the two blocks within this set.
 Having 64 sets means that the 6-bit set field of the address determines which set of the cache might contain the desired block. The tag bits of the address must be associatively compared to the tags of the two blocks of the set to check if the desired block is present. This is known as a two-way associative search.
Advantages:
The contention problem of the direct-mapping is eased by having a
few choices for block placement. At the same time, the hardware cost is
reduced by decreasing the size of the associative search.
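The set-numbering rule above can be sketched as follows (two-way set-associative, 128 cache blocks grouped into 64 sets):

```python
# With 64 sets of 2 blocks each, main-memory block j maps to set
# j mod 64; within a set the block may occupy either of the 2 positions.
NUM_SETS = 64

def set_of(mem_block):
    return mem_block % NUM_SETS

# Memory blocks 0, 64, 128, ..., 4032 all map to set 0:
assert all(set_of(j) == 0 for j in range(0, 4096, 64))
assert set_of(65) == 1     # block 65 goes to set 1
```

Only the two tags within the selected set need to be compared, instead of all 128 tags as in the fully associative case.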
M - Way Set Associativity:
We can also think of all block placement strategies as a variation on set associativity. The following figure shows the possible associativity structures for an eight-block cache.
 A direct-mapped cache is simply a one-way set-associative cache: each
cache entry holds one block and each set has one element.
 A fully-associative cache with m entries is simply an m-way set-associative cache: it has one set with m blocks, and an entry can reside in any block within that set.
The advantage of increasing the degree of associativity is that it
usually decreases the miss rate. The main disadvantage is the potential
increase in the hit time.
VIRTUAL MEMORY
Introduction
 Virtual memory is a technique that allows the execution of processes
that may not be completely in memory. One major advantage of this
scheme is that programs can be larger than physical memory.
 Virtual memory abstracts main memory into an extremely large, uniform
array of storage, separating logical memory as viewed by the user from
physical memory.
 Users would be able to write programs for an extremely large virtual-address space, simplifying the programming task.
 Less I/O would be needed to load or swap each user program into
memory, so each user program would run faster.
 Virtual memory is the separation of user logical memory from physical memory. This separation allows an extremely large virtual memory to be provided for programmers when only a smaller physical memory is available.
Diagram showing virtual memory that is larger than physical

Demand Paging
 Virtual memory is commonly implemented by demand paging.
 A demand-paging system is similar to a paging system with swapping.
Pages are loaded only on demand and not in advance.
 Processes reside on secondary memory (which is usually a disk). When the process is to be executed, it is swapped into memory. Rather than swapping the entire process into memory, however, we use a lazy swapper. A lazy swapper


never swaps a page into memory unless that page will be needed.
Basic Concepts
Instead of swapping in a whole process, the pager brings only those necessary pages into memory. Thus, it avoids reading into memory pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed.

Diag: Transfer of a paged memory to contiguous disk space.


 While the process executes and accesses pages that are memory resident, execution proceeds normally.


 But what happens if the process tries to access a page that was not brought into memory? Access to a page marked invalid causes a page-fault trap.
 When the disk read is complete, we modify the internal table kept with the process and the page table to indicate that the page is now in memory. The process can then access the page as though it had always been in memory.

Pure demand paging:


Never bring a page into memory until it is required. The hardware to
support demand paging is the same as the hardware for paging and
swapping:
Page table: This table has the ability to mark an entry invalid through a valid–invalid bit or a special value of the protection bits.
Secondary memory: This memory holds those pages that are not present
in main memory. The secondary memory is usually a high-speed disk. It is
known as the swap device, and the section of disk used for this purpose is
known as swap space.

PAGING IN VIRTUAL MEMORY


 Paging is a memory-management scheme that permits the physical-
address space of a process to be noncontiguous. Paging avoids the
considerable problem of fitting the varying-sized memory chunks onto
the backing store, from which most of the previous memory-
management schemes suffered. When some code fragments or data
residing in main memory need to be swapped out, space must be found
on the backing store.

Fig: Paging Hardware


Basic Method:
Physical memory is broken into fixed-sized blocks called frames. Logical
memory is also broken into blocks of the same size called pages. When a
process is to be executed, its pages are loaded into any available memory
frames from the backing store. The backing store is divided into fixed-sized
blocks that are of the same size as the memory frames.

A logical address is divided into a page number and a page offset:

    page number p : m − n bits    |    page offset d : n bits

where p is an index into the page table and d is the displacement within the page (for an m-bit logical address and a 2^n-byte page size).
 The hardware support for paging is illustrated in the above Figure. Every
address generated by the CPU is divided into two parts: a page number
(p) and a page offset (d). The page number is used as an index into a
page table. The page table contains the base address of each page in
physical memory. This base address is combined with the page offset
to define the physical memory address that is sent to the memory unit.
The paging model of memory is shown in the Figure given below.
 The page size (like the frame size) is defined by the hardware. The size of a page is typically a power of 2, varying between 512 bytes and 16 MB per page, depending on the computer architecture. The selection of a power of 2 as a page size makes the translation of a logical address into a page number and page offset particularly easy.
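The shift-and-mask translation this enables can be sketched as follows (a 4 KB page size is assumed here for illustration):

```python
# With a power-of-2 page size, the page number and offset are obtained
# with a shift and a mask; no division hardware is needed.
PAGE_SIZE   = 4096
OFFSET_BITS = 12                     # because 2**12 == 4096

def split(logical_address):
    page   = logical_address >> OFFSET_BITS      # upper bits: page number p
    offset = logical_address & (PAGE_SIZE - 1)   # lower n bits: offset d
    return page, offset

assert split(0x3A7F) == (0x3, 0xA7F)
assert split(0)      == (0, 0)
```

Had the page size not been a power of 2, extracting p and d would require an actual division and remainder, which is why hardware designers insist on power-of-2 sizes.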
Diag: Paging model of logical and physical memory
LOGICAL Versus PHYSICAL ADDRESS SPACE
 Logical address is generated by the CPU. Physical address is the address of main memory, and it is loaded into the memory address register.
 The compile-time and load-time address-binding methods generate
identical logical and physical addresses. Usually the logical address is
referred as a virtual address.
 The set of all logical addresses generated by a program is a logical-
address space; the set of all physical addresses corresponding to these
logical addresses is a physical-address space. Thus, in the execution-
time address-binding scheme, the logical and physical-address spaces
differ.
 The run-time mapping from virtual to physical addresses is done by a
hardware device called the memory-management unit (MMU).
 As illustrated in the given figure, the base register is now called a
relocation register. The value in the relocation register is added to every
address generated by a user process at the time it is sent to memory.
For example, if the base is at 14000, then an attempt by the user to
address location 0 is dynamically relocated to location 14000; an
access to location 346 is mapped to location 14346.
Diag: Address translation using a relocation register in MMU
 The user program never sees the real physical addresses. The program
can create a pointer to location 346, store it in memory, manipulate it,
and compare it to other addresses-all as the number 346. The user
program deals with logical addresses. The memory-mapping hardware
converts logical addresses into physical addresses.
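A minimal sketch of this mapping, using the base value 14000 from the example:

```python
# The MMU adds the relocation (base) register to every logical address
# generated by the user process before it reaches memory.
RELOCATION_REGISTER = 14000

def to_physical(logical_address):
    return RELOCATION_REGISTER + logical_address

assert to_physical(0)   == 14000   # logical 0 relocates to 14000
assert to_physical(346) == 14346   # logical 346 relocates to 14346
```

The user program only ever manipulates the logical value (346); the addition happens in hardware on every memory reference.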
ADDRESS TRANSLATION – EXAMPLE
Consider the memory shown in the following figure. Using a page size of 4 bytes and a physical memory of 32 bytes (8 frames), we show how the user's view of memory can be mapped into physical memory. Logical address 0 is page 0, offset 0. Indexing into the page table, we find that page 0 is in frame 5; thus, logical address 0 maps to physical address 20 (= (5 × 4) + 0). Logical address 4 is page 1, offset 0; according to the page table, page 1 is mapped to frame 6, so logical address 4 maps to physical address 24 (= (6 × 4) + 0). Similarly, logical address 13 maps to physical address 9.
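The same translations can be checked in code; the page table below completes the example, with the frame for page 2 assumed from the standard figure:

```python
# 32-byte physical memory, 4-byte pages; page_table maps page -> frame.
PAGE_SIZE  = 4
page_table = [5, 6, 1, 2]   # frames for pages 0-3 (frame for page 2 assumed)

def translate(logical_address):
    page, offset = divmod(logical_address, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

assert translate(0)  == 20   # page 0 -> frame 5: (5 * 4) + 0
assert translate(4)  == 24   # page 1 -> frame 6: (6 * 4) + 0
assert translate(13) == 9    # page 3, offset 1 -> frame 2: (2 * 4) + 1
```

The last line shows where physical address 9 comes from: logical 13 is page 3, offset 1, and page 3 resides in frame 2.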
Fig: Paging example for 32-byte memory with 4-byte pages
 Generally, page sizes have grown over time as processes, data sets, and
main memory have become larger. Today pages typically are between 4
KB and 8 KB, and some systems support even larger page sizes. Some
CPUs and kernels even support multiple page sizes. For instance, Solaris
uses 8 KB and 4 MB page sizes, depending on the data stored by the
pages. Researchers are now developing variable on-the-fly page-size
support. Each page-table entry is usually 4 bytes long, but that size can
vary as well.
TLB (TRANSLATION LOOK-ASIDE BUFFER)
Each operating system has its own methods for storing page tables. Most allocate a page table for each process. A pointer to the page table is stored with the other register values (like the instruction counter) in the process control block. When the dispatcher is told to start a process, it must reload the user registers and define the correct hardware page-table values from the stored user page table.
The page table is implemented as a set of dedicated registers. These
registers should be built with very high-speed logic to make the paging-
address translation efficient. The address consists of 16 bits, and the page
size is 8 KB. The page table thus consists of eight entries that are kept in
fast registers.
The standard solution to this problem is to use a special, small, fast lookup hardware cache, called the translation look-aside buffer (TLB). The TLB is associative, high-speed memory. Each entry in the TLB consists of two parts: a key (or tag) and a value. Typically, the number of entries in a TLB is small, often numbering between 64 and 1,024.

Fig: Paging Hardware with TLB


The TLB is used with page tables in the following way.
 The TLB contains only a few of the page-table entries. When a logical
address is generated by the CPU, its page number is presented to the
TLB.
 If the page number is found, its frame number is immediately available
and is used to access memory.
The percentage of times that a particular page number is found in the TLB
is called the hit ratio. An 80-percent hit ratio means that we find the desired
page number in the TLB 80 percent of the time. If it takes 20 nanoseconds to search the TLB, and 100 nanoseconds to access memory, then a mapped-memory access takes 120 nanoseconds when the page number is in the TLB.
To find the effective memory-access time, we must weigh each case by its
probability:
Effective access time = 0.80 x 120 + 0.20 x 220 = 140
nanoseconds.
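The effective-access-time computation above, as a sketch:

```python
# Effective access time with the figures used above: 20 ns TLB search,
# 100 ns memory access, 80% hit ratio.  A TLB miss costs one extra
# memory access to fetch the page-table entry before the data access.
TLB_SEARCH, MEM_ACCESS = 20, 100
HIT_RATIO, MISS_RATIO  = 0.80, 0.20

hit_time  = TLB_SEARCH + MEM_ACCESS       # 120 ns: TLB hit, one access
miss_time = TLB_SEARCH + 2 * MEM_ACCESS   # 220 ns: page table + data

effective = HIT_RATIO * hit_time + MISS_RATIO * miss_time
assert abs(effective - 140) < 1e-9        # 140 ns, matching the text
```

Raising the hit ratio is the main lever here: at a 98% hit ratio the same formula gives 122 ns, barely above the pure hit time.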
PROTECTION in TLB
Protection bits are kept in the page table. The valid-invalid bit scheme can
be used for this purpose. When this bit is set to "valid," this value indicates
that the associated page is both legal and in memory. If the bit is set to
"invalid," this value indicates that the page either is not valid (that is, not in
the logical address space of the process), or is valid but is currently on the
disk.

 The page-table entry for a page that is brought into memory is set as
usual, but the page table entry for a page that is not currently in memory
is simply marked invalid, or contains the address of the page on disk.
This situation is depicted in the following figure.

Fig: Valid (v) or invalid (i) bit in a page table.


 The paging hardware, in translating the address through the page table,
will notice that the invalid bit is set, causing a trap to the operating
system. This trap is the result of the operating system's failure to
bring the desired page into
memory, rather than an invalid address error as a result of an attempt to use an illegal memory address.
PAGE REPLACEMENT ALGORITHMS
INPUT / OUTPUT SYSTEM
 The main data-processing functions of a computer involve its CPU and external memory. The CPU fetches instructions and data from memory, processes them, and eventually stores the results back in memory.
 The other system components, like secondary memory and user interface devices, constitute the input/output (I/O) system. One of the basic features of a computer is its ability to exchange data with other devices. The data transfer rate of peripherals is much slower than that of the memory or CPU. The I/O subsystem provides the mechanism for communication between the CPU and the outside world.

 The connections between the I/O devices, processor, and memory are historically called buses, although the term originally meant shared parallel wires; most I/O connections today are closer to dedicated serial lines. Communication among the devices and the processor uses both interrupts and protocols on the interconnect.
I/O devices are incredibly diverse. Three characteristics are useful in
organizing this wide variety:
1. Behavior: Input (read once), output (write only, cannot be read), or storage (can be read and usually rewritten).
2. Partner: Either a human or a machine is at the other end of the I/O
device, either feeding data on input or reading data on the output.
3. Data rate: The peak rate at which data can be transferred between the
I/O device and the main memory or processor. It is useful to know the
maximum demand the device may generate when designing an I/O
system.
Accessing I/O Devices
A simple arrangement to connect I/O devices to a computer is to use a single bus arrangement. The bus enables all the devices connected to it to
exchange information. Typically it consists of three sets of lines used to
carry address, data, and control signals. Each I/O device is assigned a
unique set of addresses. When the processor places a particular address
on the address lines, the device that recognizes this address responds to
the commands issued on the control lines. The processor requests either a
read or a write operation, and the requested data are transferred over the
data lines.

I/O Interface
 The address decoder, the data and status registers, and the control
circuitry required to coordinate I/O transfers constitute the device’s
interface circuit.
 The address decoder enables the device to recognize its address when
this address appears on the address lines. The data register holds the
data being transferred to or from the processor.
 The status register contains information relevant to the operation of the
I/O device. Both the data and status registers are connected to the data
bus and assigned unique addresses.
 I/O devices operate at speeds that are vastly different from that of the processor. When a human operator is entering characters at a keyboard,
the processor is capable of executing millions of instructions between
successive character entries. An instruction that reads a character from
the keyboard should be executed only when a character is available in
the input buffer of the keyboard interface. Also, we must make sure that
an input character is read only once.
 For an input device such as keyboard, a status flag, SIN, is included in
the interface circuit as part of the status register. This flag is set to 1
when a character is entered at the keyboard and cleared to 0 once this
character is read by the processor. A similar procedure can be used to
control output operations using an output status flag, SOUT.
I/O Control Methods
Input-Output operations are distinguished by the extent to which the
CPU is involved in their execution. I/O operation refers to a data transfer
between an IO device and memory, or between an IO device and the CPU.
Commonly used mechanisms for implementing IO operations
There are three commonly used methods for implementing I/O operations. They are:
1. Programmed I/O
If I/O operations are completely controlled by the CPU, i.e., the CPU
executes programs that initiate, direct, and terminate the IO operations,
then the computer is said to be using programmed IO. This type of IO
control can be implemented with little or no special hardware, but causes
the CPU to spend a lot of time performing relatively trivial IO-related
functions.
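A toy model of programmed I/O using the SIN status flag described earlier; the class and function names below are illustrative, not from the text:

```python
# Programmed I/O sketch: the CPU polls the SIN status flag and reads a
# character only when one is available.
class KeyboardInterface:
    def __init__(self, typed):
        self._typed = list(typed)
        self.SIN = bool(self._typed)   # 1 when a character is available

    def read_data(self):
        ch = self._typed.pop(0)
        self.SIN = bool(self._typed)   # reading clears SIN once drained
        return ch

def programmed_input(keyboard, count):
    chars = []
    while len(chars) < count:
        if not keyboard.SIN:           # busy-wait loop: the CPU spends
            continue                   # cycles doing trivial polling work
        chars.append(keyboard.read_data())
    return "".join(chars)

kbd = KeyboardInterface("hi")
assert programmed_input(kbd, 2) == "hi"
assert kbd.SIN == False                # buffer drained, flag cleared
```

The busy-wait loop is exactly the cost the text mentions: between keystrokes the CPU could have executed millions of useful instructions, which motivates interrupts and DMA.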
2. Direct Memory Access (DMA)
A modern hardware design enables an I/O device to transfer a block of information to or from memory without CPU intervention. This task requires the I/O device to generate memory addresses and transfer data to or from the bus connecting it to memory via its interface controller; in other words, the I/O device must be able to act as a bus master.
The CPU is still responsible for initiating each block transfer. The IO device
interface controller can then carry out the transfer without further program
execution by the CPU. The CPU and IO controller interact only when the
CPU must yield control of the memory bus to the IO controller in response
to requests from the latter. This level of I/O control is called Direct Memory Access (DMA), and the I/O device interface control circuit is called a DMA controller.
3. Interrupts
The DMA controller can also be provided with circuits enabling it to request service from the CPU, that is, execution of a specific program to service an I/O device. This type of request is called an interrupt, and it frees the CPU from the task of periodically testing the status of I/O devices.
Unlike a DMA request, which merely requests temporary access to the
system bus, an interrupt request causes the CPU to switch programs by
saving its previous program state and transferring control to a new
interrupt-handling program. After

the interrupt has been serviced, the CPU can resume execution of the interrupted program.

DIRECT MEMORY ACCESS (DMA)


A special control unit is provided to allow transfer of a block of data directly
between an external device and the main memory, without continuous
intervention by the processor. This approach is called direct memory
access, or DMA.
(OR)
DMA stands for "Direct Memory Access" and is a method of transferring
data
from the computer's RAM to another part of the computer without
processing it using the CPU. While most data that is input or output from
your computer is processed by the CPU, some data does not require
processing, or can be processed by another device.
In these situations, DMA can save processing time and is a more
efficient way to move data from the computer's memory to other devices.
In order for devices to use direct memory access, they must be assigned
to a DMA channel. Each type of port on a computer has a set of DMA
channels that can be assigned to each connected device.
 Transfer of data between a fast storage device and memory is limited by the speed of the CPU.
 Removing the CPU from the path of communication is the technique of DMA.

 DMA transfers are performed by a control circuit that is part of the I/O
device interface. We refer to this circuit as a DMA controller. The DMA
controller performs the functions that would normally be carried out by
the processor when accessing the main memory.
 For each word transferred, it provides the memory address and all the bus signals that control data transfer. Since it has to transfer blocks of data, the DMA controller must increment the memory address for successive words and keep track of the number of transfers.
 Although a DMA controller can transfer data without intervention by the
processor, its operation must be under the control of a program
executed by the processor.
 To initiate the transfer of a block of words, the processor sends the
starting address, the number of words in the block, and the direction of
the transfer. On receiving this information, the DMA controller proceeds
to perform the requested operation. When the entire block has been
transferred, the controller informs the processor by raising an interrupt
signal.
 While a DMA transfer is taking place, the program that requested the
transfer cannot continue, and the processor can be used to execute
another program. After the DMA transfer is completed, the processor
can return to the program that requested the transfer. I/O operations are
always performed by the operating system of the computer in response
to a request from an application program.
 The OS is also responsible for suspending the execution of one program and starting another. Thus, for an I/O operation involving DMA, the OS puts the program that requested the transfer in the Blocked state, initiates the DMA operation, and starts the execution of another program. When the transfer is completed, the DMA controller informs the processor by sending an interrupt


request. In response, the OS puts the suspended program in the runnable state so that it can be selected by the scheduler to continue execution.
 Cycle Stealing –The DMA controller must use the bus only when the
processor does not need it, or it must force the processor to suspend
operation temporarily. This technique is referred to as cycle stealing. It allows the DMA controller to transfer one data word at a time, after which it must return control of the buses to the CPU.
DMA Controller
 A simple DMA controller is a standard component in modern PCs, and
many bus-mastering I/O cards contain their own DMA hardware.
 Handshaking between DMA controllers and their devices is
accomplished through two wires called the DMA-request and DMA-
acknowledge wires.
 While the DMA transfer is going on, the CPU does not have access to the PCI bus (including main memory), but it does have access to its internal registers and its primary and secondary caches.
 DMA can be done in terms of either physical addresses or virtual addresses that are mapped to physical addresses. The latter approach is
known as Direct Virtual Memory Access, DVMA, and allows direct data
transfer from one memory-mapped device to another without using the
main memory chips.
The controller is integrated into the processor board and manages all DMA
data transfers. Transferring data between system memory and an I/O
device requires two steps.
i. Data goes from the sending device to the DMA controller and then to
the receiving device. The microprocessor gives the DMA controller
the location, destination, and amount of data to be transferred.
Then the DMA controller transfers the data, allowing the
microprocessor to continue with other processing tasks. When a
device needs to use the Micro Channel bus to


send or receive data, it competes with all the other devices that are
trying to gain control of the bus. This process is known as arbitration.
ii. The DMA controller does not arbitrate for control of the BUS instead;
the I/O device that is sending or receiving data (the DMA slave)
participates in arbitration.
 The DMA controller takes over the buses to manage the transfer directly between the I/O device and memory.
 Bus Request (BR) – used by the DMA controller to request the CPU to give up control of the buses.
 The CPU activates bus grant to inform the external DMA controller that the buses are in the high-impedance state.
 Burst transfer – a block of memory words is transferred in one continuous burst while the DMA controller is bus master.
INTERRUPTS
 An interrupt is a signal to the processor emitted by hardware or
software indicating an event that needs immediate attention. An
interrupt alerts the processor to a high-priority condition requiring the
interruption of the current code the processor is executing.
 The processor responds by suspending its current activities, saving its
state, and executing a function called an interrupt handler (or an
interrupt service routine, ISR) to deal with the event. This interruption is
temporary, and, after the interrupt handler finishes, the processor
resumes normal activities.

There are two types of interrupts: hardware interrupts and software


interrupts.
 Hardware interrupts are used by devices to communicate that they
require attention from the operating system. Internally, hardware
interrupts are implemented using electronic alerting signals that are
sent to the processor from an external device, which is either a part of
the computer itself, such as a disk controller, or an external peripheral.
 For example, pressing a key on the keyboard or moving the mouse
triggers hardware interrupts that cause the processor to read the
keystroke or mouse position. The act of initiating a hardware interrupt
is referred to as an interrupt request (IRQ).
 A software interrupt is caused either by an exceptional condition in the
processor itself, or a special instruction in the instruction set which
causes an interrupt when it is executed. The former is often called a
trap or exception (for example, a divide-by-zero exception) and is used
for errors or events occurring during program execution.
 Each interrupt has its own interrupt handler. The number of hardware
interrupts is limited by the number of interrupt request (IRQ) lines to the
processor, but there may be hundreds of different software interrupts.
Interrupts are a commonly used technique for computer multitasking,
especially in real-time computing. Such a system is said to be interrupt-
driven.
Interrupts Handling
 The interrupt mechanism allows devices to signal the CPU and to force execution of a particular piece of code.
 When an interrupt occurs, the program counter’s value is changed to point to an interrupt handler routine (also commonly known as a device driver) that takes care of the device.
 The interface between the CPU and the I/O device includes the following signals for interrupting:
 The I/O device asserts the interrupt request signal when it wants service from the CPU; and
 The CPU asserts the interrupt acknowledge signal when it is ready to handle the I/O device’s request.

Priorities and Vectors
 Interrupt priorities allow the CPU to recognize some interrupts as more important than others, and
 Interrupt vectors allow the interrupting device to specify its handler.
 The priority mechanism must ensure that a lower-priority interrupt does not occur when a higher-priority interrupt is being handled. The decision process is known as masking.
 Asking an I/O device whether it is finished by reading its status register is often called polling.
 The highest-priority interrupt is normally called the non-maskable interrupt (NMI).
 Most CPUs provide a relatively small number of interrupt priority levels, such as eight. When several devices naturally assume the same priority, polling can be combined with prioritized interrupts to handle the devices efficiently.
Diag: Interrupt request signals

 The CPU will call the interrupt handler associated with this priority; that handler does not know which of the devices actually requested the interrupt. The handler uses software polling to check the status of each device: in this example, it would read the status registers of devices 1, 2, and 3 to see which of them is ready and requesting service. The example illustrates how priorities affect the order in which I/O requests are handled.
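Software polling inside a shared-priority handler can be sketched as follows. This is an illustrative model — the device numbers and the choice of bit 0 as the "ready" flag are assumptions, not from the text:

```python
# Sketch: several devices share one interrupt priority level; the handler
# polls each device's status register in a fixed order to find the
# requester(s). Device numbers and the status-bit layout are illustrative.

def shared_priority_handler(status_registers):
    """Return the list of devices to service, in polling order."""
    to_service = []
    for device, status in sorted(status_registers.items()):
        if status & 0x1:          # bit 0 = ready / requesting service
            to_service.append(device)
    return to_service

# Devices 1 and 3 are requesting; device 2 is idle.
order = shared_priority_handler({1: 0x1, 2: 0x0, 3: 0x1})
print(order)   # [1, 3]
```

The polling order fixes a priority among the devices that share the interrupt level.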
 Vectors provide flexibility in a different dimension, namely, the ability to
define the interrupt handler that should service a request from a device.
Figure shows the hardware structure required to support interrupt
vectors. In addition to the interrupt request and acknowledge lines,
additional interrupt vector lines run from the devices to the CPU.
 After a device’s request is acknowledged, it sends its interrupt vector over those lines to the CPU. The CPU then uses the vector number as an
index in a table stored in memory as shown in Figure 3.5. The location
referenced in the interrupt vector table by the vector number gives the
address of the handler.
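The table lookup described above can be sketched in a few lines. The handler names and vector numbers here are hypothetical, chosen only to illustrate the mechanism:

```python
# Sketch of vectored dispatch: the device supplies a vector number and the
# CPU uses it as an index into an in-memory table of handler addresses.
# Vector numbers and handler names are illustrative.

def disk_handler():
    return "disk serviced"

def keyboard_handler():
    return "keyboard serviced"

# Interrupt vector table: vector number -> handler "address"
vector_table = {3: disk_handler, 5: keyboard_handler}

def take_interrupt(vector_number):
    handler = vector_table[vector_number]   # table lookup, not fixed wiring
    return handler()

result = take_interrupt(5)
print(result)   # keyboard serviced
```

Because the mapping lives in the table, a device can be rerouted to a new handler by changing either the vector it sends or the table entry, without touching the dispatch logic.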
 There are two important things to notice about the interrupt vector
mechanism. First, the device, not the CPU, stores its vector number. In
this way, a device can be given a new handler simply by changing the
vector number it sends, without modifying the system software. For
example, vector numbers can be changed by programmable switches.
 The second thing to notice is that there is no fixed relationship between
vector numbers and interrupt handlers. The interrupt vector table allows
arbitrary relationships between devices and handlers. The vector
mechanism provides great flexibility in the coupling of hardware devices
and the software routines that service them.
 Most modern CPUs implement both prioritized and vectored interrupts.
Priorities determine which device is serviced first, and vectors determine
what routine is used to service the interrupt. The combination of the two
provides a rich interface between hardware and software.
BUS STRUCTURE

A computer system is made up of three major components: the Central Processing Unit (CPU) that processes data, the Memory Unit that holds data for processing, and the Input and Output Unit that the user employs to communicate with the computer. But how do these different components communicate with each other? They use a special electronic communication system called the bus. The computer bus carries information over numerous pathways called circuit lines. The system bus consists of the data bus, the address bus and the control bus.

 Data bus- A bus which carries data to and from memory/IO is called as
data bus
 Address bus- This is used to carry the address of data in the memory, and its width is equal to the number of bits in the MAR of the memory. For example, if a computer memory of 64K words has 32-bit words, the computer will have a data bus 32 bits wide and an address bus 16 bits wide.
 Control Bus- carries the control signals between the various units of the
computer. Ex: Memory Read/write, I/O Read/write
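The address-bus width in the example above follows directly from the number of addressable words: width = log2(number of words). A quick check:

```python
# Address-bus width = log2(number of addressable words).
# For the 64K-word example in the text, this gives 16 bits.
import math

def address_bus_width(num_words):
    return int(math.log2(num_words))

print(address_bus_width(64 * 1024))   # 16
```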
Two types of Bus organizations:
 Single Bus organization
 Two bus Organization
Single Bus Architecture
 Three units share the single bus. At any given point of time, information can be transferred between any two units.
 Here I/O units use the same memory address space ( Memory mapped
I/O)

 So no special instructions are required to address the I/O; it can be accessed like a memory location.
 Since all the devices do not operate at the same speed, it is necessary to smooth out the differences in timing among the devices. A common approach is to include buffer registers with the devices to hold the information during transfers.
Ex: Communication between the processor and printer
Two Bus Architecture

 Various units are connected through two independent buses


 I/O units are connected to the processor through an I/O bus, and memory is connected to the processor through the memory bus.
 The I/O bus consists of address, data and control lines; the memory bus likewise consists of address, data and control lines. In this arrangement the processor completely supervises the transfer of information to and from the I/O units. All information is first taken to the processor and from there to memory. Such transfers are called program-controlled transfers.
Bus arbitration process
Multiple devices may need to use the bus at the same time, so there must be a way to arbitrate multiple requests.
Bus arbitration refers to the process by which the current bus master accesses and then leaves control of the bus, passing it to another requesting processor unit. The controller that has access to the bus at an instance is known as the bus master.


 The system buses are shared between the controllers and an I/O processor. Multiple controllers may have to access the bus, but only one of them can be granted bus master status at any one instance.
 The bus master has access to the bus at that instance. There are two approaches to bus arbitration:
1. Centralized bus arbitration – A single bus arbiter performs the
required arbitration.
2. Distributed bus arbitration – All devices participate in the selection
of the next bus master.
Three methods in Centralized bus arbitration process
 Daisy Chain method
 Fixed Priority or Independent Bus Requests and Grant method
 Polling or Rotating Priority method
DAISY CHAINING
o It is a centralized bus arbitration method. During any bus cycle, the
bus master may be any device – the processor or any DMA controller
unit, connected to the bus.
o Bus control passes from one bus master to the next one, then to the next, and so on – that is, from controller unit C0 to C1, then to C2, then to C3, and so on.

Sequence of Signals in the arbitration process
 The bus-grant signal (BG), which functions like a token, is first sent to C0.
 If C0 does not need the bus, it passes BG to C1.
 A controller needing the bus raises a bus request (BR) signal.
 A bus-busy (BUSY) signal is generated when that controller becomes the bus master.
Signals in the arbitration process
 When the bus master no longer needs the bus, it deactivates BR, and the BUSY signal also deactivates.
 Another BG is issued and passed from C0 down the chain of priority controllers, one by one.
 [For example, COM2 to COM1 in an IBM PC]
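The grant-passing sequence above can be sketched as a short simulation. The controller numbering is illustrative; the key point is that the grant stops at the first requester closest to the arbiter:

```python
# Sketch of daisy-chained grant passing: BG enters at C0 and is passed
# down the chain until it reaches a controller whose BR is raised.

def daisy_chain_grant(bus_requests):
    """bus_requests[i] is True if controller Ci raised BR.
    Returns the index of the controller that captures BG, or None."""
    for i, requesting in enumerate(bus_requests):
        if requesting:
            return i          # this controller keeps BG and asserts BUSY
        # otherwise Ci passes BG on to C(i+1)
    return None

# C0 and C1 idle, C2 and C3 both requesting: C2 wins (closer to the arbiter).
print(daisy_chain_grant([False, False, True, True]))   # 2
```

This also makes the priority drawback visible: a controller's priority is fixed purely by its position in the chain.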

Advantages
 Simplicity and Scalability.
 The user can add more devices anywhere along the chain, up to a certain maximum value.

Disadvantages
 The priority assigned to a device depends on its position in the chain.
 Propagation delay arises in this method.
 If one device fails, the entire system will stop working.
POLLING OR ROTATING PRIORITY METHOD
Diag: Polling method for Bus Sharing by Multiple Processors or controllers
 BUSY activates when that controller becomes the bus master. When BR deactivates, BG and BUSY also deactivate and the count starts incrementing again.
 The advantage of the polling method is that the controller next to the current bus master gets the highest priority to access the bus after the current bus master finishes its operations on the bus.
 A poll count value is sent to the controllers and is incremented. Assume that there are 8 controllers. Three poll count signals p2, p1, p0 successively change through 000, 001, …, 110, 111, 000, … If a BR signal is received on count = i, the count stops incrementing and BG is sent to controller i.
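The rotating poll counter can be sketched as follows. This is an illustrative model of the behaviour described above, assuming the counter resumes just after the previous bus master:

```python
# Sketch of the rotating-priority poll counter: the count steps through
# controller numbers 0..7 and stops at the first one whose BR is raised,
# starting just after the previous bus master.

def rotating_poll(bus_requests, last_master):
    n = len(bus_requests)
    for step in range(1, n + 1):
        candidate = (last_master + step) % n   # poll count increments mod n
        if bus_requests[candidate]:
            return candidate                   # BG sent to this controller
    return None

requests = [False] * 8
requests[1] = True
requests[6] = True
# Controller 5 was the last master, so 6 is polled before 1.
print(rotating_poll(requests, last_master=5))   # 6
```

Because polling restarts after the current master, no fixed controller is permanently favoured.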
Advantages:
 This method does not favor any particular device and processor.
 The method is also quite simple.
 If one device fails, the entire system does not stop working.
Disadvantages:
 Adding bus masters is difficult, as it increases the number of address lines in the circuit.
FIXED PRIORITY or INDEPENDENT REQUEST AND GRANT METHOD
 Each controller has a separate bus request signal: BR0, BR1, …, BRn.
 Separate bus grant signals BG0, BG1, …, BGn serve the controllers.
Diag: Independent bus request method

 The i-th controller sends BRi (the i-th bus request signal) and, when it receives BGi (the i-th bus grant signal), uses the bus and asserts the BUSY signal.
Advantages
 This method gives a fast response.
Disadvantages
 Hardware cost is high, as a large number of control lines is required.

DISTRIBUTED BUS ARBITRATION:


Here, all the devices participate in the selection of the next bus master. Each device on the bus is assigned a 4-bit identification number. The priority of the device is determined by this ID.

 When one or more devices request control of the bus, they assert the
start arbitration signal and place their 4-bit identification numbers on
arbitration lines through ARB0 to ARB3.
 Each device compares the code on the lines with its own ID and changes its bit positions accordingly. It does so by placing a 0 at the input of its driver.
 The distributed arbitration is highly reliable because the bus operations
are not dependent on devices.
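The self-selection scheme above can be sketched as a simulation. This is a common formulation of distributed arbitration (highest ID wins over wired-OR lines); the exact drop-out rule of the hardware described here is assumed:

```python
# Sketch of distributed arbitration over wired-OR lines ARB3..ARB0. Each
# device drives its 4-bit ID; a device that sees a 1 on a line where it
# drives 0 withdraws from contention, so the highest ID wins.

def distributed_arbitration(device_ids):
    contenders = list(device_ids)
    for bit in (3, 2, 1, 0):                           # ARB3 down to ARB0
        line = any(i & (1 << bit) for i in contenders)  # wired-OR of the bit
        if line:
            # devices driving 0 on this line drop out of contention
            contenders = [i for i in contenders if i & (1 << bit)]
    return contenders[0] if contenders else None

print(distributed_arbitration([0b0101, 0b1010, 0b0110]))   # 10
```

No central arbiter is involved: each device decides for itself whether it has lost, which is why a single device failure does not stall the scheme.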

INTERFACE CIRCUITS
An I/O interface consists of the circuitry required to connect an I/O device
to a computer bus. On one side of the interface, we have bus signals. On
the other side, we have a data path with its associated controls to transfer data between the interface and the I/O device – the port. There are two types:
 Parallel port
 Serial port
A parallel port transfers data in the form of a number of bits (8 or 16)
simultaneously to or from the device. A serial port transmits and receives
data one bit at a time. Communication with the bus is the same for
both formats. The

conversion from the parallel to the serial format, and vice versa, takes place
inside the interface circuit. In a parallel port, the connection between the device and the computer uses a multiple-pin connector and a cable with as many wires. This arrangement is suitable for devices that are physically close to the computer. A serial port is much more convenient and cost-effective where longer cables are needed.
Typically, the functions of an I/O interface are:
• Provides a storage buffer for at least one word of data
• Contains status flags that can be accessed by the processor to determine
whether the buffer is full or empty
• Contains address-decoding circuitry to determine when it is being
addressed by the processor
• Generates the appropriate timing signals required by the bus control scheme
• Performs any format conversion that may be necessary to transfer data
between the bus and the I/O device, such as parallel-serial conversion in
the case of a serial port
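The status-flag behaviour in this list can be sketched as a small model of program-controlled input. The register names (SIN, DATAIN) follow the keyboard example later in the text; the device model itself is illustrative:

```python
# Sketch of program-controlled input through a parallel port interface:
# the processor polls the status flag, then reads the data buffer, which
# clears the flag. Register names follow the text's keyboard example;
# the device behaviour is an illustrative assumption.

class ParallelInputPort:
    def __init__(self):
        self.SIN = 0        # status flag: 1 = DATAIN holds a new word
        self.DATAIN = 0     # one-word data buffer

    def device_strike(self, ascii_code):   # device side: key pressed
        self.DATAIN = ascii_code
        self.SIN = 1

    def read(self):                        # processor side
        word = self.DATAIN
        self.SIN = 0                       # reading clears the status flag
        return word

port = ParallelInputPort()
port.device_strike(ord("A"))
received = []
while port.SIN:                            # poll until the buffer is empty
    received.append(port.read())
print(received, port.SIN)   # [65] 0
```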

Parallel Port
Input Port
Example1: Keyboard to Processor
Observe the parallel input port that connects the keyboard to the processor. Whenever a key is tapped on the keyboard, an electrical connection is established that generates an electrical signal. This signal is encoded by the encoder into the ASCII code for the corresponding character (as shown in the figure below).
Fig: Parallel Input Port that connect Keyboard and Processor

The Hardware components needed are


– Status flag, SIN
– R/~W
– Master-ready
– Address decoder
When a key is pressed, the valid signal changes from 0 to 1, causing the ASCII code to be loaded into DATAIN and SIN to be set to 1. The status flag SIN is cleared to 0 when the processor reads the contents of the DATAIN register.

Example 2: Printer to processor
The hardware components needed for connecting a printer to a processor are the output interface circuit and:
– Slave-ready
– R/W
– Master-ready
– Address decoder
– Handshake control

The input and output interfaces can be combined into a single interface.
The general purpose parallel interface circuit can be configured in a variety
of ways. For increased flexibility, the circuit makes it possible for some
lines to serve as inputs and some lines to serve as outputs, under program
control.
Output Port
When the display unit is ready to display a character, it activates its ready line to 1, which sets the DOUT flag in the DISP_STATUS register to 1. This signals the processor, and the processor places the character in the DISP_DATA register. As soon as the processor loads the character into DISP_DATA, the DOUT flag is set back to 0 and the New-data line to 1; the display senses that the New-data line is activated and displays the character.

Serial Port
Unlike the parallel port, the serial port connects the processor to devices that transmit only one bit at a time. On the device side, data is transferred in a bit-serial pattern; on the processor side, in a bit-parallel pattern.
The transformation of the format from serial to parallel (i.e., from device to processor) and from parallel to serial (i.e., from processor to device) is made possible with the help of shift registers (an input shift register and an output shift register).
The input shift register accepts one bit at a time in a bit-serial fashion until it has received all 8 bits. When all 8 bits are received, the input shift register loads its contents into the DATA IN register in parallel. In a similar fashion, the contents of the DATA OUT register are transferred in parallel to the output shift register, as shown in the given figure.
The serial interface port connected to the processor via the system bus functions similarly to the parallel port. The status and control block has two
status flags SIN and SOUT. The SIN flag is set to 1 when the I/O device
inputs the data into the DATA IN register through the input shift register
and the SIN flag is cleared to 0 when the processor reads the data from the
DATA IN register.
When the value of the SOUT flag is 1, it indicates to the processor that the DATA OUT register is available to receive new data. The processor writes the data into the DATA OUT register and clears the SOUT flag to 0; when the output shift register reads the data from the DATA OUT register, it sets SOUT back to 1.
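The serial-to-parallel conversion performed by the input shift register can be sketched as follows. The LSB-first bit order is an assumption for illustration:

```python
# Sketch of serial-to-parallel conversion through an input shift register:
# bits arrive one at a time; after 8 shifts the assembled byte is loaded
# into DATAIN in parallel. LSB-first bit order is an assumption.

def input_shift_register(bits):
    """Shift in 8 serial bits (LSB first) and return the parallel byte."""
    assert len(bits) == 8
    byte = 0
    for position, bit in enumerate(bits):
        byte |= (bit & 1) << position   # one shift per incoming bit
    return byte                         # loaded into DATAIN in parallel

# 0b01000001 ('A') sent LSB first:
serial_stream = [1, 0, 0, 0, 0, 0, 1, 0]
print(input_shift_register(serial_stream))   # 65
```

The output shift register does the mirror operation, emitting the DATA OUT byte one bit per clock.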
There are two techniques to transmit data using the encoding scheme.
1. Asynchronous Serial Transmission
2. Synchronous Serial Transmission

Asynchronous Serial Transmission


 In asynchronous transmission, the clocks used by the transmitter and receiver are not synchronized. So the bits to be transmitted are grouped into groups of 6 to 8 bits with a defined start bit and stop bit. The start bit has a logic value 0 and the stop bit has a logic value 1.
 The data received at the receiver end is delimited by these start and stop bits. This approach is useful where the transmission speed is low.
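The framing can be sketched in a few lines. Using 8 data bits, LSB first, with no parity is an assumption for illustration:

```python
# Sketch of asynchronous framing: each character is wrapped with a start
# bit (0) and a stop bit (1); the receiver uses these to find character
# boundaries. 8 data bits, LSB first, no parity - all assumptions.

def frame(byte):
    data_bits = [(byte >> i) & 1 for i in range(8)]   # LSB first
    return [0] + data_bits + [1]       # start bit, data, stop bit

def deframe(bits):
    assert bits[0] == 0 and bits[-1] == 1, "bad start/stop bit"
    byte = 0
    for i, bit in enumerate(bits[1:9]):
        byte |= bit << i
    return byte

framed = frame(ord("A"))
print(framed)            # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
print(deframe(framed))   # 65
```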

Synchronous Serial Transmission


 The start and stop bits used in asynchronous transmission provide the correct timing information, but this approach is not useful where the transmission speed is high.
 So, in synchronous transmission, the receiver generates a clock that is synchronized with the clock of the transmitter. This allows transmitting large blocks of data at high speed.

Standard I/O interfaces


Consider a computer system using different interface standards. Let us look into the processor bus and the Peripheral Component Interconnect (PCI) bus.
These two buses are interconnected by a circuit called bridge. It is a bridge
between processor bus and PCI bus. An example of a computer system
using different interface standards is shown in figure 4.38. The three major
standard I/O interfaces discussed here are:
– PCI (Peripheral Component Interconnect)
– SCSI (Small Computer System Interface)
– USB (Universal Serial Bus)
PCI (Peripheral Component Interconnect)
The host, main memory and the PCI bridge are connected to the disk, printer and Ethernet interfaces through the PCI bus. At any given time, one device is the bus
master. It has the right to initiate data transfers by issuing read and write
commands. A master is called an initiator in PCI terminology. This is either
processor or DMA controller. The addressed device that responds to read
and write commands is called a target. A complete transfer operation on
the bus, involving an address and a burst of data, is called a transaction.
Device configuration is also discussed.
SCSI Bus
It is a standard bus defined by the American National Standards Institute
(ANSI). A controller connected to a SCSI bus is an initiator or a target. The
processor sends a command to the SCSI controller, which causes the
following sequence of events to take place:
• The SCSI controller contends for control of the bus (initiator).
• When the initiator wins the arbitration process, it selects the target
controller and hands over control of the bus to it.
• The target starts an output operation. The initiator sends a command
specifying the required read operation.
• The target sends a message to the initiator indicating that it will
temporarily suspend the connection between them. Then it releases the
bus. The target controller sends a command to the disk drive to move the
read head to the first sector involved in the requested read operation.
• The target transfers the contents of the data buffer to the initiator and
then suspends the connection again.
• The target controller sends a command to the disk drive to perform another seek operation.
• As the initiator controller receives the data, it stores them into the main memory using the DMA approach.
• The SCSI controller sends an interrupt to the processor to inform it that the requested operation has been completed.

USB (UNIVERSAL SERIAL BUS)


Universal Serial Bus (USB) is a plug-and-play interface that allows a computer to communicate with peripheral and other devices. USB-connected devices cover a broad range: anything from keyboards and mice to music players and flash drives. The USB has been designed to meet several key objectives:
 Provide a simple, low-cost, and easy to use interconnection system that
overcomes the difficulties due to the limited number of I/O ports
available on a computer
 Accommodate a wide range of data transfer characteristics for I/O
devices, including telephone and Internet connections
 Enhance user convenience through a “plug-and-play” mode of operation
Today, there are millions of different USB devices that can be connected to your computer. The list below contains just a few of the most common.
 Digital Camera
 External drive
 iPod or other MP3 players
 Keyboard
 Keypad
 Microphone
 Mouse
 Printer
 Joystick
 Scanner
 Smartphone
 Tablet
 Webcams
USB Architecture

When multiple I/O devices are connected to the computer through USB, they are all organized in a tree structure. Each I/O device makes a point-to-point connection and transfers data using the serial transmission format discussed in the previous section on interface circuits.
As we know, a tree structure has a root, nodes and leaves. In the tree structure connecting I/O devices to the computer using USB, the nodes are referred to as hubs. A hub is the intermediary connecting point between the I/O devices and the computer. Every tree has a root; here it is referred to as the root hub, which connects the entire tree to the host computer. The leaves of the tree are the I/O devices themselves, such as a mouse, keyboard, camera or speaker.
USB Protocols
All information transferred over the USB is organized in packets, where a packet consists of one or more bytes of information.
 The information transferred on the USB can be divided into two broad
categories: control and data
 Control packets perform such tasks as addressing a device to initiate
data transfer, acknowledging that data have been received correctly, or
indicating an error
 Data packets carry information that is delivered to a device. For example,
input and output data are transferred inside data packets
USB Device States
A USB device can have several possible states as described below:
• Attached State: This state occurs when the device is attached to the Host.
• Powered State: After the device is attached, the Host provides power to
the
device if it does not have its own power supply. The device should not draw
more than 100 mA in this state.
• Default State: This state occurs when the device is reset and has not
been assigned a unique address. In this state the device uses default
control pipe for communication and default address 0.
• Addressed State: The USB device enters this state after it gets a unique
address which is used for future communications.
• Configured: When the Host obtains required information from the device,
it loads the appropriate driver for the device. The host configures the
device by selecting a configuration. The device is now ready to do the
operations it was meant for.
• Suspended State: The USB device enters the suspended state when the bus remains idle for more than 3 ms. In this state, the device must not draw more than 500 µA of current.
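The state sequence above can be summarized as a small transition table. The event names here are illustrative labels, not USB terminology:

```python
# Sketch of the USB device state sequence as a transition table.
# States follow the list above; the event names are illustrative.

TRANSITIONS = {
    ("Attached", "power"): "Powered",
    ("Powered", "reset"): "Default",
    ("Default", "set_address"): "Addressed",
    ("Addressed", "set_configuration"): "Configured",
    ("Configured", "bus_idle_3ms"): "Suspended",
}

def run(events, state="Attached"):
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state

print(run(["power", "reset", "set_address", "set_configuration"]))  # Configured
```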
****************
