VISION INSTITUTE OF TECHNOLOGY, ALIGARH
Subject: (KCS302) Computer Organization and Architecture
UNIT-V: Input-Output Organization
Table of Contents
1) Introduction
2) Peripheral Device
3) Input-Output Interface
4) Data Transfer
5) Mode of Transfer
6) Interrupt and its types
7) Priority Interrupt
8) DMA
pg. 1 Faculty: Shanu Gupta (CSE Department)
Introduction
The I/O subsystem of a computer provides an efficient mode of communication between the
central system and the outside environment. It handles all the input-output operations of the
computer system.
Peripheral Devices
Input or output devices connected to a computer are called peripheral devices. These devices are designed to read information into or out of the memory unit upon command from the CPU and are considered part of the computer system. These devices are also called peripherals.
For example, keyboards, display units, and printers are common peripheral devices. There are three types of peripherals:
1) Input peripherals: Allow user input from the outside world to the computer. Example: keyboard, mouse.
2) Output peripherals: Allow information output from the computer to the outside world. Example: printer, monitor.
3) Input-output peripherals: Allow both input (from the outside world to the computer) and output (from the computer to the outside world). Example: touch screen.
Interfaces
An interface is a shared boundary between two separate components of the computer system, used to attach two or more components to the system for communication purposes.
Input-Output Interface
An input-output interface provides a method for transferring information between internal storage and external I/O devices.
Peripherals connected to a computer need special communication links for interfacing with the central processing unit. The purpose of a communication link is to resolve the differences that exist between the central computer and each peripheral.
The major differences (and hence the need for interfacing) are:
1. Peripherals are electromechanical and electromagnetic devices, whereas the CPU and memory are electronic devices. Therefore, a conversion of signal values may be needed.
2. The data transfer rate of peripherals is usually slower than the transfer rate of CPU and
consequently, a synchronization mechanism may be needed.
3. Data codes and formats in the peripherals differ from the word format in the CPU and memory.
4. The operating modes of peripherals are different from each other and must be controlled so
as not to disturb the operation of other peripherals connected to the CPU.
To resolve these differences, computer systems include special hardware components between the CPU and the peripherals to supervise and synchronize all input and output transfers. These components are called interface units because they interface between the processor bus and the peripheral devices.
The control lines are referred to as I/O commands. The commands are as follows:
1. Control command- A control command is issued to activate the peripheral and to
inform it what to do.
2. Status command- A status command is used to test various status conditions in the interface
and the peripheral.
3. Data Output command- A data output command causes the interface to respond by
transferring data from the bus into one of its registers.
4. Data Input command- The data input command is the opposite of the data output command.
In this case the interface receives an item of data from the peripheral and places it in its buffer register. The processor checks whether data is available using a status command and then issues the data input command. The interface places the data on the data lines, where it is accepted by the processor.
Data Transfer
1. Internal operations in a digital system are synchronized by means of clock pulses supplied by a clock pulse generator. Two units, such as the CPU and an I/O interface, are designed independently of each other. If the registers in the interface share a common clock with the CPU registers, then the transfer between the two units is said to be synchronous.
2. In most cases, the internal timing in each unit is independent of the other, and each unit uses its own private clock for its internal registers. In that case, the two units are said to be asynchronous to each other.
3. Asynchronous data transfer between two independent units requires that control signals be transmitted between the communicating units to indicate the time at which data is being transmitted.
Two techniques are used, based on the control signals exchanged before data transfer:
i. Strobe Control
ii. Handshaking
1. Strobe Control:
The strobe control method of asynchronous data transfer employs a single control line to time each transfer. The strobe may be activated by either the source or the destination unit.
i. Data Transfer Initiated by Source Unit:
In the block diagram, fig. (a), the data bus carries the binary information from the source to the destination unit. Typically, the bus has multiple lines to transfer an entire byte or word. The strobe is a single line that informs the destination unit when a valid data word is available. In the timing diagram, fig. (b), the source unit first places the data on the data bus. The information on the data bus and the strobe signal remain in the active state long enough for the destination unit to receive the data.
ii. Data Transfer Initiated by Destination Unit:
In this method, the destination unit activates the strobe pulse to inform the source to provide the data. The source responds by placing the requested binary information on the data bus. The data must remain valid on the bus long enough for the destination unit to accept it. Once the data has been accepted, the destination unit disables the strobe and the source unit removes the data from the bus.
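The strobe sequence can be sketched in code. The following is a minimal, hypothetical Python model (the `Bus` class and `destination_latch` list are illustrative, not real hardware interfaces) that walks through the source-initiated order of events:

```python
# Hypothetical model of a source-initiated strobe transfer.
class Bus:
    def __init__(self):
        self.data = None    # shared data lines (one byte/word)
        self.strobe = 0     # the single strobe control line

def source_initiated_transfer(bus, word, destination_latch):
    bus.data = word                      # 1. source places data on the bus
    bus.strobe = 1                       # 2. source activates the strobe
    destination_latch.append(bus.data)   # 3. destination latches data while strobe is active
    bus.strobe = 0                       # 4. source disables the strobe
    bus.data = None                      # 5. source removes data from the bus

bus = Bus()
received = []
source_initiated_transfer(bus, 0x5A, received)
```

The destination-initiated variant would simply let the destination set the strobe first, with the source placing data in response.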
2. Handshaking:
The handshaking method solves the problem of the strobe method by introducing a second control signal that provides a reply to the unit that initiates the transfer.
i. Source Initiated Transfer using Handshaking:
The sequence of events shows four possible states that the system can be in at any given time. The source unit initiates the transfer by placing the data on the bus and enabling its data valid signal.
The data accepted signal is activated by the destination unit after it accepts the data from the bus. The source unit then disables its data valid signal, the destination unit disables its data accepted signal, and the system returns to its initial state.
ii. Destination Initiated Transfer Using Handshaking:
The name of the signal generated by the destination unit has been changed to ready for data to reflect its new meaning. The source unit in this case does not place data on the bus until after it receives the ready for data signal from the destination unit. From there on, the handshaking procedure follows the same pattern as in the source-initiated case. The only difference between the source-initiated and the destination-initiated transfer is in their choice of initial state.
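The two-signal handshake can be simulated with two threads standing in for the independent units. This is a sketch under the assumption that `threading.Event` objects model the data valid and data accepted control lines (the names and the busy-wait are illustrative, not a hardware description):

```python
import threading

# Sketch of source-initiated handshaking: two control lines,
# data_valid (source -> destination) and data_accepted (destination -> source).
bus = {"data": None}
data_valid = threading.Event()
data_accepted = threading.Event()
log = []

def source(word):
    bus["data"] = word       # place data on the bus
    data_valid.set()         # enable data valid
    data_accepted.wait()     # wait for the destination's reply
    data_valid.clear()       # disable data valid...
    bus["data"] = None       # ...and remove data from the bus

def destination():
    data_valid.wait()        # wait until the data is valid
    log.append(bus["data"])  # accept the data from the bus
    data_accepted.set()      # activate data accepted
    while data_valid.is_set():
        pass                 # wait for data valid to drop (busy-wait for brevity)
    data_accepted.clear()    # disable data accepted: back to initial state

t = threading.Thread(target=destination)
t.start()
source(42)
t.join()
```

Because each unit waits for the other's reply, neither clock rate matters: the transfer completes correctly however fast or slow either side runs, which is the point of handshaking.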
Modes of I/O Data Transfer
Data transfer between the central unit and I/O devices can be handled in three modes, given below:
1. Programmed I/O
2. Interrupt Initiated I/O
3. Direct Memory Access
1. Programmed I/O
In programmed I/O, data transfers are the result of I/O instructions written in the computer program. Each data item transfer is initiated by an instruction in the program.
Usually the program controls the data transfer between the CPU and the peripheral. Transferring data under programmed I/O requires constant monitoring of the peripherals by the CPU.
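The constant-monitoring loop can be sketched as follows. The `Device` class and its `status_ready`/`data_in` methods are hypothetical stand-ins for the status and data input commands discussed earlier:

```python
# Sketch of programmed I/O: the CPU repeatedly tests a status flag and
# transfers one data item only when the device reports it is ready.
class Device:
    def __init__(self, items):
        self._items = list(items)

    def status_ready(self):       # plays the role of the status command
        return bool(self._items)

    def data_in(self):            # plays the role of the data input command
        return self._items.pop(0)

def programmed_io_read(device, count):
    received = []
    for _ in range(count):
        while not device.status_ready():
            pass                  # CPU busy-waits: no useful work is done here
        received.append(device.data_in())
    return received

dev = Device(["a", "b", "c"])
data = programmed_io_read(dev, 3)
```

The busy-wait loop is exactly the wasted CPU time that interrupt-initiated I/O removes.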
2. Interrupt Initiated I/O
In the programmed I/O method the CPU stays in a program loop until the I/O unit indicates that it is ready for data transfer. This is a time-consuming process because it keeps the processor busy needlessly.
This problem can be overcome by using interrupt-initiated I/O. Here, when the interface determines that the peripheral is ready for data transfer, it generates an interrupt. After receiving the interrupt signal, the CPU stops the task it is processing, services the I/O transfer, and then returns to its previous task.
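The contrast with programmed I/O can be sketched as a callback model: the CPU keeps computing, and the interface invokes an interrupt service routine (ISR) only when the peripheral becomes ready. All names here (`Interface`, `tick`, `isr`) are illustrative, not a real API:

```python
# Sketch of interrupt-initiated I/O: the CPU does other work and the
# interface raises an "interrupt" (a callback) when data is ready.
received = []
other_work = []

class Interface:
    def __init__(self, ready_at, data):
        self.ready_at = ready_at   # clock tick at which the peripheral is ready
        self.data = data
        self.clock = 0

    def tick(self, on_ready):
        self.clock += 1
        if self.clock == self.ready_at:
            on_ready(self.data)    # peripheral ready: raise the interrupt

def isr(data):
    received.append(data)          # interrupt service routine: do the transfer

def cpu_main_task(interface):
    for step in range(5):
        other_work.append(step)        # CPU keeps doing useful computation
        interface.tick(on_ready=isr)   # hardware may interrupt between steps

iface = Interface(ready_at=3, data=0x7F)
cpu_main_task(iface)
```

Note that the CPU completes all five steps of its own work; the transfer is handled only at the moment the device signals readiness.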
3. Direct Memory Access
Removing the CPU from the path and letting the peripheral device manage the memory buses
directly would improve the speed of transfer.
This technique is known as DMA. Here, the interface transfers data to and from memory through the memory bus, and a DMA controller manages the transfer between the peripherals and the memory unit. Many hardware systems use DMA, such as disk drive controllers, graphics cards, network cards, and sound cards. It is also used for intra-chip data transfer in multicore processors. In DMA, the CPU initiates the transfer, performs other operations while the transfer is in progress, and receives an interrupt from the DMA controller when the transfer has been completed.
Interrupt
A hardware- or software-generated signal can request the processor's attention and disturb the currently running program. This disturbance is called an interrupt.
An interrupt snatches the processor from the currently running program, gives control to a service program to execute its instructions, and then transfers control back to the previous program. This situation arises due to error conditions, user requirements, or software and hardware requests. The CPU handles all interrupts carefully, and every interrupt is assigned a priority that determines the order of service.
Assume a memory contains one program with many instructions (I1, I2, I3, …, In); the instructions execute one by one. Suppose that during the execution of instruction Ii an interrupt occurs: a request is generated, externally or internally, to process the interrupt-generated program first.
Normally the processor's control would pass to instruction i+1, but due to the interrupt it moves to the interrupt service routine; after the routine's execution, control returns from the interrupt to the next instruction (i+1). Control thus returns to the original program after the service program is executed.
Before the interrupt occurred, the CPU was processing an instruction. So, before moving to interrupt execution, the CPU completes the current instruction (Ii) and saves the processor state, because it has to return to this point after servicing the interrupt. The CPU saves the following information before giving control to the interrupt routine:
(i) Value of the program counter (PC)
PC => i+1
(ii) Values of all the CPU registers
Before moving to instruction i+1, all register values belonging to instruction Ii are saved.
(iii) Contents of the status bit conditions
All the status bits are stored in the program status word (PSW). Four commonly used flags are the Sign bit, Carry bit, Overflow bit, and Zero bit, which are saved in the stack part of memory.
Note: All these values and conditions are saved in the stack part of memory. CPU does not
respond to an interrupt until the execution of currently running instruction ends.
How does the CPU execute an interrupt instruction?
Before going to the next fetch cycle, and after completion of the current instruction cycle (Ii), the CPU checks for an interrupt signal. If there is an interrupt, it performs the following steps:
• Save the previously executing program's state, called the CPU state, which includes the value of the program counter, the CPU registers, and the program status word, in the memory stack.
• When the CPU starts execution of the interrupt routine, it sets the following state:
Program Counter (PC) ← interrupt branch address
Program Status Word (PSW) ← status bits of the interrupt service program
• The last instruction of the service program returns control from the interrupt.
• All the old values stored on the stack are restored before fetching the next instruction: PSW ← old PSW, PC ← old PC, CPU registers ← old CPU register values.
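The save/restore sequence around an interrupt can be sketched with a Python list playing the role of the memory stack. The PC, register, and PSW values below are illustrative:

```python
# Sketch of interrupt state save and restore using a list as the stack.
stack = []

def take_interrupt(cpu, branch_addr):
    # save return address, register values, and status word on the stack
    stack.append((cpu["PC"], dict(cpu["REGS"]), cpu["PSW"]))
    cpu["PC"] = branch_addr            # PC <- interrupt branch address

def return_from_interrupt(cpu):
    pc, regs, psw = stack.pop()        # pop the old values back off the stack
    cpu["PC"], cpu["REGS"], cpu["PSW"] = pc, regs, psw

cpu = {"PC": 101, "REGS": {"R0": 5}, "PSW": 0b0010}
take_interrupt(cpu, branch_addr=900)
# ... the service program would run here, freely changing PC/REGS/PSW ...
return_from_interrupt(cpu)
```

Because the restore pops exactly what the save pushed, nested interrupts also work: each level's state sits above the one it interrupted.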
The different types of program interrupts are hardware interrupts and software interrupts.
1. Hardware interrupts
Interrupts generated by the hardware of the computer system are called hardware interrupts.
For example, while a program is running, pressing a key on the keyboard generates a signal to the processor; this signal is an interrupt and is serviced first by the system.
Hardware interrupts are classified into two categories: external and internal interrupts.
External interrupts - These interrupts are generated by hardware external to the CPU. They are asynchronous (they do not depend on the program's timing) and unpredictable. Examples of external interrupts are:
• Timing signals from input/output devices when they request a data transfer.
• A power failure detected by the circuit monitoring the power supply, or any other external source.
• A program that goes into an endless loop, which generates a timeout interrupt.
• An input/output device finishing a transfer of data.
Internal interrupts - These interrupts are generated when an error occurs in an instruction. They result from illegal or erroneous use of an instruction and are widely known as traps. Internal interrupts are synchronous with the program: if the program is rerun with the same data, the same interrupts occur at the same points, which makes them predictable.
Reasons for internal interrupts are:
• A register overflow error occurs.
• A divide-by-zero error occurs.
• An invalid operation code (opcode) is used.
2. Software interrupts
A software interrupt is a special type of instruction that behaves like an interrupt created by the user, rather than a subroutine call. It is initiated by executing an instruction or an interrupt procedure at any desired point of the program.
For example, some error-checking instructions are generated to test the proper functioning of the program. A supervisor call instruction generates a software interrupt to switch from user mode to supervisor mode.
There are two cases of occurrence of software interrupts.
a. Normal interrupt - An interrupt created intentionally by an instruction, or by an error program the user wrote deliberately, is a normal software interrupt.
b. Exceptional interrupt - An unplanned interrupt generated during the execution of an instruction; for example, dividing a number by zero yields an undefined value and creates an interrupt.
Priority Interrupts
It is the system responsible for selecting the order in which devices generating simultaneous interrupt signals are serviced by the CPU. High-speed transfer devices are generally given high priority, and slow devices low priority. When multiple devices send interrupt signals at the same time, the device with the highest priority is serviced first.
Methods for establishing priority of simultaneous Interrupts
Daisy Chaining Priority
This method uses hardware to establish the priority of simultaneous interrupts. All the devices that can generate an interrupt signal are connected serially: the device with the highest priority is placed first, followed by lower-priority devices, and the device with the lowest priority comes last in the chain. In this daisy-chain arrangement, the interrupt request line is common to all devices.
If any device holds its interrupt signal in the low-level state, the interrupt line goes to the low-level state and enables the interrupt input in the CPU. While there is no interrupt, the interrupt line remains in the high-level state. The CPU responds to the interrupt by enabling the interrupt acknowledge line. This signal is received by device '1' at its PI input. If device '1' is not requesting an interrupt, the acknowledge signal passes to the next device through the PO output.
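The PI/PO propagation can be sketched as a short function. This is a behavioral model only; `requests` is a hypothetical list of request flags, index 0 being the device closest to the CPU (highest priority):

```python
# Sketch of daisy-chain priority resolution: the acknowledge propagates
# from PI to PO down the chain until a requesting device blocks it.
def daisy_chain_acknowledge(requests):
    """requests: list of booleans, index 0 = highest priority.
    Returns the index of the device that captures the acknowledge,
    or None if no device is requesting."""
    pi = True                        # CPU drives acknowledge into device 1's PI
    for i, requesting in enumerate(requests):
        if pi and requesting:
            return i                 # this device blocks PO and gets serviced
        pi = pi and not requesting   # PO = PI for a non-requesting device
    return None
```

A device's position in the chain is its priority: even if several devices request at once, only the first requesting device sees an active PI.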
Polling method
Polling is a software method used to establish priority among interrupts occurring simultaneously. When the processor detects an interrupt in the polling method, it branches to an interrupt service routine whose job is to poll each input/output module to determine which module caused the interrupt.
The poll can take the form of a separate command line (for example, Test I/O). Here, the processor raises Test I/O and places the address of a specific I/O module on the address lines; if that module raised the interrupt, it responds positively.
The order in which the modules are tested (that is, the order in which they appear on the address lines or in the service routine) determines the priority of each interrupt: the device with the highest priority is tested first, followed by devices with lower priority. This is the easiest method of establishing priority among simultaneous interrupts, but its downside is that it takes time.
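The polling routine amounts to a fixed-order scan. In this sketch, each module is paired with a hypothetical `test_io` callable standing in for the Test I/O command; the list order encodes the priority:

```python
# Sketch of software polling: the ISR tests each I/O module in a fixed
# order, and the test order itself defines the priority.
def poll_for_source(modules):
    """modules: list of (name, test_io) pairs, highest priority first.
    Returns the name of the first module whose test responds positively."""
    for name, test_io in modules:
        if test_io():          # this module raised the interrupt
            return name
    return None                # spurious interrupt: no module responded

modules = [("disk",     lambda: False),
           ("printer",  lambda: True),   # printer and keyboard both pending...
           ("keyboard", lambda: True)]
serviced = poll_for_source(modules)      # ...but printer is tested first
```

The time cost mentioned above is visible here: in the worst case every module must be tested before the source is found, whereas the daisy chain resolves priority in hardware.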
Direct Memory Access (DMA)
In Direct Memory Access (DMA), the interface transfers data into and out of the memory unit through the memory bus. The transfer of data between a fast storage device such as a magnetic disk and memory is often limited by the speed of the CPU. Removing the CPU from the path and letting the peripheral device manage the memory buses directly improves the speed of transfer. This transfer technique is called Direct Memory Access (DMA).
During the DMA transfer, the CPU is idle and has no control of the memory buses. A DMA
Controller takes over the buses to manage the transfer directly between the I/O device and
memory. The CPU may be placed in an idle state in a variety of ways.
One common method, extensively used in microprocessors, is to disable the buses through special control signals such as:
◼ Bus Request (BR)
◼ Bus Grant (BG)
These are two control signals in the CPU that facilitate the DMA transfer. The Bus Request (BR) input is used by the DMA controller to request that the CPU relinquish the buses. When this input is active, the CPU terminates the execution of the current instruction and places the address bus, data bus, and read/write lines into a high-impedance state, meaning the outputs are disconnected.
The CPU activates the Bus Grant (BG) output to inform the external DMA controller that it can now take control of the buses to conduct memory transfers without processor intervention.
When the DMA terminates the transfer, it disables the Bus Request (BR) line. The CPU then disables Bus Grant (BG), takes control of the buses, and returns to its normal operation.
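The BR/BG handshake around a block transfer can be sketched as follows. The `memory` and `device_buffer` lists and the `br`/`bg` flags are an illustrative model, not a description of any specific controller:

```python
# Sketch of a DMA block transfer framed by the BR/BG handshake.
memory = [0] * 8
device_buffer = [10, 20, 30, 40]

def dma_block_transfer(start_addr, count):
    br = True        # 1. DMA controller asserts Bus Request
    bg = br          # 2. CPU finishes its current instruction, floats its
                     #    buses (high impedance), and asserts Bus Grant
    if bg:
        for i in range(count):
            # 3. word-by-word transfer from device to memory, no CPU involved
            memory[start_addr + i] = device_buffer[i]
    br = False       # 4. transfer done: DMA drops Bus Request
    bg = False       # 5. CPU drops Bus Grant and retakes the buses

dma_block_transfer(start_addr=2, count=4)
```

In real hardware the CPU would typically be notified of completion by an interrupt from the DMA controller, as described in the modes-of-transfer section.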
DMA Controller:
The DMA controller needs the usual circuits of an interface to communicate with the CPU and the I/O device. The hardware device used for direct memory access is called the DMA controller. It is a control unit, part of the I/O device's interface circuit, which can transfer blocks of data between I/O devices and main memory with minimal intervention from the processor.