
Parallel Processing

Discussion - 02

Parallelism in a Uniprocessor System


1. Multiprogramming & Timesharing
a. In multiprogramming, several processes reside in main memory and the CPU switches from one process (say
P1) to another (say P2) when the currently running process (P1) blocks for an I/O operation. The I/O
operation for P1 is handled by a DMA unit while the CPU runs P2.
b. In timesharing, processes are assigned slices of the CPU's time. The CPU executes the processes in
round-robin fashion, as shown below (a scheduling sketch in code follows this item).
[Round-robin diagram: processes wait in a queue; the CPU runs the process at the head until either its time quantum expires and it rejoins the queue, or the job finishes and leaves.]
• It appears that every user (process) has its own processor (multiple virtual processors).
• Averts the monopoly of a single (computation-intensive) process, as can happen in pure multiprogramming.
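For illustration only (not part of the handout; process names, burst times and the quantum are made up), a minimal Python sketch of round-robin time slicing:

from collections import deque

def round_robin(burst_times, quantum):
    # burst_times: mapping of process name -> CPU time still needed
    queue = deque(burst_times.items())
    clock, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)        # run for at most one quantum
        clock += run
        remaining -= run
        if remaining == 0:
            finish[name] = clock             # job finishes and leaves
        else:
            queue.append((name, remaining))  # quantum expires, rejoin queue
    return finish

print(round_robin({"P1": 5, "P2": 3, "P3": 8}, quantum=2))
# -> {'P2': 9, 'P1': 12, 'P3': 16}; each process gets the CPU in turn,
#    so every user appears to own a (slower) virtual processor.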
2. Multiplicity of Functional Units
The use of multiple functional units, such as multiple adders, multipliers, or even multiple ALUs, to provide
concurrency is not a new idea in a uniprocessor environment; it has been around for decades.
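As a rough illustration only (the operand lists are invented), the sketch below issues one addition and one multiplication per cycle, modelling an adder and a multiplier that work concurrently:

adds = [(1, 2), (3, 4), (5, 6)]   # work queued for the adder
muls = [(7, 8), (9, 10)]          # work queued for the multiplier

cycle = 0
while adds or muls:
    cycle += 1
    issued = []
    if adds:
        a, b = adds.pop(0)
        issued.append(f"ADD {a}+{b}={a + b}")
    if muls:
        a, b = muls.pop(0)
        issued.append(f"MUL {a}*{b}={a * b}")
    print(f"cycle {cycle}: " + ", ".join(issued))
# Finishes in 3 cycles; a single shared ALU handling all 5 operations
# one per cycle would need 5.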
3. Harvard Architecture
a. This provides separate memory units for instructions and data, which effectively doubles the memory
bandwidth and saves CPU time. E.g., with a split cache, instructions are kept in the I-cache and data in the D-cache.
b. In contrast, when instructions and data are kept in the same memory, the architecture is called the Princeton
architecture. E.g., a unified cache, main memory, etc.
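A back-of-the-envelope comparison (the function names, instruction and data-access counts below are assumptions for illustration; one access per memory port per cycle):

def princeton_cycles(fetches, data_accesses):
    # Unified memory: instruction fetches and data accesses share one port.
    return fetches + data_accesses

def harvard_cycles(fetches, data_accesses):
    # Split memory: an I-fetch and a D-access can proceed in the same
    # cycle, so the longer of the two streams sets the total.
    return max(fetches, data_accesses)

print(princeton_cycles(100, 80))  # 180 memory cycles
print(harvard_cycles(100, 80))    # 100 memory cycles (close to 2x better
                                  # when the two streams are balanced)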
4. Memory Hierarchy
A parallel processing mechanism supported by a memory hierarchy is the simultaneous transfer of
instructions/data between (CPU, cache) and (main memory, secondary memory).
5. Pipelining
The basic idea behind any kind of pipeline is to increase throughput by overlapping the phases of a (generally
complex) task. There are two types of pipeline in computers:
• Arithmetic Pipeline
• Instruction Pipeline


a. Arithmetic Pipelining
Any arithmetic operation that is decomposable into distinct stages can be pipelined, e.g., a floating-point
adder as shown below:

X, Y → Exponent Comparison → Mantissa Alignment → Significand Addition → Normalize Result

4-stage FP adder
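The sketch below mirrors the four stages as one Python function each; it is a deliberately simplified model (numbers are held as (mantissa, exponent) pairs meaning mantissa × 2^exponent, operand values chosen arbitrarily, and IEEE-754 details such as signs, rounding and special values are ignored):

def stage1_compare(x, y):
    # Exponent comparison: order the operands so x has the larger exponent.
    return (x, y) if x[1] >= y[1] else (y, x)

def stage2_align(big, small):
    # Mantissa alignment: rescale the smaller operand to the bigger exponent.
    m, e = small
    return big, (m / 2 ** (big[1] - e), big[1])

def stage3_add(big, small):
    # Significand addition: exponents now match, so mantissas simply add.
    return (big[0] + small[0], big[1])

def stage4_normalize(z):
    # Normalization: bring the mantissa back into [1, 2).
    m, e = z
    while abs(m) >= 2:
        m, e = m / 2, e + 1
    while 0 < abs(m) < 1:
        m, e = m * 2, e - 1
    return m, e

x, y = (1.5, 3), (1.25, 1)   # 12.0 and 2.5
print(stage4_normalize(stage3_add(*stage2_align(*stage1_compare(x, y)))))
# -> (1.8125, 3), i.e. 14.5. In hardware each stage works on a different
#    operand pair every clock, so one result emerges per cycle once the
#    pipeline is full.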
b. Instruction Pipelining
Just like an arithmetic operation, an instruction passes through different phases during its execution, from
fetch to completion. E.g., a typical MIPS pipeline consists of five stages, as shown below:

IF → ID → EX → M → WB

5-stage MIPS Instruction Pipeline

Buffers (pipeline registers) are provided between stages.


Pipeline Clock Period = Processing delay of slowest stage + Pipeline register delay
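For example (stage and register delays assumed here, not given in the handout): if the five stage delays are 200, 150, 250, 200 and 150 ps and each pipeline register adds 20 ps, the clock period is 250 + 20 = 270 ps (roughly a 3.7 GHz clock), whereas an unpipelined datapath would need 200 + 150 + 250 + 200 + 150 = 950 ps per instruction.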
The EX (execute) stage is marked by ALU usage, be it for adding two register operands, calculating an operand
address, testing the condition for a branch instruction, or anything else. In the M (memory) stage,
an instruction reads or writes a data element from/to memory. The result produced by an instruction
is written back to a register in the WB (write-back) stage.
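To visualise the overlap, the following sketch prints a space-time diagram for an ideal pipeline (no hazards or stalls assumed); stage names follow the figure above:

STAGES = ["IF", "ID", "EX", "M", "WB"]

def diagram(n_instructions):
    total_cycles = n_instructions + len(STAGES) - 1
    for i in range(n_instructions):
        row = ["    "] * total_cycles
        for s, name in enumerate(STAGES):
            row[i + s] = f"{name:<4}"   # instruction i occupies stage s in cycle i+s
        print(f"I{i + 1}: " + "".join(row))

diagram(4)
# Four instructions complete in 4 + 5 - 1 = 8 cycles instead of 4 x 5 = 20,
# because up to five instructions occupy different stages at the same time.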

******
