
CSA Presentation

The document discusses parallel processing and parallel processor architectures. It defines parallel processing as executing instructions simultaneously to improve performance. It also explains different classifications of parallel architectures including SISD, SIMD, MIMD and topologies like shared bus, ring and mesh.

Uploaded by

Saranya Murali

PARALLEL PROCESSOR & PARALLEL PROCESSING

AN OVERVIEW OF PARALLEL PROCESSING

 What is parallel processing?
 Parallel processing is a method to improve computer system performance by executing two or more instructions simultaneously.
 The goals of parallel processing:
 One goal is to reduce the “wall-clock” time – the amount of real time you must wait for a problem to be solved.
 Another goal is to solve bigger problems that might not fit in the limited memory of a single CPU.
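The wall-clock goal can be put in numbers: the speedup of a parallel run is the serial time divided by the parallel time. A minimal sketch with hypothetical timings (the 60 s and 20 s figures are invented for illustration):

```python
def speedup(t_serial: float, t_parallel: float) -> float:
    """Wall-clock speedup: how many times faster the parallel run is."""
    return t_serial / t_parallel

# Hypothetical timings: 60 s on one CPU, 20 s on four CPUs.
s = speedup(60.0, 20.0)
print(s)      # 3.0 (a 3x speedup)
print(s / 4)  # 0.75 (75% of the ideal 4x speedup)
```

The second number shows that real speedups usually fall short of the ideal, since part of every job is serial or spent on coordination.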

AN ANALOGY OF PARALLELISM

 In bank there are number of counters for


different purposes. Imagine there is only one
counter for all purposes , obviously we have to
spend time in queue to fulfill our need in the
bank. So with number of counters our needs can
be completed in the respective counter very soon
than with single counter.
 Here the counters are processors and the queues
are tasks to be completed. This is exactly what is
happening in parallel processing.
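The analogy can be made quantitative. A toy model (the fixed one-time-unit service time per task is an assumption made for simplicity):

```python
def finish_time(num_tasks: int, num_counters: int, task_time: float = 1.0) -> float:
    """Time until the last task is done when tasks are spread
    evenly over the counters (processors)."""
    tasks_per_counter = -(-num_tasks // num_counters)  # ceiling division
    return tasks_per_counter * task_time

print(finish_time(12, 1))  # 12.0 -- a single counter serves everyone in turn
print(finish_time(12, 4))  # 3.0  -- four counters finish four times sooner
```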
ANOTHER ANALOGY OF PARALLELISM
Another analogy is having several students grade
quizzes simultaneously. Quizzes are distributed to a few
students and different problems are graded by each
student at the same time. After they are completed, the
graded quizzes are then gathered and the scores are
recorded.

PARALLELISM IN UNIPROCESSOR SYSTEMS
 It is possible to achieve parallelism in a uniprocessor system.
 Some examples are the instruction pipeline, the arithmetic pipeline, and the I/O processor.
 Note that a system that performs different operations on the same instruction is not considered parallel.
 Only if the system processes two different instructions simultaneously can it be considered parallel.
ORGANIZATION OF MULTIPROCESSOR SYSTEMS
 Flynn’s Classification
 Proposed by researcher Michael J. Flynn in 1966.
 It is the most commonly accepted taxonomy of computer organization.
 In this classification, computers are classified by whether they process a single instruction at a time or multiple instructions simultaneously, and whether they operate on one or multiple data sets.

TAXONOMY OF COMPUTER ARCHITECTURES
TAXONOMY OF PARALLEL PROCESSOR ARCHITECTURES
SINGLE INSTRUCTION, SINGLE DATA (SISD)
 SISD machines execute a single instruction on individual data values using a single processor.
 Based on the traditional von Neumann uniprocessor architecture: instructions are executed sequentially, or serially, one step after the next.
 Until recently, most computers were of the SISD type.
PARALLEL ORGANIZATIONS - SISD
SINGLE INSTRUCTION, MULTIPLE DATA (SIMD)
 An SIMD machine executes a single instruction on multiple data values simultaneously using many processors.
 There is only one instruction stream; a single control unit does the fetch and decode for all processors.
 SIMD architectures include array processors and vector processors.
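A software sketch of the SIMD idea, with plain Python standing in for lockstep hardware:

```python
# One instruction ("add 10"), many data values: conceptually each
# "processor" holds one element and all apply the same operation in lockstep.
data = [1, 2, 3, 4]
result = [x + 10 for x in data]
print(result)  # [11, 12, 13, 14]
```

Real array and vector processors do this in hardware; array libraries such as NumPy expose the same one-operation-over-many-elements style in software.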

SIMD

MULTIPLE INSTRUCTION, MULTIPLE DATA (MIMD)
 MIMD machines are usually referred to as multiprocessors or multicomputers.
 They may execute multiple instructions concurrently.
 Each processor includes its own control unit; the processors can be assigned parts of a single task or entirely separate tasks.
 MIMD has two subclasses: shared memory and distributed memory.
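A sketch of the MIMD idea using threads, where each thread stands in for a processor with its own control unit running a different instruction stream on different data:

```python
import threading

results = {}

def add_numbers(nums):   # one instruction stream
    results["sum"] = sum(nums)

def join_words(words):   # a completely different instruction stream
    results["text"] = " ".join(words)

t1 = threading.Thread(target=add_numbers, args=([1, 2, 3],))
t2 = threading.Thread(target=join_words, args=(["separate", "task"],))
t1.start(); t2.start()
t1.join(); t2.join()
print(results["sum"], results["text"])  # 6 separate task
```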

PARALLEL ORGANIZATIONS - MIMD SHARED MEMORY
PARALLEL ORGANIZATIONS - MIMD DISTRIBUTED MEMORY
MULTIPLE INSTRUCTION, SINGLE DATA (MISD)
 A sequence of data is transmitted to a set of processors.
 Each processor executes a different instruction sequence on that data.
 It is not clear whether an MISD machine has ever been implemented.
ANALOGY OF FLYNN’S CLASSIFICATIONS
 An analogy for Flynn’s classification is the check-in desk at an airport:
 SISD: a single desk.
 SIMD: many desks and a supervisor with a megaphone giving instructions that every desk obeys.
 MIMD: many desks working at their own pace, synchronized through a central database.
SYSTEM TOPOLOGIES
 A system may also be classified by its topology.
 A topology is the pattern of connections between processors.
 The cost-performance tradeoff determines which topology to use for a multiprocessor system.
TOPOLOGY CLASSIFICATION
A topology is characterized by its diameter, total bandwidth, and bisection bandwidth:
 Diameter – the maximum distance between two processors in the computer system.
 Total bandwidth – the capacity of a communications link multiplied by the number of such links in the system.
 Bisection bandwidth – the maximum data transfer that could occur at the bottleneck in the topology.
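These metrics can be computed for a concrete topology. A sketch for a ring of n processors with per-link bandwidth b (the formulas below assume a bidirectional ring):

```python
def ring_metrics(n: int, b: float):
    """Diameter, total bandwidth, and bisection bandwidth of an n-processor ring."""
    diameter = n // 2        # farthest pair is halfway around the ring
    total_bw = n * b         # a ring of n processors has n links
    bisection_bw = 2 * b     # cutting the ring in half severs exactly 2 links
    return diameter, total_bw, bisection_bw

print(ring_metrics(8, 1.0))  # (4, 8.0, 2.0)
```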
SYSTEM TOPOLOGIES
Shared Bus Topology
 Processors communicate with each other via a single bus that can handle only one data transmission at a time.
 In most shared buses, processors communicate directly with their own local memory.
[Figure: processors P on a shared bus, with global memory M]
SYSTEM TOPOLOGIES
Ring Topology
 Uses direct connections between processors instead of a shared bus.
 Allows multiple communication links to be active simultaneously, but data may have to travel through several processors to reach its destination.
[Figure: six processors P connected in a ring]
SYSTEM TOPOLOGIES
Tree Topology
 Uses direct connections between processors, each having up to three connections.
 There is only one unique path between any pair of processors.
[Figure: seven processors P arranged in a binary tree]
SYSTEM TOPOLOGIES
Mesh Topology
 In the mesh topology, every processor connects to the processors above and below it, and to its left and right.
[Figure: 3×3 grid of processors P]
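In a mesh without wraparound links, the distance between two processors is the Manhattan distance between their grid coordinates. An illustrative sketch:

```python
def mesh_distance(a, b):
    """Hops between processors at grid coordinates a and b in a mesh."""
    (r1, c1), (r2, c2) = a, b
    return abs(r1 - r2) + abs(c1 - c2)

print(mesh_distance((0, 0), (2, 2)))  # 4 hops between opposite corners of a 3x3 mesh
```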
SYSTEM TOPOLOGIES
Hypercube Topology
 A multidimensional mesh topology.
 Each processor connects to every other processor whose binary address differs from its own by exactly one bit. For example, processor 0 (0000) connects to 1 (0001), 2 (0010), 4 (0100), and 8 (1000).
[Figure: sixteen processors P arranged as a 4-dimensional hypercube]
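The one-bit-difference rule makes neighbor lookup a matter of flipping address bits, as this sketch shows:

```python
def hypercube_neighbors(proc: int, dims: int):
    """Addresses reachable in one hop: flip each of the dims address bits."""
    return sorted(proc ^ (1 << bit) for bit in range(dims))

print(hypercube_neighbors(0, 4))  # [1, 2, 4, 8]
print(hypercube_neighbors(5, 4))  # [1, 4, 7, 13]
```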
SYSTEM TOPOLOGIES
Completely Connected Topology
 Every processor has n−1 connections, one to each of the other processors.
 Complexity increases as the system grows, but this topology offers maximum communication capability.
[Figure: eight processors P, each connected to every other]
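The growth in complexity is easy to quantify: with every pair of processors directly connected, the number of links grows quadratically.

```python
def num_links(n: int) -> int:
    """Links needed when every pair of processors is directly connected."""
    return n * (n - 1) // 2

print(num_links(8))   # 28
print(num_links(64))  # 2016 -- the quadratic growth that limits scaling
```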
MIMD SYSTEM ARCHITECTURES
 Finally, the architecture of an MIMD system, in contrast to its topology, refers to how the processors connect to system memory.
 Systems may also be classified by their architecture. Two such architectures are:
 Uniform memory access (UMA)
 Nonuniform memory access (NUMA)
UNIFORM MEMORY ACCESS (UMA)
 UMA is a type of symmetric multiprocessor (SMP) that has two or more processors performing symmetric functions.
 UMA gives all CPUs equal (uniform) access to all locations in shared memory.
 The processors interact with shared memory through a communications mechanism such as a simple bus or a complex multistage interconnection network.
UNIFORM MEMORY ACCESS (UMA) ARCHITECTURE
[Figure: processors 1 through n connected through a communications mechanism to shared memory]
NONUNIFORM MEMORY ACCESS (NUMA)
 NUMA architectures, unlike UMA architectures, do not provide uniform access to all shared memory locations.
 All processors can still access every shared memory location, but access is nonuniform: each processor reaches its local shared memory more quickly than remote memory modules.
NONUNIFORM MEMORY ACCESS (NUMA) ARCHITECTURE
[Figure: processors 1 through n, each with its own local memory, joined by a communications mechanism]
APPLICATIONS OF PARALLEL PROCESSING
