Cloud Computing
Lecture 3
13-8-2024
Mr. Ajay B. Kapase
Principles of Parallel and Distributed Computing
• The terms parallel computing and distributed computing are often used interchangeably.
• There is, however, a slight difference between the two.
• Parallel Computing: a tightly coupled system involving a single computer
  with multiple processors/cores.
• Distributed Computing: a loosely coupled system involving multiple
  computers (nodes) connected through a network.
Parallel Computing
• A single computation is divided into several units.
• Each unit is handled by a separate processor or core.
• There is a single physical shared memory.
• The different processors can communicate with each other by means of the
  shared memory.
• Components belong to a single computer, hence homogeneous (each processor is
  of the same type and the same capacity); see the sketch below.
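A minimal sketch of this shared-memory model in Python (the worker count, the
array contents, and the squaring job are illustrative assumptions;
multiprocessing.Array is the standard-library facility used here for memory
shared between workers):

    from multiprocessing import Process, Array

    def square_slice(shared, start, end):
        # Each worker updates its own slice of the one shared memory in place.
        for i in range(start, end):
            shared[i] = shared[i] * shared[i]

    if __name__ == "__main__":
        data = Array("d", range(8))       # memory visible to every worker
        n_workers = 4                     # four homogeneous "processors"
        chunk = len(data) // n_workers
        workers = [Process(target=square_slice,
                           args=(data, w * chunk, (w + 1) * chunk))
                   for w in range(n_workers)]
        for p in workers:
            p.start()
        for p in workers:
            p.join()
        print(list(data))                 # [0.0, 1.0, 4.0, ..., 49.0]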
Parallel Computing
[Figure: processors P1-P4 sharing a single memory and I/O subsystem within one computer]
Distributed Computing
• A single computing task is divided into several units.
• Every unit is executed concurrently on different computing elements.
• The computing elements are mostly processors on different nodes.
• Each node has its own processor and its own memory.
• These nodes communicate with each other over a network; see the sketch below.
• Promotes heterogeneity: nodes may differ in type and capacity.
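A minimal sketch of two such nodes communicating purely by passing messages,
using Python's standard socket module (the address, port, and one-message
protocol are illustrative assumptions; both "nodes" run in one program here
only for demonstration):

    import socket
    import threading

    HOST, PORT = "127.0.0.1", 9000             # illustrative server address
    srv = socket.create_server((HOST, PORT))   # node A listens for messages

    def server_node():
        # Node A: accept one connection, read a message, send a reply.
        conn, _ = srv.accept()
        with conn:
            task = conn.recv(1024).decode()
            conn.sendall(f"done: {task}".encode())

    threading.Thread(target=server_node, daemon=True).start()

    # Node B: send a task description over the network and print the reply.
    with socket.create_connection((HOST, PORT)) as s:
        s.sendall(b"process chunk 3")
        print(s.recv(1024).decode())           # done: process chunk 3
    srv.close()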
Distributed vs. Parallel Computing
[Figure: comparison of the distributed and parallel computing architectures]
Elements of parallel computing
• Parallel Processing
   • Processing of multiple tasks simultaneously on multiple processors.
   • A given task is divided into multiple subtasks using a divide-and-conquer technique.
   • Each subtask is processed on a different central processing unit (CPU).
   • Programming on a multiprocessor system using the divide-and-conquer technique is
     called parallel programming; a sketch follows below.
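A minimal divide-and-conquer sketch using Python's multiprocessing.Pool (the
data, the chunking scheme, and the sum-of-squares subtask are illustrative
assumptions):

    from multiprocessing import Pool

    def subtask(chunk):
        # Each subtask is processed on a different CPU core.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(100_000))
        chunks = [data[i::4] for i in range(4)]   # divide into 4 subtasks
        with Pool(processes=4) as pool:
            partials = pool.map(subtask, chunks)  # conquer in parallel
        print(sum(partials))                      # combine partial results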
Elements of parallel computing
• Need of Parallel Processing
   • Computational requirements are ever increasing in the areas of both scientific and
     business computing.
   • Technical computing problems that require high-speed computational power arise in
     areas such as life sciences, aerospace, and mechanical design and analysis.
   • Sequential architectures are reaching physical limitations, constrained by
     the speed of light and the laws of thermodynamics.
   • The technology of parallel processing is mature; significant R&D work has already
     been done on development tools and environments.
Elements of parallel computing
• Hardware architectures for parallel processing
   • Based on the number of instruction and data streams that can be processed
     simultaneously, computing systems are classified as follows (Flynn's taxonomy):
   1.   Single-instruction, single-data (SISD) systems
   2.   Single-instruction, multiple-data (SIMD) systems
   3.   Multiple-instruction, single-data (MISD) systems
   4.   Multiple-instruction, multiple-data (MIMD) systems
Single-instruction, single-data (SISD) systems
• This is the most basic form of computer architecture, typically found in traditional
  sequential computers.
• A single instruction operates on a single data stream at a time.
• The CPU fetches an instruction from memory, executes it on a single piece of data, and
  then moves to the next instruction.
• Example: Traditional single-core processors like those in early computers or basic
  microcontrollers.
• Use Case: SISD is used in simple applications where parallelism is not required; a
  short sketch follows below.
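As an illustration, any plain sequential loop is SISD execution (the data
values here are arbitrary):

    # SISD: one instruction stream operating on one data stream.
    data = [3, 1, 4, 1, 5]
    total = 0
    for x in data:    # instructions execute one after another,
        total += x    # each acting on a single piece of data
    print(total)      # 14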
Single-instruction, single-data (SISD) systems
[Figure: SISD architecture: a single instruction stream driving one processing unit over a single data stream]
Single-instruction, multiple-data (SIMD) systems
  • A multiprocessor machine capable of executing the same instruction on all
    its CPUs while operating on different data streams.
  • A single instruction is applied to multiple data streams at the same time.
  • Example: Graphics Processing Units (GPUs) are SIMD-style systems, where the same
    operation is applied to multiple pixels simultaneously.
  • Use Case: SIMD is commonly used in multimedia, scientific computing, and
    data-parallel applications like image processing and matrix operations; a sketch
    follows below.
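As an illustration of the SIMD, data-parallel style, NumPy applies one
operation to every element of an array at once (NumPy is a real library; the
pixel values are illustrative, and whether hardware SIMD instructions are used
underneath depends on the build):

    import numpy as np

    # One "instruction" (add 50) applied to all pixel values simultaneously.
    pixels = np.array([10, 20, 30, 40], dtype=np.uint8)
    print(pixels + 50)            # [60 70 80 90]

    # The same style covers matrix operations: one multiply, many elements.
    a = np.arange(6).reshape(2, 3)
    b = np.ones((3, 2))
    print(a @ b)                  # [[ 3.  3.] [12. 12.]]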
Single-instruction, multiple-data (SIMD) systems
[Figure: SIMD architecture: one instruction stream broadcast to multiple processing units, each with its own data stream]
Multiple-instruction, single-data (MISD) systems
• Involves multiple instructions operating on a single data stream.
• Different processing units perform different operations on the same
  data.
• This architecture is rare and not commonly used in practice.
• Example: checking or validating the same data in several different ways; see the
  sketch below.
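A minimal sketch of the idea on this slide: several different instruction
streams (checks) applied concurrently to one and the same data stream (the
checks and the packet contents are illustrative assumptions):

    from concurrent.futures import ThreadPoolExecutor

    def checksum_ok(data):            # instruction stream 1
        return sum(data) % 256 == 0

    def in_byte_range(data):          # instruction stream 2
        return all(0 <= x < 256 for x in data)

    def non_empty(data):              # instruction stream 3
        return len(data) > 0

    packet = [10, 20, 226]            # the single data stream
    checks = [checksum_ok, in_byte_range, non_empty]
    with ThreadPoolExecutor() as pool:  # each check on its own unit
        results = list(pool.map(lambda check: check(packet), checks))
    print(all(results))               # True: the data passed every check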
Multiple-instruction, single-data (MISD) systems
[Figure: MISD architecture: multiple instruction streams applied to a single data stream]
Multiple-instruction, multiple-data (MIMD) systems
  • Most flexible and powerful architecture where multiple instructions operate on
    multiple data streams.
  • Different processors execute different instructions on different data.
  • This architecture allows for true parallel processing.
  • Example: Supercomputers often use the MIMD architecture; each core or processor works
    on a different part of a problem simultaneously.
  • Use Case: MIMD is used in complex, large-scale applications such as scientific and
    engineering workloads; a sketch follows below.
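A minimal MIMD-style sketch: two processes execute different instructions on
different data at the same time (the two jobs are illustrative assumptions):

    from multiprocessing import Process, Queue

    def word_count(text, out):     # one instruction stream and data set
        out.put(("words", len(text.split())))

    def vector_sum(numbers, out):  # a different instruction stream and data set
        out.put(("sum", sum(numbers)))

    if __name__ == "__main__":
        out = Queue()
        jobs = [Process(target=word_count, args=("to be or not to be", out)),
                Process(target=vector_sum, args=([1, 2, 3, 4], out))]
        for p in jobs:
            p.start()
        for _ in jobs:
            print(out.get())       # ('words', 6) and ('sum', 10), either order
        for p in jobs:
            p.join()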
Multiple-instruction, multiple-data (MIMD) systems
[Figure: MIMD architecture: multiple instruction streams, each driving its own processing unit and data stream]
Approaches of Parallel Programming
 • Data parallelism:
    •   divide-and-conquer technique
    •   split data into multiple sets
    •   each data set is processed on different Processor using same instruction.
    •   Uses the SIMD model.
 • Process parallelism:
    • a given operation has multiple (but distinct) activities.
    • Each activity is executed by distinct processors.
 • Farmer-and-worker model:
    • One processor is configured as the master; all the remaining processors act as
      workers.
    • The master assigns jobs to the worker processors.
    • Upon completion of the tasks, the master collects and combines the results; see
      the sketch below.
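A minimal farmer-and-worker sketch built on Python's multiprocessing queues
(the squaring job, the worker count, and the sentinel protocol are
illustrative assumptions):

    from multiprocessing import Process, Queue

    def worker(tasks, results):
        # Workers repeatedly take a job from the master and return a result.
        while True:
            job = tasks.get()
            if job is None:           # sentinel from the master: stop
                break
            results.put(job * job)    # illustrative job: square a number

    if __name__ == "__main__":
        tasks, results = Queue(), Queue()
        workers = [Process(target=worker, args=(tasks, results))
                   for _ in range(3)]
        for p in workers:
            p.start()
        for job in range(10):         # the master (farmer) assigns the jobs
            tasks.put(job)
        for _ in workers:             # one stop sentinel per worker
            tasks.put(None)
        for p in workers:
            p.join()
        print(sorted(results.get() for _ in range(10)))  # combined results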
Elements of distributed computing
• A distributed system is one in which components located at networked
  computers communicate and coordinate their actions only by passing
  messages.
• Components of a distributed system (see the middleware sketch below):
   •   Hardware
   •   OS
   •   Middleware
   •   Distributed Applications
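A minimal sketch of how these layers fit together, using the standard-library
xmlrpc modules as the middleware (the port and the add function are
illustrative assumptions; the server and client would normally run on
different nodes):

    from xmlrpc.server import SimpleXMLRPCServer
    from xmlrpc.client import ServerProxy
    import threading

    # Middleware layer: expose a function to remote callers over the network.
    server = SimpleXMLRPCServer(("127.0.0.1", 8000), logRequests=False)
    server.register_function(lambda a, b: a + b, "add")
    threading.Thread(target=server.handle_request, daemon=True).start()

    # Distributed-application layer: call the remote function as if it were
    # local; the middleware hides the underlying message passing.
    proxy = ServerProxy("http://127.0.0.1:8000")
    print(proxy.add(2, 3))            # 5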
Layered View of Distributed System
[Figure: layered view: distributed applications on top of middleware, above the operating system and hardware of each node]