Multi-Processor Scheduling Guide

Multiple processor scheduling allows for load sharing across CPUs. It is more complex than single processor scheduling due to issues around data sharing. There are two main approaches: asymmetric multiprocessing where one CPU handles scheduling and I/O while others run user code, and symmetric multiprocessing where each CPU self-schedules from a common or private ready queue. Processor affinity refers to a process's preference to remain on the same CPU, and can be soft or hard. Load balancing works to evenly distribute work across CPUs and uses either push or pull migration techniques. Multicore processors place multiple cores on a single chip to reduce memory stalls, and use multithreading to assign multiple hardware threads to each core.


Question Number 1: 

Write a note on multi-processor scheduling.


Answer: 

In multiple-processor scheduling, multiple CPUs are available and hence load
sharing becomes possible. However, multiple-processor scheduling is more complex than
single-processor scheduling. When the processors are identical, i.e., HOMOGENEOUS, in
terms of their functionality, any available processor can be used to run any process in the
queue.
 
Approaches to Multiple-Processor Scheduling
One approach is to have all scheduling decisions and I/O processing handled by a
single processor, called the Master Server, while the other processors execute only
user code. This is simple and reduces the need for data sharing. This scheme is
called Asymmetric Multiprocessing.
 
A second approach uses Symmetric Multiprocessing (SMP), where each processor is self-
scheduling. All processes may be in a common ready queue, or each processor may have
its own private queue of ready processes. Scheduling proceeds by having the
scheduler for each processor examine the ready queue and select a process to execute.
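As a rough illustration (not part of the original note), the following C sketch models an SMP system in which each processor schedules from its own private ready queue; the process IDs, queue sizes, and FCFS selection are invented for the example.

#include <stdio.h>

#define NCPU   2     /* number of processors (assumed for the example) */
#define QSIZE  4     /* capacity of each private ready queue           */

/* One private ready queue per processor: each slot holds a process id,
 * -1 marks an empty slot. */
static int ready_queue[NCPU][QSIZE] = {
    { 10, 11, -1, -1 },   /* processes waiting on CPU 0 */
    { 20, -1, -1, -1 }    /* processes waiting on CPU 1 */
};

/* Each processor is self-scheduling: its scheduler examines only its own
 * ready queue and selects the first ready process (FCFS for brevity). */
static int pick_next(int cpu)
{
    for (int i = 0; i < QSIZE; i++) {
        if (ready_queue[cpu][i] != -1) {
            int pid = ready_queue[cpu][i];
            ready_queue[cpu][i] = -1;     /* remove it from the queue */
            return pid;
        }
    }
    return -1;                            /* queue empty: CPU idles   */
}

int main(void)
{
    for (int cpu = 0; cpu < NCPU; cpu++)
        printf("CPU %d runs process %d\n", cpu, pick_next(cpu));
    return 0;
}

With a common ready queue instead, pick_next would take processes from a single shared structure protected by a lock, which is why private queues (and the load balancing discussed below) are often preferred.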
 
Processor Affinity
Processor Affinity means that a process has an affinity for the processor on which it is
currently running, because the data it has been using populates that processor's cache and
migrating to another processor would lose this benefit.
 
There are two types of processor affinity:
Soft Affinity – The operating system attempts to keep a process running on the same
processor but does not guarantee that it will do so.
Hard Affinity – The process can specify a subset of processors on which it may run, and the
operating system will not schedule it on any other processor.
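On Linux, hard affinity can be requested with the sched_setaffinity() system call. The sketch below binds the calling process to CPUs 0 and 1; it assumes a Linux system with at least two CPUs.

#define _GNU_SOURCE           /* needed for CPU_SET and sched_setaffinity */
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    cpu_set_t set;

    CPU_ZERO(&set);           /* start with an empty CPU set              */
    CPU_SET(0, &set);         /* allow CPU 0                              */
    CPU_SET(1, &set);         /* allow CPU 1 (assumes the machine has it) */

    /* Restrict the calling process (pid 0 = self) to the chosen subset. */
    if (sched_setaffinity(0, sizeof(set), &set) == -1) {
        perror("sched_setaffinity");
        return 1;
    }

    printf("process %d is now bound to CPUs 0 and 1\n", (int)getpid());
    return 0;
}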
 
 
Load Balancing
Load Balancing is the mechanism that keeps the workload evenly distributed across all
processors in an SMP system. Load balancing is necessary only on systems where each
processor has its own private queue of processes that are eligible to execute; with a
common ready queue, an idle processor simply takes the next process from that queue.
 
There are two general approaches to load balancing (a sketch of push migration follows the list):

 Push Migration – a specific task periodically checks the load on each processor and, if it finds an imbalance, pushes processes from overloaded processors to idle or less-busy ones.
 Pull Migration – an idle processor pulls a waiting process from a busy processor's queue.
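The following C sketch is a simplified illustration of push migration, not an actual kernel implementation: it repeatedly compares per-CPU queue lengths and moves one process at a time from the busiest queue to the least loaded one. The queue lengths and the imbalance threshold are invented for the example.

#include <stdio.h>

#define NCPU 4

/* Number of ready processes queued on each CPU (example data). */
static int queue_len[NCPU] = { 6, 1, 3, 0 };

/* Push migration: a periodic task finds the most and least loaded CPUs
 * and pushes one process from the former to the latter while the
 * imbalance exceeds a small threshold. */
static void push_migrate(void)
{
    for (;;) {
        int busiest = 0, idlest = 0;
        for (int cpu = 1; cpu < NCPU; cpu++) {
            if (queue_len[cpu] > queue_len[busiest]) busiest = cpu;
            if (queue_len[cpu] < queue_len[idlest])  idlest  = cpu;
        }
        if (queue_len[busiest] - queue_len[idlest] <= 1)
            break;                       /* load is roughly balanced */
        queue_len[busiest]--;            /* push one process ...     */
        queue_len[idlest]++;             /* ... to the less-busy CPU */
        printf("pushed a process from CPU %d to CPU %d\n", busiest, idlest);
    }
}

int main(void)
{
    push_migrate();
    for (int cpu = 0; cpu < NCPU; cpu++)
        printf("CPU %d queue length: %d\n", cpu, queue_len[cpu]);
    return 0;
}

Pull migration would run the same comparison from the idle CPU's side, taking a process from a busy queue when its own queue is empty.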

Multicore Processors
In multicore processors, multiple processor cores are placed on the same physical chip.
Each core has a register set to maintain its architectural state and thus appears to the
operating system as a separate physical processor. When a processor accesses memory,
it may spend a significant amount of time waiting for the data to become available. This
situation is called a MEMORY STALL. To address this problem, recent hardware designs have
implemented multithreaded processor cores in which two or more hardware threads are
assigned to each core; if one thread stalls while waiting for memory, the core can switch to
another thread.
There are two ways to multithread a processor:

 Coarse-Grained Multithreading – a thread executes on a core until a long-latency event such as a memory stall occurs; the core then switches to another thread.
 Fine-Grained Multithreading – the core switches between threads at a much finer level of granularity, typically at the boundary of an instruction cycle.
