Question Number 1: 
Write a note on multi-processor scheduling. 
Answer: 
In multiple-processor scheduling, multiple CPUs are available and hence load
sharing becomes possible. However, multiple-processor scheduling is more complex
than single-processor scheduling. When the processors are identical, i.e.,
HOMOGENEOUS, in terms of their functionality, we can use any available processor
to run any process in the queue. 
Approaches to Multiple-Processor Scheduling
One approach is to have all scheduling decisions and I/O processing handled by a
single processor called the Master Server, while the other processors execute only
user code. This is simple and reduces the need for data sharing. This scheme is
called Asymmetric Multiprocessing. 
A second approach uses Symmetric Multiprocessing (SMP), where each processor is self-
scheduling. All processes may be in a common ready queue, or each processor may have
its own private queue of ready processes. Scheduling proceeds by having the
scheduler for each processor examine the ready queue and select a process to execute,
as in the sketch below. 
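As a rough illustration of SMP self-scheduling with a common ready queue, the following C sketch lets several threads play the role of processors. All names here (ready_queue, run_process, NPROC) are invented for the example, and the mutex stands in for the synchronization a real kernel would need around its shared queue.

    /* Minimal sketch of symmetric multiprocessing with one common ready queue.
     * Each pthread plays the role of one self-scheduling processor. */
    #include <pthread.h>
    #include <stdio.h>

    #define NPROC 4   /* simulated processors */
    #define NJOBS 12  /* simulated processes  */

    static int ready_queue[NJOBS];          /* common ready queue       */
    static int head = 0;                    /* next process to dispatch */
    static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;

    static void run_process(long cpu, int pid)
    {
        printf("processor %ld runs process %d\n", cpu, pid);
    }

    static void *scheduler(void *arg)       /* per-processor scheduling loop */
    {
        long cpu = (long)arg;
        for (;;) {
            pthread_mutex_lock(&qlock);     /* queue is shared, so the  */
            if (head == NJOBS) {            /* schedulers must sync     */
                pthread_mutex_unlock(&qlock);
                return NULL;
            }
            int pid = ready_queue[head++];  /* select a process to run  */
            pthread_mutex_unlock(&qlock);
            run_process(cpu, pid);
        }
    }

    int main(void)
    {
        pthread_t t[NPROC];
        for (int i = 0; i < NJOBS; i++) ready_queue[i] = 100 + i;
        for (long c = 0; c < NPROC; c++)
            pthread_create(&t[c], NULL, scheduler, (void *)c);
        for (int c = 0; c < NPROC; c++)
            pthread_join(t[c], NULL);
        return 0;
    }

Compile with gcc -pthread. With asymmetric multiprocessing, by contrast, only the master server would run this scheduling loop.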
Processor Affinity
Processor Affinity means a process has an affinity for the processor on which it is currently
running. Because the data most recently accessed by the process populates that processor's
cache, migrating the process to another processor is costly, so most systems try to avoid it. 
There are two types of processor affinity:
Soft Affinity - When an operating system has a policy of attempting to keep a process
running on the same processor but not guaranteeing it will do so, this situation is called soft
affinity.
Hard Affinity - Hard affinity allows a process to specify a subset of processors on which it
may run.  
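On Linux, hard affinity is exposed through the sched_setaffinity() system call. The sketch below is Linux/glibc specific, and the CPU numbers 0 and 1 are arbitrary examples chosen for illustration.

    /* Sketch of hard affinity on Linux: the process asks to run only on
     * CPUs 0 and 1. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);                 /* start with an empty CPU set */
        CPU_SET(0, &set);               /* allow CPU 0                 */
        CPU_SET(1, &set);               /* allow CPU 1                 */

        /* pid 0 means "the calling process" */
        if (sched_setaffinity(0, sizeof(set), &set) == -1) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("process now restricted to CPUs 0 and 1\n");
        return 0;
    }

Soft affinity, by contrast, needs no call from the process; it is simply the scheduler's best-effort attempt to keep a process on the processor where it last ran.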
Load Balancing
Load balancing attempts to keep the workload evenly distributed across all
processors in an SMP system. Load balancing is necessary only on systems where each
processor has its own private queue of processes eligible to execute; with a common
ready queue, an idle processor simply takes the next runnable process from it. 
There are two general approaches to load balancing (a sketch of push migration follows the list):
       Push Migration - a specific task periodically checks the load on each processor and, if it
       finds an imbalance, pushes processes from overloaded processors to idle or less busy ones.
       Pull Migration - an idle processor pulls a waiting task from a busy processor.
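Below is a toy C sketch of push migration, under the simplifying assumption that per-processor load can be summarized as a ready-queue length. The names queue_len, push_migrate and NPROC are invented for the example; a real kernel migrates task structures, not counters.

    /* Toy sketch of push migration: a periodic balancer checks the length
     * of each processor's private ready queue and pushes work from the
     * busiest processor to the least busy one. */
    #include <stdio.h>

    #define NPROC 4

    static int queue_len[NPROC] = { 7, 1, 3, 1 };   /* per-CPU queue sizes */

    static void push_migrate(void)
    {
        int busiest = 0, idlest = 0;
        for (int c = 1; c < NPROC; c++) {           /* find load extremes  */
            if (queue_len[c] > queue_len[busiest]) busiest = c;
            if (queue_len[c] < queue_len[idlest])  idlest  = c;
        }
        while (queue_len[busiest] - queue_len[idlest] > 1) {
            queue_len[busiest]--;                   /* push one process    */
            queue_len[idlest]++;
            printf("migrated a process from CPU %d to CPU %d\n", busiest, idlest);
        }
    }

    int main(void)
    {
        push_migrate();                             /* would run periodically */
        for (int c = 0; c < NPROC; c++)
            printf("CPU %d queue length: %d\n", c, queue_len[c]);
        return 0;
    }

Push and pull migration are not mutually exclusive and are often implemented together on the same system.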
Multicore Processors
In multicore processors, multiple processor cores are placed on the same physical chip.
Each core has a register set to maintain its architectural state and thus appears to the
operating system as a separate physical processor. When a processor accesses memory,
it can spend a significant amount of time waiting for the data to become available. This
situation is called a MEMORY STALL. To help remedy this, recent hardware designs have
implemented multithreaded processor cores in which two or more hardware threads are
assigned to each core, so that if one thread stalls on memory the core can switch to another.
There are two ways to multithread a processor:
      Coarse-Grained Multithreading - a thread executes on a core until a long-latency event such
      as a memory stall occurs; the core then switches to another thread, which is relatively
      expensive because the instruction pipeline must be flushed.
      Fine-Grained Multithreading - the core switches between threads at a much finer level of
      granularity, typically at the boundary of an instruction cycle, so the switching cost is small.
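Because each hardware thread appears as a logical processor, the CPU count reported by the operating system reflects hardware threads, not physical chips. A small sketch that queries this, assuming a Linux/glibc system (note that _SC_NPROCESSORS_ONLN is a common extension rather than strict POSIX):

    /* Sketch showing that the OS counts each hardware thread as a logical
     * processor. On a chip with 4 cores and 2 hardware threads per core,
     * both values would typically be 8. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long online = sysconf(_SC_NPROCESSORS_ONLN);   /* logical CPUs online     */
        long conf   = sysconf(_SC_NPROCESSORS_CONF);   /* logical CPUs configured */
        printf("online logical processors: %ld\n", online);
        printf("configured logical processors: %ld\n", conf);
        return 0;
    }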