
Chap-3 OS

OS study material

Uploaded by

makkakuldip

UNIT - 3

Inter-process Communication


Prepared By:
Ms. Miteksha Padmani
Assistant Professor
CE/IT Dept.
SRCOE, Rajkot
Introduction to IPC
• Inter-process communication (IPC) is a set of methods for the exchange of data among multiple threads in one or more processes.

• Processes may be running on one or more computers connected by a network.

• IPC may also be referred to as inter-thread communication and inter-application communication.
Why IPC ?
There are several reasons for providing an
environment that allows process cooperation:

• Information sharing
• Computational speedup
• Modularity
• Convenience
Race Condition
• Race conditions arise in software when separate processes or threads of execution depend on some shared state.

• When two or more processes are reading or writing some shared data and the final result depends on exactly when each one runs, we have a race condition.

• Operations upon shared state are critical sections that must be mutually exclusive.
in is a shared variable containing a pointer to the next free slot; out is a shared variable pointing to the next file to be printed.

Figure: Two processes want to access shared memory at the same time.
How to avoid Races???
• Mutual exclusion – only one process at a time can use a shared variable/file.

• Critical regions – the parts of the program where shared memory is accessed, which can lead to races.

• Solution – ensure that no two processes are in their critical regions at the same time.
Mutual Exclusion
• For shared memory, shared files and other shared resources, there should be some way to prohibit more than one process from reading and writing the shared data at the same time.

• In other words… there should be Mutual Exclusion.
Critical Region
• Sometimes a process has to access shared memory or files, or do other critical things that can lead to races. The part of the program where shared memory is accessed is called the critical region or critical section.

• If no two processes are ever in their critical regions at the same time, races can be avoided.
Other Solutions to avoid race…
• For parallel processes to cooperate correctly and efficiently using shared data, mutual exclusion alone is not sufficient. A good solution satisfies four conditions:

1. No two processes may be simultaneously inside their critical sections.

2. No assumptions may be made about speeds or the number of CPUs.

3. No process running outside its critical region may block other processes.

4. No process should have to wait forever to enter its critical region.
What are we trying to do?
Mutual Exclusion with Busy Waiting

A list of proposals to achieve mutual exclusion

• Disabling interrupts
• Lock variables
• Strict alternation
• Peterson's solution
• The TSL instruction
Disabling Interrupts
• Idea: process disables interrupts, enters critical region,
enables interrupts when it leaves critical region
• Problems:
• A process might never re-enable interrupts, crashing the system
• Won't work on multi-core chips, since disabling interrupts only affects one CPU at a time
Lock variables
• A lock is a shared variable. Software mutual exclusion requires that a process read a variable to determine that no other process is executing its critical section, then set a variable known as the lock.

• The lock indicates that a process is executing its critical section.
• When the lock is 0, a process sets it to 1 and enters the critical region.
• When it exits the critical region, it sets the lock back to 0.
• Problem – the read and the set are not atomic, so this scheme itself has a race condition.
Strict Alternation
Here, we use an integer variable turn.

Figure (a) For process P0; Figure (b) for process P1

• The integer variable turn is initially set to 0. Process P0 checks the value of turn. If turn is 0, process P0 enters its critical section while process P1 waits.
• Process P1 busy-waits, continuously checking the value of turn. When P0 finishes its execution, it comes out and sets turn to 1.
• When turn becomes 1, P1 enters its critical section.
Problems with strict alternation
• It requires knowing in advance how many processes will want to share the memory.

• It employs busy waiting – while waiting for the critical region, a process spins.

• It increases CPU overhead.
Peterson’s Solution
• Each process calls enter_region with its own process number, 0 or 1, as parameter, before entering its critical region.

• After it has finished with the shared variables, the process calls leave_region to indicate that it is done and to allow the other process to enter, if it so desires.

• Now consider the case where both processes call enter_region almost simultaneously. Both store into turn; the last one to set it – say it is process 1 – is the one that waits, so process 0 enters first.
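The algorithm above can be sketched in Python (a stand-in for the C pseudocode usually shown with this slide; the names interested, turn, enter_region, and leave_region follow the slide's own terminology, while N_ITER and the shared counter are illustrative). Under CPython's GIL the plain reads and writes behave sequentially consistently, which is what Peterson's solution assumes:

```python
import threading
import time

N_ITER = 2000
interested = [False, False]  # interested[i]: process i wants the critical region
turn = 0                     # who waits when both are interested
counter = 0                  # shared state the algorithm protects

def enter_region(me):
    global turn
    other = 1 - me
    interested[me] = True
    turn = me                          # the LAST process to set turn is the one that waits
    while interested[other] and turn == me:
        time.sleep(0)                  # busy wait (yield so the other thread can run)

def leave_region(me):
    interested[me] = False             # let the other process in

def worker(me):
    global counter
    for _ in range(N_ITER):
        enter_region(me)
        counter += 1                   # critical region
        leave_region(me)

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

If mutual exclusion holds, every increment survives and counter ends at 2 * N_ITER.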
TSL (Test & Set Lock)

• TSL is a single machine instruction. It was introduced by IBM for its multiprocessing systems.
• In one machine cycle, it tests whether the lock is available and, if it is, sets it to unavailable.
• TSL is a hardware solution to the critical-section problem.
• TSL reads the lock into a register and stores a NON-ZERO VALUE in the lock (e.g. the process number).
• The instruction is atomic: this is done by freezing access to the memory bus (bus lock).
Using TSL

TSL is atomic. Memory bus is locked until it is finished executing.
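Python has no access to a hardware TSL instruction, so in this sketch (an illustrative assumption, not the slide's original code) a non-blocking Lock.acquire stands in for the atomic test-and-set; the loop mirrors the classic enter_region spin:

```python
import threading
import time

lock_word = threading.Lock()   # plays the role of the memory word TSL operates on
shared = 0

def tsl_enter_region():
    # Spin until the test-and-set succeeds, mirroring:
    #   enter_region: TSL REGISTER, LOCK   | copy lock to register, set lock to 1
    #                 CMP REGISTER, #0     | was lock zero?
    #                 JNE enter_region     | if non-zero, lock was set, so loop
    while not lock_word.acquire(blocking=False):  # atomic test-and-set stand-in
        time.sleep(0)                             # busy wait, yielding the CPU

def tsl_leave_region():
    lock_word.release()                           # store 0 back into the lock

def worker():
    global shared
    for _ in range(10000):
        tsl_enter_region()
        shared += 1                               # critical region
        tsl_leave_region()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```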


What’s wrong with Peterson, TSL ?

• Busy waiting-waste of CPU time!


• Idea: Replace busy waiting by blocking calls
• Sleep blocks process
• Wakeup unblocks process
The Producer-Consumer Problem (aka Bounded
Buffer Problem)
The problem with sleep and wake-up calls

• Empty buffer,count==0
• Consumer gets replaced by producer before it goes
to sleep
• Produces something, count++, sends wakeup to
consumer
• Consumer is not asleep yet, so the wakeup is lost; it still thinks count == 0 and goes to sleep
• Producer fills buffer, goes to sleep
• P and C sleep forever
• So the problem is lost wake-up calls
Semaphores
• Definition :-
A semaphore is a variable or abstract data type that
provides a simple but useful abstraction for controlling
access by multiple processes to a common resource in a
parallel programming or multi user environment.

• A semaphore is an integer variable.

• It is used to solve the critical-section problem.

General structure of a process
with semaphore

Begin
Initial-routine;
DOWN(S);
Critical-Region;
UP(S);
Remaining-portion;
End
Semaphores
• Semaphores are a useful tool in the prevention of race
conditions; however, their use is by no means a
guarantee that a program is free from these problems.
• Semaphores which allow an arbitrary resource count
are called counting semaphores, while semaphores
which are restricted to the values 0 and 1 (or
locked/unlocked, unavailable/available) are
called binary semaphores (same functionality that
mutexes have).
• The count is used to track wakeups for sleeping processes.
• A semaphore value of zero indicates that no wakeups were saved; a positive value means one or more wakeups are pending.
Semaphores (up & down operations)
• One important property of these semaphore variables is
that their value cannot be changed except by using the
wait() and signal() functions.

• Counting semaphores are equipped with two operations,


historically denoted as V (also known as signal()) and P (or
wait()). Operation V increments the semaphore S (also
known as up), and operation P decrements it (also
known as down).

• The value of the semaphore S is the number of units of the


resource that are currently available. The P operation
wastes time or sleeps until a resource protected by the
semaphore becomes available, at which time the resource
is immediately claimed. The V operation is the inverse: it
makes a resource available again after the process has
finished using it.
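The P/V (down/up) behaviour described above can be sketched in Python with a mutex and a condition variable (an illustrative implementation for this course, not a production one; real systems provide semaphores in the kernel or runtime):

```python
import threading
import time

class CountingSemaphore:
    def __init__(self, value):
        self._value = value               # units of the resource available
        self._cond = threading.Condition()

    def down(self):                       # P / wait(): claim one unit
        with self._cond:
            while self._value == 0:       # sleep until a unit is free
                self._cond.wait()
            self._value -= 1

    def up(self):                         # V / signal(): release one unit
        with self._cond:
            self._value += 1
            self._cond.notify()           # wake one sleeper, if any

# down() on a zero-valued semaphore blocks until some other process does up()
sem = CountingSemaphore(0)
events = []

def releaser():
    time.sleep(0.1)
    events.append("up")
    sem.up()

threading.Thread(target=releaser).start()
sem.down()                                # blocks here until the releaser runs
events.append("down returned")
```

The recorded order shows that down() could only return after up() made a unit available.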
Semaphores : wait() & signal() operations
A simple way to understand wait() and signal()
operations is:

• wait(): Decrements the value of semaphore variable


by 1. If the value becomes negative, the process
executing wait() is blocked (like sleep), i.e., added to
the semaphore's queue.
• signal(): Increments the value of semaphore
variable by 1. After the increment, if the pre-
increment value was negative (meaning there are
processes waiting for a resource), it transfers a
blocked process from the semaphore's waiting
queue to the ready queue. (like Wake up)
Semaphores
• The semaphore was described by Dijkstra.
• Dijkstra introduced two operations (P and V) on semaphores to solve the process synchronization problem.
• A process calls the P operation when it wants to enter its critical section and calls the V operation when it wants to exit its critical section.
• The P operation is called the wait operation and the V operation is called the signal operation.
• A wait operation on a semaphore decreases its value by one:
wait(S): while S <= 0
do no-op;
S := S - 1;
• A signal operation increments its value:
signal(S):
S := S + 1;
Producer Consumer with Semaphores

• 3 semaphores: full, empty and mutex


• Full counts full slots (initially 0)
• Empty counts empty slots (initially N)
• Mutex protects the shared buffer that holds the items produced and consumed
Producer Consumer with semaphores

1. A single consumer enters its critical section.


Since fullCount is 0, the consumer blocks.
2. Several producers enter the producer critical
section. No more than N producers may enter their
critical section due to emptyCount constraining
their entry.
3. The producers, one at a time, gain access to the
queue through mutex and deposit items in the
queue.
4. Once the first producer exits its critical
section, fullCount is incremented, allowing one
consumer to enter its critical section.
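The walkthrough above can be sketched with Python's built-in threading.Semaphore (the buffer size N and item count are illustrative choices):

```python
import threading

N = 5                                # buffer capacity
ITEMS = 100
buffer = []
mutex = threading.Semaphore(1)       # binary semaphore guarding the buffer
empty = threading.Semaphore(N)       # counts empty slots (initially N)
full = threading.Semaphore(0)        # counts full slots (initially 0)
consumed = []

def producer():
    for i in range(ITEMS):
        empty.acquire()              # down(empty): wait for a free slot
        mutex.acquire()              # down(mutex): enter the critical region
        buffer.append(i)             # deposit the item
        mutex.release()              # up(mutex)
        full.release()               # up(full): one more full slot

def consumer():
    for _ in range(ITEMS):
        full.acquire()               # down(full): wait for an item
        mutex.acquire()
        consumed.append(buffer.pop(0))
        mutex.release()
        empty.release()              # up(empty): one more free slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
```

Note the ordering in the producer: acquiring mutex before empty would invite deadlock when the buffer is full.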
Producer Consumer with condition variables
and
mutexes
• Producer produces one item and blocks
waiting for consumer to use the item
• Signals consumer that the item has been
produced
• Consumer blocks waiting for producer to
signal that item is in buffer
• Consumer consumes item, signals producer
to produce new item
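The one-item handoff above can be sketched with a condition variable and mutex; Python's threading.Condition bundles both (the one-slot buffer and item count are illustrative):

```python
import threading

cond = threading.Condition()   # mutex + condition variable in one object
slot = None                    # one-item buffer; None means empty
received = []

def producer(items):
    global slot
    for item in items:
        with cond:
            while slot is not None:   # block until the consumer used the item
                cond.wait()
            slot = item               # produce one item
            cond.notify_all()         # signal consumer: item is in the buffer

def consumer(n):
    global slot
    for _ in range(n):
        with cond:
            while slot is None:       # block until the producer signals
                cond.wait()
            received.append(slot)     # consume the item
            slot = None
            cond.notify_all()         # signal producer: produce a new item

p = threading.Thread(target=producer, args=(list(range(10)),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start()
p.join(); c.join()
```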
Event Counter
• An event counter is another data structure
that can be used for process
synchronization.
Like a semaphore, it has an integer count
and a set of waiting process identifications.

• Unlike semaphores, the count variable only


increases. It is similar to the “next customer
number” used in systems where each
customer takes a sequentially numbered
ticket and waits for that number to be called.
Operations on Event
Counters
• read(E): return the count associated with
event counter E.

• advance(E): atomically increment the count


associated with event counter E.

• await(E,v): if E.count >= v, then continue. Otherwise, block until E.count reaches v.
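The three operations can be sketched as a small Python class (an illustrative implementation; the method is spelled await_ because await is a Python keyword):

```python
import threading

class EventCounter:
    def __init__(self):
        self._count = 0                    # only ever increases
        self._cond = threading.Condition()

    def read(self):
        with self._cond:
            return self._count             # current count of the event counter

    def advance(self):
        with self._cond:
            self._count += 1               # atomic increment
            self._cond.notify_all()        # let every await-er re-check

    def await_(self, v):
        with self._cond:
            # continue once the count has reached v, blocking otherwise
            self._cond.wait_for(lambda: self._count >= v)

e = EventCounter()

def ticker():
    for _ in range(3):
        e.advance()

threading.Thread(target=ticker).start()
e.await_(3)          # returns only after the third advance()
```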
Producer-Consumer with Event Counters
• Two event counters are used, in and out, each of
which has an initial count of zero.

• Each process includes a private sequence variable


which indicates the sequence number of the last
item it has produced or will consume. Items are
sequentially produced and consumed.

• Each item has a unique location in the buffer


(based on its sequence number), so mutually exclusive
access to the buffer is not required.
Monitors
• Tony Hoare (in 1974) and Per Brinch Hansen (in
1975) proposed a new synchronization structure
called a monitor.

• Monitors are a synchronization feature intended to be included in high-level programming languages.

• A monitor is a construction that groups a collection


of functions, declarations, and initialization
statements.

• Only one process at a time is allowed inside the


monitor. Language compilers generate code that
guarantees this restriction.
Monitors
• An OS’s main task is to control access to a
machine’s resources (data, devices, time,
space)
• If resources are not tightly controlled, “chaos
will ensue”
- race condition
• Monitors provide control by allowing only one
process to access a critical resource at a time
– A class/module/package
– Contains procedures and data
Monitor Rules
• Any process may call any monitor procedure at any time
• But only one process at a time may be executing inside the monitor
• No process may directly access a monitor’s
local variables
• A monitor may only access its own local variables
Things Needed to Enforce Monitor
• “wait” operation
– Forces running process to sleep
• “signal” operation
– Wakes up a sleeping process
• A condition variable
– A record of which processes are waiting for a particular reason
– Implemented as a queue
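These pieces can be sketched in Python (which has no monitor keyword, so one internal lock shared by every procedure enforces the one-process-at-a-time rule; the bounded counter is an illustrative example, not the slide's own):

```python
import threading
import time

class BoundedCounter:
    """Monitor-style class: one lock guards all procedures and data."""
    def __init__(self, limit):
        self._lock = threading.Lock()
        self._not_full = threading.Condition(self._lock)  # the condition: a queue of waiters
        self._count = 0
        self._limit = limit

    def increment(self):                    # monitor procedure
        with self._lock:                    # only one thread inside at a time
            while self._count >= self._limit:
                self._not_full.wait()       # "wait": sleep inside the monitor
            self._count += 1

    def decrement(self):                    # monitor procedure
        with self._lock:
            self._count -= 1
            self._not_full.notify()         # "signal": wake one sleeping waiter

    def value(self):
        with self._lock:
            return self._count

m = BoundedCounter(limit=2)
m.increment(); m.increment()                # counter is at its limit

def make_room():
    time.sleep(0.1)
    m.decrement()

threading.Thread(target=make_room).start()
m.increment()                               # blocks until make_room signals
```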
Monitors
• Advantages
– Data access synchronization simplified (vs. semaphores
or locks)
– Better encapsulation
• Disadvantages:
– Deadlock still possible (in monitor code)
– Programmer can still botch (misuse) monitors
– No provision for information exchange between
machines

Message Passing
• Semaphores, monitors, and event counters are all designed to function within a single system (that is, a system with a single primary memory).

• They do not support synchronization of processes running on separate machines connected to a network (a distributed system).

• Messages, which can be sent across the network, can be used to provide synchronization.

• So message passing is a strategy for inter-process communication in a distributed environment.
Typical Message-Passing
Functions
• source and destination addresses must identify the
machine and the process being addressed. message
is a generic data item to be delivered.

send (destination, &message);


The calling process sends message to destination
without blocking.

receive (source,&message);
The calling process receives the next sequential
message from source, blocking until it is available.
Producer-Consumer using
Message
Passing
In this solution, each message has two components:

• an empty/full flag, and a data component being passed from


the producer to the consumer.
• Initially, the consumer sends N messages marked as “empty” to the producer.
• The producer receives an empty message, blocking until
one is available, fills it, and sends it to the consumer.

• The consumer receives a filled message, blocking if


necessary, processes the data it contains, and
returns the empty to the producer.
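The empty/full message scheme can be sketched with Python's queue.Queue standing in for the two mailboxes (the send/receive helpers and the message format are illustrative assumptions, not a real messaging API):

```python
import queue
import threading

N = 4                          # number of empty messages in circulation
ITEMS = 20
to_producer = queue.Queue()    # mailbox: empty messages travel back to the producer
to_consumer = queue.Queue()    # mailbox: filled messages go to the consumer
received = []

def send(mailbox, message):
    mailbox.put(message)       # send without blocking

def receive(mailbox):
    return mailbox.get()       # blocks until a message is available

def producer():
    for i in range(ITEMS):
        msg = receive(to_producer)   # wait for an empty message
        msg["data"] = i              # fill it
        send(to_consumer, msg)

def consumer():
    for _ in range(N):               # prime the system with N empties
        send(to_producer, {"data": None})
    for _ in range(ITEMS):
        msg = receive(to_consumer)   # wait for a filled message
        received.append(msg["data"])
        msg["data"] = None
        send(to_producer, msg)       # return the empty

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
```

The N empty messages play the role the empty semaphore played earlier: they bound how far the producer can run ahead of the consumer.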
Classic IPC
Problems
• Dining philosophers
• Readers and writers
Readers and writers
• Multiple readers can concurrently read from the database.

• But when updating the database, there can be only one writer (i.e., no other writers and no readers either).
Readers and writers
• For example, in a railway reservation system, readers are those who want train schedule information. They are called readers because they do not change the contents of the database. Many readers may access the database at once.

• The writers are those who are making a reservation on a particular train. A writer can modify the database, so it must have exclusive access. When a writer is active, no other readers or writers may be active.
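The rule above (many readers, or one exclusive writer) can be sketched with the classic reader-count solution; the class and method names here are illustrative:

```python
import threading

class ReadersWriters:
    def __init__(self):
        self._readers = 0                  # number of active readers
        self._mutex = threading.Lock()     # protects _readers
        self._db = threading.Lock()        # held by the writer, or by the reader group

    def start_read(self):
        with self._mutex:
            self._readers += 1
            if self._readers == 1:         # first reader locks writers out
                self._db.acquire()

    def end_read(self):
        with self._mutex:
            self._readers -= 1
            if self._readers == 0:         # last reader lets writers back in
                self._db.release()

    def start_write(self):
        self._db.acquire()                 # exclusive access to the database

    def end_write(self):
        self._db.release()

rw = ReadersWriters()
rw.start_read()
rw.start_read()                            # two readers share the database
writer_excluded = not rw._db.acquire(blocking=False)  # a writer would block now
rw.end_read()
rw.end_read()
rw.start_write()                           # with no readers left, the writer gets in
rw.end_write()
```

This is the "first readers-writers" variant: a steady stream of readers can starve a writer, which is exactly the starvation problem these slides discuss next.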
Dining philosophers
Philosophers eat and
think.
1. To eat, they must first
acquire a left fork and
then a right fork (or
vice versa).
2. Then they eat.
3. Then they put down
the forks.
4. Then they think.
5. Go to 1.
Dining Philosophers
• Problems:

1. Deadlock

2. Starvation
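One standard way to remove the deadlock (an addition here, not the slide's own solution) is to break the circular wait by always acquiring the lower-numbered fork first:

```python
import threading

N = 5                                      # philosophers and forks
ROUNDS = 50
forks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i):
    # Acquire forks in global index order: this breaks the circular-wait
    # condition, so the deadlock of "everyone holds a left fork" cannot occur.
    first, second = sorted((i, (i + 1) % N))
    for _ in range(ROUNDS):
        with forks[first]:
            with forks[second]:
                meals[i] += 1              # eat
        # think

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Every philosopher finishes all of its rounds, so the program terminates instead of deadlocking.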
Process Scheduling
• Scheduling processes on the processor is often
called processor scheduling or process
scheduling or simply scheduling.

• As we shall see later in the course, a more


descriptive name would be short-term,
processor scheduling.

• Naturally, the part of the OS responsible for


(short-term, processor) scheduling is called the
(short-term, processor) scheduler and the
algorithm used is called the (short-term,
processor) scheduling algorithm.
Scheduling Algorithm Goals
1. Fairness.
Treating users fairly, which must be balanced against ...
2. Respecting priority.
That is, giving more important processes higher priority. For
example, if my laptop is trying to fold proteins in the
background, I don't want that activity to appreciably slow down
my compiles and especially don't want it to make my system
seem sluggish when I am modifying these class notes. In
general, interactive jobs should have higher priority
3. Efficiency.
This has two aspects.
1. Do not spend excessive time in the scheduler.
2. Try to keep all parts of the system busy.
4. Low turnaround time
That is, minimize the time from the submission of a job to its
termination. This is important for batch jobs.
Continue

5. High throughput.
That is, maximize the number of jobs completed per
day. Not quite the same as minimizing the (average)
turnaround time as we shall see when we
discuss shortest job first.
6. Low response time.
That is, minimize the time from when an interactive
user issues a command to when the response is
given. This is very important for interactive jobs.
7. Repeatability. Dartmouth (DTSS) wasted cycles and
limited logins for repeatability.
8. Degrade gracefully under load.
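The turnaround-time and throughput goals can be made concrete with a small worked example (the burst times are illustrative), for jobs that all arrive at time 0 and run to completion:

```python
def avg_turnaround(bursts):
    """Average turnaround time for jobs that all arrive at time 0
    and run non-preemptively in the given order."""
    finish_times = []
    t = 0
    for burst in bursts:
        t += burst                 # job finishes when its CPU burst completes
        finish_times.append(t)     # turnaround = finish - arrival (arrival = 0)
    return sum(finish_times) / len(finish_times)

jobs = [8, 4, 1]
fcfs = avg_turnaround(jobs)          # first-come first-served: (8 + 12 + 13) / 3
sjf = avg_turnaround(sorted(jobs))   # shortest job first:      (1 + 5 + 13) / 3
```

Here first-come first-served gives an average turnaround of 11.0 while shortest job first gives about 6.33, even though throughput is identical (3 jobs in 13 time units either way), illustrating why the two goals are not the same.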
Thank you…
