IPC (Inter-process communication)
Processes send and receive messages using an Application Programming Interface (API).
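As a rough illustration, here is a minimal sketch of message passing using the POSIX pipe API
(the message text is purely illustrative):

    /* Minimal sketch: message passing between parent and child via a pipe. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];
        if (pipe(fd) == -1) return 1;       /* fd[0] = read end, fd[1] = write end */

        if (fork() == 0) {                  /* child: receive the message */
            char buf[64];
            close(fd[1]);
            ssize_t n = read(fd[0], buf, sizeof buf - 1);
            if (n > 0) { buf[n] = '\0'; printf("child got: %s\n", buf); }
            return 0;
        }
        close(fd[0]);                       /* parent: send the message */
        const char *msg = "hello";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);
        return 0;
    }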
Semaphores
These are used to synchronize events between processes. They are integer values greater than or equal to 0.
Shared Memory
Allows processes to exchange data through a defined area of memory, e.g. one process could write to an
area of memory and another could read from it. To do this, the writing process must check that the
reading process is not reading from that area at the time, and vice versa.
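A minimal sketch of the idea, assuming a POSIX anonymous shared mapping between a parent and a
child process; the wait() call stands in crudely for the reader/writer check described above:

    /* Minimal sketch: parent and child exchange data through a shared mapping. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/wait.h>

    int main(void) {
        /* One page of memory shared between parent and child. */
        char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (shared == MAP_FAILED) return 1;

        if (fork() == 0) {                    /* child: the writing process */
            strcpy(shared, "data from child");
            return 0;
        }
        wait(NULL);                           /* crude synchronization: wait for the writer */
        printf("parent read: %s\n", shared);  /* parent: the reading process */
        munmap(shared, 4096);
        return 0;
    }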
Sockets
These are typically used to connect over a network, between a client and a server (although peer-to-peer
connections are also possible).
Sockets are endpoints for a connection and provide a standard interface that is independent of the
computer and operating system.
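A minimal sketch of the client side, assuming a server is already listening on 127.0.0.1 port 8080
(both the address and the port are illustrative values):

    /* Minimal sketch: a TCP client connecting to a server. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);   /* endpoint for the connection */
        if (fd == -1) return 1;

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(8080);
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == -1) {
            perror("connect");
            return 1;
        }
        const char *req = "hello\n";
        write(fd, req, strlen(req));                 /* send to the server */
        close(fd);
        return 0;
    }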
Concurrency
This is a state in which processes exist simultaneously with other processes. It arises in three different
contexts:
1. Multiple applications.
2. Structured applications.
3. Operating system structure.
Principles of concurrency
In a single-processor multiprogramming system, processes are interleaved in time to yield the appearance
of simultaneous execution. Actual parallel processing is not achieved; the processor simply switches back
and forth between processes. In a multiple-processor system it is possible not only to interleave
processes but also to overlap them.
Both interleaved and overlapped processes can be viewed as examples of concurrent processing, and both
present the same problems. The relative speed of execution cannot be predicted: it depends on the
activities of other processes, the way the OS handles interrupts, and the scheduling policies of the OS.
The following difficulties arise:
1. The sharing of global resources – if two processes both make use of a global variable, and both
perform reads and writes on that variable, then the order in which the various reads and writes are
executed is critical (see the sketch after this list).
2. It is difficult for the OS to manage the allocation of resources optimally – for example, it may be
inefficient for the OS to simply lock an I/O channel and prevent its use by other processes.
3. It becomes very difficult to locate a programming error, because results are usually not
reproducible.
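A minimal sketch of difficulty 1, using two POSIX threads in place of processes (the loop count
is arbitrary):

    /* Minimal sketch: two threads performing unsynchronized reads and writes
       on a shared global variable. */
    #include <stdio.h>
    #include <pthread.h>

    long counter = 0;                 /* the shared global resource */

    void *increment(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;                /* read-modify-write: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, increment, NULL);
        pthread_create(&b, NULL, increment, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* Expected 2000000, but interleaved updates are lost, so the result
           varies from run to run -- which is also why such bugs are hard to
           reproduce (difficulty 3). */
        printf("counter = %ld\n", counter);
        return 0;
    }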
Synchronization
The process by which the OS regulates the order in which processes access shared data is known as
synchronization.
Synchronization issues
1. Mutual exclusion – a method of ensuring that when one process is accessing a shared
resource (a file or variable), the others are excluded from doing the same (see the mutex sketch
after this list).
2. Race condition – arises when two concurrent processes share the same storage area: a condition
in which the behavior of two or more processes depends on the relative rate at which each
process executes its program.
3. Critical section – a segment of code that cannot be executed while some other process is in a
corresponding segment of code; enforcing this prevents two processes from accessing the same
shared area at the same time.
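A minimal sketch of mutual exclusion, using a POSIX mutex to protect a critical section (the
shared counter from the earlier race-condition sketch):

    /* Minimal sketch: a mutex enforcing mutual exclusion around a critical
       section (the increment of the shared counter). */
    #include <stdio.h>
    #include <pthread.h>

    long counter = 0;
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *increment(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);    /* enter critical section: others excluded */
            counter++;
            pthread_mutex_unlock(&lock);  /* leave critical section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, increment, NULL);
        pthread_create(&b, NULL, increment, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld\n", counter);  /* now reliably 2000000 */
        return 0;
    }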
Semaphores
This is the fundamental principle: two or more processes can cooperate by means of simple signals,
such that a process can be forced to stop at a specified place until it has received a specified
signal. Any complex coordination requirement can be satisfied by the appropriate structure of
signals. For signaling, special variables called semaphores are used. To transmit a signal via a
semaphore, a process executes the primitive signal(s). To receive a signal via a semaphore, a
process executes the primitive wait(s); if the corresponding signal has not yet been transmitted,
the process is suspended until the transmission takes place.
To achieve the desired effect we can view the semaphore as a variable that has an integer value
upon which three operations are defined:
1. A semaphore may be initialized to a non-negative value.
2. The wait operation decrements the semaphore value. If the value becomes negative, the
process executing the wait is blocked.
3. The signal operation increments the semaphore value. If the resulting value is not positive,
a process blocked by a wait operation is unblocked.
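A minimal sketch of this signaling pattern using POSIX semaphores, where sem_wait and sem_post
play the roles of wait(s) and signal(s). (Note that POSIX semaphores block at zero rather than
letting the value go negative, but the observable behavior is the same.)

    /* Minimal sketch: one thread waits for a signal that another transmits,
       using a semaphore initialized to 0 (so the first wait blocks). */
    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    sem_t sem;

    void *waiter(void *arg) {
        (void)arg;
        sem_wait(&sem);                 /* wait(s): blocks until a signal arrives */
        printf("signal received\n");
        return NULL;
    }

    int main(void) {
        sem_init(&sem, 0, 0);           /* initialized to a non-negative value (0) */
        pthread_t t;
        pthread_create(&t, NULL, waiter, NULL);
        printf("transmitting signal\n");
        sem_post(&sem);                 /* signal(s): increments the value, unblocks */
        pthread_join(t, NULL);
        sem_destroy(&sem);
        return 0;
    }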
Semaphores are operated on by the signal operation, which increments the semaphore value, and the
wait operation, which decrements it. The initial value of a semaphore indicates the number of wait
operations that can be performed on the semaphore, thus:
V = I – W + S, where:
I = the initial value of the semaphore
W = the number of completed wait operations performed on the semaphore
S = the number of signal operations performed on it
V = the current value of the semaphore (which must be greater than or equal to zero)
As V >= 0, then I – W + S >= 0, which gives us:
W <= I + S
Thus the number of wait operations must be less than or equal to the initial value of the semaphore
plus the number of signal operations. A binary semaphore has an initial value of 1 (I = 1), thus
W <= S + 1.
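A small sketch that exercises this relation with a POSIX semaphore; sem_trywait fails with EAGAIN
instead of blocking, which makes the limit visible (the values I = 2, S = 1 are illustrative):

    /* Minimal sketch checking W <= I + S: with I = 2 and S = 0, only two
       waits can complete; the third fails until a signal is performed. */
    #include <stdio.h>
    #include <errno.h>
    #include <semaphore.h>

    int main(void) {
        sem_t s;
        sem_init(&s, 0, 2);                       /* I = 2 */

        sem_trywait(&s);                          /* W = 1: succeeds */
        sem_trywait(&s);                          /* W = 2: succeeds, value now 0 */
        if (sem_trywait(&s) == -1 && errno == EAGAIN)
            printf("third wait refused: W would exceed I + S\n");

        sem_post(&s);                             /* S = 1, so W may now reach 3 */
        if (sem_trywait(&s) == 0)
            printf("after one signal the third wait completes\n");

        sem_destroy(&s);
        return 0;
    }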
MEMORY MANAGEMENT
Memory allocation algorithms
1. First fit – the data is placed in the first free block that is big enough to hold it.
2. Best fit – the data is placed in the smallest free block that is big enough to hold it.
3. Worst fit – the data is placed in the largest free block.
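As a rough illustration, a first-fit scan over a free list might look like the following minimal
sketch (the block array and sizes are hypothetical):

    /* Minimal sketch of first-fit placement over a list of free blocks. */
    #include <stdio.h>

    #define NBLOCKS 5

    int free_size[NBLOCKS] = {100, 500, 200, 300, 600};  /* free block sizes */

    /* Return the index of the first free block big enough, or -1 if none. */
    int first_fit(int request) {
        for (int i = 0; i < NBLOCKS; i++)
            if (free_size[i] >= request) {
                free_size[i] -= request;   /* carve the allocation out of the block */
                return i;
            }
        return -1;
    }

    int main(void) {
        printf("212 bytes -> block %d\n", first_fit(212));  /* block 1 (500) */
        printf("417 bytes -> block %d\n", first_fit(417));  /* block 4 (600) */
        return 0;
    }

Best fit would instead scan the whole list and choose the smallest block that still fits (block 3,
of size 300, for the 212-byte request), while worst fit would choose the largest (block 4).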
Find the difference between fixed-size block allocation, buddy blocks, paging, and file systems.