UNIT V
TASK COMMUNICATION
• In a multitasking system, multiple tasks/processes run concurrently (in pseudo parallelism)
and each process may or may not interact with the others.
• Based on the degree of interaction, the processes running on an OS are classified as
Co-operating Processes: In the co-operating interaction model one process requires the inputs
from other processes to complete its execution.
Competing Processes: The competing processes do not share anything among themselves, but they
share the system resources. The competing processes compete for the system resources such as
file, display device, etc.
• Co-operating processes exchange information and communicate through the following
methods.
a. Co-operation through Sharing: The co-operating process exchange data through some
shared resources.
b. Co-operation through Communication: No data is shared between the processes. But they
communicate for synchronization.
The mechanism through which processes/tasks communicate with each other is known as Inter
Process/Task Communication (IPC).
Inter Process Communication is essential for process co-ordination. The various
IPC mechanisms adopted by processes are kernel (Operating System)
dependent.
Some of the important IPC mechanisms adopted by various kernels are explained below.
Shared Memory
Processes share some area of the memory to communicate among them.
• Information to be communicated by the process is written to the shared memory area.
• Other processes which require this information can read the same from the shared memory
area.
• The implementation of shared memory concept is kernel dependent.
• Different mechanisms are adopted by different kernels for implementing this.
• A few among them are:
1. Pipes
2. Memory Mapped Objects
Pipes: ‘Pipe’ is a section of the shared memory used by processes for communicating.
• Pipes follow the client-server architecture.
• A process which creates a pipe is known as a pipe server and a process which connects to
a pipe is known as pipe client.
• It can be unidirectional or bidirectional.
• A unidirectional pipe allows the process connected at one end of the pipe to write to the
pipe and the process connected at the other end of the pipe to read the data, whereas a bi-
directional pipe allows reading and writing at both ends.
• The implementation of ‘Pipes’ is also OS dependent.
• Microsoft Windows Desktop Operating Systems support two types of ‘Pipes’ for IPC.
• Anonymous Pipes: The anonymous pipes are unnamed, unidirectional pipes used for data
transfer between two processes.
• Named Pipes: Named pipe is a named, unidirectional or bi-directional pipe for data
exchange between processes.
• The process which creates the named pipe is known as pipe server.
• A process which connects to the named pipe is known as pipe client.
• With named pipes, any process can act as both client and server allowing point-to-point
communication.
• Named pipes can be used for communicating between processes running on the same
machine or between processes running on different machines connected to a network.
Memory Mapped Objects: Memory mapped object is a shared memory technique adopted by
certain Real-Time Operating Systems for allocating a shared block of memory which can be
accessed by multiple processes simultaneously.
• In this approach a mapping object is created and physical storage for it is reserved and
committed.
• A process can map the entire committed physical area or a block of it to its virtual address
space.
• All read and write operations to this virtual address space by a process are directed to its
committed physical area.
• Any process which wants to share data with other processes can map the physical memory
area of the mapped object to its virtual memory space and use it for sharing the data.
Message Passing
Message passing is a synchronous/asynchronous information exchange mechanism used
for Inter Process/Thread Communication.
The major difference between shared memory and message passing is that
through shared memory a large amount of data can be shared, whereas only a limited
amount of information/data is passed through message passing.
Also, message passing is relatively fast and free from the synchronization overheads
of shared memory.
Based on the message passing operation between the processes, message passing is
classified into
1. Message Queue
2. Mailbox
3. Signalling
Message Queue: Usually the process which wants to talk to another process posts the
message to a First-In-First-Out (FIFO) queue called ‘Message queue’, which stores the
messages temporarily in a system defined memory object, to pass it to the desired process.
Messages are sent and received through send (Name of the process to which the message
is to be sent, message) and receive (Name of the process from which the message is to be
received, message) methods.
The messages are exchanged through a message queue.
The implementation of the message queue, send and receive methods are OS kernel
dependent.
Mailbox: Mailbox is an alternate form of ‘Message queues’ and it is used in certain Real-Time
Operating Systems for IPC.
• Mailbox technique for IPC in RTOS is usually used for one way messaging.
• The task/thread which wants to send a message to other tasks/threads creates a mailbox for
posting the messages.
• The threads which are interested in receiving the messages posted to the mailbox by the
mailbox creator thread can subscribe to the mailbox.
• The thread which creates the mailbox is known as ‘mailbox server’ and the threads which
subscribe to the mailbox are known as ‘mailbox clients’.
• The implementation of mailbox is OS kernel dependent.
• The MicroC/OS-II implements mailbox as a mechanism for inter-task communication.
Signalling: Signalling is a primitive way of communication between processes/threads.
• Signals are used for asynchronous notification, where one process/thread fires a signal
indicating the occurrence of a scenario which the other process(es)/thread(s) are waiting for.
• Signals are not queued, and they do not carry any data.
• The implementation of signals is OS kernel dependent; the VxWorks RTOS kernel
implements ‘signals’ for inter process communication.
Remote Procedure Call and Sockets
Remote Procedure Call (RPC)
• Remote Procedure Call or RPC is the Inter Process Communication (IPC) mechanism used
by a process to call a procedure of another process running on the same CPU or on a
different CPU which is interconnected in a network.
• In the object-oriented language terminology RPC is also known as Remote Invocation or
Remote Method Invocation (RMI).
• RPC is mainly used for distributed applications like client-server applications.
• With RPC it is possible to communicate over a heterogeneous network (i.e., Network
where Client and server applications are running on different Operating systems).
• The CPU/process containing the procedure which needs to be invoked remotely is known
as server.
• The CPU/process which initiates an RPC request is known as client.
• In order to make the RPC communication compatible across all platforms, it should adhere
to certain standard formats.
• Interface Definition Language (IDL) defines the interfaces for RPC.
• Microsoft Interface Definition Language (MIDL) is the IDL implementation from
Microsoft for all Microsoft platforms.
• The RPC communication can be either Synchronous (Blocking) or Asynchronous (Non-
blocking).
• In the Synchronous communication, the process which calls the remote procedure is
blocked until it receives a response back from the other process.
• In asynchronous RPC calls, the calling process continues its execution while the remote
process performs the execution of the procedure.
Socket:
• Socket is a logical endpoint in a two-way communication link between two applications
running on a network.
• A port number is associated with a socket so that the network layer of the communication
channel can deliver the data to the designated application.
• Sockets are of different types, namely, Internet sockets (INET), UNIX sockets, etc.
• The INET socket works on internet communication protocol.
• TCP/IP, UDP, etc. are the communication protocols used by INET sockets.
• INET sockets are classified into:
1. Stream sockets
2. Datagram sockets
• Stream sockets are connection oriented and they use TCP to establish a reliable connection.
• Datagram sockets, on the other hand, are connectionless; they use UDP, which does not guarantee delivery.
TASK SYNCHRONIZATION
• In a multitasking environment, multiple processes run concurrently (in pseudo parallelism)
and share the system resources.
• Apart from this, each process has its own boundary wall, and they communicate with each
other with different IPC mechanisms.
• Imagine a situation where two processes try to access display hardware connected to the
system or two processes try to access a shared memory area where one process tries to
write to a memory location when the other process is trying to read from this.
What could be the result in these scenarios?
Obviously, unexpected results.
How can these issues be addressed?
• The solution is to make each process aware of the access of a shared resource, either directly
or indirectly.
• The act of making processes aware of the access of shared resources by each process, to
avoid conflicts, is known as ‘Task/Process Synchronization’.
Task Communication/Synchronization Issues
• Various synchronization issues may arise in a multitasking environment if processes are
not synchronized properly.
1. Racing: Have a look at the following piece of code

#include <stdio.h>
/* counter is an integer variable and Buffer is a byte array shared
   between two processes, Process A and Process B */
char Buffer[10] = { 1,2,3,4,5,6,7,8,9,10 };
short int counter = 0;

// Process A
void Process_A(void)
{
    int i;
    for (i = 0; i < 5; i++)
    {
        if (Buffer[i] > 0)
            counter++;
    }
}

// Process B
void Process_B(void)
{
    int j;
    for (j = 5; j < 10; j++)
    {
        if (Buffer[j] > 0)
            counter++;
    }
}
• From a programmer’s perspective the value of counter will be 10 at the end of execution of
processes A & B. But it need not always be so.
• The program statement counter++; looks like a single statement from a high-level
programming language (‘C’ language) perspective.
• The low-level implementation of this statement is dependent on the underlying processor
instruction set and the (cross) compiler in use.
• The low-level implementation of the high-level program statement counter++; under
Windows XP operating system running on an Intel Centrino Duo processor is given below.
mov eax,dword ptr [ebp-4] ;Load counter in Accumulator
add eax,1 ; Increment Accumulator by 1
mov dword ptr [ebp-4],eax ;Store counter with Accumulator
• Imagine a situation where a process switching (context switching) happens from Process
A to Process B when Process A is executing the counter++; statement.
• Process A accomplishes the counter++; statement through three different low-level
instructions.
• Now imagine that the process switching happened at the point where Process A executed
the low-level instruction mov eax,dword ptr [ebp-4] and is about to execute the next
instruction add eax,1.
• Process B increments the shared variable ‘counter’ in the middle of the operation where
Process A tries to increment it.
• When Process A gets the CPU time for execution, it starts from the point where it got
interrupted.
• Though the variable counter is incremented by Process B, Process A is unaware of it and
it increments the variable with the old value.
• This leads to the loss of one increment for the variable counter.
• To summarize, Racing or Race condition is the situation in which multiple processes
compete (race) with each other to access and manipulate shared data concurrently. In a Race
condition the final value of the shared data depends on the process which acted on the data
last.
2. Deadlock:
• ‘Deadlock’ is the condition in which a process is waiting for a resource held by another
process which is waiting for a resource held by the first process.
• None of the competing processes will be able to access the resources held by the other
processes, since they are locked by the respective processes.
• Process A holds a resource x and it wants a resource y held by Process B. Process B is
currently holding resource y and it wants the resource x which is currently held by Process
A.
• Both hold their respective resources, and they compete with each other to get the resource
held by the other process.
• The result of the competition is ‘deadlock’.
Conditions favoring Deadlock.
1. Mutual Exclusion: The criterion that only one process can hold a resource at a time,
meaning processes should access shared resources with mutual exclusion.
Example: Accessing the display hardware in an embedded device.
2. Hold and Wait: The condition in which a process holds a shared resource by acquiring the
lock controlling the shared access and waiting for additional resources held by other
processes.
3. No Resource Preemption: The criterion that the operating system cannot take back a resource
from a process which is currently holding it; the resource can only be released
voluntarily by the process holding it.
4. Circular Wait: A process is waiting for a resource which is currently held by another
process which in turn is waiting for a resource held by the first process.
In general, there exists a set of waiting processes P0, P1, …, Pn such that P0 is waiting for a
resource held by P1, P1 is waiting for a resource held by P2, …, and Pn is waiting for a
resource held by P0. This forms a
circular wait queue.
Deadlock Handling: A smart OS may foresee the deadlock condition and will act proactively
to avoid such a situation.
• Now if a deadlock occurs, how does the OS respond to it? The reaction to a deadlock condition
is not uniform across operating systems. The OS may adopt any of the following techniques to detect and
prevent deadlock conditions.
1. Ignore Deadlocks
2. Detect and Recover
3. Avoid Deadlocks
4. Prevent Deadlocks
Ignore Deadlocks:
• Always assume that the system design is deadlock free.
• This is acceptable when the cost of removing a deadlock is large compared to the
likelihood of a deadlock happening.
• UNIX is an example of an OS following this principle.
Detect and Recover
• This approach suggests the detection of a deadlock situation and recovery from it.
• This is similar to the deadlock condition that may arise at a traffic junction. When
vehicles from different directions compete to cross the junction, a deadlock (traffic jam)
results. Once a deadlock (traffic jam) happens at the junction, the only
solution is to back up the vehicles from one direction and allow the vehicles from the opposite
direction to cross the junction. If the traffic is too high, many vehicles may have to be
backed up to resolve the traffic jam. This technique is also known as the ‘back up cars’
technique.
• Operating systems keep a resource graph in their memory. The resource graph is updated
on each resource request and release.
Avoid Deadlocks
• Deadlock is avoided by the careful resource allocation techniques by the Operating System.
It is similar to the traffic light mechanism at junctions to avoid the traffic jams.
Prevent Deadlocks
• Prevent the deadlock condition by negating one of the four conditions favoring the
deadlock situation.
• Ensure that a process does not hold any other resources when it requests a resource. This
can be achieved by implementing the following set of rules/guidelines in allocating
resources to processes.
1. A process must request all its required resources together, and the resources should be allocated
before the process begins its execution.
2. Grant resource allocation requests from processes only if the process does not hold a
resource currently.
Ensure that resource preemption (resource releasing) is possible at the operating system
level.
• This can be achieved by implementing the following set of rules/guidelines in resource
allocation and releasing.
1. Release all the resources currently held by a process if a request made by the process for a
new resource cannot be fulfilled immediately.
2. Add the resources which are preempted (released) to a resource list describing the resources
which the process requires to complete its execution.
3. Reschedule the process for execution only when the process gets its old resources and the
new resource which is requested by the process.
3. Livelock
• The Livelock condition is similar to the deadlock condition except that a process in livelock
condition changes its state with time.
• In a livelock condition a process always does something but is unable to make any progress
towards completing its execution.
• The livelock condition is better explained with a real-world example: two people
attempting to cross each other in a narrow corridor. Both persons move towards the same
side of the corridor to allow the other person to pass. Since the corridor is narrow, neither
of them can get past. Here both persons perform some action, but they are still
unable to achieve their target of crossing each other.
4. Starvation
• In the multitasking context, starvation is the condition in which a process does not get the
resources required to continue its execution for a long time.
• As time progresses the process starves on resource.
• Starvation may arise due to various conditions like scheduling policies favoring high
priority tasks and tasks with shortest execution time, etc.
The Dining Philosophers’ Problem
• The ‘Dining philosophers’ problem’ is an interesting example for synchronization issues
in resource utilization.
• Five philosophers are sitting around a round table, involved in eating and brainstorming.
• At any point of time each philosopher will be in any one of the three states: eating, hungry
or brainstorming.
• For eating, each philosopher requires 2 forks. There are only 5 forks available on the dining
table, arranged such that there is one fork between every two philosophers.
• A philosopher can only use the forks on his/her immediate left and right, and only in the
order: pick up the left fork first and then the right fork.
Let’s analyze the various scenarios that may occur in this situation.
Scenario 1:
All the philosophers are brainstorming together and then try to eat together. Each philosopher
picks up the left fork and is unable to proceed, since two forks are required for eating the spaghetti
present in the plate. Philosopher 1 thinks that Philosopher 2 sitting to the right of him/her will put
the fork down and waits for it. Philosopher 2 thinks that Philosopher 3 sitting to the right of him/her
will put the fork down and waits for it, and so on. This forms a circular chain of un-granted
requests. If the philosophers continue in this state waiting for the fork from the philosopher sitting
to the right of each, they will not make any progress in eating, and this will result in starvation of
the philosophers and deadlock.
Scenario 2
• All the philosophers start brainstorming together. One of the philosophers becomes hungry and
picks up the left fork. When the philosopher is about to pick up the right fork, the
philosopher sitting to his right also becomes hungry and tries to grab his own left fork, which is
the right fork of his neighbouring philosopher who is trying to lift it, resulting in a ‘Race
condition’.
Scenario 3
• All the philosophers are brainstorming together and then try to eat together. Each
philosopher picks up the left fork and is unable to proceed, since two forks are required for
eating the spaghetti present in the plate. Each of them anticipates that the adjacently sitting
philosopher will put his/her fork down and waits for a fixed duration and after this puts the
fork down. Each of them again tries to lift the fork after a fixed duration of time. Since all
philosophers are trying to lift the fork at the same time, none of them will be able to grab
two forks. This condition leads to livelock and starvation of philosophers, where each
philosopher tries to do something, but they are unable to make any progress in achieving
the target.
Solution
• We need to find out alternative solutions to avoid the deadlock, livelock, racing and
starvation condition that may arise due to the concurrent access of forks by philosophers.
This situation can be handled in many ways by allocating the forks in different allocation
techniques including Round Robin allocation, FIFO allocation, etc.
One solution that we could think of is:
• When a philosopher feels hungry, he/she checks whether the philosopher sitting to the left
and right of him is already using the fork, by checking the state of the associated
semaphore. If the forks are in use by the neighbouring philosophers, the philosopher waits
till the forks are available. A philosopher when finished eating puts the forks down and
informs the philosophers sitting to his/her left and right, who are hungry (waiting for the
forks), by signalling the semaphores associated with the forks.
• In the operating system context, the dining philosophers represent the processes and forks
represent the resources. The dining philosophers’ problem is an analogy of processes
competing for shared resources and the different problems like racing, deadlock, starvation
and livelock arising from the competition.
Readers-Writers Problem
• The Readers-Writers problem is a common issue observed in processes competing for
limited shared resources. The Readers-Writers problem is characterized by multiple
processes trying to read and write shared data concurrently. A typical real-world example
for the Readers-Writers problem is the banking system where one process tries to read the
account information like available balance and the other process tries to update the
available balance for that account. This may result in inconsistent results. If multiple
processes try to read shared data concurrently it may not create any impact, whereas
when multiple processes try to write and read concurrently it will definitely create
inconsistent results. Proper synchronization techniques should be applied to avoid the
readers-writers problem.
Task Synchronization Techniques
• The technique used for task synchronization in a multitasking environment is mutual
exclusion.
• Mutual exclusion blocks a process that tries to access a shared resource which is already in use.
• Based on the behavior of blocked process, mutual exclusion methods can be classified into
two categories.
1. Mutual Exclusion through Busy Waiting/ Spin Lock
2. Mutual Exclusion through Sleep & Wakeup
Semaphore
• Semaphore is a sleep and wakeup based mutual exclusion implementation for shared
resource access.
• Semaphore is a system resource, and the process which wants to access the shared resource
first acquires this system object to indicate to the other processes which want the shared
resource that the shared resource is currently acquired by it.
• The resources which are shared among processes can be either for exclusive use by one
process at a time or for use by a limited number of processes at a time.
• The display device of an embedded system is a typical example for the shared resource
which needs exclusive access by a process.
• The Hard disk (secondary storage) of a system is a typical example for sharing the resource
among a limited number of multiple processes.
• Based on the implementation of the sharing limitation of the shared resource, semaphores
are classified into two
1. ‘Binary Semaphore’
2. ‘Counting Semaphore’.
Binary Semaphore
• The binary semaphore provides exclusive access to shared resource by allocating the
resource to a single process at a time and not allowing the other processes to access it when
it is being owned by a process.
• The implementation of the binary semaphore is OS kernel dependent. Under certain OS kernels
it is referred to as a mutex.
• The state of a mutex object is set to signalled when it is not owned by any process/thread
and set to non-signalled when it is owned by any process/thread.
• A real-world example for the mutex concept is the hotel accommodation system (lodging
system).
Counting Semaphore
• Unlike a binary semaphore, the ‘Counting Semaphore’ limits the access of resources by a
fixed number of processes/threads.
• ‘Counting Semaphore’ maintains a count between zero and a maximum value.
• It limits the usage of the resource to the maximum value of the count supported by it.
• The state of the counting semaphore object is set to ‘signalled’ when the count of the object
is greater than zero.
• The count associated with a ‘Semaphore object’ is decremented by one when a
process/thread acquires it and the count is incremented by one when a process/thread
releases the ‘Semaphore object’.
• The state of the ‘Semaphore object’ is set to non-signalled when the semaphore is acquired
by the maximum number of processes/threads that the semaphore can support (i.e. when
the count associated with the ‘Semaphore object’ becomes zero)
DEVICE DRIVERS
• A device driver is a piece of software that acts as a bridge between the operating system and
the hardware.
• Device drivers are responsible for initiating and managing the communication with the
hardware peripherals.
• They are responsible for establishing the connectivity, initializing the hardware (setting up
various registers of the hardware device) and transferring data.
• An embedded product may contain different types of hardware components like Wi-Fi
module, File systems, Storage device interface, etc.
• The initialization of these devices and the protocols required for communicating with these
devices may be different.
• All these requirements are implemented in drivers and a single driver will not be able to
satisfy all these.
• Hence each hardware (more specifically each class of hardware) requires a unique driver
component.
• However, regardless of the OS type, a device driver implements the following:
1. Device (Hardware) Initialization and Interrupt configuration
2. Interrupt handling and processing
3. Client interfacing (Interfacing with user applications)
• The Device (Hardware) initialization part of the driver deals with configuring the different
registers of the device (target hardware).
• The interrupt configuration part deals with configuring the interrupts that needs to be
associated with the hardware.
• The client interfacing implementation makes use of the Inter Process communication
mechanisms supported by the embedded OS for communicating and synchronizing with
user applications and drivers.
HOW TO CHOOSE AN RTOS
• The decision of choosing an RTOS for an embedded design is very crucial.
• A lot of factors needs to be analyzed carefully before making a decision on the selection of
an RTOS.
• These factors can be either functional or nonfunctional.
Functional Requirements
Processor Support: Not all RTOSs support all kinds of processor architectures. It is
essential to ensure that the RTOS supports the target processor.
Memory Requirements: The OS requires ROM for holding the OS files, normally stored in a
non-volatile memory like FLASH. The OS also requires working memory (RAM)
for loading the OS services. Since embedded systems are memory constrained, it is essential to
evaluate the minimal ROM and RAM requirements of the OS under consideration.
Real-time Capabilities: It is not mandatory that the operating system for every embedded system
be real-time. Analyze the real-time capabilities of the OS under consideration and the
standards met by the operating system for real-time capabilities.
Kernel and Interrupt Latency: The kernel of the OS may disable interrupts while executing
certain services and it may lead to interrupt latency. For an embedded system whose response
requirements are high, this latency should be minimal.
Inter Process Communication and Task Synchronization: The implementation of Inter
Process Communication and Synchronization is OS kernel dependent. Certain kernels may provide
a bunch of options whereas others provide very limited options.
Modularisation Support: It is very useful if the OS supports modularisation, wherein the
developer can choose the essential modules and re-compile the OS image for functioning.
Windows CE is an example of a highly modular operating system.
Support for Networking and Communication: The OS kernel may provide stack
implementation and driver support for a bunch of communication interfaces and networking.
Ensure that the OS under consideration provides support for all the interfaces required by the
embedded product.
Development Language Support: Certain operating systems include the run time libraries
required for running applications written in languages like Java and C.
• A Java Virtual Machine (JVM) customized for the Operating System is essential for
running java applications.
• Similarly, the .NET Compact Framework (.NETCF) is required for running Microsoft®
.NET applications on top of the Operating System.
Non-functional Requirements
Custom Developed or Off the Shelf: Depending on the OS requirement, it is possible to go for
the complete development of an operating system suiting the embedded system needs or use an
off-the-shelf, readily available operating system, which is either a commercial product or an
Open-Source product that closely matches the system requirements.
Cost: The total cost of developing or buying the OS and maintaining it, whether a commercial
product or a custom build, needs to be evaluated before deciding on the selection of the OS.
Development and Debugging Tools Availability: Certain Operating Systems may be superior in
performance, but the availability of tools for supporting the development may be limited. Explore
the different tools available for the OS under consideration.
Ease of Use: How easy it is to use a commercial RTOS is another important factor that needs
to be considered in the RTOS selection.
After Sales: For a commercial embedded RTOS, after sales in the form of e-mail, on-call
services, etc. for bug fixes, critical patch updates and support for production issues, etc. should be
analyzed thoroughly.