Lecture 27 28 29

Task/Process Synchronization is essential to manage access to shared resources in multitasking environments, preventing issues like racing, deadlock, and priority inversion. Techniques such as Priority Inheritance and Priority Ceiling address priority inversion, while mutual exclusion can be enforced through methods like busy waiting or sleep/wakeup mechanisms. Semaphores, including counting and binary semaphores, are used to synchronize resource access among processes, ensuring efficient and conflict-free operations.

# Task/Process Synchronization:

* The act of making each process aware of other processes' access to shared resources, so
that conflicts are avoided, is known as ‘Task/Process Synchronisation’.

* Various synchronisation issues may arise in a multitasking environment if processes are not
synchronised properly.

# Task/Process Synchronization Issues:

* Racing
* Deadlock
* The Dining Philosopher Problem
* Producer-Consumer Buffer Problem
* Readers-Writers Problem
* Priority Inversion Problem

Home work: Read about the Task Synchronization Issues from section 10.8 (K.V. Shibu).
# Priority Inversion Problem:

* It is a byproduct of combining blocking-based (lock-based) process synchronisation
with pre-emptive priority scheduling.

* It is the condition in which a high priority task must wait for a low priority task to release
a resource shared between the two, while a medium priority task which doesn’t require the
shared resource continues its execution by preempting the low priority task.

* Priority based preemptive scheduling technique ensures that a high priority task is always
executed first, whereas the lock based process synchronisation mechanism (like mutex,
semaphore, etc.) ensures that a process will not access a shared resource, which is currently
in use by another process.

* Two approaches to handle the priority inversion problem are:


- Priority Inheritance
- Priority Ceiling

* In Priority Inheritance, a low-priority task that is currently accessing (by holding the lock) a
shared resource requested by a high-priority task temporarily ‘inherits’ the priority of that
high-priority task, from the moment the high-priority task raises the request.

* This temporary priority boost lets the low priority task continue its execution and release the
shared resource as soon as possible.

* In ‘Priority Ceiling’, a priority is associated with each shared resource. The priority associated
with each resource is the priority of the highest priority task which uses that shared resource.

* Whenever a task accesses a shared resource, the scheduler elevates the priority of the task
to that of the ceiling priority of the resource.

* ‘Priority Ceiling’ brings the added advantage of sharing resources without the need for
synchronisation techniques like locks. The biggest drawback of ‘Priority Ceiling’ is that it may
produce hidden priority inversion.
Fig: Priority Inversion Problem
# Commonly used Priority Inversion Handling mechanism:

- Priority Inheritance
- Priority Ceiling
(1.) Priority Inheritance

Fig: Handling Priority Inversion Problem with Priority Inheritance

Note: In CMSIS-RTOS RTX, priority inversion is tackled using the priority inheritance method for the
Mutex.
(2.) Priority Ceiling

Fig: Handling Priority Inversion Problem with Priority Ceiling


# Task Synchronisation Techniques:

* Process/Task synchronisation is essential for the following:

1. Avoiding conflicts in resource access (racing, deadlock, starvation, livelock, etc.) in a
multitasking environment.
2. Ensuring proper sequence of operation across processes.
3. Communicating between processes.

* In order to synchronise the access to shared resources, the access to the critical section
should be exclusive.

* The exclusive access to critical section of code is provided through mutual exclusion mechanism.

* A mutual exclusion mechanism blocks a process, and it can be enforced in different ways depending
on the behaviour of the blocked process:

- Mutual Exclusion through busy waiting/Spin Lock


- Mutual Exclusion through Sleep and Wakeup

Note: The code memory area which holds the program instructions (piece of code) for accessing a
shared resource (like shared memory, shared variables, etc.) is known as ‘critical section’.
# Mutual Exclusion through Busy Waiting/Spin Lock:

* Busy waiting is the simplest method for enforcing mutual exclusion.

* The ‘Busy waiting’ technique uses a lock variable for implementing mutual exclusion.

* Each process checks this lock variable before entering the critical section.

* The lock is set to ‘1’ by a process if the process is already in its critical section; otherwise
the lock is set to ‘0’.

reading ---> comparing ---> setting of the lock variable

Fig: Code Snippet to show how busy waiting enforces mutual exclusion mechanism

* The major challenge in implementing lock-variable based synchronisation is the
non-availability of a single atomic instruction combining the read, test, and set of the lock variable.

Solution: a single atomic instruction (e.g. test-and-set) which performs the accessing, testing, and
modification of the lock variable in one indivisible step.
# Drawbacks of Busy waiting/ spin lock based mechanism:

* The lock based mutual exclusion implementation always checks the state of a lock and waits till
the lock is available. This keeps the processes/threads always busy and forces the processes to
wait for the availability of the lock for proceeding further.

* If the lock is being held for a long time by a process, and if that process is preempted by the OS, the other
process waiting for this lock may have to spin for a long time before getting it.
# Mutual Exclusion through Sleep and Wakeup:

* The ‘Busy waiting’ mutual exclusion enforcement mechanism used by processes makes the CPU
always busy by checking the lock to see whether they can proceed. This results in the wastage
of CPU time and leads to high power consumption.

* An alternative to ‘busy waiting’ is the ‘Sleep & Wakeup’ mechanism.

* When a process is not allowed to access the critical section, which is currently being locked
by another process, the process undergoes ‘Sleep’ and enters the ‘blocked’ state.

* It is awakened by the process which currently owns the critical section by sending a wake up
message.

# Important techniques for "Sleep and Wakeup" policy implementation:

- Counting Semaphore
- Binary Semaphore
- Mutex (a special binary semaphore)
- Critical Section Object
- Events
# Semaphore:

* It is a kernel object that one or more processes can acquire or release for the purpose of
synchronising shared resource access.

* A process which wants to access the shared resource first acquires the semaphore, to indicate
to the other processes wanting the shared resource that it is currently acquired.

* A shared resource may be intended either for exclusive use by one process at a time
or for use by a number of processes at a time.

* Therefore, based on the sharing limitation of the shared resource, the semaphore can be
classified as:
- Counting Semaphore
- Binary Semaphore

* The "Binary Semaphore" provides exclusive access to a shared resource by allocating the
resource to a single process at a time.

* The "Counting Semaphore" limits concurrent access to a resource to a fixed number of processes.

* A counting semaphore maintains a count between zero and a maximum value. It limits the
usage of the resource to the maximum value of the count supported by it.

* Initially, the value of a counting semaphore is set to its maximum. When a process acquires it,
the count value is decremented by 1, and when a process releases it, the count value is
incremented by 1.

* If the count value is at zero, no new process can acquire the semaphore.

Note: Using Semaphore, access to a group of identical peripherals can be managed. Example: multiple DMA channels
available on the microcontroller chip.
Fig: Counting Semaphore

Fig: Binary Semaphore


A real-world example of the counting semaphore concept is the dormitory system for accommodation.
A real-world example of the mutex concept is the hotel accommodation system.
Note:

1. The mutual exclusion semaphore (mutex) is a special implementation of the binary semaphore,
used to prevent the priority inversion problem.

2. A mutex has an option to raise the priority of the task owning it to the priority of the
highest-priority task that is pended while attempting to acquire the semaphore already in use by a
low priority task.

3. This ensures that the low priority task which is currently holding the semaphore, when a high
priority task is waiting for it, is not pre-empted by a medium priority task.
# Event Registers:
* Some kernels provide a special register as part of the Process/Task Control Block
and term it an event register.

* It is an object belonging to a task and consists of a group of binary event flags used to track
the occurrence of specific events.

* Depending on a given kernel's implementation of this mechanism, an event register can be 8-,
16-, or 32-bits wide, maybe even more. In CMSIS-RTOS, you can have 31 event flags associated
with an event.

* Each bit in the event register is treated like a binary flag (also called an event flag) and can
be either set or cleared.

* Through the event register, a task can check for the presence of particular events that can
control its execution.

* An external source, such as another task/process, can set bits in the event register to inform
the task/process that a particular event has occurred.

* Applications define the event associated with an event flag. This definition must be
agreed upon between the event sender and receiver using the event register.

You might also like