
Module 4: Interprocess Communication

Introduction

Communicating between processes is generally an important component of developing a fully functional
operating system. In a monolithic operating system, there is conceptually one 'process' that makes up the
kernel of the operating system, so interprocess communication isn't quite as critical in that operating
system architecture. In a layered or microkernel approach, where operating system components are
implemented as individual processes, interprocess communication mechanisms and primitives are far
more important. However, even in a monolithic operating system, you are typically dealing with interrupts.
An interrupt in a monolithic operating system creates a different thread of execution within the kernel,
and there needs to be a way to communicate between these threads in this context as well.

Types of Interprocess Communication

Here are some of the types of interprocess communication an operating system can use and provide:

• Signals
• Shared memory
• Messages
• Pipes
• Semaphores
• Sockets (Network Communication)

Signals are the simplest interprocess communication method and provide the least amount of information.
A signal is a service from the kernel to user processes that mimics the interrupt vector mechanism used in
the kernel.
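
As a concrete illustration (a minimal sketch, not taken from the text), here is how a UNIX process might install a handler for SIGUSR1 using the standard sigaction( ) call:

    /* Sketch: catch SIGUSR1 in a user process (error checking omitted). */
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_signal = 0;

    static void handler(int signum)
    {
        got_signal = 1;          /* only async-signal-safe work in a handler */
    }

    int main(void)
    {
        struct sigaction sa = { 0 };
        sa.sa_handler = handler;
        sigaction(SIGUSR1, &sa, NULL);

        printf("send me SIGUSR1: kill -USR1 %d\n", (int)getpid());
        while (!got_signal)
            pause();             /* block until any signal arrives */
        printf("got SIGUSR1\n");
        return 0;
    }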

Shared memory is one of the fastest methods of interprocess communication. After the shared memory is
set up, there are no system calls required to communicate between the processes. However,
synchronization between processes communicating using shared memory must be done carefully.
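
As a rough sketch of that setup cost, the POSIX shared memory calls below create a region once, and afterwards the parent and child communicate through it with plain memory reads and writes (the name "/demo_shm" is an arbitrary illustrative choice; error checking is omitted, and some systems need -lrt when linking):

    /* Sketch: two related processes share a counter via POSIX shared memory. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, sizeof(int));                 /* size the region */

        int *counter = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);

        if (fork() == 0) {
            *counter = 42;      /* plain store: no system call to communicate */
            return 0;
        }
        wait(NULL);
        printf("counter = %d\n", *counter);         /* parent sees 42 */
        shm_unlink("/demo_shm");
        return 0;
    }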

Pipes are a method of streaming data from one process to another.
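
A minimal sketch (error handling omitted) of a UNIX pipe streaming bytes from a child to its parent:

    /* Sketch: stream bytes from child to parent through a pipe. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        pipe(fd);                    /* fd[0] = read end, fd[1] = write end */

        if (fork() == 0) {           /* child: writer */
            close(fd[0]);
            const char *msg = "hello through the pipe\n";
            write(fd[1], msg, strlen(msg));
            return 0;
        }

        close(fd[1]);                /* parent: reader */
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        buf[n] = '\0';
        printf("parent read: %s", buf);
        wait(NULL);
        return 0;
    }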

Messages are a method of sending discrete record-bounded packages of data from one process to
another.
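
One concrete form of this is a POSIX message queue; the sketch below (the queue name "/demo_mq" is an arbitrary choice, and Linux needs -lrt when linking) sends and receives a single record:

    /* Sketch: one record through a POSIX message queue. */
    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 4, .mq_msgsize = 64 };
        mqd_t q = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);

        mq_send(q, "record one", strlen("record one") + 1, 0);

        char buf[64];
        mq_receive(q, buf, sizeof(buf), NULL);  /* delivers the whole record */
        printf("received: %s\n", buf);

        mq_close(q);
        mq_unlink("/demo_mq");
        return 0;
    }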

Semaphores are a method of allowing processes to synchronize with one another.
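
For instance, a named POSIX semaphore can make one process wait for another, as in this sketch (the name "/demo_sem" is arbitrary; error checking omitted):

    /* Sketch: parent blocks until its child has finished a step. */
    #include <fcntl.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        sem_t *sem = sem_open("/demo_sem", O_CREAT, 0600, 0); /* count = 0 */

        if (fork() == 0) {
            printf("child: doing the protected step\n");
            sem_post(sem);           /* signal: step is done */
            return 0;
        }

        sem_wait(sem);               /* block until the child posts */
        printf("parent: child finished, continuing\n");
        wait(NULL);
        sem_unlink("/demo_sem");
        return 0;
    }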

Sockets can be message based or streamed data. The only real difference between sockets and the
other interprocess mechanisms described here is that sockets allow processes that reside on different
computers (hosts) to communicate over networks. Distributed operating systems, where the kernel is
spread out among multiple computers, make use of sockets to communicate.
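
As a small sketch, the program below sends itself one UDP datagram over the loopback interface; with a remote address instead of 127.0.0.1, the same calls would cross hosts (the port number 9999 is an arbitrary choice, and error checking is omitted):

    /* Sketch: one message-based (UDP) datagram over a socket. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);   /* message-based socket */

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(9999);
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
        bind(s, (struct sockaddr *)&addr, sizeof(addr));

        sendto(s, "ping", 4, 0, (struct sockaddr *)&addr, sizeof(addr));

        char buf[16];
        ssize_t n = recvfrom(s, buf, sizeof(buf) - 1, 0, NULL, NULL);
        buf[n] = '\0';
        printf("received: %s\n", buf);

        close(s);
        return 0;
    }
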
Synchronization

Interprocess communication deals with multiple processes executing concurrently (either conceptually on
a single processor system, or in actual fact on a multiprocessor system). From the earliest days of
multiprogramming systems, one problem that crops up is race conditions. Your text gives some examples
of race conditions - one simple example is two processes that want to use the same printer at the same
time. (Your text talks about a print spooler, which is actually one method to solve the synchronization
problem for the printer.) The area of your program where a race condition might exist is termed a critical
region.
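
If you want to see a race condition firsthand, the sketch below (using POSIX threads purely for illustration) has two threads increment a shared counter with no synchronization; the final total is usually wrong because counter++ is not atomic:

    /* Sketch of a classic race: compile with  cc race.c -lpthread  */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;           /* shared, unprotected variable */

    static void *worker(void *arg)
    {
        for (int i = 0; i < 1000000; i++)
            counter++;                 /* load, add, store: a critical region */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld (expected 2000000)\n", counter);
        return 0;
    }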

Have you observed race condition issues in your job? The supplied video in the course content for this
module describes some race condition scenarios.

If you inadvertently have a race condition in a program, it may behave unpredictably, since the race may
have different winners when the program is run. Normally, that isn't desirable; we would like to have
predictable and repeatable behavior in our programs. Thus, synchronization is what fixes race conditions.
There are many different ways to synchronize our processes; the following are mentioned in the text:

• Disabling Interrupts
• Lock Variables
• Strict Alternation
• Peterson's Solution
• Test and Set Lock
• Sleep/Wakeup
• Semaphores
• Mutexes, Futexes
• Monitors
• Message Passing
• Barriers

Disabling interrupts gets rid of the synchronization problem by making execution in the critical region
uninterruptible. When interrupts are disabled, whoever gets into the critical region first will execute until it
finishes with the critical region. This approach is still used today in many situations in embedded systems
and in some device drivers, while strict alternation is not used today. Peterson's solution on page 125 is
like strict alternation in that it only works between two processes, but it allows each process to hold onto
the critical region for an indefinite period. The video in the course content gives a brief description of
Peterson's solution with a few examples.
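
For reference, here is the textbook form of Peterson's solution sketched in C (on modern hardware it would also need memory barriers, which are omitted here):

    /* Sketch: Peterson's solution for two processes, numbered 0 and 1. */
    #include <stdbool.h>

    static volatile bool interested[2] = { false, false };
    static volatile int turn = 0;      /* whose turn it is to wait */

    void enter_region(int self)        /* self is 0 or 1 */
    {
        int other = 1 - self;
        interested[self] = true;       /* announce interest */
        turn = self;                   /* politely take the losing side of a tie */
        while (interested[other] && turn == self)
            ;                          /* busy wait */
    }

    void leave_region(int self)
    {
        interested[self] = false;      /* let the other process in */
    }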

One of the key aspects of synchronization is atomicity. Execution is said to be atomic if it either
completes entirely or does not execute at all. Most assembly code instructions are atomic. We see
that in the Test and Set Lock instruction as discussed on page 125; this single instruction copies a
variable to a register and then sets that variable to 1.
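
A spin lock built on that idea can be sketched with the C11 atomics library, which exposes a test-and-set operation much like the TSL instruction:

    /* Sketch: a spin lock on an atomic test-and-set (C11 stdatomic). */
    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;

    void acquire(void)
    {
        /* atomically read the old value and set the flag to 1;
           keep trying while the old value was already 1 */
        while (atomic_flag_test_and_set(&lock))
            ;                          /* busy wait (spin) */
    }

    void release(void)
    {
        atomic_flag_clear(&lock);      /* set the flag back to 0 */
    }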

Note that all of the approaches to this point involve busy waiting (aside from disabling interrupts). Busy
waiting might be fine in some cases, but for operating systems which support multiprogramming, busy
waiting is a waste of valuable resources. (Note that on multiprocessor systems, it might make sense to
have a process hold onto a CPU for a bit since processing on the other CPUs might quickly free up the
critical region that process needs. The busy wait in this case is called a spin lock.)

On a multiprogramming operating system, the approaches that follow simply block the process needing
access to the critical region until it is free.
Sleep/wakeup is interesting, simply because many of us are familiar with the sleep( ) function in UNIX/C.
This pair of functions is quite different from that call; as you might know, the UNIX sleep( ) function blocks
the caller for a period of time. The sleep( ) function here also blocks, but not for an amount of time; it
blocks on some resource until some other process calls wakeup( ) on the same resource.
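
One way to picture these semantics in user space (the struct and function names below are illustrative, not the kernel's actual interface) is with a mutex and condition variable:

    /* Sketch: sleep/wakeup semantics pictured with pthreads primitives.
       Assumes the resource's lock and cond were initialized elsewhere
       with pthread_mutex_init( ) and pthread_cond_init( ). */
    #include <pthread.h>
    #include <stdbool.h>

    struct resource {
        pthread_mutex_t lock;
        pthread_cond_t  cond;
        bool            available;
    };

    void sleep_on(struct resource *r)  /* block until the resource is freed */
    {
        pthread_mutex_lock(&r->lock);
        while (!r->available)
            pthread_cond_wait(&r->cond, &r->lock);  /* no busy waiting */
        r->available = false;          /* take the resource */
        pthread_mutex_unlock(&r->lock);
    }

    void wakeup_on(struct resource *r) /* free the resource, wake a sleeper */
    {
        pthread_mutex_lock(&r->lock);
        r->available = true;
        pthread_cond_signal(&r->cond);
        pthread_mutex_unlock(&r->lock);
    }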

The sections in your text on synchronization in Pthreads as well as Monitors can be skimmed; we won’t
be looking at these in depth. Pthreads synchronization is a user-level concept, and Monitors are a
language construct (not present in C or C++). Thus more detailed handling of these is better left for a
different course.

A concept related to synchronization is deadlock. The prior edition of your text discussed some classical
synchronization/deadlock examples in this chapter (e.g. the “dining philosophers problem”); however, in
the current edition, this topic is moved to the chapter on “Deadlocks”, which we will be covering in a later
module.
