
MAILAM ENGINEERING COLLEGE /ECE/ET 3491/EIOT/ UNIT 3 /PROCESSES AND OPERATING SYSTEMS

UNIT -3
PROCESSES AND OPERATING SYSTEMS
Structure of a real-time system – Task Assignment and Scheduling – Multiple Tasks and Multiple
Processes – Multirate Systems – Preemptive real-time Operating Systems – Priority-based
Scheduling – Interprocess Communication Mechanisms – Distributed Embedded Systems – MPSoCs
and Shared Memory Multiprocessors – Design Example – Audio Player, Engine Control Unit and
Video Accelerator.
PART-A
1. What do you mean by process and threads?
 A process is a single execution of a program.
 Processes that share the same address space are often called threads.

2. Define Asynchronous input.


 Asynchronous input is input that arrives at unpredictable times rather than at a steady rate. The
text compression box provides a simple example of steady rate control problems; a control panel
on a machine provides an example of the different type of rate control problem, the
asynchronous input, because button presses can occur at any time.

3.Define initiation time and deadline for periodic and aperiodic process.
 The initiation time is the time at which the process goes from the waiting to the ready state. An
aperiodic process is by definition initiated by an event, such as external data arriving or data
computed by another process. The initiation time of an aperiodic process is generally measured
from that event, although the system may want to make the process ready at some interval after
the event itself.
 A deadline specifies when a computation must be finished. The deadline for an aperiodic process
is generally measured from the initiation time because that is the only reasonable time reference.
The deadline for a periodic process may in general occur at some time other than the end of the
period.

4. Define jitter.
 The jitter of a task is the allowable variation in the completion time of the task.

5. Define DAG (Directed Acyclic Graph) and task graph or task set.
 The data dependencies must form a directed acyclic graph (DAG)—a cycle in the data
dependencies is difficult to interpret in a periodically executed system.

 A set of processes with data dependencies is known as a task graph or task set.

6. Define CPU usage metrics, how it is expressed and give the ranges of CPU ratio.
 In addition to the application characteristics, we need to have a basic measure of the efficiency
with which we use the CPU. The simplest and most direct measure is utilization:

 U = (CPU time used for useful work) / (total available CPU time)    (Eq. 3.1)
 Utilization is the ratio of the CPU time that is being used for useful computations to the total
available CPU time.
 This ratio ranges between 0 and 1, with 1 meaning that all of the available CPU time is being
used for system purposes. Utilization is often expressed as a percentage.
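As a quick illustration of Eq. 3.1 (the helper name and the sample numbers here are illustrative, not from the notes), utilization is simply measured busy time divided by total available time:

```c
/* Utilization (Eq. 3.1): ratio of CPU time spent on useful
 * computation to total available CPU time. Illustrative helper,
 * not from the original notes. */
double utilization(double busy_time, double total_time)
{
    return busy_time / total_time;
}
```

For example, a CPU that does useful work for 45 ms out of every 50 ms has U = 0.9, often quoted as 90% utilization.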

7. With neat sketch describe what are the possible transitions between states available to
a process?
 A process moves from the waiting state to the ready state when it receives its required data or
enters a new period; from the ready state to the executing state when the scheduler selects it;
from the executing state to the waiting state when it needs data it has not yet received or has
finished its work for the current period; and from the executing state back to the ready state
when it is preempted by a higher-priority process.

Figure 3.1 -Scheduling states of a process


8. Differentiate scheduling policy and Scheduling overhead.
Scheduling policy: defines how processes are selected for promotion from the ready state to the
running state. Every multitasking operating system implements some type of scheduling policy.
Choosing the right scheduling policy not only ensures that the system will meet all its timing
requirements, but it also has a profound influence on the CPU horsepower required to implement
the system’s functionality.
Scheduling overhead: the execution time required to choose the next execution process, which is
incurred in addition to any context switching overhead.

9. Define kernel and time quantum.


 The kernel is the part of the operating system that determines what process is running. The
kernel is activated periodically by the timer.
 The length of the timer period is known as the time quantum because it is the smallest increment
in which we can control CPU activity. The kernel determines what process will run next and
causes that process to run. On the next timer interrupt, the kernel may pick the same process or
another process to run.
10. How do we switch between processes before the process is done or define Context
switching mechanism? (Or) Define Context switching in RTOS.
 The timer interrupt causes control to change from the currently executing process to the kernel;
assembly language can be used to save and restore registers.
 The set of registers that defines a process is known as its context, and switching from one
process’s register set to another is known as context switching.
 The data structure that holds the state of the process is known as the record (often called the
process control block).
11. Define Round-robin scheduling.
 A common scheduling algorithm in general-purpose operating systems is round robin.
 All the processes are kept on a list and scheduled one after the other.
 This is generally combined with pre-emption so that one process does not grab all the CPU time.
 Round-robin scheduling provides a form of fairness in that all processes get a chance to execute.
 However, it does not guarantee the completion time of any task; as the number of processes
increases, the response time of all the processes increases.
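The selection rule above can be sketched in C (a minimal sketch; the function name is illustrative): with the processes kept on a list, the scheduler simply steps to the next entry, wrapping around at the end.

```c
/* Round-robin selection sketch (illustrative, not from the notes):
 * the next process to run is simply the following entry in the
 * circular ready list. */
int round_robin_next(int current, int num_processes)
{
    return (current + 1) % num_processes;
}
```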
12. What is Rate-monotonic scheduling (RMS)?
 Rate-monotonic scheduling (RMS) was one of the first scheduling policies developed for real-time
systems and is still very widely used.
 RMS is a static scheduling policy because it assigns fixed priorities to processes.
 It turns out that these fixed priorities are sufficient to efficiently schedule the processes in many
situations.
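RMS assigns fixed priorities by period: the shorter the period, the higher the priority. A standard companion result (due to Liu and Layland, not derived in these notes) is the least upper bound on total utilization, n(2^(1/n) - 1), below which any n periodic tasks are guaranteed schedulable under RMS. A dependency-free sketch:

```c
/* Liu-Layland least upper bound for RMS schedulability of n periodic
 * tasks: U <= n * (2^(1/n) - 1). 2^(1/n) is found by bisection to
 * keep the sketch free of libm. (Standard RMS result; the bound is
 * not derived in the notes above.) */
static double pow_int(double x, int n)
{
    double r = 1.0;
    while (n-- > 0)
        r *= x;
    return r;
}

double rms_bound(int n)
{
    double lo = 1.0, hi = 2.0;
    for (int i = 0; i < 60; i++) {        /* bisection for 2^(1/n) */
        double mid = (lo + hi) / 2.0;
        if (pow_int(mid, n) < 2.0)
            lo = mid;
        else
            hi = mid;
    }
    return n * (lo - 1.0);
}
```

The bound falls toward ln 2 (about 69%) as n grows, which is why RMS leaves some CPU capacity unused in the worst case.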
13. Define Race condition and how you will avoid race condition using critical section.

 Consider the case in which an I/O device has a flag that must be tested and modified by a
process.
 Problems can arise when other processes may also want to access the device.
 If combinations of events from the two tasks operate on the device in the wrong order, we may
create a critical timing race or race condition that causes erroneous operation.

For example:
1. Task 1 reads the flag location and sees that it is 0.
2. Task 2 reads the flag location and sees that it is 0.
3. Task 1 sets the flag location to 1 and writes data to the I/O device’s data register.
4. Task 2 also sets the flag to 1 and writes its own data to the device data register, overwriting the
data from task 1.
In this case, both tasks thought they were able to write to the device, but task 1’s write
was never completed because it was overwritten by task 2.
Critical sections
 To prevent this type of problem we need to control the order in which some operations occur.
 For example, we need to be sure that a task finishes an I/O operation before allowing another
task to start its own operation on that I/O device.
 We do so by enclosing sensitive sections of code in a critical section that executes without
interruption.
14.How will you call and release the critical section using Semaphores?
 The semaphore is used to guard a resource.
 We start a critical section by calling a semaphore function that does not return until the resource
is available.
 When we are done with the resource we use another semaphore function to release it. The
semaphore names are, by tradition, P() to gain access to the protected resource and V() to
release it.
/* some nonprotected operations here */
P(); /* wait for semaphore */
/* do protected work here */
V(); /* release semaphore */
This form of semaphore assumes that all system resources are guarded by the same P()/V() pair.
15. What is the function of Test-and-set?
 To implement P() and V(), the microprocessor bus must support an atomic read/write operation,
which is available on a number of microprocessors.
 The test-and-set allows us to implement semaphores. The P() operation uses a test and- set to
repeatedly test a location that holds a lock on the memory block. The P() operation does not exit
until the lock is available; once it is available, the test-and-set automatically sets the lock. Once
past the P() operation, the process can work on the protected memory block.

 The V() operation resets the lock, allowing other processes access to the region by using the P()
function.
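A minimal sketch of P() and V() built on a test-and-set, assuming a GCC/Clang toolchain: the `__atomic_test_and_set` builtin stands in for the processor’s bus-locked test-and-set instruction, and the flag name is illustrative.

```c
#include <stdbool.h>

/* Spin-lock sketch built on an atomic test-and-set. The builtin
 * stands in for the bus-supported atomic read/write operation
 * described above. Names are illustrative, not from the notes. */
static volatile bool lock_flag = false;

void P(void)                 /* acquire: spin until the lock is free */
{
    while (__atomic_test_and_set((void *)&lock_flag, __ATOMIC_ACQUIRE))
        ;                    /* another process holds the lock */
}

void V(void)                 /* release the lock */
{
    __atomic_clear((void *)&lock_flag, __ATOMIC_RELEASE);
}
```

Once past P(), the process can work on the protected memory block; V() resets the lock so other processes can enter.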
16.Define priority inversion and priority inheritance.
Shared resources cause a new and subtle scheduling problem: a low-priority process blocks
execution of a higher-priority process by keeping hold of its resource, a phenomenon known as
priority inversion.
Priority inheritance
The most common method for dealing with priority inversion is priority inheritance: promote the
priority of any process when it requests a resource from the operating system. The priority of the
process temporarily becomes higher than that of another process that may use the resource. This
ensures that the process will continue executing once it has the resource so that it can finish its
work with the resource, return it to the operating system, and allow other processes to use it. Once
the process is finished with the resource, its priority is demoted to its normal value.
17. Define Earliest-deadline-first scheduling (EDF).
Earliest deadline first (EDF) is another well-known scheduling policy that was also studied by Liu
and Layland. It is a dynamic priority scheme—it changes process priorities during execution based
on initiation times. As a result, it can achieve higher CPU utilizations than RMS.
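EDF’s dynamic rule, always run the ready process whose deadline is nearest, can be sketched as follows (illustrative names, not from the notes):

```c
/* EDF selection sketch: among ready processes, return the index of
 * the one with the earliest (smallest) absolute deadline, or -1 if
 * nothing is ready. Illustrative only. */
int edf_select(const long deadline[], const int ready[], int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (ready[i] && (best < 0 || deadline[i] < deadline[best]))
            best = i;
    }
    return best;
}
```

Because deadlines move as processes are initiated, this selection must be re-evaluated at each scheduling point, which is the source of EDF’s extra scheduling overhead relative to RMS.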
18. What do you know about RMS versus EDF?
EDF can extract higher utilization out of the CPU, but it may be difficult to diagnose the possibility
of an imminent overload. Because the scheduler does take some overhead to make scheduling
decisions, a factor that is ignored in the schedulability analysis of both EDF and RMS, running a
scheduler at very high utilizations is somewhat problematic. RMS achieves lower CPU utilization but
makes it easier to ensure that all deadlines will be satisfied. In some applications, it may be acceptable for
some processes to occasionally miss deadlines.
19.Define blocking and nonblocking communication.
In general, a process can send a communication in one of two ways:
blocking or nonblocking.
 After sending a blocking communication, the process goes into the waiting state until it receives
a response.
 Nonblocking communication allows the process to continue execution after sending the
communication. Both types of communication are useful.
20.What are two major styles of interprocess communication and give brief note about
it?
There are two major styles of interprocess communication:

 Shared memory and


 Message passing.

Figure 3.2- Shared memory communication Implemented on a bus.


Message passing

Figure 3.3-Message passing communication.


The devices must communicate relatively infrequently; furthermore, their physical separation is
large enough that we would not naturally think of them as sharing a central pool of memory.
Passing communication packets among the devices is a natural way to describe coordination
between these devices.
21. Define Interrupt latency.(or)What are the several factors in both hardware and
software affect interrupt latency?
Interrupt latency for an RTOS is the duration of time from the declaration of a device interrupt to
the completion of the device’s requested operation. Interrupt latency is critical because data may
be lost when an interrupt is not serviced in a timely fashion.
Several factors in both hardware and software affect interrupt latency:
• The processor’s interrupt latency;
• The execution time of the interrupt handler;
• Delays due to RTOS scheduling.
22.Define Power-down trade-offs.

 Going into a low-power mode takes time; generally, the more that is shut off, the longer the
delay incurred during restart. Because power-down and power-up are not free, modes should be
changed carefully.
 Determining when to switch into and out of a power-up mode requires an analysis of the overall
system activity.
 Avoiding a power-down mode can cost unnecessary power.
 Powering down too soon can cause severe performance penalties.
23.Draw the architecture of power managed systems and write the operations of several
elements of the total managed system.
The several elements of the total managed system:
 The service provider is the machine whose power is being managed;
 The service requestor is the machine or person making requests of that power-managed system;
 A queue is used to hold pending requests (e.g., while waiting for the service provider to power
up); and
 The power manager is responsible for sending power management commands to the provider,
as shown in fig 3.4. The power manager can observe the behaviour of the requestor,
provider, and queue.

Figure 3.4-Architecture of a Power-Managed system.

24. Define Advanced Configuration and Power Interface (ACPI).


 The Advanced Configuration and Power Interface (ACPI) is an open industry standard for power
management services.
 The ACPI provides some basic power management facilities and abstracts the hardware layer.
The operating system has its own power management module that determines the policy, and
the operating system then uses ACPI to send the required controls to the hardware and to
observe the hardware’s state as input to the power manager.

25.What are the basic global power states of Advanced Configuration and Power
Interface (ACPI)?
ACPI supports the following five basic global power states:
• G3, the mechanical off state, in which the system consumes no power.
• G2, the soft off state, which requires a full operating system reboot to restore the machine to
working condition.
• G1, the sleeping state, in which the system appears to be off and the time required to return to
working condition is inversely proportional to power consumption. This state has four substates:
• S1, a low wake-up latency state with no loss of system context;
• S2, a low wake-up latency state with a loss of CPU and system cache state;
• S3, a low wake-up latency state in which all system state except for main memory is lost; and
• S4, the lowest-power sleeping state, in which all devices are turned off.
• G0, the working state, in which the system is fully usable.
• The legacy state, in which the system does not comply with ACPI.

26. What are the POSIX semaphores function?


 POSIX supports semaphores, but it also supports a direct shared memory mechanism. POSIX
supports counting semaphores in the _POSIX_SEMAPHORES option.
 The POSIX names for P and V are sem_wait() and sem_post(), respectively.
 POSIX also provides a sem_trywait() function that tests the semaphore but does not block.
 POSIX shared memory is supported under the _POSIX_SHARED_MEMORY_OBJECTS option.
The shared memory functions create blocks of memory that can be used by several
processes.
 The shm_open() function opens a shared memory object.
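A minimal sketch of the P()/V() pairing with the POSIX semaphore calls above, assuming a Linux-style POSIX target; an unnamed semaphore (sem_init()) is used for brevity where a real system might use a named one via sem_open():

```c
#include <semaphore.h>

/* Guard a critical section with a POSIX semaphore: sem_wait() plays
 * the role of P() and sem_post() the role of V(). Sketch only;
 * assumes a POSIX target where unnamed semaphores are supported. */
int semaphore_demo(void)
{
    sem_t s;
    if (sem_init(&s, 0, 1) != 0)      /* count 1: binary semaphore */
        return -1;
    sem_wait(&s);                     /* P(): enter critical section */
    int busy = sem_trywait(&s);       /* count is now 0, so this fails */
    /* ... protected work would go here ... */
    sem_post(&s);                     /* V(): leave critical section */
    sem_destroy(&s);
    return (busy == -1) ? 0 : -1;
}
```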

27. What are the functions of POSIX RTOS?


 POSIX pipes
 POSIX message queues
28. What are the functions of POSIX message queues?
 POSIX also supports message queues under the _POSIX_MESSAGE_PASSING facility. The
advantage of a queue over a pipe is that, because queues have names, we do not have to create
the queue descriptor before creating the other process using it, as we must with pipes.
 The name of a queue follows the same rules as for semaphores and shared memory: it starts
with a “/” and contains no other “/” characters.

 The O_CREAT flag to mq_open() causes it to create the named queue if it does not yet exist,
and simply opens the queue for the process if it already exists.

29. What are the strategies used for power optimization in multiprocessing?
The RTOS and system architecture can use static and dynamic power management
mechanisms to help manage the system’s power consumption. A power management policy is a
strategy for determining when to perform certain power management operations.

30. Illustrate the interconnect networks developed for distributed embedded systems.
 Network abstractions
 CAN bus
 I2C bus
 Ethernet
 Internet

31. List the advantages and limitations of priority based scheduling.


Advantages:
Processes with higher priority are executed first; processes are executed depending upon their
priority.
Limitations:
A process with low priority may have to wait a long time until the higher-priority processes finish
execution.

32. How does priority scheduling improves multitask execution? What is the concept of
multitasking? What does it signify?
 There is at most one process executing on the CPU at any time. Any process that could execute
is in the ready state;
 The OS chooses among the ready processes to select the next executing process. A process may
not, however, always be ready to run.
 For instance, a process may be waiting for data from an I/O device or another process, or it may
be set to run from a timer that has not yet expired.
 Such processes are in the waiting state. A process goes into the waiting state when it needs
data that it has not yet received or when it has finished all its work for the current period.
 A process goes into the ready state when it receives its required data and when it enters a new
period.

 A process can go into the executing state only when it has all its data, is ready to run, and the
scheduler selects the process as the next process to run.
 Each process has a fixed priority that does not vary during the course of execution
 The ready process with the highest priority (with 1 as the highest priority of all) is selected for
execution.
 A process continues execution until it completes or it is preempted by a higher-priority process.

33. What do you meant by CRC cards? (Or) Write the special Characteristics of a CRC
cards.
The CRC card methodology is a well-known and useful way to help analyze a system’s structure. It
is particularly well suited to object-oriented design because it encourages the encapsulation of data
and functions.
 The acronym CRC stands for the following three major items that the methodology tries to
identify:
• Classes define the logical groupings of data and functionality.
• Responsibilities describe what the classes do.
• Collaborators are the other classes with which a given class works.

34. Define audio compression.


Audio compression is a lossy process that relies on perceptual coding. The coder eliminates certain
features of the audio stream so that the result can be encoded in fewer bits. It tries to eliminate
features that are not easily perceived by the human auditory system.
Masking is one perceptual phenomenon that is exploited by perceptual coding. One tone can be
masked by another if the tones are sufficiently close in frequency. Some audio features can also be
masked if they occur too close in time after another feature.

35. What are the main functions performed by video accelerator? (or)What is the
need for video Accelerator?
The video accelerator computes motion vectors for video compression using the full-search
block-matching algorithm.
36. Give one challenge in developing of codes for MPSoCs.
Challenges in programming for multicore systems include dividing activities, balance, data
splitting, data dependency, and testing and debugging.
37. List two special functional units of embedded processor used for audio player design.
The two main processor architectures used for audio processing are fixed-point and floating-point.
Fixed-point processors are designed for integer and decimal values.

38. What is meant by requirement analysis of doing memory scaling for a video
accelerator?
Requirements analysis for memory scaling enables new DRAM architectures, functions, interfaces,
and better integration.
39. What is the difference between the ready and waiting states of process scheduling?
The possible states of a process are: new, running, waiting, ready, and terminated. The process is
created while in the new state. In the running or waiting state, the process is executing or waiting
for an event to occur, respectively. The ready state occurs when the process is ready and waiting
to be assigned to a processor and should not be confused with the waiting state mentioned earlier.
After the process finishes executing its code, it enters the terminated state.
40. What do you mean by Accelerators in embedded multiprocessor?
 Accelerators can provide large performance increases for applications with computational
kernels that spend a great deal of time in a small section of code.
 Accelerators can also provide critical speedups for low-latency I/O functions.
41. Define multiprocessor
 A multiprocessor is, in general, any computer system with two or more processors coupled
together.
 We use the term processing element (PE) to mean any unit responsible for computation,
whether it is programmable or not.
 We use the term network (or interconnection network) to describe the interconnections between
the processing elements.

PART B
1.Explain in detail about multiple tasks and multiple processes with an example (OR)
summaries services of operating system in handling multiple tasks and multiple
processes.
i. Tasks and processes
 Many (if not most) embedded computing systems do more than one thing—that is, the
environment can cause mode changes that in turn cause the embedded system to behave quite
differently.
 These different tasks are part of the system’s functionality, but that application-level organization
of functionality is often reflected in the structure of the program as well.
 A process is a single execution of a program.
 Processes that share the same address space are often called threads.
 As shown in figure 3.5, this device is connected to serial ports on both ends.

Figure 3.5-Scheduling overhead is paid for at a non-linear rate.


Variable data rates
 The need to receive and send data at different rates—for example, the program may emit
two bits for the first byte and then seven bits for the second byte—will obviously find itself
reflected in the structure of the code.
Asynchronous input
 The text compression box provides a simple example of rate control problems. A control panel on
a machine provides an example of a different type of rate control problem, the asynchronous
input.

2.Multirate systems
Multirate embedded computing systems are very common, including automobile engines, printers,
and cell phones.
In all these systems, certain operations must be executed periodically, and each operation is
executed at its own rate shown in fig 3.6.

Figure 3.6-Automotive Engine Control


 Automobile engines must meet strict requirements for both emissions and fuel economy.
 On the other hand, the engines must still satisfy customers not only in terms of performance but
also in terms of ease of starting in extreme cold and heat, low maintenance, and so on.
 Automobile engine controllers use additional sensors, including the gas pedal position and an
oxygen sensor used to control emissions. They also use a multimode control scheme.
 For example, one mode may be used for engine warm-up, another for cruise, and yet another
for climbing steep hills, and so forth. The larger number of sensors and modes increases the
number of discrete tasks that must be performed.
 The highest-rate task is still firing the spark plugs. The throttle setting must be sampled and
acted upon regularly, although not as frequently as the crankshaft setting and the spark plugs.
The oxygen sensor responds much more slowly than the throttle, so adjustments to the fuel/air
mixture suggested by the oxygen sensor can be computed at a much lower rate.
 The engine controller takes a variety of inputs that determine the state of the engine. It then
controls two basic engine parameters: the spark plug firings and the fuel/air mixture.
 The engine control is computed periodically, but the periods of the different inputs and outputs
range over several orders of magnitude of time.
 The fastest rate that the engine controller must handle is 2 ms and the slowest rate is 1 s, a
range in rates of almost three orders of magnitude.

Figure 3.6 (a) -Example definitions of initiation times and deadlines.


Timing requirements on processes
 Two important requirements on processes: initiation time and deadline.
 The initiation time is the time at which the process goes from the waiting to the ready state. An
aperiodic process is by definition initiated by an event, such as external data arriving or data
computed by another process shown in fig 3.6(a). The initiation time is generally measured from
that event, although the system may want to make the process ready at some interval after the
event itself. For a periodically executed process, there are two common possibilities.
 A deadline specifies when a computation must be finished. The deadline for an aperiodic process
is generally measured from the initiation time because that is the only reasonable time reference.
The deadline for a periodic process may in general occur at some time other than the end of the
period.
Jitter
 We may also be concerned with the jitter of a task, which is the allowable variation in the
completion of the task seen in fig 3.7.

Figure 3.7-Example definitions of initiation times and deadlines.


Missing a deadline
The practical effects of a timing violation depend on the application—the results can be catastrophic
in an automotive control system, whereas a missed deadline in a telephone system may cause a
temporary silence on the line as shown in figure 3.8.

Figure 3.8 - A Sequence of processes with a high initiation rate.

DAG (Directed Acyclic Graph) and task graph or task set


 The data dependencies define a partial ordering on process execution—P1 and P2 can execute in
any order (or in interleaved fashion) but must both complete before P3, and P3 must complete
before P4.

 All processes must finish before the end of the period. The data dependencies must form a
directed acyclic graph (DAG)—a cycle in the data dependencies is difficult to interpret in a
periodically executed system, as shown in fig 3.9.
 A set of processes with data dependencies is known as a task graph or task set.

Figure 3.9- Data dependencies among processes.

Figure 3.10-Communication among processes at different rates.


CPU usage metrics:
In addition to the application characteristics, we need to have a basic measure of the efficiency with
which we use the CPU. The simplest and most direct measure is utilization:

U = (CPU time used for useful work) / (total available CPU time)

Utilization is the ratio of the CPU time that is being used for useful computations to the total
available CPU time. This ratio ranges between 0 and 1, with 1 meaning that all of the available CPU
time is being used for system purposes. Utilization is often expressed as a percentage.
(Communication among processes at different rates is shown in fig 3.10.)
Process state and scheduling
 The first job of the operating system is to determine the process that runs next. The work of
choosing the order of running processes is known as scheduling. The operating system considers
a process to be in one of three basic scheduling states:

 Waiting,
 Ready, and
 Executing.
There is at most one process executing on the CPU at any time.

Figure 3.11-Scheduling states of a process.


 A scheduling policy defines how processes are selected for promotion from the ready state to the
running state, as shown in fig 3.11.
 Scheduling overhead—the execution time required to choose the next execution process, which is
incurred in addition to any context switching overhead.

2. Explain in detail how you will solve the fundamental problem of a cooperative
multitasking system using Preemptive real-time operating systems. (or) Explain how
multiple process are handled by Preemptive real-time operating systems. (or) Discuss
why preemptive scheduling is preferred in real time operating systems.
 A preemptive real-time operating system (RTOS) solves the fundamental problems of a
cooperative multitasking system.
 It executes processes based upon timing requirements provided by the system designer. The
most reliable way to meet timing requirements accurately is to build a preemptive operating
system and to use priorities to control what process runs at any given time.
Two basic concepts
Preemption
 Figure 3.12 shows an example of preemptive execution of an operating system. We want to
share the CPU across two processes.
 The kernel is the part of the operating system that determines what process is running. The
kernel is activated periodically by the timer.

 The length of the timer period is known as the time quantum because it is the smallest
increment in which we can control CPU activity. The kernel determines what process will run next
and causes that process to run, as shown in fig 3.12. On the next timer interrupt, the kernel
may pick the same process or another process to run.

Figure 3.12-Sequence diagram for preemptive execution.


Context switching mechanism
The timer interrupt causes control to change from the currently executing process to the kernel;
assembly language can be used to save and restore registers. The set of registers that defines a
process is known as its context, and switching from one process’s register set to another is known
as context switching. The data structure that holds the state of the process is known as the record
(often called the process control block).
Process priorities
The kernel can simply look at the processes and their priorities, see which ones actually want to
execute (some may be waiting for data or for some event), and select the highest priority process
that is ready to run. This mechanism is both flexible and fast.
Processes and context
 The best way to understand processes and context is to dive into an RTOS implementation.
 Figure 3.13 shows a sequence diagram in FreeRTOS.org. This diagram shows the application
tasks, the hardware timer, and all the functions in the kernel that are involved in the context
switch:
o vPreemptiveTick() is called when the timer ticks.
o SIG_OUTPUT_COMPARE1A responds to the timer interrupt and uses portSAVE_CONTEXT() to
swap out the current task context, as seen in fig 3.13.

o vTaskIncrementTick() updates the time and vTaskSwitchContext() chooses a new task.
o portRESTORE_CONTEXT() swaps in the new context.

Figure 3.13-Sequence diagram for a FreeRTOS.org context Switch.


Processes and object-oriented design
We need to design systems with processes as components. UML represents such a component as an active class, shown in Fig. 3.14.
UML active objects

Figure 3.14-An active class in UML.

Figure 3.15- Collaboration diagram with active and normal objects.


Figure 3.15 shows a simple collaboration diagram in which an object is used as an interface
between two processes:
p1 uses the w object to manipulate its data before the data is sent to the master process.

3. How will you allocate resources in the computing system among programs based on
the request using Priority-based scheduling? Explain with an example. (or) Explain
scheduling polices with an example.
Subtopics
 Round-robin scheduling
 Priority-Driven Scheduling
 Rate-monotonic scheduling
 Earliest-Deadline-First Scheduling.
Round-robin scheduling
 Round-robin scheduling provides a form of fairness in that all processes get a chance to execute.
 However, it does not guarantee the completion time of any task; as the number of processes
increases, the response time of all the processes increases.
 Real-time systems, in contrast, require their notion of fairness to include timeliness and
satisfaction of deadlines.
Process priorities
A common way to choose the next executing process in an RTOS is based on process priorities.
Each process is assigned a priority, an integer-valued number. The next process to be chosen to
execute is the process in the set of ready processes that has the highest-valued priority.
Priority-Driven Scheduling
 For this example, we will adopt the following simple rules.
 Each process has a fixed priority that does not vary during the course of execution.
 The ready process with the highest priority (with 1 as the highest priority of all) is selected for
execution.
 A process continues execution until it completes or it is preempted by a higher-priority process.
Let’s define a simple system with three processes as seen below.

Table 3.0 Priority-Driven Scheduling


 In addition to describing the properties of the processes in general, we need to know the
environmental setup shown in Table 3.0.
 We assume that P2 is ready to run when the system is started, P1’s data arrive at time 15, and
P3’s data arrive at time 18.

 Once we know the process properties and the environment, we can use the priorities to
determine which process is running throughout the complete execution of the system

Figure 3.16 - Priority-Driven Scheduling


 When the system begins execution, P2 is the only ready process, so it is selected for execution.
 At time 15, P1 becomes ready; it preempts P2 and begins execution because it has a higher
priority, as shown in fig 3.16.
 Because P1 is the highest-priority process in the system, it is guaranteed to execute until it
finishes. P3’s data arrive at time 18, but it cannot preempt P1.
 Even when P1 finishes, P3 is not allowed to run: P2 is still ready and has higher priority than P3.
Only after both P1 and P2 finish can P3 execute.
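The behavior described above can be reproduced with a small unit-time simulation. This is an illustrative sketch, not RTOS code; because the contents of Table 3.0 are not reproduced here, the execution times (10, 30, and 20 time units) are assumed values chosen to match the narrative, while the release times (0, 15, 18) come from the example.

```c
#include <assert.h>

typedef struct {
    int prio;        /* 1 is the highest priority */
    int release;     /* time at which the process's data arrive */
    int remaining;   /* execution time still needed */
    int finish;      /* completion time, filled in by the simulation */
} task_t;

/* Unit-time preemptive fixed-priority simulation; returns total elapsed time. */
int simulate(task_t *t, int n) {
    int time = 0, done = 0;
    while (done < n) {
        int run = -1;
        /* pick the highest-priority released, unfinished task */
        for (int i = 0; i < n; i++)
            if (t[i].remaining > 0 && t[i].release <= time &&
                (run < 0 || t[i].prio < t[run].prio))
                run = i;
        if (run >= 0 && --t[run].remaining == 0) {
            t[run].finish = time + 1;
            done++;
        }
        time++;
    }
    return time;
}
```

With the assumed numbers, the simulation reproduces the story above: P2 runs first, P1 preempts it at time 15, and P3 runs only after both finish.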
Rate-monotonic scheduling
 Rate-monotonic scheduling (RMS) is a static scheduling policy because it assigns fixed
priorities to processes. It turns out that these fixed priorities are sufficient to efficiently schedule
the processes in many situations.
 The theory underlying RMS is known as rate-monotonic analysis (RMA). This theory, as
summarized below, uses a relatively simple model of the system.
• All processes run periodically on a single CPU.
• Context switching time is ignored.
• There are no data dependencies between processes.
• The execution time for a process is constant.
• All deadlines are at the ends of their periods.
• The highest-priority ready process is always selected for execution.
Rate-Monotonic Scheduling
Here is a simple set of processes and their characteristics.

Table 3.1 Rate-Monotonic Scheduling


 Applying the principles of RMA, we give P1 the highest priority, P2 the middle priority, and P3
the lowest priority, as shown in Table 3.1.
 To understand all the interactions between the periods, we need to construct a timeline equal in
length to the least-common multiple of the process periods, which is 12 in this case.
 The complete schedule for the least-common multiple of the periods is called the unrolled
schedule.

Figure 3.17 - Rate-Monotonic Scheduling


 All three periods start at time zero. P1’s data arrive first. Because P1 is the highest priority
process, it can start to execute immediately as shown in fig.3.17.
 After one-time unit, P1 finishes and goes out of the ready state until the start of its next period.
 At time 1, P2 starts executing as the highest-priority ready process. At time 3, P2 finishes and P3
starts executing.
 P1’s next iteration starts at time 4, at which point it interrupts P3. P3 gets one more time unit of
execution between the second iterations of P1 and P2, but P3 doesn’t get to finish until after the
third iteration of P1.
 The response time of a process is the time at which the process finishes.
 The critical instant for a process is defined as the instant during execution at which the task has
the largest response time.

CPU utilization
The bad news is that, although RMS is the optimal static-priority schedule, it does not allow the
system to use 100% of the available CPU cycles. The total CPU utilization for a set of n tasks,
each with execution time Ci and period Ti, is

U = C1/T1 + C2/T2 + ... + Cn/Tn

Liu and Layland showed that RMS is guaranteed to meet all deadlines when U ≤ n(2^(1/n) − 1),
a bound that approaches ln 2 ≈ 69.3% as n grows large.
Shared resources
 A process may need to do more than read and write values in memory. For example, it may need
to communicate with an I/O device.
 And it may use shared memory locations to communicate with other processes.
Race condition
Consider the case in which an I/O device has a flag that must be tested and modified by a process.
Problems can arise when other processes may also want to access the device. If combinations of
events from the two tasks operate on the device in the wrong order, we may create a critical
timing race or race condition that causes erroneous operation. For example:
1. Task 1 reads the flag location and sees that it is 0.
2. Task 2 reads the flag location and sees that it is 0.
3. Task 1 sets the flag location to 1 and writes data to the I/O device’s data register.
4. Task 2 also sets the flag to 1 and writes its own data to the device data register, overwriting the
data from task 1. In this case, both tasks thought they were able to write to the device, but
task 1’s write was never completed because it was overridden by task 2.
Critical sections
To prevent this type of problem we need to control the order in which some operations occur. For
example, we need to be sure that a task finishes an I/O operation before allowing another task to
start its own operation on that I/O device. We do so by enclosing sensitive sections of code in a
critical section that executes without interruption.
Semaphores
 The semaphore is used to guard a resource. We start a critical section by calling a semaphore
function that does not return until the resource is available.
 When we are done with the resource we use another semaphore function to release it.
 The semaphore names are, by tradition, P() to gain access to the protected resource
and V() to release it.
P(); /* wait for semaphore */

V(); /* release semaphore */


This form of semaphore assumes that all system resources are guarded by the same P()/V() pair.
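A minimal sketch of this pattern using POSIX semaphores follows; the device "flag" and "data register" here are plain variables standing in for real hardware, and the function names are chosen for the example. Wrapping the test-and-write sequence from the race-condition example in P()/V() makes it atomic with respect to other tasks using the same guard.

```c
#include <assert.h>
#include <semaphore.h>

sem_t guard;                    /* binary semaphore guarding the device */
int flag = 0, data_reg = 0;     /* stand-ins for the device flag and data register */

void P(void) { sem_wait(&guard); }   /* wait for the semaphore */
void V(void) { sem_post(&guard); }   /* release the semaphore */

int guard_init(void) { return sem_init(&guard, 0, 1); }  /* count 1 = free */

/* The test-and-write sequence from the race-condition example, now atomic. */
int task_write(int value) {
    int ok = 0;
    P();
    if (flag == 0) {            /* write only if no other task has claimed it */
        flag = 1;
        data_reg = value;
        ok = 1;
    }
    V();
    return ok;
}
```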
Test-and-set
 To implement P() and V(), the microprocessor bus must support an atomic read/write operation,
which is available on a number of microprocessors.
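A test-and-set lock can be sketched with the C11 `atomic_flag` type, whose `atomic_flag_test_and_set()` maps onto exactly this kind of atomic read/write bus operation (this spin lock is an illustration, not a complete RTOS primitive):

```c
#include <assert.h>
#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;

/* Spin until test-and-set reports the flag was previously clear. */
void spin_lock(void)   { while (atomic_flag_test_and_set(&lock)) /* busy-wait */; }
void spin_unlock(void) { atomic_flag_clear(&lock); }
```

If the atomic operation returns "already set," some other task holds the lock and the caller keeps spinning; the set and the test cannot be separated by another processor's access.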
Priority inversion
Shared resources cause a new and subtle scheduling problem: a low-priority process blocks
execution of a higher-priority process by keeping hold of its resource, a phenomenon known as
priority inversion.
Priority inheritance
The most common method for dealing with priority inversion is priority inheritance: promote the
priority of any process when it requests a resource from the operating system. The priority of the
process temporarily becomes higher than that of any other process that may use the resource. This
ensures that the process will continue executing once it has the resource, so that it can finish its
work with the resource, return it to the operating system, and allow other processes to use it. Once
the process is finished with the resource, its priority is demoted to its normal value.
Earliest-deadline-first scheduling:
 Earliest deadline first (EDF) is another well-known scheduling policy that was also studied by Liu
and Layland. It is a dynamic priority scheme: it changes process priorities during execution
based on initiation times. As a result, it can achieve higher CPU utilizations than RMS.
 The EDF policy is also very simple: It assigns priorities in order of deadline. The highest-priority
process is the one whose deadline is nearest in time, and the lowest priority process is the one
whose deadline is farthest away.
 Clearly, priorities must be recalculated at every completion of a process.
 However, the final step of the OS during the scheduling procedure is the same as for RMS—the
highest-priority ready processes chosen for execution.
Earliest-Deadline-First Scheduling
Consider the following processes in the table 3.2 scheduling process.

Table3.2.Earliest-Deadline-First Scheduling

The least-common multiple of the periods is 60, and the utilization is 1/3 + 1/4 + 2/5 = 0.9833333.
This utilization is too high for RMS, but it can be handled with an earliest-deadline-first schedule.
Here is the schedule:
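The EDF selection rule itself is easy to state in code. This sketch recomputes the choice from absolute deadlines; in a real kernel it would run at every process completion (the function name and array layout are invented for the example):

```c
#include <assert.h>

/* Return the index of the ready task with the earliest absolute deadline,
   or -1 if no task is ready. */
int edf_pick(const long *deadline, const int *ready, int n) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (ready[i] && (best < 0 || deadline[i] < deadline[best]))
            best = i;
    return best;
}
```

Because deadlines move as new periods begin, the result of this scan changes over time, which is what makes EDF a dynamic-priority policy.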
Comparison between RMS & EDF (RMS versus EDF)
 EDF can extract higher utilization out of the CPU, but it may be difficult to diagnose the
possibility of an imminent overload.
 Because the scheduler takes some overhead to make scheduling decisions (a factor that is
ignored in the schedulability analysis of both EDF and RMS), running a scheduler at very high
utilizations is somewhat problematic, as can be seen in fig 3.16.
 RMS achieves lower CPU utilization but is easier to ensure that all deadlines will be satisfied. In
some applications, it may be acceptable for some processes to occasionally miss deadlines.
 Rate-monotonic scheduling assumes that there are no data dependencies between processes.
Data Dependencies and Scheduling:
Data dependencies imply that certain combinations of processes can never occur as shown in
fig.3.18. Consider the simple example below.

Figure 3.18 - Data Dependencies and Scheduling



 We know that P1 and P2 cannot execute at the same time, because P1 must finish before P2 can
begin.
 Furthermore, we also know that because P3 has a higher priority, it will not preempt both P1 and
P2 in a single iteration.
 If P3 preempts P1, then P3 will complete before P2 begins; if P3 preempts P2, then it will not
interfere with P1 in that iteration.

 Because we know that some combinations of processes cannot be ready at the same time, we
know that our worst-case CPU requirements are less than would be required if all processes could
be ready simultaneously.
Scheduling and Context Switching Overhead
Appearing below is a set of processes and their characteristics.

Table 3.3 Scheduling and Context Switching Overhead


 First, let us try to find a schedule assuming that context switching time is zero. Following is a
feasible schedule for a sequence of data arrivals that meets all the deadlines:
 Now let us assume that the total time to initiate a process, including context switching and
scheduling policy evaluation, is one time unit.
 It is easy to see that there is no feasible schedule for the above data arrival sequence, because
we require a total of 2TP1 + TP2 = 2 × (1 + 3) + (1 + 3) = 12 time units to execute one period
of P2 and two periods of P1.
 In most real-time operating systems, a context switch requires only a few hundred instructions,
with only slightly more overhead for a simple real-time scheduler like this one.
 These small overhead times are not likely to cause serious scheduling problems. As Table 3.3
suggests, scheduling and context-switching overhead problems are most likely to manifest themselves in
the highest-rate processes, which are often the most critical in any case.
 Completely checking that all deadlines will be met with nonzero context switching time requires
checking all possible schedules for processes and including the context switch time at each
preemption or process initiation. However, assuming an average number of context switches per
process and computing CPU utilization can provide at least an estimate of how close
the system is to CPU capacity (see fig 3.19).

Figure 3.19 - Data Dependencies and Scheduling

4. Explain in detail how the processes communicate with each other? Or explain in detail
about Inter process communication mechanisms with neat sketch(or) compare the
principle merits and limitations of Inter process communication (or) Demonstrate about
Inter process communication mechanisms.

 Processes often need to communicate with each other. Inter process communication mechanisms
are provided by the operating system as part of the process abstraction.
 In general, a process can send a communication in one of two ways: blocking or non-blocking.
After sending a blocking communication, the process goes into the waiting state until it receives
a response. Non-blocking communication allows the process to continue execution after sending
the communication. Both types of communication are useful.
 There are two major styles of Inter process communication: shared memory and message
passing.
Shared memory communication
Figure 3.20 illustrates how shared memory communication works in a bus-based
system. Two components, such as a CPU and an I/O device, communicate through a shared
memory location.

Figure 3.20-Shared Memory Communication Implemented on a bus.



Message passing
Message passing communication complements the shared memory model as shown in Figure 3.21,
each communicating entity has its own message send/receive unit. The message is not stored on
the communications link, but rather at the senders/receivers at the endpoints. In contrast, shared
memory communication can be seen as a memory block used as a communication device, in which
all the data are stored in the communication link/memory.

Figure 3.21-Message passing communication.


The devices must communicate relatively infrequently; furthermore, their physical separation is large
enough that we would not naturally think of them as sharing a central pool of memory. Passing
communication packets among the devices is a natural way to describe coordination between these
devices.
Queues
A queue is a common form of message passing. The queue uses a FIFO discipline and holds
records that represent messages. The FreeRTOS.org system provides a set of queue functions.
Signals
Another form of inter process communication commonly used in Unix is the signal. A signal is simple
because it does not pass data beyond the existence of the signal itself. A signal is analogous to an
interrupt, but it is entirely a software creation. A signal is generated by a process and transmitted
to another process by the operating system.

Figure 3.22-Use of a UML signal.



A UML signal is actually a generalization of the Unix signal. While a Unix signal carries no
parameters other than a condition code, a UML signal is an object. As such, it can carry parameters
as object attributes. Figure 3.22 shows the use of a signal in UML. The sigbehavior() behavior of
the class is responsible for throwing the signal, as indicated by <<send>>. The signal object is
indicated by the <<signal>> stereotype.
Mailboxes:
The mailbox is a simple mechanism for asynchronous communication. Some architectures define
mailbox registers. These mailboxes have a fixed number of bits and can be used for small
messages.
In order for the mailbox to be most useful, we want it to contain two items: the message itself and a
mail ready flag. The flag is true when a message has been put into the mailbox and cleared when
the message is removed.
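A single-reader, single-writer mailbox of this kind can be sketched in a few lines (illustrative only; the `int` payload and the function names are invented for the example):

```c
#include <assert.h>

typedef struct {
    int message;   /* the message itself */
    int ready;     /* mail ready flag: set on post, cleared on pickup */
} mailbox_t;

/* Deposit a message; fails if the previous one has not been picked up. */
int mbox_post(mailbox_t *m, int msg) {
    if (m->ready) return -1;
    m->message = msg;
    m->ready = 1;
    return 0;
}

/* Remove a message; fails if the mailbox is empty. */
int mbox_pickup(mailbox_t *m, int *msg) {
    if (!m->ready) return -1;
    *msg = m->message;
    m->ready = 0;
    return 0;
}
```

On hardware with dedicated mailbox registers, the flag and message would live in those registers rather than ordinary memory, but the protocol is the same.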

5. Explain in detail how you will evaluate operating system performance.


The scheduling policy does not tell us all that we would like to know about the performance of a real
system running processes. Our analysis of scheduling policies makes some simplifying assumptions:
 Assume that context switches require zero time
 Largely ignore interrupts
 Assume that we know the execution time of the processes
 Probably determined worst-case or best-case times for the processes in isolation
Context switching time
 We need to examine the validity of all these assumptions.
 Context switching time depends on several factors:
 The amount of CPU context that must be saved;
 Scheduler execution time.
Interrupt latency
Interrupt latency for an RTOS is the duration of time from the assertion of a device interrupt to the
completion of the device’s requested operation. Interrupt latency is critical because data may be
lost when an interrupt is not serviced in a timely fashion.
Several factors in both hardware and software affect interrupt latency:
• The processor interrupt latency;
• The execution time of the interrupt handler;
• Delays due to RTOS scheduling.

Figure 3.23 – Sequence diagram RTOS Interrupt latency


Critical sections and interrupt latency
 The RTOS can delay the execution of an interrupt handler in two ways.
 First, critical sections in the kernel will prevent the RTOS from taking interrupts.
 Longer critical sections can improve performance for some types of workloads because they
reduce the number of context switches. However, long critical sections cause major problems
for interrupts, as shown in fig 3.23.
Interrupt priorities and interrupt latency
Second, a higher-priority interrupt may delay a lower-priority interrupt. A hardware interrupt
handler runs as part of the kernel, not as a user thread as shown in fig 3.24.

Figure 3.24 – Sequence diagram RTOS Interrupt latency


RTOS performance evaluation tools
This sort of view can be helpful in both functional and performance debugging. Windows CE
provides several performance analysis tools:

• ILTiming, an instrumentation routine in the kernel that measures both interrupt service routine and
interrupt service thread latency;
• OSBench, which measures the timing of operating system tasks such as critical section access, signals, and
so on;
• Kernel Tracker, which provides a graphical user interface for RTOS events.
Caches and RTOS Performance
Each process in the shared section of the cache is modelled by a binary variable: 1 if present in the
cache and 0 if not. Each process is also characterized by three total execution times: assuming no
caching, with typical caching, and with all code always resident in the cache.
Consider a system containing the following three processes:

Table 3.4 Scheduling on the Cache


Each process uses half the cache, so only two processes can be in the cache at the
same time, as shown in Table 3.4. Appearing below is a first schedule that uses a least-
recently-used cache replacement policy on a process-by-process basis.

Figure 3.25 – Caches and RTOS Performance


In the first iteration, we must fill up the cache, but even in subsequent iterations, competition
among all three processes ensures that a process is never in the cache when it starts to execute. As
a result, we must always use the worst-case execution time, as seen in fig 3.25. Another
schedule, in which we have reserved half the cache for P1, is shown below. This leaves P2 and P3 to
fight over the other half of the cache.

Figure 3.26 – Caches and RTOS Performance


In the case shown in fig 3.26, P2 and P3 still compete, but P1 is always ready. After the first iteration, we can
use the average-case execution time for P1, which gives us some spare CPU time that could be
used for additional operations.

6. Write a short note on power optimization strategies for processes in a real-time operating
system environment.
 The RTOS and system architecture can use static and dynamic power management mechanisms
to help manage the system’s power consumption.
 A power management policy is a strategy for determining when to perform certain power
management operations.
 However, the overall strategy embodied in the policy should be designed based on the
characteristics of the static and dynamic power management mechanisms.
Power-down trade-offs.
 Avoiding a power-down mode can cost unnecessary power.
 Powering down too soon can cause severe performance penalties.
Predictive power management
A more sophisticated technique is predictive shutdown. The goal is to predict when the next
request will be made and to start the system just before that time, saving the requestor the start-
up time. In general, predictive shutdown techniques are probabilistic—they make guesses about
activity patterns based on a probabilistic model of expected behavior. This can cause two types of
problems:
• The requestor may have to wait for an activity period. In the worst case, the requestor may not
make a deadline due to the delay incurred by system start-up, as shown in fig 3.27.
• The system may restart itself when no activity is imminent. As a result, the system will waste
power.

Fig 3.27- An L-Shaped usage distribution

Fig 3.28 - Architecture of a Power – managed System


 As shown in figure 3.28, we need to consider several elements of the total managed system.
 The service provider is the machine whose power is being managed;
 The service requestor is the machine or person making requests of that power-managed
system;
 A queue is used to hold pending requests (e.g., while waiting for the service provider to power
up); and
 The power manager is responsible for sending power management commands to the provider.
The power manager can observe the behaviour of the requestor, provider, and queue.
ACPI
The Advanced Configuration and Power Interface (ACPI) is an open industry standard for power
management services. It is designed to be compatible with a wide variety of operating systems.
ACPI supports the following five basic global power states:
• G3, the mechanical off state, in which the system consumes no power.
• G2, the soft off state, which requires a full operating system reboot to restore the machine to
working condition.
• G1, the sleeping state, in which the system appears to be off and the time required to return to
working condition is inversely proportional to power consumption. This state has four substates:
• S1, a low wake-up latency state with no loss of system context;
• S2, a low wake-up latency state with a loss of CPU and system cache state;
• S3, a low wake-up latency state in which all system state except for main memory is lost; and
• S4, the lowest-power sleeping state, in which all devices are turned off.
• G0, the working state, in which the system is fully usable.
• The legacy state, in which the system does not comply with ACPI.

Fig 3.29 - The Advanced Configuration and Power Interface and its relationship to a
Complete System
The power manager typically includes an observer, which receives messages through the ACPI
interface that describe the system behavior as shown in fig 3.29. It also includes a decision module
that determines power management actions based on those observations.

7. What is a real-time operating system? Give examples of real-time operating systems
(POSIX or Linux or Unix and Windows CE). Explain them in detail. (or) Discuss features
and services of the Windows CE real-time operating system.
A real-time operating system (RTOS) is an operating system (OS) intended to serve real-
time applications, processing data as it comes in, typically without buffering delays.
POSIX
 POSIX is a version of the Unix operating system created by a standards organization.
 POSIX-compliant operating systems are source-code compatible—an application can be compiled
and run without modification on a new POSIX platform assuming that the application uses only
POSIX-standard functions.

 While Unix was not originally designed as a real-time operating system, POSIX has been
extended to support real-time requirements. Many RTOSs are POSIX-compliant and it serves as
a good model for basic RTOS techniques.
 The POSIX standard has many options; particular implementations do not have to support all
options.
 The existence of features is determined by C preprocessor variables; for example, the FOO
option would be available if the _POSIX_FOO preprocessor variable were defined.
 All these options are defined in the system include file unistd.h.
Linux
 The Linux operating system has become increasingly popular as a platform for embedded
computing. Linux is a POSIX-compliant operating system that is available as open source.
However, Linux was not originally designed for real-time operation.
 Some versions of Linux may exhibit long interrupt latencies, primarily due to large critical
sections in the kernel that delay interrupt processing.
 Two methods have been proposed to improve interrupt latency. A dual-kernel approach uses a
specialized kernel, the co-kernel, for real-time processes and the standard kernel for non-real-
time processes. All interrupts must go through the co-kernel to ensure that real-time operations
are predictable. The other method is a kernel patch that provides priority inheritance to reduce
the latency of many kernel operations. These features are enabled using the PREEMPT_RT mode.
Processes in POSIX
In POSIX, a new process is created by making a copy of an existing process. The copying process
creates two different processes both running the same code. The complication comes in ensuring
that one process runs the code intended for the new process while the other process continues the
work of the old process.
 A process makes a copy of itself by calling the fork() function. That function causes the operating
system to create a new process (the child process) which is a nearly exact copy of the process
that called fork() (the parent process).
 They both share the same code and the same data values, with one exception, the return value of
fork(): the parent process is returned the process ID number of the child process, while the child
process gets a return value of 0. We can therefore test the return value of fork() to determine
which process is the child.
 The execv() function takes as arguments the name of the file that holds the child’s code and the
array of arguments. It overlays the process with the new code and starts executing it from the
main() function. In the absence of an error, execv() should never return. The code that follows
the call, perror() and exit(), takes care of the case where execv() fails and returns. The exit()
function is a C function that is used to leave a process; it relies on an
underlying POSIX function that is called _exit().
 The parent process should use one of the POSIX wait functions before calling exit() for itself. The
wait functions not only return the child process’s status, in many implementations of POSIX they
make sure that the child’s resources (namely memory) are freed. So we can extend our code as
follows:
 The parent_stuff() function performs the work of the parent function. The wait() function waits for
the child process; the function sets the integer cstatus variable to the return value of the child
process.
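The fork()/wait() relationship described above can be seen in a compilable fragment. The child here simply exits with a known status instead of calling execv(), so that the parent's use of the wait status is observable; the function name is invented for the example.

```c
#include <assert.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child and report its exit status back from the parent. */
int spawn_and_wait(void) {
    pid_t pid = fork();
    if (pid == 0) {
        /* child: a real program would call execv(path, argv) here */
        _exit(3);
    } else if (pid > 0) {
        int status = 0;
        waitpid(pid, &status, 0);   /* also frees the child's resources */
        return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
    }
    return -1;  /* fork() failed */
}
```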
The POSIX process model
POSIX does not implement lightweight processes. Each POSIX process runs in its own address
space and cannot directly access the data or code of other processes.
Real-time scheduling in POSIX
 POSIX supports real-time scheduling in the _POSIX_PRIORITY_SCHEDULING resource. POSIX
supports rate-monotonic scheduling in the SCHED_FIFO scheduling policy. The name of this
policy is unfortunately misleading: it is a strict priority-based scheme, not first-come-first-served;
a process runs until it terminates or blocks, and within a priority level processes are kept in
first-in-first-out order.
 Whenever a process changes its priority, it is put at the back of the queue for that priority level.
A process can also explicitly move itself to the end of its priority queue with a call to the
sched_yield () function.
 SCHED_RR is a combination of real-time and interactive scheduling techniques: within a priority
level, the processes are time sliced.
 The SCHED_OTHER is defined to allow non-real-time processes to intermix with real-time
processes.
POSIX semaphores
 POSIX supports semaphores but it also supports a direct shared memory mechanism. POSIX
supports counting semaphores in the _POSIX_SEMAPHORES option.
 The POSIX names for P and V are sem_wait () and sem_post () respectively.
 POSIX also provides a sem_trywait () function that tests the semaphore but does not block.
 POSIX shared memory is supported under the _POSIX_SHARED_MEMORY_OBJECTS option. The
shared memory functions create blocks of memory that can be used by several processes.
 The shm_open() function opens a shared memory object
POSIX pipes
 The pipe is very familiar to Unix users from its shell syntax: % foo file1 | baz > file2
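The same mechanism is available to C programs through the POSIX pipe() system call; a minimal round-trip within one process looks like this (the helper function name is invented for the example):

```c
#include <assert.h>
#include <string.h>
#include <unistd.h>

/* Create a pipe, push a message into the write end, read it back from the
   read end. Returns the number of bytes read, or -1 on error. */
int pipe_roundtrip(char *buf, size_t len) {
    int fds[2];
    const char *msg = "hello";
    if (pipe(fds) != 0) return -1;
    if (write(fds[1], msg, strlen(msg) + 1) < 0) return -1;
    int n = (int)read(fds[0], buf, len);
    close(fds[0]);
    close(fds[1]);
    return n;
}
```

In practice the two file descriptors are usually split between a parent and a child created with fork(), so that the two processes communicate through the pipe.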

POSIX message queues


POSIX also supports message queues under the _POSIX_MESSAGE_PASSING facility. The
advantage of a queue over a pipe is that, because queues have names, we do not have to create the
queue descriptor before creating the other process that uses it, as we must with pipes.
Windows CE
Windows CE supports devices such as smart phones, electronic instruments, etc. Windows is
designed to run on multiple hardware platforms and instruction set architectures. Some aspects of
Windows CE, such as details of the interrupt structure, are determined by the hardware architecture
and not by the operating system itself.
WinCE architecture
 Figure 3.30 shows a layer diagram for Windows CE. Applications run under the shell and its user
interface. The Win32 APIs manage access to the operating system. A variety of services and
drivers provide the core functionality. The OEM Adaptation Layer (OAL) provides an interface to the
hardware in much the same way that a HAL does in other software architectures.
 The architecture of the OAL is shown in Figure 3.31. The hardware provides certain primitives
such as a real-time clock and an external connection.
 The OAL itself provides services such as a real-time clock, power management, interrupts, and a
debugging interface.
 A Board Support Package (BSP) for a particular hardware platform includes the OAL and drivers.
Wince memory space
Windows CE provides support for virtual memory with a flat 32-bit virtual address space. A virtual
address can be statically mapped into main memory for key kernel-mode code; an address can also
be dynamically mapped, which is used for all user-mode and some kernel-mode code. Flash as well
as magnetic disk can be used as a backing store.
Figure 3.30 shows the division of the address space into kernel and user with 2 GBfor the operating
system and 2 GB for the user. Figure shows 3.31 OAL Architecture in organization of the user
address space. The top 1 GB is reserved for system elements such as DLLs, memory mapped files,
and shared system heap. The bottom 1 GB holds user elements such as code, data, stack, and
heap.

Fig 3.30 - Windows CE layer diagram

Fig 3.31 OAL Architecture in Windows CE


WinCE threads and Drivers
WinCE supports two kernel-level units of execution: the thread and the driver. Threads are defined
by executable files, while drivers are defined by dynamically linked libraries (DLLs).

Fig 3.32 Kernel and Users address Spaces in Windows CE



Fig.3.33 User address Space in Windows CE


WinCE scheduling
 Each thread is assigned an integer priority. Lower values signify higher priority: 0 is the
highest priority and 255 is the lowest possible priority.
 Priorities 248 through 255 are used for non-real-time threads, while the remaining (higher)
priorities are used for various categories of real-time execution. The operating system maintains
a queue of ready processes at each priority level.
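The selection rule can be sketched as a scan over ready-thread priorities; this illustrates the numbering convention (lower value wins), not actual WinCE code:

```c
/* Return the index of the ready thread that should run next, where
   priority 0 is the highest and 255 the lowest, as in WinCE. */
int pick_thread(const int prio[], int n)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (best < 0 || prio[i] < prio[best])  /* lower value = higher priority */
            best = i;
    return best;
}
```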
WinCE interrupts
 Interrupt handling is divided among three entities:
 The interrupt service handler (ISH) is a kernel service that provides the first response to the
interrupt.
 The ISH selects an interrupt service routine (ISR) to handle the interrupt. As shown in fig
3.34, the ISR runs in the kernel with interrupts turned off; as a result, it should be designed to
do as little direct work as possible.
 The ISR in turn calls an interrupt service thread (IST) which performs most of the work required
to handle the interrupt. The IST runs in the OAL and so can be interrupted by a higher-priority
interrupt.

Fig.3.34 - WinCE interrupts

7. Discuss in detail about MPSoCs and shared memory multiprocessors. (or) Explain the
operation and advantages of a CPU-accelerated system. (or) Illustrate why MPSoCs are
preferred over general-purpose microprocessors.
 Heterogeneous shared memory multiprocessors
 Accelerators
 Accelerator performance analysis
 Scheduling and allocation
Heterogeneous shared memory multiprocessors
 Many high-performance embedded platforms are heterogeneous multiprocessors. Different
processing elements perform different functions.
 The PEs may be programmable processors with different instruction sets or specialized
accelerators that provide little or no programmability.
 In both cases, the motivation for using different types of PEs is efficiency. Processors with
different instruction sets can perform different tasks faster and using less energy.
 Accelerators provide even faster and lower-power operation for a narrow range of functions.
Accelerators
 One important category of processing element for embedded multiprocessors is the accelerator.

 Accelerators can provide large performance increases for applications with computational kernels
that spend a great deal of time in a small section of code. Accelerators can also provide critical
speedups for low-latency I/O functions.
 The design of accelerated systems is one example of hardware/software codesign—the
simultaneous design of hardware and software to meet system objectives.
 As illustrated in figure 3.35, a CPU accelerator is attached to the CPU bus. The CPU is often
called the host.
 The CPU talks to the accelerator through data and control registers in the accelerator. These
registers allow the CPU to monitor the accelerator’s operation and to give the accelerator
commands.

Fig.3.35 - CPU Accelerator in a System
Accelerator performance analysis

Fig 3.36 Single-threaded vs. multithreaded control of an accelerator



Accelerator execution time

A simple accelerator will read all its input data, perform the required computation, and then
write all its results. In this case, the total execution time may be written as

taccel = tin + tx + tout

where tx is the execution time of the accelerator assuming all data are available, and tin and tout
are the times required for reading and writing the required variables, respectively.

Fig 3.37 Components of execution time for an accelerator


The values for tin and tout must reflect the time required for the bus transactions, including two
factors:
• The time required to flush any register or cache values to main memory, if those values are
needed in main memory to communicate with the accelerator, as shown in fig 3.37.
• The time required for transfer of control between the CPU and accelerator.
The total speedup S for a kernel can be written as

S = n(tCPU − taccel)
Fig 3.38 Streaming data into and out of an accelerator.



Here tCPU is the execution time of the equivalent function in software on the CPU and n is the
number of times the function will be executed. Fig 3.38 shows streaming data into and out of an
accelerator.
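The execution-time and speedup arithmetic can be sketched in a few lines; the helper names and the test numbers are illustrative:

```c
/* taccel = tin + tx + tout: read inputs, compute, write results. */
double accel_time(double t_in, double t_x, double t_out)
{
    return t_in + t_x + t_out;
}

/* S = n * (tCPU - taccel): total time saved over n executions of the kernel. */
double speedup(int n, double t_cpu, double t_accel)
{
    return n * (t_cpu - t_accel);
}
```

For example, if the CPU takes 25 time units per execution, the accelerator takes 2 + 10 + 3 = 15 units including transfers, and the kernel runs 100 times, the saving is 100 × 10 = 1000 units.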
System speedup

Fig 3.39 - Evaluating System Speedup in a single threaded implementation

Fig 3.40 - Evaluating System Speedup in a Multi-threaded implementation

Fig 3.41 - Streaming Data into and out of an accelerator



8. Discuss a case study on the design of a portable audio player (MP3 player) that
decompresses music files as it plays.
Subtopics

 Requirement

 Specification

 Architecture

 Components

 System integration


a) Theory of operation and requirements.
 Audio players are often called MP3 players after the popular audio data format, although a
number of audio compression formats have been developed and are in regular use.
 The earliest portable MP3 players were based on compact disc mechanisms. Modern MP3 players
use either flash memory or disk drives to store music. An MP3 player performs three basic
functions: audio storage, audio decompression, and user interface.
Audio decompression

 The incoming bit stream has been encoded using a Huffman-style code, which must be decoded.
The audio data itself is applied to a reconstruction filter, along with a few other parameters.
 Perceptual coding- The coder eliminates certain features of the audio stream so that the result
can be encoded in fewer bits. It tries to eliminate features that are not easily perceived by the
human audio system as shown in fig 3.42.
 Masking is one perceptual phenomenon that is exploited by perceptual coding. One tone can be
masked by another if the tones are sufficiently close in frequency. Some audio features can also
be masked if they occur too close in time after another feature.

Fig 3.42 - MPEG Layer 1 encoder



Fig 3.43 - MPEG Layer 1 data frame format


 MPEG data streams are divided into frames. A frame carries the basic MPEG data, error
correction codes, and additional information. Figure 3.43 shows the MPEG frame format.

Table 3.5 Requirements for Audio player

Specification
Figure 3.44 shows the major classes in the audio player. The File ID class is an abstraction of
a file in the flash file system. The controller class provides the method that operates the player.

Fig 3.44 - Classes in the Audio player

Figure 3.45 shows a state diagram for file display and selection. This specification assumes
that all files are in the root directory and that all files are playable audio.

State Diagram

Fig 3.45 - State diagram for the display and selection


System architecture
Audio processors
The Cirrus CS7410 [Cir04B] is an audio controller designed for CD/MP3 players. The audio
controller includes two processors, as shown in fig 3.46.

 The 32-bit RISC processor is used to perform system control and audio decoding.
 The 16-bit DSP is used to perform audio effects such as equalization.

Fig 3.46 - Architecture of a cirrus audio processor for CD/MP3 Player

Component design and testing


The audio decompression object can be implemented from existing code or created as new
software. In the case of an audio system that does not conform to a standard, it may be
necessary to create an audio compression program to create test files.

System integration and debugging

Only after the components are built do we have the satisfaction of putting them together and
seeing a working system. Of course, this phase usually consists of a lot more than just plugging
everything together and standing back. Bugs are typically found during system integration, and
good planning can help us find the bugs quickly.

9. Write in detail about the embedded concepts in the design of a simple Engine Control
Unit (ECU).
This unit controls the operation of a fuel-injected engine based on several measurements taken
from the running engine.
Subtopics
 Requirement (Table)
 Specification
 Architecture
 Components
 System integration
Figure 3.47 shows the block diagram of engine control

Fig 3.47 - Engine block diagram



Table 3.6 Requirement of Engine Control Unit


Theory of operation and requirements

Specification

Table 3.7 Periods for Data in the engine controller


The engine controller must deal with processes that happen at different rates. Table 3.7 shows
the update periods for the different signals.

We will use ΔNE and ΔT to represent the change in RPM and throttle position, respectively. Our
controller computes two output signals, injector pulse width PW and spark advance angle S [Toy].
It first computes initial values for these variables:

The controller then applies corrections to these initial values:


 As the intake air temperature (THA) increases during engine warm-up, the controller reduces
the injection duration.
 As the throttle opens, the controller temporarily increases the injection frequency.
 The controller adjusts duration up or down based upon readings from the exhaust oxygen

sensor (OX).
 The injection duration is increased as the battery voltage (+B) drops.
System architecture

Fig 3.48 - State diagram for the engine controller


Figure 3.48 shows the state diagram for the engine controller. The two major processes,
pulse-width and advance-angle, compute the control parameters for the spark plugs and
injectors.
Figure 3.49 shows the state diagram for throttle sensing, which saves both the current value
and the change in value of the throttle. We can use similar control flow to compute changes to
the other variables. Figure 3.50 shows the state diagram for injector pulse width and Figure
3.51 shows the state diagram for spark advance angle. In each case, the value is computed in
two stages, first an initial value followed by a correction.

Fig 3.49 - State diagram for Throttle Position Sensing

Fig 3.50 - State diagram for Injector Pulse Width


Fig 3.51 - State diagram for Spark Advance Angle

Component design and testing


The various tasks must be coded to satisfy the requirements of RTOS processes. Variables that
are maintained across task executions, such as the change-of-state variables, must be allocated
and saved in appropriate memory locations. The RTOS initialization phase is used to set up the
task periods.
Because some of the output variables depend on changes in state, these tasks should be tested
with multiple input variable sequences to ensure that both the basic and adjustment calculations
are performed correctly.
System integration and testing
Engines generate huge amounts of electrical noise that can cripple digital electronics. They also
operate over very wide temperature ranges: hot during engine operation, potentially very cold
before the engine is started. Any testing performed on an actual engine must be conducted using
an engine controller that has been designed to withstand the harsh environment of the engine
compartment.

10. Explain the design of a video accelerator with specification, architecture, testing
and system integration. (or) With neat sketches, explain the working of a video
accelerator.
Subtopics

 Requirement (Table)

 Specification

 Architecture

 Components

 System integration
MPEG-2 forms the basis for U.S. HDTV broadcasting. This compression uses several component
algorithms together in a feedback loop. The discrete cosine transform (DCT) used in JPEG also
plays a key role in MPEG-2.
As in still image compression, the DCT of a block of pixels is quantized for lossy compression and
then subjected to lossless variable-length coding to further reduce the number of bits required to
represent the block.
Motion-based coding
MPEG uses motion to encode one frame in terms of another. Rather than send each frame
separately, as in motion JPEG, some frames are sent as modified forms of other frames using a
technique known as block motion estimation.
During encoding, the frame is divided into macroblocks. Macroblocks from one frame are
identified in other frames using correlation. The frame can then be encoded using the vector that
describes the motion of the macroblock from one frame to another without explicitly transmitting
all of the pixels.
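Block motion estimation can be sketched as an exhaustive search that minimizes the sum of absolute differences (SAD) between a macroblock and candidate positions in the reference frame; the frame and block sizes below are toy values for illustration, not MPEG-2 parameters:

```c
#include <stdlib.h>

#define W 16            /* frame width in this toy example  */
#define H 16            /* frame height                      */
#define B 4             /* macroblock size                   */

/* SAD between the macroblock at (cx, cy) in the current frame and the
   candidate block at (rx, ry) in the reference frame. */
int sad(const unsigned char *cur, const unsigned char *ref,
        int cx, int cy, int rx, int ry)
{
    int s = 0;
    for (int y = 0; y < B; y++)
        for (int x = 0; x < B; x++)
            s += abs(cur[(cy + y) * W + cx + x] - ref[(ry + y) * W + rx + x]);
    return s;
}

/* Search offsets in [-r, r] in each direction and return the best motion
   vector (dx, dy) for the macroblock at (cx, cy). */
void best_vector(const unsigned char *cur, const unsigned char *ref,
                 int cx, int cy, int r, int *dx, int *dy)
{
    int best = -1;
    for (int oy = -r; oy <= r; oy++)
        for (int ox = -r; ox <= r; ox++) {
            int rx = cx + ox, ry = cy + oy;
            if (rx < 0 || ry < 0 || rx + B > W || ry + B > H)
                continue;       /* keep the candidate block inside the frame */
            int s = sad(cur, ref, cx, cy, rx, ry);
            if (best < 0 || s < best) { best = s; *dx = ox; *dy = oy; }
        }
}
```

Real encoders use much larger search windows and faster (non-exhaustive) search strategies, but the cost structure is the same, which is why this kernel is a natural candidate for a hardware accelerator.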

Figure 3.52 - Block diagram of the MPEG-2 Compression Algorithm


As shown in Figure 3.52, the MPEG-2 encoder also uses a feedback loop to further improve image
quality. This form of coding is lossy, and several different conditions can cause prediction to be
imperfect:
 objects within a macroblock may move from one frame to the next, and
 a macroblock may not be found by the search algorithm.
The encoder uses the encoding information to recreate the lossily-encoded picture, compares it to
the original frame, and generates an error signal that can be used by the receiver to fix smaller
errors.
The decoder must keep some recently decoded frames in memory so that it can retrieve the pixel
values of macroblocks. This internal memory saves a great deal of transmission and storage
bandwidth.
bandwidth.

Figure 3.53 shows block motion estimation and fig 3.54 shows the block motion search
parameters of the video accelerator.

Fig 3.53 - Block motion estimation

Fig 3.54 - Block motion search parameters


Table 3.8 Requirements for video accelerator


Specification

Fig 3.55 - Classes describing basic data types in the video accelerator
Figure 3.55 defines some classes that describe basic data types in the system: the motion
vector, the macroblock, and the search area.
Figure 3.56 shows the architecture of video accelerator.

Architecture

Fig 3.56 - An architecture for the motion accelerator

Component design

If we want to use a standard FPGA accelerator board to implement the accelerator, we must first
make sure that it provides the proper memory required for M and S.


Designing an FPGA is, for the most part, a straightforward exercise in logic design. Because the
logic for the accelerator is very regular, we can improve the FPGA’s clock rate by properly placing
the logic in the FPGA to reduce wire lengths.
If we are designing our own accelerator board, we have to design both the video accelerator
design proper and the interface to the PCI bus. We can create and exercise the video accelerator
architecture in a hardware description language like VHDL or Verilog and simulate its operation.
System testing
You can use standard video tools to extract a few frames from a digitized video and store them in
JPEG format. Open source for JPEG encoders and decoders is available. These programs can be
modified to read JPEG images and put out pixels in the format required by your accelerator. With
a little more cleverness, the resulting motion vector can be written back onto the image for a
visual confirmation of the result. If you want to be adventurous and try motion estimation on
video, open source MPEG encoders and decoders are also available.

11. With relevant examples, bring out the difference between the clock-driven scheduling
approach and the priority (event)-driven scheduling approach.
Real-time tasks can be scheduled by the operating system using various scheduling algorithms.
These scheduling algorithms are classified on the basis of how the scheduling points are determined.
1. Clock-driven Scheduling:
Scheduling in which the scheduling points are determined by interrupts received from a clock
is known as clock-driven scheduling. In clock-driven scheduling, which task is to be processed
next is decided at clock interrupt points.
2. Event-driven Scheduling:
Scheduling in which the scheduling points are determined by event occurrences, excluding
clock interrupts, is known as event-driven scheduling. In event-driven scheduling, which task is
to be processed next is independent of clock interrupt points.
Difference between Clock-driven and Event-driven Scheduling:

CLOCK-DRIVEN SCHEDULING | EVENT-DRIVEN SCHEDULING
Tasks are scheduled on the basis of interrupts received from the clock. | Tasks are scheduled on the basis of event occurrences, excluding clock interrupts.
Scheduling points are determined by clock interrupts. | Scheduling points are determined by task completion and task arrival events.
Clock-driven scheduling algorithms are simple. | Event-driven scheduling algorithms are very complex.
Clock-driven scheduling is not flexible. | Event-driven scheduling is more flexible than clock-driven.
It can only handle periodic tasks. | It can schedule periodic, sporadic and aperiodic tasks.
It is called offline scheduling. | It is called online scheduling.
It is widely used in embedded systems. | It is less suitable for embedded systems.
It is more efficient than event-driven. | It is sophisticated but more proficient.
It is used in small applications. | It is used in larger applications.
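A clock-driven schedule can be sketched as a fixed table that is walked once per timer tick, so the scheduling points are exactly the clock interrupts; the table contents are illustrative:

```c
#define FRAMES 4

/* Task id to run in each frame of the major cycle; the cycle repeats. */
int schedule_table[FRAMES] = {0, 1, 0, 2};

/* Called at each clock interrupt: pick the task for this tick. */
int task_for_tick(int tick)
{
    return schedule_table[tick % FRAMES];
}
```

An event-driven scheduler, by contrast, would re-evaluate priorities whenever a task arrives or completes rather than consulting a precomputed table.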

12. Explain the concepts of distributed embedded systems. (OR) Explain the features and
applications of internet-enabled embedded systems. (OR) Discuss in detail the
interconnection networks used for distributed embedded computing. (OR) With a neat
diagram, describe the typical bus transactions on the I2C bus.
 Network abstractions
 CAN bus
 I2C bus


 Ethernet
 Internet
Network abstractions
The OSI model includes seven levels of abstraction as shown in figure 3.57.
 Physical: The physical layer defines the basic properties of the interface between systems,
including the physical connections (plugs and wires).
 Data link: The primary purpose of this layer is error detection and control across a single link.
 Network: This layer defines the basic end-to-end data transmission service.
 Transport: The transport layer defines connection-oriented services that ensure that data are
delivered in the proper order and without errors across multiple links.
 Session: A session provides mechanisms for controlling the interaction of end-user services
across a network, such as data grouping and check pointing.
 Presentation: This layer defines data exchange formats and provides transformation utilities
to application programs.
 Application: The application layer provides the application interface between the network and
end-user programs.

Fig 3.57 - The OSI layers


CAN bus
 The CAN bus was designed for automotive electronics and was first used in production cars in
1991. It uses bit-serial transmission.
 CAN runs at rates of 1 Mb/second over a twisted pair connection of 40 meters. An optical link
can also be used. The bus protocol supports multiple masters on the bus.


Physical layer
 As shown in figure 3.58, each node in the CAN bus has its own electrical drivers and receivers
that connect the node to the bus in wired-AND fashion.
 In CAN terminology, a logical 1 on the bus is called recessive and a logical 0 is dominant. The
driving circuits on the bus cause the bus to be pulled down to 0 if any node on the bus pulls
the bus down (making 0 dominant over 1).
 When all nodes are transmitting 1s, the bus is said to be in the recessive state; when a node
transmits a 0, the bus is in the dominant state. Data are sent on the network in packets known
as data frames.
CAN is a synchronous bus—all transmitters must send at the same time for bus arbitration to
work. Nodes synchronize themselves to the bus by listening to the bit transitions on the bus. The
first bit of a data frame provides the first synchronization opportunity in a frame. The nodes must
also continue to synchronize themselves against later transitions in each frame.

Fig 3.58 - Physical and electrical organization of a CAN bus


Arbitration
 Control of the CAN bus is arbitrated using a technique known as Carrier Sense Multiple Access
with Arbitration on Message Priority (CSMA/AMP).
 This method is similar to the I2C bus’s arbitration method; like I2C, CAN encourages a
data-push programming style.
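The wired-AND arbitration can be sketched as a bit-by-bit simulation: each node transmits its identifier, a 0 (dominant) overrides a 1 (recessive) on the bus, and any node that sends a 1 but sees a 0 drops out, so the lowest identifier wins. This is an illustration of the mechanism, not a CAN controller implementation:

```c
/* Simulate arbitration among up to 8 nodes with id_bits-wide identifiers.
   Returns the identifier of the winning node. */
unsigned can_arbitrate(const unsigned ids[], int n, int id_bits)
{
    int active[8];                             /* up to 8 nodes in this sketch */
    for (int i = 0; i < n; i++) active[i] = 1;
    for (int bit = id_bits - 1; bit >= 0; bit--) {
        unsigned bus = 1;                      /* recessive unless pulled down */
        for (int i = 0; i < n; i++)
            if (active[i] && !((ids[i] >> bit) & 1))
                bus = 0;                       /* any dominant 0 wins the wire */
        for (int i = 0; i < n; i++)
            if (active[i] && ((ids[i] >> bit) & 1) != bus)
                active[i] = 0;                 /* sent recessive, saw dominant: back off */
    }
    for (int i = 0; i < n; i++)
        if (active[i]) return ids[i];
    return 0;
}
```

Because arbitration is decided bit by bit without destroying the winning message, the highest-priority (numerically lowest) identifier is transmitted without any wasted bus time.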


Data frame

Fig 3.59 - The CAN data frame format


Remote frames
A remote frame is used to request data from another node. The requestor sets the RTR bit
to 0 to specify a remote frame; it also specifies zero data bits.
Error handling
An error frame can be generated by any node that detects an error on the bus. Upon
detecting an error, a node interrupts the current transmission with an error frame, which consists
of an error flag field followed by an error delimiter field of 8 recessive bits. The error delimiter
field allows the bus to return to the quiescent state so that data frame transmission can resume
as shown in fig.3.59.

Fig 3.60 - Architecture of a CAN Controller


I2C bus
 The I2C bus is a well-known bus commonly used to link microcontrollers into systems.
 It has even been used for the command interface in an MPEG-2 video chip: while a separate
bus was used for high-speed video data, setup information was transmitted to the on-chip
controller through an I2C bus interface.
Physical layer
I2C is designed to be low cost, easy to implement, and of moderate speed (up to 100 kbits per
second for the standard bus and up to 400 kbits/sec for the extended bus). As a result, it uses
only two lines: the serial data line (SDL) for data and the serial clock line (SCL), which indicates
when valid data are on the data line.
Figure 3.61 shows the structure of a typical I2C bus system. Every node in the network is
connected to both SCL and SDL. Some nodes may be able to act as bus masters, and the bus may
have more than one master. Other nodes may act as slaves that only respond to requests from
masters.

Fig 3.61 - Structure of an I2C bus system


Electrical interface

Fig 3.62 - Electrical interface to the I2C bus


Data link layer


 A bus transaction is comprised of a series of one-byte transmissions: an address followed
by one or more data bytes. I2C encourages a data-push programming style.
 When a master wants to write to a slave, it transmits the slave’s address followed by the data.
Because a slave cannot initiate a transfer, the master must send a read request with the slave’s
address and let the slave transmit the data.
 Therefore, an address transmission includes the 7-bit address and 1 bit for data direction: 0 for
writing from the master to the slave and 1 for reading from the slave to the master. (This
explains the 7-bit addresses on the bus.) The format of an address transmission is shown in
figure 3.62.
A bus transaction is initiated by a start signal and completed with an end signal:
• A start is signaled by leaving the SCL high and sending a 1 to 0 transition on SDL.
• A stop is signaled by setting the SCL high and sending a 0 to 1 transition on SDL.

Fig 3.63 - State transition graph for I2C bus master

The formats of some typical complete bus transactions are shown in figure 3.64. In the first
example, the master writes two bytes to the addressed slave. In the second, the master requests
a read from a slave. In the third, the master writes one byte to the slave, and then sends another
start to initiate a read from the slave.


Fig 3.64 - Typical bus Transactions on the I2C bus


Application interface
 The I2C interface on a microcontroller can be implemented with varying percentages of the
functionality in software and hardware.
 As illustrated in figure 3.65, a typical system has a 1-bit hardware interface with routines for
byte-level functions.
 The I2C device takes care of generating the clock and data. The application code calls routines
to send an address, send a data byte, and so on, which then generate the SCL and SDL signals,
acknowledges, and so forth.

Fig 3.65 - An I2C Interface in a microcontroller


Ethernet
 Ethernet is very widely used as a local area network for general-purpose computing.
 Because of its ubiquity and the low cost of Ethernet interfaces, it has seen significant use as a
network for embedded computing.


 Ethernet is particularly useful when PCs are used as platforms, making it possible to use
standard components, and when the network does not have to meet rigorous real-time
requirements.
 The physical organization of an Ethernet is very simple, as shown in figure 3.66. The network is
a bus with a single signal path; the Ethernet standard allows for several different
implementations such as twisted pair and coaxial cable.

Fig 3.66 - Ethernet Physical Organization


The Ethernet arbitration scheme is known as Carrier Sense Multiple Access with Collision
Detection (CSMA/CD). A node that has a message waits for the bus to become silent and then
starts transmitting. It simultaneously listens, and if it hears another transmission that interferes
with its transmission, it stops transmitting and waits to retransmit, as shown in fig 3.67.
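The binary exponential backoff used after a collision can be sketched as follows; the cap at 10 doublings follows common Ethernet practice, and the helper name is illustrative:

```c
#include <stdlib.h>

/* After the k-th collision, wait a random number of slot times chosen
   uniformly from [0, 2^min(k,10) - 1]. */
int backoff_slots(int collisions)
{
    int k = collisions < 10 ? collisions : 10;   /* cap the window at 2^10 slots */
    int window = 1 << k;                          /* 2^k possible slot counts */
    return rand() % window;                       /* pick one at random */
}
```

Because the window doubles with each collision, the random waits grow until one node wins the bus, but no bound can be placed on how long that takes, which is why plain Ethernet cannot guarantee real-time delivery.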

Fig 3.67 - Exponential back off times.

Fig 3.68 - Ethernet Packet Format


Real-time operations over Ethernet
Ethernet was not designed to support real-time operations; the exponential backoff scheme
cannot guarantee the delivery time of any data. Because so much Ethernet hardware and software
are available, many different approaches have been developed to extend Ethernet to real-time
operation; some of these are compatible with the standard while others are not.
Internet
 The Internet Protocol (IP) is the fundamental protocol on the Internet.
 It provides connectionless, packet-based communication.
 Industrial automation has long been a good application area for Internet-based embedded
systems.
 Information appliances that use the Internet are rapidly becoming another use of IP in
embedded computing.
Internetworking
 IP is not defined over a particular physical implementation; it is an internetworking standard.
 The relationship between IP and individual networks is illustrated in figure 3.69. IP works at
the network layer.
 When node A wants to send data to node B, the application’s data pass through several layers
of the protocol stack to get to the Internet Protocol. IP creates packets for routing to the
destination, which are then sent to the data link and physical layers. A node that transmits
data among different types of networks is known as a router.

Fig 3.69 Protocol utilization in internet Communication


The basic format of an IP packet is shown in figure 3.70. The header and data payload are both of
variable length. The maximum total length of the header and data payload is 65,535 bytes. An
Internet address is a number (32 bits in early versions of IP, 128 bits in IPv6). The IP address is
typically written in the dotted form xxx.xx.xx.xx. The names by which users and applications
typically refer to Internet nodes, such as foo.baz.com, are translated into IP addresses via calls to
a Domain Name Server (DNS), one of the higher-level services built on top of IP.


Fig 3.70 - IP Packet Structure


The Internet also provides higher-level services built on top of IP. The Transmission Control
Protocol (TCP) is one such example. It provides a connection-oriented service that ensures that
data arrive in the appropriate order, and it uses an acknowledgment protocol to ensure that
packets arrive.
Figure 3.71 shows the relationships between IP and higher-level Internet services. Using IP as the
foundation, TCP is used to provide the File Transport Protocol (FTP) for batch file transfers, the
Hypertext Transport Protocol (HTTP) for World Wide Web service, the Simple Mail Transfer
Protocol (SMTP) for email, and Telnet for virtual terminals.

Fig 3.71 - The internet Service Stack


A separate transport protocol, the User Datagram Protocol (UDP), is used as the basis for the
network management services provided by the Simple Network Management Protocol (SNMP).

MEC III ECE ET 3491 EIOT

UNIT V
IOT PHYSICAL DESIGN
Basic building blocks of an IoT device - Raspberry Pi - Board - Linux on Raspberry Pi - Interfaces -
Programming with Python - Case Studies: Home Automation, Smart Cities, Environment and
Agriculture.

PART - A

1. Name the basic blocks of an IoT device.


An IoT system comprises four basic building blocks:
 Sensors/Actuators.
 Processors,
 Gateways, and
 Applications.
2. List the functional modules used in IoT device.

An IoT device consists of several modules based on their functional attributes:

 Sensing/ actuation module,


 Analysis & processing module,
 Communication module, and
 Application module.

3. What is Raspberry Pi?


Raspberry Pi is a series of low-cost small Single-Board Computers (SBCs) with the physical size of a
credit card, developed in the United Kingdom by the Raspberry Pi Foundation in association with
Broadcom. Raspberry Pi runs the Linux operating system and can perform almost all tasks that a normal
desktop computer can do. It also allows interfacing sensors and actuators through the general-purpose
I/O pins.
4. Name the interfaces used in Raspberry Pi.
The following interfaces are used for data transfer in Raspberry Pi:
 Serial interface.
 Serial Peripheral Interface (SPI).
 Inter-Integrated Circuit (I2C) interface.

5. List the communication models used in IOT.


(i) Request response model
(ii) Publisher subscriber model.
(iii) Push pull model.
(iv) Exclusive pair model.

6. Name the different sub models available in IoT domain model.


(i) IoT information model.
(ii) IoT functional model.
(iii) IoT Communication model.


(iv) IoT trust, security and privacy model.

7. What is domain model?


The domain model describes the domain using a number of submodels. It captures the basic attributes
of the main concepts and the relationships between these concepts. It also serves as a tool for
communication between people working in the domain.

8. What is actuator?
IoT devices can have various types of actuators attached to them that allow taking actions upon the
physical entities in the vicinity of the device.

9. What is the function of gateway?


The main task of a gateway is to route the processed data, that is, to connect one network to another.
Gateways are responsible for bridging sensor nodes with the external Internet or World Wide Web.
They connect networks such as LAN, WAN, and PAN.

10. What are the functions of sensors used in IoT system?


Sensors are the front end of IoT devices. They can be either on board the IoT device or attached to it.
An IoT device can collect various types of information from the on-board or attached sensors, such
as temperature, humidity, and light intensity. The sensed information can be communicated either to
the processors or to cloud-based servers.

2
MEC III ECE ET 3491 EIOT

PART - B

1. Explain in detail about the basic building blocks of an IOT device with neat diagram.

 An IoT system comprises four basic building blocks, as shown in fig 5.1:
(i) Sensors/ Actuators,
(ii) Processors,
(iii) Gateways, and
(iv) Applications

Fig 5.1. Basic building blocks of an IoT device

(i) Sensors: Sensing


 Sensors are the front end of the IoT devices. They are the "things" in IoT. They can be either on
board the IoT device or attached to the device.
 An IoT device can collect various types of information from the on-board or attached sensors such as
temperature, humidity, light intensity, etc. The sensed information can be communicated either to
the processors or to cloud-based servers.
(ii) Actuators: Actuation
 IoT devices can have various types of actuators attached to them that allow taking actions upon the
physical entities in the vicinity of the device.
 For example, a relay switch connected to an IoT device can turn an appliance on/off based on the
commands sent to the device.
(iii) Processors: Analysis & Processing
 Processors are the brain of the IoT system. The main job of the processor is to process the raw data
collected by the sensors and transform it into meaningful information and knowledge. Processors
are controlled by the applications, and another important job of theirs is securing data: they
perform encryption and decryption of data. Examples of processors are microcontrollers
and microcomputers.
(iv) Gateway: Communication
The main task of a gateway is to route the processed data, that is, to connect one network to another.
Gateways are responsible for bridging sensor nodes with the external Internet or World Wide Web.
Ex: LAN, WAN, PAN, etc.


(v) Applications:
Applications provide a user interface and effective utilization of the data collected. Examples: smart
home apps, security system control apps and industrial control hub apps. An IoT device can consist
of several modules based on their functional attributes, as shown in fig. 5.2.
(i) Sensing/actuation module,
(ii) Analysis & processing module,
(iii) Communication module, and
(iv) Application module.

Fig 5.2. IoT device modules
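As a toy illustration of how the four functional modules fit together, they can be modelled as a pipeline of functions. All names, sensor values, and the Fahrenheit conversion below are hypothetical, chosen only to make the flow concrete:

```python
# Hypothetical pipeline mirroring the four functional modules of an IoT device.
def sense():                                    # sensing/actuation module
    return {"temperature_c": 24.5}

def process(raw):                               # analysis & processing module
    return {"temp_f": raw["temperature_c"] * 9 / 5 + 32}

def communicate(info):                          # communication module (stand-in for a network send)
    return info

def apply_ui(info):                             # application module
    return "Display: {:.1f} F".format(info["temp_f"])

print(apply_ui(communicate(process(sense()))))  # Display: 76.1 F
```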

2. Explain in detail about RASPBERRY PI with neat diagram.


 Raspberry Pi is a series of low-cost small Single-Board Computers (SBCs) with the physical size of
a credit card, developed in the United Kingdom by the Raspberry Pi Foundation in association with
Broadcom.
 Raspberry Pi runs the Linux operating system and can perform almost all tasks that a normal desktop
computer can do. It also allows interfacing sensors and actuators through the general-purpose I/O
pins.
 Raspberry Pi supports Python which is a beginner-friendly programming language that is used in
schools, web development, scientific research, and in many other industries.
Raspberry pi board

Fig 5.3. Raspberry Pi board


Fig 5.3 shows the Raspberry Pi board with the various components / peripherals that are attached as
follows,
(a) Processor & RAM:
Raspberry Pi is based on an ARM processor. Early models come with a 700 MHz low-power
ARM1176JZF-S processor and 512 MB SDRAM; later models such as the Raspberry Pi 4 use faster
multi-core ARM processors and more RAM.


(b) USB ports:


These USB ports are used to connect the peripherals like a keyboard or mouse. The two black ports
are USB 2.0 and the two blue ports are USB 3.0.
(c) Ethernet port:
This port connects the Raspberry Pi to a wired network. Raspberry Pi also has Wi-Fi and Bluetooth
built in for wireless connections.
(d) HDMI ports:
The HDMI port on Raspberry Pi provides both video and audio output to external monitors. The
Raspberry Pi 4 features two micro HDMI ports, allowing it to drive two separate monitors at the same
time.
(e) Camera module port:
This port is used to connect the official Raspberry Pi camera module, which enables it to capture
images.
(f) AV jack: This AV jack allows you to connect speakers or headphones.
(g) GPIO pins:
This General Purpose Input/output pins are used to connect the electronic components.
(h) USB power port:
This USB port powers the Raspberry Pi. The Raspberry Pi 4 has a USB Type-C port, while older
versions of the Pi have a micro-USB port.
(i) External Display port:
This port is used to connect the official seven-inch Raspberry Pi touch display for touch-based input.
(j) Micro SD card slot:
This card slot is for the micro SD card that contains the operating system and files.

3. Explain in detail about linux on raspberry pi.


Introduction
 Linux is a powerful, open-source operating system based on Unix that is used for computers,
servers, mainframes, mobile devices, and embedded devices. Raspberry Pi supports various flavors
of Linux including:
(i) Raspbian: Raspbian Linux is a Debian Wheezy port optimized for Raspberry Pi. This is the
recommended Linux for Raspberry Pi.
(ii) Arch: Arch is an Arch Linux port for ARM devices.
(iii) Pidora: Pidora Linux is a Fedora Linux port optimized for Raspberry Pi.
(iv) RaspBMC: An XBMC media-center distribution for Raspberry Pi.
(v) OpenELEC: A fast and user-friendly XBMC media-center distribution.
(vi) RISC OS: A very fast and compact operating system.

Installation
Following are the steps for headless installation of Raspberry Pi OS:
(i) Download the Raspberry Pi Imager software by clicking on the download button at
https://www.raspberrypi.org/software/
(ii) After installing Raspberry Pi Imager, open it. The interface of Raspberry Pi Imager is shown below.


(iii) Click on choose OS and select Raspberry Pi OS (32-bit) from the selection box as shown below

(iv) Attach your micro SD card to computer and then click on Choose SD card button as shown
below and select the SD card.

(v)Now, click on Write button. Raspberry Pi Imager will download the official Raspberry Pi OS
online and then write it on to your SD card.
(vi) After the process of writing Raspberry Pi OS is over, open the SD card in explorer. Create a file
named wpa_supplicant.conf in the SD card root folder and insert the following text which includes
the password into that file and save it.
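The exact text to insert is not reproduced in these notes; a typical wpa_supplicant.conf for a WPA2 home network looks like the sketch below, where the country code, SSID, and password are placeholders to be replaced with your own values:

```
country=IN
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YourNetworkSSID"
    psk="YourNetworkPassword"
}
```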


Table 5.1: Raspberry Pi frequently used commands

4. Explain in detail about RASPBERRY PI interfaces with neat diagram.


 Raspberry Pi has serial, SPI and I2C interfaces for data transfer.
(i)Serial:
 Serial interface on Raspberry Pi has receive (Rx) and transmit (Tx) pins for communication with
serial peripherals as shown in fig.5.4.

Fig 5.4. Serial Interface

(ii) Serial Peripheral Interface (SPI)


SPI is a synchronous serial data protocol used for communicating with one or more peripheral
devices. In an SPI connection as shown in fig 5.5, there is one master device and one or more
peripheral devices. There are five pins on Raspberry Pi for SPI interface:
(a) Master In Slave Out (MISO): Peripheral (slave) line for sending data to the master.
(b) Master Out Slave In (MOSI): Master line for sending data to the peripherals.
(c) Serial Clock (SCLK): Clock generated by the master to synchronize data transmission.
(d) Chip Enable 0 (CE0): To enable or disable devices.
(e) Chip Enable 1 (CE1): To enable or disable devices.

Fig 5.5. SPI Connection


(iii) Inter-Integrated Circuit (I2C)

The I2C interface pins on Raspberry Pi allow you to connect hardware modules. I2C interface allows
synchronous data transfer with just two pins: SDA (data line) and SCL (clock line) as shown in
fig.5.6.

Fig 5.6. I2C Interface

5. Explain in detail about Programming Raspberry pi with python.


Controlling LED with Raspberry Pi
Fig 5.7 shows the schematic diagram of connecting an LED to Raspberry Pi. In this example, the
LED is connected to GPIO pin 18, but you can connect the LED to any other GPIO pin as well.

Fig 5.7. Controlling LED with Raspberry Pi


Switching LED on/off from Raspberry Pi console
$ echo 18 > /sys/class/gpio/export
$ cd /sys/class/gpio/gpio18
# Set pin 18 direction to out
$ echo out > direction
# Turn LED on
$ echo 1 > value
# Turn LED off
$ echo 0 > value
The Python program for blinking an LED connected to Raspberry Pi every second is given below.
This program uses the RPI.GPIO module to control the GPIO on Raspberry Pi. In this program we
set pin 18 direction to output and then write True/False alternatively after a delay of one second.

Python program for blinking LED

import RPi.GPIO as GPIO
import time

GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.OUT)
while True:
    GPIO.output(18, True)
    time.sleep(1)
    GPIO.output(18, False)
    time.sleep(1)
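RPi.GPIO only runs on the Pi itself. Off-device, the same blink control flow can be exercised against a small hand-written mock; the MockGPIO class and its log format below are illustrative, not part of the RPi.GPIO library:

```python
# Hypothetical mock of the RPi.GPIO calls used by the blink program, for testing off-device.
class MockGPIO:
    BCM, OUT = "BCM", "OUT"
    def __init__(self):
        self.log = []
    def setmode(self, mode):
        self.log.append(("setmode", mode))
    def setup(self, pin, direction):
        self.log.append(("setup", pin))
    def output(self, pin, value):
        self.log.append(("output", pin, value))

def blink(gpio, pin, times):
    gpio.setmode(gpio.BCM)
    gpio.setup(pin, gpio.OUT)
    for _ in range(times):          # no sleep here, to keep the test fast
        gpio.output(pin, True)
        gpio.output(pin, False)

g = MockGPIO()
blink(g, 18, 2)
print(g.log[:2])  # [('setmode', 'BCM'), ('setup', 18)]
```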

6. Explain in detail about case study for Home Automation.

(1) Smart Lighting

 Smart lighting for homes helps in saving energy by adapting the lighting to the ambient conditions
and switching on/off or dimming the lights when needed.
 Smart lighting uses IoT-enabled sensors, bulbs, or adapters to allow users to manage their home or
office lighting.
 Smart lighting solutions can be controlled through an external device like a smartphone or smart
assistant that can be set to operate on a schedule, or triggered by sound or motion.
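A sound-or-motion trigger of the kind described above can be sketched as a simple decision rule; the lux threshold of 50 and the function name are assumptions for illustration:

```python
# Hypothetical decision rule: light on only when it is dark and motion is detected.
def light_should_be_on(ambient_lux, motion_detected, lux_threshold=50):
    return motion_detected and ambient_lux < lux_threshold

print(light_should_be_on(20, True))    # True: dark room with motion
print(light_should_be_on(200, True))   # False: already bright enough
```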

Key Benefits of IoT-Enabled Smart Lighting:

 Save money by switching to more energy-efficient LED bulbs.


 Set schedules to ensure that lights are off when they aren't needed or control lighting schedules
remotely as a security measure when you're away from home or out of town.
 Adjust the color or dimness of lights in different rooms or individual bulbs.

(2) Smart Appliances

 Modern homes have a number of appliances such as TVs, refrigerators, music systems,
washers/dryers, etc. Managing and controlling these appliances can be difficult because each
appliance has either its own controls or remote controls.
 Smart appliances make the management easier and also provide status information to the user
remotely.
 Any appliance can become smart with wireless connectivity and sensors that allow remote control
or autonomous operation through user input, scheduling, or Artificial Intelligence and Machine
Learning (AI/ML).
 Sensors combined with wireless connectivity can provide the end-user with information about the
appliance's usage, temperature, service life, maintenance schedules, or operation anomalies.

Advantages

 Smart appliances enable users to connect, control, and monitor their appliances allowing them to
save time, energy, and money.
 Additionally, they can remotely monitor appliances to ensure that they are turned off for safety,
even after leaving home.

(3) Intrusion Detection

 Home intrusion detection systems use security cameras and sensors such as PIR sensors and door
sensors to detect intrusions and raise alerts, which can be in the form of an SMS or an email sent to
the user.
 Advanced systems can even send detailed alerts, such as an image grab or a short video clip sent as
an email attachment.

(4) Smoke/Gas Detectors


 Smoke detectors are installed in homes and buildings to detect smoke, which is typically an early sign
of fire. Smoke detectors use optical detection, ionization, or air-sampling techniques to detect
smoke.
 Alerts raised by smoke detectors can be in the form of signals to a fire alarm system. Gas detectors
can detect the presence of harmful gases such as carbon monoxide (CO) and Liquefied Petroleum Gas
(LPG).
 A smart smoke/gas detector can raise alerts in a human voice describing where the problem is, send
an SMS or email to the user or the local fire safety department, and provide visual feedback on its
status.

7. Explain in detail about case study for Smart Cities.


(1) Smart Parking
 An IoT-based smart parking system provides real-time data on parking space availability and
payments, which is a helpful tool for businesses and consumers.
 Smart parking is also known as a connected parking system which is a centralized management
system that allows drivers to use a smartphone app to search for and reserve a parking spot.
Features:
A prototype smart parking system based on wireless sensor network technology has the following
features:
1. Remote parking monitoring,
2. Automated guidance, and
3. Parking reservation mechanism.
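The parking reservation mechanism can be sketched as a simple in-memory spot table; the spot IDs and state names below are illustrative, not from the notes:

```python
# Toy parking-reservation mechanism backed by an in-memory spot table.
spots = {"A1": "free", "A2": "occupied", "A3": "free"}

def reserve(spot_id):
    if spots.get(spot_id) == "free":
        spots[spot_id] = "reserved"
        return True
    return False

print(reserve("A1"))  # True: spot was free, now reserved
print(reserve("A2"))  # False: spot already occupied
```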

(2) Smart Lighting

 Smart lighting systems for roads, parks and buildings can help in saving energy. Smart lights
equipped with sensors can communicate with other lights and exchange information on the sensed
ambient conditions to adapt the lighting.

(3) Smart Roads

 Smart roads equipped with sensors can provide information on driving conditions, travel time
estimates and alerts in case of poor driving conditions, traffic congestion and accidents. Such
information can help drivers travel safely and reduce traffic jams.

 Information sensed from the roads can be communicated via Internet to cloud based applications
and social media, the drivers who subscribe such applications can get the information.

(4) Structural Health Monitoring

 Structural health monitoring systems use a network of sensors to monitor the vibration levels in
structures such as bridges and buildings. The data collected from these sensors is analysed to
assess the health of the structures.
 By analysing the data it is possible to detect cracks and mechanical breakdowns, locate the
damage to a structure and also calculate the remaining lifetime of the structure. Using such
systems, advance warning can be given in case of imminent failure of the structure.


(5) Surveillance

 Surveillance of infrastructure, public transport and events in cities is required to ensure safety and
security. City-wide surveillance infrastructure comprises a large number of distributed,
Internet-connected video surveillance cameras.
 The video feeds from surveillance cameras can then be aggregated in cloud-based scalable storage
solutions.

(6) Emergency Response

 IoT systems can be used for monitoring the critical infrastructure in cities such as buildings, gas and
water pipelines, public transport and power substation systems.
 IoT systems for fire detection and gas and water leakage detection can help in generating alerts and
minimizing their effects on the critical infrastructure.
 IoT systems for critical infrastructure monitoring enable aggregation and sharing of the information
collected from a large number of sensors. In cloud-based architectures, multi-modal information such
as sensor data, audio and video feeds can be analysed in near real time to detect adverse events.

8. Explain in detail about case study for Environment.

(1) Weather Monitoring

 An IoT-based weather monitoring system can collect data from a number of attached sensors (such as
temperature, humidity and pressure sensors) and send the data to cloud-based applications and
storage back-ends.
 The data collected in the cloud can then be analysed and visualized by cloud-based applications.

(2) Air & Noise Pollution Monitoring

 Air and sound pollution is a growing issue these days. It is necessary to monitor air quality
and keep it under control for a better future and healthy living for all.
 An air quality and sound pollution monitoring system allows us to monitor and check
live air quality as well as sound pollution in particular areas through IoT.
 The system uses air sensors to sense the presence of harmful gases/compounds in the air and
constantly transmits this data to the microcontroller. The system also keeps measuring the sound
level and reports it to an online server over IoT.
 The sensors interact with the microcontroller, which processes this data and transmits it over the
Internet. This allows authorities to monitor air pollution in different areas and take action against it.

(3) Forest Fire Detection

 Forest fires can cause damage to natural resources, property and human life. Early detection of
forest fires can help in minimizing the damage.
 IoT-based forest fire detection systems can use a number of monitoring nodes deployed at
different locations in a forest. Each monitoring node collects measurements on ambient conditions
including temperature, humidity and light levels.


 An IoT forest fire detection system can alert the local people based on the level of severity of the
fire hazard near them. Also, IoT systems can be integrated with drones, GPS and satellite services.

(4) River Floods Detection

 River floods can cause extensive damage to natural and human resources and human life. River
floods occur due to continuous rainfall, which causes river levels to rise and flow rates to
increase rapidly.
 Early warnings of floods can be given by monitoring the water level and flow rate.
 An IoT-based river flood monitoring system uses a number of sensor nodes that monitor the water
level using ultrasonic sensors and the flow rate using flow velocity sensors. Data from a number of
such sensor nodes is aggregated in a server or in the cloud.
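An early-warning rule over a series of aggregated water-level readings might look like the sketch below; the 10 cm rise threshold and function name are assumptions for illustration:

```python
# Alert if the water level rises more than rise_threshold cm between consecutive readings.
def flood_alert(levels_cm, rise_threshold=10):
    return any(b - a > rise_threshold for a, b in zip(levels_cm, levels_cm[1:]))

print(flood_alert([100, 104, 120]))  # True: 16 cm jump between readings
print(flood_alert([100, 104, 108]))  # False: gradual rise only
```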

9. Explain in detail about case study for Agriculture.

(1) Smart Irrigation System


 Smart irrigation systems can improve crop yields while saving water. Smart irrigation systems use
IoT devices with soil moisture sensors to determine the amount of moisture in the soil and release
water through the irrigation pipes only when the moisture level goes below a predefined
threshold.
 The system also collects moisture level measurements on a server or in the cloud, where the
collected data can be analysed to plan watering schedules.
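The moisture-threshold rule described above can be sketched as a single predicate; the 30% threshold and the function name are assumptions for illustration:

```python
# Open the irrigation valve only when soil moisture falls below the threshold.
def valve_open(moisture_pct, threshold_pct=30.0):
    return moisture_pct < threshold_pct

print(valve_open(22.5))  # True: soil too dry, irrigate
print(valve_open(41.0))  # False: moist enough, keep valve closed
```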

(2) Green House Control


 Greenhouses are structures with glass or plastic roofs that provide a conducive environment for the
growth of plants. The climatological conditions inside a greenhouse can be monitored and
controlled to provide the best conditions for the growth of plants.
