EIOT Unit 3 & 5 Notes
UNIT -3
PROCESSES AND OPERATING SYSTEMS
Structure of a real – time system – Task Assignment and Scheduling – Multiple Tasks and Multiple
Processes – Multirate Systems – Preemptive real – time Operating systems – Priority based
scheduling – Interprocess Communication Mechanisms – Distributed Embedded Systems – MPSoCs
and Shared Memory Multiprocessors – Design Example – Audio Player, Engine Control Unit and
Video Accelerator.
PART-A
1. What do you mean by process and threads?
A process is a single execution of a program.
Processes that share the same address space are often called threads.
3. Define initiation time and deadline for periodic and aperiodic processes.
The initiation time is the time at which the process goes from the waiting to the ready state. An
aperiodic process is by definition initiated by an event, such as external data arriving or data
computed by another process; its initiation time is generally measured from that event, although
the system may want to make the process ready at some interval after the event itself. A periodic
process typically becomes ready at the start of each period (or when its input data arrive).
A deadline specifies when a computation must be finished. The deadline for an aperiodic process
is generally measured from the initiation time because that is the only reasonable time reference.
The deadline for a periodic process may in general occur at some time other than the end of the
period.
4. Define jitter.
The jitter of a task is the allowable variation in the completion time of the task.
5. Define DAG (Directed Acyclic Graph) and task graph or task set.
The data dependencies must form a directed acyclic graph (DAG)—a cycle in the data
dependencies is difficult to interpret in a periodically executed system.
A set of processes with data dependencies is known as a task graph or task set.
6. Define CPU usage metrics, how it is expressed and give the ranges of CPU ratio.
In addition to the application characteristics, we need to have a basic measure of the efficiency
with which we use the CPU. The simplest and most direct measure is utilization:
U = (CPU time used for useful computations) / (total available CPU time)        (Eq. 3.1)
Utilization is the ratio of the CPU time that is being used for useful computations to the total
available CPU time.
This ratio ranges between 0 and 1, with 1 meaning that all of the available CPU time is being
used for system purposes. Utilization is often expressed as a percentage.
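A quick worked example (the numbers are assumed purely for illustration): if the processes in a
system together need 45 ms of CPU time in every 60 ms period, then
U = 45 ms / 60 ms = 0.75,
that is, the CPU is 75% utilized.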
7. With a neat sketch, describe the possible transitions between states available to a process.
A process is in one of three basic scheduling states: waiting, ready, and executing. A process goes
into the waiting state when it needs data that it has not yet received or when it has finished all its
work for the current period. A process goes into the ready state when it receives its required data
or when it enters a new period. A process can go into the executing state only when it has all its
data, is ready to run, and the scheduler selects it as the next process to run.
Race condition
Consider the case in which an I/O device has a flag that must be tested and modified by a
process.
Problems can arise when other processes may also want to access the device.
If combinations of events from the two tasks operate on the device in the wrong order, we may
create a critical timing race or race condition that causes erroneous operation.
For example:
1. Task 1 reads the flag location and sees that it is 0.
2. Task 2 reads the flag location and sees that it is 0.
3. Task 1 sets the flag location to 1 and writes data to the I/O device’s data register.
4. Task 2 also sets the flag to 1 and writes its own data to the device data register, overwriting the
data from task 1.
In this case, both tasks thought they were able to write to the device, but task 1's write
was never completed because it was overwritten by task 2.
Critical sections
To prevent this type of problem we need to control the order in which some operations occur.
For example, we need to be sure that a task finishes an I/O operation before allowing another
task to start its own operation on that I/O device.
We do so by enclosing sensitive sections of code in a critical section that executes without
interruption.
14.How will you call and release the critical section using Semaphores?
The semaphore is used to guard a resource.
We start a critical section by calling a semaphore function that does not return until the resource
is available.
When we are done with the resource we use another semaphore function to release it. The
semaphore names are, by tradition, P() to gain access to the protected resource and V() to
release it.
/* some nonprotected operations here */
P(); /* wait for semaphore */
/* do protected work here */
V(); /* release semaphore */
This form of semaphore assumes that all system resources are guarded by the same P()/V() pair.
15. What is the function of Test-and-set?
To implement P() and V(), the microprocessor bus must support an atomic read/write operation,
which is available on a number of microprocessors.
The test-and-set allows us to implement semaphores. The P() operation uses a test-and-set to
repeatedly test a location that holds a lock on the memory block. The P() operation does not exit
until the lock is available; once it is available, the test-and-set automatically sets the lock. Once
past the P() operation, the process can work on the protected memory block.
The V() operation resets the lock, allowing other processes access to the region by using the P()
function.
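A minimal sketch of how P() and V() can be built on a test-and-set primitive, here using the C11
atomic_flag type (the variable and function names are assumptions for illustration, not the notes'
own code):

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;     /* clear = resource free */

void P(void)
{
    /* test-and-set returns the previous value; spin until we saw "free" */
    while (atomic_flag_test_and_set(&lock))
        ;                                       /* busy-wait for the lock */
}

void V(void)
{
    atomic_flag_clear(&lock);                   /* reset the lock */
}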
16.Define priority inversion and priority inheritance.
Shared resources cause a new and subtle scheduling problem: a low-priority process blocks
execution of a higher-priority process by keeping hold of its resource, a phenomenon known as
priority inversion.
Priority inheritance
The most common method for dealing with priority inversion is priority inheritance: promote the
priority of any process when it requests a resource from the operating system. The priority of the
process temporarily becomes higher than that of another process that may use the resource. This
ensures that the process will continue executing once it has the resource so that it can finish its
work with the resource, return it to the operating system, and allow other processes to use it. Once
the process is finished with the resource, its priority is demoted to its normal value.
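As a concrete illustration (a sketch, not part of the notes), POSIX threads allow a mutex to be
created with the priority-inheritance protocol, assuming the platform supports
_POSIX_THREAD_PRIO_INHERIT:

#include <pthread.h>

pthread_mutex_t resource_lock;

void init_pi_mutex(void)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    /* a thread holding resource_lock temporarily inherits the priority of
       the highest-priority thread blocked on it */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&resource_lock, &attr);
    pthread_mutexattr_destroy(&attr);
}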
17. Define Earliest-deadline-first scheduling (EDF).
Earliest deadline first (EDF) is another well-known scheduling policy that was also studied by Liu
and Layland. It is a dynamic priority scheme—it changes process priorities during execution based
on initiation times. As a result, it can achieve higher CPU utilizations than RMS.
18. What do you know about RMS versus EDF?
EDF can extract higher utilization out of the CPU, but it may be difficult to diagnose the possibility
of an imminent overload. Because the scheduler does take some overhead to make scheduling
decisions, a factor that is ignored in the schedulability analysis of both EDF and RMS, running a
scheduler at very high utilizations is somewhat problematic. RMS achieves lower CPU utilization but
makes it easier to ensure that all deadlines will be satisfied. In some applications, it may be acceptable
for some processes to occasionally miss deadlines.
19.Define blocking and nonblocking communication.
In general, a process can send a communication in one of two ways:
blocking or nonblocking.
After sending a blocking communication, the process goes into the waiting state until it receives
a response.
Nonblocking communication allows the process to continue execution after sending the
communication. Both types of communication are useful.
20.What are two major styles of interprocess communication and give brief note about
it?
There are two major styles of interprocess communication: shared memory and message passing.
In shared memory communication, the communicating components (for example, a CPU and an
I/O device) read and write a shared memory location. In message passing, each communicating
entity has its own message send/receive unit, and the message is stored at the endpoints rather
than in the communication link.
Power-down trade-offs
Going into a low-power mode takes time; generally, the more that is shut off, the longer the
delay incurred during restart. Because power-down and power-up are not free, modes should be
changed carefully.
Determining when to switch into and out of a power-up mode requires an analysis of the overall
system activity.
Avoiding a power-down mode can cost unnecessary power.
Powering down too soon can cause severe performance penalties.
23.Draw the architecture of power managed systems and write the operations of several
elements of the total managed system.
The several elements of the total managed system:
The service provider is the machine whose power is being managed;
The service requestor is the machine or person making requests of that power-managed system;
A queue is used to hold pending requests (e.g., while waiting for the service provider to power
up); and
The power manager is responsible for sending power management commands to the provider.
The power manager can observe the behaviour of the requestor, provider, and queue, as shown
in fig 3.4.
25.What are the basic global power states of Advanced Configuration and Power
Interface (ACPI)?
ACPI supports the following five basic global power states:
• G3, the mechanical off state, in which the system consumes no power.
• G2, the soft off state, which requires a full operating system reboot to restore the machine to
working condition.
• G1, the sleeping state, in which the system appears to be off and the time required to return to
working condition is inversely proportional to power consumption. This state has four substates:
• S1, a low wake-up latency state with no loss of system context;
• S2, a low wake-up latency state with a loss of CPU and system cache state;
• S3, a low wake-up latency state in which all system state except for main memory is lost; and
• S4, the lowest-power sleeping state, in which all devices are turned off.
• G0, the working state, in which the system is fully usable.
• The legacy state, in which the system does not comply with ACPI.
In this code, the O_CREAT flag to mq_open () causes it to create the named queue if it doesn’t
yet exist and just opens the queue for the process if it does already exist.
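A minimal sketch of the kind of code being described; the queue name "/myq", its attributes, and
the message text are assumptions for illustration:

#include <fcntl.h>
#include <mqueue.h>
#include <string.h>

void send_message(void)
{
    struct mq_attr attr;
    attr.mq_flags   = 0;
    attr.mq_maxmsg  = 10;     /* maximum number of queued messages */
    attr.mq_msgsize = 64;     /* maximum size of one message, in bytes */
    attr.mq_curmsgs = 0;

    /* O_CREAT: create the named queue if it does not yet exist,
       otherwise just open the existing queue */
    mqd_t q = mq_open("/myq", O_CREAT | O_WRONLY, 0666, &attr);

    const char msg[] = "hello";
    mq_send(q, msg, strlen(msg) + 1, 0);   /* last argument is the priority */
    mq_close(q);
}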
29. What are the strategies used for power optimization in multiprocessing?
The RTOS and system architecture can use static and dynamic power management mechanisms
to help manage the system's power consumption. A power management policy is a strategy for
determining when to perform certain power management operations.
30. Illustrate the interconnect networks developed for distributed embedded systems.
Network abstractions
CAN bus
I2C bus
Ethernet
Internet
32. How does priority scheduling improve multitask execution? What is the concept of
multitasking? What does it signify?
There is at most one process executing on the CPU at any time. Any process that could execute
is in the ready state.
The OS chooses among the ready processes to select the next executing process. A process may
not, however, always be ready to run.
For instance, a process may be waiting for data from an I/O device or another process, or it may
be set to run from a timer that has not yet expired.
Such processes are in the waiting state. A process goes into the waiting state when it needs
data that it has not yet received or when it has finished all its work for the current period.
A process goes into the ready state when it receives its required data and when it enters a new
period.
A process can go into the executing state only when it has all its data, is ready to run, and the
scheduler selects the process as the next process to run.
Each process has a fixed priority that does not vary during the course of execution
The ready process with the highest priority (with 1 as the highest priority of all) is selected for
execution.
A process continues execution until it completes or it is preempted by a higher-priority process.
33. What do you mean by CRC cards? (Or) Write the special characteristics of CRC
cards.
The CRC card methodology is a well-known and useful way to help analyze a system’s structure. It
is particularly well suited to object-oriented design because it encourages the encapsulation of data
and functions.
The acronym CRC stands for the following three major items that the methodology tries to
identify:
• Classes define the logical groupings of data and functionality.
• Responsibilities describe what the classes do.
• Collaborators are the other classes with which a given class works
35. What are the main functions performed by video accelerator? (or)What is the
need for video Accelerator?
It computes motion vectors using a full-search algorithm.
36. Give one challenge in developing of codes for MPSoCs.
Challenges in programming for multicore systems include dividing activities, balance, data splitting,
data dependency, and testing and debugging.
37. List two special functional units of embedded processor used for audio player design.
The two main processor architectures used for audio processing are fixed-point and floating-point.
Fixed-point processors are designed for integer and fixed-point (fractional) values.
38. What is meant by requirement analysis of doing memory scaling for a video
accelerator?
Requirements analysis for memory scaling enables new DRAM architectures, functions, interfaces,
and better integration.
39. What is the difference between the ready and waiting states of process scheduling?
The possible states of a process are: new, running, waiting, ready, and terminated. The process is
created while in the new state. In the running or waiting state, the process is executing or waiting
for an event to occur, respectively. The ready state occurs when the process is ready and waiting
to be assigned to a processor and should not be confused with the waiting state mentioned earlier.
After the process has finished executing its code, it enters the terminated state.
40. What do you mean by Accelerators in embedded multiprocessor?
Accelerators can provide large performance increases for applications with computational
kernels that spend a great deal of time in a small section of code.
Accelerators can also provide critical speedups for low-latency I/O functions.
41. Define multiprocessor
A multiprocessor is, in general, any computer system with two or more processors coupled
together.
We use the term processing element (PE) to mean any unit responsible for computation,
whether it is programmable or not.
We use the term network (or interconnection network) to describe the interconnections between
the processing elements.
PART B
1.Explain in detail about multiple tasks and multiple processes with an example (OR)
summaries services of operating system in handling multiple tasks and multiple
processes.
i. Tasks and processes
Many (if not most) embedded computing systems do more than one thing—that is, the
environment can cause mode changes that in turn cause the embedded system to behave quite
differently.
These different tasks are part of the system’s functionality, but that application-level organization
of functionality is often reflected in the structure of the program as well.
A process is a single execution of a program.
Processes that share the same address space are often called threads.
As shown in figure 3.5, this device is connected to serial ports on both ends.
2.Multirate systems
Multirate embedded computing systems are very common, including automobile engines, printers,
and cell phones.
In all these systems, certain operations must be executed periodically, and each operation is
executed at its own rate shown in fig 3.6.
All processes must finish before the end of the period. The data dependencies must form a
directed acyclic graph (DAG)—a cycle in the data dependencies is difficult to interpret in a
periodically executed system, as shown in fig 3.9.
A set of processes with data dependencies is known as a task graph or task set.
Utilization is the ratio of the CPU time that is being used for useful computations to the total
available CPU time. This ratio ranges between 0 and 1, with 1 meaning that all of the available CPU
time is being used for system purposes. Utilization is often expressed as a percentage, as seen
in fig 3.10.
Process state and scheduling
The first job of the operating system is to determine the process that runs next. The work of
choosing the order of running processes is known as scheduling. The operating system considers
a process to be in one of three basic scheduling states:
Waiting,
Ready, and
Executing.
There is at most one process executing on the CPU at any time.
2. Explain in detail how you will solve the fundamental problem of a cooperative
multitasking system using Preemptive real-time operating systems. (or) Explain how
multiple process are handled by Preemptive real-time operating systems. (or) Discuss
why preemptive scheduling is preferred in real time operating systems.
A preemptive real-time operating system (RTOS) solves the fundamental problems of a
cooperative multitasking system.
It executes processes based upon timing requirements provided by the system designer. The
most reliable way to meet timing requirements accurately is to build a preemptive operating
system and to use priorities to control what process runs at any given time.
Two basic concepts
Preemption
Figure 3.12 shows an example of preemptive execution of an operating system. We want to
share the CPU across two processes.
The kernel is the part of the operating system that determines what process is running. The
kernel is activated periodically by the timer.
The length of the timer period is known as the time quantum because it is the smallest
increment in which we can control CPU activity. The kernel determines what process will run next
and causes that process to run, as shown in fig 3.12. On the next timer interrupt, the kernel
may pick the same process or another process to run.
o vTaskIncrementTick() updates the time and vTaskSwitchContext() chooses a new task.
o portRESTORE_CONTEXT() swaps in the new context.
3. How will you allocate resources in the computing system among programs based on
the request using Priority-based scheduling? Explain with an example. (or) Explain
scheduling polices with an example.
Subtopics
Round-robin scheduling
Priority-Driven Scheduling
Rate-monotonic scheduling
Earliest-Deadline-First Scheduling.
Round-robin scheduling
Round-robin scheduling provides a form of fairness in that all processes get a chance to execute.
However, it does not guarantee the completion time of any task; as the number of processes
increases, the response time of all the processes increases.
Real-time systems, in contrast, require their notion of fairness to include timeliness and
satisfaction of deadlines.
Process priorities
A common way to choose the next executing process in an RTOS is based on process priorities.
Each process is assigned a priority, an integer-valued number. The next process to be chosen to
execute is the process in the set of ready processes that has the highest-valued priority.
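A tiny sketch of the selection rule just described; the process-table layout and the convention that
a larger number means higher priority are assumptions for illustration:

struct process {
    int priority;    /* larger value = higher priority in this sketch */
    int ready;       /* nonzero if the process is in the ready state  */
};

/* return the index of the highest-priority ready process, or -1 if none */
int pick_next(const struct process *p, int nprocs)
{
    int best = -1;
    for (int i = 0; i < nprocs; i++) {
        if (p[i].ready && (best < 0 || p[i].priority > p[best].priority))
            best = i;
    }
    return best;
}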
Priority-Driven Scheduling
For this example, we will adopt the following simple rules.
Each process has a fixed priority that does not vary during the course of execution.
The ready process with the highest priority (with 1 as the highest priority of all) is selected for
execution.
A process continues execution until it completes or it is preempted by a higher-priority process.
Let’s define a simple system with three processes as seen below.
Once we know the process properties and the environment, we can use the priorities to
determine which process is running throughout the complete execution of the system
CPU utilization
The bad news is that, although RMS is the optimal static-priority schedule, it does not allow the
system to use 100% of the available CPU cycles. The total CPU utilization for a set of n tasks is
U = c1/T1 + c2/T2 + ... + cn/Tn <= n(2^(1/n) - 1),
where ci is the computation time and Ti the period of task i. As n grows large, this bound
approaches ln 2, or about 69%.
Shared resources
A process may need to do more than read and write values memory. For example, It may need
to communicate with an I/O device.
And it may use shared memory locations to communicate with other processes.
Race condition
Consider the case in which an I/O device has a flag that must be tested and modified by a process.
Problems can arise when other processes may also want to access the device. If combinations of
events from the two tasks operate on the device in the wrong order, we may create a critical
timing race or race condition that causes erroneous operation. For example:
1. Task 1 reads the flag location and sees that it is 0.
2. Task 2 reads the flag location and sees that it is 0.
3. Task 1 sets the flag location to 1 and writes data to the I/O device's data register.
4. Task 2 also sets the flag to 1 and writes its own data to the device data register, overwriting the
data from task 1. In this case, both tasks thought they were able to write to the device, but
task 1's write was never completed because it was overwritten by task 2.
Critical sections
To prevent this type of problem we need to control the order in which some operations occur. For
example, we need to be sure that a task finishes an I/O operation before allowing another task to
start its own operation on that I/O device. We do so by enclosing sensitive sections of code in a
critical section that executes without interruption.
Semaphores
The semaphore is used to guard a resource. We start a critical section by calling a semaphore
function that does not return until the resource is available.
When we are done with the resource we use another semaphore function to release it.
The semaphore names are, by tradition, P() to gain access to the protected resource
and V() to release it.
P(); /* wait for semaphore */
Table 3.2 - Earliest-Deadline-First Scheduling
The least-common multiple of the periods is 60, and the utilization is 1/3 + 1/4 + 2/5 = 0.9833333.
This utilization is too high for RMS, but it can be handled with an earliest-deadline-first schedule.
Here is the schedule:
Comparison between RMS & EDF (RMS versus EDF)
EDF can extract higher utilization out of the CPU, but it may be difficult to diagnose the
possibility of an imminent overload.
Because the scheduler does take some overhead to make scheduling decisions, a factor that is
ignored in the schedulability analysis of both EDF and RMS, running a scheduler at very high
utilizations is somewhat problematic, as can be seen in fig 3.16.
RMS achieves lower CPU utilization but makes it easier to ensure that all deadlines will be satisfied.
In some applications, it may be acceptable for some processes to occasionally miss deadlines.
Rate-monotonic scheduling assumes that there are no data dependencies between processes.
Data Dependencies and Scheduling:
Data dependencies imply that certain combinations of processes can never occur as shown in
fig.3.18. Consider the simple example below.
Because we know that some combinations of processes cannot be ready at the same time, we
know that our worst-case CPU requirements are less than would be required if all processes could
be ready simultaneously.
Scheduling and Context Switching Overhead
Appearing below is a set of processes and their characteristics.
4. Explain in detail how the processes communicate with each other? Or explain in detail
about Inter process communication mechanisms with neat sketch(or) compare the
principle merits and limitations of Inter process communication (or) Demonstrate about
Inter process communication mechanisms.
Processes often need to communicate with each other. Inter process communication mechanisms
are provided by the operating system as part of the process abstraction.
In general, a process can send a communication in one of two ways: blocking or non-blocking.
After sending a blocking communication, the process goes into the waiting state until it receives
a response. Non-blocking communication allows the process to continue execution after sending
the communication. Both types of communication are useful.
There are two major styles of Inter process communication: shared memory and message
passing.
Shared memory communication
Figure 3.20 illustrates how shared memory communication works in a bus-based
system. Two components, such as a CPU and an I/O device, communicate through a shared
memory location.
Message passing
Message passing communication complements the shared memory model as shown in Figure 3.21,
each communicating entity has its own message send/receive unit. The message is not stored on
the communications link, but rather at the senders/receivers at the endpoints. In contrast, shared
memory communication can be seen as a memory block used as a communication device, in which
all the data are stored in the communication link/memory.
A UML signal is actually a generalization of the Unix signal. While a Unix signal carries no
parameters other than a condition code, a UML signal is an object. As such, it can carry parameters
as object attributes. Figure 3.22 shows the use of a signal in UML. The sig behavior() behavior of
the class is responsible for throwing the signal, as indicated by <<send>>. The signal object is
indicated by the <<signal>>stereotype.
Mailboxes:
The mailbox is a simple mechanism for asynchronous communication. Some architectures define
mailbox registers. These mailboxes have a fixed number of bits and can be used for small
messages.
In order for the mailbox to be most useful, we want it to contain two items: the message itself and a
mail ready flag. The flag is true when a message has been put into the mailbox and cleared when
the message is removed.
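A minimal C sketch of the mailbox idea; the names and the one-word message size are assumptions,
and a real implementation would need atomic access or a hardware mailbox register:

#include <stdbool.h>
#include <stdint.h>

struct mailbox {
    uint32_t message;        /* the message itself (one word here)        */
    volatile bool ready;     /* true when a message is waiting to be read */
};

/* post: write the message, then raise the mail-ready flag */
void mbox_post(struct mailbox *m, uint32_t msg)
{
    m->message = msg;
    m->ready = true;
}

/* pickup: wait for a message, read it, then clear the flag */
uint32_t mbox_pickup(struct mailbox *m)
{
    while (!m->ready)
        ;                    /* busy-wait until mail arrives */
    uint32_t msg = m->message;
    m->ready = false;
    return msg;
}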
Evaluating operating system performance
ILTiming, an instrumentation routine in the kernel, measures both interrupt service routine and
interrupt service thread latency;
OSBench measures the timing of operating system tasks such as critical section access, signals, and
so on; and KernelTracker provides a graphical user interface for RTOS events.
Caches and RTOS Performance
Each process in the shared section of the cache is modelled by a binary variable: 1 if present in the
cache and 0 if not. Each process is also characterized by three total execution times: assuming no
caching, with typical caching, and with all code always resident in the cache.
Consider a system containing the following three processes:
5.Write short note on power optimization strategies for processes in Real time operating
system environment.
The RTOS and system architecture can use static and dynamic power management mechanisms
to help manage the system’s power consumption.
A power management policy is a strategy for determining when to perform certain power
management operations.
However, the overall strategy embodied in the policy should be designed based on the
characteristics of the static and dynamic power management mechanisms.
Power-down trade-offs.
Avoiding a power-down mode can cost unnecessary power.
Powering down too soon can cause severe performance penalties.
Predictive power management
A more sophisticated technique is predictive shutdown. The goal is to predict when the next
request will be made and to start the system just before that time, saving the requestor the start-
up time. In general, predictive shutdown techniques are probabilistic—they make guesses about
activity patterns based on a probabilistic model of expected behavior. This can cause two types of
problems:
• The requestor may have to wait for an activity period. In the worst case, the requestor may not
make a deadline due to the delay incurred by system start-up, as shown in fig 3.27.
• The system may restart itself when no activity is imminent. As a result, the system will waste
power.
Advanced Configuration and Power Interface (ACPI)
The Advanced Configuration and Power Interface (ACPI) is an open industry standard for power
management services. It supports five basic global power states:
• G3, the mechanical off state, in which the system consumes no power.
• G2, the soft off state, which requires a full operating system reboot to restore the machine to
working condition.
• G1, the sleeping state, in which the system appears to be off and the time required to return to
working condition is inversely proportional to power consumption. This state has four substates:
• S1, a low wake-up latency state with no loss of system context;
• S2, a low wake-up latency state with a loss of CPU and system cache state;
• S3, a low wake-up latency state in which all system state except for main memory is lost; and
• S4, the lowest-power sleeping state, in which all devices are turned off.
• G0, the working state, in which the system is fully usable.
• The legacy state, in which the system does not comply with ACPI.
Fig 3.29 - The Advanced Configuration and Power Interface and its relationship to a
Complete System
The power manager typically includes an observer, which receives messages through the ACPI
interface that describe the system behavior as shown in fig 3.29. It also includes a decision module
that determines power management actions based on those observations.
6.What is Real time operating system? Give examples for real-time operating systems
(POSIX or Linux or UNIX and Windows CE). Explain them in detail. (or)Discuss features
and services of WINDOWS CE Real time operating system.
A real-time operating system (RTOS) is an operating system (OS) intended to serve real-
time applications, processing data as it comes in, typically without buffering delays.
POSIX
POSIX is a version of the Unix operating system created by a standards organization.
POSIX-compliant operating systems are source-code compatible—an application can be compiled
and run without modification on a new POSIX platform assuming that the application uses only
POSIX-standard functions.
While Unix was not originally designed as a real-time operating system, POSIX has been
extended to support real-time requirements. Many RTOSs are POSIX-compliant and it serves as
a good model for basic RTOS techniques.
The POSIX standard has many options; particular implementations do not have to support all
options.
The existence of features is determined by C preprocessor variables; for example, the FOO
option would be available if the _POSIX_FOO preprocessor variable were defined.
All these options are defined in the system include file unistd.h.
Linux
The Linux operating system has become increasingly popular as a platform for embedded
computing. Linux is a POSIX-compliant operating system that is available as open source.
However, Linux was not originally designed for real-time operation.
Some versions of Linux may exhibit long interrupt latencies, primarily due to large critical
sections in the kernel that delay interrupt processing.
Two methods have been proposed to improve interrupt latency. A dual-kernel approach uses a
specialized kernel, the co-kernel, for real-time processes and the standard kernel for non-real-
time processes. All interrupts must go through the co-kernel to ensure that real-time operations
are predictable. The other method is a kernel patch that provides priority inheritance to reduce
the latency of many kernel operations. These features are enabled using the PREEMPT_RT mode.
Processes in POSIX
In POSIX, a new process is created by making a copy of an existing process. The copying process
creates two different processes both running the same code. The complication comes in ensuring
that one process runs the code intended for the new process while the other process continues the
work of the old process.
A process makes a copy of itself by calling the fork() function. That function causes the operating
system to create a new process (the child process) which is a nearly exact copy of the process
that called fork() (the parent process).
They both share the same code and the same data values, with one exception, the return value of
fork(): the parent process is returned the process ID number of the child process, while the child
process gets a return value of 0. We can therefore test the return value of fork() to determine
which process is the child.
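A short sketch of the pattern being described; the child program name "child_prog" and its
argument list are assumptions for illustration:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

void start_child(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* child process: overlay ourselves with the child's program */
        char *args[] = { "child_prog", NULL };
        execv("child_prog", args);
        perror("execv");          /* only reached if execv() fails */
        exit(1);
    } else if (pid > 0) {
        /* parent process: pid holds the child's process ID */
        printf("started child %d\n", (int)pid);
    } else {
        perror("fork");           /* fork() itself failed */
    }
}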
The execv() function takes as arguments the name of the file that holds the child's code and the
array of arguments. It overlays the process with the new code and starts executing it from the
main() function. In the absence of an error, execv() should never return. The code that follows
the call, perror() and exit(), takes care of the case where execv() fails and returns to the
parent process. The exit() function is a C function that is used to leave a process; it relies on an
underlying POSIX function called _exit().
The parent process should use one of the POSIX wait functions before calling exit() for itself. The
wait functions not only return the child process’s status, in many implementations of POSIX they
make sure that the child’s resources (namely memory) are freed. So we can extend our code as
follows:
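A sketch of that extension; the names parent_stuff() and child_status follow the description below
and are assumptions for illustration:

#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>

extern void parent_stuff(void);   /* performs the work of the parent */

void run_parent(void)
{
    int child_status;

    parent_stuff();               /* do the parent's own work           */
    wait(&child_status);          /* wait for the child, collect its
                                     status, and free its resources     */
    exit(0);
}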
The parent_stuff() function performs the work of the parent process. The wait() function waits for
the child process; it sets the integer status variable (child_status in the sketch above) to the
return value of the child process.
The POSIX process model
POSIX does not implement lightweight processes. Each POSIX process runs in its own address
space and cannot directly access the data or code of other processes.
Real-time scheduling in POSIX
POSIX supports real-time scheduling in the _POSIX_PRIORITY_SCHEDULING resource. POSIX
supports rate-monotonic scheduling in the SCHED_FIFO scheduling policy. The name of this
policy is somewhat misleading: it is a strict priority-based scheme, and FIFO refers only to the
order of processes within a single priority level.
Whenever a process changes its priority, it is put at the back of the queue for that priority level.
A process can also explicitly move itself to the end of its priority queue with a call to the
sched_yield () function.
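A small sketch of putting the calling process under the SCHED_FIFO policy; the priority value 50 is
an assumption and must lie between sched_get_priority_min() and sched_get_priority_max() for
SCHED_FIFO:

#include <sched.h>
#include <stdio.h>

void go_realtime(void)
{
    struct sched_param param;
    param.sched_priority = 50;            /* assumed priority value */

    /* pid 0 means "the calling process"; usually requires root privileges */
    if (sched_setscheduler(0, SCHED_FIFO, &param) != 0)
        perror("sched_setscheduler");
}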
SCHED_RR is a combination of real-time and interactive scheduling techniques: within a priority
level, the processes are time sliced.
The SCHED_OTHER is defined to allow non-real-time processes to intermix with real-time
processes.
POSIX semaphores
POSIX supports semaphores but it also supports a direct shared memory mechanism. POSIX
supports counting semaphores in the _POSIX_SEMAPHORES option.
The POSIX names for P and V are sem_wait () and sem_post () respectively.
POSIX also provides a sem_trywait () function that tests the semaphore but does not block.
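A brief sketch of guarding a critical section with a named POSIX semaphore; the semaphore name
"/mysem" is an assumption for illustration:

#include <fcntl.h>
#include <semaphore.h>

void protected_work(void)
{
    /* create the named semaphore if needed, with an initial value of 1 */
    sem_t *s = sem_open("/mysem", O_CREAT, 0666, 1);

    sem_wait(s);          /* P(): block until the resource is available */
    /* ... do protected work here ... */
    sem_post(s);          /* V(): release the resource */

    sem_close(s);
}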
POSIX shared memory is supported under the _POSIX_SHARED_MEMORY_OBJECTS option. The
shared memory functions create blocks of memory that can be used by several processes.
The shm_open() function opens a shared memory object.
POSIX pipes
The pipe is very familiar to Unix users from its shell syntax: % foo file1 | baz > file2
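A condensed sketch of the same idea in C using the standard pipe()/fork() pattern; the message
text is an assumption for illustration:

#include <stdio.h>
#include <unistd.h>

void pipe_example(void)
{
    int fds[2];
    char buf[32];

    pipe(fds);                    /* fds[0] = read end, fds[1] = write end */
    if (fork() == 0) {
        /* child: read what the parent writes into the pipe */
        read(fds[0], buf, sizeof(buf));
        printf("child got: %s\n", buf);
        _exit(0);
    } else {
        /* parent: send a message down the pipe */
        write(fds[1], "hello", 6);
    }
}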
7. Discuss in detail about MPSoCs and shared memory multiprocessors. (or)Explain the
operation and advantages of CPU Accelerated system (or)Illustrate Why MPSOCs are
preferred over general purpose microprocessor.
Heterogeneous shared memory multiprocessors
Accelerators
Accelerator performance analysis
Scheduling and allocation
Heterogeneous shared memory multiprocessors
Many high-performance embedded platforms are heterogeneous multiprocessors. Different
processing elements perform different functions.
The PEs may be programmable processors with different instruction sets or specialized
accelerators that provide little or no programmability.
In both cases, the motivation for using different types of PEs is efficiency. Processors with
different instruction sets can perform different tasks faster and using less energy.
Accelerators provide even faster and lower-power operation for a narrow range of functions.
Accelerators
One important category of processing element for embedded multiprocessors is the accelerator.
Accelerators can provide large performance increases for applications with computational kernels
that spend a great deal of time in a small section of code. Accelerators can also provide critical
speedups for low-latency I/O functions.
The design of accelerated systems is one example of hardware/software codesign—the
simultaneous design of hardware and software to meet system objectives.
As illustrated in figure 3.35, a CPU accelerator is attached to the CPU bus. The CPU is often
called the host.
The CPU talks to the accelerator through data and control registers in the accelerator. These
registers allow the CPU to monitor the accelerator’s operation and to give the accelerator
commands.
Fig.3.35 - CPU Accelerator in a System
Accelerator performance analysis
The total time required to execute the accelerated computation is
t_accel = t_in + t_x + t_out,
where t_x is the execution time of the accelerator assuming all data are available, and t_in and t_out
are the times required for reading and writing the required variables, respectively (fig 3.38 reflects
the streaming-data case).
System speedup
The speedup obtained by moving the computation to the accelerator is
S = n (t_CPU - t_accel),
where t_CPU is the execution time of the equivalent function in software on the CPU and n is the
number of times the function will be executed (see fig 3.39).
8. Discuss case study to design of a portable audio player or MP3 player that
decompresses music files as it plays.
Subtopics
Requirement
Specification
Architecture
Components
The incoming bit stream has been encoded using a Huffman style code, which must be decoded.
The audio data itself is applied to a reconstruction filter, along with a few other parameters.
Perceptual coding - The coder eliminates certain features of the audio stream so that the result
can be encoded in fewer bits. It tries to eliminate features that are not easily perceived by the
human auditory system, as shown in fig 3.42.
Masking is one perceptual phenomenon that is exploited by perceptual coding. One tone can be
masked by another if the tones are sufficiently close in frequency. Some audio features can also
be masked if they occur too close in time after another feature.
Specification
Table 3.5 shows the major classes in the audio player. The File ID class is an abstraction of
a file in the flash file system. The controller class provides the method that operates the player.
Figure 3.44 shows a state diagram for file display/selection. This specification assumes
that all files are in the root directory and that all files are playable audio.
State Diagram
The 32-bit RISC processor is used to perform system control and audio decoding.
The 16-bit DSP is used to perform audio effects such as equalization.
Only after the components are built do we have the satisfaction of putting them together and
seeing a working system. Of course, this phase usually consists of a lot more than just plugging
everything together and standing back. Bugs are typically found during system integration, and
good planning can help us find the bugs quickly.
9. Write in detail about the embedded concepts in the design of simple Engine Control
Unit (ECU).
This unit controls the operation of a fuel-injected engine based on several measurements taken
from the running engine.
Subtopics
Requirement (Table)
Specification
Architecture
Components
System integration
Figure 3.47 shows the block diagram of engine control
Specification
We will use ΔNE and ΔT to represent the change in RPM and throttle position, respectively. Our
controller computes two output signals, injector pulse width PW and spark advance angle S [Toy].
It first computes initial values for these variables:
These initial values are then corrected using readings from other sensors, such as the exhaust oxygen
sensor (OX). The injection duration is increased as the battery voltage (+B) drops.
System architecture
10. Explain the design of a video accelerator that performs block motion estimation.
Subtopics
Requirement (Table)
Specification
Architecture
Components
System integration
MPEG-2 forms the basis for U.S. HDTV broadcasting. This compression uses several component
algorithms together in a feedback loop. The discrete cosine transform (DCT) used in JPEG also
plays a key role in MPEG-2.
As in still image compression, the DCT of a block of pixels is quantized for lossy compression and
then subjected to lossless variable-length coding to further reduce the number of bits required to
represent the block.
Motion-based coding
MPEG uses motion to encode one frame in terms of another. Rather than send each frame
separately, as in motion JPEG, some frames are sent as modified forms of other frames using a
technique known as block motion estimation.
During encoding, the frame is divided into macroblocks. Macroblocks from one frame are
identified in other frames using correlation. The frame can then be encoded using the vector that
describes the motion of the macroblock from one frame to another without explicitly transmitting
all of the pixels.
values of macroblocks. This internal memory saves a great deal of transmission and storage
bandwidth.
Figure 3.53 shows the block motion estimation and fig 3.54 shows the parameters of block
motion of video accelerator.
Fig 3.55 - Classes describing basic data types in the video accelerator
Figure 3.55 defines some classes that describe the basic data types in the system: the motion
vector, the macroblock, and the search area.
Figure 3.56 shows the architecture of video accelerator.
Architecture
Component design
If we want to use a standard FPGA accelerator board to implement the accelerator, we must first
make sure that it provides the proper memory required for M and S.
Designing an FPGA is, for the most part, a straightforward exercise in logic design. Because the
logic for the accelerator is very regular, we can improve the FPGA’s clock rate by properly placing
the logic in the FPGA to reduce wire lengths.
If we are designing our own accelerator board, we have to design both the video accelerator
design proper and the interface to the PCI bus. We can create and exercise the video accelerator
architecture in a hardware description language like VHDL or Verilog and simulate its operation.
System testing
You can use standard video tools to extract a few frames from a digitized video and store them in
JPEG format. Open source for JPEG encoders and decoders is available. These programs can be
modified to read JPEG images and put out pixels in the format required by your accelerator. With
a little more cleverness, the resulting motion vector can be written back onto the image for a
visual confirmation of the result. If you want to be adventurous and try motion estimation on
video, open source MPEG encoders and decoders are also available.
11.With relevant examples, bring out the difference between clock driven scheduling
approach and priority driven scheduling approach.
The real-time task can be scheduled by operating system using various scheduling algorithms.
These scheduling algorithms are classified on the basis of determination of scheduling points.
1. Clock-driven Scheduling :
The scheduling in which the scheduling points are determined by the interrupts received from
a clock is known as Clock-driven Scheduling. In clock-driven scheduling, which task is to be
processed next is determined at the clock interrupt points.
2. Event-driven Scheduling :
The scheduling in which the scheduling points are determined by the occurrence of events,
excluding clock interrupts, is known as Event-driven Scheduling. In event-driven scheduling,
which task is to be processed next is independent of clock interrupt points.
Difference between Clock-driven and Event-driven Scheduling:
Clock-driven: Tasks are scheduled on the basis of interrupts received from the clock.
Event-driven: Tasks are scheduled on the basis of event occurrences, excluding clock interrupts.
12. Explain the concepts of distributed embedded systems (OR)Explain the features and
application of internet enabled embedded systems.(OR)Discuss in detail about the
several interconnected networks used especially for distributed embedded computing
(OR)With a neat diagram, Describe the typical bus transactions on the I2C bus.
Network abstractions
CAN bus
I2C bus
Ethernet
Internet
Network abstractions
The OSI model includes seven levels of abstraction as shown in figure 3.57.
Physical: The physical layer defines the basic properties of the interface between systems,
including the physical connections (plugs and wires).
Data link: The primary purpose of this layer is error detection and control across a single link.
Network: This layer defines the basic end-to-end data transmission service.
Transport: The transport layer defines connection-oriented services that ensure that data are
delivered in the proper order and without errors across multiple links.
Session: A session provides mechanisms for controlling the interaction of end-user services
across a network, such as data grouping and check pointing.
Presentation: This layer defines data exchange formats and provides transformation utilities
to application programs.
Application: The application layer provides the application interface between the network and
end-user programs.
Physical layer
As shown in figure 3.58, each node in the CAN bus has its own electrical drivers and receivers
that connect the node to the bus in wired-AND fashion.
In CAN terminology, a logical 1 on the bus is called recessive and a logical 0 is dominant. The
driving circuits on the bus cause the bus to be pulled down to 0 if any node on the bus pulls
the bus down (making 0 dominant over 1).
When all nodes are transmitting 1s, the bus is said to be in the recessive state; when a node
transmits a 0, the bus is in the dominant state. Data are sent on the network in packets known
as data frames.
CAN is a synchronous bus—all transmitters must send at the same time for bus arbitration to
work. Nodes synchronize themselves to the bus by listening to the bit transitions on the bus. The
first bit of a data frame provides the first synchronization opportunity in a frame. The nodes must
also continue to synchronize themselves against later transitions in each frame.
Data frame
I2C bus
The I2C bus is a well-known bus commonly used to link microcontrollers into systems.
It has even been used for the command interface in an MPEG-2 video chip: while a separate
bus was used for high-speed video data, setup information was transmitted to the on-chip
controller through an I2C bus interface.
Physical layer
I2C is designed to be low cost, easy to implement, and of moderate speed (up to 100 kilobits per
second for the standard bus and up to 400 kbits/sec for the extended bus). As a result, it uses
only two lines: the serial data line (SDL) for data and the serial clock line (SCL), which indicates
when valid data are on the data line.
Figure 3.61 shows the structure of a typical I2C bus system. Every node in the network is
connected to both SCL and SDL. Some nodes may be able to act as bus masters and the bus may
have more than one master. Other nodes may act as slaves that only respond to requests from
masters.
The formats of some typical complete bus transactions are shown in figure 3.63. In the first
example, the master writes two bytes to the addressed slave. In the second, the master requests
a read from a slave. In the third, the master writes one byte to the slave, and then sends another
start to initiate a read from the slave.
Ethernet is particularly useful when PCs are used as platforms, making it possible to use
standard components, and when the network does not have to meet rigorous real-time
requirements.
The physical organization of an Ethernet is very simple, as shown in figure 3.66. The network is
a bus with a single signal path; the Ethernet standard allows for several different
implementations such as twisted pair and coaxial cable.
Many different approaches have been developed to extend Ethernet to real-time
operation; some of these are compatible with the standard while others are not.
Internet
The Internet Protocol (IP) is the fundamental protocol on the Internet.
It provides connectionless, packet-based communication.
Industrial automation has long been a good application area for Internet-based embedded
systems.
Information appliances that use the Internet are rapidly becoming another use of IP in
embedded computing.
Internetworking
IP is not defined over a particular physical implementation—it is an internetworking standard.
The relationship between IP and individual networks is illustrated in Figure 3.63 IP works at the
network layer.
When node A wants to send data to node B, the application’s data pass through several layers
of the protocol stack to get to the Internet Protocol. IP creates packets for routing to the
destination, which are then sent to the data link and physical layers. A node that transmits
data among different types of networks is known as a router.
A separate transport protocol, the User Datagram Protocol (UDP), is used as the basis for the
network management services provided by the Simple Network Management Protocol (SNMP).
UNIT V
IOT PHYSICAL DESIGN
Basic building blocks of an IoT device - Raspberry Pi - Board - Linux on Raspberry Pi - Interfaces -
Programming with Python - Case Studies: Home Automation, Smart Cities, Environment and
Agriculture.
PART - A
An IoT device can consist of several modules based on their functional attributes: a sensing/actuation
module, an analysis and processing module, a communication module, and an application module.
8. What is actuator?
IoT devices can have various types of actuators attached to them that allow them to take actions upon
the physical entities in the vicinity of the device.
PART - B
1. Explain in detail about the basic building blocks of an IOT device with neat diagram.
An IoT system comprises four basic building blocks, as shown in fig 5.1:
(i) Sensors/ Actuators,
(ii) Processors,
(iii) Gateways, and
(iv) Applications
(iv) Applications:
Applications provide a user interface and effective utilization of the data collected. Examples: smart
home apps, security system control apps and industrial control hub apps. An IoT device can consist
of several modules based on their functional attributes, as shown in fig. 5.2:
(i) Sensing/actuation module,
(ii) Analysis & processing module,
(iii) Communication module, and
(iv) Application module.
Installation
Following are the steps for headless installation of Raspberry Pi OS:
(i) Download the Raspberry Pi Imager software by clicking on the download button at
https://www.raspberrypi.org/software/
(ii) After installing Raspberry Pi Imager, open it. The interface of Raspberry Pi Imager is given by
(iii) Click on choose OS and select Raspberry Pi OS (32-bit) from the selection box as shown below
(iv) Attach your micro SD card to computer and then click on Choose SD card button as shown
below and select the SD card.
(v)Now, click on Write button. Raspberry Pi Imager will download the official Raspberry Pi OS
online and then write it on to your SD card.
(vi) After the process of writing Raspberry Pi OS is over, open the SD card in explorer. Create a file
named wpa_supplicant.conf in the SD card root folder, insert the following text (which includes
your Wi-Fi network name and password) into that file, and save it.
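A typical wpa_supplicant.conf for this step looks like the sketch below; the country code, network
name, and password are placeholders that must be replaced with your own values:

country=IN
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YourWiFiName"
    psk="YourWiFiPassword"
}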
The I2C interface pins on Raspberry Pi allow you to connect hardware modules. I2C interface allows
synchronous data transfer with just two pins: SDA (data line) and SCL (clock line) as shown in
fig.5.6.
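A minimal sketch of talking to an I2C device from Linux on the Raspberry Pi through the standard
i2c-dev interface; the slave address 0x48 and the one-byte register read are assumptions for
illustration:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/i2c-dev.h>

void read_i2c_byte(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);   /* I2C bus 1 on the Pi header */
    if (fd < 0) { perror("open"); return; }

    ioctl(fd, I2C_SLAVE, 0x48);            /* select the slave address   */

    unsigned char reg = 0x00, value;
    write(fd, &reg, 1);                    /* set the register pointer   */
    read(fd, &value, 1);                   /* read one byte back         */
    printf("value = 0x%02x\n", value);

    close(fd);
}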
Smart lighting for homes helps in saving energy by adapting the lighting to the ambient conditions
and switching on/off or dimming the lights when needed.
Smart lighting uses IoT-enabled sensors, bulbs, or adapters to allow users to manage their home or
office lighting.
Smart lighting solutions can be controlled through an external device like a smartphone or smart
assistant that can be set to operate on a schedule, or triggered by sound or motion.
Modern homes have a number of appliances such as TVs, refrigerators, music systems,
washers/dryers, etc. Managing and controlling these appliances can be difficult because each
appliance has either its own controls or its own remote control.
Smart appliances make the management easier and also provide status information to the user
remotely.
Any appliance can become smart with wireless connectivity and sensors that allow remote control
or autonomous operation through user input, scheduling, or Artificial Intelligence and Machine
Learning (AI/ML).
Sensors combined with wireless connectivity can provide the end-user with information about the
appliance's usage, temperature, service life, maintenance schedules, or operation anomalies.
Advantages
Smart appliances enable users to connect, control, and monitor their appliances allowing them to
save time, energy, and money.
Additionally, they can remotely monitor appliances to ensure that they are turned off for safety,
even after leaving home.
Home intrusion detection systems use security cameras and sensors such as PIR sensors and door
sensors to detect intrusions and raise alerts, which can be in the form of an SMS or an email sent to
the user.
Advanced systems can even send detailed alerts such as an image grab or a short video clip sent as
an email attachment.
Smoke detectors are installed in homes and buildings to detect smoke, which is typically an early sign
of fire. Smoke detectors use optical detection, ionization, or air-sampling techniques to detect
smoke.
Alerts raised by smoke detectors can be in the form of signals to fire alarm system. Gas detectors
can detect the presence of harmful gases such as carbon monoxide (CO) and Liquid Petroleum Gas
(LPG).
A smart smoke / gas detector can raise alerts in human voice describing where the problem is, send
an SMS or email to the user or the local fire safety department and provide visual feedback on its
status.
Smart lighting systems for roads, parks and buildings can help in saving energy. Smart lights
equipped with sensors can communicate with other lights and exchange information on the sensed
ambient conditions to adapt the lighting.
Smart roads equipped with sensors can provide information on driving conditions, travel time
estimates and alerts in case of poor driving conditions, traffic congestions and accidents. Such
information can help us for safe drive and in reducing traffic jams.
Information sensed from the roads can be communicated via the Internet to cloud-based applications
and social media; drivers who subscribe to such applications can get the information.
Structural health monitoring systems use a network of sensors to monitor the vibration levels in
structures such as bridges and buildings. The data collected from these sensors is analysed to
assess the health of the structures.
By analysing the data it is possible to detect cracks and mechanical breakdowns, locate damage
to a structure, and also calculate the remaining lifetime of the structure. Using such
systems, advance warning can be given in the case of imminent failure of the structure.
(5) Surveillance
Surveillance of infrastructure, public transport and events in cities is required to ensure safety and
security. City-wide surveillance infrastructure comprises a large number of distributed and
Internet-connected video surveillance cameras.
The video feeds from surveillance cameras can then be aggregated in cloud-based scalable storage
solutions.
IoT systems can be used for monitoring critical infrastructure in cities such as buildings, gas and
water pipelines, public transport and power substation systems.
IoT systems for fire detection and gas and water leakage detection can help in generating alerts and
minimizing their effects on the critical infrastructure.
IoT systems for critical infrastructure monitoring enable aggregation and sharing of information
collected from a large number of sensors. In cloud-based architectures, multi-modal information such
as sensor data, audio and video feeds can be analysed in near real-time to detect adverse events.
An IoT-based weather monitoring system can collect data from a number of attached sensors (such as
temperature, humidity, and pressure sensors) and send the data to cloud-based applications and
storage back-ends.
The data collected in the cloud can then be analysed and visualized by cloud-based applications.
Air and sound pollution is a growing issue these days. It is necessary to monitor air quality
and keep it under control for a better future and healthy living for all.
An air quality and sound pollution monitoring system allows us to monitor and check
live air quality as well as sound pollution in particular areas through IoT.
The system uses air sensors to sense the presence of harmful gases/compounds in the air and
constantly transmits this data to the microcontroller. The system also keeps measuring the sound
level and reports it to an online server over IoT.
The sensors interact with the microcontroller, which processes this data and transmits it over the
Internet. This allows authorities to monitor air pollution in different areas and take action against it.
Forest fires can cause damage to natural resources, property and human life. Early detection of
forest fires can help in minimizing the damage.
IoT-based forest fire detection systems can use a number of monitoring nodes deployed at
different locations in a forest. Each monitoring node collects measurements on ambient conditions
including temperature, humidity and light levels.
An IoT forest fire detection system can alert the local people based on the level of severity of the fire
hazard near them. IoT can also be integrated with drones, GPS and satellite services.
River floods can cause extensive damage to natural resources, property and human life. River
floods occur due to continuous rainfall, which causes river levels to rise and flow rates to
increase rapidly.
Early warnings of floods can be given by monitoring the water level and flow rate.
IoT based river flood monitoring system uses a number of sensor nodes that can monitor the water
level using ultrasonic sensors and flow rate using the flow velocity sensors. Data from a number of
such sensor nodes is aggregated in a server or in the cloud.