RTOS

The document discusses real-time systems (RTS) and real-time operating systems (RTOS), defining RTS and categorizing tasks into hard, firm, and soft real-time tasks based on their deadline significance. It outlines typical applications, desirable features, multitasking principles, task states, and kernel functionalities, emphasizing the importance of context switching and scheduling strategies. Additionally, it explores various scheduling methods, including non-preemptive and preemptive kernels, and introduces concepts like round robin and rate monotonic scheduling.


RTS & RTOS

By
Prof. ARN
Embedded S/W industry: The challenge
The Rising Software Intensity: Automobile
RTS Definition
A real-time computer system is a computer system in which the correctness of the system behavior depends not only on the logical results of the computation, but also on the physical instant at which these results are produced.
Applications of RTS
• Plant control
• Control of production processes / industrial automation
• Railway switching systems
• Automotive applications
• Flight control systems
• Environmental acquisition and monitoring
• Telecommunication systems
• Robotics
• Military systems
• Space missions
• Household appliances
• Virtual / Augmented reality
Hard RT vs. Firm vs. Soft RT

[Figure: examples of deadline-miss outcomes — opening an MS Word document late gives a declining outcome (soft RT); reaching the airport after the plane's departure gives a zero outcome (firm RT); late deployment of air bags in a car, or late communication to fire artillery, gives a negative outcome (hard RT).]
• An RT task is called hard
• if missing its deadline may cause catastrophic consequences on the environment under control
• An RT task is called firm
• if missing its deadline makes the result useless, but the miss does not cause serious damage
• An RT task is called soft
• if meeting its deadline is desirable (e.g. for performance reasons), but a miss does not cause serious damage
• An RTOS that is able to handle hard RT tasks is called a hard real-time system

Hard RT vs. Firm vs. Soft RT
Typical Real Time Activities:
• Hard RT Activities:
• Sensory data acquisition
• Detection of critical conditions
• Actuator servoing
• Low-level control of critical system components
• Typical application areas:
• automotive : power-train control, air-bag control, steer by wire, brake by wire
• aircraft : engine control, aerodynamic control
• Typical Firm RT Activities:
• decision support
• value prediction
• Typical application areas:
• Weather forecast
• Decisions on stock exchange orders
• Typical Soft RT Activities:
• command interpreter of user interface
• keyboard handling
• displaying messages on screen
• transmitting streaming data
• Typical application areas:
• communication systems (voice over IP!)
• user interaction
• comfort electronics (body electronics in cars)
Desirable Features of Real-Time Systems
• Timeliness
- OS has to provide kernel mechanisms for
- time management
- handling tasks with explicit time constraints
• Design for peak load
• Predictability
• Fault tolerance
• Maintainability
Foreground/Background Systems (Super-Loops)
• An application consists of an infinite loop that calls functions to
perform the desired operations (background).
• Interrupt Service Routines (ISRs) handle asynchronous events
(foreground).
• Foreground is also called Interrupt level while background is called Task level.
• Critical operations must be performed by the ISRs to ensure that they
are dealt with in a timely fashion.
• The information for a background module made available by an ISR is not processed until the background routine gets its turn to execute.
• The worst-case task-level response time depends on how long the background loop takes to execute.
• Because the execution time of typical code is not constant, the time for successive passes through a portion of the loop is non-deterministic.
• Furthermore, if a code change is made, the timing of the loop is affected.
• Most high volume microcontroller-based applications (e.g.,
microwave ovens, telephones, toys, and so on) are designed as
foreground/background systems.
• Also, in microcontroller-based applications, it may be better (from a
power consumption point of view) to halt the processor and perform
all of the processing in ISRs.
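A minimal super-loop sketch in C may make this concrete. The ISR (foreground) captures the event; the infinite loop (background) processes it when its turn comes. All hardware hooks (read_adc_register, init_hardware, process_sample, update_display, scan_keyboard) are hypothetical stand-ins, not from the slides:

#include <stdint.h>

/* Hypothetical hardware hooks -- stand-ins for real drivers. */
uint16_t read_adc_register(void);
void     init_hardware(void);
void     process_sample(uint16_t v);
void     update_display(void);
void     scan_keyboard(void);

volatile uint8_t  data_ready;  /* set by the ISR (foreground)       */
volatile uint16_t adc_value;   /* latest sample captured by the ISR */

/* Foreground (interrupt level): do only the time-critical work,
   then hand the data to the background loop. */
void ADC_ISR(void)
{
    adc_value  = read_adc_register();
    data_ready = 1;
}

/* Background (task level): the super-loop. The worst-case response
   to data_ready is one full pass through the loop, which is why the
   task-level timing is non-deterministic. */
int main(void)
{
    init_hardware();
    for (;;) {
        if (data_ready) {
            data_ready = 0;
            process_sample(adc_value);
        }
        update_display();
        scan_keyboard();
    }
}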
Design Problem
Draw a flow chart to design software that achieves the following functionality:
1. Generate a sine wave on DAC_OUT with 100 samples per period T, with an expected sine-wave frequency of 10 kHz
2. Read temperature data from AD1.2 at a sampling rate of 1000 samples per second, with a temperature range of 0 to 1023 °C
3. Display the measured temperature after averaging 100 samples
4. Read the set value of temperature from AD1.3, derived from a potentiometer
5. Display the set value on the display
6. The amplitude of the generated sine wave is controlled by:
DAC_OUT = (SET_VALUE - MEASURED_VALUE) * gain + offset
7. Switch off DAC_OUT and turn on a hooter operated by a relay if the panic switch is pressed
8. Restart the system on reset or on power-on reset
9. Reset the watchdog timer every 1000 ms
10. Any other?
Solution
1. Use two interrupts
a. Timer interrupt to send DAC data every 1 µs: low priority
b. External interrupt 0 for the panic switch: highest priority
2. Use two resets
a. Power-on reset
b. Watchdog reset
ISR1_LP (low-priority timer interrupt):
increment the 1 µs tick counter, send out DAC_data, IRET

ISR2_HP (highest-priority panic-switch interrupt):
stop the timer, send out null DAC_data, turn the relay on to start the hooter, go to power-down mode; on RESET, the system starts again
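A hedged C sketch of the two ISRs follows. The helper functions (dac_write, timer_stop, relay_on, enter_power_down) and the sine table are hypothetical placeholders for the actual hardware. Note that 100 samples per period of a 10 kHz sine wave works out to one sample per microsecond, matching the 1 µs timer:

#include <stdint.h>

/* Hypothetical helpers for the actual hardware. */
void dac_write(uint16_t v);
void timer_stop(void);
void relay_on(void);
void enter_power_down(void);

extern const uint16_t sine_table[100]; /* one period, 100 samples */
volatile uint32_t tick_1us;            /* 1 us tick counter       */

/* ISR1 (low priority): the timer fires every 1 us, giving
   100 samples per period of the 10 kHz sine wave. */
void Timer_ISR(void)
{
    static uint8_t idx;
    tick_1us++;
    dac_write(sine_table[idx]);
    idx = (uint8_t)((idx + 1) % 100);
}

/* ISR2 (highest priority): panic switch. */
void Panic_ISR(void)
{
    timer_stop();        /* stop the DAC timer             */
    dac_write(0);        /* send out null DAC data         */
    relay_on();          /* start the hooter via the relay */
    enter_power_down();  /* only RESET restarts the system */
}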
Multitasking
Sharing CPU time amongst competing tasks
• Multitasking is the process of scheduling and switching the CPU among several tasks.
• Multitasking is like "foreground/background" with multiple backgrounds.
• Multitasking maximizes the utilization of the CPU and also provides for modular construction of applications.
• One of the most important aspects of multitasking is that it allows the application programmer to manage the complexity present in real-time applications.
• Application programs are typically easier to design and maintain if multitasking is used.
Process/ Task
• A process is an abstraction of a running program and is the logical unit of work (a task in RTOS-based application design) schedulable by the real-time operating system.
• A process is usually represented by a private data structure that contains at least an identity, priority level, state of execution (e.g., running, ready, or suspended), and resources associated with the process.
• The design of a real-time application involves splitting the work to be done into tasks which are responsible for a portion of the problem.
Task State Diagram
• Each task typically is an infinite loop that can be in any one of five states:
• DORMANT
• READY
• RUNNING / executing
• SUSPENDED (waiting for an event, e.g. ADC data through an ISR)
• TERMINATED

[Figure: task state diagram — a DORMANT task resident in memory becomes READY when required, runs when dispatched, may be suspended waiting for an event or interrupted by an ISR (RET resumes it), and returns to DORMANT when no longer required.]
Task state description
• The DORMANT state corresponds to a task which resides in memory but has not been made available to the multitasking kernel.
• A task is READY when it can execute but its priority is less than that of the currently running task.
• A task is RUNNING when it has control of the CPU.
• A task is SUSPENDED (WAITING FOR AN EVENT) when it requires the occurrence of an event (waiting for an I/O operation to complete, a shared resource not yet available, a timing pulse to occur, time to expire, etc.).
• Finally, a task is INTERRUPTED when an interrupt has occurred and the CPU is in the process of servicing the interrupt.
Task Control Block
• Each task is associated with a private data structure,
called a task control block
• The operating system stores these TCBs in one or
more data structures, typically in a linked list.
• The operating system manages the TCBs by keeping
track of the state or status of each task.
• 1. Executing 2. Ready 3. Suspended 4. Dormant
• When an executing task is completed, it returns to the suspended state
The RTX kernel also allocates the task its own stack. The stack is allocated at runtime, after the TCB has been allocated. The pointer to the stack memory block is then written into the TCB.
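A minimal TCB sketch in C (the field names are illustrative, not the RTX kernel's actual layout):

#include <stdint.h>

typedef enum { DORMANT, READY, RUNNING, SUSPENDED } task_state_t;

typedef struct tcb {
    uint8_t       id;          /* task identity                     */
    uint8_t       priority;    /* scheduling priority               */
    task_state_t  state;       /* executing / ready / suspended...  */
    uint32_t     *sp;          /* saved stack pointer (context)     */
    uint32_t     *stack_base;  /* stack block allocated after TCB   */
    struct tcb   *next;        /* linked list maintained by the OS  */
} tcb_t;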
Stack Management
• The RTX kernel system needs one stack space for the task that is currently in the
RUNNING state:
• Local Stack: stores parameters, automatic variables, and function return addresses.
• On an ARM device, this stack can be anywhere: on-chip or off-chip (on or off the microcontroller).
• However, for performance reasons, it is better to use the on-chip RAM for the local stack.
• When a task switch occurs, the operations performed are:
• the context of the currently running task is stored on the local
stack of the current task
• the stack is switched to that of the next task
• the context of the new task is restored
• the new task starts to run.
• The Local Stack also holds the task context of waiting or ready tasks (see figure on next
slide).
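In C-like pseudocode, reusing the tcb_t sketched earlier, these steps look as follows. A real context switch is CPU-specific assembly; save_context, restore_context, get_sp, and set_sp are hypothetical primitives standing in for it:

#include <stdint.h>

extern tcb_t *current, *next_task;  /* maintained by the scheduler  */

/* Hypothetical CPU primitives (in reality, assembly). */
void      save_context(void);    /* push registers onto current stack */
void      restore_context(void); /* pop registers from current stack  */
uint32_t *get_sp(void);
void      set_sp(uint32_t *sp);

void context_switch(void)
{
    save_context();           /* 1. store the running task's context
                                    on its own local stack            */
    current->sp = get_sp();   /* 2. remember its stack pointer        */
    current = next_task;      /* 3. switch to the next task's stack   */
    set_sp(current->sp);
    restore_context();        /* 4. restore the new task's context;   */
}                             /*    returning resumes the new task    */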
All tasks run in user mode. The task scheduler switches the user/system mode stack between tasks. For this reason, the default user/system mode stack (which is defined in the startup file) is used until the first task is created and started. The default stack requirements are very small, so it is optimal to set the user/system stack in the startup file to 64 bytes.
Some definitions

Resource
• A resource is any entity used by a task.
• A resource can thus be an I/O device such as a printer, a keyboard, a display, etc., or a variable, a structure, an array, etc.

Shared Resource
• A shared resource is a resource that can be used by more than one task.
• Each task should gain exclusive access to the shared resource to prevent data corruption.
• This is called Mutual Exclusion.
Critical Section of Code
• A critical section of code, also called a critical region, is code that
needs to be treated indivisibly.
• Once the section of code starts executing, it must not be interrupted.
• Interrupts are disabled before the critical code is executed and re-enabled when the critical code is finished.
Context Switch (or Task Switch)
• When a multitasking kernel decides to run a different task, it saves the current task's context (CPU registers) in the current task's context storage area – its stack.
• Once this operation is performed, the new task's context is restored from its storage area, and execution of the new task's code resumes.
• This process is called a context switch or a task switch.
Overheads:
Context Switch (or Task Switch)
• Context switching adds overhead to the application.
• The more registers a CPU has, the higher the overhead.
• It consumes time
• Performance of a real-time kernel should not be judged on how many
context switches the kernel is capable of doing per second.
Layered architecture of RTOS
• A scheduler determines which task
will run next in a multitasking system,
• while a dispatcher performs the
necessary bookkeeping to start that
particular task.
• A kernel also provides for inter-task
communication and synchronization
via mailboxes, queues, pipes, and
semaphores,
• A real-time executive is an extended kernel that includes privatized memory blocks (memory management and protection), input/output services, and other supporting features.
Kernel
• The kernel is the part of a multitasking system responsible for the
management of tasks (that is, for managing the CPU's time) and
communication between tasks.
• The fundamental service provided by the kernel is context switching.
• The use of a real-time kernel will generally simplify the design of
systems by allowing the application to be divided into multiple tasks
managed by the kernel.
Overhead: Kernel
A kernel will add overhead to your system because
1. It requires extra ROM (code space), additional RAM for the kernel
data structures
2. Each task requires its own stack space which has a tendency to eat
up RAM quite quickly.
3. A kernel consumes CPU time (typically between 2% and 5%).
Advantage: Kernel
• A kernel can allow you to make better use of CPU by providing
important services such as
• semaphore management,
• mailboxes,
• queues,
• time delays, etc.
Scheduler
• The scheduler is the part of the kernel responsible for determining which task will run next, amongst the tasks in the ready state.
• Most real-time kernels are priority based.
• Each task is assigned a priority based on its importance.
• CPU time is given to the highest-priority task that is ready to run.
• There are two types of priority-based kernels:
• Non-preemptive and
• Preemptive.
Non-Preemptive Kernel
• Non-preemptive scheduling is also called cooperative multitasking; tasks
cooperate with each other to share the CPU.
• An ISR can make a higher priority task ready to run, but the ISR always
returns to the interrupted task.
• The new higher priority task will gain control of the CPU only when the
current task gives up the CPU
• One of the advantages of a non-preemptive kernel is that interrupt latency is typically low (being non-preemptive, there are fewer critical regions)
• Another advantage of non-preemptive kernels is the lesser need to guard
shared data through the use of semaphores.
• Each task owns the CPU, and you don't have to fear that a task will be preempted.
• But still, a low-priority task will continue to run until it relinquishes the CPU.
Drawbacks
• A higher priority task that has been made ready to run may have to
wait a long time to run, because the current task must give up the
CPU when it is ready to do so.
• Task-level response time in a non-preemptive kernel is non-deterministic.
Preemptive Kernel
• The highest priority task ready to run is always given control of the
CPU.
• When a task makes a higher priority task ready to run, the current
task is preempted (suspended/ ready state) and the higher priority
task is immediately given control of the CPU.
• If an ISR makes a higher priority task ready, when the ISR completes,
the interrupted task is suspended and the new higher priority task is
resumed
• Lower-priority tasks face problem of starvation.
Round Robin Scheduling
• When two or more tasks have the same priority, the kernel will allow
one task to run for a predetermined amount of time slice, and then
selects another task.
• The kernel gives control to the next task in line if:
a) the current task doesn't have any work to do during its time slice or
b) the current task completes before the end of its time slice.
Rate monotonic scheduling
• The priorities are assigned so that the higher the execution rate, the
higher the priority.
• This scheme is common in embedded applications like avionics
systems
• For example, in the aircraft navigation system, the task that gathers
accelerometer data every 10 ms has the highest priority.
• The task that collects gyro data, every 40 ms, has the second highest priority.
• Finally, the task that updates the pilot’s display every second has the lowest
priority.
Hybrid scheduling system
• This system is a combination of round-robin and preemptive
priority systems.
• In these systems, tasks of higher priority can always preempt those of
lower priority.
• However, if two or more tasks of the same priority are ready to run
simultaneously, then they run in round-robin fashion
Dynamic Priority Scheduling: Earliest Deadline First Approach
• At any point of time, the ready task with the earliest deadline has the highest priority
• Used by the avionics industry

Task ID   Arrival time in scheduler (ready state)   Duration of execution   Deadline
T1        0                                         10                      33
T2        4                                         3                       28
T3        5                                         10                      29

Earliest deadline first scheduling example

[Figure: EDF Gantt chart, time axis 0-25 — T1 runs 0-4; T2 arrives at t=4 with the earliest deadline (28) and runs 4-7; T3 (deadline 29) runs 7-17; T1 (deadline 33) resumes and completes at t=23.]
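A small sketch of the EDF selection rule in C (the ready-list representation is an assumption; a real kernel would also handle arrivals and preemption):

#include <stddef.h>
#include <stdint.h>

typedef struct task {
    uint32_t     deadline;   /* absolute deadline */
    struct task *next;       /* ready-list link   */
} task_t;

/* EDF: among all ready tasks, pick the one with the earliest
   absolute deadline; it gets the highest priority right now. */
task_t *edf_select(task_t *ready_list)
{
    task_t *best = ready_list;
    for (task_t *t = ready_list; t != NULL; t = t->next)
        if (t->deadline < best->deadline)
            best = t;
    return best;             /* NULL if nothing is ready */
}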


Shared data and re-entrancy
• A computer program is called re-entrant, if it can be interrupted in
the middle of its execution by another actor before the earlier
invocation has finished execution, without affecting the path that the
first actor would have taken through the code.
• That is, it is possible to “re-enter” the code while it is already running
and still produce correct results
• It is a computer program that is written so that the same copy is
shared by multiple users.
• The interrupt could be caused due to internal action like a jump or
call, or by an external action like a hardware interrupt or signal.
• Once the re-entered invocation completes, the previous actor continues its execution from the point where the interrupt arose.
• A programmer writes the reentrant program by making sure that no
instructions modify the contents of variable values in other
instructions within the program.
Do not use static variables in re-entrant functions. If you want to use static variables, make them private to the function.
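A classic illustration of this note (not from the slides): the first swap below is non-reentrant because tmp is static and shared by every invocation; the second keeps tmp on the caller's stack and is safe to re-enter:

/* NON-reentrant: if an ISR interrupts this function and calls it
   again, the shared static tmp is corrupted. */
void swap_bad(int *x, int *y)
{
    static int tmp;          /* one copy shared by every caller */
    tmp = *x;
    *x  = *y;
    *y  = tmp;
}

/* Reentrant: tmp lives on the stack, so each invocation
   (task or ISR) gets its own private copy. */
void swap_ok(int *x, int *y)
{
    int tmp = *x;            /* automatic variable: per-call copy */
    *x = *y;
    *y = tmp;
}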
Mutual Exclusion
• The easiest way for tasks to communicate with each other is through
shared data structures.
• Tasks can thus reference global variables, pointers, buffers, linked lists, ring
buffers, etc.
• While sharing data (to simplify the exchange of information), you must
ensure that each task has exclusive access to the data to avoid contention
and data corruption.
• The most common methods to obtain exclusive access to shared resources
are:
a) Disabling interrupts
b) Test-And-Set
c) Using semaphores
Disabling interrupts
• The easiest and fastest way to gain exclusive access to a shared
resource is by disabling and enabling interrupts
• You must be careful, however, to not disable interrupts for too long
because this affects the response of your system to interrupts.
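A sketch in C, assuming an ARM Cortex-M target built with GCC (the cpsid i / cpsie i instructions are the Cortex-M way to mask and unmask interrupts; on another CPU the macros would differ):

#define DISABLE_INTERRUPTS()  __asm volatile ("cpsid i")  /* mask   */
#define ENABLE_INTERRUPTS()   __asm volatile ("cpsie i")  /* unmask */

volatile long shared_counter;      /* shared with an ISR */

void increment_shared(void)
{
    DISABLE_INTERRUPTS();          /* enter critical section         */
    shared_counter++;              /* indivisible read-modify-write  */
    ENABLE_INTERRUPTS();           /* keep this window short!        */
}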
Situation

[Figure: two processes sharing memory, with a task switch every 10 ms — Process 1 and Process 2 each try to access the same array in a shared database during their respective time slots, risking data corruption.]
Mutual Exclusion using Test-And-Set
(For systems without a kernel)
• If you are not using a kernel, two functions could ‘agree’ that to
access a resource, they must check a global variable, and if the
variable is 0 the function has access to the resource.
• To prevent the other function from accessing the
resource, however, the first function that gets the resource simply
sets the variable to 1. This is commonly called a Test-And-Set (or TAS)
operation.
• The TAS operation must either be performed indivisibly (by the
processor instructions) or you must disable interrupts when doing the
TAS on the variable
Sample code

LOCKED    EQU  0              ; define the value indicating "locked"

          LDR  r1, <addr>     ; load r1 with the semaphore address
          LDR  r0, =LOCKED    ; preload the "locked" value, i.e. r0 = 0
spin_lock
          SWP  r0, r0, [r1]   ; atomically swap register r0 with the semaphore
          CMP  r0, #LOCKED    ; if the semaphore was already locked,
          BEQ  spin_lock      ; retry; else it is now locked for us
          ; now write code to use the shared resource
Mutual Exclusion, Semaphores
• Semaphores are used to:
a) control access to a shared resource (mutual exclusion);
b) signal the occurrence of an event;
c) allow two tasks to synchronize their activities.
• A semaphore is a key that your code acquires in order to continue
execution.
• If the semaphore is already in use, the requesting task is suspended
until the semaphore is released by its current owner.
Binary and counting semaphore
• There are two types of semaphores: binary semaphores and counting
semaphores.
• As its name implies, a binary semaphore can only take two values: 0
or 1.
• A counting semaphore allows values between 0 and 255, 65535 or
4294967295, depending on whether the semaphore mechanism is
implemented using 8, 16 or 32 bits, respectively.
• Along with the semaphore's value, the kernel also needs to keep track
of tasks waiting for the semaphore's availability.
Counting Semaphore cont.
• A task desiring the semaphore will perform a WAIT operation.
• If the semaphore is available (the semaphore value is greater than
0), the semaphore value is decremented and the task continues
execution.
• If the semaphore's value is 0, the task waiting for semaphore is placed
in a waiting list.
• Most kernels allow you to specify a timeout:
• If the semaphore is not available within a certain amount of time, the requesting task is made ready to run (or undertakes other work) and an error code (indicating that a timeout has occurred) is returned to the semaphore caller function in that task.
• A task releases a semaphore by performing a SIGNAL operation.
• If no task is waiting for the semaphore, the semaphore value is simply incremented from zero to one.
• However, if a task is waiting for the semaphore, then that task is made ready to run, the semaphore value is not incremented, and the key is given to one of the tasks waiting for it.
• Depending on the kernel, the task which will receive the semaphore is
either:
a) the highest priority task waiting for the semaphore, or
b) the first task that requested the semaphore (First In First Out, or FIFO).
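A simplified WAIT/SIGNAL sketch in C, conceptual only: a real kernel performs these operations atomically and manages the waiting list and timeouts itself. This reuses the interrupt-disable macros sketched earlier; block_current_task and ready_one_waiter are hypothetical kernel helpers:

#include <stddef.h>

struct tcb;                          /* task control block, as before  */

typedef struct {
    int         count;               /* semaphore value                */
    struct tcb *wait_list;           /* tasks blocked on the semaphore */
} sem_t;

/* Hypothetical kernel helpers. */
void block_current_task(struct tcb **list);
void ready_one_waiter(struct tcb **list);

void sem_wait(sem_t *s)              /* the WAIT (pend) operation      */
{
    DISABLE_INTERRUPTS();
    if (s->count > 0)
        s->count--;                  /* available: take it, continue   */
    else
        block_current_task(&s->wait_list);  /* else join waiting list  */
    ENABLE_INTERRUPTS();
}

void sem_signal(sem_t *s)            /* the SIGNAL (post) operation    */
{
    DISABLE_INTERRUPTS();
    if (s->wait_list != NULL)
        ready_one_waiter(&s->wait_list); /* hand the key to a waiter;  */
    else                                 /* count is not incremented   */
        s->count++;
    ENABLE_INTERRUPTS();
}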
Example
• Semaphores are especially useful when tasks are sharing I/O devices.
• A counting semaphore is used when a resource can be used by more than one task at the same time.
• A Bluetooth dongle can give 8 connections simultaneously.
Priority Inversion Problem
• When a lower-priority task blocks a higher-priority task, a priority inversion is said to occur.
• Let three tasks, τ1, τ2, and τ3, have decreasing priorities (i.e., τ1 > τ2 > τ3, where ">" is the precedence symbol), where τ1 and τ3 share some data or resource that requires exclusive access, while τ2 does not interact with either of the other two tasks.
• Access to the critical section is carried out through the wait and signal operations on semaphore s.
Priority Inversion Problem

[Figure: timeline — T3 locks the semaphore; T1 preempts T3; T1 then blocks due to non-availability of the semaphore and T3 executes inside its critical section; T2 preempts T3; eventually T3 releases the semaphore and T1 runs.]
• Now, consider the following execution scenario, illustrated in the figure.
• Task τ3 starts at time t0 and locks semaphore s at time t1. At time t2, τ1 arrives and preempts τ3 inside its critical section.
• After a while, τ1 requests to use the shared resource by attempting to lock s, but τ1 gets blocked, as τ3 is currently using it. Hence, at time t3, τ3 continues to execute inside its critical section.
• Next, when τ2 arrives at time t4, it preempts τ3, as it has a higher priority and does not interact with either τ1 or τ3.
• The execution time of τ2 increases the period of blocking of τ1, as it is no longer dependent solely on the length of the critical section executed by τ3.
Priority inheritance protocol
• In the priority inheritance protocol, the priorities of tasks are dynamically adjusted so that the priority of any task in a critical region gets the priority of the highest-priority task using that same critical region.
• Rule: when a task τi blocks one or more higher-priority tasks, it temporarily inherits the highest priority of the blocked tasks.
Priority inheritance: example

[Figure: with priority inheritance, T3 temporarily runs at T1's priority, which stops the preemption of T3 by T2 inside the critical section.]
Mailboxes
• Mailboxes provide an intertask communication mechanism,
• A mailbox is actually a special memory location that one or more
tasks can use to transfer data, or more generally for synchronization.
• The tasks rely on the kernel to allow them to write to the mailbox via
a post operation or to read from it via a pend operation — direct
access to any mailbox is not allowed.
• Two system calls, pend(d, &s) and post(d, &s), are used to receive and send mail, respectively
• d is the mailed data, and the second parameter, &s, is the mailbox location
• the pending task is suspended while waiting for data to appear, which avoids wasting CPU time.
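Conceptually, usage might look like the sketch below. The API follows the spirit of the slide's pend/post notation, though here pend returns the received datum rather than taking it by value; the message type and the sample_sensor/process helpers are assumptions:

typedef int msg_t;                /* assumed message type              */

/* Hypothetical kernel calls in the spirit of post(d, &s) and pend. */
void  post(msg_t d, msg_t *s);    /* deposit data; wakes any pender    */
msg_t pend(msg_t *s);             /* suspend until data arrives        */

msg_t mbox;                       /* the mailbox location, &s          */

msg_t sample_sensor(void);        /* hypothetical data source          */
void  process(msg_t d);

void producer_task(void)
{
    for (;;)
        post(sample_sensor(), &mbox); /* mail the data                 */
}

void consumer_task(void)
{
    for (;;)
        process(pend(&mbox));     /* suspended while the mailbox is
                                     empty, so no CPU time is wasted   */
}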
Event Flags
• Event flags are used when a task
needs to synchronize with the
occurrence of multiple events.
• The task can be synchronized
when any of the events have
occurred. This is called disjunctive
synchronization: (logical OR).
• A task can also be synchronized
when all events have occurred :
(logical AND).
Message Queues,
• A message queue is used to send one or more messages to a task.
• A message queue is basically an array of mailboxes.
• Through a service provided by the kernel, a task or an ISR can deposit
a message (or pointer) into a message queue.
• Similarly, one or more tasks can receive messages through a service
provided by the kernel.
• Generally, the first message inserted in the queue will be the first
message extracted from the queue (FIFO).
• A task desiring to receive a message from an empty queue will be
suspended and placed on the waiting list (with time out) until a
message is received
• When a message is deposited into the queue, either the highest
priority task or the first task waiting for the message will be given the
message.
Timer and Clock Services
• In developing real-time software, it is desirable to have easy-to-use timing services available.
• For example, suppose a diagnostic task checks the "health" of an elevator system periodically.
• Essentially, the task would execute one round of diagnostics and then wait for a periodic timer interrupt (notification) to run again; this task repeats forever.
• This is usually accomplished by having a programmable timer that is
set to create the required time interval.
• A system call, delay, is commonly available to suspend the executing task until the desired time has elapsed, after which the suspended task is moved to the ready list.
• The delay function has one integer parameter, ticks, to specify the length of the delay.
• In order to generate an appropriate time reference, a timer circuit is
configured to interrupt the CPU at a fixed rate, and the internal system
time is incremented at each timer interrupt.
• The interval of time with which the timer is programmed to interrupt defines the unit of time in the system — also called a "tick" or the time resolution.
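The elevator diagnostic task might then look like this in C, using the delay(ticks) call described above (the tick rate and helper names are assumptions for illustration):

#define TICKS_PER_SEC 100u            /* assuming a 10 ms clock tick   */

void delay(unsigned ticks);           /* kernel call described above   */
void run_elevator_diagnostics(void);  /* hypothetical helper           */

void diagnostic_task(void)            /* this task repeats forever     */
{
    for (;;) {
        run_elevator_diagnostics();   /* one round of diagnostics      */
        delay(60u * TICKS_PER_SEC);   /* suspended for 60 s, then      */
    }                                 /* moved back to the ready list  */
}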
Clock Ticks
• A clock tick is a special interrupt that occurs periodically.
• The time between interrupts is application specific and is generally between 10 and 200 ms.
• The clock tick interrupt allows a kernel to delay tasks for an integral
number of clock ticks and to provide timeouts when tasks are waiting
for events to occur.
• The faster the tick rate, the higher the overhead imposed on the
system.
A situation can arise where higher-priority tasks and ISRs execute before a task that needs to delay for 1 tick. As you can see, the task attempts to delay for 20 ms but, because of its priority, actually executes at varying intervals. This causes the execution of the task to jitter.
Memory Management in RTOS
• Dynamic memory allocation is support for on-demand memory requests by application tasks and the operating system itself.
• The operating system has to perform effective memory management in order to keep the tasks isolated.
• A risky allocation of memory is any allocation that can lose the system's deterministic nature.
• Such an allocation can produce a stack overflow or a deadlock situation.
• Therefore, it is truly important to avoid risky allocation of memory, while at the same time reducing the overhead incurred by memory management.
• This overhead is a significant component of the context-switch time and must be minimized.
Memory Manager
• When a task / process is created, the memory manager allocates the
memory addresses (blocks) to the task.
• Threads of a process share the memory space of the process
• Memory manager of the OS has to be secure, robust and well
protected.
Memory manager Contd.
• The memory manager should not produce memory leaks and stack overflows.
• A memory leak here means an attempt to write in a memory block that is not allocated to a process or data structure.
• A stack overflow means that the stack exceeds the allocated memory block(s).
Memory Managing Strategies for a system
• Fixed-blocks allocation
• Fixed-size blocks allocation, also called memory pool allocation, uses a free list of fixed-size blocks of memory (often all of the same size); see the sketch after this list.
• This works well for simple embedded systems where no large objects need to be allocated, but suffers from fragmentation, especially with long memory addresses.
• However, due to the significantly reduced overhead, this method can substantially improve performance for objects that need frequent allocation/de-allocation, and is often used in video games.
• Dynamic -blocks Allocation
• Dynamic Page-Allocation
• Dynamic Data memory Allocation
• Dynamic address-relocation
• Multiprocessor Memory Allocation
• Memory Protection to OS functions
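A minimal fixed-size block pool in C — a sketch of the free-list idea from the first strategy above, not a production allocator (block count and size are arbitrary, and a real RTOS would add locking):

#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE  32
#define NUM_BLOCKS  16

typedef union block {
    union block *next;               /* link while the block is free   */
    uint8_t      data[BLOCK_SIZE];   /* payload while the block is used */
} block_t;

static block_t  pool[NUM_BLOCKS];
static block_t *free_list;

void pool_init(void)                 /* thread all blocks on a list    */
{
    free_list = NULL;
    for (int i = 0; i < NUM_BLOCKS; i++) {
        pool[i].next = free_list;
        free_list    = &pool[i];
    }
}

void *pool_alloc(void)               /* O(1) and deterministic: pop    */
{                                    /* the head of the free list      */
    block_t *b = free_list;
    if (b != NULL)
        free_list = b->next;
    return b;                        /* NULL if the pool is exhausted  */
}

void pool_free(void *p)              /* O(1) and deterministic: push   */
{                                    /* the block back onto the list   */
    block_t *b = (block_t *)p;
    b->next   = free_list;
    free_list = b;
}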
Memory allocation in RTOSes
• An RTOS may disable support for dynamic block allocation, MMU support for dynamic page allocation, and dynamic binding, as these increase the latency of servicing the tasks and ISRs.
• An RTOS may not support memory protection of the OS functions, as this increases the latency of servicing the tasks and ISRs.
• User functions can then run in kernel space and run like kernel functions.
• An RTOS may provide for disabling of the support for memory protection among the tasks, because this protection increases the memory requirement for each task.
Memory Manager provides the following functions
• Use of memory address space by a process,
• Specific mechanisms to share the memory space, and
• Specific mechanisms to restrict sharing of a given memory space
• Optimization of the access periods of a memory by using a hierarchy of memory (caches, primary memory, and external secondary magnetic and optical memories)
• Remember that the access periods are in the following increasing order: caches, primary memory, and then external secondary magnetic or optical memories.
Fragmentation: Memory Allocation Problems
• Time is spent in first locating the next free memory address before allocating it to the process.
• A standard memory allocation scheme is to scan a linked list of indeterminate length to find a suitable free memory block.
• When an allotted block of memory is de-allocated, time is spent in first locating that allocated memory block before de-allocating it.
• The time for allocation and de-allocation of memory blocks is variable (not deterministic) when the block sizes are variable and when the memory is fragmented.
• In an RTOS, this leads to unpredictable task performance.
Thank you…
For reaching the end of the presentation.
