Unit I
INTRODUCTION TO OPERATING SYSTEM AND MEMORY
MANAGEMENT
Key terms
Batch System: a type of system developed for the earliest computers, which used punched cards or tape for input; jobs were entered in batches.
CPU : the component containing the circuitry, the chips, that controls the interpretation and execution of instructions.
Device Manager : the section of operating system responsible for
controlling the use of devices. It monitors every device, channel, and
control unit and chooses the most efficient way to allocate all of the
system’s devices.
Embedded System : A dedicated computer system, often small and fast, that resides in a larger physical system such as a jet, aircraft, or ship.
File Manager : The section of OS responsible for controlling the use
of files.
Firmware : Software instructions or data that are stored in a fixed or
“firm” way, usually implemented on ROM.
Hardware : Physical machine and its components.
Hybrid System: A computer system that supports both batch and
interactive processes.
Interactive system : a system that allows users to interact directly with the computer and receive immediate responses.
Kernel : The primary part of the operating system that remains in
RAM and is charged with performing the system’s most essential
tasks, such as managing main memory and disk access.
Main memory: Memory unit that works directly with the CPU and in
which the data and instructions must reside in order to be processed.
What is an Operating System ?
The Software that manages all the resources of a computer system.
Definition: The part of the computing system that manages all of the
hardware and all of the software. It controls every file, every device,
every section of main memory, and every nanosecond of processing
time.
When a user sends a command, the operating system must make sure
that the command is executed or, if it is not, it must display a message
explaining the error.
A computer system consists of :
A) Software : (programs)
B) Hardware (the physical machine and its electronic components)
The operating system falls into a broad category of software; other categories include utility software and application software.
Operating System Software
There are four essential managers of every operating system :
1) Memory manager
2) Processor manager
3) Device manager
4) File manager.
These managers form the basis of all operating systems.
Each manager works closely with the other managers and performs its
unique role regardless of which specific operating system is being
discussed.
On the other hand, the user command interface, from which users issue
commands to the operating system, is the component that is unique
to each operating system and can differ greatly from one operating
system to another.
Regardless of size or configuration of the system, each of the
subsystem managers must perform the following tasks:
Monitor its resources continuously.
Enforce the policies that determine who gets what, when and how
much.
Allocate the resource when it is appropriate.
Deallocate the resource when appropriate.
1) Memory Manager :
It is in charge of main memory, also known as RAM.
It checks the validity of each request for memory space and if it is a legal request,
the memory manager allocates a portion that isn’t already in use.
In a multiuser environment, the memory manager sets up a table to keep track of
who is using which section of memory.
Finally, when time comes to reclaim the memory , the memory manager
deallocates memory.
A primary responsibility of the Memory Manager is to preserve the space in main
memory occupied by the operating system itself – it can’t allow any part of it to
be accidentally or intentionally altered.
2) The Processor Manager :
The Processor Manager decides how to allocate the central processing unit
(CPU).
It keeps track of the status of each process. (A process is defined here as an
instance of execution of a program.)
The Processor Manager monitors whether the CPU is executing a process or
waiting for a READ or WRITE command to finish, because it handles the
process's transitions from one state of execution to another.
Once the Processor Manager allocates the processor, it sets up the necessary
registers and tables and when the job is finished or the maximum amount of
time has expired, it reclaims the processor.
Thus the Processor Manager has two levels of responsibility.
1) To handle jobs as they enter the system and
2) To manage each process within those jobs.
3) Device Manager:
The Device Manager monitors every device, channel, and control unit.
Its job is to choose the most efficient way to allocate all of the
system's devices (printers, terminals, disk drives) based on a
scheduling policy.
The Device manager makes the allocation, starts its operation and
finally deallocates the device.
4) File Manager :
It keeps track of every file in the system, including data files,
assemblers, compilers, and application programs.
It enforces restrictions on who has access to which files.
The file manager also controls what users are allowed to do with files
once they access them.
Also, it allocates the resource by opening the file and deallocates it by
closing the file.
Types of Operating System
1) Batch Operating System
This type of operating system does not interact with the computer directly.
An operator groups together jobs that have similar requirements; it is the
operator's responsibility to sort jobs with similar needs into batches.
A Batch Operating System is designed to manage and execute a large number
of jobs efficiently by processing them in groups.
Efficiency is measured in throughput: the number of jobs completed in a
given amount of time. Turnaround time, by contrast, is measured in hours
or even days.
2) Interactive Systems :
Also called time-sharing systems, interactive systems give faster turnaround
than batch systems but are slower than real-time systems.
Interactive systems were introduced to satisfy the demands of users who
needed fast turnaround when debugging their programs.
These systems allow each user to interact directly with the computer
system via commands entered from a typewriter-like terminal.
The operating system provides immediate feedback to the user, and
response time can be measured in minutes or seconds depending on the
number of active users.
3) Real-Time Operating System
These types of OSs serve real-time systems. The time interval required to
process and respond to inputs is very small. This time interval is called response
time. Real-time systems are used when there are time requirements that are very
strict like missile systems, air traffic control systems, robots, etc.
Types of Real-Time Operating Systems
•Hard Real-Time Systems: Hard Real-Time OSs are meant for applications
where time constraints are very strict, and even the shortest possible delay is not
acceptable. These systems are built for saving lives like automatic parachutes or
airbags which are required to be readily available in case of an accident. Virtual
memory is rarely found in these systems.
•Soft Real-Time Systems: These OSs are for applications where time is less
strict.
4) Hybrid Systems
Hybrid systems are the combination of batch and interactive.
They appear to be interactive because individual users can access the
system via terminals and get fast responses, but such a system actually
accepts and runs batch programs in the background when the interactive
load is light.
A hybrid system takes advantage of the free time between demands for
processing to execute programs that need no significant operator
assistance.
5)Embedded Systems:
Embedded systems are placed inside other products to add features
and capabilities.
6)Multi-Programming Operating System
In a multiprogramming operating system, more than one program is
present in main memory at a time, and any one of them can be in
execution. This allows better utilization of resources.
7)Multi-tasking/Time-sharing Operating systems
It is a type of multiprogramming system in which every process runs in a
round-robin manner. Each task is given some time to execute so that all tasks
work smoothly. Each user gets CPU time as though using a single system. These
systems are also known as multitasking systems. The tasks can come from a
single user or from different users. The time each task gets to execute is called
a quantum. After this time interval is over, the OS switches to the next task.
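The quantum-based switching described above can be sketched as a small Python simulation (a simplified model; the task names and burst times are invented for illustration):

```python
from collections import deque

def round_robin(tasks, quantum):
    """Simulate round-robin time sharing. `tasks` maps a task name to its
    remaining execution time; returns the order in which tasks finish."""
    queue = deque(tasks.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        if remaining <= quantum:            # task completes within its quantum
            finished.append(name)
        else:                               # quantum expires; rejoin the queue
            queue.append((name, remaining - quantum))
    return finished

# Three tasks with burst times 5, 2, and 8 units, quantum = 3
print(round_robin({"T1": 5, "T2": 2, "T3": 8}, quantum=3))  # ['T2', 'T1', 'T3']
```

The shortest task finishes first here only because it fits inside a single quantum; round robin itself makes no ordering promises beyond fair time slices.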
8) Multi-Processing Operating System
A multiprocessing operating system is a type of operating system in which
more than one CPU is used for the execution of processes. This improves
the throughput of the system.
9) Multi-User Operating Systems
These systems allow multiple users to be active at the same time. This
system can be either a multiprocessor or a single processor with
interleaving.
10) Distributed Operating System
These operating systems are a recent advancement in computer
technology and are being widely adopted all over the world at a
great pace.
Various autonomous interconnected computers communicate
with each other using a shared communication network.
Independent systems possess their own memory unit and
CPU.
These systems' processors differ in size and function.
The major benefit of working with these operating systems is that a
user can always access files or software that are not present on his
own system but on some other system connected to the network;
that is, remote access is enabled among the devices connected to
that network.
11) Network Operating System
These systems run on a server and provide the
capability to manage data, users, groups, security,
applications, and other networking functions.
These types of operating systems allow shared
access to files, printers, security, applications, and
other networking functions over a small private
network.
One more important aspect of network operating systems is that all
the users are well aware of the underlying configuration, of all other
users within the network, their connections, and so on, which is why
these computers are popularly known as tightly coupled systems.
12)Mobile Operating Systems
Mobile operating systems are designed specifically for mobile
devices such as smartphones and tablets.
Examples of such operating systems are Android and iOS.
These operating systems manage the hardware and software resources
of the device, providing a platform for running applications and
ensuring a seamless user experience.
Memory Management: Early Systems
•Memory management is how a computer allocates the main memory for an
operating system (OS), applications, and any other operational processes. Computer
memory is not the same as computer storage. Memory stores data temporarily, but
computer storage, such as hard drives and solid state drives (SSDs), is where the
information is permanently stored.
•Memory management is essential for increasing your computer's performance and
efficiency, as many different applications vie for the computer’s memory. The main
memory has space limits, so efficient management is necessary for a computer to
function correctly. Memory management keeps a tally of memory location statuses,
sharing this information with the processor while keeping a record of the memory
allocation status.
Applications and memory allocation
Once in use, applications manage their own memory to ensure they allocate the
proper memory for their objects and data structures. Applications use two
features, allocation and recycling, to distribute memory and recycle its use.
Allocation: A process in an application that assigns memory to the application
when requested by the program. It does this through an allocator and can be
automatic or manual, depending on the application's programming.
Recycling: The process a program uses to recycle or deallocate memory from an
application that no longer requests memory. As with allocation, this process can
be automatic or manual. When automated, you can call this process garbage
collection.
Memory management techniques
•Memory management techniques can be divided into two primary
categories: contiguous and non-contiguous memory schemes. Examine
different memory management techniques to better understand their
purposes.
•Contiguous memory schemes
•Contiguous memory schemes are programs that have consecutive memory
block addresses and include:
• Single contiguous allocation
• Multiple partitioning
Single contiguous
Single contiguous memory management schemes are one of the oldest techniques. In this technique, the
main memory has two partitions: one for the OS and the other for different user processes.
However, due to its simplicity, single contiguous memory management leads to memory waste. While it
played an important role in developing more advanced systems, this technique isn’t practical for modern
computers.
Working:
Only one program can run at a time.
The program is loaded after the OS, occupying the remaining memory.
If the program is too large to fit in memory, either:
◦ The program is modified to be smaller, or
◦ Overlay techniques are used to load segments of the program in turns.
Advantages:
•Simple and easy to implement.
•No memory fragmentation issues.
Limitations:
•Can only run one job at a time.
•Doesn't support multiprogramming.
•Wastes memory if the program is small.
•If the program is larger than the available memory, it cannot be executed unless
modified or split using overlays.
Algorithm to Load a Job in a Single-User System:
1. Store the first memory location of the program into the base register (for protection).
2. Set the program counter to the program's start address.
3. Read the first instruction.
4. Increment the program counter by the number of bytes in the instruction.
5. Check if the last instruction has been reached:
If yes, stop loading.
If no, continue.
6. Check if the program counter is greater than the memory size:
If yes, stop loading.
If no, continue.
7. Load the instruction in memory.
8. Read the next instruction.
9. Go to step 4.
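The loading loop above can be sketched in Python (an illustrative model only; the `load_job` function and its instruction/size pairs are assumptions, not from the text):

```python
def load_job(program, memory_size):
    """Load a job's instructions into memory one at a time, following the
    single-user loading steps above. `program` is a list of
    (instruction, size_in_bytes) pairs. Returns the loaded memory image,
    or None if the program exceeds available memory."""
    base_register = 0                  # step 1: record the load origin
    program_counter = base_register    # step 2: point at the program's start
    memory = []
    for instruction, size in program:       # steps 3 and 8: read each instruction
        program_counter += size             # step 4: advance past this instruction
        if program_counter > memory_size:   # step 6: program exceeds memory
            return None
        memory.append(instruction)          # step 7: place instruction in memory
    return memory                           # step 5: last instruction reached

print(load_job([("LOAD", 4), ("ADD", 4), ("STORE", 4)], memory_size=16))
```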
Multiple partitioning
Multiple partitioning overcomes the issues of running concurrent programs by
breaking the main memory into various parts or partitions, allowing you to run
simultaneous applications and utilize memory more efficiently. Multiple
partitioning uses two techniques to achieve this:
◦ Fixed partitioning
◦ Dynamic partitioning
Fixed (or static) Partitioning
Fixed (or static) partitioning is one of the earliest and simplest memory
management techniques used in operating systems. It involves dividing the main
memory into a fixed number of partitions at system startup, with each partition
being assigned to a process.
These partitions remain unchanged throughout the system’s operation,
providing each process with a designated memory space. This method was
widely used in early operating systems and remains relevant in specific contexts
like embedded systems and real-time applications.
However, while fixed partitioning is simple to implement, it has significant
limitations, including inefficiencies caused by internal fragmentation.
•In fixed partitioning, the memory is divided into fixed-size chunks, with each
chunk being reserved for a specific process. When a process requests memory,
the operating system assigns it to the appropriate partition. Each partition is of
the same size, and the memory allocation is done at system boot time.
•Fixed partitioning has several advantages over other memory allocation
techniques. First, it is simple and easy to implement. Second, it is predictable,
meaning the operating system can ensure a minimum amount of memory for
each process. Third, it can prevent processes from interfering with each other's
memory space, improving the security and stability of the system.
•However, fixed partitioning also has some disadvantages. It can lead to internal
fragmentation, where memory in a partition remains unused. This can happen
when the process's memory requirements are smaller than the partition size,
leaving some memory unused. Additionally, fixed partitioning limits the number
of processes that can run concurrently, as each process requires a dedicated
partition.
• Consider four fixed partitions of 4 MB, 8 MB, 8 MB, and 16 MB holding
processes of 1 MB, 7 MB, 7 MB, and 14 MB respectively. The first process
consumes only 1 MB of its 4 MB partition.
Hence, internal fragmentation in the first block is (4-1) = 3MB.
Sum of internal fragmentation in every block = (4-1)+(8-7)+(8-7)+(16-14)
= 3+1+1+2 = 7MB.
Now suppose a process P5 of size 7MB arrives. This process cannot be
accommodated in spite of the available free space, because allocation is
contiguous (spanning partitions is not allowed). Hence, this 7MB becomes
external fragmentation.
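The fragmentation arithmetic in this example can be checked with a few lines of Python (partition and process sizes taken from the example above):

```python
# Partition sizes and the processes loaded into them, as in the example
partitions = [4, 8, 8, 16]   # MB
jobs       = [1, 7, 7, 14]   # MB

# Internal fragmentation: unused space inside each allocated partition
internal = [p - j for p, j in zip(partitions, jobs)]
print(internal)       # [3, 1, 1, 2]
print(sum(internal))  # 7 (MB)

# A 7 MB process P5 cannot use these scattered holes, since spanning is
# not allowed, so the 7 MB of leftover space acts as external fragmentation.
```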
Algorithm to Load a Job in a Fixed Partition:
1. Determine the job's requested memory size.
2. Compare the job size with the size of the largest partition:
   If job_size > size of largest partition:
      Reject the job.
      Print an appropriate message to the operator.
      Go to Step 1 to handle the next job.
   Else:
      Proceed to Step 3.
3. Initialize counter to 1.
4. Repeat while counter <= number of partitions in memory:
   If job_size > memory_partition_size(counter):
      Increment counter by 1.
   Else:
      If memory_partition_status(counter) == "free":
         Load the job into memory_partition(counter).
         Change memory_partition_status(counter) to "busy".
         Go to Step 1 to handle the next job.
      Else:
         Increment counter by 1.
   End loop.
5. If no suitable partition is available: put the job in the waiting queue.
6. Go to Step 1 to handle the next job in line.
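A minimal Python sketch of this fixed-partition placement logic (the function name `load_job_fixed` and the dictionary layout are illustrative assumptions):

```python
def load_job_fixed(job_size, partitions):
    """Try to place a job in a fixed-partition memory.
    `partitions` is a list of dicts: {"size": int, "status": "free" | "busy"}.
    Returns the index of the partition used, "rejected" if the job is larger
    than every partition, or "waiting" if no free partition fits right now."""
    if job_size > max(p["size"] for p in partitions):
        return "rejected"                       # the job can never fit
    for i, part in enumerate(partitions):
        if part["size"] >= job_size and part["status"] == "free":
            part["status"] = "busy"             # allocate this partition
            return i
    return "waiting"                            # all suitable partitions busy

parts = [{"size": 4, "status": "free"}, {"size": 8, "status": "free"}]
print(load_job_fixed(6, parts))    # 1 (placed in the 8 MB partition)
print(load_job_fixed(10, parts))   # rejected (larger than every partition)
```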
Dynamic (Variable) Partitioning
Dynamic partitioning helps in overcoming the difficulties caused by
fixed partitioning. In dynamic partitioning, the partition size is not
declared initially; it is declared at the moment of process loading.
The very first partition is reserved for the OS. The remaining space
is divided into various parts. The actual size of each partition equals
the size of the process it holds, so the partition size varies according
to the requirement of the process. This way, internal fragmentation
can be easily avoided.
Types of Dynamic Partitioning Algorithms:
1. First Fit
2. Best Fit
3. Worst Fit
4. Next Fit
Advantages of Variable(Dynamic) Partitioning
• No Internal Fragmentation: In variable Partitioning, space in the main memory is allocated strictly according to
the need of the process, hence there is no case of internal fragmentation. There will be no unused space left in
the partition.
• No Restriction on the Degree of Multiprogramming: More processes can be accommodated due to the absence
of internal fragmentation; processes can be loaded as long as free memory remains.
• No Limitation on the Size of the Process: In fixed partitioning, a process with a size greater than the size of the
largest partition could not be loaded, and a process cannot be divided, as spanning is invalid in the contiguous
allocation technique. In variable partitioning, the process size is not restricted, since each partition's size is
decided according to the process size.
Disadvantages of Variable(Dynamic) Partitioning
• Difficult Implementation: Implementing variable Partitioning is difficult as compared to Fixed Partitioning as it
involves the allocation of memory during run-time rather than during system configuration.
• External Fragmentation: There will be external fragmentation despite the absence of internal fragmentation. For
example, suppose process P1 (2MB) and process P3 (1MB) complete their execution, leaving two free spaces of
2MB and 1MB. If a process P5 of size 3MB then arrives, the space cannot be allocated, since spanning is not
allowed in contiguous allocation. The rule says that the process must be continuously present in the main memory
to get executed. Hence it results in external fragmentation.
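A tiny Python illustration of this external-fragmentation scenario (hole sizes taken from the example; `can_allocate` is an invented helper):

```python
# Free holes left after P1 (2 MB) and P3 (1 MB) terminate; the holes are
# in different, non-adjacent regions of memory
free_holes = [2, 1]   # MB

def can_allocate(process_size, holes):
    """A process needs one contiguous hole; spanning several holes
    is not allowed under contiguous allocation."""
    return any(hole >= process_size for hole in holes)

print(sum(free_holes))              # 3 MB free in total...
print(can_allocate(3, free_holes))  # False: no single hole fits 3 MB
```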
First Fit Allocation
First fit allocation is a memory management strategy used by operating
systems to assign memory blocks to processes. In this approach, when a
process requests memory, the system searches through the available
memory blocks and allocates the first block that is large enough to fulfill
the request. The search for a suitable memory block starts from the
beginning of the list of free memory areas and continues sequentially until
a block that meets the size requirements is found. Once this block is
allocated, the system proceeds with its operation, and the allocated
memory is marked as unavailable for other processes.
While first fit allocation is relatively fast because it stops searching once a
suitable block is located, it has some limitations. Over time, this method
can lead to fragmentation, as smaller gaps of unused memory might
accumulate between allocated blocks. These gaps may not be large
enough to accommodate future memory requests, even though there is
enough total unused memory in the system. This reduces overall memory
efficiency, but the simplicity and speed of the method often make it a
practical choice in environments where speed is prioritized over memory
optimization.
• Advantages of First Fit Algorithm
• The First Fit algorithm in operating systems offers several benefits:
• It is straightforward to implement and easy to understand, making it ideal for systems with
limited computational power.
• Memory can be allocated quickly when a suitable free block is found at the start.
• When processes have similar memory sizes, First Fit can help minimize fragmentation by
utilizing the first available block that fits the process.
• Disadvantages of First Fit Algorithm
• Despite its advantages, the First Fit algorithm has a few downsides:
• Over time, First Fit can lead to both external fragmentation, where small free memory blocks
are scattered, and internal fragmentation, where allocated memory exceeds the process’s
requirement, wasting space.
• It may not always allocate memory in the most efficient manner, leading to suboptimal use of
available memory.
• For large processes, First Fit can be less efficient, as it may need to search through numerous
smaller blocks before finding an appropriate one, which can slow down memory allocation.
First-Fit Algorithm
1. Start with the first process in the list.
2. For each process Pn, do the following:
Step 1: Search through the available memory blocks, starting from the first
block.
Step 2: Check if the current memory block is free and has enough space to
accommodate Pn.
Step 3: If a suitable block is found, allocate it to the process and update the
block's status (mark it as allocated).
Step 4: If no suitable block is found, mark the process as unallocated or waiting.
3. Repeat the above steps for each process.
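A hedged Python sketch of first-fit allocation (the block and process sizes are a common worked example, not from the text):

```python
def first_fit(blocks, processes):
    """Allocate each process to the first free block large enough for it.
    `blocks` is a list of free-block sizes; returns, for each process,
    the index of the block it received (or None if unallocated)."""
    remaining = blocks[:]                 # leftover space in each block
    allocation = []
    for size in processes:
        for i, free in enumerate(remaining):
            if free >= size:              # first block that fits wins
                remaining[i] -= size
                allocation.append(i)
                break
        else:
            allocation.append(None)       # no block can hold this process
    return allocation

# Blocks 100, 500, 200, 300, 600; processes 212, 417, 112, 426
print(first_fit([100, 500, 200, 300, 600], [212, 417, 112, 426]))
# [1, 4, 1, None]: the 426-unit process finds no remaining block that fits
```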
Best-Fit Allocation
Best-fit allocation is used to efficiently allocate memory blocks to processes in computer
systems. When a process requests memory, the system searches through available memory
blocks to find the smallest block that can accommodate the requested size. This approach aims
to reduce wasted space by minimizing the difference between the requested and allocated
memory sizes, ensuring that the chosen block fits as closely as possible to the required size.
The strategy minimizes memory fragmentation and optimizes memory utilization by leaving as
little unused space as possible after allocation. However, it also tends to leave many small,
leftover blocks that may not be useful for future memory requests, leading to fragmentation
issues over time. Furthermore, the process of finding the best-fit block requires scanning
through available blocks, which can be time-consuming, especially when memory is fragmented.
Despite these challenges, best-fit allocation remains a useful technique in memory
management, helping developers maximize the effective use of available memory resources.
Best-Fit Algorithm:
1. Input the memory blocks and processes with their sizes.
2. Initialize all memory blocks as free.
3. Pick each process and find the minimum block size that can be
assigned to the current process, i.e., the smallest of blockSize[1],
blockSize[2], ..., blockSize[n] that is at least processSize[current];
if found, assign it to the current process.
4. If no such block is found, leave that process and keep checking
the remaining processes.
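A comparable Python sketch of best-fit allocation (the block and process sizes are a common worked example, used here only for illustration):

```python
def best_fit(blocks, processes):
    """Allocate each process to the smallest free block that can hold it."""
    remaining = blocks[:]                 # leftover space in each block
    allocation = []
    for size in processes:
        # indices of blocks currently large enough for this process
        candidates = [i for i, free in enumerate(remaining) if free >= size]
        if candidates:
            best = min(candidates, key=lambda i: remaining[i])  # tightest fit
            remaining[best] -= size
            allocation.append(best)
        else:
            allocation.append(None)       # leave the process unallocated
    return allocation

# Same example as first fit: note the different (here, complete) placements
print(best_fit([100, 500, 200, 300, 600], [212, 417, 112, 426]))
# [3, 1, 2, 4]
```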
• Advantages of Best-Fit Allocation :
• Memory Efficient. The operating system allocates the job the minimum possible space in
memory, making memory management very efficient.
• It is the best method for keeping memory from being wasted.
• Improved memory utilization
• Reduced memory fragmentation
• Minimizes external fragmentation
• Disadvantages of Best-Fit Allocation :
• It is a Slow Process. Checking the whole memory for each job makes the working of the
operating system very slow. It takes a lot of time to complete the work.
• Increased computational overhead
• May lead to increased internal fragmentation
• Can result in slow memory allocation times.
Worst-Fit Memory Allocation
In the worst-fit memory allocation technique, the whole memory is traversed to find
the largest hole/partition, and the process is then placed in that hole/partition. It is a
slow process because the entire memory must be traversed to find the largest hole.
Algorithm for Worst Fit Memory Management Scheme
Step 1: Input the list of memory blocks along with their sizes.
Step 2: Input the list of processes with their sizes.
Step 3: For each process, check for the largest memory block (i.e., the block
with the maximum size) that is large enough to accommodate the process.
Step 4: If such a block is found, allocate the process to that block and update
the block size.
Step 5: If no suitable block is found, leave the process unallocated and move
on to the next process.
Step 6: Repeat the above steps for all processes.
Step 7: Stop.
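A Python sketch of worst-fit allocation along the same lines (the function name and example sizes are illustrative, not from the text):

```python
def worst_fit(blocks, processes):
    """Allocate each process to the largest free block that can hold it."""
    remaining = blocks[:]                 # leftover space in each block
    allocation = []
    for size in processes:
        candidates = [i for i, free in enumerate(remaining) if free >= size]
        if candidates:
            worst = max(candidates, key=lambda i: remaining[i])  # biggest hole
            remaining[worst] -= size      # update the chosen block's size
            allocation.append(worst)
        else:
            allocation.append(None)       # leave the process unallocated
    return allocation

# Same example sizes as before, for comparison with first fit and best fit
print(worst_fit([100, 500, 200, 300, 600], [212, 417, 112, 426]))
# [4, 1, 4, None]
```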
Deallocation
Deallocation is the process of releasing memory that was previously allocated to a process (or job) once the
process has completed its execution.
When a job no longer needs memory, either because it has terminated or because it is no longer using a portion of
memory, that memory is "returned" to the system for reuse.
Why is Deallocation Important?
•Without deallocation:
•The memory remains occupied unnecessarily.
•New jobs may be denied memory due to lack of available space.
•It leads to memory fragmentation and inefficient memory utilization.
•By properly deallocating memory:
•The operating system frees up space.
•It improves performance by making space available for other jobs.
•It helps in reducing fragmentation (especially when combined with compaction).
Suppose a job was allocated a block of memory. When it finishes:
The OS must update internal tables (like the memory allocation table).
It must mark that memory block as free or available.
It may merge this newly freed block with adjacent free blocks to prevent
fragmentation.
• For a fixed partition system, the process is quite straightforward: when the job is
completed, the Memory Manager resets the status of the memory block where
the job was stored to "free." Any code—for example, a binary value of 0 indicating
free and 1 indicating busy—may be used so the mechanical task of deallocating a
block of memory is relatively simple.
• A dynamic partition system uses a more complex algorithm because the algorithm
tries to combine the freed block of memory with other free blocks. Therefore, the
system must be prepared for three alternative situations (McNickle & Donovan,
1974):
Case 1. When the block to be deallocated is adjacent to another free block
Case 2. When the block to be deallocated is between two free blocks
Case 3. When the block to be deallocated is isolated from other free blocks
Deallocation Algorithm (General Steps)
Step 1:
Check if the deallocated block is adjacent to any existing free block.
Step 2:
If between two free blocks → merge all three.
If adjacent to one → merge two.
If isolated → add a new free block entry.
Step 3:
Update memory tables:
Modify size fields.
Remove null entries or adjust them.
Set status to “free”.
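The merge cases above can be sketched in Python (a simplified free-list model; representing blocks as `(start, size)` pairs is an assumption for illustration):

```python
def deallocate(free_list, start, size):
    """Return `size` units beginning at `start` to the free list, merging with
    any adjacent free block. `free_list` holds (start, size) pairs. Covers the
    three cases: adjacent to one free block, between two, or isolated."""
    free_list = sorted(free_list + [(start, size)])
    merged = []
    for block_start, block_size in free_list:
        if merged and merged[-1][0] + merged[-1][1] == block_start:
            # This block begins exactly where the previous free block
            # ends, so combine the two into one larger free block.
            prev_start, prev_size = merged.pop()
            merged.append((prev_start, prev_size + block_size))
        else:
            merged.append((block_start, block_size))
    return merged

# Free blocks at 0-99 and 200-299; freeing 100-199 bridges them (Case 2)
print(deallocate([(0, 100), (200, 100)], 100, 100))   # [(0, 300)]
```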
Relocatable dynamic partitions
Relocatable dynamic partitions in an operating system refer to a memory management
technique where the OS can move processes around in memory to optimize space utilization
and reduce fragmentation. This contrasts with static partitioning, where partitions are fixed in
size and location, and dynamic partitioning, which may still lead to external fragmentation.
Relocatable dynamic partitions address the issue of external fragmentation by allowing the OS
to move processes in memory to consolidate the free space into a single, contiguous block.
This process, often called compaction or defragmentation, involves pausing all running
processes, relocating them to new memory locations, and then resuming their execution.
While effective in eliminating external fragmentation, compaction can be resource-intensive and
may impact system performance if done too frequently.
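A simplified Python sketch of compaction (the job layout and sizes are invented for illustration; real compaction must also adjust the addresses inside each relocated job):

```python
def compact(jobs, memory_size):
    """Relocate jobs so they sit one after another from address 0, producing
    a single free block at the top of memory. `jobs` maps a job name to a
    (start, size) pair. Returns the new layout and the consolidated hole size."""
    new_layout = {}
    next_free = 0
    for name, (start, size) in sorted(jobs.items(), key=lambda kv: kv[1][0]):
        new_layout[name] = (next_free, size)   # job moved down to next_free
        next_free += size
    return new_layout, memory_size - next_free

# Jobs scattered through a 100-unit memory, with holes between them
layout, hole = compact({"J1": (0, 20), "J2": (40, 10), "J3": (70, 25)}, 100)
print(layout)   # {'J1': (0, 20), 'J2': (20, 10), 'J3': (30, 25)}
print(hole)     # 45: three small holes consolidated into one block
```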
Benefits of Relocatable Dynamic Partitions:
Reduced Fragmentation:
Compaction helps eliminate external fragmentation, allowing for more efficient memory
utilization.
Increased Multiprogramming:
By making better use of memory, relocatable dynamic partitions can support a higher degree of
multiprogramming, increasing system throughput.
Flexibility:
Dynamic allocation allows for more flexible memory management, adapting to changing process
needs.