
ADVANCED OPERATING SYSTEM

UNIT - 1

Advanced Operating System:


• An Advanced Operating System is an interface between the software applications and
the hardware, with a level of packaging (abstraction).

What is an OS?

• An operating system is software that makes a computer actually work.


• It is the software that enables all the programs we use.
• The OS organizes and controls the hardware.
• The OS acts as an interface between the application programs and the machine hardware.
• Examples: Windows, Linux, Unix, macOS, etc.

What is an Advanced OS?

• Traditional operating systems ran on stand-alone computers with single processors.
• Multiprocessor systems and distributed systems then arose.
• Due to the high demand for and popularity of multiprocessor systems, advanced
operating systems have gained high importance.

Types of Advanced Operating Systems

• The impetus for advanced operating systems has come from two directions:


1. Advances in the architecture of multiprocessor and multicomputer systems, driven by
a wide variety of high-speed architectures.
2. Advanced operating systems driven by applications.

Types

1. Distributed operating systems


2. Multiprocessor operating systems
3. Database operating systems
4. Real-time operating systems
1. Distributed operating systems
• These are operating systems for a network of autonomous computers connected by a
communication network.
• A distributed OS controls and manages the hardware and software resources of a
distributed system such that its users view the entire system as a powerful monolithic
computer system.
• When a program is executed in a distributed system, the user is not aware of where the
program is executed or of the location of the resources accessed.
• Practical issues: lack of shared memory, lack of a global clock, and unpredictable
communication delays.
2. Multiprocessor operating systems
• A multiprocessor system consists of a set of processors that share a set of physical
memory blocks over an interconnection network.
• It is a tightly coupled system where the processors share an address space.
• The OS controls and manages the hardware and software resources such that users view
the system as a powerful uniprocessor system.
• The user is not aware of the presence of multiple processors and the interconnection
network.
• Practical issues: increased complexity of synchronization, scheduling, memory
management, protection, and security.
3. Database operating systems
• Must support:
1. The concept of a transaction, and operations to store, retrieve, and manipulate a large
volume of data efficiently.
2. Primitives for concurrency control and system failure recovery.
• Must have a buffer management system to store temporary data and data retrieved from
secondary storage.
4. Real-time operating systems
• Jobs have completion deadlines.
• The major design issue is scheduling jobs in such a way that a maximum number of
jobs satisfy their deadlines. Other issues include designing languages and primitives to
effectively prepare and execute a job schedule.

Multiprocessing Operating System

• To improve performance, more than one CPU can be used within one computer system;
such a system is called a multiprocessor operating system.
• Multiple CPUs are interconnected so that a job can be divided among them for faster
execution.
• When a job finishes, results from all CPUs are collected and compiled to give the final
output.
• Jobs need to share main memory, and they may also share other system resources
among themselves.
• Multiple CPUs can also be used to run multiple jobs simultaneously.

For example, the UNIX operating system is one of the most widely used multiprocessing
systems.

The basic organization of a typical multiprocessing system: multiple CPUs connected to
shared memory over an interconnection network.

To employ a multiprocessing operating system effectively, the computer system must have
the following:

o A motherboard capable of handling multiple processors.
o Processors capable of being used in a multiprocessing system.

Advantages of multiprocessing operating systems:


o Increased reliability: Processing tasks can be distributed among several processors.
This increases reliability: if one processor fails, the task can be given to another
processor for completion.
o Increased throughput: As more processors are added, more work can be done in less
time.
o Economy of scale: As multiprocessor systems share peripherals, secondary storage
devices, and power supplies, they are relatively cheaper than single-processor systems.

Disadvantages of multiprocessing operating systems:

o A multiprocessing operating system is more complex and sophisticated, as it manages
multiple CPUs at the same time.

Types of multiprocessing systems

o Symmetrical multiprocessing operating system


o Asymmetric multiprocessing operating system

Symmetrical multiprocessing operating system:

• In a symmetrical multiprocessing system, each processor executes the same copy of the
operating system, makes its own decisions, and cooperates with the other processors to
keep the entire system functioning smoothly.
• The CPU scheduling policies are very simple.
• Any new job submitted by a user can be assigned to whichever processor is least
burdened. This results in a system in which all processors are roughly equally burdened
at any time.
• The symmetric multiprocessing operating system is also known as a "shared everything"
system, because the processors share memory and the input/output bus or data path.
• Such systems do not usually have more than 16 processors.

Characteristics of symmetrical multiprocessing operating systems:

o In this system, any processor can run any job or process.


o Any processor can initiate an input/output operation.
Advantages of Symmetrical multiprocessing operating system:

o These systems are fault-tolerant. Failure of a few processors does not bring the entire
system to a halt.

Disadvantages of Symmetrical multiprocessing operating system:

o It is very difficult to balance the workload among processors rationally.


o Specialized synchronization schemes are necessary for managing multiple processors.

Asymmetric multiprocessing operating system

• In an asymmetric multiprocessing system, there is a master-slave relationship between
the processors.
• One processor acts as the master (supervisor) processor, while the others are treated as
slave processors.

In such a system, one CPU acts as the supervisor, whose function is to control the other
processors.

In this type of system, each processor is assigned a specific task, and there is a designated
master processor that controls the activities of the other processors.

For example, a math co-processor can handle mathematical jobs better than the main CPU,
an MMX processor is built to handle multimedia-related jobs, and a graphics processor
handles graphics-related jobs better than the main processor. When a user submits a new job,
the OS decides which processor can perform it best, and that processor is assigned the newly
arrived job. The master processor controls the system; all other processors either look to the
master for instructions or have predefined tasks. It is the responsibility of the master to
allocate work to the other processors.

Advantages of asymmetric multiprocessing operating systems:


o Execution of an input/output operation or an application program may be faster in
some situations, because many processors may be available for a single job.

Disadvantages of asymmetric multiprocessing operating systems:

o The processors are unequally burdened: one processor may have a long job queue
while another sits idle.
o If the processor handling a specific task fails, the entire system goes down.

Architecture of Operating System

Overview

 The operating system provides an environment for the users to execute computer
programs.
 Operating systems come pre-installed on the computers you buy: for example, personal
computers run Windows, Linux, or macOS; mainframe computers run z/OS, z/VM, etc.;
and mobile phones run operating systems such as Android and iOS.
 The architecture of an operating system consists of four major components: hardware,
kernel, shell, and application. We shall explore each of them in detail, one by one.

Scope

 The operating system acts as an intermediary between the users and the hardware.


 Components of the operating system include process management, memory management,
security, error detection, and I/O management.
 There are four major architectures of operating systems: monolithic, layered,
microkernel, and hybrid.
 The hybrid architecture combines elements of all the previously mentioned
architectures.

Architecture of an Operating System

Highlights:

 The operating system provides an environment in which users can execute programs.

 The kernel is the most central part of the operating system.

 Software running on any operating system can be system software or application
software.
 The operating system, as we know, is an intermediary, and its functionalities include file
management, memory management, process management, handling input and output, and
managing peripheral devices as well.

Functions of an operating system: controlling peripheral devices, job accounting, file
management, error detection, security, I/O management, processor management, and
coordination between other software and users.

 The operating system handles all of the above tasks for the system as well as application
software.
 The architecture of an operating system is basically the design of its software and
hardware components.
 Depending upon the tasks or programs they need to run, users can choose the operating
system most suitable for that program/software.

The layering is as follows: users (User 1, User 2) interact with software (system software
and application software), which runs on the operating system, which in turn controls the
hardware (CPU, RAM, I/O).

Before explaining the various architectures of operating systems, let's first explore a few
terms that are part of the operating system.
1) Application:

 The application represents the software that a user runs on an operating system; it can
be either system software or application software.
 E.g., Slack, the Sublime Text editor, etc.

2) Shell:

 The shell represents software that provides an interface for the user, serving to launch
or start programs for which the user gives instructions.
 It can be of two types: command-line and graphical user interface.
 E.g., MS-DOS Shell, PowerShell, csh, ksh, etc.

3) Kernel:

 The kernel represents the most central and crucial part of the operating system, and it is
responsible for resource management.
 That is, it provides the necessary I/O, processor, and memory to application processes
through inter-process communication mechanisms and system calls.
 Let's understand the various types of architectures of the operating system.

Types of Architectures of Operating System

Highlights:

 Architectures of operating systems can be of four types: monolithic, layered, microkernel,


and hybrid.
 Hybrid architecture is a combination of the other three architectures.

1) Monolithic Architecture
 In monolithic architecture, each component of the operating system is contained
in the kernel.
 That is, everything works in kernel space, and the components of the operating system
communicate with each other using function calls.
Applications

_______________________________________________________User Space________
Kernel space
System call interfaces

kernel
MM IPC PS

FS I/O Net

MM - Memory management

IPC- Inter process communication

PS- Process scheduling

FS- File system

I/O- input output management

Net- Network management

Examples of this type of architecture are OS/360, VMS, and Linux.

Advantages:

1. The main advantage of the monolithic architecture is that it provides CPU scheduling,
memory management, file management, and other operating-system services through
system calls (see the sketch below).
2. The entire large process runs in a single address space.
3. It is a single static binary file.
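To make the idea concrete, here is a minimal sketch in C of a user program crossing into
kernel space through a system call (assuming a POSIX system; these notes do not prescribe
any particular OS or API):

#include <unistd.h>   /* POSIX write() system call */

int main(void) {
    /* The process runs in user space; write() traps into the kernel,
       which performs the I/O on the process's behalf. */
    const char msg[] = "hello from user space\n";
    write(1, msg, sizeof msg - 1);   /* fd 1 = standard output */
    return 0;
}

Everything after the trap (file-system bookkeeping, device I/O) happens inside the single
kernel address space in a monolithic design.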

Disadvantages:

1. The main disadvantage is that all components are interdependent and when one of them
fails the entire system fails.
2. In case the user has to add a new service or functionality the entire operating system
needs to be changed.

2) Layered architecture

 In layered architecture, components with similar functionalities are grouped to form a


layer.
 In this way, a total of n+1 layers are constructed, numbered 0 to n, where each
layer has a different set of functionalities and services.
 Example: the THE operating system; Windows XP and Linux also implement some level
of layering.

The layers are implemented according to the following rule:

1. Each layer can communicate with all of its lower layers but not with its upper layers, i.e.,
any ith layer can communicate with all layers from 0 to i-1 but not with the (i+1)th layer.
2. Each layer is designed in such a way that it only needs the functionalities that are
present in itself or in the layers below it.

There are 6 layers in layered architecture as shown below:

User program User program User program

System services

File systems

Memory and I/O device management

Process scheduling

Hardware

Let's explain the layers one by one

1) Hardware:

 This layer is the lowest layer in the layered operating system architecture.
 This layer is responsible for the coordination with peripheral devices such
as keyboards, mice, scanners etc.

2) CPU scheduling:

 This layer is responsible for process scheduling; multiple queues are used for scheduling.
 Processes entering the system are kept in the job queue, while those that are ready to be
executed are put into the ready queue.
 It manages which processes are to be kept in the CPU and which are to be kept
out of the CPU.

3) Memory Management:
 This layer handles the aspect of memory management, i.e., moving processes from
secondary to primary memory for execution and vice versa.
 There are memories like RAM and ROM.
 RAM is the memory where our processes run: they are moved into RAM for execution,
and when they exit, they are removed from RAM.

4) Process Management:

 This layer is responsible for managing the various processes i.e. assigning the CPU to
those processes on a priority basis for their execution.
 Process management uses many scheduling algorithms for prioritizing the processes for
execution such as the Round-Robin algorithm, FCFS(First Come First
Serve), SJF(Shortest Job First), etc.

5) I/O Buffer:

 Buffering is the temporary storage of data and I/O Buffer means that the data input is first
buffered before storing it in the secondary memory.
 All I/O devices have buffers attached to them for the temporary storage of the input data
because it cannot be stored directly in the secondary storage as the speed of
the I/O devices is slow as compared to the processor.

6) User Programs:

 This is the application layer of the layered architecture of the operating system, it deals
with all the application programs running.
 E.g., games, browsers, word processors, etc. It is the highest layer of the layered
architecture.

Advantages:

1) Layered architecture of the operating system provides modularity because each layer is
programmed to perform its own tasks only.
2) Since the layered architecture has independent components, changing or updating one of
them will not affect the other components, and the entire operating system will not stop
working; hence it is easy to debug and update.
3) The user can access the services of the hardware layer but cannot access the hardware
layer itself because it is the innermost layer.
4) Each layer has its own functionalities and it is concerned with itself only and other layers
are abstracted from it.

Disadvantages:

1) Layered architecture is complex in implementation because one layer may use the
services of the other layer and therefore, the layer using the services of another layer must
be put below the other one.
2) In a layered architecture, if one layer wants to communicate with another, it has to send a
request that goes through all the layers in between, which increases response time and
causes inefficiency in the system.

3) Microkernel Architecture

 In this architecture, the components like process management, networking, file system
interaction, and device management are executed outside the kernel while memory
management and synchronization are executed inside the kernel.
 The processes inside the kernel have relatively high priority, and the components
possess high modularity; hence, even if one or more components fail, the operating
system keeps working.
In this design, applications call into the system through the system call interface; the file
system, process scheduling, and device manager run outside the kernel and communicate via
IPC, while memory management and synchronization remain inside the kernel.

Example: Linux and Windows XP contain Modular components.

Advantages:

1. Microkernel operating systems are modular and hence, disturbing one of the components
will not affect the other component.
2. The architecture is compact and isolated and hence relatively efficient.
3. New features can be added without recompilation.

Disadvantages:

1. Implementing drivers as procedures requires a function call or context switch.


2. In microkernel architecture, providing services is costlier than monolithic operating
systems.

4) Hybrid Architecture

 Hybrid architecture as the name suggests consists of a hybrid of all the architectures
explained so far and hence it has properties of all of those architectures which makes it
highly useful in present-day operating systems.
The hybrid architecture consists of three layers:

1) Hardware abstraction layer: It is the interface between the kernel and hardware and is
present at the lowest level.

2) Microkernel Layer: This is the old microkernel that we know and it consists of CPU
scheduling, memory management, and inter-process communication.

3) Application Layer: It acts as an interface between the user and the microkernel. It contains
the functionalities like a file server, error detection, I/O device management, etc.

Layer 3: Application layer - file server, I/O management, error detection
Layer 2: Microkernel layer - process communication, scheduling, and memory management
Layer 1: Hardware abstraction layer

Example: The Microsoft Windows NT kernel implements a hybrid architecture of the
operating system.

Advantages:

1. Since it is a hybrid of other architectures, each constituent architecture can provide its
services.
2. It is easy to manage because it uses a layered approach.
3. The number of layers is relatively small.
4. Security and protection are relatively improved.

Disadvantage:

1. Hybrid architecture of the operating system keeps certain services in the kernel space
while moving less critical services to the user space.

Conclusion

 We conclude that the operating system has various architectures with which we can
describe the functionality of various components.
 The components of the operating system are process management, memory
management, I/O management, Error Detection, and controlling peripheral devices.
 These architectures include monolithic, layered, microkernel, and hybrid architectures
classified on the basis of the structure of components.
 Hybrid architecture is the most efficient and useful architecture as it implements the
functionalities of all other architectures.
 Hybrid architecture is better in terms of security as well.

Frequently Asked Questions

1) How is the Hybrid Architecture of Operating Systems Better Than Other Architectures?

The hybrid architecture of operating systems, as we know, is a hybrid of other architectures
(monolithic, layered, and microkernel), so it has the functionalities of all of them. Since
hybrid architecture contains all the functionalities of monolithic, layered, and microkernel
architectures, it is better than all of them.

2) What are the Key Differences Between Monolithic and Layered Architecture of
Operating Systems?

The key differences between the monolithic and layered architecture of the operating system are:

1. In the monolithic operating system, the entire operating system's functionality operates in
the kernel space, while in layered architecture there are several layers, where each layer
has a specific set of functionalities.
2. In the monolithic operating system there are mainly three layers, while in a layered
architecture there are multiple layers.

The Architecture of Types of Operating Systems

 The operating systems control the hardware resources of a computer.

 The kernel and shell are the parts of the operating system that perform essential
operations.

 When a user gives commands for performing any operation, the request goes to the shell
part, which is also known as an interpreter.

 The shell part then translates the human program into machine code and then transfers the
request to the kernel part.

 When the kernel receives the request from the shell, it processes the request and displays
the result on the screen.

 The kernel is also known as the heart of the operating system as every operation is
performed by it.
Shell

 The shell is a part of the software which is placed between the user and the kernel, and it
provides services of the kernel.

 The shell thus acts as an interpreter to convert the commands from the user to the
machine code.

 Shells present in different types of operating systems are of two types: command-line
shells and graphical shells.

 The command-line shells provide a command-line interface, while graphical shells
provide a graphical user interface.

 Though both types of shells perform operations, the graphical user interface shells
perform more slowly than the command-line interface shells.

Types of shells

 Korn shell
 Bourne shell
 C shell
 POSIX shell

Kernel

 The kernel is a part of the software.

 It is like a bridge between the shell and hardware.

 It is responsible for running programs and providing secure access to the machine’s
hardware.

 The kernel is used for scheduling, i.e., it maintains a time table for all processes.

Types of kernels:

 Monolithic kernel
 Microkernels
 Exokernels
 Hybrid kernels

Computer Operating System Functions


An operating system performs the following functions:
 Memory management
 Task or process management
 Storage management
 Device or input/output management
 Kernel or scheduling

Memory Management

 Memory management is the process of managing computer memory.


 Computer memories are of two types: primary and secondary memory.
 The memory portion for programs and software is allocated, and the memory space is
released when it is no longer needed.
Operating System Memory Management

 Memory management is important for the operating system involved in multitasking


wherein the OS requires switching of memory space from one process to another.
 Every single program requires some memory space for its execution, which is provided
by the memory management unit.
 A system deals with two types of memory: virtual memory and physical memory.
 Physical memory is the RAM itself, while virtual memory gives each process its own
address space, parts of which may be kept on the hard disk.
 An operating system manages the virtual memory address spaces and the assignment of
real memory to virtual memory addresses.
 Before executing instructions, the CPU sends the virtual address to the memory
management unit (MMU). Subsequently, the MMU sends the physical address to the real
memory, and then the real memory allocates space for the programs or data.
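As a toy illustration of that translation step, here is a sketch in C (the page size, table size,
and frame numbers are made-up values for illustration, not from these notes): the virtual
address is split into a page number and an offset, and the page table maps the page to a
physical frame.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u                 /* assumed 4 KiB pages */

/* Toy page table: page_table[vpn] = physical frame number */
static uint32_t page_table[16] = { 3, 7, 1, 9 };

uint32_t translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr / PAGE_SIZE;    /* virtual page number */
    uint32_t offset = vaddr % PAGE_SIZE;    /* offset within the page */
    uint32_t frame  = page_table[vpn];      /* the MMU's lookup */
    return frame * PAGE_SIZE + offset;      /* physical address */
}

int main(void) {
    printf("virtual 0x1234 -> physical 0x%x\n", translate(0x1234));
    return 0;
}

A real MMU performs this lookup in hardware, with a TLB caching recent translations.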

Task or Process Management

 A process is an instance of a program that is being executed, and process management
deals with such processes.


 A process consists of a number of elements, such as an identifier, a program counter, a
memory pointer, context data, and so on.
 The process is actually an execution of those instructions.

Process Management

 There are two types of process methods: single process and multitasking method.

 The single process method deals with a single application running at a time. The
multitasking method allows multiple processes at a time.
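As a minimal sketch of process creation (using the POSIX fork() call as an illustration; the
notes do not name a specific API), a parent process can spawn and wait for a child like this:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                /* create a new process */
    if (pid == 0) {
        printf("child: pid = %d\n", (int)getpid());
    } else if (pid > 0) {
        waitpid(pid, NULL, 0);         /* parent waits for the child */
        printf("parent: child %d finished\n", (int)pid);
    } else {
        perror("fork");                /* creation failed */
    }
    return 0;
}

With multitasking, the OS scheduler interleaves the parent, the child, and any other ready
processes on the CPU.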

Storage Management

 Storage management is a function of the operating system that handles the memory


allocation of data.
 The system consists of different types of memory devices, such as primary storage
memory (RAM), secondary storage memory (hard disk), and cache storage memory.

 Instructions and data are placed in the primary storage or cache memory, which is
referenced by the running program.

 However, the data is lost when the power supply is cut off. The secondary memory is a
permanent storage device.

 The operating system allocates a storage place when new files are created and the request
for memory access is scheduled.

Device or Input/output Management

 In computer architecture, the combination of the CPU and main memory is the brain of
the computer, and it communicates with the outside world through the input and output
resources.

 Humans interact with the machines by providing information through I/O devices.

 The display, keyboard, printer, and mouse are I/O devices.

 The management of all these devices affects the throughput of a system; therefore, the
input and output management of the system is a primary responsibility of the operating
system.
Scheduling

 Scheduling by an operating system is a process of controlling and prioritizing the


messages sent to a processor.

 The operating system maintains a constant amount of work for the processor and thus
balances the workload. As a result, each process is completed within a stipulated time
frame.

Hence, scheduling is very important in real-time systems. The schedulers are mainly of three
types:

 Long-term scheduler


 Short-term scheduler
 Medium-term scheduler

Operating System Structure

What is the Operating System Structure?

 An operating system has a complex structure, so we need a well-defined structure to


assist us in applying it to our unique requirements.
 Just as we break down a big problem into smaller, easier-to-solve subproblems,
designing an operating system in parts is a simpler approach.
 Each section is an operating system component.
 The approach of interconnecting and integrating multiple operating system components
into the kernel can be described as an operating system structure.
 As mentioned below, various sorts of structures are used to implement operating
systems.

Simple Structure

 It is the simplest Operating System Structure and is not well defined; It can only be used
for small and limited systems.
 In this structure, the interfaces and levels of functionality are not well separated; hence
programs can access I/O routines, which can cause unauthorized access to I/O routines.

This structure is implemented in MS-DOS operating system:

 The MS-DOS operating System is made up of various layers, each with its own set of
functions.
 These layers are:
o Application Program
o System Program
o MS-DOS device drivers
o ROM BIOS device drivers
 Layering has an advantage in the MS-DOS operating system since all the levels can be
defined separately and can interact with each other when needed.
 It is easier to design, maintain, and update the system if it is made in layers. So that's why
limited systems with less complexity can be constructed easily using Simple Structure.
 If one user program fails, the entire operating system crashes.
 The abstraction level in MS-DOS systems is low, so programs and I/O routines are
visible to the end-user, so the user can have unauthorized access.

Layering in simple structure is shown below:

Application Program

System Programs

MS-DOS Device Drivers

ROM BIOS Device Drivers


Advantages of Simple Structure

 It is easy to develop because of the limited number of interfaces and layers.


 Offers good performance due to fewer layers between hardware and applications.

Disadvantages of Simple Structure

 If one user program fails, the entire operating system crashes.


 Abstraction or data hiding is not present, as layers are connected and communicate with
each other.
 Layers can access the processes going on in the operating system, which can lead to data
modification and can cause the operating system to crash.

Monolithic Structure

 In a monolithic operating system, the kernel acts as a manager, managing all things like
file management, memory management, device management, and the operational
processes of the operating system.
 The kernel is the heart of a computer operating system (OS).
 The kernel delivers basic services to all other elements of the system.
 It serves as the primary interface between the operating system and the hardware.
 In monolithic systems, the kernel can directly access all the resources of the operating
system, such as the physical hardware, e.g., keyboard, mouse, etc.
 The monolithic kernel is another name for the monolithic operating system.
 Batch processing and time-sharing maximize the usability of a processor by
multiprogramming.
 The monolithic kernel functions as a virtual machine by working on top of the Operating
System and controlling all hardware components.
 This is an outdated operating system that was used in banks to
accomplish minor activities such as batch processing and time-sharing, which enables
many people at various terminals to access the Operating System.

A diagram of the monolithic structure: applications run in unprivileged mode, while the
monolithic kernel (file system, network subsystem, memory management, process
management, and drivers) runs in privileged mode.
Advantages of Monolithic structure:

 It is simple to design and implement because all operations are managed by kernel only,
and layering is not needed.
 As services such as memory management, file management, process scheduling, etc., are
implemented in the same address space, execution of the monolithic kernel is relatively
fast compared to normal systems. Using the same address space saves the time needed
for address allocation for new processes and makes the kernel faster.

Disadvantages of Monolithic structure:

 If any service in the monolithic kernel fails, the entire System fails because, in address
space, the services are connected to each other and affect each other.
 It is not flexible: to introduce a new service or functionality, the entire operating system
needs to be changed.
Layered Approach

 In this type of structure, OS is divided into layers or levels.


 The hardware is on the bottom layer (layer 0), while the user interface is on the top
layer (layer N).
 These layers are arranged in a hierarchical way in which the top-level layers use the
functionalities of their lower-level levels.
 In this approach, functionalities of each layer are isolated, and abstraction is also
available.
 In a layered structure, debugging is easier: since it is a hierarchical model, all lower-level
layers are debugged first, and then the upper layer is checked.
 So all the lower layers are already checked, and only the current layer is to be checked.


Advantages of Layered Structure

 Each layer has its functionalities, so work tasks are isolated, and abstraction is present up
to some level.
 Debugging is easier as lower layers are debugged, and then upper layers are checked.

Disadvantages of Layered Structure

 In Layered Structure, layering causes degradation in performance.


 It takes careful planning to construct the layers since higher layers only utilize the
functions of lower layers.

Micro-kernel

 The micro-kernel structure designs the operating system by removing all non-essential


components from the kernel.
 These non-essential components are implemented as system and user programs, and the
stripped-down kernel that remains is called the micro-kernel.
 Each Micro-Kernel is made independently and is isolated from other Micro-Kernels.
 So this makes the system more secure and reliable.
 If any Micro-Kernel fails, then the remaining operating System remains untouched and
works fine.

Application layer: applications, device drivers, the Unix server, and the file server
communicate through IPC, while the micro-kernel beneath them provides basic IPC, virtual
memory, and scheduling.

Advantages of Micro-kernel structure:

 It allows the operating system to be portable between platforms.


 As each Micro-Kernel is isolated, it is safe and trustworthy.
 Because Micro-Kernels are smaller, they can be successfully tested.
 If any component or Micro-Kernel fails, the remaining operating System is unaffected
and continues to function normally.

Disadvantages of Micro-kernel structure:

 Increased inter-module communication reduces system performance.


 The system is complex to construct.
Conclusion

 The operating System enables the end-user to interact with the computer hardware.
System software is installed and utilized on top of the operating system.
 We can define operating system structure as to how different components of the
operating system are interconnected.
 There are many structures of the operating system:
o Simple Structure
o Monolithic Approach
o Layered Approach
o Micro-kernels
 All these approaches or structures evolved from time to time, making the OS more and
more improved than before.


Operating System Design Issues (efficiency, robustness, flexibility, portability, security,
compatibility)
Efficiency:

 An OS allows the computer system resources to be used efficiently.


 Ability to Evolve: An OS should be constructed in such a way as to permit the effective
development, testing, and introduction of new system functions at the same time without
interfering with service.
 Most I/O devices are slow compared to main memory (and the CPU).
 Use of multiprogramming allows some processes to wait on I/O while another process
executes.
 Often, I/O still cannot keep up with processor speed.
 Swapping may be used to bring in additional Ready processes, at the cost of more I/O
operations.
✓ Optimize I/O efficiency, especially disk and network I/O.
✓ The quest for generality/uniformity:

o Ideally, handle all I/O devices in the same way, both in the OS and in user applications.
o Problem: the diversity of I/O devices; especially, different access methods (random
access versus stream-based) as well as vastly different data rates.
o Generality often compromises efficiency!
o Hide most of the details of device I/O in lower-level routines so that processes and
upper levels see devices in general terms such as read, write, open, close, lock, unlock.

Robustness

 In computer science, robustness is the ability of a computer system to cope with errors
during execution and cope with erroneous input.
 Robustness can encompass many areas of computer science, such as robust
programming, robust machine learning, and Robust Security Network.
 Formal techniques, such as fuzz testing, are essential to showing robustness since this
type of testing involves invalid or unexpected inputs.
 Alternatively, fault injection can be used to test robustness.
 Various commercial products perform robustness testing of software analysis.
 A distributed system may suffer from various types of hardware failure.
 The failure of a link, the failure of a site, and the loss of a message are the most common
types. To ensure that the system is robust, we must detect any of these failures,
reconfigure the system so that computation can continue, and recover when a site or a
link is repaired.

Portability

 Portability is the ability of an application to run properly in a different platform to the one
it was designed for, with little or no modification.
 Portability in high-level computer programming is the usability of the same software in
different environments.
 When software with the same functionality is produced for several computing platforms,
portability is the key issue for development cost reduction.

Compatibility

 Compatibility is the capacity for two systems to work together without having to be
altered to do so.
 Compatible software applications use the same data formats. For example, if word
processor applications are compatible, the user should be able to open their document
files in either product.
 Compatibility issues come up when users are using the same type of software for a task,
such as word processors, that cannot communicate with each other.
 This could be due to a difference in their versions or because they are made by different
companies.
 The huge variety of application software available and all the versions of the same
software mean there are bound to be compatibility issues, even when people are using the
same kind of software.
Flexibility

 Flexible operating systems are taken to be those whose designs have been motivated to
some degree by the desire to allow the system to be tailored, either statically or
dynamically, to the requirements of specific applications or application domains.

Process Synchronization in OS
What is Process Synchronization in OS?

 An operating system is software that manages all applications on a device and basically
helps in the smooth functioning of our computer.
 For this reason, the operating system has to perform many tasks, sometimes
simultaneously.
 This isn't usually a problem unless these simultaneously occurring processes use a
common resource.
 For example, consider a bank that stores the account balance of each customer in the
same database. Now suppose you initially have x rupees in your account. Now, you take
out some amount of money from your bank account, and at the same time, someone tries
to look at the amount of money stored in your account. As you are taking out some
money from your account, after the transaction, the total balance left will be lower than x.
But, the transaction takes time, and hence the person reads x as your account balance
which leads to inconsistent data. If in some way, we could make sure that only one
process occurs at a time, we could ensure consistent data.

Consider two processes acting on the same account record: one user checks the balance
while the other deducts Rs. x, so the current balance reported (Rs. y) may be wrong.

 In the above image, if Process1 and Process2 happen at the same time, user 2 will get the
wrong account balance as Y because of Process1 being transacted when the balance is X.
 Inconsistency of data can occur when various processes share a common resource in a
system which is why there is a need for process synchronization in the operating system.

How Does Process Synchronization in OS Work?

 Let us take a look at why exactly we need Process Synchronization.


 For example, If a process1 is trying to read the data present in a memory location while
another process2 is trying to change the data present at the same location, there is a high
chance that the data read by the process1 will be incorrect.
Here, Process 1 and Process 3 read from a shared memory region while Process 2 writes to
it.

Let us look at different elements/sections of a program:

 Entry Section: The entry section decides the entry of a process into the critical section.


 Critical Section: The critical section allows and ensures that only one process is
modifying the shared data at a time.
 Exit Section: The exit section handles the entry of other processes to the shared data
after one process finishes its execution.
 Remainder Section: The remaining part of the code, which is not categorized as above,
is contained in the remainder section.

Race Condition

 When more than one process is either running the same code or modifying the same
memory or any shared data, there is a risk that the result or value of the shared data may
be incorrect because all processes try to access and modify this shared resource.
 Thus, all the processes race to say that my result is correct.
 This condition is called the race condition.
 Since many processes use the same data, the results of the processes may depend on the
order of their execution.

This is mostly a situation that can arise within the critical section. In the critical section, a race
condition occurs when the end result of multiple thread executions varies depending on the
sequence in which the threads execute.

But how do we avoid this race condition? There is a simple solution:

 treat the critical section as a section that can be accessed by only a single process at
a time. This kind of section is called an atomic section.
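To see the race concretely, here is a minimal sketch in C with POSIX threads (an
illustration, not part of the original notes): two threads increment a shared counter with no
synchronization, so the read-modify-write steps interleave and updates get lost.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;            /* shared data */

static void *worker(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                  /* load, add, store: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but lost updates usually leave it smaller. */
    printf("counter = %ld\n", counter);
    return 0;
}

Making the increment part of an atomic (critical) section, for example with the locks
discussed below, removes the race.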

What is the Critical Section Problem?

Why do we need to have a critical section? What problems occur if we remove it?

 A part of code that can only be accessed by a single process at any moment is known as
a critical section.
 This means that when a lot of programs want to access and change a single shared data,
only one process will be allowed to change at any given moment.
 The other processes have to wait until the data is free to be used.
The wait() function mainly handles the entry to the critical section, while the signal() function
handles the exit from the critical section. If we remove the critical section, we cannot
guarantee the consistency of the end outcome after all the processes finish executing
simultaneously.

We'll look at some solutions to Critical Section Problem but before we move on to that, let us
take a look at what conditions are necessary for a solution to Critical Section Problem.

Requirements of Synchronization

The following three requirements must be met by a solution to the critical section problem:

 Mutual exclusion: If a process is running in the critical section, no other process should
be allowed to run in that section at that time.
 Progress: If no process is in the critical section and other processes are waiting
outside the critical section to execute, then one of those processes must be permitted to
enter the critical section. The decision of which process will enter the critical section is
taken only by processes that are not executing in their remainder section.
 No starvation: Starvation means a process keeps waiting forever to access the critical
section but never gets a chance. No starvation is also known as Bounded Waiting.
o A process should not wait forever to enter inside the critical section.
o When a process submits a request to access its critical section, there should be a
limit or bound, which is the number of other processes that are allowed to access
the critical section before it.
o After this bound is reached, this process should be allowed to access the critical
section.

Let us now discuss some of the solutions to the Critical Section Problem.

Solutions To The Critical Section Problem

Peterson's solution

 Peterson's approach to critical section problems is extensively utilized. It is a classical


software-based solution.
 The solution is based on the idea that when a process is executing in a critical section,
then the other process executes the rest of the code and vice-versa is also possible, i.e.,
this solution makes sure that only one process executes the critical section at any point in
time.

In Peterson's solution, we have two shared variables that are used by the processes:

 A boolean Flag[]: a boolean array, initialized to FALSE, where Flag[i] represents
whether process i wants to enter the critical section.
 int Turn: an integer variable that indicates which process's turn it is to enter
the critical section.
do {
    // Process Pi wants to enter the critical section:
    // raise its own flag and yield the turn to the other process Pj
    Flag[i] = True;
    Turn = j;

    // Wait while Pj also wants to enter and it is Pj's turn
    while (Flag[j] && Turn == j);

    { Critical Section };

    // Pi is done; another process can now enter the critical section
    Flag[i] = False;

    Remainder Section

} while (True);

Some disadvantages of Peterson's solution are:

 Peterson's solution involves busy waiting.


 The solution is also limited to only 2 processes.

Synchronization Hardware

 Hardware can occasionally assist in solving critical section problems. Some operating
systems provide a lock feature.
 When a process enters a critical section, it acquires a lock, which it must release
before it can exit the critical section.
 As a result, additional processes are unable to access the critical section if any one
process is already using it. The lock can have either of two values, 0 or 1.
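Hardware support typically takes the form of an atomic test-and-set instruction. Here is a
minimal spin-lock sketch in C11 (an illustration assuming <stdatomic.h>; the notes do not
name a specific instruction): atomic_flag_test_and_set atomically sets the lock to 1 and
returns its previous value, so exactly one process can acquire a free lock.

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;    /* 0 = free, 1 = held */

void acquire(void) {
    /* Atomically set the flag; keep spinning while it was already set. */
    while (atomic_flag_test_and_set(&lock))
        ;                               /* busy-wait */
}

void release(void) {
    atomic_flag_clear(&lock);           /* set the lock back to 0 */
}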

Mutex Locks

 Implementing synchronization in hardware is not easy, which is why mutex
locks were introduced.
 Mutex is a locking mechanism used to synchronize access to a resource in the critical
section. In this method, we use a LOCK over the critical section.
 The LOCK is set when a process enters from the entry section, and it gets unset when the
process exits from the exit section.
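As a concrete sketch of that LOCK (using POSIX threads as one common implementation;
the notes do not name a specific API):

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
long shared = 0;                /* data protected by the mutex */

void *worker(void *arg) {
    pthread_mutex_lock(&m);     /* entry section: the LOCK is set */
    shared++;                   /* critical section: one thread at a time */
    pthread_mutex_unlock(&m);   /* exit section: the LOCK is unset */
    return NULL;
}

Only the thread that locked the mutex may unlock it, which is exactly the property the
semaphore section below contrasts against.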

Semaphores
 A semaphore is a signaling mechanism, and a process can signal a process that is waiting
on a semaphore.
 This differs from a mutex in that the mutex can only be notified by the process that sets
the shared lock.
 Semaphores make use of the wait() and signal() functions for synchronization among the
processes.

There are two kinds of semaphores:

Binary Semaphores

Binary Semaphores can only have one of two values: 0 or 1. Because of their capacity to ensure
mutual exclusion, they are also known as mutex locks.

 A single binary semaphore is shared between multiple processes.


 When the semaphore is set to 1, it means some process is working on its critical section,
and other processes need to wait, and if the semaphore is set to 0, that means any process
can enter the critical section.

Hence, whenever the binary semaphore is set to 0, any process can then enter its critical section
by setting the binary semaphore to 1. When it has completed its critical section, it can reset the
binary semaphore to 0, enabling another process to enter it.

Counting Semaphores

Counting Semaphores can have any value and are not limited to a certain area. They can be used
to restrict access to a resource that has a concurrent access limit.

Initially, the counting semaphores are set to the maximum amount of processes that can access
the resource at a time. Hence, the counting semaphore indicates that a process can access the
resource if it has a value greater than 0. If it is set to 0, no other process can access the resource.
Hence,

 When a process wants to use that resource, it first checks to see if the value of the
counting semaphore is more than zero.
 If yes, the process can then proceed to access the resource, which involves reducing the
value of the counting semaphore by one.
 When the process completes its critical section code, it can increase the value of the
counting semaphore, making way for some other process to access it.

The snippet code for a semaphore would look something like this:

WAIT(SE):
    while (SE <= 0);    // busy-wait until the semaphore is positive
    SE = SE - 1;

SIGNAL(SE):
    SE = SE + 1;

We use the functions WAIT() and SIGNAL() to control the semaphore.
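For a runnable illustration, here is a counting-semaphore sketch using the POSIX semaphore
API (sem_init, sem_wait, sem_post; chosen for illustration, since the notes describe WAIT
and SIGNAL abstractly). The semaphore starts at 2, so at most two threads may use the
resource concurrently:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t se;                           /* counting semaphore */

void *worker(void *arg) {
    sem_wait(&se);                  /* WAIT: block while SE <= 0, then SE-- */
    printf("thread %ld using the resource\n", (long)arg);
    sem_post(&se);                  /* SIGNAL: SE++, possibly waking a waiter */
    return NULL;
}

int main(void) {
    pthread_t t[4];
    sem_init(&se, 0, 2);            /* up to 2 concurrent accesses */
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&se);
    return 0;
}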


Conclusion

 Synchronization is the effort of executing processes such that no two processes
simultaneously access the same shared data.
 The four elements/sections of a program are:
o Entry section
o Critical section
o Exit section
o Remainder section
 The critical section is a portion of code that a single process can access at a specified
moment in time.
 Three essential rules that any critical section solution must follow are as follows:
o Mutual Exclusion
o Progress
o No Starvation(Bounded waiting)
 Solutions to critical section problem are:
o Peterson's solution
o Synchronization hardware
o Mutex Locks
o Semaphore

Operating System - Process Scheduling

Definition

 Process scheduling is the activity of the process manager that handles the removal of
the running process from the CPU and the selection of another process on the basis of a
particular strategy.
 Process scheduling is an essential part of multiprogramming operating systems.
 Such operating systems allow more than one process to be loaded into executable
memory at a time, and the loaded processes share the CPU using time multiplexing.

Categories of Scheduling

There are two categories of scheduling:

1. Non-preemptive: Here the resource can’t be taken from a process until the process
completes execution. The switching of resources occurs when the running process
terminates and moves to a waiting state.
2. Preemptive: Here the OS allocates the resources to a process for a fixed amount of time.
During resource allocation, a process switches from the running state to the ready state or
from the waiting state to the ready state. This switching occurs because the CPU may give
priority to other processes, replacing the running process with a higher-priority one.

Process Scheduling Queues

 The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues.
 The OS maintains a separate queue for each of the process states and PCBs of all
processes in the same execution state are placed in the same queue.
 When the state of a process is changed, its PCB is unlinked from its current queue and
moved to its new state queue.
The Operating System maintains the following important process scheduling queues −
 Job queue − This queue keeps all the processes in the system.
 Ready queue − This queue keeps a set of all processes residing in main memory, ready
and waiting to execute. A new process is always put in this queue.
 Device queues − The processes which are blocked due to unavailability of an I/O device
constitute this queue.

 The OS can use different policies to manage each queue (FIFO, Round Robin, Priority,
etc.).
 The OS scheduler determines how to move processes between the ready and run queues,
which can have only one entry per processor core on the system.

Two-State Process Model

Two-state process model refers to the running and non-running states, which are described
below:

1. Running: When a new process is created, it enters the system in the running state.

2. Not Running: Processes that are not running are kept in a queue, waiting for their turn
to execute. Each entry in the queue is a pointer to a particular process. The queue is
implemented using a linked list. The dispatcher works as follows: when a process is
interrupted, it is transferred to the waiting queue; if the process has completed or
aborted, it is discarded. In either case, the dispatcher then selects a process from the
queue to execute.
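Since the notes say the queue is implemented using a linked list, here is a minimal FIFO
ready-queue sketch in C (the pcb fields are illustrative placeholders, not from the notes):

#include <stdlib.h>

struct pcb {                     /* minimal process control block */
    int pid;
    struct pcb *next;
};

struct queue {
    struct pcb *head, *tail;
};

/* Enqueue at the tail: a process that becomes ready joins the queue. */
void enqueue(struct queue *q, struct pcb *p) {
    p->next = NULL;
    if (q->tail) q->tail->next = p;
    else         q->head = p;
    q->tail = p;
}

/* Dequeue from the head: the dispatcher selects the next process. */
struct pcb *dequeue(struct queue *q) {
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;
}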

Schedulers

Schedulers are special system software which handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to run.
Schedulers are of three types −

 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler

Long Term Scheduler

 It is also called a job scheduler. A long-term scheduler determines which programs are
admitted to the system for processing.
 It selects processes from the queue and loads them into memory for execution. Process
loads into the memory for CPU scheduling.
 The primary objective of the job scheduler is to provide a balanced mix of jobs, such as
I/O bound and processor bound.
 It also controls the degree of multiprogramming.
 If the degree of multiprogramming is stable, then the average rate of process creation
must be equal to the average departure rate of processes leaving the system.
 On some systems, the long-term scheduler may not be available or minimal.
 Time-sharing operating systems have no long term scheduler.
 When a process changes the state from new to ready, then there is use of long-term
scheduler.

Short Term Scheduler

 It is also called the CPU scheduler. Its main objective is to increase system performance
in accordance with the chosen set of criteria.
 It performs the change of a process from the ready state to the running state.
 The CPU scheduler selects a process from among the processes that are ready to execute
and allocates the CPU to one of them.
 Short-term schedulers, also known as dispatchers, make the decision of which process to
execute next. Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler

 Medium-term scheduling is a part of swapping. It removes the processes from the


memory.
 It reduces the degree of multiprogramming. The medium-term scheduler is in-charge of
handling the swapped out-processes.
 A running process may become suspended if it makes an I/O request.
 A suspended process cannot make any progress towards completion. In this condition,
to remove the process from memory and make space for other processes, the suspended
process is moved to secondary storage.
 This process is called swapping, and the process is said to be swapped out or rolled out.
Swapping may be necessary to improve the process mix.

Comparison among Schedulers

1. The long-term scheduler is a job scheduler; the short-term scheduler is a CPU scheduler;
the medium-term scheduler is a process-swapping scheduler.
2. The long-term scheduler is slower than the short-term scheduler; the short-term
scheduler is the fastest of the three; the medium-term scheduler's speed lies in between
the short- and long-term schedulers.
3. The long-term scheduler controls the degree of multiprogramming; the short-term
scheduler provides less control over the degree of multiprogramming; the medium-term
scheduler reduces the degree of multiprogramming.
4. The long-term scheduler is almost absent or minimal in time-sharing systems; the
short-term scheduler is also minimal in time-sharing systems; the medium-term
scheduler is a part of time-sharing systems.
5. The long-term scheduler selects processes from the pool and loads them into memory
for execution; the short-term scheduler selects those processes that are ready to execute;
the medium-term scheduler can re-introduce a process into memory so that its execution
can be continued.

Context Switching

 A context switching is the mechanism to store and restore the state or context of a CPU in
Process Control block so that a process execution can be resumed from the same point at
a later time.
 Using this technique, a context switcher enables multiple processes to share a single
CPU. Context switching is an essential part of a multitasking operating system features.
 When the scheduler switches the CPU from executing one process to execute another, the
state from the current running process is stored into the process control block.
 After this, the state for the process to run next is loaded from its own PCB and used to set
the PC, registers, etc. At that point, the second process can start executing.
 Context switches are computationally intensive since register and memory state must be
saved and restored.
 To reduce context switching time, some hardware systems employ two or more sets of processor registers.
 When the process is switched, the following information is stored for later use.
 Program Counter
 Scheduling information
 Base and limit register value
 Currently used register
 Changed State
 I/O State information
 Accounting information
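As a rough illustration, here is a minimal Python sketch of what a context switch conceptually does. The PCB fields shown (a program counter and a register dictionary) are simplified assumptions; a real kernel saves and restores far more state, in low-level architecture-specific code.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    # Simplified Process Control Block: only the fields needed here.
    pid: int
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

class CPU:
    # Stand-in for the hardware state the kernel must save and restore.
    def __init__(self):
        self.pc = 0
        self.registers = {}

def context_switch(cpu, current, nxt):
    # Save the state of the currently running process into its PCB.
    current.program_counter = cpu.pc
    current.registers = dict(cpu.registers)
    # Restore the state of the next process from its own PCB.
    cpu.pc = nxt.program_counter
    cpu.registers = dict(nxt.registers)
```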

Operating System Scheduling algorithms

 A Process Scheduler schedules different processes to be assigned to the CPU based on particular scheduling algorithms.
 There are six popular process scheduling algorithms which we are going to discuss in this chapter −

 First-Come, First-Served (FCFS) Scheduling
 Shortest-Job-Next (SJN) Scheduling
 Priority Scheduling
 Shortest Remaining Time
 Round Robin (RR) Scheduling
 Multiple-Level Queues Scheduling
These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so that once a process enters the running state, it cannot be preempted until it completes or blocks, whereas preemptive scheduling is priority-based: the scheduler may preempt a low-priority running process at any time when a high-priority process enters the ready state.

First Come First Serve (FCFS)

 Jobs are executed on a first come, first served basis.
 It is a non-preemptive scheduling algorithm.
 Easy to understand and implement.
 Its implementation is based on a FIFO queue.
 Poor in performance, as the average wait time is high.

Wait time of each process is as follows (assuming the same four processes tabulated under Shortest Job Next below: arrival times 0, 1, 2, 3 and execution times 5, 3, 8, 6) −

Process | Wait Time : Service Time - Arrival Time
P0 | 0 - 0 = 0
P1 | 5 - 1 = 4
P2 | 8 - 2 = 6
P3 | 16 - 3 = 13

Average Wait Time: (0+4+6+13) / 4 = 5.75
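A minimal sketch, in Python, of the computation above; the process list (names, arrival times, execution times) mirrors the assumed example and is ordered by arrival:

```python
def fcfs_wait_times(processes):
    # processes: list of (name, arrival_time, execution_time), ordered by arrival
    clock, waits = 0, {}
    for name, arrival, burst in processes:
        start = max(clock, arrival)     # CPU may sit idle until the job arrives
        waits[name] = start - arrival   # wait = service time - arrival time
        clock = start + burst           # non-preemptive: run to completion
    return waits

procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
waits = fcfs_wait_times(procs)
print(waits)                             # {'P0': 0, 'P1': 4, 'P2': 6, 'P3': 13}
print(sum(waits.values()) / len(waits))  # 5.75
```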

Shortest Job Next (SJN)

 This is also known as shortest job first, or SJF.
 This is a non-preemptive scheduling algorithm.
 Best approach to minimize waiting time.
 Easy to implement in batch systems where the required CPU time is known in advance.
 Impossible to implement in interactive systems where the required CPU time is not known.
 The processor should know in advance how much time a process will take.
Given: Table of processes, and their Arrival time, Execution time

Process | Arrival Time | Execution Time | Service Time
P0 | 0 | 5 | 0
P1 | 1 | 3 | 5
P2 | 2 | 8 | 14
P3 | 3 | 6 | 8

Waiting time of each process is as follows −

Process | Waiting Time
P0 | 0 - 0 = 0
P1 | 5 - 1 = 4
P2 | 14 - 2 = 12
P3 | 8 - 3 = 5

Average Wait Time: (0 + 4 + 12 + 5)/4 = 21 / 4 = 5.25
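A minimal Python sketch of non-preemptive SJN for the table above; at every scheduling point the ready process with the shortest execution time is picked:

```python
def sjn_wait_times(processes):
    # processes: list of (name, arrival_time, execution_time)
    remaining = sorted(processes, key=lambda p: p[1])   # order by arrival
    clock, waits = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= clock]
        if not ready:                    # CPU idle: jump to the next arrival
            clock = remaining[0][1]
            continue
        job = min(ready, key=lambda p: p[2])   # shortest execution time
        name, arrival, burst = job
        waits[name] = clock - arrival
        clock += burst                   # non-preemptive: run to completion
        remaining.remove(job)
    return waits

procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
print(sjn_wait_times(procs))   # {'P0': 0, 'P1': 4, 'P3': 5, 'P2': 12}
```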

Priority Based Scheduling

 Priority scheduling is a non-preemptive algorithm and one of the most common scheduling
algorithms in batch systems.
 Each process is assigned a priority. The process with the highest priority is executed first, and so on.
 Processes with same priority are executed on first come first served basis.
 Priority can be decided based on memory requirements, time requirements or any other
resource requirement.
Given: Table of processes with their Arrival time, Execution time, and Priority. Here we consider 1 to be the lowest priority.

Process | Arrival Time | Execution Time | Priority | Service Time
P0 | 0 | 5 | 1 | 0
P1 | 1 | 3 | 2 | 11
P2 | 2 | 8 | 1 | 14
P3 | 3 | 6 | 3 | 5

Waiting time of each process is as follows −

Process | Waiting Time
P0 | 0 - 0 = 0
P1 | 11 - 1 = 10
P2 | 14 - 2 = 12
P3 | 5 - 3 = 2

Average Wait Time: (0 + 10 + 12 + 2)/4 = 24 / 4 = 6
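The same skeleton adapts to priority scheduling; only the selection rule changes. A minimal sketch for the table above (larger number = higher priority, FCFS among equal priorities):

```python
def priority_wait_times(processes):
    # processes: list of (name, arrival_time, execution_time, priority)
    remaining = sorted(processes, key=lambda p: p[1])   # order by arrival
    clock, waits = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= clock]
        if not ready:
            clock = remaining[0][1]
            continue
        # Highest priority wins; the earlier arrival wins ties (FCFS).
        job = max(ready, key=lambda p: (p[3], -p[1]))
        name, arrival, burst, _ = job
        waits[name] = clock - arrival
        clock += burst
        remaining.remove(job)
    return waits

procs = [("P0", 0, 5, 1), ("P1", 1, 3, 2), ("P2", 2, 8, 1), ("P3", 3, 6, 3)]
print(priority_wait_times(procs))   # {'P0': 0, 'P3': 2, 'P1': 10, 'P2': 12}
```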

Shortest Remaining Time

 Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
 The processor is allocated to the job closest to completion but it can be preempted by a
newer ready job with shorter time to completion.
 Impossible to implement in interactive systems where required CPU time is not known.
 It is often used in batch environments where short jobs need to be given preference.
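A minimal tick-by-tick Python sketch of SRT, assuming one time unit per tick; at each tick the ready job with the least remaining time runs, so a newly arrived shorter job preempts the current one automatically:

```python
def srt_completion_times(processes):
    # processes: list of (name, arrival_time, execution_time)
    arrival = {name: arr for name, arr, _ in processes}
    remaining = {name: burst for name, _, burst in processes}
    clock, done = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:                 # nothing has arrived yet
            clock += 1
            continue
        job = min(ready, key=lambda n: remaining[n])   # least time left
        remaining[job] -= 1           # run the chosen job for one tick
        clock += 1
        if remaining[job] == 0:
            done[job] = clock
            del remaining[job]
    return done

procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
print(srt_completion_times(procs))  # {'P1': 4, 'P0': 8, 'P3': 14, 'P2': 22}
```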

Round Robin Scheduling

 Round Robin is a preemptive process scheduling algorithm.
 Each process is provided a fixed time to execute, called a quantum.
 Once a process has executed for the given time period, it is preempted and another process executes for its time period.
 Context switching is used to save the states of preempted processes.
Wait time of each process is as follows (for the same four processes, assuming a time quantum of 3) −

Process | Wait Time : Service Time - Arrival Time
P0 | (0 - 0) + (12 - 3) = 9
P1 | (3 - 1) = 2
P2 | (6 - 2) + (14 - 9) + (20 - 17) = 12
P3 | (9 - 3) + (17 - 12) = 11

Average Wait Time: (9+2+12+11) / 4 = 8.5
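A minimal Python simulation that reproduces the wait times above, assuming the same four processes and a quantum of 3:

```python
from collections import deque

def rr_wait_times(processes, quantum):
    # processes: list of (name, arrival_time, execution_time)
    procs = sorted(processes, key=lambda p: p[1])   # order by arrival
    remaining = {n: b for n, _, b in procs}
    last_ready = {n: a for n, a, _ in procs}  # when each process last became ready
    waits = {n: 0 for n, _, _ in procs}
    queue, clock, i = deque(), 0, 0
    while i < len(procs) or queue:
        if not queue:                          # idle until the next arrival
            clock = max(clock, procs[i][1])
        while i < len(procs) and procs[i][1] <= clock:
            queue.append(procs[i][0]); i += 1  # admit newly arrived processes
        name = queue.popleft()
        waits[name] += clock - last_ready[name]
        run = min(quantum, remaining[name])
        clock += run
        remaining[name] -= run
        while i < len(procs) and procs[i][1] <= clock:
            queue.append(procs[i][0]); i += 1  # arrivals during this slice queue first
        if remaining[name] > 0:                # preempted: back of the queue
            last_ready[name] = clock
            queue.append(name)
    return waits

procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
print(rr_wait_times(procs, 3))   # {'P0': 9, 'P1': 2, 'P2': 12, 'P3': 11}
```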

Multiple-Level Queues Scheduling

Multiple-level queues are not an independent scheduling algorithm. They make use of other
existing algorithms to group and schedule jobs with common characteristics.
 Multiple queues are maintained for processes with common characteristics.
 Each queue can have its own scheduling algorithms.
 Priorities are assigned to each queue.
For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in another
queue. The Process Scheduler then alternately selects jobs from each queue and assigns them to
the CPU based on the algorithm assigned to the queue.
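A minimal sketch of the dispatch rule: each queue has a fixed priority and its own discipline (FIFO here), and the dispatcher always serves the highest-priority non-empty queue. The queue layout below is a hypothetical example:

```python
from collections import deque

queues = {
    0: deque(),   # priority 0 (highest): e.g. CPU-bound/system jobs
    1: deque(),   # priority 1: e.g. I/O-bound/interactive jobs
}

def pick_next():
    # Serve the highest-priority (lowest-numbered) non-empty queue.
    for priority in sorted(queues):
        if queues[priority]:
            return queues[priority].popleft()
    return None   # no process is ready

queues[1].append("io_job")
queues[0].append("cpu_job")
print(pick_next())   # cpu_job: the higher-priority queue is served first
```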
Memory Management in Operating System (OS)

What do you mean by memory management?

 Memory is an important part of the computer that is used to store data.
 Its management is critical to the computer system because the amount of main memory
available in a computer system is very limited.
 At any time, many processes are competing for it. Moreover, to increase performance,
several processes are executed simultaneously.
 For this, we must keep several processes in the main memory, so it is even more
important to manage them effectively.
Role of Memory management

Following are the important roles of memory management in a computer system:

o The memory manager keeps track of the status of memory locations, whether free or allocated. It provides an abstraction of primary memory so that software perceives that a large memory is allocated to it.
o Memory manager permits computers with a small amount of main memory to execute
programs larger than the size or amount of available memory. It does this by moving
information back and forth between primary memory and secondary memory by using
the concept of swapping.
o The memory manager is responsible for protecting the memory allocated to each process
from being corrupted by another process. If this is not ensured, then the system may
exhibit unpredictable behavior.
o Memory managers should enable sharing of memory space between processes. Thus, two
programs can reside at the same memory location although at different times.

Memory Management Techniques:

The memory management techniques can be classified into following main categories:

o Contiguous memory management schemes
o Non-contiguous memory management schemes
Contiguous memory management schemes:

 In a Contiguous memory management scheme, each program occupies a single contiguous block of storage locations, i.e., a set of memory locations with consecutive addresses.

Single contiguous memory management schemes:

 The Single contiguous memory management scheme is the simplest memory management scheme, used in the earliest generation of computer systems.
 In this scheme, the main memory is divided into two contiguous areas or partitions.
 The operating systems reside permanently in one partition, generally at the lower
memory, and the user process is loaded into the other partition.

Advantages of Single contiguous memory management schemes:

o Simple to implement.
o Easy to manage and design.
o In a Single contiguous memory management scheme, once a process is loaded, it is given the full processor's time, and no other process will interrupt it.

Disadvantages of Single contiguous memory management schemes:

o Wastage of memory space due to unused memory, as the process is unlikely to use all the available memory space.
o The CPU remains idle while waiting for the disk to load the binary image into the main memory.
o A program cannot be executed if it is too large to fit into the available main memory.
o It does not support multiprogramming, i.e., it cannot handle multiple programs
simultaneously.

Multiple Partitioning:

 The single contiguous memory management scheme is inefficient, as it limits computers to executing only one program at a time, resulting in wastage of memory space and CPU time.
 The problem of inefficient CPU use can be overcome using multiprogramming that
allows more than one program to run concurrently.
 To switch between two processes, the operating systems need to load both processes into
the main memory.
 The operating system needs to divide the available main memory into multiple parts to
load multiple processes into the main memory.
 Thus multiple processes can reside in the main memory simultaneously.

The multiple partitioning schemes can be of two types:

o Fixed Partitioning
o Dynamic Partitioning

Fixed Partitioning

 The main memory is divided into several fixed-sized partitions in a fixed partition
memory management scheme or static partitioning.
 These partitions can be of the same size or different sizes. Each partition can hold a
single process.
 The number of partitions determines the degree of multiprogramming, i.e., the maximum
number of processes in memory.
 These partitions are made at the time of system generation and remain fixed after that.

Advantages of Fixed Partitioning memory management schemes:

o Simple to implement.
o Easy to manage and design.

Disadvantages of Fixed Partitioning memory management schemes:

o This scheme suffers from internal fragmentation.
o The number of partitions is specified at the time of system generation.
Dynamic Partitioning

 The dynamic partitioning was designed to overcome the problems of a fixed partitioning
scheme.
 In a dynamic partitioning scheme, each process occupies only as much memory as it requires when loaded for processing.
 Requested processes are allocated memory until the entire physical memory is exhausted
or the remaining space is insufficient to hold the requesting process.
 In this scheme the partitions used are of variable size, and the number of partitions is not
defined at the system generation time.

Advantages of Dynamic Partitioning memory management schemes:

o Simple to implement.
o Easy to manage and design.

Disadvantages of Dynamic Partitioning memory management schemes:

o This scheme suffers from external fragmentation.
o Memory allocation is a more complex process, since the number and size of partitions are decided at run time rather than at system generation.

Non-Contiguous memory management schemes:

 In a Non-Contiguous memory management scheme, the program is divided into different blocks and loaded at different portions of the memory that need not necessarily be adjacent to one another.
 This scheme can be classified depending upon the size of the blocks and whether the blocks reside in the main memory or not.

What is paging?

 Paging is a technique that eliminates the requirement of contiguous allocation of main memory.
 In this scheme, the main memory is divided into fixed-size blocks of physical memory called frames.
 The size of a frame should be kept the same as that of a page, to maximize main memory utilization and avoid external fragmentation.

Advantages of paging:

o Paging reduces external fragmentation.
o Simple to implement.
o Memory efficient.
o Due to the equal size of pages and frames, swapping becomes very easy.
o It allows faster access to data.
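As an illustration, a minimal sketch of paged address translation; the 4 KB page size and the page table contents below are hypothetical:

```python
PAGE_SIZE = 4096                  # assumed page/frame size (4 KB)
page_table = {0: 5, 1: 2, 2: 7}   # hypothetical page -> frame mapping

def translate(logical_address):
    page = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame = page_table[page]      # a missing entry would mean a page fault
    return frame * PAGE_SIZE + offset

print(translate(5000))   # page 1, offset 904 -> frame 2 -> 9096
```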

What is Segmentation?

 Segmentation is a technique that eliminates the requirement of contiguous allocation of main memory.
 In this, the main memory is divided into variable-size blocks of physical memory called
segments.
 It is based on the way the programmer follows to structure their programs. With
segmented memory allocation, each job is divided into several segments of different
sizes, one for each module. Functions, subroutines, stack, array, etc., are examples of
such modules.
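A minimal sketch of segmented address translation, to contrast with paging: each segment has a base and a limit, and an offset is valid only if it falls within the limit. The table contents below are hypothetical:

```python
segment_table = {
    0: {"base": 1400, "limit": 1000},   # e.g. a code segment
    1: {"base": 6300, "limit": 400},    # e.g. a stack segment
}

def translate(segment, offset):
    entry = segment_table[segment]
    if offset >= entry["limit"]:        # out-of-bounds access is trapped
        raise MemoryError("segmentation violation: offset exceeds limit")
    return entry["base"] + offset

print(translate(0, 53))    # 1400 + 53 = 1453
print(translate(1, 399))   # 6300 + 399 = 6699
```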
