2ND IoT OS UNIT-1

R RAJA SEKHAR

Associate Professor
CSE-Cyber Security & IoT Dept.
MOB :9866635319
EMAIL:reddy.rajasekhar@mallareddyuniversity.ac.in
COURSE OBJECTIVES (L/T/P/C: 3/0/0/3)

• To understand the evolution of operating systems.
• To understand the operating system as a layer of abstraction above the physical hardware that facilitates usage convenience and efficient resource management of the computer system.
• To learn the design and implementation of policies and mechanisms of OS subsystems.
• To understand and make effective use of Linux utilities and the shell scripting language to solve problems.
• To teach principles of operating systems, including file handling utilities, security by file permissions, process utilities, disk utilities, networking commands, basic Linux commands, scripts and filters.
• To teach principles of protection and the security problems of OS.
SYLLABUS

UNIT-1
Introduction to operating systems: Overview of operating systems, OS operations, operating system structures, operating system services, system calls, virtual machines.
Process management: Process concepts, process scheduling, operations on processes, inter-process communication, threads, scheduling algorithms, thread scheduling, multiple-processor scheduling.
An Operating System (OS):

• is a collection of software that manages computer hardware resources;
• provides common services for computer programs;
• acts as an interface between you and the computer hardware;
• is low-level software, categorised as system software;
• supports a computer's basic functions, such as memory management, task scheduling and controlling peripherals.
What is Operating System?

• An Operating System (OS) is an interface between a computer user and computer hardware. An operating system is software that performs all the basic tasks like file management, memory management, process management, handling input and output, and controlling peripheral devices such as disk drives and printers.
Computer System

Generally, a computer system consists of the following components:

• Computer users are the people who use the overall computer system.
• Application software is the software that users use directly to perform different activities. These programs are simple and easy to use, like browsers, Word, Excel, different editors, games, etc. They are usually written in high-level languages, such as Python, Java and C++.
• System software is more complex in nature and closer to the computer hardware. It is usually written in low-level languages like assembly language and includes operating systems (Microsoft Windows, macOS, and Linux), compilers, and assemblers.
• Computer hardware includes the monitor, keyboard, CPU, disks, memory, etc.
Operating System - Examples

There are plenty of operating systems available in the market, both paid and free (open source).

Following are the examples of the few most popular Operating Systems:

• Windows: This is one of the most popular commercial operating systems, developed and marketed by Microsoft. It has different versions in the market, like Windows 8, Windows 10, etc., and most of them are paid.
• Linux: This is a Unix-like and much-loved operating system, first released on September 17, 1991 by Linus Torvalds. Today it has 30+ variants available, like Fedora, openSUSE, CentOS, Ubuntu, etc. Most of them are available free of charge, though you can have their enterprise versions by paying a nominal license fee.
• macOS: This is a Unix-based operating system developed and marketed by Apple Inc. since 2001.
• iOS: This is a mobile operating system created and developed by Apple Inc. exclusively for its mobile devices like the iPhone and iPad.
• Android: This is a mobile operating system based on a modified version of the Linux kernel and other open-source software, designed primarily for touchscreen mobile devices such as smartphones and tablets.
• Some other old but popular Operating Systems include Solaris, VMS, OS/400, AIX, z/OS, etc.
Operating System - Functions

• Process Management
• I/O Device Management
• File Management
• Network Management
• Main Memory Management
• Secondary Storage Management
• Security Management
• Command Interpreter System
• Control over system performance
• Job Accounting
• Error Detection and Correction
• Coordination between other software and users
• Many more other important tasks
Operating Systems - History

• Operating systems have been evolving through the years.
• In the 1950s, computers were limited to running one program at a time, like a calculator, but in the following decades computers began to include more and more software programs, sometimes called libraries, that formed the basis for today's operating systems.
• The first operating system was created by General Motors in 1956 to run a single IBM mainframe computer, the IBM 704.
• IBM was the first computer manufacturer to develop operating systems and distribute them in its computers, in the 1960s.
Operating Systems - History

Here are a few facts about operating system evolution:

• Stanford Research Institute developed the oN-Line System (NLS) in the late 1960s, which
was the first operating system that resembled the desktop operating system we use today.
• Microsoft bought QDOS (Quick and Dirty Operating System) in 1981 and branded it as MS-DOS (Microsoft Disk Operating System). The last standalone version of MS-DOS was released in 1994.
• Unix grew out of Multics, a joint effort of the Massachusetts Institute of Technology, AT&T Bell Labs, and General Electric in the mid-1960s. Multics stands for Multiplexed Information and Computing Service; Unix itself was developed at Bell Labs in 1969 after Bell Labs withdrew from the Multics project.
• FreeBSD is also a popular UNIX derivative, originating from the BSD project at Berkeley. Modern Macintosh computers run macOS (formerly OS X), which incorporates code derived from FreeBSD.
• Windows 95 is a consumer-oriented graphical user interface-based operating system built
on top of MS-DOS. It was released on August 24, 1995 by Microsoft as part of its
Windows 9x family of operating systems.
• Solaris is a proprietary Unix operating system originally developed by Sun Microsystems in 1991. After Oracle acquired Sun in 2010, it was renamed Oracle Solaris.
Why to Learn Operating System

• If you aspire to become a great computer programmer, it is highly recommended that you understand how exactly an operating system works, inside out.
• This gives you the opportunity to understand how exactly data is saved on disk, how different processes are created and scheduled to run by the CPU, and how to interact with different I/O devices and ports.
• There are various low-level concepts which help a programmer design and develop scalable software.
• The bottom line is that without a good understanding of operating system concepts, someone cannot be assumed to be a good application software developer, and it is unimaginable for someone to become a system software developer without knowing the operating system in depth.
• If you are a fresher applying for a job at any standard company like Google, Microsoft, Amazon, IBM, etc., then it is very likely that you will be asked questions related to operating system concepts.
Operating System - Overview

• An Operating System (OS) is an interface between a computer user and computer hardware.
• An operating system is software which performs all the basic tasks like file management, memory management, process management, handling input and output, and controlling peripheral devices such as disk drives and printers.
• An operating system is software that enables applications to interact with a computer's hardware.
• The software that contains the core components of the operating system is called the kernel.
• The primary purposes of an operating system are to enable applications (software) to interact with a computer's hardware and to manage a system's hardware and software resources.
• Some popular operating systems include Linux, Windows, VMS, OS/400, AIX, z/OS, etc. Today, operating systems are found in almost every device: mobile phones, personal computers, mainframe computers, automobiles, TVs, toys, etc.
Definitions

An Operating System is the low-level software that supports a computer's basic functions, such as scheduling tasks and controlling peripherals.
or
An operating system is a program that acts as an interface between the user and the computer hardware and controls the execution of all kinds of programs.
or
An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs.
Architecture

• We can draw a generic architecture diagram of an operating system as follows:
Operating System Generations

• 0th Generation
• First Generation (1951-1956)
• Second Generation (1956-1964)
• Third Generation (1964-1979)
• Fourth Generation (1979 – Present)
Operating System Generations

0th Generation:

The term 0th generation refers to the period of computing development when Charles Babbage invented the Analytical Engine and, later, John Atanasoff built a computer in 1940.
The hardware component technology of this period was the electronic vacuum tube.
No operating system was available for the computers of this generation, and computer programs were written in machine language.
The computers of this generation were inefficient and dependent on the varying competencies of the individual programmers acting as operators.
Operating System Generations

First Generation (1951-1956)

• The first generation marked the beginning of commercial computing, including the introduction of Eckert and Mauchly's UNIVAC I in early 1951 and, a bit later, the IBM 701.
• For a time, system operation was performed with the help of expert operators and without the benefit of an operating system, though programs began to be written in higher-level, procedure-oriented languages, and thus the operator's routine expanded.
• Later, mono-programmed operating systems were developed, which eliminated some of the human intervention in running jobs and provided programmers with a number of desirable functions.
• These systems still continued to operate under the control of a human operator, who followed a number of steps to execute a program.
• The programming language FORTRAN was developed by John W. Backus in 1956.
Operating System Generations

Second Generation (1956-1964)

• The second generation of computer hardware was most notably characterized by transistors replacing vacuum tubes as the hardware component technology.
• The first operating system, GMOS, was developed by General Motors for its IBM 704.
• GMOS was based on a single-stream batch processing system: it collected all similar jobs into groups or batches and then submitted them to the operating system using punch cards so as to complete all jobs on the machine.
• The operating system cleans up after completing one job and then continues to read and initiate the next job from the punch cards.
• Researchers began to experiment with multiprogramming and multiprocessing in their computing services, called time-sharing systems.
• A noteworthy example is the Compatible Time-Sharing System (CTSS), developed at MIT during the early 1960s.
Operating System Generations

Third Generation (1964-1979)

• The third generation officially began in April 1964 with IBM's announcement of its System/360 family of computers.
• Hardware technology began to use integrated circuits (ICs), which yielded significant advantages in both speed and economy.
• Operating system development continued with the introduction and widespread adoption of multiprogramming. The idea of taking fuller advantage of the computer's data channel I/O capabilities continued to develop.
• Another development, which led to the personal computers of the fourth generation, was the emergence of minicomputers, beginning with the DEC PDP-1.
• The third generation was an exciting time, indeed, for the development of both computer hardware and the accompanying operating systems.
Operating System Generations

Fourth Generation (1979 – Present)

• The fourth generation is characterized by the appearance of the personal computer and the workstation.
• The component technology of the third generation was replaced by very large scale integration (VLSI).
• Many operating systems which we are using today, like Windows, Linux, macOS, etc., were developed in the fourth generation.
Operating System – Functions (or) Operations

Following are some of the important functions of an operating system:

• Memory Management
• Processor Management
• Device Management
• File Management
• Network Management
• Security
• Control over system performance
• Job accounting
• Error detecting aids
• Coordination between other software and users
OPERATING SYSTEM OPERATIONS

The major operations of the operating system are process management, memory management, device management and file management. These are described in detail as follows:
Memory Management

• Memory management refers to the management of primary memory or main memory.
• Main memory is a large array of words or bytes where each word or byte has its own address.
• Main memory provides fast storage that can be accessed directly by the CPU.
• For a program to be executed, it must be in main memory.
• An operating system does the following activities for memory management:
• Keeps track of primary memory, i.e., which parts of it are in use and by whom, and which parts are not in use.
• In multiprogramming, the OS decides which process will get memory, when, and how much.
• Allocates memory when a process requests it.
• De-allocates memory when a process no longer needs it or has been terminated.
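This allocate/de-allocate cycle is visible to a user program through the C library's dynamic memory interface; a minimal sketch (on Linux, the library in turn obtains memory from the OS via system calls such as brk or mmap):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    /* Request memory; the C library may ask the OS for more pages. */
    char *buf = malloc(64);
    if (buf == NULL) {          /* allocation can fail */
        perror("malloc");
        return 1;
    }
    strcpy(buf, "hello, memory manager");
    printf("%s\n", buf);

    free(buf);                  /* return the memory; the OS reclaims
                                   everything when the process exits */
    return 0;
}
```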
Processor Management

• In a multiprogramming environment, the OS decides which process gets the processor, when, and for how much time. This function is called process scheduling.
• An operating system does the following activities for processor management −
• Keeps track of the processor and the status of processes. The program responsible for this task is known as the traffic controller.
• Allocates the processor (CPU) to a process.
• De-allocates the processor when a process is no longer required.
Device Management

• An operating system manages device communication via the devices' respective drivers. It does the following activities for device management −
• Keeps track of all devices. The program responsible for this task is known as the I/O controller.
• Decides which process gets the device, when, and for how much time.
• Allocates devices in an efficient way.
• De-allocates devices.
File Management

• A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories.
• An operating system does the following activities for file management −
• Keeps track of information, location, uses, status, etc. These collective facilities are often known as the file system.
• Decides who gets the resources.
• Allocates the resources.
• De-allocates the resources.
Other Important Activities

Following are some of the important activities that an operating system performs −

• Security − By means of passwords and similar other techniques, it prevents unauthorized access to programs and data.
• Control over system performance − Recording delays between a request for a service and the response from the system.
• Job accounting − Keeping track of time and resources used by various jobs and users.
• Error detecting aids − Production of dumps, traces, error messages, and other debugging and error detecting aids.
• Coordination between other software and users − Coordination and assignment of compilers, interpreters, assemblers and other software to the various users of the computer system.
Operating System Structures

• An operating system is a design that enables user application programs to communicate with the hardware of the machine.
• The operating system should be built with the utmost care because it is such a complicated structure; it should also be simple to use and modify. Developing the operating system in parts is a simple approach to accomplish this.
• Each of these parts needs to have distinct inputs, outputs, and functionalities.

• There are many sorts of structures that implement operating systems, as listed below:

• Simple Structure
• Monolithic Structure
• Layered Approach Structure
• Micro-Kernel Structure
• Exo-Kernel Structure
• Virtual Machines
What is the Operating System Structure?

• The operating system structure refers to the way in which the various
components of an operating system are organized and interconnected.
There are several different approaches to operating system structure, each
with its own advantages and disadvantages.
• An operating system has a complex structure, so we need a well-defined
structure to assist us in applying it to our unique requirements.
• Just as we break down a big problem into smaller, easier-to-solve subproblems, designing an operating system in parts is a simpler way to build it, and each part is then an operating system component.
• The approach of interconnecting and integrating multiple operating system
components into the kernel can be described as an operating system
structure.
SIMPLE STRUCTURE

• It is the most straightforward operating system structure, but it lacks definition and
is only appropriate for usage with tiny and restricted systems.
• Because the interfaces and levels of functionality in this structure are not well separated, application programs are able to access basic I/O routines, which may result in unauthorized access to I/O procedures.
• This organizational structure is used by the MS-DOS operating system:
• There are four layers that make up the MS-DOS operating system, and each has its
own set of features.
• These layers include ROM BIOS device drivers, MS-DOS device drivers, application
programs, and system programs.
• The MS-DOS operating system benefits from layering because each level can be
defined independently and, when necessary, can interact with one another.
• If the system is built in layers, it will be simpler to design, manage, and update.
Because of this, simple structures can be used to build constrained systems that are
less complex.
• When a user program fails, the operating system as a whole crashes.
• Because MS-DOS systems have a low level of abstraction, programs and I/O procedures
are visible to end users, giving them the potential for unwanted access.
• The following figure illustrates layering in simple structure:
Advantages of Simple Structure:
• Because there are only a few interfaces and levels, it is simple to develop.
• Because there are fewer layers between the hardware and the
applications, it offers superior performance.

Disadvantages of Simple Structure:


• The entire operating system breaks if just one user program malfunctions.
• Since the layers are interconnected, and in communication with one
another, there is no abstraction or data hiding.
• The operating system's operations are accessible to layers, which can result
in data tampering and system failure.
MONOLITHIC STRUCTURE

• The monolithic operating system controls all aspects of the operating system's operation, including file management, memory management, device management, and other operational processes.
• The core of a computer's operating system (OS) is called the kernel.
• The kernel provides fundamental services to all other system components.
• It serves as the main interface between the operating system and the hardware. Because the monolithic operating system is built as a single piece, the kernel can directly access all hardware resources, such as the keyboard or mouse.
MONOLITHIC STRUCTURE

• The monolithic operating system is often referred to as the monolithic kernel.
• Multiprogramming techniques such as batch processing and time-sharing increase the processor's usability.
• Sitting directly on top of the hardware and in complete command of it, the monolithic kernel performs the role of a virtual machine for the rest of the system.
• This is an older style of operating system that was used in banks to carry out simple tasks like batch processing and time-sharing, which allows numerous users at different terminals to access the operating system.
MONOLITHIC STRUCTURE

Advantages of Monolithic Structure:

• Because layering is unnecessary and the kernel alone is responsible for managing all operations, it is easy to design and implement.
• Because functions like memory management, file management, process scheduling, etc. are implemented in the same address space, the monolithic kernel runs rather quickly compared to other systems. Using the same address space speeds things up and reduces the time required for address allocation for new processes.

Disadvantages of Monolithic Structure:

• The monolithic kernel's services are interconnected in address space and have an impact on one another, so if any of them malfunctions, the entire system does as well.
• It is not adaptable, so launching a new service is difficult.
LAYERED STRUCTURE

• The OS is separated into layers or levels in this kind of arrangement. Layer 0 (the lowest layer) is the hardware, and layer N (the highest layer) is the user interface.
• These layers are organized hierarchically, with the top-level layers making use of the capabilities of the lower-level ones.
• The functionalities of each layer are kept separate in this method, and abstraction is also an option.
• Because layered structures are hierarchical, debugging is simpler; all lower-level layers are debugged before the upper layer is examined.
• As a result, only the current layer has to be reviewed, since all the lower layers have already been examined.
LAYERED STRUCTURE

• The image below shows how OS is organized into layers:


LAYERED STRUCTURE

Advantages of Layered Structure:

• Work duties are separated since each layer has its own functionality, and
there is some amount of abstraction.
• Debugging is simpler because the lower layers are examined first, followed
by the top layers.

Disadvantages of Layered Structure:


• Performance is compromised in layered structures due to layering.
• Construction of the layers requires careful design because upper layers
only make use of lower layers' capabilities.
MICRO-KERNEL STRUCTURE

• The operating system is created using a micro-kernel framework that strips the kernel of any unnecessary parts.
• The optional kernel components are implemented as system and user-level programs.
• Systems developed this way are called micro-kernel systems.
• Each micro-kernel service is created separately and kept apart from the others. As a result, the system is more trustworthy and secure.
• If one micro-kernel service malfunctions, the rest of the operating system is unaffected and continues to function normally.
MICRO-KERNEL STRUCTURE

• The image below shows Micro-Kernel Operating System Structure:


MICRO-KERNEL STRUCTURE

Advantages of Micro-Kernel Structure:


• It enables portability of the operating system across platforms.
• Due to the isolation of each Micro-Kernel, it is reliable and
secure.
• The reduced size of Micro-Kernels allows for successful testing.
• The remaining operating system remains unaffected and keeps
running properly even if a component or Micro-Kernel fails.
Disadvantages of Micro-Kernel Structure:
• The performance of the system is decreased by increased inter-
module communication.
• The construction of a system is complicated.
EXOKERNEL OPERATING SYSTEM STRUCTURE

• An operating system called Exokernel was created at MIT with the


goal of offering application-level management of hardware resources.
• The exokernel architecture's goal is to enable application-specific
customization by separating resource management from protection.
Exokernel size tends to be minimal due to its limited operability.
• Because the OS sits between the programs and the actual hardware, it will
always have an effect on the functionality, performance, and breadth of
the apps that are developed on it.
• By rejecting the idea that an operating system must offer abstractions upon which to base applications, the exokernel operating system makes an effort to solve this issue.
• The goal is to impose as few restrictions on the use of abstractions as possible while still allowing developers the freedom to use abstractions when necessary. In the exokernel architecture, a single tiny kernel moves all hardware abstractions into untrusted libraries known as library operating systems.
• Exokernels differ from micro- and monolithic kernels in that their primary objective is to avoid forcing abstractions on applications.
Exokernel
Exokernel operating systems have a number of features, including:

• Enhanced support for application-level control.
• Separation of resource management from protection.
• Secure transfer of abstractions to an untrusted library operating system.
• Exposure of a low-level interface.
• Library operating systems that provide compatibility and portability.

Advantages of Exokernel Structure:


• Application performance is enhanced by it.
• Accurate resource allocation and revocation enable more effective utilization of
hardware resources.
• New operating systems can be tested and developed more easily.
• Every user-space program is permitted to utilize its own customized memory
management.

Disadvantages of Exokernel Structure:


• A decline in consistency
• Exokernel interfaces have a complex architecture.
VIRTUAL MACHINES (VMs)

• A virtual machine abstracts the hardware of our personal computer, including the CPU, disk drives, RAM, and NIC (network interface card), into a variety of different execution contexts based on our needs, giving us the impression that each execution environment is a separate computer. VirtualBox is an example of this.
• Using CPU scheduling and virtual memory techniques, an operating
system allows us to execute multiple processes simultaneously while
giving the impression that each one is using a separate processor and
virtual memory.
• System calls and a file system are examples of extra functionalities
that a process can have that the hardware is unable to give.
• Instead of offering these extra features, the virtual machine method
just offers an interface that is similar to that of the most fundamental
hardware.
• A virtual duplicate of the computer system underneath is made
available to each process.
• We can develop a virtual machine for a variety of reasons, all of
which are fundamentally connected to the capacity to share the
same underlying hardware while concurrently supporting various
execution environments, i.e., various operating systems.
• Disk systems are the fundamental problem with the virtual
machine technique.
• If the actual machine only has three-disc drives but needs to host
seven virtual machines, let's imagine that. It is obvious that it is
impossible to assign a disc drive to every virtual machine because
the program that creates virtual machines would require a sizable
amount of disc space in order to offer virtual memory and
spooling. The provision of virtual discs is the solution.
• The result is that users get their own virtual machines. They can
then use any of the operating systems or software programs that
are installed on the machine below.
• Virtual machine software is concerned with multiplexing numerous virtual machines onto a single physical machine; it does not need to take any user-support software into account. With this configuration, it may be possible to break the challenge of building an interactive system for several users into two manageable pieces.
VIRTUAL MACHINES (VMs)

Advantages of Virtual Machines:

• Due to total isolation between each virtual machine and every other
virtual machine, there are no issues with security.
• A virtual machine may offer an architecture for the instruction set that
is different from that of actual computers.
• Simple availability, accessibility, and recovery convenience.
Disadvantages of Virtual Machines:

• Depending on the workload, operating numerous virtual machines simultaneously on a host computer may have an adverse effect on one of them.
• When it comes to hardware access, virtual machines are less efficient than physical ones.
Operating System Services

• The operating system provides services to both the user and the programs running in
the system.
• The operating system itself is a program that provides an environment to run other
programs in the system.
• To the user, the operating system provides various services to run multiple user
processes in the system.
• Operating system services such as process management, memory management, and
resource allocation management are provided by the operating system.
Services of Operating System

The operating system works as a resource manager for the system. The various services of the operating system for the efficient working of the system are:

• Program execution
• Control Input/output devices
• Program creation
• Error Detection and Response
• Accounting
• Security and Protection
• File Management
• Communication
• User Interface
• Resource allocation
• Command interpretation
Services of Operating System

Program execution

• The operating system loads the program into memory and takes care of memory allocation for the program.
• Program execution is one of the operating system services; it also ensures that a program that has been started can end its execution either normally or forcefully.
• The program is first loaded into RAM, and then the CPU is assigned for program execution through the various CPU scheduling algorithms provided by the operating system.
• During and after program execution, the operating system also takes care of process synchronization, inter-process communication, and deadlock handling.
Services of Operating System

Control Input/output devices

• The programs running in the system need input and output devices access
for performing the input/output operations.
• The access to the input and output devices is given by the operating
system to the program for I/O operations.
• I/O operations mean writing or reading operations performed over any file
or any input/output device.
Program creation

• In order to create, modify and debug programs, the operating system provides tools like editors and debuggers to make the programmer's task easy.
Services of Operating System

Error Detection and Response

• Detecting and handling errors is one of the crucial operating system services, ensuring the smooth working of the system.
• Errors can occur in the following devices or programs:
• Network connection errors, loose connections of I/O devices, and restricted network calls are some of the errors that occur in input/output devices.
• The programs run by the user in the system can also cause errors, such as accessing illegal memory, undefined operations such as division by zero, excessive use of the CPU by a program, etc.
Services of Operating System

Accounting
• The operating system keeps track of all the data of performance
parameters and response time of the system in order to make it more
robust. This data is used to improve the performance of the operating
system and minimize the response time of the system.
Security and Protection
• If a user downloads a program from the internet there are chances
that the program can contain malicious code which can affect other
programs in the system. The operating system takes care that such a
program is checked for any malicious code before downloading it to
the system.
File Management
• File management is one of the operating system services; it handles the organization and storage of the files used by the programs running in the system.
• The operating system knows the information of all the different types of files and the properties of different storage devices, and it ensures the proper management and safety of the files stored in secondary storage devices.
Services of Operating System

Communication
• The processes running in the system need to communicate with each
other and also the computers or systems connected over a network
need to exchange data with each other over a secure network.
Operating system uses message passing and shared memory to keep
communication effective and safe.
User Interface
• The user interacts with the system either by command-line interface
or Graphical user interface.
• The command-line interface uses text commands entered by the user
to interact with the system. These commands can also be given using a
terminal emulator, or remote shell client.
• A graphical user interface is a more user-friendly way to interact with
the system. The GUI provides icons, widgets, texts, labels, and text
navigation. The user can easily interact with these icons and widgets
with just a click of a mouse or keyboard.
Services of Operating System

Resource allocation

• The processes running in the system require resources to complete their execution.
• The operating system uses CPU scheduling to allocate resources effectively among the processes, ensuring better utilization of the CPU.
• The resources used by processes can be CPU cycles, primary memory, file storage, and I/O devices.

Command Interpretation

• The user interacts with the system through commands and the
operating system interprets these commands and inputs and provides
appropriate outputs accordingly.
• If the interpreter is separate from the kernel then the user can modify
the interpreter and prevent any unauthorized access to the system.
System Call in Operating System

• A system call is a mechanism that provides the interface between a process and the
operating system.
• It is a programmatic method in which a computer program requests a service from the
kernel of the OS.
• System call offers the services of the operating system to the user programs via API
(Application Programming Interface).
• System calls are the only entry points into the kernel.
Example of System Call

• For example, suppose we need to write a program that reads data from one file and copies it into another file.
• The first information that the program requires is the names of the two files: the input file and the output file.
• In an interactive system, this type of program execution requires some system calls by the OS:
• The first call is to write a prompting message on the screen.
• The second is to read from the keyboard the characters which define the two files.
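A minimal C sketch of the copy program described above, using the POSIX open(), read(), write() and close() system calls (for brevity the two file names are taken from the command line rather than prompted for):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s <input> <output>\n", argv[0]);
        return 1;
    }

    int in = open(argv[1], O_RDONLY);                 /* open() system call */
    if (in < 0) { perror("open input"); return 1; }

    int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out < 0) { perror("open output"); return 1; }

    char buf[4096];
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0)       /* read() system call  */
        write(out, buf, n);                           /* write() system call */

    close(in);                                        /* close() system call */
    close(out);
    return 0;
}
```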
How System Call Works?

• Here are the steps for System Call in OS:


How System Call Works?

As you can see in the above System Call example diagram:

Step 1) A process executes in user mode until a system call interrupts it.
Step 2) The system call is then executed in kernel mode on a priority basis.
Step 3) Once system call execution is over, control returns to user mode.
Step 4) The execution of the user process resumes in user mode.
Why do you need System Calls in OS?

Following are situations which need system calls in OS:

• Reading and writing from files demand system calls.


• If a file system wants to create or delete files, system calls are required.
• System calls are used for the creation and management of new processes.
• Network connections need system calls for sending and receiving packets.
• Access to hardware devices like scanners and printers needs a system call.
Types of System calls
Here are the five types of System Calls in OS:
• Process Control
• File Management
• Device Management
• Information Maintenance
• Communications
Types of System calls

Process Control
• This system calls perform the task of process creation, process termination, etc.
• Functions:
• End and Abort
• Load and Execute
• Create Process and Terminate Process
• Wait and Signal Event
• Allocate and free memory
File Management
• File management system calls handle file manipulation jobs like creating a file,
reading, and writing, etc.
• Functions:
• Create a file
• Delete file
• Open and close file
• Read, write, and reposition
• Get and set file attributes
Types of System calls

Device Management

• Device management does the job of device manipulation like reading from
device buffers, writing into device buffers, etc.
• Functions:
• Request and release device
• Logically attach/ detach devices
• Get and Set device attributes

Information Maintenance
• It handles information and its transfer between the OS and the user program.
• Functions:
• Get or set time and date
• Get process and device attributes
Types of System calls

Communication

• These types of system calls are specially used for interprocess communications.
• Functions:
• Create, delete communications connections
• Send, receive message
• Help OS to transfer status information
• Attach or detach remote devices

Rules for passing Parameters for System Call

Here are the general rules for passing parameters to a system call:

• Parameters can be passed in registers.
• When there are more parameters than registers, the parameters can be stored in a block, and the block's address can be passed as a parameter in a register.
• Parameters can also be pushed onto the stack by the program and popped off by the operating system.
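On Linux, the register-passing convention is visible through the generic syscall(2) wrapper, which loads the system call number and its parameters into registers before trapping into the kernel. A minimal sketch:

```c
#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    const char msg[] = "hello via raw system call\n";
    /* SYS_write and its three parameters (fd, buffer, length) are
       placed into registers by the wrapper before the trap. */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}
```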
Important System Calls Used in OS

wait()
• In some systems, a process needs to wait for another process to complete its
execution. This type of situation occurs when a parent process creates a child process,
and the execution of the parent process remains suspended until its child process
executes.
• The suspension of the parent process automatically occurs with a wait() system call.
When the child process ends execution, the control moves back to the parent process.
fork()
• Processes use this system call to create processes that are a copy of themselves. With
the help of this system Call parent process creates a child process, and the execution
of the parent process will be suspended till the child process executes.
exec()
• This system call runs an executable file in the context of an already running process, replacing the previous executable image. However, the original process identifier remains, as a new process is not built; instead, the stack, data, heap, etc. are replaced by those of the new program.
kill()
• The kill() system call is used by OS to send a termination signal to a process that urges
the process to exit. However, a kill system call does not necessarily mean killing the
process and can have various meanings.
exit()
• The exit() system call is used to terminate program execution. Especially in a multi-threaded environment, this call indicates that the thread's execution is complete. The OS reclaims the resources that were used by the process after the exit() system call.
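A short C sketch tying these calls together: the parent fork()s a child, the child exec()s a new program (ls is used here purely as an illustration), and the parent wait()s until the child exits:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* create a copy of this process */
    if (pid < 0) {
        perror("fork");
        exit(1);
    }
    if (pid == 0) {
        /* Child: replace this image with a new program. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");               /* reached only if exec fails */
        exit(1);
    }
    /* Parent: suspended here until the child terminates. */
    int status;
    wait(&status);
    printf("child %d finished\n", (int)pid);
    return 0;
}
```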
Virtual Machines in Operating System

• A virtual machine (VM) is a virtual environment which functions as a


virtual computer system with its own CPU, memory, network interface,
and storage, created on a physical hardware system.
• VMs are isolated from the rest of the system, and multiple VMs can exist on
a single piece of hardware, like a server.
• That means it is a simulated image of application software and an operating system, executed on a host computer or a server.
• It has its own operating system and software, and the host facilitates the resources for the virtual computers.
Characteristics of virtual machines

• Multiple OS systems use the same hardware and partition resources


between virtual computers.
• Separate Security and configuration identity.
• Ability to move the virtual computers between the physical host computers
as holistically integrated files.
virtual machines

• The below diagram shows you the difference between the single OS with no VM and
Multiple OS with VM −
Benefits

• The multiple Operating system environments exist simultaneously on the same


machine, which is isolated from each other.
• A virtual machine can offer an instruction set architecture which differs from that of the real computer.
• Using virtual machines, there is easy maintenance, application provisioning,
availability and convenient recovery.
• Virtual Machine encourages the users to go beyond the limitations of hardware to
achieve their goals.
• The operating system achieves virtualization with the help of a specialized software
called a hypervisor, which emulates the PC client or server CPU, memory, hard disk,
network and other hardware resources completely, enabling virtual machines to share
resources.
• The hypervisor can emulate multiple virtual hardware platforms that are isolated from each other, allowing virtual machines to run Linux and Windows Server operating systems on the same underlying physical host.
Process Management in OS: PCB in Operating System

What is a Process?

• A process is the execution of a program that performs the actions specified in that program.
• It can be defined as an execution unit where a program runs.
• The OS helps you create, schedule, and terminate the processes used by the CPU.
• A process created by another (main) process is called a child process.
• Process operations can be easily controlled with the help of the PCB (Process Control Block).
• You can consider the PCB the brain of the process; it contains all the crucial information related to the process, like the process ID, priority, state, CPU registers, etc.
What is Process Management?

• Process management involves various tasks like the creation, scheduling and termination of processes, and deadlock handling.
• A process is a program that is under execution, which is an important part of modern-day operating systems.
• The OS must allocate resources that enable processes to share and exchange information.
• It also protects the resources of each process from other processes and allows synchronization among processes.
• It is the job of the OS to manage all the running processes of the system.
• It handles operations by performing tasks like process scheduling and resource allocation.
Process Architecture

Here is an architecture diagram of a process:

Stack: The stack stores temporary data like function parameters, return addresses, and local variables.
Heap: Memory that is dynamically allocated to the process during its run time.
Data: It contains the global and static variables.
Text: The text section includes the program code and the current activity, represented by the value of the program counter.
Process Control Blocks

• PCB stands for Process Control Block.
• It is a data structure maintained by the operating system for every process. Each PCB is identified by an integer process ID (PID).
• It stores all the information required to keep track of a running process.
• It is also accountable for storing the contents of the processor registers.
• These are saved when the process moves out of the running state and restored when it returns to it.
• The information in the PCB is updated by the OS as soon as the process makes a state transition.
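The exact PCB layout is OS-specific, but here is a simplified C sketch of the kind of fields it holds (all names are illustrative, not taken from any particular kernel):

```c
/* Illustrative only: real kernels (e.g., Linux's task_struct)
   hold far more state than this. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int            pid;             /* unique process identifier     */
    proc_state_t   state;           /* current process state         */
    int            priority;        /* scheduling priority           */
    unsigned long  program_counter; /* where to resume execution     */
    unsigned long  registers[16];   /* saved CPU register contents   */
    void          *memory_base;     /* memory-management information */
    struct pcb    *parent;          /* creating (parent) process     */
} pcb_t;
```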
Process States

Process States Diagram

A process state is a condition of the process at a specific instant of time. It also defines the current
position of the process.
Stages of a process

There are mainly seven stages of a process, which are:

• New: The new process is created when a specific program is called from secondary memory (hard disk) into primary memory (RAM).
• Ready: In the ready state, the process is loaded into primary memory and is ready for execution.
• Waiting: The process is waiting for the allocation of CPU time and other resources for execution.
• Executing: The process is in the execution state.
• Blocked: A time interval when a process is waiting for an event like an I/O operation to complete.
• Suspended: The suspended state defines the time when a process is ready for execution but has not been placed in the ready queue by the OS.
• Terminated: The terminated state specifies the time when a process is terminated.
• After completing every step, all the resources used by the process are released, and the memory becomes free.
Process Schedulers in Operating System

• In computing, a process is the instance of a computer program that is being


executed by one or many threads.
• Scheduling is important in many different computer environments.
• One of the most important areas of scheduling is which programs will work
on the CPU.
• This task is handled by the Operating System (OS) of the computer and
there are many different ways in which we can choose to configure
programs.
• Process schedulers are fundamental components of operating systems
responsible for deciding the order in which processes are executed by the
CPU.
• In simpler terms, they manage how the CPU allocates its time among multiple tasks or processes that are competing for its attention.
What is Process Scheduling?

• Process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another
process based on a particular strategy.
• Process scheduling is an essential part of a Multiprogramming operating
system.
• Such operating systems allow more than one process to be loaded into the
executable memory at a time and the loaded process shares the CPU using
time multiplexing.
Process Scheduling
Categories of Scheduling

Scheduling falls into one of two categories:

Non-Preemptive: In this case, a process's resources cannot be taken away before the process has finished running.
• Resources are switched only when the running process finishes and transitions to a waiting state.
Preemptive: In this case, the OS assigns resources to a process for a predetermined period.
• The process switches from the running state to the ready state, or from the waiting state to the ready state, during resource allocation.
• This switching happens because the CPU may give other processes priority and substitute the currently active process with a higher-priority one.
Types of Process Schedulers

There are three types of process schedulers:

1. Long Term or Job Scheduler


• It brings the new process to the ‘Ready State’.
• It controls the Degree of Multi-programming, i.e., the number of processes
present in a ready state at any point in time.
• It is important that the long-term scheduler make a careful selection of
both I/O and CPU-bound processes.
• I/O-bound tasks are those that spend much of their time on input and output operations, while CPU-bound processes are those that spend their time on the CPU.
• The job scheduler increases efficiency by maintaining a balance between
the two.
• They operate at a high level and are typically used in batch-processing
systems.
Types of Process Schedulers

2. Short-Term or CPU Scheduler

• It is responsible for selecting one process from the ready state and scheduling it into the running state.
• Note: The short-term scheduler only selects the process to schedule; it does not itself load the process for running.
• This is where all the scheduling algorithms are used. The CPU scheduler is responsible for ensuring there is no starvation due to processes with high burst times.
Types of Process Schedulers

• The dispatcher is responsible for loading the process selected by the short-term scheduler onto the CPU (moving it from the ready state to the running state). Context switching is done by the dispatcher only.
A dispatcher does the following:
• Switching context.
• Switching to user mode.
• Jumping to the proper location in the newly loaded program
Types of Process Schedulers

3. Medium-Term Scheduler
It is responsible for suspending and resuming the process.
• It mainly does swapping (moving processes from main memory to disk and vice versa).
• Swapping may be necessary to improve the process mix or because a change in
memory requirements has overcommitted available memory, requiring memory to be
freed up.
• It is helpful in maintaining a balance between I/O-bound and CPU-bound processes.
• It reduces the degree of multiprogramming.
Some Other Schedulers

• I/O Schedulers: I/O schedulers are in charge of managing the execution of I/O
operations such as reading and writing to discs or networks.
• They can use various algorithms to determine the order in which I/O operations are
executed, such as FCFS (First-Come, First-Served) or RR (Round Robin).
• Real-Time Schedulers: In real-time systems, real-time schedulers ensure that critical
tasks are completed within a specified time frame.
• They can prioritize and schedule tasks using various algorithms such as EDF (Earliest
Deadline First) or RM (Rate Monotonic).
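As a small illustration of the FCFS policy mentioned above, the following C sketch computes per-process waiting times when jobs are served strictly in arrival order (the burst times are made-up sample values):

```c
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};              /* sample CPU burst times */
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("process %d waits %d units\n", i, wait);
        total_wait += wait;
        wait += burst[i];                  /* next job waits for this one */
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}
```

With these sample bursts, the average waiting time is (0 + 24 + 27) / 3 = 17 units; a long first job makes everyone behind it wait, which is FCFS's well-known weakness.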
Comparison Among Schedulers

Long-Term Scheduler:
• It is a job scheduler.
• Generally, its speed is lower than that of the short-term scheduler.
• It controls the degree of multiprogramming.
• It is barely present or nonexistent in time-sharing systems.
• It selects processes from the job pool and loads them into memory for execution.

Short-Term Scheduler:
• It is a CPU scheduler.
• Its speed is the fastest among the three.
• It gives less control over how much multiprogramming is done.
• It plays a minimal role in time-sharing systems.
• It selects those processes which are ready to execute.

Medium-Term Scheduler:
• It is a process-swapping scheduler.
• Its speed lies in between those of the short-term and long-term schedulers.
• It reduces the degree of multiprogramming.
• It is a component of time-sharing systems.
• It can re-introduce a process into memory, allowing its execution to be continued.
Operations on Process in OS

• In an operating system, processes represent the execution of individual


tasks or programs.
• Process operations involve the creation, scheduling, execution, and
termination of processes.
• The OS allocates necessary resources, such as CPU time, memory, and I/O
devices, to ensure the seamless execution of processes.
• Process operations in OS encompass vital aspects of process lifecycle
management, optimizing resource allocation, and facilitating concurrent
and responsive computing environments.
Process Operations

• Process operations in an operating system involve several key steps that manage the
lifecycle of processes.
• The operations on process in OS ensure efficient utilization of system resources,
multitasking, and a responsive computing environment.
The primary process operations in OS include:
• Process Creation
• Process Scheduling
• Context Switching
• Process Execution
• Inter-Process Communication (IPC)
• Process Termination
• Process Synchronization
• Process State Management
• Process Priority Management
• Process Accounting and Monitoring
Operations on Process in OS
Creating
• Process creation is a fundamental operation within an operating system that involves
the creation and initialization of a new process.
• The process operation in OS is crucial for enabling multitasking, resource allocation,
and concurrent execution of tasks.
• The process creation operation in OS typically follows a series of steps which are as
follows:
Request:
• The process of creation begins with a request from a user or a system component,
such as an application or the operating system itself, to start a new process.
Allocating Resources:
• The operating system allocates necessary resources for the new process, including
memory space, a unique process identifier (PID), a process control block (PCB), and
other essential data structures.
Loading Program Code:
• The program code and data associated with the process are loaded into the allocated
memory space.
Setting Up Execution Environment:
• The OS sets up the initial execution environment for the process.
Initialization:
• Any initializations required for the process are performed at this stage. This might
involve initializing variables, setting default values, and preparing the process for
execution.
Process State:
• After the necessary setup, the new process is typically in a "ready" or "waiting" state,
indicating that it is prepared for execution but hasn't started running yet.
Dispatching/Scheduling
• Dispatching/scheduling is a crucial operation within an operating system that involves the selection of the next process to execute on the central processing unit (CPU). This operation is a key component of process management and is essential for efficient multitasking and resource allocation. The dispatching operation encompasses the following key steps:
Process Selection:
• The dispatching operation selects a process from the pool of ready-to-execute processes.
The selection criteria may include factors such as process priority, execution history, and the
scheduling algorithm employed by the OS.
Context Switching:
• Before executing the selected process, the operating system performs a context switch. This
involves saving the state of the currently running process, including the program counter,
CPU registers, and other relevant information, into the process control block (PCB).
Loading New Process:
• Once the context switch is complete, the OS loads the saved state of the selected process
from its PCB. This includes restoring the program counter and other CPU registers to the
values they had when the process was last preempted or voluntarily yielded to the CPU.
Execution:
• The CPU begins executing the instructions of the selected process. The process advances
through its program logic, utilizing system resources such as memory, I/O devices, and
external data.
Timer Interrupts and Preemption:
• During process execution, timer interrupts are set at regular intervals. When a timer
interrupt occurs, the currently running process is preempted, and the CPU returns control to
the operating system.
Scheduling Algorithms:
• The dispatching operation relies on scheduling algorithms that determine the order and
duration of process execution.
Resource Allocation:
• The dispatching operation is responsible for allocating CPU time to processes based on the
scheduling algorithm and their priority. This ensures that high-priority or time-sensitive tasks
receive appropriate attention.
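A toy C sketch of the save/restore idea behind the context-switch steps above (purely illustrative; a real context switch saves the full register state and is performed by kernel assembly code):

```c
#include <stdio.h>
#include <string.h>

typedef struct {            /* minimal 'CPU state' for this sketch */
    unsigned long pc;
    unsigned long regs[4];
} context_t;

context_t cpu;              /* the one 'real' CPU */

/* Save the running process's state into its PCB slot,
   then load the next process's saved state onto the CPU. */
void context_switch(context_t *save_to, const context_t *load_from) {
    memcpy(save_to, &cpu, sizeof cpu);
    memcpy(&cpu, load_from, sizeof cpu);
}

int main(void) {
    context_t p1 = { .pc = 0x1000 }, p2 = { .pc = 0x2000 };
    context_switch(&p1, &p2);               /* dispatch p2 */
    printf("CPU now at pc=0x%lx\n", cpu.pc);
    return 0;
}
```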
Blocking

• In an operating system, a blocking operation refers to a situation where a


process in OS is temporarily suspended or "blocked" from executing further
instructions until a specific event or condition occurs.
• This event typically involves waiting for a particular resource or condition to
become available before the process can proceed.
• Blocking operations are common in scenarios where processes need to interact
with external resources, such as input/output (I/O) devices, files, or other
processes.
• When a process operation in OS initiates a blocking operation, it enters a state
known as "blocked" or "waiting."
• The operating system removes the process from the CPU's execution queue and
places it in a waiting queue associated with the resource it is waiting for.
• The process remains in this state until the resource becomes available or the
condition is satisfied.
• Blocking operations are crucial for efficient resource management and coordination among processes.
• They prevent processes from monopolizing system resources while waiting for external events, enabling the operating system to schedule other processes for execution. Common examples of blocking operations include:

I/O Operations:
When a process requests data from an I/O device (such as reading data from a disk or receiving
input from a keyboard), it may be blocked until the requested data is ready.
Synchronization:
Processes often wait for synchronization primitives like semaphores or mutexes to achieve
mutual exclusion or coordinate their activities.
Inter-Process Communication:
Processes waiting for messages or data from other processes through mechanisms like message
queues or pipes may enter a blocked state.
Resource Allocation:
Processes requesting system resources, such as memory or network connections, may be
blocked until the resources are allocated.
Preemption

• Preemption in an operating system refers to the act of temporarily interrupting the execution of a currently running process to allocate the CPU to another process.
• This interruption is typically triggered by a higher-priority process becoming available for execution or by the expiration of a time slice assigned to the currently running process in a time-sharing environment.

Key aspects of preemption include:
Priority-Based Preemption:
• Processes with higher priorities are given preference in execution. When a higher-
priority process becomes available, the OS may preempt the currently running process
to allow the higher-priority process to execute.
Time Sharing:
• In a time-sharing or multitasking environment, processes are allocated small time
slices (quantum) of CPU time. When a time slice expires, the currently running
process is preempted, and the OS selects the next process to run.
Interrupt-Driven Preemption:
• Hardware or software interrupts can trigger preemption. For example, an interrupt
generated by a hardware device or a system call request from a process may cause
the OS to preempt the current process and handle the interrupt.
Fairness and Responsiveness:
• Preemption ensures that no process is unfairly blocked from accessing CPU time. It
guarantees that even low-priority processes get a chance to execute, preventing
starvation.
Real-Time Systems:
• Preemption is crucial in real-time systems, where tasks have strict timing
requirements. If a higher-priority real-time task becomes ready to run, the OS must
preempt lower-priority tasks to ensure timely execution.
Termination of Process
• Termination of a process in an operating system refers to the orderly and controlled cessation of a running process's execution. Process termination occurs when a process has completed its intended task, when it is no longer needed, or when an error or exception occurs. This operation involves several steps to ensure proper cleanup and resource reclamation:
Exit Status:
• When a process terminates, it typically returns an exit status or code that indicates
the outcome of its execution. This status provides information about whether the
process was completed successfully or encountered an error.
Resource Deallocation:
• The OS releases the resources allocated to the process, including memory, file
handles, open sockets, and other system resources. This prevents resource leaks and
ensures efficient utilization of system components.
File Cleanup:
• If the process has opened files or created temporary files, the OS ensures that these
files are properly closed and removed, preventing data corruption and freeing up
storage space.
Parent Process Notification:
• In most cases, the parent process (the process that created the terminating process)
needs to be informed of the termination and the exit status.
Process Control Block Update:
• The OS updates the process control block (PCB) of the terminated process, marking it
as "terminated" and removing it from the list of active processes.
Reclamation of System Resources:
• The OS updates its data structures and internal tables to reflect the availability of
system resources that were used by the terminated process.
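As a small illustration of exit status, parent notification, and resource reclamation, here is a minimal POSIX sketch; the exit value 7 is an arbitrary example:

// exit_demo.c - child terminates with an exit status; the parent is
// notified via waitpid() and the kernel then reclaims the child's PCB
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        exit(7);                        /* child: terminate with status 7   */
    } else {
        int status;
        waitpid(pid, &status, 0);       /* parent: notified of termination  */
        if (WIFEXITED(status))
            printf("child exited with status %d\n", WEXITSTATUS(status));
    }
    return 0;
}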
Inter Process Communication

• Inter Process Communication (IPC) is a mechanism usually provided by the operating system (OS).
• The main aim of this mechanism is to provide communication between several processes.
• In short, inter-process communication allows a process to let another process know that some event has occurred.

Inter Process Communication - Definition

• "Inter-process communication is used for exchanging useful information between numerous threads in one or more processes (or programs)."
• Inter-process communication in an OS is the way by which multiple processes can communicate with each other. Shared memory, message queues, and FIFOs are some of the ways to achieve IPC in an OS.
• A system can have two types of processes, i.e., independent or cooperating. Cooperating processes affect each other and may share data and information among themselves.
Interprocess Communication or IPC

• Interprocess Communication or IPC provides a mechanism to exchange data and information across multiple processes, which might be on a single computer or on multiple computers connected by a network. This is essential for many tasks, such as:
• Sharing data
• Coordinating activities
• Managing resources
• Achieving modularity
Synchronization in Inter Process Communication

• Synchronization in Inter Process Communication (IPC) is the process of ensuring that multiple processes are coordinated and do not interfere with each other.
• This is important because processes can share data and resources, and if they are not synchronized, they can overwrite each other's data or cause other problems.
• There are a number of different mechanisms that can be used to synchronize processes (a minimal mutex sketch follows this list), including:
• Mutual exclusion: This is a mechanism that ensures that only one process
can access a shared resource at a time. This is typically implemented using
a lock, which is a data structure that indicates whether a resource is
available.
• Condition variables: These are variables that can be used to wait for a
certain condition to be met. For example, a process could wait for a
shared resource to become available before using it.
• Barriers: These are synchronization points that all processes must reach
before they can proceed. This can be used to ensure that all processes
have completed a certain task before moving on to the next task.
• Semaphores: These are counting variables used to track how many units of a shared resource are available. They can prevent a resource from being used by more than a fixed number of processes at the same time.
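The sketch below illustrates the mutual-exclusion mechanism above with a POSIX mutex. It uses two threads within one process for brevity, but the same locking idea applies to processes sharing a resource (compile with -pthread):

// mutex_demo.c - two threads increment a shared counter under a mutex
#include <stdio.h>
#include <pthread.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* only one thread enters at a time */
        counter++;                    /* critical section                 */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}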
Inter Process Communication

Here are some examples of how synchronization is used in IPC:
• In a database, multiple processes may need to access the same data.
Synchronization is used to ensure that only one process can write to
the data at a time, and that other processes do not read the data
while it is being written.
• In a web server, multiple processes may need to handle requests from
clients. Synchronization is used to ensure that only one process
handles a request at a time, and that other processes do not interfere
with the request.
• In a distributed system, multiple processes may need to communicate
with each other. Synchronization is used to ensure that messages are
sent and received in the correct order, and that processes do not take
actions based on outdated information.
Approaches for Inter-Process Communication
Pipes:
• Pipes are a simple form of shared memory that allows two processes to
communicate with each other.
• It is a half-duplex method (one-way communication) used for IPC between two related processes.
• One process writes data to the pipe, and the other process reads data from
the pipe.
• It is like filling a bucket with water from a tap: the writing process fills the pipe, and the reading process retrieves data from it.
• Pipes can be either named or anonymous, depending on whether they have
a unique name or not.
• Named pipes are a type of pipe that has a unique name, and can be
accessed by multiple processes. Named pipes can be used for
communication between processes running on the same host or between
processes running on different hosts over a network.
• Anonymous pipes, on the other hand, are pipes that are created for
communication between a parent process and its child process. Anonymous
pipes are typically used for one-way communication between processes, as
they do not have a unique name and can only be accessed by the processes
that created them.
// Sample C program to implement IPC using a pipe
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    char *msg = "Hello world";               /* message in a character array */
    char buf[32];                            /* another character array      */
    int fd[2];
    pipe(fd);                                /* fd[0]: read end, fd[1]: write end */
    if (fork() == 0) {                       /* child process                */
        read(fd[0], buf, sizeof(buf));       /* receive the message          */
        printf("%s\n", buf);                 /* display the message          */
    } else {                                 /* parent process               */
        write(fd[1], msg, strlen(msg) + 1);  /* write message to the child   */
    }
    return 0;
}
Shared Memory

• Multiple processes can access a common shared memory. The processes communicate by shared memory, where one process makes changes at a time and then others view the change.
• Shared memory is a region of memory that is accessible to multiple processes. This allows processes to communicate with each other by reading and writing data from the shared memory region. Once the region is set up, data exchange does not require kernel intervention.
• Shared memory is a fast and efficient way for processes to communicate, but it can be difficult to use if the processes are not carefully synchronized.
• There are two main types of shared memory:
– Anonymous shared memory: Anonymous shared memory is not associated with any file or other system object. It is created by the operating system and is only accessible to the processes that created it.
– Mapped shared memory: Mapped shared memory is associated with a file or other system object. It is created by mapping a file into the address space of one or more processes.
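A minimal sketch of mapped shared memory using the POSIX shm_open/mmap interface, assuming a Linux-like system; the object name /demo_shm is arbitrary, and wait() is used as a crude stand-in for proper synchronization:

// shm_demo.c - POSIX shared memory between parent and child
// (may need to be compiled with -lrt on some systems)
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/wait.h>

int main(void) {
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0666);
    ftruncate(fd, 4096);                          /* size the shared region  */
    char *shm = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);          /* map into address space  */
    if (fork() == 0) {
        strcpy(shm, "Hello via shared memory");   /* child writes            */
        return 0;
    }
    wait(NULL);                                   /* crude synchronization   */
    printf("parent read: %s\n", shm);             /* parent views the change */
    shm_unlink("/demo_shm");                      /* remove the object       */
    return 0;
}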
Message Passing:

• Message passing is a method of Inter Process Communication in an OS.
• It involves the exchange of messages between processes, where each process sends and receives messages to coordinate its activities and exchange data with other processes.
• Processes can communicate without any shared variables, therefore it can be used in
a distributed environment on a network.
• In message passing, each process has a unique identifier, known as a process ID, and
messages are sent from one process to another using this identifier. When a process
sends a message, it specifies the recipient process ID and the contents of the
message, and the operating system is responsible for delivering the message to the
recipient process. The recipient process can then retrieve the contents of the message
and respond, if necessary.
• Message passing implemented over shared memory has a number of advantages over other IPC mechanisms. First, it is very fast, as messages are simply copied from one process's address space to another.
• Second, it is very flexible, as any type of data can be shared between processes. Third, it is relatively easy to implement, as it does not require any special support from the operating system beyond the shared region itself.
• However, message passing over shared memory also has some disadvantages.
• First, it can be difficult to ensure that messages are delivered in the correct order. Second, it can be difficult to manage the size of the message queue.
• Third, it can be difficult to port to other platforms, as the implementation of shared memory can vary from one operating system to another.
Message Queues

• Message queues are a more advanced form of pipes.
• They allow processes to send messages to each other, and they can be used to
communicate between processes that are not running on the same machine.
• Message queues are a good choice for communication between processes that
need to be decoupled from each other.
• In Message Queue IPC, each message has a priority associated with it, and
messages are retrieved from the queue in order of their priority. This allows
processes to prioritize the delivery of important messages and ensures that
critical messages are not blocked by less important messages in the queue.
• Message Queue IPC provides a flexible and scalable method of communication
between processes, as messages can be sent and received asynchronously,
allowing processes to continue executing while they wait for messages to arrive.
• The main disadvantage of Message Queue IPC is that it can introduce additional overhead, as messages must be copied between address spaces, and the queue must be managed by the operating system to ensure that it remains synchronized and consistent across all processes.
• The kernel stores the messages of a queue in a linked list, and each message queue is identified by a "message queue identifier".
// Sample C program to implement a message queue (System V API)
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct msgbuf { long mtype; char mtext[64]; };    /* message format */

int main(void) {
    /* create a message queue (or connect to an existing one by key) */
    int qid = msgget(IPC_PRIVATE, 0666 | IPC_CREAT);
    struct msgbuf out = {1, "Hello via message queue"}, in;
    msgsnd(qid, &out, strlen(out.mtext) + 1, 0);  /* write into the queue */
    msgrcv(qid, &in, sizeof(in.mtext), 1, 0);     /* read from the queue  */
    printf("Received: %s\n", in.mtext);
    msgctl(qid, IPC_RMID, NULL);                  /* control operation: remove queue */
    return 0;
}
Direct Communication

• In direct communication, processes that want to communicate must name the sender or receiver.
• A pair of communicating processes must have one link between them.
• A link (generally bi-directional) is established between every pair of communicating processes.
• The sender process must know the identifier of the receiver process in order to send a message to it. This identifier can be a process ID, a port number, or some other unique identifier. Once the sender process has the identifier of the receiver process, it can send a message to it directly.
• The main advantage of direct communication is that it provides a simple and direct way for processes to communicate, as processes can access each other's data directly without the need for intermediate communication mechanisms.
• However, direct communication also has some limitations, as it can lead to tight coupling between processes, and it can make it more difficult to change the communication mechanism in the future, as direct communication is hardcoded into the processes themselves.
Indirect Communication

• Indirect communication in IPC is a method of communication in which processes do not explicitly name the sender or receiver of the communication.
• Instead, processes communicate through a shared medium such as a
message queue or mailbox.
• Pairs of communicating processes have shared mailbox.
• Link (uni-directional or bi-directional) is established between pairs of
processes.
• The sender process puts a message in the port or mailbox of the receiver process, and the receiver process takes out (or deletes) the data from the mailbox.
• The sender and receiver processes do not need to know each other's
identifiers in order to communicate with each other.
• The main advantage of indirect communication is that it provides a more
flexible and scalable way for processes to communicate, as processes do
not need to have direct access to each other’s data.
• However, indirect communication can also introduce additional overhead,
as data must be copied between address spaces, and the communication
mechanism must be managed by the operating system to ensure that it
remains synchronized and consistent across all processes.
FIFO

• FIFO (First In First Out) is a type of message queue that guarantees that messages are delivered in the order they were sent.
• It involves the use of a FIFO buffer, which acts as a queue for
exchanging data between processes.
• Used to communicate between two processes that are not related.
• In the FIFO method, one process writes data to the FIFO buffer, and
another process reads the data from the buffer in the order in which it
was written.
• Full-duplex method - Process P1 is able to communicate with Process
P2, and vice versa.
• The main advantage of the FIFO method is that it provides a simple
way for processes to communicate, as data is exchanged sequentially,
and there is no need for processes to coordinate their access to the
FIFO buffer.
• However, the FIFO method can also introduce limitations, as it may
result in slow performance if the buffer becomes full and data must be
written to the disk, or if the buffer becomes empty and data must be
read from the disk.
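A minimal sketch of two unrelated processes communicating through a named FIFO, assuming a POSIX system; the path /tmp/demo_fifo and the command-line convention are arbitrary choices for illustration:

// fifo_demo.c - named pipe (FIFO) between two unrelated processes;
// run "./fifo_demo write" in one shell and "./fifo_demo" in another
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

#define FIFO_PATH "/tmp/demo_fifo"     /* hypothetical path for illustration */

int main(int argc, char *argv[]) {
    mkfifo(FIFO_PATH, 0666);                   /* create the FIFO if absent   */
    if (argc > 1 && strcmp(argv[1], "write") == 0) {
        int fd = open(FIFO_PATH, O_WRONLY);    /* blocks until a reader opens */
        write(fd, "hello via FIFO", 15);
        close(fd);
    } else {
        char buf[32] = {0};
        int fd = open(FIFO_PATH, O_RDONLY);    /* blocks until a writer opens */
        read(fd, buf, sizeof(buf) - 1);
        printf("read: %s\n", buf);
        close(fd);
    }
    return 0;
}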
Why Inter Process Communication (IPC) is Required?

• Inter-process communication (IPC) is required for a number of reasons:
• Sharing data: IPC allows processes to share data with each other. This
is essential for many tasks, such as sharing files, databases, and other
resources.
• Coordinating activities: IPC allows processes to coordinate their
activities. This is essential for tasks such as distributed computing,
where multiple processes are working together to solve a problem.
• Managing resources: IPC allows processes to manage resources such as
memory, devices, and files. This is essential for ensuring that
resources are used efficiently.
• Achieving modularity: IPC allows processes to be developed and
maintained independently of each other. This makes it easier to
develop and maintain large and complex software systems.
• Flexibility: IPC allows processes to run on different hosts or nodes in a
network, providing greater flexibility and scalability in large and
complex systems.
Threads in Operating System

• In computers, a single process might have multiple functionalities running in parallel, where each functionality can be considered as a thread.
• Each thread has its own set of registers and stack space. There can be
multiple threads in a single process having the same or different
functionality.
• Threads in operating systems are also termed lightweight processes.
What is Thread in Operating System?

• A thread is a sequential flow of tasks within a process.
• Threads in an operating system can be of the same or different types. Threads are used to
increase the performance of the applications.
• Each thread has its own program counter, stack, and set of registers. However, the threads
of a single process might share the same code and data/file.
• Threads are also termed lightweight processes as they share common resources.
• Eg: While playing a movie on a device the audio and video are controlled by different
threads in the background.
• The key difference between a single-threaded and a multithreaded process is in what is shared: the threads of a multithreaded process share the code, data, and files of the process, while each thread keeps its own registers and stack.
Components of Thread
• A thread has the following three components:
• Program Counter
• Register Set
• Stack space
Why do We Need Threads?
• Threads in the operating system provide multiple benefits and improve the overall
performance of the system. Some of the reasons threads are needed in the operating
system are:
• Since threads use the same data and code, the operational cost between threads is
low.
• Creating and terminating a thread is faster compared to creating or terminating a
process.
• Context switching is faster in threads compared to processes. A minimal thread-creation sketch is shown below.
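A minimal POSIX threads sketch: the two threads share the process's global data but run on their own stacks (compile with -pthread); the variable names are arbitrary:

// threads_demo.c - two threads of one process share globals but have
// their own stacks and arguments
#include <stdio.h>
#include <pthread.h>

int shared = 42;                     /* shared by all threads of the process */

static void *run(void *arg) {
    int id = *(int *)arg;            /* each thread has its own stack copy   */
    printf("thread %d sees shared = %d\n", id, shared);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;
    pthread_create(&t1, NULL, run, &id1);
    pthread_create(&t2, NULL, run, &id2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}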
Why Multithreading?

• In multithreading, the idea is to divide a single process into multiple threads instead of creating a whole new process. Multithreading is done to achieve parallelism and to improve the performance of applications, as it is faster in the ways discussed above. The other advantages of multithreading are mentioned below.
• Resource Sharing: Threads of a single process share the same resources
such as code, data/file.
• Responsiveness: Program responsiveness enables a program to run even if
part of the program is blocked or executing a lengthy operation. Thus,
increasing the responsiveness to the user.
• Economy: It is more economical to use threads as they share the resources
of a single process. On the other hand, creating processes is expensive.
Scheduling Algorithms in Operating System

• In the operating system, everything is carried out by processes. At any given time, there is only one process that is running on the CPU.
• A process scheduler removes one process from the running state in the CPU
and selects another process to run based on some scheduling algorithms in
OS.
What is a Scheduling Algorithm?

• A CPU scheduling algorithm is used to determine which process will use the CPU for execution and which processes to hold or remove from execution. The main goal of CPU scheduling algorithms in an OS is to make sure that the CPU is never in an idle state, meaning that the OS has at least one process ready for execution among the available processes in the ready queue.

There are two types of scheduling algorithms in an OS:
1) Preemptive
2) Non-preemptive
Preemptive Scheduling Algorithms
• In these algorithms, processes are assigned with priority.
• Whenever a high-priority process comes in, the lower-priority process that has
occupied the CPU is preempted.
• That is, it releases the CPU, and the high-priority process takes the CPU for its
execution.
Non-Preemptive Scheduling Algorithms
• In these algorithms, we cannot preempt the process.
• That is, once a process is running on the CPU, it releases the CPU only when it terminates or switches to a waiting state (e.g., for I/O).
• Often, these are the types of algorithms that can be used because of the limitations of
the hardware.
There are some important terminologies to know for understanding the scheduling algorithms:
• Arrival Time: This is the time at which a process arrives in the ready
queue.
• Completion Time: This is the time at which a process completes its
execution.
• Burst Time: This is the time required by a process for CPU execution.
• Turn-Around Time: This is the difference in time between completion
time and arrival time. This can be calculated as:
• Turn Around Time = Completion Time – Arrival Time
• Waiting Time: This is the difference in time between turnaround time and
burst time. This can be calculated as:
• Waiting Time = Turn Around Time – Burst Time
• Throughput: It is the number of processes that are completing their
execution per unit of time.
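For example, a process that arrives at time 2, has a burst time of 4, and completes at time 10 has Turn Around Time = 10 – 2 = 8 and Waiting Time = 8 – 4 = 4.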
Why Do We Need Scheduling?

• A process needs both CPU time and I/O time to complete its execution. In a multiprogramming system, one process can use the CPU while another waits for I/O, whereas in a uniprogramming system, time spent waiting for I/O is completely wasted, as the CPU is idle during this time. Multiprogramming can be achieved by the use of process scheduling.

• The purposes of a scheduling algorithm are as follows:
• Maximize the CPU utilization, meaning that keep the CPU as busy as
possible.
• Fair allocation of CPU time to every process
• Maximize the Throughput
• Minimize the turnaround time
• Minimize the waiting time
• Minimize the response time
Types of Scheduling Algorithms in OS

First Come First Serve (FCFS) Scheduling Algorithm
• First Come First Serve is the easiest and simplest CPU scheduling
algorithm to implement.
• In this type of scheduling algorithm, the CPU is first allocated to the
process which requests the CPU first.
• That means the process with minimal arrival time will be executed
first by the CPU. It is a non-preemptive scheduling algorithm as the
priority of processes does not matter, and they are executed in the
manner they arrive in front of the CPU.
• This scheduling algorithm is implemented with a FIFO(First In First
Out) queue.
• As the process is ready to be executed, its Process Control Block (PCB)
is linked with the tail of this FIFO queue.
• Now when the CPU becomes free, it is assigned to the process at the
beginning of the queue.
Advantages
• Involves no complex logic and just picks processes from the ready queue
one by one.
• Easy to implement and understand.
• Every process will eventually get a chance to run so no starvation occurs.
Disadvantages
• Waiting time for processes with less execution time is often very long.
• It favors CPU-bound processes over I/O-bound processes.
• Leads to convoy effect.
• Causes lower device and CPU utilization.
• Poor performance as the average wait time is high.
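The following small sketch computes the FCFS metrics defined earlier for three example processes; the arrival and burst times are made-up numbers for illustration:

// fcfs_demo.c - completion, turnaround, and waiting times under FCFS
#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2}, burst[] = {5, 3, 8};
    int n = 3, time = 0;
    for (int i = 0; i < n; i++) {                  /* run in arrival order */
        if (time < arrival[i]) time = arrival[i];  /* CPU idles until arrival */
        time += burst[i];                          /* completion time      */
        int tat = time - arrival[i];               /* completion - arrival */
        int wt  = tat - burst[i];                  /* turnaround - burst   */
        printf("P%d: completion=%d turnaround=%d waiting=%d\n",
               i + 1, time, tat, wt);
    }
    return 0;
}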
Shortest Job First (SJF) Scheduling Algorithm
• Shortest Job First is a non-preemptive scheduling algorithm in which the process with
the shortest burst or completion time is executed first by the CPU.
• That means the lesser the execution time, the sooner the process will get the CPU. In
this scheduling algorithm, the arrival time of the processes must be the same, and the
processor must be aware of the burst time of all the processes in advance.
• If two processes have the same burst time, then First Come First Serve
(FCFS) scheduling is used to break the tie.
• The preemptive mode of SJF scheduling is known as the Shortest Remaining Time
First scheduling algorithm.
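For example, if processes P1, P2, and P3 arrive together with burst times 6, 8, and 3, SJF runs them in the order P3, P1, P2, giving waiting times 0, 3, and 9 and an average waiting time of 4, whereas FCFS in arrival order would give (0 + 6 + 14)/3 ≈ 6.7.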
Advantages
• Results in increased Throughput by executing shorter jobs first, which mostly have
a shorter turnaround time.
• Gives the minimum average waiting time for a given set of processes.
• Best approach to minimize waiting time for other processes awaiting execution.
• Useful for batch-type processing where CPU time is known in advance and waiting for
jobs to complete is not critical.
Disadvantages
• May lead to starvation as if shorter processes keep on coming, then longer processes
will never get a chance to run.
• Time taken by a process must be known to the CPU beforehand, which is not always
possible.
Round Robin Scheduling Algorithm
• The Round Robin algorithm is related to the First Come First Serve
(FCFS) technique but implemented using a preemptive policy.
• In this scheduling algorithm, processes are executed cyclically, and each
process is allocated a small amount of time called time slice or time
quantum.
• The ready queue of the processes is implemented using the circular queue
technique in which the CPU is allocated to each process for the given time
quantum and then added back to the ready queue to wait for its next turn.
• If the process completes its execution within the given time quantum, it releases the CPU, and the next process in the queue executes for the given period of time.
• But if the process is not completely executed within the given time quantum, it is preempted, added back to the ready queue, and waits for its turn to complete its execution.
• The round-robin scheduling is the oldest and simplest scheduling algorithm
that derives its name from the round-robin principle.
• In this principle, each person will take an equal share of something in
turn.
• This algorithm is mostly used for multitasking in time-sharing systems and
operating systems having multiple clients so that they can make efficient
use of resources.
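For example, with a time quantum of 2 and processes P1 (burst 5) and P2 (burst 3) both arriving at time 0, the CPU runs P1 (0–2), P2 (2–4), P1 (4–6), P2 (6–7), and P1 (7–8), so P2 completes at time 7 and P1 at time 8.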
Advantages

• All processes are given the same priority; hence all processes get an equal share of the CPU.
• Since it is cyclic in nature, no process is left behind, and starvation doesn't exist.
Disadvantages

• The throughput depends on the length of the time quantum. Setting it too short increases the overhead and lowers the CPU efficiency, but setting it too long gives a poor response to short processes and tends to exhibit the same behavior as FCFS.
• The average waiting time of the Round Robin algorithm is often long.
• Context switching is done many more times and adds to the overhead time.
Shortest Remaining Time First (SRTF) Scheduling Algorithm

• The Shortest Remaining Time First (SRTF) scheduling algorithm is basically a preemptive mode of the Shortest Job First (SJF) algorithm, in which jobs are scheduled according to the shortest remaining time.
• In this scheduling technique, the process with the shortest burst time is executed first by the CPU, but the arrival times of the processes need not be the same.
• If another process with a shorter remaining time arrives, then the current process is preempted, and the newly ready job is executed first.
Advantages
• Processes are executed faster than SJF, being the preemptive version
of it.
Disadvantages
• Context switching is done a lot more times and adds to the overhead
time.
• Like SJF, it may still lead to starvation and requires the knowledge of
process time beforehand.
• Impossible to implement in interactive systems where the required
CPU time is unknown.
Conclusion

• Scheduling algorithms tell the CPU which process will be the next to have CPU time.
• The main goal of scheduling algorithms in an OS is to maximize throughput.
• Scheduling algorithms can be preemptive or non-preemptive.
• First Come First Serve, Shortest Job First, Shortest Remaining Time First, and Round Robin are four widely used scheduling algorithms, each with its own advantages and disadvantages.
• The best scheduling algorithm depends on the situation, needs, and hardware and software capabilities.
Thread scheduling
• Many computer configurations have a single CPU. Hence, threads run one at a time in
such a way as to provide an illusion of concurrency.
• Execution of multiple threads on a single CPU in some order is called scheduling.
• The Java runtime environment supports a very simple, deterministic scheduling
algorithm called fixed-priority scheduling.
• This algorithm schedules threads on the basis of their priority relative to other
Runnable threads. When a thread is created, it inherits its priority from the thread
that created it.
• You also can modify a thread's priority at any time after its creation by using
the setPriority method.
• Thread priorities are integers ranging
between MIN_PRIORITY and MAX_PRIORITY (constants defined in the Thread class).
The higher the integer, the higher the priority.
• At any given time, when multiple threads are ready to be executed, the runtime
system chooses for execution the Runnable thread that has the highest priority.
• Only when that thread stops, yields, or becomes Not Runnable will a lower-priority
thread start executing.
• If two threads of the same priority are waiting for the CPU, the scheduler arbitrarily
chooses one of them to run.
• The chosen thread runs until one of the following conditions is true:
• A higher priority thread becomes runnable.
• It yields, or its run method exits.
• On systems that support time-slicing, its time allotment has expired.
• Then the second thread is given a chance to run, and so on, until the interpreter
exits.
• The Java runtime system's thread scheduling algorithm is also preemptive.
• If at any time a thread with a higher priority than all other Runnable threads
becomes Runnable, the runtime system chooses the new higher-priority thread for
execution.
• The new thread is said to preempt the other threads.
Rule of thumb: At any given time, the highest priority thread is running. However, this is
not guaranteed. The thread scheduler may choose to run a lower priority thread to avoid
starvation. For this reason, use thread priority only to affect scheduling policy for
efficiency purposes. Do not rely on it for algorithm correctness.
Multi-processor Scheduling
• A multi-processor is a system that has more than one processor but shares the same memory, bus, and input/output devices.
• In multi-processor scheduling, more than one processor (CPU) shares the load to handle the execution of processes smoothly.
• The scheduling process of a multi-processor is more complex than that of a single-processor system.
• The CPUs may be of the same kind (homogeneous) or different (heterogeneous).
• The multiple CPUs in the system share a common bus, memory, and other I/O devices.
There are two approaches to multi-processor scheduling: symmetric and asymmetric multi-processor scheduling.
Multiple Processors Scheduling in Operating System

• The bus connects the processors to the RAM, to the I/O devices, and to all the other components of the computer.
• The system is a tightly coupled system.
• This type of system works even if a processor goes down; the rest of the system keeps working.
• The scheduling process of a multi-processor is more complex than that of a single-processor system because of the following reasons:
• Load balancing is a problem since more than one processor is present.
• Processes executing simultaneously may require access to shared data.
• Cache affinity should be considered in scheduling.
Approaches to Multiple Processor Scheduling

• Symmetric Multiprocessing: In symmetric multi-processor scheduling, the processors are self-scheduling.
• The scheduler for each processor checks the ready queue and selects a process to
execute.
• Each of the processors works on the same copy of the operating system and
communicates with each other.
• If one of the processors goes down, the rest of the system keeps working.
– Symmetrical Scheduling with global queues: If the processes to be executed are
in a common queue or a global queue, the scheduler for each processor checks
this global-ready queue and selects a process to execute.
– Symmetrical Scheduling with per-processor queues: If the processors in the system have their own private ready queues, the scheduler for each processor checks its own private queue to select a process.
Processor Affinity

• A process has an affinity for a processor on which it runs. This is called processor affinity.
Let's try to understand why this happens.
• When a process runs on a processor, the data accessed by the process most
recently is populated in the cache memory of this processor. The following
data access calls by the process are often satisfied by the cache memory.
• However, if this process is migrated to another processor for some reason,
the content of the cache memory of the first processor is invalidated, and
the second processor's cache memory has to be repopulated.
• To avoid the cost of invalidating and repopulating the cache memory, the migration of processes from one processor to another is avoided.
• There are two types of processor affinity.
• Soft Affinity: The system has a rule of trying to keep running a process on
the same processor but does not guarantee it. This is called soft affinity.
• Hard Affinity: The system allows the process to specify the subset of processors on which it may run, i.e., each process can run only on some of the processors. Systems such as Linux implement soft affinity, but they also provide system calls such as sched_setaffinity() to support hard affinity, as in the sketch below.
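A minimal Linux sketch of hard affinity using the sched_setaffinity() call mentioned above (Linux-specific; pinning to CPU 0 is an arbitrary choice for illustration):

// affinity_demo.c - pin the calling process to CPU 0 (Linux-specific)
#define _GNU_SOURCE
#include <stdio.h>
#include <sched.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);                /* start with an empty CPU set */
    CPU_SET(0, &set);              /* allow only CPU 0            */
    if (sched_setaffinity(0, sizeof(set), &set) == -1) {  /* 0 = this process */
        perror("sched_setaffinity");
        return 1;
    }
    printf("now pinned to CPU %d\n", sched_getcpu());
    return 0;
}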
Load Balancing

• In a multi-processor system, all processors may not have the same workload. Some may have a long ready queue, while others may be sitting idle.
• To solve this problem, load balancing comes into the picture.
• Load balancing is the practice of distributing the workload so that the processors have an even workload in a symmetric multi-processor system.
• In symmetric multiprocessing systems which have a global queue, load balancing is not required. In such a system, a processor examines the global ready queue and selects a process as soon as it becomes idle.
• However, in systems where each processor has its own private queue, some processors may end up idle while others have a high workload.
• There are two ways to solve this.
• Push Migration: In push migration, a task routinely checks the load on each
processor. Some processors may have long queues while some are idle. If the
workload is unevenly distributed, it will extract the load from the overloaded
processor and assign the load to an idle or a less busy processor.
• Pull Migration: In pull migration, an idle processor will extract the load from an
overloaded processor itself.
Multi-Core Processors

• A multi-core processor is a single computing component comprised of two or more CPUs called cores.
• Each core has a register set to maintain its architectural state and thus appears to the
operating system as a separate physical processor.
• A processor register can hold an instruction, address, etc. Since each core has a
register set, the system behaves as a multi-processor with each core as a processor.
• Symmetric multiprocessing systems which use multi-core processors allow higher
performance at low energy.
Symmetric multiprocessor
• In symmetric multi-processors, there is a single copy of the operating system in memory, which can be run by any central processing unit. When a system call is made, the CPU on which the system call was made traps to the kernel and processes that system call.
• The model works to balance processes and memory dynamically. As the name
suggests, it uses symmetric multiprocessing to schedule processes, and every
processor is self-scheduling. Each processor checks the global or private ready queue
and selects a process to execute it.
• Note: The kernel is the central component of the operating system. It connects the
system hardware to the application software.
• There are three kinds of conflict that may arise in a symmetric multi-processor system. These are as follows.
• Locking system: The resources in a multi-processor are shared among the processors. To make the processors' access to these resources safe, a locking system is required. This is done to serialize resource access by the processors.
• Shared data: Since multiple processors are accessing the same data at any given time,
the data may not be consistent across all of these processors. To avoid this, we must
use some kind of strategy or locking scheme.
• Cache Coherence: When the resource data is stored in multiple local caches and
shared by many clients, it may be rendered invalid if one of the clients changes the
memory block. This can be resolved by maintaining a consistent view of the data.
Master-Slave Multiprocessor

• In a master-slave multi-processor, one CPU works as a master while all others work as slave processors.
others work as slave processors.
• This means the master processor handles all the scheduling processes and
the I/O processes while the slave processors handle the user's
processes. The memory and input-output devices are shared among all
the processors, and all the processors are connected to a common bus.
It uses asymmetric multiprocessing to schedule processes.
Virtualization and Threading

• Virtualization is the process of running multiple operating systems on a computer system.
• So a single CPU can also act as a multi-processor.
• This can be achieved by having a host operating system and other
guest operating systems.
• Different applications run on different operating systems without
interfering with one another.
• A virtual machine is a virtual environment that functions as a virtual
computer with its CPU, memory, network interface, and storage,
created on a physical hardware system.
• In a time-sharing OS, each time slice is around 100 ms (milliseconds) long to give users a reasonable response time.
• On a virtual machine, however, receiving 100 ms of CPU time can take much longer, maybe 1 second or more, because the guest shares the physical CPU with other guests.
• This results in a poor response time for users logged into the virtual machine.
• Since the virtual operating systems receive only a fraction of the available CPU cycles, the clocks in virtual machines may be incorrect.
• This is because their timers take longer to trigger than they would on dedicated CPUs.
Conclusion
• A multi-processor is a system that has more than one processor but shares the
same memory, bus, and input/output devices.
• In multi-processor scheduling, more than one processor (CPU) shares the load to handle the execution of processes smoothly.
• In symmetric multi-processor scheduling, the processors are self-scheduling.
The scheduler for each processor checks the ready queue and selects a process
to execute.
• In asymmetric multi-processor scheduling, there is a master server, and the
rest of them are slave servers.
• A process has an affinity for a processor on which it runs. This is called
processor affinity.
• Load Balancing is the phenomenon of distributing workload so that the
processors have an even workload in a symmetric multi-processor system.
• A multi-core processor is a single computing component comprised of two or
more CPUs.
• THANK YOU