
AVS

COLLEGE OF ARTS & SCIENCE


(AUTONOMOUS)
Attur Main Road, Ramalingapuram, Salem - 106.
(Recognized under section 2(f) & 12(B) of UGC Act 1956 and
Accredited by NAAC with 'A' Grade)
(Co - Educational Institution | Affiliated to Periyar University, Salem
ISO 9001 : 2015 Certified Institution)
principal@avscollege.ac.in | www.avscollege.ac.in
Ph : 98426 29322, 94427 00205.

Study Material
Paper Name : Advanced Operating Systems
Paper Code : 23PCSC05
Batch : 2023-2025
Semester : II
Staff In charge : Umamaheswari .M
Advanced operating system

SYLLABUS (EVEN SEMESTER)

UNIT:1 BASICS OF OPERATING SYSTEMS

Basics of Operating Systems: What is an Operating System? – Main frame Systems –Desktop
Systems – Multiprocessor Systems – Distributed Systems – Clustered Systems –Real-Time
Systems – Handheld Systems – Feature Migration – Computing Environments -Process
Scheduling – Cooperating Processes – Inter Process Communication- Deadlocks –Prevention –
Avoidance – Detection – Recovery.

UNIT:2 DISTRIBUTED OPERATING SYSTEMS

Distributed Operating Systems: Issues – Communication Primitives – Lamport's Logical Clocks –


Deadlock handling strategies – Issues in deadlock detection and resolution-distributed file systems
–design issues – Case studies – The Sun Network File System-Coda.

UNIT:3 REAL TIME OPERATING SYSTEM

Realtime Operating Systems: Introduction – Applications of Real Time Systems – Basic Model of
Real Time System – Characteristics – Safety and Reliability - Real Time Task Scheduling

UNIT:4 HANDHELD SYSTEM

Operating Systems for Handheld Systems: Requirements – Technology Overview –Handheld


Operating Systems – PalmOS – Symbian Operating System – Android – Architecture of Android –
Securing handheld systems.

UNIT:5 CASE STUDIES

Case Studies : Linux System: Introduction – Memory Management – Process Scheduling –


Scheduling Policy - Managing I/O devices – Accessing Files- iOS : Architecture and SDK
Framework - Media Layer - Services Layer - Core OS Layer - File System.

UNIT:6 CONTEMPORARY ISSUES

Expert lectures, online seminars – webinars

Text Books

1. Abraham Silberschatz; Peter Baer Galvin; Greg Gagne, “Operating System Concepts”,
Seventh Edition, John Wiley & Sons, 2004.
2. Mukesh Singhal and Niranjan G. Shivaratri, “Advanced Concepts in Operating Systems –
Distributed, Database, and Multiprocessor Operating Systems”, Tata McGraw-Hill, 2001.

Reference Books

1. Rajib Mall, “Real-Time Systems: Theory and Practice”, Pearson Education India, 2006.
2. Pramod Chandra P. Bhatt, “An Introduction to Operating Systems: Concepts and Practice”, PHI,
Third Edition, 2010.
3. Daniel P. Bovet & Marco Cesati, “Understanding the Linux Kernel”, Third Edition, O'Reilly,
2005.
4. Neil Smyth, “iPhone iOS 4 Development Essentials – Xcode”, Fourth Edition, Payload
Media, 2011.

Related Online Contents [MOOC, SWAYAM, NPTEL, Websites etc.]

1. https://www.udacity.com/course/advanced-operating-systems--ud189
2. https://onlinecourses.nptel.ac.in/noc20_cs04/preview
3. https://minnie.tuhs.org/CompArch/Resources/os-notes.pdf

UNIT 1 Basics of operating system

Introduction of Operating System

 An operating system acts as an intermediary between the user of a computer and computer
hardware. The purpose of an operating system is to provide an environment in which a
user can execute programs conveniently and efficiently.

 An operating system is software that manages computer hardware. The hardware must
provide appropriate mechanisms to ensure the correct operation of the computer system
and to prevent user programs from interfering with the proper operation of the system. A
more common definition is that the operating system is the one program running at all
times on the computer (usually called the kernel), with all else being application programs.

 An operating system is concerned with the allocation of resources and services, such as
memory, processors, devices, and information. The operating system correspondingly
includes programs to manage these resources, such as a traffic controller, a scheduler,
a memory management module, I/O programs, and a file system.

History of Operating System

The operating system has been evolving through the years. The following table shows the history
of OS.

Generation   Year         Electronic devices used       Types of OS / devices

First        1945-55      Vacuum Tubes                  Plug Boards
Second       1955-65      Transistors                   Batch Systems
Third        1965-80      Integrated Circuits (ICs)     Multiprogramming
Fourth       Since 1980   Large Scale Integration       PC

Characteristics of Operating Systems

Let us now discuss some of the important characteristic features of operating systems:

 Device Management: The operating system keeps track of all the devices. So, it is
also called the Input/Output controller that decides which process gets the device,
when, and for how much time.

 File Management: It keeps track of files and directories, allocates and de-allocates file
resources, and decides who gets access to them.
 Job Accounting: It keeps track of time and resources used by various jobs or users.
 Error-detecting Aids: These contain methods that include the production of dumps,
traces, error messages, and other debugging and error-detecting methods.
 Memory Management: It keeps track of the primary memory, like what part of it is
in use by whom, or what part is not in use, etc., and it also allocates the memory when
a process or program requests it.
 Processor Management: It allocates the processor to a process and then de-allocates
the processor when it is no longer required or the job is done.
 Control on System Performance: It records the delays between a request for a
service and the response from the system.
 Security: It prevents unauthorized access to programs and data using passwords or
some kind of protection technique.
 Convenience: An OS makes a computer more convenient to use.
 Efficiency: An OS allows the computer system resources to be used efficiently.
 Ability to Evolve: An OS should be constructed in such a way as to permit the
effective development, testing, and introduction of new system functions at the same
time without interfering with service.
 Throughput: An OS should be constructed so that it can give maximum
throughput (number of tasks per unit time).

Functionalities of Operating System


 Resource Management: When parallel access happens in the OS, i.e., when multiple
users are accessing the system, the OS works as a Resource Manager. Its
responsibility is to provide the hardware to the users, and it decreases the load on the system.
 Process Management: It includes various tasks like scheduling and termination of
the process. It is done with the help of CPU Scheduling algorithms.
 Storage Management: The file system mechanism is used for the management of
storage. NTFS, CIFS, CFS, NFS, etc. are some file systems. All the data is stored on
various tracks of hard disks, all of which are managed by the storage manager.
 Memory Management: Refers to the management of primary memory. The
operating system has to keep track of how much memory has been used and by
whom. It has to decide which process needs memory space and how much. OS also
has to allocate and deallocate the memory space.
 Security/Privacy Management: Privacy is also provided by the Operating system
using passwords so that unauthorized applications can’t access programs or data. For
example, Windows uses Kerberos authentication to prevent unauthorized access to
data.
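
A minimal sketch of the password idea above, using Python's standard hashlib and os modules (the password value and iteration count are purely illustrative): the system stores only a salted hash of the password, and an access attempt is granted only if its hash matches the stored one.

    import hashlib
    import os

    def hash_password(password: str, salt: bytes) -> bytes:
        # Store only a salted hash, never the plain-text password.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    salt = os.urandom(16)
    stored = hash_password("s3cret", salt)      # what the system keeps on disk

    # Later, an access attempt is checked against the stored hash.
    attempt = "s3cret"
    granted = hash_password(attempt, salt) == stored
    print("access granted" if granted else "access denied")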

The operating system as a User Interface:


1. User
2. System and application programs
3. Operating system
4. Hardware
 Every general-purpose computer consists of hardware, an operating system, system
programs, and application programs. The hardware consists of memory, CPU, ALU,
I/O devices, peripheral devices, and storage devices. The system program consists of
compilers, loaders, editors, OS, etc. The application program consists of business
programs and database programs.

Conceptual View of Computer System

 Every computer must have an operating system to run other programs. The operating
system coordinates the use of the hardware among the various system programs and
application programs for various users. It simply provides an environment within
which other programs can do useful work.
 The operating system is a set of special programs that run on a computer system that
allows it to work properly. It performs basic tasks such as recognizing input from the
keyboard, keeping track of files and directories on the disk, sending output to the
display screen, and controlling peripheral devices.

Purposes and Tasks of Operating Systems

 The Operating System performs several tasks and serves a number of purposes, which
are mentioned below. We will see how the Operating System serves us better with the
help of the tasks it performs.

Purposes of an Operating System

 It controls the allocation and use of the computing system’s resources among the
various users and tasks.
 It provides an interface between the computer hardware and the programmer that
simplifies and makes it feasible for coding and debugging of application programs.

Tasks of an Operating System

1. Provides the facilities to create and modify programs and data files using an
editor.
2. Access to the compiler for translating the user program from high-level language
to machine language.
3. Provide a loader program to move the compiled program code to the computer’s
memory for execution.
4. Provide routines that handle the details of I/O programming.

I/O System Management

 The module that keeps track of the status of devices is called the I/O traffic
controller. Each I/O device has a device handler that resides in a separate process
associated with that device.
The I/O subsystem consists of
 A memory Management component that includes buffering caching and spooling.
 A general device driver interface.
 Drivers for specific hardware devices.
The following sections describe some related system programs: assemblers, compilers and
interpreters, and loaders.

Assembler

 The input to an assembler is an assembly language program. The output is an object
program plus information that enables the loader to prepare the object program for
execution. At one time, the computer programmer had at his disposal a basic machine
that interpreted, through hardware, certain fundamental instructions. He would
program this computer by writing a series of ones and zeros (machine language) and
placing them into the memory of the machine.

Compiler and Interpreter

 High-level languages – examples are C, C++, Java, Python, etc. (around 300+
well-known high-level languages) – are processed by compilers and interpreters. A compiler
is a program that accepts a source program in a “high-level language” and produces
machine code in one go. Some of the compiled languages are FORTRAN, COBOL, C,
C++, Rust, and Go. An interpreter is a program that does the same thing but converts
high-level code to machine code line-by-line and not all at once. Examples of
interpreted languages are

 Python
 Perl
 Ruby

Loader

 A Loader is a routine that loads an object program and prepares it for execution. There
are various loading schemes: absolute, relocating, and direct-linking. In general, the
loader must load, relocate and link the object program. The loader is a program that
places programs into memory and prepares them for execution. In a simple loading
scheme, the assembler outputs the machine language translation of a program on a
secondary device and a loader places it in core memory. The loader places into memory the
machine language version of the user’s program and transfers control to it. Since the
loader program is much smaller than the assembler, this makes more core memory available
to the user’s program.

Components of an Operating Systems

There are two basic components of an Operating System.


 Shell
 Kernel
Shell

 Shell is the outermost layer of the Operating System and it handles the interaction with the
user. The main task of the Shell is the management of the interaction between the user and the
OS. The Shell provides this communication by taking input from the user, interpreting that
input for the OS, and handling the output from the OS. It works as a channel of
communication between the user and the OS.

Kernel

 The kernel is one of the components of the Operating System which works as its core
component. The rest of the components depend on the kernel for the supply of the important
services that are provided by the Operating System. The kernel is the primary interface
between the Operating System and the hardware.

Functions of Kernel

The following functions are to be performed by the Kernel.


 It helps in controlling the System Calls (see the short sketch after this list).
 It helps in I/O Management.
 It helps in the management of applications, memory, etc.
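
A minimal sketch of how an ordinary program exercises these kernel services: in Python, functions such as os.open, os.write, and os.read are thin wrappers over the corresponding system calls, so the kernel performs the actual file and I/O management on the program's behalf (the file name below is only an illustration).

    import os

    path = "demo.txt"   # hypothetical file name, for illustration only

    # os.open / os.write / os.close wrap the open, write and close system calls.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    os.write(fd, b"hello from user space\n")
    os.close(fd)

    # Reading the file back, again through system calls handled by the kernel.
    fd = os.open(path, os.O_RDONLY)
    data = os.read(fd, 100)
    os.close(fd)
    print(data.decode())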

Types of Kernel

There are four types of Kernel that are mentioned below.


 Monolithic Kernel
 Microkernel
 Hybrid Kernel
 Exokernel

Difference Between 32-Bit and 64-Bit Operating Systems

 A 32-Bit OS is required for 32-bit processors, as they are not capable of running a
64-bit OS; 64-bit processors can run either a 32-Bit OS or a 64-Bit OS.
 A 32-Bit OS gives lower performance, whereas a 64-Bit OS provides highly efficient
performance.
 A smaller amount of data is managed in a 32-Bit Operating System, whereas a large
amount of data can be stored and handled in a 64-Bit Operating System.
 A 32-Bit Operating System can address 2^32 bytes of RAM, whereas a 64-Bit
Operating System can address 2^64 bytes of RAM.
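
As a quick check of the address-space figures above, a short sketch in plain Python arithmetic converts 2^32 and 2^64 bytes into more familiar units:

    # Addressable memory for 32-bit and 64-bit operating systems.
    GIB = 2 ** 30            # bytes in one gibibyte (GiB)

    addr_32 = 2 ** 32        # bytes addressable with 32-bit addresses
    addr_64 = 2 ** 64        # bytes addressable with 64-bit addresses

    print(addr_32 // GIB, "GiB")   # 4 GiB
    print(addr_64 // GIB, "GiB")   # 17179869184 GiB, i.e. 16 EiB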

Advantages of Operating System

 It helps in managing the data present in the device i.e. Memory Management.
 It helps in making the best use of computer hardware.
 It helps in maintaining the security of the device.
 It helps different applications run efficiently.

Disadvantages of Operating System


 Operating Systems can be difficult for some users to learn and use.
 Some OS are expensive and they require heavy maintenance.
 Operating Systems can come under threat from hackers if not properly secured.

What is an Operating System?

 Operating System lies in the category of system software. It basically manages all the
resources of the computer. An operating system acts as an interface between the
software and different parts of the computer or the computer hardware. The operating
system is designed in such a way that it can manage the overall resources and
operations of the computer.

 Operating System is a fully integrated set of specialized programs that handle all the
operations of the computer. It controls and monitors the execution of all other programs
that reside in the computer, which also includes application programs and other system
software of the computer. Examples of Operating Systems are Windows, Linux, Mac
OS, etc.

 An Operating System (OS) is a collection of software that manages computer hardware


resources and provides common services for computer programs. The operating system
is the most important type of system software in a computer system.

What is an Operating System Used for?


 The operating system helps in improving the computer software as well as hardware.
Without an OS, it becomes very difficult for any application to be user-friendly. The
Operating System provides a user with an interface that makes any application
attractive and user-friendly. The operating system comes with a large number of device
drivers that make OS services reachable to the hardware environment. Each and every
application present in the system requires the Operating System. The operating system
works as a communication channel between system hardware and system software.
The operating system lets an application use the hardware without knowing the
actual hardware configuration. It is one of the most important parts of the
system and hence it is present in every device, whether large or small.

Functions of the Operating System

 Resource Management: The operating system manages and allocates memory, CPU
time, and other hardware resources among the various programs and processes
running on the computer.
 Process Management: The operating system is responsible for starting, stopping,
and managing processes and programs. It also controls the scheduling of processes
and allocates resources to them.
 Memory Management: The operating system manages the computer’s primary
memory and provides mechanisms for optimizing memory usage.
 Security: The operating system provides a secure environment for the user,
applications, and data by implementing security policies and mechanisms such as
access controls and encryption.
 Job Accounting: It keeps track of time and resources used by various jobs or users.
 File Management: The operating system is responsible for organizing and managing
the file system, including the creation, deletion, and manipulation of files and
directories.
 Device Management: The operating system manages input/output devices such as
printers, keyboards, mice, and displays. It provides the necessary drivers and
interfaces to enable communication between the devices and the computer.
 Networking: The operating system provides networking capabilities such as
establishing and managing network connections, handling network protocols, and
sharing resources such as printers and files over a network.
 User Interface: The operating system provides a user interface that enables users to
interact with the computer system. This can be a Graphical User Interface (GUI), a
Command-Line Interface (CLI), or a combination of both.
 Backup and Recovery: The operating system provides mechanisms for backing up
data and recovering it in case of system failures, errors, or disasters.
 Virtualization: The operating system provides virtualization capabilities that allow
multiple operating systems or applications to run on a single physical machine. This
can enable efficient use of resources and flexibility in managing workloads.

 Performance Monitoring: The operating system provides tools for monitoring and
optimizing system performance, including identifying bottlenecks, optimizing
resource usage, and analyzing system logs and metrics.
 Time-Sharing: The operating system enables multiple users to share a computer
system and its resources simultaneously by providing time-sharing mechanisms that
allocate resources fairly and efficiently.
 System Calls: The operating system provides a set of system calls that enable
applications to interact with the operating system and access its resources. System
calls provide a standardized interface between applications and the operating system,
enabling portability and compatibility across different hardware and software
platforms.
 Error-detecting Aids: These contain methods that include the production of dumps,
traces, error messages, and other debugging and error-detecting methods.

Objectives of Operating Systems

Let us now see some of the objectives of the operating system, which are mentioned below.
 Convenient to use: One of the objectives is to make the computer system more
convenient to use in an efficient manner.
 User Friendly: To make the computer system more interactive with a more
convenient interface for the users.
 Easy Access: To provide easy access to users for using resources by acting as an
intermediary between the hardware and its users.
 Management of Resources: For managing the resources of a computer in a better
and faster way.
 Controls and Monitoring: By keeping track of who is using which resource,
granting resource requests, and mediating conflicting requests from different
programs and users.
 Fair Sharing of Resources: Providing efficient and fair sharing of resources
between the users and programs.

Types of Operating Systems

 Batch Operating System: A Batch Operating System is a type of operating system


that does not interact with the computer directly. There is an operator who takes
similar jobs having the same requirements and groups them into batches.
 Time-sharing Operating System: Time-sharing Operating System is a type of
operating system that allows many users to share computer resources (maximum
utilization of the resources).
 Distributed Operating System: Distributed Operating System is a type of operating
system that manages a group of different computers and makes them appear to be a single
computer. These operating systems are designed to operate on a network of
computers. They allow multiple users to access shared resources and communicate
with each other over the network. Examples include Microsoft Windows Server and
various distributions of Linux designed for servers.
 Network Operating System: Network Operating System is a type of operating
system that runs on a server and provides the capability to manage data, users,
groups, security, applications, and other networking functions.

 Real-time Operating System: Real-time Operating System is a type of operating


system that serves a real-time system and the time interval required to process and
respond to inputs is very small. These operating systems are designed to respond to
events in real time. They are used in applications that require quick and deterministic
responses, such as embedded systems, industrial control systems, and robotics.
 Multiprocessing Operating System: Multiprocessor Operating Systems use multiple
CPUs within a single computer system to boost performance. Multiple CPUs are linked
together so that a job can be divided and executed more quickly.
 Single-User Operating Systems: Single-User Operating Systems are designed to
support a single user at a time. Examples include Microsoft Windows for personal
computers and Apple macOS.
 Multi-User Operating Systems: Multi-User Operating Systems are designed to
support multiple users simultaneously. Examples include Linux and Unix.
 Embedded Operating Systems: Embedded Operating Systems are designed to run
on devices with limited resources, such as smartphones, wearable devices, and
household appliances. Examples include Google’s Android and Apple’s iOS.
 Cluster Operating Systems: Cluster Operating Systems are designed to run on a
group of computers, or a cluster, to work together as a single system. They are used
for high-performance computing and for applications that require high availability
and reliability. Examples include Rocks Cluster Distribution and OpenMPI.

How to Choose an Operating System?

 There are many factors to be considered while choosing the best Operating System
for our use. These factors are mentioned below.
 Price Factor: Price is one of the factors in choosing the correct Operating System,
as some OS are free, like Linux, while others are paid, like Windows and macOS.
 Accessibility Factor: Some Operating Systems are easy to use, like macOS and
iOS, but some OS are a little complex to understand, like Linux. So, you must
choose the Operating System that is most accessible to you.
 Compatibility Factor: Some Operating Systems support very few applications
whereas some Operating Systems support more applications. You must choose the
OS which supports the applications that you require.
 Security Factor: The security factor also matters in choosing the correct OS;
for example, macOS provides some additional security while Windows has somewhat
fewer security features.

Examples of Operating Systems

 Windows (GUI-based, PC)


 GNU/Linux (Personal, Workstations, ISP, File, and print server, Three-tier
client/Server)
 macOS (Macintosh), used for Apple’s personal computers and workstations
(MacBook, iMac).
 Android (Google’s Operating System for smartphones/tablets/smartwatches)
 iOS (Apple’s OS for iPhone, iPad, and iPod Touch)

Main frame Systems

What is a mainframe system in OS?

A mainframe is a large computer system designed to process very large amounts of data
quickly. Mainframe systems are widely used in industries like the financial sector, airline
reservations, logistics and other fields where a large number of transactions need to be
processed as part of routine business practices.

The different types of Operating System are as follows,

Mainframe Operating Systems

 These types of operating systems are mainly used in E-commerce websites or servers
that are dedicated for business-to-business transactions.
 The operating system in mainframe systems is oriented in a way that it can handle
many jobs simultaneously.
 Mainframe Operating Systems can handle a large volume of input/output
transactions.

The services of mainframe operating systems are as follows −

 To handle the batch processing of jobs.


 To handle the transaction processing of multiple requests.
 Timesharing of servers that allows multiple remote users to access the server.

Server Operating Systems

 Server operating systems run on machines which act as dedicated servers. Examples
of server operating systems are Solaris, Linux, and Windows Server.
 Server operating systems allow the sharing of multiple resources such as hardware, files,
or print services. Web pages are stored on a server, which handles their requests and responses.

Multiprocessor Operating System

 These types of operating systems are also known as parallel computers or
multicomputers, depending on how the multiple processors are connected and shared.
 These computers have strong connectivity and high-speed communication
mechanisms. Personal computers are now created and embedded with multiprocessor
technology.
 Multiprocessor operating systems achieve high processing speed by combining multiple
processors into a single system.

Personal Operating Systems

 These types of operating systems are installed on machines used by ordinary users, in
large numbers.
 They support multiprogramming, running multiple programs like word, excel, games,
and Internet access simultaneously on a single machine.
For example − Linux, Windows, Mac

Handheld Operating System

 Handheld operating systems are present in all handheld devices like Smartphones and
tablets. It is also called a Personal Digital Assistant. The popular handheld device in
today’s market is android and iOS.
 These operating systems need a highly capable processor and are also embedded with
different types of sensors.

Embedded Operating Systems

 Embedded operating systems are designed for the systems that are not considered as
computers. These operating systems are preinstalled on the devices by the device
manufacturer.
 All pre-installed software is in ROM and no changes can be made to it by the
users. Examples of devices with embedded operating systems are washing machines, ovens, etc.

Real-Time Operating Systems

 Real Time Operating Systems concentrate on time constraints because they are used in
applications that are very critical in terms of safety. These systems are divided into
hard real time and soft real time.
 Hard real time systems have stringent time constraints; certain actions must
occur at that time only. Components are tightly coupled in hard real time systems.
 Soft real time operating systems may sometimes miss deadlines, but doing so will not
cause any damage.

Smart Card Operating System

 Smart Card Operating Systems run on smart cards. A smart card contains a processor chip
embedded inside the card. These systems operate under tight processing power and memory
constraints.
 These operating systems can handle single functions like making an electronic payment,
and they are licensed software.

Desktop Systems in OS

 A desktop system refers to a personal computer setup that is typically used on a desk or
table. It consists of various hardware components and an operating system that enables
users to perform a wide range of tasks such as document editing, web browsing,
gaming, multimedia consumption, and more.

Operating Systems for Desktop Systems

 An operating system (OS) acts as an interface between the hardware and software of a
desktop system. It manages system resources, facilitates software execution, and
provides a user-friendly environment. Different operating systems offer distinct
features, compatibility, and performance, catering to the diverse needs and preferences
of users.

Importance of Desktop Systems

 Desktop systems play a crucial role in various domains, including education, business,
entertainment, and personal productivity.
 They provide individuals and organizations with powerful computing capabilities,
enabling complex tasks to be completed efficiently.
 Desktop systems facilitate creativity, communication, data analysis, and knowledge
sharing, contributing to enhanced productivity and innovation

Components of a Desktop System

 Central Processing Unit (CPU): The CPU is the brain of a desktop system,
responsible for executing instructions and performing calculations. It processes data
and carries out tasks based on the instructions provided by software programs. The
CPU’s performance is measured by its clock speed, number of cores, and cache size.

 Random Access Memory (RAM): RAM is a type of volatile memory that temporarily
stores data and instructions for the CPU to access quickly. It allows for efficient
multitasking and faster data retrieval, significantly impacting the overall performance
of the system. The amount of RAM in a desktop system determines its capability to
handle multiple programs simultaneously.

 Storage Devices: Desktop systems utilize various storage devices to store and retrieve
data. Hard Disk Drives (HDDs) are the traditional storage medium, offering large
capacities but slower read/write speeds. Solid-State Drives (SSDs) are a newer
technology that provides faster data access, enhancing the system’s responsiveness and
reducing loading times.

 Graphics Processing Unit (GPU): The GPU is responsible for rendering images,
videos, and animations on the computer screen. It offloads the graphical processing
tasks from the CPU, ensuring smooth visuals and enabling resource-intensive
applications such as gaming, video editing, and 3D modeling. High-performance GPUs
are essential for users who require demanding graphical capabilities.

 Input and Output Devices: Desktop systems are equipped with various input and
output devices. Keyboards and mice are the primary input devices, allowing users to
interact with the system and input commands. Monitors, printers, speakers, and
headphones serve as output devices, providing visual or auditory feedback based on the
system’s output.

Evolution of Desktop Systems

 Desktop systems have evolved significantly over the years. From the bulky and
limited-capability systems of the past to the sleek and powerful computers of today,
technological advancements have revolutionized the desktop computing experience.
 Smaller form factors, increased processing power, improved storage technologies, and
enhanced user interfaces are some of the notable advancements that have shaped the
evolution of desktop systems.

Popular Desktop Operating Systems

 Windows: Windows, developed by Microsoft, is one of the most widely used desktop
operating systems globally.
 macOS: macOS is the operating system designed specifically for Apple’s Mac
computers. Known for its sleek and intuitive interface, macOS offers seamless
integration with other Apple devices and services.

 Linux: Linux is an open-source operating system that provides a high degree of
customization and flexibility. It is favored by developers, system administrators, and
tech enthusiasts due to its stability, security, and vast array of software options.

Future Trends in Desktop Systems

 The future of desktop systems holds exciting possibilities. As technology continues to


advance, we can expect further improvements in processing power, storage capacities,
and energy efficiency. Virtual reality (VR) and augmented reality (AR) integration,
cloud-based computing, artificial intelligence (AI) integration, and seamless
connectivity across devices are some of the trends that will shape the future of desktop
systems.

Multiprocessor Systems

 Most computer systems are single processor systems, i.e., they only have one processor.
However, multiprocessor or parallel systems are increasing in importance nowadays.
These systems have multiple processors working in parallel that share the computer
clock, memory, bus, peripheral devices, etc. An image demonstrating the
multiprocessor architecture is shown below.


Applications of Multiprocessor

 As a uniprocessor, such as single instruction, single data stream (SISD).


 As a multiprocessor, such as single instruction, multiple data stream (SIMD),
which is usually used for vector processing.
 Multiple series of instructions in a single perspective, such as multiple instruction,
single data stream (MISD), which is used for describing hyper-threading or
pipelined processors.
 Inside a single system for executing multiple, individual series of instructions in
multiple perspectives, such as multiple instruction, multiple data stream (MIMD).

Benefits of using a Multiprocessor

 Enhanced performance.
 Multiple applications.
 Multi-tasking inside an application.
 High throughput and responsiveness.
 Hardware sharing among CPUs.

Advantages:

 Improved performance: Multiprocessor systems can execute tasks faster than


single-processor systems, as the workload can be distributed across multiple
processors.
 Better scalability: Multiprocessor systems can be scaled more easily than single-
processor systems, as additional processors can be added to the system to handle
increased workloads.
 Increased reliability: Multiprocessor systems can continue to operate even if one
processor fails, as the remaining processors can continue to execute tasks.
 Reduced cost: Multiprocessor systems can be more cost-effective than building
multiple single-processor systems to handle the same workload.
 Enhanced parallelism: Multiprocessor systems allow for greater parallelism, as
different processors can execute different tasks simultaneously.

Disadvantages:

 Increased complexity: Multiprocessor systems are more complex than single-


processor systems, and they require additional hardware, software, and management
resources.
 Higher power consumption: Multiprocessor systems require more power to operate
than single-processor systems, which can increase the cost of operating and
maintaining the system.
 Difficult programming: Developing software that can effectively utilize multiple
processors can be challenging, and it requires specialized programming skills.

 Synchronization issues: Multiprocessor systems require synchronization between
processors to ensure that tasks are executed correctly and efficiently, which can add
complexity and overhead to the system.
 Limited performance gains: Not all applications can benefit from multiprocessor
systems, and some applications may only see limited performance gains when
running on a multiprocessor system.

Distributed System

 A Distributed Operating System refers to a model in which applications run on


multiple interconnected computers, offering enhanced communication and integration
capabilities compared to a network operating system. In a Distributed OS,
multiple CPUs are utilized, but for end-users, it appears as a typical
centralized operating system. It enables the sharing of various resources such as
CPUs, disks, network interfaces, nodes, and computers across different sites, thereby
expanding the available data within the entire system.
 Effective communication channels like high-speed buses and telephone lines connect
all the processors, and each processor is equipped with its own local memory alongside its
neighboring processors. Due to its characteristics, a distributed operating system is classified
as a loosely coupled system. It encompasses multiple computers, nodes, and sites, all
interconnected through LAN/WAN lines. The ability of a Distributed OS to share
processing resources and I/O files while providing users with a virtual machine
abstraction is an important feature.
The diagram below illustrates the structure of a distributed operating system:

Types of Distributed Operating System

There are three types of Distributed OS:

 Client-Server Systems: This tightly coupled operating system is appropriate for
multiprocessors and homogeneous multicomputers. It functions as a centralized server,
handling and approving all requests originating from client systems.
 Peer-to-Peer Systems: This loosely coupled system is implemented in computer
network applications, consisting of multiple processors without shared memories or
clocks. Each processor possesses its own local memory, and communication between
processors occurs through high-speed buses or telephone lines.
 Middleware: It facilitates interoperability among applications running on different
operating systems. By employing these services, applications can exchange data with
each other, ensuring distribution transparency.

Applications of Distributed Operating System

The applications of a Distributed OS encompass various domains as below:


 Internet Technology
 Distributed Databases System
 Air Traffic Control System
 Airline Reservation Control Systems
 Peer-to-Peer Networks System
 Telecommunication Networks
 Scientific Computing System
 Cluster Computing
 Grid Computing
 Data Rendering

Security in Distributed Operating system

 Protection and security are crucial aspects of a Distributed Operating System,


especially in organizational settings. Measures are employed to safeguard the system
from potential damage or loss caused by external sources. Various security measures
can be implemented, including authentication methods such as username/password
and user key. One Time Password (OTP) is also commonly utilized in distributed OS
security applications.

Clustered Systems

 Clustered systems are similar to parallel systems as they both have multiple CPUs.
However, a major difference is that clustered systems are created by two or more
individual computer systems merged together. Basically, they are independent
computer systems with a common storage, and the systems work together.
A diagram to better illustrate this is −

 The clustered systems are a combination of hardware clusters and software clusters.
The hardware clusters help in the sharing of high-performance disks between the
systems. The software clusters make all the systems work together.
 Each node in the clustered systems contains the cluster software. This software
monitors the cluster system and makes sure it is working as required. If any one of
the nodes in the clustered system fails, then the rest of the nodes take control of its
storage and resources and try to restart it.

 Types of Clustered Systems
 There are primarily two types of clustered systems, i.e., the asymmetric clustering system
and the symmetric clustering system. Details about these are given as follows −
 Asymmetric Clustering System
 In this system, one of the nodes in the clustered system is in hot standby mode and all
the others run the required applications. The hot standby mode is a failsafe in which a
hot standby node is part of the system. The hot standby node continuously monitors
the server and, if it fails, the hot standby node takes its place.
 Symmetric Clustering System
 In a symmetric clustering system, two or more nodes all run applications as well as
monitor each other. This is more efficient than the asymmetric system as it uses all the
hardware and doesn't keep a node merely as a hot standby.

Attributes of Clustered Systems

There are many different purposes that a clustered system can be used for, such as
scientific calculations, web support, etc. The major attributes that clustered systems
embody are −

 Load Balancing Clusters

In this type of cluster, the nodes in the system share the workload to provide better
performance. For example, a web-based cluster may assign different web queries to
different nodes so that the system performance is optimized. Some clustered systems use a
round robin mechanism to assign requests to different nodes in the system.
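
A small sketch of the round-robin assignment just described, assuming hypothetical node names and request paths: each incoming web query is simply handed to the next node in a fixed rotation.

    from itertools import cycle

    # Hypothetical cluster nodes; the names are for illustration only.
    nodes = cycle(["node-1", "node-2", "node-3"])

    requests = ["/index.html", "/search?q=os", "/login", "/about", "/faq"]

    # Round-robin dispatch: each request goes to the next node in rotation.
    for req in requests:
        node = next(nodes)
        print(f"{req} -> {node}")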

 High Availability Clusters

These clusters improve the availability of the clustered system. They have extra nodes which
are only used if some of the system components fail. So, high availability clusters remove
single points of failure i.e. nodes whose failure leads to the failure of the system. These types
of clusters are also known as failover clusters or HA clusters.

Benefits of Clustered Systems


The different benefits of clustered systems are as follows −

 Performance

Clustered systems result in high performance as they contain two or more individual
computer systems merged together. These work as a parallel unit and result in much better
performance for the system.

 Fault Tolerance

Clustered systems are quite fault tolerant and the loss of one node does not result in the loss
of the system. They may even contain one or more nodes in hot standby mode which allows
them to take the place of failed nodes.

 Scalability

Clustered systems are quite scalable as it is easy to add a new node to the system. There is
no need to take the entire cluster down to add a new node.

Real time systems

A real-time system means that the system is subjected to real-time, i.e., the response should
be guaranteed within a specified timing constraint or the system should meet the specified
deadline. For example flight control systems, real-time monitors, etc.

Types of real-time systems based on timing constraints:

1. Hard real-time system: This type of system can never miss its deadline. Missing the
deadline may have disastrous consequences. The usefulness of results produced by a
hard real-time system decreases abruptly and may become negative if tardiness
increases. Tardiness means how late a real-time system completes its task with
respect to its deadline. Example: Flight controller system.
2. Soft real-time system: This type of system can miss its deadline occasionally with
some acceptably low probability. Missing the deadline has no disastrous
consequences. The usefulness of results produced by a soft real-time system
decreases gradually with an increase in tardiness. Example: Telephone switches.
3. Firm Real-Time Systems: These are systems that lie between hard and soft real-time
systems. In firm real-time systems, missing a deadline is tolerable, but the usefulness
of the output decreases with time. Examples of firm real-time systems include online
trading systems, online auction systems, and reservation systems.

Reference model of the real-time system:

Our reference model is characterized by three elements:

1. A workload model: It specifies the application supported by the system.


2. A resource model: It specifies the resources available to the application.
3. Algorithms: It specifies how the application system will use resources.

Terms related to real-time system:

1. Job: A job is a small piece of work that can be assigned to a processor and may or
may not require resources.
2. Task: A set of related jobs that jointly provide some system functionality.
3. Release time of a job: It is the time at which the job becomes ready for execution.
4. Execution time of a job: It is the time taken by the job to finish its execution.
5. Deadline of a job: It is the time by which a job should finish its execution. Deadline
is of two types: absolute deadline and relative deadline.
6. Response time of a job: It is the length of time from the release time of a job to the
instant when it finishes.
7. The maximum allowable response time of a job is called its relative deadline.
8. The absolute deadline of a job is equal to its relative deadline plus its release time
(see the short example after this list).
9. Processors are also known as active resources. They are essential for the execution of
a job. A job must have one or more processors in order to execute and proceed
towards completion. Example: computer, transmission links.

10. Resources are also known as passive resources. A job may or may not require a
resource during its execution. Example: memory, mutex
11. Two resources are identical if they can be used interchangeably else they are
heterogeneous.
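
A short worked example of the timing terms above, with made-up values in milliseconds: the absolute deadline is the release time plus the relative deadline, and the job meets its deadline when its response time does not exceed the relative deadline.

    # Illustrative job timing values (all in milliseconds).
    release_time      = 10    # when the job becomes ready for execution
    execution_start   = 14    # when the processor actually starts the job
    execution_time    = 20    # time taken by the job to finish once started
    relative_deadline = 30    # maximum allowable response time

    absolute_deadline = release_time + relative_deadline      # 10 + 30 = 40
    finish_time       = execution_start + execution_time      # 14 + 20 = 34
    response_time     = finish_time - release_time            # 34 - 10 = 24

    print("absolute deadline:", absolute_deadline)
    print("response time    :", response_time)
    print("deadline met     :", response_time <= relative_deadline)   # True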

Advantages:

 Real-time systems provide immediate and accurate responses to external events,


making them suitable for critical applications such as air traffic control, medical
equipment, and industrial automation.
 They can automate complex tasks that would otherwise be impossible to perform
manually, thus improving productivity and efficiency.
 Real-time systems can reduce human error by automating tasks that require precision,
accuracy, and consistency.
 They can help to reduce costs by minimizing the need for human intervention and
reducing the risk of errors.
 Real-time systems can be customized to meet specific requirements, making them
ideal for a wide range of applications.

Disadvantages:

 Real-time systems can be complex and difficult to design, implement, and test,
requiring specialized skills and expertise.
 They can be expensive to develop, as they require specialized hardware and software
components.
 Real-time systems are typically less flexible than other types of computer systems, as
they must adhere to strict timing requirements and cannot be easily modified or
adapted to changing circumstances.
 They can be vulnerable to failures and malfunctions, which can have serious
consequences in critical applications.
 Real-time systems require careful planning and management, as they must be
continually monitored and maintained to ensure they operate correctly.

Handheld Operating System

 An operating system is a program whose job is to manage a computer’s hardware.


Its other use is that it also provides a basis for application programs and acts as an
intermediary between the computer user and the computer hardware. An amazing
feature of operating systems is how they vary in accomplishing these tasks.
Operating systems for mobile computers provide us with an environment in
which we can easily interface with the computer so that we can execute the
programs. Thus, some of the operating systems are made to be convenient, others
to be well-organized, and the rest to be some combination of the two.

Handheld Operating System:

 Handheld operating systems are available in all handheld devices like
smartphones and tablets. Such a device is sometimes also known as a Personal Digital
Assistant. The popular handheld operating systems in today’s world are Android and iOS.
These operating systems need a highly capable processor and are also embedded
with various types of sensors.

Some points related to Handheld operating systems are as follows:

1. Since the development of handheld computers in the 1990s, the demand for software
to operate and run on these devices has increased.
2. Three major competitors have emerged in the handheld PC world with three different
operating systems for these handheld PCs.
3. Out of the three companies, the first was the Palm Corporation with their PalmOS.
4. Microsoft also released what was originally called Windows CE. Microsoft’s recently
released operating system for the handheld PC comes under the name of Pocket PC.
5. More recently, some companies producing handheld PCs have also started offering a
handheld version of the Linux operating system on their machines.

Features of Handheld Operating System:

1. Its work is to provide real-time operations.


2. There is direct usage of interrupts.
3. Input/Output device flexibility.
4. Configurability.

Types of Handheld Operating Systems:

Types of Handheld Operating Systems are as follows:

 Palm OS
 Symbian OS
 Linux OS
 Windows
 Android

Palm OS:

 Since the Palm Pilot was introduced in 1996, the Palm OS platform has provided
various mobile devices with essential business tools, as well as the capability that
they can access the internet via a wireless connection.
 These devices have mainly concentrated on providing basic personal-information-
management applications. The latest Palm products have progressed a lot, packing in
more storage, wireless internet, etc.

Symbian OS:

 It has been the most widely-used smartphone operating system because of its ARM
architecture before it was discontinued in 2014. It was developed by Symbian Ltd.
 This operating system consists of two subsystems where the first one is the
microkernel-based operating system which has its associated libraries and the second
one is the interface of the operating system with which a user can interact.
 Since this operating system consumes very little power, it was developed for
smartphones and handheld devices.

 It has good connectivity as well as stability.
 It can run applications that are written in Python, Ruby, .NET, etc.

Linux OS:

 Linux OS is an open-source operating system project which is a cross-platform


system that was developed based on UNIX.
It was developed by Linus Torvalds. It is system software that basically allows
apps and users to perform certain tasks on the PC.
 Linux is free and can be easily downloaded from the internet and it is considered that
it has the best community support.
 Linux is portable which means it can be installed on different types of devices like
mobile, computers, and tablets.
 It is a multi-user operating system.
 Linux interpreter program which is called BASH is used to execute commands.
 It provides user security using authentication features.

Windows OS:

 Windows is an operating system developed by Microsoft. Its interface which is


called Graphical User Interface eliminates the need to memorize commands for the
command line by using a mouse to navigate through menus, dialog boxes, and
buttons.
 It is named Windows because its programs are displayed in the form of on-screen
windows. It has been designed for both beginners as well as professionals.
 It comes preloaded with many tools which help the users to complete all types of
tasks on their computer, mobiles, etc.
 It has a large user base so there is a much larger selection of available software
programs.
 One great feature of Windows is that it is backward compatible which means that its
old programs can run on newer versions as well.

Android OS:

 It is a Google Linux-based operating system that is mainly designed for touchscreen


devices such as phones, tablets, etc.
Three hardware architectures, ARM, Intel, and MIPS, are used by the hardware to
support Android. The touchscreen lets users manipulate the devices intuitively,
with movements of our fingers that mirror common motions such as swiping,
tapping, etc.
 Android operating system can be used by anyone because it is an open-source
operating system and it is also free.
 It offers 2D and 3D graphics, GSM connectivity, etc.
 There is a huge list of applications for users since Play Store offers over one million
apps.
 Professionals who want to develop applications for the Android OS can download the
Android Development Kit. By downloading it they can easily develop apps for
android.

Advantages of Handheld Operating System:

1. Less Cost.
2. Less weight and size.
3. Less heat generation.
4. More reliability.

Disadvantages of Handheld Operating System:

1. Less Speed.
2. Small Size.
3. Input / Output System (memory issue or less memory is available).

How Handheld operating systems are different from Desktop operating systems?

 Since the handheld operating systems are mainly designed to run on machines that
have lower speed resources as well as less memory, they were designed in a way that
they use less memory and require fewer resources.
 They are also designed to work with different types of hardware as compared to
standard desktop operating systems.
It happens because the power requirements for standard CPUs far exceed the power
of handheld devices.
 Handheld devices aren’t able to dissipate large amounts of heat generated by CPUs.
To deal with such kind of problem, big companies like Intel and Motorola have
designed smaller CPUs with lower power requirements and also lower heat
generation. Many handheld devices fully depend on flash memory cards for their
internal memory because large hard drives do not fit into handheld devices.

Migration Of Operating-System Concepts And Features

 I/O routines supplied by the system.
 Memory Management - the system should allocate memory to multiple jobs.
 CPU Scheduling - the system should select among a number of jobs ready to run.
 Allocation of devices.

Computing Environment

What is Computing Environment?

 When we want to solve a problem using a computer, the computer makes use of
various devices which work together to solve that problem. There may be
various ways to solve a problem, and we use a number of computer
devices arranged in different ways to solve different problems. The
arrangement of computer devices to solve a problem is said to be the computing
environment. The formal definition of computing environment is as follows...
 Computing Environment is a collection of computers which are used to
process and exchange the information to solve various types of computing
problems.

Types of Computing Environments

The following are the various types of computing environments.



1. Personal Computing Environment


2. Time Sharing Computing Environment
3. Client Server Computing Environment
4. Distributed Computing Environment
5. Grid Computing Environment
6. Cluster Computing Environment

Personal Computing Environment

 The personal computing environment is a stand-alone machine. In this
environment, the complete program resides on the stand-alone machine and is
executed from the same machine. Laptops, mobile devices, printers, scanners and
the computer systems we use at home or in the office are examples of the personal
computing environment.

Time Sharing Computing Environment

 In the time sharing computing environment, a stand-alone computer allows a
single user to perform multiple operations at a time by using a multitasking
operating system. The processor time is divided among the different tasks, which
is called "time sharing". For example, a user can listen to music while writing
something in a text editor. Windows 95 and later versions of Windows, iOS and
Linux are examples of operating systems used in this computing environment.

Client Server Computing Environment

 The client server environment contains two kinds of machines: the client
machine and the server machine. These two machines exchange information through
an application. The client is a normal computer such as a PC, tablet or mobile
phone, while the server is a powerful computer which stores huge amounts of data
and manages large numbers of files, emails, etc. In this environment the client
requests data and the server provides data to the client. The communication
between client and server is commonly performed using HTTP (Hyper Text Transfer
Protocol). A minimal sketch of this request and response pattern is shown below.
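
The sketch below uses only Python's standard library; the port number (8000) and
the reply text are hypothetical choices for illustration, not part of any
particular system.

    from http.server import BaseHTTPRequestHandler, HTTPServer
    import threading, urllib.request

    class DataHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # The server answers every client request with a small piece of data.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"data from the server")

    server = HTTPServer(("localhost", 8000), DataHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # The client sends an HTTP request and receives the data in the response.
    print(urllib.request.urlopen("http://localhost:8000/").read())
    server.shutdown()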

Distributed Computing Environment

 In the distributed computing environment, the complete functionality of a piece
of software is not located on a single computer but is distributed among multiple
computers. Here we use a method of computer processing in which different
programs of an application run simultaneously on two or more computers. These
computers communicate with each other over a network to perform the complete
task. In the distributed computing environment, the data is distributed among
different systems, and that data is logically related.

Grid Computing Environment

 Grid computing is a collection of computers from different locations, all
working on a common problem. A grid can be described as a distributed collection
of a large number of computers working on a single application.

Cluster Computing Environment

 Cluster computing is a collection of interconnected computers that work together
to solve a single problem. In the cluster computing environment, a collection of
systems works together as a single system.

Process Scheduling

 Process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another process
on the basis of a particular strategy.
 Process scheduling is an essential part of multiprogramming operating systems.
Such operating systems allow more than one process to be loaded into executable
memory at a time, and the loaded processes share the CPU using time multiplexing.

Categories of Scheduling

There are two categories of scheduling:

1. Non-preemptive: Here a resource cannot be taken from a process until the
process completes execution. The switching of resources occurs only when the
running process terminates or moves to a waiting state.
2. Preemptive: Here the OS allocates resources to a process for a fixed amount of
time. During resource allocation, a process may switch from the running state to
the ready state or from the waiting state to the ready state. This switching
occurs because the CPU may give priority to other processes and replace the
currently running process with a higher-priority one. A small round-robin sketch
of preemptive scheduling follows.
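
The following is a minimal sketch of preemptive scheduling under the round-robin
policy. The burst times and the time quantum of 2 units are hypothetical values
chosen only for illustration.

    from collections import deque

    def round_robin(bursts, quantum=2):
        ready = deque(bursts.items())        # ready queue of (pid, remaining burst time)
        time = 0
        while ready:
            pid, remaining = ready.popleft()
            run = min(quantum, remaining)    # the CPU is taken back after one quantum
            time += run
            if remaining - run > 0:
                ready.append((pid, remaining - run))   # preempted: back to the ready queue
            else:
                print(pid, "finishes at time", time)

    round_robin({"P1": 5, "P2": 3, "P3": 1})

Under a non-preemptive policy the preemption step would disappear: each process
would keep the CPU until its entire burst was finished.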
Process Scheduling Queues

 The OS maintains all Process Control Blocks (PCBs) in process scheduling queues.
The OS maintains a separate queue for each process state, and the PCBs of all
processes in the same execution state are placed in the same queue. When the
state of a process changes, its PCB is unlinked from its current queue and moved
to its new state queue.
 The operating system maintains the following important process scheduling
queues:

 Job queue − This queue keeps all the processes in the system.
 Ready queue − This queue keeps the set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
 Device queues − The processes which are blocked due to the unavailability of an
I/O device constitute this queue.

 The OS can use different policies to manage each queue (FIFO, Round Robin,
Priority, etc.). The OS scheduler determines how to move processes between the
ready queue and the run queue, which can only have one entry per processor core
on the system.
 Two-State Process Model
Two-state process model refers to the running and not-running states, which are
described below:

1. Running: When a new process is created, it enters the system in the running
state.

2. Not Running: Processes that are not running are kept in a queue, waiting for
their turn to execute. Each entry in the queue is a pointer to a particular
process, and the queue is implemented using a linked list. The dispatcher works
as follows: when a process is interrupted, it is transferred to the waiting
queue; if the process has completed or aborted, it is discarded. In either case,
the dispatcher then selects a process from the queue to execute.

Schedulers

 Schedulers are special system software which handle process scheduling in various
ways. Their main task is to select the jobs to be submitted into the system and to decide
which process to run. Schedulers are of three types,

Long-Term Scheduler

Short-Term Scheduler

Medium-Term Scheduler

Long Term Scheduler

 It is also called a job scheduler. A long-term scheduler determines which programs are
admitted to the system for processing. It selects processes from the queue and loads
them into memory for execution. Process loads into the memory for CPU scheduling.

 The primary objective of the job scheduler is to provide a balanced mix of jobs, such as
I/O bound and processor bound. It also controls the degree of multiprogramming. If the
degree of multiprogramming is stable, then the average rate of process creation must be
equal to the average departure rate of processes leaving the system.

 On some systems, the long-term scheduler may not be available or minimal. Time-
sharing operating systems have no long term scheduler. When a process changes the
state from new to ready, then there is use of long-term scheduler.
Short Term Scheduler

 It is also called as CPU scheduler. Its main objective is to increase system


performance in accordance with the chosen set of criteria. It is the change of ready
state to running state of the process. CPU scheduler selects a process among the
processes that are ready to execute and allocates CPU to one of them.
 Short-term schedulers, also known as dispatchers, make the decision of which process
to execute next. Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler

 Medium-term scheduling is a part of swapping. It removes the processes from the


memory. It reduces the degree of multiprogramming. The medium-term scheduler is in
charge of handling the swapped-out processes.
 A running process may become suspended if it makes an I/O request. A suspended
process cannot make any progress towards completion. In this condition, to remove
the process from memory and make space for other processes, the suspended process is
moved to the secondary storage. This process is called swapping, and the process is
said to be swapped out or rolled out. Swapping may be necessary to improve the
process mix.

Comparison among Schedulers

1. The long-term scheduler is a job scheduler; the short-term scheduler is a CPU
scheduler; the medium-term scheduler is a process-swapping scheduler.
2. The long-term scheduler is slower than the short-term scheduler; the
short-term scheduler is the fastest of the three; the medium-term scheduler's
speed lies between the short-term and long-term schedulers.
3. The long-term scheduler controls the degree of multiprogramming; the
short-term scheduler provides less control over the degree of multiprogramming;
the medium-term scheduler reduces the degree of multiprogramming.
4. The long-term scheduler is almost absent or minimal in time-sharing systems;
the short-term scheduler is also minimal in time-sharing systems; the medium-term
scheduler is a part of time-sharing systems.
5. The long-term scheduler selects processes from the pool and loads them into
memory for execution; the short-term scheduler selects among those processes that
are ready to execute; the medium-term scheduler can re-introduce a process into
memory so that its execution can be continued.

Context Switching

 A context switching is the mechanism to store and restore the state or context of a CPU
in Process Control block so that a process execution can be resumed from the same
point at a later time. Using this technique, a context switcher enables multiple
processes to share a single CPU. Context switching is an essential feature of a
multitasking operating system.

 When the scheduler switches the CPU from executing one process to execute another,
the state from the current running process is stored into the process control block. After
this, the state for the process to run next is loaded from its own PCB and used to set the
PC, registers, etc. At that point, the second process can start executing.
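
The sketch below shows the save and restore step just described, assuming a
hypothetical Process Control Block that holds only a program counter, a register
set and a state field; real kernels save considerably more.

    from dataclasses import dataclass, field

    @dataclass
    class PCB:
        pid: int
        program_counter: int = 0
        registers: dict = field(default_factory=dict)
        state: str = "ready"

    def context_switch(current: PCB, cpu_pc: int, cpu_regs: dict, next_pcb: PCB):
        # Save the running process's CPU context into its own PCB ...
        current.program_counter, current.registers = cpu_pc, dict(cpu_regs)
        current.state = "ready"
        # ... then load the next process's saved context back onto the CPU.
        next_pcb.state = "running"
        return next_pcb.program_counter, next_pcb.registers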

Context switches are computationally intensive, since register and memory state
must be saved and restored. To reduce the time spent on context switching, some
hardware systems employ two or more sets of processor registers. When a process is
switched out, the following information is stored for later use:

 Program Counter
 Scheduling information
 Base and limit register value
 Currently used register
 Changed State
 I/O State information
 Accounting information

Cooperating Process in Operating System

 Cooperating processes are processes that can affect, or be affected by, other
processes executing in the system, so they need a way to exchange data. Inter
Process Communication (IPC) is the mechanism that allows one process to
communicate with another, usually within a single system.

Communication can be of two types

 Between related processes initiating from only one process, such as parent and child
processes.
 Between unrelated processes, or two or more different processes.

Following are some important terms that we need to know before proceeding further on this
topic.

Pipes − Communication between two related processes. The mechanism is half duplex,
meaning the first process communicates with the second process; to achieve full
duplex, i.e. for the second process to also communicate with the first, another
pipe is required.

FIFO − Communication between two unrelated processes. FIFO is full duplex, meaning
the first process can communicate with the second process and vice versa at the
same time.

Message Queues − Communication between two or more processes with full duplex
capacity. The processes communicate with each other by posting a message to the
queue and retrieving it from the queue. Once retrieved, a message is no longer
available in the queue.

Shared Memory − Communication between two or more processes is achieved through a
piece of memory shared among all processes. The shared memory needs to be
protected by synchronizing access across all the processes.

Semaphores − Semaphores are meant for synchronizing access among multiple
processes. When one process wants to access the shared memory (for reading or
writing), the memory needs to be locked (or protected) and released when the
access is finished. This needs to be repeated by all the processes to keep the
data consistent.

Signals − A signal is a mechanism for communication between multiple processes by
way of signaling. A source process sends a signal (recognized by its number) and
the destination process handles it accordingly.
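
As an illustration of the first mechanism in the list above, the following sketch
creates a pipe between a parent and a child process. It assumes a Unix-like
system, since os.fork() and os.pipe() are POSIX calls.

    import os

    read_end, write_end = os.pipe()
    pid = os.fork()
    if pid == 0:                          # child process: writes into the pipe
        os.close(read_end)
        os.write(write_end, b"hello from the child")
        os.close(write_end)
        os._exit(0)
    else:                                 # parent process: reads from the pipe (half duplex)
        os.close(write_end)
        print(os.read(read_end, 1024))
        os.close(read_end)
        os.waitpid(pid, 0)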

Deadlock Detection And Recovery

 Deadlock detection and recovery is the process of detecting and resolving deadlocks
in an operating system. A deadlock occurs when two or more processes are blocked,
waiting for each other to release the resources they need. This can lead to a system-
wide stall, where no process can make progress.
There are two main approaches to deadlock detection and recovery:

1. Prevention: The operating system takes steps to prevent deadlocks from


occurring by ensuring that the system is always in a safe state, where deadlocks
cannot occur. This is achieved through resource allocation algorithms such as the
Banker’s Algorithm.
2. Detection and Recovery: If deadlocks do occur, the operating system must
detect and resolve them. Deadlock detection algorithms, such as the Wait-For
Graph, are used to identify deadlocks, and recovery algorithms, such as the
Rollback and Abort algorithm, are used to resolve them. The recovery algorithm

releases the resources held by one or more processes, allowing the system to
continue to make progress.

Difference Between Prevention and Detection/Recovery:

 Prevention aims to avoid deadlocks altogether by carefully managing resource


allocation, while detection and recovery aim to identify and resolve deadlocks that
have already occurred.
 Deadlock detection and recovery is an important aspect of operating system design
and management, as it affects the stability and performance of the system. The choice
of deadlock detection and recovery approach depends on the specific requirements of
the system and the trade-offs between performance, complexity, and risk tolerance.
The operating system must balance these factors to ensure that deadlocks are
effectively detected and resolved.

Deadlock Detection :

1. If resources have a single instance

In this case for Deadlock detection, we can run an algorithm to check for the cycle in the
Resource Allocation Graph. The presence of a cycle in the graph is a sufficient condition for
deadlock.

For example, suppose resource R1 is held by process P1 and requested by process
P2, while resource R2 is held by P2 and requested by P1, and each resource has a
single instance. The resource allocation graph then contains the cycle
R1 → P1 → R2 → P2, so deadlock is confirmed.

2. If there are multiple instances of resources

Detection of a cycle is a necessary but not a sufficient condition for deadlock in
this case; whether the system is actually in deadlock depends on the situation.

3. Wait-For Graph Algorithm

The Wait-For Graph Algorithm is a deadlock detection algorithm used in systems
where each resource has only a single instance. The algorithm works by
constructing a wait-for graph, a directed graph that shows which process is
waiting for which other process, and then searching that graph for a cycle, as
sketched below.
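
The sketch below builds a wait-for graph (process P waits for process Q) and
searches it for a cycle with a depth-first search. The two-process graph used
here is a hypothetical example.

    def has_cycle(graph):
        visited, on_stack = set(), set()

        def dfs(node):
            visited.add(node)
            on_stack.add(node)
            for nxt in graph.get(node, []):
                if nxt in on_stack or (nxt not in visited and dfs(nxt)):
                    return True
            on_stack.discard(node)
            return False

        return any(dfs(n) for n in graph if n not in visited)

    # P1 waits for a resource held by P2, and P2 waits for a resource held by P1.
    wait_for = {"P1": ["P2"], "P2": ["P1"]}
    print(has_cycle(wait_for))   # True -> deadlock detected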

Deadlock Recovery :

A traditional operating system such as Windows doesn't deal with deadlock
recovery, as it is a time- and space-consuming process. Real-time operating
systems use deadlock recovery.
1. Killing the process:
Either kill all the processes involved in the deadlock, or kill the processes
one by one: after killing each process, check for deadlock again and repeat
until the system recovers from deadlock. Killing processes breaks the
circular wait condition.
2. Resource Preemption:
Resources are preempted from the processes involved in the deadlock, and the
preempted resources are allocated to other processes so that the system may
recover from the deadlock. In this case, the process whose resources are
preempted may go into starvation.
3. Concurrency Control:
Concurrency control mechanisms are used to prevent data inconsistencies in
systems with multiple concurrent processes. These mechanisms ensure that
concurrent processes do not access the same data at the same time, which can
lead to inconsistencies and errors. Deadlocks can occur in concurrent systems
when two or more processes are blocked, waiting for each other to release the
resources they need. This can result in a system-wide stall, where no process can
make progress. Concurrency control mechanisms can help prevent deadlocks by
managing access to shared resources and ensuring that concurrent processes do
not interfere with each other.

ADVANTAGES OR DISADVANTAGES:

Advantages of Deadlock Detection and Recovery in Operating Systems:

1. Improved System Stability: Deadlocks can cause system-wide stalls, and


detecting and resolving deadlocks can help to improve the stability of the system.
2. Better Resource Utilization: By detecting and resolving deadlocks, the operating
system can ensure that resources are efficiently utilized and that the system
remains responsive to user requests.
3. Better System Design: Deadlock detection and recovery algorithms can provide
insight into the behavior of the system and the relationships between processes
and resources, helping to inform and improve the design of the system.

Disadvantages of Deadlock Detection and Recovery in Operating Systems:

1. Performance Overhead: Deadlock detection and recovery algorithms can


introduce a significant overhead in terms of performance, as the system must
regularly check for deadlocks and take appropriate action to resolve them.
2. Complexity: Deadlock detection and recovery algorithms can be complex to
implement, especially if they use advanced techniques such as the Resource
Allocation Graph or Timestamping.
3. False Positives and Negatives: Deadlock detection algorithms are not perfect
and may produce false positives or negatives, indicating the presence of
deadlocks when they do not exist or failing to detect deadlocks that do exist.

4. Risk of Data Loss: In some cases, recovery algorithms may require rolling back
the state of one or more processes, leading to data loss or corruption.

Deadlock Prevention

We can prevent a deadlock by eliminating any one of the four necessary conditions
(mutual exclusion, hold and wait, no preemption, and circular wait):

Eliminate Mutual Exclusion: It is not possible to violate mutual exclusion for
every resource, because some resources, such as the tape drive and printer, are
inherently non-shareable.

Eliminate Hold and Wait: Allocate all required resources to the process before the
start of its execution; this eliminates the hold-and-wait condition but leads to
low device utilization. For example, if a process requires a printer only at a
later time and we allocate the printer before the start of its execution, the
printer will remain blocked until the process has completed its execution. The
process must make a new request for resources after releasing the current set of
resources. This solution may lead to starvation.

Eliminate No Preemption: Preempt resources from a process when the resources are
required by other high-priority processes.

Eliminate Circular Wait: Each resource is assigned a numerical number, and a
process may request resources only in increasing order of numbering. For example,
if process P1 has been allocated resource R5, a later request by P1 for R4 or R3
(numbered lower than R5) will not be granted; only requests for resources numbered
higher than R5 will be granted. A small sketch of this ordering rule follows this
list.

Detection and Recovery: Another approach to dealing with deadlocks is to detect
and recover from them when they occur. This can involve killing one or more of the
processes involved in the deadlock or releasing some of the resources they hold.
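
Below is a small sketch of the circular-wait rule mentioned above: every process
acquires resources only in increasing numerical order, so a cycle of waits can
never form. The lock numbers and the resource set are hypothetical.

    import threading

    resources = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}

    def acquire_in_order(needed):
        # Requests are always made in increasing resource number.
        for number in sorted(needed):
            resources[number].acquire()

    def release_all(needed):
        for number in sorted(needed, reverse=True):
            resources[number].release()

    acquire_in_order([3, 1])   # internally acquires resource 1 first, then resource 3
    release_all([3, 1])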

Deadlock Avoidance

 A deadlock avoidance policy grants a resource request only if it can establish
that granting the request cannot lead to a deadlock, either immediately or in the
future. The kernel lacks detailed knowledge about the future behavior of
processes, so it cannot accurately predict deadlocks. To facilitate deadlock
avoidance under these conditions, it uses the following conservative approach:
each process declares the maximum number of resource units of each class that it
may require. The kernel permits a process to request these resource units in
stages, i.e. a few resource units at a time, subject to the maximum number
declared by it, and uses a worst-case analysis technique to check for the
possibility of future deadlocks. A request is granted only if there is no
possibility of a deadlock; otherwise, it remains pending until it can be granted.
This approach is conservative because a process may complete its operation
without requiring the maximum number of units declared by it.

Resource Allocation Graph

 The resource allocation graph (RAG) is used to visualize the system’s current state as
a graph. The Graph includes all processes, the resources that are assigned to them, as
well as the resources that each Process requests. Sometimes, if there are fewer
processes, we can quickly spot a deadlock in the system by looking at the graph rather
than the tables we use in Banker’s algorithm. Deadlock avoidance can also be done
with Banker’s Algorithm.

Banker's Algorithm

Banker's Algorithm is a resource allocation and deadlock avoidance algorithm which
tests every request made by a process for resources. It checks whether granting
the request leaves the system in a safe state: if the system remains in a safe
state after the grant, the request is allowed; if no safe state would result, the
request is not allowed. A small sketch of the safety check appears after the
conditions below.

Inputs to Banker’s Algorithm

1. Max needs of resources by each process.


2. Resources currently allocated to each process.
3. Max free available resources in the system.

The request will only be granted under the conditions below:

1. The request made by the process is less than or equal to the maximum need
declared by that process.
2. The request made by the process is less than or equal to the freely
available resources in the system.
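
The sketch below shows the safety check at the heart of the Banker's Algorithm.
The available, allocation and maximum-need values are hypothetical figures for
five processes and three resource types, chosen only to demonstrate the
calculation.

    def is_safe(available, allocation, max_need):
        need = [[m - a for m, a in zip(max_need[i], allocation[i])]
                for i in range(len(allocation))]
        work, finished = list(available), [False] * len(allocation)
        while True:
            progressed = False
            for i, done in enumerate(finished):
                # A process can finish if its remaining need fits in the free resources.
                if not done and all(n <= w for n, w in zip(need[i], work)):
                    work = [w + a for w, a in zip(work, allocation[i])]
                    finished[i] = progressed = True
            if not progressed:
                return all(finished)   # safe only if every process could finish

    print(is_safe(available=[3, 3, 2],
                  allocation=[[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]],
                  max_need=[[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]))   # True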

Timeouts: To avoid deadlocks caused by indefinite waiting, a timeout mechanism can
be used to limit the amount of time a process can wait for a resource. If the
resource is unavailable within the timeout period, the process can be forced to
release its currently held resources and try again later.

One Marks

1. Which of the following is not an operating system?

a. Windows c. Oracle
b. Linux d. DOS

2. What is the maximum length of the filename in DOS?



a. 4 c. 8

b. 5 d. 12

3. When was the first operating system developed?

a. 1948 c. 1950
b. 1949 d. 1951

4. When were MS windows operating systems proposed?

a. 1994 c. 1992
b. 1990 d. 1985

5. Which of the following is the extension of Notepad?

a. .txt c. .ppt
b. .xls d. .bmp

6. What else is a command interpreter called?

a. prompt c. shell
b. kernel d. command

7. What is the full name of FAT?

a. File attribute table c. Font attribute table


b. File allocation table d. Format allocation table

8. BIOS is used?

a. By operating system c. By interpreter


b. By compiler d. By application software

9. What is the mean of the Booting in the operating system?

a. Restarting computer c. To scan


b. Install the program d. To turn off

10. When does page fault occur?

a. The page is present in memory.
b. The deadlock occurs.
c. The page is not present in memory.
d. The buffering occurs.

QUESTIONS(5 Marks and 10 Marks)

1. Why is the operating system important?


2. What's the main purpose of an OS? What are the different types of OS?
3. What are the benefits of a multiprocessor system?
4. What do you mean by RTOS?
5. Write top 10 examples of OS? Explain.
6. What is a deadlock in OS? What are the necessary conditions for a deadlock?
7. What is a time-sharing system?
8. What are the benefits of a multiprocessor system?
9. Which are the necessary conditions to achieve a deadlock?
10. What is IPC? What are the different IPC mechanisms?

UNIT:2 DISTRIBUTED OPERATING SYSTEMS

 A Distributed Operating System refers to a model in which applications run on
multiple interconnected computers, offering enhanced communication and
integration capabilities compared to a network operating system. In a Distributed
OS, multiple CPUs are utilized, but to end-users it appears as a typical
centralized operating system. It enables the sharing of various resources such as
CPUs, disks, network interfaces, nodes, and computers across different sites,
thereby expanding the data available to the entire system.
 All processors are connected by effective communication channels such as
high-speed buses and telephone lines, and each processor has its own local memory
and communicates with its neighboring processors over these channels. Because of
these characteristics, a distributed operating system is classified as a loosely
coupled system. It encompasses multiple computers, nodes, and sites, all
interconnected through LAN/WAN lines. The ability of a Distributed OS to share
processing resources and I/O files while providing users with a virtual machine
abstraction is an important feature.


Types of Distributed Operating System

There are three types of Distributed OS:

 Client-Server Systems: This tightly coupled operating system is appropriate for
multiprocessors and homogeneous multicomputers. It functions as a centralized
server, handling and approving all requests originating from client systems.
 Peer-to-Peer Systems: This loosely coupled system is used in computer network
applications and consists of multiple processors that do not share memory or a
clock. Each processor has its own local memory, and communication between
processors takes place through high-speed buses or telephone lines.
 Middleware: It facilitates interoperability among applications running on
different operating systems. By employing these services, applications can
exchange data with each other, ensuring distribution transparency.

Applications of Distributed Operating System

The applications of a Distributed OS encompass various domains as below:

 Internet Technology
 Distributed Databases System

 Air Traffic Control System
 Airline Reservation Control Systems
 Peer-to-Peer Networks System
 Telecommunication Networks
 Scientific Computing System
 Cluster Computing
 Grid Computing
 Data Rendering

Security in Distributed Operating system

 Protection and security are crucial aspects of a Distributed Operating System,


especially in organizational settings. Measures are employed to safeguard the system
from potential damage or loss caused by external sources. Various security measures
can be implemented, including authentication methods such as username/password
and user key. One Time Password (OTP) is also commonly utilized in distributed OS
security applications.

Communication primitives

Communication primitives are fundamental operations or functions that facilitate


communication between processes or entities in a distributed system. These primitives form
the building blocks for designing distributed algorithms and enable processes to exchange
information. Here are some common communication primitives:

 Send: The basic operation for a process to send a message to another process. It
involves packaging information and transmitting it to a specified destination.
 Receive: The complementary operation to sending, where a process waits to receive a
message. Upon reception, the process can extract and process the information from
the received message.
 Broadcast: Sending a message from one process to all other processes in the system.
This is a one-to-many communication primitive.
 Multicast: Similar to broadcast, but it involves sending a message to a selected group
of processes rather than all processes.
 Point-to-Point Communication: Communication between two specific processes. The
send and receive primitives are used to achieve point-to-point communication.
 Barrier Synchronization: A synchronization primitive where processes wait until all
participating processes have reached a certain point before any of them can proceed.
 Remote Procedure Call (RPC): Invoking a procedure (function or method) on a
remote process as if it were a local procedure. RPC hides the details of
communication between processes.
 Message Queues: Processes can place messages in a queue, and other processes can
retrieve and process these messages. This helps in achieving asynchronous
communication.
 Semaphore Operations: Using semaphores for synchronization, including operations
like wait (P) and signal (V) to control access to shared resources.
 Event Notification: Notifying processes about specific events or conditions, enabling
them to react accordingly.

These communication primitives provide a means for processes to interact and coordinate in
a distributed environment, contributing to the development of robust and efficient distributed
systems.
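
A minimal sketch of the send and receive primitives is given below, using a pair
of connected sockets. In a real distributed system the two endpoints would live
on different machines; here they share one process purely for illustration.

    import socket

    a, b = socket.socketpair()            # two connected endpoints

    def send(endpoint, message):          # package the data and transmit it
        endpoint.sendall(message.encode())

    def receive(endpoint, size=1024):     # block until a message arrives
        return endpoint.recv(size).decode()

    send(a, "hello from process A")
    print(receive(b))                     # -> hello from process A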

Issues

Distributed operating systems face various challenges, including:

 Communication: Ensuring efficient and reliable communication between distributed


components can be complex.
 Consistency and Replication: Maintaining consistency among distributed data copies
and handling replication issues.
 Fault Tolerance: Dealing with failures in a distributed environment and ensuring the
system continues to operate reliably.
 Concurrency Control: Managing concurrent access to shared resources to prevent
conflicts and ensure data integrity.
 Security: Addressing security concerns, such as authentication, authorization, and
secure communication in a distributed system.
 Resource Management: Efficiently managing distributed resources like memory,
processing power, and storage.
 Synchronization: Coordinating actions across distributed nodes to maintain a
coherent system state.
 Scalability: Designing the system to handle a growing number of nodes or users
without compromising performance.
 Load Balancing: Distributing workload evenly across nodes to prevent bottlenecks
and optimize resource utilization.
 Heterogeneity: Managing diverse hardware, software, and network environments
within the distributed system.

Addressing these issues requires careful design, robust algorithms, and a deep understanding
of distributed systems principles.

Lamport’s logical clock

Lamport’s Logical Clock was created by Leslie Lamport. It is a procedure to determine the
order of events occurring. It provides a basis for the more advanced Vector Clock Algorithm.
Due to the absence of a Global Clock in a Distributed Operating System Lamport Logical
Clock is needed.

Algorithm:

 Happened-before relation (->): a -> b means 'a' happened before 'b'.
 Logical Clock: The conditions for the logical clocks are:

 [C1]: If 'a' happened before 'b' within the same process Pi, then
Ci(a) < Ci(b), where Ci is the logical clock of process Pi.
 [C2]: If 'a' is the sending of a message by process Pi and 'b' is the receipt of
that message by process Pj, then Ci(a) < Cj(b).

Reference:

 Process: Pi

 Event: Eij, where i is the process in number and j: jth event in the ith process.
 tm: vector time span for message m.
 Ci vector clock associated with process Pi, the jth element is Ci[j] and
contains Pi‘s latest value for the current time in process Pj.
 d: drift time, generally d is 1.

Implementation Rules[IR]:

 [IR1]: If a -> b [‘a’ happened before ‘b’ within the same process]
then, Ci(b) =Ci(a) + d
 [IR2]: Cj = max(Cj, tm + d) [If there’s more number of processes, then tm =
value of Ci(a), Cj = max value between Cj and tm + d]

For Example:

 Take the starting value as 1, since it is the 1st event and there is no incoming value at
the starting point:

 e11 = 1
 e21 = 1

 The value of the next point will go on increasing by d (d = 1), if there is no incoming
value i.e., to follow [IR1].

 e12 = e11 + d = 1 + 1 = 2
 e13 = e12 + d = 2 + 1 = 3
 e14 = e13 + d = 3 + 1 = 4
 e15 = e14 + d = 4 + 1 = 5
 e16 = e15 + d = 5 + 1 = 6
 e22 = e21 + d = 1 + 1 = 2
 e24 = e23 + d = 3 + 1 = 4
 e26 = e25 + d = 6 + 1 = 7

 When there will be incoming value, then follow [IR2] i.e., take the maximum value
between Cj and Tm + d.

 e17 = max(7, 5) = 7, [e16 + d = 6 + 1 = 7, e24 + d = 4 + 1 = 5,


maximum among 7 and 5 is 7]

 e23 = max(3, 3) = 3, [e22 + d = 2 + 1 = 3, e12 + d = 2 + 1 = 3,


maximum among 3 and 3 is 3]
 e25 = max(5, 6) = 6, [e24 + 1 = 4 + 1 = 5, e15 + d = 5 + 1 = 6,
maximum among 5 and 6 is 6]
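
The rules IR1 and IR2 can be written as a few lines of code. The sketch below
assumes a drift of d = 1 and a single counter per process; the process and event
names are illustrative.

    class LamportClock:
        def __init__(self, d=1):
            self.time, self.d = 0, d

        def local_event(self):        # IR1: internal or send event, Ci(b) = Ci(a) + d
            self.time += self.d
            return self.time

        def receive(self, tm):        # IR2: take the maximum of the local next value and tm + d
            self.time = max(self.time + self.d, tm + self.d)
            return self.time

    p1, p2 = LamportClock(), LamportClock()
    t_send = p1.local_event()         # e11 = 1 on process P1
    p2.local_event()                  # e21 = 1 on process P2
    print(p2.receive(t_send))         # receive event on P2: max(1 + 1, 1 + 1) = 2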

Limitation:

 In case of [IR1], if a -> b, then C(a) < C(b) -> true.


 In case of [IR2], if a -> b, then C(a) < C(b) -> May be true or may not be true.

Deadlock Handling Strategies:

Deadlocks occur when processes are unable to proceed because each is waiting for the other
to release a resource. Operating systems implement various strategies to handle deadlocks.
Here are some common deadlock handling strategies:

Prevention:

 Resource Allocation Graph (RAG): The system maintains a graph representing the
allocation of resources to processes. Deadlocks are prevented by ensuring that the
graph remains cycle-free.
 Banker's Algorithm: Processes must declare their maximum resource needs upfront,
and the system only grants resource requests if it determines that the allocation will
not lead to a deadlock.

Avoidance:

 Dynamically Check for Safe States: The system continually assesses the current state
and only grants resource requests that guarantee the system will remain in a safe
state.
 Safety Algorithm (like Banker's Algorithm): The system uses algorithms to
determine if a resource request could potentially lead to a deadlock before granting it.

Detection and Recovery:

 Deadlock Detection: Periodically checks for the existence of a deadlock. If detected,


the system may choose to:

 Terminate Processes: End one or more processes to break the deadlock.
 Resource Preemption: Preempt resources from one or more processes to allow the
system to recover.

Timeouts and Killing Processes:

 Timeouts: If a process takes too long to acquire resources, the system may assume a
deadlock and take corrective action.
 Process Termination: If deadlock is detected, the system may selectively terminate
one or more processes to resolve the situation.

Resource Allocation Policies:

 Wait-Die and Wound-Wait: These are strategies used in database systems for
handling deadlocks arising from conflicting resource requests. Processes may be
either rolled back and restarted (wait-die) or forced to wait (wound-wait) based on
their age and the state of the resource.

Dynamic Process Termination:

 Selective Process Termination: The system may selectively terminate processes to


resolve the deadlock. The choice of which processes to terminate may be based on
factors like priority, resource usage, etc.

Each strategy has its trade-offs, and the choice depends on the specific requirements and
characteristics of the system. Prevention and avoidance aim to eliminate deadlocks before
they occur, while detection and recovery focus on identifying and resolving deadlocks after
they happen.

Deadlock Detection And Recovery

Deadlock detection and recovery is the process of detecting and resolving deadlocks in an
operating system. A deadlock occurs when two or more processes are blocked, waiting for
each other to release the resources they need. This can lead to a system-wide stall, where no
process can make progress.

There are two main approaches to deadlock detection and recovery:

1. Prevention: The operating system takes steps to prevent deadlocks from occurring
by ensuring that the system is always in a safe state, where deadlocks cannot occur.
This is achieved through resource allocation algorithms such as the Banker’s
Algorithm.
2. Detection and Recovery: If deadlocks do occur, the operating system must detect
and resolve them. Deadlock detection algorithms, such as the Wait-For Graph, are
used to identify deadlocks, and recovery algorithms, such as the Rollback and Abort
algorithm, are used to resolve them. The recovery algorithm releases the resources
held by one or more processes, allowing the system to continue to make progress.

Difference Between Prevention and Detection/Recovery: Prevention aims to avoid


deadlocks altogether by carefully managing resource allocation, while detection and
recovery aim to identify and resolve deadlocks that have already occurred.

Deadlock detection and recovery is an important aspect of operating system design and
management, as it affects the stability and performance of the system. The choice of
deadlock detection and recovery approach depends on the specific requirements of the
system and the trade-offs between performance, complexity, and risk tolerance. The
operating system must balance these factors to ensure that deadlocks are effectively detected
and resolved.


Deadlock Detection :

1. If resources have a single instance –


In this case for Deadlock detection, we can run an algorithm to check for the cycle in the
Resource Allocation Graph. The presence of a cycle in the graph is a sufficient condition for
deadlock.

For example, suppose resource R1 (single instance) is held by P1 and requested by
P2, while resource R2 (single instance) is held by P2 and requested by P1. The
resource allocation graph then contains the cycle R1 → P1 → R2 → P2, so deadlock
is confirmed.

2. If there are multiple instances of resources –


Detection of a cycle is a necessary but not a sufficient condition for deadlock in
this case; whether the system is actually in deadlock depends on the situation.

3. Wait-For Graph Algorithm –

The Wait-For Graph Algorithm is a deadlock detection algorithm used in systems
where each resource has only a single instance. The algorithm works by
constructing a wait-for graph, a directed graph that shows which process is
waiting for which other process, and then searching that graph for a cycle.

Deadlock Recovery :
A traditional operating system such as Windows doesn't deal with deadlock
recovery, as it is a time- and space-consuming process. Real-time operating
systems use deadlock recovery.

1. Killing the process –
Either kill all the processes involved in the deadlock, or kill the processes
one by one: after killing each process, check for deadlock again and repeat
until the system recovers from deadlock. Killing processes breaks the circular
wait condition.
2. Resource Preemption –
Resources are preempted from the processes involved in the deadlock, and the
preempted resources are allocated to other processes so that there is a
possibility of recovering the system from the deadlock. In this case, the
process whose resources are preempted may go into starvation.
3. Concurrency Control – Concurrency control mechanisms are used to prevent data
inconsistencies in systems with multiple concurrent processes. These mechanisms
ensure that concurrent processes do not access the same data at the same time, which
can lead to inconsistencies and errors. Deadlocks can occur in concurrent systems
when two or more processes are blocked, waiting for each other to release the
resources they need. This can result in a system-wide stall, where no process can
make progress. Concurrency control mechanisms can help prevent deadlocks by
managing access to shared resources and ensuring that concurrent processes do not
interfere with each other.

ADVANTAGES OR DISADVANTAGES:

Advantages of Deadlock Detection and Recovery in Operating Systems:

1. Improved System Stability: Deadlocks can cause system-wide stalls, and detecting
and resolving deadlocks can help to improve the stability of the system.
2. Better Resource Utilization: By detecting and resolving deadlocks, the operating
system can ensure that resources are efficiently utilized and that the system remains
responsive to user requests.
3. Better System Design: Deadlock detection and recovery algorithms can provide
insight into the behavior of the system and the relationships between processes and
resources, helping to inform and improve the design of the system.

Disadvantages of Deadlock Detection and Recovery in Operating Systems:

1. Performance Overhead: Deadlock detection and recovery algorithms can introduce


a significant overhead in terms of performance, as the system must regularly check
for deadlocks and take appropriate action to resolve them.
2. Complexity: Deadlock detection and recovery algorithms can be complex to
implement, especially if they use advanced techniques such as the Resource
Allocation Graph or Timestamping.
3. False Positives and Negatives: Deadlock detection algorithms are not perfect and
may produce false positives or negatives, indicating the presence of deadlocks when
they do not exist or failing to detect deadlocks that do exist.
4. Risk of Data Loss: In some cases, recovery algorithms may require rolling back the
state of one or more processes, leading to data loss or corruption.

Distributed File System

 A Distributed File System (DFS), as the name suggests, is a file system that is
distributed across multiple file servers or multiple locations. It allows
programs to access or store remote files just as they do local ones, allowing
programmers to access files from any network or computer.
 The main purpose of the Distributed File System (DFS) is to allow users of
physically distributed systems to share their data and resources by using a
common file system. A collection of workstations and mainframes connected by a
Local Area Network (LAN) is a typical configuration for a Distributed File
System. A DFS is executed as a part of the operating system. In DFS, a namespace
is created and this process is transparent for the clients.

DFS has two components:

 Location Transparency –
Location Transparency is achieved through the namespace component.
 Redundancy –
Redundancy is done through a file replication component.

In the case of failure and heavy load, these components together improve data availability by
allowing the sharing of data in different locations to be logically grouped under one folder,
which is known as the “DFS root”.

It is not necessary to use both components of DFS together; it is possible to use
the namespace component without the file replication component, and it is equally
possible to use the file replication component without the namespace component
between servers.

File system replication:

 Early iterations of DFS made use of Microsoft's File Replication Service (FRS),
which allowed for straightforward file replication between servers. FRS
recognises new or updated files and distributes the latest versions of the whole
file to all servers.
 Windows Server 2003 R2 introduced "DFS Replication" (DFSR). It improves on FRS
by copying only the portions of files that have changed and by minimising network
traffic with data compression. It also provides users with flexible configuration
options to manage network traffic on a configurable schedule.

Features of DFS :

 Transparency :

 Structure transparency –
There is no need for the client to know about the number or locations of file
servers and the storage devices. Multiple file servers should be provided for
performance, adaptability, and dependability.
 Access transparency –
Both local and remote files should be accessible in the same manner. The file
system should automatically locate the accessed file and send it to the
client's side.
 Naming transparency –
There should not be any hint in the name of the file to the location of the file.
Once a name is given to the file, it should not be changed during transferring
from one node to another.
 Replication transparency –
If a file is copied on multiple nodes, both the copies of the file and their
locations should be hidden from one node to another.

 User mobility :
It will automatically bring the user’s home directory to the node where the user logs
in.
 Performance :
Performance is based on the average amount of time needed to service client
requests. This time covers the CPU time + the time taken to access secondary
storage + the network access time. It is advisable that the performance of a
Distributed File System be comparable to that of a centralized file system.
 Simplicity and ease of use :
The user interface of a file system should be simple and the number of commands in
the file should be small.
 High availability :
A Distributed File System should be able to continue in case of any partial failures
like a link failure, a node failure, or a storage drive crash.
A highly available and adaptable distributed file system should have multiple
independent file servers controlling multiple independent storage devices.
 Scalability :
Since growing the network by adding new machines or joining two networks together
is routine, the distributed system will inevitably grow over time. As a result, a good
distributed file system should be built to scale quickly as the number of nodes and
users in the system grows. Service should not be substantially disrupted as the
number of nodes and users grows.
 High reliability :
The likelihood of data loss should be minimized as much as feasible in a suitable
distributed file system. That is, because of the system’s unreliability, users should not
feel forced to make backup copies of their files. Rather, a file system should create
backup copies of key files that can be used if the originals are lost. Many file systems
employ stable storage as a high-reliability strategy.
 Data integrity :
Multiple users frequently share a file system. The integrity of data saved in a shared
file must be guaranteed by the file system. That is, concurrent access requests from

many users who are competing for access to the same file must be correctly
synchronized using a concurrency control method. Atomic transactions are a high-
level concurrency management mechanism for data integrity that is frequently
offered to users by a file system.
 Security :
A distributed file system should be secure so that its users may trust that their data
will be kept private. To safeguard the information contained in the file system from
unwanted & unauthorized access, security mechanisms must be implemented.
 Heterogeneity :
Heterogeneity in distributed systems is unavoidable as a result of huge scale. Users of
heterogeneous distributed systems have the option of using multiple computer
platforms for different purposes.

History :

 The server component of the Distributed File System was initially introduced as an
add-on feature. It was added to Windows NT 4.0 Server and was known as “DFS
4.1”. Then later on it was included as a standard component for all editions of
Windows 2000 Server. Client-side support has been included in Windows NT 4.0 and
also in later on version of Windows.
 Linux kernels 2.6.14 and versions after it come with an SMB client VFS known as
“cifs” which supports DFS. Mac OS X 10.7 (lion) and onwards supports Mac OS X
DFS.

Properties:

 File transparency: users can access files without knowing where they are physically
stored on the network.
 Load balancing: the file system can distribute file access requests across multiple
computers to improve performance and reliability.
 Data replication: the file system can store copies of files on multiple computers to
ensure that the files are available even if one of the computers fails.
 Security: the file system can enforce access control policies to ensure that only
authorized users can access files.
 Scalability: the file system can support a large number of users and a large number of
files.
 Concurrent access: multiple users can access and modify the same file at the same
time.
 Fault tolerance: the file system can continue to operate even if one or more of its
components fail.
 Data integrity: the file system can ensure that the data stored in the files is accurate
and has not been corrupted.
 File migration: the file system can move files from one location to another without
interrupting access to the files.
 Data consistency: changes made to a file by one user are immediately visible to all
other users.
 Support for different file types: the file system can support a wide range of
file types, including text files, image files, and video files.

Applications :

 NFS –
NFS stands for Network File System. It is a client-server architecture that allows a
computer user to view, store, and update files remotely. The protocol of NFS is one
of the several distributed file system standards for Network-Attached Storage (NAS).
 CIFS –
CIFS stands for Common Internet File System. CIFS is a dialect of SMB; that is,
CIFS is an implementation of the SMB protocol, designed by Microsoft.
 SMB –
SMB stands for Server Message Block. It is a file-sharing protocol invented by
IBM. The SMB protocol was created to allow computers to perform read and write
operations on files on a remote host over a Local Area Network (LAN). The
directories present on the remote host can be accessed via SMB and are called
"shares".
 Hadoop –
Hadoop is a group of open-source software utilities. It provides a software
framework for distributed storage and processing of big data using the MapReduce
programming model. The core of Hadoop consists of a storage part, known as the
Hadoop Distributed File System (HDFS), and a processing part, which is the
MapReduce programming model.
 NetWare –
NetWare is a discontinued computer network operating system developed by Novell,
Inc. It primarily used cooperative multitasking to run different services on a
personal computer, using the IPX network protocol.

Working of DFS :

There are two ways in which DFS can be implemented:

 Standalone DFS namespace –
It allows only those DFS roots that exist on the local computer and does not use
Active Directory. A standalone DFS can only be accessed on the computer on which
it is created. It does not provide any fault tolerance and cannot be linked to
any other DFS. Standalone DFS roots are rarely encountered because of their
limited advantages.
 Domain-based DFS namespace –
It stores the configuration of DFS in Active Directory, making the DFS
namespace root accessible at \\<domainname>\<dfsroot> or \\<FQDN>\<dfsroot>

Advantages :

 DFS allows multiple users to access or store the data.
 It allows the data to be shared remotely.
 It improves file availability, access time, and network efficiency.
 It improves the ability to change the size of the data and the ability to
exchange the data.
 Distributed File System provides transparency of data even if a server or disk
fails.

Disadvantages :

 In a Distributed File System, nodes and connections need to be secured, so
security is a concern.
 There is a possibility of loss of messages and data in the network while they
move from one node to another.
 Database connections in a Distributed File System are complicated.
 Handling the database is also harder in a Distributed File System than in a
single-user system.
 There is a chance of overloading if all nodes try to send data at once.

CASE STUDIES

 Google File System (GFS): GFS is a distributed file system designed by Google for
their infrastructure. It focuses on scalability and fault tolerance, allowing large-scale
data processing across multiple servers.
 Apache Hadoop: While not an operating system itself, Hadoop is a framework for
distributed storage and processing of large data sets. It's built on the principles of the
Google File System and MapReduce, enabling the distributed processing of massive
datasets.
 Amazon DynamoDB: DynamoDB is a distributed NoSQL database service provided
by Amazon Web Services. It's designed for high availability and scalability, ensuring
low-latency access to data across multiple servers and data centers.
 MapReduce Paradigm: Although not a specific system, the MapReduce
programming model is widely used in distributed systems. It was popularized by
Google and later implemented in Apache Hadoop. The paradigm simplifies parallel
processing of large datasets across distributed nodes.
 Kubernetes: While not an operating system, Kubernetes is a container orchestration
platform that manages the deployment, scaling, and operation of containerized
applications. It provides a distributed system for automating the deployment and
scaling of application containers.

These case studies showcase various aspects of distributed systems, including fault tolerance,
scalability, and efficient resource management.

Network File System (NFS)

The advent of distributed computing was marked by the introduction of distributed file
systems. Such systems involved multiple client machines and one or a few servers. The
server stores data on its disks and the clients may request data through some protocol
messages. Advantages of a distributed file system:

 Allows easy sharing of data among clients.
 Provides centralized administration.
 Provides security, i.e. one must only secure the servers to secure data.

 Distributed File System Architecture:

Even a simple client/server architecture involves more components than the physical
file systems discussed previously in OS. The architecture consists of a client-side file
system and a server-side file system. A client application issues a system call (e.g.
read(), write(), open(), close() etc.) to access files on the client-side file system,
which in turn retrieves files from the server. It is interesting to note that to a client
application, the process seems no different than requesting data from a physical disk,
since there is no special API required to do so. This phenomenon is known
as transparency in terms of file access. It is the client-side file system that executes
commands to service these system calls. For instance, assume that a client application
issues the read() system call. The client-side file system then messages the server-
side file system to read a block from the server’s disk and return the data back to the
client. Finally, it buffers this data into the read() buffer and completes the system call.
The server-side file system is also simply called the file server.
 Sun’s Network File System: The earliest successful distributed system could be
attributed to Sun Microsystems, which developed the Network File System (NFS).
NFSv2 was the standard protocol followed for many years, designed with the goal of
simple and fast server crash recovery. This goal is of utmost importance in multi-
client and single-server based network architectures because a single instant of server
crash means that all clients are unserviced. The entire system goes down. Stateful
protocols make things complicated when it comes to crashes. Consider a client A
trying to access some data from the server. However, just after the first read, the
server crashed. Now, when the server is up and running, client A issues the second
read request. However, the server does not know which file the client is referring to,
since all that information was temporary and lost during the crash. Stateless
protocols come to our rescue. Such protocols are designed so as to not store any state
information in the server. The server is unaware of what the clients are doing — what
blocks they are caching, which files are opened by them and where their current file
pointers are. The server simply delivers all the information that is required to service
a client request. If a server crash happens, the client would simply have to retry the
request. Because of their simplicity, NFS implements a stateless protocol.
 File Handles: NFS uses file handles to uniquely identify a file or a directory that the
current operation is being performed upon. This consists of the following
components:

 Volume Identifier – An NFS server may have multiple file systems or


partitions. The volume identifier tells the server which file system is being
referred to.
 Inode Number – This number identifies the file within the partition.

 Generation Number – This number is used while reusing an inode number.

 File Attributes: “File attributes” is a term commonly used in NFS terminology. This
is a collective term for the tracked metadata of a file, including file creation time, last
modified, size, ownership permissions etc. This can be accessed by calling stat() on
the file.
 NFSv2 Protocol: Some of the common protocol messages are listed below.

Message            Description

NFSPROC_GETATTR    Given a file handle, returns file attributes.
NFSPROC_SETATTR    Sets/updates file attributes.
NFSPROC_LOOKUP     Given a file handle and the name of the file to look up,
                   returns the file handle.
NFSPROC_READ       Given a file handle, offset, count data and attributes,
                   reads the data.
NFSPROC_WRITE      Given a file handle, offset, count data and attributes,
                   writes data into the file.
NFSPROC_CREATE     Given the directory handle, name of the file and attributes,
                   creates a file.
NFSPROC_REMOVE     Given the directory handle and name of the file, deletes
                   the file.
NFSPROC_MKDIR      Given the directory handle, name of the directory and
                   attributes, creates a new directory.

 The LOOKUP protocol message is used to obtain the file handle for further accessing
data. The NFS mount protocol helps obtain the directory handle for the root (/)
directory in the file system. If a client application opens a file /abc.txt, the client-side
file system will send a LOOKUP request to the server, through the root (/) file handle
looking for a file named abc.txt. If the lookup is successful, the file handle (along with the file attributes) is returned.
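A sketch of this per-component resolution is shown below, with nfs_lookup() as a hypothetical stand-in for the NFSPROC_LOOKUP request and a made-up path /docs/abc.txt:

#include <string.h>

typedef struct { unsigned char opaque[32]; } fhandle_t;

/* Hypothetical stub: pretend the server resolved `name` inside `dir`. */
static int nfs_lookup(const fhandle_t *dir, const char *name, fhandle_t *out)
{
    (void)dir; (void)name;
    memset(out, 0, sizeof(*out));
    return 0;
}

/* Resolve /docs/abc.txt: one LOOKUP per path component, starting from the
   root handle obtained through the mount protocol. */
static int resolve_example(const fhandle_t *root, fhandle_t *out)
{
    fhandle_t dir;
    if (nfs_lookup(root, "docs", &dir) != 0)     /* LOOKUP "docs" in "/"    */
        return -1;
    if (nfs_lookup(&dir, "abc.txt", out) != 0)   /* LOOKUP "abc.txt"        */
        return -1;
    return 0;                                    /* *out now names the file */
}

int main(void)
{
    fhandle_t root, fh;
    memset(&root, 0, sizeof(root));
    return resolve_example(&root, &fh);
}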
 Client-Side Caching: To improve performance of NFS, distributed file systems
cache the data as well as the metadata read from the server onto the clients. This is
known as client-side caching. This reduces the time taken for subsequent client
accesses. The cache is also used as a temporary buffer for writing: writes are first buffered on the client and then flushed to the server together, which improves efficiency even further.
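As a loose analogy for such write buffering, buffered stdio in C also accumulates writes in memory and pushes them out together (the file name is illustrative):

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("out.txt", "w");        /* file name is illustrative */
    if (!f)
        return 1;
    for (int i = 0; i < 1000; i++)
        fprintf(f, "record %d\n", i);       /* accumulates in the stdio buffer */
    fclose(f);                              /* buffered data flushed out together */
    return 0;
}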

CODA

Coda is a distributed file system developed as a research project at Carnegie Mellon University since 1987 under the direction of Mahadev Satyanarayanan. It descended directly from an older version of Andrew File System (AFS-2) and offers many similar features. The InterMezzo file system was inspired by Coda.

Coda Distributed File System:

 Purpose: Coda aims to offer a distributed file system with disconnected operation
capabilities, making it suitable for mobile and wireless computing environments.
 Disconnected Operation: One of Coda's notable features is its support for
disconnected operation. Users can continue working with their files even when
temporarily disconnected from the network, and changes are synchronized once the
connection is reestablished.
 Replication and Fault Tolerance: Coda replicates files across multiple servers to
enhance fault tolerance and availability. This replication helps in maintaining
consistency and reliability in the face of server failures or network issues.
 Venus and Vice Components: Coda consists of two main components, Venus (client)
and Vice (server). Venus manages local file access and handles disconnected
operation, while Vice manages the server-side operations.
 Token-based Authentication: Coda employs a token-based authentication system to
control access to files. Users must possess the appropriate tokens to read or modify
files, adding a layer of security.

Coda is an interesting case study in distributed file systems, emphasizing support for
mobility, disconnected operation, and fault tolerance in distributed environments.

One Marks

1. The Banker's algorithm is used:

a. To prevent deadlock c. To solve the deadlock


b. For deadlock recovery d. None of these

2. When you delete a file in your computer, where does it go?

a. Recycle bin c. Taskbar


b. Hard disk d. None of these

3. Linux is which type of operating system?

a. Private operating system c. Open-source operating system


b. Windows operating system d. None of these

4. What is the full name of the IDL?



a. Interface definition language c. Interface data library


b. Interface direct language d. None of these

5. What is the full name of the DSM?

a. Direct system module c. Demoralized system memory


b. Direct system memory d. Distributed shared memory

Question (5 Mark and 10 Mark)

1. Define distributed OS and its goals.


2. Define the workstation model and the processor pool model with the help of diagrams.
3. What are the hardware requirements for a distributed system?
4. Why is a network operating system designed? How is its general structure formed?
5. Draw and explain the architecture of a distributed operating system.
6. What are the advantages and disadvantages of a distributed operating system?
7. Explain middleware operating systems.
8. Write about an application of distributed systems in detail.
9. Explain the design issues of distributed systems in detail.
10. What is transparency in a distributed system?

UNIT:3 REAL TIME OPERATING SYSTEM

Real Time Operating System

 Real-time operating systems (RTOS) are used in environments where a large


number of events, mostly external to the computer system, must be accepted and
processed in a short time or within certain deadlines. Such applications include industrial control, telephone switching equipment, flight control, and real-time simulations. With an RTOS, the processing time is measured in tenths of seconds or less. This system is time-bound and has a fixed deadline; the processing in this type of system must occur within the specified constraints, otherwise it will lead to system failure.
 Examples of real-time operating systems are airline traffic control systems,
Command Control Systems, airline reservation systems, Heart pacemakers,
Network Multimedia Systems, robots, etc.
Real-time operating systems can be of the following types:

 Hard Real-Time Operating System: These operating systems guarantee that


critical tasks are completed within a range of time.
 For example, a robot is hired to weld a car body. If the robot welds too early or too late, the car cannot be sold, so this is a hard real-time system that requires the welding to be completed exactly on time. Other examples include scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, air traffic control systems, etc.
 Soft real-time operating system: This operating system provides some relaxation
in the time limit.
 For example – Multimedia systems, digital audio systems, etc. Explicit,
programmer-defined, and controlled processes are encountered in real-time
systems. A separate process is charged with handling a single external event. The process is activated upon the occurrence of the related event, signaled by an
interrupt.
 Multitasking operation is accomplished by scheduling processes for execution independently of each other. Each process is assigned a certain level of priority that corresponds to the relative importance of the event that it services. The processor is allocated to the highest-priority ready process. This type of scheduling, called priority-based preemptive scheduling, is used by real-time systems (a minimal sketch of this rule is given after this list of RTOS types).

 Firm Real-time Operating System: RTOS of this type have to follow deadlines
as well. In spite of its small impact, missing a deadline can have unintended
consequences, including a reduction in the quality of the product. Example:
Multimedia applications.
 Deterministic Real-time operating System: Consistency is the main key in this
type of real-time operating system. It ensures that all tasks and processes execute with predictable timing all the time, which makes it more suitable for applications in which timing accuracy is very important. Examples: INTEGRITY, PikeOS.
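The following is a minimal sketch of the priority-based preemptive rule mentioned above: on each scheduling event (e.g. a timer interrupt), the highest-priority ready task is chosen, preempting the current one if needed. The task table and names are illustrative.

#include <stdio.h>

struct task {
    const char *name;
    int priority;      /* larger number = more urgent event being serviced */
    int ready;         /* 1 if the task is ready to run                    */
};

/* Scan the task table and return the highest-priority ready task. */
static struct task *pick_next(struct task *t, int n)
{
    struct task *best = NULL;
    for (int i = 0; i < n; i++)
        if (t[i].ready && (!best || t[i].priority > best->priority))
            best = &t[i];
    return best;       /* NULL means the CPU can idle */
}

int main(void)
{
    struct task tasks[] = {
        { "logger",        1, 1 },
        { "motor-control", 5, 1 },   /* highest-priority ready task */
        { "ui-update",     2, 0 },
    };
    struct task *next = pick_next(tasks, 3);
    if (next)
        printf("dispatch: %s\n", next->name);   /* prints motor-control */
    return 0;
}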

Advantages:

The advantages of real-time operating systems are as follows-

 Maximum consumption: Maximum utilization of devices and systems. Thus


more output from all the resources.

 Task Shifting: The time assigned for shifting tasks in these systems is very small. For example, in older systems it takes about 10 microseconds to shift one task to another, and in the latest systems it takes 3 microseconds.

 Focus On Application: Focus on running applications and less importance to


applications that are in the queue.

 Real-Time Operating System In Embedded System: Since the size of programs


is small, RTOS can also be embedded systems like in transport and others.

 Error Free: These types of systems are error-free.

 Memory Allocation: Memory allocation is best managed in these types of


systems.

Disadvantages:

The disadvantages of real-time operating systems are as follows-

 Limited Tasks: Very few tasks run simultaneously, and the concentration is kept on a few applications to avoid errors.

 Use Heavy System Resources: The system resources used are sometimes heavy and expensive as well.

 Complex Algorithms: The algorithms are very complex and difficult for the designer to write.

 Device Driver And Interrupt Signals: It needs specific device drivers and interrupt signals so that it can respond to interrupts as early as possible.

 Thread Priority: It is not advisable to set thread priorities, as these systems are very rarely required to switch tasks.

 Minimum Switching: RTOS performs minimal task switching.

Comparison of Regular and Real-Time operating systems:

Regular OS                  Real-Time OS (RTOS)

Complex                     Simple
Best effort                 Guaranteed response
Fairness                    Strict timing constraints
Average bandwidth           Minimum and maximum limits
Unknown components          Components are known
Unpredictable behavior      Predictable behavior
Plug and play               RTOS is upgradeable

Applications of Real-time System

 A real-time system is a system in which the response is obtained within a specified timing constraint, i.e. the system meets the specified deadline. Real-time systems are of two types – hard and soft – and both are used in different cases. Hard real-time systems are used where even a delay of a few nano- or microseconds is not allowed. Soft real-time systems provide some relaxation in the time constraint.

Applications of Real-time System:


Real-time systems have applications in various fields of technology. Here we will discuss the important applications of real-time systems.

 Industrial application:
Real-time systems have a vast and prominent role in modern industries. Systems are made real-time based so that maximum and accurate output can be obtained. For this reason, real-time systems are used in most industrial organizations. These systems lead to better performance and higher productivity in less time. Some examples of industrial applications are: Automated Car Assembly Plant, Chemical Plant, etc.
 Medical Science application:
In the field of medical science, real-time systems have a huge impact on human health and treatment. Due to the introduction of real-time systems in medical science, many lives have been saved and the treatment of complex diseases has become easier. Medical professionals now feel more confident because of these systems. Some examples of medical science applications are: Robots, MRI Scan, Radiation therapy, etc.
 Peripheral Equipment applications:
Real-time systems have made tasks such as the printing of large banners much easier. Once these systems came into use, the technology world became much stronger. Peripheral equipment is used for various purposes. These systems are embedded with microchips and perform accurately in order to produce the desired response. Some examples of peripheral equipment applications are: Laser printer, fax machine, digital camera, etc.
 Telecommunication applications:
Real-time systems connect the world in such a way that people can communicate within a short time. Real-time systems have enabled the whole world to connect via a medium across the internet. These systems let people connect with each other in no time and experience a real feeling of togetherness. Some examples of telecommunication applications of real-time systems are: Video Conferencing, Cellular systems, etc.
 Defense applications:
In the new atomic era, defense organizations are able to produce missiles which have dangerous power and great destructive ability. All of these are real-time systems, providing both a system to attack and a system to defend. Some applications of defense using real-time systems are: Missile guidance system, anti-missile system, Satellite missile system, etc.
 Aerospace applications:
The most powerful use of real-time systems is in aerospace applications. Basically, hard real-time systems are used in aerospace applications, where a delay of even a few nanoseconds is not allowed and, if it happens, the system fails. Some applications of real-time systems in aerospace are: Satellite tracking system, Avionics, Flight simulation, etc.

Basic Model of a Real-time System

Real-time System is a system that is used for performing some specific tasks. These tasks
are related with time constraints and need to be completed in that time interval.

 Basic Model of a Real-time System: The basic model of a real-time system presents an overview of all the components involved in a real-time system. A real-time system includes various hardware and software embedded in such a way that the specific tasks can be performed within the allowed time constraints. The accuracy and correctness involved in a real-time system make the model complex. There are various models of real-time systems which are more complex and harder to understand; here we will discuss a basic model which uses some commonly used terms and hardware. The following components make up the basic model of a real-time system (a small control-loop sketch combining them follows the list):

 Sensor: Sensor is used for the conversion of some physical events or


characteristics into electrical signals. These are hardware devices that take input from the environment and give it to the system after converting it. For example, a thermometer takes the temperature as a physical characteristic and then converts it
into electrical signals for the system.
 Actuator: Actuator is the reverse device of sensor. Where sensor converts the
physical events into electrical signals, actuator does the reverse. It converts the
electrical signals into the physical events or characteristics. It takes the input from
the output interface of the system. The output from the actuator may be in any

form of physical action. Some of the commonly used actuators are motors and heaters.
 Signal Conditioning Unit: When the sensor converts the physical actions into
electrical signals, the computer cannot use them directly. Hence, after the conversion of physical actions into electrical signals, conditioning is needed. Similarly, while giving the output, when electrical signals are sent to the actuator, conditioning is also required. Therefore, signal conditioning is of two types:

 Input Conditioning Unit: It is used for conditioning the electrical signals


coming from sensor.
 Output Conditioning Unit: It is used for conditioning the electrical
signals coming from the system.

 Interface Unit: Interface units are basically used for the conversion of digital to
analog and vice-versa. Signals coming from the input conditioning unit are analog, and since the system operates on digital signals only, the interface unit is used to change the analog signals to digital signals. Similarly, while transmitting signals to the output conditioning unit, the signals are changed from digital to analog. On this basis, the interface unit is also of two
types:

 Input Interface: It is used for conversion of analog signals to digital.


 Output Interface: It is used for conversion of digital signals to analog.
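A toy C control loop tying these components together is sketched below; the sensor and actuator functions are hypothetical stand-ins for the real hardware access performed through the conditioning and interface units.

#include <stdio.h>

/* Hypothetical sensor read: in a real system this value arrives through the
   input conditioning unit and input interface (ADC). */
static double read_temperature_sensor(void) { return 78.5; }

/* Hypothetical actuator drive: in a real system this goes through the output
   interface (DAC) and output conditioning unit to a heater. */
static void set_heater(int on) { printf("heater %s\n", on ? "ON" : "OFF"); }

int main(void)
{
    const double setpoint = 75.0;            /* desired temperature           */
    for (int tick = 0; tick < 3; tick++) {   /* a few iterations for the demo */
        double t = read_temperature_sensor();    /* sensor -> digital value   */
        set_heater(t < setpoint);                /* decision -> actuator      */
    }
    return 0;
}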

Characteristics of Real-time Systems

 A real-time system is a system in which the response is obtained within a specified timing constraint, i.e. the system meets the specified deadline. Real-time systems are of two types – hard and soft – and both are used in different cases. Hard real-time systems are used where even a delay of a few nano- or microseconds is not allowed. Soft real-time systems provide some relaxation in the time constraint.

Characteristics of Real-time System:

Following are the some of the characteristics of Real-time System:

 Time Constraints: Time constraints related with real-time systems simply means
that time interval allotted for the response of the ongoing program. This deadline
means that the task should be completed within this time interval. Real-time
system is responsible for the completion of all tasks within their time intervals.
 Correctness: Correctness is one of the prominent parts of real-time systems. Real-time systems produce a correct result within the given time interval. If the result is not obtained within the given time interval, it is not considered correct even if it is otherwise right. In real-time systems, correctness of the result means obtaining the correct result within the time constraint.
 Embedded: All the real-time systems are embedded now-a-days. Embedded
system means that combination of hardware and software designed for a specific
purpose. Real-time systems collect the data from the environment and passes to
other components of the system for processing.
 Safety: Safety is necessary for any system, but real-time systems provide critical safety. Real-time systems can also perform for a long time without failures, and they recover quickly when a failure occurs without causing any harm to the data and information.
 Concurrency: Real-time systems are concurrent, which means they can respond to several processes at a time. There are several different tasks going on
within the system and it responds accordingly to every task in short intervals. This
makes the real-time systems concurrent systems.
 Distributed: In various real-time systems, all the components of the systems are
connected in a distributed way. The real-time systems are connected in such a way
that different components are at different geographical locations. Thus all the
operations of real-time systems are operated in distributed ways.
 Stability: Even when the load is very heavy, real-time systems respond within the time constraint, i.e. real-time systems do not delay the results of tasks even when several tasks are going on at the same time. This brings stability to real-time systems.
 Fault tolerance: Real-time systems must be designed to tolerate and recover from
faults or errors. The system should be able to detect errors and recover from them
without affecting the system’s performance or output.
 Determinism: Real-time systems must exhibit deterministic behavior, which
means that the system’s behavior must be predictable and repeatable for a given
input. The system must always produce the same output for a given input,
regardless of the load or other factors.
 Real-time communication: Real-time systems often require real-time
communication between different components or devices. The system must ensure
that communication is reliable, fast, and secure.
 Resource management: Real-time systems must manage their resources
efficiently, including processing power, memory, and input/output devices. The
system must ensure that resources are used optimally to meet the time constraints
and produce correct results.
 Heterogeneous environment: Real-time systems may operate in a heterogeneous
environment, where different components or devices have different characteristics
or capabilities. The system must be designed to handle these differences and
ensure that all components work together seamlessly.
 Scalability: Real-time systems must be scalable, which means that the system
must be able to handle varying workloads and increase or decrease its resources as
needed.
 Security: Real-time systems may handle sensitive data or operate in critical
environments, which makes security a crucial aspect. The system must ensure that
data is protected and access is restricted to authorized users only.

Safety And Reliability

In real-time operating systems (RTOS), safety and reliability are crucial for applications
where timely and predictable responses are essential. Safety involves ensuring that the
system behaves correctly, while reliability focuses on its ability to perform consistently
over time.

 Deterministic Behavior: RTOS must provide predictable and deterministic


execution of tasks to meet timing requirements. This is critical for safety-critical
applications.
 Task Scheduling: Efficient and reliable task scheduling ensures that critical tasks
are executed on time. Priority-based scheduling is common in RTOS to manage
task execution based on urgency.

 Interrupt Handling: Robust interrupt handling is necessary to respond promptly
to external events. RTOS should minimize interrupt latency to meet real-time
constraints.
 Resource Management: Proper resource allocation and management prevent
conflicts and ensure the availability of resources when needed. This includes
managing memory, CPU, and I/O resources.
 Fault Tolerance: RTOS should incorporate mechanisms for fault detection,
isolation, and recovery to enhance system reliability. This is particularly important
in safety-critical applications.
 Real-Time Clocks: Accurate and synchronized timekeeping is essential for
meeting deadlines. RTOS often includes real-time clocks to maintain precise time
references.
 Safety Standards Compliance: Adherence to safety standards such as ISO 26262
for automotive systems or DO-178C for avionics is crucial. Compliance helps
ensure a systematic approach to safety and reliability.
 Redundancy: Introducing redundancy, such as dual-redundant systems, can
enhance reliability by providing backup resources in case of failures.
 Error Handling: Effective error detection and handling mechanisms are essential
to identify and manage errors promptly, preventing cascading failures.
 Testing and Verification: Rigorous testing, including simulation and real-world
testing, is necessary to verify the safety and reliability of an RTOS. This includes
testing under various conditions and failure scenarios.

In summary, a reliable and safe RTOS combines deterministic behavior, efficient scheduling, fault
tolerance, and compliance with industry standards to ensure the dependable operation of
real-time systems, particularly in safety-critical applications.
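As an illustration of deterministic, clock-driven timing, the sketch below runs a periodic task using an absolute-time sleep so the period does not drift. It uses the POSIX clock_nanosleep() call; the 10 ms period and the work done are illustrative.

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);          /* current time = first release */
    for (int cycle = 0; cycle < 5; cycle++) {
        printf("cycle %d\n", cycle);                /* the periodic work (trivial here) */

        /* Advance the release time by 10 ms and sleep until that absolute time,
           so timing error does not accumulate from one period to the next. */
        next.tv_nsec += 10 * 1000 * 1000;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_sec += 1;
            next.tv_nsec -= 1000000000L;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return 0;
}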

Scheduling in Real Time Systems

 Real-time systems are systems that carry real-time tasks. These tasks need to be
performed immediately with a certain degree of urgency. In particular, these tasks
are related to control of certain events (or) reacting to them. Real-time tasks can
be classified as hard real-time tasks and soft real-time tasks.
 A hard real-time task must be performed at a specified time which could
otherwise lead to huge losses. In soft real-time tasks, a specified deadline can be
missed. This is because the task can be rescheduled or can be completed after the specified time.
 In real-time systems, the scheduler is considered as the most important component
which is typically a short-term task scheduler. The main focus of this scheduler is
to reduce the response time associated with each of the associated processes
instead of handling the deadline.
 If a preemptive scheduler is used, the real-time task needs to wait until its corresponding task's time slice completes. In the case of a non-preemptive
scheduler, even if the highest priority is allocated to the task, it needs to wait until
the completion of the current task. This task can be slow (or) of the lower priority
and can lead to a longer wait.
 A better approach is designed by combining both preemptive and non-preemptive
scheduling. This can be done by introducing time-based interrupts in priority
based systems which means the currently running process is interrupted on a time-
based interval and if a higher priority process is present in a ready queue, it is
executed by preempting the current process.

Based on schedulability, implementation (static or dynamic), and the result (self or


dependent) of analysis, the scheduling algorithm are classified as follows.

 Static table-driven approaches:


These algorithms usually perform a static analysis associated with scheduling
and capture the schedules that are advantageous. This helps in providing a
schedule that can point out a task with which the execution must be started at
run time.

 Static priority-driven preemptive approaches:


Similar to the first approach, these types of algorithms also use static analysis
of scheduling. The difference is that instead of selecting a particular schedule,
it provides a useful way of assigning priorities among various tasks in
preemptive scheduling.

 Dynamic planning-based approaches:


Here, the feasible schedules are identified dynamically (at run time). It carries a certain fixed time interval, and a process is executed if and only if it satisfies the time constraint.

 Dynamic best effort approaches:


These types of approaches consider deadlines instead of feasible schedules. Therefore, the task is aborted if its deadline is reached. This approach is widely used in most real-time systems.
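A minimal sketch of such a deadline-driven (EDF-style) dispatch decision is given below; the task data are illustrative, and a task whose deadline has already passed is simply dropped, as described above.

#include <stdio.h>

struct rt_task {
    const char *name;
    long deadline;     /* absolute deadline, in ticks */
    int  ready;
};

/* Among the ready tasks, pick the one with the earliest deadline that has
   not already expired; expired tasks are aborted (marked not ready). */
static struct rt_task *pick_edf(struct rt_task *t, int n, long now)
{
    struct rt_task *best = NULL;
    for (int i = 0; i < n; i++) {
        if (!t[i].ready)
            continue;
        if (t[i].deadline <= now) {        /* deadline already missed: abort */
            t[i].ready = 0;
            continue;
        }
        if (!best || t[i].deadline < best->deadline)
            best = &t[i];
    }
    return best;
}

int main(void)
{
    struct rt_task tasks[] = {
        { "telemetry", 120, 1 },
        { "control",    40, 1 },   /* earliest remaining deadline: runs first */
        { "logging",    15, 1 },   /* already past its deadline at t = 20     */
    };
    struct rt_task *next = pick_edf(tasks, 3, 20);
    printf("run: %s\n", next ? next->name : "(idle)");
    return 0;
}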

Advantages of Scheduling in Real-Time Systems:

 Meeting Timing Constraints: Scheduling ensures that real-time tasks are


executed within their specified timing constraints. It guarantees that critical tasks
are completed on time, preventing potential system failures or losses.
 Resource Optimization: Scheduling algorithms allocate system resources
effectively, ensuring efficient utilization of processor time, memory, and other
resources. This helps maximize system throughput and performance.
 Priority-Based Execution: Scheduling allows for priority-based execution, where
higher-priority tasks are given precedence over lower-priority tasks. This ensures
that time-critical tasks are promptly executed, leading to improved system
responsiveness and reliability.
 Predictability and Determinism: Real-time scheduling provides predictability
and determinism in task execution. It enables developers to analyze and guarantee
the worst-case execution time and response time of tasks, ensuring that critical
deadlines are met.
 Control Over Task Execution: Scheduling algorithms allow developers to have
fine-grained control over how tasks are executed, such as specifying task
priorities, deadlines, and inter-task dependencies. This control facilitates the
design and implementation of complex real-time systems.

Disadvantages of Scheduling in Real-Time Systems:

 Increased Complexity: Real-time scheduling introduces additional complexity to


system design and implementation. Developers need to carefully analyze task

requirements, define priorities, and select suitable scheduling algorithms. This
complexity can lead to increased development time and effort.
 Overhead: Scheduling introduces some overhead in terms of context switching,
task prioritization, and scheduling decisions. This overhead can impact system
performance, especially in cases where frequent context switches or complex
scheduling algorithms are employed.
 Limited Resources: Real-time systems often operate under resource-constrained
environments. Scheduling tasks within these limitations can be challenging, as the
available resources may not be sufficient to meet all timing constraints or execute
all tasks simultaneously.
 Verification and Validation: Validating the correctness of real-time schedules
and ensuring that all tasks meet their deadlines require rigorous testing and
verification techniques. Verifying timing constraints and guaranteeing the absence
of timing errors can be a complex and time-consuming process.
 Scalability: Scheduling algorithms that work well for smaller systems may not
scale effectively to larger, more complex real-time systems. As the number of
tasks and system complexity increases, scheduling decisions become more
challenging and may require more advanced algorithms or approaches.

One Marks

1. What are real-time systems?

A. Used for monitoring events as they occur


B. Primarily used on mainframe computers
C. Used for real-time interactive users
D. Used for program development

2. The __________ Operating System pays more attention to the meeting of


the time limits.
A. Network
B. Distributed
C. Online
D. Real-time

3. In a real-time operating system __________


A. the kernel is not required
B. process scheduling can be done only once
C. a task must be serviced by its deadline period
D. all processes have the same priority

4. The interrupt latency should be _________ for real time operating systems.
A. maximum
B. minimal
C. dependent on the scheduling
D. zero

5. In which type of scheduling is a certain amount of CPU time allocated to each process?


A. equal share scheduling
B. none of the mentioned
C. earliest deadline first scheduling
D. proportional share scheduling

6. The use of robots by car manufacturing companies is an example of…
A. applicant controlled computers
B. user-controlled computers
C. machine controlled computers
D. network controlled computers

7. When the system processes data and instructions without any delay, it is called a(n)…
A. online system
B. real-time system
C. instruction system
D. offline system

8. A processor that processes a single task of a particular application is which type of processor?


A. applicant processor
B. one task processor
C. real time processor
D. dedicated processor

9. The design of the system takes into consideration _________.


A. operating system
B. communication system
C. hardware
D. all of the above
E. none of these

10. The time duration required for the dispatcher to stop one process and start another is called…
A. dispatch latency
B. process latency
C. interrupt latency
D. execution latency

Questions (5 Mark and 10 Mark)

1. Can you explain the difference between hard and soft real-time systems?
2. How does a real-time operating system differ from a general-purpose operating
system?
3. Define Real-Time Operating System with its general structure.
4. List the characteristics of RTOS.
5. Differentiate between hard RTOS and soft RTOS.
6. What is the firm classification of RTOS? Explain in detail.
7. What is the difference between static and dynamic scheduling? Explain EDF and
clock-driven scheduling in detail.
8. Explain the Android OS architecture in detail.
9. Why is a virtual machine needed? Explain a VM OS with the help of a diagram.
10. Explain cloud OS architecture in detail.
11. Discuss various issues of cloud OS.
12. What are the advantages and disadvantages of iOS?

UNIT:4 HANDHELD SYSTEM

Handheld Operating System

 An operating system is a program whose job is to manage a computer’s hardware.


Its other use is that it also provides a basis for application programs and acts as an
intermediary between the computer user and the computer hardware. An amazing
feature of operating systems is how they vary in accomplishing these tasks.
Operating systems for mobile computers provide us with an environment in which
we can easily interface with the computer so that we can execute the programs.
Thus, some of the operating systems are made to be convenient, others to be well-
organized, and the rest to be some combination of the two.

Handheld Operating System:

 Handheld operating systems are available in all handheld devices like


Smartphones and tablets. It is sometimes also known as a Personal Digital
Assistant. The popular handheld device in today’s world is Android and iOS.
These operating systems need a high-processing processor and are also embedded
with various types of sensors.
Some points related to Handheld operating systems are as follows:
 Since the development of handheld computers in the 1990s, the demand for
software to operate and run on these devices has increased.
 Three major competitors have emerged in the handheld PC world with three
different operating systems for these handheld PCs.
 Out of the three companies, the first was the Palm Corporation with their PalmOS.
 Microsoft also released what was originally called Windows CE. Microsoft’s
recently released operating system for the handheld PC comes under the name of
Pocket PC.
 More recently, some companies producing handheld PCs have also started
offering a handheld version of the Linux operating system on their machines.

Features of Handheld Operating System:


 Its work is to provide real-time operations.
 There is direct usage of interrupts.
 Input/Output device flexibility.
 Configurability.
Types of Handheld Operating Systems:
Types of Handheld Operating Systems are as follows:
 Palm OS
 Symbian OS
 Linux OS
 Windows
 Android
Palm OS:
 Since the Palm Pilot was introduced in 1996, the Palm OS platform has provided
various mobile devices with essential business tools, as well as the capability that
they can access the internet via a wireless connection.
 These devices have mainly concentrated on providing basic personal-information-
management applications. The latest Palm products have progressed a lot, packing
in more storage, wireless internet, etc.

Symbian OS:
 It has been the most widely-used smartphone operating system because of its
ARM architecture before it was discontinued in 2014. It was developed by
Symbian Ltd.
 This operating system consists of two subsystems where the first one is the
microkernel-based operating system which has its associated libraries and the
second one is the interface of the operating system with which a user can interact.
 Since this operating system consumes very less power, it was developed for
smartphones and handheld devices.
 It has good connectivity as well as stability.
 It can run applications that are written in Python, Ruby, .NET, etc.
Linux OS:
 Linux OS is an open-source operating system project which is a cross-platform
system that was developed based on UNIX.
It was developed by Linus Torvalds. It is a system software that basically allows
the apps and users to perform some tasks on the PC.
 Linux is free and can be easily downloaded from the internet and it is considered
that it has the best community support.
 Linux is portable which means it can be installed on different types of devices like
mobile, computers, and tablets.
 It is a multi-user operating system.
 Linux interpreter program which is called BASH is used to execute commands.
 It provides user security using authentication features.
Windows OS:
 Windows is an operating system developed by Microsoft. Its interface which is
called Graphical User Interface eliminates the need to memorize commands for
the command line by using a mouse to navigate through menus, dialog boxes, and
buttons.
 It is named Windows because its programs are displayed in the form of a square. It
has been designed for both a beginner as well professional.
 It comes preloaded with many tools which help the users to complete all types of
tasks on their computer, mobiles, etc.
 It has a large user base so there is a much larger selection of available software
programs.
 One great feature of Windows is that it is backward compatible which means that
its old programs can run on newer versions as well.
Android OS:
 It is a Google Linux-based operating system that is mainly designed for
touchscreen devices such as phones, tablets, etc.
There are three architectures, ARM, Intel, and MIPS, which are used by the hardware for supporting Android. Android lets users manipulate their devices intuitively, with finger movements that mirror common motions such as swiping, tapping, etc.
 Android operating system can be used by anyone because it is an open-source
operating system and it is also free.
 It offers 2D and 3D graphics, GSM connectivity, etc.
 There is a huge list of applications for users since Play Store offers over one
million apps.
 Professionals who want to develop applications for the Android OS can download
the Android Development Kit. By downloading it they can easily develop apps for
android.

Advantages of Handheld Operating System:
Some advantages of a Handheld Operating System are as follows:
 Less Cost.
 Less weight and size.
 Less heat generation.
 More reliability.
Disadvantages of Handheld Operating System:
Some disadvantages of Handheld Operating Systems are as follows:
 Less Speed.
 Small Size.
 Input / Output System (memory issue or less memory is available).

How Handheld operating systems are different from Desktop operating systems?
 Since the handheld operating systems are mainly designed to run on machines that
have lower speed resources as well as less memory, they were designed in a way
that they use less memory and require fewer resources.
 They are also designed to work with different types of hardware as compared to
standard desktop operating systems.
It happens because the power requirements for standard CPUs far exceed the
power of handheld devices.
 Handheld devices aren’t able to dissipate large amounts of heat generated by
CPUs. To deal with such kind of problem, big companies like Intel and Motorola
have designed smaller CPUs with lower power requirements and also lower heat
generation. Many handheld devices fully depend on flash memory cards for their
internal memory because large hard drives do not fit into handheld devices.

Several Requirements

 A handheld operating system must fulfill several requirements to provide a


seamless and effective user experience. Key considerations include:

 Resource Efficiency: Optimal usage of hardware resources such as CPU, RAM,


and storage to ensure smooth performance.

 Compatibility: Support for a wide range of devices and hardware configurations to


accommodate various handheld systems.

 User Interface (UI): An intuitive and user-friendly interface that is suitable for
small screens and touch input.

 Application Ecosystem: A robust app store or platform that offers a diverse range
of applications for users to enhance the functionality of their handheld devices.

 Security Features: Implementation of security measures such as encryption,


secure boot, and regular security updates to protect user data and the system from
threats.

 Connectivity: Support for various connectivity options, including Wi-Fi,


Bluetooth, NFC, and cellular networks.

 Power Management: Efficient power management to optimize battery life and


ensure longer usage between charges.

 Multitasking: Ability to handle multiple applications running simultaneously,


allowing users to switch between tasks seamlessly.

 Updates and Upgrades: Regular software updates and a straightforward


mechanism for users to upgrade their operating systems to the latest versions.

 Customization: Options for users to personalize their device settings, themes, and


preferences according to their needs.

 Accessibility Features: Inclusion of features that make the handheld device


accessible to users with disabilities, such as screen readers, voice commands, and
adjustable text sizes.

 Internationalization and Localization: Support for multiple languages, regional


settings, and localized content to cater to a diverse user base.

 Developer Support: Comprehensive tools and documentation for developers to


create and optimize applications for the operating system.

 Interoperability: Compatibility with other devices and platforms, facilitating data


sharing and integration with other technologies.

 These requirements collectively contribute to the effectiveness and usability of a


handheld operating system, ensuring it meets the demands of users and keeps pace
with evolving technologies.

Technology Overview

 A handheld operating system (OS) is a specialized software platform designed to


run on portable devices, such as smartphones, tablets, and handheld gaming
consoles. Here's a technology overview of key aspects:

 Kernel:
 The core of the operating system, managing hardware resources and providing
essential services.
 May be based on monolithic, microkernel, or hybrid architectures.
 File System:
 Organizes and manages data storage on the device.
 May use file systems like FAT, exFAT, or more modern ones for performance
and reliability.
 User Interface (UI):
 Utilizes graphical interfaces optimized for touchscreens, with gestures and
touch controls.
 May include home screens, app drawers, and notification panels.
 Security:
 Implements security measures such as encryption, secure boot, and
sandboxing to protect user data and the device.
 Incorporates features like device lock, biometric authentication, and secure
credential storage.
 Application Framework:

 Provides a framework for app development using programming languages like
Java (Android) or Swift (iOS).
 Includes APIs for accessing device features like camera, sensors, and
networking.
 App Ecosystem:
 Supports app distribution through app stores.
 Uses app containers to ensure isolation and security between applications.
 Connectivity:
 Manages various connectivity options such as Wi-Fi, Bluetooth, NFC, and
cellular networks.
 Implements protocols like TCP/IP for internet connectivity.
 Power Management:
 Incorporates power management features to optimize battery life, including
sleep modes and background task management.
 Multitasking:
 Allows users to run multiple applications simultaneously.
 Utilizes task switching mechanisms and background processing.
 Updates and Upgrades:
 Provides mechanisms for over-the-air (OTA) updates and seamless upgrades
to newer OS versions.
 Ensures backward compatibility for app support.
 Device Drivers:
 Supports a wide array of hardware components through device drivers.
 Manages communication between the OS and hardware peripherals.
 Internationalization and Localization:
 Supports multiple languages and regional settings.
 Allows for easy localization of the user interface and content.
 Accessibility Features:
 Incorporates features to assist users with disabilities, such as screen readers,
voice commands, and haptic feedback.
 Developer Tools:
 Provides SDKs, APIs, and emulators for app development.
 Supports debugging and profiling tools for developers.
 Cloud Integration:
 Integrates with cloud services for data synchronization, backup, and remote
storage.
 These are the diverse technologies integrated into handheld operating systems, emphasizing their adaptability to the unique challenges and requirements of portable devices.

Introduction To Mobile Operating System – PALM OS


 PALM OS is an operating system for personal digital assistants, designed for
touchscreen. It consists of a limited number of features designed for low
memory and processor usage which in turn helps in getting longer battery life.
Features of PALM OS
 Elementary memory management system.
 Provides PALM Emulator.
 Handwriting recognition is possible.
 Supports recording and playback.
 Supports C, and C++ software.

 The User Interface in the architecture is used for graphical input-output.


 The Memory Management section is used for maintaining databases, global
variables, etc.
 System Management’s job is to maintain events, calendars, dates, times, etc.
 Communication TCP/IP as the name denotes is simply used for
communication.
 Microkernel is an essential tool in architecture. It is responsible for providing
the mechanism needed for the proper functioning of an operating system.

Development Cycle

For the development of the PALM OS, these are the phases it has to go through before it
can be used in the market:

 Editing the code for the operating system that is checking for errors and
correcting errors.
 Compile and Debug the code to check for bugs and correct functioning of the
code.
 Run the program on a mobile device or related device.
 If all the above phases are passed, we can finally have our finished product
which is the operating system for mobile devices named PALM OS.

Advantages

 Fewer features are designed for low memory and processor usage which
means longer battery life.
 No need to upgrade the operating system as it is handled automatically in
PALM OS.
 More applications are available for users.

 Extended connectivity for users. Users can now connect to wide areas.

Disadvantages

 The user cannot download applications using the external memory in PALM
OS. It will be a disadvantage for users with limited internal memory.
 Systems and extended connectivity are less compared to what is offered by
other operating systems.

Symbian Operating System:

Introduction:
 Symbian was a mobile operating system designed for smartphones and mobile
devices.
 Developed by Symbian Ltd., a consortium established in 1998, with major
contributions from Nokia, Ericsson, and Motorola.
Architecture:
 Symbian OS had a microkernel architecture, providing a modular and flexible
framework.
 Designed to run on various hardware platforms and support a wide range of
devices.
User Interface:
 Symbian featured a variety of user interfaces, including Series 60, Series 80, and
Series 90.
 Series 60 became the most popular UI, used in many Nokia smartphones.
Applications:
 The platform supported native Symbian applications written in C++.
 The Symbian OS had its own app store, Ovi Store, where users could download
and install applications.
Multitasking:
 Symbian OS was known for its robust multitasking capabilities, allowing users
to run multiple applications simultaneously.
Customization:
 Phone manufacturers could customize the Symbian interface to differentiate their
devices.
 This flexibility led to a diverse range of Symbian-powered phones with unique
features.
Decline and Discontinuation:
 Symbian faced challenges from competitors like iOS and Android, which offered
more modern and user-friendly experiences.
 Nokia's decision to adopt Windows Phone over Symbian contributed to the
decline.
 Nokia officially discontinued Symbian in 2013, marking the end of its era in the
mobile industry.
Legacy:
 Despite its decline, Symbian played a crucial role in the early development of
smartphone operating systems.
 Some of its features and concepts influenced later mobile platforms.
Impact:
 Symbian was once a dominant force in the smartphone market, particularly in the
early 2000s.

 Its decline paved the way for the rise of iOS and Android, shaping the current
mobile landscape.
Open Source Transition:
 In 2010, Symbian became an open-source platform, allowing developers to
contribute to its development.
 The open-source transition, however, couldn't revive its fortunes in the face of
strong competition.
These notes provide a comprehensive overview of the Symbian operating system, its
features, and its eventual decline in the rapidly evolving mobile industry.

Android Architecture

 Android architecture contains a number of different components to support any Android device's needs. Android software contains an open-source Linux kernel and a collection of C/C++ libraries which are exposed through an application framework service.
 Among all the components Linux Kernel provides main functionality of operating
system functions to smartphones and Dalvik Virtual Machine (DVM) provide
platform for running an android application.
The main components of android architecture are following:
 Applications
 Application Framework
 Android Runtime
 Platform Libraries
 Linux Kernel

Pictorial representation of android architecture with several main components and their
sub components

Applications
 Applications is the top layer of android architecture. The pre-installed applications
like home, contacts, camera, gallery etc and third party applications downloaded
from the play store like chat applications, games etc. will be installed on this layer
only.
It runs within the Android run time with the help of the classes and services
provided by the application framework.
Application framework
 Application Framework provides several important classes which are used to
create an Android application. It provides a generic abstraction for hardware
access and also helps in managing the user interface with application resources.
Generally, it provides the services with the help of which we can create a
particular class and make that class helpful for the Applications creation.
 It includes different types of services activity manager, notification manager, view
system, package manager etc. which are helpful for the development of our
application according to the prerequisite.

Application runtime
 Android Runtime environment is one of the most important part of Android. It
contains components like core libraries and the Dalvik virtual machine(DVM).
Mainly, it provides the base for the application framework and powers our
application with the help of the core libraries.
 Like Java Virtual Machine (JVM), Dalvik Virtual Machine (DVM) is a register-
based virtual machine and specially designed and optimized for android to ensure
that a device can run multiple instances efficiently. It depends on the layer Linux
kernel for threading and low-level memory management. The core libraries enable
us to implement android applications using the standard JAVA or Kotlin
programming languages.
Platform libraries
The Platform Libraries includes various C/C++ core libraries and Java based libraries
such as Media, Graphics, Surface Manager, OpenGL etc. to provide a support for android
development.
 Media library provides support to play and record an audio and video formats.
 Surface manager responsible for managing access to the display subsystem.
 SGL and OpenGL both cross-language, cross-platform application program
interface (API) are used for 2D and 3D computer graphics.
 SQLite provides database support and FreeType provides font support.
 WebKit: This open-source web browser engine provides all the functionality to display web content and to simplify page loading.
 SSL (Secure Sockets Layer) is security technology to establish an encrypted link
between a web server and a web browser.
Linux Kernel –
 Linux Kernel is heart of the android architecture. It manages all the available
drivers such as display drivers, camera drivers, Bluetooth drivers, audio drivers,
memory drivers, etc. which are required during the runtime.
 The Linux Kernel will provide an abstraction layer between the device hardware
and the other components of android architecture. It is responsible for
management of memory, power, devices etc.
 The features of Linux kernel are:
 Security: The Linux kernel handles the security between the application and
the system.
 Memory Management: It efficiently handles the memory management
thereby providing the freedom to develop our apps.
 Process Management: It manages the process well, allocates resources to
processes whenever they need them.
 Network Stack: It effectively handles the network communication.
 Driver Model: It ensures that the application works properly on the device
and hardware manufacturers responsible for building their drivers into the
Linux build.

Securing A Handheld System

Securing a handheld system, such as a smartphone or tablet, is crucial to protect personal


information and sensitive data. Here are key practices for enhancing the security of
handheld devices:

Screen Lock:

 Enable a strong screen lock mechanism, such as a PIN, password, pattern, or


biometric authentication (fingerprint, facial recognition).

Device Encryption:
 Enable full-device encryption to safeguard data stored on the device. This
ensures that even if the device is lost or stolen, the data remains inaccessible.
Regular Software Updates:
 Keep the operating system, applications, and security software up to date to
patch vulnerabilities and benefit from the latest security features.
App Permissions:
 Review and manage app permissions. Only grant necessary permissions to apps
and be cautious about granting access to sensitive information.
App Source Verification:
 Download apps only from official app stores (Google Play, Apple App Store).
Avoid installing apps from untrusted sources to minimize the risk of malware.
Device Tracking and Remote Wipe:
 Enable device tracking services (Find My iPhone, Find My Device) to locate
the device in case of loss. Also, configure remote wipe options to erase data if
the device cannot be recovered.
Network Security:
 Use secure Wi-Fi connections and avoid connecting to public or unsecured
networks. Consider using a Virtual Private Network (VPN) for additional
security.

Biometric Authentication:
 If available, use biometric authentication methods like fingerprint or facial
recognition for a convenient and secure unlocking process.
Secure Backup:
 Regularly back up important data to a secure and trusted cloud service. This
ensures data recovery in case of device loss, damage, or a reset.
App Updates:
 Keep apps updated to the latest versions, as updates often include security
patches. Enable automatic app updates if possible.
Secure Browsing:
 Use secure browsing practices, avoid visiting suspicious websites, and be
cautious with clicking on links from unknown sources, especially in emails or
messages.

Two-Factor Authentication (2FA):


 Enable two-factor authentication for accounts whenever possible. This adds an
extra layer of security beyond passwords.
App Whitelisting:
 Consider using app whitelisting to control which apps are allowed to run on the
device. This helps prevent unauthorized or malicious apps from executing.
Educate Users:
 Educate users on security best practices, such as recognizing phishing attempts,
being cautious with downloads, and understanding the importance of regular
updates.
Privacy Settings:
 Review and adjust privacy settings for apps and the operating system. Limit
the amount of personal information shared with apps and services.

 Implementing these practices helps create a robust security posture for
handheld systems, mitigating potential risks and ensuring the protection of
sensitive information.

Questions (1 Mark)

1. Which of the following is not a part of the operating system?


a. Supervisor c. Job control program
b. Performance monitor d. Input/Output control program

2. All the time a computer is switched on, its operating system software has to stay in
a. main storage c. floppy disk
b. primary storage d. disk drive

3. Scheduling is ____________

a. allowing jobs to use the processor


b. unrelated performance consideration
c. not required in uniprocessor systems
d. the same regardless of the purpose of the system

4. Trojan Horse Programs


a. are legitimate programs that allows unauthorized access
b. do not usually work
c. are hidden programs that do not show up on the system
d. usually are immediately discovered

5. The operating system of a computer serves as a software interface between the user and
a. screen c. peripheral
b. memory d. hardware

6. What is the name given to the organised collection of software that controls the overall
operation of a computer?
a. operating system c. peripheral system
b. controlling system d. working system

7. Process is _________
a. a job in secondary memory
b. a program in execution
c. contents of main memory
d. program in High level language kept on disk

8. In a timeshare operating system, when the time slot assigned to a process is completed,
the process switches from the current state to?

a) Suspended state c) Ready state


b) Terminated state d) Blocked state

9. What is a medium-term scheduler?

a. it selects which process has to be brought into the ready queue



b. it selects which process has to be executed next and allocates cpu


c. it selects which process to remove from memory by swapping
d. none of these

10. Which of the following does not interrupt a running process ?


a. a device
b. timer
c. scheduler process
d. power failure

Questions (5 Mark and 10 Mark)

1. What is an operating system and what is its importance in business?


2. What types of operating systems are available?
3. Describe about PALM OS.
4. Explain Symbian Operating System.
5. Detail about handheld system.
6. What is Linux kernel?
7. Write about Android Architecture?
8. Describe the operating system.
9. Write short notes on Securing handheld system?
10. What are the Platform libraries?

UNIT:5 CASE STUDIES

Linux System

 Linux is a popular version of the UNIX operating system. It is open source, as its source
code is freely available, and it is free to use. Linux was designed with UNIX
compatibility in mind, so its functionality is quite similar to that of UNIX.
Components of Linux System
Linux Operating System has primarily three components

 Kernel − Kernel is the core part of Linux. It is responsible for all major activities of this
operating system. It consists of various modules and it interacts directly with the
underlying hardware. Kernel provides the required abstraction to hide low level
hardware details to system or application programs.
 System Library − System libraries are special functions or programs through which
application programs or system utilities access the kernel's features. These libraries
implement most of the functionalities of the operating system and do not require the
kernel module's code access rights.
 System Utility − System utility programs are responsible for specialized, individual-
level tasks.

Kernel Mode vs User Mode


 Kernel component code executes in a special privileged mode called kernel mode, with
full access to all resources of the computer. This code represents a single process,
executes in a single address space, and does not require any context switch; hence it is
very efficient and fast. The kernel runs each process and provides system services to
processes, including protected access to the hardware.
 Support code that is not required to run in kernel mode is placed in the system libraries.
User programs and other system programs work in user mode, which has no direct
access to the system hardware or kernel code. User programs and utilities call the system
libraries to access kernel functions and have the system's low-level tasks carried out.
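As a small illustration of this split, the sketch below (plain C on a Linux-like system) calls the
write() system call through its C library wrapper: the program and the wrapper run in user
mode, and the actual I/O is carried out by the kernel after the system-call trap. The message
text is arbitrary.

    #include <unistd.h>   /* write() wrapper provided by the C library */
    #include <string.h>

    int main(void)
    {
        const char *msg = "hello from user mode\n";

        /* The library wrapper runs in user mode; the privileged I/O is
           performed by the kernel after the system-call trap. */
        write(STDOUT_FILENO, msg, strlen(msg));
        return 0;
    }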

Basic Features
Following are some of the important features of Linux Operating System.
 Portable − Portability means software can work on different types of hardware in the
same way. The Linux kernel and application programs support installation on almost
any kind of hardware platform.
 Open Source − Linux source code is freely available and it is a community-based
development project. Multiple teams work in collaboration to enhance the capability of
the Linux operating system, and it is continuously evolving.
 Multi-User − Linux is a multi-user system, meaning multiple users can access system
resources such as memory, RAM and application programs at the same time.
 Multiprogramming − Linux is a multiprogramming system, meaning multiple
applications can run at the same time.
 Hierarchical File System − Linux provides a standard file structure in which system
files/ user files are arranged.
 Shell − Linux provides a special interpreter program which can be used to execute
commands of the operating system. It can be used to do various types of operations, call
application programs. etc.
 Security − Linux provides user security using authentication features like password
protection/ controlled access to specific files/ encryption of data.

Architecture
The following illustration shows the architecture of a Linux system −

The architecture of a Linux System consists of the following layers −


 Hardware layer − Hardware consists of all peripheral devices (RAM/ HDD/ CPU etc).
 Kernel − It is the core component of Operating System, interacts directly with hardware,
provides low level services to upper layer components.
 Shell − An interface to kernel, hiding complexity of kernel's functions from users. The
shell takes commands from the user and executes kernel's functions.
 Utilities − Utility programs that provide the user most of the functionalities of an
operating systems.

Memory management

What is Main Memory?


 The main memory is central to the operation of a Modern Computer. Main Memory is
a large array of words or bytes, ranging in size from hundreds of thousands to billions.
Main memory is a repository of rapidly available information shared by the CPU and
I/O devices. Main memory is the place where programs and information are kept when
the processor is effectively utilizing them. Main memory is associated with the
processor, so moving instructions and information into and out of the processor is
extremely fast. Main memory is also known as RAM (Random Access Memory). This
memory is volatile. RAM loses its data when a power interruption occurs.

Main Memory

What is Memory Management?


 In a multiprogramming computer, the Operating System resides in a part of
memory, and the rest is used by multiple processes. The task of subdividing the
memory among different processes is called Memory Management. Memory
management is a method in the operating system to manage operations between
main memory and disk during process execution. The main aim of memory
management is to achieve efficient utilization of memory.
Why Memory Management is Required?
 Allocate and de-allocate memory before and after process execution.
 To keep track of used memory space by processes.
 To minimize fragmentation issues.
 To ensure proper utilization of main memory.
 To maintain data integrity while a process is executing.

Logical and Physical Address Space

 Logical Address Space: An address generated by the CPU is known as a “Logical


Address”. It is also known as a Virtual address. Logical address space can be defined as
the size of the process. A logical address can be changed.
 Physical Address Space: An address seen by the memory unit (i.e the one loaded into
the memory address register of the memory) is commonly known as a “Physical
Address”. A Physical address is also known as a Real address. The set of all physical
addresses corresponding to these logical addresses is known as Physical address space.

A physical address is computed by MMU. The run-time mapping from virtual to


physical addresses is done by a hardware device Memory Management Unit(MMU).
The physical address always remains constant.

Static and Dynamic Loading

Loading a process into the main memory is done by a loader. There are two different
types of loading :
 Static Loading: Static Loading is basically loading the entire program into a fixed
address. It requires more memory space.
 Dynamic Loading: With static loading, the entire program and all data of a process must
be in physical memory for the process to execute, so the size of a process is limited to
the size of physical memory. To gain better memory utilization, dynamic loading is
used. In dynamic loading, a routine is not loaded until it is called. All routines reside on
disk in a relocatable load format. One advantage of dynamic loading is that an unused
routine is never loaded, which is useful when a large amount of code is needed only
occasionally.

Static and Dynamic Linking

 To perform a linking task a linker is used. A linker is a program that takes one or more
object files generated by a compiler and combines them into a single executable file.
 Static Linking: In static linking, the linker combines all necessary program modules
into a single executable program. So there is no runtime dependency. Some operating
systems support only static linking, in which system language libraries are treated like
any other object module.
 Dynamic Linking: The basic concept of dynamic linking is similar to dynamic loading.
In dynamic linking, “Stub” is included for each appropriate library routine reference. A
stub is a small piece of code. When the stub is executed, it checks whether the needed
routine is already in memory or not. If not available then the program loads the routine
into memory.
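To make the idea of loading a routine only when it is needed concrete, here is a minimal
sketch of run-time dynamic loading on a Linux-like system using the dlopen/dlsym interface
(it may need to be linked with -ldl). The library name libm.so.6 and the symbol cos are used
purely as familiar examples.

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* The math library is not loaded until this call is made. */
        void *handle = dlopen("libm.so.6", RTLD_LAZY);
        if (handle == NULL) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        /* Look up the routine by name, then call it through a pointer. */
        double (*cosine)(double) = (double (*)(double)) dlsym(handle, "cos");
        if (cosine != NULL)
            printf("cos(0.0) = %f\n", cosine(0.0));

        dlclose(handle);
        return 0;
    }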

Swapping

 When a process is to be executed it must reside in memory. Swapping is the act of
temporarily moving a process from main memory (which is fast) out to secondary
storage, and later bringing it back. Swapping allows more processes to be run than can
fit into memory at one time. The main cost of swapping is transfer time, and the total
transfer time is directly proportional to the amount of memory swapped. Swapping is
also known as roll out, roll in, because if a higher-priority process arrives and wants
service, the memory manager can swap out a lower-priority process and then load and
execute the higher-priority process. After the higher-priority work finishes, the lower-
priority process is swapped back into memory and its execution is continued.

swapping in memory management
Memory Management with Monoprogramming (Without Swapping)
This is the simplest memory management approach the memory is divided into two
sections:
 One part for the operating system
 The second part for the user program

(Figure: a fence register separates the operating system area from the user program area.)

 In this approach, the operating system keeps track of the first and last location available
for the allocation of the user program
 The operating system is loaded either at the bottom or at top
 Interrupt vectors are often loaded in low memory therefore, it makes sense to load the
operating system in low memory
 Sharing of data and code does not make much sense in a single process environment
 The Operating system can be protected from user programs with the help of a fence
register.
Advantages of this approach
 It is a simple management approach
Disadvantages of this approach
 It does not support multiprogramming
 Memory is wasted
Multiprogramming with Fixed Partitions (Without Swapping)
 A memory partition scheme with a fixed number of partitions was introduced to support
multiprogramming. this scheme is based on contiguous allocation
 Each partition is a block of contiguous memory
 Memory is partitioned into a fixed number of partitions.
 Each partition is of fixed size
Example: As shown in the figure, memory is partitioned into 5 regions; one region is reserved
for the operating system and the remaining four partitions are for user programs.

Fixed Size Partitioning (figure): memory divided into five fixed partitions − Operating System | p1 | p2 | p3 | p4
Partition Table
 Once partitions are defined operating system keeps track of the status of memory
partitions it is done through a data structure called a partition table.

Sample Partition Table

Starting Address of Partition   Size of Partition   Status
0k                              200k                allocated
200k                            100k                free
300k                            150k                free
450k                            250k                allocated

Logical vs Physical Address

 An address generated by the CPU is commonly referred to as a logical address, while the
address seen by the memory unit is known as the physical address. The logical address
can be mapped to a physical address by hardware with the help of a base register; this is
known as dynamic relocation of memory references.
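A minimal sketch of what dynamic relocation does conceptually is given below. The base and
limit values are assumed figures; a real MMU performs this mapping and the limit check in
hardware, so the C function is only an illustration.

    #include <stdio.h>
    #include <stdlib.h>

    #define BASE  300000u   /* where the process is loaded (assumed value)   */
    #define LIMIT 120000u   /* size of the process's logical address space   */

    /* Map a logical address to a physical address, trapping if out of range. */
    unsigned long translate(unsigned long logical)
    {
        if (logical >= LIMIT) {
            fprintf(stderr, "addressing error: trap to the operating system\n");
            exit(1);
        }
        return BASE + logical;   /* dynamic relocation */
    }

    int main(void)
    {
        printf("logical 1000 -> physical %lu\n", translate(1000));
        return 0;
    }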

Contiguous Memory Allocation

 The main memory should accommodate both the operating system and the
different client processes. Therefore, the allocation of memory becomes an
important task in the operating system. The memory is usually divided into two
partitions: one for the resident operating system and one for the user processes. We
normally need several user processes to reside in memory simultaneously.
Therefore, we need to consider how to allocate available memory to the processes

that are in the input queue waiting to be brought into memory. In adjacent memory
allotment, each process is contained in a single contiguous segment of memory.

Contiguous Memory Allocation


Memory Allocation
 To gain proper memory utilization, memory must be allocated in an efficient manner.
One of the simplest methods for allocating memory is to divide memory into several
fixed-sized partitions, where each partition contains exactly one process. Thus, the
degree of multiprogramming is bounded by the number of partitions.
 Fixed (multiple) partition allocation: In this method, a process is selected from the input
queue and loaded into a free partition. When the process terminates, the partition
becomes available for other processes.
 Variable (dynamic) partition allocation: In this method, the operating system maintains
a table that indicates which parts of memory are available and which are occupied by
processes. Initially, all memory is available for user processes and is considered one
large block of available memory. This available memory is known as a “Hole”. When a
process arrives and needs memory, we search for a hole that is large enough to store the
process. If the requirement is fulfilled, we allocate memory to the process and keep the
rest available to satisfy future requests. While allocating memory, the dynamic storage
allocation problem arises, which concerns how to satisfy a request of size n from a list
of free holes. There are some solutions to this problem:

First Fit
In First Fit, the first available free hole that is large enough to fulfil the requirement of
the process is allocated.

First Fit
Here, in this diagram, a 40 KB memory block is the first available free hole that can
store process A (size of 25 KB), because the first two blocks did not have sufficient
memory space.
Best Fit
In the Best Fit, allocate the smallest hole that is big enough to process requirements.
For this, we search the entire list, unless the list is ordered by size.

Best Fit
Here in this example, first, we traverse the complete list and find the last hole 25KB is
the best suitable hole for Process A(size 25KB). In this method, memory utilization is
maximum as compared to other memory allocation techniques.

Worst Fit

In the Worst Fit, allocate the largest available hole to process. This method produces
the largest leftover hole.

Worst Fit
Here in this example, Process A (Size 25 KB) is allocated to the largest available
memory block which is 60KB. Inefficient memory utilization is a major issue in the
worst fit.
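The three placement strategies differ only in how a hole is chosen from the free list. The sketch
below uses hypothetical hole sizes and returns the index of the hole that first fit, best fit or
worst fit would pick, or -1 when no hole is large enough.

    #include <stdio.h>

    /* Return the index of the hole chosen for a request of size 'req',
       or -1 if no hole is large enough.
       strategy: 0 = first fit, 1 = best fit, 2 = worst fit. */
    int choose_hole(const int holes[], int n, int req, int strategy)
    {
        int chosen = -1;
        for (int i = 0; i < n; i++) {
            if (holes[i] < req)
                continue;                       /* hole too small        */
            if (strategy == 0)
                return i;                       /* first fit: stop here  */
            if (chosen == -1 ||
                (strategy == 1 && holes[i] < holes[chosen]) ||   /* best fit  */
                (strategy == 2 && holes[i] > holes[chosen]))     /* worst fit */
                chosen = i;
        }
        return chosen;
    }

    int main(void)
    {
        int holes[] = {30, 10, 40, 25, 60};     /* free hole sizes in KB (assumed) */
        int n = 5, request = 25;
        printf("first fit -> hole %d\n", choose_hole(holes, n, request, 0));
        printf("best  fit -> hole %d\n", choose_hole(holes, n, request, 1));
        printf("worst fit -> hole %d\n", choose_hole(holes, n, request, 2));
        return 0;
    }

With these assumed sizes the request of 25 KB goes to the 30 KB hole under first fit, the 25 KB
hole under best fit and the 60 KB hole under worst fit.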

Fragmentation
 Fragmentation is defined as when the process is loaded and removed after execution
from memory, it creates a small free hole. These holes cannot be assigned to new
processes because holes are not combined or do not fulfill the memory requirement of
the process. To achieve a higher degree of multiprogramming, we must reduce this waste of
memory, i.e. the fragmentation problem. In operating systems there are two types of
fragmentation:
 Internal fragmentation: Internal fragmentation occurs when a memory block allocated
to a process is larger than its requested size. The unused space left over inside the block
creates the internal fragmentation problem. Example: Suppose there is
a fixed partitioning used for memory allocation and the different sizes of blocks 3MB,
6MB, and 7MB space in memory. Now a new process p4 of size 2MB comes and
demands a block of memory. It gets a memory block of 3MB but 1MB block of memory
is a waste, and it can not be allocated to other processes too. This is called internal
fragmentation.
 External fragmentation: In External Fragmentation, we have a free memory block, but
we cannot assign it to a process because blocks are not contiguous.
Example: Suppose (consider the above example) three processes p1, p2, and p3 come
with sizes 2MB, 4MB, and 7MB respectively. Now they get memory blocks of size
3MB, 6MB, and 7MB allocated respectively. After allocating the process p1 process and
the p2 process left 1MB and 2MB. Suppose a new process p4 comes and demands a
3MB block of memory, which is available, but we can not assign it because free memory
space is not contiguous. This is called external fragmentation.
Both the first-fit and best-fit systems for memory allocation are affected by external
fragmentation. To overcome the external fragmentation problem Compaction is used.
In the compaction technique, all free memory space combines and makes one large
block. So, this space can be used by other processes effectively.

Another possible solution to the external fragmentation is to allow the logical address
space of the processes to be noncontiguous, thus permitting a process to be allocated
physical memory wherever the latter is available.
Paging
Paging is a memory management scheme that eliminates the need for a contiguous
allocation of physical memory. This scheme permits the physical address space of a
process to be non-contiguous.
 Logical Address or Virtual Address (represented in bits): An address generated by
the CPU.
 Logical Address Space or Virtual Address Space (represented in words or
bytes): The set of all logical addresses generated by a program.
 Physical Address (represented in bits): An address actually available on a memory
unit.
 Physical Address Space (represented in words or bytes): The set of all physical
addresses corresponding to the logical addresses.
Example:
 If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words
(1 G = 2^30)
 If Logical Address Space = 128 M words = 2^7 * 2^20 words, then Logical Address =
log2(2^27) = 27 bits
 If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M words
(1 M = 2^20)
 If Physical Address Space = 16 M words = 2^4 * 2^20 words, then Physical Address =
log2(2^24) = 24 bits

The mapping from virtual to physical address is done by the memory management unit
(MMU) which is a hardware device and this mapping is known as the paging
technique.
 The Physical Address Space is conceptually divided into several fixed-size blocks,
called frames.
 The Logical Address Space is also split into fixed-size blocks, called pages.
 Page Size = Frame Size
Let us consider an example:
 Physical Address = 12 bits, then Physical Address Space = 4 K words
 Logical Address = 13 bits, then Logical Address Space = 8 K words
 Page size = frame size = 1 K words (assumption)

Paging
The address generated by the CPU is divided into:
 Page Number(p): Number of bits required to represent the pages in Logical Address
Space or Page number
 Page Offset(d): Number of bits required to represent a particular word in a page or page
size of Logical Address Space or word number of a page or page offset.
Physical Address is divided into:
 Frame Number(f): Number of bits required to represent the frame of Physical Address
Space or Frame number frame
 Frame Offset(d): Number of bits required to represent a particular word in a frame or
frame size of Physical Address Space or word number of a frame or frame offset.
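Using the running example above (a 13-bit logical address with 1 K-word pages, so the offset
occupies the lower 10 bits and the page number the upper 3 bits), the page number and page
offset can be obtained by shifting and masking. The sample address in the sketch below is
arbitrary and purely illustrative.

    #include <stdio.h>

    #define OFFSET_BITS 10                        /* page size = 1 K words */
    #define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

    int main(void)
    {
        unsigned logical = 0x1ABC & 0x1FFF;       /* a 13-bit logical address  */
        unsigned page    = logical >> OFFSET_BITS;    /* upper 3 bits  */
        unsigned offset  = logical &  OFFSET_MASK;    /* lower 10 bits */
        printf("logical %u -> page %u, offset %u\n", logical, page, offset);
        return 0;
    }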
The hardware implementation of the page table can be done by using dedicated
registers. But the usage of the register for the page table is satisfactory only if the page
table is small. If the page table contains a large number of entries then we can use
TLB(translation Look-aside buffer), a special, small, fast look-up hardware cache.
 The TLB is an associative, high-speed memory.
 Each entry in TLB consists of two parts: a tag and a value.
 When this memory is used, then an item is compared with all tags simultaneously. If the
item is found, then the corresponding value is returned.

Page Map Table
If the page table is kept in main memory and the main memory access time is m, then
Effective access time = m (to access the page table) + m (to access the required word in the
page) = 2m.

TLB Hit and Miss
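When a TLB is used, the effective access time becomes a weighted average of the hit and miss
cases. The sketch below works through one hedged example, assuming a 20 ns TLB lookup, a
100 ns memory access and a 90% hit ratio (figures chosen only for illustration).

    #include <stdio.h>

    int main(void)
    {
        double tlb = 20.0, mem = 100.0, hit = 0.90;   /* assumed timings */

        /* Hit: TLB lookup + one memory access.
           Miss: TLB lookup + page-table access + memory access. */
        double eat = hit * (tlb + mem) + (1.0 - hit) * (tlb + 2.0 * mem);
        printf("effective access time = %.1f ns\n", eat);   /* 130.0 ns */
        return 0;
    }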

Process Schedulers in Operating System

 In computing, a process is the instance of a computer program that is being executed by one
or many threads. Scheduling is important in many different computer environments. One of
the most important areas is scheduling which programs will work on the CPU. This task is
handled by the Operating System (OS) of the computer and there are many different ways
in which we can choose to configure programs.

What is Process Scheduling?


 Process scheduling is the activity of the process manager that handles the removal
of the running process from the CPU and the selection of another process based on
a particular strategy.
 Process scheduling is an essential part of a Multiprogramming operating system.
Such operating systems allow more than one process to be loaded into the
executable memory at a time and the loaded process shares the CPU using time
multiplexing.

Process scheduler

Categories of Scheduling
Scheduling falls into one of two categories:
 Non-preemptive: In this case, a process’s resource cannot be taken before the
process has finished running. When a running process finishes and transitions to a
waiting state, resources are switched.
 Preemptive: In this case, the OS assigns resources to a process for a predetermined
period. The process switches from running state to ready state or from waiting for
state to ready state during resource allocation. This switching happens because the
CPU may give other processes priority and substitute the currently active process
for the higher priority process.
Types of Process Schedulers
There are three types of process schedulers:
 Long Term or Job Scheduler
It brings the new process to the ‘Ready State’. It controls the Degree of Multi-
programming, i.e., the number of processes present in a ready state at any point in
time. It is important that the long-term scheduler makes a careful selection of both
I/O-bound and CPU-bound processes. I/O-bound tasks are those that spend much of their
time on input and output operations, while CPU-bound processes are those that spend
most of their time on the CPU. The job scheduler increases efficiency by maintaining a
balance between the two. Long-term schedulers operate at a high level and are typically
used in batch-processing systems.
 Short-Term or CPU Scheduler

It is responsible for selecting one process from the ready state for scheduling it on
the running state. Note: the short-term scheduler only selects the process to schedule; it
does not load the process into the running state. This is where all the scheduling
algorithms are used. The CPU scheduler is responsible for ensuring there is no starvation
caused by processes with high burst times.

Short Term Scheduler


The dispatcher is responsible for loading the process selected by the Short-term
scheduler on the CPU (Ready to Running State) Context switching is done by the
dispatcher only. A dispatcher does the following:
 Switching context.
 Switching to user mode.
 Jumping to the proper location in the newly loaded program.
 Medium-Term Scheduler
It is responsible for suspending and resuming the process. It mainly does swapping
(moving processes from main memory to disk and vice versa). Swapping may be
necessary to improve the process mix or because a change in memory requirements has
overcommitted available memory, requiring memory to be freed up. It is helpful in
maintaining a perfect balance between the I/O bound and the CPU bound. It reduces
the degree of multiprogramming.

Medium Term Scheduler


Some Other Schedulers
 I/O schedulers: I/O schedulers are in charge of managing the execution of I/O operations
such as reading and writing to discs or networks. They can use various algorithms to
determine the order in which I/O operations are executed, such as FCFS (First-Come,
First-Served) or RR (Round Robin).

 Real-time schedulers: In real-time systems, real-time schedulers ensure that critical tasks
are completed within a specified time frame. They can prioritize and schedule tasks using
various algorithms such as EDF (Earliest Deadline First) or RM (Rate Monotonic).

Comparison Among Scheduler


Long Term Scheduler | Short Term Scheduler | Medium Term Scheduler
It is a job scheduler. | It is a CPU scheduler. | It is a process-swapping scheduler.
Its speed is generally lower than that of the short-term scheduler. | Its speed is the fastest among the three. | Its speed lies in between the short-term and long-term schedulers.
It controls the degree of multiprogramming. | It gives less control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
It is barely present or nonexistent in time-sharing systems. | It is minimal in time-sharing systems. | It is a component of time-sharing systems.
It brings new processes into memory and into the ready state. | It selects those processes which are ready to execute. | It can re-introduce a swapped-out process into memory so that its execution can be continued.

Two-State Process Model


The terms “running” and “non-running” states are used to describe the two-state
process model.
 Running: A newly created process joins the system in a running state when it is
created.
 Not running: Processes that are not currently running are kept in a queue and await
execution. A pointer to a specific process is contained in each entry in the queue.
Linked lists are used to implement the queue system. This is how the dispatcher is
used. When a process is stopped, it is moved to the back of the waiting queue. The
process is discarded depending on whether it succeeded or failed. The dispatcher then
chooses a process to run from the queue in either scenario.

Context Switching
 In order for a process execution to be continued from the same point at a later time,
context switching is a mechanism to store and restore the state or context of a CPU
in the Process Control block. A context switcher makes it possible for multiple
processes to share a single CPU using this method. A multitasking operating
system must include context switching among its features.
 The state of the currently running process is saved into the process control block
when the scheduler switches the CPU from executing one process to another. The
state used to set the computer, registers, etc. for the process that will run next is
then loaded from its own PCB. After that, the second can start processing.

The context saved in the process control block and restored on a context switch typically
includes:
 Program Counter
 Scheduling information
 The base and limit register value
 Currently used register
 Changed State
 I/O State information
 Accounting information

Process scheduling policies

Process scheduling policies determine the order in which processes are selected to run on a
computer's CPU. Common policies include:

1. First Come First Serve (FCFS): Processes are executed in the order they arrive.

2. Shortest Job Next (SJN) or Shortest Job First (SJF): The process with the shortest
burst time is scheduled first.

3. Round Robin (RR): Each process is assigned a fixed time slot, and they take turns
running in that slot.

4. Priority Scheduling: Processes with higher priority levels are scheduled before those
with lower priorities.

5. Multilevel Queue Scheduling: Processes are divided into queues based on priority,
and each queue has its scheduling algorithm.

6. Multilevel Feedback Queue Scheduling: Similar to multilevel queue scheduling, but


processes can move between queues based on their behavior.

The choice of scheduling policy depends on the system's requirements and goals. Each policy
has its strengths and weaknesses in terms of throughput, response time, and fairness.
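As a small illustration of the first policy above, the sketch below computes the waiting time of
each process under FCFS when all processes arrive at time 0; the burst times are assumed
values.

    #include <stdio.h>

    int main(void)
    {
        int burst[] = {24, 3, 3};      /* CPU burst times (assumed), arrival = 0 */
        int n = 3, wait = 0, total = 0;

        /* Under FCFS each process waits for all the bursts submitted before it. */
        for (int i = 0; i < n; i++) {
            printf("process %d waits %d units\n", i + 1, wait);
            total += wait;
            wait  += burst[i];
        }
        printf("average waiting time = %.2f\n", (double) total / n);
        return 0;
    }

With these bursts the waiting times are 0, 24 and 27 units, giving an average of 17.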

Managing input and output (I/O) devices

Managing input and output (I/O) devices is a crucial aspect of operating systems. Here are
key considerations:

1. Device Drivers: Operating systems use device drivers to communicate with


hardware. These drivers act as intermediaries, translating high-level commands from
the OS into instructions the device can understand.

2. I/O Buffering: Buffering helps optimize data transfer between the CPU and I/O
devices. It involves using memory to store data temporarily before it's sent to or after
it's received from a device.

3. Interrupts: Interrupts allow I/O devices to signal the CPU when they need attention.
This mechanism enhances system responsiveness and efficiency by enabling the CPU
to handle other tasks while waiting for I/O operations to complete.

4. I/O Scheduling: Operating systems employ I/O scheduling algorithms to efficiently
manage requests from multiple processes for access to I/O devices. Common
algorithms include First-Come-First-Serve (FCFS) and Shortest Seek Time First
(SSTF); a small selection sketch is given at the end of this section.

5. Direct Memory Access (DMA): DMA allows data to be transferred between


memory and I/O devices without CPU intervention. This minimizes CPU overhead and
speeds up data transfer.

6. Error Handling: Robust error handling mechanisms are essential to manage issues
that may arise during I/O operations, such as device failures or data corruption.

7. Plug and Play: Modern operating systems often support plug-and-play functionality,
allowing users to connect new devices to the system without manual configuration.
The OS automatically detects and configures compatible devices.

8. File Systems: Managing I/O devices also involves interaction with file systems. The
OS must handle reading and writing data to storage devices, maintaining file integrity,
and managing file permissions.

Effective management of I/O devices contributes to overall system efficiency,


responsiveness, and reliability. Operating systems aim to balance the needs of multiple
processes and users while efficiently utilizing available hardware resources.
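Relating to the I/O scheduling point above, the following is a minimal sketch of how an
SSTF-style disk scheduler might pick the next request: among the pending cylinder numbers it
chooses the one closest to the current head position. The request queue and head position are
assumed values.

    #include <stdio.h>
    #include <stdlib.h>

    /* Return the index of the pending request closest to the head (SSTF). */
    int sstf_next(const int requests[], int n, int head)
    {
        int best = 0;
        for (int i = 1; i < n; i++)
            if (abs(requests[i] - head) < abs(requests[best] - head))
                best = i;
        return best;
    }

    int main(void)
    {
        int queue[] = {98, 183, 37, 122, 14, 124, 65, 67};  /* pending cylinders */
        int head = 53;                                      /* current position  */
        int next = sstf_next(queue, 8, head);
        printf("service cylinder %d next\n", queue[next]);  /* prints 65 */
        return 0;
    }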

Accessing files

Accessing files in a computer system typically involves the following steps:

1. File Path: Identify the location of the file through its path. The path includes the
directory (folder) structure leading to the file. Paths can be absolute (full path from the
root directory) or relative (path relative to the current directory).

2. File System Interaction: The operating system interacts with the file system to locate
and manage the file. Common file systems include FAT32, NTFS (on Windows), ext4
(on Linux), and HFS+ (on macOS).

3. File Permissions: Check file permissions to ensure that the user has the necessary
rights to access the file. Permissions include read, write, and execute privileges for the
owner, group, and others.

4. File Opening: If permissions allow, the operating system opens the file. This
involves allocating resources, such as file handles or descriptors, to facilitate
subsequent operations.

5. File Reading/Writing: Perform read or write operations as needed. Reading retrieves


data from the file, while writing adds or modifies content. The specific system calls or
API functions used depend on the programming environment and language.

6. File Closing: After completing operations, close the file to release associated
resources. This step is crucial for efficient resource management.

File access can be achieved through various programming interfaces, such as the POSIX API
on Unix-like systems or the Windows API on Windows. High-level programming languages
often provide abstractions, like functions or methods, to simplify file operations.

Remember to handle errors gracefully and consider factors like concurrent access by multiple
processes to avoid conflicts. Security practices, such as validating user input and protecting
against unauthorized access, are also essential when working with file systems.
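A minimal sketch of the path-open-read-close sequence described above, using the POSIX API
on a Unix-like system; the path /etc/hostname is only an example and error handling is kept
deliberately short.

    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        char buf[128];

        /* Steps 1-4: resolve the path, check permissions, open the file. */
        int fd = open("/etc/hostname", O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* Step 5: read some data through the file descriptor. */
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("read %zd bytes: %s", n, buf);
        }

        /* Step 6: close the file and release the descriptor. */
        close(fd);
        return 0;
    }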

Architecture of IOS Operating System

 The structure of the iOS operating system is layer based. Communication between the
application layer and the hardware does not occur directly; the layers in between handle it.
The lower levels give basic services on which all applications rely, and the higher-level
layers provide graphics and interface-related services. Most of the system interfaces come
in a special package called a framework.
 A framework is a directory that holds dynamic shared libraries like .a files, header files,
images, and helper apps that support the library. Each layer has a set of frameworks that
are helpful for developers.

Architecture of IOS
CORE OS Layer:
All the iOS technologies are built on top of the lowest-level layer, i.e. the Core OS layer. These
technologies include:
1. Core Bluetooth Framework
2. External Accessories Framework
3. Accelerate Framework
4. Security Services Framework
5. Local Authorization Framework etc.
It supports 64 bit which enables the application to run faster.
CORE SERVICES Layer:
Some important frameworks are present in the CORE SERVICES Layer which helps the iOS
operating system to cure itself and provide better functionality. It is the 2nd lowest layer in the
Architecture as shown above. Below are some important frameworks present in this layer:
1. Address Book Framework-
The Address Book Framework provides access to the contact details of the user.
2. Cloud Kit Framework-
This framework provides a medium for moving data between your app and iCloud.
3. Core Data Framework-
This is the technology that is used for managing the data model of a Model View
Controller app.
4. Core Foundation Framework-
This framework provides data management and service features for iOS
applications.
5. Core Location Framework-
This framework helps to provide the location and heading information to the
application.
6. Core Motion Framework-
All the motion-based data on the device is accessed with the help of the Core
Motion Framework.
7. Foundation Framework-
An Objective-C layer covering many of the features found in the Core Foundation
framework.
8. HealthKit Framework-
This framework handles the health-related information of the user.
9. HomeKit Framework-
This framework is used for talking with and controlling connected devices with the
user’s home.
10. Social Framework-
It is simply an interface that will access users’ social media accounts.

11. StoreKit Framework-
This framework supports for buying of contents and services from inside iOS apps.

MEDIA Layer:
With the help of the media layer, we will enable all graphics video, and audio technology of
the system. This is the second layer in the architecture. The different frameworks of MEDIA
layers are:
1. UIKit Graphics-
This framework provides support for designing images and animating the view
content.
2. Core Graphics Framework-
This framework support 2D vector and image-based rendering and it is a native
drawing engine for iOS.
3. Core Animation-
This framework helps in optimizing the animation experience of the apps in iOS.
4. Media Player Framework-
This framework provides support for playing the playlist and enables the user to
use their iTunes library.
5. AV Kit-
This framework provides various easy-to-use interfaces for video presentation,
recording, and playback of audio and video.
6. Open AL-
This framework is an Industry Standard Technology for providing Audio.
7. Core Image-
This framework provides advanced support for processing still images.
8. GL Kit-
This framework manages advanced 2D and 3D rendering by hardware-accelerated
interfaces.
COCOA TOUCH:
COCOA Touch is also known as the application layer which acts as an interface for the user to
work with the iOS Operating system. It supports touch and motion events and many more
features. The COCOA TOUCH layer provides the following frameworks :
1. EventKit Framework-
This framework shows a standard system interface using view controllers for
viewing and changing events.
2. GameKit Framework-
This framework provides support for users to share their game-related data online
using a Game Center.
3. MapKit Framework-
This framework gives a scrollable map that one can include in your user interface
of the app.
4. PushKit Framework-
This framework provides registration support for receiving push notifications.
Features of iOS operating System:
Let us discuss some features of the iOS operating system-
1. Highly secure compared with other operating systems.
2. iOS provides multitasking features like while working in one application we can
switch to another application easily.
3. iOS’s user interface includes multiple gestures like swipe, tap, pinch, Reverse
pinch.
4. iBooks, iStore, iTunes, Game Center, and Email are user-friendly.
5. It provides Safari as a default Web Browser.

6. It has a powerful API and a Camera.


7. It has deep hardware and software integration
Applications of IOS Operating System:
Here are some applications of the iOS operating system-
1. iOS Operating System is the Commercial Operating system of Apple Inc. and is
popular for its security.
2. iOS operating system comes with pre-installed apps which were developed by
Apple like Mail, Map, TV, Music, Wallet, Health, and Many More.
3. Swift Programming language is used for Developing Apps that would run on IOS
Operating System.
4. In iOS Operating System we can perform Multitask like Chatting along with
Surfing on the Internet.
Advantages of IOS Operating System:
The iOS operating system has some advantages over other operating systems available in the
market especially the Android operating system. Here are some of them-
1. More secure than other operating systems.
2. Excellent UI and fluid responsive
3. Suits best for Business and Professionals
4. Generate Less Heat as compared to Android.
Disadvantages of IOS Operating System:
Let us have a look at some disadvantages of the iOS operating system-
1. More Costly.
2. Less User Friendly as Compared to Android Operating System.
3. Not Flexible as it supports only IOS devices.
4. Battery Performance is poor.
Media Layer
In an operating system, the media layer, often referred to as the multimedia subsystem, plays a
crucial role in managing and processing multimedia elements. Here are key components
typically found in the media layer:

1. Audio Subsystem:
 Responsible for audio playback and recording.
 Manages audio devices, drivers, and codecs.
 Provides APIs for applications to interact with audio hardware.

2. Video Subsystem:
 Handles video playback and rendering.
 Manages video drivers, codecs, and graphics processing units (GPUs).
 Ensures synchronization and smooth playback.

3. Graphics Subsystem:
 Deals with graphical elements, including rendering images and graphical user
interfaces (GUIs).
 Manages graphics hardware, drivers, and acceleration.

4. Image Processing:
 Involves decoding and processing various image formats.
 Supports image manipulation and transformation operations.

5. Input/Output (I/O) Handling:


 Manages input devices such as cameras and microphones.
 Facilitates the interaction between multimedia peripherals and the OS.

6. Codec Support:
 Includes support for various multimedia codecs for compression and
decompression.
 Ensures compatibility with different file formats.

7. Streaming and Network Support:


 Facilitates streaming of multimedia content over networks.
 Manages network protocols and ensures a smooth multimedia streaming
experience.

8. Media Control APIs:


 Provides application programming interfaces (APIs) for software developers to
integrate and control multimedia elements.

The media layer in an OS aims to abstract the complexity of interacting with


multimedia hardware, offering a standardized interface for applications to utilize
multimedia capabilities seamlessly.

Specific implementations may vary across different operating systems, and advancements in
technology continually influence the features and capabilities of the media layer.

Service Layer

The service layer in a software architecture serves as a crucial component that


encapsulates the application's business logic. Here are some key points regarding the
service layer:

1. Business Logic Encapsulation: The service layer encapsulates the business logic of
the application, ensuring that it remains separate from the user interface and data
access layers. This separation enhances maintainability and reusability.

2. Abstraction of Operations: It provides a set of well-defined services or operations


that the rest of the application can utilize. These services abstract complex business
processes, making them accessible to other layers.

3. Interaction with Data Layer: The service layer interacts with the data layer (database
or external APIs) to retrieve or persist data. This ensures that data-related operations
are centralized and handled consistently.

4. Transaction Management: In many cases, the service layer manages transactions to


ensure data consistency. This is particularly important when multiple operations need
to be executed atomically.

5. Validation and Business Rules: Services often include validation logic to ensure that
incoming data meets specific criteria. They also enforce business rules, ensuring that
the application adheres to the defined workflows and regulations.

6. Security: The service layer is a common location to implement security measures


such as authentication and authorization. It ensures that only authorized users can
access certain functionalities.

7. Application Flow Control: It governs the flow of operations within the application.
By orchestrating various components, the service layer ensures that business processes
are executed in the correct sequence.

8. Independence from Presentation Layer: Services are designed to be independent of


the user interface. This allows for flexibility, as changes to the presentation layer won't
necessarily affect the underlying business logic.

9. Scalability: A well-designed service layer contributes to the scalability of the


application. Services can be distributed or scaled independently based on the specific
needs of the system.

10. Testing and Debugging: The service layer facilitates testing by allowing for the
isolation of business logic. Unit testing and debugging are more straightforward when
the core application logic is concentrated in the service layer.

In summary, the service layer plays a pivotal role in structuring a software application,
promoting maintainability, scalability, and flexibility by encapsulating and providing a
well-defined interface for the application's business logic.

File Systems in Operating System

A computer file is defined as a medium used for saving and managing data in the computer system.
The data stored in the computer system is completely in digital format, although there can be various
types of files that help us to store the data.

What is a File System?


A file system is a method an operating system uses to store, organize, and manage files and
directories on a storage device. Some common types of file systems include:
1. FAT (File Allocation Table): An older file system used by older versions of Windows
and other operating systems.
2. NTFS (New Technology File System): A modern file system used by Windows. It
supports features such as file and folder permissions, compression, and encryption.
3. ext (Extended File System): A file system commonly used on Linux and Unix-based
operating systems.
4. HFS (Hierarchical File System): A file system used by macOS.
5. APFS (Apple File System): A new file system introduced by Apple for their Macs and
iOS devices.
A file is a collection of related information that is recorded on secondary storage. Or
file is a collection of logically related entities. From the user’s perspective, a file is the
smallest allotment of logical secondary storage.
The name of the file is divided into two parts as shown below:
 name
 extension, separated by a period.
Issues Handled By File System
We have seen a variety of data structures in which a file could be kept. The file system's
job is to keep the files organized in the best way possible.
Free space is created on the hard drive whenever a file is deleted from it. To reallocate it
to other files, this space may need to be recovered.
Choosing where to store a file on the hard disk is the main issue: a single block may not
be enough to store a file, and the file may be kept in non-contiguous blocks on the disk.
We must therefore keep track of all the blocks in which a file is partially located.

File Attributes and Their Operations

Attributes       Types    Operations
Name             doc      Create
Type             exe      Open
Size             jpg      Read
Creation Date    xls      Write
Author           c        Append
Last Modified    java     Truncate
Protection       class    Delete
                          Close

File type        Usual extension        Function
Executable       exe, com, bin          Ready-to-run machine language program
Object           obj, o                 Compiled, machine language, not linked
Source Code      c, java, pas, asm, a   Source code in various languages
Batch            bat, sh                Commands to the command interpreter
Text             txt, doc               Textual data, documents
Word Processor   wp, tex, rrf, doc      Various word processor formats
Archive          arc, zip, tar          Related files grouped into one compressed file
Multimedia       mpeg, mov, rm          Containing audio/video information
Markup           xml, html, tex         Textual data and documents
Library          lib, a, so, dll        Libraries of routines for programmers
Print or View    gif, pdf, jpg          Format for printing or viewing an ASCII or binary file

File Directories
The collection of files is a file directory. The directory contains information about the files,
including attributes, location, and ownership. Much of this information, especially that is
concerned with storage, is managed by the operating system. The directory is itself a file,
accessible by various file management routines.
Below are information contained in a device directory.
 Name
 Type
 Address

 Current length
 Maximum length
 Date last accessed
 Date last updated
 Owner id
 Protection information
The operation performed on the directory are:
 Search for a file
 Create a file
 Delete a file
 List a directory
 Rename a file
 Traverse the file system
Advantages of Maintaining Directories
 Efficiency: A file can be located more quickly.
 Naming: It becomes convenient for users as two users can have same name for
different files or may have different name for same file.
 Grouping: Logical grouping of files can be done by properties e.g. all java
programs, all games etc.
Single-Level Directory
In this, a single directory is maintained for all the users.
 Naming problem: Users cannot have the same name for two files.
 Grouping problem: Users cannot group files according to their needs.

Two-Level Directory
In this separate directories for each user is maintained.
 Path name: Due to two levels there is a path name for every file to locate that file.
 Now, we can have the same file name for different users.
 Searching is efficient in this method.

Tree-Structured Directory
The directory is maintained in the form of a tree. Searching is efficient and also there is
grouping capability. We have absolute or relative path name for a file.

File Allocation Methods


There are several types of file allocation methods. These are mentioned below.
 Continuous Allocation
 Linked Allocation(Non-contiguous allocation)
 Indexed Allocation
Continuous Allocation
A single continuous set of blocks is allocated to a file at the time of file creation. Thus, this
is a pre-allocation strategy, using variable size portions. The file allocation table needs just
a single entry for each file, showing the starting block and the length of the file. This
method is best from the point of view of the individual sequential file. Multiple blocks can
be read in at a time to improve I/O performance for sequential processing. It is also easy to
retrieve a single block. For example, if a file starts at block b, and the ith block of the file
is wanted, its location on secondary storage is simply b+i-1.

Disadvantages of Continuous Allocation
 External fragmentation will occur, making it difficult to find contiguous blocks of
space of sufficient length. A compaction algorithm will be necessary to free up
additional space on the disk.
 Also, with pre-allocation, it is necessary to declare the size of the file at the time of
creation.
Linked Allocation(Non-Contiguous Allocation)
Allocation is on an individual block basis. Each block contains a pointer to the next block
in the chain. Again the file table needs just a single entry for each file, showing the starting
block and the length of the file. Although pre-allocation is possible, it is more common
simply to allocate blocks as needed. Any free block can be added to the chain. The blocks
need not be continuous. An increase in file size is always possible if a free disk block is
available. There is no external fragmentation because only one block at a time is needed
but there can be internal fragmentation but it exists only in the last disk block of the file.
Disadvantage Linked Allocation(Non-contiguous allocation)
 Internal fragmentation exists in the last disk block of the file.
 There is an overhead of maintaining the pointer in every disk block.
 If the pointer of any disk block is lost, the file will be truncated.

 It supports only the sequential access of files.
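To make the chained (linked) allocation idea concrete, the sketch below walks a FAT-like
table in which each entry holds the number of the next block of the file and -1 marks the end
of the chain. The table contents are invented purely for illustration.

    #include <stdio.h>

    #define END_OF_FILE (-1)

    int main(void)
    {
        /* next_block[b] gives the block that follows block b in its file. */
        int next_block[16];
        for (int i = 0; i < 16; i++)
            next_block[i] = END_OF_FILE;

        next_block[2]  = 7;              /* file A occupies blocks 2 -> 7 -> 11 */
        next_block[7]  = 11;
        next_block[11] = END_OF_FILE;

        printf("file A blocks:");
        for (int b = 2; b != END_OF_FILE; b = next_block[b])
            printf(" %d", b);
        printf("\n");
        return 0;
    }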


Indexed Allocation
It addresses many of the problems of contiguous and chained allocation. In this case, the
file allocation table contains a separate one-level index for each file: The index has one
entry for each block allocated to the file. The allocation may be on the basis of fixed-size
blocks or variable-sized blocks. Allocation by blocks eliminates external fragmentation,
whereas allocation by variable-size blocks improves locality. This allocation technique
supports both sequential and direct access to the file and thus is the most popular form of
file allocation.

Disk Free Space Management


Just as the space that is allocated to files must be managed, so the space that is not
currently allocated to any file must be managed. To perform any of the file allocation
techniques, it is necessary to know what blocks on the disk are available. Thus we need a
disk allocation table in addition to a file allocation table. The following are the approaches
used for free space management.
1. Bit Tables: This method uses a vector containing one bit for each block on the disk. Each
entry for a 0 corresponds to a free block and each 1 corresponds to a block in use.
For example 00011010111100110001

In this vector every bit corresponds to a particular block: 0 implies that the block is free and
1 implies that the block is already occupied. A bit table has the advantage that it is relatively
easy to find one free block or a contiguous group of free blocks, so it works well with any of
the file allocation methods. Another advantage is that it is as small as possible. A small
scanning sketch is given after this list.
2. Free Block List: In this method, each block is assigned a number sequentially and the list
of the numbers of all free blocks is maintained in a reserved block of the disk.
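A small sketch of the bit-table idea above: scanning a bit vector (stored here as one char per
block for simplicity, 0 = free, 1 = in use) to find the first free block. The vector is the same
example string shown earlier; a real implementation would pack the bits into words, but the
logic is identical.

    #include <stdio.h>

    /* Return the number of the first free block, or -1 if the disk is full. */
    int first_free(const char bitmap[], int nblocks)
    {
        for (int b = 0; b < nblocks; b++)
            if (bitmap[b] == 0)
                return b;
        return -1;
    }

    int main(void)
    {
        /* 0 = free, 1 = allocated; taken from the example vector above. */
        char bitmap[] = {0,0,0,1,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,1};
        printf("first free block: %d\n", first_free(bitmap, 20));   /* block 0 */
        return 0;
    }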

Advantages of File System


 Organization: A file system allows files to be organized into directories and
subdirectories, making it easier to manage and locate files.
 Data protection: File systems often include features such as file and folder permissions,
backup and restore, and error detection and correction, to protect data from loss or
corruption.
 Improved performance: A well-designed file system can improve the performance of
reading and writing data by organizing it efficiently on disk.
Disadvantages of File System
 Compatibility issues: Different file systems may not be compatible with each other,
making it difficult to transfer data between different operating systems.
 Disk space overhead: File systems may use some disk space to store metadata and other
overhead information, reducing the amount of space available for user data.
 Vulnerability: File systems can be vulnerable to data corruption, malware, and other
security threats, which can compromise the stability and security of the system.

One Marks

1. What is the main function of the command interpreter?

a) to provide the interface between the API and application program


b) to handle the files in the operating system
c) to get and execute the next user-specified command
d) none of the mentioned

2. In Operating Systems, which of the following is/are CPU scheduling algorithms?

a) Priority c) Shortest Job First


b) Round Robin d) All of the mentioned

3. To access the services of the operating system, the interface is provided by the
___________

a) Library c) Assembly instructions


b) System calls d) API

4. Which one of the following is not a real time operating system?

a) RTLinux c) QNX
b) Palm OS d) VxWorks

5. The FCFS algorithm is particularly troublesome for ____________

a) operating systems c) time sharing systems


b) multiprocessor systems d) multiprogramming systems

Questions (5 mark and 10 mark)

1. What is CPU scheduling in OS?


2. What is Inter-Process Communication (IPC)?
3. How does a file system organize data?
4. What is a file allocation table (FAT)?
5. What is NTFS (New Technology File System)?
6. What are the major differences between Linux and Windows?
7. Define the basic components of Linux.
