The document outlines the course structure for 'Advanced Operating Systems' in the I MSC CS program, detailing its objectives, expected outcomes, and various units covering topics like types of operating systems, distributed systems, real-time systems, and case studies on Linux and iOS. It emphasizes the importance of understanding operating system design, process management, and real-time task scheduling. Additionally, it provides a list of textbooks, reference materials, and online resources for further learning.


DEPARTMENT OF COMPUTER SCIENCE

COURSE: I MSC CS

SUBJECT CODE: 23PCS05

SUBJECT NAME: ADVANCED OPERATING SYSTEM

SEMESTER: I

II – SEMESTER
Course code: 23PCSC05    ADVANCED OPERATING SYSTEMS    L: 5, C: 5
Core/Elective/Supportive: Core

Pre-requisite Basics of OS & its functioning


Course Objectives:
The main objectives of this course are to:
1. Enable the students to learn the different types of operating systems and their functioning.
2. Gain knowledge on Distributed Operating Systems.
3. Gain insight into the components and management aspects of real time and mobile operating systems.
4. Learn case studies in Linux Operating Systems.

Expected Course Outcomes:
On the successful completion of the course, student will be able to:


1. Understand the design issues associated with operating systems (K1, K2)
2. Master various process management concepts including scheduling, deadlocks and distributed file systems (K3, K4)
3. Prepare Real Time Task Scheduling (K4, K5)
4. Analyze Operating Systems for Handheld Systems (K5)
5. Analyze Operating Systems like LINUX and iOS (K5, K6)
K1 - Remember; K2 - Understand; K3 - Apply; K4 - Analyze; K5 - Evaluate; K6 - Create

Unit:1 BASICS OF OPERATING SYSTEMS 12 hours

Basics of Operating Systems: What is an Operating System? – Mainframe Systems – Desktop Systems – Multiprocessor Systems – Distributed Systems – Clustered Systems – Real-Time Systems – Handheld Systems – Feature Migration – Computing Environments – Process Scheduling – Cooperating Processes – Inter Process Communication – Deadlocks – Prevention – Avoidance – Detection – Recovery.

Unit:2 DISTRIBUTED OPERATING SYSTEMS 12 hours


Distributed Operating Systems: Issues – Communication Primitives – Lamport's Logical Clocks – Deadlock handling strategies – Issues in deadlock detection and resolution – Distributed file systems – Design issues – Case studies – The Sun Network File System – Coda.

Unit:3 REAL TIME OPERATING SYSTEM 10 hours


Real-Time Operating Systems: Introduction – Applications of Real Time Systems – Basic Model of Real Time System – Characteristics – Safety and Reliability – Real Time Task Scheduling.

Unit:4 HANDHELD SYSTEM 12 hours

Operating Systems for Handheld Systems: Requirements – Technology Overview – Handheld Operating Systems – PalmOS – Symbian Operating System – Android – Architecture of Android – Securing Handheld Systems.

Unit:5 CASE STUDIES 12 hours


Case Studies: Linux System: Introduction – Memory Management – Process Scheduling – Scheduling Policy – Managing I/O Devices – Accessing Files – iOS: Architecture and SDK Framework – Media Layer – Services Layer – Core OS Layer – File System.

Text Books
1. Abraham Silberschatz, Peter Baer Galvin, Greg Gagne, "Operating System Concepts", Seventh Edition, John Wiley & Sons, 2004.
2. Mukesh Singhal and Niranjan G. Shivaratri, "Advanced Concepts in Operating Systems – Distributed, Database, and Multiprocessor Operating Systems", Tata McGraw-Hill, 2001.

Reference Books
1. Rajib Mall, "Real-Time Systems: Theory and Practice", Pearson Education India, 2006.
2. Pramod Chandra P. Bhatt, "An Introduction to Operating Systems: Concepts and Practice", PHI, Third Edition, 2010.
3. Daniel P. Bovet & Marco Cesati, "Understanding the Linux Kernel", 3rd Edition, O'Reilly, 2005.
4. Neil Smyth, "iPhone iOS 4 Development Essentials – Xcode", Fourth Edition, Payload Media, 2011.

Related Online Contents [MOOC, SWAYAM, NPTEL, Websites etc.]


1 https://onlinecourses.nptel.ac.in/noc20_cs04/preview
2 https://www.udacity.com/course/advanced-operating-systems--ud189
3 https://minnie.tuhs.org/CompArch/Resources/os-notes.pdf

Mapping with Programming Outcomes

COs PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10
CO1 S M S S S S M M M M
CO2 S M S S S S S M S M
CO3 S M S S S S S M S M
CO4 S M S S S S S M S M
CO5 S M S S S S S M S M
*S-Strong; M-Medium; L-Low

UNIT 1
What is an operating system?
An operating system (OS) is the program that, after being initially loaded into the
computer by a boot program, manages all of the other application programs in a computer. The
application programs make use of the operating system by making requests for services through
a defined application program interface (API). In addition, users can interact directly with the
operating system through a user interface, such as a command-line interface (CLI) or a
graphical user interface (GUI).
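The request path described here can be sketched in code. The sketch below uses Python purely for illustration (the notes themselves prescribe no language): the application asks the operating system for file services through a defined API (`os.open`, `os.write`, `os.read`) and never touches the disk hardware directly.

```python
import os
import tempfile

# An application never talks to the disk controller itself; it requests
# services from the OS through a defined API. Python's os module wraps
# those system calls (open/write/read/close on POSIX-like systems).
path = os.path.join(tempfile.gettempdir(), "os_api_demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)  # ask the OS to open
os.write(fd, b"hello, kernel")                             # ask the OS to write
os.close(fd)

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)                                    # ask the OS to read
os.close(fd)
os.remove(path)                                            # ask the OS to delete

print(data)
```

The same program runs unchanged on any disk or filesystem, because the OS hides those details behind the API.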

Why use an operating system?

An operating system brings powerful benefits to computer software and software development. Without an operating system, every application would need to include its own UI, as well as the comprehensive code needed to handle all low-level functionality of the underlying computer, such as disk storage, network interfaces and so on. Considering the vast array of underlying hardware available, this would vastly bloat the size of every application and make software development impractical.

Instead, many common tasks, such as sending a network packet or displaying text on a
standard output device, such as a display, can be offloaded to system software that serves as an
intermediary between the applications and the hardware. The system software provides a
consistent and repeatable way for applications to interact with the hardware without the
applications needing to know any details about the hardware.
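The network-packet example above can be made concrete. In this Python sketch (illustrative only), the application names nothing more than an address and a payload; the operating system's network stack handles the network interface, framing, and delivery on its behalf.

```python
import socket

# The application supplies only an address and a payload; the OS network
# stack deals with the NIC and protocol details.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))            # let the OS pick a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"ping", ("127.0.0.1", port))  # one call sends a packet

payload, addr = receiver.recvfrom(1024)      # one call receives it
sender.close()
receiver.close()
print(payload)
```

Whether the packet crosses a loopback device, Ethernet, or Wi-Fi, the application code is identical: the OS is the intermediary.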

As long as each application accesses the same resources and services in the same way,
that system software -- the operating system -- can service almost any number of applications.
This vastly reduces the amount of time and coding required to develop and debug an
application, while ensuring that users can control, configure and manage the system hardware
through a common and well-understood interface.

What are the functions of an operating system?
An operating system provides three essential capabilities: It offers a UI through
a CLI or GUI; it launches and manages the application execution; and it identifies and exposes
system hardware resources to those applications -- typically, through a standardized API.

UI. Every operating system requires a UI, enabling users and administrators to
interact with the OS in order to set up, configure and even troubleshoot the operating system
and its underlying hardware. There are two primary types of UI available: CLI and GUI.

The different types of Operating System are as follows −

Mainframe Operating Systems

These types of operating systems are mainly used in e-commerce websites or servers
dedicated to business-to-business transactions. The operating system in mainframe
systems is oriented in a way that it can handle many jobs simultaneously. Mainframe operating
systems can operate with a large volume of input-output transactions.

The services of mainframe operating systems are as follows −

 To handle the batch processing of jobs.


 To handle the transaction processing of multiple requests.
 Timesharing of servers that allows multiple remote users to access the server.

Server Operating Systems

Server operating systems run on machines that act as dedicated servers. Examples
of server operating systems are Solaris, Linux and Windows.

Server operating systems allow sharing of multiple resources such as hardware, files
or print services. Web pages are stored on a server, which handles requests and responses.

Multiprocessor Operating System

 These types of operating systems run on parallel computers or multicomputers,
and their design depends upon how the multiple processors are connected and
shared.
 These computers have strong connectivity and high-speed communication
mechanisms. Modern personal computers are also built with multiprocessor
technology.
 Multiprocessor operating systems achieve high processing speed by combining
multiple processors into a single system.

Personal Operating Systems

These types of operating systems are installed on machines used by ordinary users in large
numbers. They support multiprogramming, running multiple programs like Word,
Excel, games, and Internet access simultaneously on a single machine.

For example − Linux, Windows, Mac

Handheld Operating System

Handheld operating systems are present in all handheld devices like smartphones and
tablets (such devices were once also called personal digital assistants). The popular handheld
operating systems in today's market are Android and iOS. These operating systems need a
capable processor and come embedded with different types of sensors.

Embedded Operating System

Embedded operating systems are designed for the systems that are not considered as
computers. These operating systems are preinstalled on the devices by the device manufacturer.

All pre-installed software is stored in ROM, and users cannot make changes to it.
Examples of devices with embedded operating systems are washing machines, ovens, etc.

Real-Time Operating Systems

Real-time operating systems concentrate on time constraints because they are used in
applications that are safety-critical. These systems are divided into hard real-time and soft
real-time. Hard real-time systems have stringent time constraints: certain actions must occur
at exactly the specified time, and components are tightly coupled.

Soft real-time operating systems may occasionally miss deadlines without causing any
damage.
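A minimal way to picture the difference: a task either finishes within its deadline or it does not. The Python sketch below (illustrative only; a desktop OS gives no real timing guarantees) runs a task and reports whether its deadline was met. A hard real-time system treats a miss as a failure, while a soft real-time system tolerates an occasional one.

```python
import time

def run_with_deadline(task, deadline_s):
    """Run task and report (result, deadline_met)."""
    start = time.monotonic()
    result = task()
    elapsed = time.monotonic() - start
    return result, elapsed <= deadline_s

fast = lambda: sum(range(1000))   # completes well inside its deadline
slow = lambda: time.sleep(0.05)   # deliberately overruns its deadline

_, met_fast = run_with_deadline(fast, deadline_s=1.0)
_, met_slow = run_with_deadline(slow, deadline_s=0.01)
print(met_fast, met_slow)
```

In a real RTOS the scheduler enforces such deadlines with priority-driven preemption rather than checking them after the fact.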

Smart Card Operating System

Smart card operating systems run on smart cards, which contain a processor chip
embedded inside the card. These systems operate under tight processing-power and memory
constraints.

These operating systems can handle single functions, like making electronic payments,
and are licensed software.

Desktop Systems:
Operating Systems for Desktop Systems

An operating system (OS) acts as an interface between the hardware and software of a
desktop system. It manages system resources, facilitates software execution, and provides a
user-friendly environment. Different operating systems offer distinct features, compatibility,
and performance, catering to the diverse needs and preferences of users.

Components of a Desktop System

 Central Processing Unit (CPU): The CPU is the brain of a desktop system,
responsible for executing instructions and performing calculations. It processes data
and carries out tasks based on the instructions provided by software programs. The
CPU’s performance is measured by its clock speed, number of cores, and cache size.
 Random Access Memory (RAM): RAM is a type of volatile memory that temporarily
stores data and instructions for the CPU to access quickly. It allows for efficient
multitasking and faster data retrieval, significantly impacting the overall performance
of the system. The amount of RAM in a desktop system determines its capability to
handle multiple programs simultaneously.
 Storage Devices: Desktop systems utilize various storage devices to store and retrieve
data. Hard Disk Drives (HDDs) are the traditional storage medium, offering large
capacities but slower read/write speeds. Solid-State Drives (SSDs) are a newer
technology that provides faster data access, enhancing the system’s responsiveness and
reducing loading times.
 Graphics Processing Unit (GPU): The GPU is responsible for rendering images,
videos, and animations on the computer screen. It offloads the graphical processing
tasks from the CPU, ensuring smooth visuals and enabling resource-intensive
applications such as gaming, video editing, and 3D modeling. High-performance GPUs
are essential for users who require demanding graphical capabilities.
 Input and Output Devices: Desktop systems are equipped with various input and
output devices. Keyboards and mice are the primary input devices, allowing users to
interact with the system and input commands. Monitors, printers, speakers, and
headphones serve as output devices, providing visual or auditory feedback based on the
system’s output.

Evolution of Desktop Systems

Desktop systems have evolved significantly over the years. From the bulky and limited-
capability systems of the past to the sleek and powerful computers of today, technological
advancements have revolutionized the desktop computing experience.

Smaller form factors, increased processing power, improved storage technologies, and
enhanced user interfaces are some of the notable advancements that have shaped the evolution
of desktop systems.

Popular Desktop Operating Systems

 Windows: Windows, developed by Microsoft, is one of the most widely used desktop
operating systems globally.
 macOS: macOS is the operating system designed specifically for Apple’s Mac
computers. Known for its sleek and intuitive interface, macOS offers seamless
integration with other Apple devices and services.
 Linux: Linux is an open-source operating system that provides a high degree of
customization and flexibility. It is favored by developers, system administrators, and
tech enthusiasts due to its stability, security, and vast array of software options.

Future Trends in Desktop Systems

The future of desktop systems holds exciting possibilities. As technology continues to


advance, we can expect further improvements in processing power, storage capacities, and
energy efficiency. Virtual reality (VR) and augmented reality (AR) integration, cloud-based
computing, artificial intelligence (AI) integration, and seamless connectivity across devices are
some of the trends that will shape the future of desktop systems.

Conclusion

Desktop systems serve as the foundation for our digital lives, enabling us to accomplish
tasks efficiently and explore the vast realm of computing possibilities. With their powerful
hardware components, diverse operating systems, and ever-evolving capabilities, desktop
systems continue to be an indispensable part of our personal and professional endeavors.

What is the Multiprocessing Operating System?

A multiprocessing operating system coordinates multiple CPUs within a single computer
system to boost performance.

Multiple CPUs are linked together so that a job can be divided among them and executed
more quickly. When a job is completed, the results from all CPUs are compiled to produce the
final output. Jobs share main memory and may often share other system resources. Multiple
CPUs can also be used to run multiple tasks at the same time, as in UNIX.
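The divide-and-compile pattern just described can be sketched as follows. The example uses Python's thread-backed `multiprocessing.dummy.Pool` so the sketch runs anywhere; on a real multiprocessor, `multiprocessing.Pool` (same API) would place the workers on separate CPUs.

```python
from multiprocessing.dummy import Pool  # thread-backed Pool, same API as the process Pool

def chunk_sum(bounds):
    """Partial job: sum the squares in one chunk of the range."""
    lo, hi = bounds
    return sum(n * n for n in range(lo, hi))

# The job (sum of squares for 0..999) is divided into four chunks,
# the chunks are handled in parallel, and the partial results are
# compiled into the final output.
chunks = [(0, 250), (250, 500), (500, 750), (750, 1000)]
with Pool(4) as pool:
    partials = pool.map(chunk_sum, chunks)

total = sum(partials)  # compile the results from all workers
print(total)
```

The answer is identical to the sequential computation; only the division of labour changes.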

One of the most extensively used operating systems is the multiprocessing operating
system. The following diagram depicts the basic organisation of a typical multiprocessing
system.

The computer system should have the following features to efficiently use a
multiprocessing operating system:

 A motherboard that can handle multiple processors.

 Processors that can be utilised as part of a multiprocessing system.

Pros of Multiprocessing OS

Increased reliability: Processing tasks can be spread among numerous processors in the
multiprocessing system. This promotes reliability because if one processor fails, the task can
be passed on to another.

Increased throughput: More work can be done in less time as the number of processors
increases.

The economy of scale: Multiprocessor systems are less expensive than single-processor
computers because they share peripherals, additional storage devices, and power sources.

Cons of Multiprocessing OS

Multiprocessing operating systems are more complex and advanced since they manage
many CPUs at the same time.

Types of Multiprocessing OS

Symmetrical

Each processor in a symmetrical multiprocessing system runs the same copy of the OS,
makes its own decisions, and collaborates with other processes to keep the system running
smoothly. CPU scheduling policies are straightforward. Any new job that is submitted by a
user could be assigned to the least burdened processor. It also means that at any given time, all
processors are equally taxed.

Since the processors share memory along with the I/O bus or data channel, the
symmetric multiprocessing OS is sometimes known as a “shared everything” system. The
number of processors in this system is normally limited to 16.
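The "assign new jobs to the least burdened processor" policy can be sketched in a few lines; processor names and load units here are illustrative, not part of any real scheduler.

```python
# Track an estimated load per processor (illustrative names and units).
loads = {"cpu0": 0, "cpu1": 0, "cpu2": 0, "cpu3": 0}

def assign(job_cost):
    """SMP-style placement: give the job to the least-burdened processor."""
    cpu = min(loads, key=loads.get)
    loads[cpu] += job_cost
    return cpu

placements = [assign(cost) for cost in (5, 3, 4, 2, 1)]
print(placements)
print(loads)
```

After a stream of jobs, the loads stay roughly equal across processors, which is exactly the "all processors are equally taxed" property described above.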

Characteristics

 Any processor in this system can run any process or job.

 Any CPU can start an Input and Output operation in this way.

Pros

These are fault-tolerant systems. A few processors failing does not bring the whole
system to a standstill.

Cons

 It is quite difficult to rationally balance the workload among processors.

 For handling many processors, specialised synchronisation algorithms are required.

Asymmetric

The processors in an asymmetric system have a master-slave relationship. One
processor serves as the master or supervisor processor, while the rest are treated as slaves, as
illustrated below.

In the asymmetric processing system represented above, CPU n1 serves as a supervisor,
controlling the subsequent CPUs. Each processor in such a system is assigned a specific task,
and the actions of the other processors are overseen by a master processor.

We have a maths coprocessor, for example, that can handle mathematical tasks better
than the main CPU. We also have an MMX processor, which is designed to handle multimedia-
related tasks. We also have a graphics processor to handle graphics-related tasks more
efficiently than the main processor. Whenever a user submits a new job, the operating system
must choose which processor is best suited for the task, and that processor is subsequently
assigned to the newly arriving job. This processor is the system's master and controller. All
other processors either look to the master for instructions or have predetermined jobs. The
master is responsible for allocating work to the other processors.
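The master's dispatch decision described above amounts to routing each job to the specialised processor best suited for it, with the main CPU as a fallback. A hypothetical sketch (processor and job-kind names are invented for illustration):

```python
# Hypothetical routing table from job kind to specialised processor.
SPECIALISTS = {
    "math": "math_coprocessor",
    "multimedia": "mmx_processor",
    "graphics": "graphics_processor",
}

def master_dispatch(job_kind):
    """Master chooses the best-suited processor; main CPU is the fallback."""
    return SPECIALISTS.get(job_kind, "main_cpu")

print(master_dispatch("graphics"))
print(master_dispatch("compile"))
```

The asymmetry is visible in the code: the routing knowledge lives only in the master, and the other processors simply execute what they are handed.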

Pros

Because several processors are available for a single job, the execution of an I/O
operation or application software in this type of system may be faster in some instances.

Cons

The processors are burdened unequally in this form of multiprocessing operating
system. One CPU may have a large job queue while another is idle. If a process handling a
specific task fails in this system, the entire system will fail.

What is a Distributed System?


A Distributed System is a collection of autonomous computer systems that are
physically separated but connected by a centralized computer network equipped
with distributed system software. The autonomous computers communicate with one
another by sharing resources and files and performing the tasks assigned to them.

Example of Distributed System:


Any Social Media can have its Centralized Computer Network as its Headquarters
and computer systems that can be accessed by any user and using their services will be the
Autonomous Systems in the Distributed System Architecture.

 Distributed System Software: This Software enables computers to coordinate their

activities and to share the resources such as Hardware, Software, Data, etc.
 Database: It is used to store the data processed by each node/system of the
distributed system connected to the centralized network.

 Each Autonomous System runs a common application and can have its own data,
which is shared through the Centralized Database System.
 To transfer data to the Autonomous Systems, the Centralized System should
have a Middleware Service and should be connected to a network.
 Middleware Services enable services that are not present by default in the local
or centralized systems by acting as an interface between the Centralized System
and the local systems. Using components of Middleware Services, systems
communicate and manage data.
 The data transferred through the database is divided into segments or modules
and shared with the Autonomous Systems for processing.
 The data is processed, transferred back to the Centralized System through the
network, and stored in the database.
Characteristics of Distributed System:
 Resource Sharing: It is the ability to use any Hardware, Software, or Data
anywhere in the System.
 Openness: It is concerned with Extensions and improvements in the system (i.e.,
How openly the software is developed and shared with others)
 Concurrency: It is naturally present in Distributed Systems, that deal with the
same activity or functionality that can be performed by separate users who are in
remote locations. Every local system has its independent Operating Systems and
Resources.
 Scalability: The system can grow in scale, with more processors communicating
with more users while maintaining the responsiveness of the system.
 Fault tolerance: This concerns the reliability of the system; if there is a failure in
hardware or software, the system continues to operate properly without
degrading performance.
 Transparency: It hides the complexity of the Distributed System from users
and application programs, as there should be privacy in every system.
 Heterogeneity: Networks, computer hardware, operating systems, programming
languages, and developer implementations can all vary and differ among
dispersed system components.

Advantages of Distributed System:
 Applications in Distributed Systems are Inherently Distributed Applications.
 Information in Distributed Systems is shared among geographically distributed
users
 Resource Sharing (Autonomous systems can share resources from remote
locations).
 It has a better price performance ratio and flexibility.
 It has shorter response time and higher throughput.
 It has higher reliability and availability against component failure.
 It has extensibility so that systems can be extended in more remote locations and
also incremental growth.
Disadvantages of Distributed System:
 Relevant Software for Distributed systems does not exist currently.
 Security poses a problem due to easy access to data, as resources are shared
across multiple systems.
 Networking Saturation may cause a hurdle in data transfer i.e., if there is a lag in
the network then the user will face a problem accessing data.
 In comparison to a single user system, the database associated with distributed
systems is much more complex and challenging to manage.
 If every node in a distributed system tries to send data at once, the network may
become overloaded.
Applications Area of Distributed System:
 Finance and Commerce: Amazon, eBay, Online Banking, E-Commerce
websites.
 Information Society: Search Engines, Wikipedia, Social Networking, Cloud
Computing.
 Cloud Technologies: AWS, Salesforce, Microsoft Azure, SAP.
 Entertainment: Online Gaming, Music, YouTube.
 Healthcare: Online patient records, Health Informatics.
 Education: E-learning.
 Transport and logistics: GPS, Google Maps.
 Environment Management: Sensor technologies.

Challenges of Distributed Systems:

While distributed systems offer many advantages, they also present some challenges
that must be addressed. These challenges include:

 Network latency: The communication network in a distributed system can


introduce latency, which can affect the performance of the system.
 Distributed coordination: Distributed systems require coordination among the
nodes, which can be challenging due to the distributed nature of the system.
 Security: Distributed systems are more vulnerable to security threats than
centralized systems due to the distributed nature of the system.
 Data consistency: Maintaining data consistency across multiple nodes in a
distributed system can be challenging.

Clustered Operating System

Cluster systems are similar to parallel systems because both systems use multiple CPUs.
The primary difference is that clustered systems are made up of two or more independent
systems linked together. They have independent computer systems and a shared storage media,
and all systems work together to complete all tasks. All cluster nodes use two different
approaches to interact with one another, like message passing interface (MPI) and parallel
virtual machine (PVM). In this article, you will learn about the Clustered Operating system,
its types, classification, advantages, and disadvantages.
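The message-passing interaction mentioned above (the role MPI or PVM plays in a real cluster) can be modeled in miniature. In this sketch each "node" is a thread and the link between them is a queue; it is an illustrative model of send/receive coordination, not an MPI implementation.

```python
import queue
import threading

# The link between two "nodes"; in a real cluster this would be the
# fast network carrying MPI/PVM messages.
link = queue.Queue()
results = []

def worker_node():
    msg = link.get()              # receive a message over the link
    results.append(msg.upper())   # process it and record the result

node = threading.Thread(target=worker_node)
node.start()
link.put("task: render page 7")   # send a message to the worker node
node.join()
print(results)
```

The essential cluster idea survives even in this toy form: the nodes share no state directly and cooperate purely by exchanging messages.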

What is the Clustered Operating System?

Cluster operating systems are a combination of software and hardware clusters.


Hardware clusters aid in the sharing of high-performance disks among all computer systems,
while software clusters give a better environment for all systems to operate. A cluster system
consists of various nodes, each of which contains its cluster software. The cluster software is
installed on each node in the clustered system, and it monitors the cluster system and ensures
that it is operating properly. If one of the clustered system's nodes fails, the other nodes take
over its storage and resources and try to restart.

Cluster components are generally linked via fast local area networks, with each node
executing its own instance of an operating system. In most cases, all nodes share the same hardware
and operating system, while different hardware or different operating systems could be used in
other cases. The primary purpose of using a cluster system is to assist with weather forecasting,
scientific computing, and supercomputing systems.

Two kinds of clusters can be combined to make a more efficient cluster. These are as follows:

1. Software Cluster
2. Hardware Cluster

Software Cluster

 The Software Clusters allows all the systems to work together.

Hardware Cluster

 It helps to allow high-performance disk sharing among systems.

Types of Clustered Operating System

There are mainly three types of the clustered operating system:

1. Asymmetric Clustering System


2. Symmetric Clustering System
3. Parallel Cluster System

Asymmetric Clustering System

In the asymmetric cluster system, one node among all the nodes is kept in hot standby mode,
while the remaining nodes run the essential applications. The hot standby node, itself a
component of the cluster, continuously monitors all server functions and takes over if a
running node comes to a halt, making the arrangement completely fail-safe.

Symmetric Clustering System

Multiple nodes help run all applications in this system, and it monitors all nodes
simultaneously. Because it uses all hardware resources, this cluster system is more reliable than
asymmetric cluster systems.

Parallel Cluster System

A parallel cluster system enables several users to access similar data on the same shared
storage system. The system is made possible by a particular software version and other apps.

Classification of clusters

Computer clusters are managed to support various purposes, from general-purpose


business requirements like web-service support to computation-intensive scientific
calculations. There are various classifications of clusters. Some of them are as follows:

1. Fail Over Clusters

The process of moving applications and data resources from a failed system to another
system in the cluster is referred to as fail-over. Fail-over clusters are used for mission-critical
databases, application servers, mail servers, and file servers.
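Fail-over as described above amounts to moving a failed node's applications and resources to a survivor. A toy sketch (node and application names are invented for illustration):

```python
# A two-node cluster; "apps" stands in for the applications and data
# resources each node is currently serving.
cluster = {
    "node-a": {"status": "up", "apps": ["db", "mail"]},
    "node-b": {"status": "up", "apps": []},
}

def fail_over(failed, standby):
    """Move the failed node's applications and resources to the standby node."""
    cluster[failed]["status"] = "down"
    cluster[standby]["apps"].extend(cluster[failed]["apps"])
    cluster[failed]["apps"] = []

fail_over("node-a", "node-b")
print(cluster["node-b"]["apps"])
```

Real cluster software adds heartbeats to detect the failure and fencing to make sure the failed node has really stopped, but the resource handover is the core of the mechanism.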

2. Load Balancing Cluster

The cluster requires better load balancing abilities amongst all available computer
systems. All nodes in this type of cluster can share their computing workload with other nodes,
resulting in better overall performance. For example, a web-based cluster can allot various web
queries to various nodes, so it helps to improve the system speed. When it comes to grabbing
requests, only a few cluster systems use the round-robin method.
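The round-robin method mentioned above simply hands incoming queries to the nodes in fixed rotation, which `itertools.cycle` captures directly (node names are illustrative):

```python
import itertools

# Web queries are distributed to cluster nodes in fixed rotation.
nodes = ["web1", "web2", "web3"]
rotation = itertools.cycle(nodes)

assignments = [(req, next(rotation)) for req in ("q1", "q2", "q3", "q4")]
print(assignments)
```

Round-robin ignores how busy each node actually is, which is why more sophisticated load-balancing clusters weigh current load when placing requests.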

3. High Availability Clusters

These are also referred to as "HA clusters". They provide a high probability that all
resources will be available. If a failure occurs, such as a system failure or the loss of a disk
volume, the queries in the process are lost. If a lost query is retried, it will be handled by a
different cluster computer. It is widely used in news, email, FTP servers, and the web.

Advantages and Disadvantages of Cluster Operating System

Various advantages and disadvantages of the Clustered Operating System are as follows:

Advantages

Various advantages of Clustered Operating System are as follows:

1. High Availability: Although every node in a cluster is a standalone computer, the failure of
a single node doesn't mean a loss of service. A single node can be pulled down for
maintenance while the remaining nodes take over the load of that node.

2. Cost Efficiency: When compared to highly reliable and larger storage mainframe computers,
these types of cluster computing systems are thought to be more cost-effective and cheaper.
Furthermore, most of these systems outperform mainframe computer systems in terms of
performance.

3. Additional Scalability: A cluster is set up in such a way that more systems could be added
to it in minor increments. Clusters may add systems in a horizontal fashion. It means that
additional systems could be added to clusters to improve their performance, fault tolerance,
and redundancy.

4. Fault Tolerance: Clustered systems are quite fault-tolerant, and the loss of a single node
does not result in the system's failure. They might also have one or more nodes in hot standby
mode, which allows them to replace failed nodes.

5. Performance: Clusters are commonly used to improve availability and performance
over single computer systems, while usually being much more cost-effective than a
single computer system of comparable speed or availability.

6. Processing Speed: The processing speed is also similar to mainframe systems and other
types of supercomputers on the market.

Disadvantages

Various disadvantages of the Clustered Operating System are as follows:

1. Cost

One major disadvantage of this design is cost. The cost is high, and a cluster will be
more expensive than a non-clustered server management design.

Real-Time operating system

In this article, we understand the real time operating system in detail.

What do you mean by Real-Time Operating System?

A real-time operating system (RTOS) is a special-purpose operating system used in
computers that has strict time constraints for any job to be performed. It is employed mostly in
those systems in which the results of the computations are used to influence a process while it
is executing. Whenever an event external to the computer occurs, it is communicated to the
computer with the help of some sensor used to monitor the event. The sensor produces the
signal that is interpreted by the operating system as an interrupt. On receiving an interrupt, the
operating system invokes a specific process or a set of processes to serve the interrupt.

This process is completely uninterrupted unless a higher priority interrupt occurs during
its execution. Therefore, there must be a strict hierarchy of priority among the interrupts. The
interrupt with the highest priority must be allowed to initiate the process, while lower-priority
interrupts should be kept in a buffer to be handled later. Interrupt management is
important in such an operating system.

Real-time systems employ special-purpose operating systems because conventional
operating systems do not provide such performance.

The various examples of Real-time operating systems are:

o MTS
o Lynx
o QNX
o VxWorks etc.

Applications of Real-time operating system (RTOS):

RTOS is used in real-time applications that must work within specific deadlines. The
common areas of application of real-time operating systems are given below:

 Real-time operating systems are used in radar systems.

 Real-time operating systems are used in missile guidance.

 Real-time operating systems are used in online stock trading.

 Real-time operating systems are used in cell phone switching systems.

 Real-time operating systems are used in air traffic control systems.

 Real-time operating systems are used in medical imaging systems.

 Real-time operating systems are used in fuel injection systems.

 Real-time operating systems are used in traffic control systems.

 Real-time operating systems are used in autopilot travel simulators.

Types of Real-time operating system

Following are the three types of RTOS systems are:

Hard Real-Time operating system:

In Hard RTOS, all critical tasks must be completed within the specified time duration,
i.e., within the given deadline. Not meeting the deadline would result in critical failures such
as damage to equipment or even loss of human life.

For Example,

Let's take the example of airbags provided by carmakers along with a handle in the
driver's seat. When the driver applies the brakes at a particular instant, the airbags inflate and
prevent the driver's head from hitting the handle. Had there been a delay of even a few
milliseconds, it would have resulted in an accident.

Similarly, consider stock-trading software. If someone wants to sell a particular
share, the system must ensure that the command is performed within a given critical time.
Otherwise, if the market falls abruptly, it may cause a huge loss to the trader.

Soft Real-Time operating system:

A Soft RTOS tolerates a few delays from the operating system. In this kind of
RTOS, there is a deadline assigned to a particular job, but a delay for a small amount
of time is acceptable. So, deadlines are handled softly by this kind of RTOS.

For Example,

This type of system is used in Online Transaction systems and Livestock price quotation
Systems.

Firm Real-Time operating system:

A Firm RTOS also needs to follow deadlines. However, missing a deadline may not
have a big impact, but it can cause undesired effects, such as a major reduction in the
quality of a product.

For Example, this system is used in various forms of Multimedia applications.

Advantages of Real-time operating system:

The benefits of real-time operating systems are as follows:

 It is easy to design, develop and execute real-time applications under a real-time
operating system.
 Real-time operating systems are more compact, so these systems require much
less memory space.
 A real-time operating system makes maximum utilization of devices and systems.
 Focus is on running applications, with less importance given to applications in the queue.
 Since the size of programs is small, an RTOS can also be used in embedded systems like
transport and others.
 These types of systems are considered error-free.
 Memory allocation is best managed in these types of systems.

Disadvantages of Real-time operating system:

The disadvantages of real-time operating systems are as follows:

 Real-time operating systems have complicated design principles and are very costly to
develop.
 Real-time operating systems are very complex and can consume critical CPU cycles.

Handheld Operating System:


Handheld operating systems are available in all handheld devices like smartphones
and tablets. Such a device is sometimes also known as a Personal Digital Assistant. The popular
handheld operating systems in today's world are Android and iOS. These operating systems
need a high-performance processor and are also embedded with various types of sensors.

Some points related to Handheld operating systems are as follows:

1. Since the development of handheld computers in the 1990s, the demand for
software to operate and run on these devices has increased.
2. Three major competitors have emerged in the handheld PC world with three
different operating systems for these handheld PCs.
3. Out of the three companies, the first was the Palm Corporation with their PalmOS.
4. Microsoft also released what was originally called Windows CE. Microsoft’s
recently released operating system for the handheld PC comes under the name of
Pocket PC.
5. More recently, some companies producing handheld PCs have also started
offering a handheld version of the Linux operating system on their machines.
Features of Handheld Operating System:

1. Its work is to provide real-time operations.
2. There is direct usage of interrupts.
3. Input/Output device flexibility.
4. Configurability.
Types of Handheld Operating Systems:
Types of Handheld Operating Systems are as follows:

1. Palm OS
2. Symbian OS
3. Linux OS
4. Windows
5. Android

Palm OS:

 Since the Palm Pilot was introduced in 1996, the Palm OS platform has provided

various mobile devices with essential business tools, as well as the capability that they
can access the internet via a wireless connection.
 These devices have mainly concentrated on providing basic personal-information-

management applications. The latest Palm products have progressed a lot, packing in
more storage, wireless internet, etc.

Symbian OS:

 It has been the most widely-used smartphone operating system because of its ARM

architecture before it was discontinued in 2014. It was developed by Symbian Ltd.


 This operating system consists of two subsystems where the first one is the

microkernel-based operating system which has its associated libraries and the second
one is the interface of the operating system with which a user can interact.
 Since this operating system consumes very little power, it was developed for

smartphones and handheld devices.


 It has good connectivity as well as stability.

 It can run applications that are written in Python, Ruby, .NET, etc.

Linux OS:

 Linux OS is an open-source operating system project which is a cross-platform system

that was developed based on UNIX.


It was developed by Linus Torvalds. It is a system software that basically allows the
apps and users to perform some tasks on the PC.
 Linux is free and can be easily downloaded from the internet and it is considered that

it has the best community support.


 Linux is portable which means it can be installed on different types of devices like

mobile, computers, and tablets.


 It is a multi-user operating system.

 Linux interpreter program which is called BASH is used to execute commands.

 It provides user security using authentication features.

Windows OS:

 Windows is an operating system developed by Microsoft. Its interface which is

called Graphical User Interface eliminates the need to memorize commands for the
command line by using a mouse to navigate through menus, dialog boxes, and
buttons.
 It is named Windows because its programs are displayed in the form of a square. It

has been designed for both beginners as well as professionals.


 It comes preloaded with many tools which help the users to complete all types of tasks

on their computer, mobiles, etc.


 It has a large user base so there is a much larger selection of available software

programs.
 One great feature of Windows is that it is backward compatible which means that its

old programs can run on newer versions as well.

Android OS:

 It is a Google Linux-based operating system that is mainly designed for touchscreen

devices such as phones, tablets, etc.


 There are three architectures, ARM, Intel, and MIPS, which are used by the

hardware for supporting Android. These let users manipulate the devices intuitively,
with movements of the fingers that mirror common motions such as swiping,
tapping, etc.
 Android operating system can be used by anyone because it is an open-source

operating system and it is also free.


 It offers 2D and 3D graphics, GSM connectivity, etc.

 There is a huge list of applications for users since Play Store offers over one million

apps.
 Professionals who want to develop applications for the Android OS can download the

Android Development Kit. By downloading it they can easily develop apps for
android.
Advantages of Handheld Operating System:
Some advantages of a Handheld Operating System are as follows:

1. Less Cost.
2. Less weight and size.
3. Less heat generation.
4. More reliability.
Disadvantages of Handheld Operating System:
Some disadvantages of Handheld Operating Systems are as follows:

1. Less Speed.
2. Small Size.
3. Input / Output System (memory issue or less memory is available).

How Handheld operating systems are different from Desktop operating systems?
 Since the handheld operating systems are mainly designed to run on machines that

have lower speed resources as well as less memory, they were designed in a way that
they use less memory and require fewer resources.
 They are also designed to work with different types of hardware as compared to

standard desktop operating systems.

 This is because the power requirements of standard CPUs far exceed the power

available in handheld devices.

Feature Migration

Migration Of Operating-System Concepts And Features

 I/O routines supplied by the system.

 Memory Management - the system should allocate memory to multiple jobs.

 CPU Scheduling - the system should select among a number of jobs ready to run.

 Allocation of devices.


Computing Environments
Computing environments refer to the technology infrastructure and software platforms
that are used to develop, test, deploy, and run software applications. There are several types of
computing environments, including:

1. Mainframe: A large and powerful computer system used for critical applications
and large-scale data processing.
2. Client-Server: A computing environment in which client devices access resources
and services from a central server.
3. Cloud Computing: A computing environment in which resources and services are
provided over the Internet and accessed through a web browser or client software.
4. Mobile Computing: A computing environment in which users access information
and applications using handheld devices such as smartphones and tablets.

5. Grid Computing: A computing environment in which resources and services are
shared across multiple computers to perform large-scale computations.
6. Embedded Systems: A computing environment in which software is integrated into
devices and products, often with limited processing power and memory.
Each type of computing environment has its own advantages and disadvantages, and the choice
of environment depends on the specific requirements of the software application and the
resources available.

In the world of technology, where almost every task is performed with the help of
computers, computers have become a part of human life. Computing is the process of
completing a task by using computer technology, and it may involve computer hardware
and/or software. Computing uses some form of computer system to manage, process, and
communicate information. Having got some idea about computing, let us now understand
the types of computing environments. The various types of computing environments are:

1. Personal Computing Environment : In a personal computing environment there is
a stand-alone machine. The complete program resides on the computer and is
executed there. Different stand-alone machines that constitute a personal computing
environment are the laptops, mobiles, printers, computer systems, scanners, etc. that
we use at our homes and offices.
2. Time-Sharing Computing Environment : In a time-sharing computing
environment, multiple users share the system simultaneously. Different users
(different processes) are allotted different time slices, and the processor switches
rapidly among users accordingly. For example, a student listening to music while
coding something in an IDE. Windows 95 and later versions, Unix, iOS, and Linux
are examples of time-sharing computing environments.
3. Client Server Computing Environment : In a client-server computing environment,
two machines are involved, i.e., a client machine and a server machine; sometimes
the same machine serves as both client and server. In this computing environment
the client requests a resource/service and the server provides that resource/service.
A server can provide service to multiple clients at a time, and communication mainly
happens through a computer network.
4. Distributed Computing Environment : In a distributed computing environment,
multiple nodes are connected together using a network but are physically separated.
A single task is performed by different functional units on different nodes of the
distributed system. Different programs of an application run simultaneously on
different nodes, and the nodes communicate over the network to solve the task.
5. Grid Computing Environment : In a grid computing environment, multiple
computers from different locations work on a single problem. A set of computer
nodes running in a cluster jointly performs a given task by applying the resources
of multiple computers/nodes. It is a networked computing environment in which
several scattered resources provide a running environment for a single task.
6. Cloud Computing Environment : In a cloud computing environment, computer
system resources like processing and storage are available on demand. Computing
is not done on an individual computer; rather, it is done in a cloud of computers
where all required resources are provided by a cloud vendor. This environment
primarily comprises three services, i.e., software-as-a-service (SaaS),
infrastructure-as-a-service (IaaS), and platform-as-a-service (PaaS).
7. Cluster Computing Environment : In a cluster computing environment, the task is
performed by a cluster, where a cluster is a set of loosely or tightly connected
computers that work together. It is viewed as a single system and performs tasks
in parallel, which is why it is similar to a parallel computing environment.
Cluster-aware applications are especially used in cluster computing environments.

Advantages of different computing environments:

1. Mainframe: High reliability, security, and scalability, making it suitable for mission-
critical applications.
2. Client-Server: Easy to deploy, manage and maintain, and provides a centralized
point of control.
3. Cloud Computing: Cost-effective and scalable, with easy access to a wide range of
resources and services.
4. Mobile Computing: Allows users to access information and applications from
anywhere, at any time.
5. Grid Computing: Provides a way to harness the power of multiple computers for
large-scale computations.
6. Embedded Systems: Enable the integration of software into devices and products,
making them smarter and more functional.
Disadvantages of different computing environments:
1. Mainframe: High cost and complexity, with a significant learning curve for
developers.
2. Client-Server: Dependence on network connectivity, and potential security risks
from centralized data storage.
3. Cloud Computing: Dependence on network connectivity, and potential security and
privacy concerns.
4. Mobile Computing: Limited processing power and memory compared to other
computing environments, and potential security risks.
5. Grid Computing: Complexity in setting up and managing the grid infrastructure.
6. Embedded Systems: Limited processing power and memory, and the need for
specialized skills for software development.
Process Schedulers in Operating System
Process scheduling is the activity of the process manager that handles the removal of
the running process from the CPU and the selection of another process on the basis of a
particular strategy.

Process scheduling is an essential part of a Multiprogramming operating system. Such


operating systems allow more than one process to be loaded into the executable memory at a
time and the loaded process shares the CPU using time multiplexing.

Categories in Scheduling
Scheduling falls into one of two categories:

 Non-preemptive: In this case, a process's resources cannot be taken away before the
process has finished running. Resources are switched only when the running process
finishes and transitions to a waiting state.
 Preemptive: In this case, the OS assigns resources to a process for a predetermined
period of time. The process switches from the running state to the ready state or from
the waiting state to the ready state during resource allocation. This switching happens
because the CPU may give other processes priority and substitute the currently
active process with a higher-priority process.
There are three types of process schedulers.

Long Term or Job Scheduler


It brings the new process to the 'Ready State'. It controls the degree of multi-
programming, i.e., the number of processes present in a ready state at any point in time. It is
important that the long-term scheduler makes a careful selection of both I/O-bound and
CPU-bound processes. I/O-bound processes are those that spend much of their time on input
and output operations, while CPU-bound processes are those that spend their time on the CPU.
The job scheduler increases efficiency by maintaining a balance between the two. It operates
at a high level and is typically used in batch-processing systems.
Short-Term or CPU Scheduler
It is responsible for selecting one process from the ready state and scheduling it on the
running state. Note: the short-term scheduler only selects the process to schedule; it doesn't
load the process into the running state. This is where all the scheduling algorithms are used.
The CPU scheduler is responsible for ensuring no starvation due to high-burst-time processes.
The dispatcher then gives control of the CPU to the selected process, which involves:
1. Switching context
2. Switching to user mode
3. Jumping to the proper location in the newly loaded program
Medium-Term Scheduler
It is responsible for suspending and resuming the process. It mainly does swapping
(moving processes from main memory to disk and vice versa). Swapping may be necessary to
improve the process mix or because a change in memory requirements has overcommitted
available memory, requiring memory to be freed up. It is helpful in maintaining a
balance between the I/O-bound and the CPU-bound processes. It reduces the degree
of multiprogramming.
Some Other Schedulers
 I/O schedulers: I/O schedulers are in charge of managing the execution of I/O
operations such as reading and writing to discs or networks. They can use various
algorithms to determine the order in which I/O operations are executed, such as
FCFS (First-Come, First-Served) or RR (Round Robin).
 Real-time schedulers: In real-time systems, real-time schedulers ensure that
critical tasks are completed within a specified time frame. They can prioritize and
schedule tasks using various algorithms such as EDF (Earliest Deadline First) or
RM (Rate Monotonic).
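To make the preemptive time-slicing described above concrete, the Round Robin policy mentioned for I/O schedulers can be sketched as a toy simulation in Python. The process names and burst times below are illustrative, not OS code:

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate Round Robin scheduling.

    burst_times maps a process id to its remaining CPU burst;
    quantum is the fixed time slice. Returns (pid, completion_time)
    pairs in completion order.
    """
    ready = deque(burst_times)            # FIFO ready queue of pids
    remaining = dict(burst_times)
    clock, finished = 0, []
    while ready:
        pid = ready.popleft()             # dispatch the head of the queue
        run = min(quantum, remaining[pid])
        clock += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            finished.append((pid, clock)) # process completed
        else:
            ready.append(pid)             # preempted: back of the queue
    return finished

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# [('P3', 5), ('P2', 8), ('P1', 9)]
```

Note how P1, the longest job, is repeatedly preempted and sent to the back of the ready queue, which is exactly the behaviour that keeps short jobs from starving.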
Comparison among Schedulers

Long-Term Scheduler:
 It is a job scheduler.
 Its speed is generally lesser than that of the short-term scheduler.
 It controls the degree of multiprogramming.
 It is barely present or nonexistent in time-sharing systems.

Short-Term Scheduler:
 It is a CPU scheduler.
 Its speed is the fastest among all of them.
 It gives less control over how much multiprogramming is done.
 It is minimal in time-sharing systems.
 It selects those processes which are ready to execute.

Medium-Term Scheduler:
 It is a process-swapping scheduler.
 Its speed lies in between those of the short-term and long-term schedulers.
 It reduces the degree of multiprogramming.
 It is a component of time-sharing systems.
 It can re-introduce a process into memory, allowing its execution to be continued.
Two-State Process Model

The two-state process model describes processes using the "running" and "not running" states.

1. Running: A newly created process joins the system in the running state when it is
created.
2. Not running: Processes that are not currently running are kept in a queue and await
execution. Each entry in the queue is a pointer to a specific process, and the queue is
implemented using a linked list. This is where the dispatcher comes in: when a
running process is interrupted, it is moved to the back of the waiting queue; if it has
completed or been aborted, it is discarded instead. In either case, the dispatcher then
chooses a process from the queue to run.

Context Switching
In order for a process execution to be continued from the same point at a later time,
context switching is a mechanism to store and restore the state or context of a CPU in the
Process Control block. A context switcher makes it possible for multiple processes to share a
single CPU using this method. A multitasking operating system must include context switching
among its features.

The state of the currently running process is saved into the process control block when
the scheduler switches the CPU from executing one process to another. The state used to set
the PC, registers, etc. for the process that will run next is then loaded from its own PCB. After
that, the second can start processing.

The context saved in the Process Control Block typically includes:
 Program counter
 Scheduling information
 Base and limit register values
 Currently used registers
 Changed state
 I/O state information
 Accounting information
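The save/restore mechanism described above can be illustrated with a toy sketch in Python. The PCB fields and the `cpu` dictionary here are simplified, hypothetical stand-ins for real hardware state:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Toy process control block holding a few of the fields listed above."""
    pid: int
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    state: str = "ready"

def context_switch(cpu, old, new):
    # 1. save the running process's CPU context into its PCB
    old.program_counter = cpu["pc"]
    old.registers = dict(cpu["regs"])
    old.state = "ready"
    # 2. restore the next process's context from its PCB
    cpu["pc"] = new.program_counter
    cpu["regs"] = dict(new.registers)
    new.state = "running"

# a fake CPU whose state is just a program counter and registers
cpu = {"pc": 120, "regs": {"r0": 7}}
p1 = PCB(pid=1, state="running")
p2 = PCB(pid=2, program_counter=300, registers={"r0": 42})

context_switch(cpu, p1, p2)
print(cpu["pc"], p1.program_counter, p2.state)  # 300 120 running
```

After the switch, P1's progress (program counter 120) is preserved in its PCB, so its execution can later resume from exactly that point.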

Cooperating Process in Operating System


Pre-requisites: Process Synchronization
In an operating system, everything revolves around the process and how the process goes
through several different states. In this section we discuss one type of process, called a
cooperating process. In the operating system there are two types of processes:

 Independent Process: Independent processes are those whose task does not depend
on any other process.
 Cooperating Process: Cooperating processes are those that depend on other
processes. They work together to achieve a common task in an
operating system. These processes interact with each other by sharing resources
such as CPU, memory, and I/O devices to complete the task.

So now let’s discuss the concept of cooperating processes and how they are used in operating
systems.

 Inter-Process Communication (IPC): Cooperating processes interact with each
other via Inter-Process Communication (IPC). As they interact with each other
and share resources, the running tasks are synchronized and the possibility of
deadlock decreases. IPC can be implemented using options such as pipes,
message queues, semaphores, and shared memory.
 Concurrent execution: Cooperating processes execute simultaneously, managed
by the operating system scheduler, which selects a process from the ready queue
to go to the running state. Because of the concurrent execution of several
processes, the completion time decreases.
 Resource sharing: In order to do their work, cooperating processes cooperate by
sharing resources including CPU, memory, and I/O hardware. When several
processes share resources in turn, the need for synchronization increases, as does
the response time of the processes.
 Deadlocks: As cooperating processes share resources, a deadlock condition may
arise. Deadlock means, for example, that process P1 holds resource A while
waiting for B, and process P2 holds B while waiting for A. To avoid deadlocks,
operating systems typically use algorithms such as the Banker's algorithm to
manage and allocate resources to processes.
 Process scheduling: Cooperating processes run simultaneously, but after a context
switch the scheduler decides which process should execute next on the CPU. The
scheduler does this using scheduling algorithms such as Round-Robin, FCFS, SJF,
Priority, etc.

In conclusion, cooperating processes are an essential mechanism for increasing concurrent
execution, and because of them the performance of the overall system increases.

What is Inter Process Communication?

In general, Inter Process Communication is a type of mechanism usually provided by


the operating system (or OS). The main aim or goal of this mechanism is to provide
communications in between several processes. In short, inter-process communication allows
one process to let another process know that some event has occurred.

Let us now look at the general definition of inter-process communication, which will
explain the same thing that we have discussed above.

Definition

"Inter-process communication is used for exchanging useful information between


numerous threads in one or more processes (or programs)."

To understand inter process communication, you can consider the following given
diagram that illustrates the importance of inter-process communication

Role of Synchronization in Inter Process Communication

It is one of the essential parts of inter process communication. Typically, this is


provided by interprocess communication control mechanisms, but sometimes it can also be
controlled by communication processes.

These are the following methods that used to provide the synchronization:

1. Mutual Exclusion
2. Semaphore
3. Barrier
4. Spinlock

Mutual Exclusion:-

It is generally required that only one process thread can enter the critical section at a
time. This also helps in synchronization and creates a stable state to avoid the race condition.
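A minimal sketch of mutual exclusion, using Python's `threading.Lock` as the mechanism (the counter and thread counts are arbitrary). Without the lock, concurrent increments could interleave and updates could be lost:

```python
import threading

counter = 0
lock = threading.Lock()          # guards the critical section below

def increment(n):
    global counter
    for _ in range(n):
        with lock:               # at most one thread runs this at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: no updates were lost
```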

Semaphore:-

Semaphore is a type of variable that usually controls the access to the shared resources by
several processes. Semaphore is further divided into two types which are as follows:

1. Binary Semaphore
2. Counting Semaphore
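A counting semaphore can be sketched with Python's `threading.Semaphore`. Here a semaphore initialized to 2 limits how many threads hold a simulated shared resource at once; the worker logic and sleep duration are illustrative (a binary semaphore is simply the same idea initialized to 1):

```python
import threading
import time

pool = threading.Semaphore(2)    # counting semaphore: at most 2 holders
guard = threading.Lock()
active, peak = 0, 0

def worker():
    global active, peak
    with pool:                   # wait (P): blocks while 2 threads hold it
        with guard:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)         # simulate using the shared resource
        with guard:
            active -= 1
                                 # signal (V) happens on leaving the with-block

threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 2
```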

Barrier:-

A barrier does not allow an individual process to proceed until all the processes
reach it. It is used by many parallel languages, and collective routines impose barriers.
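Python's `threading.Barrier` gives a minimal, thread-based illustration of this: no thread passes the barrier until all parties have arrived. The thread names and the log are illustrative:

```python
import threading

barrier = threading.Barrier(3)   # all 3 parties must arrive before any proceeds
log = []
log_lock = threading.Lock()

def phase_worker(name):
    with log_lock:
        log.append(("before", name))
    barrier.wait()               # block here until all 3 threads have arrived
    with log_lock:
        log.append(("after", name))

threads = [threading.Thread(target=phase_worker, args=(f"T{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# every "before" entry is logged before any "after" entry
print([tag for tag, _ in log])
```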

Spinlock:-

Spinlock is a type of lock, as its name implies. A process trying to acquire a spinlock
waits in a loop while repeatedly checking whether the lock is available. This is known as
busy waiting because even though the process is active, it does not perform any useful
(functional) operation.
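The busy-waiting behaviour can be sketched as follows. This toy `SpinLock` class spins on a non-blocking try-acquire; a real spinlock would instead use an atomic hardware instruction such as test-and-set:

```python
import threading

class SpinLock:
    """Busy-waiting lock sketch built on a non-blocking try-acquire."""
    def __init__(self):
        self._flag = threading.Lock()

    def acquire(self):
        # spin in a loop until the try-acquire succeeds (busy waiting)
        while not self._flag.acquire(blocking=False):
            pass

    def release(self):
        self._flag.release()

spin = SpinLock()
total = 0

def add(n):
    global total
    for _ in range(n):
        spin.acquire()
        total += 1
        spin.release()

threads = [threading.Thread(target=add, args=(5_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total)  # 20000
```

Spinlocks make sense only when the expected wait is very short; otherwise the spinning wastes CPU time that a blocking lock would have yielded to other work.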

Approaches to Interprocess Communication

We will now discuss some different approaches to inter-process communication which


are as follows:

These are a few different approaches for Inter- Process Communication:

1. Pipes
2. Shared Memory
3. Message Queue
4. Direct Communication
5. Indirect communication
6. Message Passing
7. FIFO

To understand them in more detail, we will discuss each of them individually.

Pipe:-

The pipe is a type of data channel that is unidirectional in nature, meaning that the data
in this type of channel can be moved in only a single direction at a time. Still, one can use
two channels of this type, so as to be able to send and receive data between two processes.
Typically, a pipe uses the standard methods for input and output. Pipes are used in all types
of POSIX systems and in different versions of the Windows operating system as well.
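A unidirectional pipe can be demonstrated with Python's `os.pipe`. For brevity, both ends are used within one process here, whereas typically the two ends would be split between a parent and a child process:

```python
import os

# r is the read end, w is the write end of a unidirectional channel
r, w = os.pipe()

os.write(w, b"hello through the pipe")
os.close(w)                  # closing the write end signals end-of-data

data = os.read(r, 1024)      # drain whatever is in the pipe buffer
os.close(r)
print(data.decode())
```

To get two-way communication, a program creates two such pipes, one for each direction, exactly as the text describes.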

Shared Memory:-

It can be referred to as a type of memory that can be used or accessed by multiple


processes simultaneously. It is primarily used so that the processes can communicate with each
other. Therefore the shared memory is used by almost all POSIX and Windows operating
systems as well.
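Named shared memory can be sketched with Python's `multiprocessing.shared_memory` module (Python 3.8+). For brevity both handles are opened in the same process here; normally a second process would attach to the segment by its name:

```python
from multiprocessing import shared_memory

# one side creates a named 16-byte segment...
shm_a = shared_memory.SharedMemory(create=True, size=16)

# ...and the other side attaches to the same segment by its name
shm_b = shared_memory.SharedMemory(name=shm_a.name)
shm_b.buf[:5] = b"hello"     # write through the second handle

data = bytes(shm_a.buf[:5])  # the write is visible through the first handle
print(data)

shm_b.close()
shm_a.close()
shm_a.unlink()               # release the segment
```

Because both handles map the same memory, no copying takes place: this is what makes shared memory the fastest of the IPC mechanisms listed here.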

Message Queue:-

In general, several processes can read and write data to the message queue. Messages
are stored in the queue until their recipients retrieve them. In short, the message queue
is very helpful in inter-process communication and is used by all operating systems.
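The behaviour described, messages staying queued until the recipient retrieves them, can be sketched with a thread-based queue in Python. This is a single-process stand-in for a true inter-process message queue; the message contents and sentinel are illustrative:

```python
import queue
import threading

mq = queue.Queue()            # messages stay queued until a receiver retrieves them

def producer():
    for i in range(3):
        mq.put(f"msg-{i}")    # send: append a message to the queue
    mq.put(None)              # sentinel: no more messages

received = []

def consumer():
    while True:
        msg = mq.get()        # receive: blocks until a message is available
        if msg is None:
            break
        received.append(msg)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)  # ['msg-0', 'msg-1', 'msg-2']
```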

Message Passing:-

It is a type of mechanism that allows processes to synchronize and communicate with
each other. By using message passing, the processes can communicate with each other
without resorting to shared variables.

Usually, the inter-process communication mechanism provides two operations that are as
follows:

o send(message)

o receive(message)

Socket:-

A socket acts as an endpoint for receiving or sending data in a network. This is true for
data sent between processes on the same computer or data sent between different computers on
the same network. Hence, it is used by several types of operating systems.
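A pair of connected endpoints exchanging send/receive operations can be demonstrated with `socket.socketpair` in Python; the messages exchanged are illustrative:

```python
import socket

# a connected pair of endpoints, each able to send and receive
a, b = socket.socketpair()

a.sendall(b"ping")               # send(message)
request = b.recv(1024)           # receive(message)
b.sendall(b"pong: " + request)

answer = a.recv(1024)
a.close()
b.close()
print(answer.decode())
```

The same send/receive pattern works unchanged when the two endpoints live in different processes, or on different machines connected by a network.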

File:-

A file is a type of data record or a document stored on the disk and can be acquired on
demand by the file server. Another most important thing is that several processes can access
that file as required or needed.

Signal:-

As the name implies, signals are used in inter-process communication in a minimal way.
Typically, they are system messages sent from one process to another. Therefore, they are
not used for sending data but for remote commands between multiple processes.

Why do we need inter-process communication?

There are numerous reasons to use inter-process communication for sharing the data. Here
are some of the most important reasons that are given below:

o It helps to speed up modularity
o Computational speedup
o Privilege separation
o Convenience

It helps operating systems to communicate with each other and synchronize their actions.

Introduction of Deadlock in Operating System

A process in operating system uses resources in the following way.


1. Requests a resource
2. Uses the resource
3. Releases the resource
A deadlock is a situation where a set of processes are blocked because each process is
holding a resource and waiting for another resource acquired by some other process.
Consider an example when two trains are coming toward each other on the same track and
there is only one track, none of the trains can move once they are in front of each other. A
similar situation occurs in operating systems when there are two or more processes that hold
some resources and wait for resources held by other(s). For example, in the below diagram,
Process 1 is holding Resource 1 and waiting for resource 2 which is acquired by process 2, and
process 2 is waiting for resource 1.

41
Examples Of Deadlock
1. The system has 2 tape drives. P1 and P2 each hold one tape drive and each needs
another one.
2. Semaphores A and B, initialized to 1. P0 and P1 enter deadlock as follows:
 P0 executes wait(A) and is then preempted.
 P1 executes wait(B).
 Now P0 and P1 are in deadlock.

P0 P1

wait(A); wait(B)

wait(B); wait(A)
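The P0/P1 interleaving above can be simulated with semaphores and timeouts, so the sketch terminates instead of hanging. This is an illustration only: the timeouts stand in for the indefinite blocking of real wait() calls.

```python
import threading

# A sketch of the P0/P1 deadlock: each "process" holds one semaphore and
# cannot obtain the other, which is exactly the circular-wait situation.
A = threading.Semaphore(1)
B = threading.Semaphore(1)

A.acquire()                              # P0 executes wait(A), is preempted
B.acquire()                              # P1 executes wait(B)

p0_blocked = not B.acquire(timeout=0.1)  # P0's wait(B) cannot proceed
p1_blocked = not A.acquire(timeout=0.1)  # P1's wait(A) cannot proceed

print(p0_blocked and p1_blocked)         # True: both blocked -> deadlock
```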

3. Assume the space is available for allocation of 200K bytes, and the following sequence
of events occurs.

P0 P1

Request 80KB; Request 70KB;

Request 60KB; Request 80KB;

Deadlock occurs if both processes progress to their second request.
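The arithmetic of example 3 can be checked directly. This small sketch only restates the numbers from the example:

```python
# 200K total; both first requests are granted.
total = 200
p0_first, p1_first = 80, 70
p0_second, p1_second = 60, 80

free = total - (p0_first + p1_first)   # 50K remain after the first requests
print(free)                            # 50

# Neither second request fits in the remaining 50K, and each process waits
# for the other to release memory -> deadlock.
print(p0_second > free and p1_second > free)  # True
```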


Deadlock can arise if the following four conditions hold simultaneously (Necessary
Conditions)
Mutual Exclusion: Two or more resources are non-shareable (Only one process can use at a
time)
Hold and Wait: A process is holding at least one resource and waiting for additional resources held by other processes.
No Preemption: A resource cannot be taken from a process unless the process releases the
resource.
Circular Wait: A set of processes waiting for each other in circular form.

1) Deadlock prevention or avoidance:


Prevention:

42
The idea is to not let the system enter a deadlock state. The system ensures that the
four conditions mentioned above cannot all arise. These techniques are costly, so we use them
in cases where our priority is making a system deadlock-free.
Prevention is done by negating one of the above-mentioned necessary conditions for deadlock.
Prevention can be done in four different ways:
1. Eliminate mutual exclusion
2. Solve hold and wait
3. Allow preemption
4. Eliminate circular wait
Avoidance:

Avoidance looks into the future. To use the strategy of avoidance, we must make

an assumption: all information about the resources a process will need must be known to us
before the execution of the process. We use the Banker’s algorithm (due to Dijkstra) to avoid
deadlock.
In prevention and avoidance, we get correctness of data but performance decreases.
2) Deadlock detection and recovery:

If deadlock prevention or avoidance is not applied to the software, then we can handle
deadlocks by detection and recovery, which consists of two phases:
1. In the first phase, we examine the state of the processes and check whether there is a
deadlock in the system.
2. If a deadlock is found in the first phase, then we apply an algorithm to recover from
it.
In deadlock detection and recovery, we get correctness of data but performance decreases.
Recovery from Deadlock
1. Manual Intervention:
When a deadlock is detected, one option is to inform the operator and let them handle
the situation manually. While this approach allows for human judgment and decision-making,
it can be time-consuming and may not be feasible in large-scale systems.
2. Automatic Recovery:
An alternative approach is to enable the system to recover from deadlock automatically.
This method involves breaking the deadlock cycle by either aborting processes or preempting
resources. Let’s delve into these strategies in more detail.
Recovery from Deadlock: Process Termination:
1. Abort all deadlocked processes:

43
This approach breaks the deadlock cycle, but it comes at a significant cost. The
processes that were aborted may have executed for a considerable amount of time, resulting in
the loss of partial computations. These computations may need to be recomputed later.
2. Abort one process at a time:
Instead of aborting all deadlocked processes simultaneously, this strategy involves
selectively aborting one process at a time until the deadlock cycle is eliminated. However, this
incurs overhead as a deadlock-detection algorithm must be invoked after each process
termination to determine if any processes are still deadlocked.
Factors for choosing the termination order:
 The process’s priority
 Completion time and the progress made so far
 Number of processes to be terminated
 Process type (interactive or batch)
Recovery from Deadlock: Resource Preemption:
1. Selecting a victim:
Resource preemption involves choosing which resources and processes should be
preempted to break the deadlock. The selection order aims to minimize the overall cost of
recovery. Factors considered for victim selection may include the number of resources held by
a deadlocked process and the amount of time the process has consumed.
2. Rollback:
If a resource is preempted from a process, the process cannot continue its normal
execution as it lacks the required resource. Rolling back the process to a safe state and restarting
it is a common approach. Determining a safe state can be challenging, leading to the use of
total rollback, where the process is aborted and restarted from scratch.
3. Starvation prevention:
To prevent resource starvation, it is essential to ensure that the same process is not
always chosen as a victim. If victim selection is solely based on cost factors, one process might
repeatedly lose its resources and never complete its designated task. To address this, it is
advisable to limit the number of times a process can be chosen as a victim, including the number
of rollbacks in the cost factor.
3) Deadlock ignorance: If a deadlock is very rare, then let it happen and reboot the system.

This is the approach that both Windows and UNIX take. We use the ostrich algorithm
for deadlock ignorance.

44
With deadlock ignorance, performance is better than with the above two methods, but
correctness of data cannot be guaranteed.
Safe State:
A safe state can be defined as a state in which there is no deadlock. It is achievable if:
 A process that needs an unavailable resource can wait until that resource is
released by the process to which it is currently allocated; if no such sequence
exists, the state is unsafe.
 All the requested resources can then be allocated to the process.
Deadlock Prevention And Avoidance
A deadlock occurs when two or more processes block each other: each holds at least
one resource and waits for a resource held by another, so none of them can proceed.
1. Every process needs a few resources to finish running.
2. The procedure makes a resource request. If the resource is available, the OS will
grant it; otherwise, the process will wait.
3. When the process is finished, the resource is released.
Deadlock Characteristics
The deadlock has the following characteristics:
1. Mutual Exclusion
2. Hold and Wait
3. No preemption
4. Circular wait

Deadlock Prevention
We can prevent a Deadlock by eliminating any of the above four conditions.
Eliminate Mutual Exclusion: It is not possible to violate mutual exclusion because
some resources, such as the tape drive and printer, are inherently non-shareable.
Eliminate Hold and wait: Allocate all required resources to the process before the start of
its execution, this way hold and wait condition is eliminated but it will lead to low device
utilization. for example, if a process requires a printer at a later time and we have allocated a
printer before the start of its execution printer will remain blocked till it has completed its
execution. The process will make a new request for resources after releasing the current set.

45
Eliminate No Preemption : Preempt resources from the process when resources are required
by other high-priority processes.
Eliminate Circular Wait: Each resource is assigned a numerical number, and a process
may request resources only in increasing order of that numbering. For example, if process P1
is allocated resource R5, a later request by P1 for R4 or R3 (numbered lower than R5) will
not be granted; only requests for resources numbered higher than R5 will be granted.
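The numbering rule can be sketched with ordered lock acquisition. This is a minimal Python illustration; the resource numbers (3, 4, 5) follow the example above and the helper names are invented for the sketch.

```python
import threading

# Resources get fixed numbers; every process acquires them in increasing
# order, so a cycle of waits can never form.
resources = {3: threading.Lock(), 4: threading.Lock(), 5: threading.Lock()}

def acquire_in_order(*numbers):
    # Sorting enforces the numbering rule: a request for a resource
    # numbered lower than one already held is simply never issued.
    for n in sorted(numbers):
        resources[n].acquire()

def release_all(*numbers):
    for n in numbers:
        resources[n].release()

acquire_in_order(5, 3, 4)   # actually taken in the order 3, 4, 5
release_all(3, 4, 5)
print("no circular wait possible")
```

Because every process that needs the same set of resources takes them in the same global order, no process can hold a high-numbered resource while waiting for a lower-numbered one, which breaks the circular-wait condition.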
Detection and Recovery: Another approach to dealing with deadlocks is to detect and
recover from them when they occur. This can involve killing one or more of the processes
involved in the deadlock or releasing some of the resources they hold.
Deadlock Avoidance
A deadlock avoidance policy grants a resource request only if it can establish that
granting the request cannot lead to a deadlock, either immediately or in the future. The kernel
lacks detailed knowledge about the future behavior of processes, so it cannot accurately predict
deadlocks. To facilitate deadlock avoidance under these conditions, it uses the following
conservative approach: each process declares the maximum number of resource units of each
class that it may require.
The kernel permits a process to request these resource units in stages, i.e. a few
resource units at a time, subject to the maximum number declared by it, and uses a worst-case
analysis technique to check for the possibility of future deadlocks. A request is granted only
if there is no possibility of deadlock; otherwise, it remains pending until it can be granted.
This approach is conservative because a process may complete its operation without requiring
the maximum number of units it declared.
Resource Allocation Graph
The resource allocation graph (RAG) is used to visualize the system’s current state as
a graph. The Graph includes all processes, the resources that are assigned to them, as well as
the resources that each Process requests. Sometimes, if there are fewer processes, we can

46
quickly spot a deadlock in the system by looking at the graph rather than the tables we use
in Banker’s algorithm. Deadlock avoidance can also be done with Banker’s Algorithm.
Banker’s Algorithm
Banker’s Algorithm is a resource allocation and deadlock avoidance algorithm that
tests every request made by processes for resources. It checks whether the system remains in
a safe state after granting a request: if so, it allows the request; if no safe state would
remain, it denies the request.
Inputs to Banker’s Algorithm
1. Maximum resource needs of each process.
2. Resources currently allocated to each process.
3. Maximum free resources available in the system.
The request will only be granted under the conditions below:
1. The request made by the process is less than or equal to the maximum need declared
for that process.
2. The request made by the process is less than or equal to the freely available
resources in the system.
Timeouts: To avoid deadlocks caused by indefinite waiting, a timeout mechanism can be
used to limit the amount of time a process can wait for a resource. If the resource is
unavailable within the timeout period, the process can be forced to release its current
resources and try again later.
Example:
Total resources in system:
A B C D
6 5 7 6

Available system resources are:
A B C D
3 1 1 2

47
Processes (currently allocated resources):
A B C D
P1 1 2 2 1
P2 1 0 3 3
P3 1 2 1 0

Processes (maximum resources):
A B C D
P1 3 3 2 2
P2 1 2 3 4
P3 1 3 5 0

Need = Maximum Resources Requirement – Currently Allocated Resources.


Processes (need resources):
A B C D
P1 2 1 0 1
P2 0 2 0 1
P3 0 1 4 0
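The safety check that the Banker's algorithm performs on these tables can be sketched as follows. This is a minimal Python sketch; the table values are taken from the example above, and the function name is illustrative.

```python
# Tables from the example above.
available = [3, 1, 1, 2]
allocated = {"P1": [1, 2, 2, 1], "P2": [1, 0, 3, 3], "P3": [1, 2, 1, 0]}
need      = {"P1": [2, 1, 0, 1], "P2": [0, 2, 0, 1], "P3": [0, 1, 4, 0]}

def safe_sequence(available, allocated, need):
    """Return an order in which all processes can finish, or None if unsafe."""
    work, order, done = list(available), [], set()
    while len(done) < len(need):
        for p in need:
            if p not in done and all(n <= w for n, w in zip(need[p], work)):
                # p can finish with what is free, then releases everything.
                work = [w + a for w, a in zip(work, allocated[p])]
                order.append(p)
                done.add(p)
                break
        else:
            return None          # no process can proceed -> unsafe state
    return order

print(safe_sequence(available, allocated, need))  # ['P1', 'P2', 'P3']
```

The state above is safe: P1's need (2,1,0,1) fits in the available (3,1,1,2); after P1 releases its allocation the available grows to (4,3,3,3), which covers P2, and finally P3, giving the safe sequence P1, P2, P3.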

Conclusion
 Operating systems employ deadlock avoidance to prevent deadlock by using
the Banker’s algorithm or the resource allocation graph. Deadlock avoidance
works by informing the operating system of the resources needed by the process
to finish execution, and the operating system then determines whether or not
the requirements can be met.
 The system is said to be in a safe state if all the resources required by the
processes can be satisfied by the resources that are currently available.
 The system is said to be in an unsafe state if the resource requirements
of the processes cannot be met by the available resources in any way.

48
Deadlock Detection And Recovery
Deadlock detection and recovery is the process of detecting and resolving deadlocks in
an operating system. A deadlock occurs when two or more processes are blocked, waiting for
each other to release the resources they need. This can lead to a system-wide stall, where no
process can make progress.

There are two main approaches to deadlock detection and recovery:

1. Prevention: The operating system takes steps to prevent deadlocks from occurring
by ensuring that the system is always in a safe state, where deadlocks cannot occur.
This is achieved through resource allocation algorithms such as the Banker’s
Algorithm.
2. Detection and Recovery: If deadlocks do occur, the operating system must detect
and resolve them. Deadlock detection algorithms, such as the Wait-For Graph, are
used to identify deadlocks, and recovery algorithms, such as the Rollback and Abort
algorithm, are used to resolve them. The recovery algorithm releases the resources
held by one or more processes, allowing the system to continue to make progress.
Difference Between Prevention and Detection/Recovery:
Prevention aims to avoid deadlocks altogether by carefully managing resource
allocation, while detection and recovery aim to identify and resolve deadlocks that have already
occurred.
Deadlock detection and recovery is an important aspect of operating system design and
management, as it affects the stability and performance of the system. The choice of deadlock
detection and recovery approach depends on the specific requirements of the system and the
trade-offs between performance, complexity, and risk tolerance. The operating system must
balance these factors to ensure that deadlocks are effectively detected and resolved.

In the previous section, we discussed deadlock prevention and avoidance. Here, the
deadlock detection and recovery technique for handling deadlock is discussed.
Deadlock Detection :
1. If there are single instances of resources –
In this case, for deadlock detection, we can run an algorithm to check for a cycle in
the Resource Allocation Graph. The presence of a cycle in the graph is a sufficient condition
for deadlock.

49
In the above diagram, resource 1 and resource 2 have single instances. There is a cycle
R1 → P1 → R2 → P2. So, Deadlock is Confirmed.

2. If there are multiple instances of resources –

Detection of the cycle is necessary but not a sufficient condition for deadlock detection,
in this case, the system may or may not be in deadlock varies according to different situations.
3. Wait-For Graph Algorithm –
The Wait-For Graph Algorithm is a deadlock detection algorithm used to detect
deadlocks in a system where resources can have multiple instances. The algorithm works by
constructing a Wait-For Graph, which is a directed graph that represents the dependencies
between processes and resources.
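A wait-for graph check can be sketched as a depth-first search for a cycle. This is an illustrative sketch: the graph representation and function name are assumptions, and for single-instance resources a cycle in the wait-for graph implies deadlock.

```python
# An edge P -> Q in the wait-for graph means process P is waiting for a
# resource held by process Q; a cycle means those processes are deadlocked.
def has_cycle(graph):
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on stack / finished
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GREY
        for nxt in graph.get(node, ()):
            c = color.get(nxt, WHITE)
            if c == GREY:                 # back edge -> cycle found
                return True
            if c == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in list(graph))

# P1 waits for P2 and P2 waits for P1, as in the single-instance example.
print(has_cycle({"P1": ["P2"], "P2": ["P1"]}))   # True  -> deadlock
print(has_cycle({"P1": ["P2"], "P2": []}))       # False -> no deadlock
```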

Deadlock Recovery:

A traditional operating system such as Windows doesn’t deal with deadlock recovery,
as it is a time- and space-consuming process. Real-time operating systems use deadlock
recovery.
1. Killing the process –
Either kill all the processes involved in the deadlock, or kill one process at a
time: after killing each process, check for deadlock again and keep repeating until
the system recovers. Killing the processes breaks the circular wait condition.
2. Resource Preemption –
Resources are preempted from the processes involved in the deadlock, and

50
preempted resources are allocated to other processes so that there is a possibility of
recovering the system from the deadlock. In this case, the system may go into
starvation.

3. Concurrency Control – Concurrency control mechanisms are used to prevent data
inconsistencies in systems with multiple concurrent processes. These mechanisms
ensure that concurrent processes do not access the same data at the same time, which
can lead to inconsistencies and errors.

Advantages of Deadlock Detection and Recovery in Operating Systems:

1. Improved System Stability: Deadlocks can cause system-wide stalls, and


detecting and resolving deadlocks can help to improve the stability of the system.
2. Better Resource Utilization: By detecting and resolving deadlocks, the operating
system can ensure that resources are efficiently utilized and that the system remains
responsive to user requests.
3. Better System Design: Deadlock detection and recovery algorithms can provide
insight into the behavior of the system and the relationships between processes and
resources, helping to inform and improve the design of the system.

Disadvantages of Deadlock Detection and Recovery in Operating Systems:

1. Performance Overhead: Deadlock detection and recovery algorithms can


introduce a significant overhead in terms of performance, as the system must
regularly check for deadlocks and take appropriate action to resolve them.
2. Complexity: Deadlock detection and recovery algorithms can be complex to
implement, especially if they use advanced techniques such as the Resource
Allocation Graph or Timestamping.
3. False Positives and Negatives: Deadlock detection algorithms are not perfect and
may produce false positives or negatives, indicating the presence of deadlocks when
they do not exist or failing to detect deadlocks that do exist.
4. Risk of Data Loss: In some cases, recovery algorithms may require rolling back
the state of one or more processes, leading to data loss or corruption
ONE MARKS

51
1. In the case of the index allocation scheme of various blocks to a file, the maximum size
(possible) of the file would depend on :

a. the total number of blocks that have been used for the index, size of all the blocks

b. the actual size of all blocks, the size of the blocks’ address

c. the size of the blocks, the blocks’ address size, and the total number of blocks that have
been used for the index

d. None of the above

Answer: (a) the total number of blocks that have been used for the index, size of all the blocks

2. The swap space in a disk is primarily used to:

a. Save process data

b. Save temporary HTML pages

c. Store the device drivers

d. Store the super-block

Answer: (a) Save process data

3. Out of these page replacement algorithms, which one suffers from Belady’s anomaly?

a. LRU

b. FIFO

c. Both LRU and FIFO

d. Optimal Page Replacement

Answer: (b) FIFO

4. An increase in a computer’s RAM leads to a typical improvement in performance because:

a. Fewer page faults occur

52
b. Virtual memory increases

c. Fewer segmentation faults occur

d. A larger RAM is faster

Answer: (a) Fewer page faults occur

5. Consider a computer system that supports 32-bit physical as well as virtual addresses. Now
since the space of the physical address is the same size as the virtual address, the OS designers
would decide to entirely get rid of its virtual memory. Which one of these is true in this case?

a. It is no longer possible to efficiently implement multi-user support

b. It is possible to make CPU scheduling more efficient now

c. There would no longer be a requirement for hardware support for memory management

d. It would be possible to make the processor cache organisation more efficient now

Answer: (c) There would no longer be a requirement for hardware support for memory
management

6. The Virtual memory is:

a. An illusion of a large main memory

b. A large main memory

c. A large secondary memory

d. None of the above

Answer: (a) An illusion of a large main memory

7. A CPU yields 32-bit virtual addresses, and the page size is 4 kilobytes. Here, the processor
consists of a TLB (translation lookaside buffer). It is a 4-way set associative, and it can hold a
total of 128-page table entries. The TLB tag’s minimum size is:

a. 20 bits

53
b. 15 bits

c. 13 bits

d. 11 bits

Answer: (b) 15 bits

8. Thrashing occurs in a system when:

a. The processes on the system access pages and not memory frequently

b. A page fault pops up

c. The processes on the system are in running state

d. The processes on the system are in the waiting state

Answer: (a) The processes on the system access pages and not memory frequently

9. The page fault occurs whenever:

a. The requested page isn’t in the memory

b. The requested page is in the memory

c. An exception is thrown

d. The page is corrupted

Answer: (a) The requested page isn’t in the memory

10. Consider a computer that uses 32–bit physical address, 46–bit virtual address, along with a
page table organisation that is three-level. Here, the base register of the page table stores the
T1 (first–level table) base address, which occupies exactly one page. Every entry of the T1
stores the T2 (second-level table) page’s base address. Similarly, every entry of T2 stores the
T3 (third-level table) page’s base address and every entry of T3 stores a PTE (page table entry).
The size of PTE is 32 bits. In the computer, the processor has a 1 MB 16 way virtually indexed
set-associative physically tagged cache. If the size of the cache block is 64 bytes, then what is
the size of a page in this computer in Kilobytes?

54
a. 4

b. 2

c. 16

d. 8

Answer: (d) 8

11. Consider that the page fault service time in a computer is 10ms and the average memory
access time is 20ns. If, in case, it generates a page fault every 10^6 memory accesses, then
what would be the effective access time for this memory?

a. 30ns

b. 21ns

c. 35ns

d. 23ns

Answer: (a) 30ns

12. FIFO policy is used in a system for page replacement. It consists of 4-page frames, and no
pages loaded, to start with. This system initially accesses 100 separate pages in a particular
order. It then accesses these same 100 pages. The difference is that now they are in the reverse
order. Considering this, how many page faults would occur here?

a. 192

b. 195

c. 196

d. 197

Answer: (c) 196
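The count in Q12 can be verified with a small FIFO simulation. The function name is illustrative; the reference string is pages 1 to 100 in order, then the same pages in reverse.

```python
from collections import deque

def fifo_faults(references, n_frames):
    """Count page faults under FIFO replacement with n_frames empty frames."""
    frames, queue, faults = set(), deque(), 0
    for page in references:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.discard(queue.popleft())   # evict the oldest page
            frames.add(page)
            queue.append(page)
    return faults

refs = list(range(1, 101)) + list(range(100, 0, -1))
print(fifo_faults(refs, 4))   # 196
```

The forward pass faults on all 100 pages; on the reverse pass, pages 100, 99, 98, and 97 are still resident (4 hits), and the remaining 96 accesses all fault, giving 100 + 96 = 196.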

13. In every entry of a page table, the essential content(s) is/are:

55
a. Page frame number

b. Virtual page number

c. Both page frame number and virtual page number

d. Accessing the right information

Answer: (a) Page frame number

14. When translating a virtual address to a physical address, a multilevel page table is always
a preference as compared to a single level page because it:

a. Helps in the reduction of the total page faults in the page replacement algorithms

b. Reduces the total memory access time for reading or writing a memory location

c. Helps in the reduction of the page table size required for implementing a process’s virtual
address space

d. Is required by the lookaside buffer translation

Answer: (c) Helps in the reduction of the page table size required for implementing a process’s
virtual address space

15. Consider a processor that uses 32-bit virtual addresses, 36-bit physical addresses, and a 4
KB page frame size. Each page table entry is 4 bytes in size. Here, a page table of three-level
is used for the translation of virtual to a physical address. The virtual address, in this case, is
used as follows:

• Bits 12-20 are utilised for indexing into the page table of the third level

• Bits 21-29 are utilised for indexing into the page table of the second level

• Bits 30-31 are utilised for indexing into the page table of the first level, and • Bits 0-11 are
utilised as an offset within the page.

Thus, the total number of bits needed to address the next level page frame or page table for the
first-level, second-level and third-level page table entry are respectively:

56
a. 25, 25 and 24

b. 24, 24 and 20

c. 24, 24 and 24

d. 20, 20 and 20

Answer: (c) 24, 24 and 24

16. Consider that a virtual memory system uses a FIFO page replacement policy. For a process,
it allocates a fixed number of frames. Now consider these statements:

A: An increase in the number of page frames that are allocated to a

process sometimes leads to an increase in the page fault rate.

B: A few programs do not display the locality of reference.

Which one of these statements is TRUE?

a. A is false, but B is true

b. Both A and B are false

c. Both A and B are true, and B is the reason for A

d. Both A and B are true, but B isn’t the reason for A

Answer: (d) Both A and B are true, but B isn’t the reason for A

17. 3 page frames have been allocated to a process. Here, we assume that none of the process’s
pages is available initially in the memory, and the process creates this sequence of page
references: 1, 2, 1, 3, 7, 4, 5, 6, 3, 1 (reference string). If an optimal page replacement policy is
utilised, then how many page faults would occur for the reference string mentioned above?

a. 10

b. 9

c. 8

57
d. 7

Answer: (d) 7

18. Consider paging hardware that has a TLB. Let us assume that the page table and the pages
are in their physical memory. Searching the TLB takes 10 milliseconds, and accessing the
physical memory takes 80 milliseconds. In case the TLB hit ratio is 0.6, then the effective
memory access time is _________ (in milliseconds).

a. 124

b. 122

c. 120

d. 118

Answer: (b) 122
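The arithmetic behind Q18 can be checked directly. On a TLB hit we pay the TLB search plus one memory access; on a miss we also pay an extra memory access for the page-table lookup. The hit ratio is written as 6/10 so the computation stays in exact integer fractions.

```python
tlb, mem = 10, 80      # times from the question (milliseconds)
hit, total = 6, 10     # hit ratio 0.6 expressed as 6/10

# EAT = hit * (TLB + memory) + miss * (TLB + page table + memory)
eat = (hit * (tlb + mem) + (total - hit) * (tlb + mem + mem)) / total
print(eat)   # 122.0
```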

19. In a system that has 32-bit virtual addresses and a 1 KB page size, it is not practical to
use a one-level page table for translating virtual to physical addresses, due to:

a. a large amount of external fragmentation

b. a large amount of internal fragmentation

c. a large computation overhead in the process of translation

d. a large memory overhead when maintaining the page tables

Answer: (d) a large memory overhead when maintaining the page tables

20. Which of these isn’t an advantage of using dynamically linked, shared libraries, as
compared to statically linked libraries?

a. Faster program startup

b. The existing programs do not need to be re-linked so as to take advantage of the newer
library versions

c. Lesser page fault rate in a system

58
d. Smaller sizes of executable files

Answer: (a) Faster program startup

21. Out of all the following, which one isn’t a form of memory?

a. translation lookaside buffer

b. instruction opcode

c. instruction register

d. instruction cache

Answer: (b) instruction opcode

22. The process of dynamic linking can generate security concerns because:

a. Linking is insecure

b. The cryptographic procedures aren’t available for the process of dynamic linking

c. Security is dynamic

d. The path of the searching dynamic libraries isn’t known until the runtime

Answer: (d) The path of the searching dynamic libraries isn’t known until the runtime

23. Which of these is a false statement?

a. The virtual memory translates a program‘s address space into their physical memory address
space.

b. The virtual memory allows every program to exceed the primary memory’s size.

c. The virtual memory leads to an increase in the degree of multiprogramming

d. The virtual memory leads to a reduction of the context switching overhead

Answer: (d) The virtual memory leads to a reduction of the context switching overhead

59
24. ________ is the process in which load addresses are assigned to a program’s various parts,
and the code and date are adjusted in the program for the reflection of the assigned addresses.

a. Symbol resolution

b. Assembly

c. Parsing

d. Relocation

Answer: (d) Relocation

25. Which one of these is NOT shared by the same process’s threads?

a. Address Space

b. Stack

c. Message Queue

d. File Descriptor Table

Answer: (b) Stack

5 MARKS

1. Basics of operating systems

2. What is an operating system?

3. Explain about desktop systems

4. Explain about distributed systems

5. Explain about clustered systems

6. Explain about real-time systems

7. Explain about handheld systems

8. Explain about multiprocessor systems

60
9. Explain about mainframe systems

10 MARKS

1. Discuss about feature migration

2. Explain about the computing environments

3. Discuss about process scheduling

4. What are cooperative processes and inter-process communication?

5. What are deadlocks? Explain briefly.

6. Explain about deadlock prevention and avoidance

Unit 2

61
Distributed Operating System

A distributed operating system (DOS) is an essential type of operating system.


Distributed systems use many central processors to serve multiple real-time applications and
users. As a result, data processing jobs are distributed between the processors.

It connects multiple computers via a single communication channel. Furthermore, each


of these systems has its own processor and memory. Additionally, these CPUs communicate
via high-speed buses or telephone lines. Individual systems that communicate via a single
channel are regarded as a single entity. They're also known as loosely coupled systems.

This operating system consists of numerous computers, nodes, and sites joined together
via LAN/WAN lines. It enables the distribution of work across a number of central
processors, and it supports many real-time applications and different users. Distributed
operating systems can share their computing resources and I/O files while providing users
with virtual machine abstraction.

Types of Distributed Operating System

There are various types of Distributed Operating systems. Some of them are as follows:

1. Client-Server Systems
2. Peer-to-Peer Systems
3. Middleware
4. Three-tier
5. N-tier

62
Client-Server System

In this type of system, the client requests a resource and the server provides the
requested resource. When clients connect to a server, the server may serve multiple clients
at the same time.

Client-Server Systems are also referred to as "Tightly Coupled Operating Systems".


This system is primarily intended for multiprocessors and homogeneous multicomputers.
Client-server systems function as a centralized server since they approve all requests
issued by client systems.

Server systems can be divided into two parts:

1. Computer Server System

This system provides an interface to the client; the client then sends its own requests
to be executed as actions. After completing the requested actions, the server sends the
results back to the client.

2. File Server System

It provides a file system interface for clients, allowing them to execute actions like file
creation, updating, deletion, and more.
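The request/response flow of the client-server model described above can be sketched with sockets. This is a minimal illustration, not from the text: the helper name, message contents, and use of the loopback address are assumptions.

```python
import socket
import threading

# A minimal client-server sketch: the server waits for one request and
# returns the requested "resource".
def serve_once(server_sock):
    conn, _ = server_sock.accept()          # wait for one client
    request = conn.recv(1024)               # read the client's request
    conn.sendall(b"resource for " + request)
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))               # port 0 -> OS picks a free port
server.listen(1)
threading.Thread(target=serve_once, args=(server,)).start()

client = socket.create_connection(server.getsockname())
client.sendall(b"file.txt")                 # the client requests a resource
reply = client.recv(1024)
client.close()
server.close()

print(reply)   # b'resource for file.txt'
```

A real file server would parse the request and perform the corresponding file operation (create, update, delete), but the request/response shape is the same.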

Peer-to-Peer System

The nodes play an important role in this system. The task is evenly distributed among
the nodes. Additionally, these nodes can share data and resources as needed. Once again, they
require a network to connect.

The Peer-to-Peer System is known as a "Loosely Couple System". This concept is used
in computer network applications since they contain a large number of processors that do not
share memory or clocks. Each processor has its own local memory, and they interact with one
another via a variety of communication methods like telephone lines or high-speed buses.

63
Middleware

Middleware enables the interoperability of all applications running on different


operating systems. Those programs are capable of transferring all data to one other by using
these services.

Three-tier

The information about the client is saved in the intermediate tier rather than in the client,
which simplifies development. This type of architecture is most commonly used in online
applications.

N-tier

When a server or application has to transmit requests to other enterprise services
on the network, n-tier systems are used.

Features of Distributed Operating System

There are various features of the distributed operating system. Some of them
are as follows:

Openness

It means that the system's services are openly exposed through interfaces. Furthermore,
these interfaces only specify the service syntax: the type of function, its return type,
parameters, and so on. Interface Definition Languages (IDLs) are used to create these interfaces.

Scalability

It refers to the fact that the system's efficiency should not degrade as new nodes are
added to the system. The performance of a system with 100 nodes should be similar to that
of a system with 1000 nodes.

Resource Sharing

Its most essential feature is that it allows users to share resources. They can also share
resources in a secure and controlled manner. Printers, files, data, storage, web pages, etc., are
examples of shared resources.

64
Flexibility

A DOS's flexibility is enhanced by its modular design, which allows it to deliver a more

advanced range of high-level services. The quality and completeness of the kernel/microkernel
simplify the implementation of such services.

Transparency

It is the most important feature of the distributed operating system. The primary purpose
of a distributed operating system is to hide the fact that resources are shared. Transparency also
implies that the user should be unaware that the resources he is accessing are shared.
Furthermore, the system should be a separate independent unit for the user.

Heterogeneity

The components of distributed systems may differ and vary in operating systems,
networks, programming languages, computer hardware, and implementations by different
developers.

Fault Tolerance

Fault tolerance means that users can continue their work even if some of the software or
hardware fails.

Examples of Distributed Operating System

There are various examples of the distributed operating system. Some of them are as
follows:

Solaris

It is designed for SUN multiprocessor workstations.

OSF/1

It is compatible with Unix and was designed by the Open Software Foundation.

Micros

The MICROS operating system ensures a balanced data load while allocating jobs to
all nodes in the system.

DYNIX

It is developed for the Symmetry multiprocessor computers.

Locus

It can access local and remote files at the same time without any location
hindrance.

Mach

It supports multithreading and multitasking.

Applications of Distributed Operating System

There are various applications of the distributed operating system. Some of them are as
follows:

Network Applications

DOS is used by many network applications, including the Web, peer-to-peer networks,
multiplayer web-based games, and virtual communities.

Telecommunication Networks

DOS is useful in phones and cellular networks. A DOS can be found in networks like
the Internet, wireless sensor networks, and routing algorithms.

Parallel Computation

DOS is the basis of systematic computing, which includes cluster computing and grid
computing, and a variety of volunteer computing projects.

Real-Time Process Control

The real-time process control system operates with a deadline, and such examples
include aircraft control systems.

Advantages and Disadvantages of Distributed Operating System

There are various advantages and disadvantages of the distributed operating system.
Some of them are as follows:

Advantages

There are various advantages of the distributed operating system. Some of them are as follows:

1. It may share all resources (CPU, disk, network interface, nodes, computers, and so on)
from one site to another, increasing data availability across the entire system.
2. It reduces the probability of data corruption because all data is replicated across all
sites; if one site fails, the user can access data from another operational site.
3. The sites operate independently of one another; as a result, if one site
crashes, the entire system does not halt.
4. It increases the speed of data exchange from one site to another site.
5. It is an open system since it may be accessed from both local and remote locations.
6. It helps in the reduction of data processing time.
7. Most distributed systems are made up of several nodes that interact to make them fault-
tolerant. If a single machine fails, the system remains operational.

Disadvantages

There are various disadvantages of the distributed operating system. Some of them are as
follows:

1. The system must decide which jobs must be executed, when they must be executed, and
where they must be executed. A scheduler has limitations, which can lead to
underutilized hardware and unpredictable runtimes.
2. It is hard to implement adequate security in DOS since the nodes and connections must
be secured.

3. The database connected to a DOS is relatively complicated and hard to manage in
contrast to a single-user system.
4. The underlying software is extremely complex and is not understood very well
compared to other systems.
5. The more widely distributed a system is, the more communication latency can be
expected. As a result, teams and developers must choose between availability,
consistency, and latency.

6. These systems aren't widely available because they're thought to be too expensive.
7. Gathering, processing, presenting, and monitoring hardware use metrics for big clusters
can be a real issue.

COMMUNICATION PRIMITIVES

The sender sends a message containing data, structured so that the receiver can
understand it. Inter-process communication in distributed systems is performed using
Message Passing, which permits the exchange of messages between processes using
primitives for sending and receiving messages. A typical message consists of two parts:

1. A Fixed-Length Header with 3 Components:


 Address: The address component consists of the unique addresses of the sending
and receiving processes. It contains two parts: one holds the address of the sending
process and the other holds the address of the receiving process.

 Sequence number: The sequence number works as a message identifier (ID); it
is used to find missing or duplicate messages in the event of a system failure.
 Structural Information: It contains two fields: the type (whether the message
carries data or a pointer to data) and the length of the message.
2. A Collection of Typed Data Objects of Varying Sizes:
Message Passing: A message-passing system gives a collection of message-based IPC
(Inter-Process Communication) protocols while sheltering programmers from the
complexities of sophisticated network protocols and many heterogeneous platforms. The
send() and receive() communication primitives are used by processes for interacting with
each other. For example, Process A wants to communicate with Process B then Process A
will send a message with send() primitive and Process B will receive the message with
receive() primitive.
Characteristics of a Good Message Passing System:
 Simplicity
 Uniform Semantics
 Efficiency
 Correctness
 Reliability
 Flexibility
 Security
 Portability
Issues in Message Passing:
 Who is the message’s sender?
 Who is the intended recipient?
 Is there a single receiver or several receivers?
 Is there any guarantee that the intended recipient received the message? Is the
sender required to wait for a reply?
 Is there any strategy for handling a catastrophic event if it occurs during
communication, such as a communication link failure or node crash?
 What should be done with the message if the receiver is not ready to take it?
Whether it will be destroyed or kept in a buffer? What are the steps to follow if
the buffer is also full in the case of buffering?

 Can the receiver order the messages when several messages are outstanding?
Synchronization:
 The send() and receive() primitives are called whenever processes need to
communicate.
 Synchronization of the communicating processes via the communication primitives is a
critical issue in the communication structure.

Synchronization Semantics: The following are the two ways of message passing between
processes:
 Blocking (Synchronous)
 Non-blocking (Asynchronous)
1. Blocking: Blocking semantics imply that a call to a send() or receive()
primitive blocks the invoker's current execution.
2. Non-blocking: Non-blocking semantics imply that a call to a send() or
receive() primitive does not block the invoker's current execution; control
immediately returns to the invoker.
 Blocking send() primitive: The blocking send() primitive blocks the sending
process. After executing this primitive, the process remains blocked until it
receives an acknowledgment from the receiver side that the message has been
received.
 Non-blocking send() primitive: The non-blocking send() primitive does not
block the sending process: after the execution of the send() statement, the
process is permitted to continue its execution immediately, as soon as the
message has been copied to a buffer.
 Blocking receive() primitive: When the receive() statement is executed, the
receiving process is halted until a message is received.
 Non-blocking receive() primitive: The receiving process is not blocked after
executing the receive() statement; control is returned immediately after
informing the kernel of the message buffer's location.
The issue with a non-blocking receive() primitive is: when a message arrives in the message
buffer, how does the receiving process know? One of the following two procedures can be
used for this purpose:

1. Polling: In the polling method, the receiver checks the buffer's status by issuing
a test primitive. The receiver regularly polls the kernel to check whether the
message is already in the buffer.
2. Interrupt: In the software interrupt method, a software interrupt is used to inform
the receiving process of the status of the message, i.e., that the message has
been stored in the buffer and is ready for use by the receiver. With this method,
the receiving process keeps running without having to submit failed test requests.
With blocking send() and receive() primitives, the following issues arise:
 Blocking send() primitive: The sending process may become permanently
blocked if the receiving process has crashed or the sent message is lost because
of a communication failure. To avoid this problem, blocking send() primitives
use a fixed timeout value; when it elapses, the send operation is aborted with
an error status. Users might be given the option to specify the timeout value
as a parameter of the send primitive, or it could be set as a default.
 Blocking receive() primitive: To keep the receiving process from being
blocked indefinitely, a blocking receive() primitive might likewise be associated
with a fixed timeout value. Indefinite blocking can happen if the prospective
sending process fails or if the expected message is lost on the network owing
to a communication breakdown.

Lamport’s logical clock
Lamport's Logical Clock was created by Leslie Lamport. It is a procedure to determine the
order in which events occur. It provides a basis for the more advanced Vector Clock Algorithm.
A Lamport logical clock is needed because of the absence of a global clock in a distributed
operating system.
Algorithm:
 Happened before relation(->): a -> b, means ‘a’ happened before ‘b’.
 Logical Clock: The criteria for the logical clocks are:
 [C1]: Ci (a) < Ci(b), [ Ci -> Logical Clock, If ‘a’ happened before ‘b’,
then time of ‘a’ will be less than ‘b’ in a particular process. ]
 [C2]: Ci(a) < Cj(b), [ If 'a' is the sending of a message by process Pi and
'b' is its receipt by process Pj, then the clock value Ci(a) is less than Cj(b) ]
Reference:
 Process: Pi
 Event: Eij, where i is the process in number and j: jth event in the ith process.
 tm: vector time span for message m.
 Ci vector clock associated with process Pi, the jth element is Ci[j] and
contains Pi‘s latest value for the current time in process Pj.
 d: drift time, generally d is 1.
Implementation Rules[IR]:
 [IR1]: If a -> b [‘a’ happened before ‘b’ within the same process]
then, Ci(b) =Ci(a) + d
 [IR2]: Cj = max(Cj, tm + d) [If there’s more number of processes, then tm = value
of Ci(a), Cj = max value between Cj and tm + d]
For Example:

 Take the starting value as 1, since it is the 1st event and there is no incoming value
at the starting point:
 e11 = 1
 e21 = 1
 The value of the next point will go on increasing by d (d = 1), if there is no incoming
value i.e., to follow [IR1].
 e12 = e11 + d = 1 + 1 = 2
 e13 = e12 + d = 2 + 1 = 3
 e14 = e13 + d = 3 + 1 = 4
 e15 = e14 + d = 4 + 1 = 5
 e16 = e15 + d = 5 + 1 = 6
 e22 = e21 + d = 1 + 1 = 2
 e24 = e23 + d = 3 + 1 = 4
 e26 = e25 + d = 6 + 1 = 7
 When there will be incoming value, then follow [IR2] i.e., take the maximum value
between Cj and Tm + d.
 e17 = max(7, 5) = 7, [e16 + d = 6 + 1 = 7, e24 + d = 4 + 1 = 5, maximum
among 7 and 5 is 7]
 e23 = max(3, 3) = 3, [e22 + d = 2 + 1 = 3, e12 + d = 2 + 1 = 3, maximum
among 3 and 3 is 3]
 e25 = max(5, 6) = 6, [e24 + 1 = 4 + 1 = 5, e15 + d = 5 + 1 = 6, maximum
among 5 and 6 is 6]
Limitation:
 In case of [IR1], if a -> b, then C(a) < C(b) -> true.
 In case of [IR2], if a -> b, then C(a) < C(b) -> May be true or may not be true.

Below is a C++ program to implement Lamport's Logical Clock:


// C++ program to illustrate Lamport's
// Logical Clock
#include <bits/stdc++.h>
using namespace std;

// Function to find the maximum timestamp
// between 2 events
int max1(int a, int b)
{
    // Return the greater of the two
    if (a > b)
        return a;
    else
        return b;
}

// Function to display the logical timestamps
void display(int e1, int e2, int p1[5], int p2[3])
{
    int i;
    cout << "\nThe time stamps of events in P1:\n";
    // Print the array p1[]
    for (i = 0; i < e1; i++)
        cout << p1[i] << " ";
    cout << "\nThe time stamps of events in P2:\n";
    // Print the array p2[]
    for (i = 0; i < e2; i++)
        cout << p2[i] << " ";
}

// Function to find the timestamp of events
void lamportLogicalClock(int e1, int e2, int m[5][3])
{
    int i, j, k, p1[5], p2[3];

    // Initialize p1[] and p2[]
    for (i = 0; i < e1; i++)
        p1[i] = i + 1;
    for (i = 0; i < e2; i++)
        p2[i] = i + 1;

    // Print the dependency matrix
    cout << "\t";
    for (i = 0; i < e2; i++)
        cout << "\te2" << i + 1;
    for (i = 0; i < e1; i++) {
        cout << "\n e1" << i + 1 << "\t";
        for (j = 0; j < e2; j++)
            cout << m[i][j] << "\t";
    }

    for (i = 0; i < e1; i++) {
        for (j = 0; j < e2; j++) {
            // Change the timestamps if the
            // message is sent
            if (m[i][j] == 1) {
                p2[j] = max1(p2[j], p1[i] + 1);
                for (k = j + 1; k < e2; k++)
                    p2[k] = p2[k - 1] + 1;
            }
            // Change the timestamps if the
            // message is received
            if (m[i][j] == -1) {
                p1[i] = max1(p1[i], p2[j] + 1);
                for (k = i + 1; k < e1; k++)
                    p1[k] = p1[k - 1] + 1;
            }
        }
    }

    // Function call
    display(e1, e2, p1, p2);
}

// Driver code
int main()
{
    int e1 = 5, e2 = 3, m[5][3];

    // Messages sent and received between the two processes:
    /* m[i][j] = 1, if a message is sent from ei to ej
       m[i][j] = -1, if a message is received by ei from ej
       m[i][j] = 0, otherwise */
    m[0][0] = 0; m[0][1] = 0; m[0][2] = 0;
    m[1][0] = 0; m[1][1] = 0; m[1][2] = 1;
    m[2][0] = 0; m[2][1] = 0; m[2][2] = 0;
    m[3][0] = 0; m[3][1] = 0; m[3][2] = 0;
    m[4][0] = 0; m[4][1] = -1; m[4][2] = 0;

    // Function call
    lamportLogicalClock(e1, e2, m);
    return 0;
}

Output

	e21	e22	e23
 e11	0	0	0
 e12	0	0	1
 e13	0	0	0
 e14	0	0	0
 e15	0	-1	0
The time stamps of events in P1:
1 2 3 4 5
The time stamps of events in P2:
1 2 3

Time Complexity: O(e1 * e2 * (e1 + e2))


Auxiliary Space: O(e1 + e2)

Deadlock Handling Strategies in Distributed Systems


The following are the strategies used for deadlock handling in distributed systems:

 Deadlock Prevention
 Deadlock Avoidance
 Deadlock Detection and Recovery
1. Deadlock Prevention: As the name implies, this strategy ensures that deadlock can never
happen, because the system is designed in such a way. If any one of the deadlock-causing
conditions is never allowed to hold, deadlock is prevented. The following three methods
prevent deadlocks by making one of the deadlock conditions unsatisfiable:
 Collective Requests: In this strategy, all processes declare the resources required
for their execution beforehand, and a process is allowed to execute only if all of
its required resources are available. Resources are released only when the process
finishes. Hence, the hold-and-wait condition of deadlock is prevented.
 However, the initial resource requirements of a process, declared before it starts,
are based on an assumption and not on actual need. So resources may be occupied
unnecessarily by a process, and prior allocation of resources also reduces
potential concurrency.

 Ordered Requests: In this strategy, ordering is imposed on the resources and thus,
process requests for resources in increasing order. Hence, the circular wait condition
of deadlock can be prevented.
 An ordering strictly indicates that a process never asks for a lower-ranked
resource while holding a higher-ranked one.
 There are two more ways of dealing with global timing and transactions
in distributed systems, both of which are based on the principle of
assigning a global timestamp to each transaction as soon as it begins.
 During the execution of a process, if a process seems to be blocked
because of the resource acquired by another process then the timestamp
of the processes must be checked to identify the larger timestamp
process. In this way, cycle waiting can be prevented.
 It is better to give priority to the old processes because of their long
existence and might be holding more resources.
 It also eliminates starvation issues as the younger transaction will
eventually be out of the system.
 Preemption: Resource allocation strategies that break the no-preemption condition
can be used to avoid deadlocks.
 Wait-die: If an older process requests a resource held by a younger
process, the older process waits. A younger process will be killed (dies)
if it requests a resource held by an older process.
 Wound-wait: If an old process seeks a resource held by a young process,
the young process will be preempted, wounded, and killed, and the old
process will resume and wait. If a young process needs a resource held
by an older process, it will have to wait.
2. Deadlock Avoidance: In this strategy, deadlock is avoided by examining the state of
the system at every step. The distributed system reviews the allocation of resources, and
whenever it finds an unsafe state, the system backtracks one step and returns to a safe
state. Because of this, resource allocation takes time whenever a process makes a request.
The system first analyzes whether granting the resources will leave the system in a safe
state or an unsafe state; only then is the allocation made.
 A safe state refers to the state when the system is not in deadlocked state and order
is there for the process regarding the granting of requests.

 An unsafe state refers to the state when no safe sequence exists for the system. A safe
sequence is an ordering of the processes such that all of them can run to completion
in a safe state.
3. Deadlock Detection and Recovery: In this strategy, deadlock is detected and an attempt is
made to resolve the deadlock state of the system. These approaches rely on a Wait-For-Graph
(WFG), which is generated and evaluated for cycles in some methods. The following two
requirements must be met by a deadlock detection algorithm:
 Progress: The algorithm must find all existing deadlocks in finite time; no
deadlock should remain undetected by the algorithm. To put it another way, once
all the wait-for dependencies for a deadlock have arisen, the algorithm should
not wait for any additional events to detect the deadlock.
 No False Deadlocks: Deadlocks that do not exist should not be reported by the
algorithm which is called phantom or false deadlocks.
There are different types of deadlock detection techniques:

 Centralized Deadlock Detector: The resource graph for the entire system is
managed by a central coordinator. When the coordinator detects a cycle, it
terminates one of the processes involved in the cycle to break the deadlock.
Messages must be passed when updating the coordinator’s graph. Following are the
methods:
 A message must be provided to the coordinator whenever an arc is
created or removed from the resource graph.
 Every process can transmit a list of arcs that have been added or removed
since the last update periodically.
 When information is needed, the coordinator asks for it.
 Hierarchical Deadlock Detector: In this approach, deadlock detectors are
arranged in a hierarchy. Here, only those deadlocks can be detected that fall within
their range.
 Distributed Deadlock Detector: In this approach, detectors are distributed so that
all the sites can fully participate in resolving the deadlock state. A probe-based
scheme (one of the four classes of distributed detection algorithms below) can be
used for this purpose: it uses local WFGs to detect local deadlocks and probe
messages to detect global deadlocks.

There are four classes for the Distributed Detection Algorithm:

 Path-pushing: In path-pushing algorithms, the detection of distributed deadlocks


is carried out by maintaining an explicit global WFG.
 Edge-chasing: In an edge-chasing algorithm, probe messages are used to detect the
presence of a cycle in a distributed graph structure along the edges of the graph.
 Diffusion computation: Here, the computation for deadlock detection is dispersed
throughout the system’s WFG.
 Global state detection: The detection of Distributed deadlocks can be made by
taking a snapshot of the system and then inspecting it for signs of a deadlock.
To recover from a deadlock, one of the methods can be followed:

 Termination of one or more processes that created the unsafe state.


 Using checkpoints for the periodic checking of the processes so that whenever
required, rollback of processes that makes the system unsafe can be carried out and
hence, maintained a safe state of the system.
 Breaking of existing wait-for relationships between the processes.
 Rollback of one or more blocked processes and allocating their resources to stopped
processes, allowing them to restart operation.
DEADLOCK ISSUES IN DEADLOCK DETECTION & RESOLUTION
Deadlock

 Deadlock is a fundamental problem in distributed systems.
 A process may request resources in any order, which may not be known a priori,
and a process can request a resource while holding others.
 If the sequence of resource allocations to the processes is not controlled, a
deadlock may occur.
 A deadlock is a state where a set of processes request resources that are held by
other processes in the set.
DEADLOCK DETECTION:

1. Resource Allocation Graph (RAG) Algorithm:

 Deadlock detection typically involves constructing a resource allocation graph


based on the current resource allocation and request status.
 The RAG algorithm identifies cycles in the graph, indicating the presence of a
potential deadlock.
 However, the RAG algorithm suffers from scalability issues in large systems
due to the overhead of maintaining the graph.
2. Resource-Requesting Algorithms:

 Another approach is to periodically check the state of resource requests and


allocations to identify potential deadlocks.
 This approach involves tracking the resource allocation state and examining
resource requests to detect circular waits.
 However, this method may have high overhead and can only identify deadlocks
when they occur during the detection phase.
DEADLOCK RESOLUTION:

1. Deadlock Prevention:

 Prevention involves ensuring that at least one of the necessary conditions for
deadlock (mutual exclusion, hold and wait, no preemption, circular wait) is not
satisfied.
 By carefully managing resource allocation and enforcing certain policies,
deadlocks can be avoided altogether.
 However, prevention methods can be complex, restrictive, and may limit system
performance or resource utilization.
2. Deadlock Avoidance:

 Avoidance involves dynamically analyzing resource requests and allocations to
ensure that the system avoids entering an unsafe state where a deadlock can
occur.
 Resource allocation is made based on resource requirement forecasts and
resource availability to prevent circular waits.
 Avoidance requires a safe state detection algorithm to determine if a resource
allocation will lead to a deadlock.
 However, avoidance techniques may suffer from increased overhead and may
limit system responsiveness.
3. Deadlock Detection with Recovery:

 Deadlock detection algorithms can be used to periodically check the system’s


state for potential deadlocks.
 Once a deadlock is detected, recovery mechanisms can be employed to resolve
the deadlock.
 Recovery may involve aborting one or more processes, rolling back their
progress, and reallocating resources to allow the system to continue.
 However, recovery mechanisms can be complex and may result in data loss or
system instability.

What is DFS (Distributed File System)?


A Distributed File System (DFS) as the name suggests, is a file system that is
distributed on multiple file servers or multiple locations. It allows programs to access or store
isolated files as they do with the local ones, allowing programmers to access files from any
network or computer.

The main purpose of the Distributed File System (DFS) is to allow users of physically
distributed systems to share their data and resources by using a common file system. A
collection of workstations and mainframes connected by a Local Area Network (LAN) is a
typical configuration for a Distributed File System. A DFS is executed as a part of the operating
system. In DFS, a namespace is created, and this process is transparent to the clients.

DFS has two components:

 Location Transparency –
Location transparency is achieved through the namespace component.
 Redundancy –
Redundancy is achieved through the file replication component.
In the case of failure and heavy load, these components together improve data availability by
allowing the sharing of data in different locations to be logically grouped under one folder,
which is known as the “DFS root”.

It is not necessary to use both components of DFS together; it is possible to use
the namespace component without the file replication component, and it is perfectly
possible to use the file replication component without the namespace component between
servers.

File system replication:


Early iterations of DFS made use of Microsoft's File Replication Service (FRS), which
allowed for straightforward file replication between servers. FRS recognises new or updated
files and distributes the most recent version of the whole file to all servers.

"DFS Replication" (DFSR) was introduced with Windows Server 2003 R2. It improves on FRS
by copying only the portions of files that have changed and by minimising network traffic with
data compression. Additionally, it provides users with flexible configuration options to
manage network traffic on a configurable schedule.

History:
The server component of the Distributed File System was initially introduced as an add-
on feature. It was added to Windows NT 4.0 Server and was known as "DFS 4.1". Later it
was included as a standard component of all editions of Windows 2000 Server. Client-side
support has been included in Windows NT 4.0 and later versions of Windows.

Linux kernels 2.6.14 and later come with an SMB client VFS known as
"cifs" which supports DFS. Mac OS X 10.7 (Lion) and onwards also supports DFS.

Properties:
 File transparency: users can access files without knowing where they are physically
stored on the network.
 Load balancing: the file system can distribute file access requests across multiple
computers to improve performance and reliability.
 Data replication: the file system can store copies of files on multiple computers to
ensure that the files are available even if one of the computers fails.
 Security: the file system can enforce access control policies to ensure that only
authorized users can access files.
 Scalability: the file system can support a large number of users and a large number
of files.
 Concurrent access: multiple users can access and modify the same file at the same
time.
 Fault tolerance: the file system can continue to operate even if one or more of its
components fail.
 Data integrity: the file system can ensure that the data stored in the files is accurate
and has not been corrupted.
 File migration: the file system can move files from one location to another without
interrupting access to the files.
 Data consistency: changes made to a file by one user are immediately visible to all
other users.
 Support for different file types: the file system can support a wide range of file
types, including text files, image files, and video files.
Applications :
 NFS –
NFS stands for Network File System. It is a client-server architecture that allows a
computer user to view, store, and update files remotely. The protocol of NFS is one
of the several distributed file system standards for Network-Attached Storage
(NAS).

 CIFS –
CIFS stands for Common Internet File System. CIFS is a dialect of SMB; that is,
CIFS is an implementation of the SMB protocol, designed by Microsoft.
 SMB –
SMB stands for Server Message Block. It is a file-sharing protocol invented by
IBM. The SMB protocol was created to allow computers to perform read and
write operations on files on a remote host over a Local Area Network (LAN).
The directories on the remote host that can be accessed via SMB are called
"shares".
 Hadoop –
Hadoop is a group of open-source software services. It provides a software
framework for distributed storage and processing of big data using the MapReduce
programming model. The core of Hadoop consists of a storage part, known as the
Hadoop Distributed File System (HDFS), and a processing part, the MapReduce
programming model.
 NetWare –
NetWare is a discontinued computer network operating system developed by Novell,
Inc. It primarily used cooperative multitasking to run different services on a personal
computer, using the IPX network protocol.
Working of DFS :
There are two ways in which DFS can be implemented:

 Standalone DFS namespace –


It allows only DFS roots that exist on the local computer and do not use
Active Directory. A standalone DFS can only be accessed on the computer on
which it is created. It does not provide any fault tolerance and cannot be linked to
any other DFS. Standalone DFS roots are rarely encountered because of their
limited advantages.
 Domain-based DFS namespace –
It stores the configuration of DFS in Active Directory, creating the DFS namespace
root accessible at \\<domainname>\<dfsroot> or \\<FQDN>\<dfsroot>

Advantages :
 DFS allows multiple users to access or store data.
 It allows data to be shared remotely.
 It improves file availability, access time, and network efficiency.
 It improves the capacity to change the size of the data and the ability
to exchange data.
 A Distributed File System provides transparency of data even if a server or disk fails.
Disadvantages :
 In a Distributed File System, nodes and connections need to be secured, so we
can say that security is at stake.
 Messages and data may be lost in the network while moving from one node
to another.
 Database connection in the case of a Distributed File System is complicated.
 Handling the database is also harder in a Distributed File System than in
a single-user system.
 There is a chance of overloading if all nodes try to send data at once.

Design Issues of Distributed System


Distributed System is a collection of autonomous computer systems that are physically
separated but connected by a computer network equipped with distributed system software.
Distributed systems are used in numerous applications, such as online gaming, web
applications, and cloud computing. However, creating a distributed system is not simple, and
there are a number of design considerations to take into account. The following are some of
the major design issues of distributed systems:
Design issues of the distributed system –
1. Heterogeneity: Heterogeneity is applied to the network, computer hardware,
operating system, and implementation of different developers. A key component of
the heterogeneous distributed system client-server environment is middleware.

Middleware is a set of services that enables applications and end-users to interact
with each other across a heterogeneous distributed system.
2. Openness: The openness of the distributed system is determined primarily by the
degree to which new resource-sharing services can be made available to the users.
Open systems are characterized by the fact that their key interfaces are published.
It is based on a uniform communication mechanism and published interface for
access to shared resources. It can be constructed from heterogeneous hardware and
software.
3. Scalability: The scalability of the system should remain efficient even with a
significant increase in the number of users and resources connected. It shouldn’t
matter if a program has 10 or 100 nodes; performance shouldn’t vary. A distributed
system’s scaling requires consideration of a number of elements, including size,
geography, and management.
4. Security: The security of an information system has three components:
confidentiality, integrity, and availability. Encryption protects shared resources and
keeps sensitive information secret when transmitted.
5. Failure Handling: When faults occur in hardware or in a software program, they
may produce incorrect results or may stop before completing the intended
computation, so corrective measures should be implemented to handle such cases.
Failure handling is difficult in distributed systems because failure is partial, i.e.,
some components fail while others continue to function.
6. Concurrency: There is a possibility that several clients will attempt to access a
shared resource at the same time. Multiple users make requests on the same
resources, i.e. read, write, and update. Each resource must be safe in a concurrent
environment. Any object that represents a shared resource in a distributed system
must ensure that it operates correctly in a concurrent environment.
7. Transparency: Transparency ensures that the distributed system is perceived as a
single entity by the users or application programmers, rather than as a collection of
cooperating autonomous systems. The user should be unaware of where the services
are located, and the transfer from a local machine to a remote one should be
transparent.
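The concurrency issue described above can be illustrated with a short sketch (a hypothetical Python example, not part of the original notes): several threads update one shared resource, and a lock ensures that each read-modify-write operates correctly in a concurrent environment.

```python
import threading

class SharedResource:
    """A shared object that stays correct under concurrent access."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()  # guards the read-modify-write below

    def increment(self):
        with self._lock:               # only one client updates at a time
            self._value += 1

    @property
    def value(self):
        with self._lock:
            return self._value

def worker(resource, n):
    for _ in range(n):
        resource.increment()

resource = SharedResource()
threads = [threading.Thread(target=worker, args=(resource, 10_000))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(resource.value)  # 40000 — no updates are lost
```

Without the lock, the increment would be a non-atomic read-modify-write and concurrent updates could be lost; the same reasoning applies to any shared resource in a distributed system.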


The advent of distributed computing was marked by the introduction of distributed file
systems. Such systems involved multiple client machines and one or a few servers. The server
stores data on its disks and the clients may request data through some protocol
messages. Advantages of a distributed file system:
 Allows easy sharing of data among clients.
 Provides centralized administration.
 Provides security, i.e. one must only secure the servers to secure data.

Distributed File System Architecture:

File Handle: A file handle is an opaque identifier the client uses to refer to a file. It consists
of three components:
 Volume Identifier – An NFS server may have multiple file systems or partitions. The
volume identifier tells the server which file system is being referred to.
 Inode Number – This number identifies the file within the partition.
 Generation Number – This number is used while reusing an inode number.
File Attributes: “File attributes” is a term commonly used in NFS terminology. This is a
collective term for the tracked metadata of a file, including file creation time, last modified,
size, ownership permissions etc. This can be accessed by calling stat() on the file. NFSv2
Protocol: Some of the common protocol messages are listed below.

Message – Description

NFSPROC_GETATTR – Given a file handle, returns file attributes.
NFSPROC_SETATTR – Sets/updates file attributes.
NFSPROC_LOOKUP – Given a file handle and the name of the file to look up, returns a file handle.
NFSPROC_READ – Given a file handle, offset and count, reads the data and returns attributes.
NFSPROC_WRITE – Given a file handle, offset, count and data, writes the data into the file.
NFSPROC_CREATE – Given the directory handle, name of the file and attributes, creates a file.
NFSPROC_REMOVE – Given the directory handle and name of the file, deletes the file.
NFSPROC_MKDIR – Given the directory handle, name of the directory and attributes, creates a new directory.
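The three file-handle components used by these protocol messages (volume identifier, inode number, generation number) can be modelled in a small sketch. The names, the toy server-side table, and the getattr_rpc function below are illustrative assumptions, not the real NFS wire format:

```python
from collections import namedtuple

# A file handle is opaque to the client; to the server it decodes into three fields.
FileHandle = namedtuple("FileHandle", ["volume_id", "inode", "generation"])

# Toy server-side table: (volume_id, inode) -> (current generation, attributes)
server_files = {
    (1, 42): (7, {"size": 1024, "owner": "root"}),
}

def getattr_rpc(handle):
    """Sketch of NFSPROC_GETATTR: validate the handle, return attributes."""
    entry = server_files.get((handle.volume_id, handle.inode))
    if entry is None:
        raise FileNotFoundError("unknown file handle")
    generation, attrs = entry
    if generation != handle.generation:  # inode reused since handle was issued
        raise FileNotFoundError("stale file handle")
    return attrs

h = FileHandle(volume_id=1, inode=42, generation=7)
print(getattr_rpc(h)["size"])  # 1024
```

The generation check shows why the third field exists: if an inode number is reused for a new file, an old handle carries the old generation number and is rejected as stale instead of silently reading the wrong file.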

The LOOKUP protocol message is used to obtain the file handle for further accessing
data. The NFS mount protocol helps obtain the directory handle for the root (/) directory in the
file system. If a client application opens a file /abc.txt, the client-side file system will send a
LOOKUP request to the server through the root (/) file handle, looking for a file named abc.txt.
If the lookup is successful, the file handle and file attributes are returned.
Client-Side Caching: To improve performance of NFS, distributed file systems cache
the data as well as the metadata read from the server onto the clients. This is known as client-
side caching. This reduces the time taken for subsequent client accesses. The cache is also used
as a temporary buffer for writing. This helps improve efficiency even more since all writes are
written onto the server at once.
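Client-side caching as described can be sketched as a dictionary consulted before issuing a read to the server. The class, the server_read callback, and the counter below are illustrative assumptions rather than actual NFS client code:

```python
class NfsClient:
    """Minimal sketch of client-side read caching for an NFS-like protocol."""
    def __init__(self, server_read):
        self._server_read = server_read  # callable(handle) -> bytes
        self._cache = {}                 # handle -> cached file data
        self.server_calls = 0

    def read(self, handle):
        if handle in self._cache:        # cache hit: no network round trip
            return self._cache[handle]
        self.server_calls += 1           # cache miss: fetch and remember
        data = self._server_read(handle)
        self._cache[handle] = data
        return data

client = NfsClient(server_read=lambda h: b"contents of " + h.encode())
client.read("abc.txt")
client.read("abc.txt")                   # second read is served from the cache
print(client.server_calls)               # 1
```

Only the first read reaches the server; subsequent reads of the same file are served locally, which is exactly the access-time benefit the text describes. A real client must also invalidate or revalidate cached entries when the file changes on the server, which this sketch omits.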

Introduction:

Real-time operating systems (RTOS) are used in environments where a large number
of events, mostly external to the computer system, must be accepted and processed in a short
time or within certain deadlines. Such applications include industrial control, telephone
switching equipment, flight control, and real-time simulations. With an RTOS, processing
time is measured in tenths of seconds or shorter. The system is time-bound and has fixed
deadlines; the processing must occur within the specified constraints, otherwise the system
fails.
Examples of real-time operating systems are airline traffic control systems, command
control systems, airline reservation systems, heart pacemakers, network multimedia
systems, robots, etc.
Real-time operating systems are of the following types –


1. Hard Real-Time Operating System: These operating systems guarantee that
critical tasks are completed within a range of time.
For example, a robot is hired to weld a car body. If the robot welds too early or too
late, the car cannot be sold, so it is a hard real-time system that requires the robot
to complete the welding exactly on time. Other examples of hard real-time systems
include scientific experiments, medical imaging systems, industrial control systems,
weapon systems, robots, air traffic control systems, etc.

2. Soft real-time operating system:


This operating system provides some relaxation in the time limit.
For example – multimedia systems, digital audio systems, etc. Explicit,
programmer-defined and controlled processes are encountered in real-time
systems. A separate process is charged with handling a single external event. The
process is activated upon the occurrence of the related event, signalled by an
interrupt.
Multitasking operation is accomplished by scheduling processes for execution
independently of each other. Each process is assigned a certain level of priority that
corresponds to the relative importance of the event that it services. The processor is
allocated to the highest-priority process. This type of schedule, called priority-based
preemptive scheduling, is used by real-time systems.

3. Firm Real-Time Operating System: An RTOS of this type also has to follow
deadlines. Missing a deadline has a small impact but can still have unintended
consequences, such as a reduction in the quality of the product. Example:
multimedia applications.
4. Deterministic Real-Time Operating System: Consistency is the main key in this
type of real-time operating system. It ensures that all tasks and processes execute
with predictable timing all the time, which makes it more suitable for applications
in which timing accuracy is very important. Examples: INTEGRITY, PikeOS.
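The practical difference between hard and soft deadlines above can be sketched as follows; the function name, task kinds, and millisecond values are made-up illustrations, not from the source:

```python
def classify_completion(kind, deadline, finish_time):
    """Return the consequence of a task finishing at finish_time.
    kind is 'hard' or 'soft'; times are in milliseconds."""
    if finish_time <= deadline:
        return "ok"
    # Deadline missed: the consequence depends on the kind of constraint.
    if kind == "hard":
        return "system failure"    # e.g. the robot welds the car body too late
    return "degraded quality"      # e.g. a dropped frame in a multimedia system

print(classify_completion("hard", deadline=10, finish_time=12))  # system failure
print(classify_completion("soft", deadline=10, finish_time=12))  # degraded quality
print(classify_completion("hard", deadline=10, finish_time=9))   # ok
```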

Advantages:
The advantages of real-time operating systems are as follows –
1. Maximum consumption: Maximum utilization of devices and systems, thus more
output from all the resources.
2. Task shifting: The time assigned for shifting tasks in these systems is very small.
For example, in older systems shifting one task to another takes about
10 microseconds.
3. Focus on application: The focus is on running applications, with less importance
given to applications that are in the queue.
4. Real-time operating systems in embedded systems: Since the size of programs
is small, an RTOS can also be used in embedded systems, such as in transport and
others.
5. Error-free: These types of systems are error-free.
6. Memory allocation: Memory allocation is best managed in these types of systems.

Disadvantages:
The disadvantages of real-time operating systems are as follows –

1. Limited tasks: Very few tasks run simultaneously, and concentration is kept on a
few applications to avoid errors.
2. Heavy use of system resources: Sometimes the system resources are not so good,
and they are expensive as well.
3. Complex algorithms: The algorithms are very complex and difficult for the
designer to write.
4. Device drivers and interrupt signals: It needs specific device drivers and
interrupt signals to respond to interrupts as early as possible.
5. Thread priority: It is not good to set thread priority, as these systems are very
little prone to switching tasks.
6. Minimum switching: An RTOS performs minimal task switching.


Comparison of Regular and Real-Time operating systems:

Regular OS                        Real-Time OS (RTOS)

Complex                           Simple
Best effort                       Guaranteed response
Fairness                          Strict timing constraints
Average bandwidth                 Minimum and maximum limits
Unknown components                Components are known
Unpredictable behavior            Predictable behavior
Plug and play                     RTOS is upgradeable

Basic Model of a Real-time System


Real-time System is a system that is used for performing some specific tasks. These
tasks are related to time constraints and need to be completed within that time interval.
Basic Model of a Real-time System: The basic model of a real-time system presents an
overview of all the components involved in a real-time system. A real-time system includes
various hardware and software embedded in such a way that the specific tasks can be
performed within the allowed time constraints. The accuracy and correctness involved in a
real-time system make the model complex. There are various models of real-time systems
which are more complex and hard to understand. Here we will discuss a basic model of a
real-time system which has some commonly used terms and hardware. The following diagram
represents a basic model of a real-time system:

Sensor: A sensor is used for the conversion of physical events or characteristics into
electrical signals. Sensors are hardware devices that take input from the environment and
give it to the system after converting it. For example, a thermometer takes the temperature as
a physical characteristic and then converts it into electrical signals for the system.
Actuator: An actuator is the reverse of a sensor. Where a sensor converts physical events
into electrical signals, an actuator does the reverse: it converts electrical signals into physical
events or characteristics. It takes its input from the output interface of the system. The output
from the actuator may be in any form of physical action. Some commonly used actuators are
motors and heaters.
Signal Conditioning Unit: When the sensor converts physical actions into electrical
signals, the computer cannot use them directly. Hence, after the conversion of physical actions
into electrical signals, conditioning is needed. Similarly, while giving the output, when
electrical signals are sent to the actuator, conditioning is also required. Therefore, signal
conditioning is of two types:

 Input Conditioning Unit: It is used for conditioning the electrical signals coming
from sensor.
 Output Conditioning Unit: It is used for conditioning the electrical signals coming
from the system.

Interface Unit: Interface units are basically used for conversion between digital and analog
signals. Signals coming from the input conditioning unit are analog, while the system operates
on digital signals only, so the interface unit is used to change the analog signals to digital
signals. Similarly, while transmitting the signals to the output conditioning unit, the signals
are converted from digital to analog. On this basis, the interface unit is of two types:
 Input Interface: It is used for conversion of analog signals to digital.
 Output Interface: It is used for conversion of digital signals to analog.
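The data flow of the basic model (sensor → input conditioning → input interface → computer → actuator) can be sketched as a toy thermostat loop. Every function name, scaling factor, and threshold below is an illustrative assumption, not part of the original notes:

```python
def sensor(temperature_c):
    """Sensor: physical quantity -> weak analog voltage (toy model)."""
    return temperature_c * 0.01          # 10 mV per degree Celsius

def input_conditioning(voltage):
    """Input conditioning unit: amplify the weak sensor signal."""
    return voltage * 100.0

def adc(analog):
    """Input interface: analog -> digital (quantize to an integer)."""
    return int(round(analog))

def controller(digital_temp, setpoint=25):
    """The computer: decide whether the heater should be on."""
    return digital_temp < setpoint

def actuator(heater_on):
    """Actuator: digital command -> physical action."""
    return "heater ON" if heater_on else "heater OFF"

def control_cycle(temperature_c):
    """One pass through the basic model's pipeline."""
    return actuator(controller(adc(input_conditioning(sensor(temperature_c)))))

print(control_cycle(20))   # heater ON
print(control_cycle(30))   # heater OFF
```

In a real system this cycle would run periodically under a deadline; here it only illustrates how each unit of the basic model transforms the signal on its way from sensor to actuator.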
Characteristics of Real-time Systems
A real-time system is a system that is put through real time, which means a response is
obtained within a specified timing constraint, or the system meets the specified deadline.
Real-time systems are of two types – hard and soft. Both are used in different cases. Hard
real-time systems are used where even a delay of a few nano- or microseconds is not allowed.
Soft real-time systems provide some relaxation in the time constraint.
Characteristics of Real-time System:
Following are the some of the characteristics of Real-time System:

1. Time Constraints: Time constraints related to real-time systems simply mean the
time interval allotted for the response of the ongoing program. This deadline means
that the task should be completed within this time interval. A real-time system is
responsible for the completion of all tasks within their time intervals.
2. Correctness: Correctness is one of the prominent parts of real-time systems. A
real-time system produces a correct result within the given time interval; if the
result is not obtained within that interval, it is not considered correct. In real-time
systems, correctness of a result means obtaining the correct result within the time
constraint.
3. Embedded: All the real-time systems are embedded now-a-days. Embedded
system means that combination of hardware and software designed for a specific
purpose. Real-time systems collect the data from the environment and passes to
other components of the system for processing.
4. Safety: Safety is necessary for any system but real-time systems provide critical
safety. Real-time systems also can perform for a long time without failures. It also
recovers very soon when failure occurs in the system and it does not cause any harm
to the data and information.

5. Concurrency: Real-time systems are concurrent that means it can respond to a
several number of processes at a time. There are several different tasks going on
within the system and it responds accordingly to every task in short intervals. This
makes the real-time systems concurrent systems.
6. Distributed: In various real-time systems, all the components of the systems are
connected in a distributed way. The real-time systems are connected in such a way
that different components are at different geographical locations. Thus all the
operations of real-time systems are operated in distributed ways.
7. Stability: Even when the load is very heavy, real-time systems respond in the time
constraint i.e. real-time systems does not delay the result of tasks even when there
are several task going on a same time. This brings the stability in real-time systems.
8. Fault tolerance: Real-time systems must be designed to tolerate and recover from
faults or errors. The system should be able to detect errors and recover from them
without affecting the system’s performance or output.
9. Determinism: Real-time systems must exhibit deterministic behavior, which
means that the system’s behavior must be predictable and repeatable for a given
input. The system must always produce the same output for a given input, regardless
of the load or other factors.
10. Real-time communication: Real-time systems often require real-time
communication between different components or devices. The system must ensure
that communication is reliable, fast, and secure.
11. Resource management: Real-time systems must manage their resources
efficiently, including processing power, memory, and input/output devices. The
system must ensure that resources are used optimally to meet the time constraints
and produce correct results.
12. Heterogeneous environment: Real-time systems may operate in a heterogeneous
environment, where different components or devices have different characteristics
or capabilities. The system must be designed to handle these differences and ensure
that all components work together seamlessly.
13. Scalability: Real-time systems must be scalable, which means that the system must
be able to handle varying workloads and increase or decrease its resources as
needed.

14. Security: Real-time systems may handle sensitive data or operate in critical
environments, which makes security a crucial aspect. The system must ensure that
data is protected and access is restricted to authorized users only.
Scheduling in Real Time Systems
Real-time systems are systems that carry real-time tasks. These tasks need to be
performed immediately with a certain degree of urgency. In particular, these tasks are related
to control of certain events (or) reacting to them. Real-time tasks can be classified as hard real-
time tasks and soft real-time tasks.
A hard real-time task must be performed at a specified time; missing it could lead
to huge losses. In soft real-time tasks, a specified deadline can be missed, because the
task can be rescheduled or completed after the specified time.

In real-time systems, the scheduler is considered as the most important component


which is typically a short-term task scheduler. The main focus of this scheduler is to reduce the
response time associated with each of the associated processes instead of handling the
deadline.

If a preemptive scheduler is used, the real-time task needs to wait until the current
task's time slice completes. In the case of a non-preemptive scheduler, even if the highest
priority is allocated to the task, it needs to wait until the completion of the current task. This
task can be slow or of lower priority and can lead to a longer wait.

A better approach is designed by combining both preemptive and non-preemptive


scheduling. This can be done by introducing time-based interrupts in priority based systems
which means the currently running process is interrupted on a time-based interval and if a
higher priority process is present in a ready queue, it is executed by preempting the current
process.

Based on schedulability, implementation (static or dynamic), and the result (self or


dependent) of analysis, the scheduling algorithm are classified as follows.

1. Static table-driven approaches:
These algorithms usually perform a static analysis of the schedule and
capture the schedules that are advantageous. This helps in providing a schedule that
can point out a task with which the execution must be started at run time.

2. Static priority-driven preemptive approaches:
Similar to the first approach, these algorithms also use a static analysis of
scheduling. The difference is that instead of selecting a particular schedule, they
provide a useful way of assigning priorities among various tasks in preemptive
scheduling.

3. Dynamic planning-based approaches:
Here, the feasible schedules are identified dynamically (at run time). A task is
executed if and only if it satisfies the time constraints.

4. Dynamic best-effort approaches:
These types of approaches consider deadlines instead of feasible schedules.
Therefore a task is aborted if its deadline is reached. This approach is widely used
in most real-time systems.
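Priority-based preemptive scheduling, as described above, can be simulated in a few lines. The task names, arrival times, burst lengths, and priority values below are made-up illustrations (lower priority value = more urgent):

```python
import heapq

def schedule(tasks):
    """Simulate priority-based preemptive scheduling on one CPU.
    tasks: list of (arrival, priority, burst, name) tuples.
    Returns the names of tasks in the order they finish."""
    tasks = sorted(tasks)                      # order by arrival time
    ready, finished, time, i = [], [], 0, 0
    while i < len(tasks) or ready:
        # Admit every task that has arrived by the current time.
        while i < len(tasks) and tasks[i][0] <= time:
            arrival, prio, burst, name = tasks[i]
            heapq.heappush(ready, (prio, arrival, name, burst))
            i += 1
        if not ready:                          # CPU idle until next arrival
            time = tasks[i][0]
            continue
        prio, arrival, name, burst = heapq.heappop(ready)
        # Run for one time unit, then re-check arrivals: this is the
        # time-based interrupt that allows a higher-priority task to preempt.
        time += 1
        if burst - 1 > 0:
            heapq.heappush(ready, (prio, arrival, name, burst - 1))
        else:
            finished.append(name)
    return finished

# A low-priority task is preempted when a high-priority task arrives at t=1.
print(schedule([(0, 2, 4, "logger"), (1, 1, 2, "sensor_isr")]))
# ['sensor_isr', 'logger']
```

Although "logger" starts first, "sensor_isr" finishes first because the scheduler re-evaluates priorities at every time unit, which is exactly the combined preemptive/time-interrupt scheme described in the text.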

Advantages of Scheduling in Real-Time Systems:


 Meeting Timing Constraints: Scheduling ensures that real-time tasks are executed
within their specified timing constraints. It guarantees that critical tasks are
completed on time, preventing potential system failures or losses.
 Resource Optimization: Scheduling algorithms allocate system resources
effectively, ensuring efficient utilization of processor time, memory, and other
resources. This helps maximize system throughput and performance.
 Priority-Based Execution: Scheduling allows for priority-based execution, where
higher-priority tasks are given precedence over lower-priority tasks. This ensures
that time-critical tasks are promptly executed, leading to improved system
responsiveness and reliability.
 Predictability and Determinism: Real-time scheduling provides predictability
and determinism in task execution. It enables developers to analyze and guarantee
the worst-case execution time and response time of tasks, ensuring that critical
deadlines are met.
 Control Over Task Execution: Scheduling algorithms allow developers to have
fine-grained control over how tasks are executed, such as specifying task priorities,

deadlines, and inter-task dependencies. This control facilitates the design and
implementation of complex real-time systems.
Disadvantages of Scheduling in Real-Time Systems:
 Increased Complexity: Real-time scheduling introduces additional complexity to
system design and implementation. Developers need to carefully analyze task
requirements, define priorities, and select suitable scheduling algorithms. This
complexity can lead to increased development time and effort.
 Overhead: Scheduling introduces some overhead in terms of context switching,
task prioritization, and scheduling decisions. This overhead can impact system
performance, especially in cases where frequent context switches or complex
scheduling algorithms are employed.
 Limited Resources: Real-time systems often operate under resource-constrained
environments. Scheduling tasks within these limitations can be challenging, as the
available resources may not be sufficient to meet all timing constraints or execute
all tasks simultaneously.
 Verification and Validation: Validating the correctness of real-time schedules and
ensuring that all tasks meet their deadlines require rigorous testing and verification
techniques. Verifying timing constraints and guaranteeing the absence of timing
errors can be a complex and time-consuming process.
 Scalability: Scheduling algorithms that work well for smaller systems may not
scale effectively to larger, more complex real-time systems. As the number of tasks
and system complexity increases, scheduling decisions become more challenging
and may require more advanced algorithms or approaches.

Safety and Reliability:

Real-time systems need to be reliable!

Here we will talk about some of the techniques used to make fault-tolerant systems (this
is a prerequisite to making a safe system).
Rather than focus on the large amount of theory in this area, we will emphasize a few examples:
Therac-25
CANDU reactors
Space Shuttle
Modern passenger jets

5-marks

1. What should you learn?

2. What is the difference between reliability, security and safety?
3. What are the steps from the time an error occurs to when a system fails?
4. What are some of the causes of errors? What are some of the approaches to fault tolerance?
5. What are the differences between hardware and software schemes?
6. Explain safety and reliability.
7. Explain the basic model of a real-time operating system.
8. Explain the characteristics of a real-time operating system.
9. Explain real-time task scheduling.
10. What are the applications of real-time operating systems?

One marks

This set of Operating System Multiple Choice Questions & Answers (MCQs) focuses on
“Real Time Operating System (RTOS)”.

1. In real time operating system ____________


a) all processes have the same priority
b) a task must be serviced by its deadline period
c) process scheduling can be done only once
d) kernel is not required

Answer: b

2. A hard real time operating system has ____________ jitter than a soft real time operating system.
a) less
b) more
c) equal
d) none of the mentioned

Answer: a
Explanation: Jitter is the undesired deviation from the true periodicity.

3. For real time operating systems, interrupt latency should be ____________
a) minimal
b) maximum
c) zero
d) dependent on the scheduling

Answer: a
4. In rate monotonic scheduling ____________
a) shorter duration job has higher priority
b) longer duration job has higher priority
c) priority does not depend on the duration of the job
d) none of the mentioned

Answer: a

5. In which scheduling certain amount of CPU time is allocated to each process?


a) earliest deadline first scheduling
b) proportional share scheduling
c) equal share scheduling
d) none of the mentioned

Answer: b
6. The problem of priority inversion can be solved by ____________
a) priority inheritance protocol
b) priority inversion protocol
c) both priority inheritance and inversion protocol
d) none of the mentioned

Answer: a

7. Time duration required for scheduling dispatcher to stop one process and start another is
known as ____________
a) process latency
b) dispatch latency
c) execution latency
d) interrupt latency

Answer: b
8. Time required to synchronous switch from the context of one thread to the context of
another thread is called?
a) threads fly-back time
b) jitter
c) context switch time
d) none of the mentioned

Answer: c

9. Which one of the following is a real time operating system?
a) RTLinux
b) VxWorks
c) Windows CE
d) All of the mentioned

Answer: d
10. VxWorks is centered around ____________
a) wind microkernel
b) linux kernel
c) unix kernel
d) none of the mentioned

Answer: a
11. What is the disadvantage of real addressing mode?
a) there is a lot of cost involved
b) time consumption overhead
c) absence of memory protection between processes
d) restricted access to memory locations by processes

Answer: c
12. Preemptive, priority based scheduling guarantees ____________
a) hard real time functionality
b) soft real time functionality
c) protection of memory
d) none of the mentioned

Answer: b
13. Real time systems must have ____________
a) preemptive kernels
b) non preemptive kernels
c) preemptive kernels or non preemptive kernels
d) neither preemptive nor non preemptive kernels

Answer: a
14. What is Event latency?
a) the amount of time an event takes to occur from when the system started
b) the amount of time from the event occurrence till the system stops
c) the amount of time from event occurrence till the event crashes
d) the amount of time that elapses from when an event occurs to when it is serviced.

Answer: d
15. Interrupt latency refers to the period of time ____________
a) from the occurrence of an event to the arrival of an interrupt
b) from the occurrence of an event to the servicing of an interrupt

c) from arrival of an interrupt to the start of the interrupt service routine
d) none of the mentioned

Answer: c
16. Real time systems need to ____________ the interrupt latency.
a) minimize
b) maximize
c) not bother about
d) none of the mentioned

Answer: a

17. The amount of time required for the scheduling dispatcher to stop one process and start
another is known as ______________
a) event latency
b) interrupt latency
c) dispatch latency
d) context switch

Answer: c
18. The most effective technique to keep dispatch latency low is to ____________
a) provide non preemptive kernels
b) provide preemptive kernels
c) make it user programmed
d) run less number of processes at a time

Answer: b

19. Priority inversion is solved by use of _____________


a) priority inheritance protocol
b) two phase lock protocol
c) time protocol
d) all of the mentioned

Answer: a
20. In a real time system the computer results ____________
a) must be produced within a specific deadline period
b) may be produced at any time
c) may be correct
d) all of the mentioned

Answer: a

UNIT 4
Handheld Operating System
An operating system is a program whose job is to manage a computer’s hardware. It
also provides a basis for application programs and acts as an intermediary between the
computer user and the computer hardware. An interesting feature of operating systems is how
much they vary in accomplishing these tasks. Operating systems for mobile computers provide
an environment in which we can easily interface with the computer and execute programs.
Thus, some operating systems are designed to be convenient, others to be efficient, and the
rest to be some combination of the two.

Handheld Operating System:


Handheld operating systems are available in all handheld devices like smartphones and
tablets. A handheld device is sometimes also known as a Personal Digital Assistant. The
popular handheld operating systems in today’s world are Android and iOS. These operating
systems need a high-performance processor and are also embedded with various types of
sensors.

Some points related to Handheld operating systems are as follows:

1. Since the development of handheld computers in the 1990s, the demand for
software to operate and run on these devices has increased.
2. Three major competitors have emerged in the handheld PC world with three
different operating systems for these handheld PCs.
3. Out of the three companies, the first was the Palm Corporation with their PalmOS.
4. Microsoft also released what was originally called Windows CE. Microsoft’s
recently released operating system for the handheld PC comes under the name of
Pocket PC.
5. More recently, some companies producing handheld PCs have also started offering
a handheld version of the Linux operating system on their machines.
Features of Handheld Operating System:
1. Its work is to provide real-time operations.
2. There is direct usage of interrupts.
3. Input/Output device flexibility.
4. Configurability.
Types of Handheld Operating Systems:

Types of Handheld Operating Systems are as follows:

1. Palm OS
2. Symbian OS
3. Linux OS
4. Windows
5. Android

Palm OS:

 Since the Palm Pilot was introduced in 1996, the Palm OS platform has provided
various mobile devices with essential business tools, as well as the capability that
they can access the internet via a wireless connection.
 These devices have mainly concentrated on providing basic personal-information-
management applications. The latest Palm products have progressed a lot, packing
in more storage, wireless internet, etc.

Symbian OS:

 It was the most widely used smartphone operating system, running on the ARM architecture, before it was discontinued in 2014. It was developed by Symbian Ltd.
 This operating system consists of two subsystems where the first one is the
microkernel-based operating system which has its associated libraries and the
second one is the interface of the operating system with which a user can interact.
 Since this operating system consumes very little power, it was developed for smartphones and handheld devices.
 It has good connectivity as well as stability.
 It can run applications that are written in Python, Ruby, .NET, etc.

Linux OS:

 Linux OS is an open-source, cross-platform operating system project that was developed based on UNIX.
 It was developed by Linus Torvalds. It is system software that allows applications and users to perform tasks on the PC.

 Linux is free and can be easily downloaded from the internet and it is considered
that it has the best community support.
 Linux is portable which means it can be installed on different types of devices like
mobile, computers, and tablets.
 It is a multi-user operating system.
 The Linux command interpreter, called BASH, is used to execute commands.
 It provides user security using authentication features.

Windows OS:

 Windows is an operating system developed by Microsoft. Its interface, called a Graphical User Interface (GUI), eliminates the need to memorize command-line commands by using a mouse to navigate through menus, dialog boxes, and buttons.
 It is named Windows because its programs are displayed in rectangular on-screen windows. It has been designed for beginners as well as professionals.
 It comes preloaded with many tools which help the users to complete all types of
tasks on their computer, mobiles, etc.
 It has a large user base so there is a much larger selection of available software
programs.
 One great feature of Windows is that it is backward compatible which means that
its old programs can run on newer versions as well.

Android OS:

 It is a Google-developed, Linux-based operating system that is mainly designed for touchscreen devices such as phones, tablets, etc.
 Three processor architectures, ARM, Intel, and MIPS, are used by the hardware to support Android. Touch input lets users manipulate devices intuitively, with finger movements that mirror common motions such as swiping, tapping, etc.
 Android operating system can be used by anyone because it is an open-source
operating system and it is also free.
 It offers 2D and 3D graphics, GSM connectivity, etc.

 There is a huge list of applications for users since Play Store offers over one million
apps.
 Professionals who want to develop applications for Android can download the Android Development Kit and use it to easily develop apps for Android.
Advantages of Handheld Operating System:
Some advantages of a Handheld Operating System are as follows:

1. Less Cost.
2. Less weight and size.
3. Less heat generation.
4. More reliability.
Disadvantages of Handheld Operating System:
Some disadvantages of Handheld Operating Systems are as follows:

1. Less Speed.
2. Small Size.
3. Limited input/output and memory.

How are handheld operating systems different from desktop operating systems?
 Since handheld operating systems are mainly designed to run on machines with slower processors and less memory, they are designed to use less memory and require fewer resources.
 They are also designed to work with different types of hardware compared to standard desktop operating systems. This is because the power requirements of standard desktop CPUs far exceed the power available in handheld devices.
 Handheld devices cannot dissipate the large amounts of heat generated by desktop CPUs. To deal with this problem, companies like Intel and Motorola have designed smaller CPUs with lower power requirements and lower heat generation. Many handheld devices depend entirely on flash memory cards for their internal storage because large hard drives do not fit into handheld devices.

Palm OS

Palm OS (Garnet OS) at a glance:

(Pictured: Palm m505, running Palm OS 4.0.)

Developer: Palm, Inc.; ACCESS (Garnet OS)
Written in: C++
OS family: Palm OS
Working state: Discontinued since 2009[1]
Source model: Closed-source
Initial release: 1996
Latest release: Garnet OS 5.4.9 / October 14, 2007
Available in: English, French, Japanese and more
Platforms: ARM, Motorola 68k
License: Proprietary EULA
Official website: Garnet OS
Support status: Unsupported

Palm OS (also known as Garnet OS) was a mobile operating system initially developed
by Palm, Inc., for personal digital assistants (PDAs) in 1996. Palm OS was designed for ease
of use with a touchscreen-based graphical user interface. It is provided with a suite of basic
applications for personal information management. Later versions of the OS have been
extended to support smartphones. The software appeared on the company's line of Palm
devices while several other licensees have manufactured devices powered by Palm OS.

Following Palm's purchase of the Palm trademark, the currently licensed version
from ACCESS was renamed Garnet OS. In 2007, ACCESS introduced the successor to Garnet
OS, called Access Linux Platform; additionally, in 2009, the main licensee of Palm OS, Palm,
Inc., switched from Palm OS to webOS for their forthcoming devices.

Creator and ownership

Palm OS was originally developed under the direction of Jeff Hawkins at Palm
Computing, Inc.[2] Palm was later acquired by U.S. Robotics Corp.,[3] which in turn was later
bought by 3Com,[4] which made the Palm subsidiary an independent publicly traded company
on March 2, 2000.[5]

In January 2002, Palm set up a wholly owned subsidiary to develop and license Palm
OS,[6] which was named PalmSource. PalmSource was then spun off from Palm as an
independent company on October 28, 2003. [7] Palm (then called palmOne) became a
regular licensee of Palm OS, no longer in control of the operating system.

In September 2005, PalmSource announced that it was being acquired by ACCESS.[8]

In December 2006, Palm gained perpetual rights to the Palm OS source code from
ACCESS.[9] With this Palm can modify the licensed operating system as needed without paying
further royalties to ACCESS. Together with the May 2005 acquisition of full rights to
the Palm brand name,[10] only Palm can publish releases of the operating system under the
name 'Palm OS'.

As a consequence, on January 25, 2007, ACCESS announced a name change to their current
Palm OS operating system, now titled Garnet OS.[11]

OS overview

Palm OS was a proprietary mobile operating system. Designed in 1996 for Palm
Computing, Inc.'s new Pilot PDA, it has been implemented on a wide array of mobile devices,
including smartphones, wrist watches, handheld gaming consoles, barcode
readers and GPS devices.

Palm OS versions earlier than 5.0 run on Motorola/Freescale DragonBall processors. From version 5.0 onwards, Palm OS runs on ARM architecture-based processors.

The key features of the current Palm OS Garnet are:

 Simple, single-tasking environment to allow launching of full screen applications
with a basic, common GUI set
 Monochrome or color screens with resolutions up to 480×320 pixels
 Handwriting recognition input system called Graffiti 2
 HotSync technology for data synchronization with desktop computers
 Sound playback and record capabilities
 Simple security model: Device can be locked by password, arbitrary application
records can be made private
 TCP/IP network access
 Serial port/USB, infrared, Bluetooth and Wi-Fi connections
 Expansion memory card support
 Defined standard data format for personal information management applications to
store calendar, address, task and note entries, accessible by third-party applications.

Also included with the OS is a set of standard applications, the most relevant being those for the four PIM operations mentioned above (calendar, address, task and note entries).

Android Operating System

Android is a mobile operating system based on a modified version of the Linux kernel
and other open-source software, designed primarily for touchscreen mobile devices such as
smartphones and tablets. Android is developed by a partnership of developers known as the
Open Handset Alliance and commercially sponsored by Google. It was unveiled in November 2007, with the first commercial Android device, the HTC Dream, launched in September 2008.

It is free and open-source software; its source code is the Android Open Source Project (AOSP), primarily licensed under the Apache License. However, most Android devices ship with additional proprietary software pre-installed, mainly Google Mobile Services (GMS), including core apps such as Google Chrome, the digital distribution platform Google Play and the associated Google Play Services development platform.

o About 70% of Android smartphones run Google's ecosystem, some with a vendor-customized user interface and software suite, such as TouchWiz and later One UI by Samsung, and HTC Sense.
o Competing Android ecosystems and forks include Fire OS (developed by Amazon) and LineageOS. However, the "Android" name and logo are trademarks of Google, which imposes standards to restrict "uncertified" devices outside its ecosystem from using Android branding.

Features of Android Operating System

Below are the following unique features and characteristics of the android operating
system, such as:

1. Near Field Communication (NFC)

Most Android devices support NFC, which allows electronic devices to interact across
short distances easily. The main goal here is to create a payment option that is simpler than
carrying cash or credit cards, and while the market hasn't exploded as many experts had
predicted, there may be an alternative in the works, in the form of Bluetooth Low Energy
(BLE).

2. Infrared Transmission

The Android operating system supports a built-in infrared transmitter that allows you
to use your phone or tablet as a remote control.

3. Automation

The Tasker app allows control of app permissions and also automates them.

4. Wireless App Downloads

You can download apps on your PC by using the Android Market or third-party options
like AppBrain. Then it automatically syncs them to your Droid, and no plugging is required.

5. Storage and Battery Swap

Android phones also have unique hardware capabilities. Google's OS makes it possible to remove and replace a battery that no longer holds a charge. In addition, Android phones come with SD card slots for expandable storage.

6. Custom Home Screens

While it's possible to hack certain phones to customize the home screen, Android comes
with this capability from the get-go. Download a third-party launcher like Apex, Nova, and you
can add gestures, new shortcuts, or even performance enhancements for older-model devices.

7. Widgets

Apps are versatile, but sometimes you want information at a glance instead of having
to open an app and wait for it to load. Android widgets let you display just about any feature
you choose on the home screen, including weather apps, music widgets, or productivity tools
that helpfully remind you of upcoming meetings or approaching deadlines.

8. Custom ROMs

Because the Android operating system is open-source, developers can tweak the current OS and build their own versions, which users can download and install in place of the stock OS.
Some are filled with features, while others change the look and feel of a device. Chances are,
if there's a feature you want, someone has already built a custom ROM for it.

Architecture of Android OS

The Android architecture contains a number of components to support any Android device's needs. Android software contains an open-source Linux kernel with many C/C++ libraries exposed through application framework services.

Among all the components, the Linux kernel provides the main operating system functions to the smartphone, and the Dalvik Virtual Machine (DVM) provides a platform for running an Android application. The Android operating system is a stack of software components roughly divided into five sections and four main layers, as shown in the architecture diagram below.

o Applications
o Application Framework
o Android Runtime
o Platform Libraries
o Linux Kernel
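As a rough illustration of this layering, the following toy Python model (not actual Android code; the class and layer names are invented for illustration) shows a request from an application being delegated down through each layer until the kernel handles it:

```python
# Toy model of the Android software stack: each layer records that it
# saw the request, then delegates to the layer directly beneath it.
class Layer:
    def __init__(self, name, lower=None):
        self.name = name
        self.lower = lower  # the layer directly beneath this one

    def handle(self, request, trace):
        trace.append(self.name)  # record this layer in the call path
        if self.lower is not None:
            return self.lower.handle(request, trace)
        return f"kernel handled '{request}'"

# Build the stack bottom-up: the Linux kernel is the lowest layer.
kernel    = Layer("Linux Kernel")
libraries = Layer("Platform Libraries / Android Runtime", kernel)
framework = Layer("Application Framework", libraries)
app       = Layer("Applications", framework)

trace = []
result = app.handle("read file", trace)
print(trace)   # layers traversed, top to bottom
print(result)
```

Actual Android uses Binder IPC and system services rather than direct calls, but the top-down delegation is the idea the architecture diagram conveys.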

1. Applications

An application is the top layer of the android architecture. The pre-installed applications
like camera, gallery, home, contacts, etc., and third-party applications downloaded from the
play store like games, chat applications, etc., will be installed on this layer.

It runs within the Android run time with the help of the classes and services provided by the
application framework.

2. Application framework

The Application Framework provides several important classes used to create an Android application. It provides a generic abstraction for hardware access and helps in managing the user interface with application resources. Generally, it provides the services with the help of which we can create a particular class and make that class useful for application creation.

It includes different types of services, such as activity manager, notification manager,
view system, package manager etc., which are helpful for the development of our application
according to the prerequisite.

The Application Framework layer provides many higher-level services to applications in the
form of Java classes. Application developers are allowed to make use of these services in their
applications. The Android framework includes the following key services:

o Activity Manager: Controls all aspects of the application lifecycle and activity stack.
o Content Providers: Allows applications to publish and share data with other
applications.
o Resource Manager: Provides access to non-code embedded resources such as strings,
colour settings and user interface layouts.
o Notifications Manager: Allows applications to display alerts and notifications to the
user.
o View System: An extensible set of views used to create application user interfaces.

3. Application runtime

Android Runtime environment contains components like core libraries and the Dalvik virtual
machine (DVM). It provides the base for the application framework and powers our application
with the help of the core libraries.

Like Java Virtual Machine (JVM), Dalvik Virtual Machine (DVM) is a register-based virtual
machine designed and optimized for Android to ensure that a device can run multiple instances
efficiently.

It depends on the Linux kernel layer for threading and low-level memory management. The core libraries enable us to implement Android applications using the standard Java or Kotlin programming languages.
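Since the DVM is described above as a register-based virtual machine (in contrast to the stack-based JVM), the following toy Python interpreter sketches what "register-based" means: each instruction names its operand registers explicitly instead of pushing and popping an operand stack. This is purely illustrative and is not Dalvik bytecode:

```python
# Toy register-based interpreter: every instruction addresses
# registers directly, e.g. ("add", dst, src1, src2).
def run(program, num_regs=4):
    regs = [0] * num_regs
    for op, *args in program:
        if op == "const":          # const dst, value
            dst, value = args
            regs[dst] = value
        elif op == "add":          # add dst, src1, src2
            dst, a, b = args
            regs[dst] = regs[a] + regs[b]
        elif op == "mul":          # mul dst, src1, src2
            dst, a, b = args
            regs[dst] = regs[a] * regs[b]
        else:
            raise ValueError(f"unknown opcode: {op}")
    return regs

# Compute (2 + 3) * 4 entirely in registers.
program = [
    ("const", 0, 2),
    ("const", 1, 3),
    ("add",   2, 0, 1),   # r2 = r0 + r1
    ("const", 3, 4),
    ("mul",   0, 2, 3),   # r0 = r2 * r3
]
print(run(program)[0])    # 20
```

A stack-based VM would instead push 2 and 3, add, push 4, and multiply, touching an implicit stack on every step; register addressing is one reason Dalvik bytecode needs fewer instructions per operation.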

4. Platform libraries

The Platform Libraries include various C/C++ core libraries and Java-based libraries such as
Media, Graphics, Surface Manager, OpenGL, etc., to support Android development.

o app: Provides access to the application model and is the cornerstone of all Android
applications.
o content: Facilitates content access, publishing and messaging between applications and
application components.
o database: Used to access data published by content providers; includes SQLite database management classes.
o OpenGL: A Java interface to the OpenGL ES 3D graphics rendering API.
o os: Provides applications with access to standard operating system services, including
messages, system services and inter-process communication.
o text: Used to render and manipulate text on a device display.
o view: The fundamental building blocks of application user interfaces.
o widget: A rich collection of pre-built user interface components such as buttons, labels,
list views, layout managers, radio buttons etc.

o WebKit: A set of classes intended to allow web-browsing capabilities to be built into applications.

o media: The media library provides support to play and record audio and video formats.
o surface manager: It is responsible for managing access to the display subsystem.
o SQLite: It provides database support, and FreeType provides font support.
o SSL: Secure Sockets Layer is a security technology to establish an encrypted link
between a web server and a web browser.
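To illustrate the kind of embedded database support SQLite provides, here is a minimal sketch using Python's standard sqlite3 module as a stand-in (not the Android SQLiteDatabase API; the contacts table is a made-up example):

```python
import sqlite3

# In-memory SQLite database standing in for an app's local data store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (id INTEGER PRIMARY KEY, name TEXT, phone TEXT)")
conn.execute("INSERT INTO contacts (name, phone) VALUES (?, ?)", ("Alice", "555-0101"))
conn.execute("INSERT INTO contacts (name, phone) VALUES (?, ?)", ("Bob", "555-0102"))
conn.commit()

# Query the data back, as a content provider might when another
# application asks for the stored contacts.
rows = conn.execute("SELECT name, phone FROM contacts ORDER BY name").fetchall()
for name, phone in rows:
    print(name, phone)
conn.close()
```

The same embedded, serverless model (one database file per app, SQL queries in-process) is what Android's SQLite support offers to applications.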

5. Linux Kernel

Linux Kernel is the heart of the android architecture. It manages all the available drivers
such as display, camera, Bluetooth, audio, memory, etc., required during the runtime.

The Linux Kernel will provide an abstraction layer between the device hardware and the other
android architecture components. It is responsible for the management of memory, power,
devices etc. The features of the Linux kernel are:

o Security: The Linux kernel handles the security between the application and the
system.

o Memory Management: It efficiently handles memory management, thereby providing
the freedom to develop our apps.
o Process Management: It manages the process well, allocates resources to processes
whenever they need them.
o Network Stack: It effectively handles network communication.
o Driver Model: It ensures that the application works properly on the device; hardware manufacturers are responsible for building their drivers into the Linux build.

Android Applications

Android applications are usually developed in the Java language using the Android
Software Development Kit. Once developed, Android applications can be packaged easily and sold either through a store such as Google Play, SlideME, Opera Mobile Store, Mobango, F-droid or the Amazon Appstore.

Android powers hundreds of millions of mobile devices in more than 190 countries
around the world. It's the largest installed base of any mobile platform and growing fast. Every
day more than 1 million new Android devices are activated worldwide.

Android Emulator

The Android Emulator is a tool used to develop and test Android applications without using any physical device.

The Android emulator has almost all of the hardware and software features of a real mobile device, except the ability to make phone calls. It provides a variety of navigation and control keys, as well as a screen to display your application. The emulator uses Android Virtual Device configurations. Once your application is running on it, it can use services of the Android platform to invoke other applications, access the network, play audio and video, and store and retrieve data.

Advantages of Android Operating System

Compared with other platforms, Android has several strengths. Below are some important advantages of Android OS:

o Google as developer: The greatest advantage of Android is Google. Google owns the Android operating system, and Google is one of the most trusted and reputed names on the web. The Google name gives users the confidence to buy Android devices.
o Large user base: Android is the most widely used mobile operating system, with more than a billion users, and it is also the fastest-growing operating system in the world. The large user base increases the number of applications and software available under the Android name.
o Multitasking: Users can perform many tasks at once, opening several applications simultaneously and managing them easily. Android has an excellent UI that makes multitasking simple.
o Google Play Store: A major strength of Android is the availability of many applications. The Google Play store is reported to be the world's largest mobile app store. It has practically everything from movies to games and much more, and these can be easily downloaded and accessed on an Android phone.
o Notifications and easy access: Notifications of any SMS, email, or call appear on the home screen or the notification panel of the Android phone, and the user can view all notifications on the top bar. The UI makes it easy to view several notifications at once.
o Widgets: The Android operating system has many widgets. Widgets greatly improve the user experience and help with multitasking. You can add any widget to your home screen, depending on the feature you need, and see notifications, messages, and much more without opening applications.

Disadvantages of Android Operating System

The Android operating system is in considerable demand among users nowadays, but it also has a few weaknesses. Below are some disadvantages of the Android operating system:

o Advertisement pop-ups: Applications are freely available in the Google Play store, but many of them display large numbers of advertisements on the notification bar and over the application. These advertisements are intrusive and make managing your Android phone a real problem.
o Requires a Google ID: You cannot fully use an Android device without a Google (Gmail) ID and password. The Google ID is also very useful for unlocking an Android phone.
o Battery drain: Android handsets are considered among the most battery-consuming devices. In the Android operating system, many processes run in the background, which drains the battery. It is difficult to stop these applications because the majority of them are system applications.
o Malware/virus/security: Android devices are not considered as safe as some other platforms. Hackers keep trying to steal your data; it is comparatively easy to target an Android phone, and millions of attack attempts are made on Android phones every day.

Android Architecture
Android architecture contains a number of components to support any Android device's needs. Android software contains an open-source Linux kernel with a collection of C/C++ libraries that are exposed through application framework services.

Among all the components, the Linux kernel provides the main operating system functions to smartphones and the Dalvik Virtual Machine (DVM) provides a platform for running an Android application.

The main components of android architecture are following:-

 Applications
 Application Framework
 Android Runtime
 Platform Libraries
 Linux Kernel
Pictorial representation of android architecture with several main components and their sub
components –

Applications –
Applications form the top layer of the Android architecture. The pre-installed applications like home, contacts, camera, gallery, etc., and third-party applications downloaded from the play store, like chat applications, games, etc., will be installed on this layer only.
It runs within the Android run time with the help of the classes and services provided by the
application framework.

Application framework –
The Application Framework provides several important classes which are used to create an Android application. It provides a generic abstraction for hardware access and also helps in managing the user interface with application resources. Generally, it provides the services with the help of which we can create a particular class and make that class useful for application creation.

It includes different types of services, such as activity manager, notification manager, view system, package manager, etc., which are helpful for the development of our application according to the prerequisite.

Application runtime –
The Android Runtime environment is one of the most important parts of Android. It contains components like core libraries and the Dalvik Virtual Machine (DVM). Mainly, it provides the base for the application framework and powers our application with the help of the core libraries.

Like the Java Virtual Machine (JVM), the Dalvik Virtual Machine (DVM) is a register-based virtual machine, specially designed and optimized for Android to ensure that a device can run multiple instances efficiently. It depends on the Linux kernel layer for threading and low-level memory management. The core libraries enable us to implement Android applications using the standard Java or Kotlin programming languages.
Platform libraries –
The Platform Libraries include various C/C++ core libraries and Java-based libraries such as Media, Graphics, Surface Manager, OpenGL, etc. to provide support for Android development.

 Media library provides support to play and record audio and video formats.
 Surface manager is responsible for managing access to the display subsystem.
 SGL and OpenGL, both cross-language, cross-platform application program interfaces (APIs), are used for 2D and 3D computer graphics.
 SQLite provides database support and FreeType provides font support.
 WebKit: this open-source web browser engine provides all the functionality to display web content and to simplify page loading.
 SSL (Secure Sockets Layer) is a security technology used to establish an encrypted link between a web server and a web browser.
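As a sketch of what "establishing an encrypted link" involves on the client side, here is a minimal example using Python's standard ssl module (not Android's networking API; the open_tls_connection helper is a hypothetical name for illustration):

```python
import socket
import ssl

# A default client-side TLS context: it verifies server certificates
# against the system trust store and checks the server's hostname.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True

# Sketch of use (not executed here): wrap a plain TCP socket in TLS
# before any application data is exchanged with the server.
def open_tls_connection(host, port=443):
    sock = socket.create_connection((host, port))
    return context.wrap_socket(sock, server_hostname=host)
```

The key point is that the encryption is negotiated below the application protocol: HTTP (or any other protocol) then runs unchanged over the wrapped socket.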
Linux Kernel –
The Linux Kernel is the heart of the Android architecture. It manages all the available drivers such as display drivers, camera drivers, Bluetooth drivers, audio drivers, memory drivers, etc., which are required during runtime.

The Linux Kernel will provide an abstraction layer between the device hardware and the other
components of android architecture. It is responsible for management of memory, power,
devices etc.

The features of Linux kernel are:

 Security: The Linux kernel handles the security between the application and the
system.
 Memory Management: It efficiently handles the memory management thereby
providing the freedom to develop our apps.
 Process Management: It manages the process well, allocates resources to
processes whenever they need them.
 Network Stack: It effectively handles the network communication.
 Driver Model: It ensures that the application works properly on the device; hardware manufacturers are responsible for building their drivers into the Linux build.
Securing handheld devices

Abstract

Handheld devices are becoming an indispensable part of everyday life. In this paper, we review the security of handheld devices, which are based on the Pocket PC operating system. We then identify the risks and threats of having these handheld devices connected to the Internet, and propose several methods to protect against the threats. We point out some advantages and security threats of porting server applications to handheld devices. We also consider the newest technology in mobile applications using Microsoft's .NET framework and provide the security risk analysis for it. We conclude the paper with an open problem.

Publication Details

This article was published as: Susilo, W, Securing handheld devices, 10th IEEE International Conference on Networks, 27-30 August 2002, 349-354. Copyright IEEE 2002.

Secure handheld system

Protect Your Handheld Device

1. Maintain physical protection of your handheld device at all times.


o Keep the device with you or securely locked away.
o Longwood users connecting to the university's email and calendaring services
must contact the Help Desk immediately if their device is lost or stolen.
o For other users, if your device is lost or stolen you may wish to notify your
service provider immediately as they may be able to remotely erase all data from
the device.
2. Enable only the necessary functions, features and capabilities.
o Disable unneeded services such as Bluetooth, Wi-Fi and other wireless
interfaces until they are needed.
o Prior to installing any applications or functions, weigh the benefits (utility provided by the application) against the risks (opening a new avenue of attack on your device).
o ONLY download applications from reputable sources.
3. Keep your device free of malware.

o Use device antivirus and firewall software when available.
o Install updates as they become available to fix security issues with your device.
o Do not open attachments or follow links sent from individuals you do not know
as this may result in malware being installed on your device. You should also
be suspicious of attachments and links sent by friends. As handheld devices
become more popular they become more attractive targets for hackers.
o Only download applications from reputable sources.
4. Use a strong device password or PIN.
o Choose a different password than the password you use to access the university
network or other university applications.
o Set the device to automatically lock after a set period of inactivity.
5. Take steps to reduce the risks associated with losing data stored on your device.
o Avoid storing confidential personal information, including user IDs and
passwords, or university restricted information on a handheld device.
o If the storage of your confidential personal information or university restricted
information cannot be avoided use encryption to protect the data from
unauthorized access.
o Regularly backup information stored on your handheld device to another system
in case your device is lost, stolen or damaged.
o Prior to disposing of a handheld device all of the data should be securely
removed. Deleting information from the device is not enough!
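To illustrate point 4 above, a device should store only a salted, slow hash of the PIN rather than the PIN itself, so that a stolen storage dump does not reveal it. A minimal sketch using Python's standard hashlib (illustrative only; hash_pin and verify_pin are invented helper names, and real lock screens typically use hardware-backed key stores):

```python
import hashlib
import hmac
import os

def hash_pin(pin, salt=None):
    """Derive a slow, salted hash of the PIN with PBKDF2-HMAC-SHA256."""
    if salt is None:
        salt = os.urandom(16)  # random per-device salt
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    return salt, digest        # store both; never store the PIN itself

def verify_pin(pin, salt, stored):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_pin("4921")
print(verify_pin("4921", salt, stored))  # True
print(verify_pin("0000", salt, stored))  # False
```

The salt defeats precomputed lookup tables, and the high iteration count slows down brute-force guessing of short PINs; combining this with the auto-lock and attempt limits described above is what makes a device PIN meaningful.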

ONE MARK QUESTIONS

1. Handheld systems include ?

A. PFAs
B. PDAs
C. PZAs
D. PUAs
Ans : B

Explanation: Handheld systems include Personal Digital Assistants(PDAs)

2. Which of the following is an example of PDAs?

A. Palm-Pilots
B. Cellular Telephones
C. Both A and B
D. None of the above
Ans : C

Explanation: Handheld systems include Personal Digital Assistants(PDAs), such as Palm-


Pilots or Cellular Telephones with connectivity to a network such as the Internet.

3. Many handheld devices have between ______ of memory

A. 256 KB and 8 MB
B. 512 KB and 2 MB
C. 256 KB and 4 MB
D. 512 KB and 8 MB
Ans : D

Explanation: Many handheld devices have between 512 KB and 8 MB of memory.

4. Handheld devices do not use virtual memory techniques.

A. TRUE
B. FALSE
C. Can be true or false
D. Can not say
Ans : A

Explanation: True, Currently, many handheld devices do not use virtual memory techniques,
thus forcing program developers to work within the confines of limited physical memory.

5. Faster processors require ______ power

A. very less
B. less
C. more
D. very more
Ans : C

Explanation: Faster processors require more power.


6. To include a faster processor in a handheld device would require a ______ battery.

A. very small
B. small

C. medium
D. larger
Ans : D

Explanation: To include a faster processor in a handheld device would require a larger battery
that would have to be replaced more frequently.

7. Some handheld computers contain tiny built-in keyboards or a microphone that allows ____.

A. text input
B. data input
C. print input
D. voice input
Ans : D

Explanation: Some handheld computers contain tiny built-in keyboards or a microphone that allows voice input.

8. A popular type of handheld computer is ____.

A. Smart phone
B. Laptops
C. Personal digital assistant
D. TVs
Ans : C

Explanation: Popular type of handheld computer is Personal digital assistant.

9. A mobile device that functions as a personal information manager is ____.

A. LPDs
B. CRTs
C. UCDs
D. PDAs
Ans : D

Explanation: A mobile device that functions as a personal information manager is PDAs

10. Some handheld devices may use wireless technology such as Bluetooth, allowing remote
access to e-mail and web browsing.

A. Yes
B. No

C. Can be yes or no
D. Can not say
Ans : A

Explanation: Yes, some handheld devices may use wireless technology such as Bluetooth,
allowing remote access to e-mail and web browsing.

5 marks
1. What is a handheld operating system?
2. What are the handheld system requirements?
3. Explain handheld system technology.
4. What is Palm OS?
5. Discuss the Symbian operating system.

10 marks
1. What is Android?
2. What is the architecture of Android?
3. Explain the Android architecture in detail.
4. Explain the Palm OS operating system.
5. Differentiate between Symbian OS and Palm OS.

UNIT 5

Operating System Case Studies


Introduction to Linux Operating System


The Linux Operating System is a type of operating system that is similar to Unix, and
it is built upon the Linux Kernel. The Linux Kernel is like the brain of the operating system
because it manages how the computer interacts with its hardware and resources. It makes sure
everything works smoothly and efficiently. But the Linux Kernel alone is not enough to make
a complete operating system. To create a full and functional system, the Linux Kernel is
combined with a collection of software packages and utilities, which are together called Linux
distributions. These distributions make the Linux Operating System ready for users to run their
applications and perform tasks on their computers securely and effectively. Linux distributions
come in different flavors, each tailored to suit the specific needs and preferences of users.
What is Linux

Linux is a powerful and flexible family of operating systems that are free to use and
share. It was created by a person named Linus Torvalds in 1991. What’s cool is that
anyone can see how the system works because its source code is open for everyone to
explore and modify. This openness encourages people from all over the world to work
together and make Linux better and better. Since its beginning, Linux has grown into a
stable and safe system used in many different things, like computers, smartphones, and
big supercomputers. It’s known for being efficient, meaning it can do a lot of tasks
quickly, and it’s also cost-effective, which means it doesn’t cost a lot to use. Lots of
people love Linux, and they’re part of a big community where they share ideas and help
each other out. As technology keeps moving forward, Linux will keep evolving and
staying important in the world of computers.
Linux Distribution
A Linux distribution is an operating system made up of a collection of software based on the
Linux kernel; in other words, a distribution contains the Linux kernel together with supporting
libraries and software. You can get a Linux-based operating system by downloading one of the
Linux distributions, which are available for different types of devices such as embedded
devices and personal computers. Around 600+ Linux distributions are available, and some of
the popular ones are:
 MX Linux
 Manjaro
 Linux Mint
 elementary
 Ubuntu
 Debian
 Solus
 Fedora
 openSUSE
 Deepin
Architecture of Linux
Linux architecture has the following components:

Linux Architecture

1. Kernel: Kernel is the core of the Linux based operating system. It virtualizes the
common hardware resources of the computer to provide each process with its virtual
resources. This makes the process seem as if it is the sole process running on the
machine. The kernel is also responsible for preventing and mitigating conflicts
between different processes. Different types of the kernel are:
 Monolithic Kernel
 Hybrid kernels
 Exo kernels
 Micro kernels
2. System Library: Linux uses system libraries, also known as shared libraries, to
implement various functionalities of the operating system. These libraries contain
pre-written code that applications can use to perform specific tasks. By using these
libraries, developers can save time and effort, as they don’t need to write the same
code repeatedly. System libraries act as an interface between applications and the
kernel, providing a standardized and efficient way for applications to interact with
the underlying system.
3. Shell: The shell is the user interface of the Linux Operating System. It allows users
to interact with the system by entering commands, which the shell interprets and
executes. The shell serves as a bridge between the user and the kernel, forwarding
the user’s requests to the kernel for processing. It provides a convenient way for

users to perform various tasks, such as running programs, managing files, and
configuring the system.
4. Hardware Layer: The hardware layer encompasses all the physical components of
the computer, such as RAM (Random Access Memory), HDD (Hard Disk Drive),
CPU (Central Processing Unit), and input/output devices. This layer is responsible
for interacting with the Linux Operating System and providing the necessary
resources for the system and applications to function properly. The Linux kernel
and system libraries enable communication and control over these hardware
components, ensuring that they work harmoniously together.
5. System Utility: System utilities are essential tools and programs provided by the
Linux Operating System to manage and configure various aspects of the system.
These utilities perform tasks such as installing software, configuring network
settings, monitoring system performance, managing users and permissions, and
much more. System utilities simplify system administration tasks, making it easier
for users to maintain their Linux systems efficiently.
Advantages of Linux
 The main advantage of Linux is it is an open-source operating system. This means
the source code is easily available for everyone and you are allowed to contribute,
modify and distribute the code to anyone without any permissions.
 In terms of security, Linux is more secure than most other operating systems. This
does not mean that Linux is 100 percent secure; some malware exists for it, but it is
less vulnerable than most other operating systems, so it generally does not require
anti-virus software.
 The software updates in Linux are easy and frequent.
 Various Linux distributions are available so that you can use them according to your
requirements or according to your taste.
 Linux is freely available to use on the internet.
 It has large community support.
 It provides high stability. It rarely slows down or freezes and there is no need to
reboot it after a short time.
 It maintains the privacy of the user.
 The performance of the Linux system is much higher than other operating systems.
It allows a large number of people to work at the same time and it handles them
efficiently.

 It is network friendly.
 The flexibility of Linux is high. There is no need to install a complete Linux suite;
you are allowed to install only the required components.
 Linux is compatible with a large number of file formats.
 It is fast and easy to install from the web, and it can be installed on almost any
hardware, even an old computer system.
 It performs all tasks properly even if it has limited space on the hard disk.
Disadvantages of Linux
 It is not very user-friendly. So, it may be confusing for beginners.
 It has fewer peripheral hardware drivers as compared to Windows.
Frequently Asked Questions in Linux Operating System
What is Linux Operating System?
Linux is an open-source operating system developed by Linus Torvalds in 1991. It
provides a customizable and secure alternative to proprietary systems. With its stable
performance, Linux is widely used across devices, from personal computers to servers and
smartphones. The collaborative efforts of its developer community continue to drive
innovation, making Linux a dominant force in the world of computing.
Is There Any Difference between Linux and Ubuntu?
The answer is YES. The main difference is that Linux is a family of open-source operating
systems based on the Linux kernel, whereas Ubuntu is a free, open-source Linux distribution
based on Debian. In other words, Linux is the core system and Ubuntu is a distribution of
Linux. Linux was developed by Linus Torvalds and released in 1991, while Ubuntu is
developed by Canonical Ltd. and was released in 2004.
How do I install software on Linux Operating System?
To install software on Linux, we can use package managers specific to your Linux distribution.
For example,
In Ubuntu, you can use the “apt” package manager,
while on Fedora, you can use “dnf.”
You can simply open a terminal and use the package manager to search for and install
software.
For example,
To install the text editor “nano” on Ubuntu, you can use the command
sudo apt install nano

Can we dual-boot Linux with another operating system?
Yes, we can dual-boot Linux with another operating system, such as Windows. During the
installation of Linux, we can allocate a separate partition for Linux, and a boot manager (like
GRUB) allows us to choose which operating system to boot when starting our computer.
How can I update my Linux distribution?
We can update our Linux distribution using the package manager of our specific distribution.
For instance, on Ubuntu, we can run the following commands to update the package list and
upgrade the installed packages:
sudo apt update
sudo apt upgrade
What are the essential Linux commands for beginners?
Some essential Linux commands for beginners include:
 ls: List files and directories
 cd: Change directory
 mkdir: Create a new directory
 rm: Remove files or directories
 cp: Copy files and directories
 mv: Move or rename files and directories
 cat: Display file content
 grep: Search for text in files
 sudo: Execute commands with administrative privileges
How do I access the command-line interface in Linux Operating System?
To access the command-line interface in Linux, we can open a terminal window. In most Linux
distributions, we can press Ctrl + Alt + T to open the terminal. The terminal allows us to
execute commands directly, providing more advanced control over our system.
Conclusion
In this article we discussed Linux Operating System which is a powerful and flexible
open-source operating system based on the Linux Kernel. With a collaborative global
community, it offers security, frequent updates, and diverse distributions tailored to user needs.
Its architecture, comprising the kernel, system libraries, shell, hardware layer, and utilities,
ensures efficient functionality. While Linux boasts high performance, stability, and
compatibility, challenges include user-friendliness for beginners and a limited number of
peripheral hardware drivers. Despite this, Linux remains a significant player in computing,
poised for continued evolution and relevance.

Memory Management in Operating System
The term memory can be defined as a collection of data in a specific format. It is used
to store instructions and process data. The memory comprises a large array or group of words
or bytes, each with its own location. The primary purpose of a computer system is to execute
programs. These programs, along with the information they access, should be in the main
memory during execution. The CPU fetches instructions from memory according to the value
of the program counter.
To achieve a degree of multiprogramming and proper utilization of memory, memory
management is important. Many memory management methods exist, reflecting various
approaches, and the effectiveness of each algorithm depends on the situation.
Here, we will cover the following memory management topics:
 What is Main Memory?
 What is Memory Management?
 Why Memory Management is Required?
 Logical Address Space and Physical Address Space
 Static and Dynamic Loading
 Static and Dynamic Linking
 Swapping
 Contiguous Memory Allocation
 Memory Allocation
 First Fit
 Best Fit
 Worst Fit
 Fragmentation
 Internal Fragmentation
 External Fragmentation
 Paging
Before we start memory management, let us understand what main memory is.
What is Main Memory?
The main memory is central to the operation of a Modern Computer. Main Memory is
a large array of words or bytes, ranging in size from hundreds of thousands to billions. Main
memory is a repository of rapidly available information shared by the CPU and I/O devices.
Main memory is the place where programs and information are kept when the processor is
effectively utilizing them. Main memory is associated with the processor, so moving

instructions and information into and out of the processor is extremely fast. Main memory is
also known as RAM (Random Access Memory). This memory is volatile. RAM loses its data
when a power interruption occurs.

Main Memory

What is Memory Management?


In a multiprogramming computer, the Operating System resides in a part of memory,
and the rest is used by multiple processes. The task of subdividing the memory among different
processes is called Memory Management. Memory management is a method in the operating
system to manage operations between main memory and disk during process execution. The
main aim of memory management is to achieve efficient utilization of memory.
Why Memory Management is Required?
 To allocate and de-allocate memory before and after process execution.
 To keep track of the memory space used by processes.
 To minimize fragmentation issues.
 To ensure proper utilization of main memory.
 To maintain data integrity during process execution.
Now we are discussing the concept of Logical Address Space and Physical Address Space

Logical and Physical Address Space
 Logical Address Space: An address generated by the CPU is known as a “Logical
Address”. It is also known as a Virtual address. Logical address space can be
defined as the size of the process. A logical address can be changed.
 Physical Address Space: An address seen by the memory unit (i.e., the one loaded
into the memory address register) is commonly known as a "Physical Address", also
called a Real address. The set of all physical addresses corresponding to the logical
addresses is known as the physical address space. Physical addresses are computed
by the Memory Management Unit (MMU), a hardware device that performs the
run-time mapping from virtual to physical addresses. The physical address always
remains constant.
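The MMU's run-time mapping can be illustrated with a minimal sketch (a simplified model assuming plain base/limit relocation rather than paging; the base, limit, and address values are arbitrary examples):

```python
class MMU:
    """Simplified Memory Management Unit using base/limit relocation."""

    def __init__(self, base, limit):
        self.base = base    # start of the process's region in physical memory
        self.limit = limit  # size of the process's logical address space

    def translate(self, logical_address):
        # Every CPU-generated (logical) address is checked against the limit
        # register, then relocated by adding the base register.
        if not (0 <= logical_address < self.limit):
            # Modeled here as a MemoryError; real hardware raises a trap.
            raise MemoryError(f"addressing error at logical address {logical_address}")
        return self.base + logical_address

mmu = MMU(base=14000, limit=3000)
print(mmu.translate(346))   # 14346 -- logical address 346 relocated by base 14000
```

Note that the logical address never changes; only the mapping (the base register contents) determines where it lands in physical memory.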
Static and Dynamic Loading
Loading a process into the main memory is done by a loader. There are two different types of
loading:
 Static Loading: Static Loading is basically loading the entire program into a fixed
address. It requires more memory space.
 Dynamic Loading: With static loading, the entire program and all data of a process
must be in physical memory for the process to execute, so the size of a process is
limited to the size of physical memory. To gain proper memory utilization, dynamic
loading is used. In dynamic loading, a routine is not loaded until it is called. All
routines reside on disk in a relocatable load format. One advantage of dynamic
loading is that an unused routine is never loaded. This is useful when large amounts
of code are needed to handle infrequently occurring cases.
Static and Dynamic Linking
To perform a linking task a linker is used. A linker is a program that takes one or more
object files generated by a compiler and combines them into a single executable file.
 Static Linking: In static linking, the linker combines all necessary program
modules into a single executable program. So there is no runtime dependency. Some
operating systems support only static linking, in which system language libraries
are treated like any other object module.
 Dynamic Linking: The basic concept of dynamic linking is similar to dynamic
loading. In dynamic linking, “Stub” is included for each appropriate library routine
reference. A stub is a small piece of code. When the stub is executed, it checks
whether the needed routine is already in memory or not. If not available then the
program loads the routine into memory.
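The stub mechanism can be sketched in a few lines (a toy model; the routine name "square" and the DISK/MEMORY dictionaries are hypothetical stand-ins for on-disk routine images and loaded code):

```python
# Toy model of stub-based dynamic linking: each library routine starts as a
# stub that loads the real implementation on first call.

DISK = {"square": lambda x: x * x}   # routine "images" residing on disk
MEMORY = {}                          # routines currently loaded in memory

def make_stub(name):
    def stub(*args):
        if name not in MEMORY:          # is the needed routine in memory?
            MEMORY[name] = DISK[name]   # no: load it from "disk" first
        return MEMORY[name](*args)      # then call the loaded routine
    return stub

square = make_stub("square")
assert "square" not in MEMORY   # nothing is loaded until the first call
print(square(7))                # 49 -- the first call triggers the load
assert "square" in MEMORY
```

The same idea underlies dynamic loading: a routine stays on disk until its first use, so unused routines never consume memory.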
Swapping
When a process is executed it must have resided in memory. Swapping is a process of
swapping a process temporarily into a secondary memory from the main memory, which is fast
compared to secondary memory. A swapping allows more processes to be run and can be fit
into memory at one time. The main part of swapping is transferred time and the total time is
directly proportional to the amount of memory swapped. Swapping is also known as roll-out,
or roll because if a higher priority process arrives and wants service, the memory manager can
swap out the lower priority process and then load and execute the higher priority process. After
finishing higher priority work, the lower priority process swapped back in memory and
continued to the execution process.
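The roll-out/roll-in behaviour described above can be sketched as a toy simulation (illustrative only; the process names, priorities, and capacity are made up):

```python
# Toy roll-out / roll-in: memory holds a limited number of processes; a
# higher-priority arrival swaps out the lowest-priority resident process.

class Memory:
    def __init__(self, capacity):
        self.capacity = capacity
        self.resident = []      # processes in main memory: (priority, name)
        self.swapped_out = []   # processes rolled out to secondary storage

    def admit(self, name, priority):
        if len(self.resident) < self.capacity:
            self.resident.append((priority, name))
            return
        # Memory is full: roll out the lowest-priority resident if the
        # newcomer has higher priority (a larger number = higher priority).
        victim = min(self.resident)
        if priority > victim[0]:
            self.resident.remove(victim)
            self.swapped_out.append(victim)       # roll-out
            self.resident.append((priority, name))
        else:
            self.swapped_out.append((priority, name))

mem = Memory(capacity=2)
mem.admit("editor", priority=1)
mem.admit("shell", priority=2)
mem.admit("compiler", priority=5)   # memory full: rolls out "editor"
print(mem.swapped_out)              # [(1, 'editor')]
```

A real memory manager would later roll "editor" back in once the higher-priority work completes.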

swapping in memory management

Memory Management with Monoprogramming (Without Swapping)


This is the simplest memory management approach: memory is divided into two sections:
 One part for the operating system
 The second part for the user program

[Figure: a fence register separates the operating system area from the user program area]

 In this approach, the operating system keeps track of the first and last location
available for the allocation of the user program

 The operating system is loaded either at the bottom or at top
 Interrupt vectors are often loaded in low memory therefore, it makes sense to load
the operating system in low memory
 Sharing of data and code does not make much sense in a single process environment
 The Operating system can be protected from user programs with the help of a fence
register.
Advantages of Monoprogramming
 It is a simple management approach.
Disadvantages of Monoprogramming
 It does not support multiprogramming.
 Memory is wasted.
Multiprogramming with Fixed Partitions (Without Swapping)
 A memory partition scheme with a fixed number of partitions was introduced to
support multiprogramming. This scheme is based on contiguous allocation.
 Each partition is a block of contiguous memory
 Memory is partitioned into a fixed number of partitions.
 Each partition is of fixed size
Example: As shown in the figure, memory is partitioned into 5 regions; the first region is
reserved for the operating system and the remaining four partitions are for user programs.
Fixed Size Partitioning

[Figure: memory divided into five fixed partitions: Operating System, p1, p2, p3, p4]

Partition Table

Once partitions are defined, the operating system keeps track of the status of memory
partitions through a data structure called a partition table.
Sample Partition Table

Starting Address of Partition | Size of Partition | Status
0k                            | 200k              | allocated
200k                          | 100k              | free
300k                          | 150k              | free
450k                          | 250k              | allocated
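The partition table above can be modeled directly as a small data structure (a sketch using the sample values from the table; the field names and the first-free allocation rule are illustrative):

```python
# Partition table from the sample above; sizes and starts are in KB.
partition_table = [
    {"start": 0,   "size": 200, "status": "allocated"},
    {"start": 200, "size": 100, "status": "free"},
    {"start": 300, "size": 150, "status": "free"},
    {"start": 450, "size": 250, "status": "allocated"},
]

def allocate(table, size):
    """Mark the first free partition large enough for `size` as allocated."""
    for part in table:
        if part["status"] == "free" and part["size"] >= size:
            part["status"] = "allocated"
            return part["start"]
    return None  # no suitable partition available

start = allocate(partition_table, 120)
print(start)  # 300 -- the 100k partition is too small, the 150k one fits
```

When the process terminates, the operating system would simply flip that entry's status back to "free".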

Logical vs Physical Address


An address generated by the CPU is commonly referred to as a logical address, while the
address seen by the memory unit is known as the physical address. A logical address can be
mapped to a physical address by hardware with the help of a base register; this is known as
dynamic relocation of memory references.
Contiguous Memory Allocation
The main memory should accommodate both the operating system and the different
client processes. Therefore, the allocation of memory becomes an important task in the
operating system. The memory is usually divided into two partitions: one for the
resident operating system and one for the user processes. We normally need several user
processes to reside in memory simultaneously. Therefore, we need to consider how to allocate
available memory to the processes that are in the input queue waiting to be brought into
memory. In contiguous memory allocation, each process is contained in a single contiguous
segment of memory.

Contiguous Memory Allocation

Memory Allocation
To gain proper memory utilization, memory must be allocated in an efficient manner. One of
the simplest methods for allocating memory is to divide memory into several fixed-sized
partitions, with each partition containing exactly one process. Thus, the degree of
multiprogramming is bounded by the number of partitions.
 Fixed partition allocation: In this method, a process is selected from the input
queue and loaded into a free partition. When the process terminates, the partition
becomes available for other processes.
 Variable partition allocation: In this method, the operating system maintains a
table that indicates which parts of memory are available and which are occupied
by processes. Initially, all memory is available for user processes and is considered
one large block of available memory, known as a "Hole". When a process arrives
and needs memory, we search for a hole that is large enough to store the process.
If one is found, memory is allocated to the process, and the rest is kept available to
satisfy future requests. While allocating memory, the dynamic storage-allocation
problem arises, which concerns how to satisfy a request of size n from a list of free
holes. There are some solutions to this problem:
First Fit
In the First Fit, the first available free hole that satisfies the requirement of the process is allocated.
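First Fit, together with the Best Fit and Worst Fit strategies named in the topic list above, can be compared with a short sketch (an illustrative model; the hole sizes and request size are arbitrary):

```python
def first_fit(holes, size):
    """Return the index of the first hole large enough for `size`."""
    for i, hole in enumerate(holes):
        if hole >= size:
            return i
    return None

def best_fit(holes, size):
    """Return the index of the smallest hole that still fits `size`."""
    candidates = [(hole, i) for i, hole in enumerate(holes) if hole >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    """Return the index of the largest hole (leaves the biggest leftover)."""
    candidates = [(hole, i) for i, hole in enumerate(holes) if hole >= size]
    return max(candidates)[1] if candidates else None

holes = [100, 500, 200, 300, 600]    # free hole sizes in KB
print(first_fit(holes, 212))  # 1 -> the 500 KB hole (first that fits)
print(best_fit(holes, 212))   # 3 -> the 300 KB hole (smallest that fits)
print(worst_fit(holes, 212))  # 4 -> the 600 KB hole (largest available)
```

Best Fit minimizes the leftover fragment per allocation, while Worst Fit deliberately leaves the largest possible leftover so it may still be usable by a later request.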

Process Scheduling in OS (Operating System)

The operating system uses various schedulers for process scheduling, as described below.

1. Long term scheduler

Long term scheduler is also known as job scheduler. It chooses the processes from the
pool (secondary memory) and keeps them in the ready queue maintained in the primary
memory.

Long Term scheduler mainly controls the degree of Multiprogramming. The purpose
of long term scheduler is to choose a perfect mix of IO bound and CPU bound processes among
the jobs present in the pool.

If the job scheduler chooses more IO bound processes then all of the jobs may reside in
the blocked state all the time and the CPU will remain idle most of the time. This will reduce
the degree of Multiprogramming. Therefore, the Job of long term scheduler is very critical and
may affect the system for a very long time.

2. Short term scheduler

Short term scheduler is also known as CPU scheduler. It selects one of the jobs from the ready
queue and dispatches it to the CPU for execution.

A scheduling algorithm is used to select which job is going to be dispatched for
execution. The job of the short term scheduler can be very critical in the sense that if it
selects a job whose CPU burst time is very high, then all the jobs after that will have to
wait in the ready queue for a very long time.

This problem is called starvation, which may arise if the short term scheduler makes some
mistakes while selecting the job.

3. Medium term scheduler

Medium term scheduler takes care of the swapped-out processes. If a running process
needs some I/O time for its completion, then its state needs to be changed from running
to waiting.

Medium term scheduler is used for this purpose. It removes the process from the
running state to make room for the other processes. Such processes are the swapped out
processes and this procedure is called swapping. The medium term scheduler is responsible for
suspending and resuming the processes.

It reduces the degree of multiprogramming. Swapping is necessary to maintain a proper
mix of processes in the ready queue.

Operating System - Process Scheduling

Definition

Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process on the basis of a
particular strategy.

Process scheduling is an essential part of multiprogramming operating systems. Such
operating systems allow more than one process to be loaded into executable memory at a
time, and the loaded processes share the CPU using time multiplexing.

Categories of Scheduling

There are two categories of scheduling:

1. Non-preemptive: Here the resource can’t be taken from a process until the process
completes execution. The switching of resources occurs when the running process
terminates and moves to a waiting state.
2. Preemptive: Here the OS allocates the resources to a process for a fixed amount of
time. During resource allocation, the process switches from running state to ready state
or from waiting state to ready state. This switching occurs as the CPU may give priority
to other processes and replace the process with higher priority with the running process.

Process Scheduling Queues

The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues.
The OS maintains a separate queue for each of the process states and PCBs of all processes in

the same execution state are placed in the same queue. When the state of a process is changed,
its PCB is unlinked from its current queue and moved to its new state queue.

The Operating System maintains the following important process scheduling queues −

 Job queue − This queue keeps all the processes in the system.
 Ready queue − This queue keeps a set of all processes residing in main memory, ready
and waiting to execute. A new process is always put in this queue.
 Device queues − The processes which are blocked due to unavailability of an I/O
device constitute this queue.

The OS can use different policies to manage each queue (FIFO, Round Robin, Priority,
etc.). The OS scheduler determines how to move processes between the ready and run
queues; the run queue can have only one entry per processor core on the system.
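The movement of PCBs between these queues can be sketched as a toy simulation (the process names and the FIFO dispatch policy are illustrative choices):

```python
from collections import deque

# One queue per process state, as described above.
job_queue = deque(["P1", "P2", "P3"])   # all processes in the system
ready_queue = deque()                   # in main memory, waiting for the CPU
device_queue = deque()                  # blocked on an I/O device

def admit(process):
    """Long-term scheduling: move a process into the ready queue."""
    job_queue.remove(process)
    ready_queue.append(process)

def dispatch():
    """Short-term scheduling: pick the next ready process (FIFO policy here)."""
    return ready_queue.popleft()

admit("P1")
admit("P2")
running = dispatch()
print(running)                 # P1 -- the first process admitted runs first
device_queue.append(running)   # P1 issues an I/O request and blocks
```

When the I/O completes, the process's PCB would be unlinked from the device queue and appended back to the ready queue, exactly as the text describes.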

Two-State Process Model

Two-state process model refers to running and non-running states which are described below

S.N.  State & Description

1     Running
      When a new process is created, it enters the system in the running state.

2     Not Running
      Processes that are not running are kept in a queue, waiting for their turn to
      execute. Each entry in the queue is a pointer to a particular process. The queue
      is implemented using a linked list. The dispatcher works as follows: when a
      process is interrupted, it is transferred to the waiting queue. If the process has
      completed or aborted, it is discarded. In either case, the dispatcher then selects
      a process from the queue to execute.

Schedulers

Schedulers are special system software which handle process scheduling in various ways.
Their main task is to select the jobs to be submitted into the system and to decide which process
to run. Schedulers are of three types −

 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler

Long Term Scheduler

It is also called a job scheduler. A long-term scheduler determines which programs are
admitted to the system for processing. It selects processes from the queue and loads them
into memory for execution, where they become available for CPU scheduling.

The primary objective of the job scheduler is to provide a balanced mix of jobs, such
as I/O bound and processor bound. It also controls the degree of multiprogramming. If the
degree of multiprogramming is stable, then the average rate of process creation must be equal
to the average departure rate of processes leaving the system.

On some systems, the long-term scheduler may not be available or may be minimal.
Time-sharing operating systems have no long-term scheduler. When a process changes
state from new to ready, the long-term scheduler is used.

Short Term Scheduler

It is also called the CPU scheduler. Its main objective is to increase system performance
in accordance with the chosen set of criteria. It carries out the change of a process from the
ready state to the running state. The CPU scheduler selects a process among the processes
that are ready to execute and allocates the CPU to it.

Short-term schedulers, also known as dispatchers, make the decision of which process
to execute next. Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler

Medium-term scheduling is a part of swapping. It removes processes from memory and
reduces the degree of multiprogramming. The medium-term scheduler is in charge of
handling the swapped-out processes.

A running process may become suspended if it makes an I/O request. A suspended
process cannot make any progress towards completion. In this condition, to remove the
process from memory and make space for other processes, the suspended process is moved
to secondary storage. This process is called swapping, and the process is said to be swapped
out or rolled out. Swapping may be necessary to improve the process mix.

Comparison among Schedulers

S.N. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1 | It is a job scheduler | It is a CPU scheduler | It is a process swapping scheduler
2 | Speed is lesser than the short-term scheduler | Speed is fastest among the three | Speed is in between the other two
3 | It controls the degree of multiprogramming | It provides lesser control over the degree of multiprogramming | It reduces the degree of multiprogramming
4 | It is almost absent or minimal in time-sharing systems | It is also minimal in time-sharing systems | It is a part of time-sharing systems
5 | It selects processes from the pool and loads them into memory for execution | It selects those processes which are ready to execute | It can re-introduce a process into memory and execution can be continued

Context Switching

Context switching is the mechanism to store and restore the state or context of a CPU in the
Process Control Block so that a process execution can be resumed from the same point at a
later time. Using this technique, a context switcher enables multiple processes to share a
single CPU. Context switching is an essential feature of a multitasking operating system.

When the scheduler switches the CPU from executing one process to execute another,
the state from the current running process is stored into the process control block. After this,
the state for the process to run next is loaded from its own PCB and used to set the PC, registers,
etc. At that point, the second process can start executing.

Context switches are computationally intensive, since register and memory state must be saved
and restored. To reduce context-switching time, some hardware systems employ two or more
sets of processor registers. When the process is switched, the following information is stored
for later use.

 Program Counter
 Scheduling information
 Base and limit register value
 Currently used registers
 Changed State
 I/O State information
 Accounting information
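The save-and-restore pattern described above can be sketched in miniature. All class and field names here are illustrative, not a real kernel API; in a real kernel this happens in privileged mode on hardware registers, but the bookkeeping is the same.

```python
class PCB:
    """Process Control Block holding the saved CPU context of one process."""
    def __init__(self, pid):
        self.pid = pid
        self.program_counter = 0
        self.registers = {}       # currently used register values
        self.state = "ready"      # process state (ready / running / waiting)

class CPU:
    """A toy CPU with just a program counter and a register file."""
    def __init__(self):
        self.program_counter = 0
        self.registers = {}

def context_switch(cpu, old_pcb, new_pcb):
    # Save the running process's context into its PCB...
    old_pcb.program_counter = cpu.program_counter
    old_pcb.registers = dict(cpu.registers)
    old_pcb.state = "ready"
    # ...then load the next process's context from its own PCB.
    cpu.program_counter = new_pcb.program_counter
    cpu.registers = dict(new_pcb.registers)
    new_pcb.state = "running"

cpu = CPU()
p1, p2 = PCB(1), PCB(2)
cpu.program_counter, cpu.registers = 120, {"r0": 7}   # p1 is running
context_switch(cpu, p1, p2)   # p1's context (PC=120, r0=7) is preserved in its PCB
```

Because the old context is stored in the PCB, a later `context_switch(cpu, p2, p1)` would resume p1 exactly where it left off.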
Device Management in Operating System
The process of implementation, operation, and maintenance of a device by an operating
system is called device management. When we use computers we will be having various
devices connected to our system like mouse, keyboard, scanner, printer, and pen drives. So all
these are the devices and the operating system acts as an interface that allows the users to
communicate with these devices.

An operating system is responsible for successfully establishing the connection
between these devices and the system. The operating system uses the concept of drivers for
establishing a connection between these devices with the system.
Functions of Device Management
 Keeps track of all devices; the program responsible for this task is called the I/O controller.
 Monitoring the status of each device such as storage drivers, printers, and other
peripheral devices.
 Enforcing preset policies and taking a decision on which process gets the device
when and for how long.
 Allocates and deallocates the device in an efficient way.
Types of Device
There are three main types of devices:
1. Block Device: It stores information in fixed-size block, each one with its own
address. Example, disks.
2. Character Device: It delivers or accepts a stream of characters. the individual
characters are not addressable. For example, printers, keyboards etc.
3. Network Device: It is for transmitting data packets.
Features of Device Management in Operating System
 The operating system is responsible for managing device communication through the respective drivers.
 The operating system keeps track of all devices by using a program known as the input/output controller.
 It decides which process to assign to the CPU and for how long.
 The OS is responsible for fulfilling requests from devices to access a process.
 It connects the devices to various programs in an efficient way, without error.
 It deallocates devices when they are not in use.
Device Drivers
The operating system is responsible for managing device communication through the respective drivers. The operating system deals with many devices, such as the mouse, printer, and scanner, and it establishes communication between these devices and the computer through their respective drivers. Each and every device has its own driver; without its respective driver, a device cannot communicate with the system.
Device Tracking
The operating system keeps track of all devices by using a program known as the input/output controller. Apart from allowing the system to communicate through these drivers, the operating system is also responsible for keeping track of all the devices connected to the system.
If any device requests a process which is under execution by the CPU, the operating system sends a signal to the CPU to release that process immediately and move on to the next process from main memory, so that the request of the device is fulfilled. That is why the operating system has to continuously check the status of all the devices, and for doing that it uses a specialized program known as the input/output controller.
Process Assignment
The operating system decides which process to assign to the CPU and for how long. It is responsible for assigning processes to the CPU, for selecting an appropriate process from main memory, and for setting up how long that process gets to execute on the CPU.
The operating system is also responsible for fulfilling requests from devices to access a process. If the printer requests the process that is currently being executed by the CPU, it is the responsibility of the operating system to fulfill that request: it tells the CPU to release the process the printer is asking for and assigns it to the printer.
Connection
The operating system connects the devices to various programs in an efficient way, without error. We use software to access these devices because we cannot directly access the keyboard, mouse, printer, or scanner; we have to access them with the help of software. The operating system helps us establish an efficient, error-free connection with these devices through various software applications.
Device Allocation
Device allocation refers to the process of assigning specific devices to processes or
users. It ensures that each process or user has exclusive access to the required devices or shares
them efficiently without interference.
Device Deallocation

The operating system deallocates devices when they are no longer in use. While a device or its driver is in use, it occupies space in memory, so it is the responsibility of the operating system to continuously check which devices are in use and release any device that is no longer being used.

File Access Methods

Let's look at various ways to access files stored in secondary memory.

Sequential Access

Most operating systems access files sequentially; in other words, most files need to be accessed sequentially by the operating system.

In sequential access, the OS reads the file word by word. A pointer is maintained which initially points to the base address of the file. If the user wants to read the first word of the file, the pointer provides that word to the user and increases its value by one word. This process continues till the end of the file.

Modern systems do provide the concepts of direct access and indexed access, but the most used method is sequential access, due to the fact that most files, such as text files, audio files, and video files, need to be accessed sequentially.
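The pointer-based reading described above can be sketched as follows. `read_sequential` is a hypothetical helper; a real OS works on bytes and disk blocks rather than a Python list of words.

```python
def read_sequential(file_data):
    pointer = 0                           # initially points to the base of the file
    words = []
    while pointer < len(file_data):
        words.append(file_data[pointer])  # hand the current word to the user
        pointer += 1                      # advance the pointer by one word
    return words
```

Every read consumes the next word in order; there is no way to jump ahead without reading everything before it.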

Direct Access

Direct access is mostly required in the case of database systems. In most cases, we need filtered information from the database, and sequential access can be very slow and inefficient in such cases.

Suppose every block of storage stores 4 records and we know that the record we need is stored in the 10th block. In that case, sequential access will not be used, because it would traverse all the blocks in order to reach the needed record.

Direct access gives the required result despite the fact that the operating system has to perform some complex tasks, such as determining the desired block number. It is generally implemented in database applications.
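The block arithmetic in the example above can be sketched like this. The names are illustrative, and a toy in-memory "disk" with 4 records per block stands in for real storage.

```python
RECORDS_PER_BLOCK = 4

def block_of_record(record_number):
    # records 0-3 live in block 0, records 4-7 in block 1, and so on
    return record_number // RECORDS_PER_BLOCK

def read_direct(disk, record_number):
    block = disk[block_of_record(record_number)]       # jump straight to the block
    return block[record_number % RECORDS_PER_BLOCK]    # pick the record inside it

# a toy disk of 12 blocks, 4 records each
disk = [[f"rec{b * 4 + i}" for i in range(4)] for b in range(12)]
# record 38 sits in block 9, i.e. the 10th block, and is fetched without
# touching blocks 0-8
```

No earlier block is read: one division locates the block, one modulo locates the record.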

Indexed Access

If a file can be sorted on any of its fields, then an index can be assigned to a group of certain records, and a particular record can be accessed by its index. The index is nothing but the address of a record in the file.

With index accessing, searching in a large database becomes very quick and easy, but we need some extra space in memory to store the index values.
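A toy model of indexed access, under the assumption that the index fits in memory (all names are hypothetical; a real index would itself live on disk):

```python
# records stored by address: address -> (key field, payload)
records = {0: ("A101", "Alice"), 1: ("B202", "Bob"), 2: ("C303", "Carol")}

# the index maps the sorted-on key field to the record's address in the file
index = {key: addr for addr, (key, _) in records.items()}

def lookup(key):
    addr = index[key]       # one index probe instead of a sequential scan
    return records[addr]
```

The extra memory cost mentioned above is exactly the `index` dictionary: one entry per indexed record.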

Architecture of IOS Operating System
iOS is a mobile operating system developed by Apple Inc. for iPhones, iPads, and other Apple mobile devices. iOS is the second most popular and most used mobile operating system after Android.
The structure of the iOS operating system is layer-based; communication does not occur directly between applications and the hardware.

The layers between the application layer and the hardware layer enable this communication. The lower-level layers give basic services on which all applications rely, and the higher-level layers provide graphics and interface-related services. Most of the system interfaces come in a special package called a framework.

A framework is a directory that holds dynamic shared libraries like .a files, header files,
images, and helper apps that support the library. Each layer has a set of frameworks that are
helpful for developers.

Architecture of IOS

CORE-OS-Layer:
All the IOS technologies are built under the lowest level layer i.e. Core OS layer. These
technologies include:
1. Core Bluetooth Framework
2. External Accessories Framework
3. Accelerate Framework
4. Security Services Framework
5. Local Authorization Framework etc.
It supports 64 bit which enables the application to run faster.

CORE-SERVICES-Layer:
Some important frameworks are present in the CORE SERVICES layer, which help the iOS operating system maintain itself and provide better functionality. It is the 2nd lowest layer in the architecture shown above. Below are some important frameworks present in this layer:
1. Address-Book-Framework-
The Address Book Framework provides access to the contact details of the user.
2. Cloud-Kit-Framework-
This framework provides a medium for moving data between your app and iCloud.
3. Core-Data-Framework-
This is the technology that is used for managing the data model of a Model View
Controller app.
4. Core-Foundation-Framework-
This framework provides data management and service features for iOS
applications.
5. Core-Location-Framework-
This framework helps to provide the location and heading information to the
application.
6. Core-Motion-Framework-
All the motion-based data on the device is accessed with the help of the Core Motion
Framework.
7. Foundation-Framework-
An Objective-C wrapper covering many of the features found in the Core Foundation framework.
8. Health-Kit-Framework-
This framework handles the health-related information of the user.
9. Home-Kit-Framework-
This framework is used for talking with and controlling connected devices with the
user’s home.
10. SocialFramework-
It is simply an interface that will access users’ social media accounts.
11. StoreKitFramework-
This framework provides support for buying content and services from inside iOS apps.
MEDIA-Layer:
With the help of the media layer, we enable all the graphics, video, and audio technology of the system. This is the second layer in the architecture. The different frameworks of the MEDIA layer are:
1. UI-Kit-Graphics-
This framework provides support for designing images and animating the view content.
2. Core-Graphics-Framework-
This framework supports 2D vector and image-based rendering, and it is the native drawing engine for iOS.
3. Core-Animation-
This framework helps in optimizing the animation experience of the apps in iOS.
4. Media-Player-Framework-
This framework provides support for playing the playlist and enables the user to use
their iTunes library.
5. AV-Kit-
This framework provides various easy-to-use interfaces for video presentation,
recording, and playback of audio and video.
6. Open-AL-
This framework is an Industry Standard Technology for providing Audio.
7. Core-Images-
This framework provides advanced support for motionless images.
8. GL-Kit-
This framework manages advanced 2D and 3D rendering by hardware-accelerated
interfaces.
COCOATOUCH:
COCOA Touch is also known as the application layer which acts as an interface for the user to
work with the iOS Operating system. It supports touch and motion events and many more
features. The COCOA TOUCH layer provides the following frameworks :
1. Event-Kit-Framework-
This framework shows a standard system interface, using view controllers for viewing and changing events.
2. Game-Kit-Framework-
This framework provides support for users to share their game-related data online
using a Game Center.

3. Map-Kit-Framework-
This framework gives a scrollable map that one can include in your user interface
of the app.
4. Push-Kit-Framework-
This framework provides registration support.
Features-of-iOS-operating-System:
Let us discuss some features of the iOS operating system-
1. Highly secure compared to other operating systems.
2. iOS provides multitasking features like while working in one application we can
switch to another application easily.
3. iOS’s user interface includes multiple gestures like swipe, tap, pinch, Reverse
pinch.
4. iBooks, iStore, iTunes, Game Center, and Email are user-friendly.
5. It provides Safari as a default Web Browser.
6. It has a powerful API and a Camera.
7. It has deep hardware and software integration
Applications-of-IOS-Operating-System:
Here are some applications of the iOS operating system-
1. iOS Operating System is the Commercial Operating system of Apple Inc. and is
popular for its security.
2. iOS operating system comes with pre-installed apps which were developed by
Apple like Mail, Map, TV, Music, Wallet, Health, and Many More.
3. Swift Programming language is used for Developing Apps that would run on IOS
Operating System.
4. In iOS Operating System we can perform Multitask like Chatting along with
Surfing on the Internet.
Advantages-of-IOS-Operating-System:
The iOS operating system has some advantages over other operating systems available in the
market especially the Android operating system. Here are some of them-
1. More secure than other operating systems.
2. Excellent UI and fluid responsive
3. Suits best for Business and Professionals
4. Generate Less Heat as compared to Android.

Disadvantages of IOS Operating System:
Let us have a look at some disadvantages of the iOS operating system-
1. More Costly.
2. Less User Friendly as Compared to Android Operating System.
3. Not flexible, as it supports only iOS devices.
4. Battery Performance is poor.
File Systems in Operating System
A file system is a method an operating system uses to store, organize, and manage files
and directories on a storage device. Some common types of file systems include:
1. FAT (File Allocation Table): An older file system used by older versions of
Windows and other operating systems.
2. NTFS (New Technology File System): A modern file system used by Windows. It
supports features such as file and folder permissions, compression, and encryption.
3. ext (Extended File System): A file system commonly used on Linux and Unix-
based operating systems.
4. HFS (Hierarchical File System): A file system used by macOS.
5. APFS (Apple File System): A new file system introduced by Apple for their Macs
and iOS devices.
The advantages of using a file system
1. Organization: A file system allows files to be organized into directories and
subdirectories, making it easier to manage and locate files.
2. Data protection: File systems often include features such as file and folder
permissions, backup and restore, and error detection and correction, to protect data
from loss or corruption.
3. Improved performance: A well-designed file system can improve the
performance of reading and writing data by organizing it efficiently on disk.
Disadvantages of using a file system
1. Compatibility issues: Different file systems may not be compatible with each
other, making it difficult to transfer data between different operating systems.
2. Disk space overhead: File systems may use some disk space to store metadata and
other overhead information, reducing the amount of space available for user data.

3. Vulnerability: File systems can be vulnerable to data corruption, malware, and
other security threats, which can compromise the stability and security of the
system.
A file is a collection of related information that is recorded on secondary storage; alternatively, a file is a collection of logically related entities. From the user's perspective, a file is the smallest allotment of logical secondary storage.
The name of the file is divided into two parts as shown below:
 name
 extension, separated by a period.
The Following Issues Are Handled by the File System
We've seen a variety of data structures in which a file could be kept. The file system's job is to keep the files organized in the best way possible.
Free space is created on the hard drive whenever a file is deleted from it; many of these spaces may need to be recovered so they can be reallocated to other files. Choosing where to store the files on the hard disk is the main issue with files: a single block may or may not suffice to store a file, and a file may be kept in non-contiguous blocks of the disk. We must keep track of all the blocks across which a file is located.
Files Attributes and Their Operations

Attributes | Types | Operations
Name | doc | Create
Type | exe | Open
Size | jpg | Read
Creation Date | xls | Write
Author | c | Append
Last Modified | java | Truncate
Protection | class | Delete
- | - | Close

File type | Usual extension | Function
Executable | exe, com, bin | Ready-to-run machine-language program
Object | obj, o | Compiled machine language, not linked
Source Code | c, java, pas, asm, a | Source code in various languages
Batch | bat, sh | Commands to the command interpreter
Text | txt, doc | Textual data, documents
Word Processor | wp, tex, rtf, doc | Various word-processor formats
Archive | arc, zip, tar | Related files grouped into one compressed file
Multimedia | mpeg, mov, rm | For containing audio/video information
Markup | xml, html, tex | Textual data and documents
Library | lib, a, so, dll | Libraries of routines for programmers
Print or View | gif, pdf, jpg | A format for printing or viewing an ASCII or binary file
FILE DIRECTORIES

The collection of files is a file directory. The directory contains information about the
files, including attributes, location, and ownership. Much of this information, especially that is
concerned with storage, is managed by the operating system. The directory is itself a file,
accessible by various file management routines.
Information contained in a device directory is:
 Name
 Type
 Address
 Current length
 Maximum length
 Date last accessed
 Date last updated
 Owner id
 Protection information
The operation performed on the directory are:
 Search for a file
 Create a file
 Delete a file
 List a directory
 Rename a file
 Traverse the file system
The advantages of maintaining directories are:
 Efficiency: A file can be located more quickly.
 Naming: It becomes convenient for users, as two users can have the same name for different files, or different names for the same file.
 Grouping: Logical grouping of files can be done by properties e.g. all java
programs, all games etc.
SINGLE-LEVEL DIRECTORY
In this, a single directory is maintained for all the users.
 Naming problem: Users cannot have the same name for two files.
 Grouping problem: Users cannot group files according to their needs.

TWO-LEVEL DIRECTORY
In this, a separate directory is maintained for each user.
 Path name: Due to two levels there is a path name for every file to locate that file.
 Now, we can have the same file name for different users.
 Searching is efficient in this method.

TREE-STRUCTURED DIRECTORY
The directory is maintained in the form of a tree. Searching is efficient and also there
is grouping capability. We have absolute or relative path name for a file.

FILE ALLOCATION METHODS
Continuous Allocation
A single continuous set of blocks is allocated to a file at the time of file creation. Thus,
this is a pre-allocation strategy, using variable size portions. The file allocation table needs just
a single entry for each file, showing the starting block and the length of the file. This method
is best from the point of view of the individual sequential file. Multiple blocks can be read in
at a time to improve I/O performance for sequential processing. It is also easy to retrieve a
single block. For example, if a file starts at block b, and the ith block of the file is wanted, its
location on secondary storage is simply b+i-1.

Disadvantage
 External fragmentation will occur, making it difficult to find contiguous blocks of
space of sufficient length. A compaction algorithm will be necessary to free up
additional space on the disk.
 Also, with pre-allocation, it is necessary to declare the size of the file at the time of
creation.
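The b+i-1 address computation described above is pure arithmetic, which is exactly why retrieval is so cheap under contiguous allocation (the helper name is illustrative):

```python
def contiguous_block(start_block, i):
    """Disk address of the ith block (counting from 1) of a file
    that starts at start_block under contiguous allocation."""
    return start_block + i - 1
```

No table lookup or chain traversal is needed: the file allocation table supplies only the starting block and length.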
Linked Allocation(Non-contiguous allocation)
Allocation is on an individual block basis. Each block contains a pointer to the next
block in the chain. Again the file table needs just a single entry for each file, showing the
starting block and the length of the file. Although pre-allocation is possible, it is more common
simply to allocate blocks as needed. Any free block can be added to the chain. The blocks need not be contiguous. An increase in file size is always possible if a free disk block is available.
There is no external fragmentation because only one block at a time is needed but there can be
internal fragmentation but it exists only in the last disk block of the file.
Disadvantage
 Internal fragmentation exists in the last disk block of the file.
 There is an overhead of maintaining the pointer in every disk block.
 If the pointer of any disk block is lost, the file will be truncated.
 It supports only the sequential access of files.
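The chain traversal described above can be sketched like this. A dict stands in for the disk, and a sentinel value marks the end of the chain (all names are illustrative):

```python
NIL = -1   # marks the end of the chain

def read_chained(disk, start_block):
    data, block = [], start_block
    while block != NIL:
        payload, next_block = disk[block]   # each block stores (data, pointer)
        data.append(payload)
        block = next_block                  # blocks need not be contiguous
    return data

# a file occupying non-contiguous blocks 2 -> 5 -> 0
disk = {0: ("C", NIL), 2: ("A", 5), 5: ("B", 0)}
```

This also makes the drawbacks visible: reaching block k requires following k pointers (sequential access only), and corrupting one pointer truncates the file from that point on.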
Indexed Allocation
It addresses many of the problems of contiguous and chained allocation. In this case,
the file allocation table contains a separate one-level index for each file: The index has one
entry for each block allocated to the file. The allocation may be on the basis of fixed-size blocks
or variable-sized blocks. Allocation by blocks eliminates external fragmentation, whereas
allocation by variable-size blocks improves locality. This allocation technique supports both
sequential and direct access to the file and thus is the most popular form of file allocation.

Disk Free Space Management
Just as the space that is allocated to files must be managed, so the space that is not currently
allocated to any file must be managed. To perform any of the file allocation techniques, it is
necessary to know what blocks on the disk are available. Thus we need a disk allocation table
in addition to a file allocation table. The following are the approaches used for free space
management.
1. Bit Tables: This method uses a vector containing one bit for each block on the disk.
Each entry for a 0 corresponds to a free block and each 1 corresponds to a block in
use.
For example 00011010111100110001
In this vector every bit corresponds to a particular block and 0 implies that that
particular block is free and 1 implies that the block is already occupied. A bit table
has the advantage that it is relatively easy to find one or a contiguous group of free
blocks. Thus, a bit table works well with any of the file allocation methods. Another
advantage is that it is as small as possible.
2. Free Block List: In this method, each block is assigned a number sequentially and
the list of the numbers of all free blocks is maintained in a reserved block of the
disk.
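The bit-table scan from point 1, locating a contiguous run of free blocks, can be sketched as follows (the helper is hypothetical; it uses the example vector from the text, where 0 means free and 1 means in use):

```python
def find_free_run(bit_table, length):
    """Return the first block of a run of `length` contiguous free blocks,
    or None if no such run exists."""
    run_start, run_len = None, 0
    for i, bit in enumerate(bit_table):
        if bit == 0:                 # free block: extend the current run
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == length:
                return run_start
        else:                        # occupied block: the run is broken
            run_len = 0
    return None

bits = [int(c) for c in "00011010111100110001"]
```

One linear scan suffices, which is the "relatively easy to find a contiguous group of free blocks" advantage mentioned above.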

Layered structure is a type of system structure in which the different services of the operating system are split into various layers, where each layer has a specific, well-defined task to perform. It was created to improve on pre-existing structures like the monolithic structure (UNIX) and the simple structure (MS-DOS).
Example – The Windows NT operating system uses this layered approach as a part of it.
Design Analysis:
The whole operating system is separated into several layers (from 0 to n), as the diagram shows. Each of the layers must have its own specific function to perform. There are some rules in the implementation of the layers, as follows.
1. The outermost layer must be the User Interface layer.
2. The innermost layer must be the Hardware layer.
3. A particular layer can access all the layers present below it but it cannot access the
layers present above it. That is layer n-1 can access all the layers from n-2 to 0 but
it cannot access the nth layer.
Thus, if the user layer wants to interact with the hardware layer, the request travels through all the layers from n-1 down to 1. Each layer must be designed and implemented such that it needs only the services provided by the layers below it.

5 Marks
1.How does a file system organize data?
2.What is a file allocation table (FAT)?
3.What is NTFS (New Technology File System)?
4.What is file system?

5.Explain about memory management?

6.Explain about process scheduling?

7. Explain about scheduling policy?

8. Discuss about services layer?

9. Discuss about core OS layer?

10. How is one file system different from another?

10 marks

1. Explain about case studies?

2. Explain about linux System?

3. Explain about process scheduling?

4. Discuss about Framework?

5. Discuss about core OS layer?

6. Architecture and SDK framework?

7. Explain about media layer?

8. Explain about file system?

9. Discuss about IOS?

One Mark

1. Which of these commands do we use for removing the files?

a. erase

b. delete

c. rm

d. dm

Answer: (c) rm

2. Which of these hardware architectures does Red Hat not support?

a. Macintosh

b. Alpha

c. IBM-compatible

d. SPARC

Answer: (a) Macintosh

3. Which of these commands do we use for the creation of an installation boot floppy in Linux?

a. bootfp disk

b. mkboot disk

c. dd & rawrite

d. w & rawrite

Answer: (c) dd & rawrite

4. In which of these directories does the system store the default (skeleton) files that are used for the creation of new user directories?

a. /etc/users

b. /etc/skel/

c. /etc/default

d. /usr/temp

Answer: (b) /etc/skel

5. Which of these is NOT used in the form of a communication command?

a. write

b. mesg

c. mail

d. grep

Answer: (d) grep

6. The Virtual memory is:

a. An illusion of a large main memory

b. A large main memory

c. A large secondary memory

d. None of the above

Answer: (a) An illusion of a large main memory

7. A CPU generates 32-bit virtual addresses, and the page size is 4 kilobytes. The processor has a TLB (translation lookaside buffer) that is 4-way set associative and can hold a total of 128 page table entries. The minimum size of the TLB tag is:

a. 20 bits

b. 15 bits

c. 13 bits

d. 11 bits

Answer: (b) 15 bits

8. Thrashing occurs in a system when:

a. The processes on the system access pages and not memory frequently

b. A page fault pops up

c. The processes on the system are in running state

d. The processes on the system are in the waiting state

Answer: (a) The processes on the system access pages and not memory frequently

9. The page fault occurs whenever:

a. The requested page isn’t in the memory

b. The requested page is in the memory

c. An exception is thrown

d. The page is corrupted

Answer: (a) The requested page isn’t in the memory

10. Consider a computer that uses 32–bit physical address, 46–bit virtual address, along with a
page table organisation that is three-level. Here, the base register of the page table stores the
T1 (first–level table) base address, which occupies exactly one page. Every entry of the T1
stores the T2 (second-level table) page’s base address. Similarly, every entry of T2 stores the
T3 (third-level table) page’s base address and every entry of T3 stores a PTE (page table entry).
The size of a PTE is 32 bits. In the computer, the processor has a 1 MB, 16-way, virtually indexed, set-associative, physically tagged cache. If the size of the cache block is 64 bytes, then what is the size of a page in this computer in kilobytes?

a. 4

b. 2

c. 16

d. 8

Answer: (d) 8

11. Consider that the page fault service time in a computer is 10ms and the average memory
access time is 20ns. If, in case, it generates a page fault every 10^6 memory accesses, then
what would be the effective access time for this memory?

a. 30ns

b. 21ns

c. 35ns

d. 23ns

Answer: (a) 30ns
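The arithmetic behind this answer can be checked directly, with the unit conversions made explicit:

```python
# Effective access time = memory access time + fault rate * fault service time
memory_access_ns = 20
fault_service_ns = 10 * 10**6    # 10 ms expressed in nanoseconds
fault_rate = 1 / 10**6           # one page fault per 10^6 memory accesses

eat_ns = memory_access_ns + fault_rate * fault_service_ns
```

The page-fault term contributes 10 ns on average, so the effective access time is 20 + 10 = 30 ns, matching option (a).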

