Aos PG
Study Material
Paper Name : Advanced Operating Systems
Paper Code : 23PCSC05
Batch : 2023-2025
Semester : II
Staff In charge : Umamaheswari .M
Advanced operating system
Basics of Operating Systems: What is an Operating System? – Mainframe Systems – Desktop
Systems – Multiprocessor Systems – Distributed Systems – Clustered Systems – Real-Time
Systems – Handheld Systems – Feature Migration – Computing Environments – Process
Scheduling – Cooperating Processes – Inter-Process Communication – Deadlocks – Prevention –
Avoidance – Detection – Recovery.
Realtime Operating Systems: Introduction – Applications of Real Time Systems – Basic Model of
Real Time System – Characteristics – Safety and Reliability - Real Time Task Scheduling
Text Books
1. Abraham Silberschatz, Peter Baer Galvin, and Greg Gagne, “Operating System Concepts”,
Seventh Edition, John Wiley & Sons, 2004.
2. Mukesh Singhal and Niranjan G. Shivaratri, “Advanced Concepts in Operating Systems –
Distributed, Database, and Multiprocessor Operating Systems”, Tata McGraw-Hill, 2001.
Reference Books
1. Rajib Mall, “Real-Time Systems: Theory and Practice”, Pearson Education India, 2006.
2. Pramod Chandra P. Bhatt, “An Introduction to Operating Systems: Concepts and Practice”,
PHI, Third Edition, 2010.
3. Daniel P. Bovet and Marco Cesati, “Understanding the Linux Kernel”, Third Edition,
O’Reilly, 2005.
4. Neil Smyth, “iPhone iOS 4 Development Essentials – Xcode”, Fourth Edition, Payload
Media, 2011.
Web Resources
1. https://www.udacity.com/course/advanced-operating-systems--ud189
2. https://onlinecourses.nptel.ac.in/noc20_cs04/preview
3. https://minnie.tuhs.org/CompArch/Resources/os-notes.pdf
An operating system acts as an intermediary between the user of a computer and computer
hardware. The purpose of an operating system is to provide an environment in which a
user can execute programs conveniently and efficiently.
An operating system is software that manages computer hardware. The hardware must
provide appropriate mechanisms to ensure the correct operation of the computer system
and to prevent user programs from interfering with the proper operation of the system. A
more common definition is that the operating system is the one program running at all
times on the computer (usually called the kernel), with all else being application programs.
An operating system is concerned with the allocation of resources and services, such as
memory, processors, devices, and information. The operating system correspondingly
includes programs to manage these resources, such as a traffic controller, a scheduler,
a memory management module, I/O programs, and a file system.
The operating system has evolved through the years. The following table shows the history
of OS.
Types of OS
Generation   Year          Electronic device used       Types of OS / Devices
First        1945–1955     Vacuum tubes                 Plug boards
Second       1955–1965     Transistors                  Batch systems
Third        1965–1980     Integrated circuits (ICs)    Multiprogramming
Fourth       Since 1980    Large-scale integration      Personal computers
Let us now discuss some of the important characteristic features of operating systems:
Device Management: The operating system keeps track of all the devices. So, it is
also called the Input/Output controller that decides which process gets the device,
when, and for how much time.
5
File Management: It keeps track of where files and directories are stored, their status,
and who can access them, and it allocates and de-allocates files and directories.
Job Accounting: It keeps track of time and resources used by various jobs or users.
Error-detecting Aids: These contain methods that include the production of dumps,
traces, error messages, and other debugging and error-detecting methods.
Memory Management: It keeps track of the primary memory, i.e., which parts are in use,
by whom, and which parts are free, and it allocates memory when a process or program
requests it.
Processor Management: It allocates the processor to a process and then de-allocates
the processor when it is no longer required or the job is done.
Control on System Performance: It records the delay between a request for a service and
the system's response.
Security: It prevents unauthorized access to programs and data using passwords or
some kind of protection technique.
Convenience: An OS makes a computer more convenient to use.
Efficiency: An OS allows the computer system resources to be used efficiently.
Ability to Evolve: An OS should be constructed in such a way as to permit the
effective development, testing, and introduction of new system functions without
interfering with existing services.
Throughput: An OS should be constructed so that it can give maximum
throughput (number of tasks per unit time).
System software includes compilers, loaders, editors, the OS, etc. Application programs
include business programs and database programs.
Every computer must have an operating system to run other programs. The operating
system coordinates the use of the hardware among the various system programs and
application programs for various users. It simply provides an environment within
which other programs can do useful work.
The operating system is a set of special programs that run on a computer system that
allows it to work properly. It performs basic tasks such as recognizing input from the
keyboard, keeping track of files and directories on the disk, sending output to the
display screen, and controlling peripheral devices.
The operating system performs several tasks and serves several purposes, which are
described below.
It controls the allocation and use of the computing system's resources among the
various users and tasks.
It provides an interface between the computer hardware and the programmer that
simplifies and makes it feasible for coding and debugging of application programs.
1. Provides the facilities to create and modify programs and data files using an
editor.
2. Provides access to the compiler for translating a user program from a high-level
language to machine language.
3. Provides a loader program to move the compiled program code into the computer's
memory for execution.
4. Provides routines that handle the details of I/O programming.
The module that keeps track of the status of devices is called the I/O traffic
controller. Each I/O device has a device handler that resides in a separate process
associated with that device.
The I/O subsystem consists of
A memory-management component that includes buffering, caching, and spooling.
A general device driver interface.
Drivers for Specific Hardware Devices
The following sections discuss the software tools involved in preparing programs for
execution on specific hardware: assemblers, compilers, interpreters, and loaders.
Assembler
An assembler is a program that translates assembly-language source code into machine code.
Compilers and Interpreters
High-level languages – examples are C, C++, Java, Python, etc. (around 300+ well-known
high-level languages) – are processed by compilers and interpreters. A compiler is a
program that accepts a source program in a high-level language and produces machine code
in one go. Some compiled languages are FORTRAN, COBOL, C, C++, Rust, and Go. An
interpreter is a program that does the same thing but converts high-level code to machine
code line by line rather than all at once. Examples of interpreted languages are
Python
Perl
Ruby
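The compile-then-execute split can be illustrated with Python's own built-ins (a sketch, not part of the original text): `compile()` translates a whole expression to bytecode in one step, and the interpreter then executes that bytecode.

```python
# Illustrative sketch: Python's built-in compile() turns source text into
# a bytecode object "in one go" (the compiler-like step), which the
# interpreter then executes (the interpreter-like step).
source = "x * 2 + 1"

# "Compile" step: translate the whole expression once.
code_obj = compile(source, "<example>", "eval")

# "Execute" step: the interpreter runs the compiled bytecode.
result = eval(code_obj, {"x": 20})
print(result)  # 41
```

In a real compiled language such as C, the translation step produces native machine code ahead of time; here both steps happen inside one process, which is exactly why Python is usually classed as interpreted.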
Loader
A loader is a routine that loads an object program and prepares it for execution. There
are various loading schemes: absolute, relocating, and direct-linking. In general, the
loader must load, relocate, and link the object program. In a simple loading scheme, the
assembler outputs the machine-language translation of a program on a secondary device,
and the loader places it into memory and transfers control to it. Since the loader
program is much smaller than the assembler, this leaves more memory available to the
user's program.
An operating system has two main components: the shell and the kernel.
The shell is the outermost layer of the operating system, and it handles interaction with
the user. Its main task is to manage the interaction between the user and the OS: it
accepts input from the user, interprets that input for the OS, and handles the output
from the OS. It works as the medium of communication between the user and the OS.
Kernel
The kernel is the core component of the operating system. The rest of the components
depend on the kernel for the essential services that the operating system provides. The
kernel is the primary interface between the operating system and the hardware.
Functions of Kernel
The kernel handles process management, memory management, device management, interrupt
handling, and system calls.
Types of Kernel
Common kernel designs include the monolithic kernel, the microkernel, and the hybrid
kernel.
32-Bit Operating System | 64-Bit Operating System
A 32-bit OS is required for 32-bit processors, which are not capable of running a 64-bit OS. | A 64-bit processor can run either a 32-bit or a 64-bit OS.
A smaller amount of data is managed at a time in a 32-bit OS than in a 64-bit OS. | A large amount of data can be handled in a 64-bit OS.
A 32-bit OS can address 2^32 bytes of RAM. | A 64-bit OS can address 2^64 bytes of RAM.
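The addressing limits above follow directly from the pointer width; a quick sketch in Python:

```python
# Sketch: theoretical address-space limits implied by pointer width.
def address_space_bytes(bits):
    """Maximum number of addressable bytes for a given pointer width."""
    return 2 ** bits

GIB = 1024 ** 3  # one gibibyte

# A 32-bit OS tops out at 4 GiB of addressable RAM.
print(address_space_bytes(32) // GIB)   # 4

# A 64-bit OS can in theory address 2^34 GiB (16 EiB).
print(address_space_bytes(64) // GIB)   # 17179869184
```

In practice real systems address far less than the 64-bit theoretical limit, but the 4 GiB ceiling is a hard constraint on 32-bit operating systems.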
It helps in managing the data present in the device i.e. Memory Management.
It helps in making the best use of computer hardware.
It helps in maintaining the security of the device.
It helps different applications run efficiently.
Operating System lies in the category of system software. It basically manages all the
resources of the computer. An operating system acts as an interface between the
software and different parts of the computer or the computer hardware. The operating
system is designed in such a way that it can manage the overall resources and
operations of the computer.
Operating System is a fully integrated set of specialized programs that handle all the
operations of the computer. It controls and monitors the execution of all other programs
that reside in the computer, which also includes application programs and other system
software of the computer. Examples of Operating Systems are Windows, Linux, Mac
OS, etc.
Functions of an Operating System
Resource Management: The operating system manages and allocates memory, CPU
time, and other hardware resources among the various programs and processes
running on the computer.
Process Management: The operating system is responsible for starting, stopping,
and managing processes and programs. It also controls the scheduling of processes
and allocates resources to them.
Memory Management: The operating system manages the computer’s primary
memory and provides mechanisms for optimizing memory usage.
Security: The operating system provides a secure environment for the user,
applications, and data by implementing security policies and mechanisms such as
access controls and encryption.
Job Accounting: It keeps track of time and resources used by various jobs or users.
File Management: The operating system is responsible for organizing and managing
the file system, including the creation, deletion, and manipulation of files and
directories.
Device Management: The operating system manages input/output devices such as
printers, keyboards, mice, and displays. It provides the necessary drivers and
interfaces to enable communication between the devices and the computer.
Networking: The operating system provides networking capabilities such as
establishing and managing network connections, handling network protocols, and
sharing resources such as printers and files over a network.
User Interface: The operating system provides a user interface that enables users to
interact with the computer system. This can be a Graphical User Interface (GUI), a
Command-Line Interface (CLI), or a combination of both.
Backup and Recovery: The operating system provides mechanisms for backing up
data and recovering it in case of system failures, errors, or disasters.
Virtualization: The operating system provides virtualization capabilities that allow
multiple operating systems or applications to run on a single physical machine. This
can enable efficient use of resources and flexibility in managing workloads.
Performance Monitoring: The operating system provides tools for monitoring and
optimizing system performance, including identifying bottlenecks, optimizing
resource usage, and analyzing system logs and metrics.
Time-Sharing: The operating system enables multiple users to share a computer
system and its resources simultaneously by providing time-sharing mechanisms that
allocate resources fairly and efficiently.
System Calls: The operating system provides a set of system calls that enable
applications to interact with the operating system and access its resources. System
calls provide a standardized interface between applications and the operating system,
enabling portability and compatibility across different hardware and software
platforms.
Error-detecting Aids: These contain methods that include the production of dumps,
traces, error messages, and other debugging and error-detecting methods.
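As a sketch of the system-call interface described above, Python's `os` module exposes thin wrappers over common POSIX system calls (the filename `demo.txt` is just an example):

```python
# Sketch: the os module provides thin wrappers over operating-system
# system calls (POSIX names shown; exact behavior is OS-dependent).
import os

pid = os.getpid()   # wraps the getpid() system call
cwd = os.getcwd()   # wraps getcwd()

# open/write/close are the classic low-level file-I/O system calls.
fd = os.open("demo.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
os.write(fd, b"written via system calls\n")
os.close(fd)

print(pid > 0, os.path.exists("demo.txt"))  # True True
```

Applications rarely call this layer directly; higher-level libraries (such as Python's `open()`) sit on top of it, which is exactly the portability benefit the paragraph above describes.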
Let us now see some of the objectives of the operating system, which are mentioned below.
Convenient to use: One of the objectives is to make the computer system more
convenient and efficient to use.
User Friendly: To make the computer system more interactive with a more
convenient interface for the users.
Easy Access: To provide easy access to users for using resources by acting as an
intermediary between the hardware and its users.
Management of Resources: For managing the resources of a computer in a better
and faster way.
Controls and Monitoring: By keeping track of who is using which resource,
granting resource requests, and mediating conflicting requests from different
programs and users.
Fair Sharing of Resources: Providing efficient and fair sharing of resources
between the users and programs.
There are so many factors to be considered while choosing the best Operating System
for our use. These factors are mentioned below.
Price Factor: Price is one factor in choosing the correct operating system, as some
operating systems are free, like Linux, while others are paid, like Windows and
macOS.
Accessibility Factor: Some operating systems are easy to use, like macOS and iOS,
while others are a little more complex to understand, like Linux. So, you must
choose the operating system that is most accessible to you.
Compatibility Factor: Some operating systems support very few applications, whereas
others support more. You must choose an OS that supports the applications you
require.
Security Factor: Security is also a factor in choosing the correct OS; for example,
macOS provides some additional security features, while Windows has fewer.
Mainframe Systems
A mainframe is a large computer system designed to process very large amounts of data
quickly. Mainframe systems are widely used in industries like the financial sector, airline
reservations, logistics and other fields where a large number of transactions need to be
processed as part of routine business practices.
These types of operating systems are mainly used on e-commerce websites or servers
dedicated to business-to-business transactions.
The operating system in mainframe systems is oriented toward handling many jobs
simultaneously.
Mainframe operating systems can handle a large volume of input/output transactions.
Server Operating System
Server operating systems run on machines that act as dedicated servers. Examples of
server operating systems are Solaris, Linux, and Windows.
Server operating systems allow sharing of multiple resources such as hardware, files,
or print services. Web pages are stored on a server, which handles requests and responses.
Personal Computer Operating System
These types of operating systems are installed on machines used by ordinary users in
large numbers.
They support multiprogramming, running multiple programs like Word, Excel, games,
and Internet access simultaneously on a single machine.
For example − Linux, Windows, Mac
Handheld Operating System
Handheld operating systems are present in all handheld devices like smartphones and
tablets, also called Personal Digital Assistants. The popular handheld operating
systems in today's market are Android and iOS.
These operating systems need high-performance processors and are also embedded with
different types of sensors.
Embedded Operating System
Embedded operating systems are designed for systems that are not considered computers.
These operating systems are preinstalled on devices by the device manufacturer.
All pre-installed software is in ROM, and no changes can be made to it by the users.
Examples of devices with embedded operating systems are washing machines, ovens, etc.
Smart Card Operating System
Smart card operating systems run on smart cards, which contain a processor chip
embedded inside the card. These systems have severe processing power and memory
constraints.
These operating systems handle single functions, like making electronic payments,
and are licensed software.
Desktop Systems in OS
A desktop system refers to a personal computer setup that is typically used on a desk or
table. It consists of various hardware components and an operating system that enables
users to perform a wide range of tasks such as document editing, web browsing,
gaming, multimedia consumption, and more.
An operating system (OS) acts as an interface between the hardware and software of a
desktop system. It manages system resources, facilitates software execution, and
provides a user-friendly environment. Different operating systems offer distinct
features, compatibility, and performance, catering to the diverse needs and preferences
of users.
Desktop systems play a crucial role in various domains, including education, business,
entertainment, and personal productivity.
They provide individuals and organizations with powerful computing capabilities,
enabling complex tasks to be completed efficiently.
Desktop systems facilitate creativity, communication, data analysis, and knowledge
sharing, contributing to enhanced productivity and innovation
Central Processing Unit (CPU): The CPU is the brain of a desktop system,
responsible for executing instructions and performing calculations. It processes data
and carries out tasks based on the instructions provided by software programs. The
CPU’s performance is measured by its clock speed, number of cores, and cache size.
Random Access Memory (RAM): RAM is a type of volatile memory that temporarily
stores data and instructions for the CPU to access quickly. It allows for efficient
multitasking and faster data retrieval, significantly impacting the overall performance
of the system. The amount of RAM in a desktop system determines its capability to
handle multiple programs simultaneously.
Storage Devices: Desktop systems utilize various storage devices to store and retrieve
data. Hard Disk Drives (HDDs) are the traditional storage medium, offering large
capacities but slower read/write speeds. Solid-State Drives (SSDs) are a newer
technology that provides faster data access, enhancing the system’s responsiveness and
reducing loading times.
Graphics Processing Unit (GPU): The GPU is responsible for rendering images,
videos, and animations on the computer screen. It offloads the graphical processing
tasks from the CPU, ensuring smooth visuals and enabling resource-intensive
applications such as gaming, video editing, and 3D modeling. High-performance GPUs
are essential for users who require demanding graphical capabilities.
Input and Output Devices: Desktop systems are equipped with various input and
output devices. Keyboards and mice are the primary input devices, allowing users to
interact with the system and input commands. Monitors, printers, speakers, and
headphones serve as output devices, providing visual or auditory feedback based on the
system’s output.
Desktop systems have evolved significantly over the years. From the bulky and
limited-capability systems of the past to the sleek and powerful computers of today,
technological advancements have revolutionized the desktop computing experience.
Smaller form factors, increased processing power, improved storage technologies, and
enhanced user interfaces are some of the notable advancements that have shaped the
evolution of desktop systems.
Windows: Windows, developed by Microsoft, is one of the most widely used desktop
operating systems globally.
macOS: macOS is the operating system designed specifically for Apple’s Mac
computers. Known for its sleek and intuitive interface, macOS offers seamless
integration with other Apple devices and services.
Linux: Linux is an open-source operating system that provides a high degree of
customization and flexibility. It is favored by developers, system administrators, and
tech enthusiasts due to its stability, security, and vast array of software options.
Multiprocessor Systems
Most computer systems are single-processor systems, i.e., they have only one processor.
However, multiprocessor or parallel systems are increasing in importance nowadays.
These systems have multiple processors working in parallel that share the computer
clock, memory, bus, peripheral devices, etc. A diagram of the multiprocessor
architecture is shown below.
Applications of Multiprocessor
Enhanced performance.
Multiple applications.
Multi-tasking inside an application.
High throughput and responsiveness.
Hardware sharing among CPUs.
Advantages:
Increased throughput, since more work can be done in the same amount of time.
Economy of scale, since the processors can share memory and peripheral devices.
Increased reliability, since the failure of one processor does not halt the whole
system.
Disadvantages:
Synchronization issues: Multiprocessor systems require synchronization between
processors to ensure that tasks are executed correctly and efficiently, which can add
complexity and overhead to the system.
Limited performance gains: Not all applications can benefit from multiprocessor
systems, and some applications may only see limited performance gains when
running on a multiprocessor system.
Distributed System
A distributed system is a collection of physically separate, possibly heterogeneous
computers that are networked together. The processors do not share memory or a clock;
they communicate with one another through communication networks, and the collection
appears to users as a single coherent system.
Clustered Systems
Clustered systems are similar to parallel systems in that both have multiple CPUs.
However, a major difference is that clustered systems are created from two or more
individual computer systems merged together. Basically, they are independent computer
systems with common storage, and the systems work together.
A diagram to better illustrate this is −
Clustered systems are a combination of hardware clusters and software clusters. The
hardware clusters help in sharing high-performance disks between the systems, while
the software clusters make all the systems work together.
Each node in the clustered systems contains the cluster software. This software
monitors the cluster system and makes sure it is working as required. If any one of
the nodes in the clustered system fail, then the rest of the nodes take control of its
storage and resources and try to restart.
Types of Clustered Systems
There are primarily two types of clustered systems i.e. asymmetric clustering system
and symmetric clustering system. Details about these are given as follows −
Asymmetric Clustering System
In this system, one of the nodes in the clustered system is in hot-standby mode while
all the others run the required applications. Hot-standby mode is a failsafe in which
the standby node, as part of the system, continuously monitors the active server; if
the server fails, the hot-standby node takes its place.
Symmetric Clustering System
In a symmetric clustering system, two or more nodes all run applications and monitor
each other. This is more efficient than the asymmetric system, as it uses all the
hardware and doesn't keep a node merely as a hot standby.
There are many different purposes that a clustered system can be used for. Some of these can
be scientific calculations, web support etc. The clustering systems that embody some major
attributes are −
Load Balancing Clusters
In this type of cluster, the nodes in the system share the workload to provide better
performance. For example, a web-based cluster may assign different web queries to
different nodes so that system performance is optimized. Some clustered systems use a
round-robin mechanism to assign requests to different nodes in the system.
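The round-robin mechanism mentioned above can be sketched as a simple dispatcher (node names and requests are invented for illustration):

```python
# Sketch: a round-robin dispatcher assigning incoming web queries to
# cluster nodes in rotation (node names are made up for illustration).
from itertools import cycle

nodes = ["node-a", "node-b", "node-c"]
next_node = cycle(nodes)  # endlessly rotates through the node list

def assign(request):
    """Send each request to the next node in rotation."""
    return (request, next(next_node))

requests = ["GET /", "GET /search", "GET /cart", "GET /login"]
print([assign(r) for r in requests])
# [('GET /', 'node-a'), ('GET /search', 'node-b'),
#  ('GET /cart', 'node-c'), ('GET /login', 'node-a')]
```

Real load balancers often weight this rotation by node capacity or current load, but plain rotation already spreads requests evenly when nodes are identical.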
High Availability Clusters
These clusters improve the availability of the clustered system. They have extra nodes,
which are used only if some system components fail. High-availability clusters thus
remove single points of failure, i.e., nodes whose failure would lead to the failure of
the whole system. These types of clusters are also known as failover clusters or HA
clusters.
Performance
Clustered systems result in high performance as they contain two or more individual
computer systems merged together. These work as a parallel unit and result in much better
performance for the system.
Fault Tolerance
Clustered systems are quite fault tolerant and the loss of one node does not result in the loss
of the system. They may even contain one or more nodes in hot standby mode which allows
them to take the place of failed nodes.
Scalability
Clustered systems are quite scalable as it is easy to add a new node to the system. There is
no need to take the entire cluster down to add a new node.
Real-Time Systems
A real-time system is one that is subject to real-time constraints, i.e., the response
should be guaranteed within a specified timing constraint, or the system should meet the
specified deadline. Examples include flight control systems, real-time monitors, etc.
1. Hard real-time system: This type of system can never miss its deadline. Missing the
deadline may have disastrous consequences. The usefulness of results produced by a
hard real-time system decreases abruptly and may become negative if tardiness
increases. Tardiness means how late a real-time system completes its task with
respect to its deadline. Example: Flight controller system.
2. Soft real-time system: This type of system can miss its deadline occasionally, with
some acceptably low probability. Missing a deadline has no disastrous
consequences. The usefulness of results produced by a soft real-time system
decreases gradually with an increase in tardiness. Example: telephone switches.
3. Firm Real-Time Systems: These are systems that lie between hard and soft real-time
systems. In firm real-time systems, missing a deadline is tolerable, but the usefulness
of the output decreases with time. Examples of firm real-time systems include online
trading systems, online auction systems, and reservation systems.
1. Job: A job is a small piece of work that can be assigned to a processor and may or
may not require resources.
2. Task: A set of related jobs that jointly provide some system functionality.
3. Release time of a job: It is the time at which the job becomes ready for execution.
4. Execution time of a job: It is the time taken by the job to finish its execution.
5. Deadline of a job: It is the time by which a job should finish its execution. Deadline
is of two types: absolute deadline and relative deadline.
6. Response time of a job: It is the length of time from the release time of a job to the
instant when it finishes.
7. The maximum allowable response time of a job is called its relative deadline.
8. The absolute deadline of a job is equal to its relative deadline plus its release time.
9. Processors are also known as active resources. They are essential for the execution of
a job. A job must have one or more processors in order to execute and proceed
towards completion. Example: computer, transmission links.
10. Resources are also known as passive resources. A job may or may not require a
resource during its execution. Example: memory, mutex
11. Two resources are identical if they can be used interchangeably; otherwise, they are
heterogeneous.
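The timing relations in items 3–8 can be checked with a small worked example (all numbers are hypothetical):

```python
# Sketch: the timing relations defined above, for a hypothetical job.
release_time = 10       # job becomes ready for execution at t = 10
execution_time = 4      # CPU time the job needs
relative_deadline = 15  # maximum allowable response time

# Absolute deadline = relative deadline + release time.
absolute_deadline = release_time + relative_deadline
print(absolute_deadline)  # 25

# Suppose the job actually finishes at t = 21; its response time is
# the length of time from release to completion.
finish_time = 21
response_time = finish_time - release_time
print(response_time)                        # 11
print(response_time <= relative_deadline)   # True: the deadline is met
```

Note that the response time (11) exceeds the execution time (4) because the job may wait for the processor after being released; scheduling determines how large that gap gets.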
Advantages:
Predictable, guaranteed response times for critical tasks.
Efficient use of devices and resources, since tasks run within known time bounds.
Suitability for safety-critical applications such as flight control and industrial
automation.
Disadvantages:
Real-time systems can be complex and difficult to design, implement, and test,
requiring specialized skills and expertise.
They can be expensive to develop, as they require specialized hardware and software
components.
Real-time systems are typically less flexible than other types of computer systems, as
they must adhere to strict timing requirements and cannot be easily modified or
adapted to changing circumstances.
They can be vulnerable to failures and malfunctions, which can have serious
consequences in critical applications.
Real-time systems require careful planning and management, as they must be
continually monitored and maintained to ensure they operate correctly.
Handheld Systems
These operating systems need high-performance processors and are also embedded with
various types of sensors.
1. Since the development of handheld computers in the 1990s, the demand for software
to operate and run on these devices has increased.
2. Three major competitors have emerged in the handheld PC world with three different
operating systems for these handheld PCs.
3. Out of the three companies, the first was the Palm Corporation with their PalmOS.
4. Microsoft also released what was originally called Windows CE. Microsoft’s recently
released operating system for the handheld PC comes under the name of Pocket PC.
5. More recently, some companies producing handheld PCs have also started offering a
handheld version of the Linux operating system on their machines.
Palm OS
Symbian OS
Linux OS
Windows
Android
Palm OS:
Since the Palm Pilot was introduced in 1996, the Palm OS platform has provided
various mobile devices with essential business tools, as well as the capability to
access the internet via a wireless connection.
These devices have mainly concentrated on providing basic personal-information-
management applications. The latest Palm products have progressed a lot, packing in
more storage, wireless internet, etc.
Symbian OS:
It has been the most widely-used smartphone operating system because of its ARM
architecture before it was discontinued in 2014. It was developed by Symbian Ltd.
This operating system consists of two subsystems where the first one is the
microkernel-based operating system which has its associated libraries and the second
one is the interface of the operating system with which a user can interact.
Since this operating system consumes very little power, it was developed for
smartphones and handheld devices.
It has good connectivity as well as stability.
It can run applications that are written in Python, Ruby, .NET, etc.
Linux OS: An open-source operating system whose kernel has been adapted for many
handheld devices.
Windows OS: Microsoft's handheld operating systems, originally Windows CE and later
Pocket PC and Windows Mobile.
Android OS: Google's Linux-based mobile operating system, currently the most widely
used handheld operating system.
Advantages of handheld operating systems:
1. Less cost.
2. Less weight and size.
3. Less heat generation.
4. More reliability.
Disadvantages of handheld operating systems:
1. Less speed.
2. Small size.
3. Limited input/output and memory (less memory is available).
How are handheld operating systems different from desktop operating systems?
Since handheld operating systems are designed to run on machines with slower
processors and less memory, they are built to use less memory and require fewer
resources.
They are also designed to work with different types of hardware as compared to
standard desktop operating systems.
It happens because the power requirements for standard CPUs far exceed the power
of handheld devices.
Handheld devices aren’t able to dissipate large amounts of heat generated by CPUs.
To deal with this problem, big companies like Intel and Motorola have designed
smaller CPUs with lower power requirements and lower heat generation. Many handheld
devices depend entirely on flash memory cards for their internal storage because
large hard drives do not fit into handheld devices.
Computing Environment
When we want to solve a problem using a computer, the computer makes use of
various devices that work together to solve that problem. There may be various
ways to solve a problem, and we can use computer devices arranged in different
ways to solve different problems. The arrangement of computer devices to solve a
problem is called a computing environment. The formal definition of a computing
environment is as follows...
Computing Environment is a collection of computers which are used to
process and exchange the information to solve various types of computing
problems.
The client-server environment contains two kinds of machines: client machines
and server machines. The two exchange information through an application. The
client is an ordinary computer such as a PC, tablet, or mobile phone, while the
server is a powerful computer that stores huge amounts of data and manages
files, email, and so on. In this environment, the client requests data and the
server provides it. Communication between client and server is commonly
performed using HTTP (Hypertext Transfer Protocol).
A distributed computing environment is a collection of a large number of
computers working together on a single application.
Process Scheduling
Process scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another
process on the basis of a particular strategy.
Process scheduling is an essential part of multiprogramming operating systems.
Such operating systems allow more than one process to be loaded into
executable memory at a time, and the loaded processes share the CPU using time
multiplexing.
Categories of Scheduling
1. Non-preemptive: The CPU cannot be taken away from a process until the
process completes execution or voluntarily enters a waiting state; a switch
occurs only when the running process terminates or blocks.
2. Preemptive: The OS allocates the CPU to a process for a limited amount of
time. The process may be switched from the running state to the ready state
(or from the waiting state to the ready state), for example when its time
slice expires or when a higher-priority process becomes ready and preempts
the running process.
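As a sketch of preemptive scheduling, the following simulates round-robin time slicing: a process that exceeds its quantum is preempted and returned to the tail of the ready queue. This is an illustrative Python simulation, not an OS implementation; the process names and burst times are made up.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate preemptive round-robin scheduling.

    bursts: dict mapping process name -> CPU burst time still required.
    Returns the order in which processes finish.
    """
    ready = deque(bursts.items())          # ready queue of (name, remaining)
    finished = []
    while ready:
        name, remaining = ready.popleft()  # dispatch the process at the head
        if remaining <= quantum:
            finished.append(name)          # completes within its time slice
        else:
            # time slice expires: preempt and move back to the ready queue
            ready.append((name, remaining - quantum))
    return finished

# Example: P1 needs 5 units, P2 needs 2, P3 needs 8; quantum = 3
print(round_robin({"P1": 5, "P2": 2, "P3": 8}, quantum=3))  # ['P2', 'P1', 'P3']
```

A non-preemptive policy such as FCFS would instead run each process to completion in arrival order, with no preemption step.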
Process Scheduling Queues
The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues.
The OS maintains a separate queue for each of the process states and PCBs of all
processes in the same execution state are placed in the same queue. When the state of a
process is changed, its PCB is unlinked from its current queue and moved to its new
state queue.
The Operating System maintains the following important process scheduling queues,
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps a set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
Device queues − The processes which are blocked due to unavailability of an I/O
device constitute this queue.
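The queue bookkeeping described above (unlink the PCB from its current state queue, link it into the queue for the new state) can be sketched as follows; here plain PIDs stand in for PCBs, and the queue and function names are illustrative.

```python
from collections import deque

# One queue per process state; each PCB (here, PID) lives in exactly one queue.
queues = {"ready": deque(), "waiting": deque(), "running": deque()}
pcb_state = {}  # pid -> current state

def admit(pid):
    """A new process is always put in the ready queue."""
    queues["ready"].append(pid)
    pcb_state[pid] = "ready"

def change_state(pid, new_state):
    queues[pcb_state[pid]].remove(pid)   # unlink from the old state queue
    queues[new_state].append(pid)        # link into the new state queue
    pcb_state[pid] = new_state

admit(1); admit(2)
change_state(1, "running")     # dispatcher picks PID 1
change_state(1, "waiting")     # PID 1 blocks on an I/O device
print(list(queues["ready"]), list(queues["waiting"]))  # [2] [1]
```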
The OS can use different policies to manage each queue (FIFO, round robin,
priority, etc.). The OS scheduler determines how to move processes between the
ready and run queues; the run queue can have only one entry per processor core
on the system, and in diagrams it is often merged with the CPU.
Two-State Process Model
The two-state process model refers to the running and not-running states,
described below.
1. Running: When a new process is created, it enters the system in the running
state.
2. Not Running: Processes that are not running are kept in a queue, waiting for
their turn to execute. Each entry in the queue is a pointer to a particular
process; the queue is implemented using a linked list. The dispatcher works as
follows: when a process is interrupted, it is transferred to the waiting
queue; if the process has completed or aborted, it is discarded. In either
case, the dispatcher then selects a process from the queue to execute.
Schedulers
Schedulers are special system software which handle process scheduling in various
ways. Their main task is to select the jobs to be submitted into the system and to decide
which process to run. Schedulers are of three types,
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Long-Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which programs are
admitted to the system for processing. It selects processes from the queue and loads
them into memory for execution. Process loads into the memory for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as
I/O bound and processor bound. It also controls the degree of multiprogramming. If the
degree of multiprogramming is stable, then the average rate of process creation must be
equal to the average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be minimal or absent; time-
sharing operating systems, for example, have no long-term scheduler. The
long-term scheduler is used when a process changes state from new to ready.
Short-Term Scheduler
Also called the CPU scheduler, the short-term scheduler selects one process
from among those that are ready to execute and allocates the CPU to it. It
runs much more frequently than the long-term scheduler.
Context Switching
A context switching is the mechanism to store and restore the state or context of a CPU
in Process Control block so that a process execution can be resumed from the same
point at a later time. Using this technique, a context switcher enables multiple
processes to share a single CPU. Context switching is an essential part of a
multitasking operating system features.
When the scheduler switches the CPU from one process to another, the state of
the currently running process is stored in its process control block. The
state of the process to run next is then loaded from its own PCB and used to
set the PC, registers, and so on, at which point the second process can start
executing.
Context switches are computationally intensive, since register and memory
state must be saved and restored. To reduce context-switching time, some
hardware systems provide two or more sets of processor registers. When a
process is switched out, the following information is stored for later use:
Program Counter
Scheduling information
Base and limit register value
Currently used registers
Changed State
I/O State information
Accounting information
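The save-and-restore cycle described above can be sketched as follows. The `PCB` fields and the `CPU` class here are illustrative simplifications for teaching purposes, not real kernel structures.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    # Mirrors (a subset of) the information saved on a context switch
    pid: int
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

class CPU:
    def __init__(self):
        self.pc = 0
        self.registers = {}

    def context_switch(self, old: PCB, new: PCB):
        # 1. Save the state of the currently running process into its PCB
        old.program_counter, old.registers = self.pc, dict(self.registers)
        # 2. Restore the next process's state from its PCB
        self.pc, self.registers = new.program_counter, dict(new.registers)

cpu = CPU()
p1, p2 = PCB(1), PCB(2, program_counter=100, registers={"r0": 7})
cpu.pc = 42                        # P1 has been running up to PC = 42
cpu.context_switch(p1, p2)         # switch from P1 to P2
print(cpu.pc, p1.program_counter)  # 100 42
```

After the switch, P1's PCB holds PC = 42, so P1 can later resume from exactly that point.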
Communication can be of two types
Between related processes initiating from only one process, such as parent and child
processes.
Between unrelated processes, or two or more different processes.
Following are some important terms that we need to know before proceeding further on this
topic.
Pipes − Communication between two related processes. The mechanism is half
duplex, meaning data flows in only one direction: the first process
communicates with the second. To achieve full duplex, i.e. for the second
process to also communicate with the first, another pipe is required.
FIFO − Communication between two unrelated processes. A FIFO is full duplex,
meaning the first process can communicate with the second and vice versa at
the same time.
Message Queues − Communication between two or more processes with full duplex capacity.
The processes will communicate with each other by posting a message and retrieving it out of
the queue. Once retrieved, the message is no longer available in the queue.
Semaphores − Semaphores are meant for synchronizing access to shared resources
among multiple processes. When one process wants to access a shared resource
(for reading or writing), the resource must be locked (or protected) and
released when the access is complete. This must be done by every process to
keep the data consistent.
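A minimal sketch of the pipe mechanism described above, using POSIX pipes via Python's `os` module (Unix-only). Because a pipe is half duplex, a second pipe is created for the reverse direction, giving the full-duplex arrangement the text mentions; the message contents are made up.

```python
import os

# Pipe 1 carries data in one direction; pipe 2 carries the reply.
# Each pipe is half duplex: bytes written to the write end (w) can be
# read only from the read end (r).
r1, w1 = os.pipe()   # direction: process A -> process B
r2, w2 = os.pipe()   # direction: process B -> process A (second pipe)

os.write(w1, b"request")
print(os.read(r1, 1024))   # b'request'

os.write(w2, b"reply")
print(os.read(r2, 1024))   # b'reply'
```

In practice the two ends would be held by two related processes (e.g. parent and child after `fork()`); both ends are kept in one process here only to keep the sketch short.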
Deadlock detection and recovery is the process of detecting and resolving deadlocks
in an operating system. A deadlock occurs when two or more processes are blocked,
waiting for each other to release the resources they need. This can lead to a system-
wide stall, where no process can make progress.
There are two main approaches to handling deadlocks: prevention, in which the
system is kept in a safe state so that deadlocks cannot occur, and detection
and recovery, in which the system detects deadlocks and then resolves them by
releasing the resources held by one or more processes, allowing the system to
continue to make progress.
Deadlock Detection:
For deadlock detection, we can run an algorithm that checks for a cycle in the
resource allocation graph. If every resource has a single instance, the
presence of a cycle is both a necessary and a sufficient condition for
deadlock. If resources have multiple instances, a cycle is necessary but not
sufficient: the system may or may not be deadlocked, depending on the
situation.
The Wait-For Graph algorithm is a deadlock detection algorithm used when each
resource has a single instance. It works by constructing a wait-for graph, a
directed graph that represents which processes are waiting for which other
processes; a cycle in this graph indicates a deadlock.
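The cycle check on a wait-for graph can be sketched as a depth-first search. This is illustrative Python; the graph is given as an adjacency mapping from each process to the processes it waits for, and the process names are made up.

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph {process: [processes it waits for]}."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on current path / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY                    # p is on the current DFS path
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:
                return True                # back edge: cycle, hence deadlock
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK                   # fully explored, no cycle through p
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 waits for P2 and P2 waits for P1: circular wait, so deadlock
print(has_deadlock({"P1": ["P2"], "P2": ["P1"]}))   # True
print(has_deadlock({"P1": ["P2"], "P2": []}))       # False
```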
Deadlock Recovery:
A traditional operating system such as Windows does not attempt deadlock
recovery, as it is a time- and space-consuming process; real-time operating
systems, however, do use deadlock recovery.
1. Killing processes:
Either kill all the processes involved in the deadlock, or kill them one by
one, checking for deadlock after each kill and repeating until the system
recovers. Killing processes breaks the circular-wait condition.
2. Resource Preemption:
Resources are preempted from the processes involved in the deadlock and
allocated to other processes, giving the system a chance to recover from the
deadlock. This approach, however, can lead to starvation of the preempted
processes.
3. Concurrency Control:
Concurrency control mechanisms are used to prevent data inconsistencies in
systems with multiple concurrent processes. These mechanisms ensure that
concurrent processes do not access the same data at the same time, which can
lead to inconsistencies and errors. Deadlocks can occur in concurrent systems
when two or more processes are blocked, waiting for each other to release the
resources they need. This can result in a system-wide stall, where no process can
make progress. Concurrency control mechanisms can help prevent deadlocks by
managing access to shared resources and ensuring that concurrent processes do
not interfere with each other.
Disadvantages:
Risk of data loss: In some cases, recovery algorithms may require rolling back
the state of one or more processes, leading to data loss or corruption.
Deadlock Prevention
Eliminate Hold and Wait: Allocate all required resources to a process before
it starts execution; this eliminates the hold-and-wait condition but leads to
low device utilization. For example, if a process needs a printer only at a
later stage and the printer is allocated before execution starts, the printer
remains blocked until the process completes. Alternatively, a process may be
required to release its current set of resources before requesting new ones;
this variant may lead to starvation.
Eliminate No Preemption : Preempt resources from the process when resources are
required by other high-priority processes.
Deadlock Avoidance
A deadlock avoidance policy grants a resource request only if it can establish that
granting the request cannot lead to a deadlock either immediately or in the future. The
kernal lacks detailed knowledge about future behavior of processes, so it cannot
accurately predict deadlocks. To facilitate deadlock avoidance under these conditions,
it uses the following conservative approach: Each process declares the maximum
number of resource units of each class that it may require. The kernal permits a process
to request these resource units in stages- i.e. a few resource units at a time- subject to
37
the maximum number declared by it and uses a worst case analysis technique to check
for the possibility of future deadlocks. A request is granted only if there is no
possibility of deadlocks; otherwise, it remains pending until it can be granted. This
approach is conservative because a process may complete its operation without
requiring the maximum number of units declared by it.
The resource allocation graph (RAG) is used to visualize the system’s current state as
a graph. The Graph includes all processes, the resources that are assigned to them, as
well as the resources that each Process requests. Sometimes, if there are fewer
processes, we can quickly spot a deadlock in the system by looking at the graph rather
than the tables we use in Banker’s algorithm. Deadlock avoidance can also be done
with Banker’s Algorithm.
Banker’s Algorithm
Banker's Algorithm is a resource allocation and deadlock avoidance algorithm
that tests every resource request made by a process: it checks whether
granting the request leaves the system in a safe state. If the system remains
safe after granting the request, the request is allowed; otherwise it is
denied. A request is considered only if:
1. the request is less than or equal to the maximum need declared by that
process, and
2. the request is less than or equal to the resources currently available in
the system.
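The safety test behind the Banker's Algorithm can be sketched as follows. The per-process list layout and the function name are illustrative choices; the example instance at the bottom is a commonly used textbook configuration with three resource classes and five processes.

```python
def is_safe(available, max_need, allocation):
    """Banker's safety algorithm: return True if the state is safe.

    available:  free units per resource class.
    max_need:   per-process maximum declared units per class.
    allocation: per-process currently held units per class.
    """
    n = len(allocation)
    # Remaining need of each process = declared maximum - current allocation
    need = [[m - a for m, a in zip(max_need[i], allocation[i])] for i in range(n)]
    work = list(available)
    finished = [False] * n
    while True:
        progressed = False
        for i in range(n):
            # a process can run to completion if its remaining need fits in work
            if not finished[i] and all(nd <= w for nd, w in zip(need[i], work)):
                # on completion it releases everything it currently holds
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
        if not progressed:
            # safe iff every process could be ordered into a completion sequence
            return all(finished)

print(is_safe(available=[3, 3, 2],
              max_need=[[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
              allocation=[[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]))
# True
```

A request is then granted only if it passes the two admission checks above and the state that would result still satisfies `is_safe`.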
Timeouts: To avoid deadlocks caused by indefinite waiting, a timeout mechanism
can limit the amount of time a process waits for a resource. If the resource
is unavailable within the timeout period, the process can be forced to release
its current resources and try again later.
One Marks
8. What is the BIOS used for?
10. When does a page fault occur?
Applications of Distributed Systems
Internet Technology
Distributed Database Systems
Air Traffic Control System
Airline Reservation Control Systems
Peer-to-Peer Networks System
Telecommunication Networks
Scientific Computing System
Cluster Computing
Grid Computing
Data Rendering
Communication primitives
Send: The basic operation for a process to send a message to another process. It
involves packaging information and transmitting it to a specified destination.
Receive: The complementary operation to sending, where a process waits to receive a
message. Upon reception, the process can extract and process the information from
the received message.
Broadcast: Sending a message from one process to all other processes in the system.
This is a one-to-many communication primitive.
Multicast: Similar to broadcast, but it involves sending a message to a selected group
of processes rather than all processes.
Point-to-Point Communication: Communication between two specific processes. The
send and receive primitives are used to achieve point-to-point communication.
Barrier Synchronization: A synchronization primitive where processes wait until all
participating processes have reached a certain point before any of them can proceed.
Remote Procedure Call (RPC): Invoking a procedure (function or method) on a
remote process as if it were a local procedure. RPC hides the details of
communication between processes.
Message Queues: Processes can place messages in a queue, and other processes can
retrieve and process these messages. This helps in achieving asynchronous
communication.
Semaphore Operations: Using semaphores for synchronization, including operations
like wait (P) and signal (V) to control access to shared resources.
Event Notification: Notifying processes about specific events or conditions, enabling
them to react accordingly.
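As a minimal illustration of the send/receive and message-queue primitives listed above, the sketch below uses Python's `multiprocessing` module; the worker function and message text are made up, and this is one possible realization rather than any particular OS's API.

```python
from multiprocessing import Process, Queue

def worker(q):
    # send primitive: post a message into the shared queue
    q.put("result from worker")

if __name__ == "__main__":
    q = Queue()
    p = Process(target=worker, args=(q,))
    p.start()
    # receive primitive: blocks until a message arrives; once retrieved,
    # the message is no longer available in the queue
    print(q.get())
    p.join()
```

The same `Queue` object also demonstrates asynchronous communication: the sender does not wait for the receiver to be ready before posting.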
These communication primitives provide a means for processes to interact and coordinate in
a distributed environment, contributing to the development of robust and efficient distributed
systems.
Issues
Distributed systems raise issues such as the absence of a global clock, the
possibility of partial failures, communication delays, and the need to keep
shared state consistent across nodes. Addressing these issues requires careful
design, robust algorithms, and a deep understanding of distributed systems
principles.
Lamport’s Logical Clock was created by Leslie Lamport. It is a procedure for
determining the order in which events occur, and it provides a basis for the
more advanced vector clock algorithm. Because a distributed operating system
has no global clock, Lamport's logical clock is needed.
Algorithm:
[C1]: Ci(a) < Ci(b) [Ci is the logical clock of process Pi; if event 'a'
happened before event 'b' in the same process, the time of 'a' is less than
the time of 'b'.]
[C2]: Ci(a) < Cj(b) [If 'a' is the sending of a message by process Pi and 'b'
is the receipt of that message by process Pj, then Ci(a) < Cj(b).]
Reference:
Process: Pi
43
Event: Eij, where i is the process number and j is the jth event in the ith process.
tm: vector time span for message m.
Ci vector clock associated with process Pi, the jth element is Ci[j] and
contains Pi‘s latest value for the current time in process Pj.
d: drift time, generally d is 1.
Implementation Rules[IR]:
[IR1]: If a -> b [‘a’ happened before ‘b’ within the same process]
then, Ci(b) =Ci(a) + d
[IR2]: Cj = max(Cj, tm + d) [On receiving a message m with timestamp tm, the
receiving process Pj sets its clock to the maximum of its current value Cj and
tm + d.]
For Example:
Take the starting value as 1, since it is the 1st event and there is no incoming value at
the starting point:
e11 = 1
e21 = 1
The value of the next point will go on increasing by d (d = 1), if there is no incoming
value i.e., to follow [IR1].
e12 = e11 + d = 1 + 1 = 2
e13 = e12 + d = 2 + 1 = 3
e14 = e13 + d = 3 + 1 = 4
e15 = e14 + d = 4 + 1 = 5
e16 = e15 + d = 5 + 1 = 6
e22 = e21 + d = 1 + 1 = 2
e24 = e23 + d = 3 + 1 = 4
e26 = e25 + d = 6 + 1 = 7
When there will be incoming value, then follow [IR2] i.e., take the maximum value
between Cj and Tm + d.
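The rules [IR1] and [IR2] above can be sketched in code, using the document's form Cj = max(Cj, tm + d) with d = 1. The class and method names are illustrative.

```python
class LamportClock:
    """Minimal sketch of Lamport's logical clock with drift d = 1."""
    def __init__(self):
        self.time = 0

    def internal_event(self):
        self.time += 1                     # IR1: local event, C = C + d
        return self.time

    def send(self):
        self.time += 1                     # sending is also a local event
        return self.time                   # timestamp tm piggybacked on message

    def receive(self, tm):
        self.time = max(self.time, tm + 1) # IR2: Cj = max(Cj, tm + d)
        return self.time

p1, p2 = LamportClock(), LamportClock()
for _ in range(3):
    p1.internal_event()    # e11, e12, e13 -> clock values 1, 2, 3
tm = p1.send()             # e14 -> clock 4; message carries tm = 4
p2.internal_event()        # e21 -> clock 1
print(p2.receive(tm))      # e22 on receipt: max(1, 4 + 1) = 5
```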
Limitation:
Lamport clocks guarantee that if a -> b then C(a) < C(b), but the converse
does not hold: C(a) < C(b) does not imply that 'a' happened before 'b'. Vector
clocks were introduced to overcome this limitation.
Deadlocks occur when processes are unable to proceed because each is waiting for the other
to release a resource. Operating systems implement various strategies to handle deadlocks.
Here are some common deadlock handling strategies:
Prevention:
Resource Allocation Graph (RAG): The system maintains a graph representing the
allocation of resources to processes. Deadlocks are prevented by ensuring that the
graph remains cycle-free.
Banker's Algorithm: Processes must declare their maximum resource needs upfront,
and the system only grants resource requests if it determines that the allocation will
not lead to a deadlock.
Avoidance:
Dynamically Check for Safe States: The system continually assesses the current state
and only grants resource requests that guarantee the system will remain in a safe
state.
Safety Algorithm (like Banker's Algorithm): The system uses algorithms to
determine if a resource request could potentially lead to a deadlock before granting it.
Detection and Recovery:
Terminate Processes: End one or more processes to break the deadlock.
Resource Preemption: Preempt resources from one or more processes to allow the
system to recover.
Timeouts: If a process takes too long to acquire resources, the system may assume a
deadlock and take corrective action.
Process Termination: If deadlock is detected, the system may selectively terminate
one or more processes to resolve the situation.
Wait-Die and Wound-Wait: These are strategies used in database systems for
handling deadlocks arising from conflicting resource requests, based on
process age (timestamps). In wait-die, an older process is allowed to wait for
a younger one, but a younger process requesting a resource held by an older
one is rolled back (dies). In wound-wait, an older process preempts (wounds) a
younger one, while a younger process is allowed to wait for an older one.
Each strategy has its trade-offs, and the choice depends on the specific requirements and
characteristics of the system. Prevention and avoidance aim to eliminate deadlocks before
they occur, while detection and recovery focus on identifying and resolving deadlocks after
they happen.
1. Prevention: The operating system takes steps to prevent deadlocks from occurring
by ensuring that the system is always in a safe state, where deadlocks cannot occur.
This is achieved through resource allocation algorithms such as the Banker’s
Algorithm.
2. Detection and Recovery: If deadlocks do occur, the operating system must detect
and resolve them. Deadlock detection algorithms, such as the Wait-For Graph, are
used to identify deadlocks, and recovery algorithms, such as the Rollback and Abort
algorithm, are used to resolve them. The recovery algorithm releases the resources
held by one or more processes, allowing the system to continue to make progress.
Deadlock detection and recovery is an important aspect of operating system design and
management, as it affects the stability and performance of the system. The choice of
deadlock detection and recovery approach depends on the specific requirements of the
system and the trade-offs between performance, complexity, and risk tolerance. The
operating system must balance these factors to ensure that deadlocks are effectively detected
and resolved.
Deadlock Detection:
For example, suppose resources R1 and R2 each have a single instance and the
resource allocation graph contains the cycle R1 → P1 → R2 → P2 → R1. Since
each resource has a single instance, the cycle confirms a deadlock.
Advantages:
1. Improved System Stability: Deadlocks can cause system-wide stalls, and detecting
and resolving deadlocks can help to improve the stability of the system.
2. Better Resource Utilization: By detecting and resolving deadlocks, the operating
system can ensure that resources are efficiently utilized and that the system remains
responsive to user requests.
3. Better System Design: Deadlock detection and recovery algorithms can provide
insight into the behavior of the system and the relationships between processes and
resources, helping to inform and improve the design of the system.
A Distributed File System (DFS) as the name suggests, is a file system that is
distributed on multiple file servers or multiple locations. It allows programs to
access or store isolated files as they do with the local ones, allowing programmers
to access files from any network or computer.
The main purpose of a Distributed File System (DFS) is to allow users of
physically distributed systems to share their data and resources by using a
common file system. A typical configuration for a DFS is a collection of
workstations and mainframes connected by a Local Area Network (LAN). A DFS is
executed as part of the operating system. In DFS, a namespace is created, and
this process is transparent to the clients.
Location Transparency –
Location transparency is achieved through the namespace component.
Redundancy –
Redundancy is provided through the file replication component.
In the case of failure and heavy load, these components together improve data availability by
allowing the sharing of data in different locations to be logically grouped under one folder,
which is known as the “DFS root”.
It is not necessary to use both components of DFS together: the namespace
component can be used without the file replication component, and the file
replication component can be used between servers without the namespace
component.
Early iterations of DFS made use of Microsoft’s File Replication Service
(FRS), which allowed straightforward file replication between servers: FRS
recognizes new or updated files and distributes the most recent version of
the whole file to all servers.
Windows Server 2003 R2 introduced “DFS Replication” (DFSR), which improves on
FRS by copying only the portions of files that have changed and by minimizing
network traffic with data compression. It also provides flexible
configuration options to manage network traffic on a configurable schedule.
Features of DFS :
Transparency :
Structure transparency –
There is no need for the client to know about the number or locations of file
servers and the storage devices. Multiple file servers should be provided for
performance, adaptability, and dependability.
Access transparency –
Both local and remote files should be accessible in the same manner. The file
system should automatically locate the accessed file and deliver it to the
client.
Naming transparency –
There should be no hint of the file's location in its name. Once a name is
given to a file, it should not change when the file is transferred from one
node to another.
Replication transparency –
If a file is replicated on multiple nodes, the existence of the copies and
their locations should be hidden from clients.
User mobility :
It will automatically bring the user’s home directory to the node where the user logs
in.
Performance :
Performance is measured by the average time needed to satisfy client requests.
This time covers CPU time plus the time taken to access secondary storage plus
network access time. It is desirable for the performance of a distributed file
system to be comparable to that of a centralized file system.
Simplicity and ease of use :
The user interface of the file system should be simple, and the number of
commands should be small.
High availability :
A Distributed File System should be able to continue in case of any partial failures
like a link failure, a node failure, or a storage drive crash.
A highly reliable and adaptable distributed file system should have multiple
independent file servers controlling multiple independent storage devices.
Scalability :
Since growing the network by adding new machines or joining two networks together
is routine, the distributed system will inevitably grow over time. As a result, a good
distributed file system should be built to scale quickly as the number of nodes and
users in the system grows. Service should not be substantially disrupted as the
number of nodes and users grows.
High reliability :
The likelihood of data loss should be minimized as much as feasible in a suitable
distributed file system. That is, because of the system’s unreliability, users should not
feel forced to make backup copies of their files. Rather, a file system should create
backup copies of key files that can be used if the originals are lost. Many file systems
employ stable storage as a high-reliability strategy.
Data integrity :
Multiple users frequently share a file system. The integrity of data saved in a shared
file must be guaranteed by the file system. That is, concurrent access requests from
Advanced operating system
many users who are competing for access to the same file must be correctly
synchronized using a concurrency control method. Atomic transactions are a high-
level concurrency management mechanism for data integrity that is frequently
offered to users by a file system.
Security :
A distributed file system should be secure so that its users may trust that their data
will be kept private. To safeguard the information contained in the file system from
unwanted & unauthorized access, security mechanisms must be implemented.
Heterogeneity :
Heterogeneity in distributed systems is unavoidable as a result of huge scale. Users of
heterogeneous distributed systems have the option of using multiple computer
platforms for different purposes.
History :
The server component of the Distributed File System was initially introduced as an
add-on feature. It was added to Windows NT 4.0 Server and was known as “DFS
4.1”. Then later on it was included as a standard component for all editions of
Windows 2000 Server. Client-side support has been included in Windows NT 4.0 and
also in later on version of Windows.
Linux kernels 2.6.14 and later come with an SMB client VFS known as “cifs”
which supports DFS. Mac OS X 10.7 (Lion) and later versions also support DFS.
Properties:
File transparency: users can access files without knowing where they are physically
stored on the network.
Load balancing: the file system can distribute file access requests across multiple
computers to improve performance and reliability.
Data replication: the file system can store copies of files on multiple computers to
ensure that the files are available even if one of the computers fails.
Security: the file system can enforce access control policies to ensure that only
authorized users can access files.
Scalability: the file system can support a large number of users and a large number of
files.
Concurrent access: multiple users can access and modify the same file at the same
time.
Fault tolerance: the file system can continue to operate even if one or more of its
components fail.
Data integrity: the file system can ensure that the data stored in the files is accurate
and has not been corrupted.
File migration: the file system can move files from one location to another without
interrupting access to the files.
Data consistency: changes made to a file by one user are immediately visible to all
other users.
Support for different file types: the file system can support a wide range of file types,
including text files, image files, and video files.
Applications :
NFS –
NFS stands for Network File System. It is a client-server architecture that allows a
computer user to view, store, and update files remotely. The protocol of NFS is one
of the several distributed file system standards for Network-Attached Storage (NAS).
CIFS –
CIFS stands for Common Internet File System. CIFS is a dialect of SMB; that
is, CIFS is an implementation of the SMB protocol, designed by Microsoft.
SMB –
SMB stands for Server Message Block. It is a file-sharing protocol invented by
IBM. The SMB protocol was created to allow computers to perform read and write
operations on files on a remote host over a Local Area Network (LAN). The
directories on the remote host made accessible via SMB are called “shares”.
Hadoop –
Hadoop is a group of open-source software services. It gives a software framework
for distributed storage and operating of big data using the MapReduce programming
model. The core of Hadoop contains a storage part, known as Hadoop Distributed
File System (HDFS), and an operating part which is a MapReduce programming
model.
NetWare –
NetWare is a discontinued computer network operating system developed by
Novell, Inc. It primarily used cooperative multitasking to run different
services on a personal computer, using the IPX network protocol.
CASE STUDIES
Google File System (GFS): GFS is a distributed file system designed by Google for
their infrastructure. It focuses on scalability and fault tolerance, allowing large-scale
data processing across multiple servers.
Apache Hadoop: While not an operating system itself, Hadoop is a framework for
distributed storage and processing of large data sets. It's built on the principles of the
Google File System and MapReduce, enabling the distributed processing of massive
datasets.
Amazon DynamoDB: DynamoDB is a distributed NoSQL database service provided
by Amazon Web Services. It's designed for high availability and scalability, ensuring
low-latency access to data across multiple servers and data centers.
MapReduce Paradigm: Although not a specific system, the MapReduce
programming model is widely used in distributed systems. It was popularized by
Google and later implemented in Apache Hadoop. The paradigm simplifies parallel
processing of large datasets across distributed nodes.
Kubernetes: While not an operating system, Kubernetes is a container orchestration
platform that manages the deployment, scaling, and operation of containerized
applications. It provides a distributed system for automating the deployment and
scaling of application containers.
These case studies showcase various aspects of distributed systems, including fault tolerance,
scalability, and efficient resource management.
The advent of distributed computing was marked by the introduction of distributed file
systems. Such systems involved multiple client machines and one or a few servers. The
server stores data on its disks and the clients may request data through some protocol
messages. Advantages of a distributed file system:
Allows easy sharing of data among clients.
Provides centralized administration.
Provides security, i.e. one must only secure the servers to secure data.
Even a simple client/server architecture involves more components than the physical
file systems discussed previously in OS. The architecture consists of a client-side file
system and a server-side file system. A client application issues a system call (e.g.
read(), write(), open(), close() etc.) to access files on the client-side file system,
which in turn retrieves files from the server. It is interesting to note that to a client
application, the process seems no different than requesting data from a physical disk,
since there is no special API required to do so. This phenomenon is known
as transparency in terms of file access. It is the client-side file system that executes
commands to service these system calls. For instance, assume that a client application
issues the read() system call. The client-side file system then messages the server-
side file system to read a block from the server’s disk and return the data back to the
client. Finally, it buffers this data into the read() buffer and completes the system call.
The server-side file system is also simply called the file server.
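The read() path described above can be sketched as follows (ToyFileServer and ClientSideFS are hypothetical illustrations, not a real NFS implementation):

```python
class ToyFileServer:
    """Hypothetical file server holding file contents 'on disk'."""
    def __init__(self, files):
        self.files = files                 # path -> bytes

    def read_block(self, path, offset, size):
        return self.files[path][offset:offset + size]

class ClientSideFS:
    """Hypothetical client-side file system: it services the read()
    system call by messaging the server, invisibly to the caller."""
    def __init__(self, server):
        self.server = server

    def read(self, path, offset, size):
        # Fetch the block from the server and buffer it into the
        # read() result; the caller never sees the remote round trip.
        data = self.server.read_block(path, offset, size)
        return data

server = ToyFileServer({"/abc.txt": b"hello, distributed world"})
fs = ClientSideFS(server)
print(fs.read("/abc.txt", 0, 5))           # b'hello'
```

The point of the sketch is transparency: the application-facing read() looks exactly like a local file read.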
Sun’s Network File System: The earliest successful distributed system could be
attributed to Sun Microsystems, which developed the Network File System (NFS).
NFSv2 was the standard protocol followed for many years, designed with the goal of
simple and fast server crash recovery. This goal is of utmost importance in
multi-client, single-server network architectures, because a single server crash
leaves all clients unserviced; the entire system goes down. Stateful
protocols make things complicated when it comes to crashes. Consider a client A
trying to access some data from the server. However, just after the first read, the
server crashes. Now, when the server is back up and running, client A issues the second
read request. However, the server does not know which file the client is referring to,
since all that information was temporary and lost during the crash. Stateless
protocols come to our rescue. Such protocols are designed so as to not store any state
information in the server. The server is unaware of what the clients are doing — what
blocks they are caching, which files are opened by them and where their current file
pointers are. The server simply delivers all the information that is required to service
a client request. If a server crash happens, the client would simply have to retry the
request. Because of their simplicity, NFS implements a stateless protocol.
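The stateless idea can be sketched with a toy server class (not real NFS code): every request carries the file handle, offset, and size, so nothing is lost across a crash and the client can simply retry:

```python
class StatelessServer:
    """Toy stateless file server: every request carries the file handle,
    offset, and size, so the server keeps no per-client state at all."""
    def __init__(self, files):
        self.files = files                 # handle -> file contents

    def restart(self):
        pass  # nothing to rebuild after a crash: there is no session state

    def read(self, handle, offset, size):
        return self.files[handle][offset:offset + size]

def client_read(server, handle, offset, size, retries=3):
    # If the server crashes mid-request, the client just resends
    # the identical, self-describing request.
    for _ in range(retries):
        try:
            return server.read(handle, offset, size)
        except ConnectionError:
            continue
    raise ConnectionError("server unreachable")

srv = StatelessServer({7: b"abcdefgh"})
print(client_read(srv, 7, 4, 2))           # b'ef'
srv.restart()                              # crash + recovery loses nothing
print(client_read(srv, 7, 6, 2))           # b'gh'
```

Contrast this with a stateful design, where the server would have to reconstruct open-file tables and file pointers after a reboot.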
File Handles: NFS uses file handles to uniquely identify a file or a directory that the
current operation is being performed upon. This consists of the following
components:
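In NFSv2 the handle conventionally comprises a volume identifier, an inode number, and a generation number (bumped when an inode is reused, so a stale handle cannot reach the new file); a minimal sketch:

```python
from collections import namedtuple

# Typical NFSv2-style handle: which volume, which inode on that volume,
# and a generation number incremented whenever the inode is reused.
FileHandle = namedtuple("FileHandle", ["volume_id", "inode_number", "generation"])

h = FileHandle(volume_id=1, inode_number=4711, generation=2)
print(h)
# FileHandle(volume_id=1, inode_number=4711, generation=2)
```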
File Attributes: “File attributes” is a term commonly used in NFS terminology. This
is a collective term for the tracked metadata of a file, including file creation time, last
modified, size, ownership permissions etc. This can be accessed by calling stat() on
the file.
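On a POSIX system these attributes can be inspected with the standard stat() call, for example from Python:

```python
import os
import time

# Create a small file, then read back its tracked metadata.
with open("demo.txt", "w") as f:
    f.write("hello")

st = os.stat("demo.txt")
print(st.st_size)                # 5 - file size in bytes
print(oct(st.st_mode & 0o777))   # permission bits, e.g. 0o644
print(time.ctime(st.st_mtime))   # last-modified timestamp
```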
NFSv2 Protocol: Some of the common protocol messages are listed below.
The LOOKUP protocol message is used to obtain the file handle for further accessing
data. The NFS mount protocol helps obtain the directory handle for the root (/)
directory in the file system. If a client application opens a file /abc.txt, the client-side
file system will send a LOOKUP request to the server, through the root (/) file handle
looking for a file named abc.txt. If the lookup is successful, the file attributes are
returned.
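The LOOKUP sequence can be sketched with a toy server (a hypothetical class, not the real protocol encoding): the client starts from the root handle returned by the mount protocol and looks up one path component at a time:

```python
class ToyNFSServer:
    ROOT_HANDLE = 0

    def __init__(self):
        # directory handle -> {entry name: (child handle, file attributes)}
        self.dirs = {0: {"abc.txt": (1, {"size": 5, "type": "file"})}}

    def lookup(self, dir_handle, name):
        # Return the child's handle plus its attributes, as LOOKUP does.
        return self.dirs[dir_handle][name]

server = ToyNFSServer()
handle = ToyNFSServer.ROOT_HANDLE          # obtained via the mount protocol
attrs = None
for part in "/abc.txt".strip("/").split("/"):
    handle, attrs = server.lookup(handle, part)
print(handle, attrs)                       # 1 {'size': 5, 'type': 'file'}
```

A deeper path such as /home/user/abc.txt would simply repeat the LOOKUP once per component.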
Client-Side Caching: To improve performance of NFS, distributed file systems
cache the data as well as the metadata read from the server onto the clients. This is
known as client-side caching. This reduces the time taken for subsequent client
accesses. The cache is also used as a temporary buffer for writing. This helps
improve efficiency even more, since the buffered writes are sent to the server at once.
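A minimal sketch of client-side caching (toy classes, assumed for illustration): only the first read of a block contacts the server.

```python
class BlockServer:
    def __init__(self, blocks):
        self.blocks = blocks               # (path, block number) -> data

    def read_block(self, path, block):
        return self.blocks[(path, block)]

class CachingClient:
    """Only the first read of a block goes to the server; repeated
    reads are served from the local client-side cache."""
    def __init__(self, server):
        self.server = server
        self.cache = {}
        self.server_reads = 0              # counts actual server round trips

    def read(self, path, block):
        key = (path, block)
        if key not in self.cache:
            self.server_reads += 1
            self.cache[key] = self.server.read_block(path, block)
        return self.cache[key]

srv = BlockServer({("/f", 0): b"data0"})
cli = CachingClient(srv)
for _ in range(3):
    cli.read("/f", 0)
print(cli.server_reads)                    # 1 - two of the three reads hit the cache
```

The sketch omits cache invalidation, which is exactly where the consistency problems of real distributed caches arise.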
CODA
Purpose: Coda aims to offer a distributed file system with disconnected operation
capabilities, making it suitable for mobile and wireless computing environments.
Disconnected Operation: One of Coda's notable features is its support for
disconnected operation. Users can continue working with their files even when
temporarily disconnected from the network, and changes are synchronized once the
connection is reestablished.
Replication and Fault Tolerance: Coda replicates files across multiple servers to
enhance fault tolerance and availability. This replication helps in maintaining
consistency and reliability in the face of server failures or network issues.
Venus and Vice Components: Coda consists of two main components, Venus (client)
and Vice (server). Venus manages local file access and handles disconnected
operation, while Vice manages the server-side operations.
Token-based Authentication: Coda employs a token-based authentication system to
control access to files. Users must possess the appropriate tokens to read or modify
files, adding a layer of security.
Coda is an interesting case study in distributed file systems, emphasizing support for
mobility, disconnected operation, and fault tolerance in distributed environments.
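Disconnected operation can be sketched with a toy replay log (a simplification; real Coda must also detect and resolve write conflicts during reintegration):

```python
class DisconnectedClient:
    """Sketch of Coda-style disconnected operation: while offline,
    writes go to a local replay log; on reconnection the log is
    replayed against the server."""
    def __init__(self, server):
        self.server = server               # server store: path -> data
        self.connected = True
        self.log = []

    def write(self, path, data):
        if self.connected:
            self.server[path] = data
        else:
            self.log.append((path, data))  # queue for later reintegration

    def reconnect(self):
        self.connected = True
        for path, data in self.log:        # reintegration: replay the log
            self.server[path] = data
        self.log.clear()

server = {}
c = DisconnectedClient(server)
c.connected = False                        # network goes away
c.write("/notes.txt", b"draft")
print(server)                              # {} - server has not seen the write
c.reconnect()
print(server)                              # {'/notes.txt': b'draft'}
```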
One Marks
UNIT:3 REAL TIME OPERATING SYSTEM
RTOS
Firm Real-time Operating System: An RTOS of this type also has to follow
deadlines. Missing a deadline may have only a small impact, but it can still have
unintended consequences, including a reduction in the quality of the product.
Example: multimedia applications.
Deterministic Real-time Operating System: Consistency is the key in this type of
real-time operating system. It ensures that all tasks and processes execute with
predictable timing all the time, which makes it suitable for applications in which
timing accuracy is very important. Examples: INTEGRITY, PikeOS.
Advantages:
Task Shifting: The time needed to shift between tasks in these systems is very
small. For example, older systems take about 10 microseconds to shift one task to
another, while the latest systems take about 3 microseconds.
Disadvantages:
Limited Tasks: Very few tasks run simultaneously, and the system concentrates on
only a few applications in order to avoid errors.
Use of Heavy System Resources: These systems sometimes demand substantial
system resources, which are expensive as well.
Complex Algorithms: The algorithms are very complex and difficult for the
designer to write.
Device Drivers and Interrupt Signals: The system needs specific device drivers
and interrupt signals so it can respond to interrupts as early as possible.
Thread Priority: Setting thread priorities is difficult, as these systems are not
very prone to switching tasks.
A real-time system is a system whose response is obtained within a specified
timing constraint, i.e. the system meets a specified deadline. Real-time systems
are of two types, hard and soft, and both are used in different cases. Hard
real-time systems are used where even a delay of a few nano- or microseconds is
not allowed. Soft real-time systems provide some relaxation in the timing
constraints.
Industrial applications:
Real-time systems have a vast and prominent role in modern industries. Systems
are made real-time so that maximum and accurate output can be obtained, which is
why real-time systems are used in most industrial organizations. These systems
lead to better performance and higher productivity in less time. Some examples of
industrial applications are: automated car assembly plants, chemical plants, etc.
Medical Science applications:
In the field of medical science, real-time systems have a huge impact on human
health and treatment. With the introduction of real-time systems in medical
science, many lives have been saved and the treatment of complex diseases has
become easier. Medical professionals now feel more confident thanks to these
systems. Some examples of medical science applications are: robots, MRI
scanning, radiation therapy, etc.
Peripheral Equipment applications:
Real-time systems have made tasks such as printing large banners much easier.
Once these systems came into use, the technology world became much stronger.
Peripheral equipment is used for various purposes. These devices are embedded
with microchips and perform accurately in order to produce the desired response.
Some examples of peripheral equipment applications are: laser printers, fax
machines, digital cameras, etc.
Telecommunication applications:
Real-time systems connect the world in such a way that any point can be reached
within a short time. Real-time systems have enabled the whole world to connect
via the internet. These systems let people connect with each other in no time and
feel a real sense of togetherness. Some examples of telecommunication
applications of real-time systems are: video conferencing, cellular systems, etc.
Defense applications:
In the atomic era, defense organizations can produce missiles with dangerous
destructive power. These are all real-time systems, providing both the means to
attack and the means to defend. Some defense applications that use real-time
systems are: missile guidance systems, anti-missile systems, satellite missile
systems, etc.
Aerospace applications:
The most demanding use of real-time systems is in aerospace applications. Hard
real-time systems are typically used here, because a delay of even a few
nanoseconds is not allowed; if it happens, the system fails. Some applications of
real-time systems in aerospace are: satellite tracking systems, avionics, flight
simulation, etc.
A real-time system is used for performing specific tasks that are associated with
time constraints and must be completed within that time interval.
form of physical action. Some commonly used actuators are motors and heaters.
Signal Conditioning Unit: When a sensor converts physical actions into electrical
signals, the computer cannot use them directly; after the conversion, the signals
need conditioning. Similarly, when output electrical signals are sent to an
actuator, conditioning is again required. Therefore, signal conditioning is of
two types:
Interface Unit: Interface units are used for conversion between digital and
analog. Signals coming from the input conditioning unit are analog, while the
system operates on digital signals only, so the interface unit changes the analog
signals to digital. Similarly, while transmitting signals to the output
conditioning unit, the signals are converted from digital back to analog. On this
basis, the interface unit is also of two types:
Time Constraints: Time constraints in real-time systems refer to the time
interval allotted for the response of the ongoing program. A deadline means that
the task should be completed within this time interval, and the real-time system
is responsible for completing all tasks within their time intervals.
Correctness: Correctness is one of the prominent aspects of real-time systems. A
real-time system produces a correct result within the given time interval; if the
result is not obtained within that interval, it is not considered correct. In
real-time systems, correctness means obtaining the correct result within the time
constraint.
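This notion of timeliness as part of correctness can be sketched as a toy post-hoc deadline check (a real RTOS enforces deadlines with timers and scheduling, not by checking afterwards):

```python
import time

def run_with_deadline(task, deadline_s):
    """Toy model: a result that arrives after the deadline is treated
    as incorrect, even if its value is right."""
    start = time.monotonic()
    result = task()
    elapsed = time.monotonic() - start
    return result if elapsed <= deadline_s else None

def fast():
    return 42

def slow():
    time.sleep(0.05)       # misses a 10 ms deadline on purpose
    return 42

print(run_with_deadline(fast, deadline_s=1.0))   # 42 - on time, accepted
print(run_with_deadline(slow, deadline_s=0.01))  # None - late, rejected
```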
Embedded: Nowadays almost all real-time systems are embedded. An embedded
system is a combination of hardware and software designed for a specific purpose.
Real-time systems collect data from the environment and pass it to other
components of the system for processing.
Safety: Safety is necessary for any system, but real-time systems provide
critical safety. Real-time systems can also run for a long time without failure,
and they recover quickly when a failure occurs, without causing any harm to data
and information.
Concurrency: Real-time systems are concurrent, meaning they can respond to
several processes at a time. Several different tasks go on within the system, and
it responds to every task within short intervals. This makes real-time systems
concurrent systems.
Distributed: In various real-time systems, the components of the system are
connected in a distributed way, with different components at different
geographical locations. Thus the operations of such real-time systems are carried
out in a distributed fashion.
Stability: Even when the load is very heavy, real-time systems respond within the
time constraint, i.e. they do not delay the results of tasks even when several
tasks are going on at the same time. This brings stability to real-time systems.
Fault tolerance: Real-time systems must be designed to tolerate and recover from
faults or errors. The system should be able to detect errors and recover from them
without affecting the system’s performance or output.
Determinism: Real-time systems must exhibit deterministic behavior, which
means that the system’s behavior must be predictable and repeatable for a given
input. The system must always produce the same output for a given input,
regardless of the load or other factors.
Real-time communication: Real-time systems often require real-time
communication between different components or devices. The system must ensure
that communication is reliable, fast, and secure.
Resource management: Real-time systems must manage their resources
efficiently, including processing power, memory, and input/output devices. The
system must ensure that resources are used optimally to meet the time constraints
and produce correct results.
Heterogeneous environment: Real-time systems may operate in a heterogeneous
environment, where different components or devices have different characteristics
or capabilities. The system must be designed to handle these differences and
ensure that all components work together seamlessly.
Scalability: Real-time systems must be scalable, which means that the system
must be able to handle varying workloads and increase or decrease its resources as
needed.
Security: Real-time systems may handle sensitive data or operate in critical
environments, which makes security a crucial aspect. The system must ensure that
data is protected and access is restricted to authorized users only.
In real-time operating systems (RTOS), safety and reliability are crucial for applications
where timely and predictable responses are essential. Safety involves ensuring that the
system behaves correctly, while reliability focuses on its ability to perform consistently
over time.
Interrupt Handling: Robust interrupt handling is necessary to respond promptly
to external events. RTOS should minimize interrupt latency to meet real-time
constraints.
Resource Management: Proper resource allocation and management prevent
conflicts and ensure the availability of resources when needed. This includes
managing memory, CPU, and I/O resources.
Fault Tolerance: RTOS should incorporate mechanisms for fault detection,
isolation, and recovery to enhance system reliability. This is particularly important
in safety-critical applications.
Real-Time Clocks: Accurate and synchronized timekeeping is essential for
meeting deadlines. RTOS often includes real-time clocks to maintain precise time
references.
Safety Standards Compliance: Adherence to safety standards such as ISO 26262
for automotive systems or DO-178C for avionics is crucial. Compliance helps
ensure a systematic approach to safety and reliability.
Redundancy: Introducing redundancy, such as dual-redundant systems, can
enhance reliability by providing backup resources in case of failures.
Error Handling: Effective error detection and handling mechanisms are essential
to identify and manage errors promptly, preventing cascading failures.
Testing and Verification: Rigorous testing, including simulation and real-world
testing, is necessary to verify the safety and reliability of an RTOS. This includes
testing under various conditions and failure scenarios.
A reliable and safe RTOS combines deterministic behavior, efficient scheduling, fault
tolerance, and compliance with industry standards to ensure the dependable operation of
real-time systems, particularly in safety-critical applications.
Real-time systems are systems that carry real-time tasks. These tasks need to be
performed immediately with a certain degree of urgency. In particular, these tasks
are related to control of certain events (or) reacting to them. Real-time tasks can
be classified as hard real-time tasks and soft real-time tasks.
A hard real-time task must be performed by a specified time; missing this
deadline could lead to huge losses. In soft real-time tasks, a specified deadline
can be missed, because the task can be rescheduled or completed after the
specified time.
In real-time systems, the scheduler is considered the most important component;
it is typically a short-term task scheduler. The main focus of this scheduler is
to reduce the response time of each associated process rather than to handle
deadlines directly.
If a preemptive scheduler is used, the real-time task needs to wait until the
currently running task's time slice completes. In the case of a non-preemptive
scheduler, even if the highest priority is allocated to the task, it needs to
wait until the current task completes. That task can be slow or of lower
priority, which can lead to a longer wait.
A better approach is designed by combining preemptive and non-preemptive
scheduling. This can be done by introducing time-based interrupts into
priority-based systems: the currently running process is interrupted at timed
intervals, and if a higher-priority process is present in the ready queue, it is
executed, preempting the current process.
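The combined approach can be sketched as a small simulation (an illustrative model, not a production scheduler): on every timer tick the ready queue is re-examined, so a newly arrived higher-priority task preempts the running one.

```python
import heapq

def schedule(tasks, quantum=1):
    """tasks: list of (arrival_time, priority, name, burst); a lower
    priority number means more urgent. Returns the run timeline,
    one entry per time slice."""
    tasks = sorted(tasks)                      # order by arrival time
    ready, timeline, t, i = [], [], 0, 0
    remaining = {name: burst for _, _, name, burst in tasks}
    while i < len(tasks) or ready:
        # Timer interrupt: admit every task that has arrived by time t.
        while i < len(tasks) and tasks[i][0] <= t:
            _, prio, name, _ = tasks[i]
            heapq.heappush(ready, (prio, name))
            i += 1
        if not ready:
            t = tasks[i][0]                    # idle until the next arrival
            continue
        prio, name = heapq.heappop(ready)      # highest priority wins
        timeline.append(name)
        remaining[name] -= quantum             # run for one time slice
        t += quantum
        if remaining[name] > 0:
            heapq.heappush(ready, (prio, name))  # back to the ready queue
    return timeline

# Low-priority task A starts first; B (higher priority) arrives at t=1
# and preempts A at the next timer interrupt.
print(schedule([(0, 2, "A", 3), (1, 1, "B", 2)]))
# ['A', 'B', 'B', 'A', 'A']
```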
Complexity: Developers must analyze timing requirements, define priorities, and
select suitable scheduling algorithms. This complexity can lead to increased
development time and effort.
Overhead: Scheduling introduces some overhead in terms of context switching,
task prioritization, and scheduling decisions. This overhead can impact system
performance, especially in cases where frequent context switches or complex
scheduling algorithms are employed.
Limited Resources: Real-time systems often operate under resource-constrained
environments. Scheduling tasks within these limitations can be challenging, as the
available resources may not be sufficient to meet all timing constraints or execute
all tasks simultaneously.
Verification and Validation: Validating the correctness of real-time schedules
and ensuring that all tasks meet their deadlines require rigorous testing and
verification techniques. Verifying timing constraints and guaranteeing the absence
of timing errors can be a complex and time-consuming process.
Scalability: Scheduling algorithms that work well for smaller systems may not
scale effectively to larger, more complex real-time systems. As the number of
tasks and system complexity increases, scheduling decisions become more
challenging and may require more advanced algorithms or approaches.
One Marks
4. The interrupt latency should be _________ for real time operating systems.
A. maximum
B. minimal
C. dependent on the scheduling
D. zero
6. The use of robots by car manufacturing companies is an example of…
A. applicant controlled computers
B. user-controlled computers
C. machine controlled computers
D. network controlled computers
7. A system that processes data and instructions without any delay is called
A. online system
B. real-time system
C. instruction system
D. offline system
10. The time required for the scheduling dispatcher to stop one process and start
another is called…
A. dispatch latency
B. process latency
C. interrupt latency
D. execution latency
1. Can you explain the difference between hard and soft real-time systems?
2. How does a real-time operating system differ from a general-purpose operating
system?
3. Define Real Time Operating System with its general structure.
4. List the characteristics of RTOS.
5. Differentiate between hard RTOS and soft RTOS.
6. What is the Firm classification of RTOS? Explain in detail.
7. What is the difference between static and dynamic scheduling? Explain EDF and
clock-driven scheduling in detail.
8. Explain Android OS architecture in detail.
9. Why is a virtual machine needed? Explain a VM OS with the help of a diagram.
10. Explain cloud OS architecture in detail.
11. Discuss various issues of cloud OS.
12. What are the advantages and disadvantages of iOS?
UNIT:4 HANDHELD SYSTEM
Symbian OS:
Before it was discontinued in 2014, it was the most widely used smartphone
operating system, running on the ARM architecture. It was developed by
Symbian Ltd.
This operating system consists of two subsystems where the first one is the
microkernel-based operating system which has its associated libraries and the
second one is the interface of the operating system with which a user can interact.
Since this operating system consumes very little power, it was developed for
smartphones and handheld devices.
It has good connectivity as well as stability.
It can run applications that are written in Python, Ruby, .NET, etc.
Linux OS:
Linux OS is an open-source operating system project which is a cross-platform
system that was developed based on UNIX.
It was developed by Linus Torvalds. It is system software that allows
applications and users to perform tasks on the computer.
Linux is free and can be easily downloaded from the internet and it is considered
that it has the best community support.
Linux is portable which means it can be installed on different types of devices like
mobile, computers, and tablets.
It is a multi-user operating system.
The Linux command interpreter, called BASH, is used to execute commands.
It provides user security using authentication features.
Windows OS:
Windows is an operating system developed by Microsoft. Its interface which is
called Graphical User Interface eliminates the need to memorize commands for
the command line by using a mouse to navigate through menus, dialog boxes, and
buttons.
It is named Windows because its programs are displayed in rectangular on-screen
windows. It has been designed for beginners as well as professionals.
It comes preloaded with many tools which help the users to complete all types of
tasks on their computer, mobiles, etc.
It has a large user base so there is a much larger selection of available software
programs.
One great feature of Windows is that it is backward compatible which means that
its old programs can run on newer versions as well.
Android OS:
It is a Google Linux-based operating system that is mainly designed for
touchscreen devices such as phones, tablets, etc.
The hardware supporting Android uses one of three architectures: ARM, Intel, and
MIPS. The touchscreen interface lets users manipulate devices intuitively, with
finger movements that mirror common motions such as swiping, tapping, etc.
Android operating system can be used by anyone because it is an open-source
operating system and it is also free.
It offers 2D and 3D graphics, GSM connectivity, etc.
There is a huge list of applications for users since Play Store offers over one
million apps.
Professionals who want to develop applications for the Android OS can download
the Android Development Kit. By downloading it they can easily develop apps for
android.
Advantages of Handheld Operating System:
Some advantages of a Handheld Operating System are as follows:
Less Cost.
Less weight and size.
Less heat generation.
More reliability.
Disadvantages of Handheld Operating System:
Some disadvantages of Handheld Operating Systems are as follows:
Less Speed.
Small Size.
Limited input/output capability and less available memory.
How Handheld operating systems are different from Desktop operating systems?
Since handheld operating systems are mainly designed to run on machines with
slower processors and less memory, they are built to use less memory and require
fewer resources.
They are also designed to work with different types of hardware as compared to
standard desktop operating systems.
This is because the power requirements of standard CPUs far exceed what handheld
devices can supply.
Handheld devices aren’t able to dissipate large amounts of heat generated by
CPUs. To deal with such kind of problem, big companies like Intel and Motorola
have designed smaller CPUs with lower power requirements and also lower heat
generation. Many handheld devices fully depend on flash memory cards for their
internal memory because large hard drives do not fit into handheld devices.
Several Requirements
User Interface (UI): An intuitive and user-friendly interface that is suitable for
small screens and touch input.
Application Ecosystem: A robust app store or platform that offers a diverse range
of applications for users to enhance the functionality of their handheld devices.
Technology Overview
Kernel:
The core of the operating system, managing hardware resources and providing
essential services.
May be based on monolithic, microkernel, or hybrid architectures.
File System:
Organizes and manages data storage on the device.
May use file systems like FAT, exFAT, or more modern ones for performance
and reliability.
User Interface (UI):
Utilizes graphical interfaces optimized for touchscreens, with gestures and
touch controls.
May include home screens, app drawers, and notification panels.
Security:
Implements security measures such as encryption, secure boot, and
sandboxing to protect user data and the device.
Incorporates features like device lock, biometric authentication, and secure
credential storage.
Application Framework:
Provides a framework for app development using programming languages like
Java (Android) or Swift (iOS).
Includes APIs for accessing device features like camera, sensors, and
networking.
App Ecosystem:
Supports app distribution through app stores.
Uses app containers to ensure isolation and security between applications.
Connectivity:
Manages various connectivity options such as Wi-Fi, Bluetooth, NFC, and
cellular networks.
Implements protocols like TCP/IP for internet connectivity.
Power Management:
Incorporates power management features to optimize battery life, including
sleep modes and background task management.
Multitasking:
Allows users to run multiple applications simultaneously.
Utilizes task switching mechanisms and background processing.
Updates and Upgrades:
Provides mechanisms for over-the-air (OTA) updates and seamless upgrades
to newer OS versions.
Ensures backward compatibility for app support.
Device Drivers:
Supports a wide array of hardware components through device drivers.
Manages communication between the OS and hardware peripherals.
Internationalization and Localization:
Supports multiple languages and regional settings.
Allows for easy localization of the user interface and content.
Accessibility Features:
Incorporates features to assist users with disabilities, such as screen readers,
voice commands, and haptic feedback.
Developer Tools:
Provides SDKs, APIs, and emulators for app development.
Supports debugging and profiling tools for developers.
Cloud Integration:
Integrates with cloud services for data synchronization, backup, and remote
storage.
These diverse technologies, integrated into handheld operating systems,
emphasize their adaptability to the unique challenges and requirements of
portable devices.
Development Cycle
For the development of the PALM OS, these are the phases it has to go through before it
can be used in the market:
Editing the code for the operating system, that is, checking for and correcting
errors.
Compiling and debugging the code to check for bugs and verify that it functions
correctly.
Running the program on a mobile device or related device.
If all the above phases are passed, we finally have the finished product: the
operating system for mobile devices named PALM OS.
Advantages
A lean feature set designed for low memory and processor usage, which means
longer battery life.
No need to upgrade the operating system as it is handled automatically in
PALM OS.
More applications are available for users.
Extended connectivity for users. Users can now connect to wide areas.
Disadvantages
Users cannot install applications to external memory in PALM OS, which is a
disadvantage for users with limited internal memory.
System features and connectivity options are limited compared to what other
operating systems offer.
Introduction:
Symbian was a mobile operating system designed for smartphones and mobile
devices.
Developed by Symbian Ltd., a consortium established in 1998, with major
contributions from Nokia, Ericsson, and Motorola.
Architecture:
Symbian OS had a microkernel architecture, providing a modular and flexible
framework.
Designed to run on various hardware platforms and support a wide range of
devices.
User Interface:
Symbian featured a variety of user interfaces, including Series 60, Series 80, and
Series 90.
Series 60 became the most popular UI, used in many Nokia smartphones.
Applications:
The platform supported native Symbian applications written in C++.
The Symbian OS had its own app store, Ovi Store, where users could download
and install applications.
Multitasking:
Symbian OS was known for its robust multitasking capabilities, allowing users
to run multiple applications simultaneously.
Customization:
Phone manufacturers could customize the Symbian interface to differentiate their
devices.
This flexibility led to a diverse range of Symbian-powered phones with unique
features.
Decline and Discontinuation:
Symbian faced challenges from competitors like iOS and Android, which offered
more modern and user-friendly experiences.
Nokia's decision to adopt Windows Phone over Symbian contributed to the
decline.
Nokia officially discontinued Symbian in 2013, marking the end of its era in the
mobile industry.
Legacy:
Despite its decline, Symbian played a crucial role in the early development of
smartphone operating systems.
Some of its features and concepts influenced later mobile platforms.
Impact:
Symbian was once a dominant force in the smartphone market, particularly in the
early 2000s.
Its decline paved the way for the rise of iOS and Android, shaping the current
mobile landscape.
Open Source Transition:
In 2010, Symbian became an open-source platform, allowing developers to
contribute to its development.
The open-source transition, however, couldn't revive its fortunes in the face of
strong competition.
Android
Android Architecture
Pictorial representation of android architecture with several main components and their
sub components
Applications
Applications form the top layer of the Android architecture. Pre-installed
applications like home, contacts, camera, and gallery, and third-party
applications downloaded from the Play Store like chat applications and games,
are installed on this layer.
It runs within the Android run time with the help of the classes and services
provided by the application framework.
Application framework
Application Framework provides several important classes which are used to
create an Android application. It provides a generic abstraction for hardware
access and also helps in managing the user interface with application resources.
Generally, it provides the services with the help of which we can create a
particular class and make that class helpful for the Applications creation.
It includes different types of services activity manager, notification manager, view
system, package manager etc. which are helpful for the development of our
application according to the prerequisite.
Android Runtime
The Android Runtime environment is one of the most important parts of
Android. It contains components like the core libraries and the Dalvik
Virtual Machine (DVM). It provides the base for the application framework
and powers our applications with the help of the core libraries.
Like the Java Virtual Machine (JVM), the Dalvik Virtual Machine (DVM) is a
register-based virtual machine, specially designed and optimized for Android
to ensure that a device can run multiple instances efficiently. It depends on
the Linux kernel layer for threading and low-level memory management. The
core libraries enable us to implement Android applications using the standard
Java or Kotlin programming languages.
Platform libraries
The Platform Libraries includes various C/C++ core libraries and Java based libraries
such as Media, Graphics, Surface Manager, OpenGL etc. to provide a support for android
development.
The Media library provides support to play and record audio and video formats.
The Surface Manager is responsible for managing access to the display subsystem.
SGL and OpenGL, both cross-language, cross-platform application program
interfaces (APIs), are used for 2D and 3D computer graphics.
SQLite provides database support and FreeType provides font support.
WebKit: this open-source web browser engine provides all the functionality to
display web content and to simplify page loading.
SSL (Secure Sockets Layer) is a security technology for establishing an
encrypted link between a web server and a web browser.
Linux Kernel –
The Linux kernel is the heart of the Android architecture. It manages all the
available drivers, such as display drivers, camera drivers, Bluetooth drivers,
audio drivers, and memory drivers, which are required during runtime.
The Linux Kernel will provide an abstraction layer between the device hardware
and the other components of android architecture. It is responsible for
management of memory, power, devices etc.
The features of Linux kernel are:
Security: The Linux kernel handles the security between the application and
the system.
Memory Management: It efficiently handles the memory management
thereby providing the freedom to develop our apps.
Process Management: It manages the process well, allocates resources to
processes whenever they need them.
Network Stack: It effectively handles the network communication.
Driver Model: It ensures that the application works properly on the device;
hardware manufacturers are responsible for building their drivers into the
Linux build.
Security of Handheld Systems
Screen Lock:
Use a strong PIN, password, or pattern so the device cannot be used
without authentication.
Device Encryption:
Enable full-device encryption to safeguard data stored on the device. This
ensures that even if the device is lost or stolen, the data remains inaccessible.
Regular Software Updates:
Keep the operating system, applications, and security software up to date to
patch vulnerabilities and benefit from the latest security features.
App Permissions:
Review and manage app permissions. Only grant necessary permissions to apps
and be cautious about granting access to sensitive information.
App Source Verification:
Download apps only from official app stores (Google Play, Apple App Store).
Avoid installing apps from untrusted sources to minimize the risk of malware.
Device Tracking and Remote Wipe:
Enable device tracking services (Find My iPhone, Find My Device) to locate
the device in case of loss. Also, configure remote wipe options to erase data if
the device cannot be recovered.
Network Security:
Use secure Wi-Fi connections and avoid connecting to public or unsecured
networks. Consider using a Virtual Private Network (VPN) for additional
security.
Biometric Authentication:
If available, use biometric authentication methods like fingerprint or facial
recognition for a convenient and secure unlocking process.
Secure Backup:
Regularly back up important data to a secure and trusted cloud service. This
ensures data recovery in case of device loss, damage, or a reset.
App Updates:
Keep apps updated to the latest versions, as updates often include security
patches. Enable automatic app updates if possible.
Secure Browsing:
Use secure browsing practices, avoid visiting suspicious websites, and be
cautious with clicking on links from unknown sources, especially in emails or
messages.
Implementing these practices helps create a robust security posture for
handheld systems, mitigating potential risks and ensuring the protection of
sensitive information.
Questions (1 Mark)
2. All the time a computer is switched on, its operating system software has to stay in
a. main storage
b. primary storage
c. floppy disk
d. disk drive
3. Scheduling is ____________
5. The operating system of a computer serves as a software interface between the user and
a. screen
b. memory
c. peripheral
d. hardware
6. What is the name given to the organised collection of software that controls the overall
operation of a computer?
a. operating system
b. controlling system
c. peripheral system
d. working system
7. Process is _________
a. a job in secondary memory
b. a program in execution
c. contents of main memory
d. program in High level language kept on disk
8. In a timeshare operating system, when the time slot assigned to a process is completed,
the process switches from the current state to?
UNIT 5: CASE STUDIES
Linux System
Linux is one of the popular versions of the UNIX operating system. It is open
source, as its source code is freely available, and it is free to use. Linux was
designed with UNIX compatibility in mind; its functionality list is quite similar to that of UNIX.
Components of Linux System
Linux Operating System has primarily three components
Kernel − Kernel is the core part of Linux. It is responsible for all major activities of this
operating system. It consists of various modules and it interacts directly with the
underlying hardware. Kernel provides the required abstraction to hide low level
hardware details to system or application programs.
System Library − System libraries are special functions or programs through which
application programs or system utilities access the kernel's features. These libraries
implement most of the functionality of the operating system and do not require
kernel-level code-access rights.
System Utility − System utility programs are responsible for performing
specialized, individual-level tasks.
Basic Features
Following are some of the important features of Linux Operating System.
Portable − Portability means the software can work on different types of hardware
in the same way. The Linux kernel and application programs support installation
on any kind of hardware platform.
Open Source − Linux source code is freely available, and it is a community-based
development project. Multiple teams work in collaboration to enhance the
capability of the Linux operating system, and it is continuously evolving.
Multi-User − Linux is a multi-user system, meaning multiple users can access
system resources like memory, RAM, and application programs at the same time.
Multiprogramming − Linux is a multiprogramming system, meaning multiple
applications can run at the same time.
Hierarchical File System − Linux provides a standard file structure in which
system files and user files are arranged.
Shell − Linux provides a special interpreter program which can be used to execute
commands of the operating system. It can be used to perform various types of
operations, call application programs, etc.
Security − Linux provides user security using authentication features like password
protection, controlled access to specific files, and encryption of data.
Architecture
The following illustration shows the architecture of a Linux system −
Memory management
Main Memory
Loading a process into the main memory is done by a loader. There are two different
types of loading :
Static Loading: Static loading loads the entire program into memory at a fixed
address before execution begins. It requires more memory space.
Dynamic Loading: Without dynamic loading, the entire program and all data of a
process must be in physical memory for the process to execute, so the size of a
process is limited to the size of physical memory. To gain proper memory
utilization, dynamic loading is used: a routine is not loaded until it is called.
All routines reside on disk in a relocatable load format. One advantage of
dynamic loading is that an unused routine is never loaded. This is especially
useful when a large amount of code is needed only to handle infrequently
occurring cases.
To perform a linking task a linker is used. A linker is a program that takes one or more
object files generated by a compiler and combines them into a single executable file.
Static Linking: In static linking, the linker combines all necessary program modules
into a single executable program. So there is no runtime dependency. Some operating
systems support only static linking, in which system language libraries are treated like
any other object module.
Dynamic Linking: The basic concept of dynamic linking is similar to dynamic
loading. In dynamic linking, a "stub" is included for each appropriate
library-routine reference. A stub is a small piece of code. When the stub is
executed, it checks whether the needed routine is already in memory; if it is
not, the program loads the routine into memory.
Swapping
Swapping temporarily moves a process out of main memory to a backing store
(disk) and later brings it back into memory for continued execution.
Memory Management with Monoprogramming (Without Swapping)
This is the simplest memory management approach: memory is divided into two
sections:
One part for the operating system
The second part for the user program
Fence Register
In this approach, the operating system keeps track of the first and last location available
for the allocation of the user program
The operating system is loaded either at the bottom or at the top of memory
Interrupt vectors are often located in low memory; therefore, it makes sense to load the
operating system in low memory
Sharing of data and code does not make much sense in a single process environment
The Operating system can be protected from user programs with the help of a fence
register.
Advantages
It is a simple management approach
Disadvantages
It does not support multiprogramming
Memory is wasted
Multiprogramming with Fixed Partitions (Without Swapping)
A memory partition scheme with a fixed number of partitions was introduced to support
multiprogramming. This scheme is based on contiguous allocation
Each partition is a block of contiguous memory
Memory is partitioned into a fixed number of partitions.
Each partition is of fixed size
Example: As shown in the figure, memory is partitioned into five regions; one region
is reserved for the operating system and the remaining four partitions are for user programs.
[Figure: memory partitioned into fixed regions p1–p4]
Partition Table
Once partitions are defined operating system keeps track of the status of memory
partitions it is done through a data structure called a partition table.
Starting Address of Partition | Size of Partition | Status
0k | 200k | allocated
The main memory should accommodate both the operating system and the
different client processes. Therefore, the allocation of memory becomes an
important task in the operating system. The memory is usually divided into two
partitions: one for the resident operating system and one for the user processes. We
normally need several user processes to reside in memory simultaneously.
Therefore, we need to consider how to allocate available memory to the processes
that are in the input queue waiting to be brought into memory. In contiguous
memory allocation, each process is contained in a single contiguous segment of memory.
First Fit
In First Fit, the first available hole that is large enough to satisfy the
process's request is allocated.
First Fit
Here, in this diagram, a 40 KB memory block is the first available free hole that can
store process A (size of 25 KB), because the first two blocks did not have sufficient
memory space.
Best Fit
In Best Fit, we allocate the smallest hole that is big enough for the process's
requirements. For this, we search the entire list, unless the list is ordered by size.
Best Fit
Here, we first traverse the complete list and find that the last hole, 25 KB, is
the best suitable hole for process A (size 25 KB). In this method, memory utilization is
maximum compared with other memory-allocation techniques.
Worst Fit
In Worst Fit, we allocate the largest available hole to the process. This method
produces the largest leftover hole.
Worst Fit
Here, process A (size 25 KB) is allocated to the largest available memory block,
which is 60 KB. Inefficient memory utilization is the major issue with
worst fit.
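The three placement strategies can be compared with a short sketch. The hole sizes below are assumptions chosen to reproduce the examples above (a 25 KB request against holes of 10, 20, 40, 60, and 25 KB):

```python
# Compare First Fit, Best Fit, and Worst Fit for a single request.
# Hole sizes are illustrative assumptions, not from a real allocator.
holes = [10, 20, 40, 60, 25]  # free hole sizes in KB
request = 25                  # process A needs 25 KB

def first_fit(holes, req):
    # first hole that is large enough
    return next((h for h in holes if h >= req), None)

def best_fit(holes, req):
    # smallest hole that still fits (requires scanning the whole list)
    fits = [h for h in holes if h >= req]
    return min(fits, default=None)

def worst_fit(holes, req):
    # largest hole, leaving the biggest leftover fragment
    fits = [h for h in holes if h >= req]
    return max(fits, default=None)

print(first_fit(holes, request))  # 40
print(best_fit(holes, request))   # 25
print(worst_fit(holes, request))  # 60
```

The outputs match the figures: first fit picks the 40 KB hole, best fit the exact 25 KB hole, and worst fit the 60 KB hole.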
Fragmentation
Fragmentation occurs when processes are loaded into and removed from memory
after execution, creating small free holes. These holes cannot be assigned to new
processes because they are not combined or do not fulfill the memory requirement of
a process. To achieve a good degree of multiprogramming, we must reduce this waste
of memory, i.e., the fragmentation problem. In operating systems there are two types of
fragmentation:
Internal fragmentation: Internal fragmentation occurs when a memory block
allocated to a process is larger than its requested size. The unused space left over
creates the internal fragmentation problem. Example: Suppose fixed partitioning is
used for memory allocation, with blocks of 3 MB, 6 MB, and 7 MB in memory. A new
process p4 of size 2 MB arrives and demands a block of memory. It gets a 3 MB
memory block, but 1 MB of that block is wasted and cannot be allocated to any other
process. This is called internal fragmentation.
External fragmentation: In External Fragmentation, we have a free memory block, but
we cannot assign it to a process because blocks are not contiguous.
Example: Suppose (continuing the above example) three processes p1, p2, and p3
arrive with sizes 2 MB, 4 MB, and 7 MB respectively, and are allocated the memory
blocks of size 3 MB, 6 MB, and 7 MB respectively. After allocation, the blocks
holding p1 and p2 have 1 MB and 2 MB left over. Suppose a new process p4 arrives
and demands a 3 MB block of memory; 3 MB of free memory exists in total, but we
cannot assign it because the free space is not contiguous. This is called external
fragmentation.
Both the first-fit and best-fit strategies for memory allocation suffer from external
fragmentation. To overcome the external fragmentation problem, compaction is used.
In the compaction technique, all free memory space is combined into one large
block, so this space can be used by other processes effectively.
Another possible solution to the external fragmentation is to allow the logical address
space of the processes to be noncontiguous, thus permitting a process to be allocated
physical memory wherever the latter is available.
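Compaction can be sketched as sliding all allocated blocks together so the scattered free holes merge into one. The block layout below is an assumed toy representation of the example above (p1, p2, p3 with 1 MB and 2 MB holes between them):

```python
# Sketch of compaction: allocated blocks are slid together so that the
# scattered free holes merge into a single large hole.
# Each entry is (name, size in MB); None marks a free hole. Illustrative.
memory = [("p1", 2), (None, 1), ("p2", 4), (None, 2), ("p3", 7)]

def compact(blocks):
    # Keep allocated blocks in order, then append one merged free hole.
    allocated = [b for b in blocks if b[0] is not None]
    free_total = sum(size for name, size in blocks if name is None)
    return allocated + [(None, free_total)]

print(compact(memory))
# After compaction the free space forms one 3 MB hole, so a 3 MB
# request (process p4 in the example) can now be satisfied.
```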
Paging
Paging is a memory management scheme that eliminates the need for a contiguous
allocation of physical memory. This scheme permits the physical address space of a
process to be non-contiguous.
Logical Address or Virtual Address (represented in bits): An address generated by
the CPU.
Logical Address Space or Virtual Address Space (represented in words or
bytes): The set of all logical addresses generated by a program.
Physical Address (represented in bits): An address actually available on a memory
unit.
Physical Address Space (represented in words or bytes): The set of all physical
addresses corresponding to the logical addresses.
Example:
If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words (1
G = 2^30)
If Logical Address Space = 128 M words = 2^7 * 2^20 words, then Logical Address =
log2 2^27 = 27 bits
If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M words (1
M = 2^20)
If Physical Address Space = 16 M words = 2^4 * 2^20 words, then Physical Address =
log2 2^24 = 24 bits
The mapping from virtual to physical address is done by the memory management unit
(MMU) which is a hardware device and this mapping is known as the paging
technique.
The Physical Address Space is conceptually divided into several fixed-size blocks,
called frames.
The Logical Address Space is also split into fixed-size blocks, called pages.
Page Size = Frame Size
Let us consider an example:
Physical Address = 12 bits, then Physical Address Space = 4 K words
Logical Address = 13 bits, then Logical Address Space = 8 K words
Page size = frame size = 1 K words (assumption)
Paging
The address generated by the CPU is divided into:
Page Number (p): the number of bits required to represent a page number in the
Logical Address Space.
Page Offset (d): the number of bits required to represent a particular word within a
page (i.e., the page size of the Logical Address Space).
Physical Address is divided into:
Frame Number (f): the number of bits required to represent a frame number in the
Physical Address Space.
Frame Offset (d): the number of bits required to represent a particular word within a
frame (i.e., the frame size of the Physical Address Space).
The hardware implementation of the page table can be done by using dedicated
registers. But the usage of the register for the page table is satisfactory only if the page
table is small. If the page table contains a large number of entries then we can use
TLB(translation Look-aside buffer), a special, small, fast look-up hardware cache.
The TLB is an associative, high-speed memory.
Each entry in TLB consists of two parts: a tag and a value.
When this memory is used, then an item is compared with all tags simultaneously. If the
item is found, then the corresponding value is returned.
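The page-number/offset split described above can be sketched directly. Using the earlier example's 1 K-word pages (so the low 10 bits are the offset), and an assumed, illustrative page-table content:

```python
# Logical-to-physical address translation with 1 K-word pages:
# page size = frame size = 1 K words, so the offset is the low 10 bits.
OFFSET_BITS = 10
page_table = {0: 2, 1: 0, 2: 3}  # page -> frame (assumed contents)

def translate(logical):
    page = logical >> OFFSET_BITS                 # page number p
    offset = logical & ((1 << OFFSET_BITS) - 1)   # page offset d
    frame = page_table[page]                      # page-table lookup -> f
    return (frame << OFFSET_BITS) | offset        # physical address

# Logical address 1029 = page 1, offset 5; page 1 maps to frame 0,
# so the physical address is frame 0, offset 5 = 5.
print(translate(1029))  # 5
```

A TLB would simply cache recent (page, frame) pairs from this table so that most translations avoid the page-table lookup.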
Page Map Table
Let the main-memory access time be m.
If the page table is kept in main memory:
Effective access time = m (to access the page table)
+ m (to access the required word in memory) = 2m
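With a TLB, the effective access time is usually computed from the hit ratio using the standard textbook formula; the timing values below are assumptions chosen only for illustration:

```python
# Effective access time (EAT) with a TLB — standard textbook formula.
# All timing values are illustrative assumptions.
m = 100          # main-memory access time (ns)
tlb = 20         # TLB lookup time (ns)
hit_ratio = 0.8  # fraction of translations found in the TLB

# TLB hit: one memory access after the TLB lookup.
# TLB miss: page-table access plus the actual memory access.
eat = hit_ratio * (tlb + m) + (1 - hit_ratio) * (tlb + 2 * m)
print(eat)  # 140.0 ns, versus 2*m = 200 ns with no TLB at all
```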
In computing, a process is an instance of a computer program being executed by one
or many threads. Scheduling matters in many different computing environments; one of
the most important cases is deciding which programs run on the CPU. This task is
handled by the operating system (OS), and there are many different policies we can
choose to apply.
Process scheduler
Categories of Scheduling
Scheduling falls into one of two categories:
Non-preemptive: In this case, a process's resources cannot be taken away before the
process has finished running. Resources are switched only when the running
process finishes and transitions to a waiting state.
Preemptive: In this case, the OS assigns resources to a process for a predetermined
period. A process may switch from the running state to the ready state, or from
the waiting state to the ready state, during resource allocation. This switching
happens because the CPU may give priority to another process and substitute the
currently active process with the higher-priority one.
Types of Process Schedulers
There are three types of process schedulers:
Long Term or Job Scheduler
It brings the new process to the ‘Ready State’. It controls the Degree of Multi-
programming, i.e., the number of processes present in a ready state at any point in
time. It is important that the long-term scheduler makes a careful selection of both
I/O-bound and CPU-bound processes. I/O-bound tasks are those that spend much of
their time on input and output operations, while CPU-bound processes are those
that spend their time on the CPU. The job scheduler increases efficiency by
maintaining a balance between the two. Long-term schedulers operate at a high
level and are typically used in batch-processing systems.
Short-Term or CPU Scheduler
It is responsible for selecting one process from the ready state and scheduling it
into the running state. Note: the short-term scheduler only selects the process to
schedule; it does not itself load the process for running. This is where all the
scheduling algorithms are used. The CPU scheduler is responsible for ensuring no
starvation occurs due to processes with long burst times.
Real-time schedulers: In real-time systems, real-time schedulers ensure that critical tasks
are completed within a specified time frame. They can prioritize and schedule tasks using
various algorithms such as EDF (Earliest Deadline First) or RM (Rate Monotonic).
Medium-Term or Process-Swapping Scheduler
It is responsible for suspending and later resuming processes by swapping them
between main memory and disk, thereby reducing the degree of multiprogramming.
Comparison of the three schedulers:
Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
It is a job scheduler. | It is a CPU scheduler. | It is a process-swapping scheduler.
It controls the degree of multiprogramming. | It gives less control over how much multiprogramming is done. | It reduces the degree of multiprogramming.
It is barely present or nonexistent in a time-sharing system. | It is minimal in a time-sharing system. | It is a component of time-sharing systems.
Context Switching
Context switching is the mechanism of storing and restoring the state, or context,
of a CPU in the process control block so that a process's execution can be
continued from the same point at a later time. A context switcher makes it
possible for multiple processes to share a single CPU. A multitasking operating
system must include context switching among its features.
When the scheduler switches the CPU from executing one process to another, the
state of the currently running process is saved into its process control block.
The state used to set up the program counter, registers, etc. for the process that
will run next is then loaded from that process's own PCB, after which the second
process can start executing.
The information stored in the process control block, and saved or restored during
a context switch, includes:
Program Counter
Scheduling information
The base and limit register value
Currently used register
Changed State
I/O State information
Accounting information
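The save/restore of the fields listed above can be sketched with a toy PCB. The fields and values are illustrative stand-ins, not a real kernel structure:

```python
# Toy sketch of a context switch: save the running process's CPU state
# into its PCB, then restore the next process's state. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    state: str = "ready"

def context_switch(cpu, current, next_pcb):
    # Save the CPU state into the current process's PCB...
    current.program_counter = cpu["pc"]
    current.registers = dict(cpu["regs"])
    current.state = "ready"
    # ...then restore the CPU state from the next process's PCB.
    cpu["pc"] = next_pcb.program_counter
    cpu["regs"] = dict(next_pcb.registers)
    next_pcb.state = "running"

cpu = {"pc": 120, "regs": {"r0": 7}}
p1 = PCB(pid=1, state="running")
p2 = PCB(pid=2, program_counter=300, registers={"r0": 9})
context_switch(cpu, p1, p2)
print(cpu["pc"], p1.state, p2.state)  # 300 ready running
```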
Process scheduling policies determine the order in which processes are selected to run on a
computer's CPU. Common policies include:
1. First Come First Serve (FCFS): Processes are executed in the order they arrive.
2. Shortest Job Next (SJN) or Shortest Job First (SJF): The process with the shortest
burst time is scheduled first.
3. Round Robin (RR): Each process is assigned a fixed time slot, and they take turns
running in that slot.
4. Priority Scheduling: Processes with higher priority levels are scheduled before those
with lower priorities.
5. Multilevel Queue Scheduling: Processes are divided into queues based on priority,
and each queue has its scheduling algorithm.
The choice of scheduling policy depends on the system's requirements and goals. Each policy
has its strengths and weaknesses in terms of throughput, response time, and fairness.
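The trade-off between FCFS and SJF can be seen by computing average waiting times for a small set of burst times (the values below are an assumed, classic illustration with all processes arriving at time 0):

```python
# Average waiting time under FCFS vs. non-preemptive SJF, assuming all
# processes arrive at time 0. Burst times are an illustrative example.
bursts = [24, 3, 3]  # CPU burst times in time units

def avg_wait(order):
    # Each process waits for the total burst time of those before it.
    elapsed, total = 0, 0
    for b in order:
        total += elapsed
        elapsed += b
    return total / len(order)

print(avg_wait(bursts))          # FCFS order:  (0 + 24 + 27) / 3 = 17.0
print(avg_wait(sorted(bursts)))  # SJF order:   (0 + 3 + 6)  / 3 = 3.0
```

Running the shortest jobs first cuts the average wait from 17 to 3 time units here, which is why SJF is optimal for average waiting time, though it requires knowing (or estimating) burst times.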
Managing input and output (I/O) devices is a crucial aspect of operating systems. Here are
key considerations:
2. I/O Buffering: Buffering helps optimize data transfer between the CPU and I/O
devices. It involves using memory to store data temporarily before it's sent to or after
it's received from a device.
3. Interrupts: Interrupts allow I/O devices to signal the CPU when they need attention.
This mechanism enhances system responsiveness and efficiency by enabling the CPU
to handle other tasks while waiting for I/O operations to complete.
6. Error Handling: Robust error handling mechanisms are essential to manage issues
that may arise during I/O operations, such as device failures or data corruption.
7. Plug and Play: Modern operating systems often support plug-and-play functionality,
allowing users to connect new devices to the system without manual configuration.
The OS automatically detects and configures compatible devices.
8. File Systems: Managing I/O devices also involves interaction with file systems. The
OS must handle reading and writing data to storage devices, maintaining file integrity,
and managing file permissions.
Accessing files
1. File Path: Identify the location of the file through its path. The path includes the
directory (folder) structure leading to the file. Paths can be absolute (full path from the
root directory) or relative (path relative to the current directory).
2. File System Interaction: The operating system interacts with the file system to locate
and manage the file. Common file systems include FAT32, NTFS (on Windows), ext4
(on Linux), and HFS+ (on macOS).
3. File Permissions: Check file permissions to ensure that the user has the necessary
rights to access the file. Permissions include read, write, and execute privileges for the
owner, group, and others.
4. File Opening: If permissions allow, the operating system opens the file. This
involves allocating resources, such as file handles or descriptors, to facilitate
subsequent operations.
6. File Closing: After completing operations, close the file to release associated
resources. This step is crucial for efficient resource management.
File access can be achieved through various programming interfaces, such as the POSIX API
on Unix-like systems or the Windows API on Windows. High-level programming languages
often provide abstractions, like functions or methods, to simplify file operations.
Remember to handle errors gracefully and consider factors like concurrent access by multiple
processes to avoid conflicts. Security practices, such as validating user input and protecting
against unauthorized access, are also essential when working with file systems.
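The open-operate-close sequence described above can be sketched with standard Python file handling. The file name is hypothetical, and a temporary directory is used so the sketch is self-contained:

```python
# Sketch of the file-access steps: open, read, close, with graceful
# error handling. The file name "notes.txt" is hypothetical.
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "notes.txt")
with open(path, "w") as f:        # open for writing (permissions permitting)
    f.write("hello")

try:
    with open(path) as f:         # the 'with' block guarantees the close step
        data = f.read()
    print(data)
except FileNotFoundError:         # handle a missing file gracefully
    data = None
    print("file not found")
```

Using a context manager (`with`) ensures the file handle is released even if an error occurs mid-operation, which is the resource-management point made above.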
The structure of the iOS operating system is layered. Communication between
applications and hardware does not occur directly; the layers between the
application layer and the hardware layer mediate it. The lower-level layers give
basic services on which all applications rely, and the higher-level layers provide
graphics- and interface-related services. Most of the system interfaces come in a
special package called a framework.
A framework is a directory that holds dynamic shared libraries like .a files, header files,
images, and helper apps that support the library. Each layer has a set of frameworks that
are helpful for developers.
Architecture of iOS
CORE OS Layer:
All the iOS technologies are built on the lowest-level layer, i.e. the Core OS layer. These
technologies include:
1. Core Bluetooth Framework
2. External Accessories Framework
3. Accelerate Framework
4. Security Services Framework
5. Local Authorization Framework etc.
It supports 64-bit computing, which enables applications to run faster.
CORE SERVICES Layer:
Some important frameworks present in the CORE SERVICES layer help the iOS
operating system provide core services and better functionality. It is the second-lowest layer in the
architecture shown above. Below are some important frameworks present in this layer:
1. Address Book Framework-
The Address Book Framework provides access to the contact details of the user.
2. Cloud Kit Framework-
This framework provides a medium for moving data between your app and iCloud.
3. Core Data Framework-
This is the technology that is used for managing the data model of a Model View
Controller app.
4. Core Foundation Framework-
This framework provides data management and service features for iOS
applications.
5. Core Location Framework-
This framework helps to provide the location and heading information to the
application.
6. Core Motion Framework-
All the motion-based data on the device is accessed with the help of the Core
Motion Framework.
7. Foundation Framework-
An Objective-C wrapper covering many of the features found in the Core
Foundation framework.
8. HealthKit Framework-
This framework handles the health-related information of the user.
9. HomeKit Framework-
This framework is used for communicating with and controlling connected
devices in the user's home.
10. Social Framework-
It is simply an interface that will access users’ social media accounts.
11. StoreKit Framework-
This framework provides support for purchasing content and services from inside
iOS apps.
MEDIA Layer:
The media layer enables all the graphics, video, and audio technology of the
system. It is the second layer from the top in the architecture. The different
frameworks of the MEDIA layer are:
1. UIKit Graphics-
This framework provides support for designing images and animating the view
content.
2. Core Graphics Framework-
This framework supports 2D vector and image-based rendering and is the native
drawing engine for iOS.
3. Core Animation-
This framework helps in optimizing the animation experience of the apps in iOS.
4. Media Player Framework-
This framework provides support for playing playlists and enables users to use
their iTunes library.
5. AV Kit-
This framework provides various easy-to-use interfaces for video presentation,
recording, and playback of audio and video.
6. OpenAL-
This framework is an industry-standard technology for providing audio.
7. Core Image-
This framework provides advanced support for processing still images.
8. GLKit-
This framework manages advanced 2D and 3D rendering through
hardware-accelerated interfaces.
COCOA TOUCH:
COCOA TOUCH, also known as the application layer, acts as an interface for the user
to work with the iOS operating system. It supports touch and motion events, among
many other features. The COCOA TOUCH layer provides the following frameworks:
1. EventKit Framework-
This framework presents a standard system interface, using view controllers, for
viewing and editing calendar events.
2. GameKit Framework-
This framework provides support for users to share their game-related data online
using a Game Center.
3. MapKit Framework-
This framework provides a scrollable map that can be included in the app's user
interface.
4. PushKit Framework-
This framework provides registration support for receiving push notifications.
Features of iOS operating System:
Let us discuss some features of the iOS operating system:
1. It is more secure than many other operating systems.
2. iOS provides multitasking: while working in one application, the user can
easily switch to another.
3. iOS's user interface includes multiple gestures such as swipe, tap, pinch, and
reverse pinch.
4. iBooks, iStore, iTunes, Game Center, and Email are user-friendly.
5. It provides Safari as the default web browser.
1. Audio Subsystem:
Responsible for audio playback and recording.
Manages audio devices, drivers, and codecs.
Provides APIs for applications to interact with audio hardware.
2. Video Subsystem:
Handles video playback and rendering.
Manages video drivers, codecs, and graphics processing units (GPUs).
Ensures synchronization and smooth playback.
3. Graphics Subsystem:
Deals with graphical elements, including rendering images and graphical user
interfaces (GUIs).
Manages graphics hardware, drivers, and acceleration.
4. Image Processing:
Involves decoding and processing various image formats.
Supports image manipulation and transformation operations.
5. Codec Support:
Includes support for various multimedia codecs for compression and
decompression.
Ensures compatibility with different file formats.
Specific implementations may vary across different operating systems, and advancements in
technology continually influence the features and capabilities of the media layer.
Service Layer
1. Business Logic Encapsulation: The service layer encapsulates the business logic of
the application, ensuring that it remains separate from the user interface and data
access layers. This separation enhances maintainability and reusability.
2. Interaction with Data Layer: The service layer interacts with the data layer (database
or external APIs) to retrieve or persist data. This ensures that data-related operations
are centralized and handled consistently.
3. Validation and Business Rules: Services often include validation logic to ensure that
incoming data meets specific criteria. They also enforce business rules, ensuring that
the application adheres to the defined workflows and regulations.
4. Application Flow Control: It governs the flow of operations within the application.
By orchestrating various components, the service layer ensures that business processes
are executed in the correct sequence.
5. Testing and Debugging: The service layer facilitates testing by allowing for the
isolation of business logic. Unit testing and debugging are more straightforward when
the core application logic is concentrated in the service layer.
In summary, the service layer plays a pivotal role in structuring a software
application, promoting maintainability, scalability, and flexibility by encapsulating
the application's business logic and exposing it through a well-defined interface.
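The separation described above can be illustrated with a small sketch. This is not from the text; the class and method names (OrderService, InMemoryRepo, place_order) and the pricing rule are invented for the example:

```python
# A minimal service-layer sketch: validation and business rules live in the
# service; persistence is delegated to an injected data-layer object.

class InMemoryRepo:
    """Stand-in for the data layer (a real app would use a database)."""
    def __init__(self):
        self.rows = {}

    def save(self, key, value):
        self.rows[key] = value

class OrderService:
    def __init__(self, repo):
        self.repo = repo                   # data access is injected, not hard-coded

    def place_order(self, order_id, quantity):
        # Validation and business rules belong here, not in the UI layer.
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        total = quantity * 10              # illustrative pricing rule
        self.repo.save(order_id, total)    # persistence handled by the data layer
        return total

service = OrderService(InMemoryRepo())
print(service.place_order("A1", 3))        # 30
```

Because the repository is passed in rather than hard-coded, the business logic can be unit-tested in isolation with a fake data layer, which is the testing benefit listed above.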
A computer file is defined as a medium used for saving and managing data in a computer system.
The data stored in the computer system is entirely in digital format, and there are various
types of files that help us store this data.
File Attributes and Their Operations
Common file attributes include the creation date, the author, and the date the file
was last modified. Typical operations performed on a file include write, append,
truncate, and close.
File type        Usual extension      Function
Executable       exe, com, bin        Ready-to-run machine-language program
Object           obj, o               Compiled, machine language, not linked
Batch            bat, sh              Commands to the command interpreter
Text             txt, doc             Textual data, documents
Word Processor   wp, tex, rtf, doc    Various word-processor formats
Archive          arc, zip, tar        Related files grouped into one compressed file
Multimedia       mpeg, mov, rm        For containing audio/video information
Markup           xml, html, tex       Textual data and documents
Library          lib, a, so, dll      Libraries of routines for programmers
Print or View    gif, pdf, jpg        A format for printing or viewing an ASCII or binary file
File Directories
The collection of files is organized in a file directory. The directory contains
information about the files, including attributes, location, and ownership. Much of
this information, especially that concerned with storage, is managed by the
operating system. The directory is itself a file, accessible by various file
management routines.
The information contained in a device directory includes:
Name
Type
Address
Current length
Maximum length
Date last accessed
Date last updated
Owner id
Protection information
The operations performed on a directory are:
Search for a file
Create a file
Delete a file
List a directory
Rename a file
Traverse the file system
Advantages of Maintaining Directories
Efficiency: A file can be located more quickly.
Naming: It becomes convenient for users, as two users can have the same name
for different files, or the same file may be known by different names.
Grouping: Files can be logically grouped by their properties, e.g., all Java
programs, all games, etc.
Single-Level Directory
In this scheme, a single directory is maintained for all users.
Naming problem: All file names must be unique; two users cannot use the same
name for different files.
Grouping problem: Users cannot group files according to their needs.
Two-Level Directory
In this scheme, a separate directory is maintained for each user.
Path name: Because of the two levels, every file has a path name used to locate it.
Different users can now have files with the same name.
Searching is efficient in this method.
Tree-Structured Directory
The directory is maintained in the form of a tree. Searching is efficient and also there is
grouping capability. We have absolute or relative path name for a file.
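A tree-structured directory and its path-name lookup can be sketched with nested dictionaries. This is a minimal illustration, not a real file-system implementation; the directory names and contents are invented:

```python
# Toy tree-structured directory: each directory is a dictionary mapping a
# name to either a nested dictionary (a subdirectory) or a string (a file).
root = {
    "home": {
        "user": {"notes.txt": "hello"},
    },
    "bin": {"sh": "<binary>"},
}

def resolve(tree, path):
    """Follow an absolute path such as /home/user/notes.txt down the tree."""
    node = tree
    for part in path.strip("/").split("/"):
        node = node[part]          # raises KeyError if a component is missing
    return node

print(resolve(root, "/home/user/notes.txt"))   # hello
```

Grouping comes for free here: every subtree is a group, and a relative path name is simply the same walk started from some subdirectory instead of the root.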
Disadvantages of Contiguous Allocation
External fragmentation will occur, making it difficult to find contiguous blocks of
space of sufficient length. A compaction algorithm will be necessary to free up
additional space on the disk.
Also, with pre-allocation, it is necessary to declare the size of the file at the time of
creation.
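The external-fragmentation problem described above can be demonstrated with a small simulation. The ContiguousDisk class and the first-fit policy below are our own illustrative choices, not from the text:

```python
# Toy contiguous allocation: each file occupies a run of consecutive blocks,
# and the file table maps a name to its (start, length) pair.

class ContiguousDisk:
    def __init__(self, n_blocks):
        self.blocks = [None] * n_blocks    # None = free block
        self.table = {}                    # name -> (start, length)

    def allocate(self, name, length):
        """First-fit search for `length` consecutive free blocks."""
        run = 0
        for i, b in enumerate(self.blocks):
            run = run + 1 if b is None else 0
            if run == length:
                start = i - length + 1
                for j in range(start, start + length):
                    self.blocks[j] = name
                self.table[name] = (start, length)
                return start
        return None                        # no contiguous hole large enough

    def free(self, name):
        start, length = self.table.pop(name)
        for j in range(start, start + length):
            self.blocks[j] = None

disk = ContiguousDisk(10)
disk.allocate("a", 4)        # blocks 0-3
disk.allocate("b", 3)        # blocks 4-6
disk.allocate("c", 3)        # blocks 7-9
disk.free("b")               # leaves a 3-block hole in the middle
# 3 blocks are free, but a 4-block file cannot be placed anywhere:
# this is external fragmentation, which only compaction can repair.
print(disk.allocate("d", 4))  # None
```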
Linked Allocation (Non-Contiguous Allocation)
Allocation is on an individual block basis. Each block contains a pointer to the next
block in the chain. Again, the file table needs just a single entry for each file,
showing the starting block and the length of the file. Although pre-allocation is
possible, it is more common simply to allocate blocks as needed. Any free block can
be added to the chain, and the blocks need not be contiguous. An increase in file
size is always possible if a free disk block is available. There is no external
fragmentation because only one block is needed at a time; there can be internal
fragmentation, but only in the last disk block of the file.
Disadvantages of Linked Allocation (Non-Contiguous Allocation)
Internal fragmentation exists in the last disk block of the file.
There is an overhead of maintaining the pointer in every disk block.
If the pointer of any disk block is lost, the file will be truncated.
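The scheme above can be sketched in a few lines. This LinkedDisk class is an invented toy model, not a real file system; each "block" is just a (data, next) pair:

```python
# Toy linked allocation: each occupied disk block stores (data, next_index).
# The file table needs only the starting block and the length of each file.
FREE = None

class LinkedDisk:
    def __init__(self, n_blocks):
        self.blocks = [FREE] * n_blocks
        self.table = {}                      # name -> (start_block, length)

    def _find_free(self):
        for i, b in enumerate(self.blocks):
            if b is FREE:
                return i
        raise RuntimeError("disk full")

    def write(self, name, chunks):
        """Allocate one (not necessarily adjacent) block per chunk, chaining them."""
        start = prev = None
        for data in chunks:
            i = self._find_free()
            self.blocks[i] = (data, None)    # new last block of the chain
            if prev is None:
                start = i                    # remember the first block
            else:
                d, _ = self.blocks[prev]
                self.blocks[prev] = (d, i)   # link previous block to this one
            prev = i
        self.table[name] = (start, len(chunks))

    def read(self, name):
        """Follow the pointer chain from the starting block."""
        out, i = [], self.table[name][0]
        while i is not None:
            data, i = self.blocks[i]
            out.append(data)
        return out

disk = LinkedDisk(8)
disk.write("f", ["he", "ll", "o"])
print(disk.read("f"))                        # ['he', 'll', 'o']
```

Note how read() depends on every pointer in the chain: corrupting one next_index makes the rest of the file unreachable, which is exactly the truncation disadvantage listed above.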
1. Bit Table (Bit Vector): In this vector, every bit corresponds to a particular block;
0 implies that the block is free and 1 implies that the block is already occupied. A bit
table has the advantage that it is relatively easy to find one free block or a
contiguous group of free blocks. Thus, a bit table works well with any of the file
allocation methods. Another advantage is that it is as small as possible.
2. Free Block List: In this method, each block is assigned a number sequentially and the list
of the numbers of all free blocks is maintained in a reserved block of the disk.
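The bit-table search described above can be sketched as follows. The BitTable class and the sizes used are invented for illustration, not taken from a real disk driver:

```python
# Toy free-space bitmap: bit i is 1 if disk block i is occupied, 0 if free.
class BitTable:
    def __init__(self, n_blocks):
        self.bits = [0] * n_blocks

    def first_free(self):
        """Index of the first free block, or None if the disk is full."""
        for i, b in enumerate(self.bits):
            if b == 0:
                return i
        return None

    def first_free_run(self, k):
        """Start index of the first run of k contiguous free blocks, or None."""
        run = 0
        for i, b in enumerate(self.bits):
            run = run + 1 if b == 0 else 0
            if run == k:
                return i - k + 1
        return None

bm = BitTable(8)
for i in (0, 1, 2, 5):
    bm.bits[i] = 1                 # mark some blocks as occupied
print(bm.first_free())             # 3
print(bm.first_free_run(2))        # 3  (blocks 3 and 4 are both free)
```

Because runs of zero bits are easy to scan for, the same structure serves contiguous and non-contiguous allocation alike, which is the point made above about the bit table working with any allocation method.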
One Marks
3. To access the services of the operating system, the interface is provided by the
___________
a) RTLinux
b) Palm OS
c) QNX
d) VxWorks