
Unit 5

Frequently asked questions from this unit:


Ques 1 Explain in detail about file system protection and security. (2021-22)

Ques 2 Explain the structure and management of file directories, including hierarchical and flat
directory structures. (2023-24) ---- Hierarchical is the same as tree-structured and flat is the same as single-level.

Ques 3 Explain the term RAID and its characteristics. Also, explain various RAID levels with their
advantages and disadvantages. (2022-23)

Ques 4 Suppose the following disk request sequence (track numbers) for a disk with 100 tracks is
given: 45, 20, 90, 10, 50, 60, 80, 25, 70. Assume that the initial position of the R/W head is on track 49.
Calculate the net head movement using: (i) SSTF (ii) SCAN (iii) CSCAN (iv) LOOK (2022-23)

Ques 5. A hard disk has 2000 cylinders, numbered from 0 to 1999. The drive is currently serving
the request at cylinder 143, and the previous request was at cylinder 125. The status of the queue is as
follows: 86, 1470, 913, 1774, 948, 1509, 1022, 1750, 130. What is the total distance (in cylinders) that the
disk arm moves to satisfy the entire pending request queue for each of the following disk scheduling
algorithms? (i) SSTF (ii) FCFS (2021-22) (2018-19)

Ques 6 Explain in detail about disk storage and disk scheduling. (2021-22)

Ques 7 Write a short note on

1. Linked file allocation method


2. Indexed file allocation method
3. Contiguous allocation method
4. I/O Buffering (2018-19) (2020-21)(2017-18)

Ques 8 Explain about file concept. Define in detail about file organization and access mechanism.

(2022-23)

Ques 9 Disk scheduling numericals (as discussed in lectures)

1. FCFS
2. SSTF
3. SCAN
4. LOOK
5. CSCAN
6. CLOOK

Ques 10 Explain FCFS, SSTF, SCAN, CSCAN, LOOK and CLOOK scheduling with examples.

Ques 11. Difference between tracks and sectors. (2 marks)

1. File concept:

A file is a collection of related information that is stored on secondary storage. Information stored
in files must be persistent i.e. not affected by power failures & system reboots. Files represent both
programs as well as data.

The part of the OS dealing with files is known as the file system. The file system takes care of the following issues:

o File structure

We have seen various data structures in which a file can be stored. The task of the file system
is to maintain an optimal file structure.

o Recovering free space

Whenever a file gets deleted from the hard disk, free space is created on the disk.
There can be many such spaces, which need to be recovered in order to reallocate them to
other files.

o Disk space assignment to the files

The major concern about a file is deciding where to store it on the hard disk. There
are various disk scheduling and allocation algorithms, which will be covered later.

o Tracking data location

A file may or may not be stored within only one block. It can be stored in non-contiguous
blocks on the disk. We need to keep track of all the blocks on which the parts of
the file reside.

The important file concepts include:


1. File attributes: A file has certain attributes, which vary from one operating system to another.
Name: Every file has a name by which it is referred.
Identifier: It is a unique number that identifies the file within the file system.
Type: This information is needed for systems that support different types of files.
Location: It is a pointer to a device and to the location of the file on that device.
Size: It is the current size of the file in bytes, words or blocks.
Protection: It is the access-control information that determines who can read, write and execute
the file.
Ex: The admin of the computer may want different protections for different files.
Therefore each file carries its own set of permissions for the different groups of users.
Time, date and user identification: It gives information about the time of creation, last
modification and last use.
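As a small illustration of these attributes, most operating systems expose them through a system call such as stat. The following Python sketch (the file name is just a made-up example) prints a few of them:

import os, stat, time

path = "example.txt"                      # hypothetical file used only for this demo
open(path, "w").write("hello")            # make sure the file exists
info = os.stat(path)                      # ask the OS for the file's attributes

print("Name      :", path)
print("Identifier:", info.st_ino)                    # inode number on UNIX-like systems
print("Size      :", info.st_size, "bytes")
print("Protection:", stat.filemode(info.st_mode))    # e.g. -rw-r--r--
print("Modified  :", time.ctime(info.st_mtime))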
2. File operations: The operating system can provide system calls to create, read, write, reposition,
delete and truncate files.

Creating a file: Two steps are necessary to create a file. First, space must be found for the
file in the file system. Second, an entry must be made in the directory for the new file.
Creation of the file is the most important operation on the file. Different types of files are
created by different tools; for example, text editors are used to create text files, word
processors are used to create word-processing documents and image editors are used to create
image files.
Reading a file: Data are read from the file at the current position. The system must keep a
read pointer to know the location in the file from where the next read is to take place. Once
the read has taken place, the read pointer is updated. A file can be opened in three
different modes: read, write and append. A read pointer is maintained by the OS, pointing
to the position up to which the data has been read.

Writing a file: Data are written to the file at the current position. The system must keep a
write pointer to know the location in the file where the next write is to take place. The write
pointer must be updated whenever a write occurs.
Repositioning within a file (seek): The directory is searched for the appropriate entry and the
current file position is set to a given value. After repositioning, data can be read from or
written into that position. So, repositioning is simply moving the file pointer forward or
backward depending upon the user's requirement. It is also called seeking.

Deleting a file: To delete a file, we search the directory for the required file.
After deletion, the space is released so that it can be reused by other files. Deleting the file
not only deletes all the data stored inside the file, it also deletes all the attributes of the
file. The space which was allocated to the file becomes available and can be allocated
to other files.
Truncating a file: Truncating erases the contents of the file while keeping its attributes. The file is
not completely deleted, although the information stored inside the file is released.
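The operations above map directly onto the ordinary file API of most languages. A minimal Python sketch (the file name is chosen only for illustration) that exercises create, write, read, reposition, truncate and delete:

import os

with open("demo.txt", "w") as f:          # create: space is allocated and a directory entry is made
    f.write("hello file system")          # write: data goes at the current write pointer

with open("demo.txt", "r+") as f:
    print(f.read())                       # read: data is read from the current read pointer
    f.seek(6)                             # reposition (seek): move the file pointer to byte 6
    print(f.read())                       # prints "file system"
    f.truncate(5)                         # truncate: contents beyond byte 5 are released, attributes remain

os.remove("demo.txt")                     # delete: the directory entry and data blocks are released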
3. File types: The file name is split into two parts, name and extension, usually separated by
a period. The user and the OS can tell the type of a file from the extension itself.
Listed below are some file types along with their extensions:

File Type          Extension
Executable file    exe, bin, com
Object file        obj, o (compiled)
Source code file   c, cpp, java, pas (Pascal)
Batch file         bat, sh (commands to the command interpreter)
Text file          txt, doc (textual data, documents)
Archive file       arc, zip, rar (related files grouped together and compressed for storage)
Multimedia file    mpeg (binary file containing audio or A/V information)


4. File structure: Files can be structured in several ways. Three common possibilities are:
Byte sequence: The file is an unstructured sequence of bytes. The OS doesn't care
about the content of the file; it only sees bytes. This structure provides maximum flexibility.
Users can write anything into their files and name them according to their convenience. Both
UNIX and Windows use this approach.

Record sequence: In this structure, a file is a sequence of fixed-length records. Here the
read operation returns one record and the write operation overwrites or appends one record.

Tree: In this organization, a file consists of a tree of records of varying lengths. Each record
contains a key field. The tree is sorted on the key field to allow fast searching for a
particular key.
Access methods: Basically, access methods are divided into two types:
Sequential access: It is the simplest access method. Information in the file is processed in
order, i.e. one record after another. A process can read all the data in a file in order, starting
from the beginning, but cannot skip ahead and read arbitrarily from any location. Sequential files can be
rewound. It was convenient when the storage medium was magnetic tape rather than disk.
Direct access: A file is made up of fixed-length logical records that allow programs to read and
write records rapidly in no particular order. This method can be used when disks are used for
storing files, and it is used in many applications, e.g. database systems.
Ex: If an airline customer wants to reserve a seat on a particular flight, the reservation program
must be able to access the record for that flight directly without reading the records before it.
In a direct-access file, there is no restriction on the order of reading or writing. For example,
we can read block 14, then read block 50 and then write block 7, etc. Direct-access files are very
useful for immediate access to large amounts of information.
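The arithmetic behind direct access is simply relative block number times block size. A short sketch, where the block size and file names are assumptions made for illustration:

BLOCK_SIZE = 512                              # assumed logical block size in bytes

def read_block(path, block_no):
    # Direct access: jump straight to the requested block without reading the earlier ones.
    with open(path, "rb") as f:
        f.seek(block_no * BLOCK_SIZE)         # relative block number -> byte offset
        return f.read(BLOCK_SIZE)

# e.g. read block 14, then block 50, in no particular order:
# data14 = read_block("flights.db", 14)       # "flights.db" is a hypothetical file
# data50 = read_block("flights.db", 50)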

2. Directory concept
Directory structure: The file system of a computer can be extensive. Some systems store thousands
of files on disk. To manage all these data, we need to organize them. The organization is done in two
steps. The file system is broken into partitions, and each partition contains information about the files within
it.
A directory can be defined as a listing of the related files on the disk. The directory may store some
or all of the file attributes.

A directory can be viewed as a file which contains the metadata of a bunch of files.

Operation on a directory:
Search for a file: We need to be able to search a directory for a particular file.
Create a file: New files are created & added to the directory.
Delete a file: When a file is no longer needed, we may remove it from the directory.
List a directory: We should be able to list the files of the directory.
Rename a file: The name of a file is changed when the contents of the file changes.
Traverse the file system: It is useful to be able to access every directory & every file within
a directory.
Structure of a directory: The most common schemes for defining the structure of the directory
are:
1. Single-level directory: It is the simplest directory structure. All files are present in the same
directory, so it is easy to manage and understand.
Limitation: A single-level directory is difficult to manage when the number of files increases or when
there is more than one user. Since all files are in the same directory, they must have unique names,
so there can be confusion of file names between different users.

Advantages

1. Implementation is very simple.


2. If the sizes of the files are very small then the searching becomes faster.
3. File creation, searching, deletion is very simple since we have only one directory.
Disadvantages

1. We cannot have two files with the same name.


2. The directory may become very big; therefore searching for a file may take a long time.
3. Protection cannot be implemented for multiple users.
4. There is no way to group files of the same kind.
5. Choosing a unique name for every file is a bit complex and limits the number of files in the
system, because most operating systems limit the number of characters used to
construct a file name.

2. Two-level directory: The solution to the name-collision problem in a single-level directory is
to create a separate directory for each user. In a two-level directory structure, each user has his
own user file directory (UFD). When a user logs in, the master file directory (MFD) is searched. It is indexed
by user name, and each entry points to the UFD of that user.
Limitation: It solves the name-collision problem, but it isolates one user from another. This is an
advantage when users are completely independent, but a disadvantage when the users
need to access each other's files and cooperate on a particular task.

3. Tree-structured directory: It is the most common directory structure. A two-level directory
is a two-level tree, so the generalization is to extend the directory structure to a tree of arbitrary
height. It allows users to create their own subdirectories and organize their files. Every file in
the system has a unique path name: the path from the root through all the subdirectories
to the specified file. Each user has a current directory, which contains most of the files that are of
current interest to the user. Path names can be of two types: an absolute path name begins
at the root directory and follows the path down to the specified file, while a relative path name
defines the path from the current directory.
Ex: If the current directory is root/spell/mail, then the relative path name prt/first corresponds to the
absolute path name root/spell/mail/prt/first. Here users can also access the files of other users
by specifying their path names.
Advantage: Searching is more efficient in this directory structure.

Permissions on the file and directory

A tree-structured directory system may consist of various levels; therefore a set of permissions
is assigned to each file and directory.

The permissions are r, w and x, which stand for reading, writing and executing the file or
directory. The permissions are assigned to three types of users: owner, group and others.

There is an identification bit which differentiates between a directory and a file. For a directory it is d, and
for an ordinary file it is a dash (-).

4. Acyclic graph directory: It is a generalization of the tree-structured directory scheme. An acyclic


graph allows directories to have shared subdirectories and files. A shared directory or file is not
the same as two copies of a file. With two copies, a programmer can view his own copy, but the changes made
in the file by one programmer are not reflected in the other's copy. With a shared file, there is
only one actual file, so any changes made by one person are immediately visible to others.
This scheme is useful in a situation where several people are working as a team; here all the
files that are to be shared are put together in one directory. Shared files and subdirectories can
be implemented in several ways. A common way, used in UNIX systems, is to create a new
directory entry called a link, which is a pointer to another file or subdirectory. The other approach is
to duplicate all information in both sharing directories. An acyclic graph structure is more flexible
than a tree structure, but it is also more complex.
Limitation: A file may now have multiple absolute path names, so distinct file names may
refer to the same file. Another problem occurs during deletion of a shared file: when the file is
removed by any one user, it may leave dangling pointers to the now non-existent file. One serious
problem in an acyclic graph structure is ensuring that there are no cycles. To avoid these problems,
some systems do not allow shared directories or files; e.g. MS-DOS uses a tree structure rather
than an acyclic graph to avoid the problems associated with deletion. One approach for deletion is to
preserve the file until all references to it are deleted.
5. General graph directory: When links are added to an existing tree-structured directory, the
tree structure is destroyed, resulting in a general graph structure. Linking is a technique that
allows a file to appear in more than one directory. The advantage of an acyclic structure is the simplicity of the algorithm
to traverse the graph and determine when there are no more references to a file. But a similar
problem exists when we try to determine when a file can be deleted. Here also, a value of
zero in the reference count means that there are no more references to the file or directory and
the file can be deleted. But when cycles exist, the reference count may be non-zero even when
there are no remaining references to the directory or file. This occurs due to the possibility of self-referencing
(a cycle) in the structure. So here we have to use a garbage-collection scheme to
determine when the last reference to a file has been deleted and the space can be reallocated. It
involves two steps:
• Traverse the entire file system and mark everything that can be accessed.
• Everything that isn't marked is added to the list of free space.
But this process is extremely time consuming, and it is only necessary due to the presence of cycles
in the graph. So, an acyclic graph structure is easier to work with than this.

In a general graph directory structure, cycles are allowed within the directory structure, and
multiple directories can be derived from more than one parent directory.

Protection

When information is kept in a computer system, a major concern is its protection from physical
damage (reliability) as well as improper access.
Types of access: In systems that don't permit access to the files of other users, protection
is not needed. So one extreme is to provide protection by prohibiting access; the other extreme
is to provide free access with no protection. Both these approaches are too extreme for general
use, so we need controlled access. It is provided by limiting the types of file access. Access is
permitted depending on several factors; one major factor is the type of access requested. The
different types of operations that can be controlled are:

Read
Write
Execute
Append
Delete
List

Access lists and groups:


Various users may need different types of access to a file or directory. So, we can associate
an access list with each file and directory to implement identity-dependent access. When
a user requests access to a particular file, the OS checks the access list associated
with that file. If that user is granted the requested access, then the access is allowed;
otherwise a protection violation occurs and the user is denied access to the file. The
main problem with access lists is their length: it is very tedious to construct such a list.
So, we use a condensed version of the access list by classifying the users into 3 categories:

Owner: The user who created the file.


Group: A set of users who are sharing the files.

Others: All other users in the system.


Here only 3 fields are required to define protection. Each field is a collection of bits, each of which
either allows or prevents the corresponding access. E.g.
The UNIX file system defines 3 fields of 3 bits each: r w x
r (read access), w (write access), x (execute access)
Separate fields are kept for the file owner, the group and other users. So, 9 bits per file are needed to
record the protection information.
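To make the 3 x 3 bit idea concrete, here is a small Python sketch that decodes an example mode value into the owner, group and others fields (the value 0o754 is made up for illustration):

import stat

mode = 0o754                                   # example: rwx for owner, r-x for group, r-- for others

def rwx(bits):
    return ("r" if bits & 4 else "-") + ("w" if bits & 2 else "-") + ("x" if bits & 1 else "-")

owner  = rwx((mode >> 6) & 0o7)                # first 3-bit field
group  = rwx((mode >> 3) & 0o7)                # second 3-bit field
others = rwx(mode & 0o7)                       # third 3-bit field
print(owner, group, others)                    # rwx r-x r--
print(stat.filemode(mode | stat.S_IFREG))      # the same 9 bits rendered by the standard library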

3. File allocation methods


Allocation methods:

There are various methods which can be used to allocate disk space to the files. Selection of an
appropriate allocation method will significantly affect the performance and efficiency of the
system. Allocation method provides a way in which the disk will be utilized and the files will be
accessed.
We have mainly discussed 3 methods of allocating disk space which are widely used.
1. Contiguous allocation:

If the blocks are allocated to a file in such a way that all the logical blocks of the file get
contiguous physical blocks on the hard disk, then such an allocation scheme is known as contiguous
allocation.

For example, consider a directory with three files. The directory table records the starting block and the length
of each file, and contiguous blocks are assigned to each file as per its need.

a. It requires each file to occupy a set of contiguous blocks on the disk.


b. The number of disk seeks required for accessing a contiguously allocated file is minimal.
c. The IBM VM/CMS OS uses contiguous allocation. Contiguous allocation of a file is defined
by the disk address of its first block and its length (in block units).

d. If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1.
e. The directory entry for each file indicates the address of the starting block and the length of the
area allocated for the file.
f. Contiguous allocation supports both sequential and direct access. For sequential access, the
file system remembers the disk address of the last block referenced and reads the next block
when necessary.
g. For direct access to block i of a file that starts at block b, we can immediately access block
b+i.

Problems: One difficulty with contiguous allocation is finding space for a new file. It
also suffers from external fragmentation: as files are deleted and
allocated, the free disk space is broken into small pieces. A major problem in contiguous
allocation is determining how much space is needed for a file. When a file is created, the total amount
of space it will need must be found and allocated. Even if the total amount of space
needed for a file is known in advance, pre-allocation is inefficient, because a file that
grows very slowly must still be allocated enough space for its final size, even though most of
that space is left unused for a long period of time. Therefore, the file has a large amount of
internal fragmentation.
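A tiny sketch of the b + i address calculation described in points (d) and (g); the start block and length are arbitrary illustrative values:

def contiguous_block(start_b, length_n, i):
    # Physical block that holds logical block i of a contiguously allocated file.
    if not 0 <= i < length_n:
        raise IndexError("logical block outside the file")
    return start_b + i                        # blocks b, b+1, ..., b+n-1

# A file starting at block 19 with length 6 occupies blocks 19..24:
print([contiguous_block(19, 6, i) for i in range(6)])     # [19, 20, 21, 22, 23, 24]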

2. Linked List Allocation

Linked list allocation solves all problems of contiguous allocation. In linked list allocation, each file
is treated as a linked list of disk blocks. However, the disk blocks allocated to a particular file
need not be contiguous on the disk. Each disk block allocated to a file contains a pointer which
points to the next disk block allocated to the same file.
a. Linked allocation solves all problems of contiguous allocation.
b. In linked allocation, each file is a linked list of disk blocks, which are scattered throughout
the disk.
c. The directory contains a pointer to the first and last blocks of the file.
d. Each block contains a pointer to the next block.
e. These pointers are not accessible to the user. To create a new file, we simply create a new
entry in the directory.
f. For writing to the file, a free block is found by the free space management system and this
new block is written to & linked to the end of the file.
g. To read a file, we read blocks by following the pointers from block to block.
h. There is no external fragmentation with linked allocation & any free block can be used to
satisfy a request.
i. Also there is no need to declare the size of a file when that file is created. A file can continue
to grow as long as there are free blocks.
Limitations: It can be used effectively only for sequential-access files. To find the ith
block of the file, we must start at the beginning of that file and follow the pointers until
we reach the ith block, so it is inefficient to support direct-access files (see the traversal
sketch after the list below). Due to the presence of pointers, each file requires slightly more space than
before. Another problem is reliability: since the files are linked together by pointers
scattered all over the disk, what would happen if a pointer were lost or damaged?

Disadvantages
1. Random access is not provided.
2. Pointers require some space in the disk blocks.
3. If any pointer in the linked list is lost or damaged, the rest of the file becomes inaccessible (the file gets corrupted).
4. Each block must be traversed in turn to reach a given block.
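The traversal cost can be seen in a toy sketch where a Python dictionary stands in for the disk and each block stores (data, pointer-to-next); the block numbers are invented for illustration:

# Toy "disk": block number -> (data, pointer to the next block); None marks end of file.
disk = {9: ("A", 16), 16: ("B", 1), 1: ("C", 10), 10: ("D", None)}
first_block = 9                               # this address is what the directory entry stores

def ith_block(i):
    # Follow the chain of pointers: O(i) block reads just to reach block i.
    block = first_block
    for _ in range(i):
        block = disk[block][1]                # read a block only to obtain its pointer
        if block is None:
            raise IndexError("file is shorter than i blocks")
    return disk[block][0]

print(ith_block(2))                           # "C" -- reached only after reading blocks 9 and 16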

3. Indexed Allocation:

Instead of maintaining a file allocation table of all the disk pointers, the indexed allocation scheme
stores all the disk pointers for a file in one block called the index block. The index block doesn't hold
the file data; it holds the pointers to all the disk blocks allocated to that particular file. The directory
entry contains only the index block address.
a. Indexed allocation solves the problems of linked allocation by bringing all the pointers
together into one location known as the index block.
b. Each file has its own index block, which is an array of disk block addresses. The ith entry in
the index block points to the ith block of the file.
c. The directory contains the address of the index block. To read the ith block, we use the
pointer in the ith index-block entry and read the desired block.
d. To write into the ith block, a free block is obtained from the free-space manager and its
address is put in the ith index-block entry.

e. Indexed allocation supports direct access without suffering external fragmentation.

Limitations: The pointer overhead of the index block is greater than the pointer overhead
of linked allocation, so more space is wasted here than with linked allocation. In indexed
allocation, an entire index block must be allocated for each file, even if most of the pointers are nil.
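For contrast with the linked scheme, here is a toy sketch of indexed allocation (block numbers again invented); one lookup in the index block reaches any data block directly:

# Toy "disk" data blocks and an index block holding pointers to the file's blocks.
disk = {25: "A", 3: "B", 14: "C", 30: "D"}
index_block = [25, 3, 14, 30]                 # the directory stores only this block's address

def read_ith(i):
    # One index-block lookup gives the data block directly (direct access).
    return disk[index_block[i]]

print(read_ith(2))                            # "C" -- no chain of pointers to follow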
4. Disk Scheduling

As we know, a process needs two types of time: CPU time and I/O time. For I/O, it requests the
operating system to access the disk.

However, the operating system must be fair enough to satisfy each request, and at the same time
it must maintain the efficiency and speed of process execution.

The technique that the operating system uses to determine which request is to be satisfied next is
called disk scheduling.

Important terms related to disk scheduling.

Seek Time

Seek time is the time taken to position the disk arm on the specified track where the read/write request
will be satisfied.

Rotational Latency

It is the time taken by the desired sector to rotate to the position where it can be accessed by the
R/W head.
Transfer Time

It is the time taken to transfer the data.

Disk Access Time

Disk access time is given as,

Disk Access Time = Rotational Latency + Seek Time + Transfer Time
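A quick numeric check of the formula (the millisecond values are made up purely for illustration):

seek_time, rotational_latency, transfer_time = 5.0, 4.2, 0.8     # in milliseconds (assumed)
disk_access_time = rotational_latency + seek_time + transfer_time
print(disk_access_time, "ms")                                    # 10.0 ms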

Disk Response Time

It is the average time spent by each request waiting for its I/O operation to be performed.

Purpose of Disk Scheduling

The main purpose of a disk scheduling algorithm is to select a disk request from the queue of I/O
requests and decide when this request will be processed.

Goals of a Disk Scheduling Algorithm

o Fairness
o High throughput
o Minimal travelling head time
o Minimal seek time

Disk Scheduling Algorithms

The list of various disk scheduling algorithms is given below. Each algorithm carries some
advantages and disadvantages, and the limitations of each algorithm led to the evolution of the
next one.

o FCFS scheduling
o SSTF (shortest seek time first) scheduling
o SCAN scheduling
o C-SCAN scheduling
o LOOK scheduling
o C-LOOK scheduling

FCFS Scheduling Algorithm

It is the simplest disk scheduling algorithm. It services the I/O requests in the order in which they
arrive. There is no starvation in this algorithm; every request is serviced.

Disadvantages
o The scheme does not optimize the seek time.
o The requests may come from different processes, so there is the possibility of inappropriate movement of the head.

Example (Take from notebook)
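In place of the notebook example, here is a rough Python sketch of the FCFS seek-time calculation; the request queue and head position are the ones from the practice questions later in this unit:

def fcfs_seek_total(requests, head):
    # Service the requests strictly in arrival order and add up the head movement.
    total = 0
    for track in requests:
        total += abs(track - head)
        head = track
    return total

print(fcfs_seek_total([82, 170, 43, 140, 24, 16, 190], 50))      # 642 tracks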

SSTF Scheduling Algorithm

The shortest seek time first (SSTF) algorithm selects the disk I/O request which requires the least disk
arm movement from its current position, regardless of direction. It reduces the total seek time as
compared to FCFS.

It allows the head to move to the closest track in the service queue.

Disadvantages
o It may cause starvation for some requests.
o Switching direction frequently slows the working of the algorithm.
o It is not the most optimal algorithm.

Example (Take from notebook)
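A rough sketch of the SSTF calculation (same illustrative request queue as above):

def sstf_seek_total(requests, head):
    # Repeatedly service the pending request closest to the current head position.
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda t: abs(t - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

print(sstf_seek_total([82, 170, 43, 140, 24, 16, 190], 50))      # 208 tracks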

SCAN Algorithm

It is also called the elevator algorithm. In this algorithm, the disk arm moves in a particular
direction till the end of the disk, satisfying all the requests coming in its path, and then it turns back and moves
in the reverse direction, satisfying the requests coming in its path.

It works the way an elevator works: the elevator moves in one direction all the way to the last floor in
that direction and then turns back.

Example (Take from notebook)
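A rough sketch of SCAN, assuming the head initially moves towards higher track numbers unless told otherwise:

def scan_seek_total(requests, head, max_track, direction="up"):
    # Sweep to the edge of the disk in one direction, then reverse (elevator behaviour).
    lower  = sorted(t for t in requests if t < head)
    higher = sorted(t for t in requests if t >= head)
    if direction == "up":
        total = max_track - head                     # go all the way to the last track
        if lower:
            total += max_track - lower[0]            # come back as far as the lowest request
    else:
        total = head                                 # go all the way to track 0
        if higher:
            total += higher[-1]                      # come back as far as the highest request
    return total

print(scan_seek_total([82, 170, 43, 140, 24, 16, 190], 50, 199))     # 332 tracks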

C-SCAN algorithm

In the C-SCAN algorithm, the arm of the disk moves in a particular direction, servicing requests until
it reaches the last cylinder; then it jumps to the first cylinder of the opposite end without
servicing any request, and then starts moving in the same direction again, servicing the remaining
requests.

Example (Take from notebook)
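A rough sketch of C-SCAN under the same assumption (initial direction towards higher track numbers):

def cscan_seek_total(requests, head, max_track):
    # Sweep up to the last track, jump back to track 0, then continue sweeping upwards.
    lower = sorted(t for t in requests if t < head)
    total = max_track - head                         # everything above is serviced on the way up
    if lower:
        total += max_track                           # the jump from the last track to track 0
        total += lower[-1]                           # sweep up again to the highest remaining request
    return total

print(cscan_seek_total([82, 170, 43, 140, 24, 16, 190], 50, 199))    # 391 tracks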


Look Scheduling

It is like the SCAN scheduling algorithm to some extent, except for the difference that, in this scheduling
algorithm, the arm of the disk stops moving inwards (or outwards) when no more requests exist in that
direction. This algorithm tries to overcome the overhead of the SCAN algorithm, which forces the
disk arm to move in one direction till the end regardless of whether any request exists in that
direction or not.
Example (Take from notebook)
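A rough sketch of LOOK (again assuming an initial upward direction unless stated otherwise):

def look_seek_total(requests, head, direction="up"):
    # Like SCAN, but reverse at the last pending request instead of at the disk edge.
    lower  = sorted(t for t in requests if t < head)
    higher = sorted(t for t in requests if t >= head)
    total = 0
    if direction == "up":
        if higher:
            total += higher[-1] - head               # go only as far as the highest request
        if lower:
            total += (higher[-1] if higher else head) - lower[0]
    else:
        if lower:
            total += head - lower[0]                 # go only as far as the lowest request
        if higher:
            total += higher[-1] - (lower[0] if lower else head)
    return total

print(look_seek_total([82, 170, 43, 140, 24, 16, 190], 50))          # 314 tracks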

C Look Scheduling

The C-LOOK algorithm is similar to the C-SCAN algorithm to some extent. In this algorithm, the arm of
the disk moves in one direction servicing requests until it reaches the highest requested cylinder, then it
jumps to the lowest requested cylinder without servicing any request, and then it again starts moving in the
same direction servicing the remaining requests.

It differs from C-SCAN in the sense that C-SCAN forces the disk arm to move till
the last cylinder regardless of whether any request is to be serviced on that cylinder or
not.

Example (Take from notebook)
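A rough sketch of C-LOOK, assuming the head is initially moving towards higher track numbers:

def clook_seek_total(requests, head):
    # Sweep up to the highest request, jump to the lowest request, keep sweeping upwards.
    lower  = sorted(t for t in requests if t < head)
    higher = sorted(t for t in requests if t >= head)
    total = 0
    if higher:
        total += higher[-1] - head                   # service the requests above the head
    if lower:
        total += (higher[-1] if higher else head) - lower[0]   # jump to the lowest pending request
        total += lower[-1] - lower[0]                # service the remaining requests upwards
    return total

print(clook_seek_total([82, 170, 43, 140, 24, 16, 190], 50))         # 341 tracks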

Previous year numericals asked from disk scheduling


UNIT 05
DISK SCHEDULING CONCEPTS AND ALGORITHMS

DISK SCHEDULING
Disk scheduling is done by operating systems to schedule I/O requests arriving for the disk.
Disk scheduling is also known as I/O scheduling.

Disk scheduling is important because:


Multiple I/O requests may arrive from different processes, and only one I/O request can be
served at a time by the disk controller. Thus the other I/O requests need to wait in the waiting
queue and need to be scheduled.
Two or more requests may be far from each other, which can result in greater disk arm
movement.
• Hard drives are one of the slowest parts of the computer system and thus need to be
accessed in an efficient manner.

DISK SCHEDULING CONCEPTS


Seek Time: Seek time is the time taken to position the disk arm on the specified track where
the data is to be read or written. The disk scheduling algorithm that gives the minimum
average seek time is better.

• Rotational Latency: Rotational latency is the time taken by the desired sector of the disk
to rotate into a position where it can be accessed by the read/write head. The disk scheduling
algorithm that gives the minimum rotational latency is better.

• Transfer Time: Transfer time is the time to transfer the data. It depends on the rotating
speed of the disk and the number of bytes to be transferred.

• Disk Access Time: Disk Access Time = Seek Time + Rotational Latency + Transfer
Time

Disk Response Time: Response time is the average time spent by a request waiting
to perform its I/O operation. Average response time is the response time of all
requests. Variance of response time is a measure of how individual requests are serviced with
respect to the average response time. The disk scheduling algorithm that gives the minimum
variance of response time is better.
DISK SCHEDULING ALGORITHMS

The disk scheduling algorithms are as follows:

1. FCFS (First Come First Serve) disk scheduling algorithm

2. SSTF (Shortest Seek Time First) disk scheduling algorithm

3. SCAN disk scheduling algorithm

4. CSCAN (Circular SCAN) disk scheduling algorithm

5. LOOK disk scheduling algorithm

6. CLOOK (Circular LOOK) disk scheduling algorithm


*NOTE* Refer to the Disk Scheduling Algorithm PDF (Unit 04 Part 2.1) shared with you for
numericals.

1. FCFS (First Come First Serve) Disk Scheduling Algorithm


FCFS is the simplest of all the disk scheduling algorithms.
In FCFS, the requests are addressed in the order in which they arrive in the disk queue.

Advantages:
Every request gets a fair chance
No indefinite postponement, i.e. no starvation.

Disadvantages:
Does not try to optimize seek time
May not provide the best possible service

Q1. A disk contains 200 tracks (0-199). The request queue contains track numbers 82, 170, 43,
140, 24, 16, 190 respectively. The current position of the R/W head is 50. Calculate the total number of
track movements by the R/W head (total seek time) using the FCFS disk scheduling algorithm.

Soln. Total Seek Time = 642 units


Q2. A disk contains 200 tracks (0-199). The request queue contains track numbers 55, 58, 39, 18,
90, 160, 150, 38, 184 respectively. The current position of the R/W head is 100. Calculate the total
number of track movements by the R/W head (total seek time) using the FCFS disk scheduling
algorithm.

Soln. Total Seek Time = 498 units



Q3. A disk contains 200 tracks (0-199). The request queue contains track numbers 23, 89, 132,
42, 187 respectively. The current position of the R/W head is 100. Calculate the total number of track
movements by the R/W head (total seek time) using the FCFS disk scheduling algorithm.

Soln. Total Seek Time = 421 units
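The three FCFS answers above can be cross-checked with a short script (just the |current - next| arithmetic, not part of the original notes):

def fcfs_total(requests, head):
    total = 0
    for track in requests:
        total += abs(track - head)
        head = track
    return total

print(fcfs_total([82, 170, 43, 140, 24, 16, 190], 50))               # 642
print(fcfs_total([55, 58, 39, 18, 90, 160, 150, 38, 184], 100))      # 498
print(fcfs_total([23, 89, 132, 42, 187], 100))                       # 421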

2. SSTF (Shortest Seek Time First) Disk Scheduling Algorithm

It is also called the Closest Cylinder Next (CCN) disk scheduling algorithm.

In SSTF (Shortest Seek Time First), the requests having the shortest seek time from the current head position are executed first.
So, the seek time of every request is calculated in advance in the queue, and the requests are then
scheduled according to their calculated seek time.
As a result, the request near the disk arm gets executed first.
SSTF is certainly an improvement over FCFS, as it decreases the average response time and
increases the throughput of the system.

Advantages:
• Average response time decreases
• Throughput increases
• Tries to give an optimum result in the majority of cases

Disadvantages:
• Overhead to calculate the seek time in advance
• Can cause starvation for a request if it has a higher seek time compared to incoming
requests
• High variance of response time, as SSTF favours only some requests

Q1. A disk contains 200 tracks (0-199). The request queue contains track numbers 82, 170, 43,
140, 24, 16, 190 respectively. The current position of the R/W head is 50. Calculate the total number of
track movements by the R/W head (total seek time) using the SSTF disk scheduling algorithm.

Soln. Total Seek Time = 208 units


Q2. A disk contains 200 tracks (0-199). The request queue contains track numbers 55, 58, 39, 18,
90, 160, 150, 38, 184 respectively. The current position of the R/W head is 100. Calculate the total
number of track movements by the R/W head (total seek time) using the SSTF disk scheduling
algorithm.

Soln. Total Seek Time = 248 units


Q3. A disk contains 200 tracks (0-199). The request queue contains track numbers 23, 89, 132,
42, 187 respectively. The current position of the R/W head is 100. Calculate the total number of track
movements by the R/W head (total seek time) using the SSTF disk scheduling algorithm.

Soln. Total Seek Time = 273 units


3. SCAN Disk Scheduling Algorithm

In the SCAN algorithm, the disk arm moves in a particular direction and services the requests
coming in its path; after reaching the end of the disk, it reverses its direction and again services
the requests arriving in its path.
So, this algorithm works like an elevator and hence is also known as the elevator algorithm.
As a result, the requests in the mid-range are serviced more, and those arriving behind the disk
arm have to wait.

Advantages:
• High throughput
• Low variance of response time
• Low average response time

Disadvantages:
Long waiting time for requests for locations just visited by disk arm

QI. A disk contains 200 tracks (0-199). Request queue contains track numbers 82, 170, 43,
140, 24, 16, 190 respectively. Current position of R/W Head is 50. Assume the R/W head is
moving towards higher value. Calculate total number of track movement by R/W head (total
seek time) using SCAN disk scheduling algorithm.

Soln. Total Seek Time = 332 units

Q2. A disk contains 200 tracks (0-199). The request queue contains track numbers 55, 58, 39, 18,
90, 160, 150, 38, 184 respectively. The current position of the R/W head is 100. Assume the R/W head
is moving towards higher values. Calculate the total number of track movements by the R/W head (total
seek time) using the SCAN disk scheduling algorithm.

Soln. Total Seek Time = 280 units

Q3. A disk contains 200 tracks (0-199). The request queue contains track numbers 23, 89, 132, 42,
187 respectively. The current position of the R/W head is 100. Assume the R/W head is moving
towards 0. Calculate the total number of track movements by the R/W head (total seek time) using the
SCAN disk scheduling algorithm.

Soln. Total Seek Time = 287 units

4. CSCAN (Circular SCAN) Disk Scheduling Algorithm


In the SCAN algorithm, the disk arm re-scans the path it has already scanned after reversing
its direction. So it may be possible that too many requests are waiting at the other end, while there
are zero or few requests pending in the area just scanned.



These situations are avoided in the C-SCAN algorithm, in which the disk arm, instead of reversing
its direction, goes to the other end of the disk and starts servicing the requests from there.
So, the disk arm moves in a circular fashion; this algorithm is otherwise similar to the SCAN
algorithm and hence it is known as C-SCAN (Circular SCAN).

Advantages:
• Provides more uniform wait time compared to SCAN

Q1. A disk contains 200 tracks (0-199). The request queue contains track numbers 82, 170, 43,
140, 24, 16, 190 respectively. The current position of the R/W head is 50. Assume the R/W head is
moving towards higher values. Calculate the total number of track movements by the R/W head (total
seek time) using the CSCAN disk scheduling algorithm.

Soln. Total Seek Time = 391 units


Q2. A disk contains 200 tracks (0-199). The request queue contains track numbers 55, 58, 39, 18,
90, 160, 150, 38, 184 respectively. The current position of the R/W head is 100. Assume the R/W head
is moving towards higher values. Calculate the total number of track movements by the R/W head (total
seek time) using the CSCAN disk scheduling algorithm.

Soln. Total Seek Time = 388 units


Q3. A disk contains 200 tracks (0-199). The request queue contains track numbers 23, 89, 132, 42,
187 respectively. The current position of the R/W head is 100. Assume the R/W head is moving
towards 0. Calculate the total number of track movements by the R/W head (total seek time) using the
CSCAN disk scheduling algorithm.

Soln. Total Seek Time = 366 units

5. LOOK Disk Scheduling Algorithm

It is similar to the SCAN disk scheduling algorithm except for the difference that the disk arm,
instead of going to the end of the disk, goes only to the last request to be serviced in front of
the head and then reverses its direction from there.
Thus it prevents the extra delay which occurred due to unnecessary traversal to the end of the
disk.

Q1. A disk contains 200 tracks (0-199). The request queue contains track numbers 82, 170, 43,
140, 24, 16, 190 respectively. The current position of the R/W head is 50. Assume the R/W head is
moving towards higher values. Calculate the total number of track movements by the R/W head (total
seek time) using the LOOK disk scheduling algorithm.

Soln. Total Seek Time = 314 units


Q2. A disk contains 200 tracks (0-199). The request queue contains track numbers 55, 58, 39, 18,
90, 160, 150, 38, 184 respectively. The current position of the R/W head is 100. Assume the R/W head
is moving towards higher values. Calculate the total number of track movements by the R/W head (total
seek time) using the LOOK disk scheduling algorithm.
Soln. Total Seek Time = 250 units
Q3. A disk contains 200 tracks (0-199). The request queue contains track numbers 23, 89, 132, 42,
187 respectively. The current position of the R/W head is 100. Assume the R/W head is moving
towards 0. Calculate the total number of track movements by the R/W head (total seek time) using the
LOOK disk scheduling algorithm.

Soln. Total Seek Time = 241 units

6. CLOOK (Circular LOOK) Disk Scheduling Algorithm

As LOOK is similar to the SCAN algorithm, in a similar way CLOOK is similar to the CSCAN disk
scheduling algorithm.

In CLOOK, the disk arm, instead of going to the end of the disk, goes only to the last request to be
serviced in front of the head, and from there jumps to the last request at the other end.
Thus, it also prevents the extra delay which occurred due to unnecessary traversal to the end of
the disk.

Q1. A disk contains 200 tracks (0-199). The request queue contains track numbers 82, 170, 43,
140, 24, 16, 190 respectively. The current position of the R/W head is 50. Assume the R/W head is
moving towards higher values. Calculate the total number of track movements by the R/W head (total
seek time) using the CLOOK disk scheduling algorithm.

Soln. Total Seek Time = 341 units

Q2. A disk contains 200 tracks (0-199). The request queue contains track numbers 55, 58, 39, 18,
90, 160, 150, 38, 184 respectively. The current position of the R/W head is 100. Assume the R/W head
is moving towards higher values. Calculate the total number of track movements by the R/W head (total
seek time) using the CLOOK disk scheduling algorithm.

Soln. Total Seek Time = 322 units

Q3. A disk contains 200 tracks (0-199). The request queue contains track numbers 23, 89, 132, 42,
187 respectively. The current position of the R/W head is 100. Assume the R/W head is moving
towards 0. Calculate the total number of track movements by the R/W head (total seek time) using the
CLOOK disk scheduling algorithm.

Soln. Total Seek Time = 296 units


[Scanned handwritten worked solutions to the above disk scheduling numericals appeared here; the scans are not legible in this copy. Refer to the class notebook for the step-by-step working.]
UNIT 05
I/O Management
I/O DEVICES (I/O HARDWARE)
I/O Devices
One of the important jobs of an operating system is to manage various I/O devices, including
the mouse, keyboard, touch pad, disk drives, display adapters, USB devices, bit-mapped screen,
LED, analog-to-digital converter, on/off switch, network connections, audio I/O, printers, etc.
An I/O system is required to take an application I/O request and send it to the physical device,
then take whatever response comes back from the device and send it to the application.
I/O devices can be divided into two categories:
Block devices: A block device is one with which the driver communicates by sending
entire blocks of data. For example, hard disks, USB cameras, Disk-On-Key, etc.
• Character devices: A character device is one with which the driver communicates
by sending and receiving single characters (bytes, octets). For example, serial ports,
parallel ports, sound cards, etc.
I/O devices can also be divided into three categories —
• Human Readable Devices
o Suitable for communicating with the computer user
o Ex: printers, terminals, video display, keyboard, mouse etc.
Machine Readable Devices
o Suitable for communicating with electronic equipment
o Ex: disk drivers, USB keys, sensors, controllers etc.
Communication Devices
o Suitable for communicating with remote devices.
o Ex: modem, digital line drivers etc.

All these devices differ significantly from one another with regard to:

Data rate
Control complexity
• Transfer unit and direction
Data representation
Error handling



Device Drivers
Device drivers are software modules that can be plugged into an OS to handle a particular
device. The operating system takes help from device drivers to handle all I/O devices.

Device Controllers
The device controller works like an interface between a device and a device driver.
I/O units (keyboard, mouse, printer, etc.) typically consist of a mechanical
component and an electronic component, where the electronic component is called the
device controller.
There is always a device controller and a device driver for each device to communicate
with the operating system.
A device controller may be able to handle multiple devices.
As an interface, its main task is to convert a serial bit stream to a block of bytes and perform
error correction as necessary.
Any device connected to the computer is connected by a plug and socket, and the socket
is connected to a device controller.
The following is a model for connecting the CPU, memory, controllers, and I/O devices,
where the CPU and device controllers all use a common bus for communication.

[Figure: CPU, memory, keyboard controller and disk drive controller connected through a common system bus]

Q. What is the difference between synchronous and asynchronous I/O?

• Synchronous I/O
o In this scheme, CPU execution waits while I/O proceeds.
o Also called blocking I/O.
• Asynchronous I/O
o I/O proceeds concurrently with CPU execution.
o Also called non-blocking I/O.
Communication to I/O Devices
The CPU must have a way to pass information to and from an I/O device. The following
approaches are available for communication between the CPU and a device:

• Special instruction I/O

• Memory-mapped I/O
• Direct memory access (DMA)
• Polling I/O
• Interrupt I/O

1. Special Instruction I/O

• This uses CPU instructions that are specifically made for controlling I/O devices.
• These instructions typically allow data to be sent to an I/O device or read from an I/O
device.

2. Memory-Mapped I/O

When using memory-mapped I/O, the same address space is shared by memory and the
I/O devices.
The device is connected directly to certain main-memory locations, so that the I/O device
can transfer blocks of data to/from memory without going through the CPU.

[Figure: CPU issuing I/O commands to an I/O device through shared memory locations]

While using memory-mapped I/O, the OS allocates a buffer in memory and informs the I/O
device to use that buffer to send data to the CPU.
The I/O device operates asynchronously with the CPU and interrupts the CPU when finished.
The advantage of this method is that every instruction which can access memory can
be used to manipulate an I/O device.
Memory-mapped I/O is used for most high-speed I/O devices like disks and
communication interfaces.
3. Direct Memory Access (DMA)

Slow devices like keyboards will generate an interrupt to the main CPU after each byte
is transferred.
If a fast device such as a disk generated an interrupt for each byte, the operating system
would spend most of its time handling these interrupts.
So a typical computer uses direct memory access (DMA) hardware to reduce this
overhead.
Direct memory access (DMA) means the CPU grants an I/O module the authority to read
from or write to memory without CPU involvement.
The DMA module itself controls the exchange of data between main memory and the I/O
device.
The CPU is involved only at the beginning and end of the transfer and is interrupted only
after the entire block has been transferred.
Direct memory access needs special hardware called a DMA controller (DMAC)
that manages the data transfers and arbitrates access to the system bus.
The controllers are programmed with source and destination pointers (where to
read/write the data), counters to track the number of transferred bytes, and settings,
which include the I/O and memory types, interrupts and states for the CPU cycles.

[Figure: CPU, DMA controller, memory and I/O devices (e.g. USB drive, printer) sharing the data bus]

4. Polling I/O

• A computer must have a way of detecting the arrival of any type of input.
• Polling can be used for this purpose.
• This technique allows the processor to deal with events that can happen at any time and
that are not related to the process it is currently running.
• Polling is the simplest way for an I/O device to communicate with the processor.
• The process of periodically checking the status of the device to see if it is time for the next
I/O operation is called polling.
• The I/O device simply puts the information in a status register, and the processor must
come and get the information.
• Most of the time, devices will not require attention, and when one does it will have to
wait until it is next interrogated by the polling program.
• This is an inefficient method, and much of the processor's time is wasted on unnecessary
polls.
• Compare this method to a teacher continually asking every student in a class, one after
another, if they need help. Obviously the more efficient method would be for a student
to inform the teacher whenever they require assistance.
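The busy-wait nature of polling can be imitated with a toy simulation; the FakeDevice class below is only a stand-in for a hardware status register, not a real driver API:

import time, random

class FakeDevice:
    # Stand-in for a device status register, used only to illustrate polling.
    def ready(self):
        return random.random() < 0.1          # data becomes available eventually

device = FakeDevice()
polls = 0
while not device.ready():                     # busy-wait: the CPU keeps checking the status
    polls += 1
    time.sleep(0.01)                          # every iteration here is a wasted poll
print("data ready after", polls, "polls")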

5. Interrupt I/O

• A computer must have a way of detecting the arrival of any type of input.
• An interrupt can be used for this purpose.
• This technique allows the processor to deal with events that can happen at any time and
that are not related to the process it is currently running.
• An alternative scheme for dealing with I/O is the interrupt-driven method.
• An interrupt is a signal to the microprocessor from a device that requires attention.
• A device controller puts an interrupt signal on the bus when it needs the CPU's attention.
• When the CPU receives an interrupt, it saves its current state and invokes the appropriate
interrupt handler using the interrupt vector (addresses of OS routines to handle various events).
• When the interrupting device has been dealt with, the CPU continues with its original
task as if it had never been interrupted.
Q. Differentiate between DMA, Polling, and Interrupt I/O.

Ans. As explained above (refer video).
I/O SUBSYSTEMS

The combination of I/O devices, device drivers, and the I/O subsystem comprises the
overall I/O system in an embedded environment.
The I/O subsystem is the most complex (messiest) part of the OS.

Goal/Purpose of I/O Subsystems

• To hide device-specific information from the kernel as well as from the application
developer
• To provide a uniform access method to the peripheral I/O devices of the system

I/O Subsystem Layered Model

Application Software
Generic I/O Subsystem
Device Drivers
Interrupt Handlers
I/O Device Hardware (device-specific details)

I/O Subsystem Architecture

[Figure: typical I/O architecture with the CPU, memory and SCSI controller on the system bus, and an expansion bus connecting the IDE controller, keyboard interface, and serial/parallel ports]

To cope with various impedance mismatches between devices (speed, transfer size),
the OS may buffer data in memory.
Various buffering strategies:
1. Single buffering: the OS assigns a system buffer to the user request.
2. Double buffering: the process consumes from one buffer while the system fills the
next.
3. Circular buffering: most useful for burst I/O.
Without a buffer, the OS accesses the device directly whenever it needs to.

[Figure (a): No buffering - the I/O device transfers data directly to the user process]

1. Single Buffering

Operating system assigns a buffer in main memory for an I/O request.


[Figure (b): Single buffering - the I/O device fills the system buffer, which is then moved to the user process]

2. Double Buffering

The operating system uses two buffers instead of one.

A process can transfer data to or from one buffer while the operating system empties
or fills the other buffer.
Also known as buffer swapping.

[Figure (c): Double buffering - the I/O device fills one buffer while the other is moved to the user process]


3. Circular Buffering

Two or more buffers are used by the operating system.

• Each individual buffer is a unit in a circular buffer.
• It is used when the I/O operation must keep up with the process.

[Figure (d): Circular buffering - the I/O device fills buffers in rotation while the user process consumes them]
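A toy illustration of double buffering, with exactly two buffers rotating between a filler thread (playing the OS) and a consumer (playing the user process); all names and data are invented for the sketch:

import threading, queue

empty, full = queue.Queue(), queue.Queue()
for _ in range(2):                            # double buffering: exactly two buffers in rotation
    empty.put(bytearray(4))

def device_fill():                            # the "OS" side: fill a buffer from the device
    for chunk in (b"abcd", b"efgh", b"ijkl"):
        buf = empty.get()                     # take an empty buffer
        buf[:] = chunk
        full.put(buf)                         # hand the filled buffer to the process
    full.put(None)                            # end-of-input marker

threading.Thread(target=device_fill).start()
while (buf := full.get()) is not None:        # the "process" consumes one buffer while the other fills
    print(bytes(buf))
    empty.put(buf)                            # return the buffer so it can be refilled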

Unit-5
(I/O Management and Disk Scheduling)

RAID
(Redundant Arrays of
Independent/Inexpensive Disks)
RAID is a technique that makes use of a combination of multiple disks instead of
using a single disk for increased performance, data redundancy, or both.

Why Data Redundancy?


Data redundancy, although taking up extra space, adds to disk reliability. This
means, in case of disk failure, if the same data is also backed up onto another
disk, we can retrieve the data and go on with the operation. On the other hand, if
the data is spread across just multiple disks without the RAID technique, the loss
of a single disk can affect the entire data.

Key Evaluation Points for a RAID System:

 Reliability: How many disk faults can the system tolerate?


 Availability: What fraction of the total session time is a system in uptime
mode, i.e. how available is the system for actual use?
 Performance: How good is the response time? How high is the throughput
(rate of processing work)? Note that performance contains a lot of parameters
and not just the two.
 Capacity: Given a set of N disks each with B blocks, how much useful
capacity is available to the user?

RAID is very transparent to the underlying system. This means, to the host
system, it appears as a single big disk presenting itself as a linear array of blocks.
This allows older technologies to be replaced by RAID without making too many
changes to the existing code.
Different RAID Levels
1. RAID-0 (Striping)
2. RAID-1 (Mirroring)
3. RAID-2 (Bit-Level Striping with Dedicated Parity) - obsolete, no need to
discuss.
4. RAID-3 (Byte-Level Striping with Dedicated Parity)
5. RAID-4 (Block-Level Striping with Dedicated Parity)
6. RAID-5 (Block-Level Striping with Distributed Parity)
7. RAID-6 (Block-Level Striping with Two Parity Blocks)

1. RAID-0 (Striping)

 Blocks are "striped" across disks.


 In the figure below, blocks 0, 1, 2 and 3 form a stripe.

 Instead of placing just one block on a disk at a time, we can work with two
(or more) blocks placed on a disk before moving on to the next one.

Evaluation
 Reliability: 0
There is no duplication of data. Hence, a block once lost cannot be recovered.
 Capacity: N*B
The entire space is used to store data. Since there is no duplication, N
disks each having B blocks are fully utilized.
Advantages
1. It is easy to implement.
2. It utilizes the storage capacity in a better way.
Disadvantages
1. A single drive loss can result in the complete failure of the system.
2. Not a good choice for a critical system.
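The round-robin placement of blocks in RAID-0 can be sketched as a simple modulus calculation; the array size N is an assumption for the example:

N = 4                                         # assumed number of disks in the array

def raid0_location(block):
    # Round-robin striping: consecutive blocks land on consecutive disks.
    disk   = block % N                        # which disk holds the block
    offset = block // N                       # which stripe (row) on that disk
    return disk, offset

for b in range(8):
    d, off = raid0_location(b)
    print(f"block {b} -> disk {d}, offset {off}")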

2. RAID-1 (Mirroring)
 More than one copy of each block is stored in a separate disk. Thus, every
block has two (or more) copies, lying on different disks.
 The below figure shows a RAID-1 system with mirroring level 2.

 RAID-0 was unable to tolerate any disk failure. RAID-1, however, can tolerate disk
failures and therefore provides reliability.

Evaluation
Assume a RAID system with mirroring level 2.
 Reliability: 1 to N/2
1 disk failure can be handled for certain because blocks of that disk would have
duplicates on some other disk. If we are lucky enough and disks 0 and 2 fail,
then again this can be handled as the blocks of these disks have duplicates on
disks 1 and 3. So, in the best case, N/2 disk failures can be handled.
 Capacity: N*B/2
Only half the space is being used to store data. The other half is just a mirror
of the already stored data.
Advantages
1. It covers complete redundancy.
2. It can increase data security and speed.
Disadvantages
1. It is highly expensive.
2. Storage capacity is less.

4. RAID-3 (Byte-Level Striping with Dedicated Parity)

 It consists of byte-level striping with a dedicated parity disk.


 At this level, we store the parity information for each stripe and write it to a
dedicated parity drive.
 Whenever a drive fails, the parity drive is accessed and used to reconstruct
the lost data.

 In the figure below, Disk 3 contains the parity bytes for Disk 0, Disk 1 and
Disk 2. If data loss occurs, the lost data can be reconstructed using Disk 3.
Advantages
1. Data can be transferred in bulk.
2. Data can be accessed in parallel.
Disadvantages
1. It requires an additional drive for parity.
2. In the case of small-size files, it performs slowly.

5. RAID-4 (Block-Level Striping with Dedicated Parity)

 Instead of duplicating data, this adopts a parity-based approach.


 In the figure below, we can observe one column (disk) dedicated to parity.

 Parity is calculated using a simple XOR function. If the data bits are 0,0,0,1
the parity bit is XOR(0,0,0,1) = 1. If the data bits are 0,1,1,0 the parity bit is
XOR(0,1,1,0) = 0. A simple approach is that an even number of ones results in
parity 0, and an odd number of ones results in parity 1.

 Assume that in the above figure, C3 is lost due to some disk failure. Then, we
can recompute the data bit stored in C3 by looking at the values of all the other
columns and the parity bit. This allows us to recover lost data.
Evaluation
 Reliability: 1
RAID-4 allows recovery of at most 1 disk failure (because of the way parity
works). If more than one disk fails, there is no way to recover the data.
 Capacity: (N-1)*B
One disk in the system is reserved for storing the parity. Hence, (N-1) disks
are made available for data storage, each disk having B blocks.
Advantages
1. It helps in reconstructing the data if at most one disk is lost.
Disadvantages
1. It cannot help in reconstructing the data when more than one disk is lost.
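The XOR parity and recovery idea can be checked with a tiny sketch (4-bit toy "blocks", purely illustrative):

from functools import reduce

data = [0b1011, 0b0110, 0b1100]               # blocks on disks 0..2 (toy values)
parity = reduce(lambda a, b: a ^ b, data)     # the dedicated parity disk stores the XOR

# Suppose disk 1 fails: XOR of the surviving blocks with the parity rebuilds the lost block.
lost = 1
survivors = [d for i, d in enumerate(data) if i != lost]
rebuilt = reduce(lambda a, b: a ^ b, survivors) ^ parity
assert rebuilt == data[lost]
print(bin(rebuilt))                           # 0b110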

6. RAID-5 (Block-Level Striping with Distributed Parity)

 This is a slight modification of the RAID-4 system where the only difference
is that the parity rotates among the drives.

 In the figure below, we can notice how the parity bit “rotates”.
 This was introduced to make the random write performance better.
Evaluation
 Reliability: 1
RAID-5 allows recovery of at most 1 disk failure (because of the way parity
works). If more than one disk fails, there is no way to recover the data. This is
identical to RAID-4.
 Capacity: (N-1)*B
Overall, space equivalent to one disk is utilized in storing the parity. Hence,
(N-1) disks are made available for data storage, each disk having B blocks.
Advantages
1. Data can be reconstructed using parity bits.
2. It makes the performance better.
Disadvantages
1. Its technology is complex and extra space is required.
2. If two disks get damaged at the same time, data will be lost forever.
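Which disk holds the parity block for a given stripe depends on the layout convention used by the controller; the sketch below assumes one common left-symmetric style rotation and is only illustrative:

N = 4                                         # assumed number of disks in the array

def parity_disk(stripe):
    # Rotate the parity block across the disks, one stripe at a time.
    return (N - 1) - (stripe % N)

for s in range(6):
    print(f"stripe {s} -> parity on disk {parity_disk(s)}")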

7. RAID-6 (Block-Level Striping with Two Parity Blocks)

 Raid-6 helps when there is more than one disk failure. A pair of independent
parities are generated and stored on multiple disks at this level. Ideally, you
need four disk drives for this level.
 There are also hybrid RAIDs, which make use of more than one RAID level
nested one after the other, to fulfill specific requirements.
Advantages
1. Very high data Accessibility.
2. Fast read data transactions.
Disadvantages
1. Due to double parity, it has slow write data transactions.
2. Extra space is required.

Advantages of RAID
1. Increased data reliability: RAID provides redundancy, which means that if
one disk fails, the data can be recovered from the remaining disks in the array.
This makes RAID a reliable storage solution for critical data.
2. Improved performance: RAID can improve performance by spreading data
across multiple disks. This allows multiple read/write operations to co-occur,
which can speed up data access.
3. Scalability: RAID can be scaled by adding more disks to the array. This means
that storage capacity can be increased without having to replace the entire
storage system.
4. Cost-effective: Some RAID configurations, such as RAID 0, can be
implemented with low-cost hardware. This makes RAID a cost-effective
solution for small businesses or home users.

Disadvantages of RAID
1. Cost: Some RAID configurations, such as RAID 5 or RAID 6, can be
expensive to implement. This is because they require additional hardware or
software to provide redundancy.
2. Performance limitations: Some RAID configurations, such as RAID 1 or
RAID 5, can have performance limitations. For example, RAID 1 can only
read data as fast as a single drive, while RAID 5 can have slower write speeds
due to the parity calculations required.
3. Complexity: RAID can be complex to set up and maintain. This is especially
true for more advanced configurations, such as RAID 5 or RAID 6.
4. Increased risk of data loss: While RAID provides redundancy, it is not a
substitute for proper backups. If multiple drives fail simultaneously, data loss
can still occur.
