
National Education Policy – 2020 [NEP-2020]

QUESTION BANK OF Operating System Concepts


IV SEMESTER BCA
UNIT-II

2 Marks Questions
1. Define logical and physical address.
An address generated by the CPU is commonly referred to as a logical
address, whereas an address seen by the memory unit is commonly referred
to as a physical address.

2. What is Address binding?


 User programs typically refer to memory addresses with symbolic names.
These symbolic names must be mapped, or bound, to physical memory
addresses.
 Address binding of instructions to memory addresses can happen at three
different stages:
1. Compile time
2. Load time
3. Execution time

3. What is first-fit, best-fit, and worst-fit memory allocation?


 First Fit: Allocate the first hole that is big enough. Searching can start either
at the beginning of the set of holes or at the location where the previous first-
fit search ended.
 Best Fit: Allocate the smallest hole that is big enough. We must search the
entire list, unless the list is ordered by size. This strategy produces the
smallest leftover hole.
 Worst Fit: Allocate the largest hole. Again, we must search the entire list,
unless it is sorted by size. This strategy produces the largest leftover hole.

4. Define swapping and swap space.

Mr. K Praveen Kumar, SMC SHIRVA Page 1


Swapping is the process of moving a process from memory to a backing store
and moving another process from the backing store into memory.
Like virtual memory, swap space is a secondary memory. It’s used by the OS
when there is no physical memory available for further execution of the
processes.
If the Operating System faces a situation when it needs memory, but the
RAM is full, it moves the inactive pages from RAM to swap memory. If
these pages are needed in the future, the OS can move them again from swap
space to RAM.

5. What is fragmentation? List their types.


Fragmentation is an unwanted problem in which memory blocks cannot be
allocated to processes because they are too small, so the blocks remain unused.
When processes are loaded into and removed from memory, they leave behind
free spaces, or holes, and these small blocks cannot be allocated to new
incoming processes, resulting in inefficient use of memory. There are two
types of fragmentation: internal fragmentation and external fragmentation.

6. What is paging and Page Fault?


A page fault is an event, much like an error, that occurs when a
program tries to access a page that is not present in physical
memory (main memory). The fault forces the operating system to locate the
page in secondary storage (such as a hard disk) and bring it into
main memory.
 Paging is a memory management scheme that permits the physical-address
space of a process to be noncontiguous. One way of implementing this
solution is through the use of a paging scheme. Physical memory is broken
into fixed-sized blocks called frames. Logical memory is also broken into
blocks of the same size called pages. When a process is to be executed, its
pages are loaded into any available memory frames from the backing store.

7. What is segmentation?



Segmentation is a memory-management scheme that supports the user view
of memory. A logical-address space is a collection of segments. Each segment
has a name and a length. An address specifies both the segment name
and the offset within the segment. The user therefore specifies each address by
two quantities: a segment name and an offset.

8. What is demand paging and pure demand paging?


A demand-paging system is similar to a paging system with swapping.
Processes reside on secondary memory (which is usually a disk). When we
want to execute a process, we swap it into memory. Rather than swapping the
entire process into memory, however, we use a lazy swapper. A lazy swapper
never swaps a page into memory unless that page will be needed.
Pure demand paging: In some cases, no pages are loaded into memory
initially; pages are loaded only when they are demanded by the process
through page faults. This is referred to as pure demand paging.

9. Define thrashing and virtual memory.


Thrashing: If a process does not have "enough" pages, the page-fault rate is
very high. This leads to:
 Low CPU utilization.
 The operating system thinking that it needs to increase the degree of
multiprogramming.
 Another process being added to the system.
Thrashing occurs when a process is busy swapping pages in and out rather
than doing useful work.
 Virtual memory is the separation of user logical memory from physical
memory. This separation allows an extremely large virtual memory to be
provided for programmers when only a smaller physical memory is
available.
 Virtual memory makes the task of programming much easier, because
the programmer no longer needs to worry about the amount of physical
memory available, or about what code can be placed in overlays, but can
concentrate instead on the problem to be programmed.
10. Differentiate between logical and physical address space.



The set of all logical addresses generated by a program is referred to as a
logical address space; the set of all physical addresses corresponding to these
logical addresses is referred to as a physical address space. Thus, in the
execution-time address-binding scheme, the logical and physical address
spaces differ.

11. Differentiate between the paging and segmentation.

 Memory Size: In paging, a process address space is broken into fixed-sized
blocks called pages. In segmentation, a process address space is broken into
varying-sized blocks called segments.
 Accountability: The operating system divides the memory into pages. The
compiler is responsible for calculating the segment size, the virtual address
and the actual address.
 Size: Page size is determined by the hardware. Segment size is determined
by the user.
 Speed: The paging technique is faster in terms of memory access.
Segmentation is slower than paging.
 Fragmentation: Paging can cause internal fragmentation, as some pages
may go underutilized. Segmentation can cause external fragmentation, as
some memory blocks may not be used at all.
 Logical Address: In paging, a logical address is divided into a page number
and a page offset. In segmentation, a logical address is divided into a
segment number and a segment offset.
 Data Storage: The page table stores the page data. The segment table stores
the segment data.

12. What do you mean by belady’s anomaly?


Belady's anomaly is the phenomenon in which the page-fault rate may
increase as the number of allocated frames increases. FIFO page
replacement can exhibit this anomaly.
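As an illustration, here is a minimal Python sketch (not part of the question
bank) that counts FIFO page faults. On the classic reference string
1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, going from 3 frames to 4 frames actually
increases the fault count:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement for a reference string."""
    frames = deque()               # oldest page at the left
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()   # evict the page loaded earliest
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults -- more frames, yet more faults
```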

13. What are the limitations of swapping?


Limitations:
1. Context-switch time is fairly high.
2. If we want to swap a process, we must be sure that it is completely idle.
There are two solutions:
i) Never swap a process with pending I/O.
ii) Execute I/O operations only into OS buffers.

14. Differentiate swapping and demand paging.


Swapping: Swapping is a memory management technique and is used to
temporarily remove the inactive programs from the main memory of the
computer system.
 Any process must be in the memory for its execution, but can be swapped
temporarily out of memory to a backing store and then again brought back
into the memory to complete its execution.
 Swapping is done so that other processes get memory for their execution.
 Due to the swapping technique performance usually gets affected, but it
also helps in running multiple and big processes in parallel. The
swapping process is also known as a technique for memory
compaction. Basically, low priority processes may be swapped out so that
processes with a higher priority may be loaded and executed.
Demand paging: A demand-paging system is similar to a paging system with
swapping, in that processes are mostly stored in secondary memory
(usually on a hard disk).



 As a result, demand paging is a procedure that addresses the problem above
by bringing pages in only on demand.
 This is also called lazy swapping: a page is never swapped into memory
unless it is needed.
 A pager is a kind of swapper that deals with the individual pages of a
process.
 Demand paging is a method in which a page is brought into main memory
only when the CPU requests it. At first, just those pages directly required
by the process are loaded. Pages that are never accessed are thus never
loaded into physical memory.

15. Define Working-Set Model


 The working-set model is based on the assumption of locality.
 This model uses a parameter, Δ, to define the working-set window.
 The idea is to examine the most recent Δ page references.
 The set of pages in the most recent Δ page references is the working set.
 If a page is in active use, it will be in the working set.
 If it is no longer being used, it will drop from the working set Δ time units
after its last reference.
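The working-set window can be sketched in a few lines of Python; the
reference string and Δ = 4 below are illustrative values, not from the source:

```python
def working_set(refs, t, delta):
    """Pages referenced in the window of the last `delta` references
    ending at time t (0-indexed)."""
    start = max(0, t - delta + 1)
    return set(refs[start:t + 1])

refs = [1, 2, 1, 3, 4, 4, 4, 3, 3, 2]
print(working_set(refs, 4, 4))   # refs[1..4] -> {1, 2, 3, 4}
print(working_set(refs, 8, 4))   # refs[5..8] -> {3, 4}
```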

16. What is a relocation register? Why is it used?

The base register is now called a
relocation register. The value in the
relocation register is added to every
address generated by a user process at the



time the address is sent to memory (see Figure 9.5). For example, if the base
is at 14000, then an attempt by the user to address location 0 is dynamically
relocated to location 14000; an access to location 346 is mapped to location
14346.
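The dynamic relocation described above can be sketched as follows. This is a
minimal model, and the limit-register check is an added assumption for
completeness (the base value 14000 is from the example; 30000 is illustrative):

```python
RELOCATION = 14000   # value loaded into the relocation register
LIMIT = 30000        # assumed limit register (illustrative value)

def translate(logical):
    """Relocate a logical address the way the MMU would."""
    if not 0 <= logical < LIMIT:      # assumed protection check
        raise MemoryError("addressing error: trap to the OS")
    return RELOCATION + logical       # add the relocation-register value

print(translate(0))    # 14000
print(translate(346))  # 14346
```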

17. What is Disk Scheduling?


Disk scheduling is an important process in operating systems that determines
the order in which disk access requests are serviced. The objective of disk
scheduling is to minimize the time it takes to access data on the disk and to
complete a disk access request.

18. What do you mean by Low-level formatting?


The process of dividing the disk into sectors and filling the disk with a
special data structure is called low-level formatting. Sector is the smallest
unit of area that is read /written by the disk controller. The data structure for
a sector typically consists of a header, a data area (usually 512 bytes in size)
and a trailer. The header and trailer contain information used by the disk
controller, such as a sector number and an error-correcting code (ECC).

19. What is memory-management unit?


The run-time mapping from virtual to physical addresses is done by a
Hardware device called the memory-management unit (MMU).

20. Write a note on Dynamic Loading and Dynamic Linking


Dynamic Loading: Dynamic loading is a method used to carry out memory
space utilization efficiently.
 Since the size of a process is limited by the size of physical memory, there
are situations wherein the data required for executing a process requires
more space than is available in the physical memory.
 To overcome this space issue, dynamic loading is used.
 A routine is not loaded until it is called.
This works as follows:
1. Initially, all routines are kept on disk in a relocatable-load format.
2. First, the main program is loaded into memory and executed.
3. When the main program calls a routine, it first checks to see whether the
routine has been loaded.
4. If the routine has not yet been loaded, the loader is called to load the
desired routine into memory.
5. Finally, control is passed to the newly loaded routine.
Dynamic Linking
 The concept of dynamic linking is similar to the concept of dynamic
loading. The difference is that instead of postponing the loading of
routines, it postpones linking of libraries until they are called.
 This feature eliminates the requirement of including language library
associated with the program in the executable image.
 With use of dynamic linking, unnecessary wastage of memory and disk
can be avoided. This can be done by using a small code called stub in
every library routine.
 It is responsible for pointing out the location of memory resident library
associated with the called routine
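The "load a routine only when it is first called" idea can be imitated in
Python with a lazy module stub; `LazyModule` is an illustrative name, not a
standard API:

```python
import importlib

class LazyModule:
    """Stub that defers loading the real module until first use."""
    def __init__(self, name):
        self._name = name
        self._module = None          # nothing loaded yet

    def __getattr__(self, attr):
        if self._module is None:     # first call: invoke the loader
            self._module = importlib.import_module(self._name)
        return getattr(self._module, attr)

lazy_math = LazyModule("math")       # routine still "on disk" at this point
print(lazy_math.sqrt(16.0))          # first access triggers the load -> 4.0
```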

21. Write a note on Shared Libraries


Shared Libraries: While fixing bugs in libraries, there can be two types of
modifications, major and minor. Major modifications, such as changes in the
program addresses, typically change (increment) the version number of the
library, whereas minor bug fixes do not change it.
 When dynamic linking is used, the latest installed version is simply
referenced, whereas in the absence of dynamic linking, programs need to be
relinked. Multiple versions of a library may exist, since some programs may
use older versions (those installed before the library was updated). This
system, in which multiple versions of a library can be shared among
programs, is known as shared libraries.

22. What do you mean by zero-fill-on-demand?


 When a page fault occurs, the operating system must bring the desired page
from secondary storage into main memory.
 Most operating systems maintain a free-frame list -- a pool of free frames for
satisfying such requests. (Figure 10.6). (Free frames must also be allocated
when the stack or heap segments from a process expand.)
 Operating systems typically allocate free frames using a technique known as
zero-fill-on-demand -- the content of each frame is zeroed out before it is
allocated.
 When a system starts up, all available memory is placed on the free-frame list.

23. What is virtual address space?


The virtual address space is the set of logical addresses used by a process.
Physical address space is the set of effective memory address of an
instruction or data byte where they are actually located. The Memory
Management Unit (MMU) maps the virtual addresses to their respective
physical addresses.

24. What is a sparse address space?


Consider Figure 10.2, which shows the layout of a process's virtual address
space in memory.
Figure 10.2: Virtual address space of a process in memory.
In Figure 10.2, the organization allows the heap to grow upward and the
stack to grow downward. The large empty space (hole) between the stack
and the heap is part of the virtual address space; an address space that
contains holes like this is known as a sparse address space.
The advantage of using virtual memory is sharing of
pages and leads to following benefits
 Several processes can share system libraries by mapping them into
virtual address space. These libraries are stored as pages in physical
memory.
 Inter process communication can be done by sharing virtual memory
among several processes.
 Process creation can be sped up by sharing pages with the fork( ) system call.

25. Define Copy-on-write.


Copy-on-write (COW) is a technique that allows a child process to share the
same address space as its parent. If either the child or the parent process
writes to (modifies) a page, a copy of the page is made. COW allows more
efficient process creation, as only modified pages are copied.

26. What are the major problems to implement Demand Paging?


Two major problems to implement Demand paging:
1. Develop a frame-allocation algorithm. If we have multiple process in
memory, we must decide how many frames to allocate to each process
2. Develop a page-replacement algorithm. We must select the frames that
are to be replaced.
Designing appropriate algorithms to solve these problems is an important
task, because disk I/O is so expensive.

27. What are the steps required to handle a page fault in demand paging?
Steps required to handle a page fault:
1. The memory address requested is first checked, to make sure it was a
valid memory request.
2. If the reference is to an invalid page, the process is terminated.
Otherwise, if the page is not present in memory, it must be paged in.
3. A free frame is located, possibly from a free-frame list.
4. A disk operation is scheduled to bring in the necessary page from disk.
5. After the page is loaded into memory, the process's page table is updated
with the new frame number, and the invalid bit is changed to indicate that
this is now a valid page reference.
6. The instruction that caused the page fault must now be restarted from the
beginning.
Fig: Steps in handling a page fault
28. What is the basic approach of Page Replacement?
If no frame is available, find one that is not currently being used and free it.
A frame can be freed by writing its contents to swap space, and changing the
page table to indicate that the page is no longer in memory. Now the freed
frame can be used to hold the page for which the process faulted.

29. What is a Reference String?


An algorithm is evaluated by running it on a particular string of memory
references and computing the number of page faults. The string of memory
reference is called a reference string.

30. Define equal allocation and proportional allocation


An allocation algorithm that assigns equal amounts of a resource to all
requestors. For instance, if there are 93 frames and 5 processes, each process
will get 18 frames. The 3 leftover frames can be used as a free-frame buffer
pool. This scheme is called equal allocation. In virtual memory, it assigns an
equal number of frames to each process.
Proportional allocation is an allocation algorithm that assigns a resource in
proportion to some aspect of the requestor. In virtual memory, it assigns
page frames in proportion to the size of each process.
With proportional allocation, we would split 62 frames between two
processes, one of 10 pages and one of 127 pages, by allocating 4 frames and
57 frames, respectively, since 10/137 × 62 ≈ 4 and 127/137 × 62 ≈ 57. In this
way, both processes share the available frames according to their "needs,"
rather than equally.
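The proportional split above can be checked with a short Python sketch
(using integer floor division, as in the 10/137 and 127/137 example):

```python
def proportional_allocation(sizes, total_frames):
    """Allocate frames in proportion to each process's size (in pages)."""
    s = sum(sizes)
    return [size * total_frames // s for size in sizes]

print(proportional_allocation([10, 127], 62))   # [4, 57]
```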

31. What is locality model?


A model for page replacement based on the working-set strategy is called
locality model. The locality model states that, as a process executes, it moves
from locality to locality. A locality is a set of pages that are actively used
together. A running program is generally composed of several different
localities, which may overlap. For example, when a function is called, it
defines a new locality.
Three or More Marks questions.
1. Explain first-fit, best-fit, and worst-fit Memory Allocation with example.
First Fit Algorithm: In the first fit algorithm, the memory manager scans along
the linked list until it finds a hole that is large enough to store the program.
Searching starts at the beginning of the set of holes and stops as soon
as a free hole that is large enough is found. This algorithm is the fastest
because it searches as little as possible.



Figure (1): Example of Memory Configuration Before and After Allocation
of 11 KB and 9 KB Block (Based on First Fit Algorithm)
Let us consider a linked list containing the following holes in the order
specified, 10, 20, 15, 9, 13, 8, and 11
Assume that there are two programs waiting to enter into memory. The
program sizes are 11 and 9 respectively. The memory manager allocates hole
20 to program of size 11 and hole 10 for program of size 9. Although, there
are holes of sizes 11 and 9 in the linked list, the memory manager will not
scan the entire linked list. It starts searching from the beginning of the linked
list until it finds a hole whose size is greater than or equal to program size.
Once, such a hole is found, the rest of the linked list is not scanned.
(b) Best Fit Algorithm
In the best fit algorithm, the memory manager searches the entire linked list
and takes the smallest hole that is adequate to store the program.
For the program of size 11, it scans the entire linked list and allocates the
hole of size 11. For the program of size 9, it searches the entire linked list
and allocates the hole of size 9.

Figure (2): Example of Memory Configuration Before and After Allocation
of 11 KB and 9 KB Blocks (Based on Best Fit Algorithm)
The best fit algorithm is slow, because every time the algorithm is called it
scans the entire linked list. However, it results in minimal internal
fragmentation because it allocates the best-fitting hole for the program. In
contrast, the first fit algorithm may result in more internal fragmentation
because it does not scan the entire linked list.
(c) Worst Fit Algorithm
In the worst fit algorithm, the memory manager scans the entire linked list and
allocates the largest hole to the program. For the program of size 11, it
allocates the hole of size 20, which is the maximum, and for the program of
size 9, it allocates the hole of size 15, which is the next largest. Thus, the
entire linked list is scanned twice. After allocation is performed, the
remaining part of the hole can be used to allocate another program.

When best fit searches a list of holes sorted from smallest to largest, as soon
as it finds a hole that fits, it knows that the hole is the smallest one that will
do the job. No further searching is needed, as it is with the single-list scheme.
With a hole list sorted by size, first fit and best fit are equally fast, and next
fit is pointless.
The first fit and best fit allocation algorithms are better than worst fit as per
the criteria of storage utilization and allocation time. First fit and best fit are
similar in terms of storage utilization; however, first fit performs allocations
faster than the other algorithms.
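The three strategies can be sketched in Python on the hole list from the
example (10, 20, 15, 9, 13, 8, 11); each function returns the index of the
chosen hole, a simplification for illustration:

```python
def first_fit(holes, size):
    """Return the index of the first hole big enough, or None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Return the index of the smallest hole big enough, or None."""
    fits = [i for i, h in enumerate(holes) if h >= size]
    return min(fits, key=lambda i: holes[i]) if fits else None

def worst_fit(holes, size):
    """Return the index of the largest hole big enough, or None."""
    fits = [i for i, h in enumerate(holes) if h >= size]
    return max(fits, key=lambda i: holes[i]) if fits else None

holes = [10, 20, 15, 9, 13, 8, 11]
print(first_fit(holes, 11))   # 1 (hole of size 20, first that fits)
print(best_fit(holes, 11))    # 6 (hole of size 11, exact fit)
print(worst_fit(holes, 11))   # 1 (hole of size 20, the largest)
```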

2. Explain paging memory management with an example.


 Paging is a memory management scheme that permits the physical-address
space of a process to be noncontiguous.
 One way of implementing this solution is through the use of a paging scheme.
 Paging avoids the considerable problem of fitting the varying-sized memory
chunks onto the backing store.
 When some code fragments or data residing in main memory need to be
swapped out, space must be found on the backing store.

 Physical memory is broken into fixed-sized blocks called frames.


 Logical memory is also broken into blocks of the same size called pages.
 When a process is to be executed, its pages are loaded into any available
memory frames from the backing store.
 The backing store is divided into fixed-sized blocks that are of the same
size as the memory frames.
 When a process arrives in the system to be executed, its size, expressed
in pages, is examined. Each page of the process needs one frame. Thus, if
the process requires n pages, there must be at least n frames available in
memory.
 If there are n frames available, they are allocated to this arriving process.
 The first page of the process is loaded into one of the allocated frames, and
the frame number is put in the page table for this process.
 The next page is loaded into another frame, and its frame number is put
into the page table, and so on.

3. Explain Paging Hardware.


The basic method for implementing paging involves breaking physical
memory into fixed-sized blocks called frames and breaking logical memory
into blocks of the same size called pages.
When a process is to be executed, its pages are loaded into any available
memory frames from the backing store.
The backing store is divided into fixed-sized blocks that are of the same size
as the memory frames.
The hardware support for paging is illustrated in Figure 1.

Figure 1: Paging hardware

The address generated by the CPU is divided into 2 parts (Figure 2):


1. The page number (p) is used as an index into the page table. The page
table contains the base address of each page in physical memory.
2. The offset (d) is combined with the base address to define the physical
address. This physical address is sent to the memory unit.
The page table maps the page number to a frame number, to yield a physical
address, which also has two parts: the frame number and the offset within
that frame.
The number of bits in the frame number determines how many frames the
system can address, and the number of bits in the offset determines the size
of each frame.
The paging model of memory is shown in Figure 2.



Figure 2: Paging model of logical and physical memory.
The page size (like the frame size) is defined by the hardware.
The size of a page is typically a power of 2, varying between 512 bytes and
16 MB per page, depending on the computer architecture.
The selection of a power of 2 as a page size makes the translation of a
logical address into a page number and page offset easy.
If the size of the logical address space is 2^m and the page size is 2^n
addressing units (bytes or words), then the high-order m − n bits of a logical
address designate the page number, and the n low-order bits designate the
page offset.
Thus, the logical address is as follows:

  | page number p : m − n bits | page offset d : n bits |
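Because the page size is a power of 2, the split is just a shift and a mask.
A minimal Python sketch (n = 12, i.e. 4 KB pages, is an illustrative choice):

```python
def split_address(addr, n):
    """Split a logical address into (page number, offset) for page size 2^n."""
    page = addr >> n                  # high-order m - n bits
    offset = addr & ((1 << n) - 1)    # low-order n bits
    return page, offset

# 4 KB pages (n = 12): address 0x12345 -> page 0x12, offset 0x345
print(split_address(0x12345, 12))    # (18, 837)
```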
When a process requests memory (e.g. when its code is loaded in from disk),
free frames are allocated from a free-frame list, and inserted into that
process's page table.
Processes are blocked from accessing anyone else's memory because all of
their memory requests are mapped through their page table. There is no way
for them to generate an address that maps into any other process's memory
space.
The operating system must keep track of each individual process's page
table, updating it whenever the process's pages get moved in and out of
memory, and applying the correct page table when processing system calls
for a particular process. All of this increases the overhead involved when
swapping processes in and out of the CPU.

Figure: Free frames (a) before allocation and (b) after allocation.

4. Write short notes on disk management


 The process of dividing the disk into sectors and filling the disk with a
special data structure is called low-level formatting. Sector is the smallest
unit of area that is read /written by the disk controller. The data structure for
a sector typically consists of a header, a data area (usually 512 bytes in size)
and a trailer. The header and trailer contain information used by the disk
controller, such as a sector number and an error-correcting code (ECC).
 When the controller writes a sector of data during normal I/O, the ECC is
updated with a value calculated from all the bytes in the data area. When a
sector is read, the ECC is recalculated and is compared with the stored
value. If the stored and calculated numbers are different, this mismatch
indicates that the data area of the sector has become corrupted and that the
disk sector may be bad.
 Most hard disks are low-level- formatted at the factory as a part of the
manufacturing process. This formatting enables the manufacturer to test the
disk and to initialize the mapping from logical block numbers to defect-free
sectors on the disk.



 When the disk controller is instructed to low-level-format the disk, it can
also be told the size of the data block for all sectors, i.e., how many bytes of
data space to leave between the header and trailer of all sectors. Data-block
sizes such as 256, 512, and 1,024 bytes are common. Formatting a disk with
a larger sector size means that fewer sectors can fit on each track; but it also
means that fewer headers and trailers are written on each track and more
space is available for user data. The operating system needs to record its own
data structures on the disk. It does so in two steps, i.e., partitioning and
logical formatting.
1. Partition– is to partition the disk into one or more groups of cylinders. The
operating system can treat each partition as though it were a separate disk.
For instance, one partition can hold a copy of the operating system's
executable code, while another holds user files.
2. Logical formatting (or creation of a file system) - Now, the operating
system stores the initial file-system data structures onto the disk. These data
structures may include maps of free and allocated space (a FAT or inodes)
and an initial empty directory.
To increase efficiency, most file systems group blocks together into larger
chunks, frequently called clusters.

5. Explain any two Structure of the Page Table


a. Hierarchical Paging
b. Hashed Page Tables
c. Inverted Page Tables
Hierarchical Paging: Break up the logical address space into multiple page
tables.
• A simple technique is a two-level page table.
• We then page the page table.
Two-Level Paging Example
• A logical address (on a 32-bit machine with 1 KB page size) is divided into:
– a page number consisting of 22 bits
– a page offset consisting of 10 bits
• Since the page table is paged, the page number is further divided into:
– a 12-bit page number
– a 10-bit page offset



• Thus, a logical address is as follows:

  | p1 : 12 bits | p2 : 10 bits | d : 10 bits |

where p1 is an index into the outer page table, and p2 is the displacement
within the page of the inner page table.
• This is known as a forward-mapped page table.
Address-Translation Scheme
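Following the two-level example above (12-bit p1, 10-bit p2, 10-bit offset on
a 32-bit address), the decomposition can be sketched in Python; the sample
address is illustrative:

```python
def split_two_level(addr):
    """32-bit logical address, 1 KB pages: 12-bit p1, 10-bit p2, 10-bit d."""
    d  = addr & 0x3FF             # low 10 bits: page offset
    p2 = (addr >> 10) & 0x3FF     # next 10 bits: index into an inner table
    p1 = addr >> 20               # high 12 bits: index into the outer table
    return p1, p2, d

print(split_two_level(0x00ABCDEF))   # (10, 755, 495)
```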

64-bit Logical Address Space

Even a two-level paging scheme is not sufficient:
■ If the page size is 4 KB (2^12)
● then the page table has 2^52 entries.
● With a two-level scheme, the inner page tables could each hold 2^10
4-byte entries.
● The address would look like:

  | outer page p1 : 42 bits | inner page p2 : 10 bits | offset d : 12 bits |

● The outer page table has 2^42 entries, or 2^44 bytes.
● One solution is to add a 2nd outer page table.
● But in this example the 2nd outer page table is still 2^34 bytes in size,
and possibly 4 memory accesses are needed to reach one physical memory
location.

Hashed Page Tables


• Common in address spaces > 32 bits
• The virtual page number is hashed into a page table
– This page table contains a chain of elements hashing to the same location
• Each element contains
– (1) the virtual page number
– (2) the value of the mapped page frame
– (3) a pointer to the next element
• Virtual page numbers are compared in this chain searching for a match
– If a match is found, the corresponding physical frame is extracted
• Variation for 64-bit addresses is clustered page tables
Similar to hashed but each entry refers to several pages (such as 16) rather than 1
– Especially useful for sparse address spaces (where memory references are non-
contiguous and scattered)
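A hashed page table with chaining can be sketched as follows; the class name and the table size are illustrative assumptions, not from the text:

```python
# Illustrative sketch: a hashed page table as a list of chains. Each chain
# element holds (virtual page number, frame number); elements hashing to
# the same bucket are chained together and searched for a match.
TABLE_SIZE = 8  # assumed small size for illustration

class HashedPageTable:
    def __init__(self):
        self.buckets = [[] for _ in range(TABLE_SIZE)]

    def map(self, vpn, frame):
        self.buckets[vpn % TABLE_SIZE].append((vpn, frame))

    def lookup(self, vpn):
        # Walk the chain, comparing virtual page numbers for a match.
        for entry_vpn, frame in self.buckets[vpn % TABLE_SIZE]:
            if entry_vpn == vpn:
                return frame  # matching frame extracted
        return None           # no mapping: page fault

pt = HashedPageTable()
pt.map(3, 42)
pt.map(11, 7)         # 11 % 8 == 3, so it chains in the same bucket as vpn 3
print(pt.lookup(11))  # 7
print(pt.lookup(4))   # None
```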



Inverted Page Table
• Rather than each process having a page table and keeping track of all possible
logical pages, track all physical pages
• One entry for each real page of memory
• Entry consists of
– The virtual address of the page stored in that real memory location,
– Information about the process that owns that page
• Decreases memory needed to store each page table
– But increases time needed to search the table when a page reference occurs
• Use hash table to limit the search to one/few page-table entries
– TLB can accelerate access
• But how to implement shared memory?
– One mapping of a virtual address to the shared physical address

6. Write a note on memory compaction.


One of the solutions to external fragmentation is memory compaction, where the kernel shuffles the memory contents in order to place all free memory partitions together to form a single large block. Compaction is performed periodically, and it requires dynamic relocation of programs. Compaction is possible only if relocation is dynamic and done at execution time.



Another solution is to permit the logical-address space of the processes to be non-contiguous. This allows a process to be allocated physical memory wherever such memory is available. Two techniques achieve this solution: 1) Paging and 2) Segmentation.

7. Explain external fragmentation.


External Fragmentation: When dynamic partitioning is used for the allocation of processes, some memory space may be left over after each allocation. External fragmentation occurs when free memory is divided into small, non-contiguous blocks, making it difficult to allocate memory to processes even if the total free memory is sufficient. This happens because the available memory is scattered across the system in fragments, and no single block is large enough to meet the requirements of a process.

Figure: Allocating (or) freeing up the Disk Space


In the above example, a 200 KB process is allocated to a 250 KB partition, a 250 KB process is allocated to a 300 KB partition, and a 150 KB process is allocated to a 200 KB partition. These three processes waste 50 KB each, which is called internal fragmentation.
Now, these 50 KB partitions cannot be used by a process larger than 50 KB, irrespective of the overall free space of 150 KB. This problem is referred to as external fragmentation.
If there is a partition of size ‘m’ bytes and there are no programs in the
queue of size ≤ m bytes, then the entire partition is left empty until a job of
size ≤m bytes arrives into the queue. Thus, external fragmentation is equal to
‘m’ bytes i.e., the entire partition.



For example, if a 100 KB partition becomes free and there are no programs in
the queue of size ≤ 100 KB, then the entire 100 KB partition is left empty.
This is called external fragmentation. One solution to the problem of
external fragmentation is compaction.

8. Explain Contiguous Memory Allocation


The main memory of computer is divided into two major sections, one
contains the Operating System (OS) and other is left for user processes. The
OS is usually placed in starting locations or low
memory area and an Interrupt Vector Table (IVT)
is stored before OS. Memory allocation means to
bring the waiting processes from the ready queue
to user space in main memory. When each
process is allocated a single contiguous memory
section then it is called as contiguous memory
allocation. There are two types of memory
partitioning. They are:
1. Fixed-sized partitioning
2. Variable-sized partitioning
1. Fixed-sized Partitioning
• The memory is divided into fixed-sized partitions.
• Each partition may contain exactly one process.
• The degree of multiprogramming is bound by the number of partitions.
• When a partition is free, a process is selected from the input queue and loaded into the free partition.
• When the process terminates, the partition becomes available for another process.
2. Variable-sized Partitioning
• The OS keeps a table indicating which parts of memory are available and which parts are occupied.
• A hole is a block of available memory. Normally, memory contains a set of holes of various sizes.
• Initially, all memory is available for user processes and is considered one large hole.
• When a process arrives, the process is allocated memory from a large hole.
• The OS allocates only as much memory as is needed and keeps the remaining memory available to satisfy future requests.
Memory Mapping and Protection
• Memory protection means protecting the OS from user processes and protecting user processes from one another.
• Memory protection is done using:
o Relocation-register: contains the value of the smallest physical address.
o Limit-register: contains the range of logical addresses.
• Each logical address must be less than the limit register.
• The MMU maps the logical address dynamically by adding the value in the relocation register. This mapped address is sent to memory.
• When the CPU scheduler selects a process for execution, the dispatcher loads the relocation and limit registers with the correct values.
• Because every address generated by the CPU is checked against these registers, we can protect the OS from the running process.
• The relocation-register scheme provides an effective way to allow the OS size to change dynamically.
• Transient OS code: code that comes and goes as needed, to save memory space and the overhead of unnecessary swapping.

9. Explain how the OS implements dynamic partition allocation.


Dynamic Partitioning: In dynamic partitioning, partitions are created dynamically depending on the size of the process being loaded. The partition size is exactly equal to the size of the process being loaded. For example, initially the memory will be empty; then a process of 200 KB is loaded into memory and is allocated 200 KB. Then some other process of 500 KB is loaded, which is also allocated 500 KB. Likewise, two more processes of size 300 KB and 100 KB are loaded. Figure (2) shows the allocation sequence.

10. With a neat diagram explain dynamic relocation using relocation register.
 The run-time mapping from virtual to physical addresses is done by a
hardware device called the memory-management unit (MMU).
 This mapping is illustrated with a simple MMU scheme that is a generalization of the base-register scheme.
 The base register is now called a relocation register. The value in the relocation register is added to every address generated by a user process at the time the address is sent to memory (see Figure 9.5).



 For example, if the base is at 14000, then an attempt by the user to address
location 0 is dynamically relocated to location 14000; an access to location
346 is mapped to location 14346.
 The user program never accesses the real physical addresses. The program
can create a pointer to location 346, store it in memory, manipulate it, and
compare it with other addresses—all as the number 346.
 Only when it is used as a memory address (in an indirect load or store,
perhaps) is it relocated relative to the base register. The user program
deals with logical addresses. The memory mapping hardware converts
logical addresses into physical addresses
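The relocation-register mapping above can be sketched as follows; the base value 14000 comes from the example in the text, while the limit value is an assumed size for the process's logical address space:

```python
# Minimal sketch of the relocation/limit scheme: every logical address is
# checked against the limit, then the relocation value is added by the MMU.
RELOCATION_REGISTER = 14000  # base from the text's example
LIMIT_REGISTER = 12000       # assumed process size (not from the text)

def translate(logical):
    if logical >= LIMIT_REGISTER:
        raise MemoryError("trap: logical address beyond limit")
    return logical + RELOCATION_REGISTER  # MMU adds the relocation value

print(translate(0))    # 14000
print(translate(346))  # 14346
```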
11. Write short notes on fragmentation.
Fragmentation: Fragmentation is defined as wastage of memory space. It is a problem that occurs in a dynamic memory allocation system when some blocks are too small to satisfy a request. The problem of memory fragmentation when allocating (or freeing up) disk space is very similar to the problems associated with variable partitioning of primary memory in multiprogramming systems. Continuous allocation and cleanup of memory fragments can create a huge number of fragments in memory, which can result in various performance problems.
(i) Internal Fragmentation
Internal fragmentation means wastage of memory space when a partition is allocated to a process whose size is less than the partition. For example, in multiprogramming with a fixed number of partitions, if a partition of 400 KB becomes free, the operating system scans through the queue and selects the largest job that fits, i.e., the 385 KB job, and loads it into the 400 KB partition. In this case 15 KB of memory is wasted. This is called internal fragmentation.
In general, if there is a partition of ‘m’ bytes and there is a program of size
‘n’ bytes where m > n and a program is loaded into the partition, then
internal fragmentation is equal to (m – n) bytes.
(ii) External Fragmentation: refer to answer 7 above



12. Explain swapping with diagram.
 A process needs to be in memory to be executed. A process, however, can be
swapped temporarily out of memory to a backing store, and then brought back
into memory for continued execution.
 A variant of this swapping policy is used for priority-based scheduling
algorithms. If a higher-priority process arrives and wants service, the memory
manager can swap out the lower-priority process so that it can load and
execute the higher-priority process. When the higher-priority process finishes,
the lower-priority process can be swapped back in and continued. This variant
of swapping is sometimes called roll out, roll in.
 Normally a process that is swapped out will be swapped back into the same
memory space that it occupied previously. This restriction is dictated by the
method of address binding. If binding is done at assembly or load time, then
process cannot be moved to different locations. If execution-time binding is
being used, then it is possible to swap a process into a different memory
space, because the physical addresses are computed during execution time.

 Swapping requires a backing store. The backing store is commonly a fast disk.
It must be large enough to accommodate copies of all memory images for all
users, and must provide direct access to these memory images. The system
maintains a ready queue consisting of all processes whose memory images are
on the backing store or in memory and are ready to run. Whenever the CPU
scheduler decides to execute a process, it calls the dispatcher. The dispatcher
checks to see whether the next process in the queue is in memory. If the
process is not, and there is no free memory region, the dispatcher swaps out a



process currently in memory and swaps in the desired process. It then reloads
registers as normal and transfers control to the selected process
13. Explain segmentation with a neat diagram.
Segmentation is a memory-management scheme that supports this user view
of memory. A logical-address space is a collection of segments. Each segment
has a name and a length. The addresses specify both the segment name
and the offset within the segment. The user therefore specifies each address by
two quantities: a segment name and an offset.
Consider an example: we have five segments numbered from 0 through 4. The
segments are stored in physical memory as shown. The segment table has a
separate entry for each segment, giving the beginning address of the segment
in physical memory (or base) and the length of that segment (or limit).

Hardware:
The user can refer to objects in the program by a two-dimensional address; the actual physical memory is still, of course, a one-dimensional sequence of bytes. Thus we must define an implementation to map two-dimensional user-defined addresses into one-dimensional physical addresses. This mapping is effected by a segment table. Each entry of the segment table has a segment base and a segment limit. The segment base contains the starting physical address where the segment resides in memory, whereas the segment limit specifies the length of the segment.

14. Explain Segmentation hardware.


Hardware support for Segmentation
• The segment table maps two-dimensional user-defined addresses into one-dimensional physical addresses.
• In the segment table, each entry has the following 2 fields:
1. Segment-base contains the starting physical address where the segment resides in memory.
2. Segment-limit specifies the length of the segment (Figure 2).
• A logical address consists of 2 parts:
1. Segment-number (s) is used as an index into the segment table.
2. Offset (d) must be between 0 and the segment-limit.
• If the offset is not less than the segment-limit, then we trap to the OS (logical addressing attempt beyond end of segment).
• Otherwise, the offset is added to the segment-base to produce the physical memory address.

15. Write short notes on demand paging.



 A demand-paging system is similar to a paging system with swapping. Processes reside on secondary memory (which is usually a disk). When we want to execute a process, we swap it into memory. Rather than swapping the entire process into memory, however, we use a lazy swapper.
 A lazy swapper never swaps a page into memory unless that page will be
needed.
 A swapper manipulates entire processes, whereas a pager is concerned
with the individual pages of a process. We shall thus use the term pager,
rather than swapper, in connection with demand paging.
 Bring a page into memory only when it is needed
 Less I/O needed
 Less memory needed
 Faster response
 More users
 Page is needed ⇒ reference to it
 Invalid reference ⇒ abort
 Not-in-memory ⇒ bring to memory
Basic concept: Instead of swapping the whole process the pager swaps only the
necessary pages in to memory. Thus it avoids reading unused pages and decreases
the swap time and amount of physical memory needed.
 The valid–invalid bit scheme can be used to distinguish between the pages that are on the disk and those that are in memory.
 With each page-table entry a valid–invalid bit is associated (v ⇒ in-memory, i ⇒ not-in-memory)
 Initially valid–invalid bit is set to i on all entries
 During address translation, if the valid–invalid bit in a page-table entry is i ⇒ page fault.
 If the bit is valid then the page is both legal and is in memory.
 If the bit is invalid then either page is not valid or is valid but is currently on
the disk.
 Marking a page as invalid will have no effect if the process never accesses that page. If the process accesses a page that is marked invalid, it causes a page-fault trap; this trap is the result of the OS's failure to bring the desired page into memory beforehand.

16. Explain FIFO page replacement policy.


 The simplest page-replacement algorithm is a FIFO algorithm.
 A FIFO replacement algorithm associates with each page the time when that
page was brought into memory.
 When a page must be replaced, the oldest page is chosen.
 Notice that it is not strictly necessary to record the time when a page is
brought in.
 We can create a FIFO queue to hold all pages in memory.
 We replace the page at the head of the queue.
 When a page is brought into memory, we insert it at the tail of the queue.
 For our example reference string, our three frames are initially empty.
 The first three references (7,0,1) cause page faults, and are brought into these
empty frames.
 The next reference (2) replaces page 7, because page 7 was brought in first.
 Since 0 is the next reference and 0 is already in memory, we have no fault for
this reference.
 The first reference to 3 results in page 0 being replaced, since it was the first
of the three pages in memory (0, 1, and 2) to be brought in.
 This replacement means that the next reference, to 0, will fault.
 Page 1 is then replaced by page 0.
 This process continues as shown in Figure 9.8.
 Every time a fault occurs, we show which pages are in our three frames. There
are 15 faults altogether.
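The trace above can be checked with a small simulation; this sketch (not from the text) counts FIFO faults for the same reference string with three frames:

```python
from collections import deque

# Sketch: simulate FIFO replacement. The queue records arrival order, so
# the page at the head is always the oldest and becomes the victim.
def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.remove(queue.popleft())  # evict the oldest page
            frames.add(page)
            queue.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))  # 15
```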
o To illustrate the problems that are possible with a FIFO page-replacement algorithm, consider the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: with four frames this string causes more faults (ten) than with three frames (nine). This unexpected result is known as Belady's anomaly.
17. Explain LRU page replacement algorithm with an example.


LRU replacement associates with each page the time of that page's last use.
 When a page must be replaced, LRU chooses that page that has not been
used for the longest period of time.
 This strategy is the optimal page-replacement algorithm looking
backward in time, rather than forward.

 The LRU algorithm produces 12 faults.


 Notice that the first five faults are the same as the optimal replacement.
When the reference to page 4 occurs, however, LRU replacement sees
that, of the three frames in memory, page 2 was used least recently.
 The most recently used page is page 0, and just before that, page 3 was used.
 Thus, the LRU algorithm replaces page 2, not knowing that page 2 is about to be used.
 When it then faults for page 2, the LRU algorithm replaces page 3 since, of the three pages in memory (0, 3, 4), page 3 is the least recently used.
 Despite these problems, LRU replacement with 12 faults is still much better than FIFO replacement with 15.
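A minimal LRU simulation, sketched here (not from the text) as a cross-check of the 12-fault figure; timestamps stand in for the "time of last use":

```python
# Sketch: LRU replacement -- evict the page whose last use is furthest in
# the past, tracked with a timestamp per page.
def lru_faults(refs, nframes):
    frames, last_use, faults = set(), {}, 0
    for t, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                victim = min(frames, key=lambda p: last_use[p])
                frames.remove(victim)
            frames.add(page)
        last_use[page] = t  # record the most recent use
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))  # 12
```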

18. Consider the following page reference string:


7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
How many page faults would occur for the following replacement
algorithm assuming three frames?
i) LRU algorithm: refer to answer 17 above
ii) Optimal replacement algorithm.
Optimal replacement algorithm

 Replace page that will not be used for longest period of time.
 The optimal replacement algorithm produces nine page faults; the first three references cause faults that fill the three empty frames.
 The reference to page 2 replaces page 7, because 7 will not be used until reference 18, whereas page 0 will be used at 5, and page 1 at 14.
 The reference to page 3 replaces page 1, as page 1 will be the last of the three
pages in memory to be referenced again.
 With only nine page faults, optimal replacement is much better than a FIFO
algorithm, which had 15 faults.
 In fact, no replacement algorithm can process this reference string in three
frames with less than nine faults.
 Unfortunately, the optimal page-replacement algorithm is difficult to
implement, because it requires future knowledge of the reference string.
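A sketch of the optimal policy as a simulation — possible here only because the whole reference string is known in advance, which is exactly why the algorithm is impractical in a real OS:

```python
# Sketch: optimal (Belady) replacement -- evict the page whose next use
# is furthest in the future (or that is never used again).
def optimal_faults(refs, nframes):
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                def next_use(p):
                    future = refs[i + 1:]
                    return future.index(p) if p in future else float("inf")
                frames.remove(max(frames, key=next_use))
            frames.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(optimal_faults(refs, 3))  # 9
```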

19. Explain optimal page replacement algorithm.



Refer to answer 18 above

20. Explain the need of page replacement in a page fault service routine.
Page replacement takes the following approach. If no frame is free, we find
one that is not currently being used and free it. We can free a frame by
writing its contents to swap space and changing the page table (and all other
tables) to indicate that the page is no longer in memory (Figure 10.10). We
can now use the freed frame to hold the page for which the process faulted.
We modify the page-fault service routine to include page replacement:

1. Find the location of the desired page on secondary storage.


2. Find a free frame:
a. If there is a free frame, use it.
b. If there is no free frame, use a page-replacement algorithm to select
a victim frame.
c. Write the victim frame to secondary storage (if necessary); change
the page and frame tables accordingly.
3. Read the desired page into the newly freed frame; change the page and



frame tables.
4. Continue the process from where the page fault occurred.

21. Explain Second-Chance page replacement algorithm.


 The basic algorithm of second-chance replacement is a FIFO
replacement algorithm. When a page has been selected, if the value of the
reference bit is 0, we proceed to replace this page; but if the reference bit is set to
1, we give the page a second chance and move on to select the next FIFO page.
 When a page gets a second chance, its reference bit is cleared, and its
arrival time is reset to the current time.
 Thus, a page that is given a second chance will not be replaced until all
other pages have been replaced (or given second chances).
 In addition, if a page is used often enough to keep its reference bit set, it
will never be replaced.
 One way to implement the second-chance algorithm (sometimes referred
to as the clock algorithm) is as a circular queue.
 A pointer (that is, a hand on the clock) indicates which page is to be
replaced next.
 When a frame is needed, the pointer advances until it finds a page with a
0 reference bit. As it advances, it clears the reference bits (Figure 10.17).
Once a victim page is found, the page is replaced, and the new page is
inserted in the circular queue in that position.
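The clock variant can be sketched as follows; the reference string in the example is invented for illustration, not from the text:

```python
# Sketch: clock implementation of second chance. Frames sit in a circular
# buffer; the hand clears reference bits until it finds a page whose bit
# is 0, which becomes the victim.
def second_chance(refs, nframes):
    frames = [None] * nframes
    ref_bit = [0] * nframes
    hand, faults = 0, 0
    for page in refs:
        if page in frames:
            ref_bit[frames.index(page)] = 1  # give the page a second chance
            continue
        faults += 1
        while ref_bit[hand] == 1:        # pass over pages with the bit set,
            ref_bit[hand] = 0            # clearing bits as the hand advances
            hand = (hand + 1) % nframes
        frames[hand] = page              # replace the victim
        ref_bit[hand] = 1
        hand = (hand + 1) % nframes
    return faults, frames

print(second_chance([1, 2, 3, 4, 1, 5], 3))  # (6, [4, 1, 5])
```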



22. Explain Enhanced Second-Chance page replacement algorithm
We can enhance the second-chance algorithm by considering the reference
bit and the modify bit as an ordered pair. With these two bits, we have the
following four possible classes
1. (0, 0) neither recently used nor modified—best page to replace
2. (0, 1) not recently used but modified—not quite as good, because the page
will need to be written out before replacement
3. (1, 0) recently used but clean—probably will be used again soon
4. (1, 1) recently used and modified—probably will be used again soon, and
the page will be need to be written out to secondary storage before it can be
replaced
Each page is in one of these four classes.

23. Write a note on Allocation of Frame.


 Consider a simple case of a system with 128 frames. The operating
system may take 35, leaving 93 frames for the user process.
 Under pure demand paging, all 93 frames would initially be put on the
free-frame list. When a user process started execution, it would generate a
sequence of page faults.
 The first 93 page faults would all get free frames from the free-frame list.
When the free-frame list was exhausted, a page-replacement algorithm
would be used to select one of the 93 in-memory pages to be replaced
with the 94th, and so on.
 When the process terminated, the 93 frames would once again be placed
on the free-frame list.
 We can require that the operating system allocate all its buffer and table
space from the free-frame list. When this space is not in use by the
operating system, it can be used to support user paging.
 We can try to keep three free frames reserved on the free-frame list at all
times. Thus, when a page fault occurs, there is a free frame available to
page into.
 While the page swap is taking place, a replacement can be selected, which
is then written to the storage device as the user process continues to
execute.

24. Write a note on Thrashing.


Thrashing refers to a situation wherein the operating system wastes most of its time accessing secondary storage, looking for referenced pages that are unavailable in memory. This situation can arise both in a demand-paging system and in a circular job stream. In this situation, the OS swaps in the referenced page from secondary storage to primary memory while swapping out certain pages from memory. In other words, thrashing refers to a situation where the processor spends most of its time swapping pages instead of executing instructions.
Reason for Thrashing
 One of the causes of thrashing is decreased CPU utilization: when utilization drops, the OS increases the degree of multiprogramming by admitting a new process, which requires extra page frames.
 The newly joined process takes frames from the active processes. This results in the occurrence of page faults in these active processes as well.
 As a result, CPU utilization decreases further, and the system gets stuck in this cycle.
This situation is expressed in the form of a graph as follows
As it can be observed from the graph that
CPU utilization increases with increase in the
degree of multiprogramming but at a certain
point after reaching maximum CPU
utilization, it decreases sharply. That
particular point is referred to as thrashing.

25. Write a note on Working-Set Model


 The working-set model is based on the assumption of locality.
 This model uses a parameter, Δ, to define the working-set window.
 The idea is to examine the most recent Δ page references.
 The set of pages in the most recent Δ page references is the working set.
 If a page is in active use, it will be in the working set.
 If it is no longer being used, it will drop from the working set Δ time units
after its last reference
 For example, given the sequence of memory references shown in Figure 10.22, if Δ = 10 memory references, then the working set at time t1 is {1, 2, 5, 6, 7}.
 By time t2, the working set has changed to {3, 4}.
 The accuracy of the working set depends on the selection of Δ.
 If Δ is too small, it will not encompass the entire locality;


 If Δ is too large, it may overlap several localities.
 In the extreme, if Δ is infinite, the working set is the set of pages touched
during the process execution.
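The working-set computation can be sketched as below. The reference string is invented (Figure 10.22 is not reproduced here) but is constructed so that Δ = 10 yields the same working sets as the text's example:

```python
# Sketch: the working set at time t is simply the set of distinct pages in
# the window of the most recent delta references.
def working_set(refs, t, delta):
    window = refs[max(0, t - delta + 1): t + 1]  # most recent delta refs
    return set(window)

# Invented reference string, shaped to reproduce the text's two sets.
refs = [1, 2, 1, 5, 6, 7, 7, 7, 5, 1, 3, 4, 3, 4, 3, 4, 4, 3, 4, 4]
print(working_set(refs, 9, 10))   # working set at t1: {1, 2, 5, 6, 7}
print(working_set(refs, 19, 10))  # working set at t2: {3, 4}
```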



26. Given the following reference string 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
with 3 frames of memory. Write the steps of FIFO algorithm which
shows the occurrence of page fault.
Refer to answer 16 above
27. Consider the following page reference string: 7, 2, 3, 1, 2, 5, 3, 4, 6, 7, 7,
1, 0, 5, 4, 6, 2, 3, 0 , 1. Assuming demand paging with three frames, how
many page faults would occur for the following replacement
algorithms?
a) LRU replacement — 18 page faults
b) FIFO replacement — 17 page faults
c) Optimal replacement — 13 page faults
FIFO replacement
7 7 7 1 1 1 6 6 6 0 0 0 6 6 6 0 0
2 2 2 5 5 5 7 7 7 5 5 5 2 2 2 1
3 3 3 4 4 4 1 1 1 4 4 4 3 3 3
Total 17 page fault in FIFO
LRU replacement
7 7 7 1 1 3 3 3 7 7 7 5 5 5 2 2 2 1
2 2 2 2 2 4 4 4 1 1 1 4 4 4 3 3 3
3 3 5 5 5 6 6 6 0 0 0 6 6 6 0 0
Total 18 page fault in LRU
7, 2, 3, 1, 2, 5, 3, 4, 6, 7, 7, 1, 0, 5, 4, 6, 2, 3, 0 , 1
Optimal replacement
7 7 7 1 1 1 1 1 1 1 1 1 1
2 2 2 5 5 5 5 5 4 6 2 3
3 3 3 4 6 7 0 0 0 0 0
Total 13 page fault in Optimal
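As a cross-check (a sketch, not part of the original answer), the three fault counts above can be recomputed for this reference string:

```python
from collections import deque

# Sketch: recompute the fault counts claimed above for the string
# 7,2,3,1,2,5,3,4,6,7,7,1,0,5,4,6,2,3,0,1 with three frames.
refs = [7, 2, 3, 1, 2, 5, 3, 4, 6, 7, 7, 1, 0, 5, 4, 6, 2, 3, 0, 1]

def fifo(refs, n):
    frames, q, f = set(), deque(), 0
    for p in refs:
        if p not in frames:
            f += 1
            if len(frames) == n:
                frames.remove(q.popleft())   # oldest page is the victim
            frames.add(p)
            q.append(p)
    return f

def lru(refs, n):
    frames, last, f = set(), {}, 0
    for t, p in enumerate(refs):
        if p not in frames:
            f += 1
            if len(frames) == n:
                frames.remove(min(frames, key=lambda x: last[x]))
            frames.add(p)
        last[p] = t                          # most recent use
    return f

def optimal(refs, n):
    frames, f = set(), 0
    for i, p in enumerate(refs):
        if p not in frames:
            f += 1
            if len(frames) == n:
                future = refs[i + 1:]
                nxt = lambda x: future.index(x) if x in future else 1 << 30
                frames.remove(max(frames, key=nxt))  # furthest next use
            frames.add(p)
    return f

print(fifo(refs, 3), lru(refs, 3), optimal(refs, 3))  # 17 18 13
```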



28. Explain First Come First Serve (FCFS) disk scheduling with example.
FCFS scheduling algorithm:
This is the simplest form of disk scheduling algorithm. It services the requests in the order they are received. This algorithm is fair but does not provide the fastest service. It takes no special care to minimize the overall seek time.
Eg:- consider a disk queue with request for i/o to blocks on cylinders. 98, 183,
37, 122, 14, 124, 65, 67

If the disk head is initially at 53, it will first move from 53 to 98 then to 183
and then to 37,122,14,124,65,67 for a total head movement of 640 cylinders.
The wild swing from 122 to 14 and then back to 124 illustrates the problem
with this schedule.
Total Number of cylinders moved by the head = (98-53) + (183-98) + (183-
37) + (122-37) + (122-14)+(124-14)+(124-65)+(67-65)
= 45+85+146+85+108+110+59+2 = 640 Cylinders
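The FCFS total reduces to a sum of absolute differences between successive head positions; a sketch (not from the text):

```python
# Sketch: total head movement under FCFS is the sum of absolute seek
# distances between consecutive positions, starting at the initial head.
def fcfs_movement(start, requests):
    total, pos = 0, start
    for cyl in requests:
        total += abs(cyl - pos)  # seek distance for this request
        pos = cyl
    return total

print(fcfs_movement(53, [98, 183, 37, 122, 14, 124, 65, 67]))  # 640
```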

29. Explain Shortest Seek Time First (SSTF) disk scheduling with example.
SSTF (Shortest Seek Time First) algorithm: This selects the request with
minimum seek time from the current head position. SSTF chooses the pending
request closest to the current head position.
Eg:- consider a disk queue with request for i/o to blocks on cylinders. 98, 183,
37, 122, 14, 124, 65, 67



If the disk head is initially at 53, the closest request is at cylinder 65, then 67; then 37 is closer to 67 than 98 is, so it services 37. Continuing, we service 14, 98, 122, 124 and finally 183. The total head movement is only 236 cylinders. SSTF is a substantial improvement over FCFS, but it is not optimal.
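A sketch of SSTF as a greedy nearest-request loop, reproducing the 236-cylinder total:

```python
# Sketch: SSTF repeatedly picks the pending request closest to the
# current head position and services it.
def sstf_movement(start, requests):
    pending, pos, total = list(requests), start, 0
    while pending:
        nearest = min(pending, key=lambda c: abs(c - pos))
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return total

print(sstf_movement(53, [98, 183, 37, 122, 14, 124, 65, 67]))  # 236
```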

30. Explain SCAN disk scheduling with example.


SCAN algorithm: In this the disk arm starts moving towards one end, servicing
the request as it reaches each cylinder until it gets to the other end of the disk.
At the other end, the direction of the head movement is reversed and servicing
continues. The initial direction is chosen depending upon the direction of the
head. Eg:- consider a disk queue with request for i/o to blocks on cylinders. 98,
183, 37, 122, 14,124,65,67

If the disk head is initially at 53 and the head is moving towards the outer track, it services 65, 67, 98, 122, 124 and 183. At cylinder 199 the arm reverses and moves towards the other end of the disk, servicing 37 and then 14. SCAN is also called the elevator algorithm.

31. Explain CSCAN disk scheduling with example.


C-SCAN (Circular scan) algorithm: C-SCAN is a variant of SCAN designed to
provide a more uniform wait time. Like SCAN, C-SCAN moves the head from
end of the disk to the other servicing the request along the way. When the head
reaches the other end, it immediately returns to the beginning of the disk,
without servicing any request on the return.
Eg:- consider a disk queue with request for i/o to blocks on cylinders. 98, 183,
37, 122, 14,124,65,67.

If the disk head is initially at 53 and the head is moving towards the outer track, it services 65, 67, 98, 122, 124 and 183. At cylinder 199 the arm immediately returns to the beginning of the disk, then changes direction and serves 14 and then 37.
Note: If the disk head is initially at 53 and the head is moving towards track 0, it services 37 and 14 first. At cylinder 0 the arm reverses and moves immediately towards the other end of the disk, servicing 65, 67, 98, 122, 124 and 183.

32. Explain LOOK disk scheduling with example.


LOOK disk scheduling: It is identical to the SCAN disc scheduling
algorithm, except that instead of traveling to the end of the track, the disc arm
only goes to the last request to be handled in front of the head and then
reverses course from there.
As a result, the extra time caused by unneeded overhead to the disk end is
avoided
Algorithm
1. Arrange all the I/O requests in ascending order



2. It is provided the initial direction in which the head will move, and it will
serve in that direction
3. The head responds to each request individually in the direction it is
moving
4. The head keeps moving in the same direction until all of the requests in
that direction have been serviced
5. If the last request is reached the direction of the head will be reversed.
While reversing the direction, all the pending requests will be serviced
Consider an example: suppose the requests to be addressed are 82, 140, 170, 190, 70, 35, and 20, the head pointer starts at 50, and it is given that the disk arm should move towards the larger value. Calculate the seek time.

Seek time can be calculated by counting the head movements


=(82-50)+(140-82)+(170-140)+(190-170)+(190-70)+(70-35)+(35-20)
=32+58+30+20+120+35+15= 310
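The LOOK computation above can be sketched as a simulation (a sketch, assuming the head moves towards the larger values first, as stated):

```python
# Sketch: LOOK services pending requests in the direction of travel up to
# the last request, then reverses -- no travel to the disk end.
def look_movement(start, requests):
    up = sorted(c for c in requests if c >= start)
    down = sorted((c for c in requests if c < start), reverse=True)
    total, pos = 0, start
    for cyl in up + down:   # upward sweep, then the reversed sweep
        total += abs(cyl - pos)
        pos = cyl
    return total

print(look_movement(50, [82, 140, 170, 190, 70, 35, 20]))  # 310
```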
33. Explain CLOOK disk scheduling with example.
C-Look Scheduling algorithm: LOOK and C-LOOK scheduling are different versions of SCAN and C-SCAN respectively. Here the arm goes only as far as the final request in each direction. Then it reverses, without going all the way to the end of the disk. The LOOK and C-LOOK scheduling look for a request before continuing to move in a given direction.
Eg: consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 37, 122, 14, 124, 65, 67.
If the disk head is initially at 53 and the head is moving towards the outer track, it services 65, 67, 98, 122, 124 and 183. At the final request 183, the arm reverses and moves immediately towards the lowest request 14, and then serves 37.

34. Consider a disk with 200 tracks and the queue has random requests
from different processes in the order:55, 58, 39, 18, 90, 160, 150, 38, 184
Initially arm is at 100. Find the Average Seek length using FIFO, SSTF,
SCAN and C-SCAN algorithm.
First Come First Serve (FCFS) Disk Scheduling

Total Head Movements
= (100 - 55) + (58 - 55) + (58 - 39) + (39 - 18) + (90 - 18) + (160 - 90) + (160 - 150) + (150 - 38) + (184 - 38)
= 45 + 3 + 19 + 21 + 72 + 70 + 10 + 112 + 146
= 498 Cylinders
Average Seek Length = 498/9 = 55.33 ms
Shortest Seek Time First (SSTF) Disk Scheduling
Total Head Movements
= (100 - 90) + (90 - 58) + (58 - 55) + (55 - 39) + (39 - 38) + (38 - 18) + (150 - 18) + (160 - 150) + (184 - 160)
= 10 + 32 + 3 + 16 + 1 + 20 + 132 + 10 + 24
= 248 Cylinders
Average Seek Length = 248/9 = 27.56 Cylinders
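A minimal SSTF simulation (the name `sstf_seek` is my own): at each step the pending request closest to the current head position is serviced next.

```python
def sstf_seek(start, requests):
    """SSTF: at each step, service the pending request nearest the head."""
    pending, moves, pos = list(requests), 0, start
    while pending:
        nearest = min(pending, key=lambda t: abs(t - pos))  # closest request
        moves += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return moves

print(sstf_seek(100, [55, 58, 39, 18, 90, 160, 150, 38, 184]))  # 248
```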
SCAN (Elevator) Disk Scheduling
Total Head Movements
= (150 - 100) + (160 - 150) + (184 - 160) + (199 - 184) + (199 - 90) + (90 - 58) + (58 - 55) + (55 - 39) + (39 - 38) + (38 - 18)
= 50 + 10 + 24 + 15 + 109 + 32 + 3 + 16 + 1 + 20
= 280 Cylinders
OR
Total Head Movements = (199 - 100) + (199 - 18) = 99 + 181 = 280 Cylinders
Average Seek Length = 280/9 = 31.11 Cylinders
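A SCAN sketch for the upward-moving case (the name `scan_seek` is my own): the head sweeps to the end of the disk, then reverses to service the remaining requests.

```python
def scan_seek(start, requests, last_track):
    """SCAN moving toward larger tracks: sweep up to the end of the
    disk, then reverse and service the remaining requests."""
    up = sorted(t for t in requests if t >= start)
    down = sorted((t for t in requests if t < start), reverse=True)
    # Travel to the disk edge only if there are requests behind the head.
    order = up + ([last_track] if down else []) + down
    moves, pos = 0, start
    for track in order:
        moves += abs(track - pos)
        pos = track
    return moves

print(scan_seek(100, [55, 58, 39, 18, 90, 160, 150, 38, 184], 199))  # 280
```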
C-SCAN (Circular SCAN) Disk Scheduling
Total Head Movements
= (150 - 100) + (160 - 150) + (184 - 160) + (199 - 184) + (199 - 0) + (18 - 0) + (38 - 18) + (39 - 38) + (55 - 39) + (58 - 55) + (90 - 58)
= 50 + 10 + 24 + 15 + 199 + 18 + 20 + 1 + 16 + 3 + 32
= 388 Cylinders
OR
Total Head Movements = (199 - 100) + (199 - 0) + (90 - 0) = 99 + 199 + 90 = 388 Cylinders
Average Seek Length = 388/9 = 43.11 Cylinders
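A C-SCAN sketch (the name `cscan_seek` is my own; like the worked answer above, the wrap-around jump from the last track back to track 0 is counted as head movement):

```python
def cscan_seek(start, requests, last_track):
    """C-SCAN moving toward larger tracks: sweep to the last track,
    jump back to track 0, then continue sweeping upward."""
    high = sorted(t for t in requests if t >= start)
    low = sorted(t for t in requests if t < start)
    moves, pos = 0, start
    for track in high + [last_track]:
        moves += abs(track - pos)
        pos = track
    if low:
        moves += last_track  # wrap-around jump from last_track to track 0
        pos = 0
        for track in low:
            moves += abs(track - pos)
            pos = track
    return moves

print(cscan_seek(100, [55, 58, 39, 18, 90, 160, 150, 38, 184], 199))  # 388
```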
35. Consider a disk with 200 tracks and the queue has random requests
from different processes in the order: 98, 183, 37, 122, 14, 124, 65, 67
Initially arm is at 55. Find the Average Seek length using FIFO, SSTF,
LOOK and C-LOOK algorithm.
FIFO Algorithm
Total number of cylinders moved by the head
= (98 - 55) + (183 - 98) + (183 - 37) + (122 - 37) + (122 - 14) + (124 - 14) + (124 - 65) + (67 - 65)
= 43 + 85 + 146 + 85 + 108 + 110 + 59 + 2
= 638 Cylinders
Average Seek Length = 638/8 = 79.75 Cylinders
Shortest Seek Time First (SSTF) Algorithm
Total Head Movements
= (65 - 55) + (67 - 65) + (67 - 37) + (37 - 14) + (98 - 14) + (122 - 98) + (124 - 122) + (183 - 124)
= 10 + 2 + 30 + 23 + 84 + 24 + 2 + 59
= 234 Cylinders
Average Seek Length = 234/8 = 29.25 Cylinders
LOOK Algorithm
Total Head Movements
= (65 - 55) + (67 - 65) + (98 - 67) + (122 - 98) + (124 - 122) + (183 - 124) + (183 - 37) + (37 - 14)
= 10 + 2 + 31 + 24 + 2 + 59 + 146 + 23
= 297 Cylinders
Average Seek Length = 297/8 = 37.125 Cylinders
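A LOOK sketch for the upward-moving case (the name `look_seek` is my own): like SCAN, but the arm reverses at the last request rather than at the disk edge.

```python
def look_seek(start, requests):
    """LOOK moving toward larger tracks: like SCAN, but reverse at the
    last request instead of travelling to the end of the disk."""
    up = sorted(t for t in requests if t >= start)
    down = sorted((t for t in requests if t < start), reverse=True)
    moves, pos = 0, start
    for track in up + down:
        moves += abs(track - pos)
        pos = track
    return moves

print(look_seek(55, [98, 183, 37, 122, 14, 124, 65, 67]))  # 297
```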
C-LOOK Algorithm
Total Head Movements
= (65 - 55) + (67 - 65) + (98 - 67) + (122 - 98) + (124 - 122) + (183 - 124) + (183 - 14) + (37 - 14)
= 10 + 2 + 31 + 24 + 2 + 59 + 169 + 23
= 320 Cylinders
Average Seek Length = 320/8 = 40 Cylinders
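The C-LOOK total can be checked the same way (the name `clook_seek` is my own; the jump from the largest request back to the smallest is counted as movement, as in the worked answer):

```python
def clook_seek(start, requests):
    """C-LOOK moving toward larger tracks: service up to the largest
    request, then jump to the smallest request and continue upward."""
    up = sorted(t for t in requests if t >= start)
    low = sorted(t for t in requests if t < start)
    moves, pos = 0, start
    for track in up + low:
        moves += abs(track - pos)  # includes the wrap-around jump
        pos = track
    return moves

print(clook_seek(55, [98, 183, 37, 122, 14, 124, 65, 67]))  # 320
```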
36. Consider the following page reference string: 1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2,
3, 7, 6, 3, 2, 1, 2, 3, 6. How many page faults would occur for the
following replacement algorithms, assuming four frames?
Remember that all frames are initially empty, so your first unique pages
will cost one fault each.
a) LRU replacement
b) FIFO replacement
c) Optimal replacement
LRU, 4 frames: 1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6
Frame contents after each page fault (faulting page in the top row):
Fault on: 1  2  3  4  5  6  3  7  6  1
Frame 1 : 1  1  1  1  1  1  1  1  6  6
Frame 2 :    2  2  2  2  2  2  2  2  2
Frame 3 :       3  3  5  5  3  3  3  3
Frame 4 :          4  4  6  6  7  7  1
Total: 10 page faults
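The LRU count can be verified with a short simulation (a sketch; the name `lru_faults` is my own). An `OrderedDict` keeps pages in recency order, least recently used at the front.

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count page faults under LRU replacement."""
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = True
    return faults

refs = [1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6]
print(lru_faults(refs, 4))  # 10
```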
Optimal, 4 frames: 1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6
Fault on: 1  2  3  4  5  6  7  1
Frame 1 : 1  1  1  1  1  1  7  1
Frame 2 :    2  2  2  2  2  2  2
Frame 3 :       3  3  3  3  3  3
Frame 4 :          4  5  6  6  6
Total: 8 page faults
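Optimal (Belady) replacement evicts the resident page whose next use lies farthest in the future. A sketch (the name `optimal_faults` is my own):

```python
def optimal_faults(refs, nframes):
    """Count page faults under optimal replacement: evict the page
    whose next use is farthest away (or that is never used again)."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            rest = refs[i + 1:]
            # Pages never referenced again get a distance beyond any index.
            victim = max(frames, key=lambda p: rest.index(p)
                         if p in rest else len(rest) + 1)
            frames.remove(victim)
        frames.add(page)
    return faults

refs = [1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6]
print(optimal_faults(refs, 4))  # 8
```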
FIFO, 4 frames: 1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6
Fault on: 1  2  3  4  5  6  2  1  3  7  6  2  1  3
Frame 1 : 1  1  1  1  5  5  5  5  3  3  3  3  1  1
Frame 2 :    2  2  2  2  6  6  6  6  7  7  7  7  3
Frame 3 :       3  3  3  3  2  2  2  2  6  6  6  6
Frame 4 :          4  4  4  4  1  1  1  1  2  2  2
Total: 14 page faults
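FIFO replacement evicts the page that has been resident the longest, regardless of use. A sketch (the name `fifo_faults` is my own), using a queue to track residency order:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement."""
    frames, order, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            frames.remove(order.popleft())  # evict the oldest resident page
        frames.add(page)
        order.append(page)
    return faults

refs = [1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6]
print(fifo_faults(refs, 4))  # 14
```

Note that FIFO (14 faults) performs worse here than LRU (10) and Optimal (8) on the same reference string with the same number of frames.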