                                        UNIT-4
                              Memory & Storage Management
Syllabus: Memory-Management Strategies: Introduction, Contiguous memory allocation, Paging, Structure of the Page Table, Swapping. Virtual Memory Management: Introduction, Demand paging, Copy-on-write, Page replacement, Allocation of frames, Thrashing. Storage Management: Overview of Mass Storage Structure, HDD Scheduling.
Introduction
•  The main purpose of a computer system is to execute programs. These programs, together with the data they access, must be at least partially in main memory during execution.
•  To improve both the utilization of the CPU and the speed of the computer's response to its users, general-purpose computers must keep several programs in memory, creating a need for memory management.
•  The operating system is responsible for the following activities in connection with memory management:
   o  Keeping track of which parts of memory are currently being used and who is using them
   o  Deciding which processes (or parts of processes) and data to move into and out of memory
   o  Allocating and de-allocating memory space as needed.
Memory Hardware:
•  Memory consists of a large array of bytes, each with its own address. The CPU fetches instructions from memory according to the value of the program counter.
•  Main memory and the registers built into the processor itself are the only general-purpose storage that the CPU can access directly.
•  Registers that are built into the CPU are generally accessible within one cycle of the CPU clock.
•  A memory access may take many cycles of the CPU clock. In such cases, the processor normally needs to stall, since it does not have the data required to complete the instruction.
•  The remedy is to add fast memory between the CPU and main memory, typically on the CPU chip for fast access, called a CACHE.
Memory Protection:
•  For proper system operation, we must protect the operating system from access by user processes.
•  Each process has a separate memory space. A separate per-process memory space protects the processes from each other.
•  The hardware protection of memory is provided by two registers:
   o  Base register
   o  Limit register
•  The base register holds the smallest legal physical memory address, i.e., the starting address of the process.
•  The limit register specifies the size of the range of the process.
•  If the base register holds 300040 and the limit register is 120900, then the program can legally access all addresses from 300040 through 420939.
•  This scheme prevents a user program from modifying the code or data structures of either the operating system or other users.
•  The address generated by the CPU for a process should lie between the base address of the process and base + limit; otherwise, the hardware sends an interrupt (trap) to the OS, as sketched below.
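The check described above can be expressed in a few lines. A minimal sketch in Python, using the base and limit values from the example (the function name is ours):

def is_legal_access(address, base, limit):
    """Return True if a CPU-generated address falls inside the
    process's partition [base, base + limit)."""
    return base <= address < base + limit

# Values from the example above: base = 300040, limit = 120900.
base, limit = 300040, 120900
print(is_legal_access(300040, base, limit))  # True  (first legal address)
print(is_legal_access(420939, base, limit))  # True  (last legal address)
print(is_legal_access(420940, base, limit))  # False (hardware traps to the OS)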
Address Binding:
(Multistep processing of a User Program (or) Steps a user program goes through before being executed)
•  Address binding is the process of mapping the program's logical (or virtual) addresses to corresponding physical (main memory) addresses.
•  Addresses in the source program are generally symbolic.
•  A compiler typically binds these symbolic addresses to relocatable addresses.
•  The linkage editor or loader in turn binds the relocatable addresses to absolute addresses.
•  Each binding is a mapping from one address space to another.
•  Binding of instructions and data to memory addresses can be done at any of three stages:
   o  Compile time: if the process's memory location is known in advance, absolute code can be generated.
   o  Load time: if the location is not known at compile time, the compiler generates relocatable code, and final binding is delayed until load time.
   o  Execution time: if the process can be moved during its execution from one memory segment to another, binding must be delayed until run time; this requires hardware support (e.g., an MMU).
Note:
•  The user program never sees the real physical addresses.
•  We can use the terms logical address and virtual address interchangeably; the logical address is virtual in the sense that it does not exist physically, hence it is also called a virtual address.
•  The user program generates only logical addresses.
Dynamic Loading:
•  Dynamic loading is the process of loading a routine only when it is called or needed during runtime.
•  Initially, all routines are kept on disk in a relocatable load format.
•  The main program is loaded into memory and is executed. When a routine needs to call another routine, the calling routine first checks to see whether the other routine has been loaded. If it has not, the relocatable linking loader is called to load the desired routine into memory.
•  The advantage of dynamic loading is that a routine is loaded only when it is needed.
•  This method is particularly useful when large amounts of code are needed to handle infrequently occurring cases, such as error routines.
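The same idea can be illustrated with Python's importlib, which loads a module only when it is first requested. This is an analogy to a relocatable linking loader, not an OS implementation; math.sqrt simply stands in for a rarely used routine such as an error handler:

import importlib

def call_routine(module_name, routine_name, *args):
    """Load a module only on first use, mimicking a loader that
    brings a routine into memory on demand."""
    module = importlib.import_module(module_name)  # loaded (and cached) on first call
    return getattr(module, routine_name)(*args)

# The module is not loaded until this line executes.
print(call_routine("math", "sqrt", 16.0))  # 4.0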
Dynamic Linking and Shared Libraries:
•  Dynamically linked libraries are system libraries that are linked to user programs when the programs are in execution.
•  In dynamic linking, the linking of system libraries is postponed until execution time.
•  Static linking combines the system libraries with the user program when the program is built, before execution.
•  Dynamic linking saves both disk space and main-memory space.
•  A library can be replaced by a new version, and all the programs that reference the library will use the new version. Such libraries are called shared libraries, which are possible only with dynamic linking.
Note:
•  Compiling is the process of converting a source program into object files.
•  Linking is the process of combining object files and system libraries into an executable file.
•  Loading is the process of bringing a program from secondary storage into main memory.
Contiguous Memory Allocation
•  In contiguous memory allocation, each process is contained in a single section of memory that is contiguous to the section containing the next process.
•  Contiguous memory-management strategies involve allocating memory to processes in contiguous blocks (adjacent slots in main memory), where each process's memory is located in a single, contiguous region of physical memory.
Memory Allocation:
•  In the variable-partition scheme, the operating system keeps a table indicating which parts of memory are available and which are occupied. A hole is a block of available contiguous memory; holes of various sizes are scattered throughout memory.
•  The most common strategies for selecting a free hole from the set of available holes are:
   o  First fit: allocate the first hole that is big enough.
   o  Best fit: allocate the smallest hole that is big enough.
   o  Worst fit: allocate the largest hole.
External Fragmentation:
•  External fragmentation occurs when the free memory space is fragmented into small, non-contiguous blocks, making it difficult to satisfy memory allocation requests even though the total free memory might be sufficient.
•  As processes are loaded and removed from memory, the free memory space is broken into little pieces. External fragmentation exists when there is enough total memory space to satisfy a request, but the available spaces are not contiguous, so the memory cannot be allocated to the process.
•  Both the first-fit and best-fit strategies for memory allocation suffer from external fragmentation (a code sketch of both strategies follows this list).
•  Statistical analysis of first fit, for instance, reveals that, even with some optimization, given N allocated blocks, another 0.5 N blocks will be lost to fragmentation. That is, one-third of memory may be unusable! This property is known as the 50-percent rule.
•  Compaction: One solution to the problem of external fragmentation is compaction. The goal is to shuffle the memory contents so as to place all free memory together in one large block.
•  Another possible solution to the external-fragmentation problem is to permit the logical address space of the processes to be noncontiguous, thus allowing a process to be allocated physical memory wherever such memory is available. Two complementary techniques achieve this solution: segmentation and paging. These techniques can also be combined.
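As referenced in the list above, here is a minimal sketch of first-fit and best-fit hole selection; the hole sizes and the request size are made-up values:

def first_fit(holes, request):
    """Return the index of the first hole large enough for the request."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None  # request cannot be satisfied contiguously

def best_fit(holes, request):
    """Return the index of the smallest hole that still fits the request."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None

# Free memory fragmented into non-contiguous holes (sizes in KB).
holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))  # 1 -> the 500 KB hole
print(best_fit(holes, 212))   # 3 -> the 300 KB hole
print(first_fit(holes, 700))  # None: 1700 KB free in total, but no single hole fits

The last call illustrates external fragmentation: total free space exceeds the request, yet no contiguous hole can satisfy it.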
Internal Fragmentation:
•  Internal fragmentation occurs when the allocated memory is larger than required by the process. It leads to wasted space within allocated memory blocks.
•  In fixed-size partitions, each process is allocated a partition, irrespective of its size. The allocated memory for a process may be slightly larger than the requested memory; this memory that is wasted internal to a partition is called internal fragmentation.
•  Consider a multiple-partition allocation scheme with a hole of 18,464 bytes. Suppose that the next process requests 18,462 bytes. If we allocate exactly the requested block, we are left with a hole of 2 bytes. With this approach, the memory allocated to a process may be slightly larger than the requested memory. The difference between these two numbers is internal fragmentation: unused memory that is internal to a partition.
Note:
•  Internal fragmentation occurs within allocated memory blocks, while external fragmentation occurs in the overall free memory space.
•  Both types of fragmentation can impact system performance and memory utilization, and various strategies can be employed to mitigate their effects in memory-management systems.
Paging
•  Paging is a memory-management scheme that permits the physical address space of a process to be non-contiguous, thereby avoiding external fragmentation.
•  Physical memory is broken into fixed-sized blocks called frames, and logical memory is broken into blocks of the same size called pages.
•  Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d).
•  The page number is used as an index into a page table.
•  The page table contains the base address of each page in physical memory.
•  This base address is combined with the page offset to define the physical memory address that is sent to the memory unit.
Paging Model:
•  The paging model of memory is shown in the following figure:
•  The page size (like the frame size) is defined by the hardware. The size of a page is a power of 2, varying between 512 bytes and 1 GB per page, depending on the computer architecture.
•  The selection of a power of 2 as a page size makes the translation of a logical address into a page number and page offset particularly easy.
•  If the size of the logical address space is 2^m bytes and the page size is 2^n bytes, then the high-order m-n bits of a logical address designate the page number, and the n low-order bits designate the page offset. Thus, the logical address is as follows:

      | page number p (m-n bits) | page offset d (n bits) |

   where p is an index into the page table and d is the displacement within the page.
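Because the page size is a power of 2, the split is just a shift and a mask. A small sketch (the function name is ours):

def split(addr, n):
    """Split a logical address for a page size of 2**n bytes."""
    p = addr >> n               # high-order m-n bits: page number
    d = addr & ((1 << n) - 1)   # low-order n bits: page offset
    return p, d

print(split(13, 2))  # (3, 1): with 4-byte pages, address 13 is page 3, offset 1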
Paging Example:
•  Suppose n = 2 and m = 4 for the logical address. Using a page size of 4 bytes and a physical memory of 32 bytes (8 frames), we show how the programmer's view of memory can be mapped into physical memory.
•  Physical address = (frame number × page size) + offset
•  Logical address 0 is page 0, offset 0. Indexing into the page table, we find that page 0 is in frame 5. Thus, logical address 0 maps to physical address 20 [= (5 × 4) + 0].
•  Logical address 3 (page 0, offset 3) maps to physical address 23 [= (5 × 4) + 3].
•  Logical address 4 is page 1, offset 0; according to the page table, page 1 is mapped to frame 6. Thus, logical address 4 maps to physical address 24 [= (6 × 4) + 0].
•  Logical address 13 (page 3, offset 1; page 3 is in frame 2) maps to physical address 9.
•  Logical address 15 (page 3, offset 3) maps to physical address 11.
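The worked example above can be checked with a short sketch; the page table {0: 5, 1: 6, 2: 1, 3: 2} is the mapping used in the example:

PAGE_SIZE = 4
page_table = {0: 5, 1: 6, 2: 1, 3: 2}  # page number -> frame number

def translate(logical):
    p, d = divmod(logical, PAGE_SIZE)     # page number, offset
    return page_table[p] * PAGE_SIZE + d  # frame base + offset

for la in (0, 3, 4, 13, 15):
    print(la, "->", translate(la))  # 0->20, 3->23, 4->24, 13->9, 15->11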
Hardware Support:
(Paging Hardware with TLB)
•  The hardware implementation of the page table can be done in several ways. The page table can be implemented as a set of dedicated registers if the page table is small.
•  If the page table is large, it is kept in main memory, and a page-table base register points to the page table.
•  When the page table is kept in main memory, two memory accesses are required to access a byte:
   o  One for accessing the page-table entry
   o  Another for accessing the byte itself.
•  Thus the overhead of accessing main memory increases.
•  The standard solution to this problem is to use a special, small, fast lookup hardware cache called a translation look-aside buffer (TLB).
•  Each entry in the TLB consists of two parts: a key (or tag) and a value.
•  The TLB contains only a few of the page-table entries.
•  When a logical address is generated by the CPU, its page number is presented to the TLB. If the page number is found (TLB hit), its frame number is immediately available and is used to access memory.
•  If the page number is not in the TLB (TLB miss), a memory reference to the page table must be made. When the frame number is obtained, we can use it to access memory.
•  The percentage of times that the page number of interest is found in the TLB is called the hit ratio.
•  The effective access time is the average time to access a byte, weighted by the hit ratio:

   Effective Access Time = TLB hit ratio × (TLB search time + memory access time)
                         + (1 - TLB hit ratio) × (TLB search time + 2 × memory access time)
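A small helper (ours) that evaluates this formula; the sample values anticipate the worked problem later in this unit:

def effective_access_time(hit_ratio, tlb_time, mem_time):
    """Effective access time per the formula above (times in ns)."""
    hit = tlb_time + mem_time        # TLB hit: one memory access
    miss = tlb_time + 2 * mem_time   # TLB miss: page-table access plus the byte
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(effective_access_time(0.70, 30, 90))  # 147.0 ns, matching the worked problem below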
Memory Protection:
•  Memory protection in a paged environment is accomplished by protection bits associated with each frame.
•  One bit can define a page to be read-write or read-only. When the physical address is being computed, the protection bits can be checked to verify that no writes are being made to a read-only page.
•  One additional bit is generally attached to each entry in the page table: a valid-invalid bit. When this bit is set to valid, the associated page is in the process's logical address space and is thus a legal page.
•  When the bit is set to invalid, the page is not in the process's logical address space.
•  A page-table length register (PTLR) is used to indicate the size of the page table. This value is checked against every logical address to verify that the address is in the valid range for the process.
Shared Pages:
•  An advantage of paging is the possibility of sharing common code.
•  Code can be shared only if it is reentrant code (also called pure code).
•  Reentrant code is non-self-modifying code: it never changes during execution. Thus, two or more processes can execute the same code at the same time.
•  Example: Consider three processes that share an editor consisting of three pages. Each process also has its own data page.
•  Only one copy of the editor need be kept in physical memory. Each process's page table maps onto the same physical copy of the editor, but the data pages are mapped onto different frames.
Note:
•  Pages and frames are identical in size; a page refers to a fixed-size block of a process's virtual memory space, and a frame refers to a fixed-size block of main memory. A virtual-memory page must be loaded into a frame in main memory before a processor can access its contents.
   o  Physical Address Space (PAS) = size of main memory
   o  Size of main memory = total number of frames × frame size
   o  Size of logical address space = total number of pages × page size
   o  Frame size = page size
   o  If the number of frames in main memory is 2^n, then the number of bits required for the frame number is n
   o  If the page size is 2^n bytes, then the number of bits in the page offset is n
   o  To address 2^n bytes, we need n bits; with n bits we can address 2^n bytes
   o  If the size of main memory is 2^n bytes, then the number of bits in a physical address is n
   o  Number of pages = logical address space / page size
   o  Number of frames = physical address space / frame size
Problem-1: Consider a logical address space of 256 pages with a 4-KB page size,
mapped onto a physical memory of 64 frames.
  a. How many bits are required in the logical address?
  b. How many bits are required in the physical address?
Solution:
Number of pages in Logical Address Space: 256 pages
Page Size: 4 KB
Physical Memory: 64 frames
Total logical address space = 256 × 4 KB = 2^8 × 2^12 = 2^20 bytes
   Number of bits required = log2(address space size) = log2(2^20) = 20 bits
Total physical address space = 64 × 4 KB = 2^6 × 2^12 = 2^18 bytes (frame size = page size)
   Number of bits required = log2(address space size) = log2(2^18) = 18 bits
Problem: Calculate the effective access time (EAT) if the TLB hit ratio is 70%, the TLB search time is 30 ns, and the memory access time is 90 ns.
Given:
      Hit Ratio = 70% = 0.70
      TLB search time = 30 ns
      Memory Access Time = 90 ns
Using the formula:
EAT = 0.70×(30 +90) +(1-0.7)×( 30 + 2×90 )
    = 0.70×120 +0.30×210
    = 84+63 =147 ns
Problem-5: If physical address is 12 bits and logical address is 13 bits find logical
address space and physical address space.
Solution:
Logical Address Space = 2^(logical address bits) = 2^13 = 2^3 × 2^10 = 8K
Physical Address Space = 2^(physical address bits) = 2^12 = 2^2 × 2^10 = 4K
Problem-6: If logical address space is 128 KB and physical address space is
512KB and page size is 16 KB. Find
      i)   Number of bits for logical address
      ii)  Number of bits for physical address
      iii) Number of pages in logical address space
      iv) Number of frames in physical address space
      v)   Page table size
      vi) Address fields of logical address
      vii) Address fields of physical address
Solution:
   i) Logical Address Space (LAS): 128 KB
      Number of bits required for LAS = log2(logical address space size) = log2(128 KB) = log2(2^17) = 17 bits
   ii) Physical Address Space (PAS): 512 KB
      Number of bits required for PAS = log2(physical address space size) = log2(512 KB) = log2(2^19) = 19 bits
   iii) Number of pages = LAS / page size = 128 KB / 16 KB = 2^17 / 2^14 = 2^3 = 8 pages
   iv) Number of frames = PAS / frame size = 512 KB / 16 KB = 2^19 / 2^14 = 2^5 = 32 frames
   v) Page table size = number of pages × page-table entry size (number of bits for the frame number)
                      = 8 × 5 = 40 bits (since the number of frames = 2^5)
   vi) Number of bits required for the page number = 3 bits (since the number of pages = 2^3)
       Page offset = 14 bits (since page size = 16 KB = 2^14 bytes)
       Address fields of the logical address:
          | page number (3 bits) | page offset (14 bits) |
   vii) Number of bits required for the frame number = 5 bits (since the number of frames = 2^5)
        Frame offset = 14 bits (since frame size = 16 KB = 2^14 bytes)
        Address fields of the physical address:
           | frame number (5 bits) | frame offset (14 bits) |
Structure of the Page Table
Hierarchical Paging:
•  Most modern systems support a large logical address space, so the page table itself becomes excessively large. One solution is a two-level paging scheme, in which the page table itself is paged. The logical address is divided into an outer page number p1, an inner page number p2, and a page offset d,
   where p1 is an index into the outer page table and p2 is the displacement within the page of the inner page table.
•  The address-translation method for this architecture is shown in the following figure.
•  Because address translation works from the outer page table inward, this scheme is also known as a forward-mapped page table.
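A sketch of forward-mapped translation, assuming a 32-bit address split into a 10-bit p1, a 10-bit p2, and a 12-bit offset (a common split; the table contents below are made up):

def translate(addr, outer_table):
    p1 = addr >> 22                # index into the outer page table
    p2 = (addr >> 12) & 0x3FF      # index into the inner page table
    d = addr & 0xFFF               # offset within the page
    inner_table = outer_table[p1]  # first memory access
    frame = inner_table[p2]        # second memory access
    return frame * 4096 + d

outer = {0: {5: 42}}  # only the inner tables actually in use need to exist
print(translate(5 * 4096 + 7, outer))  # page (0, 5) -> frame 42: prints 172039

Note that only the inner tables for regions actually in use are allocated, which is the memory saving hierarchical paging provides.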
•  Example System: Modern operating systems such as Linux and Windows employ hierarchical paging. For instance, Linux on x86-64 architectures uses a four-level paging scheme (PML4, PDPT, PD, and PT).
Advantages:
•  Memory Efficiency: It allows efficient use of memory by organizing the page table in a hierarchical structure.
•  Scalability: Hierarchical paging scales well with the size of the address space. As the address space grows, additional levels of the page table can be added to accommodate it.
•  Protection: It provides a natural way to implement protection mechanisms, such as different access permissions for different parts of the address space, by managing permissions at different levels of the hierarchy.
Disadvantages:
•  Traversal Overhead: Traversing multiple levels of the page-table hierarchy can introduce overhead in address translation, potentially impacting performance.
•  Fragmentation: It can lead to fragmentation of the page table itself, especially if the address space is not evenly distributed or if there are many small allocations.
Hashed Page Tables:
•  A common approach for handling address spaces larger than 32 bits is to use a hashed page table, with the hash value being the virtual page number.
•  Hashed page tables use a hash function to map virtual page numbers to page-table entries.
•  Each entry in the hash table contains a linked list of elements that hash to the same location (to handle collisions). Each element consists of three fields:
   (1) the virtual page number,
   (2) the value of the mapped page frame, and
   (3) a pointer to the next element in the linked list.
•  The algorithm works as follows (a sketch appears after this list): the virtual page number in the virtual address is hashed into the hash table. The virtual page number is compared with field 1 of the first element in the linked list. If there is a match, the corresponding page frame (field 2) is used to form the desired physical address. If there is no match, subsequent entries in the linked list are searched for a matching virtual page number.
•  This scheme is shown in the following figure.
•  A variation of this scheme that is useful for 64-bit address spaces has been proposed. This variation uses clustered page tables, which are similar to hashed page tables except that each entry in the hash table refers to several pages (such as 16) rather than a single page. Therefore, a single page-table entry can store the mappings for multiple physical-page frames. Clustered page tables are particularly useful for sparse address spaces, where memory references are noncontiguous and scattered throughout the address space.
•  Example System: Early versions of the MIPS architecture, such as the MIPS R2000 and R3000 processors, could use hashed page tables in their memory-management units.
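As promised above, a sketch of the lookup; the table size and the hash function (vpn mod table size) are simplifications of ours:

TABLE_SIZE = 8

def lookup(hash_table, vpn):
    bucket = hash_table[vpn % TABLE_SIZE]  # hash the virtual page number
    for entry_vpn, frame in bucket:        # walk the collision chain
        if entry_vpn == vpn:               # field 1 matches
            return frame                   # field 2: the mapped frame
    return None                            # no mapping found: page fault

table = [[] for _ in range(TABLE_SIZE)]
table[5 % TABLE_SIZE].append((5, 91))
table[13 % TABLE_SIZE].append((13, 17))  # collides with vpn 5 (13 mod 8 = 5)
print(lookup(table, 13))  # 17, found after skipping the entry for vpn 5
print(lookup(table, 6))   # None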
Advantages:
•  Flexibility: Hashed page tables can efficiently handle large or sparse address spaces, as they allocate entries only for pages that are actually in use.
Hierarchical paging: Pros: saves memory, scales well. Cons: slower due to multiple lookups.
Hashed page tables: Pros: good for large/sparse address spaces. Cons: slower if there are too many collisions.
Inverted page tables: Pros: save memory (good for many small processes). Cons: slower lookups (the entire table may have to be searched).
Advantages and disadvantages of paging:
Advantages:
•  Paging eliminates external fragmentation (though it still suffers from internal fragmentation).
•  Paging is simple to implement and is regarded as an efficient memory-management technique.
•  Due to the equal size of pages and frames, swapping becomes very easy.
Disadvantages:
•  Paging does not preserve the user's view of memory allocation.
•  The page table requires extra memory space, so paging may not be good for a system with a small main memory.
•  Multi-level paging may lead to memory-reference overhead.
Swapping
•  A process must be in memory to be executed. A process, however, can be swapped temporarily out of memory (swap-out) to a backing store (disk) and then brought back into main memory (swap-in) for continued execution. This mechanism is called swapping.
•  Swapping is a memory-management technique that allows the OS to utilize main memory efficiently by temporarily moving unused or less frequently accessed portions of memory to secondary storage and bringing them back into main memory when needed.
•  Swapping allows the OS to move processes between main memory and secondary storage.
•  Swapping makes it possible for the total physical address space of all processes to exceed the real physical memory of the system, thus increasing the degree of multiprogramming in a system.
•  The technique of swapping is employed to service a larger number of processes than can fit into the computer's memory.
•  Swapping has the potential to improve both system performance and the response times of processes.
Paging Vs. Swapping:

Paging: Divides memory into fixed-size blocks (pages) to manage physical and logical address spaces efficiently.
Swapping: Moves entire processes between main memory (RAM) and secondary storage (disk) when needed.

Paging: Works at the page level (small fixed-size units, e.g., 4 KB).
Swapping: Works at the process level (entire processes are swapped).

Paging: Reduces external fragmentation; may cause page faults (minor delays).
Swapping: Can cause high overhead due to large I/O operations (slower than paging).

Paging: Example: a process's pages are loaded on demand (demand paging).
Swapping: Example: an idle process is moved to disk to free RAM for active processes.

Paging: Analogy: fetching only the required chapters of a book from a library.
Swapping: Analogy: returning the entire book to the shelf and borrowing another one.
Virtual Memory Management
Introduction:
•  Virtual memory is a technique that allows the execution of processes that are not completely in memory. One major advantage is that programs can be larger than physical memory.
•  The virtual address space of a process refers to the logical (or virtual) view of how a process is stored in memory.
•  Typically, this view is that a process begins at a certain logical address, say, address 0, and exists in contiguous memory.
•  Physical memory may be organized in page frames, and the physical page frames assigned to a process may not be contiguous. It is up to the memory-management unit (MMU) to map logical pages to physical page frames in memory.
Note:
•  Swapping involves moving entire processes between RAM and disk, while virtual memory involves managing memory at the page level and allows programs to use more memory than is physically available by transparently moving data between RAM and disk.
Demand Paging
•  Virtual memory is commonly implemented by demand paging.
•  In demand paging, instead of loading the entire program into memory, pages are loaded only as they are needed.
•  With demand-paged virtual memory, pages are loaded only when they are demanded during program execution. Pages that are never accessed are thus never loaded into physical memory.
•  A demand-paging system is similar to a paging system with swapping, where processes reside in secondary memory. When we want to execute a process, we swap it into memory. Rather than swapping the entire process into memory, however, a lazy swapper (pager) never swaps a page into memory unless that page will be needed.
Note:
•  A swapper manipulates entire processes, whereas a pager is concerned with the individual pages of a process.
Page Fault:
•  If a page requested by a process is in memory, the process can access it. If the requested page is not in main memory, a page fault occurs; i.e., access to a page marked invalid causes a page fault.
The procedure for handling a page fault:
(Steps in handling a page fault)
•  While translating the address through the page table, the paging hardware notices that the invalid bit is set, causing a trap to the operating system. This trap is the result of the operating system's failure to bring the desired page into memory.
•  When a page fault occurs (indicated by an invalid bit in the page table), the operating system handles it through the following steps:
        1. Check Validity: Verify if the memory reference is valid.
        2. Terminate or Load Page: If invalid, terminate the process; if valid,
            proceed to load the page.
        3. Find Free Frame: Locate a free memory frame.
        4. Read from Disk: Schedule a disk read to load the required page into
            the frame.
        5. Update Tables: Modify internal and page tables to reflect that the
            page is now in memory.
        6. Restart Instruction: Resume the process from the point it was
            interrupted.
•  The scheme in which a page is never brought into memory until it is required is called pure demand paging.
Copy-on-Write:
•  Copy-on-write (COW) allows the parent and child processes to initially share the same pages in memory. If either process writes to a shared page, a copy of that page is created for the writing process.
•  COW makes process creation (e.g., fork()) faster and minimizes the number of new pages that must be allocated.
Page Replacement
(Need for Page Replacement)
•  Page replacement is basic to demand paging.
•  When there is a page fault, the OS decides to load the page from secondary memory into main memory. It looks for a free frame. If there is no free frame, a page that is not currently in use will be swapped out of main memory, and the desired page will be swapped in.
•  The process of swapping a page out of main memory to the swap space and swapping the desired page into main memory for execution is called page replacement.
•  If no frames are free, two page transfers (one out and one in) are required. This situation effectively doubles the page-fault service time and increases the effective access time accordingly.
•  This overhead can be reduced by using a modify bit (or dirty bit), as the sketch after this list shows.
•  When this scheme is used, each page or frame has a modify bit associated with it in the hardware.
•  Modify Bit: the modify bit for a page is set by the hardware whenever any byte in the page is written into, indicating that the page has been modified.
•  When we select a page for replacement, we examine its modify bit. If the bit is set, we know that the page has been modified since it was read in from the disk. In this case, we must write the page back to the disk.
•  If the modify bit is not set, the page has not been modified since it was read into memory. In this case, we need not write the memory page to the disk: it is already there.
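A minimal sketch of the saving: the victim is written back only when its dirty bit is set. The frame dictionaries and helper functions below are ours:

def write_to_disk(frame):
    print("writing page", frame["page"], "back to swap space")

def load_from_disk(frame):
    print("reading the new page into the freed frame")

def evict(frame):
    if frame["dirty"]:         # page changed since it was read in
        write_to_disk(frame)   # one transfer out ...
    load_from_disk(frame)      # ... plus one transfer in

evict({"page": 7, "dirty": True})   # two transfers
evict({"page": 3, "dirty": False})  # only the read: the disk copy is still current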
Note:
•  The valid-invalid bit indicates whether the corresponding page in virtual memory is valid or not.
•  The modify (or dirty) bit is used to track whether a page in memory has been modified since it was loaded from disk.
Page Replacement Algorithms
•  If we have multiple processes in memory, we must decide how many frames to allocate to each process; and when page replacement is required, we must select the frames that are to be replaced.
•  The string of memory references made by a process is called a reference string.
1. FIFO Page Replacement
•  The simplest page-replacement algorithm is first-in, first-out (FIFO): when a page must be replaced, the oldest page (the one brought into memory first) is chosen.
•  Example: Consider the reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 for a memory with three frames. The first three references (7, 0, 1) cause page faults and are brought into the three empty frames.
   o  The next reference (2) replaces page 7, because page 7 was brought in first.
   o  Page 0 is the next reference; since 0 is already in memory, there is no fault for this reference.
   o  The first reference to 3 results in replacement of page 0, since it is now first in line.
   o  Because of this replacement, the next reference, to 0, will fault. Page 1 is then replaced by page 0. The process continues until all the pages are referenced.
•  Advantages:
   o  The FIFO page-replacement algorithm is easy to understand and program.
   o  It requires minimal overhead.
•  Disadvantages:
   o  The performance is not always good.
   o  It suffers from Belady's anomaly.
•  Belady's Anomaly: for some reference strings, the page-fault rate may increase as the number of allocated memory frames increases. This unexpected result is called Belady's anomaly. (A simulation sketch follows.)
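A short simulation of FIFO replacement (ours); it reproduces the 15 faults that the example above produces with three frames:

from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                  # hit: no fault
        faults += 1
        if len(frames) == nframes:    # no free frame: evict the oldest page
            frames.remove(queue.popleft())
        frames.add(page)
        queue.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))  # 15 faults for the reference string above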
2. Optimal Page Replacement
•  This algorithm has the lowest page-fault rate of all algorithms and will never suffer from Belady's anomaly.
•  The optimal page-replacement algorithm replaces the page that will not be used for the longest period of time.
•  Example: Consider the reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 for a memory with three frames.
3. LRU Page Replacement
•  LRU (least recently used) replacement associates with each page the time of that page's last use. When a page must be replaced, LRU chooses the page that has not been used for the longest period of time.
•  LRU looks backward in time, whereas optimal replacement looks forward; LRU is therefore a realizable approximation of the optimal algorithm.
•  Stack Algorithm:
   o  A stack algorithm is an algorithm for which it can be shown that the set of pages in memory for n frames is always a subset of the set of pages that would be in memory with n + 1 frames.
   o  Neither optimal replacement nor LRU replacement suffers from Belady's anomaly. Both belong to a class of page-replacement algorithms, called stack algorithms, that can never exhibit Belady's anomaly (a sketch comparing both follows).
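As mentioned above, a sketch (ours) comparing both stack algorithms on the same reference string; with three frames, optimal incurs 9 faults and LRU 12, matching the classic results:

def optimal_faults(refs, nframes):
    """Replace the page whose next use is farthest in the future."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
        else:
            def next_use(p):  # distance to next use; never-used-again goes last
                rest = refs[i + 1:]
                return rest.index(p) if p in rest else len(refs)
            frames[frames.index(max(frames, key=next_use))] = page
    return faults

def lru_faults(refs, nframes):
    """Replace the least recently used page."""
    frames, faults = [], 0             # list ordered by recency of use
    for page in refs:
        if page in frames:
            frames.remove(page)        # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)          # least recently used is at the front
        frames.append(page)            # most recently used at the back
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(optimal_faults(refs, 3), lru_faults(refs, 3))  # 9 12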
4. LRU-Approximation Page Replacement Algorithms:
•  The system provides support to the LRU algorithm in the form of a bit called the reference bit.
•  Reference Bit: the reference bit for a page is set by the hardware whenever that page is referenced (either a read or a write to any byte in the page).
•  Reference bits are associated with each entry in the page table.
•  Initially, all bits are cleared (to 0) by the operating system. As a user process executes, the bit associated with each page referenced is set (to 1) by the hardware.
•  After some time, we can determine which pages have been used and which have not been used by examining the reference bits.
•  This information is the basis for many page-replacement algorithms that approximate LRU replacement.
a) Additional-Reference-Bits Algorithm:
•  Additional ordering information can be gained by recording the reference bits at regular intervals. We can keep an 8-bit byte for each page in a table in memory.
•  At regular intervals (say, every 100 milliseconds), a timer interrupt transfers control to the operating system, which shifts each page's reference bit into the high-order bit of its 8-bit byte, shifting the other bits right by one and discarding the low-order bit.
•  These 8-bit shift registers contain the history of page use for the last eight time periods.
Prepared By: MD SHAKEEL AHMED, Associate Professor, Dept. Of IT, VVIT, Guntur         Page 38
II B.Tech IT                          Regulation: R23                           OS-UNIT-4
•  If the shift register contains 00000000, for example, then the page has not been used for eight time periods.
•  A page that is used at least once in each period has a shift-register value of 11111111.
•  A page with a history-register value of 11000100 has been used more recently than one with a value of 01110111.
•  If we interpret these bytes as unsigned integers, the page with the lowest number is the LRU page, and it can be replaced (see the sketch below).
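A sketch of the aging step (ours): at each timer interrupt, the history byte is shifted right and the current reference bit becomes the new high-order bit:

def age(history, ref_bit):
    """Shift the 8-bit history right; the new bit lands on top."""
    return (history >> 1) | (ref_bit << 7)  # high bit = most recent period

h = 0b00000000
for bit in (1, 1, 0, 0, 0, 1, 0, 0):  # reference bits over eight periods
    h = age(h, bit)
print(format(h, "08b"))  # 00100011; comparing as unsigned, the lowest value is the LRU page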
b) Second-Chance Algorithm:
•  The basic algorithm of second-chance replacement is a FIFO replacement algorithm.
•  When a page has been selected, however, we inspect its reference bit.
•  If the value is 0, we proceed to replace this page; but if the reference bit is set to 1, we give the page a second chance and move on to select the next FIFO page.
•  When a page gets a second chance, its reference bit is cleared, and its arrival time is reset to the current time. Thus, a page that is given a second chance will not be replaced until all other pages have been replaced.
•  One way to implement the second-chance algorithm is as a circular queue (the clock algorithm); a sketch follows this list.
•  A pointer indicates which page is to be replaced next. When a frame is needed, the pointer advances, clearing reference bits, until it finds a page with a 0 reference bit.
•  Once a victim page is found, the page is replaced, and the new page is inserted in the circular queue in that position.
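A sketch of the circular-queue (clock) implementation described above; the frame records and starting hand position are made up:

def clock_victim(frames, hand):
    """frames: list of {'page': ..., 'ref': 0 or 1}; hand: current pointer."""
    while True:
        if frames[hand]["ref"] == 0:
            return hand                    # victim found
        frames[hand]["ref"] = 0            # give this page a second chance
        hand = (hand + 1) % len(frames)    # advance the clock hand

frames = [{"page": 2, "ref": 1}, {"page": 3, "ref": 0}, {"page": 5, "ref": 1}]
victim = clock_victim(frames, 0)
print(victim, frames[victim]["page"])  # 1 3: page 2 got its second chance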
Allocation of Frames
(Allocation of Frames in Virtual-Memory Systems)
Frame Allocation Basics:
•  Frame allocation determines how the OS distributes a fixed amount of memory among processes.
•  The system must allocate a limited number of free frames among processes.
•  Initially, free frames are used to satisfy page faults. Once they are exhausted, page replacement begins.
•  Example: In a system with 128 KB of memory and 1 KB pages, 128 frames exist. If 35 KB is used by the OS, 93 frames remain for user processes. With pure demand paging, all 93 frames are initially free; after they are used up, a page-replacement algorithm selects which pages to replace.
•  The OS can reserve a few frames to always remain free so that sudden page faults are handled smoothly.
Minimum Number of Frames:
•  The minimum number of frames per process is determined by the hardware and instruction requirements (e.g., at least one frame for the instruction and one for a memory reference).
•  Complex addressing, such as indirect addressing, requires more frames.
•  A counter may limit the levels of indirection to avoid excessive frame requirements.
•  A process must be given enough frames to complete its instructions without frequent faults.
Allocation Algorithms:
•  Equal Allocation:
   o  Every process gets an equal number of frames (m/n frames each, for m frames and n processes).
   o  Simple but inefficient: small processes may waste memory.
•  Proportional Allocation:
   o  Frames are assigned based on the process's size: a process p_i of size s_i receives about a_i = (s_i / S) × m frames, where S is the total size of all processes and m is the number of available frames (see the sketch after this section).
   o  More fair and efficient, but it does not consider process priority.
•  Priority-Based Allocation:
   o  High-priority processes can be allocated more frames, possibly at the expense of low-priority ones.
Global vs. Local Replacement:
•  Global Replacement: a process may replace pages belonging to other processes; this improves throughput but can cause inconsistent performance.
•  Local Replacement: a process replaces only its own pages; performance is more predictable but can lead to inefficiencies.
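As noted under proportional allocation, a small sketch of the computation; the process sizes (10 and 127 pages) and m = 62 frames follow the classic textbook example:

def proportional(sizes, m):
    """Allocate about (s_i / S) * m frames to each process."""
    S = sum(sizes)
    return [max(1, (s * m) // S) for s in sizes]  # enforce a minimum of 1 frame

print(proportional([10, 127], 62))  # [4, 57]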
Thrashing
•  Thrashing is a condition in which the system spends more time handling page faults than executing processes, leading to severe performance degradation.
•  If a process does not have the number of frames it needs to support the pages in active use, it will quickly page-fault. At this point, it must replace some page. However, since all its pages are in active use, it must replace a page that will be needed again right away. Consequently, it quickly faults again, and again, and again, replacing pages that it must bring back in immediately. This high paging activity is called thrashing.
•  A process is thrashing if it is spending more time paging than executing.
•  Thrashing is a phenomenon that occurs when a computer's performance drops drastically as a result of excessive paging due to insufficient physical memory to support the workload. Instead of executing useful work, the system spends most of its time paging, leading to a significant decrease in performance.
Causes of Thrashing:
•  Insufficient Physical Memory: if the system doesn't have enough RAM to hold all the active processes and data, it starts swapping data in and out of virtual memory, leading to thrashing.
•  Overcommitment of Memory: when the system allows more processes to run concurrently than the available physical memory can accommodate, it may result in excessive swapping and thrashing.
•  Poor Memory Management: inefficient memory-management algorithms or poorly optimized applications can contribute to thrashing.
•  Fragmentation: fragmentation of virtual memory can also exacerbate thrashing by making it harder for the system to find contiguous blocks of memory.
•  If we draw a graph of CPU utilization against the degree of multiprogramming, we see that as the degree of multiprogramming increases, CPU utilization also increases, although more slowly, until a maximum is reached. If the degree of multiprogramming is increased even further, thrashing sets in, and CPU utilization drops sharply. At this point, to increase CPU utilization and stop thrashing, we must decrease the degree of multiprogramming.
Detection of Thrashing:
•  High Disk Activity: one of the most apparent signs of thrashing is high disk activity. If the system is constantly swapping data in and out of the disk, it indicates that thrashing might be occurring.
Overview of Mass Storage Structure
•  Magnetic disks (hard disk drives, HDDs) provide the bulk of secondary storage for modern computers. Each disk has one or more flat, circular platters, which rotate at high speed.
•  The two surfaces of a platter are covered with a magnetic material. We store information by recording it magnetically on the platters.
•  A read-write head "flies" just above each surface of every platter. The heads are attached to a disk arm that moves all the heads as a unit.
•  The surface of a platter is logically divided into circular tracks, which are subdivided into sectors.
•  Cylinder: the set of tracks that are at one arm position makes up a cylinder.
HDD Scheduling:
•  For disk drives, using the hardware efficiently means keeping the seek time low: the seek time is the time for the disk arm to move the heads to the cylinder containing the desired sector. Disk-scheduling algorithms order the pending I/O requests so as to reduce the total head movement.
FCFS Scheduling:
•  The simplest form of disk scheduling is, of course, the first-come, first-served (FCFS) algorithm.
•  This algorithm is intrinsically fair, but it generally does not provide the fastest service.
•  Consider, for example, a disk queue with requests for I/O to blocks on cylinders 98, 183, 37, 122, 14, 124, 65, 67, in that order.
•  If the disk head is initially at cylinder 53, it will first move from 53 to 98, then to 183, 37, 122, 14, 124, 65, and finally to 67.
•  The total head movement is
   |53-98| + |98-183| + |183-37| + |37-122| + |122-14| + |14-124| + |124-65| + |65-67|
   = 45 + 85 + 146 + 85 + 108 + 110 + 59 + 2 = 640 cylinders
   (a computation sketch follows this list).
•  Disadvantage: the wild swing from 122 to 14 and then back to 124 increases the total head movement.
•  If the requests for cylinders 37 and 14 could be serviced together, before or after the requests for 122 and 124, the total head movement could be decreased and performance thereby improved.
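As referenced above, a short helper (ours) that sums the absolute seek distances for FCFS; it reproduces the 640-cylinder total:

def fcfs_movement(start, requests):
    """Total head movement when requests are served in arrival order."""
    total, pos = 0, start
    for cyl in requests:
        total += abs(cyl - pos)
        pos = cyl
    return total

print(fcfs_movement(53, [98, 183, 37, 122, 14, 124, 65, 67]))  # 640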
SSTF Scheduling:
•  The shortest-seek-time-first (SSTF) algorithm selects the request with the least seek time from the current head position.
•  In other words, SSTF chooses the pending request closest to the current head position.
•  Example: Given a disk with 200 cylinders and a disk queue with requests for I/O to blocks on cylinders 98, 183, 37, 122, 14, 124, 65, 67, with the disk head initially at 53:
•  The closest request to the initial head position (53) is at cylinder 65.
•  Once we are at cylinder 65, the next closest request is at cylinder 67.
•  From there, the request at cylinder 37 is closer than the one at 98, so 37 is served next.
•  Continuing, we service the request at cylinder 14, then 98, 122, 124, and finally 183.
•  This scheduling method results in a total head movement of only 236 cylinders.
•  The SSTF algorithm gives a substantial improvement in performance (a sketch follows this list).
•  Disadvantage: SSTF may cause starvation of some requests.
•  Starvation: Suppose that we have two requests in the queue, for cylinders 14 and 186, and while the request from 14 is being serviced, a new request near 14 arrives. This new request will be serviced next, making the request at 186 wait. While this request is being serviced, another request close to 14 could arrive. In theory, a continual stream of requests near one another could cause the request for cylinder 186 to wait indefinitely.
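As referenced above, a greedy sketch of SSTF (ours); it reproduces the 236-cylinder total for the same queue:

def sstf_movement(start, requests):
    """Repeatedly service the pending request closest to the head."""
    pending, pos, total = list(requests), start, 0
    while pending:
        nearest = min(pending, key=lambda c: abs(c - pos))
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return total

print(sstf_movement(53, [98, 183, 37, 122, 14, 124, 65, 67]))  # 236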
SCAN Scheduling:
•  In the SCAN algorithm, the disk arm starts at one end of the disk and moves toward the other end, servicing requests as it reaches each cylinder, until it gets to the other end of the disk.
•  At the other end, the direction of head movement is reversed, and servicing continues.
•  The head continuously scans back and forth across the disk.
•  The SCAN algorithm is sometimes called the elevator algorithm.
•  Example: Given a disk with 200 cylinders and a disk queue with requests 98, 183, 37, 122, 14, 124, 65, 67, with the disk head initially at 53:
•  Assuming that the disk arm is moving toward 0, the head will next service 37 and then 14.
•  At cylinder 0, the arm will reverse and will move toward the other end of the disk, servicing the requests at 65, 67, 98, 122, 124, and 183.
•  If a request arrives in the queue just in front of the head, it will be serviced almost immediately; a request arriving just behind the head will have to wait until the arm moves to the end of the disk, reverses direction, and comes back.
Circular SCAN (C-SCAN) Scheduling:
•  C-SCAN is a variant of SCAN designed to provide a more uniform wait time.
•  C-SCAN moves the head from one end of the disk to the other, servicing requests along the way. When the head reaches the other end, however, it immediately returns to the beginning of the disk without servicing any requests on the return trip.
LOOK Scheduling:
•  The LOOK algorithm is the same as the SCAN algorithm in that it also services requests in both directions of head movement, but it "looks" ahead to see whether any requests are pending in the current direction of head movement.
•  If no requests are pending in the current direction, the head traversal is reversed, and the requests in the other direction are served.
•  In LOOK scheduling, the arm goes only as far as the final request in each direction and then reverses direction immediately, without going all the way to the end of the disk.
•  Example: Given a disk with 200 cylinders (0-199), suppose we have 8 pending requests: 98, 183, 37, 122, 14, 124, 65, 67, and the read/write head is currently at cylinder 53.
•  To complete these requests, the arm will move in increasing order first and then in decreasing order after reaching the last request. So the order of service is 65, 67, 98, 122, 124, 183, 37, and 14.
C-LOOK Scheduling:
•  C-LOOK is an enhanced version of C-SCAN, just as LOOK is of SCAN.
•  The arm goes only as far as the last request in the servicing direction; it then jumps immediately back to the first pending request at the other end, without servicing requests on the return trip, and resumes servicing in the same direction.
FCFS serves the disk I/O requests in the order they arrive. It works on a first-come, first-
served basis.
SSTF selects the request with the shortest seek time, i.e., the request closest to the
current head position.
SCAN moves the disk arm in one direction (up or down) servicing requests along the
way until it reaches the end, then reverses direction.
LOOK is similar to SCAN but reverses direction when there are no more requests in the
current direction.
C-SCAN is a variant of SCAN that always scans the disk in one direction (e.g., from
innermost to outermost track) and jumps to the other end without servicing requests.
C-LOOK is a variant of LOOK that only looks for requests in one direction and jumps to
the other end without servicing requests.
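The service orders produced by the SCAN family can be sketched as below (ours). We assume the head starts at 53 and the initial direction is toward higher-numbered cylinders, so the SCAN variant here differs from the downward-moving SCAN example earlier:

def scan(start, reqs, max_cyl=199):
    up = sorted(r for r in reqs if r >= start)
    down = sorted((r for r in reqs if r < start), reverse=True)
    return up + ([max_cyl] if down else []) + down   # hits the end, then reverses

def look(start, reqs):
    up = sorted(r for r in reqs if r >= start)
    down = sorted((r for r in reqs if r < start), reverse=True)
    return up + down                                 # reverses at the last request

def c_look(start, reqs):
    up = sorted(r for r in reqs if r >= start)
    wrap = sorted(r for r in reqs if r < start)
    return up + wrap                                 # jumps back, keeps one direction

reqs = [98, 183, 37, 122, 14, 124, 65, 67]
print(scan(53, reqs))    # [65, 67, 98, 122, 124, 183, 199, 37, 14]
print(look(53, reqs))    # [65, 67, 98, 122, 124, 183, 37, 14]
print(c_look(53, reqs))  # [65, 67, 98, 122, 124, 183, 14, 37]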
6. What is paging? Explain its structure for 32-byte memory with 4-byte pages.
   7. What is effective access time? Compute it for 70% hit ratio, 20 ns to search TLB
      and 100 ns to access memory. Observe the difference when it is changed to 90%
      hit ratio.
   10. Explain in detail about how Virtual memory is implemented with a neat diagram.
       Does virtual memory increase computer speed? Give justification to your answer.
       (or) Explain about Virtual Memory Management in detail.
   13. How does demand paging affect the performance of a computer system? Give an explanation.
   16. What is a page fault? Explain the steps involved in handling a page fault with a neat sketch.
       (or) Under what circumstances do page faults occur? Describe the actions taken by the operating system when a page fault occurs, with a neat picture.
       (or) When does a page fault occur? Discuss the procedure for handling a page fault with a neat diagram.
   17. What is the need for page replacement in paging? Describe any two page
       replacement algorithms with examples.
       (or) Explain the LRU and Optimal page replacement algorithms.
       (or) Describe any two page replacement algorithms. Compare them based on
        efficiency and performance
   22. What is the need for page replacement? Consider the following reference
       string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1.
       Find the number of page faults with FIFO, Optimal, and LRU page
       replacement with four frames that are initially empty. Which algorithm
       gives the minimum number of page faults? (A simulation sketch appears
       after this question list.)
   23. Consider the following page reference string: 1, 2, 3, 4, 2, 1, 5, 6,
       2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6. How many page faults would occur
       for the FIFO, LRU, and Optimal page replacement algorithms, assuming
       four page frames, all initially empty?
   24. Discuss how the LRU and FIFO page replacement algorithms can be applied
       to the following reference string when the number of frames is 3. Also
       calculate the number of page faults. 3, 2, 1, 0, 2, 2, 1, 7, 6, 7, 0,
       1, 2, 0, 3, 0, 4, 1, 5, 4, 5, 6, 7, 6, 7, 2, 4, 2, 7, 3.
   25. A system uses 4 page frames for storing process pages in main memory.
       Assume that all the page frames are initially empty. Find the number of
       page faults, hit ratio, and miss ratio for the Optimal page replacement
       algorithm while processing the page reference string given below.
       1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6
   27. What is thrashing? What is the cause of Thrashing? How does the system detect
       Thrashing? What can the system do to eliminate this problem?
       (or) What is thrashing in a virtual memory system? Explain the causes of thrashing and
       discuss how the operating system can detect and prevent it.
   28. How does thrashing affect system performance? Discuss the role of the Working
       Set Model and Page Fault Frequency (PFF) in mitigating thrashing.
   29. Explain the various issues involved in selecting an appropriate disk
       scheduling algorithm.
       (or) What is disk access time? Explain the factors influencing the
       selection of a disk scheduling algorithm.
   31. Suppose the following disk request sequence (track numbers) for a disk with 100
       tracks is given: 45, 20, 90, 10, 50, 60, 80, 25, and 70. Assume that the initial
       position of the R/W head is on track 50. Find out the additional distance that will
       be traversed by the R/W head when the Shortest Seek Time First (SSTF) algorithm
       is used compared to the SCAN (Elevator) algorithm (assuming that SCAN
       algorithm moves towards 100 when it starts execution).
   32. Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41,
       122, 14, 124, 65, 67. The FCFS scheduling algorithm is used. The head is initially at
       cylinder number 53. The cylinders are numbered from 0 to 199. Calculate the
       total head movement (in number of cylinders) incurred while servicing these
       requests.
   33. Suppose the following disk request sequence (track numbers) for a disk
       with 200 tracks is given: 82, 170, 43, 140, 24, 16, 190. Assume that
       the initial position of the R/W head is on track 50. Calculate the seek
       time for SSTF, SCAN, and LOOK.
   34. Consider a disk queue with the following requests for I/O to blocks on
       cylinders 30, 70, 115, 130, 110, 80, 20, 25 (assume the disk head is at
       90). Draw the FCFS and SSTF schedules and also determine how many times
       the disk head changes its direction for each of the above-mentioned
       scheduling techniques.
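Worked sketch for question 7: assuming the usual convention that a TLB hit costs the TLB search plus one memory access, while a TLB miss costs the TLB search plus two memory accesses (one for the page-table entry, one for the data), the effective access time is EAT = h(20 + 100) + (1 - h)(20 + 200) ns. A minimal Python version (eat is an illustrative name):

    # Effective access time with a memory-resident page table.
    # A TLB miss costs one extra memory access. Times are in nanoseconds.
    def eat(hit_ratio, tlb=20, mem=100):
        hit_cost = tlb + mem          # TLB search + data access
        miss_cost = tlb + 2 * mem     # TLB search + page-table access + data access
        return hit_ratio * hit_cost + (1 - hit_ratio) * miss_cost

    print(eat(0.70))   # 150.0 ns
    print(eat(0.90))   # 130.0 ns (raising the hit ratio lowers the EAT)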
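Simulation sketch for questions 22-25 (also usable for assignment questions 5 and 15): the fault counters below are minimal illustrative implementations, assuming the frames start empty. For the reference string of question 22 with four frames they print FIFO 10, LRU 8, Optimal 8.

    from collections import deque

    def fifo_faults(ref, nframes):
        frames, order, faults = set(), deque(), 0
        for p in ref:
            if p not in frames:
                faults += 1
                if len(frames) == nframes:          # evict the oldest page
                    frames.discard(order.popleft())
                frames.add(p)
                order.append(p)
        return faults

    def lru_faults(ref, nframes):
        frames, faults = [], 0                      # least- to most-recently used
        for p in ref:
            if p in frames:
                frames.remove(p)                    # refresh recency on a hit
            else:
                faults += 1
                if len(frames) == nframes:
                    frames.pop(0)                   # evict least recently used
            frames.append(p)
        return faults

    def optimal_faults(ref, nframes):
        frames, faults = set(), 0
        for i, p in enumerate(ref):
            if p not in frames:
                faults += 1
                if len(frames) == nframes:
                    rest = ref[i + 1:]
                    # evict the page whose next use is farthest away (or never)
                    def next_use(q):
                        return rest.index(q) if q in rest else float('inf')
                    frames.discard(max(frames, key=next_use))
                frames.add(p)
        return faults

    ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
    for name, fn in [("FIFO", fifo_faults), ("LRU", lru_faults),
                     ("Optimal", optimal_faults)]:
        print(name, fn(ref, 4))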
Assignment Questions:
   1. What is the difference between
          i)      Page and a frame
          ii)     Internal and external fragmentation
          iii)    Logical and physical addresses.
          iv)     Valid-invalid bit and modify bit
   3. Consider a logical address space of 64 pages of 1,024 words each, mapped
       onto a physical memory of 32 frames. (A worked sketch of this
       calculation follows this list.)
           a. How many bits are there in the logical address?
           b. How many bits are there in the physical address?
   4. Consider a logical address space of 256 pages with a 4-KB page size, mapped
       onto a physical memory of 64 frames.
         a. How many bits are required in the logical address?
         b. How many bits are required in the physical address?
   5. A system uses 4 page frames for storing process pages in main memory.
       Assume that all the page frames are initially empty. Find the number of
       page faults, hit ratio, and miss ratio for the Optimal page replacement
       algorithm while processing the page reference string given below.
       1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6.
       Under what circumstances do page faults occur?
   11. Given memory partitions of 500 KB, 100 KB, 200 KB, 300 KB, and 600 KB (in
       order), how would each of the first-fit, best-fit, and worst-fit algorithms place
       processes of 212 KB, 417 KB, 112 KB and 426 KB (in order)? Which algorithm
       makes the most efficient use of memory?
   15. A system uses 3 page frames for storing process pages in main memory
       and uses the optimal page replacement policy. Assume that all three
       page frames are initially empty. What is the total number of page
       faults that will occur while processing the page reference string given
       below? Also calculate the hit ratio and miss ratio. 4, 7, 6, 1, 7, 6,
       1, 2, 7, 2.
   16. Suppose that a disk drive has 5,000 cylinders, numbered 0 to 4,999. The
       drive is currently serving a request at cylinder 2,150, and the
       previous request was at cylinder 1,805. The queue of pending requests,
       in FIFO order, is: 2,069, 1,212, 2,296, 2,800, 544, 1,618, 356, 1,523,
       4,965, 3,681. Starting from the current head position, what is the
       total distance (in cylinders) that the disk arm moves to satisfy all
       the pending requests for each of the following disk-scheduling
       algorithms? FCFS, SSTF, SCAN, LOOK, C-SCAN, and C-LOOK.
   17. Suppose the following disk request sequence (track numbers) for a disk with 100
       tracks is given: 45, 20, 90, 10, 50, 60, 80, 25, and 70. Assume that the initial
       position of the R/W head is on track 50. Find out the additional distance that will
       be traversed by the R/W head when the Shortest Seek Time First (SSTF) algorithm
       is used compared to the SCAN (Elevator) algorithm (assuming that SCAN
       algorithm moves towards 100 when it starts execution).
   18. Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41,
       122, 14, 124, 65, 67. The FCFS scheduling algorithm is used. The head is initially at
       cylinder number 53. The cylinders are numbered from 0 to 199. Calculate the
       total head movement (in number of cylinders) incurred while servicing these
       requests.
   19. Suppose the following disk request sequence (track numbers) for a disk
       with 200 tracks is given: 82, 170, 43, 140, 24, 16, 190. Assume that
       the initial position of the R/W head is on track 50. Calculate the seek
       time for SSTF, SCAN, and LOOK.
   20. Consider a disk queue with the following requests for I/O to blocks on
       cylinders 30, 70, 115, 130, 110, 80, 20, 25 (assume the disk head is at
       90). Draw the FCFS, SSTF, SCAN, and LOOK schedules and also determine
       how many times the disk head changes its direction for each of the
       above-mentioned scheduling techniques.
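Worked sketch for assignment questions 3 and 4: the number of address bits is the base-2 logarithm of the size of the address space, so a logical space of 64 pages x 1,024 words needs log2(65,536) = 16 bits, while 32 frames x 1,024 words need log2(32,768) = 15 bits. A minimal Python version (address_bits is an illustrative name):

    import math

    # Address bits = log2(number of units * size of each unit).
    def address_bits(num_units, unit_size):
        return int(math.log2(num_units * unit_size))

    # Question 3: 64 pages of 1,024 words, 32 frames.
    print(address_bits(64, 1024))    # logical address: 16 bits
    print(address_bits(32, 1024))    # physical address: 15 bits

    # Question 4: 256 pages with a 4-KB page size, 64 frames.
    print(address_bits(256, 4096))   # logical address: 20 bits
    print(address_bits(64, 4096))    # physical address: 18 bits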