
1. Binary Semaphore, Counting Semaphore, Race Condition, and Readers-Writers Problem
A binary semaphore is a synchronization primitive that can only take two values: 0
(locked/unavailable) or 1 (unlocked/available). It is essentially a mutex (mutual exclusion lock) used to
ensure that only one process or thread can access a critical section or shared resource at a time.
Operations are typically wait() (block if the value is 0, otherwise set it to 0) and signal() (set the value back to 1, waking one blocked process if any).
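As a minimal sketch (assuming POSIX semaphores; the shared counter and function name are illustrative, not from the original):

    #include <semaphore.h>

    sem_t mutex;            /* binary semaphore: value is only ever 0 or 1 */
    int shared_counter = 0; /* the protected shared resource (illustrative) */

    void enter_and_update(void) {
        sem_wait(&mutex);   /* wait(): blocks until value > 0, then decrements to 0 */
        shared_counter++;   /* critical section: at most one thread here at a time */
        sem_post(&mutex);   /* signal(): sets value back to 1, waking one waiter */
    }

    /* at startup: sem_init(&mutex, 0, 1);  -- initial value 1 = unlocked */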
A counting semaphore (or general semaphore) can take any non-negative integer value, representing
the number of available resources. It is used to control access to a finite number of identical resources
(e.g., a pool of connections). The wait() operation decrements the count if >0 (else blocks), and signal()
increments it, potentially waking up waiting processes.
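A similar sketch for a counting semaphore guarding a pool of identical resources (the pool size and the connection placeholder are assumptions for illustration):

    #include <semaphore.h>

    #define POOL_SIZE 4     /* illustrative: 4 identical connections */

    sem_t pool;             /* counting semaphore, initialized to POOL_SIZE */

    void use_connection(void) {
        sem_wait(&pool);    /* decrement: blocks once all 4 slots are taken */
        /* ... acquire and use one connection from the pool ... */
        sem_post(&pool);    /* increment: frees a slot, may wake a waiter */
    }

    /* at startup: sem_init(&pool, 0, POOL_SIZE); */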
A race condition occurs when multiple processes or threads access and manipulate shared data
concurrently, and the final outcome depends on the unpredictable timing or order of their execution.
This can lead to incorrect or inconsistent results, such as data corruption or unexpected behavior. For
example, two threads incrementing a shared counter without synchronization might result in lost
updates if they read the same value before writing.
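The lost-update scenario can be demonstrated with a small pthreads program (the iteration count is arbitrary; compile with -pthread):

    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;                 /* shared, deliberately unsynchronized */

    void *increment(void *arg) {
        for (int i = 0; i < 1000000; i++)
            counter++;                /* read-modify-write: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Expected 2000000, but lost updates typically yield less. */
        printf("counter = %ld\n", counter);
        return 0;
    }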
The Readers-Writers problem in process synchronization involves managing concurrent access to a
shared resource (e.g., a database or file) where multiple "readers" (processes that only read data) can
access it simultaneously without issues, but "writers" (processes that modify data) require exclusive
access to prevent inconsistencies. The challenge is to ensure that writers are not starved (left waiting endlessly while readers keep arriving) and that readers are not starved (blocked by a steady stream of writers), all while avoiding race conditions.
There are two common variants:
First Readers-Writers Problem (readers have priority): Readers can enter if no writer is active or
waiting. Writers wait until no readers are active. This may starve writers if readers keep arriving.
Second Readers-Writers Problem (writers have priority): Writers can enter if no readers or other
writers are active, and pending writers block new readers. This may starve readers.
A solution uses semaphores:
A mutex semaphore (binary) for mutual exclusion on read_count.
A wrt semaphore (binary) for writer access.
read_count tracks active readers.
Pseudocode outline:

    Reader:
        wait(mutex);
        read_count++;
        if (read_count == 1) wait(wrt);    // first reader locks out writers
        signal(mutex);
        // ... read ...
        wait(mutex);
        read_count--;
        if (read_count == 0) signal(wrt);  // last reader re-admits writers
        signal(mutex);

    Writer:
        wait(wrt);
        // ... write ...
        signal(wrt);
This ensures multiple readers can proceed together, but writers have exclusive access.
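For illustration, a compilable C version of this outline using POSIX semaphores (the thread counts and the printf placeholders standing in for the actual read/write work are assumptions):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t mutex, wrt;        /* both binary: mutex guards read_count, wrt gates writers */
    int read_count = 0;      /* number of active readers */

    void *reader(void *arg) {
        sem_wait(&mutex);
        if (++read_count == 1) sem_wait(&wrt);  /* first reader blocks writers */
        sem_post(&mutex);

        printf("reader %ld reading\n", (long)arg);  /* ... read shared data ... */

        sem_wait(&mutex);
        if (--read_count == 0) sem_post(&wrt);  /* last reader admits writers */
        sem_post(&mutex);
        return NULL;
    }

    void *writer(void *arg) {
        sem_wait(&wrt);      /* exclusive access */
        printf("writer writing\n");              /* ... modify shared data ... */
        sem_post(&wrt);
        return NULL;
    }

    int main(void) {
        pthread_t r[3], w;
        sem_init(&mutex, 0, 1);
        sem_init(&wrt, 0, 1);
        for (long i = 0; i < 3; i++) pthread_create(&r[i], NULL, reader, (void *)i);
        pthread_create(&w, NULL, writer, NULL);
        for (int i = 0; i < 3; i++) pthread_join(r[i], NULL);
        pthread_join(w, NULL);
        return 0;
    }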

2a. Problems with Single-Level Page Tables and Elimination Using Multilevel Page Tables
The main problem with using single-level page tables for very large virtual address spaces (VAS) is that
the page table becomes excessively large and must be stored contiguously in memory. For example, in
a 64-bit VAS with 4KB pages, there are 2^52 pages (since page offset is 12 bits, virtual page number is
52 bits), requiring a page table with 2^52 entries. Each entry might be 8 bytes, leading to a page table
size of about 32 petabytes, which is impractical to allocate or store in main memory. This wastes
memory even for sparsely used VAS and increases overhead for context switches.
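To make the arithmetic concrete, a small C snippet computing the single-level table size under the parameters above (64-bit VAS, 4 KB pages, 8-byte entries):

    #include <stdio.h>

    int main(void) {
        /* Parameters from the text: 64-bit VAS, 4 KB pages, 8-byte PTEs */
        unsigned long long vpn_bits = 64 - 12;          /* 12-bit offset for 4 KB pages */
        unsigned long long entries  = 1ULL << vpn_bits; /* 2^52 page-table entries */
        unsigned long long bytes    = entries * 8;      /* 8 bytes per entry = 2^55 B */
        printf("entries = 2^%llu, table size = %llu PiB\n",
               vpn_bits, bytes >> 50);                  /* 2^55 B = 32 PiB */
        return 0;
    }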
This problem is eliminated using multilevel page tables (e.g., two-level or three-level), which divide the
page table into a hierarchy of smaller tables. Only the top-level table and actively used lower-level
tables are allocated, reducing memory usage for sparse address spaces. The virtual address is split into
multiple indices: one for each level, plus the page offset. If a lower-level table isn't needed (e.g., for
unused address ranges), it isn't created, saving space.
Example with Two-Level Page Tables:
Consider a 32-bit VAS with 4KB pages (page offset: 12 bits, so virtual page number: 20 bits). In a
single-level table, we'd need 2^20 entries (about 4MB if each is 4 bytes).
In two-level paging:
Split the 20-bit VPN into 10 bits for outer page table (1024 entries) and 10 bits for inner page table
(1024 entries each).
Virtual address: [10 bits outer index | 10 bits inner index | 12 bits offset].
The outer table points to inner tables (only allocating inner tables for used regions).
If only 1/1024 of the VAS is used, we allocate one outer table (4KB) + one inner table (4KB), totaling
~8KB instead of 4MB.
Lookup: Use outer index to find inner table pointer, then inner index to find frame number, then add
offset.
This hierarchy can extend to three or more levels for larger VAS (e.g., 64-bit systems often use four-
level paging).
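A sketch of the two-level address split described above (the virtual address value and the table layout in the final comment are illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* Split a 32-bit virtual address into the fields described above:
       [10-bit outer index | 10-bit inner index | 12-bit offset]. */
    int main(void) {
        uint32_t va = 0x00ABCDEF;                 /* illustrative virtual address */
        uint32_t outer  = (va >> 22) & 0x3FF;     /* top 10 bits: outer table index */
        uint32_t inner  = (va >> 12) & 0x3FF;     /* next 10 bits: inner table index */
        uint32_t offset =  va        & 0xFFF;     /* low 12 bits: byte within page */
        printf("outer=%u inner=%u offset=0x%X\n", outer, inner, offset);
        /* Translation: frame = inner_table[outer_table[outer]][inner];
           physical address = (frame << 12) | offset. */
        return 0;
    }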

2b. Page Faults in LRU Page Replacement


For the Least Recently Used (LRU) algorithm, we replace the page that has not been used for the
longest time. With 3 frames and the reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1,
simulate step-by-step:
Maintain a list of frames where the most recently used (MRU) page is at the end, and least recently
used (LRU) at the front.
On a hit: Move the page to the end (MRU position).
On a fault: If full, evict the front (LRU), add new page to end; if not full, just add to end.

Reference   Frames (LRU to MRU)   Hit/Fault   Evicted   Total Faults
    7       [7]                   Fault         -            1
    0       [7, 0]                Fault         -            2
    1       [7, 0, 1]             Fault         -            3
    2       [0, 1, 2]             Fault         7            4
    0       [1, 2, 0]             Hit           -            4
    3       [2, 0, 3]             Fault         1            5
    0       [2, 3, 0]             Hit           -            5
    4       [3, 0, 4]             Fault         2            6
    2       [0, 4, 2]             Fault         3            7
    3       [4, 2, 3]             Fault         0            8
    0       [2, 3, 0]             Fault         4            9
    3       [2, 0, 3]             Hit           -            9
    2       [0, 3, 2]             Hit           -            9
    1       [3, 2, 1]             Fault         0           10
    2       [3, 1, 2]             Hit           -           10
    0       [1, 2, 0]             Fault         3           11
    1       [2, 0, 1]             Hit           -           11
    7       [0, 1, 7]             Fault         2           12
    0       [1, 7, 0]             Hit           -           12
    1       [7, 0, 1]             Hit           -           12

Total page faults: 12.
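As a cross-check, a small C simulator of this LRU bookkeeping (frames ordered LRU-to-MRU, exactly as in the table) reproduces the count of 12:

    #include <stdio.h>
    #include <string.h>

    #define FRAMES 3

    int main(void) {
        int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
        int n = sizeof(ref) / sizeof(ref[0]);
        int frames[FRAMES];                 /* frames[0] = LRU, frames[FRAMES-1] = MRU */
        int used = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            int p = ref[i], hit = -1;
            for (int j = 0; j < used; j++)
                if (frames[j] == p) { hit = j; break; }
            if (hit >= 0) {
                /* Hit: shift the page to the MRU (last) position. */
                memmove(&frames[hit], &frames[hit + 1], (used - hit - 1) * sizeof(int));
                frames[used - 1] = p;
            } else {
                faults++;
                if (used == FRAMES) {       /* Fault with full frames: evict the LRU. */
                    memmove(&frames[0], &frames[1], (FRAMES - 1) * sizeof(int));
                    frames[FRAMES - 1] = p;
                } else {
                    frames[used++] = p;     /* Fault with a free frame: just append. */
                }
            }
        }
        printf("total faults = %d\n", faults); /* prints 12 for this reference string */
        return 0;
    }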

3. Advantages of Segmentation over Paging, and Calculations for the Given System
Advantages of segmentation over paging:
Variable-sized allocation: Segments are logical units (e.g., code, data, stack) of varying sizes,
reducing internal fragmentation compared to fixed-size pages.
Better protection and sharing: Protection bits per segment (e.g., read-only for code), and
segments can be shared easily (e.g., shared libraries) without page alignment issues.
Logical structure: Reflects program structure, making it easier for compilers and programmers to
manage (e.g., separate segments for functions).
Easier growth: Segments can grow dynamically without fixed page constraints.
However, segmentation can suffer from external fragmentation.
For the byte-addressable system with 24-bit logical address, 24-bit physical address, and 1KB (2^10
bytes) page size:
i. How many logical addresses can we generate?

Logical address space is 24 bits, so total addresses = 2^24 = 16,777,216 (a 16 MB address space, since the system is byte-addressable).
(Calculated as 2 raised to the power of the address bits.)
ii. How many frames can we generate?

Physical address space is 24 bits (16 MB), page/frame size is 1KB.

Number of frames = physical memory size / frame size = 2^24 / 2^10 = 2^14 = 16,384.

(Divide total physical bytes by bytes per frame.)


iii. What is the size of the instruction offset?

The offset is the bits for addressing within a page = log2(page size) = log2(2^10) = 10 bits.

(In paging, virtual address = [page number bits | offset bits].)


iv. What is the total address space needed for the physical memory and logical memory?

Logical memory (VAS): 2^24 bytes = 16 MB.

Physical memory: 2^24 bytes = 16 MB.

(Directly from the address bit lengths; both are 24 bits, so same size.)
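These four answers can be verified with a few lines of C (bit widths taken directly from the question):

    #include <stdio.h>

    int main(void) {
        int la_bits = 24, pa_bits = 24, page_bits = 10;        /* 1 KB = 2^10 bytes */
        unsigned long logical  = 1UL << la_bits;               /* i.   2^24 = 16,777,216 */
        unsigned long frames   = 1UL << (pa_bits - page_bits); /* ii.  2^14 = 16,384 */
        unsigned long physical = 1UL << pa_bits;               /* iv.  16 MB, same as logical */
        printf("logical addresses: %lu\n", logical);
        printf("frames: %lu\n", frames);
        printf("offset bits: %d\n", page_bits);                /* iii. 10 bits */
        printf("physical bytes: %lu\n", physical);
        return 0;
    }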
