Threads
1. Single Level Thread vs. Multilevel Thread
Feature | Single Level Thread | Multilevel Thread
Definition | Only one level of threads (user or kernel) | Supports multiple levels (user and kernel)
Control | Controlled by user-level library or kernel | Combination of user-level and kernel-level
Flexibility | Less flexible | More flexible and efficient
Example | Simple user-thread model | Solaris, Windows threads (hybrid model)
2. Thread vs. Process
Feature | Thread | Process
Definition | Lightweight subprocess (shares memory) | Independent unit of execution
Memory | Shares memory with peer threads | Has its own memory and resources
Overhead | Lower | Higher
Communication | Easier (shared memory) | Complex (IPC needed)
Dependency | Threads within same process | Processes are independent
3. User Thread vs. Kernel Thread
Feature | User Thread | Kernel Thread
Management | By user-level library | Managed by OS kernel
Speed | Faster (no kernel involvement) | Slower (more overhead)
Blocking | One blocking thread may block all | Only the blocked thread is paused
Portability | More portable | Less portable
4. Multithreading Models
● Many-to-One: Many user threads mapped to one kernel thread. Fast but one block
affects all.
● One-to-One: Each user thread maps to one kernel thread. Good concurrency but
more resource-heavy.
● Many-to-Many: Many user threads mapped to many kernel threads. Flexible and
scalable.
● Two-level Model: Mix of many-to-many with some one-to-one mappings (e.g.,
Solaris).
5. Short Notes
i. Pthread (POSIX Thread)
● Standard for thread creation and synchronization in UNIX.
● Portable and used in Linux systems.
● Functions: pthread_create, pthread_join, etc.
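The create-then-join pattern of pthread_create and pthread_join appears in most thread APIs. A minimal sketch of the same pattern using Python's threading module (which is itself built on pthreads on Linux); the worker function and its arguments are made up for illustration:

```python
import threading

results = []

def worker(n):
    # Thread body: each thread computes and records one value.
    results.append(n * n)

# Analogue of pthread_create: one thread per argument
threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()

# Analogue of pthread_join: wait for every thread to finish
for t in threads:
    t.join()

print(sorted(results))  # [0, 1, 4, 9]
```

Joining before reading `results` matters: without it, the main thread could observe the list before the workers have finished writing to it.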
ii. Linux Thread
● In Linux, threads are implemented using the clone() system call.
● Treated similarly to processes (share resources like memory, file descriptors).
● Known as “lightweight processes.”
6. Solaris 2 Threads and Java Thread States
Solaris 2 Threads:
● Three-level model: User threads, kernel threads, and lightweight processes (LWP).
● Allows good flexibility in thread management.
Java Thread States:
1. New – Created but not started
2. Runnable – Ready to run
3. Blocked – Waiting to acquire lock
4. Waiting – Waiting indefinitely for another thread
5. Timed Waiting – Waiting for a specific time
6. Terminated – Execution finished
Deadlock
1. What is Deadlock?
● A situation where a set of processes is blocked because each process holds a
resource while waiting for a resource held by another process in the set.
Example:
● P1 holds R1 and waits for R2.
● P2 holds R2 and waits for R1.
2. Necessary Conditions for Deadlock (Coffman conditions)
1. Mutual Exclusion – Only one process at a time can use a resource.
2. Hold and Wait – A process holding resources may request more.
3. No Preemption – Resources cannot be forcibly taken.
4. Circular Wait – Set of processes are waiting on each other in a cycle.
3. Deadlock Avoidance Algorithm
● Ensures the system will never enter a deadlock state.
● Banker’s Algorithm:
○ Works like a bank allocating resources only if it leads to a safe state.
4. Deadlock Detection Algorithm
● Allows the system to enter a deadlock state, then detects it later using:
○ Wait-for Graph for single instance.
○ Resource Allocation Matrix for multiple instances.
5. Short Note: Resource Allocation Graph (RAG)
● Graph representation of processes and resources.
● Edges:
○ Request edge (P → R)
○ Assignment edge (R → P)
● Cycle in RAG implies potential deadlock.
Memory Management System
1. Hardware Address Protection
● Base Register: Holds the smallest legal physical memory address.
● Limit Register: Specifies the range of logical addresses.
● Prevents processes from accessing memory outside their range.
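The base/limit check can be sketched in a few lines; the register values below are arbitrary example numbers, not from the notes:

```python
def translate(logical_addr, base, limit):
    """Base/limit protection: a logical address is legal only if it is
    below the limit register; the physical address is base + logical."""
    if not (0 <= logical_addr < limit):
        # Real hardware would trap to the OS here
        raise MemoryError("addressing trap: outside process range")
    return base + logical_addr

print(translate(100, base=3000, limit=500))  # 3100
```

Any address at or beyond the limit (e.g. `translate(600, 3000, 500)`) raises instead of silently touching another process's memory.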
2. Logical vs Physical Address Space
Type | Description
Logical Address | Generated by CPU (virtual address)
Physical Address | Actual location in RAM
3. Swapping & Schematic View
Moving processes between RAM and disk to manage memory.
Schematic shows process in memory, then swapped out to disk, and back in.
4. Dynamic Storage Allocation
Strategy | Description
First Fit | Allocate the first block large enough
Best Fit | Allocate the smallest sufficient block
Worst Fit | Allocate the largest available block
5. Paging
● Divides memory into fixed-size pages (logical) and frames (physical).
● Eliminates external fragmentation.
Paging Example:
● Page size = 4 KB (2^12 bytes), logical address = 13 bits
○ Offset: lower 12 bits
○ Page number: remaining higher bit(s)
Paging Hardware:
● Page Table: Maps pages to frames.
● TLB: Cache for page table entries for faster access.
6. Address Translation Scheme
● Logical Address → (Page Number, Offset)
● Page Table → Maps to Frame Number
● Physical Address = (Frame Number × Page Size) + Offset
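The translation scheme above can be sketched directly; the page table contents here are a hypothetical example, using the 4 KB page size from the paging example:

```python
PAGE_SIZE = 4096  # 4 KB pages -> 12-bit offset

# Hypothetical page table: page number -> frame number
page_table = {0: 5, 1: 2}

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE    # higher bits
    offset = logical_addr % PAGE_SIZE   # lower 12 bits
    frame = page_table[page]            # page table lookup
    return frame * PAGE_SIZE + offset   # frame number combined with offset

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 2*4096 + 4 = 8196
```

A TLB would simply cache recent `page -> frame` pairs so most lookups skip the page-table step.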
Deadlock Problems
1. Resource Allocation Graph (RAG) Math
● Use a directed graph:
○ Processes (circles), Resources (squares)
○ Request edge: Process → Resource
○ Assignment edge: Resource → Process
● Deadlock detection: If there’s a cycle, a deadlock may exist (certainly exists if each
resource has one instance).
📝 Approach:
● Draw the RAG
● Check if a cycle exists
● If yes → deadlock is possible
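The cycle check on a wait-for graph is a standard DFS with back-edge detection; a minimal sketch, using the P1/P2 example from the deadlock definition:

```python
def has_cycle(graph):
    """DFS cycle detection on a directed wait-for graph.

    graph maps each process to the list of processes it waits for.
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}

    def dfs(n):
        color[n] = GRAY                      # on the current DFS path
        for m in graph.get(n, []):
            if color.get(m, WHITE) == GRAY:  # back edge -> cycle
                return True
            if color.get(m, WHITE) == WHITE and dfs(m):
                return True
        color[n] = BLACK                     # fully explored
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# P1 waits for P2, P2 waits for P1 -> cycle -> deadlock (single-instance resources)
print(has_cycle({"P1": ["P2"], "P2": ["P1"]}))  # True
print(has_cycle({"P1": ["P2"], "P2": []}))      # False
```

With single-instance resources the cycle is a deadlock; with multiple instances it only signals that one may exist.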
2. Banker’s Algorithm – Safe State Check
Given:
● Total resources: A=10, B=6, C=9, D=6
● Assume the snapshot includes Allocation, Max, and Available matrices.
📝 Approach:
1. Calculate Need = Max - Allocation
2. Use Available to simulate process completion
3. If all processes can finish one-by-one, it’s in a safe state
4. Otherwise, unsafe
✅ Make sure to go step by step with Work and Finish arrays.
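The Work/Finish loop can be sketched as below. Since the original snapshot is not reproduced in these notes, the Allocation, Max, and Available values here are made-up placeholders (a small 3-process, 4-resource example), not the exam data:

```python
def is_safe(available, allocation, maximum):
    """Banker's safety check: find an order in which every process
    can obtain its remaining Need and finish."""
    n = len(allocation)
    need = [[m - a for m, a in zip(maximum[i], allocation[i])] for i in range(n)]
    work = list(available)          # Work starts as Available
    finish = [False] * n            # Finish array
    order = []
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # Process i can run to completion; it releases its allocation
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                order.append(i)
                progressed = True
    return all(finish), order

# Hypothetical snapshot (resources A, B, C, D)
allocation = [[1, 0, 2, 1], [2, 1, 0, 0], [0, 2, 1, 2]]
maximum    = [[3, 2, 4, 2], [3, 2, 2, 1], [2, 3, 3, 4]]
available  = [2, 1, 3, 1]

safe, order = is_safe(available, allocation, maximum)
print(safe, order)  # True [1, 0, 2]  -> safe sequence P1, P0, P2
```

If the loop stalls with some Finish entry still False, the state is unsafe.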
💾 Memory Management System Problems
1. Best-Fit & Next-Fit Allocation
Given:
● Memory blocks: [100, 180, 120, 130, 300] KB
● Processes: [110, 90, 200, 140] KB
📝 Best-Fit:
● Choose the smallest block that still fits
● Scan every remaining block on each allocation (no pre-sorting needed; block sizes shrink as processes are placed)
📝 Next-Fit:
● Like first-fit but starts from where the last allocation ended
➡️ Allocate each process, track fragmentation, and show final memory state.
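Both strategies can be sketched on the given data. Each function returns which block each process landed in (None = unallocated) and the leftover space per block; the convention that a block's remainder stays usable is an assumption of this sketch:

```python
def best_fit(blocks, procs):
    blocks = blocks[:]                 # remaining size of each block
    placement = []
    for p in procs:
        candidates = [i for i, b in enumerate(blocks) if b >= p]
        if not candidates:
            placement.append(None)     # external fragmentation: no block fits
            continue
        i = min(candidates, key=lambda i: blocks[i])  # smallest sufficient block
        blocks[i] -= p
        placement.append(i)
    return placement, blocks

def next_fit(blocks, procs):
    blocks = blocks[:]
    placement, start = [], 0
    for p in procs:
        placed = None
        for k in range(len(blocks)):   # circular scan from last position
            i = (start + k) % len(blocks)
            if blocks[i] >= p:
                blocks[i] -= p
                placed, start = i, i   # resume here on the next allocation
                break
        placement.append(placed)
    return placement, blocks

print(best_fit([100, 180, 120, 130, 300], [110, 90, 200, 140]))
print(next_fit([100, 180, 120, 130, 300], [110, 90, 200, 140]))
```

On this input, best-fit places every process, while next-fit leaves the 140 KB process unallocated even though 130 KB and 100 KB holes remain.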
2. CPU Utilization with I/O Wait
Given:
● 4 cores, 16 GB RAM
● Each process needs 2 GB ⇒ max 8 processes
● 0.3 chance of I/O wait (⇒ 0.7 active)
📝 Formula:
Utilization/core = 1 - (p^n)
Where:
● p = probability of process in I/O = 0.3
● n = processes per core = 8 / 4 = 2
➡️ Plug in values:
● Per core = 1 - (0.3)^2 = 1 - 0.09 = 0.91 (91%)
● Total system = 0.91 × 4 = 3.64 cores active (or 91% overall)
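The calculation above, as a one-liner, using the formula's assumption that a core idles only when every process on it is in I/O wait simultaneously:

```python
def cpu_utilization(io_wait_prob, procs_per_core):
    # All n processes on a core wait on I/O at once with probability p^n,
    # so the core is busy the rest of the time.
    return 1 - io_wait_prob ** procs_per_core

per_core = cpu_utilization(0.3, 8 // 4)  # 16 GB / 2 GB = 8 processes, 2 per core
print(per_core)      # ~0.91
print(per_core * 4)  # ~3.64 busy-core equivalents out of 4
```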
3. Memory Access Time with TLB
Given:
● Memory access = 50 ns
● TLB hit rate = 75%, TLB access = 2 ns
📝 Formula:
EMAT = Hit Rate × (TLB + Mem) + Miss Rate × (TLB + 2 × Mem)
➡️ Calculation:
● EMAT = (0.75 × (2 + 50)) + (0.25 × (2 + 2×50))
● = (0.75 × 52) + (0.25 × 102) = 39 + 25.5 = 64.5 ns
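The same calculation as code, with the hit/miss costs spelled out:

```python
def emat(mem_ns, tlb_ns, hit_rate):
    hit_time = tlb_ns + mem_ns       # TLB hit: one memory access
    miss_time = tlb_ns + 2 * mem_ns  # TLB miss: page-table access + memory access
    return hit_rate * hit_time + (1 - hit_rate) * miss_time

print(emat(mem_ns=50, tlb_ns=2, hit_rate=0.75))  # 64.5
```

Note the sensitivity to the hit rate: with a 98% hit rate the same hardware gives 53 ns, close to the raw 52 ns hit cost.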
4. Segmentation Address Translation
Segment | Base | Limit
0 | 1200 | 800
1 | 5000 | 1500
2 | 10000 | 3500
3 | 16000 | 1000
4 | 21000 | 500

Logical Address | Check | Result
(Seg 0, 200) | 200 < 800 ✔ | 1200 + 200 = 1400
(Seg 1, 1400) | 1400 < 1500 ✔ | 5000 + 1400 = 6400
(Seg 2, 3600) | 3600 > 3500 ❌ | Segmentation Fault
(Seg 4, 450) | 450 < 500 ✔ | 21000 + 450 = 21450
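The check-then-add rule is mechanical, so it is worth encoding once; this sketch uses the segment table from the problem:

```python
SEGMENTS = {  # segment -> (base, limit)
    0: (1200, 800),
    1: (5000, 1500),
    2: (10000, 3500),
    3: (16000, 1000),
    4: (21000, 500),
}

def translate(seg, offset):
    base, limit = SEGMENTS[seg]
    if offset >= limit:
        # Offset outside the segment: the MMU traps
        raise MemoryError("segmentation fault")
    return base + offset

print(translate(0, 200))   # 1400
print(translate(1, 1400))  # 6400
print(translate(4, 450))   # 21450
# translate(2, 3600) raises: 3600 exceeds limit 3500
```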
5. Page Replacement Algorithms
Reference string:
2, 4, 1, 3, 5, 2, 1, 4, 3, 5, 2, 1, 4
Number of frames = 4
📝 i. FIFO (First-In, First-Out)
● Replace the oldest page when full
➡️ Track page queue and count how many new pages entered
📝 ii. LRU (Least Recently Used)
● Replace the page that was least recently used
➡️ Maintain a usage history to track which page to evict
iii. Which Performs Better?
● Count the total page faults for each algorithm
● LRU usually gives fewer faults on workloads with temporal locality, but not always: this reference string cycles through more distinct pages than fit in the four frames, so FIFO (8 faults) actually beats LRU (12 faults) here
● Justify the answer with the fault counts from the two traces
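Both traces can be automated; a sketch that counts faults for the given reference string with four frames (for this cyclic pattern, FIFO happens to come out ahead):

```python
from collections import OrderedDict

REF = [2, 4, 1, 3, 5, 2, 1, 4, 3, 5, 2, 1, 4]

def fifo_faults(ref, frames):
    mem, queue, faults = set(), [], 0
    for p in ref:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(queue.pop(0))  # evict the oldest arrival
            mem.add(p)
            queue.append(p)
        # On a hit, FIFO does nothing: arrival order is unchanged
    return faults

def lru_faults(ref, frames):
    mem = OrderedDict()  # insertion order tracks recency, oldest first
    faults = 0
    for p in ref:
        if p in mem:
            mem.move_to_end(p)           # refresh recency on a hit
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)  # evict the least recently used
            mem[p] = True
    return faults

print(fifo_faults(REF, 4), lru_faults(REF, 4))  # 8 12
```

Swapping in a reference string with strong temporal locality (repeated hot pages) flips the result in LRU's favor, which is the behavior the "usually better" claim relies on.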