OS Scheduling & Management Guide

OS PRIORITY BASED TOPICS FOR END SEM

Uploaded by shubham jha

★★★ Priority Topics

• Gantt Chart
• Monitor Semaphore
• Disk Scheduling Algorithms
o Total Distance (FCFS, SSTF, SCAN, C-SCAN, LOOK, etc.)
• Page Replacement Algorithms
o FIFO
o Optimal Page
o LRU
o NRU
o IMP
• Deadlock Management
o Deadlock Avoidance
o Banker's Algorithm
o Resource Allocation Graph
o Deadlock Prevention

★★ Priority Topics

• Disk Access and Performance


o Seek Time
o Rotational Latency
• Real-Time Scheduling
o Critical Section Problem
o Bakery Algorithm
o Inter-process Communication and Synchronization
o Long-term and Short-term Scheduling
o Average Turnaround Time
• File System
o Virtual File System
o Mounted File System
o File Directory Structure (Single-Level Tree, etc.)
o File Structure (Ext2/Ext3)
o Inodes
o Data Integrity
• Memory Management Schemes
o Free Space Allocation
o Partitioning Memory Management (Fixed/Variable)
o Segmentation and Paging
o Internal and External Fragmentation
o Binding
• Mutual Exclusion Techniques
o Semaphores
o Synchronization
• Process Scheduling
o Round Robin Scheduling
o Idle Time
o Process Starvation
o Data Transfer Rate
o Timesharing
o Convoy Effect (FCFS)
• Thrashing and Paging System
o Burst Time
o Average Turnaround Time
o Memory Access Time
• Logical and Physical File Issues
o Binding (Internal and External Fragmentation)
• Logical I/O and Device Drivers
o Logical I/O
o Soft Page Faults and Hard Page Faults
o Device Drivers
• TLB (Translation Lookaside Buffer)
• Network/Distributed OS

★ Priority Topics

• Overlay Concepts
o File Attributes
o Accessing Methods
• Caching and Buffering
o Preemptive Scheduling
o Cache Management
• File Allocation Methods
o Single-Level Tree
o DOS/UNIX Allocation Methods
• Dining Philosopher Problem
• Timesharing and Multiprogramming
o Multiprocessing
o Multitasking
• Real-Time System Overview

ONE-TIME TOPICS

Memory Management Scheme

• Buddy System
• Medium-Term Scheduler

• File Management

• File Attributes
• Accessing Methods
• Virtual File System
• Mounted File System
• File Directory Structure
• File Allocation Methods (Single-Level Tree, etc.)
• Synchronization Techniques

• Semaphores (Hardware to Semaphores)


• Mutual Exclusion

• Disk Scheduling Algorithms

• Logical I/O

• File Structure
• Ext2/Ext3

• Network/Distributed OS
★★★ Priority Topics

1. Gantt Chart:

• Definition: A Gantt Chart is a type of bar chart that represents a project schedule
over time. It is used in project management to visualize the start and finish dates of
the various elements of a project.
• Purpose: It is particularly useful for scheduling tasks, showing dependencies, and
tracking progress.
• Key Components:
o Tasks are represented as horizontal bars.
o The length of the bars shows the duration of tasks.
o The tasks are ordered chronologically on the chart.

2. Monitor Semaphore:

• Monitor: A monitor is an abstract data type that encapsulates shared data and the
procedures that operate on that data. It provides synchronization to ensure that only
one process executes a procedure at a time.
o Key Concepts:
 Only one thread can be active in a monitor at any given time.
 It uses condition variables for thread synchronization.
• Semaphore: A semaphore is a synchronization primitive that controls access to a
shared resource by multiple processes. It can either be binary (0 or 1) or counting
(with a greater range of values).
o Key Operations:
 Wait(): Decreases the semaphore value; if it is 0, the process is
blocked.
 Signal(): Increases the semaphore value; if blocked processes are
waiting, one of them is unblocked.
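The Wait()/Signal() behaviour described above can be sketched in Python. This is an illustrative implementation built on a condition variable, not the only way to do it; real programs would normally use threading.Semaphore directly.

```python
import threading

# Minimal sketch of counting-semaphore semantics using a condition variable.
class CountingSemaphore:
    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()

    def wait(self):                      # also written P() or acquire()
        with self._cond:
            while self._value == 0:      # block while no permits remain
                self._cond.wait()
            self._value -= 1

    def signal(self):                    # also written V() or release()
        with self._cond:
            self._value += 1
            self._cond.notify()          # wake one blocked waiter, if any

sem = CountingSemaphore(2)
sem.wait()
sem.wait()      # value is now 0; a third wait() would block
sem.signal()    # value is back to 1
```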

3. Disk Scheduling Algorithms:

These algorithms determine the order in which disk I/O requests should be serviced to
minimize the total distance the disk arm travels, thus improving performance.

a) Total Distance (FCFS, SSTF, SCAN, C-SCAN, LOOK, etc.):

• FCFS (First-Come, First-Served): The simplest disk scheduling algorithm. Requests are processed in the order in which they arrive.
o Drawback: This algorithm can cause the disk arm to travel a long distance,
resulting in inefficient disk usage.
• SSTF (Shortest Seek Time First): It selects the request that is closest to the current
position of the disk head, minimizing the seek time for each request.
o Drawback: Can lead to starvation, where some requests may never get
serviced if there are always closer requests to be processed.
• SCAN (Elevator Algorithm): The disk arm moves in one direction servicing
requests until it reaches the end, and then reverses direction.
o Advantages: It reduces the total travel distance compared to FCFS.
o Drawback: If requests are located at the ends, the total distance can still be
significant.
• C-SCAN (Circular SCAN): Similar to SCAN, but when the disk arm reaches the
end, it immediately returns to the other end without servicing requests in reverse
direction.
o Advantages: Provides a more uniform wait time compared to SCAN.
• LOOK: This is a variant of SCAN, where the disk arm reverses direction once the
furthest request is serviced, but does not go to the end of the disk.
o Advantages: More efficient than SCAN in some scenarios, as the arm doesn’t
travel all the way to the end.
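A total-distance question of the kind listed above can be answered by simulating head movement. The sketch below compares FCFS and SSTF on the classic 8-request textbook queue with the head starting at cylinder 53 (the queue and start position are illustrative, not from this guide).

```python
# Total head movement for FCFS vs SSTF on a sample request queue.
def fcfs_distance(start, requests):
    total, pos = 0, start
    for r in requests:                   # service in arrival order
        total += abs(r - pos)
        pos = r
    return total

def sstf_distance(start, requests):
    total, pos, pending = 0, start, list(requests)
    while pending:                       # always pick the closest request
        nearest = min(pending, key=lambda r: abs(r - pos))
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs_distance(53, queue))   # 640
print(sstf_distance(53, queue))   # 236
```

SSTF travels far less here (236 vs 640 cylinders), which illustrates the FCFS drawback noted above.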

4. Page Replacement Algorithms:

Page replacement algorithms are used in operating systems when the memory is full, and new
pages need to be loaded into memory.

a) FIFO (First-In-First-Out):

• Description: The oldest page (the page that has been in memory the longest) is
replaced when a new page needs to be loaded into memory.
o Drawback: FIFO can lead to poor performance since the oldest page may still
be frequently used.

b) Optimal Page:

• Description: The optimal page replacement algorithm replaces the page that will not
be used for the longest period in the future. It requires knowledge of future memory
accesses, which is impractical in real systems.
o Advantage: This is the best possible page replacement strategy in terms of
minimizing page faults.

c) LRU (Least Recently Used):

• Description: Replaces the page that has not been used for the longest time. LRU tries
to approximate the optimal page replacement by assuming that pages that were
recently used will likely be used again.
o Implementation: Can be implemented using a counter or stack-based
methods.
o Advantage: Often performs better than FIFO and is closer to the optimal
strategy.

d) MFU (Most Frequently Used):


• Description: This algorithm replaces the page that has been used the most frequently,
based on the assumption that the most frequently used pages are less likely to be
useful in the future.
o Drawback: It is not ideal for many real-world scenarios, as frequently used
pages are often important to keep in memory.
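The fault counts of these algorithms can be compared by replaying a reference string. The sketch below counts page faults for FIFO and LRU with 3 frames on an illustrative reference string (the values are an example, not from this guide).

```python
from collections import OrderedDict

def fifo_faults(refs, frames):
    memory, faults = [], 0
    for p in refs:
        if p not in memory:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)                # evict the oldest page
            memory.append(p)
    return faults

def lru_faults(refs, frames):
    memory, faults = OrderedDict(), 0
    for p in refs:
        if p in memory:
            memory.move_to_end(p)            # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)   # evict least recently used
            memory[p] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(fifo_faults(refs, 3))   # 10
print(lru_faults(refs, 3))    # 9
```

On this string LRU faults less often than FIFO, matching the claim above that LRU is usually closer to the optimal strategy.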

5. Deadlock Management:

Deadlock occurs when a set of processes are blocked because each process is holding a
resource and waiting for another resource held by another process.

a) Deadlock Avoidance:

• Description: Deadlock avoidance involves ensuring that the system will never enter a
deadlock state. This is done by examining the current state of the system to determine
if the allocation of resources will result in a safe or unsafe state.
o Algorithm: The Banker’s Algorithm is one such method for deadlock
avoidance, which checks if the system is in a safe state before allocating
resources.

b) Banker's Algorithm:

• Description: A resource allocation and deadlock avoidance algorithm that tests whether resource allocation requests can be granted safely, ensuring that all processes can eventually complete without causing deadlock.
o Working: It checks if there is a "safe sequence" of process execution that
ensures no deadlock occurs.
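The safe-sequence check can be sketched as follows. The matrices are the classic 5-process, 3-resource example found in OS textbooks, used here for illustration.

```python
# Banker's safety check: return a safe sequence if one exists, else None.
def is_safe(available, allocation, maximum):
    n, m = len(allocation), len(available)
    need = [[maximum[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work = list(available)               # resources currently free
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pretend process i runs to completion and releases everything.
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None                  # no safe sequence: state is unsafe
    return sequence

available  = [3, 3, 2]
allocation = [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]]
maximum    = [[7,5,3],[3,2,2],[9,0,2],[2,2,2],[4,3,3]]
print(is_safe(available, allocation, maximum))   # [1, 3, 4, 0, 2]
```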

c) Resource Allocation Graph:

• Description: This graph represents processes and resources as nodes. Edges represent
resource requests and allocations. A cycle in the graph indicates that deadlock may
occur.
o Deadlock Prevention: If a cycle is detected, the system takes steps to break
the cycle and prevent deadlock.

d) Deadlock Prevention:

• Description: In deadlock prevention, the system is designed to prevent one of the four
necessary conditions for deadlock from occurring.
o Conditions: The four necessary conditions for deadlock are mutual exclusion,
hold and wait, no preemption, and circular wait.
o Approach: One of these conditions is eliminated to ensure deadlock does not
occur, such as requiring processes to request all resources at once (avoiding
hold and wait).
★★ Priority Topics
1. Disk Access and Performance:

a) Seek Time:

• Definition: Seek time is the time taken for the disk’s read/write head to move to the
track that contains the required data. The faster this time, the better the disk
performance.
• Impact: Seek time is crucial for performance since it contributes to the overall time
for accessing data.
• Optimization: Techniques like SSTF (Shortest Seek Time First) and LOOK
Scheduling algorithms can minimize seek time by prioritizing the closest requests to
the current head position.
• Average Seek Time Calculation: It’s the average of the seek times for all the
requests to the disk.

b) Rotational Latency:

• Definition: Rotational latency is the delay waiting for the correct disk sector to rotate
into place beneath the read/write head.
• Formula: On average, rotational latency is calculated as:
Average Latency=12×1RPM×60\text{Average Latency} = \frac{1}{2} \times
\frac{1}{\text{RPM}} \times 60Average Latency=21×RPM1×60
• Optimization: Disks with higher RPM (Revolutions Per Minute) offer lower
rotational latency. RAID configurations can also distribute the workload to reduce
latency.
• Impact on Performance: This is a critical factor in disk I/O operations, and it affects
applications that rely on large amounts of data access.

2. Real-Time Scheduling:

a) Critical Section Problem:

• Definition: The critical section problem arises when multiple processes or threads
attempt to access shared resources, and it’s essential to prevent conflicts or race
conditions.
• Solution: Mutual exclusion mechanisms like semaphores, locks, or monitors
prevent simultaneous access, ensuring that only one process can access the resource at
a time.
• Deadlock Avoidance: Deadlock can occur if mutual exclusion is not handled
properly. Techniques such as wait-for graph analysis or using timeout mechanisms
can mitigate this risk.
• Fairness: Solutions must ensure that no process is starved (i.e., waits indefinitely for
a resource).

b) Bakery Algorithm:
• Definition: Lamport's Bakery Algorithm solves the critical section problem for n processes using only ordinary shared-memory reads and writes. Each process chooses a number (like taking a ticket at a bakery)
and enters the critical section in numerical order.
• Working: Each process waits for its turn based on the number assigned, ensuring
mutual exclusion in a fair and orderly fashion.
• Advantages: It guarantees fairness and prevents starvation, ensuring that every
process eventually gets access to the critical section.
• Use Cases: Suitable for scenarios with low contention and relatively low numbers of
processes.
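The ticket-ordering rule can be sketched without real threads. This shows only the numbering and tie-breaking logic (ticket first, then process id), not the full concurrent doorway protocol; function names are illustrative.

```python
# Bakery ordering rule: take a ticket one greater than the current maximum;
# processes enter the critical section in (ticket, pid) order.
def take_ticket(tickets, pid):
    tickets[pid] = max(tickets) + 1
    return tickets[pid]

def entry_order(tickets):
    # pid breaks ties when two processes picked the same number.
    holders = [(t, pid) for pid, t in enumerate(tickets) if t > 0]
    return [pid for _, pid in sorted(holders)]

tickets = [0, 0, 0, 0]          # 0 means "not trying to enter"
take_ticket(tickets, 2)         # process 2 gets ticket 1
take_ticket(tickets, 0)         # process 0 gets ticket 2
take_ticket(tickets, 3)         # process 3 gets ticket 3
print(entry_order(tickets))     # [2, 0, 3]
```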

c) Inter-process Communication (IPC) and Synchronization:

• Definition: IPC is a mechanism that allows processes to communicate and synchronize their actions. It is necessary when processes need to share data or coordinate their activities.
• Mechanisms: Common methods include shared memory, message passing, and
pipes. Synchronization tools like mutexes, semaphores, and condition variables are
used to coordinate access to shared resources.
• Types:
o Message Passing: One process sends a message to another, which then reads
the message.
o Shared Memory: Multiple processes share a common memory space, with
careful synchronization to avoid conflicts.

d) Long-term and Short-term Scheduling:

• Long-term Scheduling: Responsible for deciding which processes are admitted into
the system from the job queue. It balances the mix of processes in the system,
ensuring fair CPU time distribution.
• Short-term Scheduling: Involves deciding which process in the ready queue gets
CPU time next. It operates at much higher frequency than long-term scheduling.
• Interaction: Long-term scheduling decides the overall system mix, while short-term
scheduling focuses on immediate CPU allocation to processes.

e) Average Turnaround Time:

• Definition: The average turnaround time is the total time taken from the arrival of a
process to its completion. It includes waiting time, execution time, and time spent in
the system.
• Formula: Turnaround Time = Completion Time − Arrival Time
• Objective: In process scheduling, one goal is to minimize the average turnaround
time to increase system throughput and responsiveness.
• Optimization: Scheduling algorithms like Shortest Job First (SJF) can help
minimize this time.
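A worked example makes the formula concrete. The sketch below computes turnaround times under FCFS for three illustrative (arrival, burst) pairs.

```python
# Average turnaround time under FCFS for sample (arrival, burst) processes.
processes = [(0, 5), (1, 3), (2, 8)]

clock, turnarounds = 0, []
for arrival, burst in processes:          # FCFS: run in arrival order
    clock = max(clock, arrival) + burst   # completion time of this process
    turnarounds.append(clock - arrival)   # turnaround = completion - arrival

print(turnarounds)                        # [5, 7, 14]
print(sum(turnarounds) / len(turnarounds))   # 26/3 ≈ 8.67
```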

3. File System:
a) Virtual File System (VFS):

• Definition: VFS is an abstraction layer in the operating system that allows applications to access different file systems through a uniform interface.
• Purpose: VFS hides the complexity of interacting with different types of file systems
(e.g., FAT, NTFS, ext4) and presents a consistent API to user programs.
• Functionality: It manages file system operations like opening, reading, and writing
files in a manner that works independently of the underlying file system type.

b) Mounted File System:

• Definition: A mounted file system is a file system that is connected to the operating
system and made accessible through a directory, referred to as a mount point.
• Working: When you mount a file system, the operating system attaches the file
system to a directory in the existing file hierarchy, allowing you to access its files.
• Example: In Linux, mounting is done using commands like mount /dev/sda1
/mnt/ to attach the file system to /mnt/.

c) File Directory Structure:

• Single-Level Directory: A flat structure where all files are placed in a single
directory, making it simple but inefficient for large numbers of files.
• Hierarchical Directory: Files are organized in directories and subdirectories,
forming a tree structure. This makes it easier to manage large numbers of files.
• Benefits: A hierarchical structure allows for better organization and scalability,
improving performance as the number of files grows.

d) File Structure (Ext2/Ext3):

• Ext2: The second extended file system, known for its simplicity and reliability. It
does not support journaling, which can make recovery slower in case of failure.
• Ext3: Built on Ext2, it adds journaling capabilities, which enhance reliability by
keeping logs of file system changes, reducing the risk of corruption after crashes.

e) Inodes:

• Definition: An inode is a data structure that stores metadata about a file, excluding its
name and content. It contains information such as the file's size, permissions, and
pointers to its data blocks.
• Working: Every file in a file system has an inode, which is used to retrieve file
properties and access the data stored in blocks.
• Importance: Inodes enable efficient file management by separating file metadata
from the file name and content, providing faster file lookup and access.

4. Memory Management Schemes:

a) Free Space Allocation:


• Definition: Free space allocation manages the unused memory areas in a system,
keeping track of free and allocated blocks of memory.
• Techniques:
o Bitmaps: A bitmap is a simple structure that represents whether a memory
block is free (0) or allocated (1).
o Linked Lists: A list of free memory blocks, each block pointing to the next
free block.
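The bitmap technique can be sketched as a first-fit search, using the 0 = free / 1 = allocated convention described above (the bitmap contents are illustrative).

```python
# First-fit allocation of nblocks contiguous blocks in a free-space bitmap.
def allocate(bitmap, nblocks):
    run = 0
    for i, bit in enumerate(bitmap):
        run = run + 1 if bit == 0 else 0   # count the current free run
        if run == nblocks:                 # found enough contiguous blocks
            start = i - nblocks + 1
            for j in range(start, i + 1):
                bitmap[j] = 1              # mark them allocated
            return start
    return -1                              # no contiguous run large enough

bitmap = [1, 0, 0, 1, 0, 0, 0, 1]
print(allocate(bitmap, 3))   # 4
print(bitmap)                # [1, 0, 0, 1, 1, 1, 1, 1]
```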

b) Partitioning Memory Management (Fixed/Variable):

• Fixed Partitioning: Divides memory into fixed-size partitions. Each partition is dedicated to a process, but it may lead to internal fragmentation if a process does not fully use the allocated space.
• Variable Partitioning: Memory is allocated based on the size of the process. This
reduces internal fragmentation but can lead to external fragmentation over time.

c) Segmentation and Paging:

• Segmentation: Divides memory into variable-sized segments based on the logical divisions of a program (e.g., code, data, stack).
• Paging: Divides memory into fixed-size blocks called pages. Paging reduces external
fragmentation and is more flexible for memory allocation.

d) Internal and External Fragmentation:

• Internal Fragmentation: Occurs when allocated memory blocks are larger than
needed by the process, leaving unused space inside allocated blocks.
• External Fragmentation: Occurs when free memory is scattered throughout the
system, making it difficult to allocate large contiguous blocks of memory.

e) Binding:

• Definition: Binding is the process of associating program code and data with memory
locations. It can be done at compile-time, load-time, or runtime, depending on the
system’s memory management approach.
• Types:
o Compile-time Binding: Memory locations are determined during program
compilation.
o Load-time Binding: Memory addresses are assigned when the program is
loaded into memory.
o Execution-time Binding: Memory addresses are determined at runtime,
allowing dynamic memory allocation.

5. Mutual Exclusion Techniques:

a) Semaphores:

• Definition: A semaphore is a synchronization primitive used to control access to a shared resource by multiple processes in concurrent systems.
• Types:
o Binary Semaphore: Can only take values 0 or 1, used to implement mutual
exclusion (like a mutex).
o Counting Semaphore: Can take any non-negative integer value, used to
manage access to a resource pool (e.g., a fixed number of identical resources).
• Operations: Semaphores use two atomic operations:
o wait(): Decreases the semaphore value, blocking if the value is zero.
o signal(): Increases the semaphore value, waking up a waiting process if any.
• Example: Used in managing resource pools (like printers or buffers) to avoid race
conditions.

b) Synchronization:

• Definition: Synchronization ensures that processes or threads operate in a coordinated manner, preventing conflicts when accessing shared resources.
• Methods:
o Mutex (Mutual Exclusion): A mechanism to ensure that only one thread or
process can access a resource at a time.
o Condition Variables: Allow processes to wait until certain conditions are met
before continuing.
o Monitors: A high-level synchronization construct that combines mutual
exclusion and condition synchronization in one.
• Purpose: It is vital in preventing race conditions and ensuring that multiple processes
don’t alter shared data simultaneously in an unpredictable manner.

6. Process Scheduling:

a) Round Robin Scheduling:

• Definition: Round Robin (RR) is a pre-emptive scheduling algorithm where each process gets a fixed time slice (quantum) in which to execute before the next process is given a turn.
• Working: Each process in the ready queue is assigned a time quantum. If a process
doesn't finish in the given time, it is moved to the back of the queue.
• Advantage: Ensures fairness and responsiveness for all processes, preventing
starvation.
• Disadvantage: If the quantum is too large, it degenerates into FIFO scheduling. If it’s
too small, it leads to high overhead due to frequent context switching.
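The queue behaviour described above can be simulated directly. The sketch below runs illustrative (pid, burst) pairs, all arriving at time 0, with a quantum of 2, and returns each process's completion time.

```python
from collections import deque

# Round Robin simulation: unfinished processes go to the back of the queue.
def round_robin(bursts, quantum):
    queue = deque(bursts)                 # (pid, remaining burst) pairs
    clock, finish = 0, {}
    while queue:
        pid, remaining = queue.popleft()
        slice_ = min(quantum, remaining)
        clock += slice_
        if remaining > slice_:
            queue.append((pid, remaining - slice_))   # not done: requeue
        else:
            finish[pid] = clock                       # completion time
    return finish

print(round_robin([("A", 5), ("B", 3), ("C", 1)], quantum=2))
# {'C': 5, 'B': 8, 'A': 9}
```

Note how the short process C still waits behind the first quanta of A and B, but finishes long before it would under FCFS behind A's full burst.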

b) Idle Time:

• Definition: Idle time refers to periods when the CPU is not executing any process,
typically because the process scheduler cannot find a process to run.
• Impact: Idle time can reduce system efficiency, especially in real-time or high-
priority systems.
• Optimization: By improving scheduling algorithms and ensuring that all CPUs (in
multi-core systems) are used efficiently, idle time can be minimized.

c) Process Starvation:
• Definition: Process starvation occurs when a process is perpetually delayed from
getting the resources it needs due to other higher-priority processes.
• Cause: This happens in priority-based scheduling when low-priority processes are
never selected.
• Solution: Aging can be applied, where the priority of a process gradually increases
the longer it waits, ensuring that even low-priority processes eventually get executed.

d) Data Transfer Rate:

• Definition: The data transfer rate refers to the amount of data transferred from one
point to another per unit of time, typically measured in bytes per second (Bps).
• Significance: This is a critical metric in disk I/O, network communications, and
memory operations.
• Optimization: To improve performance, high-speed buses, faster disk interfaces (e.g.,
SSDs), and efficient data transfer protocols are used.

e) Timesharing:

• Definition: Timesharing allows multiple users to share the system resources by giving
each user a small time slice to execute their processes.
• Goal: To enable the system to appear as if each user has exclusive access to the
computer, though in reality, the CPU time is shared among all users.
• Application: Commonly used in multi-user environments such as mainframes and
networked servers.

f) Convoy Effect (FCFS):

• Definition: The convoy effect refers to the situation where a single long process holds
up the CPU, preventing shorter processes from executing, especially in First-Come-
First-Serve (FCFS) scheduling.
• Cause: This happens because FCFS does not preempt or prioritize processes.
• Impact: Leads to inefficiencies, particularly in systems where processes have varying
execution times.

7. Thrashing and Paging System:

a) Burst Time:

• Definition: Burst time refers to the amount of time a process requires on the CPU to
complete its execution.
• Impact on Scheduling: Short burst times favor algorithms like Shortest Job First
(SJF), whereas long burst times may lead to process starvation if not handled well.
• Burst Time in Paging: In the context of paging, a process with long burst times may
require more memory resources, potentially leading to more frequent page swaps.

b) Average Turnaround Time:


• Definition: Average turnaround time is the average time a process takes to complete
its execution after arriving in the system.
• Formula: Turnaround Time = Completion Time − Arrival Time
• Optimization: Scheduling algorithms like SJF or Priority Scheduling can help
minimize the average turnaround time by giving higher priority to shorter jobs.

c) Memory Access Time:

• Definition: Memory access time is the total time taken to read from or write to
memory, including the time spent in fetching data and performing operations.
• Impact on Paging: High memory access times can increase the number of page faults
in a system, affecting the overall performance.
• Optimization: Techniques like TLB (Translation Lookaside Buffer) and
prefetching can reduce memory access times by quickly retrieving frequently
accessed data.

8. Logical and Physical File Issues:

a) Binding (Internal and External Fragmentation):

• Internal Fragmentation: Occurs when memory allocated to a process is larger than the amount needed, leading to wasted space inside the allocated memory blocks.
• External Fragmentation: Happens when free memory is scattered throughout the
system, leaving small gaps that are unusable for larger processes, despite having
enough total free space.
• Binding: Binding is the process of associating a process with memory addresses, and
fragmentation occurs when memory is not efficiently managed.

9. Logical I/O and Device Drivers:

a) Logical I/O:

• Definition: Logical I/O refers to the abstraction layer through which the operating
system interacts with the hardware to perform input/output operations. It enables the
system to handle devices in a uniform way.
• Example: When reading a file from disk, the operating system abstracts the physical
details of how the data is stored and provides a logical interface to the user program.

b) Soft Page Faults and Hard Page Faults:

• Soft Page Fault: Occurs when the required page is in memory but is not currently
mapped into the process’s address space. It can be handled by adjusting the page
table.
• Hard Page Fault: Occurs when the required page is not in memory and needs to be
loaded from disk, causing a significant delay in process execution.

c) Device Drivers:

• Definition: Device drivers are software programs that allow the operating system to
interact with hardware devices such as printers, hard drives, and network interfaces.
• Functionality: They handle communication between the hardware and the operating
system, translating operating system commands into device-specific commands.

10. TLB (Translation Lookaside Buffer):

• Definition: The TLB is a cache used to store recently accessed virtual-to-physical address translations.
• Purpose: By caching the most recently used page table entries, the TLB reduces the
number of memory accesses required to perform address translation, improving
performance.
• Impact: A high hit rate in the TLB improves system performance by reducing the
need to access slower main memory.
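The TLB's effect is usually quantified as an effective access time. The sketch below assumes a TLB hit needs one memory access and a miss needs an extra access to read the page-table entry; the timing figures are illustrative.

```python
# Effective memory-access time with a TLB (single-level page table assumed).
def effective_access_time(hit_ratio, mem_ns, tlb_ns):
    hit  = tlb_ns + mem_ns            # TLB lookup + data access
    miss = tlb_ns + 2 * mem_ns        # TLB lookup + page-table read + data
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(effective_access_time(0.9, mem_ns=100, tlb_ns=10))   # ≈ 120.0 ns
```

With a 90% hit ratio the average cost is close to a single memory access, which is why a high TLB hit rate matters so much.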

11. Network/Distributed OS:

• Definition: A network or distributed operating system manages a group of independent computers, allowing them to communicate and share resources.
• Characteristics:
o Transparency: Hides the complexity of the underlying network from the user.
o Scalability: It can scale efficiently as more nodes are added to the network.
o Fault Tolerance: Ensures the system remains operational even if individual
nodes fail.
• Applications: Used in cloud computing, distributed databases, and large-scale web
applications.
★ Priority Topics

1. Overlay Concepts:

a) File Attributes:

• Definition: File attributes are metadata associated with a file that provide information
about the file, such as its type, size, creation date, permissions, and access time.
• Common File Attributes:
o Name: The file's name, typically with an extension indicating its type.
o Type: Specifies the format of the file (e.g., text, binary).
o Size: The size of the file in bytes.
o Permissions: Determines the actions users or processes can perform (e.g.,
read, write, execute).
o Timestamps: Including creation, modification, and last accessed times.
• Purpose: These attributes allow the file system to manage files efficiently, enforce
security, and provide information about the file.

b) Accessing Methods:

• Definition: File accessing methods describe how data within a file is read or written.
• Types:
o Sequential Access: Data is read in a linear fashion, one after the other.
Typically used for text files.
o Direct/Random Access: Allows data to be read or written at any point in the
file. Used for binary files and databases.
o Indexed Access: Uses an index to access data at various positions in the file.
Common in database systems.
o Hashed Access: Uses a hash function to access data directly based on a key,
often used in hash tables.

2. Caching and Buffering:

a) Preemptive Scheduling:

• Definition: Preemptive scheduling is a type of CPU scheduling where the operating system can interrupt and suspend a running process to allocate CPU time to another process.
• How it Works: When a higher-priority process becomes ready to run, the operating
system preempts the current process and assigns CPU time to the new process.
• Advantages:
o Improves responsiveness in systems where processes with varying priorities
exist.
o Ensures that higher-priority tasks get CPU time promptly.
• Example: Preemptive scheduling is used in systems with Round Robin, Shortest
Job First (SJF), and Priority Scheduling.

b) Cache Management:

• Definition: Cache management involves controlling the use of cache memory to optimize access to frequently used data or instructions.
• Techniques:
o LRU (Least Recently Used): Evicts the least recently accessed items to make
room for new ones.
o FIFO (First In, First Out): Evicts the oldest item in the cache when it
becomes full.
o MRU (Most Recently Used): Evicts the most recently used item, though this
is rarely efficient.
• Purpose: To minimize the average time to access data, improving overall system
performance by reducing the need to access slower storage.

3. File Allocation Methods:

a) Single-Level Tree:

• Definition: A simple file system structure where files are stored in a single directory.
• Organization: All files are placed in one level, with no subdirectories or hierarchical
organization.
• Advantages:
o Simple and easy to implement.
o No directory traversal required; quick to access files.
• Disadvantages: It becomes inefficient when dealing with large numbers of files, as it
lacks a clear organizational structure.

b) DOS/UNIX Allocation Methods:

• DOS Allocation:
o File Allocation Table (FAT): In FAT, files are stored in clusters, and the
table keeps track of which clusters belong to each file. It uses a linked list of
clusters for each file.
• UNIX Allocation:
o Inodes: In UNIX-like systems, files are represented by inodes, which contain
metadata (file size, permissions, etc.), and the file's data blocks are indexed
using pointers in the inode.
o Ext2/Ext3: Common UNIX file systems that use inodes to track file data
locations and block allocation methods to organize file storage efficiently.

4. Dining Philosopher Problem:


• Definition: The Dining Philosopher Problem is a classical synchronization problem
involving multiple processes (philosophers) that need to share resources (forks) while
avoiding deadlock and starvation.
• Problem Description: Five philosophers sit at a table with a fork between each pair.
They alternately think and eat. To eat, a philosopher needs both forks. The challenge
is to ensure that philosophers don't starve or deadlock.
• Solution:
o Semaphores or Mutexes are typically used to synchronize the philosophers.
o Solution Strategies include ensuring that a philosopher picks up both forks
before eating and releases them after eating, using techniques such as
"resource hierarchy" or "timeouts" to prevent deadlock.
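The resource-hierarchy strategy mentioned above can be sketched with locks as forks: every philosopher picks up the lower-numbered fork first, which breaks the circular-wait condition and so prevents deadlock. Eating is simulated by a counter.

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]   # one fork between each pair
meals = [0] * N

def philosopher(i, rounds):
    first, second = sorted((i, (i + 1) % N))   # lower-numbered fork first
    for _ in range(rounds):
        with forks[first]:
            with forks[second]:
                meals[i] += 1                  # "eating"

threads = [threading.Thread(target=philosopher, args=(i, 100)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)   # every philosopher ate 100 times; no deadlock occurred
```

If every philosopher instead grabbed the left fork first, all five could hold one fork and wait forever; the global fork ordering is what rules that cycle out.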

5. Timesharing and Multiprogramming:

a) Multiprocessing:

• Definition: Multiprocessing is the use of two or more processors (CPUs) within a computer system to execute processes concurrently.
• Benefits:
o Improved performance by parallel execution.
o Allows a system to perform more tasks in a given time period.
• Types:
o Symmetric Multiprocessing (SMP): Multiple processors share a common
memory and work on different tasks simultaneously.
o Asymmetric Multiprocessing (AMP): One processor controls the system,
while others are used for specific tasks.

b) Multitasking:

• Definition: Multitasking refers to the ability of an operating system to run multiple tasks (processes) simultaneously on a single processor.
• Types:
o Preemptive Multitasking: The operating system can preempt a running task
and allocate CPU time to another task.
o Cooperative Multitasking: Tasks voluntarily give up control of the CPU,
allowing other tasks to run.
• Benefit: Allows the user to run multiple applications at once, improving system
efficiency and user experience.

6. Real-Time System Overview:

• Definition: A real-time system is a computer system that must respond to inputs or events within a strict time frame, usually within milliseconds or microseconds.
• Types:
o Hard Real-Time Systems: Systems where missing a deadline can result in
catastrophic failure (e.g., airbag systems in cars).
o Soft Real-Time Systems: Systems where deadlines are important but missing
them does not cause total failure (e.g., video streaming or online gaming).
• Characteristics:
o Determinism: Predictable and consistent response times.
o Reliability: Systems must function without failure, even under stress.
o Concurrency: Ability to handle multiple processes/tasks at the same time.
• Applications: Used in applications such as embedded systems, industrial control
systems, medical devices, and autonomous vehicles.

1. Memory Management Scheme:

a) Buddy System:

• Definition: The Buddy System is a memory allocation scheme that divides memory
into blocks of free memory that are powers of two. The system splits memory into
smaller blocks (or "buddies") to fulfill a memory request.
• How it Works: When a request for memory is made, the system searches for the
smallest available block that fits the request. If no block of the required size is
available, it splits a larger block into two equal "buddy" blocks, which can later be
merged if both become free.
• Advantages:
o Efficient allocation and deallocation, reducing fragmentation.
o Simple to implement with low overhead.
• Disadvantages: May still result in internal fragmentation, as blocks are only powers
of two.
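
The two core buddy-system calculations can be sketched in a few lines: rounding a request up to the smallest power-of-two block that fits, and locating a block's buddy by flipping the address bit that corresponds to the block size (the minimum block size of 32 is an illustrative assumption):

```python
def buddy_block_size(request, min_block=32):
    """Round a request up to the smallest power-of-two block that fits."""
    size = min_block
    while size < request:
        size *= 2
    return size

def buddy_address(addr, size):
    """A block's buddy differs only in the bit corresponding to its size,
    which is what makes merging two free buddies a cheap check."""
    return addr ^ size

print(buddy_block_size(100))  # 128 -> 28 bytes of internal fragmentation
print(buddy_address(0, 128))  # 128 -> blocks at addresses 0 and 128 are buddies
```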

b) Medium-Term Scheduler:

• Definition: The Medium-Term Scheduler controls the movement of processes between
the ready queue in main memory and the swapped-out queue on secondary storage.
• Purpose: It temporarily removes processes from the main memory (swapping) to
improve system performance by freeing up memory for other processes. It later swaps
them back when necessary.
• Role: It helps balance the system's performance by managing the degree of
multiprogramming and the memory available for active processes.

2. File Management:

a) File Attributes:

• Definition: File attributes provide metadata about the file, such as its name, size, type,
permissions, and timestamps.
• Common Attributes:
o Name: The name of the file, often with an extension indicating its type.
o Size: The amount of storage space the file occupies.
o Permissions: Defines which users or processes can read, write, or execute the
file.
o Creation, Modification, and Access Times: Track when the file was created,
last modified, or accessed.
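
These attributes can be inspected programmatically; for example, Python's os.stat exposes size, permission bits, and timestamps (the file name and contents below are illustrative):

```python
import os
import stat
import tempfile
import time

# Create a throwaway file so the example is self-contained.
path = os.path.join(tempfile.mkdtemp(), "notes.txt")
with open(path, "w") as f:
    f.write("hello")

info = os.stat(path)
print(os.path.basename(path))       # name attribute
print(info.st_size)                 # size in bytes: 5
print(stat.filemode(info.st_mode))  # permissions, e.g. -rw-r--r--
print(time.ctime(info.st_mtime))    # last modification time
```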

b) Accessing Methods:

• Definition: File accessing methods describe how data in a file is read or written.
• Types:
o Sequential Access: Data is accessed in a linear fashion, suitable for text files.
o Direct/Random Access: Allows reading and writing at any location in the
file, used for binary files.
o Indexed Access: Files are accessed through an index for faster searching,
commonly used in databases.
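
With fixed-size records, the difference between sequential and direct access reduces to reading in order versus seeking to a computed offset; a small Python sketch (the record size and temp-file setup are illustrative):

```python
import os
import tempfile

RECORD = 8  # fixed-size records make direct access a simple offset computation
path = os.path.join(tempfile.mkdtemp(), "records.bin")

# Write five 8-byte records holding the values 0..4.
with open(path, "wb") as f:
    for i in range(5):
        f.write(i.to_bytes(4, "little").ljust(RECORD, b"\0"))

with open(path, "rb") as f:
    first = f.read(RECORD)     # sequential access: next record in order
    f.seek(3 * RECORD)         # direct/random access: jump straight to record 3
    third = f.read(RECORD)

print(int.from_bytes(first[:4], "little"))  # 0
print(int.from_bytes(third[:4], "little"))  # 3
```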

c) Virtual File System (VFS):

• Definition: A Virtual File System (VFS) is an abstraction layer that allows the
operating system to provide a uniform interface for accessing different file systems.
• Purpose: It abstracts the details of different file systems, allowing applications to
work with files on different storage systems without worrying about their underlying
structure.
• Advantages:
o Provides a unified interface for interacting with various file systems like FAT,
NTFS, ext4, etc.

d) Mounted File System:

• Definition: A mounted file system refers to a file system that is connected or attached
to a host operating system and can be accessed by the system.
• How it Works: When a device (such as a USB drive or network share) is mounted, its
file system is integrated into the directory structure of the operating system, allowing
users to access files seamlessly.
• Example: Mounting a USB drive makes it accessible as part of the system's directory,
like /media/usb on Linux.

e) File Directory Structure:

• Definition: A file directory structure defines how files are organized in a file system.
• Types:
o Single-Level Directory: All files are stored in one directory. Simple but
inefficient for large systems.
o Two-Level Directory: One directory for the file system and another for each
user, separating files into distinct user directories.
o Hierarchical Directory: A tree-like structure where directories can contain
other directories, allowing for efficient organization and easy management of
files.

f) File Allocation Methods:

• Definition: File allocation methods determine how files are stored on disk.
• Types:
o Indexed Allocation: Each file has an index block that points to its data blocks,
allowing for direct access to file data.
o Contiguous Allocation: Files are stored in contiguous blocks on disk,
providing efficient access but leading to fragmentation over time.
o Linked Allocation: Each file's data block contains a pointer to the next block,
which helps manage disk space more flexibly.
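
Linked allocation can be modeled with a toy in-memory "disk" where each block stores a data chunk plus a pointer to the next block, and the directory keeps only the first block number (block numbers and data below are arbitrary examples):

```python
# Toy disk: each entry maps block number -> (data, next_block); -1 marks EOF.
disk = {}

def write_file(blocks, data_chunks):
    """Scatter a file across arbitrary free blocks, chaining them with pointers."""
    for i, (blk, chunk) in enumerate(zip(blocks, data_chunks)):
        nxt = blocks[i + 1] if i + 1 < len(blocks) else -1
        disk[blk] = (chunk, nxt)
    return blocks[0]  # the directory entry stores only the first block

def read_file(start):
    """Follow the pointer chain from the first block to EOF."""
    out, blk = [], start
    while blk != -1:
        chunk, blk = disk[blk]
        out.append(chunk)
    return "".join(out)

# The file occupies non-contiguous blocks 9, 2, 7 -- no external fragmentation.
head = write_file([9, 2, 7], ["he", "ll", "o"])
print(read_file(head))  # hello
```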

3. Synchronization Techniques:

a) Semaphores (Hardware to Semaphores):

• Definition: A semaphore is a synchronization tool used to control access to shared
resources by multiple processes.
• Types:
o Binary Semaphore: Takes only two values, 0 or 1, often used for mutual
exclusion.
o Counting Semaphore: Can take any integer value, used for managing
multiple instances of a resource.
• How it Works: Semaphores are implemented using atomic operations, wait() and
signal(). When a process performs wait(), it decrements the semaphore value; if
the resulting value is negative, the process blocks. signal() increments the
semaphore value and, if any processes are blocked on it, wakes one of them.
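
The wait()/signal() semantics above can be sketched with a condition variable; this is a simplified counting-semaphore variant that blocks while the count is zero rather than letting it go negative, with atomicity provided by the internal lock:

```python
import threading

class CountingSemaphore:
    """Textbook counting semaphore: wait() decrements and may block,
    signal() increments and wakes one waiter."""
    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()

    def wait(self):
        with self._cond:                 # the lock makes the update atomic
            while self._value == 0:
                self._cond.wait()        # block until a signal() arrives
            self._value -= 1

    def signal(self):
        with self._cond:
            self._value += 1
            self._cond.notify()          # wake one blocked process, if any

sem = CountingSemaphore(2)  # two instances of a resource
sem.wait()
sem.wait()                  # both instances taken; a third wait() would block
sem.signal()                # release one instance
```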

b) Mutual Exclusion:

• Definition: Mutual exclusion ensures that only one process can access a critical
section of code at any time, preventing conflicts over shared resources.
• Techniques:
o Locks: The most basic form of mutual exclusion, where a process locks a
resource before accessing it.
o Monitors: A higher-level synchronization construct that automatically locks
and unlocks resources, ensuring that only one process can execute the critical
section at a time.
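
A minimal Python illustration of mutual exclusion with a lock: the increment is the critical section, and without the lock concurrent updates could be lost (the thread and iteration counts are arbitrary):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n=100_000):
    global counter
    for _ in range(n):
        with lock:          # only one thread may be in the critical section
            counter += 1    # read-modify-write, safe under the lock

threads = [threading.Thread(target=increment) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 -- no lost updates
```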

4. Disk Scheduling Algorithms:

a) Disk Scheduling Algorithms:

• Definition: Disk scheduling algorithms are used to determine the order in which disk
I/O requests are processed, aiming to optimize access times and minimize seek time.
• Types:
o FCFS (First-Come, First-Served): Processes requests in the order they are
received.
o SSTF (Shortest Seek Time First): Selects the request with the shortest seek
time, reducing the movement of the disk arm.
o SCAN: Moves the disk arm in one direction, servicing requests, and then
reverses direction once it reaches the end.
o C-SCAN (Circular SCAN): Similar to SCAN, but when the arm reaches the
end, it immediately returns to the other end without servicing requests.
o LOOK: Similar to SCAN, but the arm only moves as far as the last request in
the current direction.
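
Total head movement for FCFS and SSTF can be computed directly; the request queue below is the classic textbook example, with the head assumed to start at cylinder 53:

```python
def fcfs_distance(head, requests):
    """Service requests in arrival order, summing head movement."""
    total = 0
    for r in requests:
        total += abs(head - r)
        head = r
    return total

def sstf_distance(head, requests):
    """Always service the pending request closest to the current head."""
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(head - r))
        total += abs(head - nearest)
        head = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs_distance(53, queue))  # 640 cylinders
print(sstf_distance(53, queue))  # 236 cylinders -- far less arm movement
```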

5. Logical I/O:

a) Logical I/O:

• Definition: Logical I/O refers to the interface between user processes and the
operating system's file system, abstracting physical hardware details.
• How it Works: User applications interact with files using logical addresses (file
names and paths), while the operating system manages the mapping to physical
storage locations.
• Benefits: Simplifies user interaction with files, decoupling file operations from the
hardware specifics.

6. File Structure:

a) Ext2/Ext3:

• Ext2 (Second Extended File System):
o A traditional Unix file system without journaling. It is fast and reliable
but lacks crash-recovery features.
• Ext3 (Third Extended File System):
o An extension of Ext2 with journaling, meaning it logs changes before writing
them to disk, making it more resilient to crashes and system failures.

7. Network/Distributed OS:

• Definition: A Network or Distributed Operating System is an OS designed to manage
a group of independent computers and make them appear as a single system to users.
• Key Features:
o Transparency: Hides the complexities of distributed systems, such as location
of resources or failure handling.
o Fault Tolerance: Ensures the system continues to operate even when some
components fail.
o Resource Sharing: Allows multiple computers to share resources like CPU,
memory, and storage.
• Examples: Systems like Google File System (GFS) and Hadoop Distributed File
System (HDFS) are examples of distributed systems used for large-scale data storage
and processing.
