Selected Questions

The document provides an overview of operating system concepts including processes, file operations, and CPU scheduling algorithms. It discusses the structure of operating systems, stages of process management, and various file accessing modes. Key topics include the Banker's algorithm for deadlock avoidance, virtual memory implementation, and the Round Robin scheduling algorithm.

1. A program in execution is called a process.

2. The Banker's algorithm is used for deadlock avoidance.

3. Virtual memory can be implemented with paging and segmentation.

4. The maximum length of a file name in DOS is 8+3 characters (8 characters for the name and 3 for
the extension, commonly known as the 8.3 format).

5. The shell is the command interpreter in the Unix operating system.

6. The first OS was developed in the early 1950s. The first recognizable modern operating system
was GM-NAA I/O, developed in 1956 for IBM's 704 mainframe.

7. A scheduler that selects the process from secondary storage device is called long-term scheduler
(or job scheduler).

8. Moving a page from main memory to disk is called paging out (or swapping out).

Long Answer Questions

File Operations and Accessing Modes in Operating Systems

1. File Operations in Operating Systems


File operations represent the fundamental interactions between programs and the file system. The
operating system provides these operations as part of its file management subsystem:

Create
Purpose: Establishes a new file in the file system
Process:
Allocates disk space based on allocation strategy (contiguous, linked, indexed)
Creates directory entry with file attributes
Initializes file control blocks and metadata
System Call Example: creat(), open() with creation flags (O_CREAT)
Considerations: Permissions, quotas, naming conventions

Open
Purpose: Prepares an existing file for access
Process:
Locates file in directory structure
Verifies access permissions
Creates file descriptor/handle in process's open file table
Loads file metadata into memory for quick access
System Call Example: open(), fopen()
Return Value: File descriptor or handle used in subsequent operations

Read
Purpose: Transfers data from file to program memory
Process:
Validates read permissions and boundaries
Locates data blocks using file system's allocation method
Transfers data through system buffers to user space
Updates current position pointer
System Call Example: read(), fread()
Parameters: File descriptor, buffer address, number of bytes to read

Write
Purpose: Transfers data from program memory to file
Process:
Validates write permissions
Allocates additional blocks if needed
Transfers data from user space to system buffers to disk
Updates file size and timestamps
System Call Example: write(), fwrite()
Considerations: File size limits, disk space availability

Seek
Purpose: Repositions the file pointer for subsequent operations
Process:
Calculates new position based on current position and offset
Validates against file boundaries
Updates current position pointer
System Call Example: lseek(), fseek()
Options: Seek from beginning (SEEK_SET), current position (SEEK_CUR), or end (SEEK_END)

Close
Purpose: Terminates access to a file
Process:
Flushes modified data and metadata to disk
Updates directory entry if needed
Frees system resources (file descriptor, buffers)
System Call Example: close(), fclose()
Importance: Prevents resource leaks and ensures data integrity
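
Taken together, the operations above follow a common lifecycle. A minimal C sketch of that lifecycle using the POSIX calls named so far; the file name notes.txt is a hypothetical example:

/* Minimal sketch of the open/read/seek/write/close lifecycle. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[64];

    /* Open (creating if necessary) for reading and writing. */
    int fd = open("notes.txt", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Read: transfer up to 64 bytes into a user-space buffer. */
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n < 0) { perror("read"); return 1; }

    /* Seek: reposition the file pointer to the end of the file. */
    off_t end = lseek(fd, 0, SEEK_END);

    /* Write: append data; the kernel allocates new blocks if needed. */
    write(fd, "appended\n", 9);

    /* Flush: force buffered data to stable storage before closing. */
    fsync(fd);

    /* Close: release the descriptor and associated kernel resources. */
    close(fd);
    printf("read %zd bytes; file was %lld bytes before append\n",
           n, (long long)end);
    return 0;
}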

Delete/Remove
Purpose: Removes a file from the file system
Process:
Verifies deletion permissions
Removes directory entry
Marks disk blocks as free
Updates related metadata
System Call Example: unlink(), remove()
Considerations: Reference counts for shared files

Truncate
Purpose: Modifies file size, usually reducing it
Process:
Updates file size attribute
Frees unused disk blocks
Maintains file position if within new boundaries
System Call Example: truncate(), ftruncate()

Flush
Purpose: Forces buffered data to be written to physical media
Process:
Writes all modified data in buffers to disk
Updates file system metadata
System Call Example: fsync(), fflush()
Usage: Critical for data integrity in database and transaction systems

File Attribute Operations


Purpose: Modify or retrieve file metadata
Operations:
Get/set permissions (chmod)
Get/set ownership (chown)
Get/set timestamps (access, modification, creation)
Get file status information (stat)
System Call Examples: chmod(), chown(), stat()
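
A minimal sketch of retrieving and changing metadata with stat() and chmod(); notes.txt is again a hypothetical file:

/* Read file metadata, then make the file read-only for group/others. */
#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    struct stat st;
    if (stat("notes.txt", &st) != 0) { perror("stat"); return 1; }
    printf("size: %lld bytes, mode: %o, links: %ld\n",
           (long long)st.st_size, st.st_mode & 0777, (long)st.st_nlink);

    /* Owner keeps read/write; group and others become read-only. */
    chmod("notes.txt", S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH);
    return 0;
}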

Lock Operations
Purpose: Controls concurrent access to files
Types:
Shared locks (multiple readers)
Exclusive locks (single writer)
Advisory vs. mandatory locks
System Call Example: fcntl(), flock()
Use Cases: Database management, concurrent file updates
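
A minimal sketch of an exclusive advisory lock with flock(); shared.db is a hypothetical file, and only cooperating processes that also call flock() are blocked:

/* Writer takes an exclusive lock, updates the file, then releases it. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

int main(void) {
    int fd = open("shared.db", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    if (flock(fd, LOCK_EX) == 0) {       /* exclusive (single-writer) lock */
        write(fd, "update\n", 7);        /* critical section */
        flock(fd, LOCK_UN);              /* release the lock */
    }
    close(fd);
    return 0;
}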

2. File Accessing Modes


File accessing modes define how data within files can be read or written:

Sequential Access
Description: Processing records in order from beginning to end
Operations:
Read-next: Retrieves the next record in sequence
Write-next: Adds a record after the current position
Implementation: Maintains a current position pointer that advances after each operation
Applications:
Processing log files
Batch transaction processing
Sequential data streams
Advantages:
Simple implementation
Efficient for sequential processing
Lower overhead (no position indexing)
Disadvantages:
Inefficient for random record access (must read from beginning)

Direct/Random Access
Description: Ability to access any record without reading preceding ones
Operations:
Read(n): Retrieves record at position n
Write(n): Updates record at position n
Implementation:
Fixed-size records for easy offset calculation
Index structures or hashing for variable-sized records
Applications:
Database management systems
Transaction processing systems
Interactive applications
Advantages:
Fast access to specific records
Efficient updates to individual records
Disadvantages:
More complex implementation
Overhead for position management
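
A minimal sketch of fixed-size-record random access: record n lives at byte offset n * sizeof(record), so it can be read without touching records 0..n-1. The record layout and the file name records.dat are illustrative assumptions:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

struct record { int id; char name[28]; };   /* hypothetical record, typically 32 bytes */

int read_record(int fd, long n, struct record *out) {
    /* Direct access: compute the offset and jump straight to record n. */
    if (lseek(fd, n * (long)sizeof(*out), SEEK_SET) < 0) return -1;
    return read(fd, out, sizeof(*out)) == sizeof(*out) ? 0 : -1;
}

int main(void) {
    int fd = open("records.dat", O_RDONLY);
    struct record r;
    if (fd >= 0 && read_record(fd, 5, &r) == 0)  /* fetch record 5 directly */
        printf("record 5: id=%d name=%s\n", r.id, r.name);
    if (fd >= 0) close(fd);
    return 0;
}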

Indexed Sequential Access Method (ISAM)


Description: Combines sequential and direct access using indexes
Components:
Data file with sequential records
Index file(s) for direct access
Overflow areas for record additions
Operations:
Index lookup followed by direct access
Sequential scanning within blocks
Applications:
Large databases requiring both sequential and random access
Customer information systems
Advantages:
Efficient for both sequential processing and record lookup
Good performance for range queries
Disadvantages:
Index maintenance overhead
Complex implementation

Memory-Mapped Files
Description: Maps file contents directly into process address space
Implementation:
Virtual memory system maps file pages to memory pages
File I/O becomes memory operations
Operations:
Direct memory read/write operations
Page fault handling for automatic data transfer
System Call Example: mmap()
Applications:
High-performance file processing
Shared memory between processes
Database implementations
Advantages:
Reduced system call overhead
Potential performance improvement
Simplified programming model
Disadvantages:
Limited file size (address space constraints)
Careful synchronization required
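
A minimal sketch of mmap() usage, assuming a hypothetical non-empty file notes.txt: the file's pages appear directly in the address space, so I/O becomes ordinary memory access.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("notes.txt", O_RDONLY);
    struct stat st;
    if (fd < 0 || fstat(fd, &st) != 0 || st.st_size == 0) return 1;

    /* Map the whole file read-only into this process's address space. */
    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    /* Plain memory access; page faults pull data in on demand. */
    printf("first byte: %c, last byte: %c\n", data[0], data[st.st_size - 1]);

    munmap(data, st.st_size);
    close(fd);
    return 0;
}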

Buffered Access Mode


Description: Uses intermediate memory buffers for file operations
Implementation:
System or library maintains memory buffers
Data transferred in blocks between disk and buffers
Applications:
Most standard file I/O
Situations where access patterns benefit from caching
Advantages:
Improved performance through read-ahead and write-behind
Reduced disk operations
Disadvantages:
Memory overhead for buffers
Potential data loss if system crashes before flush

Unbuffered Access Mode


Description: Direct transfer between program memory and disk
Implementation:
Bypasses system caches
Typically requires aligned memory buffers
Applications:
Database systems with custom caching
Real-time data collection
System Call Example: open() with O_DIRECT flag
Advantages:
Predictable I/O behavior
No double buffering
Disadvantages:
Usually lower performance
More complex programming
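
A minimal Linux-specific sketch of unbuffered reading with O_DIRECT; the 4096-byte alignment is an assumption about the device's block size, and records.dat is a hypothetical file:

#define _GNU_SOURCE          /* exposes O_DIRECT on Linux */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    void *buf;
    /* O_DIRECT requires a block-aligned buffer. */
    if (posix_memalign(&buf, 4096, 4096) != 0) return 1;

    int fd = open("records.dat", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); free(buf); return 1; }

    ssize_t n = read(fd, buf, 4096);   /* transfers bypass the page cache */
    printf("read %zd bytes without the page cache\n", n);

    close(fd);
    free(buf);
    return 0;
}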

Access Control Modes


Exclusive Access:
Single process can access the file
Prevents concurrent access conflicts
Example: open() with exclusive lock
Shared Access:
Multiple processes can read simultaneously
Write operations may be restricted
Example: open() with shared lock
Read-Only Mode:
Prevents modifications to file content
Example: open(file, O_RDONLY)
Write-Only Mode:
Permits only writing operations
Example: open(file, O_WRONLY)
Read-Write Mode:
Allows both reading and writing
Example: open(file, O_RDWR)

Asynchronous vs. Synchronous Access


Synchronous:
Operations complete before returning control
Program execution blocked during I/O
Simpler programming model
Asynchronous:
Operations initiated but return before completion
Notification mechanism when operation completes
System Call Example: aio_read(), aio_write()
Advantages: Improved parallelism and responsiveness
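
A minimal sketch of asynchronous reading with POSIX AIO (on Linux, link with -lrt); notes.txt is hypothetical, and the busy-wait loop stands in for useful work overlapped with the I/O:

#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    char buf[64];
    int fd = open("notes.txt", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf = buf;
    cb.aio_nbytes = sizeof(buf);
    cb.aio_offset = 0;

    aio_read(&cb);                          /* initiate; returns immediately */

    while (aio_error(&cb) == EINPROGRESS)
        ;                                   /* overlap useful work here */

    printf("async read returned %zd bytes\n", aio_return(&cb));
    close(fd);
    return 0;
}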

3. Explain the Structure of Operating System


An operating system (OS) is a complex software system with multiple interconnected components
organized in a layered architecture. The detailed structure includes:

1. Kernel (Core of the OS)


Process Management Subsystem: Handles process creation, termination, scheduling,
synchronization, and inter-process communication
Memory Management Unit: Implements virtual memory, paging, segmentation, and memory
allocation/deallocation
I/O Management Subsystem: Manages device drivers, buffers, and provides a uniform interface
for hardware
File System Module: Organizes data storage, implements file operations, and maintains directory
structures
Protection and Security Mechanisms: Enforces access controls and protects system resources

2. Hardware Abstraction Layer (HAL)


Provides a uniform interface between hardware and software components
Abstracts hardware-specific details to allow portability across different platforms
Includes device drivers that communicate directly with hardware devices

3. System Call Interface


Provides the gateway for applications to request OS services
Transitions between user mode and kernel mode through controlled entry points
Examples include open(), read(), write(), fork(), and exec() functions
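
A minimal sketch of this user-to-kernel transition in practice, using fork(), execlp(), and wait(); the choice of ls as the child's program is arbitrary:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* duplicate the calling process */
    if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);  /* child: replace image */
        _exit(127);                     /* reached only if exec fails */
    }
    int status;
    wait(&status);                      /* parent: block until child exits */
    printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    return 0;
}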

4. Shell (Command Interpreter)


Command Line Interface (CLI): Text-based interface like Bash, PowerShell, or CMD
Graphical User Interface (GUI): Visual interface like Windows Explorer or macOS Finder
Interprets and executes user commands by invoking appropriate system programs

5. System Libraries
C Library (libc): Provides standard functions used by applications
Language Runtime Libraries: Support execution of programs written in various languages
System Utility Libraries: Offer additional functionality for common operations

6. System Utilities and Services


Device Management Tools: Control and configure hardware devices
File Management Utilities: Copy, move, delete files and directories
System Monitoring Tools: Track system performance and resource usage
Network Services: Implement protocols for communication and resource sharing

7. Applications Layer
User applications that run on top of the OS infrastructure
Includes productivity software, development tools, and user programs

Major Architectural Approaches:

Monolithic Structure

Characteristics: All OS components run in kernel space with full hardware privileges
Advantages: Direct communication between components, high performance
Disadvantages: Poor fault isolation, difficult maintenance
Examples: Traditional UNIX, Linux (though modular)
Implementation: Typically a single large binary with all services compiled together

Microkernel Architecture

Characteristics: Minimal kernel with only essential functions; most services run as user processes
Core Components: Address space management, thread management, IPC primitives
Services as User Processes: File systems, device drivers, network protocols
Advantages: Better reliability, easier extension, enhanced security
Disadvantages: Performance overhead due to frequent context switching and IPC
Examples: MINIX, QNX, L4 microkernel family
Inter-Process Communication: Critical for performance as services must communicate frequently

Layered Architecture

Organization: Hierarchical layers with well-defined interfaces


Data Flow: Each layer uses services only from lower layers
Benefits: Clear separation of concerns, easier debugging
Drawbacks: Potential performance overhead from layer traversals
Example Implementation: THE operating system (historical)
Modular Kernel

Structure: Core kernel with loadable modules that can be inserted/removed at runtime
Module Types: Device drivers, file systems, network protocols
Benefits: Flexibility of monolithic with some modularity benefits
Implementation: Uses techniques such as dynamic linking to load and unload modules at runtime
Examples: Linux, FreeBSD, Solaris

Hybrid Systems

Approach: Combine elements of multiple architecture styles


Example: the Windows NT kernel has a microkernel-derived structure but implements many services
in kernel space
Rationale: Balance between performance and modularity

Exokernel Architecture

Philosophy: Minimize abstraction, provide applications direct control over hardware


Implementation: OS enforces resource allocation but not management policies
Benefits: Maximum flexibility and performance for specialized applications

4. Explain Various Stages of Process


A process is a program in execution that goes through several well-defined stages during its lifetime,
managed by the operating system:

1. Process Creation
Initialization Mechanisms:

System Initialization: Created during boot time


User Request: Launched via shell or GUI
Process Spawning: Parent process creates child process
Batch Job Initiation: Scheduled by job scheduler

Resource Allocation:

Memory space allocation (stack, heap, data, and text segments)


Process Control Block (PCB) creation
File descriptor table setup
Security context establishment

Process Control Block Creation:


Process ID assignment
Parent process information
CPU state initialization (program counter, register values)
Process state (initially "new")
CPU scheduling information
Memory management information (page tables)
Accounting information (CPU time, time limits)
I/O status information (allocated devices, open files)

Address Space Initialization:

Code segment loading from executable file


Data segment initialization
Stack creation for function calls and local variables
Heap initialization for dynamic memory allocation

2. Ready State
Characteristics:

Process is loaded in main memory


All resources except CPU are allocated
Waiting for CPU time
Multiple processes may be in ready state simultaneously

Ready Queue Management:

Processes organized in prioritized queues


Short-term scheduler selects processes from these queues
State transitions tracked in PCB

Dispatcher Readiness:

Process context stored in PCB


Waiting for dispatcher to allocate CPU

3. Running State
Execution Environment:

Process actively using CPU


Instructions executed sequentially
Register values changing as execution progresses
Memory accesses performed

CPU Burst Analysis:

Processes alternate between CPU and I/O bursts


CPU-bound vs. I/O-bound process behavior

State Transition Causes:

Timer interrupts (time quantum expiration)


Higher priority process becomes ready
Voluntary CPU release (yield)
I/O or event wait request

Context Switching Procedure:

Save current process state to its PCB


Load next process state from its PCB
Update memory management structures
Switch CPU to new process

4. Blocked/Waiting State
Triggering Events:

I/O request initiation


Wait for child process termination
Semaphore or mutex acquisition attempts
Timer or event wait requests

Resource Management:

Process may hold some resources while waiting


Potential deadlock conditions monitored

Multiple Wait Queues:

Separate queues for different events/resources


Device-specific wait queues
System call wait queues

Wait State Monitoring:

OS tracks completion of waited-for event


Interrupt handlers notify when wait condition satisfied

5. Ready/Resume State
Transition Mechanisms:

I/O completion interrupt


Event occurrence (signal received)
Resource becomes available
Timer expiration

Queue Management:

Process moved from wait queue to ready queue


Priority recalculation may occur
Aging mechanisms may adjust priority

Scheduling Implications:

May trigger immediate rescheduling if priority is high


Short-term scheduler reevaluates execution order

6. Termination/Exit State
Termination Causes:

Normal completion (exit() system call)


Fatal error (segmentation fault, illegal instruction)
External termination (kill signal)
Parent termination (cascading termination)

Resource Deallocation:

Memory freed (stack, heap, page tables)


File descriptors closed
I/O buffers flushed
Shared resources released

Termination Processing:

Exit status saved for parent process


Child processes handling (adoption or termination)
Process accounting information updated
Zombie state if parent hasn't waited

Zombie and Orphan Processes:


Zombie: Terminated but entry remains until parent collects exit status
Orphan: Parent terminated before child, adopted by init process

7. Advanced Process State Considerations


Suspended States (additional states in some systems):

Suspended Ready: Process removed from memory but ready to run when restored
Suspended Blocked: Process removed from memory and also waiting for an event

State Transitions Diagram:

Complex web of possible transitions between states


Transitions triggered by system calls, interrupts, and scheduler decisions

Real-time Process States:

Additional states for deadline management


Priority inheritance protocols for critical sections

Each of these stages is meticulously managed by the operating system's process management
subsystem to ensure efficient multitasking, resource utilization, and system stability.

5. CPU Scheduling Algorithms


CPU scheduling is the process of determining which process in the ready queue should be allocated
the CPU. This decision is made by the CPU scheduler based on specific algorithms. Let's examine two
important CPU scheduling algorithms in detail:

1. Round Robin (RR) Scheduling Algorithm


Round Robin is a preemptive scheduling algorithm specifically designed for time-sharing systems. It's
one of the most widely used scheduling algorithms in modern operating systems.

Key Characteristics:

Time Quantum: Each process is assigned a fixed time slice or quantum


Preemptive: Processes are forcibly switched after their time quantum expires
Circular Queue: Processes are kept in a circular ready queue
Fair Allocation: Equal opportunity for all processes regardless of burst time

Algorithm Steps:

1. Processes are placed in a FIFO queue


2. CPU is allocated to the first process for a time interval of up to one time quantum
3. If the process completes within the time quantum, it is removed from the queue
4. If the process requires more CPU time, it is preempted and placed at the back of the queue
5. The CPU is then allocated to the next process in the queue
6. This cycle continues until all processes complete execution

Advantages:

Fair allocation of CPU


Good for time-sharing environments
Low response time for short processes
No starvation as every process gets CPU time

Disadvantages:

Performance heavily depends on time quantum selection


Higher average waiting time compared to SJF
High context switching overhead

Time Quantum Considerations:

Too large: Degenerates to FCFS


Too small: Excessive context switching overhead
Ideal: Slightly larger than the time required for a typical interaction

Example:

Consider the following processes with their arrival time and burst time:

Process   Arrival Time   Burst Time
P1        0              10
P2        1              5
P3        2              8

With a time quantum of 4 units, the execution sequence would be:

Time 0-4:   P1 (6 remaining)
Time 4-8:   P2 (1 remaining)
Time 8-12:  P3 (4 remaining)
Time 12-16: P1 (2 remaining)
Time 16-17: P2 (0 remaining) - P2 completes
Time 17-21: P3 (0 remaining) - P3 completes
Time 21-23: P1 (0 remaining) - P1 completes
Visual timeline (Gantt chart):

|  P1  |  P2  |  P3  |  P1  |P2|  P3  |P1|
0      4      8      12     16 17     21 23

Calculation:

P1 completion time: 23
P2 completion time: 17
P3 completion time: 21

Average Turnaround Time = ((23-0) + (17-1) + (21-2)) / 3 = 19.33
Average Waiting Time = ((23-10-0) + (17-5-1) + (21-8-2)) / 3 = 11.67
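
As a cross-check, here is a minimal C sketch (variable names and queue capacity are arbitrary) that simulates this schedule and reproduces the figures above. Processes arriving during a slice enter the queue before the preempted process re-enters:

#include <stdio.h>

#define N 3

int main(void) {
    int arrive[N] = {0, 1, 2}, burst[N] = {10, 5, 8}, left[N];
    int done_at[N], queue[64], head = 0, tail = 0, q = 4, t = 0, done = 0;
    int queued[N] = {0};

    for (int i = 0; i < N; i++) left[i] = burst[i];
    queue[tail++] = 0; queued[0] = 1;            /* P1 arrives at t = 0 */

    while (done < N) {
        int p = queue[head++];
        int run = left[p] < q ? left[p] : q;     /* up to one quantum */
        t += run;
        left[p] -= run;
        for (int i = 0; i < N; i++)              /* enqueue new arrivals */
            if (!queued[i] && arrive[i] <= t) { queue[tail++] = i; queued[i] = 1; }
        if (left[p] > 0) queue[tail++] = p;      /* preempted: go to back */
        else { done_at[p] = t; done++; }
    }

    double tat = 0, wt = 0;
    for (int i = 0; i < N; i++) {
        tat += done_at[i] - arrive[i];
        wt  += done_at[i] - arrive[i] - burst[i];
        printf("P%d completes at %d\n", i + 1, done_at[i]);
    }
    printf("avg turnaround = %.2f, avg waiting = %.2f\n", tat / N, wt / N);
    return 0;
}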

2. Shortest Job First (SJF) Scheduling Algorithm


SJF is a non-preemptive scheduling algorithm that selects the process with the smallest burst time
first.

Key Characteristics:

Optimal: Produces minimum average waiting time for a given set of processes
Non-preemptive: Once a process starts execution, it runs to completion
Burst Time Based: Selection is based on expected execution time
Challenging Implementation: Requires knowledge of future CPU burst times

Algorithm Steps:

1. When CPU becomes available, select the process with the smallest burst time
2. Execute the selected process to completion
3. If multiple processes have the same burst time, break the tie using FCFS or arrival time

Advantages:

Optimal average waiting time


Favors short jobs, improving system throughput
Reduces average turnaround time

Disadvantages:

Potential starvation for longer processes


Requires prediction of CPU burst time
Not suitable for interactive systems

Prediction Mechanisms:

To implement SJF, the system needs to predict future CPU burst times, typically using:

Exponential averaging: τₙ₊₁ = α·tₙ + (1 − α)·τₙ (where τ is the predicted burst time, t is the most
recent actual burst time, and α is a weighting factor with 0 ≤ α ≤ 1); see the sketch after this list
Historical data analysis
Process behavior heuristics
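
As an illustration of exponential averaging, a small C sketch; the initial estimate τ₀ = 10 and the observed bursts are made-up values. With α = 0.5, each prediction is the mean of the last actual burst and the previous prediction:

#include <stdio.h>

int main(void) {
    double alpha = 0.5, tau = 10.0;          /* initial guess tau_0 = 10 */
    double actual[] = {6.0, 4.0, 6.0, 4.0};  /* observed CPU bursts */

    for (int n = 0; n < 4; n++) {
        tau = alpha * actual[n] + (1.0 - alpha) * tau;  /* tau_{n+1} */
        printf("after burst %d: predicted next burst = %.2f\n", n + 1, tau);
    }
    return 0;   /* predictions: 8.00, 6.00, 6.00, 5.00 */
}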

Example:

Consider the following processes with their arrival time and burst time:

Process   Arrival Time   Burst Time
P1        0              7
P2        2              4
P3        4              1
P4        5              4

With SJF scheduling, the execution sequence would be:

Time 0-7:   P1 (only process available at time 0)
Time 7-8:   P3 (shortest among P2, P3, P4)
Time 8-12:  P2 (tied with P4 at 4 units; FCFS tie-break favors P2, which arrived first)
Time 12-16: P4

Visual timeline (Gantt chart):

|   P1   |P3|  P2  |  P4  |
0        7  8      12     16

Calculation:

P1 completion time: 7
P2 completion time: 12
P3 completion time: 8
P4 completion time: 16
Average Turnaround Time = ((7-0) + (12-2) + (8-4) + (16-5)) / 4 = 8.0
Average Waiting Time = ((7-7-0) + (12-4-2) + (8-1-4) + (16-4-5)) / 4 = 4.0

6. Page Replacement Algorithms


Page replacement algorithms are used in virtual memory systems to decide which page to evict from
memory when a new page needs to be brought in and all frames are occupied.

1. Least Recently Used (LRU) Algorithm


LRU is based on the principle of temporal locality, which suggests that pages that have been used
recently are likely to be used again in the near future.

Key Characteristics:

Usage History: Tracks how recently each page has been accessed
Replacement Policy: Removes the page that hasn't been used for the longest time
Implementation Methods: Counter-based, stack-based, or approximation algorithms
Overhead: Can be resource-intensive to maintain perfect usage history

Algorithm Steps:

1. When a page fault occurs and all frames are full


2. Identify the page that has not been accessed for the longest period
3. Replace this page with the new page being requested
4. Update the usage history for all pages

Implementation Techniques:

Counter Implementation: Each page entry has a time-of-last-use field


Stack Implementation: Recently accessed pages are moved to the top of a stack
Reference Bits: Hardware support via reference bits that are periodically reset
Clock Algorithm: An approximation of LRU using a circular list with reference bits

Example:

Consider a memory with 3 frames and the following reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2

Frame allocation sequence:

Reference: 7  0  1  2  0  3  0  4  2  3  0  3  2
Frame 1:   7  7  7  2  2  2  2  4  4  4  0  0  0
Frame 2:   -  0  0  0  0  0  0  0  0  3  3  3  3
Frame 3:   -  -  1  1  1  3  3  3  2  2  2  2  2
Fault:     Y  Y  Y  Y  N  Y  N  Y  Y  Y  Y  N  N

Total page faults: 9

At each fault the victim is the frame whose page has gone unreferenced the longest: page 7 at the
first reference to 2, page 1 at the reference to 3, page 2 at the reference to 4, and so on.
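
A minimal C sketch of counter-based LRU (array sizes and names are illustrative) that replays this reference string and reports the same fault count; timestamps stand in for the usage history an implementation must maintain:

#include <stdio.h>

#define FRAMES 3

int main(void) {
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
    int n = sizeof(refs) / sizeof(refs[0]);
    int page[FRAMES], last_used[FRAMES], used = 0, faults = 0;

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int f = 0; f < used; f++)
            if (page[f] == refs[t]) hit = f;

        if (hit >= 0) {
            last_used[hit] = t;                 /* refresh usage history */
        } else {
            faults++;
            int victim = 0;
            if (used < FRAMES) {
                victim = used++;                /* a free frame is available */
            } else {
                for (int f = 1; f < FRAMES; f++)    /* least recently used */
                    if (last_used[f] < last_used[victim]) victim = f;
            }
            page[victim] = refs[t];
            last_used[victim] = t;
        }
    }
    printf("page faults: %d\n", faults);        /* prints 9 */
    return 0;
}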

2. Optimal Page Replacement Algorithm


The Optimal (or OPT) algorithm replaces the page that will not be used for the longest period in the
future. It provides the lowest possible page fault rate for a fixed number of frames.

Key Characteristics:

Future Knowledge: Requires knowledge of future page references


Theoretical Benchmark: Used as a baseline to evaluate other algorithms
Not Implementable: Cannot be practically implemented in most systems
Optimal Performance: Guarantees the minimum number of page faults

Algorithm Steps:

1. When a page fault occurs and all frames are full


2. Look at future references to determine when each page will be used next
3. Replace the page that will not be used for the longest period
4. If a page will never be used again, it's an ideal candidate for replacement

Significance:

Provides a theoretical lower bound on page fault rate


Useful for evaluating the efficiency of implementable algorithms
Demonstrates the performance gap between ideal and practical solutions

Example:

Using the same reference string as before: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2

Frame allocation sequence:

Reference: 7  0  1  2  0  3  0  4  2  3  0  3  2
Frame 1:   7  7  7  2  2  2  2  2  2  2  2  2  2
Frame 2:   -  0  0  0  0  0  0  4  4  4  0  0  0
Frame 3:   -  -  1  1  1  3  3  3  3  3  3  3  3
Fault:     Y  Y  Y  Y  N  Y  N  Y  N  N  Y  N  N

Total page faults: 7

At the reference to 4, page 0 is evicted because its next use (three references ahead) is farther
away than those of pages 2 and 3; at the final fault, on 0, page 4 is evicted because it is never
used again.

Notice how OPT performs better than LRU (7 vs 9 page faults) because it has perfect knowledge of
future references.

Comparison and Analysis

CPU Scheduling Algorithm Selection Factors:


System type (batch, interactive, real-time)
Performance metrics priority (throughput, response time, fairness)
Process characteristics (CPU-bound vs I/O-bound)
Implementation complexity and overhead

Page Replacement Algorithm Selection Factors:


Memory constraints
Application access patterns
Hardware support for implementation
Performance requirements
Overhead tolerance

Real-world Implementations:
Most modern operating systems use variants of Round Robin for process scheduling
LRU approximations like Clock Algorithm are common for page replacement
Hybrid approaches often provide better overall performance than pure algorithms

Here are answers to the 3-mark questions:

Define OS
An Operating System (OS) is system software that manages computer hardware, software resources,
and provides common services for computer programs. It acts as an intermediary between users and
computer hardware, handling tasks like process management, memory management, file system
management, I/O operations, security, and providing a user interface. The OS abstracts hardware
complexity and creates an environment where applications can execute efficiently.

Compare DOS & UNIX


Architecture: DOS is a single-user, single-tasking OS, while UNIX is multi-user and multi-tasking
Command Interface: DOS uses a simple command line with limited functionality; UNIX has a
powerful shell with extensive commands and scripting
File System: DOS uses FAT file system with limited file naming (8.3 format); UNIX has hierarchical
file system with longer filenames and better security
Portability: DOS was primarily for x86 architecture; UNIX is portable across many hardware
platforms
Security: UNIX has robust user/group permissions and security features; DOS has minimal
security controls

Define Process
A process is a program in execution. It is an active entity with a program counter indicating the next
instruction, a set of associated resources including memory space (code, data, stack, heap), processor
registers, I/O information, and process control information. Each process has its own address space
and one or more threads of execution. The operating system manages processes through creation,
scheduling, inter-process communication, and termination.

What do you mean by System Call


A system call is an interface that allows user programs to request services from the operating system's
kernel. It provides a controlled entry point into the kernel, allowing programs to perform privileged
operations like file operations, process management, device access, and communication. System calls
transfer control from user mode to kernel mode, execute the requested service with elevated
privileges, and then return control back to the user program with any results.

Define Pages
Pages are fixed-size blocks of memory used in virtual memory systems. The operating system divides
the logical address space into equal-sized pages and physical memory into corresponding page
frames. This allows for efficient memory management by enabling non-contiguous physical memory
allocation and implementation of virtual memory through paging. Pages typically range from 4KB to
64KB in size and facilitate memory protection, sharing, and efficient swapping between main memory
and secondary storage.

What do you mean by Page Fault


A page fault is an exception that occurs when a program tries to access a page in its virtual address
space that is not currently mapped to physical memory. When a page fault occurs, the operating
system must locate the required page on disk, allocate a page frame in physical memory, load the
page from disk into memory, update the page table, and then resume the program's execution. Page
faults are essential for implementing demand paging in virtual memory systems.

Define Deadlock
Deadlock is a situation in a multi-processing environment where two or more processes are unable to
proceed because each is waiting for resources held by the others, creating a circular wait condition.
Deadlock occurs when four conditions are simultaneously present: mutual exclusion, hold and wait,
no preemption, and circular wait. Deadlocks can cause system stagnation and require detection and
resolution strategies like process termination, resource preemption, or prevention through careful
resource allocation.

What do you mean by CPU


CPU (Central Processing Unit) is the primary component of a computer that executes instructions of a
program. It performs arithmetic and logical operations, controls data flow, and manages execution
sequence according to program instructions. Modern CPUs contain multiple cores for parallel
processing, cache memory for faster data access, and specialized units for different operations. The
CPU works in conjunction with the operating system to execute processes and manage system
resources through context switching and scheduling.

Define Thread
A thread is the smallest unit of processing that can be scheduled by an operating system. It exists
within a process and shares the process's resources including memory space and opened files. Each
thread has its own stack, register set, and program counter. Multiple threads within a process can
execute concurrently, improving application responsiveness and performance on multi-core systems.
Threads enable parallel execution while requiring fewer resources than creating separate processes.

Define Scheduling
Scheduling is the process by which the operating system selects which process or thread should run
next on the CPU. The scheduler aims to optimize system performance metrics like throughput,
response time, and resource utilization. Scheduling algorithms include First-Come-First-Served,
Shortest Job First, Round Robin, Priority Scheduling, and Multilevel Queue. The scheduler makes
decisions during process state transitions (e.g., from running to waiting, or from ready to running) and
is crucial for multitasking environments.

Compare Process & Program


A program is a passive entity - a set of instructions stored on disk as an executable file. It's static code
and data that exists independently of execution. A process, in contrast, is an active entity - a program
in execution with its own address space, program counter, stack, and associated system resources.
While multiple processes can execute the same program simultaneously (each with its own resources),
a program is just the template from which processes are created. Processes have states (running,
ready, blocked), while programs do not.

Functions of Long-term Schedulers


Long-term schedulers (job schedulers) control the degree of multiprogramming by selecting which
processes from the job pool are loaded into memory for execution. Their key functions include:

Regulating the number of processes in memory to ensure optimal system load


Maintaining a good mix of CPU-bound and I/O-bound processes for resource utilization
Selecting processes from the submission queue to create processes in the ready queue
Controlling system stability by preventing memory overload
Making decisions about which jobs to admit based on resource requirements and availability

Compare Thread & Process


Threads and processes are both units of execution, but they differ significantly:

Threads share the same address space, code section, data section, and system resources within a
process; processes have independent address spaces
Thread creation and context switching is faster and less resource-intensive than process creation
Threads have individual program counters, register sets, and stacks but share heap memory;
processes have completely independent memory
Inter-thread communication is simpler (shared memory) compared to inter-process
communication
One thread's failure can affect all threads in the process; processes are protected from each
other's failures

What Do You Mean by File Sharing


File sharing is the system capability that allows multiple users or processes to access the same file
concurrently. It enables collaboration and resource efficiency by providing controlled access to
common data. Operating systems implement sharing through mechanisms like file permissions, locks,
and access modes. File sharing requires synchronization methods to prevent conflicts during
concurrent access, including read/write locks, record locks, or advisory locking. Modern distributed
systems extend file sharing across networks through protocols like NFS, SMB, or distributed file
systems.

What Do You Mean by Swapping


Swapping is a memory management technique where entire processes are temporarily transferred
from main memory to a secondary storage (swap space) when they are not immediately needed, and
brought back when required for execution. This allows more processes to be maintained in the
system than would fit in physical memory alone. The swapper (mid-term scheduler) makes decisions
about which processes to swap out based on priority, size, and activity. Swapping involves significant
overhead due to the time required to move entire processes between memory and disk.

What Do You Mean by Thrashing


Thrashing is a degraded state where the system spends more time swapping pages between memory
and disk than executing application code. It occurs when the working set of active processes exceeds
available physical memory, causing continuous page faults. During thrashing, CPU utilization drops
dramatically as processes wait for page transfers, system throughput collapses, and response time
increases significantly. Causes include insufficient physical memory, poor page replacement
algorithms, or too many active processes. Solutions include reducing the degree of
multiprogramming, implementing working set models, or adding more physical memory.

Compare the Random Mode in File


Random access mode in files allows direct access to any record without reading preceding data. It
works by calculating record positions using fixed-size records or index structures. Random access
enables efficient data retrieval for specific records, making it suitable for database applications and
interactive systems. It requires additional mechanisms for position management but provides fast
access to individual records. This contrasts with sequential access, which requires reading through all
preceding data to reach a specific position in the file.

Compare Random Mode in Page and Segmentation


In paging systems, random access occurs at the page level where any page can be accessed directly
through its page number and offset, enabling efficient memory utilization but potentially causing
internal fragmentation due to fixed page sizes. In segmentation, random access operates at the
logical segment level (functions, data structures) with variable-sized segments, eliminating internal
fragmentation but potentially causing external fragmentation. Paging uses page tables for address
translation while segmentation uses segment tables with base and limit registers. Hybrid systems like
Intel x86 combine both approaches to balance their advantages.

What Do You Mean by IPC


Inter-Process Communication (IPC) refers to mechanisms that allow processes to exchange data and
synchronize actions. IPC is necessary because processes have isolated address spaces for protection
and security. Common IPC methods include:

Shared memory: Fastest method where processes access common memory regions
Message passing: Communication through send/receive operations via queues
Pipes: Unidirectional data channels between related processes
Sockets: Communication endpoints for networked processes
Semaphores: Synchronization primitives for coordinating process access to shared resources
Signals: Software interrupts for asynchronous event notification between processes

Each IPC mechanism balances performance, flexibility, and implementation complexity for different
application requirements.
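
As a brief illustration of one of these mechanisms, a minimal pipe sketch: a unidirectional channel between a parent and the child it forks (the message text is arbitrary):

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) != 0) { perror("pipe"); return 1; }

    if (fork() == 0) {                       /* child: the writer */
        close(fd[0]);                        /* close unused read end */
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }

    close(fd[1]);                            /* parent: close write end */
    char buf[64];
    ssize_t n = read(fd[0], buf, sizeof(buf));
    if (n > 0) printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);                              /* reap the child */
    return 0;
}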
