Summer 2023

1. Attempt any FIVE of the following: (10)

a) State any two features of Linux.

1. Multitasking:
Linux supports multitasking, allowing multiple processes to run simultaneously. It
uses process scheduling algorithms to manage CPU resources and ensures all
processes are given fair access to the CPU. This feature enhances the overall
performance of the system.
2. Security:
Linux is known for its robust security mechanisms. It uses user authentication
(passwords), access control lists (ACLs), and the principle of least privilege to protect
system resources. It also has built-in tools like SELinux (Security-Enhanced Linux)
for enforcing mandatory access control policies.

b) Difference between Time Sharing System and Real-Time System (any 2 points)

Time Sharing System:
1. The CPU is shared among multiple users or tasks, ensuring that each process receives a portion of CPU time.
2. The system focuses on fairness and responsiveness, allowing users to interact with processes.
3. Example: Unix-based systems, where many users can access the system simultaneously.

Real-Time System:
1. Designed to process data with minimal delay to meet strict timing constraints (e.g., deadlines).
2. Focuses on meeting deadlines, with tasks given priorities to ensure timely completion.
3. Example: Flight control systems, where actions must happen within a specific time frame.

c) State any four services of the operating system.

1. Process Management:
The operating system is responsible for managing processes, including process
creation, scheduling, synchronization, and termination. It ensures that processes are
executed in an efficient and fair manner.
2. Memory Management:
The operating system allocates and deallocates memory to processes. It manages the
physical and virtual memory, ensuring that processes don’t interfere with each other’s
memory space.
3. File Management:
The operating system provides services to create, delete, read, write, and modify files.
It organizes data into directories for easy access and handles the permissions to
protect file integrity.
4. Security Management:
The OS ensures the protection of data and resources from unauthorized access. It uses
encryption, authentication, user permissions, and auditing mechanisms to maintain the
integrity of the system.

d) Write the difference between pre-emptive and non-preemptive scheduling.

Pre-emptive Scheduling:
1. The CPU can be taken away from a process that is currently executing if a higher-priority process arrives.
2. Commonly used in systems requiring responsiveness and fairness, such as real-time systems or multi-user systems.
3. Examples: Round Robin, Shortest Remaining Time First (SRTF).

Non-Preemptive Scheduling:
1. Once a process starts executing, it runs to completion or until it voluntarily gives up control of the CPU.
2. Simpler to implement, but can lead to inefficient CPU usage as a process may occupy the CPU longer than necessary.
3. Examples: First Come First Serve (FCFS), Shortest Job First (SJF).

e) Define the following terms:

i) Virtual Memory
Virtual memory is a memory management technique that allows the system to compensate
for physical memory shortages by temporarily transferring data from RAM to disk storage.
This enables large applications to run by giving the illusion of more memory than is
physically available.

ii) Paging
Paging is a memory management scheme that divides both the physical memory and the
logical memory of a process into fixed-sized blocks. The logical memory is divided into
pages, and the physical memory is divided into frames. Pages are mapped to frames, enabling
non-contiguous allocation of memory.
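
As an illustrative sketch, the address translation that paging performs can be shown in a few lines of C; the 4 KB page size and the tiny page table below are assumed values, not part of the definition:

#include <stdio.h>

/* Illustrative sketch of paged address translation, assuming a 4 KB page
 * size and a tiny hypothetical page table. In a real system the MMU and
 * the OS-maintained page table perform this lookup. */
#define PAGE_SIZE 4096UL

int main(void) {
    unsigned long logical = 10000;               /* example logical address */
    unsigned long page    = logical / PAGE_SIZE; /* page number             */
    unsigned long offset  = logical % PAGE_SIZE; /* offset within the page  */

    unsigned long page_table[] = {5, 9, 7, 3};   /* page i -> frame (assumed) */
    unsigned long physical = page_table[page] * PAGE_SIZE + offset;

    printf("page=%lu offset=%lu physical=%lu\n", page, offset, physical);
    return 0;
}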

f) List any four file attributes and their meanings.

1. File Name:
The name of the file, which uniquely identifies it in a directory or file system.
2. File Type:
Specifies the type of file (e.g., text, binary, executable). It helps in defining how the
file is used by the system.
3. File Size:
The amount of disk space that the file occupies, measured in bytes or kilobytes.
4. Permissions:
Defines the access control of the file. This includes who can read, write, or execute
the file. In Unix-like systems, it uses read, write, and execute permissions for the
owner, group, and others.
g) Define Deadlock.

A deadlock occurs when a set of processes are blocked because each process is holding a
resource and waiting for another resource held by another process. This cycle of waiting
prevents any of the processes from continuing execution, thus leading to a system deadlock.

Conditions for deadlock:

1. Mutual Exclusion: At least one resource must be held in a non-shareable mode.


2. Hold and Wait: A process holding one resource is waiting for additional resources
held by other processes.
3. No Preemption: Resources cannot be forcibly taken from processes holding them.
4. Circular Wait: A set of processes exists where each process is waiting for a resource
held by the next process in the set.

2. Attempt any THREE of the following: (12)

a) Explain different types of system calls.

1. Process Control:
These system calls manage processes, including creation, termination, and control.
Examples:
o fork(): Creates a new process by duplicating the calling process.
o exit(): Terminates a process.
2. File Management:
These system calls manage file operations, such as opening, closing, reading, and
writing files.
Examples:
o open(): Opens a file.
o read(): Reads data from a file.
3. Device Management:
These system calls interact with hardware devices, such as disk drives or printers.
Examples:
o ioctl(): Provides control over a device.
o read(): Reads data from a device.
4. Information Maintenance:
These system calls handle tasks like querying system information.
Examples:
o getpid(): Returns the process ID.
o time(): Returns the current time.
5. Communication:
These system calls facilitate interprocess communication.
Examples:
o pipe(): Creates an inter-process communication channel.
o msgget(): Creates a message queue.
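
An illustrative C sketch (not part of the standard answer) that combines several of the calls listed above, fork(), getpid(), wait(), and exit(), on a Unix-like system:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                        /* process control: create a child  */
    if (pid == 0) {
        printf("child pid = %d\n", (int)getpid());  /* information maintenance */
        exit(0);                               /* process control: terminate child */
    }
    wait(NULL);                                /* parent waits for the child       */
    printf("parent pid = %d\n", (int)getpid());
    return 0;
}
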
b) Draw and explain process control block in detail.

The Process Control Block (PCB) is a data structure used by the operating system to
manage the execution of a process. It stores all the information about a process needed by the
OS to manage it.

Components of PCB:

1. Process ID (PID): A unique identifier for the process.


2. Program Counter (PC): Keeps track of the address of the next instruction to be
executed.
3. CPU Registers: Includes data from the CPU registers that the process was using.
4. Memory Management Information: Information like page tables or segment tables
that are used for memory allocation.
5. Scheduling Information: Stores the priority and state of the process (e.g., ready,
running, waiting).
6. I/O Status Information: Tracks information about the I/O devices the process is
using, such as open file descriptors.
7. Process State: Describes the current state of the process (e.g., running, ready,
waiting).

Diagram of PCB:

+-------------------+
| Process ID (PID) |
+-------------------+
| Program Counter |
+-------------------+
| CPU Registers |
+-------------------+
| Memory Info |
+-------------------+
| Scheduling Info |
+-------------------+
| I/O Status Info |
+-------------------+
| Process State |
+-------------------+
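
A hypothetical C structure can make these fields concrete; real kernels (for example, Linux's task_struct) are far larger, so the field names and sizes below are assumptions:

/* Hypothetical PCB layout mirroring the fields listed above. */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int           pid;              /* Process ID                         */
    unsigned long program_counter;  /* address of the next instruction    */
    unsigned long registers[16];    /* saved CPU registers (assumed 16)   */
    void         *page_table;       /* memory-management information      */
    int           priority;         /* scheduling information             */
    proc_state_t  state;            /* process state                      */
    int           open_files[32];   /* I/O status: open file descriptors  */
    struct pcb   *next;             /* link in a scheduler queue          */
} pcb_t;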

c) State and explain four scheduling criteria.

1. CPU Utilization:
The percentage of time the CPU is being actively used. The higher the CPU
utilization, the better the system is utilized. The OS should aim to keep the CPU
utilization high, ideally above 90%.
2. Throughput:
The number of processes completed per unit of time. Higher throughput means that
more processes are being executed, improving system efficiency.
3. Turnaround Time:
The total time taken from the submission of a process to its completion. This includes
waiting time, execution time, and any time spent in other states. It is desirable to
minimize turnaround time for better system performance.
4. Waiting Time:
The total time a process spends waiting in the ready queue before it gets executed.
Minimizing waiting time is essential for improving the responsiveness of the system.

d) Define fragmentation. Explain Internal and External Fragmentation.

Fragmentation refers to the inefficient use of memory space, which can occur in systems
that use dynamic memory allocation.

1. Internal Fragmentation:
This occurs when fixed-size memory blocks are allocated, but a process requires less
memory than the block size, leaving unused space within the allocated memory. This
results in wasted memory within each allocated block.
2. External Fragmentation:
This happens when free memory is divided into small, non-contiguous blocks over
time, leading to wasted memory. Even though the total amount of free memory may
be sufficient for a process, it may be scattered, causing an allocation failure.
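
For example (the numbers are assumed for illustration): with fixed 4 KB blocks, a process that needs only 3 KB leaves 1 KB unused inside its block (internal fragmentation); conversely, three scattered free holes of 2 KB each cannot satisfy a single 5 KB request even though 6 KB is free in total (external fragmentation).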

3. Attempt any THREE of the following: (12)

a) Describe multiprocessor OS with its advantages.

A multiprocessor operating system manages systems with multiple processors (CPUs) that
can work concurrently. The OS coordinates the processes and the use of processors for tasks
such as load balancing, synchronization, and resource sharing.

Advantages:

1. Increased Throughput: With multiple processors working in parallel, more
processes can be handled in less time, resulting in better throughput.
2. Fault Tolerance: If one processor fails, the remaining processors can take over,
ensuring that the system continues to function.

b) Explain different components of the operating system.

1. Kernel:
The core part of the operating system that manages system resources such as CPU,
memory, and I/O devices. It provides low-level services for processes, file
management, and memory management.
2. Shell:
The user interface that allows users to interact with the kernel, typically through
command-line instructions or graphical user interfaces (GUI).
3. File System:
The component responsible for organizing files and directories on storage devices. It
provides methods for file creation, deletion, access, and management.
4. Device Drivers:
Software components that facilitate communication between the operating system and
hardware devices (e.g., printers, disk drives, and network interfaces).
5. User Interface:
Provides an interface (CLI or GUI) for users to interact with the system. It acts as the
bridge between the user and the OS.

c) Explain different types of schedulers.

1. Long-Term Scheduler:
Responsible for admitting processes to the ready queue from the job pool. It controls
the degree of multiprogramming by deciding which processes should be brought into
memory for execution.
2. Short-Term Scheduler:
Also known as the CPU scheduler, it decides which process from the ready queue will
be executed next by the CPU.
3. Medium-Term Scheduler:
Manages the swapping of processes between memory and disk. It temporarily
removes processes from memory to balance system load and optimize performance.

d) Explain two-level directory structure with a suitable diagram.

In a two-level directory structure, the file system is organized into two levels:

1. Master File Directory (MFD): The top level, which contains one entry (a user file directory) for each user of the system.
2. User File Directories (UFD): Each user has a separate directory that lists only that user's files. File names need to be unique only within a user's own directory, so different users can have files with the same name.

Diagram:

              Master File Directory
              /                   \
        User1 (UFD)           User2 (UFD)
         /       \             /        \
      File1     File2       File3      File4

4. Attempt any THREE of the following: (12)

a) Explain real-time OS. Explain its types.

A Real-Time Operating System (RTOS) is an operating system designed to process data and
respond to inputs within strict timing constraints, typically where timely responses are critical
for the system’s operation. In a real-time system, it’s not just the correctness of the output
that matters, but also the time it takes to generate the output.

Types of Real-Time Operating Systems:

1. Hard Real-Time OS:


o In a hard real-time system, it is imperative that the task must be completed
within the specified time limits (deadline). If a task misses its deadline, the
system might fail to perform its intended function.
o Example: Airplane flight control systems, pacemakers, etc.
2. Soft Real-Time OS:
o Soft real-time systems do not have rigid deadlines. Missing a deadline is
undesirable, but not catastrophic. Performance can degrade but it does not lead
to system failure.
o Example: Video streaming, online gaming, etc.
3. Firm Real-Time OS:
o These systems have deadlines, but missing a deadline does not cause complete
system failure. However, repeated misses may degrade system performance
significantly.
o Example: Telephone switching systems.

b) Draw process state diagram and describe each state.

A Process State Diagram shows the different states a process can be in during its lifecycle.
These states are handled by the operating system’s scheduler.

Process States:

1. New:
The process is being created. It’s the initial state before it is moved to the ready
queue.
2. Ready:
The process is ready to be executed but is waiting for CPU time. It is waiting in the
ready queue.
3. Running:
The process is currently being executed by the CPU.
4. Waiting (Blocked):
The process is waiting for some event (e.g., I/O operation) to complete. It cannot
proceed until the event occurs.
5. Terminated:
The process has finished execution and is no longer in the system. It has completed its
task and will be removed from the process table.
Process State Diagram:

+-------+     +-------+     +---------+     +------------+
|  New  |---->| Ready |---->| Running |---->| Terminated |
+-------+     +-------+     +---------+     +------------+
                  ^              |
                  |              v
                  |         +---------+
                  +---------| Waiting |
                            +---------+

c) Describe any four conditions for deadlock.

Deadlock is a situation where a set of processes are blocked because each process is holding
a resource and waiting for another resource that another process holds.

Four Necessary Conditions for Deadlock:

1. Mutual Exclusion:
At least one resource must be in a non-shareable mode, i.e., only one process can use
the resource at a time. If another process requests the resource, it must wait until the
first process releases it.
2. Hold and Wait:
A process holding one resource is waiting for additional resources that are currently
being held by other processes.
3. No Preemption:
Resources cannot be forcibly taken away from the processes holding them. A process
can release the resource voluntarily, but not preemptively by the OS.
4. Circular Wait:
A set of processes exists where each process is holding a resource and waiting for
another resource in the set. This creates a circular chain of waiting processes.
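
A minimal C sketch of how these conditions combine in practice, using two POSIX mutexes acquired in opposite order by two threads (the thread and lock names are illustrative):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Two resources, represented by two mutexes. */
pthread_mutex_t res_a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t res_b = PTHREAD_MUTEX_INITIALIZER;

void *worker1(void *arg) {
    pthread_mutex_lock(&res_a);      /* holds A ...          */
    sleep(1);
    pthread_mutex_lock(&res_b);      /* ... and waits for B  */
    pthread_mutex_unlock(&res_b);
    pthread_mutex_unlock(&res_a);
    return NULL;
}

void *worker2(void *arg) {
    pthread_mutex_lock(&res_b);      /* holds B ...          */
    sleep(1);
    pthread_mutex_lock(&res_a);      /* ... and waits for A  */
    pthread_mutex_unlock(&res_a);
    pthread_mutex_unlock(&res_b);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker1, NULL);
    pthread_create(&t2, NULL, worker2, NULL);
    /* Mutual exclusion, hold and wait, no preemption, and circular wait
     * all hold here, so both threads block forever and the joins never
     * return. */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("never reached\n");
    return 0;
}

Breaking any one of the four conditions, for example by making both threads acquire the locks in the same fixed order (which removes circular wait), prevents the deadlock.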

d) Differentiate between paging and segmentation (any 4 points)

Paging:
1. Memory is divided into fixed-size blocks: pages (logical) and frames (physical).
2. The process is divided into pages, and each page maps to a frame in physical memory.
3. The page table maps logical pages to physical frames.
4. Paging eliminates external fragmentation by allocating memory in fixed-size blocks, but it can cause internal fragmentation.
5. Paging is used in modern systems for efficient memory management.

Segmentation:
1. Memory is divided into variable-size blocks called segments.
2. Each segment represents a logical unit such as code, data, or stack.
3. The segment table maps logical segments to physical memory locations.
4. Segmentation allows for natural grouping of data, but it can lead to external fragmentation.
5. Segmentation is more flexible but can be less efficient in terms of memory allocation.

e) Explain fixed and variable memory management.

Fixed Memory Management:


In fixed memory management, memory is divided into fixed-size blocks. Each process is
assigned a fixed-sized block of memory, and the OS allocates and deallocates these blocks
when needed.

 Advantages:
o Simpler to implement.
o Less overhead in managing memory.
 Disadvantages:
o Wastage of memory due to fixed block sizes (internal fragmentation).
o Limited flexibility.

Variable Memory Management:


In variable memory management, memory is allocated in variable-sized blocks according to
the process requirements. The OS dynamically allocates memory to processes based on their
demand.

 Advantages:
o More efficient as it allocates exactly the amount of memory required.
o Reduces internal fragmentation.
 Disadvantages:
o Can lead to external fragmentation.
o More complex to implement and manage.

5. Attempt any TWO of the following: (12)

a) Explain the working of interprocess communication considering:

i) Shared Memory

 In shared memory, multiple processes are allowed to access the same memory space.
Communication is done by writing to and reading from this shared memory region.
 Steps:
1. A process creates a shared memory segment (using system calls like shmget in
Unix/Linux).
2. Other processes can attach to this segment (using shmat).
3. Processes communicate by writing and reading data from this segment.
4. Once communication is done, the processes detach (shmdt) and the shared
memory is removed (shmctl).

Advantages:

 Faster communication, as no data needs to be copied.


 Efficient for large data transfers.

Disadvantages:

 Synchronization is required to avoid data corruption.


 Shared memory needs careful management to avoid conflicts.
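
A minimal System V shared-memory sketch following the steps above; error handling is omitted, and the key value 0x1234 and the 4096-byte size are arbitrary example choices:

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void) {
    /* 1. Create (or open) a shared memory segment. */
    int shmid = shmget((key_t)0x1234, 4096, IPC_CREAT | 0666);

    /* 2. Attach the segment into this process's address space. */
    char *shm = (char *)shmat(shmid, NULL, 0);

    /* 3. Communicate by writing into the shared region; another process
     *    attaching the same key can read this string directly. */
    strcpy(shm, "hello from shared memory");
    printf("wrote: %s\n", shm);

    /* 4. Detach and (for this demo) remove the segment. */
    shmdt(shm);
    shmctl(shmid, IPC_RMID, NULL);
    return 0;
}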

ii) Message Passing

 Message passing involves communication between processes using messages. Each
process has its own memory space, and communication is done by sending and
receiving messages through a communication channel.
 Steps:
1. One process sends a message to a mailbox or queue.
2. Another process retrieves the message from the queue or mailbox.
3. The communication may be synchronous (blocking) or asynchronous (non-
blocking).

Advantages:

 No shared memory issues; each process has its own space.


 Suitable for distributed systems.

Disadvantages:

 Can be slower compared to shared memory due to the need to copy messages.
 Message passing overhead.
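
A sketch of message passing between a parent and a child process using pipe() on a Unix-like system (illustrative only; the buffer size and message text are arbitrary):

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    pipe(fd);                        /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {
        /* child: sends a message and exits */
        const char *msg = "hello via message passing";
        close(fd[0]);
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }

    /* parent: receives the message (blocking read) */
    char buf[64];
    close(fd[1]);
    read(fd[0], buf, sizeof(buf));
    close(fd[0]);
    wait(NULL);
    printf("received: %s\n", buf);
    return 0;
}
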

b) With neat diagram explain multilevel queue scheduling.

Multilevel Queue Scheduling is a CPU scheduling algorithm that divides the ready queue
into several smaller queues, each with different priority levels. Each queue has its own
scheduling algorithm, and processes are placed in these queues based on certain
characteristics (e.g., priority, type of process).

Multilevel Queue Scheduling Diagram:

+------------------+      +------------------+      +------------------+
|  High Priority   |      | Medium Priority  |      |   Low Priority   |
|      Queue       | ---> |      Queue       | ---> |      Queue       |
|  (Preemptive)    |      |  (Round Robin)   |      |      (FIFO)      |
+------------------+      +------------------+      +------------------+

 High Priority Queue: Processes in this queue are executed first. It may use
preemptive scheduling.
 Medium Priority Queue: Processes here may use a round-robin algorithm.
 Low Priority Queue: This queue processes tasks using a simple FIFO scheduling.

Advantages:

 Different types of processes can be handled efficiently.


 Priority scheduling ensures important tasks are processed first.

Disadvantages:

 It can lead to starvation for processes in lower-priority queues.


 Complex to implement.

c) Write two uses of the following operating system tools:

i) Security Policy

 Access Control: Defines who can access system resources and the types of access
(read, write, execute).
 Audit Logs: Tracks user activities and helps in monitoring for unauthorized access or
malicious activity.

ii) User Management

 User Authentication: Ensures that only authorized users can access the system
through usernames and passwords or biometric data.
 Permissions Management: Assigns specific access rights to different users for
resources such as files and devices.

iii) Task Scheduler

 Process Scheduling: Decides when and which process gets access to the CPU.
 Automating Tasks: Schedules repetitive tasks (like backups or updates) to run at
predefined intervals without user intervention.

6. Attempt any TWO of the following: (12)

a) List file allocation method and explain any one in detail.

There are several file allocation methods in operating systems. Below are the most commonly
used:
1. Contiguous Allocation
2. Linked Allocation
3. Indexed Allocation

Let's explain Contiguous Allocation in detail.

Contiguous Allocation:
In the contiguous allocation method, each file is stored in a set of contiguous blocks on the
disk. The operating system maintains a table that maps files to the disk's start block and the
length (number of contiguous blocks) of the file. This method is the simplest to implement,
and the disk blocks for a file are stored consecutively.

Advantages of Contiguous Allocation:

1. Fast Access Time: Since the blocks are contiguous, the read/write head can move
sequentially, making access faster.
2. Simple Implementation: The file's data is stored in a linear fashion, so the allocation
and deallocation are easier to manage.

Disadvantages of Contiguous Allocation:

1. External Fragmentation: Over time, as files are created and deleted, the disk may
develop gaps, leading to fragmentation, and finding a large enough contiguous free
block becomes difficult.
2. Limited File Size Expansion: If a file grows larger than its originally allocated space,
it must be reallocated to a new block, which could cause performance issues and
additional complexity.
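
A small sketch of how a contiguous-allocation directory entry can translate a byte offset into a disk block number (the structure fields and the 512-byte block size are assumptions for illustration):

#include <stdio.h>

/* Hypothetical directory entry for contiguous allocation:
 * the file occupies blocks [start, start + length - 1]. */
struct dir_entry {
    const char *name;
    long start;     /* first disk block of the file */
    long length;    /* number of contiguous blocks  */
};

#define BLOCK_SIZE 512L   /* assumed block size in bytes */

/* Translate a byte offset within the file to a disk block number.
 * Returns -1 if the offset lies outside the file. */
long offset_to_block(const struct dir_entry *f, long byte_offset) {
    long rel_block = byte_offset / BLOCK_SIZE;
    if (rel_block >= f->length)
        return -1;
    return f->start + rel_block;
}

int main(void) {
    struct dir_entry f = {"report.txt", 120, 8};   /* example entry */
    printf("byte 2000 of %s is in disk block %ld\n",
           f.name, offset_to_block(&f, 2000));     /* 120 + 3 = 123 */
    return 0;
}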

b) For the page reference string:

7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1

Calculate the page faults applying:

i) Optimal Page Replacement Algorithm:

The Optimal Page Replacement Algorithm replaces the page that will not be used for the
longest period in the future. It is considered the best page replacement strategy but requires
future knowledge of page references.

We are given 3 frames. Let's calculate page faults:

Initial state (empty frames):

 Pages in frames: [ ]

1. Page Reference: 7 → Page Fault.
Frames: [7]
2. Page Reference: 0 → Page Fault.
Frames: [7, 0]
3. Page Reference: 1 → Page Fault.
Frames: [7, 0, 1]
4. Page Reference: 2 → Page Fault. Replace 7 (its next use, at reference 18, is farthest in the future).
Frames: [2, 0, 1]
5. Page Reference: 0 → No Page Fault (0 is already in the frame).
Frames: [2, 0, 1]
6. Page Reference: 3 → Page Fault. Replace 1 (among 2, 0 and 1, the next use of 1 is farthest in the future).
Frames: [2, 0, 3]
7. Page Reference: 0 → No Page Fault (0 is already in the frame).
Frames: [2, 0, 3]
8. Page Reference: 4 → Page Fault. Replace 0 (2 and 3 are referenced sooner; the next use of 0 is farthest in the future).
Frames: [2, 4, 3]
9. Page Reference: 2 → No Page Fault (2 is already in the frame).
Frames: [2, 4, 3]
10. Page Reference: 3 → No Page Fault (3 is already in the frame).
Frames: [2, 4, 3]
11. Page Reference: 0 → Page Fault. Replace 4 (4 is not used again).
Frames: [2, 0, 3]
12. Page Reference: 3 → No Page Fault (3 is already in the frame).
Frames: [2, 0, 3]
13. Page Reference: 2 → No Page Fault (2 is already in the frame).
Frames: [2, 0, 3]
14. Page Reference: 1 → Page Fault. Replace 3 (3 is not used again).
Frames: [2, 0, 1]
15. Page Reference: 2 → No Page Fault (2 is already in the frame).
Frames: [2, 0, 1]
16. Page Reference: 0 → No Page Fault (0 is already in the frame).
Frames: [2, 0, 1]
17. Page Reference: 1 → No Page Fault (1 is already in the frame).
Frames: [2, 0, 1]
18. Page Reference: 7 → Page Fault. Replace 2 (2 is not used again).
Frames: [7, 0, 1]
19. Page Reference: 0 → No Page Fault (0 is already in the frame).
Frames: [7, 0, 1]
20. Page Reference: 1 → No Page Fault (1 is already in the frame).
Frames: [7, 0, 1]

Total Page Faults (Optimal): 9

ii) LRU (Least Recently Used) Page Replacement Algorithm:

The LRU (Least Recently Used) page replacement algorithm replaces the page that has not
been used for the longest period of time.

Let’s calculate the page faults for 3 frames using LRU:


Initial state (empty frames):

 Pages in frames: [ ]

1. Page Reference: 7 → Page Fault.
Frames: [7]
2. Page Reference: 0 → Page Fault.
Frames: [7, 0]
3. Page Reference: 1 → Page Fault.
Frames: [7, 0, 1]
4. Page Reference: 2 → Page Fault. Replace 7 (because 7 is the least recently used).
Frames: [2, 0, 1]
5. Page Reference: 0 → No Page Fault (0 is already in the frame).
Frames: [2, 0, 1]
6. Page Reference: 3 → Page Fault. Replace 1 (because 1 is the least recently used).
Frames: [2, 0, 3]
7. Page Reference: 0 → No Page Fault (0 is already in the frame).
Frames: [2, 0, 3]
8. Page Reference: 4 → Page Fault. Replace 2 (because 2 is the least recently used).
Frames: [4, 0, 3]
9. Page Reference: 2 → Page Fault. Replace 3 (because 3 is the least recently used).
Frames: [4, 0, 2]
10. Page Reference: 3 → Page Fault. Replace 0 (because 0 is the least recently used).
Frames: [4, 3, 2]
11. Page Reference: 0 → Page Fault. Replace 4 (because 4 is the least recently used).
Frames: [0, 3, 2]
12. Page Reference: 3 → No Page Fault (3 is already in the frame).
Frames: [0, 3, 2]
13. Page Reference: 2 → No Page Fault (2 is already in the frame).
Frames: [0, 3, 2]
14. Page Reference: 1 → Page Fault. Replace 0 (0 was last used at reference 11, making it the least recently used).
Frames: [1, 3, 2]
15. Page Reference: 2 → No Page Fault (2 is already in the frame).
Frames: [1, 3, 2]
16. Page Reference: 0 → Page Fault. Replace 3 (3 is the least recently used).
Frames: [1, 0, 2]
17. Page Reference: 1 → No Page Fault (1 is already in the frame).
Frames: [1, 0, 2]
18. Page Reference: 7 → Page Fault. Replace 2 (2 is the least recently used).
Frames: [1, 0, 7]
19. Page Reference: 0 → No Page Fault (0 is already in the frame).
Frames: [1, 0, 7]
20. Page Reference: 1 → No Page Fault (1 is already in the frame).
Frames: [1, 0, 7]

Total Page Faults (LRU): 12
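
The count above can be cross-checked with a short simulation; the following C sketch (3 frames, timestamps to track recency) is illustrative and prints 12 for this reference string:

#include <stdio.h>

#define FRAMES 3

int main(void) {
    int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    int n = sizeof(ref) / sizeof(ref[0]);
    int frame[FRAMES], last_used[FRAMES];
    int faults = 0;

    for (int i = 0; i < FRAMES; i++) frame[i] = -1;   /* empty frames */

    for (int t = 0; t < n; t++) {
        int hit = -1, victim = 0;
        for (int i = 0; i < FRAMES; i++)
            if (frame[i] == ref[t]) hit = i;
        if (hit >= 0) {
            last_used[hit] = t;                       /* refresh recency */
            continue;
        }
        faults++;
        /* pick an empty frame if one exists, else the least recently used */
        for (int i = 0; i < FRAMES; i++) {
            if (frame[i] == -1) { victim = i; break; }
            if (last_used[i] < last_used[victim])
                victim = i;
        }
        frame[victim] = ref[t];
        last_used[victim] = t;
    }
    printf("LRU page faults: %d\n", faults);          /* prints 12 */
    return 0;
}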

iii) FIFO (First In, First Out) Page Replacement Algorithm:


The FIFO (First In, First Out) page replacement algorithm replaces the oldest page in the
memory (the page that has been in memory the longest).

Let’s calculate the page faults for 3 frames using FIFO:

Initial state (empty frames):

 Pages in frames: [ ]

1. Page Reference: 7 → Page Fault.
Frames: [7]
2. Page Reference: 0 → Page Fault.
Frames: [7, 0]
3. Page Reference: 1 → Page Fault.
Frames: [7, 0, 1]
4. Page Reference: 2 → Page Fault. Replace 7 (because 7 is the oldest).
Frames: [2, 0, 1]
5. Page Reference: 0 → No Page Fault (0 is already in the frame).
Frames: [2, 0, 1]
6. Page Reference: 3 → Page Fault. Replace 0 (0 is the oldest, loaded at step 2).
Frames: [2, 3, 1]
7. Page Reference: 0 → Page Fault. Replace 1 (1 is the oldest, loaded at step 3).
Frames: [2, 3, 0]
8. Page Reference: 4 → Page Fault. Replace 2 (2 is the oldest, loaded at step 4).
Frames: [4, 3, 0]
9. Page Reference: 2 → Page Fault. Replace 3 (3 is the oldest, loaded at step 6).
Frames: [4, 2, 0]
10. Page Reference: 3 → Page Fault. Replace 0 (0 is the oldest, loaded at step 7).
Frames: [4, 2, 3]
11. Page Reference: 0 → Page Fault. Replace 4 (4 is the oldest, loaded at step 8).
Frames: [0, 2, 3]
12. Page Reference: 3 → No Page Fault (3 is already in the frame).
Frames: [0, 2, 3]
13. Page Reference: 2 → No Page Fault (2 is already in the frame).
Frames: [0, 2, 3]
14. Page Reference: 1 → Page Fault. Replace 2 (2 is the oldest, loaded at step 9).
Frames: [0, 1, 3]
15. Page Reference: 2 → Page Fault. Replace 3 (3 is the oldest, loaded at step 10).
Frames: [0, 1, 2]
16. Page Reference: 0 → No Page Fault (0 is already in the frame).
Frames: [0, 1, 2]
17. Page Reference: 1 → No Page Fault (1 is already in the frame).
Frames: [0, 1, 2]
18. Page Reference: 7 → Page Fault. Replace 0 (0 is the oldest, loaded at step 11).
Frames: [7, 1, 2]
19. Page Reference: 0 → Page Fault. Replace 1 (1 is the oldest, loaded at step 14).
Frames: [7, 0, 2]
20. Page Reference: 1 → Page Fault. Replace 2 (2 is the oldest, loaded at step 15).
Frames: [7, 0, 1]
Total Page Faults (FIFO): 15
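
Likewise, a short C sketch of FIFO replacement (3 frames, a circular pointer to the oldest page) reproduces the count above; it is illustrative only:

#include <stdio.h>

#define FRAMES 3

int main(void) {
    int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    int n = sizeof(ref) / sizeof(ref[0]);
    int frame[FRAMES] = {-1, -1, -1};   /* empty frames */
    int next = 0;                       /* index of the oldest page */
    int faults = 0;

    for (int t = 0; t < n; t++) {
        int hit = 0;
        for (int i = 0; i < FRAMES; i++)
            if (frame[i] == ref[t]) hit = 1;
        if (hit) continue;              /* FIFO does not reorder on a hit */
        frame[next] = ref[t];           /* evict the oldest page           */
        next = (next + 1) % FRAMES;     /* advance the FIFO pointer        */
        faults++;
    }
    printf("FIFO page faults: %d\n", faults);   /* prints 15 */
    return 0;
}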

c) Consider the four processes P1, P2, P3, and P4 with the following burst times. Find
out the average waiting time and average turn around time for the following algorithms.

Processes:

 P1: Arrival Time = 0, Burst Time = 8


 P2: Arrival Time = 1, Burst Time = 4
 P3: Arrival Time = 2, Burst Time = 9
 P4: Arrival Time = 3, Burst Time = 5

i) FCFS (First-Come, First-Served) Scheduling:

In FCFS, processes are scheduled in the order of their arrival times.

Gantt Chart:

| P1 | P2 | P3 | P4 |
0 8 12 21 26

 Turnaround Time (TAT) = Completion Time - Arrival Time


 Waiting Time (WT) = Turnaround Time - Burst Time

For each process:

 P1:
o TAT = 8 - 0 = 8
o WT = 8 - 8 = 0
 P2:
o TAT = 12 - 1 = 11
o WT = 11 - 4 = 7
 P3:
o TAT = 21 - 2 = 19
o WT = 19 - 9 = 10
 P4:
o TAT = 26 - 3 = 23
o WT = 23 - 5 = 18

Average Turnaround Time (TAT) = (8 + 11 + 19 + 23) / 4 = 15.25

Average Waiting Time (WT) = (0 + 7 + 10 + 18) / 4 = 8.75
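
A small C sketch (not required by the question) that reproduces the FCFS averages above from the given arrival and burst times:

#include <stdio.h>

int main(void) {
    /* Arrival and burst times from the question, in arrival order. */
    int arrival[] = {0, 1, 2, 3};
    int burst[]   = {8, 4, 9, 5};
    int n = 4, time = 0;
    double total_tat = 0, total_wt = 0;

    for (int i = 0; i < n; i++) {
        if (time < arrival[i])        /* CPU idle until the process arrives */
            time = arrival[i];
        time += burst[i];             /* completion time under FCFS         */
        int tat = time - arrival[i];  /* turnaround = completion - arrival  */
        int wt  = tat - burst[i];     /* waiting = turnaround - burst       */
        printf("P%d: TAT=%d WT=%d\n", i + 1, tat, wt);
        total_tat += tat;
        total_wt  += wt;
    }
    printf("Average TAT = %.2f, Average WT = %.2f\n",
           total_tat / n, total_wt / n);   /* 15.25 and 8.75 */
    return 0;
}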

ii) RR (Round Robin) Scheduling (Quantum = 4ms):


Gantt Chart for RR with a quantum of 4 ms:

| P1 | P2 | P3 | P4 | P1 | P3 | P4 | P3 |
0    4    8    12   16   20   24   25   26

(P2 completes at 8, P1 at 20, P4 at 25, and P3 at 26.)

 P1:
o TAT = 20 - 0 = 20
o WT = 20 - 8 = 12
 P2:
o TAT = 8 - 1 = 7
o WT = 7 - 4 = 3
 P3:
o TAT = 26 - 2 = 24
o WT = 24 - 9 = 15
 P4:
o TAT = 25 - 3 = 22
o WT = 22 - 5 = 17

Average Turnaround Time (TAT) = (20 + 7 + 24 + 22) / 4 = 18.25

Average Waiting Time (WT) = (12 + 3 + 15 + 17) / 4 = 11.75

iii) SJF (Shortest Job First) Scheduling:

In SJF, the process with the shortest burst time is scheduled first. If two processes have the
same burst time, FCFS is applied.

Gantt Chart:

| P1 | P2 | P4 | P3 |
0 8 12 17 26

 P1:
o TAT = 8 - 0 = 8
o WT = 8 - 8 = 0
 P2:
o TAT = 12 - 1 = 11
o WT = 11 - 4 = 7
 P4:
o TAT = 17 - 3 = 14
o WT = 14 - 5 = 9
 P3:
o TAT = 26 - 2 = 24
o WT = 24 - 9 = 15

Average Turnaround Time (TAT) = (8 + 11 + 14 + 24) / 4 = 14.25

Average Waiting Time (WT) = (0 + 7 + 9 + 15) / 4 = 7.75
