
OPERATING SYSTEM PYQ (2022-2023)

MODULE 1: [Fundamentals of Operating System]


Q.1. Explain UNIX OS kernel.
Ans. The UNIX operating system (OS) kernel is a crucial component that serves as the core of the
UNIX operating system. The kernel is responsible for managing hardware resources, providing
essential services to other parts of the operating system, and facilitating communication between
software and hardware components.
It helps in process management, memory management and file management.
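To make the user–kernel relationship concrete, here is a minimal sketch in C (assuming a POSIX/UNIX environment); fork(), getpid() and wait() are system calls that trap into the kernel, which then performs process creation, scheduling and cleanup on the program's behalf:

/* Minimal sketch: a user program requesting kernel services via system calls.
 * fork(), getpid() and wait() all trap into the UNIX kernel, which performs
 * process management on the program's behalf. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();               /* ask the kernel to create a new process */
    if (pid < 0) {
        perror("fork");               /* the kernel refused the request */
        exit(1);
    } else if (pid == 0) {
        /* child: the kernel schedules this process independently */
        printf("child  pid=%d\n", (int)getpid());
        exit(0);
    } else {
        /* parent: ask the kernel to wait until the child terminates */
        wait(NULL);
        printf("parent pid=%d reaped child %d\n", (int)getpid(), (int)pid);
    }
    return 0;
}

Even the printf() calls end up in the kernel eventually, via the write() system call on the standard output file descriptor.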

Q.2. What are the various objectives and functions of an operating system?


Ans. The following are the main objectives of an operating system:
To make the computer system convenient to use in an efficient manner.
To hide the details of the hardware resources from the users.
To provide users a convenient interface to use the computer system.
To manage the resources of a computer system: to keep track of who is using which resource and to grant resource requests.
Functions of an Operating System (any 4):
Memory Management: Keeps track of the primary memory, i.e. what part of it is in use by whom,
what part is not in use, etc. and allocates the memory when a process or program requests it.

Processor Management: Allocates the processor (CPU) to a process and deallocates the processor
when it is no longer required.

Network Management: Think of the operating system as a traffic cop for your network traffic. Operating systems help computers talk to each other and to the internet. They manage how data is packaged and sent over the network, making sure it arrives safely and in the right order.

File Management: Keeps track of files and directories, decides which process gets access to which file, and allocates and de-allocates files and storage as required.

User Interface or Command Interpreter


The user interacts with the computer system through the operating system. Hence OS acts as an
interface between the user and the computer hardware. This user interface is offered through a set
of commands or a graphical user interface (GUI).
MODULE 2: [Process Management]
Q.1. Compare process scheduling and process switching.
Ans.
Q.2.

Q.3. What is Threading and Multithreading? Explain importance of Multithreading?


Ans. Within a program, a Thread is a separate execution path. It is a lightweight process that the
operating system can schedule and run concurrently with other threads.
Multithreading is a technique used in operating systems to improve the performance and
responsiveness of computer systems. Multithreading allows multiple threads (i.e., lightweight
processes) to share the same resources of a single process, such as the CPU, memory, and I/O
devices.

Importance of Multithreading:
• Multithreading can improve the performance and efficiency of a program by utilizing
the available CPU resources more effectively
• Multithreading can enhance responsiveness in applications that involve user
interaction.
• Multithreading can enable better resource utilization. For example, in a server
application, multiple threads can handle incoming client requests simultaneously,
allowing the server to serve more clients concurrently.
• Multithreading can facilitate better code organization and modularity by dividing
complex tasks into smaller, manageable units of execution.
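As a hedged illustration of these points, the following C sketch (using POSIX threads; the worker function, loop count and mutex are illustrative choices, not part of the original answer) runs two threads that share the same process's memory while executing concurrently:

/* Minimal multithreading sketch with POSIX threads (pthreads).
 * Both threads share the global counter of the same process. */
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 2

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    long id = (long)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* protect the shared counter */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    printf("thread %ld finished\n", id);
    return NULL;
}

int main(void) {
    pthread_t tid[NUM_THREADS];
    for (long i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);   /* wait for all threads of the process */
    printf("final counter = %ld\n", counter);
    return 0;
}

Compile with gcc -pthread; both threads increment the same global counter, which shows the resource sharing that distinguishes threads from separate processes.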
Q.4. Write short note on Process Control Block.
Ans. A Process Control Block (PCB) is a data structure that contains the information related to a process. It is also known as a task control block, an entry of the process table, etc. It is very important for process management, as the data structuring for processes is done in terms of the PCB. It also defines the current state of the system.

Structure of the Process Control Block: The PCB stores many data items that are needed for efficient process management.

The following are the data items –


Process State: This specifies the process state i.e., new, ready, running, waiting or terminated.
Process Number: This shows the number of the particular process.
Program Counter: This contains the address of the next instruction that needs to be executed in the
process.
Registers: This specifies the registers that are used by the process. They may include accumulators,
index registers, stack pointers, general purpose registers etc.
List of Open Files: These are the different files that are associated with the process.
CPU scheduling information: priorities, scheduling queue pointers.
Memory-management information: memory allocated to the process
Accounting information: CPU used, clock time elapsed since start, time limits
I/O status information: I/O devices allocated to process, list of open files.
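As a rough sketch only (the field names, widths and fixed-size arrays are illustrative assumptions; a real kernel's PCB is far richer), the data items above could be grouped into a C structure like this:

/* Simplified, illustrative Process Control Block layout (not a real kernel's). */
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int            pid;              /* process number                    */
    proc_state_t   state;            /* new, ready, running, waiting, ... */
    unsigned long  program_counter;  /* address of next instruction       */
    unsigned long  registers[16];    /* saved general-purpose registers   */
    int            priority;         /* CPU-scheduling information        */
    void          *page_table;       /* memory-management information     */
    unsigned long  cpu_time_used;    /* accounting information            */
    int            open_files[32];   /* I/O status: open file descriptors */
    struct pcb    *next;             /* link in a scheduling queue        */
} pcb_t;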
Q.5. Differentiate between process and threads.
Q.6. Consider the following set of processes indicated as (process name, Arrival time, burst time)
for the following: (P1,0,6), (P2,1,4), (P3,3,5), (P4,5,3).
Draw the Gantt charts illustrating the execution of these processes using preemptive and non-preemptive SJF and FCFS. Calculate the average turnaround time and average waiting time in each case.

Q.7. What is a thread? How is multithreading beneficial? Compare and contrast the different multithreading models.
Ans. A thread is the unit of execution within a process. A process can have anywhere from just one thread to many threads.
Multithreading is a function of the CPU that permits multiple threads to run independently while
sharing the same process resources.

Benefits of Multithreading: Write any 2-3 benefits.

1. Responsiveness: Multithreading in an interactive application enables a program to continue running even if a section is blocked or executing a lengthy process, increasing user responsiveness.

2. Resource Sharing: Processes can share resources only via two techniques:

(1) Message Passing

(2) Shared Memory

The programmer must explicitly structure such strategies. On the other hand, by default, threads
share the memory and resources of the process they belong to.

3. Economy: Allocating memory and resources for process creation is an expensive procedure because it is a time- and space-consuming task. Since threads share the memory and resources of their parent process, creating and switching between threads is far more economical.

4. Scalability: The advantages of multithreading become much more apparent in the case of a multiprocessor architecture, where threads may execute in parallel on many processors. With just one thread, it is impossible to break the work into smaller jobs performed by different processors.

5. Utilization of multiprocessor architecture: The advantages of multithreading might be considerably amplified in a multiprocessor architecture, where every thread can execute in parallel on a distinct processor.

6. Minimized system resource usage: Threads have a minimal influence on the system's resources.
The overhead of creating, maintaining, and managing threads is lower than a general process.

One to One Model: The one-to-one model maps each of the user threads to a kernel thread. This
means that many threads can run in parallel on multiprocessors and other threads can run when one
thread makes a blocking system call.

Many to One Model: The many-to-one model maps many user threads to a single kernel thread. This model is quite efficient because thread management is done in user space, but the entire process blocks if one thread makes a blocking system call, and threads cannot run in parallel on multiple processors.

Many to Many Model: The many-to-many model maps many user threads to an equal or smaller number of kernel threads. The number of kernel threads depends on the application or the machine.
Q.8. Consider the following five processes, with the length of the CPU burst time given in milliseconds. Process time is P1-10, P2-29, P3-3, P4-7, P5-12. Consider the First Come First Serve (FCFS), Non-Preemptive Shortest Job First (SJF) and Round Robin (RR) (quantum = 10 ms) scheduling algorithms. Illustrate the schedule using a Gantt chart. Calculate the average waiting time and the average turnaround time.

Ans:
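The full Gantt charts are not reproduced here, but as a hedged sketch the C program below computes the FCFS waiting and turnaround times for the given bursts, assuming all five processes arrive at time 0 (the question states only burst times); sorting the burst array by length first would give the non-preemptive SJF figures with the same loop:

/* FCFS waiting-time / turnaround-time sketch for Q.8, assuming every
 * process arrives at time 0 (the question gives only burst times). */
#include <stdio.h>

int main(void) {
    const char *name[] = { "P1", "P2", "P3", "P4", "P5" };
    int burst[]        = { 10, 29, 3, 7, 12 };
    int n = 5, time = 0;
    double total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int waiting    = time;              /* time spent in the ready queue   */
        int turnaround = time + burst[i];   /* completion time - arrival (= 0) */
        printf("%s: waiting=%d turnaround=%d\n", name[i], waiting, turnaround);
        total_wait += waiting;
        total_tat  += turnaround;
        time += burst[i];                   /* next process starts here        */
    }
    printf("average waiting time    = %.2f\n", total_wait / n);
    printf("average turnaround time = %.2f\n", total_tat / n);
    return 0;
}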
MODULE 3: [Process Coordination]
Q.1. What is Semaphore? What is its significance?
Ans. Semaphores are integer variables, accessed only through atomic operations, that are used to coordinate the activities of multiple processes in a computer system. They are used to enforce mutual exclusion, avoid race conditions, and implement synchronization between processes.
Semaphores are of two types:

Binary Semaphore – This is also known as a mutex lock. It can have only two values – 0 and 1. Its
value is initialized to 1. It is used to implement the solution of critical section problems with multiple
processes.

Counting Semaphore – Its value can range over an unrestricted domain. It is used to control access
to a resource that has multiple instances.

Significance: - 1) The significance of semaphores lies in their ability to solve critical section
problems and implement process synchronization.
2) They ensure that only one thread/process can access a shared resource at a time,
preventing conflicts and maintaining consistency.
3) They are a powerful tool for process synchronization and access control for a common
resource in a concurrent environment.
4) Semaphores do not require busy waiting; hence, the operating system does not waste the
CPU cycles when a process can’t operate due to a lack of access to a resource.
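A minimal sketch of a binary semaphore used as a mutual-exclusion lock is shown below, using the POSIX semaphore API (sem_init, sem_wait, sem_post); the shared counter and the two worker threads are illustrative assumptions:

/* Binary semaphore (initial value 1) used as a mutual-exclusion lock
 * around a critical section, via the POSIX semaphore API. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;            /* binary semaphore                */
static int   shared = 0;       /* resource shared by the threads  */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 50000; i++) {
        sem_wait(&mutex);      /* wait (P): enter critical section        */
        shared++;              /* only one thread is in here at a time    */
        sem_post(&mutex);      /* signal (V): leave critical section      */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);    /* value 1 => behaves as a mutex lock */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);
    sem_destroy(&mutex);
    return 0;
}

Initializing the semaphore to a value greater than 1 would turn it into a counting semaphore controlling that many identical resource instances.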
Q.2.

Ans: The given state of the system represents the allocation and request matrices for four processes
(P1, P2, P3, and P4) and five types of resources (RS1, RS2, RS3, RS4, and RS5).
To identify the deadlocked processes, we need to check if there is a cycle in the resource allocation
graph.
The resource allocation graph can be constructed using the allocation and request matrices.
Each process and each resource is represented by a node, and allocations and requests are represented by directed edges.
In the allocation matrix, a "1" in the (i, j) position indicates that process Pi is allocated resource RSj.
In the request matrix, a "1" in the (i, j) position indicates that process Pi is requesting resource RSj.
To construct the resource allocation graph, we draw an edge from a resource to the process it is allocated to (assignment edge) and an edge from a process to a resource it is requesting (request edge).
Based on the given allocation and request matrices, the resource allocation graph is as follows:

P1 -> RS2 -> P2

P2 -> RS1 -> P1

P2 -> RS3 -> P3

P3 -> RS4 -> P4

P4 -> RS1 -> P1

P4 -> RS3 -> P3

From the resource allocation graph, we can see that there is a cycle involving processes P1, P2, and
P4. Therefore, these processes are deadlocked.

The deadlocked processes are: P1, P2, and P4.
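The cycle check performed above by inspection can also be automated. The C sketch below runs a depth-first search for a cycle over a wait-for graph stored as an adjacency matrix; the edges encoded in main() are placeholders only, since the allocation and request matrices from the question are not reproduced in this document:

/* Depth-first search for a cycle in a wait-for graph (adjacency matrix).
 * The example graph below is a placeholder, not the one from the question. */
#include <stdio.h>

#define N 4   /* number of processes */

static int edge[N][N];   /* edge[i][j] = 1 : Pi waits for a resource held by Pj */
static int state[N];     /* 0 = unvisited, 1 = on current DFS path, 2 = done    */

static int has_cycle(int u) {
    state[u] = 1;
    for (int v = 0; v < N; v++) {
        if (!edge[u][v]) continue;
        if (state[v] == 1) return 1;             /* back edge: cycle => deadlock */
        if (state[v] == 0 && has_cycle(v)) return 1;
    }
    state[u] = 2;
    return 0;
}

int main(void) {
    /* placeholder wait-for edges: P0->P1, P1->P2, P2->P0, P3->P1 */
    edge[0][1] = edge[1][2] = edge[2][0] = edge[3][1] = 1;

    int deadlock = 0;
    for (int i = 0; i < N && !deadlock; i++)
        if (state[i] == 0)
            deadlock = has_cycle(i);

    printf(deadlock ? "cycle found: deadlock\n" : "no cycle: no deadlock\n");
    return 0;
}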

Q.3. Write short note on Necessary conditions for deadlock.


Ans. 1) Mutual Exclusion: At least one resource must be non-sharable, i.e. only a single process can use it at a time. Any other process that requests the resource must wait until it is released. On its own this does not cause deadlock, but combined with the conditions below it can lead to indefinite waiting and hence to deadlock.
As an example:
(1) Process A obtains Resource 1.
(2) Process B acquires Resource 2.

2) Hold and Wait: The hold and wait condition specifies that a process must be holding at least one resource while waiting to acquire additional resources that are currently held by other processes.
In our example,
(1) Process A has Resource 1 and is awaiting Resource 2.
(2) Process B currently has Resource 2 and is awaiting Resource 1.
(3) Both processes hold one resource while waiting for the other, satisfying the hold and wait
condition.

3) No Preemption: Preemption is the act of taking a resource from a process before it has finished
its task. According to the no preemption condition, resources cannot be taken forcibly from a
process a process can only release resources voluntarily after completing its task. In our scenario,
neither Process A nor Process B can be forced to release the resources in their possession. The
processes can only release resources voluntarily.

4) Circular Wait: Circular wait is the condition in which a set of processes wait for each other in a circular chain, each process holding a resource that the next process in the chain is requesting, which causes deadlock in the operating system.
Q.4. Write short note on Deadlock avoidance.
Ans. Deadlock Avoidance is a strategy used in operating systems to prevent the occurrence of
deadlocks. Deadlocks are situations where two or more processes are unable to proceed because
each is waiting for the other to release a resource.
Here are some key points about deadlock avoidance:

1) Unlike deadlock prevention, which aims to eliminate the possibility of deadlocks, deadlock
avoidance focuses on dynamically detecting and avoiding situations that could lead to deadlocks.

2) The operating system avoids Deadlock by knowing the maximum resource requirements of the
processes initially, and also, the Operating System knows the free resources available at that time.

3) The operating system tries to allocate the resources according to the process requirements and
checks if the allocation can lead to a safe state or an unsafe state.

4) If the resource allocation leads to an unsafe state, then the Operating System does not proceed
further with the allocation sequence.

5) Deadlock avoidance requires that the system has some additional a priori information available.
Simplest and most useful model requires that each process declare the maximum number of
resources of each type that it may need.
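The safe/unsafe check described in points 2-4 is normally carried out with the safety algorithm at the heart of the Banker's algorithm. The C sketch below illustrates it on a small placeholder system (the Allocation, Max and Available values are illustrative, not taken from any question in this paper):

/* Safety-algorithm sketch (the core of the Banker's algorithm).
 * Allocation, Max and Available below are illustrative placeholders. */
#include <stdio.h>

#define P 5   /* processes      */
#define R 3   /* resource types */

int main(void) {
    int alloc[P][R] = { {0,1,0}, {2,0,0}, {3,0,2}, {2,1,1}, {0,0,2} };
    int max[P][R]   = { {7,5,3}, {3,2,2}, {9,0,2}, {2,2,2}, {4,3,3} };
    int avail[R]    = { 3, 3, 2 };

    int need[P][R], finished[P] = {0}, safe_seq[P], count = 0;
    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++)
            need[i][j] = max[i][j] - alloc[i][j];   /* Need = Max - Allocation */

    while (count < P) {
        int progressed = 0;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            int can_run = 1;
            for (int j = 0; j < R; j++)
                if (need[i][j] > avail[j]) { can_run = 0; break; }
            if (can_run) {                      /* pretend Pi runs to completion   */
                for (int j = 0; j < R; j++)
                    avail[j] += alloc[i][j];    /* Pi releases everything it holds */
                finished[i] = 1;
                safe_seq[count++] = i;
                progressed = 1;
            }
        }
        if (!progressed) {                      /* no process can finish: unsafe */
            printf("unsafe state\n");
            return 0;
        }
    }

    printf("safe state, safe sequence:");
    for (int i = 0; i < P; i++)
        printf(" P%d", safe_seq[i]);
    printf("\n");
    return 0;
}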

Q.5. Explain about Resource Allocation Graph (RAG).


Ans. The resource allocation graph is a pictorial representation of the state of a system. As its name suggests, it gives complete information about all the processes that are holding some resources or waiting for some resources.
In a resource allocation graph, a process is represented by a circle while a resource is represented by a rectangle. Let's see the types of vertices and edges in detail.

Vertices are mainly of two types, Resource and process.


1. Process Vertex: Every process will be represented as a process vertex. Generally, the process will
be represented with a circle.

2. Resource Vertex: Every resource will be represented as a resource vertex. It is of two types:

Single-instance resource: It is represented as a box containing one dot; the number of dots indicates how many instances of that resource type are present.

Multiple-instance resource: It is also represented as a box, but with several dots inside.

Edges in a RAG are also of two types: one represents assignment and the other represents the wait of a process for a resource.

A resource is shown as assigned to a process if the tail of the arrow is attached to an instance of the resource and the head is attached to the process.

A process is shown as waiting for a resource if the tail of an arrow is attached to the process while the
head is pointing towards the resource.
Q.6. Give the explanation of necessary conditions for deadlock. Explain how a resource
allocation graph determines a deadlock.
Ans. A process requests resources; if they are not available at that time, the process enters the wait state. It may happen that a waiting process never changes state again, because the resources it has requested are held by other waiting processes. This is known as deadlock.

Necessary Conditions for Deadlock:

The four necessary conditions for a deadlock to arise are as follows.

Mutual Exclusion: Only one process can use a resource at any given time i.e. the resources are non-
sharable.

Hold and wait: A process is holding at least one resource at a time and is waiting to acquire other
resources held by some other process.

No pre-emption: A resource cannot be taken forcibly from a process; it can be released only voluntarily by the process holding it, i.e. after the process has finished using it.

Circular Wait: A set of processes are waiting for each other in a circular fashion. For example, let's say there is a set of processes {P0, P1, P2, P3} such that P0 depends on P1, P1 depends on P2, P2 depends on P3 and P3 depends on P0. This creates a circular relation between all these processes and they have to wait forever to be executed.

How a resource allocation graph determines a deadlock:
• If the graph contains no cycle, then no process is deadlocked.
• If the graph contains a cycle and there is only one instance per resource type, then a deadlock exists.
• If the graph contains a cycle and there are several instances per resource type, then there is only a possibility of deadlock.

Methods for handling deadlocks:
Deadlock Prevention & Avoidance: Ensure that the system will never enter a deadlock state.
Deadlock Detection & Recovery: Detect that a deadlock has occurred and recover.
Deadlock Ignorance: Pretend that deadlocks will never occur.

Resource-Allocation Graph For Deadlock Avoidance

Simplest and most useful model requires that each process declare the maximum number of
resources of each type that it may need. The deadlock-avoidance algorithm dynamically examines
the resource-allocation state to ensure that there can never be a circular-wait condition.
Q.7. What is a semaphore and what are its types? How is the classic synchronization problem – Dining Philosophers – solved using semaphores?
Ans. Semaphores are integer variables that are used to solve the critical section problem by using
two atomic operations, wait and signal that are used for process synchronization.

Types of Semaphores: There are two main types of semaphores i.e. counting semaphores and
binary semaphores. Details about these are given as follows −

Counting Semaphores: These are integer-valued semaphores with an unrestricted value domain. They are used to coordinate access to resources, where the semaphore count is the number of available resources. If resources are added, the semaphore count is incremented, and if resources are removed, the count is decremented.

Binary Semaphores: Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The wait operation succeeds only when the semaphore value is 1 (setting it to 0); otherwise the caller waits. The signal operation sets the value back to 1, releasing a waiting process if any. It is sometimes easier to implement binary semaphores than counting semaphores.

Solving Dining Philosopher's Problem using the concept of Semaphores


The Dining Philosophers problem is typically solved using semaphores to coordinate access to the shared resources (the forks, in this case) in a mutually exclusive manner, to prevent deadlocks and ensure the philosophers can dine peacefully. Here's an outline of how semaphores are used to solve the Dining Philosophers problem:

1. Semaphore Initialization: Each fork is represented by a semaphore initialized to 1.

2. Semaphore Operations: To pick up a fork, a philosopher must acquire (wait on) the semaphore associated with that fork.

3. Concurrency Control: After using the fork, the philosopher releases it by signalling the semaphore.

4. Deadlock Avoidance: Philosophers follow a protocol, such as picking up the forks only when both required forks are available, so that a circular wait cannot occur.

5. Mutual Exclusion: Semaphores ensure mutual exclusion, allowing only one philosopher to use a given fork at a time.
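A hedged pthreads sketch of this scheme is given below; the choice of five philosophers, a single eating round, and the trick of making the last philosopher pick up the forks in the opposite order (one common way of breaking the circular wait) are illustrative assumptions rather than the only possible protocol:

/* Dining philosophers sketch: one semaphore per fork, deadlock avoided by
 * making the last philosopher take the forks in the opposite order. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define N 5                 /* number of philosophers / forks */

static sem_t fork_sem[N];   /* each fork is a binary semaphore (value 1) */

static void *philosopher(void *arg) {
    long i = (long)arg;
    int first  = (int)i;            /* left fork  */
    int second = (int)(i + 1) % N;  /* right fork */
    if (i == N - 1) {               /* last philosopher reverses the order */
        first  = (int)(i + 1) % N;  /* so a circular wait cannot form      */
        second = (int)i;
    }

    printf("philosopher %ld is thinking\n", i);
    sem_wait(&fork_sem[first]);     /* pick up first fork  (wait / P)  */
    sem_wait(&fork_sem[second]);    /* pick up second fork (wait / P)  */

    printf("philosopher %ld is eating\n", i);
    sleep(1);

    sem_post(&fork_sem[second]);    /* put down forks (signal / V) */
    sem_post(&fork_sem[first]);
    return NULL;
}

int main(void) {
    pthread_t tid[N];
    for (int i = 0; i < N; i++)
        sem_init(&fork_sem[i], 0, 1);        /* every fork starts free */
    for (long i = 0; i < N; i++)
        pthread_create(&tid[i], NULL, philosopher, (void *)i);
    for (int i = 0; i < N; i++)
        pthread_join(tid[i], NULL);
    return 0;
}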
