OS Q-1 Sol

Summer 2018

Q-1(a) Define the following (Any Seven): 1. Multiprogramming 2. Process
3. Semaphore 4. Critical Section 5. Memory Fault 6. Virtual Memory 7. TLB
8. Dispatcher 9. Mutual Exclusion 10. Context Switching

1. Multiprogramming:
Multiprogramming is a method where multiple programs are loaded into
memory and executed by the CPU concurrently. The operating system
switches between programs to keep the CPU busy while others are waiting
for I/O operations, thus improving system utilization.

2. Process:
A process is a program in execution. It includes the program code, current
activity (values in CPU registers, program counter), and associated resources
like memory, open files, etc. It is the basic unit of execution in an operating
system.

3. Semaphore:
A semaphore is a synchronization tool used to manage concurrent processes
and avoid race conditions. It is an integer variable that is accessed only
through two atomic operations: wait() (or P) and signal() (or V).
Semaphores can be used to achieve mutual exclusion and coordination
between processes.
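As an illustration (not part of the original answer), wait() and signal() map onto the acquire() and release() methods of Python's threading.Semaphore; the counter, loop counts, and thread count below are invented for the sketch:

```python
import threading

# A binary semaphore (initial value 1) guarding a shared counter.
# acquire() plays the role of wait()/P, release() of signal()/V.
sem = threading.Semaphore(1)
counter = 0

def worker():
    global counter
    for _ in range(100_000):
        sem.acquire()        # wait(): enter the critical section
        counter += 1         # critical section: update shared data
        sem.release()        # signal(): leave the critical section

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 — no updates are lost
```

Because every update happens between a wait/signal pair, the two threads never interleave inside the critical section and the final count is deterministic.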

4. Critical Section:
A critical section is a part of a program where shared resources (like data or
devices) are accessed. Only one process can execute in its critical section at
a time to prevent data inconsistency and race conditions.

5. Memory Fault (Page Fault):


A memory fault, more specifically a page fault, occurs when a program
tries to access a page that is not currently in main memory. The operating
system must then bring the required page into memory from secondary
storage (like a hard disk).

6. Virtual Memory:
Virtual memory is a memory management technique that gives an
application the illusion of having a large contiguous block of memory, even
if the physical memory is limited. It allows systems to run larger programs
than physically possible by using disk space as an extension of RAM.

7. TLB (Translation Lookaside Buffer):


The TLB is a small, fast cache in the CPU that stores recent translations of
virtual memory addresses to physical addresses. It speeds up memory access
in systems that use paging by avoiding repeated lookups in the page table.
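A toy model of the idea, with invented page and frame numbers and a deliberately tiny two-entry TLB, shows why repeated nearby accesses mostly avoid the page-table walk:

```python
# Toy TLB: a small cache consulted before the full page table.
page_table = {0: 5, 1: 9, 2: 1, 3: 7}   # virtual page -> physical frame
tlb = {}                                 # recent translations
TLB_SIZE = 2
hits = misses = 0

def translate(page):
    global hits, misses
    if page in tlb:                      # TLB hit: fast path
        hits += 1
        return tlb[page]
    misses += 1                          # TLB miss: walk the page table
    frame = page_table[page]
    if len(tlb) >= TLB_SIZE:             # simple eviction: drop oldest entry
        tlb.pop(next(iter(tlb)))
    tlb[page] = frame
    return frame

for p in [0, 0, 1, 0, 2, 2]:
    translate(p)
print(hits, misses)  # 3 3 — half the accesses skip the page-table lookup
```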

8. Dispatcher:
The dispatcher is a component of the operating system responsible for
giving control of the CPU to the process selected by the short-term
scheduler. It performs context switching, switches to user mode, and jumps
to the proper location in the user program to restart execution.

9. Mutual Exclusion:
Mutual exclusion is a property of concurrency control that ensures that only
one process or thread can access a critical section or shared resource at any
given time. It prevents race conditions and ensures data integrity.

10.Context Switching:
Context switching is the process of saving the state of a currently running
process and loading the state of the next process to be executed by the CPU.
It allows multiple processes to share a single CPU efficiently, enabling
multitasking.

Q-1(b) 1. What is PCB? List the content of PCB.

A Process Control Block (PCB) is a data structure used by the operating system to
manage information about a process. The PCB keeps track of many important
pieces of information needed to manage processes efficiently, including the
following key data items.

● Pointer: It is a stack pointer that is required to be saved when the process is


switched from one state to another to retain the current position of the
process.
● Process state: It stores the respective state of the process.
● Process number: Every process is assigned a unique id known as process
ID or PID which stores the process identifier.
● Program counter: Program Counter stores the counter, which contains the
address of the next instruction that is to be executed for the process.
● Registers: When a process is running and its time slice expires, the current
values of the process-specific registers are stored in the PCB and the process
is swapped out. When the process is scheduled to run again, the register
values are read from the PCB and written back to the CPU registers. This is
the main purpose of the registers field in the PCB.
● Memory limits: This field contains the information about memory
management system used by the operating system. This may include page
tables, segment tables, etc.
● List of Open files: This information includes the list of files opened for a
process.
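The fields above can be sketched as a simple data structure; the field names, types, and example values here are illustrative only and do not mirror any real kernel's PCB (e.g. Linux's task_struct holds far more):

```python
from dataclasses import dataclass, field

# Minimal sketch of the fields a PCB might carry.
@dataclass
class PCB:
    pid: int                                # process number / identifier
    state: str = "new"                      # process state
    program_counter: int = 0                # address of next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    memory_limits: tuple = (0, 0)           # e.g. base/limit or page-table info
    open_files: list = field(default_factory=list)  # list of open files

p = PCB(pid=42)
p.state = "ready"
p.open_files.append("stdin")
print(p.pid, p.state, p.open_files)  # 42 ready ['stdin']
```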

Additional Points to Consider for Process Control Block (PCB)

● Interrupt Handling: The PCB also contains information about the


interrupts that a process may have generated and how they were handled by
the operating system.
● Context Switching: The process of switching from one process to another is
called context switching. The PCB plays a crucial role in context switching
by saving the state of the current process and restoring the state of the next
process.
● Real-Time Systems: Real-time operating systems may require additional
information in the PCB, such as deadlines and priorities, to ensure that time-
critical processes are executed in a timely manner.
● Virtual Memory Management: The PCB may contain information about a
process virtual memory management, such as page tables and page fault
handling.
● Fault Tolerance: Some operating systems may use multiple copies of the
PCB to provide fault tolerance in case of hardware failures or software
errors.

Location of The Process Control Block

The Process Control Block (PCB) is stored in a special part of memory that normal
users can’t access. This is because it holds important information about the
process. Some operating systems place the PCB at the start of the kernel stack for
the process, as this is a safe and secure spot.

Advantages of Process Control Block (PCB)

● Stores Process Details: PCB keeps all the important information about a
process, like its state, ID, and resources it uses.
● Helps Resume Processes: When a process is paused, PCB saves its current
state so it can continue later without losing data.
● Ensures Smooth Execution: By storing all the necessary details, PCB helps
the operating system run processes efficiently and without interruptions.

Disadvantages of Process Control Block (PCB)

● Uses More Memory: Each process needs its own PCB, so having many
processes can consume a lot of memory.
● Slows Context Switching: During context switching, the system has to
update the PCB of the old process and load the PCB of the new one, which
takes time and affects performance.
● Security Risks: If the PCB is not well-protected, someone could access or
modify it, causing security problems for processes.

2. Explain three main objectives of Operating System.

1. User Convenience:

OSes strive to create an intuitive and easy-to-use interface, simplifying the process
of interacting with the computer and managing files and applications. This includes
features like graphical user interfaces (GUIs), command-line interfaces (CLIs), and
other user-friendly tools that allow users to easily access and manipulate system
resources.
2. Resource Management:

The OS is responsible for managing and allocating the computer's resources, such
as CPU time, memory, storage, and input/output devices. It ensures that these
resources are allocated fairly and efficiently among various processes and users,
preventing resource contention and maximizing overall system performance.

3. Program Execution:

OSes provide a platform for executing user programs and applications, providing
the necessary environment and services for them to run successfully. This includes
loading programs into memory, managing their execution, and handling
communication between programs and the hardware.

Winter 2018

Q-1(a) Define the following: (Any Seven) 1. Monitor 2. Multitasking
3. Interrupts 4. Response time 5. Memory fault 6. Busy waiting or Spin waiting
7. Dispatcher 8. Jacketing

1. Monitor:
A monitor is a synchronization construct that allows safe access to shared
resources in concurrent programming. It encapsulates shared variables, the
procedures that operate on them, and the synchronization between
concurrent threads using those procedures. Only one thread can execute any
of the monitor’s procedures at a time, ensuring mutual exclusion.
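A minimal sketch of the construct using Python's threading.Lock and threading.Condition; the BoundedCounter class, its methods, and the limit are invented for illustration:

```python
import threading

# Monitor-style class: one internal lock guards every public method,
# so at most one thread executes inside the monitor at a time.
class BoundedCounter:
    def __init__(self, limit):
        self._lock = threading.Lock()
        self._not_full = threading.Condition(self._lock)
        self._value = 0
        self._limit = limit

    def increment(self):
        with self._lock:                 # enter the monitor
            while self._value >= self._limit:
                self._not_full.wait()    # condition variable: wait inside monitor
            self._value += 1

    def reset(self):
        with self._lock:
            self._value = 0
            self._not_full.notify_all()  # wake threads waiting on the condition

    def value(self):
        with self._lock:
            return self._value

c = BoundedCounter(limit=3)
for _ in range(3):
    c.increment()
print(c.value())  # 3
```

Wrapping every method body in the same lock is what gives the "only one thread inside the monitor" guarantee; the condition variable lets a thread sleep inside the monitor without blocking others out.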

2. Multitasking:
Multitasking is the ability of an operating system to execute multiple tasks
(processes or threads) simultaneously. It can be achieved through time-
sharing, where the CPU rapidly switches between tasks, giving the illusion
of parallel execution. There are two types: preemptive (the OS decides
when to switch tasks) and cooperative (tasks yield control voluntarily).

3. Interrupts:
An interrupt is a signal to the processor indicating an event that needs
immediate attention. It temporarily halts the current CPU operations, saves
its state, and transfers control to a predefined interrupt handler to address the
event (e.g., input from a keyboard or mouse, or hardware failure).

4. Response Time:
Response time is the time interval between the submission of a request and
the first response produced by the system. It is a critical performance metric,
especially in real-time and interactive systems.

5. Memory Fault (also known as a Page Fault):


A memory fault occurs when a program accesses a part of memory that is
not currently in main memory (RAM), triggering the OS to retrieve the data
from secondary storage (like a hard disk) and load it into RAM. This
typically happens in a demand-paging system.

6. Busy Waiting or Spin Waiting:


Busy waiting (or spin waiting) occurs when a process continuously checks
for a condition (like the availability of a resource) in a loop without
relinquishing the CPU. This wastes CPU cycles, especially if the wait is
long, and is generally discouraged unless the wait time is very short.
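A small illustration of the pattern (the flag name and the 0.1 s delay are made up); the waiting thread loops and consumes CPU until the flag flips, instead of sleeping:

```python
import threading
import time

flag_ready = False

def spin_waiter():
    # Busy waiting: loop until the flag flips, burning CPU the whole time.
    while not flag_ready:
        pass
    print("resource acquired")

t = threading.Thread(target=spin_waiter)
t.start()
time.sleep(0.1)      # simulate the resource becoming available later
flag_ready = True
t.join()
```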

7. Dispatcher:
A dispatcher is part of the CPU scheduler that gives control of the CPU to
the process selected by the short-term scheduler. This includes context
switching, switching to user mode, and jumping to the correct location in the
user program to resume execution.

8. Jacketing:
Jacketing is a technique used with user-level threads to avoid blocking the
entire process on a blocking system call. The call is "wrapped" in a jacket
routine that first checks whether the requested operation would block; if it
would, the thread library suspends only the calling thread and schedules
another, effectively converting a blocking call into a non-blocking one.

Q-1(b) Explain Seven-state Process Models mentioning all the transitions.

In an operating system, a process is a program that is being executed. During its
execution, a process goes through different states. Understanding these states helps
us see how the operating system manages processes, ensuring that the computer
runs efficiently. Each process goes through several stages throughout its life cycle,
discussed in detail below.

Process Lifecycle

When you run a program (which becomes a process), it goes through different
phases before its completion. These phases, or states, can vary depending on the
operating system, but the most common process lifecycle includes two, five, or
seven states. Here is a simple explanation of these states:

The Seven-State Model

The states of a process are as follows:

● New State: In this step, the process is about to be created but not yet
created. It is the program that is present in secondary memory that will be
picked up by the OS to create the process.
● Ready State: New -> Ready to run. After the creation of a process, the
process enters the ready state i.e. the process is loaded into the main
memory. The process here is ready to run and is waiting to get the CPU time
for its execution. Processes that are ready for execution by the CPU are
maintained in a queue called a ready queue for ready processes.
● Run State: The process is chosen from the ready queue by the OS for
execution and the instructions within the process are executed by any one of
the available processors.
● Blocked or Wait State: Whenever the process requests I/O, needs input
from the user, or needs access to a critical region (the lock for which is
already acquired), it enters the blocked or wait state. The process continues
to wait in main memory and does not require the CPU. Once the I/O
operation is completed, the process goes to the ready state.
● Terminated or Completed State: The process is killed and its PCB is
deleted. The resources allocated to the process are released or deallocated.
● Suspend Ready: A process that was initially in the ready state but was
swapped out of main memory (refer to the Virtual Memory topic) and placed
onto external storage by the scheduler is said to be in the suspend ready
state. The process transitions back to the ready state whenever it is brought
back into main memory.
● Suspend Wait or Suspend Blocked: Similar to suspend ready, but applies
to a process that was performing I/O when a shortage of main memory
caused it to be moved to secondary memory. When the I/O completes, it
may move to the suspend ready state.

● CPU and I/O Bound Processes: If the process is intensive in terms of CPU
operations, it is called a CPU bound process. Similarly, if the process is
intensive in terms of I/O operations, it is called an I/O bound process.

How Does a Process Move From One State to Another?

A process can move between different states in an operating system based on its
execution status and resource availability. Here are some examples of how a
process can move between different states:

● New to Ready: When a process is created, it is in a new state. It moves to


the ready state when the operating system has allocated resources to it and it
is ready to be executed.
● Ready to Running: When the CPU becomes available, the operating system
selects a process from the ready queue depending on various scheduling
algorithms and moves it to the running state.
● Running to Blocked: When a process needs to wait for an event to occur
(I/O operation or system call), it moves to the blocked state. For example, if
a process needs to wait for user input, it moves to the blocked state until the
user provides the input.
● Running to Ready: When a running process is preempted by the operating
system, it moves to the ready state. For example, if a higher-priority process
becomes ready, the operating system may preempt the running process and
move it to the ready state.
● Blocked to Ready: When the event a blocked process was waiting for
occurs, the process moves to the ready state. For example, if a process was
waiting for user input and the input is provided, it moves to the ready state.
● Running to Terminated: When a process completes its execution or is
terminated by the operating system, it moves to the terminated state.
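The transitions above can be sketched as a lookup table; the state and event names below are paraphrased from the list and are not standard identifiers:

```python
# Legal transitions of the seven-state model, as a lookup table.
TRANSITIONS = {
    ("new", "admit"): "ready",
    ("ready", "dispatch"): "running",
    ("running", "timeout"): "ready",
    ("running", "event wait"): "blocked",
    ("blocked", "event occurs"): "ready",
    ("running", "exit"): "terminated",
    ("ready", "suspend"): "suspend ready",
    ("suspend ready", "activate"): "ready",
    ("blocked", "suspend"): "suspend blocked",
    ("suspend blocked", "event occurs"): "suspend ready",
}

def step(state, event):
    # Unknown (state, event) pairs leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

s = "new"
for ev in ["admit", "dispatch", "event wait", "event occurs", "dispatch", "exit"]:
    s = step(s, ev)
print(s)  # terminated
```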

Summer 2019
Q-1(a) Define the term: 1. Thread 2. Access time 3. Fragmentation
4. Scheduling 5. Cache memory 6. Multiprocessing 7. Middleware

1. Thread
A thread is a lightweight process that is part of a larger process. In an OS,
threads share the same resources (like memory) of the parent process but run
independently, allowing for multitasking within a single application.

2. Access Time
In an OS, access time refers to the time taken to access data from memory
or storage devices. It is critical for system performance and includes
components such as seek time, latency, and transfer time in the case of disks.

3. Fragmentation
Fragmentation occurs when free memory is broken into small, non-
contiguous blocks.

○ Internal Fragmentation: Unused space within allocated memory


blocks.

○ External Fragmentation: Free memory scattered throughout, not


usable for larger allocations.
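Internal fragmentation can be quantified with a short sketch; the 4 KiB block size and the request sizes are assumptions chosen for illustration:

```python
# Internal fragmentation with fixed-size allocation blocks.
BLOCK = 4096                        # allocator hands out whole 4 KiB blocks

def internal_waste(request):
    blocks = -(-request // BLOCK)   # ceiling division: blocks actually allocated
    return blocks * BLOCK - request # bytes allocated but never used

print(internal_waste(5000))  # 3192 bytes wasted inside the second block
print(internal_waste(4096))  # 0 — request exactly fills one block
```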

4. Scheduling
Scheduling is the OS's method of deciding which process or thread gets to
use the CPU at any given time. Types of scheduling include:

○ Preemptive scheduling: OS can interrupt and switch processes.

○ Non-preemptive scheduling: Process runs until it completes or


blocks.

5. Cache Memory
Cache memory is a fast, small memory layer between the CPU and main
memory, used to store frequently accessed data or instructions. The OS helps
manage cache usage to improve system performance.

6. Multiprocessing
Multiprocessing refers to a system with more than one CPU, allowing
multiple processes to run simultaneously. The OS manages load balancing
and process scheduling across CPUs.

7. Middleware
Middleware in an OS environment is software that sits between the OS and
applications in a distributed system. It facilitates communication, data
exchange, and service coordination across networked systems.

Q-1(b) Do as directed. 1. Deadlock can occur without circular wait condition.
(TRUE/FALSE) 2. The address of a page table in memory is pointed to by the
page table base register. (TRUE/FALSE) 3. PCB stands for _________. 4. List
any two reasons for process termination. 5. What is mutual exclusion? 6. The
size of all the segments is the same within a process. (TRUE/FALSE) 7. List
any two preemptive scheduling policies.

1. Deadlock can occur without circular wait condition.


FALSE

○ Circular wait is one of the four necessary conditions for deadlock. If


there is no circular wait, deadlock cannot occur.

2. The address of a page table in memory is pointed by page table base


register.
TRUE

○ The Page Table Base Register (PTBR) holds the base address of the
page table in memory.

3. PCB stands for


Process Control Block

○ It is a data structure used by the operating system to store information


about a process.

4. List any two reasons for process termination:

○ Normal completion

○ Error in process (e.g., invalid memory access)

5. What is mutual exclusion?

○ Mutual exclusion is a property of concurrency control, which ensures


that only one process or thread can access a critical section or shared
resource at a time to prevent conflicts and data inconsistency.

6. The size of all the segments is the same within a process.


FALSE

○ In segmentation, segments vary in size depending on the needs of the


process (e.g., code, data, stack segments differ in size).

7. List any two preemptive scheduling policies:

○ Round Robin (RR)

○ Shortest Remaining Time First (SRTF)

Winter 2019

Q-1(a) Define the following terms (Attempt any seven). 1. Dispatcher
2. Interrupt 3. Internal fragmentation 4. Kernel 5. Multitasking 6. Difference
between process and thread 7. Starvation 8. Difference between strong and
weak semaphore

1. Dispatcher:
The dispatcher is a component of the operating system that gives control of
the CPU to the process selected by the scheduler. It performs context
switching, switching to user mode, and jumping to the appropriate location
in the user program to restart execution.

2. Interrupt:
An interrupt is a signal to the processor from hardware or software
indicating an event that needs immediate attention. It temporarily halts the
current execution to run an interrupt handler, then resumes.

3. Internal Fragmentation:
Internal fragmentation occurs when memory is allocated in fixed-size
blocks and a process doesn't use the entire allocated block, leading to unused
space inside the block.
4. Kernel:
The kernel is the core part of an operating system that manages system
resources such as CPU, memory, and I/O devices. It operates in privileged
mode and provides essential services to applications and system
components.

5. Multitasking:
Multitasking refers to the ability of an operating system to execute multiple
tasks (processes or threads) seemingly at the same time by switching
between them rapidly.

6. Difference Between Process and Thread:

● Definition: A process is an independent program in execution; a thread is a
lightweight part of a process.
● Memory: A process has its own memory space; threads share memory with
other threads of the same process.
● Overhead: Processes have more overhead; threads have less.
● Communication: Inter-process communication is slower (via IPC); threads
communicate faster through shared memory.
7. Starvation:
Starvation occurs when a process waits indefinitely to gain access to
resources because other higher-priority processes are continuously given
preference by the scheduler.

8. Difference Between Strong and Weak Semaphore:

● Strong Semaphore: Follows FIFO (First In, First Out) order to unblock
waiting processes.
● Weak Semaphore: Does not guarantee any specific order; any waiting
process may be unblocked.

Q-1(b) What is process? Explain seven state Process Model with diagram.

In an operating system, a process is a program that is being executed. During its
execution, a process goes through different states. Understanding these states helps
us see how the operating system manages processes, ensuring that the computer
runs efficiently. Each process goes through several stages throughout its life cycle,
discussed in detail below.

Process Lifecycle

When you run a program (which becomes a process), it goes through different
phases before its completion. These phases, or states, can vary depending on the
operating system, but the most common process lifecycle includes two, five, or
seven states. Here is a simple explanation of these states:
The Seven-State Model

The states of a process are as follows:

● New State: In this step, the process is about to be created but not yet
created. It is the program that is present in secondary memory that will be
picked up by the OS to create the process.
● Ready State: New -> Ready to run. After the creation of a process, the
process enters the ready state i.e. the process is loaded into the main
memory. The process here is ready to run and is waiting to get the CPU time
for its execution. Processes that are ready for execution by the CPU are
maintained in a queue called a ready queue for ready processes.
● Run State: The process is chosen from the ready queue by the OS for
execution and the instructions within the process are executed by any one of
the available processors.
● Blocked or Wait State: Whenever the process requests I/O, needs input
from the user, or needs access to a critical region (the lock for which is
already acquired), it enters the blocked or wait state. The process continues
to wait in main memory and does not require the CPU. Once the I/O
operation is completed, the process goes to the ready state.
● Terminated or Completed State: The process is killed and its PCB is
deleted. The resources allocated to the process are released or deallocated.
● Suspend Ready: A process that was initially in the ready state but was
swapped out of main memory (refer to the Virtual Memory topic) and placed
onto external storage by the scheduler is said to be in the suspend ready
state. The process transitions back to the ready state whenever it is brought
back into main memory.
● Suspend Wait or Suspend Blocked: Similar to suspend ready, but applies
to a process that was performing I/O when a shortage of main memory
caused it to be moved to secondary memory. When the I/O completes, it
may move to the suspend ready state.

● CPU and I/O Bound Processes: If the process is intensive in terms of CPU
operations, it is called a CPU bound process. Similarly, if the process is
intensive in terms of I/O operations, it is called an I/O bound process.
How Does a Process Move From One State to Another?

A process can move between different states in an operating system based on its
execution status and resource availability. Here are some examples of how a
process can move between different states:

● New to Ready: When a process is created, it is in a new state. It moves to


the ready state when the operating system has allocated resources to it and it
is ready to be executed.
● Ready to Running: When the CPU becomes available, the operating system
selects a process from the ready queue depending on various scheduling
algorithms and moves it to the running state.
● Running to Blocked: When a process needs to wait for an event to occur
(I/O operation or system call), it moves to the blocked state. For example, if
a process needs to wait for user input, it moves to the blocked state until the
user provides the input.
● Running to Ready: When a running process is preempted by the operating
system, it moves to the ready state. For example, if a higher-priority process
becomes ready, the operating system may preempt the running process and
move it to the ready state.
● Blocked to Ready: When the event a blocked process was waiting for
occurs, the process moves to the ready state. For example, if a process was
waiting for user input and the input is provided, it moves to the ready state.
● Running to Terminated: When a process completes its execution or is
terminated by the operating system, it moves to the terminated state.

Summer 2020

Q-1(a) Define Terms: i) JCL ii) Race Condition iii) Monitor iv) Virtual
Memory v) SMP vi) Dispatcher vii) Process viii) Starvation ix) TLB

i) JCL (Job Control Language)

JCL is a scripting language used on IBM mainframe operating systems to instruct


the system on how to run a batch job or start a process. It specifies which programs
to execute, what resources to use, and where input/output data is located.
ii) Race Condition

A race condition occurs when two or more processes or threads access shared data
at the same time, and the final result depends on the timing or order of execution. It
can lead to unpredictable behavior or errors.
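A classic demonstration, assuming CPython's threading module; the split read-modify-write makes lost updates likely, though the exact final count varies from run to run, so only the bound is asserted:

```python
import threading

# Race: two threads do an unsynchronized read-modify-write on a
# shared counter, so one thread's update can overwrite the other's.
counter = 0

def unsafe_worker():
    global counter
    for _ in range(100_000):
        tmp = counter      # read
        tmp += 1           # modify
        counter = tmp      # write — may clobber a concurrent update

threads = [threading.Thread(target=unsafe_worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter <= 200_000)  # True; often strictly less than 200000
```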

iii) Monitor

A monitor is a synchronization construct used in concurrent programming to


control access to shared resources. It allows only one thread to execute a critical
section of code at a time and often includes methods and condition variables.

iv) Virtual Memory

Virtual memory is a memory management technique that allows the execution of


processes that may not be completely in physical memory. It uses disk space to
simulate additional RAM, enabling larger programs to run and better multitasking.

v) SMP (Symmetric Multiprocessing)

SMP is a computer architecture where two or more identical processors share a


common memory and operate under a single operating system instance. All
processors are treated equally and can perform any task.

vi) Dispatcher

The dispatcher is a part of the operating system's scheduler. It is responsible for


transferring control of the CPU to the process selected by the short-term scheduler
by performing context switching, loading registers, and switching to user mode.

vii) Process

A process is a program in execution. It is an active entity with its own address


space, registers, program counter, and other resources. The OS uses a Process
Control Block (PCB) to manage it.

viii) Starvation

Starvation is a situation in which a process waits indefinitely to acquire a resource


because other higher-priority processes are continuously favored by the scheduler.

ix) TLB (Translation Lookaside Buffer)


The TLB is a special, high-speed cache in the memory management unit (MMU)
that stores recent translations of virtual memory to physical addresses. It speeds up
virtual memory access by avoiding repeated page table lookups.

Q-1(b) Do as directed. i) Say TRUE or FALSE: "The deadlock avoidance
strategy does not predict deadlock with certainty." ii) Give one difference
between Monolithic and Microkernel. iii) Give one difference between Batch
multiprogramming and Time sharing. iv) Give one example of a consumable
and a reusable resource. v) Explain first-fit, best-fit, next-fit placement
policies.
i) TRUE or FALSE:

"The deadlock avoidance strategy does not predict deadlock with certainty."
TRUE

● Deadlock avoidance (e.g., the Banker's Algorithm) only ensures the system
stays in safe states. An unsafe state does not necessarily lead to deadlock, so
avoidance is conservative: it cannot predict with certainty that a deadlock
would actually occur, and it may deny allocations that would in fact have
been harmless.

ii) One difference between Monolithic and Microkernel:

● Monolithic Kernel: All OS services run in kernel space (e.g., file system,
device drivers).
● Microkernel: Only essential services run in kernel space; others run in user
space.

iii) One difference between Batch Multiprogramming and Time Sharing:

● Batch Multiprogramming: No user interaction during execution; jobs are
processed in batches.
● Time Sharing: Allows interactive user sessions with quick response time.

iv) Examples of Resources:

● Consumable Resource: A message (once received, it no longer exists)

● Reusable Resource: CPU, memory (can be used by one process, then


reused by another)

v) Memory Placement Policies:

1. First-Fit:

○ Allocates the first block of memory that is large enough for the
request.

○ Fast, but may cause fragmentation at the beginning of memory.

2. Best-Fit:

○ Allocates the smallest block that is large enough.

○ Reduces wasted space, but slower and can lead to more


fragmentation.

3. Next-Fit:

○ Similar to first-fit, but starts searching from where it last left off.

○ Slightly faster than first-fit in some cases; may lead to uneven


memory usage.
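The three policies can be sketched as searches over a list of free block sizes (the sizes and requests below are invented); each returns the index of the chosen free block:

```python
# First-fit: take the first block large enough.
def first_fit(blocks, size):
    for i, b in enumerate(blocks):
        if b >= size:
            return i
    return None

# Best-fit: take the smallest block that is still large enough.
def best_fit(blocks, size):
    candidates = [(b, i) for i, b in enumerate(blocks) if b >= size]
    return min(candidates)[1] if candidates else None

# Next-fit: like first-fit, but resume from where the last search ended.
def next_fit(blocks, size, start):
    n = len(blocks)
    for k in range(n):
        i = (start + k) % n
        if blocks[i] >= size:
            return i
    return None

free = [10, 4, 20, 18, 7]
print(first_fit(free, 6))    # 0 — first block >= 6
print(best_fit(free, 6))     # 4 — smallest block >= 6 is 7
print(next_fit(free, 6, 2))  # 2 — search resumes at index 2
```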
Winter 2020

Q-1(a) Do as Directed. 1) Define weak semaphore. 2) Define the term DMA.
3) Virtual memory space is always smaller than physical memory space.
(True/False) 4) Segmentation avoids external memory fragmentation.
(True/False) 5) Define race condition. 6) Every I/O device typically has a
waiting queue associated with it. (True/False) 7) Define the term locality of
reference.

1) Define Weak Semaphore:

A weak semaphore is a synchronization primitive that does not guarantee the


order in which waiting processes are unblocked. Any waiting process may be
chosen arbitrarily when the semaphore is signaled.

2) Define the term DMA (Direct Memory Access):

DMA is a technique where certain hardware subsystems (like disk or network


controllers) access the main system memory directly, without involving the CPU,
enabling faster data transfer and freeing the CPU for other tasks.

3) Virtual memory space is always smaller than physical memory space.

False

● Virtual memory is typically larger than physical memory because it


includes both RAM and disk space, allowing the system to run larger
programs than would otherwise fit in physical memory.

4) Segmentation avoids external memory fragmentation.

False

● Segmentation can still cause external fragmentation, as segments are of


variable sizes and can leave unusable gaps in memory.

5) Define Race Condition:


A race condition occurs when the outcome of a process depends on the timing
or sequence of uncontrollable events such as the interleaving of operations from
multiple threads or processes accessing shared data.

6) Every I/O device typically has a waiting queue associated with it.

True

● Each I/O device usually has a device queue where processes wait for the
device to become available.

7) Define the term Locality of Reference:

Locality of reference refers to the tendency of a program to access a small


portion of memory repeatedly over a short period. It can be:

● Temporal locality: Recently accessed items are likely to be accessed again


soon.

● Spatial locality: Nearby memory locations are likely to be accessed soon.

Q-1(b) Discuss necessary conditions for a deadlock to occur. State general


approach for avoiding deadlock.

Necessary Conditions for Deadlock

A deadlock occurs when a set of processes are blocked because each process is
holding a resource and waiting for another resource held by another process. Four
conditions must all be true simultaneously for a deadlock to occur:

1. Mutual Exclusion

○ At least one resource must be held in a non-sharable mode.


○ Only one process can use the resource at a time.

2. Hold and Wait

○ A process is holding at least one resource and is waiting to acquire additional resources currently held by other processes.

3. No Preemption

○ Resources cannot be forcibly taken away from a process.


○ A resource can only be released voluntarily by the process holding it.

4. Circular Wait

○ A set of processes are waiting for each other in a circular chain.


○ For example: P1 → waiting for resource held by P2 → waiting for
resource held by P3 → … → waiting for resource held by P1.

If even one of these conditions is not met, deadlock cannot occur.
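Breaking any one condition prevents deadlock. A common practical trick is to break circular wait by imposing a single global lock-acquisition order, sketched here in Python (lock names and thread count are illustrative):

```python
import threading

# If every thread acquires the locks in one agreed order
# (lock_a before lock_b, always), a cycle of waits can never
# form, so these threads cannot deadlock.
lock_a, lock_b = threading.Lock(), threading.Lock()
done = []

def task(name):
    with lock_a:               # always taken first
        with lock_b:           # always taken second
            done.append(name)

threads = [threading.Thread(target=task, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(done))            # both threads finished: no deadlock
```

Had one thread taken `lock_b` first and the other `lock_a` first, the circular-wait condition could be satisfied and the pair could deadlock.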

General Approach for Avoiding Deadlock

Deadlock avoidance involves ensuring that the system never enters an unsafe
state where deadlock is possible. The general approach includes:
1. Resource Allocation Strategies

● The system must decide in advance whether granting a resource to a process might lead to a deadlock.
2. Use of Safe States

● A safe state means there exists a sequence of all processes such that each
process can finish even if it gets all its needed resources.
● The system only allows allocations that keep it in a safe state.
3. Banker’s Algorithm (Dijkstra’s Algorithm)

● Works like a bank loan system: it grants resources only if the system
remains in a safe state.
● Processes must declare the maximum number of resources they may need
in advance.

By enforcing safe-state conditions and carefully allocating resources, deadlocks can be avoided proactively rather than being dealt with reactively.
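The safety check at the heart of Banker's algorithm can be sketched in Python as follows: a state is safe iff some ordering lets every process obtain its remaining need, finish, and return its allocation. The matrix values are a classic textbook-style example, not taken from this document.

```python
def is_safe(available, allocation, need):
    """Return (safe?, one safe completion order) for the given state."""
    work = available[:]
    finished = [False] * len(allocation)
    order = []
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # Process i can acquire its full remaining need, run to
                # completion, and release everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                order.append(i)
                progress = True
    return all(finished), order

# Illustrative 5-process, 3-resource state (need = max claim - allocation)
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]

safe, order = is_safe(available, allocation, need)
print(safe, order)   # a safe sequence exists for this state
```

The avoidance rule is then: grant a request only if the resulting state still passes this check; otherwise make the process wait.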

Winter 2021
Q-1(a) Define the following terms (Attempt any seven). 1. Kernel 2. Jacketing 3. Starvation 4. Internal fragmentation 5. Multitasking 6. Response time 7. Interrupt 8. Memory fault

1. Kernel

The kernel is the central part of an operating system that manages system
resources, such as CPU, memory, and I/O devices. It provides a communication
layer between the hardware and the software, ensuring that tasks are carried out
efficiently and securely. The kernel operates in privileged mode (or supervisor
mode) with direct access to hardware.

2. Jacketing

Jacketing is a technique used with user-level threads to keep one thread's blocking system call from blocking the entire process. Instead of invoking a blocking call directly, the thread invokes a jacket routine that first checks whether the operation would block (for example, whether an I/O device is busy); if so, the thread library suspends that thread, schedules another, and retries later. In effect, jacketing converts a blocking system call into a non-blocking one.

3. Starvation

Starvation occurs when a process is continuously denied the resources it needs to execute because other, higher-priority processes are constantly given preference. As a result, the process may never get a chance to complete, leading to indefinite waiting.
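Aging is a standard remedy for starvation: the longer a process waits, the higher its priority becomes, so a low-priority process cannot be postponed forever. A minimal Python sketch of the mechanism (process names and priority numbers are hypothetical):

```python
def schedule_with_aging(processes, aging_step=1):
    """Repeatedly run the highest-priority process, aging all waiters."""
    finished = []
    while processes:
        # Run the highest-priority process (larger number = higher priority).
        processes.sort(key=lambda p: -p["priority"])
        running = processes.pop(0)
        finished.append(running["name"])
        for p in processes:            # every process that waited ages upward
            p["priority"] += aging_step
    return finished

procs = [{"name": "low", "priority": 1},
         {"name": "hi1", "priority": 10},
         {"name": "hi2", "priority": 10}]
result = schedule_with_aging(procs)
print(result)                          # 'low' is eventually served
```

With a steady stream of arriving high-priority processes, the aging step is what guarantees the low-priority process's effective priority eventually overtakes theirs.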

4. Internal Fragmentation

Internal fragmentation occurs when memory is allocated in fixed-sized blocks but the process does not use the entire allocated block. The wasted space inside the allocated block cannot be used by other processes, so even though the memory is allocated, part of it remains unused.
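The wasted space is simple arithmetic: allocated size minus requested size. A short Python sketch, assuming an illustrative fixed 4 KB block size:

```python
BLOCK = 4 * 1024                       # fixed allocation unit (4 KB, assumed)

def internal_fragmentation(request_bytes):
    """Bytes wasted inside the last block of a fixed-block allocation."""
    blocks = -(-request_bytes // BLOCK)        # ceiling division
    return blocks * BLOCK - request_bytes      # allocated - used

# A 10 KB request occupies 3 blocks (12 KB) and wastes the tail of
# the last block: 3*4096 - 10240 = 2048 bytes.
print(internal_fragmentation(10 * 1024))
```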

5. Multitasking

Multitasking refers to the ability of an operating system to manage and execute multiple tasks (or processes) concurrently. It allows the system to switch between tasks quickly, giving the illusion of simultaneous execution. There are two types of multitasking:
● Preemptive multitasking: The OS controls when a task should be paused.
● Cooperative multitasking: Tasks voluntarily give up control to allow others to run.

6. Response Time

Response time is the time taken between the submission of a request by a user or
process and the system's first response. In the context of operating systems, it often
refers to how quickly a system can start responding to user input or to a system
call.

7. Interrupt

An interrupt is a mechanism by which the CPU is temporarily halted to attend to an event or condition that requires immediate attention. When an interrupt occurs, the current task is paused and the CPU begins executing a special piece of code known as the interrupt handler. After handling the interrupt, the CPU resumes the task that was interrupted.

8. Memory Fault

A memory fault occurs when a process attempts to access memory that is either
outside its allocated range or protected, leading to an error. This can happen when
a process tries to read or write to a part of memory that it does not have permission
to access, such as accessing uninitialized memory or a page that is not currently in
memory (page fault).

Q-1(b) Explain Seven-state Process Models mentioning all the transitions.

In an operating system, a process is a program in execution. During its lifetime, a process passes through several states, and understanding these states shows how the operating system manages processes so that the system runs efficiently. This answer covers the states of the seven-state model and the transitions between them.

Process Lifecycle

When you run a program (which becomes a process), it goes through different phases before its completion. These phases, or states, can vary depending on the operating system, but common process-lifecycle models use two, five, or seven states. Here is a simple explanation of the seven-state model:

The Seven-State Model

The states of a process are as follows:

● New State: The process is being created but has not yet been admitted by the OS. The program resides in secondary memory and will be picked up by the OS to create the process.
● Ready State: After creation, the process enters the ready state: it is loaded into main memory, is ready to run, and waits for CPU time. Processes that are ready for execution are maintained in a queue called the ready queue.
● Run State: The process is chosen from the ready queue by the OS for
execution and the instructions within the process are executed by any one of
the available processors.
● Blocked or Wait State: Whenever the process requests I/O, needs input from the user, or needs access to a critical region (the lock for which is already acquired), it enters the blocked or wait state. The process continues to wait in main memory and does not require the CPU. Once the I/O operation is completed, the process goes to the ready state.
● Terminated or Completed State: The process finishes (or is killed) and its PCB is deleted. The resources allocated to the process are released or deallocated.
● Suspend Ready: A process that was in the ready state but was swapped out of main memory and placed in secondary storage by the scheduler is in the suspend ready state. It transitions back to the ready state when it is brought into main memory again.
● Suspend Wait or Suspend Blocked: Similar to suspend ready, but applies to a process that was blocked on an I/O operation when a shortage of main memory caused it to be moved to secondary memory. When its I/O completes, it may move to the suspend ready state.
● CPU and I/O Bound Processes (a classification, not a state): If a process is intensive in terms of CPU operations, it is called a CPU-bound process; if it is intensive in terms of I/O operations, it is called an I/O-bound process.

How Does a Process Move From One State to Other State?

A process can move between different states in an operating system based on its
execution status and resource availability. Here are some examples of how a
process can move between different states:

● New to Ready: When a process is created, it is in the new state. It moves to the ready state when the operating system has allocated resources to it and it is ready to be executed.
● Ready to Running: When the CPU becomes available, the operating system
selects a process from the ready queue depending on various scheduling
algorithms and moves it to the running state.
● Running to Blocked: When a process needs to wait for an event to occur
(I/O operation or system call), it moves to the blocked state. For example, if
a process needs to wait for user input, it moves to the blocked state until the
user provides the input.
● Running to Ready: When a running process is preempted by the operating
system, it moves to the ready state. For example, if a higher-priority process
becomes ready, the operating system may preempt the running process and
move it to the ready state.
● Blocked to Ready: When the event a blocked process was waiting for
occurs, the process moves to the ready state. For example, if a process was
waiting for user input and the input is provided, it moves to the ready state.
● Running to Terminated: When a process completes its execution or is
terminated by the operating system, it moves to the terminated state.
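The transitions above can be summarized as a small table of legal moves. A hedged Python sketch (state names and the checker are this sketch's own, covering the suspend transitions as well):

```python
# Legal moves of the seven-state model as (from, to) pairs.
TRANSITIONS = {
    ("new", "ready"), ("ready", "running"), ("running", "ready"),
    ("running", "blocked"), ("blocked", "ready"),
    ("running", "terminated"),
    ("ready", "suspend_ready"), ("suspend_ready", "ready"),
    ("blocked", "suspend_blocked"), ("suspend_blocked", "suspend_ready"),
}

def step(state, new_state):
    """Advance the process state, rejecting any move not in the table."""
    if (state, new_state) not in TRANSITIONS:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# Walk one possible lifecycle: new -> ... -> terminated.
state = "new"
for nxt in ("ready", "running", "blocked", "ready", "running", "terminated"):
    state = step(state, nxt)
print(state)
```

Note that there is deliberately no ("new", "running") pair: a process must pass through ready first, exactly as the transition list describes.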
