
OSY QB ANS

Chapter no 4: CPU Scheduling

Q1. Explain any four scheduling criteria.


• CPU utilization: - In multiprogramming, the main objective is to keep the CPU
as busy as possible.
• CPU utilization can range from 0 to 100 percent.
• Throughput: - It is the number of processes that are completed per unit
time.
• Turnaround time: - The time interval from the submission of a process to
the completion of that process is called the turnaround time.
• It is calculated as: Turnaround Time = Completion Time – Arrival Time =
Waiting Time + Burst Time
• Waiting time: - It is the sum of the time periods a process spends in the
ready queue. It is calculated as: Waiting Time = Turnaround Time – Burst Time
• Response time: - The time period from the submission of a request until
the first response is produced is called the response time.
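The formulas above can be checked with a small sketch. The process names, arrival times, and burst times below are invented for illustration; the scheduler is non-preemptive FCFS, so the formulas apply directly.

```python
# Hypothetical FCFS example: compute turnaround and waiting time per process.
def fcfs_metrics(processes):
    """processes: list of (name, arrival_time, burst_time), sorted by arrival."""
    time = 0
    metrics = {}
    for name, arrival, burst in processes:
        start = max(time, arrival)      # CPU may idle until the process arrives
        end = start + burst
        turnaround = end - arrival      # Turnaround = Completion - Arrival
        waiting = turnaround - burst    # Waiting = Turnaround - Burst
        metrics[name] = {"waiting": waiting, "turnaround": turnaround}
        time = end
    return metrics

m = fcfs_metrics([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 2)])
# P1 finishes at 5, P2 at 8, P3 at 10
```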

Q2. Explain Deadlock and necessary conditions for deadlock.


A deadlock is a situation where each process in a set waits for a resource that
is assigned to another process. In this situation, none of the processes can
execute, because the resource each one needs is held by another process that is
itself waiting for some other resource to be released.
Four necessary conditions for deadlock:
1. Mutual exclusion: Only one process at a time can use a non-sharable
resource.
2. Hold and wait: A process is holding at least one resource and is waiting
to acquire additional resources held by other processes.
3. No pre-emption: A resource can be released only voluntarily by the
process holding it, after that process completes its task.
4. Circular wait: There exists a set {P0, P1, ..., Pn} of waiting processes such
that P0 is waiting for a resource that is held by P1, P1 is waiting for a
resource that is held by P2, ..., Pn–1 is waiting for a resource that is held
by Pn, and Pn is waiting for a resource that is held by P0.
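The circular-wait condition can be pictured as a cycle in a wait-for graph. The following sketch (not part of the original answer; process names are invented) detects such a cycle with a depth-first search, where an edge p → q means "p waits for a resource held by q".

```python
# Detect circular wait as a cycle in a wait-for graph.
def has_circular_wait(wait_for):
    visited, on_stack = set(), set()

    def dfs(p):
        visited.add(p)
        on_stack.add(p)                  # processes on the current DFS path
        for q in wait_for.get(p, []):
            if q in on_stack or (q not in visited and dfs(q)):
                return True              # back edge -> cycle -> circular wait
        on_stack.discard(p)
        return False

    return any(dfs(p) for p in wait_for if p not in visited)

deadlocked = {"P0": ["P1"], "P1": ["P2"], "P2": ["P0"]}  # P0 -> P1 -> P2 -> P0
safe = {"P0": ["P1"], "P1": ["P2"], "P2": []}            # chain, no cycle
```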

Q.3 Numerical based on FCFS, SJF, Round Robin
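As a worked sketch for a Round Robin numerical (the burst times and time quantum below are invented; all processes are assumed to arrive at time 0):

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: burst_time}; all processes arrive at time 0.
    Returns the completion time of each process."""
    remaining = dict(bursts)
    queue = deque(bursts)                # FIFO ready queue
    time, completion = 0, {}
    while queue:
        p = queue.popleft()
        run = min(quantum, remaining[p]) # run one quantum or until finished
        time += run
        remaining[p] -= run
        if remaining[p] == 0:
            completion[p] = time
        else:
            queue.append(p)              # pre-empted: back of the ready queue
    return completion

c = round_robin({"P1": 4, "P2": 3, "P3": 2}, quantum=2)
# Gantt order: P1(0-2) P2(2-4) P3(4-6) P1(6-8) P2(8-9)
```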


Q.4 Write a note on Deadlock prevention and avoidance
Deadlock Prevention
Deadlock prevention aims to eliminate at least one of the four necessary
conditions for deadlock. Common strategies include:
1. Mutual Exclusion Avoidance: If feasible, make resources shareable.
However, for resources like printers or file locks, this is often impractical.
2. Hold and Wait Prevention: Require processes to request all the
resources they need at once, before they start execution, or to release all
resources if they need to wait for additional resources. This limits
resource utilization flexibility but helps avoid deadlocks.
3. No Pre-emption: Allow pre-emption of resources, meaning resources
can be forcibly taken from a process if necessary. This can be complex
and is often used in contexts like memory and CPU scheduling.
4. Circular Wait Prevention: Impose an ordering of resource requests. If
processes request resources in a predefined order, circular wait cannot
occur, as each process can only request resources that are “higher” in
the order than any it currently holds.

Deadlock Avoidance
Deadlock avoidance differs from prevention by allowing the system to make
decisions dynamically based on current resource allocations, using algorithms
to assess whether a resource allocation request may lead to a deadlock.
Banker’s Algorithm (by Dijkstra) is a common deadlock avoidance algorithm
used in systems with multiple instances of each resource type. It ensures that
resources are allocated only if the resulting state remains safe. The system
checks for:
• Safe State: A state in which there exists at least one sequence in which
every process can finish executing without leading to deadlock.
• Unsafe State: A state from which allocating resources may lead to a
deadlock.
The system simulates resource allocation for each request to determine if it can
be safely granted without leading to an unsafe state.
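A minimal sketch of the Banker's safe-state check described above. The allocation and need matrices below are invented for illustration (3 processes, 2 resource types); the check repeatedly finds a process whose remaining need fits in the available resources and pretends it finishes.

```python
def is_safe(available, allocation, need):
    """Banker's algorithm safe-state test."""
    work = list(available)               # resources currently free
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion; it releases its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)                 # safe iff every process can finish

allocation = [[0, 1], [2, 0], [1, 1]]   # what each process currently holds
need       = [[3, 2], [1, 2], [0, 0]]   # what each process may still request
safe = is_safe([2, 1], allocation, need)
```

Here P2 can finish first, releasing [1, 1]; then P0 and P1 can finish in turn, so the state is safe.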
Chapter no 5: Memory Management

Q.1 Describe paging and segmentation


Paging
is a memory management technique in which a process's logical address space is
divided into fixed-size pages and physical memory into frames of the same size.
The logical address space of a process can be non-contiguous; the process is
allocated physical frames wherever they are available. Virtual memory is
implemented using a form of paging called demand paging, in which pages are
transferred between disk and physical memory only when they are referenced.
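Address translation under paging can be sketched as follows. The page size and page-table contents are assumptions for illustration: a logical address is split into a page number and an offset, and the page table maps the page to a frame.

```python
PAGE_SIZE = 1024                        # assumed 1 KB pages

def translate(logical_addr, page_table):
    page = logical_addr // PAGE_SIZE    # page number (high part of address)
    offset = logical_addr % PAGE_SIZE   # offset within the page
    frame = page_table[page]            # frame currently holding that page
    return frame * PAGE_SIZE + offset   # physical address

page_table = {0: 5, 1: 2, 2: 7}         # page -> frame (invented mapping)
addr = translate(1 * PAGE_SIZE + 100, page_table)  # page 1, offset 100
```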
Segmentation
a memory management technique that divides a program's memory into
distinct segments. Each segment represents a logical grouping of information,
such as code, data, or stack segments, which are managed independently. This
division allows the OS to handle memory more flexibly and provide better
protection and organization

Q.2 Explain partitioning techniques


Fixed Partitioning: -
Main memory is divided into multiple partitions of fixed size at the time of
system generation. A process may be loaded into a partition of equal size or
greater size. Partitions can be of equal size or unequal size.
Equal size partitioning: -
Main memory is divided into equal size partitions. Any process whose size is
less than or equal to the partition size can be loaded into any available
partition. If all partitions are full and no process is in the ready or running
state, the operating system can swap a process out of any partition and load
another process in its place.
Disadvantage: -Main memory Utilization is extremely inefficient.
Unequal size partitioning: -
Main memory is divided into multiple partitions of unequal size. Each process
can be loaded into the smallest partition within which the process will fit.
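The "smallest partition within which the process will fit" rule can be sketched as below; the partition names and sizes (in KB) are invented.

```python
def place(process_size, partitions):
    """partitions: {name: (size, is_free)}. Returns the chosen partition
    name, or None if the process fits in no free partition."""
    candidates = [(size, name)
                  for name, (size, free) in partitions.items()
                  if free and size >= process_size]
    if not candidates:
        return None
    _, best = min(candidates)           # smallest partition that fits
    return best

partitions = {"A": (100, True), "B": (300, True), "C": (600, True)}
chosen = place(250, partitions)         # fits in B (300) and C (600)
```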

Q.3 Numerical based on Page replacement algorithms.
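As a worked sketch for a FIFO page-replacement numerical (the reference string and frame count below are invented):

```python
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    frames = deque()                    # oldest resident page at the left
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1                 # page fault: page is not resident
            if len(frames) == num_frames:
                frames.popleft()        # evict the page loaded earliest
            frames.append(page)
    return faults

faults = fifo_page_faults([7, 0, 1, 2, 0, 3, 0, 4], num_frames=3)
```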


Q.4 Define: i)Page fault
ii)Virtual Memory
iii)Locality of reference
i) Page Fault:
A page fault occurs when a program tries to access a block of memory (a
"page") that is not currently in the physical memory (RAM).
When this happens, the operating system must retrieve the page from a slower
storage (like a hard drive or SSD) and load it into RAM. This process can slow
down the system if page faults happen frequently.

ii) Virtual Memory:


Virtual memory is a memory management technique that gives an application
the impression it has contiguous working memory while actually using physical
memory and disk storage.
This system allows for larger programs to run on a system than would be
possible with the physical memory alone, by temporarily transferring data to
disk storage.

iii) Locality of Reference:


Locality of reference is a principle that describes how programs tend to access
the same set of memory locations within a short time period. This principle is
divided into:
• Temporal locality: The same memory locations are accessed repeatedly
within a short period.
• Spatial locality: Memory locations close to each other are accessed
within a short time frame.
Operating systems and hardware take advantage of this principle to optimize
performance, such as through caching or efficient memory paging.

Q.5 Explain free space management techniques.


Frequently, the free-space list is implemented as a bit map or bit vector. Each
block is represented by one bit. If the block is free, the bit is 0; if the
block is allocated, the bit is 1.
For example, consider a disk where blocks 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18,
25, 26, and 27 are free, and the rest of the blocks are allocated.
The free-space bit map would be:
11000011000000111001111110001111...
The main advantage of this approach is that it is relatively simple and efficient
to find the first free block or n consecutive free blocks on the disk.
Unfortunately, bit vectors are inefficient unless the entire vector is kept in
memory for most accesses. Keeping it in main memory is possible for smaller
disks, such as on microcomputers, but not for larger ones.
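The bit map in the example above can be reproduced with a short sketch (free blocks taken directly from the example; bit 0 = free, bit 1 = allocated):

```python
# Free blocks from the worked example; all other blocks are allocated.
FREE = {2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, 27}

def free_space_bitmap(free_blocks, total_blocks):
    # One character per block: "0" if the block is free, "1" if allocated.
    return "".join("0" if b in free_blocks else "1"
                   for b in range(total_blocks))

bitmap = free_space_bitmap(FREE, 32)
```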
Chapter no 6.: File Management

Q.1 Explain any four file operations

1.Create
Creation of the file is the most important operation on the file. Different types
of files are created by different methods; for example, text editors are used to
create text files, word processors are used to create word files, and image
editors are used to create image files.
2.Write
Writing to a file is different from creating it. The OS maintains a write
pointer for every file, which points to the position in the file at which the
next data is to be written.
3.Read
Every file is opened in one of three modes: read, write, or append. A read
pointer is maintained by the OS, pointing to the position up to which the data
has been read.
4.Re-position
Re-positioning is simply moving the file pointer forward or backward depending
upon the user's requirement. It is also called seeking.
5.Delete
Deleting the file not only deletes all the data stored inside the file, it also
deletes all the attributes of the file. The space that was allocated to the file
becomes available and can be allocated to other files.
6.Truncate
Truncating deletes the contents of the file while keeping its attributes. The
file itself is not deleted; only the information stored inside it is released,
so the space can be reused.
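The operations above map directly onto Python's file API; the file name and contents below are arbitrary.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

with open(path, "w") as f:              # create the file and write to it
    f.write("hello world")

with open(path, "r+") as f:
    f.seek(6)                           # re-position (seek) the file pointer
    tail = f.read()                     # read from position 6 onward
    f.truncate(5)                       # truncate: keep only the first 5 bytes

with open(path) as f:
    content = f.read()

os.remove(path)                         # delete: the space becomes available
```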

Q.2 Write a note on RAID.


RAID works by placing data on multiple disks and allowing input/output (I/O)
operations to overlap in a balanced way, improving performance.

Because using multiple disks decreases the mean time between failures of the
array, data is stored redundantly to increase fault tolerance.

RAID arrays appear to the operating system (OS) as a single logical drive.

Q.3 Explain file allocation methods: i)Linked


ii)Indexed
i) Linked Allocation:
• In this method, each file occupies disk blocks that can be scattered
anywhere on the disk.
• The file is a linked list of allocated blocks.
• When space has to be allocated to the file, any free block can be used
from the disk, and the system makes an entry in the directory.
• The directory entry for a file contains the file name and pointers to the
first and last allocated blocks of the file.
• The file pointer is initialized to a nil value to indicate an empty file.
• A write to a file causes a search for a free block; after one is found, data
is written to it and the block is linked to the end of the file.
• To read the file, blocks are read by following the pointers from block to
block, starting with the block address specified in the directory entry.
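Reading a linked-allocated file can be sketched with a dictionary standing in for the disk; the block numbers and data are invented. Each block stores some data plus a pointer to the next block (None marks the end of the file).

```python
# block number -> (data, next block number)
disk = {9: ("ab", 16), 16: ("cd", 1), 1: ("ef", None)}

def read_file(first_block):
    data, block = "", first_block
    while block is not None:
        chunk, block = disk[block]      # follow the pointer to the next block
        data += chunk
    return data

content = read_file(9)                  # directory entry says first block = 9
```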

ii) Indexed Allocation:

• In this method, each file has its own index block.

• The index block is an array of disk block addresses.
• When a file is created, an index block and other disk blocks, according to
the file size, are allocated to that file.
• A pointer to each allocated block is stored in the index block of that file.
• The directory entry contains the file name and the address of the index
block.
• When any block is allocated to the file, its address is updated in the index
block.
• Any free disk block can be allocated to the file. The ith entry in the index
block points to the ith block of the file; to find and read the ith block, we
use the pointer in the ith entry of the index block.
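Indexed allocation can be sketched the same way; the block numbers are invented, and block 7 plays the role of the index block whose ith entry gives the address of the file's ith data block.

```python
# Data blocks hold strings; the index block holds a list of block addresses.
disk = {4: "ab", 9: "cd", 12: "ef", 7: [4, 9, 12]}

def read_block(index_block, i):
    # Look up the ith entry of the index block, then read that data block.
    return disk[disk[index_block][i]]

first = read_block(7, 0)                # first block of the file
last = read_block(7, 2)                 # third block of the file
```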

Q.4 Explain directory structure.


1. Single-level directory:
• The simplest method is to have one big list of all the files on the disk. The
entire system contains only one directory, which lists all the files present
in the file system. The directory contains one entry per file.

Advantages:
• Since there is a single directory, its implementation is very easy.
• If the directory contains only a few files, searching is faster.
• Operations like file creation, searching, deletion, and updating are very easy
in such a directory structure.
Disadvantages:
• There may be name collisions, because two files cannot have the same
name.
• Searching becomes time-consuming if the directory grows large.
• Files of the same type cannot be grouped together.

2. Two-level directory:
• In a two-level directory system, we can create a separate directory for
each user. There is one master directory, which contains a separate
directory dedicated to each user. For each user, there is a different
directory at the second level, containing that user's files. The system
does not let a user enter another user's directory without permission.
Advantages:
• We can give a full path like /User-name/directory-name/.
• Different users can have the same directory and file names.
• Searching for files becomes easier due to path names and user grouping.

Disadvantages:
• A user is not allowed to share files with other users.
• It is still not very scalable.
• Two files of the same type cannot be grouped together under the same user.

3. Tree-structured directory:
• In a tree-structured directory system, any directory entry can be either a
file or a subdirectory.
• The root directory contains the directories of all users.
• Users can create subdirectories and store files in their own directories.
• A user does not have access to the root directory and cannot modify it.
Advantages:
• Very general, since a full path name can be given.
• Very scalable; the probability of name collision is low.
• Searching becomes very easy; we can use both absolute and relative
paths.

Disadvantages:
• Not every file fits the hierarchical model; a file may logically belong in
multiple directories.
• Files cannot be shared between users.
• It can be inefficient, because accessing a file may require traversing
multiple directories.
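A tree-structured directory and absolute-path lookup can be sketched with nested dictionaries; the directory and file names below are invented (dicts are directories, strings are files).

```python
root = {"home": {"user1": {"notes.txt": "file"}, "user2": {}}, "bin": {}}

def resolve(path):
    """Resolve an absolute path like /home/user1/notes.txt from the root."""
    node = root
    for part in path.strip("/").split("/"):
        node = node[part]               # descend one level per path component
    return node

entry = resolve("/home/user1/notes.txt")
```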
