Mid 1

1.
a.

The three cases when a CPU can operate in kernel mode are:
System calls: This occurs when a user program requests a service from the operating system
(e.g., file I/O or process creation). The CPU switches to kernel mode via a trap instruction to
execute the privileged code safely, preventing direct hardware access by user programs.
Interrupts: These are signals from hardware (e.g., timer, I/O completion) or software that require
immediate attention. The CPU enters kernel mode to handle the interrupt service routine, saving
the current state and resuming user mode after handling.
Exceptions (or faults): These happen due to errors or abnormal conditions in program
execution (e.g., divide by zero, page fault, or invalid memory access). The CPU switches to
kernel mode to invoke the exception handler, which may terminate the process or recover from
the error.
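To make the exception path concrete, here is a minimal POSIX C sketch (an illustration, not part of the original answer): dereferencing a null pointer raises a CPU fault, the processor traps to kernel mode, and the kernel delivers SIGSEGV to a user-registered handler, which terminates the process.

#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

static void on_segv(int sig) {
    (void)sig;
    /* only async-signal-safe calls (write, _exit) belong in a handler */
    const char msg[] = "caught SIGSEGV; the kernel ran our handler\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);
    _exit(EXIT_FAILURE);
}

int main(void) {
    struct sigaction sa;
    sa.sa_handler = on_segv;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGSEGV, &sa, NULL);

    volatile int *p = NULL;
    return *p;   /* invalid memory access -> CPU fault -> kernel mode -> SIGSEGV */
}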
b. Distinguishing features:
Real-time system: Designed for environments where timeliness is critical, with tasks having
strict deadlines (hard real-time: missing deadlines is catastrophic; soft real-time: deadlines are
desirable but not fatal). Features include predictable response times, priority-based scheduling,
minimal jitter, and resource reservation to ensure determinism. Examples: avionics or medical
monitoring systems.
Multiprocessor system: Involves multiple CPUs or cores sharing memory and resources for
parallel processing. Features include load balancing across processors, cache coherence
mechanisms, inter-processor communication, and scalability for higher throughput. Examples:
symmetric multiprocessing (SMP), where all processors are peers, or asymmetric multiprocessing
(AMP), where one processor handles the OS tasks.
The main difficulty a programmer must overcome in writing an operating system for a real-time
environment is ensuring predictability and determinism in task execution, as the OS must guarantee
that critical tasks meet their deadlines despite interrupts, varying workloads, or hardware variability,
often requiring specialized scheduling algorithms (e.g., rate monotonic) and avoidance of non-
deterministic features like dynamic memory allocation.
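Since the answer cites rate-monotonic scheduling, a short C sketch of the classic Liu and Layland schedulability test may help. The task set below is hypothetical, chosen only to illustrate the bound U = sum(C_i/T_i) <= n * (2^(1/n) - 1).

#include <math.h>
#include <stdio.h>

int main(void) {
    /* {execution time C, period T} per task; values are illustrative */
    double tasks[][2] = { {20, 100}, {40, 150}, {100, 350} };
    int n = sizeof tasks / sizeof tasks[0];

    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += tasks[i][0] / tasks[i][1];           /* total utilization */

    double bound = n * (pow(2.0, 1.0 / n) - 1.0); /* Liu-Layland bound */
    printf("U = %.3f, bound = %.3f -> %s\n", u, bound,
           u <= bound ? "schedulable under RM" : "test inconclusive");
    return 0;
}

Note that the test is sufficient but not necessary: a task set that exceeds the bound may still be schedulable, hence "inconclusive" rather than "unschedulable".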
2.
a. A context switch is the process of saving the state (e.g., registers, program counter, memory
maps) of a currently running process and loading the state of another process to allow it to run,
typically triggered by the scheduler during multitasking.
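As a rough sketch (the field layout is an assumption, not any real kernel's), the state saved on a context switch can be pictured as a process control block in C:

#include <stdint.h>

struct pcb {
    uint64_t regs[16];   /* general-purpose registers                 */
    uint64_t pc;         /* program counter                           */
    uint64_t sp;         /* stack pointer                             */
    uint64_t page_table; /* pointer to the memory map (page table)    */
    int      state;      /* READY, RUNNING, BLOCKED, ...              */
    int      pid;
};
/* a switch saves the CPU into old->regs/pc/sp and loads new->... */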
Steps in making a system call:
1. The user program invokes a library function (e.g., read() in C) that sets up parameters and
issues a trap instruction (e.g., the syscall instruction on x86-64).
2. The CPU switches to kernel mode, saves user state, and jumps to the system call handler.
3. The kernel validates parameters, executes the requested service (e.g., accessing hardware).
4. The kernel restores user state, switches back to user mode, and returns the result to the
program.
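A minimal Linux-specific C sketch of steps 1-2 (assuming glibc and the syscall(2) wrapper): the same write is issued once through the libc wrapper and once as a direct trap.

#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>

int main(void) {
    const char *msg = "hello via libc wrapper\n";
    write(STDOUT_FILENO, msg, strlen(msg));              /* step 1: library call */

    const char *raw = "hello via raw syscall\n";
    syscall(SYS_write, STDOUT_FILENO, raw, strlen(raw)); /* direct trap into the kernel */
    return 0;
}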
Advantages of hybrid implementation of threads (combining user-level and kernel-level threads):
Efficiency: User-level threads allow fast creation and switching within the process without kernel
involvement, reducing overhead.
Flexibility: Kernel-level threads enable true parallelism on multiprocessors and prevent blocking
of the entire process if one thread blocks (e.g., on I/O).
Scalability: Maps many user threads to fewer kernel threads (M:N model), balancing
performance and resource use, as seen in systems like Solaris.
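A toy C sketch of the M:N idea (sizes and names are illustrative, not a real hybrid runtime): many short user-level tasks are multiplexed onto a small, fixed pool of kernel threads.

#include <pthread.h>
#include <stdio.h>

#define TASKS 8
#define KERNEL_THREADS 2

static int next_task = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    long id = (long)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        if (next_task >= TASKS) { pthread_mutex_unlock(&lock); return NULL; }
        int task = next_task++;            /* claim the next user-level task */
        pthread_mutex_unlock(&lock);
        printf("kernel thread %ld runs user task %d\n", id, task);
    }
}

int main(void) {
    pthread_t pool[KERNEL_THREADS];        /* few kernel threads, many tasks */
    for (long i = 0; i < KERNEL_THREADS; i++)
        pthread_create(&pool[i], NULL, worker, (void *)i);
    for (int i = 0; i < KERNEL_THREADS; i++)
        pthread_join(pool[i], NULL);
    return 0;
}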
b. Inter-process communication (IPC) in OS refers to mechanisms that allow processes to
exchange data and synchronize actions, essential for cooperation in multitasking environments while
maintaining process isolation.
There are two main models for IPC:
Shared Memory: Processes share a common memory region where one writes data and others
read it. It's fast (no kernel copying) but requires synchronization (e.g., semaphores) to avoid race
conditions. Figure description: Imagine two processes A and B connected to a central "Shared
Memory Block" via arrows labeled "Read/Write"; a semaphore guards access to prevent
concurrent writes.
Message Passing: Processes communicate by sending/receiving messages via the kernel (e.g.,
pipes, sockets, or message queues). It's safer (no direct memory access between processes) but
slower due to kernel overhead, and can be blocking (synchronous) or non-blocking
(asynchronous). Figure description: Process A sends a message to a "Kernel Message Queue,"
from which Process B receives it; arrows show "Send" from A to the queue and "Receive" from
the queue to B. (A minimal pipe-based sketch follows this list.)
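A minimal message-passing sketch in C (an illustration, assuming POSIX pipe and fork): the parent sends a message, the kernel buffers it, and the child receives it with no shared memory.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) return 1;   /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {              /* child: Process B (receiver) */
        close(fd[1]);
        char buf[64] = { 0 };
        read(fd[0], buf, sizeof buf - 1);
        printf("B received: %s", buf);
        return 0;
    }
    close(fd[0]);                   /* parent: Process A (sender) */
    const char *msg = "hello from A\n";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}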
For the scheduling algorithms, the process set used throughout the calculations is (times in
seconds):

P1: arrival 0, burst 70, priority 3
P2: arrival 0, burst 40, priority 1
P3: arrival 10, burst 100, priority 2
P4: arrival 70, burst 50, priority 2
i. Non-preemptive Shortest Job First (SJF): among the processes that have arrived, selects the
one with the shortest burst time and runs it to completion; no preemption.
Gantt Chart:

P2 (0-40) | P1 (40-110) | P4 (110-160) | P3 (160-260)


At t=0: P1 (70) and P2 (40) are available; P2 is shortest and runs first.
At t=10: P3 (100) arrives, but the algorithm is non-preemptive, so P2 continues.
At t=40: P2 finishes; P1 (70) and P3 (100) are available; P1 is shortest and runs.
At t=70: P4 (50) arrives while P1 is running.
At t=110: P1 finishes; P3 (100) and P4 (50) are available; P4 is shortest and runs.
At t=160: P4 finishes; P3 runs to completion at 260.
Turnaround times (TAT = completion - arrival):

P1: 110 - 0 = 110

P2: 40 - 0 = 40

P3: 260 - 10 = 250

P4: 160 - 70 = 90

Average TAT: (110 + 40 + 250 + 90) / 4 = 490 / 4 = 122.5 sec
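These figures can be cross-checked with a small C simulation of non-preemptive SJF (not part of the original answer; the process set is hard-coded from the question, and with this data the CPU is never idle):

#include <stdio.h>

int main(void) {
    int arrival[] = { 0, 0, 10, 70 };    /* P1..P4 */
    int burst[]   = { 70, 40, 100, 50 };
    int done[4]   = { 0 }, completion[4];
    int t = 0;

    for (int scheduled = 0; scheduled < 4; scheduled++) {
        int pick = -1;
        for (int i = 0; i < 4; i++)      /* shortest job among the arrived */
            if (!done[i] && arrival[i] <= t &&
                (pick == -1 || burst[i] < burst[pick]))
                pick = i;
        t += burst[pick];                /* run it to completion */
        completion[pick] = t;
        done[pick] = 1;
    }

    double total = 0;
    for (int i = 0; i < 4; i++) {
        int tat = completion[i] - arrival[i];
        printf("P%d TAT = %d\n", i + 1, tat);
        total += tat;
    }
    printf("average TAT = %.1f sec\n", total / 4);   /* prints 122.5 */
    return 0;
}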


ii. Round Robin (RR) with quantum 30 sec: each process runs for at most a 30-sec time slice;
the ready queue is FIFO by arrival time, and a preempted process goes to the back of the queue,
behind any processes that arrived during its slice.
Gantt Chart:

P1 (0-30) | P2 (30-60) | P3 (60-90) | P1 (90-120) | P2 (120-130) | P4 (130-160) | P3 (160-190) | P1 (190-200) | P4 (200-220) | P3 (220-260)

The queue starts with P1, P2 (both arrive at 0); P1 runs first (FIFO).
P3 arrives at 10, so when P1 is preempted at 30 the queue is P2, P3, P1.
P4 arrives at 70, during P3's first slice, and re-enters ahead of the preempted P3.
Each process gets 30 sec per turn unless it finishes earlier (e.g., P2 needs only 10 sec in its
second slice). P3's last two slices (220-250 and 250-260) are consecutive, so the chart
combines them.
Turnaround times:

P1: 200 - 0 = 200 (runs 30+30+10)

P2: 130 - 0 = 130 (30+10)

P3: 260 - 10 = 250 (30+30+30+10)

P4: 220 - 70 = 150 (30+20)

Average TAT: (200 + 130 + 250 + 150) / 4 = 730 / 4 = 182.5 sec
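These figures can likewise be cross-checked with a small C simulation of RR with q=30 (not part of the original answer; it uses the FIFO convention stated above, admitting new arrivals before re-queuing a preempted process, and with this data the CPU is never idle):

#include <stdio.h>

#define N 4
#define Q 30

static int arrival[N] = { 0, 0, 10, 70 };
static int rem[N]     = { 70, 40, 100, 50 };
static int queue[64], head, tail, queued[N], completion[N];

static void admit(int now) {             /* enqueue new arrivals, FIFO */
    for (int i = 0; i < N; i++)
        if (!queued[i] && arrival[i] <= now) {
            queue[tail++] = i;
            queued[i] = 1;
        }
}

int main(void) {
    int t = 0, finished = 0;
    admit(t);
    while (finished < N) {
        int p = queue[head++];           /* next process in FIFO order */
        int slice = rem[p] < Q ? rem[p] : Q;
        t += slice;
        rem[p] -= slice;
        admit(t);                        /* arrivals during/at end of slice */
        if (rem[p] == 0) {
            completion[p] = t;
            finished++;
        } else {
            queue[tail++] = p;           /* preempted -> back of the queue */
        }
    }
    double total = 0;
    for (int i = 0; i < N; i++) {
        int tat = completion[i] - arrival[i];
        printf("P%d TAT = %d\n", i + 1, tat);
        total += tat;
    }
    printf("average TAT = %.1f sec\n", total / 4);   /* prints 182.5 */
    return 0;
}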


iii. Priority Scheduling: lower number = higher priority; a higher-priority arrival preempts a
lower-priority process; within the same priority class, scheduling follows the algorithm in
(ii), i.e., RR with q=30, with processes entering the FIFO queue according to their arrival
times (as the question specifies).
Priorities: P2 (1, highest), P3 (2), P4 (2), P1 (3, lowest).
Gantt Chart:

P2 (0-40) | P3 (40-70) | P4 (70-100) | P3 (100-130) | P4 (130-150) | P3 (150-190) | P1 (190-260)

At t=0: P2 (priority 1) is the highest-priority ready process and runs. Since it is alone in
its class, the quantum never forces a switch, and P3 (priority 2), arriving at t=10, cannot
preempt it. P2 finishes at t=40.
At t=40: P3 is now the highest-priority ready process; it runs one quantum, 40-70.
At t=70: P4 (priority 2) arrives just as P3's quantum expires; P3 goes to the back of the
class-2 queue and P4 runs 70-100.
The two class-2 processes then alternate 30-sec slices: P3 runs 100-130 (40 sec left), and P4
runs 130-150, finishing with its remaining 20 sec.
From t=150: P3 is alone in its class and runs consecutive slices, 150-180 and 180-190,
finishing its remaining 40 sec; the chart combines them as 150-190. Check on P3's total:
30 + 30 + 30 + 10 = 100 sec, its full burst.
At t=190: the class-2 queue is empty, so P1 (priority 3) finally runs, 190-260.
Turnaround times:

P1: 260 - 0 = 260

P2: 40 - 0 = 40

P3: 190 - 10 = 180

P4: 150 - 70 = 80

Average TAT: (260 + 40 + 180 + 80) / 4 = 560 / 4 = 140 sec
