Lecture Notes: Unit I - Process Management
Reference: Abraham Silberschatz, Peter Baer Galvin, Greg Gagne. Operating System
Concepts, 9th Edition (Relevant sections primarily from Chapter 3: Processes, and Chapter 5:
CPU Scheduling).
Introduction to Process Management:
      What is a Process? In simple terms, a process is a program in execution. A program
       is a passive entity (like a file on disk), whereas a process is an active entity, with a
       program counter, registers, and a stack.
      Why Manage Processes? Modern operating systems allow multiple programs to run
       "concurrently" (or appear to run at the same time). Process management is the OS's
       job of handling these multiple running programs efficiently and safely. This includes
       creating them, scheduling their use of the CPU, letting them communicate, and
       terminating them.
1. Process Concept
      Program vs. Process: The Key Difference
           o   Program: A static set of instructions and data stored on disk. It's like a recipe
               – a set of instructions waiting to be followed.
           o   Process: An instance of a program being executed. It's like a chef actually
               following the recipe – it has ingredients (data), a workspace (memory), and is
               actively doing work (executing instructions).
      Process as a Unit of Work: A process is the fundamental unit of work in a modern
       time-sharing system. Every time you open an application (browser, word processor),
       you are creating a new process (or multiple processes).
      Process Memory Layout (Address Space): A process needs memory to run. This
       memory is typically divided into sections:
           o   Text Section: Contains the program's executable code. This is read-only and
               often shared among multiple processes running the same program (e.g.,
               multiple instances of Chrome share the same code).
           o   Data Section: Contains global variables (initialized and uninitialized).
           o   Heap Section: Memory that is dynamically allocated during the process's
               runtime (e.g., when a program needs more memory to store user input).
            o   Stack Section: Used for temporary data storage, such as function parameters,
                return addresses, and local variables. It grows and shrinks as functions are
                called and return.
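    Example: a minimal C sketch showing where typical variables live (the mapping in
    the comments is the conventional one on UNIX-like systems):

        #include <stdio.h>
        #include <stdlib.h>

        int initialized = 42;   /* data section (initialized global)        */
        int uninitialized;      /* data section (BSS: uninitialized global) */

        int main(void)          /* the machine code of main() is in the text section */
        {
            int local = 7;                        /* stack: local variable        */
            int *dynamic = malloc(sizeof(int));   /* heap: allocated at runtime   */
            *dynamic = 99;
            printf("%d %d %d %d\n", initialized, uninitialized, local, *dynamic);
            free(dynamic);
            return 0;
        }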
   Process Control Block (PCB): The Process's Identity Card
       o   The PCB is a data structure maintained by the OS for each process. It contains
           all the information needed to manage that specific process.
       o   Key Information in a PCB:
                 Process State: (e.g., new, running, waiting, ready, terminated).
                 Program Counter: The address of the next instruction to be executed
                  for this process.
                 CPU Registers: Values of all CPU registers (accumulator, index
                  registers, stack pointer, etc.) when the process was last interrupted.
                 CPU Scheduling Information: Process priority, pointers to
                  scheduling queues.
                 Memory-Management Information: Pointers to page tables or
                  segment tables, base and limit registers.
                 Accounting Information: CPU time used, real time elapsed, job
                  numbers, etc.
                 I/O Status Information: List of open files, list of I/O devices
                  allocated to the process.
       o   Importance: The PCB allows the OS to stop a process, save its exact state,
           and later resume it from exactly where it left off, giving the illusion of
           continuous execution for multiple processes.
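    Example: a simplified, hypothetical C sketch of a PCB (real kernels keep far more
    state; Linux's equivalent is the much larger task_struct, and all field names
    below are illustrative):

        enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

        struct pcb {
            int             pid;               /* unique process identifier         */
            enum proc_state state;             /* new, ready, running, ...          */
            void           *program_counter;   /* next instruction to execute       */
            unsigned long   registers[16];     /* saved CPU register contents       */
            int             priority;          /* CPU-scheduling information        */
            void           *page_table;        /* memory-management information     */
            unsigned long   cpu_time_used;     /* accounting information            */
            int             open_files[16];    /* I/O status: open file descriptors */
        };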
   Process States: The Life Cycle of a Process
       o   Processes go through various states during their lifetime. These states describe
           the current activity of a process.
       o   New: The process is being created.
       o   Running: Instructions are being executed by the CPU. Only one process can
           be truly "running" per CPU core at any given instant.
       o   Waiting (Blocked): The process is waiting for some event to occur (e.g., I/O
           completion, receiving a signal, waiting for memory). It cannot proceed until
           the event happens.
       o   Ready: The process is waiting for a CPU to be assigned to it. It has all its
           necessary resources (memory, files) but is just waiting its turn for the CPU.
       o   Terminated: The process has finished execution. It might still be in memory
           for a short time to return status or for parent processes to collect information.
      Context Switch: The OS's Juggling Act
          o   When the CPU switches from one process to another, the OS must perform a
              "context switch."
          o   Steps:
                 1.    Save the state (CPU registers, program counter, etc.) of the currently
                       running process into its PCB.
                 2.    Load the saved state of the next process (from its PCB) into the CPU
                       registers.
          o   Overhead: Context switches are pure overhead, meaning the system does no
              useful work while performing them. Their speed depends on hardware support
              and memory speed. Frequent context switches can reduce system
              performance.
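    Example: a conceptual C sketch of the two steps, reusing the struct pcb above.
    Real context switching is done in architecture-specific assembly inside the
    kernel; save_cpu_state() and load_cpu_state() here are hypothetical helpers:

        void context_switch(struct pcb *current, struct pcb *next)
        {
            save_cpu_state(current);    /* step 1: registers, PC -> current's PCB    */
            current->state = READY;     /* (or WAITING, depending on why we switched) */

            next->state = RUNNING;
            load_cpu_state(next);       /* step 2: next's PCB -> CPU registers;
                                           execution resumes where next left off     */
        }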
2. Process Scheduling
      Purpose: The OS must decide which process gets access to the CPU and for how
       long. This is the job of the CPU Scheduler.
      Goal: To maximize CPU utilization, ensure fairness, and provide good response times
       for interactive users.
      CPU-I/O Burst Cycle: Processes typically alternate between periods of CPU
       execution (CPU burst) and I/O waiting (I/O burst).
          o   CPU-bound processes: Have long CPU bursts and infrequent I/O.
          o   I/O-bound processes: Have short CPU bursts and frequent I/O.
          o   Effective scheduling tries to balance these types to keep the CPU busy.
       When Does Scheduling Happen? (Dispatcher)
           o   The dispatcher is the module that gives control of the CPU to the process
               selected by the CPU scheduler; it is invoked as part of every context switch.
           o   CPU-scheduling decisions may take place when a process:
                  1. Switches from running to waiting (e.g., it issues an I/O request).
                  2. Switches from running to ready (e.g., its time slice expires).
                  3. Switches from waiting to ready (e.g., its I/O completes).
                  4. Terminates.
      Types of Schedulers:
       o   Long-Term Scheduler (Job Scheduler): Selects processes from the job pool
           and loads them into memory for execution. It controls the degree of
           multiprogramming (number of processes in memory). Infrequent execution.
       o   Short-Term Scheduler (CPU Scheduler): Selects from among the processes
           that are ready to execute and allocates the CPU to one of them. It runs very
           frequently (milliseconds).
        o   Medium-Term Scheduler (Optional): Can "swap out" processes from
            memory (to disk) to reduce the degree of multiprogramming, and later
            "swap them in" again to continue execution. Used in systems that support
            swapping.
   Scheduling Criteria (Goals for a Good Scheduler):
       o   CPU Utilization: Keep the CPU as busy as possible.
       o   Throughput: Number of processes completed per unit time.
       o   Turnaround Time: Total time from submission to completion of a process
           (including waiting).
       o   Waiting Time: Total time a process spends in the ready queue.
       o   Response Time: Time from submission of a request until the first response is
           produced (for interactive processes).
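    Two simple identities tie these criteria together (ignoring I/O time, under the
    usual definitions):

        Turnaround Time = Completion Time - Arrival Time
        Waiting Time    = Turnaround Time - Total CPU Burst Time

    For example, a process that arrives at t = 0 ms, needs a 3 ms CPU burst, and
    completes at t = 27 ms has a turnaround time of 27 ms and a waiting time of
    27 - 3 = 24 ms.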
   Common Scheduling Algorithms (Examples):
       o   First-Come, First-Served (FCFS): Processes are executed in the order they
           arrive. Simple, but can lead to long waiting times for short processes if a long
           one arrives first (convoy effect).
       o   Shortest-Job-First (SJF): The process with the smallest next CPU burst is
           executed next. Optimal for minimizing average waiting time, but difficult to
           implement as future CPU burst times are unknown.
                 Preemptive SJF (Shortest-Remaining-Time-First): If a new process
                  arrives with a shorter remaining time than the currently running
                  process, the current process is preempted.
       o   Priority Scheduling: Each process has a priority number; the CPU is
           allocated to the highest-priority process. Can lead to starvation (low-priority
           processes never run).
                 Aging: A solution to starvation, where the priority of a process
                  increases as it waits.
       o   Round Robin (RR): Each process gets a small unit of CPU time (time
           quantum or time slice), typically 10-100 milliseconds. If the process does not
           complete within this quantum, it is preempted and added to the end of the
           ready queue. Good for interactive systems.
          o   Multilevel Queue Scheduling: The ready queue is partitioned into separate
              queues (e.g., foreground/interactive, background/batch). Each queue can have
              its own scheduling algorithm.
          o   Multilevel Feedback Queue Scheduling: Allows processes to move between
              different queues based on their CPU burst behavior. This prevents starvation
              and can favor I/O-bound processes.
3. Operations on Processes
      Process Creation:
          o   A parent process can create child processes, forming a process tree.
          o   The fork() system call (in UNIX-like systems) creates a new process that is a
              duplicate of the parent process.
           o   The exec() system call (in UNIX-like systems) is often used after fork(): it
               replaces the child's memory image with a new program, which then runs from
               its start (see the sketch after this list).
          o   In Windows, CreateProcess() combines the functionalities of fork() and exec().
          o   Resource Sharing: Parent and child processes can share all, some, or none of
              their resources.
          o   Execution: Parent and child can execute concurrently or the parent can wait
              for the child to terminate.
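    Example: a minimal UNIX C sketch of the fork() + exec() + wait() pattern described
    above; it runs ls in the child while the parent waits (error handling kept to a
    minimum):

        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
        #include <sys/wait.h>

        int main(void)
        {
            pid_t pid = fork();             /* duplicate the calling process        */

            if (pid < 0) {                  /* fork failed                          */
                perror("fork");
                return 1;
            } else if (pid == 0) {          /* child: replace its image with ls     */
                execlp("ls", "ls", "-l", (char *)NULL);
                perror("execlp");           /* reached only if exec fails           */
                exit(1);
            } else {                        /* parent: wait for the child to finish */
                int status;
                wait(&status);
                printf("child %d finished\n", (int)pid);
            }
            return 0;
        }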
      Process Termination:
          o   Normal Termination: A process executes its last statement and then uses an
              exit() system call to ask the OS to delete it. It returns a status value (e.g., 0 for
              success).
          o   Abnormal Termination:
                     By Parent: A parent process can terminate its child process (e.g., if the
                      child exceeds resource limits, performs illegal operations, or the task
                      assigned to the child is no longer needed).
                     By OS: If a process crashes (e.g., division by zero, invalid memory
                      access), the OS terminates it.
          o   Zombie Process: A child process that has terminated but whose parent has not
              yet called wait() (or similar system call) to collect its exit status and release its
              resources. It occupies a minimal amount of system resources (its PCB).
           o   Orphan Process: A child process whose parent terminated before the child
               did. Orphans are typically adopted by the init (or systemd) process, which
               calls wait() on their behalf when they terminate, preventing them from
               becoming long-lived zombies.
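    Example: a minimal sketch showing how a zombie briefly arises and how wait()
    reaps it (run ps during the sleep and the child appears in the Z state; the
    10-second window is an arbitrary choice for demonstration):

        #include <stdlib.h>
        #include <unistd.h>
        #include <sys/wait.h>

        int main(void)
        {
            pid_t pid = fork();

            if (pid == 0)
                exit(0);               /* child terminates immediately           */

            sleep(10);                 /* child is now a zombie: terminated, but
                                          its exit status is still uncollected   */
            waitpid(pid, NULL, 0);     /* parent reaps it; the zombie disappears */
            return 0;
        }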
4. Co-operating Processes
      Concept: Processes are independent by default, but sometimes they need to work
       together to achieve a common goal. Such processes are called "co-operating
       processes."
      Reasons for Co-operation:
          o   Information Sharing: Multiple processes may need access to the same data
              (e.g., a shared database).
          o   Computation Speed-up: A task can be broken into subtasks, and each subtask
              assigned to a separate process, running in parallel.
          o   Modularity: Designing a system as a collection of co-operating modules
              (processes) can improve system structure.
          o   Convenience: A user might want to perform multiple tasks simultaneously
              (e.g., compile, print, and edit).
      Challenges:
          o   Data Consistency: Ensuring that shared data remains accurate and consistent
              when multiple processes are accessing and modifying it.
          o   Process Synchronization: Coordinating the execution of processes so that
              they proceed in a desired order and do not interfere with each other's critical
              operations. This is a major challenge in concurrent programming.
5. Inter-Process Communication (IPC)
      Purpose: Mechanisms provided by the operating system that allow co-operating
       processes to exchange data and information.
      Two Fundamental Models:
          o   5.1. Shared Memory:
                    Concept: A region of memory is established that is shared by multiple
                     co-operating processes. Once the shared memory segment is set up,
                     processes can read from and write to it directly.
                    Mechanism: The OS typically helps set up the shared memory region,
                     but after that, no further OS intervention is required for data exchange.
                    Advantages:
                            Very Fast: Once established, communication occurs at
                             memory access speeds. No kernel involvement for each data
                             transfer.
                            Efficient: Avoids the overhead of copying data multiple times.
                     Disadvantages:
                             Synchronization Issues: Processes must provide their own
                              mechanisms to synchronize access to the shared data (e.g.,
                              using semaphores or mutexes) to prevent race conditions and
                              ensure data consistency.
                             Complexity: Programmers must manage shared memory
                              carefully.
                     Use Cases: Large data transfers, high-performance computing.
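    Example: a minimal POSIX shared-memory sketch using shm_open() and mmap(). The
    segment name "/demo_shm" is arbitrary, error checks are omitted for brevity, and
    older Linux systems need linking with -lrt:

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            const size_t SIZE = 4096;
            int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
            ftruncate(fd, SIZE);                        /* set the segment's size */
            char *ptr = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);        /* map it into memory     */

            if (fork() == 0) {                          /* child writes directly  */
                strcpy(ptr, "hello from child");
                return 0;
            }
            wait(NULL);              /* crude synchronization: wait for the child */
            printf("parent read: %s\n", ptr);  /* parent reads; no kernel copying */
            shm_unlink("/demo_shm");           /* remove the segment              */
            return 0;
        }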
           o   5.2. Message Passing:
                     Concept: Processes communicate by exchanging messages. The OS
                      provides send() and receive() primitives for message exchange.
                     Mechanism: Messages are exchanged via a communication link
                      established by the OS. The OS is directly involved in each message
                      transfer.
                     Advantages:
                             Simpler to Implement: Less risk of synchronization errors for
                              programmers, as the OS handles data transfer and often some
                              synchronization.
                             Suitable for Distributed Systems: Easily extends to
                              communication across networks between processes on different
                              machines.
                             Better Isolation: Processes do not share memory, reducing the
                              risk of one process corrupting another's data.
                     Disadvantages:
                             Slower: Involves more overhead due to system calls (kernel
                              involvement) for each message sent and received, and potential
                              data copying.
                             Limited Message Size: Messages usually have a fixed or
                              maximum size.
                     Communication Link Types:
                             Direct Communication: Processes explicitly name each other
                              to send/receive messages (e.g., send(P, message)).
                             Indirect Communication: Messages are sent to and received
                              from mailboxes (or ports). Processes communicate only if they
                              share a common mailbox.
                     Synchronization in Message Passing:
                             Blocking (Synchronous): The send() call blocks until the
                              message is received. The receive() call blocks until a message
                              is available.
                             Non-Blocking (Asynchronous): The send() call sends the
                              message and continues immediately. The receive() call either
                              receives a message or returns an error/null if no message is
                              available.
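    Example: a minimal message-passing sketch using an ordinary UNIX pipe. Each
    write()/read() is a system call through the kernel, illustrating the per-message
    kernel involvement (and copying) described above:

        #include <stdio.h>
        #include <string.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            int fd[2];
            char buf[64];

            pipe(fd);                            /* fd[0] = read end, fd[1] = write end */

            if (fork() == 0) {                   /* child: the sender                   */
                close(fd[0]);
                const char *msg = "hello via message passing";
                write(fd[1], msg, strlen(msg) + 1);  /* send() analogue: a system call  */
                return 0;
            }
            close(fd[1]);                        /* parent: the receiver                */
            read(fd[0], buf, sizeof(buf));       /* receive() analogue: blocks for data */
            printf("received: %s\n", buf);
            wait(NULL);
            return 0;
        }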
Process Scheduling, Revisited: The Art of CPU Allocation
      Core Problem: In a multi-programming environment, many processes are ready to
       run, but there are far fewer CPU cores available (often just one or a few). The central
       challenge is how to distribute the CPU's time among these competing processes fairly
       and efficiently, while also meeting various performance goals (like quick response for
       users, or high overall work completion).
       The CPU Scheduler's Role: The operating system component responsible for
        selecting which process from the "ready" state will be given control of the CPU. This
        decision happens very frequently, typically every few milliseconds.
      CPU-I/O Burst Cycle: Most processes alternate between:
          o   CPU burst: Periods when the process is actively using the CPU to execute
              instructions.
          o   I/O burst: Periods when the process is waiting for an I/O operation to
              complete (e.g., reading from disk, receiving network data). During this time,
              the CPU can be given to another process.
          o   Problem: If the scheduler isn't smart, it could assign the CPU to a process that
              immediately goes into an I/O wait, leaving the CPU idle. Or, it could let an
              I/O-bound process (which needs the CPU only for short bursts) wait too long
              behind a CPU-bound process.
Key Scheduling Concepts & Their Role in Problem Solving:
   1. Preemption:
          o   Concept: A scheduling decision can either be non-preemptive (once a process
              gets the CPU, it runs until it completes its CPU burst or voluntarily yields the
              CPU) or preemptive (the OS can interrupt a running process and assign the
              CPU to another process, even if the first process hasn't finished its burst).
          o   Problem Solved: Unresponsiveness & Long Waits. In a non-preemptive
              system, a single long-running process could hog the CPU, making the entire
              system unresponsive, especially for interactive users.
          o   Solution: Preemptive scheduling allows the OS to take the CPU away from a
              running process after a certain time or if a higher-priority process becomes
              ready, ensuring that all processes get a chance to run and interactive
              applications remain responsive.
   2. Scheduling Criteria (Goals): These are the metrics the scheduler tries to optimize,
      each addressing a different problem.
          o   Problem: CPU Idleness: How to keep the CPU busy as much as possible?
                    Criterion: CPU Utilization (percentage of time the CPU is busy).
                     Higher is generally better.
          o   Problem: Slow Overall Work Completion: How to maximize the amount of
              work done by the system?
                    Criterion: Throughput (number of processes completed per unit
                     time). Higher is generally better.
          o   Problem: Long Total Wait for Completion: How to minimize the total time
              a process spends in the system until it's completely finished?
                    Criterion: Turnaround Time (total time from submission to
                     completion, including waiting, execution, and I/O time). Lower is
                     generally better.
          o   Problem: Excessive Waiting in Ready Queue: How to minimize the time
              processes spend waiting for their turn on the CPU?
                    Criterion: Waiting Time (total time a process spends in the ready
                     queue). Lower is generally better.
          o   Problem: Laggy Interactive Applications: How to ensure users of
              interactive programs (like word processors, web browsers) feel the system is
              immediately responding to their input?
                    Criterion: Response Time (time from when a request is submitted
                     until the first response is produced, not necessarily completion). Lower
                     is generally better.
Common Scheduling Algorithms and the Problems They Address/Create:
   1. First-Come, First-Served (FCFS)
          o   Concept: Processes are executed in the order they arrive in the ready queue.
              Non-preemptive.
           o   Problem Addressed: Simple to implement and understand. Fair in the narrow
               sense that processes are served strictly in arrival order.
          o   Problem Created (and Example): Convoy Effect.
                     Problem: If a very long CPU-bound process arrives first, all
                      subsequent processes (even very short, I/O-bound ones) must wait for
                      the long process to complete its CPU burst. Meanwhile the I/O
                      devices sit idle, so device utilization drops and average waiting
                      times grow very long.
                Example: Consider processes P1 (24ms CPU burst), P2 (3ms), P3
                 (3ms). If they arrive P1, P2, P3:
                        P1 runs 24ms.
                        P2 waits 24ms, then runs 3ms.
                        P3 waits 27ms, then runs 3ms.
                        Average Waiting Time = (0 + 24 + 27) / 3 = 17ms.
                        If they arrived P2, P3, P1:
                                 P2 runs 3ms.
                                 P3 waits 3ms, then runs 3ms.
                                 P1 waits 6ms, then runs 24ms.
                                 Average Waiting Time = (0 + 3 + 6) / 3 = 3ms.
                     Takeaway: FCFS is highly sensitive to arrival order and can perform
                      poorly when short processes are stuck behind long ones.
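    Example: a small C sketch reproducing the arithmetic above; under FCFS each
    process waits for the sum of the bursts ahead of it:

        #include <stdio.h>

        int main(void)
        {
            int burst[] = {24, 3, 3};     /* P1, P2, P3 in arrival order           */
            int n = 3, elapsed = 0, total_wait = 0;

            for (int i = 0; i < n; i++) {
                total_wait += elapsed;    /* this process waited for all before it */
                elapsed += burst[i];
            }
            printf("average waiting time = %.1f ms\n", (double)total_wait / n);
            /* prints 17.0; reorder bursts to {3, 3, 24} and it prints 3.0 */
            return 0;
        }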
2. Shortest-Job-First (SJF)
      o   Concept: The CPU is assigned to the process that has the smallest next CPU
          burst. Can be preemptive or non-preemptive. Preemptive SJF is also known as
          Shortest-Remaining-Time-First (SRTF).
      o   Problem Addressed: Optimal Average Waiting Time. SJF is provably
          optimal for minimizing the average waiting time for a given set of processes.
      o   Problem Created: Practicality & Starvation.
                Problem: How do you know the length of the next CPU burst? It's
                 impossible to predict perfectly in a general-purpose OS. SJF relies on
                 estimation (e.g., using exponential averaging of past bursts).
                Problem: Starvation (for non-preemptive SJF and strict priority
                 variants): If there's a continuous stream of short jobs, a very long job
                 might never get to run, as there's always a shorter job arriving.
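    Example: the textbook's exponential-averaging estimate of the next burst,
    tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n), sketched in C with the classic
    values tau(0) = 10 and alpha = 1/2:

        #include <stdio.h>

        /* Predict the next CPU burst from the last observed burst t and the
           previous prediction tau, weighted by alpha (0 <= alpha <= 1). */
        double predict_next_burst(double t, double tau, double alpha)
        {
            return alpha * t + (1.0 - alpha) * tau;
        }

        int main(void)
        {
            double tau = 10.0;                       /* initial guess, in ms */
            double bursts[] = {6.0, 4.0, 6.0, 4.0};  /* observed CPU bursts  */

            for (int i = 0; i < 4; i++) {
                tau = predict_next_burst(bursts[i], tau, 0.5);
                printf("prediction after burst %d: %.2f ms\n", i + 1, tau);
            }
            /* predictions: 8.00, 6.00, 6.00, 5.00 */
            return 0;
        }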
3. Priority Scheduling
      o   Concept: Each process has a priority number (lower number often means
          higher priority). The CPU is allocated to the process with the highest priority.
          Can be preemptive or non-preemptive.
      o   Problem Addressed: Allowing critical or time-sensitive tasks to run quickly.
      o   Problem Created: Starvation (Indefinite Blocking).
               Problem: Low-priority processes might never execute if there is a
                continuous stream of higher-priority processes always ready to run.
                They "starve" for CPU time.
               Solution: Aging.
                      Problem Solved by Aging: Starvation of low-priority
                       processes.
                      Concept: Gradually increases the priority of processes that
                       have been waiting in the ready queue for a long time.
                       Eventually, even a very low-priority process will gain enough
                       priority to run.
                       Example: A process starting with priority number 10 (low)
                        might have its number lowered by 1 (i.e., its priority raised)
                        every 5 minutes it waits. After 50 minutes its number reaches
                        0 (the highest priority), and it finally gets the CPU.
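    Example: a minimal C sketch of the aging rule above, reusing the struct pcb from
    earlier (the periodic 5-minute tick and the "lower number = higher priority"
    convention are assumptions for illustration):

        /* Called periodically (e.g., every 5 minutes) over the ready queue. */
        void age_ready_queue(struct pcb *ready[], int n)
        {
            for (int i = 0; i < n; i++) {
                if (ready[i]->priority > 0)      /* 0 is the highest priority    */
                    ready[i]->priority -= 1;     /* raise each waiter's priority */
            }
        }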
4. Round Robin (RR)
     o   Concept: Designed for time-sharing systems. Each process gets a small unit
         of CPU time, called a time quantum (or time slice), typically 10 to 100
         milliseconds. If the process does not complete within this quantum, it is
         preempted and put at the end of the ready queue.
     o   Problem Addressed: Fairness & Responsiveness for Interactive Users.
               Solution: Ensures that no single process can hog the CPU for too long.
                Provides the illusion of all processes running concurrently, making
                interactive applications feel responsive.
     o   Problem Created: Performance based on Time Quantum Choice.
               Problem 1: Too Large a Time Quantum. If the quantum is very
                large, RR behaves like FCFS, reintroducing the convoy effect and poor
                responsiveness.
               Problem 2: Too Small a Time Quantum. If the quantum is too small,
                there will be excessive context switches. Each context switch is pure
                overhead (time spent saving and loading process states, not doing
                useful work). This can drastically reduce CPU utilization and overall
                throughput.
               Example (Time Quantum Problem):
                      Scenario A: Large Quantum (e.g., 100ms) for a 10ms burst
                       process: The process finishes its burst within the quantum, so
                       no preemption overhead, but other processes might wait longer
                       if there are many active ones.
                      Scenario B: Small Quantum (e.g., 1ms) for a 10ms burst
                       process: The process is preempted 9 times and requires 9
                              context switches. If a context switch takes 0.1ms, that's 0.9ms
                              of overhead for a 10ms burst – nearly 10% waste.
                              Solution: The choice of time quantum is a trade-off. It should
                               be large relative to the context-switch time (so overhead stays
                               small) but not so large that responsiveness suffers; a common
                               rule of thumb is that about 80% of CPU bursts should be
                               shorter than the time quantum.
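    Example: a small C sketch of the overhead arithmetic from Scenario B; a burst of
    length b run under quantum q is preempted ceil(b/q) - 1 times (compile with -lm):

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            double burst = 10.0, quantum = 1.0, switch_cost = 0.1;  /* all in ms */

            int switches = (int)ceil(burst / quantum) - 1;      /* 9 preemptions */
            double overhead = switches * switch_cost;           /* 0.9 ms        */
            printf("%d switches, %.1f ms overhead (%.1f%% of the burst)\n",
                   switches, overhead, 100.0 * overhead / burst);
            return 0;
        }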
   5. Multilevel Queue Scheduling
           o   Concept: The ready queue is partitioned into multiple separate queues, each
               with its own scheduling algorithm. Processes are permanently assigned to a
               queue (e.g., foreground/interactive processes in one queue with RR,
               background/batch processes in another queue with FCFS).
           o   Problem Addressed: Differentiating scheduling needs for different types of
               processes.
           o   Problem Created: Starvation Across Queues.
                     Problem: If higher-priority queues are always busy, lower-priority
                      queues might never get CPU time.
                     Solution: Implement time slicing between queues (e.g., 80% CPU time
                      for interactive, 20% for batch) or assign priorities to queues.
   6. Multilevel Feedback Queue Scheduling
           o   Concept: Allows processes to move between different queues. This is the
               most general and complex scheduling algorithm. Processes can change their
               priority based on their behavior (e.g., CPU-bound processes might be moved
               to lower-priority queues, while I/O-bound processes or newly interactive
               processes might be moved to higher-priority queues).
           o   Problem Addressed:
                     Problem: Optimizing for both interactive and batch processes
                      dynamically, without requiring prior knowledge of process behavior.
                     Problem: Starvation (by allowing lower-priority processes to
                      eventually move to higher-priority queues if they wait long enough).
                     Problem: Adapting to changing process behavior (e.g., a process that
                      was CPU-bound suddenly becomes I/O-bound).
           o   Solution: By having multiple queues with different priorities and time quanta,
               and rules for promoting/demoting processes, this algorithm provides a highly
               flexible and adaptive scheduling strategy.
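    Example: a minimal, hypothetical C sketch of typical feedback rules, reusing the
    struct pcb from earlier (demote a process that consumed its whole quantum, promote
    one that blocked early for I/O; the three-queue setup is an assumption):

        #define NUM_QUEUES 3   /* queue 0 = highest priority, shortest quantum */

        /* Applied whenever a process gives up the CPU. */
        void on_cpu_release(struct pcb *p, int used_full_quantum)
        {
            if (used_full_quantum) {              /* CPU-bound behavior: demote */
                if (p->priority < NUM_QUEUES - 1)
                    p->priority += 1;
            } else {                              /* blocked for I/O: promote   */
                if (p->priority > 0)
                    p->priority -= 1;
            }
        }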
Overall Problem Solving with Scheduling:
The various scheduling algorithms are all attempts to solve the fundamental problem of
efficiently and fairly sharing the CPU among multiple competing demands. Each algorithm
has its strengths and weaknesses, making the choice of scheduler a critical design decision for
any operating system, depending on its primary goals (e.g., responsiveness for a desktop OS,
throughput for a server OS, real-time guarantees for an embedded OS).