Operating System: Noor Agha "Wafa"
PAKTIA UNIVERSITY
FACULTY OF COMPUTER SCIENCE
DEPARTMENT OF INFORMATION SYSTEM & NETWORK ENGINEERING
Operating System
By: Noor Agha
Outlines
I (Software)
II (Operating System)
III (Services & functions of OS)
IV (Process in Operating System)
V (Threads in Operating System)
VI (Process synchronization)
VII (Deadlock in operating system)
VIII (Memory management in OS)
IX (Virtual memory)
X (I/O management)
XI (Secondary storage management)
XII (File system)
XIII (Security)
Software
Chapter-I    By: Noor Agha
What is software?
Any idea?
Topics in the chapter
Introduction to software
Classification of software:
On the basis of purpose/application
On the basis of platform
On the basis of deployment
On the basis of license
On the basis of development model
On the basis of size
On the basis of user interface
On the basis of copyright
Operating system
Conclusion
What is software?
"Software is a set of instructions that allows the user to perform a well-defined function or a specified task."
User Interface: Software can be classified as Graphical User Interface (GUI) software
or Command-Line Interface (CLI) software.
System software: System software is software that directly operates the computer hardware and provides basic functionality to the users, as well as to other software, so that everything operates smoothly.
System software controls a computer's internal functioning and also controls hardware devices such as monitors, printers, and storage devices. It acts as an interface between the hardware and user applications.
It helps them communicate with each other: hardware understands only machine language (i.e., 1s and 0s), whereas user applications work in human-readable languages such as English, Hindi, or German. System software converts the human-readable language into machine language and vice versa.
Various types of system software are: operating systems, device drivers, and language processors (for conversion purposes).
Operating system
Any idea?
                                   Operating system
It is the main program of a computer system. When the computer system is turned ON,
it is the first software that loads into the computer’s memory.
It manages all the resources such as memory, CPU, printers, hard disks, etc. with the help of device drivers, and provides an interface that helps the user interact with the computer system.
 It also allows you to communicate with the computer without knowing how to speak the
computer's language.
Software?
Classification of software?
System software?
Application software?
Utility software?
Operating system?
                                    Conclusion
Software is nothing but a collection of instructions that tells a computer what to do.
There are many criteria by which we can classify software, but based on application purpose, software is of three types: system software, application software, and utility software.
System software: System software is software that directly operates the computer
hardware and provides the basic functionality to the users as well as to the other
software to operate smoothly.
Operating system: The operating system is the most important software that runs on a computer. It is the main program of a computer system; when the computer is turned on, it is the first software loaded into the computer's memory.
Operating system: It manages all the resources such as memory, CPU, printers, hard disks, etc. with the help of device drivers, and provides an interface that helps the user interact with the computer system.
END OF THE SECTION
Operating System
Chapter-II    By: Noor Agha
Topics in the chapter
Introduction to OS
History of operating system
Need for operating system
Components of operating system
Types of operating systems
It is the main program of a computer system. When the computer system is turned ON,
it is the first software that loads into the computer’s memory.
It manages all the resources such as memory, CPU, printers, hard disks, etc. with the help of device drivers, and provides an interface that helps the user interact with the computer system.
 It also allows you to communicate with the computer without knowing how to speak the
computer's language.
The interface between computer hardware and the user is known as the operating system.
                   History of operating system
  The history of the operating system has five generations. From the days
when computers were running manually without any operating system, to
    today's smart, cloud-connected, and AI-integrated platforms. Each
generation brought new innovations that changed the way we interact with
 technology. Let’s take a quick journey through this fascinating evolution!
                       History of operating system
                    Let us understand each of them in detail.
Functions of an operating system: multitasking, memory control, and security.
Components of an operating system: kernel, shell, and file system.
Components of an operating system
Kernel
A kernel is the core and the heart of an OS (operating system).
A kernel acts as a bridge between the user and the various resources offered by the system.
The kernel loads into memory first when the operating system boots and remains in memory until the operating system is shut down.
The purpose of the kernel is to establish communication between user-level applications and the hardware.
 NTFS (New Technology File System): A modern file system used by Windows. It supports
  features such as file and folder permissions, compression, and encryption.
 ext (Extended File System): A file system commonly used on Linux and Unix-based operating
  systems.
 APFS (Apple File System): A new file system introduced by Apple for their Macs and iOS
  devices.
       Types of Operating system
Various types of operating system are as following:
                        Types of Operating system
                               Batch Operating System
  Batch operating systems were one of the earliest types of operating systems
    designed to manage tasks (or "jobs") in a sequential, automated manner.
 In a batch processing environment, users submit a series of tasks, which are grouped into
                        batches and processed without interaction.
Job Collection:
Users would prepare their tasks or programs (jobs) offline, often on punch cards, paper
tapes, or later on files.
Jobs were submitted to the system in a batch, where multiple jobs could be grouped together
for processing.
(Figure: paper tape, an early medium used to submit batch jobs.)
                        Types of Operating system
                               Batch Operating System
Job Scheduling:
The batch OS would collect all the jobs and schedule them for execution in the order they were received.
The system would manage a queue of jobs to be executed sequentially, ensuring that they
didn’t interfere with each other.
No User Interaction:
Once a job was submitted, the user would not interact with the system while it was running.
The OS would process the job to completion.
The jobs ran in a "batch," with no need for real-time input from the user
                      Types of Operating system
                     Multiprogramming Operating System
Each process needs two types of system time: CPU time and I/O time.
Unlike a batch OS, in a multiprogramming environment, when a process performs its I/O, the CPU can start executing other processes. Therefore, multiprogramming improves the efficiency of the system.
Increased throughput: because the CPU is kept busy while other processes wait on I/O, more work can be done in less time.
                       Types of Operating system
Multiprocessing Operating System
It is of two types: symmetric and asymmetric multiprocessing.
                   Types of Operating system
                       Network Operating System
These systems run on a server and provide the capability to manage data,
 users, groups, security, applications, and other networking functions.
In real-time systems, each job carries a deadline within which it must be completed; otherwise there will be a huge loss, or even if a result is produced, it will be completely useless.
In a Distributed OS, multiple CPUs are utilized, but for end-users, it appears
                  as a typical centralized operating system.
Structure of operating system
The approach of interconnecting and integrating multiple operating system components into the kernel can be described as the operating system structure.
Since the operating system is such a complex structure, it should be created with utmost care so that it can be used and modified easily, just as we break a big problem down into smaller, easier-to-solve subproblems.
An easy way to do this is to create the operating system in parts. Each of these parts should be well defined, with clear inputs, outputs, and functions.
Various sorts of structures are used to implement operating systems:
Simple structure
Monolithic structure
Layered structure
Micro-kernel structure
Modular structure
                                    Simple structure
📌 Description: The Simple Structure is the most basic form of OS design. There is no clear
separation between different OS components. Everything is written as one large program
that directly interacts with hardware and applications. It’s not layered, not modular, and there
is no abstraction—just one block of code with everything inside.
🧱 Components:
System calls
File management
I/O handling
Process control
Memory management
📈 Advantages:
Easy to develop for small systems
Fast execution due to direct calls
📉 Disadvantages:
No modularity (hard to update/debug)
High risk of system crashes
Difficult to maintain or scale
💻 Examples: MS-DOS
Monolithic structure
📌 Description: The entire operating system runs as one large program in kernel mode. All components (process, memory, and file management, device drivers) live in a single address space and call each other directly.
🧱 Components:
Process management
Memory management
File system
Device drivers (all inside one kernel)
📈 Advantages:
Fast execution (direct function calls, no message passing)
Simple overall design
📉 Disadvantages:
A bug in any component can crash the entire system
Large code base, hard to maintain
💻 Examples:
Traditional UNIX
Early Linux kernels
                              Micro-kernel structure
📌 Description:
Only essential services run in kernel mode (e.g., CPU scheduling, memory management,
IPC). All other OS services (file systems, device drivers) are moved to user space and
communicate via message passing.
🧱 Components:
Kernel: Minimal (IPC, CPU scheduling)
User space services: File system, device drivers, etc.
📈 Advantages:
Very secure and stable
Crashes in services don’t crash the kernel
📉 Disadvantages:
Slower due to message passing
Complex communication between components
💻 Examples:
Minix
QNX
MacOS X (based on Mach kernel)
                                   Layered approach
📌 Description:
The OS is split into layers. Each layer only interacts with the layer just below it. The lowest
layer is hardware, and the top layer is user interface.
🧱 Components:
Hardware (Layer 0)
CPU scheduling and memory (Layer 1)
Device drivers (Layer 2)
File systems (Layer 3)
Shell/User interface (Layer 4)
📈 Advantages:
Better debugging (isolate problems in a layer)
Easier to maintain or update a specific layer
📉 Disadvantages:
Slower due to multiple layers
Layer design must be careful; upper layers depend on lower ones
💻 Examples:
THE Operating System
Multics
                                 Modular approach
📌 Description:
This is a modern version of monolithic kernel, but it supports loadable modules (e.g.,
device drivers, file systems) that can be added or removed during runtime.
🧱 Components:
Core kernel
Loadable modules (device drivers, etc.)
📈 Advantages:
Easier to maintain and update
Good performance and flexibility
📉 Disadvantages:
Still not as secure as microkernel
Module bugs can still affect the kernel
💻 Examples:
Modern Linux
Solaris
                                  Client server model
📌 Description:
The OS is organized into a set of services, each running as a separate process. These
services act as servers, and applications act as clients that request services like file access
or printing.
🧱 Components:
Kernel (minimal)
Multiple servers: File server, device server, etc.
📈 Advantages:
Highly modular
Suitable for distributed systems
📉 Disadvantages:
More overhead due to multiple context switches and message passing
💻 Examples:
Windows NT
Mac OS X (partially)
Conclusion
END OF THE SECTION
Services & Functions of Operating System
Chapter-III    By: Noor Agha
Topics in the chapter
Operating system services
Operating system functions
Conclusion
                     Operating system services
Operating system is a software that acts as an intermediary between the user and
                              computer hardware.
It is a program with the help of which we are able to run various applications.
  It is the one program that is running all the time. Every computer must have an
               operating system to smoothly execute other programs.
The OS coordinates the use of the hardware and application programs for various
     users. It provides a platform for other application programs to work.
 Program Execution
 Input Output Operations
 File Management
 Error Handling
 Resource Management
 Communication between Processes
 Networking
 System Utilities
 User Interface
Program Execution
The operating system loads programs into memory and runs them as processes. The order in which they are executed depends on the CPU scheduling algorithm; a few are FCFS, SJF, etc.
While programs execute, the operating system also handles synchronization and deadlocks so that competing processes do not block one another indefinitely.
The operating system is responsible for the smooth execution of both user and system programs.
                         Input Output Operations
Operating System manages the input-output operations and establishes
communication between the user and device drivers.
An I/O subsystem comprises I/O devices and their corresponding driver software.
Device drivers are software associated with the hardware managed by the OS, keeping the devices and the system properly in sync.
An Operating System manages the communication between user and device drivers.
I/O operation means read or write operation with any file or any specific I/O device.
Operating system provides the access to the required I/O device when required.
                   File manipulation & management
A file represents a collection of related information. Computers can store files on the
disk (secondary storage), for long-term storage purpose.
Examples of storage media include magnetic tape, magnetic disk and optical disk
drives like CD, DVD. Each of these media has its own properties like speed, capacity,
data transfer rate and data access methods.
A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories.
                   File manipulation & management
     Following are the major activities of an operating system with respect to file
                                     management:
 The operating system gives the permission to the program for operation on file.
Communication
Multiple processes communicate with one another through communication lines in the network.
Following are the major activities of an operating system with respect to communication:
 Both processes can be on one computer or on different computers, but connected through a computer network.
Error handling
The main activities of an operating system with regard to error handling are as follows:
 Errors include hardware and software errors such as device failures, memory errors, division by zero, and attempts to access forbidden memory locations.
 To handle errors, the operating system monitors the system to detect them and takes suitable action with the least impact on running applications.
 The OS takes the proper action to guarantee accurate and reliable computing.
                         Resource Management
 System resources are shared between various processes. It is the Operating system
                          that manages resource sharing
It also manages the CPU time among processes using CPU Scheduling Algorithms.
It also helps in the memory management of the system. It also controls input-output
devices.
The OS also ensures the proper use of all the resources available by deciding which
resource to be used by whom.
                            Resource Management
In a multi-user or multi-tasking environment, resources such as main memory, CPU cycles, and file storage must be allocated to each user or job.
Following are the major activities of an operating system with respect to resource management:
The operating system ensures that all access to system resources is monitored and controlled.
It also ensures that external resources and peripherals are protected from invalid access.
Following are the major activities of an operating system with respect to protection:
 The OS ensures that external I/O devices are protected from invalid access attempts.
Functions of Operating System
Users interact with the operating system either through the command-line interface (CLI) or the graphical user interface (GUI).
Conclusion
END OF THE SECTION
Process in Operating System
Chapter-IV    By: Noor Agha
Topics in the chapter
Introduction to process
Process vs program
Process control block (PCB)
Process attributes
Process life cycle
Process scheduling
        What is a process in OS?
Process vs program
Program vs software
For example, when we write a program in C or C++ and compile it, the
compiler creates binary code. The original code and binary code are both
programs. When we actually run the binary code, it becomes a process.
A process is not the same as a program:
A process is the running instance of a program, while a program is the executable code.
A process can belong to only one program, whereas a program can have many processes.
                     Example of Process and program
# a simple Python program; when run, it becomes a process
a, b = 10, 5
# addition
print('Sum:', a + b)
# subtraction
print('Subtraction:', a - b)
# multiplication
print('Multiplication:', a * b)
# division
print('Division:', a / b)
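To see this distinction from a running program's point of view, a small sketch (the printed values will differ on every run):

```python
import os
import sys

# The script file on disk is the program; this running instance is a process.
print("Program file:", sys.argv[0])
# The OS assigns this process a unique identifier (PID).
print("Process ID:", os.getpid())
```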
Attributes or Characteristics of a Process
Process ID: When a process is created, a unique id is assigned to the process which is
used for unique identification of the process in the system.
Process state: The Process, from its creation to the completion, goes through various states
which are new, ready, running and waiting.
Program counter: A program counter stores the address of the last instruction of the
process on which the process was suspended. The CPU uses this address when the
execution of this process is resumed.
Priority: Every process has its own priority. The process with the highest priority among the
processes gets the CPU first.
               Attributes or Characteristics of a Process
  A process has the following attributes:
General Purpose Registers: Every process has its own set of registers
which are used to hold the data which is generated during the execution of
the process.
List of open files: During the Execution, Every process uses some files
which need to be present in the main memory.
List of open devices: OS also maintain the list of all open devices which are
used during the execution of the process.
It is also known as the task control block. It is a data structure, which contains the
following information about a process:
Process ID: an identification mark assigned to the process; it uniquely identifies the process in the system.
Process State: the process, from its creation to its completion, goes through various states: new, ready, running, and waiting.
Program Counter: the address of the next instruction to be executed is stored in a CPU register called the program counter (PC).
                            Process Control Block
CPU Registers: Every process has its own set of registers which are used to hold the data
which is generated during the execution of the process.
Accounts information: Amount of CPU used for process execution, time limits, execution
ID, etc
I/O Status information: it contains information about the I/O devices used during process
execution.
Priority: Every process has its own priority. The process with the highest priority among the
processes gets the CPU first.
               Process life cycle states of a process
When a process is executed, it goes through some states from beginning till the end
called process life cycle or states of a process
                  Process life cycle or states of a process
S.N.                                            State & Description
1      Start
       This is the initial state when a process is first started/created.
2      Ready
       The process is waiting to be assigned to a processor. Ready processes wait for the operating system to allocate the processor to them so that they can run. A process may enter this state after the Start state, or while running, when the scheduler interrupts it to assign the CPU to another process.
3      Running
       Once the process has been assigned to a processor by the OS scheduler, the process state is
       set to running and the processor executes its instructions.
4      Waiting
       Process moves into the waiting state if it needs to wait for a resource, such as waiting for user
       input, or waiting for a file to become available.
5      Terminated or Exit
       Once the process finishes its execution, or it is terminated by the operating system, it is moved
       to the terminated state where it waits to be removed from main memory.
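The state table above can be sketched as a small transition map; the state names follow the table, and the helper function is hypothetical:

```python
# Allowed transitions in the five-state process life cycle
TRANSITIONS = {
    "start": {"ready"},                             # admitted by the OS
    "ready": {"running"},                           # dispatched by the scheduler
    "running": {"ready", "waiting", "terminated"},  # preempted, blocked, or finished
    "waiting": {"ready"},                           # awaited resource became available
    "terminated": set(),                            # removed from main memory
}

def can_move(src, dst):
    """Return True if a process may move from state src to state dst."""
    return dst in TRANSITIONS[src]

print(can_move("running", "waiting"))  # True: a running process may block
print(can_move("waiting", "running"))  # False: it must go through ready first
```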
Process scheduling
Multiprogramming operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
Process scheduling is the activity of removing the running task from the processor and selecting another task for processing. It moves processes between states such as ready, waiting, and running.
                        Categories of Scheduling
                       There are two categories of scheduling:
Non-preemptive: Here the resource can’t be taken from a process until the process
completes execution. The switching of resources occurs when the running process
terminates and moves to a waiting state.
Preemptive: Here the OS allocates the resources to a process for a fixed amount of
time. During resource allocation, the process switches from running state to ready
state or from waiting state to ready state. This switching occurs as the CPU may give
priority to other processes and replace the process with higher priority with the
running process.
                      Process Scheduling Queues
The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues.
The OS maintains a separate queue for each of the process states and PCBs of all
processes in the same execution state are placed in the same queue.
When the state of a process is changed, its PCB is unlinked from its current queue
and moved to its new state queue.
                      Process Scheduling Queues
The Operating System maintains the following important process scheduling queues:
Job queue: This queue keeps all the processes in the system.
Ready queue: This queue keeps a set of all processes residing in main memory, ready
and waiting to execute. A new process is always put in this queue.
Device queues: The processes which are blocked due to unavailability of an I/O device
constitute this queue.
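A minimal sketch of how a PCB moves between these queues when a process blocks on I/O (the process names are made up):

```python
from collections import deque

# one queue per process state, as described above
ready_queue = deque(["P1", "P2", "P3"])  # processes in main memory, ready to run
device_queue = deque()                   # processes blocked on an I/O device

# the scheduler dispatches the process at the head of the ready queue
running = ready_queue.popleft()

# when the running process requests I/O, its PCB is unlinked from its
# current queue and moved to the device queue
device_queue.append(running)

print("Device queue:", list(device_queue))
print("Ready queue:", list(ready_queue))
```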
                            Process scheduling
Process scheduling is the activity of the process manager that handles the removal of
the running process from the CPU and the selection of another process on the basis
of a particular strategy.
                             Types of schedulers
Long Term or Job Scheduler: Long term scheduler is also known as job scheduler.
It chooses the processes from secondary memory and keeps them in the ready queue
in the primary memory.
Short term scheduler: The short term scheduler is also known as the CPU scheduler.
It selects one of the jobs from the ready queue and dispatches it to the CPU for execution.
A scheduling algorithm is used to select which job is dispatched for execution.
                            Types of schedulers
Medium term scheduler: Medium term scheduler takes care of the swapped out
processes.
If a running process needs some I/O time to complete, its state must change from running to waiting.
The medium term scheduler removes such a process from memory to make room for other processes. These are the swapped-out processes, and this procedure is called swapping.
The medium term scheduler is responsible for suspending and resuming the
processes.
                     What is scheduling algorithm
A CPU scheduling algorithm is used to determine which process will use CPU for
execution and which processes to hold or remove from execution.
The main goal or objective of CPU scheduling algorithms in OS is to make sure that
the CPU is never in an idle state, meaning that the OS has at least one of the
processes ready for execution among the available processes in the ready queue.
A Scheduling Algorithm is the algorithm which tells us how much CPU time we can
allocate to the processes.
                   Purpose of scheduling algorithm
A process to complete its execution needs both CPU time and I/O time.
In a multiprogramming system, there can be one process using the CPU while another
is waiting for I/O whereas, in a uni-programming system, time spent waiting for I/O is
completely wasted as the CPU is idle at this time.
 Maximize the CPU utilization, i.e., keep the CPU as busy as possible.
Response time: the time between a command request and the beginning of the response.
Scheduling Strategies
Scheduling falls into one of two categories: non-preemptive and preemptive.
FCFS (First Come First Serve) Scheduling Algorithm
First come first serve scheduling states that the process that requests the CPU first is allocated the CPU first; it is implemented using a FIFO queue.
SJF or SJN (Shortest Job First or Shortest Job Next) Scheduling Algorithm
Based on this algorithm, the process whose burst time is the smallest is executed first.
Characteristics of SJF:
 Shortest Job First has the advantage of having the minimum average waiting time among all operating system scheduling algorithms.
 Each task is associated with a unit of time to complete.
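The two policies can be compared with a short sketch; the burst times are hypothetical, and all processes are assumed to arrive at time 0:

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process when run in arrival (FIFO) order."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # a process waits while earlier jobs run
        elapsed += burst
    return waits

def sjf_waiting_times(bursts):
    """Waiting time of each process when the shortest burst runs first."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

bursts = [6, 8, 3]  # hypothetical CPU burst times
print("FCFS waits:", fcfs_waiting_times(bursts))  # [0, 6, 14], average 6.67
print("SJF waits:", sjf_waiting_times(bursts))    # [3, 9, 0], average 4.0
```

With the same bursts, SJF lowers the average waiting time, which matches the characteristic stated above.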
         LJF (Longest Job First) Scheduling Algorithm
Longest Job First(LJF) scheduling process is just opposite of shortest job first (SJF)
as the name suggests this algorithm is based upon the fact that the process with the
largest burst time is processed first.
Example Gantt chart (LJF): P3 runs 0–6, P4 runs 6–9, P1 runs 9–11, P2 runs 11–12.
Priority Scheduling Algorithm
Each process is assigned a priority, and the CPU is allocated to the process with the highest priority. Processes with the same priority are executed on a first come, first served basis.
SRT (Shortest Remaining Time) Scheduling Algorithm
Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
The processor is allocated to the job closest to completion, but it can be preempted by a newly ready job with a shorter time to completion.
                 Round Robin Scheduling Algorithm
Round Robin is a preemptive process scheduling algorithm.
Once a process is executed for a given time period, it is preempted and other
process executes for a given time period.
A small unit of time, called a time quantum or time slice, is defined. A time quantum is generally from 10 to 100 milliseconds in length.
The ready queue is treated as a circular queue. The CPU scheduler goes around the
ready queue, allocating the CPU to each process for a time interval of up to 1 time
quantum.
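The mechanism above can be sketched with a FIFO queue standing in for the circular ready queue; the burst times and quantum are made up:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return each process's completion time under round-robin scheduling."""
    queue = deque(enumerate(bursts))   # (pid, remaining burst), in arrival order
    finish, clock = {}, 0
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)  # run for at most one time quantum
        clock += run
        if remaining > run:
            queue.append((pid, remaining - run))  # preempted: back of the queue
        else:
            finish[pid] = clock                   # process completed
    return finish

print(round_robin([5, 3], quantum=2))  # {1: 7, 0: 8}
```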
              Multilevel Queue Scheduling Algorithm
With both priority and round-robin scheduling, all processes may be placed
in a single queue, and the scheduler then selects the process with the highest
priority to run.
              Multilevel Queue Scheduling Algorithm
 If there are multiple processes in the highest-priority queue, they are executed in round-robin order.
                               Context Switch
A context switch is the mechanism to store and restore the state or context of a CPU
in Process Control block so that a process execution can be resumed from the same
point at a later time.
When the scheduler switches the CPU from executing one process to execute another,
the state from the current running process is stored into the process control block.
                                Context Switch
After this, the state for the process to run next is loaded from its own PCB and used to
set the PC, registers, etc. At that point, the second process can start executing.
                 IPC (Inter-process Communication)
Inter-process communication (IPC) is the mechanism the operating system provides to
let cooperating processes exchange data and synchronize their actions, most commonly
through shared memory or message passing.
Conclusion
END OF THE SECTION
                         Threads in
                      Operating System
Chapter-V       By: Noor Agha
               Topics in the chapter
Introduction to threads in OS
Examples of threads in OS
Why threads in OS
Threads VS process
Components of threads
Types of threads in OS
Multi-threading in OS
Conclusion
             What is thread in operating system?
A thread is a lightweight process that the operating system can schedule and run
concurrently with other threads.
Threads share the same memory space and resources of the process they belong to,
allowing for concurrent execution and efficient multitasking.
The operating system creates and manages threads, and they share the same memory
and resources as the program that created them.
Web Browser: Imagine you're using a web browser that needs to download
  multiple files simultaneously while also rendering a webpage. Each file
download and rendering task could be handled by a separate thread within
the browser's process.
   Video Game: In a multiplayer video game, there are often various tasks
   running concurrently, such as updating player positions, handling user
input, and managing network communication. Each of these tasks could be
 assigned to a separate thread within the game's process. This ensures that
the game remains responsive and can handle multiple players interacting in
                  real-time without delays or interruptions.
                 Why do we need threads in OS?
 Threads in the operating system provide multiple benefits and improve the
 overall performance of the system. Some of the reasons threads are needed
 in the operating system are:
Process: uses more resources, and hence processes are termed heavyweight. Thread: threads share resources, and hence are termed lightweight.
Process: creation and termination times are slower. Thread: creation and termination times are faster compared to processes.
Thread State: Indicates the current condition of the thread, such as running,
waiting, or terminated.
Program Counter: Keeps track of the address of the next instruction to be
executed by the thread.
Stack: Each thread has its own stack, which stores local variables, function
parameters, and return addresses.
Thread Priority: Determines the order in which threads are scheduled for
execution by the operating system.
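The components above can be seen in action with Python's `threading` module. This is a minimal sketch: several threads run inside one process and update a variable in the shared address space (the lock is explained later under process synchronization):

```python
import threading

counter_lock = threading.Lock()
total = 0                          # shared by all threads of this process

def add(n):
    global total
    for _ in range(n):
        with counter_lock:         # protect the shared variable
            total += 1

threads = [threading.Thread(target=add, args=(1000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()         # wait for every thread to finish
print(total)                       # 4000: all threads shared one address space
```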
                      User Level Thread (ULT)
 User-level threads are implemented and managed by the user, and the kernel is
  not aware of them.
Imagine you're baking a cake, and you have two friends helping you: Alice
and Bob.
Alice and Bob are user-level threads, representing smaller tasks within the
baking process.
                                           Process (You):
You oversee the entire baking process, coordinating Alice and Bob's tasks. While Alice is waiting
for sugar or Bob is waiting for the oven to heat, you can't proceed with other tasks like preparing
the frosting. If there's a delay or problem with one task, it affects the entire baking process since
you're managing everything.
In this scenario, Alice and Bob represent user-level threads because they're managed directly by
you (the process), without involving any external authorities (like a professional baker or a recipe
book).
They're lightweight and responsive to your instructions, but they rely on your coordination for the
overall success of the baking project.
                     Kernel Level Thread (KLT)
 Kernel-level threads are implemented using system calls and are recognized by
  the kernel of the OS.
Imagine you're running a big event, like a concert, with multiple security
guards managing different entrances. In this analogy:
• You represent the operating system kernel, overseeing the entire event.
• Each security guard represents a kernel-level thread, responsible for
  managing a specific entrance.
                         Multi-threading in OS
You are already aware of the term multitasking, which allows processes to run
concurrently.
Similarly, when multiple threads run concurrently it is known as multithreading.
Multithreading can also handle multiple requests from the same user.
Threads share the memory and the resources of the process to which they belong
by default.
The benefit of sharing code and data is that it allows an application to have
several threads of activity within same address space.
                  Advantages of multithreading
Number of CPUs — Multiprocessing: more than one; Multithreading: can be one or more than one; Multitasking: one.
Number of processes being executed — Multiprocessing: more than one process can execute at a time; Multithreading: various components of the same process execute at a time; Multitasking: processes are executed one by one.
Critical section
Mutex
Conclusion
                    Process synchronization
 Independent Process: The execution of one process does not affect the execution
  of other processes.
Note: Process synchronization problem arises in the case of Cooperative process also
because resources are shared in Cooperative processes.
                       Process synchronization
Because of this reason, the operating system has to perform many tasks,
and sometimes simultaneously.
This isn't usually a problem unless these simultaneously occurring processes use a
common resource.
                        Process synchronization
   For example, consider a bank that stores the account balance of each
customer in the same database. Now suppose you initially have x rupees in
  your account. Now, you take out some amount of money from your bank
   account, and at the same time, someone tries to look at the amount of
money stored in your account. As you are taking out some money from your
account, after the transaction, the total balance left will be lower than x. But,
 the transaction takes time, and hence the person reads x as your account
                  balance which leads to inconsistent data.
If in some way we could make sure that only one process occurs at a time, we could
ensure consistent data; this is why we need process synchronization.
                                 Race condition
When more than one process executes the same code or accesses the same memory or
shared variable at the same time, the output or the value of the shared variable may
be wrong: each process is effectively racing to have its result count. This
condition is known as a race condition.
When several processes access and manipulate the same data concurrently, the outcome
depends on the particular order in which the accesses take place.
In the above image, if Process1 and Process2 happen at the same time, user 2 will get
the wrong account balance as Y because of Process1 being transacted when the
balance is X.
Inconsistency of data can occur when various processes share a common resource in
a system which is why there is a need for process synchronization in the operating
system
                               Example
Let us consider the following actions done by two processes on a shared variable
value, which is initially 3:
value = value + 3   // process p1  → value = 6
value = value - 3   // process p2  → value = 3
After p1 runs, value should be 6, but because process p2 interrupted and ran at the
same time, value was changed back to 3. This is the problem of synchronization.
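The two processes above can be sketched as two threads. This is a minimal illustration, not the author's code; with the lock each read-modify-write is atomic, and without it the interleaving of `tmp = value` and the write-back can produce a wrong result:

```python
import threading

value = 3
lock = threading.Lock()

def p1():                      # value = value + 3
    global value
    with lock:                 # without the lock, p1 and p2 could interleave
        tmp = value
        value = tmp + 3

def p2():                      # value = value - 3
    global value
    with lock:
        tmp = value
        value = tmp - 3

t1, t2 = threading.Thread(target=p1), threading.Thread(target=p2)
t1.start(); t2.start()
t1.join(); t2.join()
print(value)   # always 3 with the lock; without it, 0 or 6 are also possible
```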
Solutions to critical section problem
                             Conditions to be fulfilled
Any solution to the critical section problem must fulfill the following conditions:
Mutual exclusion: when one process is executing in its critical section, no other
process should enter the critical section.
Progress: When no process is executing in its critical section, and there exists a
process that wishes to enter its critical section, it should not have to wait indefinitely
to enter it.
Bounded waiting: Bounded waiting means that each process must have a limited
waiting time. It should not wait endlessly to access the critical section.
           Software based solution
                Peterson's solution
Video link
https://www.youtube.com/watch?v=gYCiTtgGR5Q&t=946s
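Peterson's algorithm can be sketched for two threads as below. This is an illustrative sketch only: on real hardware the algorithm additionally needs memory barriers, which Python's interpreter lock happens to hide here:

```python
import threading

flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # whose turn it is to wait
count = 0               # shared resource protected by the algorithm

def worker(i):
    global turn, count
    other = 1 - i
    for _ in range(10000):
        flag[i] = True
        turn = other                        # politely give the other process priority
        while flag[other] and turn == other:
            pass                            # busy-wait until it is safe to enter
        count += 1                          # critical section
        flag[i] = False                     # exit section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start(); t0.join(); t1.join()
print(count)   # 20000: mutual exclusion held for every increment
```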
          Software based solution
                   Semaphore
Video link
https://www.youtube.com/watch?v=XDIOC2EY5JE&t=10s
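A counting semaphore limits how many processes may be inside a region at once. A minimal sketch using Python's standard library (the worker counts here are hypothetical):

```python
import threading

pool = threading.Semaphore(2)   # at most 2 threads inside the region at once
active, peak = 0, 0
guard = threading.Lock()        # only protects the bookkeeping counters

def worker():
    global active, peak
    with pool:                  # wait() / P operation on entry, signal() / V on exit
        with guard:
            active += 1
            peak = max(peak, active)
        with guard:
            active -= 1

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(peak <= 2)   # True: the semaphore never admitted more than 2 at once
```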
           Hardware based solutions
                Test and Set Lock
Video link
https://www.youtube.com/watch?v=5oZYS5dTrmk&t=112s
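The hardware test-and-set instruction atomically reads a flag and sets it to true; a spinlock is built by looping until the old value was false. A sketch, emulating the atomic instruction with a standard-library lock since Python cannot issue it directly:

```python
import threading

class TestAndSetLock:
    def __init__(self):
        self._flag = False
        self._atomic = threading.Lock()   # stands in for the atomic instruction

    def test_and_set(self):
        """Atomically return the old flag value and set the flag to True."""
        with self._atomic:
            old, self._flag = self._flag, True
            return old

    def acquire(self):
        while self.test_and_set():        # spin while the lock is already held
            pass

    def release(self):
        self._flag = False

lock = TestAndSetLock()
count = 0

def worker():
    global count
    for _ in range(1000):
        lock.acquire()
        count += 1                        # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(count)   # 4000
```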
Conclusion
END OF THE SECTION
                          Deadlock in
                        Operating System
Chapter-VII     By: Noor Agha
             Topics in the chapter
Introduction to deadlock
Conditions of deadlock
Handling deadlock
Conclusion
Introduction to deadlock
Real world example of deadlock
Example of deadlock
Deadlock conditions
Handling deadlock
Deadlock handling strategies
Conclusion
END OF THE SECTION
                    Memory Management
Chapter-VIII    By: Noor Agha
            Topics in the chapter
Introduction to memory management
What is the need for memory management?
Memory Management Techniques
Contiguous memory management schemes
Non-Contiguous memory management schemes
What is paging?
What is Segmentation?
Swapping
Fragmentation
Conclusion
             Introduction to memory management
Memory is an important part of the computer, used to store data.
At any time, many processes compete for it. Moreover, to increase performance,
several processes are executed simultaneously.
Memory management handles primary memory and moves processes back and forth
between main memory and disk during execution.
          Introduction to memory management
In this scheme (single contiguous allocation), the main memory is divided into two
contiguous areas or partitions. The operating system resides permanently in one
partition, generally at the lower memory, and the user process is loaded into the
other partition.
                           Multiple Partitioning
 The problem of inefficient CPU use can be overcome using multiprogramming that
                allows more than one program to run concurrently.
To switch between two processes, the operating systems need to load both processes
                             into the main memory.
The operating system needs to divide the available main memory into multiple parts to
load multiple processes into the main memory. Thus multiple processes can reside in
                          the main memory simultaneously.
                           Multiple Partitioning
               The multiple partitioning schemes can be of two types:
 Fixed Partitioning
 Dynamic Partitioning
                              Fixed Partitioning
The main memory is divided into several fixed-sized partitions in a fixed partition
memory management scheme or static partitioning.
These partitions can be of the same size or different sizes. Each partition can hold a
single process.
These partitions are made at the time of system generation and remain fixed after that.
                            Dynamic Partitioning
In this scheme, the partitions used are of variable size, and the number of partitions
is not defined at system generation time.
Requesting processes are allocated memory until the entire physical memory is
exhausted or the remaining space is insufficient to hold the requesting process.
Non-contiguous memory allocation/management technique
The blocks of memory allocated to the process need not be contiguous, and the
operating system keeps track of the various blocks allocated to the process.
Non-contiguous memory allocation is suitable for larger memory sizes and where
efficient use of memory is important.
Non-contiguous memory allocation/management technique
Non-contiguous memory allocation can be done in two ways
Paging
 Segmentation
                                      Paging
In paging, the memory is divided into fixed-size pages, and each page is assigned to a
process.
Paging is a memory management scheme that eliminates the need for a contiguous
allocation of physical memory.
In paging, the physical memory is divided into fixed-size blocks called page frames
which are the same size as the pages used by the process.
The process’s logical address space is also divided into fixed-size blocks called
pages, which are the same size as the page frames.
When a process requests memory, the operating system allocates one or more page
frames to the process and maps the process’s logical pages to the physical page
frames.
                                    Paging
The mapping between logical pages and physical page frames is maintained by the
page table, which is used by the memory management unit to translate logical
addresses into physical addresses.
The page table maps each logical page number to a physical page frame number.
The mapping from virtual to physical address is done by the Memory Management
Unit (MMU) which is a hardware device and this mapping is known as the paging
technique.
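The page-table lookup the MMU performs can be sketched in a few lines. The page size and the page-table entries below are hypothetical values chosen for illustration:

```python
# Logical-to-physical address translation with paging - a minimal sketch.
PAGE_SIZE = 1024                    # assumed 1 KB pages
page_table = {0: 5, 1: 2, 2: 7}     # hypothetical page -> frame mapping

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE      # high bits: page number
    offset = logical_addr % PAGE_SIZE     # low bits: offset within the page
    frame = page_table[page]              # the MMU looks this up in the page table
    return frame * PAGE_SIZE + offset     # physical address

print(translate(1050))   # page 1, offset 26 -> frame 2 -> 2*1024 + 26 = 2074
```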
                                         Example
 Let us consider the main memory size 16 Kb and Frame size is 1 KB therefore the main
  memory will be divided into the collection of 16 frames of 1 KB each.
 There are 4 processes in the system that is P1, P2, P3 and P4 of 4 KB each. Each
  process is divided into pages of 1 KB each so that one page can be stored in one frame.
 Initially, all the frames are empty, therefore the pages of the processes will be
  stored in a contiguous way.
 Frames, pages and the mapping between the two is shown in the image below.
                                Segmentation
 In segmentation, the memory is divided into variable-sized segments, and each
segment is assigned to a process. This technique is more flexible than paging but
requires more overhead to keep track of the allocated segments.
                                 Segment Table
The details about each segment are stored in a table called a segment table. Segment
table is stored in one (or many) of the segments.
                               Fragmentation
Fragmentation occurs when processes can't be assigned to memory blocks because the
blocks are too small, so those memory blocks stay unused.
                            Types of Fragmentation
There are mainly two types of fragmentation in the operating system. These are as follows:
 Internal Fragmentation
 External Fragmentation
                             Internal Fragmentation
When a process is allocated to a memory block, and if the process is smaller than the
amount of memory requested, a free space is created in the given memory block. Due
to this, the free space of the memory block is unused, which causes internal
fragmentation.
                 Example of Internal Fragmentation
Assume that memory allocation in RAM is done using fixed partitioning (i.e., memory
blocks of fixed sizes). 2MB, 4MB, 4MB, and 8MB are the available sizes. The Operating
System uses a part of this RAM.
Let's suppose a process P1 with a size of 3MB arrives and is given a memory block of
4MB. As a result, the 1MB of free space in this block is unused and cannot be used to
allocate memory to another process. It is known as internal fragmentation.
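The waste in the example above can be computed with a small first-fit sketch (the partition sizes are the ones assumed in the example):

```python
# Internal fragmentation under fixed partitioning - sketch of the example above.
partitions = [2, 4, 4, 8]        # fixed block sizes in MB

def allocate(process_mb, blocks):
    """First fit: return (block size, wasted MB) or None if nothing fits."""
    for size in blocks:
        if size >= process_mb:
            return size, size - process_mb   # leftover is internal fragmentation
    return None

print(allocate(3, partitions))   # (4, 1): P1 (3 MB) gets a 4 MB block, wasting 1 MB
```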
               How to avoid internal fragmentation?
The problem of internal fragmentation may arise due to the fixed sizes of the memory
blocks. It may be solved by assigning space to the process via dynamic partitioning.
Dynamic partitioning allocates only the amount of space requested by the process. As
a result, there is no internal fragmentation.
                          External fragmentation
When there is enough total memory space to satisfy a request but the space is not
contiguous, it is known as external fragmentation.
The quantity of usable memory is substantially reduced if there is too much external
fragmentation.
        How to remove external fragmentation?
                      Virtual Memory
Chapter-IX      By: Noor Agha
               Topics in the chapter
Introduction to virtual memory
Advantages of Virtual memory
Disadvantages of Virtual memory
Demand paging
Swapping
In this scheme, the user can load processes bigger than the available main memory,
under the illusion that enough memory is available to load the process.
Instead of loading one big process in the main memory, the Operating System loads
different parts of more than one process in the main memory.
The maximum size of virtual memory is determined by:
 CPU Architecture (32-bit can address up to 2^32 bytes, 64-bit up to 2^64 bytes)
 Operating system
 Disk space
                 Disadvantages of virtual memory
3. The user will have less hard disk space available for their own use.
                          Demand Paging
 In demand paging, the pages of a process which are least used get stored in the
secondary memory.
When a page that is not in main memory is needed, the OS uses a page replacement
algorithm to determine which resident page will be replaced.
                    Page replacement algorithms
 In an operating system that uses paging for memory management, a page
replacement algorithm is needed to decide which page needs to be replaced
                       when a new page comes in.
At any given time, only a few pages of any process are in the main memory,
       and therefore more processes can be maintained in memory.
When the OS brings one page in, it must throw another out. If it throws out a page
just before it is used, it will have to fetch that page again almost immediately.
Too much of this leads to a condition called thrashing.
The system spends most of its time swapping pages rather than executing
     instructions. So a good page replacement algorithm is required.
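One common baseline is FIFO replacement: evict the page that has been resident longest. A minimal sketch (the reference string below is a hypothetical example):

```python
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    """Count page faults under FIFO page replacement."""
    frames = deque()
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()        # evict the oldest resident page
            frames.append(page)
    return faults

print(fifo_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], 3))   # 9 faults
```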
Conclusion
END OF THE SECTION
                 I/O Device
                Management
Chapter-X       By: Noor Agha
               Topics in the chapter
Introduction to I/O device management
Device drivers
Device controllers
Polling
Interrupts
Conclusion
               Input/output device management
We have many input devices for computers like keyboard, mouse, scanner,
 microphone, etc and there are output devices like printer, speakers, etc.
Other than these, there are other hardware devices such as Disk, USB, etc.
The operating system and the hardware devices of the computer are not connected
directly to each other; they communicate through special programs known as drivers.
For example, a printer driver converts OS commands into signals that the printer
can understand.
                           Device controller
The Device Controller works like an interface between a device and a device
                                  driver.
 There is always a device controller and a device driver for each device to
communicate with the Operating Systems. A device controller may be able
                        to handle multiple devices.
As an interface, its main task is to convert a serial bit stream to a block of bytes
and perform error correction.
                        Device controller
Any device connected to the computer is connected by a plug and socket,
           and the socket is connected to a device controller.
Following is a model for connecting the CPU, memory, controllers, and I/O devices,
where the CPU and the device controllers all use a common bus for communication.
             Synchronous vs asynchronous I/O
Synchronous I/O − In this scheme, CPU execution waits while I/O proceeds.
Asynchronous I/O − In this scheme, I/O proceeds concurrently with CPU execution.
The CPU must have a way to pass information to and from an I/O device.
There are three approaches available for communication between the CPU and a device:
1. Special instruction I/O
2. Memory-mapped I/O
3. Direct memory access (DMA)
                    Special instruction I/O
This uses CPU instructions that are specifically made for controlling I/O devices.
These instructions typically allow data to be sent to an I/O device or read from an
I/O device.
                   Memory-mapped I/O
A space in memory is specified that both the CPU and the I/O device have access to;
both read from and write to this shared space.
                     Direct Memory Access (DMA)
Slow devices like keyboards will generate an interrupt to the main CPU after each byte
is transferred. If a fast device such as a disk generated an interrupt for each byte, the
       operating system would spend most of its time handling these interrupts.
Direct Memory Access (DMA) means the CPU grants an I/O module the authority to read
from or write to memory without CPU involvement.
   DMA module itself controls exchange of data between main memory and the I/O
                                     device.
  CPU is only involved at the beginning and end of the transfer and interrupted only
                       after entire block has been transferred.
                  Direct Memory Access (DMA)
Direct Memory Access needs a special hardware called DMA controller (DMAC) that
                         manages the data transfers
                       Polling vs Interrupts I/O
A computer must have a way of detecting the arrival of any type of input.
There are two ways that this can happen, known as polling and interrupts.
Both of these techniques allow the processor to deal with events that can happen at
      any time and that are not related to the process it is currently running.
                                 Polling I/O
Polling is the simplest way for an I/O device to communicate with the processor.
The I/O device simply puts the information in a Status register, and the processor
                       must come and get the information.
                                Interrupts I/O
A device controller puts an interrupt signal on the bus when it needs the CPU's
attention. When the CPU receives an interrupt, it saves its current state and
handles the interrupt.
When the interrupting device has been dealt with, the CPU continues with its original
                      task as if it had never been interrupted.
                           Which one is better?
                  Secondary Storage Management
Chapter-XI      By: Noor Agha
               Topics in the chapter
Introduction to storage management
Goals of Storage Management
Disk management
Disk Scheduling
Disk Scheduling Algorithms
Conclusion
               Introduction to storage management
Organizations rely on local and on-premises storage; they store all their sensitive
and important data using various storage devices.
Goals of storage management:
1. Performance
2. Reliability
3. Recoverability
4. Capacity
                         Disk Management
As a computer user, you might have noticed that your computer's hard drive can
become cluttered and slow over time. This is where disk management comes into play.
Disk management tasks include:
 Partitioning
 Defragmentation
 Backup
                           Disk Scheduling
Multiple I/O requests may arrive by different processes and only one I/O
request can be served at a time by the disk controller.
Thus other I/O requests need to wait in the waiting queue and need to be
scheduled.
Two or more requests may be far from each other so this can result in
greater disk arm movement.
                   Disk Scheduling Algorithms
                       FCFS (First Come First Serve)
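Under FCFS, requests are served strictly in arrival order, so the total head movement is just the sum of the distances between consecutive cylinders. A sketch with a hypothetical request queue:

```python
def fcfs_seek_time(requests, head_start):
    """Total head movement when requests are served in arrival order (FCFS)."""
    total, head = 0, head_start
    for cylinder in requests:
        total += abs(cylinder - head)   # distance the arm travels for this request
        head = cylinder
    return total

# Example queue of cylinder requests, with the head initially at cylinder 50
print(fcfs_seek_time([82, 170, 43, 140, 24, 16, 190], 50))   # 642
```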
                       File System
Chapter-XII     By: Noor Agha
              Topics in the chapter
What is file?
File attributes
File types
Introduction to file system
Identifier: Every file is identified by a unique tag number within a file system known as an
identifier.
Type: This attribute is required for systems that support various types of files.
Protection: This attribute assigns and controls the access rights of reading, writing, and
executing the file.
Time, date and security: It is used for protection, security, and also used for monitoring
File Types
                    Introduction to File system
File system is the part of the operating system which is responsible for file
                                management.
 It provides a mechanism to store the data and access to the file contents
                      including data and programs.
              The advantages of using a file system
Data protection: File systems often include features such as file and folder
permissions, backup and restore, and error detection and correction, to protect
data from loss or corruption.
          The disadvantages of using a file system
Compatibility issues: Different file systems may not be compatible with each
other, making it difficult to transfer data between different operating systems.
Disk space overhead: File systems may use some disk space to store metadata
and other overhead information, reducing the amount of space available for user
data.
                     Types of file systems
FAT (File Allocation Table): An older file system used by older versions of
Windows and other operating systems.
NTFS (New Technology File System): A modern file system used by Windows. It
supports features such as file and folder permissions, compression, and
encryption.
ext (Extended File System): A file system commonly used on Linux and Unix-
based operating systems.
APFS (Apple File System): A new file system introduced by Apple for their Macs
and iOS devices.
          Various operations performed by file system
Create: make a new file and allocate space for it.
Read: retrieve data from a file.
Write: store data into a file.
Open: prepare a file for reading or writing.
Append: simply add data to the end of the file without erasing existing data.
Delete: remove a file and free its space.
Close: release the file when it is no longer needed.
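The operations above can be sketched with Python's built-in file API (the file name is a hypothetical example):

```python
import os, tempfile

path = os.path.join(tempfile.gettempdir(), "fs_demo.txt")

with open(path, "w") as f:      # Create / Open the file for writing
    f.write("hello")            # Write data into the file
with open(path, "a") as f:      # Append: add data without erasing existing data
    f.write(" world")
with open(path, "r") as f:      # Open the file for reading
    print(f.read())             # Read -> prints "hello world"
# Each `with` block also performs Close automatically on exit.
os.remove(path)                 # Delete the file
```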
                       File Access Mechanisms
File access mechanism refers to the manner in which the records of a file
may be accessed. There are several ways to access files:
1. Sequential access
2. Direct/Random access
3. Indexed sequential access
Sequential access: the information in the file is processed in order, one record
after the other.
Direct/Random access: each record has its own address in the file, with the help of
which it can be directly accessed for reading or writing. The records need not be in
any sequence within the file, and they need not be in adjacent locations on the
storage medium.
                     Indexed sequential access
An index is created for each file which contains pointers to various blocks.
Index is searched sequentially and its pointer is used to access the file
directly.
Conclusion
END OF THE SECTION
                    Security
Chapter-XIII    By: Noor Agha
               Topics in the chapter
What is security & protection?
What is the need for security and protection?
What are various threats and challenges?
How to secure our system and data?
Conclusion
                 What is security & protection?
 Integrity means that data has not been modified and should not be
  modified.
 For this we use HASH function.
 It is performed on the data at both the sides, sender as well as receiver.
 Produced values at both the sides must match with each other.
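The sender/receiver hash comparison above can be sketched with Python's `hashlib` (the message is a hypothetical example):

```python
import hashlib

def digest(data: bytes) -> str:
    """Hash function applied to the data at both the sender and receiver sides."""
    return hashlib.sha256(data).hexdigest()

message = b"transfer 100 to account 42"
sent_hash = digest(message)             # computed at the sender's side

received = message                      # what arrives at the receiver
print(digest(received) == sent_hash)    # True: the data was not modified

tampered = b"transfer 900 to account 42"
print(digest(tampered) == sent_hash)    # False: any change alters the hash
```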
II. External: Disasters such as floods, earthquakes, landslides, etc. cause it.
                 What are various threats and challenges?
I. Malware: malicious software; its presence is unknown to the user; it damages the computer and affects performance.
II. Virus: a program that gets into a computer and replicates itself; it infects files and affects performance.
III. Spyware: a program that tracks, records, and reports a user's activity (offline and online) and shares it with an intruder.
IV. Worms: a standalone program that gets into a computer and replicates itself, affecting performance.
V. Trojan: also known as a Trojan horse; it looks like a useful program but performs a harmful/unwanted action
      (named after the wooden-horse trick by which the Greeks entered Troy and won the war).
VI. Denial of Service Attack: the attacker poses as a legitimate user and tries to make a system or
      network resource unavailable, targeting banking, commerce, trading organizations, etc.
VII. Phishing: deceiving users to steal sensitive information such as credit card numbers, usernames & passwords.
              How to secure our system and data?
 Training
                How to secure our system and data?
 Physically:
I.    Keep computers and network devices away from anything that can damage them.
II.   Software, especially software that can help us control access and detect and
      prevent malware, must be kept updated.
 Regular Backup
END OF THE SECTION