Unit-1: Operating System
Introduction to Operating System
What is Operating System?
A program that acts as an intermediary between a user of a computer
and the computer hardware. (or)
An operating system is a program that manages a computer’s
hardware.
Operating system goals:
Execute user programs and make solving user problems easier
Make the computer system convenient to use
Use the computer hardware in an efficient manner
Computer System Structure:
Computer system can be divided into four components:
Hardware – provides basic computing resources
CPU, memory, I/O devices
Operating system
Controls and coordinates use of hardware among various
applications and users
Application programs – define the ways in which the system
resources are used to solve the computing problems of the users
Word processors, compilers, web browsers, database systems,
video games
Users
People, machines, other computers
What Operating Systems Do:
Fig.1: Abstract view of the components of a computer system
Depends on the point of view
1. User View:
Users want convenience, ease of use and good performance
Don’t care about resource utilization
But a shared computer such as a mainframe or minicomputer must keep all
users happy
Users of dedicated systems such as workstations have dedicated resources
but frequently use shared resources from servers
Handheld computers are resource poor, optimized for usability and battery
life
Some computers have little or no user interface, such as embedded
computers in devices and automobiles
2. System View:
OS is a resource allocator
Manages all resources
Decides between conflicting requests for efficient and fair resource
use
OS is a control program
Controls execution of programs to prevent errors and improper use
of the computer
“The one program running at all times on the computer” is the kernel.
Everything else is either
a system program (ships with the operating system), or
an application program.
Operating-System Structure
1. Purpose of Operating Systems:
- Provides an environment for executing programs.
- Organizes resources like CPU, memory, and I/O devices for effective
utilization.
2. Multiprogramming:
- Multiple programs are kept in memory simultaneously to ensure the CPU is
always utilized.
- Jobs are stored in a disk job pool and loaded into memory as space becomes
available.
- The CPU switches between jobs, preventing idle time during I/O operations.
3. Time Sharing (Multitasking):
- Extends multiprogramming to allow user interaction.
- CPU rapidly switches between programs to provide an illusion of dedicated
use.
- Requires interactive devices (e.g., keyboard, mouse) and ensures short
response times (<1 second).
4. Processes and Interactive I/O:
- A program in memory and executing is called a process.
- Interactive I/O (e.g., user typing) is slow relative to the CPU, so the system
switches to other processes during idle times.
5. Memory Management:
- Several jobs are kept in memory simultaneously.
- Techniques like job scheduling and memory management determine which
jobs are loaded into memory.
6. CPU Scheduling:
- Ensures efficient execution by deciding which job to run first when
multiple jobs are ready.
7. Swapping and Virtual Memory:
- Swapping moves processes between memory and disk to manage
space.
- Virtual memory allows execution of programs larger than physical
memory and abstracts logical from physical memory.
8. File System and Disk Management:
- Provides mechanisms for organizing and accessing files stored on
disks.
- Requires efficient disk management to support the file system.
9. Resource Protection:
- Protects resources from unauthorized use and ensures jobs do not
interfere with each other.
10. Job Synchronization and Deadlock Prevention:
- Mechanisms are provided for jobs to synchronize and communicate.
- Deadlock management ensures jobs do not wait indefinitely for
resources.
Operating-System Operations
If one process has an error, it can affect many other processes.
Interrupt driven (hardware and software)
Hardware interrupt by one of the devices
Software interrupt (exception or trap):
Software error (e.g., division by zero)
Request for operating system service
Other process problems include infinite loop, processes modifying
each other or the operating system
Dual-Mode and Multimode Operation:
Dual-mode operation allows OS to protect itself and other system
components
User mode and kernel mode
User mode: Creating a text document, etc.
Kernel mode: At system boot, the hardware starts in kernel mode;
after the operating system is loaded, it starts user applications in
user mode.
Mode bit provided by hardware
Provides ability to distinguish when system is running user code or
kernel code
Some instructions designated as privileged, only executable in
kernel mode
System call changes mode to kernel, return from call resets it to user
Increasingly CPUs support multi-mode operations
i.e. virtual machine manager (VMM) mode for guest VMs
Operating System Services
Operating systems provide an environment for execution of programs
and services to programs and users
One set of operating-system services provides functions that are helpful
to the user:
User interface - Almost all operating systems have a user interface
(UI).
Varies between Command-Line interface (CLI), Graphics User
Interface (GUI), Batch
Program execution - The system must be able to load a program
into memory and to run that program, end execution, either normally
or abnormally (indicating error)
I/O operations - A running program may require I/O, which may
involve a file or an I/O device
File-system manipulation - The file system is of particular interest.
Programs need to read and write files and directories, create and
delete them, search them, list file information, and manage
permissions.
A view of operating system services
Communications – Processes may exchange information, on the same
computer or between computers over a network
Communications may be via shared memory or through message
passing (packets moved by the OS)
Error detection – OS needs to be constantly aware of possible errors
May occur in the CPU and memory hardware, in I/O devices, in user
program
For each type of error, OS should take the appropriate action to ensure
correct and consistent computing
Debugging facilities can greatly enhance the user’s and programmer’s
abilities to efficiently use the system
Another set of OS functions exists for ensuring the efficient operation of the
system itself via resource sharing
Resource allocation - When multiple users or multiple jobs running
concurrently, resources must be allocated to each of them
Many types of resources - CPU cycles, main memory, file storage, I/O
devices.
Accounting - To keep track of which users use how much and what kinds of
computer resources
Protection and security - The owners of information stored in a multiuser or
networked computer system may want to control use of that information,
concurrent processes should not interfere with each other
Protection involves ensuring that all access to system resources is
controlled
Security of the system from outsiders requires user authentication,
extends to defending external I/O devices from invalid access attempts
User Operating System Interface - CLI
Command Line Interface (CLI) or command interpreter allows direct
command entry
Sometimes implemented in kernel, sometimes by systems
program
Sometimes multiple flavors implemented – shells
Primarily fetches a command from user and executes it
– Sometimes commands built-in, sometimes just names of
programs
» If the latter, adding new features doesn’t require shell
modification
User Operating System Interface - GUI
User-friendly desktop metaphor interface
Usually mouse, keyboard, and monitor
Icons represent files, programs, actions, etc
Various mouse buttons over objects in the interface cause various
actions (provide information, options, execute function, open directory
(known as a folder)
Invented at Xerox PARC
Many systems now include both CLI and GUI interfaces
Microsoft Windows is GUI with CLI “command” shell
Apple Mac OS X as “Aqua” GUI interface with UNIX kernel underneath
and shells available
Solaris is CLI with optional GUI interfaces (Java Desktop, KDE)
Touchscreen devices require new
interfaces
Mouse not possible or not desired
Actions and selection based on
gestures
Virtual keyboard for text entry
Voice commands.
System Calls
System call: A System Call is the main way a user program interacts
with the Operating System.
Programming interface to the services provided by the OS
Typically written in a high-level language (C or C++)
Mostly accessed by programs via a high-level Application Program
Interface (API) rather than direct system call use
Three most common APIs are
Win32 API for Windows
POSIX API for POSIX-based systems (including virtually all versions of
UNIX, Linux, and Mac OS X)
Java API for the Java virtual machine (JVM)
System call sequence to copy the contents of one file to another file
Consider the ReadFile() function in the Win32 API—a function for reading from a
file
A description of the parameters passed to ReadFile()
HANDLE file—the file to be read
LPVOID buffer—a buffer where the data will be read into and written from
DWORD bytesToRead—the number of bytes to be read into the buffer
LPDWORD bytesRead—the number of bytes read during the last read
LPOVERLAPPED ovl—indicates if overlapped I/O is being used
Typically, a number associated with each system call
System-call interface maintains a table indexed according to these
numbers
The system call interface invokes intended system call in OS kernel
and returns status of the system call and any return values
The caller need know nothing about how the system call is
implemented
Just needs to obey the API and understand what the OS will do as a
result of the call
Most details of OS interface hidden from programmer by API
Managed by run-time support library (set of functions built into
libraries included with compiler)
API – System Call – OS Relationship
System Call Parameter Passing
Often, more information is required than simply identity of desired
system call
Exact type and amount of information vary according to OS and
call
Three general methods used to pass parameters to the OS
Simplest: pass the parameters in registers
In some cases, may be more parameters than registers
Parameters stored in a block, or table, in memory, and address of
block passed as a parameter in a register
This approach taken by Linux and Solaris
Parameters placed, or pushed, onto the stack by the program and
popped off the stack by the operating system
Block and stack methods do not limit the number or length of
parameters being passed
Parameter Passing via Table
Types of System Calls
Process control
end, abort
load, execute
create process, terminate process
get process attributes, set process attributes
wait for time
wait event, signal event
allocate and free memory
File management
create file, delete file
open, close file
read, write, reposition
get and set file attributes
Types of System Calls (Cont.)
Device management
request device, release device
read, write, reposition
get device attributes, set device attributes
logically attach or detach devices
Information maintenance
get time or date, set time or date
get system data, set system data
get and set process, file, or device attributes
Communications
create, delete communication connection
send, receive messages
transfer status information
attach and detach remote devices
Examples of Windows and
Unix System Calls
Standard C Library Example
C program invoking printf() library call, which calls write() system call
Process Concept
An operating system executes a variety of programs:
Batch system – jobs
Time-shared systems – user programs or tasks
Textbook uses the terms job and process almost interchangeably
Process – a program in execution; process execution must progress in
sequential fashion
A process includes:
program counter
stack
data section
1. The process:
Multiple parts
The program code, also called text section
Current activity including program counter, processor registers
Stack containing temporary data
Function parameters, return addresses, local variables
Data section containing global variables
Heap containing memory dynamically allocated during run time
Program is passive entity, process is active
Program becomes process when executable file loaded into
memory
Execution of program started via GUI mouse clicks, command line entry
of its name, etc
One program can be several processes
Consider multiple users executing the same program
Process in Memory
2. Process State:
As a process executes, it changes state.
The state of a process is defined in part by the current activity of that
process.
Each process may be in one of the following states:
New: The process is being created
Running: Instructions are being executed
Waiting: The process is waiting for some event to occur
Ready: The process is waiting to be assigned to a processor
Terminated: The process has finished execution
3. Process Control Block:
Information associated with each process
(also called task control block)
Process state – running, waiting, etc
Program counter – address of the next instruction to execute
CPU registers – contents of all process-centric
registers
CPU scheduling information- priorities, scheduling
queue pointers
Memory-management information – memory
allocated to the process
Accounting information – CPU used, clock time
elapsed since start, time limits
I/O status information – I/O devices allocated to
process, list of open files
A context switch is the switching of the CPU from one process or task
to another.
Fig. CPU Switch From Process to Process
4. Threads:
A traditional process performs a single thread of execution. For
example, when a process is running a word-processor program, a
single thread of instructions is being executed.
Process Representation in Linux:
Represented by the C structure task_struct
pid_t pid; /* process identifier */
long state; /* state of the process */
unsigned int time_slice; /* scheduling information */
struct task_struct *parent; /* this process's parent */
struct list_head children; /* this process's children */
struct files_struct *files; /* list of open files */
struct mm_struct *mm; /* address space of this process */
Process Scheduling
The objective of multiprogramming is to have some process running at
all times, to maximize CPU utilization.
The objective of time sharing is to switch the CPU among processes so
frequently that users can interact with each program while it is running.
To meet those objectives, the process scheduler selects an available
process (possibly from a set of several available processes) for
program execution on the CPU.
On a single-processor system, there will never be more than one
running process.
If there are more processes, the rest will have to wait until the CPU
is free and can be rescheduled.
The act of Scheduling a process means changing the active PCB
pointed to by the CPU. Also called a context switch
Maximize CPU use, quickly switch processes onto CPU for time
sharing
Process scheduler selects among available processes for next
execution on CPU
Maintains scheduling queues of processes
Job queue – set of all processes in the system
Ready queue – set of all processes residing in main memory,
ready and waiting to execute
Device queues – set of processes waiting for an I/O device
Processes migrate among the various queues
1. Scheduling Queues:
Ready Queue And Various I/O Device Queues
Fig.: Queueing-diagram representation of process scheduling (showing
the long-term, medium-term, and short-term schedulers and the
interrupt handler)
2. Schedulers:
Long-term scheduler (or job scheduler) – selects which processes
should be brought into the ready queue.
It controls the degree of multi-programming and the number of
processes in a ready state at any time.
This scheduler brings the new processes from the process queue to the
ready state.
Long-term scheduler carefully selects both I/O and CPU-bound
processes.
This scheduler increases the efficiency of CPU utilization since it
maintains a balance between I/O and CPU-bound processes.
Short-term scheduler (or CPU scheduler) – selects which process
should be executed next and allocates CPU.
And this is where all the scheduling algorithms are used.
This scheduler job is to select the process only and not to load the
process.
A short-term scheduler ensures that there is no starvation owing to
processes with high burst times.
Short-term scheduler is invoked very frequently (milliseconds) (must
be fast)
Long-term scheduler is invoked very infrequently (seconds, minutes)
(may be slow)
Medium-term scheduler is responsible for suspending and resuming
the process.
A running process may become suspended when it makes an I/O request;
this scheduler then moves processes between main memory and disk
(and vice versa).
It also increases the balance between I/O and CPU-bound processes.
Mixture of CPU and memory resource management
Swap out/in jobs to improve mix and to get memory.
Controls change of priority.
Processes can be described as either:
I/O-bound process – spends more time doing I/O than
computations, many short CPU bursts
CPU-bound process – spends more time doing computations; few
very long CPU bursts
Addition of Medium Term Scheduling
Long-term scheduler: It is a job scheduler. Its speed is slower than
the short-term scheduler. It controls the degree of multiprogramming.
Short-term scheduler: It is a CPU scheduler. Its speed is faster than
the long-term scheduler. It provides lesser control over the degree
of multiprogramming.
3. Context Switch:
When CPU switches to another process, the system must save the
state of the old process and load the saved state for the new process
via a context switch.
Context of a process represented in the PCB
Context-switch time is overhead; the system does no useful work while
switching
The more complex the OS and the PCB -> longer the context
switch
Time dependent on hardware support
Some hardware provides multiple sets of registers per CPU ->
multiple contexts loaded at once
Operations on Processes
The processes in most systems can execute concurrently, and they
may be created and deleted dynamically.
Thus, these systems must provide a mechanism for process creation
and termination.
1. Process Creation:
A parent process creates child processes, which, in turn, create
other processes, forming a tree of processes.
Generally, process identified and managed via a process identifier
(pid)
Resource sharing
Parent and children share all resources
Children share subset of parent’s resources
Parent and child share no resources
Execution
Parent and children execute concurrently
Parent waits until children terminate
Address space
Child duplicate of parent
Child has a program loaded into it
UNIX examples
fork system call creates new process
exec system call used after a fork to replace the process’ memory
space with a new program
2. Process Termination:
Process executes last statement and asks the operating system to
delete it (exit)
Returns status data from child to parent (via wait)
Process’ resources are deallocated by operating system
Parent may terminate execution of children processes (abort)
Child has exceeded allocated resources
Task assigned to child is no longer required
If parent is exiting
Some operating systems do not allow child to continue if its
parent terminates
– All children terminated - cascading termination
Interprocess Communication
Processes executing concurrently in the operating system may be
either independent processes or cooperating processes.
A process is independent if it cannot affect or be affected by the other
processes executing in the system. Any process that does not share
data with any other process is independent.
A process is cooperating if it can affect or be affected by the other
processes executing in the system.
Reasons for cooperating processes:
Information sharing
Computation speedup
Modularity
Convenience
Cooperating processes need inter-process communication (IPC)
Two models of IPC
Shared memory
Message passing
Communications models. (a) Message passing. (b) Shared memory
1. Shared-Memory Systems:
Interprocess communication using shared memory requires
communicating processes to establish a region of shared memory.
Typically, a shared-memory region resides in the address space of the
process creating the shared-memory segment.
Other processes that wish to communicate using this shared-memory
segment must attach it to their address space.
Normally, the operating system prevents one process from accessing
another process's memory; shared memory requires that two or more
processes agree to remove this restriction.
They can then exchange information by reading and writing data in the
shared areas.
Producer-Consumer Problem:
Paradigm for cooperating processes, producer process produces
information that is consumed by a consumer process.
Ex1: The assembler, in turn, may produce object modules that are
consumed by the loader.
Ex2: The producer–consumer problem also provides a useful metaphor
for the client–server paradigm.
One solution to the producer–consumer problem uses shared memory.
To allow producer and consumer processes to run concurrently, we must
have available a buffer of items that can be filled by the producer and
emptied by the consumer.
Two types of buffers can be used
unbounded-buffer places no practical limit on the size of the buffer
bounded-buffer assumes that there is a fixed buffer size
Bounded-Buffer – Shared-Memory Solution:
Shared data
#define BUFFER_SIZE 10
typedef struct {
...
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
Solution is correct, but can only use BUFFER_SIZE-1 elements
Bounded-Buffer – Producer:
item next_produced;
while (true) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing -- no free buffers */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}
Bounded Buffer – Consumer:
item next_consumed;
while (true) {
    while (in == out)
        ; /* do nothing -- nothing to consume */
    /* remove an item from the buffer */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}
2. Message-Passing Systems:
The shared-memory scheme requires that the communicating processes share a
region of memory and that the code for accessing and manipulating the shared
memory be written explicitly by the application programmer; message passing,
by contrast, lets processes communicate without sharing an address space.
Mechanism for processes to communicate and to synchronize their actions
Message system – processes communicate with each other without
resorting to shared variables
IPC facility provides two operations:
send(message)
receive(message)
The message size is either fixed or variable
If P and Q wish to communicate, they need to:
establish a communication link between them
exchange messages via send/receive
Implementation issues:
How are links established?
Can a link be associated with more than two processes?
How many links can there be between every pair of communicating
processes?
What is the capacity of a link?
Is the size of a message that the link can accommodate fixed or
variable?
Is a link unidirectional or bi-directional?
2.1 Naming:
Direct Communication:
Processes must name each other explicitly:
send (P, message) – send a message to process P
receive(Q, message) – receive a message from process Q
Properties of communication link
Links are established automatically
A link is associated with exactly one pair of communicating processes
Between each pair there exists exactly one link
The link may be unidirectional, but is usually bi-directional
Indirect Communication:
Messages are directed and received from mailboxes (also referred to as
ports)
Each mailbox has a unique id
Processes can communicate only if they share a mailbox
Properties of communication link
Link established only if processes share a common mailbox
A link may be associated with many processes
Each pair of processes may share several communication links
Link may be unidirectional or bi-directional
Operations
create a new mailbox
send and receive messages through mailbox
destroy a mailbox
Primitives are defined as:
send(A, message) – send a message to mailbox A
receive(A, message) – receive a message from mailbox A
Mailbox sharing
P1, P2, and P3 share mailbox A
P1 sends; P2 and P3 receive
Who gets the message?
Solutions
Allow a link to be associated with at most two processes
Allow only one process at a time to execute a receive operation
Allow the system to select arbitrarily the receiver. Sender is notified
who the receiver was.
2.2 Synchronization:
Message passing may be either blocking or non-blocking
Blocking is considered synchronous
Blocking send -- the sender is blocked until the message is received
Blocking receive -- the receiver is blocked until a message is
available
Non-blocking is considered asynchronous
Non-blocking send -- the sender sends the message and continues
Non-blocking receive -- the receiver receives:
A valid message, or
Null message
Different combinations possible
If both send and receive are blocking, we have a rendezvous
Producer-consumer becomes trivial
message next_produced;
while (true) {
/* produce an item in next produced */
send(next_produced);
}
message next_consumed;
while (true) {
receive(next_consumed);
/* consume the item in next consumed */
}
2.3 Buffering:
Queue of messages attached to the link.
Implemented in one of three ways:
1. Zero capacity – no messages are queued on a link.
Sender must wait for receiver (rendezvous)
2. Bounded capacity – finite length of n messages
Sender must wait if link full
3. Unbounded capacity – infinite length
Sender never waits
Threads-Overview
A thread is a basic unit of CPU utilization; it comprises a thread ID, a
program counter, a register set, and a stack.
It shares with other threads belonging to the same process its code section,
data section, and other operating-system resources, such as open files and
signals.
A traditional (or heavyweight) process has a single thread of control. If a
process has multiple threads of control, it can perform more than one task at
a time.
Motivation:
Most modern applications are multithreaded
Threads run within application
Multiple tasks within the application can be implemented by separate threads
Update display
Fetch data
Spell checking
Answer a network request
Process creation is heavy-weight while thread creation is light-weight
Can simplify code, increase efficiency
Kernels are generally multithreaded
Benefits:
Responsiveness – may allow continued execution if part of process is
blocked, especially important for user interfaces
Resource Sharing – threads share resources of process, easier than
shared memory or message passing
Economy – cheaper than process creation, thread switching lower
overhead than context switching
Scalability – process can take advantage of multiprocessor architectures
Multicore Programming
1. Programming Challenges:
Multicore or multiprocessor systems putting pressure on programmers,
challenges include:
Dividing activities
Balance
Data splitting
Data dependency
Testing and debugging
Parallelism implies a system can perform more than one task
simultaneously
Concurrency supports more than one task making progress
Single processor / core, scheduler providing concurrency
Concurrent execution on single-core system:
Parallelism on a multi-core system:
Types of parallelism
Data parallelism – distributes subsets of the same data across multiple
cores, same operation on each core.
Task parallelism – distributing threads across cores, each thread
performing unique operation
Multi-Threading Models
Many-to-One
One-to-One
Many-to-Many
Many-to-One:
Many user-level threads mapped to
single kernel thread
One thread blocking causes all to
block
Multiple threads may not run in
parallel on a multicore system because
only one may be in kernel at a time
Few systems currently use this
model
Examples:
Solaris Green Threads
GNU Portable Threads
One-to-One:
Each user-level thread maps to
kernel thread
Creating a user-level thread creates
a kernel thread
More concurrency than many-to-one
Number of threads per process
sometimes restricted due to
overhead
Examples
Windows
Linux
Solaris 9 and later
Many-to-Many:
Allows many user level threads to be
mapped to many kernel threads
Allows the operating system to
create a sufficient number of kernel
threads
Solaris prior to version 9
Windows with the ThreadFiber
package
Threading Issues
Semantics of fork() and exec() system calls
Signal handling
Synchronous and asynchronous
Thread cancellation of target thread
Asynchronous or deferred
Thread-local storage
Scheduler Activations
1. Semantics of fork() and exec():
Does fork() duplicate only the calling thread or all threads?
Some UNIXes have two versions of fork
exec() usually works as normal – replace the running process including
all threads
2. Signal Handling:
Signals are used in UNIX systems to notify a process that a particular
event has occurred.
A signal handler is used to process signals
1. Signal is generated by particular event
2. Signal is delivered to a process
3. Signal is handled by one of two signal handlers:
1. default
2. user-defined
Every signal has default handler that kernel runs when handling signal
User-defined signal handler can override default
For single-threaded, signal delivered to process
Where should a signal be delivered for multi-threaded?
Deliver the signal to the thread to which the signal applies
Deliver the signal to every thread in the process
Deliver the signal to certain threads in the process
Assign a specific thread to receive all signals for the process
3. Thread Cancellation:
Terminating a thread before it has finished
Thread to be canceled is target thread
Two general approaches:
Asynchronous cancellation terminates the target thread immediately
Deferred cancellation allows the target thread to periodically check if it
should be cancelled
Pthread code to create and cancel a thread:
Invoking thread cancellation requests cancellation, but actual cancellation
depends on thread state
If thread has cancellation disabled, cancellation remains pending until
thread enables it
Default type is deferred
Cancellation only occurs when thread reaches cancellation point
I.e. pthread_testcancel()
Then cleanup handler is invoked
On Linux systems, thread cancellation is handled through signals
4. Thread-Local Storage:
Thread-local storage (TLS) allows each thread to have its own copy of
data
Useful when you do not have control over the thread creation process (i.e.,
when using a thread pool)
Different from local variables
Local variables visible only during single function invocation
TLS visible across function invocations
Similar to static data
TLS is unique to each thread
5. Scheduler Activations:
Both M:M and Two-level models require communication to
maintain the appropriate number of kernel threads allocated
to the application
Typically use an intermediate data structure between user
and kernel threads – lightweight process (LWP)
Appears to be a virtual processor on which process can
schedule user thread to run
Each LWP attached to kernel thread
How many LWPs to create?
Scheduler activations provide upcalls - a communication
mechanism from the kernel to the upcall handler in the
thread library
This communication allows an application to maintain the
correct number of kernel threads