Chapter 4: Threads & Concurrency
Operating System Concepts – 10th Edition Silberschatz, Galvin and Gagne ©2018
Outline
Overview
Multicore Programming
Multithreading Models
Thread Libraries
Operating System Examples
Objectives
Identify the basic components of a thread, and contrast threads
and processes
Describe the benefits and challenges of designing
multithreaded applications
Design multithreaded applications using the Pthreads API
Motivation
Most modern applications are multithreaded
Threads run within an application
Multiple tasks within the application can be implemented by separate
threads
• Update display
• Fetch data
• Spell checking
• Answer a network request
Process creation is heavy-weight while thread creation is light-weight
Can simplify code, increase efficiency
Kernels are generally multithreaded
Single and Multithreaded Processes
Multithreaded Server Architecture
Benefits
Responsiveness – may allow continued execution if part of process is
blocked, especially important for user interfaces
Resource Sharing – threads share resources of process, easier than
shared memory or message passing
Economy – thread creation is cheaper than process creation, and thread
switching has lower overhead than process context switching
Scalability – process can take advantage of multicore architectures
Multicore Programming
In response to the need for more computing performance, single-CPU
systems evolved into multi-CPU systems.
• Current trend in system design is to place multiple computing cores
on a single processing chip
Parallelism implies a system can perform more than one task
simultaneously
Concurrency supports more than one task making progress
• On single-processor / single-core systems, the scheduler provides
concurrency by interleaving execution
Concurrency vs. Parallelism
Concurrent execution on a single-core system: tasks interleave over time on one core
Parallelism on a multi-core system: tasks run simultaneously on separate cores
Multicore Programming
Multicore or multiprocessor systems put pressure on programmers;
challenges include:
• Dividing activities
• Balance
• Data splitting
• Data dependency
• Testing and debugging
Multicore Programming
Types of parallelism
• Data parallelism – distributes subsets of the same data across
multiple cores, performing the same operation on each (see the
sketch after this list)
Matrix multiplication: Splitting
rows among threads
Image processing: Applying
same filters on different blocks
• Task parallelism – distributes threads across cores, each thread
performing a unique operation
Web server: Database
queries, logging, network I/O
Video player: decodes video,
audio, playback sync.
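A minimal sketch (not from the slides) of data parallelism with Pthreads: two threads run the same summation over different halves of one array. The range struct and the sum_range function are illustrative names.

#include <stdio.h>
#include <pthread.h>

#define N 8
int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};
long partial[2];                        /* one partial result per thread */

struct range { int lo, hi, idx; };

void *sum_range(void *arg) {
    struct range *r = arg;
    long s = 0;
    for (int i = r->lo; i < r->hi; i++)
        s += data[i];                   /* same operation, different subset of the data */
    partial[r->idx] = s;
    pthread_exit(0);
}

int main(void) {
    pthread_t tid[2];
    struct range r[2] = { {0, N/2, 0}, {N/2, N, 1} };
    for (int i = 0; i < 2; i++)
        pthread_create(&tid[i], NULL, sum_range, &r[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);
    printf("sum = %ld\n", partial[0] + partial[1]);   /* prints 36 */
    return 0;
}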
Amdahl’s Law
Identifies performance gains from adding additional cores to an
application that has both serial and parallel components
S is serial portion
N processing cores
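The upper bound the law places on speedup, with S the serial portion and N the number of processing cores:
\[ \text{speedup} \le \frac{1}{S + \frac{1 - S}{N}} \]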
That is, if an application is 75% parallel / 25% serial, moving from 1 to 2
cores gives a speedup of 1 / (0.25 + 0.75/2) = 1.6 times
As N approaches infinity, speedup approaches 1 / S
Serial portion of an application has disproportionate effect on
performance gained by adding additional cores
But does the law take into account contemporary multicore systems?
Thread Libraries
Thread libraries provide programmers with an API for creating and
managing threads.
Thread libraries may be implemented either in user or in kernel space.
• User space: API functions implemented solely within user space,
with no kernel support.
• Kernel space: involves system calls, and requires a kernel with
thread library support.
• A few well-established primary thread libraries:
POSIX Pthreads – may be provided as either a user-level or
kernel-level library
Win32 threads – provided as a kernel-level library on Windows
systems.
Java threads – may use Pthreads or Win32 threads depending on
the OS on which the JVM is running.
POSIX Pthreads: API Functions
pthread_create: Create a new thread
pthread_join: Wait for a thread to terminate
pthread_exit: Terminate calling thread
pthread_mutex_init: Initialize a mutex
pthread_mutex_lock: Lock a mutex
pthread_mutex_unlock: Unlock a mutex
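A minimal sketch (not from the slides) showing these calls protecting a shared counter; the variable and function names are illustrative.

#include <stdio.h>
#include <pthread.h>

int counter = 0;                      /* shared data */
pthread_mutex_t lock;

void *worker(void *param) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* enter critical section */
        counter++;
        pthread_mutex_unlock(&lock);  /* leave critical section */
    }
    pthread_exit(0);
}

int main(void) {
    pthread_t t1, t2;
    pthread_mutex_init(&lock, NULL);  /* default mutex attributes */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter); /* always 200000 with the mutex held */
    return 0;
}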
User Threads and Kernel Threads
User threads - management done by user-level threads library
Kernel threads - Supported by the Kernel
• Exist in virtually all general-purpose operating systems:
Windows, Linux, Mac OS X, iOS, Android
Even user threads will ultimately need kernel thread support (Why??)
Multithreading Models
Many-to-One
One-to-One
Many-to-Many
Many-to-One
Many user-level threads mapped to single kernel thread
Blocking one thread causes all to block
Multiple threads may not run in parallel on a multicore system because
only one may be in the kernel at a time
Old approach: Few systems currently use this model
Examples:
• Solaris Green Threads
• GNU Portable Threads
One-to-One
Each user-level thread maps to a kernel thread
Creating a user-level thread creates a kernel thread
More concurrency than many-to-one
Number of threads per process sometimes restricted due to overhead
Examples
• Windows
• Linux
Many-to-Many Model
Allows many user level threads to be mapped to many kernel threads
Allows the operating system to create a sufficient number of kernel
threads
Windows with the ThreadFiber package
Otherwise not very common
Two-level Model
Similar to M:M, except that it allows a user thread to be bound to a
kernel thread
Pthreads
Specification, not implementation
• API specifies behavior of the thread library; implementation is up
to the developers of the library
Example: Sum of the first N natural numbers
Global data: Variables declared globally are shared among all
threads of the same process
Local data: Data local to a function (running in a thread) is stored on
that thread's stack
Pthreads Example: Code
#include <stdio.h>
#include <stdlib.h>   // for atoi()
#include <pthread.h>

int sum;                    // global variable shared by all threads of the process
void *runner(void *param);  // threads begin execution in a specified function

int main(int argc, char *argv[]) {
    pthread_t tid;          // declares the identifier for the thread
    pthread_attr_t attr;    // set of thread attributes

    pthread_attr_init(&attr);                      // set the default attributes of the thread
    pthread_create(&tid, &attr, runner, argv[1]);  // create the thread
    pthread_join(tid, NULL);                       // wait for the thread to exit
    printf("sum = %d\n", sum);
}

/* The thread will execute in this function */
void *runner(void *param) {
    int i, upper = atoi(param);
    sum = 0;
    for (i = 1; i <= upper; i++)
        sum += i;
    pthread_exit(0);        // thread terminates
}
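The example can be built and run along these lines (the file name sum.c is illustrative):
gcc -pthread -o sum sum.c
./sum 10        # prints sum = 55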
Pthreads Example: Joining Multiple Threads
With the growing dominance of multicore systems, writing programs
containing several threads has become common.
A simple method for waiting on several threads is to enclose the
pthread_join() calls within a for loop:
#define NUM_THREADS 10
/* an array of threads to be joined upon */
pthread_t workers[NUM_THREADS];
for (int i = 0; i < NUM_THREADS; i++)
    pthread_join(workers[i], NULL);
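A matching creation loop might look like the sketch below; the start routine worker and its NULL argument are placeholders:
for (int i = 0; i < NUM_THREADS; i++)
    pthread_create(&workers[i], NULL, worker, NULL);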
OpenMP
Collection of compiler directives and widely used API for C, C++,
FORTRAN
Provides support for parallel programming in shared-memory
environments
Identifies parallel regions – blocks of code that can run in parallel
#pragma omp parallel
Creates as many threads as there are processing cores
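A minimal sketch (not from the slides) of a parallel region in C; omp_get_thread_num() reports each thread's id, and the program is typically compiled with an OpenMP flag such as gcc -fopenmp.

#include <stdio.h>
#include <omp.h>

int main(void) {
    /* OpenMP creates a team of threads; each runs the block below once */
    #pragma omp parallel
    {
        printf("hello from thread %d\n", omp_get_thread_num());
    }
    return 0;
}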
End of Chapter 4