Threads
-by Akansha Singh
Overview
A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter (PC), a register set, and a stack.
It shares with other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals.
A traditional process has a single thread of control. If a process has multiple threads of control, it can perform more than one task at a time.
Examples
An application that creates photo thumbnails from a collection of
images may use a separate thread to generate a thumbnail from each
separate image.
A web browser might have one thread display images or text while
another thread retrieves data from the network.
A word processor may have a thread for displaying graphics, another
thread for responding to keystrokes from the user, and a third thread
for performing spelling and grammar checking in the background.
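The pattern shared by these examples can be sketched with a small Python threading illustration (not from the source; fetch_data and the resource names are hypothetical, and the "network fetch" is simulated with a short sleep):

```python
import threading
import time

def fetch_data(url):
    # Hypothetical network fetch, simulated with a short sleep.
    time.sleep(0.1)
    return f"data from {url}"

results = {}

def worker(url):
    results[url] = fetch_data(url)

# One thread per resource, like a browser fetching an image and a
# page while the main thread stays free to do other work.
urls = ["page.html", "logo.png"]
threads = [threading.Thread(target=worker, args=(u,)) for u in urls]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))
```

Because each fetch runs on its own thread, a slow fetch does not prevent the other thread, or the main thread, from making progress.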
Client-Server Example
• In certain situations, a single application may
be required to perform several similar tasks.
• For example, a web server accepts client
requests for web pages, images, sound, and
so forth. A busy web server may have several
(perhaps thousands of) clients concurrently
accessing it.
• If the web server ran as a traditional single-
threaded process, it would be able to service
only one client at a time, and a client might
have to wait a very long time for its request to
be serviced.
SOLUTIONS:
A. TO CREATE A CHILD PROCESS
When the server receives a request, it creates a separate process to service that request. In fact, this process-creation method was in common use before threads became popular. However, process creation is time consuming and resource intensive, and if the new process performs the same tasks as the existing process, that overhead is incurred for every request.
B. TO CREATE THREADS
It is generally more efficient to have the server run as a single process that contains multiple threads. If the web-server process is multithreaded, the server creates a separate thread that listens for client requests. When a request is made, rather than creating another process, the server creates a new thread to service the request and resumes listening for additional requests.
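The thread-per-request idea can be sketched in Python (illustrative only; handle_request and the request strings are made up, and a real server would accept requests from a socket rather than a list):

```python
import threading

def handle_request(request, responses, lock):
    # Service one request (here: just record a response).
    with lock:
        responses.append(f"served {request}")

def server(requests):
    # Listener loop: for each incoming request, spawn a worker
    # thread to service it and immediately resume "listening".
    responses = []
    lock = threading.Lock()
    workers = []
    for req in requests:
        t = threading.Thread(target=handle_request,
                             args=(req, responses, lock))
        t.start()
        workers.append(t)
    for t in workers:
        t.join()
    return responses

print(sorted(server(["req1", "req2", "req3"])))
```

The listener never blocks waiting for a request to finish, which is exactly what a single-threaded server cannot avoid.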
Process vs Child Process vs Thread
1. Process: An independent unit of execution with its own memory, resources, etc.
2. Child Process: A new process created by a parent process (via fork() or similar).
Has its own memory space.
Gets a copy of parent's data — changes in one don’t reflect in the other.
Can communicate with parent using IPC (pipes, shared memory, sockets, etc.).
Independent execution, but can be waited on by parent (wait()).
3. Thread: A lightweight unit of execution inside a process; multiple threads share the process's memory and resources.
Shares everything with its sibling threads except its own stack and registers.
Can access and modify shared variables, heap, file descriptors, etc.
Faster to create and switch between.
Needs careful synchronization (mutexes, semaphores) to avoid race conditions.
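The last point, shared variables plus the need for synchronization, can be shown with a short Python sketch using the standard threading module (the shared counter is hypothetical):

```python
import threading

counter = 0          # shared variable: visible to every thread
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # Without the lock, the read-modify-write of the shared
        # variable could interleave across threads and lose updates
        # (a race condition).
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000: the lock makes each update atomic
```

A child process running the same code would increment its own copy of counter, and the parent would never see the change, which is the key contrast with threads.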
Benefits of Multi-threading
1. Responsiveness: Multithreading an interactive application may allow a program to continue running
even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the
user. This quality is especially useful in designing user interfaces.
2. Resource sharing: Threads share the memory and the resources of the process to which they belong
by default. The benefit of sharing code and data is that it allows an application to have several different
threads of activity within the same address space.
3. Economy: Allocating memory and resources for process creation is costly. Because threads share the
resources of the process to which they belong, it is more economical to create and context-switch
threads. Additionally, context switching is typically faster between threads than between processes.
4. Scalability: The benefits of multithreading can be even greater in a multiprocessor architecture, where
threads may be running in parallel on different processing cores. A single-threaded process can run on
only one processor, regardless of how many are available.
Types of Parallelism
• Data parallelism focuses on distributing subsets of the same data across multiple computing cores
and performing the same operation on each core.
• Consider, for example, summing the contents of an array of size N. On a single-core system, one
thread would simply sum the elements [0] ...[N − 1].
• On a dual-core system, however, thread A, running on core 0, could sum the elements [0] . . . [N∕2 −
1] while thread B, running on core 1, could sum the elements [N∕2] . . . [N − 1]. The two threads
would be running in parallel on separate computing cores.
• Task parallelism involves distributing not data but tasks (threads) across multiple computing cores.
Each thread is performing a unique operation.
• Different threads may be operating on the same data, or they may be operating on different data.
• Consider again our example above. In contrast to that situation, an example of task parallelism might
involve two threads, each performing a unique statistical operation on the array of elements.
• The threads again are operating in parallel on separate computing cores, but each is performing a
unique operation.
• Fundamentally, then, data parallelism involves the distribution of data across multiple cores, and
task parallelism involves the distribution of tasks across multiple cores.
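Both kinds of parallelism can be sketched in Python (illustrative only; with CPython's global interpreter lock the threads interleave rather than truly run in parallel, but the division of work is the same):

```python
import threading

data = list(range(1, 101))  # the "array": 1 .. 100

# Data parallelism: two threads perform the SAME operation (sum)
# on different halves of the data.
partial = [0, 0]

def sum_half(i, chunk):
    partial[i] = sum(chunk)

mid = len(data) // 2
a = threading.Thread(target=sum_half, args=(0, data[:mid]))
b = threading.Thread(target=sum_half, args=(1, data[mid:]))
a.start(); b.start()
a.join(); b.join()
total = partial[0] + partial[1]

# Task parallelism: two threads perform DIFFERENT operations
# (sum and max) on the same data.
stats = {}
t1 = threading.Thread(target=lambda: stats.update(total=sum(data)))
t2 = threading.Thread(target=lambda: stats.update(largest=max(data)))
t1.start(); t2.start()
t1.join(); t2.join()

print(total, stats["total"], stats["largest"])  # 5050 5050 100
```

In the first half the data is split and the operation is fixed; in the second half the data is fixed and the operations differ, mirroring the two definitions above.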
MultiThreading Models
User level and Kernel level Threads
• User threads are supported above the kernel and are managed without kernel support, whereas
kernel threads are supported and managed directly by the operating system.
• Virtually all contemporary operating systems—including Windows, Linux, and macOS— support
kernel threads.
USER LEVEL THREAD vs KERNEL LEVEL THREAD
• Implementation: user-level threads are implemented by the user or by programs; kernel-level threads are implemented by the OS.
• Recognition: the OS does not know anything about user-level threads and views them as a single process; kernel-level threads are recognized by the OS.
• Blocking: if one user-level thread performs a blocking system call, the entire process goes into the blocked state; if one kernel-level thread performs a blocking system call, another thread can continue execution.
• Independence: user-level threads are dependent threads; kernel-level threads are designed as independent threads.
• Implementation effort: easy for user-level threads; complicated for kernel-level threads.
• Context: switching user-level threads involves less context; switching kernel-level threads involves more context.
• Support: no hardware support is required for user-level threads; scheduling of kernel-level threads requires hardware support.
1. Many-to-one model
• The many-to-one model maps many user-level threads to one kernel thread.
• Thread management is done by the thread library in user space, so it is efficient.
• However, the entire process will block if a thread makes a blocking system call. Also, because only one thread can access the kernel at a time, multiple threads are unable to run in parallel on multicore systems.
• Example-Green threads—a thread library available for
Solaris systems and adopted in early versions of Java—
used the many-to-one model.
• However, very few systems continue to use the model
because of its inability to take advantage of multiple
processing cores, which have now become standard on
most computer systems.
2. One-to-one model
• The one-to-one model maps each user thread to a kernel thread.
• It provides more concurrency than the many-to-one
model by allowing another thread to run when a thread
makes a blocking system call.
• It also allows multiple threads to run in parallel on
multiprocessors.
• The only drawback to this model is that creating a user
thread requires creating the corresponding kernel thread,
and a large number of kernel threads may burden the
performance of a system.
• Linux, along with the family of Windows operating systems, implements the one-to-one model.
3. Many-to-many model
• The many-to-many model multiplexes many user-level
threads to a smaller or equal number of kernel threads.
• The number of kernel threads may be specific to either a
particular application or a particular machine (an
application may be allocated more kernel threads on a
system with eight processing cores than a system with
four cores).
Concurrency and Parallelism
A concurrent system supports more than one task by allowing all the tasks
to make progress.
In contrast, a parallel system can perform more than one task
simultaneously. Thus, it is possible to have concurrency without parallelism.
Before the advent of multiprocessor and multicore architectures, most computer
systems had only a single processor, and CPU schedulers were designed to provide
the illusion of parallelism by rapidly switching between processes, thereby allowing
each process to make progress. Such processes were running concurrently, but
not in parallel.
Models comparison in terms of concurrency
Whereas the many-to-one model allows the developer to create as many user threads as desired, it does not result in parallelism, because the kernel can schedule only one kernel thread at a time.
The one-to-one model allows greater concurrency, but the developer has to be careful not to create too many threads within an application.
The many-to-many model suffers from neither of these shortcomings:
developers can create as many user threads as necessary, and the
corresponding kernel threads can run in parallel on a multiprocessor.
Also, when a thread performs a blocking system call, the kernel can schedule
another thread for execution.
Although the many-to-many model appears to be the most flexible of the
models, in practice it is difficult to implement.
In addition, with an increasing number of processing cores appearing on most
systems, limiting the number of kernel threads has become less important. As a
result, most operating systems now use the one-to-one model.