Distributed Shared Memory

Distributed shared memory (DSM) allows processes on different nodes in a network to access shared memory blocks. There are two approaches to information sharing: shared memory, where nodes directly read and write a common memory, and message passing, where nodes explicitly send and receive messages. Message passing is more fault-tolerant, but shared memory is easier to use. DSM provides a unified virtual address space across nodes.


Distributed Shared Memory

There Are Two Ways of Information Sharing

• Using a Shared Memory
  – A node writes information into the memory
  – Another node reads the information from the memory
  [Figure: Nodes 1-3 attached to a shared memory; Node 1 writes v, Node 2 reads v]

• Using Message Passing
  – A node sends a message to another node
  – The second node receives the message from the first
  [Figure: Node 1 sends v over the network; Node 2 receives v; Node 3 is also on the network]
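The two styles above can be contrasted in a few lines of code. This is an illustrative sketch, not from the slides: a plain dictionary stands in for the shared memory, and a `queue.Queue` stands in for the network link between two nodes.

```python
from queue import Queue

# Shared-memory style: both "nodes" access the same location directly.
shared = {"v": None}
shared["v"] = 42          # Node 1 writes v
value = shared["v"]       # Node 2 reads v

# Message-passing style: Node 1 must name a recipient and send explicitly.
channel = Queue()         # stands in for the link to Node 2
channel.put(42)           # Node 1: send v
received = channel.get()  # Node 2: receive v

print(value, received)    # -> 42 42
```

Note how the shared-memory version never names the other node, which is exactly why it is easier to use.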
Shared Memory Is Easier to Use

• Shared memory is easy to use
  – If information is written, collaboration progresses!

• Message passing is more difficult to use
  – To which node should the information be sent?

Message Passing Tolerates Failures

• Shared memory is failure-prone
  – Communication relies on the memory being available
  [Figure: Nodes 1-3 communicate through the shared memory; if it fails, communication stops]

• Message passing is fault-tolerant
  – Communication works as long as there is a way to route a message
  [Figure: Node 1 sends v to Node 2 directly; Node 3 is also reachable over the network]

DSM Introduction
• DSM provides a virtual address space shared among processes on loosely
  coupled processors.

Formal definition:
  A distributed shared memory system is a pair (P, M), where P is a set of
  N processors {P1, P2, P3, ..., Pn} and M is a shared memory. Each
  processor Pi sequentially executes read and write operations on data
  items in M in the order defined by the program running on it.

• It is an abstraction that integrates the local memory of different machines in a
  network environment into a single logical entity shared by cooperating
  processes executing on multiple sites.
• Because the shared memory exists only virtually, DSM is sometimes also
  referred to as Distributed Shared Virtual Memory (DSVM).

DSM Introduction

[Figure: DSM architecture — each node (Node 1 ... Node n) has its own local memory and a memory-mapping manager (MMM); the nodes are joined by a communication network and together present a single virtual Distributed Shared Memory]
DSM Introduction
• To facilitate memory operations, the shared-memory
  space is partitioned into blocks.
• When a process on a node accesses some data from a
  memory block of the shared-memory space, the local
  memory manager handles the request.
• Variations of this general approach are used in different
  implementations, depending on whether the DSM system
  allows replication and/or migration of shared-memory
  data blocks.
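The local-manager idea can be sketched as a small class. This is a minimal illustration, not an implementation from the slides: the class name, the 4 KiB block size, and the `fetch_block` callback (standing in for a network request to the block's owner node) are all assumptions.

```python
BLOCK_SIZE = 4096  # bytes per block; an assumed, illustrative size

class LocalMemoryManager:
    """Sketch of a node-local manager: serves hits from locally held
    blocks and fetches misses from a (hypothetical) remote owner."""
    def __init__(self, fetch_block):
        self.blocks = {}                 # block number -> block contents
        self.fetch_block = fetch_block   # stand-in for a network request

    def read(self, address):
        block_no, offset = divmod(address, BLOCK_SIZE)
        if block_no not in self.blocks:  # block fault: go remote
            self.blocks[block_no] = self.fetch_block(block_no)
        return self.blocks[block_no][offset]

# Usage: pretend the remote owner serves zero-filled blocks.
mgr = LocalMemoryManager(fetch_block=lambda n: bytearray(BLOCK_SIZE))
first = mgr.read(8192)   # faults in block 2, then reads offset 0
```

A replicating DSM would cache fetched blocks exactly like this; a migrating one would additionally invalidate the block at its previous holder.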

Design and Implementation Issues of DSM

Granularity
• Granularity refers to the block size of a DSM system:
  the unit of sharing and the unit of data transfer across
  the network.
• Possible units are a few words, a page, or a few pages.

Factors influencing block-size selection:
• Paging overhead: a process is likely to access a large
  region of its shared address space in a small amount of
  time, so paging overhead is lower for large block sizes
  than for small block sizes.
Granularity
• Directory size: information about blocks is maintained in a
  directory. The larger the block size, the smaller the
  directory, which reduces directory-management overhead.
• Thrashing: when data items in the same data block are
  being updated by multiple nodes at the same time, no
  real work can get done. A DSM system must use a policy to
  avoid this situation, usually known as thrashing.
• False sharing: occurs when two different processes access
  two unrelated variables that reside in the same
  data block. The larger the block size, the higher the
  probability of false sharing.
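The block-size trade-off can be made concrete with a back-of-the-envelope model. This is my own illustrative sketch, not from the slides: assuming two unrelated variables are placed uniformly at random in a fixed-size shared region, the chance that the second lands in the first's block is one over the number of blocks, so it grows linearly with block size.

```python
REGION_SIZE = 2 ** 20  # 1 MiB shared region, an assumed size

def same_block_probability(block_size):
    """Chance two independently placed variables share a block:
    1 / (number of blocks) = block_size / REGION_SIZE."""
    return block_size / REGION_SIZE

for size in (64, 1024, 4096):
    print(size, same_block_probability(size))
```

The same growth that shrinks the directory (fewer, larger blocks) is what makes false sharing more likely.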

Structure of shared memory space
• Structure defines the abstract view of the shared-memory space
  presented to the application programmers of a DSM system.
• It may appear to its programmers as storage for words or
  storage for data objects.

Approaches to structuring the shared-memory space:

1. No structuring
• Most DSM systems do not structure their shared-memory space.
• In these systems the shared memory is simply a linear array of
  words.
• The advantage is that any suitable page size can be chosen
  as the unit of sharing and used for all applications.

Structure of shared memory space
• So it is simple and easy to design such a DSM system.

2. Structuring by data type
• The shared-memory space is structured either as a collection
  of objects or as a collection of variables in the source
  language.
• The granularity in such a DSM system is an object or a
  variable.
• Since the sizes of objects and variables vary greatly,
  these DSM systems use a variable grain size to match the
  size of the object/variable being accessed by the
  application.
• Variable grain size complicates the design and
  implementation of these DSM systems.
Structure of shared memory space
3. Structuring as a database
• The shared-memory space is organized as an associative
  memory called a tuple space, which is a collection of
  immutable tuples with typed data items in their fields.
• To perform updates, old data items in the DSM are
  replaced by new data items.
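The tuple-space idea can be sketched in a few lines. This is an illustrative, single-process sketch of Linda-style operations (the class and the `out`/`inp` method names follow the common Linda convention, not the slides): tuples are never modified in place; an update withdraws the old tuple and inserts a new one.

```python
class TupleSpace:
    """Minimal tuple-space sketch. Tuples are immutable; an update
    removes the old tuple and inserts a replacement."""
    def __init__(self):
        self.tuples = []

    def out(self, t):          # insert a tuple
        self.tuples.append(t)

    def inp(self, pattern):    # withdraw the first tuple matching pattern
        for t in self.tuples:  # None in the pattern acts as a wildcard
            if len(t) == len(pattern) and all(
                    p is None or p == f for p, f in zip(pattern, t)):
                self.tuples.remove(t)
                return t
        return None

space = TupleSpace()
space.out(("counter", 0))
old = space.inp(("counter", None))   # remove the old data item
space.out(("counter", old[1] + 1))   # replace it with a new one
latest = space.inp(("counter", None))
print(latest)                        # -> ('counter', 1)
```

In a real tuple-space DSM, `inp` on a missing tuple would block until a matching tuple appears, which is what makes the model usable for synchronization as well as storage.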

Consistency Models
• A consistency model refers to the degree of consistency
  that has to be maintained for the shared-memory data
  for the memory to work correctly for a certain set of
  applications.
• A consistency model is essentially a contract between
  the software and the memory: if the software agrees to
  obey certain rules, the memory promises to work
  correctly.
• Consistency requirements vary from application to
  application.
• Applications that depend on a stronger consistency
  model may not perform correctly if executed in a
  system that supports only a weaker consistency model.
• If a system supports a stronger consistency model, then
  the weaker consistency models are automatically
  supported, but the converse is not true.
Strict Consistency Model
• This is the strongest form of memory coherence.
• The value returned by a read operation on a memory
  address is always the same as the value written by the
  most recent write operation to that address. All writes
  become immediately visible to all processes.
• Implementing the strict consistency model requires
  an absolute global time, so that memory
  read/write operations can be correctly ordered and
  the meaning of "most recent" is clear.
• However, absolute synchronization of the clocks of all the
  nodes of a distributed system is not possible.

Sequential Consistency Model
• All processes see the same order of all memory access
operations on the shared memory.
• The exact order in which the memory access operations
are interleaved does not matter.
• Example: If the three operations read (r1), write (w1),
read (r2) are performed on a memory address in that
order, any of the orderings (r1, w1, r2), (r1, r2, w1), (w1,
r1, r2), (w1, r2, r1), (r2, r1, w1), (r2, w1, r1) of three
operations is acceptable provided all processes see the
same ordering.
• This model is weaker than the strict consistency model.
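The interleaving rule can be checked mechanically. This short sketch (my own, not from the slides) enumerates the interleavings of the three operations r1, w1, r2 from the example above; sequential consistency accepts any one of them, provided every process observes the same one.

```python
from itertools import permutations

# All interleavings of the three operations on one memory address.
orders = sorted(permutations(["r1", "w1", "r2"]))

print(len(orders))                   # -> 6, matching the six listed orders
print(("w1", "r1", "r2") in orders)  # -> True, one of the acceptable orders
```

The model constrains agreement between processes, not which interleaving wins, which is why a read may legally miss the most recent write.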
Sequential Consistency Model
• Disadvantage is that it does not guarantee that a read
operation on a particular memory address always returns
the same value as written by the most recent write
operation to that address.

• It provides one-copy/single-copy semantics, because all
  the processes sharing a memory location always see
  exactly the same contents stored in it. So it is acceptable
  to most applications.

Causal Consistency Model
• A shared-memory system is said to support the causal
  consistency model if all write operations that are
  potentially causally related are seen by all processes in
  the same (correct) order.
• Write operations that are not potentially causally related
  may be seen by different processes in different orders.
• Example: if a write operation (w2) is causally related to
  another write operation (w1), the acceptable order is (w1,
  w2), because the value written by w2 might have been
  influenced in some way by the value written by w1. (w2,
  w1) is not an acceptable order.

PRAM Consistency Model
• This model provides weaker consistency semantics than the
  consistency models described so far.
• It only ensures that all write operations performed by a
  single process are seen by all other processes in the order
  in which they were performed, as if all the write operations
  performed by a single process were in a pipeline.
• Write operations performed by different processes may
  be seen by different processes in different orders.
• Example: if w11 and w12 are two write operations
  performed by a process p1 in that order, and w21 and w22
  are two write operations performed by a process p2 in that
  order, process p3 may see them in the order [(w11, w12),
  (w21, w22)] while another process p4 may see them in the
  order [(w21, w22), (w11, w12)].
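The legality rule in the example can be encoded as a small checker. This is an illustrative sketch of the rule, not from the slides: a view is PRAM-legal if each process's own writes appear in it as a subsequence in program order, regardless of how the two pipelines interleave.

```python
p1_writes = ["w11", "w12"]   # program order on process p1
p2_writes = ["w21", "w22"]   # program order on process p2

def in_order(view, writes):
    """True if `writes` appear in `view` as a subsequence, in order."""
    it = iter(view)
    return all(w in it for w in writes)

def pram_legal(view):
    """PRAM allows any view that preserves each process's own order."""
    return in_order(view, p1_writes) and in_order(view, p2_writes)

print(pram_legal(["w11", "w12", "w21", "w22"]))  # p3's view -> True
print(pram_legal(["w21", "w22", "w11", "w12"]))  # p4's view -> True
print(pram_legal(["w12", "w11", "w21", "w22"]))  # breaks p1's order -> False
```

Note that p3 and p4 hold different but equally legal views, which is exactly what the stronger models forbid.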
Processor Consistency Model
• It is similar to the PRAM model, with the additional
  restriction of memory coherence.
• For any memory location, all processes agree on
  the same order of all write operations to that
  location. In effect, processor consistency ensures
  that all write operations performed on the same
  memory location are seen by all processes in the
  same order.
• For the earlier example, this means that if w12
  and w22 are write operations to the
  same memory location x, all processes must see
  them in the same order: w12 before w22, or w22
  before w12.
Weak Consistency Model
• It is designed to take advantage of the following
  characteristic common to many applications:
• It is not necessary to show the change made to memory by
  every write operation to other processes. The results of
  several write operations can be combined and sent to
  other processes when they need them.

Weak Consistency Model
• For this, a DSM system that supports the weak
  consistency model uses a special variable called a
  synchronization variable.
• When a synchronization variable is accessed by a
  process: (1) all changes made to the memory by the
  process are propagated to the other nodes (when the
  process exits the critical section), and (2) all changes
  made to the memory by other processes are propagated
  from the other nodes to the process's node (when the
  process enters a critical section).
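The buffering behavior described above can be sketched in a few lines. This is a single-process illustration with assumed names (`WeakDSM`, `synchronize`); a plain dictionary stands in for the globally visible state that other nodes would see.

```python
class WeakDSM:
    """Sketch: writes are buffered locally and only made visible to
    the (hypothetical) other nodes when the synchronization variable
    is accessed."""
    def __init__(self):
        self.local_writes = {}   # pending updates on this node
        self.memory = {}         # globally visible state

    def write(self, addr, value):
        self.local_writes[addr] = value   # not yet visible elsewhere

    def synchronize(self):
        # (1) propagate this node's combined writes in one step;
        # (2) pulling other nodes' changes would also happen here.
        self.memory.update(self.local_writes)
        self.local_writes.clear()

dsm = WeakDSM()
dsm.write("x", 1)
dsm.write("x", 2)         # combined with the previous write
dsm.synchronize()         # one propagation instead of two
print(dsm.memory["x"])    # -> 2
```

The two writes to x cost one network propagation rather than two, which is the payoff the model is designed for.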

Release Consistency Model
• This model is designed to overcome a disadvantage of the
  weak consistency model.
• It provides a mechanism that clearly tells the system
  whether a process is entering or exiting a critical
  section, so that the system can perform only the first or
  only the second operation when a synchronization
  variable is accessed by a process.
• Two synchronization variables, acquire and release, are
  used for this purpose.
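The split between the two operations can be sketched as follows. This is an illustrative single-process sketch with assumed names (`ReleaseDSM`, and a dictionary standing in for remote state): acquire only pulls remote updates, release only pushes local ones, so each synchronization point does half the work of weak consistency's combined operation.

```python
class ReleaseDSM:
    """Sketch: acquire pulls others' updates; release publishes ours."""
    def __init__(self, global_memory):
        self.global_memory = global_memory  # stand-in for remote state
        self.local = {}                     # this node's view
        self.dirty = {}                     # writes made since acquire

    def acquire(self):                      # entering the critical section
        self.local.update(self.global_memory)   # fetch others' changes only

    def write(self, addr, value):
        self.local[addr] = value
        self.dirty[addr] = value

    def release(self):                      # exiting the critical section
        self.global_memory.update(self.dirty)   # publish own changes only
        self.dirty.clear()

shared = {"x": 0}
node = ReleaseDSM(shared)
node.acquire()
node.write("x", node.local["x"] + 1)
node.release()
print(shared["x"])  # -> 1
```

Pairing each critical section with an acquire/release avoids the redundant propagation that weak consistency performs on every access to its single synchronization variable.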

Discussion of Consistency Models
• It is difficult to grade the consistency models based on
  performance, because quite different results are usually
  obtained for different applications.
• Therefore, in the design of a DSM system, the choice of
  consistency model usually depends on several other
  factors, such as how easy it is to implement, how much
  concurrency it allows, and how easy it is to use.
• The strict consistency model is never used in the design
  of a DSM system because its implementation is practically
  impossible.
• The sequential consistency model is commonly used.
• The others are the main choices in the weaker category.
• The last two provide better concurrency.
Advantages of DSM
• Simpler abstraction
• Better portability of distributed application programs
• Better performance of some applications
• Flexible communication environment
• Ease of process migration.

