HPC Unit 3

The document discusses key concepts in parallel computing, including synchronization, serialization, and contention, which are crucial for ensuring correct execution and data consistency. It also covers strategies for optimizing communication in parallel systems, such as reducing communication overhead, optimal domain decomposition, and message aggregation. Additionally, it differentiates between non-blocking and asynchronous communication methods to enhance performance.


🔄 Synchronization, Serialization, and Contention

1. Synchronization

• Synchronization is the coordination of multiple processes or threads to ensure correct execution order and data consistency.
• In parallel computing, synchronization is necessary when multiple processes access shared resources to prevent race conditions.
• Common methods include mutexes, barriers, locks, and semaphores.

Example:

#include <stdio.h>

int main(void) {
    int shared_resource = 0;
#pragma omp parallel
    {
#pragma omp critical
        {
            // Only one thread can execute this at a time
            shared_resource++;
        }
    }
    printf("shared_resource = %d\n", shared_resource);
    return 0;
}

2. Serialization

• Serialization refers to executing processes one after another rather than concurrently.
• It often arises implicitly due to dependencies or resource sharing.
• Excessive serialization reduces parallel efficiency and scalability (see the sketch below).
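A minimal OpenMP sketch of unintended serialization (the variable names and loop bound are purely illustrative): the loop is marked parallel, but because the critical section wraps all of the work, iterations still execute one at a time.

#include <stdio.h>

int main(void) {
    long sum = 0;
    // The loop is nominally parallel...
#pragma omp parallel for
    for (int i = 0; i < 1000000; i++) {
        // ...but the critical section covers the entire body, so only one
        // iteration can make progress at a time: the work is serialized.
#pragma omp critical
        sum += i;
    }
    printf("sum = %ld\n", sum);
    return 0;
}

With OpenMP enabled (e.g. gcc -fopenmp), this still produces the right answer but gains little from extra threads; a reduction clause would restore parallelism.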

3. Contention

• Contention occurs when multiple threads or processes compete for the same resources (e.g., CPU, memory, communication bus, or locks).
• High contention can lead to bottlenecks and poor performance (illustrated below).
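A small OpenMP sketch of contention (the counter names and iteration counts are illustrative): every thread updating one shared counter competes for the same location, while a reduction gives each thread a private copy and removes most of that contention.

#include <stdio.h>

int main(void) {
    long hot = 0, reduced = 0;

    // High contention: every increment targets the same shared counter.
#pragma omp parallel for
    for (int i = 0; i < 1000000; i++) {
#pragma omp atomic
        hot++;
    }

    // Low contention: each thread accumulates privately; the copies are
    // combined once at the end by the reduction clause.
#pragma omp parallel for reduction(+ : reduced)
    for (int i = 0; i < 1000000; i++)
        reduced++;

    printf("hot = %ld, reduced = %ld\n", hot, reduced);
    return 0;
}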

4. Implicit Serialization and Synchronization

• Implicit serialization happens when system behavior inadvertently enforces sequential execution, even though parallelism is possible.
• Examples:
o Accessing a shared file or variable without parallel-safe mechanisms.
o Barrier synchronization forcing all threads to wait (see the sketch below).
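A short OpenMP sketch of the barrier case (the artificial work sizes are only for illustration): the barrier forces every thread to wait for the slowest one, even though the work after it has no real dependency on that thread.

#include <stdio.h>
#include <omp.h>

int main(void) {
#pragma omp parallel
    {
        int tid = omp_get_thread_num();

        // Imbalanced first phase: thread 0 has far more work than the rest.
        volatile long x = 0;
        long work = (tid == 0) ? 100000000L : 1000L;
        for (long i = 0; i < work; i++)
            x += i;

        // Implicit serialization point: nobody proceeds until everyone,
        // including the slow thread 0, has reached the barrier.
#pragma omp barrier

        printf("thread %d passed the barrier\n", tid);
    }
    return 0;
}
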
🚦 Communication Optimization in Parallel Systems
5. Reducing Communication Overhead

• Communication overhead is the time spent in data transfer rather than computation.
• In distributed systems, minimizing inter-process communication can significantly boost performance.
• Techniques include:
o Combining small messages (aggregation)
o Using non-blocking communication
o Overlapping computation and communication (see the sketch after this list)
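A sketch of overlapping computation with communication in MPI (it assumes exactly two ranks and uses an illustrative buffer size): the exchange is started with non-blocking calls, independent work is done while the data is in flight, and only then does each rank wait for completion.

#include <mpi.h>
#include <stdio.h>

#define N 1024

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double sendbuf[N], recvbuf[N], local = 0.0;
    for (int i = 0; i < N; i++)
        sendbuf[i] = rank + i;

    int partner = 1 - rank;   // the other of the two ranks
    MPI_Request reqs[2];

    // Start the exchange but do not wait for it yet.
    MPI_Irecv(recvbuf, N, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, N, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD, &reqs[1]);

    // Useful computation that does not need the incoming data,
    // overlapped with the message transfer.
    for (int i = 0; i < N; i++)
        local += sendbuf[i] * sendbuf[i];

    // Now wait for both the receive and the send to finish.
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    printf("rank %d: local = %f, first received = %f\n", rank, local, recvbuf[0]);
    MPI_Finalize();
    return 0;
}

Overlap only helps if there really is independent work to do between the initiation calls and MPI_Waitall.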

6. Optimal Domain Decomposition

• Domain decomposition splits a problem's data or computational domain across processes.
• The goal is to balance the load and minimize communication between processes.
• Common strategies:
o Block Decomposition: Equal-sized chunks
o Cyclic Decomposition: Round-robin allocation
o Block-Cyclic Decomposition: Combines both for load balancing

Example in Grid Computation: Splitting a 2D matrix among processes so that communication along the shared edges (halo exchange) is minimized.
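A tiny sketch of how block and cyclic decomposition assign indices (the values N = 12 and P = 3, and the even divisibility of N by P, are assumed only to keep the arithmetic simple):

#include <stdio.h>

#define N 12   /* number of elements in the domain */
#define P 3    /* number of processes */

int main(void) {
    int chunk = N / P;   /* block size per process */
    for (int p = 0; p < P; p++) {
        /* Block decomposition: one contiguous chunk per process. */
        printf("process %d  block: [%d..%d]  cyclic:", p, p * chunk, (p + 1) * chunk - 1);
        /* Cyclic decomposition: every P-th index, round-robin. */
        for (int i = p; i < N; i += P)
            printf(" %d", i);
        printf("\n");
    }
    return 0;
}

With N = 12 and P = 3, process 0 owns indices 0–3 as a block, but 0, 3, 6, 9 cyclically.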

7. Aggregating Messages

• Aggregating (or batching) combines multiple small messages into a single larger message.
• This reduces the number of times the per-message latency and startup cost must be paid.
• Especially useful in high-latency environments such as cloud clusters or communication over a WAN.

Illustration: Sending 10 small packets vs. 1 combined packet – the latter has lower overhead.
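A minimal MPI sketch of aggregation (it assumes two ranks and an illustrative count of 10 values): instead of ten tiny sends, rank 0 packs the values into one buffer so the message startup cost is paid only once.

#include <mpi.h>
#include <stdio.h>

#define NMSG 10

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double buf[NMSG];
    if (rank == 0) {
        // Gather the small pieces into one buffer...
        for (int i = 0; i < NMSG; i++)
            buf[i] = 1.5 * i;
        // ...and send them as a single aggregated message.
        MPI_Send(buf, NMSG, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        // One receive instead of ten.
        MPI_Recv(buf, NMSG, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received %d values in one message\n", NMSG);
    }

    MPI_Finalize();
    return 0;
}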

8. Non-blocking vs. Asynchronous Communication

Non-blocking Communication:

• The sender/receiver initiates communication and immediately proceeds without waiting.
• Example in MPI:

MPI_Isend(...); // Non-blocking send
// Do some computation
MPI_Wait(...); // Wait for send to complete
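A complete, runnable version of this pattern (it assumes at least two ranks; the payload and the dummy computation are purely illustrative):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int value = 42;
    if (rank == 0) {
        MPI_Request req;
        // Non-blocking send: returns immediately.
        MPI_Isend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);

        // Do some computation while the message is in flight.
        double acc = 0.0;
        for (int i = 0; i < 1000000; i++)
            acc += 0.5 * i;

        // Wait for the send to complete before reusing the buffer.
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        printf("rank 0: computed %f while sending\n", acc);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1: received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}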

Asynchronous Communication:

• A broader concept where a task is decoupled from the sender's or receiver's execution timeline.
• All non-blocking communication is asynchronous, but asynchronous communication may also involve callbacks, events, or futures beyond MPI.

Feature                        Non-blocking     Asynchronous
Return immediately             ✅               ✅
Requires wait or check         ✅ (MPI_Wait)    ✅/❌
Used in MPI                    ✅               ✅
Broader programming concept    ❌               ✅

✅ Summary Table
Concept                          Description
Synchronization                  Coordination to ensure correctness in concurrent access
Serialization                    Forced sequential execution
Contention                       Competition for shared resources
Implicit Serialization           Unintentional serialization due to dependencies
Reducing Communication Overhead  Minimize time spent in data exchange
Domain Decomposition             Dividing work to balance load and minimize communication
Aggregating Messages             Combining messages to reduce latency
Non-blocking Communication       Does not wait for send/receive to complete
Asynchronous Communication       Decouples sender and receiver timelines
