Parallel Processing Exam Answers

The document discusses key concepts in parallel processing and advanced computer architecture, including inter-process arbitration, communication, synchronization, cache coherence, and shared memory multiprocessors. It also covers the importance of parallelism in processors, pipelining techniques, data hazards, resource hazards, and the benefits of parallel processors alongside Flynn's classification. These concepts are essential for understanding system performance and efficient resource management in computing environments.


University Exam Preparation: Parallel Processing & Advanced Computer Architecture

---

1 to 9: (Previously Answered Above)

---

10. Define the Following in Detail:

a) Inter-Process Arbitration

Inter-process arbitration is the technique used to control access to shared resources among multiple processes to avoid conflicts. In multiprocessor systems, multiple processes may request access to shared memory or devices at the same time. Arbitration ensures fair and deadlock-free access.

Techniques:

- Priority-based arbitration: Fixed or dynamic priority assigned to processes.

- Round-robin arbitration: Processes are served in a cyclic order.

- Bus arbitration: Used in multiprocessor systems to grant access to shared buses.

Importance:

- Prevents race conditions.

- Maintains data integrity.

- Ensures system stability and fairness.
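The round-robin scheme listed above can be sketched as a small Python arbiter. This is an illustrative model only; the class name and process IDs are invented for the example, not taken from any standard library.

```python
from collections import deque

class RoundRobinArbiter:
    """Grants a shared resource to requesters in cyclic order (no starvation)."""
    def __init__(self, ids):
        self.order = deque(ids)

    def grant(self, requests):
        """Return the next requester in round-robin order with a pending request."""
        for _ in range(len(self.order)):
            candidate = self.order[0]
            self.order.rotate(-1)      # move candidate to the back of the cycle
            if candidate in requests:
                return candidate
        return None                    # no pending requests

arb = RoundRobinArbiter(["P0", "P1", "P2"])
print(arb.grant({"P1", "P2"}))  # P1
print(arb.grant({"P1", "P2"}))  # P2
print(arb.grant({"P1", "P2"}))  # P1 again: cyclic order prevents starvation
```

Because every grant rotates the cycle, a process that just received the resource goes to the back of the queue, which is what gives round-robin its fairness property.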

b) Inter-Process Communication (IPC)

Inter-Process Communication (IPC) refers to methods used by multiple processes to exchange data
and signals. It is essential in multitasking and multiprocessing environments.

Methods of IPC:

1. Shared Memory:

- Multiple processes access a common memory region.

- Fast but requires synchronization mechanisms (mutex, semaphore).

2. Message Passing:

- Processes send and receive messages via the OS.

- Safer but slower compared to shared memory.

3. Pipes and FIFOs:

- Pipes provide one-way communication between related processes (e.g. parent and child).

- FIFOs (named pipes) allow communication between unrelated processes through a name in the file system.

4. Sockets:

- Used in networked communication between processes on different systems.

Importance:

- Enables modular design.

- Allows parallelism and better CPU utilization.
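As a minimal illustration of the message-passing method above, Python's multiprocessing.Pipe connects two processes through an OS-managed channel. The worker function and messages are invented for this sketch.

```python
from multiprocessing import Process, Pipe

def worker(conn):
    msg = conn.recv()          # block until the parent sends a message
    conn.send(msg.upper())     # reply through the same pipe
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()     # duplex connection by default
    p = Process(target=worker, args=(child_end,))
    p.start()
    parent_end.send("hello from parent")
    print(parent_end.recv())           # HELLO FROM PARENT
    p.join()
```

Note how the processes share no memory: all data crosses through the pipe, which is what makes message passing safer (no race on shared state) but slower than shared memory.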

c) Synchronization

Synchronization is a method to coordinate multiple processes to ensure ordered and predictable execution, especially when they share resources.

Mechanisms:
1. Mutex (Mutual Exclusion): Prevents concurrent access to shared resources.

2. Semaphore: Counting variable to control access to shared resources.

3. Monitors: High-level synchronization mechanism.

4. Barrier Synchronization: All processes must reach a barrier before proceeding.

Goals:

- Avoid race conditions.

- Ensure data consistency.

- Maintain process cooperation and coordination.
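Two of the mechanisms above, a mutex and a barrier, can be demonstrated in a short Python sketch. The thread count and iteration count are arbitrary choices for the demo.

```python
import threading

counter = 0
lock = threading.Lock()          # mutex: at most one thread in the critical section
barrier = threading.Barrier(4)   # all 4 workers must arrive before any proceeds

def worker():
    global counter
    for _ in range(10_000):
        with lock:               # without the lock, counter += 1 would race
            counter += 1
    barrier.wait()               # barrier synchronization point

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000, deterministic because the mutex prevents lost updates
```

Removing the `with lock:` line would reintroduce the race condition: increments from different threads could interleave and some updates would be lost.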

11. Explain About Cache Coherence in Detail

Definition:

Cache coherence refers to the consistency of data stored in local caches of a shared resource. In multiprocessor systems, each processor may have its own cache, and if one processor updates a data item, other caches must be updated or invalidated to reflect this change.

Cache Coherence Problem:

Occurs when multiple caches contain copies of the same memory block and one cache modifies its copy.

Solutions:

1. Write-Through vs Write-Back Caches:

- Write-through ensures memory is updated immediately.

- Write-back delays update, requiring coherence mechanisms.


2. Coherence Protocols:

- Snoopy Protocol:

- All caches monitor (snoop) a shared bus to keep track of changes.

- Example: MESI protocol (Modified, Exclusive, Shared, Invalid).

- Directory-Based Protocol:

- A central directory maintains the status of each memory block and coordinates updates.

MESI Protocol States:

1. Modified: Cache has the only valid copy and it is modified.

2. Exclusive: Cache has the only valid copy and it matches main memory.

3. Shared: Cache shares copy with others.

4. Invalid: Cache copy is not valid.

Importance:

- Ensures data consistency.

- Prevents stale data from being used.

- Critical in symmetric multiprocessing (SMP).
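The MESI transitions described above can be sketched as a toy state machine for a single cache line. This is a deliberate simplification (it ignores bus transactions and data transfer), and the function names are invented for the example.

```python
# States of one cache line under MESI
MODIFIED, EXCLUSIVE, SHARED, INVALID = "M", "E", "S", "I"

def on_local_write(state):
    """This processor writes its cached copy: line ends up Modified.
    (A Shared/Invalid line must first invalidate other copies via the bus.)"""
    return MODIFIED

def on_remote_read(state):
    """Another cache reads the same block (observed by snooping the bus)."""
    if state == MODIFIED:
        return SHARED     # supply data, write back to memory, downgrade
    if state == EXCLUSIVE:
        return SHARED     # another copy now exists
    return state          # Shared stays Shared, Invalid stays Invalid

def on_remote_write(state):
    """Another cache writes the block: our copy is now stale."""
    return INVALID

state = EXCLUSIVE
state = on_remote_read(state)    # E -> S: block is now shared
state = on_remote_write(state)   # S -> I: our copy was invalidated
print(state)                     # I
```

The key invariant the real protocol enforces is visible even in this sketch: at most one cache can hold a line in Modified or Exclusive, so there is never more than one writable copy.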

12. Discuss Shared Memory Multiprocessors

Definition:

Shared memory multiprocessors are systems where multiple processors access a common memory space. Each processor may also have a private cache.

Architecture Types:

1. UMA (Uniform Memory Access):


- Equal access time for all processors.

- Example: SMP systems.

2. NUMA (Non-Uniform Memory Access):

- Memory is divided among processors.

- Access to remote memory is slower.

Characteristics:

- Global address space.

- Requires cache coherence mechanisms.

- Easier to program due to shared addressability.

Advantages:

- Simpler programming model.

- Efficient data sharing.

Disadvantages:

- Memory contention and bottlenecks.

- Scalability limitations.

13. Discuss the Concept of Parallelism in Processors. How Does it Affect System Performance?

Definition:

Parallelism in processors refers to the ability to execute multiple instructions or operations simultaneously to improve performance.

Types:
1. Instruction-Level Parallelism (ILP): Achieved through pipelining and superscalar execution.

2. Data-Level Parallelism (DLP): Achieved via vector and SIMD processors.

3. Task-Level Parallelism (TLP): Independent tasks run on separate cores.

Techniques to Achieve Parallelism:

- Pipelining

- Superscalar architecture

- Multithreading

- Multi-core processors

- GPUs

Performance Metrics:

- Throughput: Number of instructions executed per unit time.

- Latency: Time taken to complete an instruction.

Impact on Performance:

- Reduces execution time.

- Improves system responsiveness.

- Allows high-speed data processing and computation.
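Task-level parallelism can be illustrated with Python's process pool: a CPU-bound job is split into independent chunks that run on separate cores. The prime-counting task is an arbitrary example workload chosen for the sketch.

```python
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """CPU-bound task: count primes in [lo, hi) by trial division."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Split the range into 4 independent chunks: one task per core.
    chunks = [(i, i + 25_000) for i in range(0, 100_000, 25_000)]
    with ProcessPoolExecutor() as pool:
        total = sum(pool.map(count_primes, chunks))
    print(total)  # 9592 primes below 100,000
```

Because the chunks share no data, this exhibits near-linear speedup on a multi-core machine: throughput rises with the number of cores while the latency of each individual chunk is unchanged.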

14. Explain the Basic Concepts of Pipelining. Derive the Expression for Speedup and Efficiency

Basic Concepts:

Pipelining is a technique of decomposing instruction execution into stages, with each stage processing a different instruction at the same time, so that instruction execution overlaps.

Stages:
- IF (Instruction Fetch)

- ID (Instruction Decode)

- EX (Execute)

- MEM (Memory Access)

- WB (Write Back)

Speedup (S):

S = Time taken for non-pipelined execution / Time taken for pipelined execution

Assuming:

- n = number of instructions

- k = number of pipeline stages

- Each stage takes 1 clock cycle

Then:

Non-pipelined time = n × k

Pipelined time = k + (n − 1)

Speedup S = nk / (k + n − 1)

As n → ∞, S → nk / n = k

Efficiency (E):

E = Speedup / Number of stages = S / k
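Plugging numbers into the expressions above shows how speedup approaches k as n grows. A small sketch (the function names are invented for the example):

```python
def pipeline_speedup(n, k):
    """Speedup S = nk / (k + n - 1) of a k-stage pipeline over
    non-pipelined execution of n instructions."""
    return (n * k) / (k + n - 1)

def pipeline_efficiency(n, k):
    """Efficiency E = S / k."""
    return pipeline_speedup(n, k) / k

# With k = 5 stages, speedup tends to 5 and efficiency tends to 1:
for n in (5, 100, 10_000):
    print(n, round(pipeline_speedup(n, 5), 2), round(pipeline_efficiency(n, 5), 2))
# 5      2.78  0.56
# 100    4.81  0.96
# 10000  5.0   1.0
```

The numbers make the limit concrete: the k − 1 cycles spent filling the pipeline are amortized over more and more instructions, so efficiency only approaches 100% for long instruction streams.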

15. Explain Data Hazards and Resource Hazards in Pipelining

Data Hazards:

These occur when instructions depend on the results of previous instructions.


Types:

1. RAW (Read After Write): An instruction reads a value before a previous instruction has finished writing it.

2. WAR (Write After Read): An instruction writes a value before an earlier instruction has read the old one.

3. WAW (Write After Write): Two instructions write to the same location and must complete in program order.

Solutions:

- Forwarding (bypassing)

- Pipeline stalling (insert NOPs)

Resource Hazards:

Occur when hardware resources are insufficient to support all concurrent operations.

Examples:

- Single ALU being used by multiple stages

- Memory access conflicts

Solutions:

- Duplicate resources

- Pipeline scheduling
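RAW hazard detection between adjacent instructions can be sketched in a few lines. The encoding of instructions as (destination, sources) tuples is invented for this example; a real pipeline does this comparison in hardware between stage registers.

```python
def find_raw_hazards(instrs):
    """instrs: list of (dest, [sources]) tuples. Flags back-to-back RAW
    dependences that would need forwarding or a stall on a simple pipeline."""
    hazards = []
    for i in range(1, len(instrs)):
        prev_dest = instrs[i - 1][0]
        if prev_dest in instrs[i][1]:
            hazards.append((i - 1, i, prev_dest))
    return hazards

program = [
    ("r1", ["r2", "r3"]),   # add r1, r2, r3
    ("r4", ["r1", "r5"]),   # sub r4, r1, r5  <- reads r1 right after it is written
    ("r6", ["r7", "r8"]),   # add r6, r7, r8  (independent, no hazard)
]
print(find_raw_hazards(program))  # [(0, 1, 'r1')]
```

When such a dependence is found, the hardware either forwards the ALU result directly to the dependent instruction or stalls the pipeline (inserts NOPs) until the value is available.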

16. Various Benefits of Parallel Processors. Explain Flynn's Classification Again

Benefits of Parallel Processors:

1. Improved Performance:

- Multiple tasks executed simultaneously.

2. High Throughput:
- More instructions completed per unit time.

3. Energy Efficiency:

- Lower power per task due to workload distribution.

4. Scalability:

- Additional processors improve capacity.

5. Better Utilization of Hardware:

- Reduced idle time of processing units.

6. Supports Complex Applications:

- Ideal for AI, ML, simulation, graphics, etc.

Flynn's Classification (Revisited):

1. SISD: Single instruction on single data. Traditional uniprocessor.

2. SIMD: Single instruction on multiple data. Used in GPUs and array processors.

3. MISD: Multiple instructions on single data. Rare in practice.

4. MIMD: Multiple instructions on multiple data. Used in modern multi-core and cluster systems.

---

End of Comprehensive Answers for All University Exam Questions
