
A PROJECT REPORT ON

“Parallel Matrix Multiplication Using OpenMP”

SUBMITTED BY

[Your Full Name] [Your Roll Number]

DEPARTMENT OF COMPUTER ENGINEERING


NBN SINHGAD SCHOOL OF ENGINEERING, PUNE-41
SAVITRIBAI PHULE PUNE UNIVERSITY
2024-2025
CERTIFICATE
This is to certify that the project report entitled

“Parallel Matrix Multiplication Using OpenMP”

is the bona fide work of [Your Name], submitted to the Department of Computer
Engineering of NBN Sinhgad School of Engineering, Pune, in partial fulfillment of the
requirements for the degree of Bachelor of Engineering under Savitribai Phule Pune
University during the academic year 2024–2025.

(Dr. S. P. Bendale) (Dr. S. P. Bendale)


Subject In-charge Head of Department
ACKNOWLEDGEMENT
I would like to express my heartfelt gratitude to our Head of Department, Dr. S. P. Bendale,
and my project guide for their support and encouragement.

I am also thankful to the faculty and staff of the Computer Engineering department for their
help. This project gave me valuable exposure to parallel computing concepts and real-world
applications of High-Performance Computing.
INDEX
Sr. No. Title of Chapter Page No.

01 Abstract 01

02 Introduction 02

03 Methodology 03

04 Snapshots 04

05 System Requirements 05

06 Conclusion 06
ABSTRACT
High-Performance Computing (HPC) involves solving complex problems using parallel or
distributed computing. This project focuses on implementing Parallel Matrix Multiplication
using OpenMP, an API for shared-memory parallel programming in C/C++.

The goal is to demonstrate how parallelization can significantly reduce computation time
for matrix operations compared to a sequential approach. By distributing the workload
among multiple CPU cores, OpenMP efficiently speeds up matrix calculations, which are
fundamental to scientific computing and simulations.

This project highlights the practical application of parallel computing concepts and the
performance benefits of using HPC techniques in real-world scenarios.
INTRODUCTION
In many engineering and scientific problems, matrix operations such as multiplication are
computationally expensive. As the size of matrices increases, so does the time required for
computation.

High-Performance Computing (HPC) provides a solution by executing tasks concurrently
using multiple cores or processors. OpenMP is a widely used API that enables parallel
programming in C/C++ and Fortran on shared-memory systems.

This project explores how to use OpenMP to parallelize matrix multiplication, measure the
performance improvements, and understand the trade-offs involved.
METHODOLOGY
The implementation follows these steps:

1. Matrix Initialization: Two matrices (A and B) of user-defined size are initialized with
random values.

2. Sequential Multiplication: A baseline version multiplies the matrices using nested loops,
without parallelization.

3. Parallel Multiplication with OpenMP:
- The outer loop is parallelized using #pragma omp parallel for.
- Each thread computes part of the result matrix independently.

4. Performance Comparison: Execution time is measured for both methods using
omp_get_wtime().

5. Analysis: The speedup is calculated, and results are displayed for different matrix sizes
and thread counts.
SNAPSHOTS
Add screenshots of the output or code execution here if available.
SYSTEM REQUIREMENTS
- Language: C/C++
- Compiler: GCC with OpenMP support (e.g., gcc -fopenmp)
- OS: Linux/Ubuntu or Windows with MinGW
- RAM: 4GB or higher
- Dependencies: OpenMP-enabled compiler and runtime
CONCLUSION
This project successfully demonstrated the power of High-Performance Computing through
OpenMP-based parallel matrix multiplication.

Compared to the sequential version, the parallel version achieved significant speedup,
especially for large matrix sizes and higher thread counts. It also helped in understanding
parallel computing concepts, thread management, and performance tuning.

In future work, the code can be extended to support dynamic scheduling, GPU acceleration
using CUDA, or implementation on distributed systems using MPI.
