
Circuit Satisfiability Problem with MPI

1. Problem Description

The problem implemented is the Circuit Satisfiability problem, as presented in the
provided slides. This problem involves finding all combinations of 16 boolean inputs
that satisfy a given logical circuit. Since there are 16 inputs, there are 2^16 = 65,536
possible combinations to test. The problem is NP-complete, and the solution method
used is an exhaustive search.

To parallelize this task, the combinations are allocated in a cyclic fashion to multiple
processes. Each process examines its assigned combinations, and if it finds a
satisfiable combination, it prints it. Finally, a collective communication operation
(reduction) is used to sum the total number of satisfiable solutions found across all
processes.

2. Code Explanation

The C code (sat.c) uses the Message Passing Interface (MPI) for parallel execution.
Below is a breakdown of the key components:

EXTRACT_BIT Macro

#define EXTRACT_BIT(n,i) ((n&(1<<i))?1:0)

This macro extracts the i-th bit from an integer n. It is used to convert the integer
representation of a combination into an array of boolean values (v[16]).
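
For illustration, the following small standalone program (not part of sat.c) uses the same macro to unpack the low four bits of an arbitrary value; treat it as a sketch only:

#include <stdio.h>

#define EXTRACT_BIT(n,i) ((n&(1<<i))?1:0)

int main (void) {
   int z = 0x5;                      /* binary 0101: bits 0 and 2 are set */
   int v[4];
   int i;
   for (i = 0; i < 4; i++)
      v[i] = EXTRACT_BIT(z, i);      /* v becomes {1, 0, 1, 0} */
   printf ("%d%d%d%d\n", v[0], v[1], v[2], v[3]);   /* prints 1010 */
   return 0;
}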
check_circuit Function

int check_circuit (int id, int z) {
   int v[16];
   int i;

   for (i = 0; i < 16; i++) v[i] = EXTRACT_BIT(z,i);
   if ((v[0] || v[1]) && (!v[1] || !v[3]) && (v[2] || v[3])
      && (!v[3] || !v[4]) && (v[4] || !v[5])
      && (v[5] || !v[6]) && (v[5] || v[6])
      && (v[6] || !v[15]) && (v[7] || !v[8])
      && (!v[7] || !v[13]) && (v[8] || v[9])
      && (v[8] || !v[9]) && (!v[9] || !v[10])
      && (v[9] || v[11]) && (v[10] || v[11])
      && (v[12] || v[13]) && (v[13] || !v[14])
      && (v[14] || v[15])) {
      printf ("%d) %d%d%d%d%d%d%d%d%d%d%d%d%d%d%d%d\n", id,
         v[0],v[1],v[2],v[3],v[4],v[5],v[6],v[7],v[8],v[9],
         v[10],v[11],v[12],v[13],v[14],v[15]);
      fflush (stdout);
      return 1;
   }
   return 0;
}

This function takes a process id and an integer z (representing a 16-bit combination)
as input. It extracts the bits into an array v and then evaluates the complex boolean
expression defined in the problem. If the expression is true, it prints the combination
and returns 1; otherwise, it returns 0.
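
As a quick check, the first solution in the sample output in Section 4, 1010111110011001, corresponds to z = 0x99F5 (39413) under the bit ordering described above. The small driver below is hypothetical: it is not part of sat.c and must be compiled together with the check_circuit definition above.

/* Hypothetical driver: link against the check_circuit definition above. */
#include <stdio.h>

int check_circuit (int id, int z);   /* defined above in sat.c */

int main (void) {
   /* 0x99F5 sets bits 0,2,4,5,6,7,8,11,12 and 15, i.e. v = 1010111110011001 */
   printf ("result: %d\n", check_circuit (0, 0x99F5));   /* expected: 1 */
   return 0;
}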
main Function

int main (int argc, char *argv[]) {
   int i;
   int id;           /* Process rank */
   int p;            /* Number of processes */
   int count;        /* Local sum */
   int global_count; /* Global sum */

   MPI_Init (&argc, &argv);
   MPI_Comm_rank (MPI_COMM_WORLD, &id);
   MPI_Comm_size (MPI_COMM_WORLD, &p);

   count = 0;
   for (i = id; i < 65536; i += p)
      count += check_circuit (id, i);

   MPI_Reduce (&count, &global_count, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

   if (!id) {
      printf ("There are %d different solutions\n", global_count);
   }

   printf ("Process %d is done\n", id);
   fflush (stdout);

   MPI_Finalize();
   return 0;
}

MPI Initialization: MPI_Init initializes the MPI environment. MPI_Comm_rank
gets the unique ID (rank) of the current process, and MPI_Comm_size gets the
total number of processes (p) in the communicator MPI_COMM_WORLD.

Work Distribution: The loop for (i = id; i < 65536; i += p) implements the
cyclic allocation of work. Each process starts checking combinations from its
own id and then advances by p to reach its next combination. The check_circuit
function is called for each combination, and its return value (1 if satisfiable,
0 otherwise) is added to the local sum count, as illustrated in the sketch below.
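
The following standalone program (separate from sat.c) simulates this allocation for a hypothetical p = 3 and prints the first few combination indices each rank would test:

#include <stdio.h>

int main (void) {
   int p = 3;                          /* example number of processes */
   int id, i;
   for (id = 0; id < p; id++) {        /* simulate each rank's loop */
      printf ("rank %d tests:", id);
      for (i = id; i < 16; i += p)     /* only the first few of the 65,536 values */
         printf (" %d", i);
      printf (" ...\n");
   }
   return 0;
}

With p = 3, rank 0 tests 0, 3, 6, ..., rank 1 tests 1, 4, 7, ..., and rank 2 tests 2, 5, 8, ..., which is consistent with the nine solutions splitting three per rank in the 3-CPU run shown in Section 4.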

MPI_Reduce: MPI_Reduce is a collective communication function. It sums the
count from all processes and stores the total sum in global_count on process 0
(the root process). The arguments specify:

&count : Address of the local variable to be reduced.
&global_count : Address of the variable to store the result (only relevant for the root process).
1 : Number of elements to reduce.
MPI_INT : Data type of the elements.
MPI_SUM : Reduction operation (summation).
0 : Rank of the root process (process 0).
MPI_COMM_WORLD : The communicator.
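
To see these arguments in isolation, here is a minimal, self-contained sketch (separate from sat.c) in which each rank contributes its own rank number, so rank 0 receives the sum 0 + 1 + ... + (p - 1):

#include <stdio.h>
#include <mpi.h>

int main (int argc, char *argv[]) {
   int id, p, local, total;

   MPI_Init (&argc, &argv);
   MPI_Comm_rank (MPI_COMM_WORLD, &id);
   MPI_Comm_size (MPI_COMM_WORLD, &p);

   local = id;                          /* each rank's local contribution */
   MPI_Reduce (&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

   if (!id)                             /* only the root holds the result */
      printf ("sum of ranks 0..%d = %d\n", p - 1, total);

   MPI_Finalize ();
   return 0;
}

Compiled the same way as sat.c and run with, for example, mpirun -np 4 ./reduce_demo (the executable name is arbitrary), rank 0 should print 6.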

Result Output: Only process 0 prints global_count, which represents the
total number of satisfiable solutions.

MPI Finalization: MPI_Finalize cleans up the MPI environment.

3. How to Compile

To compile the sat.c file, you need an MPI compiler, such as mpicc. If you don't have
it installed, you can install it on a Debian/Ubuntu-based system using:

sudo apt-get update
sudo apt-get install -y mpich

Once mpicc is available, navigate to the directory where sat.c is saved and compile
it using the following command:

mpicc -O -o sat sat.c

-O : Optimizes the compiled code.

-o sat : Specifies the output executable file name as sat .

4. How to Run

To run the compiled MPI program, use the mpirun command. You can specify the
number of processes using the -np flag.

Running with 1 CPU

mpirun -np 1 ./sat

Output:
0) 1010111110011001
0) 0110111110011001
0) 1110111110011001
0) 1010111111011001
0) 0110111111011001
0) 1110111111011001
0) 1010111110111001
0) 0110111110111001
0) 1110111110111001
There are 9 different solutions
Process 0 is done

Running with 2 CPUs

mpirun -np 2 ./sat

Output:

0) 0110111110011001
0) 0110111111011001
0) 0110111110111001
1) 1010111110011001
1) 1110111110011001
1) 1010111111011001
1) 1110111111011001
1) 1010111110111001
1) 1110111110111001
There are 9 different solutions
Process 0 is done
Process 1 is done

Running with 3 CPUs

mpirun -np 3 ./sat

Output:
0) 0110111110011001
0) 1110111111011001
0) 1010111110111001
1) 1110111110011001
1) 1010111111011001
1) 0110111110111001
Process 1 is done
2) 1010111110011001
2) 0110111111011001
2) 1110111110111001
There are 9 different solutions
Process 0 is done
Process 2 is done

As you can see from the outputs, regardless of the number of processes used, the
program correctly identifies that there are 9 different solutions to the circuit
satisfiability problem. The output order of individual solutions may vary between runs
due to the nature of parallel execution, but the final count remains consistent. Each
process also prints a "Process N is done" message when it finishes its share of the work.
