
I am a newbie to MPI and am studying it in a university course. The task was to numerically find the value of the constant e using MPI_Send() and MPI_Recv(). The only suitable approach I found was the series

e = 1/0! + 1/1! + 1/2! + … = Σ (1/k!), summing k from 0 to infinity.

When I run it on 2, 3, and 4 cores I get the wrong number, while on 1 core everything is fine. Here's my code:

#include <iostream>
#include <fstream>
#include <cmath>
#include "mpi.h"

using namespace std;

const int n = 1e04;
double start_time, _time;
int w_size, w_rank, name_len;
char cpu_name[MPI_MAX_PROCESSOR_NAME];
ofstream fout("exp_result", std::ios_base::app | std::ios_base::out);

long double factorial(int num){
    if (num < 1)
        return 1;
    else
        return num * factorial(num - 1);
}

void e_finder(){
    long double sum = 0.0, e = 0.0;
    if(w_rank == 0)
        start_time = MPI_Wtime();

    for(int i = 0; i < n; i+=w_size)
        sum += 1.0 / factorial(i);

    MPI_Send(&sum, 1, MPI_LONG_DOUBLE, 0, 0, MPI_COMM_WORLD);

    if(w_rank == 0){
        // e += sum;
        for (int i = 0; i < w_size; i++){
            MPI_Recv(&sum, 1, MPI_LONG_DOUBLE, i, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            e += sum;
        }
        _time = MPI_Wtime() - start_time;

        cout.precision(29);
        cout << "e = " << e << endl << fixed << "error is " << abs(e - M_E) << endl;
        cout.precision(9);
        cout << "\nwall clock time = " << _time << " sec\n";
        fout << w_size << "\t" << _time << endl;
    }
}

int main(int argc, char const *argv[])
{
    MPI_Init(NULL, NULL);
    MPI_Comm_size(MPI_COMM_WORLD, &w_size);
    MPI_Comm_rank(MPI_COMM_WORLD, &w_rank);
    MPI_Get_processor_name(cpu_name, &name_len);

    cout << "calculations started on cpu:" << w_rank << "!\n";

    MPI_Barrier(MPI_COMM_WORLD);
    e_finder();

    MPI_Finalize();
    fout.close();
    return 0;
}

Can someone help me find and understand the mistake? Here are the outputs:

$ mpirun -np 1 ./exp1
calculations started on cpu:0!
e = 2.718281828459045235428168108
error is 0.00000000000000014463256980957

wall clock time = 4.370553009 sec

$ mpirun -np 2 ./exp1
calculations started on cpu:0!
calculations started on cpu:1!
e = 3.0861612696304875570925407846
error is 0.36787944117144246629694248618

wall clock time = 2.449338411 sec

$ mpirun -np 3 ./exp1
calculations started on cpu:0!
calculations started on cpu:1!
calculations started on cpu:2!
e = 3.5041749401277555767651727958
error is 0.78589311166871048596957449739

wall clock time = 2.011082204 sec

$ mpirun -np 4 ./exp1
calculations started on cpu:0!
calculations started on cpu:3!
calculations started on cpu:1!
calculations started on cpu:2!
e = 4.1667658813667669917037150729
error is 1.44848405290772190090811677443

wall clock time = 1.617427335 sec
3 Comments
  • You are aware that this is a rather wasteful way to evaluate exp(1)? There should only be one single factorial computation during the whole process. Commented Sep 30, 2019 at 0:29
  • @LutzL I understand that it's wasteful to calculate the factorial this way. Would it be better to calculate the factorials up to n and store them in an array? Commented Sep 30, 2019 at 5:39
  • Only slightly. Storing them can be avoided, since each factorial is used exactly twice: in computing the next factorial and in computing one term of the series. You can do both in one loop (see the sketch after these comments). Commented Sep 30, 2019 at 6:31
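
A minimal serial sketch of the idea from the last comment (it ignores the MPI splitting and only shows the running-term trick; it assumes n is defined as in the question):

// Serial sketch: each factorial value is reused to build the next term,
// so there are no repeated factorial() calls and no array is needed.
long double term = 1.0L;   // 1/0! == 1
long double e = term;
for (int i = 1; i < n; ++i){
    term /= i;             // 1/i! = (1/(i-1)!) / i
    e += term;
}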

1 Answer


The issue is with how you are dividing up the work. It seems like you want each process to calculate a portion of the series terms. However, they all start on the first term and then calculate every w_size-th term after it, so the same terms are calculated multiple times (once by every process) and the others are never calculated at all. This should be fixed by changing the line

for(int i = 0; i < n; i+=w_size) 

to

for(int i = w_rank; i < n; i+=w_size) 

This makes each process start on a different term, and since each one still takes every w_size-th term from there, the ranks cover disjoint sets of terms: nothing is computed twice and nothing is skipped.
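
For illustration, a sketch of the corrected loop in the context of e_finder from the question (only the loop changes; with w_size = 3, rank 0 handles i = 0, 3, 6, ..., rank 1 handles i = 1, 4, 7, ..., and rank 2 handles i = 2, 5, 8, ...):

// Each rank sums a disjoint, interleaved subset of the series terms.
long double sum = 0.0;
for (int i = w_rank; i < n; i += w_size)
    sum += 1.0 / factorial(i);
// Rank 0 then gathers the partial sums via MPI_Recv as before.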


1 Comment

That's it! Thanks. But now I'm worried about how my pi constant calculator works with the wrong loop.
