
I have this serial code that I'm trying to convert to parallel using MPI. However, I can't seem to get the MPI_Scatter() call to work correctly without crashing. The function loops over an array called cells and modifies some of its values.

Below is the original serial code:

int accelerate_flow(const t_param params, t_speed* cells, int* obstacles)
{
  register int ii,jj;    /* generic counters */
  register float w1,w2;  /* weighting factors */
  int i;

  /* compute weighting factors */
  w1 = params.density * params.accel * oneover9;
  w2 = params.density * params.accel * oneover36;

  /* modify the first column of the grid */
  jj = 0;
  for (ii = 0; ii < params.ny; ii++) {
    if (!obstacles[ii*params.nx] &&
        (cells[ii*params.nx].speeds[3] > w1 &&
         cells[ii*params.nx].speeds[6] > w2 &&
         cells[ii*params.nx].speeds[7] > w2)) {
      /* increase 'east-side' densities */
      cells[ii*params.nx].speeds[1] += w1;
      cells[ii*params.nx].speeds[5] += w2;
      cells[ii*params.nx].speeds[8] += w2;
      /* decrease 'west-side' densities */
      cells[ii*params.nx].speeds[3] -= w1;
      cells[ii*params.nx].speeds[6] -= w2;
      cells[ii*params.nx].speeds[7] -= w2;
    }
  }

  return EXIT_SUCCESS;
}

And here is my attempt at using MPI:

int accelerate_flow(const t_param params, t_speed* cells, int* obstacles, int myrank, int ntasks)
{
  register int ii,jj = 0;   /* generic counters */
  register float w1,w2;     /* weighting factors */
  int recvSize;
  int cellsSendTag = 123, cellsRecvTag = 321;
  int size = params.ny / ntasks, i;
  MPI_Request* cellsSend, *cellsRecieve;
  MPI_Status *status;

  /* compute weighting factors */
  w1 = params.density * params.accel * oneover9;
  w2 = params.density * params.accel * oneover36;

  t_speed* recvCells = (t_speed*)malloc(size*sizeof(t_speed)*params.nx);

  MPI_Scatter(cells, sizeof(t_speed)*params.nx*params.ny, MPI_BYTE,
              recvCells, size*sizeof(t_speed)*params.nx, MPI_BYTE,
              0, MPI_COMM_WORLD);

  for (ii = 0; ii < size; ii++) {
    if (!obstacles[ii*params.nx] &&
        (recvCells[ii*params.nx].speeds[3] > w1 &&
         recvCells[ii*params.nx].speeds[6] > w2 &&
         recvCells[ii*params.nx].speeds[7] > w2)) {
      /* increase 'east-side' densities */
      recvCells[ii*params.nx].speeds[1] += w1;
      recvCells[ii*params.nx].speeds[5] += w2;
      recvCells[ii*params.nx].speeds[8] += w2;
      /* decrease 'west-side' densities */
      recvCells[ii*params.nx].speeds[3] -= w1;
      recvCells[ii*params.nx].speeds[6] -= w2;
      recvCells[ii*params.nx].speeds[7] -= w2;
    }
  }

  MPI_Gather(recvCells, size*sizeof(t_speed)*params.nx, MPI_BYTE,
             cells, params.ny*sizeof(t_speed)*params.nx, MPI_BYTE,
             0, MPI_COMM_WORLD);

  return EXIT_SUCCESS;
}

And here is the t_speed structure:

typedef struct { float speeds[NSPEEDS]; } t_speed; 

params.nx = 300, params.ny = 200

Would greatly appreciate any help. Thanks.


1 Answer


The first count argument to MPI_Scatter is the number of elements to send to each process, not the total. Here the send count and the receive count will be the same, namely nx*ny/ntasks, so you'd have something like

int count = params.nx * params.ny / ntasks;

MPI_Scatter(cells, sizeof(t_speed)*count, MPI_BYTE,
            recvCells, sizeof(t_speed)*count, MPI_BYTE,
            0, MPI_COMM_WORLD);
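The MPI_Gather at the end has the same issue: its receive count is also per process, so the root receives count elements from each rank rather than the whole grid at once. A minimal sketch of the matching gather, reusing the same count variable:

/* sketch: the gather mirrors the scatter, with per-rank counts on both sides */
MPI_Gather(recvCells, sizeof(t_speed)*count, MPI_BYTE,
           cells,     sizeof(t_speed)*count, MPI_BYTE,
           0, MPI_COMM_WORLD);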

Note that this will only work when ntasks evenly divides nx*ny; otherwise you'll have to use MPI_Scatterv, as sketched below.
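For the uneven case, here is a minimal sketch of the MPI_Scatterv variant, assuming the same row-based decomposition and the usual mpi.h/stdlib.h includes; the sendcounts, displs, and myrows names are just illustrative:

/* build per-rank byte counts and displacements; the first ny % ntasks
   ranks each take one extra row */
int *sendcounts = malloc(ntasks * sizeof(int));
int *displs     = malloc(ntasks * sizeof(int));
int rowbytes = params.nx * sizeof(t_speed);   /* bytes per grid row    */
int base     = params.ny / ntasks;            /* rows every rank gets  */
int extra    = params.ny % ntasks;            /* leftover rows         */
int offset   = 0, r;

for (r = 0; r < ntasks; r++) {
    int rows = base + (r < extra ? 1 : 0);
    sendcounts[r] = rows * rowbytes;          /* count for rank r, in bytes */
    displs[r]     = offset;                   /* byte offset into cells     */
    offset       += rows * rowbytes;
}

int myrows = base + (myrank < extra ? 1 : 0);
t_speed *recvCells = malloc(myrows * params.nx * sizeof(t_speed));

MPI_Scatterv(cells, sendcounts, displs, MPI_BYTE,
             recvCells, myrows * rowbytes, MPI_BYTE,
             0, MPI_COMM_WORLD);

The corresponding MPI_Gatherv call would reuse the same sendcounts and displs arrays as its receive counts and displacements on the root.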


1 Comment

Thank you very much, it was just a simple size error with both the scatter and gather.
