I'm trying to parallelize the recursion in Karatsuba's polynomial multiplication, but it's slower than the version without threading. What is the problem? I have this code:

    int karatsubaMain(int size) {
        Polynom pol1(size), pol2(size);
        omp_set_num_threads(8);
        double start = omp_get_wtime();
        int* result = mult(pol1.polynom, pol2.polynom, 0, pol1.size);
        double end = omp_get_wtime();
        printf("%f ", end - start);
        return 0;
    }

    int* mult(int* a, int* b, int start, int N) {
        int* c = new int[2 * N - 1];
        if (N == 1) {
            c[0] = a[start] * b[start];
            return c;
        }
        int* t1 = new int[N / 2];
        int* t2 = new int[N / 2];
        int *cM, *cL, *cH;
        for (int i = 0; i < N / 2; i++) {
            t1[i] = a[start + i] + a[start + i + N / 2];
            t2[i] = b[start + i] + b[start + i + N / 2];
        }
        #pragma omp parallel shared(cM, cL, cH)
        {
            #pragma omp single nowait
            {
                #pragma omp task if(N > 4096)
                cM = mult(t1, t2, 0, N / 2);
                #pragma omp task if(N > 4096)
                cL = mult(a, b, 0, N / 2);
                #pragma omp task if(N > 4096)
                cH = mult(a, b, N / 2, N / 2);
            }
            #pragma omp taskwait
        }
        c[N - 1] = 0;
        for (int i = 0; i < N - 1; i++) {
            c[i] = cL[i];
            c[N + i] = cH[i];
            c[N / 2 + i] += (cM[i] - (cL[i] + cH[i]));
        }
        delete[] t1;
        delete[] t2;
        delete[] cM;
        delete[] cL;
        delete[] cH;
        return c;
    }

1 Answer


First, let me show you what your code actually does, so that you understand the changes better.

At each recursion step you do this:

    #pragma omp parallel shared(cM, cL, cH) // open a new parallel region (i.e. create threads)
    {
        #pragma omp single nowait           // only one thread should do the following
        {
            #pragma omp task if(N > 4096)   // create task
            cM = mult(t1, t2, 0, N / 2);
            #pragma omp task if(N > 4096)   // create task
            cL = mult(a, b, 0, N / 2);
            #pragma omp task if(N > 4096)   // create task
            cH = mult(a, b, N / 2, N / 2);
        }                                   // after this line all threads work on the same tasks
        #pragma omp taskwait                // the tasks must be finished before executing further
    }                                       // close all threads created by this parallel region

What you intended to do:

Create some threads once; create the root of the recursion once; make every recursive call a task; let all threads work on tasks; when all child tasks of a call are finished, calculate its result and take a new task.

So in karatsubaMain() you should create the threads once, and a single thread then inserts the root task:

    double start = omp_get_wtime();
    int* result;
    #pragma omp parallel shared(result, a, b, size)
    {
        #pragma omp single // #pragma omp master is also usable here
        result = mult(a, b, 0, size);
    }
    double end = omp_get_wtime();

In mult() you should only create tasks, since this region is already being processed by different threads in parallel:

    for (int i = 0; i < N / 2; i++) {
        t1[i] = a[start + i] + a[start + i + N / 2];
        t2[i] = b[start + i] + b[start + i + N / 2];
    }
    #pragma omp task shared(cM) if(N > 4096)
    cM = mult(t1, t2, 0, N / 2);
    #pragma omp task shared(cL) if(N > 4096)
    cL = mult(a, b, 0, N / 2);
    #pragma omp task shared(cH) if(N > 4096)
    cH = mult(a, b, N / 2, N / 2);
    #pragma omp taskwait
    c[N - 1] = 0;

In that way I was able to speed up a simplified version of your code (int arrays instead of the Polynom class) by about 15% with respect to the sequential code.

General remark: most of the time it is not advisable to use nested parallel regions.


1 Comment

On a 12-core cluster this gives circa a 50% speedup. Thank you.
