
I have tried every type of multiprocessing Pool call, but the program still uses only a single core (8 are available). Any help will be appreciated; thanks in advance.

    import multiprocessing
    from multiprocessing import Pool
    from fpdf import FPDF  # FPDF was used but never imported

    class pdf_gen():
        def __init__(self):
            pdf = self.pdf = FPDF()
            pdf.set_auto_page_break(True, 0.1)

        def get_data_from_mysql(self):
            pdf = self.pdf
            # connection is established and result is stored in 'res'.
            dup = []
            dup.insert(0, res)
            z = tuple(dup)
            pool = multiprocessing.Pool(multiprocessing.cpu_count())
            pool.apply_async(self.mysql_to_pdf_data, z)
            pool.close()
            pool.join()

        def mysql_to_pdf_data(self, *result):
            try:
                pdf = self.pdf
                # Entered data need to be in pdf
            finally:
                pdf.output('save_pdf.pdf', 'F')

It currently takes approximately 20 minutes to generate the PDF, but I need it to take at most 4 minutes.

I have updated my code.

    import multiprocessing
    from multiprocessing import Pool
    from fpdf import FPDF  # FPDF was used but never imported

    class pdf_gen():
        def __init__(self):
            pdf = self.pdf = FPDF()
            pdf.set_auto_page_break(True, 0.1)

        def get_data_from_mysql(self):
            pdf = self.pdf
            # connection is established and result is stored in 'res'.
            dup = []
            dup.insert(0, res)
            z = tuple(dup)
            return z

        def mysql_to_pdf_data(self, *result):
            try:
                pdf = self.pdf
                # Entered data need to be in pdf
            finally:
                pdf.output('save_pdf.pdf', 'F')

    def main():
        pdf = pdf_gen()
        recover_data = pdf.get_data_from_mysql()
        pool = multiprocessing.Pool(multiprocessing.cpu_count())
        pool.map(pdf.mysql_to_pdf_data, recover_data)

I have attached the CPU utilization graph; it shows only one core in use at any given time.

1 Answer

apply_async calls one function with all the arguments (z) and returns a result object that can be used later to wait for the function's result.
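To illustrate, here is a minimal sketch (with a hypothetical `square` worker) showing that `apply_async` schedules exactly one call and hands back an `AsyncResult`:

```python
import multiprocessing as mp

def square(x):
    return x * x

if __name__ == '__main__':
    with mp.Pool() as pool:
        # apply_async schedules a *single* call and returns an AsyncResult;
        # .get() blocks until that one call finishes.
        result = pool.apply_async(square, (4,))
        print(result.get())
```

Because only one call is submitted, only one worker process ever has anything to do.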

I think you want map or starmap, which take either an iterable of single arguments or an iterable of argument tuples, respectively, and call the function with each set of arguments in parallel.
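For the starmap case, a minimal sketch (the `render` function and its arguments are hypothetical stand-ins) looks like this:

```python
import multiprocessing as mp

def render(name, pages):
    # Hypothetical worker: pretend to render `pages` pages into `name`.
    return f'{name}: {pages} pages'

if __name__ == '__main__':
    jobs = [('a.pdf', 3), ('b.pdf', 5), ('c.pdf', 2)]
    with mp.Pool() as pool:
        # starmap unpacks each tuple into the function's arguments,
        # so render('a.pdf', 3), render('b.pdf', 5), ... run in parallel
        results = pool.starmap(render, jobs)
    print(results)
```

Each tuple in the iterable becomes one parallel task, so the pool has as many jobs to spread across cores as there are tuples.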

Example of map. Note that it takes 5 seconds to process the first 8 items on my 8-core system, and 5 more seconds to process the remaining two:

    import multiprocessing as mp
    import time

    def process(pdf):
        name = mp.current_process().name
        print(name, 'got', pdf)
        s = time.time()
        while time.time() - s < 5:  # Spin wait to max CPU utilization
            pass
        return pdf + ' processed'

    if __name__ == '__main__':
        args = [f'pdf{x}' for x in range(1, 11)]
        pool = mp.Pool()
        start = time.time()
        print(pool.map(process, args))
        pool.close()
        pool.join()
        print('took', time.time() - start, 'secs')

Output:

    SpawnPoolWorker-2 got pdf1
    SpawnPoolWorker-1 got pdf2
    SpawnPoolWorker-3 got pdf3
    SpawnPoolWorker-5 got pdf4
    SpawnPoolWorker-6 got pdf5
    SpawnPoolWorker-4 got pdf6
    SpawnPoolWorker-7 got pdf7
    SpawnPoolWorker-8 got pdf8
    SpawnPoolWorker-1 got pdf9
    SpawnPoolWorker-2 got pdf10
    ['pdf1 processed', 'pdf2 processed', 'pdf3 processed', 'pdf4 processed', 'pdf5 processed', 'pdf6 processed', 'pdf7 processed', 'pdf8 processed', 'pdf9 processed', 'pdf10 processed']
    took 10.375263214111328 secs

When running the above code, you can see that all the CPUs max out during execution:

CPU Utilization Graph


5 Comments

I have used map and starmap, but they also use only one core. I am having a hard time finding a solution to this problem.
@PuneetSingh Did you try the code I posted above, and it only uses one core? I've updated the answer to show that it clearly uses all of them. Update your question with a similar example if it isn't working for you.
I have run your code and it works perfectly, as you said. Can you make it work for my project too?
I have updated my code in the section above. Can you take a look and let me know what the problem is?
@PuneetSingh It looks like your code puts only one item in a list with get_data_from_mysql, so that is only one job and will use only one core. Put a bunch of items in the list like my code does and it should use more cores.
