Try concurrent.futures.ThreadPoolExecutor.map from the Python standard library (new in version 3.2).

It is similar to the built-in map(func, *iterables), except that:

  • the iterables are collected immediately rather than lazily;
  • func is executed asynchronously, and several calls to func may be made concurrently (a minimal sketch of this behavior follows the list).
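
As a minimal sketch of that second point (not part of the original answer; slow_double is a made-up helper), the calls run concurrently, yet executor.map still yields results in input order:

import concurrent.futures
import time

def slow_double(x):
    # Later inputs sleep less, so they finish first...
    time.sleep(0.1 * (5 - x))
    return x * 2

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    # ...but map still returns results in input order: [2, 4, 6, 8]
    print(list(executor.map(slow_double, [1, 2, 3, 4])))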

A simple example (modified from ThreadPoolExecutor Example):

import concurrent.futures
import urllib.request

URLS = [
    'http://www.foxnews.com/',
    'http://www.cnn.com/',
    'http://europe.wsj.com/',
    'http://www.bbc.co.uk/',
]

# Retrieve a single page and return its contents.
def load_url(url, timeout):
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        try:
            data = conn.read()
        except Exception:
            # You may need a better error handler.
            return b''
        else:
            return data

# A with statement ensures threads are cleaned up promptly.
with concurrent.futures.ThreadPoolExecutor(max_workers=20) as executor:
    pages = list(executor.map(lambda url: load_url(url, 60), URLS))

print('Done.')
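
Note that executor.map defers exceptions: if a call to func raises, the exception is re-raised only when that call's result is retrieved from the returned iterator, which is why the example above returns b'' rather than letting errors propagate. If you want per-URL error reporting instead of swallowing exceptions, the same docs page pairs submit with as_completed. A sketch of that variant, assuming the URLS list and imports from the example above (fetch is just an illustrative wrapper):

def fetch(url, timeout=60):
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()

with concurrent.futures.ThreadPoolExecutor(max_workers=20) as executor:
    # Map each Future back to its URL so failures can be reported per page.
    futures = {executor.submit(fetch, url): url for url in URLS}
    for future in concurrent.futures.as_completed(futures):
        url = futures[future]
        try:
            print('%s: %d bytes' % (url, len(future.result())))
        except Exception as exc:
            print('%s raised %r' % (url, exc))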