Urban48

Here is a great article on how to build things in parallel.

It uses multiprocessing.dummy, which exposes the multiprocessing.Pool API but runs the tasks in different threads instead of processes.

Here is a small example:

from urllib2 import urlopen
from multiprocessing.dummy import Pool

urls = [url_a, url_b, url_c]

pool = Pool()
res = pool.map(urlopen, urls)  # blocks until every URL has been fetched
pool.close()
pool.join()
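The same thread-pool pattern works unchanged on Python 3 (where urllib2 became urllib.request). A minimal self-contained sketch — fetch and the URL strings are placeholders, not from the article:

```python
from multiprocessing.dummy import Pool  # Pool API, backed by threads

def fetch(url):
    # Placeholder for real I/O, e.g. urllib.request.urlopen(url).read()
    return len(url)

urls = ["http://example.com/a", "http://example.com/b", "http://example.com/c"]

# On Python 3 the pool is a context manager, so close()/join() are implicit.
with Pool(4) as pool:
    res = pool.map(fetch, urls)  # blocks until all tasks finish

print(res)  # → [20, 20, 20]
```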

For Python >= 3.2, concurrent.futures is in the standard library, and I suggest using it instead.

example:

import concurrent.futures
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

def load_url(url, timeout):
    return urllib.request.urlopen(url, timeout=timeout).read()

# submit() schedules each call on the pool and returns a Future for it.
with concurrent.futures.ThreadPoolExecutor(max_workers=50) as executor:
    future_list = [executor.submit(load_url, url, 30) for url in URLS]

example taken from: here
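To actually read results back from the futures, concurrent.futures.as_completed yields each future as soon as it finishes. A sketch using a local function (square is a made-up stand-in for a real network call):

```python
import concurrent.futures

def square(n):
    return n * n

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    # Map each future back to its input so results can be identified.
    future_to_n = {executor.submit(square, n): n for n in range(5)}
    results = {}
    for future in concurrent.futures.as_completed(future_to_n):
        n = future_to_n[future]
        results[n] = future.result()  # result() re-raises if the task failed

print(results)  # e.g. {0: 0, 1: 1, 2: 4, 3: 9, 4: 16}
```

future.result() inside the as_completed loop is also where per-task exceptions surface, so a try/except there lets one failed URL not sink the whole batch.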
