I am trying to write Python code that downloads web pages using separate threads. Here is an example of my code:
```python
import urllib2
from threading import Thread
import time

URLs = ['http://www.yahoo.com/',
        'http://www.time.com/',
        'http://www.cnn.com/',
        'http://www.slashdot.org/']

def thread_func(arg):
    t = time.time()
    page = urllib2.urlopen(arg)
    page = page.read()
    print time.time() - t

for url in URLs:
    t = Thread(target=thread_func, args=(url,))
    t.start()
    t.join()
```

When I run this, the threads seem to execute serially, if I'm not mistaken: each download time is printed to the console only after the previous download has finished. Am I coding this correctly?
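I suspect the `t.join()` inside the loop is what serializes things, since it makes the main thread wait for each thread before starting the next one. Would the right pattern be to start all the threads first and only join them afterwards? Here is a minimal sketch of what I mean, with `time.sleep` standing in for the download so the timing is easy to check (this is my guess, not code from my actual program):

```python
from threading import Thread
import time

def worker(delay):
    # placeholder for the real download work
    time.sleep(delay)

start = time.time()

# start every thread before joining any of them
threads = [Thread(target=worker, args=(1,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

elapsed = time.time() - start
print(elapsed)  # roughly 1 second if the threads really overlap, not 4
```

If I understand correctly, the original version would take about 4 seconds here, because each `join()` blocks until that one thread finishes.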