Like `urllib2`, `requests` is blocking.

It does have some async functionality, but that's not what I'd use here. And I wouldn't suggest using another library, either.

The simplest answer is to run each request in a separate thread. Unless you have hundreds of them, this should be fine. (How many hundreds is too many depends on your platform. On Windows, the limit is probably how much memory you have for thread stacks; on most other platforms the cutoff comes earlier.)
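A minimal sketch of the one-thread-per-request approach. The `urls` list and the `fetch` function are illustrative; a stand-in string takes the place of the actual `requests.get` call (shown in the comment) so the structure is clear:

```python
import threading

results = {}

def fetch(url):
    # In real code: results[url] = requests.get(url).text
    results[url] = "response for %s" % url  # stand-in for the blocking call

urls = ["http://example.com/%d" % i for i in range(10)]
threads = [threading.Thread(target=fetch, args=(url,)) for url in urls]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for every request to finish
```

Each thread blocks independently in `requests`, so all 10 requests are in flight at once.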

If you _do_ have hundreds, you can put them in a thread pool. The [`ThreadPoolExecutor` example][1] in the `concurrent.futures` docs is almost exactly what you need; just change the `urllib` calls to `requests` calls. (If you're on 2.x, use [`futures`][2], the backport of the same package on PyPI.) The downside is that you don't actually kick off all 1000 requests at once, just the first, say, 8; the rest sit in the pool's queue until a worker frees up.
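Here's that docs example adapted to the shape you'd use with `requests` — a sketch, with a stand-in function in place of the real `requests.get(url, timeout=timeout)` call so the pool structure itself is the focus:

```python
import concurrent.futures

def load_url(url, timeout):
    # With requests: return requests.get(url, timeout=timeout).text
    return "body of %s" % url  # stand-in for the blocking request

urls = ["http://example.com/%d" % i for i in range(20)]
pages = {}

# At most 8 requests run at a time; the rest queue up behind them.
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as executor:
    future_to_url = {executor.submit(load_url, url, 60): url for url in urls}
    for future in concurrent.futures.as_completed(future_to_url):
        pages[future_to_url[future]] = future.result()
```

`as_completed` hands you each result as soon as its request finishes, in whatever order they complete.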

If you have hundreds, and they all need to be in parallel, this sounds like a job for [`gevent`][3]. Have it monkeypatch everything, then write the exact same code you'd write with threads, but spawning `greenlet`s instead of `Thread`s.
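A sketch of that, assuming `gevent` is installed (`pip install gevent`). `monkey.patch_all()` must run before anything imports `socket` (including `requests`), which is why it comes first; after that, the code looks just like the threaded version, with a stand-in in place of the real `requests.get`:

```python
from gevent import monkey
monkey.patch_all()  # must happen before importing requests/socket users

import gevent

results = {}

def fetch(url):
    # In real code: results[url] = requests.get(url).text
    results[url] = "response for %s" % url  # stand-in for the blocking call

urls = ["http://example.com/%d" % i for i in range(1000)]
jobs = [gevent.spawn(fetch, url) for url in urls]
gevent.joinall(jobs)  # wait for all greenlets
```

Because greenlets are just cooperatively-scheduled coroutines, spawning 1000 of them is cheap in a way that 1000 OS threads is not.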

 [1]: http://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor-example
 [2]: http://pypi.python.org/pypi/futures
 [3]: http://pypi.python.org/pypi/gevent