abarnert

Like urllib2, requests is blocking.

It does have some async functionality, but that's not what I'd use here. And I wouldn't suggest using another library, either.

The simplest answer is to run each request in a separate thread. Unless you have hundreds of them, this should be fine. (How many hundreds is too many depends on your platform. On Windows, the limit is probably how much memory you have for thread stacks; on most other platforms the cutoff comes earlier.)
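A minimal sketch of the thread-per-request approach (the worker function and names here are illustrative, not from any particular library):

```python
import threading

import requests

def fetch_status(url, results):
    # The blocking requests call only stalls this one worker thread.
    results[url] = requests.get(url, timeout=10).status_code

def fetch_in_threads(urls, worker=fetch_status):
    # One thread per URL; fine up to a few hundred threads.
    results = {}
    threads = [threading.Thread(target=worker, args=(u, results))
               for u in urls]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # wait for every request to finish
    return results
```

Calling `fetch_in_threads(["https://example.com", ...])` returns a dict mapping each URL to its HTTP status code once all the threads have joined.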

If you do have hundreds, you can put them in a thread pool. The ThreadPoolExecutor example in the concurrent.futures docs is almost exactly what you need; just change the urllib calls to requests calls. (If you're on 2.x, use futures, the backport of the same package on PyPI.) The downside is that you don't actually kick off all 1000 requests at once, just the first, say, 8.
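Adapted from that stdlib example, with requests swapped in for urllib (the pool size and function names are illustrative):

```python
import concurrent.futures

import requests

def fetch(url):
    # Still a blocking call, but it runs inside a pool worker thread.
    resp = requests.get(url, timeout=10)
    return url, resp.status_code

def fetch_all(urls, max_workers=8, fetch=fetch):
    # At most max_workers requests are in flight at any moment;
    # the rest queue up behind them. Results come back in input order.
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch, urls))
```

`pool.map` preserves the order of `urls`, so `fetch_all` returns `(url, status)` pairs in the same order you passed them in, even though the requests complete out of order.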

If you have hundreds, and they all need to be in parallel, this sounds like a job for gevent. Have it monkeypatch everything, then write the exact same code you'd write with threads, but spawning greenlets instead of Threads.
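A sketch of that pattern, assuming gevent is installed; the monkeypatch has to run before anything else imports socket (which is why it's the very first thing in the file):

```python
from gevent import monkey
monkey.patch_all()  # must run before requests (and socket) are imported

import gevent
import requests

def fetch(url):
    # Looks blocking, but the patched socket yields to other greenlets
    # whenever this one is waiting on the network.
    return url, requests.get(url, timeout=10).status_code

def fetch_all(urls, fetch=fetch):
    # Same shape as the thread code, but gevent.spawn instead of Thread:
    # all the requests really are in flight at once.
    jobs = [gevent.spawn(fetch, u) for u in urls]
    gevent.joinall(jobs)
    return [job.value for job in jobs]
```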

grequests, which evolved out of the old async support directly in requests, effectively does the gevent + requests wrapping for you. And for the simplest cases, it's great. But for anything non-trivial, I find it easier to read explicit gevent code. Your mileage may vary.

Of course if you need to do something really fancy, you probably want to go to twisted, tornado, or tulip (or wait a few months for tulip to be part of the stdlib).
