I am currently working on my own "retry" function, where I would like to keep retrying until the request succeeds. In some scenarios, if I hit any 5xx status, I should retry with a long delay.
If I hit a specific status code, e.g. 200 or 404, it should not raise for the status; otherwise it should raise.
So I have done something like this:
import time

import requests
from bs4 import BeautifulSoup
from requests import (
    RequestException,
    Timeout
)


def do_request():
    try:
        # There are some scenarios where I would use my own proxies by doing
        # requests.get("https://www.bbc.com/", timeout=0.1, proxies={'https': 'xxx.xxxx.xxx.xx'})
        while (response := requests.get("https://www.bbc.com/", timeout=0.1)).status_code >= 500:
            print("sleeping")
            time.sleep(20)

        if response.status_code not in (200, 404):
            response.raise_for_status()

        print("Successful requests!")

        soup = BeautifulSoup(response.text, 'html.parser')
        for link in soup.find_all("a", {"class": "media__link"}):
            yield link.get('href')

    except Timeout as err:
        print(f"Retry due to timed out: {err}")
    except RequestException as err:
        raise RequestException("Unexpected request error")


# ----------------------------------------------------#
if __name__ == '__main__':
    for found_links in do_request():
        print(found_links)

The problem now is that I have deliberately set the timeout to 0.1 to trigger the Timeout exception, and what I want is for the request to be retried again once it hits that timeout.
Currently it just stops there, and I am wondering what I should do to retry the request again when it hits a timeout that I do not want to raise.
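Something like the following sketch is what I have been considering: the try/except is moved inside a loop, so a timed-out attempt just starts the next iteration instead of ending the generator. The fetch_with_retry name and the MAX_ATTEMPTS limit are just my own placeholders, and I am not sure this is the right approach:

import time

import requests
from requests import RequestException, Timeout

MAX_ATTEMPTS = 5  # placeholder limit so the loop cannot run forever


def fetch_with_retry(url, timeout=0.1):
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            response = requests.get(url, timeout=timeout)
        except Timeout as err:
            # A timed-out attempt moves on to the next iteration
            print(f"Attempt {attempt} timed out, retrying: {err}")
            continue
        if response.status_code >= 500:
            # Any 5xx: wait a long time, then retry
            print("Server error, sleeping before retry")
            time.sleep(20)
            continue
        if response.status_code not in (200, 404):
            # Anything other than 200/404 should raise
            response.raise_for_status()
        return response
    raise RequestException(f"Gave up on {url} after {MAX_ATTEMPTS} attempts")

My thinking is that do_request would then call something like this and only parse the links with BeautifulSoup once a response actually comes back, but is there a cleaner way to structure the retry?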