A better requests and urllib package.
A simple package for hitting multiple URLs and performing GET/POST requests in parallel.
```shell
pip install request-boost
```

Important: a virtual environment is recommended.
```python
# Sample data
number_of_sample_urls = 1000
urls = [
    f'https://postman-echo.com/get?random_data={test_no}'
    for test_no in range(number_of_sample_urls)
]
headers = [{'sample_header': test_no} for test_no in range(number_of_sample_urls)]
```

Simple usage:

```python
from request_boost import boosted_requests

results = boosted_requests(urls=urls)
print(results)
```

Usage with custom workers, tries, timeout, and headers:

```python
from request_boost import boosted_requests

results = boosted_requests(
    urls=urls, no_workers=16, max_tries=5, timeout=5,
    headers=headers, verbose=True, parse_json=True,
)
print(results)
```

GET and POST requests together:

```python
from request_boost import boosted_requests

# Config
number_of_sample_urls = 100

# Sample data
urls = [f'https://postman-echo.com/get?random_data={test_no}' for test_no in range(number_of_sample_urls)]
post_urls = [f'https://postman-echo.com/post' for test_no in range(number_of_sample_urls)]
headers = [{'sample_header': test_no} for test_no in range(number_of_sample_urls)]
# Required for POST requests; data can be a list of empty dicts, but not None
data = [{'sample_data': test_no} for test_no in range(number_of_sample_urls)]

simple_results = boosted_requests(urls=urls, no_workers=16, max_tries=5, timeout=5, headers=None, verbose=False, parse_json=True)
header_results = boosted_requests(urls=urls, no_workers=16, max_tries=5, timeout=5, headers=headers, parse_json=True)
post_results = boosted_requests(urls=post_urls, no_workers=16, max_tries=5, timeout=5, headers=headers, data=data, verbose=True, parse_json=True)
```

Documentation:

```python
boosted_requests(
    urls,
    no_workers=32,
    max_tries=5,
    after_max_tries="assert",
    timeout=10,
    headers=None,
    data=None,
    verbose=True,
    parse_json=True,
)
```

Get data from APIs in parallel by creating workers that process in the background.

- `urls`: list of URLs
- `no_workers`: maximum number of parallel processes {Default: 32}
- `max_tries`: maximum number of tries before failing for a specific URL {Default: 5}
- `after_max_tries`: what to do if a URL is still not successful after `max_tries`, one of {"assert", "break"} {Default: "assert"}
- `timeout`: waiting time per request {Default: 10}
- `headers`: headers, if any, for the URL requests
- `data`: data, if any, for the URL requests (wherever not None, a POST request is made)
- `verbose`: show progress [True or False] {Default: True}
- `parse_json`: parse response to JSON [True or False] {Default: True}
- returns: list of responses for each URL (order is maintained)

You can give me a small 🤓 dopamine 🤝 support by ⭐STARRING⭐ this project
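Because the responses come back in the same order as the input URLs, they can be matched to their requests with no extra bookkeeping. A minimal sketch (the `results` list below is a placeholder standing in for the actual return value of `boosted_requests(urls=urls, parse_json=True)`, so this runs without network access):

```python
# Placeholder inputs and responses; in real use, `results` would come from
# boosted_requests(urls=urls, parse_json=True).
urls = [f'https://postman-echo.com/get?random_data={i}' for i in range(3)]
results = [{'args': {'random_data': str(i)}} for i in range(3)]  # stand-in responses

# Order is maintained, so zip pairs each URL with its own response.
url_to_response = dict(zip(urls, results))
for url, response in url_to_response.items():
    print(url, '->', response['args'])
```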
Kuldeep Singh Sidhu
Github: [singhsidhukuldeep](https://github.com/singhsidhukuldeep)
Website: [Kuldeep Singh Sidhu](http://kuldeepsinghsidhu.com)
LinkedIn: [Kuldeep Singh Sidhu](https://www.linkedin.com/in/singhsidhukuldeep/)
The full list of all the contributors is available here
- Make the backend (urllib / `requests` library) a user-selectable option; urllib is used by default
- For parallel processing, add an option for multi-processing alongside multi-threading
- Set up tests for edge cases and change verification
- Set up a CI/CD pipeline (possibly using GitHub Actions) to publish changes to PyPI
- Improve the docstring documentation with more explanation and examples
- Add a message queue to deploy the service across machines
- Add an option to run URL requests with a self-signed certificate (`verify=False`)
- Add an option to suppress warnings
- Add progress bars from `tqdm` and ASCII
- Add an option to use sessions and auth
If this helped you in any way, it would be great if you could share it with others.
- `pip3 install setuptools twine`
- Go to project folder
- `python3 setup.py sdist`
- `twine upload --repository-url https://upload.pypi.org/legacy/ dist/*`

OR

Go to your project folder and:

```shell
pip3 install setuptools twine
python3 setup.py sdist
twine upload --repository-url https://upload.pypi.org/legacy/ dist/*
```