My code works this way, but it is very slow because of the for loops. Can you help me make it work with aiohttp and asyncio?
```python
import requests
from bs4 import BeautifulSoup


def field_info(field_link):
    # Fetch one field page and pull out every race on it
    response = requests.get(field_link)
    soup = BeautifulSoup(response.text, 'html.parser')
    races = soup.findAll('header', {'class': 'dc-field-header'})
    tables = soup.findAll('table', {'class': 'dc-field-comp'})

    info = []
    for i in range(len(races)):
        race_name = races[i].find('h3').text
        race_time = races[i].find('time').text
        names = tables[i].findAll('span', {'class': 'title'})
        trainers = tables[i].findAll('span', {'class': 'trainer'})

        # One row per runner: name plus trainer
        table = []
        for j in range(len(names)):
            table.append({
                'Name': names[j].text,
                'Trainer': trainers[j].text,
            })

        info.append({
            'RaceName': race_name,
            'RaceTime': race_time,
            'Table': table,
        })
    return info


scraped_info = []
links = [link1, link2, link3]
for link in links:
    scraped_info += field_info(link)
```
Neither `asyncio` nor `aiohttp` will give your code magic parallelism, nor will they speed up CPU-bound tasks. They're meant for asynchronous programming, i.e. overlapping the time spent waiting on I/O such as HTTP requests.

On a side note: instead of iterating over `range(len(names))`, you can use `for name, trainer in zip(names, trainers)` and avoid the index lookups inside the loop.
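Applied to the inner loop from your question, that would look something like this:

```python
# Build the runner table by pairing each name tag with its trainer tag,
# with no index bookkeeping.
table = []
for name, trainer in zip(names, trainers):
    table.append({
        'Name': name.text,
        'Trainer': trainer.text,
    })
```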
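That said, the slow part here is most likely the HTTP requests themselves, which are I/O-bound rather than CPU-bound, so fetching the pages concurrently can still help. Below is a minimal sketch of one way to do that with `aiohttp` and `asyncio.gather`, keeping the parsing synchronous; `link1`, `link2`, `link3` are the placeholder URLs from your question.

```python
import asyncio

import aiohttp
from bs4 import BeautifulSoup


def parse_field(html):
    """Same parsing logic as field_info, but working on already-fetched HTML."""
    soup = BeautifulSoup(html, 'html.parser')
    races = soup.find_all('header', {'class': 'dc-field-header'})
    tables = soup.find_all('table', {'class': 'dc-field-comp'})

    info = []
    for race, comp_table in zip(races, tables):
        names = comp_table.find_all('span', {'class': 'title'})
        trainers = comp_table.find_all('span', {'class': 'trainer'})
        info.append({
            'RaceName': race.find('h3').text,
            'RaceTime': race.find('time').text,
            'Table': [{'Name': name.text, 'Trainer': trainer.text}
                      for name, trainer in zip(names, trainers)],
        })
    return info


async def fetch(session, link):
    # One GET request; awaiting lets other requests run while this one waits.
    async with session.get(link) as response:
        return await response.text()


async def scrape(links):
    async with aiohttp.ClientSession() as session:
        # Start all requests at once and wait for every response.
        pages = await asyncio.gather(*(fetch(session, link) for link in links))

    scraped_info = []
    for html in pages:
        scraped_info += parse_field(html)
    return scraped_info


links = [link1, link2, link3]
scraped_info = asyncio.run(scrape(links))
```

With three links the gain is small, but with many pages the requests overlap instead of running one after another, which is where the real time is saved.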