(By the way, the fact that your computer has 8 cores is irrelevant here. Your code isn't CPU-bound, it's I/O bound. If you do 12 requests in parallel, or even 500 of them, at any given moment, almost all of your threads are waiting on a socket.recv or similar call somewhere, blocking until the server responds, so they aren't using your CPU.)
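To see that concretely, here's a rough sketch that simulates the blocking with time.sleep instead of a real request (so it runs anywhere): twelve "requests" in twelve threads finish in about the time of one, regardless of how many cores you have.

    # Each worker spends its time blocked (here, in sleep; in your code, in
    # socket.recv under requests.get), so it isn't competing for a CPU core.
    import time
    from multiprocessing.dummy import Pool as ThreadPool  # threads, not processes

    def fake_request(i):
        time.sleep(1)          # stands in for blocking on the server's response
        return i

    start = time.time()
    with ThreadPool(12) as pool:
        results = pool.map(fake_request, range(12))
    print(time.time() - start)  # roughly 1 second, not 12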
If you're trying to get around the rate limits for a major site, (a) you're probably violating their T&C, and (b) you're almost surely going to trigger some kind of detection and get yourself blocked.1
In your edited question, you attempted to do this with multiprocessing.dummy.Pool.map, which is fine—but you're getting the arguments wrong.
Your function takes a list of urls and loops over them:
    def scrape_google(url_list):
        # ...
        for i in url_list:
            # ...

But then you call it with a single URL at a time:
    results = pools.map(scrape_google, urls)

This is similar to using the builtin map, or a list comprehension:
    results = map(scrape_google, urls)
    results = [scrape_google(url) for url in urls]

What happens if you get a single URL instead of a list of them, but try to use it as a list? A string is a sequence of its characters, so you loop over the characters of the URL one by one, trying to download each character as if it were a URL.
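For example, with a made-up URL, the loop sees this:

    # Iterating over a string yields one character at a time; each character
    # then gets handed to requests.get as if it were a URL, which fails.
    url = "http://example.com"
    for i in url:
        print(i)   # prints h, t, t, p, :, /, / ... one character per line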
So, you want to change your function, like this:
    def scrape_google(url):
        reviews = # …
        request = requests.get(url, proxies=proxies, verify=False).text
        # …
        return reviews

Now it takes a single URL, and returns a set of reviews for that URL. The pools.map will call it with each URL, and give you back an iterable of reviews, one per URL.
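Put together with the pool, the calling side might look roughly like this. (The proxies dict, the URL list, and the parsing step are placeholders standing in for whatever your real code does, since that part isn't shown.)

    from multiprocessing.dummy import Pool as ThreadPool
    import requests

    proxies = {}  # placeholder for your real proxy configuration

    def scrape_google(url):
        reviews = []  # placeholder; fill in your actual parsing
        html = requests.get(url, proxies=proxies, verify=False).text
        # ... parse `html` and append what you find to reviews ...
        return reviews

    urls = ["https://example.com/a", "https://example.com/b"]  # placeholders

    with ThreadPool(12) as pools:
        results = pools.map(scrape_google, urls)  # one entry per URL, in order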