This repository contains working examples of scraping Google Maps search results using:
- Selenium
- Playwright (with stealth)
- HasData Google Maps API
in both Python and Node.js. Each method includes a clean, minimal code sample with working selectors and data-saving logic.
Requirements: Python 3.10+ or Node.js 18+.
Install required packages (Python):

```bash
pip install selenium pandas playwright playwright-stealth
playwright install
```

Install required packages (Node.js):

```bash
npm install selenium-webdriver playwright playwright-extra puppeteer-extra-plugin-stealth axios
```

Project structure:

```
google-maps-scraper/
├── python/
│   ├── selenium_scraper.py
│   ├── playwright_scraper.py
│   └── hasdata_api_scraper.py
├── nodejs/
│   ├── selenium_scraper.js
│   ├── playwright_scraper.js
│   └── hasdata_api_scraper.js
└── README.md
```

Each script scrapes the same data: business name, rating, reviews, category, services, image, and detail URL. Output is saved in both `.json` and `.csv`.
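The saving step is the same across all scripts. A minimal Python sketch of that logic (field names follow the list above; the sample record and the `maps_data` basename are illustrative):

```python
import json

import pandas as pd


def save_results(records, basename="maps_data"):
    """Write the same list of records to <basename>.json and <basename>.csv."""
    with open(f"{basename}.json", "w", encoding="utf-8") as f:
        json.dump(records, f, ensure_ascii=False, indent=2)
    # pandas flattens the list of dicts into CSV columns named after the keys.
    pd.DataFrame(records).to_csv(f"{basename}.csv", index=False)


save_results([{
    "name": "Joe's Pizza", "rating": 4.5, "reviews": 1200,
    "category": "Pizza restaurant",
}])
```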
Classic Google Maps scraping using Selenium with a visible browser.
| Parameter | Description | Example |
|---|---|---|
| query | Search query | "pizza in New York" |
| max_scrolls | Number of scroll cycles | 10 |
| scroll_pause | Pause between scrolls (seconds) | 2 |
| output_file | Output CSV/JSON filename | "maps_data.csv" / "maps_data.json" |
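A condensed sketch of the Selenium approach, using the parameters above. The `role="feed"` and `role="article"` selectors are assumptions that have been more stable than Google's generated class names, but verify them against the live page before relying on them:

```python
import time
from urllib.parse import quote_plus


def build_search_url(query: str) -> str:
    """Build a Google Maps search URL for a free-text query."""
    return f"https://www.google.com/maps/search/{quote_plus(query)}"


def scrape(query: str, max_scrolls: int = 10, scroll_pause: float = 2.0):
    # Selenium imports are local so build_search_url stays importable
    # without a browser driver installed.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()  # visible browser, as in selenium_scraper.py
    try:
        driver.get(build_search_url(query))
        time.sleep(scroll_pause)
        # The results list lives in a scrollable side panel.
        feed = driver.find_element(By.CSS_SELECTOR, 'div[role="feed"]')
        for _ in range(max_scrolls):
            driver.execute_script(
                "arguments[0].scrollTop = arguments[0].scrollHeight", feed)
            time.sleep(scroll_pause)
        results = []
        for card in feed.find_elements(By.CSS_SELECTOR, 'div[role="article"]'):
            link = card.find_element(By.CSS_SELECTOR, "a")
            results.append({
                "name": card.get_attribute("aria-label"),
                "url": link.get_attribute("href"),
            })
        return results
    finally:
        driver.quit()
```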
Runs headless or headful with stealth mode to avoid detection.
| Parameter | Description | Example |
|---|---|---|
| query | Search query | "restaurants in Chicago" |
| headless | Run the browser headless | True |
| scroll_pause | Pause between scrolls (seconds) | 2 |
| max_scrolls | Number of scroll cycles | 10 |
| output_file | Output CSV/JSON filename | "output.csv" |
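A condensed sketch of the Playwright variant. The `stealth_sync` helper name is an assumption about playwright-stealth's sync API, and the `role` selectors are the same unverified assumption as in the Selenium example:

```python
def maps_url(query: str) -> str:
    """Search URL; spaces become '+' (sufficient for simple ASCII queries)."""
    return "https://www.google.com/maps/search/" + query.replace(" ", "+")


def scrape(query, headless=True, max_scrolls=10, scroll_pause=2.0):
    # Local imports so maps_url stays importable without the packages installed.
    from playwright.sync_api import sync_playwright
    from playwright_stealth import stealth_sync  # assumed helper name

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=headless)
        page = browser.new_page()
        stealth_sync(page)  # patch common headless fingerprints before navigating
        page.goto(maps_url(query))
        page.wait_for_selector('div[role="feed"]')
        for _ in range(max_scrolls):
            page.eval_on_selector(
                'div[role="feed"]', "el => el.scrollTop = el.scrollHeight")
            page.wait_for_timeout(scroll_pause * 1000)
        names = page.eval_on_selector_all(
            'div[role="article"]',
            "els => els.map(e => e.getAttribute('aria-label'))")
        browser.close()
    return names
```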
Use Google Maps scraping API (by HasData) — no browser automation needed.
| Parameter | Description | Example |
|---|---|---|
| api_key | Your HasData API key | "your-key" |
| query | Search query | "bars near San Francisco" |
| output_file | Output filename | "results.json" / "results.csv" |
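A sketch of the API call. The endpoint URL, header name, and query parameter below are assumptions about HasData's interface, not taken from this repo; confirm them against HasData's current documentation:

```python
def build_request(api_key: str, query: str):
    """Header and params for the HasData Google Maps search endpoint.
    Both names are assumptions -- check HasData's docs before use."""
    return {"x-api-key": api_key}, {"q": query}


def fetch_maps_results(api_key: str, query: str):
    import requests  # any HTTP client works; requests shown for brevity

    headers, params = build_request(api_key, query)
    resp = requests.get(
        "https://api.hasdata.com/scrape/google-maps/search",  # assumed URL
        headers=headers, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()
```

No browser, no scrolling, no selector maintenance: the API returns structured JSON directly.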
- Selectors may change — always verify current class names on Google Maps.
- To avoid rate limiting, use random delays, proxies, or an API-based approach.
- For heavy usage, prefer HasData or your own headless proxy farm.
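The random-delay advice can be as simple as jittering the pause between page loads or scrolls so requests don't arrive on a fixed, detectable cadence (the default values here are illustrative):

```python
import random
import time


def jittered_delay(base=2.0, jitter=1.5, floor=0.5):
    """Random delay in [base - jitter, base + jitter], clamped at floor seconds."""
    return max(base + random.uniform(-jitter, jitter), floor)


def polite_sleep(**kwargs):
    """Sleep a randomized interval; call between requests or scroll cycles."""
    time.sleep(jittered_delay(**kwargs))
```

For example, replacing a fixed `time.sleep(scroll_pause)` in the scrapers above with `polite_sleep(base=scroll_pause)` varies each pause.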
These examples are for educational purposes only. Learn more about the legality of web scraping.
