I’m using Selenium with a custom Chrome setup via undetected_chromedriver, and I run into a critical issue during automated browsing sessions.
After successfully processing around 5–6 LinkedIn profiles, Chrome becomes unresponsive and turns completely white — no tabs, no search bar, just a blank screen. At that point, my script logs repeated retries like the following:
Retrying (Retry(total=2, ...)) after connection broken by 'ReadTimeoutError("HTTPConnectionPool(host='localhost', port=34569): Read timed out. (read timeout=120)")'
This eventually leads to complete failure as the WebDriver cannot continue.
If I terminate the entire program and rerun the same script with the same LinkedIn profile, it works fine from the beginning — but it hangs again after around 6 profiles.
This issue persists even after attempting relaunches within the script, where a "something went wrong" message appears, suggesting a session or state issue. However, if I fully stop the script and restart it (even within a minute), the same session data works without errors. I'm looking for a way to replicate this fresh-start success during mid-program relaunches to handle thousands of profiles without hangs.
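One way to replicate that fresh-start behaviour mid-run (a sketch, under the assumption that you move the per-batch scraping into a separate worker script — `scrape_batch.py` is a hypothetical name — which builds its own `uc.Chrome()` and processes the URLs passed to it) is to launch each batch in a brand-new Python process. When the child exits, Chrome, chromedriver, and all in-process state are torn down exactly as they are when you stop and restart the script manually:

```python
import json
import subprocess
import sys


def chunked(items, size):
    """Split the full profile list into batches small enough to
    finish before Chrome tends to hang (~5 in this case)."""
    for i in range(0, len(items), size):
        yield items[i:i + size]


def run_batch_in_fresh_process(worker_script, urls, timeout=600):
    """Run one batch in a brand-new Python interpreter.

    worker_script is a hypothetical script that creates its own
    driver and scrapes the URLs it receives as a JSON argument.
    When the child process exits, every Chrome/chromedriver child
    dies with it, mimicking a full stop-and-restart of the script.
    """
    proc = subprocess.run(
        [sys.executable, worker_script, json.dumps(list(urls))],
        timeout=timeout,
    )
    return proc.returncode == 0
```

Usage would then be a loop like `for batch in chunked(all_profiles, 5): run_batch_in_fresh_process("scrape_batch.py", batch)` in a thin supervisor that never touches Selenium itself, so it cannot be wedged by a frozen Chrome.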
What I’ve tried:
• Limiting memory usage via Chrome flags (e.g. --disable-gpu, --no-sandbox, etc.)
• Adding delays between profile visits
• Monitoring for zombie Chrome processes (none found)
• Retrying with fresh drivers/sessions
• Process check: looked for lingering Chrome processes after driver.quit() in Task Manager, but saw no significant improvement.
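For the "retrying with fresh drivers/sessions" attempt, it may help to show the structure you used. A minimal sketch of such a retry loop is below, where `make_driver` and `scrape` are hypothetical callables (`make_driver` would return a fresh `uc.Chrome(...)` instance); comparing your code against something like this at least confirms the old session is fully torn down before the rebuild:

```python
import time


def with_driver_rebuild(make_driver, scrape, urls, max_rebuilds=3, backoff=5.0):
    """Scrape each URL, tearing the driver down completely and building a
    new one whenever a scrape raises (e.g. the ReadTimeoutError above).

    make_driver: zero-arg callable returning a fresh driver
                 (e.g. lambda: uc.Chrome(options=...)) — hypothetical.
    scrape:      callable(driver, url) doing the actual work — hypothetical.
    """
    driver = make_driver()
    rebuilds = 0
    results = []
    for url in urls:
        while True:
            try:
                results.append(scrape(driver, url))
                break
            except Exception:
                try:
                    driver.quit()       # hard teardown of the wedged session
                except Exception:
                    pass                # quit() itself may time out here
                rebuilds += 1
                if rebuilds > max_rebuilds:
                    raise
                time.sleep(backoff)     # let chromedriver release its port
                driver = make_driver()  # brand-new driver, new debug port
    driver.quit()
    return results
```

If a loop like this still reproduces the hang while a full process restart does not, that points at state outside the driver object (e.g. leftover Chrome child processes or profile locks) rather than the Selenium session itself.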
Expected behavior: Chrome should keep loading profiles without freezing or crashing. I’d expect some gradual performance degradation, but not a complete hang.
1. --disable-gpu and --no-sandbox don't limit memory usage. 2. Adding delays between profile visits is specific to your logic. Please update the question with the code you tried.