
I have been working with Selenium and ChromeDriver. There is a page that apparently detects the scraper and responds to its URLs with 403 Forbidden. To solve this, I used the following to disable the WebDriver's automation markers:

ChromeOptions options = new ChromeOptions();
options.setExperimentalOption("useAutomationExtension", false);
options.setExperimentalOption("excludeSwitches", new String[]{"enable-automation"});
this.driver = new ChromeDriver(options);
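For reference, the same configuration written out as a self-contained sketch (the class name and the chromedriver path are assumptions; `excludeSwitches` can also be passed as a `List`). The final script call reads `navigator.webdriver`, one of the flags sites commonly check, which under Chrome/ChromeDriver 79+ may report `true` regardless of these options:

```java
import java.util.Collections;

import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class ScraperSetup {
    public static void main(String[] args) {
        // Assumed path -- point this at your local chromedriver binary.
        System.setProperty("webdriver.chrome.driver", "/usr/local/bin/chromedriver");

        ChromeOptions options = new ChromeOptions();
        // Drop the --enable-automation switch so Chrome does not announce
        // that it is "controlled by automated test software".
        options.setExperimentalOption("excludeSwitches",
                Collections.singletonList("enable-automation"));
        // Do not load ChromeDriver's automation extension.
        options.setExperimentalOption("useAutomationExtension", false);

        WebDriver driver = new ChromeDriver(options);
        try {
            driver.get("https://www.fedex.com/en-us/home.html");
            // Inspect what the page can see about automation.
            Object flag = ((JavascriptExecutor) driver)
                    .executeScript("return navigator.webdriver");
            System.out.println("navigator.webdriver = " + flag);
        } finally {
            driver.quit();
        }
    }
}
```

This requires a local Chrome and chromedriver to run, so it is a sketch rather than something verifiable in isolation.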

With this it worked well; apparently the page no longer detected the scraper. The problem is that a couple of weeks ago Chrome was updated to version 80, and this stopped working. Even with this configuration, the page seems to detect the scraper again.

I know it's something between the page and the scraper, because if I browse the page manually it works without problems and does not return 403 Forbidden.

URL to scrape = https://www.fedex.com/en-us/home.html

[Screenshot: accessing manually or with a ChromeDriver version before v79 — the page loads normally]

[Screenshot: using ChromeDriver after v79 — 403 Forbidden]

I'm using

Java 1.8

Selenium v4.0.0-alpha-3

Chrome v80

ChromeDriver v80

Regards

  • Please share with us the page and what you are trying to do: stackoverflow.com/help/minimal-reproducible-example Commented Feb 25, 2020 at 12:37
  • I added it, but the problem is that the page requires a username and password. I left an example of the error, though. Commented Feb 25, 2020 at 12:54
  • Does this help you? Commented Feb 25, 2020 at 14:32
  • I don't think that's necessary, because it works for me when done manually Commented Feb 25, 2020 at 14:34
