
I am trying to download all the images from this website path, http://www.samsung.com/sg/consumer/mobile-devices/smartphones/, using the command below:

wget -e robots=off -nd -nc -np --recursive -r -p --level=5 --accept jpg,jpeg,png,gif --convert-links -N --limit-rate=200k --wait 1.0 -U 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:14.0) Gecko/20100101 Firefox/14.0.1' -P testing_folder www.samsung.com/sg/consumer/mobile-devices/smartphones 

I would expect to see the images of the phones downloaded to my testing_folder, but all I see are some global images such as the logo. I don't seem to be able to get the phone images downloaded. The command above does seem to work on some other websites, though.

I have gone through all the wget questions on this forum, but this particular issue doesn't seem to have an answer. Can someone help? I am sure there is an easy way out. What am I doing wrong?

UPDATE: It looks like the issue is that the pages are built with JavaScript, which apparently wget can't handle well, so this seems like the end of the road. If anyone can still help, I will be delighted.
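
One quick way to confirm this (a rough sketch; page.html is just a throwaway name I'm using for illustration) is to fetch the single page exactly as wget sees it, with the same URL and user agent as above, and grep the raw HTML for img tags. If the product photos are injected by JavaScript, few or none of them will show up in the static markup:

# Fetch the raw HTML that wget actually receives (page.html is a scratch file).
wget -q -O page.html \
  -U 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:14.0) Gecko/20100101 Firefox/14.0.1' \
  http://www.samsung.com/sg/consumer/mobile-devices/smartphones/

# List the img tags present in the static markup; JS-injected images won't appear here.
grep -oEi '<img[^>]*src="[^"]+"' page.html | head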

  • Looks like those images don't have any extensions like jpg, jpeg etc. Inspecting the page doesn't show direct links to those images, that's probably why your script isn't working. Commented Jun 17, 2015 at 17:54
  • I haven't looked at the page, but it's entirely possible that the images are populated by JavaScript, which means that the page, when fetched with wget, would not contain those img links. Fetch the page with wget and examine the HTML source. Commented Jun 17, 2015 at 17:55
  • ronakg, thanks. If I change the path to the one below, there is definitely an image there which I would like to scrape: samsung.com/sg/consumer/mobile-devices/smartphones/galaxy-s/… However, this too doesn't seem to work. Commented Jun 17, 2015 at 17:57
  • This page has some useful discussion on the topic, but the tl;dr is "it's complicated". Commented Jun 17, 2015 at 18:46
  • You'll probably have to use something more beefy like PhantomJS (a headless, scriptable, WebKit-based browser) to pull down images that are populated via JS. Commented Jun 17, 2015 at 19:25

1 Answer


Steps:

  1. configure a proxy server, for example Apache httpd with mod_proxy and mod_proxy_http

  2. visit the page with a web browser that supports JavaScript and is configured to use your proxy server

  3. harvest the URLs from the proxy server log file and put them in a file (see the log-harvesting sketch after this list)
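
A rough sketch of that harvesting step, assuming a default Apache combined access log at /var/log/apache2/access.log (the log path, log format, and the urls.txt output name are assumptions; adjust them to your setup):

# In the combined log format the requested URL is the 7th field;
# keep only image URLs and de-duplicate them into urls.txt.
awk '{print $7}' /var/log/apache2/access.log \
  | grep -Ei '\.(jpe?g|png|gif)(\?|$)' \
  | sort -u > urls.txt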

Or:

  1. Start Firefox and open the web page

  2. F10 - Tools - Page Info - Media - right click - select all - right click - copy

  3. Paste into a file with your favourite editor

Then:

  1. optionally (if you don't want to find out how to get wget to read a list of URLs from a file), add minimal HTML tags (html, body and img) to the file

  2. use wget to download the images, specifying the file of URLs created above as the starting point (see the sketch after this list)
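
For reference, wget can read a list of URLs directly with -i, which avoids the wrapper-HTML step entirely. A minimal sketch, assuming the harvested URLs were saved to urls.txt:

# -i reads the URLs from urls.txt; -P saves them into the folder used in the question.
wget -i urls.txt -P testing_folder -N --wait 1.0 --limit-rate=200k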


3 Comments

@Jochim, thanks, but steps 3, 4 and 5 are what I am capable of doing myself; points 1 and 2 are beyond my abilities since I am a beginner.
How about these alternative steps? Do they capture all images? Looks ok to me but maybe not all images are loaded at this stage.
Thanks for the alternative steps. Did exactly that with www.roca.in, but all I ended up getting were extra images and not the ones I require. Appreciate the effort.
