30

Suppose I need to perform a set of procedures on a particular website: fill some forms, click the submit button, send the data back to the server, receive the response, then do something based on the response and send the data back to the server again. I know there is a webbrowser module in Python, but I want to do this without invoking any web browser. It has to be a pure script.

Is there a module available in Python which can help me do that?
Thanks.

2
  • 1
    Duplicate: stackoverflow.com/search?q=%5Bpython%5D+scraping. Every question on screen scraping answers this question. Specifically: stackoverflow.com/questions/419260/grabbing-text-from-a-webpage Commented Aug 18, 2009 at 10:23
  • 2
Selenium is the only full solution to this as far as I can tell, and I have looked at every option for this sort of thing I can find. If you just need to grab web pages or do basic form entry, then mechanize will do fine, but for real browser emulation it seems you need Selenium. Commented Aug 25, 2010 at 22:45

15 Answers

19

Selenium will do exactly what you want, and it handles JavaScript.
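A minimal sketch of what that looks like with Selenium's Python bindings; the URL and element names here are hypothetical, and newer Selenium releases also need a driver binary (e.g. geckodriver) on your PATH:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()  # opens a real browser window
try:
    driver.get("https://example.com/form")                        # hypothetical URL
    driver.find_element(By.NAME, "username").send_keys("alice")   # hypothetical fields
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # Selenium runs the site's JavaScript, so page_source reflects
    # the state of the page after the form was submitted:
    print(driver.page_source)
finally:
    driver.quit()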


3 Comments

Although I don't think this can be done headless, which is what is often implied by "pure script", this will as closely as possible emulate a real browser experience...since it's using a real browser. Most sites today are completely broken without JavaScript, which makes mechanize obsolete.
This is wrong: you can easily fake a browser by using PyVirtualDisplay to run Python with Selenium in headless mode.
There is http://www.seleniumhq.org/docs/03_webdriver.jsp#htmlunit-driver. Also see https://github.com/detro/ghostdriver. Both of these are for headless JavaScript; the first is official and the second is third party.
18

You can also take a look at mechanize. It's meant to handle "stateful programmatic web browsing" (as per their site).
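A rough sketch of a form submission with mechanize (the URL and field names are placeholders):

import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)            # some sites disallow robots.txt-obeying clients
br.open("https://example.com/login")   # hypothetical URL

br.select_form(nr=0)                   # select the first form on the page
br["username"] = "alice"               # hypothetical field names
br["password"] = "secret"
response = br.submit()

print(response.read())                 # HTML of the page after submission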

4 Comments

mechanize, in my experience, is pretty slow, but once HTTPS, cookies, and logins are involved, it's much easier than urllib2.
Selenium provides a lot more than mechanize. mechanize is fine for basic stuff, but it will cause issues if you are trying to do real browser emulation: it doesn't do things like automatically downloading images and CSS files, and it always seems to be detectable by the strictest sites as an automated tool.
Unfortunately, mechanize is not maintained anymore, and does not support Python 3.
As of March 2017, maintenance has been taken over by someone else and it does indeed support Python 3: github.com/python-mechanize/mechanize
8

I think the best solution is a mix of requests and BeautifulSoup; I just wanted to update this answer so the question can be kept up to date.
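A minimal sketch of that combination; the URL, form fields, and selector are placeholders:

import requests
from bs4 import BeautifulSoup

session = requests.Session()           # keeps cookies between requests

# Submit the form (hypothetical endpoint and field names)
response = session.post("https://example.com/submit",
                        data={"username": "alice", "password": "secret"})

# Parse the server's response and act on it
soup = BeautifulSoup(response.text, "html.parser")
result = soup.find("div", class_="result")   # hypothetical selector
if result is not None:
    print(result.get_text(strip=True))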

Comments

8

All the answers here are old. I recommend, and am a big fan of, requests.

From its homepage:

Python’s standard urllib2 module provides most of the HTTP capabilities you need, but the API is thoroughly broken. It was built for a different time — and a different web. It requires an enormous amount of work (even method overrides) to perform the simplest of tasks.

Things shouldn't be this way. Not in Python.
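For comparison, a POST with basic auth and error handling is a few lines in requests (the endpoint and credentials are placeholders):

import requests

resp = requests.post("https://example.com/submit",
                     data={"spam": 1, "eggs": 2},
                     auth=("user", "pass"),   # HTTP Basic auth, if the site needs it
                     timeout=10)
resp.raise_for_status()                       # raise on 4xx/5xx instead of failing silently
print(resp.status_code, resp.headers["content-type"])
print(resp.text[:200])                        # start of the response body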

Comments

3

Selenium (http://www.seleniumhq.org/) is the best solution for me. You can code it in Python, Java, or any programming language you like with ease, and it offers easy simulation that can be converted into a program.

Comments

2

There are plenty of built-in Python modules that would help with this. For example urllib and htmllib.

The problem will be simpler if you change the way you're approaching it. You say you want to "fill some forms, click the submit button, send the data back to the server, receive the response", which sounds like a four-stage process.

In fact, what you need to do is post some data to a webserver and get a response.

This is as simple as:

>>> import urllib
>>> params = urllib.urlencode({'spam': 1, 'eggs': 2, 'bacon': 0})
>>> f = urllib.urlopen("http://www.musi-cal.com/cgi-bin/query", params)
>>> print f.read()

(example taken from the urllib docs).

What you do with the response depends on how complex the HTML is and what you want to do with it. You might get away with parsing it using a regular expression or two, or you can use the htmllib.HTMLParser class, or maybe a higher-level, more flexible parser like Beautiful Soup.

Comments

2

Selenium 2 includes WebDriver, which has Python bindings and allows one to use the headless HtmlUnit driver, or to switch to Firefox or Chrome for graphical debugging.
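With the bindings of that era, switching drivers was roughly a one-line change; the Remote/HtmlUnit combination assumes a Selenium server already running on localhost:4444:

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

# Headless HtmlUnit via a running Selenium server (assumed at :4444)
driver = webdriver.Remote("http://localhost:4444/wd/hub",
                          DesiredCapabilities.HTMLUNIT)
# ...or swap in a graphical browser for debugging, leaving the rest unchanged:
# driver = webdriver.Firefox()
# driver = webdriver.Chrome()

driver.get("https://example.com")
print(driver.title)
driver.quit()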

Comments

2

Do not forget zope.testbrowser, which is a wrapper around mechanize.

zope.testbrowser provides an easy-to-use programmable web browser with special focus on testing.
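A minimal sketch, assuming the older mechanize-based zope.testbrowser (newer releases browse WSGI apps instead) and a hypothetical search form:

from zope.testbrowser.browser import Browser

browser = Browser()
browser.open("https://example.com/search")     # hypothetical URL
browser.getControl(name="q").value = "python"  # fill a field by its name
browser.getControl("Search").click()           # click a button by its label
print(browser.contents)                        # HTML of the response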

Comments

1

The best solution that I have found (and am currently implementing) is:

- scripts in Python using Selenium WebDriver
- the PhantomJS headless browser (if Firefox is used instead, you will have a GUI and it will be slower)
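In the Selenium bindings of that era this was a one-line switch (PhantomJS support has since been deprecated in newer Selenium releases):

from selenium import webdriver

driver = webdriver.PhantomJS()       # headless; assumes phantomjs is on your PATH
# driver = webdriver.Firefox()       # graphical and slower, but easier to debug

driver.get("https://example.com")
driver.save_screenshot("page.png")   # PhantomJS still renders, so screenshots work
print(driver.title)
driver.quit()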

Comments

1

I have found the iMacros Firefox plugin (which is free) to work very well.

It can be automated with Python using Windows COM object interfaces. Here's some example code from http://wiki.imacros.net/Python. It requires Python Windows Extensions:

import win32com.client

def Hello():
    w = win32com.client.Dispatch("imacros")
    w.iimInit("", 1)
    w.iimPlay("Demo\\FillForm")

if __name__ == '__main__':
    Hello()

2 Comments

Does this only work on windows machines?
Yes, as far as I know, anything using win32 libraries only works on Windows.
0

You likely want urllib2. It can handle things like HTTPS, cookies, and authentication. You will probably also want BeautifulSoup to help parse the HTML pages.
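A sketch of the cookie and login handling it mentions, in the Python 2 idiom this answer assumes (URL and field names are placeholders):

import urllib
import urllib2
import cookielib

# An opener that remembers cookies across requests
jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

data = urllib.urlencode({"username": "alice", "password": "secret"})
response = opener.open("https://example.com/login", data)  # sending data makes it a POST
print response.read()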

Comments

0

You may have a look at these slides from the last Italian PyCon (PDF): the author lists most of the libraries for doing scraping and automated browsing in Python.

I very much like twill (which has already been suggested); it was developed by one of the authors of nose and is specifically aimed at testing web sites.

Comments

0

Internet Explorer specific, but rather good:

http://pamie.sourceforge.net/

The advantage compared to urllib/BeautifulSoup is that it executes JavaScript as well, since it uses IE.

Comments

0

httplib2 + BeautifulSoup

Use Firefox + Firebug + HttpReplay to see what JavaScript passes between the browser and the website. Using httplib2 you can essentially do the same via POST and GET.
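A sketch of replaying such a request with httplib2 (the endpoint and form fields are placeholders):

import httplib2
from urllib.parse import urlencode

h = httplib2.Http(".cache")        # optional on-disk response cache

# GET a page
resp, content = h.request("https://example.com/page", "GET")

# POST the same kind of form payload the browser would send
body = urlencode({"username": "alice", "password": "secret"})
resp, content = h.request("https://example.com/login", "POST", body=body,
                          headers={"content-type": "application/x-www-form-urlencoded"})

print(resp.status)
print(content[:200])               # start of the response body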

Comments

0

For automation you definitely might want to check out

webbot

It is based on Selenium and offers a lot more features with very little code, like automatically finding elements to perform actions such as click and type based on your parameters.

It even works for sites with dynamically changing class names and IDs.

Here are the docs: https://webbot.readthedocs.io/
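Per the linked documentation, usage is roughly as follows; the site and labels here are placeholders:

from webbot import Browser

web = Browser()
web.go_to("https://example.com/login")   # hypothetical URL
web.type("alice", into="Email")          # finds the field from context
web.type("secret", into="Password")
web.click("Sign in")                     # finds the button by its visible text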

Comments
