First of all, make sure you actually see these "missing tags" in the HTML that comes into BeautifulSoup for parsing. The problem may not be in how BeautifulSoup parses the HTML, but in how you are retrieving the HTML data in the first place.
I suspect you are downloading the Google homepage via urllib2 or requests and comparing what you see inside str(soup) with what you see in a real browser. If that is the case, you cannot compare the two: neither urllib2 nor requests is a browser, and neither can execute JavaScript, manipulate the DOM after the page load, or make asynchronous requests. What you get with urllib2 or requests is basically the initial HTML page, without the "dynamic part".
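A quick way to check this is to test whether the content you are missing is present in the raw downloaded HTML before BeautifulSoup ever sees it. A minimal sketch using requests (the URL and the "dynamic-id" marker are placeholders; substitute the page and the tag or attribute you are actually looking for):

```python
import requests
from bs4 import BeautifulSoup

# Download the raw HTML - no JavaScript is executed here
html = requests.get("https://www.google.com/").text

# Is the content you are missing in the raw HTML at all?
# "dynamic-id" is a placeholder for whatever tag/id you expect to see.
print("dynamic-id" in html)

# If it is absent above, BeautifulSoup cannot conjure it up:
soup = BeautifulSoup(html, "html.parser")
print("dynamic-id" in str(soup))
```

If the marker is already missing from the raw response, the parser choice is irrelevant: the content is injected by JavaScript after page load, and you would need a real browser engine (e.g. via Selenium) to see it.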
If the problem is still in how BeautifulSoup parses the HTML...
As clearly stated in the docs, the behavior depends on which parser BeautifulSoup chooses to use under the hood:
There are also differences between HTML parsers. If you give Beautiful Soup a perfectly-formed HTML document, these differences won’t matter. One parser will be faster than another, but they’ll all give you a data structure that looks exactly like the original HTML document. But if the document is not perfectly-formed, different parsers will give different results.
See Installing a parser and Specifying the parser to use.
Since you don't specify a parser explicitly, the following rule applies:
If you don’t specify anything, you’ll get the best HTML parser that’s installed. Beautiful Soup ranks lxml’s parser as being the best, then html5lib’s, then Python’s built-in parser.
See also Differences between parsers.
In other words, try the problem with different parsers and see how the results differ:
soup = BeautifulSoup(html, 'lxml')
soup = BeautifulSoup(html, 'html5lib')
soup = BeautifulSoup(html, 'html.parser')
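To see the difference concretely, here is a minimal sketch built around "<a></p>", the malformed-HTML sample used in the Beautiful Soup "Differences between parsers" documentation (lxml and html5lib are optional installs, so the sketch skips them if they are missing):

```python
from bs4 import BeautifulSoup

broken_html = "<a></p>"  # malformed: stray closing </p>, unclosed <a>

# Python's built-in parser simply ignores the stray </p>
print(BeautifulSoup(broken_html, "html.parser"))  # -> <a></a>

# lxml and html5lib each repair the document in their own way,
# wrapping it in <html>/<body> and handling the </p> differently
for parser in ("lxml", "html5lib"):
    try:
        print(parser, "->", BeautifulSoup(broken_html, parser))
    except Exception:
        print(parser, "is not installed")
```

The same input, three different trees: that is exactly why a tag can seem to "go missing" with one parser and not another on imperfect HTML.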