
I am using htmlparser 1.6 to parse web sites.

The problem is that when I point it at PDF documents, the output file contains strange characters like

ØÇÁÖÜ/:?ÖQØ?WÕWÏ 

This is a fragment of my code:

try {
    parser = new Parser ();
    if (1 < args.length)
        filter = new TagNameFilter (args[1]);
    else {
        filter = null;
        parser.setFeedback (Parser.STDOUT);
        Parser.getConnectionManager ().setMonitor (parser);
    }
    Parser.getConnectionManager ().setRedirectionProcessingEnabled (true);
    Parser.getConnectionManager ().setCookieProcessingEnabled (true);
    // Here the pdf web site
    parser.setResource ("http://hal.archives-ouvertes.fr"
        + "/docs/00/16/76/78/PDF /27_Bendaoud.pdf");
    NodeList list = parser.parse (filter);
    NodeIterator i = list.elements ();
    while (i.hasMoreNodes ())
        processMyNodes (i.nextNode ());
} catch (EncodingChangeException ece) {
    try {
        parser.reset ();
        NodeList list = parser.parse (filter);
        for (NodeIterator i = list.elements (); i.hasMoreNodes (); )
            processMyNodes (i.nextNode ());
    } catch (ParserException e) {
        e.printStackTrace ();
    }
} catch (ParserException e) {
    e.printStackTrace ();
}

Update:

I have used iText to parse PDF files. It works well on local files, but I want to parse PDF files that are hosted on web servers, such as this one:

http://protege.stanford.edu/publications/ontology_development/ontology101.pdf

How do I do this task using iText or other libraries?
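Since iText can open a PdfReader directly from a URL, reading a remote PDF does not require downloading it to a local file first. A minimal sketch, assuming iText 5 (the com.itextpdf packages); the class name RemotePdfText is my own, and the URL is the one from the question:

```java
import java.net.URL;

import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.parser.PdfTextExtractor;

public class RemotePdfText {
    public static void main (String[] args) throws Exception {
        // PdfReader can read directly from a URL, so no local copy is needed.
        PdfReader reader = new PdfReader (new URL (
            "http://protege.stanford.edu/publications/ontology_development/ontology101.pdf"));
        try {
            StringBuilder text = new StringBuilder ();
            // Pages are numbered from 1 in iText.
            for (int page = 1; page <= reader.getNumberOfPages (); page++)
                text.append (PdfTextExtractor.getTextFromPage (reader, page)).append ('\n');
            System.out.println (text);
        } finally {
            reader.close ();
        }
    }
}
```

The same idea works with PDFBox by loading the URL's InputStream into a PDDocument and running a PDFTextStripper over it.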

  • You will most likely get an answer if (1) you format your code extracts, and (2) you give more details (language of your code, link to htmlparser, ...). Commented Oct 23, 2010 at 21:38
  • Um, PDFs aren't HTML, and therefore I wouldn't expect htmlparser to parse them in any way, shape, or form. Commented Oct 23, 2010 at 22:13

2 Answers


The clue is in the name - HTMLParser parses HTML. HTML looks like this:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
    "http://www.w3.org/TR/html4/loose.dtd">
<html>
  <head><title>SimonJ's homepage</title></head>
  <body>...</body>
</html>

PDFs are not HTML - in their raw form, they look something like this:

%PDF-1.5^M%<E2><E3><CF><D3>1 0 obj<</Contents 3 0 R/Type/Page/Parent 121 0 R/Rotate 0/MediaBox[0 0 419.528015 595.276001]/CropBox[0 0 419.528015 595.276001]/Resources 2 0 R>>^Mendobj^M2 0 obj<</ColorSpace<</Cs6 132 0 R>> /Font<</F3 102 0 R/F4 105 0 R>>/ProcSet[/PDF/Text]/ExtGState<</GS1 134 0 R>>>>^Mendobj^M3 0 obj<</Length 917/Filter/FlateDecode>>stream H<89><A4><95><DB>r<A3>F^P<86><9F><80>w<E8>K<94>Z<8D><E7><C0><CC>0<97>^X!^E^WF <8A><C0><9B><B8>\{At2ESC ^\!<EF><96><DF>>= K"<B1>R<9B>Jq<C1><A9>^O_<FF>... 

which is rather different, which is why HTMLParser can't cope. If you want to parse PDFs, you'll probably want to investigate something like iText or PDFBox, although be warned: the PDF file format wasn't designed for easy extraction of text - many a PhD student has burnt out whilst trying...
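One practical consequence of that raw dump: every PDF begins with the ASCII signature %PDF- (as in the %PDF-1.5 above), so a crawler can cheaply recognize a PDF before handing the bytes to an HTML parser. A small self-contained sketch; the class and method names here are my own, not part of HTMLParser:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class PdfSniffer {
    // A PDF file begins with the ASCII bytes "%PDF-" (e.g. "%PDF-1.5").
    private static final byte[] PDF_MAGIC = "%PDF-".getBytes (StandardCharsets.US_ASCII);

    // Returns true if the first bytes of a fetched resource look like a PDF header.
    public static boolean looksLikePdf (byte[] head) {
        if (head == null || head.length < PDF_MAGIC.length)
            return false;
        return Arrays.equals (Arrays.copyOfRange (head, 0, PDF_MAGIC.length), PDF_MAGIC);
    }

    public static void main (String[] args) {
        byte[] pdf = "%PDF-1.5\r%...".getBytes (StandardCharsets.US_ASCII);
        byte[] html = "<!DOCTYPE HTML".getBytes (StandardCharsets.US_ASCII);
        System.out.println (looksLikePdf (pdf));   // true
        System.out.println (looksLikePdf (html));  // false
    }
}
```

In a crawler you would fill head with the first few bytes of the response body before deciding which parser to use.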

1

HtmlParser, like any other HTML or XML parser, hasn't got a hope in hell of parsing PDFs. HTML is a completely different format from PDF.

What you need to do is get your web crawling software to pay attention to the Content-Type header that the remote web server returns when you GET a document. This tells you the nominal format of the resource you have just fetched. If the content type is PDF, or some other format that your link extractor cannot cope with, you should not attempt to parse it.
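That decision can be reduced to inspecting the MIME type before parsing. A rough sketch; the helper name is mine, and in real code the header value would come from URLConnection.getContentType ():

```java
public class ContentTypeCheck {
    // Decide from a Content-Type header value whether an HTML parser should see it.
    // Strips any "; charset=..." parameter before comparing the MIME type.
    public static boolean isParseableHtml (String contentType) {
        if (contentType == null)
            return false;
        String mime = contentType.split (";", 2)[0].trim ().toLowerCase ();
        return mime.equals ("text/html") || mime.equals ("application/xhtml+xml");
    }

    public static void main (String[] args) {
        System.out.println (isParseableHtml ("text/html; charset=UTF-8")); // true
        System.out.println (isParseableHtml ("application/pdf"));          // false
    }
}
```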

At the moment your code does this:

parser.setResource ("http://hal.archives-ouvertes.fr" + "/docs/00/16/76/78/PDF /27_Bendaoud.pdf"); 

This needs to be replaced with something that sets the resource using an already opened InputStream, etc.
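The fetch-then-dispatch idea can be sketched with only java.net: open the connection yourself, look at the content type, and only then decide which parser gets the stream. This is an illustrative skeleton, not a working crawler; the branch bodies are placeholders for htmlparser and a PDF library respectively:

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class FetchAndDispatch {
    public static void main (String[] args) throws Exception {
        URL url = new URL (args[0]);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection ();
        // e.g. "text/html; charset=UTF-8" or "application/pdf"
        String contentType = conn.getContentType ();
        try (InputStream body = conn.getInputStream ()) {
            if (contentType != null && contentType.startsWith ("text/html")) {
                // Hand the already opened stream (or its URL) to HTMLParser here.
            } else if (contentType != null && contentType.startsWith ("application/pdf")) {
                // Hand the stream to a PDF library (iText, PDFBox) here.
            } else {
                System.err.println ("Skipping unsupported content type: " + contentType);
            }
        } finally {
            conn.disconnect ();
        }
    }
}
```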
