I would like to download files without the headers. I have tried several things, such as:

wget --header="" http://xxxxx.xxxxxx.xx

How can I download files without headers?
Could you assign the output of wget to a string, then use another tool to process it and drop the headers (or parse them out of the text)?
For example, using bash and grep, you can store the HTML from a web page in a string, then use grep to extract the text inside the <body> section:
w1=$(wget --quiet --output-document - www.example.com)
echo $w1 | grep --only-matching "<body>.*</body>"

This gives the output below (I have added some newlines to improve how it displays here):
<body>
  <div>
    <h1>Example Domain</h1>
    <p>
      This domain is established to be used for illustrative examples in
      documents. You may use this domain in examples without prior
      coordination or asking for permission.
    </p>
    <p><a href="http://www.iana.org/domains/example">More information...</a></p>
  </div>
</body>
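One caveat: grep matches line by line, so if the <body> element spans multiple lines in the downloaded page, the pattern above will not match. Collapsing newlines first avoids that. A minimal sketch, using an invented $html string in place of a real download so it works without network access:

```shell
# Simulated page content (stand-in for the wget output above).
html='<html><head><title>t</title></head><body><p>hello</p></body></html>'

# Collapse newlines to spaces so "." in the pattern can span the whole
# document, then extract only the <body>...</body> section.
body=$(printf '%s' "$html" | tr '\n' ' ' | grep --only-matching "<body>.*</body>")
echo "$body"
```

In a real pipeline you would replace the $html assignment with html=$(wget --quiet --output-document - www.example.com).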