
I need to download remote file using curl.

Here's the sample code I have:

    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $st = curl_exec($ch);
    $fd = fopen($tmp_name, 'w');
    fwrite($fd, $st);
    fclose($fd);
    curl_close($ch);

But it can't handle big files, because it reads the whole response into memory first.

Is it possible to stream the file directly to disk?

5 Answers

    <?php
    set_time_limit(0);
    // This is the file where we save the information
    $fp = fopen(dirname(__FILE__) . '/localfile.tmp', 'w+');
    // Here is the file we are downloading, replace spaces with %20
    $ch = curl_init(str_replace(" ", "%20", $url));
    // make sure to set timeout to a high enough value
    // if this is too low the download will be interrupted
    curl_setopt($ch, CURLOPT_TIMEOUT, 600);
    // write curl response to file
    curl_setopt($ch, CURLOPT_FILE, $fp);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    // get curl response
    curl_exec($ch);
    curl_close($ch);
    fclose($fp);
    ?>

7 Comments

Correct me if I'm wrong, but I don't think you actually need to manually fwrite the data since you're using CURLOPT_FILE.
As @SashaChedygov has pointed out above, you don't need to use fwrite AND CURLOPT_FILE. Passing the $fp is enough. I did both and ended up with a 1 appended to the content of the file.
It appears that setting CURLOPT_FILE before setting CURLOPT_RETURNTRANSFER doesn't work, presumably because CURLOPT_FILE depends on CURLOPT_RETURNTRANSFER being set. php.net/manual/en/function.curl-setopt.php#99082
@paperclip I was having the same problem with the 1 added at the end of file. In my case I removed the CURLOPT_FILE statement and manually saved the file with fwrite. Just wanted to add that BOTH options are feasible (but we should use just ONE of them).
@DonovanP No, you shouldn't. If you need to add this, it just means your certificate is invalid and you're using HTTPS wrong.
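To illustrate the points made in the comments above, here is a minimal sketch that relies on CURLOPT_FILE alone and never calls fwrite on the return value of curl_exec. The function name curl_download and the parameter names are placeholders, not from the original answer:

```php
<?php
// Sketch based on the comments above: stream the response straight
// to disk via CURLOPT_FILE, without a manual fwrite of curl_exec's
// return value. curl_download is a made-up name for illustration.
function curl_download($url, $dest) {
    $fp = fopen($dest, 'w');
    if (!$fp) {
        return false;
    }
    $ch = curl_init($url);
    // Per the linked note: if you also use CURLOPT_RETURNTRANSFER,
    // set it BEFORE CURLOPT_FILE, or CURLOPT_FILE will be ignored.
    curl_setopt($ch, CURLOPT_FILE, $fp);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    $ok = curl_exec($ch);
    curl_close($ch);
    fclose($fp);
    return $ok !== false;
}
```

cURL also accepts file:// URLs, so the function can be exercised without a network connection.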

I use this handy function:

By downloading it in 4096-byte chunks it will not fill your memory:

    function download($file_source, $file_target) {
        $rh = fopen($file_source, 'rb');
        $wh = fopen($file_target, 'w+b');
        if (!$rh || !$wh) {
            return false;
        }
        while (!feof($rh)) {
            if (fwrite($wh, fread($rh, 4096)) === false) {
                return false;
            }
            echo ' ';
            flush();
        }
        fclose($rh);
        fclose($wh);
        return true;
    }

Usage:

 $result = download('http://url','path/local/file'); 

You can then check if everything is ok with:

 if (!$result) throw new Exception('Download error...'); 

5 Comments

@Severus you catch an HTTP error as fopen() returning false; for a timeout, check inside the while loop (call time() and do the math)
cURL already has a working implementation of this (see the accepted answer), why would you want to implement something on your own?
Because cURL's procedural interface is pretty bad
for what it's worth, I've been using stream_copy_to_stream instead of manually copying contents, makes for shorter code. Neither this nor that works with https (unless you supply a $context). Concerning procedural style: the file functions aren't quite OOP either, and if you put the curl options in an array, it'll look way cleaner anyway.
I tested it with an HTTPS URL, works great! Thanks for your help @dynamic.
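The stream_copy_to_stream variant mentioned in the comments could look like the sketch below. The function name download_stream is made up for illustration; as the comment notes, https sources may additionally need a stream context:

```php
<?php
// Sketch of the stream_copy_to_stream approach from the comments.
// download_stream is a hypothetical name, not from any answer above.
function download_stream($file_source, $file_target) {
    $rh = fopen($file_source, 'rb');
    $wh = fopen($file_target, 'wb');
    if (!$rh || !$wh) {
        return false;
    }
    // Copies the stream in internal chunks, so large files never
    // have to fit into memory at once.
    $bytes = stream_copy_to_stream($rh, $wh);
    fclose($rh);
    fclose($wh);
    return $bytes !== false;
}
```

Since fopen accepts local paths as well as URLs, the same function copies local files too.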

Use the code below if you want to download the contents of the specified URL and save it to a file:

    <?php
    $ch = curl_init();
    /**
     * Set the URL of the page or file to download.
     */
    curl_setopt($ch, CURLOPT_URL, 'http://news.google.com/news?hl=en&topic=t&output=rss');
    $fp = fopen('rss.xml', 'w+');
    /**
     * Ask cURL to write the contents to a file
     */
    curl_setopt($ch, CURLOPT_FILE, $fp);
    curl_exec($ch);
    curl_close($ch);
    fclose($fp);
    ?>

If you want to download a file from an FTP server, you can use the PHP FTP extension:

    <?php
    $SERVER_ADDRESS = "";
    $SERVER_USERNAME = "";
    $SERVER_PASSWORD = "";
    $conn_id = ftp_connect($SERVER_ADDRESS);
    // login with username and password
    $login_result = ftp_login($conn_id, $SERVER_USERNAME, $SERVER_PASSWORD);
    $server_file = "test.pdf"; // FTP server file path
    $local_file = "new.pdf";   // Local file path
    // download $server_file and save to $local_file
    if (ftp_get($conn_id, $local_file, $server_file, FTP_BINARY)) {
        echo "Successfully written to $local_file\n";
    } else {
        echo "There was a problem\n";
    }
    ftp_close($conn_id);
    ?>



When curl is used to download a large file, CURLOPT_TIMEOUT is the main option you have to set.

CURLOPT_RETURNTRANSFER has to be true if you are getting a file like a PDF, CSV, image, etc.

You may find further detail here: Curl Doc

From that page:

    curl_setopt($request, CURLOPT_TIMEOUT, 300); // set timeout to 5 mins
    curl_setopt($request, CURLOPT_RETURNTRANSFER, true); // true to get the output as a string, otherwise false

3 Comments

You can also go through the blog example regarding file download with curl: understanding curl basics
This writes the file to memory first which is what the OP was specifically trying to avoid. That said, setting CURLOPT_TIMEOUT is very important as the process will timeout in the midst of receiving data once that time is hit.
For those who are googling this question after 2023 – CURLOPT_RETURNTRANSFER is deprecated.

You can use this function, which creates a tempfile in the filesystem and returns the path to the downloaded file if everything worked fine:

    function getFileContents($url) {
        // Workaround: Save temp file
        $img = tempnam(sys_get_temp_dir(), 'pdf-');
        $img .= '.' . pathinfo($url, PATHINFO_EXTENSION);
        $fp = fopen($img, 'w+');
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_FILE, $fp);
        curl_setopt($ch, CURLOPT_HEADER, false);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        $result = curl_exec($ch);
        curl_close($ch);
        fclose($fp);
        return $result ? $img : false;
    }
