I have created a simple WCF service to prototype file uploading. The service looks like this:
[ServiceContract]
public class Service1
{
    [OperationContract]
    [WebInvoke(Method = "POST", UriTemplate = "/Upload")]
    public void Upload(Stream stream)
    {
        using (FileStream targetStream = new FileStream(@"C:\Test\output.txt", FileMode.Create, FileAccess.Write))
        {
            stream.CopyTo(targetStream);
        }
    }
}

It uses webHttpBinding with transferMode set to "Streamed" and maxReceivedMessageSize, maxBufferPoolSize and maxBufferSize all set to 2GB. httpRuntime has maxRequestLength set to 10MB.
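For reference, the relevant parts of the configuration look roughly like this (the binding, behavior and service names are only illustrative; the values are the ones described above, with 2GB written as int.MaxValue and maxRequestLength given in KB):

<system.serviceModel>
  <bindings>
    <webHttpBinding>
      <binding name="StreamedUpload"
               transferMode="Streamed"
               maxReceivedMessageSize="2147483647"
               maxBufferPoolSize="2147483647"
               maxBufferSize="2147483647" />
    </webHttpBinding>
  </bindings>
  <services>
    <service name="WcfService1.Service1">
      <endpoint address=""
                binding="webHttpBinding"
                bindingConfiguration="StreamedUpload"
                behaviorConfiguration="web"
                contract="WcfService1.Service1" />
    </service>
  </services>
  <behaviors>
    <endpointBehaviors>
      <behavior name="web">
        <webHttp />
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>
<system.web>
  <!-- maxRequestLength is in KB: 10240 KB = 10MB -->
  <httpRuntime maxRequestLength="10240" />
</system.web>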
The client issues HTTP requests in the following way:
HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(@"http://.../Service1.svc/Upload");
request.Method = "POST";
request.SendChunked = true;
request.AllowWriteStreamBuffering = false;
request.ContentType = MediaTypeNames.Application.Octet;

using (FileStream inputStream = new FileStream(@"C:\input.txt", FileMode.Open, FileAccess.Read))
{
    using (Stream outputStream = request.GetRequestStream())
    {
        inputStream.CopyTo(outputStream);
    }
}

Now, finally, what's wrong:
When uploading a 100MB file, the server returns HTTP 400 (Bad Request). I've tried enabling WCF tracing, but it shows no error. When I increase httpRuntime.maxRequestLength to 1GB, the file gets uploaded without problems. MSDN says that maxRequestLength "specifies the limit for the input stream buffering threshold, in KB".
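In other words, the only change that makes the 100MB upload go through is bumping this one value (it is specified in KB, so 1GB is written as 1048576):

<!-- fails for a 100MB upload with HTTP 400 -->
<httpRuntime maxRequestLength="10240" />

<!-- works -->
<httpRuntime maxRequestLength="1048576" />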
This leads me to believe that the whole file (all 100MB of it) is first stored in the "input stream buffer" and only then becomes available to my Upload method on the server. I can actually see that the size of the file on the server does not grow gradually (as I would expect); instead, at the moment it is created it is already 100MB.
The question: How can I get this to work so that the "input stream buffer" stays reasonably small (say, 1MB) and my Upload method gets called as soon as it overflows? In other words, I want the upload to be truly streamed, without the whole file being buffered anywhere.
EDIT: I have now discovered that httpRuntime contains another setting that is relevant here: requestLengthDiskThreshold. It seems that when the input buffer grows beyond this threshold, it is no longer kept in memory but is instead stored on the filesystem. So at least the whole 100MB file is not held in memory (which is what I was most afraid of); however, I would still like to know whether there is some way to avoid this buffer altogether.
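For completeness, this is the setting I mean (also specified in KB; the threshold value here is just an example and must not exceed maxRequestLength):

<httpRuntime maxRequestLength="1048576" requestLengthDiskThreshold="1024" />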