Disable memory buffer for file uploads

Hi,

I’m using the latest version of Nginx and PHP/PHP-FPM on a Linux machine
with 2 GB of RAM and plenty of hard disk space.

My problem is this: when I upload a large file (2 GB, for example) to my
Web site, Nginx buffers the whole file in memory, and this will become
a huge problem as we’re going to have a lot of users uploading large
files in the near future. So I gave the Nginx upload module a try, as I
thought it would write directly to disk and skip memory, but no, I still
have the same issue.

I’m currently looking at solutions like Plupload (a Flash-based uploader)
that can split an upload into many small chunks - but the perfect solution
would be to be able to tell Nginx to write the client body directly to a
file, and never keep it in memory.

Is it possible?

Thanks!

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,185014,185014#msg-185014

On Tue, Mar 22, 2011 at 08:17:28PM -0400, akaris wrote:

Hi there,

My problem is this: when I upload a large file (2 GB, for example) to my
Web site, Nginx buffers the whole file in memory,

That shouldn’t happen unless your client_body_buffer_size is really
really big.

http://wiki.nginx.org/HttpCoreModule#client_body_buffer_size

nginx will only buffer in memory up to the value of
client_body_buffer_size; beyond that, it will buffer to disk.
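As a concrete illustration of what the reply describes (a minimal sketch; the path and sizes below are placeholders, not values taken from this thread), the relevant directives look like this:

```nginx
# Keep at most 16k of a request body in memory; anything larger
# is spilled to a temporary file on disk.
client_body_buffer_size 16k;

# Directory for the spilled request bodies (must be writable by nginx).
client_body_temp_path /var/lib/nginx/body;

# Permit large uploads; the default client_max_body_size is only 1m.
client_max_body_size 2g;

# nginx also has "client_body_in_file_only on;", which forces every
# request body to disk regardless of size (often used with modules
# that process the body as a file).
```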

How do you determine that nginx buffers the whole file in memory?

All the best,

f

Francis D. [email protected]

I have a question: will writing to disk be much slower than using
memory? So my questions are:
when users upload large files, should I use a large or a small
client_body_buffer_size?
when users upload small files, should I use a large or a small
client_body_buffer_size? And in this situation, when a user finishes
uploading, will the content be removed when the memory for
client_body_buffer_size is full, so that newly uploaded files can
use the memory?

On Wed, Mar 23, 2011 at 11:43:08AM +0800, Space Lee wrote:

Hi there,

I have a question: will writing to disk be much slower than using memory?

Usually, yes.

But the full sequence is more like “read from network”, “write to
buffer”, “read from buffer”, “write to fastcgi server or http server
or whatever nginx sends the request to”.

Will the write to disk (buffer) be much slower than the read from
network?

If you have enough memory (and a good OS disk cache), the “read from
buffer” may effectively be “from memory” anyway.

So my questions are:
when users upload large files, should I use a large or a small
client_body_buffer_size?
when users upload small files, should I use a large or a small
client_body_buffer_size?

How much time will it take for your user to complete the upload http
request, for nginx to buffer it, pass it on, and to have it processed
by whatever nginx passes it on to?

In that time, how many other user upload-requests will start? (= N)

And how much total real memory do you want to use for upload-request
buffers? (= M)

M/N is a rough guide to the size you want for client_body_buffer_size.
You don’t want your upload-request buffers to push the memory use above
the real memory available.
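As a worked example of the M/N rule of thumb above (the numbers here are invented for illustration, not taken from this thread): if you are willing to spend 512 MB of real memory on body buffers (M = 512 MB) and expect about 50 upload requests in flight at once (N = 50), then M/N suggests roughly 10 MB per request:

```nginx
# M/N = 512 MB / 50 concurrent uploads ≈ 10 MB per request.
# Bodies larger than this are buffered to disk instead.
client_body_buffer_size 10m;
```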

And in this situation, when a user finishes uploading, will the content be
removed when the memory for client_body_buffer_size is full, so that newly
uploaded files can use the memory?

The buffer will eventually be released. From a capacity point of view,
expect it not to be released until the http connection is closed by the
server.

(Exactly when it is released is less important – for capacity planning,
assume the worst :-))

All the best,

f

Francis D. [email protected]
