Client body buffering with FastCGI

Hello all,

I’m trying to configure AjaXplorer, a PHP/Ajax file manager, to work
behind nginx 0.8.54 on FreeBSD 7.3. The problem I’m running into is
that I can’t upload files larger than ~64 MB. Ideally, I’d like to
raise that limit to 1 GB. I realize that HTTP is not ideal for this,
but other transfer methods are not an option.

PHP and nginx are both configured to accept 1 GB POST requests. As far
as I can tell, nginx buffers the contents of the entire upload to disk
before forwarding the request to the FastCGI process. This data is
then read from disk and written back to disk by PHP. The whole
write/read/write cycle is causing a timeout, first in nginx, and then
in the PHP process (though there may also be some other problem that I
haven’t figured out yet).
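
For reference, the limits involved are set along these lines (a
sketch with illustrative values, not my exact configuration):

 # nginx: maximum accepted request body size
 client_max_body_size 1g;

 # PHP side (php.ini), shown as comments for comparison:
 #   post_max_size       = 1G
 #   upload_max_filesize = 1G
 #   memory_limit        = 1G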

For now, I’m curious: is there a way to bypass the disk buffer and
have nginx start sending the request to the backend as soon as it has
all the headers? PHP could then buffer the entire request in memory
and begin processing it as soon as the last byte is received.

I’m also looking into the upload module for nginx, which eliminates
the need to buffer the request in memory. However, AjaXplorer isn’t
written to work with this module, so it would require some effort on
my part to modify the code. I would prefer to avoid doing this, if
possible.

  • Max

Hello!

On Thu, Feb 17, 2011 at 09:48:24AM -0500, Maxim K. wrote:

> The whole write/read/write cycle is causing a timeout, first in
> nginx, and then in the PHP process (though there may also be some
> other problem that I haven’t figured out yet).

Setting bigger timeouts should help. All timeouts in nginx are
configurable (proxy_connect_timeout, proxy_send_timeout,
proxy_read_timeout - and similar ones for other backend modules,
e.g. fastcgi_send_timeout and fastcgi_read_timeout for FastCGI).

Though it sounds strange that nginx times out while writing the
request to PHP, as it should reset the timer on any write operation.
A timeout may happen after the request has been written (a read
timeout), i.e. if PHP takes too long to process the request, but
then you have to enlarge that timeout anyway.
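
For the FastCGI module those directives would look like this (a
sketch with illustrative values, not a recommendation):

 location ~ \.php$ {
     fastcgi_pass unix:/tmp/php.sock;
     fastcgi_connect_timeout 60s;
     fastcgi_send_timeout    600s;  # time allowed to send the request body to PHP
     fastcgi_read_timeout    600s;  # time allowed to wait for PHP’s response
     include fastcgi_params;
 }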

Which message do you see in the nginx error log?

> For now, I’m curious: is there a way to bypass the disk buffer and
> have nginx start sending the request to the backend as soon as it has
> all the headers? PHP could then buffer the entire request in memory
> and begin processing it as soon as the last byte is received.

No.

Maxim D.

On Thu, Feb 17, 2011 at 12:05 PM, Maxim D. [email protected] wrote:

> Though it sounds strange that nginx times out while writing the
> request to PHP, as it should reset the timer on any write operation.
> A timeout may happen after the request has been written (a read
> timeout), i.e. if PHP takes too long to process the request, but
> then you have to enlarge that timeout anyway.

I think the timeouts are a side effect; the problem seems to be
between nginx and the FastCGI unix socket. I just ran two quick tests.
All possible timeouts for PHP and nginx have been set to 60 seconds;
memory and POST size limits are at 1 GB.

First, I uploaded a 90 MB file. All went well - the upload finished in
~3 seconds, PHP took ~8 seconds to copy it to the final destination.
So 11 seconds total from the time that I hit ‘upload’ until I got a
success notification.

Next, I tried to upload a 100 MB file. The upload took ~4 seconds, but
then nothing… The server sat for 1 minute with CPU 100% idle. After
that, nginx timed out. I had these 2 messages in the error log:

2011/02/17 13:14:21 [warn] 68428#0: *8 a client request body is buffered to a temporary file /srv/upload/tmp/0000000002
2011/02/17 13:15:25 [error] 68428#0: *8 upstream timed out (60: Operation timed out) while sending request to upstream

As soon as the second message appeared, the PHP process began
executing, copying 20 MB of the uploaded data to the final
destination. The remaining 80 MB never made it. In my other tests, the
amount of data saved varied between 20 and 60 MB.

In other words, it looks like nginx receives the entire request and
begins writing it out to the FastCGI socket. After copying a portion
of the data, the transfer breaks. Nginx then times out and closes the
socket, which causes PHP to begin executing this partially received
request. I did verify that AjaXplorer code is not executed until the
nginx timeout, so this software is not the problem. The fault is
either with PHP, nginx, or the operating system.

Any ideas on what could be preventing the entire request from being
written out to the FastCGI socket? I have error_log set to ‘debug’,
but the two messages above are all I’m getting.

  • Max

On Feb 17, 2011, at 21:47, Maxim K. wrote:

> In other words, it looks like nginx receives the entire request and
> begins writing it out to the FastCGI socket. After copying a portion
> of the data, the transfer breaks. Nginx then times out and closes the
> socket, which causes PHP to begin executing this partially received
> request. I did verify that AjaXplorer code is not executed until the
> nginx timeout, so this software is not the problem. The fault is
> either with PHP, nginx, or the operating system.

> Any ideas on what could be preventing the entire request from being
> written out to the FastCGI socket? I have error_log set to ‘debug’,
> but the two messages above are all I’m getting.

You need to rebuild nginx with the debug log enabled:
http://nginx.org/en/docs/debugging_log.html

As to the issue, try using a TCP socket for FastCGI instead of a unix socket.
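
For example (port 9000 is just a common choice; the PHP FastCGI
daemon must be reconfigured to listen on the same address):

 location ~ \.php$ {
     fastcgi_pass 127.0.0.1:9000;  # TCP socket instead of unix:/tmp/php.sock
     include fastcgi_params;
 }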


Igor S.
http://sysoev.ru/en/

On Thu, Feb 17, 2011 at 1:47 PM, Maxim K. [email protected] wrote:

I think I managed to solve the problem, but not find the answer as to
what causes it. I decided to see what would happen if the FastCGI
server listened on 127.0.0.1 rather than /tmp/php.sock. After making
the switch, I was able to upload 256 MB of data in 56 seconds without
any problems (repeated this 5 times just to be sure).

Could there be a problem with how nginx opens unix sockets that would
cause some of the data for large requests to be lost?

  • Max

On Feb 17, 2011, at 22:38, Maxim K. wrote:

> I think I managed to solve the problem, but not find the answer as to
> what causes it. I decided to see what would happen if the FastCGI
> server listened on 127.0.0.1 rather than /tmp/php.sock. After making
> the switch, I was able to upload 256 MB of data in 56 seconds without
> any problems (repeated this 5 times just to be sure).
>
> Could there be a problem with how nginx opens unix sockets that would
> cause some of the data for large requests to be lost?

I believe it’s a unix socket issue, not an nginx one.


Igor S.
http://sysoev.ru/en/

Hello!

On Thu, Feb 17, 2011 at 01:47:43PM -0500, Maxim K. wrote:

> In other words, it looks like nginx receives the entire request and
> begins writing it out to the FastCGI socket. After copying a portion
> of the data, the transfer breaks. Nginx then times out and closes the
> socket, which causes PHP to begin executing this partially received
> request. I did verify that AjaXplorer code is not executed until the
> nginx timeout, so this software is not the problem. The fault is
> either with PHP, nginx, or the operating system.

> Any ideas on what could be preventing the entire request from being
> written out to the FastCGI socket? I have error_log set to ‘debug’,
> but the two messages above are all I’m getting.

You have to recompile nginx with the --with-debug configure argument
to get debugging output; see “A debugging log”
(http://nginx.org/en/docs/debugging_log.html) for details.
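
A minimal sketch (the log path is illustrative):

 # after rebuilding nginx: ./configure --with-debug ... && make
 error_log /var/log/nginx/error.log debug;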

You may also want to provide some additional info about your
operating system and the nginx config you use.

Maxim D.

On Thu, Feb 17, 2011 at 2:43 PM, Igor S. [email protected] wrote:

> Could there be a problem with how nginx opens unix sockets that would
> cause some of the data for large requests to be lost?

> I believe it’s a unix socket issue, not an nginx one.

Understood, thanks.

  • Max

2011/2/17 Maxim D. [email protected]:

Here’s the information in case you want to see why unix sockets cause
this problem. I don’t think you even need to use the specific software
I was trying to configure. Write a 1-line PHP script that saves some
POSTed variable to a file. Then post more than 64 MB of data and see
if it breaks.
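
Something along these lines should do (field and file names are
arbitrary):

 <?php
 // save one POSTed variable to a file
 file_put_contents('/tmp/upload-test.out', $_POST['data']);

Posting more than 64 MB of data to it should reproduce the problem.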

The OS is FreeBSD 7.3-RELEASE-p4 amd64. Nginx 0.8.54 configuration for
AjaXplorer with irrelevant parts removed:

worker_processes 2;
events { worker_connections 512; }

http
{
    include mime.types;
    sendfile on;

    server
    {
        listen 80 accept_filter=httpready;
        server_name localhost;
        root /srv/upload/ajaxplorer;
        client_max_body_size 256m;

        location = / {
            rewrite ^ /index.php last;
        }

        location ~ ^/(?:index|content)\.php$ {
            fastcgi_pass unix:/tmp/php.sock;
            fastcgi_param SCRIPT_FILENAME $request_filename;
            include fastcgi_params;
        }

        location /             { deny  all; }
        location /client/      { allow all; }
        location /client/html/ { deny  all; }
        location /plugins/     { allow all; }
        location ~* \.php$     { deny  all; }
    }
}

  • Max

On Fri, Feb 18, 2011 at 07:43:09AM -0500, Maxim K. wrote:

>     client_max_body_size 256m;
>
>     location /             { deny  all; }
>     location /client/      { allow all; }
>     location /client/html/ { deny  all; }
>     location /plugins/     { allow all; }
>     location ~* \.php$     { deny  all; }
>     }
> }

BTW, it’s better not to use regex at all:

 location = / {
     fastcgi_pass unix:/tmp/php.sock;
     fastcgi_param SCRIPT_FILENAME $document_root/index.php;
     include fastcgi_params;
 }

 location = /index.php {
     fastcgi_pass unix:/tmp/php.sock;
     fastcgi_param SCRIPT_FILENAME $request_filename;
     include fastcgi_params;
 }

 location = /content.php {
     fastcgi_pass unix:/tmp/php.sock;
     fastcgi_param SCRIPT_FILENAME $request_filename;
     include fastcgi_params;
 }


Igor S.
http://sysoev.ru/en/

On Fri, Feb 18, 2011 at 10:28 AM, Maxim D. [email protected] wrote:

> I’ll try to reproduce this issue here and dig further into it, but
> unfortunately I’m a bit busy now and it’s unlikely to happen
> anytime soon.

Interesting - both a and b solve the problem. I didn’t think that
sendfile would be used to transfer the request to the FastCGI socket,
but I guess that’s where the problem is.

Which configuration provides more efficient data transfer - sendfile
on with a TCP socket, or sendfile off with a unix socket?

  • Max

Hello!

On Fri, Feb 18, 2011 at 07:43:09AM -0500, Maxim K. wrote:

[…]

> Here’s the information in case you want to see why unix sockets cause
> this problem. I don’t think you even need to use the specific software
> I was trying to configure. Write a 1-line PHP script that saves some
> POSTed variable to a file. Then post more than 64 MB of data and see
> if it breaks.
>
> The OS is FreeBSD 7.3-RELEASE-p4 amd64. Nginx 0.8.54 configuration for
> AjaXplorer with irrelevant parts removed:

[…]

>     sendfile on;

[…]

>         fastcgi_pass unix:/tmp/php.sock;

[…]

Ok, thanks for the info. I tend to think it’s a kernel problem
with sendfile(2) and unix sockets.

Could you please test whether a) not using sendfile (in the location
in question) resolves the problem, and b) not using unix sockets
resolves the problem? Just to make things clearer.
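
A sketch of (a) applied to the location in question, with (b) noted
in a comment:

 location ~ ^/(?:index|content)\.php$ {
     sendfile off;                     # (a) do not use sendfile(2) for this location
     fastcgi_pass unix:/tmp/php.sock;  # (b) alternatively: fastcgi_pass 127.0.0.1:9000;
     fastcgi_param SCRIPT_FILENAME $request_filename;
     include fastcgi_params;
 }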

I’ll try to reproduce this issue here and dig further into it, but
unfortunately I’m a bit busy now and it’s unlikely to happen
anytime soon.

Maxim D.

Hello!

On Fri, Feb 18, 2011 at 03:17:25PM -0500, Maxim K. wrote:

> Interesting - both a and b solve the problem. I didn’t think that
> sendfile would be used to transfer the request to the FastCGI socket,
> but I guess that’s where the problem is.

Ok, thanks for testing.

> Which configuration provides more efficient data transfer - sendfile
> on with a TCP socket, or sendfile off with a unix socket?

The speed difference between TCP and unix sockets isn’t that huge,
and the copyin/copyout overhead without sendfile is likely to be
bigger. Though a) I’ve never tested this, and b) disk seeks may be
more important if you are in fact hitting the disk.

Maxim D.