Proxy_buffering off causes truncated responses when backend emits response in small chunks

We use nginx as both a load-balancer and a webserver. This issue is
with the nginx functioning as a load-balancer.

We reverse proxy to 6 nginx webservers running a number of Unicorn
(Rails) application servers; these webserver nginx instances also run
Evan Miller’s mod_zip to assemble archives on the fly. We have
discovered that, under certain circumstances, the load-balancing nginx
will “hang up” on the webserver if the load-balancer is configured
with proxy_buffering off, whereas with proxy_buffering on it succeeds.
We would prefer to run without proxy_buffering to prevent the
load-balancer’s local storage from being overrun.
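
For illustration, the relevant part of the load-balancer configuration
looks roughly like this (the upstream addresses, server_name and the
single location are simplified placeholders, not the actual linked
config):

    upstream webservers {
        # six webserver nginx instances running Unicorn + mod_zip
        server 10.0.0.1:80;
        server 10.0.0.2:80;
        # ... four more
    }

    server {
        listen 80;
        server_name mydomain.com;

        location / {
            proxy_pass http://webservers;

            # stream large mod_zip archives straight through instead
            # of spooling them to the load-balancer's local disk
            proxy_buffering off;
        }
    }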

Our default setup uses nginx 0.7.65 for both the load-balancer and the
webserver; however, switching to 1.0.12 as the load-balancer has the
same problem. We have experimented with different software doing the
load-balancing and it does not exhibit this issue.

I have linked the nginx configuration file we’re using on the
load-balancer, and debug logs for both 0.7.65 and 1.0.12.

https://x.onehub.com/transfers/sg32zsar

The “proxy_buffering on” log is very long, but it does show success
with a 4.8GB response; the other responses always fail at the same
point (826 MB).

The client sees the following (in access.log):

$ curl -b cookie.txt -o US.zip https://mydomain.com/folders/7816672/archive
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
 16 4922M   16  826M    0     0  1906k      0  0:44:03  0:07:23  0:36:40 1075k
curl: (18) transfer closed with 4294967296 bytes remaining to read

The nginx instance serving as the webserver logs:

Feb 07 15:03:39 ip-10-2-185-35 error.log: 2012/02/07 23:03:39 [info] 21425#0: *16656299 client closed prematurely connection, so upstream connection is closed too (104: Connection reset by peer) while reading upstream, client: 10.254.174.80, server: mydomain.com, request: "GET /folders/7816672/archive HTTP/1.0", subrequest: "/s3/asset-27235062", upstream: "http://72.21.215.100:80/bucket/asset-27235062?AWSAccessKeyId=key&Expires=1328741778&Signature=signature", host: "mydomain.com"

Hello!

On Tue, Feb 07, 2012 at 03:54:08PM -0800, W. Andrew Loe III wrote:

> We would prefer to run without proxy_buffering to prevent the
> load-balancer’s local storage from being overrun.

If you want to disable disk buffering, you don’t need to disable
buffering at all. Use

proxy_max_temp_file_size 0;

instead.
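
That is, something along these lines (the location and upstream name
are just placeholders) keeps the normal in-memory buffers but never
writes the response to a temporary file on disk:

    location / {
        proxy_pass http://webservers;

        # proxy_buffering stays on (the default); nginx still uses its
        # in-memory proxy_buffers, but never spools the response to a
        # temp file on the load-balancer's disk
        proxy_max_temp_file_size 0;
    }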

The only valid reason to disable buffering completely with
“proxy_buffering off” is when you need even a single byte from the
backend to be passed to the client immediately, e.g. in some
streaming / long-polling cases.
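
In such cases the usual pattern is to disable buffering only for the
location that actually streams, for example (the /events location
here is purely illustrative):

    location /events {
        proxy_pass http://webservers;

        # long polling / streaming: pass every byte to the client as
        # soon as it arrives from the backend
        proxy_buffering off;
    }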

> The “proxy_buffering on” log is very long, but it does show success
> with a 4.8GB response; the other responses always fail at the same
> point (826 MB).
>
> curl: (18) transfer closed with 4294967296 bytes remaining to read

The response is over 4G, and this won’t work with “proxy_buffering
off” in 1.0.x on 32bit systems. The non-buffered mode was originally
designed for small memcached responses and used to use size_t for
length storage, which wraps at 2^32 bytes (4 GiB) on 32bit platforms.
That is also why the transfer stops at exactly 826M (4922M minus
4096M), with exactly 4294967296 bytes, i.e. 2^32, left to read.

You have to upgrade to 1.1.x, which now uses off_t and will be able
to handle large responses even on 32bit platforms. Alternatively,
just forget about “proxy_buffering off”, as you don’t need it anyway;
see above.

Maxim D.

Mystery solved. I will just use proxy_max_temp_file_size 0, as the
intention was only to avoid the disk.

Thank you!