Is it possible for nginx not to buffer the client body?

Hello!

Is it possible for nginx to avoid buffering the client body before handing
the request to the upstream?

We want to use nginx as a reverse proxy to upload very big files to the
upstream, but the default behavior of nginx is to save the whole request to
the local disk first before handing it to the upstream. This makes it
impossible for the upstream to process the file on the fly while it is
uploading, and results in much higher request latency and server-side
resource consumption.

Thanks!

I know the nginx team is working on it. You can wait for it.

If you are eager for this feature, you could try my patch:
https://github.com/taobao/tengine/pull/91. This patch has been running on
our production servers.

2013/1/11 li zJay [email protected]

lm011111 Wrote:

the local disk first before handing it to the upstream, which makes it
impossible for the upstream to process the file on the fly while it is
uploading, and results in much higher request latency and server-side
resource consumption.

Thanks!


You could use the nginx upload module to do this.

http://www.grid.net.ru/nginx/upload.en.html

I am currently using this to support a video upload application built with
Django, and it has full upload-resume functionality as well. The only
caveat is that I cannot support resumes across different sessions, but
within the same session it works (it uses the X-UPLOAD-SESSION-ID header or
something similar).
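For anyone unfamiliar with the module, a minimal sketch of an upload
location might look like the following. The location names, storage path,
and backend name here are placeholders of mine, not from this thread; the
directives are from the module's documentation:

```nginx
# Sketch: nginx upload module writes file bodies to disk itself,
# then hands only the form fields (plus the on-disk paths) to the
# backend, so the app never streams the raw upload.
location /upload {
    upload_pass   /internal_upload;           # backend location for the fields
    upload_store  /var/spool/nginx_upload 1;  # where file bodies are written

    # pass the original field name and the temporary path to the backend
    upload_set_form_field $upload_field_name.name "$upload_file_name";
    upload_set_form_field $upload_field_name.path "$upload_tmp_path";
}

location /internal_upload {
    proxy_pass http://django-backend;  # placeholder upstream
}
```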

Posted at Nginx Forum:

This patch should work between nginx-1.2.6 and nginx-1.3.8.

The documentation is here:

client_body_postpone_sending

Syntax: client_body_postpone_sending size

Default: 64k

Context: http, server, location

If you set proxy_request_buffering or fastcgi_request_buffering to off,
nginx will send the body to the backend once it has received more than
size bytes of data, or once the whole request body has been received.
This can save connections and reduce the number of I/O operations with
the backend.

proxy_request_buffering

Syntax: proxy_request_buffering on | off

Default: on

Context: http, server, location

Specifies whether the request body will be buffered to disk or not. If it
is off, the request body will be stored in memory and sent to the backend
after nginx receives more than client_body_postpone_sending bytes of data.
This can save disk I/O with large request bodies.

Note that if you set it to off, the nginx retry mechanism for unsuccessful
responses will be broken once part of the request has been sent to the
backend; nginx will just return 500 when it encounters such an
unsuccessful response. This directive also breaks these variables:
$request_body, $request_body_file. You should not use these variables any
more, as their values are undefined.

fastcgi_request_buffering

Syntax: fastcgi_request_buffering on | off

Default: on

Context: http, server, location

The same as proxy_request_buffering.
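To make the usage concrete, here is a minimal sketch combining the patched
directives in a proxy location. The directive names come from the patch
documentation above; the upstream name is a placeholder:

```nginx
# Sketch only: these directives come from the tengine patch discussed
# in this thread, not stock nginx of that era. "backend" is a
# placeholder upstream.
location /upload {
    client_body_postpone_sending 64k;   # flush to the backend every 64k
    proxy_request_buffering      off;   # stream the body, don't spool to disk
    proxy_pass http://backend;
}
```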

2013/1/13 li zJay [email protected]

Hello!

@yaoweibin

If you are eager for this feature, you could try my patch:
https://github.com/taobao/tengine/pull/91. This patch has been running in
our production servers.

What nginx version is your patch based on?

Thanks!


Yes. It should work for any request method.

2013/1/16 Pasi Kärkkäinen [email protected]

On Sun, Jan 13, 2013 at 08:22:17PM +0800, Weibin Y. wrote:


Hello,

This patch sounds exactly like what I need as well!
I assume it works for both POST and PUT requests?

Thanks,

– Pasi


On Thu, Jan 17, 2013 at 11:15:58AM +0800, Weibin Y. wrote:

Yes. It should work for any request method.

Great, thanks, I’ll let you know how it works for me. Probably in two
weeks or so.

– Pasi



Use the patch I attached in this mail thread instead, don't use the pull
request patch, which is for tengine.

Thanks.

2013/2/22 Pasi Kärkkäinen [email protected]

On Fri, Jan 18, 2013 at 10:38:21AM +0200, Pasi Kärkkäinen wrote:

On Thu, Jan 17, 2013 at 11:15:58AM +0800, Weibin Y. wrote:

Yes. It should work for any request method.

Great, thanks, I’ll let you know how it works for me. Probably in two weeks or
so.

Hi,

Adding the tengine pull request 91 on top of nginx 1.2.7 doesn’t work:

cc1: warnings being treated as errors
src/http/ngx_http_request_body.c: In function
‘ngx_http_read_non_buffered_client_request_body’:
src/http/ngx_http_request_body.c:506: error: implicit declaration of
function ‘ngx_http_top_input_body_filter’
make[1]: *** [objs/src/http/ngx_http_request_body.o] Error 1
make[1]: Leaving directory `/root/src/nginx/nginx-1.2.7’
make: *** [build] Error 2

ngx_http_top_input_body_filter() cannot be found from any .c/.h files…
Which other patches should I apply?

Perhaps this?

Thanks,

– Pasi

On Fri, Feb 22, 2013 at 11:25:24AM +0200, Pasi Kärkkäinen wrote:

On Fri, Feb 22, 2013 at 10:06:11AM +0800, Weibin Y. wrote:

Use the patch I attached in this mail thread instead, don't use the pull
request patch which is for tengine.
Thanks.

Oh sorry I missed that attachment. It seems to apply and build OK.
I’ll start testing it.

I added the patch on top of nginx 1.2.7 and enabled the following options:

client_body_postpone_sending 64k;
proxy_request_buffering off;

After that, connections through the nginx reverse proxy started failing
with errors like this:

[error] 29087#0: *49 upstream prematurely closed connection while
reading response header from upstream
[error] 29087#0: *60 upstream sent invalid header while reading response
header from upstream

And the services are unusable.

Commenting out the two config options above makes nginx happy again.
Any idea what causes that? Any tips on how to troubleshoot it?
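One low-risk way to narrow this down (my own suggestion, not something
from the thread) would be to rebuild with --with-debug and turn on
debug-level logging for the affected vhost, then compare what nginx sends
upstream with and without the two directives. The log path below is a
placeholder:

```nginx
# Sketch: debug logging for the failing vhost. Requires an nginx
# binary built with --with-debug; otherwise "debug" falls back to
# less verbose output.
error_log /var/log/nginx/error-service-debug.log debug;
```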

Thanks!

– Pasi

Can you show me your configuration? It works for me with nginx-1.2.7.

Thanks.

2013/2/22 Pasi Kärkkäinen [email protected]

On Fri, Feb 22, 2013 at 10:06:11AM +0800, Weibin Y. wrote:

Use the patch I attached in this mail thread instead, don't use the pull
request patch which is for tengine.
Thanks.

Oh sorry I missed that attachment. It seems to apply and build OK.
I’ll start testing it.

Thanks!

– Pasi



nginx mailing list
[email protected]
http://mailman.nginx.org/mailman/listinfo/nginx

This would be an excellent feature!


I know the nginx team is working on it. You can wait for it.

Hopefully they will find a solution!


On Mon, Feb 25, 2013 at 10:13:42AM +0800, Weibin Y. wrote:

Can you show me your configuration? It works for me with nginx-1.2.7.
Thanks.

Hi,

I'm using the nginx 1.2.7 el6 src.rpm rebuilt with the "headers more"
module added, and your patch.

I’m using the following configuration:

server {
    listen                  public_ip:443 ssl;
    server_name             service.domain.tld;

    ssl                     on;
    keepalive_timeout       70;

    access_log              /var/log/nginx/access-service.log;
    access_log              /var/log/nginx/access-service-full.log full;
    error_log               /var/log/nginx/error-service.log;

    client_header_buffer_size 64k;
    client_header_timeout   120;

    proxy_next_upstream     error timeout invalid_header http_500 http_502 http_503;
    proxy_set_header        Host $host;
    proxy_set_header        X-Real-IP $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_redirect          off;
    proxy_buffering         off;
    proxy_cache             off;

    add_header              Last-Modified "";
    if_modified_since       off;

    client_max_body_size    262144M;
    client_body_buffer_size 1024k;
    client_body_timeout     240;

    chunked_transfer_encoding off;

    client_body_postpone_sending 64k;
    proxy_request_buffering off;

    location / {
        proxy_pass https://service-backend;
    }
}

Thanks!

– Pasi



Hello!

On Thu, Feb 28, 2013 at 05:36:23PM +0000, André Cruz wrote:

I’m also very interested in being able to configure nginx to NOT
proxy the entire request.

Regarding this patch,
https://github.com/alibaba/tengine/pull/91, is anything
fundamentally wrong with it? I don’t understand Chinese so I’m
at a loss here…

As a non-default mode of operation, the approach taken is likely
good enough (I have not looked into the details), but the patch won't work
with current nginx versions - at least it needs (likely major)
adjustments to cope with the changes introduced during the work on chunked
request body support, as available in nginx 1.3.9+.


Maxim D.
http://nginx.org/en/donation.html

On Thursday 28 February 2013 21:36:23 André Cruz wrote:

I’m also very interested in being able to configure nginx to NOT proxy the
entire request.

[…]

Actually, you can.

http://nginx.org/r/proxy_set_body
http://nginx.org/r/proxy_pass_request_body
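For completeness, a minimal sketch of the two directives Valentin points
to. Note that they discard or replace the client body rather than
streaming it, so this is only a partial answer to the original question;
the location and upstream names below are placeholders:

```nginx
# Sketch: stop nginx from forwarding the client body at all.
# Useful for things like auth subrequests, not for streaming uploads.
location /auth-check {
    proxy_pass_request_body off;        # drop the client body entirely
    proxy_set_header Content-Length ""; # clear the now-stale length header
    proxy_pass http://auth-backend;
}
```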

wbr, Valentin V. Bartenev