Bug in discarding the request body

Hi there,

I ran into a bug which I believe is in ngx_http_discard_request_body()
(discard_body() for short). The bug is reproducible with nginx 1.9.7.

discard_body() discards the request body by reading it. However, if the
body is not ready yet (i.e. the read returns NGX_AGAIN), it still
returns NGX_OK.

ngx_http_special_response_handler() (special_handler() for short) calls
discard_body(). If discard_body() returns NGX_OK, special_handler() does
NOT disable keepalive on the connection; in the meantime, it sends out
the special response before the request body is completely discarded.
This causes the problem!
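The failure mode can be sketched with a toy byte-level model (my own
illustration in Python, not nginx code; the parser and all names here
are hypothetical):

```python
# Toy model of the keepalive desync described above (not nginx code).
# parse_request_line() reads one request's head from a byte stream and
# reports the declared Content-Length, but does NOT consume the body --
# mirroring a server that responds before discarding the body.

def parse_request_line(buf):
    """Return (request_line, bytes_after_headers, content_length)."""
    head, _, rest = buf.partition(b"\r\n\r\n")
    lines = head.split(b"\r\n")
    length = 0
    for line in lines[1:]:
        name, _, value = line.partition(b":")
        if name.lower() == b"content-length":
            length = int(value.decode())
    return lines[0], rest, length

# R1's 10-byte body is delayed; R2 arrives right behind it, so the
# keepalive connection eventually holds: headers(R1) + body(R1) + R2.
r1 = b"POST /bad1 HTTP/1.1\r\nContent-Length: 10\r\n\r\n"
r1_body = b"0123456789"
r2 = b"GET /good HTTP/1.1\r\n\r\n"
stream = r1 + r1_body + r2

line1, rest, length = parse_request_line(stream)

# Buggy path: respond to R1 now, discard nothing, and parse the "next
# request" from the leftover bytes -- parsing starts at R1's body, so
# the request line is garbage (=> 400 Bad Request).
line2, _, _ = parse_request_line(rest)

# Correct path: skip the declared body first, then parse R2 cleanly.
line3, _, _ = parse_request_line(rest[length:])
```

Here line2 comes out as R1's body bytes glued onto R2's request line,
while line3 is the clean `GET /good HTTP/1.1`.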

This is my setup to expose the bug:

My OS is Ubuntu 14.

a) The "backend" server has two locations: one returns 200, the other
   returns 403.
b) The reverse proxy hooks up to the backend with:
   b1) keepalive;
   b2) unbuffered uploading, so we can send an incomplete request to the
       backend, which mimics the situation where the body is not sent at
       all or is somehow delayed;
   b3) send an incomplete POST request R1 to the '/bad' location via
       "cat post_bad.txt | nc -q -1 8080";
   b4) (must be quick) send another request R2 via "curl -X GET ...";
       you may end up seeing a 400 response.


If you strace the backend server, you will see that it first sends the
403 response to the proxy, then calls recvfrom() trying to get the body
of R1. The recvfrom() does not get the body of R1; instead it gets the
leading part of R2 and discards it. The subsequent call to recvfrom()
gets the trailing part of R2. Nginx thinks that is the starting part of
R2, and gives a 400 response.

Proposed fixes:

  1. Make sure discard_body() completely discards the body before the
     header is sent.

     The disadvantage is that it may waste a lot of resources (e.g.
     when the body is large).

  2. If discard_body() does not complete, return NGX_AGAIN instead of
     NGX_OK, whereby the special handler disables keepalive, making
     sure the boundaries between requests stay clean.

     The disadvantage is that it compromises the performance of the
     connections between the backend and the proxy.
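Proposed fix 2 can be sketched the same way (again my own toy model in
Python with hypothetical names; the real change would be in nginx's
discard-body code path):

```python
# Sketch of proposed fix 2 (toy model, not nginx code): when the body
# cannot be fully discarded yet, report "again" so the caller can
# disable keepalive and close the connection after responding.

def discard_body(available, content_length):
    """Return ('ok', leftover_bytes) when the whole body was discarded,
    or ('again', b'') when body bytes are still in flight."""
    if len(available) >= content_length:
        return "ok", available[content_length:]
    return "again", b""

# Only 4 of R1's 10 body bytes have arrived:
state, leftover = discard_body(b"0123", 10)
keepalive = (state == "ok")  # fix 2: 'again' (NGX_AGAIN) => keepalive off

# With keepalive off, the connection is closed after the 403 response,
# so R2 can never be misread as R1's leftover body bytes.
```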

Please shed some light; many thanks in advance!


Following is my setup:

  1. proxy conf:

    upstream backend {
        server 127.0.0.1:8081;  # assumed; the backend below listens on 8081
        keepalive 32;
    }

    server {
        listen 8080;
        proxy_request_buffering off;
        location / {
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_pass http://backend;
        }
    }
  2. backend conf:

    server {
        listen 8081;
        proxy_request_buffering off;  # does not matter

        location /good {
            content_by_lua 'ngx.say("lol")';
        }

        location /bad1 {
            return 403;
        }
    }
  3. incomplete request to the bad location:

    cat post_bad1.txt | nc -q -1 8080

The post_bad1.txt is attached to this mail.
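The attachment itself is not reproduced in this post; as an assumption
about its shape (hypothetical content, not the real post_bad1.txt), an
incomplete POST simply declares more body bytes than it sends:

```python
# Hypothetical stand-in for post_bad1.txt (assumed shape only, not the
# real attachment): Content-Length promises 100 bytes, but only 7 are
# sent, so the server keeps waiting for the rest of the body.
incomplete_post = (
    b"POST /bad1 HTTP/1.1\r\n"
    b"Host: localhost\r\n"
    b"Content-Length: 100\r\n"
    b"\r\n"
    b"partial"  # 7 bytes of a declared 100-byte body
)
declared = 100
sent = len(incomplete_post.partition(b"\r\n\r\n")[2])  # bytes actually sent
```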

Hi Maxim,

Thank you so much for your insightful comment!

Unbuffered uploading is not just a way to make the bug easier to
reproduce; with it, reproducing the bug is trivial, so to speak. That
is to say, it is rather dangerous to use unbuffered uploading along
with keepalive connections, as the combination makes the proxy server
paper-thin to penetrate.

