Incomplete HTTP request body sent to upstream

Hi,

I’m experiencing an issue whereby nginx and the upstream server end up
disagreeing about the state of the HTTP interaction, apparently because
nginx does not transmit the complete request body. The scenario is as
follows, using nginx as a reverse proxy with upstream keepalive:

  1. The client sends a POST request to nginx with a Content-Length header
     and a relatively large body, i.e. one spanning many TCP segments.
  2. Nginx forwards the request line and headers and starts forwarding the
     body to the upstream server.
  3. While nginx is still sending, the upstream server responds early with
     a 409 based on information in the request headers, without consuming
     the body.
  4. Nginx eventually stops sending the body, i.e. it does not transmit
     the full number of bytes specified in the Content-Length, presumably
     because of the server response.
  5. Nginx reuses the same upstream connection for a different request, in
     this case a GET request.
  6. The upstream server does not see this as a new HTTP request, as it is
     still awaiting more data according to the Content-Length.

At this point, the client that sent the GET request and nginx wait for a
response while the upstream server waits for more data, until one of them
(whichever has the lowest timeout) times out, which eventually results in
the connection being closed.
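The upstream behaviour in steps 3 and 6 can be sketched with a toy backend. This is only an illustration of the pattern, not the actual server involved; the helper name and details are made up:

```python
# Toy sketch of a backend that answers early without draining the body:
# it reads only the request headers, replies 409 immediately, and leaves
# the POST body (per Content-Length) unread on the socket.
import socket

def respond_409_early(conn: socket.socket) -> None:
    # Read until the end of the request headers; the body that follows
    # is deliberately left unconsumed.
    buf = b""
    while b"\r\n\r\n" not in buf:
        chunk = conn.recv(4096)
        if not chunk:
            return
        buf += chunk
    # Answer early and keep the connection open for reuse.
    conn.sendall(b"HTTP/1.1 409 Conflict\r\n"
                 b"Content-Length: 0\r\n"
                 b"Connection: keep-alive\r\n\r\n")
    # Whatever arrives on this socket next -- leftover body bytes, or the
    # request line of a reused GET -- is indistinguishable from the
    # unfinished body, which is exactly the hang described in step 6.
```

If nginx then sends a new request on this connection, the backend treats its request line as more body bytes and both sides stall.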

According to RFC 2616, section 8.2.2 [1], if the request contained a
Content-Length and the client (nginx in this case) ceases to transmit the
body (due to an error response), the client (nginx) would have to close
the connection, which does not happen.

I am reasonably certain that the client always transmits the full body,
as the problem does not occur when the client talks directly to the
upstream server with an otherwise identical request/response pattern
(i.e. an early error response).

Can someone clarify whether this is expected behaviour / by design on the
part of nginx?

Nginx versions used: 1.6.0, 1.6.2, 1.7.7

[1] RFC 2616 - Hypertext Transfer Protocol -- HTTP/1.1

  • Roman

Hello!

On Wed, Nov 19, 2014 at 02:13:41PM +0100, Roman Borschel wrote:


Can someone clarify whether this is expected behaviour / by design on the
part of nginx?

This is a bug: currently the keepalive connection cache doesn’t know
that nginx stopped sending the body early and that the connection
therefore shouldn’t be cached. Some earlier discussion and an attempt
to fix this can be found in the thread here:

http://mailman.nginx.org/pipermail/nginx-devel/2012-March/002040.html

A trivial workaround is to disable the use of keepalive connections
(which is actually the default) if your backend behaves this way.
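For illustration, the workaround amounts to omitting the keepalive directive from the upstream block. The upstream name and address below are made up; the directive names are from the nginx documentation:

```nginx
upstream backend {
    server 127.0.0.1:8081;
    # keepalive 16;   # omitting this restores the default (no connection
                      # caching), so a connection with an unsent body is
                      # never reused for another request
}
```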


Maxim D.
http://nginx.org/

Hi Maxim,

thanks for the quick answer and the pointer to the earlier discussion.
May I ask if there is a specific reason the past discussion did not lead
to the issue being resolved, i.e. is this bug on the roadmap somewhere,
or should I file an issue?

  • Roman

Hello!

On Thu, Nov 20, 2014 at 05:34:58PM +0100, Roman Borschel wrote:

Hi Maxim,

thanks for the quick answer and the pointer to the earlier discussion.
May I ask if there is a specific reason the past discussion did not lead
to the issue being resolved, i.e. is this bug on the roadmap somewhere,
or should I file an issue?

I haven’t checked, but likely it’s not yet in the trac. It’s
probably a good idea to add a ticket to make sure it won’t be
forgotten.

Maxim D.
http://nginx.org/