Broken pipe while sending request to upstream

Hi.

I’ve set up nginx as a proxy for a Jetty service. It works nicely most of the time, except when issuing a (somewhat) larger POST request to an entity that is protected by HTTP Basic access authentication.

The web app responds with a 401 immediately, probably closing the connection right away:

127.0.0.1 - - [17/Sep/2013:14:17:38 +0000] "POST /scm/blub?cmd=unbundle HTTP/1.0" 401 1412

But nginx gratuitously insists on sending all the data, which eventually fails:

2013/09/17 16:17:38 [error] 22873#0: *1 writev() failed (32: Broken pipe) while sending request to upstream, client: 192.168.2.8, server: test.int, request: "POST /scm/blub?cmd=unbundle HTTP/1.1", upstream: "http://127.0.0.1:8082/scm/blub?cmd=unbundle", host: "test.int"

I also tried different config options like enabling sendfile and increasing buffer sizes and timeouts, but it didn’t help.
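
For reference, the relevant part of my proxy config looks roughly like this (the location and upstream address are the ones from the log above; the directive values are just what I last experimented with, nothing I’m attached to):

    location /scm/ {
        proxy_pass               http://127.0.0.1:8082;
        sendfile                 on;      # one of the options I toggled
        client_body_buffer_size  1m;      # increased request body buffering
        proxy_buffers            8 64k;   # increased response buffering
        proxy_read_timeout       300s;
        proxy_send_timeout       300s;
    }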

Is there some way to make this work? Is this a bug?

I’m using Ubuntu 12.04 LTS on Linux with nginx 1.1.19-1ubuntu0.2.

Thanks for any help!


Hello!

On Tue, Sep 17, 2013 at 11:11:05AM -0400, Claudio wrote:

> I also tried different config options like enabling sendfile and
> increasing buffer sizes and timeouts, but it didn’t help.
>
> Is there some way to make this work? Is this a bug?

As long as the connection is closed before nginx is able to get a
response, it looks like a problem in your backend. Normally such
connections need lingering close to make sure the client has a chance
to read the response.


Maxim D.
http://nginx.org/en/donation.html

Hi Maxim.

Maxim D. Wrote:

> As long as the connection is closed before nginx is able to get a
> response, it looks like a problem in your backend. Normally such
> connections need lingering close to make sure the client has a chance
> to read the response.

Thanks for your prompt response!

I read an illustrative description of lingering close here
(https://mail-archives.apache.org/mod_mbox/httpd-dev/199701.mbox/[email protected]>)
and now understand the problem itself better.

What I’m not getting straight is why nginx does not see the response (assuming it really was sent off by the server). Does nginx try to read data from the connection while sending, or when an error occurs during send? (Sorry for those dumb questions, but obviously I don’t have the slightest idea how nginx works…)

According to Jetty’s documentation, “Jetty attempts to gently close all TCP/IP connections with proper half close semantics, so a linger timeout should not be required and thus the default is -1.” Would this actually enable nginx to see the response from the server? Or is it really necessary to fully read the body before sending a response, as indicated by this (switching reverse proxy from apache2 to nginx – .pQd's log) post I found?

I don’t know for sure about the client, but nginx is talking via HTTP/1.1 to the web app. Is it possible to enable the Expect: 100-continue method for this connection so that nginx sees the early response?

Alternatively, is it possible to work around this problem? Could I define some rules to the effect that, if a POST request to that specific location arrives without an “Authorization” header present, the request body is stripped, the Content-Length set to 0, and the request then forwarded?
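
Something along these lines is what I have in mind (an untested sketch; the 550 code and the named location are arbitrary placeholders I made up):

    location /scm/ {
        # No credentials yet: jump to the body-less variant of the request.
        if ($http_authorization = "") {
            return 550;
        }
        error_page 550 = @scm_probe;

        proxy_pass http://127.0.0.1:8082;
    }

    location @scm_probe {
        # Forward the same request without its body, so the backend can
        # answer with its 401 challenge immediately.
        proxy_pass               http://127.0.0.1:8082;
        proxy_pass_request_body  off;
        proxy_set_header         Content-Length 0;
    }

The client would then get the 401 challenge and retry the POST with credentials, which the first branch forwards normally.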


Hello!

On Wed, Sep 18, 2013 at 02:52:39AM -0400, Claudio wrote:

> According to Jetty’s documentation, “Jetty attempts to gently close all
> TCP/IP connections with proper half close semantics, so a linger timeout
> should not be required and thus the default is -1.” Would this actually
> enable nginx to see the response from the server? Or is it really
> necessary to fully read the body before sending a response, as indicated
> by this (switching reverse proxy from apache2 to nginx – .pQd's log)
> post I found?

While sending a request, nginx monitors the connection to see if
there is any data available from the upstream (using the configured
event method), and if there is, it reads the data (and handles it as
a normal HTTP response).

It doesn’t try to read anything if it got a write error, though,
and an error will be reported if the backend closes the connection
before nginx was able to see that there was data available for reading.

Playing with settings like sendfile and sendfile_max_chunk, as well
as the TCP buffers configured in your OS, might be helpful if your
backend closes the connection too early. The idea is to make sure
nginx won’t be blocked for a long time in sendfile() or the like, and
will be able to detect data available for reading before an error
occurs during writing.
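
As a sketch of that idea (the directive values here are arbitrary examples, not recommendations, and OS-level TCP buffer tuning is a separate step):

    location /scm/ {
        proxy_pass          http://127.0.0.1:8082;
        sendfile            on;
        sendfile_max_chunk  128k;   # cap the amount written per sendfile() call,
                                    # so nginx returns to its event loop more often
    }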

> I don’t know for sure about the client, but nginx is talking via HTTP/1.1
> to the web app. Is it possible to enable the Expect: 100-continue method
> for this connection so that nginx sees the early response?

No, “Expect: 100-continue” isn’t something nginx is able to use
while talking to backends.

> Alternatively, is it possible to work around this problem? Could I define
> some rules to the effect that, if a POST request to that specific location
> arrives without an “Authorization” header present, the request body is
> stripped, the Content-Length set to 0, and the request then forwarded?

You can, but I would rather recommend digging deeper into what goes
on and fixing the root cause.


Maxim D.
http://nginx.org/en/donation.html

Hello Maxim,

thanks a lot for your explanation. I’ve (kind of) solved the problem for now.

I was testing with another proxy in between nginx and the Jetty server to see whether it would behave differently. I just used Twitter’s Finagle, which is based on Netty, and got a few error messages like this:

18.09.2013 11:59:58 com.twitter.finagle.builder.SourceTrackingMonitor handle
FATAL: A server service unspecified threw an exception
com.twitter.finagle.ChannelClosedException: ChannelException at remote address: localhost/127.0.0.1:8082
    at com.twitter.finagle.NoStacktrace(Unknown Source)

So I tried to dig deeper on the Jetty side of things.

In the end, I just upgraded the web application running inside of Jetty, and this solved the problem. Maybe I should make this a reflex: first update everything to the latest version before even trying to understand the problem, but that’s not so easy to do in general…

Thanks again!
