I’ve spent the last few days researching this, and I’m pretty sure that there’s a bug in how nginx handles FastCGI requests.
According to http://fastcgi.com/devkit/doc/fcgi-spec.html#S3.5 :

"The Web server controls the lifetime of transport connections. The Web server can close a connection when no requests are active. Or the Web server can delegate close authority to the application (see FCGI_BEGIN_REQUEST). In this case the application closes the connection at the end of a specified request."

and

"Simple applications will process one request at a time and accept a new transport connection for each request. More complex applications will process concurrent requests, over one or multiple transport connections, and will keep transport connections open for long periods of time."
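To make the "delegate close authority" part concrete, here is a rough Python sketch (my own illustration, not code from nginx or from my test application; the helper names fcgi_record and begin_request are made up, but the record layout and constants come straight from section 5.1 of the spec linked above) of the FCGI_BEGIN_REQUEST record a web server sends when it wants the application to keep the connection open:

    import struct

    FCGI_VERSION_1     = 1
    FCGI_BEGIN_REQUEST = 1
    FCGI_RESPONDER     = 1
    FCGI_KEEP_CONN     = 1   # flags bit: application keeps the connection open

    def fcgi_record(rec_type, request_id, content):
        # header: version, type, requestId, contentLength, paddingLength, reserved
        header = struct.pack('!BBHHBB', FCGI_VERSION_1, rec_type, request_id,
                             len(content), 0, 0)
        return header + content

    def begin_request(request_id, keep_conn=True):
        flags = FCGI_KEEP_CONN if keep_conn else 0
        # body: role (2 bytes), flags (1 byte), 5 reserved bytes
        body = struct.pack('!HB5x', FCGI_RESPONDER, flags)
        return fcgi_record(FCGI_BEGIN_REQUEST, request_id, body)

With FCGI_KEEP_CONN set, the application is explicitly told not to close the connection when a request finishes, which is exactly the multiplexing case described next.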
I apparently have a “more complex” application library (perl library FCGI::Async, see CPAN). I’m finding that nginx sits and waits for the application to close the connection, which it does not do, since it wants to be able to multiplex requests. If you terminate the FastCGI application prematurely, nginx will assume the request is complete and send the response to the browser just fine. But if you don’t, it waits forever and then times out.
See also http://fastcgi.com/devkit/doc/fcgi-spec.html#SB : Example 4 there illustrates this multiplexing.
Nginx should consider the FastCGI request complete when it receives FCGI_REQUEST_COMPLETE, not when the connection is closed. Even if it then forced the connection closed itself, that would be better than the current behavior.
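To be clear about what I’m proposing, here is a rough Python sketch (again my own illustration, not a patch against the nginx source; read_exact and read_response are made-up helper names, but the record layouts are taken from the spec linked above) of how the web-server side can detect the end of a request from the FCGI_END_REQUEST record alone, even while the connection stays open for other multiplexed requests:

    import struct

    FCGI_END_REQUEST      = 3
    FCGI_STDOUT           = 6
    FCGI_REQUEST_COMPLETE = 0

    def read_exact(sock, n):
        buf = b''
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError('application closed the connection early')
            buf += chunk
        return buf

    def read_response(sock, request_id):
        # Collect FCGI_STDOUT data for one request until FCGI_END_REQUEST arrives.
        stdout = b''
        while True:
            version, rec_type, rid, clen, plen, _ = struct.unpack(
                '!BBHHBB', read_exact(sock, 8))
            content = read_exact(sock, clen)
            read_exact(sock, plen)      # discard padding
            if rid != request_id:
                continue                # record belongs to another multiplexed request
            if rec_type == FCGI_STDOUT:
                stdout += content
            elif rec_type == FCGI_END_REQUEST:
                # body: appStatus (4 bytes), protocolStatus (1 byte), 3 reserved bytes
                app_status, protocol_status = struct.unpack('!IB3x', content)
                if protocol_status == FCGI_REQUEST_COMPLETE:
                    return stdout       # request is complete; no need to wait for close
                raise RuntimeError('request ended with protocol status %d'
                                   % protocol_status)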
I will note that lighttpd seems to handle this correctly, though I didn’t look into it very deeply. I was also going to try to provide a patch against 0.7.1, but I haven’t spent enough time with the source yet to understand everything that is going on, and I didn’t know where to start.
I can provide a test application (in perl, which requires several libraries) upon request, but the FastCGI spec should be sufficient.
Thanks.