Hello!
On Mon, Dec 14, 2009 at 12:42:41PM +0100, Rasmus Andersson wrote:
I’m well aware of this. The code (as you might have noticed) is in the
early stages. Cleanup etc will be done once I’ve got the general ideas
straight.
Yep. It’s probably a good idea to post it for review once you’ve
done the cleanup? Unless you are going to run out of karma points
before that happens…
can see nginx log in debug mode – no “disconnect” or “connect”
I don’t see nginx running in debug mode in the screenshot in
question. Instead you use ngx_log_error(NGX_LOG_DEBUG, …) for
your own messages, which produces similar-looking output (i.e.
logged at “[debug]” level) but is not in fact debug logging. And
the other debug messages are obviously not there.
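To illustrate the difference (a sketch, not code from the patch; the request variable r and the message texts are made up): the first call below merely logs at the [debug] severity and is always written, which is what the patch does; the second uses nginx’s real debug macros, which produce output only when nginx is built with --with-debug and error_log is configured at the debug level:

```c
/* Logged unconditionally, merely tagged "[debug]" -- this is what
 * ngx_log_error(NGX_LOG_DEBUG, ...) does: */
ngx_log_error(NGX_LOG_DEBUG, r->connection->log, 0,
              "fcgi client connected");

/* Real debug logging: compiled in only with --with-debug and
 * emitted only when the error_log level is "debug": */
ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
               "http fastcgi connect");
```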
fcgi client 127.0.0.1 connected on fd 5
…
app_handle_beginrequest 0x100530
…
app_handle_beginrequest 0x100530
…
Multiple requests received and handled over a period of time over one
persistent connection from nginx.
But I guess I’m missing something?
Looks like it. Or you are not running the code you published.
On the other hand, it sets the FASTCGI_KEEP_CONN flag and thus breaks
things, as nginx relies on the fastcgi application to close the connection.
The FastCGI server will still be responsible for closing the connection.
No. Once nginx sets the FASTCGI_KEEP_CONN flag, it takes on
this responsibility according to the fastcgi spec.
But it looks like you missed the actual problem: nginx needs the
fastcgi application to close the connection after it has finished
sending the response. It uses the connection close as a flush
signal. Without this, requests will hang.
So basically it’s not something useful right now. And it breaks
3 out of 4 fastcgi subtests in the test suite
(nginx-tests: log).
I wasn’t aware of those tests. In what way does it break the tests?
Would you please help me solve the issues in my code breaking these
tests?
As I don’t see how your code can work at all, it’s unlikely I’ll
be able to help. The tests just show that I’m right and it doesn’t
work at all (the only test that passes checks that a HEAD request
returns no body…).
website becomes very popular and a lot of your visitors have slow
Saving gigabytes of memory and tens of thousands of file descriptors.
Today, the only option is to build purpose-made nginx-modules or whole
http-servers running separately from nginx.
I say this with real-world experience. Would be awesome to implement
the complete FastCGI 1.0 spec in nginx and be the first web server to
support long-lived and slow connections with rich web apps!
The difference between fastcgi multiplexing and multiple tcp
connections to the same fastcgi server isn’t that huge. Note
well: the same fastcgi server, not another one.
You may save two file descriptors per request (one in nginx, one
in the fastcgi app), and the associated tcp buffers. But all this
isn’t likely to be noticeable given the number of resources you
have already spent on the request in question.
The only use case which appears to be somewhat valid is
long-polling apps, which consume almost no resources. But even
here you aren’t likely to save more than half the resources.
may want to take a look if you are going to continue your
multiplexing work.
Thanks. Do you know where I can find those? Any hint to where I should
start googling to find it in the archives?
Here is the initial post in the Russian mailing list:
http://nginx.org/pipermail/nginx-ru/2009-April/024101.html
Here is the update for the last patch:
http://nginx.org/pipermail/nginx-ru/2009-April/024379.html
Not sure the patches still apply cleanly; I haven’t touched this
for a while.
Maxim D.