I have a simple question about the Nginx FastCGI implementation. Let's say
that 1,000 web browsers are requesting pages from our server, and we're
using FastCGI within Nginx.

While the FastCGI specification permits interleaving pieces of several
requests on a single connection ("multiplexing" them), my understanding is
that most (all?) implementations don't do this. Therefore:

(1) does Nginx FastCGI provide exactly one complete HTTP request at a time?
(2) does Nginx FastCGI want us to respond to that particular HTTP request
before it gives us the next one?

The reason that I ask is this: If my FastCGI server is high-latency, I'm
wondering if it can get lots of requests to work on, or if it will have to
finish one and only then get the next one, and only Nginx knows the answer
to this question.
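
(For context, and just from my reading of the spec: every FastCGI record
carries a request ID in its header, and that ID is what would let records
from several requests share one connection if an implementation chose to
multiplex. The record header as the spec defines it:)

    typedef struct {
        unsigned char version;
        unsigned char type;
        unsigned char requestIdB1;      /* request ID, high byte */
        unsigned char requestIdB0;      /* request ID, low byte  */
        unsigned char contentLengthB1;
        unsigned char contentLengthB0;
        unsigned char paddingLength;
        unsigned char reserved;
    } FCGI_Header;
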
Many thanks for your kind help, and I hope that I didn't miss the answer to
this question in my search of the archives.

On Sun, Jun 30, 2013 at 05:10:23PM -0400, kgk wrote:
> (1) does Nginx FastCGI provide exactly one complete HTTP request at a
> time?
> (2) does Nginx FastCGI want us to respond to that particular HTTP request
> before it gives us the next one?
>
> The reason that I ask is this: If my FastCGI server is high-latency, I'm
> wondering if it can get lots of requests to work on, or if it will have to
> finish one and only then get the next one, and only Nginx knows the answer
> to this question.

FastCGI multiplexing isn't used by nginx. That is, within a single
connection to a fastcgi application only one request is sent, and then
nginx will wait for a response. More connections will be opened if there
are multiple simultaneous requests.
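
If it helps, here is a minimal sketch of the application side, assuming
libfcgi (fcgiapp.h), pthreads, and a TCP listener on port 9000; the
library, the port and the thread count are assumptions of the sketch, not
anything nginx requires. Since nginx opens a separate connection for each
in-flight request, a high-latency application can keep many requests in
progress at once simply by accepting on several threads (or in several
processes):

    /* minimal threaded FastCGI backend sketch (libfcgi assumed) */
    #include <fcgiapp.h>
    #include <pthread.h>

    #define WORKERS 8                   /* hypothetical thread count */

    static int listen_sock;
    static pthread_mutex_t accept_mutex = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        FCGX_Request req;
        int rc;

        (void) arg;
        FCGX_InitRequest(&req, listen_sock, 0);

        for (;;) {
            /* serialize accept, as libfcgi's own threaded example does */
            pthread_mutex_lock(&accept_mutex);
            rc = FCGX_Accept_r(&req);
            pthread_mutex_unlock(&accept_mutex);
            if (rc < 0)
                break;

            /* ... slow, high-latency work goes here; other threads can
               be working on other requests, each arriving on its own
               connection from nginx ... */

            FCGX_FPrintF(req.out,
                         "Content-Type: text/plain\r\n\r\nhello\r\n");
            FCGX_Finish_r(&req);        /* response goes back to nginx */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[WORKERS];
        int i;

        FCGX_Init();
        listen_sock = FCGX_OpenSocket(":9000", 128);  /* TCP port 9000 */

        for (i = 0; i < WORKERS; i++)
            pthread_create(&tid[i], NULL, worker, NULL);
        for (i = 0; i < WORKERS; i++)
            pthread_join(tid[i], NULL);

        return 0;
    }

On the nginx side, pointing fastcgi_pass at 127.0.0.1:9000 is enough; no
special configuration is needed for nginx to open multiple connections to
the backend.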