Why Doesn't Nginx Implement FastCGI Multiplexing?

Hi,

I’ve been doing some research on FastCGI recently. As far as I can see from the FastCGI specification, it does support multiplexing requests over a single connection. But apparently none of the current web servers (Nginx, Apache, or Lighttpd) supports this feature.
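
For reference, the hook for multiplexing in the protocol is the request ID carried by every record: since each record is tagged, records belonging to different requests could in principle be interleaved on one connection. A sketch of the 8-byte record header from section 3.3 of the spec (field names are mine, lowercased from the spec's):

    #include <stdint.h>

    /* the 8-byte FastCGI record header (FastCGI spec, section 3.3) */
    typedef struct {
        uint8_t version;            /* FCGI_VERSION_1 */
        uint8_t type;               /* FCGI_BEGIN_REQUEST, FCGI_STDIN, ... */
        uint8_t request_id_b1;      /* request ID, high byte */
        uint8_t request_id_b0;      /* low byte; nonzero per request (0 is
                                       reserved for management records), so
                                       records of different requests can
                                       share one connection */
        uint8_t content_length_b1;
        uint8_t content_length_b0;
        uint8_t padding_length;
        uint8_t reserved;
    } fcgi_header;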

I found a thread on the nginx dev mailing list from back in 2009 stating that multiplexing won’t make much difference in performance:

But I also found an interesting article from back in 2002 on how great this feature is:
http://www.nongnu.org/fastcgi/#multiplexing

I don’t have the ability to run a test on this myself, but another protocol, SPDY, which has recently become very popular and whose Nginx patch is already usable, also features multiplexing. So I’m curious why SPDY’s multiplexing is considered great while FastCGI’s is not.

One reason I can think of is that a TCP connection over the Internet is expensive, affected by RTT, CWND, and other pipe warm-up issues. But a TCP connection within an IDC (or a Unix-domain socket on localhost) is much cheaper. Besides, the application can also go the event-based way: accept as many connections as it can from the listening socket and process them asynchronously.

Does my point make sense? Or are there other, more substantial reasons?

Thanks

Jerry

Hello!

On Sat, Mar 09, 2013 at 10:43:47PM +0800, Ji Zhang wrote:

But I also found an interesting article from back in 2002 on how great this feature is:
FastCGI — The Forgotten Treasure

This article seems to confuse FastCGI multiplexing with
event-based programming. Handling multiple requests in a single
process is great - and nginx does so. But you don’t need FastCGI
multiplexing to do it.
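
For illustration, a minimal sketch of that event-based model (Linux epoll; error handling and the actual FastCGI parsing are omitted, and port 9000 is just a conventional FastCGI port): one process watches many sockets, with one request per connection and no protocol-level multiplexing involved.

    #include <fcntl.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>

    int main(void) {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = htons(9000);             /* typical FastCGI port */
        bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
        fcntl(lfd, F_SETFL, O_NONBLOCK);
        listen(lfd, SOMAXCONN);

        int ep = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = lfd };
        epoll_ctl(ep, EPOLL_CTL_ADD, lfd, &ev);

        for (;;) {
            struct epoll_event events[64];
            int n = epoll_wait(ep, events, 64, -1);
            for (int i = 0; i < n; i++) {
                int fd = events[i].data.fd;
                if (fd == lfd) {
                    int cfd;
                    /* drain the accept queue: each request gets its own
                       socket, but one process serves them all */
                    while ((cfd = accept(lfd, 0, 0)) >= 0) {
                        fcntl(cfd, F_SETFL, O_NONBLOCK);
                        struct epoll_event c = { .events = EPOLLIN,
                                                 .data.fd = cfd };
                        epoll_ctl(ep, EPOLL_CTL_ADD, cfd, &c);
                    }
                } else {
                    char buf[4096];
                    ssize_t r = read(fd, buf, sizeof(buf));
                    if (r <= 0) close(fd);
                    /* a real app would parse FastCGI records here and
                       write the response back asynchronously */
                }
            }
        }
    }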

and process them asynchronously.

Does my point make sense? Or are there other, more substantial reasons?

You are correct: since FastCGI is mostly used for local
communication, multiplexing at the application level isn’t expected
to be beneficial. Another reason is that multiplexing isn’t
supported (and probably never will be) by the major FastCGI
application, PHP.

There have been several discussions of FastCGI multiplexing here, and
the general consensus seems to be that FastCGI multiplexing might
be useful for reducing the cost of multiple long-polling connections
to an application, as it would reduce the number of sockets the OS
has to maintain. That has yet to be demonstrated, though.


Maxim D.
http://nginx.org/en/donation.html

You clearly do not understand what the biggest advantage of FastCGI connection multiplexing is. It makes it possible to use far fewer TCP connections (read: fewer ports). Each TCP connection requires a separate port, and a “local” TCP connection requires two ports. Add the ports used by browser-to-web-server connections and you’ll see the whole picture. Even if Unix sockets are used between the web server and the FastCGI server, connection multiplexing still has an advantage: fewer file descriptors in use.
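
To put rough numbers on it (assuming one nginx box talking to one backend IP:port, and the default Linux ephemeral port range; exact figures vary by OS and tuning):

    a TCP connection is identified by (src IP, src port, dst IP, dst port);
    with both IPs and the destination port fixed, only the source port
    varies:
        at most ~64K connections, ~28K with the default Linux
        ephemeral range 32768-60999

    without multiplexing: one upstream connection (one ephemeral port)
        per in-flight request
    with multiplexing:    one upstream connection per backend, shared
        by all in-flight requests to it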

FastCGI connection multiplexing could be a great tool for beating the C10K problem. And long-polling HTTP requests would benefit from connection multiplexing even more.

Of course, if you’re running a 1000-hits/day website, it is not something you’d worry about.

Posted at Nginx Forum:

It has yet to be proven that C10K-related problems are caused by socket/port exhaustion…

The usual choke points on a machine lie elsewhere: your hard disks, RAM, and processing capabilities will be overwhelmed long before you run out of sockets and/or ports…

If you are tempted to throw enormous capacity at those choke points just to finally reach socket exhaustion, you are using the old ‘mainframe’ paradigm: a few machines responsible for all the work. Google proved that the opposite approach (several ‘standard’ machines working in a cluster) is more accessible/profitable/scalable/affordable.

Could you provide some real-world insight into the absolute necessity of the FastCGI multiplexing capability?

And please mind your words. Stating that someone ‘clearly doesn’t understand’ might be understood as calling that person ‘stupid’. That level of rhetoric might bring the debate to a quick end.

B. R.

Another scenario: consider an application that takes a few seconds to process a single request. In non-multiplexing mode we are still limited to roughly 32K simultaneous requests, even though we could install enough backend servers to handle 64K such requests per second.
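
The arithmetic behind those numbers (taking one-second requests for illustration):

    in-flight requests = request rate x request duration   (Little's law)
    64,000 req/s x 1 s = 64,000 requests in flight,
    but the ~32K connection/port cap limits one frontend to about
    32,000 of them, i.e. roughly half of what the backends could handle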

Now imagine we could use FastCGI connection multiplexing. It could be just a single connection per backend. And, again, we would be able to serve roughly twice as many requests per second on the same hardware, thanks to that tiny little feature called FastCGI connection multiplexing.

Posted at Nginx Forum:

Consider a Comet application (a.k.a. long-polling Ajax requests). There is no CPU load, since most of the time the application just waits for some event to happen and nothing is being transmitted. Think of a chat or stock-monitoring web application used by thousands of users simultaneously.

Every request (one socket/one port) generates one connection to the backend (another socket/port). So each request takes two sockets, and the theoretical limit is approximately 32K simultaneous requests. Even using the keep-alive feature on the backend side does not help here, since a connection can be reused by another request only after the current one has been fully served.

With FastCGI connection multiplexing we can effectively serve twice as many requests/clients.

Of course, there are applications that are limited by resources other than sockets/ports.

Is it really so difficult to implement?

P.S. I remember when some people said that the keep-alive feature for FastCGI backends would be pointless.

P.P.S. English is not my first language. Please accept my sincere apologies for making an offensive statement. I did not mean to do so.

Posted at Nginx Forum:

Many projects would kill for a 100% performance or scalability gain.

Posted at Nginx Forum:

The funny thing is that the resistance to implementing that feature is so dense that it feels as if it were about breaking compatibility. In fact it is just about implementing the protocol specification more completely, without any penalties besides making some internal changes.

Posted at Nginx Forum:

Scenario 1:
With long-polling requests, each client uses only one port, since the same connection is used continuously, HTTP being stateless. Losing the connection would mean potential loss of data.

32K simultaneous active connections to the same service on a single machine? I suspect the bottleneck is somewhere else…

Scenario 2:
So you would use several backends and a single frontend? Frontends, especially when used only as a proxy/cache, are the easiest components to replicate…
Once again, I strongly suspect that managing 32K connections on a single server is CPU-consuming…

I am not among the developers at all… I am merely discussing the usefulness of such a request.
I prefer the developers to concentrate on usable stuff rather than on superfluous features: the product will be more efficient and usage-driven, and not an all-in-one monster.

My 2 cents.
I’ll stop there.

B. R.

On Jul 20, 2013, at 5:02, momyc wrote:

multiplexing even more.
The main issue with FastCGI connection multiplexing is the lack of flow control. Suppose a client stalls but a FastCGI backend continues to send data to it. At some point nginx should tell the backend to stop sending to the client, but the only way to do it is just to close all multiplexed connections.


Igor S.

On Fri, Jul 19, 2013 at 11:55 PM, momyc [email protected] wrote:

You clearly… err.

Hmmm?

… and I haven’t seen a clue indicating that multiplexing would be as useful in practice as it is claimed to be in theory.

I am not among the developers at all

That’s what I thought.

Well. You must be an expert on the matter. I’ll probably be enlightened reading whatever follows.
…
:o)

Developer omniscience? I am done here.

B. R.

The main issue with FastCGI connection multiplexing is the lack of flow control. Suppose a client stalls but a FastCGI backend continues to send data to it. At some point nginx should tell the backend to stop sending to the client, but the only way to do it is just to close all multiplexed connections.

The FastCGI spec has some fuzzy points. This one is easy. What does Nginx do when a client stalls and the proxied server still sends data? The HTTP protocol has no flow control either.

Posted at Nginx Forum:

You clearly… err.

32K simultaneous active connections to the same service on a single
machine? I suspect the bottleneck is somewhere else…

I don’t know exactly what “service” means in the context of our conversation, but if it means server, then I did not say that everything should be handled by a single FastCGI server. I said a single Nginx server can easily dispatch thousands of HTTP requests to a number of remote FastCGI backends.

I am not among the developers at all

That’s what I thought.

Posted at Nginx Forum:

On Jul 20, 2013, at 8:36, momyc wrote:

The main issue with FastCGI connection multiplexing is the lack of flow control. Suppose a client stalls but a FastCGI backend continues to send data to it. At some point nginx should tell the backend to stop sending to the client, but the only way to do it is just to close all multiplexed connections.

The FastCGI spec has some fuzzy points. This one is easy. What does Nginx do when a client stalls and the proxied server still sends data? The HTTP protocol has no flow control either.

It closes both connections, to the client and to the backend, since HTTP lacks both flow control and multiplexing.


Igor S.

Actually, 2) is natural, since there is supposed to be a de-multiplexer on the Nginx side, and it should know where to dispatch each record received from the backend.
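
A minimal sketch of that de-multiplexing step (reading the record stream from stdin as an artificial stand-in for the shared backend connection; the actual routing to client connections is left as a comment):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint8_t h[8], body[65535 + 255];  /* header; max content + padding */
        while (fread(h, 1, sizeof h, stdin) == sizeof h) {
            unsigned req_id = ((unsigned)h[2] << 8) | h[3]; /* requestIdB1/B0 */
            unsigned len    = ((unsigned)h[4] << 8) | h[5]; /* contentLength  */
            unsigned skip   = len + h[6];                   /* + padding      */
            if (fread(body, 1, skip, stdin) != skip)
                break;
            printf("type=%u request_id=%u length=%u\n",
                   (unsigned)h[1], req_id, len);
            /* a real de-multiplexer would look up the downstream client
               by req_id here and forward the payload to it */
        }
        return 0;
    }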

Posted at Nginx Forum:

It’s my next task to implement the connection multiplexing feature in Nginx’s FastCGI module. I haven’t looked at the recent sources yet and I am not familiar with the Nginx architecture, so if you could give me some pointers on where to start, that would be great. Sure thing, anything I produce will be available for merging into the main Nginx sources.

Posted at Nginx Forum:

OK, it probably closes the connection to the backend server. Well, in the case of multiplexed FastCGI, Nginx should do two things (step 1 is sketched below):

  1. send FCGI_ABORT_REQUEST to the backend for the given request
  2. start dropping records for the given request if the backend keeps
     sending them
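
A sketch of step 1, following the spec’s definitions (FCGI_ABORT_REQUEST is record type 2 with an empty body, so aborting targets a single request ID while the other requests sharing the connection keep flowing):

    #include <stdint.h>
    #include <string.h>

    #define FCGI_VERSION_1     1
    #define FCGI_ABORT_REQUEST 2   /* record type, per the FastCGI spec */

    /* fill an 8-byte FCGI_ABORT_REQUEST record; the record body is empty,
       so contentLength and paddingLength stay zero */
    void fcgi_abort_record(uint8_t out[8], uint16_t request_id)
    {
        memset(out, 0, 8);
        out[0] = FCGI_VERSION_1;
        out[1] = FCGI_ABORT_REQUEST;
        out[2] = (uint8_t)(request_id >> 8);    /* requestIdB1 */
        out[3] = (uint8_t)(request_id & 0xff);  /* requestIdB0 */
    }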

Posted at Nginx Forum:

On Sat, 2013-07-20 at 00:50 -0400, momyc wrote:

It’s my next task to implement the connection multiplexing feature in Nginx’s FastCGI module. I haven’t looked at the recent sources yet and I am not familiar with the Nginx architecture, so if you could give me some pointers on where to start, that would be great. Sure thing, anything I produce will be available for merging into the main Nginx sources.

This career cynic - sorry, sysadmin - looks forward to this fabled doubling in performance…


Steve H. BSc(Hons) MNZCS

Linkedin: http://www.linkedin.com/in/steveholdoway
Skype: sholdowa

And, possibly 3): if there are no other requests on that connection, just close it as if it never existed.

Posted at Nginx Forum:

Well, there is supposed to be an FCGI_REQUEST_COMPLETE sent in reply to FCGI_ABORT_REQUEST, but it can be ignored in this particular case.

I can see that Nginx drops connections before receiving the final FCGI_REQUEST_COMPLETE at the end of normal request processing in some cases. And that has something to do with running out of file descriptors.

Posted at Nginx Forum: