Forum: NGINX - Why Doesn't Nginx Implement FastCGI Multiplexing?

Ji Zhang (Guest)
on 2013-03-09 15:44
(Received via mailing list)
Hi,

I've been doing some research on FastCGI recently. As I see from the
FastCGI specification, it does support multiplexing requests over a
single connection. But apparently none of the current web servers, like
Nginx, Apache, or Lighttpd, supports this feature.

I found a thread from the nginx dev mailing list back in 2009, stating
that multiplexing won't make much difference in performance:
http://forum.nginx.org/read.php?29,30275,30312

But I also found an interesting article on how great this feature is,
from back in 2002:
http://www.nongnu.org/fastcgi/#multiplexing

I don't have the ability to perform a test on this, but another
protocol, SPDY, which has recently become very popular and whose Nginx
patch is already usable, also features multiplexing. So I'm curious
why SPDY's multiplexing is great while FastCGI's is not.

One reason I can think of is that a TCP connection over the Internet is
expensive, affected by RTT, CWND, and other pipe warm-up issues. But a
TCP connection within an IDC (or a Unix-domain socket on localhost) is
much cheaper. Besides, the application can also go the event-driven
way: accept as many connections as it can from the listening socket and
process them asynchronously.

Does my point make sense? Or are there other, more substantial reasons?

Thanks

Jerry
Maxim Dounin (Guest)
on 2013-03-11 13:12
(Received via mailing list)
Hello!

On Sat, Mar 09, 2013 at 10:43:47PM +0800, Ji Zhang wrote:

>
> But I also found an interesting article on how great this feature is,
> from back in 2002:
> http://www.nongnu.org/fastcgi/#multiplexing

This article seems to confuse FastCGI multiplexing with
event-based programming.  Handling multiple requests in a single
process is great - and nginx does so.  But you don't need FastCGI
multiplexing to do it.

> and perform asynchronously.
>
> Does my point make sense? or some other more substantial reasons?

You are correct: since FastCGI is mostly used for local
communication, multiplexing at the application level isn't expected
to be beneficial.  Another reason is that multiplexing isn't
supported (and probably never will be) by the major FastCGI
application - PHP.

There have been several discussions on FastCGI multiplexing here, and
the general consensus seems to be that FastCGI multiplexing might
be useful for reducing the cost of multiple long-polling connections
to an application, as it would reduce the number of sockets the OS
has to maintain.  That is yet to be demonstrated, though.

--
Maxim Dounin
http://nginx.org/en/donation.html
momyc (Guest)
on 2013-07-20 03:03
(Received via mailing list)
You clearly do not understand what the biggest advantage of FastCGI
connection multiplexing is. It makes it possible to use far fewer TCP
connections (read: fewer ports). Each TCP connection requires a
separate port, and a "local" TCP connection requires two. Add the ports
used by browser-to-web-server connections and you'll see the whole
picture. Even if Unix sockets are used between the web server and the
FastCGI server, connection multiplexing still has an advantage: fewer
file descriptors in use.

FastCGI connection multiplexing could be a great tool for beating the
C10K problem. And long-polling HTTP requests would benefit from
connection multiplexing even more.

Of course, if you're running a 1000-hits/day web site, it is not
something you'd worry about.

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,237158,241040#msg-241040
B.R. (Guest)
on 2013-07-20 03:27
(Received via mailing list)
It is yet to be proven that C10K-related problems are caused by
socket/port exhaustion...

The common struggling points on a machine lie in multiple places, and
your hard disks, RAM & processing capabilities will be overwhelmed long
before you run out of sockets and/or ports...

If you are tempted to throw enormous capacity at those struggling
points in order to finally achieve socket exhaustion, you are using the
old 'mainframe' paradigm: a few machines responsible for the whole
workload.
Google proved the opposite one (several 'standard' machines working in
a cluster) is more accessible/profitable/scalable/affordable.

Could you provide some real-world insights into the absolute necessity
of the FastCGI multiplexing capability?

And please mind your words. Stating that someone 'clearly doesn't
understand' might be understood as calling that person 'stupid'.
That rhetorical level might bring the debate to a quick and sound end.
---
*B. R.*
momyc (Guest)
on 2013-07-20 04:05
(Received via mailing list)
Consider a Comet application (aka long-polling Ajax requests). There is
no CPU load, since most of the time the application just waits for some
event to happen and nothing is being transmitted. Something like a chat
or stock-monitoring web application used by thousands of users
simultaneously.

Every request (one socket/one port) would generate one connection to
the backend (another socket/port). So each request would take two
sockets, and the theoretical limit is approximately 32K simultaneous
requests. Even using the keep-alive feature on the backend side does
not help here, since a connection can be reused by another request only
after the current one is fully served.

With FastCGI connection multiplexing we could effectively serve twice
as many requests/clients.
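As a rough sketch of the arithmetic behind that claim (illustrative only; the port range below is a common Linux default for net.ipv4.ip_local_port_range, and real limits are per (source IP, destination IP, destination port) tuple):

```python
# Rough sketch of the "two sockets per request" arithmetic above.
EPHEMERAL_PORTS = 60999 - 32768 + 1  # ~28K usable local ports, a typical default

def max_inflight_requests(upstream_conns_per_request: float) -> int:
    """Each in-flight request holds one client socket plus some number of
    upstream connections; the upstream side consumes local ephemeral ports."""
    if upstream_conns_per_request == 0:
        # fully multiplexed: upstream port usage stays (near) constant
        return 10**9  # effectively unbounded by ports
    return int(EPHEMERAL_PORTS / upstream_conns_per_request)

print(max_inflight_requests(1))  # one backend connection per request
print(max_inflight_requests(0))  # records multiplexed over shared connections
```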

Of course, there are applications that are limited by resources other
than sockets/ports.

Is it really so difficult to implement?

P.S. I remember when some people said the keep-alive feature for
FastCGI backends would be pointless.

P.P.S. English is not my first language. Please accept my sincere
apologies for making an offensive statement. I did not mean to.

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,237158,241042#msg-241042
momyc (Guest)
on 2013-07-20 04:51
(Received via mailing list)
Another scenario: consider an application that takes a few seconds to
process a single request. In non-multiplexing mode we're still limited
to roughly 32K simultaneous requests, even though we could install
enough backend servers to handle 64K such requests per second.

Now imagine we can use FastCGI connection multiplexing. It could be
just a single connection per backend. And again, we'd be able to serve
roughly twice as many requests per second on the same hardware, thanks
to a tiny little feature called FastCGI connection multiplexing.

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,237158,241043#msg-241043
momyc (Guest)
on 2013-07-20 05:00
(Received via mailing list)
Many projects would kill for a 100% performance or scalability gain.

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,237158,241044#msg-241044
momyc (Guest)
on 2013-07-20 05:07
(Received via mailing list)
The funny thing is that resistance to implementing this feature is so
dense that it feels as if it were about breaking compatibility. It is
really just about a more complete implementation of the protocol
specification, without any penalties besides some internal changes.

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,237158,241045#msg-241045
B.R. (Guest)
on 2013-07-20 05:41
(Received via mailing list)
Scenario 1:
With long-polling requests, each client uses only one port, since the
same connection is used continuously, HTTP being stateless. The loss of
the connection would mean potential loss of data.

32K simultaneous active connections to the same service on a single
machine? I suspect the bottleneck is somewhere else...

Scenario 2:
So you would use several backends and a single frontend? Frontends,
especially when used only as a proxy/cache, are the easiest components
to replicate...
Once again, I highly suspect that managing 32K connections on a single
server is CPU-consuming...

I am not among the developers at all... I am merely discussing the
usefulness of such a request.
I prefer the developers to concentrate on usable stuff rather than on
superfluous features: the product will be more efficient and
usage-driven, and not an all-in-one monster.

My 2 cents.
I'll stop there.
---
*B. R.*
momyc (Guest)
on 2013-07-20 05:55
(Received via mailing list)
You clearly... err.

> 32K simultaneous active connections to the same service on a single
> machine? I suspect the bottleneck is somewhere else...

I don't know what exactly "service" means in the context of our
conversation, but if it means a server, then I did not say that
everything should be handled by a single FastCGI server. I said a
single Nginx server can easily dispatch thousands of HTTP requests to a
number of remote FastCGI backends.

> I am not among the developers at all

That's what I thought.

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,237158,241047#msg-241047
B.R. (Guest)
on 2013-07-20 06:11
(Received via mailing list)
On Fri, Jul 19, 2013 at 11:55 PM, momyc <nginx-forum@nginx.us> wrote:

> You clearly... err.

Hmmm?

... and I haven't seen a clue indicating that multiplexing would be as
useful in practice as it is claimed to be in theory.

> > I am not among the developers at all
>
> That's what I thought.

Well. You must be an expert on the matter. I'll probably be enlightened
reading whatever follows... :o)

Developer omniscience? I am done here.
---
*B. R.*
Igor Sysoev (Guest)
on 2013-07-20 06:26
(Received via mailing list)
On Jul 20, 2013, at 5:02 , momyc wrote:

> multiplexing even more.
The main issue with FastCGI connection multiplexing is the lack of flow
control. Suppose a client stalls but a FastCGI backend continues to
send data to it. At some point nginx has to tell the backend to stop
sending to that client, but the only way to do that is to close all the
multiplexed connections.


--
Igor Sysoev
http://nginx.com/services.html
momyc (Guest)
on 2013-07-20 06:36
(Received via mailing list)
> The main issue with FastCGI connection multiplexing is the lack of
> flow control. Suppose a client stalls but a FastCGI backend continues
> to send data to it. At some point nginx has to tell the backend to
> stop sending to that client, but the only way to do that is to close
> all the multiplexed connections.

The FastCGI spec has some fuzzy points, but this one is easy. What does
Nginx do when a client stalls and a proxied server still sends data?
HTTP has no flow control either.

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,237158,241050#msg-241050
Igor Sysoev (Guest)
on 2013-07-20 06:40
(Received via mailing list)
On Jul 20, 2013, at 8:36 , momyc wrote:

>> The main issue with FastCGI connection multiplexing is the lack of
>> flow control. Suppose a client stalls but a FastCGI backend
>> continues to send data to it. At some point nginx has to tell the
>> backend to stop sending to that client, but the only way to do that
>> is to close all the multiplexed connections.
>
> The FastCGI spec has some fuzzy points, but this one is easy. What
> does Nginx do when a client stalls and a proxied server still sends
> data? HTTP has no flow control either.

It closes both connections, to the client and to the backend, since
HTTP lacks both flow control and multiplexing.


--
Igor Sysoev
http://nginx.com/services.html
momyc (Guest)
on 2013-07-20 06:42
(Received via mailing list)
OK, so it probably closes the connection to the backend server. Well,
in the case of multiplexed FastCGI, Nginx should do two things:
1) send FCGI_ABORT_REQUEST to the backend for the given request
2) start dropping records for the given request if it still receives
records for it from the backend
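As a rough sketch (an illustrative model of the record layer in Python, not nginx code; the 8-byte header layout and record-type values come from the FastCGI spec), the two steps might look like:

```python
import struct
from collections import defaultdict

# FastCGI constants (values from the FastCGI specification)
FCGI_VERSION_1 = 1
FCGI_ABORT_REQUEST = 2
FCGI_STDOUT = 6
FCGI_HEADER_LEN = 8

def fcgi_record(rtype: int, request_id: int, body: bytes) -> bytes:
    # 8-byte header: version, type, requestId, contentLength, padding, reserved
    return struct.pack(">BBHHBB", FCGI_VERSION_1, rtype, request_id,
                       len(body), 0, 0) + body

def demux(stream: bytes, aborted: set) -> dict:
    """De-multiplex a byte stream of FastCGI records by request id,
    silently dropping records that belong to aborted requests (step 2)."""
    out = defaultdict(bytes)
    pos = 0
    while pos + FCGI_HEADER_LEN <= len(stream):
        _ver, rtype, req_id, clen, plen, _ = struct.unpack_from(
            ">BBHHBB", stream, pos)
        body = stream[pos + FCGI_HEADER_LEN:pos + FCGI_HEADER_LEN + clen]
        pos += FCGI_HEADER_LEN + clen + plen
        if req_id in aborted:
            continue  # drop records for a request we already aborted
        if rtype == FCGI_STDOUT:
            out[req_id] += body
    return dict(out)

# Request 2 was aborted (step 1 would also send FCGI_ABORT_REQUEST
# upstream); its interleaved records are dropped on arrival.
stream = (fcgi_record(FCGI_STDOUT, 1, b"hello ") +
          fcgi_record(FCGI_STDOUT, 2, b"dead") +
          fcgi_record(FCGI_STDOUT, 1, b"world"))
print(demux(stream, aborted={2}))  # {1: b'hello world'}
```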

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,237158,241051#msg-241051
momyc (Guest)
on 2013-07-20 06:43
(Received via mailing list)
Actually, 2) is natural, since there is supposed to be a de-multiplexer
on the Nginx side anyway, and it has to know where to dispatch each
record received from the backend.

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,237158,241053#msg-241053
momyc (Guest)
on 2013-07-20 06:50
(Received via mailing list)
My next task is to implement the connection multiplexing feature in
Nginx's FastCGI module. I haven't looked at recent sources yet and I am
not familiar with the Nginx architecture, so if you could give me some
pointers on where to start, that would be great. Sure thing, anything I
produce will be available for merging into the main Nginx sources.

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,237158,241054#msg-241054
momyc (Guest)
on 2013-07-20 06:51
(Received via mailing list)
And possibly 3): if there are no other requests on that connection,
just close it like it never existed.

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,237158,241055#msg-241055
Steve Holdoway (Guest)
on 2013-07-20 07:00
(Received via mailing list)
On Sat, 2013-07-20 at 00:50 -0400, momyc wrote:
> My next task is to implement the connection multiplexing feature in
> Nginx's FastCGI module. I haven't looked at recent sources yet and I
> am not familiar with the Nginx architecture, so if you could give me
> some pointers on where to start, that would be great. Sure thing,
> anything I produce will be available for merging into the main Nginx
> sources.
>
This career cynic - sorry, sysadmin - looks forward to this fabled
doubling in performance...

--
Steve Holdoway BSc(Hons) MNZCS
http://www.greengecko.co.nz
Linkedin: http://www.linkedin.com/in/steveholdoway
Skype: sholdowa
Igor Sysoev (Guest)
on 2013-07-20 07:01
(Received via mailing list)
On Jul 20, 2013, at 8:41 , momyc wrote:

> OK, so it probably closes the connection to the backend server. Well,
> in the case of multiplexed FastCGI, Nginx should do two things:
> 1) send FCGI_ABORT_REQUEST to the backend for the given request
> 2) start dropping records for the given request if it still receives
> records for it from the backend

Suppose a slow client. Since nginx receives data quickly, the backend
will send data quickly too, because it does not know about the slow
client. At some point the buffered data exceeds the limit and nginx has
to abort the connection to the backend. That does not happen if the
backend knows the real speed of the client.


--
Igor Sysoev
http://nginx.com/services.html
momyc (Guest)
on 2013-07-20 07:03
(Received via mailing list)
Well, there is supposed to be a FCGI_REQUEST_COMPLETE sent in reply to
FCGI_ABORT_REQUEST, but it can be ignored in this particular case.

I can see that in some cases Nginx drops connections before receiving
the final FCGI_REQUEST_COMPLETE at the end of normal request
processing. And that has something to do with running out of file
descriptors.

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,237158,241058#msg-241058
momyc (Guest)
on 2013-07-20 07:05
(Received via mailing list)
What does the proxy module do in that case? You said earlier that HTTP
lacks flow control too. So what is the difference?

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,237158,241059#msg-241059
Igor Sysoev (Guest)
on 2013-07-20 07:10
(Received via mailing list)
On Jul 20, 2013, at 9:05 , momyc wrote:

> What does the proxy module do in that case? You said earlier that
> HTTP lacks flow control too. So what is the difference?

The proxy module stops reading from the backend, but it does not close
the backend connection. It starts reading from the backend again once
some buffers have been sent to the slow client.


--
Igor Sysoev
http://nginx.com/services.html
momyc (Guest)
on 2013-07-20 07:11
(Received via mailing list)
If it's time to close the backend connection in a non-multiplexed
configuration, the multiplexed equivalent is to send FCGI_ABORT_REQUEST
for that particular request and start dropping records for that request
received from the backend.

Please shoot me any other questions about problems with implementing
this feature.

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,237158,241061#msg-241061
momyc (Guest)
on 2013-07-20 07:24
(Received via mailing list)
What do you mean by "stop reading"? Oh, you just stop checking whether
anything is ready for reading. I see. Well, that is crude flow control,
I'd say. The proxied server could unexpectedly drop the connection
because it would think Nginx is dead.

There is a nice feature - I don't remember exactly what it's called -
where some content can be buffered on Nginx (in proxy mode), with a
strict limit on how much is buffered in memory before it spills to a
file. That is what could be used for this case. If a buffer overflow
happens: close the client, abort the request on the backend, and drop
further records for that request. Keep the connection, and keep
receiving and de-multiplexing records for the good requests.

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,237158,241062#msg-241062
momyc (Guest)
on 2013-07-20 07:25
(Received via mailing list)
"abort backend" meant "abort request"

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,237158,241063#msg-241063
Igor Sysoev (Guest)
on 2013-07-20 07:43
(Received via mailing list)
On Jul 20, 2013, at 9:23 , momyc wrote:

> What do you mean by "stop reading"? Oh, you just stop checking
> whether anything is ready for reading. I see. Well, that is crude
> flow control, I'd say. The proxied server could unexpectedly drop the
> connection because it would think Nginx is dead.

TCP will tell the backend that nginx is alive. The backend can drop the
connection only after some timeout.

> There is a nice feature - I don't remember exactly what it's called -
> where some content can be buffered on Nginx (in proxy mode), with a
> strict limit on how much is buffered in memory before it spills to a
> file. That is what could be used for this case. If a buffer overflow
> happens: close the client, abort the request on the backend, and drop
> further records for that request. Keep the connection, and keep
> receiving and de-multiplexing records for the good requests.

Yes, but it is useless to buffer a long-polling connection in a file.


--
Igor Sysoev
http://nginx.com/services.html
momyc (Guest)
on 2013-07-20 09:53
(Received via mailing list)
> it is useless to buffer a long-polling connection in a file.

For Nginx there is no difference between a long-polling request and any
other request. It wouldn't even know. All it should care about is how
much to buffer and for how long to keep those buffers before dropping
them and aborting the request. I do not see any technical problem here.

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,237158,241066#msg-241066
momyc (Guest)
on 2013-07-20 10:18
(Received via mailing list)
> Yes, but it is useless to buffer a long-polling connection in a file.

Buffering some data on the web server is fine as long as the client
receives whatever the server has sent, or the client gets its
connection closed. If sending is not possible after the buffers are
full, dropping the client connection and aborting the request is not a
problem. Problems like that should be dealt with at a higher level of
abstraction.

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,237158,241068#msg-241068
Igor Sysoev (Guest)
on 2013-07-20 11:18
(Received via mailing list)
On Jul 20, 2013, at 11:52 , momyc wrote:

>> it is useless to buffer a long-polling connection in a file.
>
> For Nginx there is no difference between a long-polling request and
> any other request. It wouldn't even know. All it should care about is
> how much to buffer and for how long to keep those buffers before
> dropping them and aborting the request. I do not see any technical
> problem here.

There is no technical problem. There is an issue of the practical
utility of such a backend. There are two types of backends:

1) The first type uses a large amount of memory to process a request.
It should send a generated response as soon as possible and then move
on to the next request. nginx can buffer thousands of such responses
and send them to clients. A persistent connection between nginx and the
backend, together with nginx buffering, helps in this case.
Multiplexing just complicates the backend logic without any benefit.
The bottleneck here is not the number of connections to a single listen
port (64K) but the amount of memory.

2) The second type uses a small amount of memory per request, can
process thousands of clients simultaneously, and does NOT need
buffering at all. Multiplexing helps such backends, but only together
with flow control.


--
Igor Sysoev
http://nginx.com/services.html
DevNginx (Guest)
on 2013-10-04 15:44
(Received via mailing list)
I would also like to add a vote for FastCGI multiplexing.

There is no obligation for backends, since a backend that does not
implement it can report FCGI_MPXS_CONNS = 0 in response to a
FCGI_GET_VALUES query from nginx.  The other poster has already
mentioned FCGI_ABORT_REQUEST and dropping response records from
dangling requests.
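For illustration, that capability query is just a management record on request id 0. A minimal sketch of building it (the header layout, the FCGI_GET_VALUES type, the FCGI_MPXS_CONNS variable name, and the name-value length encoding are all from the FastCGI spec; nothing here is nginx code):

```python
import struct

FCGI_GET_VALUES = 9  # management record type, always sent on request id 0

def nv_pair(name: bytes, value: bytes) -> bytes:
    # FastCGI name-value pair: each length is 1 byte if < 128,
    # otherwise 4 bytes with the high bit set.
    def enc(n: int) -> bytes:
        return struct.pack(">B", n) if n < 128 else struct.pack(">I", n | 0x80000000)
    return enc(len(name)) + enc(len(value)) + name + value

def fcgi_record(rtype: int, request_id: int, body: bytes) -> bytes:
    # 8-byte header: version, type, requestId, contentLength, padding, reserved
    return struct.pack(">BBHHBB", 1, rtype, request_id, len(body), 0, 0) + body

# Ask the backend whether it multiplexes; a non-multiplexing backend
# answers FCGI_MPXS_CONNS = "0" and the server falls back to one
# request per connection.
query = fcgi_record(FCGI_GET_VALUES, 0, nv_pair(b"FCGI_MPXS_CONNS", b""))
print(len(query))  # 8-byte header + 17-byte body = 25
```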

My scenario is that I have a variety of requests:  some take a while,
but
others are a quick URL rewrite culminating in a X-Accel-Redirect. This
rewrite involves complicated logic which is part of my overall backend
application., which I would rather not factor out and rewrite into a
nginx
module  The actual computation for the URL rewrite is miniscule compared
to
the overhead of opening/closing a TCP connection, so FCGI request
multiplexing would be of great help here.

If the overhead of a multiplexed FCGI request starts to approach doing
the
work directly in an nginx module, it would give a valuable alternative
to
writing modules.  This would avoid the pitfalls of writing modules (code
refactoring, rewriting in C, jeopardizing nginx worker process, etc.).

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,237158,243430#msg-243430
Maxim Dounin (Guest)
on 2013-10-04 18:49
(Received via mailing list)
Hello!

On Fri, Oct 04, 2013 at 09:43:41AM -0400, DevNginx wrote:

> application, which I would rather not factor out and rewrite as an
> nginx module.  The actual computation for the URL rewrite is
> minuscule compared to the overhead of opening/closing a TCP
> connection, so FCGI request multiplexing would be of great help here.
>
> If the overhead of a multiplexed FCGI request starts to approach doing the
> work directly in an nginx module, it would give a valuable alternative to
> writing modules.  This would avoid the pitfalls of writing modules (code
> refactoring, rewriting in C, jeopardizing nginx worker process, etc.).

Your use case seems to be perfectly covered by keepalive connections
support, which is already here.  See http://nginx.org/r/keepalive.
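A minimal sketch of that setup (the `keepalive` and `fastcgi_keep_conn` directives are real nginx directives; the upstream name, address, and numbers are placeholders):

```nginx
upstream fcgi_backend {
    server 127.0.0.1:9000;
    keepalive 8;               # idle connections kept open, per worker
}

server {
    listen 80;
    location ~ \.php$ {
        fastcgi_pass fcgi_backend;
        fastcgi_keep_conn on;  # don't ask the backend to close after the reply
        include fastcgi_params;
    }
}
```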

--
Maxim Dounin
http://nginx.org/en/donation.html
DevNginx (Guest)
on 2013-10-05 17:12
(Received via mailing list)
Maxim Dounin Wrote:
-------------------------------------------------------
> Your use case seems to be perfectly covered by keepalive connections
> support, which is already here.  See http://nginx.org/r/keepalive.

OK, yeah, that would work for me.  Thanks.

There is still the possibility that long-running requests could clog
the connections, but I can work around that by listening on two
different ports and having nginx route the quickies to their dedicated
port.
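That workaround might look something like this (a hypothetical sketch: the upstream names, ports, and the `/rewrite/` location are made up for illustration):

```nginx
# Quick rewrite-only requests get their own FastCGI port so they
# never queue behind long-running requests on a kept-alive connection.
upstream app_slow  { server 127.0.0.1:9000; keepalive 4;  }
upstream app_quick { server 127.0.0.1:9001; keepalive 16; }

server {
    listen 80;
    location /rewrite/ {           # the "quickies"
        fastcgi_pass app_quick;
        fastcgi_keep_conn on;
        include fastcgi_params;
    }
    location / {                   # everything else
        fastcgi_pass app_slow;
        fastcgi_keep_conn on;
        include fastcgi_params;
    }
}
```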

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,237158,243454#msg-243454
Wter S. (wter_s)
on 2014-09-14 21:22
A question about FastCGI: how does it handle simultaneous connections
with one process when PHP itself is blocking? What if I have something
like "sleep(100)"? Won't it block the process for the other users?
Thanks


Maxim Dounin wrote in post #1101079:
> Hello!
>
> On Sat, Mar 09, 2013 at 10:43:47PM +0800, Ji Zhang wrote:
>
>>
>> But I also find an interesting article on how great this feature is,
>> back to 2002:
>> http://www.nongnu.org/fastcgi/#multiplexing
>
> This article seems to confuse FastCGI multiplexing with
> event-based programming.  Handling multiple requests in a single
> process is great - and nginx does so.  But you don't need FastCGI
> multiplexing to do it.
>
>> and perform asynchronously.
>>
>> Does my point make sense? or some other more substantial reasons?
>
> You are correct: since FastCGI is mostly used for local
> communication, multiplexing at the application level isn't expected
> to be beneficial.  Another reason is that multiplexing isn't
> supported (and probably never will be) by the major FastCGI
> application - PHP.
>
> There have been several discussions on FastCGI multiplexing here, and
> the general consensus seems to be that FastCGI multiplexing might
> be useful for reducing the cost of multiple long-polling connections
> to an application, as it would reduce the number of sockets the OS
> has to maintain.  That is yet to be demonstrated, though.
>
> --
> Maxim Dounin
> http://nginx.org/en/donation.html
Maxim Dounin (Guest)
on 2014-09-15 12:44
(Received via mailing list)
Hello!

On Sun, Sep 14, 2014 at 09:22:48PM +0200, Wter S. wrote:

> A question about FastCGI: how does it handle simultaneous connections
> with one process when PHP itself is blocking? What if I have
> something like "sleep(100)"? Won't it block the process for the other
> users?
> Thanks

FastCGI doesn't imply PHP (and, actually, PHP doesn't imply blocking
either - there are some event-driven PHP frameworks out there).

As of now, the implementation of the FastCGI protocol in PHP doesn't
support FastCGI multiplexing at all, and that's one of the reasons
why nginx doesn't implement FastCGI multiplexing either.  Quoting
the message you've replied to:

> > ...  Another reason is that multiplexing isn't
> > supported (and probably will never be) by the major FastCGI
> > application - PHP.

--
Maxim Dounin
http://nginx.org/
Wter S. (wter_s)
on 2014-09-15 18:16
Maxim Dounin wrote in post #1157635:
> Hello!
>
> On Sun, Sep 14, 2014 at 09:22:48PM +0200, Wter S. wrote:
>
>
> FastCGI doesn't imply PHP
> As of now, implementation of the FastCGI protocol in PHP doesn't
> support FastCGI multiplexing at all, and that's one of the reasons
> why nginx doesn't implement FastCGI multiplexing as well.
> --
> Maxim Dounin
> http://nginx.org/

Then how is Nginx able to handle thousands of simultaneous requests
(where some of them involve blocking IO operations) with only one
process (or let's say 10 processes)?

Thanks !
Maxim Dounin (Guest)
on 2014-09-16 14:01
(Received via mailing list)
Hello!

On Mon, Sep 15, 2014 at 06:16:58PM +0200, Wter S. wrote:

> Then how Nginx is able to handle thousands simultaneous requests (where
> some of them contains blocking IO operations) with only one process (or
> let say 10 processes) ?

That's because nginx is an event-driven server and uses non-blocking
IO whenever possible.
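The idea can be sketched in a few lines of Python: one process, one thread, and a selector loop that services a listening socket and several connections without ever blocking on a single one (an illustrative toy, not nginx's actual event loop):

```python
import selectors
import socket

def echo_roundtrip(payload: bytes) -> bytes:
    """One process, one thread: a selector multiplexes a listening socket,
    a server-side connection, and a client without blocking on any of them."""
    sel = selectors.DefaultSelector()

    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))
    listener.listen()
    listener.setblocking(False)
    sel.register(listener, selectors.EVENT_READ, "accept")

    client = socket.create_connection(listener.getsockname())
    client.setblocking(False)
    sel.register(client, selectors.EVENT_WRITE, "send")

    reply = b""
    while reply != payload:
        for key, _events in sel.select(timeout=5):
            if key.data == "accept":       # new connection is ready
                conn, _ = key.fileobj.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ, "echo")
            elif key.data == "send":       # client socket is writable
                key.fileobj.sendall(payload)
                sel.modify(key.fileobj, selectors.EVENT_READ, "recv")
            elif key.data == "echo":       # server side echoes back
                key.fileobj.sendall(key.fileobj.recv(4096))
                sel.unregister(key.fileobj)
            elif key.data == "recv":       # client reads the echo
                reply += key.fileobj.recv(4096)
    sel.close()
    listener.close()
    client.close()
    return reply

print(echo_roundtrip(b"hello"))
```

A real server registers thousands of sockets with the same loop; a PHP `sleep(100)` would block one whole FastCGI worker process instead, which is why concurrency there comes from running a pool of workers.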

--
Maxim Dounin
http://nginx.org/