FastCGI and PHP

After the recent thread on php-fastcgi and memory leaks, I’ve realised
that I’m a little unsure as to exactly how nginx and php-fastcgi
communicate with one another. I’m wondering whether anyone could spare
the time for clarification?

I understand that when started (in my case using spawn-fastcgi from the
lighttpd project) php-fastcgi creates a master process and a number of
child processes. Nginx then passes requests through to php-fastcgi,
which processes the request and returns a response.

What I’m unsure about is whether nginx passes the request directly to
one of the child processes, or to the master process, which then delegates.

I’m also unsure as to how nginx passes through the fastcgi params we
configure.

The reason I ask is that I have some useful components written in Eiffel
which I’d like to make available via a webserver. I’ve found a small
fastcgi server written in Eiffel, which I’d like to expand on to
replicate the kind of throughput the php-fastcgi instance I’m running
allows.

Thanks.

Phillip B Oldham
The Activity People
[email protected]


Policies

This e-mail and its attachments are intended for the above named
recipient(s) only and may be confidential. If they have come to you in
error, please reply to this e-mail and highlight the error. No action
should be taken regarding content, nor must you copy or show them to
anyone.

This e-mail has been created in the knowledge that Internet e-mail is
not a 100% secure communications medium, and we have taken steps to
ensure that this e-mail and attachments are free from any virus. We must
advise that in keeping with good computing practice the recipient should
ensure they are completely virus free, and that you understand and
observe the lack of security when e-mailing us.

I understand that when started (in my case using spawn-fastcgi from the
lighttpd project) php-fastcgi creates a master process and a number of
child processes. Nginx then passes requests through to php-fastcgi,
which processes the request and returns a response.

Correct. But to clarify - spawn-fcgi is not really required: php has all
the features built in already; spawn-fcgi just makes it easier to start up.

What I’m unsure on is whether nginx is passing the request directly to
one of the child processes or the master process which then delegates.

nginx passes the requests to the ip/port (or you can also use unix
sockets) using the fastcgi protocol. More details:

http://www.fastcgi.com/devkit/doc/fcgi-spec.html

php (its master process) handles its children on its own… then again
there are some caveats, like the php master process not really knowing
whether all of its children are busy, and it can’t return
FCGI_OVERLOADED (as per the fastcgi spec).

The reason I ask is that I have some useful components written in Eiffel
which I’d like to make available via a webserver. I’ve found a small
fastcgi server written in Eiffel, which I’d like to expand on to
replicate the kind of through-put the php-fastcgi instance I’m running
allows.

It doesn’t really matter, as long as the service listening on the port
“talks” the FastCGI protocol.
You can add as many backend handlers to nginx as you wish; the only
limitation is that you can’t push a single request to all of them, e.g.
if you open /index.php you can’t handle it with php and some other
fastcgi app at the same time - though there is probably a possibility
of a subrequest.

rr

Hi Philip,

I’m not sure about the way PHP-fcgi manages its processes internally,
but there’s some detail about the way the FastCGI communication
happens in the FastCGI protocol spec at
http://www.fastcgi.com/devkit/doc/fcgi-spec.html

In short, the protocol works by exchanging FCGI “records” between the
client and server which are basically chunks of information wrapped up
in FCGI “messages”. Each message header identifies itself including a
request ID and a record type, allowing the client and server to
establish input, output and error streams. The web server (FastCGI
client) passes the normal CGI/1.1 environment variables (configured by
nginx as fastcgi_params) to the FastCGI server’s input stream, and the
FastCGI server returns the request’s output to the web server via its
output stream. Any errors should be passed back via the error stream.
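To make the record framing above concrete, here is a small sketch in Python of the 8-byte FastCGI record header and the name-value encoding used for FCGI_PARAMS, following the fcgi-spec document linked above. The helper names (`pack_record`, `pack_param`) are my own illustrations, not from any particular library:

```python
import struct

FCGI_VERSION_1 = 1
FCGI_PARAMS = 4  # record type carrying the CGI/1.1 environment variables

def pack_record(rec_type, request_id, content):
    """Wrap `content` in a FastCGI record: version, type, requestId,
    contentLength, paddingLength, reserved, then the payload."""
    header = struct.pack("!BBHHBB", FCGI_VERSION_1, rec_type,
                         request_id, len(content), 0, 0)
    return header + content

def pack_param(name, value):
    """Encode one name-value pair as carried inside FCGI_PARAMS records.
    Lengths under 128 take one byte; longer ones take four bytes with
    the high bit set."""
    def enc_len(n):
        return bytes([n]) if n < 128 else struct.pack("!I", n | 0x80000000)
    name_b, value_b = name.encode(), value.encode()
    return enc_len(len(name_b)) + enc_len(len(value_b)) + name_b + value_b

# e.g. a variable set by nginx's fastcgi_param directive:
payload = pack_param("SCRIPT_FILENAME", "/var/www/index.php")
record = pack_record(FCGI_PARAMS, 1, payload)
```

The request ID in each header is what would, in principle, allow several requests to share one connection, which is the multiplexing point discussed next.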

There is scope in the protocol for multiplexing these connections,
i.e. communicating multiple requests and responses across the same
connection and identifying each by its request ID, but there seems to
be no flow control built into the protocol, making this a somewhat
dangerous proposition. As I understand it the protocol also allows for
long-lived FastCGI connections which are re-used for multiple request/
response cycles, but this isn’t possible with nginx currently as it
uses HTTP/1.0 to talk to upstream servers; additionally, the upstream
server might be overwhelmed if nginx made hundreds of long-standing
connections to it.

There’s further discussion of this and FastCGI in general at
http://thread.gmane.org/gmane.comp.web.nginx.english/2974/focus=2974

cheers,
Igor

Reinis R. wrote:

It doesnt really matter as far as the service listening on the port
“talks” the Fastcgi protocol.
You can add as many backend handlers to nginx as you wish the only
limitation is you can’t push a single request to all of them eg if you
open /index.php you can’t handle it with php and some other fastcgi
app the same time but probably there is a possibility of a subrequest.

So is it possible to have multiple instances of a fastcgi server, each
listening on a different port/socket, and have nginx balance the load
between them? If so, would nginx understand the FCGI_OVERLOADED response
and pass the request to another instance?

Phillip B Oldham
The Activity People
[email protected]



So is it possible to have multiple instances of a fastcgi server, each
listening on a different port/socket, and have nginx balance the load
between them?

Yes, it should be possible (I haven’t done it myself on nginx, because
for my setup a single fastcgi server is enough - it saturates the CPU
anyway): list all the servers (ip/ports) in an upstream block (
http://wiki.codemongers.com/NginxHttpUpstreamModule ) and then pass
that upstream name to fastcgi_pass.
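A minimal sketch of what that could look like in nginx.conf - the upstream name, ports, and document root here are illustrative, not taken from the thread:

```nginx
# Hypothetical setup: three FastCGI instances behind one upstream.
upstream fastcgi_backends {
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
    server unix:/tmp/fcgi.sock;   # unix sockets work too
}

server {
    listen 80;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name;
        fastcgi_pass fastcgi_backends;   # nginx balances across the group
    }
}
```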

If so, would nginx understand the FCGI_OVERLOADED response and pass the
request to another instance?

Not sure about this - probably the devs can give you an answer.

But the fact is that php doesn’t return anything like that (at least it
didn’t the last time I checked), so you have to rely on timeouts or
returned errors (usually 500 Internal Server Error or 502 Bad Gateway).

rr

Thanks Igor. That makes lots of sense. I appreciate your time.

I’ll grab a copy of the fastcgi server code, read the spec, and start
playing around!


Phillip B Oldham
The Activity People
[email protected]



Reinis R. wrote:

… and you have to rely on timeouts or returned errors (usually 500
Internal Server Error or 502 Bad Gateway).

That, in itself, is interesting to know. It’s probably one of the causes
why my scripts occasionally return 500 errors for no reason that I can
find in the source.

Hopefully soon I can get our guys away from php-fastcgi for good!

Phillip B Oldham
The Activity People
[email protected]



On Wed, Apr 02, 2008 at 08:08:57AM +0100, Phillip B Oldham wrote:

After the recent thread on php-fastcgi and memory leaks, I’ve realised
that I’m a little unsure as to exactly how nginx and php-fastcgi
communicate with one another. I’m wondering whether anyone could spare
the time for clarification?

You have already received a few good responses, but let me throw in my
0.02PLN too.

I understand that when started (in my case using spawn-fastcgi from the
lighttpd project) php-fastcgi creates a master process and a number of
child processes. Nginx then passes requests through to php-fastcgi,
which processes the request and returns a response.

Yup.

What I’m unsure on is whether nginx is passing the request directly to
one of the child processes or the master process which then delegates.

Selection of the child process is done implicitly by the kernel. All the
children are sleeping in accept() on the same socket, so there’s no
delegation process to speak of.

This approach has both advantages (mostly a trivial implementation)
and disadvantages, mostly that the web server is left in the dark about
overloaded fcgi instances, their activity, etc. It also prevents
knowing e.g. whether there are any ready backends at all - this
is a pain when running under a process manager like mod_fastcgi for
Apache.
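The pre-fork pattern described above - the parent binds and listens, then forks children that all block in accept() on the same socket, so the kernel wakes exactly one child per connection - can be sketched like this in Python. This is a hypothetical illustration of the pattern, not PHP's actual implementation:

```python
import os
import socket

def serve(handle, host="127.0.0.1", port=9911, children=4):
    """Pre-fork server: parent creates the listening socket, children
    all sleep in accept() on it; the kernel picks which child gets
    each connection, so no explicit delegation happens."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind((host, port))
    sock.listen(128)
    for _ in range(children):
        if os.fork() == 0:                   # child process
            while True:
                conn, _addr = sock.accept()  # all children block here
                handle(conn)
                conn.close()
    sock.close()   # parent no longer needs the socket
    while True:
        os.wait()  # parent only reaps children; it routes nothing
```

Note the downside Grzegorz describes falls straight out of this shape: the parent never sees a connection, so it has no idea which children are busy or whether any are free.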

I’m also unsure as to how nginx passes through the fastcgi params we
configure.

I’m sure you’ll find the protocol details in the spec, I just know it’s
a binary protocol.

The reason I ask is that I have some useful components written in Eiffel
which I’d like to make available via a webserver. I’ve found a small
fastcgi server written in Eiffel, which I’d like to expand on to
replicate the kind of through-put the php-fastcgi instance I’m running
allows.

I’m not sure whether the fastcgi protocol supports multiplexing on a
single socket, and the fcgi library insists on emulating stdio, so in
the worst case you’ll have to use a prefork/postfork/threaded/etc. model.

I don’t know how well libfcgi copes with multiplexed I/O (e.g. will
FCGI_getc always return immediately after a select() returns?), and
“almost a socket” abstractions tend to have problems with that, so beware.

Good luck and please share your findings :)

Best regards,
Grzegorz N.
