When I kick off a JMeter test against the server, I can see dozens of
processes forked by 5548, the master FCGI process. As I understand it,
I should not see this behavior when using FastCGI, should I?
A single page load can - and most often does - involve more than one
request, and a webserver gets hit by more than one person. The higher
the traffic, the higher the chances of fork-bombing your own server. It
might reach the maximum allowed number of processes, if defined, or
whatever the maximum value of a PID is (wild-guessing this one)? Nice
academic exercise.
Anyway, you should look for a fastcgi daemon instead of a script;
anything that won't spawn one process per request. On Debian I use the
php-cgi package and a fastcgi init script in /etc/init.d to spawn the
daemon. It uses 4-6 processes tops; it's attached.
sure, makes sense.
Thanks. Probably a fastcgi daemon written in C or Perl. I found
Catalyst::Engine::FastCGI and FCGI::Engine. I would stick with the
current nginx-fcgi version for the time being.
It would be nice to have a standard fastcgi daemon written in C as
part of NGINX, but most likely that is not the purpose of the project.
If you guys have any ideas about a simple C-based fastcgi daemon,
let me know. It would be cool if it compiled across Solaris, Linux,
and FreeBSD.
FastCGI is just a protocol, nothing more. Implementation details
(including the number of forked processes) are up to the FastCGI
application.
In this case, the nginx-fcgi daemon, which is a Perl process,
handles the CGI side of things.
The script you refer to is just a wrapper which executes CGI scripts,
and it does so by running a separate process for each request.
Hmmm. Right. So basically, if we want a solid FCGI daemon, we could
write our own FCGI application which will receive the requests from
nginx. Right?
In my case nginx-fcgi currently does the job, but it is not a very
solid solution. Correct?
Thanks, I know. I read the standard a bit and looked it over. I'm not
fluent in C, but if I have time I will think of something - probably
not very trivial to write an FCGI server in C.
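
For illustration, a minimal standalone FCGI application in Perl could
look roughly like this, using the plain FCGI module from CPAN (the
socket path and the response body are made up; nginx would need a
matching fastcgi_pass pointing at the same socket):

#!/usr/bin/perl
use strict;
use warnings;
use FCGI;

# Listen on a UNIX socket (example path) with a backlog of 10.
my $socket  = FCGI::OpenSocket("/tmp/myapp.sock", 10);
my $request = FCGI::Request(\*STDIN, \*STDOUT, \*STDERR, \%ENV, $socket);

# One long-lived process serving many requests, instead of a
# fork()/exec() per request as the CGI wrapper does.
while ($request->Accept() >= 0) {
    print "Content-Type: text/plain\r\n\r\n";
    print "Hello from a persistent FCGI process\n";
}

FCGI::CloseSocket($socket);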
I use FCGI::Engine with nginx (on FreeBSD) and it works great. It's a
pretty busy internal application (around 30 fcgi child processes),
migrated from mod_perl2 because of memory problems/leaks.
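
FCGI::Engine is, as far as I know, built on top of the FCGI and
FCGI::ProcManager modules; the latter is what maintains a pool of child
processes like the ~30 mentioned above. A rough sketch of that
underlying pattern (the port and the pool size are placeholders):

#!/usr/bin/perl
use strict;
use warnings;
use FCGI;
use FCGI::ProcManager;

my $socket  = FCGI::OpenSocket(":9000", 10);  # example TCP port
my $request = FCGI::Request(\*STDIN, \*STDOUT, \*STDERR, \%ENV, $socket);

# The manager forks and supervises a fixed pool of workers.
my $pm = FCGI::ProcManager->new({ n_processes => 30 });
$pm->pm_manage();

while ($request->Accept() >= 0) {
    $pm->pm_pre_dispatch();
    print "Content-Type: text/plain\r\n\r\n";
    print "handled by pid $$\n";
    $pm->pm_post_dispatch();
}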
Interesting. As for the large response time, you'd generally run a bunch
of fcgiwraps (e.g. by using the -c option) to improve responsiveness.
The memory cost should be negligible compared to a single instance.

Right now all the requests are processed sequentially, so if your CGI
script slept for 5 seconds, you'd get 0.2 req/sec tops, plus an
absolutely abysmal response time.

Of course, this also applies to the Perl wrapper. You can implement a
simplistic prefork there by replacing:

my $pid = fork();
if ($pid == 0) {
    &main;
    exit 0;
}

with:

for (1 .. HOWEVER_MANY_CHILDREN_YOU_WANT) {
    my $pid = fork();
    if ($pid == 0) {
        &main;
        exit 0;
    }
}

Also, if the real CGI scripts you are going to run are all Perl-based,
you may get better results by converting them to talk FastCGI directly,
to save the overhead of launching Perl and loading its modules once per
request. However, this involves some gotchas and is generally getting
way off topic.

Best regards,
Grzegorz N.

I will document all of these for the future. SDR Reporting is and will
be CGI Perl only, so I will deploy fcgiwraps as part of the final
solution. As for improving these numbers, I will open a CR in Bugzilla
to keep track of all of this.
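
To round out the prefork snippet above: the parent should normally
stick around and reap its workers so no zombies accumulate. A
self-contained sketch of that pattern - the worker body and the child
count are placeholders, and the waitpid loop is an addition, not
something taken from nginx-fcgi:

#!/usr/bin/perl
use strict;
use warnings;

sub main {
    # placeholder for the wrapper's per-child FCGI accept loop
    sleep 1;
}

my $children = 4;  # i.e. HOWEVER_MANY_CHILDREN_YOU_WANT
for (1 .. $children) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        &main;
        exit 0;
    }
}

# Parent: block until every worker has exited, reaping each one.
1 while waitpid(-1, 0) > 0;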