On 1/12/06, John J. firstname.lastname@example.org wrote:
> Has anyone else had a similar problem? Is there an elegant work-around
> that I can use to detect these dead processes and kill them?
I am experiencing something similar. Apache at my hosting provider is
configured to send SIGUSR1 to the fcgi processes every four hours to
make them exit and restart. What seems to be
happening is that the FCGI process receives the USR1 and doesn’t exit
until the next request. Meanwhile Apache thinks it has killed the
process and doesn’t send it any more requests. After a while I reach
my process limit with processes stuck in this state. kill -9 will kill
them and get things going again.
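To illustrate why the process lingers rather than dying, here is a minimal plain-Ruby sketch of a deferred signal handler (my own illustration, not the actual Rails handler code): the trap only records that the signal arrived, so nothing exits until the main loop next checks the flag, which only happens after serving another request.

```ruby
# A "graceful" USR1 handler in the style of RailsFCGIHandler:
# the handler merely sets a flag, so the process survives the signal.
exit_requested = false
trap("USR1") { exit_requested = true }

# Apache sends USR1, believing this kills the worker...
Process.kill("USR1", Process.pid)
sleep 0.1 # give the signal a chance to be delivered

# ...but we are still running; the flag has merely been set.
puts "still alive, exit_requested=#{exit_requested}"

# Exit is only honored here, after the *next* request completes --
# and once Apache stops routing requests to this process, that
# request never arrives, leaving the process stuck.
handle_request = lambda do |req|
  # ... process req ...
  exit 0 if exit_requested
end
```

The fix, then, is to make sure the default USR1 disposition (terminate immediately) is in effect whenever the process is idle, and only defer the signal while a request is actually in flight.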
I have been playing around with changes to dispatch.fcgi; here's my
current code, but it isn't always working correctly.
if ENV["RAILS_ENV"] == "production"
  class MyRailsFCGIHandler < RailsFCGIHandler
    def process!(provider = FCGI)
      # Make a note of $" so we can safely reload this instance.
      mark!
      # Start far enough back that the first error is retried.
      @last_error_on = Time.now - 11
      # While blocked waiting for a connection, let USR1 kill the process
      # immediately instead of Rails' graceful "exit after next request".
      trap("USR1", "DEFAULT")
      provider.each_cgi do |cgi|
        # Defer USR1 while a request is in flight.
        trap("USR1") do
          dispatcher_log :info, "ignoring request to terminate immediately"
          @exit_requested = true
        end
        process_request(cgi)
        # Garbage collection countdown.
        run_gc! if gc_request_period
        break if @exit_requested
        trap("USR1", "DEFAULT") # immediate termination again while idle
      end
      dispatcher_log :info, "terminated gracefully"
    rescue SystemExit => exit_error
      dispatcher_log :info, "terminated by explicit exit"
    rescue Object => fcgi_error
      # retry on errors that would otherwise have terminated the FCGI
      # process, but only if they occur more than 10 seconds apart.
      if !(SignalException === fcgi_error) && Time.now - @last_error_on > 10
        @last_error_on = Time.now
        dispatcher_error(fcgi_error, "almost killed by this error")
        retry
      else
        dispatcher_error(fcgi_error, "killed by this error")
      end
    end
  end

  MyRailsFCGIHandler.process! nil, 50
else
  RailsFCGIHandler.process! nil, 50
end