Forum: Ruby on Rails - Fwd: mongrel cluster restart with capistrano fails but manually works

Michael Steinfeld (Guest)
on 2007-04-06 05:10
(Received via mailing list)
---------- Forwarded message ----------
From: Michael Steinfeld <mikeisgreat@gmail.com>
Date: Apr 3, 2007 12:23 PM
Subject: mongrel cluster restart with capistrano fails but manually
works
To: mongrel-users@rubyforge.org


Hi all,

I am out of my head here...

I have a 3-node cluster with 10 mongrels running on each. When I
deploy, I break all the mongrels every time. I have tried just about
everything. I can restart my mongrels without a hitch manually; it's
only when I use cap deploy that they break. Maybe I am missing
something here, so if I can get some help it would be appreciated.
The errors are the typical mongrel pid errors; I can paste them if
need be.

I have mongrel_cluster.yml symlinked from
/{current_path}/config/mongrel_cluster.yml to
/etc/mongrel_cluster/mongrel_cluster.yml:

cwd: /home/app/current
port: "8000"
address: 127.0.0.1
pid_file: log/mongrel.pid
environment: production
servers: 10
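
For what it's worth, the way I understand mongrel_cluster (this is
just my reading of it, not its actual code), that config expands into
one mongrel_rails command per port, with the port number folded into
the pid file name, roughly:

require 'yaml'

# Rough illustration only: how I understand mongrel_cluster expands the
# config above into individual mongrel_rails commands, one process per
# port, with the port appended to the pid file name.
conf = YAML.load_file('/etc/mongrel_cluster/mongrel_cluster.yml')
base = conf['port'].to_i

conf['servers'].times do |i|
  port = base + i
  pid  = conf['pid_file'].sub(/\.pid$/, ".#{port}.pid")  # e.g. log/mongrel.8000.pid
  puts "mongrel_rails start -d -e #{conf['environment']} " \
       "-a #{conf['address']} -p #{port} -P #{pid} -c #{conf['cwd']}"
end

So on each node I'd expect pid files log/mongrel.8000.pid through
log/mongrel.8009.pid under /home/app/current.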



Here is a snippet of my deploy.rb:

<snip>

require 'mongrel_cluster/recipes'
require 'capistrano'

# below snipped for brevity
role :web
role :app

...


# restart mongrel with /etc/init.d/mongrel_cluster restart
desc "Restart the web server"
task :restart, :roles => :app do
  run "sudo /etc/init.d/mongrel_cluster stop"
  run "sleep 5"
  run "sudo /etc/init.d/mongrel_cluster start"
end

...
# couple of tasks to chown some files and directories
desc "Only app servers get this applied to them post-deployment"
task :after_app_deploy, :roles=>:app do
  sudo "chown -PRf foo:foo       #{deploy_to}/current/"
  sudo "chown -PRf foo:foo       #{deploy_to}/releases/"
  sudo "chmod -Rf 755                     #{deploy_to}/releases/"
  sudo "chmod -Rf 750                     #{deploy_to}/current/public/"
end

desc "Apache restarter"
task :kick_apache, :roles=>:web do
  sudo "/usr/local/apache/bin/apachectl restart"
end


Now, the mongrels get restarted when I run deploy even if I don't use
the above restart task, but I don't see where I can change that or
whether that is in fact causing the problem.

For whatever reason, mongrels never start on 8000-8004, but do start
on 8005-8009, so I am thinking it's something in the cap script. I did
recently migrate from lighty+fastcgi to apache+mongrel, and I have 10
mongrels per machine on ports 8000-8009. I was thinking that dispatch
was screwing up ports 8000-8004 somehow (totally guessing).
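
One quick thing I could add to deploy.rb to narrow that down is a
diagnostic task that shows what is actually listening on the cluster's
ports before the restart. This is just a sketch (check_mongrel_ports
is a name I made up, and I haven't run it):

desc "Show what is listening on the mongrel ports"
task :check_mongrel_ports, :roles => :app do
  # netstat -tlnp lists listening TCP sockets with the owning pid/program;
  # egrep narrows the output to the cluster's port range 8000-8009.
  # The || true keeps cap from treating "no matches" as a failed command.
  run "sudo netstat -tlnp | egrep ':800[0-9]' || true"
end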

I was thinking about writing a task similar to below... but I think it
is overkill.

desc "Kill all the pids in case there are some zombies and remove the
.pid files"
task :before_before_deploy, roles => :app do
  begin
    run "sudo kill -9 `ps -ef | grep mongrel | egrep -v grep | awk
'{print $2}'`"
    run "cd /home/user/app/log && sudo rm -rf *.pid"
  end

task :after_after_deploy, roles => :app do
  begin
    " sudo /etc/init.d/mongrel_cluster start"
  end

</snip>


Hopefully someone can see something that I don't.
Thanks for your time

--
-mike


Justin Mazzi (Guest)
on 2007-04-06 18:03
Try sudo /etc/init.d/mongrel_cluster restart instead.
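
In the deploy.rb above that would amount to collapsing the
stop/sleep/start into the init script's own restart action, something
like (untested sketch):

desc "Restart the web server"
task :restart, :roles => :app do
  # Let the init script handle the stop/start ordering itself instead of
  # issuing stop, sleep, and start as three separate run calls.
  run "sudo /etc/init.d/mongrel_cluster restart"
end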