Forum: Rails deployment Mephisto + Mongrel_cluster 503 Service Temporarily Unavailable

Announcement (2017-05-07): www.ruby-forum.com is now read-only since I unfortunately do not have the time to support and maintain the forum any more. Please see rubyonrails.org/community and ruby-lang.org/en/community for other Rails- and Ruby-related community platforms.
boboroshi (Guest)
on 2007-02-11 07:38
(Received via mailing list)
I had Mephisto deployed via Capistrano per Coda Hale's
instructions on a FreeBSD server, and everything was running swimmingly
until the power went out at my house. When the server came back up,
the cached pages were fine, but the /admin section generates:

Service Temporarily Unavailable
The server is temporarily unable to service your request due to
maintenance downtime or capacity problems. Please try again later.

This happens on port 80 (Apache) and 8080 (the cluster itself). However, if
I cd into the current directory of the Capistrano deploy and run
mongrel_rails start, I can access the app on port 3000.

So no worky:
http://www.boboroshi.com/admin
http://www.boboroshi.com:8080/admin

Worky:
http://www.boboroshi.com:3000/admin

I've upgraded all gems, etc. Any thoughts?
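One way to narrow this down is to probe each layer directly (ports taken from the URLs and error output in this thread; this is just a diagnostic sketch, not anything mongrel-specific):

```shell
# Probe each port the thread mentions. A refused connection from nc -z
# means nothing is listening there at all, which distinguishes dead
# mongrel backends from an Apache proxy misconfiguration.
for port in 80 8080 3000 8000; do
  if nc -z 127.0.0.1 "$port" 2>/dev/null; then
    echo "port $port: listening"
  else
    echo "port $port: nothing listening"
  fi
done
```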

Thanks in advance.
-J

John A.
boboroshi
www.meticulous.com
www.boboroshi.com
Luis L. (Guest)
on 2007-02-11 15:49
(Received via mailing list)
On 2/11/07, boboroshi <removed_email_address@domain.invalid> wrote:
> This happens at 80 (apache) and 8080 (the cluster itself). However, if
> I cd into the current directory in the capistrano deploy and have
> mongrel_rails start, I can access the app on 3000.
>

Looks like your cluster isn't set to start automatically when the
system boots... (init.d script?)

Have you tried mongrel_rails cluster::start in your app directory?
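(On FreeBSD the equivalent of an init.d script lives under /usr/local/etc/rc.d. A minimal boot script might look like the sketch below; the app path is the one shown later in this thread, and the user name is an assumption.)

```shell
#!/bin/sh
# Hypothetical /usr/local/etc/rc.d/mongrel_cluster.sh for FreeBSD.
# APP_DIR and APP_USER are assumptions; adjust for your deploy.
APP_DIR=/home/jathayde/boboroshi/current
APP_USER=jathayde

case "$1" in
start)
  su -m "$APP_USER" -c "cd $APP_DIR && mongrel_rails cluster::start"
  ;;
stop)
  su -m "$APP_USER" -c "cd $APP_DIR && mongrel_rails cluster::stop"
  ;;
*)
  echo "usage: $0 {start|stop}"
  ;;
esac
```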



--
Luis L.
Multimedia systems
-
Leaders are made, they are not born. They are made by hard effort,
which is the price which all of us must pay to achieve any goal that
is worthwhile.
Vince Lombardi
boboroshi (Guest)
on 2007-02-11 16:39
(Received via mailing list)
Luis -

Tried that. FreeBSD doesn't have an /etc/init.d directory; it has an
inetd.conf where you add one-line entries. Running the command says a
PID file already exists:

$ mongrel_rails cluster::start
Starting 3 Mongrel servers...
** !!! PID file log/mongrel.8000.pid already exists.  Mongrel could be
running already.  Check your log/mongrel.log for errors.
** !!! Exiting with error.  You must stop mongrel and clear the .pid
before I'll attempt a start.
mongrel_rails start -d -e production -p 8000 -a 127.0.0.1 -P log/mongrel.8000.pid -c /home/jathayde/boboroshi/current

(one for each cluster node occurs)

mongrel_rails cluster::restart restarts the server with no command-line
feedback, but the 503 response persists.

I also tried cap restart and cap deploy, neither of which kicks the
system back up. The static pages are still being served from the
cache, but that's it.
Sean (Guest)
on 2007-02-11 20:52
(Received via mailing list)
Either do a restart or manually delete the .pid. When the power goes
out, mongrel doesn't exit cleanly, which is why a .pid file gets left
behind and things stop working. I had a similar problem before.

Regards,

Sean
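Sean's check can be sketched in shell. The pid file path is the one from the error output above; here we simulate the post-power-loss state by recording the pid of an already-reaped subshell, so the kill -0 probe finds no such process:

```shell
# Simulate a stale mongrel pid file, then detect and remove it.
mkdir -p log
PIDFILE=log/mongrel.8000.pid
( : ) &                 # spawn a short-lived subshell...
echo $! > "$PIDFILE"    # ...record its pid, as mongrel would
wait                    # reap it, so that pid no longer exists

# kill -0 sends no signal; it only checks whether the process exists.
if kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
  echo "mongrel is still running; leave the pid file alone"
else
  echo "stale pid file; removing"
  rm "$PIDFILE"
fi
```

After a real power loss the pid in the file simply belongs to a process that no longer exists, so the same check applies.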
Luis L. (Guest)
on 2007-02-11 23:44
(Received via mailing list)
On 2/11/07, Sean <removed_email_address@domain.invalid> wrote:
>
> Either do a restart or manually delete the .pid. When the power goes
> out, mongrel doesn't exit cleanly, and so that is why a .pid would be
> left and things not working. I had a similar problem before.
>

If memory doesn't trick me, the latest mongrel_cluster checks whether
the process named in the pid file actually exists, and if not, deletes
the file and spawns again.

Anyway, something related to these issues was committed in the latest
version of mongrel and mongrel_cluster.
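For reference, a full manual recovery after an unclean shutdown might look like the sequence below. The app path is the one from the error output earlier in the thread; the --clean flag is, if memory serves, what newer mongrel_cluster releases expose for exactly this stale-pid check:

```shell
# Hypothetical recovery sequence after a power loss.
cd /home/jathayde/boboroshi/current
mongrel_rails cluster::stop            # ignore complaints if nothing is running
rm -f log/mongrel.*.pid                # clear pid files the crash left behind
mongrel_rails cluster::start --clean   # --clean (newer releases) automates the check
```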


--
Luis L.
Multimedia systems
boboroshi (Guest)
on 2007-02-12 05:44
(Received via mailing list)
Thanks Sean and Luis. Deleting the PID files in log did the trick.