Mongrel doesn't expose its ports

Here is my mongrel_cluster.yml


cwd: /var/www/apps/MyApplication
log_file: log/mongrel.log
port: "8000"
environment: production
address: 127.0.0.1
pid_file: tmp/pids/mongrel.pid
servers: 2

Now I type:

mongrel_rails cluster::restart
already stopped port 8000
already stopped port 8001
starting port 8000
starting port 8001

And then:
curl -I http://127.0.0.1:8000/
curl: (7) couldn’t connect to host

No mongrel!

Further:
nmap 127.0.0.1

Starting Nmap 4.53 ( http://insecure.org ) at 2008-03-19 01:08 EDT
Interesting ports on localhost (127.0.0.1):
Not shown: 1706 closed ports
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
139/tcp open netbios-ssn
445/tcp open microsoft-ds
631/tcp open ipp
3000/tcp open ppp
3001/tcp open nessusd
3306/tcp open mysql

no mongrel!
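
(A side note: nmap takes those service names from /etc/services, not from the
owning process, so the "ppp" and "nessusd" labels on 3000/3001 could be
anything – quite possibly leftover mongrels. Something like this would show
what actually holds them:)

sudo lsof -i :3000 -i :3001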

Also, when I try to see my site (after deploying via Capistrano):
[Wed Mar 19 01:09:04 2008] [error] (111)Connection refused: proxy: HTTP:
attempt to connect to 127.0.0.1:8001 (127.0.0.1) failed
[Wed Mar 19 01:09:04 2008] [error] ap_proxy_connect_backend disabling
worker for (127.0.0.1)
[Wed Mar 19 01:09:04 2008] [error] (111)Connection refused: proxy: HTTP:
attempt to connect to 127.0.0.1:8000 (127.0.0.1) failed
[Wed Mar 19 01:09:04 2008] [error] ap_proxy_connect_backend disabling
worker for (127.0.0.1)
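
(For reference, those errors come from an Apache mod_proxy_balancer setup
along these lines – a sketch from memory, not my literal vhost:)

<Proxy balancer://mongrel_cluster>
  # the two mongrels the cluster config above is supposed to start
  BalancerMember http://127.0.0.1:8000
  BalancerMember http://127.0.0.1:8001
</Proxy>
ProxyPass / balancer://mongrel_cluster/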

This has been many hours of frustration – any help greatly
appreciated.

best,

tim

Can you post the last several lines from the various log files?
mongrel.log, development / production.log?

Also, does mongrel work if you try
mongrel_rails start -p 8000? (Make sure there isn’t a port conflict.)

as root

lsof | grep 8000
lsof | grep 8001
can show if anything is using those ports.
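
(A note: a bare lsof | grep will also match unix sockets, device addresses,
and file sizes that happen to contain 8000. Restricting the match to the TCP
port is more reliable:)

lsof -i TCP:8000
lsof -i TCP:8001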

I’ll have to wait until morning to look at my cluster.yml.

David H. wrote:

Can you post the last several lines from the various log files?
mongrel.log, development / production.log?

Also, does mongrel work if you try
mongrel_rails start -p 8000? (Make sure there isn’t a port conflict.)

as root

lsof | grep 8000
lsof | grep 8001
can show if anything is using those ports.

I’ll have to wait until morning to look at my cluster.yml.

Interesting. My log files are:

(realizing I should be in the ruby-deployment forum, sorry)

development.log mongrel.3000.log mongrel.3001.log test.log

I deleted the 3000 log files and then restarted the cluster.

they look normal:
** Daemonized, any open files are closed. Look at
tmp/pids/mongrel.3000.pid and
log/mongrel.3000.log for info.
** Starting Mongrel listening at 0.0.0.0:3000
** Starting Rails with development environment…
** Rails loaded.
** Loading any Rails specific GemPlugins
** Signals ready. TERM => stop. USR2 => restart. INT => stop (no
restart).
** Rails signals registered. HUP => reload (without restart). It might
not work well.
** Mongrel 1.1.4 available at 0.0.0.0:3000
** Writing PID file to tmp/pids/mongrel.3000.pid
** TERM signal received.
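
(One thing that jumps out: the log says 0.0.0.0:3000, while the yml above
says address 127.0.0.1 and port 8000 – so maybe the cluster commands aren’t
reading that file at all. Passing the config explicitly would rule that out:)

mongrel_rails cluster::restart -C config/mongrel_cluster.yml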

Now if I run lsof and grep for 8000:
[email protected]:~/stadion_consulting/0318_fww/svn$ lsof | grep 8000
x-session 5729 tbbooher 19u unix 0xec848000 19334
/tmp/.ICE-unix/5729
gconfd-2 5766 tbbooher 44u unix 0xe90b8000 19992
socket
bluetooth 5811 tbbooher 14u unix 0xec388000 19526
/tmp/orbit-tbbooher/linc-16b3-0-43012b343ac11
update-no 5815 tbbooher txt REG 3,1 48000 2966281
/usr/bin/update-notifier

So it looks like there might be a port conflict.

Mongrel is running now but I still can’t curl http://127.0.0.1:8000, and
now I don’t have any log files.

help!

best,

tim

Tim B. wrote:

** TERM signal received.

Mongrel is running now but I still can’t curl http://127.0.0.1:8000, and
now I don’t have any log files.

It looks like mongrel is terminating on startup. Perhaps you have a
rogue mongrel process still running on that port that wasn’t shut down?
Try:

ps -aef | egrep mongrel

To see if a process is there. If there is, then kill it (them) and try
again.
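
(One refinement: bracketing a letter in the pattern keeps the egrep process
itself out of the listing, so an empty result really means no mongrels:)

ps -aef | egrep '[m]ongrel'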

I’ve had problems expecting mongrel to be on a certain port when using
cluster. If there are problems with the mongrels launching, cluster
won’t tell you. Try just using mongrel_rails start -p 8000 and see if
you can then hit the port. That step has taken care of my issues with
mongrel_cluster, because mongrel_rails will let you know exactly what’s
failing (if anything).
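
If that works, it’s also worth repeating the test with -e production, since
your yml runs production and a production-only problem (a bad database.yml
entry, say) won’t show up in a development-mode test:

mongrel_rails start -p 8000 -e production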

Hope that helps.

Tyler

On Mar 19, 4:19 am, Tim B. [email protected]

This may sound like a silly question… but you are trying to access
mongrel from the SAME machine it is running on, right?

What happens if you try to access it from the global IP? (e.g.
http://10.0.1.100:8000)

Try restarting mongrel using "mongrel_rails cluster::start --clean -C
/your/mongrel.yml".

The --clean will remove any stale pid_files if it needs to…

mike

On Wed, Mar 19, 2008 at 12:44 PM, Mark B.

Oh, and I get a little farther with the --clean option:

$ mongrel_rails cluster::stop
already stopped port 8000
already stopped port 8001
$ mongrel_rails cluster::start --clean -C config/mongrel_cluster.yml
starting port 8000
starting port 8001
$ lsof -i TCP:8000
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
mongrel_r 21933 tbbooher 3u IPv4 43048 TCP localhost:8000
(LISTEN)
$ curl -I http://127.0.0.1:8000
curl: (7) couldn’t connect to host

!!!
and one minute later:
ps -aef | egrep mongrel
tbbooher 21946 5904 0 19:24 pts/0 00:00:00 grep -E mongrel
^^
nothing
and
lsof -i TCP:8000 returns nothing

!!!

thanks for any help.

FYI – here is my mongrel cluster config:
more mongrel_cluster.yml

cwd: /var/www/apps/FitWitWeb/current
log_file: log/mongrel.log
port: "8000"
environment: production
address: 127.0.0.1
pid_file: tmp/pids/mongrel.pid
servers: 2

ruby script/server -e production -p 8000

runs fine in my /var/www/apps/FitWitWeb/current directory
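
Next thing I can try: reproduce the cluster’s invocation by hand in the
foreground, where any failure should print straight to the terminal (flags
reconstructed from my yml, so they may need adjusting):

cd /var/www/apps/FitWitWeb/current
mongrel_rails start -e production -p 8000 -a 127.0.0.1 \
  -P tmp/pids/mongrel.8000.pid -l log/mongrel.8000.log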

My pid directory (/var/www/apps/FitWitWeb/current/tmp/pids) has the
following permissions:
lrwxrwxrwx 1 www-data www-data 35 2008-03-18 22:50 pids ->
/var/www/apps/FitWitWeb/shared/pids
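
(Something I still need to verify: that the user starting the cluster can
actually write pid files through that www-data-owned symlink – just a hunch:)

ls -ld /var/www/apps/FitWitWeb/shared/pids
touch /var/www/apps/FitWitWeb/shared/pids/.write_test \
  && rm /var/www/apps/FitWitWeb/shared/pids/.write_test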

You said:
and one minute later:
ps -aef | egrep mongrel
tbbooher 21946 5904 0 19:24 pts/0 00:00:00 grep -E mongrel

OK, mongrel is crashing. You should check the logs and post them. I
noticed they are called mongrel.3000.log etc. The logs are typically
named after the port that mongrel is listening on.

Your config could be messed up; see if you can find your app on port
3000.

eg.
Mongrel 1.1.4 available at 0.0.0.0:3000
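
For example, to check whether the app is actually answering there:

curl -I http://127.0.0.1:3000/
tail -n 50 log/mongrel.3000.log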

On a side note, please be aware that when you did:
ps -aef | grep -v mongrel mongrel
grep: mongrel: No such file or directory

the -v inverts the match, which you don’t want. grep also treats the
second “mongrel” as a file to search rather than part of the pattern,
which is why it fails with “No such file or directory” (even if you
remove the -v). So it’s not giving you the results you require.
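
What you want is just:

ps -aef | grep mongrel

or, if your system has it, pgrep avoids the extra grep line entirely:

pgrep -fl mongrel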

Luke

I killed all mongrel processes (none were running).

I can start mongrel on port 8000:
mongrel_rails start -p 8000
** Starting Mongrel listening at 0.0.0.0:8000
** Starting Rails with development environment…
** Rails loaded.
** Loading any Rails specific GemPlugins
** Signals ready. TERM => stop. USR2 => restart. INT => stop (no
restart).
** Rails signals registered. HUP => reload (without restart). It might
not work well.
** Mongrel 1.1.4 available at 0.0.0.0:8000
** Use CTRL-C to stop.

I can start the cluster without errors:
mongrel_rails cluster::start
starting port 8000
starting port 8001

curl -I http://127.0.0.1:8000
curl: (7) couldn’t connect to host

Hmm… is mongrel running?

ps -aef | grep -v mongrel mongrel
grep: mongrel: No such file or directory

Trying lsof with 8000 – no mongrel on that port:
lsof | grep 8000
x-session 5729 tbbooher 19u unix 0xec848000 19334
/tmp/.ICE-unix/5729
gconfd-2 5766 tbbooher 44u unix 0xe90b8000 19992
socket
bluetooth 5811 tbbooher 14u unix 0xec388000 19526
/tmp/orbit-tbbooher/linc-16b3-0-43012b343ac11
update-no 5815 tbbooher txt REG 3,1 48000 2966281
/usr/bin/update-notifier

More specifically, lsof -i TCP:8000 returns nothing…

Any other ideas?

I can’t find anything.

best,

tim

Tim B. wrote:

Here is my mongrel_cluster.yml

cwd: /var/www/apps/MyApplication
log_file: log/mongrel.log
port: "8000"
environment: production
address: 127.0.0.1
pid_file: tmp/pids/mongrel.pid
servers: 2

Now I type:

mongrel_rails cluster::restart
already stopped port 8000
already stopped port 8001
starting port 8000
starting port 8001

And then:
curl -I http://127.0.0.1:8000/
curl: (7) couldn’t connect to host

No mongrel!

[snip]

I am experiencing the same problem as I switch my production server from
one colo to another. Needless to say, this all works fine on the current
machine. But the new machine has this problem.

Here is what I know: if I run webrick or mongrel on (say) port 3000:
- I cannot hit that from the local machine (via curl, wget)
- lsof (when run as sudo) shows the listener
- On my ‘working’ machine I can use curl to get localhost:3000 just fine.

So, needless to say, Apache cannot hit the mongrel ports either.

I think the problem isn’t mongrel or webrick; it is something in the
OS configuration.
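
The first thing I plan to rule out is the host firewall – iptables ships
enabled on stock CentOS, and it would explain a listener that shows up in
lsof while local connects are refused (just a guess at this point):

sudo service iptables status
sudo iptables -L -n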

N.B. selinux is off for this machine.
The old machine is RHEL4 the new is CentOS 4.5 (at gogrid)

Here’s output from lsof:
mongrel_r 7729 mike 3u IPv4 118549 TCP gogrid-centos4:9000
(LISTEN)
Any idea what the gogrid-centos4 part of that line means? (It shows as
* for other processes.)

Any ideas on what to try or fix here?

Mike