Currently, I have Apache2 as my frontend and Mongrel as the Rails
application server. All requests come into Apache, but requests for the
Rails app are proxied (mod_proxy) to an instance of the Mongrel web
server running on a different port. I just want to know if this is the
best practice in terms of robustness, scalability and security.
Thanks,
Henry
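For reference, the setup described above typically boils down to an
Apache vhost along these lines. This is only a sketch: the hostname,
document root and Mongrel port are assumptions, not taken from the mail,
and it needs mod_proxy and mod_proxy_http enabled.

<VirtualHost *:80>
  # hostname and DocumentRoot are assumed, not from the original mail
  ServerName example.com
  DocumentRoot /data/www/current/public

  # hand every request to a single Mongrel instance on port 8000
  ProxyPass / http://127.0.0.1:8000/
  ProxyPassReverse / http://127.0.0.1:8000/
  ProxyPreserveHost On
</VirtualHost>

In practice you would usually also let Apache serve /images,
/stylesheets and other static files directly rather than proxying them.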
Hey Henry -
When you say mod_proxy, did you mean mod_proxy_balancer? If so, one of
the downsides of that solution is that mod_proxy_balancer won't skip
over a busy Mongrel instance - it will just sit and wait for it to be
free. Yuck.
For this reason, and because of the heaviness of Apache, we've switched
to a combo of nginx and haproxy. haproxy is extremely light and doesn't
have that problem. It can also do some of the proxying tricks you'd
otherwise only find in hardware balancers.
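Here's a rough sketch of what an haproxy front to the Mongrels can look
like (current haproxy syntax; the listen port and Mongrel ports are
assumptions):

global
    maxconn 256

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

listen mongrels
    bind 127.0.0.1:8100
    balance roundrobin
    # maxconn 1 means haproxy never hands a second request to a busy
    # Mongrel; extra requests queue in haproxy until a backend is free
    server mongrel0 127.0.0.1:8000 maxconn 1 check
    server mongrel1 127.0.0.1:8001 maxconn 1 check
    server mongrel2 127.0.0.1:8002 maxconn 1 check

The frontend (nginx in our case) then proxies to 127.0.0.1:8100 instead
of to the Mongrels directly, which is what avoids the
mod_proxy_balancer behaviour described above.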
Ezra mentioned (where did I read that?) that nginx was getting a
balancer itself - not sure if that's happened yet.
Some of the best minds in Rails deployment are on a separate list - you
might ask there:
http://groups.google.com/group/rubyonrails-deployment/
Jodi
I believe Jodi was referring to a fair balancer, not a round-robin one.
Nginx is testing a fair balancer which will skip over busy backends, but
I don't think there's a stable build of it yet.
http://brainspl.at/articles/2007/11/09/a-fair-proxy-balancer-for-nginx-and-mongrel
Ah, I see - I had read the brainspl.at article before, but hadn't
noticed that it isn't in the stable version of nginx.
You can get the module here
http://wiki.codemongers.com/NginxHttpUpstreamFairModule?highlight=(fair)
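For what it's worth, once nginx has been built with that module compiled
in (./configure --add-module=... pointing at the upstream-fair source),
enabling it is a single extra directive in the upstream block. A sketch,
assuming the usual three local Mongrels:

upstream mongrel {
  # "fair" sends each request to the least-busy backend instead of
  # plain round-robin, so busy Mongrels get skipped
  fair;
  server 127.0.0.1:8000;
  server 127.0.0.1:8001;
  server 127.0.0.1:8002;
}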
Going back to Henry's original question, I have several websites
running on Apache + mongrel_cluster + mod_proxy_balancer and they are
perfectly fine, including one receiving several thousand hits a day.
We're moving most new deploys across to nginx as discussed above,
though, because Apache is very heavy to use purely as a load balancer.
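For comparison, the Apache side of that kind of setup looks roughly like
the following. A sketch only - the balancer name and ports are
assumptions, and it needs mod_proxy, mod_proxy_http and
mod_proxy_balancer enabled:

<Proxy balancer://mongrel_cluster>
  BalancerMember http://127.0.0.1:8000
  BalancerMember http://127.0.0.1:8001
  BalancerMember http://127.0.0.1:8002
</Proxy>

ProxyPass / balancer://mongrel_cluster/
ProxyPassReverse / balancer://mongrel_cluster/
ProxyPreserveHost On

By default requests are distributed by request count (effectively
round-robin) regardless of whether the chosen Mongrel is already busy,
which is the behaviour Jodi described above.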
Nginx does have a balancer built in (plain round-robin). Here's a sample
nginx install and .conf for you (Ubuntu install):

apt-get install -y nginx
cd /etc/nginx
mv nginx.conf nginx.conf.orig

Then save the following as /etc/nginx/nginx.conf:
user www-data www-data;
worker_processes 6;
pid /var/run/nginx.pid;

events { worker_connections 1024; }

http {
  include /etc/nginx/mime.types;
  default_type application/octet-stream;

  log_format main '$remote_addr - $remote_user [$time_local] '
                  '"$request" $status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';

  access_log /var/log/nginx_access.log main;
  error_log /var/log/nginx_error.log debug;

  sendfile on;
  tcp_nopush on;
  tcp_nodelay off;

  gzip on;
  gzip_http_version 1.0;
  gzip_comp_level 2;
  gzip_proxied any;
  gzip_types text/plain text/html text/css application/x-javascript
             text/xml application/xml application/xml+rss text/javascript;

  # the Mongrel cluster nginx balances across
  upstream mongrel {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
  }

  server {
    listen 80;
    client_max_body_size 50M;
    # server_name www.[servername].com [servername].com;
    root /data/www/current/public;
    access_log /var/log/nginx.servername.access.log main;

    # show the maintenance page if one has been put in place
    if (-f $document_root/system/maintenance.html) {
      rewrite ^(.*)$ /system/maintenance.html last;
      break;
    }

    location / {
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $http_host;
      proxy_redirect off;
      proxy_max_temp_file_size 0;

      # serve static files straight off disk if they exist
      if (-f $request_filename) {
        break;
      }
      if (-f $request_filename/index.html) {
        rewrite (.*) $1/index.html break;
      }
      if (-f $request_filename.html) {
        rewrite (.*) $1.html break;
      }

      # everything else goes to the Mongrel cluster
      if (!-f $request_filename) {
        proxy_pass http://mongrel;
        break;
      }
    }

    error_page 500 502 503 504 /500.html;
    location = /500.html {
      root /data/www/current/public;
    }
  }
}
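Before reloading, it's worth checking that the new config parses (the
Ubuntu package puts the nginx binary on the path):

nginx -t

then: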
/etc/init.d/nginx reload
This works great for us.
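One thing the config above assumes but doesn't show is the Mongrel
cluster itself listening on ports 8000-8002. With the mongrel_cluster
gem that part is roughly the following (a sketch - the app path, user
and group are assumptions):

gem install mongrel mongrel_cluster
cd /data/www/current
mongrel_rails cluster::configure -e production -a 127.0.0.1 -p 8000 -N 3 \
  -c /data/www/current --user www-data --group www-data
mongrel_rails cluster::start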