Load Balancing and High Availability

Hi,

I am an nginx newbie. I have nginx configured as a reverse proxy/load
balancer in front of a small cluster of JBoss servers. I have configured it
as per the tutorials on the web, like this one ref:

and all works fine with simple round-robin load balancing.

My question is: if one of the backend JBoss servers goes down, how do I stop
nginx from load balancing requests to the dead application server?

Thanks,
W

Hi,

In answer to my own question, I found this…

+------+
Max Fails
According to the default round robin settings, nginx will continue to send
data to the virtual private servers, even if the servers are not responding.
Max fails can automatically prevent this by rendering unresponsive servers
inoperative for a set amount of time. There are two factors associated with
max fails: max_fails and fail_timeout.

Max_fails refers to the maximum number of failed attempts to connect to a
server that should occur before it is considered inactive.

Fail_timeout specifies the length of time that the server is considered
inoperative. Once the time expires, new attempts to reach the server will
start up again. The default timeout value is 10 seconds.
+------+
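
If I am reading that right, it would translate into something like this in
the upstream block (the addresses below are placeholders for illustration,
not my real config):

upstream jboss_cluster {
    # after 3 failed attempts, take the server out of rotation
    # for 30 seconds before trying it again
    server 10.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:8080 max_fails=3 fail_timeout=30s;
}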

Is this the best way? Are there any gotchas/best practices to be aware of?

Thanks
W

Hi,

On Tuesday, 23 July 2013 at 11:06:46, toriacht wrote:

Hi,

In answer to my own question, I found this…

+------+
Max Fails
Fail_timeout specifies the length of time that the server is considered
inoperative. Once the time expires, new attempts to reach the server will
start up again. The default timeout value is 10 seconds.
+------+

This is the way I did it.

I set max_fails=1 and fail_timeout in my upstream definition, and in my
location block:

proxy_next_upstream http_502 http_503 error;

You can use any allowed http status code here.
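
For context, a stripped-down sketch of how those pieces fit together (the
addresses and ports below are placeholders, not my real backends):

upstream backend {
    # take a server out of rotation after a single failure,
    # for the duration of fail_timeout
    server 10.0.0.1:8080 max_fails=1 fail_timeout=30s;
    server 10.0.0.2:8080 max_fails=1 fail_timeout=30s;
}

server {
    listen 80;

    location / {
        # on a connection error or a 502/503 response,
        # retry the request on the next server in the upstream group
        proxy_next_upstream http_502 http_503 error;
        proxy_pass http://backend;
    }
}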

Rgds, Axel

Hi Team,

I am a newbie too and I am setting up a load balancer with nginx from
"How To Set Up Nginx Load Balancing | DigitalOcean",
but my requests are not going to the servers which I have configured.

Below is my nginx.conf setup:

upstream nitesh {
    server 192.168.1.2;
    server 192.168.1.3;
    server 192.168.1.4;
}

And below is my virtual.conf setup:

server {
    listen *:80;
    server_name nginx.whmcs.co.in;
    access_log /var/log/nginx/nginx.access.log;
    error_log /var/log/nginx/nginx_error.log debug;
    # note: log_format is only valid in the http context, so nginx will
    # reject it here inside a server block; it needs to move to nginx.conf
    log_format upstreamlog '[$time_local] $remote_addr - $remote_user - '
        '$server_name to: $upstream_addr: $request '
        'upstream_response_time $upstream_response_time '
        'msec $msec request_time $request_time';
    location / {
        proxy_pass http://nitesh;
    }
}

But I am not getting the setup page when running the website.

My other three servers are running Apache.

So please let me know what I should change in that configuration.

Hi Axel,

Thank you for the reply. I have pasted some of my nginx.conf file below.
Can you confirm whether I'm setting proxy_next_upstream in the correct
location, please?

Also, is there a way to have fail_timeout increment per failure? I.e. if it
failed once, try again in 30 secs as it might be a minor issue; then if that
fails, try again in 2 mins, then 4 mins, etc.?

user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;

pid /run/nginx.pid;

events {
worker_connections 1024;
}

http {
include /etc/nginx/mime.types;
default_type application/octet-stream;

log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';

access_log  /var/log/nginx/access.log  main;

sendfile        on;
#tcp_nopush     on;

#keepalive_timeout  0;
keepalive_timeout  65;

#set headers
proxy_set_header        Host $host;
proxy_set_header        X-Real-IP $remote_addr;
proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header        X-Queue-Start "t=${msec}000";

# switch to next upstream server in these scenarios

proxy_next_upstream http_502 http_503 error;

# load balancer
# ip_hash provides sticky session
upstream balancer {
    ip_hash;
    server 127.0.0.1:8180 max_fails=1 fail_timeout=2000s;
    server 127.0.0.1:8280 max_fails=1 fail_timeout=2000s;

}
#gzip  on;

# Load modular configuration files from the /etc/nginx/conf.d directory.
# See Core functionality for more information.
include /etc/nginx/conf.d/*.conf;

server {
    listen       80;
    server_name  localhost;

    #charset koi8-r;

    access_log  /var/log/nginx/host.access.log  main;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    #rules for rest
    location /rest/ {
            #proxy_pass http://127.0.0.1:8080/MyApp/rest/;
            proxy_pass http://balancer/MyApp/rest/;
     }
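
In case it is relevant, my understanding is that proxy_next_upstream can
also be set per location rather than at the http level, e.g. something like
(sketch only, same upstream as above):

    location /rest/ {
        proxy_next_upstream http_502 http_503 error;
        proxy_pass http://balancer/MyApp/rest/;
    }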


Many thanks
