Nginx session-stickiness

Hi,

Is there any way to achieve session stickiness via Nginx proxying?

I’ve read about ip_hash, but this is not going to work if one of the backend servers fails, right? Clients that should go to it will not be served unless I mark the server as down. If this is what nginx does, then it is not much of a solution.
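
For reference, the kind of setup I mean would be something like this (hostnames invented):

upstream backends {
    ip_hash;                       # stick each client IP to one backend
    server app1.example.com:8080;
    server app2.example.com:8080;
}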

Also, is there any other way of keeping sessions, like cookie tracking/insertion?

I’d be interested in an answer to this as well. Many thanks.

Regards,
David

Also, is there any other way of keeping sessions, like cookie tracking/insertion?

If you use PHP in your applications, there is an extension called session_mysql. It lets you store your sessions in a MySQL database that can be accessed from all your hosts and/or replicated. This is not really nginx any more, but it may help.

You may also have a look at OpenBSD’s hoststated.

Is there any way to achieve session stickiness via Nginx proxying?

I’ve read about ip_hash, but this is not going to work if one of the backend servers fails, right? Clients that should go to it will not be served unless I mark the server as down. If this is what nginx does, then it is not much of a solution.

Have a look at proxy_next_upstream.

http://wiki.codemongers.com/NginxHttpProxyModule#proxy_next_upstream

You can turn it on for “error”, and if nginx fails to connect to the hashed backend, it’ll try the next hashed server just as if you’d marked it as down.
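
In config terms, something like this (a sketch only, assuming an upstream block called “backends” with ip_hash enabled, like the one in the question):

location / {
    proxy_pass          http://backends;
    # on a connect error, retry the request on the next server in the hash
    proxy_next_upstream error;
}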

Rob

On Sat, Apr 5, 2008 at 8:41 PM, Rob M. [email protected] wrote:

Have a look at proxy_next_upstream.

http://wiki.codemongers.com/NginxHttpProxyModule#proxy_next_upstream

You can turn it on for “error”, and if nginx fails to connect to the hashed backend, it’ll try the next hashed server just as if you’d marked it as down.

Won’t this have the downside of possibly sending multiple failing requests to the upstreams? We used this for a while but ran into problems with duplicate requests. For example, we had people sending WAY too many mails out in a request; the appserver would time out halfway through, it’d send a portion of the emails, and then send the request to another upstream. The subsequent requests would do the same thing, and people would get the same email for every upstream defined.

I think the short answer to the original question is no, but I’d love to be proved wrong. :wink: This is why people are offering alternatives instead of pointing to a clear solution. FWIW most of our clients use cookie-, database-, or memcached-based sessions. Personally I kind of like the approach of having your application manage session data.

http://wiki.codemongers.com/NginxHttpProxyModule#proxy_next_upstream

Won’t this have the downside of possibly sending multiple failing requests to the upstreams? We used this for a while but ran into problems with duplicate requests. For example, we had people sending WAY too many mails out in a request; the appserver would time out halfway through, it’d send a portion of the emails, and then send the …

It would be nice if you could just use:

proxy_next_upstream connect_failed;

At the moment it seems “error” refers to too many things, including an
upstream timeout.

Rob

+1, it is common for our app to have backends that can’t be
connect()ed temporarily during a roll or restart.

+1, it is common for our app to have backends that can’t be
connect()ed temporarily during a roll or restart.

At the moment we do this by having a separate file included as:

include /etc/nginx-servers.conf;

A separate process is kept running which every 10 seconds queries our DB for “up” servers and rebuilds the nginx-servers.conf file. If a server is marked as down, it adds a “down” suffix to the appropriate server line, and then HUPs nginx.
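
The generated file is just the server lines for an upstream block (the include sits inside it); it ends up looking something like this, with made-up addresses:

# /etc/nginx-servers.conf (regenerated every 10 seconds from the DB)
server 10.0.0.1:8080;
server 10.0.0.2:8080 down;   # marked down in the DB
server 10.0.0.3:8080;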

Our code to do a rolling restart of the backends basically updates the DB to let it know the backend is down, waits 15 seconds, restarts the backend, then marks it as up again in the DB, waits 15 seconds, and moves on to the next server.

This pretty much ensures that no clients see any downtime at all, though I think “keep alive” connections may still see a problem; haven’t tested closely…

Rob

I’ve just set the fail timeout for connections to 1 second. That generally avoids sitting in connect() too long for a dead backend.
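
In config terms that’s one of these, depending on where you want it (sketches from memory, address made up, so double-check):

# a short connect timeout on the proxy side...
proxy_connect_timeout 1s;

# ...or a short fail window on the upstream server entry
server 10.0.0.1:8080 max_fails=1 fail_timeout=1;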

On Sun, Apr 6, 2008 at 10:42 PM, Rob M. [email protected] wrote:

A separate process is kept running which every 10 seconds queries our DB […] I think “keep alive” connections may still see a problem, haven’t tested closely…

IMHO, this is overkill. It’s really neat, but I don’t think you need to do this at all. We host lots of Rails apps and don’t run into problems that require that kind of approach. You’ll get error log messages, but clients don’t notice. The only time we restart/HUP nginx is when we rotate logs or upgrade nginx itself. Nginx’s upstream handling has always been smart enough to detect the appropriate server to send requests to without hacks like this; it’s one of the reasons I loved it immediately. :slight_smile: We’ve had unfair weighting with the fair queueing patch, but even then it sends requests to available upstream servers. You’re basically faking load-balancer heartbeats inside nginx, and AFAIK you don’t need to. If you’re on J2EE, PHP, Python or whatever, then that might make sense; if you’re on Rails I wouldn’t do this.

IMHO, this is overkill. It’s really neat, but I don’t think you need to do this at all. We host lots of Rails apps and don’t run into problems that require that kind of approach. You’ll get error log …

I’m confused. You previously said:

Won’t this have the downside of possibly sending multiple failing requests to the upstreams? We used this for a while but ran into problems with duplicate requests. For example, we had people sending WAY too many mails out in a request; the appserver would time out halfway through, it’d send a portion of the emails, and then send the request to another upstream. The subsequent requests would do the same thing, and people would get the same email for every upstream defined.

I then looked at the docs.


http://wiki.codemongers.com/NginxHttpProxyModule#proxy_next_upstream

error - an error occurred while connecting to the server, sending a request to it, or reading its response;
timeout - a timeout occurred while connecting to the server, transferring the request, or reading the response from the server;

And assumed that a “timeout” was a subset of “error”. Is that right or wrong, then? If I do:

proxy_next_upstream error;

and one of my connections times out, will nginx send the request to the next backend or not? If it does, then that’s a problem, because it can cause the same “slow” action to be launched multiple times on multiple servers. It means we do need a “connect_error” option so we can just say:

proxy_next_upstream connect_error;

If not, then we’re all OK; we can just use the “error” option.

Anyway, having said all that, we still do need our solution for some annoying edge cases. Basically, systems can crash in very, very odd ways. It’s been a while (I think it was Linux 2.6.18), but we had a system crash in a state where it would accept TCP connections but wasn’t responding to them in any way. That was quite nasty, because it meant connections coming in to that server would have to wait the full proxy_read_timeout before being passed to the next backend server. Since the server was remote, it took a little while to get it rebooted at the co-location facility.
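
For reference, the directives in play there (values are examples, not our exact config):

proxy_connect_timeout 1s;    # a dead host that refuses connections fails fast
proxy_read_timeout    60s;   # but a host that accepts and then goes silent
                             # holds the request for this full period
proxy_next_upstream   error timeout;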

Fortunately, because of our above scheme, and the fact that we remotely check each server every 2 minutes, when that server failed to pass its “ping” test after 30 seconds it was marked down in the database and automatically taken out of service, with no intervention required from us.

Rob

Hello!

On Mon, Apr 07, 2008 at 09:21:57AM +0200, Buzzz wrote:

…the SMTP backend without auth, just forwarding the SMTP client’s data (mail) to the backend.

Sorry, but nginx has no code to do SMTP auth with the backend. The only thing it can do is send HELO/EHLO and the non-standard XCLIENT command (if enabled in the config).

Note, however, that after authentication nginx establishes an opaque pipe between the client and the backend, so everything the client sends will be transferred to the backend (and vice versa). So if the client re-sends authentication for some reason, this will be seen by the backend.

Maxim D.

Hi

I’m trying to configure an SMTP proxy. I’ve written my own Perl auth handler, and my backend SMTP server does not use SMTP auth.

I’ve seen that the nginx SMTP proxy tries to authenticate with the backend after a successful auth from the SMTP client.

Is it possible to disable this? I would like nginx to talk to the SMTP backend without auth, just forwarding the SMTP client’s data (mail) to the backend.

Thx for help,
Davide

The problem was that the backend SMTP server does not support the XCLIENT command issued by nginx.

I’ve disabled the command and now it works properly.
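
For anyone who hits this later, the switch lives in the mail block; a minimal sketch (listener and auth handler URL are placeholders, not my real config):

mail {
    # HTTP auth server implemented by the Perl handler
    auth_http http://127.0.0.1:9000/auth;

    server {
        listen   25;
        protocol smtp;
        # the backend doesn’t understand XCLIENT, so don’t send it
        xclient  off;
    }
}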

Thx for help,
Davide