Persistence + Round robin

Hey guys,
I’ve read through the docs and this doesn’t seem possible with the current
setup, but I could be wrong. Here’s what I need:

Load balancing based on a URI parameter, but if it’s not found, fall back
to round robin in chunks of 100, keyed by the Referer header. For example,
the first 100 people with Referer=www.somedomain.com get sent to nodeA,
the second 100 to nodeB, the third 100 to nodeA, and so on. After a user
is assigned a node, any subsequent request they make will carry a
uri_param that points Nginx to the right server.

We’re using Evan M.'s awesome Upstream Hash Module, but it doesn’t seem
to be configurable to this level.

Is this possible? If not, I’d like to hire anyone willing to implement
this. Anyone interested?

Thanks,
Brian

On Wed, Sep 24, 2008 at 12:18:41PM -0500, Brian Moschel wrote:

We’re using Evan M.'s awesome Upstream Hash Module, but it doesn’t seem
to be configurable to this level.

Is this possible? If not, I’d like to hire anyone willing to implement
this. Anyone interested?

I think you might be able to do something like this without writing new
code. How about (not tried, just thinking):

upstream up_default {
    # USE UPSTREAM_FAIR (subliminal message :slight_smile:)

    server 10.0.0.1:80;
    server 10.0.0.2:80;
    server 10.0.0.3:80;
}

upstream up_1 { server 10.0.0.1:80; }
upstream up_2 { server 10.0.0.2:80; }
upstream up_3 { server 10.0.0.3:80; }

location / {
    if (…something…) {
        proxy_pass http://up_1;
    }
    if (…) {
        proxy_pass http://up_2;
    }
    if (…) {
        proxy_pass http://up_3;
    }
    proxy_pass http://up_default;
}

You’d have to send something back to the client (e.g. in generated URLs)
so that you can identify the original backend later.
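For concreteness, here is one way the “…something…” conditions might be filled in, not tried either. It assumes the assigned node comes back as a query parameter; the parameter name `node` is purely an example, and nginx exposes query parameters as `$arg_<name>` variables:

```nginx
location / {
    # "node" is a hypothetical query-parameter name; nginx makes
    # query parameters available as $arg_<name> variables.
    if ($arg_node = "1") {
        proxy_pass http://up_1;
    }
    if ($arg_node = "2") {
        proxy_pass http://up_2;
    }
    if ($arg_node = "3") {
        proxy_pass http://up_3;
    }
    # No (or unrecognised) parameter: fall back to the default pool.
    proxy_pass http://up_default;
}
```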

Best regards,
Grzegorz N.

On Thu, Sep 25, 2008 at 3:46 PM, Grzegorz N.
[email protected] wrote:

   }
   if (...) {
           proxy_pass http://up_3;
   }
   proxy_pass http://up_default;

}

uh… right - that’s the magical “something” there… :wink:

-jf



Grzegorz,
While your proposed solution provides a clever way to stick requests to
the same node, it doesn’t provide a way to fulfill the first part of my
requirement: dispatching the first 100 requests with
Referer=www.somedomain.com to nodeA and the next 100 to nodeB.
UPSTREAM_FAIR looks great, but it doesn’t fill this need.

Maybe a little background would help. This is a comet application (hence
the need for sticky sessions) and it’s using a shared memory system
(Terracotta) across multiple nodes. Because of how we built it, the
shared memory works much faster when users from the same referer are
grouped on the same node, hence this requirement.

Is there any way to meet this second requirement without writing code?

Thanks,
Brian

On Thu, Sep 25, 2008 at 03:43:24AM -0500, Brian Moschel wrote:

Grzegorz,
While your proposed solution provides a clever way to stick requests to the
same node, it doesn’t provide a way to fulfill the first part of my
requirement: dispatching the first 100 requests with Referer=
www.somedomain.com to nodeA and the next 100 to nodeB. UPSTREAM_FAIR looks
great but it doesn’t fill this need.

No, not really. upstream_fair won’t help you there; it’s just a totally
awesome load balancer module (j/k). I can’t see a way to do this in pure
nginx right now. But you might use some tricks on your application
servers, e.g. nodeA sticks the session to nodeB when appropriate (and,
e.g., transfers the user’s session data to nodeB somehow).

Anyway, I think you’ll have to write some code, either in your
application or in nginx.

Best regards,
Grzegorz N.

On Don 25.09.2008 03:43, Brian Moschel wrote:

Grzegorz,
While your proposed solution provides a clever way to stick requests to
the same node, it doesn’t provide a way to fulfill the first part of my
requirement: dispatching the first 100 requests with Referer=
www.somedomain.com to nodeA and the next 100 to nodeB. UPSTREAM_FAIR
looks great but it doesn’t fill this need.

Maybe the perl module can help you with this:

http://wiki.codemongers.com/NginxEmbeddedPerlModule

but I don’t know if there is a ‘global’ variable which can hold the
current counter.

Maybe Igor can answer this.
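A rough, untested sketch of that idea, with an important caveat: under the embedded perl module, every nginx worker process keeps its own copy of any perl variable, so a counter like this counts per worker, not globally. All names below (the `$chunk_backend` variable, the `up_1`/`up_2` pools) are just examples:

```nginx
http {
    # Chunked round robin via the embedded perl module (untested sketch).
    # CAVEAT: "our %count" lives separately in every worker process,
    # so the chunks of 100 are per-worker, not global.
    perl_set $chunk_backend 'sub {
        my $r = shift;
        our %count;
        # Count requests per Referer and switch pools every 100.
        my $ref  = $r->header_in("Referer") || "";
        my $slot = int(($count{$ref}++ / 100) % 2);
        return $slot == 0 ? "up_1" : "up_2";
    }';

    server {
        location / {
            proxy_pass http://$chunk_backend;
        }
    }
}
```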

BR

Aleks

On Thu, Sep 25, 2008 at 03:56:36PM +0800, Jeffrey ‘jf’ Lim wrote:

uh… right - that’s the magical “something” there… :wink:

:slight_smile:

How about embedding the upstream ID in the URL or query string and
matching it with the rewrite module? Or as part of a cookie sent to the
client (you could match that via $http_cookie, although I seem to
remember there was a special module for that). So a backend sets a
cookie containing e.g. “backend=1” (along with any other data).

I think you could match it like this:

location / {
    if ($http_cookie ~ backend=1$) {
        proxy_pass http://up_1;
    }

    etc.
}

Unfortunately you have to pass something to the client, but a cookie
will probably be the least ugly option.
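Spelling the cookie idea out a little further (again untested; the cookie name and values are just examples): the application on each backend sets the cookie once, and nginx routes every later request on it, falling back to the default pool when no cookie is present yet:

```nginx
location / {
    # Route returning visitors by the cookie their backend set
    # (e.g. "Set-Cookie: backend=1" sent by the application on up_1).
    if ($http_cookie ~ "backend=1") {
        proxy_pass http://up_1;
    }
    if ($http_cookie ~ "backend=2") {
        proxy_pass http://up_2;
    }
    if ($http_cookie ~ "backend=3") {
        proxy_pass http://up_3;
    }
    # First request, no cookie yet: let the default pool pick a node.
    proxy_pass http://up_default;
}
```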

Best regards,
Grzegorz N.