Hey guys,
I’ve read through the docs and this doesn’t seem possible in the current
setup, but I could be wrong. Here’s what I need:
Load balancing using a URI param, but if it's not found, fall back to round
robin in chunks of 100, keyed on the Referer header. For example, the first
100 people with Referer=www.somedomain.com get sent to nodeA, the second
100 to nodeB, the third 100 to nodeA, etc. After a user is assigned a node,
any subsequent request they make will include a uri_param that points Nginx
to the right server.
We're using Evan M.'s awesome Upstream Hash Module, but it doesn't seem
to be configurable to this level.
Is this possible? If not, I’d like to hire anyone willing to implement
this. Anyone interested?
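To be concrete, here's a toy sketch (Python, not nginx config; the node names, chunk size, and counter scheme are just restating my example above, not anything nginx provides) of the assignment rule I'm after:

```python
# Toy model of the desired bucketing: a per-Referer counter that
# alternates nodes in chunks of 100. Node names are placeholders.
from collections import defaultdict

NODES = ["nodeA", "nodeB"]   # placeholder backend names
CHUNK = 100                  # chunk size from the example above
counters = defaultdict(int)  # requests seen so far, per Referer

def assign_node(referer):
    """Pick the node for the next new user arriving with this Referer."""
    n = counters[referer]
    counters[referer] += 1
    return NODES[(n // CHUNK) % len(NODES)]
```

Once a user is assigned, all their later requests would carry a uri_param naming that node, so nginx only needs this logic for the very first request.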
On Thu, Sep 25, 2008 at 3:46 PM, Grzegorz N. [email protected] wrote:
    }
    if (...) {
        proxy_pass http://up_3;
    }
    proxy_pass http://up_default;
}
uh… right - that’s the magical “something” there…
-jf
–
In the meantime, here is your PSA:
“It’s so hard to write a graphics driver that open-sourcing it would not
help.”
– Andrew Fear, Software Product Manager, NVIDIA Corporation
Grzegorz,
While your proposed solution provides a clever way to stick requests to the
same node, it doesn't provide a way to fulfill the first part of my
requirement: dispatching the first 100 requests with
Referer=www.somedomain.com to nodeA and the next 100 to nodeB.
UPSTREAM_FAIR looks great but it doesn't fill this need.
Maybe a little background would help. This is a comet application (hence
the need for sticky sessions) and it's using a shared memory system
(Terracotta) across multiple nodes. Because of how we built it, the shared
memory works much faster when users from the same referer are grouped on
the same node, hence this requirement.
Is there any way to do this second requirement without coding something?
On Thu, Sep 25, 2008 at 03:43:24AM -0500, Brian Moschel wrote:
Grzegorz,
While your proposed solution provides a clever way to stick requests to the
same node, it doesn't provide a way to fulfill the first part of my
requirement: dispatching the first 100 requests with
Referer=www.somedomain.com to nodeA and the next 100 to nodeB.
UPSTREAM_FAIR looks great but it doesn't fill this need.
No, not really. upstream_fair won't help you there, it's just a totally
awesome load balancer module (j/k). I can't see a way to do this in pure
nginx right now. But you might use some tricks on your application
servers, e.g. nodeA sticks the session to nodeB when appropriate (and
e.g. transfers the user's session data to nodeB somehow).
Anyway, I think you'll have to write some code, either in your
application or in nginx.
On Thu, Sep 25, 2008 at 03:56:36PM +0800, Jeffrey ‘jf’ Lim wrote:
uh… right - that’s the magical “something” there…
How about embedding the upstream ID in the URL or query string and
matching that using the rewrite module? Or as part of the cookies sent to
the client (you could match that via $http_cookie, although I seem to
remember that there was a special module for that). So a backend sets a
cookie containing e.g. "backend=1" (along with any other data).
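For what it's worth, the cookie variant could look something like this inside the location block (a sketch only: the upstream names up_1/up_2/up_default and the cookie name "backend" are placeholders, and your backends would have to set that cookie themselves):

    # route on a "backend=N" cookie set by the application
    if ($http_cookie ~* "backend=1") {
        proxy_pass http://up_1;
    }
    if ($http_cookie ~* "backend=2") {
        proxy_pass http://up_2;
    }
    proxy_pass http://up_default;

The first request (no cookie yet) falls through to up_default; whichever backend answers sets the cookie, and every later request matches one of the ifs.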