Using the upstream_hash_module and handling changes when adding new servers


#1

Hi all,

I’m looking to use the hash module to split out many files across
several servers. I’ve done some testing, and can confirm the obvious
fact that when you add a new server, some of the files that used to be
found at server1 are now looked for at the new server. One way to handle
this would be to copy all of the files from all servers to the new
server. I’d like to avoid this though, and only have files on the boxes
where they’re needed.
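The scale of the remapping is easy to see with a quick sketch. This is a hypothetical model, not the module's actual code: it assumes the balancer picks a server by CRC32 of the filename modulo the server count (the real upstream hash module may hash differently), and counts how many of 10,000 filenames change servers when a fourth box is added.

```python
import zlib

# Hypothetical model of the balancer: CRC32 of the filename modulo the
# number of servers. The real upstream hash module may hash differently.
def pick_server(name, servers):
    return servers[zlib.crc32(name.encode()) % len(servers)]

old_pool = ["server1", "server2", "server3"]
new_pool = old_pool + ["server4"]

names = ["file%04d.html" % i for i in range(10000)]
moved = sum(1 for n in names
            if pick_server(n, old_pool) != pick_server(n, new_pool))

# With plain modulo hashing, growing N servers to N+1 remaps roughly
# N/(N+1) of the keys -- about three quarters of them here.
print("remapped: %.1f%%" % (100.0 * moved / len(names)))
```

So under modulo hashing, most files move when the pool grows, which is why copying everything to every box (or sharing storage) comes up as the easy answer.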

So, I’ve looked at the module a bit in the hopes of extracting the logic
and basically allowing myself to say for a file named “foo.html”, the
hash module will direct me to serverX. Has anyone done this before? I
would imagine this would be useful to others as well. Going to jump back
into the code and trace through it some more; if anyone has experience
in this area, I’d be curious to see how you handled things.
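The offline lookup Jack describes might look something like the sketch below. Everything in it is an assumption for illustration: it models the module as CRC32 modulo the server count, and to be useful for real pre-placement it would have to match the module's actual hash function, hash key (full URI vs. bare filename), and server ordering exactly, so trace the module source first.

```python
import zlib

# Must match the order of the servers in the upstream block.
SERVERS = ["server1", "server2", "server3"]

def server_for(filename):
    """Reproduce the balancer's choice offline for one file.

    Assumption: CRC32 modulo the server count stands in for whatever
    hash the module actually uses -- verify against the module source
    before relying on this for file placement.
    """
    return SERVERS[zlib.crc32(filename.encode()) % len(SERVERS)]

print("foo.html ->", server_for("foo.html"))
```

With something like this, adding a server would mean recomputing server_for() for every file and copying only the files whose answer changed, rather than mirroring the whole set to the new box.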

Thanks much,
Jack

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,884,884#msg-884


#2

I do something like this but solved it a completely different way. I put
storage behind the balanced servers and shared it among them using NFS.
That way every server sees a consistent set of files. Otherwise users
get “nailed” to a server, and that sort of defeats the purpose of load
balancing. Hope this helps.

-Larry


#3

On Tue, Apr 7, 2009 at 9:17 AM, Larry B. removed_email_address@domain.invalid
wrote:

I do something like this but solved it a completely different way. I put
storage behind the balanced servers and shared it among them using NFS.
That way every server sees a consistent set of files. Otherwise users
get “nailed” to a server, and that sort of defeats the purpose of load
balancing. Hope this helps.

-Larry

Yeah, this is probably the traditional way; I do it this way as well.

NFS can be annoying, though; I would suggest using FreeBSD, NetApp, or a
Solaris-based server instance if possible. NFS on Linux is notoriously
weak.


#4

On Tue, 2009-04-07 at 10:18 -0700, Michael S. wrote:

NFS can be annoying, though; I would suggest using FreeBSD, NetApp, or a
Solaris-based server instance if possible. NFS on Linux is notoriously
weak.

I’d suggest looking into one of the clustered filesystems such as GFS or
Lustre, although that might be more difficult to deploy on an existing
infrastructure.

Regards,
Cliff


#5

On Tue, Apr 7, 2009 at 10:59 AM, Cliff W. removed_email_address@domain.invalid wrote:

I’d suggest looking into one of the clustered filesystems such as GFS or
Lustre, although that might be more difficult to deploy on an existing
infrastructure.

those require exported filesystems (iSCSI, fake iSCSI, SANs, etc.) and
can be a pain in the ass to manage.

I tried OCFS2 for a little while, as it required the most
straightforward setup, and it had its own issues. GFS2 was a horrible
PITA when I tried to set it up as well. There is also GlusterFS; it
looks like it’s more like Lustre, and it’s worth checking out.

NFS can traditionally scale to thousands of users (probably not the
Linux server version, hah), so the OP’s farm probably doesn’t have
requirements that large. If he does, I’d say look at going with
MogileFS or something like that and change the logic in the application
layer. It’s essentially just creating a global filesystem anyway, but it
gives you greater control over devices, how much space is allocated,
etc.


#6

On Tue, 2009-04-07 at 11:16 -0700, Michael S. wrote:

On Tue, Apr 7, 2009 at 10:59 AM, Cliff W. removed_email_address@domain.invalid wrote:

I’d suggest looking into one of the clustered filesystems such as GFS or
Lustre, although that might be more difficult to deploy on an existing
infrastructure.

those require exported filesystems (iSCSI, fake iSCSI, SANs, etc.) and
can be a pain in the ass to manage.

Technically block devices, but yes. I’d suggest trying a supported
distro such as RHEL/CentOS which ships with this stuff and includes some
nice management tools. Of course, this comes back to what I said
earlier about being difficult to deploy on an existing infrastructure.

Regards,
Cliff


#7

mike Wrote:

Thanks guys, interesting. I thought of something along these lines with
DragonFly BSD and their HAMMER filesystem. Will play around and see what
I find. Appreciate it!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,884,888#msg-888