Forum: NGINX using the upstream_hash_module and handling changes when adding new servers

jackdempsey (Guest)
on 2009-04-07 19:56
(Received via mailing list)
Hi all,

I'm looking to use the hash module to split out many files across
several servers. I've done some testing, and can confirm the obvious
fact that when you add a new server, some of the files that used to be
found at server1 are now looked for at the new server. One way to handle
this would be to copy all of the files from all servers to the new
server. I'd like to avoid this though, and only have files on the boxes
where they're needed.

So, I've looked at the module a bit in the hopes of extracting the logic,
so that for a file named "foo.html" I can work out which serverX the hash
module will direct me to. Has anyone done this before? I
would imagine this would be useful to others as well. Going to jump back
into the code and trace through it some more; if anyone has experience
in this area, I'd be curious to see how you handled things.

Thanks much,
Jack

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,884,884#msg-884
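For context, the third-party upstream hash module is typically configured
along these lines (server names illustrative):

    upstream file_servers {
        server s1.example.com;
        server s2.example.com;
        server s3.example.com;
        hash $request_uri;
    }

The remapping Jack describes falls out of modulo placement: if the module
effectively picks servers[hash(key) % N], then changing N moves most keys.
A consistent-hash ring limits the fallout, since a new server only takes
over roughly 1/N of the keys. A minimal sketch of the difference, using an
illustrative hash function and made-up server names rather than the
module's actual internals:

    import hashlib
    from bisect import bisect

    def bucket_mod(key, servers):
        # Modulo placement: changing len(servers) remaps most keys.
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return servers[h % len(servers)]

    def build_ring(servers, replicas=100):
        # Consistent hashing: each server owns many points on a ring.
        ring = []
        for s in servers:
            for i in range(replicas):
                p = int(hashlib.md5(f"{s}:{i}".encode()).hexdigest(), 16)
                ring.append((p, s))
        ring.sort()
        return ring

    def bucket_ring(key, ring):
        # A key belongs to the first server point at or after its hash.
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        i = bisect([p for p, _ in ring], h) % len(ring)
        return ring[i][1]

    keys = [f"file{i}.html" for i in range(10000)]
    old, new = ["s1", "s2", "s3"], ["s1", "s2", "s3", "s4"]
    ring_old, ring_new = build_ring(old), build_ring(new)

    moved_mod = sum(bucket_mod(k, old) != bucket_mod(k, new) for k in keys)
    moved_ring = sum(bucket_ring(k, ring_old) != bucket_ring(k, ring_new)
                     for k in keys)
    print(f"mod N: {moved_mod / len(keys):.0%} moved")   # around 75%
    print(f"ring:  {moved_ring / len(keys):.0%} moved")  # around 25%

Either way, when the mapping does change, the files that moved still have
to be copied to (or re-fetched by) their new owners; consistent hashing
just shrinks that set.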
Larry B. (Guest)
on 2009-04-07 20:24
(Received via mailing list)
I do something like this but solved it a completely different way. I put
storage behind the balanced servers and shared it among them using NFS.
That way every server sees a consistent set of files. Otherwise users get
"nailed" to a server, and that sort of defeats the purpose of load
balancing. Hope this helps.

-Larry
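A minimal sketch of Larry's layout, with hypothetical hostnames and paths:
export one directory from a storage box and mount it at the same place on
every web server, so nginx serves a single consistent tree.

    # On the storage server (/etc/exports): export the document root
    # read-only to the web tier's subnet, then apply with `exportfs -ra`.
    /srv/files  10.0.0.0/24(ro,no_subtree_check)

    # On each web server: mount the export where nginx's root points.
    mount -t nfs storage.example.com:/srv/files /var/www/files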
Michael S. (Guest)
on 2009-04-07 21:27
(Received via mailing list)
On Tue, Apr 7, 2009 at 9:17 AM, Larry B. <removed_email_address@domain.invalid>
wrote:
> I do something like this but solved it a completely different way.  I put
> storage behind the balanced servers and shared it among them using NFS.  That
> way every server sees a consistent set of files.  Otherwise users get "nailed"
> to a server and that sort of defeats the purpose of load balancing.  Hope this
> helps.
>
> -Larry

Yeah, this is probably the traditional way; I do it this way as well.

NFS can be annoying, though; I would suggest using FreeBSD, NetApp, or a
Solaris-based server instance if possible. NFS on Linux is notoriously
weak.
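If NFS it is, conservative mount options take some of the edge off;
something like this (illustrative values, tune for your setup):

    # hard: block rather than error out on server hiccups; tcp: avoid
    # lossy UDP transport; ro suits a serve-only file farm.
    mount -t nfs -o ro,hard,tcp,rsize=32768 \
        storage.example.com:/srv/files /var/www/files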
Cliff W. (Guest)
on 2009-04-07 22:05
(Received via mailing list)
On Tue, 2009-04-07 at 10:18 -0700, Michael S. wrote:
>
> nfs can be annoying though; i would suggest using freebsd, netapp or a
> solaris based server instance if possible. nfs on linux is notoriously
> weak.

I'd suggest looking into one of the clustered filesystems such as GFS or
Lustre, although that might be more difficult to deploy on an existing
infrastructure.

Regards,
Cliff
Michael S. (Guest)
on 2009-04-07 22:26
(Received via mailing list)
On Tue, Apr 7, 2009 at 10:59 AM, Cliff W. <removed_email_address@domain.invalid> 
wrote:

> I'd suggest looking into one of the clustered filesystems such as GFS or
> Lustre, although that might be more difficult to deploy on an existing
> infrastructure.

those require exported filesystems (iscsi, fake iscsi, SANs, etc) and
can be a pain in the ass to manage.

I tried OCFS2 for a little while, as it required the most straightforward
setup, and it had its own issues. GFS2 was a horrible PITA when I tried to
set it up as well. There is also GlusterFS; it looks like it's more akin to
Lustre and is worth checking out.

NFS can traditionally scale to thousands of users (probably not the Linux
server version, hah), so the OP's farm probably doesn't have requirements
that large. If it does, I'd say look at going with MogileFS or something
like that and change the logic in the application layer. It's essentially
just creating a global filesystem anyway, but it gives you greater control
over devices, how much space is allocated, etc.
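Michael's application-layer point amounts to moving the placement decision
out of nginx: the application decides which storage host owns a file and
hands back the matching URL (or proxies to it). A toy sketch with
hypothetical host names; a real system like MogileFS instead records each
file's actual locations in a database behind its trackers, so placement is
looked up rather than recomputed:

    import zlib

    # Hypothetical internal storage hosts.
    STORAGE_HOSTS = ["http://s1.internal", "http://s2.internal",
                     "http://s3.internal"]

    def url_for(filename):
        # crc32 is stable across processes, unlike Python's salted hash().
        h = zlib.crc32(filename.encode())
        return f"{STORAGE_HOSTS[h % len(STORAGE_HOSTS)]}/{filename}"

    print(url_for("foo.html"))  # the same name always maps to one host

The win over hashing inside nginx is that the application can also record
where each file really lives, so growing the farm becomes a routing change
plus a one-off migration of only the keys that moved.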
Cliff W. (Guest)
on 2009-04-07 23:43
(Received via mailing list)
On Tue, 2009-04-07 at 11:16 -0700, Michael S. wrote:
> On Tue, Apr 7, 2009 at 10:59 AM, Cliff W. <removed_email_address@domain.invalid> wrote:
>
> > I'd suggest looking into one of the clustered filesystems such as GFS or
> > Lustre, although that might be more difficult to deploy on an existing
> > infrastructure.
>
> those require exported filesystems (iscsi, fake iscsi, SANs, etc) and
> can be a pain in the ass to manage.

Technically block devices, but yes. I'd suggest trying a supported distro
such as RHEL/CentOS, which ships with this stuff and includes some nice
management tools. Of course, this comes back to what I said earlier about
it being difficult to deploy on an existing infrastructure.

Regards,
Cliff
jackdempsey (Guest)
on 2009-04-08 15:04
(Received via mailing list)
Thanks guys, interesting. I thought of something along these lines with
DragonFly BSD and their HAMMER filesystem. Will play around and see what
I find. Appreciate it!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,884,888#msg-888