Forum: NGINX Multiple nginx instances share same proxy cache storage

Announcement (2017-05-07): www.ruby-forum.com is now read-only since I unfortunately do not have the time to support and maintain the forum any more. Please see rubyonrails.org/community and ruby-lang.org/en/community for other Rails- and Ruby-related community platforms.
badtzhou (Guest)
on 2014-08-05 01:43
(Received via mailing list)
I am thinking about setting up multiple nginx instances that share a single
proxy cache storage using NAS, NFS or some kind of distributed file system.
The cache key will be the same for all nginx instances.
Will this work in theory? What kind of problems will it cause (locking,
cache corruption, or missing metadata in memory)?

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,252275,252275#msg-252275
Maxim Dounin (Guest)
on 2014-08-05 02:49
(Received via mailing list)
Hello!

On Mon, Aug 04, 2014 at 07:42:20PM -0400, badtzhou wrote:

> I am thinking about setting up multiple nginx instances share single proxy
> cache storage using NAS, NFS or some kind of distributed file system. Cache
> key will be the same for all nginx instances.
> Will this theory work? What kind of problem will it cause(locking, cached
> corruption or missing metadata in the memory)?

As soon as a cache is loaded, nginx relies on its in-memory data to
manage the cache (keep it under the specified size, remove inactive
items and so on).  As a result it won't be happy if you try to run
multiple nginx instances working with the same cache directory.
It can tolerate multiple instances working with the same cache for
a short period of time (e.g., during a binary upgrade), but running
nginx this way intentionally is a bad idea.

Besides, using NFS (as well as other NASes) for nginx cache is a
bad idea due to blocking file operations.

--
Maxim Dounin
http://nginx.org/
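[Editor's note: the advice above implies giving each nginx instance its own local cache directory and keys_zone. A minimal sketch, assuming illustrative paths, zone names, sizes, and a hypothetical origin host — a second instance would simply use its own directory and zone name:]

```nginx
http {
    # Instance-local cache: this directory and keys_zone belong to
    # this nginx instance only; instance 2 would use e.g.
    # /var/cache/nginx/node2 and keys_zone=node2_cache:10m.
    proxy_cache_path /var/cache/nginx/node1 levels=1:2
                     keys_zone=node1_cache:10m
                     max_size=10g inactive=60m;

    server {
        listen 8080;

        location / {
            proxy_cache node1_cache;
            # Same cache key on every node, as the original poster intended.
            proxy_cache_key $scheme$host$request_uri;
            proxy_pass http://origin.example.com;
        }
    }
}
```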
Robert Paprocki (Guest)
on 2014-08-11 02:24
(Received via mailing list)
Any options then to support an architecture with multiple nginx nodes
sharing or distributing a proxy cache between them? i.e., a HAProxy
machine load balances to several nginx nodes (for failover reasons), and
each of these nodes handles http proxy + proxy cache for a remote
origin? If nginx handles cache info in memory, it seems that multiple
instances could not be used to maintain the same cache info (something
like rsyncing the cache contents between nodes thus would not work); are
there any recommendations to achieve such a solution?
itpp2012 (Guest)
on 2014-08-11 11:37
(Received via mailing list)
Robert Paprocki Wrote:
-------------------------------------------------------
> like rsyncing the cache contents between nodes thus would not work);
> are there any recommendations to achieve such a solution?

I would imagine a proxy location directive and location tag;

shared memory pool1 = nginx allocated and managed
shared memory pool2 = socket or tcp pool on a caching server elsewhere

The problem you have is speed and concurrency of requests; rsyncing a
cache requires a specific tag which needs to be respected by each
instance using it, or you will have a battle between instances.

A better idea would be a database with a persistent connection, cached
again in memory to avoid duplicate queries.
E.g. use the database as a central repository of cached items and local
memory to avoid hitting the database more than once for each item. No
disk I/O would be involved, so it should also be non-blocking.

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,252275,252479#msg-252479
Maxim Dounin (Guest)
on 2014-08-11 12:19
(Received via mailing list)
Hello!

On Sun, Aug 10, 2014 at 05:24:04PM -0700, Robert Paprocki wrote:

> Any options then to support an architecture with multiple nginx
> nodes sharing or distributing a proxy cache between them? i.e.,
> a HAProxy machine load balances to several nginx nodes (for
> failover reasons), and each of these nodes handles http proxy +
> proxy cache for a remote origin? If nginx handles cache info in
> memory, it seems that multiple instances could not be used to
> maintain the same cache info (something like rsyncing the cache
> contents between nodes thus would not work); are there any
> recommendations to achieve such a solution?

Distinct caches will be best from a failover point of view.

To maximize cache efficiency, you may consider using URI-based
hashing to distribute requests between cache nodes.

--
Maxim Dounin
http://nginx.org/
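[Editor's note: the URI-based hashing suggested above can be sketched with nginx's own `hash` upstream directive (available since nginx 1.7.2); the node addresses are hypothetical, and a front-end HAProxy could achieve the same with `balance uri`:]

```nginx
http {
    # URI-hashed distribution: the same URI always maps to the same
    # cache node, so each node's local cache stays disjoint and warm.
    upstream cache_nodes {
        hash $request_uri consistent;  # consistent hashing, nginx 1.7.2+
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://cache_nodes;
        }
    }
}
```

With `consistent`, removing or adding a node remaps only a fraction of the URI space, so a node failure does not invalidate the surviving nodes' caches.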
ThomasLohner (Guest)
on 2014-12-25 14:03
(Received via mailing list)
Hi,


I wonder if it would hurt to make nginx load cache metadata from file
as a fallback only if there's no entry in the keys_zone. If this were a
parameter for proxy_cache_path, we could build a distributed cache
cluster by simply copying cache files to other nodes.  Making this a
parameter would not hurt performance if you don't want this behavior.
The functionality is already there, because nginx loads metadata from
files on startup.

Is this a valid feature request, or does no one care about clustering
nginx caches?

--
Thomas

Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,252275,255788#msg-255788
Iuliana Constantin (iuliana26)
on 2016-11-25 17:00
Hey guys, along the lines of the nginx multi-server architecture
described here, we built a control panel that can manage such large
deployments automatically, with many other nice tools.
If you are able to give it a try at https://clustercs.com, we would
highly value your feedback (support@clustercs.com) ... this is an
effort from developers for developers ... thanks a lot!