The former provides a generic and transparent response cache layer
based on Nginx subrequests, while the latter provides a memcached
client that implements most of the memcached TCP ASCII protocol;
these two can work together.
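A minimal sketch of wiring the two together, following the canonical pattern from the ngx_srcache documentation (the port, expiry time, and location name are placeholders):

```nginx
# Internal location exposing memcached via ngx_memc.
# srcache issues subrequests against it for cache reads/writes.
location = /memc {
    internal;

    set $memc_key $query_string;   # the cache key is passed in as the query string
    set $memc_exptime 300;         # cache entries expire after 300s (assumption)

    memc_pass 127.0.0.1:11211;
}

location / {
    # Try the cache first; on a miss, run the location normally
    # and store the response transparently.
    srcache_fetch GET /memc $uri$args;
    srcache_store PUT /memc $uri$args;

    # ... your actual backend, e.g. proxy_pass or fastcgi_pass ...
}
```

The key point is that srcache is content-handler-agnostic: whatever produces the response in `location /` gets cached the same way.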
On Sun, Jun 17, 2012 at 8:06 PM, amodpandey [email protected]
wrote:
And I pass it
memcached_pass $scheme://memcached_pass_stream/;
I am using memcached cluster (AWS ElastiCache)
Q1 I hope this is the right way to integrate with multiple nodes.
No, unless all your memcached nodes are read-only and contain exactly
the same data copy.
Q2 If yes, what logic nginx follows to pick the nodes? I should use
similar logic to set data else there might be many misses.
By default, round-robin is used to pick nodes. You can do key
modulo hashing by means of the set_hashed_upstream directive provided
by the ngx_set_misc module:
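A sketch based on the ngx_set_misc documentation's example, assuming the `upstream_list` directive is also available in your build (it ships alongside ngx_set_misc in OpenResty); the node addresses and key choice are placeholders:

```nginx
upstream memc1 { server 10.0.0.1:11211; }
upstream memc2 { server 10.0.0.2:11211; }
upstream memc3 { server 10.0.0.3:11211; }

# Name the candidate upstreams as a list for hashing.
upstream_list memc_cluster memc1 memc2 memc3;

server {
    location / {
        set $key $uri;
        # Hash $key and select one upstream name from memc_cluster
        # into $backend, deterministically per key.
        set_hashed_upstream $backend memc_cluster $key;

        set $memc_key $key;
        memc_pass $backend;
    }
}
```

Your application must use the same hashing scheme when writing keys, or reads through nginx will miss.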
Alternatively, you can just use some Lua code to calculate the backend
upstream name in arbitrary way that you like for each individual
request. See ngx_lua for details. And here’s an example that
determines a backend for proxy_pass on-the-fly by querying a redis
backend via a little Lua code:
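A hedged sketch of that pattern using ngx_lua with the lua-resty-redis client; the redis address, key choice, and fallback backend are all assumptions for illustration:

```nginx
location / {
    set $target '';

    access_by_lua '
        local redis = require "resty.redis"
        local red = redis:new()
        red:set_timeout(1000)  -- 1s timeout

        local ok, err = red:connect("127.0.0.1", 6379)
        if not ok then
            ngx.log(ngx.ERR, "failed to connect to redis: ", err)
            return ngx.exit(500)
        end

        -- Look up the backend for this request; keying on $uri
        -- here is just an example.
        local host, err = red:get(ngx.var.uri)
        if not host or host == ngx.null then
            host = "127.0.0.1:8080"  -- fallback backend (assumption)
        end

        ngx.var.target = host

        -- Put the connection back into the pool instead of closing it.
        red:set_keepalive(10000, 100)
    ';

    proxy_pass http://$target;
}
```

Because the lookup runs per request, you can route on anything available to Lua: URI, headers, cookies, and so on.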
I did a test with a simple set up. Amazon large machine with nginx
serving a file from disk and in another set-up serving the same file from
a local memcached. To my surprise I do not see any difference in the
performance. They were equal. So given the complexity of memcache in
between I see not having memcache is better.
How could this be possible? I did test with 100, 300 and 500 concurrent
users on an 89 KB html file for 1 minute using siege.
On Mon, Jun 18, 2012 at 6:20 AM, amodpandey [email protected]
wrote:
Thank you Agentzh.
Please do not capitalize my nick. Thank you.
I did a test with a simple set up. Amazon large machine with nginx
serving a file from disk and in another set-up serving the same file from
a local memcached. To my surprise I do not see any difference in the
performance. They were equal. So given the complexity of memcache in
between I see not having memcache is better.
It’s very likely that your cache does not work at all. Please ensure
that the cache hit rate on your side is not zero.
The following slide shows how ngx_srcache + ngx_memc performs on
Amazon EC2’s standard Small instance for caching small resultsets (a
single-line resultset) from a MySQL custom cluster:
You can walk through all the remaining slides in the same place to
learn more about ngx_srcache and ngx_memc in your web browser by using
the pagedown/pageup keys on your keyboard.
On Wed, Jun 20, 2012 at 10:00 AM, amodpandey [email protected]
wrote:
Thank you agentzh.
It is hitting memcache and I have rechecked it.
Also, ensure that you’ve configured the memcached connection pool and
if you’re using local memcached, you’d better use the unix domain
socket to talk to memcached. Setting worker_cpu_affinity usually helps
as well.
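A sketch of those three suggestions combined; the socket path, pool size, and CPU layout are assumptions you would adapt to your machine:

```nginx
# Pin each worker to one CPU core (example: a 2-core box).
worker_processes 2;
worker_cpu_affinity 01 10;

http {
    # Connection pool to a local memcached over a unix domain
    # socket, avoiding TCP overhead on the loopback.
    upstream local_memc {
        server unix:/var/run/memcached.sock;
        keepalive 32;   # keep up to 32 idle connections per worker
    }

    server {
        location = /memc {
            internal;
            set $memc_key $query_string;
            memc_pass local_memc;
        }
    }
}
```

Note that `keepalive` here is the stock nginx upstream keepalive directive; without it, every cache lookup pays a fresh connection setup.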
If you’re benchmarking ngx_srcache + ngx_memc against nginx’s ngx_static
module with a single URL or just a few URLs, then there’ll be little
difference, because the static file requests will hit the operating
system’s page cache anyway.
Best regards,
-agentzh