How to set HTML files into memcached

Using HttpMemcachedModule I could easily integrate memcached with nginx.

I wanted to know how to set the missing HTML files into memcached, like
this:

if $uri in memcached
    serve from memcached
else
    set into memcached
    serve from proxy
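
For reference, the read-only half of this is the usual pattern with the stock memcached module; a rough sketch (the memcached address and the "backend" upstream name are placeholders), keeping in mind that ngx_http_memcached_module can only read from memcached, not write to it:

location / {
    set $memcached_key $uri;
    memcached_pass 127.0.0.1:11211;
    default_type text/html;
    # on a miss (or a memcached error), fall through to the real backend
    error_page 404 502 504 = @fallback;
}

location @fallback {
    # "backend" is a placeholder upstream serving the original HTML
    proxy_pass http://backend;
}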

Please suggest


Hello!

On Sun, Jun 17, 2012 at 6:11 AM, amodpandey [email protected]
wrote:

Using HttpMemcachedModule I could easily integrate memcached with nginx.

I wanted to know how to set the missing HTML files into memcached, like
this:

Take a look at my ngx_srcache and ngx_memc modules:

http://wiki.nginx.org/HttpSRCacheModule
http://wiki.nginx.org/HttpMemcModule

The former provides a generic and transparent response-cache layer
based on Nginx subrequests, while the latter provides a memcached
client that implements most of the memcached TCP ASCII protocol; the
two can work together.
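
For illustration, here is a minimal sketch of how the two modules are typically wired together (not from the original post; the backend address, the key scheme and the 300-second expiry are placeholder assumptions):

location / {
    set $key $uri;
    # look up the response in memcached before running the content handler
    srcache_fetch GET /memc $key;
    # on a miss, store whatever response the content handler produces
    srcache_store PUT /memc $key;

    # the real content generator, e.g. a proxied upstream
    proxy_pass http://backend;
}

location /memc {
    internal;

    # srcache passes the cache key as the subrequest's query string
    set $memc_key $query_string;
    set $memc_exptime 300;
    memc_pass 127.0.0.1:11211;
}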

Best regards,
-agentzh

Thank you Agentzh.

I will look into that.

I have a related question. I have a memcached cluster (multiple
nodes), so I defined an upstream block:

upstream memcached_pass_stream {
    server node1:11211;
    server node2:11211;
    server node3:11211;
    server node4:11211;
}

And I pass it like this:
memcached_pass $scheme://memcached_pass_stream/;

I am using a memcached cluster (AWS ElastiCache).

Q1 I hope this is the right way to integrate with multiple nodes.

Q2 If yes, what logic does nginx follow to pick the nodes? I should use
similar logic when setting data, or else there might be many misses.


Hello!

On Sun, Jun 17, 2012 at 8:06 PM, amodpandey [email protected]
wrote:

And I pass it like this:
memcached_pass $scheme://memcached_pass_stream/;

I am using a memcached cluster (AWS ElastiCache).

Q1 I hope this is the right way to integrate with multiple nodes.

No, unless all your memcached nodes are read-only and contain exactly
the same data copy.

Q2 If yes, what logic does nginx follow to pick the nodes? I should use
similar logic when setting data, or else there might be many misses.

By default, round-robin is used to pick nodes. You can do key
modulo hashing by means of the set_hashed_upstream directive provided
by the ngx_set_misc module:

http://wiki.nginx.org/HttpSetMiscModule#set_hashed_upstream
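
A rough sketch of that approach, reusing the four nodes above (hedged: it follows the pattern shown in the srcache/set_misc documentation and assumes the upstream_list directive used there is available in your build; the names are placeholders):

upstream memc1 { server node1:11211; }
upstream memc2 { server node2:11211; }
upstream memc3 { server node3:11211; }
upstream memc4 { server node4:11211; }

# group the upstreams into a named list for set_hashed_upstream
upstream_list memc_cluster memc1 memc2 memc3 memc4;

location /memc {
    internal;

    set $memc_key $query_string;
    # simple modulo hash of the key onto one of the four upstream names
    set_hashed_upstream $backend memc_cluster $memc_key;
    memc_pass $backend;
}

As long as whatever writes to memcached hashes keys onto nodes the same way, reads and writes will agree on which node holds a given key.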

Alternatively, you can just use some Lua code to calculate the backend
upstream name in any way you like for each individual request. See
ngx_lua for details. And here’s an example that determines a backend
for proxy_pass on the fly by querying a Redis backend via a little Lua
code:

http://openresty.org/#DynamicRoutingBasedOnRedis

The basic idea is essentially the same.
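
For instance, here is a small sketch (assumed names, not from the thread) that picks one of the four upstreams defined above via ngx_lua's set_by_lua and a crc32 modulo hash:

location /memc {
    internal;

    set $memc_key $query_string;
    set_by_lua $backend '
        -- same four upstream names as above; crc32 modulo 4 as an example hash
        local nodes = { "memc1", "memc2", "memc3", "memc4" }
        return nodes[ngx.crc32_long(ngx.var.memc_key) % 4 + 1]
    ';
    memc_pass $backend;
}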

Regards,
-agentzh

Thank you Agentzh.

I did a test with a simple setup: an Amazon large instance with nginx
serving a file from disk, and another setup serving the same file from
a local memcached. To my surprise I did not see any difference in
performance; they were equal. So given the added complexity of having
memcached in between, it seems better not to use memcached at all.

How could this be possible? I tested with 100, 300 and 500 concurrent
users against an 89 KB HTML file for 1 minute using siege.


Hello!

On Mon, Jun 18, 2012 at 6:20 AM, amodpandey [email protected]
wrote:

Thank you Agentzh.

Please do not capitalize my nick. Thank you.

I did a test with a simple setup: an Amazon large instance with nginx
serving a file from disk, and another setup serving the same file from
a local memcached. To my surprise I did not see any difference in
performance; they were equal. So given the added complexity of having
memcached in between, it seems better not to use memcached at all.

It’s very likely that your cache does not work at all :) Please ensure
that the cache hit rate on your side is not zero :)

The following slide shows how ngx_srcache + ngx_memc performs on
Amazon EC2’s standard Small instance when caching small result sets (a
single-row result set) from a custom MySQL cluster:

http://agentzh.org/misc/slides/libdrizzle-lua-nginx/#84

And below is the result for caching big resultsets (100KB) in exactly
the same setup:

http://agentzh.org/misc/slides/libdrizzle-lua-nginx/#85

You can walk through the remaining slides in the same place, using the
PageUp/PageDown keys in your web browser, to learn more about
ngx_srcache and ngx_memc.

Regards,
-agentzh

Thank you agentzh.

It is hitting memcached, and I have rechecked it.


Hello!

On Wed, Jun 20, 2012 at 10:00 AM, amodpandey [email protected]
wrote:

Thank you agentzh.

It is hitting memcached, and I have rechecked it.

Also, ensure that you’ve configured a memcached connection pool, and
if you’re using a local memcached, you’d better use a unix domain
socket to talk to it. Setting worker_cpu_affinity usually helps
as well.
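
A sketch of those three points together (the socket path, pool size and CPU mask are placeholder assumptions):

# pin worker processes to CPU cores (example mask for 2 workers on 2 cores)
worker_processes 2;
worker_cpu_affinity 01 10;

http {
    upstream local_memc {
        # talk to the local memcached over a unix domain socket
        server unix:/var/run/memcached.sock;
        # connection pool: keep up to 64 idle connections per worker
        keepalive 64;
    }

    server {
        location /memc {
            internal;
            set $memc_key $query_string;
            memc_pass local_memc;
        }
    }
}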

If you’re benchmarking ngx_srcache + ngx_memc against nginx’s ngx_static
module with just a single URL or a few URLs, then there’ll be little
difference, because the static file requests will hit the operating
system’s page cache anyway.

Best regards,
-agentzh