From the Nginx official site, there are two methods to store a remote upstream's responses on the local file system: "proxy_cache" and "proxy_store". The documentation describes clearly how to configure them, but I still have trouble understanding the differences. Months ago I ran some tests on both and found that proxy_cache performs far better, and I thought proxy_cache was a "cache in memory"; recently, after diving into the source code, I realized both are file based, so I'm not sure whether they can handle high connection counts where the disk probably becomes the bottleneck.
On Sun, Nov 13, 2011 at 10:10:29AM -0500, zhenwei wrote:
> From the Nginx official site, there are two methods to store a remote
> upstream's responses on the local file system: "proxy_cache" and
> "proxy_store". The documentation describes clearly how to configure
> them, but I still have trouble understanding the differences. Months
> ago I ran some tests on both and found that proxy_cache performs far
> better, and I thought proxy_cache was a "cache in memory"; recently,
> after diving into the source code, I realized both are file based, so
> I'm not sure whether they can handle high connection counts where the
> disk probably becomes the bottleneck.
"proxy_cache" is a general-purpose cache with automatic lookups
before proxy_pass, expiration support, and so on. It is usually
what you need if you need caching capabilities.
"proxy_store" is just a method to store proxied files on disk. It
may be used to construct cache-like setups (usually involving
try_files and/or error_page-based fallback), though it's up to you
to implement any required logic.
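To illustrate, here is a minimal proxy_cache sketch; the zone name `my_cache`, the cache path, the timing values, and the backend address are illustrative placeholders, not taken from this thread:

```nginx
# Define an on-disk cache: a 10 MB shared-memory zone for keys,
# entries dropped after 60 minutes of inactivity, 1 GB size cap.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
                 inactive=60m max_size=1g;

server {
    listen 80;

    location / {
        proxy_cache my_cache;              # enable automatic cache lookups
        proxy_cache_valid 200 302 10m;     # cache successful responses for 10 min
        proxy_cache_valid 404      1m;     # cache 404s briefly
        proxy_pass http://127.0.0.1:8080;  # upstream backend (placeholder)
    }
}
```

Note that expiration, key management, and lookups all happen automatically here, which is the "general-purpose cache" behaviour described above.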
On Sunday, 13 November 2011, 19:10:29, zhenwei wrote:
> From the Nginx official site, there are two methods to store a remote
> upstream's responses on the local file system: "proxy_cache" and
> "proxy_store". The documentation describes clearly how to configure
> them, but I still have trouble understanding the differences.
The "proxy_store" directive just stores the backend's responses at a
defined path. It's totally up to you what to do with these files after
they are stored.
The "proxy_cache" directive alone doesn't do anything, but together
with the other proxy_cache_* directives you can set up a file cache
with a key, lifetime, etc.
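A cache-like proxy_store setup of the kind Maxim mentions might look like the sketch below, serving a stored copy when one exists and fetching it otherwise; the paths and backend address are illustrative placeholders:

```nginx
server {
    listen 80;
    root /var/www/mirror;

    location / {
        # Serve the local copy if we already stored it; otherwise fetch it.
        try_files $uri @fetch;
    }

    location @fetch {
        proxy_pass http://127.0.0.1:8080;            # upstream backend (placeholder)
        proxy_store /var/www/mirror$uri;             # save the response to disk
        proxy_store_access user:rw group:rw all:r;   # permissions on stored files
    }
}
```

There is no expiration here: stored files are kept forever unless something external (e.g. a cron job) purges them, which is exactly the "required logic" left for you to implement.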
Quite agree with you that SSDs seem the right way to get good random
read/write performance, especially as we're serving thousands of
websites on each Nginx instance.
On Monday 14 November 2011 05:37:58 zhenwei wrote:
Thanks, and how about the performance?
So, if you care about disk performance and have enough RAM, you may
need to tune the kernel's disk cache, or even consider putting the
nginx cache on "/dev/shm".
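A minimal sketch of the "/dev/shm" idea, assuming a Linux host where /dev/shm is a tmpfs (RAM-backed) mount; the zone name and size values are illustrative:

```nginx
# Keep the cache on tmpfs so reads and writes hit RAM, not disk.
# max_size must fit comfortably in available memory.
proxy_cache_path /dev/shm/nginx_cache levels=1:2 keys_zone=ram_cache:10m
                 inactive=10m max_size=256m;
```

The trade-off is that the cache is lost on reboot and competes with everything else for RAM, so a small max_size and short inactive time are prudent.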
Also, you can use the memcached module for caching.
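For the memcached module, a sketch of memcached-backed serving with a proxy fallback might look like this; the memcached address and backend are placeholders, and note that nginx only reads from memcached, so something else (e.g. the application) must populate it:

```nginx
location / {
    set $memcached_key "$uri";           # key under which the page was stored
    memcached_pass 127.0.0.1:11211;
    default_type text/html;
    # On a cache miss (404) or memcached failure, fall back to the backend.
    error_page 404 502 504 = @backend;
}

location @backend {
    proxy_pass http://127.0.0.1:8080;    # upstream backend (placeholder)
}
```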
wbr, Valentin V. Bartenev