Proxy cache makes system load and I/O high

Hi, all:

This really exhausts me.

I have an nginx server with “proxy cache” on; the number of cache files is
almost 500K, and the files are all about 50K in size. But now, with fewer
than 200 connections, the “load average” is almost 3, the I/O on the cache
disk is 1000KB/s, and lots of connections are timing out. I added the
$upstream_cache_status variable to the log, and in the result 70% of
requests do not use the proxy cache at all, 20% are MISS, and only 10% are HIT.
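For reference, this is roughly how I log it (the format name “cache_log” and the log path here are just examples, not my exact config):

```nginx
# In the http{} block: a custom format that records the cache status.
log_format cache_log '$remote_addr "$request" '
                     'status:$status cache:$upstream_cache_status';

server {
    # Write access entries with the cache status included.
    access_log /var/log/nginx/cache.log cache_log;
}
```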
But I have a similar server with “proxy store” on, and the number of cache
files is also 500K. On that server, even with 1000 connections, the “load
average” never goes above 0.3, the read I/O is much lower, and all
connections are in an “ESTABLISHED” or “FIN_WAIT” state.

My questions are:

  1. Why, with so few connections, is the I/O still so high?
  2. What extra work does “proxy cache” do compared with “proxy store”?
     Don’t they both look up the file and, if found, read it from disk;
     if not, proxy to the upstream server? Yet in practice they are so
     different in “load average”.
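To make the comparison concrete, here is a minimal sketch of both setups (paths, the zone name “one”, and the backend address are placeholder assumptions, not a tested config). The key difference is that proxy_cache maintains per-entry metadata in a shared memory zone, writes responses to temporary files before renaming them into the cache, and runs cache manager/loader processes, while proxy_store simply mirrors the response body to a file:

```nginx
# --- proxy_cache: shared-memory zone, temp files, cache manager ---
proxy_cache_path /data/cache levels=1:2 keys_zone=one:64m max_size=10g;

server {
    location / {
        proxy_pass http://127.0.0.1:8080;   # upstream is an example
        proxy_cache one;
        proxy_cache_valid 200 1h;
    }
}

# --- proxy_store: just saves the fetched file to disk ---
server {
    root /data/store;
    location / {
        # Serve the local copy if present; otherwise fetch and store it.
        error_page 404 = @fetch;
    }
    location @fetch {
        proxy_pass http://127.0.0.1:8080;
        proxy_store on;
        proxy_store_access user:rw group:rw all:r;
        root /data/store;
    }
}
```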

On Wed, Dec 22, 2010 at 9:09 PM, eagle sbc [email protected] wrote:


I also made a test using a tmpfs, locating the cache files entirely in
memory, and the “load average” then dropped to about 0.3. But my server can
only spare 1.5G of memory for the tmpfs, and even with “max_size=1G”
configured, nginx doesn’t seem to recycle the files, and the filesystem
soon fills up (maybe this is not accurate: I did see some space freed, and
when I configured max_size=100G for the disk cache, some disk space was
also freed once the cache usage reached 1G, so maybe some other
configuration caused that recycling). That’s weird. Is it that my new
writes come in faster than the recycling can keep up?
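For what it’s worth, this is the kind of tmpfs setup I mean (the mount point, zone name “mem_cache”, and timings below are illustrative assumptions). Keeping max_size well below the tmpfs size should leave the cache manager headroom to evict old entries before the filesystem itself fills; the “inactive” parameter additionally removes entries that go unaccessed for that long:

```nginx
# Assumes a 1.5G tmpfs was mounted first, e.g.:
#   mount -t tmpfs -o size=1536m tmpfs /var/cache/nginx
# max_size is set below the tmpfs size so eviction can run
# before the filesystem is full; "inactive" prunes idle entries.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mem_cache:32m
                 max_size=1g inactive=30m;
```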