Hi all, does nginx support caching static files in memory if it finds that
several large files are requested frequently? If yes, how can the cache be
controlled? If not, is there any simple way to implement a similar function?
The Linux file system will cache recently or frequently used files in
memory regardless, and the effect is similar in performance to an
application level cache. A separate application level cache would not
necessarily be desirable since the file system cache would largely
have the same items in it anyway; storing them again in an application
level memory cache would simply leave less total memory available to
file caching or other uses.
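If the goal is simply to make sure nginx leans on that kernel cache efficiently, there are a couple of directives worth knowing about. A minimal sketch (the directive names are real nginx directives; the values are only illustrative, not tuned recommendations):

    # Hand file contents to the kernel via sendfile(2), so data is served
    # straight out of the page cache without copying through nginx buffers
    sendfile   on;
    tcp_nopush on;

    # Cache open file descriptors and metadata (sizes, mtimes) for hot files;
    # note this caches descriptors and stat results, not file contents
    open_file_cache          max=1000 inactive=60s;
    open_file_cache_valid    30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors   on;

open_file_cache saves the open()/stat() work on every request; the actual bytes still come from the kernel page cache.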
Thanks! So is there any guidance on how to configure the cache behavior
of Linux file systems to get the best performance?
I noticed that Apache provides the MMapFile directive in mod_file_cache,
which lets you have Apache map a static file’s contents into memory at
startup (using the mmap system call). Apache will then use the in-memory
contents for all subsequent accesses to this file.
Does nginx have a similar module or configuration options?
I’m not exactly sure. Generally it works pretty well if you’ve got
enough spare RAM compared to the size of the files that need to be in
the cache. There is a tendency to only buy as much RAM for your server
as you need to avoid swapping, but if you add a couple more gigs you
can improve your I/O performance pretty significantly. That’s also
pretty cheap with RAM prices as low as they are these days.
hua Wrote:
I noticed that Apache provides the MMapFile directive in mod_file_cache,
which lets you have Apache map a static file’s contents into memory at
startup (using the mmap system call). Apache will then use the in-memory
contents for all subsequent accesses to this file.
Does nginx have a similar module or configuration options?
The mmap syscall doesn’t mean the file is loaded into memory. It means the
application can read and write the file as if it were memory, and two or
more processes can share access to the file without each allocating memory
and loading its own copy. nginx doesn’t need this feature, because it is
built as a finite state machine and does not read the whole file at once;
it reads the file piece by piece, as fast as the peer can receive it. In
this case it is best to let the OS manage the cache and the files.
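For reference, that piece-by-piece behaviour maps onto a couple of directives. A sketch with illustrative values (the directive names are real nginx directives; the sizes are not recommendations):

    # Without sendfile, nginx copies the file into userspace buffers of this size
    output_buffers      2 64k;

    # With sendfile, cap how much is passed to sendfile(2) per call so a single
    # fast connection cannot monopolise a worker process
    sendfile            on;
    sendfile_max_chunk  512k;

Either way the data itself lives in the kernel page cache, which is the point made above.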
Thanks! So is there any guidance on how to configure the cache behavior
of Linux file systems to get the best performance?
This is off-topic; try Google if you want to find answers.
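(For what it’s worth, the knobs that usually come up for this are plain Linux sysctls rather than anything nginx-specific. A sketch, with purely illustrative values rather than recommendations:

    # Lower values make the kernel less willing to swap out application
    # memory, reclaiming page cache instead (default is 60)
    sysctl -w vm.swappiness=10

    # Lower values make the kernel keep dentry and inode caches longer
    # (default is 100)
    sysctl -w vm.vfs_cache_pressure=50

Beyond that, the page cache largely manages itself.)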
OpenVZ uses container virtualization.
The memory is shared with other VMs.
The configuration in your vz.conf shows how much guaranteed memory and
burst memory you have.
And it’s not possible to add a swap disk with swapon using a file image.
On 24/05/2011, at 10:31, drmetal wrote:
Hello, does this (Linux RAM cache) apply to OpenVZ machines (guests)?
Because when I run the free -m command, I see 0 in buffers and cached.
Does anyone know how that works in OpenVZ?
Thanks
Thanks for the answer, but this doesn’t answer my question. Obviously I
have shared RAM. My question was: does Linux store frequently used files
in RAM? Because this is usually reported in the free -m command output.
But in the case of an OpenVZ VPS I see nothing (0) in the cached and
buffers columns.
I researched a little: the system does cache files, they are just not
reported.
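A quick way to check from inside the container (standard Linux commands; what an OpenVZ container actually exposes here varies with the host setup):

    # Human-readable summary; inside OpenVZ the buffers/cached columns may read 0
    free -m

    # The raw counters free reads from
    grep -E '^(Buffers|Cached)' /proc/meminfo

    # OpenVZ-specific memory accounting, if the container exposes it
    cat /proc/user_beancounters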