I am using Nginx 0.5.35 on a server that has a Xeon 5130 dual core, 4 GB
of RAM, and 10,000 RPM HD. Nginx serves millions of large images per
day. Sometimes tens of millions in a day. Over the past week the server
has been experiencing record traffic. I am looking for a way to reduce
the amount of RAM consumed by Nginx, but still deliver images at the
same rate. The CPU cycles are at an acceptable level, but yesterday
Nginx nearly consumed all 4 GB of RAM before I had to reboot the server
under very heavy traffic. All this server does is serve image files
like JPEG, etc. Some of my configuration is below:
Since this is a dual core CPU I am using:
worker_processes 2;
worker_connections 12000;
use epoll; # This is a RedHat Enterprise Server 4
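Pieced together, those directives would sit at the top of nginx.conf roughly like this (a sketch; the rest of the config is omitted):

```nginx
# Sketch of the settings described above, not the full config.
worker_processes  2;            # one worker per core on the dual-core Xeon

events {
    worker_connections  12000;
    use epoll;                  # RHEL 4 kernel supports epoll
}
```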
I have tried reducing the keepalive_timeout to close connections
sooner, so that resources might be freed sooner, but it has no
noticeable effect.
Can someone make some suggestions on how I could handle the same
traffic but manage the RAM usage better?
Wow. I would set keepalive to maybe 5 or 10, not 75.
If you are serving that much traffic, I might even try turning
keepalives off altogether.
But if you already modified those values, and didn’t see a change,
then I don’t know.
You may try setting expires headers for your images, if they don’t
change very often (or at all).
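As config fragments, the two suggestions above might look like this (the values are illustrative, not taken from the thread):

```nginx
# Illustrative values only.
keepalive_timeout  5;           # or 0 to disable keepalives entirely

location ~* \.(jpe?g|gif|png)$ {
    expires  30d;               # far-future caching for images that rarely change
}
```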
Yes, I use standard Nginx without proxy, fastcgi, perl, etc. It is
compiled and installed without any added modules. Nginx is only serving
the static image files.
You should probably use the defaults for a server that only serves
static images. If the client can’t talk to you fast enough to send a
small GET request, they probably won’t be able to receive the response
in a timely manner. Best to drop them quickly.
On Thu, Feb 21, 2008 at 08:36:31AM +1100, Dave C. wrote:
You should probably use the defaults for a server that only serves
static images. If the client can’t talk to you fast enough to send a
small GET request, they probably won’t be able to receive the response
in a timely manner. Best to drop them quickly.
If nginx uses sendfile, it eats kernel memory, but not its own.
So these timeouts should not affect nginx memory usage.
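In config terms, the sendfile case looks like this (tcp_nopush is an optional extra, not mentioned in the thread):

```nginx
# With sendfile, the kernel copies the file straight from the page cache
# to the socket; the nginx worker never buffers the file body itself.
sendfile    on;
tcp_nopush  on;    # optional: send the response header and file start together
```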
Sure, please forward me your contact details, the config is spread out
over many files, so it isn’t appropriate to post here.
Cheers
Dave
I believe you can attach files when replying to these messages. I’m a
bit wary of posting my email address on a bbs or chat forum. However, I
would really like to see how you’ve configured your Nginx install.
sorry, why do you compress jpeg, gif and png files? they are already
compressed… double compression just uses cpu power and causes global
warming
jodok
You make a great point. I had created a one-size-fits-all config, but I
will comment out those types that don’t need to be compressed. Thank you
for pointing that out.
This is the cause of the memory and CPU consumption. You do not need to
compress already compressed JPEGs/etc. If you serve images only, you
should turn gzip off entirely (it’s the default).
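As a fragment, that is simply the stock default stated explicitly:

```nginx
# gzip is off by default; for an image-only server, leave it that way.
gzip  off;
```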
As to other MIME types:
there are no such types as
application/x-httpd-php
application/x-httpd-fastphp
application/x-httpd-eruby
they probably exist as internal MIME types inside Apache, but they are
never shown to a client.
the following types, such as
application/xhtml+xml
application/rss+xml
application/atom_xml
probably do not exist either.
Keep the list as small as possible, because nginx iterates it
sequentially.
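If gzip is kept on for text responses, a trimmed list might look like this (the exact types are an example, not a recommendation from the thread):

```nginx
# Keep gzip_types short: nginx scans this list sequentially per response.
# text/html is always compressed and need not be listed.
gzip_types  text/plain text/css application/x-javascript text/xml;
```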
You might want to consider dropping your gzip ratio, local testing
here showed little benefit past about 4. At level 9 you’ll be using 4x
the CPU for a tiny gain in compression, which is more than mitigated
by the extra delay in overcompressing the pages.
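In config terms (the value 4 follows the local testing described above):

```nginx
# Level 4 captures most of the compression gain; level 9 costs roughly
# 4x the CPU for a tiny further reduction in size.
gzip_comp_level  4;
```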
Also, as Jodok has just pointed out, there is little observable gain
in compressing image/* mime types.