we have an image hosting service, and of course we're using Nginx!
Now we're having a problem with disk I/O, with 200GB of images.
Is there any recommendation for this matter? Nginx configuration, or
hardware/other software configuration?
We already use a Dual Quad Core 3GHz, 8GB RAM, 6x73GB 15k SCSI server.
Hmm, well, talking from experience, nginx can serve upwards of 5,000 req/s
for static files if configured right. I'll leave this for Igor or someone
more experienced to reply to.
If it's just serving static files, nginx should perform better without
caching, unless they are calling PHP for all images (in which case that's
the root of their performance problem).
Actually the image size is around 100KB each. The server is handling
250Mbps of traffic.
I already described the disks I'm using: 15K RPM SCSI in RAID 0.
But if those 5,000 req/s each hit a different 1MB image, then you need to
read 5GB of data from your storage, which, if your storage is just a plain
old SATA disk, is going to be a huge problem.
Unfortunately Indo P. is not providing nearly enough information to give
any sort of advice. If he is lucky, then putting lots of RAM in the machine
for pagecache can help here, provided the cache hit ratio is good; if it is
not, then he probably has to distribute the I/O across more spindles and go
for a RAID with lots of disks.
Hi there,
I have something like an image hosting service too, almost 220G (651k files, images).
Since most users request quite random content, there is no way to cache the
most used files in RAM.
We got the best performance with ext4 and the bfq I/O scheduler. The whole
system is Gentoo-based etc., so I think you may want to try optimizing it at
the OS level.
Dual Core Xeon 2.5GHz, 4G RAM and a standalone RAID10 array, two nginx
workers.
On Thu, Oct 14, 2010 at 12:36:05AM -0700, Indo P. wrote:
hi there,
we have an image hosting service, and of course we're using Nginx!
Now we're having a problem with disk I/O, with 200GB of images.
Is there any recommendation for this matter? Nginx configuration or
hardware/other software configuration?
We already use a Dual Quad Core 3GHz, 8GB RAM, 6x73GB 15k SCSI server.
What OS do you use?
Try to increase the number of worker_processes, for example, to 20.
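As a sketch, that suggestion would look something like this in nginx.conf (the worker count is the example value from above; the connection limit is an illustrative assumption, not part of the advice):

```nginx
# With blocking disk reads, extra worker processes let some workers
# keep serving while others are stuck waiting on the disks.
worker_processes  20;

events {
    worker_connections  1024;  # illustrative value, tune per workload
}
```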
Ah, sorry, I somehow didn't catch that last line with the actual HW setup.
If my math skills don't fail me (and they very well might), the traffic and
image size figures mean that you serve about 300 images per second:
250 Mbit/s is roughly 31 MB/s, and 31 MB/s divided by 100KB per image is
about 300 req/s.
Can you post a few lines of vmstat output and perhaps the output of
"iostat -d 60 2"?
Also, how are the hits distributed across the whole pool of images? Are
these hits truly random, or are some images hit significantly more often
than others?
More RAM would obviously take some pressure off the disks if they are
really the problem.
So do you see iowait (by running 'iostat' or 'top')? That would mean that
the bottleneck is the disk system, and then the only way to improve the
situation is either by getting more disks or by adding memory for caching
(either just the plain Linux VM file cache, or a memory-only proxy like
Varnish).
Usually it's good to try other servers for comparison, like Apache or
lighttpd. If the default configurations show the same results, it's not the
webserver at fault and therefore nothing is wrong with the nginx config.
But still, this is too little data (for example, nginx version / configuration
(maybe too few workers) / some I/O metrics (filesystem (ext, xfs, …) / file
attributes (directory structure)) / network load) to give any solution or
hints. In short: be more detailed about the problem you have.
Basic tunings you have to apply when serving static content which doesn't
fit into memory are:
If you use sendfile:
Make sure your OS uses an appropriate read-ahead for sendfile, to
avoid thrashing disks with small requests (and seeks). For FreeBSD
8.1+ it should be enough to set the read_ahead directive in the nginx
config (0.8.18+).
Using bigger socket buffers (listen … sndbuf=… in the nginx
config) may help too.
If serving large files, make sure you use an appropriate
sendfile_max_chunk to avoid blocking an nginx worker on disk for too
long.
Consider switching sendfile off if you can't persuade it to read
large blocks from disk.
If not using sendfile:
Tune output_buffers (again, to avoid thrashing disks with small
requests and seeks). The default is 2x32k, which will result in 4
disk requests for a 100k file. Changing it to 1x128k would result
in 2x the memory usage but 4x fewer disk requests; this is probably a
good thing to do if you are disk-bound.
In both cases:
Using aio may help a lot ("aio sendfile" is only available under
FreeBSD) by adding more concurrency to the disk load and generally
improving nginx interactivity. Though right now it requires
patches to avoid socket leaks, see here:
Using directio may help to improve disk cache effectiveness by
excluding large files (if you have some) from the cache. Though keep
in mind that it disables sendfile, so if you generally tune for
sendfile, you may have to apply the output_buffers tunings as well.
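Pulling the directives above together, a disk-bound static image server tuned for sendfile might be sketched like this (all sizes are illustrative starting points, not tested recommendations; the /images/ path is hypothetical):

```nginx
http {
    sendfile           on;
    # nginx 0.8.18+ on FreeBSD 8.1+: bigger read-ahead so sendfile
    # issues large sequential reads instead of many small seeks.
    read_ahead         256k;
    # Cap how much one sendfile() call may send, so a single request
    # cannot keep a worker blocked on disk for too long.
    sendfile_max_chunk 512k;

    server {
        # Bigger socket send buffer, as suggested above.
        listen 80 sndbuf=128k;

        location /images/ {
            root /var/www;
        }
    }

    # If sendfile can't be persuaded to read large blocks, the
    # non-sendfile tunings above would instead look like:
    #   sendfile       off;
    #   output_buffers 1 128k;  # one read per 100k file instead of four
    #   directio       4m;      # keep very large files out of page cache
}
```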
It’s hard to say anything more than this without knowing lots of
details.