From my experience, 60 small static files / second used 2% cpu on a
OS X Xeon Server. Are you suspecting that your server is being limited
in some way ?
I think you should increase this, since you have more CPU and are also
using 4 workers. Try setting it to, let’s say, “worker_connections
2048;”. It might work.
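A minimal sketch of the change being suggested, assuming an otherwise default nginx.conf (2048 is just the example value from above, not a measured optimum):

```nginx
# Sketch only: raise the per-worker connection limit; with 4 workers
# this allows up to 4 * 2048 concurrent connections in total.
worker_processes  4;

events {
    worker_connections  2048;
}
```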
Right now, I don’t have problems.
I just want to optimize my web server for the best result (it takes
400-500 ms to serve a 5 kB image).
Compression: gzip. You should try it, though for JPEG it is probably
not worth it… Hmm, try it - what can I say.
I already use static gzip on CSS and JavaScript files.
And for JPEG, it is indeed not worth it.
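For reference, the static-gzip setup mentioned here is nginx’s ngx_http_gzip_static_module; a minimal sketch (the location pattern is illustrative, not from the thread):

```nginx
# Sketch: serve pre-compressed .gz files for CSS/JS when the client
# accepts gzip; requires nginx built with ngx_http_gzip_static_module.
location ~ \.(css|js)$ {
    gzip_static  on;    # looks for foo.css.gz next to foo.css
}
```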
Moreover, if you look at the monitoring image (in the attachment), the
loss of time is in sending the request and receiving the HTTP headers
from the server (25% of the total time).
I was wondering if we can reduce that?
PS: look at the last line of the image in the attachment: it’s the
JavaScript from Google Analytics, and the timing is really, really short!
I’m wondering if there is something to reduce those timings…
What I see on these images is that TCP connection times are about
100-500 ms. These values depend only on the client’s connection speed
and kernel-internal processing; user-level applications cannot affect
them. Thus, if you have a 100-500 ms TCP round trip, then small data
chunks will take the same time to process, even if nginx does not block
on disk reads.
As to disk blocking: you have 30 GB of images and 2 GB of memory, so
nginx is probably blocking on disk - see the “wa” percentage in top.
You may try increasing worker_processes to 10-20.
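A sketch of the directive change suggested above (16 is an arbitrary value in the suggested 10-20 range, not a measured optimum):

```nginx
# Sketch: more workers, so some can block on disk reads while others
# keep serving; tune the number against the "wa" figure you see in top.
worker_processes  16;
```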
As to using varnish+nginx, I think varnish will not help in this case.
oh yes I forgot, I have confronted one of the developers on #varnish
with my results. The said developer stated that nginx currently does
have the edge performance wise, even though they have the “superior”
architecture.
I have tried both approaches. nginx on ZFS (ARC caching) is superior in
performance to varnish in front of nginx. I can easily choke varnish
with many requests.
And would this be on Solaris only, or does it apply to other platforms
as well?
-jf
You’re spending a good chunk of time doing disk seeks. Even if read
time were 0, average seek time is about 10 ms, so you’re looking at a
limit of 100 images/sec. That is the ideal case; add read time + TCP
delays + other delays, and you’re looking at 50-60 images/sec.
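The arithmetic above can be sketched as follows (the 10 ms average seek is the figure assumed in the text; the 8 ms of extra per-image overhead is a hypothetical value chosen to land in the stated 50-60 range):

```python
# Back-of-the-envelope limit for images served off a spinning disk,
# using the ~10 ms average seek time assumed above.
avg_seek_s = 0.010            # average disk seek time (assumption)

ideal = round(1 / avg_seek_s)
print(ideal)                  # 100 images/sec with zero read time

# Read time plus TCP and other delays cut that roughly in half;
# 8 ms of extra per-image overhead is a hypothetical figure.
overhead_s = 0.008
realistic = round(1 / (avg_seek_s + overhead_s))
print(realistic)              # 56 images/sec, i.e. the 50-60 range
```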
Yes. There is caching at the OS level too, which needs to be tuned
(cache sizes, inode vs. content caching, etc.). I did not look at your
benchmarks, but was commenting more on Gen’s post. In the graphs he’s
given, the latency between connect and sending the first byte (the
green part) is high, which might be explained by disk seeks/reads.
-p
On Tue, Oct 14, 2008 at 11:36 AM, Victor I. [email protected]
wrote:
Umm, the Linux file cache? The Solaris ZFS ARC cache? Who says you will
be doing disk seeks for frequently used files?
It seems like varnish is clashing with the caching that is already in
place.