include /etc/nginx/params_uwsgi;
uwsgi_intercept_errors off;
uwsgi_pass unix:/tmp/uwsgi-tna.sock;
}
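The snippet above is only the tail of the location; roughly, the whole caching setup is a sketch like this (the zone name, cache path and the X-Cache-Status header are examples, not my exact config):

    # http{} context: where cached responses are kept on disk
    uwsgi_cache_path /var/cache/nginx/uwsgi levels=1:2 keys_zone=uwsgi_cache:10m;

    location / {
        uwsgi_cache       uwsgi_cache;
        uwsgi_cache_key   $request_uri;
        uwsgi_cache_valid 200 10m;
        # exposes HIT/MISS so caching can be verified from the client side
        add_header X-Cache-Status $upstream_cache_status;

        include /etc/nginx/params_uwsgi;
        uwsgi_intercept_errors off;
        uwsgi_pass unix:/tmp/uwsgi-tna.sock;
    }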
I ran a simple test: $ ab -c 10 -n 10000 http://… on localhost.
I am sure that the request is sent to uWSGI only once and is then
cached.
result: Requests per second: 3712.93 [#/sec] (mean)
Meanwhile, the same nginx on the same machine, serving a plain static
file, reaches:
Requests per second: 4826.62 [#/sec] (mean)
How is it that static files are about 30% faster than the cache, when
the cache itself is based on static files?
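(For the record, this is roughly how I checked that repeat requests are served from the cache and hit uWSGI only once; the URL and the header name are just placeholders matching the sketch above:)

    $ curl -s -o /dev/null -D - http://localhost/some/page | grep -i x-cache-status
    # first request:       X-Cache-Status: MISS
    # every request after: X-Cache-Status: HIT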
On Fri, Aug 19, 2011 at 03:58:09AM -0400, ddarko wrote:
> uwsgi_pass unix:/tmp/uwsgi-tna.sock;
>
> How is it that static files are about 30% faster than the cache, when
> the cache itself is based on static files?
A cached response requires additional reading of the original response
headers, as well as calculating the cache key and parsing the headers
in question (which may be an issue on CPU-bound servers). So I would
expect it to be somewhat slower than plain static files.

While 30% looks a bit too much, it certainly depends heavily on the
response sizes involved as well as on the limiting factor (i.e. CPU- or
disk-bound). In the worst case I would expect the cache to be 2x slower
than static files (though that's unlikely to happen in real life).
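You can actually see that extra work by looking at a cache file: besides
the body, nginx stores a small binary header, the cache key and the full
original response headers, and all of this has to be read and parsed on
every hit, while a static file is just opened and sent. Roughly (the
path and key below are only an illustration):

    $ strings /var/cache/nginx/uwsgi/2/7f/<some hash> | head
    KEY: /some/page
    HTTP/1.1 200 OK
    ... original response headers ...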
Maxim D.