We’ve been running Nginx for some time, powering a fairly high-traffic
website. The single box, dual CPU (dual core), has been serving us well,
but it’s time to upgrade. I’ve got a new system configured: quad CPU
(quad core) = 16 cores, plenty of RAM, and CentOS. My understanding is
that even though there are 16 cores, more than 5 Nginx worker processes
will not greatly improve performance.
I decided to benchmark the system using a single Nginx instance with the
default configuration and a static HTML file. Using “ab” (locally)
with 50,000 requests and 1,000 concurrent connections, I get around 10K
requests / second.
What particular parameters should I look into adjusting to try to double
that number? (Nginx and OS).
What is considered top performance for a single box / instance?
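For reference, the knobs people usually start with live in nginx.conf plus a few kernel limits. The values below are illustrative assumptions to tune against your own benchmark, not measured recommendations:

```nginx
# nginx.conf (sketch -- values are starting-point assumptions)
worker_processes  16;          # one worker per core is a common starting point

events {
    worker_connections  4096;  # raise alongside the OS file-descriptor limit
    use epoll;                 # the default on Linux 2.6+
}

http {
    sendfile           on;     # kernel-space file transfer for static files
    tcp_nopush         on;     # fill packets before sending (with sendfile)
    keepalive_timeout  5;      # short keepalive for benchmark-style traffic
}
```

On the OS side, the usual suspects are net.core.somaxconn, fs.file-max, and the ephemeral port range (net.ipv4.ip_local_port_range) via sysctl, plus ulimit -n for the worker processes.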
On Sat, Oct 22, 2011 at 10:40:53PM -0400, iberkner wrote:
requests / second.
What particular parameters should I look into adjusting to try to double
that number? (Nginx and OS).
Likely just doubling the number of “ab” processes will do the trick
(i.e. just run two “ab” in parallel); “ab” itself looks like the most
obvious bottleneck in your test.
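One way to script that is a small driver that launches the “ab” runs in parallel and adds up the per-instance throughput. This is only a sketch: it assumes “ab” is on the PATH and that the server under test is serving a static file at the (hypothetical) URL shown.

```python
# Sketch: launch N "ab" runs in parallel and sum their "Requests per second".
# Assumptions: "ab" is installed, and URL points at the server under test.
import re
import subprocess

URL = "http://127.0.0.1/index.html"  # hypothetical test URL

def parse_req_per_sec(ab_output: str) -> float:
    """Pull the 'Requests per second' figure out of ab's report."""
    m = re.search(r"Requests per second:\s+([\d.]+)", ab_output)
    return float(m.group(1)) if m else 0.0

def run_parallel(instances: int = 2, requests: int = 25000,
                 concurrency: int = 500) -> float:
    """Start all "ab" processes first, then collect and sum their rates."""
    procs = [
        subprocess.Popen(
            ["ab", "-n", str(requests), "-c", str(concurrency), URL],
            stdout=subprocess.PIPE, text=True,
        )
        for _ in range(instances)
    ]
    return sum(parse_req_per_sec(p.communicate()[0]) for p in procs)
```

Summing the per-process rates only makes sense if the runs actually overlap, so keep the request counts large enough that all instances are running at once.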
Time4Learning.com - Online interactive curriculum for home use, PreK-8th grade.
Time4Writing.com - Online writing tutorials for high, middle, and elementary school students.
Time4Learning.net - A forum to chat with parents online about kids, education, parenting and more.
spellingcity.com - Online vocabulary and spelling activities for teachers, parents and students.
I ran them in 2 separate shells concurrently (as best I could) and got
about the same result, i.e. 10K req. / sec. per each “ab” test, so it
does look as if the bottleneck is the testing tool.

I would like to see > 50,000 req. / sec. ultimately in a controlled
testing environment; what is the best way to try and achieve that?
Running “ab” in “n” different shells at the same time is an option,
but difficult.
The Varnish folks, Krystjan in particular, have blogged about how they
do this; they’re easy to find on Google.
I ran them in 2 separate shells concurrently (as best I could) and
got about the same result, i.e. 10K req. / sec. per each “ab” test,
so it does look as if the bottleneck is the testing tool.
I would like to see > 50,000 req. / sec. ultimately in a controlled
testing environment, what is the best way to try and achieve that?
Running “ab” in “n” different shells at the same time is an option,
but difficult.
Forget ab. Use http_load. It uses a single process, so it mimics, to a
certain extent, the way Nginx works, though it uses select() instead of
epoll() or kqueue().
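(As an aside, the select()-vs-epoll() distinction is easy to see from a scripting language; this purely illustrative Python check reports which readiness-notification API the platform picks by default:)

```python
# Illustrative: Python's DefaultSelector resolves to EpollSelector on
# Linux, KqueueSelector on BSD/OS X, and SelectSelector as the portable
# fallback -- the same family of APIs discussed above.
import selectors

sel = selectors.DefaultSelector()
print(type(sel).__name__)
```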
Requesting empty_gif, you should always get something north of 20K
req/s on a semi-decent machine.
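(empty_gif here is the stock ngx_http_empty_gif_module; wiring it up is one location block. A minimal sketch, with the path name being an arbitrary choice:)

```nginx
# Serve the module's built-in 1x1 transparent GIF straight from memory --
# no disk I/O, so the benchmark isolates the network/event-loop path.
location = /empty_gif {
    empty_gif;
}
```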
Thanks, I’ve been doing a lot of reading on this subject but haven’t
found anything specific by the Varnish folks or Krystjan; if you can
point me to a blog entry, that would be great.