Benchmarking Nginx on a prehistoric PIII 500

Hi list,

After installing and configuring Nginx, I am now at the stage of doing
some benchmarking.

My test server is a Pentium III 500 with 384 MB of SDRAM, running
Ubuntu 7.10 and Nginx 0.5.26 from the package. I also installed Apache 2
for comparison.

The test command is:
httperf --server localhost --port 80 --uri / --num-conns 100
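For sustained-load runs I plan to try something closer to this; the
connection count, rate, and timeout values below are arbitrary, just to
exercise the flags:

httperf --server localhost --port 80 --uri / \
        --num-conns 1000 --rate 100 --timeout 5

Here --rate fixes the connection arrival rate instead of letting httperf
open connections back to back, and --timeout counts any request that
waits longer than 5 seconds as an error.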

When requesting a static file (a simple hello-world page with an XHTML
header), I get:
Nginx: 750 req/s (deviation fairly even)
Apache 2: 760 req/s (deviation: 600-800)

When accessing Redmine’s index login page, I get:
Nginx + Thin (2 servers): 30 req/s
Mongrel (1 instance): 30 req/s

Do you think these are reasonable figures for my computer? I’d like to
know whether my current setup is “optimal”, if you know what I mean.

Hi Igor,

That’s pretty sick! 2000 req/s is almost three times my result!

What could make such a difference between your setup and mine? The
hardware? The config file? I do, however, have some other services
running in the background: a mail server, monitoring tools, etc.

I’ll also check out the nginx documentation more thoroughly.

Thanks for sharing these figures,

You are running the httperf command on the server itself… Have you tried
it from another machine?

Liang

On Wed, Mar 12, 2008 at 10:16:14PM +0100, Thomas wrote:

> Do you think these are reasonable figures for my computer? I’d like to
> know whether my current setup is “optimal”, if you know what I mean.

Three years ago I used nginx on a Pentium III 650 MHz under FreeBSD 4.10
for static images. Real-world load was 2000 req/s, 50 Mbit/s,
31000 keep-alive connections.
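For that many keep-alive connections the relevant knobs are roughly
these; a minimal sketch, not the real config from that box, and the
numbers are only illustrative:

worker_processes  1;

events {
    # must exceed the number of simultaneous keep-alive clients
    worker_connections  32768;
}

http {
    sendfile            on;
    keepalive_timeout   75;
}

The OS limit on open file descriptors has to be raised to match
worker_connections as well.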

Yeah, that could be the problem; I just realized I was testing from the
server itself. My other box runs Windows, so I’ll have to install Cygwin
to compile httperf, because I am not aware of any binary available for
Windows.

It was too strange that 1 Mongrel would perform as well (or as badly) as
2 Thin servers. Also, the Apache and Nginx results are suspiciously
close; there is no reason I should get the same result.

On Thu, Mar 13, 2008 at 12:48:57PM +0100, Thomas wrote:

> Yeah, that could be the problem; I just realized I was testing from the
> server itself. My other box runs Windows, so I’ll have to install
> Cygwin to compile httperf, because I am not aware of any binary
> available for Windows.
>
> It was too strange that 1 Mongrel would perform as well (or as badly)
> as 2 Thin servers. Also, the Apache and Nginx results are suspiciously
> close; there is no reason I should get the same result.

It seems that you benchmarked httperf itself.
I do not think that Cygwin httperf would be any better: Windows does not
have a high-performance TCP/IP stack, and the Cygwin emulation layer
will make things even worse.

Hi guys,

Here are the results of a test I ran a couple of weeks ago:
http://blog.arh.cc/index.php?/archives/6-HTTP-server-comparison.html

Denis.

Thomas wrote:

> Thanks Denis, your tests are really interesting.
>
> Does anybody know of a LiveCD that has httperf on it?

I am not aware of any live CD with httperf. However, you could compile a
static binary and run it from your live CD environment.
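Something along these lines should work with a stock autotools httperf
tarball (the version number and the path of the resulting binary are
assumptions; adjust them to whatever you download):

tar xzf httperf-0.9.0.tar.gz
cd httperf-0.9.0
./configure LDFLAGS="-static"
make
file src/httperf    # should report "statically linked"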

> Thanks Denis, your tests are really interesting.
>
> Does anybody know of a LiveCD that has httperf on it?

You can install it on a live CD; that should be no problem.

But live CDs are known to be slow, and a benchmark should not be starved
of resources.


Aníbal

@Denis: can you provide the configuration files and the “32 B test
file”, so I can run the same kind of tests on my setup? I can see your
hardware is much beefier than mine.

I have been doing some quick tests with ApacheBench from my other
Windows box, and it appears Nginx handles high levels of concurrency
much better than Apache.
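For reference, this is the kind of invocation I mean (the host is a
placeholder and the numbers are arbitrary):

ab -n 10000 -c 1000 http://192.168.0.10/
ab -n 10000 -c 1000 -k http://192.168.0.10/    # the same test with keep-alive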

Denis,

Thanks for that :)

On the graph for disabled keep-alive it’s very odd to see the initial
ramp-up for Cherokee and Apache 1.3. Might that be due to the way child
processes are configured? I.e., not enough started initially, and not
enough kept idle? Anyway, those stand out as curious.
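For Apache prefork I mean directives like these; the values are invented
purely for illustration, I have no idea what Denis actually used:

StartServers       20
MinSpareServers    20
MaxSpareServers    40
MaxClients        150

With the default StartServers of 5, Apache has to fork extra children
while the benchmark is already running, which could explain a ramp-up at
the start of the run.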

On Thu, 2008-03-13 at 11:02 -0700, Cliff W. wrote:

> It’s also worth noting that with a 32-byte file you are mostly
> measuring connection and HTTP protocol overhead. You should probably
> serve at least a 1 KB file.
>
> Also, you should provide more than a simple graph: specifically, how
> many requests failed (if any), the average time per request, and the
> time for the longest request (with standard deviation). ApacheBench
> provides all of these numbers.
>
> Most people would probably also like to know the system load and
> memory/CPU utilization.
>
> You mention that config files will follow soon, so I won’t harp on the
> importance of those =)
>
> Regards,
> Cliff

You might look at using siege as well. I never had good luck with
httperf; I always used ab and siege and compared the results.
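Something like this gives a quick comparison point (concurrency and
duration are arbitrary, and the host is a placeholder):

siege -b -c 50 -t 60S http://192.168.0.10/

-b disables the default pause between requests, -c sets the number of
concurrent simulated users, and -t sets the test duration.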

You need to give more information about your test setup.

  1. What type of machine(s) and software are you running the benchmark
    client from?
  2. What type of network setup is involved?
  3. Are you restarting the VMware instance between each run of tests?
  4. How many test runs?

Without this information, the tests aren’t very informative.


Thanks a bunch Denis!

I’ll dive into your files tomorrow.

Personally, what I would do is test against a “real webpage”, i.e., I
would create a page with text, images, and CSS and hammer that page as
hard as I can. My first tests with nginx showed that I get the same
performance serving my hello-world page as serving a page with the nginx
logo (a 200 KB JPEG): 10000 connections at a concurrency of 1000, with a
CPU load of 50%.

We should create a template page and test against it. For instance, we
could use the nginx index page (wiki.codemongers.com) as a reference; it
has text and images.
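httperf can already approximate a page-with-assets load via --wsesslog.
A sketch, with invented paths; in the session file, indented lines are
fetched as a burst following the line above them:

/index.html
    /style.css
    /logo.jpg
    /photo1.jpg

If that is saved as page.wsess, the run would look like:

httperf --server localhost --port 80 --wsesslog=100,0,page.wsess --rate 10

Here 100 is the number of sessions, 0 is the default think time between
bursts, and page.wsess is the session file.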

Also, note that the design of your pages can greatly affect performance:
some webpages require 50 requests, while leaner pages take only 15. I
won’t even talk about pages loaded with JS and Flash…

I’d like to know: if my CPU load is only 50% and the request rate starts
to drop, where is (are) my bottleneck(s)?
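For the next run I plan to watch the basic system counters on the server
while the test executes (iostat comes with the sysstat package):

vmstat 1                       # run queue, context switches, swap activity
iostat -x 1                    # per-disk utilization
netstat -s | grep -i listen    # listen queue overflows = backlog too small
ulimit -n                      # per-process file descriptor limit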

Best regards,

I’ve filled in some missing details about the benchmarks and published
the configuration files and the benchmarking script source:
http://blog.arh.cc/index.php?/archives/9-HTTP-server-comparison-2.html

@Cliff: I hope that post answers all of your questions.
@Thomas: I was wondering about those peaks too; maybe the published
configuration files will answer that question.

On Thu, 2008-03-13 at 21:09 +0100, Denis Arh wrote:

> I’ve filled in some missing details about the benchmarks and published
> the configuration files and the benchmarking script source:
> http://blog.arh.cc/index.php?/archives/9-HTTP-server-comparison-2.html
>
> @Cliff: I hope that post answers all of your questions.

It does. One thing I would immediately recommend is not to run the
client on the same box as the server. This pulls resources away from the
server process and has a significant impact on both client and server
performance. Basically, the two processes reach an equilibrium point
where neither can achieve maximum throughput, only a balance (the faster
the client goes, the slower the server goes, and vice versa).

Regards,
Cliff

On Thu, 2008-03-13 at 19:54 -0500, Denis S. Filimonov wrote:

> I’m surprised no one has asked the most obvious question yet: why
> VMware? It’s a huge unknown factor in the measurement, which renders
> the whole experiment questionable at best.

I think this is actually a good benchmark precisely because it’s in
VMware (although I think Xen or OpenVZ would have been more
appropriate). It’s not a pure HTTP server test, but rather a test of how
the various servers operate in the constrained environment of a VPS.
Given the massive increase in VPS hosting, and the fact that many people
using VPS hosting would benefit from a lightweight HTTP server, this
particular benchmark is quite useful.

Regards,
Cliff
